Abstract

Video classification is a vital area of research due to the growing volume of video content in various applications. Accurate classification across resolutions poses challenges, which include adapting to scaling, resizing, and compression. Therefore, this paper introduces an innovative Generative Convolutional Network (GCN) algorithm tailored for multi-resolution video classification. The proposed GCN model combines Convolutional Neural Networks (CNNs) with generative modeling to enhance feature extraction across varying video resolutions, which is crucial for maintaining classification robustness in the face of common video transformations such as scaling, resizing, and compression. In contrast, traditional models frequently struggle with such variations, resulting in inconsistent classification results. By integrating CNNs with a generative network and employing adversarial training, the GCN refines its feature learning and improves its overall classification performance. The generative component simulates input variations, enabling the model to remain resilient under diverse video conditions.
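To make the training scheme described above concrete, the following minimal sketch (assuming PyTorch; all class names, layer sizes, and hyperparameters are illustrative, not the paper's implementation) pairs a small CNN classifier with a generative augmenter that simulates resolution changes and is trained adversarially against the classifier.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Classifier(nn.Module):
    """Small CNN classifier over video frames (illustrative sizes)."""
    def __init__(self, num_classes=101):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(8),
        )
        self.head = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

class Augmenter(nn.Module):
    """Generative component: down-/up-samples frames and adds a learned perturbation."""
    def __init__(self):
        super().__init__()
        self.refine = nn.Conv2d(3, 3, 3, padding=1)

    def forward(self, x):
        scale = float(torch.empty(1).uniform_(0.5, 1.0))  # simulate down-scaling
        h, w = x.shape[-2:]
        low = F.interpolate(x, scale_factor=scale, mode="bilinear", align_corners=False)
        restored = F.interpolate(low, size=(h, w), mode="bilinear", align_corners=False)
        return restored + 0.1 * torch.tanh(self.refine(restored))

clf, aug = Classifier(), Augmenter()
opt_c = torch.optim.Adam(clf.parameters(), lr=1e-3)
opt_a = torch.optim.Adam(aug.parameters(), lr=1e-3)

frames = torch.randn(4, 3, 64, 64)        # stand-in for sampled video frames
labels = torch.randint(0, 101, (4,))

# Adversarial step: the augmenter tries to increase classification loss ...
loss_a = -F.cross_entropy(clf(aug(frames)), labels)
opt_a.zero_grad(); loss_a.backward(); opt_a.step()

# ... while the classifier learns to stay accurate on the perturbed input.
loss_c = F.cross_entropy(clf(aug(frames).detach()), labels)
opt_c.zero_grad(); loss_c.backward(); opt_c.step()
```

In this sketch the augmenter stands in for the generative component that simulates scaling and compression-like degradations; the alternating updates illustrate the adversarial training that encourages resolution-robust features.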

Experimental results show that the GCN outperforms traditional approaches across key evaluation metrics. On the UCF101 dataset, it achieved 98% accuracy, 94% recall, 95% precision, and a 96% F-score. Compared to established models such as SVM, CNN, KNN, and RNN, the GCN consistently delivers superior performance, particularly in terms of robustness, accuracy, and resistance to parameter variations. These findings highlight the potential of the GCN to become a new benchmark for video classification, particularly in real-time applications that require reliable analysis of multi-resolution video content.
