Deep learning-based techniques for video enhancement, compression and restoration
| | Dublin Core | PKP Metadata Items | Metadata for this Document |
|---|---|---|---|
| 1. | Title | Title of document | Deep learning-based techniques for video enhancement, compression and restoration |
| 2. | Creator | Author's name, affiliation, country | Redouane Lhiadi; University of Mohammed 1st; Morocco |
| 2. | Creator | Author's name, affiliation, country | Abdessamad Jaddar; University of Mohammed 1st; Morocco |
| 2. | Creator | Author's name, affiliation, country | Abdelali Kaaouachi; University of Mohammed 1st; Morocco |
| 3. | Subject | Discipline(s) | Deep learning models; Video processing; Real-time processing; Restoration models; Super-resolution |
| 3. | Subject | Keyword(s) | Deep learning; Real-time processing; Restoration models; Super-resolution; Video processing |
| 4. | Description | Abstract | Video processing is essential in entertainment, surveillance, and communication. This research presents a robust framework that improves video clarity and decreases bitrate via advanced restoration and compression methods. The proposed framework merges several deep learning models, including super-resolution, deblurring, denoising, and frame interpolation, with an efficient compression model. Video frames are first compressed using the libx265 codec to reduce bitrate and storage needs. After compression, restoration techniques address issues such as noise, blur, and loss of detail. The video restoration transformer (VRT) uses deep learning to substantially enhance video quality by reducing compression artifacts. The super-resolution model increases frame resolution, the deblurring model corrects motion blur, and the denoising model suppresses noise, yielding clearer frames. Frame interpolation synthesizes intermediate frames between existing ones for smoother playback. Experimental findings show that this system improves video quality and reduces artifacts, providing better perceptual quality and fidelity. Its real-time processing capability makes it well-suited to video streaming, surveillance, and digital cinema. |
| 5. | Publisher | Organizing agency, location | Institute of Advanced Engineering and Science |
| 6. | Contributor | Sponsor(s) | |
| 7. | Date | (YYYY-MM-DD) | 2025-04-01 |
| 8. | Type | Status & genre | Peer-reviewed Article |
| 8. | Type | Type | |
| 9. | Format | File format | |
| 10. | Identifier | Uniform Resource Identifier | https://ijai.iaescore.com/index.php/IJAI/article/view/26644 |
| 10. | Identifier | Digital Object Identifier (DOI) | http://doi.org/10.11591/ijai.v14.i2.pp1518-1530 |
| 11. | Source | Title; vol., no. (year) | IAES International Journal of Artificial Intelligence (IJ-AI); Vol 14, No 2: April 2025 |
| 12. | Language | English=en | en |
| 14. | Coverage | Geo-spatial location, chronological period, research sample (gender, age, etc.) | |
| 15. | Rights | Copyright and permissions | Copyright (c) 2025 Institute of Advanced Engineering and Science. This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License. |
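As a concrete illustration of the compression stage named in the abstract, the sketch below assembles an ffmpeg invocation that encodes a video with the libx265 (H.265/HEVC) codec. The helper name, the CRF value, and the preset are illustrative assumptions; the article does not specify its encoding parameters.

```python
import shlex

def build_compress_cmd(input_path: str, output_path: str, crf: int = 28) -> list[str]:
    """Return an ffmpeg argument list for libx265 (HEVC) compression.

    Hypothetical helper, not part of the paper's codebase. CRF and preset
    values are assumptions chosen as common defaults.
    """
    return [
        "ffmpeg",
        "-i", input_path,        # source video
        "-c:v", "libx265",       # HEVC encoder named in the abstract
        "-crf", str(crf),        # constant rate factor: higher = smaller file, lower quality
        "-preset", "medium",     # encoder speed/efficiency trade-off
        output_path,
    ]

cmd = build_compress_cmd("input.mp4", "compressed.mp4")
print(shlex.join(cmd))
```

Running the printed command requires an ffmpeg build with libx265 support; the restoration stages (VRT, super-resolution, deblurring, denoising, interpolation) would then consume the compressed output.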
