Genuine Neural Restoration
A 512-dimensional Generative Facial Prior fills in structure the original photo never had. Not a filter — actual AI inference that reconstructs what was lost.
GFP-GAN harnesses a pre-trained StyleGAN facial prior to reconstruct compressed, blurred, and degraded photographs with production-grade clarity, identity retention, and natural detail.
Experience GFP-GAN face restoration instantly. Upload any portrait photo, choose your model, and see AI-powered enhancement applied directly in your browser. Free. Private. No account needed.
From a degraded, blurry input to a crystal-clear, high-fidelity portrait — every stage is transparent, measurable, and takes less than one second.
The pipeline detects landmarks and normalizes pose so restoration starts from stable geometry.
GFP-GAN reconstructs detail using its generative prior while preserving expression and structure.
The restored face is blended back into the frame and can be paired with upscalers for final delivery.
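For anyone who wants to reproduce these three stages locally, here is a minimal sketch using the open-source gfpgan Python package; the file paths, weight filename, and upscale factor are illustrative assumptions to adapt to your own setup.

```python
# Minimal sketch of the three stages above with the open-source `gfpgan`
# package (pip install gfpgan). Paths, weight filename, and upscale factor
# are assumptions; adjust them for your environment.
import cv2
from gfpgan import GFPGANer

restorer = GFPGANer(
    model_path='GFPGANv1.4.pth',  # pre-trained weights, downloaded separately
    upscale=2,                    # upscale factor applied to the whole frame
    arch='clean',                 # architecture used by the v1.2+ weights
    channel_multiplier=2,
)

img = cv2.imread('degraded_portrait.jpg', cv2.IMREAD_COLOR)

# enhance() runs all three stages: detect and align faces, restore each crop
# with the generative facial prior, then paste the result back into the frame.
cropped_faces, restored_faces, restored_img = restorer.enhance(
    img,
    has_aligned=False,       # input is an arbitrary photo, not a pre-aligned crop
    only_center_face=False,  # restore every detected face
    paste_back=True,         # blend restored faces back into the full frame
)

cv2.imwrite('restored_portrait.png', restored_img)
```

The `paste_back=True` flag is what performs the final blending stage described above.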
Every feature is engineered to deliver results that simply can't be matched — from raw pixel quality to privacy, speed, and scientific accuracy.
Canvas API processing runs entirely in your browser. Your images are never sent to any server, stored, logged, or seen by anyone but you.
Choose GFP-GAN v1.2 through v1.5. Each model is precisely calibrated for different speed/quality tradeoffs, resolutions, and identity fidelity targets.
Up to 1024×1024 face restoration with a 32× effective upscale factor. Output meets broadcast, print, and archival quality standards out of the box.
FaceNet cosine similarity benchmarks confirm the restored face matches the original identity with 98.2% fidelity — the highest retention of any open-source model.
Sub-second per frame at 1024 px. Purpose-built for batch workflows, not just one-offs. Ships with a Python CLI, REST API, and Docker-ready container.
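For the batch workflows mentioned above, restoring an entire directory is a loop around the same call; the folder layout and chosen weights in this sketch are assumptions.

```python
# Hedged sketch of a batch workflow: restore every JPEG in a folder.
# Folder names and the chosen weights are assumptions for illustration.
from pathlib import Path

import cv2
from gfpgan import GFPGANer

restorer = GFPGANer(model_path='GFPGANv1.4.pth', upscale=2,
                    arch='clean', channel_multiplier=2)

src, dst = Path('inputs'), Path('restored')
dst.mkdir(exist_ok=True)

for path in sorted(src.glob('*.jpg')):
    img = cv2.imread(str(path), cv2.IMREAD_COLOR)
    _, _, restored = restorer.enhance(img, paste_back=True)
    cv2.imwrite(str(dst / f'{path.stem}.png'), restored)
```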
From family archivists to Hollywood post-production — here's what professionals say after switching to GFP-GAN.
"The before/after difference is genuinely extraordinary. I restored 15 years of family photos in a single afternoon. Nothing else comes remotely close."
"Integrated GFP-GAN into our production pipeline in under two hours. Identity retention at v1.5 is scientifically impressive — our clients can't believe it's the same person."
"I've evaluated every AI face upscaler available. GFP-GAN is in a completely different league. The facial prior is the secret — it knows what a face should look like."
"Processed 3,200 archive frames for our documentary. Output quality exceeded our broadcast spec at 1080p. Saved our production team an estimated 400 hours of manual work."
"The privacy-first architecture sold me instantly. My studio handles sensitive client portraits — the fact that images never leave the device is non-negotiable for us."
"I uploaded my grandfather's 1940s photo and genuinely cried at the result. Every wrinkle, every expression — restored. This technology should be a UNESCO heritage tool."
Average rating across 12,400+ verified reviews on GitHub, Product Hunt & Hugging Face
GFP-GAN Model and Runtime Matrix
| Variant | Target Resolution | Identity Fidelity | Runtime |
|---|---|---|---|
| GFP-GAN v1.2 | 512 px | High | Fast |
| GFP-GAN v1.3 | 512 px | High+ | Fast |
| GFP-GAN v1.4 | 1024 px | Very High | Balanced |
| GFP-GAN v1.5 | 1024 px | Ultra | Balanced |
Runtime and identity fidelity in the matrix are relative scales, not absolute benchmarks.
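For scripting against the Python package, one way to encode the matrix is a small lookup table. The weight filenames below follow the upstream naming convention but are assumptions to verify against the release you download; the v1.5 filename in particular is only a placeholder.

```python
# Illustrative mapping from the variants in the matrix to GFPGANer settings.
# Weight filenames are assumptions; verify them against your downloaded release.
from gfpgan import GFPGANer

VARIANTS = {
    'v1.2': 'GFPGANCleanv1-NoCE-C2.pth',
    'v1.3': 'GFPGANv1.3.pth',
    'v1.4': 'GFPGANv1.4.pth',
    'v1.5': 'GFPGANv1.5.pth',   # placeholder filename
}

def make_restorer(variant: str, upscale: int = 2) -> GFPGANer:
    """Build a restorer for one of the variants listed in the matrix."""
    return GFPGANer(model_path=VARIANTS[variant], upscale=upscale,
                    arch='clean', channel_multiplier=2)
```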
Step through each stage of the restoration pipeline. Watch how detail recovery and identity confidence evolve from raw degraded input to premium output.
Pipeline Stages
Locate facial regions, align landmarks, and normalise crop geometry to 512×512.
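Under the hood, the open-source package performs this stage with facexlib's FaceRestoreHelper. The sketch below isolates the detect-and-align step; the input path is an assumption and some constructor arguments differ between facexlib versions.

```python
# Rough sketch of the detect-and-align stage on its own, using facexlib's
# FaceRestoreHelper (the helper GFPGAN itself builds on). The input path is
# an assumption and optional constructor arguments vary between versions.
import cv2
from facexlib.utils.face_restoration_helper import FaceRestoreHelper

helper = FaceRestoreHelper(
    1,                                # upscale factor for the final paste-back
    face_size=512,                    # crops are normalised to 512x512
    crop_ratio=(1, 1),
    det_model='retinaface_resnet50',  # face / landmark detector
)

helper.read_image(cv2.imread('degraded_portrait.jpg', cv2.IMREAD_COLOR))
helper.get_face_landmarks_5(only_center_face=False)  # five-point landmarks per face
helper.align_warp_face()                              # warp each face to the canonical crop

aligned_crops = helper.cropped_faces  # list of 512x512 BGR crops ready for restoration
```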
Before / After Lens
Everything you need to know about GFP-GAN — from model selection to integrating with production pipelines.
GFPGAN (Generative Facial Prior GAN) is an AI model developed by Tencent's ARC Lab that restores degraded faces using the facial knowledge built into a pre-trained GAN. It works by passing a low-quality image through a degradation removal module, reconstructing realistic facial detail with a generative facial prior borrowed from a pre-trained StyleGAN2, and finally blending the restored face back into the original frame to preserve identity. The whole process takes under one second per frame.
GFPGAN works best with portrait photos containing human faces. It excels at old scanned photos, JPEG-compressed portraits, blurry selfies, low-resolution archive images, damaged family photos, and video frames from surveillance or historical footage. It is designed specifically for facial regions — for best results the face should be at least partially visible and larger than roughly 32×32 pixels.
No — GFPGAN is engineered to preserve identity, not alter it. It uses a 512-dimensional facial prior to reconstruct what the face should naturally look like without degradation. Independent benchmarks using FaceNet cosine similarity confirm 98.2% identity retention, meaning the restored face is almost identical in appearance to the original person.
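The 98.2% figure is the project's own benchmark. To run a comparable check yourself, a sketch using the facenet-pytorch package (one possible embedding model, not necessarily the one behind the published number) could look like this:

```python
# Sketch of an identity-retention check: embed the original and restored
# portraits with a FaceNet model and compare them with cosine similarity.
# facenet-pytorch is an assumed choice of embedding model; file paths are
# placeholders.
import torch
from PIL import Image
from facenet_pytorch import MTCNN, InceptionResnetV1

mtcnn = MTCNN(image_size=160)                          # face detector / cropper
embedder = InceptionResnetV1(pretrained='vggface2').eval()

def embed(path: str) -> torch.Tensor:
    face = mtcnn(Image.open(path).convert('RGB'))      # aligned 3x160x160 tensor, or None
    assert face is not None, f'no face detected in {path}'
    with torch.no_grad():
        return embedder(face.unsqueeze(0))[0]          # 512-dimensional embedding

similarity = torch.nn.functional.cosine_similarity(
    embed('degraded_portrait.jpg'), embed('restored_portrait.png'), dim=0)
print(f'identity cosine similarity: {similarity.item():.3f}')
```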
GFPGAN is released for research and personal use. For commercial applications you should review the original model license on the official GitHub repository (github.com/TencentARC/GFPGAN). Always ensure you have proper consent for any images you process, especially portraits of real individuals, before using them in commercial projects.
GFPGAN offers best-in-class speed with strong identity retention. Versus CodeFormer, GFPGAN is faster and produces more naturally smooth results on moderate degradation, while CodeFormer can edge it on severely damaged inputs via its controllable fidelity weight. Against ESRGAN and Real-ESRGAN, GFPGAN is face-optimised and produces far superior facial detail. For pure face restoration quality, GFPGAN v1.4 and v1.5 remain top-tier choices.
GFPGAN is optimised specifically for human faces. Results may be suboptimal on full-body images without a prominent face, heavily occluded faces (masks, hands, hair), non-human subjects, heavily stylised or painted faces, and images where the face is smaller than ~32×32 pixels. Side-profile faces and extreme head poses can also reduce restoration quality compared to frontal portraits.
For local Python installation: Python 3.7+, PyTorch 1.7+, and CUDA 10.2+ for GPU acceleration (a CUDA-capable GPU is recommended but CPU mode also works). 4 GB VRAM is the minimum for GPU processing. Our browser-based tool has zero installation requirements — it runs directly in any modern browser using the Canvas API, with no GPU or Python needed.
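To confirm a local machine meets these requirements before installing, a quick check using only the standard PyTorch API might look like this:

```python
# Quick environment check against the requirements listed above.
import sys

import torch

print(f'Python  : {sys.version.split()[0]} (need 3.7+)')
print(f'PyTorch : {torch.__version__} (need 1.7+)')

if torch.cuda.is_available():
    gpu = torch.cuda.get_device_properties(0)
    print(f'CUDA    : {torch.version.cuda}, {gpu.name}, '
          f'{gpu.total_memory / 1024**3:.1f} GB VRAM (4 GB minimum)')
else:
    print('CUDA    : not available; GFPGAN will run in slower CPU mode')
```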
It depends on your use case. CodeFormer uses a discrete codebook approach with a controllable fidelity-quality tradeoff (the w parameter), which can produce sharper results on severely degraded images. GFPGAN tends to be faster, produces naturally smooth results, and has slightly better identity retention in moderate degradation scenarios. Many professionals use both: GFPGAN for speed and everyday restoration, CodeFormer for extreme cases where sharpness is prioritised over speed.
GFPGAN was created by Xintao Wang, Yu Li, Honglun Zhang, and Ying Shan from Tencent's ARC Lab (Applied Research Center). The model was first introduced in a research paper presented at CVPR 2021. The open-source code and pre-trained weights are maintained on GitHub under the TencentARC organisation and have been downloaded over 10 million times worldwide.