
L40 vs GeForce RTX 4090

About G3D Mark
G3D Mark is a standard benchmark that measures graphics performance in real-world gaming scenarios. It simplifies comparing cards from different brands, where higher scores directly correlate with better fps and smoother gaming experiences.
Value Upgrade Path
This is the official ChipVERSUS Value Rating, comparing raw performance (G3D Mark) per dollar. Components placed above yours deliver better value for money.
Avg price is the current average price collected from markets across the web.
Performance Comparison
🏆 ChipVERSUS Verdict
🚀 Performance Leadership
The GeForce RTX 4090 is the superior choice for raw performance. It leads with a 16.9% higher G3D Mark score. However, the L40 offers more VRAM, which may be beneficial for texture-heavy scenarios at higher resolutions.
| Insight | L40 | GeForce RTX 4090 |
|---|---|---|
| Performance | ❌ Lower raw frame rates (-16.9%) | ✅ Leading raw performance (+16.9%) |
| Longevity | 🏆 Elite Architecture (Ada Lovelace, 2022–2024 / 4 nm) | 🏆 Elite Architecture (Ada Lovelace, 2022–2024 / 5 nm) |
| Ecosystem | Supports FSR Upscaling | ✨ DLSS 3/4 + Frame Gen Support |
| VRAM | 🎮 High Capacity (48 GB) | 🎮 High Capacity (24 GB) |
| Efficiency | 💡 Excellent Perf/Watt | ⚡ Higher Power Consumption |
| Case Fit | Standard Size (267mm) | Standard Size (304mm) |
💎 Value Proposition
The GeForce RTX 4090 offers a far better cost-to-performance ratio. Priced at $1,649 versus $8,174 for the L40, it costs roughly 80% less while also delivering higher performance, which yields a 479.5% higher cost-efficiency score.
| Insight | L40 | GeForce RTX 4090 |
|---|---|---|
| Cost Efficiency | ❌ Lower cost efficiency | ✅ Better overall value (+479.5%) |
| Upfront Cost | ⚠️ Higher upfront cost ($8,174) | ✅ More affordable ($1,649) |
Performance Check
Real-world benchmarks and performance projections based on comprehensive hardware analysis and comparative metrics. Values represent expected performance on High/Ultra settings at 1080p, 1440p, and 4K. Modeled using a Ryzen 7 7800X3D reference profile to minimize specific CPU bottlenecks.
Note: Performance behavior can vary per game. Specific architectures may perform better or worse depending on game engine optimizations and API implementation.

[Benchmark chart: League of Legends]
Technical Specifications
Side-by-side comparison of L40 and GeForce RTX 4090

L40
The L40 is manufactured by NVIDIA and was released on October 13, 2022. It features the Ada Lovelace architecture, with core clocks ranging from 735 MHz to 2490 MHz across 18,176 shading units. The thermal design power (TDP) is 300 W, and the card is built on a 4 nm process. It includes 142 dedicated ray tracing cores for enhanced lighting effects. G3D Mark benchmark score: 32,601 points.

GeForce RTX 4090
The GeForce RTX 4090 is manufactured by NVIDIA and was released on September 20, 2022. It features the Ada Lovelace architecture, with core clocks ranging from 2235 MHz to 2520 MHz across 16,384 shading units. The thermal design power (TDP) is 450 W, and the card is built on a 5 nm process. It includes 128 dedicated ray tracing cores for enhanced lighting effects. G3D Mark benchmark score: 38,112 points. Launch price was $1,599.
Graphics Performance
In G3D Mark, the L40 scores 32,601 versus the GeForce RTX 4090's 38,112, a 16.9% lead for the GeForce RTX 4090. Both cards use the Ada Lovelace architecture, listed at 4 nm for the L40 and 5 nm for the GeForce RTX 4090. Shader units: 18,176 (L40) vs 16,384 (GeForce RTX 4090). Raw compute: 90.52 TFLOPS vs 82.58 TFLOPS. Boost clocks: 2490 MHz vs 2520 MHz. Ray tracing: 142 RT cores vs 128, with 568 Tensor cores vs 512.
| Feature | L40 | GeForce RTX 4090 |
|---|---|---|
| G3D Mark Score | 32,601 | 38,112 (+17%) |
| Architecture | Ada Lovelace | Ada Lovelace |
| Process Node | 4 nm | 5 nm |
| Shading Units | 18,176 (+11%) | 16,384 |
| Compute (TFLOPS) | 90.52 (+10%) | 82.58 |
| Boost Clock | 2490 MHz | 2520 MHz (+1%) |
| ROPs | 192 (+9%) | 176 |
| TMUs | 568 (+11%) | 512 |
| L1 Cache | 17.8 MB (+11%) | 16 MB |
| L2 Cache | 96 MB (+33%) | 72 MB |
| Ray Tracing Cores | 142 (+11%) | 128 |
| Tensor Cores | 568 (+11%) | 512 |
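The TFLOPS figures in the table follow from the usual FP32 peak-throughput formula: 2 operations per FMA per shader per clock. A minimal sketch to reproduce them (the 2× FMA factor is the standard assumption for peak FP32 throughput, not a number from the table):

```python
# FP32 peak throughput: 2 ops per FMA x shading units x boost clock.
def fp32_tflops(shading_units: int, boost_clock_mhz: int) -> float:
    return 2 * shading_units * boost_clock_mhz / 1e6  # MHz -> TFLOPS

print(round(fp32_tflops(18176, 2490), 2))  # L40: 90.52
print(round(fp32_tflops(16384, 2520), 2))  # RTX 4090: 82.58
```

The L40's shader-count advantage outweighs its slightly lower boost clock, which is why it leads on paper compute while trailing in the gaming benchmark.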
Advanced Features (DLSS/FSR)
A critical advantage for the GeForce RTX 4090 is support for DLSS 3 Frame Generation, which generates entire frames with AI, roughly doubling the frame rate in CPU-bound scenarios or heavy ray-tracing titles. The L40 lacks the hardware/driver support for this native frame-generation tier.
| Feature | L40 | GeForce RTX 4090 |
|---|---|---|
| Upscaling Tech | FSR 1.0 (Software) | DLSS 3.5 |
| Frame Generation | Not Supported | DLSS 3.0 (Native) |
| Ray Reconstruction | No | Yes (DLSS 3.5) |
| Low Latency | Standard | NVIDIA Reflex |
Video Memory (VRAM)
The L40 comes with 48 GB of VRAM, while the GeForce RTX 4090 has 24 GB. The L40 offers 100% more capacity, crucial for higher resolutions and texture-heavy games. Memory bandwidth: 960 GB/s (L40) vs 1008 GB/s (GeForce RTX 4090) — a 5% advantage for the GeForce RTX 4090. Bus width: 384-bit vs 384-bit. L2 Cache: 96 MB (L40) vs 72 MB (GeForce RTX 4090) — the L40 has significantly larger on-die cache to reduce VRAM reliance.
| Feature | L40 | GeForce RTX 4090 |
|---|---|---|
| VRAM Capacity | 48 GB (+100%) | 24 GB |
| Memory Type | GDDR6 | GDDR6X |
| Memory Bandwidth | 960 GB/s | 1008 GB/s (+5%) |
| Bus Width | 384-bit | 384-bit |
| L2 Cache | 96 MB (+33%) | 72 MB |
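The bandwidth figures are consistent with the standard formula, bus width in bytes times effective memory data rate. The data rates used below (20 Gbps GDDR6 for the L40, 21 Gbps GDDR6X for the RTX 4090) are inferred from the listed bandwidths and bus widths, not stated in the table:

```python
# Peak memory bandwidth: (bus width / 8) bytes per transfer x data rate (Gbps).
# Data rates inferred from the listed 960 and 1008 GB/s over 384-bit buses.
def mem_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits / 8 * data_rate_gbps

print(mem_bandwidth_gbs(384, 20.0))  # L40: 960.0 GB/s
print(mem_bandwidth_gbs(384, 21.0))  # RTX 4090: 1008.0 GB/s
```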
Display & API Support
DirectX support: 12.2 (L40) vs 12.2 (GeForce RTX 4090). Vulkan: 1.3 vs 1.3. OpenGL: 4.6 vs 4.6. Maximum simultaneous displays: 4 vs 4.
| Feature | L40 | GeForce RTX 4090 |
|---|---|---|
| DirectX | 12.2 | 12.2 |
| Vulkan | 1.3 | 1.3 |
| OpenGL | 4.6 | 4.6 |
| Max Displays | 4 | 4 |
Media & Encoding
Hardware encoder: 8th Gen NVENC (L40) vs dual 8th Gen NVENC (GeForce RTX 4090). Decoder: 5th Gen NVDEC for both. Supported codecs: AV1, HEVC, H.264, VP9 (L40) vs MPEG-2, H.264, HEVC, VP9, AV1 (GeForce RTX 4090).
| Feature | L40 | GeForce RTX 4090 |
|---|---|---|
| Encoder | 8th Gen NVENC | 8th Gen NVENC (2x) |
| Decoder | 5th Gen NVDEC | 5th Gen NVDEC |
| Codecs | AV1, HEVC, H.264, VP9 | MPEG-2, H.264, HEVC, VP9, AV1 |
Power & Dimensions
The L40 draws 300 W versus the GeForce RTX 4090's 450 W, so it consumes 33% less power. Recommended PSU: 750 W (L40) vs 1000 W (GeForce RTX 4090). Power connectors: 16-pin vs 16-pin (12VHPWR). Card length: 267 mm vs 304 mm, occupying 2 vs 3 slots. Typical load temperature: 80°C for both.
| Feature | L40 | GeForce RTX 4090 |
|---|---|---|
| TDP | 300 W (-33%) | 450 W |
| Recommended PSU | 750 W (-25%) | 1000 W |
| Power Connector | 16-pin | 16-pin (12VHPWR) |
| Length | 267 mm | 304 mm |
| Height | 111 mm | 137 mm |
| Slots | 2 (-33%) | 3 |
| Temp (Load) | 80°C | 80°C |
| Perf/Watt | 108.7 (+28%) | 84.7 |
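The Perf/Watt row is simply the G3D Mark score divided by TDP; a quick check with the figures quoted in this section:

```python
# Efficiency metric used in the table: benchmark score per watt of TDP.
def perf_per_watt(g3d_mark: int, tdp_watts: int) -> float:
    return g3d_mark / tdp_watts

l40 = perf_per_watt(32601, 300)      # ~108.7
rtx4090 = perf_per_watt(38112, 450)  # ~84.7
print(round((l40 / rtx4090 - 1) * 100))  # L40 efficiency advantage: 28%
```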
Value Analysis
The L40 launched at a $31,000 MSRP and currently averages $8,174, while the GeForce RTX 4090 launched at $1,599 and now averages $1,649. At current market prices, the GeForce RTX 4090 costs 79.8% less (a $6,525 saving). Performance per dollar (G3D Mark / price): 4.0 (L40) vs 23.1 (GeForce RTX 4090), a 479.5% value advantage for the GeForce RTX 4090.
| Feature | L40 | GeForce RTX 4090 |
|---|---|---|
| MSRP | $31,000 | $1,599 (-95%) |
| Avg Price (30d) | $8,174 | $1,649 (-80%) |
| Performance per Dollar | 4.0 | 23.1 (+479%) |
| Codename | AD102 | AD102 |
| Release | October 13 2022 | September 20 2022 |
| Ranking | #61 | #4 |
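The value figures can be reproduced directly from the quoted prices and G3D Mark scores:

```python
# Performance per dollar and relative savings, using the prices quoted above.
def perf_per_dollar(g3d_mark: int, price_usd: float) -> float:
    return g3d_mark / price_usd

l40 = perf_per_dollar(32601, 8174)      # ~4.0
rtx4090 = perf_per_dollar(38112, 1649)  # ~23.1
savings_pct = (8174 - 1649) / 8174 * 100  # ~79.8% cheaper
value_gain = (rtx4090 / l40 - 1) * 100    # ~479.5% better value
print(round(savings_pct, 1), round(value_gain, 1))
```

Note that the percentage gap depends on the direction of comparison: the RTX 4090 is ~80% cheaper, while the L40 is roughly 5x (not 80%) more expensive.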
Top Performing GPUs
The most powerful GPUs ranked by G3D Mark benchmark score.














