Veii
Enthusiast
- Member since
- 31.05.2018
- Posts
- 1.486
- Desktop System
- QA Platform
- Laptop
- ASUS 13" ZenBook OLED [5600U]
- Details of my desktop
- Processor
- Intel Core Ultra 9 285K
- Mainboard
- ASRock OC Formula
- Cooler
- Alphacool T38 280mm
- Memory
- G.Skill Z5 CK 9600
- Graphics processor
- GTX1080ti KP [XOC ROM] // EVGA GTX 650 1GB [UEFI GOP]
- Display
- KOORUI GN10 miniLED
- SSD
- Samsung EVO 850
- Soundkarte
- ESI Ambier i1 & AKG P820
- Case
- Open-Bench
- Power supply
- Corsair SF85 // Seasonic GX-550
- Keyboard
- Topre Realforce 108UBK 30g [Silenced]
- Mouse
- Endgame-Gear OP1 8K
- Operating system
- Win11
- Internet
- ▼42 MBit ▲15 MBit
@Wolf87
Makes sense if he tests L1D to L3 to cache and back to L1D.
While I hoped for L1D, or for bigger commands ~ L2 to L2 between cores.
// Where there are 1-op (zero-delay) transfers and normal (long-delay) transfers
L3 is shared, so tracking is difficult.
Most commands get broken down into chunks and allocated ~ without ever leaving cache.
To my understanding.
That should happen in all games, which is what can expose unstable CO or FCLK package throttle.
But I think the intent was to target mem-OC effectiveness.
~ The cache-to-mem-to-cache approach is something that can visualize when pushing MCLK makes sense or not.
Hmm, difficult.
Optimally, both ways would be tested and displayed as separate visual examples, as this L$ - MEM - L$ method should already be similar to a Cache & RAM test.
Yet neither of the two should actively show FCLK throttle ~ given that part is load-balanced and may not be easily seen as unstable (low load?).
EDIT:
If we do aim to show dynamic throttle in access-time values,
internal testing should not leak to mem.
The above test is an alternate approach for a very likely correct target, yet not what I was personally looking for as a SiSandra replacement.
// And it will explain why the values are different too. We are looking at two different questions.

Last edited:


