SPEC2017 Single-Threaded Results

SPEC2017 is a series of standardized tests used to probe overall performance across different systems, architectures, microarchitectures, and setups. The code has to be compiled before the results can be submitted to an online database for comparison. It covers a range of integer and floating-point workloads, and the binaries can be heavily optimized for each CPU, so it is important to check how the benchmarks are being compiled and run.

We run the tests in a harness built through Windows Subsystem for Linux (WSL), developed by Andrei Frumusanu. WSL has some odd quirks, with one test not running due to WSL's fixed stack size, but for like-for-like testing it is good enough. Because our scores aren't official submissions, per SPEC guidelines we have to declare them as internal estimates on our part.
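For those unfamiliar with the suite, a Rate-1 (single-threaded) run boils down to invoking SPEC's runcpu driver with a single copy of each benchmark. A minimal sketch of the invocation, assuming a hypothetical config file named wsl-clang.cfg (our harness wraps this step for us):

runcpu --config=wsl-clang.cfg --tune=base --copies=1 intrate fprate

The --copies=1 switch is what makes this a Rate-1 run; higher copy counts are what we use for the multi-threaded rate results.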

For compilers, we use LLVM for both the C/C++ and Fortran tests; for Fortran specifically, we use the Flang compiler. The rationale for using LLVM over GCC is better cross-platform comparisons with platforms that only have LLVM support, as well as future articles where we'll investigate this aspect more. We're not considering closed-source compilers such as MSVC or ICC.

clang version 10.0.0
clang version 7.0.1 (ssh://git@github.com/flang-compiler/flang-driver.git
 24bd54da5c41af04838bbe7b68f830840d47fc03)

-Ofast -fomit-frame-pointer
-march=x86-64
-mtune=core-avx2
-mfma -mavx -mavx2

Our compiler flags are straightforward: a basic -Ofast plus the relevant ISA switches to allow for AVX2 instructions.
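For illustration, these flags would typically be set through a SPEC CPU2017 config file rather than passed by hand. A minimal, hypothetical excerpt is below; CC, CXX, FC, and OPTIMIZE are standard SPEC config fields, while the values mirror our setup:

# hypothetical excerpt from a SPEC CPU2017 config file
default:
   CC       = clang
   CXX      = clang++
   FC       = flang
   OPTIMIZE = -Ofast -fomit-frame-pointer -march=x86-64 -mtune=core-avx2 -mfma -mavx -mavx2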

To note, the requirements of the SPEC licence state that any benchmark results from SPEC have to be labeled 'estimated' until they are verified on the SPEC website as a meaningful representation of the expected performance. This is most often done by the big companies and OEMs to showcase performance to customers; however, it is quite over the top for what we do as reviewers.

SPECint2017 Rate-1 Estimated Scores

As we typically do when Intel or AMD releases a new generation, we compare both single- and multi-threaded improvements using the SPEC2017 benchmark. Starting with SPECint2017 single-threaded performance, we can see very little benefit from opting for Intel's Core i9-14900K in most of the tests when compared against the previous generation's Core i9-13900K. The only test where we saw a noticeable bump in performance was 520.omnetpp_r, which simulates discrete events on a large 10 Gigabit Ethernet network. ST performance in this test rose by around 23%, helped in part by the increased ST clock speed of 6.0 GHz, up 200 MHz from the 5.8 GHz ST turbo on the Core i9-13900K.
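As a quick sanity check on the clocks, the frequency uplift by itself is modest:

6.0 GHz / 5.8 GHz ≈ 1.034, i.e. a ~3.4% increase

so raw clock speed cannot account for the full 23% swing on its own; other platform factors are presumably contributing in this one test.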

SPECfp2017 Rate-1 Estimated Scores

Onto the second half of the SPEC2017 1T tests, the SPECfp2017 suite, and again we're seeing very marginal differences in performance; certainly nothing that represents a large paradigm shift in ST performance. Comparing the 14th Gen and 13th Gen Core series directly, there isn't anything new architecturally other than an increase in clock speeds, and as we can see in a single-threaded scenario with the Core i9 flagships, there is little to no difference in workload and application performance. Even the extra 200 MHz of maximum turbo clock speed wasn't enough to produce a significant jump in performance.

Comments

  • colinstu - Tuesday, October 17, 2023

    This power consumption / heat output is insane… this is putting their 90nm Netburst Prescott / Pentium D Smithfield days to shame. Remember when Apple left the IBM/Motorola alliance? Power architecture power consumption was going through the roof, and Intel had JUST pivoted back to the PIII/Pentium M-based Core arch. No wonder Apple dumped Intel; they called what they were seeing really early on. Arm for Windows/Linux desktop needs to get more serious; Apple's desktop Arm is proving nearly as powerful at a fraction of the power draw. Windows is ready, and can even run non-Arm code too.
  • herozeros - Tuesday, October 17, 2023

    My AMD AM5 would like a word with you …
  • FLEXOBENDER - Tuesday, October 17, 2023

    What point are you trying to make, that you have no clue how thermodynamics work?
    This 14900K manages to pull 430 watts at peak. 430. 0.43 kilowatts. One CPU.
    It is still beaten by the 7800X3D at an 80-watt peak. What is your point?
  • boozed - Wednesday, October 18, 2023

    I think the point was that you don't have to abandon x86 for ARM to achieve good efficiency, just Intel.
  • The Von Matrices - Thursday, October 19, 2023

    People remember Netburst CPUs as being absurdly power hungry, but they forget that even the most power-hungry Netburst CPUs still only had a TDP of 130W. Today that would be considered a normal or even a low TDP for a flagship CPU. And the TDP figure actually understates it next to a Netburst CPU, since modern flagships draw well past their rated TDP.
  • GeoffreyA - Friday, October 20, 2023

    And didn't Cedar Mill further drop that to a 65W TDP?
  • GeoffreyA - Friday, October 20, 2023

    Possibly, ISA is just a small piece of the power puzzle, and the rest of the design is what's carrying the weight.

    An interesting article:
    https://chipsandcheese.com/2021/07/13/arm-or-x86-i...
  • Azjaran - Tuesday, October 17, 2023

    Did I miss something, or are there no temperatures shown? Because 428W isn't exactly on the low side and demands a good cooling solution.
  • Gastec - Tuesday, October 17, 2023

    Just one question: do these AI "tools" connect to the Internet, after they "measure specific system characteristics, including telemetry from integrated sensors", to send that data to those Intel servers that are in the "cloud"?
  • TheinsanegamerN - Tuesday, October 17, 2023

    Of course they do. Even if they say they don't.
