74 Comments

  • magagne - Friday, May 16, 2008 - link

    This is all very interesting, and there are a lot of insightful comments; as usual, a great job by Johan. Love it.

    I am surprised nobody has mentioned the obvious (or maybe they have; some comments are really long). I don't have time to read every site and every benchmark to identify which Intel/AMD is superior in every given situation. And I don't care. For day-to-day work/gaming, it does not matter if I get 5 FPS more or less. Nor if power is 5 watts more or less. Nor if it costs 20 bucks more or less. I just need a reasonably performing CPU, like most people do.

    Yes, I know the Core 2 Duos/Quads are overall better products on the desktop. But I will still buy AMD CPUs for two (maybe questionable) reasons:
    1) I have had them for 5 years, they still work fine, and I never had any problems (customer loyalty)
    2) AMD is the underdog. If they go down, you can kiss competition goodbye (customer morals)

    Don't flame me as a fanboy; it's like buying organic food, guys, it comes down to choice. If they produced only crap, then I would be the fool sinking with the ship. But with a reasonably comparable performance/price ratio, it's AMD for me.
    Cheers to all!

  • crashmanII - Tuesday, May 20, 2008 - link

    Keep in mind that Intel has at least 10 fabs and AMD just two. Every cache bit needs 6 transistors, so 1 MB of cache is more than 50 million transistors. I don't think AMD has the capacity to build a Wolfdale-style cache monster.
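
    A quick sanity check of that transistor count (editor's sketch; 6T is the standard SRAM cell, and this ignores tags, ECC, and decode logic):

    ```python
    # Back-of-the-envelope: transistors in 1 MB of 6T SRAM (data array only)
    bits = 1 * 1024 * 1024 * 8          # 1 MB of cache data
    transistors = bits * 6              # 6 transistors per SRAM cell
    print(f"{transistors / 1e6:.1f}M")  # ~50.3M, matching the >50M figure
    ```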

    btw, excellent article, good comments!
  • 0g1 - Thursday, May 15, 2008 - link

    Interesting article. Made me realize that AMD's K8/K10 cache design is the main reason for its poor performance in desktop apps vs Core 2. AMD's design is for smaller, heavily threaded programs. Intel's Nehalem design is similar, except aimed at even more heavily threaded programs. It can't compete with Penryn's performance in larger programs because of the amount and speed of L2 cache. It's like 6MB vs 0.5MB working sets, and there aren't very many desktop programs with 0.5MB working sets.

    L3 cache is almost irrelevant because of how slow it is. However, it really helps with feeding a lot of cores that have private L2s.

    Nehalem's cache seems to be worse than Shanghai's. 0.25MB vs 0.5MB of L2 cache means even worse performance in desktop apps. The L3 cache speed of both CPUs seems to be the same.

    Nehalem has Hyper-Threading, triple-channel DDR3, and increased load/store buffers (though these are only the result of HT). These things are for greater parallelism (i.e. servers) but mean almost nothing for desktop applications. I think HT will even slow down most of today's games that use 2-4 cores, because four real cores are faster than 2+2 virtual processors.

    Overall, I think Nehalem will be a lot slower than Penryn as a desktop processor, and even a little slower per clock than Shanghai. It should be superior for massively parallel applications, though.

    As we move forward into the future, adding more cores, using a shared L2 like Penryn's will be too slow. However, I can't help but wonder if we already have enough cores (i.e. 4) to justify the move to this slower L3 cache hierarchy. Using a smaller amount of private L2 for each core will keep access latencies low, especially with 4 cores: 4 cores accessing the same L2 could theoretically increase the latency 4 times. Considering L3 is 'only' 3 times slower, I guess it makes sense to change when we get to 4 cores (see the toy model below). However, the lack of multithreaded software that uses smallish programs fitting into the L2 caches means buying an X4 or Nehalem is a very forward-looking solution. And even with 4 cores (or 8 virtual) on an L3 cache design, the performance increase over a 4-core Penryn design would be minimal. I wouldn't invest in a Nehalem or Phenom unless there was enough software support and enough cores (12 cores sounds tempting).
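
    A toy model of that latency trade-off (editor's sketch; the cycle counts and the linear-contention assumption are illustrative, not measured figures):

    ```python
    # Effective load latency: one shared L2 vs small private L2s backed by L3
    L2, L3 = 15, 45      # hypothetical cycles; L3 roughly 3x slower than L2
    L2_HIT = 0.90        # hypothetical hit rate in a small private L2

    def shared_l2(cores):
        # worst case: every core contends for the single shared L2
        return L2 * cores

    def private_l2_plus_l3():
        # private-L2 misses are served by the slower shared L3
        return L2_HIT * L2 + (1 - L2_HIT) * L3

    for n in (2, 4):
        print(f"{n} cores: shared L2 = {shared_l2(n)} cycles, "
              f"private L2 + L3 = {private_l2_plus_l3():.1f} cycles")
    # Contention makes the shared L2 cost scale with core count;
    # the private-L2-plus-L3 path doesn't.
    ```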
  • EvilBlitz - Wednesday, May 14, 2008 - link

    It's nice that they focused on the server segment for greater margins, but you cannot ignore the rest, which makes up 80% of your revenue.

    I still think they should have released a dual-core 65nm K8 with 2MB of L2. It would have been much more competitive in single-threaded apps, especially games, and they wouldn't have to sell their CPUs at such a crazy low price.
    I think it would have been a good enough stop gap given their limited resources.
  • Sunrise089 - Wednesday, May 14, 2008 - link

    Yes you can, if that 80% of your revenue only accounts for a small percentage of your profits.

    I love having to correct the same mistake I already addressed 40 posts up.
  • Kaleid - Tuesday, May 13, 2008 - link

    Even when the P4 was faster than what AMD had available, Intel's chips were still far too overpriced.
  • rogerpjr - Tuesday, May 13, 2008 - link

    Year 2000, winter. Dualcpu.com came out with an interesting observation. The then-new CPU on the block from AMD was the 1GHz Duron. Its new architecture allowed it to mimic the Athlon. Remember? Here's how:
    It was suggested that yes, they could be OC'd. Sure. No biggie. BUT they could (assuming proper cooling) be used with the dual-CPU motherboards from Tyan. The 216?(4). YES. My hotrod with a twist. My son and I made a living out of NOT OC'ing them, BUT something even more twisted: telling our customers what we had discovered, and then having at it. We did this with the 1.0 through 1.2GHz Durons. Of course, if a customer said they wanted the "REAL" dual-CPU part, we'd supply that. But that 1.0GHz Duron (we tested them first, naturally) through the 1.2 was a real workhorse.
    Just a blast from the past.
    rogerpjr
  • EclipsedAurora - Tuesday, May 13, 2008 - link

    Thanks to the bandwidth advantage of the HyperTransport interface used by AMD processors, Opteron still has absolute dominance in the SAN/NAS storage/disk-array processor market. Intel and other processor manufacturers can't get a foothold in this market!
  • jap0nes - Tuesday, May 13, 2008 - link

    I have to admit, Intel was very humble admitting they've had their asses kicked by a 2GHz Barcelona!
    A 3GHz chip with 12MB of L2 being crushed by a 2GHz one with 2MB of L3.
  • Plasmoid - Tuesday, May 13, 2008 - link

    So I have my old 90nm Athlon 64 X2 4400+ at home with its juicy 2x1MB cache... and AMD is floundering around because they can only spare 512KB per core now.

    Seems to me like the 65nm transition never paid out in cache size like they expected it to and like it did for Intel.

    So I'm wondering if my 4400+ X2 can give a K10 a run for its money.
  • Visual - Tuesday, May 13, 2008 - link

    AMD are still the better choice when it comes to low-budget builds - cheaper quad-core, plus they now have tri-core as well.

    Also, the best integrated graphics solutions currently are for AMD. Sure, you can add a separate graphics card and beat integrated performance, but that means a higher price (not good for budget builds) and a PCI Express slot used up (not good for small builds, like the HTPC I'm planning with a case that fits only one expansion card via a riser).
  • Justin Case - Tuesday, May 13, 2008 - link

    "leverage it's strong points"

    it's -> its

    Other than that, good article. This is something that everyone doing memory-intensive multithreading knows (or at least should know, if they're doing their homework), but it's good to see it posted on a mainstream site.

    AMD isn't competitive in the high-end single-threaded arena but when it comes to mid-range desktops they give about the same "bang for the buck" as Intel, and for HPC and some types of servers they offer more "bang" than Intel can deliver, period.

    Now if only they could make me a couple of dozen quad-core S940 Opterons... :-P
  • Roy2001 - Tuesday, May 13, 2008 - link

    Yes, it is dead except for the niche HPC market. AMD's fastest quad core won't even beat the slowest Q6600. Northwood was the fastest CPU when it was born, but Barcelona was NOT!
  • MauriX - Monday, May 12, 2008 - link

    I think the K10 architecture was AMD's first step toward what is really going to be revolutionary. AMD aims to create a multi-core CPU/GPU/PPU/xxx on a single silicon chip. K10 opened the door to an architecture of independent cores that perform different tasks. To realize AMD's dream of an XPU, e.g. 4 CPU cores + 3 GPU cores + 1 PPU core, it was necessary to have an architecture where each core has full independence (power, L2 cache, etc.) from the other cores.

    With K10 they failed to make significant progress in performance (which is what everyone expected), but it lays the architectural foundation for the next step: FUSION.

    The problem AMD is going to have in the future will be creating the software abstraction layer between DirectX 10 and their specialized cores. We have already seen how their latest IGP (780G) couples perfectly with a discrete card through Hybrid CrossFire. I think AMD intends to use that same concept with their GPU cores and external video accelerators. With FUSION, it would no longer be necessary to add new micro-instruction sets (like SSE4) to accelerate certain programs; a core with all the necessary specialized instruction sets could be added directly (be it support for facial recognition, CAD, physics, etc.).

    All this is pure speculation, but the fact of having broken the double dual-core / double double dual-core karma is an important step. They should now concentrate their energies on improving the CPU cores and integrating new specialized cores.

    (from "Mauricio Fernandez, Punta Alta, Argentina." translated by Google Translator;)
  • Makaveli - Monday, May 12, 2008 - link

    Brand loyalty only hurts you, the consumer! Wake up or shut up!

    And I'm posting this from my Opteron 170, which I love and which is still running well. However, I'm not blind!

    The saying Ignorance is bliss really does apply here!
  • Ensoph42 - Tuesday, May 13, 2008 - link

    My reply to 7Enigma applies here as well.
  • Crank the Planet - Monday, May 12, 2008 - link

    AMD fanboy here, and rightly so. Before C2D, Intel had crap. They hadn't had a good proc since the PIII. NetBurst??? Give me a break. A "better internet experience"? What a crock. Can you say "random restart because the chipset is crap"? Intel = crap.

    That was my opinion before C2D. Now Intel has regained the crown in several areas where AMD had taken it away. So what else did you expect? All you Intel fanboys need to stop talking trash. AMD is presently in a financial bind; everybody knew they would be if they bought ATI. Well, guess what: there is only one year left, and AMD will finish paying that off. After that they won't be cash-strapped. They'll be able to put more into R&D etc., and then they will be able to put out amazing things. Fusion is going to be BIG!

    What innovations has Intel come up with in the last 2 years? ZERO! Heck, they haven't come up with a big idea since NetBurst. HA HA HA HA!!!

    Even with all the cash, market share, yada yada yada, they can't come up with squat. Remember: a die shrink is not an innovation. All they can do with Nehalem is copy what AMD is already doing (IMC, etc.), and that is basically admitting that the AMD engineers were right on that one. Intel has to do things like better interconnects between chips to stay competitive. Once you reach the speed barrier, Intel has nothing left. AMD will thrive because they are innovating and branching out.

    Don't get me wrong, C2D is a good product; it's just that Intel will not be able to keep one step ahead of AMD forever ;p
  • Angeloni100 - Wednesday, May 14, 2008 - link

    I don't get this "fanboy" thing... I own a computer store in Brazil, have been working with computers for about 20 years, and have owned just about every CPU ever created... For a long time I recommended AMD to my customers, especially during the Athlon reign, which lasted 7 or 8 years by the way (from the Athlon 500~ all the way up to the first Athlon 64s)... I loved my AMDs, had lots of them: K6-2 450MHz and K6-III 550MHz, Athlon 850, 1100, and the great 1400 that destroyed all P3s and P4s of its time... Had a couple of AthlonXPs, namely the 2100, 2400, and the 2800 with the 333MHz FSB... Now I own a C2D 6600 (the first Intel I've bought since '97)... It was the better CPU at the time; I've had it for a little over a year and love it too... It is unfortunate that AMD is no longer competitive with Intel, but being a "fanboy" is just not very smart, because you end up buying products that are not always going to be the best... It is not easy to earn a living... we all have to work way too much, in my opinion... I just have too much love for my hard-earned cash to throw it away at something simply because of its label... If AMD can come up with a better CPU than Intel, I will buy it when I feel the need to upgrade or change my computer... I recently built a server with an Athlon 64 X2 6400+ CPU in it for a customer based on my recommendation, simply because it was the best thing for his needs... I sell lots of AMD, but don't recommend them to everyone...
    I guess what I'm trying to say is that we need to be smart and faithful to our money and not to multi-billion-dollar companies that have no idea who we are and don't care to... and this goes for the Apple fanboys too...

    Be smart kids... honor your work and the fruits of it... not these companies...
  • Ensoph42 - Wednesday, May 14, 2008 - link

    I agree 100% with what you're saying. However, it's fun to cheer for your team and take offense when it's disparaged. I'd consider myself an AMD "fanboi", but if I had the money to spend at the moment, I'd weigh all possible options and build a system that gave me what I wanted.

    Also, your pragmatism is counter-intuitive to how human nature works. When we buy cars we may at first research everything, but in the end it's that feeling we get when we sit behind the wheel.
  • Angeloni100 - Wednesday, May 14, 2008 - link

    Sure! I know that feeling behind the wheel all too well... But surely it has something to do with performance, comfort, handling, right?
    You can't get that feeling from a crappy car (not that AMD is a crappy product)...
    All I was trying to say is that you can only get that feeling from something really good, worth the wait...
    So, I am the kinda guy that dreams about the next big thing. For instance, I am putting money away to build myself an SLI system... can't wait to have one... Specs:

    Intel C2D E8500 (great for OCing)
    Asus P5N-T Deluxe (Nvidia 780i)
    2GB DDR2 1066Mhz Matched Pair Corsair Dominator
    2X GeForce 8800GTS 512MB Alpha Dog Edition from XFX
    1TB HD From Samsung or Seagate

    Been dreaming of this system for a few months now...
    But you see, no brand loyalty, just the best possible performance for my money... I thought about going CrossFire, but there is no product that will give me this kind of performance for this kind of money...

    Last SLI system I had was a VooDoo 2 system with AMD K6-2 450 back in 98...

    Anyway... I just wanted to make these fanboys think a little... that's all...
    Thanks for listening.
  • mathew7 - Tuesday, May 13, 2008 - link

    Excuse me??? It was only the P4... one product with 3 revisions. And are you sure the chipset you're blaming is an Intel one? Are you sure it was not a heat issue (because of the processor)?
    I've used many systems since the 286, and I can tell you that in the P1 era I switched to Intel chipsets because they were 99% stable, compared to 70-80% for VIA/SiS. Since then I have bought/recommended exclusively Intel chipsets, except for an nForce4 with an Athlon64.
    And the Northwood P4 was quite a good processor. I admit the Athlon64 was better than the P4 in all regards, but I only did so when I found more detailed information. And the only reason for me to bash Intel is their marketing strategies, which are what kept Prescott alive for so long.
    BTW: did you know that the Athlon64 was AMD's first real success? Don't forget that AMD was a simple second-source factory for Intel in the XT (8086) days.
  • JPForums - Tuesday, May 13, 2008 - link

    The Athlon was AMD's first real success. However, many people don't know that the 386DX40 was AMD's first superior processor. Intel couldn't get the frequency up and thus moved on to the 486 line. They couldn't really fix the problem there either. Meanwhile, AMD got all the way up to the 486DX100 (not sure how to classify the X5-133). So Intel moved on to the Pentium, which gave them a lead against the K5. They held this lead until the Athlon.

    The Athlon was the first processor AMD made that was based on its own platform. Super Socket 7 was an incremental improvement of Intel's Socket 7 platform. The Athlon also beat the performance of the PII/PIII competition, though marginally. The Northwood core had a similar advantage over the Barton core as the Athlon had over the PII/PIII. It wasn't until the K8 architecture (and later the Core 2 architecture) that we started seeing massive advantages one way or the other.

    Stability of platform, as you said, was the major concern. With that in mind, I believe Intel makes the best chipsets on the planet. nVidia and ATI have narrowed the gap into insignificance now, but Intel is by no means behind any other chipset for either Intel or AMD platforms. (Particularly in the area of stability)
  • Locutus465 - Tuesday, May 13, 2008 - link

    Currently they're behind in both IGP performance and the ability to accelerate HD video content with the IGP... AMD is the current winner here, with nVidia set to shake things up a little.
  • JPForums - Tuesday, May 13, 2008 - link

    The nVidia and ATI (AMD) chipsets are both new. Wait for Intel's new IGP chipset to compare, as it'll be out in the not-too-distant future.

    I do wholeheartedly admit that Intel's graphics performance is crap, but if you want performance, you aren't going to use an IGP anyway. Also, businesses already seem to be under the assumption that Intel graphics are "good enough", so there is no real advantage there.

    If Intel's HD decode capabilities end up "good enough" for most people, then there will be no real change of power. However, if they botch it, then there will be an opening in the small HTPC market that Intel is less optimized to fill.

    My main point was stability anyway. From that perspective, the new chipsets from nVidia and ATI are leagues behind (at least until they mature).
  • Locutus465 - Tuesday, May 13, 2008 - link

    Are you bringing drivers into the picture as well here? AMD's Vista drivers have been rock solid, where basically everyone else has been struggling. Things are finally settling out, but AMD seems to have had things right since day 1.
  • JPForums - Tuesday, May 13, 2008 - link

    I had overlooked Vista drivers.
    I had a poor experience with an Asus board based on the 780G chipset under Vista, but that is as far as my experience goes with IGPs under that operating system.

    As I've only dealt with one 780G based board and neither the latest nVidia nor the latest Intel IGP based chipsets, I have to concede that the 780G might be the most stable IGP on that platform.

    However, I've had no trouble with X38/X48/P35 boards under Vista. I have had issues with some of nVidia's (non-IGP) offerings, and to a lesser extent, AMD/ATI's offerings. Nothing insurmountable though. As I said, the gap is insignificantly small.
  • TA152H - Tuesday, May 13, 2008 - link

    The XT was 8088-based, and AMD was a licensed second source for that and later processors (up to the 286), but those were not the only products they sold, and they were not a factory of Intel's.

    AMD's 386s were considerably better than Intel's as well, and although they came out around the same time as the 486, they were very attractive chips.

    Their 486s were also pretty popular, particularly as upgrades.

    The K5 was a disaster, and the K6 kept them alive but little more. The K7, however, was a considerably better performing processor than the Pentium III, particularly the Katmai, mainly because it could run at significantly faster clock speeds and had a superior floating point unit.

    The Athlon 64 was arguably one of AMD's failures, as it was too small an increase in performance over the Athlon. It's not hindsight; I was stunned in a negative way at how poor a processor it was when it came out. The brainless masses didn't see it, because the Pentium 4 was a horrible chip, and the Prescott version was even worse. So the nitwits thought AMD did a good job, but it was really poor, and it should have been obviously so, since Intel's mobile line always had great performance and low power use. It's just that no one ever used those in the comparisons.

    The Northwood being a good chip is a fallacy that people try to advance to show how much better they understand the market, and how overly simple most people are. The latter is true, but saying the Northwood was so good is also an oversimplification. It was a huge, power-hungry chip that was generally more powerful than the other company's much older design, but not always. It's the same argument I would use against the Athlon, though: it used a lot more power and was much bigger, but its performance advantage was more substantial, since it could match the PIII clock for clock (and greatly surpassed it in FP) and could also run at much higher clock speeds. The Northwood was enormous, had miserable IPC, and didn't outperform the Athlon by very much.

    In the end, the Pentium III design, as it moved along the mobile route, proved to be an excellent and balanced design, and it is finally the dominant processor again in the current iteration, Penryn. The Athlon and Athlon 64 only looked good against the grotesque Pentium 4 line, which combined the twin virtues of miserable IPC and huge size/power use. Why Intel ever used that chip, after seeing how good the Pentium Ms were, is a mystery to me.
  • JPForums - Tuesday, May 13, 2008 - link

    So, a 10%-25% (averaging about 16% in my experience) improvement over the K7 (at the same frequency), comparing a K8 with a single (64-bit) memory controller, is a failure? Add an extra 5% for the dual (128-bit) memory controller. By that metric, the only success in the industry was the Core 2, and only when you compare it to the NetBurst architecture that you yourself call horrible. I suppose K6 to K7 might have been considered successful in certain areas as well.

    I agree that the Athlon64s got more credit than they should have due to the underwhelming performance of the P4s, but having made the switch from an AthlonXP 3200+ (2.2GHz) to a socket 754 Athlon64 3200+ (2.2GHz), I just can't understand where you are coming from.

    I can think of a host of architecture releases that were less impressive. My switch from PII to PIII, for instance, was much less impressive. Athlon to AthlonXP wasn't exactly impressive either. Going from a K6-III to the Athlon seems to be the only switch, short of the P4-to-Core 2 switch, that even compares. I'm not even sure the difference is that large when you compare Core 2 to Core (Pentium M). The Athlon64 architecture didn't launch at double the clock rate of the AthlonXP, but you said you felt that way at launch. Most new architectures aren't launched at clock speeds much higher than their predecessors' anyway. The P4, as you put it, was horrible anyway.

    If you look at the server/workstation market (the opteron launched first) then the launch is more impressive. The only place I see that the K8 architecture disappointed was mobile use. I'll concede your point in the mobile space.

    Oh, the reason they didn't use the Pentium Ms (Core architecture) in the desktop realm was that they couldn't get clock speeds high enough to match the performance of the Athlon64s. The best overclocking results I saw from Dothan were about 2.6GHz (yes, a few manufacturers actually did make desktop boards for Pentium Ms). They would have released retail versions no higher than 2.4GHz in the desktop space, and probably only as an Extreme Edition processor. This puts them on par with an Athlon64 4000+ (2.6GHz). At that point, their P4 lineup performed better, and nobody seemed to care that you could fry eggs on it.
  • Locutus465 - Tuesday, May 13, 2008 - link

    I really have to disagree with your assessment of the K8... The fact is, while it wasn't significantly better clock for clock in 32-bit mode, it added a wildly successful 64-bit mode that obliterated Intel's Itanic aspirations and very quickly ushered in an era of 64-bit computing for the "Wintel" world. If it weren't for the K8, we'd almost certainly still be living in a 32-bit world right now, very quickly running up against a wall...

    If you really want to bash an AMD CPU design, why not bash the K7 for being slower (clock for clock) than the K6-III with regard to integer performance? In fact, to date I think the K6-III remains one of the fastest x86 CPUs for integer performance per clock. Personally I'm OK with this, since they've more than made up for it with much higher clocks, a vastly improved FPU, and a host of other innovations.

    What I'm looking forward to now are AMD's platform innovations; they're doing a really good job in this arena, which is probably part of the reason why CPU performance is lagging a little. Good things are coming; if AMD can hold on long enough to make Spider a mature platform, I'm sure they'll revisit the K architecture and speed things up quite a bit.
  • Itany - Tuesday, May 13, 2008 - link

    What innovations has Intel made in the past two years?
    What about Penryn? A 3GHz CPU that fits into a notebook?
    What about quad-core processors?
    What about Atom?

    It's true that Intel has not refreshed its microarchitecture for two years; however, with all its "innovations", K10 still can't outperform the "old" Q6600 on the desktop. Everything AMD is good at is inherited from the IMC and HT bus of the K8. K10? Innovations? Hehe...
  • JPForums - Tuesday, May 13, 2008 - link

    Intel's Core 2 architecture is based on the Core, and thus the P6, architecture, so what's wrong with AMD leveraging the strengths of its previous architectures? If it ain't broke, don't fix it. (That's not to say they can't fix other areas of the processor.)

    Atom is somewhat innovative, but it is mostly based on older designs using newer processes with an extremely low-power overall design goal. It's a very good design, but the innovation is more in creating a market where one didn't really exist before.

    Intel didn't innovate quad core; they innovated double dual-core. This was really an innovation they made with the dual-die Pentium Ds. While not a long-term performance-enhancing innovation, it should not be shrugged off either. It allowed Intel to offer two cores when they couldn't have otherwise. They also maintained higher margins this way, as the yields were much higher than they would have been if they had tried to get two cores on a single die. And guess what: AMD is following suit.

    Penryn wasn't all that innovative as an architecture. However, the new process is a huge innovation. It's not a simple refinement in lithography technology; they had to change the way transistors are made. You can only deny that it is innovative if you call it inventive instead.

    To be fair, AMD has been innovative as well. AMD's HT interconnect is routed like a crossbar switch. Effectively, this means that core 0 and core 2 can talk while core 1 and core 3 talk (or any other mutually exclusive combo) with no penalties (see the toy model below). I don't know what Intel's variation on this will do, but if they design it like time-division multiplexing (TDM) switches, they will have additional latency in some cases. More importantly, the processors will get extremely hot if they run the link fast enough to allow all processors simultaneous communication using TDM.
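
    A toy scheduling model of that distinction (editor's sketch illustrating crossbar vs time-division arbitration in general, not AMD's or Intel's actual logic):

    ```python
    # A crossbar services any set of transfers with disjoint endpoints in one
    # slot; a time-division (TDM) bus grants the shared medium one at a time.
    def crossbar_slots(transfers):
        slots, pending = 0, list(transfers)
        while pending:
            slots, used, rest = slots + 1, set(), []
            for src, dst in pending:
                if src in used or dst in used:
                    rest.append((src, dst))  # an endpoint is busy this slot
                else:
                    used |= {src, dst}       # schedule it in this slot
            pending = rest
        return slots

    def tdm_slots(transfers):
        return len(transfers)                # one slot per transfer

    pairs = [(0, 2), (1, 3)]                 # core 0 <-> 2 while core 1 <-> 3
    print(crossbar_slots(pairs), tdm_slots(pairs))  # 1 vs 2
    ```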

    The integrated memory controller is innovative in the sense that it is a progression towards a system on a chip. Bulldozer is another innovative step in this direction.

    As for K8 vs K10, there is a nice excerpt on the K10's shared cache in the article. They wouldn't be matched up against Intel if they hadn't improved in some way.

    Both companies have been quite innovative. You just have to keep in mind that some innovations are for reasons other than higher frame rates in the latest shooters (I've mentioned nothing of the energy-saving innovations deployed by both companies). Also, some innovations don't pan out the way they were intended. NetBurst is a good example of that: it was a radical redesign and extremely innovative. It also allowed Intel to hit higher clock speeds than they ever had before or have since. Unfortunately, like the shared L3 cache on Phenom, this doesn't always translate into the highest performance.
  • 7Enigma - Monday, May 12, 2008 - link

    Intel /= crap, fanboy = crap.

    Sorry, had to fix the formula for you. I'm sorry, but your reply is currently the worst of the bunch. It's sad to see someone so hypocritical in their own post. Do us all a favor, fanboys: just put the keyboard down; you only hurt your own company. So sad, so sad...

    If you buy anything other than the best performance for the price, you are a monkey and a slave to a company. Use your noodle...
  • Ensoph42 - Tuesday, May 13, 2008 - link

    To be honest, I don't think your post is any better than the one above yours.

    Keep in mind it's absolutely necessary for AMD to stay afloat, unless you want to be paying $500 for Intel's low-end chips and getting performance improvements on a 5-year time frame rather than 2. So if a few fanboys want to throw their money at AMD, more power to them.

    And although I'm an AMD fanboy myself, I'm not unrealistic about the situation. But as long as they are able to supply me with the performance I want at a price I'm willing to pay, I'll toss them my money. Since I don't OC, do play my games at resolutions significantly higher than 1024x768, and don't sit around converting DVDs to DivX or compressing 1GB into RAR files, I find Phenom's performance to be more than adequate. I've never had a top-of-the-line system, so the company that holds the performance crown means little to me.

    I'd also like to say that I've seen many reviews with questionable test setups (DDR2-800 on a Phenom? Why?) or data presented in a misleading way (bar graphs scaled to show Intel having some massive lead when really it's more like 1 or 2 percent).

    Every system I've had for about the last 12 years has been AMD. They've been sturdy, stable, and solid, and I've not regretted owning any of them. AMD has supplied me with positive product experiences, and I would like that to continue.
  • varneraa - Tuesday, May 13, 2008 - link

    I'm not sure AMD sinking would create an Intel monopoly; they could easily fumble the next processor or two and be bought out by someone big (Samsung, for example) who would take over the competition with Intel. I'm sure AMD knows that they won't survive simply on fanboys and their CURRENT server performance, and Intel's not going to give them a break. If they don't execute better in the future, then someone else will step up to take their place.
  • Locutus465 - Tuesday, May 13, 2008 - link

    Agreed. I've been using AMD for 10 years and have never really been disappointed. I did consider a Core 2 system this time around, but at the end of the day AMD's platform initiatives got me... Spider is exciting, and AMD Live! seems to be doing better than Viiv.

    My Phenom quad core seems to have more than enough horsepower to handle anything I could possibly throw at it, including compressing/decompressing 3GB text files, breaking down said files and converting formats from pipe-delimited to valid CSV, any kind of game I could want to play, and having Media Center burn my HD recordings to DVD.

    I'm very happy with my upgrade. I went from an Athlon 64 X2 with 2GB of DDR400 memory and a GeForce 7800GT to a Phenom X4, an AMD/ATI Radeon 3870, and 4GB of DDR2-800. To be perfectly honest, while I was expecting game performance to increase a (good) bit, I wasn't expecting overall system performance to increase as dramatically as it did. AMD still has good stuff. Yes, Intel currently has better CPUs, but amazingly enough they're lacking in the platform arena... ironically exactly where the P4 was better than the Athlons.
  • antifanboys - Sunday, May 18, 2008 - link

    Innovations aside, Intel right now still offers the best bang for the buck. Performance is what matters, and they are delivering it at the moment. Don't get me wrong, I love AMD and ATI; I would love nothing more than to see them bring something better to the table. Better products from all sides will help make things cheaper for everyone.
  • jamori - Monday, May 12, 2008 - link

    AMD does not consider the Phenom to be their "K10" core. Phenom is merely a revision of their previous K9 core. Bulldozer is the successor architecture to the K9; Phenom is simply a tide-over (granted, they hoped it would have good performance and clock speeds, but it turned out not to).

    It would be more appropriate (but still wrong) to refer to Bulldozer as K10 -- AFAIK, AMD insists that there's no such thing as K10.
  • Visual - Tuesday, May 13, 2008 - link

    There is no K9. Phenom is K10.
    Remember, this is not as simple as counting. It is Marketeering ;)
  • RamarC - Monday, May 12, 2008 - link

    "Just a few weeks ago, Anand reported that Intel had no intention of flooding the desktop with 45 nm Core 2 chips very soon."

    according to the consumer desktop chart, 45nm desktop chips should make up half of the total chip volume in Q3 '07. Q3 starts in less than 3 weeks, so the chart seems to contradict the statement.

    could it be that intel's production capacity is ramping up allowing them to provide as many chips as either the desktop or server supply chain needs?

  • JohanAnandtech - Monday, May 12, 2008 - link

    Yes, you are right; it was inaccurate, and I have rephrased it a bit. What I meant is that Intel was and is in no hurry to supply 45 nm CPUs to the desktop market, but at the same time launched Harpertown a few months earlier than planned. Almost a year will have gone by between the first 45 nm chips and really good availability of desktop parts in the channel. I believe that was quite different at 65 and 90 nm.
  • Myrandex - Monday, May 12, 2008 - link

    I have wondered many times whether AMD uses its cache as efficiently as Intel does. I personally love AMD, and my 5000+ BE is serving me plenty well, but I used to run a 1.8GHz dual-core Opty with 1MB of L2 cache instead of the 512KB L2 of the regular Athlon64 X2 CPUs, and the reviews of the time didn't show a large difference in performance between the two... Maybe this should be revisited with current software?
    Jason
  • strikeback03 - Monday, May 12, 2008 - link

    So AMD is competitive and possibly superior in servers. But when server sales only account for 16% of their revenue, can they sell enough server processors to make ends meet?
  • WaltC - Monday, May 12, 2008 - link

    Well, look at the breakdown above. Intel has to sell ~8 desktop CPUs to equal the profit it makes selling one server CPU, and AMD has to sell ~11 desktop CPUs to equal the profit it makes selling a single server CPU. What AMD is aiming to do is ratchet up its share of the server market because, per CPU, that's where the money is for both companies.

    Now that Barcelona has hit the B3 stepping, I don't think that will be a problem for Opteron going forward. For the consumer desktop, I think AMD will have a tiger of a CPU in the short run when the company introduces its 45nm Barcelonas with more cache. Meantime, I'm loving my Phenom 9850 on my MSI K9A2 Platinum desktop at home... ;)
  • Sunrise089 - Monday, May 12, 2008 - link

    Exactly. Users like Strikeback above need to use their arithmetic skills. Revenue numbers mean nothing; profit means everything (it's why a company like Microsoft is generally considered to be performing better than a company like General Motors). If I have revenue of 100 trillion dollars and profit margins of 0%, a company with $1000 in sales and 50% margins will walk away with more cash.
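
    The arithmetic behind that claim, spelled out (editor's sketch; the figures are just the commenter's own hypotheticals):

    ```python
    # Profit = revenue x margin, so margins trump raw sales figures.
    def profit(revenue, margin):
        return revenue * margin

    print(profit(100e12, 0.00))  # 0.0: huge revenue, zero margin
    print(profit(1000, 0.50))    # 500.0: tiny revenue, fat margin
    ```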

    What the chart Johan posted above shows is that the 20%-of-sales tail wags the 80%-of-profits dog, and that AMD and Intel are roughly even there at the moment. Intel's success comes from a much larger slice of the 80%-of-sales portion of the pie, which is an edge, but nowhere near as important as the 20% server slice.

    Incidentally, the AMD vs. Intel situation is very close to Nintendo vs. MS/Sony during the previous console generation. Forum poster after forum poster lamented Nintendo's small sales numbers, while a small minority of posters realized the importance of Nintendo's much higher margins on software and slightly higher hardware margins. Even without a runaway sales hit (which they now have), Nintendo made more profit than their rivals due to higher profit margins. Of course AMD isn't doing BETTER than Intel, nor are they guaranteed to combine margins with very high sales like Nintendo has done since the Wii/DS launch, but rumors of their death are greatly exaggerated so long as they hold on to 50% of the sub-market that is responsible for 80% of industry profits.
    @Johan - great write-up. I have no knowledge of server computing at all, but I continue to like your articles more and more, especially for offering such a clear overview of the market that anyone can understand.
  • Pirks - Monday, May 12, 2008 - link

    Hey, it's not only the Nintendo/Sony story, it's also the Apple/PC story - Apple makes gobs of money because their computers are so much more expensive than mass-market Wintel PCs. Even though Apple sells far fewer units, they still make MORE PROFIT than a lot of cheapo PC-making companies like Dell. That's something I love to hit Wintel PC fanatics with here on the AnandTech/DailyTech forums.
  • Griswold - Wednesday, May 14, 2008 - link

    It would be a good example if Apple's profit mostly came from desktop and notebook sales - which it doesn't. And if Dell weren't such a limping dog. Why not compare it to HP, for example?
  • Noya - Tuesday, May 13, 2008 - link

    Of course Apple has higher margins... their customers have lost most of their higher brain functions: observe how they can only talk while their hands swing wildly through the air, and how they can't comprehend how to use a two-button mouse.

    (Think of the Handyman from "In Living Color", with his one good arm and gimpy voice):
    "It just works for me!"

    "Something you love to hit" PC enthusiasts with?

    PC enthusiasts/hobbyists BUILD their own machine to the exact specs they want/need for market prices on the hardware with full upgradeability...they don't care about Dell/HP/etc.

    If it makes you feel "special" to pay 3x the price for the same hardware that has an Apple logo on it...you're called a sheep and watch too many Apple commercials.
  • Pirks - Tuesday, May 13, 2008 - link

    "3x the price" is an ancient urban myth that has been destroyed a long time ago, check your prices again.

    And while PC guys build their own stuff, you should not forget that the vast majority of consumers buy brand-name computers and don't care what serial number is on their CPU or how much voltage it can take for a proper OC. They care more about design, ergonomics, user-friendliness, and stuff like that - things that are totally alien to PC-building enthusiasts. This is the area where Apple spits on Wintel PCs and rubs it in, because their computers are head and shoulders more novice/non-techie (i.e. mass-consumer) friendly than any Windows PC. Just read all those reviews and horror stories about the dumb Vista UAC, the underpowered sllooowww PCs sold with Vista everywhere, and stuff like that - they tell the whole story the mass consumer needs to know.

    And no arguments about Crysis or overclocking or other enthusiast lingo can make any difference to the mass consumer - it's like talking nuclear physics to them.

    So, enjoy your niche PC-enthusiast market, nothing wrong with that, but also do not forget about the masses and gazillions of computer-illiterate people who don't give a thing about your PC-enthusiast overclocking/watercooling blah blah glory - these are all future Apple clients; just look at how Mac sales grow all the time.
  • Borreson - Tuesday, May 13, 2008 - link

    Where the ancient urban myth still rings true is that Apple has chosen not to compete in certain segments of the PC market.

    For instance, you can't buy a $500 budget Core 2 Mac tower. You can't buy something with the iMac's specs that lets you swap out the video card or that allows you to choose what monitor you prefer. You can't buy a sub-$2000 notebook nor a 13" subnotebook with a discrete GPU.

    The general level of hardware flexibility expected by enthusiasts is only present in the Mac Pro, which starts at 3x the price of the white box that an enthusiast might consider building.

    When you go and compare that Mac Pro to a competing Dell or a white-box Intel server/workstation, the Mac Pro's lofty price tag looks a lot more reasonable. It's really difficult to build a dual-quad-core workstation with gargantuan memory capacity and all the bells and whistles and have the price tag not balloon beyond reason. With the Mac Pro you get the signature Apple friendliness and hipness wrapped around a monstrous piece of hardware, and if you buy it immediately following a new release or a price adjustment, it's usually hundreds of dollars less than the competition's arguably inferior product (entirely depending upon your priorities and how much of Steve Jobs' pot you've been smoking). Oh, and they're whisper quiet.

    But if you want a Mac with a PCI Express slot and don't want to pay $2300+, you don't have *ANY* options, because for some reason having any expansion capability is bundled only with having an 8-core monster. The Mac Pro is complete overkill for most purposes and yet still has some significant hardware shortfalls in many areas (i.e. memory performance for gaming, legacy hardware support, price/performance for upgrades) compared to an $800 white-box tower.

    (There's nothing like an AMD vs Intel article to bring out the Mac vs PC platform war. Woo?)
  • Sunrise089 - Wednesday, May 14, 2008 - link

    "But if you want a Mac with a PCI Express slot and don't want to pay $2300+, you're don't have *ANY* options"

    This is correct. I don't think you can say the same thing about your specifics though.

    "For instance, you can't buy a $500 budget Core 2 Mac tower."
    Does the PC OEM world often sell $500 Core 2 Duo machines in full-size chassis?

    "You can't buy something with the iMac's specs that lets you swap out the video card or that allows you to choose what monitor you prefer."
    Does the PC OEM world sell all-in-one PCs at any price with 24" displays and user-upgradeable video? Do they even sell all-in-ones with 24" displays?

    "You can't buy a sub-$2000 notebook nor a 13" subnotebook with a discrete GPU."
    Does the PC OEM world sell a 13" notebook with a discrete GPU (they probably do, but all I normally see are 15" models)? Does the PC OEM world sell sub-$2000 notebooks with high-end Core 2 Duos, 2GB of memory, other higher-end features, extensive software suites, and discrete GPUs?
  • Pirks - Tuesday, May 13, 2008 - link

    I was talking about the myth of "3x the price for THE SAME hardware" - that one died a long time ago.
  • highlandmoose - Monday, May 12, 2008 - link

    In fact, for typical HPC codes Barcelona annihilates the Intel chips. We have just ordered a new HPC machine at my university, and I had all the fun of running the benchmarks. For a single thread, Intel was up to 50% faster, but for anything with a large memory footprint, the Intel chips (even the 45nm ones) with the FSB just fall over. For problems using 8GB of RAM (8 threads), the K10s were twice as fast, comparing 2.2GHz Barcelona vs 3GHz Harpertown chips. I just wish we could have had 3GHz Barcelonas...

    As for Nehalem, I think it will help Intel an awful lot, but if AMD can put on-die a 64-stream graphics processor with double-precision capability, then again performance for HPC will be stellar (compare, say, 64 Gigaflops [on-die ATI coprocessor @ 1GHz] vs 48 Gigaflops [8-core Nehalem @ 3GHz]). Interesting times are ahead...
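
    For reference, the peak-FLOPS arithmetic behind those two figures (editor's sketch reusing the commenter's own per-clock assumptions: 1 DP FLOP per stream processor per cycle, 2 DP FLOPs per Nehalem core per cycle):

    ```python
    def peak_gflops(units, clock_ghz, flops_per_cycle):
        # peak throughput = execution units x clock x FLOPs issued per cycle
        return units * clock_ghz * flops_per_cycle

    print(peak_gflops(64, 1.0, 1))  # 64.0 (64-stream GPU @ 1 GHz)
    print(peak_gflops(8, 3.0, 2))   # 48.0 (8 Nehalem cores @ 3 GHz)
    ```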
  • Itany - Tuesday, May 13, 2008 - link

    The "raw" capability of GPU could not turn into the real application performance. For AMD HD 3k family GPU users, what's the contribution of the GPU to performance except games and HD decoding?
    Under the condition that GPU suitable for acceleration, Intel Nehalem+NV CUDA is the best choice. Few would prefer the AMD plantform.
  • Griswold - Wednesday, May 14, 2008 - link

    "Under the condition that GPU suitable for acceleration, Intel Nehalem+NV CUDA is the best choice. Few would prefer the AMD plantform."

    And you base that claim on what? The non-existence of this combo, or your apparent fanboyism? Most people live in the "here and now", not in "what could be in 6-12 months if all goes well".
  • Locutus465 - Monday, May 12, 2008 - link

    And to be perfectly honest, for the money it doesn't disappoint. What I wanted was a reasonably priced system to get me up to date speed-wise, with an emphasis on reliability. The solution? Just go AMD. Ironically enough, AMD has been kicking some major @ss in the driver department, an area where nVidia used to dominate. Their Vista 64 drivers are complete, fast, and reliable; nVidia has only been getting to that point more recently. In fact, my nVidia-based laptop currently has issues handling the new Zune videos, which is disappointing :P

    Also, while yeah, I could build a faster Intel system, my current AMD build doesn't disappoint. Running a Phenom X4 9850, AMD Radeon 3870, 4GB of OCZ memory, and a high-end Asus board (the one with built-in WiFi), I'm able to play EVERY game available on the market right now (including Crysis) with good quality settings and more than acceptable frame rates. Pretty much all of my "last gen" PC games (Doom 3, Quake 4, WoW, Half-Life 2 & Episode One) are getting at, just over, or near 100FPS with all quality settings maxed and FSAA jacked up, played at 1280x1024 (my 19" LCD's native res). I'm very, very happy with that. Crysis I can run with many of the graphics settings set to high (not all, though) and still maintain smooth gameplay. I don't have Crysis FPS numbers because I don't remember how to run the FPS counter in the game, nor do I know how to run a benchmark. If anyone has that info, I'd be happy to post back with hard numbers.
  • antifanboys - Sunday, May 18, 2008 - link

    Lol. You've got to be joking. If you want a fast computer, get Intel; that's what's good right now. If you're only going to game at 1280x1024, then you have no business talking about anything. Face it, AMD is doing badly at the moment; I hope they can rebound. But your post made no sense. Are we supposed to buy the second-rate processors because you don't run an FPS counter?
  • plonk420 - Tuesday, May 13, 2008 - link

    If you bought AMD (procs) ONLY to game, that was a pretty dumb choice (unless you already had "good enough" memory and a compatible, previously expensive motherboard), unless you like throwing money at AMD. And don't bother with benches; AMD is beaten in all but one or two games.

    I personally am an AMD fanboy, but at LEAST I had a reason to buy a Phenom: x264 video encoding. For the first 10-15 days I've had mine, I've used all 4 cores at LEAST 80% of any given 24 hours. It keeps up in a linear fashion (or better), price-to-performance-wise, with an Intel quad. AND the x264 team said there are more optimizations they can do as well (IIRC).
  • Ensoph42 - Wednesday, May 14, 2008 - link

    Just wanted to emphasize that gaming on a Phenom is perfectly reasonable. When comparing CPUs for gaming, benchmarking is done at CPU-bound resolutions, typically 1024x768. If you're still gaming at these resolutions, I would think your money would be best spent elsewhere, not on a CPU. Once you get into higher resolutions, you see less and less difference between AMD and Intel CPUs.
  • Locutus465 - Wednesday, May 14, 2008 - link

    Ditto... Just because you want to game on your PC doesn't mean you should avoid AMD... They're worth considering for sure; the platform is great.
  • Locutus465 - Wednesday, May 14, 2008 - link

    I bought AMD for all of the reasons stated, including stability and an interest in seeing where Spider goes. Frankly, I'm very happy with my choice; I've been having a terrific experience.
  • Itany - Monday, May 12, 2008 - link

    AMD will lose everything facing Nehalem. K10 core vs enhanced penryn core, two channel DDR2 IMC vs three channel DDR3 IMC, HT 2.0 interconnect vs QPI, no SMT vs SMT...

    The crash of AMD is just a matter of time
  • INeedCache - Monday, May 19, 2008 - link

    Isn't everything simply a matter of time? Do you realize how long folks have been saying just that about AMD? Trust me, you really don't want AMD to crash and disappear. If so, kiss those inexpensive CPUs goodbye.
  • K20 - Saturday, May 17, 2008 - link

    Why do unqualified people feel compelled to comment? I don't know. But I do feel compelled to correct:

    "K10 core vs enhanced penryn core"
    It's K10.5/Shanghai vs. an "enhanced Penryn" core (considering people refer to the C2D as Conroe, then the "C2.5D" should be Wolfdale, should it not?).

    "two channel DDR2 IMC vs three channel DDR3 IMC"
    Nehalem needs more RAM bandwidth due to the cache-coherency protocol it uses.

    "HT 2.0 interconnect vs QPI"
    It's HyperTransport 3.0.

    Both HyperTransport and Intel's QuickPath Interconnect transfer the same amount of data per transfer, and they run at these speeds:
    HyperTransport = 5.2 GT/s vs. QPI's 6.4 GT/s.
    But real throughput still depends on the efficiency of the protocol being used.
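
    For reference, the peak bandwidth those transfer rates imply (editor's sketch assuming the usual 16-bit-per-direction links; as noted above, protocol efficiency decides what you actually get):

    ```python
    def link_gbs(gt_per_s, width_bits):
        # peak bandwidth per direction in GB/s: transfers/s x bytes/transfer
        return gt_per_s * width_bits / 8

    print(link_gbs(5.2, 16))  # 10.4 GB/s each way (HyperTransport 3.0)
    print(link_gbs(6.4, 16))  # 12.8 GB/s each way (QPI at 6.4 GT/s)
    ```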

    "no SMT vs SMT..."
    Yes, it's sharing a 256KB L2 between 2 threads and a <7MB L3 cache between 8 threads (assuming 4 cores), whereas K10.5/Shanghai will have a 512KB L2 cache for 1 thread, a 6MB L3 for 4 threads, and 1MB more of cache beyond L1.

    "The crash of AMD is just a matter of time"
    Erm... what if they start turning a profit next quarter, and a bigger profit the quarter after that, etc.?
    But obviously that can't happen, since you can predict the future.


    Anyway, it's nice to see that AnandTech hasn't completely written off the K10. I might pop by here more often now.
  • Griswold - Wednesday, May 14, 2008 - link

    I don't speak gibberish...
  • Ensoph42 - Monday, May 12, 2008 - link

    Not to jump the gun or anything, right? What about Shanghai?

    Anyway, this was an interesting blog entry to read, and it sort of echoed what I had been thinking even before the Phenoms were released, when I was just looking over some of the architectural details that had been made public. It looks like a server chip.

    I think AMD made some engineering choices that didn't translate into good marketing for the hardcore/gaming/OC crowd. AMD isn't dumb, and it must have crossed their minds that Intel wasn't going to take the ass-whooping handed to them by the Athlon architecture forever. Since AMD just isn't able to run two separate development lines for desktop and server due to their small size (conjecture), they may have decided to engineer a chip aimed more heavily at what is probably the more profitable segment (servers). It's also possible that AMD's biggest engineering mistake is being too farsighted. The K10 is good engineering, but it doesn't have the benchmark numbers to back it up. Simply put: modern desktop software isn't engineered to take advantage of the K10 architecture. Server software, on the other hand, just might be.

    I look forward to the review. Keep up the good work guys.
  • Regs - Wednesday, May 14, 2008 - link

    What do you mean, Shanghai? It's based on Barcelona. The only way AMD is going to improve in desktop applications is with a redesign. If they're going to come out with 2.2-2.6GHz CPUs with 512KB of L2 and little to no improvement in integer performance, then all Shanghai will be is a cooler-running Barcelona.
  • Ensoph42 - Wednesday, May 14, 2008 - link

    I'm trying to do some digging around for details about Shanghai, but it's more than a die shrink and some extra cache, if I'm not mistaken.

    Now keep in mind that the Phenom X4 9850's main competitor is the C2Q 6600. Looking at a couple of sites' benchmarks, they spend most of their time within spitting distance of each other. The 6600 pulls away on a few, as does the 9850. Pound for pound, Phenom as it stands is a respectable chip; it just can't reach the frequencies it needs to compete with the higher end, but I don't think the architecture itself is bad.

    So if Shanghai can bring 20% from architecture tweaks, plus a boost in clock speed, that'll be great and will keep AMD in the game (even assuming Nehalem will kick ass).

    Now, your comment about integer performance and redesigning for desktop applications is sort of going backwards. The future is multi-core, and desktop software needs to be written to take advantage of it from here on out, but software engineers aren't there yet. Not that there aren't improvements to be made in the pipelines of the individual cores.

    Of course, this is all speculation on both our parts till we actually see some benchmarks from third parties.
  • Locutus465 - Wednesday, May 14, 2008 - link

    I would like to see AMD increase the IPC of the Phenoms; the architecture isn't bad (like you say), and there is plenty of room for improved IPC... Obviously I'm not in a position to say this absolutely, but I seriously doubt AMD needs to go back to the drawing board like Intel did.
  • mtdewcmu - Monday, May 12, 2008 - link

    AMD isn't going to be standing still waiting for Nehalem. AMD is going to turn up the heat with 45 nm later this year.
  • varneraa - Tuesday, May 13, 2008 - link

    Intel's 32nm chips probably won't be far behind AMD's 45nm chips.
  • homerdog - Monday, May 12, 2008 - link

    Turn up the heat? Let's hope not :)
  • Regs - Monday, May 12, 2008 - link

    Don't buy an AMD CPU for a desktop?
  • JWalk - Monday, May 12, 2008 - link

    This is really interesting. It looks like AMD is very competitive in the HPC / Server market. I am glad to hear it. I am currently using a Core 2 Duo system for my desktop machine, and it is very fast. But, I certainly don't want the competition between AMD and Intel to come to an end anytime soon. Any good news for AMD at this point is good news for consumers.
