Late last year we did an installment of Ask the Experts with ARM's Peter Greenhalgh, lead architect for the Cortex A53. The whole thing went so well, thanks in no small part to your awesome questions, that ARM is giving us direct access to a few more key folks over the coming months.

Krisztián Flautner is Vice President of Research and Development at ARM, and as you can guess, he's focused not on the near term but on what's coming down the road for ARM. ARM recently celebrated the 50 billionth CPU shipped by its partners; Krisztián is more focused on the technologies that will drive the next 100 billion shipments.

Krisztián holds PhD, MSE and BSE degrees in computer science and engineering from the University of Michigan. He leads a global team that researches everything from circuits to processor/system architectures and even devices. And he's here to answer your questions.

If there's anything you want to ask the VP of R&D at ARM, this is your chance. Leave a comment with your question and Krisztián will go through and answer any he can. If you've got questions about process tech, Moore's Law, ARM's technology roadmap planning or pretty much anything about where ARM is going, ask away!

  • lada - Wednesday, June 4, 2014 - link

    Whatever happened to the ARM BIOS and/or other I/O standardization efforts? ("One Linux for all ARM systems") Will we see a BIOS of some sort, standard peripherals to rely on for booting, or standard SATA controllers? I think it's mandatory for ARM to have standardized components beyond the instruction set if it's to grow into the PC/server space (and with the A5x 64-bit ARM cores and up I see the best outcome in the cleanup of the ISA, but what about peripherals? Timers, interrupts, something for one kernel to rule them all?) To make Linus happy? ;)
    Some auto-discovery of peripherals akin to PCI or USB? It wouldn't eat many high-frequency transistors and could speed up kernel development many times over, IMO.

    Writing from RPi, best regards and thanks for the answers and your opinion.
  • KFlautner - Thursday, June 5, 2014 - link

    Check out . This is an organization we've set up to cater to the needs of the ARM-based Linux ecosystem. I think they've taken many of the right steps to make Linus happier... As far as standardization goes, we have also been working on various platform design documents that help our partners deploy functionality in a common way and reduce unnecessary fragmentation.
    A great example of this is the Server Base System Architecture (SBSA) specification we collaborated on with our silicon and software partners.

    Johan from AnandTech had a good write-up on it earlier this year.
  • mercurylife - Wednesday, June 4, 2014 - link

    Part 1: How about a teaser about the next 64-bit 'big' core?
    Part 2: What can we expect from the next iteration of ARM Trustzone?

  • KFlautner - Thursday, June 5, 2014 - link

    a) It's going to be 64 bit! ;)
    b) It's capitalized differently: TrustZone

    Sorry - I cannot really say much about future ARM products and roadmaps.
  • aryonoco - Wednesday, June 4, 2014 - link

    1) Which one gives you more sleepless nights, Intel or Imagination Technologies?

    2) Is ARM likely to make a big core SoC like Apple's Cyclone?
  • KFlautner - Thursday, June 5, 2014 - link

    Not sure either of these causes me sleepless nights, but they do give my subconscious some story lines to work on while I'm asleep. ;) I don't think competition is a bad thing ...

    Check out: ... The A57 is our current "big" core.
  • Dmcq - Thursday, June 5, 2014 - link

    Do you think there might be a comeback for ThumbEE type facilities in the 64 bit architecture to support JIT code or run time checking of things like overflows?

    It seems like everything is being virtualized, does this stop some type of developments that are incompatible with good virtualized performance?

    What is the main thing you know now that you really wish you knew or fully understood five years ago?
  • KFlautner - Thursday, June 5, 2014 - link

    Thanks for the questions.

    a) We do look at JIT performance as one of the metrics we optimize for. But we often find that it's better to cater to JITs at the microarchitecture level rather than at the architecture (instruction set) level. We've found that there are far too many different approaches to writing good JITs, and there are conflicts over when and how much well-intentioned JIT-oriented instructions can actually be exploited.

    b) I am not aware of any architectural development that we didn't pursue because of virtualization issues. However, I agree with you: there are ways of catering to virtualizability and others that make it harder. Architects do think about these issues.

    c) It's a long list... But not much of it is related to computers. ;) Not because I claim to have known everything about the subject, but because the objectives have been clear: how do you increase the efficiency of your designs (performance per watt) while keeping costs down? It's a tall order and the means of achieving it change over time. But this metric has been front and center at ARM for much of its existence and I don't expect that to change radically.
  • twotwotwo - Thursday, June 5, 2014 - link

    Hrm, chances are I'm too late, but an oddball question:

    OS X, Android, and Chrome OS all use some kind of compressed swap, running a cheap algorithm like LZO or Snappy on not-recently-used pages of RAM--it's a neat trick to pretend you've got more memory than you really do, and it works well because CPUs are so stupid-fast now.

    Early slides for AMD's Seattle SoC advertise a dedicated compression coprocessor. I don't know how much dedicated hardware even helps with compression (from what little I know of the algorithms, it seems like you'd mainly need fast random access to a smallish memory cache, which the CPU should already be really good at). But if you could make some sort of compression "free", that could boost devices' effective RAM for the buck (or improve effective I/O speed for the buck, as the SandForce SSD controller's compression does).

    So, I suppose you can't really talk plans, but is dedicated compression IP an interesting area?
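    The compressed-swap trick described above can be illustrated in a few lines of Python. This is a toy sketch, not ARM or kernel code: real implementations such as Linux's zram/zswap work on 4 KB pages with fast codecs like LZO or LZ4, but zlib at its fastest setting shows the same idea, namely that a cold, redundant page can be kept in far less RAM than it nominally occupies.

    ```python
    import zlib

    PAGE_SIZE = 4096  # typical memory page size in bytes

    # Simulate a cold page. Real RAM pages are often highly redundant
    # (zeroed regions, repeated structures), which is why even cheap
    # compression recovers a lot of space.
    page = (b"struct task { int pid; char name[32]; }\x00" * 200)[:PAGE_SIZE]

    # level=1 trades compression ratio for speed, as swap compressors do.
    compressed = zlib.compress(page, level=1)

    # Lossless round trip: the page can be restored on a page fault.
    assert zlib.decompress(compressed) == page

    # The "extra RAM" is the difference: the page now occupies
    # len(compressed) bytes in the compressed pool instead of PAGE_SIZE.
    print(f"{PAGE_SIZE} B page -> {len(compressed)} B "
          f"({len(compressed) / PAGE_SIZE:.0%} of original)")
    ```

    Whether dedicated hardware beats doing this on the CPU is exactly the question the comment raises; the software version already runs faster than disk swap on modern cores.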
  • OreoCookie - Thursday, June 5, 2014 - link

    Intel covers its markets with variants of essentially two cores (Atom-class cores and Core-class cores) while ARM already has many more (at least three current A-series cores, not counting the Cortex-M and Cortex-R families). Is ARM comfortable with this strategy, or is there a need for more specialized cores that address only a certain market? (E.g., does it make sense to bring to market a dedicated server core that is no longer suitable for smartphone and tablet applications?)
