AI Is More Than Deep Learning

At a high level, while deep learning is a form of artificial intelligence, the converse isn't always true; an application implementing AI does not necessarily use deep learning. Many AI applications use “conventional statistical” or “traditional” machine learning. After all, Support Vector Machines, logistic regression, k-nearest neighbors, Naive Bayes, and decision trees still make a lot of sense for automating information classification, especially if you don’t have a lot of data.
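
To give a rough sense of how lightweight these methods are, here is a minimal scikit-learn sketch that trains a few of the classifiers mentioned above on a small toy dataset; the dataset and hyperparameters are purely illustrative and not part of any of our benchmarks.

```python
# Minimal sketch: classic "traditional" classifiers on a small labeled dataset.
# Assumes scikit-learn; the toy digits dataset stands in for real data.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "naive Bayes": GaussianNB(),
    "decision tree": DecisionTreeClassifier(max_depth=10),
    "SVM (RBF kernel)": SVC(kernel="rbf"),
}

for name, model in models.items():
    model.fit(X_train, y_train)  # each of these trains in well under a second on a CPU
    print(name, round(model.score(X_test, y_test), 3))
```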

For example, Conditional Random Fields (CRFs) are used in natural language processing, and a lot of recommendation engines are based upon Boltzmann Machines, Alternating Least Squares (ALS), and so on. Case in point: one of the most demanding and unique benchmarks – our "big data" benchmark – uses an ALS algorithm as its recommendation engine ("collaborative filtering"), along the lines of the sketch below.
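
The benchmark's code isn't reproduced here, but a collaborative-filtering pipeline built on Spark MLlib's ALS implementation typically looks something like this; the ratings file, column names, and hyperparameters are assumptions for illustration, not the actual benchmark configuration.

```python
# Hedged sketch of ALS-based collaborative filtering with Spark MLlib.
# The ratings file, column names, and hyperparameters are illustrative only.
from pyspark.sql import SparkSession
from pyspark.ml.recommendation import ALS
from pyspark.ml.evaluation import RegressionEvaluator

spark = SparkSession.builder.appName("als-recommender").getOrCreate()

# Expected columns: userId, movieId, rating (hypothetical dataset)
ratings = spark.read.csv("ratings.csv", header=True, inferSchema=True)
train, test = ratings.randomSplit([0.8, 0.2], seed=42)

als = ALS(userCol="userId", itemCol="movieId", ratingCol="rating",
          rank=10, maxIter=10, regParam=0.1, coldStartStrategy="drop")
model = als.fit(train)  # factorizes the user-item matrix into latent factors

predictions = model.transform(test)
rmse = RegressionEvaluator(metricName="rmse", labelCol="rating",
                           predictionCol="prediction").evaluate(predictions)
print("RMSE:", rmse)
```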

Of course, the use of neural networks – itself a whole field of study – is booming, and they tend to dominate the latest AI applications. Neural networks are also among the most demanding workloads, requiring lots of processing power and driving expensive (and well-publicized) hardware contracts. All of which contrasts heavily with logistic regression, which remains the most used machine learning method, and which happens to need much less processing power.

The reason for this difference in processing power requirements is, in turn, actually pretty simple. To quote Wouter Gevaert, an AI expert at the university department I work in:

“Each Neuron in a Neural Network can be considered like a logistic regression unit. Therefore, a Neural Network is like a massive amount of logistic regressions” (when you use the sigmoid as the activation function).
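
To make that equivalence concrete, the tiny NumPy sketch below (with arbitrary example weights) shows that a single neuron with a sigmoid activation computes exactly the same function as a logistic regression unit with the same coefficients.

```python
# A single sigmoid neuron is mathematically a logistic regression unit:
# both compute sigmoid(w . x + b). The weights below are arbitrary examples.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.array([0.8, -1.2, 0.5])   # weights (one per input feature)
b = 0.1                          # bias / intercept
x = np.array([1.0, 2.0, 0.5])    # one input sample

neuron_output = sigmoid(np.dot(w, x) + b)        # one neuron with sigmoid activation
logreg_probability = sigmoid(np.dot(w, x) + b)   # logistic regression with the same coefficients

print(neuron_output == logreg_probability)  # True: a layer just stacks many of these units
```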

With all of that said, however, while neural networks are the most processing-intensive of AI technologies (especially those with a large number of layers), several traditional machine learning techniques also require a lot of processing power. Support Vector Machines, with their complex kernel transformations, tend to demand a great deal of computational time, for example. And in our Spark test, the Stanford NER system is based on a supervised CRF model trained on a labeled collection of English data. In that test, it has to crunch through a massive amount of unstructured text – several hundred gigabytes.
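
As a rough illustration of why kernel SVMs get expensive – this is a generic timing sketch, not one of our benchmark tests – training time with an RBF kernel grows superlinearly with the number of samples, because the solver effectively works on an n-by-n kernel matrix.

```python
# Rough timing sketch: RBF-kernel SVM training cost grows superlinearly with
# sample count. Data is synthetic; absolute times depend entirely on the CPU.
import time
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
for n in (1_000, 2_000, 4_000, 8_000):
    X = rng.normal(size=(n, 50))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)
    start = time.perf_counter()
    SVC(kernel="rbf").fit(X, y)
    print(f"{n:>5} samples: {time.perf_counter() - start:.2f} s")
```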

And of course, most analytical queries are still written in good old SQL. For structured and semi-structured data, for OLAP cubes, and so on, SQL code is still prevalent. As a single SQL query is nowhere near as parallel as Neural Networks – in many cases they are 100% sequential – the CPU is the best tool for the job.
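
For a concrete picture of the kind of query involved, here is a hypothetical OLAP-style aggregation expressed through Spark SQL; the "sales" table and its columns are invented purely for illustration.

```python
# Hypothetical OLAP-style aggregation via Spark SQL; the "sales" table and
# its columns are made up for illustration.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("olap-example").getOrCreate()
spark.read.parquet("sales.parquet").createOrReplaceTempView("sales")

top_segments = spark.sql("""
    SELECT region,
           product_category,
           SUM(revenue)    AS total_revenue,
           AVG(units_sold) AS avg_units
    FROM   sales
    WHERE  sale_date >= '2018-01-01'
    GROUP BY region, product_category
    ORDER BY total_revenue DESC
    LIMIT 10
""")
top_segments.show()
```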

So in practice, most data (pre)processing and lots of AI software still runs on the CPU. GPUs mostly run massively parallel HPC applications and neural networks – an important market to be sure, but still only a piece of the larger AI market. This is one of the reasons why NVIDIA closed on $3 billion of datacenter revenue last year, while Intel’s datacenter group made $20 billion. Yes, Intel’s number includes networking and storage. Yes, it covers several markets other than HPC and data analytics. But still, a significant part of that revenue is going to be based upon servers that store and process data for analytics.

Compounding this whole picture, however, is not just revenue, but opportunities for growth. NVIDIA has been seeing massive growth in the datacenter market, while Intel has seen only single-digit growth. Customer needs are continuing to shift as new technologies become available; the battle for the data analytics market has begun, and it is intensifying.

Comments
  • Bp_968 - Tuesday, July 30, 2019 - link

    Oh no, not 8 million, 8 *billion* (for the 8180 xeon), and 19.2 *billion* for the last gen AMD 32 core epyc! I don't think they have released much info on the new epyc yet, but it's safe to assume it's going to be 36-40 billion! (I don't know how many transistors are used in the I/O controller.)

    And like you said, the connections are crazy! The xeon has a 5903 BGA connection so it doesn't even socket, it's soldered to the board.
  • ozzuneoj86 - Sunday, August 4, 2019 - link

    Doh! Thanks for correcting the typo!

    Yes, 8 BILLION... it's incredible! It's even more difficult to fathom that these things, with billions of "things" in such a small area, are nowhere near as complex or versatile as a similarly sized living organism.
  • s.yu - Sunday, August 4, 2019 - link

    Well the current magnetic storage is far from the storage density of DNA, in this sense.
  • FunBunny2 - Monday, July 29, 2019 - link

    "As a single SQL query is nowhere near as parallel as Neural Networks – in many cases they are 100% sequential "

    hogwash. SQL, or rather the RM which it purports to implement, is embarrassingly parallel; these are set operations which care not a fig for order. the folks who write SQL engines, OTOH, are still stuck in C land. with SSD seq processing so much faster than HDD, app developers are reverting to 60s tape processing methods. good for them.
  • bobhumplick - Tuesday, July 30, 2019 - link

    so cpus will become more gpu like and gpus will become more cpu like. you got your avx in my cuda core. no, you got your cuda core in my avx......mmmmmm
  • bobhumplick - Tuesday, July 30, 2019 - link

    intel need to get those gpus out quick
  • Amiba Gelos - Tuesday, July 30, 2019 - link

    LSTM in 2019?
    At least try GRU or transformer instead.
    LSTM is notorious for its non-parallelizability, skewing the result toward cpu.
  • Rudde - Tuesday, July 30, 2019 - link

    I believe that's why they benchmarked LSTM. They benchmarked gpu stronghold CNNs to show great gpu performance and benchmarked LSTM to show great cpu performance.
  • Amiba Gelos - Tuesday, July 30, 2019 - link

    Recommendation pipeline already demonstrates the necessity of good cpus for ML.
    Imho benching LSTM to showcase cpu perf is misleading. It is slow, performing equally or worse than alts, and got replaced by transformer and cnn in NMT and NLP.
    Heck why not wavenet? That's real world app.
    I bet cpu would perform even "better" lol.