In my previous blog, A(l)chemical Reactions, I drew an arc between the medieval proto-scientific philosophy of alchemy and self-driving cars. The connection wasn't as tortured as one might think, and it allowed me to get across two related ideas. First, important insights may come from unexpected places and be applied in unexpected ways to unexpected things. Second, it takes a while for all the technology tumblers to click into place, but when they do, breakthroughs happen and many previously locked doors fly off their hinges.
Such a breakthrough may have taken place with artificial intelligence (AI), and specifically with a neural network-based approach called "deep learning". A few months ago, we had the pleasure of hearing Frank Chen, head of Research at Andreessen Horowitz, present an excellent primer on AI. Before you read Frank's thoughts, I'll attempt to synthesize a few of his points and explain why they matter.
Artificial intelligence is one of those buzzy, short-hand terms that is frequently used and just as frequently misunderstood. The misunderstanding is to be expected since AI has a lot of components and a complicated taxonomy. For instance, deep learning is a subset of machine learning which itself is a subset of AI. In its simplest form, AI is a set of algorithms and techniques that allow computers to mimic human intelligence. How the mimicry is pursued, however, is where things get tricky because there are many approaches.
AI research began in the mid-1950s, some of it in response to the Cold War. The U.S. intelligence services didn't have enough Russian speakers to monitor the Soviets, so they thought it would make sense to program computers to recognize and translate natural language. It turns out that natural language recognition is really hard, especially using 1950s technology, and after spending millions of dollars and a decade's worth of work, the effort was abandoned. There were later attempts at different AI applications (remember the "expert systems" of the 1980s?), but the result was decades of AI boom-bust cycles. While the technology was undoubtedly improving, the improvements were insufficient to crack open the AI locks.
But that was then and this is now. Maybe. After sixty years – that’s right, sixty years – of failed starts in AI, it appears that the tumblers are finally clicking and the breakthroughs are happening.
What's changed? Pretty much everything. The original approach was for humans to teach the computer to mimic a human function. Researchers would create rules that described a behavior or speech, feed the rules into the computer, and the computer would then produce human-like output. As we've seen, that didn't work out so well. Today's researchers are taking a totally different approach. They are applying neural networks – an idea from the 1940s – to build data structures that mimic the way the human brain processes information. I won't pretend to have more than a superficial understanding of the mechanism, but I think it works something like this: researchers feed massive amounts of data into very powerful computers, provide some algorithms to help the computers learn from the data, and the computers then use the data to essentially teach themselves to mimic the targeted behavior. This last bit is the subset of AI called "deep learning", and it is considered to be the primary contributor to the recent AI breakthrough.
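To make the contrast concrete, here is a toy sketch (mine, not Frank's) of what "teaching itself from data" looks like in code. Nobody writes a rule like "output 1 when the inputs differ"; instead, a tiny neural network is shown four examples of the XOR function and nudges its own weights, over and over, until its guesses match the examples. The network shape, learning rate, and iteration count are arbitrary choices for illustration; real deep-learning systems are vastly larger but follow the same loop.

```python
import numpy as np

rng = np.random.default_rng(0)

# The "massive amounts of data" -- here, just four examples of XOR.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])  # inputs
y = np.array([[0.], [1.], [1.], [0.]])                  # desired outputs

# A tiny 2 -> 8 -> 1 network with random starting weights.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward():
    hidden = sigmoid(X @ W1 + b1)
    return hidden, sigmoid(hidden @ W2 + b2)

_, out = forward()
initial_error = np.mean((out - y) ** 2)  # how wrong the untrained net is

# The learning loop: guess, measure the error, nudge every weight
# slightly in the direction that shrinks the error (gradient descent).
for _ in range(10000):
    hidden, out = forward()
    err = out - y
    g_out = err * out * (1 - out)
    g_hidden = (g_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ g_out
    b2 -= 0.5 * g_out.sum(axis=0)
    W1 -= 0.5 * X.T @ g_hidden
    b1 -= 0.5 * g_hidden.sum(axis=0)

_, out = forward()
final_error = np.mean((out - y) ** 2)
print(f"error before training: {initial_error:.3f}, after: {final_error:.3f}")
```

After training, the network reproduces XOR even though no rule for XOR was ever programmed in; the "knowledge" lives entirely in the learned weights.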
What else enabled this breakthrough? The same advances I talked about in my previous blog.
Back in the day, compute was expensive, the data were sparse, and most of the research was buried inside government or corporate labs or done by small teams of underfunded academics. Not anymore.
Readers are already using products powered by AI, and have been for a while. If you've used Facebook or Amazon, watched movies on Netflix, read BuzzFeed, rented an Airbnb, or asked Siri a question, you've used some form of AI. And you've probably noticed that the products and services keep getting better and smarter all the time. (Check out Amazon's Echo if you haven't already—its logo should be a Trojan horse.) The semi-autonomous features of many new cars and trucks are powered by forms of AI, and more sophisticated applications will power the fully autonomous versions that will be on the road in the very near future. And AI is not just for consumer applications. Its techniques are already reading X-rays and diagnosing blood cancers with much higher accuracy than human experts.
We believe that AI, and deep learning in particular, is another fundamental technological shift, and big changes are in store. Entrepreneurs and techy top guns are fully engaged, as are many venture capitalists. We expect that many of the AI winners will be venture-backed start-ups and that the biggest will be those who are doing something most people either haven’t thought of or thought impossible. We can’t wait!
P.S. About those images at the beginning of the blog: the first is a beautiful photo of a night sky just outside Kruger Park in South Africa, taken by my partner, Tim Bliamptis. The second is the same image transformed by an AI algorithm called Deep Dream Generator. As the Deep Dream website explains, the algorithm was originally invented to help scientists and engineers see what a deep neural network is seeing when it looks at a given image. Since then, the algorithm has become the basis of a new form of psychedelic and abstract art.
If you want to know even more about AI, check out Pedro Domingos's podcast on the Farnam Street blog: https://www.farnamstreetblog.com/2016/09/pedro-domingos-artificial-intelligence/. Here's a shorter piece from IEEE Spectrum.
Reach out to us to learn more about Weathergage and how we can help your organization.