What the Digital Brains of the Future Might Be Like

The Atlantic


Alexis Madrigal

It is the rare entrepreneur who hits it truly big twice. Those who do -- such as Ev Williams, Ted Turner, and Elon Musk -- tend to stay within the original industry that made them. In recent memory, Steve Jobs sticks out for his success in entertainment (Pixar) and computing (Apple).

Which is what makes Jeff Hawkins so intriguing. Having founded Palm (of Pilot fame) and sold it, Hawkins turned his attention back to his long-time hobby... neuroscience. And now he's got another company, Grok, that tries to apply what he learned about neurons and brain processes to the data problems that companies have. While technology has been Hawkins' job for most of his adult life, it's clear that the brain is his passion. His book detailing his synthesis of neuroscience research, On Intelligence, received unexpectedly great reviews from the research community. Nobel laureate Eric Kandel even blurbed the book, calling it "a must-read for everyone who is curious about the brain and wonders how it works."

We spoke at the company's modest offices in Redwood City for a Q&A that was published in this month's magazine.

Here, you can find the extended remix.

What is Grok?

Grok is software that helps companies take automated action from streaming data. It does this by finding complex patterns in machine-generated data and making predictions. It might use smart-meter data to predict energy needs, or data from complex machinery to predict equipment failures. The underlying technology is based on my research into how the neocortex works.

So how does it actually work?

Grok is self-learning -- it finds patterns in data without human intervention. Feed Grok streams of data, and it automatically models them the way a human analyst might: figuring out which data streams are useful, deciding how to represent the data, and tuning complex algorithm parameters to improve results. Because it's automated, Grok is ideal for analyzing thousands of data streams. Grok also learns continuously. Unlike most other analytics techniques, which have to be taken offline and retrained, Grok learns from every data point. No analyst needs to decide when to pull models down and update them.

How do most people work with data now?

What most people do today is put data in big databases and analyze the correlations. Say you have 1 billion users on Facebook, and you're trying to figure out what advertisement to feed to 20 percent of them. You want one big model on all these data. What we do is different: Say someone has 10,000 smart meters, and they're trying to figure out what energy consumption is going to be two hours from now. We build 10,000 models. You can't have a data analyst doing that. If you want to model every machine in a factory or every windmill in a windmill farm, it's all about automation. We build lots of little models -- that's the future of data.

Most advanced analytics require substantial human expertise and are done in batch fashion -- data are gathered for some time period and then processed in big chunks. This process can be slow, and it's hard to apply to a broad range of problems as the world changes. It also means using huge databases, which are expensive: maintaining and moving large amounts of data around is complex. Grok, like your brain, is a streaming system. Data pass through Grok and predictions are made, but Grok doesn't need to store the data to function. With millions of devices generating billions of data streams, the ability to store only what's critical can be a significant advantage.
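As a rough illustration of both ideas -- many small models, each learning from every data point as it streams past, with nothing stored -- here is a minimal sketch. It is not Grok's actual algorithm; the exponentially weighted average, the alpha value, and the meter name are all assumptions made for the example.

```python
class OnlineModel:
    """A tiny per-stream model: an exponentially weighted running average."""

    def __init__(self, alpha=0.1):
        self.alpha = alpha    # how quickly the model forgets old data
        self.estimate = None  # current prediction for the stream

    def update(self, value):
        # Learn from every data point -- no batch retraining, no stored history.
        if self.estimate is None:
            self.estimate = value
        else:
            self.estimate += self.alpha * (value - self.estimate)
        return self.estimate


# One small model per smart meter, created on demand -- the "lots of
# little models" architecture, which scales to thousands of streams.
models = {}

def ingest(meter_id, reading):
    model = models.setdefault(meter_id, OnlineModel())
    return model.update(reading)

for reading in [5.0, 5.2, 4.9, 5.1]:
    prediction = ingest("meter-0042", reading)
```

Each model holds only a handful of numbers, so memory stays constant no matter how long the stream runs -- the streaming property the answer above describes.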

In industrial applications, more basic approaches -- say, watching a sensor to ensure a temperature does not exceed a certain value -- can be used to monitor equipment and trigger alarms. These approaches have limitations: By the time an alarm is triggered, it may be too late. And what's normal for one machine may not be normal for the next.

Grok lets you automate processes that previously required manual adjustment. Heating or cooling systems can be turned on or off intelligently. Applications can be migrated between servers based on load. Network traffic can be rerouted. Unusual behavior of heavy machinery can generate alerts that recommend specific action. Instead of reacting to problems, you can anticipate them.

Who else might use your software?

People who want to do anomaly prediction. Grok works as if it's listening to very noisy melodies and going, "I recognize some of this. That sounds a little familiar." And all of a sudden it says, "This sounds totally different. I've never seen this before," so it goes beep, beep. There are a lot of people looking for anomalous behavior in credit-card and security applications. It turns out this might be even bigger than prediction. But of course, anomalies are the flip side of predictions -- if I can't predict well, then I have an anomaly.
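Hawkins's "flip side" point -- an anomaly is just a prediction that fails badly -- can be made concrete with a toy detector. This is a generic statistical sketch (a running mean and variance, flagging readings that land several standard deviations out), not Grok's actual scoring method; the alpha, threshold, and warm-up period are assumed values.

```python
import math

class AnomalyDetector:
    """Flags readings that deviate sharply from what the model has learned."""

    def __init__(self, alpha=0.05, threshold=3.0):
        self.alpha = alpha          # forgetting rate for mean and variance
        self.threshold = threshold  # deviations (in std-devs) deemed anomalous
        self.mean = 0.0
        self.var = 1.0
        self.seen = 0

    def score(self, value):
        # Prediction error, normalized by how noisy the stream has been.
        error = abs(value - self.mean) / math.sqrt(self.var)
        # Update the model from every point -- learning never stops.
        self.mean += self.alpha * (value - self.mean)
        self.var += self.alpha * ((value - self.mean) ** 2 - self.var)
        self.seen += 1
        return error

    def is_anomaly(self, value):
        # Ignore the first few points while the model is still settling in.
        return self.score(value) > self.threshold and self.seen > 10

det = AnomalyDetector()
normal = [10.0, 10.2, 9.9, 10.1] * 5    # a steady, familiar "melody"
flags = [det.is_anomaly(v) for v in normal]
spike = det.is_anomaly(50.0)            # "I've never seen this before"
```

The familiar readings pass silently; the spike is flagged because the model's prediction fails badly -- prediction and anomaly detection as two views of the same machinery.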

So how does this relate to your previous work on the brain?

First of all, we have a very complex brain; it's got all these different components. But we're just talking neocortex here. Every mammal, from a human to a mouse to a dolphin, has one. What is the neocortex doing? It's building a model of the world, of what we call sensory motor contingencies or sensory motor patterns: Why are you wearing glasses and what does that mean? Or: If I turn my head to the right, I have expectations about what I'm going to see. Most of what we learn about the world is how it behaves when we interact with it. The neocortex builds a model of what should happen in a particular context. A bigger neocortex lets you make a more complex model, and it lets you have more sensors. And that's what intelligence is: it's learning this model of the world.

And Grok uses a similar principle?

Here's what we do inside Grok: we build this 60,000-neuron neural network that emulates a very small part of one layer of the neocortex. It's about a thousandth the size of a mouse brain and a millionth the size of a human brain. So: not super-intelligent, but we're using the principle by which the brain does all the inference and motor behavior. I'm very confident that this sequence memory we use is the core of how all intelligence works. The brain's taking in streaming data, they're noisy, they're constantly changing, and it has to figure out what the patterns are and make predictions from them.
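The sequence memory Hawkins describes is far richer than anything a few lines can capture, but the core principle -- learn which patterns follow which, then predict what comes next -- can be shown with a deliberately tiny stand-in: a first-order transition table over symbols. This is a toy for illustration, not Numenta's cortical algorithm.

```python
from collections import defaultdict

class ToySequenceMemory:
    """Counts which symbol tends to follow which, and predicts the likeliest next one."""

    def __init__(self):
        self.transitions = defaultdict(lambda: defaultdict(int))
        self.previous = None

    def observe(self, symbol):
        # Strengthen the link from the previous symbol to this one.
        if self.previous is not None:
            self.transitions[self.previous][symbol] += 1
        self.previous = symbol

    def predict(self):
        # Predict whichever symbol most often followed the current one.
        followers = self.transitions.get(self.previous)
        if not followers:
            return None
        return max(followers, key=followers.get)

memory = ToySequenceMemory()
for symbol in "ABCABCABC":
    memory.observe(symbol)
prediction = memory.predict()  # after 'C', 'A' has always followed
```

A real cortical model uses sparse distributed representations and high-order context rather than a lookup table, but the loop is the same: stream in noisy data, learn the sequences, predict.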

Is this different from other artificial-intelligence research that's going on these days?

I've been observing the AI and neural-network fields for years, and I've always been a bit of a contrarian.

My view has been: let's figure out how the neocortex works, and once we understand those principles, that will be the path to building machine intelligence. Classic AI says: forget the neuroscience; it's a matter of programming and algorithms.

I have to ask, why would you want to build super-intelligent machines?

We can make the world more efficient, we can save energy, we can save resources, we can help detect diseases. When I ask myself, "What's the purpose of life?" I think a lot of it is figuring out how the world works. These machines will help us do that. Many, many years from now, we'll be able to build machines that are super-physicists and super-mathematicians, and explore the universe. The idea that we could accelerate our accretion of knowledge is very exciting.

What might these artificial intelligences be like?

People today try to build walking, talking robots. And that probably will happen, but that's not where the excitement is. We're not very good at predicting where these things go, but I can say that the vast majority of machine intelligence will not be human-like at all. It won't be talking to us. It won't be evil. It will just be building lots of little models, or brains, trying to understand everything from vending machines to smart meters to windmills. Over time, you can imagine systems that model very complex things, like organizations.

But what are all the people going to do once there are all these super-intelligent machines?

Take these models we're building with Grok. No human is going to be displaced by these things. No one is doing this -- it's impossible. Take the telephone system, where electronic switching replaced all those operators. If we had to have an operator place every telephone call in the world, there would be a billion telephone operators. Did we lose a billion jobs? Not really. We lost a few jobs, and advanced the quality of life. It's not some dystopian future where machines do everything and we sit around in lounge chairs.

Actually, that sounds pretty good to me. Can we talk a little bit about your past? I was pleased to learn that you came up with the Grid Pad.

The first tablet computer! And though we didn't make it a consumer product, that was my first hard idea.

At one of your talks, you had a throwaway line where you said that there was a bubble in "pen-based computing" from like 1989 to 1991. Really?

Well, Palm was successful. The bubble was the whole pen-computing world -- companies like Go and EO and Slate (not the Slate we know today), Ink, Pen... Over a billion dollars -- 1992 dollars -- was invested in "pen computing," and every one of those products failed. We were trying to start a mobile-computing company, and we kept saying, "We're not like those guys." In fact, when we were getting funding for Palm and I was being courted by VCs, my pitch was to point to all these companies and say, "See all these companies that just raised a lot of money? They're going to go out of business. There's an opportunity, if you're willing to bear with it, because mobile computing is going to be really, really big -- but not the way they're thinking about it." It was hard, of course -- it didn't succeed right away -- but eventually we had the Palm Pilot, which was a very successful product. Of course, today mobile is a big thing. I kind of knew that back then; ten years ago I used to tell people, mobile is going to be big, everyone is going to have this [points to an iPhone]. I feel the same way now, in the sense that we're onto some big things in artificial intelligence, but it might take twenty years to be obvious.




