G-brain: Google’s simulated thinking computer system only has eyes for lolcats

In an experiment designed to simulate the functioning of the human brain, 16,000 computers agree: cognitive cats are the only thing worth downloading. By RICHARD POPLAK.

Mountain View, California, is something of an innovation hub. It’s the place that brought us the internet search engine, the driverless car, Gmail. Without Mountain View, CA, the world would be unrecognisable. This is where Google lives and where the present morphs into the future, whether we’re ready for it or not.

Google X is the relatively creepy R&D department that sets high-paid boffins loose in pursuit of the next big e-thing. And ever since Alan Turing, the mathematics genius who posited the notion of Artificial Intelligence way back in the 1950s, the next big e-thing has been the thinking, reasoning machine. We’ve been warned about the perils of such a construction, but I fail to see how HAL 9000, birthed by dorks in a Cali complex replete with pool tables, pinball machines and fridges full of Coke Zero, is anything close to a great idea.

But humanity has to be wiped out somehow, and if any company is up to the task, why not Google? They’ve made stalking a fine art, what with Street View, and have promised “don’t be evil” – the sort of utopianism that leads to utter, universal destruction.

Which brings us to Google X’s latest innovation. Andrew Y. Ng, a computer scientist from Stanford, along with a team of like-minded folk, has created a neural network of 16,000 computer processors, boasting more than one billion connections, and sent it into the wilds of the internet. Sans biological mentor, and shorn of the usual programming instruction that helps software do all sorts of tasks humans used to perform, at half the price, the neural network has recognised – wait for it – cats.

This is, it must be admitted, no small thing. The human brain is the neural system of neural systems, with billions of tiny “processors” making trillions of connections – those 16,000 computers at Google X are an Adam Sandler fan compared to your average Joe. One of the things the human brain, with all its manifest limitations, is good at doing is imbibing a YouTube video and deriving meaning from its constituent parts. Without the benefit of smell or touch, when we see a calico feline doing something cute on our laptops, we say, “Awww!” That may be a cultural meltdown – surely our time would be better spent reading Ulysses – but it isn’t a processing problem: the number of neural connections that allow us to consume kitties on the net is beyond staggering.

One of the innovations at work here is a piece of software that lets thousands of computers work together on a single task. The more machines pitch in, the more connections become possible. And the more numerous the connections, the more likely the computer system will start “thinking” – or, at the very least, learning. This wouldn’t be possible if there weren’t vast farms of computers in data centres, all of them getting cheaper as the price of technology drops. The idea of one powerful supercomputer running the world is all but obsolete – sorry about that, HAL. Instead, the future is networked. This isn’t new, per se. But the scale of it is unprecedented.
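For the technically curious, a toy sketch of that divide-the-work idea might look something like the snippet below – plain Python multiprocessing, nothing like Google’s actual plumbing, with a handful of worker processes and made-up “features” standing in for whatever the real system computes.

```python
# A toy sketch of the "many cheap machines" idea: split a pile of images
# across worker processes, have each one extract simple features, and
# collect the results. Standard Python multiprocessing, not Google's
# actual infrastructure; the images and features are invented.
from multiprocessing import Pool

import numpy as np


def extract_features(image):
    """Stand-in 'feature detector': average brightness and rough edge energy."""
    brightness = image.mean()
    edge_energy = np.abs(np.diff(image, axis=0)).mean()
    return brightness, edge_energy


if __name__ == "__main__":
    # Fake 32x32 grayscale thumbnails in place of YouTube frames.
    thumbnails = [np.random.rand(32, 32) for _ in range(1000)]

    with Pool(processes=8) as pool:  # 8 workers instead of 16,000 processors
        features = pool.map(extract_features, thumbnails)

    print(f"Extracted features for {len(features)} thumbnails")
```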

The experiment worked thusly: the computers were force-fed thumbnail images culled from 10 million YouTube videos, all of which were selected at random. According to the New York Times, “The Google brain assembled a dreamlike digital image of a cat by employing a hierarchy of memory locations to successively cull out general features after being exposed to millions of images. The scientists said, however, that it appeared they had developed a cybernetic cousin to what takes place in the brain’s visual cortex.”
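What that description amounts to is unsupervised feature learning: show the network heaps of unlabelled pictures and let it discover recurring structure on its own, no one ever telling it what a cat is. A drastically shrunken sketch of the same idea – a tiny autoencoder in plain numpy, with random noise standing in for the 10 million thumbnails – might look like this:

```python
# A drastically shrunken sketch of unsupervised feature learning: a tiny
# autoencoder that learns to compress and reconstruct its inputs without
# any labels. The real system was a vastly larger network trained on
# 10 million YouTube thumbnails; random noise stands in for the data here.
import numpy as np

rng = np.random.default_rng(0)

# Fake "thumbnails": 200 images of 8x8 pixels, flattened to 64 values each.
X = rng.random((200, 64))

n_hidden = 16                             # 16 "neurons", not a billion connections
W1 = rng.normal(0, 0.1, (64, n_hidden))   # encoder weights
W2 = rng.normal(0, 0.1, (n_hidden, 64))   # decoder weights
lr = 0.01

for epoch in range(500):
    H = np.tanh(X @ W1)                   # hidden activations ("features")
    X_hat = H @ W2                        # reconstruction of the input
    err = X_hat - X

    # Backpropagate the reconstruction error (plain squared-error loss).
    grad_W2 = H.T @ err / len(X)
    grad_H = err @ W2.T * (1 - H ** 2)
    grad_W1 = X.T @ grad_H / len(X)

    W1 -= lr * grad_W1
    W2 -= lr * grad_W2

print("final reconstruction error:", np.mean(err ** 2))
```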

Just like in the brain – or how scientists believe the brain works – individual neurons are “trained” to recognise certain patterns. But Ng cautions against making the leap from the Google X experiment to the human brain. “A loose and frankly awful analogy is that our numerical parameters correspond to synapses,” said Dr Ng, encouragingly. The network is still tiny when compared to the human visual cortex, which is about a million times bigger. Go nature!
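Taking the article’s own numbers at face value – a billion connections on Google’s side, a visual cortex “about a million times bigger” – the gap is easy to tally:

```python
# Back-of-envelope tally using the figures quoted above; no claim is made
# here about actual biological synapse counts.
google_connections = 1_000_000_000   # "more than one billion connections"
scale_gap = 1_000_000                # "about a million times bigger"

implied_cortex_connections = google_connections * scale_gap
print(f"Implied visual-cortex connection count: {implied_cortex_connections:,}")
```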

But, before we imagine all is safe, we’d best remember that when we factor Moore’s Law into the equation, we’re not that far off from simulating the human neural system. Like, 10 years away not that far. But Ng isn’t so sure the math is correct yet – scale the experiment up, and the algorithm comes undone.

So, perhaps we can stave off the end of the world by another decade or so. Two things come to my biological mind. The first is that it has taken us 60 years of steady computing to get to this point – barely a human lifetime since Turing came up with the idea of AI.

The second is that it never fails to dazzle how complex a species we are, and how incredible the supercomputer that runs our individual mainframes is. The only problem is that we’re smart enough to build something to wipe ourselves out.

But maybe, just maybe, Skynet, in the midst of its destructive rage, will pause in flattening the world to laugh its digital ass off at cute kitty pics. I CAN HAS CHEEZBURGER, indeed. DM

Read more:

  • “How many computers to identify a cat”, originally from the New York Times 

Photo: Reuters
