Artificial Intelligence – Could It Be As Cool As This?

As you might imagine, crunching through enormous datasets to extract patterns requires plenty of computer processing power. In the 1960s, researchers simply did not have machines powerful enough to do it, which explains why that boom failed. By the 1980s the computers were powerful enough, but researchers discovered that machines only learn effectively if the amount of data being fed to them is big enough, and they were not able to source datasets large enough to give the machines.

Then came the web. Not only did it solve the computing problem once and for all through the innovations of cloud computing – which essentially lets us access as many processors as we need at the click of a mouse – but people on the internet now generate more data every day than was ever created in the entire previous history of planet earth. The quantity of data being produced on a constant basis is totally mind-boggling.

What this means for machine learning is significant: we now have more than enough data to truly start training our machines. Consider the quantity of photos on Facebook and you begin to understand why their facial recognition technology is so accurate. There is no major barrier (that we are currently aware of) preventing A.I. from achieving its potential. We are only just starting to work out what we can do with it.

But what happens when computers can think for themselves? There is a famous scene from the movie 2001: A Space Odyssey in which Dave, the main character, slowly disables the artificial intelligence mainframe (called “Hal”) after it has malfunctioned and decided to try to kill all the humans on the spacecraft it was meant to be running. Hal, the A.I., protests Dave’s actions and eerily proclaims that it is afraid of dying.

This movie illustrates one of the big fears surrounding A.I. in general, namely what will happen when computers start to think for themselves instead of being controlled by humans. The fear is a valid one: we are already working with machine learning constructs called neural networks whose structures are modelled on the neurons in the brain. With neural nets, data is fed in and then processed through a vastly complex network of interconnected points that build connections between concepts, in much the same way as associative human memory does. As a result, computers are slowly starting to build up a library of not just patterns, but also concepts, which ultimately leads to the basic foundations of understanding instead of just recognition.
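
To make the idea of “interconnected points” a little more concrete, here is a minimal sketch of a neural network in Python, using only NumPy. The layer sizes, the learning rate and the toy XOR dataset are illustrative assumptions for the sketch, not details of any real system mentioned in this article.

```python
# A minimal sketch of a neural network: "interconnected points" (neurons)
# whose connection strengths (weights) are adjusted during training.
# Layer sizes, learning rate and the toy XOR data are illustrative
# assumptions, not details of any system mentioned in the article.
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: the classic XOR problem, unlearnable without a hidden layer.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Connection weights and biases, initialised randomly / at zero.
W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))  # input  -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))  # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    # Forward pass: signals flow through the network of connections.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # Backward pass: nudge every connection to reduce the prediction error.
    grad_out = (output - y) * output * (1 - output)
    grad_hid = (grad_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ grad_out
    b2 -= 0.5 * grad_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * X.T @ grad_hid
    b1 -= 0.5 * grad_hid.sum(axis=0, keepdims=True)

print(output.round(2))  # after training: close to [[0], [1], [1], [0]]
```

The point to notice is that nothing here is explicitly programmed to solve the problem: the network starts with random connections and gradually strengthens or weakens them until the right behaviour emerges, loosely echoing the way associative memory builds connections.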

Imagine you are looking at an image of somebody’s face. When you first view the photo, a lot of things happen in the brain: first, you recognise that it is a human face. Next, you might recognise whether it is male or female, young or old, black or white, and so on. You will also make a quick decision about whether you recognise the face, though sometimes the recognition requires deeper thinking depending on how often you have been exposed to this particular face (the experience of recognising a person but not knowing straight away from where). All of this happens pretty much instantly, and computers are already able to do all of this too, at almost the same speed. For example, Facebook can not only identify faces, but can also tell you who the face belongs to, if said person is also on Facebook. Google has technology that can identify the race, age and other characteristics of a person based just on a picture of their face. We have come a long way since the 1950s.
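
As a rough illustration of just the first step in that pipeline – deciding that an image contains a face at all – here is a short Python sketch using OpenCV’s bundled Haar-cascade detector. The image filename and the tuning parameters are illustrative assumptions; this is a generic, off-the-shelf detector, not the Facebook or Google systems described above.

```python
# A sketch of automated face *detection* (the first step of the pipeline
# described above) using OpenCV's bundled Haar-cascade model. The image
# path and tuning parameters are illustrative assumptions.
import cv2

# Load the pre-trained frontal-face cascade that ships with OpenCV.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

image = cv2.imread("photo.jpg")  # hypothetical input image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Scan the image at multiple scales for face-like patterns.
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# Draw a box around each detected face and save the result.
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

print(f"Found {len(faces)} face(s)")
cv2.imwrite("photo_with_faces.jpg", image)
```

Working out who a detected face belongs to, as Facebook does, is a separate and harder step, typically done by comparing the face against a database of known faces.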

But true A.I. – which is called Artificial General Intelligence (AGI), in which the machine is as advanced as a human brain – is a long way off. Machines can recognise faces, but they still don’t actually know what a face is. For example, you can look at a human face and infer many things that are drawn from a hugely complicated mesh of various memories, learnings and feelings. You might look at a picture of a woman and guess that she is a mother, which in turn might lead you to assume that she is selfless, or indeed the opposite, depending on your own experiences of mothers and motherhood. A man might look at the same photo and find the woman attractive, which will lead him to make positive assumptions about her personality (confirmation bias again), or conversely notice that she resembles a crazy ex-girlfriend, which will irrationally make him feel negatively towards her. These richly varied but often illogical thoughts and experiences are what drive humans to the various behaviours – good and bad – that characterise our race. Desperation often leads to innovation, fear leads to aggression, and so on.

For computers to truly be dangerous, they would need some of these emotional compulsions, but this is a very rich, complex and multi-layered tapestry of concepts that is extremely hard to train a computer on, no matter how advanced neural networks may be. We will get there one day, but there is plenty of time to make sure that, when computers do achieve AGI, we will still be able to switch them off if needed.

In the meantime, the advances currently being made are finding more and more useful applications in the human world. Driverless cars, instant translations, A.I. smartphone assistants, websites that design themselves! All of these advancements are meant to make our everyday lives better, and so we should not be afraid of, but rather excited about, our artificially intelligent future.