
Google Brain, the team within the company dedicated to developing and improving deep learning systems and artificial intelligence, has seen an unprecedented event: one of its AIs has learned to produce simple encrypted messages on its own.


The interesting thing is that the machine's developers are not entirely sure how it managed to develop this skill. This may unsettle those who are suspicious of artificial intelligence and fear a future more typical of a science-fiction dystopia.

The machines already learn alone
According to the magazine New Scientist, researchers Martin Abadi and David Andersen have shown that neural networks, built from artificial 'neurons' implemented in software, can create their own encryption scheme without being taught basic cryptographic algorithms.

In other words, they can keep secrets even from their creators.

Initially, the three neural networks used in this experiment (Alice, Bob and Eve) were not able to communicate as the researchers wanted. Alice had to convert plain text into an unrecognizable ciphertext before sending it to Bob, so that an eavesdropper (Eve) could not understand it.

With practice, they got better: Alice would send a protected message and Bob would find a way to decipher it correctly. It took them about 15,000 attempts, admittedly, but Eve never managed to understand the messages, guessing right only by pure chance.
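The three roles can be sketched in a toy simulation. To be clear about the assumptions: the real system learned its scheme end-to-end with neural networks, and the researchers have not said it converged on anything like XOR; the one-time-pad-style XOR below is only an illustrative stand-in showing why Bob, who shares the key, decodes perfectly while Eve, who does not, is reduced to coin-flipping:

```python
import random

def alice(plaintext_bits, key_bits):
    # Alice "encrypts" by XOR-ing each plaintext bit with the shared key
    # (illustrative stand-in; the real networks learned their own scheme)
    return [p ^ k for p, k in zip(plaintext_bits, key_bits)]

def bob(cipher_bits, key_bits):
    # Bob shares the key, so he inverts the XOR exactly
    return [c ^ k for c, k in zip(cipher_bits, key_bits)]

def eve(cipher_bits):
    # Eve has no key; without one, her best strategy here is a random
    # guess for each bit, which is right about half the time
    return [random.randint(0, 1) for _ in cipher_bits]

random.seed(0)
n = 10_000
plaintext = [random.randint(0, 1) for _ in range(n)]
key = [random.randint(0, 1) for _ in range(n)]

cipher = alice(plaintext, key)
bob_accuracy = sum(b == p for b, p in zip(bob(cipher, key), plaintext)) / n
eve_accuracy = sum(e == p for e, p in zip(eve(cipher), plaintext)) / n
print(bob_accuracy)  # 1.0
print(eve_accuracy)  # close to 0.5, i.e. chance level
```

This mirrors the outcome reported in the article: the legitimate receiver recovers the message reliably, while the eavesdropper performs no better than chance.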

For those who are worried: the encryption the machines used is very, very simplistic, and any human with some knowledge of the field could intercept and decrypt the messages with ease. That said, the researchers behind the project still do not know exactly how Alice and Bob developed their encryption capabilities. This is not necessarily a bad thing, and it may have applications in the security of computing devices in the future.
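To illustrate how fragile a simplistic cipher is, consider a classic Caesar shift. This is a textbook example, not the scheme the networks actually learned; the point is that a cipher with a tiny key space falls to a few lines of brute force:

```python
import string

def caesar_encrypt(message, shift):
    # Shift each letter by a fixed amount, wrapping around the alphabet;
    # non-letter characters pass through unchanged
    out = []
    for ch in message.lower():
        if ch in string.ascii_lowercase:
            idx = (string.ascii_lowercase.index(ch) + shift) % 26
            out.append(string.ascii_lowercase[idx])
        else:
            out.append(ch)
    return "".join(out)

def brute_force(ciphertext):
    # Only 26 possible shifts exist, so an attacker simply tries them all
    # and picks out the candidate that reads as plain language
    return [caesar_encrypt(ciphertext, -s) for s in range(26)]

cipher = caesar_encrypt("attack at dawn", 3)
candidates = brute_force(cipher)
print(cipher)         # "dwwdfn dw gdzq"
print(candidates[3])  # "attack at dawn"
```

Any scheme that can be exhausted by hand like this offers no real secrecy, which is why the researchers stress that the networks' achievement is conceptual rather than practical.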

Doomsayers such as Bill Gates, Mark Zuckerberg, Stephen Hawking and Elon Musk will see this as a potential danger. Others view artificial intelligence as an opportunity to be creative.