Nothing, it seems, can slow the development of artificial intelligence. Use cases are multiplying in our daily lives, and researchers are building ever larger models. But will they ever be “smart”?
The rise of AI is undoubtedly the defining phenomenon of the 2010-2020 decade. Building on concepts born in the 1950s (!), researchers, and now companies, are multiplying the use cases.
By 2040, it will be ubiquitous in our daily lives: “In the coming years, learning algorithms will be applied to many different fields,” explains Luc Julia, Chief Technology Officer and Senior Vice-President of the Samsung Strategy & Innovation Center (SSIC).
“All the data that industry did not keep until now will help machines understand new areas and predict changes. This is how predictive systems will spread through industry and the health sector. We will also use new kinds of data: DNA, for example, is an extraordinary statistical field and may well be better exploited in the future. Likewise in transport, the level 5 autonomous car will never exist, but level 4 will already drastically reduce the number of road accidents,” he adds.
Paradoxically, this business success of AI rests on long-established algorithms. Neural networks were imagined in 1956 and, after a long spell in the wilderness, artificial intelligence is now firmly established in companies. Hype aside, the results obtained remain very impressive, believes Marc Duranton, a researcher at CEA List.
“The real boom in AI took place in 2012, with SuperVision, the network that sharply reduced the error rate on image recognition in the ImageNet benchmark. The error rate went from 25% to 15% and, a few years later, we are down to less than 1%, compared with 5% for humans. On this task, neural networks already achieve superhuman results.”
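To make the idea concrete, here is a minimal sketch of supervised image recognition. It uses scikit-learn's small handwritten-digit dataset as a stand-in for ImageNet; the dataset, model, and resulting numbers are illustrative only and have nothing to do with the SuperVision network itself.

```python
# Minimal supervised image classification: train on human-labelled images,
# then measure the error rate on images the model has never seen.
# The small scikit-learn "digits" dataset stands in for ImageNet.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

images, labels = load_digits(return_X_y=True)          # 8x8 images, labels 0-9
X_train, X_test, y_train, y_test = train_test_split(
    images, labels, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=2000)               # simple linear classifier
model.fit(X_train, y_train)                             # learn from annotated data

error_rate = 1.0 - model.score(X_test, y_test)          # fraction of misclassified images
print(f"Error rate on unseen images: {error_rate:.1%}")
```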
This supervised learning, that is, learning from data annotated by humans, has its limits, because it is not possible to annotate all the data manually. Other types of learning are now gaining momentum, such as reinforcement learning, in which the machine is given only a cost function: it then iterates on its own, generating endless trials to do better. This approach allowed Google DeepMind to develop AlphaGo Zero… and to send humans back to their sorry condition.
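That principle, a reward signal plus endless self-generated trials, can be sketched with tabular Q-learning on a hypothetical toy environment invented here purely for illustration (AlphaGo Zero itself combines self-play with deep networks and tree search, which is far beyond this sketch).

```python
import random

# Toy reinforcement learning: the agent is given only a reward signal
# (the "cost function") and improves by generating its own trials.
# Environment: a corridor of 6 cells; start at cell 0, goal at cell 5.
N_STATES, GOAL = 6, 5
ACTIONS = [-1, +1]                        # step left or step right

def step(state, action):
    nxt = min(max(state + action, 0), GOAL)
    reward = 10.0 if nxt == GOAL else -1.0   # cost of wandering, bonus at the goal
    return nxt, reward, nxt == GOAL

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1     # learning rate, discount, exploration

for episode in range(500):                # the machine iterates on its own
    state, done = 0, False
    while not done:
        # epsilon-greedy: mostly exploit what was learned, sometimes explore
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward, done = step(state, action)
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = nxt

# After training, the greedy policy heads straight for the goal (+1 everywhere).
print([max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)])
```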
Another approach gaining momentum is the “Transformer,” whose most famous example today is GPT-3, the model created by OpenAI. “With 175 billion parameters, it is a very large network capable of delivering quite interesting results,” explains the researcher, who nevertheless tempers the enthusiasm around an AI that seems to have an answer for everything: “GPT-3 can deliver remarkable answers to simple questions, but also ridiculous answers to absurd questions. Transformer networks remain very effective at completing sentences, predicting what comes next through self-supervised learning.”
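GPT-3 itself is only reachable through OpenAI's API, but the same sentence-completion behaviour can be tried locally with a much smaller open model. A sketch, assuming the Hugging Face transformers library and the openly available GPT-2 model:

```python
# Sentence completion with a small open Transformer model (GPT-2),
# standing in here for GPT-3, which is only available through OpenAI's API.
# Requires: pip install transformers torch
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Artificial intelligence will change industry because"
completions = generator(prompt, max_new_tokens=30,
                        do_sample=True, num_return_sequences=2)
for c in completions:
    print(c["generated_text"])
```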
Well suited to NLP (Natural Language Processing), self-supervised learning lets the machine predict what is missing in a sequence. On this principle, GPT-3 was trained on Wikipedia, on a corpus of books, and so on, and it can now complete a sentence. But this AI has no understanding of the world, as in the Isaac Asimov short story in which the machine is asked: “If a hen and a half lays an egg and a half in a day and a half, how many eggs will nine hens lay in nine days?”
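The heart of this self-supervised setup, predicting what comes next from the raw text alone with no human annotation, can be illustrated at toy scale with a simple count-based next-word model. It is nothing like GPT-3's Transformer, just the principle, run on a made-up three-sentence corpus:

```python
from collections import Counter, defaultdict

# Self-supervised learning in miniature: the "labels" are simply the next
# words of the raw text itself, so no human annotation is needed.
corpus = (
    "the hen lays an egg every day . "
    "the farmer collects the egg every morning . "
    "the hen sleeps every night ."
).split()

# Count, for each word, which words tend to follow it (a bigram model).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def complete(prompt, length=5):
    """Greedily extend the prompt with the most frequent next word."""
    words = prompt.split()
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(complete("the hen"))   # completes the sentence from statistics alone
```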
The AI remains incapable of detecting the absurdity of the question. But will an AI one day have a genuine understanding of the world, and therefore a form of consciousness? Researchers remain deeply divided on the question.
In his book “Architects of Intelligence: The Truth About AI from the People Building It,” Martin Ford interviewed 23 international artificial intelligence experts on precisely this question.
For 16 of them, such an AI should not appear until 2099.
Only the famous futurist Raymond Kurzweil, currently a director of engineering at Google, estimates that such an AI could exist by 2029.
Until then, new technologies could reshuffle the deck for artificial intelligence. This is the case for neuromorphic components, designed specifically to run and accelerate neural networks, and for quantum computing.
“This new approach to computing is, in a way, akin to our brain and to artificial intelligence. Perhaps artificial intelligence will emerge from this quantum approach, but for now nothing is certain, and it will not be straightforward to implement. Another technology that we do not control today is biology. Recreating something that resembles our brain is a bit like Frankenstein’s dream, and the ethical issues will be far more critical than those that computer-based AI poses to us today. We only understand 20 to 40% of our brain, and it will be difficult to imitate. But this is a path that will perhaps materialize.”