AI, soon a colleague like any other

Guillaume Buisson is a radiologist in the Lyon region. The IMVOC group with which he is associated has used artificial intelligence to help practitioners detect certain pathologies. This breast cancer specialist is convinced that he made the right choice and that, in the future, “radiologists will not be able to do without AI” because it allows for “a constant level of diagnosis”. In other words, a practitioner will be able to rely all the more on this tool in a specialty that is not his own.

“We thought the big bad wolf was going to eat us, but in fact AI is rather our watchdog,” he says, considering that he and the AI form “a good duo that benefits the patient”, one that has even enabled him to slightly increase the number of pathologies detected.

In a completely different field of imaging, traumatology, AI is not far from being autonomous, he adds: it almost never misses a fracture when analyzing an X-ray, although it does sometimes see lesions that do not exist. “AI is simply becoming part of the everyday practice of radiology,” he sums up.

A demographic necessity

For Philippe Coucke, a radiotherapist and the author of two books on AI and medicine, the violent jolt this technology will cause in the world of health can also be beneficial given the demographic context: “The assistance of machines must be seen as an opportunity as the population ages and the number of caregivers decreases.” All the more so since it is made essential by the fact that medicine “continues to become more complex and more specialized”, he adds.

Luca Maron, an Italian author of numerous books on AI, including “AI for Dummies”, goes further, arguing that this demographic issue extends beyond the world of health. He bets that, unlike the autonomous car, whose fantasy caused a stir but did not correspond to any major economic need, “AI responds to a real need given the downward trend in the working population in our countries”. It will spread like wildfire, because our future is necessarily written with it.

Artificial intelligence, a chance for workers? Certainly, this rapidly advancing technology has the potential to wipe out millions of jobs (see our survey of May 24). But provided it is used intelligently, it also carries the seeds of real opportunities.

Social equality

Beyond the demographic issue, it could have unexpected social repercussions. Unlike other technologies, whose impact was concentrated on low value-added tasks, generative AI can benefit all workers. Vinciane Beauchene, associate director at the Boston Consulting Group, believes that generative AIs such as ChatGPT could give a boost to relatively low-skilled workers, such as “this self-employed person who is not very comfortable with English or writing”.

“Generative AI gives everyone access to a common base of knowledge,” she summarizes. “Perhaps the big names in medicine or the laboratory researcher specializing in molecules will find themselves more challenged than the nurse, whose human qualities are impossible to replace,” wonders Vinciane Beauchene. Technology at the service of greater social equality?

When asked about the consequences of AI on jobs, most economists prefer to avoid getting out their crystal ball and opt instead for a look in the rear-view mirror. History, they point out, shows that innovations have often created more jobs than they have destroyed. The authors of a Goldman Sachs study putting forward the figure of 300 million jobs lost refuse to give in to pessimism and cite work by the economist David Autor, according to which 60% of workers today hold jobs that did not exist in 1940. Nothing says AI will not be the exception that proves the rule. But this look at history can sharpen our gaze: it shows that innovations often lead to the creation of hitherto unsuspected tasks. What might these new missions entrusted to humans look like?

To sketch a first answer to this question, there is nothing like examining the limits of the technology itself. The first, mentioned by all specialists, has a surprising name: hallucinations. This is what the mistakes made by AIs have been called. The phenomenon is frequent and understandable when one knows that these systems essentially draw their “knowledge” from an ocean of data, the web, where the best and the worst coexist.

Those who say that we must ban ChatGPT from school are wrong: on the contrary, we must learn to use it, like any tool, by knowing its strengths and its shortcomings.

According to a recent study conducted at the University of Hong Kong, only 63% of the claims generated by ChatGPT turned out to be correct. For Luc Julia, one of the leading French specialists in AI, it is therefore essential that humans learn to use the tool with the necessary hindsight. “Those who say we must ban ChatGPT from school are seriously mistaken: on the contrary, you have to learn to use it, like any tool, by knowing its strengths and its shortcomings,” he says.

Blocking “fake news”

Faced with a technology capable of producing content of uncertain quality very quickly, the work of verification will prove crucial. Take finance: Yann Magnan, founder of the company 73 Strings, considers it “extremely complicated to audit data produced by artificial intelligence”. Ideal for quickly producing a relevant overview, AI is a “precious tool, but one that you cannot rely on with your eyes closed”, especially in a world where a numerical error can have catastrophic consequences, including legal ones.

Faced with investors or a stock market regulator, it is better to trust human brains – or possibly an AI trained exclusively on a corpus of totally irreproachable data, like the one the Bloomberg agency is developing. The same vigilance is required in legal functions, in the press and in matters of copyright. If the time devoted to producing content is set to shrink, the time dedicated to fact-checking texts – or AI-generated images – can only grow. Otherwise, the proliferation of “fake news” observed in recent years will have been only a foretaste of what awaits us.

Cécile Dejoux, speaker and author of the book “Ce sera l’IA ou/et moi”, therefore believes that it will be necessary to “develop a critical spirit in the face of AI”. The corpora on which these systems are trained “have their own ideological biases, just as Chinese AIs will have theirs”, she predicts, calling for wariness of these “black boxes” whose very opacity “is contrary to the essence of the scientific method”. Black boxes that also worry some companies when it comes to the confidentiality of the data entrusted to them.

Frédéric Messian runs the company Lonsdale, which helps its clients to “define or redefine the uniqueness of their brand”. He is happy to see AI speed up certain functions within his company but also sets limits: his clients sometimes prefer to avoid using generative AI in their strategic thinking, for fear that their requests will fall into the hands of the competition.

The risk of standardization

The same Frédéric Messian notes another limitation of AI. “Generative AIs are probabilistic systems that do not produce disruption, while our customers want to be different from each other,” he adds.

Luca Maron agrees: “If you rely on these technologies to define your product, you can be sure that it will be highly standardized.” Especially since the machine learns from your own habits, which risks accentuating the “cognitive bubbles” that social networks have already highlighted: your AI may show you what you like rather than what surprises you.

At the Boston Consulting Group, the situation is summed up as follows: certainly, the creation of content will be prodigiously accelerated. But the tasks upstream and downstream of it will demand all the more care. Downstream: checking the facts, broadening the spectrum of reflection to add a touch of originality and uniqueness, ensuring legal solidity. And upstream? Knowing how to handle the tool. Thus begins to emerge the science of the “prompt”, the English term for the way one addresses AIs.

At Lonsdale, the importance of this discipline was quickly recognized, and training was organized for all employees. Frédéric Messian affirms that with “a well-formulated prompt”, it is possible “to do in two or three hours what was done in one day until now”. Writing a query well, knowing how to refine it and even feeding an AI with relevant, well-prepared data: from now on, anyone who knows how to whisper in the ear of AIs will be valuable on the job market.
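To make this notion of a “well-formulated prompt” concrete, here is a minimal sketch of how such a query might be structured as a reusable template. The four fields (role, context, task, output format) are illustrative assumptions commonly taught in prompt training, not a description of Lonsdale’s actual method.

```python
# A minimal sketch of a structured prompt template.
# The field names (role, context, task, output_format) are
# illustrative assumptions, not an actual documented methodology.

def build_prompt(role: str, context: str, task: str, output_format: str) -> str:
    """Assemble the four parts into a single structured prompt string."""
    return (
        f"You are {role}.\n"
        f"Context: {context}\n"
        f"Task: {task}\n"
        f"Answer format: {output_format}"
    )

# Hypothetical example in the spirit of a branding agency's work
prompt = build_prompt(
    role="a brand strategist",
    context="a mid-sized retailer repositioning its brand",
    task="list three differentiating brand values",
    output_format="a numbered list with one sentence each",
)
print(prompt)
```

Refining the query then simply means editing one field at a time (a narrower task, a stricter output format) rather than rewriting the whole prompt from scratch.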

Tomorrow the centaurs

The future therefore belongs to half-human, half-machine pairings, dubbed “centaurs” by some AI experts. For Éric de la Clergerie, director of R&D on natural language processing at Bluenove, it is essential to move towards a world where, in this pair, “the human remains capable of directing the AI, which presupposes real expertise, failing which we could run into serious problems”.

Certainly, “we must not anthropomorphize these tools, which have neither real reasoning, nor desire, nor will, and are far from seeking to take power”. But, adds the expert, “they are stupid enough to do stupid things if they are given more autonomy than necessary”. For him, it is therefore reassuring to see Europe seeking to establish a regulatory framework.

AI creates answers while only humans know how to find the questions because they have the sensors that allow them to understand the contexts

Cécile Dejoux

In the emerging environment, summarizes Luc Julia, “it will be complicated to replace humans, because they are versatile, while AIs are tools intended to be more efficient than us at certain very specific tasks”. AI, adds Cécile Dejoux, “creates answers, while only humans know how to find the questions, because they have the sensors that allow them to understand contexts”. And only humans know which solution is “not the best on the rational level, but the most acceptable on the political level”, the professor observes.

The time when machines will replace humans may not have come (yet). But in this rapidly changing environment, warns Vinciane Beauchene of the Boston Consulting Group, “everyone will be forced to train throughout their careers, and companies will have to invest heavily in developing employee skills”. Her conclusion? “In the short and medium term, things will still be shaken up.”
