After Davos: From AI hype to reality
AI was a major topic at Davos. As reported by Fortune, over two dozen sessions at the event focused directly on AI, with topics ranging from AI in education to AI regulation.
A who's who of AI was in attendance, including OpenAI CEO Sam Altman, Inflection AI CEO Mustafa Suleyman, AI pioneer Andrew Ng, Meta chief AI scientist Yann LeCun and Cohere CEO Aidan Gomez.
From wonder to pragmatism
Last year at Davos, the conversation was rife with speculation based on the newly released ChatGPT. This year was more temperate.
“Last year, the conversation was ‘Gee whiz,’” Chris Padilla, IBM’s VP of government and regulatory affairs, said in an interview with The Washington Post. “Now, it’s ‘What are the risks? What can we do to make AI more trustworthy?’”
Misinformation, job displacement and a widening economic gap between wealthy and poor nations were among the concerns discussed at Davos.
The most talked-about AI risk at Davos was misinformation and deception, often in the form of deepfake photos, videos and voice clones that can further muddy reality and undermine trust. A recent example was the robocalls sent out before the New Hampshire presidential primary that used a voice clone apparently impersonating President Joe Biden, in what looked like an attempt to suppress votes.
AI-enabled deepfakes spread false information by making someone appear to say something they did not. In one interview, Carnegie Mellon University professor Kathleen Carley said: “This is kind of just the tip of the iceberg in what could be done with respect to voter suppression or attacks on election workers.”
Enterprise AI consultant Reuven Cohen also recently told VentureBeat that with new AI tools, we should expect a flood of deepfake audio, images and video just in time for the 2024 election.
Despite a considerable amount of effort, a foolproof method to detect deepfakes has not been found. As Jeremy Kahn observed in a Fortune article: “We better find a solution soon. Distrust is insidious and corrosive to democracy and society.”
AI mood swing
This mood swing from 2023 to 2024 led Suleyman to write in Foreign Affairs that a “cold war strategy” is needed to contain threats made possible by the proliferation of AI. He said that foundational technologies such as AI always become cheaper and easier to use, permeating all levels of society and lending themselves to all manner of uses, both positive and harmful.
“When hostile governments, fringe political parties and lone actors can create and broadcast material that is indistinguishable from reality, they will be able to sow chaos, and the verification tools designed to stop them may well be outpaced by the generative systems.”
Concerns about AI date back decades, initially and best popularized in the 1968 movie “2001: A Space Odyssey.” There has since been a steady stream of worries and concerns, including over the Furby, a wildly popular cyber pet in the late 1990s. The Washington Post reported in 1999 that the National Security Agency (NSA) banned these from its premises over concerns that they could serve as listening devices that might divulge national security information. Recently released NSA documents from this period discussed the toy’s ability to “learn” using an “artificial intelligent chip onboard.”
Contemplating AI’s future trajectory
Worries about AI have recently become acute as more AI experts claim that Artificial General Intelligence (AGI) could be achieved soon. While the exact definition of AGI remains vague, it is thought to be the point at which AI becomes smarter and more capable than a college-educated human across a broad spectrum of activities.
Altman has said that he believes AGI might not be far from becoming a reality and could be developed in the “reasonably close-ish future.” Gomez reinforced this view: “I think we will have that technology quite soon.”
Not everyone agrees on an aggressive AGI timeline, however. For example, LeCun is skeptical about an imminent AGI arrival. He recently told Spanish outlet EL PAÍS that “Human-level AI is not just around the corner. This is going to take a long time. And it’s going to require new scientific breakthroughs that we don’t know of yet.”
Public perception and the path forward
Uncertainty about the future course of AI technology remains. In the 2024 Edelman Trust Barometer, which launched at Davos, global respondents were split between rejecting (35%) and accepting (30%) AI. People recognize the impressive potential of AI, but also its attendant risks. According to the report, people are more likely to embrace AI — and other innovations — if it is vetted by scientists and ethicists, if they feel they have control over how it affects their lives and if they believe it will bring them a better future.
It is tempting to rush towards solutions to “contain” the technology, as Suleyman suggests, although it is useful to recall Amara’s Law as defined by Roy Amara, past president of The Institute for the Future. He said: “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.”
While enormous amounts of experimentation and early adoption are now underway, widespread success is not assured. As Rumman Chowdhury, CEO and cofounder of AI-testing nonprofit Humane Intelligence, stated: “We will hit the trough of disillusionment in 2024. We’re going to realize that this actually isn’t this earth-shattering technology that we’ve been made to believe it is.”
2024 may be the year that we find out how earth-shattering it is. In the meantime, most people and companies are learning about how best to harness generative AI for personal or business benefit.
Accenture CEO Julie Sweet said in an interview that: “We’re still in a land where everyone’s super excited about the tech and not connecting to the value.” The consulting firm is now conducting workshops for C-suite leaders to learn about the technology as a critical step towards achieving the potential and moving from use case to value.
Thus, both the benefits and the most harmful impacts of AI (and AGI) may be imminent, but not necessarily immediate. In navigating the intricate landscape of AI, we stand at a crossroads where prudent stewardship and innovative spirit can steer us towards a future in which AI technology amplifies human potential without sacrificing our collective integrity and values. It is up to us to harness our collective courage to envision and design a future where AI serves humanity, not the other way around.
Gary Grossman is EVP of technology practice at Edelman and global lead of the Edelman AI Center of Excellence.