Lyse Langlois, CEO and executive director of the International Observatory on the Societal Impacts of AI and Digital Technologies, known as Obvia, poses during an interview with The Korea Times at its newsroom in Seoul, Tuesday. Korea Times photo by Shim Hyun-chul
Canada is the birthplace of several pioneers in artificial intelligence (AI). Nobel laureate Geoffrey Hinton, a computer and cognitive scientist known for his breakthrough work on artificial neural networks, which earned him the nickname "the godfather of AI," is from Canada. Other influential figures such as Yoshua Bengio, a leading researcher in deep learning, and Joelle Pineau, a prominent computer scientist and professor at McGill University, are also Canadian.
Despite this impressive talent pool, Canada has remained cautious about the widespread adoption of AI in everyday life.
In 2018, Canada launched the International Observatory on the Societal Impacts of AI and Digital Technologies, known as Obvia, with funding from the province of Quebec. Obvia is a research network designed to study the social implications of AI and provide guidance to policymakers. Notably, it was established years before ChatGPT made its global debut in November 2022, a sign of Canada's far-sighted, cautious approach to AI.
"Our founding vision has remained unchanged," Lyse Langlois, CEO and executive director of Obvia and professor at Laval University in Quebec, said during a recent interview with The Korea Times at its newsroom in Seoul. "Developing AI is not enough. We must ensure that it truly benefits society."
Since its inception, the state-funded organization has collaborated with 300 academics from 12 Canadian universities, as well as partners from public, private and community organizations. Its experts come from diverse fields including law, ethics, social sciences, engineering, economics and health.
"What matters is the direction society chooses. Obvia's research shows that the future of work depends largely on the political, economic and societal decisions we make today," Langlois noted.
Langlois is currently in Korea as a visiting scholar at Korea University. Since her arrival in Seoul in September, Obvia has signed a partnership agreement with the university to strengthen scientific cooperation in data protection, digital security and AI governance.
Among its many priorities, Langlois said, Obvia places a strong emphasis on building public awareness about the responsible use of AI. "Since its creation, Obvia has made public awareness, particularly among younger generations, a priority in an effort to foster responsible uses of artificial intelligence," she said. "From the outset, we have aimed to draw attention to the social impacts of AI as well as the limitations of current systems, emphasizing that these technologies are neither neutral nor infallible."
Canada's cautious approach to AI is in some ways comparable to that of South Korea, a country that is often described as an early adopter of new technologies.
South Korea ranks first globally in ChatGPT smartphone app user growth rates and has one of the highest proportions of paid subscribers. It is also the world's second-largest contributor to ChatGPT's revenue, following the United States. Combined with strong digital infrastructure, Korea's experimental, tech-savvy and trend-sensitive consumers have long made the country a global test bed for various industries.
The Korean government has responded to the public's enthusiasm by setting an ambitious goal to make AI the nation's next growth engine. It aims to place Korea among the world's top three AI powerhouses, alongside the U.S. and China.
President Lee Jae Myung underscored this ambition during his recent meeting with SoftBank founder and CEO Masayoshi Son at the presidential office.
Lee described what he called an "AI-based society" as a central goal of his presidential term. "It refers to a society where all individuals, private companies, public institutions and other groups adopt AI at least at a basic level," he said. "The Korean public understands both the risks and the usefulness of AI. Therefore, we are trying to invest in technologies that minimize these risks while maximizing its benefits."
In response, Son urged Lee to embrace artificial superintelligence (ASI), a hypothetical software-based form of AI with intellectual capabilities beyond those of humans, when shaping Korea's AI policies. He drew parallels to his past recommendations to Korean leaders, including a meeting with President Kim Dae-jung where he emphasized the importance of broadband, and a conversation with President Moon Jae-in during which they spoke about AI. Now, in his meeting with President Lee, Son highlighted ASI as key to Korea's future.
The arrival of AI has sparked both excitement and concern. Some celebrate AI as a game changer that will reshape future society, while others fear it will isolate humans, particularly in the workplace. A common worry is that many tasks currently performed by people will be automated, leading to large-scale job losses.
However, Obvia's research indicates that these fears are largely overstated.
"AI is already transforming many industries, but the idea of a widespread replacement of humans does not reflect what research is showing," Langlois said. "At Obvia, we observe that AI mostly automates specific tasks, often repetitive or low value-added, rather than entire jobs. The most likely trajectory is, therefore, a redistribution of roles, where AI supports workers instead of replacing them."
She emphasized that there are areas in which humans still clearly outperform AI.
"Findings from Obvia's impact assessments highlight several human capabilities that remain difficult to replicate. Jobs in education, healthcare, social work or justice rely on listening, contextual judgment and emotional understanding, dimensions AI cannot reproduce," she said. "AI can generate content, but the ability to interpret, make meaning and create something genuinely new in dialogue with society remains uniquely human."
The same applies to the news industry, she noted. Like other fields, newsrooms will undoubtedly undergo significant changes.
"To begin with, AI can automate certain newsroom tasks. For example, it can generate visuals, draft content and produce routine updates such as sports scores, weather reports or financial summaries," she said. "This could free up time for journalists to focus on higher-value work, allowing them more time for investigation, analysis and fact-checking."
Despite these benefits, Langlois warned that new risks require careful attention from journalists to ensure they do not materialize.
Misinformation, the reinforcement of stereotypes and factual inaccuracies are some of the risks that may arise when AI is involved in content production. As such, journalists must thoroughly double-check all materials produced by themselves and their news organizations.
"Humans must remain responsible for what is published. They must verify facts, provide context and exercise ethical vigilance," she said.
