AI for communicators: What’s new and what’s next
Including big regulatory moves.
The White House took its first baby steps toward regulating artificial intelligence. Meanwhile, tools continue to evolve in ways that create completely unforeseen reputational challenges, as was the case for The Guardian this week.
Let’s catch up on all this evolution and how it will affect your communications practice.
Regulation watch
A recent executive order from President Joe Biden signifies one of the American government’s first stabs at regulating AI. While Biden’s powers are limited – Congress will be the one to implement meaningful guardrails for the technology – the executive order is still an important move.
The order’s provisions call for establishing rules for government agencies’ use of AI, addressing potential national security risks and more, but stop short of creating enforceable industry standards.
Still, the executive order risks presenting an ambitious vision for the future of AI without sufficient power to bring about an industry-wide shift, Sarah Kreps, professor of government and director of the Tech Policy Institute at Cornell University, said in a statement.
“The new executive order strikes the right tone by recognizing both the promise and perils of AI,” Kreps said. “What’s missing is an enforcement and implementation mechanism. It’s calling for a lot of action that’s not likely to receive a response.”
Vice President Kamala Harris kept the regulatory conversation going during a speech in London Wednesday, again calling on Congress to pass rules governing AI beyond what Biden’s executive order puts in place.
The New York Times quotes Harris discussing the current perils of AI:
“When a senior is kicked off his health care plan because of a faulty A.I. algorithm, is that not existential for him? When a woman is threatened by an abusive partner with explicit deep fake photographs, is that not existential for her? When a young father is wrongfully imprisoned because of biased A.I. facial recognition, is that not existential for his family?”
On an international level, there appears to be a slight thaw in tensions between the U.S. and China when it comes to tackling regulations cooperatively. CNBC reports that Wu Zhaohui, China’s vice minister of science and technology, said his nation would participate in an “international mechanism [on AI], broadening participation, and a governance framework based on wide consensus delivering benefits to the people, and building a community with a shared future for mankind.”
We’ll see how this all plays out in practice.
New risks
These regulations all express some level of governmental concern over the expanding capabilities of AI, and several news articles illuminate the risks of using these tools – some of which could be solved by regulation, some of which won’t.
The Guardian demanded answers from Microsoft after an AI-generated poll asking readers to speculate about a woman’s cause of death ran alongside an article in a news aggregator app. Readers blamed The Guardian, though the fault lay with Microsoft’s automated tool.
Among the demands made by The Guardian chief executive Anna Bateson, according to the outlet:
Bateson asked for assurances from Smith that: Microsoft will not apply experimental AI technology on or alongside Guardian journalism without the news publisher’s approval; and Microsoft will always make it clear to users when AI tools are used to create additional units and features next to trusted news brands like the Guardian.
More issues between the news industry and AI are brewing. CNN reported that the News Media Alliance says major AI developers, including Google and OpenAI, have scraped information from copyrighted material, including news articles. The companies have not entered licensing agreements with the outlets or offered compensation, the News Media Alliance says.
It’s likely these issues will be hashed out in the courts, with staggering implications for the future of both industries.
The Washington Post has flagged another problem with how these AI models are trained: Because of the material they’re trained on, they can present a whitewashed, Euro- and American-centric version of the world, where beautiful people are all pale and white, all Muslim men wear turbans and houses in Mumbai are dirt buildings on dirt roads.
Remember: These tools are in their early stages. You must give them oversight, sensitivity and guidance.
New tools
But it isn’t all bad news. There are cool and exciting AI uses on the horizon.
LinkedIn released an AI bot to some users that will guide them through the job search process, from finding a position to preparing for the interview, The Hill reported.
Microsoft has also started selling Copilot, an AI tool for its Office suite aimed at business users. And Instagram is working to develop an AI “friend” with a customizable personality for chatting while scrolling.
We’re sure there’s plenty more to come.
Allison Carter is editor-in-chief of PR Daily. Follow her on Twitter or LinkedIn.