AI for communicators: What’s new and notable
No shortage of news for comms pros.
It’s a big week for AI – but then, most weeks for the last 14 months or so have been big weeks for AI. Still, new tools are being rolled out by the biggest players in the industry, progress on regulation is inching forward and deepfakes are coming to make the 2024 U.S. election even more interesting.
Here’s what’s new and what it means for communicators.
New tools and uses
Both Microsoft and OpenAI have rolled out major new tools and pricing packages in the last week, further cementing the companies as frenemies (Microsoft has a large stake in OpenAI) and front-runners in the consumer AI industry.
Microsoft is now offering a supercharged version of its free Copilot assistant, Copilot Pro, for $20 per month. The tech giant is offering all the benefits of Copilot, plus access to GPT-4 even during peak times, faster image creation and the ability to use Copilot in some Microsoft Office tools to summarize documents and more. These are certainly superuser nice-to-haves, but if you haven’t tried out Copilot yet, this could be the time to play around and see what Microsoft has to offer.
OpenAI is also offering additional paid products. First is its long-awaited business tier, ChatGPT Team, which offers a happy medium between its enterprise offering and its individual subscriptions. ChatGPT Team offers smaller organizations data privacy protection, custom GPTs and other perks at a price point of $25-30 per person, per month, depending on billing preferences.
The GPT Store is also opening its doors, allowing users to create their own bots, which they can sell under a soon-to-roll-out revenue-sharing plan. The custom bots are available only to users with Plus, Team or Enterprise accounts. Bots run the gamut from writing coaches and coding tools to GPTs that design your tattoo or create your star chart.
While these two players are leading the way in consumer-focused AI tech, intriguing new tools are being rolled out by other companies every day. One clever use is at Sam’s Club, where visual AI is being used to eyeball what’s in your cart rather than having a human check your receipt against your cart contents. While this technology exists in some small convenience stores, Walmart (which owns Sam’s Club) notes this is one of the first large-scale uses of the technology. We can certainly expect more to come.
Regulations
As technological capabilities race ahead, regulations are proceeding at a much slower pace. But they are proceeding.
The World Economic Forum in Davos, Switzerland, brings together some of the biggest governments, companies and other dominant global players. It’s where decisions are made far above the gaze of mere mortals like us. And things do seem to be getting hashed out in the realm of AI. Microsoft CEO Satya Nadella said he sees a consensus emerging around AI, according to CNBC, and welcomed global AI rules.
“I think [a global regulatory approach to AI is] very desirable, because I think we’re now at this point where these are global challenges that require global norms and global standards,” Nadella said.
But Nadella may feel less positive about EU rumblings about a merger investigation into the partnership between Microsoft and OpenAI. CNN reports that the EU is only the latest regulator to express concern over Microsoft’s stake in OpenAI, a relationship Microsoft denies amounts to a merger. Both the U.S. and U.K. have also launched preliminary probes into the partnership. Given that these two companies are emerging, both jointly and separately, as the dominant players in the space, this is one to watch.
Risks
Even as the promises of AI become more apparent, so do the risks. We have yet another reminder of how powerful AI can be at misleading people, and of the risks it poses to brand safety and to democracy as a whole.
In an insidious twist on deepfakes, “Taylor Swift” was “seen” hawking Le Creuset cookware, the New York Times reported. The megastar has publicly expressed her affinity for the pricey pots, which makes the scam more plausible. But the social media ads are lying twice over: they don’t actually show Swift, and they aren’t associated with Le Creuset. The ads promote a giveaway of the cookware, but the brand denies any involvement. It’s a scam that exploits two high-end, high-trust brands.
That situation is bad enough. But AI is taking on decidedly darker purposes in the hands of users of 4chan, a message board infamous for its trolling. Another New York Times report chronicled how AI is being used to attack the judicial system, including members of parole boards. As the Times reported of one doctor who serves on a parole board:
A collection of online trolls took screenshots of the doctor from an online feed of her testimony and edited the images with A.I. tools to make her appear naked. They then shared the manipulated files on 4chan, an anonymous message board known for fostering harassment, and spreading hateful content and conspiracy theories.
4chan users have also used AI to make it appear that judges are making racist comments. It’s all proof that even a small amount of video footage is now dangerous in the wrong hands. Vigilant monitoring is required to protect members of your organization.
OpenAI this week announced the steps it will take to attempt to prevent its tools’ misuse during the upcoming elections around the world. While its efforts are almost certainly doomed to failure, the company is trying, with measures to prevent abuse such as “misleading ‘deepfakes’, scaled influence operations, or chatbots impersonating candidates,” according to a blog post from OpenAI. The company has also pledged to institute digital watermarks that will help people identify images made with its DALL-E generator, though their effectiveness is questionable.
The effects of AI on this election are expected to be significant, no matter how hard anyone tries to contain them. The same is true of the workplace. A new report from the International Monetary Fund anticipates that 40% of all jobs will be affected by AI – and that number jumps to 60% in advanced economies.
In half of these instances, workers can expect to benefit from the integration of AI, which will enhance their productivity.
In other instances, AI will be able to perform key tasks currently executed by humans. This could lower demand for labor, affecting wages and even eradicating jobs.
Meanwhile, the IMF projects that the technology will affect just 26% of jobs in low-income countries.
In the meantime, let’s learn and do the best we can.
What trends and news are you tracking in the AI space? What would you like to see covered in our biweekly AI roundups, which are 100% written by humans? Let us know in the comments!
Allison Carter is editor-in-chief of PR Daily. Follow her on Twitter or LinkedIn.
On the dangers of AI misuse, there was a recent story in the SF Bay Area about how hackers used AI to clone the voice of a college student and then called the mom and told her he had been in an accident and was in trouble. https://www.sfchronicle.com/bayarea/article/ai-phone-scam-18561537.php
It was a horrendous experience for the family but the mom HEARD her son’s voice on the phone.
It starts to create a culture of fear and suspicion at every turn.
I know, I actually had a conversation with my parents about this and how scary it could be. Horrible situation.
So will the use of a “safe word” work in this instance or can hackers know the safe word too??
I told them to simply hang up and call me back.