Earlier this week, it was announced that Microsoft is joining forces with media website Semafor to use ChatGPT in creating news stories. Microsoft and Semafor state that they’re only using AI to research breaking news stories and that actual journalists will write original content based on that research.
It’s hard not to at least question these claims, given that Microsoft and its other partner, OpenAI, are already facing a lawsuit from the New York Times for using paywalled content to train their AI chatbots. This latest venture seems like a blatant “we don’t care” aimed not only at the NYT but also at journalists.
What is Signals?
According to Semafor co-founder and former BuzzFeed editor-in-chief Ben Smith, Semafor will create a newsfeed called Signals for Microsoft, a project for which Semafor has been paid a substantial sum.
The next part is a bit murky: “It will highlight breaking news and analysis, offering a dozen or so posts per day. All stories will be written entirely by journalists, with the AI effectively acting as a research tool.”
That could be the absolute truth, but even they must understand that it’s hard to trust a company already being sued for mining content produced by legitimate journalists and then arguing it was fair use.
The slippery slope from this tool to AI journalists writing stories and taking jobs seems a little too obvious for me to want to set foot on it. Microsoft has also announced partnerships with the Craig Newmark School of Journalism, the GroundTruth Project, the Online News Association, and other journalism organizations.
Why? Who knows. Maybe to lend the appearance of journalistic integrity, but a few partnerships don’t earn you the journalistic integrity badge or put you beyond reproach.
As a journalist, I find the infusion of AI into our work can be triggering. So, I reached out to Leonard Lee, a respected industry analyst and tech advisor at Next Curve, to share his insight and offer a different, refreshing perspective.
Lee shared these thoughts, “I think the operative word here is ‘assisted.’ This seems like Microsoft’s attempt to explore legitimate applications of generative AI tools to augment human-driven journalism. Microsoft describes this as a ‘collaboration’ among several it has penned with Semafor and other journalist organizations and publications to discover ‘best practices’ for enhancing journalism.”
Lee further states, “I see this as a great test for a wide range of Gen AI tools, from Bing Chat to CoPilot, that Microsoft and participating collaborators will evaluate. In many ways, this reads like a large POC for the news industry and journalism at large. It will be interesting to see what the industry learns from these exercises with Microsoft. At the moment, it doesn’t seem these collaborations are about eliminating the journalists in the middle but rather enhancing them. Time will tell how efficacious these Gen AI tools prove. It will take time.”
“The NYT thing is a different matter, in my opinion. That’s really about OpenAI and what they used and continue to use to train their models. The work with Semafor seems more of an effort by Microsoft to push the adoption of Gen AI. Their business valuation counts on it.”
Hmm, maybe I am thinking about this all wrong?
At the end of the day, Microsoft already pushes a great deal of news via MSN, and using AI to give journalists leads on news stories could be helpful. In today’s world, where controlling the narrative is more powerful than ever, these moves by Microsoft could blur the lines on many levels, as most corporations care more about shareholders’ profits than things like fact-based news or journalistic integrity.
However, I can also see Leonard Lee’s perspective; maybe I am too quick to judge, and maybe Microsoft and Semafor deserve the benefit of the doubt. I guess we will find out soon enough.