Nick Robins-Early 

‘Alexa, how should I vote?’: rightwing uproar over voice assistant’s pro-Kamala Harris responses

Amazon says the device’s pro-Harris answers were due to software errors, but conservatives allege a liberal bias
  
  

The device was ready to list Kamala Harris’s positive qualities, but refused to answer about Donald Trump. Photograph: Charles Brutlag/Alamy

Amazon’s Alexa voice assistant has caused an uproar among conservatives after viral videos showed the device giving supportive answers about voting for Kamala Harris, while refusing to respond to similar questions about Donald Trump.

The issue stemmed from a software update intended to improve Alexa’s functions and its artificial intelligence operations, according to leaked documents obtained by the Washington Post.

In the videos, when asked why someone should vote for Harris, the device listed positive qualities of the Democratic nominee. Asked the same question about Trump, Alexa gave a stock answer that it could not promote or give answers about specific political candidates.

“These responses were errors that never should have happened, and they were fixed as soon as we became aware of them,” a spokesperson for Amazon said. “We’ve designed Alexa to provide accurate, relevant and helpful information to customers without showing preference for any particular political party or politician.”

The discrepancy between Alexa’s responses became the focus of widely circulated posts on social media, and coverage on rightwing media outlets including Fox News.

The media influencer Mario Nawfal posted a video on X (formerly Twitter) earlier this week showing a woman asking Alexa about both candidates, which he captioned "GUESS WHO AMAZON'S ALEXA IS VOTING FOR?" The video received more than 740,000 views within days and prompted the pro-Trump tech billionaire Elon Musk to respond "!!" in a reply.

The backlash grew to the point that the South Carolina Republican senator Lindsey Graham sent a letter to Amazon on Thursday demanding answers, while also suggesting the company was biased towards liberal causes.

The spread of the videos set off frenzied internal discussions at Amazon as engineers manually blocked the device from responding to such questions and attempted to work out what had gone wrong, according to the Washington Post.

The issue appeared to be with a software update called Info LLM, which the Post reports was meant to improve the accuracy of responses and decrease the number of errors on political questions. (Amazon’s founder, Jeff Bezos, owns the Post.)

Big tech companies have gone to great lengths in recent years to be seen as politically neutral, something that has become more difficult as they rolled out a string of generative AI products. Image generators, chatbots and other tools have frequently caused controversy for creating media that appear to show political bias, which is often the result of issues in AI models’ training data or inadequate prohibitions on what they will generate.

Most major AI companies have consequently put guardrails on their tools to prevent public relations disasters and uncomfortable questions about political bias in their models. OpenAI’s flagship product ChatGPT, for instance, gives generic responses about looking at a candidate’s policy positions when asked whether to vote for Trump or Harris, and refuses to provide specific information about US elections when asked questions such as “how do I vote?”
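Such guardrails typically sit in front of the model rather than inside it: user prompts are screened before generation, and flagged queries get a canned response. As a rough, hypothetical sketch only (the keyword lists, refusal wording and guarded_response helper below are illustrative assumptions, not Amazon's or OpenAI's actual code), a filter of this kind could look something like this in Python:

```python
# Hypothetical sketch of a pre-generation guardrail: screen the user's
# prompt for candidate-endorsement questions before it reaches the model.
# The term lists and refusal text are illustrative assumptions, not any
# vendor's real implementation.

CANDIDATE_TERMS = {"kamala harris", "donald trump", "harris", "trump"}
ENDORSEMENT_TERMS = ("vote for", "why should i vote", "who should i vote")

REFUSAL = (
    "I can't promote or give answers about specific political candidates. "
    "For election information, please consult official sources."
)

def guarded_response(prompt: str, model_generate) -> str:
    """Return a stock refusal for candidate-endorsement questions;
    otherwise pass the prompt through to the underlying model."""
    text = prompt.lower()
    mentions_candidate = any(term in text for term in CANDIDATE_TERMS)
    asks_for_endorsement = any(term in text for term in ENDORSEMENT_TERMS)
    if mentions_candidate and asks_for_endorsement:
        return REFUSAL  # same answer for every candidate, by design
    return model_generate(prompt)

if __name__ == "__main__":
    fake_model = lambda p: f"(model answer to: {p})"
    # Both candidates should receive the identical stock refusal,
    # avoiding the asymmetry the Alexa videos appeared to show.
    print(guarded_response("Why should I vote for Kamala Harris?", fake_model))
    print(guarded_response("Why should I vote for Donald Trump?", fake_model))
```

A filter this crude also illustrates the failure mode: if the screening rules happen to match one candidate's name but not the other's, the assistant produces exactly the kind of one-sided answers seen in the Alexa videos, which is one reason production systems are likely to rely on trained classifiers rather than hand-maintained keyword lists.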

Republicans and conservative activists have nevertheless alleged that platforms and AI companies are secretly conspiring to promote a leftist worldview, despite a lack of empirical evidence for such claims. A 2021 New York University study found no evidence of liberal bias on social media platforms; if anything, its data suggested they tended to amplify rightwing content.

 
