Some AI workers are urging their friends and family to avoid using generative AI, citing concerns over ethics, accuracy, and potential harm.
Krista Pawloski, who moderates AI-generated content on Amazon Mechanical Turk, says a single experience shaped her cautious approach. While labeling tweets for racism, she encountered the term “mooncricket,” a racial slur she had not previously recognized. The discovery prompted her to question how often she and other workers might have missed similarly offensive content.
Since then, Pawloski has avoided using AI tools personally and discourages her teenage daughter from using them. She advises friends to test AI on topics they know well to identify errors and understand its limitations. Pawloski often asks herself whether her work could unintentionally harm people, and frequently concludes that it could.
Amazon notes that workers on Mechanical Turk can choose tasks and review their details before accepting them. The company emphasizes that tasks are set by requesters, who determine pay, time limits, and instructions.
Pawloski is not alone. A dozen AI raters who review outputs from models such as Google’s Gemini and Elon Musk’s Grok told researchers that they advise loved ones to use the technology with caution. Many have stopped using AI themselves after seeing how frequently it is wrong. Some raters who evaluate AI responses on health and other sensitive topics reported that models often produce confidently incorrect answers, raising ethical concerns.
One Google AI rater forbids her 10-year-old daughter from using AI, insisting that critical thinking skills must come first. She described how colleagues often accept AI responses on medical issues uncritically, even though they may lack relevant expertise.
Experts see the hesitancy among AI workers as a warning sign. Alex Mahadevan, director of MediaWise at Poynter, explains that when the people training AI distrust it, it suggests companies are prioritizing speed and scale over careful validation, leaving users exposed to repeated errors in chatbots and other AI systems.
Brook Hansen, another AI worker at Mechanical Turk, highlighted that workers are given limited training, vague instructions, and tight deadlines. She believes this compromises the safety, accuracy, and ethical quality of AI outputs. Hansen warns that generative AI’s ability to deliver false information confidently is a major flaw.
Research from NewsGuard shows that AI models, including ChatGPT, Gemini, and Meta’s AI, have grown more likely to repeat false information and less likely to decline to answer at all. This trend further fuels concern among AI trainers.
Many AI workers advise family and friends to avoid devices with built-in AI, resist automatic updates that add AI features, and refrain from sharing personal information with AI tools. They see generative AI as “fragile, not futuristic,” shaped by rushed timelines, human biases, and incomplete data.
Pawloski compares the situation to the textile industry, where consumers long prioritized cheap clothes over ethical concerns. Once the public learned about sweatshops, they could make informed choices. She believes public awareness of AI is at a similar stage: asking questions about data sources, copyright use, and fair labor practices can lead to better choices and push the technology to improve.
AI workers like Hansen and Pawloski are sharing their insights publicly. They have spoken at school board conferences to highlight AI’s ethical and environmental impacts, aiming to educate communities and spark responsible discussions. Their work underlines the importance of understanding the hidden labor and risks behind generative AI before embracing it fully.