Artificial intelligence is spreading fast, appearing in smartphones, customer service, healthcare, legal systems, and more. From ChatGPT to Google’s AI overviews, it seems almost impossible to avoid. Some apps let users turn AI off, and swearwords in Google searches can bypass AI summaries. But the real question is whether opting out is practical or even advisable.
Dr Kobi Leins, an AI governance expert, chooses to opt out of AI in some healthcare situations. She refused to allow AI transcription software during her child’s medical appointment but was told the specialist could not accommodate her request. “You can’t resist individually. There is also systemic resistance,” she said. “The push to use these tools beyond what makes sense is very strong.”
AI now powers many daily systems. Smartphones, social media, navigation apps, finance systems, online dating platforms, and even legal case assessments rely on AI. Hospitals use it to ease administrative workloads and help identify illnesses. Despite its benefits, trust remains limited. A University of Melbourne study found that half of Australians use AI regularly, yet only 36% trust it.
Prof Paul Salmon, a human factors expert, warns that avoiding AI is increasingly difficult. “In work, there’s pressure to engage with it. You either feel left behind or are told you’re being left behind,” he said. The risks are extensive. The Massachusetts Institute of Technology’s AI Risk Repository catalogues over 1,600 hazards, including privacy breaches, discrimination, false information, scams, loss of human agency, and lack of transparency. It also warns of AI pursuing goals contrary to human values.
Greg Sadler, CEO of the Good Ancestors charity, notes that while AI can be useful, it should be avoided where its output cannot be trusted or privacy is a concern. AI also carries a significant environmental cost. Google’s emissions have risen by more than 51%, driven partly by the electricity demands of AI datacentres. The International Energy Agency projects that datacentres’ electricity use could double by 2026 and reach 4.5% of global energy generation by 2030.
Some users try workarounds. Google’s Gemini model powers AI summaries that sit atop searches, turning the search page into an “answer engine.” Adding profanity to a query can bypass these summaries and return standard search results. Browser extensions can block AI content, and repeatedly asking chatbots to “speak to a human” can also help. But fully avoiding AI means stepping away from much of modern life. James Jin Kang, a computer science lecturer, says AI is so embedded that it cannot simply be turned off.
The debate extends to governance. Governments worldwide, including in Australia, are struggling to regulate AI and understand its societal impact. Big tech companies seek access to journalism, books, and other content to train AI, prompting scrutiny and calls for oversight.
Experts disagree on AI’s long-term threat. Some say it is transformative rather than dangerous if humans make careful decisions. Others highlight its potential misuse at scale, particularly in military or high-stakes contexts. Dr Leins uses AI selectively, focusing on evidence-based applications rather than hype or fear. “These tools can be positive or negative. It’s about using them wisely,” she said.
Ultimately, avoiding AI completely may not be realistic. But understanding its risks, choosing where and when to engage with it, and demanding transparency from developers can help people retain agency. The question is no longer whether AI will be part of daily life, but whether humans can still choose when to opt out.