The 5 Major AI Risks

In previous chapters, we explored AI's capabilities and use cases. But to truly "master" AI, understanding the risks is essential.

Understanding risks isn't about being afraid of AI. Just as a driver needs to know how the brakes work, knowing AI's risks is fundamental to using it safely and effectively.

Five major AI risks: Hallucination, Copyright, Privacy, Bias, and Deepfakes

Hallucination — When AI Confidently Makes Things Up

Hallucination is the phenomenon where AI generates information that isn't based in fact, presenting it as if it were completely true. It's arguably the single biggest risk in AI usage.

Why It Happens — Understanding the Mechanism

AI works by "predicting the most likely next word." In other words, it's not outputting "what's true" — it's outputting "what sounds plausible." AI doesn't judge truth or falsehood. It's not looking things up in an encyclopedia; it's doing pattern matching.
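The "predict the most likely next word" idea can be illustrated with a deliberately tiny sketch: a model that knows nothing but word-pair frequencies from its training text. The corpus below is invented for illustration, and real LLMs use neural networks over subword tokens, but the core behavior is the same: the model emits the statistically common continuation, with no notion of whether it is true.

```python
from collections import Counter, defaultdict

# Toy "training corpus" (made up for illustration). Note it contains a
# wrong statement ("lyon") alongside the common one ("paris").
corpus = (
    "the capital of france is paris . "
    "the capital of france is lyon . "
    "the capital of france is paris ."
).split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def most_likely_next(word):
    """Return the continuation seen most often in training data."""
    return following[word].most_common(1)[0][0]

print(most_likely_next("is"))  # "paris" -- frequent in the data, not a verified fact
```

The model answers "paris" only because that string was common in its data; if the training text had been dominated by errors, it would repeat those just as confidently. That is hallucination in miniature.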

Hallucination rates vary significantly between models. A 2025 benchmark found that the best model (Gemini 2.0 Flash) had a rate of 0.7%, while some models hallucinated nearly 30% of the time[2]. Progress is being made, but rates are unlikely to ever reach zero.

Real-World Incidents

  • Fake court cases — A U.S. lawyer submitted ChatGPT-generated case citations to court, only for the cited cases to be entirely fictional (2023)
  • Fabricated papers — AI generated plausible-looking paper titles and author names for studies that don't exist
  • Wrong statistics — AI confidently stated market sizes that were wildly different from actual figures
  • Legal domain — A Stanford study found hallucination rates of 69-88% in legal queries[2]

4 Defense Strategies

Four hallucination defense steps: Ask for sources, verify with primary sources, compare across AIs, use search-enabled AI

The most vulnerable information types are: statistics and numbers, names and proper nouns, legal and tax matters, and breaking news. Always verify these with primary sources.

Practical tip: AI tools with web search integration like Perplexity or Gemini return answers with source links. For important research tasks, using these search-enabled AI tools is the smart approach.
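The "compare across AIs" defense step can be sketched as a small helper: collect answers to the same question from several assistants and flag disagreement as a trigger for primary-source checking. The answer strings below are hypothetical; note that agreement is not proof of correctness (models may share training data), but disagreement is a strong signal to verify.

```python
import re

def normalize(answer: str) -> str:
    """Crude normalization so superficially different phrasings compare equal."""
    return re.sub(r"[^a-z0-9 ]", "", answer.lower()).strip()

def needs_verification(answers: list[str]) -> bool:
    """Return True when the models disagree and a primary source is needed."""
    return len({normalize(a) for a in answers}) > 1

# Hypothetical replies from three different assistants to the same question:
replies = ["About 12%.", "about 12%", "Roughly 45%."]
print(needs_verification(replies))  # True -- the models disagree
```

In practice you would apply this judgment manually, but the rule of thumb is the same: treat any disagreement between tools as a "verify before using" flag.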

Copyright — Two Open Questions

AI and copyright issues break down into two main questions: "Is it legal to use copyrighted work for AI training?" and "Does AI-generated content have copyright protection?"

Issue 1: Training Data and Copyright

AI models are trained on massive amounts of internet content (text, images, code). Lawsuits are being filed worldwide over the use of copyrighted works without permission.

Major lawsuits (status as of 2026):

  • NYT vs OpenAI — New York Times sued over unauthorized article use for training (ongoing)
  • Getty vs Stability AI — Lawsuit over unauthorized photo use for training (ongoing)
  • Artist class actions — Artists allege unauthorized use of their work for image generation AI training (multiple ongoing)
  • News orgs vs Perplexity — Major news organizations filed $66M+ damage claims for unauthorized article use (filed 2025)

Issue 2: Copyright of AI-Generated Content

Whether AI-generated content qualifies for copyright protection varies by country.

  • U.S. — The Copyright Office's position is that "works created solely by AI are not eligible for copyright." Works with significant human creative involvement may receive partial protection
  • EU — Rules governing AI-generated content copyright are being developed
  • Other jurisdictions — Laws are still catching up with the technology

What You Should Do

  • For commercial use, always review each service's terms of use
  • Don't specify particular artist names in image generation ("in the style of [artist]" is a copyright risk)
  • Check whether AI output resembles existing copyrighted work
  • Disclose AI involvement when appropriate

Privacy — What's Safe to Share with AI?

In 2023, Samsung employees entered confidential source code into ChatGPT, making major headlines. This incident prompted many companies to establish AI data input policies.

Data entered into AI may be used for service improvement or model training. Assume that anything you input can't be taken back.

What's safe and what's not safe to input into AI: 5 no-go items and 5 OK items

Data Protection Strategies

  • Turn off chat history — Both ChatGPT and Claude let you disable training data usage in settings
  • Use enterprise plans — Business plans guarantee your data won't be used for training
  • Anonymize data — Replace real names with "Person A" and company names with "Company X" before input
  • Run locally — Use open-source models via Ollama on your own machine
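The anonymization strategy above can be sketched as a simple substitution pass before any text is sent to an AI service. The names and mapping here are invented examples; a real pipeline would also need named-entity recognition to catch names you haven't listed, and a reverse map to restore placeholders in the AI's reply.

```python
import re

# Hypothetical mapping of sensitive strings to placeholders.
REPLACEMENTS = {
    "Taro Yamada": "Person A",
    "Acme Corporation": "Company X",
}

def anonymize(text: str) -> str:
    """Swap known sensitive strings for placeholders before sending to an AI."""
    for real, placeholder in REPLACEMENTS.items():
        text = re.sub(re.escape(real), placeholder, text)
    return text

prompt = "Summarize the contract between Taro Yamada and Acme Corporation."
print(anonymize(prompt))
# "Summarize the contract between Person A and Company X."
```

This keeps the structure of the request intact while ensuring the AI provider never sees the real identifiers.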

When in doubt, ask yourself: "Would I be okay if this information were published on the internet?" If the answer is "no," don't enter it into AI.

Bias and Deepfakes

AI Bias — Training Data Prejudice Reflected in Output

AI bias occurs when societal prejudices present in training data are reflected in AI's output. AI doesn't produce "correct answers" — it tends to produce answers that were "most common in the training data."

  • Gender bias — Generate "nurse" images and you get mostly women; "engineer" images mostly men
  • Cultural bias — Responses tend to skew toward English-speaking, Western perspectives
  • Majority bias — Majority opinions are over-represented while minority viewpoints are underrepresented

Countermeasure: Don't take AI's output at face value. Build a habit of requesting "consider this from another perspective" or "give me the opposing view." For important decisions like hiring and performance reviews, never rely on AI's judgment as the sole basis.

Deepfakes — Fakes You Can't Tell From the Real Thing

As AI-generated images, video, and audio become increasingly realistic, the risks posed by deepfakes (fake media created by AI) are surging. Deepfakes on social media grew from an estimated 500,000 in 2023 to 8 million in 2025[3].

  • Voice cloning scams — Phone scams using AI to mimic a family member's or boss's voice, created from just seconds of audio
  • Fake videos — Fabricated videos of public figures used for opinion manipulation or impersonation
  • Fake news — Mass distribution of AI-generated articles that look convincingly real

How to protect yourself:

  • If you receive an urgent call saying "send money now," verify the person's identity through a separate channel
  • Check the source of shocking images or videos you see on social media
  • Maintain healthy skepticism toward any content that could be AI-generated

Global Regulation — The Rules Are Being Written

Governments worldwide are rapidly developing legal frameworks to address AI risks.

  • EU — AI Act (enacted 2024): Risk-based regulation with strict standards for high-risk AI (hiring, healthcare, justice); phased implementation starting 2025
  • Japan — AI Promotion Act (May 2025): "Innovation-first" approach that prioritizes advancement while establishing safety standards
  • U.S. — Executive Order on AI Safety: No comprehensive federal law yet; regulation progressing state by state

In December 2025, Japan adopted its first "AI Basic Plan," committing approximately $7 billion (1 trillion yen) in AI-related investment over five years[1]. Japan's approach of promoting innovation while establishing safety standards — rather than heavy-handed regulation — has attracted international attention.

AI Safety Checklist

Now that you understand the risks, here are concrete action items. We've compiled checklists for both individuals and organizations.

AI safety checklist: key items for individuals and organizations

For Individuals

  1. Don't take output at face value — Always verify important information against primary sources
  2. Don't enter sensitive information — Use the "would I want this published?" test
  3. Fact-check numbers and names — These are the most hallucination-prone areas
  4. Watch for bias — Stay aware of potential skews in AI output
  5. Check copyright — Review terms of use for any commercial application
  6. Disclose AI usage appropriately — Be transparent about AI involvement in reports and articles
  7. You bear the final responsibility — The consequences of using AI output fall on the user

For Organizations

  1. Create and communicate an AI usage policy — Document what data can be entered and for which tasks
  2. Conduct employee training — Educate staff on proper AI use and associated risks
  3. Establish review workflows for AI output — Build processes for human review before publication
  4. Consider enterprise plans — Business-tier plans offer stronger data handling guarantees
  5. Prepare incident response procedures — Define response protocols before problems occur

Summary: AI risks aren't "scary" — they're "need-to-know." The foundation of hallucination defense is fact-checking, and the foundation of privacy protection is the "would I want this published?" test. Make these two into habits and you'll avoid the vast majority of risks.

References

  1. "Japan adopts first AI basic plan." The Japan Times, December 23, 2025.
  2. Vectara. "Hallucination Leaderboard." GitHub, 2025. / Stanford RegLab. "Legal Hallucination Study." 2025.
  3. Deepstrike. "Deepfake Statistics 2025." deepstrike.io, 2025.


In the next chapter, we'll explore the latest AI developments — multimodal AI, AI agents, and 2025-2026 trends.