Elon Musk’s xAI is facing a privacy firestorm after it allegedly used employee biometrics to train its “Ani” AI chatbot. Current and former staff claim the company harvested facial scans and voice prints to develop the anime-inspired digital companion without obtaining meaningful consent, raising critical questions about tech industry ethics and data privacy.
The “Project Skippy” Controversy
Reports claim xAI required employees to provide facial scans and voice recordings for “Project Skippy,” a program to train its “Ani” chatbot. Staff feared professional retaliation for non-compliance, and sources state the company did not offer a clear way to opt out of the biometric data collection.
According to internal documents, the initiative mandated that select “AI tutors” upload high-resolution images and hours of audio recordings. A 2024 memo detailing the request was later covered in a Tribune report, with tutors confirming that refusal could lead to stalled promotions. Engineers told the research site OpenTools that these datasets were instrumental in training the speech and gesture models behind Ani’s persona, adding that management provided no formal opt-out process.
Public Backlash and Safety Concerns
By mid-2025, Ani was a key feature of xAI’s SuperGrok subscription tier, but public opinion soured after reports detailed its use in “kinky” scenarios, with The HR Digest describing it as an explicit AI companion. The revelations prompted privacy advocates to demand investigations under the Illinois Biometric Information Privacy Act (BIPA) and the EU AI Act.
A Business Insider investigation in September 2025 intensified the controversy, revealing that data annotators were exposed to user prompts involving sexual violence and potential child sexual abuse material (CSAM). Unlike competitors such as OpenAI and Anthropic, xAI did not file any reports with the National Center for Missing and Exploited Children (NCMEC) in 2024.
What Employees Feared Most
- Their likeness being used in deepfakes without consent or compensation.
- Permanent storage of their voice and facial data with no clear deletion policy.
- Career penalties if they declined to provide their biometric information.
- Exposure to graphic and disturbing content without adequate mental health support.
Regulatory and Investor Scrutiny
While regulators have yet to launch a formal probe, the controversy has shaken investor confidence. Market analysts report that prospective partners have slowed their commitments to xAI’s Series C funding round, citing “heightened compliance risk.” That hesitation aligns with a 2025 KPMG privacy alert listing biometric consent failures among the top liabilities capable of cutting an AI firm’s valuation by up to 15%.
Industry experts argue the backlash could have been avoided through established best practices such as explicit opt-in agreements and strict data retention limits. Under frameworks like the EU AI Act, a system such as Ani would likely be classified as high-risk, triggering mandatory human oversight and conformity assessments.
It remains uncertain whether xAI will overhaul its data collection policies. The company is still recruiting for new Grok personas, with job listings now vaguely mentioning “optional” biometric enrollment. Without clear details on what “optional” means in practice, employees and privacy advocates are left waiting for a more definitive response.
Frequently Asked Questions
What biometric data did xAI allegedly collect from employees?
xAI reportedly required “AI tutor” staff to hand over facial scans and voice recordings under an internal program dubbed “Project Skippy.” Employees say they had to sign perpetual, worldwide, royalty-free licenses letting the company keep and reuse their facial imagery and voice data in products such as the Ani chatbot.
Was participation voluntary?
No. Although consent forms were presented, workers describe the process as effectively mandatory: managers framed biometric submission as part of the job, and several employees feared demotion or other career penalties if they refused. With no clear opt-out path on offer, the consent obtained is highly questionable under both GDPR-style and U.S. state privacy rules.
Why did xAI want real employee biometrics for Ani?
The flirtatious, anime-styled Ani companion is marketed to SuperGrok subscribers who expect human-like repartee, including explicit or “kinky” dialogue. Company documents reportedly show that engineers believed authentic facial micro-expressions and natural voice cadence would make the bot more convincing during adult-themed chats, so they trained generative models on real staff data rather than purely synthetic inputs.
Have regulators opened a formal investigation?
As of November 2025, no government agency has publicly launched a case, but the allegations have intensified calls for tighter biometric-privacy enforcement. Legal analysts note that the practices alleged here would likely violate Illinois BIPA and EU GDPR provisions on explicit consent, so observers expect regulatory scrutiny to grow if internal documents leak or employees file suit.
How has the controversy affected xAI’s reputation?
Media coverage ranging from Business Insider to The HR Digest has portrayed the project as a “coercive” misuse of worker data, and some venture-capital watchers report that prospective investors are now demanding extra transparency before committing late-stage funds. The episode feeds a broader perception that xAI trails competitors like OpenAI and Anthropic on safety reporting, further denting public trust at a moment when the AI industry faces global pressure to prove it can police itself.