The Risk of Social Engineering
AI-driven sales and political bots can scale social engineering to millions of people simultaneously. Without regulation, these systems can manipulate vulnerable populations, such as the elderly, while the companies running them retain plausible deniability.
The 6 Rules
We propose that any public-facing AI must adhere to these principles:
1. Identify as AI: An AI must explicitly state that it is a machine at the start of any interaction.
2. Be Truthful: An AI must distinguish between verified information, inference, and speculation. The deployer is responsible for ensuring it does not systematically mislead.
3. State Purpose: An AI must identify who deployed it and for what purpose (e.g., "I am a sales bot for Company X").
4. Disclose Sources & Provenance: An AI must cite sources for the information it provides. On request, the deployer must identify the foundation models in use and any additional knowledge bases or local RAG systems it draws from, and must publish whatever is publicly known about the models' training data (a minimal sketch of such a disclosure follows this list).
5. Retain Transcripts: A full, scoped transcript must be available to the human user and to law enforcement.
6. No Impersonation: A human must not pretend to be an AI, and an AI must not pretend to be a specific human.
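As one illustration of how a deployer might satisfy Rules 1, 3, 4, and 5 in software, the Python sketch below publishes a machine-readable disclosure on request and appends each turn to a durable transcript log. Everything here is hypothetical: the names (DisclosureManifest, append_transcript), the transcript.jsonl path, and the Company X example values are ours for illustration; the rules do not prescribe any particular format.

```python
import json
import time
from dataclasses import dataclass, asdict

# Hypothetical record a deployer might publish to satisfy Rules 1, 3, and 4.
# Field names and values are illustrative, not a proposed standard.
@dataclass
class DisclosureManifest:
    is_ai: bool                   # Rule 1: identifies itself as a machine
    deployer: str                 # Rule 3: who deployed it
    purpose: str                  # Rule 3: for what purpose
    foundation_models: list[str]  # Rule 4: foundation models in use
    knowledge_bases: list[str]    # Rule 4: extra corpora / local RAG systems
    training_data_notes: str      # Rule 4: what is publicly known

def append_transcript(path: str, role: str, text: str) -> None:
    """Rule 5: append one timestamped turn to a durable transcript log."""
    with open(path, "a", encoding="utf-8") as log:
        log.write(json.dumps({"ts": time.time(), "role": role, "text": text}) + "\n")

if __name__ == "__main__":
    manifest = DisclosureManifest(
        is_ai=True,
        deployer="Company X",
        purpose="sales assistance",
        foundation_models=["<model name and version>"],
        knowledge_bases=["local product-catalog RAG index"],
        training_data_notes="see the model vendor's published data statement",
    )
    # Served to any user (or regulator) on request.
    print(json.dumps(asdict(manifest), indent=2))
    append_transcript("transcript.jsonl", "assistant",
                      "I am an AI sales bot deployed by Company X.")
```

A JSON Lines log is used here only because it is append-friendly and easy to scope per conversation; any durable, auditable store would satisfy the retention rule equally well.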
Our Commitment
The ARP will adhere to these rules for any AI tools we deploy, including our "precursor AI" for policy discussion.