Responsible and Ethical AI Labeling in Government Communications Act
The State REAL Communications Act establishes transparency standards for the use of generative AI in official state communications. Agencies must disclose when AI is used to create or materially alter public-facing content, with defined rules for sensitive topics such as emergencies, elections, public health, and public safety. The Act promotes responsible use through documentation, procurement requirements, staff training, and implementation guidance from the State Chief Information Officer.
Key Provisions
AI Disclosure Requirement. Requires clear disclosure when generative AI is used to create or materially modify public-facing government communications.
Human Review Exception. Allows agencies to omit public disclosure if a designated human reviewer approves the content prior to release and the agency maintains a record of that review.
Limits for Sensitive Content. Requires disclosure for AI-assisted content involving emergencies, elections, public health, or public safety, regardless of human review.
Medium-Specific Standards. Establishes clear disclosure requirements tailored to text, images, audio, and video.
Documentation and Recordkeeping. Requires agencies to retain basic records of AI use, including the tool used and the approving reviewer.
Implementation Guidance. Directs the State Chief Information Officer to issue standardized disclosure language and model agency policies.
Procurement and Technical Compliance. Requires state contracts for AI tools used in communications to support disclosure, auditability, and recordkeeping.
Training and Oversight. Requires staff training and provides for administrative enforcement and corrective action, without creating a private right of action.
Model Language
Section 1. Short Title. This Act may be cited as the “State REAL Communications Act.”
Section 2. Legislative Findings and Purpose
The State has a compelling interest in preserving public trust in official communications.
Generative artificial intelligence can produce or manipulate content in ways that may mislead the public if not clearly identified.
The purpose of this Act is to require transparent labeling of AI-generated or AI-manipulated content released through official State channels, while allowing use of AI tools when there is documented human review and accountability.
Section 3. Definitions. For purposes of this Act:
“Agency” means any department, board, bureau, commission, office, authority, or other instrumentality of the executive branch of State government, including any agency head.
“Covered official” means the Governor, Lieutenant Governor, Attorney General, Secretary of State, Treasurer, Comptroller (or equivalent constitutional officers), and any officer or employee acting in an official capacity on behalf of an agency.
“Official channel” means any website, social media account, email distribution list, text alert system, press release channel, public notice system, printed mailing, or other communications medium that an agency or covered official uses for official business.
“Generative AI system” means a computational system that generates or materially alters text, images, audio, video, or multimedia content based on prompts or training data.
“AI-generated content” means content created in whole or in material part by a generative AI system.
“AI-manipulated content” means content in which a generative AI system materially modifies, synthesizes, or fabricates any part of an image, audio, video, or multimedia recording in a manner that a reasonable person would consider significant to the content’s meaning or apparent authenticity.
“Human review” means review by an agency employee or designated contractor who is accountable to the agency, performed prior to release, and reasonably designed to confirm the content is accurate, non-deceptive, and appropriate for public release.
“Clear and conspicuous” means presented in a manner that an ordinary person can notice, read, hear, or otherwise understand, appropriate to the format and medium.
Section 4. Labeling Requirement for Public Release
General rule. An agency or covered official shall not distribute AI-generated content or AI-manipulated content to the public through an official channel unless: (a) the content includes a clear and conspicuous disclosure that AI was used; or (b) the content qualifies for the human review exception under Section 5.
Format-specific disclosures. Disclosures must be tailored to the medium: (a) Text: A statement at the beginning or end of the communication that reads: “This communication includes content generated using artificial intelligence.” (b) Image: A visible label on the image and, where technically feasible, embedded metadata indicating AI generation or manipulation. (c) Audio: An audible disclosure at the beginning of the audio and a written disclosure in any accompanying description or transcript. (d) Video: An on-screen disclosure displayed at the beginning of the video and in the video description, and where technically feasible, embedded metadata.
Standardized language. The State Chief Information Officer or equivalent (the “State CIO”) shall publish standardized disclosure language and technical guidance to promote consistency across agencies.
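The medium-specific requirements in Section 4 lend themselves to a single standardized lookup that the State CIO could publish. The sketch below is illustrative only: the text disclosure is quoted from Section 4, while the image, audio, and video strings are assumptions standing in for whatever standardized language the State CIO adopts.

```python
# Hypothetical standardized disclosure language under Section 4.
# Only the "text" string is taken from the Act; the others are assumed.
DISCLOSURES = {
    "text": "This communication includes content generated using artificial intelligence.",
    "image": "AI-generated or AI-manipulated image",
    "audio": "The following audio includes content generated using artificial intelligence.",
    "video": "This video includes content generated using artificial intelligence.",
}

def disclosure_for(medium: str) -> str:
    """Return the standardized disclosure string for a medium.

    Raises ValueError for media with no published standard language,
    so an agency workflow fails loudly rather than publishing unlabeled.
    """
    try:
        return DISCLOSURES[medium.lower()]
    except KeyError:
        raise ValueError(f"No standardized disclosure defined for medium: {medium!r}")
```

Centralizing the strings in one table mirrors the Act's goal of consistency across agencies: a single update to the published language propagates to every channel that uses it.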
Section 5. Human Review Exception
Exception. A disclosure under Section 4 is not required if, prior to publication: (a) the content is reviewed by a human reviewer; (b) the reviewer approves it for release; and (c) the agency maintains a review record consistent with Section 6.
Limitations. The human review exception does not apply to: (a) content that depicts an individual saying or doing something they did not say or do, if that depiction is materially fabricated; or (b) content likely to be relied upon for emergency guidance, voting-related procedures, public health directives, or law enforcement public safety alerts, unless labeled.
Section 6. Documentation and Recordkeeping
Each agency shall maintain records for at least three (3) years for covered content released under the human review exception, including: (a) date of release; (b) official channel used; (c) the AI tool(s) used (vendor and product name, if known); (d) the name and title of the reviewer; (e) a brief statement of what was reviewed (for example, “final draft text,” “edited video,” “image generation”).
Records are subject to applicable state public records law, except where an exemption applies.
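The record elements required by Section 6 map naturally onto a fixed schema. The following is a minimal sketch of such a record in Python; all field names are assumptions chosen to track subsections (a) through (e), and the sample values are invented for illustration.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class ReviewRecord:
    """One Section 6 record for content released under the human review exception."""
    release_date: str     # (a) date of release, ISO format
    channel: str          # (b) official channel used
    ai_tools: list[str]   # (c) AI tool(s) used (vendor and product name, if known)
    reviewer_name: str    # (d) name of the approving reviewer
    reviewer_title: str   # (d) title of the approving reviewer
    scope: str            # (e) brief statement of what was reviewed

# Illustrative example; serialize as JSON for retention and audit export.
record = ReviewRecord(
    release_date="2025-03-01",
    channel="agency website",
    ai_tools=["ExampleVendor ExampleModel"],
    reviewer_name="Jane Doe",
    reviewer_title="Communications Director",
    scope="final draft text",
)
print(json.dumps(asdict(record), indent=2))
```

A flat, serializable schema like this also satisfies Section 7's procurement requirement that vendor tools be able to export logs "sufficient to meet Section 6."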
Section 7. Technical Measures and Procurement
Technical measures. The State CIO shall issue guidance encouraging agencies, where feasible, to use: (a) persistent visual labels for images and video; (b) metadata tagging for AI-generated or AI-manipulated files; (c) audit logs for generative AI tools used in content creation.
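Embedded metadata standards for images and video (such as C2PA or EXIF) require specialized tooling, but the metadata-tagging guidance in this section can be approximated with a simple sidecar file. The sketch below, using only the Python standard library, writes a JSON provenance record next to a media file; the `.ai.json` naming convention and every field name are assumptions, not anything specified by the Act.

```python
import datetime
import hashlib
import json
import pathlib

def write_ai_sidecar(media_path: str, tool: str, manipulated: bool) -> pathlib.Path:
    """Write a JSON sidecar recording AI provenance for a media file.

    The sidecar includes a SHA-256 hash of the file so a later audit can
    confirm the tag still describes the same bytes.
    """
    p = pathlib.Path(media_path)
    meta = {
        "file": p.name,
        "sha256": hashlib.sha256(p.read_bytes()).hexdigest(),
        "ai_generated": not manipulated,
        "ai_manipulated": manipulated,
        "tool": tool,
        "tagged_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    sidecar = pathlib.Path(str(p) + ".ai.json")
    sidecar.write_text(json.dumps(meta, indent=2))
    return sidecar
```

Because the sidecar travels with the file rather than inside it, this approach works for any format, at the cost of being easier to strip than embedded metadata; agencies adopting the CIO's guidance would likely treat it as a complement to, not a substitute for, visible labels.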
Procurement clause. Any state contract for generative AI tools used for communications must require: (a) the ability to export logs sufficient to meet Section 6; (b) the ability to apply disclosures or metadata tagging where technically feasible; and (c) vendor cooperation with compliance audits.
Section 8. Training and Implementation
Within 180 days of enactment, the State CIO, in consultation with the Attorney General, shall issue an implementation memo and model agency policy.
Each agency shall train relevant staff annually on: (a) when disclosures are required; (b) how to apply disclosures by medium; (c) the human review process and recordkeeping obligations.
Section 9. Enforcement and Remedies
Administrative enforcement. The State CIO may conduct compliance reviews and refer repeated or knowing violations to the Inspector General (or equivalent) and the Attorney General.
Corrective action. When a violation is identified, the agency must: (a) promptly add a disclosure or remove the content; and (b) publish a corrected version where appropriate.
No private right of action. This Act does not create a private right of action against the State or any agency. [Drafting note: this provision is optional but is commonly included to limit litigation risk.]
Section 10. Exemptions
Non-public communications. Content not intended for public release is exempt.
Classified or protected information. Content created for classified purposes or protected by law is exempt.
Law enforcement sensitive operations. Limited exemption where disclosure would reasonably risk an ongoing investigation or endanger safety, provided a disclosure is added when the risk subsides.
Section 11. Severability. If any provision of this Act is held invalid, the remaining provisions shall remain in effect.
Section 12. Effective Date. This Act takes effect January 1 following enactment, with rulemaking and guidance permitted immediately.