Shaping AI Policy to Address Risks to U.S. Citizens and National Security

For new AI technologies where both the risks and the full promise are still emerging, policymakers struggle to forge guidelines that ensure consumer and national security protections while permitting innovation to thrive. It is critical that future AI systems have transparency and accountability mechanisms built in from the outset so that risks related to bias, safety, discrimination and national security can be managed. In this guest article, Wilkinson Barker Knauer partner Evelyn Remaley discusses this technological shift and its impact on the United States. She examines three specific AI-driven risk scenarios involving sensitive data that are keeping policymakers up at night, as well as pending and proposed policy responses to those risks. She also recommends nine guiding principles for data integrity in the age of AI, including building consensus around voluntary principles to safeguard market potential and protect consumers. See “Takeaways From the New Push for a Federal AI Law” (Oct. 26, 2022).
