The AI Bill of Rights: A Welcome Step toward Tech Accountability

By Anuj Krishna
October 20, 2022 · 5 minute read

When was the last time you actually read a website's or app's privacy policy before clicking “agree”? Most people don’t, and they can hardly be blamed for it. As this Washington Post article [1] points out, the tiny “I agree” checkboxes linking to long-winded legalese are not designed to effectively inform users or collect meaningful consent. There has long been a push for more robust regulation of technology, especially AI, in the U.S., and the October 4, 2022 unveiling [2] of the much-anticipated AI Bill of Rights by The White House is a move in the right direction. Among other things, the Bill identifies contexts where meaningful consent can actually be given and pushes to return agency over consent to the user.

The Bill comes with the promise of “making automated systems work for the American people,” with the proposed goal of curbing the potential harm that ineffective and invasive algorithms and AI can cause to U.S. citizens. In this article, I share the key highlights of the bill.

In October 2021, the Office of Science and Technology Policy (OSTP) declared its intent [3] to pursue a bill that would keep tech platforms in check and protect citizens. In September 2022, The White House announced [4] a set of principles to guide future policymaking on tech accountability, including preventing discrimination in algorithmic decision-making, fostering healthy competition, and protecting privacy. The AI Bill of Rights thus seems the logical next step in turning these principles, and months of planning, into reality.

The ‘blueprint’ for the bill takes the form of five distinct principles of tech accountability:

1. Safe and effective systems:

Automated systems should be developed with sufficient pre-deployment testing, risk identification and mitigation, and ongoing monitoring to ensure they are safe and effective. Independent evaluation and reporting, as well as consultation with diverse communities, stakeholders, and domain experts, are encouraged, and results should be made public where possible.

2. Algorithmic discrimination protection:

Automated systems should not discriminate against or disfavor people based on race, color, ethnicity, sex, religion, age, or similar legally protected classifications; such ‘algorithmic discrimination’, even when unintentional, should be prevented as much as possible. This recommendation would take the form of proactive equity assessments during system design, accessibility features, checks for representative data before deployment, and similar measures.

3. Data privacy:

The default state of any system (as opposed to advanced setting changes) should be designed to be sufficient to protect against privacy violations. Data should be collected only with consent, and only in contexts where that consent can be meaningfully given. Consent requests should be clear and easily understandable, giving people agency over their own data and its usage. Any surveillance technology should be subject to heightened oversight and proper pre-deployment testing. Continuous surveillance and monitoring should not be used in education, work, housing, or other contexts where it would limit rights, opportunities, or access.

4. Notice and explanation:

When automation is deployed, citizens should be able to see, in clear and plain language, exactly which parts of their experience are being automated, and significant changes or updates to the automation should be accompanied by a notification. Citizens should also be able to know whenever there is a switch between manual and automated decision-making. Reporting systems that summarize these processes are suggested.

5. Human alternatives, consideration, and fallback:

Citizens should be able to opt out of an automated system at any time if they so prefer and it is appropriate; in some cases, a human or alternative system may be required by law. Automated systems working in sensitive domains such as criminal justice, employment, and health should be tailored to their purposes and incorporate human consideration for adverse or high-risk decisions. Such human consideration and fallback should be equitable and maintained effectively through appropriate operator training. Finally, data on these consideration and fallback processes should be publicly available.

As far as tech accountability is concerned, it is notable that developers, designers, and deployers in the U.S. have so far been working without a clear set of laws or guidelines to follow. The AI Bill of Rights is a step in the right direction toward protecting citizens from exploitation and discrimination, unintentional or otherwise, by automated systems, AI, and the algorithms that drive them.

References

Anuj Krishna
Cofounder and President - Technology & Growth

Anuj Krishna is a seasoned leader with nearly two decades of experience crafting and implementing analytics, data science, and engineering projects that unlock the full potential of organizational data for prominent global enterprises across industries. Anuj has played a vital role in defining processes and standards for analytical problem-solving that are now widely adopted by leading Fortune 500 enterprises, and he was engaged in developing MathCo's proprietary AI-powered platform, NucliOS.
