US and EU regulatory proposals for AI under scrutiny
Proposals for a US framework and a set of laws in Europe to regulate the development and use of artificial intelligence must carefully weigh the concerns of various groups if they are to gain popular support.
A recent White House blueprint for basic AI regulation, outlining five key principles under which AI-based technologies should be developed and deployed, has been described as a step in the right direction toward protecting Americans from harm caused by automated systems.
A World Economic Forum (WEF) article describes it as “a welcome initiative that needs to be properly placed in the context of other forthcoming initiatives both within the US and elsewhere,” while another article, published by Unite.ai, said the move has the potential to “change the AI landscape” and set new standards for how AI should be built, deployed and managed.
The White House earlier this month released a document titled “Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People,” aiming to respond to the many harms and risks Americans are exposed to every day through the deployment of technology, data and automated systems. The document was released together with an accompanying technical companion that explains how the blueprint should be put into practice.
The WEF article, authored by forum officials and the CEO of consulting firm Cantellus Group, notes that while the blueprint has faced criticism both for not going far enough in scope and for potentially limiting innovation in the AI space, it nonetheless offers important safeguards for groups, such as Black and Hispanic people, who may be adversely affected by bias in AI-enabled technology.
The blueprint, launched by the Office of Science and Technology Policy (OSTP), was also hailed as timely: not only is it intended to influence the future development and deployment of AI technology, but it could also keep the United States at the forefront of global AI regulatory efforts.
One of the issues the blueprint makes a strong case for is protecting people from unsafe and ineffective systems.
The 73-page document is non-binding, meaning companies and state governments are free to decide whether to follow its recommendations. Among other things, it gives examples of use cases in which AI has proved problematic.
The recently released blueprint is intended to be similar in spirit to the Ethics Guidelines for Trustworthy AI published by the European Commission in 2019.
EU legislators are at odds over the “toughness” of AI regulation
Meanwhile, Members of the European Parliament (MEPs) are divided over whether their AI regulation should leave more room for AI innovation or make respect for fundamental human rights the top priority.
This split comes at a time when advocacy groups such as European Digital Rights (EDRi) are warning that the bill under consideration must include clear safeguards against mass surveillance and against AI systems such as facial recognition, which can compromise privacy and entrench discrimination, The Brussels Times reports.
As a reminder, the EU has been working on an AI law that aims to regulate a wide range of AI applications in a manner consistent with citizens’ fundamental rights. The body drafting the regulation describes it as taking a “risk-based” approach.
Some lawmakers believe tighter regulation of AI use in the EU tech space could stifle innovation and discourage potential investors.
For their part, rights groups have also opposed allowing the use of facial recognition technologies in public spaces, calling them intrusive.
One of the concerns, according to The Brussels Times, is that while the draft AI regulation would ban the use of real-time facial recognition, it would allow EU member states to deploy such systems for specific purposes, such as security.
Some fear this exception could open the door to mass surveillance under the guise of security.
As debates around the regulation continue, experts believe a balance must be struck: a regulation that leaves room for AI innovation without compromising data security or human rights.
AI | AI Law | bias | Europe | European Digital Rights (EDRi) | Legislation | Privacy | Regulation | Research and Development | norms | United States | World Economic Forum