European Commission Proposes Taking Away the Cops' Big Boy Surveillance Machine - Gizmodo
Posted: 21 Apr 2021 12:40 PM PDT

The EU is giving the U.S. a run for its money with privacy regulation, and now it has upped the ante with a dynamo of a proposal: banishing AI systems that violate "fundamental rights," with a special place in hell for law enforcement using real-time biometric identification. The end of that sentence is more of a personal interpretation, but the gist is that it's time to end the free-for-all.

The sweeping list of protected freedoms in the proposal includes the right to human dignity, respect for privacy, non-discrimination, gender equality, freedom of expression (infringed by the "chilling effect" of surveillance), freedom of assembly, the right to an effective remedy and to a fair trial, the rights of defense and the presumption of innocence, fair and just working conditions, consumer protections, the rights of the child, the integration of persons with disabilities, and environmental protection insofar as health and safety are impacted. The proposed regulation is over 100 pages long, so here's a summary of the bans.

BANNED:
In other words, law enforcement would have to hand over its spy toys for inspection and cut out the kind of abuse that's now rampant in the United States. Cops have abused face recognition software to make willy-nilly suspect identifications. Baltimore PD was caught using face recognition to scan Freddie Gray protesters and pick them off for outstanding warrants. Predictive policing algorithms intensify targeting in Black communities and perpetuate the cycle of disproportionate arrests. Recidivism-prediction algorithms have likely lengthened prison sentences. When we get mere glimpses of secretive technology, the scope is always more terrifying than imagined.

Consumer uses, too, have wildly violated civil rights. Algorithms that assess mortgage eligibility have levied higher interest rates on Black and Latinx communities and limited healthcare access. Such tools would all likely fall under the European Commission's broad definition of an "AI system," which covers machine learning, "knowledge representation," statistical "approaches," and search methods, among other applications. Generally, the software can, for a "given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with."

The European Commission also proposes strict regulations on AI systems that it deems "high-risk." (The commission notes that overall this represents a very small proportion of systems in use.) "High-risk" uses include:
Providers of all of the above would have to regularly monitor their technology and report back to the European Commission. Developers are expected to create a risk management system in order to regularly eliminate and mitigate risk. Dealers are expected to provide information and training to users, taking into account the end user's level of technical knowledge (read: cops). They would be expected to keep records of who used the technology and how, including input data (i.e., cops would have to admit they used Woody Harrelson's photo to make a suspect ID). They'd also need to inform authorities when they're aware of a risk.

Government officials would still be fine to use biometric identification in ways that don't necessarily cause harm, Vestager added in the speech. The commission considers fingerprint or face scans by border control or customs agents to be harmless.

While some have complained that this will stifle innovation, the commission has added protections for that too. It would encourage member states to set up "regulatory sandboxes," supervised by a member state or the European Data Protection Supervisor. That sounds like a crackdown, but it's more like an optional incubator for startups that get priority access.

And the European Commission reminds us that the "vast majority" of AI systems don't fall under the above risk categories: think AI systems that don't drive human interaction or involve identification. The commission aims to encourage things like smart sensors and algorithms that help farmers maximize food production and sustainability at cost savings. So, no to barbaric policing and yes to sustaining life on Earth. Great, let's go right ahead and copy-paste this.