TechViews News
Have you seen the American TV show Person of Interest? Its protagonists solve crimes and locate people by running facial recognition across the hundreds of thousands of cameras on street corners in almost every US city. We already have that technology; now it is being put into action.
For the past few years, the world’s biggest tech companies have been on a mission to put artificial-intelligence (AI) tools in the hands of software writers and coders. The benefits are clear: coders familiar with free AI software from Google, Amazon, Microsoft, or Facebook are increasingly in demand at major communications companies worldwide, and selling pre-built AI tools to other companies has become big business for Google, Amazon, and Microsoft.
Today these same companies are under fire from their own employees over who this technology is being sold to, namely branches of the US government such as the Department of Defense and Immigration and Customs Enforcement (ICE). Workers from Google, Microsoft, and now Amazon have signed petitions and quit in protest of the government work.
It’s had some impact: Google released AI ethics principles and publicly affirmed it would not renew its contract with the Department of Defense in 2019. Microsoft told employees in an email that it wasn’t providing AI services to ICE, though that contradicts earlier descriptions of the contract on the company’s website, according to Gizmodo.
This debate, playing out in a very public manner, marks a major shift in how tech companies and their employees talk about artificial intelligence. Until now, everyone has been preaching the Gospel of Good AI: Microsoft CEO Satya Nadella has called it the most transformational technology of a generation, and Google CEO Sundar Pichai has gone even further, saying AI will have an impact comparable to fire and electricity.
That impact goes well beyond ad targeting and tagging photos on Facebook, whether researchers like it or not, says Gregory C. Allen, an adjunct fellow at the Center for New American Security, who co-authored a report called “Artificial Intelligence and National Security” last year. While it seems we’re collectively OK with AI being used to spot celebrities at the royal wedding, some of us have second thoughts when it comes to picking out protestors in a crowd or shoppers in a busy shopping mall.
AI-driven facial recognition is what’s called a dual-use technology, meaning its deployment can have a positive or negative impact depending on how it’s used. Researchers from OpenAI and the Future of Humanity Institute grappled with this distinction in a February 2018 report laying out the potential dangers of AI.
“Surveillance tools can be used to catch terrorists or oppress ordinary citizens. Governments and powerful private actors will have access to many of these AI tools and could use them for public good or harm,” the report says.
To spur further advancements, tech companies have published their research and spread their code on the open internet, a boon to researchers around the world. But that practice also means that others, including defense contractors, have access.
In September, the FBI announced that it had achieved “full operational capability” of its Next Generation Identification system, a billion-dollar AI project to replace the bureau’s old fingerprinting system with the world’s biggest biometric database.
For the first time, this makes it possible to link multiple kinds of biometric identification (including voice features, palm prints, and even DNA profiles) and to combine civil and criminal information within one master database.
But perhaps most controversially, it will also use state-of-the-art facial recognition technology, allowing the government to identify suspects across a gigantic database of images collected from mug shots, surveillance cameras, employment background checks, and digital devices seized with a search warrant.
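To make the matching step above concrete: modern facial-recognition systems typically reduce each face image to a numeric “embedding” vector and then identify a person by finding the database entry whose vector lies closest to the probe’s, within some distance threshold. The sketch below is purely illustrative; the record names, the tiny 4-dimensional vectors, and the threshold value are all invented for this example (real systems use embeddings of 128 or more dimensions produced by a neural network, and the FBI’s actual implementation is not public).

```python
import math

# Hypothetical database of face embeddings. The record IDs and the
# 4-dimensional vectors are made up for illustration; real embeddings
# come from a neural network and have 128+ dimensions.
DATABASE = {
    "mugshot_0412": [0.11, 0.82, 0.43, 0.27],
    "license_9921": [0.90, 0.14, 0.65, 0.08],
    "surveillance_cam_7": [0.12, 0.80, 0.41, 0.30],
}

def euclidean(a, b):
    """Straight-line distance between two embedding vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(probe, database, threshold=0.15):
    """Return the closest database record, or None if nothing is close enough.

    The threshold is an assumed tuning parameter: lower it and you get
    fewer false matches but more missed ones. Choosing it is exactly the
    kind of policy decision privacy advocates worry about.
    """
    best_id, best_dist = None, float("inf")
    for record_id, embedding in database.items():
        dist = euclidean(probe, embedding)
        if dist < best_dist:
            best_id, best_dist = record_id, dist
    return best_id if best_dist <= threshold else None

# A face captured on the street, encoded into the same vector space:
probe_face = [0.10, 0.81, 0.44, 0.28]
print(identify(probe_face, DATABASE))          # matches "mugshot_0412"
print(identify([0.5, 0.5, 0.5, 0.5], DATABASE))  # no record close enough: None
```

Note how the “dual-use” nature of the technology shows up even in this toy: the same `identify` function works whether the probe face belongs to a fugitive or to a protester.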
There’s something unsettling about the notion that the government is actively trying to recognize its citizens by face: It suggests that the simple liberty of going out in public anonymously could become a thing of the past.
The new system has come under fire from privacy rights advocates who fear that the federal databases will eventually be cross-referenced against other data, connecting your face to your medical, financial, legal, and driver’s license records. And there is no real way to opt out; as Jennifer Lynch, a senior staff attorney for the Electronic Frontier Foundation, testified during a 2012 US Senate hearing on facial recognition technology,
“Americans cannot participate in society without exposing their faces to public view.” Even Eric Schmidt, executive chairman of Google (which has secured several patents to boost facial recognition accuracy for its products) has said that he finds the technology a little “creepy.”
Currently, the Australian government is planning to allow private companies to access the National Facial Recognition Database. Documents obtained under Freedom of Information laws reveal that both “major telecommunications companies” and financial institutions have expressed interest in using the expansive database, which contains photographs of Australians drawn from facial forms of identification such as driver’s licenses, employment IDs, and military IDs.
While Google, Microsoft, and Amazon are likely the most capable of providing large-scale computing to government entities, they are far from the only ones. But no matter who implements the technology, the public can push for transparency and press for ethical guidelines to be put in place.
How exactly this will all play out is yet to be determined, but a privacy debate seems certain to ensue in the coming months. When the government profits from violating privacy rights, at what point will any company be able to buy any piece of information about you for a fee?
Be Safe – Backup Your Data Regularly!
And don’t forget to take advantage of our FREE subscription to the TechViews News Updates. You will receive all of our updates and posts the moment they are published.