Microsoft exec warns that unregulated AI development will lead to an Orwellian future


A Microsoft executive predicts that runaway artificial intelligence (AI) could lead to an Orwellian future if appropriate laws to protect the public aren’t enacted soon.

Microsoft president Brad Smith made the comments during an episode of the BBC's "Panorama," aired May 26, that focused on the potential dangers of AI and the race between the United States and China to develop the technology.

“I’m constantly reminded of George Orwell’s lessons in his book ‘1984,’” Smith said. “The fundamental story was about a government that could see everything that everyone did and hear everything that everyone said all the time. Well, that didn’t come to pass in 1984, but if we’re not careful, that could come to pass in 2024.”

Smith’s warning comes about a month after the European Union released a draft of regulations attempting to set limits on how AI can be used. Few, if any, similar regulations exist in the U.S., where legislation is largely focused on limiting regulation and promoting the technology for national security purposes.

Artificial intelligence turns facial recognition into a powerful surveillance tool

Artificial intelligence generally refers to machines that can learn to solve problems automatically, without the need for a human operator. Many of today's AI systems rely on machine learning, a family of computational algorithms used to recognize patterns in large amounts of data. In theory, this means that a machine-learning system becomes more accurate as it is fed more data.
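The "more data, more accuracy" idea can be sketched with a toy classifier. The example below is purely illustrative (the synthetic two-class data and the nearest-centroid approach are this sketch's own assumptions, not anything described in the article): the model "learns" by averaging the examples of each class, and its estimate of each class center generally sharpens as training data grows.

```python
import random

def make_points(n, center, spread, rng):
    """Generate n noisy 2-D points around a class center."""
    return [(rng.gauss(center[0], spread), rng.gauss(center[1], spread))
            for _ in range(n)]

def centroid(points):
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def train(data):
    # "Learning" here is simply estimating one centroid per class label.
    return {label: centroid(pts) for label, pts in data.items()}

def predict(model, p):
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    # Assign the point to the class whose centroid is nearest.
    return min(model, key=lambda label: dist2(model[label], p))

def accuracy(model, test):
    correct = sum(1 for label, p in test if predict(model, p) == label)
    return correct / len(test)

rng = random.Random(42)
# Two overlapping classes: small vs. large training sets from the same source.
train_small = {"A": make_points(5, (0, 0), 1.5, rng),
               "B": make_points(5, (2, 2), 1.5, rng)}
train_large = {"A": make_points(500, (0, 0), 1.5, rng),
               "B": make_points(500, (2, 2), 1.5, rng)}
test = ([("A", p) for p in make_points(200, (0, 0), 1.5, rng)] +
        [("B", p) for p in make_points(200, (2, 2), 1.5, rng)])

acc_small = accuracy(train(train_small), test)
acc_large = accuracy(train(train_large), test)
print(f"accuracy with 10 training examples:   {acc_small:.2f}")
print(f"accuracy with 1000 training examples: {acc_large:.2f}")
```

With only five examples per class the estimated centers are noisy, while the large training set typically places them close to the true cluster centers. Real facial-recognition systems use deep neural networks rather than centroids, but the same dynamic, more data yielding a sharper statistical model, is what makes mass surveillance data so valuable to the systems Smith describes.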

Machine learning has been applied to everything from basic mathematical theory to simulations of the early universe. But the technology has also found use by governments and corporations for surveillance, the most common form of which is facial recognition.

“Facial recognition is an extraordinarily powerful tool in some ways to do good things, but if you want to surveil everyone on a street, if you want to see everyone who shows up at a demonstration, you can put AI to work,” Smith explained. “And we’re seeing that in certain parts of the world.”

China, for example, has started using AI in ways both mundane and alarming. In some Chinese cities, facial recognition is used in place of tickets on buses and trains.

In 2017, the Chinese government laid out a plan outlining its ambition to become the world leader in AI by 2030, according to Reuters.

China wants to lead the world in AI-powered surveillance

Last year, a report by the World Intellectual Property Organization (WIPO), the UN's patent agency, revealed that China had topped the world in artificial intelligence patent applications, pushing the U.S. out of the top spot it had held since the global filing system was first set up more than four decades ago. In 2019 alone, 58,990 applications were filed from China, beating the 57,840 filed from the United States.

Research by Comparitech shows that cities in China have the heaviest CCTV surveillance in the world. The same study also finds that China hosts 54 percent of the world's 770 million CCTV cameras.

“I don’t think that Orwell would ever [have] imagined that a government would be capable of this kind of analysis,” said Conor Healy, director of IPVM, to the BBC.

Beijing's heavy-handed approach to surveillance is most heavily felt in the autonomous Xinjiang region, where the Chinese government has been using machine learning and facial recognition to track people, especially members of the Uyghur minority, assessing their guilt before they're arrested and interrogated, according to the BBC. The Council on Foreign Relations, a New York-based think tank, estimates that this has led to the detention of about three million Uyghurs in "reeducation camps" since 2017, usually without any criminal charges or legal avenues for release.

EU looking to regulate AI, but US is too focused on military applications

The European Union’s potential AI regulations would ban systems that attempt to circumvent users’ free will or systems that enable any kind of “social scoring” by the government.

In addition, applications deemed "high risk" must meet requirements of transparency, security and oversight before being put on the market. These include the use of AI for critical infrastructure, law enforcement, border control and biometric identification, including face- and voice-identification systems. Systems such as customer-service chatbots, on the other hand, are considered low risk and are not subject to scrutiny.

On the other side of the pond, the U.S. federal government has largely focused on encouraging the development of artificial intelligence for national security and military purposes.

But this focus has led to controversy. For example, in 2018 Google killed "Project Maven," a contract with the Pentagon that would have automatically analyzed video taken by drones and other military aircraft. The company argued that the project would only have flagged objects for human review, but critics feared it could be used to automatically target people for drone strikes. The project was brought to light by whistleblowers within Google itself, leading to public pressure strong enough that the company called it off.

Project Maven is just one AI project that the U.S. military spent money on. The Pentagon now spends more than $1 billion a year on contracts related to the technology.

According to Bernard Trout, a professor at the Massachusetts Institute of Technology who teaches a professional course on ethics and artificial intelligence, military and national security applications of machine learning are inevitable, given China's enthusiasm for achieving supremacy in the field.

“You cannot do very much to hinder a foreign country’s desire to develop these technologies,” Trout said in an interview with Live Science. “And therefore, the best you can do is develop them yourself to be able to understand them and protect yourself, while being the moral leader.”

Efforts to regulate AI in the U.S. are largely being led by state and local governments. In 2019, San Francisco banned government use of facial recognition software, a move that many cities soon followed. More recently, King County in Washington state did the same, making it the first county in the U.S. to do so.

“If we don’t enact, now, the laws that will protect the public in the future, we’re going to find the technology racing ahead,” Smith said, “and it’s going to be very difficult to catch up.”
