
Face-reading AI Will Be Able to Detect Your Politics and IQ

Every Perspective Counts
Contribute Your Thoughts
Empower Our World

Complete the Private Comments Below

Opening Insights: Dangers of Technology

In 2012, researchers in a U.S. Army Research Laboratory-funded study led by Carnegie Mellon University in Pittsburgh, Pennsylvania, presented a paper demonstrating how activity forecasting would work. The study focused on the "automatic detection of anomalous and threatening behavior" by simulating the ways humans filter and generalize information from their senses. Built on a high-level artificial intelligence infrastructure called a cognitive engine, the program learned to connect relevant signals and behaviors with background knowledge.

For centuries, scientists, visionaries and leaders have warned us, not of the dangers of technology, but rather of our short-sighted intentions and implementations of these advancements. Science fiction authors, scriptwriters and movie producers have been trying to wake us up. Science and research have proven the fiction writers RIGHT. Now the question is: who is listening, who will hear, and WHAT CAN WE DO ABOUT IT?

Informational Insights: Face-reading AI

The Guardian article by Sam Levin, dated September 12, 2017, discusses the work of Stanford professor Michal Kosinski, which suggests technology can detect whether a person is gay or straight and will soon reveal traits such as criminal predisposition. His study has gained much attention, and much criticism, but as he correctly states, his research and software are nothing new.

Is Kosinski's research an exciting prospect, or the next step toward a "Minority Report" society, where our political beliefs, IQs and behaviors will all be predetermined not by ourselves but by an algorithm?

Voters have a right to keep their political beliefs private. But according to some researchers, it won’t be long before a computer program can accurately guess whether people are liberal or conservative in an instant. All that will be needed are photos of their faces.

Michal Kosinski – the Stanford University professor who went viral last week for research suggesting that artificial intelligence (AI) can detect whether people are gay or straight based on photos – said sexual orientation was just one of many characteristics that algorithms would be able to predict through facial recognition.

Using photos, AI will be able to identify people’s political views, whether they have high IQs, whether they are predisposed to criminal behavior, whether they have specific personality traits and many other private, personal details that could carry huge social consequences, he said.

Kosinski outlined the extraordinary and sometimes disturbing applications of facial detection technology that he expects to see in the near future, raising complex ethical questions about the erosion of privacy and the possible misuse of AI to target vulnerable people.

“The face is an observable proxy for a wide range of factors, like your life history, your development factors, whether you’re healthy,” he said.

Faces contain a significant amount of information, and using large datasets of photos, sophisticated computer programs can uncover trends and learn how to distinguish key traits with a high rate of accuracy. With Kosinski’s “gaydar” AI, an algorithm used online dating photos to create a program that could correctly identify sexual orientation 91% of the time with men and 83% with women, just by reviewing a handful of photos.
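To make the mechanics concrete, here is a minimal sketch of how this kind of trait classifier is commonly assembled: photos are first reduced to fixed-length face embeddings by some face-recognition model, and a simple classifier is then trained on labeled examples and scored on held-out photos. This is an illustrative sketch only, not Kosinski's actual pipeline; the random arrays stand in for real embeddings and labels, and the library choices are assumptions.

# Minimal sketch of a face-trait classifier (illustrative, not the study's code).
# Assumption: each photo has already been converted to a fixed-length embedding
# vector by a face-recognition model; random numbers stand in for those
# embeddings and for the binary trait labels.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 128))    # placeholder face embeddings
y = rng.integers(0, 2, size=1000)   # placeholder binary trait labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train)

# Score held-out photos; with real embeddings and labels the AUC would reflect
# how well facial features predict the trait in question.
scores = clf.predict_proba(X_test)[:, 1]
print("held-out AUC:", roc_auc_score(y_test, scores))

With random placeholder data the AUC hovers around 0.5 (chance); the reported 91% and 83% figures come from the study's real photos and labels, not from anything this sketch could reproduce.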

Kosinski’s research is highly controversial, and faced a huge backlash from LGBT rights groups, which argued that the AI was flawed and that anti-LGBT governments could use this type of software to out gay people and persecute them. Kosinski and other researchers, however, have argued that powerful governments and corporations already possess these technological capabilities and that it is vital to expose possible dangers in an effort to push for privacy protections and regulatory safeguards, which have not kept pace with AI.

Kosinski, an assistant professor of organizational behavior, said he was studying links between facial features and political preferences, with preliminary results showing that AI is effective at guessing people’s ideologies based on their faces.

This is probably because political views appear to be heritable, as research has shown, he said. That means political leanings are possibly linked to genetics or developmental factors, which could result in detectable facial differences.

Kosinski said previous studies have found that conservative politicians tend to be more attractive than liberals, possibly because good-looking people have more advantages and an easier time getting ahead in life.

Kosinski said the AI would perform best for people who are far to the right or left and would be less effective for the large population of voters in the middle. “A high conservative score … would be a very reliable prediction that this guy is conservative.”
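One way to read this claim is as simple thresholding on the classifier's score: extreme scores are treated as reliable predictions, while the ambiguous middle is left alone. The small function below illustrates that idea only; the 0.2/0.8 cutoffs and label names are assumptions for the example, not parameters from the research.

# Illustrative score thresholding: act only on confident, extreme scores.
# The cutoff values are arbitrary example numbers.
def label_from_score(score: float, low: float = 0.2, high: float = 0.8) -> str:
    if score >= high:
        return "predict: conservative"
    if score <= low:
        return "predict: liberal"
    return "abstain: score too close to the middle to be reliable"

for s in (0.05, 0.50, 0.93):
    print(f"{s:.2f} -> {label_from_score(s)}")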

[..]

Facial recognition may also be used to make inferences about IQ, said Kosinski, suggesting a future in which schools could use the results of facial scans when considering prospective students. This application raises a host of ethical questions, particularly if the AI is purporting to reveal whether certain children are genetically more intelligent, he said: “We should be thinking about what to do to make sure we don’t end up in a world where better genes means a better life.”

Some of Kosinski’s suggestions conjure up the 2002 science-fiction film Minority Report, in which police arrest people before they have committed crimes based on predictions of future murders. The professor argued that certain areas of society already function in a similar way.

[..]

Kosinski predicted that with a large volume of facial images of an individual, an algorithm could easily detect if that person is a psychopath or has high criminal tendencies. He said this was particularly concerning given that a propensity for crime does not translate to criminal actions: “Even people highly disposed to committing a crime are very unlikely to commit a crime.”

[..]

The law generally considers people’s faces to be “public information”, said Thomas Keenan, professor of environmental design and computer science at the University of Calgary, noting that regulations have not caught up with technology: no law establishes when the use of someone’s face to produce new information rises to the level of privacy invasion.

Keenan said it might take a tragedy to spark reforms, such as a gay youth being beaten to death because bullies used an algorithm to out him: “Now, you’re putting people’s lives at risk.”

Even with AI that makes highly accurate predictions, there will still be a percentage of predictions that are incorrect.

“You’re going down a very slippery slope,” said Keenan, “if one in 20 or one in a hundred times … you’re going to be dead wrong.”

Source: https://www.theguardian.com/technology/2017/sep/12/artificial-intelligence-face-recognition-michal-kosinski
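Keenan's closing point about error rates is easy to put into numbers. The hypothetical back-of-the-envelope calculation below assumes such a system is applied to one million people and simply converts a one-in-20 or one-in-100 error rate into the absolute number of wrong calls; the population figure is an illustrative assumption, not a number from the article.

# Hypothetical calculation: small error rates still mean large absolute
# numbers of misclassified people once a system is applied at scale.
population = 1_000_000  # illustrative assumption

for label, error_rate in [("1 in 20", 1 / 20), ("1 in 100", 1 / 100)]:
    wrong = population * error_rate
    print(f"error rate {label}: ~{wrong:,.0f} wrong calls per {population:,} people scanned")

At one in 20, that is roughly 50,000 people per million who would be, in Keenan's words, "dead wrong" calls.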

Possibilities for Consideration: Technology Takes Control

As technological advances surpass believability, driven by the urge to help and save humanity, what can we do? Should we:

  • Watch technology and the people who control the technology take control?
  • Be part of a solution that supports the humane application of technology, so that it serves humanity rather than controls it?

Add Your Insight:

The urge to save humanity is almost always only a false-face for the urge to rule it.
H.L. MENCKEN




FOOTNOTE of Importance


Our world is experiencing an incredible revolution powered by technology that has used its tools to:

  • deceive the public
  • disrupt tradition
  • divide the people

This has inadvertently resulted in a Fear-based Shadow Culture™ that has hurt many people.
An influential group has joined together to deliver a proven antidote: shifting from the impersonal development of Artificial Intelligence (AI) that replaces people to the use of AI to empower Human Intelligence (HI).

To Empower The People:

Distraction Junction

What is a Modern Hero?:

We invite Heroes and Visionaries
to explore accessing these powerful methodologies and resources
to achieve their individual visions.



