Perhaps of interest. T

---------- Forwarded message ---------
From: MIT Technology Review <[hidden email]>
Date: Fri, Feb 12, 2021 at 11:35 AM
Subject: A sneak peek at The Algorithm
To: <[hidden email]>
Below is a preview of The Algorithm, our exclusive AI newsletter that subscribers receive every Friday. Each week, Karen Hao curates and thoughtfully examines the latest in AI news and research to help our subscribers cut through the jargon and figure out what truly matters and where it's all headed.
If you enjoy this exclusive subscriber benefit, we encourage you to purchase a subscription today to gain access to everything MIT Technology Review has to offer.
We hope you enjoyed this exclusive preview!
Hello Algorithm readers,
In 1964, mathematician and computer scientist Woodrow Bledsoe first attempted the task of matching suspects’ faces to mugshots. He measured out the distances between different facial features in printed photographs and fed them into a computer program. His rudimentary successes would set off decades of research into teaching machines to recognize human faces.
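For a sense of how simple those first systems were, here is a minimal Python sketch of Bledsoe-style matching (the landmark names and measurements are invented for illustration, not taken from his work): describe each face by the distances between a few hand-measured landmarks, then pick the mugshot whose measurements are closest.

import math

def pairwise_distances(landmarks):
    """Turn a dict of (x, y) landmark points into a fixed-order list of pairwise distances."""
    names = sorted(landmarks)
    dists = []
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            (x1, y1), (x2, y2) = landmarks[names[i]], landmarks[names[j]]
            dists.append(math.hypot(x2 - x1, y2 - y1))
    return dists

def best_match(suspect, mugshots):
    """Return the ID of the mugshot whose distance profile is closest to the suspect's."""
    s = pairwise_distances(suspect)
    def score(item):
        m = pairwise_distances(item[1])
        return sum((a - b) ** 2 for a, b in zip(s, m))
    return min(mugshots.items(), key=score)[0]

# Hypothetical measurements in arbitrary units, for illustration only.
suspect = {"left_eye": (30, 40), "right_eye": (70, 40), "nose": (50, 60), "mouth": (50, 80)}
mugshots = {
    "A": {"left_eye": (31, 41), "right_eye": (69, 39), "nose": (51, 61), "mouth": (49, 79)},
    "B": {"left_eye": (25, 45), "right_eye": (80, 45), "nose": (52, 70), "mouth": (52, 95)},
}
print(best_match(suspect, mugshots))  # prints "A", the closer of the two mugshots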
Now a new study shows just how much this enterprise has eroded our privacy. It hasn’t just enabled an increasingly powerful tool of surveillance. The latest generation of deep learning-based facial recognition has also completely disrupted our norms of consent.
This week, I’m sharing the piece I wrote about this research, which includes an interactive version of the graphic below. You can hover over the dots, each of which represents a facial recognition dataset, to see how many images and people it includes.
Deborah Raji, a fellow at the nonprofit Mozilla, and Genevieve Fried, who advises members of the US Congress on algorithmic accountability, examined over 130 of these datasets, compiled across 43 years. They found that, to meet the exploding data requirements of deep learning, researchers gradually abandoned asking for people’s consent. As a result, more and more of people’s personal photos have been incorporated into systems of surveillance without their knowledge.
It has also led to far messier datasets, like those that unintentionally include photos of minors, use racist and sexist labels, or have inconsistent quality and lighting. The trend could help explain the growing number of high-stakes failures of facial recognition systems, such as the false arrests of two Black men in the Detroit area last year.
People were extremely cautious about collecting, documenting, and verifying face data in the early days, says Raji. “Now we don’t care anymore. All of that has been abandoned,” she says. “You just can't keep track of a million faces. After a certain point, you can’t even pretend that you have control.” Read the full story here.
Photo credit: Getty
For more on how facial recognition has affected data privacy, try:
Predictive policing is still racist—whatever data it uses. It’s no secret that predictive policing tools are racially biased. A number of studies have shown that racist feedback loops can arise if algorithms are trained on police data, such as arrests. Police are known to arrest more people in Black and other minority neighborhoods, which leads algorithms to direct more policing to those areas, which leads to more arrests.
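To make that loop concrete, here is a toy Python simulation (all numbers are invented and nothing here comes from the studies mentioned): two neighborhoods with identical underlying crime rates, where each year patrols are allocated in proportion to past arrests and every patrol records arrests at the same rate in both places.

import random

random.seed(0)
true_crime_rate = 0.1            # identical in both neighborhoods
arrests = {"A": 60, "B": 40}     # historical arrest counts: the biased training data
total_patrols = 100

for year in range(5):
    total = sum(arrests.values())
    allocation = {hood: total_patrols * arrests[hood] / total for hood in arrests}
    for hood, patrols in allocation.items():
        # every patrol records crime at the same underlying rate in both neighborhoods
        new_arrests = sum(random.random() < true_crime_rate for _ in range(round(patrols)))
        arrests[hood] += new_arrests
    print(year, arrests)

# Neighborhood A keeps receiving more patrols and accumulating more arrests than B,
# even though the underlying crime rates never differed.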
In their defense, many developers of predictive policing tools say that they have started using victim reports to get a more accurate picture of crime rates across different neighborhoods. In theory, victim reports should be less biased because they aren’t affected by police prejudice or feedback loops.
But new research shows that training predictive tools on victim reports does little to lessen bias. Nil-Jana Akpinar and Alexandra Chouldechova at Carnegie Mellon University built their own predictive algorithm trained on victim report data, using the same model found in several popular tools, including PredPol, the most widely used system in the US.
When they compared their tool’s predictions against actual crime data for each district, they found that it made significant errors. For example, in a district where few crimes were reported, the tool predicted only around 20% of the actual hotspots—locations with a high rate of crime. In a district with a high number of reports, on the other hand, it predicted 20% more hotspots than there really were. Read more here.
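For illustration, here is a minimal Python sketch of that kind of hotspot comparison (not the authors' code, and all numbers are invented): rank grid cells by the model's predicted intensity, take the top k as predicted hotspots, and check how many of the cells with the most recorded crime they actually capture.

def top_k(scores, k):
    """IDs of the k cells with the highest scores."""
    return set(sorted(scores, key=scores.get, reverse=True)[:k])

# Hypothetical per-cell values for one district.
predicted_intensity = {"cell1": 0.9, "cell2": 0.7, "cell3": 0.2, "cell4": 0.1, "cell5": 0.05}
observed_crimes     = {"cell1": 2,   "cell2": 1,   "cell3": 9,   "cell4": 8,   "cell5": 7}

k = 2
predicted_hotspots = top_k(predicted_intensity, k)
actual_hotspots = top_k(observed_crimes, k)
hit_rate = len(predicted_hotspots & actual_hotspots) / len(actual_hotspots)
print(f"{hit_rate:.0%} of actual hotspots were predicted")  # 0% in this toy example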
Photo credit: David Mcnew / Getty
Fractals can help AI learn to see more clearly—or at least more fairly. Researchers in Japan have shown a novel way to teach image-recognition systems: by “pretraining” them first on computer-generated fractals. Pretraining is a phase in which an AI learns some basic skills before being trained on more specialized data. A system for diagnosing medical scans, for example, might first learn to identify basic visual features like shapes and outlines by pretraining on ImageNet, a database with more than 14 million photos of everyday objects. It can then be fine-tuned on a smaller database of medical images until it recognizes subtle signs of disease.
The trouble is, assembling a dataset like ImageNet takes a lot of time and effort. It can also embed racism and sexism and include images of people without their consent. Fractal patterns, by contrast, don’t suffer from these issues, and they turn up in everything from trees and flowers to clouds and waves. So the researchers created FractalDB, an endless supply of computer-generated fractals, and used it to pretrain their algorithm before fine-tuning it with a set of real images. They found that it performed almost as well as models pretrained on state-of-the-art datasets like ImageNet. Read more here.
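Out of curiosity, here is a rough Python sketch of that recipe as I read it (not the FractalDB authors' code; every parameter here is arbitrary): each synthetic "class" is a randomly sampled iterated function system, and labeled training images are rendered from it with the chaos game. A real pipeline would pretrain an image classifier on these (image, label) pairs and then fine-tune it on the actual task data.

import random

def random_ifs(n_maps=3):
    """Sample an iterated function system: a list of random affine maps (a, b, c, d, e, f)."""
    return [[random.uniform(-0.5, 0.5) for _ in range(6)] for _ in range(n_maps)]

def render(ifs, n_points=20000, size=64):
    """Render the IFS attractor to a size x size binary image via the chaos game."""
    img = [[0] * size for _ in range(size)]
    x = y = 0.0
    for _ in range(n_points):
        a, b, c, d, e, f = random.choice(ifs)
        x, y = a * x + b * y + e, c * x + d * y + f
        if -1.0 <= x <= 1.0 and -1.0 <= y <= 1.0:   # keep only points inside the frame
            px = int((x + 1) / 2 * (size - 1))
            py = int((y + 1) / 2 * (size - 1))
            img[py][px] = 1
    return img

# A tiny synthetic "dataset": a few classes, each defined by its own IFS.
random.seed(42)
dataset = []
for class_id in range(3):
    ifs = random_ifs()                    # one IFS defines one class
    for _ in range(2):                    # chaos-game randomness gives some variation
        dataset.append((render(ifs), class_id))
print(len(dataset), "synthetic labeled images")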
If you come across interesting research papers, send them my way to algorithm@....
Clearview AI’s facial recognition app is illegal, says Canada
Authorities said the company needed citizens’ consent to use their biometric information, and told the firm to delete facial images from its database. (NYT)
Amazon is using AI-enabled cameras to watch delivery drivers on the job
They will record drivers in their vehicles “100% of the time” to flag their safety infractions. (CNBC)
A chatbot to reincarnate your deceased loved ones
Microsoft filed a patent for the technology in 2017. It says it doesn’t have plans to actually build it, but that doesn’t stop others from doing so. (Washington Post) + South Korea has used AI to bring back a dead superstar's voice (CNN)
Two Google engineers resign over the forced dismissal of Timnit Gebru
The fallout from the treatment of the Ethical AI team’s co-lead continues. (Reuters) + Our recap of the events (TR)
AI needs to be able to understand all the world’s languages
Mobile technology is not accessible to most of the 700 million illiterate people around the world. Speech recognition could help fix that. (Scientific American)
How censorship influences artificial intelligence
Algorithms learn to associate words with other words. “Democracy” can equal “stability”—or “chaos.” (WIRED) + AI and the list of dirty, obscene, and otherwise bad words (WIRED)
The adoption of AI in health care comes with uncomfortable trade-offs
More data means better algorithms and better diagnoses, but also less privacy and perhaps greater inequality. (VentureBeat)
The main goal of the algorithm is always to get you to pay, never to actually ensure you meet somebody in real life, as much as we tried to lie to ourselves that it was.
—throwaway492130921, a Reddit user, commenting on a viral thread of dating app employees sharing their darkest secrets
FRIAM Applied Complexity Group listserv
Zoom Fridays 9:30a-12p Mtn GMT-6 bit.ly/virtualfriam
un/subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/
archives: http://friam.471366.n2.nabble.com/