Algorithms of oppression (re)producing racism, pushing back on AI hype, a new approach to try to make GDPR's protections real, state privacy legislation ... and more!
Dr. Tiera Chantè Tanksley on SAGE Perspectives Blog (perspectivesblog.sagepub.com)
"Concerned about how seeing images of Black people dead and dying would affect young social media users, I conducted a study to understand how digitally mediated traumas were impacting Black girls’ mental and emotional wellness. As I completed interviews with nearly 20 Black girls (ages 18-24) across the US and Canada, my initial fears were quickly confirmed: Black girls reported unprecedented levels of anxiety, depression, fear and chronic stress from encountering Black death online. The most common phrases participants used were “traumatizing,” “exhausting” and “PTSD.” Many of the girls endured mental, emotional and physiological effects, including insomnia, migraines, nausea, prolonged “numbness” and dissociation.... Mental health concerns directly affected schooling experiences, with many girls too overwhelmed, triggered or physically exhausted to fully engage in academics following high profile police killings."
Khari Johnson on WIRED (wired.com)
The Department of Justice has warned SafeRent, a provider of tenant-screening software, that its tenant-screening algorithms must comply with fair housing law. SafeRent claims it's exempt from the law because its discriminatory algorithms only advise landlords and don't make decisions, but DOJ's Kristen Clarke isn't having any of that:
“Housing providers and tenant screening companies that use algorithms and data to screen tenants are not absolved from liability when their practices disproportionately deny people of color access to fair housing opportunities.”
Iris van Rooij on Iris van Rooij (irisvanrooijcogsci.com)
As van Rooij points out, it’s almost as if academics are eager to do the PR work for OpenAI (the company that created ChatGPT). Why?
"As Tamar Sharon, professor of Ethics and Political Philosophy and co-director of iHub at the Radboud University, notes in the Dutch newspaper NRC: “the ideals of OpenAI are not credible”, the company is “founded by millionaires based on their ideology of Effective Altruism, EA” and while they talk about making “beneficial AI”, so far this type of tech is realised by exploiting cheap labour of underpaid workers and the push to make Large Language Models (LLMs), such as ChatGPT, larger and larger creates a “gigantic ecological footprint” with implications for “our planet that are far from beneficial for humankind”....
Please join me in resisting and start helping to curb the hype."
Julia Angwin on The Markup (themarkup.org)
As Angwin says, the GDPR is essentially a privacy bill of rights.
Amid rising commercial surveillance, it offered EU residents some essential rights over their data—including the right to see their data, to correct it, to delete it, to restrict or object to specific uses, and to not have data used about their sex, health, religion, political identity, and other sensitive topics....
But many of the rights have been nearly impossible to exercise because legal challenges based on the law have often languished at the Irish Data Protection Commission, which handles most GDPR complaints against Big Tech.
Angwin talks with Tanya O’Carroll, co-founder and former director of Amnesty Tech, who's trying to go around the Irish regulator by filing a lawsuit in a U.K. court against Meta for violating the GDPR.
Mastodon and the Fediverse
Anil Dash (anildash.com)
Mastodon doesn't have full-text search. This is a hassle from a user interface perspective, and as people check out Mastodon and other compatible fediverse software as alternatives to Twitter, there have been several recent attempts to implement search ... all of which have gone down in flames after intense community pushback. As Dash says:
[S]earch is also a signifier to those who pioneered and established the current era of the fediverse, symbolizing the extractive and exploitative hypergrowth systems that often ruined the positivity and promise of the human web.
One of the issues with all of the previous search proposals is that they're opt-out based: posts get indexed for search unless people say they shouldn't. Dash suggests a consent-based, opt-in approach, which is much better aligned with the fediverse's values – and with trends in privacy law.
Kashmir Hill on the New York Times (nytimes.com)
MSG Entertainment resorted to facial recognition technology to kick out legal foes, but some have undermined the ban using a law passed to protect theater critics in 1941.
Kris Shrishak on Irish Council for Civil Liberties (iccl.ie)
ICCL has received documents indicating that the European Parliament requested CCTV cameras with facial recognition capability.
State privacy legislation
Odia Kagan on linkedin.com
New York's S 365 is the latest version of the New York Privacy Act. It follows GDPR's terminology (controllers, processors, and personal data) and takes an approach similar to California's on deidentified data, sale of data, and sensitive data. It also has requirements for automated decision making. It'll be interesting to see how far it gets this session.
T.A. DeFeo on The Center Square (thecentersquare.com)
South Carolina ranked in the middle of the pack nationwide for its laws governing online privacy. The Palmetto State scored 5.5 out of 25 points in a new analysis from Comparitech, a website focused on cyber security and online privacy. According to Rebecca Moody, head of data research at Comparitech,
“Its score stems from data disposal laws which cover companies and government entities, legislation to protect K-12 student information, a shield law to protect journalists, and an insurance data security law. It also scores half a point thanks to its protections for genetic testing when it comes to insurance quotes and coverage.”
Melody McAnally on JD Supra (jdsupra.com)
On January 1, 2023, Virginia’s Consumer Data Protection Act (“Virginia Privacy Law”) went into effect. Here's what you need to know.
Chris Vallance on BBC News (bbc.co.uk)
The Online Safety Bill shouldn’t treat community-run sites like big tech firms, its foundation says.
Luca Bertuzzi on EURACTIV (euractiv.com)
Civil society organisations have been excluded from the drafting process of the first international treaty on Artificial Intelligence based on a request of the US to avoid countries’ positions becoming public.
Irelyne Lavery on Global News (globalnews.ca)
With individual data being largely unregulated, AI companies are walking a fine line with the privacy of consumers, according to one expert.
Elizabeth MacBride on CNBC (cnbc.com)
Many individuals and businesses rely on Google and Microsoft email programs created long ago, and the age of big tech email systems is a big cybersecurity risk.
James Vincent on The Verge (theverge.com)
AI art gets its first major copyright lawsuit
Sam Levin on The Guardian (theguardian.com)
The social media giant sued to prevent a surveillance company partnering with the Los Angeles police from obtaining data
Daniel Solove on TeachPrivacy (teachprivacy.com)
The concept of sensitive data is unworkable and counterproductive, especially in an age of Big Data. It is also based on a conceptual mistake.
Thomas Germain on Gizmodo (gizmodo.com)
It’s 10 pm. Do you know where your data is? Chad Engelgau does. He’s the CEO of Acxiom, a data broker. Your info is probably on one of his servers.
Dan Goodin on Ars Technica (arstechnica.com)
Israeli firm says it uses AI to analyze “billions of ‘human pixels’ and signals.”
Apple Podcasts (podcasts.apple.com)
Timnit Gebru joins Sean Illing to discuss whether ethical AI is possible.
‘SweepWizard’ Policing App May Have Exposed Personal Data on Hundreds of Officers and Thousands of Suspects
The A.V. Club on Gizmodo (gizmodo.com)
A flaw in the app’s API reportedly let anyone with a specific URL view data on officers, suspects, and the operations they were engaged in.
Jeffrey D. Neuburger on The National Law Review (natlawreview.com)
At the close of 2022, New York Governor Kathy Hochul signed the “Digital Fair Repair Act” (S4101A/A7006-B) (to be codified at N.Y. GBL §399-nn) (the “Act”).
In new GOP-led House, Rep. Sara Jacobs targets global peacekeeping, digital privacy and children’s issues
Deborah Sullivan Brennan on San Diego Union-Tribune (sandiegouniontribune.com)
Last year, during her first term, Jacobs worked to boost military pay, housing and child care and protect digital privacy for reproductive health.
John Davidson on Australian Financial Review (afr.com)
Merging a class action lawsuit with a complaint to the Australian privacy watchdog could result in billions of dollars in payouts to Medibank customers.
Apple’s Fight To Protect Privacy Has Shaken Up Digital Advertising. Here’s How Marketers Can Thrive In A Cookie-Less World, From An Expert.
Gary Drenik on Forbes (forbes.com)
In our ever-digital world, privacy is the hottest commodity. Brands want to buy it, users want to keep it for themselves, and the middleman collecting all this information doesn’t quite know what’s right.
People of colour: there’s a bias in how pictures are used to depict disease in global health publications
Esmita Charani on The Conversation (theconversation.com)
Through the choice of images in publications, women and children of colour in low and middle income countries were treated with less dignity and respect than those in high income countries.
on Benar News (benarnews.org)
Home minister acknowledges agencies are monitoring social media to thwart “anti-government activities.”