Advocacy in Technology and Society/Civic and GovTech, Regulation in the time of Big Tech

Civic and GovTech: Regulation in The Time of Big Tech

This week we discussed the ethics of surveillance technology, machine learning, and black-box algorithms, and how we may be automating discrimination through data bias. We discussed the various uses and misuses of facial recognition technology and did a case study of an apartment complex in Brooklyn where a landlord used facial recognition to monitor tenants. The tenants felt violated by the technology, organized together, and successfully got the biometric system removed from their complex.

Machine Learning

Machine learning is a subfield of artificial intelligence in which a program learns rules from data instead of being given those rules explicitly. For example, feeding a machine learning program both the input data and the corresponding outputs lets it work out a formula for how the outputs were produced from the inputs. The following pizza analogy gives a brief description of how this differs from traditional programming; a minimal code sketch follows the list.

  • Traditional Programming: In traditional programming, we have "ingredients" (input), which are combined following a "recipe" (logic) to create a "pizza" (product).
    • Ingredients (input: what you have available)
    • Recipe (logic: how everything is put together)
  • Machine Learning: In machine learning, we have the "ingredients" and the "pizza", and the program works backwards to discover the "recipe".
    • Pizza (target: what is made)
    • Ingredients (input: what's available)
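
The sketch below is one loose illustration of the analogy, assuming the scikit-learn library and made-up ingredient quantities and prices: in the "traditional" function the recipe is written by hand, while the model is only shown inputs and outputs and recovers the recipe on its own.

  from sklearn.linear_model import LinearRegression

  # Traditional programming: the "recipe" (logic) is written by hand.
  def pizza_price(dough_kg, cheese_kg):
      return 4.0 * dough_kg + 10.0 * cheese_kg   # the rule is known in advance

  # Machine learning: we only have ingredients (inputs) and pizzas (outputs)...
  ingredients = [[1.0, 0.2], [1.5, 0.3], [2.0, 0.5], [1.2, 0.4]]   # dough kg, cheese kg
  pizzas = [pizza_price(d, c) for d, c in ingredients]             # observed prices

  # ...and ask the model to recover the "recipe" that connects them.
  model = LinearRegression().fit(ingredients, pizzas)
  print(model.coef_)   # approximately [4.0, 10.0]: the learned recipe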

Surveillance technology

Surveillance technology is any device or system that collects and stores data in order to track and monitor an individual or community. The data could include biometric information, location history, or even dietary preferences. The combination of data and A.I. can be helpful for tasks like spotting unauthorized credit card purchases, but it can also be turned against marginalized communities and contribute to gentrification.
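
As a rough sketch of the benign use case, assuming scikit-learn and invented transaction amounts, an anomaly detector can flag a purchase that does not match a cardholder's usual pattern:

  from sklearn.ensemble import IsolationForest

  # Typical purchase amounts for one cardholder, plus one out-of-pattern charge.
  amounts = [[12.5], [8.0], [15.2], [9.9], [11.3], [14.1], [10.7], [950.0]]

  # Fit an anomaly detector and label each charge: 1 = normal, -1 = flagged.
  detector = IsolationForest(contamination=0.1, random_state=0).fit(amounts)
  print(detector.predict(amounts))   # the 950.0 charge should be the one flagged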

Black box algorithms

Machine learning generates algorithms based on the data it is given. These algorithms (also called predictive algorithms or models) are often complicated and a mystery even to the designers of the program. The designers don't always know what connections the program has made or what predictions it will continue making. The potential for good or harm is unknown, but that doesn't prevent designers and corporations from pushing these systems into the real world as commercial products.
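
To make the "mystery" concrete, here is a small sketch assuming scikit-learn and synthetic data: the trained model happily produces predictions, yet all a designer can inspect directly is a pile of numeric weights with no attached explanation.

  from sklearn.datasets import make_classification
  from sklearn.neural_network import MLPClassifier

  # Synthetic data standing in for whatever "primary data" the system was fed.
  X, y = make_classification(n_samples=200, n_features=10, random_state=0)

  # Train a small neural network on it.
  model = MLPClassifier(hidden_layer_sizes=(20, 20), max_iter=2000,
                        random_state=0).fit(X, y)

  print(model.predict(X[:1]))                # the model gives an answer...
  print(sum(w.size for w in model.coefs_))   # ...backed by 620 opaque weights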

Case study: Atlantic Towers facial recognition technology[1]

While facial recognition technology has been normalized in personal devices, such as unlocking your phone or logging into your banking app, its use in the public sphere to track and surveil citizens has long been criticized. In 2019, an apartment complex in Brooklyn came under public scrutiny when the landlord installed facial recognition technology to collect data on tenants and gatekeep everyone entering and leaving the building. The tenants felt violated and came together to take action. Long-term resident Tranae Moran[2] organized the neighbors, who filed a complaint with the state and won against the landlord. The group has continued to raise the concern beyond the building, and its fight helped inspire the “No Biometric Barriers Housing Act.”

Recent legislation

In 2019, San Francisco became the first city in the US to ban facial recognition technology.[3] The "Stop Secret Surveillance" ordinance prevents local government agencies, including the police department, from using the technology. Businesses and private individuals can continue using it, and the ban does not apply to federal agencies or to tech giants like Amazon and Microsoft, which have faced separate pressure over selling facial recognition to law enforcement.

Sparked by the Atlantic Towers incident, Brooklyn Representative Yvette Clarke, together with Representatives Pressley and Tlaib, introduced the “No Biometric Barriers Housing Act of 2019”.[4] If the bill becomes law, it would ban the use of facial and other biometric recognition technology in most federally funded housing overseen by agencies like HUD. The bill would further require HUD to report on how this emerging technology plays out in public housing.

To prepare for this week's discussion, students read the following texts:

New York tenants fight as landlords embrace facial recognition cameras, by Erin Durkin

Organizing as Joy: An Ocean-Hill Brownsville Story, with Tranae Moran and Fabian Rogers

Get Ready for New York City’s New Biometric Identifier Information Law

Insights

The common theme among the day's student presentations and class material was surveillance and regulation. Arguments were presented both for and against surveillance. One student shared that when her mom drove cross-state alone for eight hours, being able to check her location at any time was reassuring. Others shared that parents and in-laws can be overbearing when given access to location and battery-level data. The class concluded that the technology isn't inherently harmful; it depends on who has access to it and how they use the information. One suggestion was to give users more agency by notifying them when someone checks their location. We agreed there needs to be better regulation to prevent misuse, which led to a conversation on accountability.

Accountability concerns

With machine learning and automation becoming the new standard, the largest concern was who would be held accountable when harm is done by a faceless machine, and who is responsible for mitigating the danger and preventing harmful designs. For a long time, tech giants have enjoyed an experimental space with few restrictions on design or policy. That space allowed them to create and accumulate innovation, talent, money, and historic power. However, with unchecked power came ethically questionable products that have led to the automation of racism, sexism, ableism, and more, as well as growing concern over social media addiction among youth and the high rates of mental health issues tied to social media use. Silicon Valley's “do first, apologize later” motto reflects poorly on the dominant group, which has been careless about products and algorithms that burden marginalized communities.

Running themes

Surveillance isn't inherently bad; what matters is who has access to it and whether they weaponize it.

Surveillance is regularly seen as a negative for the public, often thought of as a way for the government to watch over and control its people, but there may be more to it than that. As displayed in a variety of presentations this week, surveillance is much more nuanced than some make it out to be. The good and the ugly of surveillance technology depend more on the user and the purpose for which it is being used.

For example, one student discussed how a tracking app like Life360 has really benefited her and her family: they tend to be spread across the country and her father works a potentially dangerous job, so having the app saves all of them from worrying too much about whether the family is safe. Another student had a very different reaction to the same app; she wasn't comfortable sharing her location, but her family felt so strongly about her using the app that she ended up feeling she was being tracked without her consent.

Other examples from the day included ancestry DNA results being uploaded to websites that aid police in making arrests but can also help solve Doe cases, and geotags being great for finding lost items but becoming a threat when quietly used for nefarious purposes. Most surveillance technology seems to operate as a double-edged sword.

Biases in machine learning arise from the data that is fed in.

Is there such a thing as neutral data? This question came up in class this week as we discussed bias in machine learning. Because of the racial dynamics of the society we live in, even “neutral” data points may encode race where race was never part of the data set, something that tends to happen with machine learning. The example used in class concerned loan decisions based on seemingly neutral data such as the mileage on a car: a machine may judge a high-mileage car as less worthy of a loan, without accounting for the fact that it may be primarily BIPOC or lower-income individuals who buy higher-mileage vehicles. The algorithm thus develops a tendency to raise repayment costs for people of color or people experiencing poverty who are looking to take out car loans. This happens largely because algorithms tend to be only as good as their initial programmers and data, and they carry that bias forward, transforming it as the technology evolves.
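
The following is a hypothetical sketch with invented numbers (assuming scikit-learn and NumPy), not a real lending model: the model is never shown group membership, but because the "neutral" mileage feature correlates with it, the model's risk scores still diverge by group.

  import numpy as np
  from sklearn.linear_model import LogisticRegression

  rng = np.random.default_rng(0)
  n = 5000
  # Hypothetical group labels (a protected attribute); never shown to the model.
  group = rng.integers(0, 2, n)
  # Assume, for illustration, that group 1 tends to buy higher-mileage cars
  # (mileage in thousands of miles).
  mileage = rng.normal(60 + 40 * group, 15, n)
  # Historical defaults in the training data track mileage, so they also track group.
  default = (rng.random(n) < 0.10 + 0.20 * (mileage > 90)).astype(int)

  # A "group-blind" model trained only on the neutral-looking mileage feature.
  model = LogisticRegression().fit(mileage.reshape(-1, 1), default)
  risk = model.predict_proba(mileage.reshape(-1, 1))[:, 1]

  # Average predicted risk (which would drive loan pricing) still differs by group.
  print(risk[group == 0].mean(), risk[group == 1].mean())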

Machine learning and artificial intelligence are only as good as the initial programmers.

So long as humans have a hand in technology, it will fall victim to the same societal archetypes its creators do. This is why it is imperative that programmers are educated in social justice issues, consider history in their designs, and employ multidisciplinary approaches to tech development that incorporate a large swath of viewpoints and experiences, in order to mitigate the potential for harm and stop the further marginalization of already marginalized groups.

Examples of biases built on neutral data further marginalizing already marginalized individuals:

  • Costanza-Chock (2020)[5]
    • In an example from Costanza-Chock (2020)[5], the idea of neutral data points driving marginalization feels especially potent. In the opening pages of the book, the author details their anxiety over traveling through airports as a trans person, specifically at security. They have come to learn that the interface of the millimeter wave security scanner does not account for bodies outside the “standard” binary, meaning that those who present outwardly feminine but whose bodies do not match what the machine expects tend to be flagged for further inspection. This quickly becomes embarrassing as TSA agents respond to the machine's alarm and debate over who will pat down the author, a duty normally assigned based on binary gender/sex. With no real way to bypass the procedure, it has become easier for the author simply not to fly and spare themselves the embarrassment, limiting their access to much of what those within the binary never have to think about.
  • Class example:
    • Similarly, one student in class shared their own experience with technology that may have been using neutral data points but still clearly targeted them because of their race and natural hair. Every time they go through an airport, they tend to be searched for drugs. Though never actually found with drugs, the student keeps being stopped. At one point they asked airport security what kept getting them flagged as a potential risk, and the officer could only point to their hair. The student regularly wears their hair in dreads, and the officer noted that the computer seems more likely to flag people with natural hairstyles like dreads as risks for drug trafficking. So while this algorithm was hopefully never meant to raise alarms based on appearance, it was clearly designed with enough bias that seemingly neutral data points have led to a program keen on stereotyping passengers.

What does advocacy look like here?

Disruptive Tech

Advocacy in technology may take many forms, but by far the most reported-on form is disruption. Disruptive technologies can look drastically different from one another, but they all serve the same goal: to alter the way consumers interact with a technology.

Examples:

  • Gender PayBot[6]
    • This week, a presentation discussed the Gender PayBot, a disruptive Twitter bot that took to the platform on International Women's Day 2022 to call out UK organizations for gender pay disparities. Targeting any organization that tweeted in support of the day, the bot made waves across Twitter by pulling each company's already published pay gap data and republishing it, making it overwhelmingly accessible to the masses.
  • Google takes down apps that surveil Muslim people[7]
    • Google recently took down dozens of apps from its Play Store that were found to contain spyware secretly tracking users, much of which targeted Muslim users. This comes after a VICE exposé on Muslim Pro[8], an app popular among Muslims around the world for checking prayer times and reading the Quran. The exposé found Muslim Pro to be selling user data to multiple third parties, including the US military. The American Muslim community was outraged given the history of surveillance and illegal spying on the community. Google taking the initiative to monitor its app store and ban harmful apps is a step in the right direction toward upholding users' right to privacy and preventing further harm caused by the US police state to marginalized communities.
  • Hyphen-Labs facial-recognition scarves[9]
    • Hyphen-Labs, a femme BIPOC collective, debuted several disruptive accessories at the Tribeca Film Festival in 2017, making waves across the event. At the time of the debut, Hyphen-Labs was approaching the commercial release of a silk scarf printed with a carefully plotted pattern that throws off facial recognition software, rendering the surveillance tactic useless. The design can overload the software by presenting thousands of faux faces, occupying the space where human faces would normally be analyzed. Tested against apps like Snapchat that use face-detecting filters, the scarf proved effective at throwing off facial recognition and defeating the algorithms normally successful at recognizing features. Disruptive technology like this is especially helpful in situations where facial recognition may put someone in danger.

Annotated References

  1. "New York tenants fight as landlords embrace facial recognition cameras". The Guardian. 2019-05-30. Retrieved 2022-04-27.
  2. "Organizing as Joy: An Ocean-Hill Brownsville Story, with Tranae Moran and Fabian Rogers". Logic Magazine. Retrieved 2022-04-27.
  3. Conger, Kate; Fausset, Richard; Kovaleski, Serge F. (2019-05-14). "San Francisco Bans Facial Recognition Technology". The New York Times. ISSN 0362-4331. Retrieved 2022-04-27.
  4. "Reps. Clarke, Pressley & Tlaib Announce Bill to Ban Public Housing Usage of Facial Recognition & Biometric Identification Technology". Congresswoman Yvette Clarke. 2019-07-25. Retrieved 2022-04-27.
  5. Costanza-Chock, Sasha (2020). Design Justice. The MIT Press. ISBN 978-0-262-35686-2. http://dx.doi.org/10.7551/mitpress/12255.001.0001.
  6. Holpuch, Amanda (2022-03-09). "Twitter Bot Highlights Gender Pay Gap One Company at a Time". The New York Times. ISSN 0362-4331. Retrieved 2022-04-27.
  7. Smith, Zachary Snowdon. "Google Reportedly Bans Dozens Of Apps Containing Spyware". Forbes. Retrieved 2022-04-27.
  8. "How the U.S. Military Buys Location Data from Ordinary Apps". Vice. Retrieved 2022-04-27.
  9. Zakarin, Jordan. "Facial Recognition Scrambling Scarf From Hyphen-Labs on Sale Soon". Inverse. Retrieved 2022-04-27.
  10. "Get Ready for New York City's New Biometric Identifier Information Law". The National Law Review. Retrieved 2022-04-27.