
Cyberfeminism - AI and Gender Biases

Hello Friends, 

This blog is my response to the task assigned to us by our professor, Dr. Dilip Sir, in Cultural Studies: Cyberfeminism, Artificial Intelligence and the Unconscious Biases. So read, understand and enjoy. Happy Learning!




Cyberfeminism

Cyberfeminism is a feminist approach which foregrounds the relationship between cyberspace, the Internet, and technology. It can be used to refer to a philosophy, methodology or community. The term was coined in the early 1990s to describe the work of feminists interested in theorizing, critiquing, exploring and re-making the Internet, cyberspace and new-media technologies in general. The foundational catalyst for the formation of cyberfeminist thought is attributed to Donna Haraway's "A Cyborg Manifesto", third wave feminism, post-structuralist feminism, riot grrrl culture and the feminist critique of the blatant erasure of women within discussions of technology.

Theoretical Background

Cyberfeminism arose partly as a reaction to "the pessimism of the 1980s feminist approaches that stressed the inherently masculine nature of techno-science", a counter movement against the 'toys for boys' perception of new Internet technologies. According to a text published by Trevor Scott Milford, another contributor to the rise of cyberfeminism was the lack of female discourse and participation online concerning topics that were impacting women. As cyberfeminist artist Faith Wilding argued: "If feminism is to be adequate to its cyberpotential then it must mutate to keep up with the shifting complexities of social realities and life conditions as they are changed by the profound impact communications technologies and techno science have on all our lives. It is up to cyberfeminists to use feminist theoretical insights and strategic tools and join them with cybertechniques to battle the very real sexism, racism, and militarism encoded in the software and hardware of the Net, thus politicizing this environment."

Critiques

Many critiques of cyberfeminism have focused on its lack of intersectionality, its utopian vision of cyberspace (which overlooked realities such as cyberstalking and cyber-abuse), its whiteness, and its elite community building.

One of the major critiques of cyberfeminism, especially as it was in its heyday in the 1990s, was that it required economic privilege to get online: "By all means let [poor women] have access to the Internet, just as all of us have it—like chocolate cake or AIDS," writes activist Annapurna Mamidipudi. "Just let it not be pushed down their throats as 'empowering.' Otherwise this too will go the way of all imposed technology and achieve the exact opposite of what it purports to do." Cyberfeminist artist and thinker Faith Wilding also critiques its utopian vision for not doing the tough work of technical, theoretical and political education.


A Brief History of Cyberfeminism

“By the late 20th century, our time, a mythic time, we are all chimeras, theorized and fabricated hybrids of machine and organism,” wrote post-humanist scholar and feminist theorist Donna Haraway in her iconic 1985 essay "A Cyborg Manifesto". “In short, we are all cyborgs.” Her essay addressed the artifice around gender norms, imagined the future of feminism, and proposed the cyborg as the leader of a new world order. Part human and part machine, the cyborg challenged racial and patriarchal biases. “This,” Haraway wrote, “is the self [that] feminists must code.”

The field of cyberfeminism, which will be explored by the digital art resource Rhizome as part of their upcoming initiative Net Art Anthology, emerged in the early 1990s after the arrival of the world wide web, which went live in August 1991. Its roots, however, go back to the earlier practices of feminist artists like Lynn Hershman Leeson. Cyberfeminism came to describe an international, unofficial group of female thinkers, coders, and media artists who began linking up online. In the 1980s, computer technology was largely seen as the domain of men—a tool made by men, for men. Cyberfeminists asked: Could we use technology to hack the codes of patriarchy? Could we escape gender online?

Haraway’s cyborg became the cyberfeminists’ ideal citizen for a post-patriarchal world, but there were other writers leaving an impression on the nascent movement, such as African-American sci-fi writer Octavia Butler. Her Xenogenesis trilogy (1987–89) is set in a post-apocalyptic future ruled by gene-trading aliens. Butler’s books broke with conservative conceptions of race and biology, describing aliens that were neither male nor female but a third sex, and who practiced interspecies breeding.

Cyberfeminism’s star rose throughout the 1990s, as a growing constellation of women began to practice under its umbrella in different corners of the world, including North America, Australia, Germany, and the U.K. The VNS Matrix, a four-woman collective of “power hackers and machine lovers” in South Australia, began to identify as cyberfeminists in 1992. In their own words, the collective “decided to have some fun with French feminist theory,” coding games and inventing avatars as a way to critique the macho landscape of the early web. As one of its members, Virginia Barratt, recalled in an interview with Vice’s Motherboard, “We emerged from the cyberswamp…on a mission to hijack the toys from techno-cowboys and remap cyberculture with a feminist bent.”

They wrote their own Cyberfeminist Manifesto for the 21st Century (1991) in homage to Haraway, presented as an 18-foot-long billboard, which was exhibited at various galleries across Australia. The text bulges from a 3D sphere, surrounded by images of DNA material and dancing, photomontaged women that have been transformed into scaled hybrids. “We make art with our cunts,” the manifesto reads. “We are the virus of the new world disorder.”



“Cyberfeminism is not a fragrance,” it reads, “not boring... not a single woman... not a picnic… not an artificial intelligence... not lady-like... not mythical.”



Cyberfeminism resisted easy definition and, as the manifesto showed, there were multiple iterations and conflicting notions of what it was—and was not. By 1997, the movement was running into trouble. Haraway and Butler’s texts had called for the dissolution of gender and racial hierarchies, but it was increasingly clear that cyberfeminism had failed to address race at all.

AI AND UNCONSCIOUS BIAS

How AI Can Reduce Unconscious Bias In Recruiting

A major feature of AI for recruiting is its ability to reduce unconscious bias.

Unconscious bias is an ingrained human trait. That’s why some experts believe reducing it requires a non-human aid: technology.

Here’s how AI for recruiting can help you reduce unconscious bias during hiring.

Why unconscious bias is so hard to eliminate

Daniel Kahneman’s best-selling book Thinking, Fast and Slow explains the dual systems theory of the human mind. System 1 is fast, instinctive, and effortless. System 2 is slow, deliberate, and effortful.

Unconscious bias is a product of System 1 thinking. Because unconscious biases affect our thinking and decision making without our awareness, they can interfere with our true intentions.

Unconscious biases are so hard to overcome because they are automatic, act without our awareness, and are so numerous: Wikipedia lists more than 180 decision-making, social, and memory biases that affect us.

How recruiting AI reduces unconscious bias

AI for recruiting is the application of artificial intelligence such as machine learning, natural language processing, and sentiment analysis to the recruitment function.

AI can reduce unconscious bias in two ways.

1. AI makes sourcing and screening decisions based on data points

Recruiting AI sources and screens candidates by using large quantities of data. It combines these data points using algorithms to make predictions about who will be the best candidates. The human brain just can’t compete when processing information at this massive scale.

AI assesses these data points consistently, reducing the assumptions, biases, and mental fatigue that human decision makers are susceptible to.

A major advantage AI has over humans is that its results can be tested and validated. An ideal candidate profile usually contains a list of skills, traits, and qualifications that people believe make up a successful employee. But oftentimes those qualifications are never tested to see whether they correlate with on-the-job performance.

AI can create a profile based on the actual qualifications of successful employees, which provides hard data that either validates or disconfirms beliefs about what to look for in candidates.
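As a toy illustration of that validation step, the sketch below checks whether a qualification people *believe* predicts success actually correlates with performance among past hires. All data, field meanings, and numbers here are invented for illustration; a real system would use far more records and a proper statistical test.

```python
# Hypothetical sketch: test whether a believed-in qualification actually
# correlates with on-the-job performance. All data here are invented.

def correlation(xs, ys):
    """Pearson correlation coefficient, standard library only."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Past employees: 1 = holds an elite-school degree, plus a performance score.
elite_degree = [1, 1, 1, 1, 0, 0, 0, 0]
performance  = [3.1, 3.0, 2.9, 3.2, 3.0, 3.1, 3.2, 2.9]

r = correlation(elite_degree, performance)
print(f"correlation with performance: {r:.2f}")
# A value near zero suggests the qualification does not predict success
# and should not drive screening decisions.
```

In this invented dataset the elite-school credential shows essentially no correlation with performance, which is exactly the kind of belief-versus-data check the paragraph above describes.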

2. AI can be programmed to ignore demographic information about candidates

Recruiting AI can be programmed to ignore demographic information about candidates such as gender, race, and age that have been shown to bias human decision making.

It can even be programmed to ignore details such as the names of schools attended and zip codes that can correlate with demographic-related information such as race and socioeconomic status.  

AI software in the financial services industry is already used this way: banks are required to ensure that their algorithms do not produce outcomes based on data correlated with protected demographic variables such as race and gender.
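The blinding described above can be sketched as a preprocessing step that strips both sensitive fields and their known proxies before a model ever sees a candidate record. The field names below are invented for illustration.

```python
# Hypothetical sketch of demographic blinding: remove sensitive fields,
# and known proxies for them, from each candidate record before scoring.
# Field names are invented for illustration.

SENSITIVE = {"gender", "race", "age", "date_of_birth"}
PROXIES = {"name", "school_name", "zip_code"}  # can correlate with race/class

def redact(candidate: dict) -> dict:
    """Return a copy of the record without sensitive or proxy fields."""
    return {k: v for k, v in candidate.items() if k not in SENSITIVE | PROXIES}

candidate = {
    "name": "A. Example",
    "gender": "F",
    "zip_code": "90210",
    "years_experience": 6,
    "skills": ["python", "sql"],
}
print(redact(candidate))
# → {'years_experience': 6, 'skills': ['python', 'sql']}
```

Note that blinding alone is not sufficient, as the next section explains: a model can still recover demographic signals from remaining fields, which is why oversight and auditing are needed.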

AI still requires a human touch to reduce unconscious bias

AI is trained to find patterns in previous behavior. That means that any human bias that may already be in your recruiting process – even if it’s unconscious – can be learned by AI.

Human oversight is still necessary to ensure the AI isn’t replicating existing biases or introducing new ones based on the data we give it.

Recruiting AI software can be tested for bias by using it to rank and grade candidates, and then assessing the demographic breakdown of those candidates.

The upside is that if AI does expose a bias in your recruiting, it gives you an opportunity to act on it. Aided by AI, we can use our human judgment and expertise to decide how to address any biases and improve our processes.
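One common form of the audit described above compares selection rates across demographic groups using the "four-fifths rule" from US hiring-compliance practice: the lowest group's selection rate should be at least 80% of the highest group's. The audit data below are invented.

```python
# Hypothetical bias audit: after the model shortlists candidates, compare
# selection rates by group and compute the adverse-impact ratio.

def selection_rates(results):
    """results: list of (group, selected) pairs -> selection rate per group."""
    totals, chosen = {}, {}
    for group, was_selected in results:
        totals[group] = totals.get(group, 0) + 1
        chosen[group] = chosen.get(group, 0) + int(was_selected)
    return {g: chosen[g] / totals[g] for g in totals}

# Invented audit data: which candidates the AI shortlisted, by group.
audit = ([("A", True)] * 40 + [("A", False)] * 60
         + [("B", True)] * 20 + [("B", False)] * 80)

rates = selection_rates(audit)
impact_ratio = min(rates.values()) / max(rates.values())
print(rates, f"impact ratio = {impact_ratio:.2f}")
# An impact ratio below 0.80 flags the model for review.
```

Here group B is selected at half the rate of group A (ratio 0.50), so this hypothetical model would fail the audit and warrant investigation.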

Tackling bias in artificial intelligence (and in humans)

AI has the potential to help humans make fairer decisions—but only if we carefully work toward fairness in AI systems as well.

Will AI’s decisions be less biased than human ones? Or will AI make these problems worse?

Two opportunities present themselves in the debate. The first is the opportunity to use AI to identify and reduce the effect of human biases. The second is the opportunity to improve AI systems themselves, from how they leverage data to how they are developed, deployed, and used, to prevent them from perpetuating human and societal biases or creating bias and related challenges of their own. Realizing these opportunities will require collaboration across disciplines to further develop and implement technical improvements, operational practices, and ethical standards.


Underlying data are often the source of bias

The underlying data, rather than the algorithm itself, are most often the main source of bias. Models may be trained on data containing human decisions or on data that reflect second-order effects of societal or historical inequities. For example, word embeddings (a set of natural language processing techniques) trained on news articles may exhibit the gender stereotypes found in society.
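The embedding stereotypes mentioned above are typically measured by comparing how close a word sits to gendered direction words. The sketch below uses invented 3-dimensional stand-in vectors; real systems measure this on pretrained embeddings with hundreds of dimensions.

```python
# Toy sketch of measuring embedding bias: compare an occupation vector's
# cosine similarity to "he" vs "she". Vectors are invented stand-ins.

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = lambda x: sum(a * a for a in x) ** 0.5
    return dot / (norm(u) * norm(v))

emb = {
    "he":    [1.0, 0.1, 0.0],
    "she":   [0.1, 1.0, 0.0],
    "nurse": [0.2, 0.9, 0.3],   # invented: skewed toward "she"
}

skew = cosine(emb["nurse"], emb["he"]) - cosine(emb["nurse"], emb["she"])
print(f"gender skew for 'nurse': {skew:+.2f}")
# A nonzero skew means the embedding associates the occupation with one
# gender, reflecting stereotypes present in the training text.
```

Debiasing research builds directly on measurements like this, which is one reason auditing the data is as important as auditing the model.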

Human judgment is still needed to ensure AI-supported decision making is fair

While definitions and statistical measures of fairness are certainly helpful, they cannot consider the nuances of the social contexts into which an AI system is deployed, nor the potential issues surrounding how the data were collected. Thus it is important to consider where human judgment is needed and in what form. Who decides when an AI system has sufficiently minimized bias so that it can be safely released for use? Furthermore, in which situations should fully automated decision making be permissible at all? No optimization algorithm can resolve such questions, and no machine can be left to determine the right answers; it requires human judgment and processes, drawing on disciplines including social sciences, law, and ethics, to develop standards so that humans can deploy AI with bias and fairness in mind. This work is just beginning.

Six potential ways forward for AI practitioners and business and policy leaders to consider


Minimizing bias in AI is an important prerequisite for enabling people to trust these systems. This will be critical if AI is to reach its potential, shown by the research of MGI and others, to drive benefits for businesses, for the economy through productivity growth, and for society through contributions to tackling pressing societal issues. Those striving to maximize fairness and minimize bias from AI could consider several paths forward:

1. Be aware of the contexts in which AI can help correct for bias as well as where there is a high risk that AI could exacerbate bias.

When deploying AI, it is important to anticipate domains potentially prone to unfair bias, such as those with previous examples of biased systems or with skewed data. Organizations will need to stay up to date to see how and where AI can improve fairness—and where AI systems have struggled.

2. Establish processes and practices to test for and mitigate bias in AI systems.

Tackling unfair bias will require drawing on a portfolio of tools and procedures. The technical tools described above can highlight potential sources of bias and reveal the traits in the data that most heavily influence the outputs. Operational strategies can include improving data collection through more cognizant sampling and using internal “red teams” or third parties to audit data and models. Finally, transparency about processes and metrics can help observers understand the steps taken to promote fairness and any associated trade-offs.
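One of the simplest audit tools of the kind mentioned above probes which input traits most influence a model's output by perturbing one field at a time. The scoring function and field names below are invented for illustration.

```python
# Minimal sketch of an influence probe: bump each input field by one unit
# and record how much the model's score moves. Model and fields are invented.

def score(candidate):
    # Stand-in for a trained model's scoring function.
    return 3 * candidate["years_experience"] + 1 * candidate["num_certifications"]

baseline = {"years_experience": 5, "num_certifications": 3}

influence = {}
for field in baseline:
    bumped = dict(baseline, **{field: baseline[field] + 1})
    influence[field] = score(bumped) - score(baseline)

print(influence)
# → {'years_experience': 3, 'num_certifications': 1}
```

If a probe like this showed that a proxy field (say, a zip code) heavily influenced scores, that would be a red flag for the red team or external auditor to investigate.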

3. Engage in fact-based conversations about potential biases in human decisions.

As AI reveals more about human decision making, leaders can consider whether the proxies used in the past are adequate and how AI can help by surfacing long-standing biases that may have gone unnoticed. When models trained on recent human decisions or behavior show bias, organizations should consider how human-driven processes might be improved in the future.

4. Fully explore how humans and machines can work best together.

This includes considering situations and use cases where automated decision making is acceptable (and indeed ready for the real world) versus where humans should always be involved. Some promising systems use a combination of machines and humans to reduce bias. Techniques in this vein include “human-in-the-loop” decision making, where algorithms provide recommendations or options, which humans double-check or choose from. In such systems, transparency about the algorithm’s confidence in its recommendation can help humans understand how much weight to give it.
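The human-in-the-loop pattern above can be sketched as a routing rule: the algorithm acts automatically only when its confidence is high, and escalates everything else to a reviewer. The threshold and example records are invented for illustration.

```python
# Hypothetical sketch of human-in-the-loop routing: auto-apply only
# high-confidence decisions; escalate the rest to a human reviewer.

CONFIDENCE_THRESHOLD = 0.90  # invented cutoff; tuned per application

def route(prediction: str, confidence: float) -> str:
    """Return where a model decision goes, based on its confidence."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"auto: {prediction}"
    return f"human review: {prediction} (confidence {confidence:.2f})"

print(route("shortlist", 0.97))  # → auto: shortlist
print(route("reject", 0.62))     # → human review: reject (confidence 0.62)
```

Surfacing the confidence value in the escalation message is the transparency point made above: it tells the human reviewer how much weight to give the machine's recommendation.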

5. Invest more in bias research, make more data available for research (while respecting privacy), and adopt a multidisciplinary approach.

While significant progress has been made in recent years in technical and multidisciplinary research, more investment in these efforts will be needed. Business leaders can also help support progress by making more data available to researchers and practitioners across organizations working on these issues, while being sensitive to privacy concerns and potential risks. More progress will require interdisciplinary engagement, including ethicists, social scientists, and experts who best understand the nuances of each application area in the process. A key part of the multidisciplinary approach will be to continually consider and evaluate the role of AI decision making, as the field progresses and practical experience in real applications grows.

6. Invest more in diversifying the AI field itself.

Many have pointed to the fact that the AI field itself does not encompass society’s diversity, including on gender, race, geography, class, and physical disabilities. A more diverse AI community will be better equipped to anticipate, spot, and review issues of unfair bias and better able to engage communities likely affected by bias. This will require investments on multiple fronts, but especially in AI education and access to tools and opportunities.




2,543 Words.

Works Cited

1. Gomez, Diego. "How AI Can Stop Unconscious Bias In Recruiting." Ideal, 22 Apr. 2021, ideal.com/unconscious-bias/.

2. Scott, Izabella. "How the Cyberfeminists Worked to Liberate Women Through the Internet." Artsy, 13 Oct. 2016, www.artsy.net/article/artsy-editorial-how-the-cyberfeminists-worked-to-liberate-women-through-the-internet.

3. Silberg, Jake, and James Manyika. "Tackling Bias in Artificial Intelligence (and in Humans)." McKinsey & Company, 6 June 2019, www.mckinsey.com/featured-insights/artificial-intelligence/tackling-bias-in-artificial-intelligence-and-in-humans.





