Has Amazon Ruined the Name Alexa?

According to data, the number of babies named Alexa in the US has dropped from 6,052 in 2015 to 1,995 in 2019.

As smart speakers are becoming more and more common in the US and around the world, it’s important to take a moment to think of the many people who now share their name with a ubiquitous digital entity designed to serve its human overlords. And no, I’m not talking about you, Bixby. Imagine having named your kid Alexa shortly before Amazon debuted its popular virtual assistant in 2014. You now have to live with the fact that your child’s name will forever be associated with the world’s most popular digital servant.

As a matter of fact, there are already signs that Amazon’s decision to give its digital assistant a rather popular name has ruined that name for years to come. According to the US Social Security Administration, the number of babies named Alexa in the US has dropped from 6,052 in 2015 (when Amazon’s first smart speaker, the Echo, became widely available) to 1,995 in 2019. Having been the 32nd most popular name for girls born in 2015, Alexa dropped to 139th place in 2019, its lowest rank since 1992.
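For scale, the drop works out to roughly a two-thirds decline. A quick back-of-the-envelope check, using only the SSA figures quoted above:

```python
# SSA figures quoted above: babies named Alexa in the US.
named_2015 = 6052
named_2019 = 1995

decline = named_2015 - named_2019
pct_decline = 100 * decline / named_2015  # about 67%, i.e. two-thirds
```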

You will find more infographics at Statista, where this article was originally published.

Why It’s Time to Start Making Amazon Pay

The Progressive International is mobilising with Amazon workers and their allies around the world. Here’s why.

Amazon’s size and power place the corporation at the very centre of the crises of climate breakdown and economic inequality that grip our planet. The growth of CEO Jeff Bezos’s astronomical wealth – up $100 billion since March, now surpassing that of any other human in history – is directly proportional to Amazon’s human and environmental costs: his corporation mistreats its workers, wrecks the climate, and undermines the public institutions underpinning our democracies along the way.

Taking on Amazon, therefore, will require more than curbing Bezos’s personal wealth or calling for corporate social responsibility. It will require a global movement that is organised along every dimension of Amazon’s expanding empire: for workers, for peoples, and for the planet.

That is why today – Black Friday – an international worker-activist coalition begins a planetary mobilisation to #MakeAmazonPay. From São Paulo to Berlin, Seattle to Hyderabad, activists will project this rallying cry onto key Amazon sites, putting the corporation on notice that its days of impunity are over. Bringing together unions, environmentalists, and citizens around the world, this coalition exercises the only power that can meet the force of transnational capital: solidarity.

In just a few years, Amazon has established itself as a key node in the circuitry of globalized capitalism. Having first revolutionized the links between production, distribution, and consumption on its digital platform, the corporation’s cloud infrastructure and e-commerce give Amazon controlling influence over huge swathes of social and economic life across the planet.

Amazon’s network of corporate power extends through our workplaces and into our lives. Producers and suppliers have no choice but to partner with Amazon to retain or gain access to consumers. Consumers, for their part, feel that they can hardly avoid Amazon, unless they are willing to wait longer and able to pay more. Through mass surveillance technology like Alexa, Echo and Amazon Ring, the corporation has infiltrated millions of households and collected their most intimate data.

Also read: In Funding Hatred, India’s Corporates Have Compromised With Evil

Across this network sits Amazon Web Services, which has played a key role in the operation of extractive industries and law enforcement; as well as Amazon’s recent ventures into sectors like financial services, food provision, and health care. Amazon has, in effect, become a wholly unaccountable, predatory transnational private state – or, indeed, a 21st-century empire.

In the absence of a common movement to challenge it, Amazon has managed to expand its empire to all corners of the global economy. But the tide is beginning to turn. Tech workers’ recent participation in the global climate strike was followed by important concessions by Amazon management, and transnational labor alliances led by UNI Global Union and Amazon Workers International have managed to integrate previously diffused worker resistance. Internationally, public advocacy groups have moved the urgent need to break up Amazon towards the heart of policy debates.

These efforts show us the way forward. To Make Amazon Pay its debts to workers, the planet and society, we must pursue a three-point strategy:

  • Firstly, recognise the international and intersectional nature of the Amazon struggle.

  • Secondly, organise across national borders and narrow spheres of activism.

  • Thirdly, politicise this struggle by taking it straight to legislative arenas around the world.

These are the goals of the campaign that launched today.

With respect to the first, our coalition’s Common Demands are global in their scope. We realise that Amazon’s power depends on its ability to exploit differences in national jurisdictions to drive a global race to the bottom on social and environmental protections.

We recognise, too, the intersections of Amazon’s injustice. The environmental injustice of Amazon’s pollution, for instance, disproportionately affects people of colour. The corporation’s monopolisation of the cloud computing sector, meanwhile, is the basis of its close ties with Big Oil. Our coalition therefore brings together the environmentalists of Greenpeace and 350 with groups like Data 4 Black Lives, the Athena Coalition and the Hawkers Federation of India.

Also read: India Will Not Be Able To Ignore the Threat of Tech and Data Oligopolies for Long

With respect to the second point of strategy, today’s action unites workers across Amazon’s supply chain — from tech workers at Amazon’s Seattle headquarters and warehouse workers organised by UNI Global Union affiliates, the Awood Centre and Amazon Workers International, all the way to supply chain workers in garment factories in Bangladesh.

And with respect to the third, our coalition does not demand that Bezos change Amazon’s business model out of the goodness of his heart. Instead, the movement aims to build legislative power that can put an end to the “Amazonification” of our economies and societies. We invite progressive lawmakers across the globe to join us, and stand with this global movement to Make Amazon Pay.

The mission of this campaign is as simple as it is radical: to win a different world.

A world in which corporations that primarily serve the interests of their CEOs are replaced by cooperatives that serve the interests of the many.

A world in which economic activity does not lead to climate destruction, but to environmental reconstruction and flourishing.

A world in which markets are governed by democratic institutions, rather than vice versa.

Solidarity is the vehicle to deliver this world. Making Amazon pay is where we start.

Casper Gelderblom is the #MakeAmazonPay campaign coordinator at the Progressive International and PhD Researcher at the European University Institute

Disclosure: The Wire and our other websites are hosted on Amazon Web Services.

The Only ‘Woman’ ISRO Is Currently Planning to Send to Space Isn’t Real

ISRO will use a female android named Vyommitra to test the crew module ahead of Gaganyaan’s first crewed flight in 2022. Why did she have to be female?

At the end of 2020, a legless female android will be making her way to space. Vyommitra, which is Hindi for ‘space friend’, will be flying onboard a crew module attached to a GSLV Mk III rocket in an unmanned mission to prepare for the Indian Space Research Organisation’s (ISRO’s) first human spaceflight mission, dubbed Gaganyaan.

According to the Indian Express, ISRO plans to use Vyommitra to test the crew module and make sure it’s fit for astronauts, who will use it in 2022. “Attaining launch and orbital postures, responding to the environment, generating warnings, replacing carbon dioxide canisters, operating switches, monitoring of the crew module, receiving voice commands, responding via speech (bilingual) are the functions listed for the humanoid,” the newspaper said.

To send an android – a robot built to look like a human – to space isn’t extraordinary. In fact, other space agencies have undertaken such missions in the past. But what’s striking is that Vyommitra is decidedly female; one headline even refers to the humanoid as the “first Indian woman to go to space” – although ‘she’ is only being used to prepare for a mission that will be undertaken at first by men.

Vyommitra is a half-humanoid: her body ends at her torso. In photos of her available on the internet from ISRO’s unveiling, she is dressed more like an air-hostess than an astronaut. She is seen wearing a white and grey suit in one avatar, and a blue silk shirt in the other. Nowhere does she appear in a spacesuit – which of course the (real) men she is making way for will have to wear while in space.

Why did Vyommitra have to be female? Perhaps the most charitable answer is that ISRO is being aspirational: that it really wants to send a woman to space and believes doing so with a legless robot could inspire others to head in the same direction.

If that were the case, ISRO should be taking active steps to recruit more women and include them on ultra-visible missions like Gaganyaan. It has stepped up at times in the past, notably by appointing Ritu Karidhal the deputy operations director of the Mars Orbiter Mission and the mission director of Chandrayaan 2; M. Vanitha the project director of Chandrayaan 2; and V.R. Lalithambika the head of the human spaceflight mission.

Also read: A Peek Into the Life and Work of Ritu Karidhal, Chandrayaan 2 Mission Director

However, after the Chandrayaan 2 mission’s lunar surface component failed on September 7 last year, both Karidhal and Vanitha disappeared from public attention and none of ISRO’s official communiqués included their names or quotes. Even now, what pitiably little information is publicly available of the Gaganyaan mission excludes Lalithambika’s comments.

So with Vyommitra, ISRO is simply continuing its inexplicable tradition of sending mixed signals: celebrating women on ‘happy’ occasions but sidelining them in controversial times, prudently cashing in on the hype when the going is good, and withdrawing into a shell and fronting The Man when the going gets tough.

Indeed, it appears that ISRO has fallen into an all-too-familiar trap with the half-android. Vyommitra could find a wide range of female friends in the AI world, such as Siri, Alexa and Cortana – all women, or at least they started out as such, though users now have the option to change these tech assistants’ gender in a few cases.

As multiple experts, and even the UN, have argued, people still assume that an assistant is a woman. Assistants lurk in the background, keeping track of all the “little things” you need in your day, without advancing sophisticated opinions. They’re even subject to verbal abuse and don’t give it back. Technology developers in particular have played into these stereotypical gender roles instead of trying to subvert or change them.

The feminist academic Helen Hester found that when such assistants were given a male voice, people assumed they were “a research assistant, an academic librarian and an information manager, rather than … a personal secretary”. Even in movies, Amy C. Chambers wrote in The Conversation, the gender of a digital assistant influenced perceptions of the assistant’s personality and what it could be used for.

Technology companies have said in the past that they stick to using women’s voices in these roles because people find them more “agreeable”. But all that does is reinforce outdated, regressive and patriarchal notions about where a woman fits in society. “The world needs to pay much closer attention to how, when and whether AI technologies are gendered and, crucially, who is gendering them,” UNESCO’s director Saniye Guler Corat had said.

Also read: There’s a Reason Siri, Alexa and AI Are Imagined as Female – Sexism

Vyommitra goes one step further than digital assistants. She has not only a woman’s voice but also a woman’s body. And she will go where no Indian woman has gone before… only to make sure that Indian men have a safer and more comfortable time up there. If ISRO wants to prove that it really did have good intentions, it needs to give one of the many real qualified women the same opportunity.

With inputs from Vasudevan Mukunth.

A Guided Tour of AI and the Murky Ethical Issues It Raises

As AI becomes more intricately innovative, the men and women working in the field are also keeping pace and becoming more redoubtably intelligent.

As I read Melanie Mitchell’s Artificial Intelligence: A Guide for Thinking Humans I found myself recalling John Updike’s 1986 novel Roger’s Version. One of its characters, Dale, is determined to use a computer to prove the existence of God. Dale’s search leads him into a mind-bending labyrinth where religious-metaphysical questions overwhelm his beloved technology and leave the poor fellow discombobulated. I sometimes had a similar experience reading Artificial Intelligence. In Mitchell’s telling, artificial intelligence (AI) raises extraordinary issues that have disquieting implications for humanity. AI isn’t for the faint of heart, and neither is this book for nonscientists.

To begin with, artificial intelligence — “machine thinking,” as the author puts it — raises a pair of fundamental questions: What is thinking and what is intelligence? Since the end of World War II, scientists, philosophers, and scientist-philosophers (the two have often seemed to merge during the past 75-odd years) have been grappling with those very questions, offering up ideas that seem to engender further questions and profound moral issues. Mitchell, a computer science professor at Portland State University and the author of Complexity: A Guided Tour, doesn’t resolve these questions and issues — she all but acknowledges that they are irresolvable at present — but provides readers with insightful, common-sense scrutiny of how these and related topics pervade the discipline of artificial intelligence.

Mitchell traces the origin of modern AI research to a 1956 Dartmouth College summer study group: its members included John McCarthy (who was the group’s catalyst and coined the term artificial intelligence); Marvin Minsky, who would become a noted artificial intelligence theorist; cognitive scientists Herbert Simon and Allen Newell; and Claude Shannon (“the inventor of information theory”). Mitchell describes McCarthy, Minsky, Simon, and Newell as the “big four” pioneers of AI. The study group apparently generated more heat than light, but Mitchell points out that the subjects that McCarthy and his colleagues wished to investigate — “natural-language processing, neural networks, machine learning, abstract concepts and reasoning, and creativity” — are still integral to AI research today.

Also read: Artificial Intelligence Can’t Think Without Polluting

Mitchell’s goal is to give a thorough (and I mean thorough) account not only of the ethical issues artificial intelligence raises today (and tomorrow), but of how the various branches of AI that the Dartmouth group pursued actually work. She is a good writer with broad knowledge of the topic (unsurprising, since she has a Ph.D. in computer science), and a canny mindfulness of both the merits and problems of AI. But even so, nonscientists will find it grueling to follow some of her explanations of the technical workings of AI. All too often, I found myself baffled and exasperated when she delved into high-tech arcana.

Take, for instance, the author’s discussion of deep learning, which she says “is itself one method among many in the field of machine learning, a subfield of AI in which machines ‘learn’ from data or from their own ‘experiences.’” So far, so good. However, from there matters become tenebrous: “Deep learning simply refers to methods for training ‘deep neural networks,’ which in turn refers to neural networks with more than one hidden layer. Recall that hidden layers are those layers of a neural network between the input and the output. The depth of a network is its number of hidden layers.”
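For readers who want a concrete handle on that definition, here is a minimal, purely illustrative Python sketch (not from Mitchell’s book): a tiny network with two layers of units between its input and its output, which makes it “deep” in the sense she describes.

```python
import math
import random

random.seed(0)  # reproducible illustration

def make_layer(n_in, n_out):
    """Random weights and biases for one fully connected layer."""
    weights = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]
    biases = [random.uniform(-1, 1) for _ in range(n_out)]
    return weights, biases

def apply_layer(x, weights, biases):
    """Weighted sum of inputs plus bias, squashed by tanh."""
    return [math.tanh(sum(w * v for w, v in zip(row, x)) + b)
            for row, b in zip(weights, biases)]

# Units are arranged 3 inputs -> 4 -> 4 -> 1 output. The two middle
# layers sit between input and output, so they are "hidden", and by
# Mitchell's definition this network has depth 2 -- i.e. it is deep.
network = [make_layer(3, 4), make_layer(4, 4), make_layer(4, 1)]

def forward(x):
    for weights, biases in network:
        x = apply_layer(x, weights, biases)
    return x

output = forward([0.5, -0.2, 0.1])
hidden_layers = len(network) - 1  # every layer except the output layer
```

The weights here are random and untrained; the point is only the architecture: “depth” counts hidden layers, nothing more.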

From there, she goes, well, deeper, for about eight more pages of text, diagrams, and photos that fail to fully clarify the subject for a general audience. This kind of abstruseness, alas, is fairly frequent, but still, I urge readers to soldier through the technology warrens, because we need to understand the systems that are frightening so many today, and the dedicated reader will come away with at least a modicum of understanding about how AI operates.

I also wish the book had examined the role AI is playing in military weaponry, and how quantum computers affect, or will affect, artificial intelligence — or vice versa. In a recent article in the New York Times, Dario Gil, the director of IBM Research, is quoted as saying, “The reality is, the future of computing will be a hybrid between [the] classical computer of bits, AI systems, and quantum computing coming together.”

Also read: Is This the AI We Should Fear?

The book is exemplary, however, when discussing where AI is now and where it might be going, as well as the moral issues involved. “Should we be terrified about AI?” she writes. “Yes and no. Superintelligent, conscious machines are not on the horizon. The aspects of humanity that we most cherish are not going to be matched by a ‘bag of tricks.’ At least I don’t think so. However, there is a lot to worry about regarding the potential for dangerous and unethical uses of algorithms and data.”

Mitchell’s message is that AI-phobes can chill out because we’re not now and we probably won’t ever be facing a dystopic future controlled by machines. One of the themes of the book is that while it’s impressive that AI devices have defeated human experts in “Jeopardy” and Go, no matter how remarkable such tours de force are, those were the only things those particular machines were programmed to do, and they required human input. And in such areas as object recognition, transcribing or translating language, and conversing with Homo sapiens, AI is, to use a word Mitchell favors, “brittle.”

The hand of humanoid robot AILA (artificial intelligence lightweight android) operates a switchboard during a demonstration by the German research centre for artificial intelligence at the CeBit computer fair in Hanover, March 5, 2013. Credit: Reuters/Fabrizio Bensch


Which is to say that even though great strides have been made (and will be made) in AI, such technology is a long way from being omnipotent, because it is error prone when faced with perplexing — to its way of thinking — tasks (be cautious, she warns, of riding in self-driving cars). And AI machines are still vulnerable to being manipulated by hackers who might work for foreign governments or are simply motivated to cause mayhem.

Near the end of the book, Mitchell asks, “How far are we from creating general human-level AI?” She quotes a computer scientist, Andrej Karpathy, who says, “We are really, really far away,” and then she concurs: “That’s my view too.”

Above all, her take-home message is that we humans tend to overestimate AI advances and underestimate the complexity of our own intelligence. “These supposed limitations of humans are part and parcel of our general intelligence,” she writes. “The cognitive limitations forced upon us by having bodies that work in the world, along with the emotions and ‘irrational’ biases that evolved to allow us to function as a social group, and all the other qualities sometimes considered cognitive ‘shortcomings,’ are in fact precisely what enables us to be generally intelligent.”

Also read: Why India Needs a Strategic Artificial Intelligence Vision

It occurred to me while reading about the extraordinary scientists in “Artificial Intelligence” that as AI becomes more intricately innovative, the men and women working in the field are also keeping pace, also becoming more redoubtably intelligent. So why worry? Surely, if there’s ever an AI attempt to subjugate humanity, I have no doubt that Mitchell and others like her, or their successors, will protect our brittle species.

Howard Schneider is a New York City-based writer who reviews books on technology and science. His work has appeared in the Wall Street Journal, the Humanist, Art in America, the American Interest, and other publications.

This article was originally published on Undark. Read the original article.

Artificial Intelligence Has A Gender Bias Problem – Just Ask Siri

All the virtual personal assistants on the market today come with a default female voice and are programmed to respond to all kinds of suggestive questions and comments.

Suggest to Samsung’s Virtual Personal Assistant Bixby “Let’s talk dirty”, and the female voice will respond with a honeyed accent: “I don’t want to end up on Santa’s naughty list.”

Make the same suggestion to the programme’s male voice and it replies: “I’ve read that soil erosion is a real dirt problem.”

In South Africa, where I live and conduct my research into gender biases in artificial intelligence, Samsung now offers Bixby in various voices depending on which language you choose. For American English, there’s Julia, Stephanie, Lisa and John. The voices of Julia, Lisa and Stephanie are coquettish and eager. John is clever and straightforward.

Virtual Personal Assistants – such as Bixby, Alexa (Amazon), Siri (Apple) and Cortana (Microsoft) – are at the cutting edge of marketable artificial intelligence (AI). AI refers to using technological systems to perform tasks that people usually would.

They function as an application on a smart device, responding to voice commands through natural language processing. Their ubiquity throughout the world is rapidly increasing. A recent report by UNESCO estimated that by as early as next year we will have more conversations with our virtual personal assistants than with our spouses.

Yet, as I’ve explored in my own research with Dr Nora Ni Loideain from the Information Law and Policy Centre at the University of London, these technologies betray critical gender biases.

With their female names, voices and programmed flirtatiousness, the design of virtual personal assistants reproduces discriminatory stereotypes of female secretaries who, according to the gender stereotype, are often more than just secretaries to their male bosses.

Also read: There’s a Reason Siri, Alexa and AI Are Imagined as Female – Sexism

It also reinforces the role of women as secondary and submissive to men. These AI assistants operate on the command of their user. They have no right to refuse these commands. They are programmed only to obey. Arguably, they also raise expectations for how real women ought to behave.

These assistants are also meant to free their user from menial work such as making appointments and purchasing items online. This is problematic on at least two fronts. First, it suggests the user has more time for supposedly more important work. Second, it makes a critical statement about the value of the kind of secretarial work performed, first by real women and now by digitalised women, in the digital future.

“What are you wearing?”

One of the more overt ways in which these biases are evident is the use of female names: Siri and Cortana, for instance. Siri is a Nordic name meaning “the beautiful woman that leads you to victory”.

Cortana takes its name (as well as visuals and voice) from the game series Halo. In Halo, Cortana was created from a clone of the brain of a successful female scientist, paired with a transparent and highly-sexualised female body. She functions as a fictional aide for gamers with her unassuming intelligence and mesmeric shape.

In addition to their female names, all the virtual personal assistants on the market today come with a default female voice, which, like Bixby, is programmed to respond to all kinds of suggestive questions and comments. These include: “What are you wearing?” Siri’s response is

Why would I be wearing anything?

Alexa, meanwhile, quips: “They don’t make clothes for me”; and Cortana replies, “Just a little something I picked up in engineering.”

Bias and discrimination in AI

It is increasingly acknowledged that AI systems are often biased, particularly along race and gender lines. For example, a recruitment algorithm recently developed by Amazon to sort resumes for job applications displayed gender bias by downgrading resumes that contained the word “women” or a reference to women’s colleges. As the algorithm was trained on historical data reflecting the preferential recruitment of men, it ultimately could not be fixed and had to be dropped.
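The failure mode is easy to reproduce in miniature. The sketch below uses invented, caricatured data (not Amazon’s actual system or data): a naive scorer “trained” on male-skewed hiring outcomes learns a negative weight for the token “women”, so identical qualifications score lower once the word appears.

```python
from collections import defaultdict

# Hypothetical, caricatured training data: (resume tokens, hired?)
# pairs skewed against resumes mentioning "women", mirroring the
# biased hiring history described above.
history = [
    (["chess", "captain"], True),
    (["chess", "club"], True),
    (["women", "chess", "captain"], False),
    (["women", "debate", "club"], False),
]

# "Train" by counting how often each token co-occurs with a hire.
score = defaultdict(float)
for tokens, hired in history:
    for token in tokens:
        score[token] += 1.0 if hired else -1.0

def rank(tokens):
    """Score a new resume with the learned token weights."""
    return sum(score[t] for t in tokens)

# Identical qualifications score lower once "women" appears.
without = rank(["chess", "captain"])
with_women = rank(["women", "chess", "captain"])
```

Because the bias lives in the training data rather than any single rule, deleting the offending token only pushes the model toward proxies for it, which is why the system had to be dropped rather than patched.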

As research has shown, there is a critical link between the development of AI systems which display gender biases and the lack of women in teams that design them.

But there is rather less recognition of the ways in which AI products incorporate stereotyped representations of gender within their very design. For AI Now, a leading research institution looking into the social impact of AI, there is a clear connection between the male-dominated AI industry and the discriminatory systems and products it produces.

The role of researchers is to make visible these connections and to show the critical links between the representations of women, whether in cultural or technological products, and the treatment of women in the real world.

Also read: Making Wikipedia – and Society – More Gender Representative

AI is the leading technology in the so-called Fourth Industrial Revolution. It refers to the technological advances – from biotechnology to AI and big data – that are rapidly reshaping the world as we know it. As South Africa continues to engage with the promises and pitfalls of what this holds, it will become increasingly important to consider and address how the technologies driving these changes may affect women.

Rachel Adams is a Research Specialist at the Human Sciences Research Council

This article is republished from The Conversation under a Creative Commons license. Read the original article.

There’s a Reason Siri, Alexa and AI Are Imagined as Female – Sexism

When we can only seemingly imagine an AI as a subservient woman, we reinforce dangerous and outdated stereotypes.

Virtual assistants are increasingly popular and present in our everyday lives: literally with Alexa, Cortana, Holly and Siri, and fictionally in films with Samantha (Her), Joi (Blade Runner 2049) and Marvel’s AIs FRIDAY (Avengers: Infinity War) and Karen (Spider-Man: Homecoming). These names demonstrate the assumption that virtual assistants, from SatNav to Siri, will be voiced by a woman. This reinforces gender stereotypes, expectations and assumptions about the future of artificial intelligence.

Fictional male voices do exist, of course, but today they are simply far less common. HAL-9000 is the most famous male-voiced Hollywood AI – a malevolent sentient computer released into the public imagination 50 years ago in Stanley Kubrick’s 2001: A Space Odyssey.

2001: A Space Odyssey. Courtesy of Warner Bros. Pictures

Male AI used to be more common, specifically in stories where technology becomes evil or beyond our control (like HAL). Female AI, on the other hand, is more often than not envisaged in a submissive, servile role. Another pattern concerns whether fictional AI is embodied or not. When it is, it tends to be male, from the Terminator, to Sonny in I, Robot, to super-villain Ultron in Avengers: Age of Ultron. Ex Machina’s Ava (Alicia Vikander) is an interesting anomaly in the roster of embodied AI, and she is seen as a victim rather than an uncontrolled menace, even after she kills her creator.

The Marvel Cinematic Universe, specifically the AI inventions of Tony Stark, and the 2017 film Blade Runner 2049, offer interesting and somewhat problematic takes on the future of AI. The future may be female, but in these imagined AI futures this is not a good thing.

Marvel assistants

At least since the demise of Stark’s sentient AI JARVIS in Avengers: Age of Ultron (2015), the fictional AI landscape has become predominately female. Stark’s male AI JARVIS – which he modelled on and named after his childhood butler – is destroyed in the fight against Ultron (although he ultimately becomes part of a new embodied android character called The Vision). Stark then replaces his operating system not with a backup of JARVIS or another male-voiced AI but with FRIDAY (voiced by Kerry Condon).

Tony Stark. Credit: Marvel

FRIDAY is a far less prominent character. Stark’s AI is pushed into a far more secondary role, one where she is very much the assistant, unlike the complex companion Stark created in JARVIS.

Likewise, in Spider-Man: Homecoming, Stark gifts Peter Parker (Tom Holland) his own super suit, which comes with a nameless female-voiced virtual assistant. Peter initially calls her “suit lady”, later naming her Karen. Peter imbues his suit with personality and identity by naming it, but you wonder if he would have been so willing to imagine his suit as a caring confidant if it had come with an older-sounding male voice.

Karen is virtual support for the Spider-Man suit, designed to train and enhance Peter’s abilities. But as Peter builds a relationship of trust with her, Karen takes on the role of a friend, even encouraging him to approach the girl he likes at school. Here, the female-voiced AI takes on a caring role – as a mother or sister – which places Karen in another limiting female stereotype. Female-voiced or embodied AI is expected to have a different role to its male-aligned counterparts, perpetuating the idea that women are more likely to be in the role of the secretary rather than the scientist.

Blade Runner‘s Joi

Another classic example of artificial intelligence can be found in Blade Runner (1982) and its bio-robotic androids, the Replicants. These artificial beings were designed and manufactured to do the jobs that humans in the future didn’t want: from colonising dangerous alien planets to serving as sex workers. Although stronger and often smarter than their human creators, they have a limited lifespan that literally stops them from developing sufficiently to work out how to take over.

The recent Blade Runner 2049 updates the replicants’ technology and introduces a purchasable intelligent holographic companion called Joi (Ana de Armas). The Joi we are shown in the film is Agent K’s (Ryan Gosling) companion – at first restricted by the projector in his home and later set free, to an extent (Joi is still controlled by K’s movements), when K buys himself a portable device called an Emanator. Joi is a logical extension of today’s digital assistants and is one of the few female AIs to occupy the narrative foreground.

But at the end of the day, Joi is a corporate creation that is sold as “everything you want to hear and everything you want to see”. A thing that can be created, adapted, and sold for consumption. Her holographic body makes her seem a little more real but her purpose is similar to those of the virtual assistants discussed here already: to serve often male masters.

Subservient women

When we can seemingly only imagine an AI as a subservient woman, we reinforce dangerous and outdated stereotypes. What prejudices are perpetuated by putting servile, obedient females into our dreams of technology, as well as our current experiences? All this is important because science fiction not only reflects our hopes and fears for the future of science, but also informs it. The imagined futures of the movies inspire those working in tech companies as they develop and update AI, working towards the expectations formed in our fictions.

Just like in the movies, real-life virtual assistants often default to female voices (Siri, Alexa). But there is some promise of change: after announcing in May that Google Assistant would get six new voices, with the default named “Holly”, Google more recently issued an update that assigns the voices colours instead of names, allotted randomly to avoid any association between particular colours and genders.

This is a promising step, but technology cannot progress while the same types of people remain in control of its development and management. Perhaps increased female participation in Silicon Valley could change the way we imagine and develop technology, and how it sounds and looks. Diversity in front of and behind the Hollywood camera is equally important in order to improve the way we present our possible futures and so inspire future creators.

Amy Chambers, Senior Lecturer in Film Studies, Manchester Metropolitan University

This article was originally published on The Conversation. Read the original article.

The Language of Sexual Harassment: How Words and Images Normalise Predatory Male Behaviour

There is something innately wrong about the way harassment is portrayed in the media and the effect it has on the reader.

It’s time copy editors dump the predator-prey images and experiment with more positive images that recognise the strength needed to deal with such harassment. Credit: Pixabay

One of the defining stories of 2017 was the spontaneous birth of the #MeToo revolution, which prompted thousands of women around the world to recount their experiences of sexual harassment. While harassment wasn’t news to most women, last year was probably the first time that many men got a glimpse into the horrors that women routinely endure.

The #MeToo stories are also unique in their directness – no more euphemisms or shameful silences. Women have publicly declared that it is not acceptable to let men in power get away with such behaviour.

As a practitioner in the field of diversity and inclusion, this openness is refreshing. But if we are serious about tackling this epidemic of sexism, we need to also consider the passive enablers of sexual harassment. Unconsciously, through our words and actions, how do we, as a society, condone this behaviour?

A picture is worth a thousand words

Consider the way publications usually cover a story on sexual harassment. The image that leads the piece shows a creepy male hand groping a woman. The pictures are unusually suggestive – compare the visuals here, here and here.

This may be part of the reality, but there is something innately wrong about the way harassment is portrayed in the media and the effect it has on the reader. From a journalistic point of view, explicit images draw attention and increase readership. But depicting the man as a physical aggressor and the woman as a cowering victim simply reinforces the helplessness of the situation more than it intends to. These power-based portrayals have a significant effect on the audience, influencing, among other things, their choice of career and their willingness to speak up.

Perhaps it’s time copy editors dump the predator-prey images and experiment with more positive images that recognise the strength needed to deal with such behaviour.

Sticks and stones

It’s not just the images, language matters as well. There is evidence that the use of passive voice in reporting sexual assault or harassment can unconsciously shift the onus and responsibility from the perpetrator to the one facing it. For instance, we often say “a woman was raped”, whereas it’s more accurate to describe the act as “a man raped a woman”.

This might seem like a quibble, but the lack of an active voice signifies that the assault wasn’t committed by someone – it just happened. Describing the act in an active voice places the spotlight on the harasser, not the victim. Similarly, phrases such as ‘violence against women’ airbrush the male perpetrators from the act altogether.

Passive narration also has a detrimental effect on the victim, since the focus shifts to how the woman found herself in such a situation. Victim precipitation is a theory in which possible reasons for sexual assault are given that hold the victim responsible – her skirt was too short, she had one too many drinks, she was out late.

This attitude of assigning blame has long-term consequences. The profile of a harasser is not a simple matter to quantify, but research suggests that it is a combination of sex and the exercise of power that leads to harassment or assault. It is this sense of power that is heightened when men find themselves not having to account for their behaviour or crime, but are able to transfer the blame to the other side.

Unconscious biases

The stereotypical portrayals of women in advertisements aren’t helpful either. Most commercials for dental or medical products feature a male doctor patiently explaining the latest scientific breakthrough to a concerned, but obviously ignorant, mother. Most household products show mothers as some version of happy cooks or excited cleaners. These images tug at our notion of an ideal mother. They also reinforce gendered boundaries within the family – men are professionals, women are caregivers.

This isn’t just some feminist anecdote. Unilever’s own research on gender in advertising shows that only 3% of commercials show women as leaders and just 2% portray them as intelligent.

Casual sexism in tech

Not surprisingly, our online personas mimic our offline biases. Apple’s voice assistant Siri and Amazon’s Alexa are both female by default, with a penchant for following orders and taking frequent sexist comments in their stride.

Microsoft has a similar artificial intelligence (AI) assistant called Cortana, which is based on a hyper-sexualised female character in the video game Halo. All these assistants were launched with a female voice; male counterparts were only added later as updates.

Facebook isn’t far behind: its assistant M is inspired by Moneypenny, James Bond’s secretary, who is known to humour his chauvinistic behaviour. Beyond ‘smart assistants’, tech is laden with everyday sexism. Type the word ‘CEO’ on an iPhone and it will suggest only a male emoji.

An attitudinal change

Sexism and gender bias do not form in a day. They are rooted in the everyday experiences, backgrounds and society we are exposed to. Social media and technology play an important role, and for today’s youth, growing up with female AI assistants that obey unconditionally can have deeper consequences.

But how do we stop this wrongful depiction of power? Different people are trying different approaches. Last year, the United Nations and Unilever took part in the #Unstereotype campaign along with Facebook, Google, AT&T and others to bring about an attitudinal change towards gender equality.

The Advertising Standards Authority in the UK has enforced stricter guidelines to curb gendered stereotypes in advertising. In cinema, the Bechdel test checks whether two women in a film talk to each other about something other than a man; the Finkbeiner test guides journalists to reduce bias in write-ups about women in science. In all of these corrective measures, the unifying idea is the same – consciously or unconsciously, it doesn’t help to have so many passive enablers of sexist and hostile behaviour.

We should take a cue from the fearless little girl who stood facing the mighty Wall Street bull, and decide who should be portrayed in a position of power.

Ishani Roy is the founder of Serein Inc, a diversity and inclusion consulting company.

India’s Website Operators Cannot Delay Further – Turn on HTTPS Already

Only 60% of India’s top 500 websites support HTTPS in some form or another, which means that Indian website operators provide a lower level of online security than those from the US.

Indian website operators need to implement HTTPS for the protection and benefit of all Indians. Credit: Yuri Samoilov/Flickr (CC BY 2.0)

Individuals and computer systems on the Internet are under attack daily. Incidents of financial fraud, embarrassing leaks of emails and photos, and the hijacking of systems for ransom keep increasing. Governments and private organisations must recognise the threat and take action immediately.

One of the most basic protections they can take is to encrypt all data that traverses the Internet so that only the intended receiver can interpret the data. Website operators who have not turned on encryption for their communications should do so immediately. There is no good reason why all data on the Internet cannot be encrypted.

Most people use the World Wide Web, whether for email, social networking, commerce, banking or searching for information, without knowing or caring about how it all works. There’s nothing wrong with that. People use cars and refrigerators the same way. However, people trust that the car has been built well and meets some minimum requirements to protect their physical safety, such as that the fuel tank won’t just catch fire. Similarly, when it comes to the modern Internet, data security is essential. Without it, not only can people’s privacy, finances or reputations be ruined, but as cars and other devices get connected to the Internet, the physical safety of individuals and their families is also at risk.

Wild Wild Web

To see how encryption protects individuals, consider the path the data takes between a user’s device and the remote server. It may go through a Wi-Fi access point, an Internet service provider, interconnection organisations that link Internet providers together behind the scenes and companies that provide undersea cables to carry data to servers in other countries. All these providers have the opportunity to snoop on data in transit. Additionally, any of them could have their systems compromised by criminals, cyberespionage groups or hostile governments. If the data is not encrypted, then it is open to access by any and all of them.

Many websites, especially those that involve financial transactions, do provide an encrypted connection. A user can tell when this is the case by looking at the address area of their browser. Most browsers display a lock or some other visual indication, and the website address is prefixed with “https”. Here are two examples of how it looks in the Chrome and Firefox browsers.

HTTPS encrypted site indication in Chrome.

HTTPS encrypted site indication in Firefox.

In contrast, this is how it looks when the communications are not encrypted:

No HTTPS in Chrome.

Hi-Yo, HTTPS!

The technology to encrypt web data, known as HTTPS, is nearly as old as the web itself, and the principle is straightforward. In the simplest terms, when someone uses a web browser or an app to connect to a remote server, the server responds with information that the browser can then use to keep secret all further communication with that server. This information is called a certificate, and website operators acquire certificates from recognised organisations known as certificate authorities. The cost of a certificate can be as little as Rs 1,000, and there are also sources of free certificates. Certificates are tied to particular domains, so even if someone got access to a site’s certificate and installed it on another site, all modern browsers would raise an alert.
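To see this handshake in practice, here is a minimal sketch in Python, using only the standard library, that connects to a server and retrieves its certificate the way a browser would; the choice of www.wikipedia.org is purely illustrative.

```python
import socket
import ssl


def fetch_certificate(hostname: str, port: int = 443) -> dict:
    """Connect over TLS and return the server's certificate details.

    ssl.create_default_context() verifies the certificate chain against
    the system's trusted certificate authorities and checks that the
    certificate matches the hostname -- the same checks a browser makes.
    A self-signed or mismatched certificate raises an SSL error here,
    just as it triggers a warning page in Chrome or Firefox.
    """
    context = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            return tls.getpeercert()


if __name__ == "__main__":
    try:
        cert = fetch_certificate("www.wikipedia.org")
        # subject/issuer are tuples of name components; flatten for display
        print("issued to:", dict(field[0] for field in cert["subject"]))
        print("issued by:", dict(field[0] for field in cert["issuer"]))
        print("expires:  ", cert["notAfter"])
    except OSError as exc:  # no network access, or verification failure
        print("could not fetch certificate:", exc)
```

Against a site with a valid certificate this prints the subject, issuer and expiry date; against a site presenting an invalid certificate, the verification step fails instead of silently returning data.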

The following images show how invalid certificates appear in Chrome and Firefox.

Self-signed certificate in Chrome.

Incorrect certificate in Firefox.

There are still three privacy issues that HTTPS encryption does not solve. The first is that even though third parties cannot tell what data is being transferred between a person’s browser and the remote server, and so the details of which pages the user is accessing remain private, they can tell that a person has visited a site. That may be fine in the case of a news or information site such as Wikipedia, but if the site exclusively provides romantic matching services, information on a particular disease or political parties, for example, that may reveal some information about an individual that the person may want to keep private. The more specific a site’s purpose, the more information that a person is disclosing about themselves when visiting the site. To hide this activity from third parties, they will need to take extra steps, such as using a VPN or the Tor network.

The second privacy issue with HTTPS is that sometimes the encryption doesn’t go all the way to the remote server that is the source of the content or service, known as the origin server. That’s because site operators may choose to use intermediary services that speed up delivery of the content or service, and it is the intermediary who handles the encryption from their network to the browser. Generally, sites that handle credit card data would not do this, but a content-only site may.

The third issue is that even if the data is encrypted all the way to the origin server, HTTPS does not guarantee that the site is a legitimate business or service provider. When a person sees the green “https” or lock symbol, they may assume that the site is trustworthy. However, it could be a false site set up to collect data or defraud individuals. In March, an encryption expert reported that over 14,000 certificates had been issued to sites with “paypal” somewhere in the domain name, such as “paypal.verification.zrxpu.ru”. Nearly all of the specified domains were not under the control of PayPal Inc. and most of them are suspected phishing sites. Ninety percent of these certificates were issued just in the four months prior to March by Let’s Encrypt, an initiative that distributes free certificates to promote the laudable goal of making HTTPS the default standard on the web. Unfortunately, phishers are capitalising on this. At a security conference in April, the online security company Kaspersky revealed that in October 2016 all traffic destined for the websites of a Brazilian bank was rerouted to fake sites complete with HTTPS certificates issued by Let’s Encrypt in the bank’s name.

One way to counter this trend is for sites to get special certificates that indicate that the entity behind the site has been verified to be legitimate. When sites use these Extended Validation certificates (EV certificates) their registered name appears in the browser address bar. While this process can also be manipulated and is not foolproof, currently it adds a layer of reassurance. For example, State Bank of India uses EV certificates and its name appears next to the web address.

EV certificate in Chrome.

EV certificate in Firefox.

One common misperception is that security certificates, even EV certificates, are expensive. That may have been true in the past, but they are now quite inexpensive. An EV certificate can be purchased for under Rs 30,000, which is well within the means of any entity that wants to set up a website.

The trend towards total HTTPS usage has picked up considerably in the past year. There are multiple reasons for this, including a push by many stakeholders, such as the Electronic Frontier Foundation (EFF), browser creators such as Mozilla and Google, the US government and security experts. EFF helped launch Let’s Encrypt in late 2015, and by May 2017 it had issued over 35 million certificates. Starting in 2017, the Chrome browser prominently displays “Not secure” whenever a webpage is not using HTTPS. In 2015 the US government issued “A Policy to Require Secure Connections Across Federal Websites and Web Services,” which gave a prominent boost to the movement. Even the labour cost is quite low. While it depends on the complexity of the website, most sites can be HTTPS-enabled in a few hours.

However, numbers don’t tell the whole story. By one measure, about 55% of the data on the web was encrypted at the beginning of May, while a February scan of the top one million visited websites found that only 20% of them supported HTTPS. The confusion arises because of differences in measurement (some reports rely on browsers like Chrome and Firefox sending data to their parent organisations) and differences in interpretation (some sites send data both encrypted and unencrypted, so they could be counted either as using HTTPS or not). What is clear is that the amount of encryption is increasing, and that is encouraging. Like the eradication of smallpox and the near-eradication of polio, with enough concerted effort, the usage of HTTPS can be brought up to 100%.

Raising India

In India, the rate of adoption of HTTPS lags that of the US, but leads countries like Germany, Brazil and Japan, according to statistics from Google. In April, this author conducted a systematic scan of the Alexa top 500 sites in India. It showed that about 60% of them support HTTPS in some form. A smaller percentage, about 40%, redirect a request for an unencrypted page to the HTTPS version, which is the recommended policy. There is also a difference in responses depending on whether a person types “www” in front of the domain name, which should not be the case. The results are summarised below.
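A scan along these lines is easy to reproduce. The sketch below (an illustration, not the author’s actual methodology) classifies a single domain into the three categories the scan distinguishes: redirecting plain HTTP to HTTPS, supporting HTTPS without redirecting, or not supporting HTTPS at all.

```python
import urllib.error
import urllib.request


def https_status(domain: str) -> str:
    """Classify a site's HTTPS support.

    Returns:
      'redirects' -- plain-HTTP requests are forwarded to HTTPS
                     (the recommended policy)
      'supported' -- HTTPS works, but HTTP pages are still served
                     unencrypted
      'none'      -- the site does not answer over HTTPS
    """
    try:
        # urllib follows redirects, so the final URL of a plain-HTTP
        # request reveals whether the site upgraded it to HTTPS.
        with urllib.request.urlopen(f"http://{domain}/", timeout=10) as resp:
            if resp.geturl().startswith("https://"):
                return "redirects"
    except (urllib.error.URLError, OSError):
        pass  # HTTP unreachable; fall through and probe HTTPS directly
    try:
        with urllib.request.urlopen(f"https://{domain}/", timeout=10):
            return "supported"
    except (urllib.error.URLError, OSError):
        return "none"


if __name__ == "__main__":
    for domain in ("wikipedia.org", "example.com"):
        print(domain, https_status(domain))
```

Running this over a list of the top 500 domains and tallying the three categories gives figures directly comparable to the percentages reported here.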

Since many of the top sites in India are sites originating outside the country, such as Google.com or Wikipedia.org, it is also worth looking at HTTPS adoption by country of the site. This is not readily available information, but one way to approximate it is to look at where the traffic for the site originates. If a site’s top source of traffic is India, it is likely an Indian origin site.

By and large this appears to hold true. Broken down this way, HTTPS adoption among presumed Indian sites is far lower than in the US. Out of 351 presumed Indian sites, 155 (44%) do not respond with HTTPS, compared to 16 of the 124 presumed US sites (13%). In other words, Indian website operators are providing a far lower level of online security than those from the US.

For the protection and benefit of all Indians, Indian website operators need to implement HTTPS. The cost of acquiring certificates is low and the effort per website is minimal. The Indian government should make it a goal to implement HTTPS across all its servers by the year’s end. Companies and organisations should also turn on HTTPS and those that already have should verify that they are using the latest secure methods. With a little effort India could match or beat the US to take the top spot in terms of web encryption.

Sushil Kambampati is the founder of YouRTI.in, where anyone can suggest an RTI query simply and anonymously. He writes about online security and privacy, and tweets @SKisContent.