Stefanie Ullmann

Brilliant Minds

What can we do about hate speech and harmful online content, and the inequalities and biases built into Artificially Intelligent (AI) communications technology such as Google Translate? How do we regulate these systems so that they don’t foment racism or sexism, without restricting freedom of speech?

These challenges are what Eddington resident Dr Stefanie Ullmann is tackling as a postdoctoral Research Associate at the University of Cambridge’s Centre for Research in the Arts, Social Sciences and Humanities (CRASSH).

She is currently working on a project called Giving Voice to Digital Democracies: The Social Impact of Artificially Intelligent Communications Technology.

 

Background

Dr Ullmann studied English in Germany. She particularly enjoyed the work of Noam Chomsky, which gave her an introduction to linguistics as well as an insight into critical studies of politics and the media.

She said, “I found linguistics interesting because it was a challenge – it was not something I had learned in school. I focused on that and then went on to explore cognitive linguistics and particularly the use of metaphorical language, which I pursued in my PhD thesis.”

 

Hate Speech

She analysed how metaphorical language was used in socio-political contexts and conflicts, focusing on the Arab Revolutions. Dr Ullmann built a corpus of more than 11 million words, taken from different media outlets and core political speeches. She then conducted an analysis of these words, asking questions like, ‘what does that mean?’, ‘how can we group these conceptualisations together?’ and ‘how do they influence the way we think about events?’

 

“I’ve always been interested in the use of language to exert power over people, and how new media works together with language to influence people.”

 

When she moved to Eddington in 2018 the problem of hate speech was on the rise, so Dr Ullmann and her CRASSH colleagues started working on it – and they are still working on it today. She said, “It’s kind of shocking that there is still not a solution to it.”

 

Gender Bias

Dr Ullmann and her colleagues are also analysing gender bias, looking at machine translation tools such as Google Translate. For instance, if you want to translate the word ‘doctor’ from English into German, the algorithm has to make a gender choice, and it will most likely go for the male version of ‘doctor’, which is the program’s default. Being bilingual and specialising in linguistics, Dr Ullmann understands the complexities of grammatical gender and how it varies across languages.
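The default described above can be illustrated with a minimal sketch. This is a hypothetical toy, not Google Translate’s actual system: the two-word lexicon and the `translate` function are invented for illustration, but they show how a system with no gender cue in the source sentence can silently fall back to the masculine form.

```python
# Toy English-to-German lexicon (illustrative only): each entry maps
# to a (feminine, masculine) pair of noun forms.
LEXICON = {
    "doctor": ("Ärztin", "Arzt"),
    "teacher": ("Lehrerin", "Lehrer"),
}

def translate(word, gender=None):
    """Translate a single noun; with no gender cue, default to masculine."""
    feminine, masculine = LEXICON[word]
    if gender == "f":
        return feminine
    # The biased default: masculine is returned whenever gender is unknown.
    return masculine

print(translate("doctor"))        # -> Arzt (masculine chosen by default)
print(translate("doctor", "f"))   # -> Ärztin
```

Because English ‘doctor’ carries no grammatical gender, nothing in the input ever triggers the feminine branch – the bias lives entirely in the fallback.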

“Google has made some changes, but it’s all been reactive,” she said. “People start complaining, and then Google will look into it. It’s not an easy issue to fix, especially when you look at different languages – they all have different grammatical gender systems, and to account for all of the language pairs that Google offers, you need to involve linguistic specialists who know what they are doing. Slowly but surely Google is catching up on this, but it’s difficult.”

She has also been surprised by how much hate speech has been engineered into political rhetoric, even in modern governments. Recently, she was invited by the Hrant Dink Foundation to an event in Istanbul organised to look at ways of combating hate speech in Turkey, where she was introduced to another aspect of this phenomenon that we tend not to be exposed to in the UK.

“When I spoke to Turkish people about hate speech they said, ‘What do you do when you have a government that wants and supports hate speech, that hires people who are good at producing hateful messages, because they are a tool of manipulating and influencing people?’ And that wasn’t something I had considered before because I have been lucky enough not to live in an authoritarian country.

“You even see this in countries like America, where Donald Trump increased the divisions between people. What do you do when your head of state is using hate speech, in political speeches and also social media?”

 

Working in CRASSH

Dr Ullmann is grateful to work in CRASSH and finds the way the organisation pools expertise from different academic disciplines incredibly useful. The team collaborates on new ideas and looks at what can be done to solve problems.

She said, “When I did my PhD I was very isolated, but with CRASSH you share an office with other postdocs and visiting fellows, and you have the opportunity to discuss things with people who are not from your discipline.

 

“Our team consists of linguists, engineers and computer scientists; and we have collaborated with experts from philosophy, psychology, and the social sciences.” 

 

Dr Ullmann explained that when she has conversations with students of computer science or engineering, it’s often the case that they are simply not aware of the potential problems involved with the code that they write. When she tells them about issues such as gender bias in machine translation, they are surprised – and keen to do something that could make a positive impact.

“We need researchers to stop existing in their bubbles, especially in tech, and learn more about the social and ethical side effects of their work. This way of working is the only way we will find solutions.”

Dr Ullmann believes in free speech, but also that some things should not be allowed to be said. She also realises that making everything illegal is not the solution. So what can be done to resolve these issues? One solution is software: Dr Ullmann and her colleagues have developed a quarantining tool that detects hate speech and treats it like a computer virus, placing it in quarantine and essentially protecting the user from immediate exposure to harmful content.
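The quarantine idea can be sketched in a few lines. This is a minimal illustration of the general approach, not the CRASSH team’s actual tool: the word list and scoring function below are crude placeholders standing in for a trained classifier.

```python
# Placeholder word list standing in for a trained hate-speech classifier.
BLOCKLIST = {"hateword1", "hateword2"}

def score(message):
    """Crude stand-in for a classifier: fraction of flagged words."""
    words = set(message.lower().split())
    return len(words & BLOCKLIST) / max(len(words), 1)

def deliver(message, threshold=0.2):
    """Hold back messages scoring above the threshold, like a virus scanner:
    the reader sees a warning instead of the content itself."""
    if score(message) > threshold:
        return "[Quarantined: this message may contain harmful content]"
    return message

print(deliver("hello there"))              # delivered unchanged
print(deliver("hateword1 hateword2 you"))  # replaced by a quarantine notice
```

As with email spam filters, the key design choice is that nothing is deleted or made illegal – the message is simply held at arm’s length until the user chooses whether to view it.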

Another potential solution is ‘counterspeech’, which has been shown to be an effective remedy to hate speech. Counterspeech undermines hateful or harmful speech, and Dr Ullmann believes it is an excellent tool for fighting hate and misinformation online. It uses a diplomatic approach of responding with empathy instead of aggression, and challenges statements of hostility and the narrative of hate speech in a rational way.

She has been looking at ways of improving the automation of counterspeech, a task that requires a multidisciplinary effort. To this end she organised an online event last year bringing together practitioners, linguists, philosophers, sociologists, anthropologists, mathematicians and computer scientists to discuss the issue. Dr Ullmann is also compiling an edited volume on this topic, to be published by Routledge. She said, “Combining our efforts and using technology to create counterspeech can effectively fight hate speech online.”

 

Living in Eddington

 

“I didn’t know exactly what to expect, moving from a different country. It’s been nice living here. The flats are almost perfect.”

 

Being immunocompromised, it was very important for Dr Ullmann to isolate safely during lockdown. Despite this she was still able to enjoy nature thanks to the green open spaces in Eddington. She spent a lot of time at Brook Leys Lake during the pandemic, “I went there every day to feed the swans. That was really helpful; it was a way to get out for a walk and to an area that was not overcrowded. It also helped to alleviate the fatigue that I get from my health conditions.”

No matter how challenging hate speech and gender bias are to tackle, Dr Ullmann and her colleagues at CRASSH are combining some of the brightest minds from diverse academic disciplines at the University of Cambridge, to create solutions to these global challenges.

 
