Peacebuilder and Ashoka Fellow Helena Puig Larrauri co-founded Build Up to transform conflict in the digital age, from the United States to Iraq. With the exponential growth of polarizing viral content on social media, a key systemic question has emerged: what if we made platforms pay for the harm they cause? What if we imagined a polarization tax, similar to a carbon tax? A conversation about the root causes of online polarization and why platforms must be held accountable for the negative externalities they create.
Konstanze Frischen: Helena, does technology help or harm democracy?
Helena Puig Larrauri: It depends. There is great potential for digital technologies to include more people in peace processes and democratic processes. We work on conflict transformation in many regions of the world, and technology can really help to include more people. In Yemen, for example, it can be very difficult to incorporate women’s views into the peace process. So we worked with the UN to use WhatsApp, a very simple technology, to reach women and make their voices heard, avoiding security and logistical challenges. This is an example of the potential. On the other hand, digital technologies bring immense challenges, from surveillance to manipulation. And here, our work is to understand how digital technologies are affecting the escalation of conflicts and what can be done to mitigate this.
Frischen: You have teams working in countries like Yemen, Kenya, Germany, and the US. What does it look like when digital media intensifies conflict?
Puig Larrauri: Here's an example: we worked with partners in northeastern Iraq, looking at how conversations happened on Facebook, and it quickly became clear that what people said and how they positioned themselves had to do with their sectarian identity, whether they said they were Arab or Kurdish. But what was happening at a deeper level is that users began to associate a person's opinion with their identity, which means that in the end, what matters is not so much what is said, but who says it: their own people, or other people. And it meant that conversations on Facebook were extremely polarized, and not in a healthy way, but along identity lines. We should all be able to disagree on issues in a democratic process, in a peace process. But when identities or groups start to oppose each other, that's what we call affective polarization. It means that no matter what you say, I will disagree with you because of the group you belong to. Or, conversely, no matter what you say, I will agree with you because of the group you belong to. When a debate is in this state, you are in a situation where the conflict is very likely to become destructive, and to escalate into violence.
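What Build Up observed qualitatively can also be framed as a measurement problem. Below is a minimal sketch, not Build Up's actual methodology, of one way to quantify this identity-opinion coupling: compute Cramér's V between the group a user claims and the stance they express, over hypothetical labelled posts. A value near 0 means opinions cut across groups; a value near 1 means identity alone predicts opinion.

```python
# Minimal sketch (not Build Up's methodology): measure how strongly a
# user's group identity predicts the stance they express, using
# Cramér's V on an identity-by-stance contingency table.

from collections import Counter
from itertools import product
from math import sqrt

# Hypothetical labelled posts: (identity group, expressed stance).
posts = [
    ("group_a", "support"), ("group_a", "support"), ("group_a", "oppose"),
    ("group_b", "oppose"), ("group_b", "oppose"), ("group_b", "support"),
    ("group_a", "support"), ("group_b", "oppose"),
]

def cramers_v(pairs):
    n = len(pairs)
    groups = sorted({g for g, _ in pairs})
    stances = sorted({s for _, s in pairs})
    observed = Counter(pairs)
    g_totals = Counter(g for g, _ in pairs)
    s_totals = Counter(s for _, s in pairs)
    # Pearson chi-square statistic over the contingency table.
    chi2 = 0.0
    for g, s in product(groups, stances):
        expected = g_totals[g] * s_totals[s] / n
        chi2 += (observed[(g, s)] - expected) ** 2 / expected
    k = min(len(groups), len(stances)) - 1
    return sqrt(chi2 / (n * k))

print(f"identity-stance association: {cramers_v(posts):.2f}")  # 0.50 here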
Frischen: Are you saying that social media makes your job harder because it drives affective polarization?
Puig Larrauri: Yes, it certainly seems like the odds are stacked against our work. Offline, there may be space, but online there often seems to be no way to start a quiet conversation. I remember a conversation with the leader of our work in Africa, Caleb. During the recent election cycle in Kenya, he told me: "When I walk the streets, I feel like it's going to be a peaceful election. But when I read social media, it's a war zone." It stuck with me because even for us, professionals in this space, it is unsettling.
Frischen: The standard way for platforms to react to hate speech is content moderation: detect it, flag it, and, depending on the jurisdiction, maybe remove it. You say that's not enough. Why?
Puig Larrauri: Content moderation helps in very specific situations: it helps with hate speech, which is in many ways the tip of the iceberg. But affective polarization is often expressed in other ways, for example through fear. Fear speech is not the same as hate speech. It cannot be identified as easily, and it probably won't violate the terms of service. Yet we know that fear speech can be used to incite violence, and still it wouldn't violate the platforms' content moderation guidelines. This is just one example; the point is that content moderation will only catch a small portion of the content that is amplifying divisions. Maria Ressa, the Nobel laureate and Filipino journalist, recently put it very well. She said something to the effect that the problem with content moderation is that it's like fetching a cup of water from a polluted river, cleaning the water, and then returning it to the river. So I say we need to build a water filtration plant.
Frischen: Let's talk about the root cause, then. What does the underlying architecture of social media platforms have to do with the proliferation of polarization?
Puig Larrauri: There are actually two reasons why polarization thrives on social media. One is that it invites people to manipulate others and to deploy mass harassment: troll armies, Cambridge Analytica, we have all heard these stories. Let's put that aside for a moment. The other aspect, which I think deserves much more attention, is the way social media algorithms are built: they seek to serve you engaging content. And we know that polarizing affective content, which pits groups against each other, is very emotional and very engaging. As a result, the algorithms serve more of it. What this means is that social media platforms create incentives to produce polarizing content, because it will be more engaging, which encourages people to produce more content like it, which makes the feed more engaging still, and so on. It's a vicious circle.
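The vicious circle lends itself to a toy model. The sketch below uses invented engagement rates and an invented adaptation speed, not platform data: producers publish a mix of neutral and polarizing posts, then shift their mix toward whatever captured the larger share of engagement. Under an engagement-ranked feed, the polarizing share ratchets upward.

```python
# Toy model of the vicious circle: polarizing posts engage more, producers
# chase engagement, so the polarizing share of content drifts upward.
# Engagement rates and adaptation speed are assumptions for illustration.

ENGAGEMENT = {"polarizing": 0.9, "neutral": 0.4}  # assumed engagement rates
share_polarizing = 0.2  # producers start out mostly neutral

for step in range(12):
    n_pol = int(1000 * share_polarizing)
    n_neu = 1000 - n_pol
    eng_pol = n_pol * ENGAGEMENT["polarizing"]
    eng_neu = n_neu * ENGAGEMENT["neutral"]
    # Producers adapt: next period's mix drifts toward the share of
    # total engagement that polarizing posts captured this period.
    reward_share = eng_pol / (eng_pol + eng_neu)
    share_polarizing += 0.5 * (reward_share - share_polarizing)
    print(f"step {step:2d}: polarizing share of posts = {share_polarizing:.2f}")
```

Under these assumed numbers the polarizing share converges toward 1.0: no individual actor has to want polarization for the loop to produce it.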
Frischen: So the spread of divisive content is almost a side effect of a business model that makes money from engaging content.
Puig Larrauri: Yes, that's the way social media platforms are designed right now: to engage people with content, any kind of content. We don't care what that content is, unless it's hate speech or something else that violates a narrow policy, in which case we'll take it down; but in general, what we want is more engagement with anything. And this is built into their business model. More engagement lets them sell more ads and collect more data. They want people to spend more time on the platform. So engagement is the key metric. It's not the only metric, but it is the key metric the algorithms optimize for.
Frischen: What framework could force social media companies to change this model?
Puig Larrauri: Great question, but to understand what I want to propose, let me first say that the most important thing is to understand that social media is changing the way we understand ourselves and other groups. It is creating divisions in society and amplifying existing political divisions. That's the difference between focusing on hate speech and focusing on this idea of polarization. Hate speech and harassment are about the individual experience of being on social media, which is very important. But when we think about polarization, we are talking about the impact social media has on society as a whole, regardless of whether I am being harassed personally. I am still affected by the fact that I live in a more polarized society. It is a negative externality: something that harms us all, whether or not we are individually targeted.
Frischen: Negative externality is an economic term that, to simplify, describes a cost generated in a production or consumption process, a negative impact that is not captured by market mechanisms and that harms someone else.
Puig Larrauri: Yes, and the key here is that this cost is not included in the production costs. Take air pollution. Traditionally, in industrial capitalism, people produced things such as cars and machines, and in the process they also produced environmental pollution. But at first, no one had to pay for the pollution. It was as if that cost didn't exist, even though it was a real cost to society; it just wasn't reflected in the market price. A very similar thing is happening with social media platforms right now. Their profit model isn't to create polarization; they simply have an incentive to serve content that is engaging, regardless of whether it is polarizing or not. But polarization happens as a by-product, and there is no incentive to clean it up, just as there was no incentive to clean up the pollution. That is why polarization is a negative externality of this platform business model.
Frischen: And what do you suggest we do about it?
Puig Larrauri: Make social media companies pay for it, by bringing the social pollution they cause into the market mechanism. This is, in effect, what we did with environmental pollution: we said it should be taxed, that there should be carbon taxes or some other cap-and-trade mechanism that makes companies pay for the negative externality they create. And for that to happen, we had to measure things like CO2 emissions and carbon footprints. So my question is: could we do something similar with polarization? Could we say that social media platforms, or perhaps any platform powered by an algorithm, should be taxed on their polarization footprint?
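By analogy with carbon pricing, the mechanics of such a levy are simple once a footprint can be audited; the hard part, as the rest of the conversation makes clear, is the footprint metric itself. A minimal sketch, with an invented footprint measure, invented platform figures, and an invented rate:

```python
# Minimal sketch of a Pigouvian levy on a polarization footprint, by
# direct analogy with a carbon tax. The footprint units, platform
# figures, and rate are all invented for illustration.

def polarization_tax(footprint: float, rate_per_unit: float) -> float:
    """Price the externality in proportion to its audited size."""
    return footprint * rate_per_unit

# Hypothetical audited footprints (arbitrary units, e.g. an
# exposure-segregation score scaled by active users).
platforms = {"platform_a": 1_250_000.0, "platform_b": 90_000.0}

RATE = 0.02  # hypothetical dollars per footprint unit

for name, footprint in platforms.items():
    tax = polarization_tax(footprint, RATE)
    print(f"{name}: footprint={footprint:,.0f} -> tax=${tax:,.2f}")
```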
Frischen: A polarization tax is such a creative new way of thinking about forcing platforms to change their business model. I want to acknowledge that there are others: in the US, there is discussion about reforming Section 230, which currently shields social media platforms from liability, and...
Puig Larrauri: Yes, and there's also a very big debate, which I'm very much in favor of and part of, about how to design social media platforms differently: making the algorithms optimize for something other than engagement, something that could be less polluting and produce less polarization. This is an incredibly important debate. The question I have, though, is how we incentivize companies to actually take it on. How do we incentivize them to say: yes, I'm going to make these changes, I'm not going to use this simple engagement metric anymore, I'm going to make these design changes to the underlying architecture? And I think the way to do that is essentially to provide a financial disincentive for not doing it, and that's why I'm so interested in this idea of a tax.
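In code, the design change under debate can be as small as a change to the ranking objective. The sketch below is hypothetical, not any platform's actual ranker: posts are scored by predicted engagement minus a weighted penalty for predicted polarization, and a tax is what would give a platform a reason to set that weight above zero.

```python
# Hypothetical ranking objective: engagement minus a polarization
# penalty. Scores and post data are invented; with penalty 0.0 this
# reduces to today's pure-engagement ranking.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_engagement: float    # e.g. output of an engagement model
    predicted_polarization: float  # e.g. output of a divisiveness model

def feed_score(post: Post, penalty_weight: float) -> float:
    return post.predicted_engagement - penalty_weight * post.predicted_polarization

posts = [
    Post("us-vs-them outrage", predicted_engagement=0.9, predicted_polarization=0.8),
    Post("cross-group explainer", predicted_engagement=0.6, predicted_polarization=0.1),
]

for w in (0.0, 1.0):  # w=0.0 mimics the current engagement-only model
    ranked = sorted(posts, key=lambda p: feed_score(p, w), reverse=True)
    print(f"penalty={w}: top post -> {ranked[0].text}")
```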
Frischen: How would you ensure that such a tax is not seen as undermining free speech protections? That's a big argument, especially in the US, where misinformation and hate speech can spread under that umbrella.
Puig Larrauri: I don't think a polarization footprint should necessarily look at speech. It can look at metrics related to platform design. It can look, for example, at the connection between belonging to a group and viewing only certain types of content. So there is no need to get into questions of hate speech or freedom of expression, and the censorship debate that entails. You can simply look at the design choices around engagement. As I said before, I don't actually think content moderation and censorship work particularly well to address polarization on platforms. What we need to do now is get to work measuring this polarization footprint and find the right metrics that can be applied across platforms.
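One content-neutral metric of the kind Puig Larrauri describes could be built from exposure data alone. The sketch below, on invented audit numbers, computes a dissimilarity index between two groups' content diets: 0 means both groups are served the same mix, 1 means fully segregated feeds, and no post ever has to be classified as hateful or not.

```python
# Sketch of a content-neutral footprint metric: how differently are two
# groups' feeds composed? Exposure counts are invented audit data; the
# index is a standard dissimilarity measure over exposure shares.

# views[group][content_type] = impressions served to that group
views = {
    "group_a": {"a_aligned": 900, "b_aligned": 100},
    "group_b": {"a_aligned": 150, "b_aligned": 850},
}

def exposure_dissimilarity(views_a: dict, views_b: dict) -> float:
    total_a, total_b = sum(views_a.values()), sum(views_b.values())
    categories = set(views_a) | set(views_b)
    # Half the L1 distance between the two exposure distributions:
    # 0 = identical content diets, 1 = completely segregated feeds.
    return 0.5 * sum(
        abs(views_a.get(c, 0) / total_a - views_b.get(c, 0) / total_b)
        for c in categories
    )

score = exposure_dissimilarity(views["group_a"], views["group_b"])
print(f"exposure dissimilarity: {score:.2f}")  # 0.75 on these numbers
```

A regulator could audit a metric like this from served-impression logs without ever reading, let alone censoring, an individual post, which is the point of her design-level framing.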
For more, follow Helena Puig Larrauri and Build Up.
Recent events have shown how social media has a unique ability to amplify extreme views. Users can find communities that affirm and validate their beliefs, creating feedback loops that can lead to further radicalization. Beyond the social costs, these extreme views can heighten intergroup conflict, with serious implications for civil society. Thankfully, there are steps that can be taken to address this problem.
Ikaroa, a full stack technology company, is committed to creating a platform that can foster meaningful dialogue and help defuse extreme viewpoints. Through its software, the company provides tools that encourage users to discuss opposing views in a moderated, safe space. With this focus, Ikaroa works diligently to curate content and provide features that allow users to engage with differing opinions without personal attacks.
In addition to its platform, Ikaroa provides resources to educate its users on current events and helps break down complex topics into easily digestible pieces. These resources can help users identify extreme views and understand their underlying causes, allowing them to interact with opposing opinions in a fruitful and informed manner. Moreover, members of the platform can engage with peers from different groups and create open channels for conversation.
The goal of Ikaroa is to promote healthy dialogue between different social and political factions, helping to de-escalate extreme viewpoints. By creating a safe, respectful space for open-minded conversations, the company can help reduce the spread of dangerous extremist ideology. Moreover, users are invited to contribute their own knowledge to the platform, sharing their stories and insights with the goal of helping each other grow.
By combining its platform with its educational resources, Ikaroa has taken a stand against those who use social media as a tool to promote extreme views. Through these efforts, the company has created a safe and informative space that allows users to engage with each other without fear of hostility. By showing users the dangers of extremism and teaching them the power of open dialogue, Ikaroa is working to stem the spread of dangerous ideologies.