🌉 Designing social media to bridge divides could change everything
Sparkable Co-Founder Vardon Hamdiu joined Professor Deb Donig on her "The Technically Human Podcast" to discuss how to build healthier social media.

Sparkable Co-Founder Vardon Hamdiu was invited to The Technically Human Podcast, a show about ethics and technology hosted by Deb Donig. She's a Professor at Cal Poly and a lecturer at UC Berkeley.
Together, they spoke about how to redesign social media platforms to bridge divides and what profound changes that could lead to.
Listen to the podcast here:
Listen on Spotify, Apple Podcasts, or PodBean
Or keep reading for a lightly edited transcript:
Debugging Division: The Architecture of Bridge-Building Social Media
Deb Donig
This is Deb Donig with Technically Human, a podcast about ethics and technology where I ask what it means to be human in the age of tech.
Each week, I interview industry leaders, thinkers, writers, and technologists, and I ask them about how they understand the relationship between humans and the technologies we create. We discuss how we can build a better vision for technology, one that represents the best of our human values.
Today, we are bringing you a conversation with one technologist who is rethinking and reshaping social media to build platforms that spark empathy and joy, not division and hate.
Vardon Hamdiu is the co-founder and head of Sparkable, a young nonprofit organization seeking to build a social media platform aimed at bridging divides.

Growing up immersed in diverse cultures, Vardon has always been a bridge builder who navigates between worlds.
His family history has exposed him to the devastating consequences of communication breakdowns between ethnic communities and the outbreak of war. These experiences have profoundly shaped his understanding of the importance of empathy and social cohesion.
Over the past decade, Vardon has worked on the communications team of a Swiss president, he has studied to become a teacher, he has spent an exchange semester in South Africa, and he has engaged with refugees facing often traumatic circumstances.
These experiences made him acutely aware of the enormous disconnect between the information we consume online and the lived realities of many people around the globe.
He became deeply passionate about exploring why today's social media platforms are often dysfunctional and how these powerful systems which govern our collective attention could be constructed differently.
Driven by this vision, he made the pivotal decision to quit his job, drop out of his studies, and launch Sparkable, aiming to foster a healthier online environment.
Hi, Vardon.
Vardon Hamdiu
Hi, Deb.
Deb Donig
So, Vardon, I wanted to talk to you after learning about your new platform, Sparkable, which is a social media platform you designed with the specific vision of building a healthier social media platform aimed at bridging divides.
Just to give a little bit of background, this is an issue that is quite near and dear to my heart. At Cal Poly, I was the faculty advisor for a group of students who wanted to build a student coalition to develop a forum for civil discourse.
I have dedicated much of my career to thinking about and helping to create an understanding of how we can talk to one another across divides. And one of the fundamental beliefs that I hold about our democracy is that our democracy is contingent upon our ability to be able to speak to one another, to persuade one another, to be able to recognize what we have in common over our differences.
This, I believe, is central to the idea of a multiethnic liberal democracy, and, I think, something that we have come perilously close to losing in our current age.
So, I really wanted to talk to you after learning about the work that you're doing. And I wanted to start off by just asking you what led you to want to build a platform like Sparkable that is designed with these kinds of intentions; what were you seeing or experiencing that led you there?
Vardon Hamdiu
Yeah, it's a difficult question. And obviously, it's always a combination of different things:
I would say, maybe importantly, I grew up with different cultures, and I've always felt a bit in between worlds, so I always had this interest in bridging different viewpoints.
I also come from a family that has suffered from the consequences of war, so I know what can happen in the worst case when communication between communities breaks down completely and how it can lead to physical violence.
And so this question of how we live together as humans and what happens if we lose our common ground or mutual understanding is something that really keeps me up at night, and it's definitely a question that has become more and more urgent in my life.
I also encountered it in my professional life; I worked in the communications team of one of the Swiss presidents for half a decade, and in that job, I had to look through almost 500 news articles every day and decide for each and every article whether it was relevant to the President and the team or not. Then, I took the relevant ones and created a press review that they used in their daily briefings.
So, I was working almost like a human filter. I did that for half a decade, and in that time, I looked through more than 200,000 news articles. It had a huge effect on me, especially because during the same time, I had offline experiences that were so contrary to what I was experiencing in our information environment. I lived in South Africa for half a year, did an exchange semester there, and taught in private schools but also in township schools. I saw a lot of contrasts.
I also worked with refugees and got to meet hundreds of people and hear how they had to flee from their home countries for different reasons, horrifying reasons oftentimes.
These experiences combined really led me to thinking a lot about our information environment, and especially the disconnect between our online environment, and the lived reality of so many people around the world.
It got me thinking about why our information environment is so dysfunctional, and, more importantly, how it can be changed.
Deb Donig
And how did you end up at the idea that it was social media that was the most important thing to change?
Because I've had folks come on the show to talk about the political, social, cultural, and informational divisions that are currently embedded in our society and that are indeed growing in our society. And many of them point to social media as a root cause, by way of the filter bubbles that social media creates, the informational tribalism that results and gets engendered by these products.
So, why social media as the direction that you wanted to take, the ideas and the concerns that you had? And why do you think that our social media environment causes these divisions in your view?
Vardon Hamdiu
Yeah, so why social media... I would say because it's the place where we spend most of our attention. That's obviously not true for everyone, but I would say globally that the big platforms are really the place where we spend a lot of time.
And where we spend our attention and how those systems that allocate our attention are designed has a giant impact. It changes our views and our beliefs, even if it doesn't happen from one day to the other. It really does happen gradually by way of what we're exposed to.
I think, as humans, we learn from what we see and what we hear, and so it really shapes us in profound ways. This also ties back to my experience of being exposed to a lot of news for half a decade, probably more news than, let's say, a normal person without that specific job. And so I think social media just profoundly shapes us; that's the reason why I focused on that.
And then, as for what are the problems with social media, I would say it does come down to the business model. Especially, or specifically, the advertisement model that most of these big platforms are based on, and it's, in my opinion, one of the fundamental root causes of many of the problems that we see these platforms exacerbating.
The advertising business model is the reason why these platforms have algorithms that optimize for engagement, and that obviously has huge downsides because the things that get the most reactions are not always the things that are most constructive and productive for us as individuals, but also us as society.
Deb Donig
Listening to you talk, I am reminded of the earlier days of my Facebook career. That is to say, not my career working at Facebook, but my career as a user on the platform. I graduated college in 2005, the year that Facebook began expanding its network to different college campuses. For those who joined after that, or who are not of the millennial generation but slightly younger:
Facebook initially started on college campuses and then began to spread its usership across different colleges, allowing people to connect with one another, first only within the college they belonged to, and subsequently across different colleges. Facebook ultimately expanded beyond that, and the tagline it has consistently used to describe its product is the idea that it is connecting the world.
And I think of your experience, of somebody who grew up living in multiple places and across multiple cultures. That was potentially the case with me, too, moving from one culture to another and hoping to be able to reunite with the people I met and experienced there. The hope, I think, was that Facebook would allow us to retain those connections and, more than that, expand the number of people with whom we are in contact. It was, I think, a righteous vision of bringing people closer together.
You're suggesting that, despite that vision, the products – and I'm using Facebook here as a kind of metonym for all other social media products that are trying to do something similar – despite maybe worthwhile ambitions, the business model prohibits these platforms from achieving that end.
What do you think goes wrong on both a technical level as well as on kind of business decision level that maneuvers a company away from the aim that led to its inception and development and toward an aim that actually is antithetical to the vision articulated at the founding of that company?
Vardon Hamdiu
Yeah, I think you had a great episode with Yael Eisenstat, who worked at Facebook. We also have Frances Haugen, who worked at Facebook, and whose revelations clearly showed that within these companies, you oftentimes have people and entire teams who know what would be the right thing to do for Trust and Safety, for the well-being of us as individuals, but also for the health of society and democracy.
But at the same time, you have these conflicting incentives: you have people or teams within those companies knowing what should be done, but then you have the business incentive, which, at the end of the day, always wins.
Through the Frances Haugen revelations, we have evidence of that happening where you, at the end of the day, have decisions that are made because of financial incentives.
I think that's the fundamental problem when you have systems that are designed or incentivized to be generating profit. The effects it has on society, the costs and the externalities, are oftentimes just a second priority and something that is done only when it doesn't chip away too much at profit.
And I think it's just very dangerous for obvious reasons; we see what it leads to. I have a huge sense of urgency in that regard. I think we are really at the point in time where we need to change because we're heading in the wrong direction.
Deb Donig
So, I tend to hear two basic arguments about the root causes of these divisions on a technical level and on a design choice level.
One argument is called the design choices argument. It highlights the type of algorithms that most social media platforms enlist in determining how users interact with content: platforms, as you have already articulated, seek to maximize engagement and keep your attention on their platform as long as possible, because that aligns with the business model, which is built around showing you as many ads as possible.
Platforms amplify content that gets the maximum amount of engagement, content that provokes an emotional response in us, particularly a negative one, and so these platforms, as a result, end up amplifying and elevating the most divisive, outrageous and negative content.
If this argument is true, then it seems to me that there's a solution to the divisions caused by social media, albeit a solution that would require companies to grapple with their profit motive. And that argument suggests a solution that is, to me, fairly simple; change the algorithm, right?
The counterargument to this point is that there's a basic problem in the architecture of social media that's responsible for these divisions.
This argument, sometimes referred to as the inherently divisive view, argues that human psychology and market incentives make it virtually impossible to build a large-scale social network that does not amplify division.
The argument goes something like this. The nature of social media is that echo chamber formation is inevitable, since people naturally tend to connect with those who share their views. This means that, over time, users get isolated in ideological bubbles where they rarely encounter opposing views, and when they do encounter those opposing views, it's often the most extreme or uncharitable versions, which, of course, makes people even more entrenched in their positions. In addition, social media flattens complex discussions into short posts where nuances and qualifications get stripped away, creating a constant stream of misunderstandings and tribal signaling.
Finally, this argument says that without gatekeepers, as would be the case with traditional media, misinformation and emotional reactions spread far faster than any corrections could possibly spread.
Our global network status means that the spread happens on a global scale, which means that conflicts that would stay local ultimately become widespread and global.
So, on the one hand, you have this inherently divisive argument. On the other hand, you have the counterargument that says that these are not inevitable consequences; they're design choices that could be changed. So, what do you think about these two points of view? What do you think they get right, and what do you think that they misunderstand?
Vardon Hamdiu
It's such a good question because I think it's something that comes up a lot. It is almost ingrained in us, or has been pushed on us, this view that people are inherently bad, or that people want misinformation. People want this type of content, and so we're just giving it to them, right? It's this chicken-and-egg question: which came first, the need, or the services being the way they are today?
I'm very much on the “it's all a design question” side. That's the whole reason behind building Sparkable, showing that it can be done differently.
Social media can be done differently if it is designed consciously from the ground up to be something that contributes to the greater good, to mutual understanding, and brings out our best sides instead of our worst impulses.
I think it's an easy excuse for many people to say, “Well, the way things are today is because we as people want it this way.” I think it's an easy excuse, especially for the ones who are contributing to this or benefiting from the way things are today.
However, there is obviously some truth to statements like “misinformation spreads quicker than corrections,” and if you think about this philosophically, it's true that destruction is easier than construction.
It's almost like a design flaw of the universe, right? You can cut down a tree in five minutes, but growing it takes maybe 50 years. So it's true that it's oftentimes easier to do the bad thing, the thing that is easy in the moment, like creating a lot of profit now, but it then leads to many problems later.
So, we have this bias of doing what is easier in the short term but harder in the long term instead of doing the opposite.
But I think the beauty about us as humans is that we can recognize that; that it’s like an inherent flaw that we have, and we can create systems that work against that or keep it in check, and that's why I think it's so powerful how we design systems.
Because with the same argument, you could say, “Well, it’s much easier for people to just go around and destroy other people’s property and get ahead in that way or just have the worst kind of behavior.”
But what did we do as humanity? We created technology, or things like laws, to work around that problem and to create a different incentive set, almost like creating a new incentive set on top of what is existing in nature. I think that's just so, so powerful, and that's why I love technology. When it's designed the right way, it can create incentives for us to bring out our best side.
There are examples of that: One of the biggest inspirations, also in my work with Sparkable, is Wikipedia. Nobody can say it's not possible because Wikipedia has shown that it's possible to have people globally coming together and working within a system that is designed to collect the sum of all knowledge. I think it's just such a beautiful project. It shows that it's definitely possible to have something different than the status quo with the big platforms.
Deb Donig
Okay, so how do you build a healthier social media platform? What do you do? Walk us through your vision and your execution of that vision in Sparkable.
Vardon Hamdiu
First, I have to say that Sparkable is very much a work in progress. And I would never want to pretend that I have all the answers to all the questions. But I think if we come together and try to find answers to these questions, then everything is possible.
How we approach it with Sparkable is really just starting from scratch, meaning at the business model side, and deciding consciously that we're not building a social media platform on top of a business model that has other incentives, as opposed to what we're trying to do, which is bringing communities together.
So, we're building Sparkable as a nonprofit. That's the first conscious choice. It doesn't have an advertisement business model. It's donation-based, again, similar to Wikipedia.
We try to think, “If we have this vision of a platform that brings people together – that exposes you to other viewpoints, but always in a respectful, civil way – how would we design that?”
And now what we are doing is just that; we're building a platform, and we're asking ourselves, “Wait, do we need a ‘like’ button,” for example? Or “what could the like button look like if it is not tied to engagement and a profit motive”? In every design choice and everything we do, we think about the bigger vision and how we can contribute to that.
To be specific, Sparkable is a platform, a social media platform, where you can share articles, podcasts, or videos that you have seen, heard, or read and share why it gave you a better understanding of the world or a certain topic or of other people. And we're trying to collect these things.
I'm sure everybody has had those experiences of reading or listening to something that gives you a new insight into something, and we want to collect those little nuggets of gold and build a platform that evaluates which pieces of content are seen as high-quality by people from different perspectives. That's our approach at the moment, but it's very much a work in progress.
Deb Donig
Can you talk a little bit about one of the key terms that I understand plays into your structure and design of the platform?
A term that I understand is called “bridging-based ranking.” Just to give a little bit of background, as I understand it, it is the theory upon which your platform is designed in the service of accomplishing your vision for healthier social media.
So, what is bridging-based ranking? And how does it help accomplish the end that you're seeking?
Vardon Hamdiu
Bridging-based ranking is just such a fascinating topic, and it's at the core of Sparkable.
We have to quickly look at what current algorithms do; they are engagement-based, meaning they make predictions about which posts are most likely to generate clicks, likes, shares, and views, and they use this prediction to rank the most engaging content at the top of your feed. And this obviously tends to amplify more polarizing voices because those divisive perspectives are more engaging than a perspective that is more nuanced.
So that's the way it is done currently: optimizing for engagement. What bridging-based ranking does differently is it looks at how the content is seen by people who normally disagree. So if people who usually don’t like the same posts now both like a specific post, that post will be shown higher up in the feed. So, the system amplifies posts viewed as high-quality by people from different perspectives.
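The mechanism described here can be sketched in a few lines of Python. This is an illustrative toy, not Sparkable's or X's actual algorithm: it assumes users have already been split into two “perspective” clusters from their past rating behavior, whereas a real system like Community Notes infers viewpoints with matrix factorization rather than hard clusters.

```python
# Toy comparison of engagement-based vs. bridging-based ranking.
# Illustrative only; cluster labels and scoring are invented here.

def engagement_score(ratings, cluster_of):
    """Engagement-style baseline: count every reaction, no matter who
    reacted. A post that thrills one side and outrages the other
    scores highest."""
    return len(ratings)

def bridging_score(ratings, cluster_of):
    """Score a post by the LOWER of its average ratings across the two
    clusters, so only posts rated well by BOTH sides rank highly.

    ratings: list of (user_id, rating) pairs, rating in [0, 1]
    cluster_of: dict mapping user_id -> "A" or "B"
    """
    by_cluster = {"A": [], "B": []}
    for user, rating in ratings:
        by_cluster[cluster_of[user]].append(rating)
    # A post rated by only one side earns no bridging credit.
    if not by_cluster["A"] or not by_cluster["B"]:
        return 0.0
    avg = lambda xs: sum(xs) / len(xs)
    return min(avg(by_cluster["A"]), avg(by_cluster["B"]))

clusters = {1: "A", 2: "A", 3: "B", 4: "B"}
# A divisive post: loved by cluster A, hated by cluster B.
divisive = [(1, 1.0), (2, 1.0), (3, 0.0), (4, 0.0)]
# A bridging post: rated well by users from both clusters.
bridging = [(1, 0.8), (2, 0.9), (3, 0.7), (4, 0.8)]
```

Under engagement ranking, the divisive post wins with four reactions; under bridging ranking, it scores zero, while the post appreciated across both clusters ranks highest.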
Bridging-based ranking is a concept that is already in use on a huge platform, on Twitter or X. It's used for the fact-checking notes. The early Twitter team that worked on it has demonstrated that it’s possible to use bridging-based ranking on a huge social media platform and that it works. These fact-checking notes are generally very high quality. The problem there is that Twitter is using it only for fact-checking notes but not for the posts themselves. So you have a platform that is just full of not really high-quality content, and then you put little drops of these fact-checking notes on top of it, which obviously doesn't really change much.
At Sparkable, we're trying to see what happens if you apply bridging-based ranking onto the entire platform so that the question becomes which posts are ranked first and not just which fact-checks are shown on posts.
Bridging-based ranking, to me, is wonderful. I'm super excited about the concept, and we're building everything around it.
Deb Donig
What happens when you use bridging-based ranking? You said that you're really excited to see what happens if a platform does that. So, what happens?
Vardon Hamdiu
We’re at such an early stage that we wouldn't be able to answer that question scientifically right now, but it’s the curiosity that I also have to see what happens when you have this different algorithm.
We have seen that those fact-checking notes work; you get really high-quality fact-checking notes that are seen as helpful by people who usually disagree. And so, for me, it’s pretty clear that a similar thing will happen if you take that concept and don’t apply it just to fact-checking notes but also to the posts themselves and then have an entire platform built around that.
I'm sure the outcome will be really interesting, and I'm convinced it's going to be much more constructive and productive than these engagement-based platforms.
Deb Donig
Okay, so I'm not sure that this is really a fair question because it sounds like the outcome, as you said, is not scientifically known, but it's more theoretical at this stage.
But walk me through what you believe, given your theory, would happen; let’s take one of the more divisive issues of the day, American politics, right?
So somebody says, you know, Donald Trump is unfit to be President of the United States. He is not mentally sound. He is declining intellectually, and his policies, particularly toward women, are going to set us back 100 years.
How would bridging-based ranking mediate between this kind of post and the way that it might move us toward division, back to something that follows suit with the kinds of vision and the values that we want to govern the platform?
Vardon Hamdiu
I’ll try not to answer on a specific piece of content or topic level but on a more systemic level.
On today's platforms, content such as that would be trending or shown very high in the feed because it generates many responses, negative and positive. In contrast, bridging-based ranking would highlight different kinds of posts. For example, posts that explain a certain viewpoint, and then you would have a more constructive atmosphere or space where people can be vulnerable as well, instead of dunking on each other.
It would be much more a space where people can come in with genuine curiosity and genuine willingness to learn from each other, and that would mean that you would see posts or content that is much more vulnerable and much more nuanced. You know, maybe somebody explaining why they voted for a certain candidate or why a political issue is very close to their heart, but with the willingness to listen to people who see it differently.
I think that's the beauty of systems like democracy. They depend on that kind of communication. And I think we are operating within systems that teach us that that's weak or bad and that it's much better to be the one dunking on others and be proven right by people who see things similarly to you. But that, to me, is weak because you are not getting challenged.
And so I really dream of a space where the opposite happens, where you are challenged, always in a respectful and civil way, to see things from a different perspective.
There's this example with the cube: one person says the cube is green, the other person says the cube is red, because they are looking at different sides of the cube, which really do have those colors. But now they start arguing, and if they don't listen to each other, they will never find out that, actually, it's a cube whose sides have different colors, right?
To me, it's a powerful metaphor for this concept that we can only get closer to the truth (if there's something like truth) if we come together and are willing to listen to each other and to see things from the other person's perspective. And I think when we do that, a lot of progress can happen on an individual level.
I've seen that in my own personal life, but also on a societal level: whenever we have that willingness to listen to the other person and to learn from them, we actually grow so much quicker, and we progress, in the truest sense of the word, so much more quickly than if we are in that mindset of “I'm right and you're wrong, and I have to prove that I'm right and that you're not.”
Deb Donig
Well, how do you deal with not just, you know, points of opinion where people have different interpretations or are looking at different facts and therefore deliberating toward a different conclusion, but actual misinformation and disinformation on the platform?
Vardon Hamdiu
Yeah, it’s definitely important to make a distinction there. Our approach toward misinformation is very much inspired by Wikipedia; having this community-based resilience. Having people able to say, “This is factually incorrect,” and when that’s demonstrated with sources and agreed on by people from different perspectives, certain posts could then be made unavailable on the platform.
I think Wikipedia is just such a great example, and probably one of the only ones still standing, of how this can be done. I would love for Sparkable to work in the same way that it's community-based but it's also transparent. You can always see how a decision was made, what the central arguments were, the sources that were cited, etc.
Similar to community notes, having that more fact-based approach would definitely work for the question of whether something is misinformation or disinformation and needs to be taken off the platform.
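The moderation process Vardon sketches (sourced objections, agreed on by people from different perspectives, before a post is made unavailable) could look something like this in code. This is a hypothetical toy, not Sparkable's implementation; the function names, cluster labels, and threshold are invented for illustration.

```python
# Hypothetical sketch of cross-perspective moderation.
# All names and thresholds here are invented for illustration.

def should_hide(flags, cluster_of, min_per_cluster=2):
    """Hide a post only when enough SOURCED 'factually incorrect'
    flags come from users in BOTH perspective clusters.

    flags: list of (user_id, sources), where sources is a list of
           citation URLs backing the flag
    cluster_of: dict mapping user_id -> "A" or "B"
    """
    counts = {"A": 0, "B": 0}
    for user, sources in flags:
        if sources:  # unsourced flags carry no weight
            counts[cluster_of[user]] += 1
    return all(c >= min_per_cluster for c in counts.values())

clusters = {1: "A", 2: "A", 3: "B", 4: "B"}
# Flags from one side only never suffice, however numerous.
one_sided = [(1, ["https://example.org/src"]), (2, ["https://example.org/src"])]
# Sourced agreement across both clusters triggers removal.
cross = [(u, ["https://example.org/src"]) for u in (1, 2, 3, 4)]
```

The design choice worth noting is that the gate is not a raw vote count: a coordinated brigade from one perspective cluster can never clear it, which is the community-based resilience the Wikipedia comparison points at.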
Deb Donig
I mean, I would be curious to hear a little bit more about how you think the kind of structure of Wikipedia fits into the social media enterprise.
Many years ago, I undertook the process of actually editing and creating Wikipedia pages. And one of the things I discovered in doing that is that it's actually a lot harder than one might think. You have to source things quite significantly and specifically in order to get a page up; in order to make an edit or change to an existing page, you have to have structures of information that provide citations, reference points to other places on the internet where the information can be corroborated, and then the page actually undergoes review before it can be properly updated.
And so I wonder how that translates into a social media enterprise, where the aim, the structure, and some of the expectations of the platform, I think, involve asynchronous but fairly instantaneous communication of ideas.
Do you think that there's some friction between the Wikipedia model (given at least the experience that I've shared, if that experience is indeed a regular one on Wikipedia) and the structure, the ends, and the practices associated with social media?
Vardon Hamdiu
It's a fascinating design question, right? And that's the beauty of thinking about it in a free way. When you're not tied to an advertising or for-profit business model, you can start to think about, well, what could the Wikipedia model look like if we applied it to social media?
We obviously don't have all the answers to that question; it's a work in progress. But it would, in my view, work really similarly: you would need to have a certain friction, especially for big decisions, like “Is something supposed to be shown on the platform or not?”
You would maybe need to not have created your account an hour ago; you would need a certain history on the platform that shows you're committed to the platform's vision of bringing communities together, with these types of friction built in. I think we can learn a lot from existing projects like Wikipedia. They're far from perfect; there are also problems with Wikipedia, but it's, in my opinion, the best way I've seen yet to deal with these questions. And so I'm very inspired by their approach, and I wouldn't see friction as a problem. I think it could become almost like a feature, right? Because today's platforms are very quick and very fast.
I sometimes make the analogy to food, where you also have fast food, and we crave it oftentimes, but we know it's not good for us. In a similar way, I think we have these fast platforms where we’ll realize that slowing things down a bit or having these types of friction can be really beneficial to the quality of the platform. I think that's really a process that we're in.
Societally, we're realizing the value of sometimes slowing things down a bit or having a bit more friction built in.
Deb Donig
There's another premise that I wanted to challenge. I'm not sure that it's necessarily your premise, but it seems to be a premise commonly shared by people who engage in the social media development space, which is that in a free marketplace of ideas, the best ideas or the truth will always rise to the top.
That seems to be a conception that I would actually argue is a misconception in our contemporary social media environment, shared by a number of creators and proponents of these platforms.
I think if we look across the span of human history, that does not bear out as a premise. Terrible ideas, including conspiracy theories and ideas whose spread results in human rights abuses, genocide, etc., tend to be, in fact, the spicier stories, right?
I am a narrative scholar by training, and so I know the difference between the truth, which is a factual basis that may be quite banal and uninventive at times, and the story, which may carry more excitement, more anticipation. We like to say in literary studies that fiction can animate reality in a way that facts oftentimes cannot, which is why I think so many people are enticed by, for example, QAnon. You could say, “Well, the reason that your life is terrible, or the reason you aren't as wealthy as you would like to be, or the reason you can't find a job, is that there are these structural issues and policies over the last 40 years that have eroded the middle class,” etc.
We can take a look at those specific policies, and we can take a look at the erosion of the geopolitical stability that was undergirded by those policies, or we can tell you that a group of people are malevolently manipulating bureaucratic and government processes and undermining things so as to create a manifestation of evil that affects you and your community directly, right? And the second one just, it's a better story. It is a better story.
And so I guess I have a question here about that premise that, you know, if we just let things be, the truth will rise to the top. If we let people sift through the information themselves, they'll pick the best information.
How does Sparkable negotiate that premise? How do you think about that idea? And I guess the broader question around this is the question of that free marketplace of ideas, or what is sometimes referred to in the United States as the essential free speech issue, which we see constantly debated with regard to the nature and responsibility of social media platforms to moderate user content.
Vardon Hamdiu
I think it comes back to one of the previous questions we discussed, where I said that it's almost like in the design of the universe that bad things are oftentimes easier than good things. Same thing here, right? You have these narratives that spread much more quickly, and the problem, I think, is the systems.
We can design systems that exacerbate that problem, or we can design systems that alleviate that problem or counter that problem.
So, if we have platforms geared or designed toward virality, that's a recipe for disaster. That’s how these conspiracy theories spread like wildfire because these platforms are designed to spread and reshare things and then make it super easy to rank these things higher.
But if we design systems that do the opposite, like Wikipedia, then you see that the opposite is also possible. Because there you have this resilience against this type of easy narrative. And so I think it's all a design question: how do we design the systems?
The same thing is true for our approach with Sparkable, where we want to design the system in a similar way so that it doesn't facilitate that spread. Coming back to the earlier question, it adds friction that is antiviral, in the sense that it detects when something is going viral.
And then, instead of making it even more viral, it would slow it down, put it in a queue to be reviewed by people, and then decide what to do with it. But just having that friction built in is super important.
Secondly, having transparency and having it be community-based, having the power to decide whether something is moderated, and having that power be decentralized is important to me. I'm not talking about decentralization in terms of, I don't know, Blockchain or whatever technical decentralization. I'm talking about decentralization of power.
Wikipedia is the best example of this. It's a central system where everybody is on the same platform and sees the same things. Still, the power over the editing process and the moderation process is decentralized among all the people who are contributors or editors.
I would love to see that concept on a social media platform and have that be as transparent as it is on Wikipedia, where you can see who has taken which decision or who has made which edit.
Having a system like that would be, in my opinion, much, much more resilient against things like these easy and sometimes dangerous narratives spreading so quickly.
Deb Donig
I mean, I think Wikipedia is a good source of comparison for my next question because, as I think many people, if not all people, have experienced using Wikipedia over the past couple of years, anytime you try to access a page, you get a pop-up that says, essentially, we're begging you to send us $2. Just $2, please. Send us $2, send us the lint in your pocket. Please, anything, we need it. Please, please, please, right?
And I worry that projects that intend toward the good oftentimes do so in tension with the need to be profitable.
I know that you say that currently, Sparkable works on a donation-based source of revenue, but ultimately, I think if you want to get users to the platform, if you want to scale it, if you want to indeed produce the kind of social cohesion work that Sparkable aims to supply to mend the kinds of divisions in our society, you have to be able to scale it in some way. And in order to do that, you need to be able to fund it at that scale.
So how do you align the desire to build a healthier social media product with the need to be profitable, the need to grow, and the need to scale?
Vardon Hamdiu
That's one of the biggest questions that has been on my mind ever since I started this project, because it comes back to this problem we have: it's often much harder to do the right thing or the difficult thing than it is to do the easy thing that might lead to worse results. It's the same thing here.
With the financial incentives, it would be much easier to build something that generates profits with something like an advertisement business model.
Maybe someday, we'll get to a point where this is not true, where projects like Wikipedia wouldn't have to beg for money because we would have something like a universal basic income or some other kind of technology that we build around the problem of financially being in a precarious situation if you're trying to do the right thing, to work for the greater good, the public interest.
We will hopefully solve that problem at its root level at some point. But until we get there, we can try to find different systems in different ways. And I've spent a lot of time thinking about new business models. We have the advertisement business model, the paywall-based business model, and the donation-based business model.
For now, we have chosen to go with the least bad of these three business models. But the problem with a donation-based model is that it's precarious. You always have to be asking people. And so we were also thinking about some sort of freemium model, where you can either see everything for free without any barrier or you could choose to pay a little bit to see only the highest quality content. That would be a redistributive model where the money would be split and redistributed to everybody who contributed to that content.
That was a thought experiment that we haven't implemented and tested yet, but if we spend time on those questions, I'm sure we can come up with better solutions than we have today.
It's just that for now, we're focusing on building Sparkable in a way that provides value and this unifying aspect.
I'm very interested in this question of whether we could create different business models, but until we get there, the donation-based business model is just the least bad of the options that we have at the moment.
Deb Donig
How do you hire people to work with you on Sparkable? Do you look for people with an interest in ethical or responsible ways of thinking? It seems to me like your approach is really to build a kind of ethical, responsible product.
Do you think that the leadership vision and the vision of the people that you hire play an important role in the direction of the product? Or is this a product that can be created by way of solid directives from a well-directed, responsible, ethical leader and then relayed to engineers who can be agnostic as to the ends and who don't necessarily need to care about the issues that you care about?
Vardon Hamdiu
I think it's true that a project like that can only work when you have people coming together to work towards this vision. So, people who are aligned with the vision and see the problems we have with today's social media platforms.
We can disagree a lot on how to build a better solution, but it’s important to have people on board who see the same problems we see with these platforms.
That's probably what’s different about creating a mission-driven project: ensuring people are aligned with what we're doing is a central part of the hiring process.
It's the same for the community. It would be best if people joined the platform knowing what the problems are with today's social media and brought that awareness with them. That will create a much different experience.
So I don't see it as just an internal thing, but also external.
Deb Donig
How do you think about the role of government, legislation, regulation, etc., with regard to social media? Should we enact regulation or legislation?
Or do you think that this problem, that is to say, the problem of social division as manifested through social media, can be solved through and should primarily be under the restructuring guidance of technologists?
If you do think that we should enact regulation or legislation, that government has a role, what do you think that role is? What kind of regulation or legislation would you want to see?
Vardon Hamdiu
First, I’m not a policy expert, but I definitely believe that what governments can do with regulation is part of this question of what systems we design around us.
I'm a big fan of decentralizing power, and if we have these huge platforms, it's important that there's not one person deciding but that there's a community of people.
And the same thing applies to governments. For example, if we have democratic governments or democratically elected governments, I'm a huge fan of oversight and regulation because we need democratic oversight over these platforms and over these systems that allocate our collective attention.
Where we allocate and how we allocate our attention globally is such an incredibly powerful thing. So we need to have democratic oversight over that, and regulation by democratically elected governments is a huge part of it.
It’s not the part I'm personally working on with Sparkable, but I see my role, or the project's role, as being complementary to that: being an argument that regulators can make to put more pressure on existing platforms by saying, look, it's possible to do things differently, and just breaking down that argument of, well, things are the way they are today because that's the only way they can be.
I would love to contribute to building different visions of the future, but regulation is extremely important. I think it has to be democratic, and it has to be distributed. We cannot live in a world where we have a handful of people in charge of these hugely powerful systems.
Deb Donig
Are you optimistic that we'll be able to heal the social divides that you've seen and that you argue have been caused by our social media ecosystem?
If you're optimistic, why? If you're cautiously optimistic, what is precaution, and if you're not optimistic, why not?
Vardon Hamdiu
I am extremely optimistic, and that's the reason why I'm working on Sparkable.
I said at the beginning that thinking of where we're heading and what can happen if communication breaks down keeps me up at night. And we've seen that these platforms can and do have an impact on conflicts and conflict regions. So, it's definitely something I'm worried about.
But at the same time, I'm extremely optimistic because I know that if these platforms have the power to exacerbate those problems and spread hate and division, then if we consciously design them to do the opposite, they can have the opposite effect.
Coming back to the food analogy, what we eat every day is what our body becomes. And so what we feed our mind every day is what we become, the beliefs we have, and so on. So there's incredible power in that. And currently, it's really running wild. It's unregulated. I think we're not yet aware enough of how powerful that is.
What makes me optimistic is that I'm seeing that we're getting there. These questions are entering the mainstream, and people are becoming more aware. And the starting point for change is awareness of the problem.
What I'm trying to do, or what we're trying to do at Sparkable, is coming in and helping to show what could be or how things could be done differently. And for that, we need people to come together.
No one person or small group of people can do that. We need people to come together on a large scale. I'm extremely optimistic that if we do that, we can create systems that bring out our best sides instead of our worst.
Deb Donig
Thank you very much, Vardon.
Vardon Hamdiu
Thank you.