While there are myriad shared problems for which government can provide good solutions—from public safety to healthcare to education to child support—trust in public institutions is at an all-time low. Around the world, spurred by political divides and the rise of cheap, unregulated digital communications, countries are arguing about the role of their leaders, and the balance between the individual and the collective.
In response to this crisis of trust, we’re doing a deep dive into resilient democracy this year. As we’ve been programming FWD50, we’ve spoken with literally hundreds of amazing public servants, academics, civic tech leaders, and innovators. Many of them will be taking our stage in November.
One of my favourite pastimes is introducing two people who should have known one another for years—and watching them discover how much they have to discuss. I met two such people, and needed to get the two of them together—and that’s what this episode of FWDThinking is about.
Jamie Joyce is cataloging all of the big arguments in the world to give us a framework for understanding Wicked Problems like climate change and other controversial issues. And Dr. Yin Yin Lu—who describes herself as a communication scientist—did her PhD on the rhetoric of Brexit. Properly understanding the structure of arguments and the role of rhetoric in making good decisions as a society is essential, so it was an immense pleasure to introduce two people who’ve been tackling the same problem from different, but complementary, ends.
Get ready for a wide-ranging chat in which we talk about Daniel Kahneman’s System 1 and 2 thinking, neuro-linguistic programming, behavioral economics, and even the nature of truth. We did math puzzles, and I got shamed for fear-mongering clickbait. It was excellent. There’s no good way for me to properly summarize the conversation, and I don’t want to pepper this post with out-of-context quotes, so you’ll just have to listen to it for yourself. We actually kept talking for quite a while after the end of this recording; if you want more from them, they’re both part of our 2020 FWD50 faculty!
All opinions expressed in these episodes are personal and do not reflect the opinions of the organizations for which our guests work.
Click to read the full transcript of this episode.
[00:00:00] Alistair Croll: [00:00:00] Hi, and welcome to another episode of FWDThinking. FWDThinking is a joint production of the FWD50 digital government conference and the Canada School of Public Service, Digital Academy. Each episode, we talk to public and private sector executives, innovators, and thinkers about the work they’re doing and how it’s changing the world. This week, I am absolutely thrilled to introduce two amazing people to you who are changing the world in their own specific ways. Dr. Yin Yin Lu is a researcher who studied the rhetoric of Brexit and how that affects what people decide. And Jamie Joyce is cataloging all of the big arguments in the world to try to come up with an ontology for how we understand things and how we reach reasoned discussion in an era of record-low trust in government institutions.
The conversation about how to reach agreement [00:01:00] has never been more important. We met both Dr. Lu and Jamie Joyce as part of putting together the lineup for FWD50, and we couldn’t believe they didn’t know each other. So we thought we’d introduce them to one another, and to you, in this session. As you’ll see, it’s a lot of fun. We talk about System 1 and System 2 thinking, behavioral economics, the nature of truth. I get schooled a little on hyperbole. We do a math puzzle. We had a blast, and I think you’ll find it to be a fascinating discussion. It’s a lot of fun to join in with. To be honest, we kept going for quite a while after the end of this recording, but we wanted to keep it to about an hour.
You’ll be able to hear more from both of them at FWD50 this November. So without further ado, please join me in giving a very warm welcome to Dr. Yin Yin Lu and Jamie Joyce.
Jamie Joyce: [00:01:48] Hello.
Dr Yin Yin Lu: [00:01:49] Hi.
Alistair Croll: [00:01:50] Hi. So, I’m going to ask both of you to explain what you do and where you’re from first, and then you’re each gonna realize you should have known each other years ago.
Jamie, why don’t you go first?
Jamie Joyce: [00:01:59] Sure. So [00:02:00] I’m Jamie, I’m the executive director and founder of the Society Library. And what we do is we extract arguments, claims and evidence from various forms of media, including social discourse, in order to create a structured database of all arguments and claims on all points of view on really complex social and political issues. And we map out and visualize this data for public education and awareness, and to upgrade the way we have mass societal communication. Because social media has enabled us to have mass societal communication, and now we need to evolve that into structured communication. So that’s what we do.
Alistair Croll: [00:02:36] Alright, Yin Yin. How about you?
Dr Yin Yin Lu: [00:02:39] So, first of all, Alistair, thank you so much for that more than generous introduction. I would actually like to begin by saying I am not an expert in communication. And actually I think that one of the main reasons why we have a critical issue with communication and resilient democracy is because we live in the era of the expert. More on that later.
[00:03:00] So, my PhD is a measure of my unhealthy obsession with communication, not my expertise in it. Clear distinction there. I think, abstracting back from that on a higher level, I have called myself recently a rhetoric doctor, a communication scientist, and a happiness engineer. They’re three facets of the same exact thing. And I think one of the things I like to do is refer to myself with different labels for different audiences. I’m also a product marketing manager. I, up until May, was a product manager in the software space. Before that I was a project manager, also in the software space, and a marketing manager as well.
So, two conclusions there: firstly, recruiters tend to hate me because they cannot classify me. Secondly, you can see from the job titles I’ve had that communication has been absolutely fundamental to what [00:04:00] I do. There are kind of two kinds of roles in companies. There are the people-facing roles, so lots of comms, and there are the actual doing roles, like sales and engineering in software companies. So I’ve been mostly on the people side of things. So, higher level, conclusion over: I have spent five years obsessing over the rhetoric and resonance of Brexit tweets at the Oxford Internet Institute, which is what indirectly led me to meet Alistair.
Alistair Croll: [00:04:29] So, tell me, when you talk about rhetoric, can you explain rhetoric for the lay person?
Dr Yin Yin Lu: [00:04:36] Is that me?
Alistair Croll: [00:04:37] Yeah.
Jamie Joyce: [00:04:37] You’re the doctor.
Alistair Croll: [00:04:38] You’re the doctor, you’re the person with an unhealthy interest,
Dr Yin Yin Lu: [00:04:42] This goes to show the degree to which I don’t see myself as an expert in this. Wow. Okay. I love that question because- random anecdote- about two months ago, a repair person came to fix my partner’s television in my house. And I was [00:05:00] talking to him about rhetoric. It came up in the conversation. And he just looked at me and said, I’m sorry, I don’t understand what that means. And that just- I was speechless because I’ve been living and breathing this for over 10 years. So I couldn’t conceive of what it meant to not know what it meant.
And again, that’s another issue that’s led to this crisis of disagreement: we fundamentally see the world so differently, and we assume so many things that are not true. So, to define rhetoric, I would say it’s the science of persuasion. Very tricky word to use, especially in today’s context, very controversial. But it’s all about- it’s communication for a purpose: not just describing something, but trying to, for example, galvanize someone to take action. There’s some sort of action orientation to the communication. And I do view it as a science because the most [00:06:00] skilled rhetors have analyzed techniques and ways to galvanize people to do things in a very systematic way. And that’s what makes them so rhetorically powerful.
Alistair Croll: [00:06:11] I didn’t even know rhetors was a word. And now I do. I’ve actually talked to a few people about concepts, like neurolinguistic programming. For example, if I were to say to you, “I hear the weather’s great”, or “it looks like a lovely day outside”, or “it feels warm out”, I just used auditory or visual or tactile words to convey that. And if you are someone who’s dominantly visual, auditory or tactile in your speaking, you will then relate to me and go “Wow, there’s something about that person. We just got along.” And there’s tricks like this that salespeople and negotiators use, you know, to program people, to get them to accept certain ideas that we see more and more in politics. Do you think that the average person needs to level up their understanding of communications? Like, is this a skill that we need to teach [00:07:00] in the world of modern instant communications? I’ll throw that one over to Jamie.
Jamie Joyce: [00:07:04] I absolutely believe that. I think in general, we need to improve our media literacy, our digital literacy and our linguistic literacy as well. Rhetoric and neurolinguistic programming are not the only two concepts that can be deployed in order to manipulate people. It also can be done through reason, right? But hollow and empty and illogical reasoning. So I do think that in order to be a critically-thinking society, in order to have a democracy that actually works by operating from the consensus of an informed electorate, that we do need to upgrade our understanding collectively in many different ways.
Alistair Croll: [00:07:37] So Jamie, I noticed you did air quotes when you said we have, you know, all parts of an argument. I think when we were discussing this, you said that there are obviously millions of data points about how someone discusses climate change- or, you’ve recently moved towards COVID response and other issues- but you’re able to prune that down to a much more manageable sort of architecture of the [00:08:00] discussion. Can you describe that process? Because it sounds like a superpower.
Jamie Joyce: [00:08:04] Yeah. I mean, we’ll see how super powerful it is. But, when we originally spoke, I mentioned that we’ve been mapping the climate change argument in the English-speaking United States. And so what that means is we started taking in various forms of media, transcribing that into text and then extracting the claims and arguments from that text. And that means the derived claims, implied claims and implicit claims that would need to be there in order to complete the reasoning of an argument. And in doing so, that yielded tens of thousands of arguments for every point that we found. But over time, we recognized that there are, like, patterns of argumentation. There are categories of claims. And so we started collapsing those. And while we maintain that there are so many different ways to express a claim- one may be more rhetorical than reasonable, for example- we can collapse those into, like, a claim ID. And what we found from that is that in the English-speaking US, there are about 220 [00:09:00] subtopics of debate about climate change that fit under six different questions.
So the six questions that we found determine the relevance of the claims and arguments that respond to that question. But there are still 220 subtopics, at least two positions for every subtopic, tens of thousands of arguments each, and potentially just as much evidence. So there’s a lot of content, but we deduplicate when the meaning is the same. And it’s easy to deduplicate when we reduce claims down to their smallest unit of meaning, if that makes sense.
Alistair Croll: [00:09:29] Yeah. So, I mean, that sounds awesome if you’re distilling it down to its most rational, you know, unbiased statement of a thing, you know, “climate change data is real” or something like that. And then it sounds like that’s where you start sprinkling a lot of rhetoric onto it. And I mean, the old idea of debating was logos, pathos, ethos, right? You have rational discussion, logic; you have empathy, like feeling for the other person’s sense of connection; and you have ethos, an appeal to their moral [00:10:00] compass and their ethics and so on. When we have conversations, are we thinking in those three terms when we unpack a discussion? So, Yin, maybe, when you looked at the rhetoric around Brexit, did you classify it into those kinds of structures, or something else, to understand which things have the most impact? Because my real question is: Jamie, it’s great that you’ve brought it down to these nice, manageable 200 things that the UN needs to deal with. And then someone says something in a very inflammatory way. They use a term that is polarizing, right? So they say “egghead” instead of “scientists”, for example. And then the whole thing has moved out of the realm of rational discourse- but people love irrational discourse. Like, that’s what gets clicks. So it does feel like we need to flag, for each of the branches of your argument, the logos, ethos, pathos of that argument. Did you see that stuff in your research on Brexit, Yin?
Dr Yin Yin Lu: [00:10:56] Very happy you asked that, Alistair, in that I’d spent about one year [00:11:00] just reading relevant theories and literature to make a framework that I could actually measure in the tweets, the goal being: okay, what makes a tweet in this persuasive context more rhetorically powerful on social media, as measured by clicks, likes, shares? And so obviously, that logos, pathos, ethos framework was kind of the lowest level of the pyramid. It kind of undergirded the different rhetorical devices that I looked at. So I was quantifying the degree to which, for example, specific emotions were being displayed in tweets. I named three in particular, as they are known to be very effective in political debate in particular, and are tied to the way that our brains actually work. And those are enthusiasm, fear and anger. Now, having done all that and classified and done all my models and the data science aspect of it, and dah, dah, dah, I’ve actually come to the conclusion that everything is [00:12:00] emotional to some degree. And that there’s no such thing as fact. Like, you can just look at a tree and go, “There are 10 apples on the tree.” But actually, maybe due to your flawed sensory perception, you didn’t see a couple of apples that were hidden behind a tree branch. So actually, maybe there’s 12 apples on the tree. Or maybe you’re a bird and you can see from above, and you can see 25 apples. I mean, the point being there is, like- calling something a fact is a rhetorical device. And I absolutely hate the way that politicians say “We rely upon the facts. The experts back the facts, the facts, the facts”. It’s all emotion in the end. And they’re using the device of disguising something that’s not really that scientific as science. The word science, I hate it. It’s all pseudoscience to me, in a way. Sorry, I get very emotional about this, especially with COVID. Sorry, I’ve calmed down.
Alistair Croll: [00:12:56] No, please don’t. I was just saying, I think I have a candidate for you.
Dr Yin Yin Lu: [00:12:59] Ah, [00:13:00] okay. Right. I promise at the actual conference I’ll come across as a lot more rational.
Alistair Croll: [00:13:04] No, please don’t. Besides, rationalism is a lie, as we’ve established. Science and facts are just opinions, right?
Dr Yin Yin Lu: [00:13:10] I’d also like to kind of comment on what Jamie just said, because I am absolutely fascinated by her methodology and the way she is scientifically, truly scientifically, approaching this. Because to me, what Jamie is doing is almost a paradox: I cannot separate the idea from the person who’s speaking, and what Jamie is doing is just that. So in terms of, like, a thought experiment: brilliant. Like an artist, a work of art in a way. That’s how I view what Jamie is doing. And I really appreciate that. But on a practical level, I’d love to hear more about how Jamie is using this tool to actually drive impact and how she measures the impact. Because to me it’s fantastic in concept, but, like, in real life, we believe people we like and respect and admire. It doesn’t matter what’s [00:14:00] being said; it matters who’s saying it and who’s listening. And that’s why rational discourse is so effing difficult.
Jamie Joyce: [00:14:05] Well, Yin, thank you for passing on to me. I have so many things to respond to with you too. And I have to say that what you said before- I couldn’t agree with you more in terms of, you know, how could we possibly determine what a fact is, and that “fact” is being used as a rhetorical device. I’m in love with it. And as someone who is knee-deep in going through scientific papers, I can completely understand how ridiculous it can be to defer to scientific authority, because when you start, like, looking under the hood and finding contradictions and finding weasel words, it can kind of blow your mind. And so one of the reasons why I’m so committed to this paradox of deconstructing the logic of what people are saying is to really uncover that. Because what I imagine you were getting at earlier, in saying that we are living in the age of the expert, is that perhaps we’re all suffering pretty badly from appeal to authority. And we now have the technology, the information technology, to really unpack and transparently, collectively inquire into the truth of things. And we really need to grapple [00:15:00] with uncertainty and the extent to which we can really know anything and know that we know anything.
Now, as for your comment about the paradoxical nature and artistic nature of my work: it is very important when it comes to persuasion that the delivery is key, right? Whether this is coming from a person that people trust, an authority or institution that people trust. I will say, though, that the nature of our work is a collective and transparent inquiry into truth and the extent to which we know something; it’s not necessarily to persuade anyone to believe anything. We model ourselves as a system of representation of ideas, so that people can visit a library of ideas and perspectives and view which types of evidence people use to support their points of view and claims, and they just explore that for themselves. And it’s not really our role to convince anyone to believe a certain thing by the time they’re leaving the library. Although there are educational and messaging programs that we want to couple with what we’re doing to accomplish [00:16:00] a few design goals. One is intellectual humility: we want to increase intellectual humility and empower people to grapple with the uncertainty of things. We want to depolarize. We want to inoculate against disinformation. And then we also want to increase subject matter knowledge. So right now, we’re working on climate change. We have some COVID work, some election integrity issues, but our goal really is to create a new digital library, which is just presenting this information uncensored in a new way instead of packaging it in books, you know- packaging it in what we call ideological narratives- and just giving that education to people.
Alistair Croll: [00:16:34] Wow. Yeah. So I have so many questions too, so jump back in and interrupt. So I’m assuming that you’re familiar with Jonathan Haidt and the sort of moral reasoning stuff, and you just talked about an appeal to authority. So I’ll give a very quick overview. One of Haidt’s things in The Righteous Mind is that humans make decisions, or basically rationalize their behavior, according to five dimensions of the moral mind, which are fairness, [00:17:00] freedom from harm, respect for authority, tribal affinity and purity. And these are not politically aligned. So purity on the left might be being an anti-vaxxer or eating only vegan food. Purity on the right may be no premarital sex. It’s still the same idea that something that goes into or comes out of your body affects your worth, but it appears differently on each side of the political spectrum because of culture. That’s his thesis. It’s fairly well-researched; I think he’s increased it to six now. But he’s reproduced this research across many countries, and what he’s found is that the political right, the sort of conservative view of the world, tends to support tribal affinity, meaning you’re part of my in-group, and respect for authority, which is, “Hey, someone told me to do that”, period. And on the political left, it tends to be more about fairness and freedom from harm. And when you look at that, you can start to understand why people on the left might support, say, affirmative action, and people on the right might support Guantanamo Bay, for example. And I’m not throwing anybody under the bus. These are just generalized statements.
But what amazes me about [00:18:00] that is, when you unpack this- I’ve had this conversation with a lot of people on different sides of the political spectrum- the word fairness on the left means equality. We’re all entitled to the same starting conditions, the same things. And the word fairness on the right means equity. I get out what I put in. If I work harder, I’m entitled to more. And so two people can sit there and go “we both believe in fairness”, but have completely opposite views of what fairness means. And then there’s almost this sort of socioeconomic externality where a person of privilege might ignore the inherent unfairness of what’s going on in the background. So it seems to me like- and we’re coming up with taxonomies here; Jamie, you’re building a library of how people might understand things, and I have lots of questions about libraries too, but let’s stick with this one. There’s an appeal to authority, which is under question right now, I think, because of the fact that we all have access to the world’s information at our fingertips. And it has to be said that when you search on Google, you’re [00:19:00] not asking it a question, you’re searching for an answer. So, if I say, you know, climate change is fake, I’m going to find a certain set of results. And if I say climate change is real, I’ll find another set. So we don’t have a system that teaches us. We have a system that answers our questions, but our questions are often leading. And so it really seems to me, like, when we talk about coming to an agreement, we have to factor in these different dimensions of sort of moral reasoning and how they are amplified through Yin’s rhetoric and structured through Jamie’s sort of ontology of an argument.
How’s that going to change political discourse? Like, do we need to do a different version of this for, you know, this is the arguments for climate change that stem from purity? This is the arguments for climate change that stem from fairness. This is the arguments for climate change that stem from respect for authority and so on. Because it seems like until we sprinkle that nuance of moral reasoning onto it, we’re not actually going to produce [00:20:00] societal outcomes.
Jamie Joyce: [00:20:01] I do want to quickly jump in and say there’s a reason why we use claims and arguments instead of facts and opinions when we’re labeling things, because a claim can be any assertion of truth. It can be fact, it can be opinion, it can be an emotional appeal, it could be, you know, very, very straightforward logic. So I just want to, like, sprinkle in that we do have, especially at the higher levels of, like, positions, appeal-to-authority points of view. And there are plenty of parts of our taxonomy where- there’s a whole cluster of arguments about safety, there’s a whole cluster of arguments about the efficacy of certain climate change solutions, the cost feasibility, et cetera, et cetera. So those tend to be what we call, like, categories, under which there are buckets and buckets of arguments that relate to that particular theme.
Alistair Croll: [00:20:47] So Yin, when you look through the rhetoric around Brexit, how much of it was based on fact and how much of it was based on just- you know, well, “fact” again is an ambiguous thing- but were the claims [00:21:00] mostly rational claims, or attempts to sound rational, or were they mostly, you know, based on just hearsay and opinion? Tell us what you found. What was the summary of your PhD?
Dr Yin Yin Lu: [00:21:10] Oh gosh. All right. To answer that question directly, I mean, it would be incredibly presumptuous to say that I could judge whether something, a tweet especially, is more towards the fact side or more towards opinion. And in fact, if I did do that, it would reflect my own socioeconomic biases, as you’ve just said. So I did try to classify tweets according to what I called themes. There were basically, I think, five key themes: the economy, immigration, security, a couple more. It’s been a while, sorry. I used that as the way to label them, as opposed to “are they more factual or not?” And none of my categories are mutually exclusive, so a tweet could contain enthusiasm and anger and fear, and include the economy and immigration and this and that. So that’s how I approached it, as opposed to a spectrum of [00:22:00] factual versus emotional, if that makes sense. And still, you know, my biases went into it, and I had a couple of people code a few hundred tweets and, you know, cross-validation and all that, and a reliability score was done. So there were some checks on my biases. I think the-
Alistair Croll: [00:22:16] How closely did your categorizations line up with those of other people?
Dr Yin Yin Lu: [00:22:21] So at first it was like night and day, and the only way to make them closer and closer was to collapse the categories. So one category that was collapsed was what I called resolution from literature. It sounds very opaque, but it means the degree to which you are explicitly invoking an action. So verbs are the, kind of, main way that you materialize that in a tweet or language in general. So originally, I had zero, one, two. But how the heck do you distinguish between, like, a very, you know, strong call to action and imperative versus a somewhat strong call to action?
I mean, in my [00:23:00] mind it was super obvious. I had my coders do it. It was just like, “Oh, okay. You know what? Let’s just do zero and one”. And that was- it went from like 40% agreement to maybe 80%, just like that. So I guess one higher-level theme I’ve extracted from this conversation, which I’ve always been thinking about, is that the definition of key concepts is so critical to healthy discourse. You’ve just said, Alistair, that the left and the right believe different definitions of the same key word. Was it fairness? So, one kind of obvious thing that all of us have to do is spend some time upfront defining what we mean by these. It sounds super obvious. We don’t take the time to do this. We kind of assume that everyone thinks that fairness equals equality, right? We don’t take the time to say “by the way, this is what I mean, and please ask questions, because questions are also very helpful”. And I call this- it’s part of my- I’m developing this communication science framework, because I believe that communication is completely broken and we have to just [00:24:00] reinvent or reimagine the discipline. And one of the elements of this is level of detail. So I think one way that you can define a concept is obviously, you begin with the n level of detail- to use a term from mathematics- which is: is it equality or equity, right? But then that’s not enough. Okay, so what do you mean by equality or equity? Then you kind of go down that hierarchy of detail to the n-minus-10 level in some cases. And that’s through a dialogue with questions and answers. That dialogue methodology, plus the n-level-of-detail framing, leads to a definition of a key concept that’s collaboratively constructed. And from there you have your argument.
Jamie Joyce: [00:24:43] I love this so much. So in our climate change debate map, as well as all of our topic debate maps, the very first question- I think I referred to it as the fundamental question, which houses all of the subtopics of debate- our very first question is the question of terms: what is climate change? And that has so [00:25:00] many different definitions. Because for some people, climate change is just a hoax, and they will say it explicitly, or the context of the rest of the information around where that expression is nestled implies it. For some it’s capital-C Climate Change, where it’s immediate, it’s going to be abrupt, it’s going to be catastrophic; while for some it’s just climate change: climate change has always happened and will continue to happen, and maybe, you know, we don’t know how long it’s going to take for us to see the impacts, and the impacts are going to be varied in different places around the world. So there’s almost a different tone that can be put into language, if you collect enough data, that kind of explicitly articulates what we mean by climate change. Is climate change a hoax? Is climate change impending doom? Is it just matter-of-fact climate change? There are so many different definitions of what climate change is when it’s used in discourse. And one of the problems is that when people are consuming media and they see that word, they’re bringing their assumption as to what that means. They’re bringing their implication. And if the rest of that media [00:26:00] disagrees with what they fundamentally believe: backfire. They’re digging in their heels. They’re like, this is fundamentally wrong. They’re not going to hear anything else, because they don’t agree on how the term is being used. Which is wild, but what’s really important when you’re having a conversation is to get really clear about terms.
Alistair Croll: [00:26:14] Yeah. There’s a guy named David McRaney who has a podcast and a book called “You Are Not So Smart”. A really nice guy; we’ve become friends over the years. He’s very thoughtful about biases, and he talks a lot about confirmation bias. We were talking about the internet and technology and what it’s doing to communication a while back, and David said that, you know, the problem with the internet is that the act of searching is not just to find information, but also to find an in-group. When you go and find a book at the library, you pull the book out. You don’t get 50 people who agree with you coming out of the shelf with the book, right? When you go to the internet and you pull a book out, all of a sudden, a whole bunch of people who agree with you come spilling out, right? If you [00:27:00] Google best grilled cheese sandwich recipe, lo and behold, you’ve found all the cheese sandwich fans. And I think that this is the problem: on social media, the information and its creators are inextricably linked.
And this is why I’m so concerned about stuff like QAnon, because QAnon is, I think, an epistemic crisis for democracy, because it’s challenging how we have a discussion. It’s headless, meaning there’s no person you can point to. I mean, there’s supposedly Q, but there’s no actual person you can go “this is the QAnon party leader”- yet, anyway. And it’s a perfect Petri dish where you can surface a thousand possible memes and see which one gets the upvotes and the lols, and then push that out. And so I think we not only distrust science, but we are also seeing that in a world where everyone can be a publisher, as you said, Yin, everyone can be an expert. We are all in a competition to [00:28:00] see who can be the most expert on something, and we’re judging your expertise by the number of people who agree with you rather than by whether it produces the outcomes you want. Are we- is this recoverable without a civil war? Like, are we going to have to- that’s a pretty big question, but we’re running a whole morning on this stuff on resilient democracy on November 9th, because if you can’t agree on how to disagree, then you don’t have a society. Are you as worried as I am?
Jamie Joyce: [00:28:29] Can I jump in on that? I have so much to say about QAnon, and I hope it’s something that you haven’t heard before. Because in my line of work, we really get into the various points of view on an issue. We explore what some people may think are ridiculous rabbit holes, but we love going through them, because people are truly expressing something that’s really important to them. And if there’s a large volume of this content, clearly it’s a hit. So for QAnon in particular, we haven’t done any formal research into it, but I did spend some time going down those rabbit holes. And for me, [00:29:00] one of the things that I see as the biggest problem- one that could lead to, like, a civil war- is that I don’t think we appreciate the complexity of the QAnon movement. There are many, many groups, you know. There are the suburban moms who are concerned about child trafficking, because that’s a narrative that’s coming out of the QAnon group. And then there are people who, you know, are actively asking for civil war and things like that. And so I think it’s really important that we start distinguishing the various perspectives that are part of the movement. And one of our nonviolent communication techniques is just to try and validate what there is to validate.
So QAnon has been moving from platform to platform, supposedly, for many years. And whomever it is, or whatever group it is, speaks in a kind of code. It’s not really structured language- it’s a lot of “do your research and take this hint and you should know better” type rhetoric that I’ve seen. However, there are some clues that people have used to formulate certain theories, some of which have played out in the media.
Alistair Croll: [00:29:56] Yeah, I remember- I mean, the pizzagate thing, and [00:30:00] then Epstein happened. So you look at it and you’re like, “Oh wait, this doesn’t exist, but this other part is partly true.” And so I don’t want to pick on Americans here, Jamie, but I’m going to. I had a bunch of salespeople who worked with me who didn’t understand the metric system very well. I got on an airplane with one of them, and I pointed to a cup of water and said, “That thing weighs a quarter of a kilogram”. He goes, “How can you know that? That’s weird. Is that like some super power?” And I go, “No, it’s 250 milliliters, and a liter of water weighs a kilogram.” He goes, “What?” I said, “Do you understand? That’s the whole point, right? By the way, a liter is 10 centimeters by 10 centimeters by 10 centimeters- what? You don’t understand this?” And he goes, “No”. And so I started to troll my friends, and I said, “There’s such a thing as metric time. You know, outside of the US, 10 seconds go into a minute, 10 minutes into an hour, 10 hours into a day. The units are longer than regular ones, but it adds up.” And they’re like, “You’re kidding, right?” I said, “No. Why do you think Newfoundland is a half hour off? Why do you think Canada has Thanksgiving on a different weekend?” And they went, “Whoa, really?” Like, [00:31:00] all I did was give them two completely unrelated facts, but they sounded sciency. And I’m terrified that if people are that easily swayed by unrelated things, we’ve just lost the ability to think.
And how do we restore that? Or have we hit some kind of peak stupid, where people don’t have the critical faculties to make rational decisions and we just devolve into tribes? Which is, you know, one of Haidt’s big predictions, I think.
Jamie Joyce: [00:31:29] Go ahead.
Dr Yin Yin Lu: [00:31:29] Okay. I have a couple of points I’d like to make. Actually, Alistair, your second question about people being stupid, in a way, overlaps a lot with your first question about civil war. So I think firstly, you are the King of clickbaity questions. Well done. Those questions are just meant to provoke me.
Alistair Croll: [00:31:47] Who, me? No, I’m just going to get trolled. We’re going to get nasty comments in this video.
Dr Yin Yin Lu: [00:31:50] And I actually would disagree with the way you’ve asked the questions because, you know, I agree that it’s a great way to get attention. But the reason I don’t like these questions, Alistair- and it’s not [00:32:00] personal, don’t worry. I really respect you and admire you, but I’m really not a fan of negative framing in this way. Hard to categorize it, but it’s, you know, “civil war”, “stupid”, like these very negative words, which are effective in getting people to click the headline, read the article or watch this broadcast. Fine. But the issue is that I fundamentally believe that we have a moral obligation to be positive and optimistic within reason. Not boundlessly optimistic, that’s dangerous. And that is the thing I see lacking everywhere, especially with the COVID thing on top of other issues, climate change, the race issues, Trump issues- I mean, all sorts of things. There is a severe lack of genuine optimism and positivity. I hate watching the news now because it is just starving children in Yemen and, you know, NHS does not have enough testing and dah, dah, dah.
So one issue is that [00:33:00] we’re too negative, and Alistair, I’m sorry, you’ve just exemplified that through your line of questioning. The second point, though, about, you know, learning how to think- I love that phrase, because I fundamentally think that what we need is to change the way that we think, talk and listen, all three of those things. And so the solution to this civil war that you alluded to, Alistair, is, I believe, education. And I know that’s a cliche, but it begins there and it doesn’t end. We are lifelong learners. Education is not K to 12, it’s zero to a hundred- assuming we all live that long. And I think the key, high-level issue with education as I’ve experienced it- I’d love to hear your thoughts, Jamie, as well- is that it’s such a siloed, disciplinary approach to things. It’s much worse in Europe than it is in North America, but even in North America, this liberal arts thing isn’t as liberal as it could be. There’s no cross-disciplinary pollination going on. If it does happen, it’s a facade, and we’ve got [00:34:00] these, you know, insane degrees, like a bachelor’s in hairdressing or lawn mowing or astroeconomics. I don’t know. It all exists.
Alistair Croll: [00:34:10] Wasn’t telephone sanitizing what The Hitchhiker’s Guide to the Galaxy said was the least useful job?
Dr Yin Yin Lu: [00:34:16] And there are, like, a hundred different types of anthropology master’s degrees at Oxford alone, I think. So it’s absolutely insane how specialized our education system is, even in secondary education. It begins there, especially in Europe, and it gets worse and worse and worse. And that is fundamentally, I think, the core of the issue. It leads to these experts that we’ve got, it leads to people literally not speaking the same dialect of English, it leads to definitions of key concepts being different- it leads to so many things. And education is a way of programming humans. I mean, teachers are brain surgeons without the surgery.
Alistair Croll: [00:34:58] You are a source of [00:35:00] great t-shirt quotes. Yeah, no, I mean, you’re absolutely right. I’m definitely showing my bias on this.
Dr Yin Yin Lu: [00:35:06] Sorry for what might have come across as a critique.
Alistair Croll: [00:35:09] Oh no, throw me under the bus, it’s where I live. That’s great, and I am just curious about these things. My concern, if I unpack it- as you said, I firmly believe education is the key. But this post I wrote ages ago about being stupid asked: how does a society, when it hits a certain level of disrespect for education, stop funding education to the point that it can no longer function? Because, you know, we originally had literacy so people could learn religion and file their taxes. And now it’s more, as you said, about programming people to be part of modern society. And if modern society is not functional but continues to do the programming, we’re kind of at a crossroads. And Jamie, you said earlier that you want people to come in and learn, even if they have an opinion that [00:36:00] someone might not think is right, or they’re in the minority, or it’s not backed by objective research- they want to come in and find themselves.
And other people should be humble; I shouldn’t call people stupid and so on. And the challenge I have with that idea is that confirmation bias is strong. I don’t know a lot of people who get up in the morning and go, “I want to read me some stuff that tells me why I’m not who I think I am.” That’s a very frightening thing. So David McRaney’s next book is about how people leave cults, and he’s literally gone and spent, like, a week with the David Koresh-type people, the flat earthers, and so on. He spent time with them and he talked to one man in a flat earth society who actually left. And the reason he left was that his wife was in some kind of crystals-and-channeling society- I’m paraphrasing all these things, but that’s the gist. This man had a new place to go, so he wasn’t leaving a group; he was moving to a different group that was awaiting him, if you will. [00:37:00] And so it really just seems to me like this sort of ingroup identification trumps everything else. So how are you going to get people to visit the library, Jamie?
Jamie Joyce: [00:37:10] Great question. So the library, I will say, is actually the final iteration in a long line of projects that we’re working on. The first series of projects are what we call the Great American Debate, and the first of those focuses on a particular issue: the climate change project. The ultimate goal is actually the comprehensive library. And so the hand-wavy dream is that we want to create an online knowledge database that future generations can grow up with, before they inherit a bunch of different ideological identities and preferences. So it’s really to interrupt that generational political socialization that happens time and time again, based on where a child grows up and what kind of education they have and who their parents are and who their friends are. I certainly hope, although I’m not working on this in particular, that we will come up with new systems of belonging and new types of group [00:38:00] identity that are not anchored around politics. Maybe that is a pie-in-the-sky dream, but I think it is intimately tied to the evolution of our socialization and social culture.
And I hope that we can detach that from education and politics. I’ve also signed up for the camp that believes an investment in education is incredibly important. When I think about the educational institutions we have now, it seems as though they’re an evolution of passing along oral traditions. Teachers learn this knowledge, they recombine it in their minds, and they pass it along. And even though we have technology to pass that information through other materials- books and paper and websites and whatever- I do think that the teacher may soon take a different technological format, because I see that as just an evolution of the oral tradition, and I hope that radically changes. I hope the Society Library helps to nudge that in a direction, but the Society Library as an educational entity is just one of many different educational platforms. So to answer your actual question of how we get people to [00:39:00] the library: we hope future generations will grow up with it, and we will adapt it so it’s ultimately useful as an educational tool. In the meantime, the programs that the library spits out- the Great American Debate about climate change- are going to use persuasive messaging to identify the different ideological camps and groups that think a certain way about climate change and invite them to view different points of view. That’s marketing, that’s messaging, that’s storytelling, but what they ultimately get is this knowledge database.
Alistair Croll: [00:39:30] So, good answer, and it would be great if we all grew up with such a library. We’ve strayed more into politics than we normally would on one of these things, but I want to get back to the public sector. If public servants are trying to do good work, often in the face of political pressure to not do the right thing, or to under- or oversell things, how does a public servant navigate their delivery of services to a [00:40:00] society in the face of the rapidly vacillating political narrative that we are all surrounded by today? I know this is not an area of expertise for either of you, but if you’re a public servant and you’re facing the winds of frequent political change- whether you’re talking about porous borders between Northern Ireland and the Republic of Ireland post-Brexit, or discussions around electoral ballots in the US, or wherever- how does a public servant choose the right course, if there is no truth?
Dr Yin Yin Lu: [00:40:33] I have one immediate answer to this, actually. And that is to get out of your ivory tower, oval office, whatever it might be, get onto the streets, knock on people’s doors and ask them questions, and take a recorder with you and record what they’re saying, and ask them more questions, and genuinely, actively listen to them, and speak to as many different people as you can, and get an army of [00:41:00] volunteers to help you out on this front, and then spend some time transcribing and analyzing what they’ve said. That’s my immediate answer.
Alistair Croll: [00:41:09] Jamie?
Jamie Joyce: [00:41:10] We can help with that. So I do think listening, yes, is very important. Getting out of your own head and getting into the worlds of other people- I think that’s incredibly important. And as someone who considers myself a public servant of sorts, that’s why I’ve dedicated myself to the work that the Society Library does: so we can really hear people. At the end of the day, even though we may not know what the truth is when it comes to climate change or various issues, there are some fundamentals that we can lean on. If democracy is meant to operate by the will of the people, then, as Yin said, we should understand the will of the people and operate on the basic platform of their rights. If there’s any type of voter suppression happening, and it’s difficult to determine whether it’s true or not, what is the right course in terms of people having access to certain polling locations? It doesn’t matter who’s to blame or who caused it; at the end of the day, these people have rights. [00:42:00] So there are some definitions that are legally encoded that we can lean on, and some fundamentals that we can operate from, even if we can’t really discover the truth- because I’m sure Yin and I know very well that actually inquiring into that is an immense amount of work.
Alistair Croll: [00:42:15] Can you give me an example of the time you were most surprised by just getting out? Like is there a case where you got out of the building and talked to someone and went, “Wow, I’m completely disconnected from how people at scale are thinking about this.” Has that happened to you in your research, Jamie?
Jamie Joyce: [00:42:29] Yeah. I mean, I love talking to people, because I will always, always hear an argument that I’ve never heard before. And that’s because, you know, we’re mining arguments and claims from books, and social media, and talking to experts, but there is a digital divide. There are some arguments that people don’t post- they’re not writing Medium articles; only people who have a certain level of confidence, or, you know, whatever psychological variables, get their arguments online. So having conversations with people is actually immensely important, and we’ve recently partnered with a group that runs [00:43:00] focus groups- they interview and talk to people, and then we transcribe that content and add it to our database as well. We know it’s a very important part, and that’s why we’ve created a specific partnership.
Alistair Croll: [00:43:10] The Atlantic had an article recently about India’s lynching epidemic, and the problem with blaming technology and the point that they made- one of the points in the article, it’s a great article, was that while we say, “Oh, you know, WhatsApp is behind a lot of this. WhatsApp is just secure SMS that works for free.” Like it’s not a social graph with algorithms. You put yourself in a room and you talk to people. And so one of the claims the author makes is we should not quickly blame social media, because social media is actually just us talking at scale. With that lead-in, Yin, you spent a lot of time looking at the underbelly of content on the internet. What is your sense of whether modern communications technology is good or bad for democracy?
[00:44:00] Dr Yin Yin Lu: [00:43:59] Right. Yeah. This question really resonates with me, because the framing of technology being either the champion or the destroyer of the fabric of our society is, in my opinion, the wrong way to frame things. It’s the N level of detail, and you need N minus five, I would say, in this case. So in this context- my PhD, I think, is a good example. The case study there- my overarching finding was that rhetorical devices that work in traditional contexts, things like strong emotions, even playful language- which you’d think: no-brainer, right? That should help improve the chance of getting a retweet and a like. Well, actually, no. If you look at the kind of global average across every context, none of the techniques work, except who is speaking, which isn’t a technique, and whether there is a video, which is more reflective of how the algorithms of social [00:45:00] media work. Right? So obviously, videos are difficult to ignore, hence they go up a bit higher and they’re more visible, and people might then see them and then they might click, and all that stuff. But if you look at specific contexts- so one example from my research was live debates. There were so many of these on television during the Brexit campaigning period. And in the context of live debates, the technique of negative framing with playfulness- playful negativity- was very successful. Why?
Alistair Croll: [00:45:32] I feel seen.
Dr Yin Yin Lu: [00:45:35] Yeah. I might have to retract my earlier critique. But lots of, you know, GIFs and playful images, but also just text juxtapositions were very common for playfulness- you know, the text and the image don’t quite match up. And it’s because, obviously, you’ve got a more captive audience. So when you’re live streaming the debate, everyone who’s watching the debate is reading the live [00:46:00] stream. And so there’s more of a democratic discourse happening, and the effect of the algorithm- and the over-amplified impact of being a politician or a public figure, or of using a video- is vastly diminished. So yeah, in short, you have to look at the specific rhetorical or communication context to determine whether technology has accelerated an issue, or merely reflected the deepest, darkest recesses of our souls.
Alistair Croll: [00:46:31] I think it’s notable that on the very first episode of The Colbert Report, he had a segment he called “The Word”- like the word of the day, and they’d talk about a word. And what he’d do is exactly what you were saying: he would say something, and then the text next to him would actually contradict him. Because Colbert, in that show, was playing a persona who was not himself- it was supposed to be a right-wing TV pundit. And he would say, you know, “Hey, this is [00:47:00] great that someone has done this”, and then a little thing on the side would make a sarcastic reference, or criticize the thing and say what it actually was. And on the very first episode of The Colbert Report, the first episode ever, the word was “truthiness”. That’s the word he said- I think it was 2005. He coined that term. He nailed it. That truthiness is just truth I’d like to believe, and maybe it’s not true. And to me, that’s a watershed moment.
It’s an inaugural moment for it. We were all coming online for live streaming. We all had a reasonable history or footprint on the internet. We had several social media platforms that we put on different personas for- you know, I’m a different person on LinkedIn from Twitter, from Facebook. When I look at what we have built from a technology standpoint- if you were founding a constitutional democracy today, you wouldn’t need representation, because every person can comment. The tools that should empower us to have, like, direct democracy, and [00:48:00] ask our citizens things more directly, and communicate with them at scale- instead of waiting for a printing press, or limiting ourselves to those that can afford a broadcast license- it seems like now we’ve gotten to a point where that’s more of a liability than an asset, because there were these unintended consequences of communication at scale. And I worry that if you’re trying to get out, for example, best practices for COVID protection, or communicate policies like that, what should have been this amazing tool for communicating with your citizens at scale has now become a vulnerability. Am I being pessimistic, or am I scaremongering again? I should throw in something witty at the end to make this funny.
Jamie Joyce: [00:48:43] I don’t think you’re being pessimistic. I think that whenever we communicate- and especially given how much we communicate- we are increasing the surface area for criticism and critique and comment. So even though it’s amazing that we have all these direct lines of communication with hundreds of millions of people in the US [00:49:00] alone, whenever a communication comes out, if it’s oversimplified, people are going to have questions. If it uses the wrong words- words not everyone agrees on the meaning of- there are going to be debates and critiques. So I think that’s the complication with communication: there’s so much abbreviation in our logic and language, there are so many implications, and every time we communicate, we increase surface area. And so, like, part of my work is just to flesh out that surface area as much as possible and lay it all out so we can all get on the same page. But I do think that it’s kind of a double-edged sword, like many institutions are.
Dr Yin Yin Lu: [00:49:34] I would agree with that in the sense that it’s so easy- context collapse is a popular phrase used by researchers in the internet space- it’s so easy now to take a snippet from any piece of online information that you see, whether it’s a tweet or a video or a blog, and put that in a different context to support your own argument. And because people don’t actually read beyond the headline, [00:50:00] it doesn’t matter what the actual original source said. What matters is how it’s being used. That, I think, is the tricky thing- which is where education, again, coming back to that, is a defense against this dark-arts usage of communication.
And, you know, I wrote a piece on Medium a couple of months ago entitled “Forget Data Science, You Need Communication Science”. If you’ve read the piece, you would see that I actually am a staunch advocate of data science. I’ve been there, done that, and love it. Did a whole webinar on it. Amazing thing. But of course, people took that headline, and to some people my article was, “Oh my God, look at this crazy person saying that we should forget data science. How dare she say that?” So that was my fault. I was being clickbaity, like you, Alistair. I completely had to do that. And the reason I had to do that is this bloody capitalist framework that we all live in, which is getting worse and worse thanks to the internet. How have we not mentioned capitalism yet? We are in a capitalist democracy. It’s not just democracy in a bubble. It’s a capitalist democracy, [00:51:00] and capitalism is going completely haywire. It’s the whole “bitch eat bitch” situation you’re seeing on social media in every single context. Unbounded, unfettered capitalism, and the public sector is too slow to catch up. This is the issue. And you can’t get the talent. I’m sorry, but the talent is going to Google, to Facebook, to Airbnb. I know very talented people from my Oxford Internet Institute, where I did the PhD, who said, “I would never work for Facebook.” Oh, and a year later, they’re all at Facebook. Why? It’s obvious. It’s absolutely insane. Sorry.
Jamie Joyce: [00:51:35] No, don’t apologize.
Dr Yin Yin Lu: [00:51:37] So we just got to get some regulations in there, attract talent. This is all going to take time, by the way. And, you know, I guess. Yeah. We can’t just take things out of context. Sorry. I’m going to calm down a bit. I know, right.
Jamie Joyce: [00:51:51] No, I’ll fire right up as you calm down. About taking things out of context- this is actually one of the criticisms I have of the fact-checking community. I adore [00:52:00] fact-checking as a discipline. It’s so honorable and important. However, I have complained many times about instances where facts are actually taken out of their full historical context or full political context so they can be weaponized to essentially support the counter-narrative. And what’s just kind of blown my mind is that we cannot reduce fact-checking down to just fact-checking a single claim, right? No, we really have to embrace the complexity and examine the full context in which that piece of information is correct. And the other thing I wanted to say about the capitalist context- I suspect maybe that Atlantic article was commenting on a recent documentary called The Social Dilemma. I have no idea, I haven’t read it. But- are you going to confirm, Alistair?
Alistair Croll: [00:52:42] Sorry, I can neither confirm nor deny. The article was just saying, “Hey, maybe we can’t blame tech.” And I mean, Tristan Harris has done great stuff trying to explain what’s going on, and I think for most people it’s good as a remedial lesson. I have a T-shirt- I literally woke up one [00:53:00] morning, ran across my street in Toronto to a T-shirt store and said, “Do you have five minutes? Can you type this in and print me one?” And it just says, “Those who do not understand the workings of the internet are doomed to believe it.” And I’ve literally walked down Market Street with that shirt on and had people high-five me out of nowhere- pre-COVID, no fomites. But it does seem to me like, you know, the work of Daniel Kahneman, where he talks about System 1 and System 2: System 1 is very quick reactions, where you don’t feel like you have a lot invested in the decision, and System 2 is, “Okay, I’m going to spend some time thinking about it in a reasoned way.” Let me try and explain an example of this that is cited in a few books. Pardon me?
Dr Yin Yin Lu: [00:53:41] “Let me explain an example” is a verbal indication of going down the level of detail.
Alistair Croll: [00:53:44] Oh wow. Okay. I’m going to go down a level. We ready here, everybody ready? So if a baseball bat and a baseball cost a dollar and 10 cents, and the bat is a dollar more than the baseball, how much is the bat? [00:54:00] Quick!
Jamie Joyce: [00:54:02] Did you say it’s a dollar? What- can you repeat that?
Dr Yin Yin Lu: [00:54:05] I’m so confused.
Alistair Croll: [00:54:07] So a baseball bat and a baseball cost a dollar 10 cents total. The bat is a dollar more than the baseball. How much is the bat?
Jamie Joyce: [00:54:15] A dollar.
Alistair Croll: [00:54:15] No.
Dr Yin Yin Lu: [00:54:16] Ten- Ten- Ten- No, 10 cents?
Jamie Joyce: [00:54:20] Oh, it’s 0 cents.
Dr Yin Yin Lu: [00:54:22] Hang on, no.
Alistair Croll: [00:54:23] So it’s a dollar five and 5 cents. Because the bat is a dollar five and the baseball is 5 cents, right? So together, they cost a dollar 10 cents.
Jamie Joyce: [00:54:34] Oh, and that’s a dollar over.
Alistair Croll: [00:54:35] It’s a dollar more, right. But I didn’t say to you, “If you get this wrong, I will kill you,” right? So it’s a reasonable- not a really complex math problem; you could take a pencil and write it down, and you’ve all done this, right? X equals Y plus a dollar, and X plus Y equals a dollar 10, so you can solve it. So it’s a simple problem, but Kahneman’s argument is that for most problems like that, we go, “Ah, [00:55:00] not a big cost of getting it wrong. Here’s a quick, easy answer.” Because as humans, we are, like, lazy, and we don’t want to invest too much time. So we’re constantly satisficing- deciding how much intellectual effort to devote to something. It’s why we binge-watch Netflix with ice cream in our laps, right? Our better selves know that we shouldn’t, but we do it because we are satisficing the finite amount of energy we have. And it really seems to me like learning is a lot of work. Taking the time to visit the library, taking the time to unpack the rhetorical statements, is a lot of work. And I’m fascinated.
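[Editor’s note: the pencil-and-paper algebra Alistair describes can be written out in a few lines of Python. This is a minimal sketch- the variable names and the checks are ours, not from the conversation.]

```python
# Kahneman's bat-and-ball puzzle:
#   bat + ball = 1.10   (together they cost a dollar ten)
#   bat  = ball + 1.00  (the bat is a dollar more than the ball)
# Substituting the second equation into the first:
#   (ball + 1.00) + ball = 1.10  ->  2 * ball = 0.10  ->  ball = 0.05
ball = (1.10 - 1.00) / 2  # 0.05: the ball costs five cents
bat = ball + 1.00         # 1.05: the bat costs a dollar five

# The correct pair satisfies both constraints:
assert abs((bat + ball) - 1.10) < 1e-9  # they add up to $1.10
assert abs((bat - ball) - 1.00) < 1e-9  # and differ by exactly $1.00

# The intuitive System 1 answer (bat = $1.00, ball = $0.10) adds up
# to $1.10 too, but the difference is only 90 cents, not a dollar:
assert abs((1.00 - 0.10) - 1.00) > 1e-9

print(f"ball = ${ball:.2f}, bat = ${bat:.2f}")  # ball = $0.05, bat = $1.05
```

Writing down both constraints is exactly the System 2 move: the fast answer satisfies the sum but silently violates the difference.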
I had a great conversation with Joyce Murray, who is the minister of digital government for Canada, last week. And she said to me that one of the challenges they face is figuring out how to get citizens to clamor for these changes. Like, nothing’s going to work until people are excited about delivering ubiquitous digital identity, for example. Nothing’s going to [00:56:00] work until we get everybody to agree that these are the things they want. As you said, get out of the office, get out of the building, find out what people want, and then align your interests with theirs. So I think when we’re looking at how to produce change in an era of low trust in institutions, we have to factor in this idea of System 1 and System 2 thinking, where System 1, for Kahneman, is the lazy, quick answer, and System 2 is, “I’ve got to get out the pen and paper, work this out, think it through- you know, use my conscious brain rather than my reflexive brain.” The psychology of government- it seems like governments are not hiring enough behavioral economists and psychologists right now. Would you agree with that?
Dr Yin Yin Lu: [00:56:41] Absolutely. Although in my dream vision of education in the future, we wouldn’t even have degrees in psychology or behavioral economics. We’d have degrees in themes like the internet or climate change, or I don’t know. It’s hard to kind of think about the future themes, [00:57:00] but the whole idea is that these themes would basically take into account psychology and behavioral science and a bit of stats, data science, you know, in one package. Does that make sense?
Alistair Croll: [00:57:12] Yeah. Yeah. That’s really interesting that you’re aligning according to verticals and sort of horizontals kind of thing.
Dr Yin Yin Lu: [00:57:16] Yeah, exactly. And the thing is that these themes would, by definition, have to be updated every five, 10 years. So the curriculum- one of the major issues with the current curriculum is that it’s pretty fixed, and that’s because professors have no incentive to actually update it, because they’re not paid to be good teachers. The metric of success is publications in high-impact journals. It’s been this way for the past 100 years. Hasn’t changed yet. So there are lots of things that need to change for the system to change, but until we get higher-quality education that’s frequently updated and focused more on the most resonant themes, we’re not going to be able to improve the public sector in the way that [00:58:00] you’ve just described, Alistair. So again, sorry to keep hammering on the E word. There have to be, I guess, more investments in education, and throughout one’s vocation or career- that’s the other thing I think has to change. I’m not sure how exactly to do it. I think Jamie- Jamie, your Society Library could be something that’s mandatory for everyone living in the US, for example. I don’t know.
Alistair Croll: [00:58:25] You have to visit the library in order to vote on a topic. You have to, like- you want to vote on this or you want to vote on this bill? You’ve got to go.
Jamie Joyce: [00:58:34] I have ideas about how voting could change actually.
Alistair Croll: [00:58:37] How would you change voting, Jamie? Since we’re solving society.
Jamie Joyce: [00:58:40] Yeah. Since we’re solving society right now. Well, you know, I come from an American perspective, and when the United States was founded, not everyone could really vote. Some of that was incredibly discriminatory, but some of it was reasoned: that people who owned land, were wealthy, were well off, were educated should be the ones voting. And so one of the things that I’ve been ideating on for some time is: [00:59:00] what if we had a system of representation of ideas instead of a system of representation of individuals? Impact on a group of people can be quantified, and that’s a claim; you don’t necessarily need everyone who is part of that group to demonstrate their impact by voting in a particular, and particularly binary, way. And so one of the projects we’ve been ideating on, once we finish the debate-mapping projects, is taking what is essentially our system of representation and trying to figure out, “Okay, how could collaborative, win-win policy be created from arguments that are determined (again, very soft and squishy and subjective at the moment) to be uncontested, and to meet a certain threshold of having been proved in relation to other arguments?” There are models we could make, and it’s not going to be perfect, but that’s one of the things I’ve been thinking about: we have our values, we have our Bill of Rights, we have our laws. So how can we use those as the base framework upon which we ideate and argue about solutions to problems that have been well articulated, as opposed to trying to corral people to choose one side or the [01:00:00] other to solve all of their problems?
Alistair Croll: [01:00:02] Yeah. Right now you’re voting for a team, not voting on the topics you care about. And I think this comes down to what you’re saying, Yin, which is: if we have education around a specific domain, I’m an expert in the internet, but I understand programming and ethics and all the other pieces of it: the cost systems, the economics. A few years ago, I got called out and challenged on this: “Well, okay, if you’re so smart, how would you change democracy?” And I got into a good conversation with some people around this idea of syndicated democracy. So I might not know a lot about education, but maybe I trust that Jamie does. So I give Jamie my token for any education-related voting. And then Jamie gets some kind of compensation from the state: for every token, she gets a dollar.
So now, after a few people, Jamie can set up her own media outlet, because she’s paid to do it; she can do research; she becomes a representative; she carries around a bag of votes with her; and if she stops voting the way I like, I can just take my token back. But if we have these tokens around, like, [01:01:00] 20 or 30 topics, from the economy to healthcare to technology or whatever, that’s a neat model for syndication. We actually already have it today: it’s called the media. We just pay them with ad views and clicks. But because of that, as you said, Yin, because there’s an underlying bottom line and profit element to it, weird, perverse outcomes happen. So I’m not trying to be inflammatory when I say, do we need a civil war or revolution? And hopefully nobody accuses me of sedition, but I think that the fundamental systems we have for incentives, for capturing externalities... the best line I heard recently was that the problem with current politics is that we treat the future as an externality. And until we figure that out, until we figure out how to incorporate that unknown future not as an externality but as something that’s actually factored in, we’re gonna have a very hard time surviving as a species.
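The token mechanism Alistair describes here, topic-scoped delegation that can be revoked at any time, is essentially what’s known as liquid democracy. A minimal Python sketch of those mechanics, where the names and the topic are purely illustrative and the dollar-per-token compensation is left out:

```python
# Minimal sketch of topic-scoped vote delegation ("syndicated democracy").
# Each citizen holds one token per topic; a token can be delegated to a
# trusted representative and taken back at any time.

class TopicTokens:
    def __init__(self, topic):
        self.topic = topic
        self.delegations = {}  # citizen -> representative

    def delegate(self, citizen, representative):
        self.delegations[citizen] = representative

    def revoke(self, citizen):
        # "If she stops voting the way I like, I can just take my token back."
        self.delegations.pop(citizen, None)

    def weight(self, representative):
        # A representative's voting weight: their own token plus every
        # token currently delegated to them.
        return 1 + sum(1 for rep in self.delegations.values() if rep == representative)

education = TopicTokens("education")
education.delegate("alistair", "jamie")
education.delegate("yin", "jamie")
print(education.weight("jamie"))  # → 3 (her token plus two delegated)
education.revoke("alistair")
print(education.weight("jamie"))  # → 2
```

Real liquid-democracy systems add transitive delegation (my delegate can delegate onward) and cycle detection; this sketch keeps only the revocable, per-topic core of the idea.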
Dr Yin Yin Lu: [01:01:54] Yeah, just a quick comment on this, and also on Jamie’s previous comment. I think the key issue isn’t [01:02:00] fixing the voting system technically. There’s all this hullabaloo about making it more efficient and streamlined and updated for the internet age, blah blah, so it’s easier to vote, and so on. I think that is immaterial. The key issue is a behavioral one. It’s not about the tech. It’s about getting people to care enough to overcome whatever hurdle there might be to vote, and Brexit is a great example of that, because so many more people went to the polls and drove for hours in the rain. It was raining that day, actually; people left work for half a day just to queue for many, many hours and sat in the rain to be able to cast a vote, because the campaigning, especially on the Leave side, was so effective that it got people to care, and the future was not an externality. It was intricately tied to the motivations and incentives of citizens in the present. And so I think we need to look at these case studies as [01:03:00] examples of effective rhetoric and learn from them, not dismiss them, but go, “Wow, Vote Leave was absolutely brilliant.” I spent three hours grilling the chief executive of Vote Leave for my thesis and I loved every second of it. We really hit it off, actually. Sorry. So, just to end with: it’s a behavioral problem, and it’s about caring, and much less about voting as an abstract tool.
Jamie Joyce: [01:03:25] And see, that is one of the reasons why I worry. To borrow a phrase from Tristan Harris: “It’s almost like we’re racing to the bottom of the brainstem.” If, whatever team you’re on, you have to use the best rhetoric to convince people to care, to get there, to vote... I mean, to me, that’s why I invest my time in trying to figure out new systems of representation, and in ideating on how we could deliberate and collaboratively solve problems without having to corral the crowd. I think, Alistair, what you mentioned is something similar to liquid democracy. And for me, it’s still a [01:04:00] popularity contest. So there have got to be ways that we can create new collective intelligence systems. And maybe it’s just that we have a veto vote as opposed to a vote for. These decisions are made through a deliberative, collective intelligence process, and it can be transparent. But at the end of the day, if people only have to get up to veto it, as opposed to vote for it, that may be a better-
Alistair Croll: [01:04:22] Like with driver’s licenses, where you have to opt out of organ donation versus opting in. That’s interesting. Yeah.
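Jamie’s veto idea, like the organ-donation opt-out, inverts the default: a proposal produced by deliberation stands unless enough people actively object. A hedged Python sketch of that decision rule; the ten-percent veto threshold is an invented illustration, not something proposed in the conversation:

```python
# Opt-out collective decision: a deliberated proposal is adopted by
# default, and fails only if the share of active vetoes crosses a
# threshold. The 10% threshold is purely illustrative.

def adopted(population_size, vetoes, threshold=0.10):
    # Silence counts as assent; only explicit objections are tallied.
    return len(vetoes) / population_size < threshold

# 1,000 citizens: two objectors is not enough to block...
print(adopted(1000, vetoes={"citizen_42", "citizen_99"}))          # → True
# ...but 150 objectors (15%) crosses the 10% threshold and blocks it.
print(adopted(1000, vetoes={f"citizen_{i}" for i in range(150)}))  # → False
```

The design choice is exactly the one Alistair names: where the default sits. Requiring a vote *for* demands mobilization; requiring a veto demands only that objectors show up.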
Dr Yin Yin Lu: [01:04:27] Not the binary yes-no, but I absolutely love that. Can I also fact-check your metaphor, Jamie?
Jamie Joyce: [01:04:35] Please, yeah.
Dr Yin Yin Lu: [01:04:36] Horrible, horrible concept, fact-checking a metaphor.
Jamie Joyce: [01:04:39] It’s not my metaphor either.
Dr Yin Yin Lu: [01:04:41] Yes. I actually met Harris; he spoke at the Internet Institute, and I met him through one of his colleagues who was doing a DPhil there as well. So I’ve met the guy and he’s super nice. But the brainstem, in my limited understanding of neuroscience, is this little blob called the cerebellum. And I effing love this blob, because it is system two [01:05:00] thinking. It kind of tempers our emotions and controls our physical movements. It’s very small, as an organ, but it has massive impact. And in a way it’s diametrically opposed to the amygdala, which is obviously what the internet-
Alistair Croll: [01:05:15] But the amygdala is system one.
Dr Yin Yin Lu: [01:05:17] Yeah, but I think the cerebellum is part of system two. So I think the bottom-of-the-brainstem metaphor might not work, but that’s me as an unenlightened neuroscientist.
Jamie Joyce: [01:05:27] Well, I appreciate you fact-checking my metaphor. Yeah, I actually really do. Thank you. That’s ok, that’s why it’s recorded.
Alistair Croll: [01:05:35] So this is awesome. I’m very happy the two of you know each other, because you’re both brilliant, but you have very different approaches, and I think it’s sort of a bottom-up and a top-down approach to the same problem. Because you can’t talk about reason and structures of thinking without the emotional, rhetorical side of it as well. And so maybe this is the equivalent of the amygdala [01:06:00] and the cerebellum actually getting along and forming a salient consciousness that has a chance. I’m sorry if I sound overly polarizing, but part of my job is to get people to argue and get excited about things, and clearly you both did that. We have to wrap up now; this has been a fascinating conversation, and I’m sure we’ll have more of them. We have a whole morning lined up on November 9th for resilient democracy. I’m thrilled that both of you are in our social graph, in my confirmation-bias group, and I really look forward to more of these conversations. Thank you both so much for being here. I’m really looking forward to whatever you’re both working on next. Thank you for trying to save the world in your own little ways, and let’s hear it for better education, for getting out of the building, learning from people on the street, and talking to actual folks to understand them. The whole “seek first to understand” thing applies so much to restoring truth and trust in institutions right now. So thank you so much, both of you, for taking the time to do this. It’s been wonderful.
Jamie Joyce: [01:06:56] Thank you too. And it’s been just a pleasure, Dr. Yin.
[01:07:00] Dr Yin Yin Lu: [01:07:00] Ah, thank you. Can I just say, I’ve been counting the rhetorical devices in our conversation: 11,093. That was a hyperbole. 11,094 hyperboles!
Jamie Joyce: [01:07:13] Do you know how long that would take me to deconstruct?
Alistair Croll: [01:07:15] All right, have a great day, thank you both so much. This was fantastic.
Dr Yin Yin Lu: [01:07:21] Thank you.
Jamie Joyce: [01:07:22] Cheers.