The sixth episode of ‘Radio Resilience. Conversations about our ability to adapt and respond to change’ is about new and emerging technologies, how they can be weaponised by foreign and domestic adversaries of democracy and what democracies can do to become more resilient against these hybrid threats. For this exchange, held on 19 December 2022, we were joined by:

  • Zoe Stanley-Lockman, Innovation Officer in NATO’s Emerging Security Challenges Division
  • Bishop Garrison, Fellow at the National Security Institute, VP Government Affairs and Public Policy with Paravision, former Senior Advisor to the U.S. Secretary of Defense
  • Edward Christie, Senior Research Fellow with the Finnish Institute of International Affairs
  • Alice Stollmeyer, Executive Director, Defend Democracy (moderator)

You can listen to the conversation here, and read the transcript below.

Alice:

Our conversation today is about new and emerging technologies, how they can be weaponised by foreign and domestic adversaries of democracy and what democracies can do or are doing to become more resilient against these hybrid threats.

What do experts mean when they talk about ‘new and emerging technologies’? Can you give some examples of such technologies?

Zoe:

In 2019, at NATO we listed the main emerging and disruptive technologies in seven technological areas that shared some underlying factors. Many of them were areas where technological advancement is really driven by the civilian, private and academic sectors, rather than spinning out of governments and militaries as the traditional 20th-century innovations did. Some of them were clearly potential disruptive threats to the way that we deter and defend. So we started out with these seven technology areas, and at the risk of rattling off buzzwords, I’m going to list them for you.

It’s data, artificial intelligence, autonomy, quantum technologies, biotechnology and human enhancement, hypersonic technologies, and space. We constantly scan the horizon, not only for the convergence points between these different technological areas, because that’s usually where the most disruptive potential lies, but also for trends that we need to keep enhancing our shared understanding of as an alliance. So we’ve added two more technological areas to the list: novel materials & advanced manufacturing, and energy & propulsion. And on that last point, that’s where we see a lot of the green-tech and clean-tech advancements that are going to be so important for our future resilience.

The main point from our side at NATO is that we need to better engage the innovation ecosystems that are driving the advancements across these different kinds of tech verticals, and also pay attention to the convergence points between them, because that’s where some of the most important scientific breakthroughs are happening. And as I’ll get into a little bit later, that’s where we start to look more at deep tech, where we’re focusing on higher-risk technology areas that have longer development life cycles. Their hardware and software integration makes them riskier, and the long life cycle means that the short-term profit maximisation that market dynamics drive toward isn’t necessarily drawn to them, which is why we see enhanced public interest in propelling that future innovation. And again, we’re always looking not only toward what has emerged, but toward what’s going to be on the horizon.

Bishop:

It is fantastic to have a panel like this, because I think we really do approach even a basic or simple question like this from two different aspects. When I think of emerging technologies, I’m talking about developing technologies that will be available in about five or ten years. As it relates to my expertise and public policy, we start having discussions around deep fakes and the ability to identify and detect them. But more broadly, the technology that’s really going to be disruptive is along the lines of virtual reality and augmented reality. There are many different ways that both can be leveraged, particularly in the military and national security generally, or even in healthcare and other industries: the ability to train professionals for highly skilled positions will be very valuable, adding safety and taking away associated risk.

Another great one that we’ve continued to see develop is the Internet of Things. IoT is going to continue to present both challenges and a lot of potential benefits. But we really need to get a better understanding of what it means for our interconnectedness, and what it means for security risk across the board from a cybersecurity perspective as well.

And then one that we’ve seen recently in the news is the major breakthrough that was recently made, and those that will potentially be made, in fusion energy. The seismic societal impact that something like that could have, if they’re able to make any kind of move toward commercialisation within the next decade, is just going to be tremendous.

So as you think about the verticals, I’m thinking a little bit more about some of the specific technologies that could potentially be coming out, and the impact they can have globally on society, as well as, for me more specifically, on the national security apparatus here in the United States.

Alice:

Thank you, Bishop. Zoe, do you specialise in one or more emerging technologies in particular? And can you describe this field and elaborate a bit about your work in this area?

Zoe:

I have focused more on the organisational change that’s required to adapt to the pace of technological advancement today. That brought me into looking at software-intensive systems at first; then I spent a couple of years looking at non-lethal applications of artificial intelligence, and at the human-machine teaming dynamics that will be really decisive to how we operate in the future. Personally, that led me into a focus on responsible use of technology. I think Bishop gave a really excellent introduction to why this area is so important. And one of the reasons I love working for NATO in this space is because the responsible development and use of emerging and disruptive technologies is at the forefront of how we’re approaching these areas.

Just to highlight one specific area, that of artificial intelligence: the Alliance has six principles of responsible use. You can find them on the NATO website; if you look up the summary of NATO’s AI strategy, it’ll navigate you pretty quickly to our six principles. What’s really important is that we’re an alliance of democracies and an alliance of shared values. That’s core to the North Atlantic Treaty, so it’s really embedded in the roots of how the Alliance is meant to operate. And with regard to technology, we have an opportunity to set that groundwork, to build responsibility into design as we set up new structures from a governance perspective, looking at the way we’re designing technology and creating new and future requirements for its development and use. So the six principles of responsible use are really exemplary of an area that I find personally important and that I’ve worked on for quite some time – ethical or responsible AI.

But what makes me excited is the effort to operationalise the principles. We’re standing up a new [Data and AI Review Board] that will have representatives from all soon-to-be 32 allies. The Board is going to function by creating really practical tools to put our principles into practice. At its core, the most important tool that will be developed next year is a responsible AI certification standard, because standardisation is the bedrock of our interoperability, but also because it’s a tool that gives predictability to so many different stakeholders who are really interested in this space but don’t necessarily have the tools they need to create responsible AI. So that’s an example of one of those areas that I’ve worked on personally.

And then our team has also been very involved in standing up two really exciting, larger-scale initiatives at NATO. One is DIANA, the Defence Innovation Accelerator for the North Atlantic, which is going to be a new kind of business model for working with innovators to solve our hardest defence and security challenges. And we’re also standing up a 1 billion euro venture capital fund, called the NATO Innovation Fund. The reason I bring both of these up is because when we’re standing up new structures like DIANA and the Fund, we have an opportunity to embed exactly what we want for the future governance of different technology areas into the very governance of how these new structures are going to look and how they’re going to work. I can get more into that when the discussion gets a little bit deeper. But for now, AI is a really interesting starting point, because it forces us to ask those difficult governance questions. The way we actually design structures can then be responsive to the pace of technological change, and we can use some of those more mature technology areas as starting points for building out the way we work with the innovation ecosystems focusing on the deeper-tech integration of hardware and software.

Bishop:

As a part of industry, we need to ensure that we are a trusted partner not only with our customers, but with society at large and with government, in building out our services and our products overall. For us, that means building trusted AI. So the question just becomes: what does that mean, specifically in the vision space and more broadly in the discussion of artificial intelligence generally? From the perspective of a business within industry, for us, it meant that we needed to develop AI principles.

Our key principles have two main focal points: the idea that whatever we build must be ethically built, and that we need to ensure it is conscientiously sold. So what does that mean? I think Zoe already touched on the ethically built piece, but the idea is that as you go into the development of these technologies, these machine learning algorithms, the actual software, or what have you, you’re attempting to do the least amount of harm and the most good. You need to be thinking about how you’re training the models, and whether there’s any inherent bias in the development and training of the models. And then, what does the team look like? Who is actually building it out? What are their backgrounds? Where are they from? What is their mindset? What type of diversity of background and diversity of thought exists in the team that’s actually helping you to develop the state-of-the-art technology?

The second focal point is conscientious selling. Who are you doing business with? Of course, there are legal ramifications right off the bat. As a US-based company, we need to know whether an entity or government is on any type of sanctions list, or whether it’s an authoritarian regime, what type of business it engages in, anything that would go against our values, our principles and, in some instances, the law. We’re not going to do business with those entities.

But it’s also about the type of business that we’re talking about. Even if it’s an approved or supported US company, a US ally, or a trusted foreign business, what are they asking us to do? We ensure our algorithms are not used in weapon systems. That is very important for us as a business and for our engineers, and using them that way is something we believe goes against our value set and our value proposition when we’re talking about state-of-the-art technology.

So it’s important for us to understand, as we enter into these discussions on the business side, that we align ourselves with our ethics on both the business side and the development side in any arrangement we may enter into, because it’s not simply important for us to make money. It’s important for us to know that we’re being good stewards and good supporters of the global community when we’re building such cutting-edge technologies. Part of our internal governance structure for doing that is ensuring that we have a review of every use case we may potentially take on as part of our business. For proprietary reasons, I won’t go into the specifics of what that system looks like, but understand that there is a group of people internally, of which I am a member, that reviews each and every one of the potential business opportunities. We have to ensure that it ethically aligns with who we are and who we say we are as a business entity within the United States and within the global community. So that’s very, very important for us.

Finally, for us, it means developing trusted vision AI and understanding how a vision AI system can be both properly leveraged and abused, so that we can minimise the abuse while continuing to highlight and message the positive use cases that are key aspects of our business model. For us in the vision AI space, that means focusing on the future of logistics, of manufacturing, of medical imaging, as well as key uses of this technology for the verification and authentication of identity. You constantly see in the news a lot of the bad use cases around leveraging face recognition and AI systems, and there are a lot of governments, whether the EU, US governments or elsewhere, that have real concerns about them. We want to make sure that we’re continuing to communicate exactly what we do, working as a trusted partner with other entities, with governments, with society, with think-tanks, academia, and subject matter experts like Zoe, and like you, Alice, to understand what best practices we need to leverage and what types of guiding principles exist for continuing to build out these types of technologies. One that is really important and recently came out is the Blueprint for an AI Bill of Rights from the Biden administration’s White House. It is a very good first step in giving us guiding principles that we have not seen captured in that form before.

Alice:

Thank you, Bishop, it’s really interesting to have your perspective from the business side.

Edward, glad you managed to join us! Could you give us a short introduction of who you are and why you’re joining us on this topic?

Edward:

I’m a senior research fellow with the Finnish Institute of International Affairs. So I’m a public policy generalist, especially interested in foreign and security policy. I used to work at NATO, doing similar work to what Zoe is doing now. So I was there when NATO was taking its first baby steps in developing policies on emerging technologies (ETs) and artificial intelligence. But as an economist by background, my interest is in the public policy implications of the new technologies, how we adapt to them, how we regulate them, and also in the foreign and security policy implications of these new technologies. So on the one hand, of course, the defence and military implications of the new technologies, but I’m also interested in issues to do with their political effects, which are probably the issues that excite a lot of people on social media, wondering whether social media are properly governed. Those aren’t necessarily cutting-edge technologies; we’ve had this platform economy for a number of years. But our political spaces, our public discourse, have been strongly affected by new technologies.

Alice:

Thank you, I’m really happy that you could join us. Let’s dive a bit deeper into the topic for today. How can these new and emerging technologies be weaponised by not only foreign, but also domestic adversaries of democracy? And is this just a future threat? Or are there already relatively new technologies out there that are currently being weaponised against democracies?

Zoe:

The fact that we’ve already naturally tended toward talking about responsible use of technology in the first half of this conversation sets us up really well. Another reason that NATO takes this so seriously, and is creating this new board and this new forum to be part of the governance community, is to better understand, and be better prepared to prevent and manage, the irresponsible use of technology that would adversely impact our strategic and operating environments. This is something that we’re already seeing today, both domestically but especially internationally, and especially from potential adversaries and strategic competitors. We see competing models of governance: Bishop talked about how we need to align our use of technology with our values, and there are different modes of doing that.

For us, one of the first steps here is to better understand what irresponsible use of technology looks like and how it can create new risks to compliance with international law, especially international humanitarian law. And if we have a clear-cut threshold of what’s considered irresponsible or unlawful, especially with regard to armed conflict, what actually happens below that threshold? How do we govern technology below that threshold in a way that’s still consistent with our values? I think these are really important questions for us to tackle collectively, and that’s really core to what we are looking at with responsible use of artificial intelligence especially. I’d love to hear a little bit more from Bishop and Edward, and then maybe we can circle back, but I think it’s just an absolutely vital area, so I’m curious to hear your thoughts on this first as well.

Bishop:

You made a very salient point that we’ve already trended towards the discussion of ethical use. For my company and our business specifically, I think there are three key pieces that we look at. We were already talking about authoritarian regimes and the misuse or abuse of this type of vision AI technology. And we have to be very careful because, since we have one of the world’s best algorithms, with the ability to identify and differentiate between individuals, particularly people of colour, better than a lot of the other ones can, there’s already a concern from individuals, from governments, from elected leaders, that this type of technology can be leveraged against marginalised communities as it already stands.

So we really want to make sure, and this goes back again to the ethical use standard and the use case review that I mentioned previously, that our technology is not only built ethically, but also sold with those potential misuses and abuses against marginalised communities in mind.

So what does that look like? Well, we don’t support live surveillance, at least not beyond a case-by-case perspective. Any type of broad live surveillance, where individuals, organisations or governments can just go out and search for trouble, is not something that we want to be associated with or be a part of, because you can inherently see some of the truly negative and horrific uses that type of technology could be put to.

Another piece for us is the fight against deep fakes that I mentioned previously. As facial recognition technologies develop more and more every day, we’ve already seen how that is translating into video: individuals now have the ability to sit down and make deep fakes, and they’re getting more and more sophisticated. So it’s really important that, one, we’re not working with individuals or groups that are going to abuse that type of technology, and two, we have invested in the development of a deep fake detector, helping governments and organisations identify fabricated videos. The last thing we want is a video of a public official or senior leader saying something ill of an enemy, or even of an ally, that causes some type of international chaos that could lead to something like an armed conflict. So it’s really important to be a part of the positive work of identifying those types of media.

And finally, there’s the ability to track and follow individuals and crowds, utilising this type of vision AI for the negative purposes of targeting individual people, or groups of people like marginalised communities, to find some type of evidence of wrongdoing where none really exists, continuously violating an individual’s privacy here in the States, in Europe or elsewhere. That is something we’re really concerned about. We see the EU developing its privacy legislation, and legislation is ongoing in the United States, not only at the federal level but also at the state level, with a plethora of new privacy acts popping up specifically around the use of these types of technologies. It is really important for us to be good stewards of the technology we build, to act as proper subject matter experts, and to speak to these governments and work with them to ensure that we continue to be innovative in this space and don’t build out policies that, while well intended, may have some type of overall chilling effect on industry or on our ability to protect ourselves against the misuse of these types of technologies.

Edward:

I’ll just start from a slightly different perspective. If you think of the potential military applications of emerging and disruptive technologies, be it AI, autonomy, robotics, etc. – imagine advanced drones, for instance – and look at this as a social scientist, we are obviously in a situation where we have a technology race between the great powers. And obviously Russia and China want to get their hands on the best technology possible. I’m sure that many listeners have heard or read plenty of stories about industrial espionage: not just military espionage, not just our adversaries trying to spy on our defence establishments, but also private companies sponsored by hostile states trying to get their hands on the very cutting edge of what our tech sectors do best, especially those companies involved in dual-use products that could also have military applications. We are in a race.

So on the one side, there are a lot of activities that we want to pursue in our countries to try to innovate as much as possible. But then there is the other side, which I would call the antagonistic policies, if you will, where our adversaries try to spy on us and hoover up our technologies, and we try to stop them. We try to ensure that there is less technology transfer, at least for the really sensitive technologies.

That’s a space which is in great motion. It’s handled less through NATO, and more by national governments and through the European Union. Here I’m thinking of things like sanctions, export controls, foreign investment screening, and strengthening our intelligence services so that they can stop industrial espionage. So that’s a whole piece of the puzzle which is important as well, and we’re not out of the woods, because it links up to the whole issue of whether we’re looking at some kind of economic decoupling with respect to China. And perhaps European governments are not quite so sure; they’re not quite as bullish as our U.S. colleagues. There’s the whole issue of semiconductors, where they’re produced, and how safe the supply chains are.

So I just wanted to highlight that whole area of work and how important it is from the public policy perspective; we’re still in the middle of it. And it really challenges the best hopes that our nations had maybe 15 or 20 years ago: that we could simply have a very open globalisation process, that our universities, our research labs and our corporate sectors could be very open and hire the best talent from all over the world. Now we’re finding that it’s a bit more delicate. So that’s an area where our governments need to pay plenty of attention, and I’m not sure that we’re quite where we want to be.

Alice:

Is there a difference between foreign and domestic abuse of new and emerging technologies? Like, is the foreign threat more about national security, and are the domestic threats more about things like individual data privacy and digital rights? Or are the lines between foreign and domestic completely blurred?

Zoe:

I want to tie this answer to what Edward just introduced into the conversation, which is thinking about how we protect our advantage with regard to these technologies. This is an absolutely vital question for our resilience, because the resilience, the prosperity and the military advantage that we have historically had are partially thanks to our ability to maintain our technological edge. And that’s exactly what we’re trying to do with this host of new generations of technologies that look very different, that evolve much more quickly, and that involve different stakeholders than some of the previous generations of technology that have been core to our technological edge.

This is an area that we at NATO take very seriously as well. But instead of foreign versus domestic, I think one thing that we really see is that technology doesn’t necessarily respect national lines. A lot of the technological advancement is based on deep collaboration between scientists and researchers who operate across national borders.

The same is true for the proliferation and diffusion of technology, which has immense benefits as well as visible risks. There are some hard tools, which Edward has alluded to, that are more national in nature, like looking at the choke points in technological production lines where export controls will be able to work effectively as a national tool, or as one that you can coordinate amongst nations. If we think of hard law versus soft law, the hard tools in our toolbox are things like export controls and sanctions. At NATO, we’re really focused on creating the soft tools as well. And if I can go as far as to mix metaphors: there are carrots, and there are sticks. We’re also looking at ways that we can incentivise more trusted collaboration.

Because it’s true, like Edward said: if you’re a startup and you have an incredible product but maybe a pretty new business model, it’s really hard to even know the right kinds of questions to ask. If I work with this partner, is that going to lead to some kind of technology transfer that’s actually going to make me lose control of my business, or that’s going to foreclose some of my business opportunities, because I won’t be able to work in this market if I’m too aligned with other markets that don’t align with our values and where we want to see the technology go? Or, from a more concrete perspective, if you’re interested in working in government-driven markets, are you foreclosing some future business opportunities because your technology no longer resides within the Alliance or within the like-minded countries where you would have the ability to be part of government-driven supply chains?

So the way that we tackle this is by looking at ways to sensitise startups especially, as well as venture capitalists, to potentially adversarial sources of capital, and at what kinds of tools different actors are using to transfer technology through licit or illicit means, and to make that part of the business model early, so that as you’re maturing your technology and your company, you know what kinds of things to look out for. We also ask how we can create more market alternatives that work with trusted or allied sources of capital, so that we don’t just say ‘forgo that other opportunity with another country, because the theoretical opportunities down the road here are going to be better’, but actually create those opportunities, so companies don’t have to go to a potentially adversarial source in the first place. These are some of the kinds of initiatives that are really embedded in the way that we’re building up DIANA.

So again, this is our accelerator: if you’re accepted into the program to continue advancing a technology that responds to a defence and security challenge, you’ll also have access to mentorship and education that includes focusing on exactly these kinds of areas. And it’s also really crucial to the NATO Innovation Fund, not only because the Fund will literally be one of those market alternatives, a source of capital to keep some of those cutting-edge technologies in the Alliance, but also because it will work with private sources of capital to crowd more capital in, using its billion euros as a market signal that can create and sustain more of that ability to have resilient technology pipelines here at home. So it’s a little bit different from the traditional tools like sanctions and export controls. Those are absolutely vital tools as well. But I think there are also tools that work well across national boundaries and cut through some of this foreign-versus-domestic lens through which we sometimes still think about these questions.

Bishop:

I just wanted to add one piece to what Zoe said about market alternatives that I think is really important. Eric Schmidt said some time ago, speaking of the AI race, that the United States was beginning a step back from the starting line, because we were going up against competitors like China and Russia who don’t value privacy or some of the constitutional rights that we have here in the United States in the same way that we do. So they’re going to be able to develop more quickly, specifically around vision AI, but also facial recognition and AI more broadly, because they’re not going to have the same respect for human rights and personal privacy that we have here.

So the key aspect we’re talking about is a constant battle to remind everyone that we have the ability, the expertise and the knowledge to develop these types of technologies in an ethical manner here in the United States and in Western society broadly. It’s really incumbent on us to do everything we can to continue to do so in an ethical manner, so that when we do find ourselves facing near-peer competitors, we’re able to do so in a thoughtful manner, and meet them, potentially on a battlefield, with these types of technologies that are going to be competitive.

We have to navigate concerns around OFAC [the U.S. Office of Foreign Assets Control], import/export rules and ongoing sanctions, and ensure that the business model we’re engaging in and the business that we’re doing is ethically driven at the end of the day, that we’re really focused on upholding our own values in doing it. It is definitely possible, but it does make for a very dynamic space.

Edward:

Personally, I’m a great believer in the principles of responsible use, or principles of ethical use, actually helping us forge ahead better than our strategic competitors. I’m thinking especially of compliance with international humanitarian law. If you think of battlefield systems, the whole point of using better technology is that you’re going to be more accurate and more successful at completing the mission, with less collateral damage. And we can actually see right now that the horrible war we have in Ukraine is to a degree already a war of precision against mass: the Ukrainians are more precise, fighting in a more Western style with more and more Western systems, against the traditional mass artillery, the ‘destroy everything, smash everything up’ approach of the Russians. And the Ukrainians are doing rather well, thank God.

This says something about why having the technological edge is important and why it’s valuable. And deploying technologies like AI and autonomy on the battlefield doesn’t at all need to lead to the sort of dystopian vision of killer robots that some in our civil society have been, understandably, very afraid of. The point is greater precision, greater accuracy and sticking to the absolutely essential principles of international humanitarian law, the laws of warfare if you like, which effectively means fighting better. I don’t want to say the word ‘cleaner’, because it could be misinterpreted. But what I mean is that, at the end of the day, you take out legitimate enemy targets, and that’s it; that’s where it ends. And if you can do that with higher precision and more intelligence and smartness, then you’re going to do better militarily.

That’s the battlefield side of it, but I also wanted to speak to the issue of our values. Obviously, sometimes there are trade-offs. But I think pursuing and sticking to our values can also be compatible with being competitive and doing well technologically.

Alice:

That’s a really important point and I fully agree: sticking to our values, also in designing and using new technologies, can actually be competitive.

In security circles, it’s common knowledge that hybrid threats require a so-called ‘whole of society’ response. Now, are governments and alliances involving civil society in their efforts to increase resilience against these new and emerging technologies that might be used against us?

For example, in the design phase, is there a multi-stakeholder approach where not only government and the private sector are involved, but also the civil sector, like philosophers of technology, or anthropologists or digital rights experts? [Read more about co-designing technology here.]

Bishop:

I call it the trust triangle. I think it’s really important: you have industry at one corner, government at another, and society at the third, and what you’re trying to do ultimately is bring them all closer together and make the smallest triangle possible, because in the middle of that triangle you have your trust gap. And right now, specifically with vision AI, we broadly have a trust gap between those three entities. When I break down society’s corner a bit, for me that’s academia, grass-tops leaders, local elected officials, advocacy groups, think-tanks, and a lot of the very engaged stakeholders that have a true interest in ensuring that the organisations who build this type of software, and who are building out these products, are doing so in an ethical manner. The trust gap is never going to go away; we’re always going to have some level of distrust. But it’s important for us to do what we can to come to the table as honest brokers.

Zoe:

I’m going to jump in here and fully endorse everything that Bishop has just said. And, just from the public sector perspective, the onus is on us as well to be a trusted partner in the very same vein.

As a case study in trying to shrink this triangle, I’ve mentioned our Data and AI Review Board [DARB]. Very concretely, we saw it as important to include civil society in this network of experts working on operationalising the principles of responsible use. Every nation has an appointed representative at the table, and they’re explicitly invited to appoint a civil society representative if they want to. That’s really different from the way NATO usually operates, where the rep would usually come from the government; here it can be government, academia, the private sector, or civil society. Keeping civil society as part of these conversations is really important, not only for accountability, but also because they’re often the ones who can help us anticipate the ways we need to adapt to change, and give us the stress test that’s ultimately going to make us more effective.

Also, on multidisciplinarity: when you were asking the question, Alice, I went to the NATO website for the Data and AI Review Board. You can see there that, for this new board, allies will be appointing representatives from computer science, data engineering, the humanities, law, philosophy and the social sciences, among other backgrounds. I think that multidisciplinarity is also going to be really crucial in shrinking that triangle. So we’re really excited to be taking this work forward next year; the first meeting of the new board, with all of these multidisciplinary individuals, is going to take place in the first quarter. I hope that we’ll have another chance to come back and circle back to this conversation and see how it plays out in practice.

Alice:

Well, this really sounds great, and I’m happy that civil society and social scientists and philosophers and whatnot are all invited and will, hopefully, be appointed by national governments to help on this Board.

Edward, is there anything you would like to add?

Edward:

Just very briefly, I’m extremely positive about multidisciplinarity in all these efforts; I think it is extremely important. And I just wanted to highlight one issue, which is that we’re coming out of a period of, let’s say, a more benign kind of globalisation, and we’re now obviously in a much more competitive international environment. In Western Europe in particular, more than in the United States for all sorts of reasons, civil societies have probably got used to the notion that their principal role is to be a countervailing power, a countervailing force, against risks that may emanate from their own government or from excessive corporate power.

But our societies might not be fully reconciled with the notion that we also have external adversaries, hostile great powers out there that mean harm. That is a discussion that needs to be had, so that we can have a civil society that is able to do the things it traditionally cares about, which is to protect vulnerable groups within our own societies from forces within our own societies, but also with a good understanding and awareness that we have external enemies, or external adversaries, or potential enemies as well.

And I just want to say, as a concluding note, that one civil society that has impressed me a lot recently is the Ukrainian one, which has been really quite extraordinary in facing down all sorts of hybrid threats under wartime conditions. I will also make an honorary mention of the civil societies of the Nordic and Baltic nations, which are, I would say, quite coherent and quite able to generate a sense that societies need to be resilient with respect to external attacks as well. I think we have a lot to learn from those colleagues.

Alice:

Absolutely agree, Edward. And this is what Defend Democracy is trying to do: to take a coherent approach to these foreign, domestic and technological threats to democracy, and to work on increasing societal resilience. That’s why we’re doing this series, Radio Resilience.


Stay tuned for the next ‘Radio Resilience’ via our social media. Did you enjoy this episode? Please take a moment for a survey from NATO’s Public Diplomacy Division.