How are AI technologies revolutionizing cyberwarfare, kinetic warfare, and information warfare?

Ryusei Best Hayashi
May 10, 2023


Photo by Thierry K on Unsplash

I wrote the following piece for my PhD class at UC Berkeley with Professor Andrew W. Reddie.

In your view, what are the international security risks posed by AI technologies? How might these risks be mitigated?

In 2015, Artificial Intelligence (AI) systems for recognizing handwriting and images surpassed average human performance on standardized tests. By 2020, AI had come to surpass mean human performance in speech recognition, reading comprehension, language understanding, predictive medical diagnosis, and vehicle operation and coordination, among other tasks (Roser 2022). Recent developments in Generative Adversarial Networks (GANs) and Large Language Models (LLMs) are opening countless opportunities and risks for military applications, including using ChatGPT to find and exploit cyber vulnerabilities. Nevertheless, technical researchers are still debating the definition of AI (Wang 2019), while international relations scholars are attempting to make recommendations to policymakers without adequately classifying what counts as AI and what doesn't, or articulating where the technology is headed (Horowitz et al. 2019; Jensen et al. 2019; Johnson 2020; Johnson 2020; Johnson 2021; Horowitz et al. 2022). The consequence is that we fail to (1) anticipate how exactly AI is and will be weaponized, (2) understand its impacts and risks for international security, and (3) lay out comprehensive risk mitigation strategies. To narrow these gaps in the field, I decided to tackle the question: what are the international security risks posed by AI technologies, and how might these risks be mitigated?

In this paper, I explore the ways in which the latest AI technologies are exacerbating and driving international security risks, as well as ways to mitigate those risks, focusing on 1) cyberwarfare and unsupervised, adaptive cyberattacks, 2) conventional warfare and intelligent lethal autonomous weapons, and 3) information warfare and generative AI. For each AI application, I first break down the unique capabilities AI brings, then tackle the specific international security risks stemming from those capabilities, and end by discussing how to mitigate those risks. I argue that AI poses the greatest immediate international security risks in information warfare, where generative AI increases the likelihood of destabilizing non-state actors rising and breeds distrust that aggravates the security dilemma; then in cyberwarfare, where AI can result in an upsurge of escalatory cyberattacks; and lastly in kinetic warfare, where LAWS can spark prestige tech arms races.

INTERNATIONAL SECURITY AND AI

The systemic effect of AI used to be simply an exacerbation of international security risks arising from existing conventional, intelligence, and nuclear capabilities, but the latest AI technologies bring transformative capabilities that could end up driving international security risks themselves. International security refers to the property of the international system whereby the preservation and revamping of the status quo arrangement can reasonably be expected, not only in the actual world but also within some subset of possible worlds (Herington 2012). Because I am exploring the risks of AI weaponization, I take international security to mean the preservation and revamping of stability in the balance of power, the likelihood of war, and the offense-defense balance. According to Wang (2019), AI has two defining characteristics. First, AI can operate in an environment of insufficient knowledge and resources. Second, AI learns and adapts its outputs, whether predictions or classifications. Both machine learning (ML), ranging from LASSO regression to gradient boosted trees, and deep learning (DL), which uses artificial neural networks, embody these two characteristics; the key distinction is that DL scales more easily to models with very large numbers of trainable parameters, enabling it to process objects like images or text.
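
To make the ML versus DL distinction concrete, here is a minimal sketch (a purely illustrative example assuming scikit-learn is available; the synthetic dataset and model sizes are my own choices, not drawn from any cited study) that fits a gradient boosted tree classifier and a small neural network on the same tabular data:

```python
# Minimal, illustrative comparison of a "classic" ML model (gradient boosted
# trees) and a small deep learning model (multi-layer perceptron) on the same
# tabular classification task. Dataset and hyperparameters are arbitrary.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic tabular data: 2,000 samples, 20 numeric features, binary label.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Gradient boosted trees: strong on structured/tabular data.
gbt = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# A small neural network: same API here, but the architecture is what scales
# to high-dimensional inputs (images, text embeddings) far more naturally.
mlp = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500,
                    random_state=0).fit(X_train, y_train)

print("GBT accuracy:", gbt.score(X_test, y_test))
print("MLP accuracy:", mlp.score(X_test, y_test))
```

Both models learn and adapt their outputs from data; the difference lies in how well each architecture handles richer, higher-dimensional inputs.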

Many scholars argue that AI still has fundamental technological limitations, and that the risks it poses to international security therefore mostly come from delegating too much decision-making to AI, resulting in accidents that escalate or in manipulation by adversaries. Rebecca Slayton claims that AI fails to capture complex sociopolitical data and that slight changes in circumstances render the models irrelevant (Horowitz et al. 2022). King (2023) posits that AI is best for "simple decisions with perfect data", and that it is limited by the problem of collecting large enough sets of the right data and by the difficulty of getting the bureaucratic state and tech companies to cooperate. Jensen et al. (2020) argue that the impact of AI is limited by organizational adoption, especially the complexity of managing large-scale information system architectures and changing entrenched military doctrines, and they also contend that interstate war has historically been the result of political rather than technological factors. The shortcoming of these arguments is that they assume AI weaponization will target general-purpose tasks resembling a human decision-maker. In reality, weaponized AI is more of a specialized tool that follows human orders: it does not require capturing vague behavioral signals, the required training data is fairly easy to collect, and its organizational adoption needs are lower than envisioned because the applications target narrow domains and objectives. The consequence is that these scholars fail to see the orders-of-magnitude improvement in effectiveness that AI enables (instead of building Stuxnet in 3–6 years you could build it in 3–6 weeks, and far less once you have a pretrained model), and they therefore have a hard time anticipating its systemic effects on the cybersecurity offense-defense balance, the frequency of cyberattacks and states' inclination to retaliate, the belligerent signals sent by states developing intelligent lethal autonomous weapons and their likelihood of triggering prestige tech arms races, and the risks of heightened information warfare and counter-warfare. To update the literature, I explore the impact and international security risks of AI in cyberwarfare, kinetic warfare, and information warfare.

CYBERWARFARE AND UNSUPERVISED AND ADAPTIVE CYBERATTACKS

AI-enhanced cyberattacks are different from traditional cyberattacks. AI can be integrated into cyberattacks to provide them with autonomy at rest and autonomy in motion. For autonomy at rest, AI can be used to find zero-day vulnerabilities in source code, in the network, or in the information system infrastructure, and it can be used to write exploits with LLMs in an unsupervised manner, which enables real-time, instantaneous cyberattacks. For autonomy in motion, AI can equip malware with adaptation capabilities for situational evasion and infiltration — such as IBM's DeepLocker proof of concept, which uses deep convolutional neural networks to conceal its payload and derive unlocking keys on the move, or polymorphic malware that adapts its code to avoid detection — as well as enable personalized phishing attacks that impersonate political decision-makers via fine-tuned LLMs, allowing cyberattacks to be deployed in operations with imperfect information and in a cost-effective manner. It is worth noting, however, that autonomy in motion also has serious practical limitations, most notably that equipping malware with AI can compromise its cover: it must either store the large files of a fine-tuned model and consume sizeable computational power if it is to retrain itself in the middle of a cyberattack, or call an external API endpoint. Still, the unsupervised and adaptive characteristics set these attacks apart from traditional cyberattacks for three reasons. First, they are harder to defend against, because the cyberattack can run around the clock and the malware is more difficult to detect and remove due to its adaptability. Second, they reduce the cost of offense in the long term, as it is cheaper to invest in maintaining AI training cycles and servers than to employ humans. Third, they can undertake more complex missions and have higher barriers to entry, so the more likely perpetrators are state rather than non-state actors. These unique capabilities affect the offense-defense balance.

AI risks destabilizing international security through an upsurge of cyberattacks and states' willingness to retaliate. Slayton (2017) found that the three largest costs of Stuxnet for the US and Israel from 2007 to 2010 were intelligence gathering ($260 million), infrastructure ($40 million), and the labor required to develop the worm ($7 million), compared to Iran's roughly $14 million in total defensive expenses. Offense is expected to be costlier than defense: an attacker has to move resources over longer distances and overwhelm defensive infrastructure to break through, with scholars like Glaser and Kaufmann (1998) putting the conventional offense-defense ratio at around 3:1. But because the cyber ratio was far higher than that, Slayton concluded that in large-scale, symmetrical cyber operations defense was dominant. However, AI risks tilting the offense-defense balance in favor of aggressors by reducing the relative cost of cyberattacks compared to defense. AI reduces the cost of intelligence gathering by automating the search for vulnerabilities and lowering the capability threshold required, because the malware can adapt itself in motion. It also reduces the cost of malware development, because AI can write exploits independently. More importantly, AI can shrink the timespan of a cyberattack from years to weeks, further reducing all expenses. Even though defense can also integrate AI to anticipate zero-day vulnerabilities, automate anomaly detection, and to an extent write code for removing known malware, it is difficult to implement defensive autonomy in motion for self-patching, so fixing the system likely still requires humans in the loop, which increases costs. For cyberattacks, AI enables economies of scale by distributing infrastructure costs across multiple cyberattacks, while the cost of defense increases monotonically when facing multiple cyberattacks.
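
To make the ratio concrete, here is a quick back-of-the-envelope calculation using only the Stuxnet figures cited above; the "AI-assisted" numbers at the end are purely hypothetical assumptions for illustration, not estimates from Slayton or any other source:

```python
# Offense-defense cost ratio implied by the Stuxnet figures cited above.
offense_costs = {
    "intelligence_gathering": 260_000_000,  # USD, per Slayton's estimates
    "infrastructure": 40_000_000,
    "worm_development_labor": 7_000_000,
}
defense_cost = 14_000_000  # Iran's estimated total defensive expenses

total_offense = sum(offense_costs.values())
print(f"Stuxnet offense/defense cost ratio: {total_offense / defense_cost:.1f} : 1")  # ~21.9 : 1
print("Conventional rule of thumb (Glaser & Kaufmann): 3 : 1")

# Purely hypothetical: if AI automation cut intelligence-gathering and
# development costs by, say, 90%, the ratio would fall sharply toward parity.
ai_offense = 0.1 * (offense_costs["intelligence_gathering"]
                    + offense_costs["worm_development_labor"]) \
             + offense_costs["infrastructure"]
print(f"Hypothetical AI-assisted ratio: {ai_offense / defense_cost:.1f} : 1")  # ~4.8 : 1
```

The exact percentage is an assumption; the point is simply that even modest automation gains compress a roughly 22:1 cost disadvantage toward the conventional 3:1 benchmark.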

There are three international security implications of AI cyberattacks. First, by bringing down the offense-defense ratio, we can expect a normalization of cyberwarfare as a means of fighting, increasing the frequency of state-sponsored cyberattacks and threats of cyberattacks. Second, because of the scalability advantages of cyberattacks, states are incentivized to carry out multiple simultaneous aggressions, which increases the risk of escalation from limited conflicts to extensive cross-domain warfare. Third, because of the increased lethality of AI cyberattacks, states are likely to have less tolerance for them, increasing their propensity to retaliate and their distrust of other states, given the difficulty of attribution and accountability in cyberspace. The combination of these creates the risk of uncontrolled cyberwarfare that can lead to total war, because there are no behavioral, historical, or legal frameworks for bounding cyberwarfare, just as the revolution of combined arms warfare and military conscription led states into unanticipated total war in the US Civil War and World War I.

A combination of neorealist and technological approaches is the soundest option for mitigating the risk of escalatory cyberwarfare. Neorealists like Waltz argue that in an anarchic system, military power ultimately dictates international outcomes. Because cybersecurity is a new and quickly evolving field of warfare, it lacks a robust framework of international organizations and shared norms, which undermines the short-term practicality of liberal and constructivist solutions. Neorealists offer two main cyber risk mitigation strategies. For offensive realists like Mearsheimer, the best way to stabilize the system is for great powers to build AI-powered cyber capabilities so they can all credibly threaten each other with lethal first-strike and second-strike cyberattacks, producing deterrence that offsets the initial incentives to attack because, in the end, all states would be left worse off with minimal gains. Defensive realists like Jack Snyder would also emphasize the risk of destabilization before deterrence is reached: first-movers might perceive windows of opportunity to strike while other states still lack AI cyber capabilities, leading to escalatory cyberwarfare. They would also point to the risk of emerging powers and rogue states developing AI cyber capabilities to challenge the status quo and assert their place or survival in the international order, which could add fuel to existing tensions while hindering avenues for resolution, since cyber capabilities create distrust due to the difficulty of verifying software capabilities, risking a fragmentation of the international community into isolated blocs. Therefore, the risk mitigation strategy should also include signaling to differentiate between offensive and defensive cyber capabilities, helping political decision-makers estimate cyber offense-defense balances so they have more room for discussion and negotiation to resolve conflicts at the table rather than on the battlefield.

Because cyber capabilities are software, unlike hardware-based conventional and nuclear weapons, they can endlessly shift forms, so technology itself is central to risk mitigation. The two competing technological risk mitigation strategies are information centralization and decentralization. Centralization is beneficial because it tightens the state's control over cyber capabilities, which makes it easier to attribute a cyberattack to a specific state since there are fewer players, stabilizing international security by reducing suspicion and paranoia; but it incentivizes large-scale cyber offensives because that becomes the only way to take down opponents. Decentralization is beneficial because it is easier to expand, shortening the windows of opportunity that arise as states adopt AI cyber capabilities, but it risks involving non-state actors, which can lead to unwanted technology proliferation and non-state-sponsored cyberattacks. Further research is needed to narrow down the optimal technological risk mitigation strategy.

KINETIC WARFARE AND INTELLIGENT LETHAL AUTONOMOUS WEAPONS

The latest AI developments provide a few new features for lethal autonomous weapon systems (LAWS), but they do not change the current international security equation. Two new AI technologies, multimodal AI and distributed computation networks, enhance LAWS with more effective autonomy at rest and autonomy in motion. Multimodal AI, the integration of AI systems that process different kinds of data, allows LAWS to take in images, speech, sounds, language, text, chemical compositions, and electromagnetic signals to make better threat probability predictions before engaging in kill decisions, to anticipate attacks, and to plan movements and optimal routes. Distributed computation networks, through platforms like Anyscale, make the training of artificial neural networks far more efficient, facilitating real-time multi-agent coordination between LAWS. This was previously difficult because adding a drone to a swarm, removing one, or navigating rough terrain, for instance, required fine-tuning or retraining a model, which could easily take more than 24 hours, so it was mostly only practical to undertake one-time-use drone swarm operations. However, the key value proposition of LAWS is that they are unmanned — you can attack, defend, or threaten without risking your people's lives — so the new AI technologies only serve to make LAWS practical, and the present-day international security risks remain mostly the same. It is also worth noting that I did not define LAWS because there is still no consensus on what distinguishes LAWS from other unmanned weapon systems (are laser-guided missiles LAWS?), and in my view we still have not seen a clear battlefield application of LAWS, as most can only be deployed individually or in small groups rather than en masse, which shaped my exploration of the risks.
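
As a purely generic illustration of what distributed computation platforms of this kind provide, the sketch below assumes the open-source Ray library (which underpins Anyscale) is installed; the workload is an arbitrary placeholder function with no connection to any weapons application, and it simply shows how a slow, training-style task can be fanned out across workers instead of run serially:

```python
# Generic sketch of distributed computation with Ray: fan a slow task out
# across local worker processes rather than running it serially.
import time
import ray

ray.init()  # starts a local Ray cluster; on a managed platform this is remote

@ray.remote
def slow_task(task_id: int) -> float:
    """Placeholder for an expensive step such as one model-tuning run."""
    time.sleep(1)          # simulate work
    return task_id * 0.5   # dummy result

start = time.time()
# Launch 8 tasks in parallel and gather their results.
results = ray.get([slow_task.remote(i) for i in range(8)])
print(results, f"elapsed: {time.time() - start:.1f}s")  # well under 8 seconds
ray.shutdown()
```

The same fan-out pattern is what shrinks retraining and fine-tuning cycles from many hours to something closer to real time.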

Intelligent LAWS risk destabilizing international security mostly by fueling a prestige tech arms race and by sparking or escalating wars through overconfidence in offensive capabilities. The diffusion of Intelligent LAWS demands high financial capital, because hardware development across electrical engineering, robotics, and synthetic biology is capital intensive, and each research and development iteration of AI for multi-agent coordination and for processing multimodal data from a variety of sensors is expensive. It also demands high organizational capital, because states need not only to train military officers in AI, but also to establish digitalized institutional frameworks for managing machine-to-machine coordination and data stream systems that feed LAWS the information they need for decision-making, while implementing processes to protect those information architectures from cyberattacks such as data poisoning. Because of these high barriers to entry, we can expect only a few state adopters of Intelligent LAWS, making it a prestige technology. To reinforce their status as great powers, states such as the US and China will be incentivized to invest in Intelligent LAWS to remind other states that they are the leaders of the world. France, the UK, Russia, Japan, India, Germany, and Israel are also likely to invest in Intelligent LAWS, wanting to show their allies, adversaries, and the world that they are consequential players too and that their interests deserve to be heard and respected. Unlike carrier warfare, though, Intelligent LAWS, I argue, have yet to demonstrate practical battlefield applications that generate credible security threats to neighboring states, as multi-agent coordination technology has not yet materialized; the result will therefore be a prestige-driven tech arms race. The other international security risk is that the development of Intelligent LAWS can start wars or escalate ongoing ones, for two main reasons. First, militaries and political decision-makers can grow overconfident in the offense because of circulating technologically deterministic paradigms and novelty-induced biases. Second, Intelligent LAWS are still experimental, and the first battlefield deployments have a high likelihood of causing unintended damage, including casualties among children and unarmed civilians, that can escalate ongoing wars. Technologically, it may in the future also become possible to take hostile control of entire LAWS fleets via cyberattacks, which is another risk worth taking into account.

A combination of liberal and constructivist approaches is the soundest option for mitigating the risk of prestige tech arms races and their consequences for war. The advantage of dealing with developing technologies is that there are greater opportunities to rely on preventive institutions and norms to regulate state behavior, rather than on realists' power-based measures. Liberals would suggest employing international treaties and international organizations as mediums for mediating and controlling the prestige tech arms race and the deployment of risky Intelligent LAWS. Constructivists argue that norms of conduct are socially constructed, so they would recommend shaping shared norms around Intelligent LAWS to minimize the risk of unwanted routes to war escalation through dialogue, in a similar spirit to the Reykjavík Summit between President Reagan and General Secretary Gorbachev. Ultimately, Horowitz (2019) argues that LAWS have a relatively modest effect on interstate war, as states more often than not choose to go to war and engage in arms races for political rather than technological reasons, so, unlike the structural incentives driving AI cyberwarfare, Intelligent LAWS by themselves pose little risk to international security.

INFORMATION WARFARE AND GENERATIVE AI

Generative AI drastically transforms information warfare. Generative Adversarial Networks (GANs) and Generative Pre-trained Transformers (GPTs) allow AI systems to generate content one building block at a time, in GPTs' case by predicting the most likely next token in response to a prompt, and those building blocks assembled en masse form objects that humans interpret as text, images, or music. The unique capability underpinning generative AI is its ability to create content that closely resembles reality as humans perceive it, which can "fool" human cognition into believing AI-generated content is true when it may have been entirely fabricated. Three main domains of generative AI impact information warfare. First, it can automatically compose documents in human language by stringing together character tokens, which, coupled with the transformer's contextual "memory", allows the AI to hold contextually relevant conversations with humans. Second, it can independently draw images and create videos that fit the intent of a given prompt, using diffusion models pre-trained on hundreds of millions of labeled images to learn semantic concepts like color or style. Third, it can clone existing human voices and create new ones, along with realistic sounds, by predicting the most likely next arrangement of acoustic tokens given the input or prompt, which is then synthesized into a final waveform. The implication of fusing these capabilities is that actors can create content that catches the attention of its audience, evokes feelings, shapes the audience's knowledge systems, interacts with the audience in human-like ways, and persuades people to take particular actions.
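
As a minimal illustration of the next-token mechanism described above, here is a sketch assuming the Hugging Face transformers library and the small public GPT-2 model; the prompt is arbitrary, and the example shows only how text is generated token by token, nothing more:

```python
# Minimal sketch of autoregressive text generation: GPT-2 repeatedly predicts
# the most likely next token and appends it to the running text.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "The security implications of artificial intelligence are"
output = generator(prompt, max_new_tokens=25, do_sample=False)

# The model extends the prompt one token at a time until the length limit.
print(output[0]["generated_text"])
```

Scaled up from this toy example to large fine-tuned models, the same mechanism is what makes mass-produced, audience-tailored content cheap.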

Generative AI risks destabilizing international security by altering the structural conditions that fuel information warfare, which can aggravate the security dilemma and erode the cohesion of the international order. With the rise of mass media, first with the printing press, then the radio, and most recently the internet, it became essential for states to ensure their survival by holding a monopoly not only on violence but also on the flow of information, domestically and internationally. As the range and speed of human communication increased, it became easier to hold decision-makers accountable: negative domestic opinion can lead to unrest and armed clashes, while states gain warmongering or oppressor reputations for international incursions and interventions, which can hinder international cooperation or even lead other states to balance against the one with a negative image. Despite notable past examples of foreign campaigns to alter public opinion through the spread of information and propaganda, information warfare proper only arose with the widespread use of the internet, once the time needed to carry out information attacks and counterattacks became nearly instantaneous.

Generative AI brings three key international security implications. First, polarization campaigns targeted at civilian populations increase unrest and decrease trust in government institutions, which facilitates the emergence of non-state actors, from human rights activists to terrorists, that disrupt discourse and relations between states, making it harder to find stabilizing measures in moments of crisis and increasing the likelihood of escalation. Second, misinformation attacks targeted at disrupting military communication can increase the fallibility of consequential weapon systems such as nuclear weapons. Johnson (2021) contends that false-flag cyber operations can be employed to mislead nuclear command and control or political decision-makers into "pressing the button": imagine a nuclear submarine receiving an AI-generated video of a nuclear attack coupled with a written or spoken order to retaliate from an AI-generated authority figure. On one hand this increases the likelihood of nuclear accidents; it also causes internal trust issues that weaken the functionality of the military, and it becomes harder to trust other states out of fear that their security promises are not reliable. This undermines international security because greater uncertainty increases the likelihood of misperceptions about others' intentions and capabilities, feeding dangerous security dilemmas as states become more prone to assume the worst, which can cause war. Generative AI also makes it harder to signal benign intentions to adversaries, especially as the line between real and fake content becomes increasingly blurred, further aggravating the security dilemma. Third, generative AI significantly reduces the cost of carrying out offensives, while current defensive AI models for detecting real versus fake content are not yet effective enough and will be costly to develop, which tilts the offense-defense balance in favor of aggression in the short term and incentivizes states to wage more information warfare while the window of opportunity remains open. The willingness to retaliate will vary depending on whether the target is civilians or military communications; and even though studies like Kreps and Schneider (2019) suggest that in most non-kinetic instances people opt not to retaliate, it is too soon to tell, and further research and close monitoring of generative AI's impact on information warfare should be carried out.

A technological approach is the soundest option for mitigating the risks that generative AI poses to information warfare. The key is to control the deployment platforms, hindering the ability to target AI-powered information campaigns at military communications and bureaucratic institutions, and to restrict, directly or through incentives, the initial submission and proliferation of damaging AI-generated content aimed at civilians on social media. International treaties are not a reliable solution for generative AI because they would require at least yearly revisions to keep pace with the technology, which is expensive and not very productive. International organizations might be a beneficial added layer in the longer term, but they take too long to set up their processes in the short term, so I would not rely on them to mitigate today's risks. Perhaps the optimal risk mitigation strategy is for states to agree not to use generative AI for information warfare, similar to how chemical and biological warfare were banned, so that states can cooperate to deal with non-state actors, which would be stabilizing for the system even if it conflicts with some state-level interests. Sometimes, though, states have to forgo selfish interests that leave everyone worse off, for the good of all.

LOOKING AHEAD

In conclusion, the latest AI technologies are transforming cyber, kinetic, and information warfare in ways that pose significant risks for international security, and scholarly circles may be greatly underestimating AI's impact. AI risks destabilizing international security because it incentivizes states to carry out escalatory cyberattacks, it pushes great powers into prestige tech arms races over Intelligent LAWS that fuel tensions, and it breaks down the cohesion of the international system through generative AI information warfare, increasing uncertainty and reducing trust in ways that aggravate security dilemmas. However, it is also worth considering why states engage with one another in the first place: Horowitz (2019) contends that politics, rather than the mere advantages of new technologies, drives states to war, and Jensen et al. (2020) point out that AI only becomes a danger if organizations are able to adopt it, with entrenched military doctrines remaining a key limitation. Future research should examine how the latest AI technologies can be weaponized in other domains, namely nuclear weapons, to explore how AI can break down or enhance deterrence models. Ultimately, AI's long-awaited revolution has arrived, with all its promises and dangers, and there is still time to put in place practices that mitigate unintended catalytic risks.

Bibliography

Herington, Jonathan. “Philosophy: The Concepts of Security, Fear, Liberty, and the State.” Security, 24 Dec. 2015, pp. 22–44, https://doi.org/10.1017/cbo9781316227671.002. Accessed 6 May 2023.

Herington, Jonathan Charles Carter. “The Concept of Security: Uncertainty, Evidence and Value.” The Australian National University, The Australian National University, 2012.

Horowitz, Michael C., et al. “A Stable Nuclear Future? The Impact of Autonomous Systems and Artificial Intelligence.” ArXiv abs/1912.05291 (2019), https://doi.org/10.48550/arXiv.1912.05291. Accessed 6 May 2023.

Horowitz, Michael C., et al. “Policy Roundtable: Artificial Intelligence and International Security.” Texas National Security Review, 24 Aug. 2022, https://tnsr.org/roundtable/policy-roundtable-artificial-intelligence-and-international-security/. Accessed 6 May 2023.

Horowitz, Michael C. “When Speed Kills: Lethal Autonomous Weapon Systems, Deterrence and Stability.” Journal of Strategic Studies, vol. 42, no. 6, 22 Aug. 2019, pp. 764–788, https://doi.org/10.1080/01402390.2019.1621174. Accessed 8 May 2023.

Glaser, Charles L., and Chaim Kaufmann. “What Is the Offense-Defense Balance and Can We Measure It?” International Security, vol. 22, no. 4, 1998, pp. 44–82. JSTOR, https://doi.org/10.2307/2539240. Accessed 9 May 2023.

King, Anthony. “AI At War.” War on the Rocks, 27 Apr. 2023, warontherocks.com/2023/04/ai-at-war/. Accessed 8 May 2023.

Kreps, Sarah, and Jacquelyn Schneider. “Escalation Firebreaks in the Cyber, Conventional, and Nuclear Domains: Moving beyond Effects-Based Logics.” Journal of Cybersecurity, vol. 5, no. 1, 29 Sept. 2019, https://doi.org/10.1093/cybsec/tyz007. Accessed 6 May 2023.

Jensen, Benjamin M., et al. “Algorithms at War: The Promise, Peril, and Limits of Artificial Intelligence.” International Studies Review, vol. 22, no. 3, 24 June 2019, pp. 526–550, https://doi.org/10.1093/isr/viz025. Accessed 6 May 2023.

Johnson, James. “Artificial Intelligence in Nuclear Warfare: A Perfect Storm of Instability?” The Washington Quarterly, vol. 43, no. 2, 2 June 2020, pp. 197–211, https://doi.org/10.1080/0163660x.2020.1770968. Accessed 6 May 2023.

Johnson, James. “‘Catalytic Nuclear War’ in the Age of Artificial Intelligence & Autonomy: Emerging Military Technology and Escalation Risk between Nuclear-Armed States.” Journal of Strategic Studies, 13 Jan. 2021, pp. 1–41, https://doi.org/10.1080/01402390.2020.1867541. Accessed 6 May 2023.

Johnson, James. “Delegating Strategic Decision-Making to Machines: Dr. Strangelove Redux?” Journal of Strategic Studies, vol. 45, no. 3, 30 Apr. 2020, pp. 439–477, https://doi.org/10.1080/01402390.2020.1759038. Accessed 6 May 2023.

Roser, Max. “The Brief History of Artificial Intelligence: The World Has Changed Fast — What Might Be next?” Our World in Data, 6 Dec. 2022, https://ourworldindata.org/brief-history-of-ai. Accessed 7 May 2023.

Slayton, Rebecca. “What Is the Cyber Offense-Defense Balance? Conceptions, Causes, and Assessment.” International Security, vol. 41, no. 3, 1 Jan. 2017, pp. 72–109, https://doi.org/10.1162/isec_a_00267. Accessed 7 May 2023.

Wang, Pei. “On Defining Artificial Intelligence.” Journal of Artificial General Intelligence, vol. 10, no. 2, 1 Aug. 2019, pp. 1–37, https://doi.org/10.2478/jagi-2019-0002. Accessed 27 Apr. 2023.

