On many online forums and social media sites today there are artificial accounts known as "bots." These bots perform a range of functions, from posting the same image every hour to contributing racist banter to online threads. To this point, little definitive work has been done to prevent or regulate their use. In this paper I will argue that a Kantian approach exposes the flaws and deception of computer-generated accounts on social media platforms, and shows why they should no longer be allowed in any form. This argument rests on three grounds: bots violate an individual's effective ability to ask, "what ought I do?"; they take advantage of individuals in a way those individuals cannot consent to; and they contribute to a platform despite having no access to a human reality. Bots have been an especially hot topic in American society since speculation arose around their use by Russia in the 2016 United States Presidential election. The attention that election brought to bots has begun to unravel the true depths of their use, as initiatives to track and regulate them have grown and as legislators and the online community begin to recognize the threat they pose to an independently thinking society. As more attention comes to this issue, it seems the negatives outweigh the positives, and that continuing the use of bots is not in the best interest of a progressing society.
The biggest issue posed by social media bots on platforms intended for human interaction is that they prevent anyone who interacts with them from effectively asking themselves, "what ought I do?", a question crucial to Kant's ideology. Bots undermine this question because of their tendency to spread malicious and deceptive information. In a 2016 study by Imperva Incapsula, the largest share of bot traffic came from impersonator bots, which accounted for 24.3 percent of all internet traffic (Zeifman). In total, malicious bots of any kind were responsible for 28.9 percent of all internet traffic (Zeifman). With almost a quarter of all internet traffic attributed to impersonator bots, it becomes clearer why it is so hard to establish "what ought I do?" when so much of the information an individual encounters is artificially created with the intention of affecting their thought process. An article by Joanna Burkhardt sheds more light on how easily bots can infiltrate an individual's social network: almost one fifth of Facebook users accept friend requests without confirming that they actually know the person sending the request, giving bots very good odds of reaching the feeds of countless users (Burkhardt). Once they have access, these bots can run algorithms designed to spread biased or false information to targeted audiences. This artificial input on a platform designed for human interaction and original content is unacceptable, and it puts the free will of the individuals it affects at risk. If an unknown force coerces a thought process through bias and strategic use of the resources social media platforms provide, it robs someone of a neutral environment in which to form opinions and make decisions.
Another common issue is the use of bots en masse. China's government has shown the world the threat of mass-operated online accounts with the "50 Cent Army," a collective of online bloggers paid to post in support of the government. The Chinese government officially refers to them as "Internet Commentators" and maintains a pricing model based on the length and quality of each post (Wikipedia). A study published by Harvard University internet researchers estimates that the Chinese government employs two million individuals responsible for almost 450 million posts a year (King). The content generated by this army is meant to counter anti-government sentiment on social media platforms and change the subject, distracting the online community from the issues it means to discuss. This distraction technique keeps the government's operations out of the main topics of discussion and gives the audience that would discuss them new ideas to engage with instead. The Chinese government has been able to undertake this massive operation because of its dominance in the state and the vast sums of money available to pay for a system run by human-operated accounts. With emerging advances in bot technology, private firms and possibly individuals will soon be able to replicate the "50 Cent Army" with artificial bots. Anyone with the money or the technical knowledge could then establish their own networks of misinformation and influence across social media platforms. Mass-deployed bots need not post information at all: in large quantities they can also be used to overwhelm servers and mass-report accounts. The server-flooding attacks are referred to as DDoS attacks, short for distributed denial of service.
They are used to strategically cripple webpages at critical moments and to get accounts suspended on social media platforms at times when the user most needs them. The bots achieve this by overwhelming webpages with thousands of requests a second from an army of accounts created for this single purpose; more often than not, these accounts are built to self-deactivate and eliminate the evidence. Data collected by Imperva Incapsula shows that 94.2 percent of websites in a sample of 100,000 had been targeted by some type of bot attack within just a ninety-day period, with a majority of these attacks being DDoS attacks (Zeifman). When bots attack websites and use human-generated content as a means to a malicious end, they take advantage of human users not just in a form they do not want to consent to, but in a way they cannot consent to. This misuse of persons as mere means shows, on the grounds of Kant's ideology, the immorality of using bots to carry out a cyber-attack or to sway public opinion (O'Neill).
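The flood pattern described above, thousands of requests per second from many short-lived accounts, is exactly what server-side rate limiting is designed to catch. As a rough illustration (my own sketch, not drawn from the sources cited here), a minimal sliding-window rate limiter that flags clients exceeding a request threshold might look like:

```python
import time
from collections import defaultdict, deque

# Hypothetical illustration only: flag clients whose request rate exceeds
# a threshold within a sliding time window. Real DDoS mitigation is far
# more involved (distributed sources, spoofed addresses, upstream filtering).
class SlidingWindowLimiter:
    def __init__(self, max_requests, window_seconds):
        self.max_requests = max_requests
        self.window = window_seconds
        self.hits = defaultdict(deque)  # client_id -> timestamps of recent requests

    def allow(self, client_id, now=None):
        """Return True if this request is within the limit, False if flooding."""
        now = time.monotonic() if now is None else now
        q = self.hits[client_id]
        # Discard timestamps that have aged out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.max_requests:
            return False  # over the limit: likely automated flood
        q.append(now)
        return True
```

A human browsing at a few requests per second passes; a bot firing hundreds per second trips the limit almost immediately. The threshold and window are tuning parameters, and nothing here stops a distributed attack spread across many client identities, which is what makes DDoS attacks hard to mitigate in practice.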
A third flaw of online bots, in terms of Kant's school of thought, is the mere fact that they are able to post at all. One of Kant's major points is that "we can vindicate no claims about any transcendent reality to which we have no access" (O'Neill). The quote concerns humans contemplating the existence of a God, but it is applicable here as well. When these computer-operated bots are created, they lay claim to posting on a platform designed for human-generated content despite being non-human. A recent study published jointly by Indiana University and the University of Southern California estimates that somewhere between nine and fifteen percent of accounts on Twitter alone post computer-generated content (Varol). A significant share of the social media community's information, then, is contributed by bots under false pretenses: they cannot comprehend the human experience and thought process, which gives them no right to contribute to these platforms. On these Kantian grounds, creating a system that generates such accounts to take advantage of peers is an immoral act.
Several points from Kantian Ethics can serve as a basis for critiquing the use of online bots on social media platforms. A central part of Kant's ideology is "universal law," meaning that "principles that cannot serve for a plurality of agents are to be rejected: the thought is that nothing could be a moral principle which cannot be a principle for all," as stated in the O'Neill article (O'Neill). This creates a great dilemma for the use of online bots, which cannot be claimed as a universal good. The fact that 51.8 percent of all internet traffic can be attributed to bots, and that 56.4 percent of this bot traffic comes from bots with malicious intent, shows that their use is not rooted in good will. Good will and duty form another crucial part of Kantian Ethics: they are the components by which Kant interprets actions, and they encourage actions that are positive for all agents involved (O'Neill). Because the use of bots is predominantly malicious, it fails as a universal law, and it fails to meet the requirements of an act of good will, meaning those who create and use bots are not acting out of duty. Kant also discusses the "Formula of the End in Itself," which requires that "we treat humanity in your own person or in the person of any other never simply as means but always at the same time as an end" (O'Neill). This demands a sense of respect for each person interacted with: people should not be used merely as means to an end, but always regarded as ends in themselves. When bots are used to achieve an end by deceiving other users on social media platforms, they treat other persons not as ends as well as means, but simply as means, violating Kantian Ethics.
Virtue ethics also applies to the use of online bots on platforms designed for user input, particularly when bots try to discredit the work contributed by legitimate users. Virtue requires that some sense of good come from an action, a good that would persuade the individual to act that way again; when forces work to strip that sense of good from these actions, it leads to the downfall of platforms designed to foster human interaction through virtue and self-determination (Benkler).
Kantian Ethics creates a valid basis for arguing against the use of online bots on social media platforms, and while bots are used mostly for malicious purposes and have no place on a platform built by and meant for humans, a few bots do serve useful functions. One is the Big Cases Bot designed by Brad Heath of USA Today, which automatically tweets links to case files in major federal cases across the country, keeping its followers informed without requiring them to look up the information themselves. Another popular use for bots on social media is humor; an example is the MemeGenerator Bot on Facebook, to which users send text lines and picture requests so it can generate their intended meme. While these are fun and interesting ways to add content to the internet, the bad simply outweighs the good with bots. As stated earlier, the majority of bot content is malicious, and that trend only seems to be increasing. The overall contribution of "good" bots can also be questioned, as their primary uses are outlined in the Imperva Incapsula data. That data shows that while 55.6 percent of bot traffic is malicious, only 2.3 percent of bots are dedicated to monitoring the health of websites, compared to the 23.4 percent dedicated simply to fetching data for users in order to expedite online processes (Zeifman). This comparison shows that while bots could contribute positively, the community creating them does not treat that as a priority; bots are seen more as aids that make the online experience easier, without real consideration of their tremendous negative contribution to the online community.
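A bot like Big Cases follows a simple publish loop: poll a source of new filings, format a message for each one it has not seen before, and post it. A minimal sketch of that loop is below; the filing dictionaries and the `post` callable are hypothetical stand-ins for a real court-records feed and a real social media client, not the actual implementation of any bot named here.

```python
# Hypothetical sketch of a benign auto-posting bot: poll a feed of new
# court filings and post a link for each one not seen before.
# The feed format and posting mechanism are stand-ins, not a real API.

def format_post(filing):
    """Build the text of one status update from a filing record."""
    return f"New filing in {filing['case']}: {filing['title']} {filing['url']}"

def run_once(fetch_filings, post, seen):
    """One polling cycle: post every unseen filing and remember it.

    fetch_filings: callable returning a list of filing dicts
    post: callable that publishes one message string
    seen: set of already-posted filing URLs (deduplication state)
    """
    posted = []
    for filing in fetch_filings():
        key = filing["url"]
        if key not in seen:
            post(format_post(filing))
            seen.add(key)
            posted.append(key)
    return posted
```

In a real deployment, `run_once` would be called on a timer, `seen` would be persisted between runs, and `post` would wrap an authenticated API client. The deduplication set is what keeps such a bot well-behaved: it announces each case file exactly once rather than flooding followers' feeds.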
In all, bots are simply a thorn in the side of the online community, and they need to be restricted before they become a bigger problem that calls into question the internet's effective use as a source of valid information. They are used so often to sway public opinion and to carry out malicious tasks that undermine individuals' ability to have a neutral online experience; combined with their unwarranted contribution to these forums and the danger they pose in mass quantities, this clearly outlines a basis for banning them. The lack of initiative around regulating their use, or curbing anyone's ability to make malicious bots, shows that legislating bots is much easier said than done, meaning they should simply be banned in whole. No current positive use of bots justifies allowing them to be created, and none outweighs the tremendous negative contribution they make on behalf of the individuals who choose to use them.