Many national lawmakers have little real understanding of the laws they pass and little appreciation of their implications for individuals, businesses, or society at large.
In the United States, for example, many lawmakers don’t necessarily read the bills they vote on, relying instead on specialized staff members to review and explain legislation, or on the independent analyses produced by nongovernmental organizations (NGOs), think tanks, or lobbyists.
Artificial intelligence (AI) is likely to be another interpreter of legislation in the future, although not necessarily to create legal summaries directly for lawmakers. It is more likely that NGOs, think tanks, lobbyists, and staff will use AI to help them analyze the likely effects of legislation more accurately.
A Long Read
The common practice of U.S. lawmakers not reading bills returned to public debate with the Patient Protection and Affordable Care Act, or Obamacare, when some media outlets rallied around a common message to lawmakers: “Read the bill before voting on it.” Underneath that message lay a concern about complexity.
Estimates of the number of pages of regulation associated with Obamacare range between 5,000 and 20,000, depending on how you count all the references to other laws. In 2009, the U.S. Code was estimated to contain over 42 million words.
The Obamacare bill (H.R. 3962), excluding references to other regulations, is 1,990 pages long with almost 235,000 substantive words, which is slightly shorter than the book “Harry Potter and the Order of the Phoenix,” and is roughly 0.5 percent of the entire U.S. Code.
Clearly, this sheer volume of text demonstrates the complexity of the U.S. regulatory framework. If lawmakers don’t read the bills they vote on, can they really understand them? Where are lawmakers getting the information on which they base their decisions, and is it reliable?
The introduction of AI in the legislative process is becoming more likely because of other emerging trends such as the creation of human-readable reports from structured and unstructured data.
Primer, an AI company, has developed machine learning (ML) algorithms that can mine vast amounts of internet data to summarize a topic into key concepts and trends.
For example, a user can enter information into an AI platform about a topic he or she may be interested in, such as coal mining in Indonesia, and a report will be produced by the AI behind the scenes.
The technology is already being used by some big banks and members of the intelligence community to summarize topics of interest on demand, allowing analysts to focus on other tasks while staying abreast of key trends and risks.
If an AI system can create human-readable reports from unstructured internet data, then it can also decipher legislation. It will take time to train AI how to process legislative language effectively, but as ML algorithms become ubiquitous, easily deployable, and more affordable to run, it’s likely that someone will develop AI to make legislation more transparent.
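The core idea behind such summarization can be illustrated with a toy extractive summarizer: rank a bill’s sentences by how many informative terms they contain and keep the top few. This is a minimal sketch using only the Python standard library; real legislative-analysis systems use far more sophisticated NLP, and the stop-word list here is an illustrative assumption, not a real methodology.

```python
import re
from collections import Counter

def summarize(text, n_sentences=2):
    """Rank sentences by the frequency of their informative words; keep the top n."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = re.findall(r"[a-z']+", text.lower())
    # Toy stop-word list; a real system would use a curated legal-language lexicon.
    stop = {"the", "a", "an", "of", "to", "and", "in", "shall", "be", "or", "is"}
    freq = Counter(w for w in words if w not in stop)

    def score(sentence):
        terms = re.findall(r"[a-z']+", sentence.lower())
        return sum(freq[t] for t in terms if t not in stop) / (len(terms) or 1)

    ranked = sorted(sentences, key=score, reverse=True)[:n_sentences]
    # Re-emit the selected sentences in their original order.
    return " ".join(s for s in sentences if s in ranked)
```

Even this crude frequency heuristic tends to surface the operative clauses of a dense text, which hints at why far richer models can produce usable briefs from thousands of pages.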
AI can transform the legislative process by moving it from lawyers manually reading and writing bills to modeling them. Perhaps one analyst will read and write bills while another uses AI, natural-language-processing algorithms, and data visualization to model their impact within existing complex legislative frameworks.
AI can help to model, predict, and monitor the impact of legislation that lawmakers pass, but it can also hold those same lawmakers accountable on many other fronts. In a 2018 Gallup poll, only 5 percent of those surveyed had a high degree of confidence in the U.S. Congress.
In many countries, simply trying to understand what an elected official or candidate running for public office believes or has historically voted on can be a daunting task. AI can help decipher how lawmakers have voted historically, their public stances in the media, sources of campaign funding, possible conflicts of interest, and other publicly available information.
Doing so could make the legislative process easier to understand, make elected representatives more accountable, and, by virtue of that transparency, encourage more people to run for office and participate in the political process.
The relentless parade of corruption scandals worldwide amplifies the need to introduce more transparency and accountability across the board. In a 2016 study, it was estimated that the annual global cost of bribery was $1.5 trillion to $2 trillion, or approximately 2 percent of global GDP.
Some have criticized these widely cited corruption metrics, but anyone who has traveled throughout the world, follows the news, and has read Transparency International’s Corruption Perceptions Index understands the gravity of the problem.
AI is already being used by law enforcement to evaluate a person’s truthfulness when crossing country borders and by companies to determine which job candidates make it to the second round of interviews to meet with flesh-and-blood human beings.
AI can even analyze a person’s facial expressions in a recorded video or monitor pupil dilation during a series of questions designed to score an individual for trustworthiness.
In the future, some politicians may use their willingness to undergo an AI-based lie-detection test as a platform from which to build their political campaigns. Similarly, if an NGO developed AI to monitor the public data of lawmakers for any potential concerns about conflicts of interest, it might reduce the level of corruption because of the fear of getting caught.
The voters of the future may be able to visit a website to research a politician they are considering and draw on AI-driven platforms that score politicians for honesty, graft risk, historical voting patterns on key issues, publicly available data on personal assets, and hidden links to companies and other high-risk individuals.
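Such a platform would ultimately reduce many public-data signals to something a voter can compare at a glance. The sketch below shows one way a composite score might be computed; the signal names, weights, and normalization are entirely hypothetical assumptions for illustration, not any real scoring methodology.

```python
def transparency_score(record, weights=None):
    """Combine normalized public-data signals into a single 0-100 score.

    `record` maps signal names to values already normalized to [0, 1],
    where higher means better. The default signals and weights below are
    illustrative only.
    """
    default = {
        "asset_disclosure": 0.3,  # completeness of filed financial disclosures
        "vote_attendance": 0.2,   # share of roll-call votes attended
        "donor_diversity": 0.2,   # 1 minus concentration of campaign funding
        "conflict_flags": 0.3,    # 1 minus share of flagged potential conflicts
    }
    weights = weights or default
    total = sum(weights.values())
    # Missing signals count as 0, i.e. no credit without public data.
    score = sum(weights[k] * record.get(k, 0.0) for k in weights) / total
    return round(100 * score, 1)
```

The hard part in practice is not this arithmetic but sourcing and normalizing the underlying data, and a weighting scheme itself encodes a value judgment, which is exactly the neutrality problem discussed below.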
While some politicians may not want to give up their personal information voluntarily, others who are underdogs in a political race may feel they have nothing to lose by submitting themselves to an all-intrusive AI cavity search.
Also, the use of AI could help promote their campaign through viral media posts and reduce the need for wealthy campaign donors. In other words, AI could alter the power dynamic among the political elites, level the playing field, and give more power back to where it belongs—with voters.
If AI were to provide us with a wealth of information about potential political candidates, would people actually use it?
Also, would a candidate’s honesty or propensity for corruption affect the outcome of elections in the future, given the low bar that has been set on the “acceptability” of corruption, its widespread practice, its institutionalization in the halls of government, and the relative political apathy of some voting populations?
The caliber of the AI used in the political process of the future will, of course, only be as good as the data used to calibrate it and the degree to which voters use and benefit from it.
Evaluating the effectiveness of processes and laws boils down to the values that societies, lawmakers, and the voting public attribute to them. Instead of being deliberately transparent and neutral, AI systems could be trained to promote radically different value systems.
Could a truly objective political AI that is immune to partisanship even be created, and would it stand a chance of being accepted and effective if the voting public remained partisan in nature?
Despite the risks, the legislative and election process is in desperate need of a shakeup. The reality is that no one, especially lawmakers, truly understands the impact of the laws passed, because the system is extremely complex and no one has attempted to model it.
As governments begin to model legislative changes, they can set guideposts for the types of impact they would like to see, such as a reduction in unemployment.
Besides holding lawmakers accountable for the laws they do or don’t pass, the public needs to hold politicians accountable for their own actions or lack thereof. Perhaps AI, if deployed widely enough and with care, is part of the answer toward rebalancing the scales of power back to the people.
Keith Furst is managing director of Data Derivatives. He is the co-author of the new book “AI Supremacy: Winning in the Era of Machine Learning.”
Editor's Note: A version of this article first appeared here in The Epoch Times on October 4, 2018.