Homelabs

An assortment of ideas and projects

Machine Learning

Or the Modern Frankenstein Creation

Feb. 26, 2019 by Maxwell Fan

In Mary Shelley’s Frankenstein, Robert Walton and Victor Frankenstein both pursue potentially dangerous research in their quest for knowledge, which gives Walton what he describes as a “glow with an enthusiasm which elevates me to heaven; for nothing contributes so much to tranquillize the mind as a steady purpose – a point on which the soul may fix its intellectual eye” (Shelley 80). Eventually, Frankenstein succeeds in his search for the secret of life and creates a sentient being capable of thought and emotion. These private thoughts and emotions present a threat to Frankenstein, who promptly flees his lab and, through his abandonment, drives the monster into a destructive rage. In the end, Frankenstein regrets creating his monster, crying “[I will] extinguish the spark which I so negligently bestowed” (Shelley 191).

In today’s world, rapid advancements in machine learning have inflamed anxieties about the role of humans in the workplace. In addition, many people fear that if machine learning progresses far enough, it could become sentient – presenting us with moral quandaries similar to those Frankenstein faced. As the field of machine learning advances and machines acquire the ability to mimic human capabilities, we are left with a tough question: what will be our role in the world?

The current state of machine learning is quite limited in comparison with the average human’s abilities. Machine learning requires gargantuan datasets to reach a high degree of accuracy. Tasks such as object recognition, text analysis, facial recognition, game playing, and deepfakes require, by the nature of their design, expensive computation and millions of examples before they become useful. In fact, the computing power required by landmark machine learning papers is growing at an exponential pace.

[Image: A graph of computing power per major paper. (Credit: OpenAI Blog)]

In addition, these systems require machine learning specialists to fine-tune and optimize a program’s hyperparameters. Most importantly, each of these machine learning categories is distinct: they use different algorithms and structures to learn, making them fundamentally incompatible. As several machine learning researchers put it in a paper, “current systems are better characterized as narrow experts rather than competent generalists” (Radford et al. 1).
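To make the hyperparameter tuning mentioned above concrete, here is a minimal, hypothetical sketch of the kind of search a specialist might run. The dataset, model, and parameter grid are placeholders chosen purely for illustration, not taken from any of the papers discussed here.

```python
# A toy sketch of hyperparameter tuning with scikit-learn.
# The dataset, model, and parameter grid are illustrative placeholders.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

# A small, built-in stand-in dataset (real systems need far larger ones).
X, y = load_digits(return_X_y=True)

# Hyperparameters are knobs a human picks or searches over;
# the model does not learn them from the data.
param_grid = {
    "C": [0.01, 0.1, 1.0, 10.0],   # inverse regularization strength
    "solver": ["lbfgs", "saga"],   # optimization algorithm
}

search = GridSearchCV(
    LogisticRegression(max_iter=2000),
    param_grid,
    cv=5,  # evaluate each setting with 5-fold cross-validation
)
search.fit(X, y)

print("Best hyperparameters:", search.best_params_)
print("Best cross-validated accuracy:", search.best_score_)
```

Even in this toy example, every value in the grid has to be chosen and evaluated; scale the model up to millions of parameters and the cost of that trial and error grows accordingly.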

This limitation, however, may be starting to change. On February 14th, 2019, OpenAI, a research institute dedicated to “safe artificial general intelligence”, published a landmark paper describing a model able to summarize, translate, and answer questions about text coherently. This is the first time a machine learning algorithm has been able to replicate near human-level proficiency in so many domains. In fact, the researchers themselves were spooked by the uncanny aptitude of their own creation. In an unprecedented decision in machine learning research, they temporarily withheld the release of their fully trained model, “due to our concerns about malicious applications of the technology”. Beyond the research findings impressing the machine learning world, the principled ethical stand the researchers took in delaying full publication was equally impressive. The researchers stated, “Other disciplines such as biotechnology and cybersecurity have long had active debates about responsible publication in cases with clear misuse potential, and we hope that our experiment will serve as a case study for more nuanced discussions of model and code release decisions in the AI community”, sparking an important dialogue among researchers (Radford et al.).

[Image: A partial excerpt from OpenAI’s model. (Credit: OpenAI Blog)]
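For readers who want a feel for what such a language model does, the sketch below prompts the small, publicly released version of GPT-2 through the Hugging Face transformers library. This is purely illustrative – the library, model size, prompt, and sampling settings are my own choices, not OpenAI’s setup, and the withheld full-size model produces far more coherent text.

```python
# Illustrative only: prompting the small, publicly released GPT-2 model
# via the Hugging Face "transformers" library (not OpenAI's own setup).
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")   # small public checkpoint
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "In Mary Shelley's Frankenstein, the creature"
inputs = tokenizer(prompt, return_tensors="pt")

# The model repeatedly predicts the next token; sampling with top-k
# keeps the continuation varied instead of always taking the likeliest word.
output = model.generate(
    **inputs,
    max_length=60,
    do_sample=True,
    top_k=40,
    pad_token_id=tokenizer.eos_token_id,
)

print(tokenizer.decode(output[0], skip_special_tokens=True))
```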

Especially in the context of online fake news bots, OpenAI’s researchers were acutely aware of the dangers of their own creation. The predominant worry amongst machine learning researchers is not artificial general intelligence (AGI), but bad actors using cutting-edge machine learning research to generate fake news, impersonate people, manipulate people’s feelings, “guide” military decisions, and oppress minorities to an extraordinary degree – all of which are already happening. Even though the dangers posed by machine learning advances are not quite as threatening as Frankenstein’s monster, without careful planning these technologies have the potential to become its modern-day analog.

“I had worked hard for nearly two years, for the sole purpose of infusing life into an inanimate body. For this I had deprived myself of rest and health. I had desired it with an ardour that far exceeded moderation; but now that I had finished, the beauty of the dream vanished, and breathless horror and disgust filled my heart” (Shelley 135). – Victor Frankenstein

In addition, the superior performance of machine learning algorithms over humans at specific tasks can already profoundly impact the job market. A study by two Oxford University researchers found that 47% of US jobs are at high risk of automation, with less-educated workers bearing the brunt (Frey and Osborne 38). But the drastic increase in productivity per worker, if properly harnessed by governments amid their rushed race for AI superiority, could usher in a new era of better living standards built on cheap and bountiful goods and services.

Machine learning today is a mixed bag of positive and negative developments. In the short term, it will have temporary, harmful effects on society, but if governments can seize the opportunity presented by cheap, bountiful, and intelligent labor, we could enjoy a golden era of prosperity.

Currently, many countries are single-mindedly focused on accelerating machine learning research – no matter the cost. The US, China, Japan, South Korea, and many others are locked in a fierce battle to outspend and outcompete each other. In this race to AI dominance, we must not lose ourselves in a blinding obsession with superiority; we must think about the consequences of our actions, lest we end up like Frankenstein, who laments, “Even now I cannot recollect, without passion, my reveries while the work was incomplete. I trod heaven in my thoughts, now exulting in my powers, now burning with the idea of their effects. From my infancy I was imbued with high hopes and a lofty ambition; but how am I sunk!” (Shelley 337).

Hollywood loves to portray machine learning as an existential threat to humanity. Stories like Frankenstein feed that fear, especially at moments when creators realize the horror of their creations. These intense scenes captivate the public’s attention,

“How can I describe my emotions at this catastrophe, or how delineate the wretch whom with such infinite pains and care I had endeavoured to form? His limbs were in proportion, and I had selected his features as beautiful. Beautiful! – Great God! His yellow skin scarcely covered the work of muscles and arteries beneath; his hair was of a lustrous black, and flowing . . .” (Shelley 135). – Victor Frankenstein

and have skewed the public’s understanding of machine learning’s benefits and dangers. It is imperative that the public truly understand this revolutionary technology, both its advantages and its pitfalls.



Works Cited


Frey, Carl Benedikt, and Michael A. Osborne. “The Future of Employment: How Susceptible Are Jobs to Computerisation?” Oxford University, www.oxfordmartin.ox.ac.uk/downloads/academic/The_Future_of_Employment.pdf. Accessed 27 Feb. 2019.

Radford, Alec, et al. “Language Models are Unsupervised Multitask Learners.” OpenAI, d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf. Accessed 22 Feb. 2019.

—. “Better Language Models and Their Implications.” OpenAI, 14 Feb. 2019, blog.openai.com/better-language-models/. Accessed 27 Feb. 2019.

Shelley, Mary. Frankenstein. iBooks ed., Penguin.