AI-generated art by Sophie Dodd '20
Through their digital humanities coursework, Kenyon students have explored a wide range of subjects, from popular sitcoms to U.N. speeches to cryptocurrency. Learn more about a selection of recent projects below or browse the project archive at digital.kenyon.edu.
Mari Holben '22 examined the change in anti-Asian American/Asian sentiment since the start of the global pandemic. Among the many terms and hashtags that have gained prevalence on social media since the COVID-19 outbreak, Mari selected #chinesevirus, #chinavirus, #kungflu, #wuhanflu and #wuhanvirus, and used the Indiana University Observatory on Social Media's trend tool to create a visualization. The results made the impact of Trump coining the term “Chinese virus” in a March 16 tweet quite explicit: the next day, #ChineseVirus was the top trending topic on the internet. Google Trends also showed that usage of the five hashtags peaked on March 19 and has been declining since.
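Her analysis relied on the Observatory's online tool, but a similar comparison can be sketched in Python against Google Trends data. The snippet below is a rough analogue using the third-party pytrends library; the library, date range and plotting choices are illustrative assumptions, not the workflow she used.

```python
# Rough analogue (an assumption, not the tool used in the project) of charting
# relative interest in the selected hashtags via Google Trends, using pytrends.
from pytrends.request import TrendReq
import matplotlib.pyplot as plt

hashtags = ["#chinesevirus", "#chinavirus", "#kungflu", "#wuhanflu", "#wuhanvirus"]

pytrends = TrendReq(hl="en-US", tz=0)
pytrends.build_payload(hashtags, timeframe="2020-03-01 2020-04-30")
interest = pytrends.interest_over_time()   # relative search interest, scaled 0-100

interest[hashtags].plot(figsize=(10, 5), title="Relative search interest, spring 2020")
plt.show()
```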
Davida Harris '22 used sentiment analysis on transcripts of President Trump's and Governor Cuomo’s COVID-19 addresses to see whether natural language processing can track emotional valence. After cleaning the text, the team used the Syuzhet package in R to chart the emotional valence of both Cuomo’s and Trump’s transcripts over narrative time. Trump’s graph resembles a Cinderella story arc, in which the emotional valence first rises, falls, then spikes up again. This may be due to his easy-to-read emotions and direct wording, which let both the public and the computer identify his thinking. Cuomo’s graph shows his emotional valence staying above neutral with very few valleys. Because he remains near the line of neutrality and uses anecdotes frequently, the computer had trouble separating positive sentiment from negative.
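The team's work was done with Syuzhet in R; the sketch below shows the same idea in Python, scoring each sentence of a transcript and plotting the scores in order. It uses NLTK's VADER lexicon as a stand-in sentiment scorer (an assumption, not the package the team used), and the transcript file name is hypothetical.

```python
# Minimal sketch of sentence-level emotional valence over narrative time,
# analogous to the R Syuzhet workflow described above, using NLTK's VADER
# lexicon instead (an assumption; not the team's actual package).
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer
import matplotlib.pyplot as plt

nltk.download("punkt")
nltk.download("vader_lexicon")

def valence_curve(transcript: str) -> list[float]:
    """Return one compound sentiment score (-1 to 1) per sentence, in order."""
    sia = SentimentIntensityAnalyzer()
    sentences = nltk.sent_tokenize(transcript)
    return [sia.polarity_scores(s)["compound"] for s in sentences]

with open("cuomo_transcript.txt") as f:    # hypothetical file name
    scores = valence_curve(f.read())

plt.plot(range(len(scores)), scores)
plt.axhline(0, linestyle="--")             # line of neutrality
plt.xlabel("Narrative time (sentence index)")
plt.ylabel("Emotional valence")
plt.show()
```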
Mara Kaspers '20 aimed to visualize the relationship between economic stimulus spending and the implementation of social protection plans with respect to changes in unemployment rates, in order to model the effectiveness of economic stimulus packages on unemployment. The results showed that median spending on economic stimuli was 1.8 percent of GDP, while the maximum climbed to 20.8 percent in Italy. Despite scoring low on social protection measures, Japan had the highest effectiveness score of 28.4 along with a low unemployment rate of 0.64 percent, suggesting that social protection measures are not yet pressing while unemployment remains low. The two countries with larger stimuli and social protection policies were the United Kingdom and the United States; the large difference in their effectiveness scores (16.8 for the U.K. versus 1.9 for the U.S.), however, is due to differences in unemployment.
Molly Kavanaugh '22 studied the public perception of the COVID-19 lockdown protests on Twitter. She investigated tweets that reference the protests and attempted to identify how many also mention President Trump, how the frequency of such tweets changed over time, and how widely those ideas spread through retweets and likes. Approximately 688 tweets contained “#protest,” and 134 of them mentioned Trump in some capacity. Since nearly 20 percent of the tweets referenced Trump, it is reasonable to conclude that there is a correlation between the president’s COVID-19 response and the prevalence of the protests. Day by day, the number of unique “#protest” tweets decreased, along with the number of such tweets that mentioned Trump. The number of retweets of tweets mentioning both the protests and Trump fluctuated across the days, and those retweets suggest that Trump remained tied to the rise of the lockdown protests.
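The counting step behind figures like these is straightforward; the sketch below shows one way to do it in Python with pandas, assuming a hypothetical CSV of collected tweets with "text", "date", "retweets" and "likes" columns (the article does not describe how the data was actually stored).

```python
# Minimal sketch of counting #protest tweets and Trump mentions over time,
# assuming a hypothetical CSV of collected tweets.
import pandas as pd

tweets = pd.read_csv("protest_tweets.csv", parse_dates=["date"])

protest = tweets[tweets["text"].str.contains("#protest", case=False, na=False)]
mentions_trump = protest[protest["text"].str.contains("trump", case=False)]

print(len(protest), "tweets with #protest")
print(len(mentions_trump), "of them mention Trump")
print(f"{len(mentions_trump) / len(protest):.1%} mention rate")

# Daily counts of unique #protest tweets and of those that also mention Trump
daily = protest.groupby(protest["date"].dt.date)["text"].nunique()
daily_trump = mentions_trump.groupby(mentions_trump["date"].dt.date)["text"].nunique()
print(pd.DataFrame({"protest": daily, "mentions_trump": daily_trump}).fillna(0))
```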
Owen Lloyd '21 scrutinized the frontier of AI-generated screenwriting using GPT-2, a generative pre-trained transformer. Pretrained on a dataset derived from 8 million webpages, the model attempted to produce lines of action and dialogue from “Rick and Morty.” GPT-2 managed to string words together into grammatical dialogue, the characters responded to each other in pragmatic and colloquial ways, and the results were top-notch in terms of accurately mimicking the characters and their diction. Many of the generated lines could easily be inserted into a real “Rick and Morty” script without being noticed, and might even get a laugh. However, since the results lacked narrative coherence, it would seem that GPT-2 can produce text that looks like a piece of writing, but it will not be a convincing one until there is a way to program a sense of narrative intent into the writing.
Rebecca Turner '22 also used GPT-2, observing the functionalities and failures of AI-generated comedy. Her team ran Oscar Wilde’s comedies of manners through the model as a sample test. The results showed that GPT-2 is able to identify recurring themes of Wilde’s comedies, such as religion, levity and romance. It also identified "Wilde-isms," a typical device in Wilde’s comedies of manners in which a character makes a broad categorical statement whose content challenges the audience’s expectations. However, beyond consistent dry humor, GPT-2 was not able to produce the comic narrative structure present in Wilde’s comedies. Though GPT-2 lacks cohesive narrative structure and the human aspect that defines comedy, the fact that it can identify latent recurring devices in literature and replicate them is quite exhilarating.
In a project titled "Quantifying Comedy," Donald Long '22 explored whether artificial intelligence could write a coherent and humorous episode of "Family Guy" or "South Park." He focused on whether comedic patterns could be identified through plot and then replicated through automation. His answer: partially. AI doesn't yet understand the context of shared lived experience, which underlies many comedic moments. Additionally, adult animated shows have many physical, unspoken gags that AI would have difficulty generating.
Alasia Destine-DeFreece '21, Samara Handelsman '21, Talia Light Rake '20, Ally Merkel '20 and Gracie Moses '20 were also interested in the role that artificial intelligence could play in television writing. Using short prompts and then full-series exposure, they had a Generative Pre-trained Transformer 2 (GPT-2) model generate scenes from "Sex and the City" and asked classmates to distinguish the AI-generated scenes from scenes written by humans. Ultimately, they found that even AI exposed to the entire "Sex and the City" series had difficulty creating consistently cohesive scenes, and concluded that GPT-2 is surprisingly capable of mimicking the show's style, but not yet capable of writing scenes that the majority of audience members would believe were written by a human.
Zhaofang (Zach) Wang '20 analyzed the vocabulary, word count, video length and on-screen attitude of popular vloggers with differing styles and audiences. The data visualization results indicated the importance of consistency and of tailoring style to suit a target audience. For example, younger viewers seem to respond to visual content and fewer words, and profanity can lead to success or failure based on audience preference. Wang noted that further studies that focus more on visual and audio content would lead to more comprehensive and accurate conclusions.
Sophie Barrio '20 explored the possibility of creating successful AI-generated song lyrics. She used GPT-2, which predicts the next word from a given set of words, training the model on every song from all seven of Taylor Swift’s albums. GPT-2 proved successful at formulating metaphors and creating compelling lines but lacked consistency in the storytelling across a song’s lyrics. At times it strayed from what Swift meant to convey, turning what should have been a breakup song into more of a horror story, with references to bruises, scars and burns. Producing a more cohesive song would require an even more comprehensive data set, one that should plausibly include Swift’s contemporaries.
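The article does not say which GPT-2 implementation these projects used; the sketch below shows how such a fine-tuning run could look with the open-source gpt-2-simple library. The library choice, file name and step count are assumptions for illustration only.

```python
# Sketch of fine-tuning GPT-2 on a lyrics corpus with gpt-2-simple.
# Library, file name and hyperparameters are illustrative assumptions;
# the article does not name the implementation the project used.
import gpt_2_simple as gpt2

model_name = "124M"                        # smallest public GPT-2 checkpoint
gpt2.download_gpt2(model_name=model_name)

sess = gpt2.start_tf_sess()
gpt2.finetune(sess,
              dataset="swift_lyrics.txt",  # hypothetical: one song per block of text
              model_name=model_name,
              steps=500)                   # a short fine-tuning run

# Sample new "lyrics" from the fine-tuned model
gpt2.generate(sess, length=200, temperature=0.8, prefix="I remember the night")
```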
Kaiya Case '22 studied how GPT-2, a platform commonly used to imitate prose, handles classical poetry, and how that more concise, metrical form translates to machine generation. The system generates text by processing its input as a series of tokens, with the simple objective of predicting each next word. It was fed a text file containing Shakespeare’s 154 sonnets, followed in a separate trial by a reformatted compilation of Donne’s collected poetic works. The results showed that, given such a limited corpus and the weight that individual word choice carries in the poetic medium, articulate phrases can still be expected from the current GPT-2 model; the rhyme scheme, however, is almost entirely lost. Traces of Donne’s reflective vanities start appearing throughout the latter half of the generated results but are never able to pick up lucid momentum.
Jonah Ziteli '20 examined the effectiveness of artificial intelligence in producing coherent and convincing contemporary poetry. A transformer-based language model called GPT-2 was fine-tuned on a corpus of 177 poems by mid-century Ohio poet James Wright, spanning his entire career. The machine produced two original sets of poems at temperatures of 0.7 and 0.9. At the lower temperature setting, the model generated believable, coherent poems that matched Wright’s style, vocabulary, expressions and themes. At the 0.9 setting, however, the generated poems deviated from Wright’s elegance, pulling words from corners of its lexicon least characteristic of Wright. While both sets of poems lacked effective use of imagery and metaphor, on the whole there is evidence that GPT-2 has the potential to accurately replicate contemporary poems if trained well.
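Temperature is simply a knob on how the model samples its next word: lower values stay close to the most likely choices, higher values take riskier ones. The sketch below illustrates sampling at the two temperatures mentioned above using the Hugging Face transformers pipeline with the base GPT-2 checkpoint; the toolkit and prompt are assumptions, since the project's fine-tuned Wright model is not public.

```python
# Sketch of sampling GPT-2 at temperatures 0.7 and 0.9 with the Hugging Face
# transformers pipeline (an assumption; not the project's fine-tuned model).
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="gpt2")
set_seed(42)

prompt = "In a field near the Ohio River,"   # hypothetical Wright-like prompt
for temperature in (0.7, 0.9):
    out = generator(prompt,
                    max_length=80,
                    do_sample=True,          # sample instead of greedy decoding
                    temperature=temperature, # higher = flatter, riskier word choices
                    num_return_sequences=1)
    print(f"--- temperature {temperature} ---")
    print(out[0]["generated_text"])
```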
Henry Abbot '21 used a generative adversarial network (GAN) to generate art and then conducted a Turing test to determine whether classmates could distinguish between AI-generated and human-created art. Abbot noted that, while the AI struggled to pass the test, it replicated some artistic styles better than others and also created new, unconventional styles. He concluded that, while it's unlikely that AI will completely replace humans as artistic creators, advancing technology has an undeniable future in art.
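A GAN pits two networks against each other: a generator that turns random noise into images and a discriminator that tries to tell those images from real ones. The PyTorch sketch below shows that pairing in miniature; the architecture, sizes and random "real" batch are assumptions for illustration, since the article does not describe the networks these projects actually used.

```python
# Minimal GAN sketch in PyTorch, illustrating the generator/discriminator pairing.
# Architecture and sizes are illustrative assumptions, not the project's model.
import torch
from torch import nn

LATENT_DIM, IMG_DIM = 100, 64 * 64 * 3     # noise size, flattened 64x64 RGB image

generator = nn.Sequential(                  # noise -> fake image
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, 1024), nn.ReLU(),
    nn.Linear(1024, IMG_DIM), nn.Tanh(),    # pixel values in [-1, 1]
)

discriminator = nn.Sequential(              # image -> probability it is real
    nn.Linear(IMG_DIM, 1024), nn.LeakyReLU(0.2),
    nn.Linear(1024, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

# One adversarial step: the discriminator learns to tell real from fake,
# while the generator learns to fool it.
criterion = nn.BCELoss()
noise = torch.randn(16, LATENT_DIM)
fake_images = generator(noise)
real_images = torch.rand(16, IMG_DIM) * 2 - 1   # stand-in for a batch of real art

d_loss = (criterion(discriminator(real_images), torch.ones(16, 1)) +
          criterion(discriminator(fake_images.detach()), torch.zeros(16, 1)))
g_loss = criterion(discriminator(fake_images), torch.ones(16, 1))
print(f"discriminator loss {d_loss.item():.3f}, generator loss {g_loss.item():.3f}")
```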
Nathan Attie '20 agreed that the futures of art and AI are inextricably linked. In a separate project, Attie used GANs, integrated with a deep music visualizer, to create art and explore whether AI-generated art is effective. He concluded that AI can create new, unique images and provide tools for people who are less comfortable with or talented at traditional art forms to express themselves.
Sophie Dodd '20 explored how current attitudes surrounding AI as a creative tool are changing, using a survey to capture emotional responses to art created by a style generative adversarial network (StyleGAN). The majority of survey participants agreed that AI could make art, but many stipulated that they didn’t like the AI-generated art (see images above). Dodd concluded that, while concerns about the exclusivity of art and fears that AI might one day surpass human skills are common, understanding how the underlying technology works reduces discomfort, and that AI is revolutionizing the meaning of art and our relationship with it.
Eryn Powell '19 combined photojournalism, political science, sociology, and visual analysis technology to explore how images can influence public opinion. Powell used a convolutional neural network (CNN) to analyze images of politician John McCain and concluded that, with advancing technology and large datasets, it is possible to identify the styles and elements of images that sway voters and convey specific sentiments (both positive and negative) about political figures. She notes that this may influence the visual choices of photojournalists, who have traditionally favored content over style.
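One common way to build such an image classifier is transfer learning: take a CNN pretrained on a large image collection and swap its final layer for the task at hand. The sketch below does this with a pretrained ResNet-18 from torchvision; the network choice, labels and file name are assumptions for illustration, and the article does not say which CNN Powell used.

```python
# Sketch of a transfer-learning setup for classifying the sentiment an image
# conveys, using a pretrained ResNet-18 (an assumption; not Powell's network).
import torch
from torch import nn
from torchvision import models, transforms
from PIL import Image

# Replace the ImageNet classification head with a positive/negative head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)   # 2 classes: positive, negative
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

image = Image.open("mccain_photo.jpg").convert("RGB")   # hypothetical file
with torch.no_grad():
    logits = model(preprocess(image).unsqueeze(0))
    probs = torch.softmax(logits, dim=1).squeeze()
print(f"positive {probs[0]:.2f}, negative {probs[1]:.2f} (head untrained: illustrative only)")
```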
"AI and the Future of Curatorial Work in Art Museums" | Anna Shaulis '22
"Artistic Style Transfer: How Convolutional Breaks from Convention" | Miles Shebar '20
"Building a Universal Human Trafficking Lexicon" | Nora Mittleman '20
"Cold War Conflicts: Analyzing the Role of U.S. Arms Exports" | Kara Morrison '20
"Computational Approaches to Predicting Cryptocurrency Prices" | Chris Pelletier '20
"Fraud Detection in the Health Insurance Industry Using AI/ML Techniques" | Keely Sweet '20
"Guns in D.C. Appellate Court: Sentiment Analysis on Opinions from the Court" | Grace Peterson '22
"LEED Certification Prediction with K-Means Clustering Algorithm" | Jack Chase '19
"Lost in Translation: Using Sentiment Analysis on Translations of Homer’s "Odyssey'" | Erin Shaheen '22
"PICMOJI English Translator" | Alexander Beatty '20
"Predicting Attitudes Toward the Environment" | Emily Rachfal '20
"Sethe’s Icarus Moment: Sentiment Analysis and Toni Morrison’s 'Beloved'" | Willow Green '21
"Transitional Justice Terminology Analysis in UN General Assembly Speeches" | Michael Lahanas '19
"Using Machine Learning to Predict Major League Baseball Success" | Alexander Gow '21