joke2punchline, punchline2joke: Using a Seq2Seq Neural Network to "Translate" Between Jokes and Punchlines

 
> what do you call an unpredictable chef ?
< ouch .
 

After implementing the seq2seq model, an encoder-decoder network with attention, I wanted to get it to translate between jokes and punchlines. The scripts, pre-trained models, and training data can be found on my GitHub repo.

Model Overview

The underlying model is a PyTorch implementation of the Sequence to Sequence (seq2seq) network, an encoder-decoder network with an attention mechanism. Seq2seq can map an arbitrary input text sequence to an arbitrary output text sequence; a more conventional application would be translating between English and French. For this project, I trained the seq2seq model on question-answer format jokes, so that it can output a punchline given a joke, or output a joke given a punchline.
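
Because seq2seq simply learns a mapping from source sequences to target sequences, the same question-answer dataset can train both directions: each pair is fed in as (joke, punchline) for joke2punchline and flipped for punchline2joke. Here is a minimal sketch of that idea; the tab-separated format and file name are illustrative assumptions, not necessarily what's in my repo.

 
def load_pairs(path, direction="joke2punchline"):
    # Each line: question <TAB> punchline (assumed format).
    pairs = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            joke, punchline = line.rstrip("\n").split("\t")
            if direction == "joke2punchline":
                pairs.append((joke, punchline))
            else:  # punchline2joke: same data, source and target swapped
                pairs.append((punchline, joke))
    return pairs

pairs = load_pairs("jokes.tsv", direction="punchline2joke")  # hypothetical file
 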

Results

Overall, the results were somewhat nonsensical, as one might expect. These results are curated by me based on whether or not they made me, at minimum, smile. Yes, I do laugh at my own jokes.

For the following examples, > represents the text input and < represents the model output. I’ve selected examples where the joke or punchline is not directly from the training set, i.e. excluding any output that is simply regurgitated from the original dataset.

Joke2Punchline

For the following examples, the first line (>) is a fake joke I wrote using words from the model’s joke vocabulary, and the second line (<) is the punchline the model produced.

 
> what do you call an unpredictable chef ?
< ouch .

> what do you call a pile of pillowcases ?
< screw music

> why was the sun hospitalized ?
< because he was sitting on me .

> what do you call an unhappy star ?
< stay here !

> what do you call an unhappy star ?
< days numbered !

> what is a strawberry s favorite month ?
< a cat !

> who s there ?
< in the dictionary .

> what is red and bad for your teeth ?
< a a gummy bear

> what treasure can you find on a thursday ?
< the lettuce !

> when is a sun a moon ?
< a barber driver

> how many bananas can the moon eat ?
< fish and the train .

> what do you call an upside down egg ?
< the dough

> why was the sun unhappy ?
< because he wanted to tearable time paste !

> what did the skeleton say when they died the wrong year ?
< it march

> how many snails does it take to get to the moon ?
< to the hot hot shakespeare !

> why was the moon crying ?
< because he was on the deck !

> where do sheep go to school ?
< they take the mile bison of course !

> how many emotions does the sun have ?
< he got cents
 

Punchline2Joke

For the following examples, I fed the model fake punchlines written using words from the model’s punchline vocabulary, and the model generated a joke to fit the input punchline. The first line (>) is the fake punchline I fed into the model, and the second line (<) is the joke the model produced.

 
> two parents
< what has four wheels and flies over the world ?

> watermelon concentrate
< when do you stop at green and go at the orange juice factory ?

> cool space
< what do you call an alligator in a vest with a scoop of ice cream ?

> meteor milk
< what do you call a cow that is crossing ?

> one two three four
< what did the buffalo say to the bartender ?

> jalapeno ketchup
< what do you call a boy with no socks on ?

> ice cream salad !
< what did the fish say to the younger chimney ?

> the impossible !
< what did the worker say when he swam into the wall ?

> both !
< what do you call a ghosts mom and dad ?

> pasta party
< what do you call the sound a dog makes ?

> salad party
< what did the buffalo say to the patella ?

> dreams party
< what do you call the sound with a fever ?

> a thesaurus and a dictionary
< what kind of shorts do all spies wear ?

 

Considerations

Training Data

To train the model, I needed a dataset of clean jokes in question-answer text format.

While I did find a dataset of question-answer format jokes, those jokes were scraped from Reddit’s r/jokes subreddit. Going through the file, I did not like most of the jokes at all: many were highly problematic, often racist, sexist, queerphobic, etc., and I would rather compile my own dataset than feed bad data into my model.

One option would be to filter this dataset using a set of “bad” keywords, but trying to filter a heavily biased dataset was less appealing to me than creating a new set entirely. An alternative would be to write a scraper for r/cleanjokes, filtering in only question-answer format jokes, but I didn’t want to invest too much time and energy in this toy project, and I personally am not a fan of using Reddit for training data in general.

I ended up compiling my own small dataset of clean jokes in the question-answer format, consisting of a little over 500 jokes total. The major trade-off is that the model’s vocabulary is relatively limited, but I enjoyed the jokes much more and felt much better about the data I was feeding into the model.

Teacher Forcing

For the joke2punchline and punchline2joke models, the teacher forcing ratio was set to 0.5. I’d be curious to adjust this parameter and see the results. I would expect a lower ratio to result in more nonsensical output, whereas a higher ratio would likely result in more outputs that are directly from the training set.
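
For reference, here is roughly where that ratio enters a tutorial-style training loop. This is a minimal sketch assuming the decoder interface from the PyTorch seq2seq tutorial; the names are illustrative rather than taken from my scripts.

 
import random

def decode_with_teacher_forcing(decoder, decoder_input, hidden, encoder_outputs,
                                target_tensor, criterion, teacher_forcing_ratio=0.5):
    # Decide once per sequence whether to feed the ground truth back in.
    use_teacher_forcing = random.random() < teacher_forcing_ratio
    loss = 0.0
    for t in range(target_tensor.size(0)):
        output, hidden, _ = decoder(decoder_input, hidden, encoder_outputs)
        loss += criterion(output, target_tensor[t])
        if use_teacher_forcing:
            decoder_input = target_tensor[t]  # feed the ground-truth token
        else:
            # feed the model's own best guess, detached from the graph
            decoder_input = output.topk(1)[1].squeeze().detach()
    return loss
 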

I think an ideal setup would be to lower the teacher forcing ratio in addition to having a much larger training set.

Possible Extensions

I do think it would be fun to generate jokes and punchlines using an RNN or LSTM before feeding them into these models, so that there is less human intervention (i.e. writing fake jokes/punchlines manually).

I also think the model would be way more fun to play with if I could train it on a much larger dataset, e.g. 10K+ jokes.

Implementing a Seq2Seq Neural Network with Attention for Machine Translation from Scratch using PyTorch

Continuing with PyTorch implementation projects, last week I used this PyTorch tutorial to implement the Sequence to Sequence (seq2seq) network, an encoder-decoder network with an attention mechanism, applied to a French-to-English translation task (and vice versa). The script, pre-trained model, and training data can be found on my GitHub repo.

In the following example, the first line (>) is the French input, the second line (=) is the English ground truth, and the third line (<) is the resulting English translation output from the model.

 
> je n appartiens pas a ce monde .
= i m not from this world .
< i m not from this world .
 

Model Overview

In this particular PyTorch implementation, the network comprises three main components (a condensed sketch follows this list):

  • an encoder, which encodes the input text into a vector representation. For this project, the encoder is a recurrent neural network using gated recurrent units (GRUs). For each input word, the encoder outputs a vector and a hidden state, and uses that hidden state when processing the next input word.

  • attention, a set of weights that is used during decoding. Attention weights are calculated using a simple feed-forward layer with softmax.

  • a decoder, which takes the encoder output and the attention weights to generate a prediction for the next word. In this project, the decoder is a recurrent neural network using GRUs; it starts with the encoder’s last hidden state, which can be interpreted as a context vector for the input, and a start-of-sentence token. For each subsequent word, the decoder uses the attention weights for the current token and the current hidden state to make a prediction via softmax.
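
Below is a condensed sketch of these three components, following the tutorial’s EncoderRNN and AttnDecoderRNN (dropout and the training loop omitted; MAX_LENGTH is the tutorial’s cap on sentence length).

 
import torch
import torch.nn as nn
import torch.nn.functional as F

MAX_LENGTH = 10  # maximum sentence length assumed by the attention layer

class EncoderRNN(nn.Module):
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.embedding = nn.Embedding(input_size, hidden_size)
        self.gru = nn.GRU(hidden_size, hidden_size)

    def forward(self, word_idx, hidden):
        # One word in; one output vector and an updated hidden state out.
        embedded = self.embedding(word_idx).view(1, 1, -1)
        return self.gru(embedded, hidden)

class AttnDecoderRNN(nn.Module):
    def __init__(self, hidden_size, output_size, max_length=MAX_LENGTH):
        super().__init__()
        self.embedding = nn.Embedding(output_size, hidden_size)
        self.attn = nn.Linear(hidden_size * 2, max_length)  # feed-forward attention
        self.attn_combine = nn.Linear(hidden_size * 2, hidden_size)
        self.gru = nn.GRU(hidden_size, hidden_size)
        self.out = nn.Linear(hidden_size, output_size)

    def forward(self, word_idx, hidden, encoder_outputs):
        embedded = self.embedding(word_idx).view(1, 1, -1)
        # Attention weights from the current embedding and hidden state, softmaxed.
        attn_weights = F.softmax(
            self.attn(torch.cat((embedded[0], hidden[0]), 1)), dim=1)
        context = torch.bmm(attn_weights.unsqueeze(0), encoder_outputs.unsqueeze(0))
        output = F.relu(
            self.attn_combine(torch.cat((embedded[0], context[0]), 1)).unsqueeze(0))
        output, hidden = self.gru(output, hidden)
        return F.log_softmax(self.out(output[0]), dim=1), hidden, attn_weights
 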

The training data comes from the Tatoeba Project and consists of language pairs within a text file. While this model uses the French-to-English data file, the file can easily be replaced with any other language-pair file from the collection. As a caveat, I have not tested this on languages that may use different encodings (e.g. Traditional Chinese, Arabic, etc.).

Results

In the following examples, the first line (>) is the French input, the second line (=) is the English ground truth, and the third line (<) is the resulting English translation output from the model.

Overall, the results are fairly decent considering the small size of the training set, a 9MB text file, or about 1,400 language pairs.

 
> je suis impatient de la prochaine fois .
= i m looking forward to the next time .
< i m looking forward to the next time .

> je n appartiens pas a ce monde .
= i m not from this world .
< i m not from this world .

> il enseigne depuis ans .
= he s been teaching for years .
< he is been for for years .

> tu es sauve .
= you re safe .
< you re safe .

> je ne suis souvent qu a moitie reveille .
= i m often only half awake .
< i m still only sure .

> nous sommes reconnaissantes .
= we re grateful .
< we re contented .

> j en ai marre de garder des secrets .
= i m tired of keeping secrets .
< i m tired of hearing tom s .

> vous etes a nouveau de retour .
= you re back again .
< you re back again .

> il n est pas marie .
= he s not married .
< he s not married .

> je suis responsable des courses .
= i m in charge of shopping .
< i m very one .
 

Potential Extensions

Overall, this project serves mainly as a toy example and could easily be extended for better performance.

  • Training the model on other languages would be relatively straightforward as it would mainly be a matter of switching out the text file used for training data.

  • The embeddings can be replaced with other word embedding approaches, e.g. word2vec or GloVe.

  • There are various approaches to calculating attention weights. This implementation learns them with a feed-forward layer followed by a softmax. Experimenting with cosine similarity (the angle between two vectors) or the dot product (which considers both the angle and the magnitude of the two vectors) could potentially produce different results; see the sketch after this list.

  • Experimenting with longer training times, bigger datasets, and parameter tuning would likely yield better results.
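
As a toy illustration of the difference between those scoring functions (all names here are illustrative, not from this implementation):

 
import torch
import torch.nn.functional as F

def attention_weights(query, keys, method="dot"):
    # query: (hidden,), keys: (seq_len, hidden)
    if method == "dot":  # sensitive to both angle and magnitude
        scores = keys @ query
    else:  # "cosine": angle only; magnitudes are normalized away
        scores = F.cosine_similarity(keys, query.unsqueeze(0), dim=1)
    return F.softmax(scores, dim=0)  # normalize scores into weights

q, K = torch.randn(8), torch.randn(5, 8)
print(attention_weights(q, K, "dot"))
print(attention_weights(q, K, "cosine"))
 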

Personally, I’m interested in running this network on translating jokes2punchlines and punchlines2jokes. My next steps are to acquire or compile a dataset of jokes with a question-answer format to train a seq2seq model. :-)

AACR June L. Biedler Prize for Cancer Journalism, SABEW Best in Business Honorable Mention

I’m excited to announce that my co-author Caroline Chen and I have been awarded the American Association for Cancer Research (AACR) June L. Biedler Prize for Cancer Journalism for our investigative ProPublica piece, Black Patients Miss Out On Promising Cancer Drugs.

Additionally, our piece was awarded the Society of American Business Editors and Writers (SABEW) Best in Business Honorable Mention in the Health/Science category.

Feeling, as always, very grateful for the opportunity. Many thanks to my co-author Caroline Chen, editor Sisi Wei, and the Google News Lab Fellowship for making this piece possible. To echo a sentiment from Caroline: “I hope that access to clinical trials improves so that this story becomes irrelevant asap.” And, on a larger scale, I hope that health justice across the board can become a reality ASAP as well.

AACR June L. Biedler Prize for Cancer Journalism

Dogspotting: Using Machine Learning to Draw Bounding Boxes around Dogs in Pictures

 
Dog in shark costume

 

I wanted to try out a computer vision project, and what better way to do that than to point out where dogs are in photos??

Project Overview

I’ve included a GitHub repo and Jupyter notebook for this project.

This project uses the ImageAI computer vision library for Python, which offers support for the RetinaNet, YOLOv3, and TinyYOLOv3 algorithms for object detection. The model used is a RetinaNet model pretrained on the COCO dataset, also provided by ImageAI.

Official guide and documentation for ImageAI detection classes are provided as well.

Overall Impressions

I was pleasantly surprised at how easy out-of-the-box object detection has become. The ImageAI library supports custom object detection for the following categories:

 

person, bicycle, car, motorcycle, airplane, bus, train, truck, boat, traffic light, fire hydrant, stop sign, parking meter, bench, bird, cat, dog, horse, sheep, cow, elephant, bear, zebra, giraffe, backpack, umbrella, handbag, tie, suitcase, frisbee, skis, snowboard, sports ball, kite, baseball bat, baseball glove, skateboard, surfboard, tennis racket, bottle, wine glass, cup, fork, knife, spoon, bowl, banana, apple, sandwich, orange, broccoli, carrot, hot dog, pizza, donut, cake, chair, couch, potted plant, bed, dining table, toilet, tv, laptop, mouse, remote, keyboard, cell phone, microwave, oven, toaster, sink, refrigerator, book, clock, vase, scissors, teddy bear, hair dryer, toothbrush.

 

This made it very easy to detect dogs specifically! All I had to do was set up my project, download the pretrained model, and set a few parameters and filepaths. The entire project only took about 20 minutes from setup to output image.
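
For context, here is roughly what the whole pipeline looks like end to end. This is a minimal sketch assuming ImageAI 2.x and the pretrained RetinaNet weights file from the ImageAI releases; the file and image paths are placeholders.

 
from imageai.Detection import ObjectDetection

detector = ObjectDetection()
detector.setModelTypeAsRetinaNet()
detector.setModelPath("resnet50_coco_best_v2.0.1.h5")  # pretrained RetinaNet weights
detector.loadModel()

# Restrict detection to the categories we care about.
custom_objects = detector.CustomObjects(dog=True, cat=True)

detections = detector.detectCustomObjectsFromImage(
    custom_objects=custom_objects,
    input_image="dog.jpg",  # placeholder paths
    output_image_path="dog_detected.jpg",
    minimum_percentage_probability=45)

for d in detections:
    print(d["name"], d["percentage_probability"], d["box_points"])
 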

Some parameters of interest:

 

custom_objects = detector.CustomObjects(dog=True, cat=True)

 

Any of the object categories can be included here. We are not limited to just dogs; we can include as many categories as we want, or potentially all of them, in the same detector.

 

detections = detector.detectCustomObjectsFromImage(input_image=input_path, output_image_path=output_path, custom_objects=custom_objects, minimum_percentage_probability=45)

 

minimum_percentage_probability refers to how confident the model should be before drawing a bounding box. We can set it to a low percentage, e.g. 15%, if we want it to flag everything it sees, or to a high percentage, e.g. 85%, if we want it to flag objects only when the model is confident about what it’s detecting.

Results

Overall, the results are fairly decent, especially considering the short amount of time it took to set up. The boxes are bounding boxes around each detected dog (or other animal, or bird), labeled with the object name and prediction probability.

Text Generation with GPT-2, OpenAI's Recently Released Language Model

 
Venus, planet of love Was destroyed by global warming, while the other suns have been ravaged by the tides of time. There are no suns and there are no tides, except the sun itself. A few suns exist now in Mars (the moon is now in the orbit of Sol and Jupiter), but they are gone in the future. (It is implied that we can go back home to Earth) Jupiter is the only other planet that is not the source; that planet would be the nearest known red planet to us. So is Earth.

The Earth's only visible source of energy is the sun itself. (In Greek it means "sun" or "heaven.") According to the Old Testament story, Jupiter was so cold that it was able to cause the death of children when they died in a ship. The only real star in the solar system that is capable of causing death is the sun, which must be one of the most powerful stars in the universe. Only the moon can cause death from its star at once, and Venus must be at least one of the most powerful star systems in the entire galaxy (more details here). Earth was never seen as an "open" planet.
 

Earlier this month, OpenAI released a new text generation model, called GPT-2. GPT-2 stands for “Generative Pre-Training 2”: generative, because we are generating text; pre-training, because instead of training the model for any one specific task, we’re using unsupervised “pre-training” such that the general model can perform on a variety of tasks; and 2, because it’s the second model using this approach, following the first GPT model.

TLDR: The model is pretty good at generating fiction and fantasy, but it’s bad at math and at telling jokes. Skip to the end for my favorite excerpts.

Model Overview

The GPT-2 model uses conditional probability language modeling with a Transformer neural network architecture that relies on self-attention mechanisms (inspired by attention mechanisms from image processing tasks) in lieu of recurrence or convolution. (Side note: interesting to see how advancements in neural networks for image and language processing co-evolve.)
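
Concretely, language modeling here means factorizing the probability of a token sequence autoregressively, so generation is just repeated sampling from the learned conditional:

 
p(x) = \prod_{t=1}^{n} p(x_t \mid x_1, \ldots, x_{t-1})
 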

The model is trained on about 8 million documents, or about 40 GB of text, from web pages. The dataset, compiled for this model, is called WebText; it is the result of scraping outbound links from Reddit posts with at least 3 karma. (Some thoughts on this later; see the section on “Training Data”.)

In the original GPT model, unsupervised pre-training was used as an initial step, followed by a supervised fine-tuning step for various tasks, such as question answering. GPT-2, however, is assessed using only the pre-training step, without the supervised fine-tuning. In other words, the model performs well in a zero-shot setting.

First Impressions

When I first saw the blog post, I was both very impressed and also highly skeptical of the results.



Black Patients Miss Out On Promising Cancer Drugs

Wrapped up my summer fellowship at ProPublica last week when our investigative piece was published! Give it a read here:

Black Patients Miss Out On Promising Cancer Drugs

A ProPublica analysis found that Black people and Native Americans are under-represented in clinical trials of new drugs, even when the treatment is aimed at a type of cancer that disproportionately affects them.

The accompanying data methodology is here: How We Compared Clinical Trial and Cancer Incidence Data

This story was co-published with STAT and can also be found on Mother Jones.


 

For this story, I pitched the idea and did a ton of research, data analysis, reporting, interviews, all the data visualization—a huge thank you to my wonderful co-author Caroline Chen and amazing editor Sisi Wei!

The story was on the front page the day it published and seemed to be received well. I’ve learned so much from this fellowship and have been super grateful for this opportunity from ProPublica and the Google News Lab.


 

Update—Statement of impact since our story was published:

Our story was featured on Information is the Best Medicine, a Black-owned talk radio station in Pennsylvania, as well as Axios, Vice, Mother Jones and The Atlantic’s People v. Cancer forum. It was reprinted in the Boston Globe and Indianz, a Native American publication. Nonprofit BIO Ventures for Global Health also wrote an op-ed in response to our story, noting that “clinical trials are perpetuating existing health care disparities across the globe.”

In the course of interviewing these patients, we realized that many people don’t understand how trials work, which prompted us to create the Cancer Patient’s Guide to Clinical Trials. The guide has been shared by the Leukemia and Lymphoma Society.


 

Update 2—

For this piece, my co-author and I were awarded the American Association for Cancer Research (AACR) June L. Biedler Prize for Cancer Journalism, as well as the Society of American Business Editors and Writers (SABEW) Best in Business Honorable Mention in the Health/Science category.

Predicting Readmission Risk after Orthopedic Surgery

My colleagues and I from the Clinical Research Informatics Core at Penn Medicine gave poster presentations at the Public Health session of the Symposium on Data Science and Statistics last week.

Here's the abstract:

Our project examined hospital readmissions after knee and hip replacement surgeries that took place within the University of Pennsylvania health system. We used a variety of information available within patient electronic health records and an assortment of machine learning tools to predict the risk of readmission for any given patient at the time of discharge after a primary joint replacement surgery. We faced challenges related to missing data. We used a number of different machine learning models, such as logistic regression, random forest, and gradient boosted trees. We also used an automated machine learning pipeline tool, TPOT, which uses a genetic algorithm to search through the machine learning model/parameter space to automatically suggest successful machine learning pipelines. We trained multiple models that predicted readmissions better than the existing clinical methods, with statistically significant increases in AUC over the clinical baseline. Finally, our models suggested a number of features useful for readmission prediction that are not used at all in the existing clinician model. We hope our new models can be used in practice to help target patients at high risk of readmission after joint replacement surgery, and to help inform which interventions may be most useful.
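
For a sense of how little code the AutoML piece involves, here is a minimal TPOT sketch. The synthetic data stands in for EHR features, and the AUC scoring choice mirrors the abstract; none of this is our actual pipeline.

 
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from tpot import TPOTClassifier

# Synthetic stand-in for patient features and readmission labels.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=42)

tpot = TPOTClassifier(generations=5, population_size=20, scoring="roc_auc",
                      cv=5, random_state=42, verbosity=2)
tpot.fit(X_train, y_train)          # genetic search over model/parameter pipelines
print(tpot.score(X_test, y_test))   # held-out AUC of the best pipeline found
tpot.export("best_pipeline.py")     # emit the winning sklearn pipeline as code
 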

 
SDSS Poster Presentation
 

Music and Mood: Assessing the Predictive Value of Audio Features on Lyrical Sentiment

 

aka - what's the relationship between the audio features of a song and how positive or negative its lyrics are? 

aka - data analysis of my spotify music data + sentiment analysis + supervised machine learning

aka - my senior thesis

the full jupyter notebook used to conduct this data analysis can be found on my github here: Spotify Data Analysis

(pg. 32 and onward is just the full python jupyter notebook in the appendix.)

Computational Creativity

I gave a presentation this week about some applications of artificial neural networks in computational creativity. It consists of an overview and discussion of 3 different papers:

  1. A Computational Model of Poetic Creativity with Neural Network as Measure of Adaptive Fitness

  2. A Neural Algorithm of Artistic Style

  3. What Happens Next? Event Prediction Using a Compositional Neural Network Model (part of the What-If Machine project)


Here are the slides: