OpenAI Scholars 2018: Final projects

Our first cohort of OpenAI Scholars has now completed the program.


Photo: Blake Tucker

September 10, 2018

Our first cohort of OpenAI Scholars has now completed the program. Over the past three months, we’ve seen how quickly experienced software developers can become machine learning practitioners. All eight Scholars produced an exciting final project and are going on to work or teach within machine learning.

We’ll be hosting a Scholars Demo Day at OpenAI to showcase their work. Our Scholars will talk about their work this summer, followed by Q&A, mingling, and refreshments with the Scholars and other OpenAI folks. This event is open to all who are interested—just RSVP at the link below!

We plan to repeat the Scholars program in 2019. The program is open to people from groups underrepresented in the field because we believe AI will only benefit everyone if the field is better represented. In the coming months, we’ll also release a case study on the first cohort to help other organizations launch similar initiatives.

Final projects

Christine Payne

Mentor: Karthik Narasimhan

Works from: San Francisco, CA

Interesting learning: This summer I had a basic introduction to RL, and I learned how to use TensorFlow within the Unity game engine to create a multi-agent soccer game. I also learned a lot about language modeling (through my music generation project). I feel much more fluent in PyTorch and TensorFlow than I did at the start of the summer.
Final project: Adapted current language modeling techniques to model classical music via two different approaches (notewise and chordwise) with parallel character- and word-level language models. This lets you generate solo piano music or piano-and-violin chamber music. You also have the option of training the generative model on a large music database or only on specific composers or styles. The project also incorporates a music-critic network that scores whether a music sample is real or fake, and another that tries to determine who composed the sample.
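The two encodings roughly correspond to word-level and character-level tokenization of a piano roll. A minimal sketch of the idea, using hypothetical token formats (the project's actual vocabulary and note representation may differ):

```python
# Illustrative sketch: two ways to tokenize one time step of a piano roll,
# given a 0/1 activity vector over 12 pitches. Token names are made up.

def chordwise_token(step):
    """'Chordwise' (word-level): all notes sounding at this time step
    become a single token, so each distinct chord is one vocabulary item."""
    return "".join(str(int(n)) for n in step)

def notewise_tokens(step):
    """'Notewise' (character-level): each active pitch becomes its own
    token, so the vocabulary stays small but sequences get longer."""
    return [f"p{i}" for i, n in enumerate(step) if n]

step = [1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0]  # a C major triad
print(chordwise_token(step))   # -> "100010010000"
print(notewise_tokens(step))   # -> ['p0', 'p4', 'p7']
```

The trade-off the two approaches explore: chordwise tokens carry more context per step but blow up the vocabulary; notewise tokens keep the vocabulary small at the cost of longer sequences.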
What’s next: Joining the Fall 2018 class of OpenAI Fellows to continue to expand her ML research knowledge.

Dolapo Martins

Mentor: Geoffrey Irving

Works from: Baltimore, MD

Interesting learning: This summer with OpenAI was truly amazing. The Scholars community has been incredibly supportive, and everyone was deeply invested in our success. One thing I learned is how important pictures are to a deeper understanding. My mentor encouraged me to draw out and explain the math and processes behind what I was doing, and I found that I was most helped by online blog posts that drew out what was going on. I also developed a much deeper appreciation for data generation and feature engineering. The most amazing thing I got out of this experience was the confidence to know that I could actually work in this space.
Final project: Built a model with good performance on the SemEval STS Task, Track 4; explored several iterations of the model, from LSTMs to the Transformer architecture to RNNs.
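STS (semantic textual similarity) systems typically encode each sentence into a vector and score the pair by similarity. A minimal sketch of that scoring step, with toy hand-set embeddings standing in for the learned encoder (LSTM or Transformer) the project experimented with:

```python
# Hypothetical sketch of STS-style scoring: mean-pool word vectors into a
# sentence vector, then compare sentence vectors by cosine similarity.
# The toy embeddings below are invented; a real model learns them.
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def sentence_vector(tokens, emb):
    """Mean-pool the word vectors of the known tokens in a sentence."""
    vecs = [emb[t] for t in tokens if t in emb]
    return [sum(xs) / len(vecs) for xs in zip(*vecs)]

emb = {"cat": [1.0, 0.0], "feline": [0.9, 0.1], "car": [0.0, 1.0]}
s1 = sentence_vector(["cat"], emb)
s2 = sentence_vector(["feline"], emb)
print(round(cosine(s1, s2), 3))  # -> 0.994 (near-synonyms score high)
```

In the actual task, the predicted similarity is evaluated against human ratings (e.g. by Pearson correlation), which is what "good performance" on the STS benchmark measures.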
What’s next: Working as an engineer at Microsoft for a year before pursuing a dual PhD at the University of Illinois in Computer Science and Linguistics in 2019.

Hannah Davis

Mentor: Emily Denton

Works from: New York, NY

Interesting learning: This summer I gained a concrete understanding of various neural network architectures, especially those used in creative practice, such as recurrent neural networks, generative adversarial networks, variational autoencoders, and their variants. I also learned how to implement neural networks in TensorFlow and PyTorch.
Final project: An experiment in generating emotional landscapes with a GAN, a conditional VAE, and a multi-scale VAE, with varying degrees of success. This was a two-week project, started after an earlier effort to build a dataset of stand-up comedy punchlines turned out to need more time than was available.
Emotion class represented in each row (from top to bottom): joy, anticipation, fear, sadness, surprise, trust, disgust, anger, none
What’s next: Teaching a graduate-level course on generative music this fall at New York University, then working as a machine learning engineer while continuing to work with generative music and emotion datasets.

Holly Grimm

Mentor: Christy Dennison

Works from: Santa Fe, NM

Interesting learning: I’m very appreciative of my three months at OpenAI. For the first two months, I was able to learn about the latest reinforcement learning algorithms and OpenAI’s toolkits. With my final project, I learned about training with multiple outputs and modifying a CycleGAN model with additional loss terms, and deepened my experience with TensorFlow and Keras. In the context of machine learning, I learned patience, persistence, and to spend some time just thinking. These qualities helped me work through diverse projects each week. I also found that writing about my experiences helped clarify my thinking on a project and led to some interesting solutions.
Final project: An Art Composition Attributes network, built by fine-tuning a pretrained ResNet50 on eight different art composition attributes and using it within a CycleGAN to generate art compositions by setting target values for each attribute.
Color harmony: Analogous
What’s next: Beginning work as a machine learning engineer based in New Mexico; continuing to develop her final project and incorporate it into her art practice.

Ifu Aniemeka

Mentor: Igor Mordatch

Works from: Chicago, IL

Interesting learning: I experienced the joy and excitement that comes with making progress and also the self-doubt and despair that attends failure. It’s been quite the intellectual journey and one I’m glad I didn’t have to take on alone. I know I’ve still got a long way to go, but I’m proud of the progress I’ve made. When I started this program, I’d not trained a single neural network. I’d read and written about them, but I had zero hands-on experience. Machine learning is not an easy topic, but now I feel less like I’m trying to scale Mount Everest and more like I’m climbing up a ladder. Knowing the path forward makes the way shorter.
Final project: Built a network that can learn the rules of how objects move in space the same way humans learn those rules: via observation, without explicit definitions of concepts like momentum, force, or friction.
Left: target animation. Right: predicted frames generated by network.
What’s next: Working as an ML engineer after spending a few months as a front-end developer while continuing to learn and practice ML skills.

Munashe Shumba

Mentor: Yasaman Bahri

Works from: New York, NY

Interesting learning: Before the program, I had seen TensorFlow code in tutorials, but I had not worked with it myself. During the program, I got to explore and familiarize myself with the code in the TF libraries and to modify them to work on new data. I’m more aware of the field now, and it’s easier to read and understand papers. Before the program, reading deep learning papers took a long time because every few sentences I would encounter a term or concept I had never heard before. Now when I read a paper, I understand the terminology as well as why the authors might have made certain choices. I also learned how others in this industry build deep neural nets: common practices, prominent architectures, popular datasets, tools, and the like. Knowing this helps me know where to start on new projects.
Final project: Explored the use of semantic trees in LSTMs as a way to better represent the relationships between entities in a sentence.
What’s next: Applying his machine learning expertise as an engineering consultant to help small and mid-size businesses take advantage of ML; hosting a short seminar on deep learning in Harare, Zimbabwe, for engineers in his home country.

Nadja Rhodes

Mentor: Natasha Jaques

Works from: Seattle, WA

Interesting learning: I am grateful for the time and space that this program provided me with to self-study at the intersection of deep learning and NLP. I found and refined my blogging voice, including more visual storytelling than ever before. I am much more comfortable working with deep learning frameworks, especially PyTorch and Keras. And I now have experience building an end-to-end deep learning product. I couldn’t have done it without this support system!
Final project: @deephypebot is a music commentary generator. It is essentially a language model, trained on past human music writing from the web and conditioned on attributes of the referenced music. An additional training step encourages the descriptive, almost flowery writing commonly found in this genre. The goal is to teach the language model to generate consistently good, entertaining new writing about songs.
Diagram: @deephypebot infrastructure
What’s next: Experimenting more broadly with creative applications of ML, both new and existing, and looking and applying for ML engineer roles!

Sophia Arakelyan

Mentor: Alec Radford

Works from: San Francisco, CA

Interesting learning: During this program, I significantly increased my expertise in AI. I became more comfortable building models from scratch and working with TensorFlow, and built a general understanding of reinforcement learning. I developed a deeper understanding of NLP through many low-level implementations of common algorithms like word2vec, simple RNNs, and LSTMs, along with various preprocessing approaches for text. In my final project, I combined NLP with RL: an agent navigates to target cells in a gridworld environment in response to commands. The project will be extended to be more language-conditioned as a step toward grounded language learning.
What’s next: Sophia will continue to pursue her education in machine learning with a view towards a career in AI research.

Scholars Demo Day

To wrap up this first class of Scholars, we will bring all the Scholars together with members of the ML community for a demo day at OpenAI’s offices on September 20. There, the Scholars will present their work and discuss their experiences and future ML goals with other attendees. If you would like to attend, sign up here. We’ll record their talks to share later.

We can’t wait to see how our Scholars continue to contribute to the ML community in the future!