Remember that time when IBM’s Watson went to Hollywood with the first “cognitive movie trailer” and it was a horror flick?


How do you create a movie trailer about an artificially enhanced human?

You turn to the real thing – artificial intelligence.

20th Century Fox has partnered with IBM Research to develop the first-ever “cognitive movie trailer” for its upcoming suspense/horror film, “Morgan”. Fox wanted to explore using artificial intelligence (AI) to create a horror movie trailer that would keep audiences on the edge of their seats.

Movies, especially horror movies, are incredibly subjective. Think about the scariest movie you know (for me, it’s the 1976 movie, “The Omen”). I can almost guarantee that if you ask the person next to you, they’ll have a different answer. There are patterns and types of emotions in horror movies that resonate differently with each viewer, and it is the intricacies and interrelations of these that an AI system would have to identify and understand in order to create a compelling movie trailer. Our team was faced with the challenge of not only teaching a system to understand “what is scary,” but then creating a trailer that a majority of viewers would consider “frightening and suspenseful.”

As with any AI system, the first step was training it to understand a subject area. Using machine learning techniques and experimental Watson APIs, our Research team trained a system on the trailers of 100 horror movies, segmenting out each scene from the trailers. Once each trailer was segmented into “moments,” the system completed the following:

1)   A visual analysis and identification of the people, objects and scenery. Each scene was tagged with an emotion from a broad bank of 24 different emotions and labels from across 22,000 scene categories, such as eerie, frightening and loving;

2)   An audio analysis of the ambient sounds (such as the character’s tone of voice and the musical score), to understand the sentiments associated with each of those scenes;

3)   An analysis of each scene’s composition (such as the location of the shot, the image framing and the lighting), to categorize the types of locations and shots that traditionally make up suspense/horror movie trailers.

The analysis was performed on each area separately and in combination with each other using statistical approaches. The system now “understands” the types of scenes that categorically fit into the structure of a suspense/horror movie trailer.
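To make that idea concrete, here is a minimal, purely illustrative Python sketch of combining per-modality analyses into a single score. The feature names and weights are my own assumptions, not IBM’s actual model: each scene is assumed to carry a 0–1 score per modality (visual, audio, composition), merged into one weighted “trailer suitability” score.

```python
# Illustrative sketch only -- NOT IBM's actual model. Feature names and
# weights are hypothetical; each modality score is assumed to be in [0, 1].

def trailer_score(scene, weights=None):
    """Combine per-modality scores into one weighted suitability score."""
    weights = weights or {"visual": 0.4, "audio": 0.35, "composition": 0.25}
    return sum(weights[k] * scene[k] for k in weights)

# A hypothetical "eerie" scene: strong visuals, tense audio, plain framing.
scene = {"visual": 0.9, "audio": 0.7, "composition": 0.6}
print(round(trailer_score(scene), 3))
```

The weights here stand in for whatever the statistical analysis learned about how much each modality matters for suspense/horror trailers.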

Then, it was time for the real test. We fed the system the full-length feature film, “Morgan”. After the system “watched” the movie, it identified 10 moments that would be the best candidates for a trailer. In this case, these happened to reflect tender or suspenseful moments. If we were working with a different movie, perhaps “The Omen”, it might have selected different types of scenes. If we were working with a comedy, it would have a different set of parameters to select different types of moments.
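The shortlisting step can be pictured as a simple top-k selection. This is a hedged sketch with a made-up data layout (a `start` timestamp and a precomputed `score` per moment), not the actual Watson pipeline:

```python
# Illustrative sketch only: pick the k highest-scoring "moments" and hand
# them to the editor in chronological order. The data layout is hypothetical.

def shortlist(moments, k=10):
    """Return the k best-scoring moments, sorted by start time."""
    top = sorted(moments, key=lambda m: m["score"], reverse=True)[:k]
    return sorted(top, key=lambda m: m["start"])

moments = [
    {"start": 12.0, "score": 0.91},   # suspenseful
    {"start": 47.5, "score": 0.40},
    {"start": 63.2, "score": 0.88},   # tender
]
print([m["start"] for m in shortlist(moments, k=2)])  # → [12.0, 63.2]
```

Re-sorting the winners by start time matters: an editor works with the film’s chronology, not the ranking order.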

It’s important to note that there is no “ground truth” with creative projects like this one. Neither our team nor the Fox team knew exactly what we were looking for before we started the process. Based on our training and testing of the system, we knew that tender and suspenseful scenes would be short-listed, but we didn’t know which ones the system would pick to create a complete trailer. As most creative projects go, we thought, “we’ll know it when we see it.”

Our system could select the moments, but it’s not an editor. We partnered with a resident IBM filmmaker to arrange and edit each of the moments together into a comprehensive trailer. You’ll see his expertise in the addition of black title cards, the musical overlay and the order of moments in the trailer.

Not surprisingly, our system chose some moments in the movie that were not included in other “Morgan” trailers. The system allowed us to look at moments in the movie in different ways: moments that might not have traditionally made the cut were now short-listed as candidates. On the other hand, when we reviewed all the scenes that our system selected, one didn’t seem to fit with the bigger story we were trying to tell, so we decided not to use it. Even Watson sometimes ends up with footage on the cutting room floor!

Traditionally, creating a movie trailer is a labor-intensive, completely manual process. Teams have to sort through hours of footage and manually select each and every potential candidate moment. This process is expensive and time-consuming, taking anywhere between 10 and 30 days to complete.

From a 90-minute movie, our system provided our filmmaker with a total of six minutes of footage. From the moment our system watched “Morgan” for the first time to the moment our filmmaker finished the final editing, the entire process took about 24 hours.

Reducing the time of a process from weeks to hours: that is the true power of AI.

The combination of machine intelligence and human expertise is a powerful one. This research investigation is simply the first of many into what we hope will be a promising area of machine and human creativity. We don’t have the only solution for this challenge, but we’re excited about pushing the possibilities of how AI can augment the expertise and creativity of individuals.

AI is being put to work across a variety of industries; helping scientists discover promising treatment pathways to fight diseases or helping law experts discover connections between cases. Film making is just one more example of how cognitive computing systems can help people make new discoveries.

Machine Learning Has Vision


By Liz Young, Marketing Coordinator 

Everyone is talking about machine learning: the benefits it has, the edge it gives companies over the competition, the seemingly limitless capabilities the technology promises. It’s easy to get lost in all the excitement, and let’s be honest, some of the use cases for implementing machine learning seem like something straight from Star Trek.

But, the truth is, we are not that far off from a Star Trek-esque existence.  I mean, we already have tablets, big screen TVs, cell phones, and replicators, so is it such a stretch to believe we can use computers to automate learned tasks?   

Take, for example, IBM’s PowerAI Vision software. PowerAI Vision abstracts away much of the complexity of machine learning, making it far easier to adopt. The software uses image recognition to help enterprises implement computer vision as they introduce AI into their infrastructures. Image recognition involves analyzing and classifying pixels; the more exposure the model has to certain images, the stronger it becomes.
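To give a flavor of what “analyzing and classifying pixels” means in practice, here is a toy nearest-centroid classifier in plain Python. This is purely illustrative and is not the PowerAI Vision API; it only shows the underlying idea that a model trained on labeled images gets better at sorting new ones.

```python
# Toy sketch only -- not the PowerAI Vision API. Images are flattened
# pixel-intensity lists; training computes one mean vector per label.

from math import dist

def train(labeled_images):
    """Average the pixel vectors of each label into a centroid."""
    sums, counts = {}, {}
    for pixels, label in labeled_images:
        acc = sums.setdefault(label, [0.0] * len(pixels))
        for i, value in enumerate(pixels):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {lbl: [v / counts[lbl] for v in vec] for lbl, vec in sums.items()}

def classify(model, pixels):
    """Label a new image by its nearest centroid (Euclidean distance)."""
    return min(model, key=lambda lbl: dist(model[lbl], pixels))

model = train([
    ([0.0, 0.1], "defect"),
    ([0.1, 0.0], "defect"),
    ([0.9, 1.0], "ok"),
])
print(classify(model, [0.05, 0.05]))  # → defect
```

Real systems replace the centroid with a deep neural network, but the training loop is the same in spirit: more labeled examples per class yields a stronger model.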

How is PowerAI Vision Used Today? 


Workplace Safety

AI technologies can be implemented to monitor and enforce safety regulations. Embedded computer vision applications can flag workers entering hazardous environments, or scan a construction area, alerting supervisors to act on a variety of scenarios.

Quality Control 

Manufacturers and retailers are using AI technology to check the quality of goods and ensure they align with their standards and expectations. The software can search for flaws and inconsistencies in products on conveyor belts, and note when they don’t meet requirements.


Traffic and Law Enforcement

PowerAI Vision can be tapped for regulatory and law enforcement applications. The software can recognize license plates, search for oversized or unsafe vehicles, and even note road conditions, reporting findings back to law enforcement or traffic agencies.

With this type of machine learning, organizations can label, train, monitor, and deploy models to streamline processes; train models to classify images and detect objects; introduce auto labeling with deep learning models; and employ video analytics for training and inference. Imagine what this kind of capability can do for your organization!


Liz Young writes and designs content for the Evolving Solutions blog, LinkedIn page, Twitter feed, Facebook page and Instagram account.  

Like what you read? Follow Liz on LinkedIn at