
Result APIs Overview

This section provides detailed information on all available result APIs including links to OpenAPI specifications and online documentation.

All of these result APIs require an API key for access. Partners receive their API key from their Vionlabs integration lead. Note that although a single API key authenticates calls to all Vionlabs Discovery Platform APIs, access authorization is configured separately for each API, based on the agreed commercial contract.

Authentication

Authentication for all Result APIs is handled by passing a key parameter with your API key in each request. This key is used to identify your organization and verify your access to the requested API resources.
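
As a minimal sketch, here is how a client might attach the key parameter to a request URL. The base URL, endpoint path, and extra parameter names below are illustrative assumptions, not the real API surface; only the `key` query parameter is described above.

```python
from urllib.parse import urlencode

def build_request_url(base_url: str, api_key: str, **params: str) -> str:
    """Attach the partner API key and any extra query parameters to a request URL."""
    query = urlencode({"key": api_key, **params})
    return f"{base_url}?{query}"

# Hypothetical endpoint and parameter names, for illustration only.
url = build_request_url("https://api.example.com/v1/similar", "YOUR_API_KEY", item_id="movie-123")
print(url)
# https://api.example.com/v1/similar?key=YOUR_API_KEY&item_id=movie-123
```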

Getting an API Key

To get an API key, please contact our customer support team. Each customer is provided with a unique API key that grants access to the specific APIs included in their service agreement.

Most of the APIs provide results relating to items specified in a partner's catalog. The item identifiers used in both requests and responses are always the partner's identifiers for the items as specified in the catalog; internal Vionlabs identifiers are never exposed by the APIs.

Some API methods provide results for series items. In many cases, the API can be configured by Vionlabs to return either the item identifier of the series (the default) or the identifier of the earliest episode of the series available in the catalog. This is configured independently for each partner integration; contact your Vionlabs integration lead if you wish to change the default. First episodes are calculated automatically by the platform, using available window information where provided.

Content Similarity APIs

The Vionlabs Content Similarity APIs provide access to content similarity results. For a given item, the API returns a list of items similar to it, where similarity is determined from the content of the items themselves.
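
To illustrate how a client might consume such a result, the sketch below ranks the similar items from a hypothetical response payload; the field names (`similar`, `id`, `score`) are assumptions, not the documented schema.

```python
# Hypothetical response shape; the real field names may differ.
response = {
    "item_id": "movie-123",
    "similar": [
        {"id": "movie-456", "score": 0.93},
        {"id": "movie-789", "score": 0.88},
        {"id": "movie-321", "score": 0.71},
    ],
}

def top_similar(payload: dict, n: int = 2) -> list[str]:
    """Return the partner item IDs of the n most similar titles."""
    ranked = sorted(payload["similar"], key=lambda s: s["score"], reverse=True)
    return [s["id"] for s in ranked[:n]]

print(top_similar(response))  # ['movie-456', 'movie-789']
```

Note that the IDs are the partner's own catalog identifiers, per the identifier convention described earlier.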

To understand the details of this API, you may download the OpenAPI specification or access the detailed online documentation.

Binge Markers API

The Binge Markers API provides access to results for repetitive segments within the content that can be skipped for an enhanced binge-watching experience. These results are provided for individual assets (those identified in the catalog as type standalone or episode and for which asset information has been provided). The results are given as start and end timestamps indicating the interval for three types of skippable segments:

  • Intro/opening credits
  • Recap/summary from a previous episode or season
  • End credits

To understand the details of this API, you may download the OpenAPI specification or access the detailed online documentation.

Fingerprint+ API

The Fingerprint+ API provides access to the raw fingerprint data plus additional metadata such as mood categories and genres. These results are provided for individual assets (those identified in the catalog as type standalone or episode and for which asset information has been provided).

To understand the details of this API, you may download the OpenAPI specification or access the detailed online documentation.

Ad Breaks API

The Ad Breaks API provides access to suggested time slots for inserting an ad. The focus is on quality over quantity: only breaks that do not disrupt the story are detected. These results are provided for individual assets (those identified in the catalog as type standalone or episode and for which asset information has been provided), in either frame format or seconds format. Ad breaks are created by a combination of multiple neural networks and ML algorithms and are categorized into four ranks:

  • Rank 1: An ad break where 4/4 networks agree that the detected slot is a good place to insert an ad
  • Rank 2: An ad break where 3/4 networks agree that the detected slot is a good place to insert an ad
  • Rank 3: An ad break where 2/4 networks agree that the detected slot is a good place to insert an ad
  • Rank 4: A filler ad break to make sure there is at least one detected slot for every 5 minutes of content

As the rank descriptions indicate, a lower rank number reflects greater confidence that the moment is suitable for inserting ads. Rank 4 should be considered a filler option for cases where an ad break is desired at a particular moment but no break of rank 1-3 has been detected.
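
The rank-then-fallback selection above can be sketched as follows. The payload shape (`time` in seconds, `rank`) is an assumption for illustration.

```python
# Hypothetical ad-break payload in seconds format; field names are assumptions.
breaks = [
    {"time": 310.0, "rank": 1},
    {"time": 640.0, "rank": 3},
    {"time": 900.0, "rank": 4},
]

def select_breaks(candidates: list[dict], max_rank: int = 3) -> list[float]:
    """Keep only ranks 1..max_rank; fall back to rank-4 fillers if nothing qualifies."""
    preferred = [b["time"] for b in candidates if b["rank"] <= max_rank]
    if preferred:
        return preferred
    return [b["time"] for b in candidates if b["rank"] == 4]

print(select_breaks(breaks))              # [310.0, 640.0]
print(select_breaks(breaks, max_rank=0))  # [900.0]
```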

To understand the details of this API, you may download the OpenAPI specification or access the detailed online documentation.

Contextual Ad Breaks API

The Contextual Ad Breaks API elevates the ad placement process by suggesting optimal time slots for ad insertion. It also enriches these suggestions with comprehensive metadata and relevant IAB keywords, allowing for a more targeted and effective ad placement strategy.

The API provides access to suggested time slots for inserting an ad. Additionally, for each ad break it provides a contextual list of keywords, mood tags, and IAB categories that are relevant to the content at that specific time slot.
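
For example, an ad-decisioning layer might match the per-break contextual metadata against a campaign's target IAB category. The payload fields below (`iab_categories`, `keywords`) are illustrative assumptions.

```python
# Illustrative contextual break payload; field names are assumptions.
contextual_breaks = [
    {"time": 310.0, "iab_categories": ["IAB17"], "keywords": ["basketball", "arena"]},
    {"time": 640.0, "iab_categories": ["IAB1"],  "keywords": ["concert", "stage"]},
]

def breaks_for_category(breaks: list[dict], category: str) -> list[float]:
    """Return the time slots whose contextual metadata matches a target IAB category."""
    return [b["time"] for b in breaks if category in b["iab_categories"]]

print(breaks_for_category(contextual_breaks, "IAB17"))  # [310.0]
```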

To understand the details of this API, you may download the OpenAPI specification or access the detailed online documentation.

Thumbnails API

The Thumbnails API provides access to recommended frames that are suitable to use as thumbnails for a given video file. Users may specify preferences for the general characteristics of the frame (such as brightness and stillness), as well as fine-tune character-specific attributes (including facial expressions and clarity, position in frame, and close-ups). Users can also control the aesthetic quality and the overall relevance of the suggested frames to the video's content. Finally, the API aims to enhance diversity among the suggested thumbnails by spacing out results based on a minimum time interval, which users may set according to their preferences.
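
The minimum-interval spacing behaviour can be pictured with the sketch below, which greedily drops candidate frames that fall too close to an already-kept one. This is an illustration of the concept, not the platform's actual selection algorithm.

```python
# Hypothetical candidate thumbnail timestamps (seconds), for illustration only.
candidates = [12.5, 14.0, 300.2, 301.0, 845.7]

def space_out(timestamps: list[float], min_interval: float) -> list[float]:
    """Greedily keep frames, dropping any closer than min_interval seconds
    to a frame already kept (a sketch of the diversity spacing described above)."""
    kept: list[float] = []
    for t in sorted(timestamps):
        if all(abs(t - k) >= min_interval for k in kept):
            kept.append(t)
    return kept

print(space_out(candidates, 60.0))  # [12.5, 300.2, 845.7]
```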

To understand the details of this API, you may download the OpenAPI specification or access the detailed online documentation.

Previews API

The Previews API provides access to timestamps that represent segments of the video suitable for giving users a peek into the content of the video file. The selected segments can be any length and can target main characters or various predefined emotional categories (clip types). If a series or season is specified, all constituent episodes are used in the search for suitable clips. Clip selection always aims to provide the most relevant portion of the video while honoring the specified parameters. The API supports speech avoidance for clip start and end points, which can also be controlled by parameter choice. Finally, the API aims to enhance diversity among the suggested segments by spacing out results based on a minimum interval of time, which users may also set according to their preferences.

To understand the details of this API, you may download the OpenAPI specification or access the detailed online documentation.

Emotions API

The Emotions API provides access to a sequence of extracted emotions for an entire video, each describing how viewers are likely to feel about a 5-second clip. The current version supports six basic emotions: anger, enjoyment, surprise, disgust, fear, and sadness. For each clip, the API returns the three emotions that best describe it. In addition, each clip is annotated with valence (V), arousal (A), and dominance (D) values; VAD is a commonly used psychological model that characterizes an emotion by its levels of happiness, excitement, and dominance, respectively. This VAD information provides further options for more subtle emotion-related analysis.
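
A single clip result might be consumed as in the sketch below; the field names (`emotions`, `label`, `score`, `vad`) are assumed for illustration and may differ from the actual schema.

```python
# Illustrative 5-second clip result; emotion and VAD field names are assumptions.
clip = {
    "start": 120.0,
    "emotions": [
        {"label": "surprise",  "score": 0.61},
        {"label": "enjoyment", "score": 0.27},
        {"label": "fear",      "score": 0.12},
    ],
    "vad": {"valence": 0.7, "arousal": 0.8, "dominance": 0.4},
}

def dominant_emotion(clip: dict) -> str:
    """Return the highest-scoring of the clip's top-three emotions."""
    return max(clip["emotions"], key=lambda e: e["score"])["label"]

print(dominant_emotion(clip))  # surprise
```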

To understand the details of this API, you may download the OpenAPI specification or access the detailed online documentation.

Video-Text Retrieval API

The Video-Text Retrieval API offers semantic search capabilities for video assets: titles can be found using textual queries expressed in natural language, enabling content discovery and exploration across video catalogs.

The API returns a list of video assets that are semantically related to the query, along with a relevance score. The API is designed to support a wide range of queries, including general topics, specific entities, and complex questions. Video assets in customer catalogs do not require any upfront tagging or metadata enrichment to be searchable. Vionlabs' indexing pipeline automatically extracts relevant information from the video content itself.
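
A caller might post-process the returned list as sketched below, ordering by relevance and applying a score cutoff. The field names (`id`, `relevance`) and the cutoff value are assumptions for illustration.

```python
# Hypothetical search response; field names are assumptions.
results = [
    {"id": "movie-42", "relevance": 0.91},
    {"id": "movie-17", "relevance": 0.78},
    {"id": "movie-99", "relevance": 0.83},
]

def rank_results(hits: list[dict], min_relevance: float = 0.8) -> list[str]:
    """Order hits by relevance and drop weak matches."""
    strong = [h for h in hits if h["relevance"] >= min_relevance]
    return [h["id"] for h in sorted(strong, key=lambda h: h["relevance"], reverse=True)]

print(rank_results(results))  # ['movie-42', 'movie-99']
```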

Here are example queries that can be used for content discovery:

  • "I want to laugh"
  • "Financial crisis"
  • "Emotional and inspiring stories about civil rights and social justice"
  • "Mysterious and thrilling movies with a plot twist and female protagonist"

To understand the details of this API, you may download the OpenAPI specification or access the detailed online documentation.

Scene Retrieval API

The Scene Retrieval API exposes video embeddings for scenes across customers' video catalogs. These embeddings can be used in a retrieval system to find scenes matching a user's text query.

The use case for this API is as follows:

  1. A customer gets scene embeddings for their video catalog using this API.
  2. A customer indexes these embeddings in a vector database.
  3. Vionlabs instructs the customer on how to generate an embedding from a user text query.
  4. A customer uses the vector database to find scenes matching the user query (by finding scene embeddings most similar to the query embedding).
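
Step 4 boils down to nearest-neighbour search over embeddings, typically by cosine similarity. The toy sketch below uses 3-dimensional vectors purely for illustration; real scene embeddings are much higher-dimensional, and a production system would use a vector database rather than a linear scan.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between a query embedding and a scene embedding."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy embeddings; real ones come from the API and the query-embedding recipe in step 3.
scenes = {
    "scene-1": [0.9, 0.1, 0.0],
    "scene-2": [0.1, 0.8, 0.1],
}
query = [1.0, 0.0, 0.0]

best = max(scenes, key=lambda s: cosine_similarity(query, scenes[s]))
print(best)  # scene-1
```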

Here are example queries that can be used for content discovery:

  • "Scene depicting a hilarious moment"
  • "A thrilling car chase"
  • "A romantic scene with a sunset"
  • "An intense basketball game"

To understand the details of this API, you may download the OpenAPI specification or access the detailed online documentation.

Profanity Detection API

The Profanity Detection API provides access to the results of profane language detection in the content. For a given asset, the API is capable of:

  • returning a list of speech segments containing profane language, i.e. time ranges in the video where profanity is detected, along with a corresponding confidence score
  • classifying the asset into one of the predefined categories based on the amount of profanity detected in the content
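
For example, a player could mute the detected time ranges, keeping only confident detections. The payload fields (`start`, `end`, `confidence`) and the threshold are assumptions for illustration.

```python
# Illustrative detection payload; field names are assumptions.
detections = [
    {"start": 41.2,  "end": 42.0,  "confidence": 0.95},
    {"start": 310.5, "end": 311.1, "confidence": 0.40},
]

def mute_intervals(segments: list[dict], min_confidence: float = 0.8) -> list[tuple[float, float]]:
    """Time ranges a player could mute, keeping only confident detections."""
    return [(s["start"], s["end"]) for s in segments if s["confidence"] >= min_confidence]

print(mute_intervals(detections))  # [(41.2, 42.0)]
```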

To understand the details of this API, you may download the OpenAPI specification or access the detailed online documentation.

Nudity Detection API

The Nudity Detection API provides access to the results of nudity detection in the content. For a given asset, the API is capable of:

  • returning a list of sampled timestamps with a nudity detection score
  • classifying the asset into one of the predefined categories based on the amount of nudity detected in the content

To understand the details of this API, you may download the OpenAPI specification or access the detailed online documentation.

Content Summary API

The Content Summary API provides textual summaries and keyword extraction for media content.

For each media asset, the API provides:

  • AI-generated chapters (sections centered around the same topic)
  • Per-chapter summaries and keywords
  • Overall asset summary and keywords
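
The chapter structure lends itself to building a navigation index, as in the sketch below. The payload shape (`chapters`, `start`, `title_keywords`) is an assumption, not the documented schema.

```python
# Illustrative summary payload; field names are assumptions.
summary = {
    "summary": "A three-part documentary on deep-sea exploration.",
    "keywords": ["ocean", "submarine", "exploration"],
    "chapters": [
        {"start": 0.0,   "title_keywords": ["introduction"], "summary": "Sets the scene."},
        {"start": 480.0, "title_keywords": ["descent"],      "summary": "The first dive."},
    ],
}

def chapter_index(payload: dict) -> list[tuple[float, str]]:
    """Build a simple (timestamp, label) navigation index from the chapter list."""
    return [(c["start"], ", ".join(c["title_keywords"])) for c in payload["chapters"]]

print(chapter_index(summary))  # [(0.0, 'introduction'), (480.0, 'descent')]
```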

This enables better content discovery and navigation for the content, helping users understand the structure and topics covered in audio-focused media.

To understand the details of this API, you may download the OpenAPI specification or access the detailed online documentation.