# Use Cases

Explore real-world scenarios where DDEX Suite delivers value across the music industry ecosystem. From major-label catalog migrations to streaming-platform ingestion pipelines, see how organizations use its high-performance parsing and deterministic building capabilities.
## Major Record Labels
### Major Label Group - Catalog Migration

**Scenario:** XYZ Music Group needs to migrate its entire back catalog (3M+ recordings) from a legacy system to a new distribution platform that requires DDEX ERN 4.3.
```javascript
// Parse DDEX
const { DdexParser } = require('ddex-parser');
const parser = new DdexParser();

// Build DDEX
const { DdexBuilder } = require('ddex-builder');
const builder = new DdexBuilder();

const db = new DatabaseConnection(); // placeholder for your legacy-catalog client

// Apply deterministic configuration for reproducible migration
builder.applyPreset('deterministic_migration');

// Stream from legacy database to DDEX XML files
const catalogStream = db.streamCatalog({ batchSize: 1000 });

for await (const batch of catalogStream) {
  const releases = batch.map(legacyRelease => ({
    releaseId: legacyRelease.upc,
    identifiers: {
      upc: legacyRelease.upc,
      catalogNumber: legacyRelease.catalog_no,
      grid: legacyRelease.grid_id
    },
    title: [{ text: legacyRelease.title, languageCode: 'en' }],
    displayArtist: legacyRelease.artist_name,
    releaseDate: new Date(legacyRelease.release_date),
    tracks: legacyRelease.tracks.map(track => ({
      position: track.sequence,
      isrc: track.isrc,
      title: track.title,
      duration: track.duration_seconds,
      displayArtist: track.artist || legacyRelease.artist_name
    }))
  }));

  // Generate DDEX message with stable IDs for cross-batch consistency
  const { xml, warnings, canonicalHash } = await builder.build({
    header: {
      messageSender: { partyName: [{ text: 'XYZ Music Group' }] },
      messageRecipient: { partyName: [{ text: 'YouTube' }] }
    },
    version: '4.3',
    profile: 'AudioAlbum',
    releases
  }, {
    idStrategy: 'stable-hash',
    stableHashConfig: {
      recipe: 'v1',
      cache: 'sqlite' // External KV cache for ID persistence
    }
  });

  // Store hash for verification
  await db.storeMigrationHash(batch[0].id, canonicalHash);
  await saveToDistributionQueue(xml);
}
```
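Legacy identifiers are rarely clean. Before reusing `legacyRelease.upc` as a release ID, a quick check-digit validation can catch corrupt rows early. This standalone helper is a sketch of that pre-flight step, not part of the DDEX Suite API:

```javascript
// Validate a 12-digit UPC-A code using the GS1 mod-10 check digit.
function isValidUpc(upc) {
  if (!/^\d{12}$/.test(upc)) return false;
  const digits = upc.split('').map(Number);
  // Positions 1, 3, 5, ... (0-based even indices) are weighted 3, the rest 1
  const sum = digits
    .slice(0, 11)
    .reduce((acc, d, i) => acc + d * (i % 2 === 0 ? 3 : 1), 0);
  return (10 - (sum % 10)) % 10 === digits[11];
}
```

Rows that fail the check can be routed to a manual-review queue instead of aborting the whole batch.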
### Major Label - Weekly New Release Feed

**Scenario:** XYZ needs to generate weekly DDEX feeds covering all new releases across its labels for 50+ DSP partners.
```python
from ddex_builder import DdexBuilder
from datetime import datetime, timedelta
import pandas as pd

builder = DdexBuilder()

# Load this week's releases from the data warehouse
releases_df = pd.read_sql("""
    SELECT * FROM releases
    WHERE release_date BETWEEN %s AND %s
      AND status = 'APPROVED'
    ORDER BY priority DESC, release_date
""", warehouse_conn,  # your data-warehouse connection or SQLAlchemy engine
    params=[datetime.now(), datetime.now() + timedelta(days=7)])

# Group by DSP requirements
for dsp, dsp_config in DSP_CONFIGS.items():
    # Filter releases for this DSP based on territory rights
    dsp_releases = filter_by_territory_rights(releases_df, dsp_config['territories'])

    # Build DDEX message with generic configuration as base
    if dsp == 'youtube':
        builder.apply_preset('youtube_album')
    else:
        builder.apply_preset('audio_album')  # Generic baseline

    result = builder.build({
        'header': {
            'message_sender': {'party_name': [{'text': 'XYZ Music Entertainment'}]},
            'message_recipient': {'party_name': [{'text': dsp_config['name']}]}
        },
        'version': dsp_config['ern_version'],
        'profile': 'AudioAlbum',
        'releases': dsp_releases.to_dict('records'),
        'deals': generate_deals_for_dsp(dsp_releases, dsp_config)
    })

    # Upload to the DSP's FTP/API
    upload_to_dsp(dsp, result.xml)
```
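`filter_by_territory_rights` is left undefined above. A minimal sketch, assuming each row carries a `territories` column holding a list of ISO 3166-1 codes, with `'Worldwide'` as a wildcard:

```python
import pandas as pd

def filter_by_territory_rights(releases_df: pd.DataFrame, territories: list) -> pd.DataFrame:
    """Keep releases cleared for at least one of the DSP's territories."""
    wanted = set(territories)
    mask = releases_df['territories'].apply(
        lambda cleared: 'Worldwide' in cleared or bool(wanted & set(cleared))
    )
    return releases_df[mask]

# Example: only the first release is cleared for a US-focused DSP
df = pd.DataFrame([
    {'title': 'Album A', 'territories': ['US', 'CA']},
    {'title': 'Album B', 'territories': ['JP']},
])
us_releases = filter_by_territory_rights(df, ['US'])
```

Real rights data is usually deal-scoped and time-bounded, so a production version would also check deal validity windows.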
## Digital Distributors
### Independent Distributor - New Release Pipeline

**Scenario:** An independent distributor delivers 100,000+ new releases daily from independent artists and needs to generate DDEX feeds for multiple platforms.
```javascript
// Build DDEX
const { DdexBuilder } = require('ddex-builder');
const Queue = require('bull'); // bull exports the Queue class directly

const builder = new DdexBuilder();
const releaseQueue = new Queue('releases');

releaseQueue.process(async (job) => {
  const { artistSubmission } = job.data;

  // Transform the artist's simple form data into DDEX
  const release = {
    identifiers: {
      upc: await generateUPC(artistSubmission),
      proprietary: [{
        namespace: 'indieDistro',
        value: artistSubmission.releaseId
      }]
    },
    title: [{ text: artistSubmission.albumTitle }],
    displayArtist: artistSubmission.artistName,
    releaseType: artistSubmission.releaseType,
    genre: mapToAVSGenre(artistSubmission.genre),
    releaseDate: new Date(artistSubmission.releaseDate),
    // The callback is async (ISRCs may need to be generated), so await the batch
    tracks: await Promise.all(artistSubmission.tracks.map(async (track, idx) => ({
      position: idx + 1,
      isrc: track.isrc || await generateISRC(track),
      title: track.title,
      duration: track.durationSeconds,
      displayArtist: track.featuring
        ? `${artistSubmission.artistName} feat. ${track.featuring}`
        : artistSubmission.artistName,
      isExplicit: track.hasExplicitLyrics
    }))),
    images: [{
      type: 'FrontCoverImage',
      resourceReference: `IMG_${artistSubmission.releaseId}`,
      uri: artistSubmission.artworkUrl
    }]
  };

  // Generate DDEX for each target platform
  const platforms = ['spotify', 'amazon', 'youtube'];
  for (const platform of platforms) {
    const { xml } = await builder.build({
      header: createHeaderForPlatform(platform),
      version: PLATFORM_CONFIGS[platform].ernVersion,
      releases: [release],
      deals: [createStandardIndieDeals(release, platform)]
    });
    await queueForDelivery(platform, xml);
  }
});
```
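`PLATFORM_CONFIGS` and `createHeaderForPlatform` are assumed above. One possible shape, with illustrative values only (each DSP's onboarding documentation dictates the real ERN version and party names):

```javascript
// Hypothetical per-platform delivery settings; values are placeholders.
const PLATFORM_CONFIGS = {
  spotify: { ernVersion: '4.3', name: 'Spotify' },
  amazon: { ernVersion: '3.8.2', name: 'Amazon Music' },
  youtube: { ernVersion: '4.3', name: 'YouTube' }
};

// Build a DDEX message header addressed to the given platform.
function createHeaderForPlatform(platform) {
  return {
    messageSender: { partyName: [{ text: 'IndieDistro' }] },
    messageRecipient: { partyName: [{ text: PLATFORM_CONFIGS[platform].name }] }
  };
}
```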
## Streaming Platforms
### YouTube - Ingestion Pipeline

**Scenario:** YouTube receives 1M+ DDEX messages daily and needs to normalize them for internal processing.
```python
from ddex_parser import DdexParser
from ddex_builder import DdexBuilder
from datetime import datetime
import asyncio

parser = DdexParser()
builder = DdexBuilder()

async def normalize_incoming_ddex(raw_xml: bytes) -> dict:
    """Normalize any DDEX version to the internal format."""
    # Parse incoming DDEX (any version)
    parsed = await parser.parse_async(raw_xml)

    # Normalize to internal canonical format
    normalized_releases = []
    for release in parsed.flat.releases:
        # Apply YouTube-specific business rules
        normalized = {
            **release,
            'youtube_id': generate_youtube_id(release),
            'availability': calculate_availability(release),
            'content_rating': derive_content_rating(release),
            'algorithmic_tags': generate_ml_tags(release)
        }
        normalized_releases.append(normalized)

    # Rebuild as standardized ERN 4.3 for internal systems
    result = await builder.build_async({
        'header': create_internal_header(),
        'version': '4.3',  # Standardize on the latest version
        'releases': normalized_releases,
        'deals': parsed.flat.deals,
        'preflight_level': 'strict'  # Ensure compliance
    }, {
        'determinism': {
            'canonMode': 'db-c14n',
            'sortStrategy': 'canonical'
        }
    })

    return {
        'normalized_xml': result.xml,
        'canonical_hash': result.canonical_hash,
        'metadata': extract_searchable_metadata(normalized_releases),
        'ingestion_timestamp': datetime.now()
    }
```
## Enterprise Catalog Management
### Major Label Group - Multi-Format Delivery

**Scenario:** XYZ Music Group needs to deliver the same release in different formats (physical, digital, streaming) with format-specific metadata.
```python
from ddex_builder import DdexBuilder
from enum import Enum

class ReleaseFormat(Enum):
    STREAMING = "streaming"
    DOWNLOAD = "download"
    PHYSICAL_CD = "physical_cd"
    VINYL = "vinyl"

class MultiFormatBuilder:
    def __init__(self):
        self.builder = DdexBuilder()

    def build_format_specific_release(self, master_release, format_type):
        """Generate format-specific DDEX from a master release."""
        # Base release data
        release = {**master_release}

        if format_type == ReleaseFormat.STREAMING:
            # Streaming-specific adaptations
            release['tracks'] = self.add_streaming_metadata(release['tracks'])
            release['technical_details'] = {
                'file_format': 'AAC',
                'bitrate': 256,
                'sample_rate': 44100
            }
        elif format_type == ReleaseFormat.VINYL:
            # Vinyl-specific adaptations
            release['tracks'] = self.organize_for_vinyl_sides(release['tracks'])
            release['physical_details'] = {
                'format': 'Vinyl',
                'configuration': '2xLP',
                'speed': '33RPM',
                'color': 'Black'
            }

        return self.builder.build({
            'version': '4.3',
            'profile': self.get_profile_for_format(format_type),
            'releases': [release],
            'deals': self.generate_format_specific_deals(release, format_type)
        })
```
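`organize_for_vinyl_sides` is referenced but not shown. A sketch that splits tracks across sides by cumulative duration (22 minutes per side is a common mastering guideline, not a DDEX rule):

```python
def organize_for_vinyl_sides(tracks, side_capacity_seconds=22 * 60):
    """Assign side labels (A, B, ...) and per-side positions by cumulative duration.

    Each track is a dict with at least a `duration` key in seconds.
    """
    sides, current, elapsed = [], [], 0
    for track in tracks:
        # Start a new side when this track would overflow the current one
        if current and elapsed + track['duration'] > side_capacity_seconds:
            sides.append(current)
            current, elapsed = [], 0
        current.append(track)
        elapsed += track['duration']
    if current:
        sides.append(current)
    return [
        {**t, 'side': chr(ord('A') + i), 'side_position': j + 1}
        for i, side in enumerate(sides)
        for j, t in enumerate(side)
    ]
```

A real implementation would also honor artist-approved side breaks rather than splitting purely by duration.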
## Trifecta - The "Parse → Modify → Build" Workflow

This is the primary use case, demonstrating the power of the full suite:
```javascript
// Parse DDEX
const { DdexParser } = require('ddex-parser');
const parser = new DdexParser();

// Build DDEX
const { DdexBuilder } = require('ddex-builder');
const builder = new DdexBuilder();

const fs = require('fs/promises');

// Apply generic baseline configuration
builder.applyPreset('audio_album', { lock: true });

// 1. PARSE an existing message
const originalXml = await fs.readFile('path/to/original.xml');
const parsedMessage = await parser.parse(originalXml);

// 2. MODIFY the data in a simple, programmatic way
const firstRelease = parsedMessage.flat.releases[0];
firstRelease.releaseDate = new Date('2026-03-01T00:00:00Z');
firstRelease.tracks.push({
  position: firstRelease.tracks.length + 1,
  title: 'New Bonus Track',
  isrc: 'USXYZ2600001',
  duration: 180,
  displayArtist: firstRelease.displayArtist
});

// 3. BUILD a new, deterministic XML message from the modified object
const { xml, warnings, canonicalHash, reproducibilityBanner } = await builder.build({
  header: parsedMessage.graph.messageHeader,
  version: parsedMessage.flat.version,
  releases: parsedMessage.flat.releases,
  deals: parsedMessage.flat.deals,
}, {
  determinism: {
    canonMode: 'db-c14n',
    emitReproducibilityBanner: true,
    verifyDeterminism: 3 // Build 3 times to verify determinism
  },
  idStrategy: 'stable-hash'
});

if (warnings.length > 0) {
  console.warn('Build warnings:', warnings);
}

// Verify deterministic output
console.log(`Canonical hash: ${canonicalHash}`);
console.log(`Reproducibility: ${reproducibilityBanner}`);

// The new XML is ready to be sent or validated by DDEX Workbench
await fs.writeFile('path/to/updated.xml', xml);
```
## Next Steps

Ready to implement these use cases in your organization?
- Get Started: Follow our Getting Started Guide for installation
- API Reference: Explore the complete API Documentation
- Examples: See more detailed Examples with full code samples
- Performance: Learn about Performance Optimization for large-scale deployments