Deploying Apple’s HTTP Live Streaming in a GNU Linux Environment

Update: Check out my article on HTTP Live Video Streaming; the installation instructions have been updated there as well.

As you might know, I’m part of a team producing a weekly audio podcast. We’re also offering a live stream during our recording sessions. Currently this is done with a Nicecast/Icecast installation.

With iPhone OS 3.0 Apple introduced a somewhat new technology they call HTTP Live Streaming. As the name suggests, it’s a streaming “protocol” based on HTTP. You may use it to share your audio or video with the world. To find out more, please consider reading the official documentation from Apple or their Internet-Draft submitted for standardization.

I’m not going to describe all possible usage scenarios here; I suggest reading the links provided above. If you did, you might want to deploy such a solution yourself. At least, we wanted to. So here is what we have come up with.

HTTP Live Streaming consists of 4 parts:

  • Source (LineIn, Microphone,..)
  • Encoder
  • Media Segmenter
  • Webserver

Our goal was to do as little as possible on our local systems (laptops). Hit “Stream” and everything else just works. So this naturally means having the encoder and segmenter on a server-like system. In our setup this is a rented Linux system in a datacenter.

As noted earlier we’re already providing a Live Stream with Icecast (Server) and Nicecast (Source, Laptop). Nicecast offers the possibility to capture any Mac Audio Source and send it to a remote Icecast installation.

Since we wanted to keep this setup as a fallback we had two options: either deploy a second solution for HTTP streaming, or capture the Icecast stream on the remote (Linux) system and feed it to the HTTP Live Streaming setup as the source. We chose the latter. This adds some delay to the delivery but simplifies the operation. Nothing changes on the recording side; we simply hit “Start Broadcast” and everything works.

The two tricky parts are the encoder and the segmenter.

The segmenter is software which splits a transport stream into chunks (e.g. 10 seconds each) and keeps a playlist file updated with these chunks. The cost-free segmenter provided by Apple wasn’t usable for us since it’s a Mac binary and we wanted to do this part on our server, so we needed something that’s available for Linux. Thankfully, Carson McDonald has written an open-source implementation of such a segmenter that runs just fine on Linux.
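
To give an idea of what the segmenter maintains, a media playlist (.m3u8) for a live stream looks roughly like this (the file names are only illustrative; the actual names depend on your configuration):

#EXTM3U
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:120
#EXTINF:10,
./ts/bitsundso_120.ts
#EXTINF:10,
./ts/bitsundso_121.ts
#EXTINF:10,
./ts/bitsundso_122.ts

As new chunks are produced, old entries are dropped, new ones are appended and the media sequence number advances; clients simply keep re-fetching this playlist to follow the live stream.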

The encoder has to produce an MPEG transport stream. For large audio/video producers (television, radio, ...) this isn’t a problem; they usually have hardware-based encoders that output exactly this. With a limited budget we were looking for a software solution. To the rescue comes, as always, the great open-source FFmpeg software. FFmpeg can not only encode the streams in all required codecs (HE-AAC, MP3) but also output these elementary streams in MPEG-TS.
As an alternative to the MP3 codec we wanted to provide HE-AAC streams. AAC is far more efficient than MP3, and the High Efficiency variant is even more efficient at low bitrates than “normal” (Low Complexity) AAC. Luckily, the 3GPP offers an open-source reference encoder that is capable of producing files in this codec. tipok has written a patch to build the 3GPP’s encoder as a library and also a patch for ffmpeg to include the library.

The last part is the easy part: a web server. Any webspace is suitable; in our case it’s an Apache2 installation. Apache is probably not the best choice for delivering static files, but we need to do some tricky authentication. Therefore we decided to go with the very flexible and customizable Apache instead of nginx or lighttpd. At least for now.

To put it all together you could write some scripts that encode, split the chunks and copy the resulting files to your webspace. However, Carson McDonald has not only written a segmenter but also a very configurable Ruby script that does all of those parts. A complete solution.

Since the Ruby script uses ffmpeg, any source understood by ffmpeg can be used. This includes devices (/dev/audio, ...), UNIX pipes, RTP streams, ... and, in our case, an Icecast HTTP stream.
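
As a quick sanity check you can point ffmpeg at the source by hand before wiring it into the script. For example (host, mount point and credentials are placeholders), this records ten seconds of the Icecast stream into a transport stream file:

ffmpeg -i http://user:password@localhost:8000/mount -acodec libmp3lame -ab 128k -ac 2 -t 10 -f mpegts test.ts

If test.ts plays back, the same input location can be used as input_location in the configuration below.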

The output location we’re using for the generated files (playlist files, *.ts files) is a local directory which is served by Apache. However, the script is also capable of uploading the files to remote webspace (SCP, Amazon S3, ...).
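
If the playlists or segments don’t load on the iPhone, it may be worth checking that Apache serves them with the MIME types Apple documents for HTTP Live Streaming; something like this (a plain mod_mime setting, not part of the script) in the vhost or an .htaccess file should do:

AddType application/x-mpegURL .m3u8
AddType video/MP2T .ts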

Perfect!

Architecture Overview
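
Roughly sketched, the pipeline described above looks like this:

Nicecast (Laptop) -> Icecast (Server) -> ffmpeg (Encoder) -> live_segmenter -> Apache (.m3u8 + .ts) -> Clients (iPhone, ...)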

Installation

Here are some notes on the installation. Building and configuration can be a little tricky, so I thought I’d share what I have found out. I won’t cover setting up Icecast or Apache; there’s already plenty of documentation available.

The system we deployed the encoder and segmenter on is an AMD64 Debian 5.0 (Lenny) Linux system. If you’re using something else, expect some differences.

Since I had no luck with the distribution’s ffmpeg binaries I had to build them myself. We also wanted to use the HE-AAC codec, which isn’t available in Debian’s ffmpeg.

General

Prerequisites:

  • libmp3lame
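
If libmp3lame isn’t already installed, it builds with the usual autotools routine, roughly like this (the version is only an example from around that time; adjust the tarball name as needed):

tar -xvzf lame-3.98.tar.gz
cd lame-3.98
./configure
make
make install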

Some Debian packages I didn’t already have on my system:

apt-get install build-essential \
autoconf \
libtool \
libtool-dev \
unzip \
ruby \
openssl \
libopenssl-ruby

Building Carson’s Live Segmenter

It took me several hours to find an ffmpeg version that could provide valid headers and libraries for live_segmenter.c, so I provided a mirror of these files.

This isn’t the version we’re using for the encoding; it’s just for building live_segmenter.c.

I had no luck with the SVN version.

X264 (In case you want video streaming)

wget http://dl.dropbox.com/u/5503/blog.kyri0s.org/\
x264-snapshot-20091130-2245.tar.bz2
tar -xvjf x264-snapshot-20091130-2245.tar.bz2
cd x264-snapshot-20091130-2245
./configure
make
make install

Building FFMPEG with x264 support

wget http://dl.dropbox.com/u/5503/blog.kyri0s.org/\
ffmpeg-export-snapshot-2009-12-02.tar.bz2
tar -xvjf ffmpeg-export-snapshot-2009-12-02.tar.bz2
cd ffmpeg-export-2009-12-01/
./configure --enable-gpl --enable-nonfree  --enable-libfaac --enable-libfaad\
 --enable-libmp3lame --enable-libx264
make
make install

Building FFMPEG without x264 support

wget http://dl.dropbox.com/u/5503/blog.kyri0s.org/\
ffmpeg-export-snapshot-2009-12-02.tar.bz2
tar -xvjf ffmpeg-export-snapshot-2009-12-02.tar.bz2
cd ffmpeg-export-2009-12-01/
./configure --enable-gpl --enable-nonfree  --enable-libfaac --enable-libfaad\
 --enable-libmp3lame 
make
make install

For compiling the actual live_segmenter I provided a snapshot of the sources I used. The archive also contains a modified Makefile with the options I used for compiling. However, this archive doesn’t contain the Ruby script and configuration examples from the official Git repository; you need to download those from the project website.

wget http://dl.dropbox.com/u/5503/blog.kyri0s.org/live_segmenter.tar.bz2
tar -xvjf live_segmenter.tar.bz2
make
make install

Production ready ffmpeg with AAC+

The ffmpeg version we built above doesn’t contain AAC+ (HE-AAC), so here are the necessary steps if you’d like to use this advanced audio codec.

Libaacplus
wget http://tipok.org.ua/downloads/media/aac+/libaacplus/libaacplus-1.0.5.tar.gz

tar -xvzf libaacplus-1.0.5.tar.gz
cd libaacplus-1.0.5
./autogen.sh
./configure
make
make install

ffmpeg with MP3 and HE-AAC Support
wget -O ffmpeg.tar.gz "http://git.ffmpeg.org/?\
p=ffmpeg;a=snapshot;h=124fefe867ef023a89ca4f4cc76e700342286b0d;sf=tgz"
tar -xvzf ffmpeg.tar.gz

cd ffmpeg
wget -O libswscale.tar.gz "http://git.ffmpeg.org/?\
p=libswscale;a=snapshot;h=1842e7d1cc122feea92dcd2d9a9a1adfb397aa24;\
sf=tgz"

tar -xvzf libswscale.tar.gz
wget http://tipok.org.ua/downloads/media/aac+/libaacplus-simple-\
sample/ffmpeg-patch/ffmpeg-aacp.diff

patch -p1 < ffmpeg-aacp.diff
./configure --enable-gpl --enable-nonfree  --enable-libfaac\
 --enable-libfaad --enable-libmp3lame --enable-libaacplus
make
make install

Installing Ruby dependencies for Carson’s http_streamer.rb

For the ruby script to work we need to satisfy some dependencies.

wget http://rubyforge.org/frs/download.php/45905/rubygems-1.3.1.tgz
tar -xvzf rubygems-1.3.1.tgz
cd rubygems-1.3.1
ruby setup.rb
gem update --system
gem install net-scp
gem install right_aws

That’s it for the installation part. If you’re having difficulties or have additional notes, please contact me.

Configuration

Configuration is done in .yml files; these are YAML-serialized Ruby data structures. The examples provided are a good starting point.

Here is a commented version of the file we’re using:

temp_dir: '/tmp/'

Prefix for the stream files. E.g. bitsundso_81.ts

segment_prefix: 'bitsundso'

Prefix for the Playlist Files

index_prefix: 'plus'

Logging:

# type of logging: STDOUT, FILE
#log_type: 'STDOUT'
log_type: 'FILE'
log_file: '/var/log/streamer.log'
# levels: DEBUG, INFO, WARN, ERROR
log_level: 'WARN'

This is the part of ffmpeg’s command line that specifies the input (-i). See ffmpeg’s manual for available formats.

In our case this is a password-protected Icecast stream.

input_location: 'http://xxx:xxxxx@localhost:8000/plus'

The segment length in seconds.
Apple’s suggestion is to set this to 10 seconds. It’s the chunk size. Shorter values mean that the stream starts sooner; however, this also influences how frequently clients contact your web server. Values that are too short lead to high load on your web server.
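
As a rough example: with 100 concurrent listeners and 10-second segments, every client downloads one chunk and refreshes the playlist about every 10 seconds, so the web server sees on the order of 10 segment requests plus 10 playlist requests per second.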

segment_length: 10

This is the URL where the stream (.ts) files will end up. Apple recommends using relative paths whenever possible to keep the index file’s size low.

url_prefix: './ts/'

How many .ts files should be referenced in your index/playlist files? More files mean users can skip further back in time, but this also makes the index files bigger. We think 30 is a good compromise (30 x 10 seconds => 5 minutes).

index_segment_count: 30

This command is the ffmpeg instance that ingests the source:

Audio Source -> THIS_COMMAND -> various encoders

I didn’t figure out how to simply pass the input through unchanged, so at the moment we’re re-encoding the input. To keep the quality loss low (copy of a copy of a copy ...) we’re using a high bitrate of 512 kbit/s. Comments appreciated. (-acodec copy doesn’t work.)

source_command: 'ffmpeg -er 4 -y -i %s -acodec libmp3lame -ar 44100 -ab 512k -ac 2 -vcodec none  -f mpegts -'

This is the location of the segmenter

segmenter_binary: '/usr/local/bin/live_segmenter'

We’re offering a multi-rate stream, which means several qualities. If you specify more than one profile here, the script automatically generates an adaptive-quality playlist file. Please note that the whitespace after each comma is required; otherwise Ruby will throw a syntax error. Remember, these .yml files are serialized data structures.

The actual meaning of these profiles is defined later in the configuration file.

encoding_profile: [ 'audioaac_64k', 'audioaac_32k', 'audioaac_24k',  'audioaac_18k', 'audiomp3_64k', 'audiomp3_32k']

The upload profile to use. As with encoding_profile, you may specify more than one, which might be required for load balancing.

transfer_profile: 'copy_dev'

The encoding profiles referenced above are each specified in a separate block. You need to specify the bandwidth in order for adaptive quality to work; if you don’t, the resulting .m3u8 won’t contain the necessary information.
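
For reference, the adaptive (variant) playlist generated from several profiles looks roughly like this (the file names are only illustrative and depend on your prefixes and profile names):

#EXTM3U
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=64000
plus_audioaac_64k.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=32000
plus_audioaac_32k.m3u8

Clients pick the entry whose BANDWIDTH best matches their connection and can switch between the variants as conditions change.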

Important: Remove the line breaks within the ffmpeg_command lines below; they are only there for formatting on this page.

audiomp3_64k:
  ffmpeg_command: "ffmpeg -er 4 -y -i %s -f mpegts -acodec libmp3lame
 -ac 1 -ar 44100 -ab 64k - | %s %s %s %s %s"
  bandwidth: 64000

audiomp3_32k:
  ffmpeg_command: "ffmpeg -er 4 -y -i %s -f mpegts -acodec libmp3lame
 -ac 1 -ar 44100 -ab 32k - | %s %s %s %s %s"
  bandwidth: 32000

audioaac_64k:
  ffmpeg_command: "ffmpeg -er 4 -y -i %s -f mpegts -acodec libaacplus
 -ac 2 -ar 44100 -ab 64k - | %s %s %s %s %s"
  bandwidth: 64000

audioaac_56k:
  ffmpeg_command: "ffmpeg -er 4 -y -i %s -f mpegts -acodec libaacplus
 -ac 2 -ar 44100 -ab 56k - | %s %s %s %s %s"
  bandwidth: 56000

audioaac_48k:
  ffmpeg_command: "ffmpeg -er 4 -y -i %s -f mpegts -acodec libaacplus
 -ac 2 -ar 44100 -ab 48k - | %s %s %s %s %s"
  bandwidth: 48000

audioaac_32k:
  ffmpeg_command: "ffmpeg -er 4 -y -i %s -f mpegts -acodec libaacplus
 -ac 2 -ar 44100 -ab 32k - | %s %s %s %s %s"
  bandwidth: 32000

audioaac_24k:
  ffmpeg_command: "ffmpeg -er 4 -y -i %s -f mpegts -acodec libaacplus
 -ac 2 -ar 44100 -ab 24k - | %s %s %s %s %s"
  bandwidth: 24000

audioaac_18k:
  ffmpeg_command: "ffmpeg -er 4 -y -i %s -f mpegts -acodec libaacplus
 -ac 2 -ar 44100 -ab 18k - | %s %s %s %s %s"
  bandwidth: 18000

audioaac_16k:
  ffmpeg_command: "ffmpeg -er 4 -y -i %s -f mpegts -acodec libaacplus
 -ac 2 -ar 44100 -ab 16k - | %s %s %s %s %s"
  bandwidth: 16000

We’re using a very simple transfer profile that just copies the files to a local directory. You might want to check out the examples provided with the Ruby script for other methods.

Whichever method you find suitable, don’t forget to delete old .ts files.
In our case this is done with a simple cronjob that deletes files older than 30 minutes:

* * * * * find /var/www/live.bitsundso.de/stream/ts -name '*.ts' -cmin +30 -exec rm {} \;

copy_dev:
  transfer_type: 'copy'
  directory: '/var/www/live.bitsundso.de/stream/ts'

Legal Notice:
iTunes, the iTunes Logo, iPhone, QuickTime and the QuickTime Logo are trademarks of Apple Inc., registered in the U.S. and other countries.
Nicecast is a trademark of Rogue Amoeba Software, LLC, registered in the U.S. and other countries.
The Winamp trademark is the property of Nullsoft, Inc. and its parent company, America Online, Inc.
All other marks are the properties of their respective owners.

