Monday 17 October 2016

Writing an ALSA driver

Over the past week I've been writing an ALSA driver for an MPEG-4 capture board (4/8/16 channel). What I discovered is that there are not many good documents on the basics of writing a simple ALSA driver, so I wanted to share my experience in the hope that it helps others.

My driver needed to be pretty simple. The encoder produced 8 kHz mono G.723-24 ADPCM. To save you the Wikipedia trip, that's 3 bits per sample, or 24,000 bits per second. The card produced this at a rate of 128 samples per interrupt (48 bytes) for every available channel (individual channels cannot be disabled).

The card delivered this data in a 32 KB buffer split into 32 pages. Each page was written as 48 bytes × 20 channels, which took up 960 bytes of the 1024-byte page (20 is the hardware maximum; for my purposes I was only using 4, 8 or 16 channels of encoded data, depending on the capabilities of the card).
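
To keep the numbers straight in later snippets, here is how those figures fit together, expressed as constants (the names are mine, purely for illustration, not from any real header):

/* Hypothetical constants describing the capture layout above. */
#define SAMPLES_PER_IRQ   128                          /* samples per channel, per interrupt */
#define BYTES_PER_PERIOD  (SAMPLES_PER_IRQ * 3 / 8)    /* 48 bytes of G.723-24 */
#define NR_HW_CHANNELS    20                           /* channel slots per page (4/8/16 used) */
#define HW_PAGE_SIZE      1024                         /* one page of the 32 KB buffer */
#define NR_HW_PAGES       32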

Now, ALSA does not have a format definition for G.723-24, so my driver simply dumps the 48 bytes out to userspace as unsigned 8-bit PCM (my userspace application handles the G.723-24 decoding, knowing that this is the data it is getting).

First, where to start in ALSA. I had to decide how to expose these capture interfaces. I could have exposed a capture device for each channel, but instead I chose to expose one capture interface with a subdevice for each channel. This made programming a bit easier, gave a better overview of the devices as perceived by ALSA, and kept /dev/snd/ less cluttered (especially when you had multiple 16-channel cards installed). It also made programming userspace easier since it kept channels hierarchically under the card/device.

I'll dig a little deeper into the base driver. I won't go into the details of the module and PCI initialization that were already present in my driver (I developed the core and V4L2 components first, so all of that is taken care of).

So first off I needed to register with ALSA that we actually have a sound card. This bit is easy, and looks like this:

struct snd_card *card;
ret = snd_card_create(SNDRV_DEFAULT_IDX1, "MySoundCard",
                      THIS_MODULE, 0, &card);
if (ret < 0)
        return ret;


This asks ALSA to allocate a new sound card with the ID "MySoundCard". This is also the name that appears in /proc/asound/ as a symlink to the card entry (e.g. "card0"). In my particular instance I actually append an index to the ID, so it ends up being "MySoundCard0". This is because I can, and typically do, have more than one of this type of device installed at a time. I notice some other sound drivers do not do this, probably because they don't expect more than one to be installed at a time (think HDA, which is usually embedded on the motherboard, and so won't have two or more plugged into PCIe slots). Next, we set some of the properties of this new card.

strcpy(card->driver, "my_driver");
strcpy(card->shortname, "MySoundCard Audio");
sprintf(card->longname, "%s on %s IRQ %d", card->shortname,
        pci_name(pci_dev), pci_dev->irq);
snd_card_set_dev(card, &pci_dev->dev);


Here, we've assigned the name of the driver that handles this card, which is typically the same as the actual name of your driver. Next is a short description of the hardware, followed by a longer description. Most drivers seem to set the long description to something containing the PCI info; if you are on some other bus, the convention is to use information from that bus instead. Finally, we set the parent device associated with the card; since this is a PCI device, I set it to the PCI device's struct device.

Now to set this card up in ALSA along with a decent description of how the hardware works. We add the next bit of code to do this:

static struct snd_device_ops ops = { NULL };
ret = snd_device_new(card, SNDRV_DEV_LOWLEVEL, mydev, &ops);
if (ret < 0)
        return ret;


We're basically attaching a new low-level device to the sound card we just created. The mydev argument is passed as the private data associated with this device, for your convenience. We leave the ops structure as a no-op here for now.

Lastly, to complete the registration with ALSA:

if ((ret = snd_card_register(card)) < 0)
        return ret;


ALSA now knows about this card and lists it in /proc/asound/, among other places such as /sys. We still haven't told ALSA about the interfaces associated with this card (capture/playback); that will be discussed in the next installment. One last thing: when you clean up your device/driver, you must do so through ALSA as well, like this:

snd_card_free(card);


This will clean up all items associated with this card, including any devices that we register later.
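
In a PCI driver this typically happens in the remove callback. Here is a minimal sketch, assuming a my_device structure that holds the card pointer (my_pci_remove and mydev->card are names I've made up for illustration):

static void my_pci_remove(struct pci_dev *pci_dev)
{
        struct my_device *mydev = pci_get_drvdata(pci_dev);

        /* Releases the card and every ALSA device registered on it. */
        snd_card_free(mydev->card);

        /* Non-ALSA resources (IRQ, mappings, ...) are released as usual. */
}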

Writing an ALSA driver: Setting up capture

Now that we have an ALSA card initialized and registered with the middle layer we can move on to describing to ALSA our capture device. Unfortunately for anyone wishing to do playback, I will not be covering that since my device driver only provides for capture. If I end up implementing the playback feature, I will make an additional post.

So let's get started. ALSA provides a PCM API in its middle layer. We will be making use of this to register a single PCM capture device that will have a number of subdevices depending on the low-level hardware. NOTE: All of the initialization below must be done just before the call to snd_card_register() from the previous post.

struct snd_pcm *pcm;
ret = snd_pcm_new(card, card->driver, 0, 0, nr_subdevs,
                  &pcm);
if (ret < 0)
        return ret;


In the above code we allocate a new PCM structure. We pass the card we allocated beforehand. The second argument is a name for the PCM device, which I have just conveniently set to the same name as the driver. It can be whatever you like. The third argument is the PCM device number. Since I am only allocating one, it's set to 0.

The fourth and fifth arguments are the number of playback and capture streams associated with this device. For my purposes, playback is 0 and capture is the number of channels I have detected that the card supports (4, 8 or 16).

The last argument is where ALSA allocates the PCM device. It will associate any memory for this with the card, so when we later call snd_card_free(), it will cleanup our PCM device(s) as well.

Next we must associate the handlers for capturing sound data from our hardware. We have a struct defined as such:

static struct snd_pcm_ops my_pcm_ops = {
        .open      = my_pcm_open,
        .close     = my_pcm_close,
        .ioctl     = snd_pcm_lib_ioctl,
        .hw_params = my_hw_params,
        .hw_free   = my_hw_free,
        .prepare   = my_pcm_prepare,
        .trigger   = my_pcm_trigger,
        .pointer   = my_pcm_pointer,
        .copy      = my_pcm_copy,
};


I will go into the details of how to define these handlers in the next post, but for now we just want to let the PCM middle layer know to use them:

snd_pcm_set_ops(pcm, SNDRV_PCM_STREAM_CAPTURE,
                &my_pcm_ops);
pcm->private_data = mydev;
pcm->info_flags = 0;
strcpy(pcm->name, card->shortname);


Here, we first set the capture handlers for this PCM device to the ops structure we defined above. Afterwards, we set some basic info for the PCM device, such as storing our main device structure as the private data (so that we can retrieve it easily in the handler callbacks).

Now that we've made the device, we want to initialize the memory management associated with the PCM middle layer. ALSA provides some basic memory handling routines for various functions. We want to make use of it since it allows us to reduce the amount of code we write and makes working with userspace that much easier.

ret = snd_pcm_lib_preallocate_pages_for_all(pcm,
                     SNDRV_DMA_TYPE_CONTINUOUS,
                     snd_dma_continuous_data(GFP_KERNEL),
                     MAX_BUFFER, MAX_BUFFER);
if (ret < 0)
        return ret;


MAX_BUFFER is something we've defined earlier and will be discussed further in the next post. Simply put, it's the maximum size of the hardware buffer (the most data that userspace can request at one time without waiting on the hardware to produce more).
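
Using the hypothetical layout constants from the first post, MAX_BUFFER might simply be defined as:

/* Assumed definition: one interrupt's worth of data per page, 32 pages. */
#define MAX_BUFFER  (NR_HW_PAGES * BYTES_PER_PERIOD)   /* 32 * 48 = 1536 bytes per channel */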

We are using the simple continuous buffer type here. Your hardware may support DMA directly into the buffers, in which case you would use SNDRV_DMA_TYPE_DEV along with snd_dma_pci_data(pci_dev) (for a PCI device) to initialize this. I'm using standard buffers because my hardware requires me to move the data around manually.
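
For such DMA-capable hardware, the preallocation call might instead look something like this for a PCI device (a sketch against the API as it existed in this kernel era):

ret = snd_pcm_lib_preallocate_pages_for_all(pcm,
                     SNDRV_DMA_TYPE_DEV,
                     snd_dma_pci_data(pci_dev),
                     MAX_BUFFER, MAX_BUFFER);
if (ret < 0)
        return ret;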
 
Writing an ALSA driver: PCM Hardware Description
In this post we'll dig into the snd_pcm_hardware structure, which will be used by the PCM handler callbacks described in the next post.

Here is a look at the snd_pcm_hardware structure I have for my driver. It's fairly simplistic:

static struct snd_pcm_hardware my_pcm_hw = {
        .info = (SNDRV_PCM_INFO_MMAP |
                 SNDRV_PCM_INFO_INTERLEAVED |
                 SNDRV_PCM_INFO_BLOCK_TRANSFER |
                 SNDRV_PCM_INFO_MMAP_VALID),
        .formats          = SNDRV_PCM_FMTBIT_U8,
        .rates            = SNDRV_PCM_RATE_8000,
        .rate_min         = 8000,
        .rate_max         = 8000,
        .channels_min     = 1,
        .channels_max     = 1,
        .buffer_bytes_max = (32 * 48),
        .period_bytes_min = 48,
        .period_bytes_max = 48,
        .periods_min      = 1,
        .periods_max      = 32,
};


This structure describes how my hardware lays out the PCM data for capturing. As I described before, it writes out 48 bytes at a time for each stream, into 32 pages. A period essentially corresponds to one interrupt: it describes the "chunk" size in which the hardware supplies data.

This hardware only supplies mono data (1 channel) at an 8000 Hz sample rate. Most hardware seems to work in the range of 8000 to 48000 Hz, and there is a define for that: SNDRV_PCM_RATE_8000_48000. This is a bitmask field, so you can OR in whatever rates your hardware supports.

My driver describes this data as an unsigned 8-bit format (it's actually signed 3-bit G.723-24, but ALSA doesn't support that, so I fake it). The most common PCM data is signed 16-bit little-endian (S16_LE). You would use whatever your hardware supplies, which can be more than one type; since the formats field is also a bitmask, you can declare multiple data formats.
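
For example, a more conventional card that captures 16-bit stereo at a range of rates might declare something like this instead (hypothetical values, not my hardware):

.formats      = SNDRV_PCM_FMTBIT_S16_LE | SNDRV_PCM_FMTBIT_U8,
.rates        = SNDRV_PCM_RATE_8000_48000,
.rate_min     = 8000,
.rate_max     = 48000,
.channels_min = 2,
.channels_max = 2,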

Lastly, the info field describes some middle layer features that your hardware/driver supports. What I have here is the base set that most drivers will supply; see the ALSA docs for more details. For example, if your hardware supports stereo (or more channels) but does not interleave those channels together, you would not set the SNDRV_PCM_INFO_INTERLEAVED flag.



Writing an ALSA driver: PCM handler callbacks
So here we are on the final chapter of the ALSA driver series. We will finally fill in the meat of the driver with some simple handler callbacks for the PCM capture device we've been developing. In the previous post, Writing an ALSA driver: Setting up capture, we defined my_pcm_ops, which was used when calling snd_pcm_set_ops() for our PCM device. Here is that structure again:

static struct snd_pcm_ops my_pcm_ops = {
        .open      = my_pcm_open,
        .close     = my_pcm_close,
        .ioctl     = snd_pcm_lib_ioctl,
        .hw_params = my_hw_params,
        .hw_free   = my_hw_free,
        .prepare   = my_pcm_prepare,
        .trigger   = my_pcm_trigger,
        .pointer   = my_pcm_pointer,
        .copy      = my_pcm_copy,
};


First let's start off with the open and close methods defined in this structure. This is where your driver gets notified that someone has opened the capture device (file open) and subsequently closed it.

static int my_pcm_open(struct snd_pcm_substream *ss)
{
        /* Describe our hardware's capabilities to the PCM middle layer. */
        ss->runtime->hw = my_pcm_hw;
        /* my_dev is the card-level device structure we stored in
         * pcm->private_data when setting up the PCM device. */
        ss->private_data = my_dev;

        return 0;
}

static int my_pcm_close(struct snd_pcm_substream *ss)
{
        ss->private_data = NULL;

        return 0;
}


This is the minimum you would do for these two functions. If needed, you would allocate private data for this stream and free it on close.
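
If you do need per-stream state, the usual pattern is to hang it off the runtime. The open/close pair might then look something like this (my_stream is a structure I've invented for the sketch):

static int my_pcm_open(struct snd_pcm_substream *ss)
{
        struct my_stream *stream;

        stream = kzalloc(sizeof(*stream), GFP_KERNEL);
        if (!stream)
                return -ENOMEM;

        ss->runtime->private_data = stream;
        ss->runtime->hw = my_pcm_hw;

        return 0;
}

static int my_pcm_close(struct snd_pcm_substream *ss)
{
        kfree(ss->runtime->private_data);

        return 0;
}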

For the ioctl handler, unless you need something special, you can just use the standard snd_pcm_lib_ioctl callback.

The next three callbacks handle hardware setup.

static int my_hw_params(struct snd_pcm_substream *ss,
                        struct snd_pcm_hw_params *hw_params)
{
        return snd_pcm_lib_malloc_pages(ss,
                         params_buffer_bytes(hw_params));
}

static int my_hw_free(struct snd_pcm_substream *ss)
{
        return snd_pcm_lib_free_pages(ss);
}

static int my_pcm_prepare(struct snd_pcm_substream *ss)
{
        return 0;
}


Since we've been using standard memory allocation routines from ALSA, these functions stay fairly simple. If there are differences between versions of the hardware supported by your driver, you can make changes to the ss->runtime->hw structure here (e.g. if one version of your card supports 96 kHz, but the rest only support 48 kHz max).
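
For example, a variant-specific tweak might look like this (is_pro_model is a field I've invented; in practice this kind of adjustment is often done in the open callback instead):

/* Sketch: advertise higher rates only on the higher-end model. */
if (my_dev->is_pro_model) {
        ss->runtime->hw.rates   |= SNDRV_PCM_RATE_96000;
        ss->runtime->hw.rate_max = 96000;
}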

The PCM prepare callback should handle anything your driver needs to do before alsa-lib can ask it to start sending buffers. My driver doesn't do anything special here, so I have an empty callback.
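
If your hardware does need something here, prepare is a reasonable place to reset the driver's notion of the buffer position, for instance (hw_idx is the same field the pointer callback below reports):

static int my_pcm_prepare(struct snd_pcm_substream *ss)
{
        struct my_device *my_dev = snd_pcm_substream_chip(ss);

        /* Start reading from the beginning of the ring buffer. */
        my_dev->hw_idx = 0;

        return 0;
}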

This next handler tells your driver when ALSA is going to start and stop capturing buffers from your device. Most likely you will enable and disable interrupts here.

static int my_pcm_trigger(struct snd_pcm_substream *ss,
                          int cmd)
{
        struct my_device *my_dev = snd_pcm_substream_chip(ss);
        int ret = 0;

        switch (cmd) {
        case SNDRV_PCM_TRIGGER_START:
                // Start the hardware capture
                break;
        case SNDRV_PCM_TRIGGER_STOP:
                // Stop the hardware capture
                break;
        default:
                ret = -EINVAL;
        }

        return ret;
}


Let's move on to the handlers that are the workhorses of my driver. Since the hardware I'm writing this driver for cannot DMA directly into the memory that ALSA has supplied for communicating with userspace, I need to make use of the copy handler to perform that operation.

static snd_pcm_uframes_t my_pcm_pointer(struct snd_pcm_substream *ss)
{
        struct my_device *my_dev = snd_pcm_substream_chip(ss);

        /* Current hardware position in the ring buffer, in frames. */
        return my_dev->hw_idx;
}

static int my_pcm_copy(struct snd_pcm_substream *ss,
                       int channel, snd_pcm_uframes_t pos,
                       void __user *dst,
                       snd_pcm_uframes_t count)
{
        struct my_device *my_dev = snd_pcm_substream_chip(ss);

        /* pos and count are in frames; with 1-byte mono frames they
         * equal byte offsets here. copy_to_user() returns the number
         * of bytes it could not copy, so turn that into an error. */
        if (copy_to_user(dst, my_dev->buffer + pos, count))
                return -EFAULT;

        return 0;
}


So here we've defined a pointer function, which the PCM middle layer calls (on behalf of userspace) to find out where the hardware is in writing to the buffer.

Next, we have the actual copy function. Note that count and pos are in frames, not bytes (with 8-bit mono data the two happen to be the same). The buffer shown here is assumed to have been filled in the interrupt handler.

Speaking of interrupts, that is also where you should signal to ALSA that there is more data to consume. In my ISR (interrupt service routine), I have this:

snd_pcm_period_elapsed(my_dev->ss);
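
Putting it together, a sketch of the surrounding ISR might look like this (my_dev->buffer, my_dev->current_page, my_dev->ss and the hypothetical constants from earlier are all assumptions about my particular card):

static irqreturn_t my_isr(int irq, void *dev_id)
{
        struct my_device *my_dev = dev_id;

        /* Copy the just-completed hardware page into the staging buffer
         * that my_pcm_copy() reads from, then advance our position
         * around the 32-period ring (frames == bytes for U8 mono). */
        memcpy(my_dev->buffer + my_dev->hw_idx,
               my_dev->current_page, BYTES_PER_PERIOD);
        my_dev->hw_idx = (my_dev->hw_idx + BYTES_PER_PERIOD) % MAX_BUFFER;

        /* Tell ALSA another period of data is available. */
        snd_pcm_period_elapsed(my_dev->ss);

        return IRQ_HANDLED;
}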

ALSA Audio Driver
The figure below shows the audio connections on a PC-compatible system: the audio controller on the south bridge, together with an external codec, interfaces with the analog audio circuitry.
Figure: Audio in a PC environment
An audio codec converts digital audio data to analog sound signals for playback via speakers and performs the reverse operation for recording via a microphone. Other common audio inputs and outputs that interface with a codec include headsets, earphones, hands-free sets, line in and line out. A codec offers mixer functionality that connects it to a combination of these inputs and outputs and controls the volume gain associated with each audio signal.
Digital data is obtained by sampling analog audio signals at specific rates using a technique called pulse code modulation (PCM). CD quality, for example, is sound sampled at 44.1 kHz, using 16 bits to hold each sample. A codec is responsible for recording audio by sampling at supported PCM rates and for playing back audio originally sampled at different rates.
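For example, CD-quality stereo works out to 44,100 samples per second × 16 bits × 2 channels = 1,411,200 bits per second, or roughly 176 KB of PCM data every second.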
Linux Sound Subsystem
Advanced Linux Sound Architecture (ALSA) is the sound subsystem in the 2.6 kernel. Open Sound System (OSS), the sound layer in the 2.4 kernel, is now obsolete and deprecated.
The following figure shows the Linux sound subsystem:
Figure: Linux Sound Subsystem (ALSA)
1). The /dev/snd/* device nodes, created and managed by the ALSA core. /dev/snd/controlC0 is a control node that applications use for controlling volume gain and the like, /dev/snd/pcmC0D0p is a playback device, and /dev/snd/pcmC0D0c is a recording device.
2). Audio controller drivers specific to the controller hardware. To drive the audio controller present in Intel ICH south bridge chipsets, for example, you use the snd_intel8x0 driver.
3). Codec interface drivers that assist communication between controllers and codecs. For AC'97 codecs, the snd_ac97_codec and ac97_bus modules are used.
4). The nodes /dev/dsp, /dev/adsp and /dev/mixer, which allow OSS applications to run unchanged over ALSA. The OSS /dev/dsp node maps to the ALSA nodes /dev/snd/pcmC0D0*, /dev/adsp corresponds to /dev/snd/pcmC0D1*, and /dev/mixer is associated with /dev/snd/controlC0.
5). Procfs and sysfs interface implementations for accessing information via /proc/asound and /sys/class/sound.
6). The user-space ALSA library, alsa-lib, which provides the libasound.so object. The library eases the job of ALSA application programming by offering several canned routines to access ALSA drivers (a minimal example follows below).
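
As a quick illustration of item 6, here is a minimal alsa-lib capture sketch (error handling trimmed; the "default" device name and the 8 kHz mono parameters are assumptions for the example):

#include <alsa/asoundlib.h>

int main(void)
{
        snd_pcm_t *pcm;
        short buf[8000];        /* one second of 8 kHz mono S16_LE */

        /* Open the default ALSA capture device via alsa-lib. */
        if (snd_pcm_open(&pcm, "default", SND_PCM_STREAM_CAPTURE, 0) < 0)
                return 1;

        /* Let alsa-lib pick sensible hardware/software parameters. */
        if (snd_pcm_set_params(pcm, SND_PCM_FORMAT_S16_LE,
                               SND_PCM_ACCESS_RW_INTERLEAVED,
                               1, 8000, 1, 500000) < 0)
                return 1;

        /* Read one second of audio (8000 frames), then clean up. */
        snd_pcm_readi(pcm, buf, 8000);
        snd_pcm_close(pcm);
        return 0;
}
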
Tools and Links
Answers to FAQs pertaining to Linux.
MadPlay is a software MP3 decoder and player that is both ALSA and OSS aware. It is used for basic playback and recording.
Data Structures and APIs
The figure below summarizes the data structures and APIs used in developing an ALSA driver.
Figure: Summary of data structures

Friday 14 October 2016

Embedded System Testing (Software & Hardware)

Embedded systems are gaining importance with the increasing adoption of 16-, 32- and 64-bit processors across a wide variety of electronic products. As consumer expectations of these systems grow, manufacturers are challenged by the following factors when testing them to perfection for market release:

    Real time responses
    Target systems that are separate from the host development environment
    Lack of standardization in deployment architectures
    Lack of established interfaces to systems under test
    Stringent Fail-Safe requirements
    Extremely high cost of isolating and fixing defects


Embedded system testing methods:

  1.  System Level Testing
  2.  Application Testing
  3.  Middleware Testing
  4.  BSP & Driver testing
  5.  Embedded Hardware Design

Embedded Internet

The embedded Internet is bringing transformative changes to the embedded world. The era of intelligent connectivity is dawning. And the industry is about to hit the fast-forward button.¹

We have watched the Internet grow from its early beginnings to a global human network, breaking down old boundaries, stimulating new usage models, and unleashing opportunities for building businesses and growing revenue on a global scale.

Now the Internet is evolving again, to the embedded space. How big will it become? Intel Vice President Doug Davis cites the IDC prediction of 15 billion intelligent, connected devices by the year 2015¹.

Extending the power of Internet connectivity to a virtually limitless variety of embedded devices, with many communicating machine-to-machine without human intervention, has more far-reaching implications than any single technologist can imagine.

The important question is: how many of these breakthrough solutions will you create?

Even one billion of anything is an astounding number. But when you start to think about 15 billion intelligent devices¹ connected to each other, you realize that our industry is on the threshold of something new.

If Internet-connected PCs and phones were transformative, imagine what happens when the Internet connects cars, home media phones, digital signs and shopping carts, mobile medical diagnostic tools, factory robots and intelligent wind turbines.


At Intel we are working with the embedded computing and communications ecosystem, and with end customers, to envision the innovative possibilities and capture unprecedented opportunities for industry growth.

Driven by breakthroughs in microarchitecture and process technology, the same Intel® architecture that is at the heart of the majority of today’s Internet applications can now deliver scalable intelligence and connectivity to billions of new intelligent, connected devices.

Building on our 30 years of embedded industry experience, Intel is delivering the platforms you need today—based on products whose specifications range from milliwatts of power consumption to petaflops of performance—all based on a single, familiar and proven software architecture.

Newer and even more visionary embedded applications are yet to come. Their implications will be vast—for the industry, and for your future.


"Gantz, John. "The Embedded Internet: Methodology and Findings." IDC. January 2009."

Embedded Linux Vendors

Organizations and projects developing embedded/mobile Linux products include:
1. µClinux
2. RTLinux
3. LynuxWorks
4. Wind River
5. MontaVista, which has released Mobilinux, the first version of its Linux operating system specifically designed and optimized for mobile phones and wireless devices
6. Android
7. iOS