
Wednesday, July 18, 2018

Wpa_supplicant and Hostapd Modules

wpa_supplicant

wpa_supplicant is a WPA Supplicant for Linux, BSD and Windows with support for WPA and WPA2 (IEEE 802.11i / RSN). Supplicant is the IEEE 802.1X/WPA component that is used in the client stations. It implements key negotiation with a WPA Authenticator and it can optionally control roaming and IEEE 802.11 authentication/association of the wlan driver.
The design goal for wpa_supplicant was to use hardware, driver, and OS independent, portable C code for all WPA functionality. The source code is divided into separate C files as shown on the code structure page. All hardware/driver specific functionality is in separate files that implement a well-defined driver API. Information about porting to different target boards and operating systems is available on the porting page.
EAPOL (IEEE 802.1X) state machines are implemented as a separate module that interacts with the EAP peer implementation. In addition to programs aimed at normal production use, the wpa_supplicant source tree includes a number of testing and development tools that make it easier to test the programs without having to set up a full test setup with wireless cards. These tools can also be used to implement automatic test suites.
wpa_supplicant implements a control interface that can be used by external programs to control the operations of the wpa_supplicant daemon and to get status information and event notifications. There is a small C library that provides helper functions to facilitate the use of the control interface. This library can also be used with C++.
[Figure: wpa_supplicant modules]

hostapd

hostapd includes IEEE 802.11 access point management (authentication / association), IEEE 802.1X/WPA/WPA2 Authenticator, EAP server, and RADIUS authentication server functionality. It can be built with various configuration options, e.g., as a standalone AP management solution or as a RADIUS authentication server with support for a number of EAP methods.
The design goal for hostapd was to use hardware, driver, and OS independent, portable C code for all WPA functionality. The source code is divided into separate C files as shown on the code structure page. All hardware/driver specific functionality is in separate files that implement a well-defined driver API. Information about porting to different target boards and operating systems is available on the porting page.
EAPOL (IEEE 802.1X) state machines are implemented as a separate module that interacts with EAP server implementation. Similarly, RADIUS authentication server is in its own separate module. Both IEEE 802.1X and RADIUS authentication server can use EAP server functionality.
hostapd implements a control interface that can be used by external programs to control the operations of the hostapd daemon and to get status information and event notifications. There is a small C library that provides helper functions to facilitate the use of the control interface. This library can also be used with C++.
[Figure: hostapd modules]

Thursday, July 12, 2018

Dynamic DMA mapping


Most of the 64bit platforms have special hardware that translates bus
addresses (DMA addresses) to physical addresses similarly to how page
tables and/or TLB translate virtual addresses to physical addresses.
This is needed so that e.g. PCI devices can access with a Single Address
Cycle (32bit DMA address) any page in the 64bit physical address space.
Previously in Linux those 64bit platforms had to set artificial limits on
the maximum RAM size in the system, so that the virt_to_bus() static scheme
works (the DMA address translation tables were simply filled on bootup
to map each bus address to the physical page __pa(bus_to_virt())).

So that Linux can use the dynamic DMA mapping, it needs some help from the
drivers, namely they have to take into account that DMA addresses should be
mapped only for the time they are actually used and unmapped after the DMA
transfer.

The following API will work of course even on platforms where no such
hardware exists, see e.g. include/asm-i386/pci.h for how it is implemented on
top of the virt_to_bus interface.

First of all, you should make sure

#include <linux/pci.h>

is in your driver. This file will obtain for you the definition of
the dma_addr_t type which should be used everywhere you hold a DMA
(bus) address returned from the DMA mapping functions.

What memory is DMA'able?

The first piece of information you must know is what kernel memory can
be used with the DMA mapping facilities.  There has been an unwritten
set of rules regarding this, and this text is an attempt to finally
write them down.

If you acquired your memory via the page allocator
(i.e. __get_free_page*()) or the generic memory allocators
(i.e. kmalloc() or kmem_cache_alloc()) then you may DMA to/from
that memory using the addresses returned from those routines.

This means specifically that you may _not_ use the memory/addresses
returned from vmalloc() for DMA.  It is possible to DMA to the
_underlying_ memory mapped into a vmalloc() area, but this requires
walking page tables to get the physical addresses, and then
translating each of those pages back to a kernel address using
something like __va().

This rule also means that you may not use kernel image addresses
(ie. items in the kernel's data/text/bss segment, or your driver's)
nor may you use kernel stack addresses for DMA.  Both of these items
might be mapped somewhere entirely different than the rest of physical
memory.

What about block I/O and networking buffers?  The block I/O and
networking subsystems make sure that the buffers they use are valid
for you to DMA from/to.

DMA addressing limitations

Does your device have any DMA addressing limitations?  For example, is
your device only capable of driving the low order 24-bits of address
on the PCI bus for DMA transfers?  If your device can handle any PCI
dma address fully, then please skip to the next section, the rest of
this section does not concern your device.

For correct operation, you must interrogate the PCI layer in your
device probe routine to see if the PCI controller on the machine can
properly support the DMA addressing limitation your device has.  This
query is performed via a call to pci_dma_supported():

 int pci_dma_supported(struct pci_dev *pdev, dma_addr_t device_mask)

Here, pdev is a pointer to the PCI device struct of your device, and
device_mask is a bit mask describing which bits of a PCI address your
device supports.  It returns non-zero if your card can perform DMA
properly on the machine.  If it returns zero, your device can not
perform DMA properly on this platform, and attempting to do so will
result in undefined behavior.

In the failure case, you have two options:

1) Use some non-DMA mode for data transfer, if possible.
2) Ignore this device and do not initialize it.

It is recommended that your driver print a kernel KERN_WARNING message
when you do one of these two things.  In this manner, if a user of
your driver reports that performance is bad or that the device is not
even detected, you can ask him for the kernel messages to find out
exactly why.

So if, for example, your device can only drive the low 24-bits of
address during PCI bus mastering, you might do something like:

 if (! pci_dma_supported(pdev, 0x00ffffff))
  goto ignore_this_device;

When DMA is possible for a given mask, the PCI layer must be informed of the
mask for later allocation operations on the device.  This is achieved by
setting the dma_mask member of the pci_dev structure, like so:

#define MY_HW_DMA_MASK 0x00ffffff

 if (! pci_dma_supported(pdev, MY_HW_DMA_MASK))
  goto ignore_this_device;

 pdev->dma_mask = MY_HW_DMA_MASK;

A helper function is provided which performs this common code sequence:

 int pci_set_dma_mask(struct pci_dev *pdev, dma_addr_t device_mask)

Unlike pci_dma_supported(), this returns -EIO when the PCI layer will not be
able to DMA with addresses restricted by that mask, and returns 0 when DMA
transfers are possible.  If the call succeeds, the dma_mask will have been
updated so that your driver need not worry about it.

There is a case which we are aware of at this time, which is worth
mentioning in this documentation.  If your device supports multiple
functions (for example a sound card provides playback and record
functions) and the various different functions have _different_
DMA addressing limitations, you may wish to probe each mask and
only provide the functionality which the machine can handle.  It
is important that the last call to pci_set_dma_mask() be for the 
most specific mask.

Here is pseudo-code showing how this might be done:

 #define PLAYBACK_ADDRESS_BITS 0xffffffff
 #define RECORD_ADDRESS_BITS 0x00ffffff

 struct my_sound_card *card;
 struct pci_dev *pdev;

 ...
 if (pci_set_dma_mask(pdev, PLAYBACK_ADDRESS_BITS) == 0) {
  card->playback_enabled = 1;
 } else {
  card->playback_enabled = 0;
  printk(KERN_WARNING "%s: Playback disabled due to DMA limitations.\n",
         card->name);
 }
 if (pci_set_dma_mask(pdev, RECORD_ADDRESS_BITS) == 0) {
  card->record_enabled = 1;
 } else {
  card->record_enabled = 0;
  printk(KERN_WARNING "%s: Record disabled due to DMA limitations.\n",
         card->name);
 }

A sound card was used as an example here because this genre of PCI
devices seems to be littered with ISA chips given a PCI front end,
and thus retaining the 16MB DMA addressing limitations of ISA.

Types of DMA mappings

There are two types of DMA mappings:

- Consistent DMA mappings which are usually mapped at driver
  initialization, unmapped at the end and for which the hardware should
  guarantee that the device and the cpu can access the data
  in parallel and will see updates made by each other without any
  explicit software flushing.

  Think of "consistent" as "synchronous" or "coherent".

  Good examples of what to use consistent mappings for are:

 - Network card DMA ring descriptors.
 - SCSI adapter mailbox command data structures.
 - Device firmware microcode executed out of
   main memory.

  The invariant these examples all require is that any cpu store
  to memory is immediately visible to the device, and vice
  versa.  Consistent mappings guarantee this.

- Streaming DMA mappings which are usually mapped for one DMA transfer,
  unmapped right after it (unless you use pci_dma_sync below) and for which
  hardware can optimize for sequential accesses.

  This of "streaming" as "asynchronous" or "outside the coherency
  domain".

  Good examples of what to use streaming mappings for are:

 - Networking buffers transmitted/received by a device.
 - Filesystem buffers written/read by a SCSI device.

  The interfaces for using this type of mapping were designed in
  such a way that an implementation can make whatever performance
  optimizations the hardware allows.  To this end, when using
  such mappings you must be explicit about what you want to happen.

Neither type of DMA mapping has alignment restrictions that come
from PCI, although some devices may have such restrictions.

Using Consistent DMA mappings.

To allocate and map large (PAGE_SIZE or so) consistent DMA regions,
you should do:

 dma_addr_t dma_handle;

 cpu_addr = pci_alloc_consistent(dev, size, &dma_handle);

where dev is a struct pci_dev *. You should pass NULL for PCI-like buses
where devices don't have a struct pci_dev (like ISA, EISA).  This may be
called in interrupt context.

This argument is needed because the DMA translations may be bus
specific (and often are private to the bus to which the device is
attached).

Size is the length of the region you want to allocate.

This routine will allocate RAM for that region, so it acts similarly to
__get_free_pages (but takes size instead of a page order).  If your
driver needs regions sized smaller than a page, you may prefer using
the pci_pool interface, described below.

It returns two values: the virtual address which you can use to access
it from the CPU and dma_handle which you pass to the card.

The cpu return address and the DMA bus master address are both
guaranteed to be aligned to the smallest PAGE_SIZE order which
is greater than or equal to the requested size.  This invariant
exists (for example) to guarantee that if you allocate a chunk
which is smaller than or equal to 64 kilobytes, the extent of the
buffer you receive will not cross a 64K boundary.

To unmap and free such a DMA region, you call:

 pci_free_consistent(dev, size, cpu_addr, dma_handle);

where dev, size are the same as in the above call and cpu_addr and
dma_handle are the values pci_alloc_consistent returned to you.
This function may not be called in interrupt context.

If your driver needs lots of smaller memory regions, you can write
custom code to subdivide pages returned by pci_alloc_consistent,
or you can use the pci_pool API to do that.  A pci_pool is like
a kmem_cache, but it uses pci_alloc_consistent not __get_free_pages.
Also, it understands common hardware constraints for alignment,
like queue heads needing to be aligned on N byte boundaries.

Create a pci_pool like this:

 struct pci_pool *pool;

 pool = pci_pool_create(name, dev, size, align, alloc, flags);

The "name" is for diagnostics (like a kmem_cache name); dev and size
are as above.  The device's hardware alignment requirement for this
type of data is "align" (a power of two).  The flags are SLAB_ flags
as you'd pass to kmem_cache_create.  Not all flags are understood, but
SLAB_POISON may help you find driver bugs.  If you call this in a
non-sleeping context (e.g. in_interrupt is true or while holding SMP
locks), pass SLAB_ATOMIC.  If your device has no boundary crossing
restrictions, pass 0 for alloc; passing 4096 says memory allocated
from this pool must not cross 4KByte boundaries (but at that time it
may be better to go for pci_alloc_consistent directly instead).

Allocate memory from a pci pool like this:

 cpu_addr = pci_pool_alloc(pool, flags, &dma_handle);

flags are SLAB_KERNEL if blocking is permitted (not in_interrupt nor
holding SMP locks), SLAB_ATOMIC otherwise.  Like pci_alloc_consistent,
this returns two values, cpu_addr and dma_handle.

Free memory that was allocated from a pci_pool like this:

 pci_pool_free(pool, cpu_addr, dma_handle);

where pool is what you passed to pci_pool_alloc, and cpu_addr and
dma_handle are the values pci_pool_alloc returned. This function
may be called in interrupt context.

Destroy a pci_pool by calling:

 pci_pool_destroy(pool);

Make sure you've called pci_pool_free for all memory allocated
from a pool before you destroy the pool. This function may not
be called in interrupt context.

DMA Direction

The interfaces described in subsequent portions of this document
take a DMA direction argument, which is an integer and takes on
one of the following values:

 PCI_DMA_BIDIRECTIONAL
 PCI_DMA_TODEVICE
 PCI_DMA_FROMDEVICE
 PCI_DMA_NONE

You should provide the exact DMA direction if you know it.

PCI_DMA_TODEVICE means "from main memory to the PCI device"
PCI_DMA_FROMDEVICE means "from the PCI device to main memory"

You are _strongly_ encouraged to specify this as precisely
as you possibly can.

If you absolutely cannot know the direction of the DMA transfer,
specify PCI_DMA_BIDIRECTIONAL.  It means that the DMA can go in
either direction.  The platform guarantees that you may legally
specify this, and that it will work, but this may be at the
cost of performance for example.

The value PCI_DMA_NONE is to be used for debugging.  One can
hold this in a data structure before you come to know the
precise direction, and this will help catch cases where your
direction tracking logic has failed to set things up properly.

Another advantage of specifying this value precisely (outside
of potential platform-specific optimizations of such) is for
debugging.  Some platforms actually have a write permission
boolean which DMA mappings can be marked with, much like page
protections in a user program can have.  Such platforms can
and do report errors in the kernel logs when the PCI controller
hardware detects violation of the permission setting.

Only streaming mappings specify a direction, consistent mappings
implicitly have a direction attribute setting of
PCI_DMA_BIDIRECTIONAL.

The SCSI subsystem provides mechanisms for you to easily obtain
the direction to use, in the SCSI command:

 scsi_to_pci_dma_dir(SCSI_DIRECTION)

Where SCSI_DIRECTION is obtained from the 'sc_data_direction'
member of the SCSI command your driver is working on.  The
interface mentioned above returns a value suitable for passing
into the streaming DMA mapping interfaces below.

For Networking drivers, it's a rather simple affair.  For transmit
packets, map/unmap them with the PCI_DMA_TODEVICE direction
specifier.  For receive packets, just the opposite, map/unmap them
with the PCI_DMA_FROMDEVICE direction specifier.

Using Streaming DMA mappings

The streaming DMA mapping routines can be called from interrupt context.
There are two versions of each map/unmap: one which maps/unmaps a single
memory region, and one which maps/unmaps a scatterlist.

To map a single region, you do:

 dma_addr_t dma_handle;

 dma_handle = pci_map_single(dev, addr, size, direction);

and to unmap it:

 pci_unmap_single(dev, dma_handle, size, direction);

You should call pci_unmap_single when the DMA activity is finished, e.g.
from the interrupt which told you the DMA transfer is done.

Similarly with scatterlists, you map a region gathered from several regions by:

 int i, count = pci_map_sg(dev, sglist, nents, direction);
 struct scatterlist *sg;

 for (i = 0, sg = sglist; i < count; i++, sg++) {
  hw_address[i] = sg_dma_address(sg);
  hw_len[i] = sg_dma_len(sg);
 }

where nents is the number of entries in the sglist.

The implementation is free to merge several consecutive sglist entries
into one (e.g. if DMA mapping is done with PAGE_SIZE granularity, any
consecutive sglist entries can be merged into one provided the first one
ends and the second one starts on a page boundary - in fact this is a huge
advantage for cards which either cannot do scatter-gather or have very
limited number of scatter-gather entries) and returns the actual number
of sg entries it mapped them to.

Then you should loop count times (note: this can be less than nents times)
and use the sg_dma_address() and sg_dma_len() macros where you previously
accessed sg->address and sg->length as shown above.

To unmap a scatterlist, just call:

 pci_unmap_sg(dev, sglist, nents, direction);

Again, make sure DMA activity finished.

PLEASE NOTE:  The 'nents' argument to the pci_unmap_sg call must be
              the _same_ one you passed into the pci_map_sg call,
              it should _NOT_ be the 'count' value _returned_ from the
              pci_map_sg call.

Every pci_map_{single,sg} call should have its pci_unmap_{single,sg}
counterpart, because the bus address space is a shared resource (although
in some ports the mapping is per bus, so fewer devices contend for the
same bus address space) and you could render the machine unusable by eating
all bus addresses.

If you need to use the same streaming DMA region multiple times and touch
the data in between the DMA transfers, just map it once
with pci_map_{single,sg}, and after each DMA transfer call either:

 pci_dma_sync_single(dev, dma_handle, size, direction);

or:

 pci_dma_sync_sg(dev, sglist, nents, direction);

and after the last DMA transfer call one of the DMA unmap routines
pci_unmap_{single,sg}. If you don't touch the data from the first pci_map_*
call till pci_unmap_*, then you don't have to call the pci_dma_sync_*
routines at all.

Here is pseudo code which shows a situation in which you would need
to use the pci_dma_sync_*() interfaces.

 my_card_setup_receive_buffer(struct my_card *cp, char *buffer, int len)
 {
  dma_addr_t mapping;

  mapping = pci_map_single(cp->pdev, buffer, len, PCI_DMA_FROMDEVICE);

  cp->rx_buf = buffer;
  cp->rx_len = len;
  cp->rx_dma = mapping;

  give_rx_buf_to_card(cp);
 }

 ...

 my_card_interrupt_handler(int irq, void *devid, struct pt_regs *regs)
 {
  struct my_card *cp = devid;

  ...
  if (read_card_status(cp) == RX_BUF_TRANSFERRED) {
   struct my_card_header *hp;

   /* Examine the header to see if we wish
    * to accept the data.  But synchronize
    * the DMA transfer with the CPU first
    * so that we see updated contents.
    */
   pci_dma_sync_single(cp->pdev, cp->rx_dma, cp->rx_len,
         PCI_DMA_FROMDEVICE);

   /* Now it is safe to examine the buffer. */
   hp = (struct my_card_header *) cp->rx_buf;
   if (header_is_ok(hp)) {
    pci_unmap_single(cp->pdev, cp->rx_dma, cp->rx_len,
       PCI_DMA_FROMDEVICE);
    pass_to_upper_layers(cp->rx_buf);
    make_and_setup_new_rx_buf(cp);
   } else {
    /* Just give the buffer back to the card. */
    give_rx_buf_to_card(cp);
   }
  }
 }

Drivers converted fully to this interface should not use virt_to_bus any
longer, nor should they use bus_to_virt. Some drivers have to be changed a
little bit, because there is no longer an equivalent to bus_to_virt in the
dynamic DMA mapping scheme - you have to always store the DMA addresses
returned by the pci_alloc_consistent, pci_pool_alloc, and pci_map_single
calls (pci_map_sg stores them in the scatterlist itself if the platform
supports dynamic DMA mapping in hardware) in your driver structures and/or
in the card registers.

Thursday, June 28, 2018

Linux Scheduler Evolution

The Linux scheduler is a priority-based scheduler that schedules tasks based upon their static and dynamic priorities. When these priorities are combined they form a task's goodness. Each time the Linux scheduler runs, every task on the run queue is examined and its goodness value is computed. The task with the highest goodness is chosen to run next.
When there are CPU-bound tasks running in the system, the Linux scheduler may not be called for intervals of up to 0.4 seconds. This means that the currently running task has the CPU to itself for periods of up to 0.4 seconds (how long depends upon the task's priority and whether it blocks or not). This is good for throughput because there are few computationally unnecessary context switches. However, it can kill interactivity because Linux only reschedules when a task blocks or when the task's dynamic priority (counter) reaches zero. Thus, under Linux's default priority-based scheduling method, long scheduling latencies can occur.
Looking at the scheduling latency in finer detail, the Linux scheduler makes use of a timer that interrupts every 10 msec. This timer erodes the currently running task's dynamic priority (decrements its counter). A task's counter starts out at the same value its priority contains. Once its dynamic priority (counter) has eroded to 0, it is reset to its static priority (priority). It is only after the counter reaches 0 that a call to schedule() is made. Thus a task with the default priority of 20 may run for 0.2 secs (200 msecs) before any other task in the system gets a chance to run. A task at priority 40 (the highest priority allowed) can run for 0.4 secs without any scheduling occurring, as long as it doesn't block or yield.
The Linux scheduler has gone through some big improvements since kernel version 2.4. There were a lot of complaints about the interactivity of the scheduler in kernel 2.4. In that version, the scheduler was implemented with one run queue for all available processors. At every scheduling decision, this queue was locked and every task on it got its timeslice updated. This implementation caused poor performance in all aspects. The scheduler algorithm and supporting code went through a large rewrite early in the 2.5 kernel development series. The new scheduler was designed to achieve O(1) run time regardless of the number of runnable tasks in the system. To achieve this, each processor got its own run queue, which helps a lot in reducing lock contention. The priority array was introduced, which used an active array and an expired array to keep track of runnable tasks in the system. The O(1) running time is primarily drawn from this new data structure. The scheduler puts all expired processes into the expired array. When no active process is available in the active array, it swaps the active array with the expired array, so the active array becomes the expired array and vice versa. There were some twists made to this scheduler to optimize it further, e.g. putting an expired task back into the active array instead of the expired array in some cases. The O(1) scheduler uses a heuristic calculation to update the dynamic priority of tasks based on their interactivity (I/O bound versus CPU bound). The industry was happy with this new scheduler until Con Kolivas introduced his new scheduler named Rotating Staircase Deadline (RSDL) and then later Staircase Deadline (SD). His new schedulers proved that fair scheduling among processes can be achieved without any complex computation. His scheduler was designed to run in O(n), but its performance exceeded the current O(1) scheduler.
The result achieved by the SD scheduler surprised all kernel developers and designers. The fair scheduling approach in the SD scheduler encouraged Ingo Molnar to re-implement the new Linux scheduler, named the Completely Fair Scheduler (CFS). The CFS scheduler was a big improvement over the existing scheduler, not only in its performance and interactivity but also in simplifying the scheduling logic and putting more modularized code into the scheduler. The CFS scheduler was merged into mainline version 2.6.23. Since then, there have been some minor improvements made to the CFS scheduler in areas such as optimization, load balancing, and the group scheduling feature.


  • An O(n) scheduler - Goes through the entire "global runqueue" to determine the next task to be run. This is an O(n) algorithm, where 'n' is the number of processes: the time taken was proportional to the number of active processes in the system

  • A Global runqueue - All CPUs had to wait for other CPUs to finish execution. A global runqueue for all processors in a symmetric multiprocessing (SMP) system meant a task could be scheduled on any processor -- which can be good for load balancing but bad for memory caches. For example, suppose a task executed on CPU-1, and its data was in that processor's cache. If the task got rescheduled to CPU-2, its data would need to be invalidated in CPU-1 and brought into CPU-2.

  • This led to large performance hits under heavy workloads

Kernel 2.4 Scheduler Policies:
  • SCHED_FIFO - A First-In, First-Out real-time process. When the scheduler assigns the CPU to the process, it leaves the process descriptor in its current position in the runqueue list. If no other higher-priority realtime process is runnable, the process will continue to use the CPU as long as it wishes, even if other real-time processes having the same priority are runnable

  • SCHED_RR - A Round Robin real-time process. When the scheduler assigns the CPU to the process, it puts the process descriptor at the end of the runqueue list. This policy ensures a fair assignment of CPU time to all SCHED_RR real-time processes that have the same priority

  • SCHED_OTHER - A conventional, time-shared process. The policy field also encodes a SCHED_YIELD binary flag. This flag is set when the process invokes the sched_yield() system call (a way of voluntarily relinquishing the processor without the need to start an I/O operation or go to sleep). The scheduler puts the process descriptor at the bottom of the runqueue list.

O(1) Algorithm ( Constant time algorithm )
  • Choose the task on the highest priority list to execute
  • To make this process more efficient, a bitmap is used to define when tasks are on a given priority list
  • On most architectures, a find-first-bit-set instruction is used to find the highest priority bit set in one of five 32-bit words (for the 140 priorities)
  • The time it takes to find a task to execute depends not on the number of active tasks but instead on the number of priorities
  • This makes the 2.6 scheduler an O(1) process because the time to schedule is both fixed and deterministic regardless of the number of active tasks


  • The 2.6 scheduler was designed and implemented by Ingo Molnar. His motivation in working on the new scheduler was to create a completely O(1) scheduler for wakeup, context-switch, and timer interrupt overhead
  • One of the issues that triggered the need for a new scheduler was the use of Java virtual machines (JVMs). The Java programming model uses many threads of execution, which results in lots of overhead for scheduling in an O(n) scheduler
  • Each CPU has a runqueue made up of 140 priority lists that are serviced in FIFO order. Tasks that are scheduled to execute are added to the end of their respective runqueue's priority list
  • Each task has a time slice that determines how much time it's permitted to execute
  • The first 100 priority lists of the runqueue are reserved for real-time tasks, and the last 40 are used for user tasks (MAX_RT_PRIO=100 and MAX_PRIO=140)
  • In addition to the CPU's runqueue, which is called the active runqueue, there's also an expired runqueue
  • When a task on the active runqueue uses all of its time slice, it's moved to the expired runqueue. During the move, its time slice is recalculated (and so is its priority)
  • If no tasks exist on the active runqueue for a given priority, the pointers for the active and expired runqueues are swapped, thus making the expired priority list the active one
[Figure: Scheduler]

Kernel 2.6 Scheduler Policies:
  • SCHED_NORMAL - A conventional, time-shared process (used to be called SCHED_OTHER), for normal tasks
  1. Each task assigned a “Nice” value
  2. PRIO = MAX_RT_PRIO + NICE + 20
  3. Assigned a time slice
  4. Tasks at the same prio(rity) are round-robined
  5. Ensures Priority + Fairness

  • SCHED_FIFO - A First-In, First-Out real-time process
  1. Run until they relinquish the CPU voluntarily
  2. Priority levels maintained
  3. Not pre-empted !!

  • SCHED_RR - A Round Robin real-time process
  1. Assigned a timeslice and run till the timeslice is exhausted.
  2. Once all RR tasks of a given prio(rity) level exhaust their timeslices, their timeslices are refilled and they continue running
  3. Prio(rity) levels are maintained

  • SCHED_BATCH - for "batch" style execution of processes
  1. For computing-intensive tasks
  2. Timeslices are long and processes are round robin scheduled
  3. lowest priority tasks are batch-processed (nice +19)

  • SCHED_IDLE - for running very low priority background job
  1. nice value has no influence for this policy
  2. extremely low priority (lower than +19 nice)

Completely Fair Scheduler (CFS)
  • The main idea behind the CFS is to maintain balance (fairness) in providing processor time to tasks. This means processes should be given a fair amount of the processor. When the time for tasks is out of balance (meaning that one or more tasks are not given a fair amount of time relative to others), then those out-of-balance tasks should be given time to execute.
    To determine the balance, the CFS maintains the amount of time provided to a given task in what's called the virtual runtime. The smaller a task's virtual runtime—meaning the smaller amount of time a task has been permitted access to the processor—the higher its need for the processor. The CFS also includes the concept of sleeper fairness to ensure that tasks that are not currently runnable (for example, waiting for I/O) receive a comparable share of the processor when they eventually need it.
    But rather than maintain the tasks in a run queue, as has been done in prior Linux schedulers, the CFS maintains a time-ordered red-black tree (see Figure below). A red-black tree is a tree with a couple of interesting and useful properties. First, it's self-balancing, which means that no path in the tree will ever be more than twice as long as any other. Second, operations on the tree occur in O(log n) time (where n is the number of nodes in the tree). This means that you can insert or delete a task quickly and efficiently.
cfs

Wednesday, June 6, 2018

Static and dynamic libraries

Linux Library Types:

There are two Linux C/C++ library types which can be created:
  1. Static libraries (.a): Library of object code which is linked with, and becomes part of the application.
  2. Dynamically linked shared object libraries (.so): There is only one form of this library but it can be used in two ways.
    1. Dynamically linked at run time. The libraries must be available during compile/link phase. The shared objects are not included into the executable component but are tied to the execution.
    2. Dynamically loaded/unloaded and linked during execution (i.e. browser plug-in) using the dynamic linking loader system functions.

Library naming conventions:

Libraries are typically named with the prefix "lib". This is true for all the C standard libraries. When linking, the command line reference to the library will not contain the library prefix or suffix.

Consider the following compile and link command: gcc src-file.c -lm -lpthread 

The libraries referenced in this example for inclusion during linking are the math library ("m") and the thread library ("pthread"). They are found in /usr/lib/libm.a and /usr/lib/libpthread.a.

Static Libraries: (.a)
How to generate a static library (object code archive file):
  • Compile: cc -Wall -c ctest1.c ctest2.c 
    Compiler options:
    • -Wall: include warnings. See man page for warnings specified.
  • Create library "libctest.a": ar -cvq libctest.a ctest1.o ctest2.o
  • List files in library: ar -t libctest.a
  • Linking with the library:
    • cc -o executable-name prog.c libctest.a
    • cc -o executable-name prog.c -L/path/to/library-directory -lctest

Dynamically Linked "Shared Object" Libraries: (.so)

How to generate a shared object (dynamically linked object library file). Note that this is a two-step process, with an optional third step:
  1. Create object code
  2. Create library
  3. Optional: create a default version using a symbolic link.
Library creation example:

    gcc -Wall -fPIC -c *.c
    gcc -shared -Wl,-soname,libctest.so.1 -o libctest.so.1.0   *.o
    mv libctest.so.1.0 /opt/lib
    ln -sf /opt/lib/libctest.so.1.0 /opt/lib/libctest.so.1
    ln -sf /opt/lib/libctest.so.1.0 /opt/lib/libctest.so
Compiler options:
  • -Wall: include warnings. See man page for warnings specified.
  • -fPIC: Compiler directive to output position independent code, a characteristic required by shared libraries. Also see "-fpic".
  • -shared: Produce a shared object which can then be linked with other objects to form an executable.
  • -Wl,options: Pass options to linker. 
    In this example the options passed on to the linker are: "-soname libctest.so.1". The name given with the "-o" option is handled by gcc itself, not passed to the linker.
  • Option -o: Output of operation. In this case the name of the shared object to be output will be "libctest.so.1.0"
Library Links:
  • The link to /opt/lib/libctest.so allows the naming convention for the compile flag -lctest to work.
  • The link to /opt/lib/libctest.so.1 allows the run time binding to work. See dependency below.
Compile main program and link with shared object library:

gcc -Wall -L/opt/lib prog.c -lctest -o prog
Library Path:
In order for an executable to find the required libraries at run time, one must configure the system so that the libraries can be found. Methods available: (Do at least one of the following)
  1. Add library directories to be included during dynamic linking to the file /etc/ld.so.conf
    Sample: /etc/ld.so.conf
    /usr/X11R6/lib
    /usr/lib
    ...
    ..
    /usr/lib/sane
    /usr/lib/mysql
    /opt/lib
              
    Add the library path to this file and then execute the command (as root) ldconfig to configure the linker run-time bindings. 
    You can use the "-f file-name" flag to reference another configuration file if you are developing for different environments. 
    See man page for command ldconfig.
    OR
  2. Add specified directory to library cache: (as root) 
    ldconfig -n /opt/lib 
    Where /opt/lib is the directory containing your library libctest.so 
    (When developing and just adding your current directory: ldconfig -n . Link with -L.)
    This will NOT permanently configure the system to include this directory. The information will be lost upon system reboot.
    OR
  3. Specify the environment variable LD_LIBRARY_PATH to point to the directory paths containing the shared object library. This will specify to the run time loader that the library paths will be used during execution to resolve dependencies. 
    (Linux/Solaris: LD_LIBRARY_PATH, SGI: LD_LIBRARYN32_PATH, AIX: LIBPATH, Mac OS X: DYLD_LIBRARY_PATH, HP-UX: SHLIB_PATH)
    Example (bash shell): export LD_LIBRARY_PATH=/opt/lib:$LD_LIBRARY_PATH or add to your ~/.bashrc file:
    ...
    if [ -d /opt/lib ];
    then
       LD_LIBRARY_PATH=/opt/lib:$LD_LIBRARY_PATH
    fi
    
    ...
    
    export LD_LIBRARY_PATH
          

    This instructs the run time loader to look in the path described by the environment variable LD_LIBRARY_PATH, to resolve shared libraries. This will include the path /opt/lib.
Library paths used should conform to the "Linux Standard Base" directory structure.

Dynamic loading and un-loading of shared libraries using libdl:
#include <stdio.h>
#include <stdlib.h>     /* exit() */
#include <dlfcn.h>
#include "ctest.h"

int main(int argc, char **argv)
{
   void *lib_handle;
   double (*fn)(int *);
   int x;
   char *error;

   lib_handle = dlopen("/opt/lib/libctest.so", RTLD_LAZY);
   if (!lib_handle)
   {
      fprintf(stderr, "%s\n", dlerror());
      exit(1);
   }

   dlerror();    /* clear any existing error before calling dlsym() */
   fn = dlsym(lib_handle, "ctest1");
   if ((error = dlerror()) != NULL)
   {
      fprintf(stderr, "%s\n", error);
      exit(1);
   }

   (*fn)(&x);
   printf("Valx=%d\n", x);

   dlclose(lib_handle);
   return 0;
}

gcc -rdynamic -o progdl progdl.c -ldl
Explanation:
  • dlopen("/opt/lib/libctest.so", RTLD_LAZY); 
    Open shared library named "libctest.so". 
    The second argument indicates the binding. See include file dlfcn.h
    Returns NULL if it fails. 
    Options:
    • RTLD_LAZY: Resolve symbols lazily; undefined symbols are not resolved until they are first referenced.
    • RTLD_NOW: All undefined symbols are resolved when dlopen() is called.
    • RTLD_GLOBAL: Make the symbols of the loaded library available for symbol resolution of subsequently loaded libraries.
  • dlsym(lib_handle, "ctest1"); 
    Returns the address of the function that has been loaded with the shared library.
    Returns NULL if it fails. 
    Note: When using C++ functions, first use nm to find the "mangled" symbol name or use the extern "C" construct to avoid name mangling. 
    i.e. extern "C" void function-name();