Semaphore Problems
When first reading about semaphores, it is very tempting to conclude that they are the solution to all of our shared-data problems. This is not true. In fact, your systems will probably work better the fewer times you have to use semaphores. The problem is that semaphores work only if you use them perfectly, and there is no guarantee that you (or your coworkers) will do that. There are any number of tried-and-true ways to mess up with semaphores:
Forgetting to take the semaphore. Semaphores only work if every task that accesses the shared data, for read or for write, uses the semaphore. If anybody forgets, then the RTOS may switch away from the code that forgot to take the semaphore and cause an ugly shared-data bug.
Forgetting to release the semaphore. If any task fails to release the semaphore, then every other task that ever uses the semaphore will sooner or later block waiting to take that semaphore and will be blocked forever.
Taking the wrong semaphore. If you are using multiple semaphores, then taking the wrong one is as bad as forgetting to take one.
Holding a semaphore for too long. Whenever one task takes a semaphore, every other task that subsequently wants that semaphore has to wait until the semaphore is released. If one task takes the semaphore and then holds it for too long, other tasks may miss real-time deadlines.
A particularly perverse instance of this problem can arise if the RTOS switches from a low-priority task (call it Task C) to a medium-priority task (call it Task B) after Task C has taken a semaphore. A high-priority task (call it Task A) that wants the semaphore then has to wait until Task B gives up the microprocessor: Task C can't release the semaphore until it gets the microprocessor back. No matter how carefully you code Task C, Task B can prevent Task C from releasing the semaphore and can thereby hold up Task A indefinitely. This problem is called priority inversion; some RTOSs resolve it with priority inheritance: they temporarily boost the priority of Task C to that of Task A whenever Task C holds the semaphore and Task A is waiting for it.
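To make the intended discipline concrete, here is a minimal sketch of a task protecting a shared variable with a mutex. It uses the FreeRTOS calls xSemaphoreCreateMutex, xSemaphoreTake, and xSemaphoreGive purely as an example (FreeRTOS mutexes implement priority inheritance); the task, function, and variable names are invented for illustration, and other RTOSs spell these operations differently.

#include "FreeRTOS.h"
#include "semphr.h"

/* Data shared by several tasks. */
static long lSecondsToday;

/* Mutex protecting lSecondsToday.  Because this is a mutex rather than
   a plain binary semaphore, a low-priority task that holds it is
   temporarily boosted while a higher-priority task waits for it. */
static SemaphoreHandle_t xTimeMutex;

void vInitTime(void)
{
    xTimeMutex = xSemaphoreCreateMutex();
}

void vUpdateTime(void)
{
    /* Take the mutex before touching the shared data ... */
    xSemaphoreTake(xTimeMutex, portMAX_DELAY);
    ++lSecondsToday;
    /* ... and release it as soon as the access is done. */
    xSemaphoreGive(xTimeMutex);
}

Every task that reads or writes lSecondsToday must follow the same take/access/release pattern; skipping the take or forgetting the give re-creates the first two problems in the list above.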
Semaphore Variants
There are a number of different kinds of semaphores. Here is an overview of some of the more common variations:
Some systems offer semaphores that can be taken multiple times. Essentially, such semaphores are integers; taking them decrements the integer, and releasing them increments the integer. If a task tries to take the semaphore when the integer is equal to zero, then the task will block. These semaphores are called counting semaphores, and they were the original type of semaphore. (A sketch of one use of a counting semaphore follows this list of variants.)
Some systems offer semaphores that can be released only by the task that took them. These semaphores are useful for the shared-data problem, but they cannot be used to communicate between two tasks. Such semaphores are sometimes called resource semaphores or resources.
Some RTOSs offer one kind of semaphore that will automatically deal with the priority inversion problem and another that will not. The former kind is commonly called a mutex semaphore, or simply a mutex. (Other RTOSs offer semaphores that they call mutexes but that do not deal with priority inversion.)
If several tasks are waiting for a semaphore when it is released, systems vary as to which task gets to run. Some systems will run the task that has been waiting longest; others will run the highest-priority task that is waiting for the semaphore. Some systems give you the choice.
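As a brief illustration of the counting variety mentioned above, here is a sketch in which a counting semaphore guards a pool of three identical buffers. The FreeRTOS call xSemaphoreCreateCounting is used only as an example, and the buffer-pool scenario and names are invented.

#include "FreeRTOS.h"
#include "semphr.h"

#define BUFFER_COUNT 3

/* Counting semaphore whose value tracks how many buffers are free. */
static SemaphoreHandle_t xBuffersFree;

void vInitBufferPool(void)
{
    /* Maximum count and initial count are both 3: all buffers free. */
    xBuffersFree = xSemaphoreCreateCounting(BUFFER_COUNT, BUFFER_COUNT);
}

void vUseOneBuffer(void)
{
    /* Decrements the count; blocks if all three buffers are in use. */
    xSemaphoreTake(xBuffersFree, portMAX_DELAY);
    /* ... use one of the buffers ... */
    /* Increments the count; unblocks a task waiting for a buffer. */
    xSemaphoreGive(xBuffersFree);
}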
Ways to Protect Shared Data
We have discussed two ways to protect shared data: disabling interrupts and using semaphores. There is a third way that deserves at least a mention: disabling task switches. Most RTOSs have two functions you can call, one to disable task switches and one to reenable them after they’ve been disabled. As is easy to see, you can protect shared data from an inopportune task switch by disabling task switches while you are reading or writing the shared data.
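For comparison with the mutex sketch earlier, here is the same sort of access protected by disabling task switches instead. The calls shown, vTaskSuspendAll and xTaskResumeAll, are the FreeRTOS scheduler-suspension functions, used here only as an example; the shared variable is again invented.

#include "FreeRTOS.h"
#include "task.h"

static long lSecondsToday;

void vUpdateTime(void)
{
    /* No other task can run between these two calls, so a task switch
       cannot corrupt the shared data.  Interrupts remain enabled,
       however, so this does not protect data shared with interrupt
       routines. */
    vTaskSuspendAll();
    ++lSecondsToday;
    (void) xTaskResumeAll();
}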
Here’s a comparison of the three methods of protecting shared data:
1. Disabling interrupts is the most drastic in that it will affect the response times of all the interrupt routines and of all other tasks in the system. (If you disable interrupts, you also disable task switches, because the scheduler cannot get control of the microprocessor to switch.) On the other hand, disabling interrupts has two advantages. (1) It is the only method that works if your data is shared between your task code and your interrupt routines. Interrupt routines are not allowed to take semaphores, as we will discuss in the next chapter, and disabling task switches does not prevent interrupts. (2) It is fast. Most processors can disable or enable interrupts with a single instruction; all of the RTOS functions are many instructions long. If a task's access to shared data lasts only a short period of time (incrementing a single variable, for example), it is sometimes preferable to take the shorter hit on interrupt response than to take the longer hit on task response that you get from using a semaphore or disabling task switches. (A sketch of this short-access case follows this list.)
2. Taking semaphores is the most targeted way to protect data, because it affects only those tasks that need to take the same semaphore. The response times of interrupt routines and of tasks that do not need the semaphore are unchanged. On the other hand, semaphores do take up a certain amount of microprocessor time (albeit not much in most RTOSs), and they will not work for interrupt routines.
3. Disabling task switches is somewhere in between the two. It has no effect on interrupt routines, but it stops response for all other tasks cold.
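As noted in item 1, when the access is very short it is sometimes preferable to disable interrupts around it. Here is a sketch of that case; taskENTER_CRITICAL and taskEXIT_CRITICAL are the FreeRTOS macros for entering and leaving a critical section in task code (on some ports they mask only interrupts up to a configured priority), and the variable is again invented.

#include "FreeRTOS.h"
#include "task.h"

/* Shared with an interrupt routine, so a semaphore cannot be used. */
static volatile int iErrorCount;

void vCountError(void)
{
    /* The protected access is a single increment, so interrupts are
       disabled for only a very short time. */
    taskENTER_CRITICAL();
    ++iErrorCount;
    taskEXIT_CRITICAL();
}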