<P> In a deadlock, two or more tasks lock mutexes without timeouts and then wait forever for the other task's mutex, creating a cyclic dependency. The simplest deadlock scenario occurs when two tasks alternately lock two mutexes, but in the opposite order. Deadlock is prevented by careful design. </P> <P> The other approach to resource sharing is for tasks to send messages in an organized message-passing scheme. In this paradigm, the resource is managed directly by only one task. When another task wants to interrogate or manipulate the resource, it sends a message to the managing task. Although their real-time behavior is less crisp than that of semaphore systems, simple message-based systems avoid most protocol deadlock hazards and are generally better-behaved than semaphore systems. However, problems like those of semaphores are possible. Priority inversion can occur when a task is working on a low-priority message and ignores a higher-priority message (or a message originating indirectly from a high-priority task) in its incoming message queue. Protocol deadlocks can occur when two or more tasks wait for each other to send response messages. </P> <P> Since an interrupt handler blocks the highest-priority task from running, and since real-time operating systems are designed to keep thread latency to a minimum, interrupt handlers are typically kept as short as possible. The interrupt handler defers all interaction with the hardware if possible; typically all that is necessary is to acknowledge or disable the interrupt (so that it will not occur again when the handler returns) and to notify a task that work needs to be done. This can be done by unblocking a driver task through releasing a semaphore, setting a flag, or sending a message. A scheduler often provides the ability to unblock a task from interrupt handler context. </P> <P> An OS maintains catalogues of the objects it manages, such as threads, mutexes, memory, and so on.
Updates to this catalogue must be strictly controlled. For this reason, it can be problematic when an interrupt handler calls an OS function while the application is also in the act of doing so. The OS function called from the interrupt handler could find the object database in an inconsistent state because of the application's update. There are two major approaches to dealing with this problem: the unified architecture and the segmented architecture. RTOSs implementing the unified architecture solve the problem by simply disabling interrupts while the internal catalogue is updated. The downside of this is that interrupt latency increases, potentially losing interrupts. The segmented architecture does not make direct OS calls but delegates the OS-related work to a separate handler. This handler runs at a higher priority than any thread but lower than the interrupt handlers. The advantage of this architecture is that it adds very few cycles to interrupt latency. As a result, OSes that implement the segmented architecture are more predictable and can deal with higher interrupt rates than those using the unified architecture. </P>
