Plz give ur comments
Forked processes may not always have their own copies of ALL the segments
of the update engine. Linux does "copy-on-write" at page granularity,
i.e. a process gets its own copy of a page only if it modifies it. So
the RAM requirement is not high, and the only fixed overhead is the
creation of the kernel data structures.
If we have an MMU, the memory consumption of a process may be lower than
we might think because of the "copy-on-write" semantics.
Context-switching and interprocess communication times are higher. But
this is not a significant overhead if switching and communication
between processes are infrequent, as in our case, where each process
will execute its own copy of the update engine and work on independent
patch parts.
We do not require any additional libraries to support fork(), as we do
in the case of threads. The problem of concurrency and synchronization
complexity is also far less evident among processes created with fork().
Linux has a unique implementation of threads. To the Linux kernel,
there is no concept of a thread. Linux implements all threads as
standard processes. The Linux kernel does not provide any special
scheduling semantics or data structures to represent threads. Instead,
a thread is merely a process that shares certain resources with other
processes. Each thread has a unique task_struct and appears to the
kernel as a normal process that just happens to share resources, such
as an address space, with other processes.
Threads are created like normal tasks, with the exception that the
clone() system call is passed flags corresponding to the specific
resources to be shared. This leads to behavior identical to a normal
fork(), except that the address space, file system resources, file
descriptors, and signal handlers are shared. In other words, the new
task and its parent are what are popularly called threads.
This approach to threads contrasts greatly with operating systems such
as Microsoft Windows or Sun Solaris, which have explicit kernel support
for threads (and sometimes call threads lightweight processes). The name
"lightweight process" sums up the difference in philosophies between
Linux and other systems. To these other operating systems, threads are
an abstraction to provide a lighter, quicker execution unit than the
heavy process. To Linux, threads are simply a manner of sharing
resources between processes (which are already quite lightweight).
Threads require support libraries, so extra space is required in flash
memory. If we have to ship just one program that requires the threading
library (as in our case the update engine), then we have to ship the
threading library. Minimizing the threading library cost is only
possible if we can identify all multithreaded programs in the base
Linux distribution. Once we have the library in the flash image to
support just one such program, it costs "nothing" for additional
programs to also link to it. Updating the libraries may also be
required, which can increase the installation time.
Threads have a moderate RAM requirement, though it depends on the number
of threads. The advantage of threads is their lower resource
consumption. Multiple threads typically share the state information of a
single process, and share memory and other resources directly. Though
threads share resources, in our case the sharing is not substantial.
Threads take much less CPU time to switch among themselves than between
processes, because there's no need to switch address spaces. In
addition, because they share address space, threads in a process can
communicate more easily with one another. Inter-thread communication can
be easier than interprocess communication, since we can use shared
memory objects directly, but additional care must be taken to use
thread-safe functions wherever necessary.
Another problem is concurrency and synchronization complexity.
Sharing, locking, deadlocks, and race conditions come vividly alive in
threads. Processes don't usually have to deal with this, since most
shared data is passed through pipes. Threads can share file handles,
variables, signals, etc.; this may lead to error conditions if not
handled carefully.
Applications executed in a thread environment must be thread-safe. This
means that functions (or the methods in object-oriented applications)
must be reentrant: a function with the same input always returns the
same result, even if other threads concurrently execute the same
function. Accordingly, functions must be programmed in such a way that
they can be executed simultaneously by several threads.
Plz give ur comments
> Plz give ur comments
Plz gv mr lttrz!
Stop talking kiddyspeak!
Also, please quote some context if following up on someone (following up
on oneself is considered inappropriate).
Josef Möllers (Pinguinpfleger bei FSC)
If failure had no penalty success would not be a prize
-- T. Pratchett
> Threads take much less CPU time to switch among themselves than
> between processes, because there's no need to switch address spaces.
I would question this statement.
To Linux, the scheduling overheads of threads are the same as those of
processes.
The assumption that threads are somehow lighter because they "share
address space" with other threads ignores the fact that, if the OS
context-switches from a thread of one process to a different process,
it must "switch address spaces".
And, because Linux threads are scheduled as if they were separate
processes, there's no way to force Linux to schedule threads of a
common process together to ensure that the address space doesn't need
to be switched.
In other words, the Linux scheduling system will likely endure as much
overhead with threaded programs as it would with unthreaded programs.
Lew Pitcher wrote:
> I would question this statement.
I question your questioning.
> To Linux, the scheduling overheads of threads are the same as those
> of processes.
Not quite. See below.
> The assumption that threads are somehow lighter because they "share
> address space" with other threads ignores the fact that, if the OS
> context-switches from a thread of one process to a different process,
> it must "switch address spaces".
Yes, if you switch from one of the threads to another process, you will
have a full context switch. However, some of the time you will likely
switch between threads, in which case you gain some efficiency.
> And, because Linux threads are scheduled as if they were separate
> processes, there's no way to force Linux to schedule threads of a
> common process together to ensure that the address space doesn't need
> to be switched.
In Linux, the scheduler knows about "tasks", rather than threads or
processes. These "tasks" may share various resources, including memory
map, signal handlers, file descriptors, etc. It just so happens that
what we call threads share almost everything.
Back in 2.4, the scheduler actually did give a small bonus to tasks that
shared a memory map with the previous task. This tended to allow threads
to be scheduled right after their sibling threads.
In 2.6 with the O(1) scheduler I don't think this is done anymore. But
there's still a gain even without forcing it, since they will
occasionally run consecutively. Whether that gain is significant will
depend on what else is on the system.
> In other words, the Linux scheduling system will likely endure as much
> overhead with threaded programs as it would with unthreaded programs.
Depends on the system. If your main app is threaded and the rest of the
system is basically idle...