PSI sics-cvs-psi-complete-tree-post-site-support

Commit ae77364de2 (parent 6373f6b0fb), committed by Douglas Clowes, 2004-03-09 15:18:11 +00:00.
196 changed files with 8344 additions and 3485 deletions.


@@ -3,7 +3,7 @@ In this chapter the facilities of the SICS servers kernel will be examined
more closely. All the kernel modules and their functions will be listed,
together with some explanatory information and an overview of the
application programmer's interfaces (APIs) provided. This section should
answer the questions: What is available? Where to find what?
Why did they do that? Details of
the APIs mentioned are given in the reference section.
@@ -34,11 +34,12 @@ SICS sense is defined by a function, the task function. It is of the type:
int TaskFunction(void *pData);
\end{verbatim}
When it is time for the task to execute, this function is called with a
pointer to the task's own data structure as a parameter. This data
structure must have been defined by the user of this module. The task
function returns 1 if it shall continue to live, or 0 if it should be
deleted from the task list. These task functions are kept in a
list. The elements of this list are visited cyclically when the
scheduler runs.
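To make this concrete, the sketch below shows one way such a task list could look in C. The names TaskMan, TaskCreate, TaskRegister and TaskSchedule are hypothetical and only mirror the description above; they are not the actual SICS kernel functions.
\begin{verbatim}
#include <stdlib.h>

/* Hypothetical sketch of a task list as described above. */
typedef int (*TaskFunc)(void *pData);

typedef struct TaskItem {
    TaskFunc func;              /* the task function               */
    void *pData;                /* the task's own data structure   */
    struct TaskItem *next;
} TaskItem;

typedef struct {
    TaskItem *head;             /* list visited cyclically         */
} TaskMan;

TaskMan *TaskCreate(void)
{
    return (TaskMan *)calloc(1, sizeof(TaskMan));
}

void TaskRegister(TaskMan *self, TaskFunc func, void *pData)
{
    TaskItem *item = (TaskItem *)malloc(sizeof(TaskItem));
    item->func = func;
    item->pData = pData;
    item->next = self->head;
    self->head = item;
}

/* One pass of the scheduler: call each task function and remove those
   which return 0; tasks returning 1 stay in the list. */
void TaskSchedule(TaskMan *self)
{
    TaskItem **link = &self->head;
    while (*link != NULL) {
        TaskItem *item = *link;
        if (item->func(item->pData) == 0) {
            *link = item->next;     /* task asked to be deleted */
            free(item);
        } else {
            link = &item->next;
        }
    }
}
\end{verbatim}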
The API to this object consists of functions for creating a task manager,
adding tasks to the task list, and running the task list. Other functions
@@ -102,7 +103,7 @@ SICS should be changed. For instance to Token--Ring or AppleTalk or
whatever.
The network reader implements the polling for network messages in the SICS
server. It is organized as one of the SICS system tasks. Polling in a POSIX
environment is all about the select() system call. The select() system call
makes it possible to enquire whether an open TCP/IP socket has data pending
to be read, can be written to, etc. For more details see the Unix man pages
for the select system call. An earlier version of the SICS server had a list of connection
@@ -124,7 +125,7 @@ The network reader currently supports four types of sockets:
\item User sockets.
\end{itemize}
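As a reminder of how such a poll looks in code, here is a minimal select() sketch; the function and variable names are illustrative and not taken from the SICS network reader.
\begin{verbatim}
#include <sys/select.h>
#include <sys/time.h>

/* Illustrative sketch: check a set of sockets for pending input without
   blocking, as a polling system task would do on each invocation. */
void PollOnce(int sockets[], int nSockets)
{
    fd_set readMask;
    struct timeval timeout;
    int i, maxFd = -1;

    FD_ZERO(&readMask);
    for (i = 0; i < nSockets; i++) {
        FD_SET(sockets[i], &readMask);
        if (sockets[i] > maxFd) {
            maxFd = sockets[i];
        }
    }

    timeout.tv_sec = 0;           /* zero timeout: poll, do not block */
    timeout.tv_usec = 0;

    if (select(maxFd + 1, &readMask, NULL, NULL, &timeout) > 0) {
        for (i = 0; i < nSockets; i++) {
            if (FD_ISSET(sockets[i], &readMask)) {
                /* data pending: read and dispatch according to socket type */
            }
        }
    }
}
\end{verbatim}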
The accept type of socket is the main server port to which clients try to
connect. The network reader accepts the connection and tries to read a
username/password pair for a specified amount of time.
If the username/password is valid, the connection will be accepted,
@@ -135,13 +136,15 @@ the system and the network reader registers a new client command port.
The normal client command ports are accepted connections from a client. The
SICS server expects commands to be sent from the clients. Thus any data
pending on such a socket will be read and split into single commands at
the newline character. The network reader then checks whether the command
represents an interrupt (see \ref{prot1}) and, if so, processes the interrupt
immediately. If not, the command is put onto the connection's command
stack for execution.
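A sketch of this command handling is given below. IsInterruptCommand, ProcessInterrupt and PushCommand are hypothetical stand-ins for the real connection and interrupt interfaces, and the interrupt token used here is only a placeholder (the real string is defined in the protocol description, \ref{prot1}).
\begin{verbatim}
#include <stdio.h>
#include <string.h>

/* Hypothetical stand-ins for the real connection and interrupt interfaces. */
static int IsInterruptCommand(const char *cmd)
{
    /* placeholder token; see the protocol description for the real string */
    return strncmp(cmd, "SICSINT", 7) == 0;
}

static void ProcessInterrupt(const char *cmd)
{
    printf("interrupt handled immediately: %s\n", cmd);
}

static void PushCommand(const char *cmd)
{
    printf("queued on the command stack:   %s\n", cmd);
}

/* Split the data read from a command socket at newline characters;
   interrupts are processed at once, everything else is queued. */
void HandleCommandData(char *buffer)
{
    char *line = strtok(buffer, "\n");
    while (line != NULL) {
        if (IsInterruptCommand(line)) {
            ProcessInterrupt(line);
        } else {
            PushCommand(line);
        }
        line = strtok(NULL, "\n");
    }
}
\end{verbatim}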
The SICS server also accepts interrupts on its UDP port. This is checked
for when handling data pending on the server's UDP port. This feature
is implemented but not well tested and not used at the moment.
User type sockets are a feature for dealing with extremely slow hardware
connections. Some hardware devices require a long time to answer requests.
@@ -175,16 +178,18 @@ mechanism. For more details see John Ousterhout's book.
In an earlier stage it was considered to use the Tcl interpreter as the SICS
interpreter. This idea was discarded for several reasons. One was the
difficulty of transporting the client execution context (i.e. the connection
object) through the Tcl interpreter. This reason has become invalid
now, with the advent of Tcl 8.+ which supports namespaces. The second
was security: the Tcl interpreter is very powerful and can be
abused. It was felt that the system had to be protected against such
problems. The third reason was that the set of user commands should
not be cluttered with Tcl commands, in order to prevent
confusion. Programming macros is in any case something which is done by
SICS managers or programmers. However, the SICS interpreter is still
modeled very much like the Tcl interpreter. A Tcl interpreter is
still included in order to provide a full-featured macro
language. The SICS interpreter and the Tcl macro interpreter are
still tightly coupled.
The SICS interpreter must forward commands to the SICS objects. For this the
interpreter needs some help from the objects themselves. Each SICS object
@@ -285,34 +290,15 @@ For the SICS concept for handling sample environment devices see
\section{The Server}
The server module defines a server data structure. A pointer to this
data structure is the sole global variable in the SICS system. Its name is
{\bf pServ}. This data structure contains pointers to the most
important SICS components: the interpreter, the task switcher, the device
executor, the environment monitor and the network reader. This module also
contains the code for initializing, running and stopping the server.
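To make the description concrete, the server data structure can be pictured roughly as below; the member names and types are illustrative guesses, not the actual definitions from the SICS sources.
\begin{verbatim}
/* Illustrative sketch only; the real definition lives in the SICS sources. */
typedef struct {
    void *pSics;        /* the SICS interpreter     */
    void *pTasker;      /* the task switcher        */
    void *pExecutor;    /* the device executor      */
    void *pMonitor;     /* the environment monitor  */
    void *pReader;      /* the network reader       */
} SicsServer;

/* The sole global variable of the SICS system. */
extern SicsServer *pServ;
\end{verbatim}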
\section{The Performance Monitor}
This facility provides the data for the ``Performance''
(see user documentation)
command. The Performance Monitor is a task which increments a counter each
time it is executed. After a predefined integration time (20 seconds) a new
cycles-per-second value is calculated. This is the data retrievable by the
@@ -325,7 +311,7 @@ monitor may well be removed from the system without harm.
\section{The Object Factory}
During SICS initialization the SICS interpreter's command list needs to be
initialized. This is the task of the object factory. Its function
InitIniCommands initializes all fixed, general SICS commands and all object
creation commands. Then the server initialization file is processed from the
server initialization code. After this is done, the server initialization
code calls KillIniCommands which removes the now surplus object creation
@@ -351,5 +337,63 @@ users. If this becomes a serious concern, this module has to be rewritten.
\section{The Server Main Function}
This does not do much: it just initializes the server, runs it, and stops it.
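A minimal sketch of such a main function is shown below; InitServer, RunServer and StopServer are hypothetical stand-ins for the real initialization, run and shutdown routines.
\begin{verbatim}
#include <stdio.h>

/* Hypothetical stand-ins for the real server routines. */
static int InitServer(int argc, char *argv[])
{
    (void)argc; (void)argv;
    return 1;                 /* build interpreter, tasks, reader, ... */
}

static void RunServer(void)
{
    /* drive the task switcher until the server is asked to stop */
}

static void StopServer(void)
{
    /* orderly shutdown: close connections, free resources */
}

int main(int argc, char *argv[])
{
    if (!InitServer(argc, argv)) {
        fprintf(stderr, "server initialization failed\n");
        return 1;
    }
    RunServer();
    StopServer();
    return 0;
}
\end{verbatim}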
\section{Logging}
The SICS server offers multiple options for logging:
\begin{itemize}
\item There is a cyclical server log logging all traffic. This is
described below.
\item Per-client-connection log files can be configured. This is part
of the connection object interface.
\item A special module, the commandlog, exists which saves all traffic
issued on client connections with user or manager privilege. This is
the most useful log for finding problems. This facility can be
configured to create a log file per day, or the user can demand to
have her very own log file.
\end{itemize}
\subsection{The ServerLog}
As part of the SICS kernel there exists a global server log file. This file
contains:
\begin{itemize}
\item All traffic on all client connections. Even messages suppressed by the
clients.
\item All internal error messages.
\item Notifications about important internal status changes.
\end{itemize}
This server log is meant as an aid in debugging the server. As the SICS
server may run for days, weeks and months uninterrupted, this log file may
become very large. However, only the last thousand or so messages are really
of interest when tracking a problem. Therefore a scheme is implemented to
limit the disk space used by the server log. The server log writes
cyclically into a number of files. A count is kept of the lines
written to each file; above a predefined count, a new file is started.
As an interface the server log provides a function which allows a
message to be written to it. This can be used by any object in the system for
interesting messages. The number of files to cycle through and the length of
each file can be configured by defines at the top of servlog.c.
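A minimal sketch of such a cyclical log writer is shown below, assuming hypothetical names and limits; the real implementation and its defines live in servlog.c.
\begin{verbatim}
#include <stdio.h>

#define LOG_FILES     5          /* number of files to cycle through  */
#define LINES_PER_LOG 1000       /* line count before switching files */

static FILE *logFile   = NULL;
static int   lineCount = 0;
static int   fileIndex = 0;

static void OpenNextLogFile(void)
{
    char name[64];
    if (logFile != NULL) {
        fclose(logFile);
    }
    fileIndex = (fileIndex + 1) % LOG_FILES;   /* overwrite the oldest file */
    sprintf(name, "server%02d.log", fileIndex);
    logFile = fopen(name, "w");
    lineCount = 0;
}

/* The single interface function: any object may write a message here. */
void ServerWriteLog(const char *message)
{
    if (logFile == NULL || lineCount >= LINES_PER_LOG) {
        OpenNextLogFile();
    }
    if (logFile != NULL) {
        fprintf(logFile, "%s\n", message);
        fflush(logFile);
        lineCount++;
    }
}
\end{verbatim}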
\section{Instrument Status Persistence}
Real programs do dump core (the SICS server is good, but is no
exception in this respect) and real computers fall over. In such cases
it would be useful if instrument configuration parameters such as
zero points, variable settings, etc. are not lost. SICS achieves this
by writing a status file each time a parameter changes. This
status file is read back whenever the SICS server starts. The default
status file is configured in the instrument startup file as the SicsOption
statusfile. The user
can also request a status file to be written or recovered manually.
The status file is just a file with SICS commands which configure
relevant parameters. The actual writing of these commands is delegated
to each SICS object. Each SICS object which wishes to save data into
the status file has to implement a function which will
automatically be called when a status file is written. For details,
consult the chapter on SICS object implementation.
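As an illustration, a status-save callback for a hypothetical motor-like object might be sketched as follows. The function name, its signature and the command syntax written to the status file are assumptions made for this sketch; the actual interface is described in the chapter on SICS object implementation.
\begin{verbatim}
#include <stdio.h>

/* Hypothetical object data; real SICS objects carry their own structures. */
typedef struct {
    char  name[32];        /* name under which the object was created */
    float zeroPoint;       /* configuration parameters worth saving   */
    float softLowerLim;
    float softUpperLim;
} DummyMotor;

/* Sketch of a status-save callback: append SICS commands to the status
   file which, when replayed at startup, restore these parameters. */
int DummyMotorSaveStatus(void *pData, char *fileName)
{
    DummyMotor *self = (DummyMotor *)pData;
    FILE *fd = fopen(fileName, "a");
    if (fd == NULL) {
        return 0;
    }
    fprintf(fd, "%s zero %f\n", self->name, self->zeroPoint);
    fprintf(fd, "%s softlowerlim %f\n", self->name, self->softLowerLim);
    fprintf(fd, "%s softupperlim %f\n", self->name, self->softUpperLim);
    fclose(fd);
    return 1;
}
\end{verbatim}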