System, had to meet the following specifications:
\item Enhanced portability across instrument hardware. This means that it
should be easy to add other types of motors, counters or other hardware to
the system.
\item Support authorization on the command and parameter modification
level. This means that certain instrument settings can be protected
against accidental changes by less knowledgeable users.
\item Good maintainability and extensibility.
matches the above criteria.

\section{The SINQ Hardware Setup}
SICS had to take into account the SINQ hardware setup which had been decided
upon earlier. Most hardware, such as motors and counters, is controlled via
RS--232 interfaces. These RS--232 interfaces are connected to a
terminal server which allows such devices to be accessed through the TCP/IP
network.

For historical reasons the instrument control software does not access
the terminal server directly but through another software layer, the
SerPortServer program. The SerPortServer program is another TCP/IP
server which allows multiple network clients to access the same
terminal server port through a home-grown protocol. In the long run
this additional software layer will be abolished.

Some hardware devices, such as the histogram memory, can handle
TCP/IP themselves. With such devices the instrument control program
communicates directly through TCP/IP, without a terminal server. All
hardware devices take care of their real time needs themselves. Thus the
only task of the instrument control program is to orchestrate the hardware
devices. SICS is designed with this setup in mind, but is not restricted
to it.

This is a real concern at SINQ where VMS,
Intel-PC, Macintosh and Unix users have to be satisfied.
As many instrument scientists still prefer
the command line for interacting with instruments, the most used client is a
visual command line client. Status displays are another kind of specialized
client program. Graphical user interfaces are under consideration for some
instruments. As an example of a client, a screen shot of the status display
client for a powder diffractometer is given in figure \ref{dmc}.

\begin{figure}
%% \epsfxsize=0.65\textwidth
\epsfxsize=160mm
%% \epsffile{dmc.eps}
\epsffile{dmccom.eps}
\caption{Example of a SICS client: Powder Diffractometer Status Display}\label{dmc}
\end{figure}

The SICS server is the core component of the SICS system. The SICS server is
responsible for doing all the work in instrument control. Additionally the
server has to answer the requests of possibly multiple clients.
The SICS server can be subdivided into three subsystems:
\begin{description}
\item[The kernel] The SICS server kernel
takes care of client multitasking and the preservation of the proper
I/O and error context for each executing client command.
\item[SICS Object Database] SICS objects are software modules which
represent all aspects of an instrument: hardware devices, commands,
measurement strategies and data storage. This database of objects is
initialized at server startup time from an initialization script.
\item[The Interpreter] The interpreter allows commands to be issued to the
objects in the object database.
\end{description}
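The interpreter's role — selecting an object from the database by the first word of a command and handing it the remaining words — can be sketched as follows. This is a minimal sketch: the object name, commands and reply strings are hypothetical, and the real SICS interpreter is implemented in C, not Python.

```python
# Minimal sketch of an interpreter dispatching commands to an object
# database; object names and command syntax are hypothetical.
class Motor:
    def __init__(self, position=0.0):
        self.position = position

    def execute(self, args):
        if args and args[0] == "drive":
            self.position = float(args[1])
            return f"driving to {self.position}"
        return f"position = {self.position}"

# Filled from an initialization script at server startup.
object_db = {"omega": Motor()}

def interpret(command_line: str) -> str:
    words = command_line.split()
    obj = object_db.get(words[0])   # first word selects the object
    if obj is None:
        return f"ERROR: unknown object {words[0]}"
    return obj.execute(words[1:])   # object handles the rest
```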

The schematic drawing of the SICS server's structure is given in figure
\ref{newsics}.
\begin{figure}

In more detail the SICS server kernel has the following tasks:
\item Monitor HW--operations.
\item Monitor environment devices.
\end{itemize}
Any program serving multiple clients has the problem of how to organize
multiple clients accessing the same server and how to prevent one client
from reading data while another client is writing. The approach used for
the SICS server is a combination of polling and cooperative multitasking.
This scheme is simple and can be implemented in an operating system
independent manner. One way to look at the SICS server is as a series of
tasks in a circular queue executing one after another. The server's main
loop does nothing but execute the tasks in this circular buffer in an
endless loop. There are several system tasks and one such task for each
living client connection. Thus only one task executes at any given time
and data access is efficiently serialized.
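The circular task queue can be sketched as below. This is a minimal sketch under stated assumptions: the task names are hypothetical, each task is assumed to do a small slice of work per cycle, and real SICS tasks are C structures, not Python objects.

```python
from collections import deque

# Minimal sketch of a cooperative round-robin task loop; each task's
# run() does a small amount of work and returns False when finished.
class CountdownTask:
    def __init__(self, name, cycles):
        self.name, self.cycles = name, cycles

    def run(self):
        self.cycles -= 1
        return self.cycles > 0       # False -> drop task from the queue

def task_loop(tasks):
    queue = deque(tasks)
    log = []
    while queue:                     # the real server loops forever
        task = queue.popleft()
        log.append(task.name)        # only one task runs at a time
        if task.run():
            queue.append(task)       # still alive: back into the circle
    return log

# Two tasks interleave deterministically, without threads or locks.
order = task_loop([CountdownTask("net-reader", 2), CountdownTask("client-1", 3)])
```

Because only one task ever runs at a time, shared data needs no locking — exactly the serialization property claimed above.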

One of the main system
tasks (and the one which will always be there) is the network reader. The
network reader has a list of open network connections and checks each of
them for pending requests. What happens when data is pending on an open
network port depends on the type of port: If it is the server's main
connection port, the network reader will try to accept and verify a new
client connection and create the associated data structures. If the port
belongs to an open client connection, the network reader will read the
pending command and put it onto a command stack existing for each client
connection. When it is time for a client task to execute, it will fetch a
command from its very own command stack and execute it.
This is how the SICS server deals with client requests.
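The network reader's two cases — accept a new client on the main port, or stack a command for an existing connection — can be sketched as follows. This is a minimal sketch: the socket polling (`select()` in a real server) is omitted, and all names are hypothetical.

```python
# Minimal sketch of the network reader's decision for each readable
# port; real socket handling is omitted for brevity.
class Connection:
    def __init__(self, name):
        self.name = name
        self.command_stack = []      # one command stack per client

connections = []

def on_data_ready(port, payload=None):
    if port == "main":
        # Main server port: accept and verify a new client and
        # create the associated data structures.
        conn = Connection(payload)
        connections.append(conn)
        return conn
    # Existing client port: push the pending command onto its stack;
    # the client's task pops and executes it when scheduled.
    port.command_stack.append(payload)
    return port
```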

The scheme described above relies on the fact that most SICS commands need
a hardware request, all other clients' requests to drive the hardware will
return an error. The device executor is also responsible for monitoring the
progress of a hardware operation. It does so by adding a special task into
the system which checks the status of the operation each time this task
executes. When the hardware operation is finished, this
device executor task will end. A special system facility allows a client
task to wait for the device executor task to end while the rest of the task
queue is still executing. In this way time-intensive hardware operations can
Most experiments do not happen at ambient room conditions but
require some special environment for the sample. Mostly this is temperature,
but it can also be magnetic or electric fields etc. Most such devices
can regulate themselves, but the data acquisition program needs to monitor
them. Within SICS, this is done via a special system object, the
environment monitor. An environment device, for example a temperature
controller, registers its presence with this object. Then a special system
task will control this device when it is executing, check for possible
out-of-range errors and initiate the proper error handling if such a
problem is encountered.
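The register-then-poll pattern of the environment monitor can be sketched as below. This is a minimal sketch: the device interface, the tolerance check and all names are hypothetical, not the actual SICS environment monitor API.

```python
# Minimal sketch of the environment monitor: devices register their
# presence, and a system task polls each one for limit violations.
class TemperatureController:
    def __init__(self, target, tolerance):
        self.target, self.tolerance = target, tolerance
        self.value = target

    def in_range(self):
        return abs(self.value - self.target) <= self.tolerance

class EnvironmentMonitor:
    def __init__(self):
        self.devices = []

    def register(self, device):      # device announces its presence
        self.devices.append(device)

    def check_all(self):
        """Run by the system task on each cycle; returns the devices
        that are out of range so error handling can be started."""
        return [d for d in self.devices if not d.in_range()]

monitor = EnvironmentMonitor()
cryostat = TemperatureController(target=4.2, tolerance=0.5)
monitor.register(cryostat)
```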

to a system of protocols. There are protocols for:
\item For checking the authorisation of the client who wants to execute the
command.
\end{itemize}

SICS objects have the ability to notify clients and other objects of
internal state changes. For example, when a motor is driven, the motor
object can be configured to tell SICS clients or other SICS objects about
its new position.

SICS uses NeXus$^{2}$, the upcoming standard for data exchange for neutron
and x--ray scattering, as its raw data format.

\section{SICS Working Examples}
In order to get a better feeling for the internal working of SICS, the course
of a few different requests through the SICS system is traced in this
done by a special system component, the task switcher.
\subsection{The Request for a new Client Connection}
\begin{itemize}
\item The network reader recognizes pending data on its main server port.
\item The network reader accepts the connection and tries to read a
username/password pair.
\item If such a username/password pair comes within a suitable time
interval, it is checked for validity. On failure the connection is closed
again.
\item If a valid connection has been found: a new connection object is
created, a new task for this client connection is introduced into the
pending commands.
\begin{itemize}
\item The network reader finds data pending at one of the client ports.
\item The network reader reads the command, splits it into single lines and
puts those on top of the client connection's command stack. The network
reader passes control to the task switcher.
\item In due time the client connection task executes, inspects its command
stack, pops the pending command and forwards it together with a pointer to
task to the task switcher.
\item The next task executes.
\end{itemize}

\subsection{A ``Drive'' Command in Blocking Mode}
\begin{itemize}
\item The network reader finds data pending at one of the client ports.
\item The network reader reads the command, splits it into single lines and
requesting the wait state. The client connection and task executing the drive co
\item The device executor task will keep on monitoring the progress of the
motor driving whenever the task switcher allows it to execute.
\item In due time the device executor task will find that the motor finished
driving. The task will then finish executing. The client's grab of the
hardware driving permission will be released.
\item At this stage the drive command wrapper function will awake and
continue execution. This means inspecting errors and reporting to the client
how things worked out.
other commands.
\item The next task executes.
\end{itemize}
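The blocking drive — grab the hardware, poll a monitor task until the motor stops, release the grab, then let the waiting wrapper resume — can be sketched as follows. This is a minimal sketch: the grab/release bookkeeping and the one-step "motor" are hypothetical simplifications, and the real server runs its other tasks instead of a bare busy loop.

```python
# Minimal sketch of a blocking drive through a device executor.
class Motor:
    position = 0.0
    target = 0.0

class DeviceExecutor:
    def __init__(self):
        self.owner = None

    def start(self, client, motor, target):
        if self.owner is not None:
            raise RuntimeError("hardware already grabbed")
        self.owner = client          # grab hardware driving permission
        motor.target = target

    def monitor_cycle(self, motor):
        """One execution of the monitor task; True while still driving."""
        if motor.position != motor.target:
            motor.position = motor.target   # pretend the motor arrived
            return True
        self.owner = None            # finished: release the grab
        return False

def drive_blocking(executor, client, motor, target):
    executor.start(client, motor, target)
    while executor.monitor_cycle(motor):
        pass                         # the real server runs other tasks here
    return motor.position
```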

\subsection{A ``Drive'' Command Interrupted}
\begin{itemize}
\item The network reader finds data pending at one of the client ports.
\item The network reader reads the command, splits it into single lines and
task to the task switcher.
\item The next task executes.
\end{itemize}

\subsection{A ``Run'' Command in Non-Blocking Mode}
\begin{itemize}
\item The network reader finds data pending at one of the client ports.
\item The network reader reads the command, splits it into single lines and
itself to the SICS interpreter.
\item The SICS interpreter inspects the first word of the command. Using
this key the interpreter finds the run command wrapper function and passes
control to that function.
\item The ``run'' command wrapper function will check further arguments and
the client's authorisation if appropriate for the action requested.
Depending on the checks, the wrapper function will create an error message
or do its work.
\item Assuming everything is OK, the motor is located in the system.
\item The ``run'' command wrapper function asks the device executor to run
the motor.
\item The device executor verifies that nobody else is driving, then starts
the motor and grabs hardware control. The device executor also starts a task
new commands.
driving whenever the task switcher allows it to execute.
\item In due time the device executor task will find that the motor finished
driving. The task will then die silently. The client's grab of the hardware
driving permission will be released. Any errors, however, will be reported.
\end{itemize}
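The non-blocking variant differs from the blocking one only in that the wrapper returns immediately after starting the motor, leaving a monitor task to run on later cycles of the task loop. A minimal sketch, with hypothetical names and a one-step "motor":

```python
# Minimal sketch of a non-blocking "run": start the motion, return at
# once, and let later task-loop cycles poll the monitor task.
class Motor:
    position = 0.0
    target = 0.0

def run_command(motor, target, monitor_tasks):
    motor.target = target
    # Instead of waiting, enqueue a monitor task and return immediately.
    def monitor():
        if motor.position != motor.target:
            motor.position = motor.target   # pretend the motor arrived
            return True                     # still alive next cycle
        return False                        # done: task dies silently
    monitor_tasks.append(monitor)
    return "OK"                             # client may issue new commands

tasks = []
motor = Motor()
reply = run_command(motor, 30.0, tasks)     # returns before motion ends
# Later task-loop cycles poll the monitor until it reports done.
while tasks:
    if not tasks[0]():
        tasks.pop(0)
```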

All this seems to be pretty complex and time-consuming. But it is the
complexity needed to do so many things, especially the non-blocking mode of
operation requested by users. Tests have shown that the task switcher
manages more than 900 cycles per second through the task list on a
DigitalUnix machine and 500 cycles per second on a 2~GHz Pentium machine
running Linux. Both figures were obtained with software simulation of
hardware devices. With real SINQ hardware these numbers drop to as low as
4 cycles per second if the hardware is slow in responding. This shows
clearly that the communication with the hardware is the system's
bottleneck and not the task switching scheme.