\chapter{SICS Overview}

\section{Introduction}

At the new spallation source SINQ at PSI a whole set of new neutron
scattering instruments is being installed. All these new instruments need a
computerized instrument control system. After a review of similar systems
on the market it was found that none fully met the requirements defined
for SINQ or could easily be extended to do so. Therefore it was decided to
design a new system. This new system, SICS (the SINQ Instrument Control
System), had to meet the following specifications:
\begin{itemize}
\item Control the instrument reliably.
\item Good remote access to the instrument via the internet.
\item Portability across operating system platforms.
\item Portability across instrument hardware. This means that it
should be easy to add other types of motors, counters or other hardware to
the system.
\item Support authorization at the command and parameter modification
level. This means
that certain instrument settings can be protected against accidental changes by
less knowledgeable users.
\item Good maintainability and extendability.
\item Be capable of accommodating graphical user interfaces (GUIs).
\item One code base for all instruments.
\item A powerful macro language.
\end{itemize}
A suitable new system was implemented using an object-oriented design which
meets the above criteria.

\section{The SINQ Hardware Setup}
SICS had to take into account the SINQ hardware setup which had been decided
upon earlier. Most hardware, such as motors and counters, is controlled via
RS--232 interfaces. These RS--232 interfaces are connected to a
terminal server which makes such devices accessible through the TCP/IP
network.

For historical reasons the instrument control software does not access
the terminal server directly but through another software layer, the
SerPortServer program. The SerPortServer program is another TCP/IP
server which allows multiple network clients to access the same
terminal server port through a home-grown protocol. In the long run
this additional software layer will be abolished.

Some hardware devices, such as the histogram memory, can handle
TCP/IP themselves. With such devices the instrument control program
communicates directly through TCP/IP. All
hardware devices take care of their real time needs themselves. Thus the
only task of the instrument control program is to orchestrate the hardware
devices. SICS is designed with this setup in mind, but is not restricted
to it. A schematic view of this setup is given in figure
\ref{hard}.
\begin{figure}
%% \epsfxsize=0.65\textwidth
\epsfxsize=160mm
\epsffile{hart.eps}
\caption{Instrument control hardware setup at SINQ}\label{hard}
\end{figure}

\section{SICS Overall Design}
In order to achieve the design goals stated above it was decided to
structure the system as a client--server system. This means that at least
two programs are necessary to run an instrument: a client program and a server
program. The server program, the SICS server, does all the work and
implements the actual instrument control. The SICS server usually runs on
the DAQ computer. The client program may run on any computer in the world
and implements the user interface to the instrument. Any number of clients
can communicate with one SICS server. The SICS server and the clients
communicate via a simple ASCII command protocol through TCP/IP sockets.
With this design good remote control through the network is easily achieved.
As clients can be implemented in any language or system capable of handling
TCP/IP, the user interface and the functional aspects are well separated. This
allows for easy exchange of user interfaces by writing new clients.

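To illustrate how little is required of a client, the following sketch
connects to a SICS server via TCP/IP, sends a username/password line
followed by one ASCII command, and prints the reply. The host name, port
number and the exact login and command strings are assumptions made for
this illustration only.
\begin{verbatim}
/*
 * Minimal sketch of a SICS-style client: connect via TCP/IP, log in,
 * send one ASCII command line and print whatever comes back.
 * Host, port, login and command are placeholders, not SINQ settings.
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netdb.h>
#include <sys/socket.h>

int main(void)
{
    struct addrinfo hints, *res;
    char buffer[1024];
    int sock, n;

    memset(&hints, 0, sizeof(hints));
    hints.ai_family = AF_INET;
    hints.ai_socktype = SOCK_STREAM;
    if (getaddrinfo("sics.example.psi.ch", "2911", &hints, &res) != 0)
        return 1;                  /* hypothetical host and port */

    sock = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (sock < 0 || connect(sock, res->ai_addr, res->ai_addrlen) < 0)
        return 1;
    freeaddrinfo(res);

    /* the server expects a username/password pair first (placeholders) */
    const char *login = "User Password\n";
    write(sock, login, strlen(login));

    /* send one command line; the ASCII protocol is line oriented */
    const char *cmd = "status\n";
    write(sock, cmd, strlen(cmd));

    /* print the server's reply until the connection is closed */
    while ((n = read(sock, buffer, sizeof(buffer) - 1)) > 0) {
        buffer[n] = '\0';
        fputs(buffer, stdout);
    }
    close(sock);
    return 0;
}
\end{verbatim}
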
\section{SICS Clients}
SICS clients implement the SICS user interface. Current client
programs are mostly implemented in Java for platform independence.
This is a real concern at SINQ where VMS,
Intel-PC, Macintosh and Unix users have to be satisfied.
As many instrument scientists still prefer
the command line for interacting with instruments, the most used client is a
visual command line client. Status displays are another kind of specialized
client program. Graphical user interfaces are under consideration for some
instruments. As an example of a client, a screen shot of the status display
client for a powder diffractometer is given in figure \ref{dmc}.
\begin{figure}
%% \epsfxsize=0.65\textwidth
\epsfxsize=160mm
\epsffile{dmccom.eps}
\caption{Example of a SICS client: Powder Diffractometer Status Display}\label{dmc}
\end{figure}

\section{The SICS Server}
The SICS server is the core component of the SICS system. The SICS server is
responsible for doing all the work in instrument control. Additionally the
server has to answer the requests of possibly multiple clients.
The SICS server can be subdivided into three subsystems:
\begin{description}
\item[The kernel] The SICS server kernel
takes care of client multitasking and the preservation of the proper
I/O and error context for each executing client command.
\item[SICS Object Database] SICS objects are software modules which
represent all aspects of an instrument: hardware devices, commands, measurement strategies
and data storage. This database of objects is initialized at server startup
time from an initialization script.
\item[The Interpreter] The interpreter allows clients to issue commands to the
objects in the object database.
\end{description}
A schematic drawing of the SICS server's structure is given in figure
\ref{sicsnew}.
\begin{figure}
%% \epsfxsize=0.65\textwidth
\epsfxsize=160mm
\epsffile{newsics.eps}
\caption{Schematic representation of the SICS server's structure}\label{sicsnew}
\end{figure}

\subsection{The SICS Server Kernel}
In more detail the SICS server kernel has the following tasks:
\begin{itemize}
\item Accept and verify client connection requests.
\item Read and execute client commands.
\item Maintain the I/O and error context for each client connection.
\item Serialize data access.
\item Serialize hardware access.
\item Monitor hardware operations.
\item Monitor environment devices.
\end{itemize}
Any program serving multiple clients faces the problem of how to organize
multiple clients accessing the same server and how to prevent one client from
reading data while another client is writing. The approach used for
the SICS server is a combination of polling and cooperative multitasking. This scheme is
simple and can be implemented in an operating system independent manner. One
way to look at the SICS server is as a series of tasks in a circular queue
executing one after another. The server's main loop does nothing but
execute the tasks in this circular queue in an endless loop.
There are several system tasks and one such
task for each living client connection. Thus only one task executes at any
given time and data access is efficiently serialized.

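The cooperative multitasking scheme described above can be pictured with
the following simplified sketch: tasks live in a queue, each task function
runs briefly and reports whether it wants to stay in the queue. The names
and the fixed queue size are illustrative assumptions, not the actual SICS
kernel code.
\begin{verbatim}
/*
 * Simplified sketch of a cooperative task loop: each task is a function
 * that runs briefly and returns 1 to stay in the queue or 0 to be
 * removed. Names and sizes are illustrative only.
 */
#include <stdio.h>

#define MAXTASK 16

typedef int (*TaskFunc)(void *userData);

typedef struct {
    TaskFunc run;        /* NULL marks an empty slot        */
    void    *userData;   /* per task context, e.g. a client */
} Task;

static Task taskQueue[MAXTASK];

/* one pass over the queue; finished tasks are removed */
static void runTasksOnce(void)
{
    for (int i = 0; i < MAXTASK; i++) {
        if (taskQueue[i].run != NULL) {
            if (taskQueue[i].run(taskQueue[i].userData) == 0) {
                taskQueue[i].run = NULL;
            }
        }
    }
}

/* a toy task: counts down and then leaves the queue */
static int countdownTask(void *userData)
{
    int *count = (int *)userData;
    printf("task tick, %d to go\n", *count);
    return --(*count) > 0;
}

int main(void)
{
    int ticks = 3;
    taskQueue[0].run = countdownTask;
    taskQueue[0].userData = &ticks;

    /* the real server loops forever; here we stop on an empty queue */
    int alive;
    do {
        runTasksOnce();
        alive = 0;
        for (int i = 0; i < MAXTASK; i++)
            if (taskQueue[i].run != NULL) alive = 1;
    } while (alive);
    return 0;
}
\end{verbatim}
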
One of the main system
tasks (and the one which will always be there) is the network reader. The
network reader has a list of open network connections and checks each of
them for pending requests. What happens when data is pending on an open
network port depends on the type of port: if it is the server's main
connection port, the network reader will try to accept and verify a new
client connection and create the associated data structures. If the port
belongs to an open client connection the network reader will read the
pending command and put it onto a command stack existing for each client
connection. When it is time for a client task to execute, it will fetch a
command from its very own command stack and execute it.
This is how the SICS server deals with client requests.

The scheme described above relies on the fact that most SICS commands need
only very little time to execute. A command requiring extensive
calculations may effectively block the server. Implementations of such
commands have to take care that control passes back to the task switching
loop at regular intervals in order to prevent the server from blocking.

Another problem in a server handling multiple client requests is how to
maintain the proper execution context for each client. This includes the
client's I/O context (socket), the authorisation of the client and possible
error conditions pending for a
client connection. SICS does this via a connection object, a special
data structure holding all the above information plus a set of functions
operating on this data structure. This connection object is passed along
with many calls throughout the whole system.

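A rough idea of such a connection object is sketched below. The fields and
helper functions are hypothetical and serve only to illustrate how the
socket, the authorisation and the pending error state travel together
through the system.
\begin{verbatim}
/*
 * Illustrative sketch of a connection object: one per client, bundling
 * the socket, the client's rights and any pending error state.
 * Field and function names are hypothetical.
 */
#include <stdio.h>
#include <stdarg.h>

typedef enum { USER_SPY, USER_NORMAL, USER_MANAGER } UserRights;

typedef struct {
    int        socket;        /* TCP/IP socket of this client       */
    UserRights rights;        /* authorisation level of this client */
    int        errorPending;  /* nonzero if an error is pending     */
    char       errorText[256];
} Connection;

/* all output to the client goes through the connection object */
static void conWrite(Connection *con, const char *fmt, ...)
{
    char line[512];
    va_list ap;
    va_start(ap, fmt);
    vsnprintf(line, sizeof(line), fmt, ap);
    va_end(ap);
    /* a real implementation would write to con->socket */
    printf("to client on socket %d: %s\n", con->socket, line);
}

/* errors are stored on the connection until the command finishes */
static void conError(Connection *con, const char *text)
{
    con->errorPending = 1;
    snprintf(con->errorText, sizeof(con->errorText), "%s", text);
    conWrite(con, "ERROR: %s", text);
}

int main(void)
{
    Connection con = { 42, USER_NORMAL, 0, "" };
    conWrite(&con, "motor a4 = %.2f", 12.5);
    conError(&con, "motor a4 hit a limit switch");
    return 0;
}
\end{verbatim}
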
Multiple clients issuing commands to the SICS server may mean that multiple
clients try to move motors or access other hardware in conflicting
ways. As there is only one set of instrument hardware this needs to be
prevented. This is achieved by a convention: no SICS object drives hardware
directly but registers its request with a special object, the device
executor. The device executor starts the requested operation and reserves
the hardware for the duration of the operation. During the execution of such
a hardware request all other clients' requests to drive the hardware will
return an error. The device executor is also responsible for monitoring the
progress of a hardware operation. It does so by adding a special task to
the system which checks the status of the operation each time this task
executes. When the hardware operation is finished this
device executor task will end. A special system facility allows a client
task to wait for the device executor task to end while the rest of the task
queue is still executing. In this way time intensive hardware operations can
be performed by drive, count or scan commands without blocking the whole
system for other clients. \label{devexec}

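The reservation aspect of the device executor can be sketched as follows;
the names are assumptions and the real device executor does considerably
more, for example starting the monitoring task mentioned above.
\begin{verbatim}
/*
 * Simplified sketch of the reservation logic of a device executor:
 * only one client at a time may own the instrument hardware.
 * Names are illustrative.
 */
#include <stdio.h>

typedef struct {
    int owner;      /* client id owning the hardware, -1 if free */
} DeviceExecutor;

/* try to start a hardware operation on behalf of a client */
static int devStart(DeviceExecutor *exe, int clientID, const char *what)
{
    if (exe->owner != -1 && exe->owner != clientID) {
        printf("ERROR: hardware busy, owned by client %d\n", exe->owner);
        return 0;
    }
    exe->owner = clientID;
    printf("client %d starts: %s\n", clientID, what);
    /* a real executor would now start the operation and register a
       monitoring task with the task switcher */
    return 1;
}

/* called by the monitoring task when the operation has finished */
static void devFinish(DeviceExecutor *exe)
{
    exe->owner = -1;
}

int main(void)
{
    DeviceExecutor exe = { -1 };
    devStart(&exe, 1, "drive a4 to 45.0");   /* succeeds       */
    devStart(&exe, 2, "drive a4 to 10.0");   /* rejected: busy */
    devFinish(&exe);
    devStart(&exe, 2, "drive a4 to 10.0");   /* succeeds now   */
    return 0;
}
\end{verbatim}
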
The SICS server can be configured to support another security feature, the
token system. In this scheme a client can grab control of the instrument.
Once the control token has been grabbed, only the client holding the token may
control the instrument. Any other client may look at things in the SICS server
but does not have permission to change anything. Passing the control token
requires that the client holding the token releases it so that
another client may grab it. There exists a password protected back door for
SICS managers which allows them to force the release of a control token.

Most experiments do not happen at ambient room conditions but
require some special environment for the sample. Mostly this is temperature,
but it can also be magnetic or electric fields etc. Most such devices
can regulate themselves, but the data acquisition program needs to monitor
them. Within SICS, this is done via a special system object, the
environment monitor. An environment device, for example a temperature
controller, registers its presence with this object. A special system
task will then check this device whenever it executes, look for possible out
of range errors and initiate the proper error handling if such a problem is
encountered.

\subsection{The SICS Interpreter}
When a task belonging to a client connection executes a command it will pass
the command along with the connection object to the SICS interpreter. The
SICS interpreter will then analyze the command and forward it to the
appropriate SICS object in the object database for further action. The SICS
interpreter is very much modeled after the Tcl interpreter as devised by
John Ousterhout$^{1}$. For each SICS object visible from the interpreter there is
a wrapper function. Using the first word of the command as a key, the
interpreter will locate the object's wrapper function. If such a function is
found it is passed the command parameters, the interpreter object and the
connection object for further processing. An interface exists for adding
commands to and removing commands from this interpreter very easily. Thus the actual command
list can be configured to match the instrument in question, sometimes
even at run time. Given the closeness of the design of the SICS interpreter
to the Tcl interpreter, the reader may not be surprised to learn that the
SICS server incorporates Tcl as its internal macro language. The internal
macro language may use Tcl commands as well as SICS commands.

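The dispatch mechanism described above can be illustrated with the
following sketch: the first word of a command selects a wrapper function
which then receives the remaining arguments and the client connection. All
names are hypothetical and do not claim to match the actual SICS
interpreter code.
\begin{verbatim}
/*
 * Sketch of Tcl-style command dispatch: the first word of a command
 * selects a wrapper function which receives the arguments and the
 * client connection. Purely illustrative names.
 */
#include <stdio.h>
#include <string.h>

typedef struct Connection Connection;  /* client connection handle */

typedef int (*ObjectWrapper)(Connection *con, int argc, char *argv[]);

typedef struct {
    const char   *name;     /* first word of the command */
    ObjectWrapper wrapper;  /* function handling it      */
} CommandEntry;

/* one example wrapper: a motor object reporting what it was asked */
static int MotorWrapper(Connection *con, int argc, char *argv[])
{
    (void)con;
    if (argc > 1)
        printf("motor %s: driving to %s\n", argv[0], argv[1]);
    else
        printf("motor %s: position query\n", argv[0]);
    return 1;
}

static CommandEntry commandList[] = {
    { "a4", MotorWrapper },
    { NULL, NULL }
};

/* the interpreter: split the command and dispatch on the first word */
static int InterpExecute(Connection *con, char *command)
{
    char *argv[16];
    int argc = 0;
    char *word = strtok(command, " \t\n");
    while (word != NULL && argc < 16) {
        argv[argc++] = word;
        word = strtok(NULL, " \t\n");
    }
    if (argc == 0)
        return 0;
    for (int i = 0; commandList[i].name != NULL; i++)
        if (strcmp(commandList[i].name, argv[0]) == 0)
            return commandList[i].wrapper(con, argc, argv);
    printf("ERROR: unknown command %s\n", argv[0]);
    return 0;
}

int main(void)
{
    char cmd[] = "a4 45.5";
    InterpExecute(NULL, cmd);
    return 0;
}
\end{verbatim}
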
\subsection{SICS Objects}
As already said, SICS objects implement the true functionality of SICS
instrument control. All hardware, all commands and procedures, and all data
handling strategies are implemented as SICS objects. Hardware objects, for
instance motors, deserve some special attention. Such objects are divided
into two objects in the SICS system: a logical hardware object and a driver
object. The logical object is responsible for implementing all the nuts and
bolts of the hardware device, whereas the driver defines a set of primitive
operations on the device. The benefit of this scheme is twofold:
switching to new hardware, for instance a new type of motor, just requires
incorporating a new driver into the system. Internally, independent of
the actual hardware, all hardware objects of the same type (for example
motors) look the same and can be treated the same by higher level objects.
There is no need to rewrite a scan command because a motor changed.

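The split between a logical hardware object and a driver can be pictured
as a small table of primitive operations which every driver has to supply;
the logical object only ever calls these primitives. The interface below is
a hypothetical illustration of this idea, not the actual SICS motor driver
definition.
\begin{verbatim}
/*
 * Illustrative motor driver interface: the logical motor object talks
 * to the hardware only through these primitive operations, so a new
 * motor type only needs a new driver. Hypothetical names.
 */
#include <stdio.h>

typedef struct MotorDriver {
    int   (*run)(struct MotorDriver *self, float value); /* start   */
    int   (*status)(struct MotorDriver *self);           /* 0: done */
    float (*getPosition)(struct MotorDriver *self);      /* read    */
    void  *privateData;                            /* driver details */
} MotorDriver;

/* a trivial simulation driver: jumps straight to the target */
static float simPosition = 0.0f;

static int simRun(MotorDriver *self, float value)
{
    (void)self;
    simPosition = value;
    return 1;
}

static int simStatus(MotorDriver *self) { (void)self; return 0; }

static float simGetPosition(MotorDriver *self)
{
    (void)self;
    return simPosition;
}

int main(void)
{
    MotorDriver sim = { simRun, simStatus, simGetPosition, NULL };

    /* the logical motor object would use the driver like this */
    sim.run(&sim, 45.5f);
    while (sim.status(&sim) != 0) { /* poll until done */ }
    printf("motor now at %.2f\n", sim.getPosition(&sim));
    return 0;
}
\end{verbatim}
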
In order to live happily within the SICS system, SICS objects have to adhere
to a system of protocols. There are protocols for:
\begin{itemize}
\item Input/output to the client.
\item Error handling.
\item Interaction with the interpreter.
\item Identification of the object to the system at run time.
\item Interaction with hardware (see the device executor above).
\item Checking the authorisation of the client who wants to execute the
command.
\end{itemize}

SICS objects have the ability to notify clients and other objects of
internal state changes. For example, when a motor is driven, the motor object
can be configured to tell SICS clients or other SICS objects about its new
position.

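The notification mechanism can be sketched as a simple callback registry:
interested clients or objects register a function with the motor object and
are called whenever the position changes. Names are illustrative
assumptions.
\begin{verbatim}
/*
 * Sketch of the notification mechanism: interested parties register a
 * callback with an object and are informed on state changes.
 * Hypothetical names, not the actual SICS callback interface.
 */
#include <stdio.h>

#define MAXCALLBACK 8

typedef void (*PositionCallback)(float newValue, void *userData);

typedef struct {
    float            position;
    PositionCallback callbacks[MAXCALLBACK];
    void            *userData[MAXCALLBACK];
    int              nCallbacks;
} Motor;

/* a client (or another object) registers interest in position changes */
static void motorRegister(Motor *m, PositionCallback cb, void *data)
{
    if (m->nCallbacks < MAXCALLBACK) {
        m->callbacks[m->nCallbacks] = cb;
        m->userData[m->nCallbacks] = data;
        m->nCallbacks++;
    }
}

/* when the motor moves, all registered parties are notified */
static void motorSetPosition(Motor *m, float value)
{
    m->position = value;
    for (int i = 0; i < m->nCallbacks; i++)
        m->callbacks[i](value, m->userData[i]);
}

static void statusDisplayUpdate(float newValue, void *userData)
{
    (void)userData;
    printf("status display: motor now at %.2f\n", newValue);
}

int main(void)
{
    Motor a4 = { 0.0f, {0}, {0}, 0 };
    motorRegister(&a4, statusDisplayUpdate, NULL);
    motorSetPosition(&a4, 45.5f);
    return 0;
}
\end{verbatim}
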
SICS uses NeXus$^{2}$, the upcoming standard for data exchange for neutron
and x--ray scattering, as its raw data format.

\section{SICS Working Examples}
In order to get a better feeling for the internal workings of SICS, the course
of a few different requests through the SICS system is traced in this
section. The examples traced are:
\begin{itemize}
\item A request for a new client connection.
\item A simple command.
\item A command to drive a motor in blocking mode.
\item A command to drive a motor which gets interrupted by the user.
\item A command to drive a motor in non-blocking mode.
\end{itemize}
For the whole discussion it is assumed that the main loop is running,
cyclically executing each task registered in the server. Task switching is
done by a special system component, the task switcher.

\subsection{The Request for a new Client Connection}
\begin{itemize}
\item The network reader recognizes pending data on its main server port.
\item The network reader accepts the connection and tries to read a
username/password pair.
\item If such a username/password pair arrives within a suitable time
interval it is checked for validity. On failure the connection is closed
again.
\item If the connection is valid, a new connection object is
created, a new task for this client connection is introduced into the
system and the network reader registers a new client port to check for
pending commands.
\item Control is passed back to the task switcher.
\end{itemize}

\subsection{A Simple Command}
\begin{itemize}
\item The network reader finds data pending at one of the client ports.
\item The network reader reads the command, splits it into single lines and
puts them on top of the client connection's command stack (a sketch of such
a command stack is given after this list). The network
reader passes control to the task switcher.
\item In due time the client connection task executes, inspects its command
stack, pops the pending command and forwards it, together with a pointer to
itself, to the SICS interpreter.
\item The SICS interpreter inspects the first word of the command. Using
this key the interpreter finds the object's wrapper function and passes
control to that function.
\item The object's wrapper function checks further arguments and the
client's authorisation if appropriate for the requested action. Depending on
the outcome of these checks, the wrapper function creates an error message
or does its work.
\item This done, control passes back through the interpreter and the connection
task to the task switcher.
\item The next task executes.
\end{itemize}

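The command stack mentioned above can be pictured as follows: the network
reader pushes complete command lines, and the client connection task pops
and executes them when it is its turn. Names and sizes are illustrative
assumptions.
\begin{verbatim}
/*
 * Sketch of the per-connection command stack: the network reader pushes
 * complete command lines, the client task pops and executes them.
 * Hypothetical names and sizes.
 */
#include <stdio.h>
#include <string.h>

#define MAXCMD   16
#define CMDLEN  256

typedef struct {
    char commands[MAXCMD][CMDLEN];
    int  count;
} CommandStack;

/* called by the network reader for each line read from the socket */
static int pushCommand(CommandStack *stack, const char *line)
{
    if (stack->count >= MAXCMD)
        return 0;                               /* stack full */
    strncpy(stack->commands[stack->count], line, CMDLEN - 1);
    stack->commands[stack->count][CMDLEN - 1] = '\0';
    stack->count++;
    return 1;
}

/* called by the client connection task when it is its turn to run */
static int popCommand(CommandStack *stack, char *line)
{
    if (stack->count == 0)
        return 0;                               /* nothing to do */
    stack->count--;
    strcpy(line, stack->commands[stack->count]);
    return 1;
}

int main(void)
{
    CommandStack stack = { .count = 0 };
    char cmd[CMDLEN];

    pushCommand(&stack, "a4 45.5");             /* network reader side */
    while (popCommand(&stack, cmd))             /* client task side    */
        printf("executing: %s\n", cmd);
    return 0;
}
\end{verbatim}
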
\subsection{A ``Drive'' Command in Blocking Mode}
\begin{itemize}
\item The network reader finds data pending at one of the client ports.
\item The network reader reads the command, splits it into single lines and
puts them on top of the client connection's command stack. The network
reader passes control to the task switcher.
\item In due time the client connection task executes, inspects its command
stack, pops the pending command and forwards it, together with a pointer to
itself, to the SICS interpreter.
\item The SICS interpreter inspects the first word of the command. Using
this key the interpreter finds the drive command wrapper function and passes
control to that function.
\item The drive command wrapper function checks further arguments and the
client's authorisation if appropriate for the requested action. Depending on
the outcome of these checks, the wrapper function creates an error message
or does its work.
\item Assuming everything is OK, the motor is located in the system.
\item The drive command wrapper function asks the device executor to run the
motor.
\item The device executor verifies that nobody else is driving, then starts
the motor and grabs hardware control. The device executor also starts a task
monitoring the activity of the motor.
\item The drive command wrapper function now enters a wait state (a sketch
of this wait is given after this list). This means
the task switcher will execute other tasks, except the connection task
requesting the wait state. The client connection and the task executing the
drive command will not be able to process further commands.
\item The device executor task keeps monitoring the progress of the motor
whenever the task switcher allows it to execute.
\item In due time the device executor task will find that the motor has
finished driving. The task will then finish executing. The client's grab of
the hardware driving permission will be released.
\item At this stage the drive command wrapper function will awake and
continue execution. This means inspecting errors and reporting to the client
how things worked out.
\item This done, control passes back through the interpreter and the connection
task to the task switcher. The client connection is free to execute
other commands.
\item The next task executes.
\end{itemize}

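The wait state entered by the drive command can be sketched as follows: the
waiting wrapper function repeatedly hands control back to the task switcher
until the device executor signals that the motor has finished. The flag and
function names are illustrative assumptions; the actual SICS facility is
more elaborate.
\begin{verbatim}
/*
 * Sketch of the blocking wait used by the drive command: the waiting
 * task repeatedly yields to the task switcher until the device
 * executor task signals completion. Hypothetical names.
 */
#include <stdio.h>

static int motorFinished = 0;   /* set by the device executor task */

/* stand-in for one pass of the task switcher over all other tasks */
static void runOtherTasksOnce(void)
{
    static int cycles = 0;
    if (++cycles >= 3)          /* pretend the motor finishes here */
        motorFinished = 1;
    printf("task switcher cycle %d\n", cycles);
}

/* what the drive wrapper effectively does while "blocking" */
static void waitForDriving(void)
{
    while (!motorFinished) {
        runOtherTasksOnce();    /* other clients keep being served */
    }
}

int main(void)
{
    printf("drive command: motor started, waiting\n");
    waitForDriving();
    printf("drive command: motor done, reporting to client\n");
    return 0;
}
\end{verbatim}
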
\subsection{A ``Drive'' Command Interrupted}
\begin{itemize}
\item The network reader finds data pending at one of the client ports.
\item The network reader reads the command, splits it into single lines and
puts them on top of the client connection's command stack. The network
reader passes control to the task switcher.
\item In due time the client connection task executes, inspects its command
stack, pops the pending command and forwards it, together with a pointer to
itself, to the SICS interpreter.
\item The SICS interpreter inspects the first word of the command. Using
this key the interpreter finds the drive command wrapper function and passes
control to that function.
\item The drive command wrapper function checks further arguments and the
client's authorisation if appropriate for the requested action. Depending on
the outcome of these checks, the wrapper function creates an error message
or does its work.
\item Assuming everything is OK, the motor is located in the system.
\item The drive command wrapper function asks the device executor to run the
motor.
\item The device executor verifies that nobody else is driving, then starts
the motor and grabs hardware control. The device executor also starts a task
monitoring the activity of the motor.
\item The drive command wrapper function now enters a wait state. This means
the task switcher will execute other tasks, except the connection task
requesting the wait state.
\item The device executor task keeps monitoring the progress of the motor
whenever it is its turn to execute.
\item The network reader finds a user interrupt pending. The interrupt is
forwarded to all tasks in the system.
\item In due time the device executor task will try to check on the progress
of the motor. It will recognize the interrupt. If appropriate, the motor will
get a halt command. The task will then die.
The client's grab of the hardware driving permission will be released.
\item At this stage the drive command wrapper function will awake and
continue execution. This means it finds the interrupt and tells the user
what he already knows: that an interrupt was issued.
\item This done, control passes back through the drive command wrapper,
the interpreter and the connection
task to the task switcher.
\item The next task executes.
\end{itemize}

\subsection{A ``Run'' Command in Non-Blocking Mode}
\begin{itemize}
\item The network reader finds data pending at one of the client ports.
\item The network reader reads the command, splits it into single lines and
puts them on top of the client connection's command stack. The network
reader passes control to the task switcher.
\item In due time the client connection task executes, inspects its command
stack, pops the pending command and forwards it, together with a pointer to
itself, to the SICS interpreter.
\item The SICS interpreter inspects the first word of the command. Using
this key the interpreter finds the run command wrapper function and passes
control to that function.
\item The run command wrapper function checks further arguments and the
client's authorisation if appropriate for the requested action. Depending on
the outcome of these checks, the wrapper function creates an error message
or does its work.
\item Assuming everything is OK, the motor is located in the system.
\item The run command wrapper function asks the device executor to run the
motor.
\item The device executor verifies that nobody else is driving, then starts
the motor and grabs hardware control. The device executor also starts a task
monitoring the activity of the motor.
\item The run command wrapper function passes control through the interpreter and
the client's task function back to the task switcher. The client connection can handle
new commands.
\item The device executor task keeps monitoring the progress of the motor
whenever the task switcher allows it to execute.
\item In due time the device executor task will find that the motor has
finished driving. The task will then die silently. The client's grab of the
hardware driving permission will be released. Any errors, however, will be
reported.
\end{itemize}

All this seems to be pretty complex and time consuming. But it is the complexity needed to
do so many things, especially the non-blocking mode of operation requested
by users. Tests have shown that the task switcher manages more than 900 cycles
per second through the task list on a Digital Unix machine and 500
cycles per second on a 2 GHz Pentium machine running Linux. Both figures
were obtained with software simulation of hardware devices. With real
SINQ hardware these numbers drop to as low as 4 cycles per second if
the hardware is slow in responding. This shows
clearly that communication with the hardware is the system's
bottleneck and not the task switching scheme.