- Updated Makefiles

- Moved TAS code to psi
- Updated programmer's documentation


SKIPPED:
	psi/make_gen
	psi/nextrics.c
	psi/t_conv.c
	psi/t_conv.f
	psi/t_rlp.c
	psi/t_rlp.f
	psi/t_update.c
	psi/t_update.f
	psi/hardsup/el734_utility.c
	psi/hardsup/makefile_alpha
This commit is contained in:
cvs
2003-06-30 11:51:35 +00:00
parent 007a2e2536
commit e52bd5d937
17 changed files with 1561 additions and 3655 deletions


@@ -28,14 +28,20 @@ matches the above criteria.
\section{The SINQ Hardware Setup}
SICS had to take into account the SINQ hardware setup which had been decided
upon earlier. Most hardware, such as motors and counters, is controlled via
RS--232 interfaces. These interfaces are connected to a
terminal server which makes such devices accessible through the TCP/IP
network.
For historical reasons the instrument control software does not access
the terminal server directly but through another software layer, the
SerPortServer program. The SerPortServer program is another TCP/IP
server which allows multiple network clients to access the same
terminal server port through a home-grown protocol. In the long run
this additional software layer will be abolished.
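The round trip over the network can be pictured with a small C sketch.
The host, the port number and the command string below are assumptions
made for illustration only; the actual home-grown SerPortServer
protocol is not reproduced here.
\begin{verbatim}
/* Hedged sketch: send one command line to a terminal server
   port over TCP/IP and print the reply. Host, port and the
   command are illustrative assumptions, not the real protocol. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <arpa/inet.h>

int main(void)
{
    struct sockaddr_in addr;
    const char *cmd = "ST 1\r\n";      /* placeholder RS-232 command */
    char reply[256];
    ssize_t n;
    int sock = socket(AF_INET, SOCK_STREAM, 0);
    if (sock < 0) return 1;

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4000);                   /* assumed port */
    addr.sin_addr.s_addr = inet_addr("127.0.0.1"); /* assumed host */
    if (connect(sock, (struct sockaddr *)&addr, sizeof(addr)) < 0)
        return 1;

    write(sock, cmd, strlen(cmd));  /* forwarded to the serial device */
    n = read(sock, reply, sizeof(reply) - 1);
    if (n > 0) {
        reply[n] = '\0';
        printf("reply: %s\n", reply);
    }
    close(sock);
    return 0;
}
\end{verbatim}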
Some hardware devices, such as the histogram memory, can handle
TCP/IP themselves. With such devices the instrument control program
communicates directly through TCP/IP. All
hardware devices take care of their real-time needs themselves. Thus the
only task of the instrument control program is to orchestrate the hardware
devices. SICS is designed with this setup in mind, but is not restricted
@@ -90,15 +96,18 @@ client for a powder diffractometer is given in picture \ref{dmc}
The SICS server is the core component of the SICS system. It is
responsible for doing all the work in instrument control. Additionally, the
server has to answer the requests of possibly multiple clients.
The SICS server can be subdivided into three subsystems:
\begin{description}
\item[The kernel] The SICS server kernel
takes care of client multitasking and the preservation of the proper
I/O and error context for each executing client command.
\item[SICS Object Database] SICS objects are software modules which
represent all aspects of an instrument: hardware devices, commands, measurement strategies
and data storage. This database of objects is initialized at server startup
time from an initialization script.
\item[The Interpreter] The interpreter is used to issue commands to the
objects in the object database, as sketched below.
\end{description}
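The interplay of the interpreter and the object database can be
illustrated with the following C sketch. All type and function names
are invented for illustration; this is not the real SICS API.
\begin{verbatim}
/* Hedged sketch of the interpreter dispatching a command to an
   object in the object database. All names are invented here. */
#include <stdio.h>
#include <string.h>

typedef int (*ObjFunc)(const char *args);

static int MotorAction(const char *args)
{
    printf("motor: driving to %s\n", args);
    return 1;
}

typedef struct {
    const char *name;        /* command name in the database */
    ObjFunc action;          /* invoked by the interpreter   */
} SicsObject;

/* in SICS this table is filled at startup from the init script */
static SicsObject objectDB[] = {
    { "om", MotorAction },
};

static int InterpretCommand(char *line)
{
    size_t i;
    char *verb = strtok(line, " ");
    char *args = strtok(NULL, "");
    for (i = 0; i < sizeof objectDB / sizeof objectDB[0]; i++)
        if (verb && strcmp(objectDB[i].name, verb) == 0)
            return objectDB[i].action(args ? args : "");
    printf("ERROR: unknown command\n");
    return 0;
}

int main(void)
{
    char cmd[] = "om 42.0";           /* drive motor om to 42.0 */
    return InterpretCommand(cmd) ? 0 : 1;
}
\end{verbatim}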
The schematic drawing of the SICS server's structure is given in picture
\ref{newsics}.
\begin{figure}
@@ -130,10 +139,12 @@ executing one after another. The servers main loop does nothing but
executing the tasks in this circular buffer in an endless loop.
There are several system tasks and one such
task for each living client connection. Thus only one task executes at any
given time and data access is efficiently serialized.
One of the main system
tasks (and the one which will always be there) is the network reader. The
network reader has a list of open network connections and checks each of
them for pending requests. What happens when data is pending on an open
network port depends on the type of port: if it is the server's main
connection port, the network reader will try to accept and verify a new
client connection and create the associated data structures. If the port
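A minimal C sketch of this cooperative task switching scheme might look
as follows. The names and the fixed-size task list are assumptions for
illustration, not the SICS kernel itself.
\begin{verbatim}
/* Illustrative sketch of the task switcher: an endless loop over
   a circular list of task callbacks, each run once per cycle
   until it signals completion. */
#include <stdio.h>

#define MAXTASK 8

typedef int (*TaskFunc)(void);  /* returns 0 when the task is done */

static TaskFunc tasks[MAXTASK];

static int netReaderCycles = 0;
static int NetworkReader(void)
{
    /* would poll all open connections for pending requests here */
    return ++netReaderCycles < 3;   /* stop after 3 demo cycles  */
}

int main(void)
{
    int alive, i;
    tasks[0] = NetworkReader;       /* always-present system task */

    do {                            /* the server's main loop     */
        alive = 0;
        for (i = 0; i < MAXTASK; i++) {
            if (!tasks[i]) continue;
            if (tasks[i]())
                alive = 1;          /* task wants another cycle   */
            else
                tasks[i] = NULL;    /* task finished: remove it   */
        }
    } while (alive);
    printf("all tasks finished\n");
    return 0;
}
\end{verbatim}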
@@ -190,9 +201,9 @@ Most experiments do not happen at ambient room conditions but
require some special environment for the sample. Mostly this is temperature
but it can also be magnetic or electric fields etc. Most such devices
can regulate themselves but the data acquisition program needs to monitor
such devices. Within SICS, this is done via a special system object, the
environment monitor. An environment device, for example a temperature
controller, registers its presence with this object. Then a special system
task will control this device when it is executing, check for possible
out-of-range errors, and initiate the proper error handling if such a problem is
encountered.
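The registration and polling idea can be sketched in C as follows; all
identifiers are invented for illustration and do not reproduce the SICS
environment monitor API.
\begin{verbatim}
/* Hedged sketch of the environment monitor: a device registers
   its presence, a system task polls it and starts error handling
   when the value leaves its tolerance band. */
#include <stdio.h>
#include <math.h>

typedef struct {
    const char *name;
    double (*read)(void);        /* current value from controller */
    double target, tolerance;
} EnvDevice;

static double ReadTemperature(void) { return 4.7; }  /* stand-in */

static EnvDevice registered[4];
static int nRegistered = 0;

static void EnvRegister(EnvDevice dev)
{
    registered[nRegistered++] = dev;   /* presence registered    */
}

/* runs as a system task in the task switcher's circular buffer  */
static void EnvMonitorTask(void)
{
    int i;
    for (i = 0; i < nRegistered; i++) {
        double v = registered[i].read();
        if (fabs(v - registered[i].target) > registered[i].tolerance)
            printf("WARNING: %s out of range: %g\n",
                   registered[i].name, v);   /* error handling   */
    }
}

int main(void)
{
    EnvDevice temp = { "temperature", ReadTemperature, 4.2, 0.2 };
    EnvRegister(temp);
    EnvMonitorTask();
    return 0;
}
\end{verbatim}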
@@ -241,15 +252,15 @@ to a system of protocols. There are protocols for:
\item For checking the authorisation of the client who wants to execute the
command.
\end{itemize}
SICS objects have the ability to notify clients and other objects of
internal state changes. For example, when a motor is driven, the motor object
can be configured to tell SICS clients or other SICS objects about its new
position.
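A minimal C sketch of such a notification mechanism is given below; the
callback-list API is an assumption for illustration only.
\begin{verbatim}
/* Sketch of the notification idea: interested parties register a
   callback on an object and get invoked on state changes. */
#include <stdio.h>

typedef void (*Callback)(void *userData, double newValue);

#define MAXCB 4
static Callback callbacks[MAXCB];
static void *userData[MAXCB];
static int nCallbacks = 0;

static void RegisterCallback(Callback cb, void *data)
{
    callbacks[nCallbacks] = cb;
    userData[nCallbacks] = data;
    nCallbacks++;
}

/* called by the motor object whenever its position changes */
static void NotifyAll(double position)
{
    int i;
    for (i = 0; i < nCallbacks; i++)
        callbacks[i](userData[i], position);
}

static void TellClient(void *data, double pos)
{
    printf("client %s: motor now at %g\n", (char *)data, pos);
}

int main(void)
{
    RegisterCallback(TellClient, "A");  /* client subscribes */
    NotifyAll(11.5);                    /* motor was driven  */
    return 0;
}
\end{verbatim}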
SICS uses NeXus$^{2}$, the upcoming standard for data exchange for neutron
and x-ray scattering, as its raw data format.
\section{SICS Working Examples}
In order to get a better feeling for the internal working of SICS the course
of a few different requests through the SICS system is traced in this
@@ -284,7 +295,7 @@ pending commands.
\begin{itemize}
\item The network reader finds data pending at one of the client ports.
\item The network reader reads the command, splits it into single lines and
puts those on top of the client connection's command stack. The network
reader passes control to the task switcher.
\item In due time the client connection task executes, inspects its command
stack, pops the pending command and forwards it together with a pointer to
@@ -415,23 +426,18 @@ new commands.
driving whenever the task switcher allows it to execute.
\item In due time the device executor task will find that the motor finished
driving. The task will then die silently. The client's grab of the hardware driving
permission will be released. Any errors, however, will be reported (see the
sketch after this list).
\end{itemize}
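A toy C version of the device executor task's life cycle, under the
assumption of a simple polling interface, could look like this; all
identifiers are invented.
\begin{verbatim}
/* Sketch: poll the motor until driving is finished, then release
   the hardware grab and report any errors. */
#include <stdio.h>

typedef struct {
    int stepsLeft;       /* crude stand-in for controller status */
    int errorCode;
} Motor;

/* one cycle of the device executor task; returns 0 when done */
static int DevExecTask(Motor *m)
{
    if (m->stepsLeft > 0) {
        m->stepsLeft--;              /* motor still driving      */
        return 1;                    /* keep the task alive      */
    }
    if (m->errorCode != 0)           /* report errors, if any    */
        printf("ERROR: motor failed with code %d\n", m->errorCode);
    printf("hardware grab released\n");
    return 0;                        /* task dies silently       */
}

int main(void)
{
    Motor m = { 3, 0 };
    while (DevExecTask(&m))          /* done by the task switcher */
        ;
    return 0;
}
\end{verbatim}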
All this may seem complex and time-consuming, but this complexity is needed
to do so many things, especially the non-blocking mode of operation requested
by users. Tests have shown that the task switcher manages more than 900
cycles per second through the task list on a DigitalUnix machine and 50
cycles per second on a Pentium 133 MHz machine running Linux. Both figures
were obtained with software simulation of hardware devices. With real
SINQ hardware these numbers drop to 4 cycles per second. This shows
clearly that the communication with the hardware is the system's
bottleneck and not the task switching scheme.