- Fixed a bug in the new counter driver

SKIPPED:
	psi/el737hpdriv.c
	psi/el737hpv2driv.c
	psi/make_gen
	psi/psi.c
	psi/tas.c
	psi/tasdrive.c
	psi/tasinit.c
	psi/tasscan.c
	psi/tasutil.c
Author: cvs
Date: 2003-08-08 07:30:40 +00:00
parent 3ddb19d8a9
commit 189f7563b6
11 changed files with 272 additions and 705 deletions

@@ -14,7 +14,8 @@ System, had to meet the following specifications:
 \item Enhanced portability across instrument hardware. This means that it
 should be easy to add other types of motors, counters or other hardware to
 the system.
-\item Support authorization on the command and variable level. This means
+\item Support authorization on the command and parameter modification
+level. This means
 that certain instrument settings can be protected against random changes by
 less knowledgable users.
 \item Good maintainability and extendability.
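
The authorization requirement above maps naturally onto per-parameter access levels checked at modification time. A minimal C sketch of that idea, with all names hypothetical (the actual SICS access codes live in sources not shown in this commit):

    /* Sketch of per-parameter authorization: every parameter records the
       lowest privilege level allowed to change it.  Hypothetical names. */
    #include <stdio.h>

    typedef enum { usInternal, usManager, usUser, usSpy } UserRights;

    typedef struct {
        const char *name;
        double      value;
        UserRights  modifyRights;  /* lowest privilege allowed to modify */
    } Parameter;

    /* lower enum value == more privilege in this sketch */
    static int setParameter(Parameter *p, double v, UserRights caller)
    {
        if (caller > p->modifyRights) {
            fprintf(stderr, "ERROR: no privilege to change %s\n", p->name);
            return 0;              /* refused: protects against random changes */
        }
        p->value = v;
        return 1;
    }

    int main(void)
    {
        Parameter lambda = { "wavelength", 2.566, usManager };
        setParameter(&lambda, 4.2, usUser);     /* refused */
        setParameter(&lambda, 4.2, usManager);  /* accepted */
        return 0;
    }
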
@@ -78,7 +79,7 @@ This is a real concern at SINQ where VMS,
 Intel-PC, Macintosh and Unix users have to be satisfied.
 As many instrument scientists still prefer
 the command line for interacting with instruments, the most used client is a
-visual command line client. Status displays are another sort of specialized
+visual command line client. Status displays are another kind of specialized
 client programs. Graphical user interfaces are under consideration for some
 instruments. As an example for a client a screen shot of the status display
 client for a powder diffractometer is given in picture \ref{dmc}
@@ -86,7 +87,7 @@ client for a powder diffractometer is given in picture \ref{dmc}
 \begin{figure}
 %% \epsfxsize=0.65\textwidth
 \epsfxsize=160mm
-%% \epsffile{dmc.eps}
+\epsffile{dmccom.eps}
 \caption{Example for a SICS client: Powder Diffractometer Status Display}\label{dmc}
 \end{figure}
@@ -129,10 +130,10 @@ In more detail the SICS server kernel has the following tasks:
 \item Monitor HW--operations.
 \item Monitor environment devices.
 \end{itemize}
-Any server serving multiple clients has the problem how to organize multiple
-clients accessing the same server and how to stop one client reading data,
-which another client is just writing. The approach used for the SICS server
-is a combination of polling and cooperative multitasking. This scheme is
+Any program serving multiple clients has the problem how to organize multiple
+clients accessing the same server and how to prevent one client from
+reading data, while another client is writing. The approach used for
+the SICS server is a combination of polling and cooperative multitasking. This scheme is
 simple and can be implemented in an operating system independent manner. One
 way to look at the SICS server is as a series of tasks in a circular queue
 executing one after another. The servers main loop does nothing but
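
The hunk above describes the heart of the server: a circular queue of cooperatively scheduled tasks, run one after another by the main loop. A minimal C sketch of such a round-robin task circle, with invented names rather than the real SICS kernel API:

    /* Sketch of a cooperative round-robin task circle: the main loop runs
       each task in turn; a task that returns 0 is removed from the circle.
       Hypothetical names, not the actual SICS scheduler. */
    #include <stdlib.h>

    typedef struct Task {
        int        (*run)(void *ctx);  /* returns 0 when the task is done */
        void        *ctx;
        struct Task *next;
    } Task;

    static Task *queue = NULL;         /* circular, singly linked */

    void TaskSwitcher(void)
    {
        Task *prev = queue, *cur;
        while (queue != NULL) {        /* the server's main loop */
            cur = prev->next;
            if (cur->run(cur->ctx)) {
                prev = cur;            /* task yielded; move to the next one */
            } else {                   /* task finished: unlink and free it */
                prev->next = cur->next;
                if (cur == prev)       queue = NULL;  /* last task gone */
                else if (cur == queue) queue = prev;
                free(cur);
            }
        }
    }

Because every task must return quickly and voluntarily, the scheme needs no threads or locks, which is what makes it portable across operating systems.
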
@@ -151,8 +152,7 @@ client connection and create the associated data structures. If the port
 belongs to an open client connection the network reader will read the
 command pending and put it onto a command stack existing for each client
 connection. When it is time for a client task to execute, it will fetch a
-command from its very own command stack and execute it. When the net reader
-finds an user interrupt pending, the interrupt is executed.
+command from its very own command stack and execute it.
 This is how the SICS server deals with client requests.
 The scheme described above relies on the fact that most SICS command need
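
A small C sketch of the per-connection command stack described above, again with illustrative names only: the network reader pushes complete command lines, and the client task pops one each time the task switcher gives it a slice:

    /* Sketch of a per-connection command stack.  Illustrative only. */
    #include <string.h>
    #include <stdlib.h>

    #define MAXCMD 64

    typedef struct {
        char *cmd[MAXCMD];
        int   top;
    } CommandStack;

    /* called by the network reader when a full command line has arrived */
    int PushCommand(CommandStack *s, const char *line)
    {
        if (s->top >= MAXCMD) return 0;   /* stack full: refuse the command */
        s->cmd[s->top++] = strdup(line);
        return 1;
    }

    /* called from the client task's time slice; caller frees the result */
    char *PopCommand(CommandStack *s)
    {
        if (s->top == 0) return NULL;     /* nothing pending for this client */
        return s->cmd[--s->top];
    }
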
@@ -181,7 +181,7 @@ an hardware request all other clients requests to drive the hardware will
 return an error. The device executor is also responsible for monitoring the
 progress of an hardware operation. It does so by adding a special task into
 the system which checks the status of the operation each time this tasks
-executes. When the hardware operation is finished (one way or another) this
+executes. When the hardware operation is finished this
 device executor task will end. A special system facility allows a client
 task to wait for the device executor task to end while the rest of the task
 queue is still executing. In this way time intensive hardware operations can
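
The device executor's monitor fits naturally into the task circle sketched earlier: each time slice it polls the hardware once, and by returning 0 it ends itself and releases the hardware grab. A hedged sketch under those assumptions (all names hypothetical):

    /* Sketch of the device executor's monitor task, written with the
       task signature from the queue sketch above.  Hypothetical API. */
    typedef struct {
        int  (*checkStatus)(void *hw);  /* 1 = still running, 0 = done */
        void  *hw;
        int   *hwGrabbed;               /* grab flag owned by the executor */
    } DevExecCtx;

    int DevExecTask(void *pCtx)
    {
        DevExecCtx *ctx = (DevExecCtx *)pCtx;
        if (ctx->checkStatus(ctx->hw))
            return 1;                   /* keep task alive, poll again later */
        *ctx->hwGrabbed = 0;            /* operation finished: release grab */
        return 0;                       /* task ends; waiting clients wake up */
    }
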
@@ -258,7 +258,7 @@ can be configured to tell SICS clients or other SICS objects about his new
 position.
 SICS uses NeXus$^{2}$, the upcoming standard for data exchange for neutron
-and x\_ray scattering as its raw data format.
+and x--ray scattering as its raw data format.
 \section{SICS Working Examples}
@@ -279,10 +279,10 @@ done by a special system component, the task switcher.
 \subsection{The Request for a new Client Connection}
 \begin{itemize}
 \item The network reader recognizes pending data on its main server port.
-\item The network reader accepts the connection and tries to read a
+\item The network reader accepts the connection and tries to read an
 username/password pair.
-\item If such a username/password pair comes within a suitable time
-intervals it is checked for validity. On failure the connection is closed
+\item If such an username/password pair comes within a suitable time
+interval it is checked for validity. On failure the connection is closed
 again.
 \item If a valid connection has been found: A new connection object is
 created, a new task for this client connection is introduced into the
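
A compact C sketch of the handshake the list above walks through, using the BSD sockets API plus assumed helpers: ReadWithTimeout, VerifyUser and CreateConnectionTask are placeholders, not real SICS functions, and ReadWithTimeout is assumed to return a NUL-terminated "username password" line with a positive count on success:

    /* Sketch of the new-connection handshake: accept, read credentials
       within a timeout, verify, then create the connection task. */
    #include <sys/socket.h>
    #include <unistd.h>

    extern int  ReadWithTimeout(int fd, char *buf, int len, int seconds);
    extern int  VerifyUser(const char *userPass);
    extern void CreateConnectionTask(int fd);

    void AcceptNewClient(int serverFd)
    {
        char buf[128];
        int fd = accept(serverFd, NULL, NULL);
        if (fd < 0) return;
        if (ReadWithTimeout(fd, buf, sizeof buf, 10) <= 0 ||
            !VerifyUser(buf)) {
            close(fd);              /* missing or bad credentials */
            return;
        }
        CreateConnectionTask(fd);   /* new client task joins the circle */
    }
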
@@ -312,7 +312,7 @@ task to the task switcher.
 \item The next task executes.
 \end{itemize}
-\subsection{A Drive Command in Blocking Mode}
+\subsection{A ``Drive'' Command in Blocking Mode}
 \begin{itemize}
 \item The network reader finds data pending at one of the client ports.
 \item The network reader reads the command, splits it into single lines and
@@ -342,8 +342,8 @@ requesting the wait state. The client connection and task executing the drive co
 \item The device executor task will keep on monitoring the progress of the motor
 driving whenever the task switcher allows it to execute.
 \item In due time the device executor task will find that the motor finished
-driving. The task will then die. The clients grab of the hardware driving
-permission will be released.
+driving. The task will then finish executing. The clients grab of the
+hardware driving permission will be released.
 \item At this stage the drive command wrapper function will awake and
 continue execution. This means inspecting errors and reporting to the client
 how things worked out.
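
The blocking variant above amounts to: start the operation, then repeatedly yield to the task queue until the device executor's monitor task has ended. A sketch under those assumptions, with invented function names:

    /* Sketch of a blocking drive: cooperative wait on the device executor.
       Assumed names, not the real SICS calls. */
    extern int  StartDriving(void *motor, double target);  /* grabs the HW */
    extern int  DevExecTaskAlive(void);
    extern void TaskYield(void);       /* run the other queued tasks once */

    int DriveBlocking(void *motor, double target)
    {
        if (!StartDriving(motor, target))
            return 0;                  /* someone else holds the hardware */
        while (DevExecTaskAlive())
            TaskYield();               /* wait, but the server stays live */
        return 1;                      /* now inspect errors and report */
    }
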
@@ -353,7 +353,7 @@ other commands.
 \item The next task executes.
 \end{itemize}
-\subsection{A Drive Command Interrupted}
+\subsection{A ``Drive'' Command Interrupted}
 \begin{itemize}
 \item The network reader finds data pending at one of the client ports.
 \item The network reader reads the command, splits it into single lines and
@@ -396,7 +396,7 @@ task to the task switcher.
 \item The next task executes.
 \end{itemize}
-\subsection{A Run Command in Non Blocking Mode}
+\subsection{A ``Run'' Command in Non Blocking Mode}
 \begin{itemize}
 \item The network reader finds data pending at one of the client ports.
 \item The network reader reads the command, splits it into single lines and
@@ -408,13 +408,13 @@ itself to the SICS interpreter.
 \item The SICS interpreter inspects the first word of the command. Using
 this key the interpreter finds the drive command wrapper function and passes
 control to that function.
-\item The run command wrapper function will check further arguments,
+\item The ``run'' command wrapper function will check further arguments,
 checks the
 clients authorisation if appropriate for the action requested. Depending on
 the checks, the wrapper function will create an error message or do its
 work.
 \item Assuming everything is OK, the motor is located in the system.
-\item The drive command wrapper function asks the device executor to run the
+\item The ``run'' command wrapper function asks the device executor to run the
 motor.
 \item The device executor verifies that nobody else is driving, then starts
 the motor and grabs hardware control. The device executor also starts a task
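
The non-blocking ``run'' differs from the blocking drive only in that the wrapper returns to the client as soon as the motor has been started; only the monitor task keeps watching. A sketch with assumed names, reusing StartDriving from the blocking sketch above:

    /* Sketch of a non-blocking run wrapper: locate the motor, check the
       client's authorisation, start the motion and return immediately.
       All names are assumptions for illustration. */
    extern void *FindMotor(const char *name);
    extern int   CheckAuthorisation(void *con, void *motor);
    extern int   StartDriving(void *motor, double target); /* adds monitor task */

    int RunWrapper(void *con, const char *motorName, double target)
    {
        void *motor = FindMotor(motorName);
        if (motor == NULL || !CheckAuthorisation(con, motor))
            return 0;                    /* error message goes to the client */
        return StartDriving(motor, target);  /* no wait: client continues */
    }
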
@@ -432,10 +432,11 @@ permission will be released. Any errors however, will be reported.
 All this seems to be pretty complex and time consuming. But it is the complexity needed to
 do so many things, especially the non blocking mode of operation requested
 by users. Tests have shown that the task switcher manages +900 cycles
-per second through the task list on a DigitalUnix machine and 50
-cycles per second on a pentium 133mhz machine running linux. Both data
+per second through the task list on a DigitalUnix machine and 500
+cycles per second on a pentium 2GHZ machine running linux. Both data
 were obtained with software simulation of hardware devices. With real
-SINQ hardware these numbers drop 4 cycles per second. This shows
+SINQ hardware these numbers drop to as low as 4 cycles per second if
+the hardware is slow in responding. This shows
 clearly that the communication with the hardware is the systems
 bottleneck and not the task switching scheme.