This product is made available subject to acceptance of the EPICS open source license.
This document describes pvDatabaseCPP, which is a framework for implementing a network accessible database of smart memory resident records. Network access is via pvAccess. The data in each record is a top level PVStructure as defined by pvData. The framework includes a complete implementation of ChannelProvider as defined by pvAccess. The framework can be extended in order to create record instances that implement services. The minimum that an extension must provide is a top level PVStructure and a process method.
EPICS version 4 is a set of related products in the EPICS
V4 control system programming environment:
relatedDocumentsV4.html
This is the 19-Feb-2014 version of pvDatabaseCPP.
Since the last version of the documentation:
This project is ready for alpha users.
I have not had time to look at two unresolved problems reported in the previous version of this document:
Future enhancements in priority order:
The main purpose of this project is to make it easier to implement services that are accessed via pvAccess. This project supplies a complete implementation of the server side of pvAccess. All that a service has to provide is a top level PVStructure and a process method. A service can be run as a main process or can be part of a V3 IOC. Thus services can be developed that interact with V3 records, asynDriver, areaDetector, etc.
A brief description of a pvDatabase is that it is a set of network accessible, smart, memory resident records. Each record has data composed of a top level PVStructure. Each record has a name which is the channelName for pvAccess. A local Channel Provider implements the complete ChannelProvider and Channel interfaces as defined by pvAccess. The local provider provides access to the records in the pvDatabase. This local provider is accessed by the remote pvAccess server. A record is smart because code can be attached to a record, which is accessed via a method named process.
This document describes components that provide the following features:
Base classes make it easy to create record instances. The code attached to each record must create the top level PVStructure and implement the following three methods:
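For example, a minimal record might look like the following hedged sketch (the class name is hypothetical; it is modeled on the exampleServer code described later):
class MyService;
typedef std::tr1::shared_ptr<MyService> MyServicePtr;

class MyService :
    public PVRecord
{
public:
    POINTER_DEFINITIONS(MyService);
    static MyServicePtr create(std::string const & recordName)
    {
        // top level structure holding a single string field named value
        epics::pvData::PVStructurePtr pvStructure =
            epics::pvData::getStandardPVField()->scalar(
                epics::pvData::pvString,"");
        MyServicePtr pvRecord(new MyService(recordName,pvStructure));
        if(!pvRecord->init()) pvRecord.reset();
        return pvRecord;
    }
    // init is inherited; the base class version just calls initPVRecord
    virtual void process()
    {
        // the service does its work here each time the record is processed
    }
private:
    MyService(std::string const & recordName,
        epics::pvData::PVStructurePtr const & pvStructure)
    : PVRecord(recordName,pvStructure) {}
};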
The first step is to build pvDatabaseCPP as described in the next section.
One of the examples is exampleServer. It can be started either via a main program or as part of a V3 IOC.
To start it as a main program do the following:
mrk> pwd
/home/hg/pvDatabaseCPP/exampleServer
mrk> bin/linux-x86_64/exampleServerMain
You should see something like the following:
result of addRecord exampleServer 1
VERSION : pvAccess Server v3.0.5-SNAPSHOT
PROVIDER_NAMES : local
BEACON_ADDR_LIST :
AUTO_BEACON_ADDR_LIST : 1
BEACON_PERIOD : 15
BROADCAST_PORT : 5076
SERVER_PORT : 5075
RCV_BUFFER_SIZE : 16384
IGNORE_ADDR_LIST:
STATE : INITIALIZED
exampleServer
Type exit to stop:
Then in another window execute a pvput and pvget as follows:
mrk> pvput -r "field(argument.value)" exampleServer World
...
mrk> pvget -r "record[process=true]field(result.value)" exampleServer
exampleServer
structure
string value Hello World
mrk>
To run the example as part of a V3 IOC do the following:
mrk> pwd
/home/hg/pvDatabaseCPP/exampleServer/iocBoot/exampleServer
mrk> ../../bin/linux-x86_64/exampleServer st.cmd
You will see the following:
> envPaths
epicsEnvSet("ARCH","linux-x86_64")
epicsEnvSet("IOC","exampleServer")
epicsEnvSet("TOP","/home/hg/pvDatabaseCPP/exampleServer")
epicsEnvSet("EPICS_BASE","/home/install/epics/base")
epicsEnvSet("EPICSV4HOME","/home/hg")
cd /home/hg/pvDatabaseCPP/exampleServer
## Register all support components
dbLoadDatabase("dbd/exampleServer.dbd")
exampleServer_registerRecordDeviceDriver(pdbbase)
## Load record instances
dbLoadRecords("db/dbScalar.db","name=pvdouble,type=ao")
dbLoadRecords("db/dbArray.db","name=pvdoubleArray,type=DOUBLE")
dbLoadRecords("db/dbStringArray.db","name=pvstringArray")
dbLoadRecords("db/dbEnum.db","name=pvenum")
dbLoadRecords("db/dbCounter.db","name=pvcounter");
cd /home/hg/pvDatabaseCPP/exampleServer/iocBoot/exampleServer
iocInit()
Starting iocInit
############################################################################
## EPICS R3.14.12.3 $Date: Mon 2012-12-17 14:11:47 -0600$
## EPICS Base built Dec 21 2013
############################################################################
iocRun: All initialization complete
dbl
pvdouble
pvcounter
pvenum
pvdoubleArray
pvstringArray
epicsThreadSleep(1.0)
exampleServerCreateRecord pvaServer
startPVAServer
VERSION : pvAccess Server v3.0.5-SNAPSHOT
PROVIDER_NAMES : dbPv local
BEACON_ADDR_LIST :
AUTO_BEACON_ADDR_LIST : 1
BEACON_PERIOD : 15
BROADCAST_PORT : 5076
SERVER_PORT : 5075
RCV_BUFFER_SIZE : 16384
IGNORE_ADDR_LIST:
STATE : INITIALIZED
pvdbl
pvaServer
epics>
Just as before, you can then execute a pvput and pvget and see Hello World.
The examples, i.e. exampleServer, exampleLink, examplePowerSupply, and exampleDatabase, are described in separate sections below. In addition, arrayPerformance can be used to measure the performance of large arrays. It is also described in a later section.
Reading the exampleServer section and looking at its code is a good way to learn how to implement a service.
This document describes a C++ implementation of some of the components in pvIOCJava, which also implements a pvDatabase. pvDatabaseCPP implements the core components required to create a network accessible database of smart memory resident records. pvDatabaseCPP does not implement any of the specialized support that pvIOCJava provides. It is expected that many services will be created that do not require the full features provided by pvIOCJava. In the future pvIOCJava should be split into multiple projects with one of them named pvDatabaseJava.
Similar to EPICS base, pvIOCJava implements the concept of synchronous and asynchronous record processing. For pvDatabaseCPP the process method is allowed to block. Until a need is demonstrated this will remain true. The main user of a pvDatabase is pvAccess, and in particular, remote pvAccess. The server side of remote pvAccess creates two threads for each client and always accesses a record via these threads. It is expected that these threads will be sufficient to efficiently handle all channel methods except channelRPC. For channelRPC, pvAccess provides (or will provide) a thread pool for channelRPC requests. If, in the future, a scanning facility is provided by pvDatabaseCPP or some other facility, then the scanning facility will have to provide some way of handling process requests that block.
This documentation describes the first phase of a phased implementation of pvDatabaseCPP:
Future phases of pvDatabaseCPP might include:
The completion of each phase provides useful features that can be used without waiting for the completion of later phases. The rest of this document discusses only the first phase.
The first phase will only implement record processing, i.e. the process method has to do everything itself without any generic field support. This will be sufficient for implementing many services. The following are the minimum features required:
The following sections describe the classes required for the first phase.
To build pvDatabaseCPP you must provide a file RELEASE.local in the configure directory. Thus do the following:
mrk> pwd
/home/hg/pvDatabaseCPP/configure
mrk> cp ExampleRELEASE.local RELEASE.local
Then edit RELEASE.local so that it has the correct location of each product pvDatabaseCPP requires. Then at the top level just execute make:
mrk> cd ..
mrk> pwd
/home/hg/pvDatabaseCPP
mrk> make
This builds pvDatabaseCPP and also the tests and all examples.
Each example and arrayPerformance is a completely separate top, but is also built when make is run in pvDatabaseCPP itself.
Each is a separate top for the following reasons:
If it is desired to build an example all by itself, just follow the same instructions as for building pvDatabaseCPP itself. For example:
mrk> pwd
/home/hg/pvDatabaseCPP/exampleServer/configure
mrk> cp ExampleRELEASE.local RELEASE.local
Then edit RELEASE.local so that it has the correct location of each product the example requires. Then at the top level of the example just execute make:
mrk> cd ..
mrk> pwd
/home/hg/pvDatabaseCPP/exampleServer
mrk> make
This builds the example.
The following iocsh commands are provided for a V3 IOC:
The client commands are provided via PVAClientRegister.dbd and the other commands via PVAServerRegister.dbd.
In addition, any code that implements a PVRecord must implement an iocsh command. The directory example has examples of how to implement the registration code. See example/V3IOC/exampleCounter/src/ for a simple example.
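As a hedged sketch (the record class ExampleCounter and its create method are assumptions; the real code is in example/V3IOC/exampleCounter/src/), such a registration typically looks like:
#include <stdio.h>
#include <iocsh.h>
#include <pv/pvDatabase.h>
#include <epicsExport.h>

using namespace epics::pvDatabase;

// one string argument: the record name
static const iocshArg createArg0 = {"recordName", iocshArgString};
static const iocshArg *createArgs[] = {&createArg0};
static const iocshFuncDef createFuncDef = {
    "exampleCounterCreateRecord",1,createArgs};

static void createCallFunc(const iocshArgBuf *args)
{
    char *recordName = args[0].sval;
    // ExampleCounter::create is the hypothetical record factory
    PVRecordPtr pvRecord = ExampleCounter::create(recordName);
    bool result = PVDatabase::getMaster()->addRecord(pvRecord);
    if(!result) printf("record %s not added\n",recordName);
}

static void exampleCounterRegister(void)
{
    iocshRegister(&createFuncDef,createCallFunc);
}

epicsExportRegistrar(exampleCounterRegister);
The accompanying include dbd file would then contain a registrar entry for exampleCounterRegister.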
This directory has the following files:
This directory has the following files:
recordName = "laptoprecordListPGRPC";
pvRecord = RecordListRecord::create(recordName);
result = master->addRecord(pvRecord);
The classes in pvDatabase.h describe a database of memory resident smart records. It describes the following classes:
Each class is described in a separate subsection.
namespace epics { namespace pvDatabase {
class PVRecord;
typedef std::tr1::shared_ptr<PVRecord> PVRecordPtr;
typedef std::map<epics::pvData::String,PVRecordPtr> PVRecordMap;
class PVRecordField;
typedef std::tr1::shared_ptr<PVRecordField> PVRecordFieldPtr;
typedef std::vector<PVRecordFieldPtr> PVRecordFieldPtrArray;
typedef std::tr1::shared_ptr<PVRecordFieldPtrArray> PVRecordFieldPtrArrayPtr;
class PVRecordStructure;
typedef std::tr1::shared_ptr<PVRecordStructure> PVRecordStructurePtr;
class PVRecordClient;
typedef std::tr1::shared_ptr<PVRecordClient> PVRecordClientPtr;
class PVListener;
typedef std::tr1::shared_ptr<PVListener> PVListenerPtr;
class RecordPutRequester;
typedef std::tr1::shared_ptr<RecordPutRequester> RecordPutRequesterPtr;
class PVDatabase;
typedef std::tr1::shared_ptr<PVDatabase> PVDatabasePtr;
NOTES:
class PVRecord :
    public epics::pvData::Requester,
    public std::tr1::enable_shared_from_this<PVRecord>
{
public:
POINTER_DEFINITIONS(PVRecord);
virtual bool init() {initPVRecord(); return true;}
virtual void process() {}
virtual void destroy();
static PVRecordPtr create(
std::string const & recordName,
epics::pvData::PVStructurePtr const & pvStructure);
virtual ~PVRecord();
std::string getRecordName();
PVRecordStructurePtr getPVRecordStructure();
PVRecordFieldPtr findPVRecordField(
epics::pvData::PVFieldPtr const & pvField);
bool addRequester(epics::pvData::RequesterPtr const & requester);
bool removeRequester(epics::pvData::RequesterPtr const & requester);
inline void lock_guard() { epics::pvData::Lock theLock(mutex); }
void lock();
void unlock();
bool tryLock();
void lockOtherRecord(PVRecordPtr const & otherRecord);
bool addPVRecordClient(PVRecordClientPtr const & pvRecordClient);
bool removePVRecordClient(PVRecordClientPtr const & pvRecordClient);
void detachClients();
bool addListener(PVListenerPtr const & pvListener);
bool removeListener(PVListenerPtr const & pvListener);
void beginGroupPut();
void endGroupPut();
std::string getRequesterName() {return getRecordName();}
virtual void message(
std::string const & message,
epics::pvData::MessageType messageType);
void message(
PVRecordFieldPtr const & pvRecordField,
std::string const & message,
epics::pvData::MessageType messageType);
void toString(epics::pvData::StringBuilder buf);
void toString(epics::pvData::StringBuilder buf,int indentLevel);
int getTraceLevel();
void setTraceLevel(int level);
protected:
PVRecord(
std::string const & recordName,
epics::pvData::PVStructurePtr const & pvStructure);
void initPVRecord();
epics::pvData::PVStructurePtr getPVStructure();
PVRecordPtr getPtrSelf()
{
return shared_from_this();
}
private:
...
};
The methods are:
The protected methods are:
class PVRecordField :
    public virtual epics::pvData::PostHandler,
    public std::tr1::enable_shared_from_this<PVRecordField>
{
public:
POINTER_DEFINITIONS(PVRecordField);
PVRecordField(
epics::pvData::PVFieldPtr const & pvField,
PVRecordStructurePtr const &parent,
PVRecordPtr const & pvRecord);
virtual ~PVRecordField();
virtual void destroy();
PVRecordStructurePtr getParent();
epics::pvData::PVFieldPtr getPVField();
std::string getFullFieldName();
std::string getFullName();
PVRecordPtr getPVRecord();
bool addListener(PVListenerPtr const & pvListener);
virtual void removeListener(PVListenerPtr const & pvListener);
virtual void postPut();
virtual void message(
std::string const & message,
epics::pvData::MessageType messageType);
protected:
PVRecordFieldPtr getPtrSelf()
{
return shared_from_this();
}
virtual void init();
virtual void postParent(PVRecordFieldPtr const & subField);
virtual void postSubField();
private:
...
};
When PVRecord is created it creates a PVRecordField for every field in the PVStructure that holds the data. It has the following methods:
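For example, the following hedged sketch locates the PVRecordField for a subfield and prints its full name (the record name followed by the field name); pvRecord is assumed to be a PVRecordPtr for a record with a result.value field:
epics::pvData::PVFieldPtr pvField =
    pvRecord->getPVRecordStructure()
        ->getPVStructure()->getSubField("result.value");
PVRecordFieldPtr recordField = pvRecord->findPVRecordField(pvField);
std::cout << recordField->getFullName() << std::endl;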
class PVRecordStructure : public PVRecordField {
public:
POINTER_DEFINITIONS(PVRecordStructure);
PVRecordStructure(
epics::pvData::PVStructurePtr const & pvStructure,
PVRecordFieldPtrArrayPtr const & pvRecordField);
virtual ~PVRecordStructure();
virtual void destroy();
PVRecordFieldPtrArrayPtr getPVRecordFields();
epics::pvData::PVStructurePtr getPVStructure();
virtual void removeListener(PVListenerPtr const & pvListener);
virtual void postPut();
protected:
virtual void init();
private:
...
};
When PVRecord is created it creates a PVRecordStructure for every structure field in the PVStructure that holds the data. It has the following methods:
class PVRecordClient {
public:
POINTER_DEFINITIONS(PVRecordClient);
virtual ~PVRecordClient();
virtual void detach(PVRecordPtr const & pvRecord);
};
where
class PVListener :
    virtual public PVRecordClient
{
public:
POINTER_DEFINITIONS(PVListener);
virtual ~PVListener();
virtual void dataPut(PVRecordFieldPtr const & pvRecordField) = 0;
virtual void dataPut(
    PVRecordStructurePtr const & requested,
    PVRecordFieldPtr const & pvRecordField) = 0;
virtual void beginGroupPut(PVRecordPtr const & pvRecord) = 0;
virtual void endGroupPut(PVRecordPtr const & pvRecord) = 0;
virtual void unlisten(PVRecordPtr const & pvRecord);
};
where
class PVDatabase : virtual public epics::pvData::Requester {
public:
POINTER_DEFINITIONS(PVDatabase);
static PVDatabasePtr getMaster();
virtual ~PVDatabase();
virtual void destroy();
PVRecordPtr findRecord(std::string const& recordName);
bool addRecord(PVRecordPtr const & record);
epics::pvData::PVStringArrayPtr getRecordNames();
bool removeRecord(PVRecordPtr const & record);
virtual std::string getRequesterName();
virtual void message(
std::string const &message,
epics::pvData::MessageType messageType);
private:
PVDatabase();
};
where
This is code that provides an implementation of ChannelProvider as defined by pvAccess. It provides access to PVRecords and is accessed by the server side of remote pvAccess.
This is a complete implementation of ChannelProvider and, except for channelRPC, a complete implementation of Channel as defined by pvAccess. For monitors it calls the code described in the following sections.
This provides code that creates a top level PVStructure that is an arbitrary subset of the fields in the PVStructure from a PVRecord. In addition it provides code that monitors changes to the fields in a PVRecord. A client configures the desired set of subfields and monitoring options via a pvRequest structure. pvAccess provides a class CreatePVRequest that creates a pvRequest. The pvCopy code provides the same functionality as the pvCopy code in pvIOCJava.
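As a hedged sketch (the exact PVCopy signatures may differ from the current header; the createRequest call follows the usage shown later in exampleLink, and pvRecord and requester are assumed to exist), creating a copy restricted to result.value might look like:
// create a pvRequest selecting a subset of the record's fields
epics::pvData::PVStructurePtr pvRequest =
    getCreateRequest()->createRequest("field(result.value)",requester);
// create a PVCopy for the record and a top level structure for the copy
PVCopyPtr pvCopy = PVCopy::create(pvRecord,pvRequest,"");
epics::pvData::PVStructurePtr pvTop = pvCopy->createPVStructure();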
Currently all that is implemented is a header file. The only algorithm currently implemented is onPut.
epics::pvData::monitor defines the monitor interfaces as seen by a client. See pvDatabaseCPP.html for details.
monitorFactory implements the monitoring interfaces for a PVRecord. It implements queueSize=0 and queueSize>=2.
The implementation uses PVCopy and PVCopyMonitor which are implemented in pvCopy. When PVCopyMonitor tells monitor that changes have occurred, monitor applies the appropriate algorithm to each changed field.
Currently only the onPut algorithm is implemented but, like pvIOCJava, there are plans to support the following monitor algorithms:
MonitorFactory provides the following methods:
class MonitorFactory
{
public:
static MonitorPtr create(
PVRecordPtr const & pvRecord,
MonitorRequester::shared_pointer const & monitorRequester,
PVStructurePtr const & pvRequest);
static void registerMonitorAlgorithmCreater(
MonitorAlgorithmCreatePtr const & monitorAlgorithmCreate,
String const & algorithmName);
};
where
This section provides two useful record support modules and one that is used for testing.
This implements a PVRecord that allows a client to set the trace level of a record. It follows the pattern of a channelPutGet record:
traceRecord
structure arguments
string recordName
int level 0
structure result
string status
where:
testExampleServerMain.cpp has an example of how to create a traceRecord:
PVDatabasePtr master = PVDatabase::getMaster();
PVRecordPtr pvRecord;
String recordName;
bool result(false);
recordName = "traceRecordPGRPC";
pvRecord = TraceRecord::create(recordName);
result = master->addRecord(pvRecord);
if(!result) cout << "record " << recordName << " not added" << endl;
This implements a PVRecord that allows a client to get a list of the names of the records in the database. It follows the pattern of a channelPutGet record:
recordListRecord
structure arguments
string database master
string regularExpression .*
structure result
string status
string[] names
where:
Note that swtshell, which is a Java GUI tool, has a command channelList that requires that a record of this type be present and calls it. Thus user code does not have to use a channelPutGet to get the list of record names.
testExampleServerMain.cpp has an example of how to create a recordListRecord:
recordName = "laptoprecordListPGRPC";
pvRecord = RecordListRecord::create(recordName);
result = master->addRecord(pvRecord);
if(!result) cout << "record " << recordName << " not added" << endl;
The example implements a simple service that has a top level pvStructure:
structure
structure argument
string value
structure result
string value
time_t timeStamp
long secondsPastEpoch
int nanoSeconds
int userTag
It is designed to be accessed via a channelPutGet request. The client sets argument.value. When the record processes, it sets result.value to "Hello " concatenated with argument.value. Thus if the client sets argument.value equal to "World", result.value will be "Hello World". In addition, the timeStamp is set to the time when process is called.
The example can be run on linux as follows:
mrk> pwd
/home/hg/pvDatabaseCPP/exampleService
mrk> bin/linux-x86_64/exampleService
The directory layout is:
exampleServer
configure
ExampleRELEASE.local
...
src
exampleServer.h
exampleServer.cpp
exampleServerInclude.dbd
exampleServerMain.cpp
exampleServerRegister.cpp
ioc
Db
...
src
exampleServerInclude.dbd
exampleServerMain.cpp
iocBoot
exampleServer
st.cmd
...
where
exampleServerCreateRecord exampleServer
Multiple commands can be issued to create multiple service records.
If only a main program is desired then the directory layout is:
exampleServer
configure
ExampleRELEASE.local
...
src
exampleServer.h
exampleServer.cpp
exampleServerMain.cpp
Thus if only a main program is required the directory layout is simple.
Also many sites will want to build the src directory in an area separate from where the IOCs are built.
The example resides in src. The implementation is in exampleServer.cpp.
The description consists of
class ExampleServer;
typedef std::tr1::shared_ptr<ExampleServer> ExampleServerPtr;
class ExampleServer :
public PVRecord
{
public:
POINTER_DEFINITIONS(ExampleServer);
static ExampleServerPtr create(
std::string const & recordName);
virtual ~ExampleServer();
virtual void destroy();
virtual bool init();
virtual void process();
private:
ExampleServer(std::string const & recordName,
epics::pvData::PVStructurePtr const & pvStructure);
epics::pvData::PVStringPtr pvArgumentValue;
epics::pvData::PVStringPtr pvResultValue;
epics::pvData::PVTimeStamp pvTimeStamp;
epics::pvData::TimeStamp timeStamp;
};
where
The implementation of create method is:
ExampleServerPtr ExampleServer::create(
std::string const & recordName)
{
StandardPVFieldPtr standardPVField = getStandardPVField();
PVDataCreatePtr pvDataCreate = getPVDataCreate();
PVStructurePtr pvArgument = standardPVField->scalar(pvString,"");
PVStructurePtr pvResult = standardPVField->scalar(pvString,"timeStamp");
StringArray names;
names.reserve(2);
PVFieldPtrArray fields;
fields.reserve(2);
names.push_back("argument");
fields.push_back(pvArgument);
names.push_back("result");
fields.push_back(pvResult);
PVStructurePtr pvStructure = pvDataCreate->createPVStructure(names,fields);
ExampleServerPtr pvRecord(
new ExampleServer(recordName,pvStructure));
if(!pvRecord->init()) pvRecord.reset();
return pvRecord;
}
This:
The private constructor method is:
ExampleServer::ExampleServer(
std::string const & recordName,
epics::pvData::PVStructurePtr const & pvStructure)
: PVRecord(recordName,pvStructure)
{
}
The example is very simple. Note that it calls the base class constructor.
The destructor and destroy methods are:
ExampleServer::~ExampleServer()
{
}
void ExampleServer::destroy()
{
PVRecord::destroy();
}
The destructor has nothing to do.
The destroy method, which is virtual, just calls the destroy method of the base class.
A more complicated example can clean up any resources it used but must call the base class destroy method.
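For example, a record that holds extra resources might implement destroy like the following hedged sketch (the class name and the channel member are hypothetical):
void MyComplexRecord::destroy()
{
    // release resources this record owns, e.g. a channel it created
    if(channel) channel->destroy();
    // then always call the base class destroy method
    PVRecord::destroy();
}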
The implementation of init is:
bool ExampleServer::init()
{
initPVRecord();
PVFieldPtr pvField;
pvArgumentValue = getPVStructure()->getStringField("argument.value");
if(pvArgumentValue.get()==NULL) return false;
pvResultValue = getPVStructure()->getStringField("result.value");
if(pvResultValue.get()==NULL) return false;
pvTimeStamp.attach(getPVStructure()->getSubField("result.timeStamp"));
return true;
}
The implementation of process is:
void ExampleServer::process()
{
pvResultValue->put(String("Hello ") + pvArgumentValue->get());
timeStamp.getCurrent();
pvTimeStamp.set(timeStamp);
}
It gives a value to result.value and
then sets the timeStamp to the current time.
NOTE: This is a shorter version of the actual code. It shows the essential code. The actual example shows how to create an additional record.
The main program is:
int main(int argc,char *argv[])
{
PVDatabasePtr master = PVDatabase::getMaster();
ChannelProviderLocalPtr channelProvider = ChannelProviderLocal::create();
String recordName("exampleServer");
PVRecordPtr pvRecord = ExampleServer::create(recordName);
bool result = master->addRecord(pvRecord);
cout << "result of addRecord " << recordName << " " << result << endl;
pvRecord.reset();
startPVAServer(PVACCESS_ALL_PROVIDERS,0,true,true);
cout << "exampleServer\n";
string str;
while(true) {
cout << "Type exit to stop: \n";
getline(cin,str);
if(str.compare("exit")==0) break;
}
return 0;
}
This:
To start exampleServer as part of a V3IOC:
mrk> pwd
/home/hg/pvDatabaseCPP/exampleServer/iocBoot/exampleServer
mrk> ../../../bin/linux-x86_64/exampleServer st.cmd
You can then issue the commands dbl and pvdbl:
epics> dbl
double01
epics> pvdbl
exampleServer
epics>
double01 is a v3Record. exampleServer is a pvRecord.
It starts pvaSrv so that the V3 records can be accessed via Channel Access or via pvAccess.
The exampleServer pvDatabase has many records including the following:
It also has a number of other scalar and array records.
exampleDatabase can be started as a main program or as a V3 IOC. If started as a V3 IOC it also has a number of V3 records, and starts pvaSrv so that the V3 records can be accessed via Channel Access or via pvAccess.
This example shows how a service can access other PVRecords. This section 1) starts with a discussion of accessing data via pvAccess and 2) gives a brief description of an example that gets data from an array of doubles.
The process routine of a PVRecord can access other PVRecords in two ways:
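For direct access, both records must be locked; PVRecord::lockOtherRecord locks the second record in an order that avoids deadlocks. A hedged sketch, from inside a process method (the framework has already locked this record; the record name is hypothetical):
PVRecordPtr other = PVDatabase::getMaster()->findRecord("otherRecordName");
if(other.get()!=NULL) {
    lockOtherRecord(other);  // deadlock-safe second lock
    // ... read or write fields of the other record ...
    other->unlock();
}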
Access via pvAccess can be done either by local or remote channel provider.
If pvAccess is used then it handles data synchronization. This is done by making a copy of the data that is transferred between the two pvRecords. This is true whether remote or local pvAccess is used. Each get, put, etc. request results in data being copied between the two records.
If the linked channel is a local pvRecord then, for scalar and structure arrays, raw data is NOT copied for gets. This is because pvData uses shared_vector to hold the raw data. Instead of copying the raw data the reference count is incremented.
For puts the linked array will force a new allocation of the raw data in the linked record, i.e. copy-on-write semantics are enforced. This is done automatically by pvData and not by pvDatabase.
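The following hedged sketch illustrates the pvData shared_vector behavior being described: a get shares the data by reference, and thaw makes a private copy only while the data is still shared:
#include <pv/sharedVector.h>
using namespace epics::pvData;

shared_vector<double> data(4);
data[0] = 100; data[1] = 200; data[2] = 300; data[3] = 400;
// freeze transfers ownership to an immutable view; data becomes empty
shared_vector<const double> frozen(freeze(data));
// a local get can share frozen; only the reference count changes
shared_vector<const double> copyForGet(frozen);
// a put must not modify data another record still references,
// so thaw allocates new storage when the reference count is > 1
shared_vector<double> writable(thaw(copyForGet));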
As mentioned before, a pvDatabase server can be either a separate process, i.e. a main program, or part of a V3 IOC.
A main pvDatabase server issues the following calls:
ClientFactory::start();
ChannelProviderLocalPtr channelProvider = getChannelProviderLocal();
...
ServerContext::shared_pointer serverContext =
    startPVAServer(PVACCESS_ALL_PROVIDERS,0,true,true);
The first call is only necessary if some of the pvRecords have pvAccess links. These calls must be made before any code that uses links is initialized. After these two calls there will be two channel providers: local and pvAccess.
A pvDatabase that is part of a V3IOC has the following in the st.cmd file.
...
iocInit()
startPVAClient
startPVAServer
## commands to create pvRecords
Once the client and local provider code has started then the following creates a channel access link.
PVDatabasePtr master = PVDatabase::getMaster();
ChannelAccess::shared_pointer channelAccess = getChannelAccess();
ChannelProvider::shared_pointer provider =
    channelAccess->getProvider(providerName);
Channel::shared_pointer channel =
    provider->createChannel(channelName,channelRequester);
exampleLink
configure
ExampleRELEASE.local
...
src
exampleLink.h
exampleLink.cpp
exampleLinkInclude.dbd
exampleLinkRegister.cpp
ioc
Db
src
exampleLinkInclude.dbd
exampleLinkMain.cpp
iocBoot
exampleLink
st.local
st.remote
...
This example is only built to be run as part of a V3 IOC. Note that two startup files are available: st.local and st.remote. st.local has two records: doubleArray and exampleLink. doubleArray is a record that can be changed via a call to pvput. exampleLink is a record that, when processed, gets the value from doubleArray and sets its value equal to the value read. st.remote has only one record, named exampleLinkRemote.
To start the example:
mrk> pwd
/home/hg/pvDatabaseCPP/exampleLink/iocBoot/exampleLink
mrk> ../../bin/linux-x86_64/exampleLink st.local
then in another window:
mrk> pvput doubleArray 4 100 200 300 400
Old : doubleArray 0
New : doubleArray 4 100 200 300 400
mrk> pvget -r "record[process=true]field(value)" exampleLink
exampleLink
structure
double[] value [100,200,300,400]
mrk>
exampleLink.h contains the following:
...
class ExampleLink :
public PVRecord,
public epics::pvAccess::ChannelRequester,
public epics::pvAccess::ChannelGetRequester
{
public:
POINTER_DEFINITIONS(ExampleLink);
static ExampleLinkPtr create(
std::string const & recordName,
std::string const & providerName,
std::string const & channelName
);
virtual ~ExampleLink() {}
virtual void destroy();
virtual bool init();
virtual void process();
virtual void channelCreated(
const epics::pvData::Status& status,
epics::pvAccess::Channel::shared_pointer const & channel);
virtual void channelStateChange(
epics::pvAccess::Channel::shared_pointer const & channel,
epics::pvAccess::Channel::ConnectionState connectionState);
virtual void channelGetConnect(
const epics::pvData::Status& status,
epics::pvAccess::ChannelGet::shared_pointer const & channelGet,
epics::pvData::PVStructure::shared_pointer const & pvStructure,
epics::pvData::BitSet::shared_pointer const & bitSet);
virtual void getDone(const epics::pvData::Status& status);
private:
...
All the non-static methods are either PVRecord, ChannelRequester, or ChannelGetRequester methods and will not be discussed further. The create method is called to create a new PVRecord instance with code that will issue a ChannelGet::get request every time the process method of the instance is called. Some other pvAccess client can issue a channelGet, to the record instance, with a request to process in order to test the example.
All of the initialization is done by a combination of the create and init methods, so let's look at them:
ExampleLinkPtr ExampleLink::create(
String const & recordName,
String const & providerName,
String const & channelName)
{
PVStructurePtr pvStructure = getStandardPVField()->scalarArray(
    pvDouble,"alarm,timeStamp");
ExampleLinkPtr pvRecord(
new ExampleLink(
recordName,providerName,channelName,pvStructure));
if(!pvRecord->init()) pvRecord.reset();
return pvRecord;
}
This first creates a new ExampleLink instance, then calls the init method and returns an ExampleLinkPtr. Note that if init returns false it returns a pointer to NULL.
The init method is:
bool ExampleLink::init()
{
initPVRecord();
PVStructurePtr pvStructure = getPVRecordStructure()->getPVStructure();
pvTimeStamp.attach(pvStructure->getSubField("timeStamp"));
pvAlarm.attach(pvStructure->getSubField("alarm"));
pvValue = static_pointer_cast<PVDoubleArray>(
pvStructure->getScalarArrayField("value",pvDouble));
if(pvValue==NULL) {
return false;
}
ChannelAccess::shared_pointer channelAccess = getChannelAccess();
ChannelProvider::shared_pointer provider =
channelAccess->getProvider(providerName);
if(provider==NULL) {
cout << getRecordName() << " provider "
<< providerName << " does not exist" << endl;
return false;
}
ChannelRequester::shared_pointer channelRequester =
dynamic_pointer_cast<ChannelRequester>(getPtrSelf());
channel = provider->createChannel(channelName,channelRequester);
event.wait();
if(!status.isOK()) {
cout << getRecordName() << " createChannel failed "
<< status.getMessage() << endl;
return false;
}
ChannelGetRequester::shared_pointer channelGetRequester =
dynamic_pointer_cast<ChannelGetRequester>(getPtrSelf());
PVStructurePtr pvRequest = getCreateRequest()->createRequest(
"value,alarm,timeStamp",getPtrSelf());
channelGet = channel->createChannelGet(channelGetRequester,pvRequest);
event.wait();
if(!status.isOK()) {
cout << getRecordName() << " createChannelGet failed "
<< status.getMessage() << endl;
return false;
}
getPVValue = static_pointer_cast<PVDoubleArray>(
getPVStructure->getScalarArrayField("value",pvDouble));
if(getPVValue==NULL) {
cout << getRecordName() << " get value not PVDoubleArray" << endl;
return false;
}
return true;
}
This first makes sure the pvStructure has the fields it requires:
Next it makes sure the channelProvider exists.
Next it creates the channel and waits until it connects.
Next it creates the channelGet and waits until it is created.
Next it makes sure it has connected to a double array field.
If anything goes wrong during initialization it returns false. Thus a return of true means that it has successfully created a channelGet and is ready to issue gets when process is called.
Look at the code for more details.
This is an example of creating a service that requires a somewhat complicated top level PVStructure. It is similar to the powerSupply example that is provided with pvIOCJava. Look at the code for details.
This section describes main programs that demonstrate performance of large arrays and can also be used to check for memory leaks. Checking for memory leaks can be accomplished by running the programs with valgrind or some other memory check program.
The programs are:
Each has support for -help.
mrk> pwd
/home/hg/pvDatabaseCPP-md
mrk> bin/linux-x86_64/arrayPerformanceMain -help
arrayPerformanceMain recordName size delay providerName nMonitor queueSize waitTime
default
arrayPerformance arrayPerformance 10000000 0.0001 local 1 2 0.0
mrk> bin/linux-x86_64/longArrayMonitorMain -help
longArrayMonitorMain channelName queueSize waitTime
default
longArrayMonitorMain arrayPerformance 2 0.0
mrk> bin/linux-x86_64/longArrayGetMain -help
longArrayGetMain channelName iterBetweenCreateChannel iterBetweenCreateChannelGet delayTime
default
longArrayGetMain arrayPerformance 0 0 1
mrk> bin/linux-x86_64/longArrayPutMain -help
longArrayPutMain channelName arraySize iterBetweenCreateChannel iterBetweenCreateChannelPut delayTime
default
longArrayPutMain arrayPerformance 10 0 0 1
mrk>
Note: These may fail if run on a platform that does not have sufficient memory.
To see an example just execute the following commands in four different terminal windows:
bin/<arch>/arrayPerformanceMain
bin/<arch>/longArrayMonitorMain
bin/<arch>/longArrayGetMain
bin/<arch>/longArrayPutMain
Each program generates a report every second when it has something to report. Examples are:
mrk> bin/linux-x86_64/arrayPerformanceMain
arrayPerformance arrayPerformance 10000000 0.0001 local 1 2 0
...
monitors/sec 66 first 131 last 131 changed {1, 2} overrun {} megaElements/sec 656.999
arrayPerformance value 132 time 1.00486 Iterations/sec 65.681 megaElements/sec 656.81
monitors/sec 66 first 197 last 197 changed {1, 2} overrun {} megaElements/sec 656.304
arrayPerformance value 198 time 1.00563 Iterations/sec 65.6307 megaElements/sec 656.307
monitors/sec 66 first 263 last 263 changed {1, 2} overrun {} megaElements/sec 654.824
...
mrk> bin/linux-x86_64/longArrayMonitorMain
longArrayMonitorMain arrayPerformance 2 0
...
monitors/sec 6 first 2357 last 2357 changed {1, 2} overrun {} megaElements/sec 68.6406
monitors/sec 13 first 2385 last 2385 changed {1, 2} overrun {} megaElements/sec 118.72
monitors/sec 9 first 2418 last 2418 changed {1, 2} overrun {1, 2} megaElements/sec 85.0984
...
mrk> bin/linux-x86_64/longArrayPutMain
longArrayPutMain arrayPerformance 10 0 0 1
...
put numChannelPut 0 time 1.00148 Elements/sec 79.8819
put numChannelPut 1 time 1.00176 Elements/sec 79.8598
...
mrk> bin/linux-x86_64/longArrayGetMain
longArrayGetMain arrayPerformance 0 0 1
...
get kiloElements/sec 7384.61
get kiloElements/sec 8726.34
...
The arguments for arrayPerformanceMain are:
arrayPerformance creates a PVRecord that has the structure:
recordName
long[] value
timeStamp timeStamp
alarm alarm
Thus it holds an array of 64 bit integers.
The record has support that consists of a separate thread that runs, until the record is destroyed, executing the following algorithm (a sketch appears after this list):
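A hedged sketch of such a support thread; the names and helper calls (isDestroyed, pvValue, size, delay) are assumptions, not the actual arrayPerformance code:
void ArrayPerformanceThread::run()
{
    epics::pvData::int64 iteration = 0;
    while(!isDestroyed) {
        // fill a fresh array so that the first element equals the last
        epics::pvData::shared_vector<epics::pvData::int64> value(size);
        std::fill(value.begin(),value.end(),++iteration);
        pvRecord->lock();
        pvRecord->beginGroupPut();
        pvValue->replace(freeze(value)); // post the new array
        timeStamp.getCurrent();
        pvTimeStamp.set(timeStamp);
        pvRecord->endGroupPut();
        pvRecord->unlock();
        epicsThreadSleep(delay);         // the configured delay
    }
}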
This is a pvAccess client that monitors an arrayPerformance record. It generates a report every second showing how many elements it has received. For every monitor it also checks that the number of elements is > 0 and that the first element equals the last element. It reports an error if either of these conditions is not true.
The arguments for longArrayMonitorMain are:
This is a pvAccess client that uses channelGet to access an arrayPerformance record. Every second it produces a report.
The arguments for longArrayGetMain are:
This is a pvAccess client that uses channelPut to access an arrayPerformance record. Every second it produces a report.
The arguments for longArrayPutMain are:
The results were from my laptop. It has a 2.2GHz Intel Core i7 with 4 gigabytes of memory. The operating system is Linux Fedora 16.
When tests are performed with large arrays it is a good idea to also run a system monitor facility and check memory and swap history. If a test configuration causes physical memory to be exhausted then performance becomes very poor. You do not want to do this.
The simplest test is to run arrayPerformance with the defaults:
mrk> pwd
/home/hg/pvDatabaseCPP-md
mrk> bin/linux-x86_64/arrayPerformanceMain
This means that the array will hold 10 million elements. The delay will be a millisecond. There will be a single monitor and it will connect directly to the local channelProvider, i. e. it will not use any network connection.
The report shows that arrayPerformance can perform about 50 iterations per second and is putting about 500 million elements per second. Since each element is an int64 (8 bytes) this means about 4 gigabytes per second.
When arrayPerformance is started with no monitors requested and a remote longArrayMonitorMain is run:
mrk> pwd
/home/hg/pvDatabaseCPP-md
mrk> bin/linux-x86_64/longArrayMonitorMain
The performance drops to about 25 iterations per second and 250 million elements per second. The next section has an example that demonstrates what happens. Note that if the array size is small enough to fit in the local cache then running longArrayMonitor has almost no effect on arrayPerformance.
Running longArrayMonitorMain, longArrayPutMain, and longArrayGetMain under valgrind shows no memory leaks.
arrayPerformanceMain shows the following:
==9125== LEAK SUMMARY:
==9125==    definitely lost: 0 bytes in 0 blocks
==9125==    indirectly lost: 0 bytes in 0 blocks
==9125==      possibly lost: 576 bytes in 2 blocks
The possibly-lost memory is either 1 or 2 blocks. It seems to be the same whether or not clients are connected.
This example demonstrates how array size affects performance. The example is run as:
bin/linux-x86_64/vectorPerformanceMain -help
vectorPerformanceMain size delay nThread
default
vectorPerformance 50000000 0.01 1
Consider the following:
bin/linux-x86_64/vectorPerformanceMain 50000000 0.00 1
...
thread0 value 20 time 1.01897 iterations/sec 19.6277 elements/sec 981.383million
thread0 value 40 time 1.01238 iterations/sec 19.7554 elements/sec 987.772million
thread0 value 60 time 1.00878 iterations/sec 19.826 elements/sec 991.299million
...
bin/linux-x86_64/vectorPerformanceMain 50000000 0.00 2
...
thread0 value 21 time 1.00917 iterations/sec 9.90911 elements/sec 495.455million
thread1 value 31 time 1.05659 iterations/sec 9.46443 elements/sec 473.221million
thread0 value 31 time 1.07683 iterations/sec 9.28648 elements/sec 464.324million
thread1 value 41 time 1.0108 iterations/sec 9.89312 elements/sec 494.656million
...
bin/linux-x86_64/vectorPerformanceMain 50000000 0.00 3
thread0 value 7 time 1.0336 iterations/sec 6.77244 elements/sec 338.622million
thread1 value 7 time 1.03929 iterations/sec 6.73534 elements/sec 336.767million
thread2 value 7 time 1.04345 iterations/sec 6.70852 elements/sec 335.426million
thread0 value 14 time 1.03335 iterations/sec 6.77406 elements/sec 338.703million
thread1 value 14 time 1.03438 iterations/sec 6.76734 elements/sec 338.367million
thread2 value 14 time 1.04197 iterations/sec 6.71805 elements/sec 335.903million
...
bin/linux-x86_64/vectorPerformanceMain 50000000 0.00 4
thread2 value 5 time 1.00746 iterations/sec 4.96298 elements/sec 248.149million
thread1 value 5 time 1.02722 iterations/sec 4.86751 elements/sec 243.376million
thread3 value 5 time 1.032 iterations/sec 4.84496 elements/sec 242.248million
thread0 value 6 time 1.18882 iterations/sec 5.04703 elements/sec 252.351million
thread2 value 10 time 1.00388 iterations/sec 4.98068 elements/sec 249.034million
thread3 value 10 time 1.02755 iterations/sec 4.86592 elements/sec 243.296million
thread1 value 10 time 1.04836 iterations/sec 4.76936 elements/sec 238.468million
thread0 value 11 time 1.01575 iterations/sec 4.92249 elements/sec 246.124million
The more threads that are running, the slower each thread runs.
But now consider a size that fits in a local cache.
bin/linux-x86_64/vectorPerformanceMain 5000 0.00 1
...
thread0 value 283499 time 1 iterations/sec 283498 elements/sec 1417.49million
thread0 value 569654 time 1 iterations/sec 286154 elements/sec 1430.77million
thread0 value 856046 time 1 iterations/sec 286392 elements/sec 1431.96million
...
bin/linux-x86_64/vectorPerformanceMain 5000 0.00 2
...
thread0 value 541790 time 1 iterations/sec 271513 elements/sec 1357.56million
thread1 value 541798 time 1 iterations/sec 271418 elements/sec 1357.09million
thread0 value 813833 time 1 iterations/sec 272043 elements/sec 1360.21million
thread1 value 813778 time 1 iterations/sec 271979 elements/sec 1359.89million
...
bin/linux-x86_64/vectorPerformanceMain 5000 0.00 3
...
thread0 value 257090 time 1 iterations/sec 257089 elements/sec 1285.45million
thread1 value 256556 time 1 iterations/sec 256556 elements/sec 1282.78million
thread2 value 514269 time 1 iterations/sec 257839 elements/sec 1289.19million
thread0 value 514977 time 1 iterations/sec 257887 elements/sec 1289.43million
thread1 value 514119 time 1 iterations/sec 257563 elements/sec 1287.81million
thread2 value 770802 time 1 iterations/sec 256532 elements/sec 1282.66million
Now the number of threads has a far smaller effect on the performance of each thread.