THE EXPERIMENTAL PHYSICS AND INDUSTRIAL CONTROL SYSTEM ARCHITECTURE: PAST, PRESENT, AND FUTURE*


Leo R. Dalesio, Jeffrey O. Hill, Los Alamos National Laboratory (LANL)
Martin Kraimer, Argonne National Laboratory (ANL)
Stephen Lewis, Lawrence Berkeley Laboratory (LBL)
Douglas Murray, Stephan Hunt, Superconducting Super Collider Laboratory (SSCL)
William Watson, Continuous Electron Beam Accelerator Facility (CEBAF)
Matthias Clausen, Deutsches Elektronen-Synchrotron (DESY)
John Dalesio, Tate Integrated Systems (TIS)

Abstract

The Experimental Physics and Industrial Control System (EPICS) has been used at a number of sites for performing data acquisition, supervisory control, closed-loop control, sequential control, and operational optimization. The EPICS architecture was originally developed by a group with diverse backgrounds in physics and industrial control. The current architecture represents one instance of the 'standard model'. It provides distributed processing and communication from any local area network (LAN) device to the front-end controllers. This paper will present the genealogy, current architecture, performance envelope, current installations, and planned extensions for requirements not met by the current architecture.

* Work supported under the U.S. Department of Energy, Office of Basic Energy Sciences under Contract Nos. (W-7405-ENG-36), (W-31-109-ENG-38) and (DE-AC02-89ER40486)


Introduction

The Experimental Physics and Industrial Control System (EPICS) has been used at a number of sites for performing data acquisition, supervisory control, closed-loop control, sequential control, and operational optimization. The current EPICS collaboration[1] consists of five U.S. laboratories: Los Alamos National Laboratory, Argonne National Laboratory, Lawrence Berkeley Laboratory, the Superconducting Super Collider Laboratory, and the Continuous Electron Beam Accelerator Facility[2][3][4][5]. In addition, there are three industrial partners and a number of other scientific laboratories and universities using EPICS[6]. This paper will present the genealogy, current architecture, performance envelope, current installations, and planned extensions for requirements not met by the current architecture.

                              | One Shot Laser Physics        | High Order Beam Optics   | Isotopic Refinery Process Control                     | GTACS/EPICS
Architecture                  | Hierarchical                  | Distributed              | Distributed                                           | Distributed
Signal Count                  | ~4,000                        | ~300                     | ~3,000                                                | ~30,000
Field Bus                     | STD/CAMAC                     | CAMAC                    | Industrial                                            | VME/VXI/GPIB/Industrial Bitbus/CAMAC
OPI / Front End               | VAX / VAX                     | VAX / VAX                | 6800 / 6800                                           | 680x0 / workstation
Network                       | DecNet/RS232                  | DecNet                   | MAP                                                   | TCP/IP
Data Transfer                 | Polled                        | Polled / Notification    | Polled                                                | Polled / Notification
Special I/O                   | 200 TDRs, positioning, video  | Diagnostic positioning   | High rep rate, closed-loop control                    | Full complement
Offline Configuration Tools   | none                          | displays                 | displays, alarms, I/O, control, and archive requests  | displays, alarms, I/O, control, and archive requests
Table 1. Architectural History

Design History

EPICS was developed by a group with experience in controlling various complex physics processes and in industrial control. Three programs preceding the EPICS development were high order beam optics control, single shot laser physics research, and isotopic refinery process control. These systems were all developed between 1984 and 1987. The three programs embodied different aspects of data acquisition, control, and automation. They used the equipment and methods most appropriate for the time and scope of their respective problems. The Ground Test Accelerator project, where EPICS development began as GTACS[7], required fully automated remote control in a flexible and extensible environment. These requirements encompassed aspects of all of the previous control system experience. The design group combined the best features of their past, such as distributed control, real-time front-end computers, interactive configuration tools, and workstation-based operator consoles, while taking advantage of the latest technology, such as VME, VXI, X Windows, Motif, and the latest processors (table 1). Since the collaboration began, major steps have been made in portability between sites, extensibility in database and driver support, and added functionality such as the alarm manager, the knob manager, and the Motif-based operator interface. The EPICS name was adopted after the present multi-lab collaboration began. The key to the design's strength has always been the ability of the design engineers to explore and evaluate new ideas.

Current Architecture

The EPICS architecture[8] represents an instance of the 'standard model'[9][10]. There are distributed workstations for operator interfaces, archiving, alarm management, sequencing, and global data analysis. There is a local area network for peer-to-peer communication and a set of single board computers for supporting I/O interfaces, closed-loop control, and sequential control.

The software design incorporates a collection of extensible tools interconnected through the channel access communication protocol[11][12][13] (figure 1). The software architecture allows users to implement control and data acquisition strategies, to create state notation programs, and to implement sequential control in a single board computer called the Input/Output Controller (IOC). All data is passed through the channel access protocol using gets, puts, or monitors (notification on change). One can extend the basic EPICS system in the IOC by creating new database record types, calling 'C' subroutines from the database, extending the driver support, and creating independent VxWorks tasks (figure 2). Some of the larger extensions include video sampling, video analysis[14], and support for 4 kHz closed-loop control distributed over multiple IOCs[15]. Workstation-based tools are frequently developed to accommodate unique operator requirements, to integrate physics codes, or to take advantage of a commercial package; examples are an adaptive neural network for optimizing a small angle ion source[16], WingZ, PV-Wave, and Mathematica. The EPICS software architecture provides a flexible environment for resolving problems that extend beyond its capabilities.
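
As an illustration of the 'C' subroutine extension point mentioned above, the sketch below shows the general shape of an IOC-side routine written to be called from a database subroutine record. This is a hedged example only: the header name, the record field names and types, and the way the routine is bound to a record depend on the EPICS release, and the routine names here (devAvgInit, devAvgProcess) are invented for illustration.

    /* Hypothetical IOC extension: routines intended to be called from a
     * database subroutine record.  Header and field names follow the
     * conventional subroutine record interface but may differ by release. */

    #include <subRecord.h>                /* assumed EPICS record header */

    /* Called once when the record is initialized. */
    long devAvgInit(struct subRecord *psub)
    {
        psub->val = 0.0;
        return 0;
    }

    /* Called each time the record is processed: average the first two
     * input fields (A and B) and leave the result in VAL. */
    long devAvgProcess(struct subRecord *psub)
    {
        psub->val = (psub->a + psub->b) / 2.0;
        return 0;                         /* 0 indicates normal completion */
    }

A routine of this kind is compiled and linked into the IOC and named in the record's configuration, so new behavior is added without modifying the core database code.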

                  | Bytes per instance | Instances to use 1.5 MBytes | Processing time, 68040 (MV167) | CPU usage @ 1,000/second
A/D Conversions   | 576                | 2,600                       | 61 µsec each                   | 6.1%
Binary Inputs     | 480                | 3,100                       | 52 µsec each                   | 5.2%
Monitors          | 32,000 per client  | 46 clients                  | 100 µsec each                  | 10.0%
Table 2. IOC Measured Performance and Memory Consumption [18]

Performance

The IOC provides the physical interface to a portion of a machine. The limiting factors in IOC performance are CPU bandwidth and memory. Table 2 shows the measured performance of analog inputs, binary inputs, and monitors. If channel access notification is required, an additional 100 µs is incurred. It is important to note that most signals are not monitored by channel access clients and that monitors are sent only on change of state or on an excursion outside of a dead-band. In the average case, a signal being processed will not post monitors. Periodic scan rates as delivered vary from 10 Hz to once per minute, but can be modified to range from 60 Hz to once every several hours. In addition, records can be processed on end-of-conversion and change-of-state. For analog inputs, scanning on end-of-conversion significantly reduces the latency between gating a signal and processing the record, which is useful for pulse-to-pulse closed-loop control. The scheduling and dead-bands should be selected to best fit the situation. For instance, a transducer that may change within 50 msec but is accurate to only 2 units should be processed at 20 Hz with a dead-band of 2: it will be read every 50 msec but will post monitors only when the difference between the last monitor and the current reading exceeds the dead-band, so jitter within the transducer's accuracy generates no traffic. Database scanning is flexible enough to provide optimum performance with minimum overhead.
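
The dead-band rule described above can be made concrete with a small illustrative sketch; this is not the actual EPICS database code, and the structure and function names are invented for illustration.

    /* Illustrative dead-band check (not actual EPICS code): a reading taken
     * every scan period is posted only when it has moved more than the
     * dead-band since the last posted value. */
    #include <math.h>

    typedef struct {
        double last_posted;   /* value sent with the previous monitor */
        double deadband;      /* e.g. 2 units for the transducer example */
    } monitor_state;

    /* Return 1 when a monitor should be posted for the new reading. */
    int should_post_monitor(monitor_state *m, double reading)
    {
        if (fabs(reading - m->last_posted) > m->deadband) {
            m->last_posted = reading;
            return 1;
        }
        return 0;
    }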

Communication performance is bounded by the channel access protocol, TCP/IP packet overhead, and the physical communication media. Channel access makes efficient use of the communication overhead by combining multiple requests or responses. For a point-to-point connection, 1,000 monitors per second use about 3% of the 10 Mbit ethernet bandwidth (~30 bytes per monitor). To avoid collisions, and therefore non-determinism, the ethernet load is kept under 30%[17]. At this level, we can issue 10,000 monitors per second. LAN bandwidth use can be reduced by 50%-80% by changing the channel access protocol to a variable command format and compressing the monitor response data (~6-15 bytes per packet). LAN bandwidth can also be expanded with commercially available hardware. By isolating subnetworks with bridges or an etherswitch, the bandwidth can easily be tripled. Going to 100 Mbit ethernet yields a ten-fold performance improvement. Using 100 Mbit FDDI provides media ten times faster with twice the usable bandwidth (60% utilization), since it is a token-based scheme. The Ground Test Accelerator, with 2,500 physical connections and 10,000 database records distributed among 14 IOCs and interfaced to 8 workstations, used only 5-7% of our 10 Mbit ethernet during operation. Using the GTA measurements as a basis and assuming usable ethernet capacity of around 30%, the 10 Mbit ethernet will support a control network eight times larger than the current system, or 20,000 physical connections. Networks using bridges, etherswitches, 100 Mbit ethernet, and 100 Mbit FDDI will be able to support systems with between 60,000 and 400,000 physical connections on a local area network.
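
As a back-of-the-envelope check on the loading figures above, the short program below recomputes them from the numbers quoted in the text (~30 bytes per monitor, a 10 Mbit ethernet, and a 30% load ceiling). It is only an illustration, and small differences from the rounded figures in the text are expected.

    /* Recompute the ethernet loading estimates quoted in the text. */
    #include <stdio.h>

    int main(void)
    {
        const double bytes_per_monitor = 30.0;     /* ~30 bytes each (from text) */
        const double monitors_per_sec  = 1000.0;
        const double lan_bits_per_sec  = 10e6;     /* 10 Mbit ethernet */
        const double usable_fraction   = 0.30;     /* keep load under 30% */

        double load = bytes_per_monitor * 8.0 * monitors_per_sec / lan_bits_per_sec;
        double max_monitors = usable_fraction * lan_bits_per_sec
                              / (bytes_per_monitor * 8.0);

        printf("LAN utilization at 1,000 monitors/s: %.1f%%\n", load * 100.0);
        printf("Monitors/s at a 30%% load ceiling:   %.0f\n", max_monitors);
        return 0;
    }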

Installations

EPICS is in use at a number of scientific laboratories, universities, and commercial installations. Table 3 presents a summary of some of these installations, the number of signals, IOCs, and workstations installed, and the projected number of signals on completion. The EPICS software is typically used in systems with between 200 and 50,000 signals; the SSC is a unique case, with 1,000,000 signals projected. Although we have run a number of tests to characterize the operating parameters of EPICS, the largest installation operated to date has only 2,500 physical connections and 10,000 database records. EPICS extensibility will be demonstrated on the CEBAF, APS, and GTA installations in the next 12 months, as each of these installations is commissioning large portions of its respective accelerator.

                                | Signals Implemented | IOCs Installed | Workstations Installed | Signals on Completion
Ground Test Accelerator         | 2,500               | 14             | 8                      | 15,000
Advanced Photon Source          | 400                 | 3              | 3                      | 30,000
Gammasphere                     | 150                 | 8              | 6                      | 3,000
Superconducting Super Collider  | 200                 | 3              | 1                      | 1,000,000
CEBAF                           | 0                   | 0              | 0                      | 50,000
Duke Mark III IR FEL            | 380                 | 1              | 2                      | 380
St. Louis Water System          | 7,200               | 4              | 6                      | 7,200
Table 3. Installations of EPICS

Extensions

There are a number of extensions required to meet the needs of the laboratories currently specifying EPICS. The major shortcomings in the EPICS environment revolve around configuration tools, communication support issues, and some general system functions. The manpower required for this effort is distributed among the collaborating laboratories and is certainly adequate to make these additions.

Issue                                     | Solution                                | Site
Graphical database configuration          | Use Objectviews as basis for tool       | ANL, SSCL
                                          | Use schematic capture program           | LANL, CEBAF
Graphical state notation language         | Use Objectviews as basis for tool       | SSCL
Extend graphical display configuration    | Motif-based                             | ANL
                                          | X-based                                 | LANL
Graphical alarm configuration             | Motif-based                             | ANL
System configuration RDB                  | D-BASE                                  | Tate
                                          | INGRES                                  | CEBAF
Graphical archive configuration           | Use alarm configuration tool as basis   | None
Table 4. Configuration Tool Extensions for EPICS

Several significant development and tool-integration efforts are under way at the collaborating sites to bring the configuration tools up to modern standards. Most of these efforts are directed at graphical configuration tools (table 4). Another critical aspect of these configuration tools is the maintenance of very large configuration files over the lifetime of the programs. The most promising combination appears to be a graphical configuration tool that interfaces to a relational database. This combines easy visualization while configuring a specific portion of the application with the ability to use relational queries to locate items after the fact.

The communication support issues are just being addressed, as the channel access protocol is the basis for all compatibility. We have run the same version of the channel access protocol for the past three years. The requirements forcing us to finally revisit channel access are support for serial communication media, the need to support user facilities, and the ability to integrate other data sources (table 5). We are maintaining compatibility at the subroutine interface level so that all of the current channel access clients and servers will only require recompilation and relinking.
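
For reference, a minimal channel access client written against that subroutine interface might look like the sketch below. The routine names follow the CA client library header (cadef.h) of this period; the channel name is invented, error checking is omitted, and argument details may differ between releases.

    /* Minimal channel access client sketch: connect, get, then monitor. */
    #include <stdio.h>
    #include <cadef.h>                     /* channel access client interface */

    /* Monitor callback: invoked when the server posts a change. */
    static void on_monitor(struct event_handler_args args)
    {
        printf("monitor: %s = %f\n", ca_name(args.chid),
               *(const double *) args.dbr);
    }

    int main(void)
    {
        chid   ch;
        evid   ev;
        double value;

        ca_task_initialize();
        ca_search("demo:pressure", &ch);   /* channel name is hypothetical */
        ca_pend_io(5.0);                   /* wait for the connection */

        ca_get(DBR_DOUBLE, ch, &value);    /* one-shot 'get' */
        ca_pend_io(5.0);
        printf("get: %f\n", value);

        ca_add_event(DBR_DOUBLE, ch, on_monitor, NULL, &ev);   /* 'monitor' */
        ca_pend_event(60.0);               /* process monitor events */

        ca_task_exit();
        return 0;
    }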

Issue                                          | Solution                                                                                             | Site
Need dedicated point-to-point communication    | Add an option to use a name server; add drivers for serial and T1                                   | Tate, SSCL, LANL
Access protection                              | Add access control based on user, location, channel, and machine mode                               | ANL, LANL
Need closed-loop control across the network    | Add multi-priority channel access connections                                                        | LANL
Connect to alternate data stores               | Port the channel access server to different data stores                                              | DESY, LANL
Support a multitude of operator interfaces     | Create a data gateway for clients that are able to withstand a single point of failure and the added latency | LANL
IOC memory limitations                         | Size server queues according to need                                                                 | LANL
Socket and task limitations in the IOC         | Take advantage of the newly working vxWorks select                                                   | Tate, LANL
Long time-outs on disconnect                   | Add a time-out heartbeat when there is no traffic on a connection                                    | Tate, LANL
Table 5. Channel Access Extensions

Other system-wide functions needed by the facilities are: the ability to add and delete signals during operation, redundant IOCs for critical processes, higher-level physics objects as database records, a general save and restore of operating parameters, and a support group to reintegrate, test, and distribute new versions. We are currently exploring options for providing this support function. In the past, we integrated extensions and supported the EPICS installations through direct program funding. As the collaboration has grown, this has proven to be more difficult. We have recently identified this integration need as requiring dedicated manpower and equipment with an explicit charter to provide this support.

There are significant pieces of development required to make EPICS a complete solution for experimental physics. Most of the tasks are currently under development at the collaborating laboratories or the industrial partners. We are exploring options for providing good user support for the EPICS community. The functional specifications and design for these added tasks have been reviewed by the collaboration members and have been approved. The collaboration works as a single group to specify and design additions to EPICS, using the combined resources and knowledge of the collaboration.

Conclusion

The EPICS toolkit provides an environment for implementing systems that range from small test stands requiring several hundred points per second to large distributed systems with tens of thousands of physical connections. The application of EPICS requires a minimum amount of programming. The EPICS environment supports system extensions at all levels, enabling users to integrate other systems or extend the system for their own needs. Work is under way to provide a more integrated application development environment. The base software is also being extended to support some of the fundamental needs of the projects that are controlling user facilities. Through the modular software design, which supports extensions at all levels, we are able to provide an upgrade path to the future as well as an interface to the installed base. With the addition of a user support group, we will be able to provide a stable starting point, complete with an upgrade path, for those programs choosing to use the EPICS toolkit.

Acknowledgement

There are now several chapters in the EPICS story, with close to one hundred colleagues contributing thus far. The decision to collaborate carries with it the responsibility to support fellow collaborators' programs as you would your own. This responsibility has received the necessary managerial support from each of the five member laboratories to provide the environment for a successful collaboration. The ability to develop system software in a collaborative environment requires a real dedication to finding the best solution. The system designers involved in this collaboration have been egoless in their search for the best answer, resulting in consensus designs. Finally, there are the application engineers who have continually provided suggestions for upgrades and extensions. Their dedication to using these tools makes it possible to create a toolkit. The application engineers at every site have supported our efforts, even through some challenging times. All of the teams at Los Alamos National Laboratory, Argonne National Laboratory, Lawrence Berkeley Laboratory, the Superconducting Super Collider Laboratory, and the Continuous Electron Beam Accelerator Facility are responsible for this success in co-developing software. It is certainly rewarding to work with such a wide range of experience and knowledge.

References

[1] Knott, M., Thuot, M., Gurd, D., Lewis, S., "EPICS: A Control System Software Co-Development Success Story," submitted to this conference.

[2] Gurd, D., "Control System Plans and Progress at the SSC," submitted to this conference.

[3] Knott, M. J., McDowell, W. P., Lenkszus, F. R., Kraimer, M. R., Arnold, N. R., Daly, R. T., Gunderson, G. R., Cha, B. K., and Anderson, M. D., "The Advanced Photon Source Control System," in Proceedings of the 1991 IEEE Particle Accelerator Conference (San Francisco, California, 1991), pp. 2526-2528.

[4] Young, J. A., et al., "Status of the Advanced Light Source Control System," submitted to this conference.

[5] Watson III, W. A., Barker, D., Bickley, M., Gupta, P., Johnson, R. P., "The CEBAF Accelerator Control System: Migrating from a TACL to an EPICS Based System," submitted to this conference.

[6] Clausen, M., "Control for the TESLA Test Facility - Status and Future Plans," submitted to this conference.

[7] Kozubal, A. J., Kerstiens, D. M., Hill, J. O., Dalesio, L. R., "Run-time Environment and Application Tools for the Ground Test Accelerator Control System," in Proceedings of the International Conference on Accelerator and Large Experimental Physics Control Systems, D. P. Gurd and M. Crowley-Milling, Eds. (ICALEPCS, Vancouver, British Columbia, Canada, 1989), pp. 288-291.

[8] Dalesio, L. R., Kraimer, M. R., Kozubal, A. J., "EPICS Architecture," in Proceedings of the International Conference on Accelerator and Large Experimental Physics Control Systems, C. O. Pac, S. Kurokawa and T. Katoh, Eds. (ICALEPCS, KEK, Tsukuba, Japan, 1991), pp. 278-282.

[9] Kuiper, B., "Issues in Accelerator Controls," in Proceedings of the International Conference on Accelerator and Large Experimental Physics Control Systems, C. O. Pac, S. Kurokawa and T. Katoh, Eds. (ICALEPCS, KEK, Tsukuba, Japan, 1991), pp. 602-611.

[10] Thuot, M., and Dalesio, L. R., "The Standard and Non-Standard Models," presented at the Particle Accelerator Conference, Washington, D.C., May 1993.

[11] Kraimer, M. R., Cha, B. C., and Anderson, M. D., "Alarm Handler for the Advanced Photon Source Control System," in Proceedings of the 1991 IEEE Particle Accelerator Conference (San Francisco, California, 1991), pp. 1314-1316.

[12] Kozubal, A. J., and Kerstiens, D. M., "Experience with the State Notation Language and the Run-time Sequencer," submitted to this conference.

[13] Cole, R., Atkins, W., "Real-Time Data Archiving for GTA," Los Alamos National Laboratory report LA-UR-92-2420, August 1992.

[14] Zander, M., "EPICS Video," Los Alamos National Laboratory report LA-UR-93-2701, August 1993.

[15] Lenkszus, F., Kahana, E., Votaw, A., Decker, G., Chung, Y., Ciarlette, D., Laird, R., "Beam Position Monitor Data Acquisition for the Advanced Photon Source," presented at the Particle Accelerator Conference, Washington, D.C., May 1993.

[16] Brown, S., Mead, W., Bowling, P., "Optimization and Control of a Small Angle Ion Source Using an Adaptive Neural Network Controller," presented at the International Conference on Ion Sources, Beijing, China, September 1993.

[17] Nemzow, M., "Keeping the Link: Ethernet Installation and Management," McGraw-Hill Book Co., pp. 219-220.

[18] Botlo, M., Romero, A., "EPICS Performance Evaluation," SSCL-644, 1993.

