CAN field bus for industrial-controls applications in high-energy physics experiments.
W.P.J.Heubers, H.Boterenbrood, J.T.van Es, R.G.K.Hart
NIKHEF
National Institute for Nuclear and High-Energy Physics
Amsterdam, The Netherlands
ABSTRACT
CAN bus, a popular field bus choice in industry, has gained momentum in high-energy physics research for industrial-controls-like applications in experiments and accelerators. This
paper first describes the controls problems that next-generation particle-physics experiments are facing. It then discusses the hardware and software development of prototype
systems for analogue sensor readout over the CERN-recommended CAN field bus.
INTRODUCTION
The Large Hadron Collider (LHC) at CERN is a particle accelerator that will collide beams of protons with an energy of 7
TeV. Four high-energy physics particle detectors will be installed at well-defined locations in the 27-km long accelerator
ring. These detectors will be designed, built and operated under the responsibility of large collaborations with participants
from all over the world. Because of the extremely long time
scale (the start of LHC operation is foreseen in 2005) and the
complexity of the instrumentation, one has to take care that,
where possible, widely accepted industrial standards and solutions are implemented. Reliability and maintainability have
to be ensured during the many years of operation.
THE ATLAS DETECTOR
The largest detector to be installed at the LHC is the ATLAS detector, measuring 20 by 40 meters. The ATLAS detector consists of three main detection systems (muon detector, calorimeters and tracking detectors) and a number of subsystems,
such as the magnet system, the cooling system and the data
acquisition system. The outermost detector is the Muon Spectrometer, which occupies the largest volume of the ATLAS detector
(approximately 16,000 m3) with a radius of 11 meters. This
Muon Spectrometer consists of a barrel and two end-caps with
a total of 1194 precision chambers and 800 trigger
chambers, generating physics data on more than 1,000,000
readout channels. One has to take into account that the instrumentation for the Muon Spectrometer has to operate in a magnetic field with typical values of 0.5-2 Tesla and with a radiation background of mainly low-energy neutrons and photons.
A typical area where one cannot select standard industrial
solutions is the inner part of the detector, where space is limited and environmental conditions are hostile. In particular, the
amount of radiation and the constraints on power dissipation
and space require custom-designed, radiation-hard electronics for the readout of the large number
of data channels. In the outer parts of the detectors the environmental conditions are somewhat better, and one can consider the use of commercially available electronics, such as micro
controllers acting as intelligent nodes connected to a field bus.
Physicists and engineers cannot enter the caverns where the
detectors are installed to inspect the functioning of the instrumentation while the LHC is operating. A reliable and redundant
control system is required to set and monitor an
estimated several hundred thousand I/O points remotely from
the control rooms. Mixing the control data with the physics
data on the high-bandwidth data channels from the detector to
the counting rooms must be avoided, to prevent blocking of
the control signals in case of congestion in the data channels.
In this paper a description will be given of the ideas for the
readout of analogue sensors to continuously monitor the temperature and the values of the magnetic field in the precision
chambers in the Muon Spectrometer. The proposal is to install
on each of these chambers a field bus node where analogue
signals can be digitized and transmitted to the detector control
system.
DETECTOR CONTROL SYSTEM
The ATLAS Detector Control System (DCS) [1] will be divided
into subsystems. A requirement of DCS is that these subsystems can be operated independently of each other, so that
parts of the detector can be tested and commissioned during installation
and maintenance periods. The division of DCS into subsystems can be either functional (e.g. the magnet system) or topological (e.g. the precision chambers in
the barrel). The supervisory control has to interconnect all
these subsystems and provide the operators with a
global and consistent interface to the detector. Local Control
Stations will interface to the hardware, typically modular systems (most likely VME) with connections to field bus networks and PLC systems.
SELECTION OF A FIELD BUS
Considering the time scale and the complexity of the instrumentation projects for the LHC accelerator and particle detectors, a policy has been defined to apply industrial solutions
and standards wherever appropriate. Field buses are typical
examples of industrial developments and they will be implemented on a large scale to control and monitor a wide range of
equipment in the accelerator and experiments. As many different kinds of field buses are available from industry, it was felt
necessary to restrict the choice of field buses to be used at
CERN [2]. The CAN field bus has been selected as the solution for relatively simple control tasks and sensor readout.
Reliability, availability of inexpensive controller chips from
different suppliers, ease of use, wide acceptance by industry
and the expectation that CAN will be available for a long period of time, were strong arguments in favour of this choice.
Both the data link layer and the physical layer (not including the
transmission medium) of the ISO/OSI reference model are
defined for the CAN bus in the open standard ISO/DIS 11898
[3].
CAN CONTROL FOR MUON PRECISION CHAMBERS
As mentioned earlier, one of the main detection systems in the
ATLAS detector is the Muon Spectrometer. The barrel of this
spectrometer contains about 600 Monitored Drift Tube (MDT)
chambers. These precision chambers are arranged in three
concentric cylinders around the beam axis at radial distances
of about 5, 7.5 and 10.5 meters. Each chamber is an assembly
of six parallel layers of drift tubes on a support frame, three
layers on each side.
From a control system point of view each of these chambers can be considered as a separate entity controlled by one
CAN node which is mounted on the frame of the chamber. The
barrel of the spectrometer consists of about 600 precision
chambers; consequently, a field bus network configuration of
(at least) 600 CAN nodes, interfaced to the detector control
system by VME modules, has to be designed. Monitoring of the
temperature at different spots on the supporting frame of a
precision chamber is required with a resolution of 0.5 °C and
a repetition rate in the order of ten seconds. Collecting this
information regularly is required because the resolution of the
drift tubes and the deformation of the chambers are functions
of the temperature. When measuring the temperature on an
average of 16 locations on each chamber, close to 10,000 sensors are needed for the 600 chambers in the barrel of the Muon
Spectrometer.
Hall probes will be mounted on many chambers to continuously monitor the three components of the magnetic field.
The output values of these probes will be compared with the
results of calculations on the three-dimensional field model.
Apart from the temperature sensors and Hall probes, output
from other sensors, such as strain gauges or gas pressure sensors, can be digitized and collected by the CAN nodes as well.
PROTOTYPE MUON DETECTOR
The Demonstration of Atlas Chamber Alignment (DATCHA)
is a prototype of a barrel section of the Muon Spectrometer.
This prototype consists of three precision chambers and was
built specifically to test the accuracy of the alignment system.
DATCHA is 12 meters high and has been installed in one of
the underground caverns at CERN. To gain experience with
the CAN field bus for chamber control, we considered
DATCHA an excellent opportunity to work out our ideas
and to demonstrate a real application to the ATLAS collaboration. CAN nodes have been designed and implemented to
control and monitor the readout electronics, to set and monitor
the high-voltage channels and to monitor temperatures. The
field bus network is interfaced to a Sun Unix system by a
VME-CAN interface from MicroSys [4].
GENERAL PURPOSE CAN NODES
The heart of the CAN nodes that control the DATCHA prototype detector is a general-purpose, credit-card sized module.
This module contains a CAN controller, a micro controller,
memory and I/O, and is used in combination with dedicated
electronics to add more specific functionality. These general-purpose modules can be programmed for different applications
and combine local intelligence, extensive I/O capabilities and
the CAN-bus interface on a small board.
Two different implementations of this general-purpose
CAN module are used for the DATCHA detector: one is the
General-Purpose CAN module (GPCAN), an in-house development of NIKHEF, and the other one is commercially available (MicroKey [5]). Both have a Philips micro controller of
the 8051 family with an on-chip integrated CAN bus controller
and extensive I/O functions.
Three dedicated CAN nodes are designed with these general-purpose CAN modules for the control and monitoring of
the precision chambers in DATCHA:
1. Detector Control Card (DCC)
2. Quad High-Voltage Card (Quad-HV)
3. Crystal CAN system.
After a short description of the DCC and the Quad-HV, we
will give a more detailed description of the Crystal CAN system for analogue sensor readout in the following sections.
DETECTOR CONTROL CARD
A Detector Control Card (DCC) with a CAN node and additional electronics has been installed on each of the three precision chambers of the prototype detector DATCHA. The DCC
module adjusts the analogue threshold values, disables noisy
channels, generates test signals and monitors the temperature
of the front-end electronics. An important feature here is the
ability to disable noisy channels in the chambers. Each front-end module connects to 32 tubes in the precision chamber and
has a 32-bit register to disable individual channels. The register is written and read by the micro controller through a JTAG interface, as defined by
the Boundary Scan Architecture standard [6]. The Detector Control Card is implemented with a
GPCAN from NIKHEF with the Philips 87C592 micro controller and integrated CAN-bus interface.
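As an illustration of this disable mechanism, the sketch below shifts a 32-bit channel mask towards a front-end register over a clock/data pair. It is only a minimal sketch: the pin-access macros are placeholders, and the TAP state-machine navigation required by a real JTAG/Boundary Scan access is deliberately omitted.

#include <stdint.h>

/* Placeholder pin-access macros: in the real DCC firmware these would map
   to I/O port bits of the 87C592 driving the front-end JTAG lines. */
#define SET_TCK(level)  ((void)(level))  /* test clock   */
#define SET_TDI(level)  ((void)(level))  /* test data in */

/* Shift a 32-bit channel-disable mask, least significant bit first.
   A set bit is assumed to disable the corresponding drift-tube channel.
   Navigating the JTAG TAP state machine (Shift-DR entry/exit, Update-DR)
   is omitted here for brevity. */
static void shift_disable_mask(uint32_t mask)
{
    for (int bit = 0; bit < 32; bit++) {
        SET_TDI((mask >> bit) & 1u);  /* present next data bit       */
        SET_TCK(1);                   /* clock it into the register  */
        SET_TCK(0);
    }
}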
HIGH-VOLTAGE CONTROL CARD
The Quad High-Voltage Card (Quad-HV) is meant for the
control and monitoring of the high-voltage channels of the
precision chambers. It consists of a CAN node which controls
four independent high-voltage generators. The node is able to
switch the power supply on and off, to set the voltage between 0
and the maximum value of 4096 Volt, to monitor the actual
voltage and current values, and to trip the supply when preset
limits are exceeded. The CAN node is based on the same
GPCAN node as used for the Detector Control Card described
above.

CRYSTAL CAN SYSTEM
One of the requirements of the future ATLAS DCS is that it
should be able to measure the magnetic field and the temperature
inside and around different parts of the detector. The Crystal
CAN system has been developed as a prototype for distributed and
multiplexed analogue sensor readout.
The hardware is made up of two module types: the Controller Module and the Sensor Module. The controller module
hardware is the same in all applications; the software running
on the module provides the application-specific features. The
sensor module is built around the same ADC in all applications, but the signal-conditioning electronics around it depends
on the application (the type of sensors to be read out).
The Controller Module is based on a commercially available universal micro controller module (MicroKey 20CN592),
designed around the Philips 80C592 8-bit micro controller
running at a clock frequency of 16 MHz. The micro controller
has an integrated CAN controller, and the on-board CAN
transceiver (Philips 82C250) provides an ISO/DIS-11898
standard CAN-bus interface. The module offers 48 KB of user
application ROM (flash) and 63.5 KB of RAM, which is enough
to build quite large applications. User application code can be
downloaded through a serial port (standard) or optionally via
the CAN-bus, an option that could be very useful once the
modules are integrated into an experiment.
Figure 1. Crystal CAN system configuration.

These features are provided by the onboard firmware
delivered with the module. The controller module also contains a multiplexer chip to enable connections to several sensor modules. One controller module may control up to eight
sensor modules. A digital multiplexer-chip is used to switch
the controller's serial interface I/O port to any of the connected
sensor modules.
The CRYSTAL CS5525 ADC [7] is the heart of the Sensor
Module and digitizes the analogue quantities to be measured.
This 16-bit ADC contains an instrumentation amplifier, a
programmable gain amplifier, a digital filter and calibration
circuitry. It also has four output latch pins, which are used to
control an external analogue multiplexer to directly select any
of up to 16 analogue inputs. In our case we use the CPD output as a fifth bit to be able to select even up to 32 analogue
inputs. The CS5525 can perform conversions with rates of
3.76, 7.5, 15, 30, 60, 123, 169 and 202 Hz with voltage input
ranges of 25 mV, 55 mV, 100 mV, 1 V and 5 V, unipolar as
well as bipolar. The CS5525 is controlled through a three-wire
serial interface that is compatible with SPI and MicroWire
standards. The interface signal lines are connected to I/O ports
of the micro controller in the controller module, which runs
software implementing the protocol across these lines [8].
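The following sketch illustrates how such a software-driven three-wire transfer might look on the controller module. The pin macros and the 24-bit read length are assumptions for illustration; the actual register commands and timing are defined in the CS5525 data sheet [7] and application note [8].

#include <stdint.h>

/* Placeholder macros for the three interface lines; in the real firmware
   they would map to 80C592 I/O port bits routed through the digital
   multiplexer to the selected sensor module. */
#define SET_SCLK(level)  ((void)(level))
#define SET_SDI(level)   ((void)(level))
#define GET_SDO()        (0u)

/* Clock one byte out on SDI, MSB first. */
static void serial_write_byte(uint8_t value)
{
    for (int bit = 7; bit >= 0; bit--) {
        SET_SDI((value >> bit) & 1u);
        SET_SCLK(1);
        SET_SCLK(0);
    }
}

/* Clock one byte in from SDO, MSB first. */
static uint8_t serial_read_byte(void)
{
    uint8_t value = 0;
    for (int bit = 7; bit >= 0; bit--) {
        SET_SCLK(1);
        value = (uint8_t)((value << 1) | (GET_SDO() & 1u));
        SET_SCLK(0);
    }
    return value;
}

/* Illustrative conversion read: send a (hypothetical) read command and
   collect a 24-bit result word containing the 16-bit conversion. */
static uint32_t read_conversion(uint8_t read_command)
{
    uint32_t result = 0;
    serial_write_byte(read_command);
    for (int i = 0; i < 3; i++)
        result = (result << 8) | serial_read_byte();
    return result;
}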
System configuration
Figure 1 shows a schematic of a general configuration of a
Controller Module and a number of Sensor Modules. Typically the sensor modules are located at a variable but relatively
small distance from the controller module and are connected by a
cable carrying the control signals and power. The individual
sensors are either part of the sensor module or are connected
to it by a cable. The controller module is connected to the outside world through a CAN field bus. With the
transfer rate of 125 kbit/s foreseen in this application, the
CAN bus can extend several hundred meters and can connect
up to about 64 controller modules, thus providing the
required distributed control capability. Figure 2 shows a possible configuration of a local control station as part of the DCS
of the ATLAS detector. It is a hierarchical system: the controller modules monitor the sensors, the Bus-to-CAN interfaces
may possess some intelligence and monitor the controller modules on 'their' CAN bus, and the host controller
monitors the various CAN networks through the system-bus
interfaces and may provide a local (graphical) user interface.
The whole system is remotely monitored and controlled by the
central DCS.

Figure 2. Sensor readout configuration for ATLAS DCS.
B-field sensors
The B-field Sensor Module [9] consists of six Hall elements, a
reference element, a temperature sensor, the CS5525 ADC,
a multiplexer and some additional electronics. There are a total
of eight analogue quantities to measure: six B-field values, a
reference resistor for calibration and an NTC temperature sensor. The Hall probes are calibrated regularly for a full scale of
1.4 Tesla with a resolution of 50 microtesla. The maximum
scan frequency is in the order of 10 Hz and is determined
mainly by the time needed for the analogue signal to stabilise
after switching the analogue multiplexer. The components of
the sensor module do not contain materials that disturb the
magnetic field.
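A sensor scan on such a module could be organized as in the sketch below: select an analogue input through the multiplexer, wait for the signal to settle, then start a conversion. The function names and the settling delay are illustrative assumptions, not the actual DATCHA firmware.

#include <stdint.h>

#define NUM_QUANTITIES 8   /* six B-field values, reference, NTC temperature */

/* Hypothetical helpers assumed to exist in the node firmware. */
extern void     select_mux_channel(uint8_t channel); /* drive ADC latch pins  */
extern void     delay_ms(uint16_t ms);               /* busy-wait delay       */
extern uint16_t adc_convert(void);                   /* one CS5525 conversion */
extern void     can_send_value(uint8_t channel, uint16_t value);

/* Scan all analogue inputs of one B-field sensor module. The settling
   delay after switching the multiplexer is what limits the overall scan
   rate to the order of 10 Hz. */
void scan_bfield_module(void)
{
    for (uint8_t ch = 0; ch < NUM_QUANTITIES; ch++) {
        select_mux_channel(ch);
        delay_ms(10);                 /* allow the analogue signal to settle */
        can_send_value(ch, adc_convert());
    }
}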
Temperature sensors
The temperature sensor module [10] consists of a small box
containing the ADC, the multiplexer, reference resistors for the
calibration inputs, some additional electronics and 30 connectors for the NTC temperature sensors, which can be spatially distributed around the module (distances between the NTCs
and the box are typically up to several meters). There are a total
of 32 analogue quantities to measure. The input voltage to the
ADC on the module is a non-linear function of the temperature of the
NTC sensor, so a look-up table is required to convert readings
to temperatures.
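A minimal sketch of such a conversion is shown below, using a small illustrative table and linear interpolation between entries; the table values depend on the NTC type and the conditioning electronics and are placeholders here, not calibration data.

#include <stdint.h>

/* Illustrative look-up table: temperature in units of 0.1 degC for
   equidistant ADC codes. Placeholder values only. */
#define TABLE_STEP  4096u             /* ADC codes between table entries */
static const int16_t temp_table[] = { /* 0.1 degC per unit               */
    -300, -150, 0, 140, 290, 430, 580, 720, 850
};
#define TABLE_SIZE (sizeof(temp_table) / sizeof(temp_table[0]))

/* Convert a raw 16-bit ADC code to temperature (0.1 degC) by table
   look-up with linear interpolation between neighbouring entries. */
int16_t adc_to_temperature(uint16_t code)
{
    uint16_t index = code / TABLE_STEP;
    uint16_t frac  = code % TABLE_STEP;

    if (index >= TABLE_SIZE - 1)      /* clamp to the last table entry */
        return temp_table[TABLE_SIZE - 1];

    int32_t t0 = temp_table[index];
    int32_t t1 = temp_table[index + 1];
    return (int16_t)(t0 + ((t1 - t0) * (int32_t)frac) / (int32_t)TABLE_STEP);
}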
SOFTWARE ASPECTS
Higher-level standards for communication across the CAN-bus
are available from several suppliers in the market. Examples
of these communication standards are the Smart Distributed System (SDS) from Honeywell, DeviceNet from Allen-Bradley and
CAL/CANopen from the CAN users and manufacturers association CAN in Automation. Nevertheless, it was decided to develop our own simple protocol in order to get started quickly.
It was also assumed that the overhead of a high-level standard
would be too costly for the relatively simple applications we
planned. Moreover, the high-level standard we preferred, the
non-proprietary CANopen protocol, was not available at the
time we started developing our first CAN-bus applications. In fact, it is seen as one of the advantages of the CAN-bus that it is so easy to implement one's own application-specific protocol and quickly get to a working system. All
software for the CAN nodes has been written in the C language.
The software is designed in a device-oriented manner: a
CAN module can be viewed as a collection of devices, each
device being of a certain device type (class), capable of handling certain commands (processing certain received messages) and of sending certain messages or replies. In line with this
object-oriented approach, commands can be given to
any or to several device types, although the exact action performed by a device depends on its device type. For example, almost any
device can handle a status request, although the format of the
status returned depends on the type of the device (a sketch of
this dispatch structure is given below).
Two types of devices can be distinguished in our applications:
1. Simple or standard devices mapping directly to the 80C592's I/O devices, like the ADC or the 8-bit I/O ports, or mapping to the hardware layout, like the hex switches, which are mapped to I/O port 0 (via a buffer with enable input), or the on-chip watchdog.
2. Specialized devices that combine certain 80C592 on-chip I/O devices with specific hardware to form the functionality of a more complex kind of device, e.g. a high-voltage supply (with a ramping function, a trip-monitor function and a calibration function) or a JTAG interface with Boundary Scan Test (BST) functionality.
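The sketch below illustrates this device-oriented structure; the device-type identifiers, command codes and handler signatures are illustrative assumptions, not the actual NIKHEF protocol.

#include <stdint.h>

/* Illustrative command and device-type codes (not the actual protocol). */
enum command     { CMD_STATUS, CMD_READ, CMD_WRITE };
enum device_type { DEV_NODE, DEV_ADC, DEV_DIG_IO, DEV_HV };

/* Every device type supplies a handler; the reply format it produces
   depends on the device type, but all types understand CMD_STATUS. */
typedef void (*cmd_handler)(uint8_t device_instance, enum command cmd,
                            const uint8_t *data, uint8_t len);

struct device {
    enum device_type type;
    uint8_t          instance;   /* e.g. one of the 4 DIG_IO ports */
    cmd_handler      handle;
};

extern void node_handler(uint8_t, enum command, const uint8_t *, uint8_t);
extern void adc_handler(uint8_t, enum command, const uint8_t *, uint8_t);
extern void digio_handler(uint8_t, enum command, const uint8_t *, uint8_t);

/* Device table of one (hypothetical) node configuration. */
static const struct device devices[] = {
    { DEV_NODE,   0, node_handler  },
    { DEV_ADC,    0, adc_handler   },
    { DEV_DIG_IO, 0, digio_handler },
    { DEV_DIG_IO, 1, digio_handler },
};

/* Dispatch a received CAN message to the addressed device. */
void dispatch(uint8_t device_index, enum command cmd,
              const uint8_t *data, uint8_t len)
{
    if (device_index < sizeof(devices) / sizeof(devices[0])) {
        const struct device *d = &devices[device_index];
        d->handle(d->instance, cmd, data, len);
    }
}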
Table 1 lists examples of devices that have been defined
for our applications. The Number column indicates the quantity of a certain device type present in the node. This number
can vary according to the requirements of the application.

Device    Number   Description
NODE      1        General status of the node
CAN       1        CAN controller/interface
RS232     1        Serial I/O port
ADC       1        10-bit internal ADC
PWM       1        Pulse-width modulator output
WD        1        Watchdog timer
PAR_IO    1        8-bit parallel I/O port
DIG_IO    4        1-bit I/O port
HEX       1        On-board hex switches
HV        4        High-voltage supply controller
BST       1        JTAG port with BST functions
CS5525    8        16-bit CRYSTAL ADC

Table 1. Software devices defined for our applications.
Besides this notion of devices, the functionality of the
software includes network management capabilities, like remote reset of nodes, remote configuration of CAN-controller
baud rate, connecting / disconnecting individual nodes and
node guarding. The values of devices (e.g. a temperature value) can either be monitored periodically by the network host
or can be reported asynchronously by individual CAN nodes
when the value exceeds a preset minimum/maximum value.
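The sketch below shows how such asynchronous reporting could be implemented in a node: a value is sent on the CAN bus only when it leaves a preset window. The function and structure names are illustrative assumptions.

#include <stdint.h>

/* Hypothetical firmware helpers. */
extern uint16_t read_device_value(uint8_t device_index);
extern void     can_report_value(uint8_t device_index, uint16_t value);

struct limit_check {
    uint16_t min;        /* preset minimum                 */
    uint16_t max;        /* preset maximum                 */
    uint8_t  reported;   /* avoid repeating the same alarm */
};

/* Called periodically for each monitored device: report the value
   asynchronously only when it moves outside the preset window. */
void check_and_report(uint8_t device_index, struct limit_check *lc)
{
    uint16_t value = read_device_value(device_index);
    uint8_t  out_of_range = (value < lc->min) || (value > lc->max);

    if (out_of_range && !lc->reported) {
        can_report_value(device_index, value);  /* asynchronous report */
        lc->reported = 1;
    } else if (!out_of_range) {
        lc->reported = 0;                       /* re-arm when back in range */
    }
}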
FUTURE DIRECTIONS
We expect CAN field bus networks to be implemented on a large
scale in those parts of future high-energy physics particle
detectors where radiation levels are below a certain level
(which has not been quantified yet). This excludes the inner parts of
the detector, probably includes the outer parts, such as the
muon detector, and certainly includes the large number of
crates and racks in the counting rooms.
In the developments so far we have used a commercially available CAN hardware building block based on a type of micro
controller we were familiar with, offering ample memory and
I/O to ease and speed up the development of both hardware and
software. Keeping in mind the large number of sensors required in the coming high-energy physics experiments, in future developments we aim to minimize the hardware in terms
of cost, power dissipation and space requirements, without
completely losing the flexibility we now have. The
emphasis will be put on the development of hardware for true
general-purpose control and monitoring tasks, not so much
for special-purpose tasks like the Detector Control Card
described in this paper, whose main task is to configure hardware on behalf of the data-acquisition system.
Conformance to an open high-level standard for the CAN
communication protocol used in the CAN application software
will certainly be beneficial, especially as the number and
size of the applications grow and as the number of
people involved in the project grows. Moreover, the possibilities for integrating commercially available CAN equipment, supervisory tools and other (commercial) utilities will be greatly increased when using an open standard.
The CANopen communication and device protocol [11] defined by the CAN in Automation users and manufacturers
group (CiA) is a likely candidate for the communication layer standard. CANopen offers, if needed, complete configurability of device and communication parameters, while at
the same time allowing very simple, minimal-capability devices on the same network, because the mandatory minimum requirements for a device to function in a CANopen network are
few. The CANopen standard is capable of mixing
real-time data with non-real-time data on the same network without
compromising the real-time behavior.
REFERENCES
[1] ATLAS Muon Spectrometer, Technical Design Report
(chapter 11), CERN/LHCC-97-22, 5 June 1997
[2] Recommendations for the use of field buses at CERN,
Guy Baribaud et al, CERN-ECP/96-11
[3] Road Vehicles – Interchange of Digital Information –
Controller Area Network (CAN) for high-speed communication, Document ISO/DIS 11898, International Organization for Standardization, 1993
[4] CV002 VME, MicroSys Electronics GmbH
[5] 20CN592, 80C592-based micro module with an on-board
CAN bus controller, MicroKey B.V., 1996
[6] Boundary Scan Test, H.Bleeker et al, Kluwer Academic
Publisher, 1993
[7] CS5525/CS5526 16-bit/20-bit multi-range ADC with 4-bit latch, data sheets, Crystal Semiconductor Corporation,
Sept 1996
[8] Interfacing the CS5526 to the 80C51 Micro controller,
AN74, Crystal Semiconductor Corporation, Sept 1996.
[9] 3D Magneetveld meter, J.T. van Es, NIKHEF-ETR 97-05
[10] NTC-Temp.sensor Scanner and Conditioner. J.T. van Es,
NIKHEF-ETR-97-03
[11] CANopen, CAL-based Communication Profile For Industrial Systems, CiA, DS 301 Version 3.0, October 1996