
JOINT ADVANCED STRIKE TECHNOLOGY PROGRAM

AVIONICS ARCHITECTURE DEFINITION APPENDICES


VERSION 1.0
August 9, 1994
Lt Col Chuck Pinney, JAST Avionics Lead
1215 Jefferson Davis Highway, Suite 800
Arlington, VA 22202
phone: 703-602-7390 ext. 6625
e-mail: peojast@tecnet1.jcte.jcs.mil

Technical Contacts:

Reed Morgan
Wright Laboratory Avionics Directorate
WPAFB, OH 45433
513-255-4709
morgandr@aa.wpafb.af.mil

Ralph Lachenmaier
Code 505B, Naval Air Warfare Center
Warminster, PA 18974
215-441-1634
lachenmaier@nadc.navy.mil


Appendix A
The IEEE Scalable Coherent Interface - An Approach For A Unified Avionics Network
(An Avionics White Paper)

A.1 Summary

The U.S. Navy Next Generation Computer Resources (NGCR) High Speed Data Transfer Network (HSDTN) program has chosen the Institute of Electrical and Electronics Engineers (IEEE) Scalable Coherent Interface (SCI) as one of its baseline standards. This paper proposes to use SCI as a unified avionics network and describes SCI and extensions to it, particularly an extension known as SCI/Real Time (SCI/RT). Because SCI can be used in a serial configuration, such a network provides an alternative to the need for ever denser and ever more reliable backplane connectors by reducing the number and size of interconnects and, hence, the need for large numbers of pins. In addition, SCI reduces packaging problems by using a small amount of board real estate and by using distance-insensitive links which can extend board to board or box to box, thus facilitating a distributed backplane approach.

SCI is currently being applied to both ring and switch based networks, to both parallel and serial implementations, to both message passing and shared memory computing paradigms, and to both electrical and optical physical layers. SCI/RT is a set of proposed enhancements, developed initially by the Canadian Navy and now being evaluated by the NGCR HSDTN and IEEE working groups, to make SCI more fault tolerant and to provide the determinism and priorities necessary to support Rate Monotonic Scheduling. Other enhancements have also been proposed for SCI/RT, including fault tolerant board designs, reduced latency, and board pin assignments for commercial and military boards and modules. Addition of these features will allow SCI to perform, in a unified and seamless manner, the functions of command and control interconnect, data flow network, and sensor/video network. As an added benefit, SCI has potential for interconnecting multiple processing chips on the same board. Because SCI development has taken place in an open IEEE forum, it is cross-national, with implementation efforts by U.S., French, and Norwegian companies.

A.2 Introduction

Current avionics architectures use a number of different digital interconnects for a number of different avionics applications. Figure A.2-1 shows a typical design for a current system with the various interconnects conspicuously labeled. These interconnects were designed in the mid to late 1980s; with the rapid advances taking place in digital electronics, most of them are now a generation or two old. Indeed, new commercial interconnects are being developed with an order of magnitude higher speed than the networks shown in Figure A.2-1, although they have not been tested in tactical aircraft. The speed and flexibility of these new interconnects opens the opportunity to reduce avionics costs by allowing a single network to replace most or all of the current interconnects shown.

[Figure A.2-1 residue: the diagram shows apertures and sensor front ends (radar, EW/ESM, CNI, EO FLIR/IRST, missile warning, RF arrays, acoustics) feeding switched sensor and video networks; signal and data processors in integrated racks with PI bus, TM bus, and data network backplanes; optical interconnects between racks; an SAE high speed data bus and avionics bus; and parallel and serial buses to displays, cockpit indicators, aircraft systems controls, mass memory, electrical power system, flight control system, inertial sensors, air data system, and recorders. Digital/analog modules sit on integrated backplanes.]
Figure A.2-1. Example of a Current Modular Architecture

A.3 The Need For A Unified Avionics Network

As stated above, technology has progressed to the point that a single unified interconnect has become feasible. It is not only feasible but also desirable, for the following reasons.

First, using multiple interconnects on a module takes excessive board real estate and may require more pins than would be necessary with a single interconnect. Special logic is needed to decide which interconnect to use for which purpose. Reducing the number of pins and the amount of logic can also increase system reliability; for example, if a system has 400 pins per connector and 200 modules, the total number of pins where a failure could occur is 80,000.

Second, passing data from one interconnect to another requires special interface modules called bridges or gateways. Because of dissimilarities between the interconnects, data passing through a bridge is often delayed, which increases latency and lowers performance for the processors involved. Using a single interconnect across the avionics system can eliminate bridges and support "wormhole routing", improving avionics system performance.

Third, buses require very elaborate backplanes with multiple power and ground layers to control the high frequency ground bounce and noise which afflicts single-ended bused interconnects. Because of the very high frequencies at which new semiconductors switch, very high frequency noise is produced, and since most high frequency current is carried on the surface of conductors (the skin effect), a large number of grounding and power surfaces is necessary in a single-ended, tapped backplane. The many ground and power planes cause backplanes to become very thick (1/4 inch or more) and heavy (on the order of several pounds). Point-to-point links, either differential electrical or optical, can result in much simpler and lighter backplanes.

Evolving technology places additional requirements on the interconnect which cannot be met by current interconnects. For example, recent developments in parallel processing hold promise for making avionics systems much more capable and for making software simpler and more scalable in very high performance applications. Studies such as the Westinghouse "Parallel Processor Applications Study" (1989), done for PAVE PACE, have shown that to fully exploit future sensors, processing requirements will have to reach many hundreds of BOPS. Thus the interconnect must be capable of supporting hundreds of processors. High performance avionics applications which might benefit from parallel processing include fused sensor tracking, automatic target recognition, and sensor signal processing. Currently, software in most parallel processing avionics applications is custom coded to make efficient use of the processors, which makes the software non-transportable and difficult to scale to larger or smaller processor configurations as computing needs change. By using a cache coherent computing paradigm, software can be made simpler and more scalable. Therefore, the interconnect must be able to support cache coherent shared memory architectures.

An interconnect which supports shared memory must also have very low latency. Although a processor in a cached shared memory system waits for a new cache line only a small percentage of the time, a high latency interconnect can make this wait significant. With processors expected to reach 1000 MHz by the year 2000, every nanosecond of latency means a lost processor cycle, and if a large number of parallel processors are waiting for memory access, the lost cycles add up quickly. Presently it is not uncommon for large parallel systems to achieve processor utilization of less than 20%. This must be improved upon if parallel processors are to become feasible.
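To make the latency argument concrete, here is a minimal back-of-envelope sketch. The 1000 MHz clock and the 25 to 100 nanosecond latency range come from this paper; the cache miss rate and reference rate are illustrative assumptions, not figures from the text.

```python
# Rough utilization model for a cached shared-memory processor that stalls
# on every cache miss. Miss rate and reference rate are assumed values.
CLOCK_HZ = 1.0e9        # 1000 MHz processor forecast cited above
MISS_RATE = 0.02        # assumed fraction of references that miss the cache
REFS_PER_CYCLE = 1.0    # assumed memory references issued per cycle

def utilization(latency_ns: float) -> float:
    """Fraction of cycles spent computing rather than waiting on the interconnect."""
    stall_cycles = latency_ns * CLOCK_HZ / 1e9   # cycles lost per miss
    return 1.0 / (1.0 + MISS_RATE * REFS_PER_CYCLE * stall_cycles)

for latency in (25, 100, 500):   # ns; 25-100 ns is the SCI node latency range
    print(f"{latency:4d} ns -> {utilization(latency):5.0%} utilization")
```

Under these assumptions, utilization falls from about 67% at 25 ns to 33% at 100 ns and below 10% at 500 ns, consistent with the sub-20% utilization commonly reported for large parallel systems.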

Networks used in military systems must meet the special military needs for harsh environments, fault tolerance, real time usage, reliability, maintainability, and testability. The current networks shown in Figure A.2-1 meet most of these requirements, and a unified network will have to meet them as well. One additional requirement for a unified network, however, is that it be able to mix command and control messages with streaming data while ensuring that the command and control messages are not blocked by the streaming data. This requires either a schedulable network using a technique such as Rate Monotonic Scheduling, or a very fast but lightly loaded network so that messages have little impact on one another.

A unified avionics network must be flexible enough to be used in different applications. Since some interconnects are less than an inch long and some are many yards long, the unified network must be relatively distance insensitive. Since some interconnects, such as shared memory interconnects, require multi-Gbit/sec bandwidth to keep up with emerging processor technology, while others, such as video networks, run at less than a Gbit/sec, the network should efficiently support a variety of bandwidths and be upwardly scalable. With serial bandwidth presently limited to one or two Gbits/sec, parallel as well as serial versions of the interconnect are needed, at least for the present. Finally, the interconnect should support both electrical and optical implementations, since optical is needed for long distances and for EMI control, whereas electrical implementations are smaller, cheaper, and sufficient for most "inside the rack" applications.

Another consideration is that new military systems want to leverage commercial technology. This is an important consideration in the U.S. Navy's NGCR program. Leveraging commercial technology will provide lower costs and an upward migration path, as long as popular standards are selected. However, the downside of commercial technology is that popular items are not necessarily based on the best underlying technology, but on other factors such as marketing. Thus commercial technology cannot be used blindly, and enhancements are often needed in applying commercial technology to military systems.

A.4 Proposed Avionics Architecture Using a Unified Network

In a unified avionics network, one interconnect will be used for the sensor network, video network, inter-rack network, command and control network (both backplane and inter-rack), data flow network, and test and maintenance network. It must have the performance, flexibility, and scalability to be used in these different applications. A unified avionics network will lessen or eliminate the need for interface modules and logic between the different interconnects. A reduction in the number of unique interconnects will reduce cost and weight, improve performance, and provide a path to parallel processing.

The IEEE SCI, IEEE Std. 1596-1992, is a good candidate to form the basis for such a unified network. SCI is a high performance, flexible, and scalable network. It allows up to 8 Gbits/sec on each link, and since it is point-to-point it allows multiple simultaneous 8 Gbit/sec conversations. In addition, node latencies are relatively low (on the order of 25 to 100 nanoseconds). Point-to-point links can be configured as a centralized switched network, distributed switch, ring-based network, mesh, butterfly, hypercube, and more, all of which can be intermixed in the same system. The physical media can be electrical or optical, serial or parallel, or intermixed. Its relative distance insensitivity allows its use as a peripheral interconnect to mass storage, sensors, and displays, in addition to interconnecting parallel processors. SCI supports both message passing and cache coherent shared memory computing paradigms. It supports up to 64K nodes, and since simultaneous conversations are possible, the bandwidth scales with the number of processors. Figure A.4-1 shows SCI/RT used as a unified network replacement for the current interconnects shown in Figure A.2-1.
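The bandwidth-scaling claim reduces to a one-line calculation, sketched below. Only the 8 Gbit/sec per-link rate comes from the text; the assumption that every node can converse simultaneously holds for a non-blocking switched fabric.

```python
LINK_GBPS = 8  # per-link rate of the parallel SCI variant quoted above

def switched_aggregate_gbps(nodes: int) -> int:
    """In a non-blocking switched fabric every node can converse at once,
    so aggregate bandwidth grows linearly with node count."""
    return nodes * LINK_GBPS

for n in (8, 64, 1024):
    print(f"{n:5d} nodes -> {switched_aggregate_gbps(n):6d} Gbit/s aggregate")
```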

[Figure A.4-1 residue: the same architecture as Figure A.2-1, with the sensor network, inter-rack network, and video network replaced by SCI/RT rings and distributed or centralized switched networks; the SAE High Speed Data Bus, the avionics bus, and the parallel and serial bus connections to displays, cockpit indicators, and aircraft systems remain.]
Figure A.4-1 Proposed Avionics Architecture Using SCI/RT As A Unified Network

A.5 Description of Baseline SCI

SCI is an interconnect system designed for both backplane and local area network usage. It was designed for high performance commercial computer systems and has been adopted by a number of commercial computer manufacturers. Commercial interface chips are available, with additional research taking place to improve performance and capability and to establish long term viability. In its basic form it is a system of rings and switches. It is intended for very high performance parallel processing, from small-scale to massively parallel. Rings and switches were selected as the basic communication medium because they require only point-to-point links rather than multi-drop, T-tapped bus lines. Point-to-point links provide inherently cleaner signals and hence can run at higher speeds and lower voltages than multi-drop bused interconnects. In addition, switches provide multiple simultaneous conversations among boards, a necessity for highly parallel systems. SCI rings, since they are insertion rings with bypass buffers, also allow multiple simultaneous conversations, depending on the configuration of senders and receivers within the ring data flow. Because two party rings degrade into simple full duplex data links (one input and one output to/from each node), SCI has been able to define interface protocols which are applicable to both rings and switches. With its support for both rings and switches, SCI is applicable to centralized switch based parallel systems such as the BBN Butterfly machine, as well as to distributed switch systems such as the mesh based architecture of the Intel Touchstone. It can obviously also support hybrids of the two. Figures A.5-1 through A.5-5 illustrate some of the topologies which SCI supports.
[Figure residue: topology diagrams built from processor/memory modules, rings, and centralized switches.]

Figure A.5-1 Basic Ring
Figure A.5-2 Central Switch -- Two Party Rings
Figure A.5-3 Rings Interconnected by a Switch
Figure A.5-4 Distributed Switch, Toroidal Mesh -- Fault Tolerant
Figure A.5-5 Distributed Switch, Wrapped Butterfly -- Fault Tolerant


As its name implies, SCI is designed for use in tightly coupled cache coherent shared memory systems, although it can also support message passing and block transfers. It uses a directory based cache coherency protocol because of the inherent scalability of that scheme. It conforms to the IEEE 1212 Control and Status Register (CSR) standard and supports the shared memory locks specified therein.

Two physical variants of SCI are presently defined: (1) a 16 bit wide (plus a clock line and a flag line) parallel version running at 8 Gbps on each point-to-point link; and (2) a serial version, which may be either electrical or fiber optic, running at 1 Gbps. Because the SCI protocol requires no reverse direction handshakes between boards, links can extend for long distances: electrical links to about 30 meters and fiber optic links to several kilometers. To improve signaling characteristics and control noise, the electrical versions of SCI use differential signaling. This has led to the development of the IEEE P1596.3 Low Voltage Differential Signaling standard, an additional option for SCI, which allows 0.4 volt signal swings, thereby reducing power. Current SCI parallel implementations are single chip solutions, including all driver circuitry. Serial implementations currently require the addition of serializer and deserializer chips, which makes that variant a three chip interface; however, it may be possible to combine the serializer chips and protocol chips, and semiconductor houses are investigating this option.

Data is transferred between nodes in SCI using transactions, usually consisting of request and response subactions. Each subaction consists of sending two packets: send and echo. Transfer of data can occur during request-send, response-send, or both. The SCI subactions are shown in Figure A.5-6. SCI uses small packets to transfer data: send packets are less than 300 bytes, and echo packets are only 4 bytes long. This allows SCI to use small buffers and queues. It also allows higher priority packets to be transmitted quickly, without waiting for a long packet to finish. The echo packet provides a ringlet-local acknowledgment that a send packet was received at the agent or destination node, allowing the ringlet-local send queue entry to be cleared.
[Figure A.5-6 residue: the requester (source) and responder (target) exchange request-send, request-echo, response-send, and response-echo packets.]
Figure A.5-6 SCI Subaction

The echo, however, does not indicate whether the data received was good or bad; this information is sent to the source node in the response-send packet. SCI also allows Move transactions, which consist only of a request subaction (no response subaction), and Event transactions, which have neither an echo nor a response.
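The flow of a complete read transaction can be sketched as below. This is a toy model to make the send/echo pairing concrete, not the IEEE 1596 packet format; the class and field names are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    kind: str       # request-send, request-echo, response-send, response-echo
    src: int
    dst: int
    payload: bytes = b""

def read_transaction(requester: int, responder: int, data: bytes):
    # Request subaction: a send packet answered by a small echo; the echo lets
    # the requester clear its ringlet-local send queue entry.
    yield Packet("request-send", requester, responder)
    yield Packet("request-echo", responder, requester)
    # Response subaction: the data and its good/bad status ride in response-send.
    yield Packet("response-send", responder, requester, data)
    yield Packet("response-echo", requester, responder)

for pkt in read_transaction(requester=1, responder=7, data=b"cache line"):
    print(f"{pkt.kind:14s} node {pkt.src} -> node {pkt.dst} ({len(pkt.payload)} bytes)")
```

A Move transaction would stop after the request subaction, and an Event transaction would consist of the request-send alone.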

Priority enforcement in SCI is done by comparing the priority of the node's next waiting packet with the node's estimate of the ring priority. The node's estimate of ring priority is gathered from the last send/echo transaction to traverse the ring and is distributed to all ringlet nodes in idle symbols. In addition, a blocked node can temporarily shut down other nodes' access to the ring in order to gain access itself. Unfortunately, SCI supports only four priority levels.

A small portion of the ringlet bandwidth is reserved for low priority packet "fairness" so that no node is indefinitely prevented from accessing the ring. This is done through "high-type" and "low-type" idle symbols. When idle symbols are created by stripping packets from the ring, a small percentage of low-type idles are always created. Since packets must always be appended to an idle, and since only low priority packets can be appended to low-type idles and a few low-type symbols are always present, some bandwidth is always reserved for low priority fairness. One node on each ring is designated the scrubber node, which performs housekeeping tasks such as deleting damaged packets, monitoring ringlet activity, returning node ID addressing errors, and maintaining certain flow control parameters and timeout counters.

Baseline SCI has a number of features that make implementing a fault tolerant system easier: (1) the use of differential signaling in the electrical variant provides good noise immunity; (2) the subdivision of the network into multiple ringlets provides a compartmentalized fault containment region (FCR), which allows good fault isolation and distributed fault recovery; (3) good fault traceability and hardware support for fault handling are inherently provided by the scrubber maintenance, trace bit, stomped packets, status codes, time-outs, and the CSRs; (4) the echo and response time-outs are also useful fault detection mechanisms, although the base standard needs to be augmented with an end-to-end "duplicate suppression" mechanism so that faulty packets can be resent without side effects; (5) the CSRs are useful in isolating errors within the system; (6) the use of a distributed recovery list in the cache coherency protocol provides resiliency to any single failure; and (7) SCI uses a 16 bit CRC polynomial for error checking, which detects all single and double bit errors (regardless of data block length), all errors on an odd number of bits, and all burst errors shorter than 16 bits; for longer burst errors, the probability that an error goes undetected is about 1/2^16.
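For readers who want to experiment with the error-detection claim, here is a bitwise CRC-16 in Python. The standard specifies a 16 bit polynomial; the CCITT polynomial (x^16 + x^12 + x^5 + 1) is used here for illustration and is an assumption, not a statement of the exact SCI polynomial.

```python
def crc16(data: bytes, poly: int = 0x1021, crc: int = 0xFFFF) -> int:
    """Bitwise CRC-16; 0x1021 is the CCITT polynomial x^16 + x^12 + x^5 + 1 (assumed)."""
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

# Any single-bit error changes the checksum, as the text asserts:
msg = bytearray(b"SCI send packet payload")
good = crc16(bytes(msg))
msg[3] ^= 0x01                  # flip one bit
assert crc16(bytes(msg)) != good
print(f"intact: {good:#06x}, corrupted: {crc16(bytes(msg)):#06x}")
```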

A.6 Description of SCI/RT and Other Proposed Enhancements

While baseline SCI is an established IEEE standard, development continues on additions and variants. Additional parallel link widths are under consideration, such as a narrower 8 bit version and a wider 32 bit version. Higher speed serial versions are being investigated, with a target of 8 Gbps, and a tree structured directory based cache coherence protocol is being developed. Additionally, an IEEE working group has been formed to enhance the SCI standard to make it more applicable to real time and fault tolerant applications such as are found in military and specialized commercial systems. The SCI/RT Working Group was started jointly by the Canadian Navy and the NGCR program. An initial draft of the SCI/RT standard was proposed by Edgewater Computer and the Canadian Navy and is available from the IEEE. The proposed enhancements are in the areas of improved determinism to better support real time scheduling, improved throughput and lower latency in real time applications, additional pinout and board specifications for military and other applications, and improved fault tolerance for mission critical applications. Security is also an area of concern for SCI/RT.

To improve determinism, the SCI/RT proposal would increase the number of priority levels, replace the input and output FIFOs with priority driven preemptive queues, and replace the single packet bypass FIFO with a multi-packet preemptive bypass queue. The increased number of priority levels allows Rate Monotonic Scheduling theory to be used to schedule the interconnect. Priority driven preemptive input and output queues allow higher priority data to proceed through the interconnect ahead of lower priority data. And a priority driven, multi-packet preemptive bypass queue allows higher priority packets to begin transmission rather than be blocked by lower priority packets from upstream neighbors on a ring.

SCI/RT proposed enhancements to improve performance include better flow control, a method of passing packets through a node immediately without waiting for address checking, and a method of virtual circuit switching to support streaming data. The improved flow control will manage ring access on a node group basis rather than for the ring as a whole. Passing packets through a node without waiting for address checking will lower the latency from the current range of 25 to 100 nanoseconds to a one cycle delay (2 nsec in a parallel implementation, and 1 bit or 1 nsec in a serial implementation). Virtual circuit switching will support sensor and video networks.

Although baseline SCI provides many features for implementing fault tolerance, additional fault tolerance tools are needed to meet the requirement that no single failure can take down an entire box. A number of proposals are before the SCI/RT Working Group, including: (1) a 32-bit CRC polynomial to provide additional error detection; (2) ringlet-local fault retry; (3) end-to-end fault retry that provides duplicate suppression for transactions that could have harmful side effects if retried; and (4) skip-a-node and dual node topologies to be specified as fault tolerant node configurations.

The basic ring structure of SCI and SCI/RT allows a number of flexible topologies and alternative approaches to fault tolerant architectures. The NGCR program is supporting three SCI node configurations: the basic single node, the dual node, and the skip-a-node configurations, shown in Figures A.6-1, A.6-2, and A.6-3 respectively. It is expected that the dual node configuration, which supports mesh, butterfly, redundant centralized switch, and distributed switch topologies, all in a fault tolerant manner, would be used in tactical aircraft. Figures A.5-4 and A.5-5 show dual node boards used in fault tolerant mesh and wrapped butterfly configurations. Figure A.6-4 shows either dual node or skip-a-node boards in a fault tolerant redundant centralized switch configuration. Figure A.6-5 shows skip-a-node boards in a ring configuration. Finally, Figure A.6-6 shows dual node boards in a fault tolerant counter-rotating ring configuration.
[Figure residue: node configurations built from SCI node chips, crossover, fanout, and multiplexer elements, plus a redundant pair of centralized switches serving processor/memory modules.]

Figure A.6-1 Basic Node Topology
Figure A.6-2 Dual Node Topology
Figure A.6-3 Skip-A-Node Board Configuration
Figure A.6-4 Redundant Central Switch
Figure A.6-5 Skip-A-Node Boards in a Ring Configuration -- Fault Tolerant
Figure A.6-6 Dual Node Boards in Counter-Rotating Ring Configuration -- Fault Tolerant
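The Rate Monotonic Scheduling cited in the SCI/RT discussion above comes with a simple sufficient schedulability test, the Liu and Layland utilization bound, sketched here. The message streams are invented for illustration; they are not figures from the SCI/RT draft.

```python
def rms_schedulable(streams) -> bool:
    """Liu & Layland sufficient test: total utilization <= n(2^(1/n) - 1).
    Each stream is a (transmit_time, period) pair in the same time unit."""
    n = len(streams)
    utilization = sum(c / t for c, t in streams)
    return utilization <= n * (2 ** (1 / n) - 1)

# Three hypothetical periodic message streams (transmit time, period) in microseconds
streams = [(1, 10), (2, 25), (5, 50)]
print(rms_schedulable(streams))   # True: U = 0.28, bound for n = 3 is ~0.780
```

A set of streams that passes this test can always be scheduled by assigning higher priority to shorter periods, which is why SCI/RT's larger number of priority levels matters.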


A.7 Other SCI Related Work

Other SCI-related work includes the previously discussed Low Voltage Differential Signaling (LVDS), RamLink, and an SCI-like chip-to-chip interconnect. RamLink, IEEE P1596.4, is a simplification of the SCI protocol for a high bandwidth interface to exchange data between a single master (the memory controller) and one or more DRAMs. The chip-to-chip interconnect is an effort to provide a simple on-chip network for connecting multiple processors. Like RamLink, it is a simplification of the SCI protocol, but it connects multiple masters instead of a single master with one or more slaves.

A.8 How SCI and SCI/RT Satisfy the Needs of a Unified Network

SCI and/or SCI/RT can replace the unique interconnects now used in avionics systems as described below.

The command and control bus in an avionics system needs to guarantee arrival of real-time command and control messages. This can be accomplished using a lightly loaded SCI or a schedulable SCI/RT interconnect.

The TM bus is used to test and troubleshoot modules throughout the system. It needs to provide alternate access to a module to resolve the ambiguity of whether the module or the interconnect failed when a module does not respond to a query. The extra interconnect in a dual node system could function as a test and maintenance interface.

The data flow network connects processors in a high performance, cache coherent, shared memory or message passing parallel processor architecture. SCI or SCI/RT can serve this application using moderate performance rings or high performance switches, either centralized or distributed.

The high speed data bus (HSDB) has lower performance requirements than the data flow network, and the cost of a single interconnect needs to be low enough to be cost effective in this application. In some cases, the HSDB functions could be mixed with other inter-box SCI interconnects.

The sensor/video network requires sending large amounts of data simultaneously between many pairs of nodes. An SCI-based centralized switch or virtual circuit router could accomplish this task.

Lastly, box-to-box interconnection requires a high performance, distance insensitive interconnect, which SCI or SCI/RT rings or switches can provide. The flexibility of SCI can provide a seamless interconnection from backplane to LAN, from electrical to optical, and from serial to parallel.

A.9 Packaging Ramifications of the Unified Network

Ramifications of using an SCI-based unified network include impacts on pin counts, types of cables, power/weight/volume/cost of the backplane, and board real estate.

A comparison of pin count and performance between a unified avionics network and a current avionics network shows that the unified network gives much greater performance per pin. The pin count of a current avionics system using the PI bus, data flow network, and dual TM buses is about 145 pins, and the throughput of this system is about 1 Gbps. A parallel implementation of dual SCI interconnects provides a total of 16 Gbps with approximately the same number of pins (counting the interconnect, power, and ground pins). A serial implementation of a dual SCI network provides 2 Gbps, twice the throughput of the current system, while using only 44 pins (all but four of which are power and ground). Fewer interconnect pins allow more I/O pins, which allows more flexibility in plugging in modules, since fewer unique slots will be needed. It is expected that connectors will still use all the pins they can hold, since there never seem to be enough pins for I/O.

Since SCI is flexible enough to provide parallel or serial implementations, the best implementation can be selected for a particular application without affecting the logical layer of the protocol. SCI can use either electrical cable or fiber optic cable to connect nodes; again, the choice can be tailored to the application without affecting the logical layer. SCI uses differential pairs in the electrical implementation to eliminate ground bounce noise. Coaxial cable appears necessary for Gbit/sec and faster links, although Autobahn has been able to tune differential stripline to run at 1.8 Gbps. In tests at the Naval Air Warfare Center, Aircraft Division, Warminster, 5 Gbits/sec appears to be the limit for coaxial cable runs of two to three feet, which may be the worst case in a large integrated rack application. Beyond that speed and distance, fiber should be used. Therefore, it may make sense to go electrical within a row of cards but use fiber between rows in the same box. Although coaxial contacts take more space, a single coax can replace the 18 contacts necessary for a parallel implementation. In the future it may be practical to use parallel optics, if small multiple-fiber contacts can be developed and produced cost effectively.

The power, weight, and volume of the backplane will all be affected by using SCI. The backplane will be simpler and lighter, since the need for multiple power and ground planes is lessened, and the SCI Working Group has completed work on a low power (LVDS) version of SCI, as previously described. In addition, with the serial implementation of SCI the number of unique backplane slots may decline, since more of the backplane can be used for I/O. Also, with a unified network there is no need for bridges between diverse interconnect systems.
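The pin-count comparison above reduces to a throughput-per-pin calculation, sketched below using only the figures quoted in this section.

```python
# (throughput in bit/s, pin count) for each configuration quoted above
configs = {
    "Current PI-bus + DFN + dual TM": (1e9, 145),
    "Dual parallel SCI":              (16e9, 145),
    "Dual serial SCI":                (2e9, 44),
}
for name, (bps, pins) in configs.items():
    print(f"{name:31s} {bps / pins / 1e6:6.1f} Mbit/s per pin")
```

On these numbers, the unified network delivers roughly 7 to 16 times the per-pin throughput of the current interconnect set.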


A.10 Conclusions

The performance, flexibility, and scalability of SCI allow it to be used in a number of avionics applications, such as module-to-module interconnects, box-to-box interconnects, and sensor/video interconnects, to form a unified avionics network. This would replace the current situation of having a different interconnect for each network. A unified avionics network will allow commonality within an aircraft and among aircraft, and it will improve maintainability and reduce spare parts costs. It will reduce the number of unique interconnects, which will save cost, lower weight, improve performance, and provide a path to parallel processing.

A.11 References

1. IEEE Std. 1596-1992, IEEE Standard Scalable Coherent Interface.
2. IEEE P1596.2, IEEE Standard for Cache Optimization for Large Numbers of Processors using the Scalable Coherent Interface (SCI), Draft 0.35, March 22, 1994.
3. IEEE P1596.3, IEEE Standard for Low Voltage Differential Signals for SCI, Draft 1.00, December 30, 1993.
4. IEEE P1596.4, IEEE Standard for High-Bandwidth Memory Interface Based on SCI Signaling Technology, Draft 1.00, December 13, 1994.
5. IEEE P1596.6, Scalable Coherent Interface for Real-Time Applications, Draft 0.12, October 2, 1992.
6. IEEE Std. 1212-1991, IEEE Standard Control and Status Register (CSR) Architecture for Microcomputer Buses.


Appendix B
PAVE PACE Description

B.1 Introduction

This appendix describes the design of an advanced, highly integrated avionics architecture for military aircraft in the post-2005 time-frame, along with the potential advantages of this design over current approaches. It emphasizes that the dramatic increase in avionics system costs will force avionics architects to extend the integration concepts previously applied to digital systems into the analog/sensor arena, where the dominant costs now lie. Further, new technologies in the areas of RF micro-circuitry and packaging, cooling, digital packaging, and opto-electronics will make this architecture feasible in the referenced time-frame. The architecture will significantly re-shape not only how digital systems are integrated, but will also make highly integrated RF sensor systems feasible and desirable. Findings include the following: integrated RF systems can reduce the weight, volume, and cost of today's most advanced designs by at least 50% and can be built from a small family of standard modules; even more dramatic improvements in digital processing speed and memory will be achieved; advanced liquid flow-through cooling and photonic backplanes will be required to build rugged, lightweight, high speed integrated processing systems; and both wire and opto-electronic bus-structured networks will yield to switched photonic networks for the transmission of both digital and sensor-based analog information. As a result of integration, fault tolerance and system reconfiguration can be achieved which will improve system availability. Further, situation awareness can be improved by the fusion of information. The penalties paid for these strides are increased software cost and complexity, and faster, more complex information networks.

B.2 Architecture Evolution

PAVE PACE is a thrust toward validating enhancements and extensions to recently emerging third generation architectures and hence can reasonably be described as a fourth generation architecture. Figure B.2-1 shows top-level diagrams of the architectures that have evolved over the last half-century. Independent (analog) avionics is the first generation, in which each functional area had separate, dedicated sensors, processors, and displays, and the interconnect medium was point-to-point wiring.

Federated avionics, the second generation, is typical of most military avionics flying today. Resource sharing occurs at the last link in the information chain, via the controls and displays, and a time-shared multiplex circuit "highway" is used. Several standard data processors often perform a variety of low-bandwidth functions such as navigation, weapon delivery, stores management, and flight control. The data processors are interconnected by time-division multiplex buses which range in data rate from 1 megabit/second (MIL-STD-1553) to 20 megabits/second (STANAG 3910). Low interconnect bandwidths and central control are possible because high speed signaling activity such as A/D conversion and signal processing occurs within the "black boxes", through interconnections within dedicated backplanes in each of the federated chains. Use of these networks has dramatically simplified physical integration and retrofit problems. Such avionics systems are achieving up to 10-12 hour MTBF reliability (field reports).

[Figure B.2-1 residue: four panels showing Independent Avionics (1940s-50s) with dedicated radar, NAV, comm, and mission chains; Federated Avionics (1960s-70s); Integrated Avionics (1980s-90s) with common integrated processors; and Advanced Integrated Avionics (post 2000) with common analog modules, common digital modules (supercomputers), and an ASDN linking radar, comm, and EW.]
Figure B.2-1 Architecture Evolution

This type of architecture was necessitated by the appearance of digital data processing on aircraft. The programmability and versatility of digital processing resulted in the interconnection of aircraft state sensors (e.g. speed, angle of attack) with avionic sensors which provide situation awareness (e.g. target range, terrain elevation) and with crew inputs. In contrast to analog avionics, data processing provided precise solutions over a large range of flight, weapon, and sensor conditions. Further, time-division multiplexing saved hundreds of pounds of wiring, even on a small fighter.

"Integrated avionics", which should really be called integrated digital avionics, makes up the third generation avionics architecture. It is typical of the PAVE PILLAR-type avionics developed in the 1980s and is discussed in Annex A. Note the more robust sharing of common controls and displays. The main feature of this architecture, however, is the use of a small family of modular, line-replaceable units, arranged in conveniently located racks, to accomplish virtually all the signal and data processing. Third generation systems also demonstrate how sensor functions such as EW and communications are integrated within their own "affinity grouping". The motivation behind this architecture was to simultaneously achieve several strides in avionics: the use of line replaceable modules, the elimination of the intermediate repair shop at the air base to reduce maintenance personnel, system reconfiguration down to the module level, cost reduction through the use of common, replicated modules, and the exploitation of blended data.

Network requirements have escalated as integration, time sharing, and circuit density have increased, with a photonic high speed data bus (50 megabits/second) used to interconnect processor racks and several photonic point-to-point sensor/rack interconnects (400 megabits/second per circuit) carrying the digital data streams from the sensors. Note how the increased centralization of the digital design has impacted the integrating network. The signal processing data streams previously confined to dedicated backplanes in second generation designs are now handled by a more complex backplane containing the routing for a 32-bit parallel network to accommodate the switching of multiple sensor data streams and digitized video between global memories and signal processors. These modules are arranged in clusters to accommodate various processing functions.

This architecture has many significant benefits over the second generation, including two-level maintenance, fault tolerance and reconfiguration capability to enhance system availability, and reduced acquisition and support costs through high-volume production. All these capabilities have been made possible by improved componentry and integration. Viewed more simply, however, this architecture was needed to integrate affordable programmable signal processors into real-time avionic networks. Just as data processing had replaced analog computing earlier, VLSI improvements in micro-circuitry were making powerful, programmable signal processors possible for aircraft. Extraordinarily complex and expensive signal and graphics processors began to appear, and unnecessary proliferation was occurring. These strides opened up a new information domain for exploitation by the system designer: target, terrain, and threat data from local sensors and stored data could be fused, and new capabilities in situation awareness, in which fused data is presented to the aircrew, became possible. A new, higher speed network was needed to "get at", fuse, and display this information. The concept of a small family of common signal processors thus emerged. The common data processors, connected to the same backplane as the signal processors, continued to do high level tasks as before, but now they also provide control for the signal processors. From a performance perspective, the tight coupling of signal and data computing assets allows new capabilities in data fusion to improve situation awareness. Weight and volume are also substantially reduced due to packaging advances. In return, a complex, hybrid network of photonic and electrical buses and switches is required.

Fourth generation architectures, appropriate for consideration in the post-2005 time-frame, take the next step as we move toward the skin of the aircraft by integrating sensors within both the RF and EO domains to achieve a modular, line-replaceable design. Fundamentally, the same integration philosophy used in third generation digital systems is at work, particularly for the RF function. The hardware identity of many radar, communications, and EW functions is lost; functionality is achieved through software.
RF apertures, which could be multifunctional in nature, are connected through an avionic system distribution switched network. Additionally, fourth generation avionics will need an improved interconnection network design to support the "supercomputer" class processing required by advances in areas such as automatic target recognition and emission control.

The key observations are these: the general trend in avionic architectures has been toward increased digitization of functions and increased sharing and modularization of functions; these integration and sharing concepts have migrated steadily toward the skin of the aircraft; functionality has increasingly been obtained through software; software complexity has increased whenever hardware architectural strides are made; and hardware sharing gains are purchased at the expense of increased network complexity and speed whenever a dedicated function is physically removed from its federated chain and placed at a higher system level.

B.3 Avionics Cost Escalation

The major challenge for any advanced form of integrated avionics is whether it can solve the cost "problem". Figure B.3-1 shows the steady increase of avionics hardware "flyaway" costs as a function of the year of introduction of the weapon system. Costs (mostly for second generation systems) of software development and weapons are not included, nor are support costs. Software development costs for a new weapon system amount to about 22-25% of the Research, Development, Test and Evaluation (RDT&E) costs for avionics (which might total around $3B); this is about the same percentage spent on RDT&E for the avionics hardware. Software support and retrofit costs over a 20-year life cycle will likely be about twice the development cost. The significance of the "software problem" lies with the number of aircraft built or retrofitted: if the number is small, the software burden, measured as a percentage of the total, is very large. The cost of fourth generation software will be higher, possibly by as much as one-third, but detailed estimates are not available yet.
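A back-of-envelope reading of those percentages follows; every figure is an approximation taken from the paragraph above.

```python
avionics_rdte = 3.0e9                      # total avionics RDT&E, ~$3B per the text
sw_dev_low = 0.22 * avionics_rdte          # software development, 22-25% of RDT&E
sw_dev_high = 0.25 * avionics_rdte
sw_support = 2 * sw_dev_low                # 20-year support/retrofit ~ 2x development
fourth_gen_dev = sw_dev_low * (1 + 1 / 3)  # possible one-third growth, per the text

print(f"software development:       ${sw_dev_low/1e9:.2f}B - ${sw_dev_high/1e9:.2f}B")
print(f"20-year support (low end):  ~${sw_support/1e9:.2f}B")
print(f"4th-gen development (low):  ~${fourth_gen_dev/1e9:.2f}B")
```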
[Figure B.3-1 residue: a plot of avionics as a percentage of weapon system cost (10%-40% axis) versus year of introduction (1960-2000), with points for the F-4, F-14, F-15A/C, F-16A/C, and F-18 and a "New Systems" trend.]

Figure B.3-1 Increase in Avionics Costs



It is obvious that the reason for the constant increase in avionics cost is that performance capability has steadily increased (the cost of digital processing has actually decreased on a per-function basis). It is the author's opinion that the second generation architecture allowed the importance of avionics to be recognized for the first time, and the cost trend reflects the added investment subsequently made in adding capability to the airframe. Without this form of integration, avionics costs would have escalated much further with time if the same level of capability were to be achieved.

B.3.1 Avionics Cost Savings Opportunities

Figure B.3.1-1 shows the distribution of flyaway costs across the parts of a (third generation) avionics system for a multi-role, advanced fighter that uses the latest state of the art in avionics technology. For each avionics category, the relative contribution to cost, weight, volume, electrical power, and reliability problems is shown in that order, reading from left to right. Note the dominant impact that sensors have in all categories of interest and the relatively small impact that data and signal processing has on the various parameters shown, in spite of a very robust processing requirement. The figure also clearly shows that digital integration has not been responsible for the cost escalation shown in Figure B.3-1. In fact, the real significance of the digital portion of the F-22 design is that its approach to modularity has kept costs down. The major cost components of fighter avionics lie with the sensors.
[Figure B.3.1-1 residue: a chart of avionics contributions by functional area (sensors, mission processing, PVI, VMS, SMS) to cost, weight, volume, power, and reliability, with sensors dominating each category.]

Figure B.3.1-1 Avionics Contributions to Cost by Functional Area



Processing cost has been driven down by several factors, including market forces that direct huge commercial investments toward continually cheaper and extremely capable VLSI micro-circuitry. Another reason for the low processing cost is the use of a small family of common, line replaceable modular units.

A new way of looking at avionics is required if further cost savings are to be achieved. Many avionic assets are very similar in their operation regardless of the function being performed. Until recently, technology allowed functional integration (e.g. time-sharing) only of controls/displays and processing. With the advent of millimeter-wave and microwave integrated circuits, which combine several previously discrete circuit elements on a chip, the RF section of the design, which makes up well over half the cost of the sensors, can now be integrated in much the same fashion as VLSI allowed for digital systems. Apertures can now be built with hundreds of small transmitter/receiver modules which provide partial amplification and beam steering of the received and transmitted signals. Functionally shared apertures are now possible. Aperture electronics, frequency conversion, and the primary receiver and transmitter sections of the design can also be viewed as modular, common assets, because functional packaging density has now improved. We will look more deeply into this design and its attributes to investigate the cost and weight savings potential.

Integration of the electro-optic sensor systems presents another significant opportunity for cost and weight savings. However, the significant integration challenges there lie in the areas of sharing windows, optics, and focal planes, not in the general sphere of interest for avionic architectures.

In looking toward the manner in which fourth generation systems will be built, we must acknowledge several observations. The "first principle" is that most assets, whether in the RF, signal/data processing, or control/display domains, do not have a 100% duty cycle requirement and can be time-shared. Some assets are needed only during certain mission phases, others only when specific events dictate their use. It is the time-sharing of assets across classical functional boundaries that accounts for the significant weight, volume, and cost savings made possible by integration. Second, asset sharing requires a more complex, sophisticated data distribution network to interconnect the sources and sinks of information. However, once this network has been established, the system designer can (relatively) easily achieve extremely important functions such as system-wide testing, fault isolation and reconfiguration, and information fusion for improved situation awareness. Performance, availability, support, cost, weight, and volume improvements result, in exchange for increased network and software complexity.

B.4 Fourth Generation Network Considerations

To support sensor-based signals and the associated signal processing functions, the networks must accommodate streaming data at very high rates (gigahertz-wide analog signals, and hundreds of megabits/second of digital data after conversion). Bus-oriented networks are of little use for this function, because the combination of high speed and the continuous nature of the signal stream over periods measured in minutes to hours would saturate a bus.

Some form of switched or frequency-division multiplexed network is needed, akin to a non-blocking telephone network. Further, the majority of the other sources and sinks of information in the avionic system, especially the controls/displays, mass memory, and digital processing rack-to-rack communications, require either large blocks of data or continuous streaming data. A switched network (e.g., a crossbar or multi-stage switch) can satisfy these requirements. Speed, latency, the number of sources and sinks in the network, and duty cycle must all be considered in designing the network.

Figure B.4-1 shows a simplified version of the F-22 network to allow comparison with an advanced switch approach. Figure B.4-2 shows an optically switched network that gives the streaming sensor data point-to-point connections to the processing clusters through a crossbar. Fault tolerance and resource sharing are improved. The electrical data flow network, with the speed limitations and complexity of electrical backplanes, has been replaced by a photonic backplane and optical switch. Significant cost, weight, and reliability improvements potentially result from the deletion of I/O modules and the need for fewer power supplies. Further, if this photonic circuit can replace the PI and HSDB networks, additional cost, weight, and reliability improvements are possible. The number of signal conversions (parallel to serial, serial to parallel) is reduced, as is the high cost of energy conversion (electrical to optical, optical to electrical). The electrical backplane can theoretically be substantially reduced in cost and improved in reliability by deleting over 10 layers and approximately 100 metal pins (replaced, however, by a smaller number of photonic connectors). Further, a more unified network would have fewer interfaces and hence simpler software. However, such an optical system has not yet been built and validated as actually achieving these cost, weight, and reliability improvements.
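A crossbar's non-blocking property is easy to state in code: an output port carries at most one stream, and any idle pair of ports can be connected regardless of other traffic. The sketch below is a minimal illustration with invented names, not a model of the VHSON switch.

```python
class Crossbar:
    """Minimal circuit-switched crossbar: each output port carries one stream."""
    def __init__(self, ports: int):
        self.ports = ports
        self.connected = {}          # output port -> input port

    def connect(self, src: int, dst: int) -> bool:
        if dst in self.connected:    # only contention for the same sink blocks
            return False
        self.connected[dst] = src
        return True

xbar = Crossbar(ports=32)
print(xbar.connect(0, 5))   # radar stream -> signal processor cluster: True
print(xbar.connect(1, 6))   # FLIR stream  -> another cluster, simultaneously: True
print(xbar.connect(2, 5))   # second stream into the same sink: False
```

This is exactly the property a bus lacks: on a bus, the second and third connections would contend with the first for the single shared medium.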

[Figure B.4-1 residue: sensors, displays, and common processors attached through interface modules to a parallel electrical data flow network (DFN), with serial interconnects to the sensors.]

Figure B.4-1 Representative PAVE PILLAR Network


[Figure B.4-2 residue: sensors, displays, and common processors connected directly to VHSON switches through VHSON interface circuits (VICs).]
Figure B.4-2 Optically Switched Network

The challenge is to build a system-wide, unified switched network that avoids the expensive burden of optical-to-electrical and electrical-to-optical energy conversions and the associated parallel/serial conversions at several interfaces. The goal is to use the electrical backplane only as a means of providing electrical power.

Finally, any further increase in avionic integration must address what breadth of the overall domain of airborne electronics should be included and how tightly coupled the various subsystems should be. Figure B.4-3 shows an overview of the PAVE PACE architectural design. The design philosophy (identical to the F-22's) is to couple safety-critical or flight-critical functions such as stores management and vehicle management only loosely to the avionics complex and to provide video and graphics processing support to the displays. Emphasis is placed on the further integration of the portion of the overall design containing the RF sensors and digital processing (and the associated networks), because they are the dominant contributors to cost, weight, volume, and reliability issues. The other subsystems shown in the figure should also benefit from these developments. The major integration challenge lies with time-sharing hardware as we move closer to the skin of the aircraft.


[Figure B.4-3 residue: apertures feed pre-processors and a data distribution network; the cockpit, vehicle management, advanced common signal processing, advanced data processing, and the stores interface are joined by a system interconnect.]
Figure B.4-3 Architecture Overview

B.5 PAVE PACE Requirements

PAVE PACE placed system design contracts with the Boeing Military Airplane Company, Seattle, Washington, and McDonnell-Douglas Aerospace Company, St Louis, Missouri. In addition, Harris Corporation of Melbourne, Florida, has investigated network requirements and developed designs for advanced networks. These efforts are providing avionic system requirements, technology forecasts, architectural-level designs, and LCC studies to assist in planning for downstream demonstrations. The design studies have produced the processing requirements for a 2005-era multi-role fighter shown below. The program objectives were to design a fourth generation architecture that reduces LCC, increases the availability and reliability of the avionics system, extends the common module concept into the sensor system, and maintains or exceeds the capabilities of third generation systems. A third generation design was used as a baseline. The key consideration was the relative benefit of upgrading this baseline with advanced technology versus applying the same technology to an advanced architecture.

B.5.1 Computing Requirements

For a fairly robust dual-mission (air/air and air/ground) aircraft, Table B.5.1-1 shows typical data and signal processing requirements for a fourth generation avionics system. The avionic areas shown are the pilot vehicle interface (cockpit) or PVI, vehicle management (flight and propulsion control) or VM, integrated core processing (data and signal processing) or ICP, and stores management or SM.


Area   Data Proc (MIPS)   Signal Proc (MFLOPS)   Program Memory (MBYTES)
PVI    6.5                1900                   --
VM     13.5               --                     2.1
ICP    175.0              17250                  840
SM     5.3                --                     --

Table B.5.1-1 Fourth Generation Processing Requirements
The values for Integrated Core Processor (ICP) signal processing show the largest growth relative to the third generation system (roughly 3-fold). The assumed avionics suite has a sophisticated multi-function radar (synthetic aperture mapping, terrain following, air-air modes, etc.), a FLIR, a robust electronic combat suite, and so on. Additional functions could have been assumed for the suite, such as automatic target recognition (several billions of operations per second) and adaptive side-lobe cancellation for radar (around 15,000 MFLOPS). It is expected that such performance capability will appear on some post-2005 aircraft, since the processing technology will "gracefully" allow it (if one forgets about the matter of software).

Figure B.5.1-1 shows a comparison between third and fourth generation modular computing capability, with a 90-slot rack as the baseline reference for the third generation hardware. Note the potential for a 10-to-1 decrease in module count. This capability results from a combination of decreased micro-circuit feature size (e.g. 0.3-0.5 micron), which permits high clock rates (200-300 megahertz), advanced hybrid wafer packaging, and liquid flow-through cooling.
ATF 90 Slot Rack (3 cu. ft.): 73 modules (79 slots), 400 MIPS, 2,350 MFLOPS, 3,600 MOPS, 15 power supply modules
PAVE PACE Equivalent: 7 modules (7 slots), 450 MIPS, 7,200 MFLOPS, 1 power supply module

Figure B.5.1-1 Comparison of Third/Fourth Generation Processing Systems
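The module counts and throughputs in Figure B.5.1-1 imply a large jump in per-module compute density. The short calculation below (an editorial illustration using only the figure's numbers; the variable names are the author's) makes the ratio explicit.

    # Per-module signal processing density implied by Figure B.5.1-1
    # (illustrative arithmetic on the figure's numbers only).
    third_gen_mflops_per_module = 2350 / 73    # ~32 MFLOPS per module
    fourth_gen_mflops_per_module = 7200 / 7    # ~1029 MFLOPS per module
    print(fourth_gen_mflops_per_module / third_gen_mflops_per_module)  # ~32x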


Historically, the arrival of advanced processing capability is soon followed by advanced sensors and algorithms that again challenge digital hardware designers to make even further strides. Assuming this trend continues, future digital designs will become input/output bound and new network designs will be needed. Before leaving this area, we need to remind ourselves that data and signal processing only account for about 20% of the cost "problem".

B.5.2 Advanced Network Requirements

Table B.5.2-1 shows a comparative summary of digital network requirements for both third and fourth generation architectures (Harris Corporation, Very High Speed Optical Network Study (VHSON)).

Requirement                    Third Generation    Fourth Generation
Network Size                   <32 connects        <256 connects
Data Rate - per path           400 Mb/s            2.5 Gb/s
Data Rate - aggregate          10+ Gb/s            50-100 Gb/s
Data Integrity - streaming     10E-7 BER           10E-7 BER
Data Integrity - packets       10E-10 BER          10E-10 BER
Packet Size                    Unlimited           Unlimited
Path Latency (microsec)        <100                <10
Link Control                   Self-Routing        Self-Routing
Blocking                       Limited             Non-Blocking
Packaging                      SEM-E               SEM-E

TABLE B.5.2-1 Digital Network Requirements

Based on the Harris and other PAVE PACE studies, which also looked at digital network requirements, the conclusion drawn was that an optically based, switched network is needed. The data rates shown, however, are based on what technology can provide over the next few years. It is important to recognize that the parameters in Table B.5.2-1 reflect an architecture dominated by data streaming in and out of sensors, mass memories, global bulk memories, signal processors, and displays. Sources and sinks (sensors, processing modules, mass memories, displays, etc.) are linked together with one or more photonic backplane-based switches. Remote sources and sinks (e.g., sensors) are now linked in the same manner as racks and modules inside racks, thereby reducing the number of optical/electrical and electrical/optical conversions, as well as serial/parallel and parallel/serial conversions, while meeting the requirements in Table B.5.2-1. Achieving connectivity for systems having more than around 32 connections (viz., the possible size of the switch) will require multistage switched networks. However, such a design is currently not available for operational use due to current limitations in technology. The VHSON interface circuits are laser-based transceiver MCMs and are needed by every source and sink in the network. A small laser that does not require thermoelectric cooling is needed, and it appears such technology will become available soon. Further work is also needed on photonic connectors and the switch itself.
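The requirements in Table B.5.2-1 can be sanity-checked with simple arithmetic. The sketch below (an editorial illustration, not part of the Harris study; the function names and the 4 KB packet size are the author's assumptions) estimates how many stages of 32-port switches a 256-connect network needs, what packet error rate the quoted BER implies, and how many paths must be active to reach the aggregate rate.

    import math

    # Stage count for a multistage network built from k-port switches
    # serving n endpoints: ceil(log_k(n)) stages.
    def stages(n_endpoints, switch_ports):
        return math.ceil(math.log(n_endpoints) / math.log(switch_ports))

    # Packet error rate implied by a raw bit error rate, assuming
    # independent bit errors: PER = 1 - (1 - BER)^bits_per_packet.
    def packet_error_rate(ber, bits_per_packet):
        return 1.0 - (1.0 - ber) ** bits_per_packet

    print(stages(256, 32))                     # 2 stages of 32-port switches
    print(packet_error_rate(1e-10, 8 * 4096))  # ~3.3e-6 for a 4 KB packet
    # Aggregate check: 50-100 Gb/s over 2.5 Gb/s paths implies 20-40
    # simultaneously active links.
    print(50e9 / 2.5e9, 100e9 / 2.5e9)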

B.6 Integrated Sensors

As stated earlier, a new way of looking at how sensors are built is required if significant reductions in the cost of avionics are to be effected. PAVE PACE studies point out that integration (sharing) of assets is the highest-leveraged approach to reducing costs. The RF domain is such that functional integration can play a dramatic role in reducing weight, volume, and cost through sharing of assets. Further, RF systems can make up about 75% (or more) of the sensor costs and potentially represent half the investment in avionics. The key to understanding what integration opportunities are available to the architect is shown in Figure B.6-1. This figure shows that for any RF process of interest to military aircraft (communications, EW, IFF, radio navigation, or radar), the basic processes are invariant.

[Figure B.6-1 depicts the PAVE PACE integrated RF system: apertures (active array, MASA, slot array, multi-turn loop) feeding receive frequency converters, receivers, and pre-processors over RF, IF, and baseband I&Q interconnects to a photonic exchange network and integrated core processing, with multi-function modulators and transmit frequency converters on the transmit side. Projected totals: cost $2.52M, reliability 410 hours, power 27.4 kW, weight 492 lbs, volume 8.57 cu. ft.]
Figure B.6-1 PAVE PACE Integrated RF System

PAVE PACE studies have shown that with the advent of monolithic micro-circuits and miniature filters, a small family of RF modules can be built which, if properly replicated and interconnected, can accomplish all RF functions. See Figure B.6-2 for a more detailed break-out of the design. The apertures, high power antenna amplifiers, beam forming networks, and RF interconnect (an RF crossbar switch) shown on the left of the figure would be aircraft-unique. The RF switch,

under the control of a resource manager in the digital processor complex (not shown), connects the appropriate aperture to either receive or transmit frequency converters. On the receive chain, the output of the frequency converter, centered at a common Intermediate Frequency (IF), is switched to the appropriate receiver type, where the information is extracted in the analog domain and then sent to pre-processors. The transmit chain is activated by control signals from the resource manager, which activate the multi-function modulators. From there, the signal is up-converted from the standard IF and then switched to the appropriate aperture.

[Figure B.6-2 depicts the Phase 1 integrated RF architecture: apertures and aperture electronics (VHF/UHF communication, IFF and GPS, 0.4-2 GHz, 2-18 GHz, and X-band ESA) feed Type A, B, and C (high data rate) receive frequency converters through an IF switch at 3780 MHz to channelized, wideband IFM, RF memory, multi-function superhet, and IFF/GPS receivers and their pulse/navigation and spread spectrum pre-processors, which connect to the photonic exchange network. On the transmit side, multi-function modulators drive Type D, E, and F (high data rate) transmit frequency converters (3780 MHz and 2300 MHz IFs) through an IF switch back to the apertures, with a reference/LO1 signal generator supporting both chains.]
Figure B.6-2 Integrated RF Architecture

The rectangular boxes to the right of the RF switch in Figure B.6-2 represent SEM-E sized modules (approximately 6 x 6 x 0.6 inches). Shown here are two IF signals, reflecting the possibility that 2300 MHz might be needed for the radar to reduce spurious noise problems. The issue of the need for two IF frequencies is still under investigation. This design is able to accommodate CNI, radar, and EW functions for a dual-role tactical fighter. The shaded areas show the modules that would be used for an EW signal search mode. Table B.6-1 shows a comparison between the type of RF system shown in Figure B.6-2 and a third generation architecture. Again, much work remains to validate the predicted performance of the integrated RF system and the cost, weight, and reliability projections.


               Fourth Gen. RF ISS    Third Gen. RF System
Cost*          $3.9M                 $7.9M
Weight         500 lbs               1245 lbs
Size           8 cu. ft.             16 cu. ft.
Reliability    225 hrs               142 hrs
* 1989 US dollars

TABLE B.6-1 Comparison of RF System Parameters

It is worth noting that about 35% of the improvement in these parameters came from the use of advanced microwave circuitry and packaging, while the balance came from integration, viz., the time-sharing of functional assets.

B.7 The PAVE PACE Architecture

Figure B.7-1 shows a top-level view of the fourth generation architecture. Note the five major partitions of the avionics and that both RF and EO sensors are integrated. In comparing this architecture with the emerging third generation, our analysis to date reveals a potential cost and weight reduction of about 50%, a volume reduction of over 60%, and a reliability increase of 30%.
[Figure B.7-1 depicts the PAVE PACE architecture partitions: pilot vehicle interfacing, integrated RF sensing, integrated EO sensing, integrated vehicle management, and integrated stores management, built around integrated core processing.]

Figure B.7-1 PAVE PACE Architecture


B.8 The Future of Integrated Avionic Systems

Starting in 1994, the US Air Force will begin the development of an integrated RF avionics system. The intention of the program is to demonstrate, in the laboratory, that the common elements of the system can be built, packaged properly, and operated responsively in a real-world, real-time environment. Further, building-block efforts for the advanced photonic and digital processing technologies are underway, in the hope of also demonstrating an integrated system in the 1998 time frame. It is important to realize that even more advanced designs than those shown for the integrated RF system are already being developed. These advances include conversion to digital signaling immediately after IF frequency conversion for the entire spectrum discussed here. Such an approach is feasible today for low-band communications signals. Further, the use of photonic RF signal distribution at the aperture is under development. Such strides would substantially reduce the number of integrated RF module types, further reduce weight, and enable dramatic performance strides due to reduced noise levels.

B.9 References

The following is a listing of currently available PAVE PACE documents. Each title is followed by the contractor, the report number, the AD number, and the date.

Architecture Specification for Pave Pillar Avionics, WL/AAAS-1, AFWAL-TR-87-1114, A188 722, Jan 87
PAVE PACE System Requirements Specification, Boeing, WL-TR-91-1043, B160 490, Dec 90
PAVE PACE System Requirements Specification, McDonnell-Douglas, WL-TR-91-1044, B175 029, Apr 91
PAVE PACE Final Report, Lockheed, WL-TR-92-1002, B166 136, Apr 92
PAVE PACE Final Report, McDonnell-Douglas, WL-TR-92-1003, B165 632, Apr 92
PAVE PACE System Segment Specification, Vol I, Boeing, WL-TR-93-1067, Apr 93
PAVE PACE System Segment Specification, Vol II, Boeing, WL-TR-93-1068, Apr 93
PAVE PACE System Segment Specification, Vol III, Boeing, WL-TR-93-1069, Apr 93
PAVE PACE System Segment Specification, McDonnell-Douglas, WL-TR-93-1071, May 91
PAVE PACE System Requirements Specifications, McDonnell-Douglas, WL-TR-93-1072, Sep 90
PAVE PACE Technology Roadmap & Integration Strategy, McDonnell-Douglas, WL-TR-93-1075, Nov 92
PAVE PACE Final Report, Boeing, WL-TR-94-1004, Bxxx xxx, Dec 93
PAVE PACE Final Report, McDonnell-Douglas, WL-TR-93-xxxx, Bxxx xxx
Advanced Aircraft Avionics Packaging Technology - Interconnect Testing/Advanced Cooling Technology Final Report, Lockheed, WRDC-TR-90-1140, B151 328, Dec 90

Advanced Aircraft Avionics Packaging Technology - Photonic Connector and Backplane Final Report, Harris, WL-TR-92-1064, B171 646, Aug 92
Advanced Aircraft Avionics Packaging Technology - Modular RF Packaging/WSI Technologies Final Report, Westinghouse, WL-TR-93-1112, B178 188, Oct 93


Appendix C Advanced Avionics Subsystems & Technology Program and Next Generation Computer Resources Program

C.1 Advanced Avionics Subsystems & Technology

The Navy efforts under this core 6.3A advanced technology demonstration effort were zeroed by Congress in the FY1994 budget and may or may not be continued in the future. This cut was part of a much larger cut in avionics programs across the three services. The following is a discussion of where the Navy Program Manager for this effort would plan to invest Navy money if the program is restored. It is prefaced by a discussion of the requirements and warfighting benefits expected from continuing this Navy work along with projects planned by the Air Force and others. Future plans include a mix of continued efforts (many started in FY93 are on-going) and planned new starts.

C.1.1 Requirements/Warfighting Payoff

Future operational requirements, and especially littoral warfare involving multiple forces consisting of a mix of low observable platforms, unmanned aircraft and vehicles, high density sophisticated electronic countermeasures, smart weapons, smart airframes, and information warfare, demand a paradigm shift in the entire spectrum of core and mission avionics as well as display technologies, man-machine interface (situational awareness), integration of on-board and off-board information, and use of data bases. The efforts in the Advanced Avionics Subsystems and Technology (AAST) project, along with efforts of other services, DoD, and industry programs, will form the basis for this new paradigm and for reducing the risk associated with its incorporation. AAST efforts not only strengthen the reliability and fault tolerance of future aircraft systems but also address several issues critical to cost effective aircraft which will meet future requirements by exploiting the concept of Integrated Modular Avionics (IMA) and an open, scalable Advanced Avionics Architecture (AAA). Examples of high payoff warfighting areas are:

(1) Situational assessment and awareness based on utilization of on-board data bases, on-board sensors, and off-board sensor data displayed in real time to provide improved aircrew/system cognitive performance in high stress combat environments. These efforts are believed to be key to the realization of information warfare and effective multi-force warfare in a tactical platform.

(2) Shared aperture sensors and common Radio Frequency (RF) modular subsystems, which permit reduced observability of a platform while actually improving the performance of its sensors, especially in the passive modes, through the increased gain of active array technology, adaptive nulling of jammers, implementation of cooperative engagement capability, wide bandwidth, and multi-spectral capability.

(3) Low Probability of Intercept (LPI) sensors and communications.

(4) Advanced algorithms which improve the performance of sensors in both existing and future aircraft.

(5) Multi-function aircraft, which could decrease the total number of aircraft types required to complete a mission and increase the total number of bombs on target throughout an engagement, since aircraft used one day as stand-off jammers, for example, could be used as bombers the next day once the enemy radars have been eliminated.


The IMA and AAA concept is predicated on the use of open, scalable architectures built from a combination of mostly common modules and some unique modules. Past efforts incorporated in the F-22, RAH-66, and A-12 used this approach in the digital systems, which account for 25% to 30% of the avionics. Future efforts in this program, Air Force programs, and the Standard Hardware Acquisition and Reliability Program (SHARP) will investigate applying the concepts of common modules to the Radio Frequency (RF) modules (receivers, exciters, switches, etc.) which make up the remaining 70% of the system. Use of common modules reduces the total number of modules and electronic functions needed to build an aircraft while increasing the mean-time-between-critical-mission-failures and decreasing weight. Use of modules can eliminate a whole level of maintenance (intermediate-level) and simplify the ability to maintain avionic systems. Training would also be simplified. IMA is foreseen to be the only way to cost effectively implement new avionics in the future. The new paradigm will also require more extensive modeling and simulation of advanced avionic architectures to minimize implementation risk, provide for rapid prototyping, provide technology transparency, and permit effective evaluation of commercial-off-the-shelf (COTS) hardware. Use of open architectures will lower the support cost of future avionics and permit industry to better focus on Navy needs, thereby leveraging their investments. Use of multi-function apertures and sensors will also simplify updates by permitting new functions to be implemented with non-recurring software changes in most instances. The realization of multi-function aircraft not only could reduce the number of aircraft types on a carrier but also reduce the number of aircraft required for a particular mission. Finally, the ability to shorten battles by dedicating more of the available multi-function aircraft to bombing once the surface-to-air threats have been removed could result in enormous resource savings.

C.1.2 Future Plans (Continued Efforts and New Tasks)

An Avionics DEM/VAL Effort - The cost of demonstrating new and truly advanced avionics architectures and systems is prohibitively high for any one program, any one company, or possibly any one service to afford. This problem is compounded when requirements are unknown, loosely defined, or incorrect. Yet there is a need to reduce the acquisition cost, upgrade cost, and life-cycle cost of avionics while not sacrificing performance or effectiveness. There is a need to create future avionics in a more technology-transparent and open architecture format so as to maximize competition, permit use of commercial-off-the-shelf (COTS) hardware/software, and provide industry with identifiable targets for their independent developments. In order for this to happen, both the government and industry must make more extensive use of systems engineering tools, modeling and simulation tools, rapid prototyping, commercial standards, synthetic environments, etc. to conduct avionic systems-level evaluations and demonstrations independent of a specific technology. Plans include a tri-Service and industry planning phase for a new initiative leading to a new paradigm in how avionics are developed, demonstrated, evaluated, specified, and procured. Included would be an initial effort to benchmark what tools are or are not available and what new tools are needed.
The objective of this effort would be to establish a tool set for accomplishing the new paradigm. This planned effort was briefed to over 200 industry representatives in May 1993 and was well received and encouraged. It has also been coordinated through the tri-Service Reliance Integrated Avionics Panel.

Initiate an effort for a scalable avionics open architecture capable of meeting the needs of retrofit, upgrade, and new aircraft applications. Work will include a novel hierarchical data bus approach using a high speed optical network, optical/electrical switch networks, fault tolerant design concepts, dynamic reconfiguration, real-time processing, algorithm analysis, system level architecture simulation, airframe monitoring, system/architecture simulation tools, open interface standards, applications of COTS, and demonstrations. The payoff would include scalable open architecture solutions which provide the bandwidth, flexibility, scalability, and fault tolerance required for next generation aircraft. It will also lead to a path for more cost effective avionic upgrades in the future.

Continue joint Navy and Air Force Situational Awareness developments, including PowerScene software for real time perspective scene rendering using on-board data bases, digitized sensor data, and/or off-board sensor data, as well as pertinent display technologies. This is an open architecture approach with Navy owned software developed utilizing the commercially available standard Open Graphics Library (OpenGL). These tools, when combined with real time image generation, will enable virtual systems design and testing prior to building. This effort would support the concept of integrated product/process development teams for advanced avionics. In the future, the concepts and tools being developed under the existing situational awareness program will be extended to a framework for an avionics integrated design environment which would allow for rapid development of complex elements while allowing for continual technology insertion. A fundamental building block could be a graphical modeling language referred to as extended systems modeling language (ESML+), which is an outgrowth of early functional decomposition techniques published in the late 70s and 80s.

Continue efforts on Shared Aperture Systems and Common RF to demonstrate that the cost and weight of aircraft can be reduced through the use of a common set of radio frequency modules in an integrated rack, coupled to a limited set of shared apertures, to form multi-function systems. Approximately 65% of the cost of avionics is associated with these subsystems in terms of fly-away cost. Further cost reductions could result from decreasing the need for derivative aircraft types and through the ability to rapidly reconfigure aircraft for different missions, thereby bringing more power to bear on a given target in a shorter time. Efforts under this joint Navy and Air Force project include transmit/receive modules, common RF modules, antenna elements, electro-optic and infrared apertures, resource managers, system and subsystem controllers, and others.

Continue avionics photonics efforts toward sensor data distribution networks and high speed switching functions, which are necessary to effectively use COTS, manage multi-function systems, and distribute system control. Work on Optical Backplane Interconnect System (OBIS) development will continue developing a system of militarized optical components to overcome the data transfer bottlenecks associated with electrical backplanes. OBIS technology is dual use: current commercial efforts in optical data transfer such as the IEEE Scalable Coherent Interface (SCI) and the ANSI Asynchronous Transfer Mode (ATM) lack

hardware implementations. OBIS will provide the hardware components necessary for successful implementation of the concepts of the Air Force's Very High Speed Optical Network (VHSON) program and is coordinated through the JDL Reliance. Concepts for an advanced optical network for data, sensor, and aircraft systems monitoring will be initiated and demonstrated in the out-years.

Continue advanced avionics algorithm efforts which improve the performance of existing platforms through software upgrades and can be used for new platforms as well. For example, the Advanced Detection and Tracking Algorithm effort is developing novel air-to-air mode algorithms for increased detection and tracking ranges for fire control radars. Other efforts include benchmarks for evaluating COTS processors and/or hardware. These efforts are coordinated with various Air Force and Navy activities so the algorithms can be integrated with Air Force algorithms to enhance multi-force capabilities.

Continue avionics packaging, specifically the two-phase Immersion Cooling development (a dual use technology), which is a circuit cooling technique for next-generation avionics modules and racks. Direct impingement of an inert fluorocarbon within clamshell-configured standard electronic modules enables cooling of the microcircuitry through the low thermal resistance associated with impingement and the high heat transfer capability associated with the latent heat of vaporization and the turbulent mixing of localized boiling. A laboratory configuration demonstrated capabilities in excess of 800 W/module while maintaining junction temperatures below 100 degrees C, several times the heat removal capability of conventional conduction-cooled SEM-E modules. Other efforts include development of a super-high-density electronics connector necessary to support the high data transfer rates of next generation avionics and advanced RF packaging. Efforts in this task will have dual use and could substantially increase the US lead in aerospace platform technology.
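As a rough illustration of what 800 W/module implies (a back-of-the-envelope sketch by this appendix's editor, not an AAST result; the assumption that both of the roughly 6 x 6 inch clamshell faces reject heat is the author's, with the face dimension taken from the approximate SEM-E size cited in Appendix B), the average surface heat flux can be estimated as follows.

    # Rough average heat flux for an 800 W immersion-cooled module, assuming
    # (author's assumption) both ~6 x 6 inch clamshell faces reject heat.
    FACE_SIDE_IN = 6.0                      # approximate SEM-E face dimension
    face_cm2 = (FACE_SIDE_IN * 2.54) ** 2   # one face: ~232 cm^2
    total_cm2 = 2 * face_cm2                # two faces: ~465 cm^2
    print(800.0 / total_cm2)                # ~1.7 W/cm^2 average heat flux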

Annex B (distribution limited to DoD contractors) contains additional information on AAST shared aperture work.

C.2 Next Generation Computer Resources Program Overview

The Next Generation Computer Resources (NGCR) Program is the result of a need to reduce the cost of acquisition, upgrade, and ownership of military tactical weapons systems. When the program was formulated in the mid 1980s, advances being made in the commercial computer arena presented an excellent opportunity for the military to leverage ongoing industry investments in open systems standards and technology. The NGCR Program was established to exploit this opportunity for tactical weapons systems. The NGCR program is taking a new approach to mission critical computer standardization by aligning with the commercial market and taking maximum advantage of rapidly evolving technologies and open system architecture trends. The NGCR Program is providing weapons systems acquisition managers and system developers of air, surface, subsurface, and land-based tactical systems with a family of commercially-based,

widely accepted, non-proprietary computer hardware and software open system interface standards capable of meeting their needs. Proper application of these standards will allow these systems to leverage commercial technology and transition to commercial open system designs. Just as open architectures have reduced the cost of ownership in the commercial personal computer and other markets, the use of open system designs can reduce weapons systems acquisition and ownership costs. Systems can be more affordably upgraded to benefit from technology advances, without costly changes to the underlying infrastructure of interface standards. The military's use of commercially-based open system designs will reduce dependence on the original system suppliers by allowing competition for modifications/upgrades to mission critical systems. To enable mission critical systems to take maximum advantage of commercial interface standards, the NGCR Program is influencing these standards to include requirements critical to weapons system performance, such as real-time operation and fault tolerance. The NGCR Program has established working groups in critical computer standardization areas to implement this effort. These working groups are comprised of NGCR program personnel with voluntary participation from industry, user program offices, academia, and other DoD and government agencies. The working groups' task is to identify weapons systems needs, evaluate and select promising standards, and work directly with national and international standards organizations and industry consortia to include these needs in the standards. The DoD can significantly leverage commercial investments in product designs and technology by applying the resulting open commercial standards to weapons systems designs. In addition to the standards working group activity, the NGCR program provides engineering support to weapons systems acquisition managers to assist in the application of open standards. The program has developed conformance test capability for selected standards to assure product compliance with the standards. Compliance testing reduces system integration risks and facilitates product compatibility among vendors. In addition, contracts have been issued to analyze the suitability of selected standards to real world weapons systems applications. The results of these analyses have then been used to assist the working groups in the definition of detailed technical requirements. Procurement of commercial open systems and designs for mission critical applications represents a significant change in the way the DoD acquires and supports tactical systems. To assist users in the acquisition of open systems, training course material is being developed that can be used as a basis for courses taught at DoD and industry training facilities. Designing military weapons systems based on open architecture and interface standards makes economic sense for DoD in today's environment of rapidly evolving technologies and declining budgets. The need to upgrade systems in a modular, cost effective manner has never been greater. Using widely accepted commercial interface standards, as opposed to standards unique to the military, provides the added cost avoidance benefits of leveraging a large and mature market base. System design, development, and integration time and cost can be significantly reduced. System upgrades can be made incrementally and without the need for major system redesign.
By incorporating tactical system requirements into these commercial standards, the NGCR Program

is making these standards, and the resulting benefits of their use, applicable to a much larger percentage of military systems. Three general areas of standardization are being addressed by the program: multiprocessor interconnects, multi system interconnects, and software interfaces. Eight standards are being defined or selected in these standardization areas:

A. Multiprocessor Interconnects
   Baseline Backplane and Modular Open Architecture
   High Speed Data Transfer Network (HSDTN)
   High Performance Backplane

B. Multi System Interconnects
   SAFENET
   High Performance Network (HPN)

C. Software Interfaces
   Operating System Interface
   Graphics Interface
   Database Management System Interface

C.2.1 Multiprocessor Interconnects

The three NGCR multiprocessor interconnects are described in the sections that follow.

C.2.1.1 The Baseline Backplane and the NGCR Modular Open Architecture

The NGCR baseline backplane and the modular open architecture break computer and electronic systems into board (module) level components with standard interfaces between the boards. Figure C.2.1.1-1 shows the three NGCR standard board form factors and some intended applications for each. The IEEE 896.5 standard, IEEE Standard for Futurebus+, Profile M (Military), and its referenced documents provide the details for each form factor. The three form factors are the MIL-SEM-E, intended for integrated rack use; the MIL-10SU, intended for ATR box and 6U VME replacement use; and the MIL-12SU, intended for milder environment cabin avionics and shipboard use.


MIL-SEM-E (~5.9 x 6.7 in.): helicopters, new tactical aircraft, shipboard
MIL-10SU (~6 x ~8.5 in.): helicopters, new tactical aircraft, shipboard
MIL-12SU (~10.4 x ~11.3 in.): cabin avionics, shipboard

Figure C.2.1.1-1 NGCR Board Form Factors and Applications

Figure C.2.1.1-2 shows the standard board level interconnects for the NGCR program. These are the IEEE 896 Futurebus+, the IEEE 1394 High Speed Serial Bus, and the High Speed Data Transfer Network (HSDTN). Detailed pinout and other specifications for Futurebus+ and the Serial Bus are contained in IEEE 896.5. While the Futurebus+ is the primary backplane interconnect, the auxiliary Serial Bus is used for test and maintenance, software debug, low bandwidth data transfer, and other purposes. Customized local interconnects among a few boards, input/output interconnects, backplane discretes, control and status registers, power, grounding, mechanical interfaces, and thermal interfaces are also defined in IEEE 896.5.

[Figure C.2.1.1-2 depicts sensors and displays connected through switches over HSDTN extended links to two racks, each containing an HSDTN, a bridge, a Futurebus+, and a Serial Bus with discretes; the racks are interconnected at backplane speed by the High Speed Data Transfer Network.]
Figure C.2.1.1-2 NGCR Modular Open Architecture; Board Level, Sensor / Display, and Inter-rack Interconnects

C.2.1.2 The High Speed Data Transfer Network (HSDTN)

The HSDTN augments the baseline backplane (the Futurebus+ / Serial Bus linear buses) with a backplane switched network and a backplane ring network. Figure C.2.1.1-2 shows the HSDTN used as a switched network. This figure also shows the HSDTN extended links connecting to sensors and displays as well as interconnecting separate racks. This sensor/video extended interconnect network contains 1+ Gbit/sec point-to-point links for bringing very high speed sensor information to processing racks, for driving displays whose refresh memory is remote from the display, and for interconnecting racks at backplane speeds. The HSDTN allows multiple simultaneous conversations among boards and among Futurebus+ clusters. It also allows simultaneous distribution of sensor data to multiple racks, and distribution of video data from multiple racks, for fault tolerance or other purposes. The IEEE 1596 Scalable Coherent Interface (SCI) and its derivative, SCI/Real Time (SCI/RT), have been selected for the shared memory function, and Fibre Channel has been selected for data channel functions.

C.2.1.3 High Performance Backplane (HPB) Standard

The third multiprocessor interconnect standard, the High Performance Backplane (HPB), has not yet been initiated, but the effort will start in September 1997. Candidate technologies for the HPB mostly revolve around the use of optics, although further improvements to electrical protocols have not been ruled out. Optical technologies of interest include fiber optics, free space optics, and optical switches. Electrical technologies of interest include lower power, lower latency protocols and higher performance electronics.

C.2.2 Multi System Interconnects

These interconnects provide rack-to-rack, sensor rack-to-rack, and rack-to-display communication. They are local area networks (LANs). Two different, complementary LANs, one medium speed and one high speed, are being standardized: SAFENET and a yet to be selected High Performance Network (HPN).

C.2.2.1 Local Area Network Standard

SAFENET was selected as the Local Area Network standard. It is a dual-ring, token passing LAN-based standard for inter-computer and computer-to-peripheral data transfer. It is based on the ANSI X3T9.5/84-89 Fiber Distributed Data Interface (FDDI) standard.

C.2.2.2 High Performance Network Standard

The HPNET working group was started in September 1992, with publication of the standard targeted for March 1997.

C.2.3 Software Interfaces

Three different software interface areas are being standardized; these are described in the sections below.

C.2.3.1 Operating System Interface (OSIF) Standard

The OSIF effort commenced in March 1989 to prepare a commercially-based family of operating system (OS) interfaces. Following the requirements definition and a survey of available technologies and standards activities, POSIX (IEEE 1003) was selected as the baseline standard for the NGCR OSIF standards. The Operating System Working Group is now actively involved in the IEEE 1003 project to participate in the POSIX standards definition. This set of standards will address systems which are Ada-oriented, real-time, distributed/networked, multi-level secure, reliable, and realizable on heterogeneous processors. The initial standards will be published in October 1995.

C.2.3.2 Data Base Management System (DBMS) Interface Standard

The DBMS standard effort was initiated in May 1991 with the preparation of a white paper and organizational planning for the working group, which was formed in September 1992. This standard will define DBMS interfaces for naval systems which are typically real-time or critical-time, heterogeneous, distributed, language and operating system independent, network independent, secure, and fault tolerant. Current trends in commercial and military DBMS technology as applied to C3, sensor, intelligence, and weapon systems will be assessed to define standard interfaces for a broad range of platform applications. Publication of this standard is scheduled for September 1998.

C.2.3.3 Graphics Language Interface Standard

The Graphics Interface Standard working group commenced in September 1992 and has a scheduled standard publication date of September 1998.

C.2.4 NGCR Standards Applied to an Avionics System

Figure C.2.4-1 illustrates the application of the various NGCR interface standards to an integrated avionics system. These hardware and software standards are designed to allow the mixing and matching of components from different vendors in an aircraft.

[Figure C.2.4-1 depicts an integrated avionics system: RF arrays, apertures, and sensor front ends (radar, EW/ESM, CNI, acoustics, EO FLIR/IRST, missile warning) feed fiber optic switched sensor networks into integrated racks of signal and data processors built on IEEE Futurebus+ and Serial Bus backplanes with parallel and serial buses and digital/analog modules; switched video networks drive the displays; and a fiber optic avionics bus connects aircraft systems (controls, mass memory, cockpit indicators, electrical power system, flight control system, inertial sensors, air data system, recorders). The High Speed Data Transfer Network, O.S. standard, database standard, graphics standard, and SAFENET or High Performance Net standards are applied at the corresponding interfaces.]
Figure C.2.4-1 NGCR Applied to Avionics

C.2.5 NGCR Modular Open Architecture Features

Table C.2.5-1 contains a requirements versus supporting features itemization for the NGCR modular open architecture. It shows the specific features developed to satisfy the various system requirements.

Table C.2.5-1 NGCR Modular Open Architecture Features

System Requirement: High performance; emerging multi-hundred megahertz computer chip support
Supporting Features:
- Futurebus+ at up to 6.4 GBPS is 10 times faster than previous buses
- HSDTN at up to 8 GBPS on each leg of the switched network
- Cache coherent shared memory

System Requirement: Low weight, volume, power; emerging ultra high density electronics support
Supporting Features:
- Low heat 3.3 volt power
- High performance air and liquid flow-through cooling


Table C.2.5-1 NGCR Modular Open Architecture Features (cont)

System Requirement: Economical systems
Supporting Features:
- Based on widely used commercial standards to allow use of economical commercial electronics technology
- Defines interfaces so components from different vendors can work together and be used in multiple systems, providing economies of scale
- Defined at the board level, so components are small enough that they can be developed on vendors' own money, thereby shifting development costs away from the government
- Supports ruggedized and mil-spec versions of standard commercial boards
- Supports full ATR size boards on 0.8" pitch to accommodate cheaper high profile devices and heat sinks up to 0.2" for economical but high capacity air flow-through cooling

System Requirement: Reduced development and upgrade times
Supporting Features:
- Open architecture allows system development from previously developed components as well as piecemeal upgrades with newer components

System Requirement: Both new platform and retrofit support
Supporting Features:
- Three board sizes to fit efficiently into the available space in different platforms: MIL-12SU (~10.4 x ~11.3 in.), MIL-10SU (~6 x ~8.5 in.), MIL-SEM-E (~5.88 x ~6.68 in.)

System Requirement: Varied platform environmental requirements support
Supporting Features:
- Four environmental levels supported: commercial off the shelf; ruggedized; full mil-spec (shipboard); full mil-spec (airborne)

System Requirement: Integrated architecture support
Supporting Features:
- Very high performance backplane and switched network (HSDTN) allow combining of data from different sensors
- Cache coherent shared memory (as well as message passing) for efficient tightly coupled processing

System Requirement: Fault tolerance
Supporting Features:
- Optional dual backplane buses
- HSDTN provides rack-to-rack interconnects at backplane speed for N+1 redundancy across racks

System Requirement: Maintainability and reliability support
Supporting Features:
- Serial bus for test and maintenance, software debug, and miscellaneous functions
- On-board stress and error history log
- On-board revision history log
- Live insertion (some form factors)


Table C.2.5-1 NGCR Modular Open Architecture Features (cont)

System Requirement: Software development support
Supporting Features:
- Standard unit for real time non-intrusive debug of integrated systems
- Both cache coherent shared memory and message passing for either tightly or loosely coupled systems

System Requirement: Massively parallel processor support
Supporting Features:
- HSDTN provides standard mesh (or other) interconnect for up to 65,000 processors

System Requirement: Fiber optic interconnects
Supporting Features:
- Optical fiber contacts in backplane connector
- HSDTN may optionally have a fiber optic physical layer

System Requirement: Input/output from external to the rack
Supporting Features:
- Standard pin assignments for commonly used I/O such as Mil-Std 1553, Mil-Std 1397, and others

System Requirement: Multi-board functional elements
Supporting Features:
- Connector space allocated for custom interconnects among a few boards making up a multi-board functional element

System Requirement: Hierarchical bus structures
Supporting Features:
- Modules with dual buses may use one bus as a secondary bus to form cluster or other bus structures

System Requirement: Sensor / video network
Supporting Features:
- HSDTN provides a 1+ GBPS serial link

System Requirement: Mixed digital and analog/RF systems
Supporting Features:
- +/-15 volt power
- Separate analog ground
- Clock signal
- Serial bus to be used as an RF control bus
- Optional coax contacts in backplane connector
- Optional board covers

System Requirement: Real time usage
Supporting Features:
- Clock coordination accuracy to 50 ns or better
- Up to 256 priority levels for deterministic scheduling

System Requirement: Nuclear event survival
Supporting Features:
- Nuclear event shutdown discrete

System Requirement: Unstable power system ride-through
Supporting Features:
- Power fail imminent warning message and discrete allows orderly shutdown
- Battery backup power

System Requirement: Signal processor support
Supporting Features:
- Futurebus+ provides the speed to handle many signal processing applications on a single linear bus
- HSDTN provides a switched network for signal processing applications requiring multiple simultaneous data transfers


Appendix D Supportability

D.1 Supportability Guidelines

The avionics system's availability is critical to the performance of the future strike weapon system's missions. Future avionics systems must be supportable. Following the JIAWG concept, the combination of a reliable and maintainable design under the performance, weight, volume, and cost constraints will yield an affordable avionics system for the next generation weapon systems. Several trade-off analyses among the supportability-related concepts will be required during the design phase to determine the best combination of these concepts that will meet the operational requirements at a reduced life cycle cost. These new avionics systems will be used by different services for a variety of missions; for this reason, they must be suitable for support in a variety of scenarios, including carrier based weapon systems and austere base locations.

Reliability is one of the most important aspects of any weapon system design. The reliability of a weapon system determines the amount of support that will be needed over the life cycle of the system. Early implementation of reliability eliminates many of the later design changes that are extremely costly. The importance of reliability cannot be overemphasized. The decrease in the defense budget creates a need for systems that are affordable and last longer. The reliability of a weapon system affects all the elements of integrated logistics support. The baseline system identified for JAST avionics is the F-22. Improved reliability over the F-22 avionics needs to be realized for savings in JAST avionics life cycle cost. The system design should demonstrate high reliability in support of its availability requirement. A single reliability factor, such as MTBF, may not be meaningful (e.g., the temporary loss of module processing capabilities should not constitute a system failure). JAST hardware system modules should be considered operational if 50% of the capabilities are operational, or if 50% of the roles are fully supported. Modules should be constructed to ensure that the capability provided remains functional in the presence of one or more failures within the total system. Survivability mechanisms such as hardware redundancy and functional partitioning should be considered to enhance overall system reliability. System design of modules should include part location by temperature and component sensitivity to temperature, to improve reliability by maximizing heat transfer and minimizing thermal gradients. System design shall ensure that the predicted reliability exceeds the defined specifications.

Maintainability is the effort it takes to repair a failed system. The importance of maintainability lies in the turnaround time between the finding of a failure and its repair. Maintainability affects the availability of a system: the easier a system is to maintain, the faster it will be available to accomplish the next required mission. Because of the need for quick turnaround time, emphasis has been placed on the need for integrated diagnostics throughout the weapon system, but especially in the avionics. New systems depend more and more on the avionics, making them more complex systems. The extensive use of integrated diagnostics throughout the avionics allows the

use of reconfiguration and graceful degradation, plus reduced time for failure detection and isolation. Integrated diagnostics contribute to the overall life cycle cost savings by eliminating the need for costly external support equipment, which occupies valuable space on a carrier or requires transportation to the different operating locations, presenting a burden in wartime situations.

A supportability goal for JAST avionics is to be able to perform avionics maintenance in an opportunistic way. The use of common avionics modules, redundancy, and the ability of the avionics architecture to reconfigure around failures presents the opportunity to defer avionics maintenance actions to times when the plane is down due to a failure in another subsystem that requires an immediate maintenance action. This will eliminate any avionics downtime. This maintenance strategy could be utilized up to the point where the probability of mission success or safety of flight concerns become an issue due to the low levels of redundancy remaining available.

JAST avionics should contribute to the deployability of the weapon system. Reductions in the logistics tail are required to allow the weapon system to be deployed to a different operational base or an austere location with minimal support. The conceptual supportability characteristics described above support the deployability requirements by providing an avionics subsystem that is self contained for logistics purposes, requiring only a limited number of spares for support.

Life cycle cost is composed of three phases: research and development, production, and operation and support. In order to make the avionics system affordable, reductions in life cycle cost are required. In the research and development phase, the reuse of some of the system elements in both the hardware and software areas can prove to be helpful in reducing life cycle cost. Efforts in manufacturing technology at this stage can be applied during the production phase to lower production costs. The operation and support phase of a weapon system is dependent on the decisions made during the research and development phase. Decisions made at the early phases of the program will determine most of the system attributes that define the costs to operate and maintain the system.

The avionics system for JAST must support a reduced maintenance concept, without degradation to system readiness or sustainability. This relies heavily on integrated on-aircraft diagnostics to fault isolate to the LRM with minimal ambiguity. JAST avionics line replaceable modules (LRMs) should support tool-less maintenance: the technician must be able to remove and replace failed LRMs without the use of tools. The JAST avionics should not require the use of any special tools in order to perform any repairs to the system; only common hand tools will be allowed for maintenance. BIT [C-BIT (continuous), I-BIT (initiated), M-BIT (maintenance)] will then be used to confirm restoration of the system's operational condition. The LRMs must be within easy access for repair purposes; first tier location is required for ease of maintenance. Maintenance personnel should be able to perform all user-level tasks wearing NBC/Arctic gear. No adjustments or calibrations should be required at the flight-line level. A complete maintenance action, including fault detection, isolation, removal, repair, and verification, should not require longer than 15 minutes for execution by a skill level three avionics technician.
(Nothing in the module design shall preclude the modules from meeting the MTTR of < 15 minutes.) Maintenance should occur upon mission critical failure(s) of the system and be consistent with the

maintenance concept of each weapon system. Other factors affecting system maintenance should include consideration in design for fault tolerance, deferred maintenance, and reconfigurability. Module diagnostics must support the system concepts of dynamic on-line reconfiguration, fault tolerance, and graceful degradation through varying degrees of BIT and fault data status reporting. Module diagnostics must facilitate the reduction of false alarms, false removals, cannot-duplicates, and retest-OKs by providing accurate fault isolation at the appropriate level and easily retrievable on-module fault data. Module BIT routines must consider failure predictions and failure frequencies to minimize the time between the occurrence of a functional fault and its detection. Module BIT routines must be resident on the module to the extent required herein, and shall provide the BIT fault coverage. System design should include resident module BIT routines to reduce the coverage of fault detection/isolation normally provided by external ATE diagnostics.

When identifying system readiness objectives for JAST, the following should be considered:

1) Identification of the overall installed performance requirements of JAST and the specific support required from other on-board systems.

2) Identification of the performance availability thresholds necessary to provide survivability levels against specified threats, as well as aircraft performance constraints. This should include all system readiness drivers.

In addition to the above guidelines, the following elements should be considered as measures of system readiness and R&M. These elements should be evaluated against scenarios of low (peacetime), medium (peacetime with surge), and high (wartime) activity for reliability and maintainability goals. (Standard relationships among these measures are sketched after this list.)

a. Inherent Availability (Ai)
b. Operational Availability (Ao)
c. Maintainability - Mean Time To Repair (MTTR)
d. Maintainability - Mean Corrective Time (Mct)
e. Maintainability - Mean Down Time (MDT)
f. Reliability - Mean Time Between Critical Failure (MTBCF)
g. Reliability - Mean Time Between Failure (MTBF)
h. K Factor - Equipment Operating Time
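As a rough illustration (a generic sketch using standard reliability engineering definitions, not formulas taken from the JAST guidelines; the function and variable names are the author's), the availability measures above are commonly related to the reliability and maintainability measures as follows.

    # Standard availability relationships (generic definitions, assumed here):
    #   inherent availability    Ai = MTBF / (MTBF + MTTR)
    #   operational availability Ao = MTBCF / (MTBCF + MDT)
    def inherent_availability(mtbf_hrs, mttr_hrs):
        return mtbf_hrs / (mtbf_hrs + mttr_hrs)

    def operational_availability(mtbcf_hrs, mdt_hrs):
        return mtbcf_hrs / (mtbcf_hrs + mdt_hrs)

    # Example: a 410-hour MTBF (the Figure B.6-1 reliability projection) with
    # the 15-minute (0.25-hour) repair goal gives Ai of about 0.9994.
    print(inherent_availability(410.0, 0.25))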

A system module should be considered operationally available if the critical tools and functions can be used with the expected or planned response time by at least half of the normally planned peak user load. Mechanisms (e.g., security, physical environment, system backup) should be employed during system design to reduce system susceptibility to internal and external damage events. Where possible, JAST technology development efforts should utilize subsystem technologies developed for similar advanced architecture development efforts by other services to maximize commonality within the constraints of Air Force/Navy operational requirements. This commonality should include not only system design but also system support considerations.

The JAST avionics should provide fault detection and isolation to the failed LRM through the extensive use of integrated diagnostics. JAST avionics should support an intra-aircraft and air-to-base data link for the transfer of diagnostics information. This will allow the transfer of information on failed LRMs prior to landing, allowing the maintenance crews to procure the required LRM replacements and be ready to execute the maintenance as soon as the plane lands, reducing the avionics turnaround time. This data link need not be used exclusively for diagnostics purposes, but it shall support the diagnostics data requirements. The LRMs should contain non-volatile memory for the storage of module status, historical maintenance information, elapsed time since the last maintenance action, and environmental data at the time of failure. This information will allow the identification of bad actors and facilitate the reduction of Can Not Duplicate (CND) failures and false alarms. The technician at the depot or selected repair location will be able to replicate the environmental conditions under which the failure occurred. This data shall be compatible with the Computer Aided Logistics Support (CALS) digital data format and interface.

All technical order (TO) information should be stored in the avionics mass memory. The information will be presented to the technician through the cockpit displays, a maintenance data panel, or a portable maintenance device (PMD). This will eliminate the requirement for paper based TOs, which require storage space and transportation and are cumbersome to use.

Environmental considerations, from outside weather conditions which affect flight line and carrier deck repair activities and the frequency of preventive maintenance, to spaces for on-board storage of spare parts, should be considered during supportability planning. Many of the environmental problems affecting current fleet supportability will continue to exist, simply by the nature of the JAST operational requirements and the necessity for exposure of some equipment to the outside environment during mission-related operations. Supportability planning for environmental considerations should highlight factors such as: (a) the location of JAST equipment on the aircraft (e.g., requirements for O-level maintenance activities internal to the aircraft could limit use of certain SE because of its size, whereas repair of externally accessible systems could be affected by restrictions on SE operation in adverse weather); (b) special storage facilities or restrictions (e.g., storage restricted to temperatures between 55-85 degrees F); and (c) special restrictions on test equipment (e.g., not useable in high humidity or high heat/sand [Gulf War results]).

Manpower requirements constitute the highest costs in the operation and support phase of a program. Reductions in personnel requirements have the potential for significantly reducing a weapon system's life cycle cost. A number of the items in the area of supportability will influence the number of people required to support a weapon system. Making the avionics system reliable and maintainable (the higher the MTBF and MTBCF, the lower the LCC) reduces the need for people to maintain the system.
A series of life cycle cost trade-offs should be considered during system design, which should include but not be limited to the system maintenance concept, alternative system/product design configurations, alternative production approaches, and alternative product

distribution methods. The applicability of new technology for upgrades of existing systems through retrofits will lower the price per module because of economies of scale. The use of commercial off-the-shelf (COTS) components, where they meet the requirements, is highly encouraged and presents possible life cycle cost savings. If used, COTS components selected by the contractor should meet or exceed the overall design, performance, and R, M & BIT definition requirements of the JAST program. Commercial components cost less to procure and are easier to upgrade. Every new avionics technology will be required to buy its way into JAST by proving its potential to decrease the avionics life cycle cost. The extent to which any or all of these supportability enabling concepts are applied to the JAST avionics architecture design will require a number of trade-off analyses. These will determine the optimal design solution which provides an avionics architecture that meets the operational requirements at a reduced life cycle cost, making affordable avionics a JAST reality.


Appendix E Estimated Data Rates for Electro-Optic Sensors

The data rate projection for a 640 x 480 pixel FLIR operating at 30 frames per second is approximately 160 Mbits per second for 16 bit words. The rate will scale upward if a 1000 x 1000 pixel array is considered. The anticipated throughput projection is 3 - 10 GFLOPS.

The data rate and throughput calculations for an IRST are highly dependent on update rate, scan, resolution, and algorithm complexity. It should be assumed that spatial-temporal detection processing (500 - 1000 operations per pixel) will be used in the 2010 time frame. Depending on the scenario, the data rate is expected to be 120 - 200 Mbits/sec and the throughput 4 - 10 GFLOPS.

It will be assumed that the threat warning, navigation, and situational awareness functions will be handled by ADAS. For threat warning, ADAS acts as an array of IR detectors distributed throughout the airframe to detect missiles and aircraft at short range. A relatively simple algorithm should be adequate for this function. The data rate is expected to be about 500 Mbits/sec with a throughput projection of 1 - 2 GFLOPS.

Navigation produces a faster display, one that provides the pilot with an unobstructed view no matter which way he turns his head. Because of the multiple sensors involved, the data from each sensor must be merged to produce a seamless image, adding complexity to this function. The navigation data rate for ADAS can be as high as 2 Gbits/sec with a throughput projection of 1 - 2 GFLOPS. For a conventional NAVFLIR the data rate projection should be about 300 - 500 Mbits/sec for a 1000 x 1000 pixel array.

Situational awareness consists of detecting and tracking objects over the full field-of-regard. The complexity of the algorithms is comparable to the IRST, but over a larger field-of-regard. The data rate projection is 500 Mbits/sec and the throughput projection is 15 - 20 GFLOPS.
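The FLIR figure above follows from simple arithmetic. The sketch below (an editorial illustration; the function name and the assumption that every pixel is carried as a full 16 bit word are the author's) reproduces the 640 x 480 projection and shows how it scales to a 1000 x 1000 array.

    # Raw sensor data rate: pixels per frame x frames per second x bits per pixel.
    def sensor_rate_mbps(width_px, height_px, fps, bits_per_pixel):
        return width_px * height_px * fps * bits_per_pixel / 1e6

    print(sensor_rate_mbps(640, 480, 30, 16))    # ~147 Mbits/sec raw;
                                                 # ~160 Mbits/sec with overhead
    print(sensor_rate_mbps(1000, 1000, 30, 16))  # ~480 Mbits/sec for the larger
                                                 # array (cf. the NAVFLIR range)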

E-1