Diagrammatic Programming

Article from: EDN

Author: Charles H Small

A host of diagrammatic-programming and debugging tools allows you to picture what your program is rather than making you describe in words what it does.

Programming used to mean just one thing: typing endless strings of ASCII characters into bottomless files. Luckily, you no longer have to extrude your software designs through the linear-sequential filter of text-based programming tools. Now you can work like an engineer, drawing a compilable diagram of what your design is rather than crafting a list of steps that it must perform.

The difference between textual programming and diagrammatic programming is the difference between a description and a mug shot, between a list of directions and a map, between a formula and a graph. In diagrammatic programming, the ideal is the real; the documentation is the program.

Diagrammatic-programming systems spring from elegant, clear ideas in the minds of their designers. These designers have taken the paradigms for their diagrammatic-programming systems from proven engineering practice, such as networked instruments; directed graphs ("bubble charts"); and DSP, circuit, and control-system diagrams. New diagrammatic compilers, based on diagrammatic software-analysis tools, are on the horizon.

Using diagrammatic programming, you can create what the computer-science mainstream has vainly sought for years: comprehensible software. Diagrammatic programming produces comprehensible software because it taps the unsurpassed creative and analytical power of the visual hemisphere of the human brain. Simply stated, while humans cannot even begin to approach computers' text-handling expertise, humans still easily beat computers at pattern recognition.

Although programming-language designers pay lip service to human readability, the consumer they have in mind is clearly their compiler, not the poor programmers. All programming languages require humans to order their thoughts in a fashion easily ingested by computers. Therefore, programming languages introduce a high level of what Jeff Kodosky, inventor of National Instruments' LabView, terms "artificial complexity" into programming, making large programs literally incomprehensible to normal humans.

Diagrammatic programming can make easy what were once difficult software tasks. Writing, debugging, and verifying real-time software is, with the possible exception of DSP, probably the most difficult kind of software to write. But with diagrammatic programming, developing multitasking or multithreaded programs is effortless. You simply draw two lines of execution.

The technique of drawing programs is so new, and its applications so diverse, that the technique has no name. People label the technique "pictorial," "graphical," "iconic," or "diagrammatic" programming. Pointing up the lack of language skills in visually oriented people, some call the technique, oxymoronically, "visual language."

Although diagrammatic programming requires a good graphical user interface (GUI), don't confuse diagrammatic programming with a GUI itself or with software that merely creates GUIs, such as XVT's. Diagrammatic-programming systems are in the same league as programming languages.

Obviously, if you are going to draw a program, the computer you are working on must have high-resolution graphics. Indeed, the inspiration for many diagrammatic-programming systems was the Macintosh. Without a host computer such as the Mac, with its high-resolution graphics and graphics-oriented operating system, or a Unix computer running the X Window System, crafting a diagrammatic-programming system would be difficult. Because of its once-clunky graphics, the IBM PC was the last to acquire any diagrammatic-programming systems.

Diagrammatic programming is inherently better than text-based programming for two reasons: Humans, especially visually oriented engineers, can generate and comprehend pictures much more easily than they can linear lists of text. And, the linear structure of a text-based program simply does not match the structure of real programs. A multidimensional picture can model a complex program much more elegantly and succinctly than can any linear list.

Text-based programming languages, on the other hand, are a jumble of unrelated oddments. Programming languages confound constructs that loosely mirror mathematics with structures that mimic paperwork. Also thrown in the pot are commands derived from the innate testing and looping abilities of certain CPUs--C's increment operator, "++," being an egregious example. Tellingly, even the most modern software metaphor from the text-based world is no more than a messy pile of papers on a desk--windows, in other words.

The computer-science mainstream periodically proclaims yet another variation on text-based programming that will magically transform programming from a craft into an engineering discipline. Remember "structured programming?" The latest such fad is "object-oriented" programming (OOP).

OOP takes as its model the hierarchy. (We can thank the Roman emperor Diocletian for the idea of hierarchy. He organized his empire into a rigid rank structure, which the Catholic Church adopted. Later, biologists employed the hierarchy concept in classifying plants and animals into a coherent scheme.)

Hierarchies have their limitations, however. As worthwhile as the biologists' ongoing task of classifying all plants and animals into a hierarchy is, such a scheme tells you nothing about ecology. That is, if you know the location of birds and worms in the hierarchy, you still don't know that birds eat worms for a living and not vice versa. Similarly, OOP does wonders for packaging like functions and deriving new offspring from their forebears. But the familial history of functions tells you little about how they interact when a program actually runs.

Because all these variations on text-based programming tools depend on human beings' feeble reading, writing, and text-comprehension abilities, the latest, hottest textual-programming method just never seems to pan out. And progress in hardware is making the situation worse: Advances in hardware engineering continue to outrace any advances in software engineering.

A common and very effective optimizing technique points up how incomprehensible large programs really are. All optimizing compilers perform "dead-code removal" as part of their bag of tricks. This optimization technique searches for and deletes all subroutines and functions that never get called. Compiler writers include this optimization technique because it is very simple for a computer to perform and because experience shows that all large programs are riddled with dead code like raisins in an oatmeal cookie.
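
To make the idea concrete, here is a minimal C sketch (an invented example, not from any particular compiler's documentation): checksum() has no caller anywhere in the program, so a compiler or linker that analyzes the whole program can silently drop it from the executable.

/* dead.c -- illustration of what dead-code removal targets.
   checksum() is never called from main() or anywhere else, so an
   optimizing compiler (or a linker doing function-level garbage
   collection) can delete it from the final executable. */
#include <stdio.h>

static unsigned checksum(const unsigned char *p, unsigned len)
{
    unsigned sum = 0;
    while (len--)
        sum += *p++;
    return sum;                 /* dead code: no caller remains */
}

int main(void)
{
    puts("checksum() above is never called; it is dead code.");
    return 0;
}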

Think for a minute, though: How does dead code come about? Clearly, at one point, a programmer took the time to craft a function or subroutine. That programmer would not have written that code for the fun of it. The programmer must have felt the code to be necessary. Yet other programmers working on the project who could have taken advantage of that code either forgot about it or never needed it.

If programmers used diagrammatic programming instead of textual programming, dead code would be as unlikely as civil engineers' drawing up blueprints for a road to nowhere. Comparing the graphical tools that engineers use with programmers' development tools proves telling. Engineers' diagrams have two characteristics that set them apart. First, engineers' tools depend on powerful, formal systems of mathematics. Consider such graphical tools as Smith charts, Bode plots, pole-zero plots, and root-locus plots. Although these tools are intensely graphical, they suppress powerful mathematics.

The second distinguishing characteristic of engineers' graphical tools is that they allow easy bidirectional travel between the real and the ideal domains. Take a pole-zero plot, for example. Engineers can extract such a plot by analyzing a physical system. They can also tell much about a system by glancing at its pole-zero plot. But this process is eminently reversible. Engineers can dispose poles and zeros on a pole-zero plot in any fashion they choose to create an ideal system that has the performance characteristics they want. Then, from this pole-zero plot, engineers can synthesize the specs for real components that, when assembled properly, yield a system that performs according to the idealized pole-zero plot.
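
A small worked example of that round trip, with assumed numbers (none of them from the article): place the poles of a second-order lowpass filter by choosing a damping ratio and natural frequency in the ideal domain, then synthesize series-RLC component values in the real domain.

/* polezero.c -- from ideal pole placement back to real components.
   A sketch under assumed values: pick a damping ratio and natural
   frequency (the ideal domain), then solve a series-RLC lowpass
   for R and C given a chosen L (the real domain).
   Compile with: cc polezero.c -lm */
#include <math.h>
#include <stdio.h>

int main(void)
{
    const double zeta = 0.707;    /* chosen damping ratio            */
    const double wn   = 1.0e3;    /* chosen natural frequency, rad/s */
    const double L    = 10.0e-3;  /* pick one real component: 10 mH  */

    /* Series-RLC lowpass: wn = 1/sqrt(LC) and zeta = (R/2)sqrt(C/L) */
    double C = 1.0 / (wn * wn * L);
    double R = 2.0 * zeta * sqrt(L / C);

    printf("poles at -%.0f +/- j%.0f rad/s\n",
           zeta * wn, wn * sqrt(1.0 - zeta * zeta));
    printf("synthesized parts: R = %.2f ohms, C = %.0f uF\n",
           R, C * 1.0e6);        /* R = 14.14 ohms, C = 100 uF */
    return 0;
}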

From the ideal to the real and back

Conventional software tools do not exhibit these characteristics. The tools do not rest on powerful, formal mathematics, and they are not bidirectional. Until recently, you could only diagram a program. But to realize the program, you still had to concoct linear lists of unfathomable text. Now, with diagrammatic programming, you can actually compile your diagram. With the ability to compile software diagrams, the need to go back and forth between the ideal and real domains disappears.

No text-based software-development system displays this level of bidirectionality or this kind of automatic optimization. In fact, in the programmers' lexicon, "optimization" means taking a stab at making a program as fast or as small as possible, not making the program achieve a complex, formally defined performance specification.

Proponents of text-based systems can point to relative improvements in their craft with pride. However, in absolute terms, text-based programmers produce the most fault-ridden technological products available. Error rates in the percentages (of lines written) are the norm for large programs (Refs 1 and 3). Hardware projects of a similar level of complexity have error rates in the parts per million.

Advocates of text-based tools argue that new programs have high defect rates because all the components of the programs are crafted from scratch, whereas a complex new piece of hardware--an airplane, for example--is created out of existing components using established design methods.

Even if you grant the textually minded their view, just why does software development have to differ from other forms of technological development? According to proponents of text-based tools, the answer is that software is inherently much more flexible than hardware; hardware is very constrained, presumably giving hardware designers fewer wrong turns to take compared with the virtually infinite ways that software can go wrong. This observation merely raises another question: How did hardware come to be constrained in the first place? This constraint did not happen naturally to the hardware components as they ripened, hardening like nuts on a tree, nor will it happen naturally to software components. Indeed, the only evidence for software's peculiar properties is software itself.

Consider that for all new technological fields, except text-based programming, rapid progress at exponential rates followed an initial period of confusion. Examples are automobiles, aviation, and--of course--electronics. But in all the time that text-based programmers have been hacking away, they have not been able to accumulate the powerful mixture of art (practice) and science that practitioners in other technological fields have. For example, reusable software components, such as Unix utilities and Fortran scientific subroutines that have been in use for more than 30 years, are still full of bugs.

Text-based programming once had its place

Engineers and programmers who have worked by themselves or on very small development teams might protest that they don't need any diagrammatic-programming systems. After all, high-quality, bug-free text-based software is possible. Lone programmers pursuing concrete goals, using languages and machines with which they are familiar, can do good jobs on small projects--if they have enough time and if their employers are willing to live with an 80:20 or worse software/hardware division of design effort.

The problem with text-based methods is that you cannot scale them up to large projects. An analogy would be building with mud bricks. You can build a perfectly satisfactory 2- or 3-story building from mud bricks if you site the building in an arid region. But don't build a mud-brick, 20-story hotel on a tropical seashore: It would wash away during the first heavy rainstorm if it didn't first collapse under its own weight. The technique just doesn't scale up.

Robert Troy, chairman and managing director of Verilog, agrees that something is wrong with the current state of software practice (Fig 1a). Writing software line by line is a craft comparable with building a wall brick by brick, says Troy. Further, text-based programming is not an engineering discipline, even if engineers are writing the software. But software practice is only half of the problem Troy sees. Engineers at least realize that writing software must become a form of engineering. Management contributes to the problem by continuing to view software as a service rather than as a product.

Fig 1b diagrams Troy's schema for engineered software. Here, engineers fashion software systems at the engineering level by designing and assembling software components. These engineers use diagrammatic-programming systems, writing absolutely no code at all. Management directs the engineers to produce usable software components as products. Management also sets up a crucially important new certifying organization that ensures that the software components are useful, functional, and bug free. Without the certifying organization, reusable software will continue to be just a dream.

Although Verilog already makes computer-aided software-engineering (CASE) tools for OOP and real-time systems, the company is working with Aerospatiale to develop general-purpose compilers that accept charts, graphs, and diagrams as inputs.

Conventional CASE tools do produce lots of charts, graphs, and specifications but no actual software beyond header files and function prototypes. Consequently, engineers who use conventional CASE tools labor mightily to produce a detailed design at the front end of a project. Then they go off for a year or two to code the design. Typically, at the end of the project, the resulting program bears no resemblance to the original design's documentation. As Troy sees it, the solution to this problem is to make the CASE material compilable.

The company will not field-test its diagrammatic compilers until mid-1994, and they will not be commercially available until late 1994. Verilog's software-analysis and reverse-engineering tool, Logiscope ($800 to $15,000), provides a hint of what the company's version of diagrammatic programming might look like. Logiscope extracts graphs, histograms, and tables from a program to illuminate the program's structure, however large. The tool understands 90 programming languages and dialects, including Ada, C, C++, Fortran, and Pascal. It determines test coverage and recasts the structure of a program from one language to another. It runs under the Open Software Foundation's Motif specification.

A quick tour of some of the available diagrammatic-programming systems discloses that they fall into the following four broad categories--with some products overlapping categories:



National Instruments began work on its diagrammatic-programming system, LabView ($1995), in 1984, qualifying the company as the undisputed pioneer of diagrammatic programming. When developing LabView, inventors Kodosky and James Truchard had a vision of "virtual instruments"--software entities that would plug together as easily as lab instruments do. Kodosky chose the data-flow diagram as a visual metaphor.

Conventional software diagrams, such as the data-flow diagram, invariably prove inadequate as compilable program descriptions. In National's case, the company's programmers had to immediately extend the data-flow diagram to make it into a complete program specification. For example, as soon as the first prototype was working, the need for icons that performed looping and iterating became painfully apparent. These icons perform the functions of common software constructs such as While, For, and Do loops. The company also concocted icons for control that correspond to case statements as well as an icon that impresses a sequential execution order upon segments of the data-flow diagram.

LabView's evolution is a story of continued improvement. It began life as an interpreted application program that ran only on Macs. But because National told LabView users that the product would allow them to simply draw programs, users expected the program to be as easy to work with as MacPaint. MacPaint cuts and pastes mere pixel fields; enhancing LabView to allow cutting and pasting program elements proved much harder to effect.

Next, LabView became a compiled application instead of an interpreted one to raise execution speed. Soon after, the need for a "runtime" version became apparent as users began to ship LabView systems as products rather than as services. The latest version of LabView is a true compiler, producing programs that run stand-alone. It also calls test routines written in other languages. The software now runs on Sun workstations and Windows PCs as well as Macs.

With LabView, as with other diagrammatic-programming systems, multitasking is effortless and crystal clear. If your application demands multitasking, you simply draw multiple streams of execution. Using textual programming for real-time programming results in murky code that rivals in incomprehensibility old-fashioned Fortran "spaghetti" code.
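
For contrast, here is what "two streams of execution" look like when forced through text--a hedged C sketch using POSIX threads, with invented task names:

/* two_tasks.c -- two "lines of execution" rendered as text.
   A hypothetical sketch; compile with: cc two_tasks.c -lpthread */
#include <pthread.h>
#include <stdio.h>

static void *acquire(void *arg)      /* first stream of execution  */
{
    (void)arg;
    for (int i = 0; i < 3; i++)
        printf("acquire: sample %d\n", i);
    return NULL;
}

static void *display(void *arg)      /* second stream of execution */
{
    (void)arg;
    for (int i = 0; i < 3; i++)
        printf("display: update %d\n", i);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, acquire, NULL);  /* draw line 1 */
    pthread_create(&t2, NULL, display, NULL);  /* draw line 2 */
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}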

Note that National Instruments did not abandon text-based programming. The company finds that a certain percentage of test-program writers simply cannot use a diagrammatic-programming system. Lacking a foundation in cognitive psychology, National takes a stab at explaining this phenomenon by positing that some programmers are too used to text-based tools to be able to switch to diagrams. For these programmers, the company has a semiautomatic text-based program generator, called LabWindows.

Wavetek's WaveTest ($1995 for PCs, $6490 to $7995 for various Digital Equipment Corp computers) has evolved into a similar diagrammatic-programming system for IEEE-488, VXI, and RS-232C instruments. The latest version runs under Microsoft's Windows and supports dynamic data exchange (DDE). Wavetek combines two visual metaphors: The data-acquisition portion of the software employs the flow chart as a visual metaphor, and the analysis portion--a product developed by DEC--uses the data-flow diagram.

Wavetek had to add enhanced case, looping, and branching icons to the standard flow-chart symbols. The company also added an interesting real-time construct to handle asynchronous service requests (SRQs) from IEEE-488 instruments. To set up a prioritized interrupt handler, you simply expand the SRQ icon and fill in a table. This table provides a clean and concise solution to what is a nasty bit of coding in other programming systems.
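
The C sketch below suggests what such a fill-in table replaces; it is a hypothetical, table-driven SRQ dispatcher in which table order is the priority. The addresses and handler names are invented, and a real driver would serial-poll the bus instead of using the stub shown.

/* srq_table.c -- hypothetical table-driven, prioritized SRQ dispatch.
   Table order is the priority: the first requesting instrument found
   is serviced first. A real driver would serial-poll the IEEE-488 bus. */
#include <stdio.h>

typedef struct {
    int  gpib_address;           /* which IEEE-488 instrument */
    void (*handler)(void);       /* routine to run on its SRQ */
} srq_entry_t;

static void handle_dvm(void)   { puts("servicing DVM"); }
static void handle_scope(void) { puts("servicing scope"); }

static const srq_entry_t srq_table[] = {
    { 22, handle_dvm   },        /* highest priority */
    {  7, handle_scope },
};

/* Stub standing in for a serial poll of one instrument. */
static int is_requesting(int addr) { return addr == 7; }

static void on_srq(void)         /* called when the SRQ line asserts */
{
    for (unsigned i = 0; i < sizeof srq_table / sizeof srq_table[0]; i++)
        if (is_requesting(srq_table[i].gpib_address)) {
            srq_table[i].handler();
            return;
        }
}

int main(void)
{
    on_srq();                    /* simulate one SRQ; prints "servicing scope" */
    return 0;
}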

To accommodate text-based programmers, the company's VIP ($695) allows text-based programs to link to WaveTest's 200+ instrument-control drivers. The program includes a front-panel editor.

Also bridging data acquisition and display, as well as general-purpose simulation, is Hewlett-Packard's VEE (simulator, $995; IEEE-488 control, $500). This software runs on HP 9000 computers under HP-UX (HP's Unix) and the X Window System. The visual metaphor for VEE (pronounced "vee") is the data-flow diagram. You draw your program by connecting icons. The software also comes with icons for engineering and mathematical operations. The icons themselves are examples of OOP and tend to be more complex and powerful than LabView's or WaveTest's. For example, the simple-seeming multiplication icon can perform any mathematically allowable multiplication. The icon can sense the types of the data that appear at its front end. The icon can multiply integers, floating-point numbers, complex numbers, and matrixes without specific instructions.
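
In C terms, the icon's type sensing corresponds roughly to dispatching on a type tag. The sketch below is a hypothetical illustration covering only integers and reals (VEE's icon also handles complex numbers and matrixes); it is not HP's implementation.

/* multiply.c -- rough analog of a type-sensing multiply "icon".
   Hypothetical sketch: dispatch on a type tag, promoting to double
   when the operand types differ. */
#include <stdio.h>

typedef enum { INT_VAL, REAL_VAL } kind_t;

typedef struct {
    kind_t kind;
    union { long i; double r; } v;
} value_t;

static value_t multiply(value_t a, value_t b)
{
    value_t out;
    if (a.kind == INT_VAL && b.kind == INT_VAL) {
        out.kind = INT_VAL;
        out.v.i  = a.v.i * b.v.i;
    } else {                      /* promote mixed operands to real */
        double x = (a.kind == INT_VAL) ? (double)a.v.i : a.v.r;
        double y = (b.kind == INT_VAL) ? (double)b.v.i : b.v.r;
        out.kind = REAL_VAL;
        out.v.r  = x * y;
    }
    return out;
}

int main(void)
{
    value_t a = { INT_VAL,  { .i = 6   } };
    value_t b = { REAL_VAL, { .r = 7.5 } };
    printf("product = %g\n", multiply(a, b).v.r);   /* prints 45 */
    return 0;
}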

Not all test-software generators take a diagrammatic-programming tack. Capital Equipment's TestPoint ($995) takes an adamant anti-icon stance. This Microsoft Windows test-program generator reduces text-based OOP to the barest possible minimum. To concoct a test program, you drag and drop named objects from various collections into your "action list" (program). Some of the objects require you to type a few parameters. But you set up most objects in your action list for an application by selecting options from pop-up menus.

The software can multitask IEEE-488 and RS-232C instruments. One interesting feature is its ability to mix test results, schematics, and pictorial information. You can use this feature to make computerized equivalents of the famous Sam's "Photofact" annotated circuit diagrams. That is, you could aid test technicians by combining a circuit diagram showing a test point with a photo of a pc board. You could also display the required readings against the actual readings at that test point.

Keithley MetraByte's Visual-DAS ($99) takes a similar approach. The software allows a programmer to incorporate test routines for the company's data-acquisition boards into Microsoft Visual Basic for Windows programs simply by selecting and customizing objects. Meanwhile, Intelligent Instrumentation's Visual Designer adheres to the diagrammatic-programming paradigm.

Software addresses specific tasks

Since DSP engineers have always manually diagrammed their designs, it's not surprising that several vendors have developed diagrammatic compilers for DSP. Without a diagrammatic compiler, DSP is a programmer's nightmare that combines arcane mathematical algorithms, abstruse physics, and quirky µP instruction sets into a horrific witch's brew.

Comdisco's Signal Processing WorkSystem ($25,000), which runs on Unix workstations, began life as a simulator. The simulator accepted simulation programs drawn on workstation screens using engineers' DSP function blocks. Comdisco's visual metaphor is the engineers' familiar block diagram. Now that C compilers are available for DSP µPs, the system has evolved into a program generator as well.

Similarly, when Star Semiconductor developed its SPROC DSP chip, it also fielded its SPROClab diagrammatic-programming system ($5460). The IBM PC software allows you to interconnect standard function blocks to build signal-flow diagrams. It then compiles these diagrams directly onto the DSP chip's hardware. (You can call these functions from programs written in other languages.) The software package also includes a debugger.

Other diagrammatic-programming tools for DSP come from Analog Devices, The Athena Group, Compass Design Automation (includes VHDL entry), Dynetics, Hyperception, Multiprocessor Toolsmiths, and Signal Technology.

However, diagrammatic programming isn't limited to test systems and DSP. Extend V2.0 ($695) from Imagine That Inc allows you to simulate complex systems on a Mac. This general-purpose dynamic-system simulator comes with function-block icons for amplifiers, filters, digital gates, and control functions as well as icons for mechanical systems and economics. The latest version has extensions ($990 each) for business-process reengineering and manufacturing. It also has data-analysis and display icons so that you can see the results of your simulation.

Integrated Systems' MATRIXx ($2500 to $63,000) includes a diagrammatic-programming system exclusively for control engineers. The software, which runs on Unix workstations, accepts program specifications in the form of control engineers' block diagrams. Underlying the simple-seeming blocks are rigorous mathematical analytical engines for linear-system analysis. A fuzzy-logic system is also available.

The system first creates a non-real-time simulation from the block diagram. You can run this simulation from a virtual front panel, observing the simulation's response. The program displays outputs in graphical forms familiar to control engineers, such as Bode, root-locus, and Nichols plots. The software also incorporates measured data to simulate real-world devices and generates real-time source programs in C or Ada. The company also makes a Multibus-based simulator on which you can run programs in real time.

Another example of diagrammatic programming is the neural network--the epitome of parallelism. For example, NeuralWare's NeuralWorks ($3995 to $7995) allows you to select any one of 28 major neural-network paradigms and dozens of variations. The software, which runs on PCs and common workstations, displays a diagram of the network you have chosen and lets you customize the network via a menu. As you train your network, the software displays diagrams of the network's evolving structure and outputs. Only when you are finished training your network does the software generate the C code that realizes your network.

A little-remarked-upon deficiency of text-based programming tools is that they suppress time. Not surprisingly, timing analyzers for digital circuits, such as those from Chronology and Dr Design, feature powerful graphical means for describing timing relationships.

If you've a hankering to try diagrammatic programming, you are not confined to grandiose, high-level projects such as designing an ASIC. Tao Research's GDS-11 ($1725) is a diagrammatic-programming system for the 68HC11 single-chip microprocessor that includes a compiler, a simulator, and a symbolic debugger. The software currently runs only on the Macintosh, but versions for Microsoft Windows and the 68HC16 are in the works.

Just like other diagrammatic-programming systems, GDS-11 makes multitasking effortless. You simply draw a state-transition diagram for each independent process. The state-transition diagrams act as visual metaphors and follow usual engineering practice with a few extensions. One minor change is that the states are in rectangular boxes instead of ovals because boxes are easier for a computer to draw than ovals. More important, you can subsume a collection of states and their relationships under a single state. That is, you can have submachines hierarchically ranked beneath a parent state machine. This facility proves handy when a state machine becomes too big or unwieldy to display on a Mac's screen.
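
For comparison, even a three-state diagram turns into the sort of C below when rendered as text; the states and events here are hypothetical. A drawn diagram conveys the same transitions at a glance.

/* fsm.c -- textual rendition of a small state-transition diagram.
   States and events are hypothetical, for illustration only. */
#include <stdio.h>

typedef enum { IDLE, RUNNING, DONE } state_t;
typedef enum { EV_START, EV_FINISH, EV_RESET } event_t;

static state_t step(state_t s, event_t e)
{
    switch (s) {                  /* one case per state bubble */
    case IDLE:    return (e == EV_START)  ? RUNNING : IDLE;
    case RUNNING: return (e == EV_FINISH) ? DONE    : RUNNING;
    case DONE:    return (e == EV_RESET)  ? IDLE    : DONE;
    }
    return s;
}

int main(void)
{
    static const event_t script[] = { EV_START, EV_FINISH, EV_RESET };
    state_t s = IDLE;

    for (unsigned i = 0; i < sizeof script / sizeof script[0]; i++) {
        s = step(s, script[i]);
        printf("after event %u: state %d\n", i, (int)s);
    }
    return 0;
}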

As the capacities of gate arrays and field-programmable gate arrays grow, the need for special-purpose compilers that can handle such devices grows apace. One class of special-purpose digital-device compilers is "hardware-description languages" (HDLs). In a classic case of "give a carpenter a job to do and he'll think of a solution in terms of hammer and nails," textually oriented proponents of HDLs are bucking the trend toward making programming more like engineering; they want to turn digital engineering into programming. Instead of drawing a symbolic representation of what an electronic device is, HDL proponents would have digital engineers enter a text description of what it does.

The rationale for the switch from diagrammatic to textual methods is that gate-level symbolic systems are too cumbersome for large designs, necessitating a shift to a form of programming. And, of course, "programming" can only mean entering long strings of ASCII text into bottomless files.

Or does it? Consider architecture--a visually oriented profession if ever there was one. When architects design a house, they produce blueprints detailing every brick, every nail in every truss, every doorknob, and every hinge. But if architects are designing a large shopping center or planning a city, they don't abandon blueprints for text. They continue to produce visual designs but on a different scale and a different level of detail than when building a house.

Because HDLs must describe inherently parallel hardware elements with a sequential stream of ASCII characters, the "artificial complexity" of HDLs is even higher than that of conventional programming languages. So complex are HDLs that even experienced users caution against using more than a tiny, proven subset of an HDL's full range of expressiveness.

Further, even ardent HDL advocates admit that HDLs do a poor job of handling random logic. HDLs fare best on regular hardware structures, providing an easy way to size such structures for specific applications by changing a few parameters. But stripped of the artificial complexity of a text-based programming language, such scalable HDL constructs reduce to a fill-in-the-blanks chart--a staple of diagrammatic-programming systems.

One vendor, i-Logix Inc, has pursued a diagrammatic path to "synthesizing" digital designs. The company chose a prodigiously enhanced state-transition diagram as one of its visual metaphors. The company's booklet (Ref 2) describing the enhancements is worth reading to improve your manually drawn state-transition diagrams. Along with the enhanced state-transition charts, the software provides an activity chart for diagramming process views, functions, activities, interfaces, and data flow. Last, a module chart shows how the components of the architecture relate physically.

The company's first product, Statemate V5.0 ($20,000 to $40,000), simulates finite-state machines on Unix workstations. You draw a state-transition diagram and other supporting documentation on a workstation screen. You then link your state machine to a simulated operator's control panel and "operate" your state machine. Recent improvements allow you to use a finite-state-machine chart as a template with which you can instantiate multiple examples of that chart. The software now also aids you in correlating system requirements with the state machines you have designed. Such "requirements traceability" is easy because the software lets you directly compile engineering documents rather than using an HDL. Similarly, R-Active Concepts' BetterState ($995) turns enhanced state charts into C code.

Another product from i-Logix, Express VHDL V3.0 ($32,500), turns your state machine into an HDL listing in VHSIC HDL (VHDL). You can then compile the resulting VHDL listing into an ASIC. In other words, you can realize an elegant finite-state machine in an ASIC at the press of a key without writing any code.

Charting a slightly different course, Redwood Design Automation's Reveal ($65,000) takes VHDL and Verilog HDL text files as inputs. The software then abstracts the HDL elements into compact, visual representations. You can manipulate the structure of these representations as well as simulate the diagrammed systems.

Diagrammatic tools can also help you make sense of text-based source code. For example, Procase's Smartsystem ($35,000) can convert an ugly, undocumented, 1-million-line C program into a comprehensible directed graph (bubble chart). The software has facilities for navigating through this directed graph of call dependencies and homing in on selected sections of the program being analyzed. It also automatically finds dead code and a variety of syntax errors.

Another debugging tool, Pure Software's Quantify ($1198), analyzes a program to determine "watchpoints." Thus, it comprehensively analyzes the execution of an entire program, including calls to shared and third-party libraries. Setting a flag during Make operations enables the performance measuring; it requires no recompilations or special libraries. The software, which runs on Sun workstations, graphically displays the gathered performance information in a striking "river of time," pinpointing the routines that consume the most processing time.

Finally, Wind River Systems' WindView ($4995) graphs with 1-µsec resolution the dynamic behavior of multitasking software systems. The software presents a time line of the actual sequence of high-level system activities, such as task switching, interrupts, semaphore operations, and message passing.

Distributed power takes center stage

Article from: EDN

Author: Charles Small

Distributed power has become a strategic architecture, if not a total solution, for digital systems. In particular, systems that need flexibility in either supply voltages or power levels benefit from distributed power, which also suits large digital systems.

Digital systems that in the past would have used a single, centralized power supply now use distributed-power supplies. A distributed-power architecture is particularly attractive to several classes of digital systems. The first class is those systems that need multiple voltage levels. Also, highly configurable systems, such as workstations, can benefit from distributed power. Distributed power is not limited to just small- and medium-sized digital systems; by using distributed power, large digital systems, such as private-branch exchanges and digital telecommunications systems, can eliminate both high-current, low-voltage bus bars and the single-point failure mode of a centralized power supply.

The term "distributed power" means that each pc board or module in a digital system has its own local dc/dc converter situated physically close to the point of load (Ref 1). One key factor of the attractiveness of distributed power is purely mechanical: It lets you handle distributed-power dc/dc converters as components, assembling them on your pc boards just as you would any component. Bulk power supplies, on the other hand, are typically large mechanical assemblies that you must install and connect separately.

Distributed power leads to a significant advantage for racked systems. Rather than distributing low voltages, such as 3 or 5V, at high current, a distributed-power system's ac/dc "bulk" converter dispenses higher voltages, such as 48 or 300V dc, to its pc boards (Fig 1). Board-level distributed-power converters are available that accept standard telecomm, military, and industrial input-voltage levels. These higher intermediate voltages obviously result in proportionally lower currents. Consequently, conducting the lower currents requires much less copper and fewer backplane-connector pins.
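
The current savings are just Ohm's-law arithmetic; the C sketch below works the numbers for an assumed 600W shelf (the load figure is invented; the 5, 48, and 300V levels come from the text).

/* bus_current.c -- intermediate-bus current vs distribution voltage.
   The 600W load is an assumed example; I = P/V. */
#include <stdio.h>

int main(void)
{
    const double watts   = 600.0;
    const double volts[] = { 5.0, 48.0, 300.0 };

    for (unsigned i = 0; i < sizeof volts / sizeof volts[0]; i++)
        printf("%5.0fV bus: %6.2fA\n", volts[i], watts / volts[i]);
    /* 5V: 120.00A; 48V: 12.50A; 300V: 2.00A --
       higher bus voltage means far less copper and fewer pins. */
    return 0;
}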

Distributed-power systems have many physically smaller power assemblies than do centralized-power systems. Centralized-power systems tend to have a small number of heavy assemblies. The weight of the power assemblies is important not only in manufacturing but also in the impact on a system's resistance to vibration and shock.

As the inexpensive supplies in PCs show, the hardware cost of a custom, bulk supply can be low, but other custom-supply costs are not. A custom supply often takes a significant amount of time to design. The power supply's designer must foretell the maximum load currents the supply will encounter over the life of a product, even if the owner later installs options. While standard ICs' data sheets provide the means to estimate power consumption, such estimations for custom devices can require advanced tools, such as Systems Science's $18,500 PowerSlim for VHDL ICs. Consequently, custom supplies are often overdesigned.

Any changes in requirements entail design changes. After each redesign, you must requalify the custom supply with safety agencies. If the custom power supply has a fan, the fan is a limited-lifetime component with a relatively high failure rate.

Rather than concentrate power converters and their resulting power dissipation, distributed-power systems diffuse heat throughout a system. Using distributed power, onboard converters in the 5 to 50W range can supply most loads. In these cases, natural convection can often cool the systems, eliminating the use of fans. Higher loads often require forced convection. The price for poor cooling is a 50% reduction in MTBF for every 10°C temperature rise. Or, as Calex's Steve Hageman says, "If you cannot touch your design because it runs too hot, it probably isn't reliable."
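
That halving rule compounds quickly; the sketch below works the arithmetic for an assumed 1-million-hour baseline MTBF (an invented figure).

/* mtbf.c -- 50% MTBF reduction per 10 degrees C rise, compounded.
   The 1,000,000-hour baseline is an assumed example.
   Compile with: cc mtbf.c -lm */
#include <math.h>
#include <stdio.h>

int main(void)
{
    const double mtbf_base = 1.0e6;          /* hours at reference temp */

    for (double rise = 0.0; rise <= 40.0; rise += 10.0)
        printf("+%2.0f C rise: MTBF = %7.0f hours\n",
               rise, mtbf_base * pow(0.5, rise / 10.0));
    /* a 40 degree rise leaves only 62,500 hours -- one-sixteenth */
    return 0;
}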

Distributed power is not a new concept. Engineers have long been using DIP-sized dc/dc converters to develop tiny amounts of ±12 or ±15V power for RS-232 ports or small analog circuits from local 5V digital-circuit power. Even today, a less efficient--but much less costly--onboard linear regulator is often the best choice for deriving small amounts of power from a higher intermediate voltage. The telecomm industry is also using small, board-mounted dc/dc converters to develop electronics voltages from standard telecomm-equipment voltages, such as 48V dc. However, these dc/dc converters are limited to specific applications.

The concept of distributed power really took off in the mid-1980s when Vicor Inc fielded high-power, compact converters in component form, and, simultaneously, the workstation industry needed to develop highly configurable products. Vicor's early lead led to an industry-wide "Vicor-standard" footprint--but, alas, not a standard pinout or any compatibility between products from different suppliers (Fig 2).

Benefits of distributed power

Distributed power can reduce development cycles. You can select a converter for each pc board as you develop the board. The power supplies can thus be integral to your system--not an afterthought. You do not have to wait until the end of your development cycle to determine a system's power requirements all at once. A distributed-power system design is very predictable and can lead to reduced NRE costs. These savings can outweigh the distributed-power modules' higher cost. Power Micro expects 1500W distributed-power systems to cost less than $0.75/W within the next two to three years.

Using the 1-converter/board approach eliminates low-voltage dc distribution--except for distributing power on individual pc boards themselves, of course. In other words, you can eliminate hefty wiring harnesses and bus bars. Because a distributed-power system minimizes parasitics, it can have better transient response than that of a centralized-power system.

Upgrades are often easier, too. Consider that when you want to upgrade a system, you may want to do more than just increase the power. You may need to add a new voltage for some advanced ICs that operate from lower voltages. As part of an upgrade, you can sometimes simply swap out the local power converter rather than the whole power system. Upgrading a system having a centralized-power supply this way may not be physically possible: The pc traces and backplane connector may not have enough pins or enough power-handling capacity.

The architecture of a power system includes not only the power buses but also power-supply control, fault diagnosis, and status reporting. Distributed-power systems obviate remote sensing, along with its associated reliability and diagnostic problems, in most systems.

You could monitor or control a wide range of power-system elements: output voltages, airflows, temperatures, and energy saving during battery operation, among others. The most basic and useful control is turning the converter on and off with an external signal. Using such signals, you can easily accomplish power sequencing. Some ac control panels, such as those from Pulizzi Engineering, can help you sequence your bulk supplies.

Telecomm systems often require the converter to sense its own input voltage and to turn itself off if the input voltage goes below a certain value to safeguard a battery. Some newer converters allow you to program the voltage levels at which the converter turns on or off. You could also adjust the dc/dc converters' output-voltage margins with fixed resistors and analog switches or with D/A converters.

Fault isolation

You can isolate faults and contain damage more easily in a distributed-power system than in a centralized-power system. You may need no more than simple board-level diagnostics. Distributed-power systems usually combine the converter with the "field-replaceable unit" (FRU) it powers. Standard engineering techniques can make pc boards and modules "hot-swappable." Thus, a service technician can simply replace the entire function and its power supply at the same time in the event of a failure (Fig 3). If a converter's output in a distributed-power system goes high, it damages only one pc board. If a centralized-power supply sustains an overvoltage condition, it can fry every component in the entire system.

Acceptable reliability differs, depending on whether you want a fault-tolerant or a high-availability system. By definition, no single failure ever brings down a fault-tolerant system. "Fault tolerant" implies full-blown duplication of hardware and exhaustive self-diagnostics. "High availability" means that only the rarest and most unlikely failures can bring down the system. High availability trades off availability for cost.

The most obvious potential culprit for a catastrophic, single-point failure in a distributed-power system is the ac/dc bulk converter. The probability of an output short in an ac/dc converter is very small, but not zero. Techniques used in high-availability systems to make the ac/dc conversion less failure-prone include N+1 redundant ac/dc converters (or N+2...N+M). A fault-tolerant system would have 2N-redundant ac/dc converters.

In some cases, a pc board may demand more current than a single board-level dc/dc converter can supply while still meeting the component-height restrictions of your card cage. In such cases, consider paralleling on-board converters. You can also parallel on-board converters for N+1 redundancy.

Whether you are paralleling ac/dc converters or dc/dc converters, paralleling adds complexity to the system and typically entails accepting some performance or cost compromises. When paralleling converters, mount all the converters in a common thermal environment so that they experience as close to the same temperature as possible.

Paralleling supplies with blocking ("ORing") diodes is more reliable than simply paralleling the supplies' outputs. Run such diodes hot, and use very low forward-drop devices. After all, reverse-leakage current is an issue only on failure.

Distributed power makes hot swapping easier. Because hot swapping a module of a distributed-power system affects only a small portion of the total power, "glitch-free" swaps are easy to ensure. Hot swapping can be a big advantage for large systems that must remain continuously on-line. Blocking diodes also simplify hot swapping.

If you step back and take a systemwide view, you will see that a distributed-power architecture duplicates many power-supply circuit elements. In a distributed-power system, each converter has its own control and fault-handling circuitry. In a bulk-supply system, the bulk supply has only one of each of these elements.

Given that increasing the number of components decreases reliability, distributed-power makers have had to increase the reliability of their dc/dc converters. For example, Vicor has demonstrated an MTBF of greater than 20 million hours. However, not all converter makers have taken the time to characterize their products over such long periods. Consequently, you often have no choice other than to rely on calculated MTBF.

Although telecomm standards exist for calculating MTBF, most power-supply vendors use MIL-HDBK-217 instead. Even though this practice is widespread, MIL-HDBK-217 has its problems; it depends on a database of component types and their field-failure rates. This database focuses on military components and takes time to accumulate. As a consequence, most newer commercial technologies are not available in the database.

MIL-HDBK-217 imposes a harsh--and possibly unjustified--penalty on nonmilitary components. Further, some of its component-failure rates are not consistent with those components' actual performances. For example, transformers and magnetic devices have a very low actual failure rate, but MIL-HDBK-217's predicted rate for these components is very high. ICs fare even worse than do magnetic components.

At least two converter companies have compared MIL-HDBK-217's predictions to the actual field performance of their dc/dc converters. Ericsson finds that its converters run 3 to 10 times longer than MIL-HDBK-217 predicts, while Vicor sees two to three times longer performance.

To select your intermediate-bus voltage, first consider the ease of safety approval vs cost. A lower voltage entails more expense to handle the higher currents, but the lower voltage may be more acceptable to regulatory agencies. [TABULAR DATA OMITTED]

Every country has some kind of safety standard or requirement that limits the maximum voltage to which you can expose equipment operators and service personnel. The common term for this limit is "safety extra-low voltage" (SELV), but not all agencies set SELV at the same level. The most commonly accepted value for SELV is slightly more than 60V. Consequently, if your intermediate-bus voltage is less than 60V, your product more easily complies with safety shielding and regulations.

However, your nominal intermediate-bus voltage has high and low limits for conditions such as battery charging and load switching. For the 48V-dc telecomm standard, for example, the maximum voltage is 60V--very close to the most generally accepted SELV limit. Therefore, a nominal 48V is currently the highest SELV for a distributed-power system's intermediate voltage.

But you could follow the example of mainframe-computer makers, rectifying and filtering the ac line to yield a 300V-dc intermediate voltage. This scheme reduces the cost of both the ac/dc converter and the intermediate-voltage distribution. Your genuine safety concerns for a high-voltage bus are creepage and clearance, preventing access to shock hazards, and large amounts of stored energy available to short circuits.

If the load on the intermediate-voltage bus switches rapidly, such as when a fuse opens, the bus's inductance can generate a voltage pulse having as much as 70 W-sec of energy.

For a 300V intermediate bus, overload and short-circuit protection require large devices to handle inrush and arcing. However, there is a dearth of standard connectors and fewer standard converters for 300V. And backing up a 300V bus with a battery obviously requires more cells than does backing up a 48V bus.

You need to carefully consider overcurrent protection for the intermediate bus. One common problem is that start-up requires large currents to charge the bus's capacitance. This large charging current means that the overcurrent-protection circuits can trip at start-up, and the system never actually gets started. In this case, you must sequentially enable the load converters only after the bus voltage is stable.

You must also carefully choose your board-level converters' overcurrent protection. For example, in a battery-backed system, the constant-power nature of the load could trap a brick-wall-limiting converter at a point beyond the knee of its overcurrent characteristic.

Opinions differ about the relative prevalence of low- and high-voltage intermediate-bus distributed-power systems. According to Ericsson, most distributed-power systems have bus voltages below the SELV limit. Vicor, on the other hand, sees a 50:50 distribution between 48 and 300V systems.

You can opt for isolated or nonisolated dc/dc converters. Isolated converters are more expensive but are also safer, and they reduce problems with system noise, ground loops, and interaction between outputs. In addition to operating from either polarity of input voltage, isolated converters permit flexible system grounding.

In a centralized-power system or a distributed-power system using nonisolated converters, the common of the power-distribution system is also signal common. The common of the power-distribution bus is isolated from signal commons in distributed-power systems using isolated dc/dc converters.

No engineer runs dc/dc converters continuously at their rated full load. Ericsson reports that most designers allow margins of 15 to 40%. The penalty for under-margining is obviously more extreme than that for over-margining. The power-supply margins are easier to determine in a distributed-power system.

Constant vs. variable frequency

Each converter manufacturer has its own circuit topology. Some employ constant-frequency converters that use PWM for voltage control. Vicor uses a variable-frequency resonant scheme. According to Vicor, the efficiency of PWM converters is usually lower than that of similar-capacity resonant converters. Vicor also notes that a PWM converter's efficiency drops rapidly with load, culminating with high dissipation under output short circuit. The company also states that PWM converters emit difficult-to-filter conducted and radiated common-mode (Denkaplate), normal-mode, and radiated noise and that their output ripple increases with load.

PWM-converter makers, however, have been busy enhancing their designs. So your best guides are spec sheets and your own tests. However, make sure your quality-assurance staff is not using outmoded tests designed for linear supplies (see Ref 2 for proper setups). Test your candidate converters in a realistic circuit.

A converter's operating frequency is important, however, because it determines the time required to sense and respond to a change in load current. The converter's topology and circuit design set a limit for the amount of energy delivered to the load per converter operating cycle. A converter may take several operating cycles to meet a demand for dynamic current.

You can synchronize many converters, but AT&T questions this practice. The fear is that two units operating at nearly the same frequency will "beat" and produce extraneous emissions. But synchronizing makes emissions worse because it causes all the converters' emissions to add arithmetically. Without synchronization, the reflected currents add in rms fashion.

You must look very closely at efficiency. Small size combined with low efficiency spells disaster. The higher the efficiency, the higher the MTBF for both the converter and the system. Also, high efficiency extends backup-battery holdup time. Efficient converters permit the use of smaller heat sinks and quieter fans.

Converter efficiency is a family of curves, not a single figure. So, look at efficiency across both line and load variations. For safety's sake, also look at dissipation under short-circuit conditions.

Efficiency for dc/dc converters currently ranges from about 75 to 83%. At first glance, this small range may appear to be meaningless, but it is actually a very significant difference because converting differences in percentages into percentage differences is not intuitive. A 75%-efficient converter dissipates 60% more power at full load than does an 83%-efficient converter. Politicians take advantage of this weakness in human intuition when they call an increase in taxes from 5 to 6% a "1% increase" (when it's really a 20% increase).
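
The arithmetic is worth working once; the sketch below does it for an assumed 100W load (the 75% and 83% figures come from the text).

/* dissipation.c -- why 75% vs 83% efficiency matters so much.
   The 100W output is an assumed example. Dissipation = Pout(1/eff - 1). */
#include <stdio.h>

static double dissipated(double p_out, double eff)
{
    return p_out * (1.0 / eff - 1.0);
}

int main(void)
{
    const double p   = 100.0;                /* watts delivered to load */
    const double d75 = dissipated(p, 0.75);  /* 33.3W of heat           */
    const double d83 = dissipated(p, 0.83);  /* 20.5W of heat           */

    printf("75%%-efficient: %.1fW lost; 83%%-efficient: %.1fW lost\n",
           d75, d83);
    printf("the 75%% unit dissipates %.0f%% more\n",
           (d75 / d83 - 1.0) * 100.0);       /* roughly 60%, as stated  */
    return 0;
}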

Advertised power levels for dc/dc converters are often very optimistic. Don't neglect the fine print, which says that the converters need heat sinks to achieve their advertised performance. Also note at what ambient temperature the converter needs derating. Converters from different manufacturers exhibit a wide range of ambient-temperature operation. "Ambient temperature" means different things to different manufacturers. See Table 1 for temperature definitions.

Ericsson finds that distributed converters have 3 to 11 times the power density of the pc board they occupy. That is, converters are a concentrated source of heat. Ericsson recommends, therefore, that for free convection, the converter occupy no more than about 2% of the pc board's area and that for forced convection, the converter occupy no more than about 7% of the pc board's area.

Only converters at both extremes of the power range use conduction for cooling (Table 2). Mainframe-computer converters that supply hundreds or even thousands of amps use a recirculating coolant. At the other extreme are low-power converters, 10W or less, that conduct heat out through their leads.

Convection cooling is more difficult to model and analyze than is conduction cooling. Free-convection cooling is very simple and reliable. Also, convection cooling does not entail the acoustic noise, maintenance, cost, and degraded reliability that fans introduce. However, many systems require fans because forced convection can cool about four times the power per board compared with free convection.

Opinions differ on filtering. Both Ericsson and Vicor say that you do not need to use a filter at the input of the dc/dc converter if you have properly designed and executed your dc distribution and decoupling. Datel, on the other hand, says most of its customers want such filters. Datel adds that engineers are specifying IEC noise limits for dc/dc converters (but not, of course, the IEC test setup, because the spec actually applies to ac-line noise).

Selecting ac/dc converters for distributed-power systems is much like selecting any ac-input supply. You need to decide if you want manually strapped or autoranging inputs for single- or 3-phase mains voltages. Even with power-factor correction, many systems are already drawing the maximum amount of current allowable from a single-phase connection (Table 3). Power-factor-correcting ac/dc converters are also becoming more common as regulatory agencies tighten up on conducted noise. One more hint: Look for ac/dc supplies that require no preload.

Problems with distributed power

One major problem with distributed power is that the most frequently promoted spec for dc/dc converters, power density, is also the most useless. Fantastic power densities tend to wilt after you expose them to the harsh glare of your application's environment. You can compare power densities of different makers' dc/dc converters only after taking into account heat sinks, derating, and other design considerations. Fully configured, some high-density dc/dc converters are large and heavy enough to damage their host pc board during shipping and usage.

Next, the switch to a 3.3V digital standard is not as easy as just swapping out converter modules. If you are to take the JEDEC standard seriously, its ±0.1V tolerance means that a µP drawing 4A could have no more than 0.025Ω trace resistance between itself and its dc/dc converter. In other words, a fine trace or a connector can put you out of spec. Also, noise currents in the system ground can quickly eat up 3.3V noise margins. If you have a mixed-voltage board, carefully check your margins for the worst-case supply condition: 3.3V supplies at their high end (3.4 or 3.6V) and 5V supplies at their low end (4.75 or 4.5V). You might be in for a nasty surprise if you are interfacing 3.3 and 5V ICs.
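
The budget is a one-line Ohm's-law division; the sketch below uses only the 4A load and ±0.1V tolerance cited in the text.

/* drop_budget.c -- maximum trace resistance on a 3.3V, +/-0.1V rail.
   Only the 4A load and 0.1V tolerance come from the text. */
#include <stdio.h>

int main(void)
{
    const double tol_volts = 0.1;  /* JEDEC 3.3V tolerance band     */
    const double load_amps = 4.0;  /* microprocessor supply current */

    printf("max distribution resistance = %.3f ohms\n",
           tol_volts / load_amps);
    /* 0.1V / 4A = 0.025 ohms: one fine trace or connector can blow it */
    return 0;
}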

Although JEDEC has promulgated a 3.3V standard, little actual conformance exists in the industry. Various manufacturers are going ahead with low-voltage standards other than 3.3V. Semtech notes that on mixed-voltage boards, you might have to be very careful how you sequence your supplies up and down to avoid failures. The company also notes that transient-voltage-protection devices for 3.3V circuits are rare.

At the new, lower voltages, large current surges can occur on the pc board itself. So-called "green" PCs (which switch large digital devices on or off as needed) and low-voltage disk drives are two possible sources of such surges. These surges may necessitate remote sensing for board-mounted converters, reintroducing a problem that distributed power supposedly eliminates (Fig 4). Further, the digital ICs themselves may be drawing pulsed currents at a high enough frequency that the skin effect may come into play in their power and ground lines.

References

[1.]"The Power Book," Ericsson Components AB, K3(93025) A-Ue,Stockholm, Sweden.
[2.]Hageman, Steven C, "DC/DC Converter Application Notes,"Calex, Concord, CA.
[3.]"Applying DC/DC Converters," Conversion Devices Inc,Brockton, MA.