Columbia University Computing History   

Memories of Burroughs computers from readers of this site

As far as I know, Columbia University never had any Burroughs computers, although it did have some Burroughs adding machines and some pretty full-featured desk calculators in the early days, up through the 1970s.

Most recent update: Fri Feb 23 11:04:01 2024

Contents:

  1. Short Burroughs History by Loren Wilton
  2. More Burroughs History by Steven Emert

Short Burroughs History

Loren Wilton
Burroughs (Unisys)
January 2004

Burroughs (pre-Unisys days) had three main computer product lines, and some other interesting things we would now call minis or micros. All of them were born at the Pasadena Plant, originally known as the Burroughs Electrodata Division (EDD), which had begun as the Electrodata Corporation, bought by Burroughs in the early 1950s to get them out of adding machines and into computers.


First Generation

The first generation machines were the 205 and later the 220. (Note: not B205 or B220. The "B" numbering scheme came later, and the B220 was an entirely different machine from the 220.) The 205 was a drum machine, contemporaneous with the IBM 650 or thereabouts: 2K words of 10 BCD digits plus sign, with the usual arithmetic capabilities. For a few more bucks you could hang an extra rack on the side of the processor and add the floating point module that did hardware floating point math.

The tape controller could handle something like 6 or 10 (I forget) tape drives. These were 3/4" wide tape, 12 rows. That is, 6 rows, bi-directional, interlaced. It was block addressable tape, and had to be preformatted by the tape controller before it could be used by the CPU. Shades of DECtape! I forget if the tape control was a channel control and could transfer independently of the processor; I think it might have. There was also an interesting coffin-shaped device called the DataFile (most commonly known by FEs as the DataFail) that had 50 strips of tape that ran from one bin to another, and a single head and pinch roller pair on a leadscrew in the center of the coffin. Since the tape was addressable and bi-directional, it didn't take too many seconds to screw your way over to the strip that had the data you needed, when it worked. Static was a killer though. (Compare with IBM Data Cell.)

There was also a choice of two unit record controllers you could hang on the beast, the PCC (Punched Card Converter) and the Cardatron. The PCC gave you a single reader/punch and a pair of printers. The Cardatron could handle up to 5 printers and 5 each (or maybe more; again, memory is hazy) of readers and punches. This machine contained a drum of its own that was used for rate conversion. The CPU did a full-track DMA transfer to this drum, then continued to process until the unit record operation was complete. I forget if they had invented an interrupt or if you had to poll for completion; I think the latter. BTW, Burroughs didn't at this time make unit record equipment. The punched card machines were IBM gang-summary punches, using the cable that would normally connect to the accounting machine, plus an imaginatively wired plugboard. The printers were 402s.

The 205 was quite a popular machine, I think well over 100 were built. Tended to end up at places like air bases; I know Norton had one.

After the 205 came the 220. Don't recall much about this machine, other than it was faster and bigger. This machine had core memory in place of the drum.


Second Generation

After the vacuum tube era, Burroughs EDD made two radically different transistorized machines. At the low end was the B200/B300/B500 series: duodecimal-word-based but character-addressable machines with small memory sizes, typically in the 4K to 16K character range. I never really worked with them, but it was clear to me from the architecture that they were intended as direct competition to the IBM 1401, which I considered to be architecturally the better machine by a little. Still, Burroughs sold boatloads of these machines, mostly into banks and other financial institutions. This was one of the first machines that could run MICR check sorters. It was a "BCL" machine, using the Burroughs Common Language character set, which curiously differed from BCD by only about 3 character graphics, having things like the ALGOL not-equal sign.

The second machine was the B5000. This fairly quickly mutated into the B5500, which fixed two or three really nasty problems with the B5000. Almost all machines sold were B5500s rather than B5000s. This was not only a stack machine and an ALGOL machine; it was the first machine that didn't have an assembler. It was also quite possibly the first machine released with a full operating system, designed at the same time as, and in conjunction with, the hardware. The OS was programmed in ESPOL, a variation on ALGOL. The compilers were themselves written in ALGOL.

We sold a boatload of these machines too, and #1 was still operating at Pasadena into the 1980s, doing the payroll. It was finally scrapped, but a couple of enterprising programmers contacted DATAMATION on the sly, and got a motion going to pressure Burroughs into donating the machine to the Smithsonian rather than scrapping it. This happened, but it was a really close call. The machine was literally within hours of heading for the junk dealer when the Smithsonian call made it to the plant. The techs had to go back and put parts back in that they had removed before it could be shipped out. We had these machines all over the world, even in Japan.


Third Generation

After the second generation machines, the plant decided that we needed two different kinds of machines for the third generation. There would be a small to medium sized mainframe used for business and accounting, and a large mainframe intended for scientific computing. This division already effectively existed with the B200/300/500 vs the B5500, so it was logical to continue in concept.

At this point the B2000/B3000 was born, which like most good Burroughs machines became the B2500/B3500 before the first one was sold. These machines were designed from scratch, with little reference to past history. They were COBOL machines, in the same sense that the B5000 was an ALGOL machine. The machine had a 16-bit word, but this was virtually immaterial. It was digit addressable, and all math (including addressing) was done in straight decimal. All instructions had variable field lengths, encoded in the instruction format. It was a three-address machine, so you could add A to B giving C in separate fields. Moreover, each field could be a different length, and each field could individually be a packed decimal field (4-bit data) or a display field (8-bit data). Unlike earlier BCD machines, this was a full EBCDIC machine. And like the 360 series, it also had an ASCII flag that could be set to change the way numbers were handled. Also like the 360s, the flag was never used for anything. (And it put the wrong zone digit in the numbers anyway!)
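
To make that field format concrete, here is a rough sketch in modern C++ of that style of three-address decimal add, where each operand has its own length and its own packed or display encoding. The struct layout, the names, and the high-order truncation behavior are illustrative only, not the actual B2500/B3500 instruction format:

    // A minimal sketch (not the real B2500/B3500 encoding) of three-address,
    // variable-length decimal arithmetic, where each operand can independently
    // be "packed" (two 4-bit digits per byte) or "display" (one digit per
    // 8-bit character).  All names here are illustrative.
    #include <cstdint>
    #include <iostream>
    #include <vector>

    struct DecimalField {
        std::vector<uint8_t> data;  // raw storage
        int length;                 // number of decimal digits
        bool packed;                // true: 4-bit digits, two per byte; false: 8-bit display digits
    };

    static int digitAt(const DecimalField& f, int i) {
        if (f.packed) {
            uint8_t b = f.data[i / 2];
            return (i % 2 == 0) ? (b >> 4) : (b & 0x0F);
        }
        return f.data[i] & 0x0F;    // low nibble of a display character (e.g. EBCDIC 0xF5 is '5')
    }

    static void setDigit(DecimalField& f, int i, int d) {
        if (f.packed) {
            uint8_t& b = f.data[i / 2];
            b = (i % 2 == 0) ? static_cast<uint8_t>((d << 4) | (b & 0x0F))
                             : static_cast<uint8_t>((b & 0xF0) | d);
        } else {
            f.data[i] = static_cast<uint8_t>(0xF0 | d);  // display digit with an EBCDIC-style zone
        }
    }

    // Three-address decimal add, C = A + B; each field has its own length and encoding.
    static void decimalAdd(const DecimalField& a, const DecimalField& b, DecimalField& c) {
        long long va = 0, vb = 0;
        for (int i = 0; i < a.length; ++i) va = va * 10 + digitAt(a, i);
        for (int i = 0; i < b.length; ++i) vb = vb * 10 + digitAt(b, i);
        long long sum = va + vb;
        for (int i = c.length - 1; i >= 0; --i) {  // high-order truncation on decimal overflow
            setDigit(c, i, static_cast<int>(sum % 10));
            sum /= 10;
        }
    }

    int main() {
        DecimalField a{{0x12, 0x30}, 3, true};         // packed 3-digit field: 123
        DecimalField b{{0xF4, 0xF5}, 2, false};        // display 2-digit field: 45
        DecimalField c{{0x00, 0x00, 0x00}, 5, true};   // packed 5-digit result field

        decimalAdd(a, b, c);                           // C = A + B
        for (int i = 0; i < c.length; ++i) std::cout << digitAt(c, i);
        std::cout << "\n";                             // prints 00168
        return 0;
    }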

The machine was specifically designed to be a COBOL machine, and COBOL-68 was the first compiler available to the field, along with the assembler. Unlike the B5500, the B2500/B3500 -- known as "Medium Systems" in later years -- had an assembler, and this was used to write the OS. The COBOL compiler and all later compilers were written in an ALGOL-like language known as BPL, Burroughs Programming Language. Originally this existed as a cross-compiler on the B5500 upstairs (good old #1), but in later years was bootstrapped into itself running on a B3500 box. This language wasn't released to customers until some 10 or more years later. In later years there were also FORTRAN (66 and 77), RPG-II, Pascal-68, and COBOL-74 compilers released to the field.

It was a standing joke that this machine broke one of the basic differentiations between a compiler and an assembler. Academics generally held that an assembler generated one machine instruction per source line, while a compiler generated many. In Cobol, for which the machine was designed, the compiler typically generated a single machine instruction per Cobol statement. Did that make Cobol an assembler?

The B2500 and B3500 (differing only in processor speed and in the peripherals Marketing would let you put on the machine) were very popular with banks, and moderately popular in schools with business DP courses or doing the school's own DP work, such as course scheduling or accounting. These machines were comparable in power to the 360/30 and 360/40, and about contemporaneous with them. Memory suggests that they were designed in 1959-1960 and the first machine released in 1962, but that might be a couple of years early, since these were 3rd generation machines.

About 1970 the next version of the machine was released, twice as fast and with 3x the maximum memory (1MD rather than 300K digits; a.k.a. .5MB vs 150KB, but we thought in terms of digits, not bytes). The B4500 (which became the B4700 by the time it was released) came with a new upward-compatible version of the OS, optimized to run check sorters and datacomm lines. Instead of the previous OS limitation of 20 simultaneous jobs, you could run 80. (And you really could, even on a fairly small machine.) Instead of one or at most two sorters, you could put 4 sorters on a single machine. This was a huge help to big clearing houses like the FRBs (Federal Reserve Banks), since it cut down the machine room required to support 10-16 sorters.

In addition to sorters, you could also hang 40 datacomm lines on a processor, and 20 channels of IO gear such as cards, tapes, and disks. All of these machines were of course disk based. Burroughs had head-per-track disks, with a 17ms average access and no seek time. Put an 80MB disk farm on a machine and you had a pretty impressive (and physically LARGE!) machine.

In order to increase system reliability and accessibility, we invented "shared disk". This consisted of an exchange so that you could put the same physical disks on up to 4 processors at once, and a Record Locking Unit (another shared peripheral) that the OS used to maintain record locks on the disk directories and user files. Thus you could have one machine doing remote capture and teller terminals over datacomm, two more handling the sorters, and a fourth doing batch or the like, all accessing the same databases at once. While a 4-machine shared cluster was unusual, virtually every bank of any size had either a 2x or 3x system. Note that this was a "loosely coupled" shared system, sharing only peripherals, not main memory.
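
Conceptually, the Record Locking Unit's role worked something like the C++ sketch below: every processor in the cluster asks the one shared unit for a lock on a (file, record) key before touching the shared disk, and releases it afterward. The class, the file name "DDA-MASTER", and the retry loop are hypothetical illustrations, not the actual OS interface:

    // Conceptual sketch of a shared record-locking authority for a loosely
    // coupled shared-disk cluster.  Names and behavior are illustrative only.
    #include <mutex>
    #include <set>
    #include <string>
    #include <utility>

    class RecordLockUnit {
        std::mutex m_;                                    // the single shared authority
        std::set<std::pair<std::string, long>> locked_;   // currently locked (file, record) keys
    public:
        // Returns true if the lock was granted; a real system would queue or retry.
        bool tryLock(const std::string& file, long record) {
            std::lock_guard<std::mutex> g(m_);
            return locked_.insert({file, record}).second;
        }
        void unlock(const std::string& file, long record) {
            std::lock_guard<std::mutex> g(m_);
            locked_.erase({file, record});
        }
    };

    // What each processor in the cluster conceptually does before updating a record.
    void postTransaction(RecordLockUnit& rlu, const std::string& file, long record) {
        while (!rlu.tryLock(file, record)) {
            // another processor holds the record; wait and retry (details omitted)
        }
        // ... read, update, and rewrite the record on the shared disk here ...
        rlu.unlock(file, record);
    }

    int main() {
        RecordLockUnit rlu;                       // one shared unit for the whole cluster
        postTransaction(rlu, "DDA-MASTER", 42);   // hypothetical file name and record number
        return 0;
    }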

There was a time, in the late 1970s to mid 1980s, when B4700s and the follow-on B4800 (twice the memory, twice as fast) handled 80% of all check clearing operations in the world. If you wrote a check, anywhere, it almost certainly went through one of my machines before you saw it again. We had the machines in every FRB branch but one, in all the major bank clearing houses both in and outside the US, and initially had SWIFT in Belgium, although internal corporate infighting got that account moved to Large Systems machines later, rather to their detriment.

The Medium Systems line continued with the B2900, B3900, and B4900 in the 1980s. About then the company became Unisys, and the machines were renamed. The B4900 became the V300, and new machines were built as the V400 (an actual dual-processor shared memory machine, our first), and the V500, which was a completely new machine (4x the speed, 100x the memory), and also a completely new software architecture - which nonetheless would still support the first B2500 program ever written with no changes!

By the mid 1990s, shortly after the introduction of the V400 and V500, political infighting in the corporation doomed the V-Series (nee Medium Systems) product line, and it was terminated, as the Small Systems line had been some years before.


B6x00

Rewinding to the early 1960s: there needed to be a follow-on machine to the B5500. This was to be the B6500, which became the B6700 by the time it was released and workable. This machine, while a design from scratch, was very much a child of the B5000 architecture. The same 48-bit word was used, although it could now hold either 8 BCL characters, 6 EBCDIC characters, or 12 hexadecimal digits, depending on flags in the address reference field. More tag bits were added to the word, the one tag bit of the B5000 having been shown not to be enough. The instruction set was greatly reworked, but it remained an ALGOL machine and a stack-oriented machine. It also, like the B5500 before it, implemented virtual memory and presence bits in the hardware, a decade before IBM "invented" virtual memory on the 370 series. The maximum number of processors sharing memory was increased from 2 to 6, and they became fully symmetrical, rather than a master and a slave.
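
The character arithmetic is simply 48/6 = 8, 48/8 = 6, and 48/4 = 12; the toy C++ function below just splits a 48-bit value at those three granularities. It illustrates the word's capacity only, not the actual B6700 tag and flag mechanism:

    // Toy illustration of viewing one 48-bit data word three ways:
    // 8 six-bit BCL characters, 6 eight-bit EBCDIC characters, or
    // 12 four-bit hexadecimal digits.
    #include <cstdint>
    #include <iostream>
    #include <vector>

    std::vector<unsigned> splitWord(uint64_t word48, int bitsPerUnit) {
        std::vector<unsigned> units;
        int count = 48 / bitsPerUnit;                 // 8, 6, or 12
        for (int i = count - 1; i >= 0; --i)          // most significant unit first
            units.push_back((word48 >> (i * bitsPerUnit)) & ((1u << bitsPerUnit) - 1));
        return units;
    }

    // splitWord(w, 6) -> 8 BCL characters
    // splitWord(w, 8) -> 6 EBCDIC characters
    // splitWord(w, 4) -> 12 hexadecimal digits

    int main() {
        uint64_t w = 0x123456789ABCULL;               // an arbitrary 48-bit value
        for (unsigned u : splitWord(w, 4)) std::cout << std::hex << u << ' ';
        std::cout << '\n';                            // prints its 12 hex digits
        return 0;
    }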

The B6500 was physically quite large, and was expensive. It did ALGOL very well, Cobol poorly, and unfortunately FORTRAN very poorly compared to a 7090. A great deal of effort went into FORTRAN compiler redesign to correct this, and vector math operators were added to the machine, creating the B6700. It still wasn't as fast as a 7090 at FORTRAN, but it was close.

Early on in this design the Pasadena Plant physically ran out of room. The plant was built for about 1500 people, and there were over 2500 people working there. There were no available offices, no furniture, no room to put any anywhere. One of the main managers of the B6500 had a "desk" that literally consisted of the door from his boss's office, laid across the sink in a janitor's closet. To say there were people coming out of the woodwork would not have been an exaggeration. As a result, the B6500 people got their own plant, on Proctor Road in City Of Industry, some 20 miles away. Since the B6500 was certainly larger than the B3500 then in production at Pasadena, the Proctor Plant became known as the Large Systems plant, and Pasadena (no longer EDD, although the asset tags on the furniture didn't know that) became Medium Systems.

Proctor started cranking out B6700s and trying to sell them. Obvious first customers were those with B5500s, but there weren't a lot of those. Medium Systems had by this time a fairly decent customer base. Since the same sales force sold both Medium Systems and Large Systems, and the sales commissions were a lot larger on Large Systems (being more expensive machines), the sales force started to cannibalize the Medium Systems market, selling B6700s in to replace B3500s.

Sometimes this worked. Sometimes the customer threw out the B6700 (which couldn't run sorters, and could barely run Cobol) and got the B3500s, or perhaps by then B4700s, back. We sold an inordinate number of GE and RCA machines this way, and not a few Sperry-Univac and IBM machines. Despite Herb's contention that GE never managed to make a computer, they actually made quite a few and the Burroughs sales force probably kept them in business for an additional 5 years.


Expansion

Burroughs had by the late 1960s more computer plants than just Pasadena. Burroughs was Detroit-based, and had a number of plants in the Detroit vicinity, and also in Pennsylvania, north of Philly. These Penn. plants were largely doing military electronics, making things like the D825, which was used by NORAD if I recall. (Or was it SAGE? I forget.) As the military computer market saturated, the plants found themselves with excess capacity. And the Proctor Plant didn't have a huge amount of capacity, and Pasadena was running the production line 24/7 cranking out B3500s.

First, B3500 production was started back at the Tredyffrin ("Tredy") plant, and later B6700 production. Pasadena sent a few engineers out there to get production started, but most of them came home, not wanting to stay there. Proctor, on the other hand, sent engineers who stayed. And multiplied. Soon there were two Large Systems plants, one on each coast, and one of them was 2000 miles closer to the head office than Pasadena was. When Corporate needed an Engineering opinion, it was easier to go down the street than across three time zones.

Not surprisingly, Tredy engineering management started working up the political ladder -- rapidly. Not surprisingly, when someone in Corporate asked Tredy whether a particular site or potential contract was more appropriate to Medium Systems or Large Systems, the answer was invariably Large Systems. This began the trend, which flowered in later years, of Burroughs Large Systems considering their major competitor to be Burroughs Medium Systems, not IBM, GE, or any of the rest of the BUNCH [1]. In later years this led first to the closing of the Small Systems plant, and later Medium Systems (by then known as V Series), leaving only Large Systems (by then known as A Series) in existence.

With production on both coasts, an attractive commission structure for the sales force, and most importantly a direct line into Corporate management (and increasingly, managers promoted from Tredy into Corporate), Large Systems prospered. After the B6700 from the Proctor Plant, Tredy itself developed the B7700, a much larger machine. For a number of years, the west coast made "large systems" and the east coast made "really large systems". This led to a certain tension between the plants, as the OS began to diverge between them while still being supposed to be common to all machines. In later years this was corrected by a strong-willed manager of both plants demanding that the source base be normalized again, and that some of the hardware technology be shared between the plants to make this possible. The end result was a common source base for all of the software products, which is worked on in both plants simultaneously and still manages to avoid double-update problems.
______________________

  1. BUNCH = Burroughs, Univac, NCR, CDC, Honeywell.


A-Series and Micro-A

After the B6700 and B7700 came a long series of machines from both plants: the B6800, B7800, B5900 (a disaster), B6900 (good), B7900, and then into the "A Series" era of machine naming. Several years ago A Series could boast a performance range of 1000 to 1 from the smallest machine to the largest in the line, all of them user-software compatible. That figure is no longer boasted about, but the ratio is now over 4000 to 1.

In the mid 1980s, several Pasadena engineers became convinced that it would now be possible to build an entire system on a single chip, or at least in a single multi-chip package. Finding no real interest for this idea in Pasadena, they migrated south to A Series, by then located in Mission Viejo. Mission Viejo was an upscale beach community, unlike the Proctor Plant, which was located in City Of Industry, which lived up to that name. (Rumor had it that the Proctor Plant manager wanted to retire to the beach, but couldn't afford it. So he got the entire plant moved to the beach, and everyone got 50% cost of living raises for the new community. Several years later he retired.)

The ex-Pasadena engineers soon produced the Micro-A: a complete A Series mainframe system on a single PC card that plugged into the then-new IBM PC. All of the IO subsystem for the mainframe consisted of PC device drivers running under OS/2. This made for an incredibly cheap-to-produce system, which nonetheless had the same performance as the two lowest-end actual mainframe systems. This was not really all that surprising, since the Micro-A actually used the same processor chip as those two low-end systems! Selling them was hard at first, since the commissions on such a system were minuscule compared to a larger hardware system. Eventually, though, a number were sold, mostly as programming development systems into shops that already had an A Series mainframe. Now every programmer could have a "mainframe on their desk", not just a terminal into the shared (and scheduled!) development system in the machine room.

The Micro-A looked like the wave of the future to many people, except for one problem. The hardware engineers still thought in mainframe product cycles; six years was about right to turn over a system. Eighteen months was not something they could comprehend. PC performance had increased by a factor of 4 in this time, but the Micro-A was still a 100 RPM machine.

A word on that "100 RPM" in the previous paragraph. Burroughs never used MIPS to measure processor speed or system performance. Early on, a group was sent out to collect a bunch of "typical applications", with data, from our customers. These were massaged by a specially created Performance Group into a standardized benchmark; performance was measured as the time it took to complete that benchmark. This standardized benchmark was used, and is used, to measure the performance of every machine the company ever made. Thus, a 200 RPM ("Relative Performance Measure") machine is exactly twice as fast at this benchmark as a 100 RPM machine. The Performance Group still exists, is politically independent of the engineering and sales groups, and is quite particular about the numbers produced. They still maintain traceability of all performance ratings back to the original baseline 100 RPM machine, which they still own. Many other benchmarks have been developed and used over time (TPS, SAP, etc.), but the RPM numbers still exist, and are still used to price the machines.
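
In other words, the rating is just the ratio of benchmark completion times, scaled so that the original baseline machine comes out at exactly 100. A minimal C++ sketch, with made-up times:

    // RPM as a simple ratio of benchmark completion times against the
    // original baseline machine (times here are invented for illustration).
    #include <iostream>

    double rpm(double baselineSeconds, double machineSeconds) {
        return 100.0 * baselineSeconds / machineSeconds;
    }

    int main() {
        double baseline = 3600.0;                    // hypothetical baseline benchmark time
        std::cout << rpm(baseline, 3600.0) << "\n";  // 100: the baseline machine itself
        std::cout << rpm(baseline, 1800.0) << "\n";  // 200: finishes in half the time
        return 0;
    }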

Back to the Micro-A and its growing pains. Right about this time the Pasadena plant started to shut down, and a whole lot of software types found themselves headed South. Most ended up in the Mission Viejo (or in a few cases Tredy) software groups. Some of us found ourselves in the Mission Viejo Hardware activity. It became pretty clear that Intel and AMD and the like could "turn over" processor chips far faster than the local hardware engineers could. A series of local engineering management changes ensued, and the Micro-A hardware (and those OS/2 device drivers) found themselves supported by a "hardware engineering" group consisting purely of programmers.

The next step was obvious -- all of the Micro-A IO subsystem was already PC software. All we needed to do was write an emulator for the CPU, and one for the IO Controller which had been part of the processor chip. Since the Micro-A, like machines for many years before it, was microprogrammed, it became a fairly straightforward process to translate the machine to C, and later C++.
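
The heart of such an emulator is an ordinary fetch-decode-execute loop. The C++ sketch below, with an invented toy instruction set, shows only the general shape, not the real Micro-A or A Series emulation:

    // Generic sketch of a software "mainframe" interpreter loop: fetch the
    // next emulated instruction, dispatch on its opcode, and hand I/O off to
    // the host's device drivers.  The instruction set here is invented.
    #include <cstdint>
    #include <vector>

    enum class Op : uint8_t { Load, Add, Store, StartIO, Halt };

    struct Instruction { Op op; uint32_t operand; };

    struct EmulatedCpu {
        std::vector<Instruction> memory;   // emulated program store
        std::vector<int64_t> stack;        // the architecture is stack-oriented
        uint32_t pc = 0;
        bool running = true;

        void step() {
            const Instruction& inst = memory[pc++];
            switch (inst.op) {
                case Op::Load:    stack.push_back(inst.operand); break;
                case Op::Add: {   int64_t b = stack.back(); stack.pop_back();
                                  stack.back() += b;             break; }
                case Op::Store:   /* write stack top to emulated memory */ stack.pop_back(); break;
                case Op::StartIO: /* forward to a host device driver */    break;
                case Op::Halt:    running = false;                         break;
            }
        }

        void run() { while (running) step(); }
    };

    int main() {
        EmulatedCpu cpu;
        cpu.memory = {{Op::Load, 2}, {Op::Load, 3}, {Op::Add, 0}, {Op::Halt, 0}};
        cpu.run();                         // leaves 5 on the emulated stack
        return 0;
    }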

Straightforward in concept, but not necessarily in implementation. PC processors were still not blindingly fast, and right in the middle of the design IBM pulled the plug on OS/2, necessitating a conversion to Windows NT as the base OS. So there was an OS conversion (all new device drivers!) at the same time as the final debugging and, more importantly, performance tuning. Nonetheless, by 1992 a new, completely soft "mainframe" was released on a PC platform -- with performance exceeding that of the low-end hardware platforms then being sold! These days, the emulated system performance covers the low half of the A Series performance range, and is gaining rapidly on the hardware platforms at the top.

Well, I haven't covered Small Systems at all (which also split off from Pasadena and moved to Santa Barbara -- on the beach -- to make a bit-addressable and microprogrammable mainframe); nor the "D Machine", which was a completely microprogrammable bit-slice machine (implemented in SSI/MSI) and used as a disk controller and military secure communications controller; nor the merger with Sperry (Burroughs bought Sperry, and Sperry management took over the combined company). But this is probably more than enough to bore you silly, so I'll save that for another day.

More Burroughs History

Steven Emert
White Bear Lake, MN
February 2024
I joined Burroughs as a Field Engineer in September 1973 in the Minneapolis/St. Paul area and lasted through September 1995 (one week shy of 22 years) when I left Unisys (my own decision, not in a layoff!) to go to Bay Networks as a System/Sales Engineer.

I saw that at the end of the article you mentioned the D Machine. My main experience with it was when it was used as a front end processor for the Medium Systems; I believe the model was B774. I was on the Minneapolis District staff during that time and I recall “FIMCS” (Field Installation Message Control System) was the primary hardware diagnostic on that machine. I had been trained on the machine, created a troubleshooting guide for it, and published it to the Minneapolis District, and perhaps to the Central Region (I don’t recall with certainty), and was surprised to find that many of the branches decided to forego sending their FEs to training on the B774, as my guide did a pretty good job of letting them work on it without additional training.

I started out in Medium Systems, getting a District training class on the B2500/3500 in Minneapolis after the Burroughs Training department had eliminated the class, concentrating instead on the newer B3700/4700 series. Later, I transitioned to the B6800 and B6900, then later the B7900. I never did get officially trained on the A9, A11 (“A Series Large Systems”) or the A15 (“A Series Very Large Systems”) but did work on them quite a bit. After being an FE specialist on first the District, then the Region staff, I made the mistake of becoming a Branch Technical Manager (I learned I hated being in management), but then rectified that mistake by moving in 1989 into the newly created network integration group, later called “Network Enable”, where I stayed through my departure in 1995.

When the B6900 came out, we sold six of them in the Minneapolis District, so as the District FE Specialist at the time I was responsible for more of them than all the other districts in the Central Region combined. The machine, if you recall, was rushed out and had lots of bugs requiring field changes, both in the CPU and in the I/O subsystem. Mission Viejo came out with a field kit intended to install all of the changes in a single weekend. All the parts and instructions for each machine were contained in an approximately 3’ x 3’ square box. The first one I and my team did was at the University of Wisconsin at Stevens Point. With that accomplished, I realized that very few of the other customers could take their system down for an entire long weekend, with the associated risk of additional downtime due to mistakes. So over the next few weeks I borrowed the fan-fold paper instructions for the machine at the University of Minnesota Hospitals and went through them laboriously, looking at each change individually, determining dependencies and a sequence of installations that could each be accomplished in a single two- or three-hour preventative maintenance session, allowing the system to run after each update session. I published those instructions to the Central Region, and all but one other system was upgraded using my installation sequence.

A year or two later, one of the design engineers from Mission Viejo, Larry (something; I wish I could recall his last name), was called into the area to help us with a very intermittent and difficult problem on a B6800 at one site (TIES), and while babysitting that machine, the FE he was with was called over to General Mills to help troubleshoot a B6800 CPU problem. Larry asked if he could “tag along”. Tom, the FE, said “Sure!!!” After looking over the FEs’ shoulders for a few minutes at General Mills, Larry asked if he could try something on the console. Of course. The FEs said that Larry poked a few buttons on the maintenance console, stepped it through a few clock cycles and almost immediately said, “There! This flip-flop is bad.” Tom asked how he found it so fast. Larry replied, “Well, I have a bit of an advantage. I designed that circuit.”

Later during that trip, I had a chance to talk with Larry, and described the sequential “bite size” B6900 upgrade sequence I had created, and lamented, “Why didn’t Mission Viejo think to break the kit up so people could put the changes in over the course of several PM sessions?” He replied that he DID create a set of instructions to do that, as he also realized very few customers could accommodate the extended downtime. While the instructions were created, they never made it to the field!

Somewhere else on the Internet, I saw a history that concentrated mostly on the City of Industry, Mission Viejo, and Pasadena plant projects. In it, the author talked a lot about the “birth” of the B5900. I was also trained on that machine and was the District Specialist for it. That article opened my eyes to the political situation between Pasadena, Mission Viejo and Corporate, and how much the engineers went through to create it. It was (to my knowledge) the first Burroughs mainframe to finally adopt the board-swap philosophy of maintenance rather than troubleshooting down to the chip. While in class at the El Monte, CA training center, going through a troubleshooting training session using the troubleshooting flow documents, several of the students came to a spot where the directions said, “Call your District Specialist for help!”. So, of course, they’d yell, “Steve, come over here!”, to which I’d reply, “Fix it yourself!” (since we were in training, not in the real world).

Thanks for the article! As I said, it brought back a lot of memories. I’m not sure any of this adds anything of significance to your history, but if you can use any additional info to expand upon it, we still have a pretty good group of former (and even current) Burroughs people in the Minneapolis/St. Paul area that get together occasionally, and could provide a little more historical info from our field perspective.

Columbia University Computing History / Frank da Cruz / fdc@columbia.edu / This page created: 5 January 2005 / Last update: 23 February 2024