
Limitations of Stored-Program Computers


Citation

Warren Jones, Lana Rubalsky (2010) "Limitations of Stored-Program Computers", wJones Research, January 18, 2010

The basis for current computer architecture, the “stored-program” design (see Background), was a tremendous innovation in the 1940s, 1950s and 1960s, when processors handled just hundreds or thousands of instructions per second. By the mid-1970s, compute capacity and design had advanced, and computer manufacturers moved away from the simple stored-program design to add support for shared storage, standardized peripherals, communications and concurrent users. As computer technology advanced, with it came demand for additional services such as support for large files, graphical interfaces, search, messaging, typography, object-oriented programming, images, video, sound, communications, program add-ins, multiple input peripherals, internet data, and security.

The generation of multi-program, multi-user machines of the 1970s and 1980s did not adapt well to the new requirements. Large data and program blocks did not run well on multi-user systems.

New personal computers built by the hobby community showed promise over larger systems, benefiting from fast new processors and simpler single-user operation. Because these machines lacked the complexity of advanced operating system software, hardware makers were able to adapt personal computer designs to meet the new requirements.

The more advanced business computer designs came to be seen as impediments to innovation. Many original stored-program computer vendors such as UNIVAC and the remaining “seven dwarfs1” failed to meet emerging requirements and narrowed their business scope. IBM did address the requirements, buying and licensing, respectively, the flexible and lightweight PC DOS2 and UNIX operating systems. These simple operating systems got out of the way of stored programs and were well suited to supporting the rapid program innovations of the 1980s and 1990s.

By 2000, the simple stored-program design of personal computers had also become dominant on business servers and small personal communications devices.

Large organizations would typically employ a different computer for each type of information (e.g. accounting, customer management), different computers for each sub-function within each type of information (e.g. database, file storage, messaging), different computers for each person and physical asset class (e.g. elevators, heating systems, door security, fire suppression) and, finally, different computers for each form factor (e.g. server, PBX, laptop, mobile communicator).

The PC/Server stored-program architecture thus had several limitations:

  1. The architecture created an immense quantity of disparate systems, each with its own storage systems, operating system instances, and particulars with respect to configuration and security.
  2. The environment was extremely complex, with each program, data store and computer being an independent asset that was not part of a central catalog or homogeneous management facility. Large companies typically employed hundreds or thousands of staff members simply to manage the assets in this inventory.
  3. The architecture was extremely expensive, with each program, data store, operating system and communications technology requiring distinct employee expertise, software licensing, hardware maintenance and support consulting.
  4. The component systems, programs and information structures were separate and distinct, meaning there were no facilities to apply rules or service features across all the information or processes of a client, project, industry, employee or organization. To bridge the disparate systems, efforts in the form of “integration” projects were required, each to integrate some sub-domain of system information or components. In large companies, the complete technology plant was rarely integrated at any moment because constant component changes or upgrades frequently impaired or “broke” integration work in progress.
  5. The architecture was extremely slow and inefficient. Although any single device could process up to billions of instructions and millions of processes per second (if slow disk storage was avoided during processing), each information transaction typically queried multiple services across multiple computers “after” the transaction request was made. This made each transaction subject to delays due to resource coordination, communication latency and processing delays due to resource contention. This design limited such systems’ performance, even those consisting of thousands of servers.
  6. The architecture was extremely fragile and insecure. The system contained many parts with minimal management facilities. The operating system software was not self-aware and thus was frequently hijacked for unintended purposes without patron awareness3. Vulnerabilities at any point within the systems’ inventory could impair major portions of a system’s availability.
  7. Processing redundancy was distributed in a serial rather than parallel manner. In parallel redundant systems, multiple devices multiply the “mean time between failure” of the overall system, thus making systems more reliable. In the serial redundancy of the PC/Server architecture, any single device failure would typically impair a portion of the system, thus dividing the mean time between failure and reducing overall system reliability (see the illustrative sketch following this list).
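
To make the contrast in item 7 concrete, the following sketch works through the two cases using assumed, illustrative figures (they are not measurements from any particular system). It mirrors the first-order approximation used above: serial dependency divides the effective mean time between failure, while parallel redundancy multiplies it.

    # Illustrative comparison of serial vs. parallel redundancy.
    # All figures are assumed for illustration; none are measured values.
    device_mtbf_hours = 10_000   # assumed mean time between failures of one device
    n_devices = 4                # assumed number of devices in the system

    # Serial dependency: the system is impaired when ANY device fails,
    # so failure rates add and the effective MTBF is divided.
    serial_mtbf = device_mtbf_hours / n_devices       # 2,500 hours

    # Parallel redundancy: the system is impaired only when ALL devices fail,
    # so, to the first-order approximation used above, the MTBF is multiplied.
    parallel_mtbf = device_mtbf_hours * n_devices     # 40,000 hours

    print(f"Serial chain of {n_devices} devices:  ~{serial_mtbf:,.0f} hours between impairments")
    print(f"Parallel set of {n_devices} devices: ~{parallel_mtbf:,.0f} hours between impairments")
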
[Figure: Typical Commercial and Open Source Application Programs (in no particular order)]
After the year-2000 software repairs, expectations about what a computer program should “do” grew. Computer customers expected systems to learn and remember their usage patterns. Application programmers added rudimentary facilities to each program enabling it to save usage-state information. Programs gained the ability to show the last websites browsed or documents opened. Programs integrated built-in database programs to remember that “tuxedo” was the last item retrieved when typing “t.” Programs integrated search engine programs that enabled them to find the word “tuxedo” in a document, message or web page history. They integrated programming “macro” languages such as Python and Basic that enabled programs to repeatedly perform multiple actions, like sending all the results of a “tuxedo” search to a printer.
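
A minimal sketch of such a usage-state facility appears below; the class name and size limit are illustrative assumptions rather than details of any particular product. It simply remembers recently used items and suggests the most recent match for a typed prefix, so that typing “t” recalls “tuxedo.”

    # Minimal, assumed sketch of a "remember recent items" facility.
    from collections import deque

    class RecentItems:
        def __init__(self, limit=50):
            self._items = deque(maxlen=limit)   # most recent item kept at the left

        def remember(self, item):
            # Move an existing entry to the front rather than storing a duplicate.
            if item in self._items:
                self._items.remove(item)
            self._items.appendleft(item)

        def suggest(self, prefix):
            # Return the most recently used item that begins with the typed prefix.
            for item in self._items:
                if item.startswith(prefix):
                    return item
            return None

    history = RecentItems()
    history.remember("teapot")
    history.remember("tuxedo")
    print(history.suggest("t"))   # -> "tuxedo", the last "t" item retrieved
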

By 2006, besides the computer instructions and data needed to support its advertised function (e.g. web browsing, enterprise resource management or document editing), a typical application program frequently contained programming languages, database management, file management, problem management, problem reporting, and other software subsystems that had little correlation with the program’s advertised purpose.
[Figure: Supporting Software in Typical Application Programs]


These “features” were added to each application program separately, causing each application program to gain a tremendous amount of “heft4.” The heft made programs slow.

A 1984 word processor and a twenty-page document required approximately 64 kilobytes of memory. On a 2006 system, the same word processor and document could have resided completely in processor cache, where a computer could sustain billions (1,000,000,000+) of instructions per second.

Unfortunately, due to growth in program size, a typical 2009 word processor and document required more than 100 megabytes of memory. When loaded in memory along with other programs, each with its built-in databases, search, program management, reporting and other facilities, the programs and documents would be paged to extremely slow hard drive or flash storage, where a computer could retrieve and store information just thousands of times per second (1,000+). This meant that a typical dual-core system with a total processing capacity of 2 x 2.3 billion instructions per second devoted just a fraction of its total available processing capacity to the primary purpose of the computer user.
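
As a rough, back-of-the-envelope illustration of that fraction (every per-action figure below is assumed for the sake of the calculation, not measured), even a handful of page faults per user action leaves the processor idle for most of each second:

    # Assumed, illustrative figures showing how paging erodes usable capacity.
    cpu_rate = 2 * 2.3e9          # dual-core capacity: 2 x 2.3 billion instructions/second
    work_per_action = 1e6         # assumed useful instructions per user action
    faults_per_action = 10        # assumed page faults per user action
    fault_delay = 1e-3            # assumed ~1 ms per storage access (thousands of ops/second)

    compute_time = work_per_action / cpu_rate      # time spent on useful work
    wait_time = faults_per_action * fault_delay    # time stalled waiting on storage
    utilization = compute_time / (compute_time + wait_time)

    print(f"Useful work:   {compute_time * 1e6:.0f} microseconds per action")
    print(f"Storage waits: {wait_time * 1e3:.0f} milliseconds per action")
    print(f"Fraction of capacity applied to the user's purpose: {utilization:.1%}")
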

[Figure: Example large program with third-party software. Program name: Adobe Acrobat; function: PDF file viewer; application size: 779.1 MB (6)]
By 2006, “feature”-oriented program code and data often comprised up to 90%5 of many delivered products. To reduce the cost and effort of supporting this additional functionality, software companies increasingly licensed the supporting software from third-party sources in the commercial and open source communities. This had the additional consequences of making products larger still (as licensed software included its own supporting software) and of reducing developers’ access to, and understanding of, the source code within their own products. This caused an initial decline in the quality, usability, security, and reliability of computer products.

To counter this decline, the dynamics of innovation in the computer industry changed. Customers stopped expecting computers to do more and began demanding that they perform better. With the notable exceptions of internet search and the migration from chat rooms to internet social engines, the leading computer products of 2006 bore a striking resemblance to the leading products of 1996, which in turn resembled the products of 19867. Product makers abandoned many fields of innovation, choosing instead to improve the look, feel and quality of existing technologies. Companies that performed this task best, such as Apple, thrived. They did so by heavily leveraging the large library of free, open source software components accumulated over the half century of stored-program computing and by turning over hardware innovation to companies such as ASUS in Taiwan.

Rather than be accused of being simple colorists, some companies pursued new set-top, tablet and mobile device forms by combining the same open source components, now with outsourced user interfaces designed by firms such as TAT in Sweden. By 2009, computing innovation could be summarized as the art of reducing computer hardware cost and of integrating, testing, packaging and polishing third-party tools into ever more attractive mobile and web products.

Other notes about the computer industry relevant to the invention include:

The Internet’s success promoted standards in communications and media formats that were globally adopted. The barriers that had locked global data behind formats readable only by proprietary stored programs quietly faded, opening rapid, free access to that data.

Chipmakers continued to outpace Moore’s Law with processor and memory chip designs. Graphics processor technology lowered the cost of the multi-gigaflop, massively parallel computation required for symbolic intelligence processing. Barriers to advanced machine designs caused by the limitations of electronics quietly faded.

Software makers continued to improve technology that made it possible to run stored-program PC/Server systems in efficient virtual machines. Such software had become widely installed in datacenters around the world, removing a key barrier to a major technology shift.

Tim Berners-Lee, one of the first to envision the potential of the Internet, led a two-decade campaign to convince companies, organizations and governments to make a semantic, machine-readable web. The new web began to rise, albeit very slowly, because of the tremendous scarcity of its intended customer … machines that could read.

How Addressed


Stored purpose computing was devised as a platform engineered to meet the new automation needs discovered after the introduction of stored-program computing sixty-five years ago. The following are general invention improvements:

  1. Methods to eliminate duplicate information, apply system-wide rules and deliver services as required by context.
  2. Methods to span a range of device types sufficient to meet known automation requirements.
  3. Methods to coordinate resources pre-emptively as necessary to perform actions without delay (predictive, responsive).
  4. Methods to communicate with brief, semantic gestures (semantic communications).
  5. Methods to compute with maximum achievable efficiency and use the least amount of network bandwidth, hardware and energy (self-efficient).
  6. Methods to archive and retrieve inactive data as a single instance with minimum practical storage requirements and without user identifiable loss of access (archiving).
  7. Compatibility with Internet communications, data, messaging, media storage, data rendering and display canvas layout.
  8. Compatibility with legacy Windows/Linux stored-program applications and data (backward-compatible).
________________________

1 In the 1960s, the eight major American computer companies were referred to as "Snow White” (IBM, the largest) “and the seven dwarfs"—Burroughs, UNIVAC, NCR, CDC, GE, RCA and Honeywell. Note that UNIVAC’s mainframe was a direct descendant of the EDVAC computer documented (but not designed) by von Neumann in his 1945 Report. EDVAC and the von Neumann architecture were actually based upon a design invented by J. Presper Eckert and John Mauchly.
2 Microsoft delivered an improved Quick and Dirty Operating System, or QDOS (also 86-DOS), to IBM as MS-DOS / PC DOS. Post-delivery, Microsoft purchased the rights from Seattle Computer Products for $75,000 in 1981.
3 Neolithic Windows security hole alive and well in Windows 7 http://www.itworld.com/security/93442/neolithic-windows-security-hole-alive-and-well-windows-7.
4 Relatively light programs such as Apple Safari or Mozilla Firefox have typical program sizes of 45-55 megabytes, while the core browser software upon which they are based is only 5 megabytes. The additional software and data includes support for database management (to remember recent searches), add-in programs and other software.
5 Relatively light programs such as Apple Safari or Mozilla Firefox have typical program sizes of 45-55 megabytes, while the core browser software upon which they are based is only 5 megabytes. The additional software and data includes support for database management (to remember recent searches), add-in programs and other software.
6 Adobe Acrobat 8 Professional, dated March 20, 2007. Total application contents 907.2 MB: Acrobat (779.1 MB), Uninstaller (229 KB), Distiller (127.8 MB), measured on Mac OS 10.6.
7 Multi-decade software leaders include Microsoft Windows (released in 1985), Microsoft Office (1986), Adobe Photoshop (purchased by Adobe in 1990) and Intuit products such as Quicken and TurboTax (late 1980s). Graphical versions of Microsoft Word and Excel were first released for the Macintosh in 1984 and 1985, respectively. Microsoft purchased the Presenter product and renamed it PowerPoint in 1987.



