Emergent Order of Computer Science and Engineering

Why software APIs and hardware protocols are two expressions of the same modular system

When people speak of computer science and computer engineering, they often imagine two separate territories: one abstract, algorithmic, and mathematical; the other tangible, electrical, and physical. But step back, and a different picture emerges. Both fields are simply building blocks in a larger, self-organizing order — where modules define clear functionality, interfaces enforce shared rules, and integration yields systems greater than the sum of their parts.

A POSIX system call and a PCI bus handshake may look worlds apart, but at their essence, they are both contracts. They define how one part of a system communicates with another, regardless of the implementation beneath. This lens — seeing software and hardware as parallel expressions of the same principle — reveals not a divide, but an emergent order that underpins all of computer science and engineering.
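As a minimal sketch of that idea (my own illustration, not drawn from the post's figures), a single POSIX call already shows the contract at work: the caller honours the documented signature, and whatever sits beneath the API is free to implement the behaviour however it likes.

    /* write() is a contract: hand it a file descriptor, a buffer and a
     * length; get back a byte count or -1.  Whether the bytes land on an
     * ext4 disk, in a pipe or on a socket is hidden beneath the API. */
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        const char msg[] = "hello, contract\n";
        ssize_t n = write(STDOUT_FILENO, msg, strlen(msg));
        return (n == (ssize_t)strlen(msg)) ? 0 : 1;
    }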

 

“A System” → The Modular Lens

Computer engineering could be perceived as a process of discovering various productive arrangements of functional modules (or functional blocks). Such an abstract module can be expressed as a self-contained unit with two key qualities:

  • Functionality — What it does.
  • Interface rules — How inputs are furnished and outputs channeled.

This principle is universal, whether it’s the components of an IoT device talking to a cloud storage service, an Android phone multicasting audio to Alexa, or a graphics pipeline inside a compute cluster powering AI. The principle takes a different shape in the software and the hardware views of the world, but the modular lens lets us see their symmetry: the same qualitative attributes of functionality and interface rules are integral to both paradigms, as the diagram below illustrates.

Figure 1: Software and hardware both emerge as networks of functional blocks bound by contracts (APIs, protocols).

The distinction between software and hardware is merely one of methodology; conceptually, both are integrations of abstract modules communicating via shared rules to achieve a larger purpose.

 

“Software Hardware Architecture” → Functions and Contracts

Software is a network of modules linked by recognizable data structures. The output of one module often becomes the input of another, creating layered stacks of functionality.

At the higher levels, interfaces may be standardized — for example, POSIX APIs in operating systems. Internally, each module may have its own contracts. Even at the assembly level, the object code is an interface: once decoded, it signals the hardware ALU or load/store unit to act. Ultimately the behavior of a generic processor depends on both the sequence and the content of the application’s object code.
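A hedged sketch of that layering (the module and type names below are hypothetical): two modules agree only on a shared data structure, so either side can be reimplemented without touching the other.

    /* Two hypothetical modules linked only by a recognizable data
     * structure: the tokenizer's output is the reporter's input. */
    #include <stdio.h>
    #include <string.h>

    struct token_stream {            /* the shared contract between stages */
        char  buf[128];
        char *tokens[16];
        int   count;
    };

    /* Module A: functionality = split text; interface = token_stream out */
    static void tokenize(const char *text, struct token_stream *out)
    {
        out->count = 0;
        strncpy(out->buf, text, sizeof(out->buf) - 1);
        out->buf[sizeof(out->buf) - 1] = '\0';
        for (char *t = strtok(out->buf, " "); t && out->count < 16;
             t = strtok(NULL, " "))
            out->tokens[out->count++] = t;
    }

    /* Module B: functionality = report; interface = token_stream in */
    static void report(const struct token_stream *in)
    {
        printf("%d tokens\n", in->count);
    }

    int main(void)
    {
        struct token_stream ts;
        tokenize("modules linked by shared data structures", &ts);
        report(&ts);                 /* output of A feeds input of B */
        return 0;
    }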

Hardware design mirrors this approach. High-level IP blocks interconnect through bus protocols like AMBA, PCI, or USB. Each functional block samples recognizable input patterns, processes them, and emits outputs across agreed-upon channels. For example, a DMA unit connected to multiple RAM types must support multiple port interfaces, each with its own protocol. A unified abstract representation of such a functional block is attempted below.

Figure 2: A unified view of functional blocks across software and hardware.
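On the hardware side, the contract frequently takes the shape of a register map. Below is a rough sketch of driving a DMA block like the one mentioned above purely through such a map; the register offsets and bit positions are invented for illustration, not taken from any real IP block.

    /* Hypothetical register map for a DMA functional block.  Offsets and
     * bit fields are made up; the point is that software meets the
     * hardware only at this agreed-upon interface, never at the gates. */
    #include <stdint.h>

    #define DMA_SRC_ADDR  0u   /* word offset: source address       */
    #define DMA_DST_ADDR  1u   /* word offset: destination address  */
    #define DMA_XFER_LEN  2u   /* word offset: length in bytes      */
    #define DMA_CTRL      3u   /* bit 0 = start, bit 1 = done       */

    static void dma_start(volatile uint32_t *dma, uint32_t src,
                          uint32_t dst, uint32_t len)
    {
        dma[DMA_SRC_ADDR] = src;
        dma[DMA_DST_ADDR] = dst;
        dma[DMA_XFER_LEN] = len;
        dma[DMA_CTRL]     = 1u;          /* kick off the transfer    */
    }

    int main(void)
    {
        static uint32_t fake_regs[4];    /* stand-in for real MMIO   */
        dma_start(fake_regs, 0x20000000u, 0x20001000u, 256u);
        /* On real silicon we would now poll the done bit in DMA_CTRL;
         * here the block behind the register map is only simulated.  */
        return (fake_regs[DMA_CTRL] & 1u) ? 0 : 1;
    }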

“Functional Block” → The Atom

At a highly abstract level, computer science is either about designing such functional blocks or about integrating them into coherent wholes. An electronic product, then, can be perceived as an organization of abstract functional modules, each communicating via shared interface rules; together they deliver complex use cases valuable to the end user.
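Squinting a little, the atom itself can be written down. The sketch below is my own (the record layout and names are illustrative, not an established API): a functional block is just functionality plus interface rules, and integration is wiring such records together.

    /* One abstract shape for a functional block: what it does (process)
     * and the rules for feeding it (in_type) and reading it (out_type). */
    #include <stdio.h>

    struct functional_block {
        const char *name;
        const char *in_type;              /* interface rule: accepted input  */
        const char *out_type;             /* interface rule: produced output */
        int       (*process)(int input);  /* functionality                   */
    };

    static int scale_by_two(int x) { return 2 * x; }
    static int add_offset(int x)   { return x + 5; }

    int main(void)
    {
        /* Two blocks whose contracts line up: int in, int out.            */
        struct functional_block scaler = { "scaler", "int", "int", scale_by_two };
        struct functional_block offset = { "offset", "int", "int", add_offset  };

        /* Integration: compose blocks purely through their interfaces.    */
        int result = offset.process(scaler.process(10));
        printf("%s -> %s : %d\n", scaler.name, offset.name, result);
        return 0;
    }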

Operating at scale, human cognition recognizes objects by their abstract, high-level qualities, not their gritty details. But details matter when these functional blocks are being developed. A process for engineering a complex system at scale therefore needs to harness specialized, detailed knowledge dispersed across many individuals and the functional modules they implement.

For instance, the application engineer cannot be expected to comprehend how the file system represents data on the hard disk, and a middleware engineer can afford to be ignorant of device-driver read/write protocols as long as the driver module plays by documented interface rules. An integration engineer needs to know only the abstract functionality of the modules and their corresponding interface rules to combine them into a product.
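A small hypothetical illustration of that division of labour: the middleware is written against a documented driver contract alone, and any team can supply the implementation behind it (here a RAM-backed stand-in) without the callers changing.

    /* ---- documented driver contract (all the middleware ever sees) ---- */
    #include <stddef.h>
    #include <stdio.h>

    int storage_read(unsigned block, void *buf, size_t len);   /* 0 = ok */
    int storage_write(unsigned block, const void *buf, size_t len);

    /* ---- middleware: knows only the contract above ---- */
    static int save_record(const char *rec, size_t len)
    {
        return storage_write(0, rec, len);
    }

    /* ---- one possible driver implementation (RAM-backed stand-in);
     *      a disk, flash or network team could supply another ---- */
    static char blocks[4][512];

    int storage_write(unsigned block, const void *buf, size_t len)
    {
        if (block >= 4 || len > sizeof(blocks[0])) return -1;
        for (size_t i = 0; i < len; i++)
            blocks[block][i] = ((const char *)buf)[i];
        return 0;
    }

    int storage_read(unsigned block, void *buf, size_t len)
    {
        if (block >= 4 || len > sizeof(blocks[0])) return -1;
        for (size_t i = 0; i < len; i++)
            ((char *)buf)[i] = blocks[block][i];
        return 0;
    }

    int main(void)
    {
        char back[6] = {0};
        if (save_record("hello", 6) == 0 && storage_read(0, back, 6) == 0)
            printf("round trip: %s\n", back);
        return 0;
    }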

Thus, by lowering the knowledge barrier, we reduce cost and time to market. The challenge in implementing functional blocks then lies in balancing abstraction with performance: too much modular generality slows a system; too little makes it rigid and fragile.

 

“Open vs. Closed Source” → Impact of the Extended Order

The open-source framework accentuates the advantages of this modular construction. While proprietary systems evolve only among a set of known collaborators, open source draws on a global extended order, enabling contributions from both known and unknown individuals. From a development perspective, it harnesses the expertise of a far larger group.

Market economics is about finding the most productive employment of time and resources; in our case, it is about discovering all the possible uses of an abstract functional module. The lower barrier to knowledge within the open-source market accelerates this discovery, and the modular structure coordinates the dispersed expertise. In other words, depending on their individual expertise, anyone can integrate new functional blocks or improve and tailor existing ones. For instance, a generic Linux kernel driver might end up in a server, a TV, or a smartphone depending on how that module is combined with the rest of the system.

Figure 3: Open vs Closed ecosystems — cohesion vs disjointed growth.

The above Venn diagrams illustrate how the nature of an order can influence the development, cohesion and organization of these functional blocks.

“Universal Epistemological Problem” → The Knowledge Challenge

What emerges from these modular interactions is not merely technology, but an order — a living system shaped by countless contracts, shared rules, and dispersed expertise.

This is the emergent order of computer science and engineering: a subset of the larger economic order, subject to the same knowledge problem Friedrich Hayek famously described. No single mind can master it—yet through modularity, openness, and shared rules, it flourishes.

Relevance of Quad Core processors in mobile computing

The October edition of EE Times Europe carries an article by +Marco Cornero of ST-Ericsson explaining why quad-core processors for mobile computing are ahead of their time. The following tweet was sent from the official ST-Ericsson account.

“Quad cores in mobile platforms: Is the time right?” An article in EETimes Europe written by Marco Cornero, ST-Ericsson http://ow.ly/6TrDo

Please note that you might need to log in to the EE Times website to access the full article along with the rest of the October edition.

In some ways this is quite a brazen stance taken by ST Ericsson.

1. +Marco Cornero states that there is a 25% to 30% performance overhead on each core when moving from dual to quad core, due to systemic latency attributable to L1/L2 cache synchronization, bus access, and so on.

2. This overhead means that for a quad core to outperform a dual core, each software application needs to have roughly 70% of its code capable of executing in parallel. How this is calculated is explained in the article.
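The article’s exact derivation is not reproduced here, but a rough Amdahl-style reconstruction lands in the same neighbourhood. Assume a parallel fraction p, and assume the quoted overhead o simply scales each core’s effective speed by (1 - o); these are my assumptions, not necessarily the article’s model.

    % Relative execution times (dual-core per-core speed normalized to 1)
    T_{\mathrm{dual}} = (1 - p) + \frac{p}{2}, \qquad
    T_{\mathrm{quad}} = \frac{(1 - p) + \frac{p}{4}}{1 - o}

    % The quad core wins when T_quad < T_dual, which rearranges to
    p > \frac{4o}{1 + 2o}

    % Evaluated at the quoted overheads:
    o = 0.25 \;\Rightarrow\; p > \tfrac{1.0}{1.5} \approx 67\%, \qquad
    o = 0.30 \;\Rightarrow\; p > \tfrac{1.2}{1.6} = 75\%

That is close to the roughly 70% figure quoted above.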

The article also argues that there are not many complex use cases that create multitasking scenarios with optimal usage of all four cores, along with several other arguments that seem to conclusively establish that quad core is a case of “diminishing returns” (see the quote below from ST-Ericsson CEO Mr. Gilles Delfassy).

“We aim to be leaders in apps processors, but there is a big debate whether quad core is a case of diminishing returns,” – Gilles Delfassy

The whole article hinges on a single claim: that there will be a 25%-30% performance overhead on each core when moving from dual to quad core. But isn’t this purely dependent on the hardware? The figure may hold for ST-Ericsson’s chipset, but what about ASICs from Qualcomm, NVIDIA, Marvell, and others?

The crux of the argument is this very overhead percentage. If we bring the overhead down even marginally, to 20-25%, then only 50-60% of the application code needs to execute in parallel for optimal usage of a quad core. That is not such a bad situation. Is the article’s inference inherently flawed? The fact that Qualcomm and NVIDIA are close to bringing quad-core solutions to market makes me wonder about ST-Ericsson’s claims!
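Plugging the lower overhead into the same rough model used earlier (again my reconstruction, not necessarily the article’s method) shows how sensitive the conclusion is:

    o = 0.20 \;\Rightarrow\; p > \tfrac{0.8}{1.4} \approx 57\%, \qquad
    o = 0.25 \;\Rightarrow\; p > \tfrac{1.0}{1.5} \approx 67\%

A few points shaved off the overhead moves the break-even bar down by roughly ten percentage points, in the direction of the 50-60% estimate above.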

NVIDIA Acquisition of Icera

What are the prospects for Nvidia having bought Icera?

The Icera acquisition is strategic for NVIDIA in many ways.

  • Considerable effort is required to develop a modem DSP core and RF from scratch, so acquisition seems the logical way to go. More importantly, the move complements NVIDIA’s existing strength in application engines.
  • Every semiconductor company aims to offer OEMs a complete solution, which is more profitable in terms of margin and much easier to manage. Currently NVIDIA has to integrate a separate third-party modem core that sits outside the application SoC, whereas companies like Qualcomm and ST-Ericsson follow a better design in which both the modem and the application engine are built into the same SoC. That is better in terms of density of integration, power consumption, and cost; the downside is more SoC complexity, which these companies can handle anyway.
  • NVIDIA is also aggressively moving toward a position where it can deliver the complete system, and the Icera acquisition is a step in that direction. I would expect more acquisitions from NVIDIA in the future, possibly in the connectivity domain. As far as I know, NVIDIA has no expertise in technologies like WLAN, BT, and NFC; these are critical, and companies like CSR are ideal candidates for takeover.

In general, the handset market is moving in a direction where only complete platform providers will remain. It looks inevitable that single-component manufacturers will either perish or be acquired.