Coding After Thirty

The Essence of Computer Science (and Why It’s Not Really About Computers)

Posted on February 24, 2025

Computer science – the name itself is a bit of a misnomer. You might be surprised to hear that _“computer science is no more about computers than astronomy is about telescopes.”_

In fact, many experts have pointed out that calling it “computer” science is like calling astronomy “telescope science” or geometry “compass science.” The field isn’t primarily about the tools (computers) we use, but about something deeper. So what exactly is the essence of computer science? Let’s explore this question in a casual, bite-sized way, using examples and analogies to keep things relatable.

More Than Just Computers: Why the Name Is Misleading

The term “computer science” can give the wrong impression. It suggests that the discipline is all about computers – the physical machines or perhaps how to build them. In reality, the computer is just the tool, the medium, much like a telescope is a tool for an astronomer.

One of the pioneers of computer science, Edsger Dijkstra, is often credited with this comparison, made to emphasize that the computer is to computer science as the telescope is to astronomy.

Another comparison comes from geometry. The word geometry literally comes from the Greek “geo-” (Earth) and “-metron” (measure), meaning “measuring the Earth.” Ancient Egyptians developed geometry thousands of years ago to survey land – for example, to redraw field boundaries after the Nile’s floods.

To those early practitioners, geometry really was about using ropes, stakes, and surveying instruments to measure plots of land.

It was a very practical tool-driven activity. But over centuries, we came to understand geometry’s true essence: it’s about the properties of space and shape, not about the measuring tools. Geometry evolved into a rich field of mathematics dealing with points, lines, angles, and proofs – far beyond its original surveying purpose.

Computer science is in a similar position. In the early days (mid-20th century), it was inseparable from the electronic gadgets – big room-sized computers – and people thought of it as “the science of computers.” But that was a youthful misunderstanding. When a field is just getting started, it’s easy to confuse the essence of what you’re doing with the tools you use.

Just as geometry isn’t really about yardsticks and compasses, computer science isn’t really about the physical computers. So if it’s not about the computers themselves, what is it about? To answer that, we need to talk about what those computers are used for: namely, computing processes.

What Computer Science Is Really About: Processes (How to Do Things)

At its core, computer science is about processes – the methods and recipes for solving problems. Another way to say this is that it’s about how to do things, not about the things themselves. In academia, they sometimes distinguish between two kinds of knowledge: declarative knowledge (knowing what is true) and imperative knowledge (knowing how to do something). Geometry and mathematics in general deal a lot with declarative knowledge – statements of fact or definition. Computer science, on the other hand, focuses on imperative knowledge – the instructions or algorithms for accomplishing tasks.

An example will make this clear. Imagine you asked a mathematician, “What is the square root of X?” You might get a definition in reply: “The square root of X is the number Y such that Y ≥ 0 and Y² = X.” That’s a classic declarative statement – it tells you what the square root is, in terms of a property it satisfies.

It’s like saying “a sibling is a person who shares at least one parent with you” – a definition of the concept. But this definition doesn’t tell you how to find that square root value for a given X; it only tells you what the square root means.

Now consider how you would actually compute a square root, especially before calculators were common. You might come up with a step-by-step procedure, i.e. an algorithm (sketched in code right after this list):

  1. Guess a number G that might be the square root of X.
  2. Improve the guess by averaging G with X/G (which tends to get closer to the true root).
  3. Repeat that improvement step until the guess is “good enough” (close enough to the actual answer for practical purposes).
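Here is a minimal sketch of those three steps in Scheme, the Lisp dialect that appears later in this post; the helper names (improve, good-enough?, sqrt-iter, my-sqrt) are my own, and “good enough” is taken to mean that the square of the guess is within 0.001 of X:

(define (average a b) (/ (+ a b) 2))

(define (improve guess x)            ; step 2: average the guess with x/guess
  (average guess (/ x guess)))

(define (good-enough? guess x)       ; stopping test for step 3
  (< (abs (- (* guess guess) x)) 0.001))

(define (sqrt-iter guess x)          ; step 3: keep improving until good enough
  (if (good-enough? guess x)
      guess
      (sqrt-iter (improve guess x) x)))

(define (my-sqrt x)                  ; step 1: start from an initial guess of 1.0
  (sqrt-iter 1.0 x))

(my-sqrt 2)   ; => roughly 1.4142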

Those three steps are imperative instructions – they tell you exactly how to do the task of finding the square root.

If you follow those steps (and a computer can do this blindingly fast), you’ll arrive at an answer for the square root. Notice the difference: the declarative definition is like a fact or a description, whereas the imperative recipe is like knowledge in action – a method.

Computer science is deeply concerned with this kind of how-to knowledge. In fact, one way to describe the essence of computer science is: it’s the study of effective procedures – ways to get things done. We design procedures (also known as algorithms or programs) to solve problems, whether it’s sorting a list of names, finding the best route on a map, or securing data transmissions. The focus is on developing these step-by-step solutions and understanding their properties (Are they correct? Efficient? Can we make them better?). In short, while a mathematician might declare what a square root is, a computer scientist figures out how to compute it.

Conjuring Processes: Programs as Magic Spells

So, what happens when we tell a computer how to do something? We set a process in motion. Think of a process as the dynamic activity that unfolds when a computer executes a program. It’s a series of states and steps – the computer crunching through the algorithm we provided. This concept can feel a bit abstract, because you can’t physically touch a “process.” It’s not a tangible object, but it’s very real in its effects. When you run a program (say, to play a song or send a message), there is a flurry of electronic operations inside the machine – that is the process doing its work.

Hal Abelson and Gerald Sussman, legendary MIT professors, described a running process in a wonderfully whimsical way: “A computational process is indeed much like a sorcerer’s idea of a spirit. It cannot be seen or touched... however, it is very real. It can perform intellectual work... It can affect the world by disbursing money at a bank or by controlling a robot arm in a factory.” In their analogy, _“people create programs to direct processes. In effect, we conjure the spirits of the computer with our spells.”_

In other words, a program is like a magic spell or recipe that a programmer (the “sorcerer”) writes, and the process is the magical creature that gets summoned to do the work.

This might sound like pure fantasy, but it’s a useful way to think about what’s happening when we code. Just as a wizard’s incantation causes something mystical to happen, a programmer’s code causes the computer to carry out some task – often something that would be incredibly tedious or even impossible for a human to do by hand. For example, imagine writing a program that sorts a million numbers. Once you launch it, an invisible process whirrs through those numbers and, in the blink of an eye, you have them in order. You didn’t sort them yourself; you conjured a process to do it for you.

The magic metaphor also highlights why computer science has a creative, almost artistic side. We aren’t limited to pre-existing processes – we can invent new “spells” to produce new kinds of processes at will. The possibilities are limited only by our imagination and our ability to describe the method precisely enough for a computer to execute.

Abstraction: Hiding Complexity in Black Boxes

If programming is like magic, then as any fantasy reader knows, magic can get complicated. Real-world programs (think of an entire operating system or a huge app like Facebook) are incredibly complex, with millions of lines of code. How do human beings manage to create something that elaborate without getting lost in the details? The answer lies in abstraction – a fundamental idea in computer science for coping with complexity.

Abstraction is all about simplification – taking a messy reality and defining a cleaner model or interface to work with. In practice, this often means hiding the complicated inner details of a component and treating it as a “black box” that we can use without understanding everything inside. In computer science, we constantly build these black boxes. We might write a procedure to perform a specific task, and once it’s working, we don’t need to think about its inner workings every time – we can just use it as a single unit. This is called black-box abstraction: we encapsulate a piece of code (or a system) in such a way that from the outside it’s seen only in terms of what it does, not how it does it.

By suppressing the details, we can compose these pieces into bigger and bigger systems without being overwhelmed.

Consider a simple analogy: a car. When you drive a car, you interact with a gas pedal, a brake, a steering wheel – these form a simple interface. You likely have no idea exactly how the engine is combusting fuel at that moment or how the transmission is changing gears; you don’t need to know. The car is a black box to you as a driver – you just care that pressing the pedal makes it go faster. This abstraction (hiding the mechanical details) is what allows millions of people to use cars without being mechanical engineers. Similarly, in software, a programmer can use a library or a function written by someone else without knowing the gory details of its internals, as long as they know what it does. For example, you might use a function sqrt(n) to get a square root in a programming language. You don’t need to know whether it’s using a fast approximation algorithm or something else internally – you trust that sqrt does the job, because its interface (taking a number, returning the square root) is well-defined. It’s a black box.

Abstraction is the key strategy for managing complexity in computer science. Abelson put it nicely when he said that building big software systems would be impossible if not for techniques to control complexity – and those techniques are what computer science is really about.

By constructing layers of abstraction, we create a hierarchy of black boxes: low-level details get packaged up so we can think in terms of higher-level concepts. Just like in writing, you form sentences out of letters, paragraphs out of sentences, and stories out of paragraphs – at each level you hide the lower-level details (you’re not worrying about individual letters when crafting a plot). In programming, we build modules out of lines of code, and systems out of modules, each level suppressing the complexity below.

An Engineering of Ideas (Not Atoms)

You might be wondering, isn’t dealing with complexity a part of all engineering? That’s true – every engineering field strives to manage complexity. But computer science has a unique advantage (and challenge) compared to, say, civil or mechanical engineering: computer scientists deal with idealized components in an almost purely mental realm. In other fields, engineers contend with physical reality. A civil engineer designing a bridge must worry about material strengths, weather, imperfections in steel; a mechanical engineer building an engine deals with friction, heat, wear-and-tear of parts. Those are physical constraints. In contrast, the “components” a software engineer builds with (like numbers, data structures, or logic gates) are abstract and perfect – they do exactly what the specifications say, with no manufacturing variability or entropy (at least in theory). We don’t have to worry about a function we wrote suddenly physically breaking – as long as it’s logically correct, it will run the same way every time.

In his MIT lectures, Abelson joked that _“computer science is like an abstract form of engineering… it’s the kind of engineering where you ignore the constraints that are imposed by reality.”_

There’s a lot of truth there. Our components are ideal: a bit in memory is a perfect 0 or 1 (no “almost 1” due to a faulty circuit, ideally), and an operation like addition of integers has a mathematically precise outcome. We know as much as we want about our pieces – we defined them, after all. There’s no tolerance or uncertainty in how a software component behaves the way there is with, say, an electronic component that might have ±5% resistance tolerance.

In fact, in software, there’s often not much difference between what we can imagine and what we can build, given enough time and resources.

If we can dream up a logical procedure, we can usually implement it. The primary limitation becomes not the laws of physics, but the limits of our own understanding and organization (again, complexity!).

This lack of physical constraints is a double-edged sword. On one hand, it’s freeing – we can build incredibly complex software systems that would be impossible to realize in hardware or in the physical world. On the other hand, because nothing inherently forces simplicity (unlike, say, gravity which forces a building to have a certain structure), it’s easy for software to become too complex for anyone to manage. That’s why abstraction and careful design are so critical. Computer scientists must be disciplined about introducing structure and constraints by choice, since Mother Nature doesn’t impose many on our digital creations. In a sense, we construct our own “reality” with rules and interfaces to keep our systems comprehensible.

Conventional Interfaces: Teamwork at Scale

As software projects grow, they aren’t built by a single wizard in isolation – they’re often built by teams of people, sometimes hundreds or thousands of developers working on different parts of a system. How can so many different pieces, written by different people (or even different companies), fit together into one coherent system? The answer lies in conventional interfaces – basically, agreed-upon rules for how components interact.

Think of a simple real-world example: the wall socket and the electric plug. Manufacturers of appliances and the electricians who wire buildings all conform to a standard interface (the shape of the plug prongs, the voltage, frequency, etc.). Because of this convention, you can plug any lamp into any outlet in your country and it just works. You don’t need to rewire your house for each new gadget – the interface (plug/outlet) is standardized. This kind of standardization is a form of abstraction, too, because it means appliance makers don’t all need to know the fine details of the power grid; they just conform to the interface.

In computer science, we do the same. We define protocols, file formats, and APIs (Application Programming Interfaces) so that different software components can communicate without needing to know each other’s inner workings. For example, a web browser can talk to a web server because they both follow the HTTP protocol – a conventional interface for requests and responses. Your code can use a library because it knows the API (the set of functions and what parameters to give). As long as everyone follows the agreed rules, the pieces fit together like Lego blocks, even if the blocks were made by different people.

These interfaces are crucial for managing complexity in large-scale systems. Each team or component can work almost as a separate black box, as long as it presents a certain interface to the others. Abelson likened this to having “standard impedances” in electrical engineering: an electrical engineer designing a stereo system can connect speakers to an amplifier without caring about the exact internal design of each, as long as the output impedance of the amp and the input impedance of the speaker match up by convention. In software, similarly, we rely on conventions (like how data is formatted, or how functions are called) to glue the pieces together. This allows very large programs (think an operating system or the Google search system) to be built in a modular way: each part is complex in itself, but thanks to well-defined interfaces, the complexity doesn’t all mix together into an unmanageable tangle.
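As a toy illustration in Scheme (my own example, not drawn from the post’s sources), imagine two programmers agreeing only on an interface for 2-D points – a constructor make-point and selectors point-x and point-y. Code written against that interface keeps working even if the representation underneath changes:

;; Team A owns the representation: here, a point is simply a pair.
(define (make-point x y) (cons x y))
(define (point-x p) (car p))
(define (point-y p) (cdr p))

;; Team B programs against the interface only, never touching car/cdr directly.
(define (distance p q)
  (let ((dx (- (point-x p) (point-x q)))
        (dy (- (point-y p) (point-y q))))
    (sqrt (+ (* dx dx) (* dy dy)))))

(distance (make-point 0 0) (make-point 3 4))   ; => 5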

Metalinguistic Abstraction: Creating New Languages

So far, we’ve talked about abstraction in terms of building layers of components and using interfaces. There’s another form of abstraction in computer science that’s particularly powerful: creating new languages for expressing problems. This is sometimes called metalinguistic abstraction – “meta” because it’s abstraction about the language itself.

Why would you want a new language? Well, sometimes the best way to handle a complex problem is to change the vocabulary you use to think about it. In programming, this can mean designing a domain-specific language (DSL) – a mini-language tailored to a particular kind of task. For example, SQL is a language specifically for querying databases. Graphics software might have a special language for shaders (to describe how surfaces should look). Even the formulas you write in a spreadsheet can be seen as a little domain-specific language for calculations.

Creating a new language might sound like a huge undertaking, but it can sometimes be done within an existing programming language. Some languages are flexible enough that you can essentially mold them into a new, more convenient form for your problem – for instance, by writing libraries or even using macros that extend the syntax. The idea is to make the language of the solution as close as possible to the language of the problem. If you’re working with music, you might want your code to have constructs like notes, chords, timing, etc., rather than forcing everything into low-level bits and bytes.

Abelson and Sussman point out that designing new languages is a way to highlight certain aspects of a system and suppress others.

In other words, a good DSL lets you focus on what’s important for your task and ignore irrelevant details. By choosing the right “words” and constructs, you make the problem simpler to think about. Metalinguistic abstraction is essentially building a new tool – a language – to attack a problem more directly. It’s a bit like a mathematician inventing new notation or a new branch of math to solve a problem that was awkward in old notation. In computer science, we have the luxury of creating these languages and then writing interpreters or compilers (programs that understand those languages) to execute them. In fact, a significant part of advanced computer science is about how to design and implement programming languages.
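To make that concrete, here is a tiny hypothetical “music” vocabulary embedded in Scheme; the names note, chord, and sequence are invented for illustration, and they simply build list structures that some other part of the program could later interpret or play:

;; Constructors for a toy music vocabulary.
(define (note pitch duration) (list 'note pitch duration))
(define (chord . notes)       (cons 'chord notes))
(define (sequence . parts)    (cons 'sequence parts))

;; A fragment written in the problem’s own vocabulary rather than in raw lists.
(define riff
  (sequence (note 'c4 1/4)
            (note 'e4 1/4)
            (chord (note 'c4 1/2) (note 'g4 1/2))))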

One famous example of this approach is the programming language Lisp itself. Lisp was designed in the late 1950s by John McCarthy as a language for artificial intelligence research, but it was also, at its heart, a vehicle for metalinguistic abstraction. Lisp has a very simple, uniform structure – so simple that Lisp code can easily be treated as data by other Lisp code. (In Lisp, everything is written in parentheses, and both code and data ultimately share the same list structure.) Because of this, it’s comparatively easy for Lisp programs to create other programs or even modify themselves – Lisp makes the boundary between language and data very thin.

This property has made Lisp a powerful tool for building new languages on top of it. In fact, many Lisp programmers write little languages (DSLs, often built with Lisp’s macro system) for specific tasks, essentially extending Lisp to better fit their problem domain.

Lisp: A Language Designed for Abstraction and Expressing Processes

Let’s talk a bit more about Lisp, since it’s a language often mentioned in discussions of the essence of computer science. Lisp (which stands for List Processing) was developed around 1960 at MIT by John McCarthy. It was founded on the mathematical theory of recursive functions – meaning it embraced the idea that functions could be defined in terms of themselves, a technique known as recursion. In fact, recursion is a natural way to express repetition or looping in Lisp. Instead of using explicit loop constructs, early Lisp encouraged a style where a function would call itself on smaller sub-problems until a solution was built up.

For example, you could define a function to compute factorial of a number n by stating “if n is 0, return 1, otherwise return n multiplied by factorial(n-1).” This definition is recursive (the function refers to itself), and Lisp would handle the bookkeeping of those self-calls. To a newcomer, this might seem strange, but it’s elegant: the definition mirrors the mathematical definition of factorial. Lisp showed that such recursive processes could be just as efficient and fundamental as loops in other languages – and it turned out, you don’t actually need a special “loop” construct at all if you have recursion.
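In Scheme, that description translates almost word for word (a minimal sketch; the name factorial is simply the conventional one):

(define (factorial n)
  (if (= n 0)
      1                              ; base case: 0! is 1
      (* n (factorial (- n 1)))))    ; otherwise, n times factorial of n-1

(factorial 5)   ; => 120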

Beyond recursion, Lisp was innovative in treating code as data. This sounds esoteric, but it’s quite powerful. The structure of Lisp code is so uniform (it’s made of lists and symbols) that a Lisp program can read, manipulate, and generate Lisp code as easily as it can handle data. Why is this a big deal? Because it allows for that metalinguistic abstraction we discussed: you can write programs that write programs. In Lisp, you can create new constructs almost as if you were extending the language itself. Suppose you wish your language had a convenient way to do some high-level operation – in Lisp, you might implement that as a macro or a function, effectively teaching the language a new “word” or construct. It’s not an exaggeration to say Lisp lets you build your own language for every problem, if you want to. This is one reason Lisp has been called a “programmable programming language.”
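For instance, here is a sketch of teaching Scheme a new “word” with a macro; the name my-unless is invented for illustration (many Schemes already ship an unless form):

;; (my-unless test body ...) expands into an equivalent if-expression before it runs.
(define-syntax my-unless
  (syntax-rules ()
    ((_ test body ...)
     (if test #f (begin body ...)))))

(my-unless (> 3 5)
  (display "three is not greater than five")
  (newline))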

Another aspect that made Lisp a great tool for exploring the essence of computing is its simplicity. The core of Lisp is small – the rules of the language can be described in a few pages. In the classic textbook Structure and Interpretation of Computer Programs, Abelson and Sussman chose a dialect of Lisp (Scheme) to teach not because they wanted students to use Lisp per se, but because Lisp’s simplicity lets you get straight to expressing elegant solutions without a lot of boilerplate. Lisp is as easy to learn as the game of chess: the rules are straightforward (e.g., the way you write functions and expressions), but just like chess, those simple rules lead to endless possibilities for complexity and creativity.

With a language like Lisp, you can quickly start playing with the big ideas of computer science – like abstraction, recursion, and even building new languages – without getting bogged down by complex syntax.

To give a concrete feel, here’s a tiny Lisp example. Suppose we want to define a procedure to square a number (multiply it by itself). In Lisp, we could simply do:

(define (square x)
  (* x x))

This creates a black-box abstraction named square that we can use anywhere to get a square of a number, without thinking about how it works (which, internally, is just multiplication) – a nice simple example of abstraction. Now, because Lisp treats code as data, we could even write a Lisp program that takes that definition and transforms it or analyzes it (for instance, to automatically differentiate it, do optimizations, etc.). This is the sort of metalinguistic power Lisp gives us – and why it was so well-suited for AI research, where programs might modify their own behavior (a very high-level concept).
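As a small illustration of “code as data” (my own sketch, not from the post’s sources), quoting that definition turns it into an ordinary list that other Lisp code can pick apart:

;; The same definition, quoted: now it is just nested lists and symbols.
(define square-source '(define (square x) (* x x)))

(car square-source)     ; => define
(cadr square-source)    ; => (square x)  – the name and parameter list
(caddr square-source)   ; => (* x x)     – the body, ready to analyze or transform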

Historically, Lisp introduced or popularized many ideas that are now common in other languages: things like garbage collection (automatic memory management), dynamic typing, and first-class functions (treating functions as values you can pass around). Its philosophy of emphasizing recursion and high-level operations influenced modern functional programming languages and even parts of everyday languages like JavaScript and Python (which have lambda functions, list processing methods, etc., owing a debt to Lisp). Lisp also proved that you could implement a whole programming language with a few primitive operations and a simple evaluator – a profound discovery that showed how interpreters (the programs that execute our code) can be built concisely. In the SICP course, the grand finale is actually implementing a Scheme interpreter in Scheme – a mind-bending feat that underscores the idea of a “language that can implement itself”.

It’s like lifting yourself by your bootstraps, and it demonstrates just how powerful abstraction can be: you design a small set of rules (an interpreter) that can understand a rich language (Scheme/Lisp), and because Lisp code is made of the same stuff, the interpreter can even interpret itself.
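To give a flavor of that idea, here is a drastically simplified evaluator sketch (my own toy, nowhere near SICP’s full metacircular evaluator) that handles only numbers, variable lookups, and applications of primitive procedures:

;; expr is a quoted expression; env is a list of two-element (name value) lists.
(define (tiny-eval expr env)
  (cond ((number? expr) expr)                           ; numbers evaluate to themselves
        ((symbol? expr) (cadr (assq expr env)))         ; look variables up in the environment
        ((pair? expr)                                   ; an application: evaluate operator and operands
         (apply (tiny-eval (car expr) env)
                (map (lambda (e) (tiny-eval e env)) (cdr expr))))
        (else (error "Unknown expression" expr))))

;; An environment binding a few primitives and one variable.
(define env (list (list '+ +) (list '* *) (list 'x 3)))

(tiny-eval '(+ x (* 2 5)) env)   ; => 13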

Conclusion

We’ve covered a lot of ground in a conversational whirlwind, so let’s step back. Computer science is not about computers in the same way that storytelling isn’t about the pen you write with. Computers (and programming languages) are the tools we use, but the big ideas center on computation itself – the processes and transformations that solve problems.

We explored how the field is really about describing how to do things (imperative knowledge) rather than just stating facts, and how running those descriptions on a machine produces processes that almost feel like magical entities carrying out our commands. We saw that to manage complexity we rely on abstraction: hide details, build layers, use black-box components – much like solving a big problem by breaking it into smaller, more manageable pieces.

We noted that unlike other engineering disciplines, we work with idealized components in a world of pure thought, which gives us tremendous freedom (and the responsibility to impose our own structure).

We also looked at how using standard interfaces allows large teams to collaborate on enormous systems without everything collapsing into chaos. And finally, we delved into the idea of creating new languages when needed, and introduced Lisp as a shining example of a language built for playing with these very ideas of processes and abstractions.

In essence, computer science is about problem-solving at scale and in detail – it’s about the art of turning a conceptual solution into a step-by-step mechanical process that a computer can execute. It’s a blend of science, engineering, and art. There is science in analyzing algorithms and proving things about them, engineering in designing systems that are reliable and efficient, and art in the creative leaps and elegant abstractions that make complex problems tractable. And yes, as Abelson joked, there’s even a bit of magic in conjuring these invisible processes that do our bidding.

For the general reader, the take-away is this: next time you use a computer or a phone app, remember that behind every feature you use, there’s a carefully orchestrated set of processes at work – little spells cast by programmers. The beauty of computer science is that it teaches us how to write those spells and, ultimately, how to think about any complex system in terms of layers, abstractions, and precise instructions. It’s a discipline that teaches how to systematically tackle the question “How do I accomplish this task?”, which is a powerful concept both inside and outside the realm of computers. So, while the name “computer science” might be misleading, perhaps we can forgive it – after all, “software sorcery” or “process engineering” didn’t quite have the same ring. But now you know what this field is truly about: the science (and art) of process, abstraction, and computation. And that is a beautiful thing, no matter what we call it.

References:

  1. Harold Abelson, Structure and Interpretation of Computer Programs, MIT Lecture 1A (1986) – introduction discussing the essence of computer science vs. its tools (ocw.mit.edu).

  2. Hal Abelson & Gerald Jay Sussman, Structure and Interpretation of Computer Programs (MIT Press, 1985) – “Programs must be written for people to read, and only incidentally for machines to execute,” and the famous magic metaphor for processes (cs.stackexchange.com).

  3. Edsger W. Dijkstra (attributed), “Computer science is no more about computers than astronomy is about telescopes.” – quote highlighting the misnomer of “computer science” (quoteinvestigator.com).

  4. SICP Lecture Notes, MIT 6.001 (1986) – explanation of declarative vs. imperative knowledge (the square-root example) and the importance of abstraction and conventional interfaces in managing complexity (mk12.github.io).

  5. Encyclopaedia Britannica – article on the LISP programming language: history and features (code as data, recursion) (britannica.com).

  6. Abelson, MIT Lecture – computer science as an “abstract form of engineering” without real-world constraints, emphasizing ideal components vs. physical reality (mk12.github.io).