Paradigms of Software Development: How Great Engineers Adapt

The Singaporean drummer Brandon Khoo was once asked in a Q&A session: how do you tell a good drummer and a bad drummer apart? To which he replied: a good drummer is someone who knows how to play for the music.

An analogous case might be made for software engineering: a good engineer shapes their work around the needs of the application.

At its heart, software engineering is about trade-offs. Different stages of a product's lifecycle demand different engineering priorities, and a master engineer will change their approach as circumstances evolve.

With this in mind, I began thinking in terms of paradigms. In this article, I outline three distinct yet complementary approaches to engineering: Start-up Engineering, Bulletproof Engineering, and Optimisation Engineering, which broadly correspond to the inception, maturity and scale of a software project.

Start-up Engineering

TLDR: Write flexible code.

You’d typically work within this paradigm at the outset of a user-facing project, while you’re still figuring out what’s possible and what the software’s users truly want.

Development iterations are intentionally small in scope, designed to rapidly gather user feedback. Avoid adding features until they are explicitly needed, and adhere closely to project requirements. Ultimately, you want to create software people actually want, not what you think they want. As John Carmack puts it: “Rarely architecting for future requirements / applications turns out net positive”.

However, maintaining this rapid experimentation for an extended period can lead to a volatile codebase, as ideas that once seemed promising may quickly turn naïve or irrelevant. Thus, successfully operating at a high development pace requires writing flexible code - that is, code that can be changed at a later point. Perhaps counterintuitively, this means investing time in maintaining a clean architecture.

Writing new code is almost always easier and faster than modifying existing code, especially once others have come to depend on it. To mitigate this challenge, the underlying architecture must be as modular as possible. Each class should have one and only one reason to change, meaning it should have a single, clearly defined responsibility. Functions should have a clear separation of concerns, and modules should be open for extension but closed for modification. Ideally, you should be able to introduce new features without modifying any existing code. You’ll know you’ve achieved this when writing code feels like playing with Lego blocks.

Abstractions play a crucial role here, and can be achieved through polymorphism, whether dynamic (interfaces) or static (templates, generics). An experienced engineer will evaluate trade-offs across multiple levels of abstraction, knowing precisely when and how much abstraction to introduce. Remember, the ultimate aim remains working within short development cycles to quickly capture user feedback, so you shouldn't spend all your time architecting the perfect abstraction.
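
As a rough sketch of that kind of extensibility (the Exporter protocol and report shape here are invented purely for illustration), new behaviour arrives as new classes while the existing code stays closed to modification:

```python
import json
from typing import Protocol


class Exporter(Protocol):
    """Anything that can turn a report dict into bytes."""

    def export(self, report: dict) -> bytes: ...


class JsonExporter:
    def export(self, report: dict) -> bytes:
        return json.dumps(report).encode("utf-8")


class CsvExporter:
    def export(self, report: dict) -> bytes:
        # One header row and one value row is enough for a sketch.
        keys = ",".join(report)
        values = ",".join(str(v) for v in report.values())
        return f"{keys}\n{values}".encode("utf-8")


def publish(report: dict, exporter: Exporter) -> bytes:
    # Depends only on the Exporter abstraction, never on a concrete format.
    return exporter.export(report)


# A new format (say, an XmlExporter) is simply a new class;
# publish() and the existing exporters stay untouched.
print(publish({"users": 42}, JsonExporter()))
```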

Readability also matters greatly; your code should be self-documenting, easy to follow and straightforward to debug. A design-by-contract approach works well here: functions state their preconditions and fail loudly when those are violated. You should generally avoid excessive wrappers and state-altering functions; prefer creating new objects over mutating existing ones. Essentially, the goal is straightforward, maintainable code. As the maxim often attributed to Albert Einstein puts it: “Everything should be made as simple as possible, but not simpler.”
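
A minimal sketch of that style, using a made-up Order value object: state changes produce new objects rather than mutating old ones, and a simple precondition check plays the role of a contract.

```python
from dataclasses import dataclass, replace


@dataclass(frozen=True)
class Order:
    """Immutable value object: any 'change' produces a new instance."""
    item: str
    quantity: int


def with_quantity(order: Order, quantity: int) -> Order:
    # Contract check: fail loudly on nonsense input instead of
    # silently producing a corrupt object.
    if quantity <= 0:
        raise ValueError("quantity must be positive")
    return replace(order, quantity=quantity)


original = Order("keyboard", 1)
updated = with_quantity(original, 3)
print(original, updated)  # the original is untouched
```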

Key acronyms: SOLID (single responsibility, open/closed, Liskov substitution, interface segregation, dependency inversion), YAGNI (You aren't gonna need it), KISS (Keep it simple, stupid).

Bulletproof Engineering

TLDR: Write robust code.

Software has a large capacity for errors and tends to degrade over time. As your userbase grows, your once faultless code begins to groan under the weight of its users’ increasing demands. There comes a point in an application’s lifecycle where faults and bugs begin to affect an organisation’s reputation, or worse, carry financial or legal ramifications. When these concerns emerge – which in some cases may be at the very outset of the project – it’s time to adopt a “bulletproof” approach to engineering.

This approach calls for a much more defensive development style. Catch exceptions properly and log error messages instead of letting the program crash. Incorporate unit tests, integration tests and boundary tests early and often in the development cycle to identify issues before they reach production. Success in this approach hinges on writing testable code: small, modular components with well-defined inputs and outputs and minimal external dependencies.
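
For instance, a small, dependency-free parsing function (the payload shape here is hypothetical) is trivial to unit-test and fails gracefully rather than crashing:

```python
import json
import logging

logger = logging.getLogger(__name__)


def parse_payload(raw: str) -> dict | None:
    """Small, self-contained unit: easy to test, degrades gracefully."""
    try:
        payload = json.loads(raw)
    except json.JSONDecodeError:
        logger.error("Malformed payload received: %r", raw[:100])
        return None
    if not isinstance(payload, dict):
        logger.error("Expected a JSON object, got %s", type(payload).__name__)
        return None
    return payload


def test_parse_payload_rejects_garbage():
    # The test runs in milliseconds because the function has no
    # external dependencies to mock or stub.
    assert parse_payload("not json") is None
    assert parse_payload('{"user": 1}') == {"user": 1}
```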

Furthermore, maximise your system’s observability (traces, metrics and logs) to quickly identify the root causes of issues or bottlenecks, and minimise the time between an error occurring and deploying a patch to production.
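
A minimal sketch of what that instrumentation can look like in-process; a real system would export these measurements as metrics or distributed traces (for example via OpenTelemetry) rather than plain log lines:

```python
import functools
import logging
import time

logger = logging.getLogger("observability")


def traced(func):
    """Log the duration and outcome of every call to the wrapped function."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            result = func(*args, **kwargs)
            logger.info("%s succeeded in %.1f ms", func.__name__,
                        (time.perf_counter() - start) * 1000)
            return result
        except Exception:
            logger.exception("%s failed after %.1f ms", func.__name__,
                             (time.perf_counter() - start) * 1000)
            raise
    return wrapper


@traced
def handle_request(user_id: int) -> str:
    return f"hello, user {user_id}"
```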

Additionally, robust guardrails and rigorous user input validation are critical. Ensure your application is protected against injections, cross-site scripting, and other common vulnerabilities. Inputs should not be allowed to be empty, nor should they be unbounded in size. Remember, “anything that can go wrong will go wrong”.
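
As an example, a validation helper might look like the sketch below (the size ceiling is deliberately arbitrary); the real defence against injections and XSS comes from parameterised queries and your template engine’s escaping, not hand-rolled filtering:

```python
MAX_COMMENT_LENGTH = 2_000  # arbitrary ceiling, chosen for illustration


def validate_comment(text: str) -> str:
    """Reject empty or oversized input up front, before it goes anywhere."""
    cleaned = text.strip()
    if not cleaned:
        raise ValueError("comment must not be empty")
    if len(cleaned) > MAX_COMMENT_LENGTH:
        raise ValueError("comment exceeds maximum length")
    # Never interpolate user input into SQL or HTML yourself; rely on
    # parameterised queries and output escaping for those layers.
    return cleaned
```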

Defensive measures against common attacks (DDoS, MitM, CSRF, etc.) should be considered at every stage of development, as security issues are far harder to resolve after the fact. Beyond covering these essentials, bulletproof engineering also involves penetration testing and intrusion detection systems, and you might seek accreditation under compliance frameworks such as SOC 2 or ISO 27001.

Leveraging CI/CD pipelines is highly beneficial. CI/CD helps detect bugs and prevents failures while enabling rapid build, test and deployment cycles. Like many software-related concepts, the CI/CD rabbit hole runs deep. You could get fancy with blue/green deployments, static vulnerability analysis or automated rollback for self-healing deployments. But my advice is: build exactly what you need, precisely when you need it, and no sooner.

Key acronyms: TDD (Test-driven development), CI/CD (Continuous Integration and Continuous Deployment)

Optimisation Engineering

TLDR: Write scalable code.

Before diving into this, a word of warning from the father of the analysis of algorithms himself, Donald Knuth:

Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%.

Optimisation engineering emerges when your system matures to the point where demand creates bottlenecks, and simply throwing money at the problem won’t make it go away. Exactly how to optimise an application is more nuanced and context-specific than the previous two paradigms. It hinges on what you are optimising for: speed, memory, power usage, etc. Generally speaking, though, optimisation comprises considerations of system architecture, databases, data structures, algorithms and compilers.

Designing scalable systems starts with choosing the right architecture. Horizontal scaling is in vogue, partly because FAANG use it and partly because modern tools such as Kubernetes, RabbitMQ, and various NoSQL databases facilitate it. However, horizontal scaling introduces additional complexities, such as load balancing and data consistency issues. Vertical scaling, on the other hand, offers simpler resource management and, in certain scenarios, can be more cost-effective. Instagram, prior to its $1 billion acquisition by Facebook, demonstrated just how far vertical scaling can go by serving 35 million users with a single monolithic Django application and a single PostgreSQL database. Similarly, WhatsApp pushed a single server’s capacity to the max by handling 2 million concurrent connections. The point is to evaluate the trade-offs and determine which strategy best suits your software, rather than defaulting to horizontal scaling because it’s trendy.

One thing apparent in the scalability successes of both Instagram and WhatsApp is a deep understanding of database performance: knowing precisely when and how to index, shard, cache and manage connections. A prerequisite for optimisation engineers is a thorough grasp of database management. Go beyond object-relational mappers and learn to write (and optimise) queries directly!
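
As a small self-contained illustration using SQLite (the table and data are made up), EXPLAIN QUERY PLAN shows how an index turns a full table scan into an index search:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.executemany("INSERT INTO users (email) VALUES (?)",
                 [(f"user{i}@example.com",) for i in range(10_000)])

# Without an index, the lookup scans the whole table.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT id FROM users WHERE email = ?",
    ("user42@example.com",)).fetchall()
print(plan)  # expect something like 'SCAN users'

# With an index, the same query becomes a search.
conn.execute("CREATE INDEX idx_users_email ON users (email)")
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT id FROM users WHERE email = ?",
    ("user42@example.com",)).fetchall()
print(plan)  # expect 'SEARCH users USING INDEX idx_users_email'
```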

Beyond databases, optimisation engineering demands careful attention to computational complexity. Algorithms and data structures matter immensely – those LeetCode problems you grinded through will finally pay off. Profiling code effectively to identify and address performance bottlenecks is also essential. Employ caching solutions like Redis or Memcached intelligently to reduce latency. Understand your critical paths, rigorously test your assumptions, and, as always, make changes only when the benefit clearly outweighs the cost.
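
A toy sketch of both ideas together: functools.lru_cache stands in for an external cache such as Redis, and cProfile shows where the time actually goes before you change anything.

```python
import cProfile
import functools
import time


@functools.lru_cache(maxsize=None)
def expensive_lookup(key: str) -> str:
    time.sleep(0.05)  # stand-in for a slow database or network call
    return key.upper()


def serve(requests: list[str]) -> list[str]:
    return [expensive_lookup(r) for r in requests]


# Profile first, then optimise: repeated keys hit the cache, so only
# two of the five calls pay the 50 ms cost.
cProfile.run("serve(['a', 'b', 'a', 'a', 'b'])", sort="cumulative")
```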

Key acronyms: CAP (Consistency, Availability, Partition tolerance), CDN (Content Delivery Network)