I asked my mother a few days ago if her two-year-old iPad had become slow. It had, and this phenomenon bemused my father. How can a computer ‘become’ slow? I studied computer science, sometimes code as a hobby or for special projects, and help build software for a living. I buy a new computer (in its many incarnations: PC, phone) every few years when the sluggishness of my current computer becomes intolerable. But I couldn’t really answer off-hand how computers become slow. So I went back to my first few months in computer science to try to make sense of it all.


The one thing everyone can understand about the tech industry is Moore’s Law, which states that the number of transistors on a chip will roughly double every two years.

Moore’s Law is relevant because the number of transistors on a chip equates to the number of tiny operations a computer can do at any moment – operations such as add these two numbers, or fetch that number from memory. These tiny operations are put together in code to do more powerful things, such as to center align your essay or to fetch the news from London. In English: computer processors are expected to become twice as fast every two years.

In reality Moore’s Law isn’t a ‘law’ – it is not based on any scientific, physical observation. It is just a prediction, a conjecture. We call it a Law because it’s a ridiculously good conjecture: it’s almost always right. Our ability to put more transistors on a computer chip tends to meet or beat Moore’s Law. Moore’s Law is mathematically unprovable but anecdotally irrefutable. No one can guarantee that chip makers will continue to grow computing power at this amazing pace, but they somehow always do.
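The doubling claim is easy to turn into a back-of-the-envelope formula. A minimal sketch (the function name is mine, not a standard one): if transistor counts double every two years, then after y years a chip has about 2^(y/2) times as many.

```python
def moores_law_factor(years: float) -> float:
    """Rough growth in transistor count after `years` years,
    assuming a doubling every two years (Moore's Law)."""
    return 2 ** (years / 2)

# Two years gives the expected doubling; a decade gives a 32x jump.
print(moores_law_factor(2))   # 2.0
print(moores_law_factor(10))  # 32.0
```

Which is why a two-year-old iPad is, by this crude measure, working with roughly half the raw capacity of its newest sibling.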

So computers roughly double their capacity to crunch information every two years. So far so good – but that doesn’t explain why the iPad slows down. The catch is that other advancements in the computer industry, such as how much information we can store (on hard drives and other kinds of storage devices) or how much information we can capture (such as dots from a digital camera lens), also rise at roughly the same rate.1 Of course as we are able to capture and store more information, we also want our computers to process it. With the passage of time we store larger, better photos than we used to, and we store more of them than we used to. And the increased speed of the computer chip is used to take all the digital bits we added onto our photos and display them on our screens.

Just as computer hardware gets faster and we amass more data, engineers are also working to make software do more complex things.2 We are writing software to make websites display richer content, to have your phone talk to your computer, and to make predictions based on data we have already gathered.

The net result of all of this is that hardware, software, and our data expectations are all advancing. The caveat is that the iPad you bought can get a software update but not a hardware update. As a result, we either stop updating the software, put less data on our devices, or do all the processing slower than before.


Most computer science students will take an algorithms class early in their education. The point of the algorithms class is to discuss the intellectual pursuit of instructing machines to do things smartly. Take for example the list of contacts on your phone. This list is likely in alphabetical order, and to get it that way the computer has to be able to take a list of names in any random order and sort them alphabetically. It turns out that by varying the order in which you go through the list and how you move around the elements, you can sort a list in a much smaller number of operations than if you were to do it the very obvious way. The number of operations you save (and hence the time you save) from using a smarter algorithm grows with the number of contacts you want to sort. The algorithms class teaches computer science students that writing slow, dumb software and relying on hardware improvements to make their software work better is a bad idea, because with hardware improvements also comes a rise in the amount of data we want to process.
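The gap between the obvious way and the smart way can be made concrete. A minimal sketch (the contact names are invented for illustration): count the comparisons made by a naive neighbour-swapping sort against a divide-and-merge sort on the same shuffled contact list.

```python
import random

def bubble_sort_ops(names):
    """The 'very obvious way': repeatedly compare neighbours and swap.
    Returns the sorted list and the number of comparisons made."""
    names = list(names)
    ops = 0
    for i in range(len(names)):
        for j in range(len(names) - 1 - i):
            ops += 1
            if names[j] > names[j + 1]:
                names[j], names[j + 1] = names[j + 1], names[j]
    return names, ops

def merge_sort_ops(names):
    """A smarter way: split the list in half, sort each half, merge.
    Returns the sorted list and the number of comparisons made."""
    if len(names) <= 1:
        return list(names), 0
    mid = len(names) // 2
    left, lops = merge_sort_ops(names[:mid])
    right, rops = merge_sort_ops(names[mid:])
    merged, ops = [], lops + rops
    i = j = 0
    while i < len(left) and j < len(right):
        ops += 1
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged, ops

contacts = [f"contact{n:04d}" for n in range(1000)]
random.shuffle(contacts)
_, naive = bubble_sort_ops(contacts)
_, smart = merge_sort_ops(contacts)
print(naive, smart)
```

For 1,000 contacts the naive sort always makes n(n-1)/2 = 499,500 comparisons, while the merge sort needs fewer than 10,000 – and the gap widens as the contact list grows, which is exactly the point the algorithms class makes.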

Computer science students will likely also take a systems class, where they learn how instructions are actually processed, from code written in the Latin alphabet down to the metal on the chip. In this class students are encouraged to keep their instructions simple, because more complicated code is harder to understand, harder to get other people to help with, and harder to fix when something is wrong. Smarter algorithms often tend to be a bit more complicated than their dumber counterparts, and hence the systems class encourages students not to be clever unless they absolutely have to be.

In short, the algorithms professor asks students to do shit smartly. The systems professor tells students to keep shit simple, and get shit done.

These are two often conflicting philosophies of software engineering, manifested in the differences between an almost academic desire to write algorithmic poetry and in the entrepreneurial desire to engineer with power and gusto. Some programmers pick their sides, most are torn between them and remain conflicted their entire lives.

In programming, as in life, we face a choice between paths that seem to go in opposite directions. But despite their divergence, neither ideology can survive without the other. Without smart code many entrepreneurial advancements would be impossible. And without the desire to engineer greater things, the need to find more beautiful, elegant ways of solving problems would be moot. As a result most code bases are in perpetual identity crisis. Parts are written in profound, ingenious ways. Others are hacked together in an effort to work around deadlines and bad decisions. The imperfections of this programmatic self-discovery manifest in a codebase that appears almost confused about what it wants to be, often unable to associate fully with the identity of any of its human contributors.

This confusion is bewildering for many engineers. Ideally, programmers wish to write smart solutions in ways that are simple, on-time and deliverable. But when the need arises to choose between them, programmers often find themselves shoreless. Engineers engage in endless arguments about how a product should be built, about the respective values of the two programming philosophies. In essence these debates are about what every engineer stands for. It is as much a question about personal belief as it is about the software itself. And central to this confusion is a debate on meaning – an everlasting question of what it is that humans should consider to be of value.

For all the physical and intellectual movement that occupies our days, it’s strange how little we seem to know where we want to go. Because we really don’t know what matters. All of our growth – the education, the relationships, the pain – seems to be preparation to better answer the question of what matters to us. Because what we think is important is – for a mind that engages thought – what we should do.

Often what we think is important is determined by why it is we think we exist. Some answer the question the other way round – why they think they exist dictates what they think is important. We try to answer this in religious, existential terms, and sometimes in utilitarian, worldly terms. It’s all a bit confusing. In the end we often just find ourselves waiting to come across something that appears meaningful. And in that meaning we begin to find what matters to us, and perhaps why it is we think we exist.


My view of the way humans perceive meaning is as follows. At the center of every universal mystery is, in David Foster Wallace’s words, a capital-T Truth. The capital-T Truth is what every human would see if we could look past everything, if we could assess perfectly the state of our existence. Take the following question: how important is a university education? In real life there’s no way to definitively measure or answer this. As a result different people find varying amounts of meaning in going to college. But if we could measure it, if we really could tell how important it was, that would be the capital-T Truth.

The trouble is that this, and all the other great questions of life – How much truth is there in the concept of God? Is what I do a good way to spend my day? Is my anger justified? Is taking on the technical debt worth it? – all can be answered with ‘it depends’. Which makes them incredibly frustrating in everyday life, but for once this makes them useful to me as an illustration for all kinds of questions that have no straight answer.

Rarely will a human perception manage to see something as exactly as meaningful as it really is – that is, humans may sometimes be able to get a glimpse of the capital-T Truth. Most of the time, however, we will either over- or under-estimate the true meaning of anything; our perception will be either romantic or nihilistic. So for all the ‘it depends’ questions, it really just depends on how you look at it. And this depends on who you are and what your frame of mind is.3

Every perception we make lies somewhere on this spectrum. Smart people will realize the fallibility of human cognition, and with it accept that our views about the value of everything around us will often be incorrect, implying that we will often overshoot or undershoot how meaningful something really is.

Assuming that it is a desirable goal to uncover the capital-T Truth, and that we can be affected by the views of people around us, the best shot we have at hitting the capital-T Truth about the universe as a whole, if not for each individual component of it, is to average out our beliefs across the spectrum – sometimes skeptical and sometimes dreamy.

One of the simplest ways of doing this is to surround ourselves with people who constantly challenge our beliefs by pushing them in the other direction. Diversity of opinion in a society has a beautiful effect of re-centering communal beliefs, of pushing each other towards the capital-T Truth, when alone we are skewed and false. This is what fuels my love for modern community – cities, universities, large corporations, and the internet.


My life has had many rocks of belief: parents, religion, college. At various points in life I have put near-blind faith into these rocks, keeping one thing constant while I learnt to process the rest. Yet inevitably every rock in my life has been turned over to reveal uncertainty, disillusionment, confusion and isolation. The first time you find out that you might be able to answer something better than your parents, when you begin to find inconsistencies in the faith that you tie everything back to, or even when you realize that the college you thought represented perfection in how the world should organize itself was really looked upon by dear friends as a corporate feeding line.

Every time I have found reason to believe that one of my ‘rocks’ of belief – the idols I believed to be true – is fallible, I have been left clueless, unable to develop a framework to process the new-found uncertainty in my thinking.

This, and other sources of personal chaos, often lead me to shut down my mind. For all intents and purposes I remain functional, I work at a decent pace, I continue living my life. But I turn off my desire to think and feel. This is a knee-jerk reaction to the fear of feeling lost and disillusioned.

I think there is something profound in the fact that depression is considered a mental illness but stupidity is not. In common understanding depression is characterized by sadness. But depression is really less about sadness and more about numbness. Depression brings with it a lack of vitality that seems to take away the emotional centering of a human being. The ability to feel something in your gut, an instinctive assessment of a situation, believing something, being convinced, that comfort goes away. It is instead replaced by an emptiness that wants direction. Medically we consider a person broken not when they feel something incorrectly or irrationally, but rather when they fail to feel something at all. On the spectrum of meaning, it is less dangerous to be far away from the capital-T Truth than to not be on the spectrum at all. There is no place farther away from the Truth than the place where you no longer want to find it.


Logically it makes sense that if we are affected by people around us, then finding people who question our beliefs is likely to lead us closer to the capital-T Truth. But I am afraid of the bewilderment of having what I know be washed away by disagreeing smart friends and their seeds of logic and the resulting doubt. And I am certain I am not alone in being this afraid. So as much as I desire to understand the world and chase meaning, I dread losing the grounding I do have. But there’s no way around it – trying to be conscious of the world makes you more vulnerable to running into its contradictions. It never will make complete sense.

Because the truth is, we will never be right. The one thing we know for sure is that complete understanding of the capital-T Truth is beyond human intellect.4 The complexity of our world and of the human experience within it will always outpace our ability to observe and understand it. And as bewildering as that seems, it shouldn’t scare us. Because the moment we do understand the capital-T Truth, the moment it all makes sense, the moment we can finally be right, at that moment it’s all probably over.

  1. Another reason why Moore’s Law is great. You can assume it explains everything and you’ll most likely not be proven an idiot. 
  2. Aside from research in Quantum Computing, modern advances in hardware design are largely around making chips smaller and faster. So what we commonly refer to as technological advancement in terms of what a computer can do is largely just advancement in software. Some of this is that we are able to write smarter and more complex code to add functionality to our computers. There are however some advancements where we don’t make any significant intellectual progress in the software, but faster hardware makes the processing time tolerable, like lighting simulations in 3D graphics. 
  3. My theory is that our perceptive lens on life is subtly affected by everything we experience. Everything we read, everything we do, everything that happens to us affects how we think of ourselves and everything else. And slowly put together it gives us an intuition that helps us build feelings towards everything. Sometimes we can describe this in rational thought but other times it’s beyond our ability to express. 
  4. Which is one reason why some often define God as the only being that can understand the capital-T Truth. It is the ultimate emblem of human humility, to accept that we will always be incomplete, always wrong. The only way we can understand the picture is to conceive of something that isn’t incomplete or imperfect. Imperfection only makes sense in the face of perfection. Otherwise it’s just context-less fact.