-
Jul 29, 2016
The top shelf
In case you haven’t noticed, the internet is kind of amazing. Not only do you have the sum of all human knowledge available 24/7, but you can also instantly communicate with people around the world, possibly while also watching videos of ’80s Russian Winnie the Pooh and ‘mining’ this week’s flavor of cryptocurrency, all in the comfort of your own home.
Really, it’s all pretty great. And as for how to deal with that massive pile of information that no person could possibly go through in a dozen lifetimes, we’ve come up with a solution for that too. You simply go to a link aggregation site of your choice, where a community of intrepid internet grazers has collectively determined what the best links of the moment are so you can maximize the pleasure:time ratio of your browsing session. Oh, and don’t worry about having to think critically about what you’re reading. You can just read the comments which (surprise!) are also sorted best-first so you can know how you should feel about this thing without even having to read beyond the title. Great, right?
Food is a similar story. Now in the old days, you probably tried out a few local pizza places to find out what you like the best. No more, friend. Just type ‘pizza’ into a textbox and we’ll show you the best pizza place in your state as rated by thousands of people who surely are just like you in aggregate. Why be constrained to the confines of your locality or the limits of first-hand experience? You deserve The Best™¹.
I could go on, but you get the idea.
The amount of information that we have available to us is remarkable and has great potential, but in a world where everything has ‘sort by rating’ functionality, it feels like we’re all reading, eating, watching, and doing the same 14 things…
…except those who aren’t. You see, in the same way the internet lets you find the ultimate (or ‘penultimate’ for those who don’t know what that word means) version of everything after pitting it head-to-head against its peers, it also lets you find the ultimate version of people who share your beliefs. And, as with the pizza and the links, it’s not limited by geographical boundaries. So whether you enjoy woodworking or restoring classic RVs, you’ll find entire communities of people who share your passion. In fact, there are likely to be people in those communities who have dedicated their lives to that passion and whose knowledge and skill is superlative. Suddenly, you go from being ‘the RV guy’ in your neighborhood to being just an average member of this RV group.
Now, imagine your passion is something other than RVs. Let’s say instead that you have a keen interest in contrails or perhaps, uh, in the melting points of various kinds of construction material. Having a nuanced conversation about these things with real people you know is so last century. Instead you can now spend your time reading and commenting on stuff that’s already sorted by how perfectly it reinforces your pre-existing beliefs, and how well it reduces those with contradictory viewpoints to one-dimensional caricatures just waiting for your ridicule. It’s much easier to feel good about yourself this way.
It’s a bit of a paradox, really. Here we are in the most technologically advanced and connected period the world has ever known, with virtually no limits on the information we can access, and yet all of the tools we’ve built funnel us ever narrower into content which is supposedly The Best but in reality is only valuable because it reinforces our beliefs and strokes our sense of self.
-
¹ Sometimes “The Best” may have paid to be called that, but we’ll ignore that for now. ↩
-
Jan 11, 2016
Nothing is free
As programmers, we are constantly standing on the shoulders of others. Whether you’re writing client-side web stuff or a kernel-mode driver, you have to build a reasonable mental model of the layer below you and proceed based on that model. If that mental model is too vague, you won’t get anywhere, and if you try to understand every minor detail of every layer that’s below you, you will never get started on solving your actual problem. Striking a balance here is key, but sometimes it’s hard to know what’s important and what’s not.
As it turns out, when tuning for performance, a lot of ‘unimportant’ things really do matter. Sure, your language/platform/framework might hide some complexity in an effort to make it easier to get things done, but ultimately, there is one overarching rule when it comes to performance: nothing is free.
Consider a very simple example. You have some data at memory location `x`. There is only one memory location `x` in your process’s address space. You know that. That’s how the system works, and it wouldn’t make sense otherwise.
Well, several levels down the stack, you have the hardware that this thing is running on. Here, there may be multiple caches all containing either what is the value at `x` or what was the value at `x` at some recent point in the past. There’s some magic gnome running around at this level managing these copies for you so that every thread in your program (if it’s written properly) is reading/writing the correct value at each point in time. This is called cache coherency, and it’s happening automatically. You never really have to worry about it being inconsistent, but it’s not free. Nothing is free.
For a nightmare scenario demonstrating how not free it can be, consider the situation in which you have two threads running on different cores that are not even accessing the same data. Thread 1 is using data at memory location `m` and Thread 2 is using data at another location `m+n`. Those are two distinct locations, and from each thread’s perspective, nobody else is going to be writing to the location that it cares about. Surely, nothing can go wrong at this point.
Well, as it turns out™, if `n` is not big enough, you’re going to have a bad time, because `m` and `m+n`, which may be holding completely unrelated values from a program logic standpoint, will end up on the same cache line. As each thread writes to its own variable, that write invalidates the cache line in the other core that happens to contain its variable, and vice versa, and you’re now spending precious time updating cache lines to reflect changes in values you don’t care about. You didn’t think you needed to worry about this, but nothing is free.
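To make this concrete, here is a minimal sketch of the effect, which goes by the name false sharing. It’s my own illustration rather than anything from the original post: two threads each hammer their own counter, and the only difference between the two layouts is whether the counters share a cache line. The `64` in `alignas(64)` is an assumption (the typical cache line size on current hardware).

```cpp
// False sharing sketch: same workload, two memory layouts.
#include <atomic>
#include <chrono>
#include <cstdio>
#include <thread>

struct SameLine {
    std::atomic<long> a{0};
    std::atomic<long> b{0};              // almost certainly on a's cache line
};

struct SeparateLines {
    alignas(64) std::atomic<long> a{0};  // 64 = assumed cache line size
    alignas(64) std::atomic<long> b{0};  // forced onto its own line
};

template <typename Counters>
long long timed_run() {
    Counters c;
    auto bump = [](std::atomic<long>& n) {
        for (int i = 0; i < 50'000'000; ++i)
            n.fetch_add(1, std::memory_order_relaxed);
    };
    auto start = std::chrono::steady_clock::now();
    std::thread t1(bump, std::ref(c.a));
    std::thread t2(bump, std::ref(c.b));  // distinct location, maybe same line
    t1.join();
    t2.join();
    auto stop = std::chrono::steady_clock::now();
    return std::chrono::duration_cast<std::chrono::milliseconds>(stop - start).count();
}

int main() {
    std::printf("same cache line: %lld ms\n", timed_run<SameLine>());
    std::printf("separate lines:  %lld ms\n", timed_run<SeparateLines>());
}
```

On typical hardware the padded version runs noticeably faster, and the fix is exactly what `SeparateLines` demonstrates: align or pad per-thread hot data so it never shares a line with something another core is writing.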
Now, I’m not suggesting that you start stressing out over all the things that could potentially be going wrong in every layer of your program down to the metal. If you’re lucky, you may never even hit a performance problem that’s caused by a really low-level detail like this. For most classes of software, it’s probably not going to be noticeable even if it does occur; there are usually other bottlenecks that are more significant. But apart from just being aware of this class of problem, it’s also useful to remind yourself the next time you want to add yet another framework or library to your stack that you probably don’t need and definitely don’t fully understand: nothing is free.
-
Mar 31, 2015
Know your tools
Tools are awesome. They make the world go round. There’s nothing like having the perfect tool for a job, and conversely, few things are worse than having to solve a problem when you have no suitable tools (and believe me, when you work on developer tools, you often have no tools because you’ve destroyed all your tools with your tools).
That being said, it’s important to understand what your tools are doing and how they work. This seems obvious, but it’s really easy to fall into the trap of having a flawed mental model of how a tool works and then being misled when that model turns out to be false.
Here’s a quick example of how even a debugger can mislead you if you’re not careful.
Say you have some C++ with a vector and a reverse iterator over it. Nothing complicated.
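Something like this, say. The snippet itself is a reconstruction (the vector’s contents and the iterator’s position are guesses chosen to match the values discussed below); only the names `currInt` and `reverseIter` come from the original example.

```cpp
#include <vector>

int main() {
    std::vector<int> ints{1, 2, 3};

    // A reverse iterator positioned over the middle of the vector.
    auto reverseIter = ints.rbegin() + 1;

    int currInt = *reverseIter;  // <-- breakpoint on this line
    return 0;
}
```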
During execution, you set a breakpoint on the line where `currInt` is being set and hover over `reverseIter`. The debugger shows you a helpful tooltip that says: `current: 3`.
So what will the value of `currInt` be after executing this line of code?
It’s really easy to say 3 because, well, of course it is. The debugger is clearly telling you that, and the debugger is your friend, right?
Well actually, the answer is really 2, and before you say it, this isn’t a bug in the debugger. What it showed was correct, but it was easy to miss a crucial word in that tooltip.
Looking at the official C++ standard, we find that the word `current`, which the debugger showed us and we ignored, means something very specific in this case. `reverse_iterator` has a member called `current` which points to the element one position after the element the iterator points to. The reason why `currInt` gets set to something different from `current` is because `operator*` does the following:
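```cpp
// Per the standard, reverse_iterator::operator* is equivalent to:
// copy `current`, step it back one position, and dereference that.
reference operator*() const {
    Iterator tmp = current;
    return *--tmp;
}
```

So the tooltip’s 3 was the element `current` points at, while `currInt` gets the element just before it: 2.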
One day, we’ll have tools that can read your mind, understand your problem, and show you exactly what you need to see. Until then, no matter how tempting it is to believe we can blindly rely on our tools, it’s scarily easy to misjudge what they are saying¹.
Know your tools and stay awake.
-
¹ This problem is even worse when the tool in question is something like a profiler, in which case there are so many potential things to misunderstand. You need to have a basic handle on how it collects data, how it displays the data it collects, whether the way your program was behaving when you profiled it was representative of its usual state, whether the very presence of the profiler has a performance impact on your program, and so on. ↩
-
Jan 24, 2015
History
History is underrated in our community.
This is true on a macro level, in that most developers have never studied the history of the field as a whole, but it’s also true on a micro level, in every software project that is active for any nontrivial amount of time.
If you talk to developers working on different projects and in different companies, you’ll notice a reverse Lake Wobegon effect going on: everyone believes that their codebase is a mess and that parts of it need to be thrown away and rewritten. While this is sometimes because the code in those parts is smelly or poorly architected, it is often because the approach that was taken to solve the high-level problems seems fundamentally wrong.
At the same time, any developer worth their salt knows that it’s generally not a good idea to throw away large components and rewrite them from scratch without (a) a good reason to and, crucially, (b) a solid understanding of the old code that’s being thrown out. It’s easy to think that a chunk of code in some routine is doing something that’s completely pointless, but the fact that someone put it there in the first place means that at least one person thought it served a purpose (more than one person if you have a code review process, which you should). There are many possible explanations for its presence:
- Perhaps it has a legitimate purpose and you’re not seeing the big picture.
- Maybe it made sense at some point but something somewhere else changed since then, making it redundant.
- The person who wrote it was in fact a moron¹.
There are other possibilities too, but the point is that you don’t know which is the case unless you have solid context for the existence of that code. Unfortunately, that generally means you had to be the one who wrote it. If that person is no longer around, then tough luck; you can either avoid the risk of inadvertently breaking things you don’t have tests for by keeping the moldy old code around, or you can bite the bullet, perform a costly rewrite, and take the chance that someone will send you hatemail because you broke support for their favorite scenario on their pet platform, at which point you’ll have to debug some hard-to-reproduce race condition which leads you, after much pain, to the reason why that chunk of code was there in the first place.
Clearly, neither of those choices (being paralyzed by fear, or being reckless) is ideal, and we can’t start forcing anyone who writes any code to answer phone calls about it long after they’ve moved on. Comments in code are great, but they only get you so far; they don’t tell you what alternative approaches were considered, what the explicit goals and non-goals for that component were at the time it was written, whether there were impending deadlines that may have affected decision-making, and other stuff that you can only find out from someone with a good memory who was around at the time.
I don’t know what the perfect solution to this problem is. I know that some teams take good notes on everything that’s being done and keep a log of all the options and proposals that are considered, along with reasons why they were or were not implemented. The idea is that if someone comes along and has an idea, chances are that someone had that idea earlier, and by searching this log they can find out why it was a bad idea in the past (or not). This approach makes more sense for some projects than for others, and it can be heavyweight if not done right, so it’s not a perfect solution.
We need to get better at this and figure out best practices on our own projects and teams that will ensure we’re not entirely dependent on people and their memories in order to leverage the learnings of the past.
-
¹ This is generally unlikely, but if you find that it’s a frequent occurrence, then maybe you should find some new people to work with. ↩
-
Nov 1, 2014
Code Reviews and You
I was recently asked to speak about best practices and guidelines for effective code reviews. In preparing for that presentation, I thought a lot about what works and what doesn’t, and why some teams seem to get a lot of use out of the code review process while others see it as a necessary evil or a hindrance in the way of pushing code.
Much has been said about specific ways to ensure useful code reviews, and most of it really is useful advice that you should go read and practice.
What’s more important, though, is that every member of your team understands what reviews are for. Reviews are not just for the benefit of the person whose code is being reviewed. They are a way for everyone involved in a project to understand how things are changing in other parts of the codebase. They ignite some of the most useful debates, which end up leading to better design decisions in the long term. They give junior reviewers a chance to observe the kinds of things more skilled reviewers home in on. They even lead to better naming, which is the second hardest problem in computer science and something you should worry about getting right.
Once it becomes clear that all of these are goals of the code review process, guidelines and rules are no longer necessary. Just use common sense.