12 December 2017

A month ago I left a job as a software engineer and data scientist at Twitter.

I was there for 4.4 years and learned a great deal. I not only got significantly better at the job-description part of my job - contributing code and data analysis - but I also picked up a few lessons about what makes larger efforts and collaboration successful.

So… without pretense of completeness or correctness, and without further ado, here is a sample of things I learned.

It’s better to get a relatively smaller win now than a larger one later.

Shipping a successful feature brings more resources and people to your project and will actually accelerate the larger, future win. Work is like a Hydra: kill one head “successfully” and three more will grow out of it. You’ll have more work to do. When a project is done successfully, it usually opens up other possibilities on top of it. It also brings momentum to that work stream. Exciting people like to join exciting projects, and successful projects are more exciting.

I was on the Timelines team when we were moving from the reverse-chron timeline to a ranked one. The project was successful, helped our team grow, and attracted awesome people to join. Shipping a win made the team better and able to ship further wins faster. If we had waited until we’d done some of the follow-up work, it would have taken a lot longer without the new team members. MVPs rule.

Working together is not a zero-sum game.

We mutually help each other, but it’s counter-productive to keep score of how much each person has contributed. Keep helping and supporting others and it will pay off, in unpredictable ways.

Most times I’ve taken the time to help with someone else’s brainstorm, work through an issue with them, or put extra effort into code and design reviews, I’ve felt an increase in camaraderie from the other person, and I’ve received more unsolicited help and advice from them later. Not only did this help us do better work, it made the work more of a pleasure. Teamwork matters. You can’t do it alone. Teams will beat geniuses about 99.99% of the time in the short term, and 100% in the long term. What we’ve been able to accomplish as a team is a lot more than what each of us could do alone.

In a typical project there are just too many details for a single person to get right. Even for the smallest code changes, we do code reviews, and having a different person with a detached perspective take a look often finds ways to improve them. This is even more apparent in a larger project that requires different roles. No matter how smart I may think I am, it is pretty clear there are aspects I’d never think about, even given unlimited time.

Collaboration amplifies. Or if you like TPS reports, you might want to call it synergy :)

Doing a lot of code reviews for others is the fastest way to learn.

It helps you keep up with what all your teammates are doing, builds up karma, and is the best way to discover the code patterns to use. It also gets a lot easier with time - economies of scale.

As a relatively junior engineer to begin with, there were a lot of systems and patterns I didn’t know or understand. Doing a lot of code reviews for my more senior teammates showed me how they write code and the considerations they make, and helped me learn about the different systems we were using as a team. There were stretches when I tried to review almost all the code my teammates wrote. This really helped me keep up with what each of them was doing and gave me enough context to provide more adequate and specific suggestions. It also made it easier to write my own code that interacted with, or built upon, theirs, which increased my productivity - at least measured quantitatively. I believe there was a qualitative increase as well, since code reviews encourage collaboration, not merely cooperation.

If you are a junior engineer, or new to a team or company, doing a lot of code reviews is hands down the best way to get better. It’s also my favorite example of collaboration done right.

Make BigData small.

When analyzing large amounts of data, it’s important to aggregate and summarize as much as possible. As much as we don’t like to admit it, we humans are not good at spotting patterns in raw data. We can recognize straight lines and some high-level features in a plot, but that’s about it.

Often I had to analyze terabytes of data. I had Scalding, Hadoop and MapReduce as tools, but I still had to find ways to summarize that data down to a handful of dimensions and metrics and create plots that make sense to humans. For data mining it was a two-stage process. First, create a small dimensions-and-metrics dataset that fits in memory or in a database. Then it’s a lot easier to mine this smaller dataset for insights than to keep going back to Hadoop. Each iteration and new idea takes seconds, not hours, to investigate.

I really like the way Tableau approaches this, by explicitly calling out “Dimensions” and “Metrics” for the small data table. Dimensions are how you can slice and dice the data, and metrics are the statistics for each possible slice. To anyone new to data mining, this would be my first piece of advice.
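To make the two-stage process concrete, here’s a minimal sketch in plain Scala, standing in for what would really be a Scalding job over the raw logs. The event fields, dimensions and metrics below are made up for illustration.

```scala
// A minimal sketch of the two-stage approach, using plain Scala collections
// in place of a real Scalding/MapReduce job. The event fields, dimensions
// and metrics below are hypothetical.
case class Event(country: String, client: String, engaged: Boolean, minutes: Double)

// One row per (country, client) slice: the dimensions plus a few metrics.
case class Row(country: String, client: String,
               events: Long, engagedRate: Double, totalMinutes: Double)

// Stage 1: collapse the raw events into a small dimensions-and-metrics table.
def summarize(raw: Seq[Event]): Seq[Row] =
  raw
    .groupBy(e => (e.country, e.client))
    .map { case ((country, client), group) =>
      Row(
        country,
        client,
        events = group.size.toLong,
        engagedRate = group.count(_.engaged).toDouble / group.size,
        totalMinutes = group.map(_.minutes).sum
      )
    }
    .toSeq

// Stage 2: every new question is a quick pass over the tiny summary,
// not another multi-hour job over the raw terabytes.
def topEngagedCountries(summary: Seq[Row]): Seq[Row] =
  summary.sortBy(-_.engagedRate).take(10)
```

Once the small summary exists, every new question is a filter or a sort over a few thousand rows, not another pass over the raw data.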

Visualization matters.

How we communicate our findings is as important, and as hard to get right, as deriving the findings in the first place. The most common bias, and I’m guilty of it myself, is to overestimate the importance of the findings and underestimate the importance of the communication.

I took a one-day course by Edward Tufte on how to present quantitative data and it was well worth it. It taught me how important it is to show comparisons, causality and proper documentation, and not to underestimate the communicative power of text labels. Text labels rule. I’ve learned the hard way that it takes at least as much work to communicate a finding effectively as it took to derive it. Sometimes many times more.

If something isn’t tested, it WILL break.

If it’s tested, it might still break. This so far has been true 100% of the time.

Unit tests have saved my butt many times. I don’t know if anyone can write bug-free code, but almost every time I’ve written unit tests I’ve caught bugs and code design issues. I learned to associate writing tests with finding bugs so much that I do it all the time, and avoid giving a “Shipit!” to other people’s code if they don’t have tests.

One vivid example is from the day we launched the ranked timeline. We had to make some last-minute changes the day before, which came down from high up. It was also impossible to push the launch date back for more testing, due to press commitments. We had added new code, along with a whole bunch of unit tests for it. But we forgot to unit test one case, and that case did break during the launch. I felt embarrassed. Exceptions in production, and a launch delayed by a few hours. At the time, though, we didn’t panic and calmly proceeded to write more unit tests that reproduced the issue, so we could be certain when it was fixed. Our manager read the situation correctly and gave us the calm and isolation we needed to focus on the issue. He offered to bring us coffee.
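The “reproduce it as a test first” habit looks roughly like this. It’s a made-up sketch - none of these names come from the actual timeline code - and it assumes ScalaTest is available:

```scala
import org.scalatest.funsuite.AnyFunSuite

object Timeline {
  // Hypothetical helper: take the top-ranked tweets, falling back to
  // reverse-chron when no scores are available (the case we had missed).
  def pick(scored: Seq[(Long, Double)], recent: Seq[Long], n: Int): Seq[Long] =
    if (scored.isEmpty) recent.take(n)
    else scored.sortBy(-_._2).take(n).map(_._1)
}

class TimelineTest extends AnyFunSuite {
  test("falls back to recent tweets when scoring returns nothing") {
    // Written first to reproduce the failure, then kept to prove the fix.
    assert(Timeline.pick(Nil, recent = Seq(3L, 2L, 1L), n = 2) == Seq(3L, 2L))
  }
}
```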

Clean data models are worth the investment.

It’s worth the time to make sure the data models and structs are right for the use case and have clear semantics. If you are clobbering and reusing fields, it will cost you more in the future.

My teammates and I were working with a result structure representing tweets. One field in the structure was used for multiple purposes, which were most likely mutually exclusive at the time the data structure was designed. Or maybe it was designed with only one case in mind and the other one sneaked into the same field, since it is a PITA to update a data format once it’s live. Anyhow, all was rainbows and sunshine, except it wasn’t. It took a bunch of work to parse the different cases back out into a less ambiguous format, and everything broke when we migrated and rebuilt one of the services: a different person had to re-do the parsing in a different language, and the quirks of the field were documented only in some old JIRA tickets.
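As a purely hypothetical illustration (not the real result struct), the difference between the overloaded field and a model with clear semantics might look like this:

```scala
// Before: one field reused for two unrelated purposes. For retweets it
// holds the source tweet id, for injected tweets it holds a ranking reason,
// and every consumer has to rediscover that convention.
case class ResultLoose(tweetId: Long, context: Option[String])

// After: the mutually exclusive cases are spelled out in the type, so the
// parsing logic can't quietly diverge between services.
sealed trait ResultContext
case class RetweetOf(sourceTweetId: Long) extends ResultContext
case class InjectedBecause(reason: String) extends ResultContext

case class Result(tweetId: Long, context: Option[ResultContext])
```

The second version costs a little more up front, but each consuming service gets the mutually exclusive cases from the type instead of from tribal knowledge buried in old tickets.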

We spent much more time working around the clumsy data format than would have been necessary to do it right the first time. We also had tricky-to-fix production bugs that remained undiagnosed for a while. This kind of tech debt comes with a steep interest rate and an unpleasant payment schedule.

Names stick.

My professor in college used to say that naming is very powerful. He’s so right. So when coming up with a name for a feature or behavior, find a good one. Chances are everyone will start using it, and it will be impossible to change later.

I’ve had my fun observing that process in vivo, when I started using the term “headless reply” for one of my projects, and in the following months I started hearing it in other contexts and other projects. Perhaps I didn’t coin the term, but sure enough I hadn’t heard it before. I’ve also seen suboptimal terms get used so much that it became impossible to start calling them anything else.

Except in very rare cases, names don’t change. Once they’re in people’s minds you can’t press Fn+F6 and do a “Refactor -> Rename” on them in IntelliJ.

Every metric will become a vanity metric.

Monthly active users, user active minutes, etc. These can be great to optimize in some cases, but over the medium to long term they stop being what correlates well with user satisfaction and product health.

Of course, it’s a lot easier to see when other teams are over-optimizing a given metric. It’s a lot harder to be self-critical. Over the long term, the product you build is the best fit to the metrics you target. Oh. Key. Ars. If these metrics don’t evolve, the product will overfit.

Estimate time generously. And then pad it some more.

Work will always take longer than you thought, even when you adjust for that. And nobody will complain if you get it done sooner instead. For any engineer, the cost-reward is very asymmetric between estimating too long and estimating too short.

When I’ve been asked to estimate how long a piece of work will take me, I’ve always had the inclination to imagine the steps and how long each will take. But this doesn’t account for meetings, off-sites, missed edge cases, time spent helping teammates, and other delays beyond my control. And also…

After the first 90% of each project, it’s time for the other 90%.

For example, when building a product, no matter how hard you think about it, there will always be edge cases users hit that are broken and need further work.

We’ve always, always found issues when dogfooding new features, and many times we’ve had to iterate in A/B tests. Every rare or weird usage pattern will happen only a very small percentage of the time. But that’s still far too many times when the product is used by hundreds of millions of users every day. The question I learned to ask myself is not whether unexpected behaviour will happen, but what the code will do when it inevitably does.
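In code, that question usually comes down to choosing a deliberate behaviour for the input you believe can’t happen. A tiny, hypothetical Scala sketch of the mindset:

```scala
// Decide up front what the code does with the "impossible" input,
// instead of finding out from a production exception.
def parseCount(raw: String): Int =
  raw.trim match {
    case s if s.nonEmpty && s.forall(_.isDigit) => s.toInt
    case _ =>
      // At hundreds of millions of users, the weird input WILL arrive;
      // here the deliberate choice is a neutral default.
      0
  }
```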

Volunteer for more projects than you can do.

It’s better to drop a less important, less exciting project than to be stuck with it as your only project.

I think this tends to work because more important projects are usually more exciting. It then becomes possible to explain the benefits of the more important, more exciting project to your manager and convince them that you should focus on it rather than on the less important one.
