Join us as John Sadler describes the work of Agilent Technologies, a company that creates equipment and peripherals that help measure the quality and purity of the things we depend on to live a modern life.
John leads the company's only software division, ensuring the user experience is smooth and accessible. As our conversation continues, we discuss the elements of leadership and how the company developed its belief system over time. From autonomy to holding true to the promises that are made, John shares the value of his credibility-based leadership methods.
1. Agilent Technologies
2. How to build credibility as a leader
3. Leadership autonomy
4. Leadership by influence
5. Bad news early
6. Capacity model
7. Learning loop
8. Technical debt
1. Agilent Technologies
3. Hard Facts, Dangerous Half-Truths, and Total Nonsense by Jeffrey Pfeffer and Robert I. Sutton
4. Great at Work: How Top Performers Work Less and Achieve More by Morten T. Hansen
5. The Coaching Habit: Say Less, Ask More & Change the Way You Lead Forever by Michael Bungay Stanier
6. The Feedback Fallacy by Marcus Buckingham and Ashley Goodall
7. Harvard Business Review
8. Feynman Appendix to the Challenger Report
9. What My Dog Knows About Leadership by John Sadler
Carlos: Hello everybody, and welcome to another episode of Method Matters. I just wanted to start off by apologizing for not having our regular cadence for the last couple of weeks, but we are back now. We are going to have more and more episodes published in a more consistent manner. The podcast is still very much alive. I’ve gotten hundreds of emails from people asking me, “What’s going on?” But I assure you it’s going to be great.
Well, in fact, today we have a really good episode. We have John Sadler from Agilent on the show, and part of what drew me to talking to John was his expertise, not only as an engineering leader but as a leader overall. What caught my attention were a couple of articles he wrote and how he has been able to learn about leadership through different methods and in different places, including from his dog. It is an interesting conversation. The main topic of the conversation is leadership by influence and not by mandate, which I think is timely in today’s political arena, and just making sure that people are motivated by what you are trying to get them to do. When I say political, I mean in today’s arena, where we are all trying to be mindful of how we communicate with others. It’s a very strong tool to be able to lead by influence and not just by telling people what to do in a forceful manner.
So anyway, this is going to be a very interesting conversation. John brings a great amount of experience, not only from his current role at Agilent, but also from an entire career spent in this realm. So, without further ado, let’s welcome John Sadler from Agilent.
John, my friend, thank you so much for joining us today. It’s been a long time coming. How are you?
John: I’m doing very well, thanks! It’s my pleasure to be here. Thanks for inviting me.
Carlos: Part of what caught my attention in what you’re doing today is just the breadth of the realm that you have to cover within your role, so I think this is going to be a fascinating conversation, because not only do you lead people, but you also have to respond to a business. This is a preliminary title — by the time this is live, the title will more than likely be a bit different — but the essence of this episode is going to be leadership by influence. And leadership doesn’t only mean top-down; it also includes how you report to others.
But anyways, with that introduction, once again, thank you so much for joining us today. It’s a pleasure to have you.
John: It’s a pleasure to be here, thank you.
Carlos: John, for those that are just tuning in, tell us a little bit about your background and how you got into tech. I think that will be a good way for us to start warming up the conversation and to get into the actual topic today.
John: I think I’ve always been in tech in some sense anyway. I decided when I was four years old that I wanted to be an engineer or a scientist; that was sort of the family business. I taught myself to code around age 12, and I taught myself electronics out of a hobbyist magazine. I always loved the intersection between music, electronics, and what would eventually become software, but when I was a kid there wasn’t really much in the way of software that was accessible to a normal hobbyist type of person — that came a little later. When I was a kid, if you wanted to learn to program a computer you had to rent time on a time-sharing machine.
In my case, I happened to be lucky enough to go to a school that had an account on an HP 2100 that you could program in Fortran or BASIC, and we had a Teletype ASR-33 that printed on paper towels we’d stolen from the boys’ locker room downstairs in the gym. I started writing games, and it was really enjoyable. You get that same sense of deep satisfaction writing code that you get in a darkroom, or carving wood, or doing something artisanal, where at the end you’ve got a result that expresses your intent. Software is essentially the purest embodiment of intent that you can get: if you can dream it, you can code it, generally. And that’s one of the things that I really like about it. I’ve always been very excited about multidisciplinary work, so my passion has often been about using computers in service of working with moving parts, working with instrumentation, working with music or art — those things all excite me. I think that was the drive that started me off in the direction of being interested in technology in general.
Carlos: Today, you are at Agilent Technologies.
Carlos: Give us a little bit of context about what Agilent does for the layman. I know it’s very complicated what you guys do and a lot of it is very contextual to the science field.
John: I think I can make it pretty simple, actually. The simple answer is that Agilent Technologies makes equipment, and the peripheral stuff that goes with that equipment, that helps measure the quality and purity of your food, your air, your water, pharmaceutical drugs, and the chemicals and fuels that we depend on to live modern life. So we’re about being able to tell you whether your olive oil really came from Greece. We’re about being able to tell a pharmaceutical company whether the active ingredient in the pills they are making to help with your headache is sufficiently pure to be safe. We help to detect contaminants in food and water before they get to your table. Those are the kinds of things we do.
In addition to that, we make medical devices that contribute both to medical research and to medical diagnostics. We make automated pathology equipment that helps to identify, for example, cancerous tissue. And we make companion diagnostics, which are a kind of test that helps determine what therapy would be best for a cancer you might have. We’re very, very good at making short, very pure strands of DNA and RNA, and those skills go into companion diagnostics, as well as other diagnostics of our own — for cancer, prenatal diagnostics, and eventually other applications as well.
So it’s a pretty broad company. We have something like 14,000 people right now and we do business worldwide, and we have business offices and R&D locations around the world as well.
Carlos: So tell me a little bit about your role at Agilent, and how your role impacts the mission of the company?
John: I lead Agilent’s only software division. Agilent traditionally has been a scientific instrument company — we are an offshoot of Hewlett-Packard — and the mission of the Software and Informatics division at Agilent is to unify how our products look to the customer when they are using them on a day-to-day basis, and to maximize the value of the data that customers generate in their lab. That’s a fairly broad thing, but in essence we make the software that you use when you’re getting answers from our instruments.
Carlos: Got you. So not only your clients, but also the end users — let’s say, scientists — would interface with your instruments through the software that your team creates.
John: Correct. A scientist, or in many cases a lab technician. We also have important stakeholders in procurement and in the IT department who need to make sure that our customers’ labs run reliably without interruption, and that they can recover from disaster. If you look at it broadly, the mission is bigger than just running the instrument. The mission is not just to run the instrument reliably and easily, with the minimum amount of training, but also to protect the customer’s results for as long as they want them, and to make them reusable — which is actually more challenging than you’d think with scientific data.
Carlos: Yeah, I was having this conversation with a friend recently — that at some point our DNA is going to be stored for later use. So what you’re saying is that, in part, you’re doing something like that, but of course in your own domain.
John: Yeah, let’s take a simple example. A simple example might be that, as a routine thing, you want to measure the purity of a drug, or you’re doing drug discovery research. You conduct an experiment where you’re looking for the compound that you’re actually interested in, but you are also trying to find out whether other things are in there that you’re not interested in. In order to do that you typically take a number of steps: you get a sample from somewhere, you prepare it, and in some cases that sample preparation can be extremely complicated depending on what you’re trying to measure.
For example, if you’re trying to discover pesticides in cannabis, there are a lot of sticky oils and proteins in there that can interfere with your analysis, and you have to prepare the sample in such a way that those don’t interfere with identifying the pesticides you’re looking for. Then you inject that sample into a measuring instrument of some sort, and what comes out the other end is eventually a report — but in order to get to that report you actually have to work with the software to help it identify which features are really interesting to you. All of those things provide context for the experiment: everything you did in the way you prepared the sample, what the temperature of the room was that day, who did the work. All of those things can be factors in helping somebody who comes along five years later, for example, to ask, “Is that conclusion still relevant to me? Can I use this data for a purpose that the original investigator didn’t intend?” You have to have all the context in which that experiment was done, even if it’s very routine. If you don’t have that context, it’s very, very difficult to establish whether that data is safe to reuse. So that’s the challenge.
Carlos: Got it. To get an idea of the complexity of, at the very least, the execution of this work, tell me a little bit about how many people and how many teams work together to deliver these products for Agilent.
John: It really depends on how you count it. While we are the only division in Agilent that does software exclusively, we don’t have an exclusive lock on software at Agilent. Inside the Software and Informatics division, including people who are on contract and outsource companies that we work with, there are probably hundreds of people — maybe 300 — who are involved in some way in the delivery of our stream of releases to market. But then there is at least that number, and possibly more, outside the division proper, and there are plenty of other places where software gets developed in the company.
So we have an interesting role, in the sense that if we want to look like one company, we have to influence other people to work with us and to want to converge with us on common look and feel, support standards, compatibility, and staying in synchrony on what I like to call the IT hamster wheel, right? The constant stream of operating system obsolescence, bug patches, and things like that. If you have, let’s say, 20 groups of people who are not synchronized on that, it’s very, very difficult for a customer — or, for that matter, one of our support people — to have enough brain power to understand how to keep their software working if they have a lab with more than one of those things in it. So in an ideal world, one company would have one policy about that. Getting there in a company with a traditional, highly autonomous culture is a challenge, and it requires more than just pounding your fist on the table.
Carlos: And I think that’s what drew us into this topic, right? Put it this way: there are a ton of people your company’s strategic goals depend on, and you can’t simply dictate how the work is done or what gets done — but you also have the business to report to, essentially. How do those two things, orchestrating and influencing, relate in this context?
John: I would have to start by saying that Agilent came from Hewlett-Packard, and the founders of Hewlett-Packard were very, very big on delegating decision-making authority to the levels of the organization that were closest to the work. So they created what I would call a culture of almost radical autonomy, where people felt empowered to go do what they thought was the right thing most of the time, with a very, very light touch from the top in terms of strategic direction. And that works great up to a point, but it can reach scaling limits. Agilent spun off of Hewlett-Packard in late 1999, and then, like Russian nesting dolls, we actually split again: Keysight became a kind of sister company that makes electronic test and measurement equipment, while Agilent retained the name and is focused exclusively on life sciences and healthcare at this point. So we have a much narrower market focus than the Hewlett-Packard company used to have, which spanned consumer products like printers and PCs, plus high-end computers, plus consulting, plus electronic test and measurement — the original business of the founders — plus life sciences and many more concerns. The company has now split into four companies, each one of which has a much narrower range of focus.
But the challenge is that we brought along this legacy of autonomy, and that meant that, in general, each business in Agilent went its own way with regard to how its software worked and how it operated. That’s not how our customers work, though. A customer may have one lab or many labs, but their cost of operating a lab depends a lot on the cost of training and supporting the software and the instruments that they buy. So it is an advantage for a company like Agilent, which has a pretty broad portfolio, to act like one company: 50 companies can do 50 different things separately, but if you are one company with those 50 different products, then you can make them look, feel, and act alike. You are lowering your customers’ cost to own a new product. That’s an important source of differentiation, so we have to do this. I’m getting around to answering your question here — this background is why we found it critical to drive convergence in this area. And you’ll see that today: Agilent products look like they come from one company, and they tend to work alike in many respects, though we still have a lot of work to do. We are driving rapid convergence in the software field in the same way.
Now, you can do that a few different ways. I’ve seen companies do things like say, “Well, all of the software is going to report to one place.” Generally, if you do that prematurely, what happens is that the people who run the businesses feel underrepresented in what gets done almost immediately. It’s very difficult for a monolithic software organization to represent the needs of multiple constituencies in an adequate way. Your other option is to attempt to create some common foundation — a platform, something like that — upon which individual stakeholders or individual businesses can autonomously create the things they need, while still maintaining enough commonality that you preserve the advantages of your scale. And that’s the approach that we’ve taken. In order to get there, though, you have to have credibility as a partner. It’s very difficult to just say by executive fiat, “You will do the same thing,” if people don’t have a partner they’ve been able to count on before.
So when I got here about four years ago, it was really clear to me that the first thing we had to do, in order to be able to influence people to go along with us, was to become a credible business partner. I call it predictability, but effectively it means a few things. Number one: you can count on us to make reliable software on a timely cadence. Number two: we have a fairly transparent way of taking in work requests from our partners and reflecting our capacity to do work back to those partners, so that they have a reasonable expectation that, if they commit to planning with us, if they alter their strategy to work with us, they have a fighting chance of getting out the other end what they need to support their business. And finally, the same kind of commitment to our field, sales, and support organizations, and to our customers themselves: if you find a problem — and we try to make that as rare as possible — there’s an easy way to submit it to us, and we’ll fix it in a timely way and keep you posted. That kind of transparency is what you need to be credible. All three of those transparencies, starting with “we write quality stuff and we release it on time,” were critical to our being able to influence other people to converge with us. They have to believe that we are a credible partner first.
Carlos: I think that this is the foundation, essentially, for anything else beyond that. Because if you are able to do what you say, and do it every single time, you’re setting a baseline for any future things you say. And when I say “things you say,” I mean any promises you make — people will hold you to those.
John: Yes, and interestingly, not many people are all that good at it, so it’s actually a fairly powerful strategy with customers as well. It turns out that large enterprise software customers tend to think several years out in their investment decisions. So if you’re actually credible at delivering what you say you’re going to deliver, in a timely way, that’s almost as good as it having already happened. That means customers feel like they can trust you and make plans with you for their future as well, and that can be a powerful differentiator for a company. Inside the company, of course, if business leaders don’t feel like they can count on you to deliver what they need, they are going to go their own way, and they almost always have veto power from their profit and loss statement, right? If they say, “Look, if I take this risk and I lose, this is going to cost my business” — in an American public company, that argument is always going to win. So you have to get people to have confidence that they can count on you, that they can trust you, and that we can lean on each other and work better as a team than we could separately, before the math works out for them to go do it your way.
And I think this also generalizes out into influencing in other areas, right? It’s one thing to talk about a division of hundreds of people working with other divisions. But it is also true for a leader of people in general: people’s ability to count on you, their ability, to some degree, to know what you stand for and how you’ll behave in certain circumstances, goes a long way toward helping them make the internal decision that it’s worth trusting you and following.
Carlos: So, because you have to be this credible business partner to the business, can you describe some of the systems and processes you have put in place in relation to leadership by influence and not by mandate? The lens that I’m trying to put this through is the people under you whom you have to count on in order to deliver on all those promises. What are some systems and processes that help you keep an eye on that, or keep your ear to the ground?
John: The first one is having this three-tier definition of predictability that dictates the way that we do software. Over the last few years, we’ve gone through an Agile transformation. We have taken Scrum and adapted it for some of the realities of our market, but effectively we do the usual things: we have two-week sprints, we release software on a cadence, and we do our very best to empower product owners, area product owners, Scrum masters, and the Scrum teams themselves to make local decisions and drive the software where they need it to go. We give them enough customer contact, and enough opportunities to demo things for customers or for customer proxies, that they can make autonomous decisions wherever possible, with strategic guidance from the leadership team. A lot of the system is that. Underlying that, though — probably before we were able to do any of it — we had to make an environment that was a safe place for the truth, a safe place for learning, and therefore a place where we value evidence above opinion. One of the ways that you influence people, that you get people to follow your lead, is by showing that it’s not your opinion over their opinion, right? It’s that you are all looking at an external source of evidence that you can all steer by in the same way.
In my case, I try to convince people that the important thing is to look for good sources of evidence and then to use them to drive rapid learning. The point is not to fail fast; the point is to learn fast. And we learn fast when we can tell the difference between evidence and opinion, and when we get decent sources of evidence and use them to iterate rapidly. Morten Hansen calls it a learning loop, and I love this concept. The idea is to drive down the cycle time it takes to find out whether your hypothesis is right or not, and that’s what Agile is all about. The sprint is essentially a two-week learning loop: you make some software that works, you stick it in front of somebody who can tell you whether or not you got the idea right, and you go back and try something a little bit different each time. If you started with working software, what you’ve got at the end is working software that makes your customer happy and gets the job done. That’s a great example of a learning loop. And the concept can be generalized to almost anything else: you need a source of evidence, you need a short cycle time to deploy work in, and you have to be able to demo something at the end.
Carlos: Something I want to lean into is the whole concept of making a safe place for the truth, right? I think everything that we’re talking about works only if there is a safe place for the truth. Otherwise, learning that something didn’t work gets labeled “failure.” Saying “my hypothesis was A, but it turned out to be incorrect” is a potential success, not a negative thing. So if the culture says, “Oh, you found out it was negative, so you’re wrong,” that means it is not a safe place to find the truth.
John: Exactly. Creating that, I think, is up to the leaders: to model behavior where, first of all, we don’t punish people for bringing forward a problem. It’s a very common element of the culture in large companies that nobody wants to tell management when they are wrong about something, or when things aren’t going the right way. Everybody wants to put on a happy face when they talk to a CEO. But in reality, a good CEO is going to want to know about a problem before it’s too late to do something about it.
As a senior leader of a division, I tell people very explicitly that I will reward you for bringing forward bad news early. “Bad news early” is the culture we strive for. It’s not that we love bad news; it’s that we love the opportunity to do something about it before it’s too late. What I hate, what drives me crazy, and what I will do anything about — from bugging my team to taking disciplinary action — is when people sit on bad news until it’s too late to do something about it. And almost all of our delivery processes are built around the idea of accelerating, of bringing the risk forward in the process, right?
I mean, this is another way of looking at Agile. There are two big sources of risk in a traditional software development process — the Waterfall process. The first is market risk: you spend three years writing some piece of code that you thought was the right thing to do, you get it out to market, and nobody wants to buy it, or it was the wrong thing in some important way. That’s pitfall #1 of Waterfall — you spend a lot of time working on the wrong thing and get no feedback, right? The second source of risk is integration risk, particularly when you are working across software–hardware boundaries, but even across software–software boundaries. It’s like the analogy of digging a subway tunnel from both ends at the same time and not checking regularly to make sure the two halves are going to meet in the middle. The Agile way to dig a subway tunnel is first to put through a wire, right? A tiny little tunnel with a wire threaded through it, and then you dig out around the wire a bigger and bigger tunnel until you can drive a train through it. The Waterfall way is to put a boring machine at each end of the line, start them going, and hope that they meet three months later. If you’re wrong, you get to start all over again.
Carlos: That’s a good analogy.
John: So integration risk and market risk are really the two big things that Agile attempts to take out. You have to constantly remind your team of that, though — that the purpose of Agile is to allow you to learn fast, and to allow you to move risk forward in the development process. One way to look at an R&D team or a product development team is that their whole job is to reduce risk as rapidly as possible over time: to reduce technical risk, to reduce market risk, and to get you something at the end — when you have invested all that money — that you know you can sell.
Carlos: Part of that is the bad news early. I love that concept. But it absolutely depends on two things: a culture of delegating decision-making to the people closest to the work, and empowering those people to come to you with that bad news early.
John: Yes, and to let them know that that’s an absolute expectation not just a request.
Carlos: Exactly, because this is what you’re expecting from the people, say, under you, right? People that report to you within the business — you are part of management. But are there situations where you have to report bad news to higher-ups, or say no to higher-ups, and how does that happen on your end?
John: So, I try to keep good dashboard measures around, and I trust my team to tell me when there is going to be a problem. In our last release, my team got very, very upset with themselves when they were a week late making a release — and this was after several months’ worth of work. Actually, they were so upset with themselves that, without being asked, they were putting in extra time and taking extraordinary measures to get our software delivered in a high-quality way, as close to the time we said we would as possible. And it turned out that even though the initial release candidate date slipped a week, they still hit the manufacturing release on the nose. That was basically because they took so much pride in what they do that they put in the extra effort. At the end I told them, “Look, I thank you from the bottom of my heart for everything you did to make this possible, but this is not how I want to run a business. I want to run a business where we find out about these surprises earlier. So one of the things we’re going to do as a team, when we do a retrospective on this release, is try to understand why we got surprised late, figure out what to do about it, and how we can recover from it faster. And my promise to you is that I won’t ask you to work nights and weekends unless it’s an absolute emergency. We’re going to try to minimize this as much as possible. I don’t want that to be our habit. I want our habit to be that you have time to think about what you’re doing and make it better all the time.”
Anyway, I like to think that my team is conditioned to believe that they can come and talk to me. I make that happen in a bunch of different ways. The first one is that when I’m in my cubicle, I am standing, so people can see when I’m there, and people generally know that they can walk up to me — I expect to be interrupted. If I don’t want to be interrupted, I go someplace else. So I try to spend time being accessible. I walk around, which has been part of the company culture for quite a while, and try to bump into people and have conversations with people I don’t ordinarily talk to, so that I hear things. That sends a message to my leadership team as well: they ought to do the same, or else I’m going to be hearing about things before they do. And finally, I let people text message me, so if they have something they need me to know, they can text me and I’ll respond and go find out about it. But the biggest thing is to create this culture where we are explicitly saying we want the news when we can act on it. So if people are in trouble, they know that it’s safe to come and say so: “We have a problem. Here’s what we are doing to work on it.” And if things do get really bad for whatever reason, my job is to tell my boss. I’ll just say, “Look, this is what’s happening. Here is what we are doing about it. We may have a problem.” Most of the time it doesn’t turn into an issue — most of the time we deliver reliable software. But I know that part of my job is to do the same thing for my boss that I expect the folks who report to me to do for me: bring the bad news while something can still be done about it, ask for help before I think I need it, and make sure people are kept posted. So I do that.
You also asked about saying no. A wise woman who mentored me once said, “Learn how to say no, and when you do, make sure you have a good reason.” I think there are lots of different ways to say no — I actually wrote a post about this on my LinkedIn page. One of the things that you often find yourself having to say no about is when people ask you to do work when you’ve already got a full plate. You don’t actually have to say no to that. What is often more effective is to have a list of the things you already signed up for and ask the question, “Where does this fit on the list?” — and start a negotiation about when something needs to get done. Typically, if my boss comes to me and asks me for something, my unconscious assumption is that I’m supposed to drop everything I’m doing and start working on it. That’s usually not what he meant. Usually, what he means is, “Can you tell me when it would be reasonable to have this?”
Carlos: Yeah, people just assume, culturally, that when a boss or even a client requests something, it’s needed right now. But that person is counting on your expertise to put things into a backlog, essentially.
John: Right, exactly. This ties right back into predictability, right? I mean, effectively, if you have a ranked backlog, a personal ranked backlog or a business ranked backlog, and your customer, your boss, your client, whatever, says, "I want this," you get to say, "Okay. Well, here are the other things that I'm doing for you now and when they are going to be ready. Do you think that this should go at the end of the list, or do you want me to interrupt something I'm doing now so that I can do this? I can tell you what the consequences will be, and let's make a decision together." And so they have the opportunity to say, "Well, okay, I understand that this is going to cost me something if I ask you to interrupt everything now. So maybe I'll wait until two weeks from now or three weeks from now. When do you think it would be reasonable to have this if you don't stop what else you're doing?" You say, "Well, if I put it at the end of my list it will get done in…" whatever number of weeks you think your backlog will cost you. And you've had a very transparent negotiation where you didn't automatically assume anything and you didn't have to say no. You simply reflected back to your boss what your capacity is to do work and asked that person to make a priority call. And it's okay if they say, "This is more important than anything else." That's fine; you can tell them what the consequences will be: "That's fine, but these are the things that are going to get delayed. I'm going to have a tear-down cost. I'm going to have a startup cost again, so it's going to cost you a little bit more time than it actually takes me to do the task that you want right now. Are you okay with that?" And if the answer is no, then there's another negotiation to have, but you are transparent and you work from evidence.
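For readers who want to make this negotiation concrete, here is a minimal sketch in Python. The task names, estimates, and switch cost are illustrative assumptions, not anything from John's actual process:

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    weeks: float  # estimated effort in weeks

def completion_forecast(backlog):
    """Return (task name, cumulative finish week) for a ranked backlog."""
    elapsed = 0.0
    forecast = []
    for task in backlog:
        elapsed += task.weeks
        forecast.append((task.name, elapsed))
    return forecast

def cost_of_interrupting(new_task, switch_cost_weeks=0.5):
    """Weeks of delay to existing commitments if new_task jumps the queue.
    The tear-down and startup costs are modeled as a flat switch cost each."""
    return new_task.weeks + 2 * switch_cost_weeks

backlog = [Task("Project A", 3), Task("Project B", 2)]
new = Task("Urgent request", 1)

print(completion_forecast(backlog))   # existing commitments and their dates
print(cost_of_interrupting(new))      # delay to everything else if we interrupt now
```

The point of the sketch is the negotiation it enables: the forecast is the "here are the other things I'm doing for you" list, and the interruption cost makes the tear-down and startup overhead explicit.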
Carlos: Something that is interesting is that, as an external engineering group, we are often asked to try to do more than one thing at the same time, as you would imagine. Priorities change, and all of a sudden not only do we have to deliver Project A, but also Project B, with the same number of people. Whenever that happens, we say, alright, our resources are limited; if we want to do this, we need to up the budget, up the people, etcetera. And many times they say, "Yeah, that's fine. Let's add more people." But there is something missed when the solution is to add more people: in order to add more people, you need to stop doing other things in order to onboard those people.
John: That’s right.
Carlos: Yes, your velocity will eventually increase, but for a period of time it won't, so that's something to always consider.
John: Absolutely, and so this goes back to having a good capacity model, especially for a team of people. Even for one person, having a capacity model is helpful; when you're talking about multiple people, a capacity model is essential. A capacity model is an artifact that you refine continuously that helps you to forecast how much work you can get done at a given time and how your people are deployed. And so when somebody comes to you and says, "I've got this much more work for you," you can start to say things like, "Well, I could either put it at the end of my list and get it done within my current capacity, or I can start to think about expanding the team, if you think that this is going to be a long-term expansion as opposed to a short-term expansion. But there is a capacity cost to expanding the team. It will take me time to bring in good people. It will cost me capacity to interview and train them. Do you want to bear that cost?"
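As a rough illustration of the kind of artifact John is describing, here is a toy capacity model in Python. Every number in it (the focus factor, ramp-up time, mentoring drag) is an invented assumption that a real team would calibrate from its own data:

```python
def team_capacity(engineers, weeks, focus_factor=0.7):
    """Engineer-weeks of productive work available in a planning window.
    focus_factor discounts meetings, support duty, and other overhead."""
    return engineers * weeks * focus_factor

def capacity_after_hiring(engineers, new_hires, weeks,
                          ramp_weeks=8, mentor_drag=0.2):
    """Capacity over the same window when new hires are added: new people
    contribute nothing until they ramp up, and mentors lose some output.
    That gap is the 'capacity cost to expanding the team'."""
    base = team_capacity(engineers, weeks)
    # Each new hire consumes mentor_drag engineer-weeks per week of ramp-up.
    mentoring_cost = new_hires * mentor_drag * min(weeks, ramp_weeks)
    # New hires only produce after ramp_weeks have elapsed.
    ramped_contribution = team_capacity(new_hires, max(0, weeks - ramp_weeks))
    return base - mentoring_cost + ramped_contribution

print(team_capacity(5, 12))             # 42.0 engineer-weeks without hiring
print(capacity_after_hiring(5, 2, 12))  # barely more, over a 12-week window
```

Run with these made-up numbers, hiring two people barely improves a 12-week window, which is exactly the negotiation point: expansion pays off only if the workload is long-term.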
Carlos: That’s brilliant, there is a capacity cost to expanding the team.
John: You said it yourself actually.
Carlos: Yeah, but you just worded it so much better than I could.
John: It’s just fancier.
Carlos: Way fancier. But no, this is brilliant. This is a topic unto itself, by the way. If you're looking for articles to write, I think this is one: what is a capacity model?
John: Maybe I will. I've actually got a fairly refined one right now, but there's still room to improve it. The other cool thing about a capacity model that I want to touch on, which goes back to your question, is that capacity is not just about people and time; it's also about waste. One of the things that Lean approaches to development try to do is to identify sources of waste: costs to developer productivity, places where people are spending time that really isn't about getting good stuff to the customer, places where you have to do do-overs, work in process, delays. All of these things are opportunities to actually improve your capacity on the same budget, with the same number of people.
As an example, when I walked into this particular organization, I estimated that we had an opportunity to triple our capacity on the same dollar by doing some things differently. We have proven over time that, I think, we've at least doubled that capacity, and we're on track to keep improving it. We found lots and lots of opportunities to reduce waste and grow capacity simply by reducing defects, by reducing reverts, by shortening the cycle time between when a developer commits a change and when they find out that they broke the build, by looking for areas of code that are likely to cough up more defects than other areas, by doing static analysis. Everything that we can do to shorten the cycle time helps us to eliminate waste, so we can measure that waste, reduce it, and improve capacity.
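One of the waste metrics John mentions, the lag between a commit and the developer learning the build broke, is straightforward to track. A hedged sketch (the timestamps are invented):

```python
from datetime import datetime
from statistics import mean

def feedback_cycle_minutes(commits):
    """commits: list of (commit_time, build_result_time) pairs.
    Returns the waiting time in minutes for each commit."""
    return [(done - start).total_seconds() / 60 for start, done in commits]

# Two illustrative commits and when their build results came back.
commits = [
    (datetime(2019, 6, 3, 9, 0),  datetime(2019, 6, 3, 9, 40)),
    (datetime(2019, 6, 3, 11, 0), datetime(2019, 6, 3, 11, 25)),
]
print(mean(feedback_cycle_minutes(commits)))  # average wait: 32.5 minutes
```

Tracking this average over time is one way to "measure the reduction of waste" that John describes: if the number falls, developers learn about broken builds sooner and waste less time on context switching.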
Carlos: Well, of course, this connects to the previous set of questions we were talking about, where somebody comes to you and says, "I need help," or says no to you without enough evidence. Sometimes they ask for help but they don't need it. How do you handle that?
John: So, I would say most of the time I'm pushing people the other way, which is to nudge them to ask for help earlier rather than later. It's very rare that somebody comes to me for help and doesn't need it. But there are times when it's better, from a developmental perspective, not to give people the answer but to help them find within themselves the resources to answer the question for themselves. And this is a skill I'm still developing, I have to confess. It's really, really easy for me to just want to give people the answer. Maybe it's just because I'm male, I don't know. But if I think I know the answer, there is this massive temptation to just say, "This is what I think it is." It makes you feel like you know what you're doing, but honestly, it doesn't necessarily teach the person how to help themselves. So when somebody comes to me for help, the behavior I'm trying to model now is to help them go as far as they can in the thought process themselves by asking some provocative questions, by doing better coaching: asking them, "What is the challenge for you? What would you do if you were left to your own devices? What help do you think you need from me? Who have you spoken to?", and trying to frame it up in a way that helps them walk through a process that will help them clarify what it is that they are wrestling with. And there can be an emotional component to that, as well as an organizational and skills-based component. The emotional component might be that they are afraid of getting in trouble if they go across some other part of the organization and ask to escalate a particular problem when they're not getting traction, for example, or they feel like they are going to get somebody else in trouble. This is a really common thing in a big company, that people know that relationships help them to get stuff done.
They don’t want to get the other guy in trouble, and they think that if they go to the boss, or the boss’ boss of somebody in another organization and say, “I could use your support on this.” I think that’s going to get people in trouble but actually often times the opposite is true if he go get support for what you trying to do soon enough, if he go communicate what you’re trying to do soon enough and high enough in your organization, it can help to give everybody else permission to do the right thing. So it’s really important to encourage people to go ask for the right kind of help soon enough.
Carlos: In the same sort of vein, but I'm getting ahead of the question. How do you keep a balance between the focus given to something like technical debt or maintenance and new products? This is where it gets interesting, right: how do you foster a culture that respects that balance between the engineering folks and the business? It's not necessarily us not understanding what technical debt is and why it's important, but how do you foster a culture where the business understands it?
John: I love this topic. I've had to deal with this in every engineering job I've had, but it is especially true in software, because there's always this expectation with software that you're going to reuse it, that you're going to keep it around for a long time. You're not redeveloping from scratch most of the time. So that means that you always have to be looking out for places where there is tension in the code, where the architecture is not as flexible as you want it to be, where technology becomes obsolete and has to be replaced, and there's no avoiding the overhead of that. In fact, I tell people: people who are used to manufacturing things are used to the idea that you can talk about gross margin. Gross margin is the difference between the price the customer pays and what it costs you to make and sell the thing that you're selling. And in software, if you apply that calculation, the gross margins are huge, right? I mean, software costs nothing to make; it costs something to sell. So it's not at all unusual for software companies to have gross margins in the 85 to 90 percent range.
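John's gross margin arithmetic is simple enough to check directly; the prices here are made up for illustration:

```python
def gross_margin(price, cost_to_make_and_sell):
    """Fraction of the price left after the cost of making and selling."""
    return (price - cost_to_make_and_sell) / price

# A hypothetical software product: high price, near-zero cost of goods.
print(gross_margin(100.0, 12.0))  # 0.88, in the 85-90 percent range John cites
```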
But what that misses is that if you stop maintaining your software, its value falls off a cliff very fast, because it doesn't keep up with field-reported bugs, it doesn't keep up with IT obsolescence, it doesn't keep up with technology obsolescence in the architecture, and it can't adapt to requested changes from customers. And customers don't see software as something they just buy and use the same way for twenty years. Some do, but most don't. Most expect their software to evolve over time to keep up with the needs of the context the software runs in. So, effectively, there is an amount of work that you do just to stay in the game in software. That's a very long preamble. I would say in every software leadership job I've been in, it has fallen to me as a leader, or to somebody else as a leader, to reserve capacity to keep architectural continuity happening, to defend the architecture, to defend the continuity of the software. And my rule of thumb is somewhere between 15% and 25%, depending on the organization, the newness of the code you're working on, that sort of thing.
So, interestingly, somebody has to give that a voice. And typically, if you're in an Agile organization that's very customer-facing and marketing-led, and you don't have a voice for architectural continuity, you will constantly be pulled by the latest feature the customer wants to pay for. Which is not inappropriate, but somebody has got to create an equal voice for being able to keep developer productivity up, being able to make sure that the architecture will last another year, being able to make sure that you survive the obsolescence of Windows 97 or whatever. And you can do that in a few different ways. One of them is for there to be a leader, a senior leader say, who actually says we're going to allocate this much capacity. Another way is to have an architect who is effectively one of the product owners voice that. The way that I like best is actually to map out a stream of capacity that we review periodically, whose job is dedicated essentially to housekeeping. We do that quite commonly to guarantee a short cycle time for defect fixes. We measure what percentage of our capacity we're spending fixing defects, and we actually set a turnaround goal for ourselves. You know, we say that we want 80% of field-reported defects to have fixes submitted for them within 60 days of the report, once we identify the defect, for example. In order to do that, you've got to reserve capacity and you've got to reserve release windows. So we're in the habit of doing that kind of thing; we reserve capacity for that. Obviously, we try to minimize the number of field-reported defects, because it's very expensive to respond to them, and we've done well with that. So the next tier is to say we're also going to reserve some capacity for internal housekeeping. That takes care of refactoring, it takes care of obsolescence, it takes care of places where we see that we're going to need to go in a different direction architecturally than we've gone in the past.
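The turnaround goal John describes (80% of field-reported defects fixed within 60 days) can be checked from a defect log. A minimal sketch, with invented dates:

```python
from datetime import date

def turnaround_met(defects, goal_days=60, target=0.80):
    """defects: (reported, fix_submitted) date pairs; fix_submitted is None
    for still-open defects. Returns (fraction within goal, goal met?)."""
    within = sum(
        1 for reported, fixed in defects
        if fixed is not None and (fixed - reported).days <= goal_days
    )
    fraction = within / len(defects)
    return fraction, fraction >= target

defects = [
    (date(2019, 1, 1), date(2019, 2, 1)),   # 31 days: within goal
    (date(2019, 1, 1), date(2019, 1, 15)),  # 14 days: within goal
    (date(2019, 1, 1), date(2019, 4, 1)),   # 90 days: missed
    (date(2019, 1, 1), date(2019, 2, 20)),  # 50 days: within goal
    (date(2019, 1, 1), None),               # still open: missed
]
print(turnaround_met(defects))  # (0.6, False): below the 80% target
```

A team tracking this number release over release knows whether its reserved defect-fix capacity is actually enough, which is what makes the reservation negotiable rather than arbitrary.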
Carlos: And of course, this does impact your capacity model. Like, you have to take all of this into consideration when building that model.
John: Correct, absolutely correct.
Carlos: And this is so interesting.
John: But it’s a wonderful thing, right? Once you know that and you have that capacity model, you can be completely transparent about it and said this is what it takes to make a healthy piece of software.
Carlos: It’s almost like knowing that, you know, if you make a $100 you’re going have to pay 30% in taxes. That’s almost just planning for that ahead of time will not know your surprises, essentially that’s what it is.
John: And you can also add to that. You don't have to be 100% correct, right? All you have to do is refine; it's the same thing, a learning loop, right? The capacity model itself is a learning artifact. You refine that model over time to make it better and better. We get better and better at estimating how much unplanned capacity loss we're going to have, we get better and better at estimating how much of our capacity we're going to spend supporting fires that arise in the field, we get better and better at estimating how much capacity we're going to need for internal housekeeping work, and we get better and better at understanding how much capacity marketing has to use on requests from customers and our business partners.
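The refinement John describes can be as simple as blending each new observation into the previous estimate. A toy sketch; the smoothing weight and the numbers are assumptions:

```python
def refine_estimate(old_estimate, observed, alpha=0.3):
    """Exponentially weighted update: the capacity model as a learning
    artifact, nudged toward each new observation."""
    return (1 - alpha) * old_estimate + alpha * observed

# Unplanned capacity loss estimate, refined across three releases.
estimate = 40.0  # initial guess, in engineer-hours per release
for observed in (50.0, 46.0, 44.0):
    estimate = refine_estimate(estimate, observed)
print(round(estimate, 1))  # 43.9
```

The model never has to be exactly right; each release's actuals pull it closer, which is the "better and better" John is pointing at.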
Carlos: Is there a lot of documentation that the team has to do in order to record how long something took, or logging hours, for you to measure that?
John: No. What we’ve found actually ironically is that, once again going back, if the objective is to learn as opposed to measure people’s performances or something where the objective is to learn, we had two big insights. One of them is, the stuff that needs the most attention, everybody usually knows what it is at the end of the release. So we need to retrospect, it’s a lightweight thing, it’s a team interaction where we get people together and we know it’s not going to be punitive. We say, here are the things, you know, we get the band together and ask what could we have done better? And we expect people to be candid and up front about it and I try to model that behavior myself. I usually have some pretty strong ideas about what I think could have gone better, and I just lay them out there and I say, “What are we going to do about this?” And usually the team understands implicitly that either they’re going to come up with some ideas or what we can do better and what they want to do later. I’m going to tell them and I try to wait as long as possible so they come up with their own.
Carlos: You have to measure your words in those because you don’t want to blame somebody and nonetheless…
John: Yeah, it has to be sincerely about learning, right? You just have to frame it that way. This is about learning and not about blame. I don’t care whose fault it is. I just care about getting better.
John: I actually wrote that down. We have it written down; I call it the manifesto. It's a little thing that says here are the values that we care about: we care about teamwork, we care about evidence. And having an evidence-based culture means it's got to be safe for the truth, and it means that people have to feel like they're not going to get blamed when they come forward with a mistake. It's got to be about learning first. So I try really hard to model that behavior. I don't know if I'm perfect about it, but it is important to model the behavior that learning comes first, and that's what you do.
Carlos: As leaders, and just as humans, right, it might be easy to get emotional, or, to use the right word, pissed off about a mistake. If you fall into that trap, though, then you do lose that safe place where people feel safe to come and give you news early.
John: Absolutely true, that’s a hundred percent.
Carlos: So something to be careful about.
John: You have to be really careful about it. It's a very, very fragile thing. If people feel like they're being evaluated, if they feel like they're not trusted, they're not going to come talk to you.
Carlos: Just so you know, I think we're in our last set of actual questions before the final one. But I think this ties back to the beginning, right, the whole culture of autonomy. In order to have that culture of, let's call it extreme autonomy, and to make sure that people are in line with the strategic decisions, they need to have that safe place. And I really like this, by the way; this is something that I'm taking note of for myself, because there is a lot that I can learn from it.
John: I’m glad it’s helpful.
Carlos: And hopefully a lot of people are listening in, absolutely. So, alright John, we are down to our last question, and it's probably one of my favorites, and a favorite of our listeners, because people get to expand on some of the conversation we just had and go do a little bit more research. What resources or books do you recommend? What has influenced your point of view?
John: Well, I think I’ll answer that question in a slightly different way than you ask. So I think the first thing is that if you want to read more about my point of view, I’ve written quite a few essays on my LinkedIn page, so that’s one place to look where I’ve elaborated on a number of these ideas and I will continue to do so I’d invite anybody who’s interested to take a look and comment.
Carlos: We’ll link it on the show notes.
John: Yup, that would be great. And then I thought a lot about this; it's a great question. I thought of a few books that I think are quite good, but I'm not aware of any one book that's just about influencing. So I think I will take you up on writing at least an opinion piece on that myself. But here are a few things that I think are great. The first one is Pfeffer and Sutton; there's no book they've written that I don't like, but one of them is my favorite, which is about building cultures of evidence. It's a book called "Hard Facts, Dangerous Half-Truths & Total Nonsense," and it is a wonderful read. The subtitle is about evidence-based management. As I say, I would recommend Pfeffer and Sutton's books unreservedly to anybody. They are much more pragmatic than most management books, and this one in particular is a wonderful read about evidence.
Then another one that I would suggest, which is dear to my heart mostly because he coined the term "learning loop": Morten Hansen has a book called "Great at Work: How Top Performers Do Less, Work Better, and Achieve More." That's a nice read.
And then another one that I like quite a lot is a book called "The Coaching Habit," by Michael Bungay Stanier, which talks about a particular way of coaching that is constructive and helpful. It's an easy set of questions that you can use that really draw people out and help them to solve their own problems.
And then the most recent issue of Harvard Business Review actually has some great pieces. The last two issues, really: the most recent one has a cover article called "Why Feedback Fails," and I think that's a great one to read. And the one behind that, the January/February issue, is a whole issue about managing innovation. Both of those I think are highly relevant, so I'll plug Harvard Business Review there. I really enjoyed those reads.
And then there’s one final piece that I give to you anybody who listen that you can find on the internet anywhere. I think probably most people are familiar with the “Challenger Space Shuttle Disaster” that blew up mid launch. During the investigation into the causes of that disaster they called in Richard Feynman, the famous physicist, and he wrote a wonderful appendix to the commission report, on there it’s a few pages long. It’s incredibly elusive and very, very clear and it talks about essentially if you can read it between the lines and it is a philosophy of building a reliable thing. It also talks about management team is deluding themselves into thinking things are more reliable than they actually are. And so I strongly recommend it, if you only read one thing, read the Feynman appendix to the Challenger report, it is just brilliant. I really love it. I referenced it on one of my LinkedIn articles, a piece called “When Unreasonable is Good” There are also times when unreasonable is not good. The challenger disaster is a really good example when unreasonable was not good.
Carlos: I love your articles by the way.
John: Thank you.
Carlos: I really liked the one about what your dog taught you about leadership, "What My Dog Knows About Leadership." I really liked that piece.
John: Dogs are excellent teachers.
Carlos: They are, because until you have a dog, you don't know what it's all about. It's an exercise, essentially, in being able to see the world through your dog's eyes.
Every day becomes a matter of thinking, what does he understand based on what I am doing? And, sad thought here, but we lost one of our dogs recently; we had to put her to sleep about two weeks ago. It's one of those things. She was 15, I think, no, she turned 14, and it was time. But the thing that I always told my wife is that the thing about a dog is that you are their entire life, and they are just a part of your life, right? A dog is a part of your day, but for them you are everything. So this exercise in being able to think, to basically simplify communication to that level, I think makes us all better people.
John: Yeah, I agree. A dog is a great leadership trainer, right? Dogs are one of the purest reflections of your leadership skills out there, because they are completely molded by how you approach them. Although they are also molded to some extent by their environment before you knew them, and by things that have happened to them, they're so much a reflection of how you lead them.
Carlos: Well, that’s it my friend. I just want to thank so much for being on the show. Those that are listening in it took us a few times to get this going and I really appreciate your patience and coming back. But I really enjoyed having this conversation and there’s golden nuggets in this conversation, and hopefully we’ll see an article from you explaining how to design an engineering capacity model. I’m very interested in that.
John: Note taken. I’m writing notes right now just about that. That’s something that I think I probably can do.
Carlos: Yeah, that is a very interesting one, especially if you add the whole point that there's a capacity cost to expanding the team. If we fold that into the capacity model, that's brilliant… and yes, I thought about it when I asked you the question, but I didn't think of it in terms of a framework or a model that we can all learn from, right, because we can adjust it to all of our situations, and I think that's one of the…
John: Yeah, a capacity model is an incredibly powerful tool. It can be very simple but incredibly powerful because it gives you a basis to negotiate.
Carlos: And like as you said in one of your articles, always have a list.
John: Pretty much.
Carlos: That boils down to that. Well, my friend, thank you so much again for being on the show, and I look forward to catching up and having you on the show some time in the near future.
John: It has been a pleasure. It would be my pleasure to do it again. You’re a wonderful interviewer and a wonderful host. Thank you, Carlos.
Carlos: Thank you so much.