
Tech Exec Talks
E2: Engineering teams - Decoding team metrics with Mohamed Ait Si Brahim
About the episode:
How do you measure the success or otherwise of an Engineering team? How do you communicate that to the wider company as well as within Engineering?
These are some of the questions my guest, Mohamed Ait Si Brahim, CTO at Profile Pensions, and I seek to answer.
About the guest:
Mohamed Ait Si Brahim is the Chief Technology Officer of Profile Pensions, where he is building a market-leading investment advice digital product for the UK mass market, taking it from pre-Series A to £1b AUM.
You can find Mohamed here: https://www.linkedin.com/in/maitsibrahim/
About the host:
Kulvinder is an accomplished technology and product executive with an unwavering passion for driving technical innovation and business growth on a global scale. With over 20 years of experience, he has honed his expertise in leading cross-functional teams, building strong relationships with key stakeholders, and developing strategic plans that leverage technology to solve complex problems. I hope you enjoy our conversations as much as I do.
You can find out more about me on:
LinkedIn: https://linkedin.com/in/ksmaingi
Twitter: https://twitter.com/kuli
https://techexectalks.com
Tech Exec Talks - Episode 2
===
Kulvinder: [00:00:00] Welcome to Tech Exec Talks where we sit down with the leaders of the tech industry to discuss their experiences, perspectives, and vision for the future of technology. This episode covers how to measure the success of an engineering team. Engineering teams can sometimes be a black box and assessing the productivity and performance of the team can be tricky.
Kulvinder: I'm Kulvinder, I'm your host, and in this episode we'll be talking to Mohamed Ait Si Brahim about how you might go about fixing that and dealing with the, uh, repercussions of that.
Kulvinder: So just to introduce him: Mohamed is the Chief Technology Officer of Profile Pensions, where he built a market-leading investment advice digital product for the UK mass market.
Kulvinder: Taking it from pre-series A to 1 billion pounds of assets under management. Previously, Mohamed has led senior engineering roles spanning many areas of financial services, including risk management, portfolio asset management, and [00:01:00] trading. From FinTech scale ups to more established financial services firms.
Kulvinder: Previous roles include VP of Engineering at OpenGamma, where he helped the company pivot from providing open source libraries for derivatives pricing to a cloud-based SaaS business. He's also been Head of Risk Technology at the London Stock Exchange, where he rolled out a risk analytics simulation service in AWS before cloud was an established model for highly regulated financial services.
Kulvinder: He loves transformational change, data science, as well as cloud native engineering approaches. One recurring theme in Mohamed's experience is how to build, shape, and grow strong engineering teams and cultures to fully leverage new technology and deliver great customer and business outcomes, which makes him the perfect guest for us today.
Kulvinder: So Mohamed welcome to the show and thank you very much for joining me. How are you?
Mohamed: I am great. Thanks for having me. It's actually exciting to be talking about something I'm really passionate about, so really happy to be here.
Kulvinder: I mean, you've obviously got lots of experience of leading engineering teams, and for me, being [00:02:00] able to measure and share success, both within the team as well as with stakeholders, is a critical factor for a technology leader. So, I mean, with that in mind, what would you say are the kind of key performance indicators, the KPIs, that you might use to measure the success of your engineering teams?
Mohamed: Yeah, it's, um, you know, well, there's a long history of how to measure engineering teams. In my 20-odd years I've seen various, uh, horror stories, but really now the modern kind of shape of an engineering team has to be first and foremost product focused. Which means, really, there are two ways I would approach, um, measuring, uh, how successful an engineering team is. The first question is: do they ship the right product?
Mohamed: Cause first and foremost, an engineering team, like any team in any enterprise, the whole purpose is about shipping value, and that should be product value. So that's the first and foremost, uh, set of KPIs, which are actually linked to nowadays more and [00:03:00] more companies having business-facing KPIs. So the first takeaway is: don't measure your engineering team with something separate when you can measure it with something the business already uses. Put that front and center. What are the KPIs? For us, for example, at Profile Pensions, everything is looked at through the lens of two main OKRs, which are, uh, revenue growth, simple, and, uh, customer satisfaction. Those are the two kind of proxies for "are we shipping the right product?", which means customers are happy about it and at the same time we are making money. So that's the first. Um, and in addition to shipping the right product, there's the second angle, and that's where it comes to the specifics of an engineering team. So you ship the right product, but you also have to ship the product right, and maintain the technological runway of the company.
Mohamed: One of the big responsibilities of a successful engineering team is always to have that front and center. There's the day-to-day job, but really the long-term vision [00:04:00] for any successful engineering team is to maintain what I call the technological runway, which is about making sure that how we build stuff always keeps the future evolution of the product and the company possible, uh, in a timely manner and in a scalable manner.
Mohamed: So those are kind of the two sets of KPIs. And for those "ship the product right" metrics, I would tend to be more inspired by, uh, metrics, um, provided by DORA, which is DevOps, uh, Research and Assessment, which are known and well written about. They basically are: time to value, which is predictability of releases.
Mohamed: How long does it take you as an engineering group to get from an idea to, uh, realizing value? Then the deployment frequency across all environments. People tend to forget that and think of release frequency as just about prod, but it's about how frequently you release to your internal environments, how [00:05:00] smooth the flow is.
Mohamed: So I do tend to measure how often we push to the integration environment, or whatever staging environment you may have; it's a really good indicator of, um, doing the right thing to maintain the technological runway. Then mean time to recovery, so obviously a good indicator of how maintainable, understood, and resilient the system is.
Mohamed: So that metric actually tells a lot about the quality of the code, which we might talk about at a later stage in this podcast. And, uh, the change failure rate: how often do, uh, releases break? So there's a kind of industry standard, which I think is good to take as a proxy, as a good, uh, leading indicator.
Mohamed: And then I would supplement them with one or two metrics that are specific to the product that you're dealing with. Uh, for example, um, in a SaaS-enabled business, or a business that is largely, um, driven by tech running in the cloud, one metric I like to use a lot is the ratio of [00:06:00] cost of infrastructure to revenue.
Mohamed: It tells us a lot about the scalability of the system and how efficient it is. Um, so highly successful companies, um, like Shopify for example, have that ratio well below 10%. But if you're above 10% or 20%, that means, um, there's probably room to get there. It's kind of a stretch target, if that makes sense.
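The DORA-style metrics and the cost-to-revenue ratio described here can be sketched from simple release and incident records. This is a minimal illustration, not anything from the episode; the field names, dates, and figures below are all invented:

```python
from datetime import datetime, timedelta

# Invented deployment and incident records; field names are illustrative.
deploys = [
    {"committed": datetime(2023, 1, 2), "released": datetime(2023, 1, 3), "failed": False},
    {"committed": datetime(2023, 1, 4), "released": datetime(2023, 1, 6), "failed": True},
    {"committed": datetime(2023, 1, 7), "released": datetime(2023, 1, 8), "failed": False},
    {"committed": datetime(2023, 1, 9), "released": datetime(2023, 1, 10), "failed": False},
]
incidents = [{"start": datetime(2023, 1, 6, 9), "resolved": datetime(2023, 1, 6, 11)}]

# Time to value: average delay from commit to release.
lead_time = sum((d["released"] - d["committed"] for d in deploys), timedelta()) / len(deploys)

# Deployment frequency over the observed window, in releases per day.
window_days = (deploys[-1]["released"] - deploys[0]["committed"]).days
deploy_frequency = len(deploys) / window_days

# Change failure rate: the share of releases that broke something.
change_failure_rate = sum(d["failed"] for d in deploys) / len(deploys)

# Mean time to recovery across incidents.
mttr = sum((i["resolved"] - i["start"] for i in incidents), timedelta()) / len(incidents)

# Supplementary product metric: infrastructure cost as a share of revenue.
infra_cost, revenue = 42_000, 600_000  # monthly figures, made up
cost_to_revenue = infra_cost / revenue
```

The 10% mark mentioned above then becomes a simple threshold check against `cost_to_revenue`, and the same release records can be sliced per environment to cover pushes to staging as well as prod.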
Kulvinder: That makes perfect sense. And if I think about, um, I mean, you mentioned OKRs and business goals and, uh, you know, aligning the KPIs alongside that. Where you're talking about, let's say, quarterly OKRs, um, and you have some very specific ones, do the same KPIs apply, or do you find you have to find new KPIs to maybe meet those OKRs?
Mohamed: No, I think, uh, the same OKRs should apply. When it comes to OKRs, they're really organizationally driven, so I don't like to have specific OKRs for my team, because it creates that [00:07:00] misalignment. And then you have to substantiate how your inward-facing OKRs link to the business OKRs. So it's much simpler just to say: we, as an engineering team, our job is to ship product.
Mohamed: So therefore, if we meet the business objectives and, um, outperform in some cases, we must be doing something right, we must be doing a good job. Cause otherwise it's diluting the effectiveness of engineering to try to engineer, uh, other proxies, if you know what I mean. It kind of creates that confusion.
Mohamed: So to avoid misalignment, try to stick as much as possible to the business-understood OKRs, and make sure everybody is aligned behind them. Um, I think that would be my recommendation. Um, and then if you need specific, uh, OKRs for your team, make sure they are still related to the product. Because again, when it comes to OKRs, engineering teams are there to deliver outcomes.
Mohamed: It's a different thing, you know; [00:08:00] measuring engineering efficiency is something else that's not done through OKRs. That has to be done to build high-performing teams. But when it comes to, um, business alignment, it's really around business goals. So then you can work out some measures that are probably closer to your team.
Mohamed: But they are a good proxy to the OKRs. Things like churn rate or dropped sessions might not be stated OKRs for the business, and they don't need to be, but you can use them internally as a leading indicator of whether you are on track or not, or if you're deviating from your goal, if that makes sense.
Mohamed: So like, uh, dropped sessions or failed sessions or, I dunno, you can use any sort of measure that gives you comfort that you're on the right track, measures that are closer to the engineer's day-to-day in a sense, but try to keep them really close to what you're trying to achieve and not something else.
Kulvinder: Excellent. Again, if I'm thinking about [00:09:00] those kinds of measures, and obviously there is that tight alignment to the business, which makes complete sense. Um, and you mentioned the DORA metrics. With those kinds of metrics, how do you make them make sense to the rest of the business?
Kulvinder: Who may or may not be as technical, uh, as, you know, engineers are. Um, is there any kind of translation you have to do, or anything specific that you do?
Mohamed: Yeah. Um, there's a big role, in my sense, in socializing those metrics in a way that is jargon-free, and explaining them. Uh, so for example, I spend a lot of time, especially with non-tech audiences outside the engineering group, really making sure what we care about as an engineering team is understood by the whole business.
Mohamed: And the good thing with DORA, um, is it's very easy to translate them. They're not that technical in nature. So you can use analogies; you can really explain to people that we don't want to break the [00:10:00] system too often. Use words that make sense to people, and you can get very quickly to a good understanding, and actually make them public.
Mohamed: So when we talk about, uh, stability of the system and use those words, the whole business understands what they mean, without creating a different measure. So, uh, cycle times, um, people fully understand that it's a good measure of how fluid our collaboration is, to kind of get from an idea to something, uh, delivered in prod.
Mohamed: So what I would tend to do is communicate a lot about them, not make them inward-facing again, and, uh, broadcast them to the business in a way that they understand, that's not too cryptic. And that translation is the job of a good engineering team, I guess. Sometimes an engineering team might be too much, um, stuck in their ways and their way of thinking or their language, in some cases like that tribe, those nerds that only speak their language.
Mohamed: And I think, uh, a good engineering team always has that power of [00:11:00] simplifying and demystifying tech and showing what is possible. So for example, we use different words when we communicate outwards. When we talk about microservices, we talk about little workers. That really relates a lot to operations: oh, this little worker has, you know, fallen asleep, so it didn't do its job. Uh, we try not to use heavily technically loaded terms like clusters outside engineering; use words that have a meaning but say the same thing.
Kulvinder: No, I like that. I'll have to remember that one; that's a good one. Um, so, I mean, let's think about it the other way. So let's say you are sharing the success, uh, of the engineering team, you know, metrics to reflect that. What about feedback coming back the other way?
Kulvinder: So in terms of, uh, questions they might have, do you find in your role that you ever have to, I'll use the word sanitize, but perhaps sanitize or, again, [00:12:00] translate what the feedback might be from the wider audience?
Mohamed: Absolutely. It goes both ways. Uh, I think the first key principle, you know, before getting any feedback, is to really work hard to get out of the black box, what I call the black box syndrome. And you can see that a lot in companies, where, uh, the engineering team is seen as a cryptic black box and nobody really knows what's going on inside, who does what, and how they work.
Mohamed: So there's a first job, uh, before anything happens, um, and that's probably the role of a CTO or leaders in the engineering team: to kind of be the evangelist and be able to build the bridges. Engineering is not that scary. It's the same group of people, committed to the same business outcomes, but with just slightly different skillsets.
Mohamed: So, um, be an acceptable face to the business, [00:13:00] and that getting-out-of-the-black-box process creates a lot of understanding and a common language to use, uh, gradually. Once that's established, um, it's about having an open channel. So for example, we use Slack; we have, uh, an open support chat.
Mohamed: Removing any friction or barrier to giving feedback is really an important one. I've seen companies that, um, put engineering behind a ticketing system, trying to protect them. I think it does a disservice: rather than helping unlock innovation and removing barriers to collaboration, it adds barriers to collaboration.
Mohamed: So what I would tend to do is enforce public forums, uh, using tools like Teams or Slack, where it's actually perfectly, uh, valid for anyone to raise a question, even if their understanding of the tech is not, uh, that clear, and we can use that as an opportunity to educate again. So it's not a closed loop.
Mohamed: It's fast [00:14:00] feedback. We encourage people to comment about how slow the system is, even if it's subjective sometimes. It's building those bridges that is really important. So at Profile Pensions, for example, and in my previous roles, I've always insisted on making sure engineers are in touch with the reality of the business.
Mohamed: And it's not by putting them behind a ticketing system; it's by having forums, and by moving out of that black box scenario. Um, another really interesting thing, um, which is subtle: how do you know you are facing a business where engineering is a black box? There are really simple red flags. You can hear it a lot in leadership teams, for example, saying: I don't really want to hear how it's done, you know; when you talk about engineering, uh, this is gobbledygook, um, this is all black box to me. They can even literally say that. For me, I would immediately react to that. It's a red flag; you should be [00:15:00] interested.
Mohamed: The business shouldn't be scared, and it shouldn't be daunting to talk about engineering. We just have to find common ground of knowledge and, uh, understanding. And it's all right, um, to kind of not know exactly how it works, because it's our job as engineers to decipher that, get to the real pain point, and solve it.
Mohamed: It's not, again, by putting engineering behind a wall.
Kulvinder: I'd absolutely agree with that. I think in a number of companies that I've gone into, the short-term goal of getting a ticket done is considered success. Um, and it is, in a fashion, but it's not necessarily leading to the business outcome that you want.
Kulvinder: So, I mean, with that in mind, where there are systems to get things done, and there's, you know, creating those forums for communication, how do you balance the short-term needs, let's say sprint-based or [00:16:00] ticket-based, versus the long-term goals for both the business and, in some ways, the engineering team as well?
Kulvinder: There'll be a business strategy, and there'll be a technology strategy that underpins that. But how do you ensure that you can measure that folks are delivering in the short term, but also against the long-term goals that you might have?
Mohamed: That's a really good question, and it's a typical challenge, right, of keeping the right allocation and not being overwhelmed by the firefighting that tends to happen. Um, even with the best strategy roadmap, it's invalid almost as soon as you create it; when you look back after a year, what you said you were gonna have done is totally different from what you've actually done.
Mohamed: This is so typical. Um, so what I try to do is really to be clear with the business about what I call the two rhythms. For me, there are really two rhythms: there is the fast and the slow way of doing things, and both are important. Um, so we can have a fast [00:17:00] track for anything that is what I call unplanned, right?
Mohamed: So by definition, it's okay to have unplanned things. What you don't want is, again, to add friction to innovation by putting things behind a process, where people just give up hope and you end up creating a factory of grooming backlogs, et cetera. It's very simple: there's a fast track. Fast means we're gonna attempt to do it quickly, but it's not a big, uh, company bet.
Mohamed: Right? It's, uh, unplanned, experimental in nature, and it's kind of discretionary. That means there's no guarantee that it's gonna be done at any given time. It just goes into, think of it almost as a Kanban-style work-in-progress pile of stuff. Um, but when it goes into that fast track, it's because we really want to do it.
Mohamed: So there's a first filter that needs to be immediate: this doesn't sound like a good idea, so we're not even gonna log it anywhere. It's okay not to log things, because if they're important enough, they will come back. [00:18:00] So there's the fast, and the slow is really the big bets. Uh, big bets for the company, where we kind of bank on the big future value creation, right? Where the first one is experimental and iterative in nature. Um, so this is what I call the slow roadmap. It's not that slow means slow; it's just saying it's medium term, we're talking about quarters. But the big difference here is it's committed. That means we will ring-fence resources around it and we will communicate dates around those bets. Which is not valid for the fast track, where there's no date: it's gonna happen when it's gonna happen. There's no expectation set that we are gonna estimate, uh, how long it's gonna take. We're just gonna come around to doing it, and it's supposed to be quick anyway, so there's no point going into an estimating exercise.
Mohamed: It has to be fast and unplanned, discretionary. If it's bigger than that, it has to go into the big-bet discussion: what is it? Is it really a big bet? Are we gonna invest all that time and effort? So both are valuable. What you [00:19:00] can start doing with that is tracking the ratio of time spent between the two, and where it gets interesting is that the ratio between fast and slow can be different depending on where you are as a company.
Mohamed: So like, in flight mode, it would be perfectly acceptable to spend only 20% on fast-track stuff and 80% on the slow roadmap, cos things are smooth. Uh, the problem is understood; there's no reason to firefight. There might be peaks in time where you want to move the, you know, the indicator, or change the ratio slightly.
Mohamed: But what's important is to always agree the pace clearly with the business. I will tell my stakeholders and the people I work with: well, this phase, we're gonna enter a phase of turbulence, so we're probably gonna shift the dial slightly to experimental. And that comes with a different way of working, which is taking a bit more risk, probably taking, uh, a little bit of debt, but that's [00:20:00] fine; the debt will be resolved with the slow roadmap. So when there is a debt element, always add to the slow roadmap how to fix it long-term, and tackle both in parallel, if that makes sense. It's getting that clarity of how things should work culturally with the people.
Mohamed: I spend so much time, kind of, um, educating and repeating that message until it lands. It doesn't land immediately; people forget and just go back to their, you know, historical ways of working. It's a bit of a repetition exercise until we settle. That's been working for me. There are slight variations to that, but the key is to work it out and be clear in terms of expectations.
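Tracking the fast/slow allocation described here can be as simple as tagging each piece of work with its rhythm and computing the split per period. A hypothetical sketch; the tags, figures, and the 20/80 reference point are illustrative:

```python
from collections import Counter

# Hypothetical work log for one period: (rhythm, days spent).
work_log = [
    ("slow", 8), ("fast", 2), ("slow", 5), ("fast", 1),
    ("slow", 10), ("fast", 3), ("slow", 9),
]

# Total days spent per rhythm.
days = Counter()
for rhythm, spent in work_log:
    days[rhythm] += spent

total = sum(days.values())
fast_share = days["fast"] / total

# In smooth-flight mode, roughly 20% fast / 80% slow is acceptable;
# a climbing fast share is the cue to renegotiate the pace with the business.
print(f"fast: {fast_share:.0%}, slow: {1 - fast_share:.0%}")
```

The interesting signal is the trend of `fast_share` across periods, compared against whatever dial setting was agreed with the stakeholders for that phase.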
Kulvinder: It sounds like that communication piece is key: getting the business on side and talking a common language in terms of what you're doing. And, uh, that sounds like a great approach. Um, so talking of technical debt, let's talk a little bit about code quality for a minute. What metrics would [00:21:00] you typically use to measure the quality of code produced by the team?
Mohamed: Well, that's interesting, another one of those historical debates, right, in the software engineering industry. Um, we've seen horror stories, right? Um, deeply, I think the quality of code is subjective in nature. For me, code is an interface to the machine. Uh, it's highly subjective. It's almost like, sometimes, software is a craft and an art rather than a science.
Mohamed: So I would start there. Because it's largely subjective and largely driven by human interpretation, which is the engineering team, I think the first sanity check for me is to make sure the engineering team is happy with the code and they don't feel dirty. I use that a lot: do you feel dirty touching this piece of code?
Mohamed: Because if you do, let me know; it's probably a problem. So you have to enjoy, um, working on a piece of software, uh, or be confident enough to know what you're doing and have the harness and the scaffolding in place [00:22:00] to work on it safely. It's highly subjective; there's no tool that will give you that sense, if you know what I mean.
Mohamed: So, a lot of emphasis on measuring, uh, feedback loops: you know, code reviews, and attention to how code reviews are done, how much time people spend on them, the quality of the comments, um, the frequency of them. Make sure that everybody has a say and every engineer is involved in that process, because it not being the same guy reviewing the code for everyone
Mohamed: is a good indicator that people care about the code, and therefore it's a first leading indicator. Then yes, obviously, because we track the high-level DORA-type metrics, right, these are also a very good proxy to answering that question about are we doing things right.
Mohamed: Because for me, if we deliver frequently, we have no outages, and, uh, when there is a [00:23:00] problem we can recover quickly and people are happy to step in, these are all good signs that the code is at a good level of quality, right? Without, again, going into opinionated philosophical questions about a certain language, which is kind of a different conversation. Um, so first and foremost, make sure developer experience is front and center, but also add tooling when it matters, to save a little bit of time. So, for example, use test automation, um, and code quality tools that enforce quality at build time, like ESLint (I come from a JavaScript background), for example, and spend time nurturing and defining the rules that apply to your product.
Mohamed: You can use templates that are already available; you know, Airbnb are known to have produced a very comprehensive library of, uh, quality checks, but please tweak them and adapt them to what works for your product, cause [00:24:00] some of 'em might not be relevant. And, uh, cover the basics, right?
Mohamed: With vulnerability scans, for example, you can use Dependabot, which is the GitHub-owned tool, which is pretty decent at the job. And then again, once you have the experience front and center, you add automation when it's needed, when you believe it adds value. Um, you can go all the way to more comprehensive toolkits like SonarQube, uh, to have a more holistic or even forensic analysis. Test coverage, for example, is something we don't do, but we've decided that was fine because we have a highly distributed microservices architecture where the business boundary of each service is so small that making sure the code review is, uh, relevant gives me satisfaction, and the overhead of maintaining a lot of, uh, scaffolding around each service is too expensive.
Mohamed: Right. So we'd rather, um, [00:25:00] spend time making sure the people who are dealing with the code are happy, and there's resilience, systemic resilience, built into the distribution of the workload. Um, but the key is always to think about what you need in your engineering context for the product you are building.
Mohamed: In terms of code quality, approach that, uh, on a case-by-case basis, to do just what's right and no more. Cause I've seen companies that just follow the book and implement all those things, but very quickly it becomes a burden, almost. Um, or even worse, they start to publicize those metrics outside, which is completely back to the point of providing KPIs that make absolutely no sense to the people outside. It's like giving an electrical schematic with, uh, power limitations and, um, friction parameters, you know, to somebody on the high street; unless you are an electrician or a plumber, there's no way you're gonna understand the meaning of those metrics.
Mohamed: So I think sometimes engineering behaves like that. They say: oh, we have our criteria, we have our scientific measures; [00:26:00] then there's no job to be done, we just publish it, and if people don't understand it and we get, uh, bitten back by misinterpretation, that's their fault, not ours. I think we should keep those to ourselves.
Mohamed: We're engineers, but we should translate that into a very easy-to-use manual with little pictograms saying you're gonna die or not, you know, to make sure that you're on track. So use them internally to assess the quality of the code. But for me, the best proxy for the quality of the code, and I will come back to that a lot, is how engaged the developer community is around your code base.
Mohamed: For me, that's the ultimate test. If people hate the code base, you can have the best green coverage, a hundred percent coverage, et cetera; uh, you're still not fulfilling the full potential of your engineering team, cause people are disengaged.
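One of the review signals mentioned earlier, that it shouldn't be the same person reviewing everyone's code, can be checked directly from pull-request data. A hypothetical sketch with made-up names and fields:

```python
from collections import Counter

# Made-up pull-request records.
prs = [
    {"author": "amina", "reviewers": ["ben", "chen"]},
    {"author": "ben", "reviewers": ["amina"]},
    {"author": "chen", "reviewers": ["ben", "dana"]},
    {"author": "dana", "reviewers": ["amina", "chen"]},
]

# Count reviews performed per person, and collect the set of authors.
reviews = Counter(r for pr in prs for r in pr["reviewers"])
team = {pr["author"] for pr in prs}

# Share of all reviews done by the busiest reviewer: near 1/len(team) means
# reviewing is spread evenly; near 1.0 means a single gatekeeper.
concentration = max(reviews.values()) / sum(reviews.values())

# Does every engineer on the team review at least occasionally?
everyone_reviews = team <= set(reviews)
```

Alongside review frequency and time spent on reviews, an even spread here is one of the leading indicators that people care about the code base.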
Kulvinder: I'm interested in your view on this, cause I mean, I've found when you mention metrics to some people, immediately there's, uh, a sense of fear that you are measuring [00:27:00] them as an individual and, you know, it'll somehow impact their performance and their standing within the company.
Kulvinder: Um, so in that respect, and you have touched on it, in terms of making sure that what you're delivering is valuable, um, and is related to business outcomes: are there things you'd recommend to speak to those kinds of fears, where an individual might say, well, why are you measuring this particular thing?
Kulvinder: Um, you know, because arguably it might make me look bad when actually I've done nothing wrong.
Mohamed: Yeah, absolutely. I mean, I've seen in my experience so many of these happen again and again, right, whatever KPIs you build. I think there's an interesting nature to the engineer: I trust that an engineer is wired to be efficient, in a sense. I mean, they wouldn't be an engineer if they weren't interested in fixing problems efficiently.[00:28:00]
Mohamed: So it kind of comes across as counterintuitive when you measure them that way with metrics that are disconnected from that. So from my, in my experience in my teams, I've always steered away from measuring them based on those metrics, like code coverage, all that are just, uh, health checks in a sense. And systemic health check, not individual. So you can take them with a pinch of salt. It gives you a health check of your system, your architecture, your code overall. But when it comes to personal, individual performance, don't use them. Lines of code, horror stories. I'm personally a big fan of, um, a framework called, UM, SPACE. I don't know if you heard of it, it's um, SPACE is like as a framework, which is a, based on understanding.
Mohamed: The, and improving the developer productivity first and foremost. So you, you take, you flip it, you, you measure everything about how do I improve my developer productivity and how do I give the tools and the development needed for my [00:29:00] engineers for them to be, to achieve the highest velocity? And it's, The code is the last problem actually in that context, it was developed by GitHub, Microsoft and, and University of Victoria in Canada.
Mohamed: And really, it's encouraging engineering leaders to have a holistic approach to productivity, and not just engineering for the sake of engineering, the traditional approach we've seen over the years. I think it's gonna be game-changing, and I'm a fan; I've been doing that implicitly as best practice for years, but now it has a name.
Mohamed: Um, so it's really focusing on outcomes and behaviors rather than output, because you can't measure output in software engineering. It's like code: you can have code really well written that doesn't drive customer satisfaction, or features that don't drive customer value. So the mechanics of writing code is not a predictor of success as an engineering team. It's really anchored in measuring and building [00:30:00] a team that is an efficient engineering team first and foremost, and then getting them to ship a successful product. Those two missions together get your results. So SPACE: the S stands for satisfaction. Simply measure the state of mind and the wellbeing of software developers.
Mohamed: Because if you have developers who are disengaged and unfulfilled, it's very likely they're not going to perform, even if they're in the top 1%. Then performance is the outcome of the systems and the processes you want them to run. And there's activity, which is, for example, PRs: are they involved in PRs?
Mohamed: Do they commit, do they deploy, that kind of stuff? How engaged are they with the system? Then you also have to measure all the other things that are actually not coding, right? I think there's a study from Microsoft, I can't remember [00:31:00] when, I think it was 2019, and there are numerous studies around it, which found that on average a software engineer only spends 17% of their time actually coding.
Mohamed: So 17% is pretty low. That's because they have all those other activities: being engaged in design reviews, pull requests, operational support, incident resolution. Where does all that go? It's very hard to put in place a KPI for a developer that covers all of that.
Mohamed: So be conscious of that. Then the C is communication and collaboration, and the E is efficiency and flow: how much can they progress with their tasks? Is there any friction that prevents them from doing their work? So I like that, and I use it a lot in my teams to assess things.
Mohamed: It's not measuring performance at the individual level; it's really assessing how good they are. I don't know if that makes sense, it's slightly different. It's being able to [00:32:00] benchmark them against how they become the best engineers they can be. There's no absolute scale; it's in the particular context and the particular product they're trying to build.
Mohamed: That's much more efficient than putting numbers on individual performance, because I don't think that works in the engineering industry.
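To make the team-level reading concrete, here is a minimal sketch of the kind of scorecard a leader might keep. Only the five dimension names come from the SPACE framework; the 0–100 scale, the example signals and the function name are invented for illustration, not part of the framework or of any tool mentioned in this episode.

```python
# Sketch of a team-level SPACE scorecard. Only the five dimension
# names come from the SPACE framework; the 0-100 scale and the
# example signals below are invented for illustration.

def space_scorecard(signals):
    """Average each dimension's 0-100 signals into one team-level number."""
    return {dim: round(sum(vals) / len(vals)) for dim, vals in signals.items()}

team_signals = {
    "satisfaction":  [72, 68],      # S: e.g. pulse-survey score, retention
    "performance":   [80],          # P: e.g. % of releases that hit their goal
    "activity":      [65, 70, 60],  # A: e.g. PR, commit and deploy trends
    "communication": [75],          # C: e.g. review-turnaround feedback
    "efficiency":    [55, 60],      # E: e.g. % of time spent unblocked
}

print(space_scorecard(team_signals))
```

As in the conversation, the output is one picture per team, never a score per individual.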
Kulvinder: That's obviously one of the pitfalls to watch out for, I guess, if you're coming into a new engagement or a new role and you want to put some measures in place. Are there any other common pitfalls you can think of that people should be aware of when trying to measure the success of,
Kulvinder: or just measure, engineering teams?
Mohamed: Yeah, I think there are many pitfalls. The historical ones we've talked about a lot in the literature: measuring the wrong things, like lines of code or number of bugs fixed, which have been proven to create the wrong behaviours and the wrong outcomes. Because if you tell somebody, a tester for example, you're going to be rewarded on the number of bugs you [00:33:00] find,
Mohamed: their incentive is to create bugs, right, so they can find more. And it's the same for developers. So it creates the wrong incentive, and lines of code is the same. There is no correlation between lines of code and the success of the product; if anything, more lines of code probably means less maintainable code.
Mohamed: The other pitfall is any metric that gives an illusion of control. I've seen velocity charts being thrown out to management that really need to be decrypted, and that don't give any real insight, because it's not about how well the engineering team is doing, it's about how good the engineering team is. How well it's doing is so much a function of how well the company is doing, its culture, its way of collaborating.
Mohamed: It's a bit unfair to single out engineering as a black-box system that can be happy and do well completely disconnected from the business outcome. So it's how good [00:34:00] are they, comparatively, as a benchmark. When you throw out a velocity chart for a particular sprint, I've seen people say, look, we delivered so many story points this sprint, and then arbitrarily set:
Mohamed: next sprint we'll do more. That's wrong. You can't set arbitrary targets on velocity. All you can do is work on how good your engineering is and hope that maybe you'll hit the velocity. But actually, velocity doesn't matter unless you can measure the value creation; that would be different.
Mohamed: So that's another pitfall: it's the wrong thing to measure. By all means, have those data points as a leader and look at them as proxies; that's okay. But it's just a piece of information, and you need to drive insight from that piece of information. Don't throw it out there in public and say, look how good we are doing.
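Mohamed's caution about velocity can be sketched in code: treat story points as a noisy signal a leader smooths and watches for trends, never as a public target. All the story-point figures below are invented for illustration.

```python
# Treat velocity as a trend signal, not a target: a rolling mean
# smooths sprint-to-sprint noise. All story-point figures are invented.

def rolling_velocity(points, window=3):
    """Rolling mean of completed story points over the last `window` sprints."""
    out = []
    for i in range(len(points)):
        recent = points[max(0, i - window + 1): i + 1]
        out.append(round(sum(recent) / len(recent), 1))
    return out

sprints = [21, 34, 18, 25, 30, 12]     # raw numbers swing wildly
print(rolling_velocity(sprints))       # the smoothed trend is far calmer
```

The point of the smoothing is exactly what the conversation says: a single sprint's number is noise, so setting "next sprint we'll do more" against it is meaningless.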
Mohamed: That's just asking for trouble, for me, because it doesn't measure the right thing. What I [00:35:00] do broadcast, in terms of information, is how well we're doing on our developers' engagement. We can measure that with surveys, right? We use Officevibe, for example, which is an employee NPS platform you can customise to ask all sorts of pulse-survey questions, specifically for engineers, about how they feel about things.
Mohamed: Yes, I'm happy to broadcast that the engineering team is healthy and getting healthier, mentally, and happier doing their work, more fulfilled, more goal-driven, more attached to the business. That, for me, is a signal of getting to a high-performing team.
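The employee-NPS figure behind pulse-survey tools like the one Mohamed mentions has a standard definition: the percentage of promoters (scores 9–10) minus the percentage of detractors (scores 0–6) on a 0–10 scale. A minimal sketch with made-up scores follows; the `enps` helper is an illustration, not an Officevibe API.

```python
# Employee NPS: % promoters (scores 9-10) minus % detractors (0-6)
# on a 0-10 scale. The survey scores below are made up for illustration.

def enps(scores):
    """Employee Net Promoter Score, rounded to a whole number."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

q1_scores = [9, 10, 7, 8, 6, 9, 5, 10]   # 4 promoters, 2 detractors
q2_scores = [9, 10, 8, 9, 9, 10, 7, 9]   # 6 promoters, 0 detractors

print(enps(q1_scores), "->", enps(q2_scores))
```

As with velocity, the single number matters less than the quarter-on-quarter trend: a team moving from the first quarter's scores to the second is "getting healthier" in exactly the sense described above.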
Kulvinder: Excellent. I think that's sometimes missed when we talk about measuring engineering teams: that kind of employee satisfaction, right? How happy are they? I'll always go back to a book, you've probably read it, Accelerate, around that culture of DevOps and so on.
Kulvinder: And I think [00:36:00] there was a measurement, Microsoft again, where they looked at their engineering teams before and after they put DevOps in place, and employee satisfaction, I'll probably get this number wrong, went from somewhere in the mid thirties all the way up to, I think, the high seventies.
Kulvinder: Because with continuous delivery and DevOps processes in place, they knew they weren't necessarily going to be called out on a Friday night to fix something, so they could relax, and therefore they were happier. So that kind of engineer satisfaction is maybe not spoken about enough, and I'm glad you've shone a light on it.
Kulvinder: So in terms of those measures, you're publishing them out to the business. How much do the engineering team see of those measures?
Mohamed: They all get a report automatically. This is all transparent and public; everybody has access [00:37:00] to those team-based metrics across departments. Some of the feedback can also be anonymous, which is great, so you hear a lot about what people are worried about. So first, it has to be public and transparent.
Mohamed: What stays private is the individual side. When you're using SPACE and you want to optimise for performance, the team-level view is the first leading indicator for getting the team to perform well, but then you still have to work with the individual.
Mohamed: To do that, what we've done is take the SPACE framework and develop an engineering-led progression framework designed to develop that engineering velocity. And we built that framework by the engineers, for the engineers. It was a collective, democratic exercise where we said we need to [00:38:00] come up with a level playing field for everybody: a reference, a sat nav, a guide to how well they're doing and what good engineering looks like, if that makes sense.
Mohamed: So we've developed that very precise progression framework, as I call it, written down in Notion with a spreadsheet, and it's a really powerful tool. The idea is that it gives a picture of all the things you need to worry about to be an extremely good engineer who is happy, fulfilled and delivering value, and it creates that culture of efficiency that you want. We use it to measure people's progression and reward them accordingly.
Mohamed: And that is really helpful because, as I said, it's clear and transparent for everybody, but where each person sits, they can keep to themselves. It's their own tool, and they have the satisfaction of knowing where they [00:39:00] are and the things they need to work on. It's very powerful.
Mohamed: What I personally like is anything that gives a developer the ability to deal with ambiguity and friction, to be a product engineer, to eliminate waste, to be an active contributor to that effort of removing inefficiencies. Every developer will say yes to that: your job is to remove inefficiencies. That's kind of how a software engineer's brain is wired.
Mohamed: The progression framework, documented and visible across the organisation, is so powerful because you can fold into it so many dimensions: around leadership, around inclusion, around diversity, around the values and behaviours that really create exceptional engineers.
Mohamed: And when I say to people, why would you join my team, I tell them it's really two things: you like to ship [00:40:00] product, and you want to become the best engineer you can be. That's really what drives people. Nothing else, no other arbitrary metric.
Kulvinder: I think that's an excellent sales pitch for any engineer coming in as well.
Mohamed: Yes.
Kulvinder: You say they want to solve problems and they want to deliver value. So that's fantastic. I'm going to finish up by asking you something a little bit wild and wacky, I guess, something of a different question.
Kulvinder: Knowing what you know now, with the experience that you have: if you were able to go back in time to the start of your career, is there something you'd tell yourself not to do, or to do differently?
Mohamed: Absolutely. I wish I knew all those things; I would probably have saved so much time. It's probably: use your brain, and don't just apply processes by the book. There's a lot of literature in our software engineering industry, and a lot of it is really interesting, but the trick is [00:41:00] to take all of that and really tailor it to your world.
Mohamed: I've seen so many examples in my career, and I was a culprit of this early on, of just being naive and saying, oh, let's do agile, let's do sprints, let's do all these things that are almost rituals we've developed in our tribe. But really, at the end of the day, I think the best engineering
Mohamed: is done by smart people working it out: using all that knowledge and all those best practices to create an environment that can unlock innovation, one that is really fit for purpose for the product they're trying to build, and getting away from the debates about this approach versus that approach.
Mohamed: I mentioned SPACE; it's just the latest iteration, and I think it's game-changing if it's adopted, because for once we're going to move away from measuring bloody lines of code and take a holistic picture [00:42:00] to build better, more inclusive engineering teams.
Mohamed: So my advice to younger me would be: you're smart enough, you can work it out. You can read all the literature, but really sit down with your teams and let engineers build systems for engineers. And make them, I always use the words, geeks that can speak human. Those are the best engineers for me.
Kulvinder: Fantastic. That's great advice, really great advice. I think all of us have been guilty of overcomplicating things sometimes. Maybe it's the nature of engineering.
Mohamed: Absolutely.
Kulvinder: Keeping things simple, giving folks the tools to be able to do the job and do it well,
Kulvinder: and collaborating together, is so important as well. So I completely agree with you. That's great advice. Thank you.
Kulvinder: Well, look, thank you so much for your time today. I hope everyone who listened or watched got as much out of the discussion as I did. It was really great to hear about your experiences and [00:43:00] how you were able to distill them for everyone.
Kulvinder: So thanks. Thanks very much for coming onto the show.
Mohamed: Thanks a lot, Kulvinder. I really appreciated the invite. It's always good to talk and give back some humble wisdom whenever we can. Thanks for having me.
Kulvinder: Fantastic. Well, that's all for this episode of Tech Exec Talks. I hope you enjoyed our enlightening conversation as much as I did. You can find out more about Mohamed in the show description; there are links in there as well. And if you liked this episode, be sure to follow the show and leave us a review.
Kulvinder: Your support helps us reach more listeners and viewers. We'll be back soon with another episode, and until then, stay curious.