
Systems Engineering

April 20, 2017  |  Time: 1:00:15  |  Subscribe in iTunes

Our PM methods come under stress on projects tied to research and development, where systems engineering is the key discipline. The focus is on requirements that change and project scope that is unstable, because many decisions will only arise after the project is under way. The PM's task becomes balancing creativity against structure…but what are those trade-offs? We need to follow more conditional branching, tasks that are never executed, and tasks that are suddenly forced to be repeated until an exit condition is met. It gets complicated when seen from our standard linear view. Listen in and hear three engineers, Randall Iliff, Ruth Barry, and Nathaniel Fischer, discuss projects with different definitions of what “done” looks like.

Listen online or read the full podcast transcript below.

About the Speakers

Randall C. Iliff

Eclectic Intellect, LLC

Mr. Iliff has over 35 years’ experience leading developmental efforts, and has participated in all phases of project execution, from proposal to close-out. He is a seasoned large-project PM as well as a recognized expert in Systems Engineering. Mr. Iliff holds a BS in Engineering / Industrial Design from Michigan State University, and an MS in Systems Management, Research and Development from the University of Southern California. Mr. Iliff is a charter member of the International Council On Systems Engineering (INCOSE), and currently serves as the INCOSE representative on an alliance between INCOSE, PMI, and the MIT Center for Program Excellence. Until early 2016 Mr. Iliff was VP at the award-winning design firm bb7, where he was also Director of Strategy, Methods and Learning. Prior to that he worked for Motorola, Martin Marietta, and McDonnell-Douglas. In 2016, he left bb7 and founded Eclectic Intellect.

Ruth Barry

Director of Electrical & Software Engineering

VOLCANO (PHILIPS) CORE FM, Project Manager: Wireless device used to measure blood pressure and blood flow. The project included human interface, industrial, mechanical, advanced electrical and software design, and full product testing.

ORASCOPTIC XV1 INTEGRATED LOUPE AND HEADLAMP SYSTEM, Project Manager: The first fully integrated loupe and headlamp system for use in dental and surgical applications. The project included IP research, human interface, industrial, mechanical, optical, electrical and software design, and full product testing. The loupes launched 18 months after the approved concept.

KERR DENTAL CURING WAND, Project Manager: Two generations of ergonomic curing wands; the second product features ultracapacitor technology in place of traditional battery technology. Both projects included IP research, human interface, industrial, mechanical, optical, electrical and software design, and full product testing. The first wand launched within twelve months of concept approval; the second product followed one year later.

Nathaniel Fischer

Mechanical Engineer

LANAIR PORTABLE RADIANT HEATER, Mechanical Engineer: Addressed reliability concerns with an existing machine. Re-designed a portable radiant heater to reduce cost and simplify the design, minimizing assembly time and reducing rework and scrap.

MALLINCKRODT PHARMACEUTICALS, Lead Mechanical Engineer: Designed a modular test fixture for gas sensors that allows for several different gas flow configurations.

GE HEALTHCARE VARIABLE COST PRODUCTIVITY (VCP) PROJECTS, VCP Mechanical Engineer: Worked with a cross-functional team to reduce component costs. Tasks included writing engineering change orders, verification activities, and updating documents.

NASA KENNEDY SPACE CENTER MODELING AND SIMULATION, Simulation Engineer Intern: Using MathWorks Simulink and Simscape, developed a model of a closed-volume fluid tank and tested the model in a real-time environment.

Full Podcast Transcript


0:00:04 Randy Iliff: You need a PM and an engineering team that goes, "Well, it's not completely impossible so I guess we'll keep working on it."

0:00:10 Ruth Barry: It's a race to find out how quickly can we identify if this is actually going to be a very successful, fully featured product.

0:00:20 Nathaniel Fischer: It's easy to get distracted by smaller things that maybe won't end up with as much of a payoff in the end.

0:00:28 Kendall Lott: What's it like to manage a project where the unknowns are not just related to external factors of stakeholders, team performance, and resource availability, but rather to the scope itself? Where the unknowns can range from, say, 10% to even 90% of the project scope. To design and manufacture a product or a system whose requirements and capabilities veer into murky territory, or even totally unknown territory? It's kind of systematic bushwhacking and it's a fascinating dance that PMs and engineers perform to successfully shepherd these projects from start to finish. In today's episode, we talk about some of the tenets, theories and techniques for working through these amorphous and often ambitious projects.

0:01:07 Speaker 5: From the Washington, DC chapter of the Project Management Institute, this is PM Point of View, the podcast that looks at project management from all the angles. Here's your host, Kendall Lott.

0:01:18 KL: My guests for this podcast have vast experience working on projects that involve a certain degree of unknown. Randy Iliff works primarily in the outer realms of the unknown, where even the capabilities of the final product cannot be known. He introduced me in turn to Ruth Barry, who deals more with the mid range, 50% unknown projects, and Nathaniel Fischer, who hews closer to the mostly known range in product redesign.

Randy Iliff is the founder and principal of Eclectic Intellect, a Madison-based company that provides a unique range of new product development support services to the innovation community. With a background in engineering, Randy is a founding member of INCOSE, the International Council On Systems Engineering, and has also served as chief systems engineer for the IceCube Neutrino Observatory, or simply IceCube, a neutrino telescope at the South Pole. You'll hear more about that in a few minutes.

0:02:09 KL: This interaction of the project management and engineering, how do you define how the two fields normally operate, such that there's a difference when they come together? 

0:02:19 RI: Within the entire spectrum of PMs that I have encountered, they range from individuals who are charged with recreating something that's already understood in detail, but dealing with the uncertainty that arises in the application circumstances, the management of resources, the measurement of time, and that may be in construction or production type things where 'done' is already defined in advance. Your job is to deal with the variables of making sure it can happen correctly. Where my kind of world comes in is when that definition of 'done' is fuzzy or missing completely at the beginning.

0:02:54 KL: What's the PM's obligation in working in this engineering environment or how do you see that work together? 

0:03:00 RI: Well, the trick here is that there are two different classes of work taking place at the same time. One of those tasks is a straight, execute per plan, minimize variance, do things as efficiently as possible, minimize and avoid risks. The hard part is the stuff that hasn't been defined yet, where the recipe shelf includes things like brainstorming instead of statistical process control. Where it includes running a prototype of a system for a period of time just for the purpose of gaining more understanding about the intent and the options, rather than it being an end item deliverable from the project. So it's the use of project time and energy to gain information that helps to define the project, such that the remaining period of performance effort can be more focused on a specific deliverable.


0:03:51 KL: Sounds like that's something that the original stakeholder, I perhaps would say the customer of the project itself, would be confronted with and understand going into it, right? 

0:04:00 RI: It's an interesting comment, Kendall, because in many cases I've found that the stakeholder community greatly underestimates the amount of uncertainty that remains when they state a vision of, "I would like a new product that is comfortable, easy to use and fun to play with." And they believe they're done with the specification and you should now go execute, whereas in reality, all they've really done is define a target that has to be articulated and codified in a sufficient way to be managed and executed. More often than not, I find that the stakeholders on the large developmental projects I've been around don't agree on what 'done' should look like until the project is almost complete. There should be a convergence of those viewpoints all along to keep the project healthy and alive, but it's stunning to me how even in final test people still have different ideas of what the system they're stakeholding should be part of.


0:04:56 KL: You're taking me right to the heart of it to me, then. The stakeholders in theory should've known, you identified they don't. So, now we're at who owns the obligation of identifying that? 

0:05:05 RI: Part of this goes to the nature by which large system procurements take place. They often make purchases of developmental systems and solutions or programs in the same way that they would buy capital infrastructure or commodities for their Xerox machines in the offices. And that method of purchasing forces all of the bidders to act as though all of the rules were known on day one, so that they can get through the gate of award and then you return right back to, "Hey, we just won it and now what was it we're supposed to do? Let's get together and have a meeting and agree on that." So there's an overlap here of the mechanics of procurement that works against the need to have this. "I know this much, you must go do this, I will hold you accountable for it" subset, and then another class of work which says, "I intend to go here, you will be scored based on the ability you have to get me to these places and the quality of solution I have when I reach them."

0:06:05 RI: If you could take the politics out of it, then the PM would still be faced with the challenge of preparing estimates in cost, in schedule, in technical and programmatic risk for elements of their project that are essentially unknown at the time that they're originally being bid. And there's a sense that 10% of a project being unknown isn't that dangerous and 50% would be pretty dangerous and of course if 90% of it's unknown, you know you're in a really major project. But what I found is that, even tiny amounts of new creep in and interact with other projects in ways that are surprisingly complicated for the PM to deal with. An example would be a derived requirement. When you get to a point and you decide you're going to use an evaporative cooler to remove heat from something, well, now you have to have a place to mount that. Now you have to have power for it. Now there's another inspection for that. Now there's perhaps a permit that goes with it. So it's each one of these things that then triggers three or four more things behind it that is very hard to articulate at the beginning. Even if the PM knows it and can make a good case for it from their experience, it's difficult to communicate that to stakeholders who perhaps have a different agenda.

0:07:13 KL: So the PMBOK says, "Figure out what your requirements are, develop scope, build a work breakdown structure." And then from that, a schedule, resource loading, dependencies, etc. We cannot do that because it is not yet known and there are elements of things we will build that will actually introduce new changes. We know that those will be introduced, but we don't know what those impacts are. So one way this has been handled is Agile. How is this not a restatement of the problems that set us up for Agile? 

0:07:40 RI: The only part of this that's really different is this part that's changing. The part that's a constant can be treated with any type of method you want to use; that part's pretty easy. So step one is to split out as much of your understanding of the project from, “This part is known, it's relatively safe and stable. It's relatively independent of the decisions that are ahead of me” and I can begin doing pure PM, project management on that type of work. The portions that are missing, then the program management aspect is, “In what order will those pieces of missing information become available to me? How much of that do I have to open to see the connections to something else?” And this is where we get into the brilliance of Agile, Kendall. What Agile does is ask you to break down that big ball of string into whatever degree of smaller pieces are necessary, to get the information you need, to make the next decision in sequence, and judge the validity of that decision. At least in my experience, when I've seen Agile work very well, there's been a sense of the destination that provided a reference. There has been a logic of the way the time would be used and then Agile has been used as the mechanism to get efficiency out of each increment of execution. The places I've seen it misused is when the top view was missing and it was simply a minute by minute log of wherever the Brownian motion was taking the project.

0:09:00 KL: When we know the goal, but maybe some of how we're getting there is undefined, that is where Agile may be very helpful, but you're really calling out systems where we don't know what we're going to get yet.


0:09:11 RI: Exactly.

0:09:17 KL: We're in the class of problems where the project output itself, 'done', as you said, is not known. I challenged that the stakeholder should have a sense of it and you said, "Stakeholders kind of know a result they want but they don't know what 'done' actually looks like."

0:09:29 RI: They'll all have boundaries in their mind. Yeah, so instead of saying, "I need a model 17A with this option in it," then just press Order on the keyboard, they say, "I want something that does this." And the very best communication I know to advise your listeners to seek at this stage, is to ask people what they want in functional terms. "I need this to transmit torque, not I need a drive shaft."

0:09:52 KL: Okay. So we've got this first step where a PM says, "Great, I've been assigned this." I'm hearing this nascent engineer floating around behind this PM, though. What's the next step they need to take on this? 

0:10:01 RI: Well, obviously, you're looking to get some kind of a definition that is mutually compatible. If one of your stakeholders says it should be small, another one says it should be large, one says it should be invisible, one says it should be a bright, shiny color, you’ve got to get those impossible-to-reconcile defects out of the way, and that gets you to the point where there's possibility. That'll be a big, fuzzy, boundary cloud, and you can now start looking at the inside of that cloud and say, "Are there any standard pieces of this that I can just go grab and use because they're fixed? I know I'm going to build them that way." A bridge, a railroad, whatever. Just standard things that I can grab and go work on. Then you look at the pieces that need to be developed and you're especially looking for the dependencies. So, if you can understand task dependency for the flow of work, take that same skill set that every PM has mastered, just to get a PERT diagram together or a network diagram, and now think in terms of not just building a building or a structure or a process, but building the definition of the work. You may have a definition of, "I'm going to replace a hospital respirator with a new unit." That's a fence that says, "I'm not building railroads or spacecraft or nuclear weapons," and it gets you started. It divides up the world. So this is all about reducing it to something finite so that it can, in fact, become a project or a program under deliberate control.


0:11:23 KL: So you're still going to leave some fuzzy space in here, it sounds like.

0:11:26 RI: Yeah. The first piece is the outside boundary. You want as good a stone fence around the outside as you can get, even though you know you're going to go back and revisit as you get smarter. And then you want to start carving up the territory on the inside to find as much independence as possible. So all of the dependency tests that a PM is used to thinking about in terms of resources or other capabilities, dependencies, it's now just in the definition of the work. You want to expand to a point where you have enough understanding to think, "You know, I'm pretty good at this. I need to start actually accomplishing some work." And then you switch from expansion into convergence. So if you say there are basically two types of work, the stuff that's new and the stuff you already know how to do and just want to do over again, that divides the work up into two classes. For the stuff that's new, I want the fastest definition of that bubble so that it can be reduced to normal execution-type work.


0:12:25 RI: There are two features that are kind of missing from program and project management software today that may help this conversation. One of them is conditional branching. I've yet to see a project plan that gets to a point and says, "If this, then I follow this line. If not, I follow the other one and the rest of this is ignored." There's an assumption in almost every project plan that every task you see will always be executed, and yet in a developmental world, that simply doesn't happen. There's also recursion, which if you've used any kind of a programming language, you can call it a do-loop in the old FORTRAN days or whatever, but you stay in that until you hit a condition and then you exit. So I would stay in engineering definition until my stakeholders all agreed on what I wanted to go build and then I'd release it to manufacturing. That sounds great to the engineer, but it doesn't look really good to the PM who is wondering, "When do I schedule that? I’ve got a boss up here who wants me to have it delivered by September. Is that possible? Impossible?"
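The two missing features Randy describes, conditional branching and recursion until an exit condition, are easy to sketch in a few lines of code even though mainstream PM tools don't model them. The sketch below is a toy illustration only; the task names and the five-cycle cap are hypothetical, not drawn from any real scheduling tool.

```python
# Toy model of a developmental plan fragment. All names
# (refine_definition, release_to_manufacturing, etc.) are hypothetical;
# the point is only the control flow that linear plans lack.

def run_plan(definition_agreed, max_cycles=5):
    """Run the plan and return the list of tasks that actually executed."""
    executed = []
    agreed = False

    # Recursion: stay in engineering definition until stakeholders agree
    # (the exit condition) or the cycle budget runs out. How many cycles
    # this takes is unknowable up front, which is exactly what makes it
    # hard for a PM to commit to a September delivery date.
    for cycle in range(1, max_cycles + 1):
        executed.append(f"refine_definition_{cycle}")
        if definition_agreed(cycle):
            agreed = True
            break

    # Conditional branching: exactly one of these tasks runs; the other
    # appears in the plan but is never executed.
    if agreed:
        executed.append("release_to_manufacturing")
    else:
        executed.append("escalate_schedule_risk")
    return executed
```

For example, if stakeholders converge on the third cycle, `run_plan(lambda c: c >= 3)` executes three definition tasks and then releases to manufacturing, while a plan that never converges exhausts its cycle budget and takes the other branch.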

0:13:15 KL: So it's going to take a PM who's trained differently to even start seeing this, it seems. On the other hand, I realize, two seconds into the project, they're going to start being in trouble if they haven't looked at it this way.

0:13:24 RI: Yeah, and this is why it's so important to say there's two things going on at the same time, because if we don't do this, Kendall, then there'll be a danger for PMs to say, "Okay, I get this. I have to wait until I know what the requirements are before I start committing big resources." And that's a pretty good model, except it's not true for the whole project. In even the most exotic things I've ever worked on, there's a subset of the project that can begin on day one using structured, legitimate, disciplined PMI methods and that project will be better in those tasks for applying that discipline from the very beginning. Your goal is to take the stuff that can't go through that model yet because it lacks definition, give it that definition and then move it over to the other side of the fence where it's safe, as rapidly as possible.


0:14:09 KL: There are projects that will end up with some fuzziness still for quite a while. In fact, there may be this level of recursion and condition... Well, more I'm thinking of the conditional branching, where we simply now know that we cannot know until we engage – where we begin to do a prototype, when we test something, when we build something. So, how does a project manager handle that? They're communicating it, they just plan and leave that as an open budget item? 

0:14:33 RI: It sounds terrifying when you say, "Oh, my God. I've got everything as a variable." But anyone who's ever been a parent knows there's a small number of things you must control and then there's day to day stuff that just simply happens. And, if you're raising a good, healthy child based on whatever the standards of the stakeholders are, that's probably a better analogy to developmental projects than the manufacturing, "I do this, then I put in part B, then I put in part C" model that some of the really mechanical project management guidance might have you believe.

0:15:02 KL: So what do you suggest that they're doing there, they take that as an acceptable risk or…? 

0:15:06 RI: PMs who work in this developmental space have to have a respect for the additional level of complexity. They have to be able to communicate that to stakeholders to be able to shift the use of timing in the project a little bit. There needs to be more of the total project or program time spent at the beginning to understand what the destination is before you begin accelerating towards it. In design, the difference between a really good solution, an acceptable one, and a failing one can be billions of dollars to the corporation that has their brand on that product. So there's an investment component to development, whereas there is a cost management component to pure PM execution.


0:15:51 KL: What happens when it's 10% new and 90% understood, a 50-50 split and then perhaps a complete flip where almost all of it is essentially, I guess, a research project or a discovery project of some sort, where it's 90% new and 10% fixed? Tell me something about how you characterize those three different buckets and then we're going to be trying to find some examples around that.

0:16:09 RI: Sure. The key is, whenever you move off of 100.0% defined, the second you move away from that, you're in a completely different space. Manufacturing is the absolute special case, the singularity that exists when no one is changing anything during the period of execution. Most of the PM world doesn't operate just like a high volume production line would. So, if it's 1% or 2% changes here and there, you can pretty much use the same production model. You'll pick up a slight inefficiency because of rework or kind of being surprised by things, but it won't dominate the equation. When you get up into about the 5% or 10% different, although that seems pretty trivial, just to change the tail lights or the trim or to put it in a new package or a color this year, you'll find that the dependencies of all of those decisions add up much more rapidly than you would think. So a 10% change can actually double the difficulty of doing a developmental project. When you get up into the 80%, 90%, 95% never seen before level of something like a lot of the NASA work or some of the things that I've done on advanced military and aerospace projects, it's a completely different model.


0:17:22 KL: Talk to me about when it was a 90-10, 90% new, 10% fixed, really complicated project. You want to talk us through that? 

0:17:29 RI: The best example of a crazy out there project, I can give you, Kendall, is IceCube. It is literally a cubic kilometer volume of ice at Amundsen-Scott Station at the South Pole. It was roughly $270 million. The National Science Foundation in the US was the major source of funding for that. We also had funding from partners in Germany, Sweden and Belgium. But the instrument is a high energy physics research project. It is termed by the NSF as a discovery class instrument, and that makes them a particular design challenge, because the physicists who were giving me guidance on what they wanted the instrument to do, were in fact speculating about the nature of the universe and coming up with theories about how the universe might work.

0:18:08 RI: So the instrument that I designed had to be flexible enough to deal with any of the uncertainty in the understanding of what they wanted the device to do. They didn't know what they wanted the baby to grow up into because they wanted to see what it wanted to be later. So when this instrument was first turned on, they realized that there were some extraordinarily high-energy particles out there at or even beyond the best hopes of the researchers, and they were able to tune the behavior of the instrument to be more effective at trying to understand and draw information from that. My job was to gather all of the range of uncertainty from all of the stakeholders about what they might want this thing to be when it was done and what they might want to use it for after they'd discovered something interesting. And as much as I could to preserve that flexibility into the future.

0:19:02 KL: Let's talk about difficult conditions. Just from the pure, the known stuff, right, had to be hard.

0:19:08 RI: Yeah. Just some of the gee whiz things. If you've ever been in Dubai and you've seen the Burj Dubai Tower, or Chicago, the Sears tower, or Willis Tower, imagine three, four, maybe five of those buildings stacked up on top of each other, except going straight down into ice. That's how deep the deepest sensors are in the ice. They're 2,450 meters deep in the ice. To get a hole in ice and put a sensor in it 2,400 meters down needs a lot of equipment. So now, "Okay, I want to put sensors in the ice. How am I going to get them there?" Well, you don't exactly have a big cordless drill with 2,500-meter bit on it hanging around at the South Pole. We had to invent a hot water heater that was able to melt a column of liquid water 2,500 meters deep, about a meter in diameter, so that these instruments... The instrumentation could be lowered into the ice and then when it refroze, it made a single solid structure.

0:20:00 RI: And when someone comes up to you and says, "I want to put these little egg shell sensors with a glass photomultiplier tube in, basically vacuum tubes, 2,400 meters deep in the ice. There's 10,000 pounds of pressure of just the water sitting above it and oh, by the way, did we mention that this will all freeze back in?" They'd start rattling this stuff off. And most people would run screaming for the exits. You need a PM and an engineering team that goes, "Well, it's not completely impossible so I guess we'll keep working on it."


0:20:33 KL: What was the outcome of all that? 

0:20:35 RI: A spectacularly successful instrument. The devices in ice, which were my primary focus for it, are exceeding what were already extremely high reliability expectations for the devices. But the fact that the parts that are in ice, that are inaccessible, that are the source of the quality of all of the data, are over-performing against every technical requirement they were given and also in terms of reliability, means that the investment that the taxpayer made in that instrument will likely have a much better return than it would otherwise. That's the piece I'm probably the most proud of. The tricky parts of it, just the time-delay of getting stuff to the Pole, the complications of doing work in an environment with international treaty obligations.


0:21:20 RI: So the PM has to take on a really tricky role in these projects to balance out the stakeholders' desire to achieve what in some cases may be an unrealistic goal and the engineer's desire to optimize, to what may be an unrealistic level, the performance of things that are possible. So you run up against a social problem of one group of people who loves to play with possibilities, sometimes beyond the window or the budget of the project, and another group of people who are used to things being defined very rapidly because they already exist and seeing evidence of execution. And when you put those two expectations against each other, if somebody isn't acting as an ambassador to communicate between those two camps, either the stakeholders will say, "Shoot the engineers and ship something," or the engineers will win, and they will simply use up all of the market time and budget and you won't ship a product.

0:22:09 KL: It strikes me that you'll really need good communication and so you need pretty strong feedback, particularly if you're not the one who's an expert in the space.

0:22:18 RI: The PM as a business system manager is a different role but he or she very much is trying to make sure that the effort is successful by balancing out the stakeholders, the engineering inputs and all of the traditional PM dimensions. And you're right, it's all about communication.


0:22:38 RI: So the PM doesn't have to be technical. I make no claim whatsoever to having anywhere near the physics credentials other than an engineer who survived the classes and went on and got a degree. But I could communicate with the physics expertise that was on that project and I could understand from them what was important to them in what orders and when they had to make trade-offs, what they considered the best shape of that trade curve. And I was able to get those groups of people together and get them to agree on what was generally a shared set-up. "If we take the instrument in this direction all of us are better off even though, yeah, I give up a little of my sensitivity in order for you to get that." The project as a whole was safer when everybody agreed.


0:23:20 KL: To learn more about Randy's theories and techniques, look for the book Integrating Program Management and Systems Engineering: Methods, Tools, and Organizational Systems for Improving Performance. Hot off the presses, this collaborative effort between MIT, INCOSE and PMI, and yes, Randy was part of the team, incorporates the latest research on the subject. It's available in digital and hard copy at Amazon, PMI and through the publisher, Wiley.


0:23:50 KL: Ruth Barry is the Director of Electrical Engineering at bb7, a product development firm in Madison, Wisconsin. She has over 25 years of product development experience in the automotive, medical, dental, industrial controls, consumer, telecommunications and wireless communications industries. She's worn myriad hats, including design engineer and project manager. According to Randy, she's well versed in those middle range projects, a blend of 50% known and 50% unknown.

0:24:17 RB: I've worked on a really broad range of products. But some of the ones that are oftentimes the most interesting are the ones that come in from the medical industry, where we're looking for a next generation product that is some combination of cost reduction and new features. So we're looking at introducing or adding some additional technologies. And the interesting part of that is you need to look at the product through new eyes again from a systems perspective to say, "Which parts of this can I just inherit and bring forward? And where are my opportunities to actually look at bringing in new features and new technologies? And what does that mean?" So you have to look at it from a risk management perspective and identify what sort of additional testing or factors you have to bring into the project plan.

0:25:20 KL: It sounds like the scope which you're handed is being this blend, this one set of product plus these other features. And you begin to have to flex the rest of your planning around that.

0:25:30 RB: That's very accurate. And sometimes that doesn't actually start as the scope statement. It's pretty common for a client to come in and say, "Hey, I've got this product. There are a few things I'd like to modify about it from a reliability perspective, or some areas I would like to get some costs out of it." And as you start looking at that with the client, you identify some additional opportunities: if I'm opening the hood, there are some opportunities to take advantage of some new technologies, in addition to looking at some cost reduction or some additional reliability features that you might be able to incorporate.


0:26:17 KL: Let's just run the PMBOK for a second. Some of the key areas, you mentioned risk. How do you see this changing a more linear process in risk from a planning and an execution place? 

0:26:31 RB: So when we start with the initial planning phase, where you're really looking at your requirements and looking at your project scope, we try to always look at starting up our risk management strategy, looking at what our manufacturing strategy is going to be and what our testing strategy is going to be. All of that happens on the front end, when we're doing that initial project planning. And when you start looking at the risks along with what features you're trying to incorporate, a good example would be one of the products that we were looking at for one of our medical clients, where they had basically a cable connecting a headlamp to a battery pack. And the cable was a bit of a problem, so they were looking for ways to manage the reliability of the cable.

0:27:28 RB: One of the ideas was, "Well, let's get rid of the cable. Why don't we put the batteries up by the headlamp and actually have the entire thing be a head-worn structure?" And as soon as you start embracing an idea like that, one of the first things you need to look at is how does that affect the complexity and how does that affect the risk. And there were some additional very interesting risks that showed up. What are the requirements around having a battery worn on the head? As soon as you start playing with the trade-offs of how do I solve these problems or opportunities that I'm trying to address for a client, you really need to look at each of those alternative paths and map it out to say, "Where does that take me? Does that still fit within the budget and the timeline and some of those traditional project management planning factors that we try to incorporate?"


0:28:37 KL: When you're actually going through the process, then, how do you manage the risk? Normally, we look at the risk of being able to produce the project on time, on budget, but are you also adapting for the risk that's inherent in the product that you're then having to build around?

0:28:53 RB: We've really put together a risk management table that's very similar to one that you would see in a typical medical project, but we do them on all projects. We're going to be looking at project risks and what I'll call user risks. We would actually start mapping those right from the beginning to say, "Okay, here are the different areas that we're looking at that could introduce product reliability, product cost and user safety considerations." And as in a typical risk management process, you're going to map those by how severe they are and how likely they are to occur. And to start, it's kind of informal, just using things like low, medium, high, not trying to rank them with numbers right off, but trying to look at which of these things are factors that we need to weigh right away. You may rule some out right away, because just looking at them you may say, "This is going to push me outside of my budget or my timeline."

0:30:00 RB: "Hey. Great idea. I'm gonna keep track of this for a future generation of the product." But for today, you'll end up with a subset of those ideas that, "Hey, these all kind of fit. Now let's look at those." And as a general rule, when we're planning a project, things that are a higher risk, we will try to move those into the front end of the preliminary project design.
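Barry's informal risk-ranking approach, rating each item low, medium, or high for severity and likelihood and then pulling the highest-ranked items into the front end of the preliminary design, can be sketched in a few lines. The risks, ratings, and scoring rule below are invented for illustration; they are not taken from an actual bb7 project:

```python
# A minimal sketch of an informal risk table: each risk is rated
# low/medium/high for severity and likelihood, and the highest-rated
# items are tackled first. All names here are illustrative.

LEVELS = {"low": 1, "medium": 2, "high": 3}

risks = [
    {"risk": "battery worn on the head", "severity": "high", "likelihood": "medium"},
    {"risk": "bezel heat dissipation", "severity": "medium", "likelihood": "high"},
    {"risk": "enclosure cost overrun", "severity": "medium", "likelihood": "low"},
]

def rank(r):
    # Simple combined score; a real process would weigh these per project.
    return LEVELS[r["severity"]] * LEVELS[r["likelihood"]]

# Address the riskiest items first, in the preliminary design phase.
for r in sorted(risks, key=rank, reverse=True):
    print(f'{r["risk"]}: severity={r["severity"]}, likelihood={r["likelihood"]}')
```

As Barry notes, the point of keeping the scale informal (words, not numbers) is to surface the items that need early attention, not to produce false precision.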

0:30:25 KL: I wanted to ask you about that, because in a standard project management world, as you complete tasks, your risk tends to drop, because you have more knowns now, right? The unknowns are coming off the table. You may have problems but you know what they are. However, I remember my folks at NASA describing cases where the risk in a sense increases, because now it becomes much more about the product and the product's uncertainty, when you are doing R&D, when you're doing new things. And so as they moved through the project, the potential for something being catastrophically wrong was in some ways actually higher. So you were talking about moving it forward. Is that around the product design or around the elements of the project execution?

0:31:15 RB: I would say that it is, you're looking for where the unknowns or where the largest risk factors to the product are to patient safety or user safety. I would draw an analogy to it that it's similar to when you're forming a project team, you always go through the forming, storming, norming and performing. So, it's kind of like creating the storming. As the project managers, we all know how to do that. We deliberately drive a certain level of conflict to bring out the areas where our team needs to learn to work together, and in a product, you're doing something very similar. You're looking at that list of risks and you're saying, "Which of these things do I actually not know how to solve, or if I can't solve this, we can't even go down this path." Because you want to find those things out. Fail fast, is the phrase that's frequently used in association with this.

0:32:23 RB: It's akin to rapid prototyping. It has a flavor of Agile development to it, where I'm basically saying, "Hey, I've got a head lamp. I want to make it as small as possible but I don't know if I'm going to be able to manage the thermal elements of dissipating the heat from this LED with the bezel, if I get the bezel this small or if I want to use this really lightweight material for the bezel." And so you'll actually prototype just that part of it so that I can actually simulate that part of the problem and work through, "Okay, yes. I can use this really lightweight material. It does dissipate the heat successfully." And now I've got a part of my design locked in place and I can focus more on some of those more predictable elements, like the electronics design for actually driving that LED headlamp.


0:33:22 KL: When you're in the R&D side, you may have to invent your own tools or new things. It's like you open up new research questions, new science questions. I'm imagining part of your risk is almost scope increase, because you can't get to the product. It's Zeno's paradox: you keep almost getting there and then uncovering something else you have to resolve before you can actually get to the product as you thought you were told to build it.

0:33:47 RB: Right. And at least the thing that I've found that, my management teams become most comfortable with, if you're saying that you've got these risky areas and you can get past those in an early phase of the project, then the rest of your schedule becomes more predictable, both for timeline and for the cost to actually complete. And so you get into the realm where the management team or whoever is funding your project is more comfortable with the predictability of the outcome.

0:34:28 KL: You've got a project that looks similar to a normal project, but you have all of this where you're moving your unknowns forward. You're uncovering new things even as you begin to try and resolve them. And that's part of the process. How does this change your communications with your own team and with external parties? 

0:34:45 RB: Certainly, the team needs to understand; I mean, they're part of helping to uncover the areas where things... It's not something that we've done before, it's not completely predictable. So they're really part of identifying what those tasks are, as well as then tackling those tasks in a creative, problem-solving approach. So they get to kind of be adventurers, and we encourage them that this is a good thing. We need to find these things and find an appropriate approach towards them. As for communications with the teams themselves, nobody likes to see a great product that they're working on not move forward. And yet, as a project manager in this kind of area of the project, you need to encourage them that that's exactly the right thing to do. Because if this can't move forward the way it's currently defined, you need to figure that out right away.

0:35:48 KL: Is the problem clearing the barrier to their understanding or is it motivation or is it fear? 

0:35:53 RB: I think it's more fear. None of us likes to see our baby [chuckle] be taken away from us. One of the most rewarding things is to see a product that you've worked on actually on the shelves, in the store. Or see it online that you can actually purchase this. And so that's an exciting end goal that we all are already envisioning at the time that we do a concept and do that first project kick-off meeting. And so there is a little bit of encouraging the responsibility that you don't want to get too far down this path and then find out that this can't work, or it's going to be injured. [chuckle] It's not going to be able to meet its full specifications.

0:36:37 KL: Oh, I like that.

0:36:38 RB: You want to figure that out ahead of time. And then you can actually be delivering this healthy product to your end users in the marketplace that meets their expectations and they're excited about. So to the team, there's some coaching and encouraging that this is their responsibility as product developers, and as being good stewards of the development funds that we've been given responsibility for.

0:37:07 KL: Oh, I like that. That encouraging their responsibility with the idea, and you put this in your kick-off meeting, what it will feel like to see this healthy product actually being sold in the market.

0:37:18 RB: Absolutely.

0:37:19 KL: Yeah. That's exciting.

0:37:20 RB: If it's a consumer product, around here we actually buy one for the people who worked on it, and they get it in their office, as long as it's not like a $50,000 piece of equipment.


0:37:35 KL: My own MRI. Yay.

0:37:37 RB: Right. If they ever work on a Lamborghini, we're not giving them a Lamborghini. But you know if it's...



0:37:47 KL: But then you also mentioned how you manage stakeholders in the same way.

0:37:51 KL: Yeah. More than communicating with them. It's actually full-blown management of them, right? And their expectations.

0:37:55 RB: It really is. The communication is really managing expectations. The best thing that we, as a development team, can be doing for our stakeholders is making sure that if there is something flawed about the product concept, that we identify that quickly before we spend much of the development funds. It's a race to find out how quickly can we identify if this is actually going to be a very successful, fully-featured product, or how quickly can we identify if there are some flaws in that vision that need to be modified before we move forward.

0:38:39 KL: That's different than a lot of the project management that I see here as I work with the federal government, or that I hear others in my chapter talking about. The idea there is that you get their understanding of the scope pretty nailed down. And even in an Agile development, you're verifying, if you will, the scope as you move through it. You're really talking, again, about that R&D side, that research part, that development part, where it's possible we're asking for something that can't be. Or can't be easily done this way. And you're trying to figure that out quickly.

0:39:08 RB: Right. And we do do a lot of product development where we're working, I would not say it's the bleeding edge of technology, but it is oftentimes the leading edge. You're trying to take a wireless system and force it into a very small consumer product or into a medical environment. And you have to be able to accomplish a particular communication range, a certain amount of data and a certain level of data security. And all of those things are things that you want to prove out very quickly. They may lead you to say that the product enclosure actually needs to be a little bit larger. Or that, [chuckle] "Sorry, you can't have metal right here on the enclosure. We're going to have to go with an all-plastic enclosure, even though there may be some durability trade-offs with regard to that." Those are the sort of design trade-offs and discussion items that would have shown up in the risk register, and that you'd want to get in front of the decision makers, right up front.


0:40:21 KL: How do you handle quality? 'Cause it sounds like quality potentially has variables in it as well now.

0:40:26 RB: Obviously, we stay very quality-focused. We want something to be highly manufacturable. We look at manufacturability right up front when we start a design. And we have different aspects of product reliability defined, such as product life, as one of the design parameters that you're trying to meet with the design. But maybe the testing basically shows that something is not going to be durable for nine years of life; maybe the testing is actually showing that seven is a better number. You want to have done enough product testing to actually have that known within the design team and the stakeholder community before you've ever communicated any sort of expectations to end users. And there's a decision point there, at the point where you say, "This is going to last seven years, not nine years, based on our reliability testing. What do we want to do? Do we want to go back and change the enclosure material or change the battery? Or do we want to proceed with the product with a seven-year product life?"


0:41:50 KL: How much value engineering goes into these to get to defining the product? You just talked about, you find out seven years instead of nine. You want to design it for its purpose and for its lifecycle, its expected lifespan. Do you guys use that technique at all? Or is that used before you even get your hands around the project? 

0:42:08 RB: To a degree. I would say we're always balancing product cost, product development dollars, product development timeline and what the product requirements are in kind of a trade-off balance as we work through a design. And so we're going to be running a costed bill of materials continuously through each of the different phases of the development cycle, and looking at it to see if we're trending in the right direction. We're probably keeping a continuous eye on where the 80% of the cost in this product is coming from, so that if we're starting to trend in the wrong direction, we can start to pull it back. At the same time, we're doing our preliminary testing of our performance against the required feature set. It's kind of a continuous dance, and you're going to make various decisions based on how your final stakeholder for a product actually prioritized those factors. I guess it comes back to one of the first questions you need to ask right up front when you're doing your original system and project planning. If you have to prioritize these things for me, put them in order of priority: product cost, project costs, time to market, product reliability, or feature set.
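The continuously costed bill of materials Barry describes, reviewed each phase against a target and against the handful of parts that typically carry most of the cost, might look like this in miniature. All part names, costs, phase labels, and the cost target below are made-up assumptions for illustration:

```python
# A hedged sketch of tracking a costed BOM across development phases,
# flagging when the unit-cost trend moves away from a target.
# Every number and name here is invented, not from a real project.

target_unit_cost = 40.00
bom_by_phase = {
    "concept":     [("LED module", 9.50), ("battery", 12.00), ("enclosure", 14.00), ("misc", 8.00)],
    "preliminary": [("LED module", 9.00), ("battery", 12.50), ("enclosure", 13.00), ("misc", 7.50)],
    "detailed":    [("LED module", 8.75), ("battery", 11.00), ("enclosure", 12.50), ("misc", 7.00)],
}

for phase, bom in bom_by_phase.items():
    total = sum(cost for _, cost in bom)
    # Pareto-style check: which items carry a disproportionate share of cost?
    drivers = [name for name, cost in sorted(bom, key=lambda x: -x[1]) if cost / total > 0.25]
    status = "on track" if total <= target_unit_cost else "over target"
    print(f"{phase}: ${total:.2f} ({status}); cost drivers: {', '.join(drivers)}")
```

The value of running this every phase, as Barry suggests, is that a drift in the wrong direction shows up while there is still time to pull it back.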

0:43:38 KL: I like that you move past pure scope, schedule and cost to the context of the value of the project. With time to market, you're incorporating more than just the standard three, in a certain sense...

0:43:48 RB: I think that's true, because I think we always have to stay sensitive to why are we working on this product. Somebody's business is dependent on this product being successful. And my stakeholders have probably staked a good portion of their career path planning on this product being successful.

0:44:08 KL: That's one of the things we're hearing out of some of these podcasts, which is project managers need to think of themselves as investment managers.


0:44:19 KL: How much of this is systems engineering in your mind? There's modeling? There's simulation? There's integration of the different pieces that need to change? 

0:44:26 RB: I would say systems engineering is a key way of thinking that has to stay in the forefront of your mind throughout. It doesn't ever go away.

0:44:37 KL: And that's because of the research and development aspect of this? Or the integration with the non-research and development aspect? Or are you just suggesting that for all projects? What makes you want to highlight that? 

0:44:47 RB: I would highlight that because any time I have ever seen somebody try to run right to the, shall we say, drawing board without taking a step back and thinking about, "Where are my knowns? Where are my unknowns? What resources or building blocks are available to me? What constraints are around me?" Without understanding the business case and the drivers and the overall view of what you're trying to accomplish, you will end up with a sub-standard solution.


0:45:27 KL: For a project manager in your type of space, with this blend of unknowns and a lot of variability, but also with a very practical and pragmatic output, what is it that you see project managers need to know the most?

0:45:45 RB: I do believe that looking at the overall architecture, the overall system, the overall problem that you're trying to solve is a necessary starting point. Marrying that with a good risk management strategy. Now as you start to dig down into the details, you're digging down with the mindset that I am trying to minimize risk. The other thing that I would say has increased the success in these types of projects is to make sure that you delegate authority. You don't need to do this all yourself. Most of my job in project management is really removing obstacles and empowering people and basically, being there to still try to maintain that overall management of the project, and having the people with the appropriate skill sets digging down into the specific areas or specific problems is incredibly valuable to an overall success rate.


0:46:58 KL: What was one of the most recent products you worked on that was the most interesting where you applied all of this? 

0:47:03 RB: Well, one that we just finished up for a medical customer is a home healthcare device for treating patients with COPD. And this was a project that actually took us about two years, but it involved every single one of the capabilities within our organization, from market research and industrial design, through electrical, software and mechanical engineering, fluid dynamics, finite element analysis, right into our prototype shop for building things up. And the team was heavily involved in the verification and validation of the product as well. But one of the big challenges of this, that pushed it into the realm of what we're describing, is that we were going from a unit that was normally used in a hospital environment, where the person who was setting it up and using it was a trained clinician. And now we were bringing it into a home healthcare environment, where it was more than likely an elderly person who needed to understand this whole user interface, on a fairly complex device.


0:48:24 KL: Okay, so remove your risks as much and as quickly as possible. If necessary, you may need to abandon a certain approach if it means not delivering a healthy product. Foster good stewardship on your team. Stewardship for the funds that they've been given to complete the project, as well as stewardship of the vision of the stakeholders, and don't lose sight of the bigger picture.


0:48:54 KL: Nathaniel Fischer is a mechanical engineer. He's worked in manufacturing, documentation and design work, and is currently in product development at bb7. Randy Iliff told me that Nathaniel had a lot of experience with projects that are primarily understood, that inherit their characteristics, with relatively little, say 10%, new content. So I gave him a call to gain some insight into the finer points of this space. We talked about one specific product that he had recently worked on, a diesel-powered radiant heater for outdoor use, like for farms or construction.

0:49:26 NF: Essentially, the project was cost reduction.

0:49:32 KL: In what sense? 

0:49:33 NF: I guess the approach that I took was, if we can ease manufacturability and assembly time, as well as eliminate redundant components, that should net us cost savings.

0:49:50 KL: What was your actual guidance? 

0:49:52 NF: Part of the project was going to be to design a new one, but they also wanted to streamline their assembly process as well, by reducing assembly time, weld time, the number of processes and the number of steps.


0:50:14 NF: The initial approach that I took was to highlight the sticking points, like the difficult-to-assemble pieces, and see if there was a way to streamline them.

0:50:27 KL: Streamline them in terms of production? Or their use once they were built and how they consumed energy or how efficient they were in production? 

0:50:35 NF: All of those. The main component that I can think of was a multiple piece weldment.

0:50:42 KL: Tell me what that means.

0:50:44 NF: It's an assembly of any weldable material. In this case, it was steel or stainless steel. I think it was 12 or 13 parts that were welded together. And there was quite a bit of variability in the tolerance and on the parts. They had a lot of problems with final assembly. They ended up having to do a lot of rework.

0:51:07 KL: Because their tolerances were wrong and they'd go to put the product together and it simply didn't fit? 

0:51:12 NF: Right, or some of the parts wouldn't fit into the other parts once they were welded. So the approach that we had there was to, rather than have these formed and welded sheet metal components, we went with a different manufacturing method. We ended up using a spun part.

0:51:32 KL: What's that? 

0:51:33 NF: I guess in a way it's sort of like pottery. You spin it on a wheel and then you form it. And I believe the final part count on that was about three.

0:51:44 KL: So you reduced the number of parts. Essentially, you reduced the variability in the product design, right? For the manufacturer.

0:51:50 NF: Right, by eliminating piece parts. And also assembly time was greatly reduced. There was a lot of time that they were spending just getting the parts to fit together.


0:52:05 NF: Overall, there was a reduction in cost of about 25%. The assembly time per unit was also greatly reduced: I think they estimated they were putting in a couple of hours from start to finish, and I think we brought that down to maybe half an hour, just because we eliminated so many of the headaches associated with the previous version. And they were really happy with the final design.


0:52:44 KL: I assume they gave you a schedule and a budget? 

0:52:46 NF: They had a pretty flexible schedule. Well, I guess the first couple of weeks I spent understanding the current machine and addressing some of the current concerns with it. And the rest of the time we spent doing research on improvements and implementing them into the new design. I did end up giving them a presentation of what my vision for the new product would be. I gave them several options based on what I'd interpreted from them. And we ended up choosing one and going forward with it.


0:53:31 KL: So as you approach a project like this, what would be your lessons learned from this about what to approach and how? 

0:53:37 NF: One of the hardest things is to stay focused on what you're there for. It's easy to get distracted by smaller things that maybe won't end up with as much of a pay-off in the end.

0:53:51 KL: So how can you determine that when you're walking into the project? 

0:53:54 NF: I'd say it's probably instinct. You could probably say, "How much is this going to save me in the end? And is it worth going down this path? Or should I focus my energy elsewhere?"

0:54:10 KL: That's really the question for the variability issue here. It seems that once you get started in that path you seem to know what to do. There's a process for this. 

NF: Right

KL: The question is, which path? And how much time do you spend in that path? How did you validate or how do you validate if you're in the right path? 

0:54:27 NF: In this case it had to do with whether or not we were going to actually save time or money on the end result. In other cases, it's maybe not as clear.


0:54:46 KL: So now we have at least a tenuous grasp of how these three categories of projects play out, right? Of course, everything seems knowable in the abstract, but what's it really like to embark on one of these projects from day one? Can you really know in advance which category you're walking into and what's the impact of that? Because that understanding determines the approach. Let's hear what Randy has to say about that.

0:55:08 KL: If we're looking at things along some sort of continuum of 10% new, versus 50% new, versus 90% new, and we're talking about the different skill set ultimately that is really needed there. Where the danger comes in is when it's 50-50. Because you don't know which end you're on to say, "Well, at least, it's the kind of thing I understand. I'm the smart scientific... Or I'm the good PM kind of guy." It's the, "I didn't recognize how much standard or how much new was in this."

0:55:34 RI: Yeah. That's a really excellent point, Kendall. Any time you're close to one or the other of the two extremes, you can use the complete body of methods from that extreme, with only a small impact to the other set. There's a particular type of error and symptom that comes up if I mismatch these methods. If I take a creative method and I use it in an environment where done is already known, where it's a fixed, a constant, if I bring a dynamic solution into a fixed environment, I will eventually re-invent the wheel and I'll ship something out on round tires. It's just inefficient to re-invent the wheel. If I take the other error, though, if I take a hard core, "You will always do it this way" prescriptive approach, and I bring that into a creative environment, I will fail every time the standard answer doesn't accidentally match the circumstances I'm facing. So this breaks down much faster when you move from the production scale towards the creative end than when you move from the creative end back towards the production end.


0:56:33 KL: So when it's unknown, be creative and actually use your process; and as the unknown becomes more known, go ahead and slip into your mechanical approach, your more structured approach.

0:56:42 RI: Yup.

0:56:48 KL: So how do we learn this? As PMs, where can this be learned? 

0:56:51 RI: Most PMs balance what they're doing, even on mega projects, against what would make sense in just individual interactions or social structures. I think the same thing takes place here. Don't assume PMBOK is complete and structured and satisfies all these needs. Use it as a foundation and then say, "Okay, well, what happens when more plates are spinning than that? Well, I've got to get them to slow down, then I can catch them and I can put them back in the dishwasher. But I can't get them all to spin at the same time, and I can't catch them all at the same time, so what order will I manage this mess in?"

0:57:22 KL: It's helpful to have a framework that describes, "You will face this, it will be different, and you can act differently."

0:57:28 RI: When people have been exposed to this idea of what part is new and what part is fixed, where I can re-use existing methods, it's an incredible luxury, because now they have a way of just grabbing two markers and the work breakdown structure or the schedule they're given, and highlighting "install fasteners." Well, anybody who's been trained ought to do that; I've got a time and an expectation for it. "Figure out where the fasteners get installed." Well, somebody's got to think about that. So you have two classes of work, and it's almost the same thing that a PM is used to when they're dealing with critical path or things that have slack time. Every PM has learned that the critical path stuff is different, the slack time is their friend, and they have different rules, different structures of management, and they switch things back and forth as the reality of the project changes. If you take that same skill and you now say, "This is a constant, it's fixed, it's anchored to something we've done in the past, there's a reference for it," I can go grab all of my PM and production logic and just apply that, and it's exactly the right thing to do for this part of the project. Over here, though, this is different. These are the eggs. I've got to hatch those before I know what's in them. Over here I could start the souffle right away.
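Randy's "two markers" exercise, splitting a work breakdown structure into fixed work (reuse standard methods, schedule it normally) and new work (hatch those eggs first), can be sketched very simply. The task names below are hypothetical, invented only to illustrate the split:

```python
# Sketch of classifying WBS tasks into two classes of work:
# "fixed" tasks have a known method and reliable estimates;
# "new" tasks need discovery before they can be scheduled.
# All task names are made up for illustration.

tasks = [
    ("install fasteners", "fixed"),
    ("route wiring harness", "fixed"),
    ("figure out where the fasteners go", "new"),
    ("prove heat dissipation in the small bezel", "new"),
]

fixed = [name for name, kind in tasks if kind == "fixed"]
new = [name for name, kind in tasks if kind == "new"]

# Hatch the eggs first: resolve the "new" work early, then apply
# standard PM and production logic to the "fixed" work.
print("Resolve first:", new)
print("Schedule with standard methods:", fixed)
```

The payoff Randy describes is that each class gets the management style it needs, just as critical-path and slack-time tasks get different rules.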

0:58:33 KL: It's heartening to know that even in projects with huge looming unknowns there's a method you can use, a kind of map that will lead you to your goal. First of all, just dig in. Don't wait for the entire picture to come into focus. Start moving forward on the pieces that are known. Also, when in doubt, err on the side of creativity. But yeah, don't reinvent the wheel. Use common sense, it's not rocket science…except when it is. But seriously, project management is a logical and human system. Your instincts may be more reliable than you think. 

Special thanks to my guests, Randy Iliff, Ruth Barry and Nathaniel Fischer. And of course to Randy, for connecting me to the other two.

0:59:13 S5: Our theme music was composed by Molly Flannery, used with permission. Additional original music by Gary Fieldman, Rich Greenblatt and Lionel Lyles. Post production performed at M Powered Strategies and technical and web support provided by Potomac Management Resources.

0:59:28 KL: PMPs who have listened through this complete podcast may submit a PDU claim, one PDU, in the Talent Triangle technical project management with the Project Management Institute's CCR system. Use provider code C046, the Washington, DC Chapter and the title PMPOV0038, Systems Engineering. Visit our Facebook page, PM Point of View, to comment and to listen to more episodes. There you will also find links to the transcripts of all of our productions. You can also leave a comment at And of course you may contact me directly on LinkedIn. I'm your host, Kendall Lott, and until next time, keep it in scope, well, discover scope and get it done.

1:00:10 S5: This podcast is a Final Milestone production, distributed by PMIWDC.

1:00:15 S?: Final Milestone.

About the 'Project Management Point of View' Podcast Series

© PMIWDC and Kendall Lott

This podcast series is a collection of brief and informative conversations between MPS President, Kendall Lott, and a wide variety of practitioners and executives. His guests discuss their unique perspectives on project management, its uses, its challenges, its changes, and its future.