I remember reading that back in the 1980s and it definitely formed my attitudes towards project and team management, but an even bigger influence was Gerald M. Weinberg's "Psychology of Computer Programming" which is another classic from another ex-IBMer of the same era. @Computerphile should review Weinberg's book too.
One of the interesting points about the book is where it discusses improvements in programmer productivity over time. There was an order of magnitude improvement over the decade 1985-1995 (up to the point he did the Anniversary Edition), and he discussed how another order of magnitude might be managed over the subsequent decade. He saw that it was essential to move to ever higher-level languages, which left more of the details to lower layers -- he called this “metaprogramming”. His example of a “metaprogramming” language was AppleScript -- a choice which has not stood the test of time, and one area where the book shows its age. What was more effective at the time was Perl, possibly also Tcl, which have since been joined by others including Ruby, JavaScript/Node, PHP and Python. These are the languages that, in my view, have really made a difference to programming since then.
@@autohmae I used AppleScript for about a decade from its introduction in 1993. The only “event system” it had during that time was AppleEvents, and JavaScript has never had (or needed) anything like that. AppleScript was slow and made simple things awkward. For example, taking a substring of a string would raise an error if the substring was empty. Any other language would return a zero-length substring, but no, AppleScript required you to check for that situation as a special case.
There were DEC-10 and DEC-20 computers with 36-bit words and 18-bit addresses in use at both academic institutions and commercial (banking) companies into the late 1980s and early 1990s. Many of the early ARPANET sites were those 36-bit computers made by Digital Equipment Corporation. (Software usually stored five 7-bit ASCII characters in each word, although sometimes we used six 6-bit characters.)
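To make that packing concrete, here is a minimal Python sketch; the `pack` helper and the ASCII-minus-32 six-bit encoding are illustrative assumptions, not the actual DEC software.

```python
# Fitting characters into a 36-bit word:
# five 7-bit ASCII codes use 35 of the 36 bits,
# six 6-bit codes fill the word exactly.
def pack(codes, bits_per_char):
    """Pack character codes into one integer, most significant character first."""
    word = 0
    for code in codes:
        assert 0 <= code < (1 << bits_per_char), "code does not fit in the field"
        word = (word << bits_per_char) | code
    return word

ascii_word = pack([ord(c) for c in "HELLO"], 7)          # 5 x 7 = 35 bits
sixbit_word = pack([ord(c) - 32 for c in "HELLO "], 6)   # 6 x 6 = 36 bits
print(f"{ascii_word:035b}")
print(f"{sixbit_word:036b}")
```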
Fred Brooks also started the computer science department at the University of North Carolina, where I'm a PhD student. He's an inspiring researcher and a devoted man of God, and one of the most interesting people I've had the pleasure of meeting.
When I first started at Georgia Tech in September of 1988, I had to read this book for one of my early Comp Sci classes. I can't believe it was that long ago!
@@imveryangryitsnotbutter I'm trying to maintain some vintage computers, using XT-IDE and CompactFlash to replace hard drives, where I can use far less than a thousandth of each CF card because of built-in limits on how much hard drive space can be addressed.
I knew Fred and Nancy Brooks at the Boardman Road Research Lab in Poughkeepsie in 1956. He went on to lead the Computer Science Department at University of North Carolina. I still have his email address and was in touch with him just a few years ago. The book was based upon his experience in developing the first 360 Operating System and was the authority for project management for decades. It included the idea that some tasks could not be subdivided; the analogy was that one could not produce a baby in a month using nine women. It also included the idea that adding people to a late project would make it later.
What wonderful insight - educational institutions should be covering this (and indeed the rest of the videos on Computerphile/Numberphile). Thanks for putting it together!
Some bot tried stealing your post and changing it up a bit to steal your thunder to spam for their channel. Been happening more and more on all kinds of popular videos.
6:26 Not just an odd number, but a prime number: 7. One that cannot be evenly divided up into anything else! Before byte addressability became _de rigueur_ , computers had word lengths like 24 bits, 36 bits, 60 bits -- all numbers with lots of integer divisors including powers of 2 and 3, even 5, so they could be divided up into equal-sized portions in many different ways. But with byte addressability, the basic unit is 8 bits, and the natural machine word length (so far) is 2, 4 or 8 times this. So the only factors you have are powers of 2, which limits the ways you can divide them up.
Isn't the main idea behind byte addressing that you multiply instead of dividing? I.e. instead of the word being 60 bits and you can define arbitrary "bytes" being a divisor of 60, the byte is 8 bits and you can define "words" as any multiple of 8.
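For what it's worth, the divisor point from the comment above is easy to see with a throwaway Python snippet (purely illustrative):

```python
# Divisors of pre-byte word lengths versus power-of-two word lengths.
def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

for bits in (24, 36, 60, 32, 64):
    print(f"{bits:2d}: {divisors(bits)}")

# 36 bits divides evenly into 6-bit or 9-bit "characters", among others;
# 32 and 64 only split into power-of-two pieces. With byte addressing the
# reasoning runs the other way, as the reply above says: a word is some
# multiple of the 8-bit byte (16, 32, 64, ...).
```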
Actually in the 360 (much like in later microprocessor designs like the 68000 series) - the Instruction Set Architecture defines 32 bit words, but the actual implementations varied depending on the power of the machines in the series. For example the expensive 360/195 had 32 bit paths to memory etc., but low end machines used 16 and even 8 bit paths to save money at the cost of lower speed.
Thanks! This is very interesting. One of my correspondents (Clem Cole of Intel) tells me that Fred Brooks and Gene Amdahl were permanently at war about this. The uneasy compromise was to do 32-bit calculations inside the CPU but to only store 24 bits in the memory of the cheaper machines. Is that roughly correct?
@@profdaveb6384 Not quite correct. The ISA definition requires that the effect of code is the same on all machines - there was a powerful committee to guide the development of the ISA. But the different levels of machines were handed out for design to different labs and implemented in different ways to meet the required cost/performance targets. The high end machines from the Poughkeepsie lab were hard wired with 32 bit data paths and dedicated register h/w. Low end machines were some of the first to employ microcode - especially pushed by (Sir) John Fairclough from the Hursley UK lab for the lowest end model 30. They used 8 bit data paths internally and effectively executed serially (4 cycles per 32 bit word read/write). They also stored registers directly in core store to save cost - so even register operations effectively became memory accesses. The history of the IBM 360 by Pugh et al. makes great reading even today!
I think the 24 bit comment comes from the 24 bit addressing space in the 360, later increased to 31 bits in the 370 range. Incidentally the use of microcode in low end machines led to the invention of emulation - using microcode to emulate a different machine ISA entirely. This enabled IBM to aid migration of older 7090 and other machines to the new 360 range without requiring a complete immediate code rewrite.
@@profdaveb6384 The standard subroutine calling scheme passed a pointer to a variable length list of addresses of the parameters, the last parameter being marked by the top bit being set. Oops!
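A sketch of the convention being described, with Python standing in for assembler (the helper names are made up); the catch is that the trick only works while the top bits of an address word are otherwise unused.

```python
# Variable-length parameter list: 32-bit address words, with the most
# significant bit of the last entry set to mark the end of the list.
END_MARK = 1 << 31

def build_plist(addresses):
    words = list(addresses)
    words[-1] |= END_MARK      # flag the final parameter
    return words

def walk_plist(words):
    for w in words:
        yield w & ~END_MARK    # recover the 24-bit address
        if w & END_MARK:
            break

plist = build_plist([0x001000, 0x002040, 0x00A000])
print([hex(a) for a in walk_plist(plist)])   # ['0x1000', '0x2040', '0xa000']
```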
There were a number of us in the Boardman Road Lab, including Fred and Nancy that were from the Deep South and that had regional accents. We used to gather in the garden at lunch time, and taking parts, read from Walt Kelly's Pogo comic strip. Pogo was the source of great wisdom including "We have met the enemy and he is us."
I felt an important idea driven home in the book which was just barely missed in the video is: adding workers to a late project makes it later. If you have a 4 person team and the project is 3 months late... if you add 2 more engineers, the project will now be 5 months late.. because the team has to stop working to bring the new engineers up to speed on everything, which takes a _lot of time_. Speaking from experience.. it's always going to take at least a month to onboard a new engineer if the work you're doing is genuinely building something new and different. A new engineer can only come up to speed much more quickly than that if what you're doing is work they've already done before and will immediately recognize.. in which case it's more like... busywork.. than engineering.
I think this is also why the book is called the "mythical man-month" and not the "mythical man-hour". Engineer time is measured in months, not hours. Give me just one hour to work on a new problem I've never seen before and all I'll be able to do in that time is start getting a high-level understanding of what the problem looks like... no forward progress will be achieved... I won't even make forward progress in the 2nd hour.. and I don't think that's because I'm stupid :)
My first encounter with “how many X are needed to do Y” - During my high school years, a part time job at a pharmacy included receiving deliveries and storing products on shelves in the basement. Around Christmastime one day a bunch of deliveries came in at the same time, so the boss sent one of the upstairs sales clerks down to help me. We worked pretty diligently; but, of course, it was just when we happened to sit down for a second that the boss came around the corner. Rather than chewing us out, he philosophically observed, “A boy is a boy, two boys is half a boy, and three boys is no boy at all”.
A classic. Team that book up with DeMarco and Lister's "Peopleware" and Jim McCarthy's "Dynamics of Software Development" and you have a powerful triptych of software management insight.
The more people you add, the harder it is to communicate with them and manage them, so there is a point where more men just make things worse. Your programmers need to be programming, not talking to the others. It's funny how much of this applies to parallel programming, where synchronization can easily overwhelm the data processing if you go too far parallel.
I'd love to see a video about the philosophy behind TPF, both when introduced and over time, thinking about airlines and, if memory serves, things like the NYC 911 system.
I'm not sure if this is an intentional design element or simply a result of necessity but I really dig the use of printer paper as the medium on which to present notes throughout the discussion :)
This seems to be more about project management methodology and best practice: how many resources to allocate to a task before adding more becomes counterproductive.
While I was teaching at the Naval Postgraduate School in Monterey I used to watch the Research Channel on cable TV. One day it featured an interview with Fred on college education and, specifically, on whether there was a future for it. In his inimitable style, Fred pointed out that we have three institutions for civilizing young men: military service, prison, and college. Which would one choose for his son?
Those of us who are programmers are probably aware of the related issue in multi-processing. Adding more processors to a task may or may not speed it up. Some tasks, for example applying the same operation to a collection of items, can be sped up by using one processor per item. But other tasks may be inherently sequential - the second step cannot be started until you have the results from the first step. More processors do not help.
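The usual way to put a number on that is Amdahl's law; a quick sketch, where the 90% parallel fraction is just an assumed example:

```python
# Amdahl's law: with a fraction p of the work parallelisable,
# the speedup on n processors is 1 / ((1 - p) + p / n).
def speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

p = 0.9  # assume 90% of the task parallelises perfectly (illustrative)
for n in (1, 2, 4, 8, 64, 1024):
    print(f"{n:4d} processors -> {speedup(p, n):5.2f}x")
# The limit is 1 / (1 - p) = 10x: the sequential 10% dominates,
# which is why more processors stop helping past a point.
```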
From elsewhere in the comments: "For those of you admiring my sweater it was bought for me by my daughter and is from the Nottingham-based Paul Smith men's fashion house. Not as famous a name as Ralph Lauren (say) but Nottingham is very proud of Sir Paul (as he now is)"
If you hire 1 programmer, they can do the job in 9 months. If you hire 100 programmers, they will argue about the architecture and 3 years later they still won't have finished.
I've been familiar with this book for decades. The lesson described in this video is an important one, but in my opinion, not the most interesting. Brooks also reports on the diversity of individual productivity of programmers. I believe they found that there was a factor of 20 or 30 between the least and most productive programmers. As astonishing as this might seem, even more important is that the range did not correlate with experience. Think about that. One programmer with a couple of years' experience writes a program in one day that takes someone with a decade behind him 20 days. On top of that, the former program is better in multiple ways: more clearly written, more easily updated, bug-free and faster. Think about that.
For those of you admiring my sweater it was bought for me by my daughter and is from the Nottingham-based Paul Smith men's fashion house. Not as famous a name as Ralph Lauren (say) but Nottingham is very proud of Sir Paul (as he now is)
It's a beautiful garment.
The man, the legend, himself
Lol. I officially have Sweater Envy :)
Perfect explanation of something I've had to communicate to management too many times in IT over the past 30 years. I was once offered an entire team to do something in one month, and I told them it would take a year, regardless. They gave the project to another senior engineer, and it didn't take a year; it took a year and a half.
I read "The Mythical Man-Month" in just one hour by paying an offshoring company to have each page read by a different person. #efficiency
I've been in industry nearly 20 years and time and again I've had to say: "9 women cannot have a baby in 1 month."
Great job, Prof!
Indeed. We refer to it as "the three women scenario". Same reasoning as yours :-)
Yes they can, if they are pipelined. There's 9-month startup, of course.
At some companies (just don't start listing them, we have no space for it), it's not a joke, but a thesis.
The first 90% of the job takes 90% of the time scheduled. The last 10% of the job takes the OTHER 90% of the time!
oh man I can hear the Gantt charts flying
The way I'm most familiar with putting it is "Adding manpower to a late software project makes it later."
The expected speedup is O(n), but the increase in communication complexity is O(n^2), hence the minimum on the complex project curve.
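A back-of-the-envelope model of that curve, with made-up numbers (W and c are assumptions, not figures from Brooks):

```python
# Elapsed time = raw work shared n ways + pairwise communication overhead.
W = 60.0   # person-months of raw work (made-up)
c = 0.05   # months of coordination cost per pair of people (made-up)

def elapsed_months(n):
    return W / n + c * n * (n - 1) / 2

for n in (1, 2, 5, 10, 20, 50):
    print(f"{n:2d} people -> {elapsed_months(n):6.1f} months")

best = min(range(1, 101), key=elapsed_months)
print("curve bottoms out at about", best, "people")
```

Past the minimum, each extra person adds more coordination cost than they remove in shared work, which is the mechanism behind "adding manpower to a late software project makes it later".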
"Non-partitionable task" is now in my explanatory vocabulary when interacting with unrealistic management.
Wait. There's other kinds of management?!
@@capturedflame Indeed. :)
'Intractable' is an elegant term I read in an article titled 'Beware intractability bias', which I would recommend.
I ran into the mythical man month problem at my previous job. Upper management decided to add a bunch of offshore developers to the team, thinking it would make us so much more efficient because we'd be staffed almost around the clock. The time zone difference of close to 12 hours made communication incredibly difficult. We couldn't effectively collaborate with each other because any meetings had to take place during some people's off hours. Getting email responses from the offshore people would almost always take until the next day. The people we hired were great workers, but we couldn't simply hand off tasks to them at the end of the day. An effective hand off would require a lengthy real-time conversation about the details.
No matter how many times I brought it up, our upper management never seemed to understand that software development is not the same as assembly line work. You can't simply hand everything off at the end of your shift.
Sounds like for that to work you'd need:
People at both sites to be genuinely empowered to make any changes necessary to the system to get their work done, without fear of blame - if anything does go wrong the team at the other site can undo the changes and then start a discussion
A decoupled software architecture where each site has their own module(s) that they primarily work with, so that the majority of knowledge about the internals of those modules are on site.
If you get really decoupled eventually you could give each site its own stream of work and its own product management capability.
You needed 12 hour shifts to match the 12 hour time difference. Direct shift handoff and 3 day weeks.
A friend of mine pleaded with his company a few years back to replace 4 off-shore people with 1 me. I would be there 9-5, I'd be more efficient, I'd even offer to do the weird hours if need be. The only thing they understood was that the cost was the same and 4>1, so they struggled on; no other metrics like performance or usability or GUI mattered. He left the company eventually because they had the same attitude to most things.
@@mytech6779 Realistically there would need to be overlap because these are complex engineering tasks, not a relay race.. teams have to discuss what is happening.. the next "shift" can't just come in and understand what is happening at a glance.. it will take less work time to spend an hour with them explaining the new state of things, how it works, and the design decisions that led to it than for them to spend time looking at it and trying to figure these things out for themselves.
There is a fundamental problem that communication and coordination across many brains incurs a time and energy cost... and the proportion of time and energy spent on coordination increases as the team size increases until at very large sizes a majority of your total time is being spent on coordination, where you have many people whose sole job is to facilitate the coordination (project managers), and even whole teams who write project management software to facilitate the coordination.
Back 15-20 years ago I worked at a national laboratory that tried to do a 9/80 schedule, every other Friday off. Ideally you stagger things so every department is fully staffed four days of the week and has a half-staff Friday. The problem is that nothing anyone was doing, even the seemingly trivial jobs, was truly partitionable. There was far too much collaboration going on; teams would all take the same week off because they couldn't do anything, then those teams would be interacting with other teams... a good idea in theory and didn't work in practice. And that's before you factor in the people who somehow manage to not be in on any Friday.
It was changed at some point. I never heard the official reason why but rumor had it that a tour was being given to some bigwigs, they walked into the HR department for something and it was completely empty.
Another day of Professor Brailsford, casually blessing the internet.
Alan Kay described computing as a pop culture, not good at learning from experience (mistakes?) of the past.
This is the sort of talk everyone should see :-)
Ah, that beginning section triggered some memories. :)
At our uni, we had a powerful computing machine able to handle many dozens of terminals at once, backed by a full megabyte of core memory which we could physically watch operating: 32 32k "pages" of 160×256 little magnetic cores, each page the size of a wide door, all neatly jointed to a whole wall of wiring cabinet leading to the processor unit, so you could physically inspect and repair them by just flipping through. It had ten bits per byte! Eight bits per actual data byte, and two more for parity, so that a stuck bit could be not only detected but fixed, transparently to the running program, because the error correction system ran separately from the computer, one on each memory page.
When we were shown this marvel of technology, the presenting professor demonstrated the incredible resilience of this type of memory architecture by taking out a ball pen, physically opening a memory page on the wall that was the memory, and running his pen over the cores to randomly flip some. We were awestruck. And then people started flooding in because their programs had crashed, some after days of runtime. We were slightly less awestruck afterwards. ;)
It turned out you needed to use a non-conducting piece of wood or plastic for the demonstration: the metal pen had hit not just cores but also shorted wire crossings, flipping other bits elsewhere (including their parities) and thus causing some real memory corruption. Unfortunately, the professor had killed not some single unlucky program but a crucial part of the operating system, which then took all the programs with it. Booting everything up again took several days. We were the last freshmen group ever to be shown our computer's memory resilience mechanism.
This principle has been known since prehistoric times.
A tribe of 10 cavemen can team up and kill a herd of 10 mastodon in 1 day, but 1 caveman cannot kill a herd of 10 mastodon in 10 days.
Also known as "The Mythical Mammoth".
Get out!
*slow clap*
A punny inverse proof! Showing the opposite case is just as valid.
Groan.
That’s a bit crude. You could have made a more palatable, less antiquated metaphor.
One project I worked on was exactly like the last hypothetical in this video. We (the developers) knew that the project was a MINIMUM of 2 years, but a new manager came in and said he wanted it done in 9 months (Strange coincidence). We told him in meetings that it was a guaranteed 2-year project (because we had done the exact same project several years earlier and it was a 2-year project). This wasn't our first rodeo. Well, lo and behold, the project was 15 months late (and 15 months over budget) because the managing director wouldn't listen to the developers who had the experience. He tried to throw money at the problem in bonuses and OT, but it was a 2-year project and no amount of money or more people was going to shorten it. We knew it, and he eventually was removed from the project when it was already 6 months late and executive management interviewed the developers. Luckily, the executive team trusted our answers in the RCA. However, almost all of us had left by then because of the unreasonable work environment. I stayed on part-time to KT as much as I could to the new guys. The product did ship, and it shipped nearly on the 2-year end date we as a development team had forecast.
_experienced_ project managers are badly undervalued by incompetent managers. We had a senior software engineer who expected raw undergraduates to be seriously productive three months into a large project, and who got angry when they hadn't performed to his expectations. Typically it took a year, and by then they'd be performing miracles, carrying out in a few hours tasks that would have taken them a week three months earlier.
It's not just team intercommunication. You are forgetting the managerial overhead - the push to have so many subordinates, middle managers, more middle managers... limits on staff count (unless more levels of managers are inserted), etc. "Oh, you can't possibly manage more than N people because we'd have to give you a pay raise... we now have to get another manager..." An entire push for more people that's got nothing to do with solving the programming problem. Managers managing managers. The curve does indeed start to climb again as overhead becomes THE issue.
It gets worse when you have micromanagers who want continuous updates, no technical knowledge to comprehend what you're saying, and no trust that the engineers know what they are doing. I've had a manager explicitly say, "Put all of the information you've previously covered in the presentation because I don't want to have to remember anything for the meeting." Those meetings took easily 6x longer than they should have (not exaggerating, actually timed it) with no forward progress. I even pointed out the cost of one discussion (think ~15 engineers for 15 minutes on something we knew couldn't be resolved there) and got shouted down. I swear, management is the bane of most projects.
Middle management bloat is basically what took down GM back in the 70s and 80s, and despite multiple major efforts at restructuring in the 80s and 90s they never quite figured out that core issue, just continuing to paper over the problem by selling off division after division until they had nothing left to divest and the core automotive business went bankrupt in 2009.
While the fish rots from the head when talking of responsibility (the board and executives that let it happen), the internal mechanism is middle managers trying to hide their incompetence behind false delegation and using it as a way to get out of work, often creating ever more new positions with fancy titles to be trendy and give the illusion of "innovating" or whatever the buzzword of the day is. (Also as a way to give friends promotions.)
Even on a small team. I spent the last 18m working an 80h week so they gave me 3 juniors to train up and help with the workload. Now I have only 60h of work but it's also an additional 20h of coaching and training and rework. In 6 months I should be fine when they spin up but I can totally understand what happens in larger teams being killed by being given a dozen new people to suddenly train up, all on the payroll
@@mytech6779 It's funny because a lot of US business schools teach that it wasn't management bloat but unionized factory laborers demanding unreasonable conditions, combined with engineers sabotaging attempts at implementing modern just-in-time management practices.
@@CalvinsWorldNews It took us 3 months to get a new dev started with the code base and easily 6 months to be productive. I feel your pain.
I had the privilege of taking "A Personal History of Computing" from Professor Brooks when I was a university student (probably 2013, but I don't remember for sure). It was basically an hour of him telling stories of his life every Friday for a semester. I wish I could remember more than I do, but I do remember that a lot of it was fascinating to hear!
Those sessions should have been recorded so the information could be preserved.
@@kevincozens6837 A lot of them are in the book and its revised edition. Fred's in his 90's, now. He founded the Computer Science department at UNC-CH and was still showing up at department events pre-COVID. I think the department records the special events and I know there are interviews with him on TH-cam. Just search for Fred Brooks.
If there were a Disney movie, this man would be the voice of a great old knight who tells his grandkids about his adventures when he sailed the seven Cs, how it took him one Swift Go with his sword (which was all Rust) to fight a giant Python. And he would tell how he collected a valuable Ruby and the Elixir of life from an island called Java. He didn't have a Basic life at all. He would tell the story of how he fell in love with Julia and that they used to play Dart and have some Smalltalk. So quickly he put a Ring on her finger and they got married. His old friend Pascal did the ceremony.
Professor Brailsford is my favourite Computerfile story teller 😀
If there were a custom option on the "report this comment" feature, I'd report this as exceptionally brilliant!
There's also Hofstadter's Law: It always takes longer than you expect, even when you take into account Hofstadter's Law.
You don't use plus-or-minus for time estimates, but times-or-divided-by, and never with a figure less than two.
@@Roxor128 How else can you keep your reputation as a miracle worker?
If you’ve read anything by Douglas Hofstadter -- particularly his classic _Gödel, Escher, Bach: An Eternal Golden Braid_ -- that law of his starts to make more sense.
The biggest point of confusion which still percolates all businesses today:
Humans are NOT resources, yet we're supposed to behave as if we are resources for the higher-ups
I've worked at two places so far where management stated that they want any developer to be able to join any team and work on any project at any time. Both places were terrible places to work.
@@OldBaldDad That's just a convenient way for the employer to be flexible in how they use their resources. If no one has been hired for anything more specific than "Software Engineer" you can move people around at will. I used to work at such a place. They burnt people out and still would not be profitable, making bonuses stay at zero for years.
I now work at a bigger company where everyone has a specific role to play and where we are expected to be experts in our area. I work way less and I am much happier.
I’m not sure what you mean, if humans are not resources at work then what are they?
@@SrssSteve Humans first, resources second?
If my employer sees me as 100% replaceable at any point in time, why should I bother?
@@Jonteponte71 All I can say is that I was very confused when I first came upon the term "human resources"; I had no idea what that even meant - you see, over here in Germany, that is usually called "Personalabteilung" - personnel department. And I still think calling it "human resources" shows one of the ugliest sides of capitalism.
To calculate the duration of a project, estimate how long it will take, double it, and move to the next highest unit. Thus a 2 week task will take four months.
Oh wow, I was just talking about this at work. To be fair, it's something that comes up often.
"You're a dial up connection I'm a gigabit lan. I'm a mythical man month you're a one minute man"
- Monzy
Thank you Sean, for filming and editing this video. I know the professor is what most people focus on, and he is genuinely amazing, but I think you deserve a thanks, too, for taking the camera and letting us be there with him. Stay awesome
I’ve worked on six month projects that took 18 months because once you have 4 or more people at least one of them is a manager and then you have meetings. Planning meetings, status meetings, meetings about meetings, meetings about planning meetings, and meetings about the status of planning meetings…
I’m not in the computer industry, and some of this goes right over my head, but I’ve watched the professor on this channel for years and could listen to him for hours. His students were so lucky to study under him 👍🏼
I wonder if most young engineering organizations are familiar with this. When I was starting 25 years ago this was one of the major engineering process books. The tech in the book itself was already pretty old but the "Mythical Man Month" was a standard concept.
Of course back then we actually used books. They had a lot of info, but the problem was we had to pay $40+ for everything. Sometimes they came with disks or CDs!
I've not heard this before on the timescale of months, but something similar, I think (e.g., "It's a 10 man-hour job"). Indeed, the Wikipedia page for "Man-month" is just a redirect to "Man-hour", where it's explained decently well, I think.
I was looking at some statistics a few years ago on contributions to the Linux kernel. At the time there were something like 1000 active contributors, and 10,000 lines of new/changed code flowing into the kernel each day.
Which worked out to 10 lines per contributor per day -- which is a figure straight out of Brooks.
I had a software engineering course in university that I took a year ago where a large portion of our readings came out of "The Mythical Man-Month". Now working in the industry, it's pretty obvious to me that the principles still hold very true and should continue to be seen as a standard concept in engineering. It's probably been one of my favorite books on software engineering.
True, inter-communication and coordination (which includes training new people and using them effectively) is one reason why men and months are not interchangeable. Inherently serial vs. massively parallel problems, too. But there's another one, not touched on in the video: the aptitude of men for a certain task. There are some tasks that can simply be done by only a handful of people. I cannot run 100 meters in under ten seconds and never will be able to. Not with any amount of help or training, ever.
The anecdote I most remember is that in the later stage of System 360 development they hit a wall, where trying to fix bugs introduced more than they'd got rid of.
Professor Brailsford, if you happen to see this, how about a look at "The Soul of a New Machine"?
The efficiency of a project is defined at its outset, and that is almost always true of its efficacy too. As the project manager's mantra goes: 'get the very best people, in the smallest quantity, as early as possible'.
"it's never as bad as the worst case scenario". LOL - it's often MUCH MUCH worse, where more people actively HINDERS project progress!
Exactly! I can't be the only one to have seen a positive slope on the curve🤣🤣
Indeed, that's actually a point made in the book, coined as Brooks Law: "Adding manpower to a late software project makes it later"
Especially if not everyone on the project is sufficiently competent.
@@nickatbasel Competent people often contribute to the problem as well, if only by pointing out mistakes in the design and re-iterating that they would have done it in a completely different way...
This exact problem is an issue in parallel programming as well. If you partition a task among too many threads, it starts taking *more* time than before, because the time spent on communication dominates the time spent on doing actual work.
That explains why a lot of processes I do only use four cores out of 8.
"Adding more manpower to a late project makes it more late"
Very cool, I had heard about Dr Brooks' contribution to the 8-bit byte before, but hadn't really put it in perspective what that meant for modern computing.
That's why I love this channel. Been using computers since I was a child, never once considered why the button was called shift.
IBM has a lot to answer for in regards to the hardware of the PC.
But without IBM, we would not necessarily be where we are now. They did research and design that would not have been possible without a company the size of IBM.
The answer is - we needed to get something out in the market that was affordable and easily standardizable.
The hardware design of the original IBM PC was almost a straight copy of the Apple II that the project lead owned privately. Also, it was the first successful attempt of IBM to break into that market. I don't think those two are unrelated.
@@KaiHenningsen Don't disagree. I just think they could have done better than to basically copy what everybody else was doing.
The reason the IBM PC became THE PC was all down to name cachet. This wasn't a home computer, this was a BUSINESS machine.
While the IBM PC was not a bad home computer, it wasn't a great business machine. At least in my opinion. It only got the foothold that it did because of the badge on the case, not the design of the machine.
@@jeromethiel4323 They tried to do it the IBM way. It was a big flop - nobody wanted to buy it, because it was much too expensive. So this attempt went with "only use off-the-shelf hardware".
@@jeromethiel4323 Oh, and I'll also point out that there was a time when business people went to a computer store and asked to buy a "VisiCalc machine". That was an Apple II.
This has absolutely nothing to do with the content of the video but that is an objectively wicked sweater.
Professor Brailsford videos are always the most enjoyable. I've just left a company where the directors don't understand the man month 😁 I'd like to see computerphile do a video on the Texas Instruments TMS9900 microprocessor.
I still have nostalgic memories of the 9900. Wrote a lot of assembly code for it in the 70's.
@@astrolad293 it's a crime that it never took off. It was the most capable chip of its era and almost designed for multitasking.
There are other essays in the book that no one ever mentions, because they're not in the title. However, they can be just as interesting. My favorite is the "Second System Effect."
The collision between “Plan To Throw One Away” and “Second-System Effect” was particularly amusing ...
Fred was also the architect for the IBM 360, an architecture that could scale from a small computer to a very large one. Prior to the 360 almost every computer product had a unique principles of operation. While most were a variation of the von Neumann architecture they tended to be aimed at specific applications, roughly divided between "scientific" and "commercial." The 360 architecture was designed to work for both. It was also designed to be extensible; with many extensions it is still in use today.
Fred had actually proposed an alternative to the 360, which lost out to it; he was still appointed to work on the 360. The 360 allowed IBM to sell a computer in all of its niches that would always be compatible across the whole range.
These videos from Computerphile are very insightful
I can't believe I misread it and this wasn't about the mythical Man-Moth.
Weird innit
I have a coworker who, instead of performing a 5 minute task, will spend 5 minutes explaining it to someone else to have them do it. Doubling the manpower used has tripled the manhours needed to complete the task.
Let me guess, everyone involved gets paid by the hour?
This can be beneficial IF the person who is now trained can do it or similar tasks on their own after that
If you think about it, that is what programming is about as well. You explain once, in program code, how to solve a problem. If the problem occurs often enough, there is a gain to be made. If not, you've wasted time.
“… spend time explaining a task to someone else and then have THEM do it.”
Ah, yes, page six in the Manager’s Handbook, delegating responsibility.
(AKA, passing the buck, when you do this and you’re not the manager.)
Is that coworker perhaps....a manager?
I had the 1995 version. I graduated in 1998 and read through it at my first job, where we were really just creating a software company from scratch. While it's been a long time, I think it holds up today.
So that's what the "many men" example was about! I've always heard it as a way to explain that bigger teams don't always mean less development time, with the counter-example "if you have a pregnant person, the pregnancy lasts 9 months, but it's not like two pregnant people can do it in four and a half months"
Thanks as always David. Much wisdom and truth here. At work I can delegate, but tricky tasks are less stressful to do myself (so much serial time and effort needed to meet, explain, track, chase, check, and correct their work)
Cannot count the times I've had to reference "Brooks law" to steer the ship back in the direction of reality ... once again thank you prof. Brailsford for your insight into these historic decisions made in computer engineering history
Always love hearing Professor Brailsford talk about computing history. I like the "How did the project get to be a year late? One day at a time." bit. I might have to use that response if I am involved in a project that winds up being late. :)
Just a couple years ago I got to listen to Fred Brooks speak about his involvement in creating the 8-bit byte among other things at UNC Chapel Hill!
Fred is a great speaker and has a wonderful voice.
I remember reading that back in the 1980s and it definitely formed my attitudes towards project and team management, but an even bigger influence was Gerald M. Weinberg's "Psychology of Computer Programming" which is another classic from another ex-IBMer of the same era.
@Computerphile should review Weinberg's book too.
One of the interesting points about the book is where it discusses improvements in programmer productivity over time. There was an order of magnitude improvement over the decade 1985-1995 (up to the point he did the Anniversary Edition), and he discussed how another order of magnitude might be managed over the subsequent decade. He saw that it was essential to move to ever higher-level languages, which left more of the details to lower layers -- he called this “metaprogramming”.
His example of a “metaprogramming” language was AppleScript -- a choice which has not stood the test of time, and one area where the book shows its age. What was more effective at the time was Perl, possibly also Tcl, which have since been joined by others including Ruby, JavaScript/Node, PHP and Python. These are the languages that, in my view, have really made a difference to programming since then.
Javascript took ideas from AppleScript, so in some ways, the ideas behind AppleScript did stand the test of time.
@@autohmae What ideas did JavaScript take from AppleScript, pray tell?
Replying to follow this very recent conversation
@@lawrencedoliveiro9104 the event system was inspired by AppleScript, which is how it's integrated in browsers/HTML documents.
@@autohmae I used AppleScript for about a decade from its introduction in 1993. The only “event system” it had during that time was AppleEvents, and JavaScript has never had (or needed) anything like that. AppleScript was slow and made simple things awkward. For example, taking a substring of a string would raise an error if the substring was empty. Any other language would return a zero-length substring, but no, AppleScript required you to check for that situation as a special case.
Articulate, charming and reminiscent of a true golden age.
There were DEC-10 and DEC-20 computers with 36-bit words and 18-bit addresses in use at both academic institutions and commercial (banking) companies into the late 1980s and early 1990s. Many of the early ARPANET sites were those 36-bit computers made by Digital Equipment Corporation. (Software usually stored five 7-bit ASCII characters in each word, although sometimes we used six 6-bit characters.)
Fred Brooks also started the computer science department at the University of North Carolina, where I'm a PhD student. He's an inspiring researcher and a devoted man of God, and one of the most interesting people I've had the pleasure of meeting.
When I first started at Georgia Tech in September of 1988, I had to read this book for one of my early Comp Sci classes. I can't believe it's been that long ago!
1 MB of memory cost $100,000.00 in 1960. Today, that same 1 MB in RAM will cost you less than $0.01.
Technology is amazing.
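A back-of-the-envelope check of that ratio (the 1960 figure is taken from the comment above; the present-day DRAM price of roughly $3 per gigabyte is my own assumption for illustration):

```python
# Rough arithmetic only; the 1960 figure comes from the comment above,
# the present-day DRAM price (~$3/GB) is an assumed value for illustration.
cost_per_mb_1960 = 100_000.00            # dollars per megabyte, 1960
cost_per_gb_today = 3.00                 # assumed dollars per gigabyte today
cost_per_mb_today = cost_per_gb_today / 1024

print(f"1 MB today: ~${cost_per_mb_today:.4f}")                       # well under a cent
print(f"Improvement: ~{cost_per_mb_1960 / cost_per_mb_today:,.0f}x")  # tens of millions
```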
Trying to find just 1 MB of RAM is a bit difficult nowadays...
@@mattsadventureswithart5764 Just take a 1 GB RAM stick and cut off 1/1000th of it.
@@imveryangryitsnotbutter I'm trying to maintain some vintage computers, using XT-IDE and CompactFlash to replace hard drives, where I have to use even much less than 1/1000th of the CF cards because of built-in limits on how much hard drive space can be utilised.
@@imveryangryitsnotbutter ...or a 1/1024th of it? Just to start a completely other discussion ;-)
@@larsscholz3762 You're thinking of gibibytes. Those are abbreviated GiB.
I knew Fred and Nancy Brooks at the Boardman Road Research Lab in Poughkeepsie in 1956. He went on to lead the Computer Science Department at University of North Carolina. I still have his email address and was in touch with him just a few years ago. The book was based upon his experience in developing the first 360 Operating System and was the authority for project management for decades. It included the idea that some tasks could not be subdivided; the analogy was that one could not produce a baby in a month using nine women. It also included the idea that adding people to a late project would make it later.
Fascinating. One of the first books I came across in software engineering. I remember its preface. I need to read it.
What wonderful insight - educational institutions should be covering this (and indeed the rest of the videos on Computerphile/Numberphile). Thanks for putting it together!
An IBM man-year is 730 programmers trying to finish a project by lunchtime.
I was on a well resourced project where the team leaders said that their job was to get the team down to the correct size.
They did and we delivered.
I see Professor Brailsford, I upvote
Some bot tried stealing your post and changing it up a bit to steal your thunder to spam for their channel. Been happening more and more on all kinds of popular videos.
RIP Fred Brooks.. Thank you for MMM
I saw the thumbnail title, saw the photo of Prof Dave, thought it should be the Legendary Man Month.
It's a classic book I read in my undergraduate years. I still have it and it's just as relevant today
After women's day, comes man month
Perfectly balanced, as all things should be
That's how long it takes to get a chore started and finished.
The inverse is true of getting past a disagreement.
6:26 Not just an odd number, but a prime number: 7. One that cannot be evenly divided up into anything else!
Before byte addressability became _de rigueur_ , computers had word lengths like 24 bits, 36 bits, 60 bits -- all numbers with lots of integer divisors including powers of 2 and 3, even 5, so they could be divided up into equal-sized portions in many different ways. But with byte addressability, the basic unit is 8 bits, and the natural machine word length (so far) is 2, 4 or 8 times this. So the only factors you have are powers of 2, which limits the ways you can divide them up.
Isn't the main idea behind byte addressing that you multiply instead of dividing? I.e. instead of the word being 60 bits and you can define arbitrary "bytes" being a divisor of 60, the byte is 8 bits and you can define "words" as any multiple of 8.
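A tiny script to make both points concrete (my own illustration, not from the video): the classic word lengths divide evenly in many ways, whereas the byte-addressable sizes only split into powers of two, so with 8-bit bytes you mostly build upward by multiplying instead.

```python
# Toy illustration of the divisor argument above.
def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

for word in (24, 36, 60):          # classic word lengths
    print(f"{word}-bit word divides into: {divisors(word)}")

for word in (8, 16, 32, 64):       # byte-addressable sizes
    print(f"{word}-bit word divides into: {divisors(word)}")
```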
Actually in the 360 (much like in later microprocessor designs like the 68000 series) - the Instruction Set Architecture defines 32 bit words, but the actual implementations varied depending on the power of the machines in the series. For example the expensive 360/195 had 32 bit paths to memory etc., but low end machines used 16 and even 8 bit paths to save money at the cost of lower speed.
Thanks! This is very interesting. One of my correspondents (Clem Cole of Intel) tells me that Fred Brooks and Gene Amdahl were permanently at war about this. The uneasy compromise was to do 32-bit calculations inside the CPU but to only store 24 bits in the memory of the cheaper machines. Is that roughly correct?
@@profdaveb6384 Not quite correct. The ISA definition requires that the effect of code is the same on all machines - there was a powerful committee to guide the development of the ISA. But the different levels of machines were handed out for design to different labs and implemented in different ways to meet the required cost / performance targets. The high end machines from Poughkeepsie lab were hard wired with 32 bit data paths and dedicated register h/w. Low end machines were some of the first to employ microcode - especially pushed by (Sir) John Fairclough from the Hursley UK lab for the lowest end model 30. They used 8 bit data paths internally and effectively executed serially (4 cycles per 32 bit word read/write). They also stored registers directly in core store to save cost - so even register operations effectively became memory accesses. The history of the IBM 360 by Pugh et al. makes great reading even today!
I think the 24 bit comment comes from the 24 bit addressing space in the 360, later increased to 31 bits in the 370 range. Incidentally the use of microcode in low end machines led to the invention of emulation - using microcode to emulate a different machine ISA entirely. This enabled IBM to aid migration of older 7090 and other machines to the new 360 range without requiring a complete immediate code rewrite.
@@Richardincancale Why only 31-bit addresses in the 370 series? Was bit 32 reserved for some special purpose?
@@profdaveb6384 The standard subroutine calling scheme passed a pointer to a variable length list of addresses of the parameters, the last parameter being marked by the top bit being set. Oops!
This is epic,
the delivery is excellent,
great communicator,
but still not enough to flatten the curve.
There were a number of us in the Boardman Road Lab, including Fred and Nancy that were from the Deep South and that had regional accents. We used to gather in the garden at lunch time, and taking parts, read from Walt Kelly's Pogo comic strip. Pogo was the source of great wisdom including "We have met the enemy and he is us."
"Man moth?" - Karl Pilkington
I read the 20th anniversary edition in high school, and Chapter 4's discussion of conceptual integrity affected me deeply.
Professor Brailsford is a gift to humanity
My favourite professor is at it again, with the most confusing title and introduction so far.
Perhaps you can do a video about Amdahl's law, that would be neat.
I felt an important idea driven home in the book which was just barely missed in the video is: adding workers to a late project makes it later. If you have a 4 person team and the project is 3 months late... if you add 2 more engineers, the project will now be 5 months late.. because the team has to stop working to bring the new engineers up to speed on everything, which takes a _lot of time_. Speaking from experience.. it's always going to take at least a month to onboard a new engineer if the work you're doing is genuinely building something new and different. A new engineer can only come up to speed much more quickly than that if what you're doing is work they've already done before and will immediately recognize.. in which case it's more like... busywork.. than engineering.
I think this is also why the book is called the "mythical man-month" and not the "mythical man-hour". Engineer time is measured in months, not hours.
Give me just one hour to work on a new problem I've never seen before and all I'll be able to do in that time is start getting a high-level understanding of what the problem looks like... no forward progress will be achieved... I won't even make forward progress in the 2nd hour.. and I don't think that's because I'm stupid :)
My first encounter with “how many X are needed to do Y” -
During my high school years, a part time job at a pharmacy included receiving deliveries and storing products on shelves in the basement. Around Christmastime one day a bunch of deliveries came in at the same time, so the boss sent one of the upstairs sales clerks down to help me.
We worked pretty diligently; but, of course, it was just when we happened to sit down for a second that the boss came around the corner. Rather than chewing us out, he philosophically observed, “A boy is a boy, two boys is half a boy, and three boys is no boy at all”.
A classic. Team that book up with DeMarco and Lister's "Peopleware" and Jim McCarthy's "Dynamics of Software Development" and you have a powerful triptych of software management insight.
The more people you add, the harder it is to communicate with them and manage them, so there is a point where you just make things worse with more men. Your programmers need to be programming, not talking to the others. It's funny how much of this applies to parallel programming, where synchronization can easily overwhelm the data processing if you go too far parallel.
I'd love to see a video about the philosophy behind TPF, both when it was introduced and over time, thinking about airlines and, if memory serves, things like the NYC 911 system.
My favourite: "adding manpower to a late software project makes it later"
I'm not sure if this is an intentional design element or simply a result of necessity but I really dig the use of printer paper as the medium on which to present notes throughout the discussion :)
Please do a video explaining "No Silver Bullet" because whenever I try to explain it, I may as well be talking to the trees.
Adding people to a project that is late will make it later.
This seems to be more about project management methodology and best practices: how many resources to allocate to a task before it starts to become counterproductive.
good story and a fabulous jumper
One of the most important books I read at MIT, still so relevant today.
While I was teaching at the Naval Postgraduate School in Monterey I used to watch the Research Channel on cable TV. One day it featured an interview with Fred on college education and, specifically, on whether there was a future for it. In his inimitable style, Fred pointed out that we have three institutions for civilizing young men: military service, prison, and college. Which would one choose for his son?
Those of us who are programmers are probably aware of the related issue in multi-processing.
Adding more processors to a task may or may not speed it up.
Some tasks, for example applying the same operation to a collection of items, can be sped up by using one processor per item.
But other tasks may be inherently sequential - the second step cannot be started until you have the results from the first step. More processors do not help.
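A minimal sketch of that contrast (a toy example of my own, with made-up workloads): the per-item case hands one independent piece to each worker, while the chained case cannot be split because every step needs the previous result.

```python
# Toy contrast (my own example): per-item work parallelises; a dependency
# chain does not, no matter how many workers you add.
from concurrent.futures import ProcessPoolExecutor

def per_item(x):
    return x * x                     # each item is independent of the others

def chained(values):
    acc = 0
    for x in values:
        acc = acc * 31 + x           # step i needs the result of step i-1
    return acc

if __name__ == "__main__":
    data = list(range(10_000))

    # Embarrassingly parallel: one processor per slice works fine.
    with ProcessPoolExecutor() as pool:
        squares = list(pool.map(per_item, data, chunksize=1000))

    # Inherently sequential: extra processors cannot help here.
    digest = chained(data)
    print(len(squares), digest)
```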
Complete side note - but I really am digging that sweater. Truly!
From elsewhere in the comments:
"For those of you admiring my sweater it was bought for me by my daughter and is from the Nottingham-based Paul Smith men's fashion house. Not as famous a name as Ralph Lauren (say) but Nottingham is very proud of Sir Paul (as he now is)"
Ooh I have this book and I've been wanting to read it.
And here I thought we were celebrating man pages for a whole month.
I remember this book being required reading on the team I was on at ICL in the 1970s. Pity it wasn't similarly required for management!
The funniest unit I've ever worked with was mega "user-hour-weeks".
Interesting and well told! Thanks
Thanks for not being embarrassing and ridiculous and changing it all to 'person month'. We're adults here.
It's probably no surprise that Amdahl's Law is extremely similar, but for CPUs.
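For anyone who hasn't met it, Amdahl's Law bounds the speedup from n processors by the serial fraction of the work; a short sketch (the 90% parallel fraction is just an assumed figure for illustration):

```python
# Amdahl's Law: speedup S(n) = 1 / ((1 - p) + p / n),
# where p is the parallelisable fraction of the work.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

p = 0.9                              # assume 90% of the work parallelises
for n in (1, 2, 4, 8, 16, 64, 1024):
    print(f"{n:5d} processors -> speedup {amdahl_speedup(p, n):5.2f}")
# The speedup can never exceed 1 / (1 - p) = 10x, however many you add.
```

Swap "serial fraction" for the communication cost of extra people and you get essentially the same diminishing-returns curve as the man-month argument.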
The green bar (or smaller green bar) paper he uses is great.
If you hire 1 programmer, they can do the job in 9 months. If you hire 100 programmers, they will argue about the architecture and 3 years later they still won't have finished.
Best saying about internal communications, re apprentices: "One lad is a lad; two lads is half a lad; three lads is no lad at all."
I've been familiar with this book for decades. The lesson described in this video is an important one, but in my opinion, not the most interesting. Brooks also reports on the diversity of individual productivity among programmers. I believe they found a factor of 20 or 30 between the least and most productive programmers. As astonishing as this might seem, even more important is that the range did not correlate with experience. Think about that. One programmer with a couple of years' experience writes a program in one day that takes someone with a decade behind him 20 days. On top of that, the former program is better in multiple ways: more clearly written, more easily updated, bug-free and faster. Think about that.
Great book, really helped to put your gut feelings into words ;) May I recommend "Software Project Survival Guide" as an additional read?
I think mythical man month is a perfect term for describing faulty assumptions.
Was this re-uploaded? I feel like I watched this years ago, not just 8 months ago