Theory of Constraints
Eliyahu M. Goldratt
Intuitively we know that Alex would never have gone and implemented them. So, at least, we have learned one thing: don't give the answers. The minute you supply a person with the answers, by that very action you block them, once and for all, from the opportunity of inventing those same answers for themselves. If you want to go on an ego trip, to show how smart you are, give the answers. But if what you want is action to be taken, then you must refrain from giving the answers.


The Goal deliberately elaborates on Alex's struggle to find the answers, so that the intuition of the reader will have sufficient time to crystallize. This is so the reader will figure out the answers before he reads them. Readers have usually been at least one page ahead of Alex Rogo throughout the book. Is The Goal successful in doing this? We certainly didn't ask all the readers if this is actually what happened to them, but we do have good reason to believe that this is what was accomplished. The Goal is also a textbook, used formally in hundreds of universities all over the world.


Nevertheless, the common reactions of readers are, "I finished it in two or three sittings"; "I couldn't put it down." The Goal is certainly not an exceptionally good piece of literature. Most probably this happens because the readers are inventing the answers before they read them, and that's why they couldn't put it down. The urge to verify to yourself that you are right is an almost uncontrollable urge. Now to the crucial question. Did the reader, by inventing the ideas in The Goal before reading them, take ownership?


Let's remind ourselves that this is exactly what we wanted to check. Is the emotion of the inventor triggered, even in cases where the person knows that somebody else has already invented the same thing before—which is certainly the case with the readers of The Goal? To answer this question, just ask yourself the following: did I, after finishing reading The Goal, feel almost compelled to give it to someone else? We know for a fact that this is very often the case. You see, The Goal was not originally made available through bookstores; even today it is quite rare to find it in a bookstore. Nevertheless, the mechanism of people passing it along, actually forcing it on each other, is so strong that many hundreds of thousands of copies of The Goal have been sold.


The results are very decisive. People are much more open than we tend to think, and their intuition is extremely powerful. Everybody has the ability to invent, if skillfully induced. And once people invent something for themselves, they actually take ownership. Another result of this same analysis is that the Socratic method is extremely powerful, even in our modern times. The immediate question is: how to formally use the Socratic method? Let's remember that when The Goal was written, it was guided by intuition and not by formalized rules. Any research in the literature reveals that an ocean of words has been written about the Socratic method. But before we dive into the enormous subject of using the Socratic method, it might be worthwhile to summarize the three steps of the Theory of Constraints.


The steps which are equivalent to the above five steps, but are expressed in the terminology of the improvement process itself, are: 1. What to change? Pinpoint the core problems! 2. What to change to? Construct simple, practical solutions! 3. How to cause the change? Induce the appropriate people to invent such solutions! We even know the method that will enable us to accomplish the third step—the Socratic method. Let's try to verbalize the rules that comprise the Socratic method. When we are trying to induce someone to invent a solution, for a problem which is under their control, the first step that must be accomplished is quite obvious. We must make sure that the problem we present to our audience will be regarded by them as their problem—a major problem of theirs. Otherwise how can we even hope that they will commit their brains to attempting to solve it? This all sounds quite convincing and it's probably even right, but how are we going to convince someone that a particular problem is theirs—how can we prove it to them?


Remember there are two things which are working against us when we try such a thing. The second is the fact that the usual way to prove something is simply not effective in this case. We are used to proving things by "the proof is in the pudding" method. But when you try to bring people to realize their own problem, you certainly cannot use their pudding. Using their pudding means to solve and implement for them. "Look here, they had the same problem and see how they have solved it." We all know what the common response is to such presentations: "we are different, it won't work here"—the puddings are not always the same.


How to Prove: Effect-Cause-Effect

The first stumbling block that we face in using the Socratic method is thus the need to formulate another way to prove things. A way which does not rely on examples or references but on the intrinsic logic of the situation itself, which is by far more convincing than the usual methods. This method of proof is called Effect-Cause-Effect and it is used extensively in all of the hard sciences. The following is an extract from the Theory of Constraints Journal that describes this generic method in detail.


The more time one spends in an organization, and the more a person climbs toward the top of the pyramid, the more he seems inclined to believe that managing an organization is more of an art than an accurate science. The art of managing people. The art of reaching intuitive decisions when hard facts are not available. It is almost a consensus today that, since we are dealing with so many unknowns in an organization, this field will never be a science. The unpredictable reaction of the market, the unknown actions of our direct and indirect competitors, the changing reliability of our vendors—not to mention the constant stream of internal "surprises"—all combine to defeat any attempt to approach the subject in a "scientific" way. Some—and they certainly are not a small group—even claim that since organizations comprise human beings, whose reactions cannot be scientifically predicted, it is an absurdity to hope that the subject of managing an organization can be turned into a science.


Is this really so? I believe that any attempt to answer this question must first establish what is meant by "science." Is it a collection of well established procedures? Or is it the glorified and somewhat mysterious notion of "finding the secrets of nature"? This muddled view stems from the fact that the various sciences did not spring up as fully developed subjects. Rather, each science has gone through three quite distinct and radically different stages of development. In each stage every science completely changes its perspective, nomenclature and even its intrinsic premise, much like a caterpillar in its metamorphosis into a butterfly. The three distinct stages that every science has gone through are: classification, correlation and Effect-Cause-Effect.


Let's clarify these stages through some examples. Probably one of the most ancient sciences known to man is astronomy. The first stage—classification—begins in prehistory. Several classifications of the stars were developed according to their location in the heavens. The most popular one was invented by the ancient Greeks. Within this broad classification they invented an elaborate subclassification, coloring the night sky with their vivid imaginations and succeeding to etch above us most of their stormy mythology. Some stars they observed "refused" to stay in one sector, so they classified these wandering stars in a class of their own—the planets. This mammoth effort had its own practical use. It created a common terminology, and today it still has some use in navigation, even though we must admit that its principal use is in horoscopes. The second stage started with Ptolemy in Alexandria about two thousand years ago.


This wise man postulated the first known correlation on this subject: the planets move along a circle, whose center moves along another circle, whose center is the earth. This correlation was improved upon by others, who more precisely pinpointed the radii of the circles and even added more circles to an already quite complicated model. These efforts certainly bore fruit. They enabled us to predict eclipses and to forecast the position of the planets in tomorrow's skies. The correlation stage is not a stand-still stage. It has its turbulences and fierce debates. Copernicus aroused a somewhat sleepy community by his daring suggestion that a much more powerful correlation would be achieved if we put the sun at the center of the planets' orbits.


Kepler created another turbulence by suggesting a correlation based on elliptical orbits rather than the almost holy circular ones. It should be noted that in the correlation stage, even though it is based on careful observations and often involves substantial mathematical computations, the question WHY is not asked at all. Rather, the question HOW is the center of interest. The man who moved this subject into the effect-cause-effect stage is known to everybody—Sir Isaac Newton. This man was the first to insist on asking the question: WHY? He had the courage to ask it not only about remote planets but about seemingly mundane day-to-day events. Why do apples fall down rather than flying in all directions? How easy it is to shrug off such a trivial question with the impertinent answer—"that is the way it is." Instead, he assumed the gravitational law. Because of his assumption—the gravitational law—three of Kepler's correlations were explained for the first time and eight more were exposed as just coincidences that had not been thoroughly checked.


With Newton's assumption of a cause, the word explain appears on stage. It is a foreign word to the classification and correlation worlds, where the only proof is "in the pudding": try it, it works. Not surprisingly, the effect-cause-effect stage opened a whole new dimension. We are no longer just observers tracking what already exists in nature. We can now predict the orbit of satellites that we ourselves add to space. Past experience is no longer the only tool. Logical derivations based on existing assumed causes can predict the outcome of entirely new situations. It's worthwhile to note that, before Newton, astronomy was not considered a science. As a matter of fact, the name used at that time is the best indication—astrology. Even Kepler was an astrologer and mathematician and had to supply his king with weekly horoscopes.


Let's examine another subject—diseases. The first stage—classification—is mentioned as far back as the Old Testament. When certain symptoms are present—put a quarantine on the house; when other symptoms exist—isolate the person; and with yet other symptoms—don't worry about them, they won't spread because of contact with the person. Diseases were classified not only by their symptoms but also by their ability to infect others. This stage was certainly very helpful. It served to localize diseases and prevent them from spreading. The second stage—correlation—was achieved only in the modern world. Edward Jenner found that if serum is transferred from an infected cow to a human being, this human being would not be infected by smallpox. Immunization had been found. We were no longer limited to just preventing the spread of the disease. In one specific case we even prevented and eventually eliminated it.


But once again the question WHY was not asked. The only proof was "try and see." The man who moved this subject into the effect-cause-effect stage said: let's assume that those tiny things that Leeuwenhoek found under his microscope more than a hundred years before, those things we call germs, are the cause of diseases—and bingo, microbiology sprang to life. Bingo, of course, means many years of hard work for each disease. By having a cause-and-effect we could now create immunizations for a very broad spectrum of diseases. Yes, not just find immunizations, but actually create immunizations, even for those diseases where such immunization is not created spontaneously in nature.


We can go over each subject that is regarded as a science, whether it is spectroscopy, chemistry or genetics, and the pattern is the same. The first step was always classification. There are often some practical applications from this stage, but the major contribution is usually to create the basic terminology of the subject. The second step—correlation—is usually much more rewarding. It supplies us with procedures that are powerful enough to make some practical predictions about the future.


Mendeleev's table and Mendel's genetic rules are examples of this important stage. But the most important stage— the one that is by far more powerful because it enables us to create things in nature—is the stage of effect-cause-effect. Only at this stage is there a widely accepted recognition that the subject is actually a science. Only then does the question WHY bring into the picture the demand for a logical explanation. Today there are quite a few mature sciences that have been in the third stage of effect-cause-effect for many years. The debate of what is a science is basically behind us. There is a consensus among scientists that science is not the search for truths or the search for the secrets of nature.


We are much more pragmatic than that. The widely accepted approach is to define science as the search for a minimum number of assumptions that will enable us to explain, by direct logical deduction, the maximum number of natural phenomena. Even when they can explain an infinite number of phenomena, this does not make the assumptions true. It simply makes them valid. They can still be disproved. One phenomenon that cannot be explained makes an assumption false, but in doing so it does not detract from its validity. It simply puts the boundaries on the circumstances where the assumption is valid, and exposes the opportunity to find another assumption that is even more valid. Science does not concern itself with truths but with validity. That's the reason why everything in science is open for constant checks and challenges. Accepting this general view of science, let's turn our attention to the field of organizations. Certainly we see many phenomena in organizations. It would be quite ridiculous to consider these phenomena, which we witness every day in any organization, as fiction.


They are no doubt a part of nature. But if all these organizational phenomena are phenomena of nature, which of the existing sciences deals with them? Certainly not physics, chemistry or biology. It looks as if this is an area waiting for a science to be developed. If we narrow our focus to a subset of the subject of managing organizations, the logistical arena, we can easily trace the three stages. The first one crystalized in the last thirty years. We refer to it under the generic name of MRP—Manufacturing Resource Planning.


It is now evident that the real power of MRP is in its contribution to our data bases and terminology, and much less to its original intent—shop floor scheduling. Bills of material, routings, inventory files, work-in-process files, order files—all are nomenclatures brought by MRP. Viewed from this perspective it's quite clear that MRP is actually the first stage—classification. We have classified our data, putting it into clearly defined categories. We have created the basic language of the subject and tremendously improved communications.


The West invested considerable money, time and resources in the classification stage. On the other side of the globe, the Japanese moved almost directly into the second stage—correlation. One man was the major force behind it—Dr. Taiichi Ohno. He started his career as a foreman and recently retired as the Executive Vice President of Production for all of Toyota. He is the inventor of the Toyota Production System and the Kanban approach. He found correlations like: if products are not needed downstream—as indicated by the lack of Kanban cards—it is better for the company that the workers stay idle; or, cut the batch sizes of parts even if the cost of setup skyrockets. I received the best proof that the question WHY was not asked at all from Dr. Ohno himself. He told me in our meeting several years ago in Chicago, "My system does not make sense at all, but by God it's working."


Have we evolved already into the third stage, the effect-cause-effect stage? My answer is, definitely yes. Most of The Goal readers claim that this book contains just common sense. Common sense is the highest praise for a logical derivation, for a very clear explanation. But explanations and logical derivations are the terminology of the effect-cause-effect stage. In The Goal only one assumption is postulated—the assumption that we can measure the goal of an organization by Throughput, Inventory and Operating Expense.


Everything else is derived logically from that assumption. The Theory of Constraints Journal is intended to expand this cause-and-effect logic to cover other aspects of an organization—from marketing, to design, to investment, to distribution, and so on. This is the main task of the first article in every issue. The task of the second article in each issue is quite different. The purpose of this article is certainly not to give real life testimonials that the Theory works. Who would be helped by such testimonials? The people who have already been persuaded by The Goal do not need them; they have their own real-life proof. Those who were not moved by the common sense logic in The Goal will certainly find it easy to demonstrate that their situations are different and that these ideas will not work in their environment.


No, the purpose of the "Visit" articles is quite different. What is not well appreciated is that the effect-cause-effect stage brings with it some significant ramifications that we have to adjust to. It involves a different approach to untying a subject. In addition, and not less important, it demands a much more pragmatic approach to newly created "sacred cows." Let's elaborate on these points. First, how do we usually approach a subject today? The first step is typically—let's get familiarized with the subject. We are thrown into the mammoth task of assembling information. We try to collect as much relevant data as possible. Sometimes it takes a while to identify what is actually relevant. Oftentimes it's quite frustrating to discover how difficult it is to get reliable data. But usually, determination, effort and time enable us to put our arms around an impressive collection of relevant pieces of information.


Now what? Our usual tendency is to start arranging things. To put some order into the pile of information that we worked so hard to assemble. This is not a trivial task, and certainly it takes time and effort. In most cases there is more than one alternative way to systematically arrange the data. It's not at all easy to choose between the various possibilities, and too frequently we decide to switch in mid-stream between one systematic approach and another, throwing the entire effort into one big mess. The most frustrating part occurs toward the end, when we are always stuck with some pieces of information that do not fit neatly into our system. We twist and bend, invent some exception rules, and in the end it is all organized.


What have we actually achieved? Classification! Many times we call the above task a "survey." Many of these "findings" turn out to be just statistics that we verbalize or present in a graphic form. This statistic, that "finding," is a direct result of our classification and sub-classification efforts. But let's not treat these statistics lightly. In many cases they are quite an eye opener. Nevertheless, most of us will feel uneasy finishing such a mammoth job with just statistics. We are eager to get more concrete things out of our work.


To accomplish this we usually screen the statistics, looking for patterns and common trends between the various graphs and tables. We are looking for correlations. Usually we find some, but everyone who has been involved in such an effort knows that there are two problems with these correlations. The first is that the only way to get some verification is to perform an experiment: to deliberately change one variable and to closely monitor another, to find out whether or not it changes according to the prediction indicated by the correlation. The second and more serious problem is that we don't understand why the correlation exists, and we are always haunted by the possibility that the correlation involves more variables than we have identified, or that we haven't identified the known variables narrowly enough. Numerous examples of the first case are well known. Unfortunately the second case is more common and carries with it a larger problem. If a variable was neglected in a correlation, it will not take long until it emerges or we decide to declare the correlation invalid.


Unfortunately, this is not the case if the variables were not defined narrowly enough. Most experiments will prove the validity of the correlation, but its implementation will involve a lot of wasted effort. A classic example of this problem is the correlation between a company's level of inventory and its performance. The surveys taken in the late seventies and early eighties made it very clear that the overall performance of Japanese companies was superior to ours. This correlation was broadcast as "Inventory is a liability." A frantic race to reduce inventories started. We are now in the midst of this race even though our financial statements have not yet caught up. They still penalize—in the short run—every company that manages to substantially reduce its inventory. The amazing thing is that this widespread effort has occurred without most participants having a clear picture of why it is important to reduce inventory.


We still hear the usual explanation of investments tied up in inventories, carrying costs and interest cost. The disturbing thing about this movement is that we have not distinguished which portions of inventory are really responsible for the improved performance. A very close scrutiny, as can be found in The Race, reveals that the reduction in the work-in-process and finished goods portions of the inventory is the prime reason for improvement in a company's performance. Raw material inventory reductions are shown to have a relatively small impact. Nevertheless, due to lack of this understanding, many companies are paying their dues to the current crusade by leaning on their vendors in order to reduce their raw materials inventories.


In general most correlations are extremely helpful. The inherent limitation of any correlation is due to the lack of understanding of the cause-and-effect relationships between the variables connected by the correlation. As we can see, the current approach of assembling information as the first step in approaching a subject leads us down the classification path, which may eventually evolve into fruitful correlations. Unfortunately this path fails to trigger the effect-cause-effect stage. In order to appreciate this, let's examine how a researcher in one of the established sciences operates. When such a person becomes aware of a new effect, the last thing that he desires at this stage is more information. One effect is enough. The burden now is on the scientist's shoulders.


Now he needs to think, not to look for more data. To think, to speculate, even if in thin air. To hypothesize a plausible cause for this effect. What might be causing the existence of this effect? When such a cause is finally speculated, the real task begins. The scientist must now struggle with a much more challenging question: suppose that the speculated cause is valid, what is another effect in reality that this cause must explain? The other predicted effect must be different in nature from the original, otherwise the speculated cause is regarded as just an empty phrase. The researcher must then search to see if this effect actually exists. Once a predicted effect is actually found—and in the established sciences it might involve years of experimentation—only then does the speculated cause gain the name of theory. If the predicted effect is not found, it is an indication that the speculated cause is wrong and the scientist must now search for another plausible cause.
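The loop just described can be sketched in a few lines of code. This is purely an illustrative model, not anything from the Journal: causes are represented as objects that, given an observed effect, predict a different effect, and a speculated cause survives only if its distinct prediction is actually found in reality.

```python
from dataclasses import dataclass

@dataclass
class Cause:
    """A speculated cause, with the other effects it would imply.
    (A hypothetical representation for illustration only.)"""
    name: str
    predictions: dict  # maps an observed effect -> a different predicted effect

    def predict_other_effect(self, effect):
        return self.predictions.get(effect)

def effect_cause_effect(observed_effect, candidate_causes, reality):
    """For each speculated cause, derive a *different* predicted effect
    and check reality for it. A cause earns the name 'theory' only if
    its distinct prediction is actually found."""
    for cause in candidate_causes:
        predicted = cause.predict_other_effect(observed_effect)
        if predicted is None or predicted == observed_effect:
            continue  # no distinct prediction: an empty phrase, not a cause
        if predicted in reality:
            return cause.name  # prediction verified
    return None  # every speculation failed; search for another plausible cause

# Newton's example: the cause speculated for a falling apple also
# predicts a totally different effect, the motion of the planets.
gravity = Cause("gravitation", {"apples fall": "planets orbit the sun"})
reality = {"apples fall", "planets orbit the sun"}
print(effect_cause_effect("apples fall", [gravity], reality))  # gravitation
```

The point of the sketch is the control flow, not the data: verification is sought for the speculated effect, never for the original one.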


By analyzing this data Kepler succeeded, after a mammoth mathematical effort of more than thirty years, in producing some correct correlations and some mistaken ones. Newton, on the other hand, started by examining one effect—why an apple falls down. He speculated the gravitational law as a plausible cause and derived from its existence a totally different effect—the orbits of the planets around the sun. Correlations do not trigger the effect-cause-effect stage. At most they shorten the time required to check the existence of some predicted effects. This process of speculating a cause for a given effect and then predicting another effect stemming from the same cause is usually referred to as Effect-Cause-Effect.


Many times the process does not end there. An effort is often made to try and predict more types of effects from the same assumed cause. The more types of effect predicted—and of course verified—the more "powerful" is the theory. Theory in science—unlike in the common language—must be practical, otherwise it is not a theory but just an empty scholastic speculation. Every verified, predicted effect throws additional light on the cause. Oftentimes this process results in the cause itself being regarded as an effect, thus triggering the question of what is its cause. In such a way, a logical tree that explains many vastly different effects can grow from a single or very few basic assumptions. This technique is extremely helpful in trying to


find the root cause of a problematic situation. We should strive to reveal the fundamental causes, so that a root treatment can be applied, rather than just treating the leaves—the symptoms. I myself usually elect to stop the process of finding a cause for a cause when I reach a cause which is psychological and not physical in nature. In using the Effect-Cause-Effect method we strive to explain the existence of many natural effects by postulating a minimum number of assumptions. If all the effects just mentioned are considered to be undesirable ones, then the proper name for these underlying assumptions is Core Problems. Thus one of the most powerful ways of pinpointing the core problems is to start with an undesirable effect, then to speculate a plausible cause, which is then either verified or disproved by checking for the existence of another type of effect which must also stem from the same speculated cause.


Using this method also means that after having identified one undesirable effect, the search for more information should be put on hold. Rather, we should now immerse ourselves in the speculation of a plausible reason which can explain the existence of this effect. Then we should try to logically deduce what other totally different effect must exist in reality if the speculated reason is valid. Only then should we seek verification, not of the original effect but of the speculated one. It's no wonder that when using this method in a dialogue, we will initially give the impression that we are jumping from one subject to another.


Remember, the only connection that exists between the subjects being discussed is the hypothesis, which resides only in our mind. By explaining the entire process of constructing the Effect-Cause-Effect logical "tree" we have a very powerful way to persuade others. Let's examine an example which uses this technique of thinking—in Chapter 4 of The Goal. The assumption that Jonah makes is that Alex, being the plant manager, is not doing something which is an artificial, local optimum. Thus the hypothesis is that when Alex uses the word productivity, he knows what he is talking about.


Therefore Jonah's next question is not directed to the specific tasks of the robots, their manufacturer or even their purchase price, but rather towards the verification of what should be a straightforward resulting effect. Alex, captured in his world of local optimums, thinks that Jonah is the one who is remote from reality. Jonah now has to develop and present an Effect-Cause-Effect tree using Alex's terminology, in order to show Alex why the hypothesis "Alex knows what he is talking about" must be wrong. If the plant had actually increased productivity, this would mean that either the plant increased Throughput, or reduced Inventory, or reduced Operating Expense. There aren't any other possibilities. Thus Jonah's next question is, "was your plant able to ship even one more product a day as a result of what happened in the department where you installed the robots?"


Basically Jonah was asking whether or not Throughput was increased. The next question was "did you fire anybody?" And the third question was "did your inventories go down?" It is quite easy now for Jonah to hypothesize a much better reason: "Alex is playing a numbers game," and productivity for him is just a local measure like efficiencies and cost. The unavoidable effects that will stem from managing a plant in this way are what Jonah tries to highlight by his next question, "With such high efficiencies you must be running your robots constantly?" So without further hesitation he firmly states, "Come on, be honest, your inventories are going through the roof, are they not?" Then the reaction is, "wait a minute here, how come you know about these things?" As a matter of fact, this is the only feasible technique that we know of to identify constraints, especially if it's a policy constraint that doesn't give rise to permanent physical constraints, but only to temporary or wandering ones.
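Jonah's underlying test is simple enough to state as code. The sketch below is an illustration of that logic only—the measures, names and numbers are hypothetical, not taken from the book: a change is a real improvement only if Throughput rose, or Inventory or Operating Expense fell; otherwise the "productivity" gain is just a local, artificial optimum.

```python
from dataclasses import dataclass

@dataclass
class Measures:
    throughput: float         # money generated through sales
    inventory: float          # money tied up in things intended for sale
    operating_expense: float  # money spent turning inventory into throughput

def is_real_improvement(before: Measures, after: Measures) -> bool:
    """Jonah's check: did T rise, or did I or OE fall?
    If none did, there are no other possibilities left."""
    return (after.throughput > before.throughput
            or after.inventory < before.inventory
            or after.operating_expense < before.operating_expense)

# Alex's robots (illustrative numbers): efficiencies went up, but nothing
# extra shipped, nobody was released, and work-in-process grew.
before = Measures(throughput=100.0, inventory=50.0, operating_expense=40.0)
after  = Measures(throughput=100.0, inventory=65.0, operating_expense=40.0)
print(is_real_improvement(before, after))  # False
```

Each of Jonah's three questions probes exactly one disjunct of this condition, which is why three "no" answers are enough to refute the hypothesis that Alex knows what he is talking about.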


This same method also solves the problem of providing solid proof. Read The Goal once again; it portrays a constant unfolding of Effect-Cause-Effect analysis. It turns out that people are very convinced by this type of analysis when they are introduced, not just to the end result, but to the entire logical flow: hypothesizing reasons, deriving the resulting different effects, checking for their existence and, when not finding them, changing the hypothesis and so on. If you called The Goal common sense then you have already testified to the extent to which this method is accepted as proof.

How to Invent Simple Solutions: Evaporating Clouds

Once the core problem is pinpointed, then the challenge of using the Socratic approach is even bigger.


Now the audience has to be induced to derive simple, practical solutions. The major obstacle to accomplishing such a task is the fact that people usually already have, in their minds, the "accepted" solutions. Remember, we are dealing with core problems, and core problems do not just pop up. They have usually been in existence within our environment for many months or even years. This provides us with the best indication that the perceived solutions are insufficient, otherwise the core problem would have already been solved.


It is clear that the nature of human beings is such that, as long as we think that we already know, we don't bother to rethink the situation. Thus whenever we want to induce people to invent, we must first convince them that the "accepted" solutions are false, otherwise they will not think, they will just quote. It's not unusual to find that the accepted solutions, which do not work, are solutions of compromise. Inducing people to invent simple solutions requires that we steer them away from the avenues of compromise and towards the avenue of re-examining the foundations of the system, in order to find the minimum number of changes needed to create an environment in which the problem simply cannot exist. I call the method which can accomplish this the Evaporating Clouds method. Assuming that a core problem can be described as a big black cloud, then this method strives not to solve the problem (compromise) but to cause the problem not to exist.


The origin of the Evaporating Clouds method stems from the essence of two broadly accepted sentences. The first one, which is more theological, is "God does not limit us, we are limiting ourselves," and the second, which is regarded to be more practical, "You cannot have your cake and eat it too." But the mere fact that they are so widely accepted indicates that both are valid. The second sentence is just a vivid description of the existence of compromising solutions. The first one probably indicates that, whenever we face a situation which requires a compromise, there is always a simple solution that does not involve compromise. We just have to find it. How can we systematically find such solutions? Maybe the best place to start is by utilizing a third sentence, which is also very widely accepted: "define a problem precisely and you are halfway to a solution."


Nevertheless there is one small difficulty: when do we usually realize the validity of the above sentence? Only when we've already found the solution. But how can we be sure that we have defined a problem precisely before having reached the solution? Let's first examine what is the meaning of a problem. Intuitively we understand that a problem exists whenever there is something that prevents, or limits us, from reaching a desired objective. Therefore, defining a problem precisely must start with a declaration of the desired objective. What should we do next? Let's remind ourselves that what we are dealing with are the type of problems that involve compromise. A compromise between at least two bodies. In other words, we have to pacify, or satisfy, at least two different things if we want to achieve our desired objective. From this analysis we can immediately conclude that the objective depends on satisfying each of these requirements.


In other words, to reach the objective there are at least two necessary conditions which must be met. Thus, the next step in precisely defining a problem is to define the requirements that must be fulfilled. But the definition of the problem cannot stop here. We should realize that whenever a compromise exists, there must be at least one thing that is shared by the requirements, and it is in this sharing that the problem between the requirements exists. Either we simply don't have enough to share or, in order to satisfy the requirements, we must do conflicting things: "you can't have your cake and eat it too." Let's start by calling the desired objective "A."
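The problem definition being built here (an objective "A," two requirements, and two conflicting prerequisites, with hidden assumptions sitting behind the arrows) can be captured as a small data structure. The field names and the worked batch-size entries are my paraphrase, purely for illustration:

```python
# A sketch of an Evaporating Cloud as a data structure. All labels are
# illustrative paraphrases, not Goldratt's exact wording.
from dataclasses import dataclass, field

@dataclass
class EvaporatingCloud:
    objective: str            # "A", the desired objective
    requirements: tuple       # "B" and "C", necessary conditions for "A"
    prerequisites: tuple      # the two conflicting prerequisites
    arrow_assumptions: dict = field(default_factory=dict)  # arrow -> hidden assumption

batch_size_cloud = EvaporatingCloud(
    objective="minimum cost per unit",
    requirements=("low setup cost per unit", "low carrying cost per unit"),
    prerequisites=("produce in large batches", "produce in small batches"),
    arrow_assumptions={
        "B -> large batches": "setup time is fixed and costs money",
        "conflict arrow": "'batch' has only one meaning",
    },
)

# "Evaporating" the cloud means invalidating any one hidden assumption,
# not optimizing between the two prerequisites.
assert len(batch_size_cloud.arrow_assumptions) >= 1
```

The structure makes the later argument concrete: the attack surface is not the conflict between the prerequisites but the list of assumptions attached to the arrows.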


Maybe the best way is to use the Effect-Cause-Effect method. The effect that we started with is "state a problem precisely and you're halfway to solving it." In order to verify this hypothesis, we must be able to explain, with the same hypothesis, an entirely different type of effect. Even though there are many such types of effects, I will bring into play here the effect that I originally used to verify this method. At the time that the Evaporating Clouds method began to be formulated, I was deeply immersed in the field of scheduling and materials management. One would expect that the articles published in professional magazines would deal with the problems that trouble the community of its readers.


Therefore, one would expect that the bigger and more important the problem, the more articles there would be trying to address and solve that problem. Skimming the professional magazines in the field of materials management revealed a very awkward phenomenon. In the last fifty years (actually from the thirties) the problem that attracted by far the largest number of articles is the problem of Economic Batch Quantity (EBQ). At the same time, talk to any practitioner and you'll find out that batch sizes are determined almost off the cuff and nobody in the plants is overly concerned about it. I don't think that it is an exaggeration to estimate that at least 10,000 articles have already been published on this subject. Certainly more than on the much more debatable subjects of scheduling, MRP or JIT. Why is this? What caused such a flood of articles into such a relatively unimportant problem? Maybe we can explain this phenomenon if what we find is that this particular problem had some unique feature.


A feature that will attract the interest of those motivated by the academic measurement of "publish or perish." In such a case, people will certainly be more attracted to deal with a problem which is clearly defined, rather than with the more important problems which are vaguely stated. As it turns out, this is exactly the case. The batch size problem is precisely defined, according to the above diagrams. Let's review it in more detail, not to see what the batch size should be, but in order to acquire a much better understanding of the Evaporating Clouds method. The major avenues through which the size of the batch will impact the cost per unit are as follows.


But, if after the one hour setup, we produce ten units of a given item, then each unit will have to carry only one-tenth of the cost of one hour of setup. Thus if we want to reduce the setup cost per unit, we should strive to produce in as large a batch as possible. Graphically, the cost per unit as a function of batch size, when setup cost is considered, is shown in Figure 3. We are all aware that as we enlarge the size of the batch we will enlarge the amount of time that we will hold the batch in our possession, and thus we increase the carrying cost of inventory. Most articles indicate a linear relationship; doubling the size of the batch roughly doubles the carrying cost.


When considering the carrying cost per unit, we should strive to produce in the smallest batches possible. Graphically, the cost per unit as a function of batch size when carrying cost is considered is shown in Figure 4.

FIGURE 4: The cost per unit, as a function of batch size, when we consider the carrying cost. As we enlarge the batch we enlarge the time we hold it in our possession.

It is quite easy to see that the problem of batch size determination is actually a compromising problem which is precisely defined. But maybe it will behove us to first examine how such problems are treated conventionally.


The conventional way is to accept the problem as a given and to search for a solution within the framework established by the problem. Thus, conventionally we concentrate on finding an "optimum" solution. Since we cannot satisfy both requirements, "B" and "C," all the efforts are aimed at finding out how much we can jeopardize each one, so that the damage to the objective "A" will be minimized. Actually, finding a solution is restricted by the question: what compromise should we make? In the batch size problem, we consider the total cost, which is the summation of the setup and carrying cost contributions (see Figure 5).


And then, we mathematically or numerically find the minimum cost possible, which indicates the "best" batch size. This type of approach, with a whole variety of small corrective considerations, is what appears in the vast number of articles mentioned above. Most articles also point out that the curve is very flat near the minimum, and they claim that it's not too terribly important which batch is chosen, as long as it is within the range marked by the two circles in Figure 5. The intuitive, off-the-cuff choice for a batch size that we make in reality is usually well within this range. This same point, about falling within this wide range, is what made everyone wonder about the practicality of all these academic articles, which, while mentioning it, concentrate on small corrective factors that do not change the picture in any significant way.
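The minimization just described, and the flatness of the curve near the minimum, can be checked numerically. The cost parameters below are assumed values for illustration, not figures from the text:

```python
# A numeric sketch of the total-cost curve in Figure 5. S and h are
# assumed parameters, not from the book.
import math

S = 100.0   # setup cost per batch (assumed)
h = 1.0     # carrying cost per unit, per unit of batch size (assumed)

def cost_per_unit(q):
    # total cost per unit = setup share + carrying share
    return S / q + h * q

q_star = math.sqrt(S / h)        # analytic minimum of S/q + h*q
min_cost = cost_per_unit(q_star)

# The curve is very flat near the minimum: halving or doubling the
# batch raises the cost per unit by only 25%.
assert math.isclose(cost_per_unit(q_star / 2), 1.25 * min_cost)
assert math.isclose(cost_per_unit(2 * q_star), 1.25 * min_cost)
```

This is exactly why the practitioners' off-the-cuff choices land comfortably inside the "acceptable" range: anywhere near the minimum, the curve barely moves.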


The Evaporating Clouds method does not strive to reach a compromise solution; rather, it concentrates on invalidating the problem itself. The first attack is made on the objective itself, asking, "Do we really want it?" This comparison is achieved by simply trying to restate the problem using the terminology of the global objective rather than the local terminology. Are we really trying to achieve a minimum cost per unit? Maybe, but what we are really trying to achieve is, of course, the making of more money. Since most readers have not yet developed their intuition regarding Throughput, Inventory and Operating Expense, we'll use, instead, the slightly more cumbersome global terminology of the relationships of making money: Net Profit and Return on Investment. Rather than using cost per unit, we should use profit per unit.


Since the problem assumes a fixed selling price (more cost, less profit; less cost, more profit), we can just replace cost by profit. This results in a mirror image of the previous graph (Figure 5). How do we bring investment into the picture? We should just remind ourselves of the reason for the linear relationship (straight line) between carrying cost and the batch size. Doubling the batch size means doubling the carrying cost. But this implies doubling the investment in the work-in-process and finished goods material that we hold. In other words, there is also a linear relationship between the batch size and investment. Thus, we can simply replace the horizontal axis (batch size) with investment in WIP and FG's, and we get a graph which is profit per unit versus investment, as shown in Figure 6.


But what about return on investment? The profit is the same, but the investment in that interval has more than doubled, so the return on investment has been cut by more than half. If we want to make more money, then we shouldn't aim for the top of the curve but at some point substantially to the left of it. And what about that brutal, necessary condition called cash? Suppose that the plant has an amount of cash which resides somewhere between the two points, as indicated by the bar on the investment axis. Yes, they are equivalent from the point of view of the net profit, but in this case one means bankruptcy and the other survival. This "optimal" solution has been taught for more than 50 years in almost every university around the globe.
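The return-on-investment point can be made with a line of arithmetic. The profit and investment figures below are assumed for illustration only:

```python
# Two points on the profit-vs-investment curve with equal net profit but
# very different inventory investment. All numbers are assumed.

profit = 120_000.0          # annual net profit, the same at both points
invest_left = 200_000.0     # investment well to the left of the peak
invest_right = 450_000.0    # investment near the "optimal" batch size

roi_left = profit / invest_left      # 60% return
roi_right = profit / invest_right    # about 27% return

# Equal profit, but the "optimal" point ties up more than twice the cash.
assert roi_left > 2 * roi_right
```

And if the plant's available cash sits between the two investment levels, the comparison stops being academic: one point is survivable and the other is not.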


Almost no one bothers to check the local objectives versus the global goal. Let's not fool ourselves: this phenomenon is not restricted to just academic problems but is widespread in real life. How many times has your company worked so hard to win a bid and, once it was won, it turned out to be a disaster? How many times have you seen a foreman forced to break setups, or go to overtime, in order to expedite some pieces, just to find them two weeks later gathering dust in a warehouse? How many times have you almost climbed the walls to meet tolerances that shouldn't have been there in the first place? Problems that arise whenever we try to satisfy local objectives that do not match, at all, the global goal.


Coming back to the method of Evaporating Clouds, let's assume for now that the objective has been checked and verified. Yes, we do want to achieve this specific objective. Is the only way open to turn to the avenue of compromise? The answer is definitely not. What we have to remind ourselves of is that the arrows in the Evaporating Clouds diagram (the arrows connecting the requirements to the objective, the prerequisites to the requirements, and the arrow of the conflict) are all just logical connections, and behind any logical connection there is an assumption.


In our case, most probably it is a hidden assumption. Let's clarify it with an example taken from the Theory of Constraints Journal. Why do we want to reach the top of Mount Everest? It doesn't matter: "because it's there." The hidden assumption is that we intend to reach the top of Mount Everest by climbing. It is enough just to verbalize this assumption, and pictures of parachutes and helicopters start to flash in our minds. The Evaporating Clouds technique is based on verbalizing the assumptions hidden behind the arrows, forcing them out and challenging them. It's enough to invalidate even one of these assumptions, no matter which one, and the problem collapses, disappears. The previous Mount Everest example probably left you with a sour taste in your mouth, as it is too simplistic, unfair.


So maybe we should try to use this technique on the batch size problem. Let's remember that this problem is one in which more than 10,000 bright people have invested so much time trying to solve, to the extent that they have published articles about it. Evaporating this problem certainly serves, in more than one way, as a good illustration of the validity of the Evaporating Clouds method. Examine, for example, the arrow connecting requirement "B" to the objective. Behind the influence of setup cost on cost per unit lies the unstated assumption that was taken when we drew the batch size problem. It doesn't take long to realize that we have taken setup as a given. In other words, we assumed that the setup cost is fixed and cannot be reduced.


What do we call the method that so viciously attacks this assumption? We call it JIT, which has shown that setup times can be drastically reduced, sometimes from many hours to just a few minutes. But there are many ways to have our cake and eat it too. So, let's try to find out if there is another assumption hiding behind the same arrow. Just thinking about it probably sends flickers through your mind: "does setup really cost us money?" Remember, the Theory of Constraints shies away from the word cost like it was fire. The word cost belongs to the most dangerous and confusing category of words: the multi-meaning words.


We use this word as a synonym for purchase price, like in the sentence, "the cost of a machine." We also use it in the sense of spending money: you might become rich by prudent investments, but certainly not by spending your money. But the word cost is also used in a third way, that of "product cost," which is just an artificial, mathematical phantom (Theory of Constraints Journal, Volume 1, Number 4, Article 1). After this long remark on the multiple meanings of the word cost, let's try to rephrase the question "does setup really cost us money?" The lightbulb just went on. The equivalent is "will an additional setup increase the Operating Expense of the organization?" Suppose that all the people who have tried to solve the batch size problem had dealt with a situation where at least one of the resources involved in the setup was a bottleneck; an additional setup would then mean lost Throughput for the entire plant. So let's assume instead that the situation they have dealt with is one in which none of the resources involved in the setup is a bottleneck.


In such a case the impact of doing an additional setup on Operating Expense is basically zero. What we see is that exposing the hidden assumption is sufficient for us to understand that the whole problem revolved around a distortion in terminology. What is our answer to the batch size now? Where should we have large batches? On the bottlenecks. And everywhere else? Let's have smaller batches. Small, to the extent that we can afford the additional setups without turning the other resources into bottlenecks. What I would like to demonstrate is that every arrow can be challenged. But since I don't want to turn this into the 10,001st book on batch sizes, let me demonstrate it by concentrating on what is perceived to be the most solid arrow in the diagram: the arrow of the conflict itself.
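The decision rule the text arrives at (judge an extra setup by its impact on Throughput and Operating Expense, not by an allocated "setup cost per unit") can be sketched as follows. The function and its numbers are hypothetical stand-ins:

```python
# A sketch of the Theory of Constraints view of one additional setup.
# Both the function and the figures are illustrative assumptions.

def extra_setup_impact(is_bottleneck, setup_hours, throughput_per_hour):
    """Return (delta Operating Expense, delta Throughput) for one more setup."""
    if is_bottleneck:
        # An hour lost on the bottleneck is an hour of lost Throughput
        # for the whole system; workers are paid either way, so OE is flat.
        return (0.0, -setup_hours * throughput_per_hour)
    # A non-bottleneck has idle time by definition: an extra setup there
    # changes neither Operating Expense nor Throughput.
    return (0.0, 0.0)

assert extra_setup_impact(False, 1.0, 500.0) == (0.0, 0.0)
assert extra_setup_impact(True, 1.0, 500.0) == (0.0, -500.0)
```

Under this rule the "setup cost" that the EBQ literature allocates to every unit simply vanishes on non-bottlenecks, which is the distortion in terminology the text is pointing at.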


What is the assumption behind "large batch is the opposite of small batch"? That large is the opposite of small? To challenge this means to challenge mathematics itself. So the only avenue left open is to challenge the assumption that the word batch does not belong to that category of words having multiple meanings. Here it seems that we are at a loss, where the only way out is to ask ourselves if we know of any environment in which the concept of batch does not fit. Yes, we all know of such environments: flow lines, continuous production, assembly lines. It stands to reason that batch sizing is not applicable in such environments, because in those environments the distinction between the two meanings of the word batch is so big that we cannot possibly group them together.


What is the batch size in a dedicated assembly line, dedicated to the assembly of one type of product? Of course it's one; we are moving the products along the assembly line in batches of one. But from the setup point of view the answer is a very large number; we don't ever reset a dedicated line. What are we going to do now? It seems as if we have two correct answers to the same question, where the first answer is one and the second is infinite. Rather than putting the whole thing aside by saying that the batch size concept is not applicable to such situations, let's try to verbalize the lessons that we can extract from it. We reached the answer one when we looked at this situation from the point of view of the product. The unverbalized question was actually, "how many units do we batch together for the purpose of transferring them from one resource to another along the line?"


On the other hand, we reached the answer of infinite from the point of view of the resources in the line. The question here was "how many units do we batch together for the purpose of processing them, one after the other?" The answer infinite was thus given to describe the size of the batch used for the purpose of processing; we call it the process batch. In every flow environment, we find very strong indications that the process batch and the transfer batch are totally different entities that can and do co-exist, even when we consider the same items, on the same resource, at the same time. We move batches of one through a machine on the line, while at the same time the process batch, in which these parts are processed by the machine, is infinite. Now let's return to our problem: why did we have the prerequisite of a large batch? To save setup. In other words, the batch that we wanted to be large was the process batch. Why did we have the prerequisite of a small batch?


Because we wanted to reduce the carrying cost of inventory, the time that we hold the inventory in our possession. In other words, we wanted a small transfer batch. Why then do we claim that we have a conflict, when these two prerequisites can be fully satisfied at the same time? Now the solution is obvious. We should strive to maximize the process batches on bottlenecks, while at the same time using small transfer batches everywhere, including through the bottleneck (small transfer batches do not have any impact on setup). The problem that occupied so many people for more than 50 years was due to the improper use of the terminology.
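The co-existence of a large process batch with a small transfer batch can be illustrated with a toy two-operation line. The part count and processing times below are assumed:

```python
# A toy line: 100 parts flow through two resources, each taking 1 minute
# per part (assumed numbers). The process batch at each resource is the
# full 100 parts either way; only the transfer batch changes.

n, t = 100, 1.0

# Transfer batch = whole process batch: operation 2 cannot start until
# operation 1 has finished all 100 parts.
lead_time_big_transfer = n * t + n * t        # 200 minutes

# Transfer batch = 1: operation 2 starts as soon as the first part
# arrives, so the two operations overlap (valid when the second
# operation is at least as slow as the first).
lead_time_unit_transfer = t + n * t           # 101 minutes

assert lead_time_big_transfer == 200.0
assert lead_time_unit_transfer == 101.0
```

No extra setups were added at either resource, yet the time the inventory is held (and with it the carrying cost) drops by almost half, which is exactly the claimed dissolution of the conflict.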


The efforts to find the "best" batch size should have been directed towards straightening out the paperwork on the shop floor rather than finding some artificial optimum.



The Five Steps of Focusing

The message of this book is not bottlenecks or cutting batches.


It's not even how to arrange the activities of the factory floor. As a matter of fact, the message is the same for any aspect of any company, from product design and marketing to manufacturing and distribution. Everyone knows that the actions of marketing are guided by the concept of cost and margins, even more than the actions of production. We grossly underestimate our intuition. Intuitively we do know the real problems; we even know the solutions. What is unfortunately not emphasized enough is the vast importance of verbalizing our own intuition. As long as we do not verbalize our intuition, as long as we do not learn to cast it clearly into words, not only will we be unable to convince others, we will not even be able to convince ourselves of what we already know to be right. If we don't bother to verbalize our intuition, we ourselves will do the opposite of what we believe in. We will "just play a lot of games with numbers and words."


How do we go about verbalizing it? The first step is to recognize that every system was built for a purpose. We didn't create our organizations just for the sake of their existence. Thus, every action taken by any organ (any part of the organization) should be judged by its impact on the overall purpose. This immediately implies that, before we can deal with the improvement of any section of a system, we must first define the system's global goal, and the measurements that will enable us to judge the impact of any subsystem and any local decision on this global goal. Once these are defined, we can describe the next steps in two different ways: one in which we use the terminology of the system that we are trying to improve, the other using the terminology of the improvement process itself.


We find that both descriptions are very helpful, and only when both are considered together does a non-distorted picture emerge. How do we sort out the important few from the trivial many? The key lies in the recognition of the important role of the system's constraints. A system's constraint is nothing more than what we all feel to be expressed by these words: anything that limits a system from achieving higher performance versus its goal. In our reality any system has very few constraints. To turn this into a workable procedure, we just have to come to terms with the way in which our reality is constructed.


1. Identify the System's Constraints. Once this is accomplished (remember that to identify the constraints also means to prioritize them according to their impact on the goal, otherwise many trivialities will sneak in) the next step becomes self-evident. We have just put our fingers on the few things which are in short supply, short to the extent that they limit the entire system. So let's make sure that we don't waste the little that we have. In other words, step number two is: 2. Decide How to Exploit the System's Constraints. Now that we have decided how we are going to manage the constraints, how should we manage the vast majority of the system's resources, which are not constraints? Intuitively it's obvious.


We should manage them so that everything that the constraints are going to consume will be supplied by the non-constraints. Is there any point in managing the non-constraints to supply more than that? This of course will not help, since the overall system's performance is sealed, dictated by the constraints. Thus the third step is: 3. Subordinate Everything Else to the Above Decision. But let's not stop here. It's obvious we still have room for much more improvement. Constraints are not acts of God; there is much that we can do about them. Whatever the constraints are, there must be a way to reduce their limiting impact, and thus the next step to concentrate on is quite evident: 4. Elevate the System's Constraints. Can we stop here? Yes, your intuition is right. There will be another constraint, but let's verbalize it a little bit better. If we elevate and continue to elevate a constraint, then there must come a time when we break it.


This thing that we have elevated will no longer be limiting the system. Will the system's performance now go to infinity? Certainly not. Another constraint will limit its performance, and thus the fifth step must be: 5. If in the Previous Steps a Constraint Has Been Broken, Go Back to Step 1. Unfortunately, we cannot state these five steps without adding a warning to the last one: "But Do Not Allow Inertia to Cause a System Constraint." What usually happens is that within our organization we derive, from the existence of the current constraints, many rules. Sometimes formally, many times just intuitively. When a constraint is broken, it appears that we don't bother to go back and review those rules. As a result, our systems today are limited mainly by policy constraints. We very rarely find a company with a real market constraint, but rather, with devastating marketing policy constraints.


We very rarely find a true bottleneck on the shop floor; we usually find production policy constraints. Alex didn't have to buy a new oven or a new NCX machine. He just had to change some of the production policies that were employed in his plant. And in all cases the policies were very logical at the time they were instituted. Their original reasons have since long gone, but the old policies still remain with us. The general process thus can be summarized, using the terminology of the system we seek to improve, as:

1. Identify the system's constraints.
2. Decide how to exploit the system's constraints.
3. Subordinate everything else to the above decision.
4. Elevate the system's constraints.
5. If in the previous steps a constraint has been broken, go back to step one, but do not allow inertia to cause a system constraint.

As we said before, the only way not to cause severe distortions is to describe the same process, but this time using the terminology of the improvement process itself.
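The five steps can be written as the loop they describe. The "system" below is a deliberate toy (a dict of capacities whose constraint is the least-capacity resource), and the resource names and elevation amount are arbitrary assumptions:

```python
# A toy rendering of the five focusing steps. Steps 2 and 3 are managerial
# work with no effect on this simplified capacity model, so they appear
# only as comments; identify and elevate drive the loop.

def focusing_process(capacities, rounds=3, elevate_by=25):
    history = []
    for _ in range(rounds):
        constraint = min(capacities, key=capacities.get)   # 1. identify
        history.append(constraint)
        # 2. exploit: run the constraint flat out, waste none of it.
        # 3. subordinate: pace every other resource to the constraint.
        capacities[constraint] += elevate_by               # 4. elevate
        # 5. the constraint may now be broken, so the next iteration
        # re-identifies it instead of managing by the old rules (inertia).
    return history

caps = {"oven": 40, "mill": 100, "assembly": 60}
history = focusing_process(caps)
assert history == ["oven", "assembly", "oven"]   # the constraint moves
```

The point of the loop structure is step 5: once "oven" is elevated past "assembly," continuing to schedule the plant around the oven would be exactly the inertia the warning forbids.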


Every manager is overwhelmed with problems, or as some would call them, opportunities. We all tend to concentrate on taking corrective actions that we know how to take, not necessarily concentrating on the problems we should correct and the actions needed to correct those problems. Thus, if a process of ongoing improvement is to be effective, we must first of all find WHAT TO CHANGE. In other words, the first ability that we must require from a manager is the ability to pinpoint the core problems, those problems that, once corrected, will have a major impact, rather than drifting from one small problem to another, fooling ourselves into thinking that we are doing our job.


But once a core problem has been identified, we should be careful not to fall into the trap of immediately struggling with the question of how to cause the change. We must first clarify to ourselves TO WHAT TO CHANGE TO, otherwise the identification of core problems will only lead to panic and chaos. Thus, we should also require that a manager acquire the ability to construct simple, practical solutions. In today's world, where almost everybody is fascinated by the notion of sophistication, this ability to generate simple solutions is relatively rare. Nevertheless, we must insist on it. It's enough to remind ourselves of what we have so harshly learned from reality, over and over again: complicated solutions don't work, simple ones might. Once the solution is known, and only then, are we facing the most difficult question of HOW TO CAUSE THE CHANGE. If the first two questions, WHAT TO CHANGE? and TO WHAT TO CHANGE TO? are considered to be technical questions, then the last one, HOW TO CAUSE THE CHANGE?


is definitely a psychological one. But are we well prepared for such questions? In our organizations there is generally more than just a little bit of politics.







