## Protect valuable intellectual property in Octave

Posted by John F. McGowan, Ph.D. in Applied Math on June 27th, 2011 | 4 responses

Octave is a free, open-source, high-level interpreted language, primarily intended for numerical computations, that is mostly compatible with MATLAB. Octave is an excellent tool for the rapid research and development of new algorithms as well as for performing simulations and data analysis. A mathematical software developer can often prototype a new algorithm in Octave two to three times faster than in a compiled programming language such as C or C++. Unlike MATLAB, Octave is free both as in beer and as in speech. Anyone can download Octave and run an Octave program at no cost on the three major computing platforms: MS Windows, Mac OS X, and other forms of the Unix operating system. Because Octave is open-source, there is much less concern that the vendor will suddenly cease support, as Microsoft did with Visual FoxPro, or redesign the language into something unusable in order to sell yet another “upgrade.” End users can always build the language from source and create a development “fork” that preserves compatibility with existing code and the elegance of the original language.

### The Problem

A major problem with Octave and many other scripting languages is that they are interpreted, human-readable languages. Potential and actual customers and other third parties can see in detail what is being done. It is easy to reverse engineer or steal programs and algorithms written in scripting languages such as Octave.

Imagine that you are a small company operating on a shoestring budget in a loft in West Hollywood that has developed a breakthrough video special effect in Octave. You want to win a contract from a Hollywood movie studio to do the effect in the next blockbuster science fiction movie starring Angelina Jolie and Brad Pitt as quarreling lovers caught in an alien invasion. The famous Hollywood movie studio wants to evaluate the algorithm in-house and make sure you are not cheating with Photoshop on the glamor shot of Angelina in a skin-tight black leather jumpsuit that they sent you. The problem is that the famous Hollywood studio that you are pitching to would steal your algorithm in a microsecond if they could. You are confronted with the cost, time, and general difficulty of converting your hot new video special effect algorithm into a compiled language such as C or C++. Meanwhile your competitors at Really Cool FX in Pasadena may come out with the same algorithm while you are struggling to convert it to C or C++.

You could be a quantitative finance wizard operating out of a poorly ventilated office in Jersey City, New Jersey with a spectacular view of scenic downtown Jersey City visible through your tiny west-facing window. You would like to sell your hot new nanosecond trading algorithm to a Too Big To Fail bank so you can move to a plush, well-ventilated corner office across the Hudson River in New York City’s financial district, but the bank insists they must thoroughly evaluate the algorithm in-house. Probably enough said right there.

You might be an idealistic junior faculty member at a prestigious, but very low paying, major research university in San Francisco. You have developed the breakthrough algorithm in quantitative biology that will cure cancer — in Octave. Now, you are completely above crass materialistic concerns and plan to follow the illustrious example of Jonas Salk in refusing to patent the polio vaccine, donate regularly to the Free Software Foundation, and keep an autographed poster of Richard Stallman in your tiny cramped office, but nonetheless you would like to get tenure and move out of your landlady’s attic. You know full well that the eminent full professor down the hall who got passed over for last year’s Nobel Prize would steal your idea in a picosecond if he could; it is common knowledge in the department that his didn’t-quite-get-the-Nobel-Prize work was actually stolen from his former graduate student, who is now driving a taxicab in New York City. How do you demonstrate your breakthrough algorithm without giving away the secret, and get tenure?

### The Solution

Fortunately, one can obfuscate Octave code, removing nearly all human-readable information, much as a compiler does when it translates a program written in C or C++ into a machine-readable binary executable. This raises the bar for stealing your ideas and algorithms considerably. In general, code obfuscation removes all comments, indentation, and other formatting that clarifies what is going on, and replaces all human-readable variable and function names with random strings of characters that convey no meaning to a human reader. Note that the human-readable information is completely removed from the obfuscated code. In contrast, some schemes to protect programs written in scripting languages use encryption: the program is encrypted, but if someone can find or determine the encryption key, they can recover the entire original program, including comments, human-readable names, and so forth.
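To make the idea concrete, here is a minimal sketch of the renaming step in Python (not the author's actual Octave obfuscator): it strips `%` comments and replaces a supplied list of user-defined identifiers with random uppercase names. The `obfuscate` function and its inputs are hypothetical illustrations; a real obfuscator would need a proper parser rather than regular expressions (for example, this sketch would also strip a `%` inside a string literal).

```python
import random
import re
import string

def obfuscate(source, user_identifiers):
    """Toy sketch of code obfuscation: strip comments, rename identifiers.

    Reserved words and built-in functions are simply left out of
    user_identifiers, so they remain readable, as in the example below.
    """
    rng = random.Random(0)  # fixed seed so the example is reproducible
    mapping = {name: ''.join(rng.choice(string.ascii_uppercase) for _ in range(12))
               for name in user_identifiers}
    pieces = []
    for line in source.splitlines():
        line = re.sub(r'%.*', '', line)           # remove % comments
        for name, alias in mapping.items():       # rename user identifiers
            line = re.sub(r'\b' + name + r'\b', alias, line)
        if line.strip():
            pieces.append(line.strip())
    return ' ; '.join(pieces)

code = "myflag = 1;  % test comment\nmyflag = myflag + 1;\ndisp(myflag);"
print(obfuscate(code, ['myflag']))
```

The result keeps built-ins such as `disp` readable while every occurrence of `myflag` becomes a meaningless random name and the comment disappears entirely.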

### A Simple Example

This is a simple script in Octave.

mytest.m

```matlab
% test script
disp('hello world'); % test comment
myflag = 1;
printf("this is a test\n");
fflush(stdout);
myflag = myflag + 1;
myflag2 = myflag++;
printf("myflag2 is %d\n", myflag2);
fflush(stdout);
if flag > 1
  disp('hi');
else
  disp('no');
end
for counter = 1:10
  disp(counter); % test
end
pivalue = pi;
disp(pivalue)
disp('ALL DONE');
```

This script generates the following output under Octave 3.2.4 running on a Windows XP Service Pack 2 PC:

```
octave-3.2.4.exe:18> mytest
hello world
this is a test
myflag2 is 2
no
1
2
3
4
5
6
7
8
9
10
3.1416
ALL DONE
```

Here is an obfuscated version of the same Octave script generated by an obfuscation function written by the author in Octave:

mytest_obfuscated.m

```matlab
disp ( 'hello world' ); ; UQWSKDTZQWRO=1 ; ; printf ( "this is a test\n" ); ; fflush ( stdout ); ; UQWSKDTZQWRO=UQWSKDTZQWRO+1 ; ; BSJRZMSBRYXD=UQWSKDTZQWRO++; ; printf ( "myflag2 is %d\n" , BSJRZMSBRYXD ); ; fflush ( stdout ); ; if flag>1 ; disp ( 'hi' ); ; else ; disp ( 'no' ); ; end ; for RBVZQAHJSNWB=1:10 ; disp ( RBVZQAHJSNWB ); ; end ; VIENISLJPENX=pi ; ; disp ( VIENISLJPENX ) ; disp ( 'ALL DONE' ); ;
```


This script generates the following output (the same as the original script) under Octave 3.2.4 running on a Windows XP Service Pack 2 PC:

```
octave-3.2.4.exe:22> mytest_obfuscated
hello world
this is a test
myflag2 is 2
no
1
2
3
4
5
6
7
8
9
10
3.1416
ALL DONE
```

Note that the reserved keywords such as “if” and built-in Octave functions such as “printf” are not obfuscated. It is actually possible to make the obfuscated code even more unreadable than the example above. This is intended as a simple illustration. The obstacles to reverse engineering and theft introduced by code obfuscation are greater for longer programs and more complex algorithms.

### Conclusion

A major problem with Octave and other scripting languages is that it is easy for potential or actual customers or other third parties to reverse engineer or steal algorithms or other sensitive information from a program written in a human-readable scripting language. This can be a serious problem for algorithm developers using Octave. This is much less of a problem with compiled languages such as C or C++, although algorithm development in these languages is usually slower and more costly than in Octave. Compilers generate unreadable binary files which are difficult (though not impossible) to reverse engineer.

Computer programs can obfuscate Octave code, automatically removing human readable information such as comments, variable and function names, indentations, and so forth. This is very close to the same information that is removed by compilers when they convert a program written in a compiled programming language such as C or C++ to a binary executable. In some ways, this is more secure than encrypting the code since the information is actually removed entirely from the obfuscated code; the encryption can be broken, often by simply stealing the encryption key. Code obfuscation raises the bar substantially for reverse engineering or stealing an algorithm or other critical intellectual property implemented in Octave. The same comments apply to other scripting languages such as Python, Perl, and Ruby.

John F. McGowan, Ph.D. solves problems by developing complex algorithms that embody advanced mathematical and logical concepts, including video compression and speech recognition technologies. He has extensive experience developing software in C, C++, Visual Basic, Mathematica, MATLAB, and many other programming languages. He is probably best known for his AVI Overview, an Internet FAQ (Frequently Asked Questions) on the Microsoft AVI (Audio Video Interleave) file format. He has worked as a contractor at NASA Ames Research Center involved in the research and development of image and video processing algorithms and technology. He has published articles on the origin and evolution of life, the exploration of Mars (anticipating the discovery of methane on Mars), and cheap access to space. He has a Ph.D. in physics from the University of Illinois at Urbana-Champaign and a B.S. in physics from the California Institute of Technology (Caltech). He can be reached at jmcgowan11@earthlink.net.


## Eberhardt Rechtin and the barrier course: a case study in mathematics education

Posted by John F. McGowan, Ph.D. in History, Math Education on July 5th, 2011 | no responses

At present, at any rate, very little evidence exists that great mathematicians and calculating prodigies have been endowed with an exceptional neurobiological structure. Like the rest of us, experts in arithmetic have to struggle with long calculations and abstruse mathematical concepts. If they succeed, it is only because they devote a considerable time to this topic and eventually invent well-tuned algorithms and clever shortcuts that any of us could learn if we tried, and that are carefully devised to take advantage of our brain’s assets and get round its limits.

Stanislas Dehaene
The Number Sense, pages 7-8

Eberhardt Rechtin was a system engineer who helped develop the Deep Space Network at the Jet Propulsion Laboratory, served as Director of the Advanced Research Projects Agency (now DARPA), Assistant Secretary of Defense for Telecommunications, chief engineer for Hewlett-Packard, President and CEO of the Aerospace Corporation, and finally a Professor at the University of Southern California before his retirement. In 1995, he gave an interview on his career to Frederik Nebeker of the IEEE Center for the History of Electrical Engineering, which is available on-line. This is a far-ranging interview covering a lengthy and distinguished career. In it, he discusses his experience with a “graduate barrier course” at Caltech (the California Institute of Technology) while getting his Ph.D. in Electrical Engineering (awarded in 1950). This was a highly mathematical course titled Electromagnetics taught by Professor William R. Smythe. According to Rechtin’s account, this course was designed to get rid of Ph.D. students who could not “cut it” at Caltech in the 1940s. This article explores what the barrier course may or may not have actually been doing.

By his own account, Rechtin had been a straight A student until he took this barrier course.  He flunked the course to his surprise.  Although Caltech allowed students to retake the course, the students who flunked usually failed on a second attempt.  At least according to Rechtin’s account many years later, the odds were very much against him.  He studied the book for the course over the summer, working through problem after problem, apparently without too much success.  Then he realized that every problem in the book had two ways to work out the answer.  One was apparently the standard, brute-force answer which took a long time, too long for the short tests and exams, and was tedious to perform.  This was what he had been doing.  But in every case, there was a quick way to solve the problem by reusing mathematical solutions to other problems that had been worked out by mathematicians or engineers previously.  In his account, he mentions a problem that could be solved quickly using Bessel functions.  He knew nothing about Bessel functions when he took the course the first time.  There was always a “trick” solution to the problems that involved reusing known advanced mathematics.  Rechtin took the course a second time and passed easily according to his account.

Rechtin seems to have interpreted the Electromagnetics barrier course as a sort of intelligence test in which the smarter, better students by Caltech standards would figure out as he did that the problems were solvable by reusing various known pieces of advanced mathematics such as Bessel functions.  He also took it as a lesson for his career, to always look for quick ways to solve a problem by reusing known mathematics or previous work: don’t reinvent the wheel — certainly good advice.  But is that actually what happened; was the effect of the barrier course on other students what Rechtin thought or even what the Caltech professors thought?

### The Deliberate Practice Interpretation of the Barrier Course

Deliberate practice is the central concept of K. Anders Ericsson‘s theory of expert performance, which has recently been popularized by science writer Malcolm Gladwell in his book Outliers, previously reviewed in the article Debating Deliberate Practice. Deliberate practice is somewhat vaguely defined, which is one of the major problems with the theory of expert performance. Ericsson uses the example of the backhand in tennis, which is a relatively rare move in the game. Tennis players who repeatedly practice rare moves such as the backhand will, in general, defeat players who do not engage in specific deliberate practice of the backhand or other rare moves. Someone who engages in deliberate practice of this type may well defeat players with many more years of experience playing the game, but relatively little practice of rare moves. This is the gist of the concept of deliberate practice. In some contexts, Ericsson uses deliberate practice in a more general way to refer to a process of continuous self-improvement and conscious analysis of one’s performance and errors.

In intellectual activities such as mathematics, the notion is that, especially in a timed contest or exam, if the mathematician encounters a problem that is too complex, lengthy, and so forth to solve from first principles in the limited time available, a few hours for most exams in most college and university courses, the mathematician will fail.  On the other hand a mathematician who has specifically studied and practiced this specific type of problem, such as an electromagnetics problem that is solved with Bessel functions, will solve the problem quickly and easily.   There will be a dramatic difference between the two on many exams.

Ericsson’s theory emphasizes specific knowledge in a specific field or discipline.  Ericsson largely rejects the notion of genius or general intelligence as well as an inborn aptitude for a specific subject.  There are no born mathematicians.  It is all study and practice, and a special kind of practice — deliberate practice.  Deliberate practice is critical to Ericsson’s theory.  There are clearly many examples of mathematicians or chess players or musicians who have many, many years of experience, but do not perform at the expert or “star” level.  Why do some people with a few years of experience, often ten years, outperform people with decades of experience, especially in intellectual activities where physical aging is not as large a factor as in sports?

In fact, the barrier course that Rechtin encountered sounds like a good example of deliberate practice. The problems apparently required detailed specific knowledge, such as a knowledge of Bessel functions. In the absence of this knowledge, the problems took too long to solve in the limited time available, usually a few hours. Once he figured out what was going on, Rechtin probably spent many hours studying Bessel functions and other specific mathematical methods, although he does not explicitly say this in his interview.

### What did the barrier course actually do?

It is far from clear what the barrier course actually did or what it was actually supposed to do. People, families, and cultures have different beliefs and attitudes toward study and practice. In the United States, “rote memorization” or “studying to the test” is generally deprecated and “thinking things through from first principles” or “thinking for yourself” is often glorified, at least in theory. This is not unique to the United States. The author has heard parents from India, for example, express concern that their child was not being taught to think things through in school in the United States. The common stereotype is that Asian cultures such as China and Japan place a strong emphasis on heavy practice. Students from a background that emphasized practice and drilling, and who were already studying technical minutiae like Bessel functions, would have been likely to pass the barrier course easily. On the other hand, students who were accustomed to “thinking things through,” and Eberhardt Rechtin sounds very much like this kind of student in his interview, would tend to fail. It often would not occur to the “think it through” students to engage in deprecated “rote memorization” unless someone told them. Rechtin is clear that no one, neither the other graduate students nor the faculty, would tell him how to pass the course; he had to figure it out on his own or already know what to do.

In his account, Eberhardt Rechtin interpreted the barrier course as an intellectual puzzle that he figured out.  That is, he thought the problem of the course through and realized that he needed to reuse existing mathematical knowledge such as Bessel functions and this general reasoning insight was the whole point of the barrier course.  Maybe it was.  Maybe it wasn’t.  He probably interpreted what he experienced from his personal and cultural background.  The barrier course could just as easily have had the effect of selecting rather unimaginative students whose high performance was a consequence of heavy drilling and who had poor abilities to think things through.  One can imagine professors eager for unimaginative drones to perform intellectual drudge work and not think things through and ask unwanted or unsettling questions: Professor Millikan, after reviewing your papers, I am pretty sure your theory that cosmic rays are caused by nuclear fusion in deep space is all wrong for reasons X, Y, and Z.

In fact, the effect and the selection of students could have been completely random.  Some students would have figured out the trick immediately without flunking the course, unlike Eberhardt Rechtin.  A few might have figured it out and passed on the second try as Rechtin did.  Many might have simply glided through the course because they were already practicing or rapidly assimilating existing specialized knowledge (maybe they could learn existing knowledge through study — reading a textbook about Bessel functions, for example — with little practice or drilling).  The barrier course could have selected several different types of students.  By his own account, Rechtin’s experience was very unusual; most students who flunked did not pass on the second attempt.

### Conclusion

Obviously, one should not draw firm conclusions from a single case, let alone a verbal account of something that happened over forty-five years before. Nonetheless, Eberhardt Rechtin’s account is similar to other selection procedures that the author has experienced or heard of in graduate programs in mathematical fields such as physics or electrical engineering. These procedures usually have the ostensible purpose, whether stated or not, of selecting the “best and brightest” as conventionally defined. They also often serve as a rite of passage, perhaps not unlike boot camp in the Marines or hazing in a fraternity, and this may be their true purpose and function.

For students, there are some probable lessons from this case study.  Some tests and exams can be worked out in the time available from first principles.  This often seems to be true of math and science problems in elementary, middle, and high school (K-12).  An emphasis on first principles and general reasoning methods will likely succeed with these problems, tests, and exams.  Some tests and exams have trick problems that require specific knowledge learned in advance of the test like Bessel functions in Rechtin’s account.  These require specific study and possibly heavy practice to master and overcome. These problems appear to be more common in more advanced math, science, and engineering courses at colleges or universities.  For parents and teachers, it is likely important to teach students to be aware of this and to identify the situation to the extent that this is possible.

This case study also illustrates the difficulty and perhaps impossibility of distinguishing between specific knowledge and hypothetical general intelligence or special aptitude (a born mathematician) using tests and exams.  Is there a mental horsepower and, if so, what is it?  If there is a mental horsepower, is it a single attribute or several?  Did the barrier course select “geniuses” who figured out the trick as Rechtin did or did it select intellectual “drones” who had already memorized the answers or both?  It may be that some exceptionally intelligent students were able to pass the barrier course without the specific knowledge of Bessel functions and other mathematical methods that Rechtin had to acquire through study and practice.  It may be that some students passed due to heavy practice of special methods such as Bessel functions or rapid absorption of existing knowledge through study (whether due to some innate ability to learn existing knowledge easily or studying the right, unusually clear textbook, for example, greatly reducing the need to practice).  The selection of students who could “cut it” may have been largely random.


## The complexity of periodic strings

I recently stumbled on some notes (in Russian) of a public lecture given by Vladimir Arnold in 2006. In this lecture Arnold defines a notion of complexity for finite binary strings.

Consider the set of binary strings of length n. Let us first define the Ducci map, which acts on this set. The result of this operator acting on a string a1a2 … an is a string of length n whose i-th character is |ai − ai+1| for i < n, and whose n-th character is |an − a1|. We can view this as a differential operator over the field F2, with the strings wrapped around. Equivalently, we can say that the strings are periodic and infinite in both directions.
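The definition above translates into a few lines of code. Here is a sketch in Python (used purely for illustration): over F2, the absolute difference of adjacent bits is just XOR.

```python
def ducci(s):
    """Apply the Ducci map to a binary string, with wrap-around.

    The i-th output character is |s[i] - s[i+1]| with indices taken
    mod len(s); for 0/1 characters this is XOR of adjacent bits.
    """
    n = len(s)
    return ''.join(str(int(s[i]) ^ int(s[(i + 1) % n])) for i in range(n))

print(ducci('000101'))  # → 001111
```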

Let us consider as an example the action of the Ducci map on strings of length 6. Since the Ducci map respects cyclic permutation and reflection, I only check strings up to cyclic permutation and reflection. If I denote the Ducci map as D, then it is determined by its action on the following 13 strings, which represent all 64 strings up to cyclic permutation and reflection: D(000000) = 000000, D(000001) = 000011, D(000011) = 000101, D(000101) = 001111, D(000111) = 001001, D(001001) = 011011, D(001011) = 011101, D(001111) = 010001, D(010101) = 111111, D(010111) = 111001, D(011011) = 101101, D(011111) = 100001, D(111111) = 000000.

Now suppose we take a string and apply the Ducci map several times. By the Dirichlet (pigeonhole) principle, this procedure is eventually periodic. For strings of length 6 there are 4 cycles. A cycle of length 1 consists of the string 000000. A cycle of length 3 consists of the strings 011011, 101101, and 110110. Finally, there are two cycles of length 6: the first is 000101, 001111, 010001, 110011, 010100, 111100, and the second is the same cycle shifted by one character.
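The cycle structure for length 6 can be checked by brute force. The following Python sketch (an illustration, not part of the original essay) iterates the map from every one of the 64 strings and collects the distinct cycles:

```python
from itertools import product

def ducci(s):
    # XOR of adjacent bits, wrapping around the end of the string
    n = len(s)
    return ''.join(str(int(s[i]) ^ int(s[(i + 1) % n])) for i in range(n))

def find_cycles(n):
    """Collect the distinct Ducci cycles among all 2**n binary strings."""
    cycles, on_cycle = [], set()
    for bits in product('01', repeat=n):
        s, trajectory = ''.join(bits), []
        while s not in trajectory:      # iterate until a string repeats
            trajectory.append(s)
            s = ducci(s)
        cycle = trajectory[trajectory.index(s):]  # the periodic part
        if s not in on_cycle:
            cycles.append(cycle)
            on_cycle.update(cycle)
    return cycles

print(sorted(len(c) for c in find_cycles(6)))  # → [1, 3, 6, 6]
```

The output confirms the count stated above: one fixed point, one 3-cycle, and two 6-cycles.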

We can represent strings as vertices and the Ducci map as a collection of directed edges between vertices. The 64 vertices corresponding to strings of length 6 generate a graph with 4 connected components, each of which contains a unique cycle.

The Ducci map is similar to a differential operator. Thus, sequences that end at the point 000000 are similar to polynomials. Arnold decided that polynomials should have lower complexity than other functions. I do not quite agree with this decision, and I do not have a good explanation for it; in any case, he suggests the following concept of complexity for such strings.

Strings ending in longer cycles should be considered more complex than strings ending in shorter cycles. Within a connected component, strings that are farther away from the cycle should have greater complexity. Thus the string 000000 has the lowest complexity, followed by the string 111111, as D(111111) = 000000. Next in growing complexity are the strings 010101 and 101010. At this point the strings that represent polynomials are exhausted, and the next most complex strings would be the three strings that represent the cycle of length three: 011011, 101101 and 110110. If we assign 000000 a complexity of 1, we can assign a number representing complexity to every other string. For example, the string 111111 would have complexity 2, and the strings 010101 and 101010 would have complexity 3.

I am not completely satisfied with Arnold’s concept of complexity. First, as mentioned before, I think some high-degree polynomials are just as ugly as other functions, so there is no reason to consider them to have lower complexity. Second, I want a definition of complexity for periodic strings. There is a slight difference between periodic strings and finite strings wrapped around. The string 110 of length 3 and the string 110110 of length 6 correspond to the same periodic string, but as finite strings it might make sense to think of 110110 as more complex than 110. Since I want to define the complexity of periodic strings, I want the complexities of the periodic strings corresponding to 110 and 110110 to be the same. So this is my definition of complexity for periodic strings: the complexity of a string is the number of edges we need to cross in the Ducci graph until we reach a string we have seen before. For example, let us start with the string 011010. The arrows represent the Ducci map: 011010 → 101110 → 110011 → 010100 → 111100 → 000101 → 001111 → 010001 → 110011. We encountered 110011 before, so the number of edges, and hence the complexity, is 8.
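This definition translates directly into code. Here is a Python sketch (again, an illustration rather than part of the essay) that counts Ducci edges until a string repeats:

```python
def ducci(s):
    # XOR of adjacent bits, wrapping around the end of the string
    n = len(s)
    return ''.join(str(int(s[i]) ^ int(s[(i + 1) % n])) for i in range(n))

def complexity(s):
    """Number of Ducci-graph edges crossed before revisiting a string."""
    seen, steps = set(), 0
    while s not in seen:
        seen.add(s)
        s = ducci(s)
        steps += 1
    return steps

print(complexity('011010'))  # → 8
```

This reproduces the values discussed earlier: `complexity('000000')` is 1, `complexity('111111')` is 2, and the strings 110 and 110110, which represent the same periodic string, get the same complexity.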

The table below describes the complexity of binary strings of length 6. The first column contains one string per equivalence class up to rotation and reflection. The second column shows the number of strings in the class. The next column contains the Ducci map of the given string, followed by the length of the end cycle. The last two columns show Arnold’s complexity and my complexity.

| String s | # Strings | D(s) | Length of end cycle | Arnold's complexity | My complexity |
|----------|-----------|------|---------------------|---------------------|---------------|

As you can see, for strings of length six my complexity does not differ much from Arnold’s complexity, but for longer strings the difference is more significant. I am also glad to see that the sequence 011010, the one that I called the random sequence in one of my earlier essays, has the highest complexity.

I know that my definition of complexity applies only to periodic sequences. For example, the binary expansion of pi has a very high complexity, although it can be represented by a single Greek letter. But for periodic strings it always produces a number that can be used as a measure of complexity.


## Genius, achievements and the Manhattan Project

Posted by John F. McGowan, Ph.D. in Applied Math on June 20th, 2011 | no responses

In an enterprise such as the building of the atomic bomb the difference between ideas, hopes, suggestions and theoretical calculations, and solid numbers based on measurement, is paramount. All the committees, the politicking and the plans would have come to naught if a few unpredictable nuclear cross sections had been different from what they are by a factor of two.

Emilio Segre (Nobel Prize in Physics, 1959, key contributor to the Manhattan Project) quoted in The Making of the Atomic Bomb by Richard Rhodes (Simon and Schuster, 1986)

### Introduction

It is widely believed that invention and discovery, especially breakthroughs, revolutionary technological advances and scientific discoveries, are largely the product of genius, of the exceptional intelligence of individual inventors and discoverers. This is one of the lessons frequently inferred from the success of the wartime Manhattan Project which invented the atomic bomb and nuclear reactors. It is often argued that the Manhattan Project succeeded because of the exceptional intelligence of the physicists, chemists, and engineers who worked on the atomic bomb such as Emilio Segre, quoted above. The scientific director J. Robert Oppenheimer is often described as a genius, as are many other key contributors.

Since World War II, there have been numerous “new Manhattan Projects” which have recruited the best and the brightest as conventionally defined and mostly failed to replicate the astonishing success of the Manhattan Project: the War on Cancer, tokamaks, inertial confinement fusion, sixty years of heavily funded research into artificial intelligence (AI), and many other cases. As discussed in the previous article “The Manhattan Project Considered as a Fluke,” the Manhattan Project appears to have been a fluke, atypical of major inventions and discoveries, especially in the success of its first full system tests: the Trinity test explosion (July 16, 1945) and the atomic bombings of Hiroshima and Nagasaki (August 6 and 9, 1945), which cost the lives of over 100,000 people and which are, fortunately, so far the only examples of the use of atomic weapons in war.

With rising energy prices, possibly due to “Peak Oil,” a dwindling supply of inexpensive oil and natural gas, there have already been many calls for “new new Manhattan Projects” for various forms of alternative energy. If “Peak Oil” is correct, there is an urgent and growing need for new energy sources. Given the long history of failure of “new Manhattan Projects,” what should we do? This article argues that the importance of genius in breakthroughs is heavily overstated both in scientific and popular culture. Much more attention should be paid to other aspects of the breakthrough process.

To a significant extent, the issue of human genius in inventions and discovery overlaps the topic of the previous article “But It Worked in the Computer Simulation!” which argues that computer simulations have many limitations at present. Frequently, when people refer to human genius they are referring to the ability of human beings to simulate their ideas in their head without actually building a machine or performing a physical experiment. Many of the limitations that apply to theoretical mathematical calculations and computer simulations apply to human beings as well.

One important difference at present is that human beings think conceptually and computers at present cannot. This article argues that many historical breakthroughs were due to an often unpopular contrarian mental attitude that is largely uncorrelated with “genius” as conventionally defined — not due to exceptional conceptual reasoning skills. The success of this contrarian mental attitude is often dependent on the acceptance, which is usually grudging at first, of society at large.

A Note to Readers: The issue of genius and breakthroughs is highly relevant to invention and discovery in mathematics, both pure and applied. This article discusses many examples from applied mathematical fields such as physics, aerospace, power, propulsion, and computers. Nonetheless, it is not a mathematics specific article.

What is Genius?

Genius is difficult to define. It is usually conceived as an innate ability, often presumed to be genetic in origin, to solve problems through reasoning better than most people. It is often discussed as if it referred to a simple easily quantifiable feature of the mind such as the speed at which people think consciously (in analogy to the clock speed of a computer) or the number of items that one can keep track of in the conscious mind at once (in analogy to the number of registers in a CPU or the amount of RAM in a computer). People have tried to quantify a mysterious “general intelligence” through IQ tests. In practice, genius is often equated with a high IQ as measured on these tests (e.g. an IQ of 140 or above on some tests is labeled as “genius”).

Genius is an extremely contentious topic. Political conservatives tend to embrace genius and a genetic basis for genius. Political liberals tend to reject genius and especially a genetic basis for genius. Some experts such as the psychologist K. Anders Ericsson essentially deny that genius exists as a meaningful concept. The science writer Malcolm Gladwell, who has heavily popularized Ericsson’s ideas, stops just short of “denying” genius in his writings and public presentations.

Many people, including the author, have a subjective impression that some people are smarter than others. The author has met a number of people who seemed to him clearly smarter than himself. This seemed difficult to explain in purely environmental terms. It is extremely difficult in practice to separate environment from possible genetic factors or other as yet unknown factors that may contribute to perceived or measured “intelligence.” Sometimes really smart people do extremely dumb things: why?

Genius is almost always conceived as an individual trait, similar to height or hair color, something largely independent of our present social environment. Geniuses are exceptional individuals independent of their friends, family, coworkers and so forth. Genius may be the product of environment in the sense of better schooling and so forth. Rich kids generally go to better schools or so most people believe. Nonetheless, in practice, in the scientist’s laboratory or the inventor’s workshop, “genius” is viewed as an individual trait. This conception of individual genius coexists with curious rhetoric about “teams” in business or “scientific communities” in academic scientific research today.

In particular, genuine breakthroughs usually take place in a social context, as part of a group. Historically, prior to World War II and the transformation of science that occurred during the middle of the twentieth century, these were often small, loose-knit, informal groups. James Watt collaborated loosely with some professors at the University of Glasgow in developing the separate condenser steam engine. Octave Chanute and the Wright Brothers seem to have collaborated informally without a written contract or clear team leader. Albert Einstein participated in a physics study group while at the patent office and worked closely at times with his friend and sometimes co-author the mathematician Marcel Grossmann. In his work on a unified field theory, in a different social context at the Institute for Advanced Study at Princeton, Einstein largely failed.

After success, there were often bitter fallings out over credit: “I did it all!” The “lone” inventor or discoverer that is now remembered and revered is typically the individual who secured the support of a powerful institution or individual as James Watt did with wealthy industrialist Matthew Boulton, the Wright Brothers (minus Octave Chanute) did with the infamous investment firm of Charles Flint and Company, and Einstein did with the powerful German physicist Max Planck and later the British astronomer and physicist Arthur Eddington. In a social context, the whole can be greater than the sum of the parts. A group of mediocrities that work well together (whatever that may mean in practice) can outperform a group of “stars” who do not work well together. There may be no individual genius as commonly conceived.

This article accepts that individual genius probably exists as a meaningful concept, but genius is poorly understood. It argues that genius is not nearly as important in genuine scientific and technological breakthroughs as generally conceived.

Genius and Breakthroughs in Popular Culture

In the United States, popular culture overwhelmingly attributes scientific and technological breakthroughs to genius, to extreme intelligence. This is especially true of science fiction movies and television such as Eureka, Numb3rs, Star Trek, The Day the Earth Stood Still (1951), The Absent Minded Professor (1961), Real Genius (1985), and many others. Movies and television frequently depict extremely difficult problems being solved with little or no trial and error very quickly, sometimes in seconds. It is common to encounter a scene in which a scientist is shown performing some sort of symbolic manipulation on a blackboard (sometimes a modern white board or a see-through sheet of plastic) in seconds on screen and then solving some problem, often making a breakthrough, based on the results of this implied computation or derivation. This is also extremely common in comic books. There are a number of materials in popular culture aimed specifically at children such as the famous Tom Swift book series and the Jimmy Neutron movie and TV show (The Adventures of Jimmy Neutron: Boy Genius) which communicate the same picture. Many written science fiction books and short stories convey a similar image.

Many of these popular culture portrayals are extremely unrealistic, particularly where genuine breakthroughs are concerned. In particular, most genuine breakthroughs took many years, usually at least five years, sometimes decades, even if one only considers the individual or group who “crossed the finish line.” Most genuine breakthroughs, on close examination, have involved large amounts of trial and error, anywhere from hundreds to tens of thousands of trials or tests of some sort.

Ostensibly factual popular science is often similar. It is extremely common to find the term “genius” in the title, sub-title, or cover text of a popular science book or article as well as the main body of the book or article. The title of James Gleick’s biography of the famous physicist Richard Feynman (Nobel Prize in Physics, 1965, co-discoverer of Quantum Electrodynamics aka QED) is… Genius. Readers of the book remain shocked to this day to read that Feynman claimed that his IQ had been measured as a mere 125 in high school; this is well above average but not what is usually identified as “genius.” A genius IQ is at least 140. Feynman scoffed at psychometric testing, perhaps with good reason. One should exercise caution with Feynman’s claims. Richard Feynman was an entertaining storyteller. Some of his accounts of events differ from the recollections of other participants (not an uncommon occurrence in the history of invention and discovery). Feynman’s non-genius IQ is not as surprising as it might seem. One can seriously question whether a number of famous figures in the history of physics were “geniuses” as commonly conceived: Albert Einstein, Michael Faraday, and Niels Bohr, for example.

Popular science often creates a similar impression to the science fiction described above without, however, making demonstrably false statements. Often, the long periods of trial and error and failure that precede a breakthrough are simply omitted or discussed very briefly. The reported flashes of insight, the so-called “Eureka moments,” which can be very fast and abrupt if the reports are true, are generally emphasized and extracted from the usual context of years of study and frequent failure that precede the flash of insight. Popular science books tend to focus on personalities, politics, the big picture scientific or technical issues, and… the genius of the participants. The discussions of the trial and error, if they exist at all, are extremely brief and easy to miss: a paragraph or a few pages in a several hundred page book for example. In the 886 page The Making of the Atomic Bomb, the author Richard Rhodes devotes a few paragraphs to the enormous amount of trial and error involved in developing the implosion lens for the plutonium atomic bomb (page 577, emphasis added):

The wilderness reverberated that winter to the sounds of explosions, gradually increasing in intensity as the chemists and physicists applied small lessons at a larger scale. “We were consuming daily,” says (chemist George) Kistiakowsky, “something like a ton of high performance explosives, made into dozens of experimental charges.” The total number of castings, counting only those of quality sufficient to use, would come to more than 20,000. X Division managed more than 50,000 major machining operations on those castings in 1944 and 1945 without one explosive accident, vindication of Kistiakowsky’s precision approach.

While a close reading of The Making of the Atomic Bomb reveals an enormous amount of trial and error at the component level, it is easy to miss this given how short and oblique the references are, buried in 886 pages. The term “trial and error” is not listed in the detailed 24 page index of the book. The index on page 884 lists Tregaskis, Richard, Trinity, tritium, etc. in sequence — no “trial and error”.

In most cases, popular science books don’t point out the obvious interpretation of these huge amounts of trial and error. One is not seeing the results of genius, certainly not as frequently depicted in popular culture, but rather the results of vast amounts of trial and error. This trial and error is extremely boring to describe in detail, so it is either omitted or discussed very briefly. Where the popular science has the goal of “inspiring” students to study math and science, a detailed exposition of the trial and error is probably a good way to convince a student to go play American football (wimpy American rugby with lots of padding) or soccer (everybody else’s football) instead.

On a personal note, the author read The Making of the Atomic Bomb shortly after it was first published and completely missed the significance of Segre’s quote and the passage above. After researching many inventions and discoveries in detail, it became apparent that the most common characteristic of genuine breakthroughs is vast amounts of trial and error, usually conducted over many years. What about the Manhattan Project? Rereading the book closely reveals occasional clear references to the same high levels of trial and error, in this case at the component level. The Manhattan Project is quite unusual in that the first full system tests were great successes: they worked right the first time. Many of the theoretical calculations appear to have worked better than is typically the case in other breakthroughs.

Remarkably, the Manhattan Project appears to have been unusually “easy” among major scientific and technological breakthroughs. The first full system tests, the Trinity, Hiroshima, and Nagasaki bombs, were spectacular successes which ended World War II in days. This is very unusual. Attempts to replicate the unusual success of the Manhattan Project have mostly failed. It may well be that even in most successful inventions and discoveries the equivalents of the critical nuclear cross sections that Segre mentions in the quote above are less convenient than occurred in the Manhattan Project.

The Rapture for Geeks

In 1986, the science fiction writer and mathematician Vernor Vinge published a novel-length story, “Marooned in Real Time,” in the Analog Science Fiction/Science Fact magazine; it was shortly thereafter published as a book by St. Martin’s Press/Bluejay Books. This novel introduced the notion of a technological singularity to a generation of geeks.

The basic notion that Vinge presented in the novel was that rapidly advancing computer technology would increase or amplify human intelligence. This in turn would accelerate both the development of computer technology and other technology, resulting in an exponential increase, eventually reaching a mysterious “singularity” somewhat in analogy to the singularities in mathematics and physics (typically a place in a mathematical function where the function becomes infinite or undefined). In the novel, most of the human race appears to have suddenly disappeared, possibly the victims of an alien invasion. A tiny group of survivors have been “left behind.” By the end of the novel, it is strongly implied that the missing humans have transcended to God-like status in a technological singularity.

Vinge’s notion of a technological singularity has had considerable influence and it probably also helps sell computers and computer software. It has been taken up and promoted seriously by inventor, entrepreneur, and futurist Ray Kurzweil, the author of such books as The Age of Spiritual Machines and The Singularity is Near. Kurzweil is, for example, the chancellor of the Singularity University which charges hefty sums to teach the Singularity doctrine to well-heeled individuals, likely Silicon Valley executives and zillionaires. Kurzweil’s views have been widely criticized, notably by former Scientific American editor John Rennie and others. The recent movie “Transcendent Man,” available on Netflix and iTunes, gives a friendly but fair portrait of Ray Kurzweil.

The Singularity concept implicitly assumes the common notion that intelligence and genius drive the invention and discovery process. It also assumes that computer technology can amplify or duplicate human intelligence. Thus, if intelligence increases, the number and rate of inventions and discoveries will automatically increase as well. An exponential feedback loop follows logically from these assumptions.
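The feedback-loop argument can be made concrete with a toy model. The sketch below is purely illustrative: the growth law, the constant `k`, and the starting value `i0` are arbitrary assumptions introduced here, not anyone's actual forecast. It supposes that effective intelligence grows at a rate proportional to its own square (the simplest "smarter agents improve themselves faster" law) and shows that such a law blows up in finite time, which is the model's "singularity":

```python
# Toy model of the Singularity argument (purely illustrative; the growth
# law dI/dt = k * I**2 and the parameters k, i0 are arbitrary assumptions).
def simulate(i0=1.0, k=0.05, dt=0.01, max_steps=10_000, cap=1e9):
    """Euler-integrate dI/dt = k * I**2 until I exceeds cap."""
    trajectory = [i0]
    for _ in range(max_steps):
        i = trajectory[-1]
        trajectory.append(i + k * i * i * dt)
        if trajectory[-1] > cap:
            break  # finite-time blow-up: the model's "singularity"
    return trajectory

traj = simulate()
# The exact solution I(t) = i0 / (1 - k*i0*t) diverges at t = 1/(k*i0) = 20;
# the numerical trajectory likewise explodes within a few thousand steps.
assert traj[-1] > 1e9
```

The point of the sketch is only that the feedback assumption, not any measured fact, does all the work: change the growth law from `I**2` to plain `I` (ordinary exponential growth) or to a constant, and the finite-time blow-up disappears entirely.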

If invention and discovery are largely driven by large amounts of physical trial and error (for example), none of this is true. To be sure, fields such as computers and electronics, with small-scale devices where physical trial and error can be performed rapidly and cheaply, will tend to exhibit higher rates of progress than fields with huge, expensive devices that are time-consuming to build, such as modern power plants, tokamaks, particle accelerators and so forth. This is, in fact, what we see at the moment. But there will be no Singularity.

There is now over forty years of experience in fundamental physics and aerospace, both early adopters of computer technology, in using computers to supposedly enhance human intelligence and accelerate the rate of progress. Both of these fields visibly slowed down around 1970 coincident with the widespread adoption of computers in these fields. This is particularly noticeable in aviation and rocketry where modern planes and rockets are only slightly better than the planes and rockets of 1971 despite the heavy use of computers, computer simulations, computer aided design, and so forth. NASA’s recent attempt to replicate the heavy lift rocket technology of the 1970s (the Saturn V rocket), the modern Ares/Constellation program, has foundered despite extensive use of computer technologies far in advance of those used in the Apollo program, which quite possibly owed much of its success to engineers using slide rules.

Similarly, if one looks at the practical results of fundamental physics, comparable to the nuclear reactors that emerged from the Manhattan Project, the results have been similarly disappointing. It is even possible the prototype miniature nuclear reactors and engines of the cancelled nuclear reactor/engine projects of the 1960’s exceed what we can do today; knowledge has been lost due to lack of use.

Are computers and computer software amplifying effective human intelligence? If one looks outside the computer/electronics fields, the evidence for this is generally negative, poor at best. Are computers and computer software accelerating the rate of technological progress, invention and discovery, increasing the rate of genuine breakthroughs? Again, if one looks outside the computer/electronics fields, the evidence is mostly negative. This is particularly noticeable in the power and propulsion areas, where progress appears to have been faster in the slide rule and adding machine era. Rising gasoline and energy prices reflect the negligible progress since the 1970s. The relatively high rates of progress observed in some metrics (e.g. Moore’s Law, the clock speed of CPU’s until 2003, etc.) in computers/electronics can be attributed to the ability to perform large amounts of trial and error rapidly and cheaply combined with cooperative physics, rather than an exponential feedback loop.

Genius and Breakthroughs in Scientific Culture

“Hard” scientists like physicists or mathematicians tend to act as if they believe in “genius” or “general intelligence”. In academia, such scientists tend to be liberal Democrats in the United States. Consciously, they probably do not believe that this genius is an inborn, genetic characteristic. Nonetheless, the culture and institutions of the hard sciences are built heavily around the notion of individual, measurable genius.

Many high school and college math and science textbooks have numerous sidebars with pictures and brief biographical sketches of prominent mathematicians and scientists. These often include anecdotes that seem to show how smart the mathematician or scientist was. A particularly common anecdote is the account of the young Gauss figuring out how to quickly add the numbers from 1 to 100 (the trick: 1 plus 100 is 101, 2 plus 99 is 101, 3 plus 98 is 101, and so on, so the sum is 50 times 101, which is 5050).
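Gauss's pairing argument is easy to verify directly; the few lines of Python below (added here purely as illustration) check the shortcut against a brute-force sum:

```python
# Gauss's pairing trick for summing 1..n: pair 1 with n, 2 with n-1, ...
# Each of the n/2 pairs sums to n + 1, so the total is (n/2) * (n + 1).
n = 100
pair_sum = 1 + n           # each pair sums to 101
num_pairs = n // 2         # 50 pairs
gauss_total = num_pairs * pair_sum

# Check against a direct brute-force sum.
assert gauss_total == sum(range(1, n + 1))
print(gauss_total)  # 5050
```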

Much of the goal of the educational system in math and science is ostensibly to recruit and select the best of the best, in the supposed spirit of the Manhattan Project. There are tests and exams and competitions all designed to select the very best. In modern physics, for example, this means that the very top graduate programs such as the graduate program at Princeton are largely populated by extreme physics prodigies: people who have done things like publish original papers on quantum field theory at sixteen and who, by any reasonable criterion, could, in principle, run rings around historical figures like Albert Einstein or Niels Bohr. But, in practice, they usually don’t.

Psychologists like K. Anders Ericsson, sociologists, anthropologists, and other “softer” scientists are indeed more likely to seriously question the notion of genius and its role in invention and discovery, at least more so than most physicists or mathematicians. Even here, though, Ericsson’s theory, for example, attributes breakthroughs to individual expertise acquired through many years of deliberate practice.

Circular Reasoning

It is common in discussions of breakthroughs to find circular reasoning about the role of genius. How do you know genius is needed to make a breakthrough? Bob discovered X and Bob was a genius! How do you know Bob was a genius? Only a genius could have discovered X!

The belief that genius is the essential driving force behind breakthroughs — the more significant the breakthrough, the more brilliant the genius must have been — is so strong and pervasive that the inventor or discoverer is simply assumed to have obviously been a genius and any contrary evidence dismissed. Richard Feynman’s claim to have had a measured IQ of only 125 often provokes incredulity. It is simply assumed that the discoverer of QED had to have been a genius. James Gleick titled his biography of Feynman Genius in spite of knowing Feynman’s claim.

So too Albert Einstein is almost always assumed to have been a remarkable genius. The author can recall a satirical practice at Caltech, a celebration of a special day for a high school teacher who allegedly flunked Einstein: “What an idiot!” But Einstein in fact was an uneven student. He made many mistakes both in school and in his published papers. He ended up at the patent office, working on his Ph.D. part time at the less prestigious University of Zurich, because his record was not strong. His erstwhile professor Minkowski was famously astounded that Einstein accomplished such amazing things. Einstein seems to have worked on his discoveries over many years, and he seems to have had the contrarian mental attitude so common among people who make major breakthroughs. He also probably would have gone nowhere had Max Planck not become intrigued with several of his papers and heavily promoted them.

Niels Bohr was infamously obscure in his talks and writings. He had very limited mathematical skills and relied first on his brother Harald, a mathematician, and later younger assistants like Werner Heisenberg. Many of his papers and writings are impenetrable. His response in Physical Review to Einstein, Podolsky, and Rosen’s 1935 paper, which is now taken to clearly identify the non-local nature of quantum mechanics in the process of questioning the foundations of quantum theory, is complete gibberish. Yet Bohr acquired such a mystique as a brilliant physicist and genius that many of these dubious writings were uncritically accepted by his students and many other physicists — even to this day.

It is clear that if breakthroughs were usually the product of a short period of time, such as six months or less, and little or no trial and error, as often implied in popular science and explicitly portrayed in much science fiction, something like real genius would be absolutely necessary to explain the breakthroughs. But this is not the case. Almost all major breakthroughs took many years of extensive trial and error. Most inventors and discoverers seem to have been of above average intelligence, like the IQ of 125 that the physicist Richard Feynman claimed, but not clearly geniuses as conventionally defined. Some were definitely geniuses as conventionally defined.

Intelligence or Social Rank?

In discussions of intelligence or genius, one needs to ask whether one is really talking about intelligence, whatever it may be, or social rank. Most societies rely heavily on a hierarchical military chain of command structure. This structure is found equally in government, academia, business, capitalist nations, socialist nations, and communist nations. In military chains of command there is almost always an implicit concept of a simple linear scale of social rank or status as well as specific roles. A general outranks a colonel even though the colonel may not report to the general. A four star general outranks a three star general and so forth. One of the practical reasons for this is so that in a confused situation such as a battle, it is always clear who should assume command, the ranking officer.

In many respects, in the United States, the concept of intelligence is often used as a proxy or stand-in for social rank or status. In academic scientific research, the two are often equated implicitly. An eminent scientist such as Richard Feynman must be a genius, hence the astonishment at his claim to a mere 125 IQ. England in 1776 had a very status-conscious society. Everyone was very aware of their linear rank in society. To give some idea of this, at social dances the dances were chosen in sequence by rank: the highest-ranking woman at the dance chose the first dance, the second-ranking woman the next, and so forth. Somehow everyone knew exactly how each person was ranked in their community. When the United States broke away from England, this notion of rank was questioned and even rejected. Americans deliberately drew lots at dances to decide who would choose the dances, in an explicit rejection of the English notions of status. This is not to portray the early United States as some egalitarian utopia; surely it was not. Nonetheless, from the early days, the United States tended to reject traditional notions of social status and rank, substituting notions like “the land of opportunity.”

But the United States and the modern world have social ranks and status, sometimes by necessity, sometimes not. How to justify this and perhaps also disguise the reality? Aha! Some people are smarter than other people and their position in society is due to their innate intelligence, which (surprise, surprise) is a linear numeric scale, and hard work! All animals are equal, but some animals are more equal than others.

Genius or Mental Attitude?

Clearly there is more to breakthroughs than pure trial and error. Blind trial and error could never find the solution to a complex difficult problem in even hundreds of thousands of attempts. It is clear that inventors and discoverers put a great deal of thought into what to try and what lessons to derive from both failures and successes. Many inventors and discoverers have noted down tens, even hundreds of thousands of words of analysis in their notebooks, published papers, books, and so forth. Something else is going on as well. There is often a large amount of conceptual analysis and reasoning, as well as the trial and error. Can we find real genius here? Maybe.

However, the most common and best understood form of conceptual reasoning leading to a genuine breakthrough does not particularly involve recognizable genius. Actually, one can argue the inventors and discoverers are doggedly doing something rather dumb. In many, many genuine breakthroughs the inventors or discoverers try something that seems like it ought to work over and over again, failing repeatedly. They are often following the conventional wisdom, what “everyone knows”: the motion of the planets is governed by uniform circular motion, rockets have always been made using powdered explosives, Smeaton’s coefficient (aviation) is basic textbook know-how measured accurately years ago for windmills, etc. How smart is it to try something that fails over and over and over again for years? How much genius is truly involved in finally stopping and saying: “you know, something must be wrong; some basic assumption that seems sensible can’t be right”?

At this point, one should make a detailed list of assumptions, both explicit and implicit, and carefully examine the experimental data and theory behind each assumption. Not infrequently in history this process has revealed that something “everyone knew” was not well founded. Then, one needs to find a replacement assumption or set of assumptions. Sometimes this is done by conscious thought or yet more trial and error: what if the motion of the planets follows an ellipse, one of the few other known mathematical functions in 1605 when Kepler discovered the elliptical motion of Mars?

Sometimes the new assumption or group of assumptions seems to pop out of nowhere in a “Eureka” moment. The inventor or discoverer often cannot explain consciously how he or she figured it out. This latter case raises the possibility of some sort of genius. But is this true? Many people experience little creative leaps or solutions to problems that they cannot consciously explain. This usually takes a while. For everyday problems the lag between starting work on the problem and the leap is measured in hours or days or maybe weeks. The lag is generally longer the harder the problem. Breakthroughs involve very difficult, complex problems, much larger in scope than these everyday problems. In this case, the leap takes longer and is more dramatic when it happens. This is a reasonable theory, although there is currently no way to prove it. Are we seeing genius, exceptional intelligence, or a common subconscious mental process operating over years — the typical timescale of breakthroughs?

Is the ultimate willingness to question conventional wisdom after hundreds or thousands of failures genius or simply a contrarian mental attitude, which, of course, must be coupled with a supportive environment? If people are being burned at the stake either figuratively or literally for questioning conventional wisdom and assumptions, this mental attitude will fail and may be tantamount to suicide. In this respect, society may determine what happens and whether a breakthrough occurs.

Historically, inventors and discoverers often turn out to have been rather contrarian individuals. Even so it often took many years of repeated failure before they seriously questioned the conventional wisdom — despite a frequent clear propensity on their part to do so. Is it correct to look upon this mental attitude as genius or something else? In many cases, many extremely intelligent people as conventionally measured were/are demonstrably unwilling to take this step, even in the face of thousands of failures. In the many failed “new Manhattan Projects” of the last forty years, the best and the brightest recruited in the supposed spirit of the Manhattan Project, on the theory that genius is the driver of invention and discovery, are often unwilling to question certain basic assumptions. Are genuine breakthroughs driven by individual genius or by a social process which is often uncomfortable to society at large and to the participants?

The rhetoric of “thinking outside the box” and “questioning assumptions” is pervasive in modern science and modern society. The need to question assumptions is evident even from a cursory examination of the history of scientific discovery and technological invention. It is not surprising that people and institutions say they are doing this and may sincerely believe that they are. Many modern scientific and technological fields do exhibit fads and fashions that are presented as “questioning assumptions,” “thinking outside the box,” and “revolutionary new paradigms.” In fact some efforts that have yielded few demonstrable results such as superstrings in theoretical physics or the War on Cancer are notorious for rapidly changing fads and fashions of this type. On the other hand, on close examination, certain basic assumptions are largely beyond question such as the basic notion of superstrings or the oncogene theory of cancer. In the case of superstrings, a number of prominent physicists have publicly questioned the theory including Sheldon Glashow, Roger Penrose, and Lee Smolin, but it remains very dominant in practice.

Conclusion

The role of genius as commonly defined in genuine breakthroughs appears rather limited. Breakthroughs typically involve very large amounts of trial and error over many years. This alone can create the illusion of exceptional intelligence if the large amounts of trial and error and calendar time are neglected. There is clearly a substantial amount of conceptual analysis and reasoning in most breakthroughs, and some kind of genius, probably very different from normal concepts of genius, may be involved in this. Unlike common portrayals in which geniuses solve extremely difficult problems rapidly, the possible genius in breakthroughs usually operates over a period of years. While inventors and discoverers usually appear to have been above average in intelligence, they are often not clearly geniuses as commonly defined; Richard Feynman, for example, claimed a measured IQ of only 125. The remarkable flashes of insight, the “Eureka” experiences, reported by many inventors and discoverers may well be examples of relatively ordinary subconscious processes operating over an extremely long period of time — the many years usually involved in a genuine breakthrough.

The most common and best understood form of conceptual reasoning involved in many breakthroughs is not particularly mysterious nor indicative of genius as commonly conceived. Developing serious doubts about the validity of commonly accepted assumptions after years of repeated failure is neither mysterious nor unusual nor a particular characteristic of genius. Actually, many geniuses as commonly defined often have difficulty taking this step even with the accumulation of thousands of failures. This is more indicative of a certain mental attitude, a willingness to question conventional wisdom and society. Identifying and listing assumptions, both stated and unstated, and then carefully checking the experimental and theoretical basis for these assumptions is a fairly mechanical, logical process; it does not require genius. Most people can do it. Most people are uncomfortable with doing it and often avoid doing so even when it is almost certainly warranted. This questioning of assumptions is also likely to fail if society at large is too resistant, unwilling even grudgingly to accept the results of such a systematic review of deeply held beliefs.

In the current economic difficulties, which may be due to “Peak Oil,” a dwindling supply of inexpensive oil and natural gas, there may well be an urgent and growing need for new energy sources and technologies. This has already led to calls for new “new Manhattan Projects” employing platoons of putative geniuses to develop or perfect various hoped-for technological fixes such as thorium nuclear reactors, hydrogen fuel cells, and various forms of solar power. The track record of the “new Manhattan Projects” of the last forty years is rather poor and should give everyone pause. The original Manhattan Project was certainly unusual in the success of its first full system tests, and perhaps in other ways as well. This alone argues for assuming that many full system tests, probably hundreds, will be needed in general to develop a new technology. Success is more likely with inexpensive, small-scale systems of some sort, where the many, many trials and errors usually needed for a breakthrough can be performed quickly and cheaply.

But what about genius? Many breakthroughs may be due in part to powerful subconscious processes found in most people, operating over many years, rather than to genius as commonly defined. Genius of some kind may be necessary. But if the contrarian mental attitude frequently essential to breakthroughs is lacking, or is simply rejected by society despite the pervasive modern rhetoric about “questioning assumptions” and “thinking outside the box,” then failure is likely, an outcome which would probably be bad for almost everyone, perhaps the entire human race. It is not inconceivable that we could experience a nuclear war over dwindling oil and natural gas supplies in the Middle East or elsewhere — certainly an irrational act, but really smart people sometimes do extremely dumb things.

John F. McGowan, Ph.D. solves problems by developing complex algorithms that embody advanced mathematical and logical concepts, including video compression and speech recognition technologies. He has extensive experience developing software in C, C++, Visual Basic, Mathematica, MATLAB, and many other programming languages. He is probably best known for his AVI Overview, an Internet FAQ (Frequently Asked Questions) on the Microsoft AVI (Audio Video Interleave) file format. He has worked as a contractor at NASA Ames Research Center involved in the research and development of image and video processing algorithms and technology. He has published articles on the origin and evolution of life, the exploration of Mars (anticipating the discovery of methane on Mars), and cheap access to space. He has a Ph.D. in physics from the University of Illinois at Urbana-Champaign and a B.S. in physics from the California Institute of Technology (Caltech). He can be reached at jmcgowan11@earthlink.net.

Possibly related articles:

- The Manhattan Project Considered as a Fluke
- Improve your math and programming skills with Project Euler
- But It Worked in the Computer Simulation!
- A New Kind of Search
- When Science Fails

## Latest nerdy jokes

* * *

A generic limerick (submitted by Michael Chepovetsky):

There once was an x from place B,
Who satisfied predicate P,
X did thing A,
In a certain way,
Resulting in circumstance C.

* * *

I just learned that 4,416,237 people got married in the United States in 2010. Not to nitpick, but shouldn’t that be an even number?

* * *

We are pleased to announce that 100% of the Russian citizens are computer-savvy and use the Internet regularly (according to a recent Internet survey).

* * *

Two math teachers had a fight. Apparently they couldn’t divide something.

* * *

Did you know that if you start counting seconds, by the time you reach 31,556,926 you will discover that you’ve wasted a whole year?

* * *

What I really need after a visit to the hairdresser is a “Save” button.

* * *

— Hello! Is this a fax machine?
— Yes.

* * *

— I am not fat at all! My girlfriend says I have a perfect shape.
— Your girlfriend is a mathematician. To a mathematician, the perfect shape is a sphere.

* * *

A: Hi, where are you?
B: +
A: Are you sure you want to come to class today?
B: -
A: You will be kicked out!
B: =
A: Are you using a calculator to chat?

## But It Worked in the Computer Simulation!

Posted by John F. McGowan, Ph.D. in Applied Math on June 6th, 2011 | 9 responses

People often assume that theoretical mathematical calculations and computer simulations will work well enough that machines or experiments will work successfully the first time or at most within a few tries (or similar levels of performance in other contexts). This belief is often implicit in the promotion of scientific and engineering megaprojects such as the NASA Ares/Constellation program or CERN’s Large Hadron Collider (LHC). One of the reasons for this belief is the apparent success of theoretical mathematical calculations and primitive computer simulations during the Manhattan Project which invented the first atomic bombs in World War II, as discussed in the previous article “The Manhattan Project Considered as a Fluke”. This belief occurs in many contexts. In the debate over the Comprehensive Test Ban Treaty (CTBT) which bans all nuclear tests on Earth, proponents (sincerely or not) argued that sophisticated computer simulations could substitute for actual tests of nuclear weapons in the United States nuclear arsenal. After the terrorist attacks of September 11, 2001, federal, state, and local government officials apparently decided to dispose of most of the wreckage of the World Trade Center and rely on computer simulations to determine the cause of the three major building collapses that occurred (instead of physically reconstructing the buildings as has been done in other major accident investigations). Space entrepreneur Elon Musk apparently believed he could achieve a functioning orbital rocket on the first attempt; he did not succeed until the fourth attempt, recreating a known but extremely challenging technology. This article discusses the many reasons why theoretical mathematical calculations and computer simulations often fail, especially in frontier engineering and science where many unknowns abound.

This article does not argue that theoretical mathematical calculations and computer simulations are not helpful or should not be performed. This is clearly not the case. Occasionally, as in the Manhattan Project, theoretical mathematical calculations and computer simulations have worked right the first time, even in frontier areas of engineering and science. In frontier areas such as major inventions and scientific discoveries, however, this appears to be the exception rather than the rule. Research and development programs and projects that implicitly or explicitly assume that theoretical mathematical calculations and computer simulations will work right the first time, or even within the first few attempts, are likely to be disappointed and may fail for this reason. Rather, in general, we should plan on combining theoretical mathematical calculations and computer simulations with a substantial number of physical tests or trials. There is evidence from the history of major inventions, such as the orbit-capable rocket, that one should plan on hundreds, even thousands, of full system tests, and many more partial system tests and component tests. This argues strongly for using scale models or other rapid prototyping methods where feasible — or focusing research and development efforts on small-scale machines as in the computer/electronics industry today, again where feasible.

Let Me Count the Ways

There are many reasons why theoretical mathematical calculations and computer simulations fail. Indeed, given the sheer number, it is somewhat remarkable that they do work at all. This section discusses most of the major reasons for failure.

Simple Error

Scientists, engineers, and computer programmers are human beings. Even the best of the best make mistakes. This is worth some elaboration. Most scientists and engineers today are professionally trained in schools and universities until their twenties (sometimes even longer). Much of this training involves solving problems in classes, homework, and exams that typically take anywhere from seconds to, in rare cases, several full days (say eight hours per day) to solve. In the vast, vast majority of cases, these problems have been solved many, many times before by other students; it is often possible to look up, learn, and practice the appropriate method to solve the problem — something not possible with genuine frontier science and engineering problems.

An “order of magnitude” is a fancy way of saying a “factor of ten”. Two orders of magnitude is a fancy way of saying a factor of 100. Three orders of magnitude is a fancy way of saying a factor of 1000. And so on. Even the most difficult problems solved in an advanced graduate level science or engineering course are typically orders of magnitude simpler than the problems in “real life,” especially in frontier science and engineering. At a top science and engineering university such as MIT, Caltech or (fill in your alma mater here), scoring 99% (1 error in 100) is phenomenal performance. Yet a frontier engineering or science problem can easily involve thousands, even millions, of steps. The Russian mathematician Grigoriy Perelman’s arxiv.org postings, which are generally thought to have proved the Poincaré Conjecture, are hundreds of pages in length; Perelman left many steps out as “obvious”. A modern computer simulation such as the highly classified nuclear weapon simulation codes involved in the Comprehensive Test Ban Treaty debate can involve millions of lines of computer code. Even a single subtle error can invalidate a theoretical mathematical proof or calculation or a computer simulation. On complex “real world” problems, even the very best are likely to make mistakes because of the size and complexity of the problems. Computer programmers spend most of their time debugging their programs.
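As a rough illustration (the numbers here are my own, chosen only to make the point): if each step of a long derivation is carried out with 99% per-step accuracy, and errors are independent, the probability that every step is correct shrinks geometrically with the number of steps.

```python
# Probability that an N-step derivation is entirely correct, assuming
# an illustrative 99% per-step accuracy and independent errors.

def p_flawless(steps: int, per_step_accuracy: float = 0.99) -> float:
    """Chance that all steps of the derivation are correct."""
    return per_step_accuracy ** steps

for n in (100, 1_000, 10_000):
    print(f"{n:6d} steps: {p_flawless(n):.2e}")
```

With 1000 steps, well within the scope of a frontier problem, the chance of a flawless derivation under these assumptions falls below one in ten thousand.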

In computer simulations, consider a sophisticated numerical simulation program with one million (1,000,000) lines of code written by a team of top programmers with an error rate of one error per 1000 lines of code. If a computer program were implemented as a physical machine like a traditional mechanical clock (a very complex and sophisticated machine in its heyday), each line of code would be at least one moving part (gear, switch, lever, etc.). A computer program with one million lines of code is far more complex than a traditional pre-computer automobile or a nautical chronometer used to measure longitude (John Harrison’s first successful nautical chronometers had a few thousand parts). The Space Shuttle Main Engine (SSME), one of the most powerful and sophisticated engines in the world, has approximately 50,000 parts.

By one error in 1000 lines of code, we mean the programmer can write 1000 lines of code with only one error (bug) before any testing or debugging. This is truly phenomenal performance, but let us assume it for the sake of argument. This simulation program will then contain approximately 1000 errors! In general, it will take extensive debugging, testing, and comparison with real world data and trials to find and fix these 1000 errors. A subtle error may evade detection despite very extensive efforts.
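The back-of-the-envelope arithmetic above can be sketched in a few lines; the one-bug-per-1000-lines rate is the deliberately generous assumption from the text.

```python
# Expected latent bugs before any testing, given a defect rate per
# thousand lines of code (KLOC). One bug per KLOC is the text's
# generous figure for a team of top programmers.

def expected_defects(lines_of_code: int, defects_per_kloc: float = 1.0) -> float:
    """Expected number of bugs present before testing and debugging."""
    return lines_of_code / 1000 * defects_per_kloc

print(expected_defects(1_000_000))  # a million-line simulation: about 1000 bugs
print(expected_defects(50_000))     # a 50,000-line program: about 50 bugs
```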

The modern professional training in science and engineering produces some seemingly phenomenal individuals, such as the winners of the International Math Olympiad (IMO). Most of these people perform extremely well in school and university classes, homework, exams, and so forth. Witnessed in an academic setting, their performance resembles the magical mathematics depicted in popular culture, for example in television shows such as Numb3rs or Eureka (which depict the same kind of performance on very complex real world problems). Nonetheless, they are likely to make errors on extremely complex real world problems, something they are not used to. They can become puzzled or, worse, angry when this occurs. It couldn’t be me; it must be those idiots in the next office — how did they ever graduate from MIT, Caltech, or (fill in your alma mater here)?

Many real world systems such as aircraft, rockets, particle accelerators, and the human body are complex integrated systems in which a very large number (thousands to millions) of parts must work together within very tight tolerances for the entire system to work correctly (fly, collide beams, stay alive and healthy). Even one undetected error can be fatal. This is beyond the performance level of even the very best students in school, where the problems are generally simpler and the solutions are known; the proper methods can be studied and practiced prior to taking a test or exam. This near perfect performance in complex real world systems is usually achieved by an iterative process of trial and error in which some errors are found the hard way (the rocket blew up on the launch pad, the accelerator magnets exploded, the patient died) and eliminated. The final example is not a snide comment; the author’s father passed away in 2008 participating in yet another unsuccessful clinical trial of a new cancer treatment.

A great deal of modern research consists of measuring some quantity to slightly greater accuracy (known disparagingly as “measuring X to another decimal point”) or computing some theoretical quantity to slightly greater accuracy. Despite the popular image of graduate students like the mathematician John Nash in A Beautiful Mind or the physicist Albert Einstein part-time at the University of Zurich performing path-breaking research, graduate students are frequently assigned or manipulated into projects of this type in modern research, even at top research universities like MIT, Caltech, or (fill in your alma mater here). These projects often involve repeating something that has been done many times before, only just a little better (hopefully). Although the error rates are noticeably higher than in academic coursework, they are still far from representative of true frontier or breakthrough research and development. Hence, many graduate students, post-doctoral research associates, all the way up to full professors who have built a career measuring X to another decimal point, have negligible experience with the truly high error rates frequently encountered in frontier research and development.

For example, in measuring X to another decimal point, one is often reusing complex simulations or analysis software that has been developed incrementally over many years, even decades (some programs now date back to the 1960’s and 1970’s). Thus much of the testing and debugging is largely done. One encounters far fewer errors. If one ventures into a frontier or breakthrough area, one may need to develop a new computer program from scratch, where the probability of serious errors at first is likely to be near one (1.0, unity) for the reasons discussed above even for truly exceptional individuals and teams.

It is worth understanding that popular science materials such as PBS/Nova specials, Scientific American articles, or Congressional testimony by leading scientists rarely describe the research as “measuring X to another decimal point” or anything similar. Popular science materials usually focus on the quest for some “Holy Grail” such as unifying the fields in particle physics, a cure for cancer in biology and medicine, cheap access to space in aerospace, and so forth. The quest for the “Holy Grail” captures the imagination and is generally the public reason for funding the research. The Holy Grails have also proven exceedingly difficult to achieve and not necessarily amenable to throwing money and manpower at the problems. And often exceptional intelligence as conventionally measured has proven inadequate to find an answer. The “War on Cancer,” for example, has consumed about $200 billion in the United States alone since 1971, when President Nixon signed the National Cancer Act, a level of inflation-adjusted funding comparable to the wartime Manhattan Project continued for forty years to date.

I should add that measuring X to another decimal point can be quite important. The astronomer/astrologer Tycho Brahe successfully measured the position of the planet Mars in its path through the Zodiac to another decimal point. While it may have been possible to infer the laws of planetary motion correctly prior to this measurement, there is no question that this improved measurement was essential for Johannes Kepler to discover the correct laws of planetary motion, a major scientific breakthrough that now has practical use in the computation of the orbits of communication satellites, GPS navigation, Earth observing satellites, and so forth. Nonetheless, I will take the position that measuring X to another decimal place has gone to an unhealthy extreme in modern research. It fills curricula vitae, produces millions of published papers, rarely leads to genuine breakthroughs and practical advances, and provides poor, misleading training for students in genuine breakthroughs, among other things by giving a misleading sense of the actual error rates that occur in real breakthroughs.

Most Theoretical Calculations and Simulations Are Approximations

Most theoretical calculations and simulations are approximations. A few grams of matter has on the order of 10^23 (ten raised to the twenty-third power) atoms or molecules. This is about one hundred billion trillion atoms or molecules. By definition, one mole of carbon-12 is 12 grams of carbon. One mole of a substance contains Avogadro’s number, 6.02214179(30)×10^23, of atoms or molecules. Even small machines, e.g. computer chips, weigh grams. Automobiles weigh thousands of kilograms (a kilogram is 1000 grams). Airplanes and rockets weigh many thousands of kilograms. Nuclear power plants probably weigh millions of kilograms. Each atom or molecule has, in general, several protons and neutrons in the atomic nucleus or nuclei, and several electrons in complex quantum mechanical “orbitals”. Even with thousands of supercomputers, it is impossible to simulate matter at this level of detail. Thus, on close examination, the vast majority of theoretical mathematical calculations and computer simulations are making significant approximations. Sometimes these approximations introduce serious errors — sometimes subtle errors that are very difficult or impossible to detect in advance. The errors may become obvious after a difference between the theory and experiment (real data, physical trials) is detected (e.g. the rocket blew up on the launch pad).
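The scale mismatch can be made concrete with a short calculation using Avogadro’s number as given above (the automobile example is my own rough illustration, assuming a 1000 kg mass of pure iron with a molar mass of about 55.8 g/mol):

```python
# Approximate atom counts, to show why atom-by-atom simulation of
# everyday objects is hopeless even with thousands of supercomputers.

AVOGADRO = 6.02214179e23  # atoms per mole

def atom_count(mass_grams: float, molar_mass_g_per_mol: float) -> float:
    """Approximate number of atoms in a sample of a pure substance."""
    return mass_grams / molar_mass_g_per_mol * AVOGADRO

# One mole of carbon-12 is 12 grams and contains Avogadro's number of atoms.
print(f"{atom_count(12.0, 12.0):.3e} atoms in 12 g of carbon-12")

# A 1000 kg (1,000,000 g) mass of iron, molar mass ~55.8 g/mol, very roughly:
print(f"{atom_count(1_000_000.0, 55.8):.3e} atoms in 1000 kg of iron")
```

Simulating every one of those roughly 10^28 atoms individually is far beyond any computer, which is why practical simulations must approximate.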

Computers and Symbolic Math Cannot Reason Conceptually

The Webster’s New World Dictionary (Third College Edition) defines a concept as (page 288):

An idea or thought, especially a generalized idea of a thing or class of things; a notion.

Most human beings think almost entirely conceptually. The vast majority of human beings rarely if ever use abstract mathematical symbols to think, and then only in specialized contexts. A “cat” is a concept: a special kind of “animal,” another concept, distinguishable from, for example, a “dog,” yet another concept. Many things that scientists and engineers deal with are concepts: particle accelerators, rockets, airplanes, electrons, cancer, and so forth. In only a few special cases, such as simple geometrical forms like the perfect sphere, can we express the concept in purely symbolic mathematical terms that can be programmed on a computer.

Most major inventions or scientific discoveries started out as a concept in the inventor or discoverer’s mind: James Watt’s separate condenser for his steam engine, Kepler’s hazy notion of an elliptical orbit, Faraday’s mental picture of pressure and motion in the mysterious aether to explain electricity and magnetism, eccentric (to put it mildly) rocket pioneer Jack Parsons’s concept of combining a smooth fuel such as asphalt with a powdered oxidizer such as potassium perchlorate to overcome the severe problems with powdered explosives, and so forth. To this day, we cannot express most concepts in mathematical symbols that can be programmed on a computer. In some cases, we can simulate a specific instance of the concept on a computer or through traditional pencil and paper derivations or calculations.

Johannes Kepler was able to find a mathematical formula that corresponded to his hazy concept of an elliptical orbit in Apollonius of Perga’s Conics. He was lucky that the mathematics of the ellipse had already been worked out and corresponded closely to the motion of the planets. James Clerk Maxwell, after many years of effort, was able to find a set of differential equations, Maxwell’s Equations, that corresponded to Faraday’s mental concepts of pressure and motion in the aether. Even in cases where specific mathematics can be found (in a book, for example) or developed for a concept (from a detailed mechanical model as Maxwell did with Faraday’s ideas, for example), we still cannot represent the process of the transformation from the mental concept to the mathematics either in formal symbolic mathematics or in a computer program.

Computers and symbolic mathematics cannot reason conceptually. Most of the research in artificial intelligence (machine learning, pattern recognition, etc.) has been an attempt to find a way to do this. Most of this research tries to replicate the process by which human beings identify classes and their relationships (concepts) and correctly assign objects (cats, dogs, speech sounds, etc.) to these classes. So far, we have been unable to either understand or duplicate what human beings do, in many everyday cases effortlessly. A conceptual error is often beyond the ability of either formal symbolic mathematics or computer simulations to detect or identify; it can show up in real world tests very dramatically as in a rocket exploding on launch or a miracle cancer drug failing in clinical trials.

Conceptual reasoning is poorly understood. It is not clear how to teach it, if it can be taught, or how to measure it, or even if it can be measured. Very basic questions about its nature are unresolved. Conceptual reasoning appears to play a major role in many major inventions and scientific discoveries, so-called breakthroughs. In this context, it is particularly mysterious. Many inventors and discoverers describe a flash of insight, usually following many years of failure and frequently occurring on a break such as a recreational walk, in which a key concept or even the entire answer occurs to them. These are reports, anecdotal data. We cannot be absolutely sure they are true, just as with reports of UFO sightings, which are actually more common than breakthroughs. To be clear, there is a possible motive for inventors or discoverers to make up the story of a “Eureka” experience: they in fact stole their work from someone else and need to explain a sudden leap forward in another way. There are inventions and discoveries where there are serious questions about what really happened and who did what, and the work may well have been stolen. Even so, reports of “Eureka” experiences are extremely common in the history of invention and discovery, and they resemble the less dramatic flashes of insight or creative leaps reported and experienced by many people (including the author).

These conceptual skills or phenomena may account for why some inventors and discoverers do not seem as intelligent as one might expect, and certainly not as intelligent as inventors and discoverers are depicted in popular culture, and also why platoons of the best and brightest scientists, as conventionally measured, have failed (so far) in such heavily funded efforts as the War on Cancer.

The Math is Intractable

In some cases, we believe that we have the correct math and physical theory to solve a problem. However, the math has proven intractable to solve (so far), either through traditional pencil and paper calculations and symbolic manipulations or through numerical simulation on a computer. The Navier-Stokes equations are thought to govern fluids (liquids and gases such as water and air). Nonetheless, the solution of the Navier-Stokes equations in fluid dynamics has proven intractable to date. This is one of the reasons that the Navier-Stokes equations are included in the Clay Mathematics Institute’s Millennium Problems. Sometimes it may not even be clear that the math is intractable, resulting in reliance on spurious theoretical mathematical calculations or computer simulations.

New Physics

This article is concerned with the use of mathematics and computer simulations for real world problems, not proving theorems in pure abstract mathematics. In this context, inevitably, one is trying to predict or simulate the actual physics of the real world. How do mechanical devices, electricity, magnetism, gravity, and so forth work in the real world? That is the question. If the theoretical mathematical calculations or computer simulations are based on incorrect physics, they will probably fail. In some cases, the fundamental physics may be known but the implications, the theory derived from the fundamental laws of physics, are somehow in error. In other cases, truly new physics may be involved.

One tends to assume that new physics would stand out, that it would be obvious that it is present. Yet this is not always the case. Human beings tend to be conservative. We do not embrace new ideas quickly or easily, especially as we get older. Small discrepancies and anomalies can occur and accumulate for long periods of time without the presence of new physics being recognized. This occurred, for example, with the Ptolemaic theories of the solar system. These theories had predictive power, but they kept making errors. It took about a century of work by Nicolaus Copernicus, Galileo Galilei, Tycho Brahe, Johannes Kepler, Isaac Newton, and many others to overturn this theory and develop a superior, much more accurate theory. It did not happen overnight for solid scientific reasons — Copernicus’s original heliocentric theory was measurably inferior to the prevailing Ptolemaic theory, contrary to the impression given in science classes.  Galileo’s extreme arrogance and grossly inaccurate theory of the tides did not help either.

Electricity and magnetism had been known for thousands of years, both large scale phenomena like lightning and small scale effects such as static electricity or lodestones. Nonetheless, without the battery and the ability to control and study electricity and magnetism in a laboratory, it was almost impossible to make progress or discover the central role electricity and magnetism play in chemistry and matter. New physics can be hiding in plain sight and causing anomalies that are persistently attributed to selection bias, instrument error, or other mundane causes.

Conclusion

There are many reasons that theoretical mathematical calculations or computer simulations may fail, especially in frontier science and engineering where many unknowns abound. The major reasons include:

- simple error (almost certain to occur on large, complex projects)
- most theoretical mathematical calculations and simulations are approximations
- symbolic math and computers cannot reason conceptually and may not detect conceptual errors
- the math may be intractable
- new physics

In the history of invention and discovery, it is rare to find theoretical mathematical calculations or computer simulations working right the first time as seemingly occurred in the Manhattan Project which invented the first atomic bombs during World War II. Indeed, it often takes many full system tests or trials to achieve success and to refine the theoretical mathematical calculations or simulations to the point where they are reliable. Even after many full system tests or trials, theoretical mathematical calculations or simulations may still have significant flaws, known or unknown.

This argues for planning on many full system tests of some type in research and development. In turn, this argues strongly in favor of focusing research and development efforts on small-scale machines, or using scale models or other rapid prototyping methods where feasible. This does not mean that theoretical mathematical calculations and computer simulations should not be used. They can be helpful and, in some cases, such as the Manhattan Project, may prove highly successful. However, one should not plan on the exceptional level of success apparently seen in the Manhattan Project or some other cases.

In these difficult economic times, almost everyone would like to see more immediate tangible benefits from our vast ongoing investments in research and development. If current rising oil and energy prices reflect “Peak Oil,” a dwindling supply of inexpensive oil and natural gas, then we have an urgent and growing need for new and improved energy technologies. With increasing economic problems and several bitter wars, it is easy to succumb to fear or greed. Yet it is in these difficult times that we need to think most clearly and calmly about what we are doing to achieve success.

John F. McGowan, Ph.D. solves problems by developing complex algorithms that embody advanced mathematical and logical concepts, including video compression and speech recognition technologies. He has extensive experience developing software in C, C++, Visual Basic, Mathematica, MATLAB, and many other programming languages. He is probably best known for his AVI Overview, an Internet FAQ (Frequently Asked Questions) on the Microsoft AVI (Audio Video Interleave) file format. He has worked as a contractor at NASA Ames Research Center involved in the research and development of image and video processing algorithms and technology. He has published articles on the origin and evolution of life, the exploration of Mars (anticipating the discovery of methane on Mars), and cheap access to space. He has a Ph.D. in physics from the University of Illinois at Urbana-Champaign and a B.S. in physics from the California Institute of Technology (Caltech). He can be reached at jmcgowan11@earthlink.net.

Possibly related articles:

- In-Depth Book Review: The Computer as Crucible
- When Science Fails
- Kepler’s New Astronomy
- The Manhattan Project Considered as a Fluke
- Symbolmania

## Moscow Math Olympiad

The Moscow Math Olympiad has a different set of problems for each grade, and students must write a proof for each problem. Here are the 8th grade problems from this year’s Olympiad:

Problem 1. Six apparently identical balls lie on the vertices of a hexagon ABCDEF: A with a mass of 1 g, B with a mass of 2 g, …, F with a mass of 6 g. A prankster switched two balls that were on opposite vertices of the hexagon. There is a balance scale that tells you which pan holds the greater weight. How can you determine which pair of balls was switched, using the scale only once?
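A brute-force search finds such a weighing quickly. The following Python sketch (my own illustration, not part of the Olympiad) tries all pairs of equal-sized pans and keeps the weighings whose outcome (left heavier, right heavier, or balanced) differs for each of the three possible swaps:

```python
from itertools import combinations

# Positions A..F hold masses 1..6 g; the prankster swapped one of the three
# opposite pairs (A,D), (B,E) or (C,F). We search for a single weighing --
# two disjoint, equal-sized sets of positions -- whose outcome differs
# for all three possible swaps.

SWAPS = [("A", "D"), ("B", "E"), ("C", "F")]

def masses_after_swap(pair):
    m = {p: i + 1 for i, p in enumerate("ABCDEF")}  # A = 1 g, ..., F = 6 g
    m[pair[0]], m[pair[1]] = m[pair[1]], m[pair[0]]
    return m

def outcome(left, right, m):
    ls, rs = sum(m[p] for p in left), sum(m[p] for p in right)
    return (ls > rs) - (ls < rs)  # 1 = left heavier, -1 = right, 0 = balanced

solutions = []
positions = "ABCDEF"
for k in range(1, 4):
    for left in combinations(positions, k):
        rest = [p for p in positions if p not in left]
        for right in combinations(rest, k):
            results = [outcome(left, right, masses_after_swap(s)) for s in SWAPS]
            if len(set(results)) == 3:  # all three swaps distinguishable
                solutions.append((left, right))

print(solutions[0])  # first solution found: (('A', 'E'), ('B', 'D'))
```

The first solution found weighs A + E against B + D: if the left pan is heavier, A and D were switched; if the right pan is heavier, B and E; if the pans balance, C and F.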

Problem 2. Peter was born in the 19th century, while his brother Paul was born in the 20th. The brothers met at a party celebrating both birthdays. Peter said: “My age is equal to the sum of the digits of my year of birth.” “Mine too,” Paul replied. By how many years is Paul younger than Peter?
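The age puzzle can be confirmed by brute force. This Python sketch (my own addition; it takes the 19th century as birth years 1800–1899 and the 20th as 1900–1999) enumerates all consistent pairs of birth years:

```python
# For each candidate birth year, the "age equals digit sum of birth year"
# statement pins down the year of the party; the brothers must agree on it.

def digit_sum(n):
    return sum(int(d) for d in str(n))

differences = set()
for peter in range(1800, 1900):
    for paul in range(1900, 2000):
        year_p = peter + digit_sum(peter)   # year Peter's statement holds
        year_q = paul + digit_sum(paul)     # year Paul's statement holds
        if year_p == year_q:                # they meet at the same party
            differences.add(paul - peter)   # Paul is this many years younger

print(differences)  # {9}
```

Every consistent pair (e.g. Peter born in 1891 and Paul in 1900, meeting in 1910) gives the same gap: Paul is 9 years younger.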

Problem 3. Is there a hexagon that can be divided into four congruent triangles by a single straight line?

Problem 4. Each straight segment of a non-self-intersecting path contains an odd number of sides of the cells of a 100 by 100 square grid, and any two consecutive segments are perpendicular to each other. Can the path pass through all the grid vertices inside the square and on its boundary?

Problem 5. Denote the midpoints of the non-parallel sides AB and CD of trapezoid ABCD by M and N respectively. The perpendicular from point M to the diagonal AC and the perpendicular from point N to the diagonal BD intersect at point P. Prove that PA = PD.
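Before attempting a proof, the claim can be sanity-checked numerically. This NumPy sketch (my own illustration) generates random trapezoids with BC parallel to AD, finds P as the intersection of the two perpendiculars, and checks that |PA| = |PD|:

```python
import numpy as np

# Place AD on the x-axis and BC on a parallel line at height h, so that
# AB and CD are the non-parallel sides. P satisfies (P - M).AC = 0 and
# (P - N).BD = 0, which is a 2x2 linear system.

rng = np.random.default_rng(0)

for _ in range(100):
    d, h = rng.uniform(4, 10), rng.uniform(1, 5)
    A, D = np.array([0.0, 0.0]), np.array([d, 0.0])
    B = np.array([rng.uniform(-2, 2), h])
    C = np.array([rng.uniform(d - 2, d + 2), h])   # BC parallel to AD

    M, N = (A + B) / 2, (C + D) / 2                # midpoints of AB, CD
    AC, BD = C - A, D - B                          # the two diagonals

    # solve AC.P = AC.M and BD.P = BD.N for the intersection point P
    P = np.linalg.solve(np.vstack([AC, BD]), np.array([AC @ M, BD @ N]))

    assert np.isclose(np.linalg.norm(P - A), np.linalg.norm(P - D))

print("PA = PD held on all random trapezoids")
```

This is evidence, not a proof, but it makes the statement easy to trust while working out the synthetic argument.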

Problem 6. Each cell of a square table contains a number. The sum of the two largest numbers in each row is a, and the sum of the two largest numbers in each column is b. Prove that a = b.

## Matrix calculus

In response to many requests from students, I gave a lecture on matrix calculus in today’s machine learning class. It was based on Searle’s old Matrix Algebra Useful for Statistics and Magnus and Neudecker’s Matrix Differential Calculus with Applications in Statistics and Econometrics.

In the notes, I used a few conventions of my own. The first is the rule for calculating the derivative of a scalar-valued function $f(X)$ of a matrix input $X$. It is traditionally written as:

If $dy = \text{tr}(A^T dX)$ then $\frac{dy}{dX} = A$.

I initially found the presence of the trace here baffling. However, there is a simple identity:

$\text{tr}(A^T B) = A \cdot B$

where $\cdot$ is the matrix inner (scalar) product. This puts the rule in a form that is much more intuitive (to me!):

If $dy = A \cdot dX$ then $\frac{dy}{dX} = A$.
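This rule is easy to check numerically. In the NumPy sketch below, the test function $f(X) = \text{tr}(X^T B X)$ is my own illustrative choice: for it, $dy = \text{tr}((BX + B^T X)^T dX)$, so the rule gives $dy/dX = (B + B^T)X$, which we compare against a finite-difference gradient.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
B = rng.standard_normal((n, n))
X = rng.standard_normal((n, n))

f = lambda X: np.trace(X.T @ B @ X)
grad = (B + B.T) @ X            # gradient predicted by the trace/dot rule

# central finite-difference gradient, entry by entry
eps = 1e-6
fd = np.zeros_like(X)
for i in range(n):
    for j in range(n):
        E = np.zeros_like(X)
        E[i, j] = eps
        fd[i, j] = (f(X + E) - f(X - E)) / (2 * eps)

print(np.max(np.abs(fd - grad)))  # small: the two gradients agree
```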

This looks more straightforward, but it comes at a price. When you work with the trace form of the rule, it is often necessary to shuffle the matrices around a bit. That is easy to do using standard trace identities such as

$\text{tr}(ABC)=\text{tr}(CAB)$.

If we want to work with inner products instead, we need a similar set of rules. It is not too hard to prove “dual” identities such as

$A \cdot (BC) = B \cdot (AC^T) = C \cdot (B^T A)$

that allow similar shuffling with dot products. They are, however, certainly less easy to remember.
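Both the trace/dot-product correspondence and these dual identities are cheap to verify numerically; a small NumPy sketch with arbitrarily chosen shapes:

```python
import numpy as np

rng = np.random.default_rng(1)
dot = lambda X, Y: np.sum(X * Y)          # matrix inner (scalar) product

# tr(A^T B) = A . B  (A and B must have the same shape)
A2, B2 = rng.standard_normal((3, 4)), rng.standard_normal((3, 4))
assert np.isclose(np.trace(A2.T @ B2), dot(A2, B2))

# A . (BC) = B . (A C^T) = C . (B^T A), with compatible rectangular shapes
A = rng.standard_normal((3, 4))
B = rng.standard_normal((3, 5))
C = rng.standard_normal((5, 4))
assert np.isclose(dot(A, B @ C), dot(B, A @ C.T))
assert np.isclose(dot(A, B @ C), dot(C, B.T @ A))

print("all identities verified")
```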

There are also a number of other rules that seem to be necessary in practice but are not found in the standard texts. For example, if $R$ is a function that applies to a matrix or vector elementwise (e.g. $\sin$), then

$d(R(F)) = R'(F) \odot dF$

where $\odot$ is the elementwise product. This then requires additional (simple) identities to get rid of the elementwise product, such as

${\bf x}\odot{\bf y} = \text{diag}({\bf x}) {\bf y} = \text{diag}({\bf y}) {\bf x}$.
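A small NumPy sketch checking both the elementwise rule (with $R = \sin$, so $R' = \cos$) and the diag identity; the perturbation size is an arbitrary choice of mine:

```python
import numpy as np

rng = np.random.default_rng(2)
F = rng.standard_normal((3, 3))
dF = 1e-7 * rng.standard_normal((3, 3))   # a small perturbation

lhs = np.sin(F + dF) - np.sin(F)          # actual change in R(F)
rhs = np.cos(F) * dF                      # R'(F) ⊙ dF, the first-order term
print(np.max(np.abs(lhs - rhs)))          # O(|dF|^2), i.e. tiny

# x ⊙ y = diag(x) y = diag(y) x
x, y = rng.standard_normal(3), rng.standard_normal(3)
assert np.allclose(x * y, np.diag(x) @ y)
assert np.allclose(x * y, np.diag(y) @ x)
```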

Another problem with using the dot product is that it is constantly necessary to convert between transposes and inner products. (This question comes up because I prefer the convention that all vectors are column vectors.) There are endless debates over whether to write

${\bf x} \cdot {\bf y}$

or

${\bf x}^T {\bf y}$

Neither form seems to be of particular importance, and I’m not sure which is the better choice.

This entry was posted on 24 March 2011 at 11:47 pm and is filed under Uncategorized.

Tags: Calculus, math, matrix