Math Words, pg 19





History and Use of the Blackboard
For the period from 1800 to 2000 few things were as ubiquitous in a mathematics classroom as the blackboard. Today a more modern "white board" may have taken its place in many institutions, or even an electronic version called a smart board; but board work still seems to be a part of current classroom procedure. The classic chalkboard and one of its educational uses was a frequently repeated gag on the popular Simpsons cartoon series.

It appears that the blackboard first came into American education around 1800. The National Museum of American History website on colonial education says:

"Mathematics teachers with ties to England and France introduced blackboards into the United States around 1800. By the 1840s, these erasable surfaces were used for teaching a wide range of subjects in elementary schools, colleges, and academies. The Massachusetts educator William A. Alcott visited over 20,000 schoolhouses. “A blackboard, in every school house," he wrote, "is as indispensably necessary as a stove or fireplace."

James Pillans, a Scottish teacher and education reformer, is often cited as the "inventor" of the blackboard, but this seems to be a misunderstanding based on a letter from Pillans which appeared in Jeremy Bentham's Chrestomathia (1815). It was entitled Successful application of the new system to language-learning, and dated 1814; it mentions the use of chalk and blackboard in teaching geography. But Pillans only began teaching in 1810, almost a decade after the board made its way to America. He may, however, be the inventor of colored chalk. He is reported to have had a recipe with ground chalk, dyes and porridge.

Blackboards and slates were used in musical study well before any of the previous claims. In Composers at Work, author Jessie Ann Owens devotes several pages to the several types of slate and wood "cartella" which were used to write out musical ideas. She describes the discoveries of these with five or ten line staves dating to the 16th century. Much larger wall-size examples seem to have been used, but have only been confirmed by iconography. The book includes an image from a woodcut by Hieronymus Holtzel of Nuremberg in 1501.

Blackboards seem to have very quickly become an essential part of daily school life. [From a web page of Prof. Rickey]
Perhaps no one method has so influenced the quality of the instruction of the cadets as the blackboard recitations. Major Thayer (Superintendent from 1817) insisted on this form, although old records show that it was introduced at West Point by Mr. George Baron, a civilian teacher, who in the autumn of 1801 gave to Cadet Swift "a specimen of his mode of teaching at the blackboard." Today it is the prominent feature in Academic instruction. [Quoted from Richardson 1917, p. 25] There is indication that the blackboard was used in a few schools in the US before it was used at USMA. See Charnel Anderson, Technology in American Education, 1650-1900, published by the US Dept of Health, Education, and Welfare 1961

I recently (Oct 2010) received a copy of an 1841 book that proposes to teach how to use the blackboard in the elementary classroom. The author is a strong supporter of this new "technology". Thanks to Daniel Stamm for sending me a copy.

One of the earlier mentions of blackboards I have found has nothing to do with education, however. It seems that a custom developed in London's financial district in the later part of the 19th century to list the names of debtors on a blackboard to shame them into paying, and it seems to have persisted for a long time. There is a description of the practice in Chronicles and Characters of the Stock Exchange by John Francis and Daniel Defoe, printed in 1850.

From Wikipedia I learned that the Oxford English Dictionary provides a citation from 1739, to write "with Chalk on a black-Board". I know it is common in England for pubs to advertise with a blackboard on the sidewalk outside their doors, but I have no idea how far back the practice originated.

Prior to the use of blackboards, students learned their early lessons from an object called a hornbook. Here is a description of one from the Blackwell Museum webpage at Northern Illinois University:

Paper was pretty expensive once and hornbooks were made so children could learn to read without using a lot of paper. A hornbook was usually a small, wooden paddle with just one sheet of paper glued to it. But because that paper was so expensive, parents and teachers wanted to protect it. So they covered the paper with a very thin piece of cow's horn. The piece of cow's horn was so thin, you could see right through it. That's why these odd books were called "hornbooks."
The blackboard seems to have driven out the hornbooks very quickly judging from this quote from the OED about Hornbooks, (a1842 HONE in A. W. Tuer Hist. Horn-Bk. I. i. 7) " A large wholesale dealer in..school requisites recollects that the last order he received for Horn-books came from the country, about the year 1799. From that time the demand wholly ceased..In the course of sixty years, he and his predecessors in business had executed orders for several millions of Horn-books". Some nice images of Hornbooks, both plain and fancy, are at this web page.

Early blackboards were usually made of wood (though some may have been made of papier-mâché) and painted with many coats, as true slate boards were very expensive. Schools purchased large pots of "slate paint" for regular repainting of the boards. The earliest quotes from the OED date to 1823.

1823 PILLANS Contrib. Cause Educ. (1856) 378 A large black board served my purpose. On it I wrote in chalk.
1835 Musical Libr. Supp., Aug. 77 The assistant wrote down the words..on a blackboard.
1846 Rep. Inspect. Schools I. 147 The uses of the black board are not yet fully developed.
However, under "slates" I found even earlier uses: "1698 FRYER Acc. E. India & P. 112 A Board plastered over, which with Cotton they wipe out, when full, as we do from Slates or Table-Books", which indicates that boards covered with plaster or other materials were used to write upon much earlier than the earliest use of "blackboard". An even earlier usage is given in David E. Smith's Rara arithmetica, of a book printed in 1483 in Padua of the arithmetic of Prosdocimo containing a mention of the use of a slate. This led Smith to conclude that at this time merchants would actually erase and replace numbers in division rather than showing the cross-outs that distinguish the galley method of division. The very earliest claim for slates I have found is of use in the 11th century. A work called Alberuni's Indica (Tarikh Al-Hind) reports, "They use black tablets for the children in the schools, and write upon them along the long side, not the broad side, writing with a white material from the left to the right."

Chalkboards became so important for teaching that teachers in the 19th century sometimes went to extremes to create one. In Glen Allen, Virginia, a school is named for Elizabeth Holladay, a pioneer teacher who started the first public school in the Glen Allen area of Henrico County at her home in 1886. A note about the history of the school says, "Black oilcloth tacked to another part of the shipping crate served as a blackboard."


The slate was used even after paper became a relatively commonplace item. Many school histories report the use of slates into the 20th century. This use may have been significant: the Binney & Smith company, better known to many for their creation of the Crayola crayon, began the production of slate pencils, for writing on slate, in the year 1900. As an aside, they also won a Gold Medal at the St. Louis World Exposition (1904) for their wonderful new creation, dustless chalk. In the journal Australasian Historical Archaeology (2005), Peter Davies reports that in the excavation of a site called Henry Mill, which was operational only from 1904 until around 1930, they found 30 slate pencils, remnants of four slates, and a single graphite pencil core. In "Slates Away!": Penmanship in Queensland, Australia, John Elkins, who started primary school in 1945, writes that he used slates commonly until around his third year of school.

I think in Prep 1 that we had some paper to write on with pencils, but my memory of the routine use of slates is much more vivid. Each slate was framed in wood and one side was inscribed with lines to guide the limits for the upper and lower extremities of letters. The slate "pencils" were made of some pale gray mineral softer than slate which had been milled into cylinders some one-eighth of an inch in diameter and inserted into metal holders so that about an inch protruded. Each student was equipped with a small tobacco tin in which was kept a damp sponge or cloth to erase the marks. Sharpening slate pencils was a regular task. We rubbed them on any suitable brick or concrete surface in the school yard. Teachers also kept a good supply of spares, all writing materials and books being provided by the school. It is possible that the retention of slates stemmed from the political imperative that public education should be free.
Slates were advertised in newspapers in the US as early as 1737. Slates, as indicated above, show up as commonplace in quotes from the OED as early as 1698. It seems they may have been used for some artistic or educational purposes as early as the end of the 15th century. In the famous portrait of Luca Pacioli, Ritratto di Frà Luca Pacioli, Pacioli is shown drawing on a slate to copy an example from Euclid in the open book before him. The closed book, which has the dodecahedron upon it, is supposedly Pacioli's Summa de arithmetica, which was written in 1494.


In the Dec 2003 issue of Paradigm, the Journal of the Textbook Colloquium, is an article by Nigel Hall titled, "The role of the slate in Lancasterian schools as evidenced by their manuals and handbooks". A couple of snips from the article appear below:

The Oxford English Dictionary gives as its first citation for slate being used as a writing tool a quotation from Chaucer’s Treatise on the Astrolabe written about 1391. Whether usage began around this time or had begun much earlier is unknown, although as a technology it shared many characteristics with the wax tablet, used extensively from before the time of the Greeks until the 1600s in Europe, and even surviving in some usages until the early twentieth century (Lalou, 1989). Knowledge of the use of slate for writing after Chaucer is limited until one reaches the second half of the eighteenth century. The mathematician Digges (1591) refers to writing on slates and in the new colony of America an inventory (Plymouth Colony Archive, n.d.) made on 24 October 1633 of the possessions of the recently deceased Godbert and Zarah, noted among many items, ‘A writing table of slate’ (table here being a tablet of slate).
Hall goes on to suggest that, in fact, the use of slates may not have been very common in England until the end of the 18th century, because reading (beginning with hornbooks) was much more commonly taught than writing. He credits Lancaster for the promotion of slates for writing and math, but suggests that the slate was a principal element in the "monitorial system," in which more advanced students taught the lower group. An illustration in the article shows the use of slates and the student monitor.

The blackboard was extended to some specialty uses as well. A "Slated Globe" was advertised in The New York Teacher, and the American Educational Monthly, Volume 6, in 1869 for use in spherical geometry and geography classes. A four-inch diameter globe sold for $1.50.




Ceiling Function

A function similar to the Floor Function in which a real number is replaced by the smallest integer which is greater than or equal to the number. The symbol commonly used is ⌈x⌉. You can think of this as rounding up to the next whole integer if the number is not already an integer. For example, ⌈2.3⌉ = 3, ⌈−2.3⌉ = −2, and ⌈5⌉ = 5.
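For readers who like to experiment, Python's standard library computes exactly this function as math.ceil (a quick modern aside, not part of the history):

```python
import math

# math.ceil(x) returns the smallest integer greater than or equal to x
print(math.ceil(2.3))   # 3
print(math.ceil(-2.3))  # -2  (rounding "up" moves toward zero for negatives)
print(math.ceil(5))     # 5  (an integer is left unchanged)
```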


Because the brackets around the symbol resemble the gallows of a hangman, the function is sometimes called the gallows function. The notation is relatively modern, dating to the last half of the 20th century.



Choose

The creation of the expression "n choose r" for a term of a binomial coefficient is credited to Richard Guy sometime around 1950. Although the symbol for the number of ways of selecting a group of r distinct items from a group of n distinct items still varies, the most consistent usage today is the parenthesized column with n written above r. The symbol is read as "n choose r". The number and symbol is also called the Combination symbol, and is sometimes read as "the combinations of n things taken r at a time." Other symbols remain in use as well. It is also very common to use a capital C between the values of n and r with the values subscripted, nCr, and many calculators still use a notation such as nCr with the numbers on the same line level with the C. Mathematica uses "Binomial[n,r]", and at one time I know that the TeX formatting language used {n \choose r}.

Most students first see the binomial coefficients as elements in the array (mis)named Pascal's Triangle


More information about computing binomial coefficients can be found in the page on combinations and permutations. The use of the term Binomial Coefficients comes from the fact that the numbers are the same as the coefficients of each term when a binomial, such as (x+y), is raised to a power. For example, the expansion of (x+y)^4 gives the five terms in the fifth row of Pascal's triangle.
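In Python, the standard library's math.comb computes "n choose r" directly, which makes the connection to Pascal's triangle easy to check:

```python
import math

# "5 choose 2": the number of ways to pick 2 items from 5
print(math.comb(5, 2))  # 10

# The coefficients of (x + y)**4 are the fifth row of Pascal's triangle
print([math.comb(4, k) for k in range(5)])  # [1, 4, 6, 4, 1]
```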

In early July of 2004 I received a note from Matthew Hubbard, the curator of Pascal's Triangle From Top to Bottom. In it he informed me that I had failed to include, in particular, the contribution of India to the study of the arithmetic triangle. A quick visit to his web site led me to:

The idea of taking "six tastes one at a time, two at a time, three at a time, etc." was written down correctly in India 300 years before the birth of Christ in a book called the Bhagabati Sutra, a text from the Jainist religion; this gives the subcontinent of India the distinction of being the earliest civilization to have an understanding of the binomial coefficients in their combinatorial form "n choose k" in a text that survives to this day.
The site contains much additional material about Indian study of the triangle, and other information that makes it well worth a visit.

Matthew also called me to task for my suggestion above that the use of "Pascal's Triangle" was somehow inappropriate. He wrote in justification of the term, " One of the reasons I wrote is the idea of misnomy in mathematics; you put the word (mis)named in front of Pascal's Triangle. While it is certainly true that many, many people had studied the binomial coefficients prior to Pascal, his work is honored because it was read by people who came after him, most notably Monmort and deMoivre, who credited Pascal's Treatise in their works several decades later. Moreover, it is worth reading, as Pascal finds many identities in the triangle that no one before him had written down.
It's too late to get the world to call it Pingala's Triangle, and I fully appreciate the desire of civilizations to honor their own, but I think if anybody's name is going to be linked to this famous array, Blaise Pascal is as good a candidate as any and significantly better than most."

My thanks to Matt for the additional material on the Indian contribution, and for helping to ensure a balance of credit where credit is due. Certainly Pascal is due much credit for his exposure of many aspects of the triangle, by whatever name it is called.



Decade

The word decade can be used for any grouping of ten objects, but is used today most often for a period of ten years. From the Greek root deca for ten, the word made its way through Latin and French into the English language by about 1600. After the French Revolution, in the move to metric measure, the seven-day week was replaced by a "decade" of ten days; "[1801 DUPRÉ Neolog. Fr. Dict. 71] 'Three decades make a month of thirty days'." [OED]



Determinant

Today the idea of a determinant is usually tied to the idea of a matrix, although the original idea actually preceded the invention of matrices. A determinant is a number determined by a sum of products of the elements of the matrix, the products selected so that a number from each row and each column occurs exactly once in each product. Each possible product is multiplied by either 1 or -1, according to whether the permutation matching rows to columns is even (1) or odd (-1), and all the signed products are added together. That sum is the determinant. I recently came across the following explanation of the "meaning" of a determinant from John Ramsden in a post to a math discussion group: "Whereas a matrix is an operator, which transforms one space into another, its determinant is just the ratio of the new versus the old area/volume/.. (depending on the dimension of the original space) of any given region."
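The signed-product definition can be followed literally in a short Python sketch (hopelessly inefficient for large matrices, since it sums over all n! permutations, but faithful to the definition):

```python
from itertools import permutations

def sign(perm):
    # Parity of a permutation: count inversions; even -> +1, odd -> -1
    inv = sum(1 for i in range(len(perm))
                for j in range(i + 1, len(perm)) if perm[i] > perm[j])
    return 1 if inv % 2 == 0 else -1

def det(matrix):
    """Determinant as the sum of signed products, one element per row and column."""
    n = len(matrix)
    total = 0
    for perm in permutations(range(n)):
        prod = 1
        for row, col in enumerate(perm):
            prod *= matrix[row][col]
        total += sign(perm) * prod
    return total

print(det([[1, 2], [3, 4]]))                    # -2
print(det([[2, 0, 0], [0, 3, 0], [0, 0, 4]]))   # 24
```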

Only square matrices have a determinant. Determinants are useful in solving systems of equations because a zero determinant means that a system does not have a unique solution. You can see examples of matrices and their determinants at this page from the Mathworld web site.

The first determinant-like objects were applied to systems of two equations by the Babylonians as early as the fourth century BC. By 100 BC the Chinese had developed a system that looked very much like a matrix today, and used a method of solving systems that was effectively the same as what we now call Cramer's rule. The modern idea of a determinant seems to have occurred at almost the same time in both Japan and Europe. In 1683, Takakazu Seki Kowa published a study on determinants, ten years before Leibniz independently "discovered" them in Europe. The word is derived from the Latin determinare and the related root terminus, and means a limit or boundary. The first use of the word mathematically was by Gauss in 1801 in Disquisitiones arithmeticae, but in reference to a different idea. It was Cauchy who first used the term in its modern meaning in 1812.

An interesting, but seemingly false, story circulated about a gift of a book on determinants to the Queen of England by Lewis Carroll. Here is the version as it is told on the Mathworld page referenced earlier.

Several accounts state that Lewis Carroll (Charles Dodgson ) sent Queen Victoria a copy of one of his mathematical works, in one account, An Elementary Treatise on Determinants. Heath (1974) states, "A well-known story tells how Queen Victoria, charmed by Alice in Wonderland, expressed a desire to receive the author's next work, and was presented, in due course, with a loyally inscribed copy of An Elementary Treatise on Determinants," while Gattegno (1974) asserts "Queen Victoria, having enjoyed Alice so much, made known her wish to receive the author's other books, and was sent one of Dodgson's mathematical works." However, in Symbolic Logic (1896), Carroll stated, "I take this opportunity of giving what publicity I can to my contradiction of a silly story, which has been going the round of the papers, about my having presented certain books to Her Majesty the Queen. It is so constantly repeated, and is such absolute fiction, that I think it worth while to state, once for all, that it is utterly false in every particular: nothing even resembling it has occurred" (Mikkelson and Mikkelson).

Various symbols are still in use for the determinant, the most common being a set of vertical lines similar to the absolute value, |A|, or the abbreviation det(A), where A is the name of the matrix.


Lurking Variables and Confounding Variables

Students in introductory statistics classes are often confused by the terms above, and perhaps for good reason. Instead of fumbling through my own definition, I will copy a post from Dr. David Moore, perhaps one of America's most honored statisticians, to the AP Statistics electronic discussion list. He was responding to a request to distinguish between lurking and confounding variables.

Here's a try at the basics.
A. From Joiner, ``Lurking variables: some examples,'' American Statistician 35 (1981): ``A lurking variable is, by definition, a variable that has an important effect and yet is not included among the predictor variables under consideration.'' Joiner attributes the term to George Box. I follow this definition in my books.

This isn't a well-defined technical term, and I prefer to expand the Box/Joiner idea a bit: A lurking variable is a variable that is not among the explanatory or response variables in a study, and yet may (or may not) influence the interpretation of relationships among those variables. The ``or may not'' expands the idea. That is, these are non-study variables that we should worry about -- we don't know their effects unless we do look at them.

I think the core idea of ``lurking'' should be that this is a variable in the background, not one of those we wish to study.

B. The core idea of ``confounding,'' on the other hand, refers to the effects of variables on the response, not to their situation among (or not) the study variables. Variables -- whether explanatory or lurking -- are confounded if we cannot isolate from each other their effects on the response(s).

It is common in more advanced experimental designs to deliberately confound some effects of the explanatory variables when the number of runs feasible is not adequate to isolate all the effects. The design chooses which effects are isolated and which are confounded. So, for contact with more advanced statistics, we should allow ``confounded'' to describe any variables that influence the response and whose effects cannot be isolated.

Later in the same post Dr. Moore explained the difference between "confounding" and "common cause".

Not all observed associations between X and Y are explained by ``X causes Y'' (in the simple sense that if we could manipulate X and leave all else fixed, Y would change).

Even when X does cause changes in Y, causation is often not a complete explanation of the association. (More education does often cause higher adult income, but common factors such as rich and educated parents also contribute to the observed association between education and income.)

Associations between X and Y are often at least partially explained by the relationship of X and/or Y with lurking variable or variables Z.

I attempt to explain that a variety of X/Y/Z relationships can explain observed X/Y association. The attempt isn't very satisfactory, so don't let it overshadow the main ideas. The distinction is: does Z cause changes in both X and Y, thus creating an apparent relationship between X and Y? (Common response) Or are the effects of X and Z on Y confounded, so that we just can't tell what the causal links are (maybe Z-->Y, maybe X-->Y, maybe both)?
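Moore's "common response" case is easy to simulate. In this small Python sketch (the variables and numbers are my own invention, purely for illustration), a lurking variable Z drives both X and Y, producing a strong X-Y correlation even though neither has any causal effect on the other:

```python
import random

random.seed(1)

# Z is the lurking variable; X and Y each respond to Z plus independent noise.
n = 1000
z = [random.gauss(0, 1) for _ in range(n)]
x = [zi + random.gauss(0, 0.5) for zi in z]
y = [zi + random.gauss(0, 0.5) for zi in z]

def corr(a, b):
    # Pearson correlation coefficient, computed from scratch
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    va = sum((ai - ma) ** 2 for ai in a)
    vb = sum((bi - mb) ** 2 for bi in b)
    return cov / (va * vb) ** 0.5

# The X-Y correlation is about 0.8 in theory, yet Z alone explains it.
print(round(corr(x, y), 2))
```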

If "confounding" sounds confusing, it should. The root of confound and confuse both come from the Latin fundere to pour. In essence, the two ideas have been "poured together" so that they can not be seperated from each other.

Lurk comes from the Middle English term for one who lies in wait, usually concealed. The root seems tied to the idea of being observed by such a person and the early word for "frown".



Null Set or Empty Set

In logic, it is sometimes necessary to have a set which has no members in it. This set is called the empty or null set. The symbol for the null set is ∅. Paul Zorn contributed the following note about the origin of the symbol to a mathematics discussion list:

About the etymology of the slashed-oh symbol used to denote the empty set. I suspect the truth is unknowable, but (as Google told me) the subject comes up in Andre Weil's autobiography, where Weil claims to have suggested the notation himself in connection with the Bourbaki group's effort to nail down set theory more decisively than before. Here's a quote from Weil:

"... Wisely, we had decided to publish an installment establishing the system of notation for set theory, rather than wait for the detailed treatment that was to follow: it was high time to fix these notations once and for all, and indeed the ones we proposed, which introduced a number of modifications to the notations previously in use, met with general approval. ... The symbol came from the Norwegian alphabet, with which I alone among the Bourbaki group was familiar. "

Paul Zorn
PS. Another Google'd source (authoritative, no doubt ... ) asserts that the same symbol appears in the Danish and Faroese alphabets.

The word null is from the Latin for "not any".



Octothorpe

In the 1960's when Bell Telephone added two new buttons for push button telephones, they used the * symbol and the # symbol. Although most people call the * an asterisk, the telephone folks decided to use "star". The other symbol, #, has been called lots of different names such as crosshatch, tic-tac-toe, the pound sign, and the number sign (leave it to the telephone company to put the number sign on one of the two keys without a number); but the term now used by the American telephone industry for the symbol is octothorpe, although it is more often called the pound key in conversations with the public. It seems that the name was made up more or less spontaneously by Bell engineer Don MacPherson while meeting with their first potential customer. The octo part was chosen because of the eight points at the ends of the line segments, and the thorpe was in honor of Jim Thorpe, the great Native American athlete. Why honor Thorpe? At the time MacPherson was working with a group that was trying to restore Thorpe's Olympic medals, which had been taken from him when it was found he had played semi-professional baseball prior to his track victories in the Olympics in Sweden. [It's not math, but I love the story that when the King of Sweden gave him the gold medal, the king said, "You are surely the greatest athlete on the earth". The modest Thorpe smiled and replied, "Thanks, King."]

There are a host of other names for the # symbol, and many of them can be found at this page from Wikipedia which includes several different stories about the creation of "octothorpe" or "octothorn" and also has this rather interesting clip:
"The pronunciation of # as `pound' is common in the US but a bad idea. The British Commonwealth has its own, rather more apposite, use of `pound sign. On British keyboards the UK pound currency symbol often replaces #, with # being elsewhere on the keyboard. The US usage derives from an old-fashioned commercial practice of using a # suffix to tag pound weights on bills of lading. The character is usually pronounced `hash' outside the US. There are more culture wars over the correct pronunciation of this character than any other, which has led to the ha ha only serious suggestion that it be pronounced `shibboleth' (see Judges 12:6 in an Old Testament or Tanakh)." The page also disputes the use of "square" in Britain.


Peace Curve

Recently, while reading an Australian mathematics journal, I ran across a curve that had the auspicious name, the Peace Curve. It was a pretty curve, and in times like these its name is probably reason enough to include it, but the curve is both interesting and pretty while still being simple enough that high school students could study it. The curve is described by a semi-rational equation, but is easier to construct on the Geometer's Sketchpad from its parametric form. If you have the Geometer's Sketchpad software you can download an interactive version of the GSP sketch shown at right.

The curve was named, as well as I can tell, by the article's authors, Peter Barcham and Garnet Greenbury, who wrote, "Using our imagination this appears as the curve of crucifixion and shall be called 'the curve of peace'." On the etymological side, the word peace comes to English through the French, but rests on the Latin word pax, for the absence of war.



Placebo

The OED defines a placebo as "a substance or procedure which a patient accepts as a medicine or therapy but which actually has no specific therapeutic activity for his condition or is prescribed in the belief that it has no such activity." In medical experiments there seems to be a tendency for patients to get "better" even if they have no actual treatment. To measure this effect in an experiment, a control group of patients is given a placebo in place of the actual drug or treatment, and the results of the experimental group, who receive the actual treatment, are compared to the control group taking the placebo. The word placebo is directly from the Latin and means "I shall be pleasing", from the more general root placere, to please.

A recent article (summer, 2006) in Proto, a journal from the Massachusetts General Hospital, had a brief historical note about the placebo effect.

Then, in an influential paper, “The Powerful Placebo,” published in the Journal of the American Medical Association in 1955, Henry Beecher of the Massachusetts General Hospital and Harvard Medical School swung the debate toward the modern point of view. Beecher showed that in 15 of the first RCTs [Randomized Controlled Trials], which tested treatments for a variety of diseases, a certain percentage of people in the control group actually got better. RCT proponents used this to argue that without putting half the subjects from any trial into a control group, no one would ever know whether a treatment inherently worked or whether the placebo effect—the combination of expectation, physician care and nature just taking its course—experienced by those receiving the actual treatment as well, was responsible for the improvement. Beecher believed that the effectiveness of any drug resulted in part from its active ingredients and in part from a placebo effect, and that remains the prevailing view.

I had never heard of the opposite effect, nocebo, until the following post to the AP Statistics discussion group by DeAnna McDonald:

New Yorker; August 11, 2003; Annals of Medicine--"Sick With Worry" by Jerome Groopman

"nocebo effect: even though these patients were in the group randomly assigned to take a chemically inert placebo, they reported suffering from side effects associated with taking Prozac...[they] had likely read their informed-consent forms, which detailed all the possible symptoms from taking [the non-placebo] a bit too carefully."

Placebo--showing the positive changes associated with active treatment even though taking inert treatment
Nocebo--showing the negative changes associated with active treatment even though taking inert treatment

Steve Schwartzman responded with an etymology, "The word nocebo is the first-person singular future tense of the Latin verb nocere 'to harm,' so nocebo means 'I will harm.' Related words that English has borrowed from Latin and French are (ob)noxious, pernicious, (in)nocent, innocuous, and nuisance.
There's a good Internet article about nocebos at: http://skepdic.com/nocebo.html ".

The OED does not list nocebo, so I assume this is a "very" current usage. I should add that Mr. Schwartzman is the author of The Words of Mathematics, an excellent book on the etymology of words used in mathematics, and an excellent desktop reference for any math teacher.

The reader should be aware that the "placebo effect" is still a subject of study for both statisticians and psychologists. A recent post on the AP Stats group from Joseph Strayhorn included an article indicating that the patients who receive placebos, and the doctors who treat them (both are supposedly "blind" to the treatment used in most studies), may not be as "blinded" as we have tended to think. Part of the post is here:

The following was lifted from a Psychology Today article by Seymour Fisher and Roger Greenberg:
HOW BLIND IS DOUBLE-BLIND?
Our concerns about the effects of inactive placebos on the double-blind design led us to ask just how blind the double-blind really is. By the 1950s reports were already surfacing that for psychoactive drugs, the double-blind design is not as scientifically objective as originally assumed. In 1993 we searched the world literature and found 31 reports in which patients and researchers involved in studies were asked to guess who was receiving the active psychotropic drug and who the placebo. In 28 instances the guesses were significantly better than chance--and at times they were surprisingly accurate.

In one double-blind study that called for administering either imipramine, phenelzine, or placebo to depressed patients, 78 percent of patients and 87 percent of psychiatrists correctly distinguished drug from placebo. One particularly systematic report in the literature involved the administration of alprazolam, imipramine, and placebo over an eight-week period to groups of patients who experienced panic attacks. Halfway through the treatment and also at the end, the physicians and the patients were asked to judge independently whether each patient was receiving an active drug or a placebo. If they thought an active drug was being administered, they had to decide whether it was alprazolam or imipramine. Both physicians (with an 88 percent success rate) and patients (83 percent) substantially exceeded chance in the correctness of their judgments. Furthermore, the physicians could distinguish alprazolam from imipramine significantly better than chance.

The researchers concluded that "double-blind studies of these pharmacological treatments for panic disorder are not really 'blind.'" Yet the vast majority of psychiatric drug efficacy studies have simply assumed that the double-blind design is effective; they did not test the blindness by determining whether patients and researchers were able to differentiate drug from placebo.

An effect that is often compared to the placebo effect is called the Hawthorne effect. The Hawthorne effect dates back to, and draws its name from, a study done in the late 1920s and early '30s at the Western Electric plant in Hawthorne, Illinois, just west of Chicago. During the early days of modern industrial statistics, the emerging "efficiency experts" measured every aspect of anything they thought might affect production efficiency: lighting, paint color, noise level, breaks, starting hours, etc. The Hawthorne effect is hard to define, frequently disputed, and occasionally denied outright, but in general it is an effect due not to the changes themselves, but to the subjects knowing that they are being studied. People who know they are being observed seem not to act as they do when they are not conscious of being observed. The effect may be smaller than has been thought, as there have been serious challenges to the way the study was done, and many believe the effect is only short term.



Sigma is the eighteenth letter of the Greek alphabet and has three representations: the capital sigma, Σ, the lowercase sigma, σ, and a script-like sigma, ς, which I think was/is only used at the end of words. All three of the representations of the Greek letter have been adopted for use in mathematics.

Euler originated the use of Σ for the sum of a sequence. The symbol below is a mathematical instruction to evaluate the expression 2k-1 for every integer value of k starting at k=3 and ending at k=6 and then sum these values:

Σ (2k-1) for k = 3 to 6

Note that the output is 32, which is the sum 5+7+9+11.
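The same instruction can be carried out with a short script; this minimal Python sketch mirrors the sigma notation directly:

```python
# Evaluate 2k - 1 for each integer k from 3 through 6, then sum the values.
# range(3, 7) produces k = 3, 4, 5, 6; the terms are 5, 7, 9, 11.
total = sum(2 * k - 1 for k in range(3, 7))
print(total)  # prints 32
```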

The symbol σ is usually first encountered by students as the symbol for the standard deviation of a population. According to Jeff Miller's web page on the earliest use of some math symbols, "The use of σ for standard deviation first occurs in Karl Pearson's 1894 paper, "Contributions to the Mathematical Theory of Evolution," Philosophical Transactions of the Royal Society of London, Ser. A, 185, 71-110. On page 80, he wrote, "Then σ will be termed its standard-deviation (error of mean square)" (David, 1995). When Fisher introduced variance (see Words) he did not introduce a new symbol but instead used σ²."
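As a quick illustration of Pearson's "error of mean square" idea, here is a minimal sketch of the population standard deviation: the square root of the mean squared deviation from the mean (the data set is just an invented example):

```python
import math

def population_std(data):
    """Population standard deviation: sqrt of the mean squared deviation."""
    n = len(data)
    mean = sum(data) / n
    return math.sqrt(sum((x - mean) ** 2 for x in data) / n)

# Example data with mean 5; squared deviations sum to 32, so sigma = sqrt(4) = 2.
print(population_std([2, 4, 4, 4, 5, 5, 7, 9]))  # prints 2.0
```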

The lower case sigma is also used as a subscripted function to represent the divisor function. For example σ₂(10) asks for the sum of the 2nd powers of the integer divisors of ten. Since ten has divisors of 1, 2, 5, and 10, the sum of their squares would be 1+4+25+100=130. To count the number of divisors of ten we would use σ₀(10). Since the zeroth power of each divisor is 1, we would get 1+1+1+1 = 4, the number of integer divisors of ten.
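A divisor-function sketch in Python makes the two examples above easy to check (the function name sigma is my own choice here):

```python
def sigma(k, n):
    """Sum of the k-th powers of the positive divisors of n."""
    return sum(d ** k for d in range(1, n + 1) if n % d == 0)

print(sigma(2, 10))  # divisors 1, 2, 5, 10 -> 1 + 4 + 25 + 100 = 130
print(sigma(0, 10))  # each divisor contributes 1, so this counts them: 4
```

As a side note, σ₁(n) summed over proper divisors is what makes 6 a perfect number: σ₁(6) = 1+2+3+6 = 12 = 2·6.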

The script sigma, ∫, is most commonly known as the symbol for integration. For example the area between the graph of y=x² and the x-axis between the values of x=0 and x=3 is given by the integral of x² from 0 to 3, which evaluates to 9.
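The connection between the integral sign and summation can be seen numerically: a Riemann sum over many thin rectangles approaches the exact area. A minimal sketch using a midpoint rule:

```python
# Midpoint Riemann sum approximating the integral of x^2 on [0, 3].
n = 10_000            # number of rectangles
dx = 3 / n            # width of each rectangle
approx = sum(((i + 0.5) * dx) ** 2 for i in range(n)) * dx
print(approx)         # very close to the exact area, 9
```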

A curve that has an upper and a lower horizontal asymptote, and therefore resembles a stretched letter S, is called a sigmoid curve. The word sigmoid simply means "sigma shaped". Many natural functions related to the growth of populations and the spread of disease can be modeled by sigmoid curves.
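A standard example of such a curve is the logistic function, with horizontal asymptotes at 0 and 1; this small sketch shows its behavior at the midpoint and near each asymptote:

```python
import math

def logistic(x):
    """Standard logistic function, a common sigmoid with asymptotes 0 and 1."""
    return 1 / (1 + math.exp(-x))

print(logistic(0))    # exactly 0.5, midway between the asymptotes
print(logistic(6))    # near the upper asymptote 1
print(logistic(-6))   # near the lower asymptote 0
```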



Unit

A unit in mathematics has been, since antiquity, an indivisible item. It was originally, and is still often, used to describe the quantity one, which the ancient Greeks thought of as the base item of all numbers. The definition in the OED is "1. a. Math. A single magnitude or number regarded as an undivided whole and as the ultimate base of all number; spec. in Arithmetic, the least whole number; the numeral ‘one’, represented by the figure 1."

In Book VII of Euclid's Elements, he defined:

Def. 1. A unit is that by virtue of which each of the things that exist is called one.
Def. 2. A number is a multitude composed of units.

Many of the ancient mathematical philosophers divided quantity into three types: units (1), duals (2), and numeros or number (>2); zero, of course, did not exist in most ancient cultures.

The Greeks used the term monas, from which we get the many "mono-" terms in mathematics. It seems that the English mathematician and alchemist John Dee first used unit. Dee wrote, "Note the worde, Vnit, to expresse the Greke Monas, and not Vnitie: as we haue all, commonly, till now, vsed". His definition of unit in 1570 was not greatly different from Euclid's almost 2000 years earlier: "Number, we define, to be, a certayne Mathematicall Summe, of Vnits. And, an Vnit, is that thing Mathematicall, Indiuisible, by participation of some likenes of whose property, any thing, which is in deede, or is counted One, may reasonably be called One."

Today the word unit is also used for more complex ideas, such as an invertible element in a ring, but the idea of a primitive object is common to them all.
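As a small illustrative sketch (my own example, not from the text above): in the ring of integers modulo 10, the units are exactly the residues coprime to 10, since those are the ones with multiplicative inverses.

```python
from math import gcd

# Units of Z/10Z: residues r that have an s with r*s = 1 (mod 10),
# which happens exactly when gcd(r, 10) == 1.
units = [r for r in range(1, 10) if gcd(r, 10) == 1]
print(units)  # prints [1, 3, 7, 9]

# Check: each unit really has an inverse mod 10.
for r in units:
    assert any((r * s) % 10 == 1 for s in range(1, 10))
```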