| text (stringlengths 185-580k) | id (stringlengths 47) | fineweb_score (float64 3.31-5.19) | url (stringlengths 13-1.49k) |
|---|---|---|---|
Tornadoes are the most intense storms on the planet, and they’re never discussed without at least some mention of the term wind shear. Many of us sitting at home, though, have no idea what wind shear is, or if we do, how it affects tornado production.
What is Wind Shear?
Wind shear, although it might sound complex, is a simple concept. Wind shear is merely the change in wind with height, in terms of wind direction and speed. I think that we all understand that the wind is generally stronger in the atmosphere over our heads than it is here on the ground, and if we think of the atmosphere in terms of the three dimensions that it has, it should not be surprising that the wind above us might also be blowing from a different direction than the wind at the ground. When that happens–the wind speed and direction vary with height–wind shear is occurring.
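To make that definition concrete, here is a minimal Python sketch (not from the original article) of how a simple "bulk shear" value between two heights could be computed from wind speed and direction; the two levels and the wind values are made-up examples, not measurements.

```python
import math

def wind_to_uv(speed, direction_deg):
    """Convert a wind given as (speed, direction it blows FROM in degrees)
    into eastward (u) and northward (v) components, meteorological convention."""
    rad = math.radians(direction_deg)
    return -speed * math.sin(rad), -speed * math.cos(rad)

# Hypothetical example: a 10 kt southerly wind at the surface and a
# 40 kt westerly wind a few kilometers overhead.
u_sfc, v_sfc = wind_to_uv(10, 180)       # blowing from the south
u_aloft, v_aloft = wind_to_uv(40, 270)   # blowing from the west

# Bulk shear is the vector difference between the wind at the two levels.
du, dv = u_aloft - u_sfc, v_aloft - v_sfc
print(f"Bulk shear between the two levels: {math.hypot(du, dv):.1f} kt")
```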
Wind Shear and Supercell Thunderstorms
This wind shear is an important part of the process in the development of a supercell thunderstorm, from which the vast majority of strong tornadoes form.
All thunderstorms are produced by a powerful updraft–a surge of air that rises from the ground into the upper levels of the atmosphere. When this updraft forms in an area where wind shear is present, it is influenced by the different speed and direction of the wind above: the stronger wind aloft pushes on the upper part of the rising column, which begins to tilt and rotate the air in the updraft.
Rain’s Influence on Tornado Production
Needless to say, thunderstorms typically produce very heavy rain, and rain-cooled air is much heavier than the warm air of the updraft, so the rain-cooled air produces a compensating downdraft (what comes up must come down). This downdraft pushes the part of the rotating air that was forced in its direction by the stronger wind aloft downward, and the result is a horizontal column of rotating air.
That’s Not a Tornado!
I know what you’re thinking: you’ve seen enough TLC or Discovery Channel shows to know that a horizontal column of air is NOT a tornado; you need a vertical column of air.
This Can Be a Tornado
You’re right, but remember the updraft that is driving the thunderstorm is still working, and it’s able to pull the horizontal, spinning column of air into the thunderstorm, resulting in a vertical column of spinning air.
(NOAA image showing vertical column of air in a supercell thunderstorm)
The result is a rotating thunderstorm capable of producing a tornado, and it would not be possible without wind shear.
(NOAA image showing tornado formation in supercell thunderstorm)
| <urn:uuid:7400301c-e625-46d5-be90-1020cf8d52f8> | 4.15625 | http://cloudyandcool.com/2009/05/05/wind-shear-and-tornadoes/ |
Is this bone a Neanderthal flute?
Cave Bear femur fragment from Slovenia, 43+kya
DOUBTS AIRED OVER NEANDERTHAL BONE 'FLUTE'
(AND REPLY BY MUSICOLOGIST BOB FINK)
Science News 153 (April 4, 1998): 215.
By B. Bower
Amid much media fanfare, a research team in 1996 trumpeted an ancient, hollowed-out bear bone pierced on one side with four complete or partial holes as the earliest known musical instrument. The perforated bone, found in an Eastern European cave, represents a flute made and played by Neandertals at least 43,000 years ago, the scientists contended.
Now it's time to stop the music, say two archaeologists who examined the purported flute last spring. On closer inspection, the bone appears to have been punctured and gnawed by the teeth of an animal -- perhaps a wolf -- as it stripped the limb of meat and marrow, report April Nowell and Philip G. Chase, both of the University of Pennsylvania in Philadelphia. "The bone was heavily chewed by one or more carnivores, creating holes that became more rounded due to natural processes after burial," Nowell says. "It provides very weak evidence for the origins of [Stone Age] music." Nowell presented the new analysis at the annual meeting of the Paleoanthropology Society in Seattle last week.
Nowell and Chase examined the bone with the permission of its discoverer, Ivan Turk of the Slovenian Academy of Sciences in Ljubljana (S.N.: 11/23/96, p. 328). Turk knows of their conclusion but still views the specimen as a flute.
Both open ends of the thighbone contain clear signs of gnawing by carnivores, Nowell asserts. Wolves and other animals typically bite off nutrient-rich tissue at the ends of limb bones and extract available marrow. If Neandertals had hollowed out the bone and fashioned holes in it, animals would not have bothered to gnaw it, she says.
Complete and partial holes on the bone's shaft were also made by carnivores, says Nowell. Carnivores typically break open bones with their scissor-like cheek teeth. Uneven bone thickness and signs of wear along the borders of the holes, products of extended burial in the soil, indicate that openings made by cheek teeth were at first less rounded and slightly smaller, the researchers hold.
Moreover, the simultaneous pressure of an upper and lower tooth produced a set of opposing holes, one partial and one complete, they maintain.
Prehistoric, carnivore-chewed bear bones in two Spanish caves display circular punctures aligned in much the same way as those on the Slovenian find. In the March Antiquity, Francesco d'Errico of the Institute of Quaternary Prehistory and Geology in Talence, France, and his colleagues describe the Spanish bones.
In a different twist, Bob Fink, an independent musicologist in Canada, has reported on the Internet (http://www.webster.sk.ca/greenwich/fl-compl.htm) that the spacing of the two complete and two partial holes on the back of the Slovenian bone conforms to musical notes on the diatonic (do, re, mi...) scale.
The bone is too short to incorporate the diatonic scale's seven notes, counter Nowell and Chase. Working with Pennsylvania musicologist Robert Judd, they estimate that the find's 5.7-inch length is less than half that needed to cover the diatonic spectrum. The recent meeting presentation is "a most convincing analysis," comments J. Desmond Clark of the University of California, Berkeley, although it's possible that Neandertals blew single notes through carnivore-chewed holes in the bone.
"We can't exclude that possibility," Nowell responds. "But it's a big leap of faith to conclude that this was an intentionally constructed flute."
TO THE EDITOR, SCIENCE NEWS (REPLY BY BOB FINK, May 1998)
(See an update of this discussion on Bob Fink's web site, November 2000)
The doubts raised by Nowell and Chase (April 4th, DOUBTS AIRED OVER NEANDERTHAL BONE 'FLUTE'), who say the Neanderthal bone is not a flute, have these weaknesses:
The alignment of the holes -- all in a row and all of equivalent diameter -- appears contrary to most tooth marks, unless some holes were made independently by several animals. The latter case makes the odds of the holes ending up in line far worse. It would also be strange for animals to have homed in on this one bone in a cave full of bones, where no similarly chewed bones have been reported.
This claim is harder to believe when it is calculated that the chances of holes being arranged, by chance, in a pattern that matches the spacings of 4 notes of a diatonic flute are only one in hundreds.
The analysis I made on the Internet (http://www.webster.sk.ca/greenwich/fl-compl.htm) regarding the bone being capable of matching 4 notes of the do, re, mi (diatonic) scale included the possibility that the bone was extended with another bone "mouthpiece" sufficiently long to make the notes sound fairly in tune. While Nowell says "it's a big leap of faith to conclude that this was an intentionally constructed flute," it's a bigger leap of faith to accept the immense coincidence that animals blindly created a hole-spacing pattern with holes all in line (in what clearly looks like so many other known bone flutes which are made to play notes in a step-wise scale) and blindly create a pattern that also could play a known acoustic scale if the bone was extended. That's too much coincidence for me to accept. It is more likely that it is an intentionally made flute, although admittedly with only the barest of clues regarding its original condition.
The 5.7 inch figure your article quoted appears erroneous, as the centimeter scale provided by its discoverer, Ivan Turk, indicates the artifact is about 4.3 inches long. However, the unbroken femur would originally have been about 8.5 inches, and the possibility of an additional hole or two exists, to complete a full scale, perhaps aided by the possible thumbhole. However, the full diatonic spectrum is not required as indicated by Nowell and Chase: It could also have been a simpler (but still diatonic) 4 or 5 note scale. Such short-scale flutes are plentiful in homo sapiens history.
Finally, a worn-out or broken flute bone can serve as a scoop for manipulation of food, explaining why animals might chew on its ends later. It is also well-known that dogs chase and maul even sticks, despite their non-nutritional nature. What appears "weak" is not the case for a flute, but the case against it by Nowell and Chase.
Letter to the Editor: Antiquity Journal:
"A Bone to Pick"
By Bob Fink
I have a bone to pick with Francesco d'Errico's viewpoint in the March issue of Antiquity (article too long to reproduce here) regarding the Neanderthal flute found in Slovenia by Ivan Turk. D'Errico argues the bone artifact is not a flute.
D'Errico omits dealing with the best evidence that this bone find is a flute.
Regarding the most important evidence, that of the holes being lined up: neither d'Errico nor Turk makes mention of it.
This line-up is remarkable especially if they were made by more than one carnivore, which apparently they'd have to be, based on Turk's analysis of the center-spans of the holes precluding their being made by a single carnivore or bite (Turk,* pp.171-175). To account for this possible difficulty, some doubters do mention "one or more" carnivores (Chase & Nowell, Science News 4/4/98).
My arguments over the past year pointed out the mathematical odds of the lining up of the holes occurring by chance-chewing are too difficult to believe.
The Appendix in my essay ("Neanderthal Flute -- A Musicological Analysis") proves that there are 680 ways a set of 4 random holes could be differently spaced so as to produce an audibly different set of tones. The chances that a random set would match the existing fragment's spacing [which also could produce a match to four diatonic notes of the scale] are therefore only about one in hundreds. If, in calculating the odds, you also allowed the holes to be out of line, or fewer than 4 holes, then the chance of a line-up match is only one in many tens of thousands.
And yet randomness and animal bites still are acceptable to account for holes being in line that could also play some notes of the scale? This is too much coincidence for me to believe occurred by chance.
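Fink's "one in hundreds" figure comes from the appendix of his own essay and is not reproduced here, but the style of the argument can be sketched. The Python Monte Carlo below is purely illustrative: the usable bone length, the target spacings, and the "audibly identical" tolerance are hypothetical placeholders, not Turk's or Fink's measurements.

```python
import random

# Place 4 holes at random along a usable bone length and ask how often their
# spacing pattern happens to match a given target pattern (e.g., spacings
# consistent with four diatonic notes) within a tolerance below which two
# tones would sound essentially the same. All numbers are placeholders.
BONE_LENGTH = 110.0                   # mm, hypothetical usable length
TARGET_SPACINGS = [35.0, 20.0, 15.0]  # mm between adjacent holes, hypothetical
TOLERANCE = 2.0                       # mm, hypothetical "audibly identical" window
TRIALS = 200_000

def random_spacings():
    holes = sorted(random.uniform(0, BONE_LENGTH) for _ in range(4))
    return [b - a for a, b in zip(holes, holes[1:])]

def matches(spacings):
    return all(abs(s - t) <= TOLERANCE for s, t in zip(spacings, TARGET_SPACINGS))

hits = sum(matches(random_spacings()) for _ in range(TRIALS))
print(f"Matches in {TRIALS:,} random trials: {hits}"
      f" (roughly 1 in {TRIALS // max(hits, 1):,})")
```

Changing the tolerance or the target spacings changes the estimate substantially, which is exactly why the two sides of this exchange disagree about how improbable the observed arrangement really is.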
D'Errico mentions my essay in his article and what he thought it was about, but he overstates my case into being a less believable one. My case simply was that if the bone was long enough (or a shorter bone extended by a mouthpiece insert) then the 4 holes would be consistent and in tune with the sounds of Do, Re, Mi, Fa (or flat Mi, Fa, Sol, and flat La in a minor scale).
In the six points I list below, extracted from Turk's monograph in support of this being a flute, d'Errico omits dealing with much of the first, and all of the second, fourth and sixth points.
Turk & Co's monograph shows the presence on site of boring tools, and includes experiments made by Turk's colleague Guiliano Bastiani who successfully produced similar holes in fresh bone using tools of the type found at the site (pp. 176-78 Turk).
They also wrote (pp. 171-75) that:
1. The center-to-center distances of the holes in the artifact are smaller than that of the tooth spans of most carnivores. The smallest tooth spans they found were 45mm, and the holes on the bone are 35mm (or less) apart;
2. Holes bitten are usually at the ends of bones rather than in the center of them;
3. There is an absence of dents, scratches and other signs of gnawing and counter-bites on the artifact;
4. The center-to-center distances do not correspond to the spans of carnivores which could pierce the bone;
5. The diameters of the holes are greater than that producible by a wolf exerting the greatest jaw pressure it had available -- it's doubtful that a wolf's jaws would be strong enough (like a hyena's) to have made the holes, especially in the thickest part of the wall of the artifact.
6. If you accept one or more carnivores, then why did they over-target one bone, when there were so many other bones in the cave site? Only about 4.5% of the juvenile bones were chewed or had holes, according to Turk (p. 117).
* Turk, Ivan (ed.) (1997). Mousterian Bone Flute. Znanstvenoraziskovalni Center Sazu, Ljubljana, Slovenia.
Maintained by Francis F. Steen, Communication Studies, University of California Los Angeles
| <urn:uuid:f166f15d-9976-40ed-8a49-8bed360001ff> | 3.71875 | http://cogweb.ucla.edu/ep/FluteDebate.html |
In some people, macular degeneration advances so slowly that it has little effect on their vision. But in others, the disease progresses faster and may lead to vision loss. Sometimes only one eye is affected, while the other eye remains free of problems for many years. People with dry macular degeneration in one eye often do not notice any changes in their vision. With one eye seeing clearly, they can still drive, read, and see fine details. Some people may notice changes in their vision only if macular degeneration affects both of their eyes. Both dry and wet macular degeneration cause no pain.
Symptoms of macular degeneration include:
Blurred vision — This is an early sign. For example, you may need more light for reading and other tasks.
Difficulty seeing details in front of you — You may have a difficult time seeing words in a book or faces.
Blind spot — A small, growing blind spot will appear in the middle of your field of vision. This spot occurs because a group of cells in the macula have stopped working properly. Over time, the blurred spot may get bigger and darker, taking up more of your central vision.
Crooked lines — An early symptom of wet macular degeneration is straight lines that appear crooked or wavy. This happens because the newly formed blood vessels leak fluid under the macula. The fluid raises the macula from its normal place at the back of the eye and distorts your vision.
Lighting — Images appear more gray in color and colors are not as bright.
Contact your ophthalmologist immediately for an eye exam if you notice:
- Visual distortions
- Sudden decrease in central vision
- A central blind spot
- Any other visual problems
- Reviewer: Christopher Cheyer, MD
- Update Date: 09/01/2011
| <urn:uuid:6aba2b8d-0f86-4d64-b8af-a03c21e98c63> | 3.328125 | http://doctors-hospital.net/your-health/?/19810/Reducing-Your-Risk-of-Macular-Degeneration~Symptoms |
A bullock cart or ox cart is a two-wheeled or four-wheeled vehicle pulled by oxen (draught cattle). It is a means of transportation used since ancient times in many parts of the world, and bullock carts are still used today where modern vehicles are too expensive or the infrastructure does not favor them.
Used especially for carrying goods, the bullock cart is pulled by one or several oxen (bullocks). The cart (also known as a jinker) is attached to a bullock team by a special chain attached to yokes, but a rope may also be used for one or two animals. The driver and any other passengers sit on the front of the cart, while load is placed in the back. Traditionally the cargo was usually agrarian goods and lumber.
Costa Rica
In Costa Rica, ox carts (carretas in Spanish) were an important part of daily life and commerce, especially between 1850 and 1935, and they developed a unique construction and decoration tradition that continues to this day. Costa Rican parades and traditional celebrations are not complete without a traditional ox cart parade.
In 1988, the traditional ox cart was declared a National Symbol of Work by the Costa Rican government.
In 2005, the "Oxherding and Oxcart Traditions in Costa Rica" were included in UNESCO's Representative List of the Intangible Cultural Heritage of Humanity.
In Indonesia, bullock carts are commonly used in rural parts of the country to transport goods and people, although horse-drawn carts are more common than bullock carts on Indonesian streets.
Bullock carts were widely used in Malaysia before the introduction of automobiles, and many are still used today. These included passenger vehicles, now used especially for tourists. Passenger carts are usually equipped with awnings for protection against sun and rain, and are often gaily decorated.
| <urn:uuid:4dcad241-2b6b-4970-9112-c67a47a29a2c> | 3.453125 | http://en.wikipedia.org/wiki/Bullock_cart |
Deep-space communication improved with electromagnetic radiation antenna
- Robert C. Dye
- Technology Transfer
- (505) 667-3404
Electromagnetic radiation antenna has potential for deep-space communication
- Directed Energy
- Long-range communications
- Medicine (Oncology)
- RADAR imaging applications are countermeasure-resistant
- Communications can be spatially-encrypted
- 4-dimensional volumes of energy can be aimed at a single space-time point for directed energy applications
- Nonspherical decay of the cusp enables low-power communications and propagation over great distances
Los Alamos National Laboratory (LANL) researchers have developed the Lightslinger, a completely new type of antenna that produces tightly-focused packets of electromagnetic radiation fundamentally different from the emissions of conventional transmitters. The device has potential applications in RADAR, directed-energy (non-kinetic kill), secure communications, ultra-long-range communications (e.g., deep-space), medicine (oncology) and astrophysics.
The Lightslinger functions by producing a moving polarization pattern in a ring of alumina. By careful timing of voltages applied to electrodes that surround the alumina, the polarization pattern can be made to move superluminally, i.e., faster than the speed of light in a vacuum. Nobel laureate Vitaly Ginzburg showed both that such superluminal polarization patterns do not violate the principles of special relativity and that they emit electromagnetic radiation. Once a source travels faster than the waves that it emits, it can make contributions at multiple retarded times to a signal received instantaneously at a distance. This effect is already well known in acoustics; when a supersonic airplane accelerates through the speed of sound, a violent “sonic boom” is heard many miles away, even if the airplane itself is rather quiet. The Lightslinger enables the same thing to be done with electromagnetic radiation; i.e., a relatively low-power source can make an “electromagnetic boom”, an intense concentration of radio waves at a great distance.
The “electromagnetic boom” is due to temporal focusing, that is, focusing in the time domain. Because of this effect, part of the emitted radiation possesses an intensity that decays with distance r as 1/r rather than as the conventional inverse-square law, 1/r². These nonspherically decaying wavepackets represent a game-changing technology in the applications of electromagnetic radiation.
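As a rough illustration of why the claimed 1/r decay matters at long range, the short Python sketch below compares the two decay laws; the reference intensity and the distances are arbitrary placeholders, not LANL figures.

```python
# Compare intensity fall-off under the conventional inverse-square law (1/r^2)
# with the nonspherical 1/r decay described above. Units are arbitrary.
r0, I0 = 1.0, 1.0   # reference distance and intensity

for r in (10, 1_000, 100_000, 10_000_000):
    inv_square = I0 * (r0 / r) ** 2
    inv_linear = I0 * (r0 / r)
    print(f"r = {r:>12,}:  1/r^2 = {inv_square:.1e}   1/r = {inv_linear:.1e}   "
          f"advantage = {inv_linear / inv_square:,.0f}x")
```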
Development stage: Working prototype
Patent status: Patent pending
Licensing status: Available for exclusive or non-exclusive licensing
| <urn:uuid:79bc5d65-38cf-489f-b8c5-6800ff88c6f7> | 3.34375 | http://[email protected]/collaboration/tech-transfer/tech-transfer-summaries/electromagnetic-radiation-antenna-has-potential-for-deep-space-communication.php |
The test team views the use of a pulley as an intermediate step only, and has planned to shift to a reliance on windlasses like those that apparently were used to hoist sails on Egyptian ships.
"The whole approach has been to downgrade the technology," Gharib said. "We first wanted to show that a kite could raise a huge weight at all. Now that we're raising larger and larger stones, we're also preparing to replace the steel scaffolding with wooden poles and the steel pulleys with wooden pulleys like the ones they may have used on Egyptian ships."
For Gharib, the idea of accomplishing heavy tasks with limited manpower is appealing from an engineer's standpoint because it makes more logistical sense.
"You can imagine how hard it is to coordinate the activities of hundreds if not thousands of laborers to accomplish an intricate task," said Gharib. "It's one thing to send thousands of soldiers to attack another army on a battlefield. But an engineering project requires everything to be put precisely into place.
"I prefer to think of the technology as simple, with relatively few people involved," he explained.
Gharib and Graff came up with a way of building a simple structure around the obelisk, with a pulley system mounted in front of the stone. That way, the base of the obelisk would drag on the ground for a few feet as the kite lifted the stone, and the stone would be quite stable once it was pulled upright into a vertical position. If the obelisk were raised with the base as a pivot, the stone would tend to swing past the vertical position and fall the other way.
The top of the obelisk is tied with ropes threaded through the pulleys and attached to the kite. The operation is guided by a couple of workers using ropes attached to the pulleys.
No one has found any evidence that the ancient Egyptians moved stones or any other objects with kites and pulleys. But Clemmons has found some tantalizing hints that the project is on the right track. On a building frieze in a Cairo museum, there is a wing pattern in bas-relief that does not resemble any living bird. Directly below are several men standing near vertical objects that could be ropes.
Gharib's interest in the project is mainly to demonstrate that the technique may be viable.
"We're not Egyptologists," he said. "We're mainly interested in determining whether there is a possibility that the Egyptians were aware of wind power, and whether they used it to make their lives better."
Now that Gharib and his team have successfully raised the four-ton concrete obelisk, they plan to further test the approach using a ten-ton stone, and perhaps an even heavier one after that. Eventually they hope to obtain permission to try using their technique to raise one of the obelisks that still lie in an Egyptian quarry.
"In fact, we may not even need a kite. It could be we can get along with just a drag chute," Gharib said.
An important question is: Was there enough wind in Egypt for a kite or a drag chute to fly? Probably so, as steady winds of up to 30 miles per hour are not unusual in the areas where pyramids and obelisks were found.
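As a back-of-envelope plausibility check on those wind figures, the Python sketch below estimates the aerodynamic pull a kite or drag chute could deliver in a steady 30 mph wind. The kite area and drag coefficient are hypothetical assumptions, not values from Gharib's team, and any mechanical advantage from the pulley rigging described above is ignored.

```python
# Rough aerodynamic force on a kite held in a steady wind:
#   dynamic pressure q = 0.5 * rho * v^2, pull ~ Cd * A * q
rho = 1.2              # air density near sea level, kg/m^3
v = 30.0 * 0.44704     # 30 mph converted to m/s
Cd = 1.2               # assumed effective drag coefficient (hypothetical)
A = 40.0               # assumed kite area in m^2 (hypothetical)

q = 0.5 * rho * v**2   # dynamic pressure, Pa
pull_newtons = Cd * A * q

print(f"Dynamic pressure: {q:.0f} Pa")
print(f"Approximate pull: {pull_newtons / 1000:.1f} kN "
      f"(~{pull_newtons / 9.81:.0f} kgf)")
# Moving a multi-tonne stone with such a pull is only plausible with a large
# kite, strong gusts, and/or the mechanical advantage of the pulley system.
```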
(c) 2001 Caltech
| <urn:uuid:7989d2d3-3e6d-4a4d-ad8e-e7b19882a89a> | 3.578125 | http://news.nationalgeographic.com/news/2001/06/0628_caltechobelisk_2.html |
Classroom Activities for Teaching Sedimentary Geology
This collection of teaching materials allows for the sharing of ideas and activities within the community of geoscience teachers. Do you have a favorite teaching activity you'd like to share? Please help us expand this collection by contributing your own teaching materials.
Subject: Sedimentary Geology
Chemical and Physical Weathering Field and Lab Experiment: Development and Testing of Hypotheses
Lisa Greer, Washington and Lee University
This exercise combines an integrated field and laboratory experiment with a significant scientific writing assignment to address chemical and physical weathering processes via hypothesis development, experimental ...
Demystifying the Equations of Sedimentary Geology
Larry Lemke, Wayne State University
This activity includes three strategies to help students develop a deeper comfort level and stronger intuitive sense for understanding mathematical expressions commonly encountered in sedimentary geology. Each can ...
Digital Sandstone Tutorial
Kitty Milliken, The University of Texas at Austin
The Tutorial Petrographic Image Atlas is designed to give students more exposure to petrographic features than they can get during organized laboratory periods.
Red rock and concretion models from Earth to Mars: Teaching diagenesis
Margie Chan, University of Utah
This activity teaches students concepts of terrestrial diagenesis (cementation, fluid flow, porosity and permeability, concretions) and encourages them to apply those concepts to new or unknown settings, including ...
| <urn:uuid:f4b8146e-83a2-43e4-8f2c-b3c235ae8afb> | 3.875 | http://serc.carleton.edu/NAGTWorkshops/sedimentary/activities.html?q1=sercvocabs__43%253A206 |
By JOHN CARTER
When Abraham Lincoln died from an assassin’s bullet on April 15, 1865, Edwin Stanton remarked to those gathered around his bedside, “Now he belongs to the ages.”
One of the meanings implied in Stanton’s famous statement is that Lincoln would not only be remembered as an iconic figure of the past, but that his spirit would also play a significant role in ages to come.
The Oscar-nominated movie “Lincoln,” which chronicles the struggle to pass the 13th amendment abolishing slavery, has turned our attention again to Lincoln’s legacy and his relevance amid our nation’s present divisions and growing pains.
Here is some of the wit and wisdom of Abraham Lincoln worth pondering:
“As for being president, I feel like the man who was tarred and feathered and ridden out of town on a rail. To the man who asked him how he liked it, he said, ‘If it wasn’t for the honor of the thing, I’d rather walk.’”
“I desire so to conduct the affairs of this administration that if at the end, when I come to lay down the reins of power, I have lost every other friend on earth, I shall at least have one friend left, and that friend shall be down inside of me.”
“Should my administration prove to be a very wicked one, or what is more probable, a very foolish one, if you the people are true to yourselves and the Constitution, there is but little harm I can do, thank God.”
“Bad promises are better broken than kept.”
“I am not at all concerned that the Lord is on our side in this great struggle, for I know that the Lord is always on the side of the right; but it is my constant anxiety and prayer that I and this nation may be on the Lord’s side.”
“I have never had a feeling, politically, that did not spring from the sentiments embodied in the Declaration of Independence.”
“Those who deny freedom to others deserve it not for themselves; and, under a just God, cannot long retain it.”
“As I would not be a slave, so I would not be a master. This expresses my idea of democracy.”
“The probability that we may fail in the struggle ought not to deter us from the support of a cause we believe to be just.”
“The true rule, in determining to embrace or reject anything, is not whether it have any evil in it, but whether it have more evil than good. There are few things wholly evil or wholly good.”
“Some of our generals complain that I impair discipline and subordination in the army by my pardons and respites, but it makes me rested, after a hard day’s work, if I can find some good excuse for saving a man’s life, and I go to bed happy as I think how joyful the signing of my name will make him (a deserter) and his family.”
“I have been driven many times to my knees by the overwhelming conviction that I had nowhere else to go.”
In addition, Lincoln’s Gettysburg Address and his second inaugural speech are ever relevant. And you may wish to add your own favorites to these.
Paul’s advice to us in Philippians 4:8 is to “fill your minds with those things that are good and deserve praise: things that are true, noble, right, pure, lovely, and honorable.”
As we celebrate his birthday on the 12th, Lincoln’s words more than meet this standard!
John Carter is a Weatherford resident whose column, “Notes From the Journey,” is published weekly in the Weatherford Democrat.
| <urn:uuid:d53f9812-f42b-4039-a509-209a2d5aac9b> | 3.390625 | http://weatherforddemocrat.com/opinion/x1303543173/NOTES-FROM-THE-JOURNEY-Lincoln-is-still-one-for-the-ages |
Science Fair Project Encyclopedia
The chloride ion is formed when the element chlorine picks up one electron to form the anion (negatively charged ion) Cl−. The salts of hydrochloric acid HCl contain chloride ions and are also called chlorides. An example is table salt, which is sodium chloride with the chemical formula NaCl. In water, it dissolves into Na+ and Cl− ions.
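As a small worked example of the dissociation just described, the Python sketch below computes the chloride ion concentration obtained by dissolving table salt in water; the 10 g sample and 1 L volume are arbitrary choices, and the molar masses are standard reference values.

```python
# NaCl dissolving in water: NaCl -> Na+ + Cl-
M_NA = 22.99            # g/mol, sodium
M_CL = 35.45            # g/mol, chlorine
M_NACL = M_NA + M_CL    # g/mol, sodium chloride

mass_nacl = 10.0        # grams dissolved (arbitrary example)
volume_l = 1.0          # litres of solution (arbitrary example)

moles_nacl = mass_nacl / M_NACL
# Each formula unit releases exactly one Cl- ion, so mol Cl- = mol NaCl.
cl_molarity = moles_nacl / volume_l

print(f"Cl- concentration: {cl_molarity:.3f} mol/L")
print(f"Chloride makes up {M_CL / M_NACL:.1%} of NaCl by mass")
```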
The word chloride can also refer to a chemical compound in which one or more chlorine atoms are covalently bonded in the molecule. This means that chlorides can be either inorganic or organic compounds. The simplest example of an inorganic covalently bonded chloride is hydrogen chloride, HCl. A simple example of an organic covalently bonded chloride is chloromethane (CH3Cl), often called methyl chloride.
Other examples of inorganic covalently bonded chlorides which are used as reactants are:
- phosphorus trichloride, phosphorus pentachloride, and thionyl chloride - all three are reactive chlorinating reagents that are used in the laboratory.
- Disulfur dichloride (S2Cl2) - used for the vulcanization of rubber.
Chloride ions have important physiological roles. For instance, in the central nervous system the inhibitory action of glycine and some of the action of GABA relies on the entry of Cl− into specific neurons.
The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License.
| <urn:uuid:4e76b8fd-c479-45d7-8ee7-faf61495aecb> | 4.59375 | http://www.all-science-fair-projects.com/science_fair_projects_encyclopedia/Chloride |
Brain Matures a Few Years Late in ADHD, but Follows Normal Pattern
A 2007 press release from the National Institute of Mental Health discusses brain development in ADHD youths. In some cases, brain development is delayed as much as three years. The full release and related video are available on the NIMH site: Brain Matures a Few Years Late in ADHD, but Follows Normal Pattern.
Autistic Spectrum Disorders (ASD):
How to Help Children with Autism Learn
From Dr. Lauer and Dr. Beaulieu's talk
Quick facts about Pervasive Developmental Disorders (PDD)/ Autistic Spectrum Disorders (ASD)
- Autism is a 'spectrum disorder' meaning that it affects children in different ways and at different times in their development.
- Typically, delays and learning problems can emerge in several areas of functioning including social functioning, communication skills, motor skills, and overall intellectual potential.
- Each child has their own learning style that includes specific learning challenges as well as areas of preserved skills and, at times, exceptional abilities.
- Both autism and Asperger's disorder are on the same continuum but are distinct in their expression.
What are the challenges students with PDD/ASD frequently experience?
- Academic difficulties that can often be misinterpreted as learning disabilities.
- Problems with executive functioning skills.
- Difficulty in forming relationships with peers.
- Emotional difficulties due to learning and social problems such as anxiety, depression, low self-esteem.
- Fear of new situations and trouble adjusting to changes.
- May look like or be misconstrued as attention-deficit-hyperactivity disorder (ADHD), Nonverbal Learning Disability (NLD), Oppositional-Defiant Disorder or Obsessive Compulsive Disorder (OCD).
Why choose US to help YOU?
- Our evaluations are conducted by neuropsychologists who have been extensively trained in the early detection of autistic spectrum disorders and in the identification of specific patterns of learning strengths and weaknesses that are often associated with this condition.
- Our evaluations help determine which teaching style is best suited to fit an individual's specific learning profile; we also offer suggestions regarding compensatory educational approaches.
- We work as a team with other learning professionals, advocates and health professionals to enhance the child's potential for success in all settings.
'The design of truly individual treatment plans that exploit strengths and compensate for weaknesses begins with a detailed understanding of how learning is different for children with autism than for those without autism and how learning is different among children with autism.'
— Bryna Siegel, Ph.D., author of Helping Children with Autism Learn
For more information on current research, interventions and programs, follow us on Facebook.
Coming to see you for an evaluation was so helpful and I'm so happy that I did this. After struggling for years with ADHD but not knowing that's what it was, and almost completely ruining our marriage because of it, your diagnosis helped more than you could know. Now I know that it's not just me; the diagnosis has turned our lives around and helped me feel more accomplished at work. Thanks again for everything.
Sandy and Bob M.
| <urn:uuid:9b0b7a97-4882-4e06-adc0-44f4bbdc3349> | 3.390625 | http://www.cnld.org/autistic_spectrum_disorders.php |
1854-89 THREE DOLLARS INDIAN HEAD
In 1853 the United States negotiated the "Gadsden Purchase," a settlement of a boundary dispute with Mexico that resulted in the U.S. acquiring what would become the southern portions of Arizona and New Mexico for ten million dollars. The following year Commodore Matthew Perry embarked upon his famed expedition to re-open Japan to the Western world and establish trade. Spreading beyond its borders in many ways, a few years earlier the United States had joined the worldwide move to uniform postage rates and printed stamps when the Congressional Act of March 3, 1845 authorized the first U.S. postage stamps, and set the local prepaid letter rate at five cents. This set the stage for a close connection between postal and coinage history.
Exactly six years later, the postage rate was reduced to three cents when New York Senator Daniel S. Dickinson fathered legislation that simultaneously initiated coinage of the tiny silver three-cent piece as a public convenience. The large cents then in circulation were cumbersome and unpopular, and the new denomination was designed to facilitate the purchase of stamps without using the hated "coppers."
This reasoning was carried a step further when the Mint Act of February 21, 1853 authorized a three-dollar gold coin. Congress and Mint Director Robert Maskell Patterson were convinced that the new coin would speed purchases of three-cent stamps by the sheet and of the silver three-cent coins in roll quantities. Unfortunately, at no time during the 35-year span of this denomination did public demand justify these hopes. Chief Engraver James Barton Longacre chose an "Indian Princess" for his obverse: not a Native American profile, but a profile modeled after the Greco-Roman Venus Accroupie statue then in a Philadelphia museum. Longacre used this distinctive sharp-nosed profile on his gold dollar of 1849 and would employ it again on the Indian Head cent of 1859. On the three-dollar coin Liberty is wearing a feathered headdress of equal-sized plumes with a band bearing LIBERTY in raised letters. She's surrounded by the inscription UNITED STATES OF AMERICA. Such a headdress dates back to the earliest known drawings of American Indians, the French artist Jacques le Moyne du Morgue's sketches of the Florida Timucua tribe, who lived near the tragic French colony of Fort Caroline in 1562. It was accepted by engravers and medalists of the day as the design shorthand for "America."
Longacre's reverse depicted a wreath of tobacco, wheat, corn and cotton with a plant at top bearing two conical seed masses. The original wax models of this wreath still exist on brass discs in a Midwestern collection and show how meticulous Longacre was in preparing his design. Encircled by the wreath is the denomination 3 DOLLARS and the date. There are two boldly different reverse types, the small DOLLARS appearing only in 1854 and the large DOLLARS on coins of 1855-89. Many dates show bold "outlining" of letters and devices, resembling a double strike but probably the result of excessive forcing of the design punches into the die steel, causing a hint of their sloping "shoulders" to appear as part of the coin's design. The high points of the obverse design that first show wear are the cheek and hair above the eye; on the reverse, check the bow knot and leaves.
A total of just over 535,000 pieces were issued along with 2058 proofs. The first coins struck were the 15 proofs of 1854. Regular coinage began on May 1, and that first year saw 138,618 pieces struck at Philadelphia (no mintmark), 1,120 at Dahlonega (D), and 24,000 at New Orleans (O). These two branch mints would strike coins only in 1854. San Francisco produced the three-dollar denomination in 1855, 1856, and 1857, again in 1860, and apparently one final piece in 1870. Mintmarks are found below the wreath.
Every U.S. denomination boasts a number of major rarities. The three-dollar gold coinage of 1854-1889 is studded with so many low-mintage dates that the entire series may fairly be called rare. In mint state 1878 is the most common date, followed by the 1879, 1888, 1854 and 1889 issues. Every other date is very rare in high grade, particularly 1858, 1865, 1873 Closed 3 and all the San Francisco issues. Minuscule mintages were the rule in the later years. Proof coins prior to 1859 are extremely rare and more difficult to find than the proof-only issues of 1873 Open 3, 1875 and 1876, but many dates are even rarer in the higher Mint State grades. This is because at least some proofs were saved by well- heeled collectors while few lower-budget collectors showed any interest in higher-grade business strikes of later-date gold. Counterfeits are known for many dates; any suspicious piece should be authenticated.
The rarest date of all is the unique 1870-S, of which only one example was struck for inclusion in the new Mint's cornerstone. Either the coin escaped, or a second was struck as a pocket piece for San Francisco Mint Coiner J.B. Harmstead. In any event, one coin showing traces of jewelry use surfaced in the numismatic market in 1907. It was sold to prominent collector William H. Woodin, and when Thomas L. Elder sold the Woodin collection in 1911, the coin went to Baltimore's Waldo C. Newcomer. Later owned by Virgil Brand, it was next sold by Ted and Carl Brandts of Ohio's Celina Coin Co. and Stack's of New York to Louis C. Eliasberg in 1946 for $11,500. In Bowers and Merena's October 1982 sale of the U.S. Gold Collection, this famous coin sold for a record $687,500.
The three-dollar denomination quietly expired in 1889 along with the gold dollar and nickel three-cent piece. America's coinage was certainly more prosaic without this odd denomination gold piece, but its future popularity with collectors would vastly outstrip the lukewarm public reception it enjoyed during its circulating life.
| <urn:uuid:ce5e0d75-e5f8-4ce2-8b94-86d9527d0dd4> | 3.671875 | http://www.coinsite.com/CoinSite-PF/PParticles/$3goldix.asp |
Rainy Day Painting
Create your very own creepy, haunted castle sitting in a turbulent field of flowing grass, eerily surrounded by dark, ominous clouds.
Fireworks are such an exciting part of summer festivities, but it's sad when the show is over. Keep them alive all year long with watercolor fireworks.
Show your high schooler how to celebrate Mary Cassatt, an Impressionist painter, by creating a mother-child painting in her style.
Put your individual fingerprint on the 100th Day of School (literally!) with this activity.
Show your preschooler how to make a print of a butterfly using her hand as a tool--a great way to stimulate her sense of touch.
Use marbles and paint to explore the wild world of shapes and color...and build kindergarten writing strength, too.
Introduce your kindergartener to some art history by showing him how to create an everyday object print, Andy Warhol-style.
Celebrate the changing seasons with this fun, hands-on art activity that will teach your child about the different colors of the seasons.
Help your preschooler begin reading and writing the printed word by connecting simple letter recognition exercises with this easy art project: alphabet trees!
| <urn:uuid:b9953fa3-a9b1-49a2-8a31-a2bff8d508c2> | 3.453125 | http://www.education.com/collection/zapkode/rainy-day-painting/ |
the energy [r]evolution
The climate change imperative demands nothing short of an Energy [R]evolution. The expert consensus is that this fundamental shift must begin immediately and be well underway within the next ten years in order to avert the worst impacts. What is needed is a complete transformation of the way we produce, consume and distribute energy, while at the same time maintaining economic growth. Nothing short of such a revolution will enable us to limit global warming to less than a rise in temperature of 2° Celsius, above which the impacts become devastating.
Current electricity generation relies mainly on burning fossil fuels, with their associated CO2 emissions, in very large power stations which waste much of their primary input energy. More energy is lost as the power is moved around the electricity grid network and converted from high transmission voltage down to a supply suitable for domestic or commercial consumers. The system is innately vulnerable to disruption: localised technical, weather-related or even deliberately caused faults can quickly cascade, resulting in widespread blackouts. Whichever technology is used to generate electricity within this old fashioned configuration, it will inevitably be subject to some, or all, of these problems. At the core of the Energy [R]evolution there therefore needs to be a change in the way that energy is both produced and distributed.
4.1 key principles
the energy [r]evolution can be achieved by adhering to five key principles:
1. Respect natural limits – phase out fossil fuels by the end of this century. We must learn to respect natural limits. There is only so much carbon that the atmosphere can absorb. Each year humans emit over 25 billion tonnes of carbon equivalent; we are literally filling up the sky. Geological resources of coal could provide several hundred years of fuel, but we cannot burn them and keep within safe limits. Oil and coal development must be ended. The global Energy [R]evolution scenario has a target to reduce energy related CO2 emissions to a maximum of 10 Gigatonnes (Gt) by 2050 and phase out fossil fuels by 2085.
2. Equity and fairness. As long as there are natural limits there needs to be a fair distribution of benefits and costs within societies, between nations and between present and future generations. At one extreme, a third of the world’s population has no access to electricity, whilst the most industrialised countries consume much more than their fair share.
The effects of climate change on the poorest communities are exacerbated by massive global energy inequality. If we are to address climate change, one of the core principles must be equity and fairness, so that the benefits of energy services – such as light, heat, power and transport – are available for all: north and south, rich and poor. Only in this way can we create true energy security, as well as the conditions for genuine human wellbeing.
The Advanced Energy [R]evolution scenario has a target to achieve energy equity as soon as technically possible. By 2050 the average per capita emission should be between 1 and 2 tonnes of CO2.
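The per-capita target follows from simple arithmetic, sketched below in Python; the 2050 world population of roughly 9.5 billion is an outside assumption (a commonly cited projection), not a figure from the scenario text.

```python
# Consistency check: a 10 Gt energy-related CO2 ceiling in 2050 versus the
# stated 1-2 tonne per-capita target.
total_co2_tonnes = 10.0e9   # 10 Gt expressed in tonnes
population_2050 = 9.5e9     # assumed 2050 population (projection, not from the text)

per_capita = total_co2_tonnes / population_2050
print(f"Implied average emissions: {per_capita:.2f} t CO2 per person per year")
# About 1 tonne per person, consistent with the 1-2 tonne per-capita target above.
```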
3. Implement clean, renewable solutions and decentralise energy systems. There is no energy shortage. All we need to do is use existing technologies to harness energy effectively and efficiently. Renewable energy and energy efficiency measures are ready, viable and increasingly competitive. Wind, solar and other renewable energy technologies have experienced double digit market growth for the past decade.
Just as climate change is real, so is the renewable energy sector. Sustainable decentralised energy systems produce less carbon emissions, are cheaper and involve less dependence on imported fuel. They create more jobs and empower local communities. Decentralised systems are more secure and more efficient. This is what the Energy [R]evolution must aim to create.
To stop the earth’s climate spinning out of control, most of the world’s fossil fuel reserves – coal, oil and gas – must remain in the ground. Our goal is for humans to live within the natural limits of our small planet.
4. Decouple growth from fossil fuel use. Starting in the developed countries, economic growth must be fully decoupled from fossil fuel usage. It is a fallacy to suggest that economic growth must be predicated on their increased combustion.
We need to use the energy we produce much more efficiently, and we need to make the transition to renewable energy and away from fossil fuels quickly in order to enable clean and sustainable growth.
5. Phase out dirty, unsustainable energy. We need to phase out coal and nuclear power. We cannot continue to build coal plants at a time when emissions pose a real and present danger to both ecosystems and people. And we cannot continue to fuel the myriad nuclear threats by pretending nuclear power can in any way help to combat climate change. There is no role for nuclear power in the Energy [R]evolution.
| <urn:uuid:b6cc700a-55c3-47a6-baaf-dbe7c04a4b04> | 3.3125 | http://www.energyblueprint.info/1332.0.html?L=0 |
Gay and Lesbian Issues
The adolescent years are full of challenges, many related to sex and sexual identity. These issues can be especially difficult for teens who are (or think they may be) homosexual. Homosexual teens deserve the same understanding and respect as heterosexual teens, and it is important for everyone to know the facts about homosexuality. Here are the basics.
Each of us has a biological sex (we have a male or female body), a gender identity (we feel like a male or female), and a sexual orientation (we are attracted to males or females). Homosexuality refers to a person's sexual orientation; homosexual teen-agers have strong romantic or sexual feelings for a person of the same sex. Heterosexual teen-agers are attracted to people of the opposite sex, and bisexual teens are attracted to people of both sexes.
The word "gay" is used to describe both men and women who are homosexual, with the word "lesbian" specifically referring to a homosexual woman. It is estimated that 10 percent of the population in the United States and throughout the world is lesbian or gay.
Although scientists don't know why some people are homosexual and others are not, most believe that homosexuality is a normal variation of sexual orientation. It may be genetic, result from natural substances (hormones) in the body, be influenced by the environment before or after birth, or, most likely, several of these things working in combination. Homosexual teens are found in all types of families. Homosexuality is not caused by "bad parenting." If your teen is gay, it is not because of anything you or anyone else did.
Homosexuality also is not something a person chooses, nor is it an illness that can be cured. According to the American Psychiatric Association, so-called therapies such as "reparative therapy" and "transformational ministry" don't work and actually can be harmful, causing guilt and anxiety in homosexual teens.
Not all teen-agers who are attracted to members of the same sex are homosexual. Many teens experiment with their sexuality during adolescence, in much the same way that they experiment with clothing, body art or music. This brief sexual experimentation is thought to be a normal part of sexual development. For homosexual teens, the attraction to people of the same sex is stronger and longer lasting.
Every family is different. While one parent may find out by chance that a teen is homosexual, others may hear directly from their teen in person, in a letter or by a phone call. When a teen tells other people that he is homosexual, it's called "coming out." Although this process sometimes can be difficult or painful for families, it also can be a time of tremendous growth. It is important to remember that all teens need their family's support and acceptance, especially when they are dealing with sensitive issues.
"Coming out" can be scary and painful, and parents need to reassure their children that they will not be loved any less for sharing the truth about themselves. If your teen tells you he is gay, let him know that you love him unconditionally, and accept him no matter what.
Show your teen that you care by learning more about homosexuality. Read books on the subject or check out reputable Web sites (such as www.pflag.org). Talk to some adults you know who are gay. Look for organizations or support groups in your community that can give you information on homosexuality. It will be easier for you to support your teen when you know more and are comfortable with the subject.
Parents may worry about how friends, neighbors and family will react to their teen's homosexuality. It is usually best not to share any information without your teen-ager's permission. Unfortunately, prejudice against homosexuals is widespread, mostly due to ignorance and fear. When your teen is ready for you to let others know, you should talk with them about your teen's sexual orientation and help them to understand, by using what you have learned.
Growing up as a homosexual in a mostly heterosexual society often is not easy. Gay and lesbian adolescents sometimes must cope with unfair, prejudiced, and even violent behavior at school, at home and in the community. They may feel fear or be alone and unsupported. This can push some teens to use drugs and alcohol, engage in risky sexual behavior, or even attempt suicide. It is important that homosexual teens feel supported by their parents and always able to talk openly with them about these issues.
Overall, most gay and lesbian youth grow up to be well-adjusted and happy adults, with successful careers and family lives.
Books for parents of the newly out:
"Is it A Choice? Answers to 300 Most Asked Questions About Gay and Lesbian People" by Eric Marcus
"Loving Someone Gay" by Don Clark, Ph.D.
"My Child is Gay" by Bryce McDougall, editor
Last updated May 29, 2011
| <urn:uuid:83d99630-30d5-4811-bdcf-250bf5d3fac5> | 3.59375 | http://www.intelihealth.com/IH/ihtIH/WSIHW000/34970/34997/362814.html?d=dmtChildGuide |
Ulva spp. on freshwater-influenced or unstable upper eulittoral rock
Ecological and functional relationships
The community predominantly consists of algae, which cover the rock surface and create a patchy canopy. In doing so, the algae provide an amenable habitat in an otherwise hostile environment, exploitable on a temporary basis by other species. For instance, Ulva intestinalis provides shelter for the orange harpacticoid copepod, Tigriopus brevicornis, and the chironomid larva of Halocladius fucicola (McAllen, 1999). The copepod and chironomid species utilize the hollow thalli of Ulva intestinalis as a moist refuge from desiccation when rockpools completely dry. Several hundred individuals of Tigriopus brevicornis have been observed in a single thallus of Ulva intestinalis (McAllen, 1999). The occasional grazing gastropods that survive in this biotope no doubt graze Ulva.
Seasonal and longer term change
- During the winter, elevated levels of freshwater runoff would be expected owing to seasonal rainfall. Also, winter storm action may disturb the relatively soft substratum of chalk and firm mud, or boulders may be overturned.
- Seasonal fluctuation in the abundance of Ulva spp. would therefore be expected, with the biotope thriving in the winter months. Porphyra also tends to be regarded as a winter seaweed, abundant from late autumn to the succeeding spring, owing to the fact that the blade shaped fronds of the gametophyte develop in early autumn, whilst the microscopic filamentous stages of the spring and summer are less apparent (see recruitment process, below).
Habitat structure and complexity
Habitat complexity in this biotope is relatively limited in comparison to other biotopes. The upper shore substrata, consisting of chalk, firm mud, bedrock or boulders, will probably offer a variety of surfaces for colonization, whilst the patchy covering of ephemeral algae provides a refuge for faunal species and an additional substratum for colonization. However, species diversity in this biotope is poor owing to disturbance and changes in the prevailing environmental factors, e.g. desiccation, salinity and temperature. Only species able to tolerate changes/disturbance or those able to seek refuge will thrive.
The biotope is characterized by primary producers. Rocky shore communities are highly productive and are an important source of food and nutrients for neighbouring terrestrial and marine ecosystems (Hill et al., 1998). Macroalgae exude considerable amounts of dissolved organic carbon which is taken up readily by bacteria and may even be taken up directly by some larger invertebrates. Dissolved organic carbon, algal fragments and microbial film organisms are continually removed by the sea. This may enter the food chain of local, subtidal ecosystems, or be exported further offshore. Rocky shores make a contribution to the food of many marine species through the production of planktonic larvae and propagules which contribute to pelagic food chains.
The life histories of common algae on the shore are generally complex and varied, but follow a basic pattern, whereby there is an alternation of a haploid, gamete-producing phase (gametophyte-producing eggs and sperm) and a diploid spore-producing (sporophyte) phase. All have dispersive phases which are circulated around in the water column before settling on the rock and growing into a germling (Hawkins & Jones, 1992).
Ulva intestinalis is generally considered to be an opportunistic species, with an 'r-type' strategy for survival. The r-strategists have a high growth rate and high reproductive rate. For instance, the thalli of Ulva intestinalis, which arise from spores and zygotes, grow within a few weeks into thalli that reproduce again, and the majority of the cell contents are converted into reproductive cells. The species is also capable of dispersal over a considerable distance. For instance, Amsler & Searles (1980) showed that 'swarmers' of a coastal population of Ulva reached exposed artificial substrata on a submarine plateau 35 km away.
The life cycle of Porphyra involves a heteromorphic (of different form) alternation of generations, that are either blade shaped or filamentous. Two kinds of reproductive bodies (male and female (carpogonium)) are found on the blade shaped frond of Porphyra that is abundant during winter. On release these fuse and thereafter, division of the fertilized carpogonium is mitotic, and packets of diploid carpospores are formed. The released carpospores develop into the 'conchocelis' phase (the diploid sporophyte consisting of microscopic filaments), which bore into shells (and probably the chalk rock) and grow vegetatively. The conchocelis filaments reproduce asexually. In the presence of decreasing day length and falling temperatures, terminal cells of the conchocelis phase produce conchospores inside conchosporangia. Meiosis occurs during the germination of the conchospore and produces the macroscopic gametophyte (blade shaped phase) and the cycle is repeated (Cole & Conway, 1980).
Time for community to reach maturity
Disturbance is an important factor structuring the biotope; consequently the biotope is characterized by ephemeral algae able to rapidly exploit newly available substrata and tolerant of changes in the prevailing conditions, e.g. temperature, salinity and desiccation. For instance, following the Torrey Canyon tanker oil spill in mid-March 1967, in which oil bleached filamentous algae such as Ulva and adhered to the thin fronds of Porphyra (which after a few weeks became brittle and were washed away), regeneration of Porphyra and Ulva was noted by the end of April at Marazion, Cornwall. Similarly, at Sennen Cove, where rocks had completely lost their cover of Porphyra and Ulva during April, by mid-May there were occasional blade-shaped fronds of Porphyra sp. up to 15 cm long. These had either regenerated from basal parts of the 'Porphyra' phase or from the 'conchocelis' phase on the rocks (see recruitment processes). By mid-August these regenerated specimens were common and well grown but darkly pigmented and reproductively immature. Besides the Porphyra, a very thick coating of Ulva (as Enteromorpha) was recorded in mid-August (Smith, 1968). Such evidence suggests that the community would reach maturity relatively rapidly and would probably be considered mature, in terms of the species present and their ability to reproduce, well within six months.
This review can be cited as follows:
Ulva spp. on freshwater-influenced or unstable upper eulittoral rock.
Marine Life Information Network: Biology and Sensitivity Key Information Sub-programme [on-line].
Plymouth: Marine Biological Association of the United Kingdom.
Available from: <http://www.marlin.ac.uk/habitatecology.php?habitatid=104&code=2004>
|
<urn:uuid:13da434f-f140-49e3-8fdb-67019653693a>
| 3.625
|
http://www.marlin.ac.uk/habitatecology.php?habitatid=104&code=2004&code=2004
|
History of the Indians of the United States
by Angie Debo
The political, social, and military conflicts and foul-ups between the Indians and whites from the colonial era to the 1970s.
6 x 9 450 pages, index, maps, illustrated, paperbound
#300 Indians in the US $24.95
Susquehanna's Indians
by Barry C. Kent
Culturally and linguistically, the Susquehannocks closely resembled the Iroquois of New York state. Actually, they were a fiercely independent nation that lived along the Susquehanna River in Pennsylvania and Maryland. They often invaded the tribes of lower Maryland. This is a detailed narrative of the Susquehannocks' lifestyle, villages, and artifacts. Also describes their relationship with the Conestogas, Conoy, Shawnee, Delaware, and other tribes that lived along the river.
6" x 9" 440 pages, index, illustrated, maps, paperbound
#372 Susquehanna's Indians $16.95
Indians and World War II
by Alison R. Bernstein
The impact of World War II on Indian affairs was more profound and lasting than that of any other event or policy, including FDR's Indian New Deal and efforts to terminate federal responsibility for tribes under Eisenhower. Focusing on the period from 1941 to 1947, Bernstein explains why termination and tribal self-determination were logical results of the Indians' World War II experiences in battle and on the home front. Includes a brief story of the Navajo Marine Codetalkers and Ira Hayes, a Pima Indian who helped raise the flag at Iwo Jima.
5½" x 8½" 247 pages, index, some photos, paperbound
#373 Indians & WWII $19.95
FAX: 717 464-3250
|
<urn:uuid:24918d75-915c-4e00-b5b9-a827162bb127>
| 3.515625
|
http://www.redrosestudio.com/Cat%2016%20Indians.html
|
January 23, 2007:
The paper, by researchers at Yale, the University of Winnipeg, and Stony Brook University, led by University of Florida paleontologist Jonathan Bloch, reconstructs the base of the primate family tree by comparing skeletal and fossil specimens representing more than 85 modern and extinct species. The team also discovered two 56-million-year-old fossils, including the most primitive primate skeleton ever described.
In the two-part study, an extensive evaluation of skeletal structures provides evidence that plesiadapiforms, a group of archaic mammals once thought to be more closely related to flying lemurs, are the most primitive primates. The team analyzed 173 characteristics of modern primates, tree shrews, and flying lemurs, together with plesiadapiform skeletons, to determine their evolutionary relationships. High-resolution CT scanning made fine resolution of inaccessible structures inside the skulls possible.
"This is the first study to bring it all together," said co-author Eric Sargis, associate professor of anthropology at Yale University and Assistant Curator of Vertebrate Zoology at Yale's Peabody Museum of Natural History. "The extensive dataset, the number and type of characteristics we were able to compare, and the availability of full skeletons, let us test far more than any previous study."
At least five major features characterize modern primates: relatively large brains, enhanced vision and eyes that face forward, a specialized ability to leap, nails instead of claws on at least the first toes, and specialized grasping hands and feet. Plesiadapiforms have some but not all of these traits. The article argues that these early primates may have acquired the traits over 10 million years in incremental changes to exploit their environment.
While the study did not include a molecular evaluation of the samples, according to Sargis, these results are consistent with molecular studies on related living groups. Compatibility with the independent molecular data increases the researchers' confidence in their own results.
Bloch discovered the new plesiadapiform species, Ignacius clarkforkensis and Dryomomys szalayi, just outside Yellowstone National Park in the Bighorn Basin with co-author Doug Boyer, a graduate student in anatomical sciences at Stony Brook. Previously, based only on skulls and isolated bones, scientists proposed that Ignacius was not an archaic primate, but instead a gliding mammal related to flying lemurs. However, analysis of a more complete and well-preserved skeleton by Bloch and his team altered this idea.
"These fossil finds from Wyoming show that our earliest primate ancestors were the size of a mouse, ate fruit and lived in the trees," said study leader Jonathan Bloch, a vertebrate paleontology curator at the Florida Museum of Natural History. "It is remarkable to think we are still discovering new fossil species in an area studied by paleontologists for over 100 years."
Researchers previously hypothesized that plesiadapiforms were the ancestors of modern primates, but the idea generated strong debate within the primatology community. This study places the origins of plesiadapiforms in the Paleocene, about 65 million to 55 million years ago, in the period between the extinction of the dinosaurs and the first appearance of a number of undisputed members of the modern orders of mammals.
"Plesiadapiforms have long been one of the most controversial groups in mammalian phylogeny," said Michael J. Novacek, curator of paleontology at the American Museum of Natural History. "First, they are somewhere near primates and us. Second, historically they have offered tantalizing, but very often incomplete, fossil evidence. But the specimens in their study are beautifully and spectacularly preserved."
"The results of this study suggest that plesiadapiforms are the critical taxa to study in understanding the earliest phases of human evolution. As such, they should be of very broad interest to biologists, paleontologists, and anthropologists," said co-author Mary Silcox, professor of anthropology at the University of Winnipeg.
"This collaboration is the first to bring together evidence from all regions of the skeleton, and offers a well-supported perspective on the structure of the earliest part of the primate family tree," Bloch said.
The research was supported by grants from the National Science Foundation, Field Museum of Natural History, Yale University, Sigma Xi Scientific Research Society, Natural Sciences and Engineering Research Council (Canada), University of Winnipeg, the Paleobiological Fund, and The Wenner--Gren Foundation for Anthropological Research.
|
<urn:uuid:3fb03c5f-56af-4237-afa9-75336a1587b5>
| 3.8125
|
http://www.strangeark.com/blogarchive/2007_01_01_archive.html
|
What is Rainwater Harvesting?
Rainwater harvesting is an ancient practice of catching and holding rain for later use. In a rainwater harvesting system, rain is gathered from a building rooftop or other source and is held in large containers for future use, such as watering gardens or washing cars. This practice reduces the demand on water resources and is excellent during times of drought.
Why is it Important?
In addition to reducing the demand on our water sources (especially important during drought), rainwater harvesting also helps prevent water pollution. Surprised?
Here’s why: the success of the 1972 Clean Water Act has meant that the greatest threat to New York’s waterbodies comes not from industrial sources, but rather through the small actions we all make in our daily lives. For example, in a rain storm, the oil, pesticides, animal waste, and litter from our lawns, sidewalks, driveways, and streets are washed down into our sewers. This is called non-point source (NPS) pollution because the pollutants come from too many sources to be identified. Rainwater harvesting diverts water from becoming polluted stormwater; instead, this captured rainwater may be used to irrigate gardens near where it falls.
In New York City, keeping rainwater out of the sewer system is very important. That’s because the city has an old combined sewer system that uses the same pipes to transport both household waste and stormwater to sewage treatment plants. During heavy rains, the system overloads; then untreated sewage and contaminated stormwater overflow into our rivers and estuary, with serious consequences:
Who is Harvesting Rainwater in New York City?
Back in 2002, a drought emergency pushed many community gardens to the brink of extinction. For the first time in twenty years, community gardeners were denied permission to use fire hydrants, the primary source of water for most community gardens. This crisis led to the formation of the Water Resources Group (WRG), an open collaboration of community gardening and environmental organizations. With help from the WRG, rainwater harvesting systems have now been built as demonstration sites in twenty NYC community gardens.
At community gardens that harvest rainwater, rain is diverted from the gutters of adjacent buildings and is stored in tanks in the gardens. A 1-inch rainfall on a 1,000-square-foot roof produces roughly 600 gallons of water. The tanks are mosquito-proof, so the standing water does not encourage West Nile virus. Because rainwater is chlorine-free, it is better than tap water for plant growth, meaning healthier plants. And it’s free!
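The 600-gallon figure can be checked with simple unit conversion: 1 inch of rain is 1/12 of a foot of depth, and one cubic foot holds about 7.48 gallons. The sketch below is illustrative only; the roof area and rainfall depth are the example values quoted above, and the collection-efficiency parameter is an assumption added here, since real systems lose some water to evaporation, splash, and first-flush diversion.

```python
# Rough check of the rooftop rainwater figure quoted above.
# Assumed example values: 1,000 sq ft roof, 1 inch of rain; the collection
# efficiency is a hypothetical parameter, not a figure from the article.

GALLONS_PER_CUBIC_FOOT = 7.48

def roof_runoff_gallons(roof_area_sqft, rainfall_inches, collection_efficiency=1.0):
    """Return the gallons of water shed by a roof in a single rainfall."""
    depth_feet = rainfall_inches / 12.0                  # inches to feet
    volume_cubic_feet = roof_area_sqft * depth_feet
    return volume_cubic_feet * GALLONS_PER_CUBIC_FOOT * collection_efficiency

print(round(roof_runoff_gallons(1000, 1)))        # about 623 gallons, loss-free
print(round(roof_runoff_gallons(1000, 1, 0.95)))  # about 592 gallons with modest losses
```

The loss-free figure works out to roughly 620 gallons, so the 600 gallons quoted above corresponds to capturing a little under all of the runoff.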
What are Other Cities Doing?
Many cities have adopted creative, low-cost ways to stop wasting rainwater by diverting it from their sewage systems and putting it to use where it falls. Here are some examples:
What Can I Do?
Spread the word! Educate those around you on the importance of lifestyle decisions.
Tell people not to litter, dump oil down storm drains, or overfertilize their lawns.
Install a rainwater harvesting system at your home, school, business, or local community center.
Contact your local elected officials, and let them know you support rainwater harvesting!
|
<urn:uuid:14a860e9-8430-426b-8c1a-80c7f022fb96>
| 3.890625
|
http://www.waterresourcesgroup.org/
|
Intellectual disability begins in childhood. People with intellectual disability have limits in their mental functioning seen in below-average intelligence (IQ) tests and in their ability to communicate, socialize, and take care of their everyday needs. The degree of disability can vary from person to person. It can be categorized as mild, moderate, severe, or profound.
Some causes of intellectual disability can be prevented with proper medical care. Children diagnosed with an intellectual disability are most successful when they get help early in life. If you suspect that your child may have an intellectual disability, contact your doctor.
Several hundred causes of intellectual disability have been discovered, but many are still unknown. The most common ones are:
Biomedical causes resulting from:
- Abnormal genes inherited from parents
- Errors when genes combine, such as Down syndrome and Fragile X syndrome
- Nutritional deficiencies
- Metabolic conditions, such as phenylketonuria (PKU), galactosemia, and congenital hypothyroidism
- Developmental brain abnormality, such as hydrocephalus and brain malformation
- Infections during pregnancy, such as:
- Behavioral issues during pregnancy, such as:
Problems at birth, such as:
- Premature delivery or low birth weight
- Baby doesn’t get enough oxygen during birth
- Baby is injured during birth
Factors during childhood, such as:
- Nutritional deficiencies
- Illnesses or infections that affect the brain, including meningitis, encephalitis, chickenpox, whooping cough, and measles
- Exposure to lead, mercury, and other toxins
- Head injury or near drowning
- Social factors, such as child stimulation and adult responsiveness
- Educational deficiencies
A child could be at higher risk for intellectual disability due to any of the causes listed above, or due to intellectual disability in other family members. If you are concerned that your child is at risk, tell your child's doctor.
Symptoms appear before a child reaches age 18. Symptoms vary depending on the degree of the intellectual disability. If you think your child has any of these symptoms, do not assume it is due to intellectual disability. These symptoms may be caused by other, less serious health conditions.
- Learning and developing more slowly than other children of the same age
- Difficulty communicating or socializing with others
- Lower than average scores on IQ tests
- Trouble learning in school
- Inability to do everyday things like getting dressed or using the bathroom without help
- Difficulty hearing, seeing, walking, or talking
- Inability to think logically
The following categories are often used to describe the level of intellectual disability (the IQ ranges are summarized in a brief sketch after the list):
- IQ 50-70
- Slower than normal in all areas
- No unusual physical signs
- Can learn practical skills
- Reading and math skills up to grades 3-6
- Can conform socially
- Can learn daily task skills
- Functions in society
- IQ 35-49
- Noticeable delays, particularly speech
- May have unusual physical signs
- Can learn simple communication
- Can learn elementary health and safety skills
- Can participate in simple activities and self-care
- Can perform supervised tasks
- Can travel alone to familiar places
- IQ 20-34
- Significant delays in some areas; may walk late
- Little or no communication skills, but some understanding of speech with some response
- Can be taught daily routines and repetitive activities
- May be trained in simple self-care
- Needs direction and supervision socially
- IQ <20
- Significant delays in all areas
- Congenital abnormalities present
- Needs close supervision
- Requires attendant care
- May respond to regular physical and social activity
- Not capable of self-care
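For quick reference, the IQ ranges listed above can be written as a simple lookup. This is only an illustrative sketch of the ranges quoted in this article; the function name and example scores are hypothetical, and a real determination also weighs adaptive behavior and clinical judgment, never an IQ score alone.

```python
# Illustrative mapping from an IQ score to the category names used above.
# A real determination also weighs adaptive behavior; this is not a clinical tool.

def iq_category(iq):
    """Return the descriptive label for an IQ score, per the ranges listed above."""
    if iq > 70:
        return "above the range associated with intellectual disability"
    if iq >= 50:
        return "mild (IQ 50-70)"
    if iq >= 35:
        return "moderate (IQ 35-49)"
    if iq >= 20:
        return "severe (IQ 20-34)"
    return "profound (IQ below 20)"

for score in (68, 40, 25, 15):
    print(score, "->", iq_category(score))
```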
If you suspect your child is not developing skills on time, tell the doctor as soon as possible. Your doctor will ask about your child’s symptoms and medical history. A physical exam will be done. Standardized tests may be given that measure:
- Intelligence—IQ tests measure a person’s ability to do things such as think abstractly, learn, and solve problems. A child may have intellectual disability if IQ test results are 70 or below.
Adaptive behavior—These are skills needed to function in everyday life, including:
- Conceptual skills like reading and writing
- Social skills like responsibility and self-esteem
- Practical skills like the ability to eat, use the bathroom, and get dressed
Children with intellectual disability have a higher risk for other disabilities such as hearing impairment, visual problems, seizures, attention deficit hyperactivity disorder, or orthopaedic conditions. Additional testing may be needed to check for other conditions.
Talk with your doctor about the best treatment plan for your child. Treatment is most helpful if it begins as early as possible. Treatment includes:
- Early intervention programming for infants and toddlers up to age three
- Family counseling
- Human development training, including emotional skills and hand-eye coordination
- Special education programs
- Life skills training, such as preparing food, bathing
- Job coaching
- Social opportunities
- Housing services
To help reduce your child’s chance of becoming intellectually disabled, take the following steps:
- During pregnancy:
- Have your newborn screened for conditions that may produce intellectual disability.
- Have your child properly immunized.
- Schedule regular visits to the pediatrician.
- Use child safety seats and bicycle helmets.
- Remove lead-based paint from your home.
- Keep poisonous household products out of reach.
- Aspirin is not recommended for children or teens with a current or recent viral infection. This is because of the risk of Reye's syndrome, which can cause neurological problems. Ask your doctor which medicines are safe for your child.
- Reviewer: Rimas Lukas, MD
- Review Date: 03/2013 -
- Update Date: 00/31/2013 -
|
<urn:uuid:9966e685-a44f-44d0-bc28-77804eba9bae>
| 3.328125
|
http://blakemedicalcenter.com/your-health/?/96644/Intellectual-disability
|
12. June 2012 10:55
Retinal hemorrhage occurs when the blood vessels in the retina are damaged or ruptured, leading to abnormal bleeding. The retina, which is composed of rods and cones, is the region of the eye responsible for sensitivity to light and for vision. The retinal vein and artery, along with a dense network of capillaries, are responsible for transmitting the blood supply to the retina. When these blood vessels are damaged, for any reason, the blood supply to the retina is affected, which in turn leads to a decrease in visual acuity. Diabetic retinopathy is the leading cause of blindness in people aged between 20 and 65.
The dense network of cells in the retina is extremely sensitive, and can be damaged with even a slight trauma.
The causes due to which this damage might occur include:
- High blood pressure
- Forceful blows in the head region
- Child abuse in infants
- Improper development of blood vessels in infants born prematurely
- Blurred vision
- Spotted vision
- Lines in the field of vision
- Blind spots
- Distorted vision
- Progressive vision loss
- The disease is diagnosed by an ophthalmologist, who uses an opthalmoscope to examine the internal structure of the eye.
- Another method that is commonly used to detect the abnormalities in the blood vessels is a fluorescein angiography test, in which a fluorescent dye is injected into the patient’s bloodstream, after which photographs are clicked to view the blood vessels.
- In some cases, the physician might also order for a blood test to be performed.
- The disorder is self-limiting in most patients, with more than 85% cases healing on their own.
- The most common treatment for retinal hemorrhages is laser treatment, in which a laser beam is used to remove the affected blood vessels.
- If the disease is caused by some underlying medical condition like diabetes or hypertension, the treatment focuses on eliminating that disorder.
- Injection of anti-VEGF drugs like Avestin has been found to be effective in the treatment of hemorrhages associated with the growth of new vessels.
- The administration of various nutritional and herbal supplements like antioxidants, omega-3-rich foods, antioxidant vitamins, zinc, lutein, pine bark extract, grape seed extract, etc. has also been found to be effective in improving the symptoms of the disease.
We at Killeen Eyecare center are renowned throughout Killeen for providing the highest quality eye care to all our patients. We help them maintain healthy eyes and treat various eye diseases using most sophisticated instruments. For more details, you can visit us at 416 North Gray Street, Killeen, TX 76541, Downtown Killeen or call at 254-634-7805.
Eye Doctor Killeen - Eye Doctor Fort Hood
|
<urn:uuid:0a57e8d8-1439-48dd-8688-080cf98e78c5>
| 3.59375
|
http://killeeneyecarecenter.com/blog/post/Retinal-Hemorrhage-Symptoms-Diagnosis-And-Treatment.aspx
|
Types of literature
PRIMARY SOURCES are publications that report the results of original research. They may be in the form of conference papers, monographic series, technical reports, theses and dissertations, or journal articles. Because they present information in its original form (that is, it has not been interpreted or condensed or otherwise “repackaged” by other writers), these are considered primary sources. The works present new thinking/discoveries/results and unite them with the existing knowledge base. Journal articles that report original research are one of the more common and important steps in the information sharing cycle. They often go through a process in which they are “peer reviewed” by other experts who evaluate the work and findings before publication.
SECONDARY SOURCES are those which are published about the primary literature, that generalize, analyze, interpret, evaluate or otherwise “add value” to the original information, OR which simplify the process of finding and evaluating the primary literature. Some examples of secondary sources are “review” articles and indexes or bibliographies, such as PubMed or ScienceDirect.
TERTIARY SOURCES compile or digest information from primary or secondary sources that has become widely accepted. They aim to provide a broad overview of a topic, or data, already proven facts, and definitions, often presented in a convenient form. They provide no new information. These include “reference” types of works such as textbooks, encyclopedias, fact books, guides and handbooks, and computer databases such as The Handbook of Microbiological Media and SciFinder.
GRAY LITERATURE are source materials not available through the usual systems of publication (e.g., books or periodicals) and distribution. Gray literature includes conference proceedings, dissertations, technical reports, and working papers. Locating this type of literature is a little more difficult, but there are finding tools such as Dissertations Abstracts and PapersFirst.
What is a literature review?
A literature review discusses published information in a particular subject area, and sometimes information in a particular subject area within a certain time period.
A literature review can be just a simple summary of the sources, but it usually has an organizational pattern and combines both summary and synthesis. A summary is a recap of the important information of the source, but a synthesis is a re-organization, or a reshuffling, of that information. It might give a new interpretation of old material or combine new with old interpretations. Or it might trace the intellectual progression of the field, including major debates. And depending on the situation, the literature review may evaluate the sources and advise the reader on the most pertinent or relevant.
But how is a literature review different from an academic research paper?
While the main focus of an academic research paper is to support your own argument, the focus of a literature review is to summarize and synthesize the arguments and ideas of others. The academic research paper also covers a range of sources, but it is usually a select number of sources, because the emphasis is on the argument. Likewise, a literature review can also have an "argument," but it is not as important as covering a number of sources. In short, an academic research paper and a literature review contain some of the same elements. In fact, many academic research papers will contain a literature review section. But it is the aspect of the study (the argument or the sources) that is emphasized that determines what type of document it is.
Why do we write literature reviews?
Literature reviews provide you with a handy guide to a particular topic. If you have limited time to conduct research, literature reviews can give you an overview or act as a stepping stone. For professionals, they are useful reports that keep them up to date with what is current in the field. For scholars, the depth and breadth of the literature review emphasizes the credibility of the writer in his or her field. Literature reviews also provide a solid background for a research paper's investigation. Comprehensive knowledge of the literature of the field is essential to most research papers.
Who writes these things, anyway?
Literature reviews are written occasionally in the humanities, but mostly in the sciences and social sciences; in experiment and lab reports, they constitute a section of the paper. Sometimes a literature review is written as a paper in itself.
Excerpt from “Literature Reviews” from The Writing Center, University of North Carolina at Chapel Hill (http://writingcenter.unc.edu/resources/handouts-demos/specific-writing-assignments/literature-reviews), 2007.
Collection Development Librarian
1014 Boswell Ave.
Crete NE 68333
|
<urn:uuid:36c1f567-48f3-46f4-b9c2-332bfdf4d2b6>
| 3.375
|
http://libguides.doane.edu/biosem
|
The Solar and Heliospheric Observatory (SOHO) spacecraft is expected to discover its 1,000th comet this summer.
The SOHO spacecraft is a joint effort between NASA and the European Space Agency. It has accounted for approximately one-half of all comet discoveries with computed orbits in the history of astronomy.
"Before SOHO was launched, only 16 sun grazing comets had been discovered by space observatories. Based on that experience, who could have predicted SOHO would discover more than 60 times that number, and in only nine years," said Dr. Chris St. Cyr. He is senior project scientist for NASA's Living With a Star program at the agency's Goddard Space Flight Center, Greenbelt, Md. "This is truly a remarkable achievement!"
About 85 percent of the comets SOHO discovered belongs to the Kreutz group of sun grazing comets, so named because their orbits take them very close to Earth's star. The Kreutz sun grazers pass within 500,000 miles of the star's visible surface. Mercury, the planet closest to the sun, is about 36 million miles from the solar surface.
SOHO has also been used to discover three other well-populated comet groups: the Meyer, with at least 55 members; Marsden, with at least 21 members; and the Kracht, with 24 members. These groups are named after the astronomers who suggested the comets are related, because they have similar orbits.
Many comet discoveries were made by amateurs using SOHO images on the Internet. SOHO comet hunters come from all over the world. The United States, United Kingdom, China, Japan, Taiwan, Russia, Ukraine, France, Germany, and Lithuania are among the many countries whose citizens have used SOHO to chase comets.
Almost all of SOHO's comets are discovered using images from its Large Angle and Spectrometric Coronagraph (LASCO) instrument. LASCO is used to observe the faint, multimillion-degree outer atmosphere of the sun, called the corona. A disk in the instrument is used to make an artificial eclipse, blocking direct light from the sun, so the much fainter corona can be seen. Sun grazing comets are discovered when they enter LASCO's field of view as they pass close by the star.
"Building coronagraphs like LASCO is still more art than science, because the light we are trying to detect is very faint," said Dr. Joe Gurman, U.S. project scientist for SOHO at Goddard. "Any imperfections in the optics or dust in the instrument will scatter the light, making the images too noisy to be useful. Discovering almost 1,000 comets since SOHO's launch on December 2, 1995 is a testament to the skill of the LASCO team."
SOHO successfully completed its primary mission in April 1998. It has enough fuel to remain on station to keep hunting comets for decades if the LASCO continues to function.
For information about SOHO on the Internet, visit:
|
<urn:uuid:78cbe1bd-1849-4138-b59a-5521e93122a3>
| 4
|
http://phys.org/news4969.html
|
While any kind of dog can attack, some breeds are more prone to attacks than others. In fact, some dogs are more likely than others to kill humans.
The Centers of Disease Control estimates that more than 4.7 million people are bitten by dogs every year. Of those, 20 percent require medical attention.
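Taken at face value, those two figures imply roughly 940,000 bites a year serious enough to need medical attention. The one-line check below simply multiplies the two numbers quoted above; the variable names are ours, not the CDC's.

```python
# Quick check of the CDC figures quoted above.
bites_per_year = 4_700_000        # "more than 4.7 million people are bitten"
share_needing_care = 0.20         # "20 percent require medical attention"
print(f"{bites_per_year * share_needing_care:,.0f} bites needing medical attention per year")
# prints: 940,000 bites needing medical attention per year
```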
In a 15-year study (1979-1994), a total of 239 deaths were reported as a result of injuries from dog attacks in the United States. Through its research, the CDC compiled a list of the dogs most responsible for human fatalities. They are as follows:
The study found that most dog-bite-related deaths happened to children. But, according to the CDC, there are steps children (and adults) can take to cut down the risk of a dog attack from family pets as well as dogs they are not familiar with:
-Don't approach an unfamiliar dog.
-If an unfamiliar dog approaches you, stay motionless.
-Don't run from a dog or scream.
-If a dog knocks you down, roll into a ball and stay still.
-Avoid looking directly into a dog's eyes.
-Leave a dog alone that is sleeping, eating or taking care of puppies.
-Let a dog see and sniff you before petting it.
-Don't play with a dog unless there is an adult present.
-If a dog bites you, tell an adult immediately.
But, the CDC's report says most attacks are preventable in three ways:
1. "Owner and public education. Dog owners, through proper selection, socialization, training, care, and treatment of a dog, can reduce the likelihood of owning a dog that will eventually bite. Male and unspayed/unneutered dogs are more likely to bite than are female and spayed/neutered dogs."
2. "Animal control at the community level. Animal-control programs should be supported, and laws for regulating dangerous or vicious dogs should be promulgated and enforced vigorously. For example, in this report, 30% of dog-bite-related deaths resulted from groups of owned dogs that were free roaming off the owner's property."
3. "Bite reporting. Evaluation of prevention efforts requires improved surveillance for dog bites. Dog bites should be reported as required by local or state ordinances, and reports of such incidents should include information about the circumstances of the bite; ownership, breed, sex, age, spay/neuter status, and history of prior aggression of the animal; and the nature of restraint before the bite incident."
CDC officials did make one important note about its list: The reporting of the breed was subjective. There is no way to determine if the identification of the breed was correct. Also, there is no way to verify if the dog was a purebred or a mixed breed.
Copyright 2011 Scripps Media, Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.
|
<urn:uuid:da55ad67-a163-461b-9317-72c3b8b457e2>
| 3.53125
|
http://www.abc2news.com/dpp/lifestyle/pets/which-dogs-are-most-likely-to-kill-humans%3F
|
Glucose is a type of sugar. It comes from food, and is also created in the liver. Glucose travels through the body in the blood. It moves from the blood to cells with the help of a hormone called insulin. Once glucose is in those cells, it can be used for energy.
Diabetes is a condition that makes it difficult for the body to use glucose. This causes a buildup of glucose in the blood. It also means the body is not getting enough energy. Type 2 diabetes is one type of diabetes. It is the most common type.
Medication, lifestyle changes, and monitoring can help control blood glucose levels.
Type 2 diabetes is often caused by a combination of factors. One factor is that your body begins to make less insulin. A second factor is that your body becomes resistant to insulin. This means there is insulin in your body, but your body cannot use it effectively. Insulin resistance is often related to excess body fat.
The doctor will ask about your symptoms and medical history. You will also be asked about your family history. A physical exam will be done.
Diagnosis is based on the results of blood testing. The American Diabetes Association (ADA) recommends that a diagnosis be made if you have one of the following (these thresholds are restated in a brief sketch after the list):
- Symptoms of diabetes and a random blood test with a blood sugar level greater than or equal to 200 mg/dL (11.1 mmol/L)
- Fasting blood sugar test—Done after you have not eaten for eight or more hours—Showing blood sugar levels greater than or equal to 126 mg/dL (7 mmol/L) on two different days
- Glucose tolerance test—Measuring blood sugar two hours after you eat glucose—Showing glucose levels greater than or equal to 200 mg/dL (11.1 mmol/L)
- HbA1c level of 6.5% or higher—Indicates poor blood sugar control over the past 2-4 months
mg/dL = milligrams per deciliter of blood; mmol/L = millimole per liter of blood
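As a rough illustration, the cut-offs above can be expressed as simple threshold checks. This is a minimal sketch based only on the numbers quoted in this article; it is not a diagnostic tool, the function and parameter names are hypothetical, and in practice results such as the fasting test must be confirmed (for example, on two different days) and interpreted by a doctor.

```python
# Illustrative check of the ADA thresholds quoted above (glucose in mg/dL, HbA1c in %).
# Not a diagnostic tool; confirmation testing and clinical judgment still apply.
from typing import Optional

def crosses_ada_threshold(fasting_mg_dl: Optional[float] = None,
                          random_mg_dl: Optional[float] = None,
                          ogtt_2h_mg_dl: Optional[float] = None,
                          hba1c_percent: Optional[float] = None,
                          has_symptoms: bool = False) -> bool:
    """Return True if any supplied result meets one of the criteria listed above."""
    if has_symptoms and random_mg_dl is not None and random_mg_dl >= 200:
        return True   # symptoms plus random glucose >= 200 mg/dL (11.1 mmol/L)
    if fasting_mg_dl is not None and fasting_mg_dl >= 126:
        return True   # fasting glucose >= 126 mg/dL (7 mmol/L), confirmed on two days
    if ogtt_2h_mg_dl is not None and ogtt_2h_mg_dl >= 200:
        return True   # two-hour glucose tolerance result >= 200 mg/dL (11.1 mmol/L)
    if hba1c_percent is not None and hba1c_percent >= 6.5:
        return True   # HbA1c >= 6.5%
    return False

print(crosses_ada_threshold(fasting_mg_dl=131))   # True
print(crosses_ada_threshold(hba1c_percent=6.1))   # False
```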
Treatment aims to:
- Maintain blood sugar at levels as close to normal as possible
- Prevent or delay complications
- Control other conditions that you may have, like high blood pressure and high cholesterol
Food and drinks have a direct effect on your blood glucose level. Eating healthy meals can help you control your blood glucose. It will also help your overall health. Some basic tips include:
If you are overweight, weight loss will help your body use insulin better. Talk to your doctor about a healthy weight goal. You and your doctor or dietitian can make a safe meal plan for you.
These options may help you lose weight:
Physical activity can:
- Make the body more sensitive to insulin
- Help you reach and maintain a healthy weight
- Lower the levels of fat in your blood
Aerobic exercise is any activity that increases your heart rate. Resistance training helps build muscle strength. Both types of exercise help to improve long-term glucose control. Regular exercise can also help reduce your risk of heart disease.
Talk to your doctor about an activity plan. Ask about any precautions you may need to take.
Certain medicines will help to manage blood glucose levels.
Medication taken by mouth may include:
- Metformin—To reduce the amount of glucose made by the body and to make the body more sensitive to insulin
- Medications that encourage the pancreas to make more insulin, such as sulfonylureas (glyburide, tolazamide) and dipeptidyl peptidase-4 inhibitors (saxagliptin)
- Insulin sensitizers such as pioglitazone—To help the body use insulin better
- Starch blockers such as miglitol—To decrease the amount of glucose absorbed into the blood
Some medicine needs to be given through injections, such as:
- Incretin mimetics—To stimulate the pancreas to produce insulin and decrease appetite (can assist with weight loss)
- Amylin analogs—To replace a protein of the pancreas that is low in people with type 2 diabetes
Insulin may be needed if:
- The body does not make enough of its own insulin.
- Blood glucose levels cannot be controlled with lifestyle changes and medicine.
Insulin is given through injections.
Blood Glucose Testing
You can check the level of glucose in your blood with a blood glucose meter. Checking your blood glucose levels during the day can help you stay on track. It will also help your doctor determine if your treatment is working. Keeping track of blood sugar levels is especially important if you take insulin.
Regular testing may not be needed if your diabetes is under control and you don't take insulin. Talk with your doctor before stopping blood sugar monitoring.
An HbA1c test may also be done at your doctor's office. This is a measure of blood glucose control over a long period of time. Doctors advise that most people keep their HbA1c levels below 7%. Your exact goal may be different. Keeping HbA1c in your goal range can help lower the chance of complications.
Decreasing Risk of Complications
Over a long period of time, high blood glucose levels can damage vital organs. The kidneys, eyes, and nerves are most affected. Diabetes can also increase your risk of heart disease.
Maintaining goal blood glucose levels is the first step to lowering your risk of these complications. Other steps include:
- Take good care of your feet. Be on the lookout for any sores or irritated areas. Keep your feet dry and clean.
- Have your eyes checked once a year.
- Don't smoke. If you do, look for programs or products that can help you quit.
- Plan medical visits as recommended.
|
<urn:uuid:da329173-6e70-42f5-aa09-933ea8352a2f>
| 3.6875
|
http://www.bidmc.org/YourHealth/ConditionsAZ/Congestiveheartfailure.aspx?ChunkID=11902
|
First ever direct measurement of the Earth’s rotation
Geodesists are pinpointing the orientation of the Earth’s axis using the world’s most stable ring laser
A group of researchers at the Technical University of Munich (TUM) and the Federal Agency for Cartography and Geodesy (BKG) is the first to plot changes in the Earth’s axis through laboratory measurements. To do this, they constructed the world’s most stable ring laser in an underground lab and used it to determine changes in the Earth’s rotation. Previously, scientists were only able to track shifts in the polar axis indirectly by monitoring fixed objects in space. Capturing the tilt of the Earth’s axis and its rotational velocity is crucial for precise positional information on Earth – and thus for the accurate functioning of modern navigation systems, for instance. The scientists’ work has been recognized as an Exceptional Research Spotlight by the American Physical Society.
The Earth wobbles. Like a spinning top touched in mid-spin, its rotational axis fluctuates in relation to space. This is partly caused by gravitation from the sun and the moon. At the same time, the Earth’s rotational axis constantly changes relative to the Earth’s surface. On the one hand, this is caused by variation in atmospheric pressure, ocean loading and wind. These elements combine in an effect known as the Chandler wobble to create polar motion. Named after the scientist who discovered it, this phenomenon has a period of around 435 days. On the other hand, an event known as the “annual wobble” causes the rotational axis to move over a period of a year. This is due to the Earth’s elliptical orbit around the sun. These two effects cause the Earth’s axis to migrate irregularly along a circular path with a radius of up to six meters.
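The combined motion can be pictured as the sum of two nearly circular oscillations with slightly different periods. The sketch below is purely illustrative: the 435-day and 365.25-day periods come from the paragraph above, while the amplitudes are assumed round numbers chosen so the total excursion stays within the roughly six-meter radius mentioned; they are not measured values.

```python
# Illustrative model of polar motion as two superposed, nearly circular wobbles.
# The periods come from the text; the amplitudes are assumed, illustrative values.
import math

CHANDLER_PERIOD_DAYS = 435.0     # Chandler wobble period quoted above
ANNUAL_PERIOD_DAYS = 365.25      # annual wobble period
CHANDLER_AMPLITUDE_M = 4.0       # assumed amplitude, not a measured value
ANNUAL_AMPLITUDE_M = 2.0         # assumed amplitude, not a measured value

def pole_offset(day):
    """Approximate (x, y) displacement of the rotation pole, in meters, on a given day."""
    a1 = 2.0 * math.pi * day / CHANDLER_PERIOD_DAYS
    a2 = 2.0 * math.pi * day / ANNUAL_PERIOD_DAYS
    x = CHANDLER_AMPLITUDE_M * math.cos(a1) + ANNUAL_AMPLITUDE_M * math.cos(a2)
    y = CHANDLER_AMPLITUDE_M * math.sin(a1) + ANNUAL_AMPLITUDE_M * math.sin(a2)
    return x, y

# With these values the total excursion drifts between 2 m and 6 m as the two
# wobbles move in and out of phase over roughly six years.
for day in range(0, 2400, 400):
    x, y = pole_offset(day)
    print(f"day {day:4d}: pole offset {math.hypot(x, y):.1f} m")
```

Because the two periods are close, they beat against each other over a span of several years, which is why the pole traces an irregular, slowly breathing spiral rather than a fixed circle.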
Capturing these movements is crucial to create a reliable coordinate system that can feed navigation systems or project trajectory paths in space travel. “Locating a point to the exact centimeter for global positioning is an extremely dynamic process – after all, at our latitude, we are moving at around 350 meters to the east per second,” explains Prof. Karl Ulrich Schreiber, now station director of the Geodetic Observatory Wettzell, where the ring laser is housed. Schreiber had directed the project in TUM’s Research Section Satellite Geodesy. The Geodetic Observatory Wettzell is run jointly by TUM and BKG.
The researchers have succeeded in corroborating the Chandler and annual wobble measurements based on the data captured by radio telescopes. They now aim to make the apparatus more accurate, enabling them to determine changes in the Earth’s rotational axis over a single day. The scientists also plan to make the ring laser capable of continuous operation so that it can run for a period of years without any deviations. “In simple terms,” concludes Schreiber, “in future, we want to be able to just pop down into the basement and find out how fast the Earth is accurately turning right now."
For more information please visit the TU München homepage http://portal.mytum.de/pressestelle/pressemitteilungen/NewsArticle_20111220_100621/newsarticle_view?.
|
<urn:uuid:d4281798-7278-4727-a736-be4cecc072f8>
| 3.921875
|
http://www.bkg.bund.de/nn_149566/sid_0F335650A0F77C3C47FE83A10BEB41EC/nsc_true/EN/News/01News/N2011/2011__12__27ring-laser.html
|
the National Science Foundation
Available Languages: English, Spanish
This classroom-tested learning module gives a condensed, easily-understood view of the development of atomic theory from the late 19th through early 20th century. The key idea was the discovery that the atom is not an "indivisible" particle, but consists of smaller constituents: the proton, neutron, and electron. It discusses the contributions of John Dalton, J.J. Thomson, Ernest Rutherford, and James Chadwick, whose experiments revolutionized the world view of atomic structure. See Related Materials for a link to Part 2 of this series.
atomic structure, cathode ray experiment, electron, helium atom, history of atom, history of the atom, hydrogen atom, neutron, proton
Metadata instance created
July 12, 2011
by Caroline Hall
October 10, 2012
by Caroline Hall
Last Update when Cataloged:
January 1, 2006
AAAS Benchmark Alignments (2008 Version)
4. The Physical Setting
4D. The Structure of Matter
6-8: 4D/M1a. All matter is made up of atoms, which are far too small to see directly through a microscope.
9-12: 4D/H1. Atoms are made of a positively charged nucleus surrounded by negatively charged electrons. The nucleus is a tiny fraction of the volume of an atom but makes up almost all of its mass. The nucleus is composed of protons and neutrons which have roughly the same mass but differ in that protons are positively charged while neutrons have no electric charge.
9-12: 4D/H2. The number of protons in the nucleus determines what an atom's electron configuration can be and so defines the element. An atom's electron configuration, particularly the outermost electrons, determines how the atom can interact with other atoms. Atoms form bonds to other atoms by transferring or sharing electrons.
10. Historical Perspectives
10F. Understanding Fire
9-12: 10F/H1. In the late 1700s and early 1800s, the idea of atoms reemerged in response to questions about the structure of matter, the nature of fire, and the basis of chemical phenomena.
9-12: 10F/H3. In the early 1800s, British chemist and physicist John Dalton united the concepts of atoms and elements. He proposed two ideas that laid the groundwork for modern chemistry: first, that elements are formed from small, indivisible particles called atoms, which are identical for a given element but different from any other element; and second, that chemical compounds are formed from atoms by combining a definite number of each type of atom to form one molecule of the compound.
9-12: 10F/H4. Dalton figured out how the relative weights of the atoms could be determined experimentally. His idea that every substance had a unique atomic composition provided an explanation for why substances were made up of elements in specific proportions.
This resource is part of a Physics Front Topical Unit.
Topic: Particles and Interactions and the Standard Model Unit Title: History and Discovery
Carpi, A. (2006, January 1). Visionlearning: Atomic Theory I. Visionlearning. Retrieved May 21, 2013, from http://www.visionlearning.com/library/module_viewer.php?mid=50&l=
|
<urn:uuid:e5d364b6-d557-47e6-b078-62ea4b57c2d1>
| 3.4375
|
http://www.compadre.org/precollege/items/detail.cfm?ID=11307
|
Micro vs Macro
Micro and macro are prefixes used before words to denote small and large, respectively. This is true of micro- and macroeconomics, micro- and macroevolution, microorganisms, micro and macro lenses, microfinance and macrofinance, and so on. The list of words that use these prefixes is long. Many people confuse micro and macro despite knowing that the prefixes signify small and large. This article takes a closer look at the two prefixes to find out their differences.
To understand the difference between micro and macro, let us take the example of micro- and macroevolution. Evolution that takes place within a single species is called microevolution, whereas evolution that transcends the boundaries of species and takes place on a very large scale is termed macroevolution. Though the principles of evolution, such as genetics, mutation, natural selection, and migration, remain the same across microevolution and macroevolution, the distinction between the two is a useful way to explain this natural phenomenon.
Another field of study that makes use of micro and macro is economics. While the study of the overall economy and how it works is called macroeconomics, microeconomics focuses on the individual person, company, or industry. Thus, the study of GDP, employment, inflation etc. in an economy is classified under macroeconomics. Microeconomics is the study of forces of demand and supply inside a particular industry effecting the goods and services. So it is macroeconomics when economists choose to concentrate upon the state of the economy in a nation whereas the study of a single market or industry remains within the realms of microeconomics.
The prefixes are also commonly used in the study of finance. Microfinance focuses on the monetary needs and requirements of a single individual, whereas macrofinance refers to very large-scale financing by banks or other financial institutions.
Micro and macro are derived from Greek language where micro means small and macro refers to large. These prefixes are used in many fields of study such as finance, economics, evolution etc. where we have words like micro finance and macro finance, micro evolution and macro evolution etc. Studying something at a small level is micro while studying it on a large scale is macro analysis. Financing the needs of an individual may be micro financing whereas the financial needs of a builder requiring money for a very large infrastructural project may be referred to as macro finance.
|
<urn:uuid:837ed974-3d4d-4f45-9652-e4dbceee85e2>
| 3.375
|
http://www.differencebetween.com/difference-between-micro-and-vs-macro/
|
Leaf Characteristics (PK1) This set introduces simple vocabulary to describe the physical features of 40 North American tree, garden, and house plant leaves. First - The child sorts 9 leaf characteristics cards (3" x 4") onto 3 control cards (10-3/8” x 5¼”) that identify characteristics of Leaf Types, Leaf Veins, and Leaf Margins. Second - After learning the 9 characteristics of leaves, it is time to describe the 3 characteristics of just one leaf. A leaf card is selected from the 40 leaf cards provided (3" x 4"). The child selects the 3 characteristics cards (type, venation, margin) that describe that leaf, and places them on the blank Leaf Identification card (10-3/8” x 5¼”). Real leaves can be used in this exercise as well. Background information is included for the teacher.
Leaves (PK1C) This set consists of 40 DUPLICATE leaf cards (80 cards total). One group of 20 cards illustrates familiar leaves such as dandelion, marigold, and ivy. The second group illustrates common North American tree leaves such as oak, maple, and cottonwood. These are the same leaf cards found in In-Print for Children's “Leaf Characteristics” activity.
Flowers (FL1) This set is designed to help children recognize and to name 20 common flowers, many of which are commercially available throughout the year. This duplicate set of picture cards can be used in simple matching exercises, or in 3-part matching activities if one set is cut apart. The 40 photocards (3¼” x 4") are in full-color and laminated. Flower background information is included for the teacher.
Nuts (PK3) Nuts are nourishing snacks and learning how they grow will make eating them all the more fun! This set of 22 two-color cards (5½” x 3½”) of plant and nut illustrations represents eleven edible nuts/seeds. The child pairs the illustration cards of the nuts in their growing stage to the cards of the nuts in and out of their shells. Make the activity even more successful by bringing the real nuts into the classroom.
Kitchen Herbs & Spices (PK5) This set help children to learn about 20 plants that give us herbs and spices. The delicately drawn, 2-color illustrations clearly show the parts of the plants that give us edible leaves, seeds, stems, bark, bulbs, and berries. Create an aromatic and tasty exercise by having the children pair real herbs and spices with these cards (4½” x 6¼”).
Plants We Eat (PK9) Learn more about food plants and their different edible parts. This set classifies 18 plant foods into six groups: roots, stems, leaves, flowers, fruits, and seeds. A duplicate set of 18 labeled picture/definition cards (6" x 6") shows plants in their growing stage with only their “food” portion in color. One set of picture/ definition cards is spiral bound into 6 control booklets that include definitions of the root, stem, leaf, flower, fruit, and seed. The other set of picture/ definition cards are to be cut apart for 2 or 3-part matching exercises. Plant description cards can be used for “Who am I?” games with our plant picture cards or with real foods. Both cards and booklets are laminated.
Plants We Eat Replicards (PK9w) Six replicards are photocopied to produce worksheets for an extension exercise using our set Plants We Eat (PK9). Children color and label the worksheets, which illustrate three plant examples for each of the following groups: roots, stems, leaves, flowers, fruits, and seeds. The Plants We Eat booklets serve as controls. After worksheets (8½” x 11") are colored and labeled, they can be cut apart, stapled together, and made into six take-home booklets. These booklets may generate lively family dinner-table discussions: “A potato is a what?”
Plants - Who am I? (WP) This beginning activity for lower elementary strengthens both reading and listening skills, and provides children with simple facts about 10 plants. The set consists of duplicate, labeled picture cards with descriptive text and features plants different from those in the First Knowledge: Plant Stories (see below). The set of cards with text ending in “Who am I?” is cut apart into 10 picture cards, 10 plant name cards, and 10 text cards. The other set is left whole. Cards are used for picture-to-text card matching exercises and for playing the “Who am I” game. Cards measure 6½” x 4" and are in full color and laminated.
First Knowledge: Plant Stories (PK7) This set consists of 19 duplicate plant picture/text cards. One set is cut apart for 3-part matching activities, and the other set is placed in the green, 6-ring mini-binder labeled Plants. The teacher has the option of changing the cards in the binder as needed. The children can match the 3-part cards (6" x 3¾”) to the cards in the binder, practice reading, learn about the diverse characteristics of these plants, and then play “Who am I?” The eight angiosperms picture cards can be sorted beneath two cards that name and define Monocots and Dicots. These activities prepare children for later work with our Plant Kingdom Chart & Cards (see below), which illustrates the same plants.
Plant Kingdom Chart and Cards (PK6) Our 4-color plastic paper chart and cards represent the current classification of the plant kingdom (not illustrated here) – the same as is used in secondary and college level biology courses. This classification organizes the plant kingdom in a straightforward manner with simple definitions and examples under each heading. First, the plants are categorized as either Nonvascular Plants (Bryophytes) or Vascular Plants. Then the Vascular plants are divided into two groups: Seedless Plants or Seed Plants. Seed Plants are divided into two groups: Gymnosperms and Angiosperms with sub-categories. Nineteen picture cards (2¼” x 3") illustrate the currently recognized phyla of the plant kingdom. Children match the 19 plant picture cards to the pictures on the chart (18" x 32"). Text on the back of the picture cards describes each plant. Advanced students can recreate the chart with the title cards provided, using the chart as a control of error. Background information is provided.
Parts of a Mushroom Parts of a gilled mushroom are highlighted and labeled on six 2-color cards (3" x 5"). Photocopy the Replicard (8½” x 11") to make quarter page worksheets. The child colors and labels the worksheets, using the picture cards as a guide. Completed worksheets can be stapled together to make a booklet for “Parts of a Mushroom”. (In-Print product code FK1)
Fungi (FK4) Members of the Fungus Kingdom have a wide variety of forms. Children see fungi everywhere, such as mold on food, or mushrooms on the lawn. This duplicate set of labeled picture cards shows 12 common fungi found indoors and out. Fungi illustrated: blue cheese fungus, bolete, coral fungus, cup fungus, jelly fungus, lichens, mildew, milky mushrooms, mold, and morel. Background information is included. Pictures cards (3½” x 4½”) are in full color and laminated.
Classification of the Fungus Kingdom Chart and Cards (FK3) This classification of the Fungus Kingdom organizes 18 representative fungi into four major groups and two important fungal partnerships: Chytrids, Yoke Fungi, Sac Fungi, Club Fungi, Lichens, Mycorrhizae. Children match the 18 picture cards (2-7/8” x 2-3/8”) to the pictures on the 2-color chart (18" x 16"). After this activity, they can sort the picture cards under the label cards for the 5 fungus groups, using the chart as the control. Description of each fungus type is printed on the back of the picture cards. Background information is included for the teacher. This chart is printed on vinyl and does not need to be laminated.
|
<urn:uuid:d8d399d3-71ef-4f88-8432-8aa6553e707a>
| 3.96875
|
http://www.in-printforchildren.com/3201/4285.html
|
Phrenology: the study of the conformation of the skull based on the belief that it is indicative of mental faculties and character
Study of the shape of the skull as an indication of mental abilities and character traits. Franz Joseph Gall stated the principle that each of the innate mental faculties is based in a specific brain region (organ), whose size reflects the faculty's prominence in a person and is reflected by the skull's surface. He examined the skulls of persons with particular traits (including criminal traits) for a feature he could identify with it. His followers Johann Kaspar Spurzheim (1776–1832) and George Combe (1788–1858) divided the scalp into areas they labeled with traits such as combativeness, cautiousness, and form perception. Though popular well into the 20th century, phrenology has been wholly discredited.
|
<urn:uuid:0946e91b-14ac-49fc-8c05-8dc7853d40e6>
| 3.578125
|
http://www.merriam-webster.com/dictionary/phrenology
|
LESSON ONE: Transforming Everyday Objects
Marcel Duchamp: Bicycle Wheel, bicycle wheel on wooden stool, 1963 (Henley-on-Thames, Richard Hamilton Collection); © 2007 Artists Rights Society (ARS), New York/ADAGP, Paris, photo credit: Cameraphoto/Art Resource, NY
Man Ray: Rayograph, gelatin silver print, 29.4×23.2 cm, 1923 (New York, Museum of Modern Art); © 2007 Man Ray Trust/Artists Rights Society (ARS), New York/ADAGP, Paris, photo © The Museum of Modern Art, New York
Meret Oppenheim: Object (Le Déjeuner en fourrure), fur-lined cup, diam. 109 mm, saucer, diam. 237 mm, spoon, l. 202 mm, overall, h. 73 mm, 1936 (New York, Museum of Modern Art); © 2007 Artists Rights Society (ARS), New York/ProLitteris, Zurich, photo © Museum of Modern Art/Licensed by SCALA/Art Resource, NY
Dada and Surrealist artists questioned long-held assumptions about what a work of art should be about and how it should be made. Rather than creating every element of their artworks, they boldly selected everyday, manufactured objects and either modified and combined them with other items or simply selected them and called them “art.” In this lesson students will consider their own criteria for something to be called a work of art, and then explore three works of art that may challenge their definitions.
Students will consider their own definitions of art.
Students will consider how Dada and Surrealist artists challenged conventional ideas of art.
Students will be introduced to Readymades and photograms.
Ask your students to take a moment to think about what makes something a work of art. Does art have to be seen in a specific place? Where does one encounter art? What is art supposed to accomplish? Who is it for?
Ask your students to create an individual list of their criteria. Then, divide your students into small groups to discuss and debate the results and come up with a final list. Finally, ask each group to share with the class what they think is the most important criteria and what is the most contested criteria for something to be called a work of art. Write these on the chalkboard for the class to review and discuss.
Show your students the image of Bicycle Wheel. Ask your students if Marcel Duchamp’s sculpture fulfills any of their criteria for something to be called a work of art. Ask them to support their observations with visual evidence.
Inform your students that Duchamp made this work by fastening a Bicycle Wheel to a kitchen stool. Ask your students to consider the fact that Duchamp rendered these two functional objects unusable. Make certain that your students notice that there is no tire on the Bicycle Wheel.
To challenge accepted notions of art, Duchamp selected mass-produced, often functional objects from everyday life for his artworks, which he called Readymades. He did this to shift viewers’ engagement with a work of art from what he called the “retinal” (there to please the eye) to the “intellectual” (“in the service of the mind.”) [H. H. Arnason and Marla F. Prather, History of Modern Art: Painting, Sculpture, Architecture, Photography (Fourth Edition) (New York: Harry N. Abrams, Inc., 1998), 274.] By doing so, Duchamp subverted the traditional notion that beauty is a defining characteristic of art.
Inform your students that Bicycle Wheel is the third version of this work. The first, now lost, was made in 1913, almost forty years earlier. Because the materials Duchamp selected to be Readymades were mass-produced, he did not consider any Readymade to be “original.”
Ask your students to revisit their list of criteria for something to be called a work of art. Ask them to list criteria related specifically to the visual aspects of a work of art (such as “beauty” or realistic rendering).
Duchamp said of Bicycle Wheel, “In 1913 I had the happy idea to fasten a bicycle wheel to a kitchen stool and watch it turn.” [John Elderfield, ed., Studies in Modern Art 2: Essays on Assemblage (New York: The Museum of Modern Art, 1992), 135.] Bicycle Wheel is a kinetic sculpture that depends on motion for effect. Although Duchamp selected items for his Readymades without regard to their so-called beauty, he said, “To see that wheel turning was very soothing, very comforting . . . I enjoyed looking at it, just as I enjoy looking at the flames dancing in a fireplace.” [Francis M. Naumann, The Mary and William Sisler Collection (New York: The Museum of Modern Art, 1984), 160.] By encouraging viewers to spin Bicycle Wheel, Duchamp challenged the common expectation that works of art should not be touched.
Show your students Rayograph. Ask your students to name recognizable shapes in this work. Ask them to support their findings with visual evidence. How do they think this image was made?
Inform your students that Rayograph was made by Man Ray, an American artist who was well-known for his portrait and fashion photography. Man Ray transformed everyday objects into mysterious images by placing them on photographic paper, exposing them to light, and oftentimes repeating this process with additional objects and exposures. When photographic paper is developed in chemicals, the areas blocked from light by objects placed on the paper earlier on will remain light, and the areas exposed to light will turn black. Man Ray discovered the technique of making photograms by chance, when he placed some objects in his darkroom on light-sensitive paper and accidentally exposed them to light. He liked the resulting images and experimented with the process for years to come. He likened the technique, now known as the photogram, to “painting with light,” calling the images rayographs, after his assumed name.
Now that your students have identified some recognizable objects used to make Rayograph, ask them to consider which of those objects might have been translucent and which might have been opaque, based on the tone of the shapes in the photogram.
Now show your students Meret Oppenheim’s sculpture Object (Déjeuner en fourrure). Both Rayograph and Object were made using everyday objects and materials not traditionally used for making art, which, when combined, challenge ideas of reality in unexpected ways. Ask your students what those everyday objects are and how they have been transformed by the artists.
Ask your students to name some traditional uses for the individual materials (cup, spoon, saucer, fur) used to make Object. Ask your students what choices they think Oppenheim made to transform these materials and objects.
In 1936, the Swiss artist Oppenheim was at a café in Paris with her friends Pablo Picasso and Dora Maar. Oppenheim was wearing a bracelet she had made from fur-lined, polished metal tubing. Picasso joked that one could cover anything with fur, to which Oppenheim replied, “Even this cup and saucer.” [Bice Curiger, Meret Oppenheim: Defiance in the Face of Freedom (Zurich, Frankfurt, New York: PARKETT Publishers Inc., 1989), 39.] Her tea was getting cold, and she reportedly called out, “Waiter, a little more fur!” Soon after, when asked to participate in a Surrealist exhibition, she bought a cup, saucer, and spoon at a department store and lined them with the fur of a Chinese gazelle. [Josephine Withers, “The Famous Fur-Lined Teacup and the Anonymous Meret Oppenheim” (New York: Arts Magazine, Vol. 52, November 1977), 88-93.]
Duchamp, Oppenheim, and Man Ray transformed everyday objects into Readymades, Surrealist objects, and photograms. Ask your students to review the images of the three artworks in this lesson and discuss the similarities and differences between these artists’ transformation of everyday objects.
Art and Controversy
At the time they were made, works of art like Duchamp’s Bicycle Wheel and Oppenheim’s Object were controversial. Critics called Duchamp’s Readymades immoral and vulgar—even plagiaristic. Overwhelmed by the publicity Object received, Oppenheim sank into a twenty-year depression that greatly inhibited her creative production.
Ask your students to conduct research on a work of art that has recently been met with controversy. Each student should find at least two articles that critique the work of art. Have your students write a one-page summary of the issues addressed in these articles. Students should consider how and why the work challenged and upset critics. Was the controversial reception related to the representation, the medium, the scale, the cost, or the location of the work? After completing the assignment, ask your students to share their findings with the class. Keep a list of critiques shared across the works’ various receptions.
Make a Photogram
If your school has a darkroom, have your students make photograms. Each student should collect several small objects from school, home, and outside to place on photographic paper. Their collection should include a range of translucent and opaque objects to allow different levels of light to shine through. Students may want to overlap objects or use their hands to cover parts of the light-sensitive paper. Once the objects are arranged on the paper in a darkroom, have your students expose the paper to light for several seconds (probably about five to ten seconds, depending on the level of light), then develop, fix, rinse, and dry the paper. Allow for a few sheets of photographic paper per student so that they can experiment with different arrangements and exposures. After the photograms are complete, have your students discuss the different results that they achieved. Students may also make negatives of their photograms by placing them on top of a fresh sheet of photographic paper and covering the two with a sheet of glass. After exposing this to light, they can develop the paper to get the negative of the original photogram.
Encourage your students to try FAUXtogram, an activity available on Red Studio, MoMA's Web site for teens.
GROVE ART ONLINE: Suggested Reading
Below is a list of selected articles which provide more information on the specific topics discussed in this lesson.
|
<urn:uuid:31fab53b-eb78-4e38-ae2c-77d787710125>
| 3.859375
|
http://www.oxfordartonline.com/public/page/lessons/Unit5Lesson1
|
The pleura are two thin, moist membranes around the lungs. The inner layer is attached to the lungs. The outer layer is attached to the ribs. Pleural effusion is the buildup of excess fluid in the space between the pleura. The fluid can prevent the lungs from fully opening. This can make it difficult to catch your breath.
Pleural effusion may be transudative or exudative based on the cause. Treatment of pleural effusion depends on the condition causing the effusion.
Effusion is usually caused by disease or injury.
Transudative effusion may be caused by:
Exudative effusion may be caused by:
Factors that increase your chance of getting pleural effusion include:
- Having conditions or diseases listed above
- Certain medications such as:
- Nitrofurantoin (Macrodantin, Furadantin, Macrobid)
- Methysergide (Sansert)
- Bromocriptine (Parlodel)
- Procarbazine (Matulane)
- Amiodarone (Cordarone)
- Chest injury or trauma
- Radiation therapy
Surgery, especially involving:
- Organ transplantation
Some types of pleural effusion do not cause symptoms. Others cause a variety of symptoms, including:
- Shortness of breath
- Chest pain
- Stomach discomfort
- Coughing up blood
- Shallow breathing
- Rapid pulse or breathing rate
- Weight loss
- Fever, chills, or sweating
These symptoms may be caused by many other conditions. Let your doctor know if you have any of these symptoms.
The doctor will ask about your symptoms and medical history. A physical exam will be done. This may include listening to or tapping on your chest. Lung function tests will test your ability to move air in and out of your lungs.
Images of your lungs may be taken with:
Your doctor may take samples of the fluid or pleura tissue for testing. This may be done with:
Treatment is usually aimed at treating the underlying cause. This may include medications or surgery.
Your doctor may take a "watchful waiting" approach if your symptoms are minor. You will be monitored until the effusion is gone.
To Support Breathing
If you are having trouble breathing, your doctor may recommend:
- Breathing treatments—inhaling medication directly to lungs
- Oxygen therapy
Drain the Pleural Effusion
The pleural effusion may be drained by:
- Therapeutic thoracentesis —a needle is inserted into the area to withdraw excess fluid.
- Tube thoracostomy—a tube is placed in the side of your chest to allow fluid to drain. It will be left in place for several days.
Seal the Pleural Layers
The doctor may recommend chemical pleurodesis. During this procedure, talc powder or an irritating chemical is injected into the pleural space. This will permanently seal the two layers of the pleura together. The seal may help prevent further fluid buildup.
Radiation therapy may also be used to seal the pleura.
In severe cases, surgery may be needed. Some of the pleura will be removed during surgery. Surgery options may include:
- Thoracotomy—traditional, open chest procedure
- Video-assisted thoracoscopic surgery (VATS)—minimally invasive surgery that only requires small, keyhole-sized incisions
Prompt treatment for any condition that may lead to effusion is the best way to prevent pleural effusion.
- Reviewer: Brian Randall, MD
- Review Date: 02/2013
- Update Date: 03/05/2013
|
<urn:uuid:b050fe6e-cde2-4f69-8178-49ddeb1ead6b>
| 3.546875
|
http://largomedical.com/your-health/?/2010812305/Pleural-Effusion
|
Student Learning Outcomes
Students who complete the French Program will be able to:
- Communicate in a meaningful context in French.
- Analyze the nature of language through comparisons of the French language and their own.
- Demonstrate knowledge of and sensitivity to aspects of behavior, attitudes, and customs of France and other French speaking countries.
- Connect with the global community through study and acquisition of the French language.
|
<urn:uuid:5af48c87-5ebf-40b0-9437-a79124e81436>
| 3.53125
|
http://sdmesa.edu/instruction/slo/programs.cfm?DeptID=28
|
CAMBRIDGE, Mass. -- Following the 1997 creation of the first laser to emit pulsed beams of atoms, MIT researchers report in the May 16 online version of Science that they have now made a continuous source of coherent atoms. This work paves the way for a laser that emits a continuous stream of atoms.
MIT physicists led by physics professor Wolfgang Ketterle (who shared the 2001 Nobel prize in physics) created the first atom laser. A long-sought goal in physics, the atom laser emitted atoms, similar in concept to the way an optical laser emits light.
"I am amazed at the rapid progress in the field," Ketterle said. "A continuous source of Bose-Einstein condensate is just one of many recent advances."
Because the atom laser operates in an ultra-high vacuum, it may never be as ubiquitous as optical lasers. But, like its predecessor, the pulsed atom laser, a continuous-stream atom laser may someday be used for a variety of applications in fundamental physics.
It could be used to directly deposit atoms onto computer chips, and improve the precision and accuracy of atomic clocks and gyroscopes. It could aid in precision measurements of fundamental constants, atom optics and interferometry.
A continuous stream laser could do all of these things better than a pulsed atomic laser, said co-author Ananth P. Chikkatur, a physics graduate student at MIT. "Similar to the optical laser revolution, a continuous stream atom laser might be useful for more things than a pulsed laser," he said.
In addition to Ketterle and Chikkatur, authors include MIT graduate students Yong-Il Shin and Aaron E. Leanhardt; David F. Kielpinski, postdoctoral fellow in the MIT Research Laboratory of Electronics (RLE); physics senior Edem Tsikata; MIT affiliate Todd L. Gustavson; and David E. Pritchard, Cecil and Ida Green Professor of Physics and a member of the MIT-Harvard Center for Ultracold Atoms and the RLE.
A NEW FORM OF MATTER
An important step toward the first atom laser was the creation of a new form of matter - the Bose-Einstein condensate (BEC). BEC forms at temperatures around one millionth of a degree Kelvin, a million times colder than interstellar space.
Ketterle's group had developed novel cooling techniques that were key to the observation of BEC in 1995, first by a group at the University of Colorado at Boulder, then a few months later by Ketterle at MIT. It was for this achievement that researchers from both institutions were honored with the Nobel prize last year.
Ketterle and his research team managed to merge a bunch of atoms into what he calls a single matter-wave, and then used fluctuating magnetic fields to shape the matter-wave into a beam much like a laser.
To test the coherence of a BEC, the researchers generated two separate matter-waves, made them overlap and photographed a so-called "interference pattern" that only can be created by coherent waves. The researchers then had proof that they had created the first atom laser.
Since 1995, all atom lasers and BEC have been produced in a pulsed manner, emitting individual pulses of atoms several times per minute. Until now, little progress has been made toward a continuous BEC source.
While it took about six months to create a continuous optical laser after the first pulsed optical laser was produced in 1960, the much more technically challenging continuous source of coherent atoms has taken seven years since Ketterle and colleagues first observed BEC in 1995.
A NEW CHALLENGE
Creating a continuous BEC source involved three steps: building a chamber where the condensate could be stored in an optical trap, moving the fresh condensate and merging the new condensate with the existing condensate stored in the optical trap. (The same researchers first developed an optical trap for BECs in 1998.)
The researchers built an apparatus containing two vacuum chambers: a production chamber where the condensate is produced and a "science chamber" around 30 centimeters away, where the condensate is stored.
The condensate in the science chamber had to be protected from laser light, which was necessary to produce a fresh condensate, and also from hot atoms. This required great precision, because a single laser-cooled atom has enough energy to knock thousands of atoms out of the condensate. In addition, they used an optical trap as the reservoir trap, which is insensitive to the magnetic fields used for cooling atoms into a BEC.
The researchers also needed to figure out how to move the fresh condensate - chilled to astronomically low temperatures - from the production chamber to the science chamber without heating them up. This was accomplished using optical tweezers - a focused laser light beam that traps the condensate.
Finally, they moved the new condensate, still held in the optical tweezers, into the science chamber and merged it with the condensate already stored there.
A BUCKET OF ATOMS
If the pulsed atom laser is like a faucet that drips, Chikkatur says the new innovations create a sort of bucket that collects the drips without wasting or changing the condensate too dramatically by heating it. This way, a reservoir of condensate is always on hand to replenish an atom laser.
The condensate pulses are like a dripping faucet, where the drops are analogous to the pulsed BEC production. "We have now implemented a bucket (our reservoir trap), where we collect these drips to have continuous source of water (BEC)," Chikkatur said. "Although we did not demonstrate this, if we poke a hole in this bucket, we will have a steady stream of water. This hole would be an outcoupling technique from which we can produce a continuous atom laser output.
"The big achievement here is that we have invented the bucket, which can store atoms continuously and also makes sure that the drips of water do not cause a lot of splashing (heating of BECs)," he said.
The next step would be to improve the number of atoms in the source, perhaps by implementing a large-volume optical trap. Another important step would be to demonstrate a phase-coherent condensate merger using a matter wave amplification technique pioneered by the MIT group and a group in Japan, he said.
This work is funded by the National Science Foundation, the Office of Naval Research, the Army Research Office, the Packard Foundation and NASA.
|
<urn:uuid:00cd54cf-be16-4b4b-8800-7d5342159b7a>
| 3.515625
|
http://web.mit.edu/newsoffice/2002/atomsource.html
|
We are banishing darkness from the night. Electric lights have been shining over cities and towns around the world for a century. But, increasingly, even rural areas glimmer through the night, with mixed – and largely unstudied – impacts on wildlife. Understanding these impacts is a crucial conservation challenge and bats, as almost exclusively nocturnal animals, are ideal subjects for exploring the effects of light pollution.
Previous studies have confirmed what many city dwellers have long noted: some bats enjoy a positive impact of illumination by learning to feed on insects attracted to streetlights. My research, however, demonstrates for the first time an important downside: artificial lighting can disrupt the commuting behavior of a threatened bat species. This project, using a novel experimental approach, was supported in part by BCI Student Research Scholarships.
Artificial lighting is a global phenomenon and the amount of light pollution is growing rapidly, with a 24 percent increase in England between 1993 and 2000. Since then, cultural restoration projects have brought lighting to old docks and riversides, placing important river corridors used by bats and other wildlife at risk of disturbance.
Studies of bats' foraging activity around streetlights find that these bats are usually fast-flying species that forage in open landscapes, typically species of Pipistrellus, Nyctalus, Vespertilio and Eptesicus. Such bats are better able than their slower cousins to evade hawks, owls and other birds of prey.
For our study, we chose the lesser horseshoe bat (Rhinolophus hipposideros), a shy, slow-flying bat that typically travels no more than about 1.2 miles (2 kilometers) from its roost to forage each night, often flying no more than 16 feet (5 meters) from the ground. The species is adapted for feeding in cluttered woodland environments. Its global populations are reported to be decreasing, and the species is endangered in many countries of central Europe. The United Kingdom provides a European stronghold for the lesser horseshoe bat, with an estimated population of around 50,000.
These bats' slow flight leaves them especially vulnerable to birds of prey, so they leave their roosts only as the light fades and commute to foraging areas along linear features such as hedgerows. Hedgerows are densely wooded corridors of shrubs and small trees that typically separate fields from each other and from roadways. Such features are important commuting routes for many bat species, which use them for protection from predators and the elements. We suspected that lesser horseshoe bats would avoid illuminated areas, largely because of a heightened risk from raptors.
We conducted artificial-lighting experiments along hedgerows in eight sites around southern Britain. We first surveyed light levels at currently illuminated hedgerows, then duplicated those levels at our experimental hedgerow sites, all of them normally unlighted. We installed two temporary, generator-powered lights – about 100 feet (30 meters) apart – that mimicked the intensity and light spectra of streetlights. Each site was near a maternity colony and along confirmed commuting routes of lesser horseshoe bats.
Bat activity at each site was monitored acoustically, with mounted bat detectors, during four specific treatments: control (with no lights); noise (generator on and lights installed but switched off); lit (full illumination all night for four consecutive nights); and another night of noise only. We identified horseshoe bat calls to species and measured relative activity by counting the number of bat passes per species each night.
We found no significant difference in activity levels of lesser horseshoe bats between the control nights and either of the two noise nights, when the generators were running but the lights were off. The presence of the lighting units and the noise of the generators had no effect on bat activity.
The negative impacts came when we turned on the lights. We documented dramatic reductions in activity of lesser horseshoe bats during all of the illuminated nights. In our study, 42 percent of commuting bats continued flying through the lights; 30 percent reversed direction and left before reaching the lights; 17 percent flew over the hedgerows; 9 percent flew through the thick hedgerow vegetation; and 2 percent circled high or wide to avoid the lights. We also recorded some strange behavior on one night when two bats flew over the hedge in a dark area between two lights, then flew up and down repeatedly, as though trapped between the lights.
We examined the effects of light on the timing of bats' commuting activity. The bats began their commute, on average, 29.9 minutes after sunset on control nights, but 78.6 minutes after sunset when the lights were turned on. Light pollution significantly delayed the bats' commuting behavior. Interestingly, the activity began a few minutes earlier (23 minutes after sunset) on the first, but not the second, noise night. It is possible that some bats emerged early to investigate the generator noise.
We clearly demonstrated how artificial lighting disrupts the behavior of lesser horseshoe bats. We found no evidence of habituation: at least on our timescale, the bats did not become accustomed to the illumination and begin returning to normal activity or timing.
These results suggest that light pollution may fragment the network of commuting routes used by lesser horseshoe bats, causing them to seek alternate, and probably longer, paths between roosting and foraging habitats. For some bats, this increased flight time can increase energy costs and stress, with potential impacts on reproductive success. It is critical, therefore, that light pollution be considered in conservation efforts.
Light pollution is an increasing global problem with negative impacts on such important animal behaviors as foraging, reproduction and communication. Yet lighting is rarely considered in habitat-management plans and streetlights are specifically excluded from light-pollution legislation in England and Wales.
I plan to use these results as the basis for recommendations for changes in policy, conservation and management for bat habitat in areas that are subject to development. This knowledge is fundamental for understanding the factors that impact bat populations not only in the United Kingdom but around the world, and in developing effective bat-conservation actions. I hope these findings will also help guide further research.
Scientists need to determine what levels of lighting particular bat species can tolerate, so we can take appropriate measures to limit the impact. These might include reducing illumination at commuting times, directing light away from commuting routes and constructing alternative flight routes.
We sincerely hope this research and similar studies will cause both officials and the public to think more about the consequences of artificial lighting on bats and other wildlife.
EMMA STONE is a Ph.D. student at the University of Bristol and a researcher at the university's School of Biological Sciences. This project earned her the national Vincent Weir Scientific Award from the Bat Conservation Trust of the United Kingdom. Visit her project website for more information: www.batsandlighting.co.uk.
This research was originally published in the journal Current Biology, with co-authors Gareth Jones and Stephen Harris.
|
<urn:uuid:28ac1264-a7a3-4f42-b3f0-d3aa321f1dcf>
| 3.71875
|
http://www.batcon.org/index.php/media-and-info/bats-archives.html?task=viewArticle&magArticleID=1066
|
Asthma is a lifelong disease that causes wheezing, breathlessness, chest tightness, and coughing. It can limit a person's quality of life. While we don't know why asthma rates are rising, we do know that most people with asthma can control their symptoms and prevent asthma attacks by avoiding asthma triggers and correctly using prescribed medicines, such as inhaled corticosteroids.
The number of people diagnosed with asthma grew by 4.3 million from 2001 to 2009. From 2001 through 2009 asthma rates rose the most among black children, almost a 50% increase. Asthma was linked to 3,447 deaths (about 9 per day) in 2007. Asthma costs in the US grew from about $53 billion in 2002 to about $56 billion in 2007, about a 6% increase. Greater access to medical care is needed for the growing number of people with asthma.
Asthma is increasing every year in the US.
Too many people have asthma.
- The number of people with asthma continues to grow. One in 12 people (about 25 million, or 8% of the population) had asthma in 2009, compared with 1 in 14 (about 20 million, or 7%) in 2001.
- More than half (53%) of people with asthma had an asthma attack in 2008. More children (57%) than adults (51%) had an attack.
- 185 children and 3,262 adults died from asthma in 2007.
- About 1 in 10 children (10%) had asthma and 1 in 12 adults (8%) had asthma in 2009. Women were more likely than men and boys more likely than girls to have asthma.
- About 1 in 9 (11%) non-Hispanic blacks of all ages and about 1 in 6 (17%) of non-Hispanic black children had asthma in 2009, the highest rate among racial/ethnic groups.
- The greatest rise in asthma rates was among black children (almost a 50% increase) from 2001 through 2009.
Asthma Action Plan Stages
Green Zone: Doing Well
No cough, wheeze, chest tightness, or shortness of breath; can do all usual activities. Take prescribed long-term control medicine such as inhaled corticosteroids.
Yellow Zone: Getting Worse
Cough, wheeze, chest tightness, or shortness of breath; waking at night; can do some, but not all, usual activities. Add quick-relief medicine.
Red Zone: Medical Alert!
Very short of breath; quick-relief medicines don't help; cannot do usual activities; symptoms no better after 24 hours in Yellow Zone. Get medical help NOW.
Full Action Plan: http://www.cdc.gov/asthma/actionplan.html
Asthma has a high cost for individuals and the nation.
- Asthma cost the US about $3,300 per person with asthma each year from 2002 to 2007 in medical expenses.
- Medical expenses associated with asthma increased from $48.6 billion in 2002 to $50.1 billion in 2007. About 2 in 5 (40%) uninsured people with asthma could not afford their prescription medicines, and about 1 in 9 (11%) insured people with asthma could not afford their prescription medicines.
- More than half (59%) of children and one-third (33%) of adults who had an asthma attack missed school or work because of asthma in 2008. On average, in 2008 children missed 4 days of school and adults missed 5 days of work because of asthma.
Better asthma education is needed.
- People with asthma can prevent asthma attacks if they are taught to use inhaled corticosteroids and other prescribed daily long-term control medicines correctly and to avoid asthma triggers. Triggers can include tobacco smoke, mold, outdoor air pollution, and colds and flu.
- In 2008 less than half of people with asthma reported being taught how to avoid triggers. Almost half (48%) of adults who were taught how to avoid triggers did not follow most of this advice.
- Doctors and patients can better manage asthma by creating a personal asthma action plan that the patient follows.
Asthma by age and sex US, 2001-2009
Percentages are age-adjusted
SOURCE: National Center for Health Statistics; 2010.
Asthma self-management education by age, US, 2008
SOURCE: National Health Interview Survey, 2008, asthma supplement.
Adults with asthma in the US, 2009
SOURCE: Behavioral Risk Factor Surveillance System, 2009
Federal, state, and local health officials can:
- Track asthma rates and the effectiveness of control measures so continuous improvements can be made in prevention efforts.
- Promote influenza and pneumonia vaccination for people with asthma.
- Promote improvements in indoor air quality for people with asthma through measures such as smoke-free air laws and policies, healthy schools and workplaces, and improvements in outdoor air quality.
Health care providers can:
- Determine the severity of asthma and monitor how much control the patient has over it.
- Make an asthma action plan for patients. Use this to teach them how to use inhaled corticosteroids and other prescribed medicines correctly and how to avoid asthma triggers such as tobacco smoke, mold, pet dander, and outdoor air pollution.
- Prescribe inhaled corticosteroids for all patients with persistent asthma.
People with asthma and parents of children with asthma can:
- Receive ongoing appropriate medical care.
- Be empowered through education to manage their asthma and asthma attacks.
- Avoid asthma triggers at school, work, home, and outdoors. Parents of children with asthma should not smoke, or if they do, smoke only outdoors and not in their cars.
- Use inhaled corticosteroids and other prescribed medicines correctly.
Schools and school nurses can:
- Use student asthma action plans to guide use of inhaled corticosteroids and other prescribed asthma medicines correctly and to avoid asthma triggers.
- Make students' quick-relief inhalers readily available for them to use at school as needed.
- Take steps to fix indoor air quality problems like mold and outdoor air quality problems such as idling school buses.
Employers and insurers can:
- Promote healthy workplaces by reducing or eliminating known asthma triggers.
- Promote measures that prevent asthma attacks such as eliminating co-payments for inhaled corticosteroids and other prescribed medicines.
- Provide reimbursement for educational sessions conducted by clinicians, health educators, and other health professionals both within and outside of the clinical setting.
- Provide reimbursement for long-term control medicines, education, and services to reduce asthma triggers that are often not covered by health insurers.
|
<urn:uuid:444d36d1-9d94-4ee6-a229-05bf8f8ca759>
| 3.375
|
http://www.cdc.gov/VitalSigns/Asthma/index.html
|
Human expansion and interference have detrimental effects as civilizations continue to encroach on previously undisturbed habitats. As a result, many species of animals and plants must struggle to survive.
Biodiversity reveals the important role each of these life forms plays in its ecosystem as well as the irreversible and extensive consequences that would result from a massive loss of biodiversity. It explores the ecological and evolutionary processes, how these processes depend on the cohabitation of a wide range of life forms within an ecosystem, and how the existence of these diverse organisms maintains a crucial stability in the natural world. Beginning with an introduction to biodiversity, this new volume discusses its importance and history, the difficulties in maintaining it, and past and current efforts to protect ecosystems from greater destruction. It examines five specific case studies, including the United States, Indonesia, New Zealand, Madagascar, and Costa Rica, describing the current status and history of biodiversity, obstacles, and conservation efforts in the country at hand.
Maps. Index. Bibliography. Glossary. Chronology. Tables and graphs.
About the Author(s)
Natalie Goldstein is a freelance writer who has written numerous books for the educational market, including textbooks and teacher's guides for the middle school and encyclopedias for the high school. She also wrote Globalization and Free Trade and Global Warming in the Global Issues series.
Foreword author Julie L. Lockwood is director of the graduate program in ecology and evolution and associate professor in the Department of Ecology, Evolution, and Natural Resources at Rutgers University. She is the coauthor of Avian Invasions: The Ecology and Evolution of Exotic Birds and Invasion Ecology.
|
<urn:uuid:8e291587-fa26-42d6-8a00-914854f8bb35>
| 3.734375
|
http://www.infobasepublishing.com/Bookdetail.aspx?ISBN=0816082421&eBooks=0
|
Let's Talk About: Cosmic collisions
Share with others:
It has been almost 100 years since Edwin Hubble measured the universe beyond the Milky Way Galaxy. Today, astronomers believe that as many as 100 billion other galaxies are sharing the cosmos. Most of these cosmic islands are classified by shape as either spiral or elliptical, but stargazing scientists have discovered galaxies that don't quite fit these molds.
Common to this "irregular" category are galaxies that interact with other galaxies. These gravitational interactions are often referred to as mergers, and their existence invites the question: Is the Milky Way collision-prone? To evaluate the probability, look to the Andromeda Galaxy. Located more than 2.5 million light-years away, Andromeda appears as a small fuzzy patch in the sky. However, there is nothing miniature about it. Similar to the shape (spiral), size and mass of the Milky Way, Andromeda is home to a trillion other stars.
Astronomers have known for decades that our galactic neighbor is rapidly closing in on us -- at approximately 250,000 miles per hour. They know this because of blueshift, a measured decrease in electromagnetic wavelength caused by the motion of a light-emitting source, in this case Andromeda, as it moves closer to the observer.
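For readers who want the underlying arithmetic, the standard non-relativistic Doppler relation converts the measured wavelength shift into a line-of-sight speed:

v \approx c\,\frac{\lambda_{\mathrm{observed}} - \lambda_{\mathrm{rest}}}{\lambda_{\mathrm{rest}}}

A negative result means the source is approaching. Andromeda's spectral lines are shifted toward shorter wavelengths, and once the Sun's own orbital motion around the Milky Way's center is accounted for, the implied approach speed works out to roughly the 250,000 miles per hour quoted above.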
Recently, data collected from the Hubble Space Telescope has allowed astronomers to predict a merger with certainty, in about 4 billion years. Our sun will still be shining, and Earth will most likely survive the impact. The reason: galaxies, although single units of stars gravitationally tied together, are mostly gigantic voids. One can compare a galaxy-on-galaxy collision to the pouring of one glass of water into another. The end result is a larger collection of water, or in the case of a cosmic collision, a larger galaxy. Future Earth inhabitants, billions of years from now, could look up and observe only small portions of such an event because it will take 2 billion years for these cosmic islands to become one.
First Published November 29, 2012 12:00 am
|
<urn:uuid:ebb1ace8-11cc-4b0f-87f2-0f8f23923491>
| 3.9375
|
http://www.post-gazette.com/stories/news/science/lets-talk-about-cosmic-collisions-664068/
|
Lauren Boulden's Story
Using Think-Alouds to Get Inside Langston Hughes' Head
Over my past few years of teaching, there have been multiple occasions where I have been stumped on how to present a particular concept to my students. I've always been able to turn to ReadWriteThink.org for hands-on, engaging lessons. For example, I knew I wanted my students to develop their skills when it came to interacting with text, particularly with poetry. While searching through the myriad options on ReadWriteThink, I came upon "Building Reading Comprehension Through Think-Alouds."
At first, I planned to use the lesson exactly as written: Read Langston Hughes's poem "Dream Variation" and model a think-aloud with students; then have the students try their hand at some think-alouds using other poetry. After working out all of the details, I realized I could develop some additional skills, which would fit perfectly into the scope and sequence of my class. After completing the think-aloud to "Dream Variation," I broke students into selected groups. Each group was given a different Langston Hughes poem and asked to complete a think-aloud. The next day, the students were put into a new jigsaw group where they were solely responsible for sharing what their Langston Hughes poem conveyed. Based on the meanings behind their group mates' poems, along with using the knowledge of both their poem and "Dream Variation," students were asked to figure out who Langston Hughes was as a man. What did he stand for? What were his beliefs? What did he want out of life? Students used clues from the various poems to fill in a head-shaped graphic organizer to depict their understanding of who Hughes could be. This simple lesson of working with poems and think-alouds turned into a few days of group communication, text deciphering, inferences, and even an author study!
Without great lessons available on ReadWriteThink.org, such as "Building Reading Comprehension Through Think-Alouds," my students would never have been able to tackle so many key reading strategies in such a short amount of time.
Grades 6 – 8 | Lesson Plan | Standard Lesson
Students learn components of think-alouds and type-of-text interactions through teacher modeling. In the process, students develop the ability to use think-alouds to aid in reading comprehension tasks.
I have been teaching seventh- and eighth-grade language arts in Delaware for the past five years. I grew up in Long Island, New York, but have called Delaware my home since completing my undergraduate and master’s work at the University of Delaware. Teaching and learning have become my prime passions in life, which is why my days are spent teaching English, directing plays, organizing the school newspaper, and teaching yoga in the evenings.
|
<urn:uuid:d55ba202-34fa-4fe6-b6ae-698c282eb244>
| 3.5625
|
http://www.readwritethink.org/about/community-stories/using-think-alouds-inside-36.html
|
King James II of England (who was also James VII of Scotland) inherited the throne in 1685 upon the death of his brother, Charles II. James II was unpopular because of his attempts to increase the power of the monarchy and restore the Catholic faith. Deposed in the "Glorious Revolution" of 1688-89, he fled to France. His daughter and son-in-law succeeded him as Queen Mary II and King William III. James II died in 1701.
James II by John Miller. Biography from the Yale English Monarchs series.
James II: The Triumph and the Tragedy by John Callow. Charts James' life using little-known material from the UK National Archives. Includes James' own description of the Battle of Edgehill, his reasons for his conversion to Catholicism, and his correspondence with William of Orange.
A Court in Exile: The Stuarts in France, 1689-1718 by Edward Corp. After James II was deposed, he established his court in France. The book describes his court and the close relationships between the British and French royal families.
King in Exile: James II: Warrior, King and Saint by John Callow. Reassesses James's strategy for dealing with his downfall and exile, presenting a portrait of a man who planned for great political rewards and popular acclaim.
James II and the Trial of the Seven Bishops by William Gibson. The trial of seven bishops in 1688 was a prelude to the Glorious Revolution, as popular support for the bishops led to widespread welcome for William of Orange's invasion.
The Making of King James II by John Callow is about the formative years of the fallen king. Out of print, but sometimes available from Alibris.
The Countess and the King: A Novel of the Countess of Dorchester and King James II by Susan Holloway Scott. Novel about Katherine Sedley, a royal mistress forced to make the most perilous of choices: to remain loyal to the king, or to England.
The Crown for a Lie by Jane Lane. Novel about how James II lost his throne. Out of print, but sometimes available from Alibris.
|
<urn:uuid:be22c535-4fcb-4d7e-9507-2ace71b51909>
| 3.53125
|
http://www.royalty.nu/Europe/England/Stuart/JamesII.html
|
Indian removal had been taking place in the United States since the 18th Century as more Americans made the move westward. In the early 19th Century, Andrew Jackson and the majority of white Americans like him, wanted the Indians to move west of the Mississippi, out of the way from white expansion. Popular thought was that the Indians were savages who could not be civilized, and integration with the white culture was not a possibility.
Through the next several years, Indian tribes all over the eastern front were forced to reservations of proportional inequality compared with land once owned. The United States bought the land from the Indians while using its brute power to force unruly tribes west. No matter how much they tried, the Indians were no match for the strength of the United States.
Indians of the Sauk and Fox tribes tried to take back land that was ceded to the United States wrongfully. When they inhabited the vacant land, Americans saw them as a threat to the white settlements close-by. Illinois state militia was sent in to destroy the so-called "invaders." The Indians retreated back and the militia continued to attack until most had been killed.
Were Americans justified in the mass movement of Indian tribes? I would have to say they were not. I cannot see the logic in their assumptions about the Indians. For the most part, little interaction took place between the two groups. Yet Americans still believed the Indians were uncivilized.
Perhaps the problem was in terms of envy. Indians had been capable of adapting to land and using the land efficiently for years at a time. I think Americans saw how the Indians were able to do this, and became jealous of their superior farming abilities. Land was becoming useless in the east, and Indians had been able to use their land repeatedly. Americans saw this fertile land as rich in potential profit and were willing to go to any length in acquiring it.
Evidence of the two cultures working together in a society was apparent in New Mexico, Texas, and California. If these people were able to survive and live off each other, I would have to assume that, had the United States made an effort, the situation could have been resolved in an easier manner. Unless, that is, it was jealousy that was driving them to take the Indian land; something tells me it was exactly that which caused such a debacle. Superior in farming techniques and land use, the Indians found that their very efficiency became the reason they lost their land.
|
<urn:uuid:323ada14-c2ab-40ad-a3b6-39c9c51e30e5>
| 3.734375
|
http://www.shvoong.com/humanities/history/6283-removal-indians-north-america/
|
There are many techniques available to help students get started with a piece of writing. Getting started can be hard for all levels of writers. Freewriting is one great technique to build fluency. That was explored in an earlier lesson plan: http://www.thirteen.org/edonline/adulted/lessons/lesson18.html
This unit offers some other techniques. These techniques may be especially helpful with students who prefer a style of learning or teaching that could be described as visual, spatial, or graphic. Those styles are sometimes overlooked in favor of approaches that are very linguistic or linear. The approaches here will attend to a broader range of learning styles as they add variety.
- Writing: Writing Process, Pre-Writing, Autobiography, Exposition, Personal Narrative, Argumentation, Comparison and Contrast, Description.
Students will be able to:
- Write more fluently (writing more with greater ease)
- Generate writing topics
- Select topics that will yield strong pieces of writing
- Connect personal experience, knowledge, and examples to an assigned topic
- Produce better organized pieces of writing
National Reporting System of Adult Education standards are applicable here. These are the standards required by the 1998 Workforce Investment Act.
Pencils, colored pencils, pens, markers, crayons, unlined paper, magazines and newspapers with pictures inside, glue or paste, and paper. Big paper or poster board can make the pre-writing exercises more eye-catching, more of a project, and better for display.
Video and TV:
Prep for Teachers
Make sure you try each of the activities yourself before you ask students to do them. That will give you a better understanding of the activities and help you recognize any potential points that may be confusing or difficult. This also gives you a sample to show the students. It's much easier to create a diagram if you are shown an example of one.
Here are some Web sites that give background and even more ideas about pre-writing, diagrams, graphic organizers, and other ways to get started with writing. There is some repetition here. You don't have to read them all. But check them out and see what you think.
|
<urn:uuid:8337696e-d794-475f-9207-8e5f70d2fabe>
| 4.28125
|
http://www.thirteen.org/edonline/adulted/lessons/lesson19.html
|
No one knows how the first organisms or even the first organic precursors formed on Earth, but one theory is that they didn't. Rather, they were imported from space. Scientists have been finding what looks like biological raw material in meteorites for years, but it's usually been shown to be ground contamination. This year, however, investigators studying a dozen meteorites that landed in Antarctica found traces of adenine and guanine, two of the four nucleobases that make up DNA. That's not a big surprise, since nucleobases have been found in meteorites before. But these were found in the company of other molecules that were similar in structure but not identical. Those had never been detected in previous meteorite samples, and they were also not found on the ground where the space rocks landed. That rules out contamination and rules in space organics. A little adenine and guanine in the company of other mysterious stuff is a long, long way from something living, but it's closer than we were before.
|
<urn:uuid:3687b7dc-36d0-40be-aad4-b342f2eaf02b>
| 3.78125
|
http://www.time.com/time/specials/packages/article/0,28804,2101344_2101210_2101220,00.html
|
In honour of scientist and astronomer Nicolaus Copernicus (1473-1543), the discovering team around Professor Sigurd Hofmann suggested the name copernicium with the element symbol Cp for the new element 112, discovered at the GSI Helmholtzzentrum für Schwerionenforschung (Center for Heavy Ion Research) in Darmstadt. It was Copernicus who discovered that the Earth orbits the Sun, thus paving the way for our modern view of the world. Thirteen years ago, element 112 was discovered by an international team of scientists at the GSI accelerator facility. A few weeks ago, the International Union of Pure and Applied Chemistry, IUPAC, officially confirmed their discovery. In around six months, IUPAC will officially endorse the new element's name. This period is set to allow the scientific community to discuss the suggested name copernicium before the IUPAC naming.
"After IUPAC officially recognized our discovery, we – that is all scientists involved in the discovery – agreed on proposing the name copernicium for the new element 112. We would like to honor an outstanding scientist, who changed our view of the world", says Sigurd Hofmann, head of the discovering team.
Copernicus was born 1473 in Torun; he died 1543 in Frombork, Poland. Working in the field of astronomy, he realized that the planets circle the Sun. His discovery refuted the then accepted belief that the Earth was the center of the universe. His finding was pivotal for the discovery of the gravitational force, which is responsible for the motion of the planets. It also led to the conclusion that the stars are incredibly far away and the universe inconceivably large, as the size and position of the stars does not change even though the Earth is moving. Furthermore, the new world view inspired by Copernicus had an impact on the human self-concept in theology and philosophy: humankind could no longer be seen as the center of the world.
With its planets revolving around the Sun on different orbits, the solar system is also a model for other physical systems. The structure of an atom is like a microcosm: its electrons orbit the atomic nucleus like the planets orbit the Sun. Exactly 112 electrons circle the atomic nucleus in an atom of the new element "copernicium".
Element 112 is the heaviest element in the periodic table, 277 times heavier than hydrogen. It is produced by nuclear fusion, by bombarding a lead target with zinc ions. Because the element decays after only a split second, its existence can only be proved with the help of extremely fast and sensitive analysis methods. Twenty-one scientists from Germany, Finland, Russia and Slovakia were involved in the experiments that led to the discovery of element 112.
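For reference, the production step is conventionally written as a fusion-evaporation reaction; using the symbol Cn that was eventually adopted for element 112, it reads:

{}^{70}\mathrm{Zn} + {}^{208}\mathrm{Pb} \rightarrow {}^{278}\mathrm{Cn}^{*} \rightarrow {}^{277}\mathrm{Cn} + \mathrm{n}

The mass numbers balance (70 + 208 = 278), and the prompt emission of a single neutron leaves the isotope of mass number 277 that the detectors identify through its characteristic decay chain.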
Since 1981, GSI accelerator experiments have yielded the discovery of six chemical elements, which carry the atomic numbers 107 to 112. The discovering teams at GSI already named five of them: element 107 is called bohrium, element 108 hassium, element 109 meitnerium, element 110 darmstadtium, and element 111 is named roentgenium.
The new element 112 discovered by GSI has been officially recognized and will be named by the Darmstadt group in due course. Their suggestion should be made public over this summer.
The element 112, discovered at the GSI Helmholtzzentrum für Schwerionenforschung (Centre for Heavy Ion Research) in Darmstadt, has been officially recognized as a new element by the International Union of Pure and Applied Chemistry (IUPAC). IUPAC confirmed the recognition of element 112 in an official letter to the head of the discovering team, Professor Sigurd Hofmann. The letter furthermore asks the discoverers to propose a name for the new element. Their suggestion will be submitted within the next weeks. In about 6 months, after the proposed name has been thoroughly assessed by IUPAC, the element will receive its official name. The new element is approximately 277 times heavier than hydrogen, making it the heaviest element in the periodic table.
“We are delighted that now the sixth element – and thus all of the elements discovered at GSI during the past 30 years – has been officially recognized. During the next few weeks, the scientists of the discovering team will deliberate on a name for the new element”, says Sigurd Hofmann. 21 scientists from Germany, Finland, Russia and Slovakia were involved in the experiments around the discovery of the new element 112.
Since 1981, GSI accelerator experiments have yielded the discovery of six chemical elements, which carry the atomic numbers 107 to 112. GSI has already named their officially recognized elements 107 to 111: element 107 is called Bohrium, element 108 Hassium, element 109 Meitnerium, element 110 Darmstadtium, and element 111 is named Roentgenium.
Recommendation for the Naming of Element of Atomic Number 110
Prepared for publication by J. Corish and G. M. Rosenblatt
A joint IUPAC-IUPAP Working Party has confirmed the discovery of element number 110 by the collaboration of Hofmann et al. from the Gesellschaft für Schwerionenforschung mbH (GSI) in Darmstadt, Germany.
In accord with IUPAC procedures, the discoverers have proposed a name and symbol for the element. The Inorganic Chemistry Division Committee now recommends this proposal for acceptance. The proposed name is darmstadtium with symbol Ds. This proposal lies within the long established tradition of naming an element after the place of its discovery.
|
<urn:uuid:149ab25b-f1f4-4231-88ea-4e1968ed8a9d>
| 3.671875
|
http://www.webelements.com/nexus/search/results/taxonomy%3A20%2C538%2C198.212
|
As the years tick by with most of the planet doing little in the way of reducing carbon emissions, researchers are getting increasingly serious about the possibility of carbon sequestration. If it looks like we're going to be burning coal for decades, carbon sequestration offers us the best chance of limiting its impact on climate change and ocean acidification. A paper that will appear in today's PNAS describes a fantastic resource for carbon sequestration that happens to be located right next to many of the US' major urban centers on the East Coast.
Assuming that capturing the carbon dioxide is financially and energetically feasible, the big concern becomes where to put it so that it will stay out of the atmosphere for centuries. There appear to be two main schools of thought here. One is that areas that hold large deposits of natural gas should be able to trap other gasses for the long term. The one concern here is that, unlike natural gas, CO2 readily dissolves in water, and may escape via groundwater that flows through these features. The alternative approach turns that problem into a virtue: dissolved CO2 can react with minerals in rocks called basalts (the product of major volcanic activity), forming insoluble carbonate minerals. This should provide an irreversible chemical sequestration.
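In simplified terms, the trapping chemistry is the reaction of dissolved CO2 with the magnesium- and calcium-bearing silicates in basalt to form stable carbonates. Using idealized end-member minerals as stand-ins for basalt's more complex mineralogy, representative reactions look like:

\mathrm{Mg_2SiO_4} + 2\,\mathrm{CO_2} \rightarrow 2\,\mathrm{MgCO_3} + \mathrm{SiO_2}
\mathrm{CaSiO_3} + \mathrm{CO_2} \rightarrow \mathrm{CaCO_3} + \mathrm{SiO_2}

The carbonates on the right-hand side are the insoluble minerals that lock the carbon away.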
The new paper helpfully points out that if we're looking for basalts, the East Coast of the US, home to many of its major urban centers and their associated carbon emissions, has an embarrassment of riches. The rifting that broke up the supercontinent called Pangea and formed the Atlantic Ocean's basin triggered some massive basalt flows at the time, which are now part of the Central Atlantic Magmatic Province, or CAMP. The authors estimate that prior to some erosion, CAMP had the equivalent of the largest basalt flows we're currently aware of, the Siberian and Deccan Traps.
Some of this basalt is on land—anyone in northern Manhattan can look across the Hudson River and see it in the sheer cliffs of the Palisades. But much, much more of it is off the coast under the Atlantic Ocean. The authors provide some evidence in the form of drill cores and seismic readings that indicate there are large basalt deposits in basins offshore of New Jersey and New York, extending up to southern New England.
These areas are now covered with millions of years of sediment, which should provide a largely impermeable barrier that will trap any gas injected into the basalt for many years. The deposits should also have reached equilibrium with the seawater above, which will provide the water necessary for the chemical reactions that precipitate out carbonate minerals.
Using a drill core from an onshore deposit, the authors show that the basalt deposits are also composed of many distinct flows of material. Each of these flows would have undergone rapid cooling on both its upper and lower surface, which fragmented the rock. The core samples show porosity levels between 10 and 20 percent, which should allow any CO2 pumped into the deposits to spread widely.
The authors estimate that New Jersey's Sandy Hook basin, a relatively small deposit, is sufficient to house 40 years' worth of emissions from coal plants that produce 4GW of electricity. And the Sandy Hook basin is dwarfed by one that lies off the Carolinas and Georgia. They estimate that the South Georgia Rift basin covers roughly 40,000 square kilometers.
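To get a feel for the quantities involved, here is a rough back-of-envelope sketch in Python. The emission factor (about 0.9 kg of CO2 per kilowatt-hour of coal-fired electricity) and the assumption that the plants run flat-out all year are illustrative guesses, not figures from the paper:

```python
# Rough scale of CO2 implied by "40 years of emissions from 4 GW of coal plants".
# Assumptions (not from the paper): ~0.9 kg CO2 per kWh of coal-fired electricity,
# and the plants running at full capacity year-round.

capacity_gw = 4
emission_factor_kg_per_kwh = 0.9
hours_per_year = 8760
years = 40

kwh_per_year = capacity_gw * 1e6 * hours_per_year            # GW -> kW, times hours
tonnes_per_year = kwh_per_year * emission_factor_kg_per_kwh / 1000.0
total_gigatonnes = tonnes_per_year * years / 1e9

print(f"~{tonnes_per_year / 1e6:.0f} Mt CO2 per year, "
      f"~{total_gigatonnes:.1f} Gt over {years} years")
```

Under those assumptions the basin would need to hold on the order of a gigatonne of CO2, which gives a sense of why the far larger South Georgia Rift basin is the more striking prospect.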
The authors argue that although laboratory simulations suggest the basic idea of using basalts for carbon sequestration is sound, the actual effectiveness in a given region can depend on local quirks of geology, so pilot tests in the field are absolutely essential for determining whether a given deposit is suitable. So far, only one small-scale test has been performed on any of the CAMP deposits.
Given the area's proximity to significant sources of CO2 and the infrastructure that could be brought into play if full-scale sequestration is attempted, it seems like one of the most promising proposals to date.
PNAS, 2010. DOI: 10.1073/pnas.0913721107
|
<urn:uuid:0f4b5328-483d-437b-b4b6-8cf4bfa3968b>
| 3.90625
|
http://arstechnica.com/science/2010/01/pangea-era-rift-makes-east-coast-perfect-for-carbon-storage/
|
This section provides primary sources that document how Indian and European men and one English and one Indian woman have described the practice of sati, or the self-immolation of Hindu widows.
Although they are all critical of self-immolation, Francois Bernier, Fanny Parks, Lord William Bentinck, and Rev. England present four different European perspectives on the practice of sati and what it represents about Indian culture in general, and the Hindu religion and Hindu women in particular. They also indicate increasing negativism in European attitudes toward India and the Hindu religion in general. It would be useful to compare the attitudes of Bentinck and England as representing the secular and sacred aspects of British criticism of sati. A comparison of Bentinck’s minute with the subsequent legislation also reveals differences in tone between private and public documents of colonial officials. Finally, a comparison between Fanny Parks and the three men should raise discussion on whether or not the gender and social status of the writer made any difference in his or her appraisal of the practice of self-immolation.
The three sources by Indian men and one by an Indian woman illustrate the diversity of their attitudes toward sati. The Marathi source illuminates the material concerns of relatives of the Hindu widow who is urged to adopt a son, so as to keep a potentially lucrative office within the extended family. These men are willing to undertake intense and delicate negotiations to secure a suitably related male child who could be adopted. This letter also documents that adoption was a legitimate practice among Hindus, and that Hindu women as well as men could adopt an heir. Ram Mohan Roy’s argument illustrates a rationalist effort to reform Hindu customs with the assistance of British legislation. Roy illustrates one of the many ways in which Indians collaborate with British political power in order to secure change within Indian society. He also enabled the British to counter the arguments of orthodox Hindus about the scriptural basis for the legitimacy of self-immolation of Hindu widows. The petition of the orthodox Hindu community in Calcutta, the capital of the Company’s territories in India, documents an early effort of Indians to keep the British colonial power from legislating on matters pertaining to the private sphere of Indian family life. Finally, Pandita Ramabai reflects the ways in which ancient Hindu scriptures and their interpretation continued to dominate debate. Students should consider how Ramabai’s effort to raise funds for her future work among child widows in India might have influenced her discussion of sati.
Two key issues should be emphasized. First, both Indian supporters and European and Indian opponents of the practice of self-immolation argue their positions on the bodies of Hindu women, and all the men involved appeal to Hindu scriptures to legitimate their support or opposition. Second, the voices of Indian women were filtered through the sieve of Indian and European men and a very few British women until the late 19th century.
- How do the written and visual sources portray the Hindu women who commit self-immolation? Possible aspects range from physical appearance and age, motivation, evidence of physical pain (that even the most devoted woman must suffer while burning to death), to any evidence of the agency or autonomy of the Hindu widow in deciding to commit sati. Are any differences discernible, and if so, do they seem related to gender or nationality of the observer or time period in which they were observed?
- How are the brahman priests who preside at the self-immolation portrayed in Indian and European sources? What might account for any similarities and differences?
- What reasons are used to deter Hindu widows from committing sati? What do these reasons reveal about the nature of family life in India and the relationships between men and women?
- What do the reasons that orthodox Hindus provide to European observers and to Indian reformers reveal about the significance of sati for the practice of the Hindu religion? What do their arguments reveal about orthodox Hindu attitudes toward women and the family?
- How are Hindu scriptures used in various ways in the debates before and after the prohibition of sati?
- What is the tone of the petition from 800 Hindus to their British governor? Whom do they claim to represent? What is their justification for the ritual of self-immolation? What is their attitude toward the Mughal empire whose Muslim rulers had preceded the British? What is their characterization of the petitioners toward those Hindus who support the prohibition on sati? How do the petitioners envision the proper relationship between the state and the practice of religion among its subjects?
- Who or what factors do European observers, British officials, and Indian opponents of sati hold to be responsible for the continuance of the practice of sati?
- What were the reasons that widows gave for committing sati? Were they religious, social or material motives? What is the evidence that the widows were voluntarily committing sati before 1829? What reasons did the opponents of sati give for the decisions of widows to commit self-immolation? What reasons did opponents give for widows who tried to escape from their husbands’ pyres?
- What are the reasons that Lord Bentinck and his Executive Council cite for their decision to declare the practice of sati illegal? Are the arguments similar to or different from his arguments in his minute a month earlier? What do these reasons reveal about British attitudes toward their role or mission in India? Do they use any of the arguments cited by Ram Mohan Roy or Pandita Ramabai?
- What do these sources, both those who oppose sati and those who advocate it, reveal about their attitudes to the Hindu religion in particular and Indian culture in general?
|
<urn:uuid:672e69ee-fd10-42dc-8e01-f4fde95914a0>
| 3.734375
|
http://chnm.gmu.edu/wwh/modules/lesson5/lesson5.php?menu=1&c=strategies&s=0
|
The white, mottled area in the right-center of this image from NASA’s Shuttle Radar Topography Mission (SRTM) is Madrid, the capital of Spain. Located on the Meseta Central, a vast plateau covering about 40 percent of the country, this city of 3 million is very near the exact geographic center of the Iberian Peninsula. The Meseta is rimmed by mountains and slopes gently to the west and to the series of rivers that form the boundary with Portugal. The plateau is mostly covered with dry grasslands, olive groves and forested hills.
Madrid is situated in the middle of the Meseta, and at an elevation of 646 meters (2,119 feet) above sea level is the highest capital city in Europe. To the northwest of Madrid, and visible in the upper left of the image, is the Sistema Central mountain chain that forms the “dorsal spine” of the Meseta and divides it into northern and southern subregions. Rising to about 2,500 meters (8,200 feet), these mountains display some glacial features and are snow-capped for most of the year. Offering almost year-round winter sports, the mountains are also important to the climate of Madrid.
Three visualization methods were combined to produce this image: shading and color coding of topographic height and radar image intensity. The shade image was derived by computing topographic slope in the northwest-southeast direction. North-facing slopes appear bright and south-facing slopes appear dark. Color coding is directly related to topographic height, with green at the lower elevations, rising through yellow and brown to white at the highest elevations. The shade image was combined with the radar intensity image in the flat areas.
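The blending described above (a directional slope shade combined with an elevation color ramp) is straightforward to reproduce for any elevation grid. The sketch below is a generic illustration using NumPy and Matplotlib, not the SRTM team's actual processing chain; the random `dem` array, the NW-SE sign convention, and the blending weights are all placeholders.

```python
# Generic sketch of combining a directional slope shade with elevation
# color coding, in the spirit of the SRTM visualization described above.
# The DEM here is random placeholder data; weights and conventions are assumed.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import cm

dem = np.random.rand(138, 172) * 2500.0      # placeholder elevation grid (meters)

# Shade from the slope along one NW-SE convention (sign conventions vary).
d_row, d_col = np.gradient(dem)              # gradients along rows and columns
nw_se = (d_col - d_row) / np.sqrt(2.0)
shade = (nw_se - nw_se.min()) / (np.ptp(nw_se) + 1e-9)

# Color code height: low elevations green, rising toward white at the top.
colors = cm.terrain((dem - dem.min()) / (np.ptp(dem) + 1e-9))[..., :3]

# Blend the shade into the color image (0.5/0.5 weighting chosen arbitrarily).
rgb = np.clip(colors * (0.5 + 0.5 * shade[..., None]), 0.0, 1.0)

plt.imshow(rgb)
plt.axis("off")
plt.show()
```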
Size: 172 by 138 kilometers (107 by 86 miles)
Location: 40.43 degrees North latitude, 3.70 degrees West longitude
Orientation: North toward the top
Image Data: shaded and colored SRTM elevation model, with SRTM radar intensity added
Original Data Resolution: SRTM 1 arcsecond (about 30 meters or 98 feet)
Date Acquired: February 2000
Image Courtesy SRTM Team NASA/JPL/NIMA
|
<urn:uuid:e494080f-4b89-4fa9-91cc-95cd733c7b72>
| 3.375
|
http://earthobservatory.nasa.gov/IOTD/view.php?id=3045
|
A crocodile large enough to swallow humans once lived in East Africa, according to a May 2012 paper in the Journal of Vertebrate Paleontology.
Paper author Christopher Brochu is an associate professor of geoscience at University of Iowa. He said:
It’s the largest known true crocodile. It may have exceeded 27 feet in length. By comparison, the largest recorded Nile crocodile was less than 21 feet, and most are much smaller.
The newly-discovered species lived between two and four million years ago in Kenya. It resembled its living cousin, the Nile crocodile, but was more massive.
Brochu recognized the new species from fossils that he examined three years ago at the National Museum of Kenya in Nairobi. Some were found at sites known for important human fossil discoveries. Brochu said:
It lived alongside our ancestors, and it probably ate them.

He explains that although the fossils contain no evidence of human/reptile encounters, crocodiles generally eat whatever they can swallow, and humans of that time period would have stood no more than four feet tall. He continued:
We don’t actually have fossil human remains with croc bites, but the crocs were bigger than today’s crocodiles, and we were smaller, so there probably wasn’t much biting involved.
Brochu added that there likely would have been ample opportunity for humans to encounter crocs. That’s because early man, along with other animals, would have had to seek water at rivers and lakes where crocodiles lie in wait.
The crocodile Crocodylus thorbjarnarsoni is named after John Thorbjarnarson, famed crocodile expert and Brochu’s colleague who died of malaria while in the field several years ago.
Brochu says Crocodylus thorbjarnarsoni is not directly related to the present-day Nile crocodile. This suggests that the Nile crocodile is a fairly young species and not an ancient “living fossil,” as many people believe. Brochu said:
We really don’t know where the Nile crocodile came from. But it only appears after some of these prehistoric giants died out.
Bottom line: A paper in the Journal of Vertebrate Paleontology in May, 2012 reports the discovery of an ancient crocodile large enough to swallow humans that lived two to four million years ago in East Africa.
|
<urn:uuid:f9cc2710-09a0-4ee3-8011-9574d0e64b2a>
| 3.4375
|
http://earthsky.org/earth/biggest-crocodile-that-ever-lived/comment-page-1
|
Most Americans believe that the Declaration of Independence by the Continental Congress on July 4, 1776 began American independence. While this date announced the formal break between the American colonists and the “mother country,” it did not guarantee independence. Not all Americans favored independence and most historical estimates place the number of Loyalist, or Tory, Americans near one-third of the population. Winning independence required an eight-year war that began in April, 1775 and ended with a peace treaty finalized on September 3, 1783. Unfortunately the infant nation found itself born in a world dominated by a superpower struggle between England and France. The more powerful European nations viewed the vulnerable United States, correctly, as weak and ripe for exploitation. Tragically, few Americans know of this period of crisis in our nation’s history because of the irresponsible neglect of the American education system.
American independence marked the end of one chapter in American history and the beginning of another. As with all historical events this declaration continued the endless cycle of action and reaction, because nothing occurs in a vacuum. Tragically, most Americans’ historical perspective begins with their birth, rendering everything that previously occurred irrelevant. Furthermore, most educators conveniently “compartmentalize” their subjects and do not place them in the proper historical context. Since most Americans only remember the United States as a superpower they do not know of our previous struggles. Unfortunately our agenda driven education system also ignores this period and often portrays America in the most negative light.
Without delving too deeply into the deteriorating relations between the American colonists and their “mother country,” declaring independence came slowly. None of the thirteen colonies trusted the other colonies and rarely acted in concert, even during times of crisis. Regional and cultural differences between New England, mid-Atlantic and the Southern colonies deeply divided the colonists. Even in these early days of America slavery proved a dividing issue, although few believed in racial equality. The “umbilical cord” with England provided the only unifying constant that bound them together culturally and politically.
The colonies further possessed different forms of government as well, although they steadfastly expressed their liberties and “rights as Englishmen.” Some colonies existed as royal colonies, where the English monarch selected the governor. Proprietary colonies formed when merchant companies or individuals, called proprietors, received a royal grant and appointed the governor. Charter colonies received their charters much as proprietary colonies with individuals or merchants receiving royal charters and shareholders selected the governor. Each colony elected its own legislature and local communities made their laws mostly based on English common law. Any form of national, or “continental,” unity remained an illusion largely in the minds of the delegates of the First Continental Congress.
The Second Continental Congress convened on May 10, 1775 because England ignored the grievances submitted by the First Continental Congress. Furthermore, open warfare erupted in Massachusetts between British troops and the colonial militia at Lexington and Concord on April 19, 1775. Known today as Patriot’s Day few Americans outside of Massachusetts celebrate it, or even know of it. Setting forth their reasons for taking up arms against England, they established the Continental Army on June 14, 1775. For attempting a united front, they appointed George Washington, a Virginian, as commander-in-chief. On July 10, 1775, the Congress sent Parliament one last appeal for resolving their differences, which proved futile.
While Congress determined the political future of the colonies fighting continued around Boston, beginning with the bloody battle on Breed’s Hill on June 17, 1775. Known as the Battle of Bunker Hill in our history the British victory cost over 1,000 British and over 400 American casualties. This battle encouraged the Americans because it proved the “colonials” capable of standing against British regulars. British forces withdrew from Boston in March, 1776 and awaited reinforcements from England as fighting erupted in other colonies.
While Washington and the Continental Army watched the British in Boston, Congress authorized an expedition against Canada. They hoped for significant resentment of British rule by the majority of French inhabitants, something they misjudged. In September, 1775 the fledgling Continental Army launched an ambitious, but futile, two-pronged invasion of Canada. Launched late in the season, particularly for Canada, it nevertheless almost succeeded, capturing Montreal and moving on Quebec. It ended in a night attack in a snowstorm on December 31, 1775 when the commander fell dead and the second-in-command fell severely wounded. American forces did breach the city walls, however when the attack broke down these men became prisoners of war.
For disrupting the flow of British supplies into America Congress organized the Continental Navy and Continental Marines on October 13, 1775 and November 10, 1775, respectively. Still, no demands for independence despite the creation of national armed forces, the invasion of a “foreign country” and all the trappings of a national government.
The full title of the Declaration of Independence ends with “thirteen united States of America,” with united in lower case. I found no evidence that the Founding Fathers did this intentionally, or whether it merely reflected the writing style of the time. Despite everything mentioned previously regarding “continental” actions, the thirteen colonies jealously guarded their sovereignty.
Although Congress declared independence England did not acknowledge the legality of this resolution and considered the colonies “in rebellion.” England assembled land and naval forces of over 40,000, including German mercenaries, for subduing the “insurrection.” This timeless lesson proves the uselessness of passing resolutions with no credible threat of force backing them up. Unfortunately our academic-dominated society today believes merely the passage of laws and international resolutions forces compliance.
We hear much in the news today about "intelligence failures" regarding the war against terrorism. England definitely experienced an "intelligence failure" as it launched an expedition for "suppressing" this "insurrection" by a "few hotheads." First, they underestimated the extent of dissatisfaction among the Americans, spurred into action by such "rabble rousers" as John Adams. They further underestimated the effectiveness of Washington and the Continental Army, particularly after the American victories at Trenton and Princeton.
British officials further underestimated the number of Loyalists with the enthusiasm for taking up arms for the British. While Loyalist units fought well, particularly in the South and the New York frontier, they depended heavily on the support of British regulars. Once British forces withdrew, particularly in the South, the Loyalist forces either followed them or disappeared. A perennial lesson for military planners today: do not worry about your "footprint"; decisively defeat your enemy. This hardens the resolve of your supporters, influences the "neutrals" in your favor and reduces the favorability of your enemies.
Regarding the “national defense” the Continental Congress and “states” did not fully cooperate against the superpower, England. The raising of the Continental Army fell on the individual colonies almost throughout the war with the Congress establishing quotas. Unfortunately, none of the colonies ever met their quota for Continental regiments, with the soldiers negotiating one-year enlistments.
Continental Army recruiters often met with competition from the individual colonies, who preferred fielding their militias. The Congress offered bounties in the almost worthless “Continental Currency” and service far from home in the Continental Army. Colonial governments offered higher bounties in local currencies, or British pounds, and part-time service near home.
Congress only possessed the authority for requesting troops and supplies from the colonial governors, who often did not comply. For most of the war the Continental Army remained under strength, poorly supplied, poorly armed and mostly unpaid. Volumes of history describe the harsh winters endured by the Continentals at Valley Forge and Morristown, New Jersey the following year.
Colonial governments often refused supplies for troops from other colonies, even though those troops fought inside their borders. As inflation continued devaluing “Continental Currency” farmers and merchants preferred trading with British agents, who often paid in gold. This created strong resentment from the soldiers who suffered the hardships of war and the civilians who profited from this trade. In fairness, the staggering cost of financing the war severely taxed the colonial governments and local economies, forcing hard choices.
Congress further declared independence as a cry for help from England’s superpower rival, France, and other nations jealous of England. Smarting from defeat in the Seven Years War (French and Indian War in America), and a significant reduction in its colonial empire, France burned for revenge. France’s ally, Spain, also suffered defeat and loss of territory during this war and sought advantage in the American war. However, France and Spain both needed American victories before they risked their troops and treasures. With vast colonial empires of their own they hesitated at supporting a colonial rebellion in America. As monarchies, France and Spain held no love of “republican ideals” or “liberties,” and mostly pursued independent strategies against England. Fortunately their focus at recouping their former possessions helped diminish the number of British forces facing the Americans.
On the political front the Congress knew that the new nation needed some form of national government for its survival. Unfortunately the Congress fell short on this issue, enacting the weak Articles of Confederation on November 15, 1777. Delegates so feared the “tyranny” of a strong central government, as well as they feared their neighbors, that they rejected national authority. In effect, the congressional delegates created thirteen independent nations instead of one, and our nation suffered from it. Amending this confederation required the approval of all thirteen states, virtually paralyzing any national effort. This form of government lasted until the adoption of the US Constitution on September 17, 1787.
Despite these weaknesses the fledgling “United States” survived and even achieved some success against British forces. Particularly early in the war, the British forces possessed several opportunities for destroying the Continental Army and ending the rebellion. Fortunately for us British commanders proved lethargic and complacent, believing the “colonial rabble” incapable of defeating them. Furthermore, as the Continental Army gained experience and training it grew more professional, standing toe-to-toe against the British. Since the US achieved superpower status it fell into the same trap, continuously underestimating less powerful enemies.
The surrender of British forces at Yorktown, Virginia on October 19, 1781 changed British policy regarding its American colonies. British forces now controlled mainly three enclaves: New York City; Charleston, South Carolina and Savannah, Georgia. Loyalist forces, discouraged by British reverses, either retreated into these enclaves, departed America or surrendered. Waging a global war against France and Spain further reduced the number of troops available for the American theater. This serves as another modern lesson: maintain adequate forces for meeting not only your superpower responsibilities, but also for executing unforeseen contingencies.
Ironically, the victory at Yorktown almost defeated the Americans as well, since the civil authorities almost stopped military recruitment. Washington struggled at maintaining significant forces for confronting the remaining British forces in their enclaves. An aggressive British commander may still score a strategic advantage by striking at demobilizing American forces. Fortunately, the British government lost heart for retaining America and announced the beginning of peace negotiations in August, 1782.
The Treaty of Paris, signed on September 3, 1783 officially ended the American Revolution; however it did not end America’s struggles. American negotiators proved somewhat naïve in these negotiations against their more experienced European counterparts. Of importance, the British believed American independence a short-lived situation, given the disunity among Americans. Congress began discharging the Continental Army before the formal signing of the treaty, leaving less than one hundred on duty.
Instead of a united “allied” front, America, France and Spain virtually negotiated separate treaties with England, delighting the British. They believed that by creating dissension among the wartime allies they furthered their position with their former colonies. If confronted with a new war with more powerful France and Spain, America might rejoin the British Empire.
When England formally established the western boundary of the US at the Mississippi River it did not consult its Indian allies. These tribes did not see themselves as “defeated nations,” since they often defeated the Americans. Spanish forces captured several British posts in this territory and therefore claimed a significant part of the southeastern US.
France, who practically bankrupted itself in financing the American cause and waging its own war against England, expected an American ally. Unfortunately, the US proved a liability and incapable of repaying France for the money loaned during the war. France soon faced domestic problems that resulted in the French Revolution in 1789.
For several reasons England believed itself the winner of these negotiations, and in a more favorable situation, globally. England controlled Canada, from where it closely monitored the unfolding events in the US, and sowed mischief. It illegally occupied several military forts on American territory and incited the Indian tribes against the American frontier. By default, England controlled all of the American territory north of the Ohio River and west of the Appalachian Mountains.
Economically, England still believed that the US needed them as its primary trading partner, whether independent or not. A strong pro-British faction in America called for closer economic ties with the former “mother country.” As England observed the chaos that gripped the US at this time, they felt that its collapse, and reconquest by England, only a matter of time.
Most Americans today, knowing only the economic, industrial and military power of America cannot fathom the turmoil of this time. The weak central government and all the states accumulated a huge war debt, leaving them financially unstable. While the US possessed rich natural resources it lacked the industrial capabilities for developing them, without foreign investment. With no military forces, the nation lacked the ability of defending its sovereignty and its citizens. From all appearances our infant nation seemed stillborn, or as the vulnerable prey for the more powerful Europeans.
As stated previously the Articles of Confederation actually created thirteen independent nations, with no national executive for enforcing the law. Therefore each state ignored the resolutions from Congress and served its own self-interest. Each state established its own rules for interstate commerce, printed its own money and even established treaties with foreign nations. No system existed for governing the interactions between the states, who often treated each other like hostile powers.
The new nation did possess one thing in abundance, land; the vast wilderness between the Appalachian Mountains and the Mississippi River. Conceded by the British in the Treaty of Paris, the Americans looked at this as their economic solution. The nation owed the veterans of the Revolution a huge debt and paid them in the only currency available, land grants. Unfortunately, someone must inform the Indians living on this land and make treaties regarding land distribution.
For the Americans this seemed simple, the Indians, as British allies, suffered defeat with the British and must pay the price. After all, under the rules of European “civilized” warfare, defeated nations surrendered territory and life went on. Unfortunately no one, neither American nor British, informed the Indians of these rules, because no one felt they deserved explanation. Besides, the British hoped that by inciting Indian troubles they might recoup their former colonies.
With British arms and encouragement the tribes of the “Old Northwest” raided the western frontier with a vengeance. From western New York down through modern Kentucky these Indians kept up their war with the Americans. In Kentucky between 1783 and 1790 the various tribes killed an estimated 1,500 people, stole 20,000 horses and destroyed an unknown amount of property.
Our former ally, Spain, controlled all of the territory west of the Mississippi River before the American Revolution. From here they launched expeditions that captured British posts at modern Vicksburg and Natchez, Mississippi, and the entire Gulf Coast. However, they claimed about two-thirds of the southeastern US based on this “conquest” including land far beyond the occupation of their troops. Like the British, they incited the Indians living in this region for keeping out American settlers.
Spain also controlled the port of New Orleans and access into the Mississippi River. Americans living in Kentucky and other western settlements depended on the Mississippi River for their commerce. The national government seemed unable, or unwilling, at forcing concessions from Spain, and many westerners considered seceding from the Union. Known as the “Spanish Conspiracy” this plot included many influential Americans and only disappeared after the American victory at Fallen Timbers.
While revisionist historians ignore the "Spanish Conspiracy," they illuminate land speculation by Americans in Spanish territory. Of course they conveniently ignore the duplicity of Spanish officials in these plots, and their acceptance of American money. In signing the Declaration of Independence the Founding Fathers pledged "their lives, their fortunes and their sacred honor." Many Continental Army officers bankrupted themselves when Congress and their states proved recalcitrant at reimbursing them for incurred expenses. These officers often personally financed their troops and their expeditions because victory required timely action. Of importance for the western region, George Rogers Clark used his personal credit for financing his campaigns, which secured America's claim. It takes no "lettered" historian to determine that without Clark's campaign, America's western boundary would have ended at the Appalachian Mountains instead of the Mississippi River. With the bankrupt Congress and Virginia treasuries not reimbursing him, he fell into the South Carolina Yazoo Company. Clark's brother-in-law, Dr. James O'Fallon, negotiated this deal for 3,000,000 acres of land in modern Mississippi. This negotiation involved the Spanish governor of Louisiana, Don Estavan Miro, a somewhat corrupt official. When the Spanish king negated the treaty, Clark, O'Fallon and the other investors lost their money and grew hateful of Spain.
Another, lesser known, negotiation involved former Continental Army Colonel George Morgan and the Spanish ambassador, Don Diego de Gardoqui. Morgan received title for 15,000,000 acres near modern New Madrid, Missouri for establishing a colony. Ironically, an unscrupulous American, James Wilkinson, discussed later in the document, working in conjunction with Miro, negated this deal.
Both of these land deals involved the establishment of American colonies in Spanish territory, with Americans declaring themselves Spanish subjects. Few Spaniards lived in the area west of the Mississippi River and saw the growing number of American settlers as a threat. However, if these Americans, already disgusted with their government, became Spanish subjects, they now became assets. If they cleared and farmed the land, they provided revenue that Spanish Louisiana desperately needed. Since many of these men previously served in the Revolution, they provided a ready militia for defending their property. This included defending it against their former country, the United States, with little authority west of the Appalachian Mountains.
Internationally, the weak US became a tragic pawn in the continuing superpower struggle between England and France. With no naval forces for protection, American merchant mariners became victims of both nations on the high seas. British and French warships stopped American ships bound for their enemy, confiscating cargo and conscripting sailors into their navies. In the Mediterranean Sea, our ships became the targets of the Barbary Pirates, the ancestors of our enemies today. Helpless, our government paid ransoms for prisoners and tribute for safe passage until the Barbary Wars of the early 19th Century.
Despite all of these problems most influential Americans still “looked inward,” and feared a strong central government more than foreign domination. When the cries of outrage came from the western frontiers regarding Indian depredations, our leaders still more feared a “standing army.” In the world of the Founding Fathers the tyranny of King George III’s central government created their problem. The king further used his “standing army” for oppressing the colonists and infringing on their liberties.
Congress also possessed more recent examples of the problems with a “standing army” during the American Revolution. First came the mutiny of the Pennsylvania Line in January, 1781 for addressing their grievances. Since the beginning of the war, in 1775, the Continental soldiers endured almost insurmountable hardships, as explained previously. The soldiers rarely received pay, and then received the almost worthless “Continental Currency,” which inflation further devalued. This forced severe hardships also on the soldiers’ families, and many lost their homes and farms. The soldiers marched on the then-capital, Philadelphia, for seeking redress for these grievances. Forced into action, Congress addressed their problems with pay and the soldiers rejoined the Army.
A second, though less well known, mutiny occurred with the New Jersey Line shortly thereafter with different results. For “nipping” a growing problem “in the bud,” Washington ordered courts-martial and the execution of the ring leaders. The last such trouble occurred in the final months of the war in the Continental Army camp at Newburgh, New York. Dissatisfied with congressional inaction on their long-overdue pay, many officers urged a march on Philadelphia. Fortunately, Washington defused this perceived threat against civil authority, and squashed the strong possibility of a military dictatorship.
However, Congress realized that it needed some military force for defending the veterans settling on their land grants. The delegates authorized the First United States Regiment, consisting of 700 men drawn from four state militias for a one year period. I read countless sources describing the inadequacy of this force, highlighting congressional incompetence and non-compliance by the states. The unit never achieved its authorized strength, the primitive conditions on the frontier hindered its effectiveness and corrupt officials mismanaged supplies. Scattered in small garrisons throughout the western territories, it never proved a deterrent against the Indians.
No incentives existed for enlisting in this regiment, and it attracted a minority of what we call today “quality people.” Again, confirming state dominance over the central government, this “army” came from a militia levy from four states, a draft. A tradition at the time provided for the paying of substitutes for the men conscripted during these militia levies. Sources reflect that most of these substitutes came from the lowest levels of society, including those escaping the law. From whatever source these men came, at least they served and mostly did their best under difficult circumstances.
Routinely, once the soldiers assembled they must learn the skills needed for performing their duties. For defending the western settlements the small garrisons must reach their destination via river travel. Once at their destination they must often construct their new installations using the primitive tools and resources available. The primitive transportation system often delayed the arrival of the soldiers’ pay and supplies, forcing hardships on the troops. Few amenities existed at these frontier installations and the few settlements provided little entertainment for the troops. Unfortunately, once the soldiers achieved a level of professionalism, they reached the end of their enlistment. With few incentives for reenlistment, the process must begin again, with recruiting and training a new force.
Fortunately many prominent Americans saw that the country needed a different form of government for ensuring its survival. Despite the best intentions and established rules, few people followed these rules or respected our intentions. The Constitutional Convention convened in Philadelphia in May, 1787 with George Washington unanimously elected as its president. As the delegates began the process of forming a “more perfect Union,” the old, traditional “colonial” rivalries influenced the process.
While most Americans possess at least ancillary knowledge of the heated debates among the delegates, few know the conditions. Meeting throughout the hot summer, the delegates kept the windows of their meeting hall closed, preventing the “leaking” of information. We must remember that this occurred before electric-powered ventilation systems or air conditioning. They kept out the “media,” and none of the delegates spoke with “journalists,” again for maintaining secrecy. Modern Americans, often obsessed with media access, do not understand why the delegates kept their deliberations secret.
Most of the delegates felt they possessed one chance for creating this new government and achieving the best possible needed their focus. “Media access” jeopardized this focus and “leaked” information, with potential interruptions, jeopardized their chance for success. We find this incomprehensible today, with politicians running toward television cameras, “leaking” information and disclosing national secrets. Unfortunately a “journalistic elite” exists today, misusing the First Amendment, with many “media moguls” believing themselves the “kingmakers” of favorite politicians.
The delegates sought the best document for satisfying the needs of the most people, making “special interest groups” secondary. Creating a united nation proved more important than prioritizing regional and state desires. These delegates debated, and compromised, on various issues; many of which remain important today. They worried over the threat of dominance by large, well-populated states over smaller, less-populated states. Other issues concerned taxation, the issue that sparked the American Revolution, and import duties, which pitted manufacturing states against agricultural states. Disposition of the mostly unsettled western land, claimed by many states, proved a substantial problem for the delegates. The issue of slavery almost ended the convention and the delegates compromised, achieving the best agreement possible at the time. On September 17, 1787 the delegates adopted the US Constitution and submitted it for approval by the individual states.
Again, merely passing laws and adopting resolutions does not immediately solve the problems, or change people’s attitudes. Ratification of the Constitution required the approval of nine states, (three-fourths) which occurred on June 21, 1788. However, two important large states, New York and Virginia, still debated ratification. Several signers of the Declaration of Independence, and delegates at the Constitutional Convention, urged the defeat of the Constitution. Fiery orator, Patrick Henry, of “Give me liberty, or give me death,” fame worked hard for defeating it in Virginia. Even the most optimistic supporters gave the Constitution, and the nation, only a marginal chance at survival.
|
<urn:uuid:fcd8384e-97df-45dc-baf6-0742150406b6>
| 4.125
|
http://frontierbattles.wordpress.com/2008/09/20/battle-of-fallen-timbers-confirms-american-independence-part-i/?like=1&_wpnonce=24a0599870
|
What Is Tetanus?
Tetanus is a bacterial infection that attacks the nervous system. Tetanus may result in severe muscle spasms, and this can lead to a condition known as lockjaw, which prevents the mouth from opening and closing. Tetanus can be fatal.
Tetanus is caused when the bacterium, Clostridium tetani , enters the body through a break in the skin. The bacterium can come from soil, dust, or manure. It produces a toxin that causes the illness.
In the United States and other countries with tetanus vaccination programs, the condition is rare.
What Is the Tetanus Vaccine?
The tetanus vaccine is an inactivated toxoid (a substance that can create an antitoxin). There are different types of the vaccines to prevent tetanus, including:
Who Should Get Vaccinated and When?
The DTaP vaccine is generally required before starting school. The regular immunization schedule is to give the vaccine at:
- 2 months
- 4 months
- 6 months
- 15-18 months
- 4-6 years
Tdap is routinely recommended for children aged 11-12 years who have completed the DTaP series. Tdap can also be given to:
- Children aged 7-10 years who have not been fully vaccinated
- Children and teens aged 13-18 years who did not get the Tdap when they were 11-12 years old
- Adults under 65 years who have never received Tdap
- Pregnant women after 20 weeks gestation who have not previously received Tdap
- Adults who have not been previously vaccinated and who have contact with babies aged 12 months or younger
- Healthcare providers who have not previously received Tdap
Td is given as a booster shot every 10 years. The vaccine may also be given if you have a severe cut or burn.
If you or your child has not been fully vaccinated against tetanus, talk to the doctor.
What Are the Risks Associated With the Tetanus Vaccine?
Most people tolerate the tetanus-containing vaccines without any trouble. The most common side effects are pain, redness, or swelling at the injection site, mild fever, headache, tiredness, nausea, vomiting, diarrhea , or stomachache.
Rarely, a fever of more than 102ºF, severe gastrointestinal problems, or severe headache may occur. Nervous system problems and severe allergic reactions are extremely rare. Localized allergic reactions (redness and swelling) at the injection site may occur, while anaphylaxis (life-threatening, widespread allergic reaction) is extremely rare.
Acetaminophen (e.g., Tylenol) is sometimes given to reduce pain and fever that may occur after getting a vaccine. In infants, the medicine may weaken the vaccine's effectiveness. However, in children at risk for seizures, a fever-lowering medicine may be important to take. Discuss the risks and benefits of taking acetaminophen with the doctor.
Who Should Not Get Vaccinated?
The vast majority of people should receive their tetanus-containing vaccinations on schedule. However, individuals in whom the risks of vaccination outweigh the benefits include those who:
- Have had a life-threatening allergic reaction to DTP, DTap, DT, Tdap, or Td vaccine
- Have had a severe allergy to any component of the vaccine to be given
- Have gone into a coma or long seizure within seven days after a dose of DTP or DTaP
Talk with your doctor before getting the vaccine if you have:
- Allergy to latex
- Epilepsy or other nervous system problem
- Severe swelling or severe pain after a previous dose of any component of the vaccination to be given
- Guillain-Barre syndrome
Wait until you recover to get the vaccine if you have moderate or severe illness on the day your shot is scheduled.
What Other Ways Can Tetanus Be Prevented Besides Vaccination?
Caring properly for wounds, including promptly cleaning them and seeing a doctor for medical care, can prevent a tetanus infection.
- Reviewer: Lawrence Frisch, MD, MPH
- Review Date: 06/2012 -
- Update Date: 00/61/2012 -
|
<urn:uuid:13959b93-c035-4ff4-abfa-6088611bbe5c>
| 3.53125
|
http://jfkmc.com/your-health/?/187042/DTaP-vaccine-tetanus
|
Kawasaki disease is an illness that involves the skin, mouth, and lymph nodes, and most often affects kids under age 5. The cause is unknown, but if the symptoms are recognized early, kids with Kawasaki disease can fully recover within a few days. Untreated, it can lead to serious complications that can affect the heart.
Kawasaki disease occurs in 19 out of every 100,000 kids in the United States. It is most common among children of Japanese and Korean descent, but can affect all ethnic groups.
Signs and Symptoms
Kawasaki disease can't be prevented, but usually has telltale symptoms and signs that appear in phases.
The first phase, which can last for up to 2 weeks, usually involves a persistent fever higher than 104°F (40°C) that lasts for at least 5 days.
Other symptoms that typically develop include:
severe redness in the eyes
a rash on the stomach, chest, and genitals
red, dry, cracked lips
swollen tongue with a white coating and big red bumps
sore, irritated throat
swollen palms of the hands and soles of the feet with a purple-red color
swollen lymph nodes
During the second phase, which usually begins within 2 weeks of when the fever started, the skin on the hands and feet may begin to peel in large pieces. The child also may experience joint pain, diarrhea, vomiting, or abdominal pain. If your child shows any of these symptoms, call your doctor.
Doctors can manage the symptoms of Kawasaki disease if they catch it early. Symptoms often disappear within just 2 days of the start of treatment. If Kawasaki disease is treated within 10 days of the onset of symptoms, heart problems usually do not develop.
Cases that go untreated can lead to more serious complications, such as vasculitis, an inflammation of the blood vessels. This can be particularly dangerous because it can affect the coronary arteries, which supply blood to the heart.
In addition to the coronary arteries, the heart muscle, lining, valves, and the outer membrane that surrounds the heart can become inflamed. Arrhythmias (changes in the normal pattern of the heartbeat) or abnormal functioning of some heart valves also can occur.
No single test can detect Kawasaki disease, so doctors usually diagnose it by evaluating the symptoms and ruling out other conditions.
Most kids diagnosed with Kawasaki disease will have a fever lasting 5 or more days and at least four of these symptoms:
redness in both eyes
changes around the lips, tongue, or mouth
changes in the fingers and toes, such as swelling, discoloration, or peeling
Treatment should begin as soon as possible, ideally within 10 days of when the fever begins. Usually, a child is treated with intravenous doses of gamma globulin (purified antibodies), an ingredient of blood that helps the body fight infection. The child also might be given a high dose of aspirin to reduce the risk of heart problems.
|
<urn:uuid:75809dd8-de11-4a38-a503-938979a6c1b8>
| 3.671875
|
http://kidshealth.org/PageManager.jsp?dn=Nemours&lic=60&cat_id=141&article_set=22916&ps=104
|
Hepatitis A is an infection of the liver. It can be passed easily through contaminated food or water, or through close contact with an infected person.
Hepatitis A is caused by a specific virus. It may be spread by:
- Drinking water contaminated by raw sewage
- Eating food contaminated by the hepatitis A virus, especially if it has not been properly cooked
- Eating raw or partially cooked shellfish contaminated by raw sewage
- Sexual contact with a partner infected with the hepatitis A virus, especially as oral-anal contact
Hepatitis A is present in stool of people with the infection. They can spread the infection if they do not wash their hands after using the bathroom and touch other objects or food.
Factors that increase your chance of a hepatitis A infection include:
- Having close contact with an infected person—although the virus is generally not spread by casual contact
- Using household items that were used by an infected person and not properly cleaned
- Having oral-anal sexual contact with an infected person
- Traveling to or spending long periods of time in a country where hepatitis A is common or where sanitation is poor
- Working as a childcare worker, changing diapers or toilet training children
- Being in daycare centers
- Being institutionalized
- Injecting drugs—especially if you share needles
- Receiving plasma products, common in conditions like hemophilia
Hepatitis A does not always cause symptoms. Adults are more likely to have them than children.
- Loss of appetite
- Nausea and vomiting
- Abdominal pain or discomfort
- Yellowing of the eyes and skin
- Darker colored urine
- Light or chalky colored stools
The doctor will ask about your symptoms and medical history. A physical exam will be done.
Tests may include:
- Blood test—to look for signs of hepatitis A
- Liver function studies
Hepatitis A usually goes away on its own within two months. There are no lasting effects in most people once the infection passes.
The goals of hepatitis A treatments are to:
- Help you stay as comfortable as possible
- Prevent the infection from being passed to others
- Prevent stress on the liver while it's healing. This is mainly done by avoiding certain substances, such as alcohol and specific medications.
You will be immune to the virus once you are well.
In rare cases, the infection is very severe. A liver transplant may be needed in these cases if the liver is severely damaged.
To decrease your chance of hepatitis A:
- Wash your hands often with soap and water.
- Wash your hands before eating or preparing food.
- Avoid using household utensils that a person with hepatitis A may touch. Make sure all household utensils are carefully cleaned.
- Avoid sexual contact with a person with hepatitis A.
- Avoid injected drug use. If you do, do not share needles.
If you travel to a high risk region, take the following precautions:
- Drink bottled water
- Avoid ice chips
- Wash fruits well
- Eat well-cooked food
Medical treatments that may help prevent infection include:
- Immune (Gamma) Globulin—temporary protection from hepatitis A. It can last about 3-6 months. It must be given before exposure to the virus or within two weeks after exposure.
Hepatitis A Vaccine—highly effective in preventing infection. It provides full protection four weeks after the first injection. A second injection provides long-term protection.
The vaccine should be considered for:
- All children aged 12-23 months
- Children aged 24 months or older who are at high risk and have not been previously vaccinated
- People traveling to areas where hepatitis A is prevalent (The Centers for Disease Control and Prevention's Traveler's Health website shows which areas have a high prevalence of hepatitis A.)
- Men who have sex with men
- Injection drug users
- People who are at risk because of their job, such as lab workers
- People with chronic liver disease
- People with blood-clotting disorders, such as hemophilia
- People who will have close contact with an adopted child from a medium- or high-risk area
- People who desire immunity to hepatitis A
Check with your doctor to see if you should receive the vaccine.
- Reviewer: Brian Randall, MD
- Review Date: 02/2013 -
- Update Date: 02/20/2013 -
|
<urn:uuid:3cb6b1ba-520e-4bcb-b5a0-1322328afb04>
| 3.734375
|
http://medtropolis.com/your-health/?/11800/Hepatitis-A
|
No doubt Native Americans took advantage of the natural bounty of the Suwannee and the neighboring forest. By around 7500 BC the Native American population increased, and people began to settle, at least for a time, along rivers and lakes. They fished, gathered freshwater snails, and hunted deer. Within Andrews on the bluff above the Suwannee are the remains of an ancient hunting and fishing camp. When Spanish explorer Narvarez crossed the Suwannee thousands of years later, his men called it "River of the Deer." Later, Indians escaping to Florida from other parts of the Southeast named it "Suwani," meaning "echo river" in Creek. Sound echoes from the river's limestone bluffs, especially when the water is low.
Postcard, 1936 - Florida Photo Archives
Ferry on the Suwannee ca 1882 - Florida Photo Archives
By the 1830s the tranquil, tree-lined Suwannee became an important navigation route. Steamboats carried lumber to Cedar Key for transport by steamship to Europe and the Northeast. Much of the virgin cypress in the Suwannee floodplain was harvested in the early 1900s. Furrows created by "snaking" huge cypress logs are still visible along the banks of the Suwannee.
In the early part of the 1900s what was later to become Andrews was subject to a wide range of uncontrolled uses, including open range livestock grazing. Range hogs readily adapted to the habitat and are still present on Andrews today, as hunters rediscover each fall.
In 1945 the Andrews family purchased the area. They managed the land for outdoor recreation and were careful to protect natural resources. Limited weekend hunts were held for deer, turkey, and squirrel, and no mining or significant timber harvest occurred. The Andrews family created four five-acre clearings in the upland hardwoods and scattered roadside openings.
In the late 1970s the deer density approached one deer per ten acres, which resulted in severe over-browsing of understory vegetation and a decline in the physical condition of the deer. Doe harvest was initiated in the early 1980s to reduce the population and to achieve a more balanced sex ratio.
The state purchased the land in 1985 through the Save Our Rivers and Conservation and Recreation Lands programs.
|
<urn:uuid:4fe663df-1197-40be-995f-a78704bce4af>
| 3.75
|
http://myfwc.com/viewing/recreation/wmas/lead/andrews/history/
|
What is API?
API is an interface that allows software programs to interact with each other. It defines a set of rules that should be followed by the programs to communicate with each other. APIs generally specify how the routines, data structures, etc. should be defined in order for two applications to communicate. APIs differ in the functionality provided by them. There are general APIs that provide library functionalities of a programming language, such as the Java API. There are also APIs that provide specific functionalities, such as the Google Maps API. There are also language-dependent APIs, which can only be used from a specific programming language. Furthermore, there are language-independent APIs that can be used with several programming languages. APIs need to be implemented very carefully, exposing only the required functionality or data to the outside while keeping the other parts of the application inaccessible. Usage of APIs has become very popular on the internet. It has become very common to expose some functionality and data to the outside world through a Web API. This exposed functionality can then be combined to offer improved functionality to users.
What is SDK?
SDK is a set of tools that can be used to develop software applications targeting a specific platform. SDKs include tools, libraries, documentation and sample code that help a programmer develop an application. Most SDKs can be downloaded from the internet, and many are provided free of charge to encourage programmers to use the SDK's programming language. A widely used example is the Java SDK (JDK), which includes all the libraries, debugging utilities, etc., that make writing programs in Java much easier. SDKs make the life of a software developer easier, since there is no need to hunt for components/tools that are compatible with each other, and all of them are integrated into a single package that is easy to install.
What is the difference between API and SDK?
API is an interface that allows software programs to interact with each other, whereas an SDK is a set of tools that can be used to develop software applications targeting a specific platform. The simplest version of an SDK could be an API that contains some files required to interact with a specific programming language. So an API can be seen as a simple SDK without all the debugging support, etc.
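As a purely illustrative sketch of that difference, the snippet below contrasts calling a hypothetical web API directly with going through an equally hypothetical SDK wrapper. The endpoint URL and the ExampleClient class are made up for the example, and only the Python standard library is used.

```python
# Hypothetical illustration of calling a web API directly vs. via an SDK.
# The URL and ExampleClient are invented for this example; only the
# standard library (urllib, json) is used.
import json
import urllib.request


def get_user_via_api(user_id: int) -> dict:
    """Raw API usage: build the HTTP request and parse the response yourself."""
    url = f"https://api.example.com/v1/users/{user_id}"   # hypothetical endpoint
    req = urllib.request.Request(url, headers={"Accept": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))


class ExampleClient:
    """Hypothetical SDK: wraps the same API behind a documented, typed method."""

    def __init__(self, api_key: str):
        self.api_key = api_key   # a real SDK would also handle auth, retries, errors

    def get_user(self, user_id: int) -> dict:
        # Internally this performs the same HTTP call as get_user_via_api().
        return get_user_via_api(user_id)


# Usage (would only work against a real service):
# user = ExampleClient(api_key="...").get_user(42)
```

In both cases the same API is being exercised; the SDK simply packages the call, its authentication, and its error handling into a reusable library.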
|
<urn:uuid:7cf35450-4a45-4a04-92c2-84c70317cbd0>
| 3.40625
|
http://programmers.stackexchange.com/questions/101873/whats-the-difference-between-an-api-and-an-sdk
|
Light glowing from a "super-Earth" planet beyond our solar system has been detected by Nasa’s Spitzer Telescope.
Until now, scientists have never been able to detect infrared light emanating from 55 Cancri E, a super-hot extrasolar planet twice the size and eight times the mass of our own.
55 Cancri E is one of five exoplanets orbiting a bright star named 55 Cancri in a solar system lying in the constellation of Cancer (The Crab).
Previously, Spitzer and other telescopes were able to study the planet by observing how the light from 55 Cancri changed as the planet passed in front of the star.
In the new study, Spitzer instead measured how much infrared light came from the planet itself – revealing some of the planet’s major features.
At 41 light years from Earth, the giant planet is considered uninhabitable.
The giant planet is tidally locked, so one side always faces the star. The telescope found that the sun-facing side is extremely hot, indicating the planet probably does not have a substantial atmosphere to carry the sun's heat to the unlit side.
On its sun-facing side, the surface has a temperature of 1,727 degrees Celsius (3,140 degrees Fahrenheit), hot enough to melt silver or aluminium.
The new findings are consistent with a previous theory that 55 Cancri E is a water world: A rocky core surrounded by a layer of water in a "supercritical" state where it is both liquid and gas, and topped by a blanket of steam.
Bill Danchi, Spitzer programme scientist at NASA, said: “Spitzer has amazed us yet again. The spacecraft is pioneering the study of atmospheres of distant planets and paving the way for NASA's upcoming James Webb Space Telescope to apply a similar technique on potentially habitable planets.”
Michael Werner, who also works on the Spitzer project, added: “When we conceived of Spitzer more than 40 years ago, exoplanets hadn't even been discovered. Because Spitzer was built very well, it's been able to adapt to this new field and make historic advances such as this.”
The planet was first discovered in 2004 and the new findings are published in the current issue of Astrophysical Journal Letters.
|
<urn:uuid:ee2e220e-a6e4-4be3-b81d-28773e170e84>
| 3.953125
|
http://uk.news.yahoo.com/light-detected-from-super-earth-planet-55-cancri-e-by-nasa-spitzer-telescope.html?.tsrc=yahoo
|
Alternate Names : Sexual Abuse, Sexual Assault
Rape is the physical act of attacking another person and forcing that person to have sex. It is the illegal sexual penetration of any body opening. Rape can happen to men, women, and children. It is often violent, although sometimes the threat is only implied. Rape can also occur without the victim knowing about it. This can happen if the victim is unconscious, intoxicated, or high on drugs.
Male rapists usually have an extreme hatred for women. They may feel inadequate and have problems with sexual performance. At least half the time, the rapist knows the victim and works or lives near the victim. Most rapes are planned ahead of time by the attacker. More than half of sexual assaults involve a weapon.
What is the information for this topic?
Following are some safety measures to help prevent rape when you are at home or in your car:
Don't let a stranger into the house without proper identification.
Don't list a first name on a mailbox or in a phone book.
Have the key ready before reaching the door of a car or house.
Keep a light on at all entrances.
Keep doors and windows locked.
Look in the car before entering.
Make arrangements with a neighbor for assistance in emergency situations.
Set the house lights to go on and off with a timer.
Other safety measures you can take to help prevent rape are as follows:
Appear strong and confident.
Avoid isolated and secluded areas.
Don't walk or jog alone at night.
Look for unusual behavior in those around you.
Scream loudly if attacked.
Sit in lighted areas and near other people such as the driver when using public transportation.
When someone has been raped, the rape should immediately be reported to the police. The victim should be taken to a medical facility and examined. The person should not bathe before this examination, as evidence might be destroyed. Additionally, clothing or samples of clothing might be collected by the police as evidence.
During this exam, a healthcare provider will take the following steps:
check for bruises, bite marks, and other trauma
remove pubic hair samples
take swabs from the anus and mouth
take swabs from the vaginal area if the victim is a female
test for pregnancy if the victim is a female, and provide emergency contraception as needed
test for sexually transmitted diseases and provide treatment as needed
The provider will treat all cuts and wounds. But often the emotional wounds are more severe than the physical wounds. It is very important that the victim get counseling and therapy. A local rape crisis center can help the victim through this trauma.
Recovery from rape varies from person to person. Usually the physical wounds heal quickly, but mental wounds can last for many years after the attack. A rape victim may suffer from posttraumatic stress disorder. This usually has an acute phase, lasting a few days to a few weeks, followed by a long-term process of recovery. Many rape victims suffer from the following:
If the person doesn't receive effective treatment, he or she may experience these difficulties:
inability to establish long-term relationships
problems with sex
Rape victims can go on to lead normal lives. But it's very important to their mental health that they get proper counseling. Healthcare providers can help the victim work through many of the problems that result from rape. They help monitor the victim's healing, both physically and mentally.
|
<urn:uuid:e6437144-d5c4-4976-ae15-419bda125a5c>
| 3.546875
|
http://www.3-rx.com/rape/default.php
|
by Jos Van der Poel
Down’s syndrome is a genetic disorder (instead of two, affected persons have three copies of chromosome 21) that, besides a number of physical characteristics, leads to intellectual impairment.
It occurs in one out of every 1,000 births. Life expectancy of people with Down’s syndrome has increased substantially over the last century: about 50% of them will reach the age of 60. Because of trisomy 21, people with Down’s syndrome show an overexpression of the amyloid precursor protein. Amyloid is the main ingredient of the plaques found in the brains of people with Alzheimer’s disease.
Symptoms and course
Not all persons with Down’s syndrome show evidence of cognitive deterioration or other clinical evidence of dementia even after extended periods of observation.
Clinical symptoms at first are increasing depression, indifference and a decline in social communication. Later symptoms are: seizures in previously unaffected persons, changes in personality, loss of memory and general functions, long periods of inactivity or apathy, hyperactive reflexes, loss of activity of daily skills, visual retention deficits, loss of speech, disorientation, increase in stereotyped behaviour and abnormal neurological signs.
This is especially hard for brothers and sisters who are confronted with the responsibility for the care of their sibling with Down’s syndrome once their parents have died. It is distressing when this person develops Alzheimer’s disease at a relatively young age. Not only are they losing a person they (often) love very much, but the burden of care also grows heavier.
Causes and risk factors
In Down’s syndrome the development of Alzheimer’s disease seems to be linked directly to the overexpression of APP. The ApoE2 gene seems to have a protective effect in Down’s syndrome too, but whether ApoE4 increases the risk of Alzheimer’s disease in Down’s syndrome is not yet clear. Men and women seem to be equally susceptible.
Down’s syndrome originates in an extra copy of chromosome 21.
At least 36 % of the people with Down’s syndrome aged 50 – 59 years and 65 % aged 60 and older are affected by dementia. Brain changes associated with Alzheimer’s disease are found in 96 % of all adults with Down’s syndrome.
Diagnosing dementia in people with Down’s syndrome is very difficult, as the dementia symptoms are often masked by the existing intellectual impairment. Several screening and evaluation procedures have been developed. These evaluations must be performed at select intervals, thus comparing with the person’s previous score. Definitive diagnosis is only available after death.
Care and treatment
Because of limited personnel in small-scale living settings for people with an intellectual impairment, persons with dementia often have to move (back) to a residential institution. Research has shown that donepezil (Aricept®) has a positive though not significant effect.
Ongoing research/Clinical trials
Erasmus University Rotterdam (Evenhuis HM)
- Beer EFG de; De effecten van donepezil bij Downsyndroom; Down + Up 2003; 62
- Lott IT, Head E; Down syndrome and Alzheimer’s disease: a link between development and aging; Ment Ret Dev Dis 2001; 7
- Visser FE; Down en Alzheimer in perspectief; dissertation 1996
- Down’s Syndrome and Alzheimer’s Disease; Briefing North West Training & Development Team (1995)
- Dementia an Intellectual Disabilities; Fact sheet Alzheimer’s Disease International (s.a.)
Last Updated: Friday, 9 October 2009
|
<urn:uuid:df06c0b9-a581-421c-8651-92c6b918c8b9>
| 3.71875
|
http://www.alzheimer-europe.org/FR%C3%84%C2%BC%C4%86%C2%A6%C4%80%C2%BD%C3%84%C2%BC%C4%86%C2%A6%C4%80%C2%BD%20%C3%84%C2%BC%C4%86%C2%A6%C4%80%C2%BD%C3%84%C2%BC%C4%86%C2%A6%C4%80%C2%BD%C3%84%20%C4%80%C2%B3/Dementia/Other-forms-of-dementia/Neuro-Degenerative-Diseases/Down-syndrome
|
Every generation has to reinvent the practice of computer programming. In the 1950s the key innovations were programming languages such as Fortran and Lisp. The 1960s and '70s saw a crusade to root out "spaghetti code" and replace it with "structured programming." Since the 1980s software development has been dominated by a methodology known as object-oriented programming, or OOP. Now there are signs that OOP may be running out of oomph, and discontented programmers are once again casting about for the next big idea. It's time to look at what might await us in the post-OOP era (apart from an unfortunate acronym).
The Tar Pit
The architects of the earliest computer systems gave little thought to software. (The very word was still a decade in the future.) Building the machine itself was the serious intellectual challenge; converting mathematical formulas into program statements looked like a routine clerical task. The awful truth came out soon enough. Maurice V. Wilkes, who wrote what may have been the first working computer program, had his personal epiphany in 1949, when "the realization came over me with full force that a good part of the remainder of my life was going to be spent in finding errors in my own programs." Half a century later, we're still debugging.
The very first programs were written in pure binary notation: Both data and instructions had to be encoded in long, featureless strings of 1s and 0s. Moreover, it was up to the programmer to keep track of where everything was stored in the machine's memory. Before you could call a subroutine, you had to calculate its address.
The technology that lifted these burdens from the programmer was assembly language, in which raw binary codes were replaced by symbols such as load, store, add, sub. The symbols were translated into binary by a program called an assembler, which also calculated addresses. This was the first of many instances in which the computer was recruited to help with its own programming.
Assembly language was a crucial early advance, but still the programmer had to keep in mind all the minutiae in the instruction set of a specific computer. Evaluating a short mathematical expression such as x² + y² might require dozens of assembly-language instructions. Higher-level languages freed the programmer to think in terms of variables and equations rather than registers and addresses. In Fortran, for example, x² + y² would be written simply as X**2+Y**2. Expressions of this kind are translated into binary form by a program called a compiler.
With Fortran and the languages that followed, programmers finally had the tools they needed to get into really serious trouble. By the 1960s large software projects were notorious for being late, overbudget and buggy; soon came the appalling news that the cost of software was overtaking that of hardware. Frederick P. Brooks, Jr., who managed the OS/360 software program at IBM, called large-system programming a "tar pit" and remarked, "Everyone seems to have been surprised by the stickiness of the problem."
One response to this crisis was structured programming, a reform movement whose manifesto was Edsger W. Dijkstra's brief letter to the editor titled "Go to statement considered harmful." Structured programs were to be built out of subunits that have a single entrance point and a single exit (eschewing the goto command, which allows jumps into or out of the middle of a routine). Three such constructs were recommended: sequencing (do A, then B, then C), alternation (either do A or do B) and iteration (repeat A until some condition is satisfied). Corrado Böhm and Giuseppe Jacopini proved that these three idioms are sufficient to express essentially all programs.
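Those three constructs map directly onto the control syntax of modern languages. Here is a minimal sketch in Java (the summing task itself is invented purely for illustration), with no goto anywhere:

// Sequencing, alternation and iteration: the three structured-programming
// constructs, expressed in ordinary Java control flow.
public class Structured {
    public static void main(String[] args) {
        int[] values = {3, -1, 4, -1, 5};
        int sumOfPositives = 0;            // sequencing: do A, then B
        for (int v : values) {             // iteration: repeat until done
            if (v > 0) {                   // alternation: either/or
                sumOfPositives += v;
            }
        }
        System.out.println("Sum of positives: " + sumOfPositives);
    }
}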
Structured programming came packaged with a number of related principles and imperatives. Top-down design and stepwise refinement urged the programmer to set forth the broad outlines of a procedure first and only later fill in the details. Modularity called for self-contained units with simple interfaces between them. Encapsulation, or data hiding, required that the internal workings of a module be kept private, so that later changes to the module would not affect other areas of the program. All of these ideas have proved their worth and remain a part of software practice today. But they did not rescue programmers from the tar pit.
Nouns and Verbs
The true history of software development is not a straight line but a meandering river with dozens of branches. Some of the tributaries—functional programming, declarative programming, methods based on formal proofs of correctness—are no less interesting than the mainstream, but here I have room to explore only one channel: object-oriented programming.
Consider a program for manipulating simple geometric figures. In a non-OOP environment, you might begin by writing a series of procedures with names such as rotate, scale, reflect, calculate-area, calculate-perimeter. Each of these verblike procedures could be applied to triangles, squares, circles and many other shapes; the figures themselves are nounlike entities embodied in data structures separate from the procedures. For example, a triangle might be represented by an array of three vertices, where each vertex is a pair of x and y coordinates. Applying the rotate procedure to this data structure would alter the coordinates and thereby turn the triangle.
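A hedged sketch of that procedural arrangement, written in Java for concreteness (the representation and names simply follow the description above):

// Procedural style: the data structure and the procedures that act on it
// are separate, and every procedure must know the representation.
public class GeometryProcedures {
    // A triangle is just an array of three {x, y} vertices.
    static double[][] makeTriangle() {
        return new double[][] {{0, 0}, {1, 0}, {0, 1}};
    }

    // Rotate the vertex array in place about the origin.
    static void rotate(double[][] vertices, double radians) {
        double c = Math.cos(radians), s = Math.sin(radians);
        for (double[] v : vertices) {
            double x = v[0], y = v[1];
            v[0] = x * c - y * s;
            v[1] = x * s + y * c;
        }
    }
    // scale, reflect, calculateArea and calculatePerimeter would all need
    // the same intimate knowledge of the vertex-array representation.
}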
What's the matter with this scheme? One likely source of trouble is that the procedures and the data structures are separate but interdependent. If you change your mind about the implementation of triangles—perhaps using a linked list of points instead of an array—you must remember to change all the procedures that might ever be applied to a triangle. Also, choosing different representations for some of the figures becomes awkward. If you describe a circle in terms of a center and a radius rather than a set of vertices, all the procedures have to treat circles as a special case. Yet another pitfall is that the data structures are public property, and the procedures that share them may not always play nicely together. A figure altered by one procedure might no longer be valid input for another.
Object-oriented programming addresses these issues by packing both data and procedures—both nouns and verbs—into a single object. An object named triangle would have inside it some data structure representing a three-sided shape, but it would also include the procedures (called methods in this context) for acting on the data. To rotate a triangle, you send a message to the triangle object, telling it to rotate itself. Sending and receiving messages is the only way objects communicate with one another; outsiders are not allowed direct access to the data. Because only the object's own methods know about the internal data structures, it's easier to keep them in sync.
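For contrast, a minimal object-oriented sketch of the same operation (illustrative Java, not code from any particular system): the vertex data is private, and rotation happens by asking the triangle to rotate itself.

// Object-oriented style: the representation is hidden inside the object,
// and "rotate" is a message sent to the triangle.
public class Triangle {
    private final double[][] vertices = {{0, 0}, {1, 0}, {0, 1}};

    public void rotate(double radians) {
        double c = Math.cos(radians), s = Math.sin(radians);
        for (double[] v : vertices) {
            double x = v[0], y = v[1];
            v[0] = x * c - y * s;
            v[1] = x * s + y * c;
        }
    }
}

Because only Triangle's own methods touch the vertex array, the representation can later change (say, to a linked list of points) without disturbing any other part of the program.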
This scheme would not have much appeal if every time you wanted to create a triangle, you had to write out all the necessary data structures and methods—but that's not how it works. You define the class triangle just once; individual triangles are created as instances of the class. A mechanism called inheritance takes this idea a step further. You might define a more-general class polygon, which would have triangle as a subclass, along with other subclasses such as quadrilateral, pentagon and hexagon. Some methods would be common to all polygons; one example is the calculation of perimeter, which can be done by adding the lengths of the sides, no matter how many sides there are. If you define the method calculate-perimeter in the class polygon, all the subclasses inherit this code.
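A separate, self-contained sketch of that inheritance arrangement, again in Java (class names follow the article's example; the perimeter is simply the sum of the edge lengths):

// calculatePerimeter is written once in Polygon and inherited by every
// subclass, whatever its number of sides.
abstract class Polygon {
    protected double[][] vertices;   // each vertex is an {x, y} pair

    public double calculatePerimeter() {
        double total = 0;
        int n = vertices.length;
        for (int i = 0; i < n; i++) {
            double[] a = vertices[i];
            double[] b = vertices[(i + 1) % n];
            total += Math.hypot(b[0] - a[0], b[1] - a[1]);
        }
        return total;
    }
}

class Triangle extends Polygon {
    Triangle(double[][] threeVertices) { vertices = threeVertices; }
}

class Hexagon extends Polygon {
    Hexagon(double[][] sixVertices) { vertices = sixVertices; }
}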
Object-oriented programming traces its heritage back to simula, a programming language devised in the 1960s by Ole-Johan Dahl and Kristen Nygaard. Some object-oriented ideas were also anticipated by David L. Parnas. And the Sketchpad system of Ivan Sutherland was yet another source of inspiration. The various threads came together when Alan Kay and his colleagues created the Smalltalk language at the Xerox Palo Alto Research Center in the 1970s. Within a decade several more object-oriented languages were in use, most notably Bjarne Stroustrup's C++, and later Java. Object-oriented features have also been retrofitted onto older languages, such as Lisp.
As OOP has transformed the way programs are written, there has also been a major shift in the nature of the programs themselves. In the software-engineering literature of the 1960s and '70s, example programs tend to have a sausage-grinder structure: Inputs enter at one end, and outputs emerge at the other. An example is a compiler, which transforms source code into machine code. Programs written in this style have not disappeared, but they are no longer the center of attention. The emphasis now is on interactive software with a graphical user interface. Programming manuals for object-oriented languages are all about windows and menus and mouse clicks. In other words, OOP is not just a different solution; it also solves a different problem.
Aspects and Objects
Most of the post-OOP initiatives do not aim to supplant object-oriented programming; they seek to refine or improve or reinvigorate it. A case in point is aspect-oriented programming, or AOP.
The classic challenge in writing object-oriented programs is finding the right decomposition into classes and objects. Returning to the example of a program for playing with geometric figures, a typical instance of the class pentagon might be a regular, convex figure; but a non-convex pentagon is also a pentagon, and so is a five-pointed star. To accommodate the differences between these figures, you could introduce subclasses of pentagon—perhaps named convex-pentagon, non-convex-pentagon and five-pointed-star. But then you would have to do the same thing for hexagons, heptagons and so forth, which soon becomes tedious. Moreover, this classification would give you no way to write methods that apply, say, to all convex polygons but to no others. An alternative decomposition would divide the polygon class into convex-polygon and non-convex-polygon, then subdivide the latter class into simple-polygon and self-intersecting-polygon. With this choice, however, you lose the ability to address all five-sided figures as a group.
One solution to this quandary is multiple inheritance—allowing a class to have more than one parent. Thus a five-pointed star could be a subclass both of pentagon and of self-intersecting-polygon and could inherit methods from both. The wisdom of this arrangement is a matter of eternal controversy in the OOP community.
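Java, for one, comes down on the restrictive side: a class may extend only one parent, but it may implement any number of interfaces, which is enough to give the star both classifications. A hedged sketch with invented type names:

// Two independent classifications expressed as interfaces.
interface Pentagon { int sideCount(); }
interface SelfIntersecting { }   // marker: the figure's edges cross

// A five-pointed star can be addressed as a Pentagon and as a
// SelfIntersecting figure without multiple implementation inheritance.
class FivePointedStar implements Pentagon, SelfIntersecting {
    @Override
    public int sideCount() { return 5; }
}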
Aspect-oriented programming takes another approach to dealing with "crosscutting" issues that cannot easily be arranged in a treelike hierarchy. An example in the geometry program might be the need to update a display window every time a figure is moved or modified. The straightforward OOP solution is to have each method that changes the appearance of a figure (such as rotate or scale) send a message to a display-manager object, telling the display what needs to be redrawn. But hundreds of methods could send such messages. Even apart from the boredom of writing the same code over and over, there is the worry that the interface to the display manager might change someday, requiring many methods to be revised. The AOP answer is to isolate the display-update "aspect" of the program in a module of its own. The programmer writes one instance of the code that calls for a display update, along with a specification of all the occasions on which that code is to be invoked—for example, whenever a rotate method is executed. Then even though the text of the rotate method does not mention display updating, the appropriate message is sent at the appropriate time.
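A rough sketch of what such an aspect might look like, using the AspectJ-style syntax discussed in the next paragraph; the Figure and Display classes are invented for the example and the pointcut is deliberately simplified:

// The display-update concern lives in one place. No rotate or scale
// method mentions the display, yet every execution of one triggers
// exactly one update message.
public aspect DisplayUpdating {
    pointcut figureChanged():
        execution(void Figure.rotate(..)) ||
        execution(void Figure.scale(..));

    after() returning: figureChanged() {
        Display.update();
    }
}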
An AOP system called AspectJ, developed by Gregor Kiczales and a group of colleagues at Xerox PARC, works as an extension of the Java language. AOP is particularly attractive for implementing ubiquitous tasks such as error-handling, the logging of events, and synchronizing multiple threads of execution, which might otherwise be scattered throughout a program. But there are dissenting views. Jörg Kienzle and Rachid Guerraoui report on an attempt to build a transaction-processing system with AspectJ, where the key requirement is that transactions be executed completely or not at all (so that the system cannot debit one account without crediting another). They found it difficult to cleanly isolate this property as an aspect.
Surely the most obvious place to look for help with programming a computer is the computer itself. If Fortran can be compiled into machine code, then why not transform some higher-level description or specification directly into a ready-to-run program? This is an old dream. It lives on under names such as generative programming, metaprogramming and intentional programming.
In general, fully automatic programming remains beyond our reach, but there is one area where the idea has solid theoretical underpinnings as well as a record of practical success: in the building of compilers. Instead of hand-crafting a compiler for a specific programming language, the common practice is to write a grammar for the language and then generate the compiler with a program called a compiler compiler. (The best-known of these programs is Yacc, which stands for "yet another compiler compiler.")
Generative programming would adapt this model to other domains. For example, a program generator for the kind of software that controls printers and other peripheral devices would accept a grammar-like description of the device and produce an appropriately specialized program. Another kind of generator might assemble "protocol stacks" for computer networking.
Krzysztof Czarnecki and Ulrich W. Eisenecker compare a generative-programming system to a factory for manufacturing automobiles. Building the factory is more work than building a single car by hand, but the factory can produce thousands of cars. Moreover, if the factory is designed well, it can turn out many different models just by changing the specifications. Likewise generative programming would create families of programs tailored to diverse circumstances but all assembled from similar components.
The Quality Without a Name
Another new programming methodology draws its inspiration from an unexpected quarter. Although the term "computer architecture" goes back to the dawn of the industry, it was nonetheless a surprise when a band of software designers became disciples of a bricks-and-steel architect, Christopher Alexander. Even Alexander was surprised.
Alexander is known for the enigmatic thesis that well-designed buildings and towns must have "the quality without a name." He explains: "The fact that this quality cannot be named does not mean that it is vague or imprecise. It is impossible to name because it is unerringly precise." Does that answer your question?
Even if the quality had a name, it's not clear how one would turn it into a prescription for building good houses—or good software. Fortunately, Alexander is more explicit elsewhere in his writings. He urges architects to exploit recurrent patterns observed in both problems and solutions. For the pattern of events labeled "watching the world go by," a good solution is probably going to look something like a front porch. Taken over into the world of software, this approach leads to a catalogue of design patterns for solving specific, recurring problems in object-oriented programming. For example, a pattern named Bridge deals with the problem of setting up communications between two objects that may not know of each other's existence at the time a program is written. A pattern named Composite handles the situation where a single object and a collection of multiple objects have to be given the same status, as is often the case with files and directories of files.
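A minimal Java sketch of the Composite idea, using the file-and-directory example; the class names are invented and error handling is omitted:

import java.util.ArrayList;
import java.util.List;

// Composite pattern: a single file and a directory of entries are given
// the same status by sharing one interface.
interface FileSystemEntry {
    long sizeInBytes();
}

class DataFile implements FileSystemEntry {
    private final long size;
    DataFile(long size) { this.size = size; }
    @Override public long sizeInBytes() { return size; }
}

class Directory implements FileSystemEntry {
    private final List<FileSystemEntry> children = new ArrayList<>();
    void add(FileSystemEntry entry) { children.add(entry); }
    @Override public long sizeInBytes() {
        long total = 0;
        for (FileSystemEntry e : children) total += e.sizeInBytes();
        return total;
    }
}

Client code that asks for a size never needs to know whether it is holding one file or a whole tree of them.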
Over the past 10 years a sizable community has grown up around the pattern idea. There are dozens of books, web sites and an annual conference called Pattern Languages of Programming, or PLoP. Compared with earlier reform movements in computing, the pattern community sounds a little unfocused and New Age. Whereas structured programming was founded on a proof that three specific structures suffice to express all algorithms, there is nothing resembling such a proof to justify the selection of ideas included in catalogues of design patterns. As a matter of fact, the whole idea of proofs seems to be out of favor in the pattern community.
Software Jeremiahs usually preach that programming should be an engineering profession, guided by standards analogous to building codes, or else it should be a branch of applied mathematics, with programs constructed like mathematical proofs. The pattern movement rejects both of these ideals and suggests instead that programmers are like carpenters or stonemasons—stewards of a body of knowledge gained by experience and passed along by tradition and apprenticeship. This is a movement of practitioners, not academics. Pattern advocates express particular contempt for the notion that programming might someday be taken over entirely by the computer. Automating a craft, they argue, is not only infeasible but also undesirable.
The rhetoric of the pattern movement may sound like the ranting of a fringe group, but pattern methods have been adopted in several large organizations producing large—and successful—software systems. (When you make a phone call, you may well be relying on the work of programmers seeking out the quality without a name.) Moreover, beyond the rhetoric, the writings of the software-patterns community can be quite down-to-earth and pragmatic.
If the pattern community is on the radical fringe, how far out is extreme programming (or, as it is sometimes spelled, eXtreme programming)? For the leaders of this movement, the issue is not so much the nature of the software itself but the way programming projects are organized and managed. They want to peel away layers of bureaucracy and jettison most of the stages of analysis, planning, testing, review and documentation that slow down software development. Just let programmers program! The recommended protocol is to work in pairs, two programmers huddling over a single keyboard, checking their own work as they go along. Is it a fad? A cult? Although the name may evoke a culture of body piercing and bungee jumping, extreme programming seems to have gained a foothold among the pinstriped suits. The first major project completed under the method was a payroll system for a transnational automobile manufacturer.
Ask Me About My OOP Diet
Frederick Brooks, who wrote of the tar pit in the 1960s, followed up in 1987 with an essay on the futility of seeking a "silver bullet," a single magical remedy for all of software's ills. Techniques such as object-oriented programming might alleviate "accidental difficulties" of software development, he said, but the essential complexity cannot be wished away. This pronouncement that the disease is incurable made everyone feel better. But it deterred no one from proposing remedies.
After several weeks' immersion in the how-to-program literature, I am reminded of the shelves upon shelves of diet books in the self-help department of my local bookstore. In saying this I mean no disrespect to either genre. Most diet books, somewhere deep inside, offer sound advice: Eat less, exercise more. Most programming manuals also give wise counsel: Modularize, encapsulate. But surveying the hundreds of titles in both categories leaves me with a nagging doubt: The very multiplicity of answers undermines them all. Isn't it likely that we'd all be thinner, and we'd all have better software, if there were just one true diet, and one true programming methodology?
Maybe that day will come. In the meantime, I'm going on a spaghetti-code diet.
© Brian Hayes
|
<urn:uuid:810d86e4-c8e4-4ea2-a9e6-937895141aaa>
| 3.359375
|
http://www.americanscientist.org/issues/id.3315,y.0,no.,content.true,page.3,css.print/issue.aspx
|
What is skin testing for allergies?
The most common way to test for allergies is on the skin, usually the forearm or the back. In a typical skin test, a doctor or nurse will place a tiny bit of an allergen (such as pollen or food) on the skin, then make a small scratch or prick on the skin.
The allergist may repeat this, testing for several allergens in one visit. This can be a little uncomfortable, but not painful.
If your child reacts to one of the allergens, the skin will swell a little in that area. The doctor will be able to see if a reaction occurs within about 15 minutes. The swelling usually goes down within about 30 minutes to a few hours. Other types of skin testing include injecting allergens into the skin or taping allergens to the skin for 48 hours.
With a skin test, an allergist can check for these kinds of allergies:
- environmental, such as mold, pet dander, or tree pollen
- food, such as peanuts or eggs
- medications, such as penicillin
Some medications (such as antihistamines) can interfere with skin testing, so check with the doctor to see if your child's medications need to be stopped before the test is done. While skin testing is useful and helpful, sometimes additional tests (like blood tests or food challenges) also must be done to see if a child is truly allergic to something.
While skin tests are usually well tolerated, in rare instances they can cause a more serious allergic reaction. This is why skin testing must always be done in an allergist's office, where the doctor is prepared to handle a reaction.
Reviewed by: Larissa Hirsch, MD
Date reviewed: May 2012
|
<urn:uuid:b4e1e560-c929-419e-ad52-5e1396468d93>
| 3.421875
|
http://www.childrenscolorado.org/wellness/info/parents/89105.aspx
|
Image: Walter Tape
Colourful light pillars often appear in winter when snow or ice crystals reflect light from a strong source like the sun or moon. Aided by extreme cold, light pillars appear when light bounces off the surface of flat ice crystals floating relatively close to the ground. The pillars look like feathers of light that extend vertically either above or below the light source, or both.
Diagrams showing the formation of light pillars from street lamps (left) and the reflection of light rays from plate ice crystal surfaces (right):
Images: Keith C. Heidorn
Light pillars also form from strong artificial light sources like street lamps, car headlights or the strong light sources of an ice-skating rink as in the picture above of Fairbanks, Alaska. Though they are local phenomena, light pillars can look distant like an aurora. The closer an observer is to the source of the light pillar, the larger it seems.
National Geographic has more pictures of recent light pillars in Idaho, California, Belgium, Latvia and Canada. You can also view another Environmental Graffiti article on more incredible light phenomena here.
|
<urn:uuid:06327a08-ddb4-419b-908f-4c4688c69b99>
| 3.875
|
http://www.environmentalgraffiti.com/sciencetech/light-pillars/8084
|
The next day we learned that frogs come from eggs. Frog eggs look a little different from the eggs of other animals and it took a little convincing to persuade them that they were actually eggs. We talked about the life cycle of frogs. After that we made frogs, complete with long, curly tongues and wrote about them. Here is how they turned out...
I just love these little guys!
After frogs & turtles, we talked about snakes!
I am not a fan of snakes, but my students ALWAYS love to learn about them! This year my class is heavy on boys (12 out of 17) and this unit keeps them so engaged! I LOVE teaching through themes. I know that the theme keeps them so engaged that I can slip in writing, reading & math skills without them even realizing it! :) I do not have pictures of our snakes because I couldn't get them to turn out. We colored 2 sides of a paper plate and then cut them in a swirl so that they looked like curly snakes. Then, I hung them from the ceiling. They would twirl when the air kicked in and the kids loved it!
All in all, this was a great unit. My kids stayed engaged and excited the entire week. I was able to teach them valuable math, reading, writing, & science skills...what more could I have asked for? :)
|
<urn:uuid:2de360b8-1c49-4cf8-aaef-680f2b45bb4d>
| 3.328125
|
http://www.jenskinderkids.blogspot.com/2011_05_01_archive.html
|
Creatine phosphokinase test
Creatine phosphokinase (CPK) is an enzyme found mainly in the heart, brain, and skeletal muscle. This article discusses the test to measure the amount of CPK in the blood.
CPK test; Creatine kinase; CK test
How the test is performed
A blood sample is needed. This may be taken from a vein. The procedure is called a venipuncture.
This test may be repeated over 2 or 3 days if you are a patient in the hospital.
How to prepare for the test
Usually, no special preparation is necessary.
Tell your doctor about any medications you are taking. Drugs that can increase CPK measurements include amphotericin B, certain anesthetics, statins, fibrates, dexamethasone, alcohol, and cocaine.
How the test will feel
When the needle is inserted to draw blood, you may feel moderate pain, or only a prick or stinging sensation. Afterward, there may be some throbbing.
Why the test is performed
When the total CPK level is very high, it usually means there has been injury or stress to muscle tissue, the heart, or the brain.
Muscle tissue injury is most likely. When a muscle is damaged, CPK leaks into the bloodstream. Determining which specific form of CPK is high helps doctors determine which tissue has been damaged.
This test may be used to:
- Diagnose heart attack
- Evaluate cause of chest pain
- Determine if or how badly a muscle is damaged
- Detect dermatomyositis, polymyositis, and other muscle diseases
- Tell the difference between malignant hyperthermia and postoperative infection
The pattern and timing of a rise or fall in CPK levels can be diagnostically significant, particularly if a heart attack is suspected.
Except in unusual cases, other tests are used to diagnose a heart attack.
Total CPK normal values:
- 10 - 120 micrograms per liter (mcg/L)
Normal value ranges may vary slightly among different laboratories. Some labs use different measurements or test different samples. Talk to your doctor about the meaning of your specific test results.
What abnormal results mean
High CPK levels may be seen in patients who have:
- Brain injury or stroke
- Delirium tremens
- Dermatomyositis or polymyositis
- Electric shock
- Heart attack
- Inflammation of the heart muscle (myocarditis)
- Lung tissue death (pulmonary infarction)
- Muscular dystrophies
Additional conditions may give positive test results:
What the risks are
There is very little risk involved with having your blood taken. Veins and arteries vary in size from one patient to another and from one side of the body to the other. Taking blood from some people may be more difficult than from others.
Other risks associated with having blood drawn are slight but may include:
- Excessive bleeding
- Fainting or feeling light-headed
- Hematoma (blood accumulating under the skin)
- Infection (a slight risk any time the skin is broken)
Other tests should be done to determine the exact location of muscle damage.
Factors that may affect test results include cardiac catheterization, intramuscular injections, trauma to muscles, recent surgery, and heavy exercise.
Anderson JL. ST segment elevation acute myocardial infarction and complications of myocardial infarction. In: Goldman L, Schafer AI, eds. Cecil Medicine. 24th ed. Philadelphia, Pa: Saunders Elsevier; 2011:chap 73.
Chinnery PF. Muscle diseases. In: Goldman L, Schafer AI, eds. Cecil Medicine. 24th ed. Philadelphia, Pa: Saunders Elsevier; 2011:chap 429.
Reviewed By: David C. Dugdale, III, MD, Professor of Medicine, Division of General Medicine, Department of Medicine, University of Washington School of Medicine. Also reviewed by A.D.A.M. Health Solutions, Ebix, Inc., Editorial Team: David Zieve, MD, MHA, David R. Eltz, Stephanie Slon, and Nissi Wang.
|
<urn:uuid:bc6d499a-ae1b-490d-a591-0860650d7e5a>
| 3.34375
|
http://www.mercydurango.org/body.cfm?id=186&action=detail&AEArticleID=003503&AEProductID=Adam2004_5117&AEProjectTypeIDURL=APT_1
|
Spider silk can be scary enough to insects to act as a pest repellant, researchers say.
These findings could lead to a new way to naturally help protect crops, scientists added.
Spiders are among the most common predators on land. Although not all spiders weave webs, they all spin silk that may serve other purposes. For instance, many tiny spiders use silk balloons to travel by air.
Researchers suspected that insects and other regular prey of spiders might associate silk with the risk of getting eaten. As such, they reasoned silk might scare insects off.
The scientists experimented with Japanese beetles (Popillia japonica) and Mexican bean beetles (Epilachna varivestis). These plant-munching pests have spread across eastern North America within the past half-century.
The beetles were analyzed near green bean plants (Phaseolus vulgaris) in both the lab and a tilled field outdoors. The investigators applied two kinds of silk on the plants — one from silkworms (Bombyx mori) and another from a long-jawed spider (Tetragnatha elongata), a species common in riverbank forests but not in the region the researchers studied.
Both spider and silkworm silk reduced insect plant-chewing significantly. In the lab, both eliminated insect damage entirely, while in the field, spider silk had a greater effect — plants enclosed with beetles and spider silk experienced about 50 percent less damage than leaves without spider silk, while silkworm silk only led to about a 10 to 20 percent reduction. Experiments with other fibers revealed that only silk had this protective effect.
"This work suggests that silk alone is a signal to potential prey that danger is near," researcher Ann Rypstra, an evolutionary ecologist at Miami University in Ohio, told LiveScience.
Rypstra was most surprised that the effect occurred even though the species involved do not share any evolutionary history together as predator and prey. This suggests "herbivores are using the silk as some sort of general signal that a spider — any ol' spider — is around and responding by reducing their activity or leaving the area," she said.
While more work will need to be done before this research might find applied use, the fact that the presence of silk alone reduced damage caused by two economically important pest insects "suggests that there could be applications in agricultural pest management and biological control," Rypstra said.
Rypstra is also interested in the chain reaction of events that silk might trigger in an ecosystem.
"For example, if an herbivore encounters a strand of silk and alters its behavior in a particular manner, does that make it more susceptible to predation by a non-spider?" Rypstra asked. "Do spiders that leave lots of silk behind have a larger impact in the food web, and how does it vary from habitat to habitat? These are just a couple of questions that we might be exploring in the near future."
Rypstra and her colleagues detailed their findings online Wednesday in the journal Biology Letters.
|
<urn:uuid:2bf9f9b2-3cdc-4ce4-979e-a8d74b24efa5>
| 3.96875
|
http://www.nbcnews.com/id/50018574/ns/technology_and_science-science/
|
A group of researchers at DTU Space is developing an observatory to be mounted on the International Space Station. Called ASIM, the observatory will among other things photograph giant lightning discharges above the clouds. The objective is to determine whether giant lightning discharges affect the Earth’s climate.
The question is whether giant lightning discharges, which shoot up from the clouds towards space, are simply a spectacular natural phenomenon, or whether they alter the chemical composition of the atmosphere, affecting the Earth’s climate and the ozone layer.
In recent years, scientists at DTU Space have studied giant lightning using high-altitude mountain cameras. From time to time, the cameras have succeeded in capturing low-altitude lightning flashes which have shot up from a thundercloud. The International Space Station provides a clear view of these giant lightning discharges, and the opportunity to study them will be significantly improved with the introduction of the observatory.
The researchers will also use ASIM to study how natural and man-made events on the ground – such as hurricanes, dust storms, forest fires and volcanic eruptions – influence the atmosphere and climate.
|
<urn:uuid:64609457-8d80-4c2f-9854-ad43579b4866>
| 3.90625
|
http://www.space.dtu.dk/English/Research/Climate_and_Environment/Electric_storms.aspx
|
HISTORY OF SCULPTURE
Chronological summary of major movements, styles, periods and artists that have contributed to the evolution and development of visual art.
STONE AGE ART (c. 2,500,000 - 3,000 BCE)
Prehistoric art comes from three epochs of prehistory: Paleolithic, Mesolithic and Neolithic. The earliest recorded art is the Bhimbetka petroglyphs (a set of 10 cupules and an engraving or groove) found in a quartzite rock shelter known as Auditorium cave at Bhimbetka in central India, dating from at least 290,000 BCE. However, it may turn out to be much older (c.700,000 BCE). This primitive rock art was followed, no later than 250,000 BCE, by simple figurines (eg. Venus of Berekhat Ram [Golan Heights] and Venus of Tan-Tan [Morocco]), and from 80,000 BCE by the Blombos cave stone engravings, and the cupules at the Dordogne rock shelter at La Ferrassie. Prehistoric culture and creativity are closely associated with brain size and efficiency, which impact directly on "higher" functions such as language, creative expression and ultimately aesthetics. Thus with the advent of "modern" homo sapiens painters and sculptors (50,000 BCE onwards) such as Cro-Magnon Man and Grimaldi Man, we see a huge outburst of magnificent late Paleolithic sculpture and painting in France and the Iberian peninsula. This comprises a range of miniature obese venus figurines (eg. the Venuses of Willendorf, Kostenky, Monpazier, Dolni Vestonice, Moravany, Brassempouy, Garagino, to name but a few), as well as mammoth ivory carvings found in the caves of Vogelherd and Hohle Fels in the Swabian Jura. However, the greatest art of prehistory is the cave painting at Chauvet, Lascaux and Altamira.
These murals were painted in caves reserved as a sort of prehistoric art gallery, where artists began to paint animals and hunting scenes, as well as a variety of abstract or symbolic drawings. In France, they include the monochrome Chauvet Cave pictures of animals and abstract drawings, the hand stencil art at Cosquer Cave, and the polychrome charcoal and ochre images at Pech-Merle, and Lascaux. In Spain, they include polychrome images of bison and deer at Altamira Cave in Spain. Outside Europe, major examples of rock art include: Ubirr Aboriginal artworks (from 30,000 BCE), the animal figure paintings in charcoal and ochre at the Apollo 11 Cave (from 25,500 BCE) in Namibia, the Bradshaw paintings (from 17,000 BCE) in Western Australia, and the hand stencil images at the Cuevas de las Manos (Cave of the Hands) (from 9500 BCE) in Argentina, among many others.
Against a background of a new climate, improved living conditions and consequent behaviour patterns, Mesolithic art gives more space to human figures, shows keener observation, and greater narrative in its paintings. Also, because of the warmer weather, it moves from caves to outdoor sites in numerous locations across Europe, Asia, Africa, Australasia and the Americas. Mesolithic artworks include the bushman rock paintings in the Waterberg area of South Africa, the paintings in the Rock Shelters of Bhimbetka in India, and Australian Aboriginal art from Arnhem Land. It also features more 3-D art, including bas-reliefs and free standing sculpture. Examples of the latter include the anthropomorphic figurines uncovered in Nevali Cori and Göbekli Tepe near Urfa in eastern Asia Minor, and the statues of Lepenski Vir (eg. The Fish God) in Serbia. Other examples of Mesolithic portable art include bracelets, painted pebbles and decorative drawings on functional objects, as well as ceramic pottery of the Japanese Jomon culture. The greatest Mesolithic work of art is the sculpture "Thinker From Cernavoda" from Romania.
The more "settled" and populous Neolithic era saw a growth in crafts like pottery and weaving. This originated in Mesolithic times from about 9,000 BCE in the villages of southern Asia, after which it flourished along the Yellow and Yangtze river valleys in China (c.7,500 BCE) - see Neolithic Art in China - then in the fertile crescent of the Tigris and Euphrates river valleys in the Middle East (c.7,000), before spreading to India (c.5,000), Europe (c.4,000), China (3,500) and the Americas (c.2,500). Although most art remained functional in nature, there was a greater focus on ornamentation and decoration. For example, calligraphy - one of the great examples of Chinese art - first appears during this period. Neolithic art also features free standing sculpture, bronze statuettes (notably by the Indus Valley Civilization), primitive jewellery and decorative designs on a variety of artifacts. The most spectacular form of Neolithic art was architecture: featuring large-stone structures known as megaliths, ranging from the Egyptian pyramids, to the passage tombs of Northern Europe - such as Newgrange and Knowth in Ireland - and the assemblages of large upright stones (menhirs) such as those at the Stonehenge Stone Circle and Avebury Circle in England. (For more, please see: megalithic art.) However, the major medium of Neolithic art was ceramic pottery, the finest examples of which were produced around the region of Mesopotamia (see Mesopotamian art) and the eastern Mediterranean. Towards the close of this era, hieroglyphic writing systems appear in Sumer, heralding the end of prehistory.
The most famous examples of Bronze Age art appeared in the 'cradle of civilization' around the Mediterranean in the Near East, during the rise of Mesopotamia (present-day Iraq), Greece, Crete (Minoan civilization) and Egypt. The emergence of cities, the use of written languages and the development of more sophisticated tools led the creation of a far wider range of monumental and portable artworks.
Egypt, arguably the greatest civilization in the history of ancient art, was the first culture to adopt a recognizable style of art. Egyptian painters depicted the head, legs and feet of their human subjects in profile, while portraying the eye, shoulders, arms and torso from the front. Other artistic conventions laid down how Gods, Pharaohs and ordinary people should be depicted, regulating such elements as size, colour and figurative position. A series of wonderful Egyptian encaustic wax paintings, known as the Fayum portraits, offer a fascinating glimpse of Hellenistic culture in Ancient Egypt. In addition, the unique style of Egyptian architecture featured a range of massive stone burial chambers, called Pyramids. Egyptian expertise in stone had a huge impact on later Greek architecture. Famous Egyptian pyramids include: The Step Pyramid of Djoser (c.2630 BCE), and The Great Pyramid at Giza (c.2550 BCE), also called the Pyramid of Khufu or 'Pyramid of Cheops'.
In Mesopotamia and Ancient Persia, Sumerians were developing their own unique building - an alternative form of stepped pyramid called a ziggurat. These were not burial chambers but man-made mountains designed to bring rulers and people closer to their Gods who according to legend lived high up in mountains to the east. Ziggurats were built from clay bricks, typically decorated with coloured glazes.
For most of Antiquity, the art of ancient Persia was closely intertwined with that of its neighbours, especially Mesopotamia (present-day Iraq), and influenced - and was influenced by - Greek art. Early Persian works of portable art feature the intricate ceramics from Susa and Persepolis (c.3000 BCE), but the two important periods of Persian art were the Achaemenid Era (c.550-330 BCE) - exemplified by the monumental palaces at Persepolis and Susa, decorated with sculpture, stone reliefs, and the famous "Frieze of Archers" (Louvre, Paris) created out of enameled brick - and the Sassanid Era (226-650 CE) - noted for its highly decorative stone mosaics, gold and silver dishes, frescoes and illuminated manuscripts as well as crafts like carpet-making and silk-weaving. But, the greatest relics of Sassanian art are the rock sculptures carved out of steep limestone cliffs at Taq-i-Bustan, Shahpur, Naqsh-e Rostam and Naqsh-e Rajab.
The first important strand of Aegean art, created on Crete by the Minoans, was rooted in its palace architecture at Knossos, Phaestus, Akrotiri, Kato Zakros and Mallia, which were constructed using a combination of stone, mud-brick and plaster, and decorated with colourful murals and fresco pictures, portraying mythological animal symbols (eg. the bull) as well as a range of mythological narratives. Minoan art also features stone carvings (notably seal stones), and precious metalwork. The Minoan Protopalatial period (c.1700 BCE), which ended in a major earthquake, was followed by an even more ornate Neopalatial period (c.1700-1425 BCE), which witnessed the highpoint of the culture before being terminated by a second set of earthquakes in 1425. Minoan craftsmen are also noted for their ceramics and vase-painting, which featured a host of marine and maritime motifs. This focus on nature and events - instead of rulers and deities - is also evident in Minoan palace murals and sculptures.
Named after the metal which made it prosperous, the Bronze Age period witnessed a host of wonderful metalworks made from many different materials. This form of metallurgy is exemplified by two extraordinary masterpieces: The "Ram in the Thicket" (c.2500 BCE, British Museum, London), a small Iraqi sculpture made from gold-leaf, copper, lapis lazuli, and red limestone; and The "Maikop Gold Bull" (c.2500 BCE, Hermitage, St Petersburg), a miniature gold sculpture of the Maikop Culture, North Caucasus, Russia. The period also saw the emergence of Chinese bronzeworks (from c.1750 BCE), in the form of bronze plaques and sculptures often decorated with jade, from the Yellow River Basin of Henan Province, Central China.
For Bronze Age civilizations in the Americas, see: Pre-Columbian art, which covers the art and crafts of Mesoamerican and South American cultures.
The Iron Age saw a huge growth in artistic activity, especially in Greece and around the eastern Mediterranean. It coincided with the rise of Hellenic (Greek-influenced) culture.
Although Mycenae was an independent Greek city in the Greek Peloponnese, the term "Mycenean" culture is sometimes used to describe early Greek art as a whole during the late Bronze Age. Initially very much under the influence of Minoan culture, Mycenean art gradually achieved its own balance between the lively naturalism of Crete and the more formal artistic idiom of the mainland, as exemplified in its numerous tempera frescoes, sculpture, pottery, carved gemstones, jewellery, glass, ornaments and precious metalwork. Also, in contrast to the Minoan "maritime trading" culture, Myceneans were warriors, so their art was designed primarily to glorify their secular rulers. It included a number of tholos tombs filled with gold work, ornamental weapons and precious jewellery.
Ancient Greek art is traditionally divided into the following periods: (1) the Dark Ages (c.1100-900 BCE). (2) The Geometric Period (c.900-700 BCE). (3) The Oriental-Style Period (c.700-625 BCE). (4) The Archaic Period (c.625-500 BCE). (5) The Classical Period (c.500-323 BCE). (6) The Hellenistic Period (c.323-100 BCE). Unfortunately, nearly all Greek painting and a huge proportion of Greek sculpture has been lost, leaving us with a collection of ruins or Roman copies. Greek architecture, too, is largely known to us through its ruins. Despite this tiny legacy, Greek artists remain highly revered, which demonstrates how truly advanced they were.
Like all craftsmen of the Mediterranean area, the ancient Greeks borrowed a number of important artistic techniques from their neighbours and trading partners. Even so, by the death of the Macedonian Emperor Alexander the Great in 323 BCE, Greek art was regarded in general as the finest ever made. Even the Romans - despite their awesome engineering and military skills - never quite overcame their sense of inferiority in the face of Greek craftsmanship, and (fortunately for us) copied Greek artworks assiduously. Seventeen centuries later, Greek architecture, sculptural reliefs, statues, and pottery would be rediscovered during the Italian Renaissance, and made the cornerstone of Western art for over 400 years.
Greek pottery developed much earlier than other art forms: by 3000 BCE the Peloponnese was already the leading pottery centre. Later, following the take-over of the Greek mainland by Indo-European tribes around 2100 BCE, a new form of pottery was introduced, known as Minyan Ware. It was the first Greek type to be made on a potter's wheel. Despite this, it was Minoan pottery on Crete - with its new dark-on-light style - that predominated during the 2nd Millennium BCE. Thereafter, however, Greek potters regained the initiative, introducing a series of dazzling innovations including: beautifully proportioned Geometric Style pottery (900-725), as well as Oriental (725-600), Black-Figure (600-480) and Red-Figure (530-480) styles. Famous Greek ceramicists include Exekias, Kleitias, Ergotimos, Nearchos, Lydos, the Amasis Painter, Andokides, Euthymides, and Sophilos (all Black-Figure), plus Douris, Brygos and Onesimos (Red-Figure).
In Etruria, Italy, the older Villanovan Culture gave way to Etruscan Civilization around 700 BCE. This reached its peak during the sixth century BCE as their city-states gained control of central Italy. Like the Egyptians but unlike the Greeks, Etruscans believed in an after-life, thus tomb or funerary art was a characteristic feature of Etruscan culture. Etruscan artists were also renowned for their figurative sculpture, in stone, terracotta and bronze. Above all Etruscan art is famous for its "joi de vivre", exemplified by its lively fresco mural painting, especially in the villas of the rich. In addition, the skill of Etruscan goldsmiths was highly prized throughout Italy and beyond. Etruscan culture, itself strongly influenced by Greek styles, had a marked impact on other cultures, notably the Hallstatt and La Tene styles of Celtic art. Etruscan culture declined from 396 BCE onwards, as its city states were absorbed into the Roman Empire.
From about 600 BCE, migrating pagan tribes from the Russian Steppes, known as Celts, established themselves astride the Upper Danube in central Europe. Celtic culture, based on exceptional trading skills and an early mastery of iron, facilitated their gradual expansion throughout Europe, and led to two styles of Celtic art whose artifacts are known to us through several key archeological sites in Switzerland and Austria. The two styles are Hallstatt (600-450) and La Tene (450-100). Both were exemplified by beautiful metalwork and complex linear designwork. Although by the early 1st Millennium CE most pagan Celtic artists had been fully absorbed into the Roman Empire, their traditions of spiral, zoomorphic, knotwork and interlace designs later resurfaced and flourished (600-1100 CE) in many forms of Hiberno-Saxon art (see below) such as illuminated Gospel manuscripts, religious metalwork, and High Cross Sculpture. Famous examples of Celtic metalwork art include the Gundestrup Cauldron, the Petrie Crown and the Broighter gold torc.
Unlike their intellectual Greek neighbours, the Romans were primarily practical people with a natural affinity for engineering, military matters, and Empire building. Roman architecture was designed to awe, entertain and cater for a growing population both in Italy and throughout their Empire. Thus Roman architectural achievements are exemplified by new drainage systems, aqueducts, bridges, public baths, sports facilities and amphitheatres (eg. the Colosseum 72-80 CE), characterized by major advances in materials (eg. the invention of concrete) and in the construction of arches and roof domes. The latter not only allowed the roofing of larger buildings, but also gave the exterior far greater grandeur and majesty. All this revolutionized the Greek-dominated field of architecture, at least in form and size, if not in creativity, and provided endless opportunity for embellishment in the way of sculptural reliefs, statues, fresco murals, and mosaics. The most famous examples of Roman architecture include: the massive Colosseum, the Arch of Titus, and Trajan's Column.
If Roman architecture was uniquely grandiose, its paintings and sculptures continued to imitate the Greek style, except that its main purpose was the glorification of Rome's power and majesty. Early Roman art (c.200-27 BCE) was detailed, unidealized and realistic, while later Imperial styles (c.27 BCE - 200 CE) were more heroic. Mediocre painting flourished in the form of interior-design standard fresco murals, while higher quality panel painting was executed in tempera or in encaustic pigments. Roman sculpture too, varied in quality: as well as tens of thousands of average quality portrait busts of Emperors and other dignitaries, Roman sculptors also produced some marvellous historical relief sculptures, such as the spiral bas relief sculpture on Trajan's Column, celebrating the Emperor's victory in the Dacian war.
Early Art From Around the World
Although the history of art is commonly seen as being mainly concerned with civilizations that derived from European and Chinese cultures, a significant amount of arts and crafts appeared from the earliest times around the periphery of the known world. For more about the history and artifacts of these cultures, see: Oceanic art (from the South Pacific and Australasia), African art (from all parts of the continent) and Tribal art (from Africa, the Pacific Islands, Indonesia, Burma, Australasia, North America, and Alaska).
Constantinople, Christianity and Byzantine Art
With the death of the Emperor Theodosius in 395 CE, the Roman empire was divided into two halves: a Western half based initially in Rome, until it was sacked in the 5th century CE, then Ravenna; and an eastern half located in the more secure city of Constantinople. At the same time, Christianity was made the exclusive official religion of the empire. These two political developments had a huge impact on the history of Western art. First, relocation to Constantinople helped to prolong Greco-Roman civilization and culture; second, the growth of Christianity led to an entirely new category of Christian art which provided architects, painters, sculptors and other craftsmen with what became the dominant theme in the visual arts for the next 1,200 years. As well as prototype forms of early Christian art, much of which came from the catacombs, it also led directly to the emergence of Byzantine art. See also: Christian Art, Byzantine Period.
Byzantine art was almost entirely religious art, and centred around its Christian architecture. Masterpieces include the awesome Hagia Sophia (532-37) in Istanbul; the Church of St Sophia in Sofia, Bulgaria (527-65); and the Church of Hagia Sophia in Thessaloniki. Byzantine art also influenced the Ravenna mosaics in the Basilicas of Sant'Apollinare Nuovo, San Vitale, and Sant' Apollinare in Classe. Secular examples include: the Great Palace of Constantinople, and Basilica Cistern. As well as new architectural techniques such as the use of pendentives to spread the weight of the ceiling dome, thus permitting larger interiors, new decorative methods were introduced like mosaics made from glass, rather than stone. But the Eastern Orthodox brand of Christianity (unlike its counterpart in Rome), did not allow 3-D artworks like statues or high reliefs, believing they glorified the human aspect of the flesh rather than the divine nature of the spirit. Thus Byzantine art (eg. painting, mosaic works) developed a particular style of meaningful imagery (iconography) designed to present complex theology in a very simple way. For example, colours were used to express different ideas: gold represented Heaven; blue, the colour of human life, and so on.
After 600 CE, Byzantine architecture progressed through several periods - such as the Middle Period (c.600-1100) and the Comnenian and Paleologan periods (c.1100-1450) - gradually becoming more and more influenced by eastern traditions of construction and decoration. In Western Europe, Byzantine architecture was superseded by Romanesque and Gothic styles, while in the Near East it continued to have a significant influence on early Islamic architecture, as illustrated by the Umayyad Great Mosque of Damascus and the Dome of the Rock in Jerusalem.
In the absence of sculpture, Byzantine artists specialized in 2-D painting, becoming masters of panel-painting, including miniatures - notably icons - and manuscript illumination. Their works had a huge influence on artists throughout western and central Europe, as well as the Islamic countries of the Middle East.
Located on the remote periphery of Western Europe, Ireland remained free of interference from either Rome or the barbarians that followed. As a result, Irish Celtic art was neither displaced by Greek or Roman idioms, nor buried in the pagan Dark Ages. Furthermore, the Church was able to establish a relatively secure network of Irish monasteries, which rapidly became important centres of religious learning and scholarship, and gradually spread to the islands off Britain and to parts of Northern England. This monastic network soon became a major patron of the arts, attracting numerous scribes and painters into its scriptoriums to create a series of increasingly ornate illuminated gospel manuscripts: examples include: the Cathach of Colmcille (c.560), the Book of Dimma (c.625), the Durham Gospels (c.650), the Book of Durrow (c.670), and the supreme Book of Kells (also called the Book of Columba), considered to be the apogee of Western calligraphy. These gospel illuminations employed a range of historiated letters, rhombuses, crosses, trumpet ornaments, pictures of birds and animals, occasionally taking up whole pages (carpet pages) of geometric or interlace patterns. The creative success of these decorated manuscripts was greatly enhanced by the availability of Celtic designs from jewellery and metalwork - produced for the Irish secular elite - and by increased cultural contacts with Anglo-Saxon craftsmen in England.
Another early Christian art form developed in Ireland was religious metalwork, exemplified by such masterpieces as the Tara Brooch, the Ardagh Chalice, the Derrynaflan Chalice, and the Moylough Belt Shrine, as well as processional crosses like the 8th/9th century Tully Lough Cross and the great 12th century Cross of Cong, commissioned by Turlough O'Connor. Finally, from the late eighth century, the Church began commissioning a number of large religious crosses decorated both with scenes from the bible and abstract interlace, knotwork and other Celtic-style patterns. Examples include Muiredach's Cross at Monasterboice, County Louth, and the Ahenny High Cross in Tipperary. These scripture high crosses flourished between 900 and 1100, although construction continued as late as the 15th century.
Unfortunately, with the advent of the Vikings (c.800-1000), the unique Irish contribution to Western Civilization in general and Christianity in particular, began to fade, despite some contribution from Viking art. Thereafter, Roman culture - driven by the Church of Rome - began to reassert itself across Europe.
A Word About Asian Art
In contrast to Christianity which permits figurative representation of Prophets, Saints and the Holy family, Islam forbids all forms of human iconography. Thus Islamic art focused instead on the development of complex geometric patterns, illuminated texts and calligraphy.
In East Asia, the visual arts of India and Tibet incorporated the use of highly coloured figures (due to their wide range of pigments) and strong outlines. Painting in India was extremely diverse, as were materials (textiles being more durable often replaced paper) and size (Indian miniatures were a specialty). Chinese art included bronze sculpture, jade carving, Chinese pottery, calligraphic and brush painting, among other forms. In Japan, Buddhist temple art, Zen Ink-Painting, Yamato-e and Ukiyo-e woodblock prints were four of the main types of Japanese art.
On the continent, the revival of medieval Christian art began with Charlemagne I, King of the Franks, who was crowned Holy Roman Emperor, by Pope Leo III in 800. Charlemagne's court scriptoriums at Aachen produced a number of magnificent illuminated Christian texts, such as: the Godscalc Evangelistary, the Lorsch Gospels and the Gospels of St Medard of Soissons. Ironically, his major architectural work - the Palatine Chapel in Aachen (c.800) - was influenced not by St Peter's or other churches in Rome, but by the Byzantine-style Basilica of San Vitale in Ravenna. The Carolingian empire rapidly dissolved but Carolingian Art marked an important first step in the revitalization of European culture. Furthermore, many of the Romanesque and Gothic churches were built on the foundations of Carolingian architecture. Charlemagne's early Romanesque architectural achievements were continued by the Holy Roman Emperors Otto I-III, in a style known as Ottonian Art, which morphed into the fully fledged "Romanesque." (In England and Ireland, the Romanesque style is usually called Norman architecture.)
The Church Invests in Art to Convey Its Message
The spread of Romanesque art in the 11th century coincided with the reassertiveness of Roman Christianity, and the latter's influence on secular authorities led to the Christian re-conquest of Spain (c.1031) as well as the Crusade to free the Holy Land from the grip of Islam. The success of the Crusaders and their acquisition of Holy Relics triggered a wave of new cathedrals across Europe. In addition to its influence over international politics, Rome exercised growing power via its network of Bishops and its links with Monastic orders such as the Benedictines, the Cistercians, Carthusians and Augustinian Canons. From these monasteries, its officials exercised growing administrative power over the local population, notably the power to collect tax revenues which it devoted to religious works, particularly the building of cathedrals (encompassing sculpture and metalwork, as well as architecture), illuminated gospel manuscripts, and cultural scholarship - a process exemplified by the powerful Benedictine monastery at Cluny in Burgundy.
Romanesque Architecture (c.1000-1200)
Although based on Greek and Roman Antiquity, Romanesque architecture displayed neither the creativity of the Greeks, nor the engineering skill of the Romans. Romanesque builders employed thick walls, round arches, piers, columns, groin vaults, narrow slit-windows, large towers and decorative arcading. The basic load of the building was carried not by its arches or columns but by its massive walls. And its roofs, vaults and buttresses were relatively primitive in comparison with later styles. Above all, interiors were dim and comparatively hemmed in with heavy stone walls. Even so, Romanesque architecture did reintroduce two important forms of fine art: sculpture (which had been in abeyance since the fall of Rome), and stained glass, albeit on a minor scale. (For details of sculptors, painters, and architects from the Middle Ages, see: Medieval Artists.)
Largely financed by monastic orders and local bishops, Gothic architecture exploited a number of technical advances in pointed arches and other design factors, in order to awe, inspire and educate the masses. Thus, out went the massively thick walls, small windows and dim interiors, in came soaring ceilings ("reaching to heaven"), thin walls and stained glass windows. This transformed the interior of many cathedrals into inspirational sanctuaries, where illiterate congregations could see the story of the bible illustrated in the beautiful stained glass art of its huge windows. Indeed, the Gothic cathedral was seen by architects as representing the universe in miniature. Almost every feature was designed to convey a theological message: namely, the awesome glory of God, and the ordered nature of his universe. Religious Gothic art - that is, architecture, relief sculpture and statuary - is best exemplified by the cathedrals of Northern France, notably Notre Dame de Paris; Reims and Chartres, as well as Cologne Cathedral, St Stephen's Cathedral Vienna and, in England, Westminster Abbey and York Minster.
Strongly influenced by International Gothic, the European revival of fine art between roughly 1300 and 1600, popularly known as "the Renaissance", was a unique and (in many respects) inexplicable phenomenon, not least because of (1) the Black Death plague (1346), which wiped out one third of the European population; (2) the Hundred Years War between England and France (1337-1453); and (3) the Reformation (c.1520) - none of which was conducive to the development of the visual arts. Fortunately, certain factors in the Renaissance heartland of Florence and Rome - notably the energy and huge wealth of the Florentine Medici family, and the Papal ambitions of Pope Sixtus IV (1471-84), Pope Julius II (1503-13), Pope Leo X (1513-21) and Pope Paul III (1534-45) - succeeded in overcoming all natural obstacles, even if the Church was almost bankrupted in the process.
Renaissance art was founded on a new appreciation of the arts of Classical Antiquity, a belief in the nobility of Man, as well as artistic advances in both linear perspective and realism. It evolved in three main Italian cities: first Florence, then Rome, and lastly Venice. Renaissance chronology is usually listed as follows:
Renaissance architecture employed precepts derived from ancient Greece and Rome, but kept many modern features of Byzantine and Gothic invention, such as domes and towers. Important architects included: Donato Bramante (1444-1514), the greatest exponent of High Renaissance architecture; Baldassare Peruzzi (1481-1536), an important architect and interior designer; Michele Sanmicheli (1484-1559), the leading pupil of Bramante; Jacopo Sansovino (1486-1570), the most celebrated Venetian architect; Giulio Romano (1499-1546), the chief practitioner of Italian Late Renaissance-style building design; Andrea Palladio (1508-1580), an influential theorist; and of course Michelangelo himself, who helped to design the dome for St Peter's Basilica in Rome.
Among the greatest sculptors of the Northern Renaissance were: the German limewood sculptor Tilman Riemenschneider (1460-1531), noted for his reliefs and freestanding wood sculpture; and the wood-carver Veit Stoss (1450-1533) noted for his delicate altarpieces.
It was during this period that the Catholic Counter-Reformation got going in an attempt to attract the masses away from Protestantism. Renewed patronage of the visual arts and architecture was a key feature of this propaganda campaign, and led to a grander, more theatrical style in both areas. This new style, known as Baroque art was effectively the highpoint of dramatic Mannerism.
Baroque architecture took full advantage of the theatrical potential of the urban landscape, exemplified by Saint Peter's Square (1656-67) in Rome, in front of the domed St Peter's Basilica. Its architect, Gianlorenzo Bernini (1598-1680) employed a widening series of colonnades in the approach to the cathedral, conveying the impression to visitors that they are being embraced by the arms of the Catholic Church. The entire approach is constructed on a gigantic scale, to induce feelings of awe.
In painting, the greatest exponent of Catholic Counter-Reformation art was Peter Paul Rubens (1577-1640) - "the Prince of painters and the painter of Princes". Other leading Catholic artists included Diego Velazquez (1599-1660), Francisco Zurbaran (1598-1664) and Nicolas Poussin (1594-1665).
In Protestant Northern Europe, the Baroque era was marked by the flowering of Dutch Realist painting, a style uniquely suited to the new bourgeois patrons of small-scale interiors, genre-paintings, portraits, landscapes and still lifes. Several schools of Dutch Realism sprang up including those of Delft, Utrecht, and Leiden. Leading members included the two immortals Rembrandt (1606-1669) and Jan Vermeer (1632-1675), as well as Frans Snyders (1579-1657), Frans Hals (1581-1666), Adriaen Brouwer (1605-38), Jan Davidsz de Heem (1606-84), Adriaen van Ostade (1610-85), David Teniers the Younger (1610-90), Gerard Terborch (1617-81), Jan Steen (1626-79), Pieter de Hooch (1629-83), and the landscape painters Aelbert Cuyp (1620-91), Jacob van Ruisdael (1628-82) and Meyndert Hobbema (1638-1709), among others.
This new style of decorative art, known as Rococo, impacted most on interior-design, although architecture, painting and sculpture were also affected. Essentially a reaction against the seriousness of the Baroque, Rococo was a light-hearted, almost whimsical style which grew up in the French court at the Palace of Versailles before spreading across Europe. Rococo designers employed the full gamut of plasterwork, murals, tapestries, furniture, mirrors, porcelain, silks and other embellishments to give the householder a complete aesthetic experience. In painting, the Rococo style was championed by the French artists Watteau (1684-1721), Fragonard (1732-1806), and Boucher (1703-70). But the greatest works were produced by the Venetian Giambattista Tiepolo (1696-1770) whose fantastic wall and ceiling fresco paintings took Rococo to new heights. See in particular the renaissance of French Decorative Art (1640-1792), created by French Designers especially in the form of French Furniture, at Versailles and other Royal Chateaux, in the style of Louis Quatorze (XIV), Louis Quinze (XV) and Louis Seize (XVI). As it was, Rococo symbolized the decadent indolence and degeneracy of the French aristocracy. Because of this, it was swept away by the French Revolution which ushered in the new sterner Neoclassicism, more in keeping with the Age of Enlightenment and Reason.
In architecture, Neoclassicism derived from the more restrained "classical" forms of Baroque practised in England by Sir Christopher Wren (1632-1723), who designed St Paul's Cathedral. Yet another return to the Classical Orders of Greco-Roman Antiquity, the style was characterized by monumental structures, supported by columns or pillars, and topped with classical Renaissance domes. Employing innovations like layered cupolas, it lent added grandeur to palaces, churches, and other public structures. Famous Neoclassical buildings include: the Pantheon (Paris) designed by Jacques Germain Soufflot (1756-97), the Arc de Triomphe (Paris) designed by Jean Chalgrin, the Brandenburg Gate (Berlin) designed by Carl Gotthard Langhans (1732-1808), and the United States Capitol Building, designed by English-born Benjamin Henry Latrobe (1764-1820), and later by Stephen Hallet and Charles Bulfinch. See also the era of American Colonial Art (c.1670-1800).
Neoclassicist painters also looked to Classical Antiquity for inspiration, and emphasized the virtues of heroism, duty and gravitas. Leading exponents included the French political artist Jacques-Louis David (1748-1825), the German portrait and history painter Anton Raphael Mengs (1728-79), and the French master of the Academic art style, Jean Auguste Dominique Ingres (1780-1867). Neoclassical sculptors included Antonio Canova (1757-1822), among others.
In contrast to the universal values espoused by Neo-Classicism, Romantic artists expressed a more personal response to life, relying more on their senses and emotions rather than reason and intellect. This idealism, like Neoclassicism, was encouraged by the French Revolution, thus some artists were affected by both styles. Nature was an important subject for Romantics, and the style is exemplified by the English School of Landscape Painting, the plein air painting of John Constable (1776-1837) and Corot (1796-1875), along with members of the French Barbizon School and the American Hudson River School of landscape painting, as well as the more expressionistic JMW Turner (1775-1851). Arguably, however, the greatest Romantic landscape painter is Caspar David Friedrich (1774-1840). Narrative or history painting was another important genre in Romanticism: leading exponents include: Francisco Goya (1746-1828), Henry Fuseli (1741-1825), James Barry (1741-1806), Theodore Gericault (1791-1824) and Eugene Delacroix (1798-1863), as well as later Orientalists, Pre-Raphaelites and Symbolists.
As the 19th century progressed, growing awareness of the rights of man plus the social impact of the Industrial Revolution caused some artists to move away from idealistic or romantic subjects in favour of more mundane subjects, depicted in a more true-to-life style of naturalism. This new focus (to some extent anticipated by William Hogarth in the 18th century, see English Figurative Painting) was exemplified by the Realism style which emerged in France during the 1840s, before spreading across Europe. This new style attracted painters from all the genres - notably Gustave Courbet (1819-77) (genre-painting), Jean Francois Millet (1814-75) (landscape, rural life), Honore Daumier (1808-79) (urban life) and Ilya Repin (1844-1930) (landscape and portraits).
History of Modern Art
French Impressionism, championed above all by Claude Monet (1840-1926), was a spontaneous colour-sensitive style of pleinairism whose origins derived from Jean-Baptiste Camille Corot and the techniques of the Barbizon school - whose quest was to depict the momentary effects of natural light. It encompassed rural landscapes [Alfred Sisley (1839-1899)], cityscapes [Camille Pissarro (1830-1903)], genre scenes [Pierre-Auguste Renoir (1841-1919), Edgar Degas (1834-1917), Paul Cezanne (1839-1906), and Berthe Morisot (1841-95)] and both figurative paintings and portraits [Edouard Manet (1832-83), John Singer Sargent (1856-1925)]. Other artists associated with Impressionism include James McNeill Whistler (1834-1903) and Walter Sickert (1860-1942).
Impressionists sought to faithfully reproduce fleeting moments outdoors. Thus if an object appeared dark purple - due perhaps to failing or reflected light - then the artist painted it purple. Naturalist "Academic-Style" colour schemes, being devised in theory or at least in the studio, did not allow for this. As a result Impressionism offered a whole new pictorial language - one that paved the way for more revolutionary art movements like Cubism - and is often regarded by historians and critics as the first modern school of painting.
In any event, the style had a massive impact on Parisian and world art, and was the gateway to a series of colour-related movements, including Post-Impressionism, Neo-Impressionism, Pointillism, Divisionism, Fauvism, Intimism, the American Luminism or Tonalism, as well as American Impressionism, the Newlyn School and Camden Town Group, the French Les Nabis and the general Expressionist movement.
Essentially an umbrella term encompassing a number of developments and reactions to Impressionism, Post-Impressionism involved artists who employed Impressionist-type colour schemes, but were dissatisfied with the limitations imposed by merely reproducing nature. Neo-Impressionism with its technique of Pointillism (an offshoot of Divisionism) was pioneered by Georges Seurat and Paul Signac (1863-1935), while major Post-Impressionists include Paul Gauguin, Vincent Van Gogh and Paul Cezanne. Inspired by Gauguin's synthetism and Bernard's cloisonnism, the Post-Impressionist group Les Nabis promoted a wider form of decorative art; another style, known as Intimisme, concerned itself with genre scenes of domestic, intimate interiors. Exemplified by the work of Pierre Bonnard (1867-1947) and Edouard Vuillard (1868-1940), it parallels other tranquil interiors such as those by James McNeil Whistler, and the Dutch Realist-influenced Peter Vilhelm Ilsted (1861-1933). Another very important movement - anti-impressionist rather than post-impressionist - was Symbolism (flourished 1885-1900), which went on to influence Fauvism, Expressionism and Surrealism.
For more about art politics in France, see: the Paris Salon.
The term "Fauves" (wild beasts) was first used by the art critic Louis Vauxcelles at the 1905 Salon d'Automne exhibition in Paris when describing the vividly coloured paintings of Henri Matisse (1869-1954), Andre Derain (1880-1954), and Maurice de Vlaminck (1876-1958). Other Fauvists included the later Cubist Georges Braque (1882-1963), Raoul Dufy (1877-1953), Albert Marquet (1875-1947) and Georges Rouault (1871-1958). Most followers of Fauvism moved on to Expressionism or other movements associated with the Ecole de Paris.
Sculptural traditions, although never independent from those of painting, are concerned primarily with space and volume, while issues of scale and function also act as distinguishing factors. Thus on the whole, sculpture was slower to reflect the new trends of modern art during the 19th century, leaving sculptors like Auguste Rodin (1840-1917) free to pursue a monumentalism derived essentially from Neoclassicism if not Renaissance ideology. The public dimension of sculpture also lent itself to the celebration of Victorian values and historical figures, which were likewise executed in the grand manner of earlier times. Thus it wasn't until the emergence of artists like Constantin Brancusi (1876-1957) and Umberto Boccioni (1882-1916) that sculpture really began to change, at the turn of the century.
Expressionism is a general style of painting that aims to express a personal interpretation of a scene or object, rather than depict its true-life features. It is often characterized by energetic brushwork, impastoed paint, intense colours and bold lines. Early Expressionists included Vincent Van Gogh (1853-90), Edvard Munch (1863-1944) and Wassily Kandinsky (1866-1944). A number of German Expressionist schools sprang up during the first three decades of the 20th century. These included: Die Brucke (1905-11), a group based in Dresden in 1905, which mixed elements of traditional German art with Post-Impressionist and Fauvist styles, exemplified in works by Ernst Ludwig Kirchner, Karl Schmidt-Rottluff, Erik Heckel, and Emil Nolde; Der Blaue Reiter (1911-14), a loose association of artists based in Munich, including Wassily Kandinsky, Franz Marc, August Macke, and Paul Klee; Die Neue Sachlichkeit (1920s), a post-war satirical-realist group whose members included Otto Dix, George Grosz, Christian Schad and to a lesser extent Max Beckmann. Expressionism duly spread worldwide, spawning numerous derivations in both figurative painting (eg. Francis Bacon) and abstract art (eg. Mark Rothko). See also: History of Expressionist Painting (c.1880-1930).
Art Nouveau (Late 19th Century - Early 20th Century)
Art Nouveau (known as Jugendstil in Germany, Sezessionstil in the Vienna Secession, Stile Liberty in Italy, and Modernista in Spain) derived from William Morris and the Arts and Crafts Movement in Britain, and was also influenced by both the Celtic Revival arts movement and Japonisme. Its popularity stemmed from the 1900 Exposition Universelle in Paris, from where it spread across Europe and the United States. It was noted for its intricate flowing patterns of sinuous asymmetrical lines, based on plant-forms (dating back to the Celtic Hallstatt and La Tene cultures), as well as female silhouettes and forms. Art Nouveau had a major influence on poster art, design and illustration, interior design, metalwork, glassware, jewellery, as well as painting and sculpture. Leading exponents included: Alphonse Mucha (1860-1939), Aubrey Beardsley (1872-98), Eugene Grasset (1845-1917) and Albert Guillaume (1873-1942). See also: History of Poster Art.
The Bauhaus School (Germany, 1919-1933)
Derived from the two German words "bau" for building and "haus" for house, the Bauhaus school of art and design was founded in 1919 by the architect Walter Gropius. Enormously influential in both architecture and design - and their teaching methods - its instructors included such artists as Josef Albers, Lyonel Feininger, Paul Klee, Wassily Kandinsky, Oskar Schlemmer, Laszlo Moholy-Nagy, Anni Albers and Johannes Itten. Its mission was to bring art into contact with everyday life, thus the design of everyday objects was given the same importance as fine art. Important Bauhaus precepts included the virtue of simple, clean design, mass production and the practical advantages of a well-designed home and workplace. The Bauhaus was eventually closed by the Nazis in 1933, whereupon several of its teachers emigrated to America: Laszlo Moholy-Nagy settled in Chicago where he founded the New Bauhaus in 1937, while Albers went to Black Mountain College in North Carolina.
Art Deco (1920s, 1930s)
The design style known as Art Deco was showcased in 1925 at the International Exhibition of Modern Decorative and Industrial Arts in Paris and became a highly popular style of decorative art, design and architecture during the inter-war years (much employed by cinema and hotel architects). Its influence was also seen in the design of furniture, textile fabrics, pottery, jewellery, and glass. A reaction against Art Nouveau, the new idiom of Art Deco eliminated the latter's flowing curvilinear forms and replaced them with Cubist and Precisionist-inspired geometric shapes. Famous examples of Art Deco architecture include the Empire State Building and the New York Chrysler Building. Art Deco was also influenced by the simple architectural designs of The Bauhaus.
Invented by Pablo Picasso (1881-1973) and Georges Braque (1882-1963) and considered to be "the" revolutionary movement of modern art, Cubism was a more intellectual style of painting that explored the full potential of the two-dimensional picture plane by offering different views of the same object, typically arranged in a series of overlapping fragments: rather like a photographer might take several photos of an object from different angles, before cutting them up with scissors and rearranging them in haphazard fashion on a flat surface. This "analytical Cubism" (which originated with Picasso's "Les Demoiselles d'Avignon") quickly gave way to "synthetic Cubism", when artists began to include "found objects" in their canvases, such as collages made from newspaper cuttings. Cubist painters included: Juan Gris (1887-1927), Fernand Leger (1881-1955), Robert Delaunay (1885-1941), Albert Gleizes (1881-1953), Roger de La Fresnaye (1885-1925), Jean Metzinger (1883-1956), and Francis Picabia (1879-1953), the avant-garde artist Marcel Duchamp (1887-1968), and the sculptors Jacques Lipchitz (1891-1973) and Alexander Archipenko (1887-1964). (See also Russian art.) Short-lived but highly influential, Cubism instigated a whole new style of abstract art and had a significant impact on the development of later styles such as: Orphism (1910-13), Collage (1912 onwards), Purism (1920s), Precisionism (1920s, 1930s), Futurism (1909-1914), Rayonism (c.1912-14), Suprematism (1913-1918), Constructivism (c.1919-32), Vorticism (c.1914-15), the De Stijl (1917-31) design movement and the austere geometrical style of concrete art known as Neo-Plasticism.
Largely rooted in the anti-art traditions of the Dada movement (1916-24), as well as the psychoanalytical ideas of Sigmund Freud and Carl Jung, Surrealism was the most influential art style of the inter-war years. According to its chief theorist, Andre Breton, it sought to combine the unconscious with the conscious, in order to create a new "super-reality" - a "surrealisme". The movement spanned a huge range of styles, from abstraction to true-life realism, typically punctuated with "unreal" imagery. Important Surrealists included Salvador Dali (1904-89), Max Ernst (1891-1976), Rene Magritte (1898-1967), Andre Masson (1896-1987), Yves Tanguy (1900-55), Joan Miro (1893-1983), Giorgio de Chirico (1888-1978), Jean Arp (1886-1966), and Man Ray (1890-1976). The movement had a major impact across Europe during the 1930s, was the major precursor to Conceptualism, and continues to find adherents in fine art, literature and cinematography.
American painting during the period 1900-45 was realist in style and became increasingly focused on strictly American imagery. This was the result of the reaction against the Armory Show (1913) and European hypermodernism, as well as a response to changing social conditions across the country. Later it became a patriotic response to the Great Depression of the 1930s. See also the huge advances in Skyscraper architecture of the early 20th century. For more, see: American architecture (1600-present). Specific painting movements included the Ashcan School (c.1900-1915); Precisionism (1920s) which celebrated the new American industrial landscape; the more socially aware urban style of Social Realism (1930s); American Scene Painting (c.1925-45) which embraced the work of Edward Hopper and Charles Burchfield, as well as midwestern Regionalism (1930s) championed by Grant Wood, Thomas Hart Benton and John Steuart Curry.
The first international modern art movement to come out of America (it is sometimes referred to as The New York School - see also American art), it was a predominantly abstract style of painting which followed an expressionist colour-driven direction, rather than a Cubist idiom, although it also includes a number of other styles, making it more of a general movement. Four variants stand out in Abstract Expressionism: first, the "automatic" style of "action painting" invented by Jackson Pollock (1912-56) and his wife Lee Krasner (1908-1984). Second, the monumental planes of colour created by Mark Rothko (1903-70), Barnett Newman (1905-70) and Clyfford Still (1904-80) - a style known as Colour Field Painting. Third, the gestural figurative works by Willem De Kooning (1904-1997). Fourth, the geometric "Homage to the Square" abstracts of Josef Albers (1888-1976).
Highly influential, Abstract Expressionist painting continued to influence later artists for over two decades. It was introduced to Paris during the 1950s by Jean-Paul Riopelle (1923-2002), assisted by Michel Tapie's book, Un Art Autre (1952). At the same time, a number of new sub-movements emerged in America, such as Hard-edge painting, exemplified by Frank Stella. In the late 1950s/early 1960s, a purely abstract form of Colour Field painting appeared in works by Helen Frankenthaler and others, while in 1964, the famous art critic Clement Greenberg helped to introduce a further stylistic development known as "Post-Painterly Abstraction". Abstract Expressionism went on to influence a variety of different schools, including Op Art, Fluxus, Pop Art, Minimalism, Neo-Expressionism, and others.
The bridge between modern art and postmodernism, Pop art employed popular imagery and modern forms of graphic art, to create a lively, high-impact idiom, which could be understood and appreciated by Joe Public. It appeared simultaneously in America and Britain, during the late 1950s, while a European form (Nouveau Realisme) emerged in 1960. Pioneered in America by Robert Rauschenberg (1925-2008) and Jasper Johns (b.1930), Pop had close links with early 20th century movements like Surrealism. It was a clear reaction against the closed intellectualism of Abstract Expressionism, from which Pop artists sought to distance themselves by adopting simple, easily recognized imagery (from TV, cartoons, comic strips and the like), as well as modern technology like screen printing. Famous US Pop artists include: Jim Dine (b.1935), Robert Indiana (b.1928), Alex Katz (b.1927), Roy Lichtenstein (1923-97), Claes Oldenburg (b.1929), and Andy Warhol (1928-87). Important Pop artists in Britain were: Peter Blake (b.1932), Patrick Caulfield (1936-2006), Richard Hamilton (b.1922), David Hockney (b.1937), Allen Jones (b.1937), RB Kitaj (b.1932), and Eduardo Paolozzi (1924-2005).
From the early works of Brancusi, 20th century sculpture broadened immeasurably to encompass new forms, styles and materials. Major innovations included the "sculptured walls" of Louise Nevelson (1899-1988), the existential forms of Giacometti (1901-66), the biomorphic abstraction of both Barbara Hepworth (1903-75) and Henry Moore (1898-1986), and the spiders of Louise Bourgeois (1911-2010). Other creative angles were pursued by Salvador Dali (1904-89) in his surrealist "Mae West Lips Sofa" and "Lobster Telephone" - by Meret Oppenheim (1913-85) in her "Furry Breakfast", by FE McWilliam (1909-1992) in his "Eyes, Nose and Cheek", by Sol LeWitt (b.1928) in his skeletal box-like constructions, and by Pop-artists like Claes Oldenburg (b.1929) and Jasper Johns (b.1930), as well as by the Italians Jonathan De Pas (1932-91), Donato D'Urbino (b.1935) and Paolo Lomazzi (b.1936) in their unique "Joe Sofa".
For more about the history of painting, sculpture, architecture and crafts during this period, see: Modern Art Movements.
History of Contemporary Art
The word "Postmodernist" is often used to describe contemporary art since about 1970. In simple terms, postmodernist art emphasizes style over substance (eg. not 'what' but 'how'; not 'art for art's sake', but 'style for stye's sake'), and stresses the importance of how the artist comunicates with his/her audience. This is exemplified by movements such as Conceptual art, where the idea being communicated is seen as more important than the artwork itself, which merely acts as the vehicle for the message. In addition, in order to increase the "impact" of visual art on spectators, postmodernists have turned to new art forms such as Assemblage, Installation, Video, Performance, Happenings and Graffiti - all of which are associated in some way or other with Conceptualism- and this idea of impact continues to inspire.
Painters since the 1970s have experimented with numerous styles across the spectrum from pure abstraction to figuration. These include: Minimalism, a purist form of abstraction which did little to promote painting as an attractive medium; Neo-Expressionism, which encompassed groups like the "Ugly Realists", the "Neue Wilden", "Figuration Libre", "Transavanguardia", the "New Image Painters" and the so-called "Bad Painters", signalled a return to depicting recognizable objects, like the human body (albeit often in a quasi-abstract style), using rough brushwork, vivid colours and colour harmonies; and the wholly figurative styles adopted by groups such as "New Subjectivity" and the "London School". At the other extreme from Minimalism is the ultra-representational art form of photorealism (superrealism, hyperrealism). Conspicuous among this rather bewildering range of activity are figure painters like Francis Bacon, the great Lucien Freud (b.1922), the innovative Fernando Botero (b.1932), the precise David Hockney (b.1937), the photorealists Chuck Close (b.1940) and Richard Estes (b.1936), and the contemporary Jenny Saville (b.1970). See also: Contemporary British Painting (1960-2000).
Sculpture since 1970 has appeared in a variety of guises, including: the large scale metal works of Mark Di Suvero (b.1933), the minimalist sculptures of Walter de Maria (b.1935), the monumental public forms of Richard Serra (b.1939), the hyper-realist nudes of John De Andrea (b.1941), the environmental structures of Anthony Gormley (b.1950), the site-specific figures of Rowan Gillespie (b.1953), the stainless steel works of Anish Kapoor (b.1954), the high-impact Neo-Pop works of Jeff Koons (b.1955), and the extraordinary 21st century works by Sudobh Gupta (b.1964) and Damian Ortega (b.1967). In addition, arresting public sculpture includes the "Chicago Picasso" - a series of metal figures produced for the Chicago Civic Centre and the architectural "Spire of Dublin" (the 'spike'), created by Ian Ritchie (b.1947), among many others.
The pluralistic "anything goes" view of contemporary art (which critics might characterize as exemplifying the fable of the "Emperor's New Clothes"), is aptly illustrated in the works of Damien Hirst, a leading member of the Young British Artists school. Renowned for "The Physical Impossibility of Death in the Mind of Someone Living", a dead Tiger shark pickled in formaldehyde, and lately for his diamond encrusted skull "For the Love of God", Hirst has managed to stimulate audiences and horrify critics around the world. And while he is unlikely ever to inherit the mantle of Michelangelo, his achievement of sales worth $100 million in a single Sotheby's auction (2008) is positively eye-popping.
On a more sobering note, in March 2009 the prestigious Georges Pompidou Centre of Contemporary Art in Paris staged an exhibition entitled "The Specialisation of Sensibility in the Raw Material State into Stabilised Pictorial Sensibility". This avant-garde event consisted of 9 completely empty rooms - in effect, a reincarnation of John Cage's completely silent piece of "musical" conceptual art entitled "4.33". If one of the great contemporary art venues like the Pompidou Centre regards nine completely empty spaces as a worthy art event, we are all in deep trouble.
For more about the history of postmodernist painting, sculpture, and avant-garde art forms, see: Contemporary Art Movements.
One might say that 19th century architecture aimed to beautify the new wave of civic structures, like railway stations, museums, government buildings and other public utilities. It did this by taking ideas from Neo-Classicism, Neo-Gothic, French Second Empire and exoticism, as well as the new forms and materials of so-called "industrial architecture", as exemplified in factories along with occasional landmark structures like the Eiffel Tower.
In comparison, 20th century architecture has been characterized by vertical development (skyscrapers), flagship buildings, and post-war reconstruction. More than any other era, its design has been dominated by the invention of new materials and building methods. It began with the exploitation of late 19th century innovations developed by the Chicago School of architecture, such as the structural steel frame, in a style known as Early Modernism. In America, architects started incorporating Art Nouveau and Art Deco design styles into their work, while in Germany and Russia totalitarian architecture pursued a separate agenda during the 1930s. Famous architects of the first part of the century included: Louis Sullivan (1856-1924), Frank Lloyd Wright (1867-1959), Victor Horta (1861-1947), Antoni Gaudi (1852-1926), Peter Behrens (1868-1940), Walter Gropius (1883-1969) and Le Corbusier (1887-1965).
After 1945, architects turned away from functionalism and began creating new forms facilitated by reinforced concrete, steel and glass. Thus Late Modernism gave way to Brutalism, Corporate Modernism and High Tech architecture, culminating in structures like the Georges Pompidou Centre in Paris, and the iconic Sydney Opera House - one of the first buildings to use industrial strength Araldite to glue together the precast structural elements.
Since 1970, postmodernist architecture has taken several different approaches. Some designers have stripped buildings of all ornamentation to create a Minimalist style; others have used ideas of Deconstructivism to move away from traditional rectilinear shapes; while yet others have employed digital modeling software to create totally new organic shapes in a process called Blobitecture. Famous post-war architects include: Mies van der Rohe (1886-1969), Louis Kahn (1901-74), Jorn Utzon, Eero Saarinen (1910-61), Kenzo Tange (1913-2005), IM Pei (b.1917), Norman Foster (b.1935), Richard Rogers, James Stirling (1926-92), Aldo Rossi (1931-97), Frank O. Gehry (b.1929), Rem Koolhaas (b.1944), and Daniel Libeskind (b.1946). Famous architectural groups or firms include: Skidmore, Owings & Merrill (est 1936); Venturi & Scott-Brown (est 1925); the New York Five - Peter Eisenman, Michael Graves, Charles Gwathmey, John Hejduk, Richard Meier; and Herzog & de Meuron (est 1950).
For our main index, see: Art Encyclopedia.
ENCYCLOPEDIA OF ART
|
<urn:uuid:7dbb42f1-28cf-4bd3-b19e-1cbdf4a7ab2f>
| 3.59375
|
http://www.visual-arts-cork.com/history-of-art.htm
|
Gesture recognition is a way of interfacing with computers using gestures of the human body, typically hand movements. In gesture recognition technology, a camera reads the movements of the human body and communicates the data to a computer that uses the gestures as input to control devices or applications. For example, a person clapping his hands together in front of a camera can produce the sound of cymbals being crashed together when the gesture is fed through a computer.
One way gesture recognition is being used is to help the physically impaired to interact with computers, such as interpreting sign language. The technology also has the potential to change the way users interact with computers by eliminating input devices such as joysticks, mice and keyboards and allowing the unencumbered body to give signals to the computer through gestures such as finger pointing.
Unlike haptic interfaces, gesture recognition does not require the user to wear any special equipment or attach any devices to the body. The gestures of the body are read by a camera instead of sensors attached to a device such as a data glove.
In addition to hand and body movement, gesture recognition technology also can be used to read facial and speech expressions (i.e., lip reading), and eye movements.
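As a concrete illustration of the camera-based approach described above, here is a minimal sketch in Python using the OpenCV and MediaPipe libraries - both are assumptions chosen for the example, since the text does not name any particular toolkit. It reads a webcam feed, detects one hand, and makes a crude open-hand versus closed-hand distinction from the detected landmarks; a real system would map such classifications onto application commands.

```python
import cv2                      # OpenCV: camera capture and display
import mediapipe as mp          # MediaPipe: pre-trained hand landmark model

mp_hands = mp.solutions.hands

cap = cv2.VideoCapture(0)       # default webcam
with mp_hands.Hands(max_num_hands=1, min_detection_confidence=0.6) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB; OpenCV delivers BGR frames
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            lm = results.multi_hand_landmarks[0].landmark
            # Compare fingertips (index, middle, ring, pinky) with their middle
            # joints: a raised fingertip sits higher in the image (smaller y).
            tips, pips = [8, 12, 16, 20], [6, 10, 14, 18]
            raised = sum(lm[t].y < lm[p].y for t, p in zip(tips, pips))
            gesture = "open hand" if raised >= 4 else "closed hand / other"
            cv2.putText(frame, gesture, (10, 30),
                        cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
        cv2.imshow("gesture demo", frame)
        if cv2.waitKey(1) & 0xFF == 27:   # press Esc to quit
            break

cap.release()
cv2.destroyAllWindows()
```

The thumb is ignored here for simplicity; recognizing richer gestures (sign language, pointing, facial expressions) typically replaces this hand-coded rule with a trained classifier over the same kind of landmark or image data.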
|
<urn:uuid:e886e042-6a7d-431e-a669-b88b52d36b9c>
| 3.578125
|
http://www.webopedia.com/index.php/TERM/G/gesture_recognition.html
|
Sawmill process
A sawmill's basic operation is much like those of hundreds of years ago; a log enters on one end and dimensional lumber exits on the other end.
- After trees are selected for harvest, the next step in logging is felling the trees, and bucking them to length.
- Branches are cut off the trunk. This is known as limbing.
- Logs are taken by logging truck, rail or a log drive to the sawmill.
- Logs are scaled either on the way to the mill or upon arrival at the mill.
- Debarking removes bark from the logs.
- Decking is the process for sorting the logs by species, size and end use (lumber, plywood, chips).
- The head saw, head rig or primary saw, breaks the log into cants (unfinished logs to be further processed) and flitches (unfinished planks) with a smooth edge.
- Depending upon the species and quality of the log, the cants will either be further broken down by a resaw or a gang edger into multiple flitches and/or boards.
- Edging will take the flitch and trim off all irregular edges leaving four-sided lumber.
- Trimming squares the ends at typical lumber lengths.
- Drying removes naturally occurring moisture from the lumber. This can be done with kilns or air-dried.
- Planing smooths the surface of the lumber leaving a uniform width and thickness.
- Shipping transports the finished lumber to market.
Early history
The Hierapolis sawmill, a Roman water-powered stone saw mill at Hierapolis, Asia Minor (modern-day Turkey), dating to the second half of the 3rd century AD, is the earliest known sawmill. It is also the earliest known machine to incorporate a crank and connecting rod mechanism.
The earliest literary reference to a working sawmill comes from a Roman poet, Ausonius who wrote an epic poem about the river Moselle in Germany in the late 4th century AD. At one point in the poem he describes the shrieking sound of a watermill cutting marble. Marble sawmills also seem to be indicated by the Christian saint Gregory of Nyssa from Anatolia around 370/390 AD, demonstrating a diversified use of water-power in many parts of the Roman Empire.
Sawmills became widespread in medieval Europe again, as one was sketched by Villard de Honnecourt in c. 1250. They are claimed to have been introduced to Madeira following its discovery in c. 1420 and spread widely in Europe in the 16th century.
Prior to the invention of the sawmill, boards were rived and planed, or more often sawn by two men with a whipsaw, using saddleblocks to hold the log, and a saw pit for the pitman who worked below. Sawing was slow, and required strong and hearty men. The topsawyer had to be the stronger of the two because the saw was pulled in turn by each man, and the lower had the advantage of gravity. The topsawyer also had to guide the saw so that the board was of even thickness. This was often done by following a chalkline.
Early sawmills simply adapted the whipsaw to mechanical power, generally driven by a water wheel to speed up the process. The circular motion of the wheel was changed to back-and-forth motion of the saw blade by a connecting rod known as a pitman arm (thus introducing a term used in many mechanical applications).
Generally, only the saw was powered, and the logs had to be loaded and moved by hand. An early improvement was the development of a movable carriage, also water powered, to move the log steadily through the saw blade.
A type of sawmill without a crank is known from Germany called a "knock and drop" or "drop mill": "The oldest sawmills in the Black Forest are "drop sawmills" also referred to as "knock and drop sawmills". They have all disappeared in Europe except for three in the Black Forest, one of which is in the Open Air Museum in Gutach. In these drop sawmills, the frame carrying the saw blade is knocked upwards by cams as the shaft turns. These cams are let into the shaft on which the waterwheel sits. When the frame carrying the saw blade is in the topmost position it drops by its own weight, making a loud knocking noise, and in so doing it cuts the trunk. From 1800 onwards.”
A small mill such as this would be the center of many rural communities in wood-exporting regions such as the Baltic countries and Canada. The output of such mills would be quite low, perhaps only 500 boards per day. They would also generally only operate during the winter, the peak logging season.
In the United States, the sawmill was introduced soon after the colonisation of Virginia by recruiting skilled men from Hamburg. Later the metal parts were obtained from the Netherlands, where the technology was far ahead of that in England, where the sawmill remained largely unknown until the late 18th century. The arrival of a sawmill was a large and stimulative step in the growth of a frontier community.
Industrial revolution
Early mills had been taken to the forest, where a temporary shelter was built, and the logs were skidded to the nearby mill by horse or ox teams, often when there was some snow to provide lubrication. As mills grew larger, they were usually established in more permanent facilities on a river, and the logs were floated down to them by log drivers. Sawmills built on navigable rivers, lakes, or estuaries were called cargo mills because of the availability of ships transporting cargoes of logs to the sawmill and cargoes of lumber from the sawmill.
The next improvement was the use of circular saw blades, and soon thereafter, the use of gangsaws, which added additional blades so that a log would be reduced to boards in one quick step. Circular saw blades were extremely expensive and highly subject to damage by overheating or dirty logs. A new kind of technician arose, the sawfiler. Sawfilers were highly skilled in metalworking. Their main job was to set and sharpen teeth. The craft also involved learning how to hammer a saw, whereby a saw is deformed with a hammer and anvil to counteract the forces of heat and cutting. The circular saw was a later introduction, perhaps invented in England in the late 18th century, but perhaps in 17th century Holland (Netherlands). Modern circular saw blades have replaceable teeth, but still need to be hammered.
The introduction of steam power in the 19th century created many new possibilities for mills. Availability of railroad transportation for logs and lumber encouraged building of rail mills away from navigable water. Steam powered sawmills could be far more mechanized. Scrap lumber from the mill provided a ready fuel source for firing the boiler. Efficiency was increased, but the capital cost of a new mill increased dramatically as well.
By 1900, the largest sawmill in the world was operated by the Atlantic Lumber Company in Georgetown, South Carolina, using logs floated down the Pee Dee River from as far as the edge of the Appalachian Mountains in North Carolina.
A restoration project for Sturgeon's Mill in Northern California is underway, restoring one of the last steam-powered lumber mills still using its original equipment.
Current trends
In the twentieth century, the introduction of electricity and high technology furthered this process, and now most sawmills are massive and expensive facilities in which most aspects of the work are computerized. The cost of a new facility with 2 mmfbm/day capacity is up to CAN$120,000,000. A modern operation will produce between 100 mmfbm and 700 mmfbm annually.
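As a rough back-of-the-envelope check on those figures - assuming, purely for illustration, something like 250 operating days per year, a number not given above - the quoted daily capacity and annual output range line up as follows:

```python
daily_capacity_mmfbm = 2        # mmfbm/day, the new-mill capacity quoted above
operating_days = 250            # assumed operating days per year (illustrative only)

annual_output = daily_capacity_mmfbm * operating_days
print(f"~{annual_output} mmfbm per year")   # ~500 mmfbm, within the quoted 100-700 range
```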
Small gasoline-powered sawmills run by local entrepreneurs served many communities in the early twentieth century, and specialty markets still today.
A trend is the small portable sawmill for personal or even professional use. Many different models have emerged with different designs and functions. They are especially suitable for producing limited volumes of boards, or specialty milling such as oversized timber.
Technology has changed sawmill operations significantly in recent years, emphasizing increasing profits through waste minimization and increased energy efficiency as well as improving operator safety. The once-ubiquitous rusty, steel conical sawdust burners have for the most part vanished, as the sawdust and other mill waste is now processed into particleboard and related products, or used to heat wood-drying kilns. Co-generation facilities will produce power for the operation and may also feed superfluous energy onto the grid. While the bark may be ground for landscaping barkdust, it may also be burned for heat. Sawdust may make particle board or be pressed into wood pellets for pellet stoves. The larger pieces of wood that won't make lumber are chipped into wood chips and provide a source of supply for paper mills. Wood by-products of the mills will also make oriented strand board (OSB) paneling for building construction, a cheaper alternative to plywood for paneling.
Additional Images
Wood from Victorian mountain ash, Swifts Creek
A sawmill in Armata, on mount Smolikas, Epirus, Greece.
A preserved water powered sawmill, Norfolk, England.
See also
- "Lumber Manufacturing". Lumber Basics. Western Wood Products Association. 2002. Retrieved 2008-02-12.
- Ritti, Grewe & Kessener 2007, p. 161
- Ritti, Grewe & Kessener 2007, pp. 149–153
- Wilson 2002, p. 16
- C. Singer et at., History of Technology II (Oxford 1956), 643-4.
- Charles E. Peterson, 'Sawdust Trail: Annals of Sawmilling and the Lumber Trade' Bulletin of the Association for Preservation Technology Vol. 5, No. 2. (1973), pp. 84-5.
- Adam Robert Lucas (2005), "Industrial Milling in the Ancient and Medieval Worlds: A Survey of the Evidence for an Industrial Revolution in Medieval Europe", Technology and Culture 46 (1): 1-30 [10-1]
- Peterson, 94-5.
- Oakleaf p.8
- Norman Ball, 'Circular Saws and the History of Technology' Bulletin of the Association for Preservation Technology 7(3) (1975), pp. 79-89.
- Edwardian Farm: Roy Hebdige's mobile sawmill
- Steam traction engines
- IN-TIME Timber Supply Chain Optimization http://www.mjc2.com/real-time-manufacturing-scheduling.htm
- Grewe, Klaus (2009), "Die Reliefdarstellung einer antiken Steinsägemaschine aus Hierapolis in Phrygien und ihre Bedeutung für die Technikgeschichte. Internationale Konferenz 13.−16. Juni 2007 in Istanbul", in Bachmann, Martin, Bautechnik im antiken und vorantiken Kleinasien, Byzas 9, Istanbul: Ege Yayınları/Zero Prod. Ltd., pp. 429–454, ISBN 978-975-8072-23-1
- Ritti, Tullia; Grewe, Klaus; Kessener, Paul (2007), "A Relief of a Water-powered Stone Saw Mill on a Sarcophagus at Hierapolis and its Implications", Journal of Roman Archaeology 20: 138–163
- Oakleaf, H.B. (1920), Lumber Manufacture in the Douglas Fir Region, Chicago: Commercial Journal Company
- Wilson, Andrew (2002), "Machines, Power and the Ancient Economy", The Journal of Roman Studies 92: 1–32
- Steam powered saw mills
- The basics of sawmill (German)
- Nineteenth century sawmill demonstration
- Database of worldwide sawmills
- Reynolds Bros Mill, northern foothills of Adirondack Mountains, New York State
- L. Cass Bowen Mill, Skerry, New York
|
<urn:uuid:f077b2f9-cedd-4343-ad36-cb14791a2c0c>
| 3.890625
|
http://en.wikipedia.org/wiki/Sawmill
|
Image Size is the size of your original digital photo file, measured in pixels and DPI (Dots Per Inch, sometimes referred to as PPI, Pixels Per Inch). What is a pixel? A pixel is a small square dot. DPI refers to the number of dots (pixels) per inch. Why is this important? Well, if an image is too small, you might not be able to order a large print or other photo product. A general rule of thumb for image size versus print size is: the image size should be at least the desired print dimensions multiplied by 300, at 300 DPI. For example, if you want to order a 4x6 print, the image size should be at least 1200 pixels (4 x 300) by 1800 pixels (6 x 300) at 300 DPI. If the image size were half of that (600 by 900), then the 4x6 print would likely come out distorted or pixelated.
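If you want to check this yourself, here is a short script that does the multiplication for you (purely illustrative; the function names are made up for this example):

```python
# Illustrative helper for the 300-DPI rule of thumb described above.

def required_pixels(print_w_in, print_h_in, dpi=300):
    """Pixel dimensions needed for a print of the given size (inches) at the given DPI."""
    return print_w_in * dpi, print_h_in * dpi

def can_print(image_w_px, image_h_px, print_w_in, print_h_in, dpi=300):
    """True if the image file is large enough for the requested print size."""
    need_w, need_h = required_pixels(print_w_in, print_h_in, dpi)
    return image_w_px >= need_w and image_h_px >= need_h

print(required_pixels(4, 6))           # (1200, 1800)
print(can_print(1200, 1800, 4, 6))     # True  -- safe to order a 4x6
print(can_print(600, 900, 4, 6))       # False -- likely to look pixelated
```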
Camera Settings Decide in advance what is more important: image quality or room on your memory card. You can set your camera to take photos that are larger or smaller in size. If you know you will only be printing 4x6 photos, then you can reduce the image quality, which allows you to store more photos on your memory card. If you will be printing enlargements or other photo products like photo books, then keep the setting on "high" for higher quality images. The image sizes will be larger and you will not be able to store as many on your memory card at one time. Also, set the file type as "jpeg" if your camera allows you to control that detail. You might have a "tiff" option, but it is not necessary to save the photos as "tiff" files, and it will only take up more room on your memory card.
If you have a point and shoot camera, open your main menu, and find the setting for "image quality" (or something similar). Usually, the options are "low," "medium," and "high." Choose "high" for higher quality (larger) photos. If you have an SLR camera, you probably have additional options. Just stick to high quality jpeg images, unless you know you will be doing extensive image editing and post-production. In that case, you might want to shoot RAW files. Resolution The resolution of your photo is directly impacted by the image size. The more pixels your photos have, the higher their resolution is.
When you upload photos to your online account, you are given three upload options: "Regular," "Fast," and "Fastest." When you choose "Fast" or "Fastest," the photos are compressed, so the resolution will be less than the original photo file. So, if you are just uploading to order 4x6 prints, "Fastest" will be fine. But, if you wish to order enlargements, photo books, calendars, and other photo products, choose the "Regular" speed, which uploads the photos at their original resolution.
Once the photos are uploaded, you will notice three bars for each photo in your account. If all three bars are green, that means that the resolution of the photo that is in the account is sufficient enough to order just about anything on the site. If the bars are all red, you have uploaded a low resolution photo. Try to find the original photo file and check the size. If the size is sufficient enough to order prints (based on the rule we mentioned above about multiplying the desired print size by 300 and comparing to the actual image size), re-upload the photo at "Regular" upload speed. Photos with two or three red bars will generate poor quality prints, especially if you are trying to order anything larger than 4x6 prints. We also will double check the resolution on our end. If we catch a low res file when printing, we always stop and notify you. We want you to be happy with your prints.
Now that you understand image size and resolution a bit more, and understand why they are important when working in your online photo account, here are a few more extra tips about image size and resolution:
- Most computer screens display photos at 72 DPI. That means the printed photo will look different than how it appears on your computer screen.
- If you crop a photo too much (zoom in too much), it will look pixelated and distorted no matter how large the original image size is.
- Once you take the photo, you cannot genuinely increase its resolution after the fact: a photo editing program can add pixels by upsampling, but it cannot add detail that was never captured. If you want higher-resolution photos, adjust your camera settings before you take any more photos.
|
<urn:uuid:1e09e4c7-ed1a-4864-90b8-22fc564cbb6d>
| 3.328125
|
http://persnicketyprints.com/tip/resolution/resolution-part-2
|
In neuroanatomy, a sulcus (Latin: "furrow", pl. sulci) is a depression or fissure in the surface of the brain.
It surrounds the gyri, creating the characteristic appearance of the brain in humans and other large mammals.
Large furrows (sulci) that divide the brain into lobes are often called fissures. The large furrow that divides the two hemispheres—the interhemispheric fissure—is very rarely called a "sulcus".
The sulcal pattern varies between human individuals, and the most elaborate overview on this variation is probably an atlas by Ono, Kubick and Abernathey: Atlas of the Cerebral Sulci.
Some of the larger sulci are, however, seen across individuals - and even species - so it is possible to establish a nomenclature.
The variation in the number of fissures in the brain (gyrification) between species is related to the size of the animal and the size of the brain. Mammals that have smooth-surfaced or nonconvoluted brains are called lissencephalic and those that have folded or convoluted brains gyrencephalic. The division between the two groups occurs when cortical surface area is about 10 cm² and the brain has a volume of 3–4 cm³. Large rodents such as beavers and capybaras are gyrencephalic, while smaller rodents such as rats and mice are lissencephalic.
In humans, cerebral convolutions appear at about 5 months of gestation and continue to develop through at least the first year after birth. It has been found that the width of cortical sulci increases not only with age, but also with cognitive decline in the elderly.
Hofman MA. (1985). Size and shape of the cerebral cortex in mammals. I. The cortical surface. Brain Behav Evol. 27(1):28-40. PMID 3836731
Hofman MA. (1989). On the evolution and geometry of the brain in mammals. Prog Neurobiol. 32(2):137-58. PMID 2645619
Martin I. Sereno, Roger B. H. Tootell, "From monkeys to humans: what do we now know about brain homologies?" Current Opinion in Neurobiology 15:135-144, (2005).
Caviness VS Jr. (1975). Mechanical model of brain convolutional development. Science. 189(4196):18-21. PMID 1135626
Tao Liu, Wei Wen, Wanlin Zhu, Julian Trollor, Simone Reppermund, John Crawford, Jesse S Jin, Suhuai Luo, Henry Brodaty, Perminder Sachdev (2010). The effects of age and sex on cortical sulci in the elderly. Neuroimage 51(1):19-27, May. PMID 20156569
Tao Liu, Wei Wen, Wanlin Zhu, Nicole A Kochan, Julian N Trollor, Simone Reppermund, Jesse S Jin, Suhuai Luo, Henry Brodaty, Perminder S Sachdev (2011). The relationship between cortical sulcal variability and cognitive performance in the elderly. Neuroimage 56(3):865-873, Jun. PMID 21397704
Gerhardt von Bonin, Percival Bailey, The Neocortex of Macaca Mulatta, The University of Illinois Press, Urbana, Illinois, 1947
|
<urn:uuid:90017c93-fbd3-4e3b-bf56-08ddb285416b>
| 4.03125
|
http://psychology.wikia.com/wiki/Sulcus_(neuroanatomy)?oldid=150425
|
Locating thermophiles in other parts of the universe could well aid in the search for extraterrestrial life. Most researchers agree that if life is found among the stars, it will be microbial, at least in the near-term future. Many have also suggested that intelligent life forms might already be extinct in other parts of the universe. If scientists could locate thermophile microbes on another world, they might be able to piece together an archaeological picture of once-powerful civilizations.
Taiwan is well known for its hot springs. Most tourists that visit the island end up visiting at least one. Many people like to take relaxing baths in them. Hot springs can be great for people with arthritis. New research is proving that they can also be a great place to find astrobiological data.
Photosynthetic thermophiles that live in hot springs may potentially be removing significant amounts of industrially produced carbon dioxide from the atmosphere. They’ve thrived because of fundamental changes to the atmosphere caused by humanity. In fact, there are some scientists who feel that these microbes could play a vital role in regulating the planet’s climate. That role might become increasingly important in the future.
Planets that were once inhabited by industrially developed civilizations that have since passed might be teeming with life similar to these. If a planet was sufficiently changed by another race of beings, it could have ultimately favored the development of these tiny beings. They could indicate that intelligent lifeforms once inhabited a planet, and that planet could be different today than it was in the past.
While discovering a planet full of microbes would be initially interesting, in the future it could be a relatively common occurrence. Therefore, news services of the future might very well pass by such stories after a few weeks – much like they do today with the discovery of new exoplanets. Finding sufficient numbers of photosynthetic thermophiles would be telling about the history of a world, but it would also require a great deal of geological activity. Then again, there’s nothing to say that other civilizations wouldn’t also have the ability to increase the amount of geological activity on other planets. They might even do it on purpose, as a way of terraforming for instance.
For that matter, humans might want to give that a try. Venus is superheated by a runaway greenhouse effect driven by the excess carbon dioxide in its atmosphere. If water were transported to that very hot world, colonists could use the resulting geysers to grow bacteria that would absorb the atmospheric gas.
Leu, J., Lin, T., Selvamani, M., Chen, H., Liang, J., & Pan, K. (2012). Characterization of a novel thermophilic cyanobacterial strain from Taian hot springs in Taiwan for high CO2 mitigation and C-phycocyanin extraction Process Biochemistry DOI: 10.1016/j.procbio.2012.09.019
|
<urn:uuid:fb936873-c4b3-4301-85c5-1bd5eb0d9a9c>
| 3.8125
|
http://wiredcosmos.com/2012/10/18/searching-for-extraterrestrial-microbes/
|
Cleaner Water: North Carolina's Straight-Pipe Elimination Project
by Fred D. Baldwin
Some years ago, William and Elizabeth Thomas tried unsuccessfully to install a properly designed septic system that would replace a four-inch pipe draining household wastewater straight into a little creek a few yards behind their home.
"I scrounged up enough money to put one in," William Thomas says. Spreading his hands about two feet apart, he adds, "But I didn't get down this far until we hit water."
The Thomases live on a small hillside lot in a rural area of Madison County, North Carolina. Their situation is similar to that of many rural Appalachian families who for one reason or another-money, the lay of the land, or both-live in older homes with inadequate septic systems. By the end of the year, however, they and many other Madison County residents will have new septic systems in place, thanks to a county program backed by an impressive team of state, federal, and local partners ranging from area conservation groups to the Appalachian Regional Commission (ARC).
The genesis of the program goes back to 1995, when Governor James B. Hunt created the Year of the Mountains Commission to assess current and future issues affecting North Carolina's western mountain communities. To protect and improve water quality, the commission recommended that, in addition to reducing mine drainage and agricultural runoff, the state Department of Environment and Natural Resources (DENR) be directed to "aggressively pursue a program to eliminate the practice of 'straight-piping.' " For years, decades even, it had been politically easier to ignore this issue. The commission pointed out that the 1990 Census of housing showed that nearly 50,000 households in North Carolina did not have connections to either municipal sewage systems or adequate septic systems. This was true not only in mountainous areas, but also in low-income communities across the state. Some of these households were draining "black water," which includes raw sewage, into creeks or streams; others were piping toilet wastes to a septic tank but straight-piping soapy and bacteria-laden "gray water" from sinks, baths, and dishwashers. Still other households were relying on septic systems built before the installation of a dishwasher or a second bathroom; these older systems were now prone to backups or leaks.
As early as 1958, the state took the first of many steps to regulate or eliminate straight-piping. This and subsequent measures were loosely enforced. In 1996, Governor Hunt established a goal to eliminate straight-piping of untreated wastewater into western North Carolina's rivers and streams by the end of the decade. "Every child should grow up in a community with clean, safe water," Hunt says.
That same year, in response to the Year of the Mountains Commission report, the North Carolina General Assembly created the Wastewater Discharge Elimination (WaDE) program, which differed significantly from earlier, essentially punitive measures. The new law provided a temporary "amnesty" for households reporting conditions violating state environmental health codes and, more important, provided technical assistance to communities wishing to take advantage of the state's Clean Water Management Trust Fund (a fund established to finance projects that address water pollution problems). Terrell Jones, the WaDE team leader, praises Madison County for being the first county to conduct a wastewater discharge survey under the new law, and he emphasizes that straight-piping, especially of gray water, is a statewide problem.
Driving around Madison County, you see why wastewater problems are costly to correct. Roads wind up and down past rocky, fast-flowing streams and creeks that drain into the French Broad River, where white-water rafters come for excitement. Houses on back roads are far apart but near streams. If there's enough land suitable for a septic tank and drainage field downhill from one of those houses, a conventional septic system can be installed for about $2,000. But if wastewater has to be pumped uphill, the cost can easily reach $8,000 or more. This explains why punitive measures against straight-piping have been loosely enforced. Local officials know that even $2,000 is beyond the means of many families. Who would tell cash-strapped people-more often than not, elderly-that they had to sell or abandon their home or family farmstead because of a housing code violation?
A Growing List of Partners
Madison County officials decided to take the lead on a positive approach. They first turned to the Land-of-Sky Regional Council, an Asheville-based local development district that represents 19 governmental units in four Appalachian counties, including Madison. The Land-of-Sky staff took advantage of an infrastructure demonstration grant from the North Carolina Division of Community Assistance and funds from ARC to begin a wastewater survey and community-planning process. From that point, the list of partners grew rapidly. They included the DENR WaDE program, U.S. Department of Agriculture (USDA) Rural Development, the North Carolina Rural Communities Assistance Project, the state-funded Clean Water Management Trust Fund, the Pigeon River Fund, the Community Foundation of Western North Carolina, the Western North Carolina Housing Partnership, and ARC.
The Madison County Health Department and Land-of-Sky took the lead locally, working with a grassroots planning committee representing a broad base of organizations, outdoor sports enthusiasts, environmental groups, and private-property owners (some of them living in homes with straight-piping). Among the decisions: to test every building in Madison County not connected to a municipal system, not just the older units. That way, no one would feel singled out, and all faulty septic systems would be spotted. "It's made the process go slower," says Heather Bullock, the Land-of-Sky regional planner assisting the project, "but it's made it better."
Not all that much slower, either. By the end of September 1999, health department employees had surveyed 4,594 of an estimated 10,000 houses in Madison County. Where plumbing configurations weren't self-evident, the surveyors dropped dye tablets into sinks and toilets (different colors for each) to see if colored water emerged into a stream or septic tank area. The survey identified 945 noncompliant systems (20 percent of the total). Of these, 258 were straight-piping black water; 535, gray water. Another 116 had failing septic systems, and 36 had only outhouses. The incidence of problems closely tracked household income.
A welcome surprise, says Kenneth D. Ring, health director of the Madison County Health Department, was how well the inspectors were received. "The cooperation has been overwhelming," he says.
Although most people with poor systems knew they had problems and wanted to correct them, some knew little or nothing about the design of their systems. For example, Ronnie Ledford, the chief building inspector and environmental health supervisor on the health department staff, recalls a visit with a man living in a mobile home. "He thought he had a septic system," Ledford says. "He had a 55-gallon drum. We found a 'black' pipe draining into a ditch line. He was very shocked. It took him some time to get his money together, but he took care of it himself."
The problem all along, however, had been that too few people had been able to get the money together to take care of things for themselves. All the agencies involved chipped in to the extent their guidelines permitted. A few septic systems were renovated with Community Development Block Grant funds, but that program's rules require that any unit being renovated in any way be brought up to code in all respects-prohibitively expensive for people in housing with other problems. The USDA provided Section 504 loans and grants for eligible elderly, low-income home owners. The largest pool of money came from the Clean Water Management Trust Fund, which awarded Madison County $750,000 for a revolving loan and grant fund, plus funds for administration.
Even so, setting up a workable program wasn't easy. Many low-income area residents had poor credit ratings and little collateral with which to guarantee loans. If the program's loan requirements were too tight, applicants wouldn't qualify for loans, and pollutants would continue to drain into streams; too loose, and the loan fund itself would soon drain away.
Help with Funding
The Madison County Revolving Loan and Grant Program was established with these concerns in mind. The program includes both grants and loans, the ratios based on household income. In determining credit-worthiness, the program coordinator looks at whether difficulties were caused by circumstances beyond the family's control, such as a medical emergency. If a loan still looks too risky, the applicant is referred to an educational program run by the nonprofit Consumer Credit Counseling Service of Western North Carolina, in Asheville.
When it appeared that Madison County might lack the legal flexibility for making the needed loans, the partners turned for help to the Center for Community Self-Help, a statewide nonprofit that offers loans as a community development tool. Self-Help agreed to make the loans from its funds, using the county's fund as its collateral. This somewhat complicated arrangement gives everyone involved some freedom to maneuver. The default rate is likely to be substantially higher than a bank could tolerate, but Self-Help makes sure applicants take the loan seriously.
"The goal is to clean up the water," explains Tom Barefoot, the USDA Rural Development manager for the area. "We're trying to build on what it takes to get people in [the program], not on possible failure."
"This is a multi-year program," adds Marc Hunt, a loan officer with Self-Help's western North Carolina regional office. "We say, ' Work on your credit and get back on the list in a few months.' We don't want to enable consumers to develop bad habits."
Contracts were let this fall for installing the first batch of new septic systems (not counting a handful of early projects). By the end of the year 2000, Madison County hopes to have replaced 130 straight-pipes.
The benefits will be both tangible and intangible. First of all, the streams of Madison County, some of which flow into a river providing drinking water for towns downstream, will be cleaner. That has important health and economic benefits for an area increasingly attractive to both outdoor recreationalists and people planning to build homes away from cities. Ironically, in some jurisdictions, worries about "image" have been a factor in unwillingness to deal more aggressively with straight-piping. "Madison County recognized an opportunity," says Barefoot, "and they had the courage to act. It's not always a politically safe decision." Marc Hunt agrees: "Many rural counties have similar situations. Any one of them could have done it, but Madison County took the lead."
Governor Hunt also has praise for the county. "I am proud of everyone involved in Madison County's work to find and fix straight-piping problems in a cooperative effort. This will only help our economic development, our public health, and our environment. But most of all, we're helping to make sure our children can grow up in a community with clean, safe water."
The various public and private partners involved hope that Madison County's experience will become a model for other counties. There have been expressions of interest from county officials inside and outside the Appalachian areas of the state.
"It's really incredible to me," says Jody Lovelace, a community development specialist with USDA Rural Development, "how we've been able to pull this together. Everyone said, 'Let's not just clean up the water. Let's help these folks develop financial responsibility and financial pride.' " For the individual households involved, there are direct benefits. Some will have a chance to build or improve a credit history. Most will benefit at least somewhat from improved property values. All, of course, will be glad to be rid of septic systems that back up or of the unpleasant and potentially dangerous discharge of wastewater of any kind near their homes. "It's a health hazard," Elizabeth Thomas says.
The Thomases, who were defeated by waterlogged soil when they tried to replace their old system years earlier, this time received help from a neighbor. He agreed to let them install a septic tank on his vacant field, downhill and off to one side of their house.
"He's a good neighbor," William Thomas says.
That pretty much sums up what Madison County, Land-of-Sky, and their various partners have accomplished. The straight-pipe elimination project began with a blue-ribbon commission's straight talk about an old problem. It's grown into a program that gives everyone involved-from agency officials to rural people living in houses built by their grandparents many decades ago-a chance to prove that they can be good neighbors to each other.
Fred D. Baldwin is a freelance writer based in Carlisle, Pennsylvania.
|
<urn:uuid:fefff558-6fce-48fd-ba57-d053c7be5dc4>
| 3.4375
|
http://www.arc.gov/magazine/articles.asp?ARTICLE_ID=94&F_ISSUE_ID=&F_CATEGORY_ID=16
|
Thousands of lakes dot the marshy Arctic tundra regions. Now, in the latest addition to the growing body of evidence that global warming is significantly affecting the Arctic, two recent studies suggest that thawing permafrost is the cause of two seemingly contradictory observations: both rapidly growing and rapidly shrinking lakes.
Thawing permafrost is altering the lakes that dominate Arctic landscapes, such as this one in western Siberia. Courtesy of Laurence C. Smith.
The first study is a historical analysis of changes to 10,000 Siberian lakes over the past 30 years, a period of warming air and soil temperatures. Using satellite images, Laurence Smith, a geographer at the University of California, Los Angeles, and colleagues found that, since the early 1970s, 125 Siberian lakes vanished completely, and those that remain averaged a 6 percent loss in surface area, a total of 930 square kilometers.
They report in the June 3 Science that the spatial pattern of lake disappearance suggests that the lakes drained away when the permafrost below them thawed, allowing the lake water to seep down into the groundwater. However, the team also found that lakes in northwestern Siberia actually grew by 12 percent, and 50 new lakes formed. Both of the rapid changes are due to warming, they say, and if the warming trend continues, the northern lakes will eventually shrink as well.
"These two processes are similar, in that we're witnessing permafrost degradation in both regions," says co-author Larry Hinzman, a hydrologist at the University of Alaska in Fairbanks, who in previous studies documented shrinking lakes in southern Alaska. "In the warmer, southern areas, we get groundwater infiltration, but in the northern areas, where the permafrost is thicker and colder, it's going to take much, much longer for that to occur. So instead of seeing lakes shrinking there, we're seeing lakes growing."
That finding is consistent with the second study, which focused on a set of unusually oriented, rapidly growing lakes in northern Alaska, an area of continuous permafrost. Jon Pelletier, a geomorphologist at the University of Arizona in Tucson, reports in the June 30 Journal of Geophysical Research Earth Surface that the odd alignment of the lakes is caused not by wind direction but by permafrost melting faster at the downhill end of the lake, which has shallower banks.
Since the 1950s, scientists have attributed the odd alignment of the egg-shaped lakes to winds blowing perpendicularly to the long axes of the lakes, which then set up currents that caused waves to break at the northwest and southeast ends, thus preferentially eroding them. "The prevailing wind direction idea has been around so long that we don't even think about it," Smith says, "but Jon's [Pelletier] work is challenging that. It's a very interesting paper."
Wind-driven erosion occurs in the Great Lakes, but at rates of about a meter a year. The Alaskan oriented thaw lakes grow at rates of 5 meters or more per year. Pelletier says this rate difference suggests a different process is at work.
According to the model, the direction and speed of growth depend on where and how quickly the permafrost thaws, which is determined by two factors: how the water table intersects the slope of the landscape and how fast the summer temperature increases. If the permafrost thaws abruptly, the shorter, downhill bank is more likely to thaw first. The soggy soil slumps into the water, and the perimeter of the lake is enlarged. "It's not just the [global] warming trend, but also how quickly the warming takes place in the summertime," Pelletier says.
Hinzman says that the lakes are just one part of the Arctic water cycle, which has seen an increasing number of perturbations in recent years. "The whole hydrologic cycle is changing," he says, "and this is just one component of that."
Understanding how the hydrologic cycle is changing is important, Hinzman says, because the amount of freshwater runoff into the Arctic Ocean affects global ocean circulation and the amount of sea ice, and thus climate worldwide. "If global warming continues to the point where permafrost goes away, there will be fewer lakes," Smith says. And a drier, less marshy Arctic could alter weather patterns and ecosystems, researchers say, affecting everything from the subsistence lifestyle of native people to the hazard of fire on the tundra.
|
<urn:uuid:5fdf99e1-ac10-4897-aae4-baeb9600a36e>
| 3.59375
|
http://www.geotimes.org/sept05/NN_arcticlakes.html
|
On this day in 1863, Union General Ulysses S. Grant breaks the siege of Chattanooga, Tennessee, in stunning fashion by routing the Confederates under General Braxton Bragg at Missionary Ridge.
For two months following the Battle of Chickamauga, the Confederates had kept the Union army bottled up inside a tight semicircle around Chattanooga. When Grant arrived in October, however, he immediately reversed the defensive posture of his army. After opening a supply line by driving the Confederates away from the Tennessee River in late October, Grant prepared for a major offensive in late November. It was launched on November 23, when he sent General George Thomas to probe the center of the Confederate line. This simple plan turned into a complete victory, and the Rebels retreated higher up Missionary Ridge. On November 24, the Yankees captured Lookout Mountain on the extreme right of the Union lines, setting the stage for the Battle of Missionary Ridge.
The attack took place in three parts. On the Union left, General William T. Sherman attacked troops under Patrick Cleburne at Tunnel Hill, an extension of Missionary Ridge. In difficult fighting, Cleburne managed to hold the hill. On the other end of the Union lines, General Joseph Hooker was advancing slowly from Lookout Mountain, and his force had little impact on the battle. It was at the center that the Union achieved its greatest success. The soldiers on both sides received confusing orders. Some Union troops thought they were only supposed to take the rifle pits at the base of the ridge, while others understood that they were to advance to the top. Some of the Confederates heard that they were to hold the pits, while others thought they were to retreat to the top of Missionary Ridge. Furthermore, poor placement of Confederate trenches on the top of the ridge made it difficult to fire at the advancing Union troops without hitting their own men, who were retreating from the rifle pits. The result was that the attack on the Confederate center turned into a major Union victory. After the center collapsed, the Confederate troops retreated on November 26, and Bragg pulled his troops away from Chattanooga. He resigned shortly thereafter, having lost the confidence of his army.
The Confederates suffered some 6,600 men killed, wounded, and missing, and the Union lost around 5,800. Grant missed an opportunity to destroy the Confederate army when he chose not to pursue the retreating Rebels, but Chattanooga was secured. Sherman resumed the attack in the spring after Grant was promoted to general in chief of all Federal forces.
|
<urn:uuid:7b1a4a78-5b08-48b8-86b9-bcbde260344d>
| 4.03125
|
http://www.history.com/this-day-in-history/-battle-of-missionary-ridge?catId=2
|
First-Hand:The Foundation of Digital Television: the origins of the 4:2:2 component digital standard
Contributed by Stanley Baron, IEEE Life Fellow
By the late 1970s, the application of digital technology in television production was widespread. A number of digital television products had become available for use in professional television production. These included graphics generators, recursive filters (noise reducers), time base correctors, synchronizers, and standards converters, among others.
However, each manufacturer had adopted a unique digital interface, and this meant that these digital devices when formed into a workable production system had to be interfaced at the analog level, thereby forfeiting many of the advantages of digital processing.
Most broadcasters in Europe and Asia employed television systems based on 625/50 scanning (625 lines per picture, repeated 50 fields per second), with the PAL color encoding system used in much of Western Europe, Australia, and Asia, while France, the Soviet Union, Eastern Europe, and China used variations of the SECAM color encoding system. There were differences in luminance bandwidth: 5.0 MHz for B/G PAL, 5.5 MHz for PAL in the UK, and nominally 6 MHz for SECAM. There were also legacy monochrome systems, such as 405/50 scanning in the UK and the 819/50 system in France. The color television system that was dominant in the Americas, Japan, and South Korea was based on 525/60 scanning, 4.2 MHz luminance bandwidth, and the NTSC color standard.
NTSC and PAL color coding are both linear processes. Therefore, analog signals in the NTSC format could be mixed and edited during studio processing, provided that color sub carrier phase relationships were maintained. The same was true for production facilities based on the PAL system. In analog NTSC and PAL studios it was normal to code video to composite form as early as possible in the signal chain so that each signal required only one wire for distribution rather than the three needed for RGB or YUV component signals. The poor stability of analog circuitry meant that matching separate-channel RGB or YUV component signals was impractical except in very limited areas. SECAM employed frequency-modulated coding of the color information, which did not allow any processing of composite signals, so the very robust SECAM composite signal was used only on videotape recorders and point-to-point links, with decoding to component signals for mixing and editing. Some SECAM broadcasters avoided the problem by operating their studios in PAL and recoding to SECAM for transmission.
The international community recognized that it would be best served by agreement on a single production (studio) digital interface standard, regardless of which color standard (525 line NTSC, 625 line PAL, or 625 line SECAM) was employed for transmission. The cost of implementing digital technology was seen as directly connected to production volume: the higher the volume, the lower the cost to the end user, in this case the broadcasting community.
Work on determining a suitable standard was organized by the Society of Motion Picture and Television Engineers (SMPTE) on behalf of the 525/60 broadcasting community and the European Broadcasting Union (EBU) on behalf of the 625/50 broadcasting community.
In 1982, the international community reached agreement on a common 4:2:2 Component Digital Television Standard. This standard as documented in SMPTE 125, several EBU Recommendations, and ITU-R Recommendation 601 was the first international standard adopted for interfacing equipment directly in the digital domain avoiding the need to first restore the signal to an analog format.
The interface standard was designed so that the basic parameter values provided would work equally well in both 525 line/60 Hz and 625 line/50 Hz television production environments. The standard was developed in a remarkably short time, considering its pioneering scope, as the world wide television community recognized the urgent need for a solid basis for the development of an all digital television production system. A component-based (Y, R-Y, B-Y) system based on a luminance (Y) sampling frequency of 13.5 MHz was first proposed in February 1980; the world television community essentially agreed to proceed on a component based system in September 1980 at the IBC; a group of manufacturers supplied devices incorporating the proposed interface at a SMPTE sponsored test demonstration in San Francisco in February 1981; most parameter values were essentially agreed to by March 1981; and the ITU-R (then CCIR) Plenary Assembly adopted the standard in February 1982.
What follows is an overview of this historic achievement, providing a history of the standard's origins, explaining how the standard came into being, why various parameter values were chosen, the process that led the world community to an agreement, and how the 4:2:2 standard led to today's digital high definition production standards and digital broadcasting standards.
It is understood that digital processing of any signal requires that the sample locations be clearly defined in time and space and, for television, processing is simplified if the samples are aligned so that they are line, field, and frame position repetitive yielding an orthogonal (rectangular grid) sampling pattern.
While the NTSC system color sub carrier frequency (fsc) was a simple rational multiple of the horizontal line frequency (fH) [fsc = (m/n) x fH], lending itself to orthogonal sampling, the PAL system color sub carrier employed a field frequency offset and the SECAM color system employed frequency modulation of the color subcarrier, which made sampling the color information contained within those systems a more difficult challenge. Further, since some European nations had adopted various forms of the PAL 625 line/50 Hz composite color television standard as their broadcast standard and other European nations had adopted various forms of the SECAM 625 line/50 Hz composite color television standard, the European community's search for a common digital interface standard implied that a system independent of the color coding technique used for transmission would be required.
Developments within the European community
In September 1972, the European Broadcasting Union (EBU) formed Working Party C, chaired by Peter Rainger to investigate the subject of coding television systems. In 1977, based on the work of Working Party C, the EBU issued a document recommending that the European community consider a component television production standard, since a component signal could be encoded as either a PAL or SECAM composite signal just prior to transmission.
At a meeting in Montreux, Switzerland in the spring of 1979, the EBU reached agreement with production equipment manufacturers that the future of digital program production in Europe would be best served by component coding rather than composite coding, and the EBU established a research and development program among its members to determine appropriate parameter values. This launched an extensive program of work within the EBU on digital video coding for program production. The work was conducted within a handful of research laboratories across Europe and within a reorganized EBU committee structure including: Working Party V on New Systems and Services chaired by Peter Rainger; subgroup V1 chaired by Yves Guinet, which assumed the tasks originally assigned to Working Party C; and a specialist supporting committee V1 VID (Vision) chaired by Howard Jones. David Wood, representing the EBU Technical Center, served as the secretariat of all of the EBU committees concerned with digital video coding.
In 1979, EBU V1 VID proposed a single three channel (Y, R-Y, B-Y) component standard. The system stipulated a 12.0 MHz luminance (Y) channel sampling frequency and provided for each of the color difference signals (R-Y and B-Y) to be sampled at 4.0 MHz. The relationship between the luminance and color difference signals was noted sometimes as (12:4:4) and sometimes as (3:1:1). The proposal, based on the results of subjective quality evaluations, suggested these values were adequate to transparently deliver 625/50i picture quality.
The EBU Technical Committee endorsed this conclusion at a meeting in April 1980, and instructed its technical groups: V, V1, and V1 VID to support this effort.
SMPTE organized for the task at hand
Three SMPTE committees were charged with addressing various aspects of world wide digital standards. The first group, organized in late 1974, was the Digital Study Group chaired by Charles Ginsburg. The Study Group was charged with investigating all issues concerning the application of digital technology to television. The second group was a Task Force on Component Digital Coding with Frank Davidoff as chairman. This Task Force, which began work in February 1980, was charged with developing a recommendation for a single worldwide digital interface standard. While membership in SMPTE committees is generally open to any interested and affected party, the membership of the Task Force had been limited to recognized experts in the field. The third group was the Working Group on Digital Video Standards. This Working Group was charged with documenting recommendations developed by the Study Group or the Task Force and generating appropriate standards, recommended practices, and engineering guidelines.
In March 1977, the Society of Motion Picture and Television Engineers (SMPTE) began development of a digital television interface standard. The work was assigned by SMPTE's Committee on New Technology chaired by Fred Remley to the Working Group on Digital Video Standards chaired by Dr. Robert Hopkins.
By 1979, the Working Group on Digital Video Standards was completing development of a digital interface standard for NTSC television production. Given the state of the art at the time and the desire to develop a standard based on the most efficient mechanism, the Working Group created a standard that allowed the NTSC television video signal to be sampled as a single composite color television signal. It was agreed after a long debate on the merits of three times sub carrier (3fsc) versus four times sub carrier (4fsc) sampling that the Composite Digital Television Standard would require the composite television signal with its luminance channel and color sub carrier to be sampled at four times the color sub carrier frequency (4fsc) or 14.31818... MHz.
During the last quarter of 1979, agreement was reached on a set of parameter values, and the drafting of the Composite Digital Television Standard was considered completed. It defined a signal sampled at 4fsc with 8 bit samples. This standard seemed to resolve the problem of providing a direct digital interface for production facilities utilizing the NTSC standard.
By 1980, the Committee on New Technology was being chaired by Hopkins and the Working Group on Digital Video Standards was being chaired by Ken Davies.
Responding to communications with the EBU and so as not to prejudice the efforts being made to reach agreement on a world wide component standard, in January 1980, Hopkins put the finished work on the NTSC Composite Digital Television Standard temporarily aside so that any minor modifications to the document that would serve to meet possible world wide applications could be incorporated before final approval. Since copies of the document were bound in red binders, the standard was referred to as the "Red Book".
Seeking a Common Reference
The agenda of the January 1980 meeting of SMPTE's Digital Study Group included a discussion on a world wide digital television interface standard. At that meeting, the Study Group considered the report of the European community, and members of the EBU working parties had been invited to attend. Although I was not a member of the Study Group, I was also invited to attend the meeting.
It was recognized that while a three color representation of the television signal using Red, Blue, and Green (R, G, B) was the simplest three component representation, a more efficient component representation, but one that is more complex, is to provide a luminance or gray scale channel (Y) and two color difference signals (R-Y and B-Y). The R-Y and B-Y components take advantage of the characteristics of the human visual system which is less sensitive to high resolution information for color than for luminance. This allows for the use of a lower number of samples to represent the color difference signals without observable losses in the restored images. Color difference components (noted as I, Q or U, V or Dr, Db) were already in use in the NTSC, PAL, and SECAM systems to reduce the bandwidth required to support color information.
Members of the NTSC community present at the January 1980 Study Group meeting believed that the EBU V1 VID proposed 12.0 MHz, (3:1:1) set of parameters would not meet the needs for NTSC television post production particularly with respect to chroma keying, then becoming an important tool. In addition, it was argued that: (1) the sampling frequency was too low (too close to the Nyquist point) for use in a production environment where multiple generations of edits were required to accommodate special effects, chroma keying, etc., and (2) a 12.0 MHz sampling system would not produce an orthogonal array of samples in NTSC (at 12.0 MHz, there would be 762.666... pixels per line).
The NTSC community offered for consideration a single three channel component standard based on (Y, R-Y, B-Y). This system stipulated a 4fsc (14.318 MHz) luminance sampling frequency equal to 910 x fH525, where fH525 is the NTSC horizontal line frequency. The proposal further provided for each of the color difference components to be sampled at 2fsc or 7.159 MHz. This relationship between the luminance and color difference signals was noted as (4:2:2). Adopting 4fsc as the luminance sampling frequency would facilitate trans coding of video recorded using the “single wire” NTSC composite standard with studio mixers and editing equipment based on a component video standard.
Representatives of the European television community present at the January 1980 Study Group meeting pointed to some potential difficulties with this proposal. The objections included: (1) that the sampling frequency was too high for use in practical digital recording at the time, and (2) a 14.318 MHz sampling system would not produce an orthogonal array of samples in a 625 line system (at 14.318 MHz, there would be 916.36... pixels per line).
During the January 1980 Study Group meeting discussion, I asked why the parties involved had not considered a sampling frequency that was a multiple of the 4.5 MHz sound carrier, since the horizontal line frequencies of both the 525 line and 625 line systems had an integer relationship to 4.5 MHz.
The original definition of the NTSC color system established a relationship between the sound carrier frequency (fs) and the horizontal line frequency (fH525) as fH525 = fs/286 = 15734.265... Hz, had further defined the vertical field rate fV525 = (fH525 x 2)/525 = 59.94006 Hz, and defined the color sub carrier (fsc) = (fH525 x 455)/2 = 3.579545.... MHz. Therefore, all the frequency components of the NTSC system could be derived as integer sub multiples of the sound carrier.
The 625 line system defined the horizontal line frequency (fH625) = 15625 Hz and the vertical field rate fV625 = (fH625 x 2)/625 = 50 Hz. It was noted from the beginning that the relationship between fs and the horizontal line frequency (fH625) could be expressed as fH625 = fs/288. Therefore, any sampling frequency that was an integer multiple of 4.5 MHz (fs) would produce samples in either the 525 line or 625 line systems that were orthogonal.
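That relationship is easy to verify numerically. The short script below is purely illustrative (the constant names are mine, not from the original papers); it derives the NTSC frequencies from the 4.5 MHz sound carrier and shows the samples-per-line counts for the sampling frequencies then under discussion.

```python
from fractions import Fraction

FS = Fraction(4_500_000)              # sound carrier fs, Hz

# NTSC (525/60) frequencies, all exact sub-multiples of fs
FH_525 = FS / 286                     # horizontal line frequency = 15734.265... Hz
FV_525 = FH_525 * 2 / 525             # field rate = 59.94006... Hz
FSC    = FH_525 * Fraction(455, 2)    # color sub carrier = 3.579545... MHz

# 625/50 line frequency, also an exact sub-multiple of fs
FH_625 = FS / 288                     # = 15625 Hz

# Samples per total line for the candidate luminance sampling frequencies;
# orthogonal (line-repetitive) sampling requires an integer in both systems.
for label, f in [("12.0 MHz", Fraction(12_000_000)),
                 ("13.5 MHz", 3 * FS),
                 ("14.318 MHz (4 fsc)", 4 * FSC)]:
    print(f"{label}: {float(f / FH_525):.3f} (525) / {float(f / FH_625):.3f} (625)")

# 12.0 MHz            : 762.667 (525) / 768.000 (625)
# 13.5 MHz            : 858.000 (525) / 864.000 (625)  <- integer in both systems
# 14.318 MHz (4 fsc)  : 910.000 (525) / 916.364 (625)
```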
I was asked to submit a paper to the Study Group and the Task Force describing the relationship. The assignment was to cover two topics. The first topic was how the 625 line/50Hz community might arrive at a sampling frequency close to 14.318 MHz. The second topic was to explain the relationship between the horizontal frequencies of the 525 line and 625 line systems and 4.5 MHz.
This resulted in my authoring a series of papers written between February and April 1980 addressed to the SMPTE Task Force explaining why 13.5 MHz should be considered the choice for a common luminance sampling frequency. The series of papers was intended to serve as a tutorial with each of the papers expanding on the points previously raised. A few weeks after I submitted the first paper, I was invited to be a member of the SMPTE Task Force. During the next few months, I responded to questions about the proposal, and I was asked to draft a standards document.
Crunching the numbers
The first paper I addressed to the Task Force was dated 11 February 1980. This paper pointed to the fact that since the horizontal line frequency of the 525 line system (fH525 had been defined as 4.5 MHz/286 (or 2.25 MHz/143), and the horizontal line frequency of the 625 line system (fH625) was equal to 4.5 MHz/288 (or 2.25 MHz/144), any sampling frequency that was a multiple of 4.5 MHz/2 could be synchronized to both systems.
Since it would be desirable to sample color difference signals at less than the sampling rate of the luminance signal, then a sampling frequency that was a multiple of 2.25 MHz would be appropriate for use with the color difference components (R-Y, B-Y) while a sampling frequency that was a multiple of 4.5 MHz would be appropriate for use with the luminance component (Y).
Since the European community had argued that the (Y) sampling frequency must be lower than 14.318 MHz and the NTSC countries had argued that the (Y) sampling frequency must be higher than 12.00 MHz, my paper and cover letter dated 11 February 1980 suggested consideration of 3 x 4.5 MHz or 13.5 MHz as the common luminance (Y) channel sampling frequency (858 times the 525 line horizontal line frequency rate and 864 times the 625 line rate both equal 13.5 MHz).
My series of papers suggested adoption of a component color system based on (Y, R-Y, B-Y) and a luminance/color sampling relationship of (4:2:2), with the color signals sampled at 6.75 MHz. In order for the system to facilitate standards conversion and picture manipulation (such as that used in electronic special effects and graphics generators), both the luminance and color difference samples should be orthogonal. The desire to be able to trans code between component and composite digital systems implied a number of samples per active line that was divisible by four.
The February 1980 note further suggested that the number of samples per active line period should be greater than 715.5, so as to accommodate the active line periods of all of the world wide community's standards. While 720 samples per active line was not suggested until my next note (720 is the number found in Rec. 601 and SMPTE 125), 720 is the first value that "works." 716 is the first number greater than 715.5 that is divisible by 4 (716 = 4 x 179), but it does not lend itself to standards conversion between 525 line component and composite color systems or provide sufficiently small pixel groupings to facilitate special effects or data compression algorithms.
Additional arguments in support of 720 were provided in notes I generated prior to IBC'80 in September. Note that 720 equals 6! [6 factorial = 6 x 5 x 4 x 3 x 2 x 1] = 2^4 x 3^2 x 5. This allows for many small factors, important for finding an economical solution to conversion between the 525 line component and composite color standards and for image manipulation in special effects and analysis of blocks of pixels for data compression. The composite 525 line digital standard had provided for 768 samples per active line. 768 = 2^8 x 3. The relationship between 768 and 720 can be described as 768/720 = (2^8 x 3)/(2^4 x 3^2 x 5) = 2^4/(3 x 5) = 16/15. A set of 16 samples in the NTSC composite standard could be used to calculate a set of 15 samples in the NTSC component standard.
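Those number-theoretic claims can be checked in a few lines; the sketch below is illustrative only.

```python
from math import factorial, gcd

ACTIVE_COMPONENT = 720     # samples per active line, component (4:2:2) standard
ACTIVE_COMPOSITE = 768     # samples per active line, 4fsc NTSC composite standard

assert ACTIVE_COMPONENT == factorial(6)            # 720 = 6!
assert ACTIVE_COMPONENT == 2**4 * 3**2 * 5         # prime factorization
assert ACTIVE_COMPOSITE == 2**8 * 3
assert ACTIVE_COMPONENT % 4 == 0 and ACTIVE_COMPONENT > 715.5

# Transcoding ratio between the composite and component 525-line rasters
g = gcd(ACTIVE_COMPOSITE, ACTIVE_COMPONENT)
print(f"{ACTIVE_COMPOSITE // g}:{ACTIVE_COMPONENT // g}")   # 16:15
```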
Proof of Performance
At the September 1980 IBC conference, international consensus became focused on the 13.5 MHz, (4:2:2) system. However, both the 12.0 MHz and 14.318 MHz systems retained some support for a variety of practical considerations. Discussions within the Working Group on Digital Video Standards indicated that consensus could not be achieved without the introduction of convincing evidence.
SMPTE proposed to hold a “Component Coded Digital Video Demonstration” in San Francisco in February 1981 organized by and under the direction of the Working Group on Digital Video Standards to evaluate component coded systems. A series of practical tests/demonstrations were organized to examine the merits of various proposals with respect to picture quality, production effects, recording capability and practical interfacing, and to establish an informed basis for decision making.
The EBU had scheduled a series of demonstrations in January 1981 for the same purpose. SMPTE invited the EBU to hold its February meeting of the Bureau of the EBU Technical Committee in San Francisco to be followed by a joint meeting to discuss the results of the tests. It was agreed that demonstrations would be conducted at three different sampling frequencies (near 12.0 MHz, 13.5 MHz, and 14.318 MHz) and at various levels of performance.
From the 2nd through the 6th of February 1981 (approximately one year from the date of the original 13.5 MHz proposal), SMPTE conducted demonstrations at KPIX Television's Studio N facilities in San Francisco in which a number of companies participated. Each participating sponsor developed equipment with the digital interface built to the specifications provided. The demonstration was intended to provide proof of performance and to allow the international community to come to an agreement.
The demonstration organizing committee had to improvise many special interfaces and interconnections, as well as create a range of test objects, test signals, critical observation criteria, and a scoring and analysis system and methodology.
The demonstrations were supported with equipment and personnel by many of the companies that were pioneers in the development of digital television and included: ABC Television, Ampex Corporation, Barco, Canadian Broadcasting Corporation, CBS Technology Center, Digital Video Systems, Dynair, Inc., KPIX Westinghouse Broadcasting, Leitch Video Ltd., Marconi Electronics, RCA Corporation and RCA Laboratories, Sony Corporation, Tektronix Inc., Thomson CSF, VG Electronics Ltd., and VGR Corporation. I participated in the demonstrations as a member of SMPTE's Working Group on Digital Video Standards, providing a Vidifont electronic graphics generator whose interface conformed to the new standard.
Developing an agreement
The San Francisco demonstrations proved the viability of the 13.5 MHz, (4:2:2) proposal. At a meeting in January 1981, the EBU had considered a set of parameters based on a 13.0 MHz (4:2:2) system. Additional research conducted by EBU members had shown that a (4:2:2) arrangement was needed in order to cope with picture processing requirements, such as chroma key, and the EBU members believed a 13.0 MHz system appeared to be the most economical system that provided adequate picture processing. Members of the EBU and SMPTE committees met at a joint meeting chaired by Peter Rainger in March 1981 and agreed to propose the 13.5 MHz, (4:2:2) standard as the world wide standard. By autumn 1981, NHK in Japan, led by Mr. Tadokoro, had performed its own independent evaluations and concurred that the 13.5 MHz, (4:2:2) standard offered the optimum solution.
A number of points were generally agreed upon and formed the basis of a new world wide standard. They included:
- The existing colorimetry of EBU (for PAL and SECAM) and of NTSC would be retained for 625 line and 525 line signals respectively, as matrixing to a common colorimetry was considered overly burdensome;
- An 8 bit per sample representation would be defined initially, being within the state of the art, but a 10 bit per sample representation would also be specified since it was required for many production applications;
- The range of the signal to be included should include head room (above white level) and foot room (below black level) to allow for production overshoots;
- The line length to be sampled should be somewhat wider than those of the analog systems (NTSC, PAL, and SECAM) under consideration to faithfully preserve picture edges and to avoid picture cropping;
- A bit parallel, sample multiplexed interface (e.g. transmitting R-Y, Y, B-Y, Y, R-Y, ...) was practical, but in the long term, a fully bit and word serial system would be desirable;
- The gross data rate should be recordable within the capacity of digital tape recorders then in the development stages by Ampex, Bosch, RCA, and Sony.
The standard, as documented, provided for each digital sample to consist of at least 8 bits, with 10 allowed. The values for the black and white levels were defined, as was the range of the color signal. (R-Y) and (B-Y) became CR [=0.713 (R-Y)] and CB [=0.564 (B-Y)]. While the original note dated February 1980 addressed to the Task Force proposed a code of 252(base10) = (1111 1100) for ‘white’ at 100 IRE and a code of 72(base10) = (0100 1000) for ‘black’ at 0 IRE to allow capture of the sync levels, agreement was reached to better utilize the range of codes to capture the grey scale values with more precision and provide more overhead. ‘White’ was to be represented by an eight bit code of 240(base10) = (1111 0000), and ‘black’ was to be represented by an eight bit code of 16(base10) = (0001 0000). The original codes for defining the beginning and the end of picture lines and picture area were discussed, modified, and agreed upon, as well as synchronizing coding for line, field, and frame, each coding sequence being unique and not occurring in the video signal.

SMPTE and EBU organized an effort over the next few months to familiarize the remainder of the world wide television community with the advantages offered by the 13.5 MHz, (4:2:2) system and the reasoning behind its set of parameters. Members of the SMPTE Task Force traveled to Europe and to the Far East. Members of the EBU committees traveled to the, then, Eastern European bloc nations and to the members of the OTI, the organization of the South American broadcasters. The objective of these tours was to build a consensus prior to the upcoming discussion at the ITU in the autumn of 1981. The success of this effort could serve as a model to be followed in developing future agreements.
I was asked to draft a SMPTE standard document that listed the parameter values for a 13.5 MHz system for consideration by the SMPTE Working Group. Since copies of the document were bound in a green binder prior to final acceptance by SMPTE, the standard was referred to as the “Green Book”.
In April 1981, the draft of the standard titled “Coding Parameters for a Digital Video Interface between Studio Equipment for 525 line, 60 field Operation” was distributed to a wider audience for comment. This updated draft reflected the status of the standard after the tests in San Francisco and agreements reached at the joint EBU/SMPTE meeting in March 1981. The EBU community later requested a subtle change to the value of ‘white’ in the luminance channel, and it assumed the value of 235(base10). This change was approved in August 1981.
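The mapping from analog component levels to the agreed code values can be illustrated with a short sketch. This is only an illustration, not text from the standard: it uses the white level of 235 and black level of 16 described above, together with the commonly cited Rec. 601 scale factors (219 luma steps, 224 chroma steps centered on code 128, giving a 16-240 chroma range); the function and variable names are mine.

```python
def rec601_quantize(y, b_minus_y, r_minus_y):
    """Map analog component levels to 8-bit Rec. 601-style code values.

    y is luminance normalized to 0.0 (black) .. 1.0 (white);
    b_minus_y and r_minus_y are the unscaled color-difference signals.
    """
    # Scale the color-difference signals as described in the text:
    # CB = 0.564 (B-Y), CR = 0.713 (R-Y), each spanning roughly -0.5 .. +0.5.
    cb = 0.564 * b_minus_y
    cr = 0.713 * r_minus_y

    # Luma occupies 219 steps above code 16, leaving head room toward 255
    # and foot room below 16 for production overshoots.
    y_code = round(16 + 219 * y)

    # Chroma occupies 224 steps centered on code 128 (16 .. 240).
    cb_code = round(128 + 224 * cb)
    cr_code = round(128 + 224 * cr)
    return y_code, cb_code, cr_code

# Nominal white and nominal black, each with zero color difference:
print(rec601_quantize(1.0, 0.0, 0.0))  # -> (235, 128, 128)
print(rec601_quantize(0.0, 0.0, 0.0))  # -> (16, 128, 128)
```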
After review and some modification as noted above to accommodate European concerns, the “Green Book” was adopted as SMPTE Standard 125.
ITU/R Recommendation 601
The European Broadcasting Union (EBU) generated an EBU Standard containing a companion set of parameter values. The SMPTE 125 and EBU documents were then submitted to the International Telecommunications Union (ITU). The ITU, a treaty organization within the United Nations, is responsible for international agreements on communications. The ITU Radio Communications Bureau (ITU-R/CCIR) is concerned with wireless communications, including allocation and use of the radio frequency spectrum. The ITU also provides technical standards, which are called “Recommendations.”
Within the ITU, the development of the Recommendation defining the parameter values of the 13.5 MHz (4:2:2) system fell under the responsibility of ITU-R Study Group 11 on Television. The chair of Study Group 11, Prof. Mark I. Krivocheev, assigned the drafting of the document to a special committee established for that purpose and chaired by David Wood of the EBU. The document describing the digital parameters contained in the 13.5 MHz, (4:2:2) system was approved for adoption as document 11/1027 at ITU-R/CCIR meetings in Geneva in September and October 1981. A revised version, document 11/1027 Rev.1, dated 17 February 1982, and titled “Draft Rec. AA/11 (Mod F): Encoding parameters of digital television for studios” was adopted by the ITU-R/CCIR Plenary Assembly in February 1982. It described the digital interface standard for transfer of video information between equipment designed for use in either 525 line or 625 line conventional color television facilities. Upon approval by the Plenary Assembly, document 11/1027 Rev.1 became CCIR Recommendation 601.
The Foundation for HDTV and Digital Television Broadcasting Services
The 4:2:2 Component Digital Television Standard allowed for a scale of economy and reliability that was unprecedented by providing a standard that enabled the design and manufacture of equipment that could operate in both 525 line/60Hz and 625 line/50Hz production environments. The 4:2:2 Component Digital Television Standard permitted equipment supplied by different manufacturers to exchange video and embedded audio and data streams and/or to record and playback those streams directly in the digital domain without having to be restored to an analog signal. This meant that the number of different processes and/or generations of recordings could be increased without the noticeable degradation of the information experienced with equipment based on analog technology. A few years after the adoption of the 4:2:2 Component Digital Television Standard, all digital production facilities were shown to be practical.
A few years later when the ITU defined “HDTV,” the Recommendation stipulated: “the horizontal resolution for HDTV as being twice that of conventional television systems” described in Rec. 601 and a picture aspect ratio of 16:9. A 16:9 aspect ratio picture requires one-third more pixels per active line than a 4:3 aspect ratio picture. Rec. 601 provided 720 samples per active line for the luminance channel and 360 samples for each of the color difference signals. Starting with 720, doubling the resolution to 1440, and adjusting the count for a 16:9 aspect ratio leads to the 1920 samples per active line defined as the basis for HDTV. Accommodating the Hollywood and computer communities' request for “square pixels” meant that the number of lines should be 1920 x (9/16) = 1080.
Progressive scan systems at 1280 pixels per line and 720 lines per frame are also a member of the “720 pixel” family. 720 pixels x 4/3 (resolution improvement) x 4/3 (16:9 aspect ratio adjustment) = 1280. Accommodating the Hollywood and computer communities' request for square pixels meant that the number of lines should be 1280 x (9/16) = 720.
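The arithmetic behind the “720 pixel” family described in the two paragraphs above can be checked with a few lines of code. This is an illustrative sketch only; the variable names and factor labels are mine, not terminology from the standard.

```python
BASE_SAMPLES = 720  # active luminance samples per line in Rec. 601

# HDTV member: twice the Rec. 601 horizontal resolution, adjusted to 16:9,
# with square pixels fixing the line count.
hd_width = BASE_SAMPLES * 2 * 4 // 3           # 720 -> 1440 -> 1920
hd_height = hd_width * 9 // 16                 # 1920 x (9/16) = 1080

# 1280 x 720 progressive member: one 4/3 resolution factor and one 4/3
# aspect-ratio factor, again with square pixels.
p720_width = BASE_SAMPLES * 4 // 3 * 4 // 3    # 720 -> 960 -> 1280
p720_height = p720_width * 9 // 16             # 1280 x (9/16) = 720

print(hd_width, hd_height)      # 1920 1080
print(p720_width, p720_height)  # 1280 720
```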
The original 720 pixel per active line structure became the basis of a family of structures (the 720 pixel family) that was adopted for MPEG based systems including both conventional television and HDTV systems. Therefore, most digital television systems, including digital video tape systems and DVD recordings are derived from the format described in the original 4:2:2 standard.
The existence of a common digital component standard for both 50 Hz and 60 Hz environments as documented in SMPTE 125 and ITU Recommendation 601 provided a path for television production facilities to migrate to the digital domain. The appearance of high quality, fully digital production facilities providing digital video, audio, and metadata streams and the successful development of digital compression and modulation schemes allowed for the introduction of digital television broadcast services.
In its 1982-1983 award cycle, the National Academy of Television Arts and Sciences recognized the 4:2:2 Component Digital Standard based on 13.5 MHz (Y) sampling with 720 samples per line with three EMMY awards:
The European Broadcasting Union (EBU) was recognized: “For achieving a European agreement on a component digital video studio specification based on demonstrated quality studies and their willingness to subsequently compromise on a world wide standard.”
The International Telecommunications Union (ITU) was recognized: “For providing the international forum to achieve a compromise of national committee positions on a digital video standard and to achieve agreement within the 1978-1982 period.”
The Society of Motion Picture and Television Engineers (SMPTE) was recognized: “For their early recognition of the need for a digital video standard, their acceptance of the EBU proposed component requirement, and for the development of the hierarchy and line lock 13.5 MHz demonstrated specification, which provided the basis for a world standard.”
This narrative is intended to acknowledge the early work on digital component coded television carried out over several years by hundreds of individuals, organizations, and administrations throughout the world. It is not possible in a limited space to list all of the individuals or organizations involved, but by casting a spotlight on the results of their work since the 1960's and its significance, the intent is to honor them - all.
Individuals interested in the specific details of digital television standards and picture formats (i.e. 1080p, 720p, etc.) should inquire at www.smpte.org. SMPTE is the technical standards development organization (SDO) for motion picture film and television production.
- ↑ This article builds on a prior article by Stanley Baron and David Wood; simultaneously published in the SMPTE Motion Imaging Journal, September 2005, pp. 327-334, as “The Foundations of Digital Television: the origins of the 4:2:2 DTV standard” and in the EBU Technical Review, October 2005, as “Rec. 601 - the origins of the 4:2:2 DTV standard.”
- ↑ Guinet, Yves; “Evolution of the EBU's position in respect of the digital coding of television”, EBU Review Technical, June 1981, pp. 111-117.
- ↑ Davies, Kenneth; “SMPTE Demonstrations of Component Coded Digital Video, San Francisco, 1981”, SMPTE Journal, October 1981, pp. 923-925.
- ↑ Fink, Donald; “Television Engineering Handbook”, McGraw Hill [New York, 1957], p. 7-4.
- ↑ Baron, S.; “Sampling Frequency Compatibility”, SMPTE Digital Study Group, January 1980, revised and submitted to the SMPTE Task Force on Digital Video Standards, 11 February 1980. Later published in SMPTE Handbook, “4:2:2 Digital Video: Background and Implementation”, SMPTE, 1989, ISBN 0-940690-16, pp. 20-23.
- ↑ Weiss, Merrill & Marconi, Ron; “Putting Together the SMPTE Demonstrations of Component Coded Digital Video, San Francisco, 1981”, SMPTE Journal, October 1981, pp. 926-938.
- ↑ Davidoff, Frank; “Digital Television Coding Standards”, IEE Proceedings, 129, Pt. A, No. 7, September 1982, pp. 403-412.
- ↑ Nasse, D., Grimaldi, J.L., and Cayet, A.; “An Experimental All Digital Television Center”, SMPTE Journal, January 1986, pp. 13-19.
- ↑ ITU Report 801, “The Present State of High Definition Television”, Part 3, “General Considerations of HDTV Systems”, Section 4.3, “Horizontal Sampling”.
|
<urn:uuid:9a916a8d-2c90-4824-b961-0b5932af2602>
| 3.375
|
http://www.ieeeghn.org/wiki/index.php?title=First-Hand:The_Foundation_of_Digital_Television:_the_origins_of_the_4:2:2_component_digital_standard&redirect=no
|
exactly located (exactlyLocated)
The actual, minimal location of an Object. This is a subrelation of the more general Predicate partlyLocated.
SUMO / BASE-ONTOLOGY
Related WordNet synsets
- the precise location of something; a spatially limited location; "she walked to a point where she could survey the whole street"
If an object is partly located in a region, then there is some subobject such that the subobject is a part of the object and the subobject is exactly located in the region.

(=>
  (partlyLocated ?OBJ ?REGION)
  (exists (?SUBOBJ)
    (and
      (part ?SUBOBJ ?OBJ)
      (exactlyLocated ?SUBOBJ ?REGION))))
If an object is exactly located in a region, then there is no other object such that the other object is exactly located in that region and the other object is not equal to the object.

(=>
  (exactlyLocated ?OBJ ?REGION)
  (not
    (exists (?OTHEROBJ)
      (and
        (exactlyLocated ?OTHEROBJ ?REGION)
        (not
          (equal ?OTHEROBJ ?OBJ))))))
"thing ki jagah time tha" is equal to region agar hai thing is exactly located in region during time.
(WhereFn ?THING ?TIME)
(exactlyLocated ?THING ?REGION)))
|
<urn:uuid:c9bd6a3e-3426-45f4-8eec-ef2af2ae747f>
| 3.359375
|
http://virtual.cvut.cz/kifb/hindi/concepts/exactly_located.html
|
A number of federal laws protect U.S. employees from discrimination in the workplace. These laws are enforced by the U.S. Equal Employment Opportunity Commission (EEOC), the federal agency primarily responsible for upholding and enforcing federal employment discrimination laws in the United States.
Here's a look at federal job discrimination laws.
Civil Rights Act of 1964 (Title VII). This act protects employees from job discrimination on the basis of race, color, religion, sex, or national origin. All aspects of employment are covered, including hiring, firing, promotion, wages, recruitment, training, and any other terms of employment.
Equal Pay Act of 1963. This act ensures that employees receive the same pay, benefits, and opportunities as those employees of the opposite sex who perform the same work in the same establishment.
Age Discrimination in Employment Act of 1967. This act protects workers who are 40 years of age or older from job discrimination that favors younger workers.
Title I and Title V of the Americans with Disabilities Act of 1990. This act protects qualified workers with disabilities from job discrimination in the private sector and in state and municipal government.
Sections 501 and 505 of the Rehabilitation Act of 1973. This act protects qualified workers with disabilities who work for the federal government from job discrimination.
Civil Rights Act of 1991. This act clarifies some of the ambiguous sections of Title VII, and provides monetary compensation for victims of federal job discrimination.
If you think you are a victim of job discrimination
If you think you are a victim of job discrimination under one of these federal laws, you can file a discrimination charge with the EEOC. In addition, a charge may be filed on your behalf by another person to protect your identity. You can file a charge by mail or in person at the nearest EEOC office. Importantly, you generally must file a charge with the EEOC within 180 days of the alleged discrimination (extended to 300 days in some states) before a private lawsuit can be filed. You must provide the following information in order to file a charge with the EEOC:
- The complaining party's name, address, and telephone number
- The name, address, and telephone number of the claim's respondent
- The date and a short description of the alleged discrimination
The EEOC will then investigate the claim. It will either dismiss the case, attempt to settle the case, bring the case to federal court, or issue the charging party a "right to sue," which allows the party to seek private counsel and bring suit upon the employer directly.
Other job discrimination laws and agencies
In addition to federal laws, many states and municipalities have their own laws that protect employees against discrimination. Workers and applicants who feel they are being discriminated against in regard to sexual orientation, parental status, marital status, political affiliation, or any other personal characteristic that does not affect their ability to do their job can research local and state ordinances to see whether they have legislative protection.
|
<urn:uuid:225174b7-88cb-4385-8008-4f65996fc0a6>
| 3.359375
|
http://www.avvo.com/legal-guides/federal-job-discrimination-laws?pretty_print=false
|
The Walking Liberty half dollar has won many praises and criticisms in its time. Adolph Weinman's Walking Liberty design was more than an attempt to beautify the half dollar. It represented a concerted effort to revitalize the denomination and to get half dollars back into circulation again. The Mint was able to churn out plenty of Walking Liberty half dollars in the design's first year, but that first year's mintage couldn't compare to the numbers that were minted in the 1940s.
Adolph Weinman was better known as a sculptor and medal designer, and he won the competition to design the new half dollar. The Mint began producing the new Walking Liberty design in November 1916. However, it was January 2, 1917, before any of these dated half dollars entered circulation.
The new half dollar's debut soon brought many praises and some criticisms. The January 23, 1917, issue of the Elyria, Ohio, Evening Telegram stated that the Walking Liberty half dollar was more "elaborate" than the old Barber half dollar, and that both half dollars shared one thing in common: they both seemed to have been inspired by some French coin designs.
For whatever reason, Weinman managed to work the American flag into the Walking Liberty half dollar design, which does seem to set it apart and give it a more national character than other coin designs. Weinman had his own comments on the symbolism in his design:
“The design of the Half dollar bears a full-length figure of Liberty, the folds of the Stars and Stripes flying to the breeze as a background. Progressing in full stride toward the dawn of a new day, carrying branches of laurel and oak, symbolic of civil and military glory. The hand of the figure is outstretched in bestowal of the spirit of liberty.”
“The reverse of the half dollar shows an eagle perched high upon a mountain craig, his wings unfolded, fearless in spirit, and conscious of his power. Springing from a rift in the rock is a sapling of Mountain Pine, symbolic of America.”
Many bird experts were amused at the design of the eagle displayed on the half dollar. It was quite unlike any other eagle pictured on other U.S. coins. One leading ornithologist remarked the eagle looked like a “turkey.”
Very little was said about the branch of Mountain Pine. It did add a very dramatic touch to the design and is probably the coin’s most distinctive feature. The Walking Liberty is definitely the most distinctive half dollar created. In time the Walking Liberty half dollar gave way to the Franklin half dollar in 1948.
|
<urn:uuid:317df08c-8a0f-44fa-958d-481d95ef3107>
| 3.359375
|
http://www.bellaonline.com/ArticlesP/art171311.asp
|
stress
stress, in physical sciences and engineering, force per unit area within materials that arises from externally applied forces, uneven heating, or permanent deformation and that permits an accurate description and prediction of elastic, plastic, and fluid behaviour. A stress is expressed as a quotient of a force divided by an area.
There are many kinds of stress. Normal stress arises from forces that are perpendicular to a cross-sectional area of the material, whereas shear stress arises from forces that are parallel to, and lie in, the plane of the cross-sectional area. If a bar having a cross-sectional area of 4 square inches (26 square cm) is pulled lengthwise by a force of 40,000 pounds (180,000 newtons) at each end, the normal stress within the bar is equal to 40,000 pounds divided by 4 square inches, or 10,000 pounds per square inch (psi; 7,000 newtons per square cm). This specific normal stress that results from tension is called tensile stress. If the two forces are reversed, so as to compress the bar along its length, the normal stress is called compressive stress. If the forces are everywhere perpendicular to all surfaces of a material, as in the case of an object immersed in a fluid that may be compressed itself, the normal stress is called hydrostatic pressure, or simply pressure. The stress beneath the Earth’s surface that compresses rock bodies to great densities is called lithostatic pressure.
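As a quick check of the normal-stress arithmetic in the example above, the sketch below (not part of the original article; the function and variable names are mine) simply divides the applied force by the cross-sectional area and roughly converts the result to metric units.

```python
def normal_stress(force_lb, area_sq_in):
    """Normal stress as force per unit cross-sectional area, in psi."""
    return force_lb / area_sq_in

# The bar from the example: 40,000 lb pulling on a 4 square inch cross section.
sigma_psi = normal_stress(40_000, 4)            # 10,000 psi of tensile stress

# Rough conversion: 1 lb of force is about 4.448 N, 1 square inch is 6.4516 cm^2.
sigma_n_per_sq_cm = sigma_psi * 4.448 / 6.4516  # about 6,900 N per square cm
print(sigma_psi, round(sigma_n_per_sq_cm))      # 10000, ~6895 ("7,000" when rounded)
```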
Shear stress in solids results from actions such as twisting a metal bar about a longitudinal axis as in tightening a screw. Shear stress in fluids results from actions such as the flow of liquids and gases through pipes, the sliding of a metal surface over a liquid lubricant, and the passage of an airplane through air. Shear stresses, however small, applied to true fluids produce continuous deformation or flow as layers of the fluid move over each other at different velocities like individual cards in a deck of cards that is spread. For shear stress, see also shear modulus.
Reaction to stresses within elastic solids causes them to return to their original shape when the applied forces are removed. Yield stress, marking the transition from elastic to plastic behaviour, is the minimum stress at which a solid will undergo permanent deformation or plastic flow without a significant increase in the load or external force. The Earth shows an elastic response to the stresses caused by earthquakes in the way it propagates seismic waves, whereas it undergoes plastic deformation beneath the surface under great lithostatic pressure.
|
<urn:uuid:79e0dfc6-44f0-433d-bdf7-1f27c991027e>
| 4.21875
|
http://www.britannica.com/EBchecked/topic/568893/stress
|
This multimedia lesson for Grades 7-10 explores the physical forces that act in concert to create snowflakes. Students build an apparatus that creates conditions similar to a winter cloud and produce their own snow crystals indoors. By watching the snow crystals grow, they learn about how snowflake size and shape is determined by the forces that act on water molecules at the atomic and molecular levels. Digital models and snowflake photo galleries bring together a cohesive package to help kids visualize what's happening at the molecular scale.
Editor's Note: This lab activity calls for dry ice. See Related Materials for a link to the NOAA's "Dry Ice Safety" Guidelines, and for a link to snow crystal images produced by an electron microscope.
Lewis structures, VSEPR, condensation, covalent bond, crystals, electron sharing, ice, physics of snowflakes, snow formation, valence electrons, valence shell
Metadata instance created January 2, 2013 by Caroline Hall
AAAS Benchmark Alignments (2008 Version)
4. The Physical Setting
4B. The Earth
6-8: 4B/M15. The atmosphere is a mixture of nitrogen, oxygen, and trace amounts of water vapor, carbon dioxide, and other gases.
4D. The Structure of Matter
6-8: 4D/M1a. All matter is made up of atoms, which are far too small to see directly through a microscope.
6-8: 4D/M1cd. Atoms may link together in well-defined molecules, or may be packed together in crystal patterns. Different arrangements of atoms into groups compose all substances and determine the characteristic properties of substances.
6-8: 4D/M3cd. In solids, the atoms or molecules are closely locked in position and can only vibrate. In liquids, they have higher energy, are more loosely connected, and can slide past one another; some molecules may get enough energy to escape into a gas. In gases, the atoms or molecules have still more energy and are free of one another except during occasional collisions.
9-12: 4D/H2. The number of protons in the nucleus determines what an atom's electron configuration can be and so defines the element. An atom's electron configuration, particularly the outermost electrons, determines how the atom can interact with other atoms. Atoms form bonds to other atoms by transferring or sharing electrons.
9-12: 4D/H7a. Atoms often join with one another in various combinations in distinct molecules or in repeating three-dimensional crystal patterns.
12. Habits of Mind
12C. Manipulation and Observation
6-8: 12C/M3. Make accurate measurements of length, volume, weight, elapsed time, rates, and temperature by using appropriate devices.
<a href="http://www.compadre.org/precollege/items/detail.cfm?ID=12568">WGBH Educational Foundation. Teachers' Domain: Why Do Snowflakes Come in So Many Shapes and Sizes?. Boston: WGBH Educational Foundation, 2010.</a>
Teachers' Domain: Why Do Snowflakes Come in So Many Shapes and Sizes? (WGBH Educational Foundation, Boston, 2010), WWW Document, (http://www.teachersdomain.org/resource/lsps07.sci.phys.matter.lpsnowflakes/).
Teachers' Domain: Why Do Snowflakes Come in So Many Shapes and Sizes?. (2010). Retrieved May 21, 2013, from WGBH Educational Foundation: http://www.teachersdomain.org/resource/lsps07.sci.phys.matter.lpsnowflakes/
WGBH Educational Foundation. Teachers' Domain: Why Do Snowflakes Come in So Many Shapes and Sizes?. Boston: WGBH Educational Foundation, 2010. http://www.teachersdomain.org/resource/lsps07.sci.phys.matter.lpsnowflakes/ (accessed 21 May 2013).
Teachers' Domain: Why Do Snowflakes Come in So Many Shapes and Sizes?. Boston: WGBH Educational Foundation, 2010. 21 May 2013 <http://www.teachersdomain.org/resource/lsps07.sci.phys.matter.lpsnowflakes/>.
%T Teachers' Domain: Why Do Snowflakes Come in So Many Shapes and Sizes? %D 2010 %I WGBH Educational Foundation %C Boston %U http://www.teachersdomain.org/resource/lsps07.sci.phys.matter.lpsnowflakes/ %O application/pdf
%0 Electronic Source %D 2010 %T Teachers' Domain: Why Do Snowflakes Come in So Many Shapes and Sizes? %I WGBH Educational Foundation %V 2013 %N 21 May 2013 %9 application/pdf %U http://www.teachersdomain.org/resource/lsps07.sci.phys.matter.lpsnowflakes/
Disclaimer: ComPADRE offers citation styles as a guide only. We cannot offer interpretations about citations as this is an automated procedure. Please refer to the style manuals in the Citation Source Information area for clarifications.
|
<urn:uuid:dbafed73-0f12-4aa3-a008-e9f488788ed7>
| 4
|
http://www.compadre.org/precollege/items/detail.cfm?ID=12568
|
(BPT) - The start of the school year is a time of great anticipation for parents and kids alike. New teachers. New classes. New and old friends. It's a time for fun and learning.
Parents expect schools to be safe havens, but the reality is that children face a host of dangers all day long. Bullying, taunting, and teasing are only some of the hazards that kids must deal with every day at even the best schools in America.
About 30 percent of middle and high school students say they've been bullied. Among high school students, one out of nine teens reported they had been pushed, shoved, tripped or spit upon during the last school year, according to a National Institute of Child Health and Human Development research study.
FindLaw.com, the nation's leading website for free legal information, offers the following tips on how to keep your children safe at school:
* Talk to your kids about school safety. Talk about bullying and make sure your child understands what is and is not acceptable behavior. Also discuss when and how to report bullying.
* Go to the bus stop. If your schedule allows, go to the bus stop with your child and get to know the other kids and parents, along with the bus driver.
* Get to know your kids' teachers. Send your child's teacher an email to introduce yourself and regularly check in on your child's academic and social progress. Learn how his or her teacher approaches bullying and other issues that may distract from the school's learning environment, such as the use of cell phones and iPods.
* Read the school's policy on bullying. Become familiar with school policies about bullying - particularly the protocols for identifying and reporting bullying behavior. Pay careful attention to policies regarding cyberbullying, which can take place outside of school.
* Watch and listen for the cues. Many kids don't want to reveal to their parents that they're being bullied, taunted or teased by other kids. If your child is withdrawn, not doing homework, sick more often than normal or demonstrating other out-of-the-ordinary behavior, talk about what seems to be bothering him or her.
* Know where your kids are at. Sometimes bullying and other unsafe situations take place outside of school grounds, such as at other students' houses. Telling your kids that you want to know where they are and that they need permission to visit a friend's house shows them you care. It also reassures them that they can contact you if they need help.
* Monitor Internet use and texting. Put the home computer in a public place and don't allow your kids to use a computer in their bedroom by themselves.
* Talk to other parents. You may learn that their children also have been bullied or have been involved in activities on and off school grounds that you should be concerned about. You stand a much better chance of obtaining changes and creating a safer environment for your student by acting together rather than alone.
* Put it in writing. If you suspect your child is being bullied or sexually harassed by another student (or a teacher or staff member), ask for a face-to-face meeting with the school's principal. If the principal does not act, hire an attorney and escalate your complaint to the superintendent and school board. Putting your complaint in writing about the specific types of negative behavior affecting your child is necessary if you need to litigate the complaint in court.
* Take appropriate action when bullying becomes assault. If your child is physically assaulted on the bus, in school or on school grounds, contact the local police department, particularly if there is a school liaison officer assigned to the school, about whether a police report or assault charges should be filed. Do not wait to let the school handle the situation.
For more information about how to keep your kids safe at school, visit FindLaw.com.
|
<urn:uuid:be252f6c-849c-43a6-a09e-a67e38971d3f>
| 3.546875
|
http://www.cw15.com/ara/education/story/Keeping-your-kids-safe-at-school/9Mj26bIJeEm07R11wJrHbQ.cspx
|
Throughout life there are many times when outside influences change or influence decision-making. The young child has inner motivation to learn and explore, but as he matures, he finds outside sources to be a motivating force for development as well. Along with being a beneficial influence, there are moments when peer pressure can overwhelm a child and lead him down a challenging path. And peer pressure is a real thing – it is not only observable, but it changes the way the brain behaves.
For the young child, observational learning plays a part in development through observing and then doing. A child sees another child playing a game in a certain way and having success, so the observing child tries the same behavior. Albert Bandura was a leading researcher in this area. His famous Bobo doll studies found that the young child is greatly influenced by observing others' actions. When a child sees something that catches his attention, he retains the information, attempts to reproduce it, and then feels motivated to continue the behavior if it is met with success.
Observational learning and peer pressure are two different things – one being the observing of behaviors and then the child attempting to reproduce them based on a child’s own free will. Peer pressure is the act of one child coercing another to follow suit. Often the behavior being pressured is questionable or taboo, such as smoking cigarettes or drinking alcohol.
Peer Pressure and the Brain
Recent studies find that peer pressure influences the way our brains behave, which leads to better understanding about the impact of peer pressure on the developing child. According to studies from Temple University, peer pressure affects brain signals involved in risk and reward processing, especially when the teen's friends are around. Compared to adults in the study, teenagers were much more likely to take risks they would not normally take on their own when with friends. Brain signals were more activated in the reward center of the brain, firing most strongly during risky behaviors.
Peer pressure can be difficult for young adults to deal with, and learning ways to say "no" or avoid pressure-filled situations can become overwhelming. Resisting peer pressure is not just about saying "no," but about how the brain functions. Children that have stronger connections among regions in their frontal lobes, along with other areas of the brain, are better equipped to resist peer pressure. During adolescence, the frontal lobes of the brain develop rapidly, causing axons in the region to acquire a coating of fatty myelin, which insulates them and allows the frontal lobes to communicate more effectively with other brain regions. This helps the young adult to develop the judgment and self-control needed to resist peer pressure.
Along with the frontal lobes, other studies find that the prefrontal cortex plays a role in how teens respond to peer pressure. Just as with the previous study, children with greater connectivity within the brain showed a greater ability to resist peer pressure.
Working through Peer Pressure
The teenage years are exciting years. The young adult is often going through physical changes due to puberty, adjusting to new friends and educational environments, and learning how to make decisions for themselves. Adults can offer a helping and supportive hand to young adults when dealing with peer pressure by considering the following:
Separation: Understanding that this is a time for the child to separate and learn how to be his own individual is important. It is hard to let go and allow the child to make mistakes for himself, especially when you want to offer input or change plans and actions, but allowing the child to go down his own path is important. As an adult, offering a helping hand if things go awry and being there to offer support is beneficial.
Talk it Out: As an adult, take a firm stand on rules and regulations with your child. Although you cannot control whom your child selects as friends, you can take a stand on your control of your child. Setting specific goals, rules, and limits encourages respect and trust, which must be earned in response. Do not be afraid to start talking with your child early about ways to resist peer pressure. Focus on how it will build your child’s confidence when he learns to say “no” at the right time and reassure him that it can be accomplished without feeling guilty or losing self-confidence.
Stay Involved: Keep family dinner as a priority, make time each week for a family meeting or game time, and plan family outings and vacations regularly. Spending quality time with kids models positive behavior and offers lots of opportunities for discussions about what is happening at school and with friends.
If at any time there are concerns that a child is becoming involved in questionable behavior due to peer pressure, ask for help. Involving others in helping a child cope with peer pressure, such as a family doctor, youth advisor, or other trusted friend, does not mean that the adult is not equipped to help the child properly; rather, it is beneficial to include others in assisting a child who may be on the brink of heading down the wrong path.
By Sarah Lipoff. Sarah is an art educator and parent. Visit Sarah’s website here.
Read More →
|
<urn:uuid:4fafe4c1-2dd0-49fd-8b1b-41d1829f7cdf>
| 3.8125
|
http://www.funderstanding.com/category/child-development/brain-child-development/
|
Everett -- Thumbnail History
HistoryLink.org Essay 7397
Once called the “City of Smokestacks,” Everett has a long association with industry and labor. Its first beginnings were two Native American settlements at opposite sides of the heavily wooded region, one on the Snohomish River and the other on Port Gardner Bay. Platted in the 1890s and named after the son of an early investor, it soon attracted the attention of East Coast money. Over the next 100 years, Everett would be a formidable logging mill and industrial center. In 2005, Everett numbered 96,000 citizens.
The Port Gardner Peninsula is a point of land bound by the Snohomish River on its east flank and northern tip and by Port Gardner Bay on the west. People have inhabited the Everett Peninsula for more than 10,000 years. In recent centuries, Hibulb (or Hebolb), the principal village of the Snohomish tribe stood at the northwest point of the peninsula. Its location near the mouth of the Snohomish River and next to Port Gardner Bay provided both abundant food and transportation. Other villages were located across the waterways. The Snohomish fortified Hibulb with a stockade made of Western red cedar posts to guard against their local enemies, the Makah, Cowichan, Muckleshoot, and the occasional northern raider.
On June 4, 1792, George Vancouver landed on the beach south of the village and claimed the entire area for the King of England. He named the bay Port Gardner for a member of his party. He apparently did not explore the river. After this first contact with the Snohomish, the next 50 years were quiet until traders with the Hudson’s Bay Company on the Columbia River ventured through in 1824. Hudson's Bay Company records show that they explored the Snohomish River. They named it “Sinnahamis.” Its present name “Snohomish” dates from the U.S. Coastal Survey of 1854 when it was charted.
In 1853, Washington Territory was formed. That same year the first white settlers in what would become Snohomish County established a water-powered sawmill on Tulalip Bay across the water from Hibulb. When the Treaty of 1855 created a reservation there for the Snohomish and other regional Indians, the settlers abandoned the operation and turned it over to the tribes. Gradually groups of white men from Port Gamble, Port Ludlow, Utsaladdy, and other Puget Sound points began to show up on the heavily forested peninsula to cut its giant timbers. They set up small logging camps in places reserved for homesteads.
During the Indian wars that erupted in King and Pierce counties after the treaty signings, the Snohomish area remained peaceful. Enterprising men making plans for a military road between Fort Bellingham and Fort Steilacoom in 1859 stimulated the exploration of the Snohomish River and its valleys. A ferry was planned at the spot where the road would cross the river. When Congress stopped funding the project, some of the young men working on the military road stayed there anyway. E. C. Ferguson claimed his own place and named it Snohomish City (1859). He was first to describe the area near present day Everett as full of trees:
“with their long strings of moss hanging from branches, which nearly shut out the sunlight ... At the time the opening at the head of Steamboat Slough was not more than fifty feet wide" (Dilgard and Riddle).
First Settlers on the Peninsula
Dennis Brigham was the first permanent settler in the area that would become Everett. A carpenter from Worcester, Massachusetts, he came in 1861 the same year Snohomish County was organized. He built a cabin on 160 acres along Port Gardner Bay and lived alone. Cut off from his nearest neighbors by the deep forests, he still had enough contact to gain the name of “Dirty Plate Face.”
In 1863, the area saw increased settlement. Erskine D. Kromer, telegraph operator and lineman for the World Telegraph, took a claim just south of Brigham. When the venture ended he settled down with a Coast Salish wife and raised a family. Leander Bagley and H. A. Taylor opened the first store in the area on the point next to Hibulb. Indians pushed out by homesteaders and loggers came by to trade. The store would change ownership several times.
Also in 1863, on the snag-filled Snohomish River, E. D. Smith set up a logging camp at an angled bend in the river. Here the water was deep and an undercutting current kept his log booms against the bank. At the time there were no mills in Snohomish County. Logs were rafted down river and sent to mills around the sound. Everett’s future was foreshadowed when, during that same year, Jacob and David Livingston set up the first steam sawmill in the county near present day Harbor View Park on the bayside. It was a short-lived venture.
Settlement continued, although one early passerby in 1865 wrote that he saw nothing but woods. The settlers were there. Ezra Hatch claimed land in what would become downtown Everett and George Sines claimed land on the riverside. Together with Kromer, they would hold the most valuable holdings in the future city. There were others: Benjamin Young, George and Perrin Preston, J. L. Clark, and William Shears. They lived in simple log cabins scattered around in the woods, but when Bagley sold his share of the store to J. D. Tullis with the right to lease a portion back for a home and shipyard, Everett industry arrived. In 1886 he built the small sloop Rebecca which he sailed throughout the area. Eventually, the Prestons bought out all the shares to the store. George and Perrin Preston with his Snohomish wife Sye-Dah-bo-Deitz or Peggy would give the name Preston Point to the ancient Snohomish center.
Between the 1870 and 1880 census the white population in Snohomish County increased from 400 to 1,387, of whom only a few were found on the peninsula. Neil Spithill and his Snohomish wife Anastasia, the daughter of Chief Bonaparte, settled on the river where the peninsula jutted into it like a left-hand thumb. In 1872, Jacob Livingston filed the first townsite (“Western New York”) on Port Gardner Bay not far from his failed sawmill. John Davis settled at Preston Point where 50 acres were diked, and between the Snohomish River and the sloughs crops of oats, hay, hops, wheat, barley, potatoes, and fruit began to appear. E. D. Smith continued to expand his logging businesses, employing 150 men. The area’s first postmaster, Smith platted the town of Lowell in 1872. In 1883, the U.S. government began snag-removal and cleared other impediments on the river. With the coming of mechanized lumber and cedar shingle production, several mills located in the area. Smith began construction on his own mill in 1889, the same year Washington became a state.
Booms and Busts
Statehood brought celebration and speculation. Connection to the area via the Seattle and Montana Railway was close at hand, but when James J. Hill announced that his Great Northern Railway would come over the Cascades to Puget Sound, many people thought that meant the railroad would come to the peninsula. There was money to be made.
First came the Rucker Brothers, Wyatt and Bethel, and their mother. They bought the old Dennis Brigham homestead property on the bayside in 1890. They built a house and planned to start the townsite of “Port Gardner.” Joining them was William Swalwell and his brother Wellington. The Swalwells picked up a large section of the Spithill claim on the river, covered with a growth of “timber so dense that trees on all sides touched the little cabin” (Roth). Frank Friday, who bought the old Kromer homestead from Kromer's widow, added to the real estate mix. This juxtaposition of bayside to riverside settlements set the layout of the future city streets, though Swalwell’s Landing, as it became known, was separated from the bay by “a mile of second-growth timber, impassable underbrush and a marshy area near the center of the peninsula” (Dilgard and Riddle). Things began to heat up when Tacoma lumberman and land speculator Henry Hewitt Jr. (1840-1918) arrived in the spring of 1890 with $400,000 of his own money, dreaming of a great industrial city.
After learning that one of John D. Rockefeller’s associates, Charles L. Colby (1839-1896), was looking for a site for the American Steel Barge Company of which he was president, Hewitt met with him. He convinced him that the peninsula with its river and bay access offered the perfect location for that and other industrial concerns. Impressed, Colby talked it up with friends and relatives. Once they were on board, Hewitt immediately approached the Ruckers, Friday, and Swalwell and enticed them to join him. They transferred half of their holdings, nearly 800 acres, to the syndicate backed with the East Coast money of Rockefeller, Colby, and Colgate Hoyt, a director of the Great Northern Railroad. Hewitt also bargained with E. D. Smith for a paper mill.
In November 1890, the group incorporated the Everett Land Company. They made Hewitt president. For a time they met in offices at E. D. Smith’s boarding house in Lowell. By spring of 1891, the peninsula began to hum as land was cleared for a nail factory, the barge works, a paper mill, and a smelter. Five hundred men graded, surveyed, and platted the townsite. Hewitt Avenue, one and a half miles long and 100 feet wide, was cut from bay side to riverside. The townsite of stumps became Everett, after the son of Charles Colby.
Over the months, the city of Everett saw astonishing growth. Before the Everett Land Company lots went on sale, Swalwell jumped the gun and began selling his own lots on the banks of the Snohomish River in September 1891. He built a large dock for the sternwheel steamer traffic. Dubbed the “cradle of Everett,” Swalwell’s Landing boomed at the riverside foot of Hewitt, at the intersection of Chestnut and Pacific. The Pacific/Chestnut community was a wild west town with gambling and prostitution along with the offices of Brown Engineering Company in charge of platting the townsite, the “Workingman’s Grocery,” a small shoe store, another grocery store, a tent hotel, meat market, and barber shop. The streets were muck-choked, and its sidewalks were made of thrown-down planks. Farther south at Lowell, Smith built a dock for his new paper mill already in production.
On the bayside, the Everett Land Company built a long wharf at 14th Street on which a sawmill was built at the end. They also built an immense warehouse of some 400 feet and a fancy brick hotel, the Monte Cristo, three stories high. By the time the company started selling their residential and commercial property in late 1891, the building frenzy had attracted the nation. “An Army of Men at Work On a Mammoth Establishment,” the headline in the newly established Port Gardner News boasted in September 1891.
By the spring of 1892, Everett resembled a city albeit with stumps. There were frame homes, schools, churches (land provided by the Everett Land Company), and theaters as well as 5,600 citizens, a third of them foreign born (mostly English and Scandinavian) enjoying streetcar service, electricity, streetlights, and telephones. The Everett Land Company won a suit to own the waterfront. The promise of riches in the mines in the Cascades spurred the building of the Everett-Monte Cristo railroad from there to a smelter on the peninsula.
In April 1893, Everett incorporated by election. Then came trouble. In May, the Silver Panic caused a national depression that slammed into Everett. Factories closed down. Banks failed. Wages dropped 60 percent. The railroads either failed or faltered. People left in droves. By 1895, Rockefeller started to withdraw his investments. Hewitt was dismissed from the Everett Land Company. Colby took over. The lack of return on fees nearly bankrupted the city government. The streetlights were turned off. Against this background the town of Snohomish fought the struggling city of Everett over which would be the county seat. Everett finally took the claim away in 1897.
A Second Wind
Everett began to recover in 1899 after Rockefeller's Everett Land Company transferred its holdings to James J. Hill's Everett Improvement Company. The railroad magnate saw benefits for his Great Northern Railroad. He sent 42-year-old John McChesney as his representative. Industrial growth improved. Work continued on dredging the river and the bay. Frederick Weyerhaeuser, neighbor of Hill in St. Paul, Minnesota, came to Everett and founded the Weyerhaeuser Timber Company. He built the world’s largest lumber mill which produced 70 million feet by 1912. David A. Clough and Harry Ramwell formed the American Tugboat Company.
By 1903, the Polk Everett City Directory boasted of 10 sawmills, 12 shingle mils, a paper mill, flouring mill, foundries and machine shops, planing mills, a smelter, an arsenic plant, a refinery, “creosoting” works, a brewer, a sash and door plant, an ice and cold storage plant, and a creamery. Industry employed more than 2,835 men. Telephone subscriptions went from 493 in 1901 to 980 with 23 women employees and eight linemen.
Secret societies as wide ranging as the Elks and the Ancient Order of United Workmen and the Catholic Order of Foresters and the Improved Order of Red Men “meeting at next great camp in the Hunting Grounds of Aberdeen” (Polk) flourished. Times were good.
In 1907, Everett passed the First Class City Charter and boomed after the San Francisco earthquake and fire brought huge orders for Northwest lumber. The city’s own big fire in 1909 destroyed parts of the city, but did not deter future growth. Three years later its population reached three times its size in 1900 -- 25,000. Ninety-five manufacturing plants, “including 11 lumber mills, 16 shingle mills and 17 mills producing both” (Shoreline Historical Survey,) dominated the area.
Unions also dominated the city, making it one of the most unionized in the country. There were 25 unions in all. Of these, the International Shingle Weavers Union of the American Federation of Labor was the strongest. The work they did at shingle mills was dangerous. The bolter used a circular saw with a blade that stood 50 inches in diameter and had three-inch teeth. A man pushed the log toward it at waist height with his knee and hands. Men fell or were pulled into it. Of the 224 people who died in Everett in 1909, 35 were killed in the mills -- almost one a week. Labor unrest grew and strikes threatened.
In 1916, the shingle weaver’s strike culminated in a bloody confrontation at the city dock when two boatloads of Industrial Workers of the World members sailed up from Seattle to demonstrate support of striking shingle mill workers and free speech. Five workers on the steamer Verona and two deputies on the dock were killed. Some 30 others were wounded. The strike ended not long after. This became known as the Everett Massacre.
During World War I, Everett benefited from the demand for lumber, but for the rest of the twentieth century the city saw many down times as it went through a national depression in 1920, the Great Depression, and problems with continual silting in the river channels.
Always a lumber and industrial town, it began to diversify. A Works Progress Administration project in 1936 created Paine Field on 640 acres of land owned by Merrill Ring Logging and the Pope and Talbot Company eight miles southwest of the city. The airfield established aviation and eventually a military presence in the area. The county matched federal dollars.
During World War II the field became a military base. Its name was changed to Paine Field in honor of Lt. Topliff Olin Paine, pioneer aviator from Everett killed in a 1922 Air Mail Service crash. An Army Air Corps unit moved in and stayed for five years. Runways were improved and fueling capabilities added for certain aircraft types. Alaska Airlines started a presence. The military returned during the Korean War (1950-1953) taking over the control tower, but withdrew in 1968. This opened the way for Boeing Corporation. Already owners of acreage north of the airfield, Boeing built the world’s largest building by volume (472 million cubic feet) for their radically new 747 jetliner.
Construction on Naval Station Everett began in November 1987. In January 1994, Navy personnel moved into the completed Fleet Support and Administration buildings and officially began operations. Currently, Everett is home to three frigates, one nuclear-powered aircraft carrier, one destroyer, and a Coast Guard buoy tender. It is the United States Navy’s most modern base.
In 2005, the city of Everett enjoyed growth and revitalization. During the past 20 years, the downtown area has been upgraded and some of the historic structures have been restored. Restaurants, shops, and parks line the bayside of the city. Industrial parks are planned for riverside. A community college and homes stand around Preston Point. Dennis Brigham and E. D. Smith would both be amazed. Henry Hewitt would say that his dream has gone on.
Don Benry, The Lowell Story, (Everett: Lowell Civil Association, 1985), 18-37; David Dilgard, Margaret Riddle and Kristin Ravetz, A Survey of Everett’s Historical Properties (Everett: Everett Public Library and Department of Planning and Community Development, 1996); David Dilgard and Margaret Riddle, Shoreline Historical Survey Report (Everett: Shoreline Master Plan Committee for City of Everett, 1973), 2-28 and 66-73; David Dilgard, Mill Town Footlights (Everett: Everett Public Library, 2001); Lawrence E. O’Donnell, Everett Past and Present (Everett: K & H Printers, 1993), 2-15; Everett City Directory (Seattle: R. L. Polk, 1893), 47-66; Everett City Directory (Seattle: R. L Polk, 1903), 64; Norman H. Clark, Mill Town (Seattle: University of Washington Press, 1970); History of Snohomish County, Washington Vols. I and 2 ed. by William Whitfield (Chicago: Pioneer Historical Publishing Company, 1926); The History of Skagit and Snohomish Counties, Washington (Interstate Publishing Company, 1906), 253-258 and 314-331; Elof Norman, The Coffee Chased Us Up Monte Cristo Memories (Seattle: Mountaineers, 1977); "Early History of Snohomish River and Vicinity," Everett Herald, January 14, 1936; Snohomish Eye, September 1893-1894; Advertisements, Everett Herald, December 17, 1891; Snohomish Sun, 1891; Everett Herald December 10, 1891 through 1892; "Puget Sound Paper Mill," Port Gardner News, September 11, 1893; "Local News," The Eye, August 22, 1893; Everett Herald, December 10, 1891; The Snohomish Story: From Ox team to Jet Stream (Snohomish: Snohomish Centennial Association, 1959).
Licensing: This essay is licensed under a Creative Commons license that
encourages reproduction with attribution. Credit should be given to both
HistoryLink.org and to the author, and sources must be included with any
reproduction. Click the icon for more info. Please note that this
Creative Commons license applies to text only, and not to images. For
more information regarding individual photos or images, please contact
the source noted in the image credit.
This essay made possible by:
The State of Washington
Washington State Department of Archeology and Historic Preservation
Hewitt Avenue looking east, Everett
Postcard Courtesy Everett Public Library
Swalwell's Landing, site of newly platted Everett, 1891
Photo by Frank La Roche, Courtesy Everett Public Library (Image No. 1056)
Birdseye view of the Everett Peninsula, ca. 1893
Courtesy City of Smokestacks
William Weahlub of the Tulalip Reservation smoking salmon and roe on the beach, 1906
Photo by Norman Edson, Courtesy UW Special Collections
Great Northern Railway Depot, Everett, 1920s
Clark-Nickerson Lumber Mill, Everett, 1900s
Night, downtown Everett, 1920s
Hewitt Avenue and Commerce Block, Everett, 1914
Hewitt Avenue looking east, Everett, 1920s
Looking west along Hewitt Avenue across Wetmore, Everett, 1920s
Photo by J. A. Juleen, Courtesy Everett Public Library (Neg. Juleen842)
Aerial view of Everett, 1950s
Naval Station Everett, 2004
Courtesy U.S. Navy
Everett, September 28, 2005
HistoryLink.org Photo by Priscilla Long
Everett, September 28, 2005
HistoryLink.org Photo by Priscilla Long
|
<urn:uuid:55beaadf-2d1a-4ffc-a1ae-baf3ab3594d7>
| 3.328125
|
http://www.historylink.org/This_week/index.cfm?DisplayPage=output.cfm&file_id=7397
|
Hacking Quantum Cryptography Just Got Harder
With quantum encryption, in which a message gets encoded in bits represented by particles in different states, a secret message can remain secure even if the system is compromised by a malicious hacker.
CREDIT: margita | Shutterstock
VANCOUVER, British Columbia — No matter how complex they are, most secret codes turn out to be breakable. Producing the ultimate secure code may require encoding a secret message inside the quantum relationship between atoms, scientists say.
Artur Ekert, director of the Center for Quantum Technologies at the National University of Singapore, presented the new findings here at the annual meeting of the American Association for the Advancement of Science.
Ekert, speaking Saturday (Feb. 18), described how decoders can adjust for a compromised encryption device, as long as they know the degree of compromise.
The subject of subatomic particles is a large step away from the use of papyrus, the ancient writing material employed in the first known cryptographic device. That device, called a scytale, was used in 400 B.C. by Spartan military commanders to send coded messages to one another. The commanders would wrap strips of papyrus around a wooden baton and write the message across the strips so that it could be read only when the strips were wrapped around a baton of matching size.
Later, the technique of substitution was developed, in which the entire alphabet would be shifted, say, three characters to the right, so that an "a" would be replaced by "d," and "b" replaced by "e," and so on. Only someone who knew the substitution rule could read the message. Julius Caesar employed such a cipher scheme in the first century B.C.
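To make the substitution rule concrete, here is a minimal Python sketch of the shift-by-three cipher attributed to Caesar; the function name and the sample message are illustrative additions, not taken from the article.

```python
# A minimal sketch of a Caesar-style substitution cipher (shift of 3).
def caesar(text, shift=3, decrypt=False):
    """Shift each letter of `text` by `shift` places in the alphabet."""
    if decrypt:
        shift = -shift
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)  # leave spaces and punctuation untouched
    return "".join(out)

print(caesar("attack at dawn"))                # -> "dwwdfn dw gdzq"
print(caesar("dwwdfn dw gdzq", decrypt=True))  # -> "attack at dawn"
```

Anyone who knows, or guesses, the shift can reverse it, which is why such ciphers were eventually broken.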
Over time, ciphers became more and more complicated, so that they were harder and harder to crack. Harder, but not impossible.
"When you look at the history of cryptography, you come up with a system, and sooner or later someone else comes up with a way of breaking the system," Ekert said. "You may ask yourself: Is it going to be like this forever? Is there such a thing as the perfect cipher?"
The perfect cipher
The closest thing to a perfect cipher involves what's called a one-time pad.
"You just write your message as a sequence of bits and you then add those bits to a key and obtain a cryptogram," Ekert said."If you take the cryptogram and add it to the key, you get plain text. In fact, one can prove that if the keys are random and as long as the messages, then the system offers perfect security."
In theory, it's a great solution, but in practice, it has been hard to achieve.
"If the keys are as long as the message, then you need a secure way to distribute the key," Ekert said.
The nature of physics known as quantum mechanics seems to offer the best hope of knowing whether a key is secure.
Quantum mechanics says that certain properties of subatomic particles can't be measured without disturbing the particles and changing the outcome. In essence, a particle exists in a state of indecision until a measurement is made, forcing it to choose one state or another. Thus, if someone made a measurement of the particle, it would irrevocably change the particle.
If an encryption key were encoded in bits represented by particles in different states, it would be immediately obvious when a key was not secure because the measurement made to hack the key would have changed the key.
This, of course, still depends on the two parties sending and receiving the message being able to independently choose what to measure, using a truly random number generator — in other words, exercising free will — and using devices they trust.
But what if a hacker were controlling one of the parties, or tampering with the encryption device?
Ekert and his colleagues showed that even in this case, if the messaging parties still have some free will, their code could remain secure as long as they know to what degree they are compromised.
In other words, a random number generator that is not truly random can still be used to send an undecipherable secret message, as long as the sender knows how random it is and adjusts for that fact.
"Even if they are manipulated, as long as they are not stupid and have a little bit of free will, they can still do it," Ekert said.
|
<urn:uuid:d0f6c11e-5d9a-4eac-8057-aad25b3d2613>
| 3.390625
|
http://www.livescience.com/18587-hacking-quantum-cryptography-unbreakable-code.html
|
(1881 - 1973)
Regarding the canon of art history, no other artist has exerted such influence as Pablo Picasso.
Frequently dubbed the "dean of modernism," the Spanish artist was revolutionary in the way he challenged the conventions of painting. His stylistic pluralism, legendary reconfiguration of pictorial space and inexhaustible creative force have made Picasso one of the most revered artists of the 20th century.
Influenced by symbolism and Toulouse-Lautrec, Picasso developed his own independent style in Paris during his renowned Blue Period (1900-1904): motifs from everyday life...
|
<urn:uuid:e347ca03-870c-40d2-9d5d-c97024672562>
| 3.4375
|
http://www.williambennettgallery.com/artists/picasso/pieces/PICA1191.php
|
July 18, 2012
Since the Industrial Revolution, ocean acidity has risen by 30 percent as a direct result of fossil-fuel burning and deforestation. And within the last 50 years, human industry has caused the world’s oceans to experience a sharp increase in acidity that rivals levels seen when ancient carbon cycles triggered mass extinctions, which took out more than 90 percent of the oceans’ species and more than 75 percent of terrestrial species.
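The "30 percent" figure refers to the concentration of hydrogen ions, which is related to pH logarithmically. Using the commonly cited approximate surface-ocean values of pH 8.2 before industrialization and pH 8.1 today (assumed here; the article does not give them), a quick Python check shows the two ways of stating the change are consistent:

```python
# Hydrogen-ion concentration from pH:  [H+] = 10 ** (-pH)
pre_industrial_pH = 8.2   # commonly cited approximate value (assumption)
present_pH = 8.1          # commonly cited approximate value (assumption)

h_then = 10 ** -pre_industrial_pH
h_now = 10 ** -present_pH

increase = (h_now - h_then) / h_then
print(f"Increase in hydrogen-ion concentration: {increase:.0%}")  # roughly 26%
```

By the same arithmetic, a drop of a full pH unit would mean a ten-fold increase in acidity.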
Rising ocean acidity is now considered to be just as much of a formidable threat to the health of Earth’s environment as the atmospheric climate changes brought on by pumping out greenhouse gases. Scientists are now trying to understand what that means for the future survival of marine and terrestrial organisms.
In June, ScienceNOW reported that out of the 35 billion metric tons of carbon dioxide released annually through fossil fuel use, one-third of those emissions diffuse into the surface layer of the ocean. The effects those emissions will have on the biosphere is sobering, as rising ocean acidity will completely upset the balance of marine life in the world’s oceans and will subsequently affect humans and animals who benefit from the oceans’ food resources.
The damage to marine life is due in large part to the fact that higher acidity dissolves naturally-occurring calcium carbonate that many marine species–including plankton, sea urchins, shellfish and coral–use to construct their shells and external skeletons. Studies conducted off Arctic regions have shown that the combination of melting sea ice, atmospheric carbon dioxide and subsequently hotter, CO2-saturated surface waters has led to the undersaturation of calcium carbonate in ocean waters. The reduction in the amount of calcium carbonate in the ocean spells out disaster for the organisms that rely on those nutrients to build their protective shells and body structures.
Ocean acidity and calcium carbonate saturation are inversely related, which allows scientists to use the oceans' calcium carbonate saturation levels to measure just how acidic the waters are. In a study by the University of Hawaii at Manoa published earlier this year, researchers calculated that the level of calcium carbonate saturation in the world's oceans has fallen faster in the last 200 years than at any time in the preceding 21,000 years, signaling an extraordinary rise in ocean acidity to levels higher than would ever occur naturally.
The authors of the study continued on to say that currently only 50 percent of the world’s ocean waters are saturated with enough calcium carbonate to support coral reef growth and maintenance, but by 2100, that proportion is expected to drop to a mere five percent, putting most of the world’s beautiful and diverse coral reef habitats in danger.
In the face of so much mounting and discouraging evidence that the oceans are on a trajectory toward irreparable marine life damage, a new study offers hope that certain species may be able to adapt quick enough to keep pace with the changing make-up of Earth’s waters.
In a study published last week in the journal Nature Climate Change, researchers from the ARC Center of Excellence for Coral Reef Studies found that baby clownfish (Amphiprion melanopus) are able to cope with increased acidity if their parents also lived in more acidic water, a remarkable finding after a study conducted last year on another clownfish species (Amphiprion percula) suggested acidic waters reduced the fish's sense of smell, making the fish more likely to swim toward predators by mistake.
But the new study will require further research to determine whether or not the adaptive abilities of the clownfish are also present in more environmentally-sensitive marine species.
While the news that at least some baby fish may be able to adapt to changes provides optimism, there is still much to learn about the process. It is unclear through what mechanism clownfish are able to pass along this trait to their offspring so quickly, evolutionarily speaking. Organisms capable of generation-to-generation adaptations could have an advantage in the coming decades, as anthropogenic emissions push Earth to non-natural extremes and place new stresses on the biosphere.
|
<urn:uuid:d5fc8f97-1ffe-4404-b9ee-d359c5162435>
| 3.796875
|
http://blogs.smithsonianmag.com/science/2012/07/ocean-acidity-rivals-climate-change-as-environmental-threat/
|
The cerebrum, the largest part of the brain, is separated into the right and left hemispheres. The right hemisphere is in charge of the functions on the left-side of the body, as well as many cognitive functions.
A right-side stroke happens when the brain’s blood supply is interrupted in this area. Without oxygen and nutrients from blood, the brain tissue quickly dies. A stroke is a serious condition. It requires emergency care.
There are two main types of stroke:
An ischemic stroke (the more common form) is caused by a sudden decrease in blood flow to a region of the brain, which may be due to:
- A clot that forms in another part of the body (eg, heart or neck) breaking off and blocking the flow in a blood vessel supplying the brain (embolus)
- A clot that forms in an artery that supplies blood to the brain (thrombus)
- A tear in an artery supplying blood to the brain (arterial dissection)
A hemorrhagic stroke is caused by a burst blood vessel that results in bleeding in the brain.
Examples of risk factors that you can control or treat include:
Certain conditions, such as:
- High blood pressure
- High cholesterol
- High levels of the amino acid homocysteine (may result in the formation of blood clots)
- Atherosclerosis (narrowing of the arteries due to build-up of plaque)
- Atrial fibrillation (abnormal heart rhythm)
- Metabolic syndrome
- Type 2 diabetes
- Alcohol or drug abuse
- Medicines (eg, long-term use of birth control pills)
- Lifestyle factors (eg, smoking , physical inactivity, diet)
Risk factors that you cannot control include:
- History of having a stroke, heart attack, or other type of cardiovascular disease
- History of having a transient ischemic attack (TIA)—With a TIA, stroke-like symptoms often resolve within minutes (always within 24 hours). They may signal a very high risk of having a stroke in the future.
- Age: 60 or older
- Family members who have had a stroke
- Gender: males
- Race: Black, Asian, Hispanic
- Blood disorder that increases clotting
- Heart valve disease (eg, mitral stenosis)
The immediate symptoms of a right-side stroke come on suddenly and may include:
- Weakness or numbness of face, arm, or leg, especially on the left side of the body
- Loss of balance, coordination problems
- Vision problems, especially on the left side of vision in both eyes
- Difficulty swallowing
If you or someone you know has any of these symptoms, call 911 right away. A stroke needs to be treated as soon as possible.
Longer-lasting effects of the stroke may include problems with:
- Left-sided weakness and/or sensory problems
- Speaking and swallowing
- Vision (eg, inability for the brain to take in information from the left visual field)
- Perception and spatial relations
- Attention span, comprehension, problem solving, judgment
- Interactions with other people
- Activities of daily living (eg, going to the bathroom)
- Mental health (eg, depression, frustration, impulsivity)
The doctor will make a diagnosis as quickly as possible. Tests may include:
- Exam of nervous system
- Computed tomography (CT) scan —a type of x-ray that uses a computer to make pictures of the brain
- CT angiogram—a type of CT scan which evaluates the blood vessels in the brain and/or neck
- Magnetic resonance imaging (MRI) scan —a test that uses magnetic waves to make pictures of the brain
- Magnetic resonance angiography (MRA) scan —a type of MRI scan which evaluates the blood vessels in the brain and/or neck
- Angiogram —a test that uses a catheter (tube) and x-ray machine to assess the heart and its blood supply
- Heart function tests (eg, electrocardiogram , echocardiogram )
- Doppler ultrasound —a test that uses sound waves to examine the blood vessels
- Blood tests
- Tests to check the level of oxygen in the blood
- Kidney function tests
- Tests to evaluate the ability to swallow
Immediate treatment is needed to potentially:
- Dissolve a clot causing an ischemic stroke
- Stop the bleeding during a hemorrhagic stroke
In some cases, oxygen therapy is needed.
Medicines may be given right away for an ischemic stroke to:
- Dissolve clots and prevent new ones from forming
- Thin blood
- Control blood pressure
- Reduce brain swelling
- Treat an irregular heart rate
Cholesterol medicines called statins may also be given.
For a hemorrhagic stroke, the doctor may give medicines to:
- Work against any blood-thinning drugs that you may regularly take
- Reduce how your brain reacts to bleeding
- Control blood pressure
- Prevent seizures
For an ischemic stroke, procedures may be done to:
- Reroute blood supply around a blocked artery
- Remove the clot or deliver clot-dissolving medicine (embolectomy)
- Remove fatty deposits from a carotid artery (major arteries in the neck that lead to the brain) (carotid artery endarterectomy)
- Widen carotid artery and add a mesh tube to keep it open (angioplasty and stenting)
For a hemorrhagic stroke, the doctor may:
- Remove a piece of the skull (craniotomy) to relieve pressure on the brain and remove blood clot
- Place a clip on or a tiny coil in the aneurysm to stop it from bleeding
A rehabilitation program focuses on:
- Physical therapy—to regain as much movement as possible
- Occupational therapy—to assist in everyday tasks and self-care
- Speech therapy—to improve swallowing and speech challenges
- Psychological therapy—to help adjust to life after the stroke
To help reduce your chance of having a stroke, take the following steps:
- Exercise regularly.
- Eat a healthy diet that includes fruit, vegetables, whole grains, and fish.
- Maintain a healthy weight.
- If you drink alcohol, drink only in moderation (1-2 drinks per day).
- If you smoke, quit.
- If you have a chronic condition, like high blood pressure or diabetes, get proper treatment.
- If recommended by your doctor, take a low-dose aspirin every day.
- If you are at risk for having a stroke, talk to your doctor about taking statin medicines.
- Reviewer: Rimas Lukas, MD
- Review Date: 06/2012
- Update Date: 00/61/2012
|
<urn:uuid:6f093826-dc99-4b9c-9f16-033ec6f1ac6f>
| 3.671875
|
http://doctors-hospital.net/your-health/?/645168/Right-hemisphere-stroke
|
Nuclear meltdown is an informal term for a severe nuclear reactor accident that results in core damage from overheating. The term is not officially defined by the International Atomic Energy Agency or by the U.S. Nuclear Regulatory Commission. However, it has been defined to mean the accidental melting of the core of a nuclear reactor, and is in common usage a reference to the core's either complete or partial collapse. "Core melt accident" and "partial core melt" are the analogous technical terms for a meltdown.
A core melt accident occurs when the heat generated by a nuclear reactor exceeds the heat removed by the cooling systems to the point where at least one nuclear fuel element exceeds its melting point. This differs from a fuel element failure, which is not caused by high temperatures. A meltdown may be caused by a loss of coolant, loss of coolant pressure, or low coolant flow rate or be the result of a criticality excursion in which the reactor is operated at a power level that exceeds its design limits. Alternately, in a reactor plant such as the RBMK-1000, an external fire may endanger the core, leading to a meltdown.
Once the fuel elements of a reactor begin to melt, the fuel cladding has been breached, and the nuclear fuel (such as uranium, plutonium, or thorium) and fission products (such as cesium-137, krypton-88, or iodine-131) within the fuel elements can leach out into the coolant. Subsequent failures can permit these radioisotopes to breach further layers of containment. Superheated steam and hot metal inside the core can lead to fuel-coolant interactions, hydrogen explosions, or water hammer, any of which could destroy parts of the containment. A meltdown is considered very serious because of the potential, however remote, that radioactive materials could breach all containment and escape (or be released) into the environment, resulting in radioactive contamination and fallout, and potentially leading to radiation poisoning of people and animals nearby.
Nuclear power plants generate electricity by heating fluid via a nuclear reaction to run a generator. If the heat from that reaction is not removed adequately, the fuel assemblies in a reactor core can melt. A core damage incident can occur even after a reactor is shut down because the fuel continues to produce decay heat.
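The decay heat mentioned here is often estimated with the Way-Wigner approximation. The Python sketch below is only a rough illustration of the magnitudes involved (a few percent of full power shortly after shutdown, well under one percent after a day); it is not a licensed thermal-hydraulics calculation, and the one-year operating period is an assumed value.

```python
def decay_heat_fraction(t_after_shutdown_s, t_operating_s):
    """Way-Wigner approximation of decay power as a fraction of operating power.

    t_after_shutdown_s -- seconds elapsed since shutdown
    t_operating_s      -- seconds the reactor ran at power before shutdown
    """
    return 0.066 * (t_after_shutdown_s ** -0.2 -
                    (t_after_shutdown_s + t_operating_s) ** -0.2)

one_year = 365 * 24 * 3600
for t in (10, 3600, 24 * 3600):          # 10 seconds, 1 hour, 1 day
    frac = decay_heat_fraction(t, one_year)
    print(f"{t:>6} s after shutdown: about {frac:.1%} of full power")
# For a core of roughly 3,000 MW(thermal), even 1% of full power is ~30 MW
# of heat that still has to be removed.
```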
A core damage accident is caused by the loss of sufficient cooling for the nuclear fuel within the reactor core. The reason may be one of several factors, including a loss-of-pressure-control accident, a loss-of-coolant accident (LOCA), an uncontrolled power excursion or, in reactors without a pressure vessel, a fire within the reactor core. Failures in control systems may cause a series of events resulting in loss of cooling. Contemporary safety principles of defense in depth ensure that multiple layers of safety systems are always present to make such accidents unlikely.
The containment building is the last of several safeguards that prevent the release of radioactivity to the environment. Many commercial reactors are contained within a 1.2-to-2.4-metre (3.9 to 7.9 ft) thick pre-stressed, steel-reinforced, air-tight concrete structure that can withstand hurricane-force winds and severe earthquakes.
- In a loss-of-coolant accident, either the physical loss of coolant (which is typically deionized water, an inert gas, NaK, or liquid sodium) or the loss of a method to ensure a sufficient flow rate of the coolant occurs. A loss-of-coolant accident and a loss-of-pressure-control accident are closely related in some reactors. In a pressurized water reactor, a LOCA can also cause a "steam bubble" to form in the core due to excessive heating of stalled coolant or by the subsequent loss-of-pressure-control accident caused by a rapid loss of coolant. In a loss-of-forced-circulation accident, a gas cooled reactor's circulators (generally motor or steam driven turbines) fail to circulate the gas coolant within the core, and heat transfer is impeded by this loss of forced circulation, though natural circulation through convection will keep the fuel cool as long as the reactor is not depressurized.
- In a loss-of-pressure-control accident, the pressure of the confined coolant falls below specification without the means to restore it. In some cases this may reduce the heat transfer efficiency (when using an inert gas as a coolant) and in others may form an insulating "bubble" of steam surrounding the fuel assemblies (for pressurized water reactors). In the latter case, due to localized heating of the "steam bubble" due to decay heat, the pressure required to collapse the "steam bubble" may exceed reactor design specifications until the reactor has had time to cool down. (This event is less likely to occur in boiling water reactors, where the core may be deliberately depressurized so that the Emergency Core Cooling System may be turned on). In a depressurization fault, a gas-cooled reactor loses gas pressure within the core, reducing heat transfer efficiency and posing a challenge to the cooling of fuel; however, as long as at least one gas circulator is available, the fuel will be kept cool.
- In an uncontrolled power excursion accident, a sudden power spike in the reactor exceeds reactor design specifications due to a sudden increase in reactor reactivity. An uncontrolled power excursion occurs due to significantly altering a parameter that affects the neutron multiplication rate of a chain reaction (examples include ejecting a control rod or significantly altering the nuclear characteristics of the moderator, such as by rapid cooling). In extreme cases the reactor may proceed to a condition known as prompt critical. This is especially a problem in reactors that have a positive void coefficient of reactivity, a positive temperature coefficient, are overmoderated, or can trap excess quantities of deleterious fission products within their fuel or moderators. Many of these characteristics are present in the RBMK design, and the Chernobyl disaster was caused by such deficiencies as well as by severe operator negligence. Western light water reactors are not subject to very large uncontrolled power excursions because loss of coolant decreases, rather than increases, core reactivity (a negative void coefficient of reactivity); "transients," as the minor power fluctuations within Western light water reactors are called, are limited to momentary increases in reactivity that will rapidly decrease with time (approximately 200% - 250% of maximum neutronic power for a few seconds in the event of a complete rapid shutdown failure combined with a transient).
- Core-based fires endanger the core and can cause the fuel assemblies to melt. A fire may be caused by air entering a graphite moderated reactor, or a liquid-sodium cooled reactor. Graphite is also subject to accumulation of Wigner energy, which can overheat the graphite (as happened at the Windscale fire). Light water reactors do not have flammable cores or moderators and are not subject to core fires. Gas-cooled civilian reactors, such as the Magnox, UNGG, and AGCR type reactors, keep their cores blanketed with non reactive carbon dioxide gas, which cannot support a fire. Modern gas-cooled civilian reactors use helium, which cannot burn, and have fuel that can withstand high temperatures without melting (such as the High Temperature Gas Cooled Reactor and the Pebble Bed Modular Reactor).
- Byzantine faults and cascading failures within instrumentation and control systems may cause severe problems in reactor operation, potentially leading to core damage if not mitigated. For example, the Browns Ferry fire damaged control cables and required the plant operators to manually activate cooling systems. The Three Mile Island accident was caused by a stuck-open pilot-operated pressure relief valve combined with a deceptive water level gauge that misled reactor operators, which resulted in core damage.
Light water reactors (LWRs)
Before the core of a light water nuclear reactor can be damaged, two precursor events must have already occurred:
- A limiting fault (or a set of compounded emergency conditions) that leads to the failure of heat removal within the core (the loss of cooling). Low water level uncovers the core, allowing it to heat up.
- Failure of the Emergency Core Cooling System (ECCS). The ECCS is designed to rapidly cool the core and make it safe in the event of the maximum fault (the design basis accident) that nuclear regulators and plant engineers could imagine. There are at least two copies of the ECCS built for every reactor. Each division (copy) of the ECCS is capable, by itself, of responding to the design basis accident. The latest reactors have as many as four divisions of the ECCS. This is the principle of redundancy, or duplication. As long as at least one ECCS division functions, no core damage can occur. Each of the several divisions of the ECCS has several internal "trains" of components. Thus the ECCS divisions themselves have internal redundancy – and can withstand failures of components within them.
The Three Mile Island accident was a compounded group of emergencies that led to core damage. What led to this was an erroneous decision by operators to shut down the ECCS during an emergency condition due to gauge readings that were either incorrect or misinterpreted; this caused another emergency condition that, several hours after the fact, led to core exposure and a core damage incident. If the ECCS had been allowed to function, it would have prevented both exposure and core damage. During the Fukushima incident the emergency cooling system had also been manually shut down several minutes after it started.
If such a limiting fault were to occur, and a complete failure of all ECCS divisions were to occur, both Kuan, et al and Haskin, et al describe six stages between the start of the limiting fault (the loss of cooling) and the potential escape of molten corium into the containment (a so-called "full meltdown"):
- Uncovering of the Core – In the event of a transient, upset, emergency, or limiting fault, LWRs are designed to automatically SCRAM (a SCRAM being the immediate and full insertion of all control rods) and spin up the ECCS. This greatly reduces reactor thermal power (but does not remove it completely); this delays core becoming uncovered, which is defined as the point when the fuel rods are no longer covered by coolant and can begin to heat up. As Kuan states: "In a small-break LOCA with no emergency core coolant injection, core uncovery [sic] generally begins approximately an hour after the initiation of the break. If the reactor coolant pumps are not running, the upper part of the core will be exposed to a steam environment and heatup of the core will begin. However, if the coolant pumps are running, the core will be cooled by a two-phase mixture of steam and water, and heatup of the fuel rods will be delayed until almost all of the water in the two-phase mixture is vaporized. The TMI-2 accident showed that operation of reactor coolant pumps may be sustained for up to approximately two hours to deliver a two phase mixture that can prevent core heatup."
- Pre-damage heat up – "In the absence of a two-phase mixture going through the core or of water addition to the core to compensate water boiloff, the fuel rods in a steam environment will heat up at a rate between 0.3 °C/s (0.5 °F/s) and 1 °C/s (1.8 °F/s) (3)."
- Fuel ballooning and bursting – "In less than half an hour, the peak core temperature would reach 1,100 K (1,520 °F). At this temperature the zircaloy cladding of the fuel rods may balloon and burst. This is the first stage of core damage. Cladding ballooning may block a substantial portion of the flow area of the core and restrict the flow of coolant. However complete blockage of the core is unlikely because not all fuel rods balloon at the same axial location. In this case, sufficient water addition can cool the core and stop core damage progression." (A rough arithmetic check of this time scale is sketched just after this list.)
- Rapid oxidation – "The next stage of core damage, beginning at approximately 1,500 K (2,240 °F), is the rapid oxidation of the Zircaloy by steam. In the oxidation process, hydrogen is produced and a large amount of heat is released. Above 1,500 K (2,240 °F), the power from oxidation exceeds that from decay heat (4,5) unless the oxidation rate is limited by the supply of either zircaloy or steam." (The reaction and its hydrogen yield are sketched just after this list.)
- Debris bed formation – "When the temperature in the core reaches about 1,700 K (2,600 °F), molten control materials [1,6] will flow to and solidify in the space between the lower parts of the fuel rods where the temperature is comparatively low. Above 1,700 K (2,600 °F), the core temperature may escalate in a few minutes to the melting point of zircaloy [2,150 K (3,410 °F)] due to increased oxidation rate. When the oxidized cladding breaks, the molten zircaloy, along with dissolved UO2 [1,7] would flow downward and freeze in the cooler, lower region of the core. Together with solidified control materials from earlier down-flows, the relocated zircaloy and UO2 would form the lower crust of a developing cohesive debris bed."
- (Corium) Relocation to the lower plenum – "In scenarios of small-break LOCAs, there is generally a pool of water in the lower plenum of the vessel at the time of core relocation. Release of molten core materials into water always generates large amounts of steam. If the molten stream of core materials breaks up rapidly in water, there is also a possibility of a steam explosion. During relocation, any unoxidized zirconium in the molten material may also be oxidized by steam, and in the process hydrogen is produced. Recriticality also may be a concern if the control materials are left behind in the core and the relocated material breaks up in unborated water in the lower plenum."
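A rough arithmetic check of the "less than half an hour" figure in the fuel ballooning stage above: starting from a typical operating coolant temperature of roughly 600 K (an assumed value, not given in the text) and applying the quoted heat-up rates of 0.3 to 1 °C per second gives on the order of 8 to 30 minutes to reach 1,100 K.

```python
start_K = 600.0    # assumed cladding temperature at core uncovery (typical PWR coolant temperature)
burst_K = 1100.0   # temperature at which cladding may balloon and burst (from the text)

for rate_K_per_s in (0.3, 1.0):           # heat-up rates quoted in the text
    minutes = (burst_K - start_K) / rate_K_per_s / 60
    print(f"at {rate_K_per_s} K/s: about {minutes:.0f} minutes to reach {burst_K:.0f} K")
# ~28 minutes at 0.3 K/s and ~8 minutes at 1 K/s, consistent with "less than half an hour"
```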
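The rapid-oxidation stage above is driven by the zirconium-steam reaction, Zr + 2 H2O -> ZrO2 + 2 H2, which releases both heat and the hydrogen discussed throughout this article. The stoichiometry in the sketch below follows directly from molar masses; the heat-release figure of roughly 6 MJ per kilogram of zirconium is an approximate literature value included as an assumption.

```python
# Zr + 2 H2O -> ZrO2 + 2 H2   (strongly exothermic)
M_ZR = 91.22e-3          # kg/mol, zirconium
M_H2 = 2.016e-3          # kg/mol, hydrogen gas
HEAT_PER_KG_ZR = 6.0e6   # J per kg of Zr oxidized (approximate literature value)

kg_zr = 1.0
mol_zr = kg_zr / M_ZR
kg_h2 = 2 * mol_zr * M_H2   # two moles of H2 are produced per mole of Zr
print(f"{kg_zr:.0f} kg of Zr oxidized -> about {kg_h2 * 1000:.0f} g of hydrogen "
      f"and roughly {HEAT_PER_KG_ZR / 1e6:.0f} MJ of heat")
```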
At the point at which the corium relocates to the lower plenum of the reactor pressure vessel ("RPV"), Haskin, et al relate that the possibility exists for an incident called a fuel-coolant interaction (FCI) to substantially stress or breach the primary pressure boundary. This is because the lower plenum of the RPV may have a substantial quantity of water - the reactor coolant - in it, and, assuming the primary system has not been depressurized, the water will likely be in the liquid phase, and consequently dense, and at a vastly lower temperature than the corium. Since corium is a liquid metal-ceramic eutectic at temperatures of 2,200 to 3,200 K (3,500 to 5,300 °F), its fall into liquid water at 550 to 600 K (530 to 620 °F) may cause an extremely rapid evolution of steam that could cause a sudden extreme overpressure and consequent gross structural failure of the primary system or RPV. Though most modern studies hold that it is physically infeasible, or at least extraordinarily unlikely, Haskin, et al state that there exists a remote possibility of an extremely violent FCI leading to something referred to as an alpha-mode failure, or the gross failure of the RPV itself, and subsequent ejection of the upper plenum of the RPV as a missile against the inside of the containment, which would likely lead to the failure of the containment and release of the fission products of the core to the outside environment without any substantial decay having taken place.
Breach of the Primary Pressure Boundary
There are several possibilities as to how the primary pressure boundary could be breached by corium.
- Steam Explosion
As previously described, an FCI could lead to an overpressure event that fails the RPV and, with it, the primary pressure boundary. Haskin, et al. report that in the event of a steam explosion, failure of the lower plenum is far more likely than ejection of the upper plenum in the alpha-mode. In the event of lower plenum failure, debris at varied temperatures can be expected to be projected into the cavity below the core. The containment may be subject to overpressure, though this is not likely to fail the containment. The alpha-mode failure will lead to the consequences previously discussed.
- Pressurized Melt Ejection (PME)
It is quite possible, especially in pressurized water reactors, that the primary loop will remain pressurized following corium relocation to the lower plenum. As such, pressure stresses on the RPV will be present in addition to the weight stress that the molten corium places on the lower plenum of the RPV; when the metal of the RPV weakens sufficiently due to the heat of the molten corium, it is likely that the liquid corium will be discharged under pressure out of the bottom of the RPV in a pressurized stream, together with entrained gases. This mode of corium ejection may lead to direct containment heating (DCH).
Severe Accident Ex-Vessel Interactions and Challenges to Containment
Haskin, et al identify six modes by which the containment could be credibly challenged; some of these modes are not applicable to core melt accidents.
- Dynamic pressure (shockwaves)
- Internal missiles
- External missiles (not applicable to core melt accidents)
Standard failure modes
If the melted core penetrates the pressure vessel, there are theories and speculations as to what may then occur.
In modern Russian plants, there is a "core catching device" in the bottom of the containment building: the melted core is supposed to hit a thick layer of a "sacrificial metal" which would melt, dilute the core, and increase its heat conductivity; the diluted core can then be cooled down by water circulating in the floor. However, there has never been any full-scale testing of this device.
In Western plants there is an airtight containment building. Though radiation would be at a high level within the containment, doses outside of it would be lower. Containment buildings are designed for the orderly release of pressure without releasing radionuclides, through a pressure release valve and filters. Hydrogen/oxygen recombiners also are installed within the containment to prevent gas explosions.
In a melting event, one spot or area on the RPV will become hotter than other areas, and will eventually melt. When it melts, corium will pour into the cavity under the reactor. Though the cavity is designed to remain dry, several NUREG-class documents advise operators to flood the cavity in the event of a fuel melt incident. This water will become steam and pressurize the containment. Automatic water sprays will pump large quantities of water into the steamy environment to keep the pressure down. Catalytic recombiners will rapidly convert the hydrogen and oxygen back into water. One positive effect of the corium falling into water is that it is cooled and returns to a solid state.
Extensive water spray systems within the containment along with the ECCS, when it is reactivated, will allow operators to spray water within the containment to cool the core on the floor and reduce it to a low temperature.
These procedures are intended to prevent release of radiation. In the Three Mile Island event in 1979, a theoretical person standing at the plant property line during the entire event would have received a dose of approximately 2 millisieverts (200 millirem), between a chest X-ray's and a CT scan's worth of radiation. This was due to outgassing by an uncontrolled system that, today, would have been backfitted with activated carbon and HEPA filters to prevent radionuclide release.
However, in the case of the Fukushima incident this design also at least partially failed: large amounts of highly radioactive water were produced, and the nuclear fuel may have melted through the base of the pressure vessels.
Cooling will take quite a while, until the natural decay heat of the corium reduces to the point where natural convection and conduction of heat to the containment walls and re-radiation of heat from the containment allows for water spray systems to be shut down and the reactor put into safe storage. The containment can be sealed with release of extremely limited offsite radioactivity and release of pressure within the containment. After a number of years for fission products to decay - probably around a decade - the containment can be reopened for decontamination and demolition.
Unexpected failure modes
Another scenario sees a buildup of hydrogen, which may lead to a detonation event, as happened at three reactors during the Fukushima incident. Catalytic hydrogen recombiners located within containment are designed to prevent this from occurring; however, prior to the installation of these recombiners in the 1980s, the Three Mile Island containment (in 1979) suffered a massive hydrogen explosion in the accident there. The containment withstood the pressure and no radioactivity was released. At Fukushima, however, the recombiners did not work due to the absence of power, and hydrogen detonations breached the containment.
Speculative failure modes
One scenario consists of the reactor pressure vessel failing all at once, with the entire mass of corium dropping into a pool of water (for example, coolant or moderator) and causing extremely rapid generation of steam. The pressure rise within the containment could threaten integrity if rupture disks could not relieve the stress. Exposed flammable substances could burn, but there are few, if any, flammable substances within the containment.
Another theory called an 'alpha mode' failure by the 1975 Rasmussen (WASH-1400) study asserted steam could produce enough pressure to blow the head off the reactor pressure vessel (RPV). The containment could be threatened if the RPV head collided with it. (The WASH-1400 report was replaced by newer, better-based studies, and now the Nuclear Regulatory Commission has disavowed them all and is preparing the overarching State-of-the-Art Reactor Consequence Analyses [SOARCA] study - see the Disclaimer in NUREG-1150.)
It has not been determined to what extent a molten mass can melt through a structure (although that was tested in the Loss-of-Fluid-Test Reactor described in Test Area North's fact sheet). The Three Mile Island accident provided some real-life experience, with an actual molten core within an actual structure; the molten corium failed to melt through the Reactor Pressure Vessel after over six hours of exposure, due to dilution of the melt by the control rods and other reactor internals, validating the emphasis on defense in depth against core damage incidents. Some believe a molten reactor core could actually penetrate the reactor pressure vessel and containment structure and burn downwards into the earth beneath, to the level of the groundwater.
By 1970, there were doubts about the ability of the emergency cooling systems of a nuclear reactor to prevent a loss of coolant accident and the consequent meltdown of the fuel core; the subject proved popular in the technical and the popular presses. In 1971, in the article Thoughts on Nuclear Plumbing, former Manhattan Project (1942–1946) nuclear physicist Ralph Lapp used the term "China syndrome" to describe a possible burn-through, after a loss of coolant accident, of the nuclear fuel rods and core components melting the containment structures, and the subsequent escape of radioactive material(s) into the atmosphere and environment; the hypothesis derived from a 1967 report by a group of nuclear physicists, headed by W. K. Ergen.
The geographic, planet-piercing concept of the China syndrome derives from the misperception that China is the antipode of the United States; to many Americans, it is “the other side of the world”. Moreover, the hypothetical transit of a meltdown product to the other side of the Earth (i.e. China) ignores the fact that the Earth's gravity tends to pull all masses towards its center. Even if a meltdown product could persist in a mobile molten form long enough to reach the center of the Earth, gravity would prevent it from continuing to the other side.
Other reactor types
Other types of reactors have different capabilities and safety profiles than the LWR does. Advanced varieties of several of these reactors have the potential to be inherently safe.
CANDU reactors
CANDU reactors, Canadian-invented deuterium-uranium design, are designed with at least one, and generally two, large low-temperature and low-pressure water reservoirs around their fuel/coolant channels. The first is the bulk heavy-water moderator (a separate system from the coolant), and the second is the light-water-filled shield tank. These backup heat sinks are sufficient to prevent either the fuel meltdown in the first place (using the moderator heat sink), or the breaching of the core vessel should the moderator eventually boil off (using the shield tank heat sink). Other failure modes aside from fuel melt will probably occur in a CANDU rather than a meltdown, such as deformation of the calandria into a non-critical configuration. All CANDU reactors are located within standard Western containments as well.
Gas-cooled reactors
One type of Western reactor, known as the advanced gas-cooled reactor (or AGCR), built by the United Kingdom, is not very vulnerable to loss-of-cooling accidents or to core damage except in the most extreme of circumstances. By virtue of the relatively inert coolant (carbon dioxide), the large volume and high pressure of the coolant, and the relatively high heat transfer efficiency of the reactor, the time frame for core damage in the event of a limiting fault is measured in days. Restoration of some means of coolant flow will prevent core damage from occurring.
Other types of highly advanced gas cooled reactors, generally known as high-temperature gas-cooled reactors (HTGRs) such as the Japanese High Temperature Test Reactor and the United States' Very High Temperature Reactor, are inherently safe, meaning that meltdown or other forms of core damage are physically impossible, due to the structure of the core, which consists of hexagonal prismatic blocks of silicon carbide reinforced graphite infused with TRISO or QUADRISO pellets of uranium, thorium, or mixed oxide buried underground in a helium-filled steel pressure vessel within a concrete containment. Though this type of reactor is not susceptible to meltdown, additional capabilities of heat removal are provided by using regular atmospheric airflow as a means of backup heat removal, by having it pass through a heat exchanger and rising into the atmosphere due to convection, achieving full residual heat removal. The VHTR is scheduled to be prototyped and tested at Idaho National Laboratory within the next decade (as of 2009) as the design selected for the Next Generation Nuclear Plant by the US Department of Energy. This reactor will use a gas as a coolant, which can then be used for process heat (such as in hydrogen production) or for the driving of gas turbines and the generation of electricity.
A similar highly advanced gas cooled reactor originally designed by West Germany (the AVR reactor) and now developed by South Africa is known as the Pebble Bed Modular Reactor. It is an inherently safe design, meaning that core damage is physically impossible, due to the design of the fuel (spherical graphite "pebbles" arranged in a bed within a metal RPV and filled with TRISO (or QUADRISO) pellets of uranium, thorium, or mixed oxide within). A prototype of a very similar type of reactor has been built by the Chinese, HTR-10, and has worked beyond researchers' expectations, leading the Chinese to announce plans to build a pair of follow-on, full-scale 250 MWe, inherently safe, power production reactors based on the same concept. (See Nuclear power in the People's Republic of China for more information.)
Experimental or conceptual designs
Some design concepts for nuclear reactors emphasize resistance to meltdown and operating safety.
The PIUS (process inherent ultimate safety) designs, originally engineered by the Swedes in the late 1970s and early 1980s, are LWRs that by virtue of their design are resistant to core damage. No units have ever been built.
Power reactors, including the Deployable Electrical Energy Reactor, a larger-scale mobile version of the TRIGA for power generation in disaster areas and on military missions, and the TRIGA Power System, a small power plant and heat source for small and remote community use, have been put forward by interested engineers, and share the safety characteristics of the TRIGA due to the uranium zirconium hydride fuel used.
The Hydrogen Moderated Self-regulating Nuclear Power Module, a reactor that uses uranium hydride as a moderator and fuel, similar in chemistry and safety to the TRIGA, also possesses these extreme safety and stability characteristics, and has attracted a good deal of interest in recent times.
The liquid fluoride thermal reactor is designed to naturally have its core in a molten state, as a eutectic mix of thorium and fluorine salts. As such, a molten core is reflective of the normal and safe state of operation of this reactor type. In the event the core overheats, a metal plug will melt, and the molten salt core will drain into tanks where it will cool in a non-critical configuration. Since the core is liquid, and already melted, it cannot be damaged.
Advanced liquid metal reactors, such as the U.S. Integral Fast Reactor and the Russian BN-350, BN-600, and BN-800, all have a coolant with very high heat capacity, sodium metal. As such, they can withstand a loss of cooling without SCRAM and a loss of heat sink without SCRAM, qualifying them as inherently safe.
Soviet Union-designed reactors
Soviet designed RBMKs, found only in Russia and the CIS and now shut down everywhere except Russia, do not have containment buildings, are naturally unstable (tending to dangerous power fluctuations), and also have ECCS systems that are considered grossly inadequate by Western safety standards. The reactor from the Chernobyl Disaster was a RBMK reactor.
RBMK ECCS systems only have one division and have less than sufficient redundancy within that division. Though the large core size of the RBMK makes it less energy-dense than the Western LWR core, it makes it harder to cool. The RBMK is moderated by graphite. In the presence of both steam and oxygen, at high temperatures, graphite forms synthesis gas and with the water gas shift reaction the resultant hydrogen burns explosively. If oxygen contacts hot graphite, it will burn. The RBMK tends towards dangerous power fluctuations. Control rods used to be tipped with graphite, a material that slows neutrons and thus speeds up the chain reaction. Water is used as a coolant, but not a moderator. If the water boils away, cooling is lost, but moderation continues. This is termed a positive void coefficient of reactivity.
Control rods can become stuck if the reactor suddenly heats up and they are moving. Xenon-135, a neutron absorbent fission product, has a tendency to build up in the core and burn off unpredictably in the event of low power operation. This can lead to inaccurate neutronic and thermal power ratings.
The RBMK does not have any containment above the core. The only substantial solid barrier above the fuel is the upper part of the core, called the upper biological shield, which is a piece of concrete interpenetrated with control rods and with access holes for refueling while online. Other parts of the RBMK were shielded better than the core itself. Rapid shutdown (SCRAM) takes 10 to 15 seconds. Western reactors take 1 - 2.5 seconds.
Western aid has been given to provide certain real-time safety monitoring capacities to the human staff. Whether this extends to automatic initiation of emergency cooling is not known. Training in safety assessment has been provided from Western sources, and Russian reactor designs have evolved in response to the weaknesses of the RBMK. However, numerous RBMKs still operate.
It is safe to say that it might be possible to stop a loss-of-coolant event before core damage occurs, but that any core damage incident would probably lead to a massive release of radioactive materials. Further, dangerous power fluctuations are natural to the design.
Upon joining the EU, Lithuania was required to shut down the two RBMKs at Ignalina NPP, as such reactors are incompatible with European nuclear safety standards; it plans to replace them with a safer form of reactor.
The MKER is a modern Russian-engineered channel type reactor that is a distant descendant of the RBMK. It approaches the concept from a different and superior direction, optimizing the benefits, and fixing the flaws of the original RBMK design.
Several features of the MKER's design make it a credible and interesting option. One benefit is that in the event of a challenge to cooling within the core, such as a pipe break in a channel, the affected channel can be isolated from the plenums supplying water, decreasing the potential for common-mode failures.
The lower power density of the core greatly enhances thermal regulation. Graphite moderation enhances neutronic characteristics beyond light water ranges. The passive emergency cooling system provides a high level of protection by using natural phenomena to cool the core rather than depending on motor-driven pumps. The containment structure is modern and designed to withstand a very high level of punishment.
Refueling is accomplished while online, ensuring that outages are for maintenance only and are very few and far between. 97-99% uptime is a definite possibility. Lower enrichment fuels can be used, and high burnup can be achieved due to the moderator design. Neutronics characteristics have been revamped to optimize for purely civilian fuel fertilization and recycling.
Due to the enhanced quality control of parts, advanced computer controls, comprehensive passive emergency core cooling system, and very strong containment structure, along with a negative void coefficient and a fast acting rapid shutdown system, the MKER's safety can generally be regarded as being in the range of the Western Generation III reactors, and the unique benefits of the design may enhance its competitiveness in countries considering full fuel-cycle options for nuclear development.
The VVER is a pressurized light water reactor that is far more stable and safe than the RBMK. This is because it uses light water as a moderator (rather than graphite), has well understood operating characteristics, and has a negative void coefficient of reactivity. In addition, some have been built with more than marginal containments, some have quality ECCS systems, and some have been upgraded to international standards of control and instrumentation. Present generations of VVERs (the VVER-1000) are built to Western-equivalent levels of instrumentation, control, and containment systems.
However, even with these positive developments, certain older VVER models raise a high level of concern, especially the VVER-440 V230.
The VVER-440 V230 has no containment building, but only has a structure capable of confining steam surrounding the RPV. This is a volume of thin steel, perhaps an inch or two in thickness, grossly insufficient by Western standards.
- Has no ECCS. Can survive at most one 4 inch pipe break (there are many pipes greater than 4 inches within the design).
- Has six steam generator loops, adding unnecessary complexity.
- However, apparently steam generator loops can be isolated, in the event that a break occurs in one of these loops. The plant can remain operating with one isolated loop - a feature found in few Western reactors.
The interior of the pressure vessel is plain alloy steel exposed to the coolant water, which can lead to rust. One point of distinction in which the VVER surpasses the West is the reactor water cleanup facility - built, no doubt, to deal with the enormous volume of rust within the primary coolant loop, the product of slow corrosion of the RPV. This model is viewed as having inadequate process control systems.
Bulgaria had a number of VVER-440 V230 models, but they opted to shut them down upon joining the EU rather than backfit them, and are instead building new VVER-1000 models. Many non-EU states maintain V230 models, including Russia and the CIS. Many of these states - rather than abandoning the reactors entirely - have opted to install an ECCS, develop standard procedures, and install proper instrumentation and control systems. Though confinements cannot be transformed into containments, the risk of a limiting fault resulting in core damage can be greatly reduced.
The VVER-440 V213 model was built to the first set of Soviet nuclear safety standards. It possesses a modest containment building, and the ECCS systems, though not completely to Western standards, are reasonably comprehensive. Many VVER-440 V213 models possessed by former Soviet bloc countries have been upgraded to fully automated Western-style instrumentation and control systems, improving safety to Western levels for accident prevention - but not for accident containment, which is of a modest level compared to Western plants. These reactors are regarded as "safe enough" by Western standards to continue operation without major modifications, though most owners have performed major modifications to bring them up to generally equivalent levels of nuclear safety.
During the 1970s, Finland built two VVER-440 V213 models to Western standards with a large-volume full containment and world-class instrumentation, control standards and an ECCS with multiply redundant and diversified components. In addition, passive safety features such as 900-tonne ice condensers have been installed, making these two units safety-wise the most advanced VVER-440's in the world.
The VVER-1000 type has a definitely adequate Western-style containment, the ECCS is sufficient by Western standards, and instrumentation and control has been markedly improved to Western 1970s-era levels.
Chernobyl disaster
In the Chernobyl disaster the fuel became non-critical when it melted and flowed away from the graphite moderator - however, it took considerable time to cool. The molten core of Chernobyl (that part that did not vaporize in the fire) flowed in a channel created by the structure of its reactor building and froze in place before a core-concrete interaction could happen. In the basement of the reactor at Chernobyl, a large "elephant's foot" of congealed core material was found. Time delay, and prevention of direct emission to the atmosphere, would have reduced the radiological release. If the basement of the reactor building had been penetrated, the groundwater would be severely contaminated, and its flow could carry the contamination far afield.
The Chernobyl reactor was an RBMK type. The disaster was caused by a power excursion that led to a meltdown and extensive offsite consequences. Operator error and a faulty shutdown system led to a sudden, massive spike in the neutron multiplication rate, a sudden decrease in the neutron period, and a consequent increase in neutron population; thus, core heat flux very rapidly increased to unsafe levels. This caused the water coolant to flash to steam, causing a sudden overpressure within the reactor pressure vessel (RPV), leading to granulation of the upper portion of the core and the ejection of the upper plenum of said pressure vessel along with core debris from the reactor building in a widely dispersed pattern. The lower portion of the reactor remained somewhat intact; the graphite neutron moderator was exposed to oxygen containing air; heat from the power excursion in addition to residual heat flux from the remaining fuel rods left without coolant induced oxidation in the moderator; this in turn evolved more heat and contributed to the melting of the fuel rods and the outgassing of the fission products contained therein. The liquefied remains of the fuel rods flowed through a drainage pipe into the basement of the reactor building and solidified in a mass later dubbed corium, though the primary threat to the public safety was the dispersed core ejecta and the gasses evolved from the oxidation of the moderator.
Although the Chernobyl accident had dire off-site effects, much of the radioactivity remained within the building. If the building were to fail and dust was to be released into the environment then the release of a given mass of fission products which have aged for twenty years would have a smaller effect than the release of the same mass of fission products (in the same chemical and physical form) which had only undergone a short cooling time (such as one hour) after the nuclear reaction has been terminated. However, if a nuclear reaction was to occur again within the Chernobyl plant (for instance if rainwater was to collect and act as a moderator) then the new fission products would have a higher specific activity and thus pose a greater threat if they were released. To prevent a post-accident nuclear reaction, steps have been taken, such as adding neutron poisons to key parts of the basement.
The effects of a nuclear meltdown depend on the safety features designed into a reactor. A modern reactor is designed both to make a meltdown unlikely, and to contain one should it occur.
In a modern reactor, a nuclear meltdown, whether partial or total, should be contained inside the reactor's containment structure. Thus (assuming that no other major disasters occur) while the meltdown will severely damage the reactor itself, possibly contaminating the whole structure with highly radioactive material, a meltdown alone should not lead to significant radiation release or danger to the public.
In practice, however, a nuclear meltdown is often part of a larger chain of disasters (although there have been so few meltdowns in the history of nuclear power that there is not a large pool of statistical information from which to draw a credible conclusion about what "often" happens in such circumstances). For example, in the Chernobyl accident, by the time the core melted there had already been a large steam explosion, a graphite fire, and a major release of radioactive contamination (as with almost all Soviet reactors, there was no containment structure at Chernobyl). Also, pressure in the reactor may already be rising before a possible meltdown occurs; to restore cooling of the core and prevent a meltdown, operators are allowed to reduce that pressure by releasing (radioactive) steam into the environment, which lets them inject additional cooling water into the reactor.
Reactor design
Although pressurized water reactors are more susceptible to nuclear meltdown in the absence of active safety measures, this is not a universal feature of civilian nuclear reactors. Much of the research in civilian nuclear reactors is for designs with passive nuclear safety features that may be less susceptible to meltdown, even if all emergency systems failed. For example, pebble bed reactors are designed so that complete loss of coolant for an indefinite period does not result in the reactor overheating. The General Electric ESBWR and Westinghouse AP1000 have passively activated safety systems. The CANDU reactor has two low-temperature and low-pressure water systems surrounding the fuel (i.e. moderator and shield tank) that act as back-up heat sinks and preclude meltdowns and core-breaching scenarios.
Fast breeder reactors are more susceptible to meltdown than other reactor types, due to the larger quantity of fissile material and the higher neutron flux inside the reactor core, which makes it more difficult to control the reaction.
Accidental fires are widely acknowledged to be risk factors that can contribute to a nuclear meltdown.
United States
There have been at least eight meltdowns in the history of the United States. All are widely called "partial meltdowns."
- BORAX-I was a test reactor designed to explore criticality excursions and observe whether a reactor would self-limit. In the final test it was deliberately destroyed, and the test revealed that the reactor reached much higher temperatures than had been predicted at the time.
- The reactor at EBR-I suffered a partial meltdown during a coolant flow test on November 29, 1955.
- The Sodium Reactor Experiment in Santa Susana Field Laboratory was an experimental nuclear reactor which operated from 1957 to 1964 and was the first commercial power plant in the world to experience a core meltdown in July 1959.
- Stationary Low-Power Reactor Number One (SL-1) was a United States Army experimental nuclear power reactor which underwent a criticality excursion, a steam explosion, and a meltdown on January 3, 1961, killing three operators.
- The SNAP8ER reactor at the Santa Susana Field Laboratory experienced damage to 80% of its fuel in an accident in 1964.
- The partial meltdown at the Fermi 1 experimental fast breeder reactor, in 1966, required the reactor to be repaired, though it never achieved full operation afterward.
- The SNAP8DR reactor at the Santa Susana Field Laboratory experienced damage to approximately a third of its fuel in an accident in 1969.
- The Three Mile Island accident, in 1979, referred to in the press as a "partial core melt," led to the permanent shutdown of that reactor.
Soviet Union
In the most serious example, the Chernobyl disaster, design flaws and operator negligence led to a power excursion that subsequently caused a meltdown. According to a report released by the Chernobyl Forum (consisting of numerous United Nations agencies, including the International Atomic Energy Agency and the World Health Organization; the World Bank; and the Governments of Ukraine, Belarus, and Russia) the disaster killed twenty-eight people due to acute radiation syndrome, could possibly result in up to four thousand fatal cancers at an unknown time in the future and required the permanent evacuation of an exclusion zone around the reactor.
Japan
During the Fukushima I nuclear accidents, three of the power plant's six reactors reportedly suffered meltdowns. Most of the fuel in reactor No. 1 melted, and TEPCO believes the No. 2 and No. 3 reactors were similarly affected. On May 24, 2011, TEPCO reported that all three reactors had melted down.
Meltdown incidents
- There was also a fatal core meltdown at SL-1, an experimental U.S. military reactor in Idaho.
Large-scale nuclear meltdowns at civilian nuclear power plants include:
- the Lucens reactor, Switzerland, in 1969.
- the Three Mile Island accident in Pennsylvania, U.S.A., in 1979.
- the Chernobyl disaster at Chernobyl Nuclear Power Plant, Ukraine, USSR, in 1986.
- the Fukushima I nuclear accidents following the earthquake and tsunami in Japan, March 2011.
Other core meltdowns have occurred at:
- NRX (military), Ontario, Canada, in 1952
- BORAX-I (experimental), Idaho, U.S.A., in 1954
- EBR-I (military), Idaho, U.S.A., in 1955
- Windscale (military), Sellafield, England, in 1957 (see Windscale fire)
- Sodium Reactor Experiment, (civilian), California, U.S.A., in 1959
- Fermi 1 (civilian), Michigan, U.S.A., in 1966
- Chapelcross nuclear power station (civilian), Scotland, in 1967
- Saint-Laurent Nuclear Power Plant (civilian), France, in 1969
- A1 plant (civilian) at Jaslovské Bohunice, Czechoslovakia, in 1977
- Saint-Laurent Nuclear Power Plant (civilian), France, in 1980
China Syndrome
The China syndrome (loss-of-coolant accident) is a fictional nuclear reactor operations accident characterized by the severe meltdown of the core components of the reactor, which then burn through the containment vessel and the housing building, then notionally through the crust and body of the Earth until reaching the other side, which in the United States is jokingly referred to as being China.
The system design of the nuclear power plants built in the late 1960s raised questions of operational safety, and raised the concern that a severe reactor accident could release large quantities of radioactive materials into the atmosphere and environment. By 1970, there were doubts about the ability of the emergency cooling systems of a nuclear reactor to prevent a loss of coolant accident and the consequent meltdown of the fuel core; the subject proved popular in the technical and the popular presses. In 1971, in the article Thoughts on Nuclear Plumbing, former Manhattan Project (1942–1946) nuclear physicist Ralph Lapp used the term "China syndrome" to describe a possible burn-through, after a loss of coolant accident, of the nuclear fuel rods and core components melting the containment structures, and the subsequent escape of radioactive material(s) into the atmosphere and environment; the hypothesis derived from a 1967 report by a group of nuclear physicists, headed by W. K. Ergen. In the event, Lapp’s hypothetical nuclear accident was cinematically adapted as The China Syndrome (1979).
The geographic, planet-piercing concept of the China syndrome derives from the misperception that China is the antipode of the United States; to many Americans, it is "the other side of the world". Moreover, the hypothetical transit of a meltdown product to the other side of the Earth (i.e. China) ignores the fact that the Earth's gravity pulls all masses towards its center. Even assuming a meltdown product could persist in a mobile molten form long enough to reach the center of the Earth, momentum loss due to friction (fluid viscosity) would prevent it from continuing to the other side.
See also
- Behavior of nuclear fuel during a reactor accident
- Chernobyl compared to other radioactivity releases
- Chernobyl disaster effects
- High-level radioactive waste management
- International Nuclear Event Scale
- List of civilian nuclear accidents
- Lists of nuclear disasters and radioactive incidents
- Nuclear fuel response to reactor accidents
- Nuclear safety
- Nuclear power
- Nuclear power debate
- Martin Fackler (June 1, 2011). "Report Finds Japan Underestimated Tsunami Danger". New York Times.
- International Atomic Energy Agency (IAEA) (2007). IAEA Safety Glossary: Terminology Used in Nuclear Safety and Radiation Protection (2007 ed.). Vienna, Austria: International Atomic Energy Agency. ISBN 92-0-100707-8. Retrieved 2009-08-17.
- United States Nuclear Regulatory Commission (NRC) (2009-09-14). "Glossary". Website. Rockville, Maryland, USA: Federal Government of the United States. pp. See Entries for Letter M and Entries for Letter N. Retrieved 2009-10-03.
- Reactor safety study: an assessment of accident risks in U.S. commercial nuclear power plants, Volume 1
- Hewitt, Geoffrey Frederick; Collier, John Gordon (2000). "4.6.1 Design Basis Accident for the AGR: Depressurization Fault". Introduction to nuclear power (in Technical English). London, UK: Taylor & Francis. p. 133. ISBN 978-1-56032-454-6. Retrieved 2010-06-05.
- "Earthquake Report No. 91". JAIF. May 25, 2011. Retrieved May 25, 2011.
- Kuan, P.; Hanson, D. J., Odar, F. (1991). Managing water addition to a degraded core. Retrieved 2010-11-22.
- Haskin, F.E.; Camp, A.L. (1994). Perspectives on Reactor Safety (NUREG/CR-6042) (Reactor Safety Course R-800), 1st Edition. Beltsville, MD: U.S. Nuclear Regulatory Commission. p. 3.1–5. Retrieved 2010-11-23.
- Haskin, F.E.; Camp, A.L. (1994). Perspectives on Reactor Safety (NUREG/CR-6042) (Reactor Safety Course R-800), 1st Edition. Beltsville, MD: U.S. Nuclear Regulatory Commission. pp. 3.5–1 to 3.5–4. Retrieved 2010-12-24.
- Haskin, F.E.; Camp, A.L. (1994). Perspectives on Reactor Safety (NUREG/CR-6042) (Reactor Safety Course R-800), 1st Edition. Beltsville, MD: U.S. Nuclear Regulatory Commission. pp. 3.5–4 to 3.5–5. Retrieved 2010-12-24.
- ANS : Public Information : Resources : Special Topics : History at Three Mile Island : What Happened and What Didn't in the TMI-2 Accident
- Nuclear Industry in Russia Sells Safety, Taught by Chernobyl
- "'Melt-through' at Fukushima? / Govt. suggests situation worse than meltdown". http://www.yomiuri.co.jp/dy/national/T110607005367.htm
- Test Area North
- Walker, J. Samuel (2004). Three Mile Island: A Nuclear Crisis in Historical Perspective (Berkeley: University of California Press), p. 11.
- Lapp, Ralph E. "Thoughts on nuclear plumbing." The New York Times, 12 December 1971, pg. E11.
- "China Syndrome". Merriam-Webster. Retrieved December 11, 2012.
- Presenter: Martha Raddatz (15 March 2011). "ABC World News". ABC.
- Allen, P.J.; J.Q. Howieson, H.S. Shapiro, J.T. Rogers, P. Mostert and R.W. van Otterloo (April–June 1990). "Summary of CANDU 6 Probabilistic Safety Assessment Study Results". Nuclear Safety 31 (2): 202–214.
- http://www.insc.anl.gov/neisb/neisb4/NEISB_1.1.html INL VVER Sourcebook
- Partial Fuel Meltdown Events
- ANL-W Reactor History: BORAX I
- Wald, Matthew L. (2011-03-11). "Japan Expands Evacuation Around Nuclear Plant". The New York Times.
- The Chernobyl Forum: 2003-2005 (2006-04). "Chernobyl’s Legacy: Health, Environmental and Socio-economic Impacts". International Atomic Energy Agency. p. 14. Retrieved 2011-01-26.
- The Chernobyl Forum: 2003-2005 (2006-04). "Chernobyl’s Legacy: Health, Environmental and Socio-Economic Impacts". International Atomic Energy Agency. p. 16. Retrieved 2011-01-26.
- Hiroko Tabuchi (May 24, 2011). "Company Believes 3 Reactors Melted Down in Japan". The New York Times. Retrieved 2011-05-25.
|
<urn:uuid:593ff668-f2a3-43a3-a234-69537b1789d6>
| 4.1875
|
http://en.wikipedia.org/wiki/Nuclear_meltdown
|
Classification of Burns
What are the classifications of burns?
Burns are classified as first-, second-, or third-degree, depending on how deeply and severely they penetrate the skin's surface.
First-degree (superficial) burns
First-degree burns affect only the epidermis, or outer layer of skin. The burn site is red, painful, dry, and free of blisters. Mild sunburn is an example. Long-term tissue damage is rare and usually consists of an increase or decrease in skin color.
Second-degree (partial thickness) burns
Second-degree burns involve the epidermis and part of the dermis layer of skin. The burn site appears red, blistered, and may be swollen and painful.
Third-degree (full thickness) burns
Third-degree burns destroy the epidermis and dermis. Third-degree burns may also damage the underlying bones, muscles, and tendons. The burn site appears white or charred. There is no sensation in the area since the nerve endings are destroyed.
|
<urn:uuid:d3e51a07-18ee-4328-b77c-1bb70f80bd53>
| 3.90625
|
http://healthcare.utah.edu/healthlibrary/library/diseases/pediatric/doc.php?type=90&id=P09575
|
Like other pulmonate land snails, most slugs have two pairs of 'feelers' or tentacles on their head. The upper pair is light sensing, while the lower pair provides the sense of smell. Both pairs are retractable, and can be regrown if lost.
On top of the slug, behind the head, is the saddle-shaped mantle, and under this are the genital opening and anus. On one side (almost always the right hand side) of the mantle is a respiratory opening, which is easy to see when open, but difficult to see when closed. This opening is known as the pneumostome. Within the mantle in some species is a very small, rather flat shell.
Like other snails, a slug moves by rhythmic waves of muscular contraction on the underside of its foot. It simultaneously secretes a layer of mucus on which it travels, which helps prevent damage to the foot tissues. Some slug species hibernate underground during the winter in temperate climates, but in other species, the adults die in the autumn.
In rural southern Italy, the garden slug Arion hortensis was used to treat gastritis, stomach ulcers or peptic ulcers by swallowing it whole and alive. Given that it is now known that most peptic ulcers are caused by Helicobacter pylori, the merit of swallowing a live slug is questionable. A clear mucus produced by the slug is also used to treat various skin conditions including dermatitis, warts, inflammations, calluses, acne and wounds.
|
<urn:uuid:4f92c42b-35b4-439c-8f11-a29c761e704a>
| 3.4375
|
http://melvynyeo.deviantart.com/art/Slug-258511210
|
Our main goal here is to give a quick visual summary that is at once convincing and data rich. These employ some of the most basic tools of visual data analysis and should probably form part of the basic vocabulary of an experimental mathematician. Note that traditionally one would run a test such as the Anderson-Darling test (which we have done) for the continuous uniform distribution and associate a particular probability with each of our sets of probabilities, but unless the probability values are extremely high or low it is difficult to interpret these statistics.
Experimentally, we want to test graphically the hypothesis of normality and randomness (or non-periodicity) for our numbers. Because the statistics themselves do not fall into the nicest of distributions, we have chosen to plot only the associated probabilities. We include two different types of graphs here. A quantile-quantile plot is used to examine the distribution of our data and scatter plots are used to check for correlations between statistics.
The first is a quantile-quantile plot of the chi square base 10 probability values versus a discrete uniform distribution. For this graph we have taken the probabilities obtained from our square roots and plotted them against a perfectly uniform distribution. Finding nothing here is equivalent to seeing that the graph is a straight line with slope 1. This is a crude but effective way of seeing the data. The disadvantage is that the data are really plotted along a one dimensional curve and as such it may be impossible to see more subtle patterns.
The other graphs are examples of scatter plots. The first scatter plot shows that nothing interesting is occurring. We are again looking at probability values, this time derived from the discrete Cramer-von Mises (CVM) test base 10,000. For each cube root we have plotted the point (p1, p2), where p1 is the CVM base 10,000 probability associated with the first 2500 digits of the cube root of i and p2 is the probability associated with the next 2500 digits. A look at the graph reveals that we have now plotted our data on a two dimensional surface and there is a lot more `structure' to be seen. Still, it is not hard to convince oneself that there is little or no relationship between the probabilities of the first 2500 digits and the second 2500 digits.
The last graph is similar to the second. Here we have plotted the probabilities associated with the Anderson-Stephens statistic of the first 10,000 digits versus the first 20,000 digits. We expect to find a correlation between these tests since there is a 10,000 digit overlap. In fact, although the effect is slight, one can definitely see the thinning out of points from the upper left hand corner and lower right hand corner.
Figure 1: Graphs 1-3
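As a concrete illustration of the kind of statistic being summarized here, the following sketch computes high-precision decimal digits of a square root, runs a chi-square goodness-of-fit test for base-10 uniformity on the digit counts, and reports the associated probability. This is a generic reconstruction of the idea rather than the authors' code; the choice of √2, the 2,500-digit sample size, and the use of SciPy are all assumptions made for the example.

```python
from decimal import Decimal, getcontext
from collections import Counter
from scipy.stats import chisquare

# Sketch: test the first N decimal digits of sqrt(2) for base-10 uniformity
# with a chi-square goodness-of-fit test. N and the choice of sqrt(2) are
# illustrative; the paper's tests cover many roots and several statistics.

N = 2500
getcontext().prec = N + 10                  # extra guard digits
digits = str(Decimal(2).sqrt())[2:2 + N]    # drop the leading "1." and keep N digits

counts = Counter(digits)
observed = [counts.get(str(d), 0) for d in range(10)]
expected = [N / 10] * 10

stat, p_value = chisquare(observed, expected)
print(f"chi-square statistic: {stat:.2f}, probability (p-value): {p_value:.3f}")
```

A quantile-quantile plot of the sort described above is then just the sorted p-values from many such roots plotted against evenly spaced points on [0, 1].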
|
<urn:uuid:6697aede-f5b6-4d7b-b653-9cc6d6586fb4>
| 3.5625
|
http://oldweb.cecm.sfu.ca/organics/vault/expmath/expmath/html/node15.html
|
Analog Input Channels
Temperature is a measure of the average kinetic energy of the particles in a sample of matter expressed in units of degrees on a standard scale. You can measure temperature in many different ways that vary in equipment cost and accuracy. The most common types of sensors are thermocouples, RTDs, and thermistors.
Figure 1. Thermocouples are inexpensive and can operate over a wide range of temperatures.
Thermocouples are the most commonly used temperature sensors because they are relatively inexpensive yet accurate sensors that can operate over a wide range of temperatures. A thermocouple is created when two dissimilar metals touch and the contact point produces a small open-circuit voltage as a function of temperature. You can use this thermoelectric voltage, known as Seebeck voltage, to calculate temperature. For small changes in temperature, the voltage is approximately linear.
You can choose from different types of thermocouples designated by capital letters that indicate their compositions according to American National Standards Institute (ANSI) conventions. The most common types of thermocouples include B, E, K, N, R, S, and T.
For more information on thermocouples, read The Engineer's Toolbox for Thermocouples.
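For small temperature differences the Seebeck voltage is approximately linear, which allows a very crude conversion from voltage to temperature. The sketch below assumes a Type K thermocouple with a nominal sensitivity of about 41 µV/°C and a known reference-junction temperature; a real measurement system would use the published NIST polynomial coefficients for the thermocouple type rather than a single coefficient.

```python
# Sketch: crude thermocouple conversion using a single Seebeck coefficient.
# Assumes a Type K junction (~41 uV/degC near room temperature) and a known
# cold-junction (reference) temperature. Production code should use the
# NIST ITS-90 polynomial coefficients for the thermocouple type instead.

SEEBECK_UV_PER_C = 41.0   # approximate Type K sensitivity, microvolts per degC

def thermocouple_temp_c(measured_uv: float, cold_junction_c: float) -> float:
    """Estimate the hot-junction temperature from the measured voltage (uV)."""
    return cold_junction_c + measured_uv / SEEBECK_UV_PER_C

# Example: 1.230 mV measured with the reference junction at 25 degC
print(thermocouple_temp_c(1230.0, 25.0))   # ~55 degC
```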
Figure 2. RTDs are made of metal coils and can measure temperatures up to 850 °C.
An RTD is a device made of coils or films of metal (usually platinum). When heated, the resistance of the metal increases; when cooled, it decreases. Passing a current through an RTD generates a voltage across it, and by measuring this voltage you can determine its resistance and, thus, its temperature. The relationship between resistance and temperature is relatively linear. Typically, RTDs have a resistance of 100 Ω at 0 °C and can measure temperatures up to 850 °C.
For more information on RTDs, read The Engineer's Toolbox for RTDs.
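Because the RTD's resistance-temperature relationship is nearly linear, a first-order conversion is straightforward. The sketch below assumes a standard PT100 element (100 Ω at 0 °C, temperature coefficient α ≈ 0.00385/°C) excited by a hypothetical 1 mA current source; precision work would use the full Callendar-Van Dusen equation.

```python
# Sketch: first-order PT100 RTD conversion, R(T) = R0 * (1 + alpha * T).
# R0 = 100 ohm at 0 degC and alpha = 0.00385 are the common IEC 60751 values;
# the 1 mA excitation current is an assumption for this example.

R0 = 100.0           # ohms at 0 degC
ALPHA = 0.00385      # per degC
EXCITATION_A = 1e-3  # 1 mA excitation current (assumed)

def rtd_temp_c(measured_voltage_v: float) -> float:
    """Convert the voltage measured across the RTD to temperature in degC."""
    resistance = measured_voltage_v / EXCITATION_A   # Ohm's law
    return (resistance / R0 - 1.0) / ALPHA

# Example: 0.1385 V across the RTD with 1 mA flowing -> 138.5 ohm -> ~100 degC
print(round(rtd_temp_c(0.1385), 1))
```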
Figure 3. Passing current through a thermistor generates a voltage proportional to temperature.
A thermistor is a piece of semiconductor made from metal oxides that are pressed into a small bead, disk, wafer, or other shape, sintered at high temperatures, and finally coated with epoxy or glass. As with RTDs, you can pass a current through a thermistor and read the voltage across it to determine its temperature. However, unlike RTDs, thermistors have a higher resistance (2,000 to 10,000 Ω) and a much higher sensitivity (~200 Ω/°C), which gives them finer resolution within a limited temperature range (up to 300 °C).
For information on thermistors, read The Engineer's Toolbox for Thermistors.
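A thermistor's response is strongly nonlinear, so even a minimal conversion needs an exponential model. The sketch below uses the Beta-parameter form of the Steinhart-Hart relationship with datasheet-style values assumed for illustration (a 10 kΩ NTC thermistor with B = 3950 K); the actual coefficients for a real part come from its datasheet.

```python
import math

# Sketch: NTC thermistor conversion using the Beta-parameter equation
#   1/T = 1/T0 + (1/B) * ln(R / R0)     (temperatures in kelvin)
# R0, T0 and B below are typical datasheet-style values chosen for
# illustration, not the coefficients of any specific part.

R0_OHM = 10_000.0   # resistance at the reference temperature
T0_K = 298.15       # reference temperature, 25 degC in kelvin
BETA_K = 3950.0     # Beta coefficient (assumed)

def thermistor_temp_c(resistance_ohm: float) -> float:
    inv_t = 1.0 / T0_K + math.log(resistance_ohm / R0_OHM) / BETA_K
    return 1.0 / inv_t - 273.15

# Example: 10 kOhm corresponds to 25 degC; the resistance drops as it warms
print(round(thermistor_temp_c(10_000.0), 1))  # 25.0
print(round(thermistor_temp_c(5_000.0), 1))   # ~41.5 degC
```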
|
<urn:uuid:e3d9f26b-9215-49bf-a296-3724a4a14b64>
| 4.21875
|
http://sine.ni.com/np/app/main/p/ap/daq/lang/en/pg/1/sn/n17:daq,n21:11/fmid/2999/
|
Not only does it have over 800 illustrations and photographs, but it's jam-packed, full of information for your homeschool or any supplement to your children's learning. In my opinion it would make a great "Summer project" book to keep the kiddos on their toes and help them learn while having fun!
The World Of Science is broken down into 7 different sections covering topics entitled: Matter and Chemicals - Energy, Motion and Machines - Electricity and Magnetism - Light and Sound - Earth and Life - Space and Time - and includes over 60 Science experiments.
The photographs and diagrams are just right to help you teach your budding scientists. My 3 scientists range in age from 6 to 10 years old, and the information is shared in such a way that, from my Kindergartner to my 5th grader, they were able to learn, understand, and enjoy what we've done so far.
We've been studying the atmosphere, covered in the Earth and Life section. One of the things we learned about is the Greenhouse effect. The boys are looking forward to doing more and I have a feeling we'll have conquered all the experiments by the end of the summer. They love this type of learning and it sticks with them so much better than plain questions and answers!
The World Of Science is a wonderful addition to our homeschool curriculum. I highly recommend it. We are using it as a supplement to our main course and it covers enough topics that it would make a nice addition to any curriculum. It reads much like an encyclopedia, thorough yet made for children to easily understand.
The introduction begins with, "In the beginning God Created the Heavens and the Earth." It goes on to explain about the Scientific method, talks about Real science, various fields of science, and even an explanation as to why we should do science. I think every home could benefit by having such a well rounded resource.
We used a tin tray I had on hand with a plastic lid that attaches, recycled toilet paper or paper towel rolls for the seed cups, and sand and soil from outside & seeds from the dollar store (you could also use grass with roots or flowers from outside).
1. Place about 1 inch of sand in the bottom of container tray
2. Cut paper rolls down to 2 inches - 2 1/2 inches
3. Places cups in tray and push them down into the sand
4. Fill individual cups with soil
5. Use a finger to make a little hole to drop seeds into
6. Cover back up with the soil - Water them in well
7. Cover with plastic lid and set tray in indirect sunlight
(you don't want to fry your little plants but they do need light - a windowsill is a good spot)
Within just a few minutes the boys noticed that our little greenhouse was beginning to fog up and collect condensation on the plastic lid - before we've even sprouted our little carrots, they're understanding the "greenhouse effect".
From other cool sites:
Check out a couple of really fun ways to make your own greenhouses - grow some plants - and learn about the greenhouse effect.
What kind of Science are you doing in your homeschool? Do you have any fun ideas to share?
|
<urn:uuid:4c45010f-7418-4504-883f-40b5f792ea4a>
| 3.328125
|
http://www.adventurezinchildrearing.com/2012/03/world-of-science-free-science-project.html?showComment=1330646792268
|
Pricing Carbon Emissions
A bill before Congress may prove a costly way to reduce greenhouse gases.
- Friday, June 5, 2009
- By Kevin Bullis
Experts are applauding a sweeping energy bill currently before the United States Congress, saying that it could lead to significant cuts in greenhouse-gas emissions and improve the likelihood of a comprehensive international agreement to cut greenhouse gases. "It's real climate-change legislation that's being taken seriously," says Gilbert Metcalf, a professor of economics at Tufts University. But many warn that the bill's market-based mechanisms and more conventional regulations could make these emissions reductions more expensive than they need to be.
The bill, officially called the American Clean Energy and Security Act of 2009, is also referred to as the Waxman-Markey Bill, after its sponsors, Henry Waxman (D-Ca.) and Edward Markey (D-Mass.). The legislation would establish a cap and trade system to reduce greenhouse gases, an approach favored by most economists over conventional regulatory approaches because it provides a great deal of flexibility in how emissions targets are met. But it also contains mandates that could significantly reduce the cost savings that the cap and trade approach is supposed to provide.
In a cap and trade system, the government sets a cap on total emissions of greenhouse gases from various industrial and utility sources, including power plants burning fossil fuels to generate electricity. It then issues allowances to polluters allowing them to emit carbon dioxide and other greenhouse gases; total emissions are meant to stay under the cap. Over a period of time, the government gradually reduces the cap and the number of allowances until it reaches its target. If companies' emissions exceed their allowances, they must buy more.
Economists like the system because companies can choose to either lower their emissions, such as by investing in new technology, or buy more allowances from the government or from companies that don't need them--whichever makes the best economic sense. It is meant to create a carbon market, putting a value on emissions.
In the proposed energy bill, the government will set caps to reduce greenhouse-gas emissions by 17 percent by 2020 (compared with 2005 levels) and by 80 percent by 2050--targets chosen to prevent the worst effects of climate change. Setting caps will make electricity more expensive, as companies turn to cleaner technologies to meet ever lower caps or have to spend money to buy allowances from others with lower emissions. But the bill has some provisions for cushioning the blow, especially at first. For one thing, it gives away most of the allowances rather than charging for them, and it also requires that any profits gained from these free allowances be passed on to electricity customers. It also allows companies to buy "offsets" that permit them to pay to reduce emissions outside the United States.
If the program is designed right, there are fewer allowances than the total emissions when the program starts. At first, when the caps are relatively easy to meet, the prices for allowances on the carbon market will be low. But eventually, they will get higher as the allowances become scarcer. In an ideal world, companies will predict what the price of the allowances will be, and plan accordingly.
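The trading logic described above can be sketched in a few lines: a firm compares its marginal cost of cutting a tonne of emissions with the market price of an allowance and does whichever is cheaper. All of the numbers in the sketch below (emissions, allocation, costs, prices) are hypothetical and serve only to illustrate the mechanism.

```python
# Sketch of the cap-and-trade compliance choice for a single firm.
# All figures (emissions, abatement costs, allowance price) are hypothetical.

def compliance_cost(emissions_t: float, allowances_t: float,
                    abatement_cost_per_t: float, allowance_price: float) -> dict:
    """Choose the cheaper of abating the shortfall or buying allowances."""
    shortfall = max(emissions_t - allowances_t, 0.0)
    if shortfall == 0.0:
        return {"action": "no action needed", "cost": 0.0}
    if abatement_cost_per_t < allowance_price:
        return {"action": f"abate {shortfall:.0f} t",
                "cost": shortfall * abatement_cost_per_t}
    return {"action": f"buy {shortfall:.0f} t of allowances",
            "cost": shortfall * allowance_price}

# A firm emitting 120,000 t with 100,000 t of free allowances:
print(compliance_cost(120_000, 100_000, abatement_cost_per_t=35.0,
                      allowance_price=20.0))
# Buying is cheaper at $20/t; if the cap tightens and the price rises above
# $35/t, investing in abatement becomes the rational choice.
```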
|
<urn:uuid:ecbdee27-d586-4d08-a03d-036829352851>
| 3.40625
|
http://www.technologyreview.in/energy/22755/page1/
|
Updated: 4/10/2007 10:12 am | Published: 4/10/2007 10:12 am
For your car's engine to start, several things have to happen: the flywheel has to turn the crankshaft, which moves the pistons up and down, and as the pistons move, the valves draw air into the combustion chamber to mix with fuel and ignite. The process starts when your car's starter motor engages its gear with the flywheel after the key switch is turned to start; the starter motor disengages once the engine is running. Your car's battery provides the electricity to turn the starter, and your car's alternator keeps the battery charged and powers all the accessories once the engine is running. The alternator generates more electricity the faster your car's engine runs, and the voltage regulator controls the amount of current going to the battery to prevent damage from overcharging. Contact a qualified mechanic in your area for more information on your car's starting system.
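A toy model can make the regulator's role concrete: raw alternator output rises with engine speed, and the regulator clamps the system voltage to a safe charging level. The sketch below is purely illustrative; the per-RPM characteristic and the 14.4 V setpoint are assumptions typical of a 12 V system, not specifications for any real vehicle.

```python
# Toy model of the charging system described above: the alternator's raw
# output rises with engine speed, and the voltage regulator clamps it to a
# safe charging setpoint so the battery is not overcharged. The constants
# are illustrative assumptions, not specifications for any real vehicle.

VOLTS_PER_1000_RPM = 20.0    # assumed unregulated alternator characteristic
REGULATOR_SETPOINT_V = 14.4  # typical charging voltage for a 12 V system

def alternator_raw_v(rpm: float) -> float:
    return VOLTS_PER_1000_RPM * rpm / 1000.0

def regulated_v(rpm: float) -> float:
    """What the regulator actually lets through to the battery."""
    return min(alternator_raw_v(rpm), REGULATOR_SETPOINT_V)

for rpm in (0, 800, 2000, 4000):
    print(f"{rpm:>5} rpm: unregulated {alternator_raw_v(rpm):5.1f} V"
          f" -> regulated {regulated_v(rpm):4.1f} V")
```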
|
<urn:uuid:9eea44d6-be2b-40c4-ba2b-12698fe59b77>
| 3.625
|
http://www.fox16.com/guides/auto/topic/Starting-system/wqZDctonbE2wYwsuNMJBww.cspx
|
A Short History of the Permanent Diaconate
THE EARLY CHURCH
Traditionally, the beginning of the order of deacons is traced back to the story in Acts of the Apostles, Acts 6: 1-6. Whether this pertains to the history of the ordained order of deacons as they developed in the early centuries of the church is in dispute, but it is very much in the spirit in which the diaconate was and has been understood ever since. Very early in the history of the church, deacons were understood to hold a special place in the community, along with bishops and presbyters. The role of all ordained ministries is to be modeled on the life of Christ, and that of deacons especially was and still is, that of Christ the servant. Perhaps the earliest reference to deacons in this sense (ca. 53 A.D.?) occurs in St. Paul's letter to the Philippians in which he addresses "all the saints in Christ Jesus who are at Philippi, with the bishops and deacons".
However, it would be a mistake to interpret the servant role too literally as one of "waiting on tables". One of the seven first deacons, Stephen, was stoned to death because of his bold preaching of the Gospel, Acts 6: 8-15, 7: 54-60 . He is the first recognized martyr of the church, and his feast day is celebrated on December 26. Of the remaining seven, those of whom we have historical knowledge, it is clear that their ministry also quickly broadened to preaching and spreading the Gospel message.
The deacon became the eyes and ears of the bishop, his "right hand man". The bishop's principal assistant became known as the "archdeacon", and was often charged with heavy responsibilities, especially in the financial administration of the local church, above all in distribution of funds and goods to the poor. One measure of the importance of the deacon in the early church is the number of deacons elected pope in the early Middle Ages. Of the thirty-seven men elected pope between 432 and 684 A.D., only three are known to have been ordained to priest before their election to the Chair of Peter. (Llewellyn)
During the first Christian millennium deacons undertook, as the bishops' assistants, the functions that are today those of the vicar general, the judicial vicar, the vicar capitular, the cathedral chapter and the oeconome, or finance officer. In current canon law these are almost exclusively priests' functions. (Galles)
John Collins, writing in Pastoral Review, has this to say about the meaning of "diakonia" as understood in the early church:
"Two final segments: firstly my description of the semantic character of diakon- as applied to deacons in the early church (from Appendix I, Diakonia, p. 337).
As is well known, Vatican II cited this “not unto the priesthood, but unto the ministry” in Lumen Gentium. Thus it was understood that deacons were ordained not for any specific set of duties for serving the needy but to serve the bishop in whatever set of duties he would determine. The circumstances today are, of course, far different than in the early church. The size and complexity of the modern diocese makes such an intimate relationship with the bishop impractical. Although deacons serve in a wide variety of settings, including hospitals and prisons, the focus for today's deacon is normally parish based. However, he retains the historical tie with his bishop, whose "servant" he remains. The major point we should take from a study of early church history and the witness of the early Fathers of the Church is that they acknowledge the importance of the diaconal ministry. Saint Ignatius of Antioch, about 100 AD, says that it would be impossible to have the Church without bishops, priests and deacons. He explains that their task was nothing less than to continue ‘the ministry of Jesus Christ'.
Beginning as early as the fifth century, there was a gradual decline in the permanent diaconate in the Latin church, although it remained, right to the present, a vital part of the Eastern churches, both Catholic and Orthodox. One important factor was simply a failure on the part of both presbyters and deacons to understand the unique value of the diaconate as a distinct order in its own right. Deacons with too much power were often self-important and proud. Presbyters, on their part, were resentful at the fact that often deacons had power over them! St. Jerome demanded to know why deacons had so much power – "After all, deacons could not preside at Eucharist, and presbyters were really the same as bishops". By the early middle ages, the diaconate was perceived largely as only an intermediate step toward the reception of ordination to the priesthood. It was this prevailing attitude of the "cursus honorum" that was most responsible for the decline of the diaconate. The "cursus honorum" was simply the attitude of "rising through the ranks", following a tradition of gradual promotion, inherited from practices of secular government of the Roman Empire. Most older Catholics will be familiar with the many levels of "minor orders" and "major orders". First came the liturgical rite of "tonsure" which conferred upon a man the status of "cleric", and made him eligible for ordination. Then came the minor orders of porter, lector, exorcist and acolyte. These were ordinations but they were not sacraments. Finally came the major orders of sub deacon, deacon and priest. Sub diaconate, although a major order, was not a sacrament. The sub deacon did not receive a stole. Deacon and priest were, of course, sacramental in character. The whole process is well illustrated in a drawing (thanks to Dr. William Ditewig):
"Then, in 1972, Paul VI, following the direction of Vatican II, issued Ministeria quaedam, which realigned these things for the Latin Rite. Tonsure was suppressed, and now a person becomes a cleric through SACRAMENTAL ORDINATION AS A DEACON; this was a change to a pattern of more than 1000 years standing! The Pope also suppressed the minor orders altogether, converting two of them into "lay" ministries no longer requiring ordination; he also suppressed the subdiaconate, shifting the promise of celibacy to the diaconate. That left only two orders, both sacraments, from the old schema. Since Vatican II itself had taught about the sacramental nature of the bishop, we wound up with the three-fold ordained ministry that we have now, all of which are conferred by ordination and all of which confer a sacramental "character." Finally, the three orders are further subdivided by those orders that are sacerdotal (bishop and presbyter) and the orders that are diaconal (bishop and deacon). Yes, right now, presbyters are also deacons because they were ordained transitional deacons on their way to the presbyterate, BUT, as Guiseppe points out, this could be easily changed, and many are arguing for that (it's not likely anytime soon, but the case still needs to be pushed!)" (Ditewig)
The time finally came during the deliberations of the Second Vatican Council in 1963, which called for the restoration of the diaconate as a permanent level of Holy Orders. In June 1967 Pope Paul VI implemented this decree of the Council when he published the apostolic letter Sacrum Diaconatus Ordinem, in which he re-established the permanent diaconate in the Latin Church. The Council in its Dogmatic Constitution on the Church (Lumen Gentium) returns to the roots of the diaconate which we have previously discussed, roots going back to the New Testament and the early church Fathers:
At a lower level of the hierarchy are deacons, upon whom hands are imposed "not unto the priesthood, but unto a ministry of service". For strengthened by sacramental grace, in communion with the bishop and his group of priests they serve in the diaconate of the liturgy, of the word, and of charity to the people of God. It is the duty of the deacon, according as it shall have been assigned to him by competent authority, to administer baptism solemnly, to be custodian and dispenser of the Eucharist, to assist at and bless marriages in the name of the Church, to bring Viaticum to the dying, to read the Sacred Scripture to the faithful, to instruct and exhort the people, to preside over the worship and prayer of the faithful, to administer sacramentals, to officiate at funeral and burial services. Dedicated to duties of charity and of administration, let deacons be mindful of the admonition of Blessed Polycarp: "Be merciful, diligent, walking according to the truth of the Lord, who became the servant of all." (Lumen Gentium para. 29)
And so we have come full circle. The permanent diaconate has proved to be a resounding success, growing at an astounding rate throughout the world, but nowhere so much as here in the United States. The theology of the diaconate has yet to be fully explored, but with the help of the Holy Spirit, it will mature.
An excellent selection of books dealing with both the history and theology of the diaconate is available from the National Association of Diaconate Directors, http://www.nadd.org/publications.html .
For an excellent recent (Nov. 2006) scholarly article dealing with the meaning of "diakonia" see http://www.thepastoralreview.org/cgi-bin/archive_db.cgi?priestsppl-00127 .
|
<urn:uuid:13493cb6-b079-4ca5-9483-997395b28779>
| 3.734375
|
http://www.rcan.org/index.cfm/fuseaction/feature.display/feature_id/403/index.cfm
|
Latiné loqui disce sine molestiá!
Learn to speak Latin with ease!
The alphabet used by the Romans of the classical period consisted of the following letters:
A B C D E F G H I K L M N O P Q R S T V X Y Z
It is basically the same alphabet as is still used today by a majority of languages in the world. The ancient Romans already observed a functional difference between the standard I and a more elongated alternative, which would later become J, but didn't conceive of a meaningful distinction between V and U, as was later established, nor did they use other variants like Ç, Ñ or W.
The Romans of the classical period had several styles to write the above letters, greatly depending on the materials used to write. As is true for most scripts, nevertheless, these styles can be grouped into two distinct ones.
There is a formal one, that we now call capitális, that was used on monuments, legal documents, public announcements, books for sale, jewelery, and in general whenever the text was meant to endure and might even have some sort of ornamental value. We can see it below used on stone, bronze, plastered walls, papyrus or, later on, parchment, and on many other surfaces and objects.
There was a second style, the informal one, that we now call cursíva, that was used for everyday transactions with no ornamental value. This is less well known to most people, because of the precarious nature of the materials on which it was used and the lesser artistic value of the objects where we find it; but it was in fact the main style most Romans would have used in their practical lives. We can see it below on waxen or wooden tablets, wall graffiti or bone, and was used on many similar surfaces.
In time there developed a third style, the unciális, which is just a smaller version of the capitális with some strong influence of the cursíva.
The shapes of the letters of the capitális style are practically identical with our present capitals, whereas the cursíva may have influenced the evolution of the former into the unciális, a smaller version which is in turn the predecessor of our lower case characters; but it is important to understand that, in Roman times, the difference between the capitális and the cursíva, or even the later unciális, was not at all comparable to the difference we now make between capitals and lower case when we use capitals at the beginning of some words, or for titles, in texts otherwise written in lower case.
They were just different styles to write the same single case of letters, and were equivalent rather to the duality that exists between our printing and our handwritten letters. They would of course not have been mixed in any one piece of writing, as we would not type some letters and write others by hand within the same text, let alone the same word.
Just like the Arabs or the Japanese, therefore, in spite of a variety of writing styles, the Romans did not have an equivalent to our meaningful alternation between capitals and lower case within the same piece of writing in any of them, nor did they write the first letter of a sentence or proper name any differently from the rest.
The Romans, in order to save space, given the high cost of most of the materials they wrote on, used the so called ligátúræ, i.e. groupings of letters written as a cluster by sharing a common stroke. There were many of them: AE could be found as Æ, and similarly AN, TR, VM and many others could appear fused together in groups of two, three and even more letters.
The Romans had only two diacritics, and they didn't use either of them with any regularity.
The Romans would often write without even separating the words with spaces, as we have seen above in several instances. Moreover, they certainly never distinguished sentences or phrases using commas, semicolons, colons or stops, neither did they know of question or exclamation marks, brackets, inverted commas or any other diacritic we are used to. In fact, the only sign they used, and only in the more elegant writings, like monumental ones, was a dot they used not as final stop, but to separate single words. We have also seen this on the inscriptions above. This dot could sometimes take more sophisticated shapes, as a little ivy leaf, for instance, as below.
The Romans of the most sophisticated period of classical culture used, as much in monumental writing as in more domestic texts, a sign called apex, identical to what we nowadays know as acute accent ( ´ ). This sign, nevertheless, was not used to indicate the accent or stress in the word as in a minute number of modern vernaculars, but to mark long vowels (see the file on pronunciation), as is still done today in languages like Icelandic, Hungarian, Czech and many others.
Latin spelling nowadays
It is obvious that the writing practices of the Romans of the classical period were rather primitive in comparison with present ones. Some people believe for that reason that our spelling habits are vernacular, and therefore somehow spurious and artificially imposed on Latin subsequently. They forget that most of our spelling customs are the natural development of Roman practices and were organically furthered throughout history by people who spoke and wrote in Latin, in order to achieve greater clarity and distinction when reading and writing Latin itself, not the vernacular languages; and these usages passed on from Latin to the vernaculars, and not the other way around.
The ancient difference in shape between a shorter and a more elongated I (i/j), the latter of which, already in antiquity, was frequently used in the cursiva in word-initial position, often corresponding to the consonantal sound, as can be seen in the illustrations above, was formalised in later periods for this useful function specifically, thus allowing for complete transparency as regards the difference in pronunciation between the first sound in janua and in iambus, or in meaning between forms like perjerat and perierat. The previously meaningless difference between the pointed V of the capitalis and the rounded u of some forms of cursiva or of the uncialis was equally put to the service of a more transparent spelling. It was thus finally possible duly to distinguish vowels from consonants. Other variants that could be allocated no distinctive phonetic value, like a taller or shorter T or a more or less stretched S, were either kept for merely aesthetic purposes or eventually dropped as functionally unproductive. Some ligatures like æ or œ were likewise preserved to help distinguish the corresponding diphthongs from the hiatuses ae and oe, whereas many others were abandoned. The separation of words by means of spaces was found to be such a useful device that few contemporaries would be able to read without it; and the rich variety of signs of punctuation introduced also in later stages of the history of Latin helped reading with the necessary pauses, and allowed us to distinguish the component parts of sentences, or to determine beyond doubt whether we are confronted with a statement, an exclamation or a question. Finally, the distinction between capitals and lower case brought in not only a certain elegance, but also some further clarity to grammar (highlighting proper names) and to discourse structure (marking the beginnings of sentences).
There has most unfortunately arisen, nevertheless, and for all the wrong reasons, a fashion of spelling fundamentalism that, abandoning a more than reasonable tradition of centuries of Latin writing, purports to go back to the writing usages of the ancient Romans. This is as absurd as wanting to give up the use of paper or the modern book, and claiming that something is not classical Latin unless it's written on papyrus rolls. It should be obvious to anyone that we can be completely respectful of ancient culture and cultivate the purest form of classical Latinity while using more developed methods of writing than our ancestors had at their disposal and which are moreover the result of centuries of Latin tradition. Of course, since fundamentalists rarely guide themselves by reason, the return to the old usages doesn't follow any further criteria than their own arbitrary whim, and they sometimes are purists and sometimes not, as they please. Thus, some have set about eliminating the distinction between i and j as non-Roman, but they are only too happy against all logic to keep that between v and u. Others consider that the use of capitals should be eliminated, and they do use lower case letters at the beginning of sentences, but they then arbitrarily keep capitals for proper names or even adjectives. Of course, none of those purists has dared to admit the fact that a return to ancient usage would imply writing everything in capitals rather than in lower case, and that they would in fact have to stop using any punctuation at all.
The saddest aspect of the modern spelling mess is that it has nothing to do with Latin. It originates in attempts at spelling reforms that seemed to make perfect sense in a vernacular like Italian, but which some people felt the need to force also upon Latin, with deplorable consequences. While most European languages, including Latin, felt very comfortable with the century-old usage of i and j, and v and u, as all those letters represented clearly different sounds or appeared in clearly different syllabic contexts, in Italian the use of i and j had become so complicated by conflicting and arbitrary uses without too much relation with any phonetic reality that people struggled to determine when a word had to be written with i and when with j. Italian being a language with otherwise very straightforward spelling principles, a pressure arose therefore to drop the use of j. Now, this absolutely sensible measure for Italian was unnecessarily applied also to Latin by people who were persuaded that Latin must be spelt as modern Italian. Obviously, they could never have convinced the international Latin using community on such grounds, so they started to contrive specious justifications: that the sound of the vowel and the semivowel were similar enough (even though it is exactly the same difference that i and j have in German and many other languages that have never considered dropping the spelling distinction), that it brought Latin spelling closer to ancient practices (although, as we have explained, the distinction between i and j has in fact a much more ancient history than that of u and v), etc. Of course, they never mentioned that every single one of those reasons applies with exactly equal force to the Latin pair i/j (where [i] differs from [j]) and to the pair u/v (where [u] differs from either [w] or [v], however we care to pronounce it), which the Italians had no intention to simplify because in Italian it made sense to continue to use both letters in the latter case. As the international community began to drop the use of j in a quest for ancient purity, it became more and more obvious to everyone who hadnt given up the human capacity to reason, that it made absolutely no sense in Latin to drop the j without dropping also the v; so, having genuinely assimilated the specious excuses of the Italians to bring Latin spelling closer to Roman times, the best philologists around the world felt it absolutely necessary to do without v too, and many critical editions of classical texts are now published that way. We have thus a traditional i/j/u/v system, which was foolishly undermined and turned into just i/u/v in accordance with some vernacular spelling reforms, but in a move that has now inevitably but most unfortunately backfired (certainly against the expectations of those who promoted the use of i/u/v) into an ugly i/u system as the only reasonably acceptable outcome. Not only that, following the same perverse train of thought, many now feel the need also to drop the use of capitals in Latin texts. Where this absurd nonsense will take Latin spelling is difficult to foresee, but we cannot but lament that the narrow-minded whim of a nation with the most arrogant attitude towards the language of our common civilisation has managed to bring absolute chaos to an elegant, sensible, and century-long Latin spelling tradition.
We consider that our spelling usages were developed through millennia according to criteria of utility and clarity, which it is as absurd as it is unnecessary to renounce. Even if some certainly rude spirits could consider giving up aesthetic developments like the distinction between capitals and lower case, it seems absolutely preposterous to eliminate usages that reflect better the pronunciation of the language and help reading.
Indeed, we should avoid as non-transparent spelling those practices which, with the specious excuse of being truer to ancient practices or following vernacular considerations, disregard centuries of legitimate Latin spelling tradition and prefer to hinder learning of the Latin language by failing to represent transparently its different sounds. Using one and the same letter i to represent both the vowel [i] and the consonant [j] may be true to the most ancient practices, but it is as unfortunately as unnecessarily non-transparent because it doesn't allow to distinguish which is which in words like "iam" (where the i represents a semivowel, pronounced as English y in yes) and "iambus" (where the i represents a vowel, pronounced as English i in it), etc. Non-transparent spelling makes that more and more people nowadays fail to learn the language properly as they are preposterously kept in the dark about the sounds the words they read and write actually contain (we've heard many a professor, let alone students, pronounce "iam" rhyming with Ian and "iambus" starting as yummy). Using i for the vowel and j for the semivowel is conversely a much more transparent spelling, which is justified by centuries of Latin spelling tradition and which allows us to see immediately which is which by writing "jam" but "iambus", etc. Equally using the same ae combination both for the diphthong in "aereus" (where the ae represent a diphthong, pronounced in one syllable, rather like English eye) and the hiatus in "aerius" (where the ae represent an hiatus, pronounced in two syllables, rather like English a in father followed by the e in error), etc. is sadly non-transparent (and leads to error just as many). Using æ for the diphthong and ae of the hiatus, or at least ae for the diphthong but aë for the hiatus, is a much more transparent spelling and it allows us to see immediately which is which by writing "æreus" but "aerius" (or "aereus" but "aërius"), etc.
As inconsistent spelling we must avoid spelling practices that choose to be transparent in some cases but not in others with no legitimate phonetic or historical reason to do so in one case and not in the other whatsoever, as when some people choose to distinguish the vowel [u] from the semivowel [w] by writing the former as u and the latter as v, which is a nicely transparent practice, rather than spelling both as u, which would be non-transparent, and they don't care in this case not to be true to ancient practices; but then, with no phonetic or historical reason to do so, they choose not to distinguish the vowel [i] from the semivowel [j] by writing the former as i and the latter as j, which would be a nicely transparent practice, rather than spelling both as i, which is non-transparent. Equally choosing to distinguish the diphthong æ from the hiatus ae in a usefully transparent way, rather than writing both as ae, but at the same time not distinguishing the diphthong œ from the hiatus oe and writing both non-transparently as oe, would also be inconsistent.
Finally, indicating the length of the vowels in writing was something that the ancient Romans didn't need to do because they just knew which was which, either because they were native speakers or because they could learn to pronounce the words by listening to native speakers. The use of the apex in ancient inscriptions or manuscripts is therefore quite haphazard. For us, on the other hand, using a more thorough form of spelling, consistently marking all long vowels, is much more poignantly required if we aspire ever to learn to pronounce the words correctly. There was one case, nevertheless, when even ancient native speakers advocated that the use of the apex is actually necessary (cf. Quint. Inst. 1,7,2s), and that is when a difference of length in a vowel can produce a different meaning in a word, as in "malus" and "málus" or "liber" and "líber" or "rosa" and "rosá" or "loqueris" and "loquéris". We must certainly never omit such necessary apices.
It is absolutely unnecessary to give up our spelling lore, on any grounds; and we advocate the full reinstatement of our century-long, sensible spelling tradition, in the interests of transparency, consistency, and thoroughness.
|
<urn:uuid:25ce0131-3a0f-410c-af57-29b3ca07f9fe>
| 3.40625
|
http://avitus.alcuinus.net/schola_latina/litterae_en.php
|
Scientific name: Coenonympha tullia
Rests with wings closed. Some have a row of 'eyespots' on the underwings, like the Ringlet, but some don't.
The Large Heath is restricted to wet boggy habitats in northern Britain, Ireland, and a few isolated sites in Wales and central England.
The adults always sit with their wings closed and can fly even in quite dull weather provided the air temperature is higher than 14°C. The size of the underwing spots varies across its range; a heavily spotted form (davus) is found in lowland England, a virtually spotless race (scotica) in northern Scotland, and a range of intermediate races elsewhere (referred to as polydama).
The butterfly has declined seriously in England and Wales, but is still widespread in parts of Ireland and Scotland.
Size and Family
- Family – Browns
- Small/Medium Sized
- Wing Span Range (male to female) - 41mm
- Listed as a Section 41 species of principal importance under the NERC Act in England
- Listed as a Section 42 species of principal importance under the NERC Act in Wales
- Classified as a Northern Ireland Priority Species by the NIEA
- UK BAP status: Priority Species
- Butterfly Conservation priority: High
- European Status: Vulnerable
- Protected in Great Britain for sale only
The main foodplant is Hare's-tail Cottongrass (Eriophorum vaginatum) but larvae have been found occasionally on Common Cottongrass (E. angustifolium) and Jointed Rush (Juncus articulatus). Early literature references to White Beak-sedge (Rhyncospora alba), are probably erroneous.
- Countries – England, Scotland and Wales
- Northern Britain and throughout Ireland
- Distribution Trend Since 1970’s = -43%
The butterflies breed in open, wet areas where the foodplant grows. This includes habitats such as lowland raised bogs, upland blanket bogs and damp acidic moorland. Sites are usually below 500m (600m in the far north) and have a base of Sphagnum moss interspersed with the foodplant and abundant Cross-leaved Heath (the main adult nectar source).
In Ireland, the butterfly can be found where manual peat extraction has lowered the surface of the bog, creating damp areas with local concentrations of foodplant.
|
<urn:uuid:3a335f27-c035-4215-b4c2-b0179298929c>
| 3.5
|
http://butterfly-conservation.org/309-884/large-heath.html
|
The concept of disability has changed significantly through history. At one point, disability was seen as the result of sin, either by the person with the disability or his or her parents. Disability was associated with guilt and shame, and people with disabilities were hidden away.
Merriam-Webster's definition of disability: limitation in the ability to pursue an occupation because of a physical or mental impairment; lack of legal qualification to do something; or a disqualification, restriction or disadvantage. Most of the time, we apply these definitions to a person who "has" a disability. What if it is society that has the disability, not the individual?
With the advent of modern medicine, a medical understanding of disability developed. In a medical model of disability, the person with a disability is seen as having an illness, which presumably should be cured. The person with a disability, as someone who is sick or diseased, is excused from normal life and responsibilities such as working, family obligations and household maintenance.
The approach of the Social Security Administration supports the idea of disability as something that excludes one from a normal life. To fit the administration's definition of disability, a person must be able to prove that he or she is incapable of gainful employment. The inference is that by having a disability, one remains outside mainstream society.
Yet there is another way to view disability. This view considers disability as a normal part of life. After all, most of us will have a disability, whether temporary or permanent, at some time in our life. As many people with disabilities will attest, their disabilities are integral to what makes them who they are. It isn't the disability that needs to be changed. The physical and attitudinal barriers people with disabilities face are what need to change.
For example, people who are unable to walk may still be able to move around as much as anyone with the use of wheelchairs and other assistive technology. The inability to use their limbs may be less of a limitation than heavy doors that don't open automatically or stairs that keep them from a building.
Similarly, culture and environment can dictate whether something is a disability. In a culture where healers and shamans regularly interact with a spirit world, hearing voices isn't a disability. Only in societies where reading is essential to daily functioning is a learning disability recognized. In our society, a vision impairment that necessitates wearing glasses isn't considered a disability, but a mobility impairment that requires a person to use a walker is. Disability is defined by context. It follows, then, that what makes something a disability is external to the individual.
If we presumed that people with disabilities were able to live full and enriching lives, participate fully in their communities and have adult responsibilities, couldn't we make it so by removing the societal barriers that we create? When we view disability as a natural part of life, our entire approach to disabilities has to change.
Tara Kiene is the director of case management with Community Connections Inc.
|
<urn:uuid:77410cee-4537-4925-bfef-b65e28696047>
| 3.78125
|
http://durangoherald.com/article/20130108/COLUMNISTS06/130109638/0/Past-campus-presents-new-possibilities-for-FLC/Disability-is-natural-part-of-life
|
A new study shows that inheritance may be a cause of the rise in diabetes in the U.S.
Scientists are studying additional forms of inheritance, besides DNA, like metabolic programming, which can occur in the womb or shortly after birth, and causes permanent changes in metabolism.
Researchers in the study looked at mice with diets high in saturated fat and studied the results in the mice and their offspring. They found that a high-fat diet brought on type 2 diabetes in the adult mice.
If a pregnant female mouse stayed on a high-fat diet, her offspring had a greater chance of developing diabetes, even when given a moderate-fat diet.
Researchers say the findings have so far been demonstrated only in mice, so there is as yet no reason to warn mothers to eat differently during pregnancy. Even when the affected mice were mated with healthy mice, the next generation of offspring could develop diabetes as well.
This study was published in the September issue of the Journal of Lipid Research.
Source: Journal of Lipid Research
|
<urn:uuid:51b48948-9ea8-4176-a208-9908c3a47fc8>
| 3.34375
|
http://inventorspot.com/articles/new_study_shows_diabetes_may_be_transmitted_from_parents_childre_17193
|
The atoll is closed to the public and travel to the island is not allowed.
Both the US and the Kingdom of Hawaii annexed Johnston Atoll in 1858, but it was the US that mined the guano deposits until the late 1880s. Johnston and Sand Islands were designated wildlife refuges in 1926. The US Navy took over the atoll in 1934, and subsequently the US Air Force assumed control in 1948. The site was used for high-altitude nuclear tests in the 1950s and 1960s, and until late in 2000 the atoll was maintained as a storage and disposal site for chemical weapons. Munitions destruction is now complete. Cleanup and closure of the facility was completed by May 2005. Toxic waste from both operations is buried on the island.
The Fish and Wildlife Service and the US Air Force are currently discussing future management options; in the interim, Johnston Atoll and the three-mile Naval Defensive Sea around it remain under the jurisdiction and administrative control of the US Air Force.
Tropical, but generally dry; consistent northeast trade winds with little seasonal temperature variation.
Strategic location in the North Pacific Ocean; Johnston Island and Sand Island are natural islands, which have been expanded by coral dredging; North Island (Akau) and East Island (Hikina) are manmade islands formed from coral dredging; the egg-shaped reef is 34 km in circumference. There is some low-growing vegetation. Highest point: Summit Peak, at 5 meters.
Get in
By plane
There is an abandoned airstrip on Johnston Island.
By boat
Buy
There is currently no economic activity on Johnston Atoll.
Sleep
There are no public accommodations on Johnston Atoll.
|
<urn:uuid:732a6862-a11e-474a-91a6-91be7fc0a495>
| 3.40625
|
http://wikitravel.org/en/Johnston_Atoll
|
Guest Author - Preena Deepak
Music and dance rituals were an important aspect of temple worship in ancient India. In this context it was also a common Hindu tradition to dedicate or commission young girls to the gods. These girls, called ‘Devadasis’, were then separated from their families and trained in music, dance and a unique lifestyle within the temple gates, unlike their counterparts outside. Their life was, from then on, one of service to the gods.
When a Devadasi attained puberty, she was married to the deity in a religious ceremony. From then on she served as a temple prostitute. Indian temples always enjoyed the patronage of the kings who ruled the country, and in many instances devadasis became the concubines of the kings. In other cases, devadasis lived with a patron who provided them with property and wealth.
Marriage was forbidden for Devadasis who were eternally bound to the deity. However Devadasis became available to anyone who was able to ‘afford’ their keep.
The word ‘Devadasi’ means ‘Servant of God’. The girls dedicated as Devadasis were mostly from poor, lower-caste families, for whom devoting one child to the god only meant less pressure on the family’s meager finances. Devadasis were considered blessed, as entering into wedlock with the god protected them from widowhood. For these reasons, many young girls were pushed into prostitution under the safe banner of religion.
Traces of the Devadasi system can be seen in many parts of India, particularly in the South Indian states of Andhra Pradesh, Karnataka and Tamil Nadu. It is believed that the Devadasis of Orissa, called ‘Mahari’, did not practice prostitution but devoted themselves to temple service and had special duties assigned to them.
With the decline of monarchy in India and the invasion of Moghul and British rulers, the Devadasi system began to deteriorate. Christian missionaries who worked in India, along with social activists and reformers, also had a share in putting an end to this grotesque Indian custom. The Indian Government banned the Devadasi system in 1988. However, in spite of this, the Devadasi system continues under cover in India.
Under the burden of poverty, many young girls are still commissioned as Devadasis, though Indian temples no longer have music and dance rituals or dedication ceremonies. These girls are married to the god and then sent for prostitution in red-light areas. Unlike in olden days, most of these girls do not have regular patrons and suffer under many men before succumbing to AIDS and other venereal diseases.
The children born to Devadasis are forced to follow in the footsteps of their mothers, since they live alienated from mainstream life and are rejected by social institutions. This creates a vicious circle in which many young girls in India are trapped even today.
|
<urn:uuid:9aa79f17-0445-4d02-8425-01d19cb43d0a>
| 3.640625
|
http://www.bellaonline.com/articles/art176785.asp
|