Revolutionising Awareness

How to save Awareness


Dumbing-Down Society Part 3: How to Reverse its Effects

Posted by Admin on August 8, 2011

http://vigilantcitizen.com/vigilantreport/dumbing-down-society-part-3-how-to-reverse-its-effects/

By  | July 24th, 2010

The first two parts of this series describe the negative effects that some commonly consumed chemicals have on the body and brain. This third and final part looks at some natural ways to keep the brain healthy and provides tips to rid the body of dangerous substances. In other words, how to fight back against the dumbing down of society!

Parts I and II of this series of articles identified some toxic substances found in common foods and medicines and described some of their effects on the human brain. The main culprits discussed were aspartame, mercury, fluoride and high fructose corn syrup (HFCS). Whether these substances disturb the nervous system, decrease cognitive function, impair judgment, or affect the memory, the net result is the general dumbing down of society.

All is not doom and gloom, however. Nature, with its wonderful tendency to restore equilibrium, provides us humans with the cure to almost any affliction we might develop. Ancient healers even believed that nature helped humans discover the cure to their illnesses in subtle and mysterious ways:

“The plant might also be considered worthy of veneration because from its crushed leaves, petals, stalks, or roots could be extracted healing unctions, essences, or drugs affecting the nature and intelligence of human beings – such as the poppy and the ancient herbs of prophecy. The plant might also be regarded as efficacious in the cure of many diseases because its fruit, leaves, petals, or roots bore a resemblance in shape or color to parts or organs of the human body. For example, the distilled juices of certain species of ferns, also the hairy moss growing upon oaks, and the thistledown were said to have the power of growing hair; the dentaria, which resembles a tooth in shape, was said to cure the toothache; and the palma Christi plant, because of its shape, cured all afflictions of the hands.”
Manly P. Hall, Secret Teachings of All Ages

So, after dwelling in the awful world of poisonous chemicals and corrupt officials, the only fitting way to conclude this series of articles is to explore the all-natural ways to restore health.

Stop the Toxification

Warning: This article provides tips to naturally detoxify the body. If you are in need of a serious detox program, please consult a professional.

The first step in ridding your body of poisons is, quite logically, to stop ingesting poisons. It sounds simple enough, but this step is probably the most difficult, as many toxins are found in everyday foods and even tap water. Increased vigilance is necessary in everyday life and, sometimes, some annoying actions must be taken to keep the toxins out of your body. Nevertheless, once you actually feel your body and mind healing, you’ll be proud of your efforts.

Before we look at the ways to avoid specific toxins, here are some general guidelines any health-conscious person should apply at all times: Avoid processed foods and artificial drinks. Instead, look for organic and locally grown produce or, even better, grow your own fruits, vegetables and herbs. By doing so, you automatically avoid many harmful substances, including MSG, HFCS, pesticides, sodium fluoride and mercury. You also save money, which is always nice. When buying groceries, always read the labels and, as they say, if you can’t read it, don’t eat it.

Here are specific ways to avoid particular toxins:

Avoiding Fluoride

There are two types of fluoride: calcium fluoride and sodium fluoride. Calcium fluoride is naturally found in water sources, while sodium fluoride is a synthetic waste product of the nuclear, aluminum, and phosphate fertilizer industries. Guess which type is found in our water? Right, the nasty one. Regular water filters such as Brita do a good job of reducing the taste of metals and chemicals in the water, but they do not filter out fluoride. Purifying the water through reverse osmosis is the most effective way to remove sodium fluoride from water.

Standard reverse osmosis system

Some processed foods also contain high concentrations of sodium fluoride, including instant tea, grape juice products, and soy milk for babies, so once again, avoid processed foods. Also, switch to a fluoride-free toothpaste (or at least try not to swallow the $0.93 Colgate you bought at Walmart).

Consuming foods rich in calcium and magnesium helps prevent fluoride intoxication, as these minerals keep the poison from attaching to the body.

“Magnesium is a very important mineral that many are lacking. Besides being so important in the metabolism and synthesis of nutrients within your cells, it also inhibits the absorption of fluoride into your cells! Along with magnesium, calcium seems to help attract the fluorides away from your bones and teeth, allowing your body to eliminate those toxins. So during any detox efforts with fluoride, it is essential that you include a healthy supplemental dose of absorbable calcium/magnesium as part of the protocol.”
– Paul Fassa, How to Detox Fluoride from Your Body

Avoiding Mercury

First, if you or your children are being vaccinated, always request a Thimerosal-free shot. Second, avoid fish and seafood with high mercury levels; the fish with the highest levels of mercury are marlin, orange roughy, shark, swordfish, tilefish and tuna (ahi, albacore and yellowfin). Some seafood has low mercury levels, making it safer to consume, including anchovies, catfish, clam, crab, shrimp, flounder, salmon, sardine, tilapia and trout. As a rule of thumb, bigger fish contain more mercury: they eat smaller fish, absorbing their mercury, and they live longer, giving the metal more time to build up.

As seen in Part II of this series, some foods containing HFCS are also contaminated with mercury. Here’s the chart produced by the EPA:

Avoiding Aspartame

Always read labels and avoid “sugar-free” products. Aspartame is found in soft drinks, over-the-counter drugs & prescription drugs (very common, listed under “inactive ingredients”), vitamin & herb supplements, yogurt, candy, breath mints, cereals, sugar-free chewing gum, cocoa mixes, coffee beverages, instant breakfasts, gelatin desserts, frozen desserts, juice beverages, laxatives, milk drinks, shake mixes, tabletop sweeteners, tea beverages, instant teas and coffees, topping mixes and wine coolers.

Avoiding HFCS

Read the labels and if you find high fructose corn syrup at the top of the list of ingredients, tell the product “oh no you didn’t!”, snap your fingers with attitude and put it back on the shelf. Ignore the confused looks of other shoppers.

We will now look at some all-natural ways to detox the body from harmful substances.

The standard procedure for removing heavy metals from the body is called “chelation.” It is accomplished by administering a chelating agent – usually dimercaptosuccinic acid (DMSA) – that binds to the heavy metals in the body and causes them to be naturally flushed out. This type of treatment is quite strenuous, has many side effects and should be undertaken only with medical supervision.

If, however, you believe that ridding the body of a harsh substance with another harsh substance might be self-defeating, I tend to agree with you. Fortunately there are herbs and spices that naturally act as chelating agents: Cilantro does a great job at it.

Two of the most widely used and loved herbs and spices worldwide are derived from the same plant, Coriandrum sativum. The leaves of this plant are frequently referred to as cilantro, while the seeds are most commonly called coriander. Other than making any dish spectacular, the herb has the unique power of neutralizing mercury.

“This kitchen herb is capable of mobilizing mercury, cadmium, lead and aluminum in both bones and the central nervous system. It is probably the only effective agent in mobilizing mercury stored in the intracellular space (attached to mitochondria, tubulin, liposomes etc) and in the nucleus of the cell (reversing DNA damage of mercury).”
– Dietrich Klinghardt, MD, PhD,  Chelation: How to remove Mercury, Lead, & other Metals

Studies have suggested, however, that cilantro only moves the problem to other parts of the body, and thus it must be used with another agent to complete the detoxification process.

Chlorella: Cilantro’s Side-Kick

In addition to repairing and activating the body’s detoxification functions, chlorella is known to bind to all known toxic metals and environmental toxins and facilitate their evacuation. This makes chlorella cilantro’s perfect sidekick.

“Because cilantro mobilizes more toxins than it can carry out of the body, it may flood the connective tissue (where the nerves reside) with metals that were previously stored in safer hiding places.

This process is called re-toxification. It can easily be avoided by simultaneously giving an intestinal toxin-absorbing agent. Our definite choice is the algal organism chlorella. A recent animal study demonstrated rapid removal of aluminum from the skeleton, superior to any other known detox agent.

Cilantro causes the gallbladder to dump bile — containing the excreted neurotoxins — into the small intestine. The bile-release occurs naturally as we are eating and is much enhanced by cilantro. If no chlorella is taken, most neurotoxins are reabsorbed on the way down the small intestine by the abundant nerve endings of the enteric nervous system”
– Ibid

Garlic

We may not be sure if garlic actually repels vampires, but we can be certain it repels toxins from the body.

“Garlic contains numerous sulphur components, including the most valuable sulph-hydryl groups, which oxidize mercury, cadmium and lead and make these metals water-soluble. (…) Garlic also contains the most important mineral, which protects from mercury toxicity, bioactive selenium.”
– Ibid

So, garlic zaps mercury and lead and helps the body evacuate those metals. Perhaps bad breath is the way to good health.

Turmeric (Curcuma)

This plant from the ginger family is widely used in Southeast Asia as a spice, and its cleansing powers have been renowned for centuries. Turmeric is enshrined in Ayurvedic medicine as the king of spices. The bitter spice helps cleanse the liver, purify the blood, and promote good digestion and elimination. It possesses powerful anti-inflammatory properties, but none of the unpleasant side effects of anti-inflammatory drugs. It has also been used for skin cleansing, color enhancement and food preservation.

“Turmeric steps up the production of three enzymes – aryl-hydrocarbon-hydroxylase, glutathione-S-transferase, and UDP-glucuronyl-transferase. These are chemical “knives” that break down potentially harmful substances in the liver. Turmeric offers similar protection for people who are taking medications such as methotrexate and other forms of chemotherapy, which are metabolized by, or shuttled through, the liver.”
– James A. Duke, Ph.D., Dr. Duke’s Essential Herbs

Scientific studies have recently discovered that mixing black pepper with turmeric dramatically increases its healing properties on the body. No wonder traditional South Asian recipes often combine the two spices. So don’t hold back … grind some black pepper into that turmeric!

Omega-3

It is no secret that consuming the fatty acids found in fish brings many health benefits. These acids do wonders for our brains. In fact, Omega-3 is essentially our brain’s fuel, helping to maintain its core functions. Our most important organ relies heavily on eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA), two long-chain Omega-3 fatty acids that our bodies cannot produce. The only way to obtain these acids is through diet.

“Most health professionals believe that DHA is the fatty acid that is most important for healthy structure and development of the brain and for vision so it is vital that there is enough DHA in the diet during pregnancy and in the first few years of a child’s life. EPA on the other hand, is essential for healthy functioning of the brain on a day to day basis, which means that throughout your life you need a constant supply of EPA.”
– David McEvoy, Why Fish Oil is Brain Fuel

Here are some facts about Omega-3 that are of particular interest in the context of these articles:

  • Research by the University of Western Australia found that women who took fish oil supplements during the latter part of their pregnancy had babies with better hand-eye coordination, who were better speakers and could understand more at the age of two and a half than babies whose mothers were given olive oil instead.
  • A study by Aberdeen University, led by professor Lawrence Whalley, found that fish oil helps the brain to work faster, increases IQ scores and slows down the aging process.
  • The Durham trials led by Dr. Madeleine Portwood have consistently found that fish oil improves behaviour, concentration and learning in the classroom.
  • Researcher Natalie Sinn in Australia found fish oil to be more effective than Ritalin for ADHD.
  • Hibbeln et al. looked at diet in 22 countries and found a significant association between low fish consumption and post-natal depression.
  • Dr. Malcolm Peet found that ethyl-EPA, a highly concentrated form of Omega 3, dramatically reduces depression.

Fish oil also plays an important role in ridding the brain of unwanted substances:

“The fatty acid complexes EPA and DHA in fish oil make the red and white blood cells more flexible thus improving the microcirculation of the brain, heart and other tissues. All detoxification functions depend on optimal oxygen delivery and blood flow. EPA and DHA protect the brain from viral infections and are needed for the development of intelligence and eyesight. The most vital cell organelle for detoxification is the peroxisome. These small structures are also responsible for the specific job each cell has.

In the pineal gland, melatonin is produced in the peroxisome; in the neurons, dopamine and norepinephrine, etc. It is here where mercury and other toxic metals attach and disable the cell from doing its work.”
– Dietrich Klinghardt, MD, PhD,  Chelation: How to remove Mercury, Lead, & other Metals

That is all well and good, but what are the best sources of fish oil? Consuming large quantities of fish would be the logical choice, but knowing that many species contain high levels of mercury, doing so might actually cause further damage to the brain. For this reason, Omega-3 supplements are probably the best way to keep the body well-stocked with EPA and DHA.

When selecting an Omega-3 supplement, you need to make sure it is molecularly distilled and is high in both DHA and EPA, especially DHA. Molecular distillation is a special process that removes all the toxins from the oil (including mercury), ensuring it is safe for human consumption. Avoid low-grade products. They often contain low levels of fatty acids and are filled with other oils and preservatives.

Final Advice: Sleep, Sweat and Stimulate

  • Sufficient sleep is vital to keeping the body and the brain in good condition. Conversely, sleep deprivation impairs one’s ability to think, handle stress, maintain a healthy immune system and moderate one’s emotions.
  • Regular exercise is critically important for detoxification. It allows the evacuation of toxins through the skin while improving the entire metabolism.
  • Stimulate your brain: read, think, meditate and challenge it constantly.

In Conclusion

This article examines ways to avoid harmful substances in everyday products and looks at a handful of all-natural ways to free the body from their poisonous grasp. In addition to providing the nutrients the body uses to evacuate toxins, the natural substances described in this article also help maintain general health. Regularly consuming cilantro, garlic, turmeric and Omega-3 boosts the immune system, improves rational thinking and sharpens memory. The amazing properties of those simple ingredients are only now being (slowly) documented by science, but they have been used by cultures worldwide for centuries.

We are conditioned to treat ailments caused by artificial products with other artificial products that, in turn, can cause other ailments. It is only by breaking this vicious circle that we can reclaim ownership of our brains and reach our fullest potential. So, today’s a new day: Put down the cheeseburger-flavored Doritos … and change your life.


Dumbing Down Society Part 2: Mercury in Foods and Vaccines

Posted by Admin on August 8, 2011

http://vigilantcitizen.com/vigilantreport/dumbing-down-society-pt-2-mercury-in-foods-and-vaccines/

By  | July 9th, 2010

Even though mercury is known to degenerate brain neurons and disrupt the central nervous system, it is still found in processed foods and mandatory vaccines. In this second part of the series examining the intentional dumbing-down of society, this article will discuss the presence of mercury in common foods and vaccines.


The first article in this series – Dumbing Down Society Pt 1: Foods, Beverages and Meds – looked at the effects of aspartame, fluoride and prescription pills on the human brain. These substances all cause a decrease of cognitive power which, on a large scale, leads to a dumbing down of the population that is ingesting them. This second article focuses on another toxic product found in everyday foods and mandatory vaccines: mercury.

Mercury is a heavy metal naturally found in the environment. However, it is not suitable for human consumption, as it is extremely harmful to the human body, especially the brain. While some people say that anything can be consumed in moderation, many experts agree that no amount of mercury is safe for the human body. Despite this and the many studies concerning the negative effects of  mercury, the heavy metal is continually added to mandatory vaccines and processed foods.

Mercury is known to cause brain neuron degeneration and to disturb the central nervous system. Direct exposure to the metal causes immediate and violent effects:

“Exposure to high levels of metallic, inorganic, or organic mercury can permanently damage the brain, kidneys, and developing fetus. Effects on brain functioning may result in irritability, shyness, tremors, changes in vision or hearing, and memory problems.”
– Source

Most people do not come in direct contact with mercury, but are exposed to small doses at a time, resulting in a slow but steady poisoning of the brain. As the years go by, the effects of the substance impair judgment and rational thinking, decrease memory and disrupt emotional stability. In other words: It makes you dumber.

Mercury also has the unfortunate ability to transfer from pregnant women to their unborn babies. According to the Environmental Protection Agency, mercury passed on to the fetus during pregnancy may have lasting consequences, including memory impairment, diminished language skills and other cognitive complications.

It has been highly publicized that mercury is found in dangerous quantities in seafood, such as tuna, swordfish and tilefish. This creates a rather ironic situation: Instead of making you smarter because of all the Omega-3 they contain, the fish produce exactly the opposite effect on the brain due to mercury poisoning.

Unfortunately, mercury is also found in other products: vaccines and high-fructose corn syrup.

Vaccines

“I think it’s absolutely criminal to give mercury to an infant.”
– Boyd Haley, Ph.D., Chemistry Department Chair, University of Kentucky

Mercury is found in great quantities in mandatory vaccines. Before we get into the details of it, here are some facts about vaccines in America as noted by Dr. Sherri Tenpenny:

  • The U.S. government is the largest purchaser of vaccines in the country. In fact, nearly 30 percent of the Centers for Disease Control’s (CDC) annual budget goes toward purchasing vaccines and ensuring vaccination is completed for every child in the country.
  • Private insurance companies, which do the best liability studies, have completely abandoned coverage for damage to life and property due to: Acts of God, nuclear war, nuclear power plant accidents and … vaccination.
  • Laws have been passed to protect vaccine manufacturers from liability, while at the same time, state laws require parents to inject their children with up to 100 vaccination antigens prior to entering school. If an injury–or death–occurs after a vaccine, parents cannot sue the doctor, the drug company or the government; they are required to petition the Vaccine Court for damages, a process that can take years and often ends with a dismissal of the case.
  • Each state has school vaccination laws that require children of appropriate age to be vaccinated against several communicable diseases. State vaccination laws mandate that children be vaccinated prior to being allowed to attend public or private schools. Failure to vaccinate children can not only result in children being prohibited from attending school, but their parents or guardians can also receive civil fines and criminal penalties. What schools don’t usually tell parents is that in every state, an exemption exists allowing parents to legally refuse vaccines while still allowing their children to attend school.
  • The medical industry advocates vaccines, often demanding that parents vaccinate their children in order to remain under their doctor’s care. A sizable portion of a pediatrician’s income is derived from insurance reimbursement for vaccinations. The ever-expanding vaccination schedule that includes increasingly more expensive vaccines has been a source of increased revenues for vaccinating doctors.

Thimerosal

A child receives approximately 21 vaccines before the age of six and 6 more before the age of 18, for a total of 27 shots during childhood. Many of these injections contain Thimerosal, a preservative added to the shots that is made of 49% mercury. The unprecedented use of mercury on children has created a generation of cognitively impaired children.

“The symptoms experienced by children exposed to mercury are real and can be directly linked to the vaccines they were given as infants. It’s ironic that the vaccines given to these young people are meant to protect them, when in fact they are adversely affecting their neurological development.”
– Source

On top of causing an entire generation of babies to have their brains damaged, the use of Thimerosal in vaccines has been linked by many scientists to the staggering rise of autism in the past two decades. Did  the dumbing-down campaign go too far?

“In children who are fully vaccinated, by the sixth month of life they have received more mercury from vaccines than recommended by the EPA. There are many similarities in symptoms between mercury toxicity and autism, including social deficits, language deficits, repetitive behaviors, sensory abnormalities, cognition deficits, movement disorders, and behavioral problems. There are also similarities in physical symptoms, including biochemical, gastrointestinal, muscle tone, neurochemistry, neurophysiology, EEG measurements, and immune system/autoimmunity.”
– Source

Due to the suspected link between vaccines and autism, more than 5,000 U.S. families have filed claims in a federal vaccine court against the companies producing the vaccines. In most cases, the plaintiffs received no compensation and all correlation between the illness and vaccines was denied by the defendants. A public relations war has been going on for years, as studies and counter-studies have appeared, proving or denying the links between vaccines and autism, depending on where they originate. The studies claiming that vaccines are safe have often been funded by the very companies that produce them.

Despite the denials, Thimerosal is slowly–and silently–being phased out of vaccines for babies. Not long after the phasing out began, cases of autism dropped sharply in the country.

“Published in the March 10 issue of the Journal of American Physicians and Surgeons, the data show since mercury was removed from childhood vaccines, the reported rates of autism and other neurological disorders in children not only stopped increasing but actually dropped sharply – by as much as 35 percent. Using the government’s own databases, independent researchers analyzed reports of childhood neurological disorders, including autism, before and after removal of mercury-based preservatives.

According to a statement from the Association of American Physicians & Surgeons, or AAPS, the numbers from California show that reported autism rates hit a high of 800 in May 2003. If that trend had continued, the reports would have risen to more than 1,000 by the beginning of 2006. But the number actually went down to 620, a real decrease of 22 percent, and a decrease from the projection of 35 percent.”
– Source

The phasing out of Thimerosal from vaccines intended for children is all well and good, but the preservative is still found in many vaccines intended for adults. Did someone realize that mercury in vaccines is too strong for children, making them sick and ultimately unproductive, but perfect to dumb down fully developed adults? The ruling class is not looking to create a generation of autistic people who would need constant care, but a mass of “useful idiots” who can accomplish repetitive and mind-numbing tasks, while accepting without question what they are being told.

As of today, Thimerosal is still found in Influenza vaccines, commonly known as the flu shot. Those shots are seasonal, meaning that patients are encouraged to come back every winter to get their yearly vaccine/dose of mercury.

Makers of the Influenza vaccine say it boasts a “solid health record,” meaning the shot does not seem to cause observable illnesses. What is NEVER discussed, however, is the slow and gradual brain neuron degeneration most individuals go through, year after year, due to constant mercury poisoning. This process of slowing down brain functions is neither easily observable nor quantifiable, but it is still happening on a worldwide scale. If mercury can completely disrupt the fragile minds of children enough to possibly cause autism, it will, at the very least, impair fully developed minds.

Almost as if created to generate demand for vaccines, new diseases appear periodically around the world that, with the help of mass media scare campaigns, cause people to beg their officials for the miracle shot they are told will cure everybody.

H1N1, also known as the Swine Flu, was the latest of those scary diseases that terrified millions of people for several months. When the shot became available, it was heavily promoted and massive vaccination campaigns sprang up around the world. One fact that was not promoted: Swine flu was often easily curable, and not very different from the “regular” flu. Another fact that was not promoted: Most of the flu shots contained Thimerosal.

Depopulation?

Other than simply dumbing down the population, vaccines might be aiding in depopulation efforts. In a speech in April 2010, Bill Gates mentioned the use of vaccines in the effort to reduce world population.

“Gates made his remarks to the invitation-only Long Beach, California TED2010 Conference, in a speech titled, “Innovating to Zero!” Along with the scientifically absurd proposition of reducing manmade CO2 emissions worldwide to zero by 2050, approximately four and a half minutes into the talk, Gates declares, “First we got population. The world today has 6.8 billion people. That’s headed up to about 9 billion. Now if we do a really great job on new vaccines, health care, reproductive health services, we lower that by perhaps 10 or 15 percent.”

In plain English, one of the most powerful men in the world states clearly that he expects vaccines to be used to reduce population growth. When Bill Gates speaks about vaccines, he speaks with authority. In January 2010 at the elite Davos World Economic Forum, Gates announced his foundation would give $10 billion (circa €7.5 billion) over the next decade to develop and deliver new vaccines to children in the developing world.”
– Source

High-Fructose Corn Syrup (HFCS)

“A poison is a substance that causes injury, illness, or death, especially by chemical means.” Going by this definition, high-fructose corn syrup (HFCS) is truly a poison. HFCS is a highly processed sweetener made from corn that has been used since 1970. It continues to replace white sugar and sucrose in processed foods and is currently found in the majority of processed foods sold in supermarkets. Studies have determined that Americans consume an average of 12 teaspoons a day of the sweetener. Here’s a graph depicting the rise of HFCS in our diets:

Due to its sweetening properties, HFCS is obviously found in sugary products like jams, soft drinks and pre-packaged baked goods. However, many people do not realize that it is also found in numerous other products, including soups, breads, pasta sauces, cereals, frozen entrees, meat products, salad dressings and condiments. HFCS is also found in so-called health products, including protein bars, “low-fat” foods and energy drinks.

How can something that tastes so good be so bad? Here are some facts about HFCS:

  • Research links HFCS to increasing rates of obesity and diabetes in North America, especially among children. Fructose converts to fat more than any other sugar. And being a liquid, it passes much more quickly into the bloodstream.
  • Beverages containing HFCS have higher levels of reactive compounds (carbonyls), which are linked with cell and tissue damage leading to diabetes.
  • There is some evidence that corn fructose is processed differently in the body than cane sugar, leading to reduced feelings of satiation and a greater potential for over-consumption.
  • Studies by researchers at UC Davis and the University of Michigan have shown that consuming fructose, which is more readily converted to fat by the liver, increases the levels of fat in the bloodstream in the form of triglycerides.
  • Unlike other types of carbohydrate made up of glucose, fructose does not stimulate the pancreas to produce insulin. Peter Havel, a nutrition researcher at UC Davis who studies the metabolic effects of fructose, has also shown that fructose fails to increase the production of leptin, a hormone produced by the body’s fat cells. Both insulin and leptin act as signals to the brain to turn down the appetite and control body weight. Havel’s research also shows that fructose does not appear to suppress the production of ghrelin, a hormone that increases hunger and appetite.
  • Because the body processes the fructose in HFCS differently than it does cane or beet sugar,  it alters the way metabolic-regulating hormones function. It also forces the liver to kick more fat out into the bloodstream. The end result is that our bodies are essentially tricked into wanting to eat more, while at the same time, storing more fat.
  • A study in The Journal of the National Cancer Institute suggested that women whose diet was high in total carbohydrate and fructose intake had an increased risk of colorectal cancer.
  • HFCS interferes with the heart’s use of key minerals like magnesium, copper and chromium.
  • HFCS has been found to deplete the immune system by inhibiting the action of white blood cells. The body is then unable to defend against harmful foreign invaders.
  • Research suggests that fructose actually promotes disease more readily than glucose. Glucose is metabolized in every cell in the body, but all fructose must be metabolized in the liver. The livers of test animals fed large amounts of fructose develop fatty deposits and cirrhosis, similar to problems that develop in the livers of alcoholics.
  • HFCS is highly refined–even more so than white sugar.
  • The corn from which HFCS is derived is almost always genetically modified, as are the enzymes used in the refining process.
  • There are increasing concerns about the politics surrounding the economics of corn production (subsidies, tariffs, and regulations), as well as the effects of intensive corn agriculture on the environment.

Many studies have observed a strong correlation between the rise of HFCS consumption in recent years and the rise of obesity during the same period of time.

Obesity, on top of being unhealthy for the body, directly affects brain functions. Some researchers have even questioned the role of obesity in brain degeneration.

Research scientists have long suspected that a relationship existed between obesity and a decline in brain power. New studies now confirm the contention that being overweight is detrimental to the brain. Researchers at the University of California, in an article published in the Archives of Neurology, demonstrated a strong correlation between central obesity (that is, being fat around the middle) and shrinkage of a part of the brain (the hippocampus) fundamental for memory (as measured on MRI scans).
– Source

This does not mean that obese people are dumb. It does however mean that their brain is probably not processing as effectively as it could be.

But even if HFCS does not make you fat, it will still affect your brain. Recent studies have shown that the sweetener contains … you guessed it … mercury!

“One study – published in the journal, Environmental Health – shows mercury in nine out of 20 samples of commercial high-fructose corn syrup.

The second study – by the Institute for Agriculture and Trade Policy (IATP) – finds nearly one in three of 55 brand-name foods contained mercury, especially dairy products, dressings and condiments. The brands included big names like Quaker, Hershey’s, Kraft and Smucker’s.”
– Source

Here is the table found in the IATP’s study, Not So Sweet: Missing Mercury and High Fructose Corn Syrup, detailing the amount of mercury found in everyday supermarket products.

Of course, companies who produce HFCS deny the results of those studies, claiming the sweetener is “natural”. But this is coming from those who, y’know, SELL the stuff. Corn refiners have even produced some strange PR ads to encourage people to keep ingesting their toxic product.

Nice going, buddy!

In Conclusion

Despite the existence of many studies describing the negative effects of mercury on the human brain, governments still push for the increased vaccination of the population with shots containing Thimerosal. Furthermore, governing bodies have protected the pharmaceutical companies who produce the vaccines and the foods containing HFCS against any type of lawsuit. The fact that many high executives of these companies also hold key positions within the government might provide an explanation. There is indeed a restricted number of people holding positions of high power in both the private and public sector. These people, in what are clear cases of conflict of interest, collude at the top to form what this site refers to as “the elite” or “the ruling class.” Most of these people have never been elected to governmental positions, yet they create public policies that further their agenda, regardless of the political party in power. Look at the membership of the Bilderberg Group, the Committee of 300 or the Council on Foreign Relations and you will find the CEOs of companies producing your food and medication … and the same people who pass laws governing your food and medication.

Since no public official is likely to betray his peers and fund-raisers to become a whistleblower, it is up to each one of us to learn about what we consume. The cliché saying “read the labels” is quite true, but if you have no idea what “monosodium glutamate” means, reading the label will not help you. This series of articles aims to raise basic awareness of the most harmful substances found in everyday products. I personally cannot claim to have a perfect diet … I grew up in the 80s and love the taste of processed foods like candy, sodas … even Hamburger Helper. But as you find more information and as you begin to realize that every step in the right direction really does make you feel better, each subsequent step becomes easier. No one can do it for you: It’s up to you to take that next step … whether it is toward your detoxification or to Burger King.


Dumbing Down Society Part I: Foods, Beverages and Meds

Posted by Admin on August 8, 2011

http://vigilantcitizen.com/vigilantreport/dumbing-down-society-part-i-foods-beverages-and-meds/

By  | June 28th, 2010

Is there a deliberate effort by the government to dumb down the masses? The statement is hard to prove but there exists a great amount of data proving that the ruling elite not only tolerates, but effectively introduces policies that have a detrimental effect on the physical and mental health of the population. This series of articles looks at the many ways the modern man is being dumbed down. Part I looks at the poisons found in everyday foods, beverages and medications.

Image by deesillustration.com

The themes of dumbing down and dehumanizing the masses are often discussed in articles on The Vigilant Citizen. The presence of those concepts in popular culture is, however, only the outward and symbolic expression of the profound transformation happening in our society. Scientific data has been proving for years that governments around the world tolerate the sale of many products which have a direct and negative effect on cognitive and physical health. As we will see in this article, many everyday products cause brain damage, impaired judgment and even a lower IQ.

Is a dumber population something that is desired by the elite? Hitler once said “How fortunate for the leaders that men do not think.” An educated population knows its rights, understands the issues and takes action when it does not approve of what is going on. Judging by the incredible amount of data available on the subject, it seems that the elite want the exact opposite: an unhealthy, frightened, confused and sedated population. We will look at the effects of medication, pesticides, fluoride and aspartame on the human body and how those products are being pushed by people from inside the power structure.

Prescription Drug Abuse

Over the last few decades, America has witnessed a staggering rise in the number of drugs being prescribed to treat all kinds of problems. Children are particularly affected by this phenomenon. Since the 1990s, an ever-rising proportion of American children have been diagnosed with “illnesses” such as Attention Deficit Disorder (ADD) and prescribed mind-altering drugs, such as Ritalin.

The DEA has become alarmed by the tremendous increase in the prescribing of these drugs in recent years. Since 1990, prescriptions for methylphenidate have increased by 500 percent, while prescriptions for amphetamine for the same purpose have increased 400 percent. Now we see a situation in which from seven to ten percent of the nation’s boys are on these drugs at some point as well as a rising percentage of girls.
– Source

Today, children who show too much energy, character or strength are being willfully sedated with powerful drugs which directly affect the way their brains function. Are we going in the right direction here?

Even if ADD is not a clearly defined and documented disorder – it causes NO observable biological effects whatsoever – children are still being diagnosed with the illness in great numbers. This raises important ethical questions.

“Pediatricians as well as ethicists have also voiced their concerns about the usage of these stimulants. In an article published in the New York Times, they have questioned the appropriateness of medicating children without a clear diagnosis in hopes that they do better in school. They also asked whether the drugs should be given to adults who are failing in their careers or are procrastinators. They question the worth of this method.

This concern has also been voiced in the January 2005 issue of Pediatrics, in which the large discrepancies between pediatricians’ practice patterns and the American Academy of Pediatrics (AAP) guidelines for the assessment and treatment of children with attention-deficit/hyperactivity disorder (ADHD) were brought forth. The article also stated that because the medical community has not come to a consensus on how to diagnose ADD/ADHD, it should not be making extensive decisions as to how to treat individuals who have been diagnosed with the disorder.”

The usage of Ritalin at a young age breaks down the psychological threshold people maintain towards the usage of prescription pills, which makes those children more likely to consume psychotropic drugs later in their lives. We should not be surprised to witness a dramatic increase in the consumption of antidepressants in the years to come. The trend is already beginning:

“In its study, the U.S. Centers for Disease Control and Prevention looked at 2.4 billion drugs prescribed in visits to doctors and hospitals in 2005. Of those, 118 million were for antidepressants.

The use of antidepressants and other psychotropic drugs — those that affect brain chemistry — has skyrocketed over the last decade. Adult use of antidepressants almost tripled between the periods 1988-1994 and 1999-2000. Between 1995 and 2002, the most recent year for which statistics are available, the use of these drugs rose 48 percent, the CDC reported.”
– Elizabeth Cohen, CNN

The use of prescription pills might be of great help for specific and properly diagnosed cases. The pharmaceutical industry, however, which has many “friends” at the highest levels of government, is pushing for the widespread use of psychiatric drugs among the public. Since 2002, a great number of pills claiming to fix all kinds of mental conditions have been marketed to the public, but many of those pills were approved for sale without proper research into side effects. Even worse: the side effects might have been known but hidden from the public. Below is a list of warnings issued on commonly sold psychiatric drugs. Some of those side effects are actually frightening, as a pill should not be able to have that much power over the human brain. Think about it: Some drugs are subject to warnings because they can cause you to … commit suicide?

2004

March 22: The Food and Drug Administration (FDA) warned that Prozac-like antidepressants (called Selective Serotonin Reuptake Inhibitors or SSRIs) could cause “anxiety, agitation, panic attacks, insomnia, irritability, hostility, impulsivity, akathisia [severe restlessness], hypomania [abnormal excitement] and mania [psychosis characterized by exalted feelings, delusions of grandeur].”

June: The Therapeutic Goods Administration, the Australian equivalent of the FDA, reported that the latest antipsychotic drugs could increase the risk of diabetes.

June: The FDA ordered that the packaging for the stimulant Adderall include a warning about sudden cardiovascular deaths, especially in children with underlying heart disease.

October 15: The FDA ordered its strongest “black box” label for antidepressants, warning they could cause suicidal thoughts and actions in those under 18 years old.

October 21: The New Zealand Medicines Adverse Reactions Committee recommended that older and newer antidepressants not be administered to patients less than 18 years of age because of the risk of suicide.

December 17: The FDA required packaging for the “ADHD” drug, Strattera, to advise that “Severe liver damage may progress to liver failure resulting in death or the need for a liver transplant in a small percentage of patients.”

2005

February 9: Health Canada, the Canadian counterpart of the FDA, suspended marketing of Adderall XR (Extended Release, given once a day) due to reports of 20 sudden unexplained deaths (14 in children) and 12 strokes (2 in children).

April 11: The FDA warned that antipsychotic drug use in elderly patients could increase the risk of death.

June 28: The FDA announced its intention to make labeling changes to Concerta and other Ritalin products to include the side effects: “visual hallucinations, suicidal ideation [ideas], psychotic behavior, as well as aggression or violent behavior.”

June 30: The FDA warned that the antidepressant Cymbalta could increase suicidal thinking or behavior in pediatric patients taking it.  It also warned about the potential increased risk of suicidal behavior in adults taking antidepressants.

August: The Australian Therapeutic Goods Administration found a relationship between antidepressants and suicidality, akathisia (severe restlessness), agitation, nervousness and anxiety in adults.  Similar symptoms could occur during withdrawal from the drugs, it determined.

August 19: The European Medicines Agency’s Committee for Medicinal Products warned against child antidepressant use, stating that the drugs caused suicide attempts and thoughts, aggression, hostility, oppositional behavior and anger.

September 26: The Agenzia Italiana del Farmaco (Italian Drug Agency, equivalent to the FDA) warned against use of older (tricyclic) antidepressants in people under 18 years old.  It also determined the drugs were associated with heart attacks in people of any age.

September 29: The FDA ordered that labeling for the “ADHD” drug Strattera include a boxed warning about the increased risk of suicidal thinking in children and adolescents taking it.

October 17: The FDA warned that the antidepressant Cymbalta could cause liver damage.

October 24: The FDA withdrew the stimulant Cylert from the market because of the risk of liver toxicity and failure.

November: The FDA warned that the antidepressant Effexor could cause homicidal thoughts.

2006

February 9: The FDA’s Drug Safety and Risk Management Advisory Committee urged that the strongest “black box” warning be issued for stimulants, because they may cause heart attacks, strokes and sudden death.

February 20: British authorities warned that Strattera was associated with seizures and with potentially lengthening the period of time between heartbeats.

March 22: An FDA advisory panel heard evidence of almost 1,000 reports of kids experiencing psychosis or mania while taking stimulants.

May 3: FDA adverse drug reaction reports linked antipsychotic drugs to 45 child deaths and 1,300 serious adverse reactions, such as convulsions and low white blood cell count.

May 12: The manufacturer of Paxil warned that the antidepressant increases the risk of suicide in adults.

May 26: Health Canada issued new warnings of rare heart risks for all drugs prescribed for “ADHD,” including the risk of sudden death.

June 2: An FDA study determined that the antipsychotic drug Risperdal might cause pituitary tumors. The pituitary gland, at the base of the brain, secretes hormones that promote growth and regulate body functions. Antipsychotics may increase prolactin, a hormone in the pituitary gland, and this increase has been linked to cancer. Risperdal was found to increase prolactin levels more frequently than other antipsychotics.

July 19: The FDA said antidepressant packaging should carry warnings that they may cause a fatal lung condition in newborns whose mothers took SSRI antidepressants during pregnancy.  Migraine sufferers also need to be warned that combining migraine drugs with SSRIs could result in a life-threatening condition called serotonin syndrome.

Food Poisoning

The modern man ingests in his lifetime an incredible amount of chemicals, artificial flavors and additives. Although there is growing awareness regarding healthy eating, there is also a lot of misinformation and disinformation.

At the present time, a single company – Monsanto – produces roughly 95% of all soybeans and 80% of all corn in the US. Considering this, the corn flakes you had for breakfast, the soda you drank at lunch and the beef stew you ate for dinner were likely produced from crops grown with Monsanto’s patented genes. There are numerous documents and films exposing Monsanto’s strong-arming of the agricultural industry, so I won’t expand on that issue. It is however important to note that a virtual monopoly currently exists in the food industry and that there’s an unhealthy link between Monsanto and the American government: Many people who have passed laws in the fields of food, drugs and agriculture were also, at some point, on the payroll of Monsanto. In other words, the elite decides which foods are sold to you.

Public officials formerly employed by Monsanto:

  • Justice Clarence Thomas worked as an attorney for Monsanto in the 1970s. Thomas wrote the majority opinion in the 2001 Supreme Court decision J. E. M. Ag Supply, Inc. v. Pioneer Hi-Bred International, Inc., which found that “newly developed plant breeds are patentable under the general utility patent laws of the United States.” This case benefited all companies which profit from genetically modified crops, of which Monsanto is one of the largest.
  • Michael R. Taylor was an assistant to the Food and Drug Administration (FDA) commissioner before he left to work for a law firm on gaining FDA approval of Monsanto’s artificial growth hormone in the 1980s. Taylor then became deputy commissioner of the FDA from 1991 to 1994. Taylor was later re-appointed to the FDA in August 2009 by President Barack Obama.
  • Dr. Michael A. Friedman was a deputy commissioner of the FDA before he was hired as a senior vice president of Monsanto.
  • Linda J. Fisher was an assistant administrator at the United States Environmental Protection Agency (EPA) before she was a vice president at Monsanto from 1995 to 2000. In 2001, Fisher became the deputy administrator of the EPA.
  • Former Secretary of Defense Donald Rumsfeld was chairman and chief executive officer of G. D. Searle & Co., which Monsanto purchased in 1985. Rumsfeld personally made at least $12 million USD from the transaction.

Many laws (approved by ex-Monsanto employees) have facilitated the introduction and the consumption of genetically engineered foods by the public.

“According to current statistics, 45% of corn and 85% of soybeans in the United States is genetically engineered (GE). Estimates of 70-75% of processed foods found at our local supermarkets are believed to contain GE ingredients.

Other GE foods are canola, papayas, radicchio, potatoes, rice, squash or zucchini, cantaloupe, sugar beets, flax, tomatoes, and oilseed rape. One non-food crop that is commonly GE is cotton. The GE hormone recombinant bovine growth hormone (rBGH or Prosilac) was one of the first GE products allowed to enter the nation’s food supply. The U.S. Food and Drug Administration (FDA) approved Monsanto’s rBGH in 1993.”
– Anna M. Salanti, Genetically Engineered Foods

Although it is not yet possible to determine the long-term effects of genetically engineered foods on the human body, some facts have already been established. GE foods contain fewer nutrients and, most importantly, they are “chemical-friendly”.

“One of the features of GE foods is their ability to withstand unlimited application of chemicals, including pesticides. Bromoxynil and glyphosate have been associated with developmental disorders in fetuses, tumors, carcinomas, and non-Hodgkin’s lymphoma. Studies indicate that Monsanto’s recombinant Bovine Growth Hormone (rBGH) causes treated cows to produce milk with an increased second hormone, IGF-1. This hormone is associated with human cancers. Recommendations by the Congressional watchdog agency, Government Accounting Office (GAO), recommended that rBGH not be approved. The European Union, Canada, and others have banned it. The UN has also refused to certify that using rBGH is safe.”
– Ibid

Genetic modifications engineered by Monsanto make their products bigger and more aesthetically pleasing. Another, less discussed “improvement” is the plants’ ability to withstand nearly unlimited amounts of Roundup brand herbicide. This encourages farmers to use that brand of herbicide, which is produced by … Monsanto.


Studies on Roundup link the powerful pesticide and herbicide to many health problems such as:

  • Increased risks of the cancer non-Hodgkin’s lymphoma
  • Miscarriages
  • Attention Deficit Disorder (the real one)

Fluoride

Another source of harmful chemicals is found in the modern man’s water supplies and soft drinks. As of 2002, CDC statistics show that almost 60% of the U.S. population receives fluoridated water through the taps in their homes. The official reason for the presence of fluoride in our tap water? It prevents tooth decay. OK … really? Is this mildly important benefit worth having the population consume great amounts of this substance? Some studies have even denied the dental benefits of fluoridated water.

“Scientists now believe that the main protective action from fluoride does not come from ingesting the chemical, with the teeth absorbing it from inside the body, but from direct absorption through topical application to teeth. This means swallowing water is a far less effective way to fight cavities than brushing with fluoridated toothpaste.”
– Source

So why is fluoride still found in tap water? Here are some quick facts about fluoridation chemicals:

  • they were once used as pesticides
  • they are registered as “poisonous” under the 1972 Poisons Act, in the same group of toxins as arsenic, mercury and paraquat
  • fluoride is scientifically classed as more toxic than lead, but there is about 20 times more fluoride than lead in tap water

Toxicity of fluoride compared to other poisons

Many studies have been conducted on the effects of fluoride on the human body, and some notable adverse effects have been documented: it changes bone structure and strength, impairs the immune system and has been linked to some cancers. Another alarming consequence of fluoridation is its effect on brain functions:

“In 1995, neurotoxicologist and former Director of Toxicology at Forsyth Dental Center in Boston, Dr. Phyllis Mullenix, published research showing that fluoride built up in the brains of animals when exposed to moderate levels. Damage to the brain occurred and the behavior patterns of the animals were adversely affected. Offspring of pregnant animals receiving relatively low doses of fluoride showed permanent effects to the brain which were seen as hyperactivity (ADD-like symptoms). Young animals and adult animals given fluoride experienced the opposite effect — hypoactivity or sluggishness. The toxic effects of fluoride on the central nervous system were subsequently confirmed by previously-classified government research. Two new epidemiological studies which tend to confirm fluoride’s neurotoxic effects on the brain have shown that children exposed to higher levels of fluoride had lower IQs.”
– Source

A lesser known, but extremely important side effect of fluoride is the calcification of the pineal gland.

Up until the 1990s, no research had ever been conducted to determine the impact of fluoride on the pineal gland – a small gland located between the two hemispheres of the brain that regulates the production of the hormone melatonin. Melatonin is a hormone that helps regulate the onset of puberty and helps protect the body from cell damage caused by free radicals.

It is now known – thanks to the meticulous research of Dr. Jennifer Luke from the University of Surrey in England – that the pineal gland is the primary target of fluoride accumulation within the body.

The soft tissue of the adult pineal gland contains more fluoride than any other soft tissue in the body – a level of fluoride (~300 ppm) capable of inhibiting enzymes.

The pineal gland also contains hard tissue (hydroxyapatite crystals), and this hard tissue accumulates more fluoride (up to 21,000 ppm) than any other hard tissue in the body (e.g. teeth and bone).

– Source 

Other than regulating vital hormones, the pineal gland is known to serve an esoteric function. It is known by mystic groups as the “third eye” and has been considered by many cultures to be the part of the brain responsible for spiritual enlightenment and the “link to the divine”. Is enlightenment out of bounds for the modern man?

“In the human brain there is a tiny gland called the pineal body, which is the sacred eye of the ancients, and corresponds to the third eye of the Cyclops. Little is known concerning the function of the pineal body, which Descartes suggested (more wisely than he knew) might be the abode of the spirit of man.”
– Manly P. Hall, The Secret Teachings of All Ages

Aspartame

Aspartame is an artificial sweetener used in “sugar-free” products such as diet sodas and chewing gum. Since its discovery in 1965, aspartame has caused great controversy over its health risks – primarily the claim that it causes brain tumors – and the FDA initially denied its application to be sold to the public. Searle, the company attempting to market aspartame, then appointed Donald Rumsfeld as CEO in 1977 … and things changed drastically. In a short period of time, aspartame could be found in over 5,000 products.

“Donald Rumsfeld was on President Reagan’s transition team and the day after he took office he appointed an FDA Commissioner who would approve aspartame. The FDA set up a Board of Inquiry of the best scientists they had to offer who said aspartame is not safe and causes brain tumors, and the petition for approval is hereby revoked. The new FDA Commissioner, Arthur Hull Hayes, over-ruled that Board of Inquiry and then went to work for the PR Agency of the manufacturer, Burson-Marstellar, rumored at $1000.00 a day, and has refused to talk to the press ever since.”
– Source

Years after its approval by the FDA, leading scientists still urge the organization to ban this product.

“Dr. John Olney, who founded the field of neuroscience called excitotoxicity, attempted to stop the approval of aspartame with Attorney James Turner back in 1996. The FDA’s own toxicologist, Dr. Adrian Gross told Congress that without a shadow of a doubt, aspartame can cause brain tumors and brain cancer and violated the Delaney Amendment which forbids putting anything in food that is known to cause cancer. Detailed information on this can be found in the Bressler Report (FDA report on Searle).”
– Ibid

In 1995, the FDA was forced to release, under the Freedom of Information Act, a list of ninety-two aspartame-related symptoms reported by thousands of victims.

Those symptoms are, however, only the tip of the iceberg. Aspartame has been linked to severe illnesses and long-term health issues.

“According to the top doctors and researchers on this issue, aspartame causes headache, memory loss, seizures, vision loss, coma and cancer. It worsens or mimics the symptoms of such diseases and conditions as fibromyalgia, MS, lupus, ADD, diabetes, Alzheimer’s, chronic fatigue and depression. A further danger highlighted is that aspartame liberates free methyl alcohol. The resulting chronic methanol poisoning affects the dopamine system of the brain, causing addiction. Methanol, or wood alcohol, constitutes one third of the aspartame molecule and is classified as a severe metabolic poison and narcotic.”
– Ibid

In Conclusion

If the main message of this website up to this point has been “watch what enters your mind”, the main message of this article is “watch what enters your body.” The consumption of the products described above will probably not cause an immediate, noticeable effect. But after many years of ingesting these substances, one’s thoughts become increasingly clouded and foggy, the ability to concentrate is hindered and judgment becomes impaired. In other words, the once sharp mind becomes dull. What happens when a population is heavily sedated and poisoned on a daily basis? It becomes numb, zombie-like and docile. Instead of asking important questions and seeking a higher truth, the dumbed-down masses simply accomplish their daily tasks and absorb whatever the media tell them. Is this what the elite is looking to create?

There is, however, a silver lining here. Many of the negative effects of the substances described above are reversible. And YOU are the one who decides what enters your body. This article provides a brief overview of the dangers lurking for the unaware consumer, but a wealth of information is available on which to base enlightened decisions. Your body is a temple. Will you allow it to be desecrated?


Kenya and Uganda

Posted by Admin on January 30, 2011

“Analyse the factors behind the success or failure of the consolidation of democratic institutions in Kenya and Uganda.”

 

Introduction:

This paper examines the factors that have contributed to the success or failure of the consolidation of democratic institutions in Kenya and Uganda. The post-independence period of these two countries is the starting point for this study. In arriving at its conclusion that attempts at consolidating democracy in these two countries have been abortive overall, this paper lists factors, both peculiar to these countries and endemic to the larger context of African society, as inhibitors of democracy. While the reigns of dictators in these countries, most notably Daniel arap Moi in Kenya and Milton Obote and later Idi Amin in Uganda, were undoubtedly great factors in stalling democracy, this paper sees this phenomenon as only a symptom of the disease that has afflicted Africa – its near incompatibility with democracy. The factors that have brought about this situation are discussed in the concluding part of this paper.

It should be noted that the scope of this paper precludes a detailed examination of the chronology of events[1]; only events bearing on the thesis topic are listed, and only actions concerning Moi’s and Amin’s regimes are detailed.

 

Summary:

In the assessment of this paper, the most important outward factor to have acted as a stumbling block to democracy in these countries has been their dictators. Kenya’s rendezvous with democracy was also severely blunted by pathological corruption, as a result of which the country has swung between authoritarianism and limited democracy for most of its post-independence existence. Kenya was placed on the road to democracy after independence by its founding father, Jomo Kenyatta, who gave the country a sample of democracy, but his successor, Daniel arap Moi, chose the opposite route and took the country toward totalitarian, one-party rule. When multiparty elections did come, they were flawed. The end of his long rule paved the way for more democracy, but this too was severely hampered, not so much by despotic rule as by corruption and the compulsions of coalition politics.

 

In Uganda’s case, the rule of two consecutive dictators, Milton Obote and Idi Amin, was enough to keep democratic institutions in shackles for most of its post-colonial history. The excesses of their establishments were the antithesis of any democratic attribute. If the Obote regime went all out to kill democracy, that of Idi Amin was brutal and abominable even by the standards of the continent’s other dictatorships. There has been some movement towards democracy in Uganda in the last two decades and in Kenya in the last few years, but this has not taken place in conditions different from those that fostered autocracy. Some democracy exists on paper, but it is fragile, and the system can slide back into its anarchic past at the slightest provocation.

 

Such brittleness of democratic institutions in these countries has been the outcome of the inward, critical reason for the stunted growth of democracy – the inability of the African political and social system to adopt this form of governance, which is explored in the concluding part of this paper.

 

Kenya: Kenya’s tryst with democracy was full of difficulties because it was never based on serious intent. Arap Moi’s stranglehold over the Kenya African National Union (KANU), the party he inherited from Kenyatta, was complete. An attempted coup in 1982 was the perfect pretext for him to stifle democracy. His victory speech announced the justification for the continuance of single-party rule:

There had to be a party giving people everywhere a sense of belonging and an arena of unity. The party was also to serve as an institution which the government and the people had in common — so that philosophies, policies and aspirations all sprang from the grass-roots of society. It was further visualized that the party, as a political instrument, must be appropriately involved in sustaining the countrywide momentum of nationalistic forces and feelings . . . (Ogot & Ochieng, 1995, p. 203)

 

Moi’s penchant for tyrannical rule strengthened over time; the first multiparty elections[2], held in 1992, came about three decades after independence and were marred by a lack of consensus or a pact among the parties. This made the elections a farce and rendered the opposition impotent. Coming as they did against the backdrop of increased international pressure over Kenya’s human rights record and the abysmally poor living conditions caused by corrupt governance, the elections of 1992 were a charade. Although they were multiparty elections held under the watchful eyes of international donors, they were won by the ruling party through skulduggery and the complicity of a pliant Election Commission and judiciary. Even after attaining a majority in parliament, the KANU led by Moi persistently refused to carry out constitutional reforms aimed at more democracy and greater transparency in public administration. His term in office was marked by acts such as hounding opposition figures and the media, denying them the right of association and carrying out frequent arrests. The KANU’s unwillingness to relinquish power led it to resort to considerable political chicanery during the elections. (Harbeson, 1999, pp. 49-51)

 

This was a great slump in the democratic credentials of a party that, for most of the presidency of the “father of independence”, Kenyatta, had been a fairly neat example of democracy in Africa. From the time of independence in 1963 until his death in 1978, Kenyatta had nurtured a polity that was surprisingly open to opposition parties. Although the KANU had held a monopoly on power on the national political scene since at least 1969 and the political system was based on patronage, Kenya was not a one-party state. During his presidency, elections were held regularly. Also, in a system that was at great variance with most other African regimes, the press was for the most part allowed to function without fear of political reprisal. There was liberty for the people to practise any religion, the Church and trade unions were allowed to voice their views, and civil liberty was enforced by the legal profession and the judiciary. As Moi’s rule progressed, civil liberties started taking a back seat, and power began flowing to, and being consolidated within, select ethnic communities. (Throup & Hornsby, 1998, p. 26)

 

Daniel arap Moi ran the government like a personal fiefdom, with extremely high levels of corruption[3] wreaking havoc in the daily lives of the people. While corruption had been a bane of Kenyan society since Kenyatta’s time, accountability, a prerequisite of democracy, plummeted to new lows during Moi’s time. The treasury was virtually emptied during this period in innumerable scandals. The Goldenberg scandal was only the most famous of these, in which billions of Kenyan shillings were drained from government coffers by fraudulent means. (Wright, 1998, p. 108) This resulted in strictures from Kenya’s donors and the World Bank, culminating in the suspension of vital aid, but even this did not alter the basic fabric of governance. (Lundahl, 2001, p. 99) Herein lay the problem – the inherent venality in Kenyan society, which some see as a carry-over from colonial times, when the colonisers used every method possible to deplete the exchequer. (Versi, 1996, p. 6)

 

Yet another factor of critical importance in arresting the graduation to democracy was that Kenya’s opposition leaders were mired in tribalism, had greater loyalty to their kin groups, and were less than clear about the system they wanted to put in place in the event of Moi’s defeat. Their cohesiveness was pre-empted by ethnic loyalty, which has always been at the core of African society. It was never difficult for the seasoned Moi to exploit these inherent divisions. (Bates, 1999, p. 91)

 

Kenya also has some internalised and deep-rooted social factors that have made the transition to democracy difficult. If the political factors listed above were direct and concrete factors that stalled the progress of democratic forces, social divisions such as ethnicity, location, education, income and gender lie at the heart of the society. This historic lopsidedness has made the assumption of greater power by some of these groups easy, while confining others to oblivion. Among these factors, perhaps the most important are “[e]thnic hostilities (which) reflect a weakness inherent in Kenya’s civic culture; (and) primordial fears and mistrust permeate society.” (Miller & Yeager, 1984, p. 74) This last factor has been the most important, larger reason for the lack of consolidation of democratic forces in Africa, of which these two countries are only a part. A presentation of this fact is made in the last part of this paper.

 

Returning to Moi: when the time finally came for him to hand over the reins of power, the administration that succeeded him, led by Mwai Kibaki, became mired in coalition constraints; this administration, which came to office amid high promise and expectations, got entangled in corruption in much the same manner as its predecessor (Kabukuru, 2006), to the extent of attracting threats of censure from international donors unless the system changed. (Versi, 2005, p. 13) The crux of the power struggle has revolved around Kibaki’s attempts to acquire more power by bypassing his coalition partners, even while the entrenched system remains basically unchanged. (Wrong, 2005, p. 22) It has always been difficult for democracy to take root and flourish in such conditions.

 

 

Uganda: As was the case with Kenya, in Uganda, too, highly fragmented tribal and ethnic loyalties produced a splintered polity that could be exploited at will by the men in power. Right at the time of independence, major differences between the country’s most prosperous region, home to its most powerful ethnic group, the Buganda, and the rest of the country erupted and threatened to split the nation apart. Using their bargaining power, the Buganda had obtained a special status under the constitution of 1962, the nation’s first. Early in his term in office, the country’s first Prime Minister, Milton Obote, was confronted with serious differences with the Buganda. After attempts at reconciling these and other factions, Obote went on the offensive, and by 1966 had had the constitution annulled. When the Buganda protested this act and rose in rebellion, seeking foreign aid for separatism, the Obote government decided to take the threat head on. Declaring a state of national emergency, he ordered an army crackdown on the Buganda stronghold, the tribal chief’s palace on Mengo Hill. Even as the Buganda chieftain, the Kabaka, fled and the threat from this province receded, the Prime Minister seized the initiative to accumulate absolute power, starting by making himself president. This was the start of his autocratic regime; soon, what had started as a measure to control internal dissent became an instrument of absolute power. A new constitution was promulgated in 1967, ostensibly to abolish the four dominant kingdoms and create a new government of unity, but it was misused to gain total control. (Ofcansky, 1996, pp. 39-41)

 

Once Obote had been overthrown in a coup led by Idi Amin, his once trusted aide, what followed was a virtual bloodbath that soaked the entire country and marked a terrible chapter in its history. Amin first accused Tanzania of backing Obote, and used this as a reason for rounding up alleged supporters of the deposed ruler. He next targeted officers implicated in a failed coup, butchering several of them arbitrarily; one of his first acts was to proclaim himself ‘president for life’. Among his long list of undemocratic and perverse acts was the suppression of the Catholic Church, on the grounds that it was taking part in subversive activities; he even had the Anglican archbishop of Uganda murdered. Such egregious acts continued with impunity until he was overthrown in 1979 (Jessup, 1998, p. 24) – but not before he had foreign journalists killed for attempting to cover the war with Tanzania (Hachten, 1992, p. 42) and ordered the mass deportation of the Indian businessmen who generated the cream of the economy. (Sowell, 1996, p. 321)

 

In Uganda, too, as in Kenya, society was deeply fractured along ethnic and tribal affiliations. Moreover, it was a country that had been built almost entirely on the strength of its agricultural sector. Industrialisation was almost totally absent, which meant not only that dependence on this ancient system of production attracted more tribal loyalties, but also that there was a total absence of any form of industrial bourgeoisie. This made organisation of the masses against tyranny ever more difficult. Affiliations thus ran along social rather than labour-oriented lines, which solidified the forces of concentration of power and inhibited the spread of democracy to the grassroots level. With the advent of the colonisers, the geographical imbalance of power tilted in favour of Buganda was further aggravated in the form of animosity between Catholic and Protestant, and between Christian and Muslim. (Hansen & Twaddle, 1988, p. 29)

 

In this milieu, the growth and consolidation of democratic institutions has always been next to impossible, as the present incumbent, Yoweri Museveni, has been discovering. Although he is not given to the despotism of the earlier dictators, he has had a difficult time keeping the country together: the frequent attacks of the sectarian Christian movement, the Lord’s Resistance Army, and his government’s involvement in the civil war in Sudan (Scherrer, 2002, p. 56) have weakened the graduation to democracy.

 

Conclusion: It is easy to pin the difficulty of establishing democracy on the handful of dictators that ruled these two countries. While prima facie it is true that these men were responsible for this outcome, as shown in this paper, a deeper understanding is needed of the conditions that enabled them to assume absolute power and unleash such highly dictatorial regimes in these countries.

 

Discerning analysts and commentators have pointed to systemic problems lying at the heart of African society as the chief impediments to the fertilisation of democracy. This has been analysed threadbare with amazing clarity by Smith Hempstone (1995). The nub of this highly perspicacious analysis is that the seed of democracy was never present in Africa. This was a continent that had been blissfully insulated from all the major events that shook the world – the Renaissance, the Reformation, the Industrial Revolution, explorations and political revolutions. The manure necessary for the sapling of democracy to sprout – openness, innovation and enterprise – was alien to African society. The institutional units that comprised society were the tribes, and absolute, unquestioning obedience to their leaders was the highest hallmark of sacrosanct piety. All the dictators produced by Africa, Kenya and Uganda included, were people who made the most of this core of African life and society. The ideas of accountability, the rule of law and checks and balances were unknown to Africa. In a society that was light years away from the consciousness of nationhood, the parliamentary institutions that Britain put in place turned out to be seeds planted in arid ground. It is only natural that the only yield this continent threw up was dictators, the modern-day avatars of tribal chiefs. (Hempstone, 1995) A Moi and an Idi Amin were manifestations of this highly entrenched malaise that Africa has lived with. The democracy that has been seen in the last two decades has sprung out of the same conditions; hence, it should not be surprising if it breaks up on account of a consolidation of these primeval forces.

This being the core reason for the failure of democracy in Africa, it is difficult to see how Kenya or Uganda could have been exempt from this pattern of governance. This, in the real and holistic sense, has been Africa’s story, and it sums up the crux of the factors behind the failure of the consolidation of democratic institutions in Kenya and Uganda.

Written By Ravindra G Rao

 

References

 

 

Bates, R. H., (1999), “The Economic Bases of Democratization”, in State, Conflict, and Democracy in Africa, Joseph, R., (Ed.), (pp. 83-93), Lynne Rienner, Boulder, CO.

 

Jessup, J. E., (1998), An Encyclopedic Dictionary of Conflict and Conflict Resolution, 1945-1996, Greenwood Press, Westport, CT.

 

Hachten, W., (1992), “African Censorship and American Correspondents”, in Africa’s Media Image, Hawk, B. G., (Ed.), (pp. 38-47), Praeger Publishers, Westport, CT.

 

Hansen, H. B. & Twaddle, M., (Eds.), (1988), Uganda Now: Between Decay & Development, James Currey, London.

 

Harbeson, J. W., (1999), “Rethinking Democratic Transitions: Lessons from Eastern and Southern Africa”, in State, Conflict, and Democracy in Africa, Joseph, R., (Ed.), (pp. 39-54), Lynne Rienner, Boulder, CO.

 

Hempstone, S., (1995, Winter), “Kenya: A Tarnished Jewel”, The National Interest, 50+. Retrieved May 20, 2007, from Questia database: http://www.questia.com/

 

Kabukuru, W., (2006, March), “Kenya: Has Kibaki Delivered? on 27 December 2002, Kenyans Elected a New Government, Headed by President Mwai Kibaki, Amidst Euphoria and Optimism. Three Years on, What Is the Score? Has Kibaki’s Government Fulfilled Its Electoral Promises? or Has It Been More of the Same? from Nairobi, Wanjohi Kabukuru Takes an Indepth Look”, New African 10+, Retrieved May 20, 2007, from Questia database: http://www.questia.com/

 

Lundahl, M., (Ed.), (2001), From Crisis to Growth in Africa?, Routledge, London. Retrieved May 20, 2007, from Questia database: http://www.questia.com/PM.qst?a=o&d=107366356

 

Miller, N., & Yeager, R., (1984), The Quest for Prosperity, Westview Press, Boulder, CO.

 

Ofcansky, T. P., (1996), Uganda: Tarnished Pearl of Africa, Westview Press, Boulder, CO.

 

Ogot, B. A., & Ochieng, W. R., (1995), Decolonization & Independence in Kenya, 1940-93, James Currey, London.

 

Scherrer, C. P., (2002), Genocide and Crisis in Central Africa: Conflict Roots, Mass Violence, and Regional War, Praeger, Westport, CT.

 

Sowell, T., (1996), Migrations and Cultures: A World View, BasicBooks, New York.

 

Throup, D. W., & Hornsby, C., (1998), Multi-Party Politics in Kenya: The Kenyatta & Moi States & the Triumph of the System in the 1992 Election, James Currey, Oxford.

 

Versi, A., (1996, February) “The Culture of Sleaze”, African Business, 6. Retrieved May 20, 2007, from Questia database: http://www.questia.com/

 

Versi, A., (2005, March), “The Burr of Corruption”, African Business, 13. Retrieved May 20, 2007, from Questia database: http://www.questia.com/

 

Wright, S. (Ed.), (1998), African Foreign Policies, Westview Press, Boulder, CO.

 

Wrong, M., (2005, September 5), “World View: African Voters Are Naive about Their Constitutions. Ruthless, Corrupt Elites Will Not Suddenly Start Sharing Power Just Because a Legal Document Says They Must” New Statesman, Vol. 134, p. 22. Retrieved May 20, 2007, from Questia database: http://www.questia.com/


[1] For a more structured chronology of events about Kenya, the following links from BBC are a good reference: http://news.bbc.co.uk/2/hi/africa/country_profiles/1026884.stm and http://news.bbc.co.uk/2/hi/africa/country_profiles/1069166.stm

A chronological history of Uganda is very well documented on the sites of the US Library of Congress. This site is particularly important for this study:

http://lcweb2.loc.gov/frd/cs/ugtoc.html and http://lcweb2.loc.gov/cgi-bin/query/r?frd/cstdy:@field(DOCID+ug0017)

[2] A good article for the run-up to these elections can be found on http://www.accord.org.za/ct/2002-4/CT4_2002_pg28-35.pdf

[3] The all-pervasiveness of this malaise has been well documented in a World Bank study authored by Anwar Shah in the report “Corruption and Decentralized Public Governance”. This report is aimed essentially at trying to find out if decentralization is a panacea to endemic corruption; yet, it gives a good account of this subject as a whole. Critical issues relating to corruption in Kenya find some mention in this report. It can be accessed on http://www-wds.worldbank.org/servlet/WDSContentServer/WDSP/IB/2006/01/13/000016406_20060113145401/Rendered/PDF/wps3824.pdf


LIFE AND TEACHINGS OF THE BUDDHA

Posted by Admin on January 30, 2011


The Gautama Buddha

Introduction: This paper profiles the Buddha’s early life and teachings. Having started off with a description of the interesting story of his preordained birth and early life leading to his renunciation, it looks at the circumstances that warranted the birth of the religion he founded. After describing the core philosophical and spiritual aspects of Buddhism, this paper rounds off with a discussion of the present situation of this religion around the world.

Limitations of this paper: While most of the requirements of this paper are met, one limitation is that it looks at Buddhism only as a whole, from the perspectives mentioned above, without considering its main sects or traditions. Secondly, since an attempt is made in this paper to illustrate the core concepts of this religion as lucidly as possible, no reference is made to the holy texts of Buddhism, and for this reason this paper has no quotes from them. This has been avoided for the simple reason that these texts are in the ancient Indian languages of Pali and Prakrit.

Birth and early life: Buddha, born Siddhartha, arrived at a time when circumstances warranted the advent of a great soul that would cleanse the world of its miseries and suffering. As has happened with the arrival of all great men, there were prophecies of his advent, right from the time of his conception – his mother, the queen of the Shakya kingdom in the Himalayan foothills, dreamt of a six-tusked white elephant descending from the heavens and entering her womb, plucking flowers along the way. Thus was the Buddha conceived in the most spectacular of fashions. At the time of his delivery, it is believed that four divine angels held out golden nets to receive the baby boy. As is the custom in India, the baby’s birth had to take place in the mother’s parental home; the baby was born during the arduous journey to her father’s house. To ease her labor, a sal tree, under which she rested, bent to give her shelter and ease the birth. The prophesying continued into his infancy. A saint foretold that the child’s chancing upon four signs – an enfeebled old man, a sick man, a corpse and a monk – would take him from the royal palace to the path he was destined to traverse. Shuddhodana, his doting father, apprehensive of losing his auspicious son to the esoteric, tried desperately to bind the prince to the mundane. He ensured that no such persons ever entered the palace. (Ballou) Such was the effort the king took to shield the prince from the sight of such persons that he had three different palaces built, one for each of the seasons, for his son to enjoy, and whenever Siddhartha was being moved from one palace to another, he made sure that anyone of such a description was removed from the way. As the boy grew up in all imaginable royal comfort under the gazing, protective eye of his father, there was little in his princely upbringing that gave even the remotest chance of his being exposed to the vagaries and vicissitudes of life. Yet, for all the insulation Shuddhodana tried to place his son in, there was an inner spiritual craving in Siddhartha that was causing a sense of ennui in him. On one occasion, he insisted that he be taken for a ride. His loyal charioteer, Channa, took the young prince out on a fateful day on the equally loyal, strikingly handsome stallion, Kanthaka. Everyday life steeped in misery in the India of the 6th century B.C. was precisely the reason Siddhartha had been born, and this ride exposed him to all four categories of persons his father had so assiduously wanted to keep him away from. (Corless 7)

The sight of these four persons brought about such a transformation in him that he decided to renounce the world right there and seek the ultimate truth. Divine afflatus was so much on his side that the gods placed their palms under his feet, so that his possessive father would not hear his footsteps as he walked out of the palace. Yet again, a divine act was performed to facilitate his departure: Shuddhodana had been so anxious about retaining his son that he had ordered the palace gates to be shut, in case his son still managed to find his way out. However, it was decreed that Siddhartha would leave, and nothing would stop the transformation of the prince into the Buddha, the enlightened one. The powerful gates opened on their own. Once he had departed, he implored his faithful companions, Channa and Kanthaka, to leave. The heartbroken horse is believed to have died of grief at this separation. (Ballou)

Background to the birth of Buddhism: In the centuries leading up to the birth of Buddhism, a series of events greatly convulsed Indian society. The discovery and molding of iron took the art of warfare to hitherto unknown heights, giving rise to kingdoms and making the role of the despot more important to everyday life than ever before; over time, this resulted in unprecedented greed for power and pelf. The vast, sprawling area below the mighty Himalayas had a mere sixteen states. At the same time, the scriptural injunctions of Hindu society, the Vedas and the Upanishads, preached the oneness of man and God. Taken to the extreme, this belief led to a glorification of rituals that were no more than a superficial symbolism of the core values of the Vedas, with several mutually contradictory approaches to Moksha, or liberation. (Prebish 7-9) Rituals, animal sacrifice and a rigidly class-based caste system took center stage in religious and social life. Compounding Brahminical hierarchical superiority was the existence of another recent religion, Jainism, which took nonviolence to such extremes that the austerities it prescribed were almost impossible for the common man to practise. (Craig 38) This, then, was the state of spiritual decadence India was going through at the time of the birth of the Buddha. Is it any surprise that in Buddhism the caste system is totally absent, and there is no place for rituals and animal sacrifice?

Understanding the core concepts of Buddhism: Buddhism centers on the fleeting nature of life and the material world. Man, in his state of ignorance, fails to understand this and ends up getting attached to all the impermanent ideals and possessions that surround him. All that this attachment begets is misery. This misery is alleviated only when he gains knowledge of his true self, which is the true nature of the soul. Thus, it is attachment to the ephemeral that is at the root of his ignorance, and which clouds his grasp of the true nature of his self. This ignorance can be removed, and man can be made to understand the connection between his true self and the permanent source of wisdom, or God, only when he rids himself of attachment to evanescent objects. The way to achieve this is Karma. Again, this is tricky, because karma in the traditional sense means action, which can be of any kind, both good and bad. It is the pursuit of good deeds that frees man from bondage and from the cycle of rebirths, and puts him into a state of eternal bliss, or Nirvana. (Morgan 25) To Buddha, the way to achieve this is the core of his philosophy: the concepts of the ‘Four Noble Truths’, the ‘Five Aggregates’ and the ‘Eight Fold Path’. Of these, the ‘Four Noble Truths’ is central, and the others are tied to it in sequence, as if they were corollaries to the core. (Prebish 29) Accordingly, these are: 1) suffering is inseparable from material existence; 2) the cause of this suffering is desire and ignorance; 3) freedom from this ignorance and desire is freedom from their attendant suffering; and 4) there is a method or way by which this freedom can be achieved – the pursuit of wisdom and knowledge of the self. (Gross 148) The metaphysical metaphor for this is the transition from disease to cure; Buddha is seen as the physician, the Bhavaroga Vaidya, the doctor who cures worldly illnesses by first diagnosing the disease, then understanding its cause, then deciding the cure, and finally administering it. (Keown 45) The ‘Five Aggregates’ are form, sensation, perception, mental formations and consciousness. (Akira 44) Finally, the Eight Fold Path consists of right belief, right resolve, right speech, right conduct, right occupation, right effort, right contemplation and right meditation. (“Exiled from Home, Loved” 3) If this is the core philosophy of Buddhism, its chief spiritual traditions are nonviolence, tolerance and compassion for others; these are enunciated through the core means of community existence, or the Sangha. (Boyle)

There are various theories about the spread of Buddhism. Though none of these is conclusive, the most widely accepted one is that ancient India’s first great emperor, Ashoka, underwent a great transformation after his victory in the famous battle of Kalinga in the 3rd century BC. Despite being the victor, he was so moved by the gore and grief the battle caused that he abjured all forms of violence, until then so essential a part of his sanguinary nature, and embraced Buddhism. The king is known to have made efforts to spread the essence of Buddhism to faraway lands such as present-day Sri Lanka, Southeast Asia, the Near East and Macedonia. (Keown 69) To the present day, this religion remains predominant in several of these regions – Sri Lanka and most countries of Southeast Asia – and survives in pockets in India and the West. (Boyle)

Conclusion: Buddha placed the highest emphasis on a halt to meaningless rituals and on good conduct, beyond which there existed no higher form of benevolence or spirituality. These, to him, were more important than the blind beliefs that had become the bane of Hindu society at that time. The simple yet incisive organization of his thought is reflected in its refreshing structure and logic: the Four Noble Truths are the essence of life; they are realised through an understanding of the Five Aggregates, and lead to the Eight Fold Path. The logic of action and consequence, though rooted in the Hindu concept of Karma, was more straightforward. Like Hinduism, Buddhism too lays out the belief that karma is individualistic and unique to the person attached to it. Like Confucius, who detested everything superficial, Buddha felt the same revulsion towards blind adherence to the merely ritualistic side of the Upanishads. To him, good thoughts and deeds beget good, although over a cycle of births. Thus, his teachings, apart from standing out for their crystal-clear logic, were aimed at cleansing the rot that had set in. Buddhism, an offshoot of Hinduism, went on to change the lives of people across an entire continent. (Ballou)

Written By Ravindra G Rao

Works Cited

 

Akira, Hirakawa. A History of Indian Buddhism: From Sakyamuni to Early Mahayana. Trans. Paul Groner. Ed. Paul Groner. Honolulu: University of Hawaii Press, 1990.

Ballou, Robert O., ed. The Portable World Bible. 1st ed. New York: Penguin Books, 1944.

Boyle, Joan. “Buddhist Discourse: An Instrument of Peace.” International Journal of Humanities and Peace 17.1 (2001): 27+. Questia. 26 Nov. 2005 <http://www.questia.com>.

Cheetham, Eric. Fundamentals of Mainstream Buddhism. Boston, MA: Tuttle Publishing, 1994.

Corless, Roger J. The Vision of Buddhism: The Space under the Tree. 1st ed. St. Paul, MN: Paragon House, 1989.

Craig, Edward. Philosophy: A Very Short Introduction. Oxford, England: Oxford University Press, 2002.

“Exiled from Home, Loved in Liverpool.” Liverpool Echo (Liverpool, England) 27 May 2004: 3. Questia. 26 Nov. 2005 <http://www.questia.com>.

Gross, Rita M. Buddhism after Patriarchy: A Feminist History, Analysis, and Reconstruction of Buddhism. Albany, NY: State University of New York Press, 1993.

Keown, Damien. Buddhism: A Very Short Introduction. Oxford: Oxford University Press, 1996.

Morgan, Kenneth W., ed. The Path of the Buddha: Buddhism Interpreted by Buddhists. New York: Ronald Press, 1956.

Prebish, Charles S., ed. Buddhism: A Modern Perspective.  University Park, PA: Pennsylvania State University Press, 1994.


THE INDIAN ACT OF 1876 AND ITS EFFECTS

Posted by Admin on January 30, 2011

Introduction: This research paper is a full-scale exploration of the Indian Act, passed in Canada in 1876. It provides a detailed explanation of the background to the Act and takes the position that this Act has been a primary contributor to the state of underdevelopment among the Indians, from the time of its inception to the present day. It supports this position by illustrating in detail the situation in which the Act was passed. The paper shows that the Act was passed with the intention of depriving the Indians of a share in the development that took place on their own lands. This was done by disowning mixed marriages – the most potent tool with which access to their lands had been gained – once their utility to the Settlers dissipated. The paper then lists the ways in which the Act has affected the Indians, by driving them off their lands and isolating them in near-prisons of underdevelopment.

Background to the promulgation of the Act: The Indian Act of 1876 was passed under the guise of restoring lands to their original owners, the Native peoples – the Indians, Metis and Inuit – from whom the European settlers had confiscated them. It was the culmination of a series of events in the dispute between the Native Indians and the European settlers over the issue of land ownership and control, spanning several centuries. The conflict between the Natives and Settlers revolved around two crucial factors – the expropriation of Native lands by the Settlers, and the failure of the medium through which this was initially carried out: mixed marriages between Europeans and Natives.

The starting point of the conflict between the two groups dates to the colonization of this vast stretch of land, which was given sovereign legitimacy by the Royal Proclamation of 1763, issued by King George III. Ironically, this proclamation recognized the right of the Natives to the ownership of their lands; accordingly, these indigenous groups, called the various ‘nations’ of the Indians, were to be treated on a ‘nation to nation’ basis. It was conceived with the salutary intention of demarcating lands between Natives and Settlers, with the proviso that should the Natives decide to sell their lands to the Settlers, they would be compensated through legally binding treaties. Implicit in this was the acknowledgment that the Indians were a distinct people with their own identity and cultural practices. Unfortunately, the later years saw a terrible misuse of the well-intentioned parts of these treaties, with the result that they became nothing more than documents whose spirit could be negated with impunity to drive the Natives off their lands. (Cote, 2001, p. 15) The relationship between the Natives and Settlers took on an anthropo-economic dimension, given the nature of the Settlers’ involvement in the land they chose to settle: they were essentially attracted to the New World for trade. The sprawling area that came to be called Canada was rich in fur, a critical commodity for Western trade. So central was the fur trade to the dynamics of the new colony that it became wedded to Native culture, quite literally – to facilitate this trade, several settlers began marrying into the local Native communities. This was necessitated as much by the willingness of the Natives to expand their socioeconomic base by synthesizing the two cultures through exogamous marriages as by the additional incentive the Settler gained by becoming part of the Native band. Over time, as the Settlers saw no major economic or social benefit in marrying the Natives, the accent shifted, towards the end of the colonial period, from fostering marriages to impeding and repudiating them. Thus, while in the beginning and through the course of the history of colonization the union of the two cultures was predicated along matrimonial lines, “…by the end of the colonial period, intermarriage had been transformed by settler society into “marrying-out.” Aboriginal women lost their Indian status if they married nonstatus males. Aboriginal groups were deprived of any say in the matter and their kinship structures were ignored.” (Van Kirk, 2002) To further reinforce this separation from the Native tribes, the Settlers enforced a policy of segregating the Natives.

The period of relationship through marriage had two distinct nuances, intertwined with questions of race and gender: one concerning the attitudes of the colonizers towards the Natives, and another concerning the fur trade. Even as colonization, and with it exogamous marriage, proceeded, the general perception among the Settlers was that they were marrying members of a pagan, inferior race, for whom loyalty to the ritualistic clan mattered more than loyalty to what the Settlers considered the one and only true God. Marriage into such a tribe would entail passing on these base qualities to subsequent generations of their own blood. Thus, it comes as no surprise that the practice was overwhelmingly one of unions between European males and Native females. However, there was frequent resentment between the two groups, since Native culture demanded that marriages become legitimate only when the Settler accepted Native cultural practices, most of which were anathema to him. Yet several men consented to such marriages despite all these reservations, since marriage to a member of a Native clan was an essential prerequisite for access to the fur trade, and also because Native women were active participants in the fur trade, involving themselves in its secondary aspects. Within the huge territory, marriage patterns varied with the geography of the region – in areas where fur was abundant, marriages were far more numerous than in areas that had other resources. A major reason mixed marriages failed was that there existed irreconcilable differences between the two groups over the meaning and utility of marriage. If the Natives perceived marriage to a Settler as a means of economic betterment, the Settlers thought marriage was meant “…to make them like us, to give them the knowledge of the true God…” (Van Kirk, 2002) It was only natural that marriage between such dichotomous entities would collapse once the thread that bound them snapped. From being indispensable to facilitating trade, the Natives now became an intrusion into Settler lives.

It was against this backdrop that the Indian Act was promulgated in 1876, aimed at isolating the Natives. Designed to spell out the policy of the Federal government towards the Natives, it looked at the question of the Natives in a condescending manner; instead of recognizing the rights of the Natives over the lands that rightfully belonged to them, the law aimed at the total obliteration of Indian cultures. (Cote, 2001, p. 15) Defining an Indian as one “…who pursuant to this Act is registered as an Indian or is entitled to be registered as an Indian” (Cardinal, 1969, p. 18), what the law did was to put the Federal seal on a virtual ostracism of the Natives under the garb of granting them land. Under the pretext of placing the Natives in what were conveniently called ‘Reserve lands’, the government used the reserves as a ruse to Christianize the Natives. (Cote, 2001, p. 15) Classified on par with minors, and thus deprived of the legal status of full citizenship, the Indians were further discriminated against: Indian women who married non-Indians were excluded from the definition of ‘Indians’. (Titley, 1986, p. 11) With the law quarantining the Natives in their own country, the next step was to make sure they were molded into the Settlers’ line of thinking. “Once they “proved” they were civilized, they were supposed to disappear or assimilate into the general Canadian population, thus getting rid of the (so-called) “Indian problem” by getting rid of Indians and their special status altogether.” (Cote, 2001, p. 15)

The primary aim of the Act was to weaken Indian families by offering them the option of leaving the reserves of their own volition if they wished to join the mainstream of British society, a precondition for full franchise. In other words, this Act was intended to give the Indians two options – either to be forcibly assimilated into the establishment (Tennant, 1990, p. 45), or simply to “…disappear as distinct and recognizable ethnic groups”. (Nichols, 1998, p. 226)

Effect of the law: The effects of this law have been manifold. At the social level, it created a system of bands, defined as a “…body of Indians holding lands or a reserve in common or for whom funds were held in trust by the federal government” (Titley, 1986, p. 11), wherein the Natives are to exercise their powers through the equivalent of municipal commissions, with nothing more than the subordinate powers such bodies are granted. The Minister of Indian Affairs still has absolute control over vital areas. (Cote, 2001, p. 15) Even the composition of the band council, consisting usually of one chief and one councilor for every 100 band members, is left to the discretion of the Minister, who may authorize the creation of a band council “…in the interest of the good government of the band”. (Catt & Murphy, 2002, p. 85)

The second effect of this law is economic: it has driven the Indians to penury. It has left the Indians in pockets of underdevelopment, even as the UN has rated Canada the best country in the world to live in. This is understandable, considering that the Indian Act is a systematic procedure aimed at detaching the Indians from development by confining them to the reserves. The malefic effects of this apartheid are all too obvious, with average unemployment rates touching a high of 25 percent. In particular, one of the sections of the Act hits the Indians where it hurts most, by prohibiting them from raising loans against their lands, their only resource. This deprives them of their already meager avenues for development. (Kendall, 2001, p. 43)

Another example of how this law stripped the Indians of their bare resources is that of the Indians living on the Saskatchewan River in the years following the enactment of the law. Here, poverty forced the Natives to trap beavers and muskrats for a living. Driven to desperation, they often resorted to over-trapping. This led to a severe drop in water levels, and with it the muskrat population started dwindling; this chain reaction reduced the Indians’ incomes drastically. (Nichols, 1998, p. 276)

Conclusion: The Indian Act was designed to be discriminatory. The tragedy is that the original spirit of the Royal Proclamation, promulgated in a different context and time – especially its bifurcation of lands between Natives and Settlers – has been preserved down the ages. Although the Act has had its progeny in the form of the Penner Report of 1983, the Charlottetown Accord of 1992, and the Final Report of the Canadian Royal Commission on Aboriginal People of 1995, the best these half-hearted measures have achieved is that some of the Indian tribes, such as the Sechelt, the Nisga’a, the Inuit, the Yukon First Nations and the James Bay Cree, have negotiated separate treaties and diluted some of the provisions of the original Act. Overall, the spirit of discrimination embodied in this law remains essentially the same. (Catt & Murphy, 2002, p. 85)

Viewed in its totality, this Act has been the symbol of the degradation of the Natives and the appropriation of their resources by the Settlers. This has been the common experience of most cultures that were colonized.

Written By Ravindra G Rao

References

 

 

Catt, H., & Murphy, M. (2002). Sub-State Nationalism: A Comparative Analysis of Institutional Design. London: Routledge. Retrieved October 2, 2005, from Questia database: http://www.questia.com/PM.qst?a=o&d=103320455

 

Cote, C. (2001). Historical Foundations of Indian Sovereignty in Canada and the United States: A Brief Overview. 15. Retrieved October 2, 2005, from Questia database: http://www.questia.com/PM.qst?a=o&d=5002420582

 

Cardinal, H. (1969). The Unjust Society: The Tragedy of Canada’s Indians. Edmonton, Alta.: M. G. Hurtig. Retrieved October 2, 2005, from Questia database: http://www.questia.com/PM.qst?a=o&d=6072432

 

Kendall, J. (2001). Circles of Disadvantage: Aboriginal Poverty and Underdevelopment in Canada. 43. Retrieved October 2, 2005, from Questia database: http://www.questia.com/PM.qst?a=o&d=5002420590

 

Nichols, R. L. (1998). Indians in the United States and Canada: A Comparative History. Lincoln, NE: University of Nebraska Press. Retrieved October 2, 2005, from Questia database: http://www.questia.com/PM.qst?a=o&d=102088840

 

Tennant, P. (1990). Aboriginal Peoples and Politics: The Indian Land Question in British Columbia, 1849-1989. Vancouver, B.C.: University of British Columbia Press. Retrieved October 2, 2005, from Questia database: http://www.questia.com/PM.qst?a=o&d=59692822

 

Titley, E. B. (1986). A Narrow Vision: Duncan Campbell Scott and the Administration of Indian Affairs in Canada. Vancouver, B.C.: University of British Columbia Press. Retrieved October 2, 2005, from Questia database: http://www.questia.com/PM.qst?a=o&d=38479922

 

Van Kirk, S. (2002). From “Marrying-In” to “Marrying-Out”: Changing Patterns of Aboriginal/non-Aboriginal Marriage in Colonial Canada. Frontiers – A Journal of Women’s Studies, 23(3), 1+. Retrieved October 2, 2005, from Questia database: http://www.questia.com/PM.qst?a=o&d=5002526579


HUMAN RIGHTS AND DEMOCRACY IN IRAN

Posted by Admin on January 30, 2011

Table of contents:

1.      Introduction;

2.      Outline;

3.      Limitations of this study;

4.      The road to democracy;

5.      Democracy in Iran;

6.       Human rights in Iran;

7.      Conclusion.

* * * * * * * * * * * *

 

1. Introduction:

This paper looks at human rights and democracy in Iran in the wake of the political reforms implemented there since the late 1980s and early 1990s. It proceeds on the important premise that, before preparing a report card on Iran’s progress in these two areas, it is necessary to bear in mind that these two concepts have a unique dimension, shaped by the chain of events that ushered them into Iran. It would not make much sense to make sweeping, generalised statements about democracy and human rights, essentially Western concepts, when they are applied in one of the world’s oldest civilisations, in which an Islamic form of government is very much at the centre of power. “In Iran as in other Muslim countries, paths to human rights lie within Islam, to the extent that dialogue can grow between traditionalists and innovators” (Gustafson & Juviler, 1999, p. 9). Any discourse on democracy and human rights in Iran has to be understood in relation to the country’s circumstance: the reform movement, which tried to infuse these ideas into the country, was basically a reaction to the failure of the Revolution to sustain the goals it sought to achieve in the face of the changing dynamics of international relations after the Gulf War and the Iran–Iraq war. Thus, one has to understand that there exists a unique paradigm for democracy and human rights in Iran, which is at variance with what the West broadly perceives as universal values for all mankind. Keeping this consideration in mind, this paper looks at the progress made on these two fronts, the guaranteeing and denying of which is the leitmotif of the opposing camps, the reformists and the conservatives, respectively.

2. Outline:

This paper begins by detailing how democracy has been introduced in stages. The most striking feature of the country’s process of democratisation has been the reluctance of the ruling establishment to give in to the moderates, who have sought to implement democracy. Thus, the study of the democratisation of Iran is chiefly characterised by the tussles within the country’s political establishment between those who want to introduce democracy and those who want to abort it. Hence, a considerable portion of this paper is devoted to sketching the long series of battles in the war between the reformists and the conservatives. Human rights in Iran, an offshoot and corollary of the attempts at launching democracy, are detailed next. Mention is made of the efforts at bettering human rights in the country by the Nobel Peace Prize laureate Shirin Ebadi. Finally, this paper offers its conclusions, in which it tries to prognosticate the prospects and pitfalls for democracy and human rights in the country.

3. Limitations of this study:

A study of the actual progress made in the transition of the political system in any country would be truly comprehensive only if one were to keep one’s ear to the ground; in the absence of this, this paper relies heavily on the writings of opinion-makers emanating from that country. This is not to doubt their authenticity, but most of these opinion-makers have their own agendas, and as such their objectivity is not indubitable. A thorough and objective study is best arrived at by measuring the impact of democracy and human rights at the grassroots level. In the absence of this exercise, this paper is liable to be swayed by the at times emotive nature of the sources on which it bases its study. In other words, the most objective and scholarly work on human rights and democracy in Iran would be one seen through Iranian, not Western or Western-oriented, eyes – a requirement not met by this paper. Some attention is given to reports of human rights violations from Amnesty International, whose objectivity has never been proven.

Another important shortfall of this paper is that it looks at human rights in Iran only from the time the new regime took power, i.e., after the death of the Ayatollah who led the Revolution. Although gross human rights violations took place both under the Islamic-style government the Revolution installed and under the Shah’s regime it overthrew, this paper does not look at those, and chooses the period from the start of the new regime only because this is when democratisation of the political system started. Since the two themes are closely interrelated, there may be some overlap in describing the events pertaining to them. Another very important aspect to be borne in mind is that this paper was written just a few weeks prior to the presidential election of 2005, when the tussles between the conservatives and moderates were at their peak. The result of this election is not reflected in this paper.

4. The road to democracy:

The reform movement in Iran, which has been spearheading the implementation of democracy and human rights in the country, was born in the wake of the failure of the Revolution to spread benefits to the masses. (Kazemi, 2003) Although the Islamic Revolution of 1979 is the event that has most deeply shaped modern Iranian history, ironically, the country’s two earlier revolutions, those of 1906 and 1953, took place for the furtherance of democracy (Momayesi, 2000, p. 41), yet resulted in the establishment of monarchies. The latest revolution, the root of the current tussle over democratisation, was at first followed by major international political and economic problems. (Wright, 1996) The Revolution took place in very violent circumstances, culminating in the overthrow of the corrupt, unflinchingly pro-Western Shah. (Seliktar, 2000, pp. 73-90) For all the tumult and convulsion that major event precipitated, the direct effect it produced – total Islamic rule – lasted little more than a decade. The regime soon had to either abandon or dilute some of its core ideals, owing to the variety of unforeseen changes that unfolded on the international scene. One of the ideals that inevitably became a product of the changed situation was democracy. “In the 1990s, several factors contributed to the intensification of the debate over democracy and democratic institutions in Iranian society. These include the death of Ayatollah Khomeini in 1989, the disillusionment of a substantial portion of Iranian society with government policies, especially in the areas of liberty and individual rights, the imposition of more restrictions over freedom, and authoritarian infringement of people’s constitutional rights. The advocates of reformist Islam launched afresh a campaign to promote democratic values in government and society.” (Momayesi, 2000, p. 41) The first concrete step towards the latest round of democratisation was the elevation of the moderate reformist Hashemi Rafsanjani from Speaker of the parliament, the Majlis, to the office of the president in 1989. Rafsanjani assumed office at a time when “…the struggle to determine the true revolutionary path had entered a new phase, involving major policy reevaluation”. (Ayalon, 1995, p. 317)

5. Democracy in Iran:

To undo the deeply ensconced politico-religious system in a matter of two presidential terms was no easy task. After the end of Rafsanjani's two four-year terms, the mantle of the presidency passed to his successor and like-minded reformist, Mohammed Khatami, who “…emphasized the country’s need for national unity, respect for the law and civil rights, the creation of a vibrant civil society, and the eradication of poverty.” (Amuzegar, 1998, p. 76) His efforts at reforming the political system, aimed at bringing about democracy, were well received at first, as they represented the change the people were yearning for. (Yasin, 2002) Initially, Khatami seemed to take off from where his predecessor had left off. He enjoyed massive support from the constituencies least imaginable under the earlier theocratic regime: youth and women. One of the most drastic changes he sought to implement was in the area of religious governance; he set about altering the structure of the clergy, something that had been unimaginable earlier. Changes were implemented in some of the most important institutions, such as those of the supreme leader, the Faqih, the presidency, the judiciary and the Majlis. Khatami carried out amendments to the 1979 Islamic constitution, which had come into effect as a result of the Revolution. Dictated by the need of the hour, brought about by the death of the architect of the Revolution, Ayatollah Khomeini, one of the most tangible steps towards democratisation of the ruling clergy was “…a significant revision in the qualifications for the holder of this omnipotent office. The all-important and stringent religious qualifications were reduced.” (Kazemi, 2003)

After Khatami’s re-election in 2001 with a reduced majority, the pace of democratic reform lost some of its earlier tempo. Opposition to his democratisation process grew steadily, especially as the hardline conservatives enjoyed numerical superiority in the Majlis; with a greater say there, they intensified their opposition to the various reforms he had initiated. “Khatami’s victory ushered great hope for progress toward democratisation and reform of the rigid political system. This hope has been largely dashed as the conservative supporters of the Islamic Republic have prevented meaningful political reform…[t]he forces of opposition to Khatami are made up of a disparate but powerful set of institutions and actors with entrenched political, economic and ideological interests. While cognizant of Khatami’s massive electoral victories and popular support, they can find other means of thwarting his reform agenda, through the country’s major institutions” (Kazemi, 2003) Another source of discomfort for Khatami has been the constituency on whose back he rode to power: students. Their earlier support for him dissipated when he tried to implement a major reform, the privatisation of universities. Protests by student bodies at this proposal spilled onto the streets, in the form of massive demonstrations against the president as well as the clergy, on two occasions: in July 1999, and on the fourth anniversary of that event. (“Student Heroes Take on,” 2003, p. 23)

In another important round of this confrontation, in February 2004, the conservatives gained the upper hand, disqualifying 2,300 candidates belonging to the reformist camp from the general elections that followed. With the conservatives gaining a comfortable majority in those elections, the process of democratisation suffered a major setback, the presidency at that time being the only reformist position left in the government. (Deccan Herald, 23rd Feb. 2004, p. 8) In the words of US president Bush, “[s]uch measures undermine the rule of law and are clear attempts to deny the Iranian people’s desire to freely choose their leaders.” (The Washington Times, 25th Feb. 2004, p. A15) Yet another major setback to democratisation came as recently as May 22, 2005, with barely a month to go before the presidential election slated for June 17, 2005. The Council of Guardians barred the reformist camp’s candidate for president, Mostafa Moin, from standing in the election. In the same breath, it disqualified all 89 women candidates, saying women are unfit to lead the country. Even as the reformists cried foul, saying the move amounted to a coup d’etat and undermined the spirit of the presidential election in that it would virtually amount to having an appointed president, one silver lining for the reformist camp is that of the six candidates allowed to contest, out of the 1,014 who threw their hats in the ring, one is Hashemi Rafsanjani himself. The other consolation is that they retain control over the Interior Ministry. (The Hindu, 24th May 2005, p. 10)

6. Human rights in Iran:

Despite the avowed aim of the reformists in Iran to bring about democracy and respect for human rights, reports of abuses remain routine: amputations and floggings are commonplace, and even pregnant women and children have been executed. (The Washington Post, 5th January 2005, p. A12)

If the reformists and the conservatives are united over one issue, it is their antipathy to outside scrutiny of human rights in the country. They are unanimous and vehement in their opinion that America seeks to use international human rights organisations to criticise Iran's human rights record. They believe that the US is trying to establish its hegemony by interfering in the internal affairs of strategically important countries such as Iran, and they accuse the Americans of being selective in their criticism of human rights violations in different countries. (Karabell, 2000, p. 212) The Iranian government allowed the Red Cross and the UN to inspect the country's human rights situation in 1990, for the first time in its history. (Kamminga, 1992, p. 99) The Red Cross and the UN reported that 113,000 women had been arrested in Teheran alone, either for improperly wearing their headdress or for moral corruption; the UN also reported an increase in executions, suppression of minorities and the press, and summary executions of anti-government demonstrators. (Mohaddessin, 1993, p. 142) The government reacted very angrily when America accused it of expelling the members of the Red Cross on grounds of complicity with America, and it came down heavily on the Human Rights Commission envoy. When the topic was revived in 1996, an editorial in the Teheran Times, reflecting the general opinion in the country, said:

“Criteria for human rights are respected by everyone; however, any judgement on the situation of human rights in a country should be harmonious with the nation’s culture, religion and traditions. The special envoy should not surrender to direct and indirect pressures from the United States and other Western powers, whose aims are to use human rights as a leverage against Iran…” (Karabell, 2000, pp. 212, 213) Arguments and counter-arguments between human rights organisations and the government continue with regularity.

The confrontation between the conservatives and reformists in the Majlis has also contributed to violations of human rights. Khatami’s reform of the clergy was based on the idea of undermining the six-member ‘Council of Guardians’, a powerful clerical body within the ruling elite, by exposing its corruption. This earned him the scorn of those in power: the Council hit back by hounding his aides, who were seen as moderates. Hojjat-al-Islam Mohsin Kadivar, a well-known liberal writer, Gholam-Hussein Karbaschi, the then mayor of Teheran, and Abdollah Nouri, the former interior minister, were among those in the reformist camp whom the conservative clerics persecuted. The leftist, pro-Khatami newspaper Salam suffered a similar fate and was forced to close down. This brought students onto the streets in support of Khatami on July 9, 1999. To quell the protests, the police opened fire; Khatami thus unwittingly ended up antagonising the very constituency that had taken to the streets to support him. (Sardar, 1999)

Amnesty International, in its report on human rights violations in Iran, came out with some scathing observations, which it attributes to the feud between the reformists and the conservatives. Its summary reads thus: “Scores of political prisoners, including prisoners of conscience, continued to serve sentences imposed in previous years following unfair trials. Scores more were arrested in 2003, often arbitrarily and many following student demonstrations. At least a dozen political prisoners arrested during the year were detained without charge, trial or regular access to their families and lawyers. Judicial authorities curtailed freedoms of expression, opinion and association, including of ethnic minorities; scores of publications were closed, Internet sites were filtered and journalists were imprisoned. At least one detainee died in custody, reportedly after being beaten. During the year the pattern of harassment of political prisoners’ family members re-emerged. At least 108 executions were carried out, including of long-term political prisoners and frequently in public. At least four prisoners were sentenced to death by stoning while at least 197 people were sentenced to be flogged and 11 were sentenced to amputation of fingers and limbs. The true numbers may have been considerably higher.” (Amnesty International, Report 2004)

A look at the field of human rights in Iran would be incomplete without a mention of the efforts of the Nobel Peace laureate, Shirin Ebadi. Her efforts over the past three decades have been focussed primarily on improving the human rights of women and children. Inspired to work for the improvement of human rights in her country after the Revolution demoted her from her position as the country's first woman judge, she believes that guaranteeing human rights in an Islamic society is not at all impossible. The two are never incompatible, she feels; the important question is not Islamic jurisprudence, the Shariat, in itself, but its interpretation. Among her major accomplishments are the victories she has secured in reforming family law, the legal age at which girls can marry, and the rights of illegitimate children. Another significant victory of hers has been in pressuring the government to reveal the identities of the student demonstrators killed in the police violence of July 9, 1999. (Lancaster, 2003)

7.      Conclusion:

The road to democracy and human rights in Iran will remain bumpy so long as the tussle for supremacy between the conservatives and the moderates continues within the Majlis. Seen overall, the pace of change towards democracy has been rather slow.

This is perhaps understandable in an ancient country in which, until recently, authoritarianism was so pervasive that most of the country's resources were held by a thousand or so families. (Lytle, 1987, p. 1) Another major reason democracy takes longer than expected to gain ground in feudalistic societies such as Iran is that, by its very nature, it cannot be planted violently in the system in the way the Revolution of 1979 was. If it were to supplant the existing system by coercion, it would have to adopt undemocratic means, defeating its very nature and ending up an oxymoron.

Seen against the country's difficult path to democratisation, and despite the slowness with which democratisation has taken root, there is still considerable scope for optimism, and this observation by Momayesi (2000) best sums up the situation: “It is perhaps appropriate to view the current situation as an ongoing, step-by-step struggle and conflict over reform, rather than simply a stagnation under the grip of vested conservative clerical interests. It is evident that Iran shows some signs of movement toward a stable constitutional definition of governmental powers and processes. It seems more apt to see the glass of freedom in Iran as half full rather than half empty… [w]e must think in terms of a long march rather than a simple transition to democracy. Democracy and human rights must be adapted to suit countries with a distinctive culture and experiences, rather than simply being transplanted from existing democracies, East or West. The diversity and the range of democratization alongside persistent authoritarianism sometimes gets lost in the selective media coverage of Islamic Iran. But new freedoms pose difficult challenges to the most capable of leaders everywhere.” (Momayesi, 2000, p. 41) Thus, “…democracy, an element external or internal to Islam, was originally planted in the foundations of Islamism and is emerging, although extremely slowly, as a far more potent element of the Iranian revolution than it had been.” (Usman, 2002)

Having said this, the picture for human rights may not be as rosy: the crackdown on human rights is a major setback for the government, negating as it does important moves to attract foreign investment that the country can ill afford to forgo. For instance, prior to the conservatives' moves of February 2004, some leading companies, such as the French car giant Renault, the Turkish communications giant Turkcell, and some Japanese firms that were to develop the country's oil fields at an eventual cost of some $2 billion, were in the process of investing heavily in an economy that had been opened up for the first time since the Revolution. Such actions by the government place these investors under pressure to withdraw, since they do not wish to be seen investing in tyrannical regimes; they also throw the government's intentions into doubt and prompt foreign investors to pack their bags. (The Washington Times, 25th Feb. 2004, p. A15)

Unfortunately, it often happens in Iran that the moderates take the blame for the hardened attitudes of the clergy, and their attempts to undo years of reactionary policies are frowned upon. For instance, the Second of Khordad, the reformist front that has been Khatami's most important ally, along with its close allies, has stood for “…economic liberalization and privatization, as well as increased personal freedoms, including those of women, and have criticized the corruption and arbitrary power of the ruling clerics. But the front has been unable to implement policies that would address the country’s high unemployment rate or the high poverty rate (40 percent). Reformers have been unable to improve the lot of most Iranians, either because they have been blocked by conservative clerics or because they do not make bread-and-butter issues their top priority.” (Cole, 2004, p. 7)

A major test of the triumph or defeat of democracy will be the presidential election scheduled for June 17, 2005; its victor will play a decisive role in shaping the democratic process in the country. If the moderates come to power, a sustained effort will be necessary for further democratisation and the furtherance of human rights. But for the process that Rafsanjani and Khatami set in motion to continue, the new reformist president would need considerable freedom to implement reforms; otherwise he, too, would go the way of Khatami, forever fettered by a conservative parliament.

On the other hand, should the conservatives pull off another coup and get one of their own elected as president, that would almost certainly neutralise all the efforts at democratisation and the furtherance of human rights made so far. Whether Iran will emerge as a champion of democracy and human rights, or revert to being the inheritor of a theocratic government brought about by violent revolution, only the upcoming presidential election will tell. If the upper hand the conservatives have so far gained in their tussle with the moderates is any indication, the second scenario has the slightly higher chance of materialising.

Written By Ravindra G Rao

References

Amnesty International, Report 2004. Available: http://web.amnesty.org/report2004/Irn-summary-eng (Accessed 2005, May 25)

Amuzegar, J., 1998, “Khatami’s Iran, One Year Later”, Middle East Policy, Vol. 6, No. 2, pp. 76-94.

Ayalon, A. (Ed.), 1995, Middle East Contemporary Survey: 1993, Vol. 17, Westview Press, Boulder, CO.

Cole, J., 2004, “Iran’s Tainted Elections”, The Nation, Vol. 278, No. 7, March 1. Retrieved May 25, 2005, from Questia database, http://www.questia.com.

Gustafson, C. & Juviler, P. (Eds.), 1999, Competing Claims?, M.E. Sharpe, Armonk, NY.

“Hard-Liners Face Hurdles in New Iran; Despite Poll Win, Options Limited”, The Washington Times (Washington, USA), February 25, 2004, p. A15.

“Conservative ‘coup’”, Deccan Herald (Bangalore, India), February 23, 2004, p. 8.

“Guardian Council’s move a coup d’etat: reformers”, The Hindu (Bangalore, India), May 24, 2005, p. 10.

“Risks of Appeasing Iran’s Mullahs”, The Washington Times (Washington, USA), January 5, 2005, p. A12. Retrieved May 25, 2005, from Questia database, http://www.questia.com.

Kamminga, M. T., 1992, Inter-State Accountability for Violations of Human Rights, University of Pennsylvania Press, Philadelphia.

Karabell, Z., 2000, “Iran and Human Rights”, in Forsythe, D. P. (Ed.), Human Rights and Comparative Foreign Policy, pp. 206-221, United Nations University Press, New York.

Kazemi, F., 2003, “The Precarious Revolution: Unchanging Institutions and the Fate of Reform in Iran”, Journal of International Affairs, Vol. 57, No. 1, p. 81+. Retrieved May 25, 2005, from Questia database, http://www.questia.com.

Lancaster, P., 2003, “A Worthy Winner: The News That Iran’s Shirin Ebadi Was the Nobel Peace Prize Winner Came as a Surprise to Many, Not Least the Peace Laureate Herself”, The Middle East, November 2003, p. 32+. Retrieved May 25, 2005, from Questia database, http://www.questia.com.

Lytle, M. H., 1987, The Origins of the Iranian-American Alliance, 1941-1953, Holmes & Meier, New York.

Mohaddessin, M., 1993, Islamic Fundamentalism: The New Global Threat, Seven Locks Press, Washington, DC.

Momayesi, N., 2000, “Iran’s Struggle for Democracy”, International Journal on World Peace, Vol. 17, No. 4, p. 41. Retrieved May 25, 2005, from Questia database, http://www.questia.com.

Sardar, Z., 1999, “Iranians Hold a Dress Rehearsal for Revolution”, New Statesman, July 26, Vol. 128, p. 12+. Retrieved May 25, 2005, from Questia database, http://www.questia.com.

Seliktar, O., 2000, Failing the Crystal Ball Test: The Carter Administration and the Fundamentalist Revolution in Iran, Praeger Publishers, Westport, CT.

“Student Heroes Take on Mullahs; the Pro-Democracy Movement in Iran Continues to Gather Momentum despite the Ruthless Tactics Employed by the Ruling Islamic Theocracy to Hold on to Power”, 2003, Insight on the News, No. 23, July 22. Retrieved May 25, 2005, from Questia database, http://www.questia.com.

Usman, J., 2002, “The Evolution of Iranian Islamism from the Revolution through the Contemporary Reformers”, Vanderbilt Journal of Transnational Law, Vol. 35, No. 5, p. 1679+. Retrieved May 25, 2005, from Questia database, http://www.questia.com.

Wright, R., 1996, “Dateline Tehran: A Revolution Implodes”, Foreign Policy, Summer, p. 161+. Retrieved May 25, 2005, from Questia database, http://www.questia.com.

Yasin, T., 2002, “Knocked off Axis? Iranian Reform Challenged”, Harvard International Review, Vol. 24, No. 2, p. 12+. Retrieved May 25, 2005, from Questia database, http://www.questia.com.


THE HIROSHIMA AND NAGASAKI ATOM BOMBS

Posted by Admin on January 30, 2011

Atomic bombing of Nagasaki on August 9, 1945.

“Why did America use the atomic bombs on Japan at the end of the second world war?”

Table of contents:

Part I: Introduction:

Background to the events leading to the bombing of the two cities;

Part II: Political factors behind the event;

Part III: The personality of Truman and his perception of the nature of the bombs as a factor;

Part IV: Other aspects of the bombing;

Part V: Conclusion.

________________________________________________________________________

Part I:

Introduction:

Background to the events leading to the bombing of the two cities:

The American decision to use two atomic bombs on the Japanese cities of Hiroshima and Nagasaki at the end of World War II has come in for feverish debate in the years since. It is one of the best-documented events in history and has, at the same time, provoked lasting, emotionally heated reaction. Almost everyone with even a fleeting interest in World War II seems to have a strong opinion on this American action. (Harbour, 1999, p. 68) To state that the Americans bombed the Japanese because the latter were their rivals in the war is to speak simplistically of an issue that was the product of complex factors. The dropping of the bombs on the two cities was the climax of the great rivalry the two countries had developed against each other over many years; to understand the motives behind America's actions, one therefore needs to look at how this rivalry between two distant countries developed to the point where its culmination was the bombing of the two cities.

The Japanese and Americans had been pitted against each other in the Pacific many years before World War II began. Some historians date the crystallisation of US-Japanese rivalry to 1931, when the Japanese occupied Manchuria in China. The Americans considered this an audacious attack on their interests in Asia. 1931 not only marked a nadir in relations between America and Japan; the year was also extremely significant for Japan's administration, for this was when the radical, militant elements in the Japanese administration successfully led what has been termed a coup, by which they ‘overthrew’ the moderate elements in the royal government and set the country on the long road towards the kind of fascism Europe was falling prey to. (Morris and Heath, 1963, pp. 2, 3 and 20) This Japanese act was the outcome of an ongoing rivalry dating back to an earlier period, when Japan embarked on an ambitious programme of industrialisation. A strong animosity had developed in America against Japan from the time she began growing in strength, having realised that the way to prosperity lay in industrialisation and having worked to make herself a strong industrial country. The rapid pace and force of Japanese industrialisation dated from her first contact with the western world, which, ironically, began with the US itself (Wainstock, p. 1), the country that had contributed more than any other to Japan's industrial strength but could not tolerate her expansionist designs later. (Levine, 1995, p. 1) In an era of aggrandisement leading up to the war, Japan, since she did not have the resources to match her rapid industrialisation, committed acts of aggression against several countries of South East Asia. Sensing that her food supplies could easily be cut off by an enemy, Japan built a strong navy. But even so, her trade routes were unsafe. To remedy this, she intensified her policy of annexing mainland countries and strategically important islands in the Pacific, some of which were equally important, economically or strategically, to an America that was seeking to establish its influence in the Pacific. In this climate of growing hostility, one by one, several territories fell to the Japanese sword, the most important being the Chinese mainland in 1937, following, of course, the annexation of Manchuria. (Wainstock, pp. 1-2) The main reason for Japan's annexation of China was to undo the Revolution there, which she viewed as a possible threat to her dynastic rule. (Levine, 1995, p. 1) The fall of China intensified the American perception of a rapidly expanding Japan as a threat. Another milestone in the build-up of the rivalry was Japan's decision in 1940 to join the Axis Alliance, led by Europe's most brutal fascist regimes, those of Hitler and Mussolini. (Conroy & Wray, 1990, p. 73) The bombing of Pearl Harbour, an American base, was the last straw. It jolted America out of a self-imposed isolation born of the feeling that it was a secure, unassailable fortress. (Hein & Selden, 1997, p. 69) America, along with Britain and the Netherlands, had blockaded Japan's oil supplies; in order to obtain vital fuel, Japan annexed large parts of the Pacific in quick succession: Hong Kong, the Philippines, Singapore, Burma, the Dutch East Indies, French Indochina, (Hane, 1992, pp. 316 & 426) Guam, and Wake Island. (Wainstock, p. 2) Even after the attack on Pearl Harbour, America was not able to dent the superior Japanese navy. However, a decisive victory in the Battle of Midway, in June 1942, gave it the advantage. This campaign was crucial in halting the Japanese advance, which, left unchecked, would have given Japan access to territories as far away as India, Australia and Hawaii. Holding on tenaciously, with superior intelligence, the Americans pulled off a famous victory, which boosted their morale. (United States Strategic Bombing Survey, 1946, p. 58) The field was now left open for a climactic confrontation, which the Americans brought about in the closing stages of the war: the bombing of Hiroshima and Nagasaki. If the factors listed above constituted the background to the rivalry between the two countries, a combination of factors, mostly political, precipitated the event itself. Some of these are examined in this research paper.

Since the purview of this paper is merely to examine the factors that led to the bombing of the two cities, no attempt is made to examine the moral aspect of the issue or to sit in judgment on the incident. However unspeakable the suffering the bombs caused to the people who bore their brunt, and however deep the mark they left on the national psyche of the country and its civilisation, this paper avoids these areas of discussion, since they clearly fall outside its scope. Some controversies related to the issue are nevertheless taken up, for these are intertwined with the incident. While this paper classifies the reasons for the attack, it should be mentioned that a watertight compartmentalisation may not be possible, and some overlap may occur. It should also be mentioned that in the section dealing with the controversies surrounding this action, since entire arguments of historians are taken up for discussion, very long references to individual authors appear.

Part II:

Political factors behind the event

Surely, for an action of such great magnitude and far-reaching consequences, political factors were the most important consideration for president Truman. He saw in this situation an opportunity to strike a double blow: to silence Japan's recalcitrance, and to fire a salvo at the Russian leader, Josef Stalin, with whom his country had been forced into an alliance by the exigency of the hour. “The bomb was dropped primarily for its effect not on Japan but on the Soviet Union. One, to force a Japanese surrender before the USSR came into the Far Eastern war, and two, to show under war conditions the power of the bomb. Only in this way could a policy of intimidation [of the Soviet Union] be successful…[t]he United States dropped the bomb to end the war against Japan and thereby stop the Russians in Asia, and to give them sober pause in Eastern Europe.” (Kagan, 1995)

A crucial meeting, which ultimately decided the course of this action, was called by Truman and held in the White House on June 18, as the Okinawa campaign was drawing to a close. Its purpose was to seek from his Joint Chiefs of Staff (JCS) their opinion on the quickest and most effective means of ending the war with Japan. Those who attended were the president's Chief of Staff, Admiral William Leahy; Navy Chief of Staff, Ernest King; Army Chief of Staff, General George Marshall; Secretary of War, Henry Stimson; Assistant Secretary of War, John McCloy; Secretary of the Navy, James Forrestal; and Ira Eaker, representing General Arnold for the Army Air Forces. The opinion that emerged from this meeting was that the best way forward was to invade Japan through its southernmost island, Kyushu, with a probable invasion date of November 1. Marshall suggested that the next phase of the invasion would be an attack, at a later date, on Honshu, the island on which Tokyo stands. Speaking on behalf of the Joint Chiefs, and reading from a paper they had prepared, Marshall stated: ‘The Kyushu operation is essential to a strategy of strangulation and appears to be the least costly worth-while operation following Okinawa’. Although they were silent on two important clarifications Truman had sought, namely how long the operation would last and what the expected number of American casualties would be, all who attended the meeting were unanimous that invading Japan by this route was the best option before them. The Chiefs arrived at a figure of 31,000 casualties for the first phase of the invasion, covering the first 30 days of the campaign; this was arrived at by equating the casualties in this campaign with those in Luzon, where the same number had been killed, wounded or gone missing. A figure of 46,000 dead and another 174,000 wounded was estimated if the invasion went into its second phase. Truman's most important consideration was the number of American casualties, which he wanted kept to a minimum, and there was wide agreement on these estimates. The figures found reinforcement when, prior to the meeting, Marshall requested the expected number of American casualties from General Douglas MacArthur, commander of the American Army in the Pacific; the General projected a figure very close to these estimates: 105,000 battle and an additional 12,500 non-battle casualties. (Walker, 1997, pp. 36-39) If this meeting settled on a land invasion, one factor hurried the decision to use the bombs specifically: Truman saw decrypted Japanese diplomatic communications, codenamed MAGIC, which revealed that the Japanese were looking towards negotiations rather than peace, and in this were turning to the Soviet Union, not America. This turned out to be one of the reasons he steeled his resolve to drop the bomb on the Japanese. Under the codename Downfall, the Americans had been making heavy preparations to mount an amphibious invasion at the time Truman went to Potsdam. Just after he set sail, on July 16, he learned of the successful test of the atom bomb, carried out by American scientists. The timing of the completion of the bomb coincided with Truman's meeting with the Russian and British heavyweights. His main aim in going to Potsdam was to get an assurance from Stalin that the Russians would not enter the war before the Americans carried out their operation. It was clear, then, that he had two crucial elements at the back of his mind: the operability of the bomb and its potential to curtail American losses severely, and, for its successful use, the guarantee that Russia would not enter the war. (Allen, Polmar & Bernstein, 1995)

Depriving Russia of a role in Japan was surely a paramount reason for the urgency with which the bombs were dropped; the Soviets were scheduled to enter the war on August 8. An Asia in which the Soviets would play a decisive role was a prospect the Truman administration had to prevent at all costs, and nothing gave it a better chance than the timing of the bomb's development and Russia's scheduled date of entry into the war against Japan. By hurrying the bomb, the Truman administration made sure the Japanese surrendered to the Americans alone, as argued by the British physicist P.M.S. Blackett, who, in his book Fear, War and the Bomb, contended that ‘the dropping of the atomic bombs was not so much the last military act of the Second World War as the first major operation of the cold diplomatic war with Russia now in progress’. (Clarfield & Wiecek, 1984, p. 58) The Russia factor was at work all through. Policy-makers in the US were clear from the beginning that America was to act alone, and that Russia was to be excluded from the bomb project. One of the most strident critics of the use of the bomb, Leo Szilard, feared that its use against civilians would be catastrophic. He went on to suggest that the Americans and Russians undertake a joint effort to develop the bomb; his reasoning was that by openly sharing this knowledge with Russian scientists, the arms race that was otherwise certain to follow could be prevented. In trying to interest the American political establishment in his idea, he sought a meeting with president Roosevelt; however, he was referred to Secretary of State James Byrnes, who brusquely squelched the idea and prevented the meeting. Szilard even drew Churchill's fury for having suggested it. (Szasz, 1984, p. 146)

The decision to bomb Japan was taken in order to force a total and unconditional surrender, towards which the Truman administration wanted to make sure no effort was spared. Quoting Brower (1982), Lee (1998) states: “The JCS understood that Japan’s defeat would result from the increasing application of military, psychological and political pressures upon the island nation. Their strategy clearly reflected that understanding. The JCS gradually tightened the blockade, bombed Japan relentlessly with conventional and atomic weapons, contributed to efforts to induce an early Japanese capitulation through a clarification of the unconditional surrender formula, and strongly urged two presidents to secure early Soviet entry into the war” (Lee, 1998, p. 109).

Another perceptive line of reasoning is that the bombs were essentially the culmination of the American isolationism that had been building up since the end of World War I. If, as argued by Glynn (1992), America, whose political and economic power was far ahead of that of any country in Europe, had shown sagacity and generosity in bailing France out financially and in checking German expansionist designs, it would have effectively put a brake on the growth of the deep animosities these two frontline European nations developed towards each other. Having failed to do so, mainly because of its isolationism, America sought to maintain its position of eminence in world affairs by spearheading the revolution in physics that was gathering pace in Europe. Having triggered the race for weapons development, it then had to show that it was ahead of the rest, and this it could accomplish only by demonstrating its power to the rest of the world. The perfect excuse was provided by Japan's defiance. It is true that the war made the scientists of each country work on the bomb faster than their counterparts elsewhere; had the war not taken place, it is possible that the invention itself would not have come about. This writer extends the argument to suggest that once America had failed to show pragmatism in dealing with Europe after World War I, it had, when the time came, to showcase its newly acquired might in brute fashion. It had to vindicate the appositeness of its post-World War I policy of isolationism, and no action served to show that better than the decisiveness with which it dropped the bombs on targets that were convenient to it from every perspective. (Glynn, 1992, p. 114)

Truman had taken office at a time when the Soviet Union, with a diametrically opposite ideology, was taking shape as a potential rival to the emerging American dominance in world affairs. Roosevelt had hoped that a conciliatory approach towards that country was the best way to an amicable post-war settlement. Following his death, however, Truman had to rely on his predecessor's advisors in international affairs, an area in which he was vastly untested, and their opinion differed from their late master's. (Clarfield & Wiecek, 1984, p. 82) Thus, opposition to Russia was a philosophy Truman imbibed from the start of his tenure.

Bruce Cumings (1999) proffers another interesting insight into the urgency with which Truman used the newly devised bomb. It has to do with the nature of the political arrangement in the US. There is a certain irony about the position of the president: although he is the foremost decision-maker in the country, he sits on a seat of thorns. On the one hand, he is constrained by the fact that the power to go to war rests with Congress alone; on the other, his position is temporary, and all the power he commands is gone when he loses an election or runs out his term. In the final analysis, he alone is responsible for the decisions he takes. It is a high-pressure office, in which he is the sole decision-making authority and into whose shoes nobody would like to step. Nor does anyone else have the authority or power to take decisions of the gravity he does, in a system marked by liberal doses of daily infighting and squabbling among the different branches, such as the legislature and the judiciary, and within Congress itself. Control of the just-invented bomb came to symbolise the sway the president held over all others in the administration; it was the most concrete symbol of a power that he and nobody else could enjoy. Truman, in particular, vested control of the atomic bomb in the Atomic Energy Commission, which made sure it did not fall into the hands of the top military brass. Thus, possession and sole control of the bomb weighed more in Truman's presidency than in any other's, mainly because it was then that the bomb was invented. It is in this sense that, quoting Sherwin (1975), Cumings goes on to argue “…why the bomb, once readied, was used: not just to intimidate the Russians, but to intimidate everyone from recalcitrant Republican congressmen to isolationists in the broad body politic to Hirohito to Stalin to Churchill to the “total field” in which the American president has held sway since 1941, namely, the world.” (Cumings, 1999, p. 56)

Part III:

The personality of Truman and his perception of the nature of the bombs as a factor:

It is possible to argue that the attitude and decision-making style of the new president, and the peculiarity of the situation in which he was inaugurated into the presidency, constitute another reason the bombs were dropped on the two cities. Almost from the moment the uniqueness of the new weapons was made known to him, the president took an altogether authoritative role; a study of the assertiveness with which the just-sworn-in president acted supports this conclusion. It is difficult to say with certainty whether the same bombings would have taken place had a person other than Truman been at the helm of affairs at that time.

President Truman won greater admiration once he had quit office than when he was in power, and more so after his death. During his years in the White House, he was seen as a president who had inherited a difficult mantle from the formidable Roosevelt. An aura of greatness was created around him, most notably because of the famous words he had painted on his desk: ‘the buck stops here’. However, research carried out a good few decades later has shown that underneath the image of an astute, frank and honest president lay qualities that hardly got the attention they deserved: suspicion, insensitivity and narrow-mindedness. He used his dashing demeanour to disguise an innate insecurity and terrible self-doubt. (Walker, 1997, p. 7) This quality of his was perhaps well known in the White House. A humorous anecdote may not be out of place here: it is said that when Truman rushed to the White House upon hearing the news of president Roosevelt's sudden death, he offered his help to the family, to which Eleanor is reported to have quipped: ‘Is there anything we can do for you? For you are the one in trouble now’! (Boller, 1996, p. 278) It was only natural that the bomb caught his attention like nothing else, and became the idée fixe of his presidency. When he took office from president Roosevelt, he was sure of nothing but the fact that he had to carry on his illustrious predecessor's legacy, which centred on the victory that America, with its coalition partners in the Allied forces, had to seal in the Pacific with minimum loss of American lives. In a sense, it was a difficult legacy, because he knew only that he had to continue with Roosevelt's legacy, not what that legacy was. (Walker, 1997, pp. 7-9) There had been a considerable difference of opinion between the Roosevelts on the purpose of the war. If Franklin had been under the impression that the ultimate aim was winning the war, Eleanor differed, asserting that winning the war was only half the battle won; the First Lady was of the strong view that winning the peace afterwards was more important. This, to her, was the lasting victory, one that would place America on the pedestal of its chosen destiny. (Rozell & Pederson, 1997, p. 209) Is it any wonder, then, that the utterly bewildered president made the following comments at a press conference the day after taking the oath: ‘Boys, if you ever pray, pray for me now. I don’t know whether you fellows ever had a load of hay fall on you, but when they told me yesterday what had happened, I felt like the moon, the stars, and all the planets had fallen on me’? (Jones, 1994, p. 36) Thus it was only natural that the invention that came into existence weeks after he took office, the bomb, turned out to be a weapon in both the literal and the figurative sense: it would help him shake off the Roosevelt hangover, and using it with unequivocal force would firmly establish his position.

On July 7, 1945, a palpably tense and reluctant Truman set sail for Potsdam in Germany for a meeting with ‘Generalissimo’ Josef Stalin and Winston Churchill. The unease was predictable in a greenhorn barely three months into his presidency: “Truman’s anxiety about attending the conference was understandable. He was still a novice at his job and still learning the complexities of the many problems he faced. He was traveling to meet and doubtlessly disagree on important issues with two crusty and renowned leaders who must have seemed larger than life, even to the president of the United States. He was determined to protect American interests but worried about how successful he would be in jousting with his formidable, tenacious, and experienced counterparts.” (Walker, 1997, pp. 7-9 and 53) In the situation he was in, nothing gave him greater strength than the bomb; it was a godsend to a cornered president, one arrow with which he could kill everything: the butterflies in his own stomach, the Rooseveltian noose that hung over his head, and all the political issues discussed earlier.

Thus, once the awesome power of the bomb had been revealed, the president became unshakably firm in his conviction that it had to be used, come what may. The first significant communication he made after learning of the bomb's power was: ‘I am going to make a decision which no man in history has ever had to make…’ He was clear, from the moment he grasped the bomb's potency, that there was no alternative to using it. He had told Byrnes that ‘he had given thought to the problem and, while reluctant to use this weapon, saw no way of avoiding it’. This was also reflected in the address he gave the nation three days after Hiroshima, in which he plainly declared that ‘having found the bomb we used it’. Even in his memoirs, he expressed scant regret for having used it, stating: ‘Let there be no mistake about it. I regarded the bomb as a military weapon and never had any doubt that it should be used’. Another factor that may have influenced Truman to use the bomb against Japan was that he had to take off from where Roosevelt had left off; he had to continue most of the projects and policies Roosevelt had initiated, one of which was the Manhattan Project, whose brief was to develop the bomb. Using it was in line with his resolve to continue Roosevelt's policies. Finally, the sheer, intimidating power of a bomb he knew was not just another weapon gave its possession a special meaning: control over the world's most powerful bomb, which he alone in the world exercised, filled him with pride and ego. It would give him unquestionable might and enhance his already powerful status. (Gaddis, Gordon, May, & Rosenberg, 1999, pp. 16, 17) Two months after the events in Japan, he showed his awareness of the bomb's importance, saying, ‘The discovery of the means of releasing atomic energy began a new era in the history of civilization. The scientific and industrial knowledge on which this discovery rests does not relate merely to another weapon. It may some day prove to be more revolutionary in the development of human society than the invention of the wheel, the use of metals, or the steam or internal-combustion engine.

Never in history has society been confronted with a power so full of potential danger and at the same time so full of promise for the future of man and for the peace of the world. I think I express the faith of the American people when I say that we can use the knowledge we have won not for the devastation of war but for the future welfare of humanity.’ (Koenig, 1956, p. 122) On August 9, 1945, in response to a letter from a prelate protesting that the Americans had ‘indiscriminately’ bombed Hiroshima, Truman is said to have remarked: ‘Nobody is more disturbed over the use of the Atomic bombs than I am but I was greatly disturbed over the unwarranted attack by the Japanese on Pearl Harbor and their murder of our prisoners of war. The only language they seem to understand is the one we have been using to bombard them. When you have to deal with a beast you have to treat him as a beast. It is most regrettable but nevertheless true’ (Cumings, 1999, p. 58) From the time of the attack on Pearl Harbour, the propaganda war had intensified in the US, making the Japanese the ultimate villains in American eyes. Although the Americans inflicted heavy defeats on the Japanese in the campaigns of Iwo Jima and Okinawa, the strong sentiment against the Japanese, a stereotype from which not even the president was immune, may have pushed him to choose Japan as the target for the atomic bombs. This is not surprising, considering that in his private diaries he referred to the Japanese as ‘savages, ruthless, merciless and fanatic’. (Wainstock, p. 121) The anti-Japan feeling was so strong in the US that from the time the bomb was conceived, it was decided that it would be developed to be used, and used against Japan. (Blumenson et al., 1960, p. 496) It was also decided that it should be used on a dual target comprising military installations and civilian targets such as residences close to these installations, and that it should be used without prior warning. (Divine, 1969, p. 315) This was despite vehement pleas not to use it against Japan from none other than one of the chief architects of the bomb, Leo Szilard, who pleaded that the administration refrain from using the deadly bomb because, to him, ‘Japan was essentially defeated’, and ‘it would be wrong to attack its cities with atomic bombs as if atomic bombs were simply another military weapon’. Another strong motivation for Truman in ordering the atomic bombs to be dropped was to vindicate the cost, in money and manpower, of making them. The bombs had been developed at a cost of $2 billion, and it seemed foolish to him at that point not to use them after so much had been spent on a project into which the country's best scientific minds had been poured. He felt answerable to a hostile Congress for a project that had been carried out in great secrecy. The executive had fought with the Congress to get the money, and Truman and his team were afraid of offending the Congress by not using the bombs; the leaders were eager to please the Congress, whose various committees had been demanding that ‘the results had better be worth the $2 billion investment.’ (Wainstock, 1996, pp. 1-2, 37-38 and 121-123) A measure of the relief the success of the bomb brought to the inner political coterie responsible for its development can be discerned from the remark Stimson is believed to have made immediately upon receiving news of the successful trial: ‘Well, I have been responsible for spending two billions of dollars on this atomic venture. Now that it is successful I shall not be sent to prison in Fort Leavenworth.’ The president was overjoyed at the news. Stimson records in his diary that on hearing of the successful explosion, ‘The President was tremendously pepped up by it’, and ‘spoke to me of it again and again, when I saw him. He said it gave him an entirely new feeling of confidence….’ This became clear in the way he conducted himself at the conference the next day, something even Churchill found almost tangible, saying Truman had become more forceful ‘because of this new piece of knowledge’. (Szasz, 1984, pp. 145, 146)

Further, the bomb held such importance in Truman's mind that some historians, such as Alperovitz & Bird (1994), have gone on to suggest that this penchant for the bomb not only motivated Truman to go ahead and bomb Japan, it was also the turning point in the polarisation of the world's superpowers that led to the Cold War. Their logic runs as follows: the potential for conflict between the Americans and the Russians was no doubt in the air even as they went into the war, but what actually put the two powers on the road to rivalry was the bomb, and Truman's grasp of its unprecedented might. This was to serve as the catalyst for sealing the alignment of forces that shaped the world in the lead-up to the Cold War. Even going into Potsdam, Truman had been in two minds about his own ability to pull off a diplomatic coup over Russia; as he confided in his diary, he was jittery about what his meeting with the Generalissimo would achieve. It had always been Roosevelt's policy to contain the armament of Germany, which he believed was crucial to assuring the world that a rearmed Germany would never again threaten it, and to contain the Russians through an alliance of like-minded western powers. However, in Truman's initial days in office, until the atom bomb was tested, the battle lines were only hazy, and there was no clear agreement on the shape a defeated Germany would take after it surrendered. These researchers conclude that if there was one thing that gave direction and thrust to the rivalry that was to crystallise as the Cold War, it was the bomb, and its primacy in the president's mind. (Alperovitz & Bird, 1994)

Part IV:

Other aspects of the bombing:

A study of why America dropped the bombs on the two Japanese cities would be incomplete without reference to the controversies surrounding the issue. In a nutshell, these controversies relate to two grave questions historians have asked in later years: was the bombing of Japan necessary at all to force a surrender, and, if it was, would not one bomb have sufficed?

“Post-war historians have challenged President Harry Truman’s decision to use the atomic bomb to shorten World War II and save American lives. Some claim that the Allies could have ended the war by negotiating with the Japanese; others contend dropping the bombs was patent racism and that atomic bombs never would have been dropped on the Germans.” (Allen, Polmar & Bernstein, 1995)

Historians accuse Truman of not taking all factors into consideration, and of not fully understanding the internal situation in Japan at that time.

After Potsdam, as we have seen, his will to drop the bombs was hastened, on the thought that its use would totally save American lives, as compared to an invasion, bringing the Japanese to their knees. But it is clear that this was an oversight, and an assumption that went wrong –five days after the end of the second bombing on Nagasaki, there was an attempted coup, whose success would have dragged on the battle for many more weeks or months. Even the massive bombings had not diluted Japanese will; a group of senior Japanese army and navy officers were still determined to carry on fighting after staging a coup. They had made preparations for a great showdown with the American forces on the beaches, under the codename ‘Decisive Battle’. That the coup did not succeed and ‘Decisive Battle’ did not materialise is another matter. The point being raised by present-day historians is –assuming that the coup would have been successful, there is no doubt that fighting would have dragged on, and would have resulted in losses of several American lives. The question is, how many American lives would have been sacrificed in the fighting? Given the near depletion of resources at Japan’s command at that point of time, it is possible that not more than a handful of American lives would have been lost. The crucial point is, when the Japanese were fighting on only one resource, their determination, and given the fact that even without the dropping of the bombs, not more than a relatively few American lives would have been lost, was it fair to estimate that the Japanese would have killed a million Americans?  Truman had long been obsessed with one thought more than any other –the prevention of the loss of a million American lives. Critics are agreed on the fact that this figure was a) grossly exaggerated in the first place when secretary Stimson arrived at this figure, and b) this was seized upon relentlessly by Truman to be used every now and then to justify the catastrophic bombings. They are at a loss to understand how Truman could have taken this figure of a quarter to a million potential American deaths as the gospel truth when even the trigger-happy Gen. Douglas McArthur arrived at an estimate, done without any prompting, on a figure that was nowhere near what Truman put forth throughout. What adds substance to the whole issue is that in the first place, Gen. McArthur himself had exaggerated the whole estimate, for some unknown reasons. But what is highly pertinent is that records discovered after the war showed that at that time, McArthur’s staff had released an all-important communication: ‘The strategists at Imperial General Headquarters believed that, if they could succeed in inflicting unacceptable losses on the United States in the Kyushu operation, convince the American people of the huge sacrifices involved in an amphibious invasion of Japan, and make them aware of the determined fighting spirit of the Japanese army and civilian population, they might be able to postpone, if not escape altogether, a crucial battle in the Kanto [Tokyo] area. In this way, they hoped to gain time and grasp an opportunity which would lead to the termination of hostility on more favorable terms than those which unconditional surrender offered.’ Obviously, this too, points to the fact that there was clearly no need to force a total Japanese surrender at that point of time, given the drain they were facing, and more importantly, to use the bombs to force a surrender. 
Moreover, the American Sixth Army in Luzon, Philippines, had estimated that the Kyushu invasion would be of the same gravity as the earlier invasion of Okinawa. This estimate was credible by any standard, for it was made not by ideology-driven politicians but by professional soldiers and doctors who had actually been at the scene of the fighting. Using the Okinawa invasion as the benchmark and the customary planning ratio of 1:4, they estimated that the Kyushu invasion would claim no more than four times the casualties the Okinawa episode had claimed. Even assuming, in the face of heavy, sustained Japanese kamikaze raids, that the losses had been of the order of ten times those of the Okinawa invasion, unlikely as that was, total American casualties would have amounted to no more than 147,500 dead and some 343,000 wounded. Had the invasion gone ahead and been met by the Japanese ‘Decisive Battle’, the losses on both sides would have been terrible. If the bomb gave the American president an alternative to an invasion, it gave the Japanese Emperor an opportunity to end the war. (Allen, Polmar & Bernstein, 1995)
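As a rough back-of-envelope check of the arithmetic implied by these figures (the ten-fold totals are the essay’s own; the Okinawa baseline below is simply inferred by dividing them back out, not taken from the cited sources), the extrapolation works out as follows:

\[
\text{Okinawa baseline (implied)}:\quad \frac{147{,}500}{10} = 14{,}750\ \text{dead}, \qquad \frac{343{,}000}{10} = 34{,}300\ \text{wounded}
\]
\[
\text{Kyushu at the 1:4 planning ratio}:\quad 4 \times 14{,}750 = 59{,}000\ \text{dead}, \qquad 4 \times 34{,}300 = 137{,}200\ \text{wounded}
\]

Even the ten-fold worst case quoted above, in other words, sits well below the half-million to one-million figure the administration cited.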

Historians have also produced evidence that, once the war ended, Truman exaggerated the potential American casualties of an invasion of Japan in order to justify his use of the atomic bomb. They quote a letter he wrote in 1948, in which he insisted, as he had done all along, that he decided to use the atomic bomb ‘to save 250,000 boys from the United States.’ He remained convinced that by carrying out these attacks he had achieved his aim, and in his memoirs, written in 1955 after his presidency had ended, Truman still insisted that ‘half-a-million American lives were saved by the bomb.’ However, his critics now point out that the Joint War Plans Committee had, on June 15, given a figure of only about 40,000 American deaths if the planned invasion of the home islands took place. To calculate the number of American casualties on the mainland, Admiral Leahy took the earlier battle of Okinawa as his basis, in which American casualties were roughly 35 percent of the total force of 120,000. Thus, even if it were agreed that Okinawa was the proper basis for the number of casualties the Americans would sustain in a mainland invasion, and updating Leahy’s figure to a more accurate 29 percent, the casualty figure for the entire campaign should not have exceeded a maximum of 200,000 deaths and 725,000 injuries. (Loebs, 1995)

 

The second major controversy relates to this point: if, for the sake of argument, it is assumed that the bombing was absolutely necessary, then the bomb on Hiroshima would have done the job, and the second one on Nagasaki was totally redundant. The Nagasaki bomb was a non-factor in forcing the Japanese Emperor to order surrender. There were protracted arguments and vacillations within the Japanese leadership about the decision to surrender following the Hiroshima bombing on August 6. It has to be admitted, of course, that Truman was not aware of these wranglings; but the reasons given by the American decision makers for using the second bomb were unconvincing and specious. In his memoirs, Truman explained his reason for dropping the second bomb: ‘On August 9, the second atom bomb was dropped, this time on Nagasaki. We gave the Japanese three days in which to make up their minds to surrender and the bombing would have been held off another two days had weather permitted.’ But the truth is that Truman allowed three days between the first bombing and the second not because he wanted to give the Japanese time to decide, but because the second bomb became ready only on August 9. Truman had ordered his military on July 25 to use ‘additional bombs as soon as they are made available by the project staff.’ Thus, had the second bomb been ready on August 7, or even on August 6 itself, it would have been dropped then. This is buttressed by General Groves, director of the Manhattan Project, which developed these weapons, who said the second bomb had to ‘follow the first one quickly so that the Japanese would not have time to recover their balance.’ Truman knew the extent of the destruction in full detail, and had the time to stop the Nagasaki bombing. “The Nagasaki story shows that America’s leaders, understandably obsessed with ending the war quickly, failed to use the second atomic bomb rationally or tactically. No high-level discussion was held to consider the second bomb. Nobody challenged or reviewed the informal, unofficial, and premature judgment of General Groves, reached in December 1944, to drop two atomic bombs.” (Loebs, 1995)

Another argument put forward at the time of the bombing was that it helped check the spread of nuclear weapons: this line of thinking holds that the bomb’s use contributed to preventing the subsequent use of nuclear weapons and helped maintain a balance of power in later years. This argument, of course, is defeated by a simple question: if merely using the bomb was all that this argument required, why was it necessary to drop the bombs on thickly populated areas rather than on deserted ones? (Hein & Selden, 1997, p. 58)

There is another pressing argument put forward by some critics: Truman dropped the bomb for diplomatic, not military, reasons. Truman’s critics quote his remark that ‘the bomb might well put us in a position to dictate our own terms’ after the war (an obvious reference to the Soviet Union), and Secretary Byrnes’s equally strong statement that ‘our possessing and demonstrating the bomb would make Russia more manageable in Europe.’ But in hindsight, they ask, is it not possible to argue that Stalin would have been just as convinced of, and apprehensive about, American might even if the bombs had been dropped on a desert or some other kind of barren land? Was it necessary to bomb thickly populated, flourishing cities if the only intention was to instil awe in Stalin? Moreover, if that was the sole purpose, then quite apart from the question of whether dropping the bombs on civilian areas was justified at all, would not one bomb have sent the same message as two? (Loebs, 1995)

 

In July, two major breakthroughs were achieved by the code-breaking operations entitled MAGIC and ULTRA. MAGIC intercepts read by Truman and his team showed that while the Japanese military remained unrelenting, the moderate elements of the elite in the administration were willing to negotiate peace terms with the Allies, but were afraid to discuss this intention with the hardline military, fearing reprisals. Traditionally, the military had controlled most of Japan’s decision making, and these elements were seen taking very firm action against anyone even willing to talk of peace. It is clear that the Japanese had started making moves to ask the Soviets to mediate in a peace effort with the Americans and the British and bring an end to the war in the Pacific. MAGIC had revealed that the Emperor planned to depute Prince Konoye Fumimaro to Moscow with his message. (Newman, 1995, p. 13)

One of the most scholarly, yet controversial, works on Truman’s decision to use the bomb has come from Gar Alperovitz. His highly provocative analysis may have drawn fierce objections from a school of thought not inclined to hear his viewpoint, but it is necessary reading if we are to develop our thinking on the issue of the great American blunder in dropping the bombs:

Quoting the US Strategic Bombing Survey (USSBS), he notes that it came out with its findings as early as 1946, presented in its report entitled ‘Japan’s struggle to end the war’. The summary of the report read as follows: ‘certainly prior to 31 December 1945, and in all probability prior to 1 November 1945, Japan would have surrendered even if the atomic bombs had not been dropped, even if Russia had not entered the war, and even if no invasion had been planned or contemplated’. Quoting another report, from the secretive War Department, carried out in April 1946 but made public only in 1989, he furnishes its exact words: ‘the Japanese leaders had decided to surrender and were merely looking for sufficient pretext to convince the die-hard Army Group that Japan had lost the war and must capitulate to the Allies… ‘. Further, had the bombs not been dropped, Russia would have entered the war, according to the set plan, in early August, the time when the bombs were actually dropped. The report says that had the Russians actually entered the war in early August, that would have given the Japanese just the pretext they were looking for in order to surrender. The moderate elements in the Japanese administration were at pains to convince the militant elements to agree to a surrender; the Russian entry would have given the moderates the solid reason they needed to make the hardliners see the writing on the wall and, in all probability, relent. Hence, according to the report, had the Russians entered in early August, not only would the need for the bombs have been obviated, there would also have been no need for the planned invasion of Kyushu in November, which in Truman’s assessment would have led to the loss of all those American lives, or for the subsequent attack on Tokyo planned for April 1946. (Alperovitz, 1995)

Part V:

Conclusion:

It is clear that, as the American administration weighed a decision of this magnitude, Japan happened to provide it with just the conditions that gave it an excuse to drop the bomb; the decision went far beyond the ostensible purpose of ending the war and forcing a Japanese surrender.

A look at the actions of the Truman administration, right from the start of the decision-making process, suggests that it overlooked every consideration that weighed against the action:

Even at the all-important June 18 meeting, the central point that arose was the probable number of American casualties. There is no proof whatsoever that any figure even remotely resembling the half million to one million that the president and his advisors kept brandishing throughout was mentioned during this most important meeting about the invasion. (Walker, 1997, p. 39)

Another major warning against the use of the bomb came in June 1945, when it had become clear that the atomic bomb would soon be used on an enemy. It came from an eminent, highly concerned group of scientists, aghast at the prospect of the use of this weapon. Presenting its report to the War Department, the group wrote these sage words: ‘In the past, scientists could disclaim direct responsibility for the use to which mankind had put their disinterested discoveries. We now feel compelled to take a more active stand because the success which we have achieved in the development of nuclear power is fraught with infinitely greater dangers than were all the inventions of the past. All of us, familiar with the present state of nucleonics, live with the vision before our eyes of sudden destruction visited on our own country, of a Pearl Harbor disaster repeated in thousand-fold magnification in every one of our major cities.’ Asserting that the bomb would give only short-term, and purely political, benefits at the cost of long-term detriment, the group went on to add that if there was one country especially vulnerable to such attacks in future, from any country that copied this technology, it was America, with its industrial complexes and civilian areas concentrated in close proximity to each other. It warned that Russia, with its deep mistrust of America, could develop an even more unimaginably powerful device that it could easily use against the US. In view of the fact that taking the first step towards destruction would endanger not only American security in future but also that of the entire world, by precipitating a contest in which each participant would strive to become more destructive than the other, the group suggested that the technology be demonstrated in the full glare of the world, under the auspices of a world body (the United Nations was then in the process of being formed). (Williams, 1956, pp. 952, 953)

Gar Alperovitz has also come out with the startling argument that President Truman was aware that several alternatives to the bomb existed. Just before Potsdam, on July 12, one of the several important cables that the US decoded stated in explicit terms that Emperor Hirohito was seriously contemplating intervening personally to offer surrender. When Truman was informed of this, all he did was dismiss the cable as just another of the Emperor’s many communications, saying it was ‘…the Jap Emperor asking for peace’. This was believed to be the ideal time for a surrender; the only sticking point was what formula was to be worked out vis-à-vis the Emperor, for treating the Emperor as a war criminal would have been the ultimate insult to a nation that considered him god-incarnate, and was sure to provoke rebellion of the highest degree. All along, from the time of Germany’s surrender on May 8, the prospect of a Japanese surrender had been on the cards: the American insistence that Russia enter the war around August 8 was meant to divert a major Japanese force to the fighting in Manchuria, giving the Americans the leverage to take on the Japanese army on the mainland. “By midsummer, however, Japan’s position had deteriorated so much that top U.S. military planners believed the mere shock of a Red Army attack might be sufficient to bring about surrender and thus make an invasion unnecessary.” (Alperovitz, 1995)

Yet, despite all these factors, Truman’s determination to nip Russia’s growing strength in the bud outweighed every other consideration he was expected to take into account. His decision to override the advice of the group of scientists was perhaps understandable, but he also overlooked the counsel of one of the most intimate insiders in his administration, Admiral Leahy, who, reflecting pensively on the American decision, had this to say: “It is my opinion that the use of this barbarous weapon at Hiroshima and Nagasaki was of no material assistance in our war against Japan. The Japanese were already defeated and ready to surrender….” (Boorstein & Boorstein, 1990, p. 47), and ‘in being the first to use it, we … adopted an ethical standard common to the barbarians of the Dark Ages. I was not taught to make war in that fashion, and wars cannot be won by destroying women and children’. (Weisserman, 2004)

Thus, in the end, the decision to drop the bomb, seen in the light of all the facts presented in this paper, seems to have been an impetuous one. Not only did the bombs cause appalling damage and put to use a new technology that took man’s power of destruction to unprecedented heights; this decision of Truman’s, taken in disregard of the advice of the moderate elements in his administration, also fulfilled the dire predictions of the group of scientists and set off an arms race whose implications we are still living with today.

Written By Ravindra G Rao

References

 

 

Allen, T. B., Polmar, N., & Bernstein, B. J. (1995). “Question: Was Truman Right to Drop the Bomb?” Insight on the News, Vol.11, p.18+. Retrieved April 10, 2005, from Questia database, http://www.questia.com.

 

Alperovitz, G. (1995) “Hiroshima: Historians Reassess”. Foreign Policy, p.15+. Retrieved April 10, 2005, from Questia database, http://www.questia.com.

 

Alperovitz, G., & Bird, K. (1994) “The Centrality of the Bomb”. Foreign Policy, p.3+. Retrieved April 10, 2005, from Questia database, http://www.questia.com.

 

Blumenson, M., Coakley, R. W., Conn, S., Fairchild, B., Leighton, R. M., Von Luttichau, C. V., Macdonald, C. B., Mathews, S. T., Matloff, M., Mavrogordato, R. S., Meyer, L. J., Miller, J. J., Morton, L., Pogue, F. C., Ruppenthal, R. G., Smith, R. R., & Ziemke, F. (1960), Command Decisions (K. R. Greenfield, Ed.), Office of the Chief of Military History, Washington, DC.

 

Boller, P. F. (1996), Presidential Anecdotes (Revised ed.), Oxford University Press, New York.

 

Boorstein, E., & Boorstein, R. (1990), Counterrevolution: U.S. Foreign Policy, International Publishers, New York.

 

Clarfield, G. H., & Wiecek, W. M. (1984), Nuclear America: Military and Civilian Nuclear Power in the United States, 1940-1980 (1st ed.), Harper & Row, New York.

 

Conroy, H. & Wray, H. (Eds.) (1990), Pearl Harbor Reexamined: Prologue to the Pacific War, University of Hawaii Press, Honolulu.

 

Cumings, B. (1999), Parallax Visions: Making Sense of American-East Asian Relations at the End of the Century, Duke University Press, Durham, NC.

 

Divine, R. A. (Ed.) (1969), Causes and Consequences of World War II, Quadrangle Books, Chicago.

 

Gaddis, J. L., Gordon, P. H., May, E. R., & Rosenberg, J. (Eds.) (1999), Cold War Statesmen Confront the Bomb: Nuclear Diplomacy since 1945, Oxford University Press, Oxford.

 

Glynn, P. (1992), Closing Pandora’s Box: Arms Races, Arms Control, and the History of the Cold War, Basic Books, New York.

 

Hane, M. (1992), Modern Japan: A Historical Survey (2nd ed.) Westview Press, Boulder, CO.

 

Harbour, F. V. (1999), Thinking about International Ethics: Moral Theory and Cases from American Foreign Policy, Westview Press, Boulder, CO.

 

Hein, L. E. & Selden, M. (Eds.). (1997), Living with the Bomb: American and Japanese Cultural Conflicts in the Nuclear Age, M.E. Sharpe, New York.

 

Morris, I. (Ed.) (1963), Japan 1931-1945: Militarism, Fascism, Japanism?, D. C. Heath, Boston.

 

Jones, C. O. (1994), The Presidency in a Separated System, Brookings Institution, Washington, DC.

 

Kagan, D. (1995), “Why America Dropped the Bomb”. Commentary, Vol.100, p.17+. Retrieved April 10, 2005, from Questia database, http://www.questia.com.

 

Koenig, L. W. (Ed.) (1956), The Truman Administration, Its Principles and Practice, New York University Press, New York.

 

Lee, L. E. (Ed.) (1998), World War II in Asia and the Pacific and the War’s Aftermath, with General Themes: A Handbook of Literature and Research, Greenwood Press, Westport, CT.

 

Levine, A. J. (1995), The Pacific War: Japan Versus the Allies, Praeger Publishers, Westport, CT.

 

Loebs, B. (1995), “Hiroshima & Nagasaki: One Necessary Evil, One Tragic Mistake” Commonweal, Vol.122, p.11+. Retrieved April 10, 2005, from Questia database, http://www.questia.com.

 

Newman, R. P., (1995), Truman and the Hiroshima Cult, Michigan State University Press, East Lansing, MI.

 

United States Strategic Bombing Survey, (1946), The Campaigns of the Pacific War, U.S. Strategic Bombing Survey (Pacific), Washington, DC.

 

Rozell, M. J. & Pederson, W. D., (Eds.) (1997), FDR and the Modern Presidency: Leadership and Legacy, Praeger, Westport, CT.

 

Szasz, F. M., (1984), The Day the Sun Rose Twice: The Story of the Trinity Site Nuclear Explosion, July 16, 1945 (1st ed.), University of New Mexico Press, Albuquerque.

 

Wainstock, D. D, (1996), The Decision to Drop the Atomic Bomb, Praeger, Westport, CT.

 

Walker, J. S., (1997), Prompt and Utter Destruction: Truman and the Use of Atomic Bombs against Japan, University of North Carolina Press, Chapel Hill, NC.

 

Weisserman, Gary, “Truman’s Decision to Drop the Atomic Bomb”. Available: http://www.weisserman.com/myth_papers/2004.01.04.106/default.lasso (Accessed 2005, April 10)

 

Williams, W. A. (Ed.), (1956), The Shaping of American Diplomacy, Rand McNally, Chicago.


US FOREIGN POLICY IN THE MIDDLE EAST SINCE 1991

Posted by Admin on January 30, 2011

“Critically assess the impact of US foreign policy on the Middle East since 1991: how does the post-Cold War global order affect Middle East politics, and how does conflict in the Middle East affect the ‘New World Order’?”

Table of contents:

Part I: Summary;

Part II: Background to and nature of American policy in the Middle East since 1991;

Part III: Impact of American policy in the Middle East;

Part IV:  Conclusion.

————————————————————————————————————

Part I:

Summary:

The Middle East has always been critical to American interests: it is a region in which every country but one, Israel, is autocratic. Israel, the only non-Islamic country in the region, has been the target of constant hostility from most of its neighbours, making this the most volatile region in the world. While American policy in the Cold War years was aimed primarily at using a group of countries led by Israel as a bulwark against communism, the end of the bipolar world saw a radical shift in American policy towards the Middle East. This shift was brought about by the threat America saw to its most vital interest in the region, oil, as a result of the Iraqi invasion of Kuwait; at the same time, with the sudden demise of the hitherto counterbalancing factor, the Soviet Union, the stage was set for a decisive policy. One country in the region had attacked another and had set its sights on America’s most precious interest in the region, at the very moment the latter was being anointed the world’s sole superpower. This presented the occasion for America to spell out its new policy, aimed primarily at the protection of its oil interests. Though spelt out in haste, the guiding principle of the new policy was simple: with oil, and the prevention of its usurpation by another state, as the leitmotif of its Middle East policy, America articulated its doctrine for the region, the ‘New World Order’, an imperious dictum according to which no state has the right to lay claim to what it considers its right to a scarce, exhaustible resource. Since all this happened at the confluence of the end of the Cold War and a potential threat to its interests, the Middle East turned out to be the stage on which America enacted its ‘New World Order’. And since this is the arena in which America spelt out its policy after becoming the sole superpower, it is only natural that the post-Cold War world is profoundly affected by whatever America does in this region. Anything that America considers to be among its interests in the region has a marked bearing on the world. Its supplementary policies, such as the advancement of democracy and the destruction of Weapons of Mass Destruction (WMD), affect the region greatly, as the ongoing example of post-Saddam Hussein Iraq shows. However, in the process of safeguarding that interest, America has embarked on a dangerous endeavour. It is a policy fraught with dangers; for all the might it may use in pursuing it, America has to reckon with local sentiment, which will be a crucial element in shaping its policy. A sound example of the bottlenecks associated with this design is the daily dose of conflict it faces in Iraq. In trying to appropriate the country’s oil resources under the garb of promoting democracy, America may well be treading a potentially hazardous path. This paper argues that the American policy of planting democracy in societies that lack the preconditions and institutional frameworks needed to accept and absorb the system risks a backlash, which could seriously undermine its ‘New World Order’ if other countries start emulating Iraq’s example.

Part II:

Background to and nature of American policy in the Middle East since 1991:

The importance of the Middle East to American foreign policy can hardly be overstated: it is the region that has the greatest say in America’s fuel-driven economy, being the biggest source of American energy supply. It is also the venue of major conflicts, both active and dormant. Conditions in the region, such as an imminently explosive Lebanon, the ever-active struggle for existence in Israel, the resurgence of fundamentalist Islam, and the American perception that it is the epicentre of Islamic militancy, make it highly volatile. (Amirahmadi, 1993, p. 3)

American foreign policy in the Middle East has undergone a dramatic transformation, necessitated by the political, social and economic changes in the region in the years since 1991. The first major test of American foreign policy in the Middle East unfolded as the end of the Cold War was accompanied by the Iraqi invasion of Kuwait. As a result, the focus of American involvement in the Middle East shifted from a fear of interstate aggression, the last instance of which caused the Gulf War, to concerns brought about by issues such as terrorism, the proliferation of Weapons of Mass Destruction (WMD) and social tensions exacerbated by a fall in oil prices. Against the backdrop of these developments, American foreign policy is focussed on advancing its interests in six areas: countering terrorism, countering WMD proliferation, the maintenance of stable oil prices, the support of friendly regimes and efforts at ensuring their stability, ensuring Israel’s security, and the protection and promotion of America’s core values of human rights and democracy. (Bensahel & Byman, 2003, pp. 1 & 2) American post-Cold War security objectives in the region can be summarised as follows: “[t]he interests of the United States in the Persian Gulf region have been very simple and consistent: first, to ensure access by the industrialized world to the vast oil resources of the region; and second, to prevent any hostile power from acquiring political or military control over those resources…[o]ther objectives, such as preserving the stability and independence of the Gulf states or containing the threat of Islamic fundamentalism, were derivative concerns and were implicit in the two grand themes of oil and containment. Preoccupation with the security of Israel (is) a driving factor in U.S. Middle East policy…” (Sick, 1999, p. 277) Israel has provided the pivot of America’s strategic calculus. A militarily strong, democratic Israel situated in the heart of the Middle East, in the midst of hostile Arab neighbours, has served America’s geostrategic interests ever since Israel came into existence. Added to this, the influence of a strong Israeli lobby in the US has created in the American foreign policy establishment a strong commitment to the existence and security of Israel. (Lesch, 1999, p. 354) The pursuit of these objectives came to be called the ‘New World Order’, and took shape when George Bush Sr. was president. He laid out his vision of a ‘New World Order’ against the backdrop of the Gulf War. Simply put, it is the articulation of “…a new world order defined not by the presence of peace and stability but by the fact that there is only one superpower; and that superpower must decide whether or not it is in its national interest to play an activist role in the effort to achieve peace and stability in many parts of the world.” (Zogby, 1993)

The New World Order was spelt out in response to a sudden event, the Iraqi invasion of Kuwait. The Soviet Union had just disintegrated, and just as the American administration was groping for focus on what policy to lay out, the rather unexpected invasion gave it the chance to spell out, with a clarity few had anticipated, a new policy. George Bush Sr. found in this event the perfect occasion to articulate his vision of a world order. “…[T]he American response to Iraq’s invasion of Kuwait was ultimately justified in terms of a vision of world order and of the leading role America would play in the achievement of that order. A grand design that prior to the crisis had remained unarticulated and partially obscured even to its architects was now laid bare.” (Tucker & Hendrickson, 1992, p. 31) Thus, the Gulf War provided the ideal setting for America to “…crystallize positive feelings about a new era into a more palpable vision and approach while advancing its national interests and asserting its global primacy.” (Miller & Yetiv, 2001, p. 56)

Part III:

Impact of American policy in the Middle East:

On the whole, America’s policies towards the Middle East have been less than welcome in the major countries of the region. Since the primary focus of the ‘New World Order’ has been on the procurement of oil and the resolution of the Israel-Palestine problem, the impact of these two aspects is taken up:

A) In relation to oil: At the time of the conception of the ‘New World Order’, while America vowed to set the countries of the Eastern Bloc on the road to democracy, its policy in the Middle East was aimed at establishing its hegemony. (Kuroda, 1994, p. 53) In the aftermath of the September 11 attacks, there has been a growing realisation in the American establishment that the promotion of democracy in the countries in which it plans to enact a policy of ‘twin containment’, Iran and Iraq, is a strategic imperative. Since then, the US administration has moved to work on these areas with added thrust. In pursuing these policies, the brute force that America has exhibited has not gone down well in these countries. (Tucker et al., 2002) If the progress of American policy in Iraq, the prime example of American engagement and the case in which America has invested considerable resources, is any indication, the picture is far from pleasing: in the area of WMD, American efforts have come to naught, for the administration has simply failed to find any, or to implicate Saddam Hussein in the 9/11 attacks or tie him to Al Qaeda. The lone silver lining of this policy is that it is certain not to return the country to a dictatorial or theocratic government. American policy has not been any more effective in Iraq’s neighbour, Iran. A central player in the American scheme of things in the region, Iran has started using the nuclear threat to avert an Iraq-like situation in its own country. With its presidential elections around the corner, it is difficult to predict whether the hardliners or the reformists will be returned to power. (Clark, 2004) America’s policy of coercive appropriation of the region’s only major resource has had another, parallel and highly profound, impact. In order to break free from what is perceived as the American stranglehold over their resources, many countries have started cooperating with one another to exploit the oil-rich Caspian region. Based on the idea of excluding America from the pipeline grid, this brings several countries, even from outside the periphery of the Middle East, into close ties with each other. This could spell a total alteration of the geo-strategic dynamics of the region. The idea involves not only countries of the region such as Iran; it also brings into its embrace some former Soviet republics as well as China, Pakistan, India, Bangladesh and Myanmar. This has stimulated America into fostering friendly regimes in the Caucasus. (The Hindu, 10th April 2005, p.10) These events are rooted in America’s policy in the Middle East.

B) In relation to the Arab-Palestine issue: In the absence of the Soviet factor, American policy in the Middle East has become more intrusive; this could have a positive impact if America’s moves are perceived as salutary. A prime test case of this policy is the way its role is seen in the Israeli-Palestinian issue. (Cantori, 1994, p. 452) The years immediately after the Gulf War saw a hyperactive engagement in the region under President Bill Clinton, for whom resolution of the Arab-Israeli conflict was a principal goal. In his presidency, America assumed the role of an ‘honest broker’ in bringing about a peaceful settlement of the issues bedevilling the region. However, before substantial headway was made, a new administration took over under Bush Jr., under whom the same vigour was not maintained. American interventionism, which became low-key under the new dispensation, has led to suspicion in Arab quarters that America, with its uncompromising tilt towards Israel, has not been the ‘honest broker’ it promised to be. This has led to a feeling that the American administration has no clear-cut, comprehensive policy towards resolving the Arab-Palestine conflict. (Lukacs, 2001, p. 32) “The problems of devising and implementing a coherent regional strategy were reflected in and exacerbated by the inherent tension generated by Washington’s goals…American diplomatic, economic, informational, and military efforts rarely, if ever, were simultaneously applauded by both Israelis and Arabs. Instead, the norm was that whatever the United States did to support one side was frequently denounced by the other” (1996, p. 122) America’s obsession with obtaining fuel has generated a feeling that it is losing its leverage in the region by failing to go the distance in promoting one of its professed ideals, peace between Israel and Palestine. One of the major impacts of this policy has been that most of the peace accords meant to end this dispute, and those between the various countries of the region, have gathered dust, with the result that the situation on the ground has hardly changed. (Lukacs, 2001, p. 32)

Part IV:

Conclusion: In this section, an analysis is made of how America’s cherished policy in the region could go awry if it is tardily implemented, or in the event of an outbreak of war or of a backlash against American policy; real and plausible causes for any or all of these exist in the region.

American policy in the Middle East, spelt out in its ‘New World Order’ axiom, is in the process of evolution; hence, at this stage, the events that have been unfolding in the region offer, at best, an indication of things to come. In the overall sense, even if the policy in the Middle East is clear, its result is still in an inchoate stage, and constitutes a mixed bag. Yet, a few patterns can be discerned:

A new urgency has been brought about by the terrorist attacks of September 11, 2001. In the aftermath of this event of seminal importance, the Bush administration has been looking at its foreign policy through an altogether different prism. The US has now adopted an aggressive stance by which it categorises countries as either its friends or abettors of terror. On account of this thinking, the world has become more polarised than it was during the Cold War. The US is finding that it is a lot easier to take on one country at a time and mould it to its will than to take on amorphous, seamless terrorist groups that can carry out terror attacks on just about any part of the world at will. (Rahman, 2002) This is the foremost example of how the Middle East is affected by the nuances of the ‘New World Order’.

Some of America’s staunchest allies (apart from Israel) and most bitter rivals in the region have had Islamic forms of governance; examples of these two extremes are Saudi Arabia and Iran. The establishment in America is inclined to think, as some in the media are, that terrorism is rooted in and inextricably linked to Islam. (Esposito, 1993, p. 188) Any American policy towards the region that is seen as antithetical to Islam (a very likely outcome on account of the American predisposition towards Israel) is sure to antagonise public opinion in the region against America if it does not take account of the sensitivities of the local populace. Clumsily implemented policy in a region with so strong a religious flavour could seriously dent America’s efforts at gaining a foothold there; in addition, it could unite the region against American hegemony.

In this setting, it is all the easier for the countries of the region to line up in defence of one of their brethren. With the battle lines, so to speak, clearly drawn, mostly as a result of America’s own policy, oil, nuclear blackmail and Islam could easily prove to be the factors uniting the region against America. Emulation of the Iraqi example by other countries could very well set the region on the road to total chaos. American policy aimed at preventing interstate conflict may have succeeded so far, but there is no guarantee this will endure if America goes overboard in implementing its policy. Thus, the potential for an all-out conflagration in the region against America is very real. If this materialises, the American objectives spelt out in its ‘New World Order’ could go haywire.

In order to pre-empt this scenario, America needs to become more amiable and resort to less arm-twisting in the implementation of its policy: “[i]n the years to come, the liberation of U.S. foreign policy from the protracted political impasse of the post-cold war era will likely require the restoration of consensus regarding the country’s appropriate role in foreign affairs. In the absence of such a consensus, the likelihood remains that U.S. policy will continue to be driven by crises overseas, (as in) the Middle East.” (Hook, 1998, p. 326)

Written By Ravindra G Rao

References

 

 

Amirahmadi, H., (Ed.). (1993), The United States and the Middle East : A Search for New Perspectives, State University of New York Press, Albany, NY.

 

Bensahel, N. & Byman, D. L., (Eds.). (2003), The Future Security Environment in the Middle East: Conflict, Stability, and Political Change, Rand, Santa Monica, CA.

 

Bhadrakumar, M.K., 2005, ‘The great game for Caspian oil’, The Hindu, 20th April 2005, p.10. This article can be accessed online at http://www.hindu.com/2005/04/20/stories/2005042002371000.htm

 

Cantori, L. J. (1994), “The Middle East in the New World Order”, in The Gulf War and the New World Order International Relations of the Middle East, Ismael, T. Y. & Ismael, J. S. (Eds.) (pp. 451-464), University Press of Florida, Gainesville, FL.

 

Clark, W. (2004), “Broken Engagement: The Strategy That Won the Cold War Could Help Bring Democracy to the Middle East-If Only the Bush Hawks Understood It”, Washington Monthly, Vol. 36, p. 26+, Retrieved April 21, 2005, from Questia database, http://www.questia.com.

 

Esposito, J. L. (1993), “Islamic Movements, Democratization, and U.S. Foreign Policy” in Riding the Tiger: The Middle East Challenge after the Cold War, Marr, P. & Lewis, W. (Eds.) (pp. 187-207), Westview Press, Boulder, CO.

 

Hook, S. W. (1998), “The White House, Congress, and the Paralysis of the U.S. State Department after the Cold War”, in After the End: Making U.S. Foreign Policy in the Post-Cold War World, Scott, J. A. (Ed.) (pp. 305-326), Duke University Press, Durham, NC.

 

Kuroda, Y. (1994), “Bush’s New World Order”, in The Gulf War and the New World Order International Relations of the Middle East, Ismael, T. Y. & Ismael, J. S. (Eds.) (pp. 52-69), University Press of Florida, Gainesville, FL.

 

Lesch, D. W. (Ed.), (1999), A Historical and Political Reassessment, Westview Press, Boulder, CO.

 

Lukacs, Y. (2001), “America’s Role – as the Israeli-Palestinian War of Attrition Enters Its Second Year, an Intense Debate Is Taking Place over the Content Scope, and Future Direction of America’s Policy in the Middle East” World and I, Vol. 16, p. 32, Retrieved April 21, 2005, from Questia database, http://www.questia.com.

 

Miller, E. A., & Yetiv, S. A., (2001), “ The New World Order in Theory and Practice: The Bush Administration’s Worldview in Transition”, Presidential Studies Quarterly, Vol.31, No.1, p. 56. Retrieved April 21, 2005, from Questia database, http://www.questia.com.

 

Rahman, S.,  (2002), “Another New World Order? Multilateralism in the Aftermath of September 11”, Harvard International Review, Vol. 23 No.4, p. 40+, Retrieved April 21, 2005, from Questia database, http://www.questia.com.

 

Sick, G., (1999), “The United States in the Persian Gulf: from Twin Pillars to Dual Containment”, in A Historical and Political Reassessment, Lesch, D. W. (Ed.), (pp. 277-290), Westview Press, Boulder, CO.

 

Tucker, R. W., & Hendrickson, D. C., (1992), The Imperial Temptation: The New World Order and America’s Purpose, Council on Foreign Relations Press, New York.

 

Tucker, R. W., Howard, M., Schmitt, G., Mearsheimer, J. J., Joffe, J., Chace, J., Gungwu, W., Kupchan, C. A., & Hassner, P. (2002), “One Year On: Power, Purpose and Strategy in American Foreign Policy”, The National Interest,  p. 5+. Retrieved April 21, 2005, from Questia database, http://www.questia.com.

 

(1996), “The United States and the Middle East: Continuity and Change”, in U.S. Foreign and Strategic Policy in the Post-Cold War Era: A Geopolitical Perspective, Wiarda, H. J. (Ed.) (pp. 107-126), Greenwood Press, Westport, CT.

 

Zogby, James, “It’s the economy, stupid! –And it’s the World, Too!”. Available: http://www.aaiusa.org/wwatch_archives/011193.htm (Accessed 2005, April 05)


EFFECTS OF ALCOHOL ADDICTION

Posted by Admin on January 30, 2011

ABSTRACT: This research paper details the social effects of alcoholism. As part of this, it focuses on the short- and long-term impact alcohol addiction has on the family and its social interaction. A major part of this study is devoted to the effect alcoholism has on children. In discussing this, an exploration is made of the link between parental alcoholism and the difficulties faced by children of alcoholics.

Style: APA; sources: 10; Pages: 8

Limitations of this study: Alcoholism is the only area taken up for this study. No other forms of addiction, such as drug addiction or other substance abuse, are covered. In addition, no watertight compartmentalization is made between the short- and long-term effects of alcoholism; they are intertwined. Another area that is not treated separately is the effect of alcoholism on the different members of the family, because generally everyone in the family suffers because of an addicted member; the effect it has on children and the effect on the spouse of the addict cannot be disentangled.

Thesis and overview:

Understanding alcoholism: Alcoholism defies a clear-cut, comprehensive definition. Although there is no single way of defining it, an alcohol addict is one who meets the following criteria, among others: an irresistible urge to consume alcohol, loss of control once s/he starts consuming alcohol, and a relapse into the habit following a session of rehabilitation. (Swift, 1999, p. 207) Although there is no conclusive finding as to whether alcoholics are born or made, research seems to suggest that there is a strong, if not irrefutable, link between alcoholism and genetics. Having said this, it should be reiterated that this relationship is at best shaky, for it is commonplace to find children of alcoholics turning out to be non-alcoholic; by the same token, it is equally true that not all alcoholics had an alcoholic parent. (Ullman & Orenstein, 1994)

Whether the family has an alcoholic mother or an alcoholic father, the common denominator is that educational and social background matters little in making people alcoholics, although alcoholism is usually more prevalent in the working class. Thus, the problem is spread across all strata of society, and a child of any age or social or economic group may have an alcoholic parent. (Hunt, 1997)

Effects on family: The effects of alcoholism on the family can be felt from the earliest stages of a child’s upbringing. Parents who are alcoholic are known to be far inferior to non-alcoholic parents as caregivers to the family. Apart from denying children the care they need, these parents generally discourage children from talking about the habit in the social circles in which they interact, are inconsistent and unpredictable in dealing with children, and are usually rigid in their expectations of them. Alcoholism also has a subtler effect on Children of Alcoholics (COAs): it makes them more vulnerable than their counterparts to “…antisocial behaviors, problems with intimacy and trust, perfectionism, underachievement, low self-esteem and low self-worth, depression and/or anxiety, and lack of understanding of normalcy.” (Hunt, 1997) It has also been shown that such children are prone to psychosomatic ailments such as headaches, depression, insomnia, eating disorders and stomach upsets, and that they are likely to develop learning disabilities. (Stark, 1987) These effects do not simply vanish at some later stage of the children’s lives; even into adulthood, as a result of growing up in the shadow of an alcoholic parent or adult in the family, they exhibit undue nervousness, fretfulness and maladjustment in the family, and often end up becoming unsuccessful parents themselves. In addition, alcoholism has a very heavy impact on the emotional development of children: they are known to suffer from labeling and stereotyping as COAs, which has major personal and social consequences. (Hunt, 1997) A direct result of alcoholism is a sense of embarrassment about bringing home a friend or relative to a family with an alcoholic. “In the majority of cases, the alcoholic member is doing his or her drinking at home–sometimes privately, but often in full view of the rest of the family. Home-based events, such as meals and family entertainment, frequently occur at times when the alcoholic member is intoxicated. The family must also decide how friends and strangers are to be treated when active drinking is going on. Are they to be welcomed into the home, or kept at bay until the storm has passed? How is the day to be planned? How are household chores to be carried out when one is presumably never confident about what state one’s alcoholic spouse or parent will be in at any particular moment?” (Steinglass, Bennett, Wolin & Reiss, 1987, p. 177) Alcoholic adults in the family can also mar social and family occasions, such as vacations or even the daily assembly at dinner time, by creating scenes; this leads to a situation of conflict in the family, in which children are deprived of the love so central to a family. “There may also be continuous conflict within the family because the alcoholic parent is too erratic to play a key role in everyday decision making, but refuses to accept a subordinate role. Relationships between siblings may resemble a warring band more than a supportive group because of competition for the scarce supply of adult attention. (One of) the nonalcoholic parent(s) can be so wrapped up in reacting to the whims and needs of the alcoholic that he or she cannot provide a stable environment and is unresponsive to the children’s needs.” (Ullman & Orenstein, 1994) This means that the sense of bonding among the family members is lost.
The loss of happiness and peace of mind far exceeds any other loss, and is economically incalculable. It is not uncommon to find the children’s education being disrupted because huge proportions of the family income are frittered away on alcohol by one or more members of the family. When these children, deprived of love, try to find other avenues to fill this lacuna at home, there is a likelihood that they may end up looking for comfort with the wrong people; this can lead to further family and social tensions. (Ullman & Orenstein, 1994) One of the easiest and most readily available outlets such children find is drugs. Adolescents from this kind of background are particularly liable to be attracted to drugs. Adolescence is the stage of life when dramatic developmental, physical and emotional changes take place; it is precisely because of these great changes that adolescents, making the transition from one stage of life to another very important one, need very strong emotional support from their families. Most drug abusers fall into the habit not because of adolescence per se, but when the family deprives them of the support they so badly need, a direct fallout of alcoholism, as we have seen. Drug abuse, especially during adolescence, can have baleful effects on the individual. (Trad, 1994) The major precipitators are social factors, centred on the family, whose relationship with psychological factors is inseparable. (Mcdonald & Towberman, 1993) This triggers a chain reaction: drugs can lead to a strongly antisocial outcome, crime. If the people who land in jail for having committed crimes have one extremely strong factor in common, it is their social environment, a prime element of which is the existence of an alcoholic in the family. A study done during 1990-91, spread across four states in the US, with respondents from ten inner-city high schools and six correctional facilities, threw up some interesting facts, all of which point to irrefutable evidence of a link between criminals and their social background. (Curtis, 1998, p. 1233) Alternatively, these children can turn to alcoholism itself as a source of solace. It has to be admitted, though, that this link is subjective and depends on how the family views alcoholism in one of its members. (Ullman & Orenstein, 1994)

In addition to all these, there are indirect and secondary social costs of alcoholism: one of its direct results, domestic violence, costs the American economy anywhere between $3 billion and $10 billion a year through losses accrued on account of absenteeism, employee turnover, healthcare costs and so on. (Overman) Obviously, this is a major cost of alcoholism.

The positive side of a family with an alcoholic: After all these illustrations of the ill effects of alcoholism on the psychological, emotional and social upbringing of COAs, it is worth exploring whether alcoholism can have any positive fallout.

While admitting that these findings are largely based on assumption, and that the effects they describe may be inadvertent, some researchers classify the possible positives of alcoholism under the following pattern of behavior: COAs, after being exposed to alcoholic parents over a period of time, normally assume one of four roles. The first is that of the ‘hero’. The eldest child usually assumes this role, becoming a kind of surrogate parent and taking on all the roles of the parent, from caring for the younger children to running the household. This child is normally a high achiever, doing extremely well at academics and exhibiting a flair for leadership. Next in line is the ‘scapegoat’, or what is termed the problem child; this child invites trouble through misbehavior and earns scorn for its actions. The third role some COAs assume is that of the ‘mascot’. This child puts on a brave face over all the tumultuous events at home and is the most jovial, deflecting the sorrow of the home with its own sense of humor. Finally, there is the role of the ‘lost child’, who is reclusive and isolated. (Stark, 1987) The one possible positive of this assumption of roles is the indirect benefit that it could inculcate some sense of leadership in COAs. Again, it needs to be re-emphasized that this argument is tenuous: COAs need not necessarily react in only this manner, and even if an alcoholic parent can induce leadership qualities in a child, it would happen only for the child who actually wears the mantle of responsibility, not for each of the children.

These findings are not meant to suggest that it is only the children of alcoholics who suffer; if there is another person who bears the burden equally, it is the spouse. It is generally found that wives of alcoholics suffer in almost all areas of a happy married life: “talk or communications, mealtime, joint recreation and social activities, and sexual intimacy. The increasing failure of the husband to provide satisfaction in these areas leaves the wife almost entirely without a means of role fulfillment as a spouse. Predictably, wives react in anger and sorrow. The drinking husband, also predictably, reacts with hostility to his wife’s unhappiness. Ultimately, as will be seen, she withdraws, hurt and alienated, and he is further cut off from the society of nondrinking companions.” (Wiseman, 1991, p. 117)

Conclusion: As can be seen, it is indisputable that alcohol is a social evil of the highest order. While some concrete, well-researched issues are covered in this paper, they still do not constitute a comprehensive account of the effects alcohol causes. For instance, it can never be estimated what COAs might have developed into had they been born to non-alcoholic parents. In the absence of the proper conditions for emotional development, addiction in just one leading member of the family may have caused irreparable loss to society; who knows whether that child had the potential to be a real achiever, but had its creativity drowned in the parent’s habit? It is also common to hear stories of how people have committed some of the most violent and criminal acts under the spell of alcohol. It is for this reason that only some tangible issues are addressed here. Deep-seated problems such as alcoholism need to be tackled on an individual basis, taking several factors into consideration, and seen as a battle to be won with tact and patience. These considerations, however, are beyond the purview of this paper.

Written By Ravindra G Rao

References

 

 

Curtis, R. (1998). The Improbable Transformation of Inner-City Neighborhoods: Crime, Violence, Drugs, and Youth in the 1990s. Journal of Criminal Law and Criminology, 88(4), 1233. Retrieved November 15, 2005, from Questia database. http://www.questia.com

 

Hunt, M. E. (1997). A Comparison of Family of Origin Factors between Children of Alcoholics and Children of Non-Alcoholics in a Longitudinal Panel. American Journal of Drug and Alcohol Abuse, 23(4), 597+. Retrieved November 15, 2005, from Questia database: http://www.questia.com

 

Mcdonald, R. M., & Towberman, D. B. (1993). Psychosocial Correlates of Adolescent Drug Involvement. Adolescence, 28(112), 925+. Retrieved November 15, 2005, from Questia database: http://www.questia.com

 

Overman, Stephanie. (1997, August). Preventing Domestic Violence From Spilling Over Into the Workplace. Restaurant.org. Retrieved from http://www.restaurant.org/rusa/magArticle.cfm?ArticleID=579

 

Stark, E. (1987, January). Forgotten Victims: Children of Alcoholics. Psychology Today, 21, 58+. Retrieved November 15, 2005, from Questia database: http://www.questia.com

 

Steinglass, P., Bennett, L. A., Wolin, S. J., & Reiss, D. (1987). The Alcoholic Family. New York, NY: Basic Books.

 

Swift, R. M. (1999). Medications and Alcohol Craving. Alcohol Research & Health, 23(3), 207. Retrieved November 15, 2005, from Questia database: http://www.questia.com

 

Trad, P. V. (1994). Developmental Vicissitudes That Promote Drug Abuse in Adolescents. American Journal of Drug and Alcohol Abuse, 20(4), 459+. Retrieved November 15, 2005, from Questia database: http://www.questia.com

 

Ullman, A. D., & Orenstein, A. (1994). Why Some Children of Alcoholics Become Alcoholics: Emulation of the Drinker. Adolescence, 29(113), 1+. Retrieved November 15, 2005, from Questia database: http://www.questia.com

 

Wiseman, J. P. (1991). The Other Half: Wives of Alcoholics and Their Social-Psychological Situation. New York: Aldine de Gruyter.


DEMOCRACY IN 19TH CENTURY WESTERN EUROPE

Posted by Admin on January 30, 2011

“How democratic were France, Germany and Britain by 1900?”

Table of contents:

Part I: Summary;

Part II: Outline;

Part III: Limitation of this study;

Part IV: Democracy in France;

Part V: Democracy in Germany;

Part VI: Democracy in Britain;

Part VII: Conclusion.

Part I:

Summary: Just over a century ago, the kind of government that existed in these frontline western European states was a far cry from what is seen today. The political earthquake called the French Revolution had its epicentre in France, but its rumblings were felt through most of the continent, as well as in faraway colonies, leaving the politics of most European countries in a state of flux. But the intended harvest of this revolution, the obliteration of monarchy and the establishment of the rule of law, the indispensable elements of a democracy, took its time to get ingrained in the political systems of these countries, and democracy evolved very differently in each of the three countries taken up in this paper. If the advent of Napoleon affected all three countries, and the Congress of Vienna stunted France’s and Germany’s progress towards democracy, the internal political dynamics of these countries also differed from one another. In Britain, whose brand of democracy was mixed, the Reform Acts turned out to be milestones on the road to democracy. Such serious and well-intended steps towards democracy were not taken in the other two countries, mainly because France kept seesawing between monarchy and autocracy through most of the 19th century, while Germany remained a collection of disparate states for most of that century. In sum, by the end of the 19th century Britain had a fairly well-established parliamentary democracy, which the nation had possessed in some form for a long time, although under a monarchy. The same was not the case with the other two; of the three, Germany enjoyed the least democracy. The reasons for this discrepancy form the backbone of this paper.

Part II:

Outline: This paper takes up separately the extent to which democracy was ushered into each of these three countries. In each case, a brief narration is given of how democracy developed. Since the nature of this paper is analytical, this aspect is not treated in great detail; the narrative is given only to reinforce the thesis question. The starting point for the evolution of democracy in each country is also taken up separately, for the simple reason that while the French Revolution happened in France, no such event took place in the other two countries. For these, appropriate historically important dates or events are taken up instead.

Part III:

Limitation of this study: While 1789 may be termed a signal event for modern democracy, no event of comparable importance for democracy happened in 1900, the cut-off date for this paper. Since this is the period with which the paper is concerned, it restricts itself to developments over most of the 19th century, in which the major themes were unification for Germany, political uncertainty for France, and the reform of the parliamentary system in the Victorian era for Britain.

Part IV:

Democracy in France:

France was home to one of the watershed political events of modern Europe, the French Revolution, in which the people rose in revolt under the slogan "war to the châteaux, peace to the cottages". The gravity and repercussions of this event are far too great to bear banal repetition; however, while the essential aim of the Revolution was to bring an end to the autocratic and inept regimes that had misruled the nation (Frey & Frey, 2004, p. 57), the result it sought to instil, democracy, did not have a smooth inception or development either, suffering instead from long and enduring birth pangs.

Strangely, for most of the 19th century it seemed as if the great Revolution had been no more than an isolated, standalone event. The dividend the Revolution sought to pay, democracy, had to wait a seemingly interminable time to take root in the nation's political system, because the succession of governments it brought forth were anything but democratic. Leading political figures of the day, such as Robespierre, feared that the system the Revolution put in place was one with a penchant for forgetting "the interests of the people", one that would "lapse into the hands of corrupt individuals" and, worst of all, "reestablish the old tyranny" (Cohen, 1997, p. 130). Later decades showed that this prognosis was not far off the mark.

The decades following the Revolution saw a chain of events, none of which took the country anywhere near democracy, the Revolution's avowed aim. The years from the Revolution to the Franco-Prussian War saw political fissures of one kind or another, with no semblance of democracy, starting with the ascent of Napoleon, perhaps the most powerful dictator the country had ever produced. His defeat was followed by the Restoration of the monarchy; this gave rise to the Revolution of 1830 and the rule of Louis Philippe, which lasted until another revolution brought his regime down in 1848. This heralded the era of the Second Republic and the tenure of the fickle Napoleon III, leading to another event of seminal importance for the nation, the Franco-Prussian War, which was followed by yet another Republic, the Third. (Haine, 2000, p. 97) This regime, heavily weighed down by intrigues, scandals, wars and a renewed national pride in the wake of a highly recharged and resurgent neighbour, Prussia (Wright, 1916, pp. 2-4), was left with little room or time for democracy. Nothing of import happened in the remainder of the 19th century to bring about the emergence of a democracy.

Part V:

Democracy in Germany:

Germany’s tryst with democracy in the 19th century needs to be seen against circumstances peculiar to the nation’s history, for this was when the German people united as a nation for the first time. At the time of the French Revolution they were a loosely knit confederation of princely states owing allegiance to the Holy Roman Empire, and the arrangements the Congress of Vienna made for Europe in 1815 did nothing to bring democracy nearer, setting the clock back instead. Yet within about a century of the Revolution these states had been cobbled together, almost magically, under the Prussian banner. A series of moves replete with uninhibited daredevilry, gambles, deceit and sheer diplomatic astuteness on the part of the Chancellor, Otto von Bismarck, had united the German people and rid them of the yoke of Austrian domination. (Snell, 1976, pp. 3, 4) However, Germany had only been unified; the realisation of a long-cherished dream of a German nation did not in any way mean that a democracy had been put in place. The newly knit entity lacked the prerequisite groundwork for democracy and suffered from a basic flaw: it “was constructed by its princes, not by its people. That important fact distinguished Germany from nations like England, France, and the United States, where the constitutions were designed “with the consent of the governed.” The German Empire was a federation of sovereign states, its constitution created by a treaty among the hereditary rulers of those states. The “wars of unification” were not revolutionary popular movements; they were narrowly focused international conflicts designed by Bismarck to help Prussia eliminate Austrian power within Germany and to create a new Prussian-led German nation within Europe.” (Turk, 1999, pp. xvii-xxii) Whatever smattering of democracy the nation had towards the very end of the century was limited to social democracy, confined largely to the labour unions. (Berghahn, 1994, p. 160)

Part VI:

Democracy in Britain: The year 1815 is considered a benchmark for the politics of Britain, as it is for several other European countries, for the simple reason that it saw the end of the power and influence of one of the greatest nemeses the nation had ever faced, Napoleon. While this was the major external issue, Britain had its share of internal problems during this century as well. The Industrial Revolution brought in its wake dramatic changes the nation had to absorb, with all the promises and pitfalls it spawned. Among its most important social effects were a near-explosion in population and the drawbacks of nascent industrialisation, in which Britain had no forerunners anywhere in the world. The greatest priority at the time was therefore a set of policies that gave the country social solidity and some element of peace. (McCord, 1991, p. 1) With high rates of population growth and attendant problems such as high infant mortality being pressing concerns during the early part of the 19th century (Brown, 1991, p. 30), politics was abuzz with the question of which of the institutions the British had so assiduously built up over the previous centuries was best suited to give coherence to a society changing at a feverish pace. In this milieu, the emphasis in British politics was more on what kind of reform was needed for society, polity and the economy than on which form of government was best suited to carry these changes out. Opinion was sharply divided between the Conservatives and the Liberals about which of its institutions could carry the day for Britain, while the British faith in the monarchy remained as firm as ever, not diluted or eroded even slightly by these changes. (Park, 1950, pp. 3-5)

In essence, the 19th century, for most of which Britain was under the rule of one of its longest-reigning monarchs, Queen Victoria, saw the emergence of a peculiarly hybridised, often contradictory system of governance. Quintessential democratic institutions, such as the parliament, the judiciary, the cabinet and local government, were alive and well, but functioned under a monarchy. On the one hand, fair and free elections, the ultimate identifier of a democracy, were held with amazing regularity; on the other, participation in these elections was limited to a handful of the rich and powerful. It was to correct this imbalance and draw more people into the electorate that the Reform Acts were passed. The basic intent of this legislation was the promotion of greater democracy by drawing the excluded and marginalised sections of society into the electorate. (Pugh, 1999, p. 20) The nation went through three Reform Acts, passed in 1832, 1867 and 1884, whose central aim was increasing the size of the electorate. (Hammond & Foot, 1952, pp. 212-214) At about the time these Acts were passed, a parallel social and political reform movement, Chartism, was very active. The basic demand of this radical, unionised movement was greater political participation for the working classes, so that the fruits of the Industrial Revolution percolated down to the labouring class, too. (Maccoby, 1935, p. 33) However, in the light of the needs of the day and the priorities behind these Acts, they met with little success in actually bringing democracy to the country.
What has been said about the Reform Act of 1832 perhaps holds good for the other Acts, too: that they were “…an excellent example of the British skill of muddling through. An aristocracy muddled through to a democracy, taking many of the aristocratic virtues with them; and they muddled through from an age of privilege to an age of numbers. The democratic implications of the act(s) were not in fact revealed for more than a generation…” (Smellie, 1962, p. 164) As a result, through most of the Victorian era, although halting efforts were made towards greater democracy, there was no more than a sprinkling of it, and even that only at the grassroots, restricted to the municipal sphere through a series of Acts passed at the local government level. (Harrison, 1996, p. 20)

Part VII:

Conclusion: A study of the thesis question throws up a mixed picture. Democracy, so essential a feature of these countries today, had to make a bumpy and potholed journey; in all three countries it remained nebulous and uncertain in the 19th century, albeit in varying degrees. In Britain, a parliamentary democracy was very much in bloom, but the inherent love and pride of the British people for their monarchy pre-empted a switch to a full-fledged democratic form of government. As a result, these democratic institutions functioned under a monarchy that controlled the largest empire of the day.

In France, the scene was different. In the absence of democratic institutions of the kind Britain had nurtured, the forms of government that followed the French Revolution vacillated between extremes, with the result that democracy took a back seat.

In Germany, the struggles inherent in a newly unified nation, coupled with its naivety in managing its nascent imperialism, produced too many squabbles and bottlenecks for democracy to flourish. The nation Bismarck had welded together knew only how to work under a newly consolidated empire, never having been inculcated with the mindset necessary for democracy. It was never going to be easy to administer a sudden dose of democracy to these fissiparous peoples, inured as they were to centuries of localism. By the end of the century, democracy had barely registered in the average German psyche.

Of the nations taken up for this study, Britain can be said to have had the highest form of democracy by the end of the 19th century; yet even here, despite the Reform Acts, which could hardly be termed great harbingers of democracy, the country was nowhere near what might be called a pure democracy, something that came far more naturally to some of its former colonies, principally America.

Written By Ravindra G Rao

 

References

 

Berghahn, V. R., (1994), Imperial Germany, 1871-1914: Economy, Society, Culture, and Politics, Berghahn Books, Providence, RI.

 

Brown, R., (1991), Society and Economy in Modern Britain, 1700-1850, Routledge, New York.

Cohen, P. M., (1997), Freedom’s Moment: An Essay on the French Idea of Liberty from Rousseau to Foucault, University Of Chicago Press, Chicago.

Frey, L. S., & Frey, M. L., (2004), The French Revolution, Greenwood Press, Westport, CT.

 

Haine, W. S., (2000), The History of France (F. W. Thackeray & J. E. Findling, Ed.), Greenwood Press, Westport, CT.

 

Hammond, J. L., & Foot, M. R., (1952), Gladstone and Liberalism, English Universities Press, London.

 

Harrison, B., (1996), The Transformation of British Politics, 1860-1995, Oxford University Press, Oxford.

 

Maccoby, S., (1935), English Radicalism, Allen & Unwin, London.

 

McCord, N., (1991), British History, 1815-1906, Oxford University Press, Oxford.

 

Park, J. H., (1950), British Prime Ministers of the Nineteenth Century, New York University Press, New York.

 

Pugh, M., (1999), State and Society: A Social and Political History of Britain, 1870-1997, Arnold, London.

 

Smellie, K. B., (1962), Great Britain since 1688: A Modern History, University of Michigan Press, Ann Arbor, MI.

 

Snell, J. L., (1976), The Democratic Movement in Germany, 1789-1914 (H. A. Schmitt, Ed.), University of North Carolina Press, Chapel Hill, NC.

Turk, E. L., (1999), The History of Germany, Greenwood Press, Westport, CT.

 

Wright, C. H., (1916), A History of the Third French Republic, Houghton Mifflin Company, Boston.


 