Termpapers
This is a blog where you can find a variety of term papers for free. This site is specially made for LPU students.
Saturday, July 18, 2009
Term Paper Of Communication Skills
RICH DAD POOR DAD BY ROBERT T. KIYOSAKI
Submitted By:-
CERTIFICATE
This is to certify that the term paper of Communication Skills, completed by Sneha, a student of B.Sc. Fashion Technology, under the guidance of Santosh madam, is submitted in partial fulfilment of the award.
Her work has been found……….
Guided by:
ACKNOWLEDGEMENT
Words are not enough to express my gratitude to those who helped me in producing this project. Still, I would like to add a few words for the people who were a part of this term paper in numerous ways, people who gave unending support right from the stage the idea was conceived.
In particular I wish to thank our teacher, SANTOSH, without whose support this project would have been impossible. She has not only given guidance but also reviewed this project with painstaking attention to detail.
I would like to take this opportunity to thank all the staff members for the unending support which they have provided in many ways.
Last but not least, I would like to thank all my classmates for their overwhelming support throughout the making of this term paper.
SNEHA
PREFACE
Rich Dad Poor Dad is largely based on Kiyosaki's upbringing and education in Hawaii, although the degree of fictionalization is disputed. Because of the heavy use of allegory, some readers believe that Kiyosaki created Rich Dad as an author surrogate (a literary device). Many readers believe that the "Rich Dad" in the book is actually the founder of Hawaii's widespread ABC Stores.
The book highlights the different attitudes to money, work and life of these two men, and how they in turn influenced key decisions in Kiyosaki's life.
Among some of the book's topics are:
• the value of financial intelligence
• that corporations spend first, then pay taxes, while individuals must pay taxes first
• that corporations are artificial entities that anyone can use, but the poor usually don't know how
According to Kiyosaki and Lechter, wealth is measured as the number of days the income from your assets will sustain you, and financial independence is achieved when your monthly income from assets exceeds your monthly expenses. Each dad had a different way of teaching his son.
ABOUT THE AUTHOR
ROBERT T. KIYOSAKI
Personal life
A fourth-generation Japanese American, Kiyosaki was born and raised in Hawaii. He is the son of the late educator Ralph H. Kiyosaki (1919-1991). After graduating from Hilo High School, he attended the U.S. Merchant Marine Academy in New York, graduating with the class of 1969 as a deck officer. He later served in the Marine Corps as a helicopter gunship pilot during the Vietnam War, where he was awarded the Air Medal. Kiyosaki left the Marine Corps in 1974 and got a job selling copy machines for the Xerox Corporation. In 1977, he started a company that brought to market the first nylon and Velcro "surfer" wallets. The company was moderately successful at first but eventually went bankrupt. In the early 1980s, Kiyosaki started a business that licensed T-shirts for heavy metal rock bands. Around 1996–1997 he launched Cashflow Technologies, Inc., which owns and operates the Rich Dad (and Cashflow) brand. He is married to Kim Kiyosaki.
Teachings
A large part of Kiyosaki's teachings focuses on generating passive income by means of investment opportunities, such as real estate and businesses, with the ultimate goal of being able to support oneself by such investments alone. In tandem with this, Kiyosaki defines "assets" as things that generate cash inflow, such as rental properties or businesses, and "liabilities" as things that generate cash outflow, such as houses, cars, and so on. Such definitions are somewhat based on the concept of negative gearing. Kiyosaki also argues that financial leverage is critically important in becoming rich.
Kiyosaki stresses what he calls "financial literacy" as the means to obtaining wealth. He says that life skills are often best learned through experience and that there are important lessons not taught in school. He says that formal education is primarily for those seeking to be employees or self-employed individuals, and that this is an "Industrial Age idea." And according to Kiyosaki, in order to obtain financial freedom, one must be either a business owner or an investor, generating passive income.
Kiyosaki speaks often of what he calls "The Cashflow Quadrant," a conceptual tool that aims to describe how all the money in the world is earned. Depicted in a diagram, the concept comprises four groupings, split by two lines (one vertical and one horizontal). In each of the four groups there is a letter representing a way in which an individual may earn income: E (Employee), S (Self-employed), B (Business owner) and I (Investor).
Other Books:
• If you want to be Rich & Happy don't go to School? (1992)
• The Business School for People Who Like Helping People (2001) - endorses multi-level marketing.
• Retire Young, Retire Rich (2001)
• Rich Dad's The Business School (2003)
• Who Took My Money (2004)
• Rich Dad, Poor Dad for Teens (2004)
• Before You Quit Your Job (2005)
• Rich Dad's Escape from the Rat Race - Comic for children (2005)
• Rich Dad's Increase Your Financial IQ: Get Smarter with Your Money (2008)
SUMMARY OF THE BOOK RICH DAD, POOR DAD
BY ROBERT T. KIYOSAKI
Lesson 1: The Rich Don’t Work For Money
At age 9, Robert Kiyosaki and his best friend Mike asked Mike's father (Rich Dad) to teach them how to make money. After 3 weeks of dusting cans in one of Rich Dad's convenience stores at 10 cents a week, Kiyosaki was ready to quit. Rich Dad pointed out that this is exactly what his employees sounded like: some people quit a job because it doesn't pay well, while others see it as an opportunity to learn something new.
WORK TO LEARN
Next Rich Dad put the two boys to work, this time for nothing. Doing this forced them to think up a source of income, a business scheme. The opportunity came to them upon noticing discarded comic books in the store, and the first business plan was hatched. The boys opened a comic book library and employed Mike's sister at $1 a week to mind it. Soon they were earning $9.50 a week without having to physically run the library, while kids read as many comics as they could in two hours after school for only a few cents.
Lesson 2: Why Teach Financial Literacy?
They don’t teach this at school.
The growing gap between rich and poor is rooted in the antiquated educational system. The system trains people to be good employees, not employers. The obsolete school system also fails to provide young people with the basic financial skills rich people use to grow their wealth.
Know your options and use this knowledge to build a formidable asset column. In an age of instant millionaires it really isn’t about how much money you make, it’s about how much you keep, and how many generations you can keep it.
Lesson 3: Mind Your Own Business
KEEP YOUR DAY JOB BUT START MINDING YOUR OWN BUSINESS.
Kiyosaki sold photocopiers on commission at Xerox. With his earnings he purchased real estate. In 3 years’ time his real estate income was far greater than his earnings at Xerox. He then left the company to mind his own business full time. He knew that in order to get out of the rat race fast, he needed to work harder, sell more copiers and mind his own business.
Don’t spend all your wages. Build a good portfolio of assets and you can spend later when these assets bring you greater income.
Lesson 4: The History of Taxes and the Power of Corporations
Income tax has been levied on citizens in England since 1874. In the United States it was introduced in 1913. Since then, what was initially a plan to tax only the rich eventually "trickled down" to the middle class and the poor. The rich have a secret weapon to shelter themselves from heavy taxation: the corporation. It isn't a building with the company name and logo in brass signage out front. A corporation is simply a legal document in your attorney's file cabinet, duly registered with a state government agency. Corporations offer great tax advantages and protection from lawsuits. It's the legal way to protect your wealth, and the rich have been using it for generations. Do your own research and find out which tax laws will bring you the best advantages.
Lesson 5: The Rich Invent Money
Self-confidence coupled with high financial IQ can certainly earn more for you than merely saving a little bit every month.
Make good use of your time and find the best deals.
An example: In the early '90s the Phoenix economy was bad. Homes once valued at $100,000 sold for $75,000. Kiyosaki shopped at bankruptcy courts and bought the same houses at only $20,000. He resold these properties for $60,000, making a cool $40,000 profit. After six more transactions in the same manner he had made a total of $190,000 in profit, and it took only 30 hours of work time. Rich Dad explains that there are two types of investors:
1. Buyers of Packaged Investments. This is when you call a retail outlet, real estate company, stockbroker or financial planner and put your money in ready-made investments. It's a simple, clean way of investing.
2. The Professional Investor. Design your own investment: assemble a deal and put together different components of an opportunity. Rich Dad encourages this type. You need to develop three main skills to be this type of investor.
Lesson 6: Work to Learn – Don’t Work for Money
The Author’s Odyssey
After college graduation Robert Kiyosaki joined the Marine Corps. He learned to fly for the love of it. He also learned to lead troops, an important part of management training. His next move was to join Xerox, where he learned to overcome his fear of rejection. The thought of knocking on doors and selling copiers terrified him. Soon he was among the top 5 salespeople at the company, and for a couple of years he was No. 1. Having achieved his objective of overcoming his shyness and fear, he quit and began minding his own business. Learn skills like PR, marketing, and advertising. Take a second job if it means learning more.
REVIEW OF THE BOOK RICH DAD POOR DAD
In Rich Dad, Poor Dad, Kiyosaki describes the lessons that his two dads taught him about money and its management. To clarify, one was his biological dad and the other was the father of his friend. One of them was highly educated, with multiple advanced degrees; the other had an 8th-grade education. One was very wealthy; the other regularly struggled with money. Counter-intuitively, it was the dad with the 8th-grade education who was the wealthy one: an entrepreneur who owned restaurants, a construction company and other business ventures. His educated dad spent the majority of his life working, with very little to show for it.
The first portion of the book is written as a story from the viewpoint of Kiyosaki as a 9-year-old kid who learned financial lessons from his rich dad. He performed a number of jobs for him and learned many aspects of business by observing its management, accounting, sales, legal and other functions. The style of this section is similar to the way The Wealthy Barber was structured, in that it teaches financial lessons in a narrative style.
A good point Kiyosaki makes is that a house is not an asset, though it may traditionally be listed this way. The costs associated with a house, such as utilities, property taxes, insurance, and maintenance, pull away cash flow. He instead defines an asset as a resource that produces cash. A house actually could be in this category if fully paid for and used as a rental property. (To clarify, Kiyosaki does not necessarily recommend buying real estate only with cash; he endorses obtaining financing and taking on debt.) I personally think Dave Ramsey's thoughts on paying cash for investment real estate are more accurate and help to take into account the risk associated with debt.
Other assets could be mutual funds or stocks that generate cash flow as well as intellectual property such as books or music which produce royalties. A business that one owns but doesn't need to be actively involved in the work would also be considered an asset by his definition.
The point he makes is that many people put money into things which do not help to build their wealth and instead cause negative cash flow in some instances through expenses associated with them.
Kiyosaki also promotes being creative and figuring out ways to make money in scenarios which might not, on the surface, look like an opportunity. An example he gives is from when he worked in a gas station as a kid for very low wages. The station sold comic books, which were thrown away if not sold by the time the comic salesman returned with the new issues. He collected all of these comics and started a comic book library which charged 10 cents for two hours' worth of reading. This allowed kids in the neighborhood to read many comics for the same price that just one would cost. By looking around and finding ways to make money, he identified this opportunity and created a profitable situation.
The book's philosophy is good in encouraging the building of assets that continue to increase cash flow, as well as the entrepreneurial spirit. One area I do not agree with is the level of risk taken on through debt to enable the purchase of real estate. Overall, the book has some good lessons to be gleaned.
Monday, May 11, 2009
Book Review: On Saying Please
Assignment
Book Review
Submitted to Miss Santosh
Submitted by Jaspreet Kaur
Roll No. 40
Contents
1. Story: “On Saying Please”
Author: A.G. Gardiner
Theme
Good manners are of great value in human life. Bad manners are not a legal crime, but everybody dislikes a man with bad manners. Small courtesies win us a lot of friends. Words like ‘please’ and ‘thank you’ help us in making our passage through life smooth. The law does not permit us to hit back if we are the victims of bad manners, but if we are threatened with physical violence, the law permits us some liberty of action. Bad manners create a chain reaction. Social practice demands politeness from us. A well-mannered person will find that his work becomes easier through the ready co-operation that he gets from others.
2. Story: “Forgetting”
Author: Robert Lynd
Theme
The modern man has a wonderful memory for the daily matters of his life, but he is also forgetful in several things. Only a few of us remember to take the medicine suggested by the doctor. Most of us forget to post our letters. Sportsmen generally forget their footballs and cricket bats; anglers, their fishing rods. Absent-mindedness is a real virtue. The absent-minded man makes the best of life.
3. Story: ‘The Never –Never Nest’
Author: Cedric Mount
The play tells us about the merits and demerits of buying things on a hire-purchase basis. Jack and Jill are a newly married couple. They are attracted by the hire-purchase system, so they buy all their domestic luxuries, including their house, on an installment basis. In one sense even their child is not their own: they have not made full payment of Dr. Martin's bill. The system encourages lavishness and the taking of loans.
The writer points out that the hire-purchase system enables the low-income group to have things which they cannot buy with their money. On the other hand, the system makes people extravagant; they fall into the habit of borrowing, which makes them unhappy.
4. Story: “Uncle Podger Hangs a Picture”
Author: Jerome K. Jerome
Theme
An eccentric person is a source of fun and nuisance. He attaches great importance to petty things. If he has to do an ordinary thing, he looks at it as a great military operation. Basically such a person is stupid and forgetful, but he thinks too much of himself. Uncle Podger is such a person. He has to hang a picture, but he treats it as a big military operation. He manages all the members of his family. When the job is done, the picture hangs unsafely on the wall. He provides a lot of amusement to the reader in the process.
Story: “The Never-Never Nest”.
Author: Cedric Mount
Characters
Jack, Jill, Aunt Jane and Nurse.
SUMMARY
Jack and Jill are a young couple. They live in a well-furnished house in New Hampstead. Aunt Jane pays a visit to their house. She is pleased to see their house and beautiful furniture. Jack and Jill have all modern comforts: a radiogram, a car, a refrigerator and a piano. Aunt Jane is very much impressed by their standard of living. They call their house their little nest. Jack tells Aunt Jane that all their comforts are due to her.
Aunt Jane does not understand how her nephew owns all these comforts. She had presented the couple a cheque of only two hundred pounds as a wedding gift. It surprises Aunt Jane how they could afford to pay the rent. Jack tells her that he doesn't pay rent; he actually owns the house. Aunt Jane is astonished to hear it. Jack explains to his aunt that they have purchased the house on installments. He tells her that living in a rented house was expensive; they had to pay only ten pounds in cash and then a few quarterly installments. Aunt Jane is sure that Jack must be well off to keep up a place like that. Jack modestly tells her that he had a five-shilling rise last year. Aunt Jane is eager to know if the car belongs to him. Jack replies that he owns its steering wheel, one tyre and two cylinders; it too was bought on installments. They could enjoy the pleasures of motoring for a mere five pounds. Jack discloses that every item of comfort in the house has been purchased on installments. In fact, he says, he owns only one leg of the furniture; the rest is to be paid for by easy installments. Aunt Jane refuses to sit on the sofa, since she thinks it does not belong to Jack. Jack tells Aunt Jane that he earns about six pounds a week, while his installments come to nearly eight pounds. Aunt Jane is shocked to hear it. She asks Jack how he manages to pay his installments; Jack replies that he borrows, and the borrowed sum too is to be paid in installments. At this she decides to go home. Jack offers to drive her to the station. She advises them to buy things in cash. Aunt Jane opens her handbag and tells Jack that she wants to give them a little cheque for ten pounds, advising them to pay off one of their bills so that at least one item will be really theirs. Jack goes to see her off. Jill thanks Aunt Jane for the present. Jill is very happy to see the cheque for ten pounds and sends it to the doctor. Jack, meanwhile, comes back.
He is very pleased to learn of the cheque for ten pounds. He thinks that he can now pay off the next two installments on the car. Jill tells him that she has already sent it off for something else. Jack gets angry when he hears that the cheque has gone to the doctor; he thinks it a wastage of money. Jill tells him that he does not understand the real thing: they had to pay just one more installment, and the baby would be really theirs.
Language style
The language of the play is simple and easy to follow.
Characters
Aunt Jane’s character is very important in this story. She is related to the young couple, Jack and Jill, who live in a fashionable house. She likes the couple and has given them two hundred pounds as a wedding gift. She does not like borrowers; she thinks that a borrower has no self-respect. At first she is impressed by Jack’s standard of living, but then she comes to know that Jack has bought everything on an installment basis. She feels shocked to learn that Jack has to pay eight pounds a week in installments. She hates extravagance and things bought on credit, yet she is very generous to Jack and Jill: when Jack gets married she gives him a cheque for two hundred pounds, and when she leaves their house she gives them a ten-pound cheque.
Thank You
Book Review
Submitted to Miss Santosh
Submitted by Jaspreet Kaur
Roll no 40
Contents
1. Story: “On Saying Please”
Author: A. G. Gardiner
Theme
Good manners are of great value in human life. Bad manners are not a legal crime, but everybody dislikes a man with bad manners. Small courtesies win us a lot of friends. Words like ‘please’ and ‘thank you’ help us make our passage through life smooth. The law does not permit us to hit back if we are the victims of bad manners, but if we are threatened with physical violence it allows us some liberty of action. Bad manners create a chain reaction. Social practice demands politeness from us. A well-mannered person will find that his work becomes easier through the ready co-operation he gets from others.
2. Story: “Forgetting”
Author: Robert Lynd
Theme
The modern man has a wonderful memory in the daily matters of his life, yet he is also forgetful of several things. Only a few of us remember to take the medicine suggested by the doctor. Most of us forget to post our letters. Sportsmen generally forget their footballs and cricket bats, and anglers their fishing rods. To the author, absent-mindedness is a real virtue: the absent-minded man makes the best of life.
3. Story: “The Never-Never Nest”
Author: Cedric Mount
Theme
The play tells us about the merits and demerits of buying things on a hire-purchase basis. Jack and Jill are a newly married couple. Attracted by the hire-purchase system, they buy all their domestic luxuries, including the house itself, on installments. In one sense even their child is not their own, since they have not made full payment of Dr. Martin’s bill. The system encourages lavishness and the taking of loans.
The writer points out that the hire-purchase system enables the low-income group to have things which they cannot buy with their own money. On the other hand, the system makes people extravagant; they fall into the habit of borrowing, which makes them unhappy.
4. Story: “Uncle Podger Hangs a Picture”
Author: Jerome K. Jerome
Theme
An eccentric person is a source of both fun and nuisance. He attaches great importance to petty things: if he has to do an ordinary job, he treats it as a great military operation. Basically such a person is stupid and forgetful, but he thinks too much of himself. Uncle Podger is such a person. He has merely to hang a picture, yet he treats it as a major operation and orders about all the members of his family. When the job is done, the picture hangs unsafely on the wall. In the process he provides a lot of amusement to the reader.
Thank You
Transient current
Introduction
Transients, which can be currents or voltages, occur momentarily and fleetingly in response to a stimulus or a change in the equilibrium of a circuit. Transients frequently occur when power is applied to or removed from a circuit, because of expanding or collapsing magnetic fields in inductors or the charging and discharging of capacitors.
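As a concrete illustration of a switching transient, the sketch below (with illustrative component values, not taken from the text) computes the current surge when a capacitor charges through a resistor after the supply is applied.

```python
import math

def rc_charging_current(v_source, r, c, t):
    """Transient current while a capacitor charges through a resistor.

    At t = 0 the source is applied; the current starts at V/R and
    decays exponentially with time constant tau = R*C.
    """
    tau = r * c
    return (v_source / r) * math.exp(-t / tau)

# Assumed example values: 10 V source, 1 kOhm resistor, 100 uF capacitor
V, R, C = 10.0, 1e3, 100e-6                  # tau = R*C = 0.1 s
i0 = rc_charging_current(V, R, C, 0.0)       # initial surge: V/R = 10 mA
i_tau = rc_charging_current(V, R, C, R * C)  # after one tau: ~36.8% of V/R
```

After about five time constants the transient has effectively died out and the circuit is back in equilibrium, which is the behaviour the paragraph above describes.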
MISSION STATEMENT
General Physiology is the study of biological mechanisms through analytical investigations which decipher the molecular and cellular mechanisms underlying biological function at all levels of organization.
The mission of the Journal of General Physiology is to publish articles that elucidate important biological, chemical, or physical mechanisms of broad physiological significance.
Two Fast Transient Current Components during Voltage Clamp on Snail Neurons
Voltage clamp currents from medium-sized ganglion cells of Helix pomatia have a fast transient outward current component in addition to the usually observed inward and outward currents. This component is inactivated at the normal resting potential. The current, which is carried by K+ ions, may surpass leakage currents by a factor of 100 after inactivation has been removed by hyperpolarizing conditioning pulses. Its kinetics are similar to those of the inward current, except that it has a longer time constant of inactivation. It has a threshold close to the resting potential. The time constants of the slow process are similar to those of slow outward current inactivation.
Transient current
Electric current is the motion of charge, and for a closed system the current must satisfy the equation of continuity

∂n/∂t + ∇·j = 0   (3.8)

or, integrated over the volume Ω,

(d/dt) ∫Ω n d³r = −I(t)   (3.9)

where n is the particle density, j the current density and I the total current in the volume Ω. In the system we study, n is identified through the total charge density ρ = −en, where e is the elementary charge. In the continuity equation (3.9) the integration is performed over some finite volume within which the current is calculated (see figure 3.4); here we consider the volume to be Ω = L·A, where L is the length in the current-flow (x-) direction and A is the cross-sectional surface area of the cylinder surrounding the lead.
Figure 3.4: Volume of integration; the cylinder length L is along the x-axis and its cross-sectional surface area is A.
By defining the number of charges in the left and right leads and their partial overlap, the transient charge current is given by equation (3.10). Suppose that the integration length L lies entirely in the left lead. Then, since the tail of a right-lead wave function is exponentially small in the left region, the overlap integrals are negligible. By adding the vector potential to the kinetic-energy part of the Hamiltonian, we calculate the current as a response to the electromagnetic field it describes; the system is then given by equation (3.11). The non-equilibrium hopping matrix element contains the vector potential. Next, this element is replaced by its corresponding equilibrium matrix element whenever both states belong to the same contact, i.e. the same side of the potential barrier, and the differences are neglected. The usual non-equilibrium tunneling Hamiltonian (3.12) is thus obtained. As discussed earlier, the shape of the potential may be arbitrary, since its explicit form is never used in the derivations.
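The integrated continuity equation (3.9) can be checked numerically on a one-dimensional grid; the sketch below is an illustrative discretization (grid values and fluxes are made up), verifying that the change in total particle number inside a volume equals the net flux through its boundaries.

```python
def step_continuity(n, j, dx, dt):
    """One explicit step of the 1-D continuity equation dn/dt + dj/dx = 0.

    j[i] is the flux on the link between cells i-1 and i, so j[0] and
    j[-1] are the fluxes in through the left and out through the right
    boundary of the volume.
    """
    return [n[i] - (dt / dx) * (j[i + 1] - j[i]) for i in range(len(n))]

# Illustrative grid: 4 cells, 5 link fluxes
n = [1.0, 2.0, 3.0, 2.0]
j = [0.5, 0.1, -0.2, 0.3, 0.4]
dx, dt = 1.0, 0.01

n2 = step_continuity(n, j, dx, dt)
dN = (sum(n2) - sum(n)) * dx        # change in total particle number
net_inflow = (j[0] - j[-1]) * dt    # net flux through the boundaries
```

By construction the interior fluxes cancel in the sum, so only the boundary fluxes change the total particle number, which is exactly what (3.9) states.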
________________________________________
Tutorial discussion on Transient assessment
Transients are divided into two categories which are easy to identify: impulsive and oscillatory. If the mains signal is removed, the remaining waveform is the pure component of the transient. The transient is classified in the impulsive category when 77% of the peak-to-peak voltage of the pure component is of one polarity. Each category is subdivided into three types related to the frequencies contained, and each type of transient can be associated with a group of phenomena occurring on the power system.
The impulsive low-frequency transient rises in 0.1 ms and lasts more than 1 ms. Measurement of these transients should be useful for all classes of application (benchmarking, legal, troubleshooting and laboratory).
The medium-frequency impulsive transient, lasting between 50 ns and 1 ms, and oscillatory transients between 5 and 500 kHz are less frequent than the low-frequency types but have much higher amplitude.
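The impulsive/oscillatory criterion described above can be sketched in a few lines. The 77% threshold is taken from the text; the function name and sample waveforms are illustrative.

```python
def classify_transient(pure_component):
    """Classify a transient from its 'pure component' (mains removed).

    Rule from the text: impulsive when at least 77% of the
    peak-to-peak excursion is of one polarity, else oscillatory.
    """
    pos_peak = max(max(pure_component), 0.0)
    neg_peak = max(-min(pure_component), 0.0)
    peak_to_peak = pos_peak + neg_peak
    if peak_to_peak == 0.0:
        return "none"
    dominant = max(pos_peak, neg_peak)
    return "impulsive" if dominant / peak_to_peak >= 0.77 else "oscillatory"

# A one-sided spike vs. a symmetric ringing waveform
spike = [0.0, 9.0, 4.0, 1.0, -0.5, 0.0]      # mostly positive polarity
ringing = [0.0, 5.0, -4.5, 4.0, -3.5, 0.0]   # excursions of both polarities
```

For the spike, 9 of the 9.5 V peak-to-peak excursion (about 95%) is positive, so it is impulsive; the ringing waveform splits its excursion almost evenly and is classified as oscillatory.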
Source voltage assessment
These standards specify an open-circuit voltage Ug which decreases at the terminals of an impedance Zs at the moment the generator injects a current into the equipment under test. This impedance Zs is known as the artificial mains network or line impedance stabilization network (LISN) and is specified as a function of the range of frequencies contained in the transient, as follows:
- (0.4 Ω + 800 µH) for frequencies lower than 9 kHz [IEC 725]
- 50 Ω in parallel with (5 Ω + 50 µH) for frequencies from 9 kHz to 150 kHz [CISPR 16]
- 50 Ω in parallel with 50 µH for frequencies from 150 kHz to 30 MHz [CISPR 16]
The source voltages UaS, UbS and UcS to be compared with the values recommended in the standards for susceptibility tests are given by equations [7], [8] and [9].
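The magnitude of the LISN impedance can be evaluated as a function of frequency; this is an illustrative sketch, not a normative implementation, and it assumes the unit-stripped figures in the text are resistances in ohms and inductances in microhenries (0.4 Ω + 800 µH; 50 Ω parallel with 5 Ω + 50 µH; 50 Ω parallel with 50 µH).

```python
import math

def lisn_impedance(f):
    """Magnitude (ohms) of the assumed LISN impedance at frequency f (Hz)."""
    w = 2 * math.pi * f
    if f < 9e3:                       # below 9 kHz: 0.4 ohm + 800 uH in series
        z = 0.4 + 1j * w * 800e-6
    elif f < 150e3:                   # 9-150 kHz: 50 ohm || (5 ohm + 50 uH)
        zb = 5 + 1j * w * 50e-6
        z = 50 * zb / (50 + zb)
    else:                             # 150 kHz - 30 MHz: 50 ohm || 50 uH
        zb = 1j * w * 50e-6
        z = 50 * zb / (50 + zb)
    return abs(z)

z_50hz = lisn_impedance(50.0)    # dominated by the 0.4 ohm resistance
z_30mhz = lisn_impedance(30e6)   # inductive branch is a near-open circuit
```

At the top of the range the 50 µH branch presents a very high reactance, so the network impedance approaches the 50 Ω resistor alone.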
Transient overvoltage envelope
The rms voltage assessment is used to assess the rms voltage envelope over a duration exceeding a half cycle. When the supply voltage U(t) includes a short transient detected at time t0, the percent voltage VP of the interval T related to the voltage envelope is given by

VP = (100 / VD) · √( (Δt / T) Σ U(t)² )   [10]

where the sum runs over the samples in the interval [t0, t0 + T] and:
VP = rms voltage as a percentage of the declared voltage VD
VD = rms declared voltage
t0 = beginning of the interval assessed
T = interval assessed
U(t) = supply voltage involving a short transient
Δt = sampling interval
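A minimal sketch of this rms assessment, assuming the percent voltage is the rms of the sampled supply voltage over the assessed interval, expressed as a percentage of the declared rms voltage; the 230 V waveform, sample rate and spike amplitude are illustrative values, not from the text.

```python
import math

def vp_percent(samples, dt, v_declared):
    """Rms of the sampled supply voltage over the assessed interval,
    expressed as a percentage of the declared rms voltage."""
    t_total = len(samples) * dt
    mean_square = sum(u * u for u in samples) * dt / t_total
    return 100.0 * math.sqrt(mean_square) / v_declared

# Synthetic example: one 50 Hz cycle of a 230 V rms supply sampled at
# 10 kHz, with a short 400 V spike superimposed near the positive peak.
dt = 1e-4
u = [230.0 * math.sqrt(2) * math.sin(2 * math.pi * 50 * k * dt)
     for k in range(200)]
u[30] += 400.0
vp = vp_percent(u, dt, 230.0)   # slightly above 100% because of the spike
```

Without the spike the assessment returns exactly 100% of the declared voltage; the short transient lifts the envelope figure by under two percentage points, showing why a separate amplitude-duration decomposition is needed to characterize it.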
Rms amplitude-duration decomposition. The variable ISV% is calculated using equation [11], where:
ISV = instantaneous steady-state voltage
VD = rms declared voltage
Evaluating this value in the interval between each half-decade yields the factors of the rms envelope, as follows:
VHFC = root mean square of voltages between 1 µs and 5 µs
HFC = root mean square of voltages between 5 µs and 10 µs
HMFC = root mean square of voltages between 10 µs and 50 µs
MFC = root mean square of voltages between 50 µs and 100 µs
MLFC = root mean square of voltages between 100 µs and 500 µs
LFC = root mean square of voltages between 500 µs and 1 ms
VLFC = root mean square of voltages between 1 ms and 5 ms
MEMBRANE POTENTIAL
Information transmission can be understood in terms of two major components: Electrical signals and chemical signals. Transient electrical signals are important for transferring information over long distances rapidly within the neuron. Chemical signals, on the other hand, are mainly involved in the transmission of information between neurons.
Electrical signals (receptor potentials, synaptic potentials and action potentials) are all caused by transient changes in the current flow into and out of the neuron, which drive the electrical potential across the plasma membrane away from its resting condition.
Every neuron has a separation of electrical charge across its cell membrane. The membrane potential results from a separation of positive and negative charges across the cell membrane. The relative excess of positive charges outside and negative charges inside the membrane of a nerve cell at rest is maintained because the lipid bilayer acts as a barrier to the diffusion of ions; it gives rise to an electrical potential difference which ranges from about 60 to 70 mV.
Vr = -60 to -70 mV.
Being Vr, the resting potential.
The charge separation across the membrane, and therefore the resting membrane potential, is disturbed whenever there is a net flux of ions into or out of the cell. A reduction of the charge separation is called depolarization; an increase in charge separation is called hyperpolarization. Transient current flow, and therefore rapid changes in potential, are made possible by ion channels, a class of integral proteins that traverse the cell membrane. There are two types of ion channels in the membrane: gated and nongated. Nongated channels are always open and are not influenced significantly by extrinsic factors. They are primarily important in maintaining the resting membrane potential. Gated channels, in contrast, open and close in response to specific electrical, mechanical, or chemical signals. Since ion channels recognize and select among specific ions, the actual distribution of ionic species across the membrane depends on the particular distribution of ion channels in the cell membrane.
Na and Cl are more concentrated outside the cell, while K and organic anions (organic acids and proteins) are more concentrated inside. The overall effect of this ionic distribution is the resting potential.
There are two forces acting on a given ionic species. The driving force of the chemical concentration gradient tends to move ions down this gradient (chemical potential). On the other hand, the electrostatic force due to the charge separation across the membrane tends to move ions in a direction determined by their particular charge. Thus, for instance, chloride ions, which are concentrated outside the cell, tend to move inward down their concentration gradient through nongated chloride channels. However, the relative excess of negative charge inside the membrane tends to push chloride ions back out of the cell. Eventually equilibrium can be reached, so that the actual ratio of intracellular to extracellular concentration ultimately depends on the existing membrane potential.
The same argument applies to potassium ions. For sodium, however, the two forces act together on each Na ion to drive it into the cell: first, Na is more concentrated outside than inside and therefore tends to flow into the cell down its concentration gradient; second, Na is driven into the cell by the electrical potential difference across the membrane. Therefore, if the cell is to have a steady resting membrane potential, the movement of Na ions into the cell must be balanced by the efflux of K ions. Although this steady ionic interchange can prevent irreversible depolarization, the process cannot be allowed to continue unopposed; otherwise the K pool would be depleted, intracellular Na would increase, and the ionic gradients would gradually run down, reducing the resting membrane potential.
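The equilibrium described above for each ionic species is quantified by the Nernst equation, E = (RT/zF)·ln([out]/[in]). The sketch below uses typical mammalian concentrations (assumed illustrative values, not from the text) to estimate the K+ and Na+ equilibrium potentials.

```python
import math

R = 8.314       # gas constant, J/(mol*K)
F = 96485.0     # Faraday constant, C/mol
T = 310.0       # body temperature, K

def nernst_mv(z, conc_out, conc_in):
    """Equilibrium (Nernst) potential in mV for an ion of valence z,
    given extracellular and intracellular concentrations."""
    return 1000.0 * (R * T) / (z * F) * math.log(conc_out / conc_in)

# Typical mammalian concentrations in mM (assumed values)
e_k = nernst_mv(+1, 5.0, 140.0)     # K+ : roughly -89 mV
e_na = nernst_mv(+1, 145.0, 12.0)   # Na+: roughly +66 mV
```

The K+ equilibrium potential lies close to the resting potential quoted above, while the Na+ equilibrium potential is far above it, which is why both forces drive Na+ into the cell at rest.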
Summary: The shielding properties of a wire penetrating an infinite planar screen are considered. Time domain results are presented for the case of a transient current pulse propagating along the wire. These results are obtained by first computing numerical solutions for the problem in the frequency domain and then utilizing the inverse Fourier transform. Two double exponential pulses with differing characteristics are considered. Numerical results for the two pulses are compared to determine the effects of the pulse characteristics on the shielding properties of the geometry. Applications to via structures in high-speed circuits are also briefly discussed. It is observed that even for very small apertures, the effect of the screen on the low-frequency pulse is negligible. As the pulse width decreases, the effect of the screen becomes more prominent. For the high-frequency case, the pulse is significantly affected by the screen. Unlike the low-frequency pulse, the amplitude of the high-frequency pulse is dependent on the aperture size. Even for large apertures, the attenuation becomes significant as the current propagates down the wire. It is shown that as the width of the input pulse decreases, the distortion in the pulse shape becomes more pronounced. This effect is especially important in applications related to high-speed integrated circuits.
BIBLIOGRAPHY
www.google.com/wikipedia
www.yahoo.com/physics fundamental
Pradeep's Fundamental Physics (textbook)
Tangent galvanometer
TANGENT
GALVANOMETER
TABLE OF CONTENTS
1. INTRODUCTION
2. REVIEW OF LITERATURE
3. THEORY AND WORKING
4. SUMMARY
5. BIBLIOGRAPHY
INTRODUCTION
A tangent galvanometer is an early measuring instrument used for the measurement of electric current. It works on the basis of the tangent law of magnetism, using a compass needle to compare the magnetic field generated by the unknown current with the magnetic field of the Earth. It gets its name from this operating principle, which states that the tangent of the angle a compass needle makes is proportional to the ratio of the strengths of the two perpendicular magnetic fields. It was first described by Claude Servais Mathias Pouillet in 1837.
A tangent galvanometer consists of a coil of insulated copper wire wound on a circular non-magnetic frame. The frame is mounted vertically on a horizontal base provided with levelling screws. The coil can be rotated on a vertical axis passing through its centre. A compass box is mounted horizontally at the centre of a circular scale. It consists of a tiny, powerful magnetic needle pivoted at the centre of the coil. The magnetic needle is free to rotate in the horizontal plane. The circular scale is divided into four quadrants. Each quadrant is graduated from 0° to 90°. A long thin aluminium pointer is attached to the needle at its centre and at right angle to it.
Tangent galvanometer, as described by Claude Servais Mathias Pouillet in 1837. The instrument has high sensitivity, and one of its early applications was in the inventor's studies of electrophysiology.
REVIEW OF LITERATURE
*Claude-Servais-Mathias Pouillet to verify Ohm’s law.
Scientist’s Name: Claude-Servais-Mathias Pouillet
Year of discovery: 1837
Title: to verify Ohm’s law.
The tangent galvanometer was first described in an 1837 paper by Claude-Servais-Mathias Pouillet, who later employed this sensitive form of galvanometer to verify Ohm's law. To use the galvanometer, it is first set up on a level surface and the coil aligned with the magnetic north-south direction.
*Professor W.A. Anthony
The Great Tangent Galvanometer
Cornell University. Ithaca, New York
Scientist’s Name: Professor W.A. Anthony
Year of discovery: 1885.
Title: for measurement of heavy currents and direct calibration
This is the great tangent galvanometer of Cornell University, dated 1885. Designed by Professor W.A. Anthony, it was developed to meet the needs of an instrument for the measurement of heavy currents and direct calibration of commercial instruments used for measuring currents in electric lighting, industry, etc.
*James Prescott Joule
Scientist’s Name: James Prescott Joule
Year: 1840
Title: graduation of the tangent galvanometer to an absolute system of electric measurements.
In 1840, he graduated his tangent galvanometer to correspond with the system of electric measurement he had adopted. The electric currents used in his experiments were thenceforth measured on the new system; and the numbers given in Joule's papers from 1840 downward are easily reducible to the modern absolute system of electric measurements.
*J. J. Nervander
Scientist’s Name: J. J. Nervander
Year: 1834
Title: to improve the measurements of electric current.
J. J. Nervander designed the more sensitive tangent galvanometer in 1834, which led to a great improvement in precise measurements of electric current. Because of its ingenious coiling arrangement, Nervander was able to use the tangent boussole to prove the validity of the law that the tangent of the deviation angle of its needle is proportional to the electric current flowing through its coil.
*Lord Kelvin's magneto-static tangent galvanometer
Scientist’s Name: Lord Kelvin
Year: 1887
Title: use of the tangent galvanometer as a lamp counter.
This form of the tangent galvanometer was designed by Lord Kelvin c. 1887. It is a magneto-static tangent galvanometer used as a lamp counter. The instrument originally consisted of a small magnet on an aluminium pointer suspended at the centre of two loops of heavy copper ribbon positioned above two sets of strong bar magnets.
It is very similar in construction to GLAHM 113325 suggesting that it was designed for use in a lighting system such as the one in Lord Kelvin's laboratory and lecture theatre.
THEORY AND WORKING
Construction
A TG consists of a circular coil of insulated copper wire wound on a circular non-magnetic frame. The frame is mounted vertically on a horizontal base provided with levelling screws. The coil can be rotated on a vertical axis passing through its centre. A circular compass box is mounted horizontally at the centre of a circular scale. It contains a tiny, powerful magnetic needle pivoted at the centre of the coil, free to rotate in the horizontal plane. The circular scale is divided into four quadrants, each graduated from 0° to 90°. A long, thin aluminium pointer is attached to the needle at its centre and at right angles to it. To avoid errors due to parallax, a plane mirror is mounted below the compass needle.
Theory
When current is passed through the TG, a magnetic field is created at its centre given by B = μ₀nI / 2r, where I is the current in amperes, n is the number of turns of the coil and r is the radius of the coil.
If the TG is set such that the plane of the coil is along the magnetic meridian, i.e. B is perpendicular to B_H (the horizontal component of the Earth's magnetic field), the needle rests along the resultant. From the tangent law, B = B_H tan θ, i.e.
μ₀nI / 2r = B_H tan θ
or I = K tan θ,
where K = 2rB_H / μ₀n is called the Reduction Factor of the TG.
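As a rough numerical sketch of the relation above (the coil geometry and the value of the Earth's horizontal field below are assumptions for illustration):

```python
import math

MU0 = 4 * math.pi * 1e-7   # T*m/A, permeability of free space

def reduction_factor(n_turns, radius_m, B_H):
    """K = 2*r*B_H / (mu0 * n), so that I = K * tan(theta)."""
    return 2 * radius_m * B_H / (MU0 * n_turns)

def current_from_deflection(theta_deg, n_turns, radius_m, B_H=3.5e-5):
    # B_H ~ 3.5e-5 T is a typical horizontal component of the Earth's
    # field; the true value depends on location (an assumption here).
    K = reduction_factor(n_turns, radius_m, B_H)
    return K * math.tan(math.radians(theta_deg))

# Example: 50-turn coil of radius 8 cm, needle deflected 45 degrees
I = current_from_deflection(45.0, n_turns=50, radius_m=0.08)
print(f"I ~ {I * 1000:.1f} mA")
```

Note that sensitivity is best near 45°, where a given fractional change in current produces the largest readable change in deflection.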
Working
In the tangent galvanometer there is a circular coil having one or more turns of wire, at the centre of which a magnetic needle is either balanced on a point or suspended by a fine fibre of silk or quartz. The instrument is placed so that the plane of the coil is vertical and in the magnetic north and south plane (Figure 5(A)).
FIGURE 5(A)
When a current is sent through the coil the needle turns to one side or the other, and the strength of the current is proportional to the tangent of the angle of deflection. The force due to the current in the coil is at right angles to the plane of the coil at its centre and the strength of the field at that point in a given coil is proportional to the strength of the current (Figure 5(B)).
FIGURE 5(B)
Let G represent the strength of field at the centre due to the coil when unit current is flowing; then IG will be the strength of field when the current strength is I. Let OA in Figure 5(B) represent the plane of the coil and O the point where the needle is placed. When no current is flowing the needle points in the direction OA, being acted on only by the horizontal component H of the earth's magnetic force. The magnetic force F due to the current in the coil is IG and at right angles to H; therefore the resultant force R is the diagonal of the rectangle whose sides are IG and H, and
tan x = IG / H,
where x is the angle which the resultant force makes with H. But the needle must point in the direction of the resultant force, and so x is the angle through which the needle turns. Therefore
I = (H / G) tan x,
and if H and G are known the current may be determined by measuring the angle x. In the case of a tangent galvanometer the magnetic force F due to the coil is expressed by IG. But if the current is measured in electromagnetic units,
F = 2πnI / r,
and since the length of n turns of wire of radius r is 2πrn,
G = 2πn / r.
The galvanometer coil constant G can be calculated from this formula when the coil of the galvanometer has so large a radius compared with the length of the needle that the poles of the needle may be regarded as at the centre, and when the cross section of the coil is so small that all the turns bear nearly the same relation to the needle. If G is determined in this way, r being measured in centimetres, and if H is given in C.G.S. units, the current will also be found in C.G.S. electromagnetic units by the use of the formula
I = (Hr / 2πn) tan x.
To obtain the current strength in amperes, we must take as the value of the coil constant
G = 2πn / 10r,
since one ampere is one-tenth of a C.G.S. electromagnetic unit of current.
By this method the strength of a current is determined in amperes directly from the fundamental units of length, mass, and time, for we have already seen how the measurement of H is based on these units. A tangent galvanometer in which the constant is determined in this way directly from measurements of the coil is known as a standard galvanometer.
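The standard-galvanometer calculation can be sketched numerically as follows (the coil dimensions and the value of H below are assumed for illustration):

```python
import math

def current_amperes(theta_deg, n_turns, radius_cm, H_gauss=0.18):
    """Standard tangent galvanometer: I = 10 * H * r * tan(x) / (2*pi*n).
    H is the horizontal component of the Earth's field in gauss (CGS);
    0.18 G is an assumed, location-dependent value. The factor 10
    converts from C.G.S. electromagnetic units to amperes."""
    G = 2 * math.pi * n_turns / radius_cm          # coil constant, CGS
    return 10.0 * H_gauss * math.tan(math.radians(theta_deg)) / G

# Example: 10-turn coil of radius 15 cm, deflection of 30 degrees
I = current_amperes(30.0, n_turns=10, radius_cm=15.0)
print(f"I ~ {I:.3f} A")
```

The point of the derivation is visible here: nothing in the formula is empirically calibrated, so the ampere value follows directly from measured lengths and the measured field H.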
SUMMARY
Galvanometers were the first instruments used to determine the presence, direction, and strength of an electric current in a conductor. All galvanometers are based upon the discovery by Hans C. Oersted that a magnetic needle is deflected by the presence of an electric current in a nearby conductor. The extent to which the needle turns is dependent upon the strength of the current. These meters were called tangent galvanometers because the tangent of the angle of deflection of the needle is proportional to the strength of the current in the coil. A tangent galvanometer consists of a coil of insulated copper wire wound on a circular non-magnetic frame. It works on the basis of the tangent law of magnetism, using a compass needle to compare the magnetic field generated by the unknown current with the magnetic field of the Earth. It gets its name from this operating principle, which states that the tangent of the angle a compass needle makes is proportional to the ratio of the strengths of the two perpendicular magnetic fields.
Struers Tangent Galvanometer
Unfortunately, simple galvanometers such as the Struers model shown above were inaccurate and inconsistent in their readings. By placing the compass at the centre of a precisely calculated circle, accuracy could be improved substantially (see below). Other improvements were added later, including replacing the compass with a specially designed meter movement, adding levelling screws, etc.
Central Scientific Tangent Galvanometer utilizing compass (1941)
These large stationary-coil type galvanometers were used as the standard current measuring instrument into the last quarter of the 19th century. Additional examples of tangent galvanometers are shown below:
Harris Tangent Galvanometer
Eureka Scientific Tangent Galvanometer
University Supply Tangent Galvanometer
Knott Tangent Galvanometer
Early Tangent Galvanometer
Early Rectangular Tangent Galvanometer
University Supply Tangent Galvanometer
Suggestion: One of the limitations of tangent galvanometers was that the length of the needle had to be kept very short in order to minimize the effects of the earth's magnetic field and reduce damping errors introduced by the mass of the needle itself. Unfortunately, the shorter the needle, the less distance the tip will travel as it inscribes an arc, and thus the more difficult it will be to read very small changes in current.
This problem is solved ingeniously by using a beam of light as the needle: a shaft is placed through the centre of the needle and a very small mirror is attached. A beam of light is reflected off the mirror and onto a scale located about three feet away. The result is that an extremely small deflection of the mirror causes a much larger movement of the beam on the scale. Galvanometers of this type are called reflecting galvanometers.
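The optical lever gain can be sketched with a little geometry: a mirror turned through an angle θ deflects the reflected beam through 2θ, so the spot on a scale at distance L moves by L tan 2θ. The half-degree deflection below is an assumed example:

```python
import math

def spot_displacement_mm(mirror_deg, scale_distance_m=0.9):
    """A mirror rotated by theta deflects the reflected beam by 2*theta,
    so the light spot moves d = L * tan(2*theta) on a scale at distance
    L (about three feet, i.e. ~0.9 m, as in the text)."""
    return 1000.0 * scale_distance_m * math.tan(math.radians(2 * mirror_deg))

# A mirror deflection of only half a degree is easy to read:
d = spot_displacement_mm(0.5)
print(f"spot moves ~ {d:.1f} mm")
```

A tenth of that mirror rotation still moves the spot over a millimetre, which is why the arrangement resolves currents a short-needle instrument cannot.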
BIBLIOGRAPHY
1. http://physics.kenyon.edu/EarlyApparatus/Electrical_Measurements/Tangent_Galvanometer/Tangent_Galvanometer.html
2. http://en.wikipedia.org/wiki/Galvanometer
3. http://www.historicalprintshop.com/web_pages/S/science/scientific.instruments/scientific.instruments.html
4. http://www.scran.ac.uk/database/record.php?usi=000-000-529-673-C&&
5. http://ieeexplore.ieee.org/Xplore/login.jsp?url=/iel5/5289/4534362/04534374.pdf?arnumber=4534374
6. http://www.scran.ac.uk/database/record.php?usi=000-000-529-497-C
7. http://www.economicexpert.com/a/Tangent:galvanometer.html
8. http://chem.ch.huji.ac.il/instruments/test/galvanometers.htm
9. http://www.sparkmuseum.com/GALV.HTM
optical isomerism
OPTICAL ISOMERISM
SUBMITTED BY- Shweta Bhardwaj
COURSE - CHE-155
PROGRAMME- BSc (hons.) BIOTECH
PROGRAMME CODE- 178
ROLL NO.- R280A03
REGISTRATION NO.- 10801595
SUBMITTED TO- Dr. Ramesh Thakur
OPTICAL ISOMERISM
Optical isomerism is a form of stereoisomerism. This page explains what stereoisomers are and how you recognise the possibility of optical isomers in a molecule.
What is stereoisomerism?
What are isomers?
Isomers are molecules that have the same molecular formula, but have a different arrangement of the atoms in space. That excludes any different arrangements which are simply due to the molecule rotating as a whole, or rotating about particular bonds.
Where the atoms making up the various isomers are joined up in a different order, this is known as structural isomerism. Structural isomerism is not a form of stereoisomerism, and is dealt with on a separate page.
What are stereoisomers?
In stereoisomerism, the atoms making up the isomers are joined up in the same order, but still manage to have a different spatial arrangement. Optical isomerism is one form of stereoisomerism.
Optical isomerism
Why optical isomers?
Optical isomers are named like this because of their effect on plane polarised light.
Simple substances which show optical isomerism exist as two isomers known as enantiomers.
• A solution of one enantiomer rotates the plane of polarisation in a clockwise direction. This enantiomer is known as the (+) form.
For example, one of the optical isomers (enantiomers) of the amino acid alanine is known as (+)alanine.
• A solution of the other enantiomer rotates the plane of polarisation in an anti-clockwise direction. This enantiomer is known as the (-) form, so the other enantiomer of alanine is known as (-)alanine.
• If the solutions are equally concentrated the amount of rotation caused by the two isomers is exactly the same - but in opposite directions.
• When optically active substances are made in the lab, they often occur as a 50/50 mixture of the two enantiomers. This is known as a racemic mixture or racemate. It has no effect on plane polarised light.
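The cancellation in a racemic mixture can be sketched as a weighted sum of the two opposite rotations; the specific rotation magnitude below is hypothetical, chosen only for illustration:

```python
def observed_rotation(frac_plus, specific_rotation=14.5):
    """Net optical rotation of a mixture of two enantiomers.
    frac_plus is the fraction of the (+) form; the (-) form rotates
    the plane equally in the opposite direction, so a 50/50 racemate
    gives zero. specific_rotation is a hypothetical magnitude in
    degrees, not a measured value for any real compound."""
    frac_minus = 1.0 - frac_plus
    return (frac_plus - frac_minus) * specific_rotation

print(observed_rotation(1.0))   # pure (+) form: +14.5
print(observed_rotation(0.0))   # pure (-) form: -14.5
print(observed_rotation(0.5))   # racemic mixture: 0.0
```

This is why a racemate is optically inactive even though every individual molecule in it is chiral.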
How optical isomers arise
The examples of organic optical isomers required at A' level all contain a carbon atom joined to four different groups. These two models each have the same groups joined to the central carbon atom, but still manage to be different:
Obviously as they are drawn, the orange and blue groups aren't aligned the same way. Could you get them to align by rotating one of the molecules? The next diagram shows what happens if you rotate molecule B.
They still aren't the same - and there is no way that you can rotate them so that they look exactly the same. These are isomers of each other.
They are described as being non-superimposable in the sense that (if you imagine molecule B being turned into a ghostly version of itself) you couldn't slide one molecule exactly over the other one. Something would always be pointing in the wrong direction.
What happens if two of the groups attached to the central carbon atom are the same? The next diagram shows this possibility.
The two models are aligned exactly as before, but the orange group has been replaced by another pink one.
Rotating molecule B this time shows that it is exactly the same as molecule A. You only get optical isomers if all four groups attached to the central carbon are different.
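The four-different-groups rule can be written as a one-line check; the group labels below are simple string stand-ins for substituents:

```python
def is_chiral_centre(groups):
    """A tetrahedral carbon is a chiral centre only when all four
    attached groups are different (the rule illustrated above)."""
    assert len(groups) == 4, "a tetrahedral carbon has four substituents"
    return len(set(groups)) == 4

# Butan-2-ol's C2 carries H, OH, CH3 and C2H5 -> chiral
print(is_chiral_centre(["H", "OH", "CH3", "C2H5"]))   # True
# Propan-2-ol's C2 carries two identical CH3 groups -> achiral
print(is_chiral_centre(["H", "OH", "CH3", "CH3"]))    # False
```

This check covers only the simple single-centre case discussed here; molecules with several stereocentres or other chirality elements need a fuller symmetry analysis.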
Chiral and achiral molecules
The essential difference between the two examples we've looked at lies in the symmetry of the molecules.
If there are two groups the same attached to the central carbon atom, the molecule has a plane of symmetry. If you imagine slicing through the molecule, the left-hand side is an exact reflection of the right-hand side.
Where there are four groups attached, there is no symmetry anywhere in the molecule.
A molecule which has no plane of symmetry is described as chiral. The carbon atom with the four different groups attached which causes this lack of symmetry is described as a chiral centre or as an asymmetric carbon atom.
The molecule on the left above (with a plane of symmetry) is described as achiral.
Only chiral molecules have optical isomers.
The relationship between the enantiomers
One of the enantiomers is simply a non-superimposable mirror image of the other one.
In other words, if one isomer looked in a mirror, what it would see is the other one. The two isomers (the original one and its mirror image) have a different spatial arrangement, and so can't be superimposed on each other.
If an achiral molecule (one with a plane of symmetry) looked in a mirror, you would always find that by rotating the image in space, you could make the two look identical. It would be possible to superimpose the original molecule and its mirror image.
Some real examples of optical isomers
Butan-2-ol
The asymmetric carbon atom in a compound (the one with four different groups attached) is often shown by a star.
It's extremely important to draw the isomers correctly. Draw one of them using standard bond notation to show the 3-dimensional arrangement around the asymmetric carbon atom. Then draw the mirror to show the examiner that you know what you are doing, and then the mirror image.
Notice that you don't literally draw the mirror images of all the letters and numbers! It is, however, quite useful to reverse large groups - look, for example, at the ethyl group at the top of the diagram.
It doesn't matter in the least in what order you draw the four groups around the central carbon. As long as your mirror image is drawn accurately, you will automatically have drawn the two isomers.
So which of these two isomers is (+)butan-2-ol and which is (-)butan-2-ol? There is no simple way of telling that. For A' level purposes, you can ignore that problem: all you need to be able to do is to draw the two isomers correctly.
2-hydroxypropanoic acid (lactic acid)
Once again the chiral centre is shown by a star.
The two enantiomers are:
It is important this time to draw the COOH group backwards in the mirror image. If you don't there is a good chance of you joining it on to the central carbon wrongly.
If you draw it like this in an exam, you won't get the mark for that isomer even if you have drawn everything else perfectly.
2-aminopropanoic acid (alanine)
This is typical of naturally-occurring amino acids. Structurally, it is just like the last example, except that the -OH group is replaced by -NH2.
The two enantiomers are:
Only one of these isomers occurs naturally: the (+) form. You can't tell just by looking at the structures which this is.
It has, however, been possible to work out which of these structures is which. Naturally occurring alanine is the right-hand structure, and the way the groups are arranged around the central carbon atom is known as an L- configuration. Notice the use of the capital L. The other configuration is known as D-.
So you may well find alanine described as L-(+)alanine.
That means that it has this particular structure and rotates the plane of polarisation clockwise.
Even if you know that a different compound has an arrangement of groups similar to alanine, you still can't say which way it will rotate the plane of polarisation.
The other amino acids, for example, have the same arrangement of groups as alanine does (all that changes is the CH3 group), but some are (+) forms and others are (-) forms.
It's quite common for natural systems to only work with one of the enantiomers of an optically active substance. It isn't too difficult to see why that might be. Because the molecules have different spatial arrangements of their various groups, only one of them is likely to fit properly into the active sites on the enzymes they work with.
In the lab, it is quite common to produce equal amounts of both forms of a compound when it is synthesised. This happens just by chance, and you tend to get racemic mixtures.
________________________________________
Note: For a detailed discussion of this, you could have a look at the page on the addition of HCN to aldehydes
________________________________________
Chirality
Two enantiomers of a generic amino acid
The two optical isomers of alanine.
The two enantiomers of bromochlorofluoromethane
The term chiral (pronounced /ˈkaɪrəl/) is used to describe an object that is non-superposable on its mirror image.
Human hands are perhaps the most universally recognized example of chirality: The left hand is a non-superposable mirror image of the right hand; no matter how the two hands are oriented, it is impossible for all the major features of both hands to coincide. This difference in symmetry becomes obvious if someone attempts to shake the right hand of a person using his left hand, or if a left-handed glove is placed on a right hand. The term chirality is derived from the Greek word for hand, χειρ (/cheir/).
When used in the context of chemistry, chirality usually refers to molecules. Two mirror images of a molecule that cannot be superposed onto each other are referred to as enantiomers or optical isomers. Because the difference between right and left hands is universally known and easy to observe, many pairs of enantiomers are designated as "right-" and "left-handed." A mixture of equal amounts of the two enantiomers is said to be a racemic mixture. Molecular chirality is of interest because of its application to stereochemistry in inorganic chemistry, organic chemistry, physical chemistry, biochemistry, and supramolecular chemistry.
The symmetry of a molecule (or any other object) determines whether it is chiral. A molecule is achiral (not chiral) if and only if it has an axis of improper rotation; that is, an n-fold rotation (rotation by 360°/n) followed by a reflection in the plane perpendicular to this axis that maps the molecule onto itself. (See chirality (mathematics).) A simplified rule applies to tetrahedrally-bonded carbon, as shown in the illustration: if all four substituents are different, the molecule is chiral. A chiral molecule is not necessarily asymmetric, that is, devoid of any symmetry elements, as it can have, for example, rotational symmetry.
History
The term optical activity is derived from the interaction of chiral materials with polarized light. A solution of the (−)-form of an optical isomer rotates the plane of polarization of a beam of plane polarized light in a counterclockwise direction, vice-versa for the (+) optical isomer. The property was first observed by Jean-Baptiste Biot in 1815 [1], and gained considerable importance in the sugar industry, analytical chemistry, and pharmaceuticals. Louis Pasteur deduced in 1848 that this phenomenon has a molecular basis[2]. Artificial composite materials displaying the analog of optical activity but in the microwave region were introduced by J.C. Bose in 1898 [3], and gained considerable attention from the mid-1980s [4]. The term chirality itself was coined by Lord Kelvin in 1873.[1]
The word “racemic” is derived from the Latin word for a bunch of grapes; the term has its origins in the work of Louis Pasteur, who isolated racemic tartaric acid from wine.
Naming conventions
By configuration: R- and S-
For chemists, the R / S system is the most important nomenclature system for denoting enantiomers, and it does not involve a reference molecule such as glyceraldehyde. It labels each chiral center R or S according to a system by which its substituents are each assigned a priority, following the Cahn–Ingold–Prelog (CIP) rules, based on atomic number. If the center is oriented so that the lowest-priority substituent of the four points away from the viewer, the viewer will see two possibilities: if the priority of the remaining three substituents decreases in the clockwise direction, the center is labeled R (for rectus); if it decreases in the counterclockwise direction, it is S (for sinister).
This system labels each chiral center in a molecule (and also has an extension to chiral molecules not involving chiral centers). Thus, it has greater generality than the D/L system, and can label, for example, an (R,R) isomer versus an (R,S) — diastereomers.
The R / S system has no fixed relation to the (+)/(−) system. An R isomer can be either dextrorotatory or levorotatory, depending on its exact substituents.
The R / S system also has no fixed relation to the D/L system. For example, the side chain of serine contains a hydroxyl group, -OH. If a thiol group, -SH, were swapped in for it, the D/L labeling would, by its definition, not be affected by the substitution. But this substitution would invert the molecule's R / S labeling, because the CIP priority of CH2OH is lower than that of CO2H, whereas the CIP priority of CH2SH is higher than that of CO2H.
For this reason, the D/L system remains in common use in certain areas of biochemistry, such as amino acid and carbohydrate chemistry, because it is convenient to have the same chiral label for all of the commonly occurring structures of a given type of structure in higher organisms. In the D/L system, they are all L; in the R / S system, they are mostly S but there are some common exceptions.
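The clockwise/counterclockwise test described above can be sketched numerically: once the four substituents have been ranked by CIP priority and given 3-D coordinates, the sign of a scalar triple product plays the role of the viewer's eye. This is only a minimal sketch, the coordinates below are an idealized tetrahedron rather than a real molecule, and the sign convention should be checked against a structure of known configuration before any serious use:

```python
def rs_label(p1, p2, p3, p4):
    """p1..p4: (x, y, z) positions of the substituents in decreasing
    CIP priority. With priority 4 pointing away from the viewer, a
    clockwise 1 -> 2 -> 3 arrangement (label R) gives a negative
    scalar triple product of the vectors from p4 to p1, p2, p3."""
    def sub(a, b):
        return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
    a, b, c = sub(p1, p4), sub(p2, p4), sub(p3, p4)
    # scalar triple product a . (b x c)
    det = (a[0] * (b[1] * c[2] - b[2] * c[1])
         - a[1] * (b[0] * c[2] - b[2] * c[0])
         + a[2] * (b[0] * c[1] - b[1] * c[0]))
    return 'R' if det < 0 else 'S'

# Idealized tetrahedron (hypothetical coordinates, not a real molecule):
tet = [(1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)]
print(rs_label(*tet))                                # 'S' for this geometry
mirror = [(-x, y, z) for x, y, z in tet]             # reflect in the yz-plane
print(rs_label(*mirror))                             # 'R' -- reflection flips the label
```

The essential point the sketch demonstrates is the one made in the text: reflection inverts the descriptor, while no rotation of the coordinates can.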
By optical activity: (+)- and (−)-
An enantiomer can be named by the direction in which it rotates the plane of polarized light. If it rotates the light clockwise (as seen by a viewer towards whom the light is traveling), that enantiomer is labeled (+). Its mirror-image is labeled (−). The (+) and (−) isomers have also been termed d- and l-, respectively (for dextrorotatory and levorotatory). This labeling is easy to confuse with D- and L-.
By configuration: D- and L-
An optical isomer can be named by the spatial configuration of its atoms. The D/L system does this by relating the molecule to glyceraldehyde. Glyceraldehyde is itself chiral, and its two isomers are labeled D and L. Certain chemical manipulations can be performed on glyceraldehyde without affecting its configuration, and its historical use for this purpose (possibly combined with its convenience as one of the smallest commonly used chiral molecules) has resulted in its use for nomenclature. In this system, compounds are named by analogy to glyceraldehyde, which, in general, produces unambiguous designations, but is easiest to see in small biomolecules similar to glyceraldehyde. One example is the amino acid alanine, which has two optical isomers; they are labeled according to which isomer of glyceraldehyde they correspond to. Glycine, by contrast, has no optical isomers, because it is achiral; alanine, however, is chiral.
The D/L labeling is unrelated to (+)/(−); it does not indicate which enantiomer is dextrorotatory and which is levorotatory. Rather, it says that the compound's stereochemistry is related to that of the dextrorotatory or levorotatory enantiomer of glyceraldehyde—the dextrorotatory isomer of glyceraldehyde is, in fact, the D isomer. Nine of the nineteen L-amino acids commonly found in proteins are dextrorotatory (at a wavelength of 589 nm), and D-fructose is also referred to as levulose because it is levorotatory.
A rule of thumb for determining the D/L isomeric form of an amino acid is the "CORN" rule. The groups:
COOH, R, NH2 and H (where R is a variant carbon chain)
are arranged around the chiral center carbon atom. Sighting with the hydrogen atom away from the viewer, if these groups are arranged clockwise around the carbon atom, then it is the D-form. If counter-clockwise, it is the L-form.
Nomenclature
• Any non-racemic chiral substance is called scalemic [2]
• A chiral substance is enantiopure or homochiral when only one of two possible enantiomers is present.
• A chiral substance is enantioenriched or heterochiral when an excess of one enantiomer is present but not to the exclusion of the other.
• Enantiomeric excess or ee is a measure for how much of one enantiomer is present compared to the other. For example, in a sample with 40% ee in R, the remaining 60% is racemic with 30% of R and 30% of S, so that the total amount of R is 70%.
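The arithmetic in the last bullet can be checked in a couple of lines: if ee = (major − minor)/(major + minor) and the two mole fractions sum to 1, then major = (1 + ee)/2 and minor = (1 − ee)/2. A small sketch in Python:

```python
def composition_from_ee(ee_major):
    """Return (major, minor) mole fractions for a given enantiomeric
    excess, where ee = (major - minor) / (major + minor) and the two
    fractions sum to 1."""
    major = (1 + ee_major) / 2
    minor = (1 - ee_major) / 2
    return major, minor

# The worked example above: 40% ee in R
r, s = composition_from_ee(0.40)
print(r, s)  # roughly 0.7 and 0.3 -- i.e. 70% R, 30% S:
             # a 60% racemic portion (30% R + 30% S) plus the 40% excess of R
```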
Types
In general, chiral molecules have point chirality, centering around a single atom, usually carbon, which has four different substituents. The two enantiomers of such compounds are said to have different absolute configurations at this center. This center is thus stereogenic (i.e., a grouping within a molecular entity that may be considered a focus of stereoisomerism), and is exemplified by the α-carbon of amino acids. A molecule can have multiple chiral centers without being chiral overall if there is a symmetry element (a mirror plane or inversion center), which relates the two (or more) chiral centers. Such a molecule is called a meso compound. It is also possible for a molecule to be chiral without having actual point chirality. Common examples include 1,1'-bi-2-naphthol (BINOL) and 1,3-dichloro-allene, which have axial chirality, and (E)-cyclooctene, which has planar chirality.
It is important to keep in mind that molecules that are dissolved in solution or are in the gas phase usually have considerable flexibility, and, thus, may adopt a variety of different conformations. These various conformations are themselves almost always chiral. However, when assessing chirality, one must use a structural picture of the molecule that corresponds to just one chemical conformation - the most symmetric conformation possible.
When the optical rotation for an enantiomer is too low for practical measurement it is said to exhibit cryptochirality.
Even isotopic differences must be considered when examining chirality. Replacing one of the two 1H atoms at the CH2 position of benzyl alcohol with a deuterium (²H) makes that carbon a stereocenter. The resulting benzyl-α-d alcohol exists as two distinct enantiomers, which can be assigned by the usual stereochemical naming conventions. The S enantiomer has [α]D = +0.715°.[5]
Properties of enantiomers
Enantiomers are identical with respect to ordinary chemical reactions and properties (i.e., will have identical Rfs by TLC, identical NMR spectra, identical IR spectra), but differences arise when they are in the presence of other chiral molecules or objects. Different enantiomers of chiral compounds often taste and smell differently and have different effects as drugs - see below.
One chiral 'object' that interacts differently with the two enantiomers of a chiral compound is circularly polarised light: An enantiomer will absorb left- and right-circularly polarised light to differing degrees. This is the basis of circular dichroism (CD) spectroscopy. Usually the difference in absorptivity is relatively small (parts per thousand). CD spectroscopy is a powerful analytical technique for investigating the secondary structure of proteins and for determining the absolute configurations of chiral compounds, in particular, transition metal complexes. CD spectroscopy is replacing polarimetry as a method for characterising chiral compounds, although the latter is still popular with sugar chemists.
In biology
Many biologically active molecules are chiral, including the naturally occurring amino acids (the building blocks of proteins) and sugars. In biological systems, most of these compounds are of the same chirality: most amino acids are L and most sugars are D. Typical naturally occurring proteins are therefore made of L-amino acids; a protein assembled from D-amino acids would be the mirror image of its natural counterpart.
The origin of this homochirality in biology is the subject of much debate.[6] Most scientists believe that Earth life's “choice” of chirality was purely random, and that if carbon-based life forms exist elsewhere in the universe, their chemistry could theoretically have opposite chirality.
Enzymes, which are chiral, often distinguish between the two enantiomers of a chiral substrate. Imagine an enzyme as having a glove-like cavity that binds a substrate. If this glove is right-handed, then one enantiomer will fit inside and be bound, whereas the other enantiomer will have a poor fit and is unlikely to bind.
D-form amino acids tend to taste sweet, whereas L-forms are usually tasteless. Spearmint leaves and caraway seeds, respectively, contain L-carvone and D-carvone - enantiomers of carvone. These smell different to most people because our olfactory receptors also contain chiral molecules that behave differently in the presence of different enantiomers.
Chirality is important in context of ordered phases as well, for example the addition of a small amount of an optically active molecule to a nematic phase (a phase that has long range orientational order of molecules) transforms that phase to a chiral nematic phase (or cholesteric phase). Chirality in context of such phases in polymeric fluids has also been studied in this context (Srinivasarao, 1999).
In drugs
Many chiral drugs must be made with high enantiomeric purity due to potential side-effects of the other enantiomer. (The other enantiomer may also merely be inactive.)
• Thalidomide: Thalidomide is racemic. One enantiomer is effective against morning sickness, whereas the other is teratogenic. In this case, administering just one of the enantiomers to a pregnant patient does not help, because the two enantiomers are readily interconverted in vivo: if a person is given either enantiomer, both will eventually be present in the patient's serum.
• Ethambutol: Whereas one enantiomer is used to treat tuberculosis, the other causes blindness.
• Naproxen: One enantiomer is used to treat arthritis pain, but the other causes liver poisoning with no analgesic effect.
• Steroid receptor sites also show stereoisomer specificity.
• Penicillin's activity is stereodependent. The antibiotic must mimic the D-alanine chains that occur in the cell walls of bacteria in order to react with and subsequently inhibit bacterial transpeptidase enzyme.
• Only L-propranolol is a powerful adrenoceptor antagonist, whereas D-propranolol is not. However, both have local anesthetic effect.
• The L-isomer of Methorphan, levomethorphan is a potent opioid analgesic, while the D-isomer, dextromethorphan is a dissociative cough suppressant.
• S(-) isomer of carvedilol, a drug that interacts with adrenoceptors, is 100 times more potent as beta receptor blocker than R(+) isomer. However, both the isomers are approximately equipotent as alpha receptor blockers.
• The D-isomers of amphetamine and methamphetamine are strong CNS (central nervous system) stimulants, while the L-isomers of both drugs lack appreciable CNS stimulant effects and instead stimulate the peripheral nervous system. For this reason, the levo isomer of methamphetamine is available as an OTC nasal inhaler in some countries, while the dextro isomer is banned from medical use in all but a few countries in the world and is highly regulated in the countries that do allow its medical use.
In inorganic chemistry
Many coordination compounds are chiral; for example, the well-known [Ru(2,2'-bipyridine)3]2+ complex in which the three bipyridine ligands adopt a chiral propeller-like arrangement [7]. In this case, the Ru atom may be regarded as a stereogenic center, with the complex having point chirality. The two enantiomers of complexes such as [Ru(2,2'-bipyridine)3]2+ may be designated as Λ (left-handed twist of the propeller described by the ligands) and Δ (right-handed twist). Hexol is a chiral cobalt complex that was first investigated by Alfred Werner. Resolved hexol is significant as being the first compound devoid of carbon to display optical activity.
Chirality of amines
Tertiary amines (see image) are chiral in a way similar to carbon compounds: The nitrogen atom bears four distinct substituents counting the lone pair. However, the energy barrier for the inversion of the stereocenter is, in general, about 30 kJ/mol, which means that the two stereoisomers are rapidly interconverted at room temperature. As a result, amines such as NHRR' cannot be resolved optically and NRR'R" can only be resolved when the R, R', and R" groups are constrained in cyclic structures.
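The quoted ~30 kJ/mol barrier can be turned into a rough interconversion rate with the Eyring equation, k = (kB·T/h)·exp(−ΔG‡/RT). This is only an order-of-magnitude sketch (it treats the quoted barrier as a free energy of activation and ignores the transmission coefficient), but it shows why the two stereoisomers cannot be separated at room temperature:

```python
import math

kB = 1.380649e-23    # Boltzmann constant, J/K
h  = 6.62607015e-34  # Planck constant, J*s
R  = 8.314           # gas constant, J/(mol*K)

def inversion_rate(barrier_j_per_mol, temperature=298.15):
    """Eyring-equation estimate of a first-order rate constant (per second)
    for a process with the given activation barrier."""
    return (kB * temperature / h) * math.exp(-barrier_j_per_mol / (R * temperature))

k = inversion_rate(30e3)  # the ~30 kJ/mol nitrogen-inversion barrier quoted above
print(f"{k:.2e} per second")  # on the order of 1e7 inversions per second
```

Tens of millions of inversions per second means the two forms equilibrate essentially instantly, which is exactly why NHRR' amines cannot be resolved.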
Theory of origin
A paper published on February 29, 2008 by researchers led by Sandra Pizzarello of Arizona State University reveals that the Murchison meteorite contains sizable molecular asymmetry of up to 14%, "giving support to the idea that biomolecular traits such as chiral asymmetry could have been seeded in abiotic chemistry ahead of life."[8]
"Thanks to the pristine nature of this meteorite, we were able to demonstrate that other extraterrestrial amino acids carry the left-handed excesses in meteorites and, above all, that these excesses appear to signify that their precursor molecules, the aldehydes, also carried such excesses," Pizzarello said. "In other words, a molecular trait that defines life seems to have broader distribution as well as a long cosmic lineage."[3]
Other theories of the origin of chirality on Earth have also been proposed, such as the weak nuclear force.[4]
Chemical chirality in fiction
Although little was known about chemical chirality in the time of Lewis Carroll, his work Through the Looking-glass contains a prescient reference to the differing biological activities of enantiomeric drugs: "Perhaps Looking-glass milk isn't good to drink," Alice said to her cat.
In James Blish's Star Trek novella Spock Must Die! the tachyon 'mirrored' Mr Spock is later discovered to have stolen chemical reagents from the medical bay and to have been using them to convert certain amino acids to opposite-chirality isomers, since the mirrored Mr Spock's metabolism is reversed, and, hence, must process the opposite polarity of these isomers.
In Larry Niven's Destiny's Road, the title planet's indigenous life is based upon right-handed proteins. When human colonists arrive from Earth via a generation ship, extreme measures are taken to permit the colony's survival. A peninsula is sterilized with a lander's fusion drive, creating the titular "road" out of fused bedrock. The area is then reseeded with Earth life to provide the colonists with food. Though the soil lacks potassium due to other factors, necessitating supplements that produce a hydraulic empire common to Niven's fiction, the colony otherwise prospers. Native viruses and bacteria cannot infect colonists, resulting in longer lifespans. Sealife quickly recovers, and is consumed by the colonists as a "diet" food, as their digestive systems cannot metabolize it into fat.
In the Trauma Center series of games, doctors test for a "chiral reaction" in order to determine whether or not a patient is infected with "Gangliated Utrophin Immuno Latency Toxin," a fictional, parasitic pathogen more commonly referred to as G.U.I.L.T. A positive reaction means the patient is infected, while a negative reaction means the patient has either been cured or is not infected.
Structure and Optical Isomerism
A very important feature of the structure of amino acids (and other kinds of compounds as well, for that matter) is called optical isomerism. It applies to all amino acids except glycine.
Look at the number-two carbon atom. You should notice that in one direction it is bonded to an amino group. In another direction, it is bonded to a carboxylic group. It is also bonded to a hydrogen atom and an alkyl group or some other kind of group. Except in the case of glycine where -R is a -H, that number two carbon atom is bonded to four different groups. A carbon atom which is attached to four different groups is called an asymmetric carbon atom or sometimes a chiral carbon atom. The importance of this depends on some structural properties that we will investigate in this section.
If you are in the lab, get a model kit and follow along with the diagrams shown here. Take a carbon atom and attach four different groups to it. For convenience, just use different colored units rather than actually building an amino group, a carboxylic acid group, an isopropyl group, or the like. Then make the other models as they are shown below. If you are not in the lab now, work through this exercise with the models when you are.
Here is a model of a carbon atom with four different groups attached.
Here is another model constructed to be the mirror image of the first. To do this, build a model that looks the way the first model would appear if you viewed it in a mirror.
Here you can see why these are called mirror images of one another.
We can demonstrate that these two structures are not identical to one another by trying to superimpose one structure on another and get all of the same colored units to be in the identical places. You can see that is not possible.
The two structures are different. They are isomers of one another. It so happens that they are called optical isomers of one another because they have optical properties that are different from one another. We will discuss that particular property a little bit more when we discuss carbohydrates in a later lesson.
When asymmetric carbon atoms are present in a molecular compound, there are two ways in which the groups attached to that carbon can be arranged in the three dimensions, as we have just shown with the two models above. It is generally true, if not universally true, that only one of these optical isomers is biologically active. In other words, when these compounds are made by a plant or animal, only one of the two forms is made. When it comes time for these molecules to interact with an enzyme, only one of these molecules would react. The other would not. Both shape and orientation in biological compounds are extremely important.
Chemically, optical isomers behave the same. Biologically, they do not. One will react properly, but the other will not. Optically, there is also that difference which will be pointed out when we deal with carbohydrates in a later lesson.
We can use these models to illustrate why you need four different groups bonded to the central atom. One group (the black group) has been removed from the model on the left and replaced with a duplicate of one of the other three groups (the white group). We now have a model with the central atom bonded to four groups, but they are not all different. The same has been done to the mirror image (unfortunately, you cannot see that).
By turning the second model in the right way you can see that it is identical to the first one.
Consequently, this central atom is not an asymmetric carbon atom, the molecule is not optically active, and the two structures are identical compounds, not optical isomers.
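The conclusion of this model exercise can also be checked combinatorially rather than with physical models: the rotations of a regular tetrahedron realize exactly the even permutations of its four vertices, while a reflection realizes an odd one. So the mirror image can be rotated back onto the original precisely when some even permutation of the labels reproduces the reflected labelling. A short sketch (the group names are arbitrary placeholders):

```python
from itertools import permutations

def parity(perm):
    """Parity of a permutation given as a tuple of indices: +1 even, -1 odd."""
    perm = list(perm)
    sign = 1
    for i in range(len(perm)):
        while perm[i] != i:
            j = perm[i]
            perm[i], perm[j] = perm[j], perm[i]
            sign = -sign
    return sign

def superimposable_on_mirror(labels):
    """labels: the four groups at the vertices of a regular tetrahedron.
    A mirror reflection acts as an odd permutation of the vertices (here,
    swapping the first two), and rotations act as the even permutations,
    so the mirror image is superimposable on the original exactly when
    some even permutation of the labels equals the reflected labelling."""
    mirrored = (labels[1], labels[0], labels[2], labels[3])
    return any(
        tuple(labels[i] for i in p) == mirrored
        for p in permutations(range(4))
        if parity(p) == 1
    )

print(superimposable_on_mirror(("COOH", "NH2", "H", "CH3")))      # False: chiral
print(superimposable_on_mirror(("white", "white", "red", "blue")))  # True: two groups the same
```

With four different groups no even permutation can undo the reflection, so the mirror image is a genuinely different arrangement; duplicate any group and the identity permutation already matches, which is the result demonstrated with the models above.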
References
1. ^ Pedro Cintas. "Tracing the Origins and Evolution of Chirality and Handedness in Chemical Language". Angewandte Chemie International Edition 46 (22): 4016-4024. doi:10.1002/anie.200603714.
2. ^ Infelicitous stereochemical nomenclature (cited for the term "scalemic").
3. ^ Arizona State University (2008, February 29). Key To Life Before Its Origin On Earth May Have Been Discovered. ScienceDaily. Retrieved June 16, 2008, from http://www.sciencedaily.com/releases/2008/02/080228174823.htm
4. ^ Castelvecchi, Davide (2007). "Alien Pizza, Anyone?", Science News, vol. 172, pp. 107-109.
SUBMITTED BY- Shweta Bhardwaj
COURSE - CHE-155
PROGRAMME- BSc (hons.) BIOTECH
PROGRAMME CODE- 178
ROLL NO.- R280A03
REGISTRATION NO.- 10801595
SUBMITTED TO- Dr. Ramesh Thakur
OPTICAL ISOMERISM
Optical isomerism is a form of stereoisomerism. This page explains what stereoisomers are and how you recognise the possibility of optical isomers in a molecule.
What is stereoisomerism?
What are isomers?
Isomers are molecules that have the same molecular formula, but have a different arrangement of the atoms in space. That excludes any different arrangements which are simply due to the molecule rotating as a whole, or rotating about particular bonds.
Where the atoms making up the various isomers are joined up in a different order, this is known as structural isomerism. Structural isomerism is not a form of stereoisomerism, and is dealt with on a separate page.
What are stereoisomers?
In stereoisomerism, the atoms making up the isomers are joined up in the same order, but still manage to have a different spatial arrangement. Optical isomerism is one form of stereoisomerism.
Optical isomerism
Why optical isomers?
Optical isomers are named like this because of their effect on plane polarised light.
Simple substances which show optical isomerism exist as two isomers known as enantiomers.
• A solution of one enantiomer rotates the plane of polarisation in a clockwise direction. This enantiomer is known as the (+) form.
For example, one of the optical isomers (enantiomers) of the amino acid alanine is known as (+)alanine.
• A solution of the other enantiomer rotates the plane of polarisation in an anti-clockwise direction. This enantiomer is known as the (-) form. So the other enantiomer of alanine is known as (-)alanine.
• If the solutions are equally concentrated the amount of rotation caused by the two isomers is exactly the same - but in opposite directions.
• When optically active substances are made in the lab, they often occur as a 50/50 mixture of the two enantiomers. This is known as a racemic mixture or racemate. It has no effect on plane polarised light.
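The cancellation described in the last two bullets is just a weighted average: the observed rotation of a mixture is the mole-fraction-weighted sum of two equal and opposite contributions. A small sketch (the +10 degree rotation is a made-up illustrative value, not data for any real compound):

```python
def mixture_rotation(rotation_plus, fraction_plus):
    """Observed rotation of a (+)/(-) mixture, taking the (-) enantiomer's
    rotation as exactly the negative of the (+) enantiomer's, as the text
    states for equally concentrated solutions."""
    fraction_minus = 1 - fraction_plus
    return fraction_plus * rotation_plus + fraction_minus * (-rotation_plus)

# With a hypothetical rotation of +10 degrees for the pure (+) form:
print(mixture_rotation(10, 1.0))  # 10.0  -- pure (+)
print(mixture_rotation(10, 0.0))  # -10.0 -- pure (-): equal and opposite
print(mixture_rotation(10, 0.5))  # 0.0   -- racemic mixture: no net rotation
```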
How optical isomers arise
The examples of organic optical isomers required at A' level all contain a carbon atom joined to four different groups. These two models each have the same groups joined to the central carbon atom, but still manage to be different:
Obviously as they are drawn, the orange and blue groups aren't aligned the same way. Could you get them to align by rotating one of the molecules? The next diagram shows what happens if you rotate molecule B.
They still aren't the same - and there is no way that you can rotate them so that they look exactly the same. These are isomers of each other.
They are described as being non-superimposable in the sense that (if you imagine molecule B being turned into a ghostly version of itself) you couldn't slide one molecule exactly over the other one. Something would always be pointing in the wrong direction.
What happens if two of the groups attached to the central carbon atom are the same? The next diagram shows this possibility.
The two models are aligned exactly as before, but the orange group has been replaced by another pink one.
Rotating molecule B this time shows that it is exactly the same as molecule A. You only get optical isomers if all four groups attached to the central carbon are different.
Chiral and achiral molecules
The essential difference between the two examples we've looked at lies in the symmetry of the molecules.
If there are two groups the same attached to the central carbon atom, the molecule has a plane of symmetry. If you imagine slicing through the molecule, the left-hand side is an exact reflection of the right-hand side.
Where there are four groups attached, there is no symmetry anywhere in the molecule.
A molecule which has no plane of symmetry is described as chiral. The carbon atom with the four different groups attached which causes this lack of symmetry is described as a chiral centre or as an asymmetric carbon atom.
The molecule on the left above (with a plane of symmetry) is described as achiral.
Only chiral molecules have optical isomers.
The relationship between the enantiomers
One of the enantiomers is simply a non-superimposable mirror image of the other one.
In other words, if one isomer looked in a mirror, what it would see is the other one. The two isomers (the original one and its mirror image) have a different spatial arrangement, and so can't be superimposed on each other.
If an achiral molecule (one with a plane of symmetry) looked in a mirror, you would always find that by rotating the image in space, you could make the two look identical. It would be possible to superimpose the original molecule and its mirror image.
Some real examples of optical isomers
Butan-2-ol
The asymmetric carbon atom in a compound (the one with four different groups attached) is often shown by a star.
It's extremely important to draw the isomers correctly. Draw one of them using standard bond notation to show the 3-dimensional arrangement around the asymmetric carbon atom. Then draw the mirror to show the examiner that you know what you are doing, and then the mirror image.
Notice that you don't literally draw the mirror images of all the letters and numbers! It is, however, quite useful to reverse large groups - look, for example, at the ethyl group at the top of the diagram.
It doesn't matter in the least in what order you draw the four groups around the central carbon. As long as your mirror image is drawn accurately, you will automatically have drawn the two isomers.
So which of these two isomers is (+)butan-2-ol and which is (-)butan-2-ol? There is no simple way of telling that. For A'level purposes, you can just ignore that problem - all you need to be able to do is to draw the two isomers correctly.
2-hydroxypropanoic acid (lactic acid)
Once again the chiral centre is shown by a star.
The two enantiomers are:
It is important this time to draw the COOH group backwards in the mirror image. If you don't, there is a good chance of joining it on to the central carbon wrongly.
If you draw it like this in an exam, you won't get the mark for that isomer, even if you have drawn everything else perfectly.
2-aminopropanoic acid (alanine)
This is typical of naturally-occurring amino acids. Structurally, it is just like the last example, except that the -OH group is replaced by -NH2
The two enantiomers are:
Only one of these isomers occurs naturally: the (+) form. You can't tell just by looking at the structures which this is.
It has, however, been possible to work out which of these structures is which. Naturally occurring alanine is the right-hand structure, and the way the groups are arranged around the central carbon atom is known as an L- configuration. Notice the use of the capital L. The other configuration is known as D-.
So you may well find alanine described as L-(+)alanine.
That means that it has this particular structure and rotates the plane of polarisation clockwise.
Even if you know that a different compound has an arrangement of groups similar to alanine, you still can't say which way it will rotate the plane of polarisation.
The other amino acids, for example, have the same arrangement of groups as alanine does (all that changes is the CH3 group), but some are (+) forms and others are (-) forms.
It's quite common for natural systems to only work with one of the enantiomers of an optically active substance. It isn't too difficult to see why that might be. Because the molecules have different spatial arrangements of their various groups, only one of them is likely to fit properly into the active sites on the enzymes they work with.
In the lab, it is quite common to produce equal amounts of both forms of a compound when it is synthesised. This happens just by chance, and you tend to get racemic mixtures.
________________________________________
Note: For a detailed discussion of this, you could have a look at the page on the addition of HCN to aldehydes
________________________________________
Chirality
Two enantiomers of a generic amino acid
The two optical isomers of alanine.
The two enantiomers of bromochlorofluoromethane
The term chiral (pronounced /ˈkaɪrəl/) is used to describe an object that is non-superposable on its mirror image.
Human hands are perhaps the most universally recognized example of chirality: The left hand is a non-superposable mirror image of the right hand; no matter how the two hands are oriented, it is impossible for all the major features of both hands to coincide. This difference in symmetry becomes obvious if someone attempts to shake the right hand of a person using his left hand, or if a left-handed glove is placed on a right hand. The term chirality is derived from the Greek word for hand, χειρ (/cheir/).
When used in the context of chemistry, chirality usually refers to molecules. Two mirror images of a molecule that cannot be superposed onto each other are referred to as enantiomers or optical isomers. Because the difference between right and left hands is universally known and easy to observe, many pairs of enantiomers are designated as "right-" and "left-handed." A mixture of equal amounts of the two enantiomers is said to be a racemic mixture. Molecular chirality is of interest because of its application to stereochemistry in inorganic chemistry, organic chemistry, physical chemistry, biochemistry, and supramolecular chemistry.
The symmetry of a molecule (or any other object) determines whether it is chiral. A molecule is achiral (not chiral) if and only if it has an axis of improper rotation; that is, an n-fold rotation (rotation by 360°/n) followed by a reflection in the plane perpendicular to this axis that maps the molecule onto itself. (See chirality (mathematics).) A simplified rule applies to tetrahedrally-bonded carbon, as shown in the illustration: if all four substituents are different, the molecule is chiral. A chiral molecule is not necessarily asymmetric, that is, devoid of any symmetry elements, as it can have, for example, rotational symmetry.
History
The term optical activity is derived from the interaction of chiral materials with polarized light. A solution of the (−)-form of an optical isomer rotates the plane of polarization of a beam of plane polarized light in a counterclockwise direction, vice-versa for the (+) optical isomer. The property was first observed by Jean-Baptiste Biot in 1815 [1], and gained considerable importance in the sugar industry, analytical chemistry, and pharmaceuticals. Louis Pasteur deduced in 1848 that this phenomenon has a molecular basis[2]. Artificial composite materials displaying the analog of optical activity but in the microwave region were introduced by J.C. Bose in 1898 [3], and gained considerable attention from the mid-1980s [4]. The term chirality itself was coined by Lord Kelvin in 1873.[1]
The word “racemic” is derived from the Latin word for grape; the term having its origins in the work of Louis Pasteur who isolated racemic tartaric acid from wine.
Naming conventions
By configuration: R- and S-
For chemists, the R/S system is the most important nomenclature system for denoting enantiomers, as it does not involve a reference molecule such as glyceraldehyde. It labels each chiral center R or S according to a system in which each substituent is assigned a priority under the Cahn-Ingold-Prelog (CIP) rules, based on atomic number. If the center is oriented so that the lowest-priority substituent of the four points away from the viewer, the viewer then sees two possibilities: if the priority of the remaining three substituents decreases in the clockwise direction, the center is labeled R (for rectus); if it decreases in the counterclockwise direction, it is labeled S (for sinister).
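Treating priorities as numbers, the clockwise test can be sketched in a few lines of Python. This is an illustrative sketch, not a full CIP implementation: it assumes the lowest-priority group of the four already points away from the viewer, and that the priorities of the remaining three substituents are supplied in clockwise viewing order (3 = highest).

```python
def rs_label(clockwise_priorities):
    """Label a chiral centre R or S.

    `clockwise_priorities` lists the CIP priorities (3 = highest, 1 = lowest
    of the three) of the remaining substituents as seen clockwise, with the
    lowest-priority group of the four already pointing away from the viewer.
    """
    seq = tuple(clockwise_priorities)
    descending = (3, 2, 1)
    # Every cyclic rotation of a descending sequence reads "decreasing clockwise".
    rotations = {descending[i:] + descending[:i] for i in range(3)}
    return "R" if seq in rotations else "S"

print(rs_label((3, 2, 1)))  # priorities fall clockwise -> R
print(rs_label((3, 1, 2)))  # priorities fall counterclockwise -> S
```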
This system labels each chiral center in a molecule (and also has an extension to chiral molecules not involving chiral centers). Thus, it has greater generality than the D/L system, and can label, for example, an (R,R) isomer versus an (R,S) — diastereomers.
The R / S system has no fixed relation to the (+)/(−) system. An R isomer can be either dextrorotatory or levorotatory, depending on its exact substituents.
The R / S system also has no fixed relation to the D/L system. For example, the side chain of serine contains a hydroxyl group, -OH. If a thiol group, -SH, were swapped in for it, the D/L labeling would, by its definition, not be affected by the substitution. But this substitution would invert the molecule's R / S labeling, because the CIP priority of CH2OH is lower than that of CO2H, while the CIP priority of CH2SH is higher than that of CO2H.
For this reason, the D/L system remains in common use in certain areas of biochemistry, such as amino acid and carbohydrate chemistry, because it is convenient to have the same chiral label for all of the commonly occurring structures of a given type of structure in higher organisms. In the D/L system, they are all L; in the R / S system, they are mostly S but there are some common exceptions.
By optical activity: (+)- and (−)-
An enantiomer can be named by the direction in which it rotates the plane of polarized light. If it rotates the light clockwise (as seen by a viewer towards whom the light is traveling), that enantiomer is labeled (+). Its mirror-image is labeled (−). The (+) and (−) isomers have also been termed d- and l-, respectively (for dextrorotatory and levorotatory). This labeling is easy to confuse with D- and L-.
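The measured rotation depends on path length and concentration, so chemists report the specific rotation, conventionally [α] = α_obs / (l × c), with the path length l in decimetres and the concentration c in g/mL. A minimal sketch of this standard polarimetry formula, using made-up measurement values for illustration:

```python
def specific_rotation(observed_deg, path_length_dm, conc_g_per_ml):
    """Specific rotation [alpha] = observed rotation / (path length * concentration).

    A positive result corresponds to a dextrorotatory, (+) enantiomer;
    a negative result to a levorotatory, (-) enantiomer.
    """
    return observed_deg / (path_length_dm * conc_g_per_ml)

# Hypothetical polarimeter reading: +6.65 degrees in a 1.0 dm cell at 0.10 g/mL
print(specific_rotation(+6.65, 1.0, 0.10))  # -> 66.5, so a (+) enantiomer
```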
By configuration: D- and L-
An optical isomer can be named by the spatial configuration of its atoms. The D/L system does this by relating the molecule to glyceraldehyde. Glyceraldehyde is chiral itself, and its two isomers are labeled D and L. Certain chemical manipulations can be performed on glyceraldehyde without affecting its configuration, and its historical use for this purpose (possibly combined with its convenience as one of the smallest commonly used chiral molecules) has resulted in its use for nomenclature. In this system, compounds are named by analogy to glyceraldehyde, which, in general, produces unambiguous designations, but is easiest to see in the small biomolecules similar to glyceraldehyde. One example is the amino acid alanine, which has two optical isomers, and they are labeled according to which isomer of glyceraldehyde they come from. On the other hand, glycine, the amino acid derived from glyceraldehyde, has no optical activity, as it is not chiral (achiral). Alanine, however, is chiral.
The D/L labeling is unrelated to (+)/(−); it does not indicate which enantiomer is dextrorotatory and which is levorotatory. Rather, it says that the compound's stereochemistry is related to that of the dextrorotatory or levorotatory enantiomer of glyceraldehyde—the dextrorotatory isomer of glyceraldehyde is, in fact, the D isomer. Nine of the nineteen L-amino acids commonly found in proteins are dextrorotatory (at a wavelength of 589 nm), and D-fructose is also referred to as levulose because it is levorotatory.
A rule of thumb for determining the D/L isomeric form of an amino acid is the "CORN" rule. The groups:
COOH, R, NH2 and H (where R is a variant carbon chain)
are arranged around the chiral center carbon atom. Sighting with the hydrogen atom away from the viewer, if these groups are arranged clockwise around the carbon atom, then it is the D-form. If counter-clockwise, it is the L-form.
Nomenclature
• Any non-racemic chiral substance is called scalemic.[2]
• A chiral substance is enantiopure or homochiral when only one of two possible enantiomers is present.
• A chiral substance is enantioenriched or heterochiral when an excess of one enantiomer is present but not to the exclusion of the other.
• Enantiomeric excess or ee is a measure for how much of one enantiomer is present compared to the other. For example, in a sample with 40% ee in R, the remaining 60% is racemic with 30% of R and 30% of S, so that the total amount of R is 70%.
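The ee arithmetic in the last bullet can be checked with a short sketch (a hypothetical helper written for this example, not from any library):

```python
def enantiomer_fractions(ee_percent):
    """Split a sample into (major, minor) enantiomer percentages from its ee.

    An ee of x% means x% of the sample is the pure major enantiomer and the
    remaining (100 - x)% is racemic, i.e. split evenly between the two forms.
    """
    racemic = 100.0 - ee_percent
    major = ee_percent + racemic / 2.0
    minor = racemic / 2.0
    return major, minor

print(enantiomer_fractions(40.0))  # -> (70.0, 30.0), matching the 40% ee in R example
```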
Types
In general, chiral molecules have point chirality, centering around a single atom, usually carbon, which has four different substituents. The two enantiomers of such compounds are said to have different absolute configurations at this center. This center is thus stereogenic (i.e., a grouping within a molecular entity that may be considered a focus of stereoisomerism), and is exemplified by the α-carbon of amino acids. A molecule can have multiple chiral centers without being chiral overall if there is a symmetry element (a mirror plane or inversion center), which relates the two (or more) chiral centers. Such a molecule is called a meso compound. It is also possible for a molecule to be chiral without having actual point chirality. Common examples include 1,1'-bi-2-naphthol (BINOL) and 1,3-dichloro-allene, which have axial chirality, and (E)-cyclooctene, which has planar chirality.
It is important to keep in mind that molecules that are dissolved in solution or are in the gas phase usually have considerable flexibility, and, thus, may adopt a variety of different conformations. These various conformations are themselves almost always chiral. However, when assessing chirality, one must use a structural picture of the molecule that corresponds to just one chemical conformation - the most symmetric conformation possible.
When the optical rotation for an enantiomer is too low for practical measurement it is said to exhibit cryptochirality.
Even isotopic differences must be considered when examining chirality. Replacing one of the two 1H atoms at the CH2 position of benzyl alcohol with a deuterium (²H) makes that carbon a stereocenter. The resulting benzyl-α-d alcohol exists as two distinct enantiomers, which can be assigned by the usual stereochemical naming conventions. The S enantiomer has [α]D = +0.715°.[5]
Properties of enantiomers
Enantiomers are identical with respect to ordinary chemical reactions and properties (i.e., will have identical Rfs by TLC, identical NMR spectra, identical IR spectra), but differences arise when they are in the presence of other chiral molecules or objects. Different enantiomers of chiral compounds often taste and smell differently and have different effects as drugs - see below.
One chiral 'object' that interacts differently with the two enantiomers of a chiral compound is circularly polarised light: An enantiomer will absorb left- and right-circularly polarised light to differing degrees. This is the basis of circular dichroism (CD) spectroscopy. Usually the difference in absorptivity is relatively small (parts per thousand). CD spectroscopy is a powerful analytical technique for investigating the secondary structure of proteins and for determining the absolute configurations of chiral compounds, in particular, transition metal complexes. CD spectroscopy is replacing polarimetry as a method for characterising chiral compounds, although the latter is still popular with sugar chemists.
In biology
Many biologically active molecules are chiral, including the naturally occurring amino acids (the building blocks of proteins), and sugars. In biological systems, most of these compounds are of the same chirality: most amino acids are L and sugars are D. Typical naturally occurring proteins, made of L amino acids, are known as left-handed proteins, whereas D amino acids produce right-handed proteins.
The origin of this homochirality in biology is the subject of much debate.[6] Most scientists believe that Earth life's “choice” of chirality was purely random, and that if carbon-based life forms exist elsewhere in the universe, their chemistry could theoretically have opposite chirality.
Enzymes, which are chiral, often distinguish between the two enantiomers of a chiral substrate. Imagine an enzyme as having a glove-like cavity that binds a substrate. If this glove is right-handed, then one enantiomer will fit inside and be bound, whereas the other enantiomer will have a poor fit and is unlikely to bind.
D-form amino acids tend to taste sweet, whereas L-forms are usually tasteless. Spearmint leaves and caraway seeds, respectively, contain L-carvone and D-carvone - enantiomers of carvone. These smell different to most people because our olfactory receptors also contain chiral molecules that behave differently in the presence of different enantiomers.
Chirality is important in the context of ordered phases as well: for example, the addition of a small amount of an optically active molecule to a nematic phase (a phase that has long-range orientational order of molecules) transforms it into a chiral nematic phase (or cholesteric phase). Chirality in such phases of polymeric fluids has also been studied (Srinivasarao, 1999).
In drugs
Many chiral drugs must be made with high enantiomeric purity due to potential side-effects of the other enantiomer. (The other enantiomer may also merely be inactive.)
• Thalidomide: Thalidomide is racemic. One enantiomer is effective against morning sickness, whereas the other is teratogenic. In this case, administering just one of the enantiomers to a pregnant patient does not help, as the two enantiomers are readily interconverted in vivo. Thus, if a person is given either enantiomer, both the D and L isomers will eventually be present in the patient's serum.
• Ethambutol: Whereas one enantiomer is used to treat tuberculosis, the other causes blindness.
• Naproxen: One enantiomer is used to treat arthritis pain, but the other causes liver poisoning with no analgesic effect.
• Steroid receptor sites also show stereoisomer specificity.
• Penicillin's activity is stereodependent. The antibiotic must mimic the D-alanine chains that occur in the cell walls of bacteria in order to react with and subsequently inhibit bacterial transpeptidase enzyme.
• Only L-propranolol is a powerful adrenoceptor antagonist, whereas D-propranolol is not. However, both have local anesthetic effect.
• The L-isomer of Methorphan, levomethorphan is a potent opioid analgesic, while the D-isomer, dextromethorphan is a dissociative cough suppressant.
• S(-) isomer of carvedilol, a drug that interacts with adrenoceptors, is 100 times more potent as beta receptor blocker than R(+) isomer. However, both the isomers are approximately equipotent as alpha receptor blockers.
• The D-isomers of amphetamine and methamphetamine are strong CNS stimulants, while the L-isomers of both drugs lack appreciable CNS (central nervous system) stimulant effects and instead stimulate the peripheral nervous system. For this reason, the levo-isomer of methamphetamine is available as an OTC nasal inhaler in some countries, while the dextro-isomer is banned from medical use in all but a few countries and highly regulated in those that do allow its medical use.
In inorganic chemistry
Many coordination compounds are chiral; for example, the well-known [Ru(2,2'-bipyridine)3]2+ complex in which the three bipyridine ligands adopt a chiral propeller-like arrangement [7]. In this case, the Ru atom may be regarded as a stereogenic center, with the complex having point chirality. The two enantiomers of complexes such as [Ru(2,2'-bipyridine)3]2+ may be designated as Λ (left-handed twist of the propeller described by the ligands) and Δ (right-handed twist). Hexol is a chiral cobalt complex that was first investigated by Alfred Werner. Resolved hexol is significant as being the first compound devoid of carbon to display optical activity.
Chirality of amines
Tertiary amines (see image) are chiral in a way similar to carbon compounds: The nitrogen atom bears four distinct substituents counting the lone pair. However, the energy barrier for the inversion of the stereocenter is, in general, about 30 kJ/mol, which means that the two stereoisomers are rapidly interconverted at room temperature. As a result, amines such as NHRR' cannot be resolved optically and NRR'R" can only be resolved when the R, R', and R" groups are constrained in cyclic structures.
Theory of origin
A paper published on February 29, 2008 by researchers led by Sandra Pizzarello, from Arizona State University, reveals that the Murchison meteorite contains sizable molecular asymmetry of up to 14%, "giving support to the idea that biomolecular traits such as chiral asymmetry could have been seeded in abiotic chemistry ahead of life."[8]
"Thanks to the pristine nature of this meteorite, we were able to demonstrate that other extraterrestrial amino acids carry the left-handed excesses in meteorites and, above all, that these excesses appear to signify that their precursor molecules, the aldehydes, also carried such excesses," Pizzarello said. "In other words, a molecular trait that defines life seems to have broader distribution as well as a long cosmic lineage."[3]
Other theories of the origin of chirality on Earth have also been proposed, such as the weak nuclear force.[4]
Chemical chirality in fiction
Although little was known about chemical chirality in the time of Lewis Carroll, his work Through the Looking-glass contains a prescient reference to the differing biological activities of enantiomeric drugs: "Perhaps Looking-glass milk isn't good to drink," Alice said to her cat.
In James Blish's Star Trek novella Spock Must Die! the tachyon 'mirrored' Mr Spock is later discovered to have stolen chemical reagents from the medical bay and to have been using them to convert certain amino acids to opposite-chirality isomers, since the mirrored Mr Spock's metabolism is reversed, and, hence, must process the opposite polarity of these isomers.
In Larry Niven's Destiny's Road, the title planet's indigenous life is based upon right-handed proteins. When human colonists arrive from Earth via a generation ship, extreme measures are taken to permit the colony's survival. A peninsula is sterilized with a lander's fusion drive, creating the titular "road" out of fused bedrock. The area is then reseeded with Earth life to provide the colonists with food. Though the soil lacks potassium due to other factors, necessitating supplements that produce a hydraulic empire common to Niven's fiction, the colony otherwise prospers. Native viruses and bacteria cannot infect colonists, resulting in longer lifespans. Sealife quickly recovers, and is consumed by the colonists as a "diet" food, as their digestive systems cannot metabolize it into fat.
In the Trauma Center series of games, doctors test for a "chiral reaction" in order to determine whether or not a patient is infected with "Gangliated Utrophin Immuno Latency Toxin," a fictional, parasitic pathogen more commonly referred to as G.U.I.L.T. A positive reaction means the patient is infected, while a negative reaction means the patient has either been cured or is not infected.
Structure and Optical Isomerism
A very important feature of the structure of amino acids (and other kinds of compounds as well, for that matter) is called optical isomerism. It applies to all amino acids except glycine.
Look at the number-two carbon atom. You should notice that in one direction it is bonded to an amino group. In another direction, it is bonded to a carboxylic group. It is also bonded to a hydrogen atom and an alkyl group or some other kind of group. Except in the case of glycine where -R is a -H, that number two carbon atom is bonded to four different groups. A carbon atom which is attached to four different groups is called an asymmetric carbon atom or sometimes a chiral carbon atom. The importance of this depends on some structural properties that we will investigate in this section.
If you are in the lab, get a model kit and follow along with the diagrams shown here. Take a carbon atom and attach four different groups to it. For convenience, just use different colored units rather than actually building an amino group, a carboxylic acid group, an isopropyl group, or the like. Then make the other models as they are shown below. If you are not in the lab now, work through this exercise with the models when you are.
Here is a model of a carbon atom with four different groups attached.
Here is another model constructed to be the mirror image of the first. To do this, build a model that looks just as the first model would look if you viewed it in a mirror.
Here you can see why these are called mirror images of one another.
We can demonstrate that these two structures are not identical to one another by trying to superimpose one structure on another and get all of the same colored units to be in the identical places. You can see that is not possible.
The two structures are different. They are isomers of one another. It so happens that they are called optical isomers of one another because they have optical properties that are different from one another. We will discuss that particular property a little bit more when we discuss carbohydrates in a later lesson.
When asymmetric carbon atoms are present in a molecular compound, there are two ways in which the groups attached to that carbon can be arranged in the three dimensions, as we have just shown with the two models above. It is generally true, if not universally true, that only one of these optical isomers is biologically active. In other words, when these compounds are made by a plant or animal, only one of the two forms is made. When it comes time for these molecules to interact with an enzyme, only one of these molecules would react. The other would not. Both shape and orientation in biological compounds are extremely important.
Chemically, optical isomers behave the same. Biologically, they do not. One will react properly, but the other will not. Optically, there is also that difference which will be pointed out when we deal with carbohydrates in a later lesson.
We can use these models to illustrate why you need to have four different groups bonded to the central atom. One group (the black group) has been removed from the model on the left and replaced it with a duplicate of one of the other three groups (the white group). We now have a model with the central atom bonded to four groups, but they are not all different. The same has been done to the mirror image (unfortunately, you cannot see that).
By turning the second model in the right way you can see that it is identical to the first one.
Consequently, this central atom is not an asymmetric carbon atom, the molecule is not optically active, and these are identical compounds, not optical isomers.
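The whole model exercise reduces to the rule stated earlier: a carbon atom is an asymmetric (chiral) centre only when the four groups attached to it all differ. As a sketch (the group names here are just illustrative labels):

```python
def is_asymmetric_carbon(groups):
    """A carbon is an asymmetric (chiral) centre iff its four groups all differ."""
    return len(groups) == 4 and len(set(groups)) == 4

print(is_asymmetric_carbon(["NH2", "COOH", "H", "CH3"]))  # alanine's C2 -> True
print(is_asymmetric_carbon(["NH2", "COOH", "H", "H"]))    # glycine's C2 -> False
```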
References
1. ^ Pedro Cintas (2007). "Tracing the Origins and Evolution of Chirality and Handedness in Chemical Language". Angewandte Chemie International Edition 46 (22): 4016-4024. doi:10.1002/anie.200603714.
2. ^ "Infelicitous stereochemical nomenclature".
3. ^ Arizona State University (2008, February 29). Key To Life Before Its Origin On Earth May Have Been Discovered. ScienceDaily. Retrieved June 16, 2008, from http://www.sciencedaily.com/releases/2008/02/080228174823.htm
4. ^ Castelvecchi, Davide. (2007). Alien Pizza, Anyone?, Science News vol. 172, pp. 107-109.
Hydrogen fuel cell as way out for energy crises, basic technology used and latest advances, applications
Acknowledgement
Gratitude cannot be seen or expressed. It can only be felt in the heart and is beyond description. Often words are inadequate to serve as a mode of expression of one's feelings, especially the sense of indebtedness and gratitude to all those who help us in our duty.
It is of immense pleasure and profound privilege to express my gratitude, indebtedness, and sincere thanks to Dr Kailash Juglan, lecturer of Physics at Lovely Professional University, for providing me the opportunity to work on a project on “Hydrogen fuel cell as way out for energy crises, basic technology used and latest advances, applications”.
I am beholden to my family and friends for their blessings and encouragement.
Always Obediently
Prateek Joshi
What Is Energy Crisis?
An energy crisis is any great bottleneck (or price rise) in the supply of energy resources to an economy. It usually refers to a shortage of oil and additionally to electricity or other natural resources. An energy crisis may be referred to as an oil crisis, petroleum crisis, energy shortage, electricity shortage or electricity crisis.
Market failure is possible when monopoly manipulation of markets occurs. A crisis can develop due to industrial actions like union-organized strikes and government embargoes. The cause may be over-consumption, ageing infrastructure, choke-point disruption, or bottlenecks at oil refineries and port facilities that restrict fuel supply. An emergency may emerge during unusually cold winters; such events also accelerate the depletion of energy supplies.
Pipeline failures and other accidents may cause minor interruptions to energy supplies. A crisis could possibly emerge after infrastructure damage from severe weather. Attacks by terrorists or militia on important infrastructure are a possible problem for energy consumers, with a successful strike on a Middle East facility potentially causing global shortages. Political events, for example, when governments change due to regime change, monarchy collapse, military occupation, and coup may disrupt oil and gas production and create shortages.
Energy Crisis in History
• 1973 oil crisis - Cause: an OPEC oil export embargo by many of the major Arab oil-producing states, in response to western support of Israel during the Yom Kippur War
• 1979 energy crisis - Cause: the Iranian revolution
• 1990 spike in the price of oil - Cause: the Gulf War
• The 2000–2001 California electricity crisis - Cause: failed deregulation, and business corruption.
• The UK fuel protest of 2000 - Cause: a rise in the price of crude oil combined with already relatively high taxation on road fuel in the UK.
• North American Gas crisis
• Argentine gas crisis of 2004
• North Korea has had energy shortages for many years.
• Zimbabwe has experienced a shortage of energy supplies for many years due to financial mismanagement.
While not entering a full crisis, political riots that occurred during the 2007 Burmese anti-government protests were initially sparked by rising energy prices. Likewise the Russia-Ukraine gas dispute and the Russia-Belarus energy dispute have been mostly resolved before entering a prolonged crisis stage.
Present Day Crisis
Crises that currently exist include:
• Oil price increases in 2003 - Cause: continued global increases in petroleum demand coupled with production stagnation and the falling value of the U.S. dollar
• 2008 Central Asia energy crisis, caused by abnormally cold temperatures and low water levels in an area dependent on hydroelectric power. Despite having significant hydrocarbon reserves, in February 2008 the President of Pakistan announced plans to tackle energy shortages that were reaching crisis stage. At the same time the South African President was seeking to allay fears of a prolonged electricity crisis in South Africa.
• South African electrical crisis. The South African crisis, which may last until 2012, led to large price rises for platinum in February 2008 and reduced gold production.
• China experienced severe energy shortages towards the end of 2005 and again in early 2008. During the latter crisis the country suffered severe damage to power networks, along with diesel and coal shortages. Supplies of electricity in Guangdong province, the manufacturing hub of China, are predicted to fall short by an estimated 10 GW.
Predictions
Although technology has made oil extraction more efficient, the world is having to struggle to provide oil by using increasingly costly and less productive methods such as deep sea drilling, and developing environmentally sensitive areas such as the Arctic National Wildlife Refuge.
The world's population continues to grow at a quarter of a million people per day, increasing the consumption of energy. Although it remains far below that of developed countries such as the USA, the per capita energy consumption of China, India and other developing nations continues to increase as people living in these countries adopt more energy-intensive lifestyles. At present a small part of the world's population consumes a large part of its resources, with the United States and its population of 300 million people consuming far more oil than China with its population of 1.3 billion people.
Future and alternative energy sources
In response to the petroleum crisis, the principles of green energy and sustainable living movements have gained popularity. This has led to increasing interest in alternative power/fuel research such as fuel cell technology, the liquid nitrogen economy, hydrogen fuel, biomethanol, biodiesel, the Karrick process, solar energy, geothermal energy, tidal energy, wave power, wind energy, and fusion power. To date, only hydroelectricity and nuclear power have been significant alternatives to fossil fuel.
Hydrogen gas is currently produced at a net energy loss from natural gas, which is also experiencing declining production in North America and elsewhere. When not produced from natural gas, hydrogen still needs another source of energy to create it, also at a loss during the process. This has led to hydrogen being regarded as a 'carrier' of energy, like electricity, rather than a 'source'. The unproven dehydrogenating process has also been suggested for the use of water as an energy source.
Efficiency mechanisms such as Negawatt power can encourage significantly more effective use of current generating capacity. It is a term used to describe the trading of increased efficiency, using consumption efficiency to increase available market supply rather than by increasing plant generation capacity. As such, it is a demand-side as opposed to a supply-side measure.
Growing demand for a new fuel
As the energy crisis has intensified over recent years, there has been growing demand for alternative sources of energy. Some of them are:
1- Solar Energy
2- Tidal Energy
3- Hydro Energy
4- Biological Energy
5- Hydrogen Energy
Much has been said about all of these forms of energy except hydrogen-based energy, which has gained the attention of the public and the scientific world only in the past few years. Compared with any other form of energy, hydrogen-based energy has two main benefits.
First, it leaves no residue after combustion except pure water, which can itself be put to many uses; second, its combustion releases a large amount of energy.
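To put the energy benefit in rough numbers: using widely quoted approximate lower heating values (representative figures, not measurements from this paper), hydrogen releases close to three times as much energy per kilogram as gasoline.

```python
# Representative lower heating values in MJ/kg; figures are approximate.
LHV = {"hydrogen": 120.0, "gasoline": 44.0, "natural gas": 50.0}

def combustion_energy_mj(fuel, mass_kg):
    """Energy released (MJ) on burning `mass_kg` of the given fuel."""
    return LHV[fuel] * mass_kg

print(combustion_energy_mj("hydrogen", 1.0))  # ~120 MJ per kg
print(combustion_energy_mj("gasoline", 1.0))  # ~44 MJ per kg
```

Note that this comparison is by mass; by volume, gaseous hydrogen stores far less energy than liquid fuels, which is one of the practical challenges for hydrogen vehicles.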
What Is Hydrogen?
Hydrogen is the chemical element with atomic number 1. It is represented by the symbol H. At standard temperature and pressure hydrogen is a colorless, odorless, nonmetallic, tasteless, highly flammable diatomic gas with the molecular formula H2. With an atomic weight of 1.00794, hydrogen is the lightest element.
Hydrogen is the most abundant of the chemical elements, constituting roughly 75% of the universe's elemental mass. Stars in the main sequence are mainly composed of hydrogen in its plasma state. Elemental hydrogen is relatively rare on Earth, and is industrially produced from hydrocarbons such as methane, after which most elemental hydrogen is used "captively" (meaning locally at the production site), with the largest markets about equally divided between fossil fuel upgrading (e.g., hydrocracking) and ammonia production (mostly for the fertilizer market). Hydrogen may be produced from water using the process of electrolysis, but this process is presently significantly more expensive commercially than hydrogen production from natural gas.
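The electrolysis route can be quantified with Faraday's laws of electrolysis: two moles of electrons liberate one mole of H2 at the cathode. A sketch under ideal current efficiency (the function name and default are illustrative assumptions):

```python
FARADAY = 96485.0  # C per mol of electrons
M_H2 = 2.016       # g per mol of H2

def hydrogen_from_charge(charge_coulombs, efficiency=1.0):
    """Mass of H2 (grams) produced by electrolysis for a given charge.

    Two moles of electrons liberate one mole of H2 (2 H2O + 2 e- -> H2 + 2 OH-).
    `efficiency` is the current efficiency, assumed ideal by default.
    """
    moles_h2 = efficiency * charge_coulombs / (2.0 * FARADAY)
    return moles_h2 * M_H2

# One ampere flowing for one hour (3600 C) yields roughly 0.038 g of hydrogen,
# which illustrates why bulk electrolytic hydrogen is energy-intensive.
print(hydrogen_from_charge(3600.0))
```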
The most common naturally occurring isotope of hydrogen, known as protium, has a single proton and no neutrons. In ionic compounds it can take on either a positive charge (becoming a cation composed of a bare proton) or a negative charge (becoming an anion known as a hydride). Hydrogen can form compounds with most elements and is present in water and most organic compounds. It plays a particularly important role in acid-base chemistry, in which many reactions involve the exchange of protons between soluble molecules. As the only neutral atom for which the Schrödinger equation can be solved analytically, study of the energetics and bonding of the hydrogen atom has played a key role in the development of quantum mechanics.
The solubility and characteristics of hydrogen with various metals are very important in metallurgy (as many metals can suffer hydrogen embrittlement) and in developing safe ways to store it for use as a fuel. Hydrogen is highly soluble in many compounds composed of rare earth metals and transition metals and can be dissolved in both crystalline and amorphous metals. Hydrogen solubility in metals is influenced by local distortions or impurities in the metal crystal lattice.
History
Hydrogen gas, H2, was first artificially produced and formally described by T. von Hohenheim (also known as Paracelsus, 1493–1541) via the mixing of metals with strong acids. He was unaware that the flammable gas produced by this chemical reaction was a new chemical element. In 1671, Robert Boyle rediscovered and described the reaction between iron filings and dilute acids, which results in the production of hydrogen gas. In 1766, Henry Cavendish was the first to recognize hydrogen gas as a discrete substance, by identifying the gas from a metal-acid reaction as "inflammable air" and further finding in 1781 that the gas produces water when burned. He is usually given credit for its discovery as an element. In 1783, Antoine Lavoisier gave the element the name hydrogen (from the Greek hydro, meaning water, and genes, meaning creator) when he and Laplace reproduced Cavendish's finding that water is produced when hydrogen is burned.
Hydrogen was liquefied for the first time by James Dewar in 1898 by using regenerative cooling and his invention, the vacuum flask. He produced solid hydrogen the next year. Deuterium was discovered in December 1931 by Harold Urey, and tritium was prepared in 1934 by Ernest Rutherford, Mark Oliphant, and Paul Harteck. Heavy water, which consists of deuterium in the place of regular hydrogen, was discovered by Urey's group in 1932. François Isaac de Rivaz built the first internal combustion engine powered by a mixture of hydrogen and oxygen in 1806. Edward Daniel Clarke invented the hydrogen gas blowpipe in 1819. The Döbereiner's lamp and limelight were invented in 1823.
The first hydrogen-filled balloon was invented by Jacques Charles in 1783. Hydrogen provided the lift for the first reliable form of air travel following the 1852 invention of the first hydrogen-lifted airship by Henri Giffard. German count Ferdinand von Zeppelin promoted the idea of rigid airships lifted by hydrogen that were later called Zeppelins; the first of these had its maiden flight in 1900. Regularly scheduled flights started in 1910, and by the outbreak of World War I in August 1914 they had carried 35,000 passengers without a serious incident. Hydrogen-lifted airships were used as observation platforms and bombers during the war.
The first non-stop transatlantic crossing was made by the British airship R34 in 1919. Regular passenger service resumed in the 1920s, and the discovery of helium reserves in the United States promised increased safety, but the U.S. government refused to sell the gas for this purpose. Therefore, H2 was used in the Hindenburg airship, which was destroyed in a midair fire over New Jersey on May 6, 1937. The incident was broadcast live on radio and filmed. Ignition of leaking hydrogen was widely assumed to be the cause, but later investigations pointed to ignition of the aluminized fabric coating by static electricity. Either way, the damage to hydrogen's reputation as a lifting gas was already done.
Hydrogen As a Fuel
"President Bush’s remarks in his State-of-the-Union message proposing a big jump in funding for hydrogen and fuel cell research and development are terrific news. It’s imperative that Congress follows through now and makes available those funds. Aside from the tangible benefits of spending more on an environmentally benign area of energy that for too long has been treated - often condescendingly - like a poor orphan, the political message is of supreme significance. For decades, supporters of hydrogen and other alternative energy fields have argued until they were blue in the face, that the key ingredient missing in moving forward is national political will. President Bush’s support provides a large measure of that political will." --Peter Hoffmann, 31 January 2003

About the book: Hydrogen is the quintessential eco-fuel. This invisible, tasteless gas is the most abundant element in the universe. It is the basic building block and fuel of stars and an essential raw material in innumerable biological and chemical processes. As a completely nonpolluting fuel, it may hold the answer to growing environmental concerns about atmospheric accumulation of carbon dioxide and the resultant Greenhouse Effect. In this book Peter Hoffmann describes current research toward a hydrogen-based economy. He presents the history of hydrogen energy and discusses the environmental dangers of continued dependence on fossil fuels. Hydrogen is not an energy source but a carrier that, like electricity, must be manufactured. Today hydrogen is manufactured by "decarburizing" fossil fuels. In the future it will be derived from water and solar energy and perhaps from "cleaner" versions of nuclear energy. Because it can be made by a variety of methods, Hoffmann argues, it can be easily adapted by different countries and economies. Hoffmann acknowledges the social, political, and economic difficulties in replacing current energy systems with an entirely new one.
Although the process of converting to a hydrogen-based economy would be complex, he demonstrates that the environmental and health benefits would far outweigh the costs.
Small whispers of hydrogen energy's vast potential have been heard along the fringes of industry since the oil shocks of the 1970s, but only last year did a steady drumbeat begin in the capital markets of Wall Street, Europe, and Asia. First BMW and Daimler-Chrysler, and then Ford, Honda, Toyota, GM, and others laid claim to hydrogen fuel and to the fuel cell as a new prime mover for the automobile.
An informed public may be all that is required to bring an end to the climate-destabilizing fossil era. Until this summer, though, we had no recent book on the emerging world hydrogen economy. Information was available only to readers of periodicals like Peter Hoffmann's Hydrogen and Fuel Cell Letter and The International Journal of Hydrogen Energy.
Finally in August, two books. Hoffmann's chronicles hydrogen science and technology from the earliest days. Embedded in its historical narrative are explanations of these technologies and their advantages and drawbacks. He addresses the questions people are starting to ask: Why a hydrogen economy? How do you get hydrogen? What will it cost? Is it safe? Will it reduce global warming? What is its connection with solar and wind energy? The book's main drawback is the index, which is missing essential entries such as pipelines, carbon dioxide, leakage, sequestration, biomass, and embrittlement. But at last we now have a book we can use to understand the elements of this epic change.
Seth Dunn's Worldwatch Paper speaks from the environmental perspective and describes present practices with an eye to the future. He reports on a range of studies by government agencies, NGOs, universities, and corporations, all attempting to illuminate potential paths for the emerging hydrogen economy. He compares this moment in the hydrogen fuel revolution to the early automobile era, which saw fierce competition among technologies before the gasoline-powered internal combustion engine won out as the standard. --Ty Cashman
"Decarbonization is just what it sounds like: taking the carbon out of hydrocarbon fuels. What is left is, of course, hydrogen. Decarbonization will be the industrial end-game strategy of a trend first detected by Cesare Marchetti in the 1970s, when he described a gradual shift, over centuries, from hydrocarbon fuels with high carbon and low hydrogen content (wood, peat, coal) to fuels with increasingly less carbon and more hydrogen (oil, natural gas), culminating, seemingly inevitably, in pure hydrogen as the principal energy carrier of an advanced industrial society.
"If hydrogen is ever to replace natural gas as a utility fuel, very large quantities obviously will have to be stored somewhere. Storage, to maintain a buffer for seasonal, daily, and hourly swings in demand, is essential with any system for the transmission of a gas. Storage facilities even out the ups and downs of demand, including temporary interruptions and breakdowns, and still permit steady, maximum-efficiency production.
"It has been suggested that huge amounts of hydrogen could be stored underground in exhausted natural gas fields, in natural or manmade caverns, or in aquifers.... The natural gas industry has long been using depleted gas and oil fields to store huge amounts of natural gas. Aquifers are similar to natural gas and oil fields in that they are porous geological formations, but without the fossil-fuel or natural gas content. Many of them feature a "caprock" formation, a layer on top of the formation that is usually saturated with water. This layer acts as a seal to keep the gas from leaking out; it works for both natural gas and the lighter hydrogen.
What Is a Fuel Cell?
A fuel cell is an electrochemical conversion device. It produces electricity from fuel (on the anode side) and an oxidant (on the cathode side), which react in the presence of an electrolyte. The reactants flow into the cell, and the reaction products flow out of it, while the electrolyte remains within it. Fuel cells can operate virtually continuously as long as the necessary flows are maintained.
Fuel cells are different from electrochemical cell batteries in that they consume reactant, which must be replenished, whereas batteries store electrical energy chemically in a closed system. Additionally, while the electrodes within a battery react and change as a battery is charged or discharged, a fuel cell's electrodes are catalytic and relatively stable.
Many combinations of fuel and oxidant are possible. A hydrogen cell uses hydrogen as fuel and oxygen (usually from air) as oxidant. Other fuels include hydrocarbons and alcohols. Other oxidants include air, chlorine and chlorine dioxide.
Design And Working Of a Fuel Cell
A fuel cell works by catalysis, separating the component electrons and protons of the reactant fuel and forcing the electrons to travel through a circuit, thereby converting their flow to electrical power. The catalyst typically comprises a platinum group metal or alloy. Another catalytic process takes the electrons back in, combining them with the protons and the oxidant to form waste products (typically simple compounds like water and carbon dioxide).
In the archetypal hydrogen–oxygen proton exchange membrane fuel cell (PEMFC) design, a proton-conducting polymer membrane, (the electrolyte), separates the anode and cathode sides. This was called a "solid polymer electrolyte fuel cell" (SPEFC) in the early 1970s, before the proton exchange mechanism was well-understood. (Notice that "polymer electrolyte membrane" and "proton exchange membrane" result in the same acronym.)
On the anode side, hydrogen diffuses to the anode catalyst, where it dissociates into protons and electrons. The protons are conducted through the membrane to the cathode, but the electrons are forced to travel in an external circuit (supplying power) because the membrane is electrically insulating. On the cathode catalyst, oxygen molecules react with the electrons (which have traveled through the external circuit) and protons to form water, which in this example is the only waste product, either liquid or vapor.
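Written out explicitly, the electrode reactions described above are the standard PEM fuel-cell half-reactions (not stated symbolically in the text):

```latex
\begin{align*}
\text{Anode:}   &\quad \mathrm{H_2} \rightarrow 2\,\mathrm{H^+} + 2\,e^- \\
\text{Cathode:} &\quad \tfrac{1}{2}\,\mathrm{O_2} + 2\,\mathrm{H^+} + 2\,e^- \rightarrow \mathrm{H_2O} \\
\text{Overall:} &\quad \mathrm{H_2} + \tfrac{1}{2}\,\mathrm{O_2} \rightarrow \mathrm{H_2O}
\end{align*}
```

The two electrons released per hydrogen molecule at the anode are exactly the ones forced through the external circuit.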
In addition to this pure hydrogen type, there are hydrocarbon fuels for fuel cells, including diesel, methanol (see: direct-methanol fuel cells and indirect methanol fuel cells) and chemical hydrides. The waste products with these types of fuel are carbon dioxide and water.
Construction of a low-temperature PEMFC: a bipolar plate serving as electrode, with an in-milled gas channel structure, fabricated from conductive plastics (enhanced with carbon nanotubes for higher conductivity); porous carbon papers; a reactive catalyst layer, usually applied to the polymer membrane; and the polymer membrane itself.
Condensation of water produced by a PEMFC on the air channel wall. The gold wire around the cell ensures the collection of electric current.
The materials used in fuel cells differ by type. In a typical membrane electrode assembly (MEA), the electrode–bipolar plates are usually made of metal, nickel or carbon nanotubes, and are coated with a catalyst (like platinum, nano iron powders or palladium) for higher efficiency. Carbon paper separates them from the electrolyte. The electrolyte could be ceramic or a membrane.
A typical PEM fuel cell produces a voltage from 0.6 V to 0.7 V at full rated load. Voltage decreases as current increases, due to several factors:
• Activation loss
• Ohmic loss (voltage drop due to resistance of the cell components and interconnects)
• Mass transport loss (depletion of reactants at catalyst sites under high loads, causing rapid loss of voltage)[3]
To deliver the desired amount of energy, fuel cells can be combined in series and parallel circuits: series connections yield higher voltage, while parallel connections allow a stronger current to be drawn. Such a design is called a fuel cell stack. Further, the cell surface area can be increased to allow a stronger current from each cell.
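The series/parallel arithmetic above can be sketched directly; the per-cell 0.65 V and 50 A figures below are illustrative assumptions, not values from the text.

```python
# Sketch: stack output from per-cell figures.
# Cells in series add voltage; parallel strings add current.
def stack_output(cell_voltage, cell_current, n_series, n_parallel):
    voltage = cell_voltage * n_series        # V
    current = cell_current * n_parallel      # A
    return voltage, current, voltage * current  # (V, A, W)

# Illustrative stack: 100 cells in series, 2 parallel strings.
v, i, p = stack_output(cell_voltage=0.65, cell_current=50.0,
                       n_series=100, n_parallel=2)
```

This is why practical stacks contain tens to hundreds of cells: a single cell's 0.6-0.7 V is far too low to drive most loads directly.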
History
The principle of the fuel cell was discovered by German scientist Christian Friedrich Schönbein in 1838 and published in one of the scientific magazines of the time. Based on this work, the first fuel cell was demonstrated by Welsh scientist Sir William Robert Grove in the February 1839 edition of the Philosophical Magazine and Journal of Science and later sketched, in 1842, in the same journal. The fuel cell he made used similar materials to today's phosphoric-acid fuel cell.
In 1955, W. Thomas Grubb, a chemist working for the General Electric Company (GE), further modified the original fuel cell design by using a sulphonated polystyrene ion-exchange membrane as the electrolyte. Three years later another GE chemist, Leonard Niedrach, devised a way of depositing platinum onto the membrane, which served as catalyst for the necessary hydrogen oxidation and oxygen reduction reactions. This became known as the 'Grubb-Niedrach fuel cell'. GE went on to develop this technology with NASA and McDonnell Aircraft, leading to its use during Project Gemini. This was the first commercial use of a fuel cell. It wasn't until 1959 that British engineer Francis Thomas Bacon successfully developed a 5 kW stationary fuel cell. In 1959, a team led by Harry Ihrig built a 15 kW fuel cell tractor for Allis-Chalmers which was demonstrated across the US at state fairs. This system used potassium hydroxide as the electrolyte and compressed hydrogen and oxygen as the reactants. Later in 1959, Bacon and his colleagues demonstrated a practical five-kilowatt unit capable of powering a welding machine. In the 1960s, Pratt and Whitney licensed Bacon's U.S. patents for use in the U.S. space program to supply electricity and drinking water (hydrogen and oxygen being readily available from the spacecraft tanks).
United Technology Corp.'s UTC Power subsidiary was the first company to manufacture and commercialize a large, stationary fuel cell system for use as a co-generation power plant in hospitals, universities and large office buildings. UTC Power continues to market this fuel cell as the PureCell 200, a 200 kW system. UTC Power continues to be the sole supplier of fuel cells to NASA for use in space vehicles, having supplied the Apollo missions, and currently the Space Shuttle program, and is developing fuel cells for automobiles, buses, and cell phone towers; the company has demonstrated the first fuel cell capable of starting under freezing conditions with its proton exchange membrane automotive fuel cell.
Types Of Fuel Cells
There are several different types of fuel cells, each using a different chemistry. Fuel cells are usually classified by their operating temperature and the type of electrolyte they use. Some types of fuel cells work well for use in stationary power generation plants. Others may be useful for small portable applications or for powering cars. The main types of fuel cells include:
Polymer exchange membrane fuel cell (PEMFC)
The Department of Energy (DOE) is focusing on the PEMFC as the most likely candidate for transportation applications. The PEMFC has a high power density and a relatively low operating temperature (ranging from 60 to 80 degrees Celsius, or 140 to 176 degrees Fahrenheit). The low operating temperature means that it doesn't take very long for the fuel cell to warm up and begin generating electricity. We'll take a closer look at the PEMFC in the next section.
Solid oxide fuel cell (SOFC)
These fuel cells are best suited for large-scale stationary power generators that could provide electricity for factories or towns. This type of fuel cell operates at very high temperatures (between 700 and 1,000 degrees Celsius). This high temperature makes reliability a problem, because parts of the fuel cell can break down after cycling on and off repeatedly. However, solid oxide fuel cells are very stable when in continuous use. In fact, the SOFC has demonstrated the longest operating life of any fuel cell under certain operating conditions. The high temperature also has an advantage: the steam produced by the fuel cell can be channeled into turbines to generate more electricity. This process is called co-generation of heat and power (CHP) and it improves the overall efficiency of the system.
Alkaline fuel cell (AFC)
This is one of the oldest designs for fuel cells; the United States space program has used them since the 1960s. The AFC is very susceptible to contamination, so it requires pure hydrogen and oxygen. It is also very expensive, so this type of fuel cell is unlikely to be commercialized.
Molten-carbonate fuel cell (MCFC)
Like the SOFC, these fuel cells are also best suited for large stationary power generators. They operate at 600 degrees Celsius, so they can generate steam that can be used to generate more power. They have a lower operating temperature than solid oxide fuel cells, which means they don't need such exotic materials. This makes the design a little less expensive.
Phosphoric-acid fuel cell (PAFC)
The phosphoric-acid fuel cell has potential for use in small stationary power-generation systems. It operates at a higher temperature than polymer exchange membrane fuel cells, so it has a longer warm-up time. This makes it unsuitable for use in cars.
Direct-methanol fuel cell (DMFC)
Methanol fuel cells are comparable to a PEMFC in regards to operating temperature, but are not as efficient. Also, the DMFC requires a relatively large amount of platinum to act as a catalyst, which makes these fuel cells expensive.
In the following section, we will take a closer look at the kind of fuel cell the DOE plans to use to power future vehicles -- the PEMFC.
Efficiency Of a Fuel Cell
The efficiency of a fuel cell is dependent on the amount of power drawn from it. Drawing more power means drawing more current, which increases the losses in the fuel cell. As a general rule, the more power (current) drawn, the lower the efficiency. Most losses manifest themselves as a voltage drop in the cell, so the efficiency of a cell is almost proportional to its voltage. For this reason, it is common to show graphs of voltage versus current (so-called polarization curves) for fuel cells. A typical cell running at 0.7 V has an efficiency of about 50%, meaning that 50% of the energy content of the hydrogen is converted into electrical energy; the remaining 50% will be converted into heat. (Depending on the fuel cell system design, some fuel might leave the system unreacted, constituting an additional loss.)
For a hydrogen cell operating at standard conditions with no reactant leaks, the efficiency is equal to the cell voltage divided by 1.48 V, based on the enthalpy, or heating value, of the reaction. For the same cell, the second law efficiency is equal to cell voltage divided by 1.23 V. (This voltage varies with fuel used, and quality and temperature of the cell.) The difference between these numbers represents the difference between the reaction's enthalpy and Gibbs free energy. This difference always appears as heat, along with any losses in electrical conversion efficiency.
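The two voltage-ratio definitions in this paragraph can be written out as a short sketch; the 1.48 V and 1.23 V reference values and the 0.7 V operating point are the ones given in the text.

```python
# Efficiency relations for a hydrogen fuel cell at standard conditions.
HHV_VOLTAGE = 1.48    # V, enthalpy (heating value) basis
GIBBS_VOLTAGE = 1.23  # V, Gibbs free energy basis

def hhv_efficiency(cell_voltage):
    """First-law efficiency: fraction of the fuel's heating value delivered as electricity."""
    return cell_voltage / HHV_VOLTAGE

def second_law_efficiency(cell_voltage):
    """Efficiency relative to the maximum work obtainable (Gibbs free energy)."""
    return cell_voltage / GIBBS_VOLTAGE

eff = hhv_efficiency(0.7)  # ≈ 0.47, i.e. "about 50%" as stated in the text
```

The gap between the two reference voltages (1.48 V vs 1.23 V) is the enthalpy-versus-Gibbs difference the paragraph describes, which is always released as heat.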
Fuel cells do not operate on a thermal cycle. As such, they are not constrained, as combustion engines are, in the same way by thermodynamic limits, such as Carnot cycle efficiency. At times this is misrepresented by saying that fuel cells are exempt from the laws of thermodynamics, because most people think of thermodynamics in terms of combustion processes (enthalpy of formation). The laws of thermodynamics also hold for chemical processes (Gibbs free energy) like fuel cells, but the maximum theoretical efficiency is higher (83% efficient at 298K) than the Otto cycle thermal efficiency (60% for compression ratio of 10 and specific heat ratio of 1.4). Comparing limits imposed by thermodynamics is not a good predictor of practically achievable efficiencies. Also, if propulsion is the goal, electrical output of the fuel cell has to still be converted into mechanical power with the corresponding inefficiency. In reference to the exemption claim, the correct claim is that the "limitations imposed by the second law of thermodynamics on the operation of fuel cells are much less severe than the limitations imposed on conventional energy conversion systems". Consequently, they can have very high efficiencies in converting chemical energy to electrical energy, especially when they are operated at low power density, and using pure hydrogen and oxygen as reactants.
For a fuel cell operating on air (rather than bottled oxygen), losses due to the air supply system must also be taken into account: the incoming air must be pressurized and dehumidified. This reduces the efficiency significantly, bringing it near that of a compression ignition engine. Furthermore, fuel cell efficiency decreases as load increases.
The tank-to-wheel efficiency of a fuel cell vehicle is about 45% at low loads and shows average values of about 36% when a driving cycle like the NEDC (New European Driving Cycle) is used as test procedure. The comparable NEDC value for a Diesel vehicle is 22%.
It is also important to take losses due to fuel production, transportation, and storage into account. Fuel cell vehicles running on compressed hydrogen may have a power-plant-to-wheel efficiency of 22% if the hydrogen is stored as high-pressure gas, and 17% if it is stored as liquid hydrogen.
Fuel cells cannot store energy like a battery, but in some applications, such as stand-alone power plants based on discontinuous sources such as solar or wind power, they are combined with electrolyzers and storage systems to form an energy storage system. The overall efficiency (electricity to hydrogen and back to electricity) of such plants (known as round-trip efficiency) is between 30 and 50%, depending on conditions. While a much cheaper lead-acid battery might return about 90%, the electrolyzer/fuel cell system can store indefinite quantities of hydrogen, and is therefore better suited for long-term storage.
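The round-trip figure quoted above is simply the product of the stage efficiencies. The individual stage values below are illustrative assumptions (not from the text) chosen to land inside the quoted 30-50% range:

```python
# Round-trip efficiency of an electrolyzer / storage / fuel-cell system:
# electricity -> hydrogen -> electricity.
def round_trip_efficiency(electrolyzer_eff, storage_eff, fuel_cell_eff):
    return electrolyzer_eff * storage_eff * fuel_cell_eff

# Assumed stage efficiencies (illustrative only).
rt = round_trip_efficiency(electrolyzer_eff=0.70,
                           storage_eff=0.95,
                           fuel_cell_eff=0.50)  # ≈ 0.33
```

Because the losses multiply, even modest per-stage inefficiencies compound quickly, which is why the round-trip number is so much lower than a battery's.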
Solid-oxide fuel cells produce exothermic heat from the recombination of the oxygen and hydrogen. The ceramic can run as hot as 800 degrees Celsius. This heat can be captured and used to heat water in a micro combined heat and power (m-CHP) application. When the heat is captured, total efficiency can reach 80-90%. CHP units are being developed today for the European home market.
Design Issues And Advancements
• Costs. In 2002, typical cells had a catalyst content of US$1000 per kilowatt of electric power output. In 2008, UTC Power offered 400 kW fuel cells for $1,000,000 per 400 kW installed. The goal is to reduce the cost in order to compete with current market technologies, including gasoline internal combustion engines. Many companies are working on techniques to reduce cost in a variety of ways, including reducing the amount of platinum needed in each individual cell. Ballard Power Systems has experimented with a catalyst enhanced with carbon silk, which allows a 30% reduction (1 mg/cm² to 0.7 mg/cm²) in platinum usage without reduction in performance. Monash University, Melbourne uses PEDOT instead of platinum.
• The production costs of the PEM (proton exchange membrane). The Nafion membrane currently costs €400/m². In 2005 Ballard Power Systems announced that its fuel cells will use Solupor, a porous polyethylene film patented by DSM.
• Water and air management (in PEMFCs). In this type of fuel cell, the membrane must be hydrated, requiring water to be evaporated at precisely the same rate that it is produced. If water is evaporated too quickly, the membrane dries, resistance across it increases, and eventually it will crack, creating a gas "short circuit" where hydrogen and oxygen combine directly, generating heat that will damage the fuel cell. If the water is evaporated too slowly, the electrodes will flood, preventing the reactants from reaching the catalyst and stopping the reaction. Methods to manage water in cells are being developed like electroosmotic pumps focusing on flow control. Just as in a combustion engine, a steady ratio between the reactant and oxygen is necessary to keep the fuel cell operating efficiently.
• Temperature management. The same temperature must be maintained throughout the cell in order to prevent destruction of the cell through thermal loading. This is particularly challenging as the 2H2 + O2 -> 2H2O reaction is highly exothermic, so a large quantity of heat is generated within the fuel cell.
• Durability, service life, and special requirements for some type of cells. Stationary fuel cell applications typically require more than 40,000 hours of reliable operation at a temperature of -35 °C to 40 °C (-31 °F to 104 °F), while automotive fuel cells require a 5,000 hour lifespan (the equivalent of 150,000 miles) under extreme temperatures. Automotive engines must also be able to start reliably at -30 °C (-22 °F) and have a high power to volume ratio (typically 2.5 kW per liter).
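The heat load behind the temperature-management point above can be estimated per cell: any cell voltage below the 1.48 V thermoneutral value (the heating-value voltage from the efficiency section) appears as heat. A minimal sketch, with an assumed 0.7 V / 100 A operating point:

```python
# Waste heat of a single cell: (thermoneutral voltage - cell voltage) * current.
THERMONEUTRAL_V = 1.48  # V, enthalpy basis for the 2H2 + O2 -> 2H2O reaction

def waste_heat_watts(cell_voltage, current):
    return (THERMONEUTRAL_V - cell_voltage) * current

q = waste_heat_watts(cell_voltage=0.7, current=100.0)  # 78 W of heat vs 70 W electrical
```

At this operating point the cell produces slightly more heat than electricity, which is why cooling must be designed into the stack from the start.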
Fuel cell applications
Type 212 submarine with fuel cell propulsion of the German Navy in dock
Fuel cells are very useful as power sources in remote locations, such as spacecraft, remote weather stations, large parks, rural locations, and in certain military applications. A fuel cell system running on hydrogen can be compact and lightweight, and have no major moving parts. Because fuel cells have no moving parts and do not involve combustion, in ideal conditions they can achieve up to 99.9999% reliability. This equates to around one minute of down time in a two year period.
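The downtime claim above is easy to check: 99.9999% reliability over a two-year period leaves roughly a minute of unavailability.

```python
# Downtime implied by a reliability figure over a given period.
def downtime_minutes(reliability, period_years):
    total_minutes = period_years * 365.25 * 24 * 60
    return (1.0 - reliability) * total_minutes

dt = downtime_minutes(reliability=0.999999, period_years=2)  # ≈ 1.05 minutes
```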
A new application is micro combined heat and power, which is cogeneration for family homes, office buildings and factories. The stationary fuel cell application generates constant electric power (selling excess power back to the grid when it is not consumed), and at the same time produces hot air and water from the waste heat. A lower fuel-to-electricity conversion efficiency is tolerated (typically 15-20%), because most of the energy not converted into electricity is utilized as heat. Some heat is lost with the exhaust gas just as in a normal furnace, so the combined heat and power efficiency is still lower than 100%, typically around 80%. In terms of exergy, however, the process is inefficient, and one could do better by maximizing the electricity generated and then using the electricity to drive a heat pump. Phosphoric-acid fuel cells (PAFC) comprise the largest segment of existing CHP products worldwide and can provide combined efficiencies close to 90% (35-50% electric + remainder as thermal). Molten-carbonate fuel cells have also been installed in these applications, and solid-oxide fuel cell prototypes exist.
The world's first certified Fuel Cell Boat (HYDRA), in Leipzig/Germany
Since electrolyzer systems do not store fuel in themselves, but rather rely on external storage units, they can be successfully applied in large-scale energy storage, rural areas being one example. In this application, batteries would have to be largely oversized to meet the storage demand, but fuel cells only need a larger storage unit (typically cheaper than an electrochemical device).
One such pilot program is operating on Stuart Island in Washington State. There the Stuart Island Energy Initiative has built a complete, closed-loop system: Solar panels power an electrolyzer which makes hydrogen. The hydrogen is stored in a 500 gallon tank at 200 PSI, and runs a ReliOn fuel cell to provide full electric back-up to the off-the-grid residence. The SIEI website gives extensive technical details.
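As a back-of-the-envelope check on the Stuart Island setup, the ideal gas law gives the rough mass of hydrogen such a tank holds. The 500 gallon / 200 PSI figures are from the text; room temperature and ideal-gas behavior are assumptions for this estimate.

```python
# Rough ideal-gas estimate of hydrogen stored in a 500 gal tank at 200 PSI.
GALLON_TO_M3 = 0.003785  # US gallon in cubic meters
PSI_TO_PA = 6894.76      # pounds per square inch in pascals
R = 8.314                # J/(mol*K), gas constant
M_H2 = 2.016e-3          # kg/mol, molar mass of H2

def h2_mass_kg(volume_gal, pressure_psi, temp_k=298.0):
    v = volume_gal * GALLON_TO_M3
    p = pressure_psi * PSI_TO_PA
    n = p * v / (R * temp_k)  # moles, from pV = nRT
    return n * M_H2

m = h2_mass_kg(500, 200)  # roughly 2 kg of hydrogen
```

A couple of kilograms of hydrogen is a modest but useful buffer for an off-grid residence, which is consistent with the tank serving as backup rather than primary supply.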
The world's first Fuel Cell Boat HYDRA used an AFC system with 6.5 kW net output.
Suggested applications
• Base load power plants
• Electric and hybrid vehicles.
• Auxiliary power
• Off-grid power supply
• Notebook computers for applications where AC charging may not be available for weeks at a time.
• Portable charging docks for small electronics (e.g. a belt clip that charges your cell phone or PDA).
• Smartphones with high power consumption due to large displays and additional features like GPS might be equipped with micro fuel cells.
Toyota FCHV PEM FC fuel cell vehicle
The first public hydrogen refueling station was opened in Reykjavík, Iceland in April 2003. This station serves three buses built by DaimlerChrysler that are in service in the public transport net of Reykjavík. The station produces the hydrogen it needs by itself, with an electrolyzing unit (produced by Norsk Hydro), and does not need refilling: all that enters is electricity and water. Royal Dutch Shell is also a partner in the project. The station has no roof, in order to allow any leaked hydrogen to escape to the atmosphere.
The GM 1966 Electrovan was the automotive industry's first attempt at an automobile powered by a hydrogen fuel cell. The Electrovan, which weighed more than twice as much as a normal van, could travel up to 70 mph for 30 seconds.
The 2001 Chrysler Natrium used its own on-board hydrogen processor. It produces hydrogen for the fuel cell by reacting sodium borohydride fuel with Borax, both of which Chrysler claimed were naturally occurring in great quantity in the United States. The hydrogen produces electric power in the fuel cell for near-silent operation and a range of 300 miles without impinging on passenger space. Chrysler also developed vehicles which separated hydrogen from gasoline in the vehicle, the purpose being to reduce emissions without relying on a nonexistent hydrogen infrastructure and to avoid large storage tanks.
In 2003 President George Bush proposed what is called the Hydrogen Fuel Initiative (HFI), which was later implemented by legislation through the 2005 Energy Policy Act and the 2006 Advanced Energy Initiative. These aim at further developing hydrogen fuel cells and their infrastructure technologies, with the ultimate goal of producing fuel cell vehicles that are both practical and cost-effective by 2020. Thus far the United States has contributed 1 billion dollars to this project.
In 2005 the British firm Intelligent Energy produced the first working hydrogen-powered motorcycle, called the ENV (Emission Neutral Vehicle). The motorcycle holds enough fuel to run for four hours and to travel 100 miles in an urban area, at a top speed of 50 miles per hour, and will cost around $6,000. Honda also plans to offer fuel-cell motorcycles.
A hydrogen fuel cell public bus accelerating at traffic lights in Perth, Western Australia
There are numerous prototype or production cars and buses based on fuel cell technology being researched or manufactured. Research is ongoing at a variety of motor car manufacturers. Honda has announced the release of a hydrogen vehicle in 2008.
Type 212 submarines use fuel cells to remain submerged for weeks without the need to surface.
Boeing researchers and industry partners throughout Europe are planning to conduct experimental flight tests in 2007 of a manned airplane powered only by a fuel cell and lightweight batteries. The Fuel Cell Demonstrator Airplane research project was completed recently and thorough systems integration testing is now under way in preparation for upcoming ground and flight testing. The Boeing demonstrator uses a Proton Exchange Membrane (PEM) fuel cell/lithium-ion battery hybrid system to power an electric motor, which is coupled to a conventional propeller.
Fuel cell powered race vehicles, designed and built by university students from around the world, competed in the world's first hydrogen race series called the 2008 Formula Zero Championship, which began on August 22nd, 2008 in Rotterdam, the Netherlands. The next race is in South Carolina in March 2009.
Not all geographic markets are ready for SOFC powered m-CHP appliances. Currently, the regions that lead the race in Distributed Generation and deployment of fuel cell m-CHP units are the EU and Japan.
Hydrogen economy
Electrochemical extraction of energy from hydrogen via fuel cells is an especially clean method of meeting power requirements, but not an efficient one, because large amounts of energy must first be added to water or hydrocarbon fuels in order to produce the hydrogen. In addition, extracting hydrogen from hydrocarbons releases carbon monoxide; although this gas is converted into carbon dioxide, such a method of extracting hydrogen remains environmentally injurious. Note that the burning of hydrogen in an internal combustion engine (IC/ICE) is often confused with the electrochemical process of generating electricity via fuel cells (FC), in which there is no combustion (though the reaction does give off some heat). Both processes require the establishment of a hydrogen economy before they may be considered commercially viable, and even then the energy costs above make a hydrogen economy of questionable environmental value. Hydrogen combustion is similar to petroleum combustion: like petroleum combustion it produces nitrogen oxides, which lead to smog, and it is limited by the Carnot efficiency. It is completely different from the hydrogen fuel cell's chemical conversion of hydrogen to electricity and water, which involves no combustion. Hydrogen fuel cells emit only water during use, but carbon dioxide is emitted during the majority of hydrogen production, which comes from natural gas.
Direct methane or natural gas conversion (whether IC or FC) also generates carbon dioxide emissions, but direct hydrocarbon conversion in high-temperature fuel cells produces lower carbon dioxide emissions than combustion of the same fuel (due to the higher efficiency of the fuel cell process compared to combustion), and also lower emissions than hydrogen fuel cells, which use methane less efficiently than high-temperature fuel cells because they must first convert it to high-purity hydrogen by steam reforming. Although hydrogen can also be produced by electrolysis of water using renewable energy, at present less than 3% of hydrogen is produced in this way.
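The efficiency comparison above can be made concrete with a short back-of-the-envelope calculation in Python; the 800 K / 300 K engine temperatures below are illustrative assumptions, while the thermodynamic constants for the hydrogen-oxygen reaction are standard values:

```python
# Compare the Carnot limit of a combustion engine with the thermodynamic
# limit of a hydrogen fuel cell (which is not Carnot-limited).

def carnot_efficiency(t_hot_k, t_cold_k):
    """Maximum efficiency of any heat engine between two temperatures (K)."""
    return 1.0 - t_cold_k / t_hot_k

# Standard data for H2 + 1/2 O2 -> H2O (liquid) at 25 degrees C:
DELTA_G = 237.1  # kJ/mol, Gibbs free energy = maximum electrical work
DELTA_H = 285.8  # kJ/mol, enthalpy = total heat of reaction

fuel_cell_limit = DELTA_G / DELTA_H             # about 0.83
engine_limit = carnot_efficiency(800.0, 300.0)  # 0.625 for these temperatures

print(f"Fuel cell thermodynamic limit: {fuel_cell_limit:.2f}")
print(f"Carnot limit (800 K hot, 300 K cold): {engine_limit:.3f}")
```

Real devices fall well short of both limits, but the comparison shows why a fuel cell can, in principle, outperform a heat engine.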
Hydrogen is an energy carrier, not an energy source, because it is usually produced from other energy sources via petroleum combustion, wind power, or solar photovoltaic cells. Hydrogen may be produced from subsurface reservoirs of methane and natural gas by a combination of steam reforming and the water gas shift reaction, from coal by coal gasification, or from oil shale by oil shale gasification. Low-pressure or high-pressure electrolysis of water, which requires electricity, and high-temperature electrolysis/thermochemical production, which requires high temperatures (ideal for the expected Generation IV reactors), are the two primary methods for extracting hydrogen from water.
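A rough sketch in Python of the electricity needed for electrolysis, using the standard enthalpy of water formation; the 70% electrolyser efficiency is an assumed illustrative figure, not a measured one:

```python
# Electricity needed to produce 1 kg of hydrogen by water electrolysis.

DELTA_H = 285.8           # kJ per mol of H2 (higher heating value basis)
MOLAR_MASS_H2 = 2.016e-3  # kg per mol

# Theoretical minimum electricity per kg of hydrogen
kj_per_kg = DELTA_H / MOLAR_MASS_H2     # about 141,800 kJ/kg
kwh_per_kg_ideal = kj_per_kg / 3600.0   # about 39.4 kWh/kg

EFFICIENCY = 0.70  # assumed electrolyser efficiency (illustrative)
kwh_per_kg_actual = kwh_per_kg_ideal / EFFICIENCY  # about 56 kWh/kg

print(f"Ideal: {kwh_per_kg_ideal:.1f} kWh/kg; "
      f"at 70% efficiency: {kwh_per_kg_actual:.1f} kWh/kg")
```

This is why hydrogen from electrolysis is only as clean, and as cheap, as the electricity that feeds it.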
As of 2006, 49.0% of the electricity produced in the United States came from coal, 19.4% from nuclear, 20.0% from natural gas, 7.0% from hydroelectricity, 1.6% from petroleum, and the remaining 3.1% mostly from geothermal, solar and biomass. When hydrogen is produced through electrolysis, the energy comes from these sources. Though the fuel cell itself emits only heat and water as waste, pollution is often caused when generating the electricity required to produce the hydrogen that the fuel cell uses as its power source (for example, when coal-, oil-, or natural gas-generated electricity is used). This will be the case unless the hydrogen is produced using electricity generated by hydroelectric, geothermal, solar, wind or other clean power sources (which may or may not include nuclear power, depending on one's attitude to the nuclear waste byproducts); hydrogen is only as clean as the energy sources used to produce it. A holistic approach has to take into consideration the impacts of an extended hydrogen scenario, including the production, the use and the disposal of infrastructure and energy converters.
Today's low-temperature fuel cell stacks, such as the proton exchange membrane fuel cell (PEMFC), the direct methanol fuel cell (DMFC) and the phosphoric acid fuel cell (PAFC), make extensive use of platinum catalysts. Impurities poison the catalyst (reducing activity and efficiency), so high hydrogen purity or higher catalyst densities are required. Limited reserves of platinum have also spurred the search for alternative catalyst materials. Although platinum is seen by some as one of the major "showstoppers" to mass-market fuel cell commercialization, most predictions of platinum running out or platinum prices soaring do not take into account the effects of thrifting (reduction in catalyst loading) and recycling. Recent research at Brookhaven National Laboratory could lead to the replacement of platinum by a gold-palladium coating which may be less susceptible to poisoning and thereby improve fuel cell lifetime considerably. The current target for transport PEM fuel cells is 0.2 g/kW of platinum, a factor-of-five decrease from current loadings, and recent comments from major original equipment manufacturers (OEMs) indicate that this is possible. It is also fully anticipated that recycling of fuel cell components, including platinum, will kick in. High-temperature fuel cells, including molten carbonate fuel cells (MCFCs) and solid oxide fuel cells (SOFCs), do not use platinum as catalysts, but instead use cheaper materials such as nickel and nickel oxide, which are considerably more abundant (for example, nickel is used in fairly large quantities in common stainless steel).
Research and development
August 2005: Georgia Institute of Technology researchers use triazole to raise the operating temperature of PEM fuel cells from below 100 °C to over 125 °C, claiming this will require less carbon-monoxide purification of the hydrogen fuel.
2006: Staxon introduced an inexpensive OEM fuel cell module for system integration. In 2006 Angstrom Power, a British Columbia based company, began commercial sales of portable devices using proprietary hydrogen fuel cell technology, trademarked as "micro hydrogen".
Acknowledgement
Gratitude cannot be seen or expressed. It can only be felt in the heart and is beyond description. Often words are inadequate to serve as a means of expressing one's feelings, especially the sense of indebtedness and gratitude to all those who helped us in our duty.
It is an immense pleasure and a profound privilege to express my gratitude and indebtedness, along with sincere thanks, to Dr Kailash Juglan, lecturer of Physics at Lovely Professional University, for providing me the opportunity to work on a project on "Hydrogen fuel cells as a way out of the energy crisis: basic technology used, latest advances and applications".
I am beholden to my family and friends for their blessings and encouragement.
Always Obediently
Prateek Joshi
What Is Energy Crisis?
An energy crisis is any great bottleneck (or price rise) in the supply of energy resources to an economy. It usually refers to a shortage of oil and additionally to electricity or other natural resources. An energy crisis may be referred to as an oil crisis, petroleum crisis, energy shortage, electricity shortage or electricity crisis.
Market failure is possible when monopoly manipulation of markets occurs. A crisis can develop due to industrial actions like union-organized strikes and government embargoes. The cause may be over-consumption, ageing infrastructure, choke point disruption or bottlenecks at oil refineries and port facilities that restrict fuel supply. An emergency may also emerge during unusually cold winters, when higher demand accelerates the depletion of energy supplies.
Pipeline failures and other accidents may cause minor interruptions to energy supplies. A crisis could possibly emerge after infrastructure damage from severe weather. Attacks by terrorists or militia on important infrastructure are a possible problem for energy consumers, with a successful strike on a Middle East facility potentially causing global shortages. Political events, for example, when governments change due to regime change, monarchy collapse, military occupation, and coup may disrupt oil and gas production and create shortages.
Energy Crisis in History
• 1973 oil crisis - Cause: an OPEC oil export embargo by many of the major Arab oil-producing states, in response to western support of Israel during the Yom Kippur War
• 1979 energy crisis - Cause: the Iranian revolution
• 1990 spike in the price of oil Cause: the Gulf War
• The 2000–2001 California electricity crisis - Cause: failed deregulation, and business corruption.
• The UK fuel protest of 2000 - Cause: a rise in the price of crude oil combined with already relatively high taxation on road fuel in the UK.
• North American Gas crisis
• Argentine gas crisis of 2004
• North Korea has had energy shortages for many years.
• Zimbabwe has experienced a shortage of energy supplies for many years due to financial mismanagement.
While not entering a full crisis, political riots that occurred during the 2007 Burmese anti-government protests were initially sparked by rising energy prices. Likewise the Russia-Ukraine gas dispute and the Russia-Belarus energy dispute have been mostly resolved before entering a prolonged crisis stage.
Present Day Crisis
Crises that currently exist include:
• Oil price increases since 2003 - Caused by continued global increases in petroleum demand coupled with production stagnation, and the falling value of the U.S. dollar.
• 2008 Central Asia energy crisis, caused by abnormally cold temperatures and low water levels in an area dependent on hydroelectric power. Despite having significant hydrocarbon reserves, in February 2008 the President of Pakistan announced plans to tackle energy shortages that were reaching crisis stage. At the same time the South African President was moving to allay fears of a prolonged electricity crisis in South Africa.
• South African electrical crisis. The South African crisis, which may last until 2012, led to large price rises for platinum in February 2008 and reduced gold production.
• China experienced severe energy shortages towards the end of 2005 and again in early 2008. During the latter crisis it suffered severe damage to power networks along with diesel and coal shortages. Supplies of electricity in Guangdong province, the manufacturing hub of China, are predicted to fall short by an estimated 10 GW.
Predictions
Although technology has made oil extraction more efficient, the world is struggling to supply oil using increasingly costly and less productive methods, such as deep-sea drilling and developing environmentally sensitive areas such as the Arctic National Wildlife Refuge.
The world's population continues to grow at a quarter of a million people per day, increasing the consumption of energy. Although still far below that of people in developed countries, especially the USA, the per capita energy consumption of China, India and other developing nations continues to increase as the people living in these countries adopt more energy-intensive lifestyles. At present a small part of the world's population consumes a large part of its resources, with the United States and its population of 300 million people consuming far more oil than China with its population of 1.3 billion people.
Future and alternative energy sources
In response to the petroleum crisis, the principles of the green energy and sustainable living movements have gained popularity. This has led to increasing interest in alternate power/fuel research such as fuel cell technology, the liquid nitrogen economy, hydrogen fuel, biomethanol, biodiesel, the Karrick process, solar energy, geothermal energy, tidal energy, wave power, wind energy, and fusion power. To date, only hydroelectricity and nuclear power have been significant alternatives to fossil fuel.
Hydrogen gas is currently produced at a net energy loss from natural gas, which is itself experiencing declining production in North America and elsewhere. When not produced from natural gas, hydrogen still needs another source of energy to create it, also at a loss during the process. This has led to hydrogen being regarded as a 'carrier' of energy, like electricity, rather than a 'source'. The unproven dehydrogenating process has also been suggested for using water as an energy source.
Efficiency mechanisms such as Negawatt power can encourage significantly more effective use of current generating capacity. It is a term used to describe the trading of increased efficiency, using consumption efficiency to increase available market supply rather than by increasing plant generation capacity. As such, it is a demand-side as opposed to a supply-side measure.
Growing demand for a new fuel
As the energy crisis has intensified in recent years, there has been a growing demand for alternative sources of energy. Some of them are:
1- Solar Energy
2-Tidal Energy
3-Hydro energy
4-Biological Energy
5-Hydrogen Energy
Much has been said about all these forms of energy except hydrogen-based energy, which has gained attention among the public and the scientific world only in the past few years. Compared with other forms of energy, hydrogen-based energy has two main benefits.
Firstly, it leaves no residue after combustion except pure water, which can be used for countless purposes; secondly, it produces a large amount of energy when combusted.
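For scale, here is a short Python sketch comparing approximate gravimetric energy densities (lower heating values) of common fuels; the figures are rounded textbook values:

```python
# Approximate gravimetric energy densities (lower heating values) of
# common fuels, in MJ per kg. Figures are rounded textbook values.

LHV_MJ_PER_KG = {
    "hydrogen": 120.0,
    "natural gas (methane)": 50.0,
    "gasoline": 44.0,
    "coal (bituminous)": 27.0,
}

baseline = LHV_MJ_PER_KG["gasoline"]
for fuel, lhv in LHV_MJ_PER_KG.items():
    print(f"{fuel:22s} {lhv:6.1f} MJ/kg  ({lhv / baseline:.1f}x gasoline)")
```

Per kilogram, hydrogen carries roughly three times the energy of gasoline; the practical difficulty is its very low density, which makes storage the hard part.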
What Is Hydrogen?
Hydrogen is the chemical element with atomic number 1. It is represented by the symbol H. At standard temperature and pressure hydrogen is a colorless, odorless, nonmetallic, tasteless, highly flammable diatomic gas with the molecular formula H2. With an atomic weight of 1.00794, hydrogen is the lightest element.
Hydrogen is the most abundant of the chemical elements, constituting roughly 75% of the universe's elemental mass. Stars in the main sequence are mainly composed of hydrogen in its plasma state. Elemental hydrogen is relatively rare on Earth, and is industrially produced from hydrocarbons such as methane, after which most elemental hydrogen is used "captively" (meaning locally at the production site), with the largest markets about equally divided between fossil fuel upgrading (e.g., hydrocracking) and ammonia production (mostly for the fertilizer market). Hydrogen may be produced from water using the process of electrolysis, but this process is presently significantly more expensive commercially than hydrogen production from natural gas.
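The industrial route from methane mentioned above proceeds in two standard steps, steam reforming followed by the water gas shift reaction:

```latex
% Steam-methane reforming (high temperature, nickel catalyst):
\mathrm{CH_4} + \mathrm{H_2O} \longrightarrow \mathrm{CO} + 3\,\mathrm{H_2}

% Water gas shift:
\mathrm{CO} + \mathrm{H_2O} \longrightarrow \mathrm{CO_2} + \mathrm{H_2}
```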
The most common naturally occurring isotope of hydrogen, known as protium, has a single proton and no neutrons. In ionic compounds it can take on either a positive charge (becoming a cation composed of a bare proton) or a negative charge (becoming an anion known as a hydride). Hydrogen can form compounds with most elements and is present in water and most organic compounds. It plays a particularly important role in acid-base chemistry, in which many reactions involve the exchange of protons between soluble molecules. As the only neutral atom for which the Schrödinger equation can be solved analytically, study of the energetics and bonding of the hydrogen atom has played a key role in the development of quantum mechanics.
The solubility and characteristics of hydrogen with various metals are very important in metallurgy (as many metals can suffer hydrogen embrittlement) and in developing safe ways to store it for use as a fuel. Hydrogen is highly soluble in many compounds composed of rare earth metals and transition metals and can be dissolved in both crystalline and amorphous metals. Hydrogen solubility in metals is influenced by local distortions or impurities in the metal crystal lattice.
History
Hydrogen gas, H2, was first artificially produced and formally described by T. von Hohenheim (also known as Paracelsus, 1493–1541) via the mixing of metals with strong acids. He was unaware that the flammable gas produced by this chemical reaction was a new chemical element. In 1671, Robert Boyle rediscovered and described the reaction between iron filings and dilute acids, which results in the production of hydrogen gas. In 1766, Henry Cavendish was the first to recognize hydrogen gas as a discrete substance, by identifying the gas from a metal-acid reaction as "inflammable air" and further finding in 1781 that the gas produces water when burned. He is usually given credit for its discovery as an element. In 1783, Antoine Lavoisier gave the element the name hydrogen (from the Greek hydro meaning water and genes meaning creator) when he and Laplace reproduced Cavendish's finding that water is produced when hydrogen is burned.
Hydrogen was liquefied for the first time by James Dewar in 1898 by using regenerative cooling and his invention, the vacuum flask. He produced solid hydrogen the next year. Deuterium was discovered in December 1931 by Harold Urey, and tritium was prepared in 1934 by Ernest Rutherford, Mark Oliphant, and Paul Harteck. Heavy water, which consists of deuterium in the place of regular hydrogen, was discovered by Urey's group in 1932. François Isaac de Rivaz built the first internal combustion engine powered by a mixture of hydrogen and oxygen in 1806. Edward Daniel Clarke invented the hydrogen gas blowpipe in 1819. The Döbereiner's lamp and limelight were invented in 1823.
The first hydrogen-filled balloon was invented by Jacques Charles in 1783. Hydrogen provided the lift for the first reliable form of air travel following the 1852 invention of the first hydrogen-lifted airship by Henri Giffard. German count Ferdinand von Zeppelin promoted the idea of rigid airships lifted by hydrogen, later called Zeppelins, the first of which had its maiden flight in 1900. Regularly scheduled flights started in 1910, and by the outbreak of World War I in August 1914 they had carried 35,000 passengers without a serious incident. Hydrogen-lifted airships were used as observation platforms and bombers during the war.
The first non-stop transatlantic crossing was made by the British airship R34 in 1919. Regular passenger service resumed in the 1920s, and the discovery of helium reserves in the United States promised increased safety, but the U.S. government refused to sell the gas for this purpose. Therefore, H2 was used in the Hindenburg airship, which was destroyed in a midair fire over New Jersey on May 6, 1937. The incident was broadcast live on radio and filmed. Ignition of leaking hydrogen was widely assumed to be the cause, but later investigations pointed to ignition of the aluminized fabric coating by static electricity. Either way, the damage to hydrogen's reputation as a lifting gas was already done.
Hydrogen As a FUEL
"President Bush’s remarks in his State-of-the-Union message proposing a big jump in funding for hydrogen and fuel cell research and development are terrific news. It’s imperative that Congress follows through now and makes available those funds. Aside from the tangible benefits of spending more on an environmentally benign area of energy that for too long has been treated - often condescendingly - like a poor orphan, the political message is of supreme significance. For decades, supporters of hydrogen and other alternative energy fields have argued until they were blue in the face that the key ingredient missing in moving forward is national political will. President Bush’s support provides a large measure of that political will." --Peter Hoffmann, 31 January 2003
About the book: Hydrogen is the quintessential eco-fuel. This invisible, tasteless gas is the most abundant element in the universe. It is the basic building block and fuel of stars and an essential raw material in innumerable biological and chemical processes. As a completely nonpolluting fuel, it may hold the answer to growing environmental concerns about atmospheric accumulation of carbon dioxide and the resultant Greenhouse Effect. In this book Peter Hoffmann describes current research toward a hydrogen-based economy. He presents the history of hydrogen energy and discusses the environmental dangers of continued dependence on fossil fuels. Hydrogen is not an energy source but a carrier that, like electricity, must be manufactured. Today hydrogen is manufactured by "decarburizing" fossil fuels. In the future it will be derived from water and solar energy and perhaps from "cleaner" versions of nuclear energy. Because it can be made by a variety of methods, Hoffmann argues, it can be easily adapted by different countries and economies. Hoffmann acknowledges the social, political, and economic difficulties in replacing current energy systems with an entirely new one.
Although the process of converting to a hydrogen-based economy would be complex, he demonstrates that the environmental and health benefits would far outweigh the costs.
Small whispers of hydrogen energy's vast potential have been heard along the fringes of industry since the oil shocks of the 1970s, but only last year did a steady drumbeat begin in the capital markets of Wall Street, Europe, and Asia. First BMW and Daimler-Chrysler, and then Ford, Honda, Toyota, GM, and others laid claim to hydrogen fuel and to the fuel cell as a new prime mover for the automobile.
An informed public may be all that is required to bring an end to the climate-destabilizing fossil era. Until this summer, though, we had no recent book on the emerging world hydrogen economy. Information was available only to readers of periodicals like Peter Hoffmann's Hydrogen and Fuel Cell Letter and The International Journal of Hydrogen Energy.
Finally in August, two books. Hoffmann's chronicles hydrogen science and technology from the earliest days. Embedded in its historical narrative are explanations of these technologies and their advantages and drawbacks. He addresses the questions people are starting to ask: Why a hydrogen economy? How do you get hydrogen? What will it cost? Is it safe? Will it reduce global warming? What is its connection with solar and wind energy? The book's main drawback is the index, which is missing essential entries such as pipelines, carbon dioxide, leakage, sequestration, biomass, and embrittlement. But at last we now have a book we can use to understand the elements of this epic change.
Seth Dunn's Worldwatch Paper speaks from the environmental perspective and describes present practices with an eye to the future. He reports on a range of studies by government agencies, NGOs, universities, and corporations, all attempting to illuminate potential paths for the emerging hydrogen economy. He compares this moment in the hydrogen fuel revolution to the early automobile era, which saw fierce competition among technologies before the gasoline-powered internal combustion engine won out as the standard.--Ty Cashman
"Decarbonization is just what it sounds like: taking the carbon out of hydrocarbon fuels. What is left is, of course, hydrogen. Decarbonization will be the industrial end-game strategy of a trend first detected by Cesare Marchetti in the 1970s, when he described a gradual shift, over centuries, from hydrocarbon fuels with high carbon and low hydrogen content (wood, peat, coal) to fuels with increasingly less carbon and more hydrogen (oil, natural gas), culminating, seemingly inevitably, in pure hydrogen as the principal energy carrier of an advanced industrial society.
"If hydrogen is ever to replace natural gas as a utility fuel, very large quantities obviously will have to be stored somewhere. Storage, to maintain a buffer for seasonal, daily, and hourly swings in demand, is essential with any system for the transmission of a gas. Storage facilities even out the ups and downs of demand, including temporary interruptions and breakdowns, and still permit steady, maximum-efficiency production.
"It has been suggested that huge amounts of hydrogen could be stored underground in exhausted natural gas fields, in natural or manmade caverns, or in aquifers.... The natural gas industry has long been using depleted gas and oil fields to store huge amounts of natural gas. Aquifers are similar to natural gas and oil fields in that they are porous geological formations, but without the fossil-fuel or natural gas content. Many of them feature a "caprock" formation, a layer on top of the formation that is usually saturated with water. This layer acts as a seal to keep the gas from leaking out; it works for both natural gas and the lighter hydrogen.
What Is Fuel Cell?
A fuel cell is an electrochemical conversion device. It produces electricity from fuel (on the anode side) and an oxidant (on the cathode side), which react in the presence of an electrolyte. The reactants flow into the cell, and the reaction products flow out of it, while the electrolyte remains within it. Fuel cells can operate virtually continuously as long as the necessary flows are maintained.
Fuel cells are different from electrochemical cell batteries in that they consume reactant, which must be replenished, whereas batteries store electrical energy chemically in a closed system. Additionally, while the electrodes within a battery react and change as a battery is charged or discharged, a fuel cell's electrodes are catalytic and relatively stable.
Many combinations of fuel and oxidant are possible. A hydrogen cell uses hydrogen as fuel and oxygen (usually from air) as oxidant. Other fuels include hydrocarbons and alcohols. Other oxidants include air, chlorine and chlorine dioxide.
Design And Working Of a Fuel Cell
A fuel cell works by catalysis, separating the component electrons and protons of the reactant fuel and forcing the electrons to travel through a circuit, thereby converting their flow into electrical power. The catalyst is typically a platinum-group metal or alloy. A second catalytic process takes the electrons back in, combining them with the protons and the oxidant to form waste products (typically simple compounds like water and carbon dioxide).
In the archetypal hydrogen–oxygen proton exchange membrane fuel cell (PEMFC) design, a proton-conducting polymer membrane, (the electrolyte), separates the anode and cathode sides. This was called a "solid polymer electrolyte fuel cell" (SPEFC) in the early 1970s, before the proton exchange mechanism was well-understood. (Notice that "polymer electrolyte membrane" and "proton exchange membrane" result in the same acronym.)
On the anode side, hydrogen diffuses to the anode catalyst, where it dissociates into protons and electrons. The protons are conducted through the membrane to the cathode, but the electrons are forced to travel in an external circuit (supplying power) because the membrane is electrically insulating. On the cathode catalyst, oxygen molecules react with the electrons (which have traveled through the external circuit) and protons to form water, in this example the only waste product, either liquid or vapor.
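The electrode processes just described correspond to the standard half-cell reactions:

```latex
% Anode (hydrogen oxidation):
\mathrm{H_2} \longrightarrow 2\,\mathrm{H^+} + 2\,e^-

% Cathode (oxygen reduction):
\tfrac{1}{2}\,\mathrm{O_2} + 2\,\mathrm{H^+} + 2\,e^- \longrightarrow \mathrm{H_2O}

% Overall cell reaction:
\mathrm{H_2} + \tfrac{1}{2}\,\mathrm{O_2} \longrightarrow \mathrm{H_2O}
```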
In addition to this pure hydrogen type, there are hydrocarbon fuels for fuel cells, including diesel, methanol (see: direct-methanol fuel cells and indirect methanol fuel cells) and chemical hydrides. The waste products with these types of fuel are carbon dioxide and water.
Construction of a low-temperature PEMFC: bipolar plate as electrode with an in-milled gas channel structure, fabricated from conductive plastics (enhanced with carbon nanotubes for higher conductivity); porous carbon papers; a reactive layer, usually applied to the polymer membrane; and the polymer membrane itself.
Condensation of water produced by a PEMFC on the air channel wall. The gold wire around the cell ensures the collection of electric current.
The materials used in fuel cells differ by type. In a typical membrane electrode assembly (MEA), the electrode–bipolar plates are usually made of metal, nickel or carbon nanotubes, and are coated with a catalyst (like platinum, nano iron powders or palladium) for higher efficiency. Carbon paper separates them from the electrolyte. The electrolyte could be ceramic or a membrane.
A typical PEM fuel cell produces a voltage from 0.6 V to 0.7 V at full rated load. Voltage decreases as current increases, due to several factors:
• Activation loss
• Ohmic loss (voltage drop due to resistance of the cell components and interconnects)
• Mass transport loss (depletion of reactants at catalyst sites under high loads, causing rapid loss of voltage)[3]
To deliver the desired amount of energy, the fuel cells can be combined in series and parallel circuits, where series yield higher voltage, and parallel allows a stronger current to be drawn. Such a design is called a fuel cell stack. Further, the cell surface area can be increased, to allow stronger current from each cell.
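The series/parallel arithmetic above can be sketched in a few lines of Python; the 0.65 V per cell comes from the 0.6 V to 0.7 V full-load figure quoted earlier, while the current density and active area are assumed illustrative values:

```python
# Series/parallel arithmetic for a fuel cell stack.

CELL_VOLTAGE = 0.65    # V per cell at rated load (from the figure above)
CURRENT_DENSITY = 1.0  # A per cm^2 (assumed illustrative value)
CELL_AREA = 200.0      # cm^2 of active area per cell (assumed)

def stack_output(n_series, n_parallel=1):
    """Return (voltage, current, power) of a stack of identical cells."""
    voltage = n_series * CELL_VOLTAGE                    # series adds voltage
    current = n_parallel * CURRENT_DENSITY * CELL_AREA   # parallel (or more area) adds current
    return voltage, current, voltage * current

v, i, p = stack_output(n_series=100)
print(f"100-cell stack: {v:.0f} V, {i:.0f} A, {p / 1000:.1f} kW")
```

Doubling the cell area has the same effect on current as paralleling two cells, which is why stack designers trade off cell count against cell size.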
History
The principle of the fuel cell was discovered by German scientist Christian Friedrich Schönbein in 1838 and published in one of the scientific magazines of the time. Based on this work, the first fuel cell was demonstrated by Welsh scientist Sir William Robert Grove in the February 1839 edition of the Philosophical Magazine and Journal of Science and later sketched, in 1842, in the same journal. The fuel cell he made used similar materials to today's phosphoric-acid fuel cell.
In 1955, W. Thomas Grubb, a chemist working for the General Electric Company (GE), further modified the original fuel cell design by using a sulphonated polystyrene ion-exchange membrane as the electrolyte. Three years later another GE chemist, Leonard Niedrach, devised a way of depositing platinum onto the membrane, which served as catalyst for the necessary hydrogen oxidation and oxygen reduction reactions. This became known as the 'Grubb-Niedrach fuel cell'. GE went on to develop this technology with NASA and McDonnell Aircraft, leading to its use during Project Gemini. This was the first commercial use of a fuel cell. It wasn't until 1959 that British engineer Francis Thomas Bacon successfully developed a 5 kW stationary fuel cell. In 1959, a team led by Harry Ihrig built a 15 kW fuel cell tractor for Allis-Chalmers which was demonstrated across the US at state fairs. This system used potassium hydroxide as the electrolyte and compressed hydrogen and oxygen as the reactants. Later in 1959, Bacon and his colleagues demonstrated a practical five-kilowatt unit capable of powering a welding machine. In the 1960s, Pratt and Whitney licensed Bacon's U.S. patents for use in the U.S. space program to supply electricity and drinking water (hydrogen and oxygen being readily available from the spacecraft tanks).
United Technology Corp.'s UTC Power subsidiary was the first company to manufacture and commercialize a large, stationary fuel cell system for use as a co-generation power plant in hospitals, universities and large office buildings. UTC Power continues to market this fuel cell as the PureCell 200, a 200 kW system. UTC Power continues to be the sole supplier of fuel cells to NASA for use in space vehicles, having supplied the Apollo missions, and currently the Space Shuttle program, and is developing fuel cells for automobiles, buses, and cell phone towers; the company has demonstrated the first fuel cell capable of starting under freezing conditions with its proton exchange membrane automotive fuel cell.
Types Of Fuel Cells
There are several different types of fuel cells, each using a different chemistry. Fuel cells are usually classified by their operating temperature and the type of electrolyte they use. Some types of fuel cells work well for use in stationary power generation plants. Others may be useful for small portable applications or for powering cars. The main types of fuel cells include:
Polymer exchange membrane fuel cell (PEMFC)
The Department of Energy (DOE) is focusing on the PEMFC as the most likely candidate for transportation applications. The PEMFC has a high power density and a relatively low operating temperature (ranging from 60 to 80 degrees Celsius, or 140 to 176 degrees Fahrenheit). The low operating temperature means that it doesn't take very long for the fuel cell to warm up and begin generating electricity. We'll take a closer look at the PEMFC in the next section.
Solid oxide fuel cell (SOFC)
These fuel cells are best suited for large-scale stationary power generators that could provide electricity for factories or towns. This type of fuel cell operates at very high temperatures (between 700 and 1,000 degrees Celsius). This high temperature makes reliability a problem, because parts of the fuel cell can break down after cycling on and off repeatedly. However, solid oxide fuel cells are very stable when in continuous use. In fact, the SOFC has demonstrated the longest operating life of any fuel cell under certain operating conditions. The high temperature also has an advantage: the steam produced by the fuel cell can be channeled into turbines to generate more electricity. This process is called co-generation of heat and power (CHP) and it improves the overall efficiency of the system.
Alkaline fuel cell (AFC)
This is one of the oldest designs for fuel cells; the United States space program has used them since the 1960s. The AFC is very susceptible to contamination, so it requires pure hydrogen and oxygen. It is also very expensive, so this type of fuel cell is unlikely to be commercialized.
Molten-carbonate fuel cell (MCFC)
Like the SOFC, these fuel cells are also best suited for large stationary power generators. They operate at 600 degrees Celsius, so they can generate steam that can be used to generate more power. They have a lower operating temperature than solid oxide fuel cells, which means they don't need such exotic materials. This makes the design a little less expensive.
Phosphoric-acid fuel cell (PAFC)
The phosphoric-acid fuel cell has potential for use in small stationary power-generation systems. It operates at a higher temperature than polymer exchange membrane fuel cells, so it has a longer warm-up time. This makes it unsuitable for use in cars.
Direct-methanol fuel cell (DMFC)
Methanol fuel cells are comparable to a PEMFC in regards to operating temperature, but are not as efficient. Also, the DMFC requires a relatively large amount of platinum to act as a catalyst, which makes these fuel cells expensive.
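The classification above can be captured in a small lookup structure. A sketch in Python, using only the figures quoted in this section; the dictionary and helper function names are hypothetical, and None marks types for which no operating temperature is stated:

```python
# Summary of the fuel cell types described above, keyed by acronym.
# Operating temperature ranges are in degrees Celsius; None means the
# text gives no explicit figure for that type.
FUEL_CELL_TYPES = {
    "PEMFC": {"name": "Polymer exchange membrane", "temp_c": (60, 80)},
    "SOFC":  {"name": "Solid oxide",               "temp_c": (700, 1000)},
    "AFC":   {"name": "Alkaline",                  "temp_c": None},
    "MCFC":  {"name": "Molten carbonate",          "temp_c": (600, 600)},
    "PAFC":  {"name": "Phosphoric acid",           "temp_c": None},
    "DMFC":  {"name": "Direct methanol",           "temp_c": (60, 80)},  # "comparable to a PEMFC"
}

def high_temperature_types(threshold_c=500):
    """Types whose minimum operating temperature reaches the threshold;
    these are the candidates for co-generation of heat and power."""
    return [acronym for acronym, info in FUEL_CELL_TYPES.items()
            if info["temp_c"] and info["temp_c"][0] >= threshold_c]

print(high_temperature_types())  # ['SOFC', 'MCFC']
```

As the text notes, only the high-temperature types (SOFC, MCFC) produce steam hot enough to drive turbines for extra power.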
In the following section, we will take a closer look at the kind of fuel cell the DOE plans to use to power future vehicles -- the PEMFC.
Efficiency Of Fuel Cells
The efficiency of a fuel cell is dependent on the amount of power drawn from it. Drawing more power means drawing more current, which increases the losses in the fuel cell. As a general rule, the more power (current) drawn, the lower the efficiency. Most losses manifest themselves as a voltage drop in the cell, so the efficiency of a cell is almost proportional to its voltage. For this reason, it is common to show graphs of voltage versus current (so-called polarization curves) for fuel cells. A typical cell running at 0.7 V has an efficiency of about 50%, meaning that 50% of the energy content of the hydrogen is converted into electrical energy; the remaining 50% will be converted into heat. (Depending on the fuel cell system design, some fuel might leave the system unreacted, constituting an additional loss.)
For a hydrogen cell operating at standard conditions with no reactant leaks, the efficiency is equal to the cell voltage divided by 1.48 V, based on the enthalpy, or heating value, of the reaction. For the same cell, the second law efficiency is equal to cell voltage divided by 1.23 V. (This voltage varies with fuel used, and quality and temperature of the cell.) The difference between these numbers represents the difference between the reaction's enthalpy and Gibbs free energy. This difference always appears as heat, along with any losses in electrical conversion efficiency.
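The two voltage-based efficiency definitions above translate directly into code. A minimal Python sketch (the function name is hypothetical):

```python
def fuel_cell_efficiency(cell_voltage, basis="enthalpy"):
    """First-law efficiency of a hydrogen fuel cell from its terminal voltage.

    basis="enthalpy": divide by 1.48 V (heating value of the reaction).
    basis="gibbs":    divide by 1.23 V (second-law / Gibbs free energy basis).
    Assumes standard conditions and no reactant leaks, as stated above.
    """
    reference_voltage = {"enthalpy": 1.48, "gibbs": 1.23}[basis]
    return cell_voltage / reference_voltage

# A typical cell at 0.7 V comes out at roughly 47% on the heating-value
# basis, consistent with the "about 50%" figure quoted above.
print(round(fuel_cell_efficiency(0.7), 3))           # 0.473
print(round(fuel_cell_efficiency(0.7, "gibbs"), 3))  # 0.569
```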
Fuel cells do not operate on a thermal cycle. As such, they are not constrained, as combustion engines are, by thermodynamic limits such as the Carnot cycle efficiency. At times this is misrepresented by saying that fuel cells are exempt from the laws of thermodynamics, because most people think of thermodynamics in terms of combustion processes (enthalpy of formation). The laws of thermodynamics also hold for chemical processes (Gibbs free energy) like fuel cells, but the maximum theoretical efficiency is higher (83% at 298 K) than the Otto cycle thermal efficiency (60% for a compression ratio of 10 and a specific heat ratio of 1.4). Comparing limits imposed by thermodynamics is not a good predictor of practically achievable efficiencies. Also, if propulsion is the goal, the electrical output of the fuel cell still has to be converted into mechanical power, with the corresponding inefficiency. In reference to the exemption claim, the correct claim is that the "limitations imposed by the second law of thermodynamics on the operation of fuel cells are much less severe than the limitations imposed on conventional energy conversion systems". Consequently, they can have very high efficiencies in converting chemical energy to electrical energy, especially when they are operated at low power density and use pure hydrogen and oxygen as reactants.
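The two theoretical limits compared above can be reproduced with a few lines of arithmetic. The Gibbs and enthalpy values used here are standard thermochemical data for the hydrogen-oxygen reaction at 298 K, assumed rather than taken from the text:

```python
# Maximum theoretical fuel cell efficiency: Gibbs free energy over
# enthalpy for H2 + 1/2 O2 -> H2O(l) at 298 K (standard data, kJ/mol).
DELTA_G = 237.1
DELTA_H = 285.8
fuel_cell_limit = DELTA_G / DELTA_H  # ~0.83, the 83% figure above

# Otto cycle thermal efficiency: 1 - r^(1 - gamma), with the compression
# ratio and specific heat ratio quoted in the text.
r = 10       # compression ratio
gamma = 1.4  # specific heat ratio
otto_limit = 1 - r ** (1 - gamma)    # ~0.60, the 60% figure above

print(f"fuel cell limit: {fuel_cell_limit:.0%}")  # fuel cell limit: 83%
print(f"Otto cycle limit: {otto_limit:.0%}")      # Otto cycle limit: 60%
```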
For a fuel cell operating on air (rather than bottled oxygen), losses due to the air supply system must also be taken into account; the supply air must be pressurized and dehumidified. This reduces the efficiency significantly and brings it near to that of a compression-ignition engine. Furthermore, fuel cell efficiency decreases as load increases.
The tank-to-wheel efficiency of a fuel cell vehicle is about 45% at low loads and shows average values of about 36% when a driving cycle like the NEDC (New European Driving Cycle) is used as the test procedure. The comparable NEDC value for a diesel vehicle is 22%.
It is also important to take losses due to fuel production, transportation, and storage into account. Fuel cell vehicles running on compressed hydrogen may have a power-plant-to-wheel efficiency of 22% if the hydrogen is stored as high-pressure gas, and 17% if it is stored as liquid hydrogen.
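Since the quoted plant-to-wheel figures are the product of an upstream efficiency (production, transportation, and storage) and the tank-to-wheel efficiency, the implied upstream efficiency can be backed out by division. A quick check on the numbers above:

```python
# Efficiency figures quoted in the text.
tank_to_wheel_nedc = 0.36   # fuel cell vehicle over the NEDC cycle
plant_to_wheel_gas = 0.22   # hydrogen stored as high-pressure gas
plant_to_wheel_liq = 0.17   # hydrogen stored as liquid hydrogen

# Upstream efficiency implied by chaining: plant-to-wheel = upstream * tank-to-wheel.
implied_upstream_gas = plant_to_wheel_gas / tank_to_wheel_nedc
implied_upstream_liq = plant_to_wheel_liq / tank_to_wheel_nedc

print(f"upstream efficiency, gaseous storage: {implied_upstream_gas:.0%}")  # 61%
print(f"upstream efficiency, liquid storage:  {implied_upstream_liq:.0%}")  # 47%
```

The gap between the two reflects the extra energy cost of liquefying hydrogen compared with compressing it.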
Fuel cells cannot store energy like a battery, but in some applications, such as stand-alone power plants based on discontinuous sources such as solar or wind power, they are combined with electrolyzers and storage systems to form an energy storage system. The overall efficiency (electricity to hydrogen and back to electricity) of such plants (known as round-trip efficiency) is between 30 and 50%, depending on conditions. While a much cheaper lead-acid battery might return about 90%, the electrolyzer/fuel cell system can store indefinite quantities of hydrogen, and is therefore better suited for long-term storage.
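The round-trip figure is simply the product of the electrolyzer and fuel cell stage efficiencies. A sketch with illustrative stage values (the 70%/50% split is an assumption, chosen to land inside the 30-50% range quoted above):

```python
def round_trip_efficiency(electrolyzer_eff, fuel_cell_eff):
    """Electricity -> hydrogen -> electricity efficiency of a storage plant."""
    return electrolyzer_eff * fuel_cell_eff

# Illustrative only: a ~70% electrolyzer paired with a ~50% fuel cell.
print(f"{round_trip_efficiency(0.70, 0.50):.0%}")  # 35%
```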
Solid-oxide fuel cells produce exothermic heat from the recombination of the oxygen and hydrogen. The ceramic can run as hot as 800 degrees Celsius. This heat can be captured and used to heat water in a micro combined heat and power (m-CHP) application. When the heat is captured, total efficiency can reach 80-90%. CHP units are being developed today for the European home market.
Design Issues And Advancements
• Costs. In 2002, typical cells had a catalyst content of US$1000 per kilowatt of electric power output. In 2008, UTC Power offered 400 kW fuel cells at an installed cost of $1,000,000 per unit. The goal is to reduce the cost in order to compete with current market technologies, including gasoline internal combustion engines. Many companies are working on techniques to reduce cost in a variety of ways, including reducing the amount of platinum needed in each individual cell. Ballard Power Systems has experimented with a catalyst enhanced with carbon silk, which allows a 30% reduction (1 mg/cm² to 0.7 mg/cm²) in platinum usage without reduction in performance. Monash University, Melbourne, uses PEDOT instead of platinum.
• The production costs of the PEM (proton exchange membrane). The Nafion membrane currently costs €400/m². In 2005 Ballard Power Systems announced that its fuel cells will use Solupor, a porous polyethylene film patented by DSM.
• Water and air management (in PEMFCs). In this type of fuel cell, the membrane must be hydrated, requiring water to be evaporated at precisely the same rate that it is produced. If water is evaporated too quickly, the membrane dries, resistance across it increases, and eventually it will crack, creating a gas "short circuit" where hydrogen and oxygen combine directly, generating heat that will damage the fuel cell. If the water is evaporated too slowly, the electrodes will flood, preventing the reactants from reaching the catalyst and stopping the reaction. Methods to manage water in cells, such as electroosmotic pumps focusing on flow control, are being developed. Just as in a combustion engine, a steady ratio between the reactant and oxygen is necessary to keep the fuel cell operating efficiently.
• Temperature management. The same temperature must be maintained throughout the cell in order to prevent destruction of the cell through thermal loading. This is particularly challenging as the 2H2 + O2 -> 2H2O reaction is highly exothermic, so a large quantity of heat is generated within the fuel cell.
• Durability, service life, and special requirements for some types of cells. Stationary fuel cell applications typically require more than 40,000 hours of reliable operation at temperatures of -35 °C to 40 °C (-31 °F to 104 °F), while automotive fuel cells require a 5,000-hour lifespan (the equivalent of 150,000 miles) under extreme temperatures. Automotive engines must also be able to start reliably at -30 °C (-22 °F) and have a high power-to-volume ratio (typically 2.5 kW per liter).
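The automotive targets in the list above imply a couple of easily checked numbers:

```python
# Two quick sanity checks on the automotive targets quoted above.
lifetime_hours = 5_000
lifetime_miles = 150_000

# The 5,000 h / 150,000 mile pairing implies this assumed average speed (mph):
print(lifetime_miles / lifetime_hours)  # 30.0

# At the stated 2.5 kW per liter, a hypothetical 100 kW stack occupies (liters):
print(100 / 2.5)  # 40.0
```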
Fuel cell applications
Type 212 submarine with fuel cell propulsion of the German Navy in dock
Fuel cells are very useful as power sources in remote locations, such as spacecraft, remote weather stations, large parks, rural locations, and in certain military applications. A fuel cell system running on hydrogen can be compact and lightweight, and have no major moving parts. Because fuel cells have no moving parts and do not involve combustion, in ideal conditions they can achieve up to 99.9999% reliability. This equates to around one minute of downtime in a two-year period.
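The reliability claim above converts into downtime directly: the unavailable fraction is one minus the reliability, multiplied by the length of the period.

```python
# Converting the 99.9999% reliability figure into downtime over two years.
reliability = 0.999999
minutes_in_two_years = 2 * 365 * 24 * 60          # 1,051,200 minutes
downtime_minutes = (1 - reliability) * minutes_in_two_years
print(round(downtime_minutes, 2))                 # 1.05
```

This confirms the "around one minute in two years" figure quoted in the text.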
A new application is micro combined heat and power, which is cogeneration for family homes, office buildings and factories. The stationary fuel cell application generates constant electric power (selling excess power back to the grid when it is not consumed), and at the same time produces hot air and water from the waste heat. A lower fuel-to-electricity conversion efficiency is tolerated (typically 15-20%), because most of the energy not converted into electricity is utilized as heat. Some heat is lost with the exhaust gas just as in a normal furnace, so the combined heat and power efficiency is still lower than 100%, typically around 80%. In terms of exergy, however, the process is inefficient, and one could do better by maximizing the electricity generated and then using the electricity to drive a heat pump. Phosphoric-acid fuel cells (PAFC) comprise the largest segment of existing CHP products worldwide and can provide combined efficiencies close to 90% (35-50% electric + remainder as thermal). Molten-carbonate fuel cells have also been installed in these applications, and solid-oxide fuel cell prototypes exist.
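Combined heat-and-power efficiency is simply the electric and recovered-thermal efficiencies added together. A sketch using the PAFC figures quoted above (the 40/50 split is illustrative, chosen from within the stated 35-50% electric range):

```python
def chp_efficiency(electric_eff, thermal_eff):
    """Combined heat-and-power efficiency: electric output plus recovered heat."""
    return electric_eff + thermal_eff

# A PAFC CHP unit as described above: 40% electric + 50% thermal,
# reaching the "close to 90%" combined figure quoted in the text.
print(f"{chp_efficiency(0.40, 0.50):.0%}")  # 90%
```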
The world's first certified Fuel Cell Boat (HYDRA), in Leipzig/Germany
Since electrolyzer systems do not store fuel in themselves, but rather rely on external storage units, they can be successfully applied in large-scale energy storage, rural areas being one example. In this application, batteries would have to be largely oversized to meet the storage demand, but fuel cells only need a larger storage unit (typically cheaper than an electrochemical device).
One such pilot program is operating on Stuart Island in Washington State. There the Stuart Island Energy Initiative has built a complete, closed-loop system: Solar panels power an electrolyzer which makes hydrogen. The hydrogen is stored in a 500 gallon tank at 200 PSI, and runs a ReliOn fuel cell to provide full electric back-up to the off-the-grid residence. The SIEI website gives extensive technical details.
The world's first Fuel Cell Boat HYDRA used an AFC system with 6.5 kW net output.
Suggested applications
• Base load power plants
• Electric and hybrid vehicles.
• Auxiliary power
• Off-grid power supply
• Notebook computers for applications where AC charging may not be available for weeks at a time.
• Portable charging docks for small electronics (e.g. a belt clip that charges your cell phone or PDA).
• Smartphones with high power consumption due to large displays and additional features like GPS might be equipped with micro fuel cells.
Toyota FCHV PEM FC fuel cell vehicle
The first public hydrogen refueling station was opened in Reykjavík, Iceland in April 2003. This station serves three buses built by DaimlerChrysler that are in service in the public transport net of Reykjavík. The station produces the hydrogen it needs by itself, with an electrolyzing unit (produced by Norsk Hydro), and does not need refilling: all that enters is electricity and water. Royal Dutch Shell is also a partner in the project. The station has no roof, in order to allow any leaked hydrogen to escape to the atmosphere.
The GM 1966 Electrovan was the automotive industry's first attempt at an automobile powered by a hydrogen fuel cell. The Electrovan, which weighed more than twice as much as a normal van, could travel up to 70 mph for 30 seconds.
The 2001 Chrysler Natrium used its own on-board hydrogen processor. It produced hydrogen for the fuel cell by reacting sodium borohydride fuel with Borax, both of which Chrysler claimed were naturally occurring in great quantity in the United States. The hydrogen produced electric power in the fuel cell for near-silent operation and a range of 300 miles without impinging on passenger space. Chrysler also developed vehicles which separated hydrogen from gasoline in the vehicle, the purpose being to reduce emissions without relying on a nonexistent hydrogen infrastructure and to avoid large storage tanks.
In 2003, President George W. Bush proposed the Hydrogen Fuel Initiative (HFI), which was later implemented through the 2005 Energy Policy Act and the 2006 Advanced Energy Initiative. These aim at further developing hydrogen fuel cells and their infrastructure technologies, with the ultimate goal of producing fuel cell vehicles that are both practical and cost-effective by 2020. Thus far the United States has contributed 1 billion dollars to this project.
In 2005, the British firm Intelligent Energy produced the first working hydrogen-powered motorcycle, called the ENV (Emission Neutral Vehicle). The motorcycle holds enough fuel to run for four hours and to travel 100 miles in an urban area, at a top speed of 50 miles per hour. It was priced at around $6,000. Honda is also planning to offer fuel-cell motorcycles.
A hydrogen fuel cell public bus accelerating at traffic lights in Perth, Western Australia
There are numerous prototype or production cars and buses based on fuel cell technology being researched or manufactured. Research is ongoing at a variety of motor car manufacturers. Honda has announced the release of a hydrogen vehicle in 2008.
Type 212 submarines use fuel cells to remain submerged for weeks without the need to surface.
Boeing researchers and industry partners throughout Europe are planning to conduct experimental flight tests in 2007 of a manned airplane powered only by a fuel cell and lightweight batteries. The Fuel Cell Demonstrator Airplane research project was completed recently and thorough systems integration testing is now under way in preparation for upcoming ground and flight testing. The Boeing demonstrator uses a Proton Exchange Membrane (PEM) fuel cell/lithium-ion battery hybrid system to power an electric motor, which is coupled to a conventional propeller.
Fuel cell powered race vehicles, designed and built by university students from around the world, competed in the world's first hydrogen race series called the 2008 Formula Zero Championship, which began on August 22nd, 2008 in Rotterdam, the Netherlands. The next race is in South Carolina in March 2009.
Not all geographic markets are ready for SOFC powered m-CHP appliances. Currently, the regions that lead the race in Distributed Generation and deployment of fuel cell m-CHP units are the EU and Japan.
Hydrogen economy
Electrochemical extraction of energy from hydrogen via fuel cells is an especially clean method of meeting power requirements, but not an efficient one, due to the necessity of adding large amounts of energy to either water or hydrocarbon fuels in order to produce the hydrogen. Additionally, during the extraction of hydrogen from hydrocarbons, carbon monoxide is released. Although this gas is artificially converted into carbon dioxide, such a method of extracting hydrogen remains environmentally injurious. It must however be noted that regarding the concept of the hydrogen vehicle, burning/combustion of hydrogen in an internal combustion engine (IC/ICE) is often confused with the electrochemical process of generating electricity via fuel cells (FC) in which there is no combustion (though there is a small byproduct of heat in the reaction). Both processes require the establishment of a hydrogen economy before they may be considered commercially viable, and even then, the aforementioned energy costs make a hydrogen economy of questionable environmental value. Hydrogen combustion is similar to petroleum combustion, and like petroleum combustion, still results in nitrogen oxides as a by-product of the combustion, which lead to smog. Hydrogen combustion, like that of petroleum, is limited by the Carnot efficiency, but is completely different from the hydrogen fuel cell's chemical conversion process of hydrogen to electricity and water without combustion. Hydrogen fuel cells emit only water during use, while producing carbon dioxide emissions during the majority of hydrogen production, which comes from natural gas. 
Direct methane or natural gas conversion (whether IC or FC) also generates carbon dioxide emissions, but direct hydrocarbon conversion in high-temperature fuel cells produces lower carbon dioxide emissions than combustion of the same fuel (due to the higher efficiency of the fuel cell process compared to combustion), and also lower carbon dioxide emissions than hydrogen fuel cells, which use methane less efficiently than high-temperature fuel cells by first converting it to high-purity hydrogen by steam reforming. Although hydrogen can also be produced by electrolysis of water using renewable energy, at present less than 3% of hydrogen is produced in this way.
Hydrogen is an energy carrier, and not an energy source, because it is usually produced from other energy sources via petroleum combustion, wind power, or solar photovoltaic cells. Hydrogen may be produced from subsurface reservoirs of methane and natural gas by a combination of steam reforming with the water gas shift reaction, from coal by coal gasification, or from oil shale by oil shale gasification. Low-pressure or high-pressure electrolysis of water, which requires electricity, and high-temperature electrolysis/thermochemical production, which requires high temperatures (ideal for the expected Generation IV reactors), are the two primary methods for the extraction of hydrogen from water.
As of 2006, 49.0% of the electricity produced in the United States comes from coal, 19.4% from nuclear, 20.0% from natural gas, 7.0% from hydroelectricity, 1.6% from petroleum, and the remaining 3.1% mostly from geothermal, solar and biomass. When hydrogen is produced through electrolysis, the energy comes from these sources. Though the fuel cell itself will only emit heat and water as waste, pollution is often caused when generating the electricity required to produce the hydrogen that the fuel cell uses as its power source (for example, when coal, oil, or natural gas-generated electricity is used). This will be the case unless the hydrogen is produced using electricity generated by hydroelectric, geothermal, solar, wind or other clean power sources (which may or may not include nuclear power, depending on one's attitude to the nuclear waste byproducts); hydrogen is only as clean as the energy sources used to produce it. A holistic approach has to take into consideration the impacts of an extended hydrogen scenario, including the production, the use and the disposal of infrastructure and energy converters.
Nowadays, low-temperature fuel cell stacks such as the proton exchange membrane fuel cell (PEMFC), the direct methanol fuel cell (DMFC) and the phosphoric acid fuel cell (PAFC) make extensive use of catalysts. Impurities poison the catalyst (reducing activity and efficiency), so high hydrogen purity or higher catalyst densities are required. Limited reserves of platinum have accelerated the search for alternative catalyst materials. Although platinum is seen by some as one of the major "showstoppers" to mass-market fuel cell commercialization, most predictions of platinum running out and/or platinum prices soaring do not take into account the effects of thrifting (reduction in catalyst loading) and recycling. Recent research at Brookhaven National Laboratory could lead to the replacement of platinum by a gold-palladium coating, which may be less susceptible to poisoning and thereby improve fuel cell lifetime considerably. Current targets for transport PEM fuel cells are 0.2 g/kW Pt (a factor-of-5 decrease over current loadings), and recent comments from major original equipment manufacturers (OEMs) indicate that this is possible. It is also fully anticipated that recycling of fuel cell components, including platinum, will kick in. High-temperature fuel cells, including molten carbonate fuel cells (MCFCs) and solid oxide fuel cells (SOFCs), do not use platinum as catalysts, but instead use cheaper materials such as nickel and nickel oxide, which are considerably more abundant (for example, nickel is used in fairly large quantities in common stainless steel).
Research and development
August 2005: Georgia Institute of Technology researchers use triazole to raise the operating temperature of PEM fuel cells from below 100 °C to over 125 °C, claiming this will require less carbon-monoxide purification of the hydrogen fuel.
2006: Staxon introduced an inexpensive OEM fuel cell module for system integration. In the same year, Angstrom Power, a British Columbia-based company, began commercial sales of portable devices using proprietary hydrogen fuel cell technology, trademarked as "micro hydrogen".