option (list of strings) | question (string, 11-354 chars) | article (string, 231-6.74k chars) | id (string, 5-8 chars) | label (int64, 0-3)
---|---|---|---|---|
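Each row below is one multiple-choice reading-comprehension item. As a minimal sketch of how a row maps onto Python types (the field names follow the header above; the `Row` TypedDict and the truncated `article` string are illustrative assumptions, not an official loader), the `label` value is simply an index into the `option` list:

```python
from typing import List, TypedDict


class Row(TypedDict):
    """One record of the table above (field names follow the column header)."""
    option: List[str]  # the four answer choices
    question: str      # question stem; "_" marks a blank to be completed
    article: str       # the reading passage (a multi-paragraph string)
    id: str            # source file name, e.g. "893.txt"
    label: int         # index 0-3 of the correct choice in `option`


# Sample values copied from the first row shown below (article truncated here).
row: Row = {
    "option": [
        "Drinking less tea in the future.",
        "Drinking no tea at all.",
        "Drinking tea that is not too hot.",
        "Drinking green tea instead of black tea.",
    ],
    "question": "Which of the following practices is encouraged?",
    "article": "Recent reports suggest that tea can cause brittle bones - but "
               "you'll probably be safe if you drink less than a gallon a day. ...",
    "id": "893.txt",
    "label": 2,
}

# `label` indexes into `option`, so the gold answer for this row is:
print(row["option"][row["label"]])  # -> "Drinking tea that is not too hot."
```

The remaining rows follow the same shape; only the option list, question, article, id, and label values change.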
[
"Drinking less tea in the future.",
"Drinking no tea at all.",
"Drinking tea that is not too hot.",
"Drinking green tea instead of black tea."
]
| Which of the following practices is encouraged? | Recent reports suggest that tea can cause brittle bones - but you'll probably be safe if you drink less than a gallon a day.
Do you fancy a cup of tea? We drink, on average, three mugs a day. But you might want to try another strong alcohol after hearing the case of a 47-year-old woman, published in the New England Journal of Medicine (NEJM), who developed brittle bones and lost all of her teeth after drinking too much tea.
Tea may not be so great for prostates either. Last year, research from the University of Glasgow found that men who drank seven or more cups of tea a day had a 50% higher risk of prostate cancer. And in 2009 a paper in the British Medical Journal showed that drinking very hot tea (70°C or more) increased the likelihood of esophageal cancer.
Still gasping for that cup of tea? There is some evidence that tea can be good for you too, with antioxidant properties, so maybe you're not actually drinking enough of the stuff.
The poor woman in the NEJM study is not alone. There are a few other cases of people who have damaged their bones through too much tea. But she (like those in other studies) was drinking excessive amounts: 100 - 150 tea bags a day to make 12 cups of tea. A litre of tea can contain up to 9mg of fluoride, which in excess can cause skeletal fluorosis, reducing bone quality and causing pain and stiffening of the ligaments. Other studies show you generally need to drink a gallon a day for three decades to develop this condition.
You also shouldn't worry about the Glasgow study as it wasn't designed to show that drinking tea actually caused prostate cancer. All it proved was an association and people were only asked how much tea they drank at the start of the study, which went on for about 28 years.
The National Cancer Institute in the U.S. concludes that the evidence isn't good enough to say tea either harms or helps our health. However it does seem sensible in the light of the BMJ study to wait for your tea to cool down for a few minutes.
Black tea, which makes up 75% of the world's consumption, may have healthy properties from its plant chemicals called polyphenols, which are antioxidants. Green tea contains more polyphenols but isn't so nice to dunk digestives into.
A review of the evidence in the European Journal of Clinical Nutrition, sponsored by the Tea Council--which, the authors say, had no part in the study--found the research showed more than three cups of black tea a day reduced heart disease. It found no evidence of harm "in amounts typically consumed". So as long as you drink less than a gallon of tea a day you should be absolutely fine. | 893.txt | 2 |
[
"it contains antioxidants",
"it is made from plant",
"poly phenols are added to it",
"it helps one digest"
]
| Black tea is considered a healthy drink because | Recent reports suggest that tea can cause brittle bones - but you'll probably be safe if you drink less than a gallon a day.
Do you fancy a cup of tea? We drink, on average, three mugs a day. But you might want to try another strong alcohol after hearing the case of a 47-year-old woman, published in the New England Journal of Medicine (NEJM), who developed brittle bones and lost all of her teeth after drinking too much tea.
Tea may not be so great for prostates either. Last year, research from the University of Glasgow found that men who drank seven or more cups of tea a day had a 50% higher risk of prostate cancer. And in 2009 a paper in the British Medical Journal showed that drinking very hot tea (70°C or more) increased the likelihood of esophageal cancer.
Still gasping for that cup of tea? There is some evidence that tea can be good for you too, with antioxidant properties, so maybe you're not actually drinking enough of the stuff.
The poor woman in the NEJM study is not alone. There are a few other cases of people who have damaged their bones through too much tea. But she (like those in other studies) was drinking excessive amounts: 100 - 150 tea bags a day to make 12 cups of tea. A litre of tea can contain up to 9mg of fluoride, which in excess can cause skeletal fluorosis, reducing bone quality and causing pain and stiffening of the ligaments. Other studies show you generally need to drink a gallon a day for three decades to develop this condition.
You also shouldn't worry about the Glasgow study as it wasn't designed to show that drinking tea actually caused prostate cancer. All it proved was an association and people were only asked how much tea they drank at the start of the study, which went on for about 28 years.
The National Cancer Institute in the U.S. concludes that the evidence isn't good enough to say tea either harms or helps our health. However it does seem sensible in the light of the BMJ study to wait for your tea to cool down for a few minutes.
Black tea, which makes up 75% of the world's consumption, may have healthy properties from its plant chemicals called polyphenols, which are antioxidants. Green tea contains more polyphenols but isn't so nice to dunk digestives into.
A review of the evidence in the European Journal of Clinical Nutrition, sponsored by the Tea Council--which, the authors say, had no part in the study--found the research showed more than three cups of black tea a day reduced heart disease. It found no evidence of harm "in amounts typically consumed". So as long as you drink less than a gallon of tea a day you should be absolutely fine. | 893.txt | 0 |
[
"Under no circumstance can you drink more than a gallon of tea a day.",
"Black tea can be seen as a cure for heart disease.",
"Drinking tea does no harm at all, regardless of how much you consume.",
"Tea Council's participation into the research may decrease its credibility."
]
| What can be inferred from the last paragraph? | Recent reports suggest that tea can cause brittle bones - but you'll probably be safe if you drink less than a gallon a day.
Do you fancy a cup of tea? We drink, on average, three mugs a day. But you might want to try another strong alcohol after hearing the case of a 47-year-old woman, published in the New England Journal of Medicine (NEJM), who developed brittle bones and lost all of her teeth after drinking too much tea.
Tea may not be so great for prostates either. Last year, research from the University of Glasgow found that men who drank seven or more cups of tea a day had a 50% higher risk of prostate cancer. And in 2009 a paper in the British Medical Journal showed that drinking very hot tea (70°C or more) increased the likelihood of esophageal cancer.
Still gasping for that cup of tea? There is some evidence that tea can be good for you too, with antioxidant properties, so maybe you're not actually drinking enough of the stuff.
The poor woman in the NEJM study is not alone. There are a few other cases of people who have damaged their bones through too much tea. But she (like those in other studies) was drinking excessive amounts: 100 - 150 tea bags a day to make 12 cups of tea. A litre of tea can contain up to 9mg of fluoride, which in excess can cause skeletal fluorosis, reducing bone quality and causing pain and stiffening of the ligaments. Other studies show you generally need to drink a gallon a day for three decades to develop this condition.
You also shouldn't worry about the Glasgow study as it wasn't designed to show that drinking tea actually caused prostate cancer. All it proved was an association and people were only asked how much tea they drank at the start of the study, which went on for about 28 years.
The National Cancer Institute in the U.S. concludes that the evidence isn't good enough to say tea either harms or helps our health. However it does seem sensible in the light of the BMJ study to wait for your tea to cool down for a few minutes.
Black tea, which makes up 75% of the world's consumption, may have healthy properties from its plant chemicals called polyphenols, which are antioxidants. Green tea contains more polyphenols but isn't so nice to dunk digestives into.
A review of the evidence in the European Journal of Clinical Nutrition, sponsored by the Tea Council--which, the authors say, had no part in the study--found the research showed more than three cups of black tea a day reduced heart disease. It found no evidence of harm "in amounts typically consumed". So as long as you drink less than a gallon of tea a day you should be absolutely fine. | 893.txt | 3 |
[
"beneficial,because their inventors are famous",
"beneficial,though their inventors are less famous",
"not useful, because their inventors are less famous",
"not useful, though their inventors are famous"
]
| By mentioning "traffic light" and "windshield wiper", the author indicates that countless inventions are _ . | We know the famous ones - the Thomas Edisons and the Alexander Graham Bells - but what about the less famous inventors? What about the people who invented the traffic light and the windshield wiper? Shouldn't we know who they are?
Joan McLean thinks so. In fact, McLean, a professor of physics at Mountain University in Range, feels so strongly about this matter that she's developed a course on the topic. In addition to learning "who" invented "what", however, McLean also likes her students to learn the answers to the "why" and "how" questions. According to McLean, "When students learn the answers to these questions, they are better prepared to recognize opportunities for inventing and more motivated to give inventing a try."
So, just what is the story behind the windshield wiper? Well, Mary Anderson came up with the idea in 1902 after a visit to New York City. The day was cold and stormy, but Anderson still wanted to see the sights, so she jumped aboard a streetcar. Noticing that the driver was struggling to see through the snow covering the windshield, she found herself wondering why there couldn't be a built-in device for cleaning the window. Still wondering about this when she returned home to Birmingham, Alabama, Anderson started drafting out solutions. One of her ideas, a lever on the inside of a vehicle that would control an arm on the outside, became the first windshield wiper. Today we benefit from countless inventions and innovations. It's hard to imagine driving without Garrett A. Morgan's traffic light. It's equally impossible to picture a world without Katherine J. Blodgett's innovation that makes glass invisible. Can you picture life without clear windows and eyeglasses? | 3298.txt | 1 |
[
"add colour and variety to students' campus life",
"inform students of the windshield wiper's invention",
"carry out the requirements by Mountain University",
"pre[are students to try theie own invention"
]
| Professor Joan McLean's course aims to _ . | We know the famous ones - the Thomas Edisons and the Alexander Graham Bells - but what about the less famous inventors? What about the people who invented the traffic light and the windshield wiper? Shouldn't we know who they are?
Joan McLean thinks so. In fact, McLean, a professor of physics at Mountain University in Range, feels so strongly about this matter that she's developed a course on the topic. In addition to learning "who" invented "what", however, McLean also likes her students to learn the answers to the "why" and "how" questions. According to McLean, "When students learn the answers to these questions, they are better prepared to recognize opportunities for inventing and more motivated to give inventing a try."
So, just what is the story behind the windshield wiper? Well, Mary Anderson came up with the idea in 1902 after a visit to New York City. The day was cold and stormy, but Anderson still wanted to see the sights, so she jumped aboard a streetcar. Noticing that the driver was struggling to see through the snow covering the windshield, she found herself wondering why there couldn't be a built-in device for cleaning the window. Still wondering about this when she returned home to Birmingham, Alabama, Anderson started drafting out solutions. One of her ideas, a lever on the inside of a vehicle that would control an arm on the outside, became the first windshield wiper. Today we benefit from countless inventions and innovations. It's hard to imagine driving without Garrett A. Morgan's traffic light. It's equally impossible to picture a world without Katherine J. Blodgett's innovation that makes glass invisible. Can you picture life without clear windows and eyeglasses? | 3298.txt | 3 |
[
"not eventually accepted by the umbrella producer",
"inspired by the story behind the windshield wiper",
"due to his dream of being caught in a rainstorm",
"not related to Professor Joan McLean's lectures"
]
| Tommy Lee's invention of the unbreakable umbrella was _ . | We know the famous ones - the Thomas Edisons and the Alexander Graham Bells - but what about the less famous inventors? What about the people who invented the traffic light and the windshield wiper? Shouldn't we know who they are?
Joan McLean thinks so. In fact, McLean, a professor of physics at Mountain University in Range, feels so strongly about this matter that she's developed a course on the topic. In addition to learning "who" invented "what", however, McLean also likes her students to learn the answers to the "why" and "how" questions. According to McLean, "When students learn the answers to these questions, they are better prepared to recognize opportunities for inventing and more motivated to give inventing a try."
So, just what is the story behind the windshield wiper? Well, Mary Anderson came up with the idea in 1902 after a visit to New York City. The day was cold and stormy, but Anderson still wanted to see the sights, so she jumped aboard a streetcar. Noticing that the driver was struggling to see through the snow covering the windshield, she found herself wondering why there couldn't be a built-in device for cleaning the window. Still wondering about this when she returned home to Birmingham, Alabama, Anderson started drafting out solutions. One of her ideas, a lever on the inside of a vehicle that would control an arm on the outside, became the first windshield wiper. Today we benefit from countless inventions and innovations. It's hard to imagine driving without Garrett A. Morgan's traffic light. It's equally impossible to picture a world without Katherine J. Blodgett's innovation that makes glass invisible. Can you picture life without clear windows and eyeglasses? | 3298.txt | 1 |
[
"How to Help Students to Sell Their Inventions to Producers",
"How to Design a Built-in Dervice for Cleaning the Window",
"Shouldn't We Know Who Inventd the Windshield Wiper",
"Shouldn't We Develop Invention Courses in Universities"
]
| Which of the following can best serve as the title of this passage? | We know the famous ones - the Thomas Edisons and the Alexander Graham Bells - but what about the less famous inventors? What about the people who invented the traffic light and the windshield wiper? Shouldn't we know who they are?
Joan McLean thinks so. In fact, McLean, a professor of physics at Mountain University in Range, feels so strongly about this matter that she's developed a course on the topic. In addition to learning "who" invented "what", however, McLean also likes her students to learn the answers to the "why" and "how" questions. According to McLean, "When students learn the answers to these questions, they are better prepared to recognize opportunities for inventing and more motivated to give inventing a try."
So, just what is the story behind the windshield wiper? Well, Mary Anderson came up with the idea in 1902 after a visit to New York City. The day was cold and stormy, but Anderson still wanted to see the sights, so she jumped aboard a streetcar. Noticing that the driver was struggling to see through the snow covering the windshield, she found herself wondering why there couldn't be a built-in device for cleaning the window. Still wondering about this when she returned home to Birmingham, Alabama, Anderson started drafting out solutions. One of her ideas, a lever on the inside of a vehicle that would control an arm on the outside, became the first windshield wiper. Today we benefit from countless inventions and innovations. It's hard to imagine driving without Garrett A. Morgan's traffic light. It's equally impossible to picture a world without Katherine J. Blodgett's innovation that makes glass invisible. Can you picture life without clear windows and eyeglasses? | 3298.txt | 2 |
[
"Many children's books have been adapted from films.",
"Many high-quality children's books have been published.",
"The sales of classics have led to the popularity of films.",
"The sales of presents for children have increased."
]
| Which of the following is true of Paragraph 1? | If you look for a book as a present for a child, you will be spoiled for choice even in a year when there is no new Harry Potter. J.K. Rowling's wizard is not alone: the past decade has been a harvest for good children's books, which has set off a large quantity of films and in turn led to increased sales of classics such as The Lord of the Rings.
Yet despite that, reading is increasingly unpopular among children. According to statistics, in 1997 23% said they didn't like reading at all. In 2003, 35% did. And around 6% of children leave primary school each year unable to read properly.
Maybe the decline is caused by the increasing availability of computer games. Maybe the books boom has affected only the top of the educational pile. Either way, Chancellor Gordon Brown plans to change things for the bottom of the class. In his pre-budget report, he announced the national project of Reading Recovery to help the children struggling most.
Reading Recovery is aimed at six-year-olds, who receive four months of individual daily half-hour classes with a specially trained teacher. An evaluation earlier this year reported that children on the scheme made 20 months' progress in just one year, whereas similarly weak readers without special help made just five months' progress, and so ended the year even further below the level expected for their age.
International research tends to find that when British children leave primary school they read well, but read less often for fun than those elsewhere. Reading for fun matters because children who are keen on reading can expect lifelong pleasure and loving books is an excellent indicator of future educational success. According to the OECD, being a regular and enthusiastic reader is of great advantage. | 3705.txt | 1 |
[
"the number of top students increased with the use of computers",
"a decreasing number of children showed interest in reading",
"a minority of primacy school children read properly",
"a large percentage of children read regularly"
]
| Statistics suggested that _ . | If you look for a book as a present for a child, you will be spoiled for choice even in a year when there is no new Harry Potter. J.K. Rowling's wizard is not alone: the past decade has been a harvest for good children's books, which has set off a large quantity of films and in turn led to increased sales of classics such as The Lord of the Rings.
Yet despite that, reading is increasingly unpopular among children. According to statistics, in 1997 23% said they didn't like reading at all. In 2003, 35% did. And around 6% of children leave primary school each year unable to read properly.
Maybe the decline is caused by the increasing availability of computer games. Maybe the books boom has affected only the top of the educational pile. Either way, Chancellor Gordon Brown plans to change things for the bottom of the class. In his pre-budget report, he announced the national project of Reading Recovery to help the children struggling most.
Reading Recovery is aimed at six-year-olds, who receive four months of individual daily half-hour classes with a specially trained teacher. An evaluation earlier this year reported that children on the scheme made 20 months' progress in just one year, whereas similarly weak readers without special help made just five months' progress, and so ended the year even further below the level expected for their age.
International research tends to find that when British children leave primary school they read well, but read less often for fun than those elsewhere. Reading for fun matters because children who are keen on reading can expect lifelong pleasure and loving books is an excellent indicator of future educational success. According to the OECD, being a regular and enthusiastic reader is of great advantage. | 3705.txt | 1 |
[
"An evaluation of it will be made sometime this year.",
"Weak readers on the project were the most hardworking.",
"It aims to train special teachers to help children with reading.",
"Children on the project showed noticeable progress in reading."
]
| What do we know about Reading Recovery? | If you look for a book as a present for a child, you will be spoiled for choice even in a year when there is no new Harry Potter. J.K. Rowling's wizard is not alone: the past decade has been a harvest for good children's books, which has set off a large quantity of films and in turn led to increased sales of classics such as The Lord of the Rings.
Yet despite that, reading is increasingly unpopular among children. According to statistics, in 1997 23% said they didn't like reading at all. In 2003, 35% did. And around 6% of children leave primary school each year unable to read properly.
Maybe the decline is caused by the increasing availability of computer games. Maybe the books boom has affected only the top of the educational pile. Either way, Chancellor Gordon Brown plans to change things for the bottom of the class. In his pre-budget report, he announced the national project of Reading Recovery to help the children struggling most.
Reading Recovery is aimed at six-year-olds, who receive four months of individual daily half-hour classes with a specially trained teacher. An evaluation earlier this year reported that children on the scheme made 20 months' progress in just one year, whereas similarly weak readers without special help made just five months' progress, and so ended the year even further below the level expected for their age.
International research tends to find that when British children leave primary school they read well, but read less often for fun than those elsewhere. Reading for fun matters because children who are keen on reading can expect lifelong pleasure and loving books is an excellent indicator of future educational success. According to the OECD, being a regular and enthusiastic reader is of great advantage. | 3705.txt | 3 |
[
"take greater advantage of the project",
"show the potential to enjoy a long life",
"are likely to succeed in their education.",
"would make excellent future researchers"
]
| Reading for fun is important because book-loving children _ . | If you look for a book as a present for a child, you will be spoiled for choice even in a year when there is no new Harry Potter. J.K. Rowling's wizard is not alone: the past decade has been a harvest for good children's books, which has set off a large quantity of films and in turn led to increased sales of classics such as The Lord of the Rings.
Yet despite that, reading is increasingly unpopular among children. According to statistics, in 1997 23% said they didn't like reading at all. In 2003, 35% did. And around 6% of children leave primary school each year unable to read properly.
Maybe the decline is caused by the increasing availability of computer games. Maybe the books boom has affected only the top of the educational pile. Either way, Chancellor Gordon Brown plans to change things for the bottom of the class. In his pre-budget report, he announced the national project of Reading Recovery to help the children struggling most.
Reading Recovery is aimed at six-year-olds, who receive four months of individual daily half-hour classes with a specially trained teacher. An evaluation earlier this year reported that children on the scheme made 20 months' progress in just one year, whereas similarly weak readers without special help made just five months' progress, and so ended the year even further below the level expected for their age.
International research tends to find that when British children leave primary school they read well, but read less often for fun than those elsewhere. Reading for fun matters because children who are keen on reading can expect lifelong pleasure and loving books is an excellent indicator of future educational success. According to the OECD, being a regular and enthusiastic reader is of great advantage. | 3705.txt | 2 |
[
"to overcome primary school pupils reading difficulty.",
"to encourage the publication of more children's books",
"to remind children of the importance of reading for fun",
"to introduce a way to improve early childhood reading"
]
| The aim of this text would probably be _ . | If you look for a book as a present for a child, you will be spoiled for choice even in a year when there is no new Harry Potter. J.K. Rowling's wizard is not alone: the past decade has been a harvest for good children's books, which has set off a large quantity of films and in turn led to increased sales of classics such as The Lord of the Rings.
Yet despite that, reading is increasingly unpopular among children. According to statistics, in 1997 23% said they didn't like reading at all. In 2003, 35% did. And around 6% of children leave primary school each year unable to read properly.
Maybe the decline is caused by the increasing availability of computer games. Maybe the books boom has affected only the top of the educational pile. Either way, Chancellor Gordon Brown plans to change things for the bottom of the class. In his pre-budget report, he announced the national project of Reading Recovery to help the children struggling most.
Reading Recovery is aimed at six-year-olds, who receive four months of individual daily half-hour classes with a specially trained teacher. An evaluation earlier this year reported that children on the scheme made 20 months' progress in just one year, whereas similarly weak readers without special help made just five months' progress, and so ended the year even further below the level expected for their age.
International research tends to find that when British children leave primary school they read well, but read less often for fun than those elsewhere. Reading for fun matters because children who are keen on reading can expect lifelong pleasure and loving books is an excellent indicator of future educational success. According to the OECD, being a regular and enthusiastic reader is of great advantage. | 3705.txt | 3 |
[
"parents of teenagers",
"newspaper readers",
"those who give advice to teenagers",
"teenagers"
]
| The author is primarily addressing ________. | It is natural for young people to be critical of their parents at times and to blame them for most of the misunderstandings between them. They have always complained, more or less justly, that their parents are out of touch with modern ways; that they are possessive and dominant; that they do not trust their children to deal with crises; that they talk too much about certain problems and that they have no sense of humour, at least in parent-child relationships.
I think it is true that parents often underestimate their teenage children and also forget how they themselves felt when young.
Young people often irritate their parents with their choices in clothes and hairstyles, in entertainers and music. This is not their motive. They feel cut off from the adult world into which they have not yet been accepted. So they create a culture and society of their own. Then, if it turns out that their music or entertainers or vocabulary or clothes or hairstyles irritate their parents, this gives them additional enjoyment. They feel they are superior, at least in a small way, and that they are leaders in style and taste.
Sometimes you are resistant, and proud because you do not want your parents to approve of what you do. If they did approve, it looks as if you are betraying your own age group. But in that case, you are assuming that you are the underdog: you can't win but at least you can keep your honour. This is a passive way of looking at things. It is natural enough after long years of childhood, when you were completely under your parents' control. But it ignores the fact that you are now beginning to be responsible for yourself.
If you plan to control your life, co-operation can be part of that plan. You can charm others, especially parents, into doing things the ways you want. You can impress others with your sense of responsibility and initiative, so that they will give you the authority to do what you want to do. | 3749.txt | 0 |
[
"the teenagers' criticism of their parents",
"misunderstandings between teenagers and their parents",
"the dominance of the parents over their children",
"the teenagers' ability to deal with crises"
]
| The first paragraph is mainly about ________. | It is natural for young people to be critical of their parents at times and to blame them for most of the misunderstandings between them. They have always complained, more or less justly, that their parents are out of touch with modern ways; that they are possessive and dominant; that they do not trust their children to deal with crises; that they talk too much about certain problems and that they have no sense of humour, at least in parent-child relationships.
I think it is true that parents often underestimate their teenage children and also forget how they themselves felt when young.
Young people often irritate their parents with their choices in clothes and hairstyles, in entertainers and music. This is not their motive. They feel cut off from the adult world into which they have not yet been accepted. So they create a culture and society of their own. Then, if it turns out that their music or entertainers or vocabulary or clothes or hairstyles irritate their parents, this gives them additional enjoyment. They feel they are superior, at least in a small way, and that they are leaders in style and taste.
Sometimes you are resistant, and proud because you do not want your parents to approve of what you do. If they did approve, it looks as if you are betraying your own age group. But in that case, you are assuming that you are the underdog: you can't win but at least you can keep your honour. This is a passive way of looking at things. It is natural enough after long years of childhood, when you were completely under your parents' control. But it ignores the fact that you are now beginning to be responsible for yourself.
If you plan to control your life, co-operation can be part of that plan. You can charm others, especially parents, into doing things the ways you want. You can impress others with your sense of responsibility and initiative, so that they will give you the authority to do what you want to do. | 3749.txt | 1 |
[
"want to show their existence by creating a culture of their own",
"have a strong desire to be leaders in style and taste",
"have no other way to enjoy themselves better",
"want to irritate their parents"
]
| Teenagers tend to have strange clothes and hairstyles because they ________. | It is natural for young people to be critical of their parents at times and to blame them for most of the misunderstandings between them. They have always complained, more or less justly, that their parents are out of touch with modern ways; that they are possessive and dominant; that they do not trust their children to deal with crises; that they talk too much about certain problems and that they have no sense of humour, at least in parent-child relationships.
I think it is true that parents often underestimate their teenage children and also forget how they themselves felt when young.
Young people often irritate their parents with their choices in clothes and hairstyles, in entertainers and music. This is not their motive. They feel cut off from the adult world into which they have not yet been accepted. So they create a culture and society of their own. Then, if it turns out that their music or entertainers or vocabulary or clothes or hairstyles irritate their parents, this gives them additional enjoyment. They feel they are superior, at least in a small way, and that they are leaders in style and taste.
Sometimes you are resistant, and proud because you do not want your parents to approve of what you do. If they did approve, it looks as if you are betraying your own age group. But in that case, you are assuming that you are the underdog: you can't win but at least you can keep your honour. This is a passive way of looking at things. It is natural enough after long years of childhood, when you were completely under your parents' control. But it ignores the fact that you are now beginning to be responsible for yourself.
If you plan to control your life, co-operation can be part of that plan. You can charm others, especially parents, into doing things the ways you want. You can impress others with your sense of responsibility and initiative, so that they will give you the authority to do what you want to do. | 3749.txt | 0 |
[
"have already been accepted into the adult world",
"feel that they are superior in a small way to the adults",
"are not likely to win over the adults",
"have a desire to be independent"
]
| Teenagers do not want their parents to approve of whatever they do because they ________. | It is natural for young people to be critical of their parents at times and to blame them for most of the misunderstandings between them. They have always complained, more or less justly, that their parents are out of touch with modern ways; that they are possessive and dominant; that they do not trust their children to deal with crises; that they talk too much about certain problems and that they have no sense of humour, at least in parent-child relationships.
I think it is true that parents often underestimate their teenage children and also forget how they themselves felt when young.
Young people often irritate their parents with their choices in clothes and hairstyles, in entertainers and music. This is not their motive. They feel cut off from the adult world into which they have not yet been accepted. So they create a culture and society of their own. Then, if it turns out that their music or entertainers or vocabulary or clothes or hairstyles irritate their parents, this gives them additional enjoyment. They feel they are superior, at least in a small way, and that they are leaders in style and taste.
Sometimes you are resistant, and proud because you do not want your parents to approve of what you do. If they did approve, it looks as if you are betraying your own age group. But in that case, you are assuming that you are the underdog: you can't win but at least you can keep your honour. This is a passive way of looking at things. It is natural enough after long years of childhood, when you were completely under your parents' control. But it ignores the fact that you are now beginning to be responsible for yourself.
If you plan to control your life, co-operation can be part of that plan. You can charm others, especially parents, into doing things the ways you want. You can impress others with your sense of responsibility and initiative, so that they will give you the authority to do what you want to do. | 3749.txt | 3 |
[
"obedient",
"responsible",
"co-operative",
"independent"
]
| To improve parent-child relationships, teenagers are advised to be ________. | It is natural for young people to be critical of their parents at times and to blame them for most of the misunderstandings between them. They have always complained, more or less justly, that their parents are out of touch with modern ways; that they are possessive and dominant; that they do not trust their children to deal with crises; that they talk too much about certain problems and that they have no sense of humour, at least in parent-child relationships.
I think it is true that parents often underestimate their teenage children and also forget how they themselves felt when young.
Young people often irritate their parents with their choices in clothes and hairstyles, in entertainers and music. This is not their motive. They feel cut off from the adult world into which they have not yet been accepted. So they create a culture and society of their own. Then, if it turns out that their music or entertainers or vocabulary or clothes or hairstyles irritate their parents, this gives them additional enjoyment. They feel they are superior, at least in a small way, and that they are leaders in style and taste.
Sometimes you are resistant, and proud because you do not want your parents to approve of what you do. If they did approve, it looks as if you are betraying your own age group. But in that case, you are assuming that you are the underdog: you can't win but at least you can keep your honour. This is a passive way of looking at things. It is natural enough after long years of childhood, when you were completely under your parents' control. But it ignores the fact that you are now beginning to be responsible for yourself.
If you plan to control your life, co-operation can be part of that plan. You can charm others, especially parents, into doing things the ways you want. You can impress others with your sense of responsibility and initiative, so that they will give you the authority to do what you want to do. | 3749.txt | 2 |
[
"It is extremely important to develop tourism.",
"Building roads and hotels is essential.",
"Support facilities are highly necessary.",
"Planning is of great importance to tourism."
]
| Which of the following do you think has been discussed in the part before this selection? | Too much tourism can be a problem. If tourism grows too quickly, people must leave other jobs to work in the tourism industry. This means that other parts of the country's economy can suffer.
On the other hand, if there is not enough tourism, people can lose jobs. Businesses can also lose money. It costs a great deal of money to build large hotels, airports, air terminals, first-class roads, and other support facilities needed by tourist attractions. For example, a major international class tourism hotel can cost as much as 50 thousand dollars per room to build. If this room is not used most of the time, the owners of the hotel lose money.
Building a hotel is just a beginning. There must be many support facilities as well, including roads to get to the hotel, electricity, sewers to handle waste, and water. All of these support facilities cost money. If they are not used because there are not enough tourists, jobs and money are lost. | 3136.txt | 3 |
[
"a bad effect on other industries",
"a change of tourists' customs",
"over - crowdedness of places of interest",
"pressure on traffic"
]
| Too much tourism can cause all these problems EXCEPT _ . | Too much tourism can be a problem. If tourism grows too quickly, people must leave other jobs to work in the tourism industry. This means that other parts of the country's economy can suffer.
On the other hand, if there is not enough tourism, people can lose jobs. Businesses can also lose money. It costs a great deal of money to build large hotels, airports, air terminals, first-class roads, and other support facilities needed by tourist attractions. For example, a major international class tourism hotel can cost as much as 50 thousand dollars per room to build. If this room is not used most of the time, the owners of the hotel lose money.
Building a hotel is just a beginning. There must be many support facilities as well, including roads to get to the hotel, electricity, sewers to handle waste, and water. All of these support facilities cost money. If they are not used because there are not enough tourists, jobs and money are lost. | 3136.txt | 1 |
[
"the author doesn't like tourism developing so fast",
"local people will benefit from tourist attraction",
"other parts of a country's economy won't benefit from tourism much",
"we can't build too many support facilities"
]
| It can be inferred from the text that _ . | Too much tourism can be a problem. If tourism grows too quickly, people must leave other jobs to work in the tourism industry. This means that other parts of the country's economy can suffer.
On the other hand, if there is not enough tourism, people can lose jobs. Businesses can also lose money. It costs a great deal of money to build large hotels, airports, air terminals, first-class roads, and other support facilities needed by tourist attractions. For example, a major international class tourism hotel can cost as much as 50 thousand dollars per room to build. If this room is not used most of the time, the owners of the hotel lose money.
Building a hotel is just a beginning. There must be many support facilities as well, including roads to get to the hotel, electricity, sewers to handle waste, and water. All of these support facilities cost money. If they are not used because there are not enough tourists, jobs and money are lost. | 3136.txt | 1 |
[
"waste a lot of money",
"weaken their economy",
"help establish their customs",
"help improve their life"
]
| The author thinks it is good for local people to know that tourism will _ . | Too much tourism can be a problem. If tourism grows too quickly, people must leave other jobs to work in the tourism industry. This means that other parts of the country's economy can suffer.
On the other hand, if there is not enough tourism, people can lose jobs. Businesses can also lose money. It costs a great deal of money to build large hotels, airports, air terminals, first-class roads, and other support facilities needed by tourist attractions. For example, a major international class tourism hotel can cost as much as 50 thousand dollars per room to build. If this room is not used most of the time, the owners of the hotel lose money.
Building a hotel is just a beginning. There must be many support facilities as well, including roads to get to the hotel, electricity, sewers to handle waste, and water. All of these support facilities cost money. If they are not used because there are not enough tourists, jobs and money are lost. | 3136.txt | 3 |
[
"environment is crucial for wildlife",
"tour books are not always a reliable source of information",
"London is a city of fox",
"foxes are highly adaptable to environment"
]
| The first paragraph suggests that _ . | One thing the tour books don't tell you about London is that 2,000 of its residents are foxes. As native as the royal family, they fled the city centuries ago after developers and pollution moved in. But now that the environment is cleaner, the foxes have come home, one of the many wild animals that have moved into urban areas around the world.
"The number and variety of wild animals in urban areas is increasing," says Gomer Jones, president of the National Institute for Urban Wildlife, in Columbia, Maryland. A survey of the wildlife in New York's Central Park last year tallied the species of mammals, including muskrats, shrews and flying squirrels. A similar survey conducted in the 1890s counted only five species. One of the country's largest populations of raccoons now lives in Washington D.C., and moose are regularly seen wandering into Maine towns. Peregrine falcons dive from the window ledges of buildings in the largest U.S. cities to prey on pigeons.
Several changes have brought wild animals to the cities. Foremost is that air and water quality in many cities has improved as a result of the 1970s' pollution-control efforts. Meanwhile, rural areas have been built up, leaving many animals on the edges of suburbia. In addition, conservationists have created urban wildlife refuges.
The Greater London Council last year spent $750,000 to buy land and build 10 permanent wildlife refuges in the city. Over 1,000 volunteers have donated money and cleared rubble from derelict lots. As a result, pheasants now strut in the East End and badgers scuttle across lawns near the center of town. A colony of rare house martins nests on a window ledge beside Harrods, and one evening last year a fox was seen on Westminster Bridge looking up at Big Ben.
For peregrine falcons, cities are actually safer than rural cliff dwellings. By 1970 the birds were extinct east of the Mississippi because DDT had made their eggs too thin to support life. That year, ornithologist Tom Cade of Cornell University began raising the birds for release in cities, for cities afforded abundant food and contained none of the peregrine's natural predators.
"Before they were exterminated, some migrated to cities on their own because they had run out of cliff space," Cade says. "To peregrines, buildings are just like cliffs." He has released about 30 birds since 1975 in New York, Baltimore, Philadelphia and Norfolk, and of the 20 pairs now living in the East, half are urbanites. "A few of the young ones have gotten into trouble by falling down chimneys and crashing into window-glass, but overall their adjustment has been successful." | 2683.txt | 0 |
[
"wildlife of all kinds returning to large cities to live",
"falcons in New York, Baltimore, Philadelphia, and Norfolk",
"moose stumbling into plate-glass storefronts",
"foxes returning to London"
]
| The selection is primarily concerned with _ . | One thing the tour books don't tell you about London is that 2,000 of its residents are foxes. As native as the royal family, they fled the city centuries ago after developers and pollution moved in. But now that the environment is cleaner, the foxes have come home, one of the many wild animals that have moved into urban areas around the world.
"The number and variety of wild animals in urban areas is increasing," says Gomer Jones, president of the National Institute for Urban Wildlife, in Columbia, Maryland. A survey of the wildlife in New York's Central Park last year tallied the species of mammals, including muskrats, shrews and flying squirrels. A similar survey conducted in the 1890s counted only five species. One of the country's largest populations of raccoons now lives in Washington D.C., and moose are regularly seen wandering into Maine towns. Peregrine falcons dive from the window ledges of buildings in the largest U.S. cities to prey on pigeons.
Several changes have brought wild animals to the cities. Foremost is that air and water quality in many cities has improved as a result of the 1970s' pollution-control efforts. Meanwhile, rural areas have been built up, leaving many animals on the edges of suburbia. In addition, conservationists have created urban wildlife refuges.
The Greater London Council last year spent $750,000 to buy land and build 10 permanent wildlife refuges in the city. Over 1,000 volunteers have donated money and cleared rubble from derelict lots. As a result, pheasants now strut in the East End and badgers scuttle across lawns near the center of town. A colony of rare house martins nests on a window ledge beside Harrods, and one evening last year a fox was seen on Westminster Bridge looking up at Big Ben.
For peregrine falcons, cities are actually safer than rural cliff dwellings. By 1970 the birds were extinct east of the Mississippi because DDT had made their eggs too thin to support life. That year, ornithologist Tom Cade of Cornell University began raising the birds for release in cities, for cities afforded abundant food and contained none of the peregrine's natural predators.
"Before they were exterminated, some migrated to cities on their own because they had run out of cliff space," Cade says. "To peregrines, buildings are just like cliffs." He has released about 30 birds since 1975 in New York, Baltimore, Philadelphia and Norfolk, and of the 20 pairs now living in the East, half are urbanites. "A few of the young ones have gotten into trouble by falling down chimneys and crashing into window-glass, but overall their adjustment has been successful." | 2683.txt | 0 |
[
"explain their living habit",
"make known their habitat",
"show the endeavors of Londoners to make the city habitable for wildlife",
"encourage volunteers to do something for the species"
]
| In the 4th paragraph the pheasants, badgers, and martins etc. are mentioned to _ . | One thing the tour books don't tell you about London is that 2,000 of its residents are foxes. As native as the royal family, they fled the city centuries ago after developers and pollution moved in. But now that the environment is cleaner, the foxes have come home, one of the many wild animals that have moved into urban areas around the world.
"The number and variety of wild animals in urban areas is increasing," says Gomer Jones, president of the National Institute for Urban Wildlife, in Columbia, Maryland. A survey of the wildlife in New York's Central Park last year tallied the species of mammals, including muskrats, shrews and flying squirrels. A similar survey conducted in the 1890s counted only five species. One of the country's largest populations of raccoons now lives in Washington D.C., and moose are regularly seen wandering into Maine towns. Peregrine falcons dive from the window ledges of buildings in the largest U.S. cities to prey on pigeons.
Several changes have brought wild animals to the cities. Foremost is that air and water quality in many cities has improved as a result of the 1970s' pollution-control efforts. Meanwhile, rural areas have been built up, leaving many animals on the edges of suburbia. In addition, conservationists have created urban wildlife refuges.
The Greater London Council last year spent $750,000 to buy land and build 10 permanent wildlife refuges in the city. Over 1,000 volunteers have donated money and cleared rubble from derelict lots. As a result, pheasants now strut in the East End and badgers scuttle across lawns near the center of town. A colony of rare house martins nests on a window ledge beside Harrods, and one evening last year a fox was seen on Westminster Bridge looking up at Big Ben.
For peregrine falcons, cities are actually safer than rural cliff dwellings. By 1970 the birds were extinct east of the Mississippi because DDT had made their eggs too thin to support life. That year, ornithologist Tom Cade of Cornell University began raising the birds for release in cities, for cities afforded abundant food and contained none of the peregrine's natural predators.
"Before they were exterminated, some migrated to cities on their own because they had run out of cliff space," Cade says. "To peregrines, buildings are just like cliffs." He has released about 30 birds since 1975 in New York, Baltimore, Philadelphia and Norfolk, and of the 20 pairs now living in the East, half are urbanites. "A few of the young ones have gotten into trouble by falling down chimneys and crashing into window-glass, but overall their adjustment has been successful." | 2683.txt | 2 |
[
"that air and water quality has improved in the cities",
"why wildlife likes the noise and commotion in the cities",
"that wildlife refuges have been built in the cities",
"why wildlife is returning to cities"
]
| The main idea of paragraph 3 is _ . | One thing the tour books don't tell you about London is that 2,000 of its residents are foxes. As native as the royal family, they fled the city centuries ago after developers and pollution moved in. But now that the environment is cleaner, the foxes have come home, one of the many wild animals that have moved into urban areas around the world.
"The number and variety of wild animals in urban areas is increasing," says Gomer Jones, president of the National Institute for Urban Wildlife, in Columbia, Maryland. A survey of the wildlife in New York's Central Park last year tallied the species of mammals, including muskrats, shrews and flying squirrels. A similar survey conducted in the 1890s counted only five species. One of the country's largest populations of raccoons now lives in Washington D.C., and moose are regularly seen wandering into Maine towns. Peregrine falcons dive from the window ledges of buildings in the largest U.S. cities to prey on pigeons.
Several changes have brought wild animals to the cities. Foremost is that air and water quality in many cities has improved as a result of the 1970s' pollution-control efforts. Meanwhile, rural areas have been built up, leaving many animals on the edges of suburbia. In addition, conservationists have created urban wildlife refuges.
The Greater London Council last year spent $750,000 to buy land and build 10 permanent wildlife refuges in the city. Over 1,000 volunteers have donated money and cleared rubble from derelict lots. As a result, pheasants now strut in the East End and badgers scuttle across lawns near the center of town. A colony of rare house martins nests on a window ledge beside Harrods, and one evening last year a fox was seen on Westminster Bridge looking up at Big Ben.
For peregrine falcons, cities are actually safer than rural cliff dwellings. By 1970 the birds were extinct east of the Mississippi because DDT had made their eggs too thin to support life. That year, ornithologist Tom Cade of Cornell University began raising the birds for release in cities, for cities afforded abundant food and contained none of the peregrine's natural predators.
"Before they were exterminated, some migrated to cities on their own because they had run out of cliff space," Cade says. "To peregrines, buildings are just like cliffs." He has released about 30 birds since 1975 in New York, Baltimore, Philadelphia and Norfolk, and of the 20 pairs now living in the East, half are urbanites. "A few of the young ones have gotten into trouble by falling down chimneys and crashing into window-glass, but overall their adjustment has been successful." | 2683.txt | 3 |
[
"bountiful nesting areas, abundant food, and rainwater control basins",
"abundant food, buildings that resemble cliffs, and no natural predators",
"large buildings with chimneys other wildlife, and well-lighted nesting areas",
"abundant food, chimneys, rubble, and window sills"
]
| Cities make good homes for peregrine falcons because they provide _ . | One thing the tour books don't tell you about London is that 2,000 of its residents are foxes. As native as the royal family, they fled the city centuries ago after developers and pollution moved in. But now that the environment is cleaner, the foxes have come home, one of the many wild animals that have moved into urban areas around the world.
"The number and variety of wild animals in urban areas is increasing," says Gomer Jones, president of the National Institute for Urban Wildlife, in Columbia, Maryland. A survey of the wildlife in New York's Central Park last year tallied the species of mammals, including muskrats, shrews and flying squirrels. A similar survey conducted in the 1890s counted only five species. One of the country's largest populations of raccoons now lives in Washington D.C., and moose are regularly seen wandering into Maine towns. Peregrine falcons dive from the window ledges of buildings in the largest U.S. cities to prey on pigeons.
Several changes have brought wild animals to the cities. Foremost is that air and water quality in many cities has improved as a result of the 1970s' pollution-control efforts. Meanwhile, rural areas have been built up, leaving many animals on the edges of suburbia. In addition, conservationists have created urban wildlife refuges.
The Greater London Council last year spent $750,000 to buy land and build 10 permanent wildlife refuges in the city. Over 1,000 volunteers have donated money and cleared rubble from derelict lots. As a result, pheasants now strut in the East End and badgers scuttle across lawns near the center of town. A colony of rare house martins nests on a window ledge beside Harrods, and one evening last year a fox was seen on Westminster Bridge looking up at Big Ben.
For peregrine falcons, cities are actually safer than rural cliff dwellings. By 1970 the birds were extinct east of the Mississippi because DDT had made their eggs too thin to support life. That year, ornithologist Tom Cade of Cornell University began raising the birds for release in cities, for cities afforded abundant food and contained none of the peregrine's natural predators.
"Before they were exterminated, some migrated to cities on their own because they had run out of cliff space," Cade says. "To peregrines, buildings are just like cliffs." He has released about 30 birds since 1975 in New York, Baltimore, Philadelphia and Norfolk, and of the 20 pairs now living in the East, half are urbanites. "A few of the young ones have gotten into trouble by falling down chimneys and crashing into window-glass, but overall their adjustment has been successful." | 2683.txt | 1 |
[
"produce a report on sexual discrimination",
"call for further improvement in their working conditions",
"spend their energies and time fighting against sexual discrimination",
"spend more time and energy doing scholarly activities"
]
| According to Spirduso, women need to _ . | Women are also underrepresented in the administration and this is because there are so few women full professors. In 1985, Regent Beryl Milburn produced a report blasting the University of Texas System administration for not encouraging women. The University was rated among the lowest for the system. In a 1987 update, Milburn commended the progress that was made and called for even more improvement.
One of the positive results from her study was a System-wide program to inform women of available administrative jobs.
College of Communication Associate Dean Patrica Witherspoon said it is important that women be flexible when it comes to relocating if they want to rise in the ranks.
Although a woman may face a chilly climate on campus, many times in order for her to succeed, she must rise above the problems around her and concentrate on her work.
Until women make up a greater percentage of the senior positions in the University and all academia, inequities will exist.
"Women need to spend their energies and time doing scholarly activities that are important here at the University," Spirduso said. "If they do that, they will be successful in this system. If they spend their time in little groups mourning the sexual discrimination that they think exists here, they are wasting valuable study time." | 1626.txt | 3
[
"there are many women full professors in the University of Texas",
"women play an important part in adminitrating the University",
"the weather on the campus is chilly",
"women make up a small percentage of the senior positions in the University"
]
| From this passage, we know that _ . | Women are also underrepresented in the administration and this is because there are so few women full professors. In 1985, Regent Beryl Milburn produced a report blasting the University of Texas System administration for not encouraging women. The University was rated among the lowest for the system. In a 1987 update, Milburn commended the progress that was made and called for even more improvement.
One of the positive results from her study was a System-wide program to inform women of available administrative jobs.
College of Communication Associate Dean Patrica Witherspoon said it is important that women be flexible when it comes to relocating if they want to rise in the ranks.
Although a woman may face a chilly climate on campus, many times in order for her to succeed, she must rise above the problems around her and concentrate on her work.
Until women make up a greater percentage of the senior positions in the University and all academia, inequities will exist.
"Women need to spend their energies and time doing scholarly activities that are important here at the University," Spirduso said. "If they do that, they will be successful in this system. If they spend their time in little groups mourning the sexual discrimination that they think exists here, they are wasting valuable study time." | 1626.txt | 3
[
"the number of women professors in the University in 1987 was greater than that of 1985",
"the number of women professors in the University in 1987 was smaller than that of 1985",
"the number of women professors was the same as that of 1985",
"more and more women professors thought that sexual discrimination did exit in the University"
]
| Which of the following statements is true? | Women are also underrepresented in the administration and this is because there are so few women full professors. In 1985, Regent Beryl Milburn produced a report blasting the University of Texas System administration for not encouraging women. The University was rated among the lowest for the system. In a 1987 update, Milburn commended the progress that was made and called for even more improvement.
One of the positive results from her study was a System-wide program to inform women of available administrative jobs.
College of Communication Associate Dean Patrica Witherspoon said it is important that women be flexible when it comes to relocating if they want to rise in the ranks.
Although a woman may face a chilly climate on campus, many times in order for her to succeed, she must rise above the problems around her and concentrate on her work.
Until women make up a greater percentage of the senior positions in the University and all academia, inequities will exist.
"Women need to spend their energies and time doing scholarly activities that are important here at the University," Spirduso said. "If they do that, they will be successful in this system. If they spend their time in little groups mourning the sexual discrimination that they think exists here, they are wasting valuable study time." | 1626.txt | 0
[
"women were told to con centrate on teir work",
"women were given information about available administrative jobs",
"women were encouraged to take on all the administrative jobs in the Unversity",
"women were encouraged to do more scholarly activities"
]
| One of the positive results from Milburn's study was that _ . | Women are also underrepresented in the administration and this is because there are so few women full professors. In 1985, Regent Beryl Milburn produced a report blasting the University of Texas System administration for not encouraging women. The University was rated among the lowest for the system. In a 1987 update, Milburn commended the progress that was made and called for even more improvement.
One of the positive results from her study was a System-wide program to inform women of available administrative jobs.
College of Communication Associate Dean Patrica Witherspoon said it is important that women be flexible when it comes to relocating if they want to rise in the ranks.
Although a woman may face a chilly climate on campus, many times in order for her to succeed, she must rise above the problems around her and concentrate on her work.
Until women make up a greater percentage of the senior positions in the University and all academia, inequities will exist.
"Women need to spend their energies and time doing scholarly activities that are important here at the University," Spirduso said. "If they do that, they will be successful in this system. If they spend their time in little groups mourning the sexual discrimination that they think exists here, they are wasting valuable study time." | 1626.txt | 1
[
"The University of Texas",
"Milburn's Report",
"Women Professors",
"Sexual Discrimination in Academia"
]
| The title for this passage should be _ . | Women are also underrepresented in the administration and this is because there are so few women full professors. In 1985, Regent Beryl Milburn produced a report blasting the University of Texas System administration for not encouraging women. The University was rated among the lowest for the system. In a 1987 update, Milburn commended the progress that was made and called for even more improvement.
One of the positive results from her study was a System-wide program to inform women of available administrative jobs.
College of Communication Associate Dean Patrica Witherspoon said it is important that women be flexible when it comes to relocating if they want to rise in the ranks.
Although a woman may face a chilly climate on campus, many times in order for her to succeed, she must rise above the problems around her and concentrate on her work.
Until women make up a greater percentage of the senior positions in the University and all academia, inequities will exist.
"Women need to spend their energies and time doing scholarly activities that are important here at the University," Spirduso said. "If they do that, they will be successful in this system. If they spend their time in little groups mourning the sexual discrimination that they think exists here, they are wasting valuable study time." | 1626.txt | 3
[
"The person who was well educated.",
"The person who had great abilities.",
"The person who was physically attractive.",
"The person who was appreciated by personnel officer in a certain aspect."
]
| In the past, who would be sure to be recruited after an interview? | Nowadays more and more foreign enterprises and companies are no longer relying on interviews for recruitment. Years of studying interviewing have made clear that it is not a very objective process. Personnel officers often hire the person they like best, or even the one they think most physically attractive. Looking good is no guarantee of doing the job well, however. Uglies, or those who are aesthetically challenged, lose heart.
To get a more objective view, many companies are also using psychological tests to hire both for relatively routine jobs and for positions at senior levels of management. It is impossible to say how many employers use tests, but estimates of test sales in the UK for 1993 were over 1 million.
The basic reason employers use tests is clear: tests claim to be scientific and objective. A large body of research has shown that interviews by themselves are not very reliable as a method of selection. People's judgments are often very subjective: whether they like the look of someone counts for more than almost anything else. But reliable and valid tests can offer rapid and more objective information about would-be employees. If a candidate talks well in an interview but his test results suggest that he is a careless person who can not concentrate, an employer is likely to think twice about hiring him.
Taking a serious test for a job is rather different from taking a game-like test. You can spend just a little time in answering questions of that kind of test, and you can deny the answers and say they are not accurate. But you can not go to a serious test without enough preparation since you can not afford to be denied and eliminated again and again. | 1484.txt | 3
[
"good-looking",
"guarantee of doing the job well",
"not attractive judging from appearance",
"given the job of interviewing the candidates"
]
| According to the passage, "those who are aesthetically challenged" refer to those who are _ . | Nowadays more and more foreign enterprises and companies are no longer relying on interviews for recruitment . Years of studying interviewing have made clear that it is not a very objective process. Personnel officers often hire the person they like best, or even the one they think most physically attractive. Looking good is no guarantee of doing the job well, however. Uglies or those who are aesthetically challenged, lose heart.
To get a more objective view, many companies are also using psychological tests to hire both for relatively routine jobs and for positions at senior levels of management. It is impossible to say how many employers use tests, but estimates of test sales in the UK for 1993 were over 1 million.
The basic reason employers use tests is clear: tests claim to be scientific and objective. A large body of research has shown that interviews by themselves are not very reliable as a method of selection. People's judgments are often very subjective: whether they like the look of someone counts for more than almost anything else. But reliable and valid tests can offer rapid and more objective information about would-be employees. If a candidate talks well in an interview but his test results suggest that he is a careless person who can not concentrate, an employer is likely to think twice about hiring him.
Taking a serious test for a job is rather different from taking a game-like test. You can spend just a little time in answering questions of that kind of test, and you can deny the answers and say they are not accurate. But you can not go to a serious test without enough preparation since you can not afford to be denied and eliminated again and again. | 1484.txt | 2
[
"to take the place of interviews",
"just to select common clerks",
"to make the recruitment more difficult for candidates",
"to get really reliable and fair information about candidates"
]
| Many companies use psychological tests _ . | Nowadays more and more foreign enterprises and companies are no longer relying on interviews for recruitment. Years of studying interviewing have made clear that it is not a very objective process. Personnel officers often hire the person they like best, or even the one they think most physically attractive. Looking good is no guarantee of doing the job well, however. Uglies, or those who are aesthetically challenged, lose heart.
To get a more objective view, many companies are also using psychological tests to hire both for relatively routine jobs and for positions at senior levels of management. It is impossible to say how many employers use tests, but estimates of test sales in the UK for 1993 were over 1 million.
The basic reason employers use tests is clear: tests claim to be scientific and objective. A large body of research has shown that interviews by themselves are not very reliable as a method of selection. People's judgments are often very subjective: whether they like the look of someone counts for more than almost anything else. But reliable and valid tests can offer rapid and more objective information about would-be employees. If a candidate talks well in an interview but his test results suggest that he is a careless person who can not concentrate, an employer is likely to think twice about hiring him.
Taking a serious test for a job is rather different from taking a game-like test. You can spend just a little time in answering questions of that kind of test, and you can deny the answers and say they are not accurate. But you can not go to a serious test without enough preparation since you can not afford to be denied and eliminated again and again. | 1484.txt | 3
[
"an interview",
"a serious test",
"a game-like test",
"an objective test"
]
| "That kind of test" in the last paragraph refers to_ . | Nowadays more and more foreign enterprises and companies are no longer relying on interviews for recruitment . Years of studying interviewing have made clear that it is not a very objective process. Personnel officers often hire the person they like best, or even the one they think most physically attractive. Looking good is no guarantee of doing the job well, however. Uglies or those who are aesthetically challenged, lose heart.
To get a more objective view, many companies are also using psychological tests to hire both for relatively routine jobs and for positions at senior levels of management. It is impossible to say how many employers use tests, but estimates of test sales in the UK for 1993 were over 1 million.
The basic reason employers use tests is clear: tests claim to be scientific and objective. A large body of research has shown that interviews by themselves are not very reliable as a method of selection. People's judgments are often very subjective: whether they like the look of someone counts for more than almost anything else. But reliable and valid tests can offer rapid and more objective information about would-be employees. If a candidate talks well in an interview but his test results suggest that he is a careless person who can not concentrate, an employer is likely to think twice about hiring him.
Taking a serious test for a job is rather different from taking a game-like test. You can spend just a little time in answering questions of that kind of test, and you can deny the answers and say they are not accurate. But you can not go to a serious test without enough preparation since you can not afford to be denied and eliminated again and again. | 1484.txt | 2
[
"For a certain time, psychological tests and interviews will exist together.",
"Psychological tests have been recognized valuable more and more.",
"The employer will surely hire a person who does well in the interview but poorly in the psychological tests.",
"People seldom attend a serious test without enough preparation unless they are confident of it."
]
| Which of the following statements is NOT TRUE according to the passage? | Nowadays more and more foreign enterprises and companies are no longer relying on interviews for recruitment. Years of studying interviewing have made clear that it is not a very objective process. Personnel officers often hire the person they like best, or even the one they think most physically attractive. Looking good is no guarantee of doing the job well, however. Uglies, or those who are aesthetically challenged, lose heart.
To get a more objective view, many companies are also using psychological tests to hire both for relatively routine jobs and for positions at senior levels of management. It is impossible to say how many employers use tests, but estimates of test sales in the UK for 1993 were over 1 million.
The basic reason employers use tests is clear: tests claim to be scientific and objective. A large body of research has shown that interviews by themselves are not very reliable as a method of selection. People's judgments are often very subjective: whether they like the look of someone counts for more than almost anything else. But reliable and valid tests can offer rapid and more objective information about would-be employees. If a candidate talks well in an interview but his test results suggest that he is a careless person who can not concentrate, an employer is likely to think twice about hiring him.
Taking a serious test for a job is rather different from taking a game-like test. You can spend just a little time in answering questions of that kind of test, and you can deny the answers and say they are not accurate. But you can not go to a serious test without enough preparation since you can not afford to be denied and eliminated again and again. | 1484.txt | 2
[
"he brought both positive and negative effect to the development of the Modernist movement.",
"he was both a poet and a person with mental problem.",
"he was politically a racist while he was also pro-Fascist.",
"he was a man of complex and unintelligible personality."
]
| Pound was a divisive figure because _ | "WHANG-Boom-Boom-cast delicacy to the winds." Thus Ezra Pound in a letter to his father, urging the old man to help promote his first published collection. It might have been the poet's manifesto.
Pound is as divisive a figure today as he was in his own lifetime. For some he was the leading figure of the Modernist movement who redefined what poetry was and could be; and who, in his role as cultural impresario, gave vital impetus to the literary careers of T.S. Eliot, James Joyce and Wyndham Lewis, among others. But for many Pound remains a freak and an embarrassment, a clinical nutcase and vicious anti-Semite who churned out a lot of impenetrable tosh before losing the plot completely.
During the second world war he broadcast pro-Fascist radio programmes from Italy and later avoided trial for treason at home only because he was declared insane. On his release from St Elizabeth's Hospital near Washington, DC, he returned to Italy ("America is a lunatic asylum"), where he died in 1972 aged 87.
David Moody, emeritus professor of English at York University, makes a strong case for Pound's "generous energy" and the "disruptive, regenerative force of his genius". His approach (unlike Pound's) is uncontroversial. He follows the poet's progress chronologically from his childhood in Idaho-still, at the time of his birth in 1885, part of the wild west-to his conquest of literary London between 1908 and 1920. He marshals Pound's staggering output of poetry, prose and correspondence to excellent effect, and offers clear, perceptive commentary on it. He helps us to see poems, such as this famous, peculiarly haunting 19-syllable haiku, in a new light:
The apparition of these faces in the crowd:
Petals on a wet, black bough.
That Mr Moody is constantly being upstaged by the subject of his study is not surprising. Pound was one of the most colourful artistic figures in a period full of them.
According to Ford Madox Ford, who became a good friend of Pound's shortly after the bumptious young American arrived in London: "Ezra would approach with the step of a dancer, making passes with a cane at an imaginary opponent. He would wear trousers made of green billiard cloth, a pink coat, a blue shirt, a tie hand-painted by a Japanese friend, an immense sombrero, a flaming beard cut to a point and a single large blue earring." W.B. Yeats's simple assessment was that: "There is no younger generation of poets. E.P. is a solitary volcano."
A great merit of Mr Moody's approach is the space he gives to Pound's writings. It is love-it-or-hate-it stuff, but, either way, undeniably fascinating. "All good art is realism of one kind or another," Pound said. Reconciling that tidy statement with practically any of his poems is hard work but, as Mr Moody shows over and over again, hard work that offers huge rewards. His first volume ends in 1920, with Pound quitting London in a huff, finally fed up-after more than a decade of doing everything in his power to rattle the intellectual establishment-with "British insensitivity to, and irritation with, mental agility in any and every form". His disgraceful radio programmes and the full blooming of his loopiness lie ahead. So, too, do most of his exquisite Cantos. | 3531.txt | 3 |
[
"Italy was his hometown.",
"he was persecuted by Americans.",
"he disliked America.",
"he was out of his mind."
]
| When Pound was released from hospital, he returned to Italy because _ | "WHANG-Boom-Boom-cast delicacy to the winds." Thus Ezra Pound in a letter to his father, urging the old man to help promote his first published collection. It might have been the poet's manifesto.
Pound is as divisive a figure today as he was in his own lifetime. For some he was the leading figure of the Modernist movement who redefined what poetry was and could be; and who, in his role as cultural impresario, gave vital impetus to the literary careers of T.S. Eliot, James Joyce and Wyndham Lewis, among others. But for many Pound remains a freak and an embarrassment, a clinical nutcase and vicious anti-Semite who churned out a lot of impenetrable tosh before losing the plot completely.
During the second world war he broadcast pro-Fascist radio programmes from Italy and later avoided trial for treason at home only because he was declared insane. On his release from St Elizabeth's Hospital near Washington, DC, he returned to Italy ("America is a lunatic asylum"), where he died in 1972 aged 87.
David Moody, emeritus professor of English at York University, makes a strong case for Pound's "generous energy" and the "disruptive, regenerative force of his genius". His approach (unlike Pound's) is uncontroversial. He follows the poet's progress chronologically from his childhood in Idaho-still, at the time of his birth in 1885, part of the wild west-to his conquest of literary London between 1908 and 1920. He marshals Pound's staggering output of poetry, prose and correspondence to excellent effect, and offers clear, perceptive commentary on it. He helps us to see poems, such as this famous, peculiarly haunting 19-syllable haiku, in a new light:
The apparition of these faces in the crowd:
Petals on a wet, black bough.
That Mr Moody is constantly being upstaged by the subject of his study is not surprising. Pound was one of the most colourful artistic figures in a period full of them.
According to Ford Madox Ford, who became a good friend of Pound's shortly after the bumptious young American arrived in London: "Ezra would approach with the step of a dancer, making passes with a cane at an imaginary opponent. He would wear trousers made of green billiard cloth, a pink coat, a blue shirt, a tie hand-painted by a Japanese friend, an immense sombrero, a flaming beard cut to a point and a single large blue earring." W.B. Yeats's simple assessment was that: "There is no younger generation of poets. E.P. is a solitary volcano."
A great merit of Mr Moody's approach is the space he gives to Pound's writings. It is love-it-or-hate-it stuff, but, either way, undeniably fascinating. "All good art is realism of one kind or another," Pound said. Reconciling that tidy statement with practically any of his poems is hard work but, as Mr Moody shows over and over again, hard work that offers huge rewards. His first volume ends in 1920, with Pound quitting London in a huff, finally fed up-after more than a decade of doing everything in his power to rattle the intellectual establishment-with "British insensitivity to, and irritation with, mental agility in any and every form". His disgraceful radio programmes and the full blooming of his loopiness lie ahead. So, too, do most of his exquisite Cantos. | 3531.txt | 2 |
[
"His literary approach is unlike that of Pound's, being less contradictory.",
"He focuses on Pound's poetry itself instead of his personality, attempting to keep objective",
"He traces the poet's life in time order to study Pound's achievement.",
"His study offers a fresh sight of Pound's work"
]
| Which one of the following statements is NOT true of David Moody's study on Pound? | "WHANG-Boom-Boom-cast delicacy to the winds." Thus Ezra Pound in a letter to his father, urging the old man to help promote his first published collection. It might have been the poet's manifesto.
Pound is as divisive a figure today as he was in his own lifetime. For some he was the leading figure of the Modernist movement who redefined what poetry was and could be; and who, in his role as cultural impresario, gave vital impetus to the literary careers of T.S. Eliot, James Joyce and Wyndham Lewis, among others. But for many Pound remains a freak and an embarrassment, a clinical nutcase and vicious anti-Semite who churned out a lot of impenetrable tosh before losing the plot completely.
During the second world war he broadcast pro-Fascist radio programmes from Italy and later avoided trial for treason at home only because he was declared insane. On his release from St Elizabeth's Hospital near Washington, DC, he returned to Italy ("America is a lunatic asylum"), where he died in 1972 aged 87.
David Moody, emeritus professor of English at York University, makes a strong case for Pound's "generous energy" and the "disruptive, regenerative force of his genius". His approach (unlike Pound's) is uncontroversial. He follows the poet's progress chronologically from his childhood in Idaho-still, at the time of his birth in 1885, part of the wild west-to his conquest of literary London between 1908 and 1920. He marshals Pound's staggering output of poetry, prose and correspondence to excellent effect, and offers clear, perceptive commentary on it. He helps us to see poems, such as this famous, peculiarly haunting 19-syllable haiku, in a new light:
The apparition of these faces in the crowd:
Petals on a wet, black bough.
That Mr Moody is constantly being upstaged by the subject of his study is not surprising. Pound was one of the most colourful artistic figures in a period full of them.
According to Ford Madox Ford, who became a good friend of Pound's shortly after the bumptious young American arrived in London: "Ezra would approach with the step of a dancer, making passes with a cane at an imaginary opponent. He would wear trousers made of green billiard cloth, a pink coat, a blue shirt, a tie hand-painted by a Japanese friend, an immense sombrero, a flaming beard cut to a point and a single large blue earring." W.B. Yeats's simple assessment was that: "There is no younger generation of poets. E.P. is a solitary volcano."
A great merit of Mr Moody's approach is the space he gives to Pound's writings. It is love-it-or-hate-it stuff, but, either way, undeniably fascinating. "All good art is realism of one kind or another," Pound said. Reconciling that tidy statement with practically any of his poems is hard work but, as Mr Moody shows over and over again, hard work that offers huge rewards. His first volume ends in 1920, with Pound quitting London in a huff, finally fed up-after more than a decade of doing everything in his power to rattle the intellectual establishment-with "British insensitivity to, and irritation with, mental agility in any and every form". His disgraceful radio programmes and the full blooming of his loopiness lie ahead. So, too, do most of his exquisite Cantos. | 3531.txt | 0 |
[
"Pound was of exploding power in his literary creation.",
"Pound's achievement could hardly be reached by later poets.",
"Pound's excellence was unsurpassable in his time.",
"It would take a long time for Pound's generation to fully understand him."
]
| From Yeats's simple assessment, it can be inferred that _ | "WHANG-Boom-Boom-cast delicacy to the winds." Thus Ezra Pound in a letter to his father, urging the old man to help promote his first published collection. It might have been the poet's manifesto.
Pound is as divisive a figure today as he was in his own lifetime. For some he was the leading figure of the Modernist movement who redefined what poetry was and could be; and who, in his role as cultural impresario, gave vital impetus to the literary careers of T.S. Eliot, James Joyce and Wyndham Lewis, among others. But for many Pound remains a freak and an embarrassment, a clinical nutcase and vicious anti-Semite who churned out a lot of impenetrable tosh before losing the plot completely.
During the second world war he broadcast pro-Fascist radio programmes from Italy and later avoided trial for treason at home only because he was declared insane. On his release from St Elizabeth's Hospital near Washington, DC, he returned to Italy ("America is a lunatic asylum"), where he died in 1972 aged 87.
David Moody, emeritus professor of English at York University, makes a strong case for Pound's "generous energy" and the "disruptive, regenerative force of his genius". His approach (unlike Pound's) is uncontroversial. He follows the poet's progress chronologically from his childhood in Idaho-still, at the time of his birth in 1885, part of the wild west-to his conquest of literary London between 1908 and 1920. He marshals Pound's staggering output of poetry, prose and correspondence to excellent effect, and offers clear, perceptive commentary on it. He helps us to see poems, such as this famous, peculiarly haunting 19-syllable haiku, in a new light:
The apparition of these faces in the crowd:
Petals on a wet, black bough.
That Mr Moody is constantly being upstaged by the subject of his study is not surprising. Pound was one of the most colourful artistic figures in a period full of them.
According to Ford Madox Ford, who became a good friend of Pound's shortly after the bumptious young American arrived in London: "Ezra would approach with the step of a dancer, making passes with a cane at an imaginary opponent. He would wear trousers made of green billiard cloth, a pink coat, a blue shirt, a tie hand-painted by a Japanese friend, an immense sombrero, a flaming beard cut to a point and a single large blue earring." W.B. Yeats's simple assessment was that: "There is no younger generation of poets. E.P. is a solitary volcano."
A great merit of Mr Moody's approach is the space he gives to Pound's writings. It is love-it-or-hate-it stuff, but, either way, undeniably fascinating. "All good art is realism of one kind or another," Pound said. Reconciling that tidy statement with practically any of his poems is hard work but, as Mr Moody shows over and over again, hard work that offers huge rewards. His first volume ends in 1920, with Pound quitting London in a huff, finally fed up-after more than a decade of doing everything in his power to rattle the intellectual establishment-with "British insensitivity to, and irritation with, mental agility in any and every form". His disgraceful radio programmes and the full blooming of his loopiness lie ahead. So, too, do most of his exquisite Cantos. | 3531.txt | 2 |
[
"set up.",
"destroy.",
"struggle.",
"disturb."
]
| The word "rattle"(Line 6, Paragraph 7) most probably means _ | "WHANG-Boom-Boom-cast delicacy to the winds." Thus Ezra Pound in a letter to his father, urging the old man to help promote his first published collection. It might have been the poet's manifesto.
Pound is as divisive a figure today as he was in his own lifetime. For some he was the leading figure of the Modernist movement who redefined what poetry was and could be; and who, in his role as cultural impresario, gave vital impetus to the literary careers of T.S. Eliot, James Joyce and Wyndham Lewis, among others. But for many Pound remains a freak and an embarrassment, a clinical nutcase and vicious anti-Semite who churned out a lot of impenetrable tosh before losing the plot completely.
During the second world war he broadcast pro-Fascist radio programmes from Italy and later avoided trial for treason at home only because he was declared insane. On his release from St Elizabeth's Hospital near Washington, DC, he returned to Italy ("America is a lunatic asylum"), where he died in 1972 aged 87.
David Moody, emeritus professor of English at York University, makes a strong case for Pound's "generous energy" and the "disruptive, regenerative force of his genius". His approach (unlike Pound's) is uncontroversial. He follows the poet's progress chronologically from his childhood in Idaho-still, at the time of his birth in 1885, part of the wild west-to his conquest of literary London between 1908 and 1920. He marshals Pound's staggering output of poetry, prose and correspondence to excellent effect, and offers clear, perceptive commentary on it. He helps us to see poems, such as this famous, peculiarly haunting 19-syllable haiku, in a new light:
The apparition of these faces in the crowd:
Petals on a wet, black bough.
That Mr Moody is constantly being upstaged by the subject of his study is not surprising. Pound was one of the most colourful artistic figures in a period full of them.
According to Ford Madox Ford, who became a good friend of Pound's shortly after the bumptious young American arrived in London: "Ezra would approach with the step of a dancer, making passes with a cane at an imaginary opponent. He would wear trousers made of green billiard cloth, a pink coat, a blue shirt, a tie hand-painted by a Japanese friend, an immense sombrero, a flaming beard cut to a point and a single large blue earring." W.B. Yeats's simple assessment was that: "There is no younger generation of poets. E.P. is a solitary volcano."
A great merit of Mr Moody's approach is the space he gives to Pound's writings. It is love-it-or-hate-it stuff, but, either way, undeniably fascinating. "All good art is realism of one kind or another," Pound said. Reconciling that tidy statement with practically any of his poems is hard work but, as Mr Moody shows over and over again, hard work that offers huge rewards. His first volume ends in 1920, with Pound quitting London in a huff, finally fed up-after more than a decade of doing everything in his power to rattle the intellectual establishment-with "British insensitivity to, and irritation with, mental agility in any and every form". His disgraceful radio programmes and the full blooming of his loopiness lie ahead. So, too, do most of his exquisite Cantos. | 3531.txt | 3 |
[
"global inflation",
"reduction in supply",
"fast growth in economy",
"Iraq's suspension of exports"
]
| The main reason for the latest rise of oil price is _ . | Could the bad old days of economic decline be about to return? Since OPEC agreed to supply-cuts in March, the price of crude oil has jumped to almost $26 a barrel, up from less than $10 last December. This near-tripling of oil prices calls up scary memories of the 1973 oil shock, when prices quadrupled, and 1979-80, when they also almost tripled. Both previous shocks resulted in double-digit inflation and global economic decline. So where are the headlines warning of gloom and doom this time?
The oil price was given another push up this week when Iraq suspended oil exports. Strengthening economic growth, at the same time as winter grips the northern hemisphere, could push the price higher still in the short term.
Yet there are good reasons to expect the economic consequences now to be less severe than in the 1970s. In most countries the cost of crude oil now accounts for a smaller share of the price of petrol than it did in the 1970s. In Europe, taxes account for up to four-fifths of the retail price, so even quite big changes in the price of crude have a more muted effect on pump prices than in the past.
Rich economies are also less dependent on oil than they were, and so less sensitive to swings in the oil price. Energy conservation, a shift to other fuels and a decline in the importance of heavy, energy-intensive industries have reduced oil consumption. Software, consultancy and mobile telephones use far less oil than steel or car production. For each dollar of GDP (in constant prices) rich economies now use nearly 50% less oil than in 1973. The OECD estimates in its latest Economic Outlook that, if oil prices averaged $22 a barrel for a full year, compared with $13 in 1998, this would increase the oil import bill in rich economies by only 0.25-0.5% of GDP. That is less than one-quarter of the income loss in 1974 or 1980. On the other hand, oil-importing emerging economies to which heavy industry has shifted have become more energy-intensive, and so could be more seriously squeezed.
One more reason not to lose sleep over the rise in oil prices is that, unlike the rises in the 1970s, it has not occurred against the background of general commodity-price inflation and global excess demand. A sizable portion of the world is only just emerging from economic decline. The Economist's commodity price index is broadly unchanged from a year ago. In 1973 commodity prices jumped by 70%, and in 1979 by almost 30%. | 3358.txt | 1
[
"price of crude rises",
"commodity prices rise",
"consumption rises",
"oil taxes rise"
]
| It can be inferred from the text that the retail price of petrol will go up dramatically if _ . | Could the bad old days of economic decline be about to return? Since OPEC agreed to supply-cuts in March, the price of crude oil has jumped to almost $26 a barrel, up from less than $10 last December. This near-tripling of oil prices calls up scary memories of the 1973 oil shock, when prices quadrupled, and 1979-80, when they also almost tripled. Both previous shocks resulted in double-digit inflation and global economic decline. So where are the headlines warning of gloom and doom this time?
The oil price was given another push up this week when Iraq suspended oil exports. Strengthening economic growth, at the same time as winter grips the northern hemisphere, could push the price higher still in the short term.
Yet there are good reasons to expect the economic consequences now to be less severe than in the 1970s. In most countries the cost of crude oil now accounts for a smaller share of the price of petrol than it did in the 1970s. In Europe, taxes account for up to four-fifths of the retail price, so even quite big changes in the price of crude have a more muted effect on pump prices than in the past.
Rich economies are also less dependent on oil than they were, and so less sensitive to swings in the oil price. Energy conservation, a shift to other fuels and a decline in the importance of heavy, energy-intensive industries have reduced oil consumption. Software, consultancy and mobile telephones use far less oil than steel or car production. For each dollar of GDP (in constant prices) rich economies now use nearly 50% less oil than in 1973. The OECD estimates in its latest Economic Outlook that, if oil prices averaged $22 a barrel for a full year, compared with $13 in 1998, this would increase the oil import bill in rich economies by only 0.25-0.5% of GDP. That is less than one-quarter of the income loss in 1974 or 1980. On the other hand, oil-importing emerging economies to which heavy industry has shifted have become more energy-intensive, and so could be more seriously squeezed.
One more reason not to lose sleep over the rise in oil prices is that, unlike the rises in the 1970s, it has not occurred against the background of general commodity-price inflation and global excess demand. A sizable portion of the world is only just emerging from economic decline. The Economist's commodity price index is broadly unchanged from a year ago. In 1973 commodity prices jumped by 70%, and in 1979 by almost 30%. | 3358.txt | 3
[
"heavy industry becomes more energy-intensive",
"income loss mainly results from fluctuating crude oil prices",
"manufacturing industry has been seriously squeezed",
"oil price changes have no significant impact on GDP"
]
| The estimates in Economic Outlook show that in rich countries _ . | Could the bad old days of economic decline be about to return? Since OPEC agreed to supply-cuts in March, the price of crude oil has jumped to almost $26 a barrel, up from less than $10 last December. This near-tripling of oil prices calls up scary memories of the 1973 oil shock, when prices quadrupled, and 1979-80, when they also almost tripled. Both previous shocks resulted in double-digit inflation and global economic decline. So where are the headlines warning of gloom and doom this time?
The oil price was given another push up this week when Iraq suspended oil exports. Strengthening economic growth, at the same time as winter grips the northern hemisphere, could push the price higher still in the short term.
Yet there are good reasons to expect the economic consequences now to be less severe than in the 1970s. In most countries the cost of crude oil now accounts for a smaller share of the price of petrol than it did in the 1970s. In Europe, taxes account for up to four-fifths of the retail price, so even quite big changes in the price of crude have a more muted effect on pump prices than in the past.
Rich economies are also less dependent on oil than they were, and so less sensitive to swings in the oil price. Energy conservation, a shift to other fuels and a decline in the importance of heavy, energy-intensive industries have reduced oil consumption. Software, consultancy and mobile telephones use far less oil than steel or car production. For each dollar of GDP (in constant prices) rich economies now use nearly 50% less oil than in 1973. The OECD estimates in its latest Economic Outlook that, if oil prices averaged $22 a barrel for a full year, compared with $13 in 1998, this would increase the oil import bill in rich economies by only 0.25-0.5% of GDP. That is less than one-quarter of the income loss in 1974 or 1980. On the other hand, oil-importing emerging economies to which heavy industry has shifted have become more energy-intensive, and so could be more seriously squeezed.
One more reason not to lose sleep over the rise in oil prices is that, unlike the rises in the 1970s, it has not occurred against the background of general commodity-price inflation and global excess demand. A sizable portion of the world is only just emerging from economic decline. The Economist's commodity price index is broadly unchanged from a year ago. In 1973 commodity prices jumped by 70%, and in 1979 by almost 30%. | 3358.txt | 3
[
"oil-price shocks are less shocking now",
"inflation seems irrelevant to oil-price shocks",
"energy conservation can keep down the oil prices",
"the price rise of crude leads to the shrinking of heavy industry"
]
| We can draw a conclusion from the text that _ . | Could the bad old days of economic decline be about to return? Since OPEC agreed to supply-cuts in March, the price of crude oil has jumped to almost $26 a barrel, up from less than $10 last December. This near-tripling of oil prices calls up scary memories of the 1973 oil shock, when prices quadrupled, and 1979-80, when they also almost tripled. Both previous shocks resulted in double-digit inflation and global economic decline. So where are the headlines warning of gloom and doom this time?
The oil price was given another push up this week when Iraq suspended oil exports. Strengthening economic growth, at the same time as winter grips the northern hemisphere, could push the price higher still in the short term.
Yet there are good reasons to expect the economic consequences now to be less severe than in the 1970s. In most countries the cost of crude oil now accounts for a smaller share of the price of petrol than it did in the 1970s. In Europe, taxes account for up to four-fifths of the retail price, so even quite big changes in the price of crude have a more muted effect on pump prices than in the past.
Rich economies are also less dependent on oil than they were, and so less sensitive to swings in the oil price. Energy conservation, a shift to other fuels and a decline in the importance of heavy, energy-intensive industries have reduced oil consumption. Software, consultancy and mobile telephones use far less oil than steel or car production. For each dollar of GDP (in constant prices) rich economies now use nearly 50% less oil than in 1973. The OECD estimates in its latest Economic Outlook that, if oil prices averaged $22 a barrel for a full year, compared with $13 in 1998, this would increase the oil import bill in rich economies by only 0.25-0.5% of GDP. That is less than one-quarter of the income loss in 1974 or 1980. On the other hand, oil-importing emerging economies to which heavy industry has shifted have become more energy-intensive, and so could be more seriously squeezed.
One more reason not to lose sleep over the rise in oil prices is that, unlike the rises in the 1970s, it has not occurred against the background of general commodity-price inflation and global excess demand. A sizable portion of the world is only just emerging from economic decline. The Economist's commodity price index is broadly unchanged from a year ago. In 1973 commodity prices jumped by 70%, and in 1979 by almost 30%. | 3358.txt | 0
[
"optimistic",
"sensitive",
"gloomy",
"scared"
]
| From the text we can see that the writer seems _ . | Could the bad old days of economic decline be about to return? Since OPEC agreed to supply-cuts in March, the price of crude oil has jumped to almost $26 a barrel, up from less than $10 last December. This near-tripling of oil prices calls up scary memories of the 1973 oil shock, when prices quadrupled, and 1979-80, when they also almost tripled. Both previous shocks resulted in double-digit inflation and global economic decline. So where are the headlines warning of gloom and doom this time?
The oil price was given another push up this week when Iraq suspended oil exports. Strengthening economic growth, at the same time as winter grips the northern hemisphere, could push the price higher still in the short term.
Yet there are good reasons to expect the economic consequences now to be less severe than in the 1970s. In most countries the cost of crude oil now accounts for a smaller share of the price of petrol than it did in the 1970s. In Europe, taxes account for up to four-fifths of the retail price, so even quite big changes in the price of crude have a more muted effect on pump prices than in the past.
Rich economies are also less dependent on oil than they were, and so less sensitive to swings in the oil price. Energy conservation, a shift to other fuels and a decline in the importance of heavy, energy-intensive industries have reduced oil consumption. Software, consultancy and mobile telephones use far less oil than steel or car production. For each dollar of GDP (in constant prices) rich economies now use nearly 50% less oil than in 1973. The OECD estimates in its latest Economic Outlook that, if oil prices averaged $22 a barrel for a full year, compared with $13 in 1998, this would increase the oil import bill in rich economies by only 0.25-0.5% of GDP. That is less than one-quarter of the income loss in 1974 or 1980. On the other hand, oil-importing emerging economies to which heavy industry has shifted have become more energy-intensive, and so could be more seriously squeezed.
One more reason not to lose sleep over the rise in oil prices is that, unlike the rises in the 1970s, it has not occurred against the background of general commodity-price inflation and global excess demand. A sizable portion of the world is only just emerging from economic decline. The Economist's commodity price index is broadly unchanged from a year ago. In 1973 commodity prices jumped by 70%, and in 1979 by almost 30%. | 3358.txt | 0
[
"Working at the office is safer than staying at home.",
"Traverlling to work on public transport is safer than working at the office.",
"Staying at home is safer than working in the chemical industry.",
"Working in the chemical industry is safer than traveling by air."
]
| Which of the following statements is true? | Which is safer-staying at home, traveling to work on public transport, or working in the office? Surprisingly, each of these carries the same risk, which is very low. However, what about flying compared to working in the chemical industry? Unfortunately, the former is 65 times riskier than the latter! In fact, the accident rate of workers in the chemical industry is less than that of almost any other human activity, and working in it is almost as safe as staying at home.
The trouble with the chemical industry is that when things go wrong they often cause death to those living nearby. It is this which makes chemical accidents so newsworthy. Fortunately, they are extremely rare. The most famous ones happened at Texas City (1947), Flixborough (1974), Seveso (1976), Pemex (1984) and Bhopal (1984).
Some of these are always in the minds of the people even though the loss of life was small. No one died at Seveso, and only 28 workers at Flixborough. The worst accident of all was Bhopal, where up to 3,000 were killed. The Texas City explosion of fertilizer killed 552. The Pemex fire at a storage plant for natural gas in the suburbs of Mexico City took 542 lives, just a month before the unfortunate event at Bhopal.
Some experts have discussed these accidents and used each accident to illustrate a particular danger. Thus the Texas City explosion was caused by tons of ammonium nitrate, which is safe unless stored in great quantity. The Flixborough fireball was the fault of management, which took risks to keep production going during essential repairs. The Seveso accident shows what happens if the local authorities lack knowledge of the danger on their doorstep. When the poisonous gas drifted over the town, local leaders were incapable of taking effective action. The Pemex fire was made worse by an overloaded site in an overcrowded suburb. The fire set off a chain reaction of exploding storage tanks. Yet, by a miracle, the two largest tanks did not explode. Had these caught fire, then the 3,000-strong rescue team and fire fighters would all have died. | 60.txt | 3
[
"they are very rare",
"they often cause loss of life",
"they always occur in big cities",
"they arouse the interest of all the readers"
]
| Chemical accidents are usually important enough to be reported as news because _ . | Which is safer-staying at home, traveling to work on public transport, or working in the office? Surprisingly, each of these carries the same risk, which is very low. However, what about flying compared to working in the chemical industry? Unfortunately, the former is 65 times riskier than the latter! In fact, the accident rate of workers in the chemical industry is less than that of almost any other human activity, and working in it is almost as safe as staying at home.
The trouble with the chemical industry is that when things go wrong they often cause death to those living nearby. It is this which makes chemical accidents so newsworthy. Fortunately, they are extremely rare. The most famous ones happened at Texas City (1947), Flixborough (1974), Seveso (1976), Pemex (1984) and Bhopal (1984).
Some of these are always in the minds of the people even though the loss of life was small. No one died at Seveso, and only 28 workers at Flixborough. The worst accident of all was Bhopal, where up to 3,000 were killed. The Texas City explosion of fertilizer killed 552. The Pemex fire at a storage plant for natural gas in the suburbs of Mexico City took 542 lives, just a month before the unfortunate event at Bhopal.
Some experts have discussed these accidents and used each accident to illustrate a particular danger. Thus the Texas City explosion was caused by tons of ammonium nitrate, which is safe unless stored in great quantity. The Flixborough fireball was the fault of management, which took risks to keep production going during essential repairs. The Seveso accident shows what happens if the local authorities lack knowledge of the danger on their doorstep. When the poisonous gas drifted over the town, local leaders were incapable of taking effective action. The Pemex fire was made worse by an overloaded site in an overcrowded suburb. The fire set off a chain reaction of exploding storage tanks. Yet, by a miracle, the two largest tanks did not explode. Had these caught fire, then the 3,000-strong rescue team and fire fighters would all have died. | 60.txt | 1
[
"Texas city",
"Flixborough",
"Seveso",
"Mexico City"
]
| According to the passage, the chemical accident that was caused by the fault of management happened at _ . | Which is safer-staying at home, traveling to work on public transport, or working in the office? Surprisingly, each of these carries the same risk, which is very low. However, what about flying compared to working in the chemical industry? Unfortunately, the former is 65 times riskier than the latter! In fact, the accident rate of workers in the chemical industry is less than that of almost any other human activity, and working in it is almost as safe as staying at home.
The trouble with the chemical industry is that when things go wrong they often cause death to those living nearby. It is this which makes chemical accidents so newsworthy. Fortunately, they are extremely rare. The most famous ones happened at Texas City (1947), Flixborough (1974), Seveso (1976), Pemex (1984) and Bhopal (1984).
Some of these are always in the minds of the people even though the loss of life was small. No one died at Seveso, and only 28 workers at Flixborough. The worst accident of all was Bhopal, where up to 3,000 were killed. The Texas City explosion of fertilizer killed 552. The Pemex fire at a storage plant for natural gas in the suburbs of Mexico City took 542 lives, just a month before the unfortunate event at Bhopal.
Some experts have discussed these accidents and used each accident to illustrate a particular danger. Thus the Texas City explosion was caused by tons of ammonium nitrate, which is safe unless stored in great quantity. The Flixborough fireball was the fault of management, which took risks to keep production going during essential repairs. The Seveso accident shows what happens if the local authorities lack knowledge of the danger on their doorstep. When the poisonous gas drifted over the town, local leaders were incapable of taking effective action. The Pemex fire was made worse by an overloaded site in an overcrowded suburb. The fire set off a chain reaction of exploding storage tanks. Yet, by a miracle, the two largest tanks did not explode. Had these caught fire, then the 3,000-strong rescue team and fire fighters would all have died. | 60.txt | 0
[
"natural gas, which can easily catch fire",
"fertilizer, which can't be stored in a great quantity",
"poisonous substance, which can't be used in overcrowded areas",
"fuel, which is stored in large tanks"
]
| From the passage we know that ammonium nitrate is a kind of _ . | Which is safer-staying at home, traveling to work on public transport, or working in the office? Surprisingly, each of these carries the same risk, which is very low. However, what about flying compared to working in the chemical industry? Unfortunately, the former is 65 times riskier than the latter! In fact, the accident rate of workers in the chemical industry is less than that of almost any of human activity, and almost as safe as staying at home.
The trouble with the chemical industry is that when things go wrong they often cause death to those living nearby. It is this which makes chemical accidents so newsworthy. Fortunately, they are extremely rare. The most famous ones happened at Texas City (1947), Flixborough (1974), Seveso (1976), Pemex (1984) and Bhopal (1984).
Some of these are always in the minds of the people even though the loss of life was small. No one died at Seveso, and only 28 workers at Flixborough. The worst accident of all was Bhopal, where up to 3,000 were killed. The Texas City explosion of fertilizer killed 552. The Pemex fire at a storage plant for natural gas in the suburbs of Mexico City took 542 lives, just a month before the unfortunate event at Bhopal.
Some experts have discussed these accidents and used each accident to illustrate a particular danger. Thus the Texas City explosion was caused by tons of ammonium nitrate, which is safe unless stored in great quantity. The Flixborough fireball was the fault of management, which took risks to keep production going during essential repairs. The Seveso accident shows what happens if the local authorities lack knowledge of the danger on their doorstep. When the poisonous gas drifted over the town, local leaders were incapable of taking effective action. The Pemex fire was made worse by an overloaded site in an overcrowded suburb. The fire set off a chain reaction of exploding storage tanks. Yet, by a miracle, the two largest tanks did not explode. Had these caught fire, then the 3,000-strong rescue team and fire fighters would all have died. | 60.txt | 1 |
[
"to avoid any accidents we should not repair the facilities in chemical industry",
"the local authorities should not be concerned with the production of the chemical industry",
"all these accidents could have been avoided or controlled if effective measure had been taken",
"natural gas stored in very large tanks is always safe"
]
| From the discussion among some experts we may conclude that _ . | Which is safer-staying at home, traveling to work on public transport, or working in the office? Surprisingly, each of these carries the same risk, which is very low. However, what about flying compared to working in the chemical industry? Unfortunately, the former is 65 times riskier than the latter! In fact, the accident rate of workers in the chemical industry is less than that of almost any other human activity, and almost as safe as staying at home.
The trouble with the chemical industry is that when things go wrong they often cause death to those living nearby. It is this which makes chemical accidents so newsworthy. Fortunately, they are extremely rare. The most famous ones happened at Texas City (1947), Flixborough (1974), Seveso (1976), Pemex (1984) and Bhopal (1984).
Some of these are always in the minds of the people even though the loss of life was small. No one died at Seveso, and only 28 workers at Flixborough. The worst accident of all was Bhopal, where up to 3,000 were killed. The Texas City explosion of fertilizer killed 552. The Pemex fire at a storage plant for natural gas in the suburbs of Mexico City took 542 lives, just a month before the unfortunate event at Bhopal.
Some experts have discussed these accidents and used each accident to illustrate a particular danger. Thus the Texas City explosion was caused by tons of ammonium nitrate, which is safe unless stored in great quantity. The Flixborough fireball was the fault of management, which took risks to keep production going during essential repairs. The Seveso accident shows what happens if the local authorities lack knowledge of the danger on their doorstep. When the poisonous gas drifted over the town, local leaders were incapable of taking effective action. The Pemex fire was made worse by an overloaded site in an overcrowded suburb. The fire set off a chain reaction of exploding storage tanks. Yet, by a miracle, the two largest tanks did not explode. Had these caught fire, then the 3,000-strong rescue team and fire fighters would all have died. | 60.txt | 2 |
[
"the service industry is relying more and more on the female work force",
"manufacturing industries are steadily increasing",
"people find it harder and harder to earn a living by working in factories",
"most of the job opportunities can now be found in the service industry"
]
| A characteristic of the information age is that ________. | A new era is upon us. Call it what you will: the service economy, the information age, the knowledge society. It all translates to a fundamental change in the way we work. Already we're partly there. The percentage of people who earn their living by making things has fallen dramatically in the Western World. Today the majority of jobs in America, Europe and Japan (two thirds or more in many of these countries) are in the service industry, and the number is on the rise. More women are in the work force than ever before. There are more part-time jobs. More people are self-employed. But the breadth of the economic transformation can't be measured by numbers alone, because it also is giving rise to a radical new way of thinking about the nature of work itself. Long-held notions about jobs and careers, the skills needed to succeed, even the relation between individuals and employers-all these are being challenged.
We have only to look behind us to get some sense of what may lie ahead. No one looking ahead 20 years possibly could have foreseen the ways in which a single invention, the chip, would transform our world thanks to its applications in personal computers, digital communications and factory robots. Tomorrow's achievements in biotechnology, artificial intelligence or even some still unimagined technology could produce a similar wave of dramatic changes. But one thing is certain: information and knowledge will become even more vital, and the people who possess it, whether they work in manufacturing or services, will have the advantage and produce the wealth. Computer knowledge will become as basic a requirement as the ability to read and write. The ability to solve problems by applying information instead of performing routine tasks will be valued above all else. If you cast your mind ahead 10 years, information services will be predominant. It will be the way you do your job. | 2340.txt | 3 |
[
"the difference between the employee and the employer has become insignificant",
"people's traditional concepts about work no longer hold true",
"most people have to take part-time jobs",
"people have to change their jobs from time to time"
]
| One of the great changes brought about by the knowledge society is that ________. | A new era is upon us. Call it what you will: the service economy, the information age, the knowledge society. It all translates to a fundamental change in the way we work. Already we're partly there. The percentage of people who earn their living by making things has fallen dramatically in the Western World. Today the majority of jobs in America, Europe and Japan (two thirds or more in many of these countries) are in the service industry, and the number is on the rise. More women are in the work force than ever before. There are more part-time jobs. More people are self-employed. But the breadth of the economic transformation can't be measured by numbers alone, because it also is giving rise to a radical new way of thinking about the nature of work itself. Long-held notions about jobs and careers, the skills needed to succeed, even the relation between individuals and employers-all these are being challenged.
We have only to look behind us to get some sense of what may lie ahead. No one looking ahead 20 years possibly could have foreseen the ways in which a single invention, the chip, would transform our world thanks to its applications in personal computers, digital communications and factory robots. Tomorrow's achievements in biotechnology, artificial intelligence or even some still unimagined technology could produce a similar wave of dramatic changes. But one thing is certain: information and knowledge will become even more vital, and the people who possess it, whether they work in manufacturing or services, will have the advantage and produce the wealth. Computer knowledge will become as basic a requirement as the ability to read and write. The ability to solve problems by applying information instead of performing routine tasks will be valued above all else. If you cast your mind ahead 10 years, information services will be predominant. It will be the way you do your job. | 2340.txt | 1 |
[
"people should be able to respond quickly to the advancement of technology",
"future achievements in technology will bring about inconceivable dramatic changes",
"the importance of high technology has been overlooked",
"computer science will play a leading role in the future information services"
]
| By referring to computers and other inventions, the author means to say that ________. | A new era is upon us. Call it what you will: the service economy, the information age, the knowledge society. It all translates to a fundamental change in the way we work. Already we're partly there. The percentage of people who earn their living by making things has fallen dramatically in the Western World. Today the majority of jobs in America, Europe and Japan (two thirds or more in many of these countries) are in the service industry, and the number is on the rise. More women are in the work force than ever before. There are more part-time jobs. More people are self-employed. But the breadth of the economic transformation can't be measured by numbers alone, because it also is giving rise to a radical new way of thinking about the nature of work itself. Long-held notions about jobs and careers, the skills needed to succeed, even the relation between individuals and employers-all these are being challenged.
We have only to look behind us to get some sense of what may lie ahead. No one looking ahead 20 years possibly could have foreseen the ways in which a single invention, the chip, would transform our world thanks to its applications in personal computers, digital communications and factory robots. Tomorrow's achievements in biotechnology, artificial intelligence or even some still unimagined technology could produce a similar wave of dramatic changes. But one thing is certain: information and knowledge will become even more vital, and the people who possess it, whether they work in manufacturing or services, will have the advantage and produce the wealth. Computer knowledge will become as basic a requirement as the ability to read and write. The ability to solve problems by applying information instead of performing routine tasks will be valued above all else. If you cast your mind ahead 10 years, information services will be predominant. It will be the way you do your job. | 2340.txt | 1 |
[
"possess and know how to make use of information",
"give full play to their brain potential",
"involve themselves in service industries",
"cast their minds ahead instead of looking back"
]
| The future will probably belong to those who ________. | A new era is upon us. Call it what you will: the service economy, the information age, the knowledge society. It all translates to a fundamental change in the way we work. Already we're partly there. The percentage of people who earn their living by making things has fallen dramatically in the Western World. Today the majority of jobs in America, Europe and Japan (two thirds or more in many of these countries) are in the service industry, and the number is on the rise. More women are in the work force than ever before. There are more part-time jobs. More people are self-employed. But the breadth of the economic transformation can't be measured by numbers alone, because it also is giving rise to a radical new way of thinking about the nature of work itself. Long-held notions about jobs and careers, the skills needed to succeed, even the relation between individuals and employers-all these are being challenged.
We have only to look behind us to get some sense of what may lie ahead. No one looking ahead 20 years possibly could have foreseen the ways in which a single invention, the chip, would transform our world thanks to its applications in personal computers, digital communications and factory robots. Tomorrow's achievements in biotechnology, artificial intelligence or even some still unimagined technology could produce a similar wave of dramatic changes. But one thing is certain: information and knowledge will become even more vital, and the people who possess it, whether they work in manufacturing or services, will have the advantage and produce the wealth. Computer knowledge will become as basic a requirement as the ability to read and write. The ability to solve problems by applying information instead of performing routine tasks will be valued above all else. If you cast your mind ahead 10 years, information services will be predominant. It will be the way you do your job. | 2340.txt | 0 |
[
"Computers and the Knowledge Society",
"Service Industries in Modern Society",
"Features and Implications of the New Era",
"Rapid Advancement of Information Technology"
]
| Which of the following would be the best title for the passage? | A new era is upon us. Call it what you will: the service economy, the information age, the knowledge society. It all translates to a fundamental change in the way we work. Already we're partly there. The percentage of people who earn their living by making things has fallen dramatically in the Western World. Today the majority of jobs in America, Europe and Japan (two thirds or more in many of these countries) are in the service industry, and the number is on the rise. More women are in the work force than ever before. There are more part-time jobs. More people are self-employed. But the breadth of the economic transformation can't be measured by numbers alone, because it also is giving rise to a radical new way of thinking about the nature of work itself. Long-held notions about jobs and careers, the skills needed to succeed, even the relation between individuals and employers-all these are being challenged.
We have only to look behind us to get some sense of what may lie ahead. No one looking ahead 20 years possibly could have foreseen the ways in which a single invention, the chip, would transform our world thanks to its applications in personal computers, digital communications and factory robots. Tomorrow's achievements in biotechnology, artificial intelligence or even some still unimagined technology could produce a similar wave of dramatic changes. But one thing is certain: information and knowledge will become even more vital, and the people who possess it, whether they work in manufacturing or services, will have the advantage and produce the wealth. Computer knowledge will become as basic a requirement as the ability to read and write. The ability to solve problems by applying information instead of performing routine tasks will be valued above all else. If you cast your mind ahead 10 years, information services will be predominant. It will be the way you do your job. | 2340.txt | 2 |
[
"they look down upon",
"that can be exchanged in the market",
"worth people's reverence",
"that should be replaced by other forms of money"
]
| In economists' eyes, gold is something _ . | Most economists hate gold. Not, you understand, that they would turn up their noses at a bar or two. But they find the reverence in which many hold the metal almost irrational. That it was used as money for millennia is irrelevant: it isn't any more. Modern money takes the form of paper or, more often, electronic data. To economists, gold is now just another commodity.
So why is its price soaring? Over the past week, this has topped $450 a troy ounce, up by 9% since the beginning of the year and 77% since April 2001. Ah, comes the reply, gold transactions are denominated in dollars, and the rise in the price simply reflects the dollar's fall in terms of other currencies, especially the euro, against which it hit a new low this week. Expressed in euros, the gold price has moved much less. However, there is no iron link, as it were, between the value of the dollar and the value of gold. A rising price of gold, like that of anything else, can reflect an increase in demand as well as a depreciation of its unit of account.
This is where gold bulls come in. The fall in the dollar is important, but mainly because as a store of value the dollar stinks. With a few longish rallies, the greenback has been on a downward trend since it came off the gold standard in 1971. Now it is suffering one of its sharper declines. At the margin, extra demand has come from those who think dollars-indeed any money backed by nothing more than promises to keep inflation low-a decidedly risky investment, mainly because America, with the world's reserve currency, has been able to create and borrow so many of them. The least painful way of repaying those dollars is to make them worth less.
The striking exception to this extra demand comes from central banks, which would like to sell some of the gold they already have. As a legacy of the days when their currencies were backed by the metal, central banks still hold one-fifth of the world's gold. Last month the Bank of France said it would sell 500 tonnes in coming years. But big sales by central banks can cause the price to plunge-as when the Bank of England sold 395 tonnes between 1999 and 2002. The result was an agreement between central banks to co-ordinate and limit future sales.
If the price of gold marches higher, this agreement will presumably be ripped up, although a dollar crisis might make central banks think twice about switching into paper money. Will the overhang of central-bank gold drag the price down again? Not necessarily. As James Grant, gold bug and publisher of Grant's Interest Rate Observer, a newsletter, points out, in recent years the huge glut of government debt has not stopped a sharp rise in its price. | 429.txt | 1 |
[
"the increasing demand for gold",
"the depreciation of the euro",
"the link between the dollar and gold",
"the increment of the value of the dollar"
]
| According to the author, one of the reasons for the rise of the gold price is _ . | Most economists hate gold. Not, you understand, that they would turn up their noses at a bar or two. But they find the reverence in which many hold the metal almost irrational. That it was used as money for millennia is irrelevant: it isn't any more. Modern money takes the form of paper or, more often, electronic data. To economists, gold is now just another commodity.
So why is its price soaring? Over the past week, this has topped $450 a troy ounce, up by 9% since the beginning of the year and 77% since April 2001. Ah, comes the reply, gold transactions are denominated in dollars, and the rise in the price simply reflects the dollar's fall in terms of other currencies, especially the euro, against which it hit a new low this week. Expressed in euros, the gold price has moved much less. However, there is no iron link, as it were, between the value of the dollar and the value of gold. A rising price of gold, like that of anything else, can reflect an increase in demand as well as a depreciation of its unit of account.
This is where gold bulls come in. The fall in the dollar is important, but mainly because as a store of value the dollar stinks. With a few longish rallies, the greenback has been on a downward trend since it came off the gold standard in 1971. Now it is suffering one of its sharper declines. At the margin, extra demand has come from those who think dollars-indeed any money backed by nothing more than promises to keep inflation low-a decidedly risky investment, mainly because America, with the world's reserve currency, has been able to create and borrow so many of them. The least painful way of repaying those dollars is to make them worth less.
The striking exception to this extra demand comes from central banks, which would like to sell some of the gold they already have. As a legacy of the days when their currencies were backed by the metal, central banks still hold one-fifth of the world's gold. Last month the Bank of France said it would sell 500 tonnes in coming years. But big sales by central banks can cause the price to plunge-as when the Bank of England sold 395 tonnes between 1999 and 2002. The result was an agreement between central banks to co-ordinate and limit future sales.
If the price of gold marches higher, this agreement will presumably be ripped up, although a dollar crisis might make central banks think twice about switching into paper money. Will the overhang of central-bank gold drag the price down again? Not necessarily. As James Grant, gold bug and publisher of Grant's Interest Rate Observer, a newsletter, points out, in recent years the huge glut of government debt has not stopped a sharp rise in its price. | 429.txt | 0 |
[
"the decline of the dollar is inevitable",
"America benefits from the depreciation of the dollar",
"the depreciation of the dollar is good news to other currencies",
"investment in the dollar yields more returns than that in gold"
]
| We can infer from the third paragraph that _ . | Most economists hate gold. Not, you understand, that they would turn up their noses at a bar or two. But they find the reverence in which many hold the metal almost irrational. That it was used as money for millennia is irrelevant: it isn't any more. Modern money takes the form of paper or, more often, electronic data. To economists, gold is now just another commodity.
So why is its price soaring? Over the past week, this has topped $450 a troy ounce, up by 9% since the beginning of the year and 77% since April 2001. Ah, comes the reply, gold transactions are denominated in dollars, and the rise in the price simply reflects the dollar's fall in terms of other currencies, especially the euro, against which it hit a new low this week. Expressed in euros, the gold price has moved much less. However, there is no iron link, as it were, between the value of the dollar and the value of gold. A rising price of gold, like that of anything else, can reflect an increase in demand as well as a depreciation of its unit of account.
This is where gold bulls come in. The fall in the dollar is important, but mainly because as a store of value the dollar stinks. With a few longish rallies, the greenback has been on a downward trend since it came off the gold standard in 1971. Now it is suffering one of its sharper declines. At the margin, extra demand has come from those who think dollars-indeed any money backed by nothing more than promises to keep inflation low-a decidedly risky investment, mainly because America, with the world's reserve currency, has been able to create and borrow so many of them. The least painful way of repaying those dollars is to make them worth less.
The striking exception to this extra demand comes from central banks, which would like to sell some of the gold they already have. As a legacy of the days when their currencies were backed by the metal, central banks still hold one-fifth of the world's gold. Last month the Bank of France said it would sell 500 tonnes in coming years. But big sales by central banks can cause the price to plunge-as when the Bank of England sold 395 tonnes between 1999 and 2002. The result was an agreement between central banks to co-ordinate and limit future sales.
If the price of gold marches higher, this agreement will presumably be ripped up, although a dollar crisis might make central banks think twice about switching into paper money. Will the overhang of central-bank gold drag the price down again? Not necessarily. As James Grant, gold bug and publisher of Grant's Interest Rate Observer, a newsletter, points out, in recent years the huge glut of government debt has not stopped a sharp rise in its price. | 429.txt | 1 |
[
"strengthened",
"broadened",
"renegotiated",
"torn up"
]
| The phrase "ripped up" (Line 1, Paragraph 5) most probably means _ . | Most economists hate gold. Not, you understand, that they would turn up their noses at a bar or two. But they find the reverence in which many hold the metal almost irrational. That it was used as money for millennia is irrelevant: it isn't any more. Modern money takes the form of paper or, more often, electronic data. To economists, gold is now just another commodity.
So why is its price soaring? Over the past week, this has topped $450 a troy ounce, up by 9% since the beginning of the year and 77% since April 2001. Ah, comes the reply, gold transactions are denominated in dollars, and the rise in the price simply reflects the dollar's fall in terms of other currencies, especially the euro, against which it hit a new low this week. Expressed in euros, the gold price has moved much less. However, there is no iron link, as it were, between the value of the dollar and the value of gold. A rising price of gold, like that of anything else, can reflect an increase in demand as well as a depreciation of its unit of account.
This is where gold bulls come in. The fall in the dollar is important, but mainly because as a store of value the dollar stinks. With a few longish rallies, the greenback has been on a downward trend since it came off the gold standard in 1971. Now it is suffering one of its sharper declines. At the margin, extra demand has come from those who think dollars-indeed any money backed by nothing more than promises to keep inflation low-a decidedly risky investment, mainly because America, with the world's reserve currency, has been able to create and borrow so many of them. The least painful way of repaying those dollars is to make them worth less.
The striking exception to this extra demand comes from central banks, which would like to sell some of the gold they already have. As a legacy of the days when their currencies were backed by the metal, central banks still hold one-fifth of the world's gold. Last month the Bank of France said it would sell 500 tonnes in coming years. But big sales by central banks can cause the price to plunge-as when the Bank of England sold 395 tonnes between 1999 and 2002. The result was an agreement between central banks to co-ordinate and limit future sales.
If the price of gold marches higher, this agreement will presumably be ripped up, although a dollar crisis might make central banks think twice about switching into paper money. Will the overhang of central-bank gold drag the price down again? Not necessarily. As James Grant, gold bug and publisher of Grant's Interest Rate Observer, a newsletter, points out, in recent years the huge glut of government debt has not stopped a sharp rise in its price. | 429.txt | 3 |
[
"will not last long",
"will attract some central banks to sell gold",
"will impel central banks to switch into paper money",
"will lead to a dollar crisis"
]
| According to the passage, the rise of the gold price _ . | Most economists hate gold. Not, you understand, that they would turn up their noses at a bar or two. But they find the reverence in which many hold the metal almost irrational. That it was used as money for millennia is irrelevant: it isn't any more. Modern money takes the form of paper or, more often, electronic data. To economists, gold is now just another commodity.
So why is its price soaring? Over the past week, this has topped $450 a troy ounce, up by 9% since the beginning of the year and 77% since April 2001. Ah, comes the reply, gold transactions are denominated in dollars, and the rise in the price simply reflects the dollar's fall in terms of other currencies, especially the euro, against which it hit a new low this week. Expressed in euros, the gold price has moved much less. However, there is no iron link, as it were, between the value of the dollar and the value of gold. A rising price of gold, like that of anything else, can reflect an increase in demand as well as a depreciation of its unit of account.
This is where gold bulls come in. The fall in the dollar is important, but mainly because as a store of value the dollar stinks. With a few longish rallies, the greenback has been on a downward trend since it came off the gold standard in 1971. Now it is suffering one of its sharper declines. At the margin, extra demand has come from those who think dollars-indeed any money backed by nothing more than promises to keep inflation low-a decidedly risky investment, mainly because America, with the world's reserve currency, has been able to create and borrow so many of them. The least painful way of repaying those dollars is to make them worth less.
The striking exception to this extra demand comes from central banks, which would like to sell some of the gold they already have. As a legacy of the days when their currencies were backed by the metal, central banks still hold one-fifth of the world's gold. Last month the Bank of France said it would sell 500 tonnes in coming years. But big sales by central banks can cause the price to plunge-as when the Bank of England sold 395 tonnes between 1999 and 2002. The result was an agreement between central banks to co-ordinate and limit future sales.
If the price of gold marches higher, this agreement will presumably be ripped up, although a dollar crisis might make central banks think twice about switching into paper money. Will the overhang of central-bank gold drag the price down again? Not necessarily. As James Grant, gold bug and publisher of Grant's Interest Rate Observer, a newsletter, points out, in recent years the huge glut of government debt has not stopped a sharp rise in its price. | 429.txt | 1 |
[
"believe the world's environment is in an undesirable condition",
"agree that the environment of the world is not as bad as it is thought to be",
"get high marks for their good knowledge of the world's environment",
"appear somewhat unconcerned about the state of the world's environment"
]
| According to the author, most students ________. | "The world's environment is surprisingly healthy. Discuss." If that were an examination topic, most students would tear it apart, offering a long list of complaints: from local smog to global climate change, from the felling of forests to the extinction of species. The list would largely be accurate, the concern legitimate. Yet the students who should be given the highest marks would actually be those who agreed with the statement. The surprise is how good things are, not how bad.
After all, the world's population has more than tripled during this century, and world output has risen hugely, so you would expect the earth itself to have been affected. Indeed, if people lived, consumed and produced things in the same way as they did in 1900 (or 1950, or indeed 1980), the world by now would be a pretty disgusting place: smelly, dirty, toxic and dangerous.
But they don't. The reasons why they don't, and why the environment has not been ruined, have to do with prices, technological innovation, social change and government regulation in response to popular pressure. That is why today's environmental problems in the poor countries ought, in principle, to be solvable.
Raw materials have not run out, and show no sign of doing so. Logically, one day they must: the planet is a finite place. Yet it is also very big, and man is very ingenious. What has happened is that every time a material seems to be running short, the price has risen and, in response, people have looked for new sources of supply, tried to find ways to use less of the material, or looked for a new substitute. For this reason prices for energy and for minerals have fallen in real terms during the century. The same is true for food. Prices fluctuate, in response to harvests, natural disasters and political instability; and when they rise, it takes some time before new sources of supply become available. But they always do, assisted by new farming and crop technology. The long term trend has been downwards.
It is where prices and markets do not operate properly that this benign trend begins to stumble, and the genuine problems arise. Markets cannot always keep the environment healthy. If no one owns the resource concerned, no one has an interest in conserving it or fostering it: fish is the best example of this. | 3945.txt | 0 |
[
"has made the world a worse place to live in",
"has had a positive influence on the environment",
"has not significantly affected the environment",
"has made the world a dangerous place to live in"
]
| The huge increase in world production and population ________. | "The world's environment is surprisingly healthy. Discuss." If that were an examination topic, most students would tear it apart, offering a long list of complaints: from local smog to global climate change, from the felling of forests to the extinction of species. The list would largely be accurate, the concern legitimate. Yet the students who should be given the highest marks would actually be those who agreed with the statement. The surprise is how good things are, not how bad.
After all, the world's population has more than tripled during this century, and world output has risen hugely, so you would expect the earth itself to have been affected. Indeed, if people lived, consumed and produced things in the same way as they did in 1900 (or 1950, or indeed 1980), the world by now would be a pretty disgusting place: smelly, dirty, toxic and dangerous.
But they don't. The reasons why they don't, and why the environment has not been ruined, have to do with prices, technological innovation, social change and government regulation in response to popular pressure. That is why today's environmental problems in the poor countries ought, in principle, to be solvable.
Raw materials have not run out, and show no sign of doing so. Logically, one day they must: the planet is a finite place. Yet it is also very big, and man is very ingenious. What has happened is that every time a material seems to be running short, the price has risen and, in response, people have looked for new sources of supply, tried to find ways to use less of the material, or looked for a new substitute. For this reason prices for energy and for minerals have fallen in real terms during the century. The same is true for food. Prices fluctuate, in response to harvests, natural disasters and political instability; and when they rise, it takes some time before new sources of supply become available. But they always do, assisted by new farming and crop technology. The long term trend has been downwards.
It is where prices and markets do not operate properly that this benign trend begins to stumble, and the genuine problems arise. Markets cannot always keep the environment healthy. If no one owns the resource concerned, no one has an interest in conserving it or fostering it: fish is the best example of this. | 3945.txt | 2 |
[
"technological innovation can promote social stability",
"political instability will cause consumption to drop",
"new farming and crop technology can lead to overproduction",
"new sources are always becoming available"
]
| One of the reasons why the long-term trend of prices has been downwards is that ________. | "The world's environment is surprisingly healthy. Discuss." If that were an examination topic, most students would tear it apart, offering a long list of complaints: from local smog to global climate change, from the felling of forests to the extinction of species. The list would largely be accurate, the concern legitimate. Yet the students who should be given the highest marks would actually be those who agreed with the statement. The surprise is how good things are, not how bad.
After all, the world's population has more than tripled during this century, and world output has risen hugely, so you would expect the earth itself to have been affected. Indeed, if people lived, consumed and produced things in the same way as they did in 1900 (or 1950, or indeed 1980), the world by now would be a pretty disgusting place: smelly, dirty, toxic and dangerous.
But they don't. The reasons why they don't, and why the environment has not been ruined, have to do with prices, technological innovation, social change and government regulation in response to popular pressure. That is why today's environmental problems in the poor countries ought, in principle, to be solvable.
Raw materials have not run out, and show no sign of doing so. Logically, one day they must: the planet is a finite place. Yet it is also very big, and man is very ingenious. What has happened is that every time a material seems to be running short, the price has risen and, in response, people have looked for new sources of supply, tried to find ways to use less of the material, or looked for a new substitute. For this reason prices for energy and for minerals have fallen in real terms during the century. The same is true for food. Prices fluctuate, in response to harvests, natural disasters and political instability; and when they rise, it takes some time before new sources of supply become available. But they always do, assisted by new farming and crop technology. The long term trend has been downwards.
It is where prices and markets do not operate properly that this benign trend begins to stumble, and the genuine problems arise. Markets cannot always keep the environment healthy. If no one owns the resource concerned, no one has an interest in conserving it or fostering it: fish is the best example of this. | 3945.txt | 3 |
[
"no new substitutes can be found in large quantities",
"they are not owned by any particular entity",
"improper methods of fishing have mined the fishing grounds",
"water pollution is extremely serious"
]
| Fish resources are diminishing because ________. | "The world's environment is surprisingly healthy. Discuss." If that were an examination topic, most students would tear it apart, offering a long list of complaints: from local smog to global climate change, from the felling of forests to the extinction of species. The list would largely be accurate, the concern legitimate. Yet the students who should be given the highest marks would actually be those who agreed with the statement. The surprise is how good things are, not how bad.
After all, the world's population has more than tripled during this century, and world output has risen hugely, so you would expect the earth itself to have been affected. Indeed, if people lived, consumed and produced things in the same way as they did in 1900 (or 1950, or indeed 1980), the world by now would be a pretty disgusting place: smelly, dirty, toxic and dangerous.
But they don't. The reasons why they don't, and why the environment has not been ruined, have to do with prices, technological innovation, social change and government regulation in response to popular pressure. That is why today's environmental problems in the poor countries ought, in principle, to be solvable.
Raw materials have not run out, and show no sign of doing so. Logically, one day they must: the planet is a finite place. Yet it is also very big, and man is very ingenious. What has happened is that every time a material seems to be running short, the price has risen and, in response, people have looked for new sources of supply, tried to find ways to use less of the material, or looked for a new substitute. For this reason prices for energy and for minerals have fallen in real terms during the century. The same is true for food. Prices fluctuate, in response to harvests, natural disasters and political instability; and when they rise, it takes some time before new sources of supply become available. But they always do, assisted by new farming and crop technology. The long term trend has been downwards.
It is where prices and markets do not operate properly that this benign trend begins to stumble, and the genuine problems arise. Markets cannot always keep the environment healthy. If no one owns the resource concerned, no one has an interest in conserving it or fostering it: fish is the best example of this. | 3945.txt | 1 |
[
"to allow market forces to operate properly",
"to curb consumption of natural resources",
"to limit the growth of the world population",
"to avoid fluctuations in prices"
]
| The primary solution to environmental problems is ________. | "The world's environment is surprisingly healthy. Discuss." If that were an examination topic, most students would tear it apart, offering a long list of complaints: from local smog to global climate change, from the felling of forests to the extinction of species. The list would largely be accurate, the concern legitimate. Yet the students who should be given the highest marks would actually be those who agreed with the statement. The surprise is how good things are, not how bad.
After all, the world's population has more than tripled during this century, and world output has risen hugely, so you would expect the earth itself to have been affected. Indeed, if people lived, consumed and produced things in the same way as they did in 1900 (or 1950, or indeed 1980), the world by now would be a pretty disgusting place: smelly, dirty, toxic and dangerous.
But they don't. The reasons why they don't, and why the environment has not been ruined, have to do with prices, technological innovation, social change and government regulation in response to popular pressure. That is why today's environmental problems in the poor countries ought, in principle, to be solvable.
Raw materials have not run out, and show no sign of doing so. Logically, one day they must: the planet is a finite place. Yet it is also very big, and man is very ingenious. What has happened is that every time a material seems to be running short, the price has risen and, in response, people have looked for new sources of supply, tried to find ways to use less of the material, or looked for a new substitute. For this reason prices for energy and for minerals have fallen in real terms during the century. The same is true for food. Prices fluctuate, in response to harvests, natural disasters and political instability; and when they rise, it takes some time before new sources of supply become available. But they always do, assisted by new farming and crop technology. The long term trend has been downwards.
It is where prices and markets do not operate properly that this benign trend begins to stumble, and the genuine problems arise. Markets cannot always keep the environment healthy. If no one owns the resource concerned, no one has an interest in conserving it or fostering it: fish is the best example of this. | 3945.txt | 0 |
[
"emerged",
"was understood",
"spread",
"developed"
]
| The word "diffused" in the passage (paragraph 1) is closest in meaning to | There is evidence of agriculture in Africa prior to 3000 B.C. It may have developed independently, but many scholars believe that the spread of agriculture and iron throughout Africa linked it to the major centers of the Near East and Mediterranean world. The drying up of what is now the Sahara desert had pushed many peoples to the south into sub-Sahara Africa. These peoples settled at first in scattered hunting-and-gathering bands, although in some places near lakes and rivers, people who fished, with a more secure food supply, lived in larger population concentrations. Agriculture seems to have reached these people from the Near East, since the first domesticated crops were millets and sorghums whose origins are not African but west Asian. Once the idea of planting diffused, Africans began to develop their own crops, such as certain varieties of rice, and they demonstrated a continued receptiveness to new imports. The proposed areas of the domestication of African crops lie in a band that extends from Ethiopia across southern Sudan to West Africa. Subsequently, other crops, such as bananas, were introduced from Southeast Asia.
Livestock also came from outside Africa. Cattle were introduced from Asia, as probably were domestic sheep and goats. Horses were apparently introduced by the Hyksos invaders of Egypt (1780-1560 B.C.) and then spread across the Sudan to West Africa. Rock paintings in the Sahara indicate that horses and chariots were used to traverse the desert and that by 300-200 B.C., there were trade routes across the Sahara. Horses were adopted by peoples of the West African savannah, and later their powerful cavalry forces allowed them to carve out large empires. Finally, the camel was introduced around the first century A.D. This was an important innovation, because the camel's abilities to thrive in harsh desert conditions and to carry large loads cheaply made it an effective and efficient means of transportation. The camel transformed the desert from a barrier into a still difficult, but more accessible, route of trade and communication.
Iron came from West Asia, although its routes of diffusion were somewhat different than those of agriculture. Most of Africa presents a curious case in which societies moved directly from a technology of stone to iron without passing through the intermediate stage of copper or bronze metallurgy, although some early copper-working sites have been found in West Africa. Knowledge of iron making penetrated into the forest and savannahs of West Africa at roughly the same time that iron making was reaching Europe. Evidence of iron making has been found in Nigeria, Ghana, and Mali.
This technological shift caused profound changes in the complexity of African societies. Iron represented power. In West Africa the blacksmith who made tools and weapons had an important place in society, often with special religious powers and functions. Iron hoes, which made the land more productive, and iron weapons, which made the warrior more powerful, had symbolic meaning in a number of West African societies. Those who knew the secrets of making iron gained ritual and sometimes political power.
Unlike in the Americas, where metallurgy was a very late and limited development, Africans had iron from a relatively early date, developing ingenious furnaces to produce the high heat needed for production and to control the amount of air that reached the carbon and iron ore necessary for making iron. Much of Africa moved right into the Iron Age, taking the basic technology and adapting it to local conditions and resources.
The diffusion of agriculture and later of iron was accompanied by a great movement of people who may have carried these innovations. These people probably originated in eastern Nigeria. Their migration may have been set in motion by an increase in population caused by a movement of peoples fleeing the desiccation, or drying up, of the Sahara. They spoke a language, proto-Bantu ("Bantu" means "the people"), which is the parent tongue of a large number of Bantu languages still spoken throughout sub-Sahara Africa. Why and how these people spread out into central and southern Africa remains a mystery, but archaeologists believe that their iron weapons allowed them to conquer their hunting-gathering opponents, who still used stone implements. Still, the process is uncertain, and peaceful migration-or simply rapid demographic growth-may have also caused the Bantu explosion. | 1108.txt | 2 |
[
"African lakes and rivers already providedenough food for people to survive without agriculture.",
"The earliest examples of cultivatedplants discovered in Africa are native to Asia.",
"Africa's native plants are very difficultto domesticate.",
"African communities were not large enoughto support agriculture."
]
| According to paragraph 1, why do researchers doubt that agriculture developed independently in Africa? | There is evidence of agriculture in Africa prior to 3000 B.C. It may have developed independently, but many scholars believe that the spread of agriculture and iron throughout Africa linked it to the major centers of the Near East and Mediterranean world. The drying up of what is now the Sahara desert had pushed many peoples to the south into sub-Sahara Africa. These peoples settled at first in scattered hunting-and-gathering bands, although in some places near lakes and rivers, people who fished, with a more secure food supply, lived in larger population concentrations. Agriculture seems to have reached these people from the Near East, since the first domesticated crops were millets and sorghums whose origins are not African but west Asian. Once the idea of planting diffused, Africans began to develop their own crops, such as certain varieties of rice, and they demonstrated a continued receptiveness to new imports. The proposed areas of the domestication of African crops lie in a band that extends from Ethiopia across southern Sudan to West Africa. Subsequently, other crops, such as bananas, were introduced from Southeast Asia.
Livestock also came from outside Africa. Cattle were introduced from Asia, as probably were domestic sheep and goats. Horses were apparently introduced by the Hyksos invaders of Egypt (1780-1560 B.C.) and then spread across the Sudan to West Africa. Rock paintings in the Sahara indicate that horses and chariots were used to traverse the desert and that by 300-200 B.C., there were trade routes across the Sahara. Horses were adopted by peoples of the West African savannah, and later their powerful cavalry forces allowed them to carve out large empires. Finally, the camel was introduced around the first century A.D. This was an important innovation, because the camel's abilities to thrive in harsh desert conditions and to carry large loads cheaply made it an effective and efficient means of transportation. The camel transformed the desert from a barrier into a still difficult, but more accessible, route of trade and communication.
Iron came from West Asia, although its routes of diffusion were somewhat different than those of agriculture. Most of Africa presents a curious case in which societies moved directly from a technology of stone to iron without passing through the intermediate stage of copper or bronze metallurgy, although some early copper-working sites have been found in West Africa. Knowledge of iron making penetrated into the forest and savannahs of West Africa at roughly the same time that iron making was reaching Europe. Evidence of iron making has been found in Nigeria, Ghana, and Mali.
This technological shift caused profound changes in the complexity of African societies. Iron represented power. In West Africa the blacksmith who made tools and weapons had an important place in society, often with special religious powers and functions. Iron hoes, which made the land more productive, and iron weapons, which made the warrior more powerful, had symbolic meaning in a number of West African societies. Those who knew the secrets of making iron gained ritual and sometimes political power.
Unlike in the Americas, where metallurgy was a very late and limited development, Africans had iron from a relatively early date, developing ingenious furnaces to produce the high heat needed for production and to control the amount of air that reached the carbon and iron ore necessary for making iron. Much of Africa moved right into the Iron Age, taking the basic technology and adapting it to local conditions and resources.
The diffusion of agriculture and later of iron was accompanied by a great movement of people who may have carried these innovations. These people probably originated in eastern Nigeria. Their migration may have been set in motion by an increase in population caused by a movement of peoples fleeing the desiccation, or drying up, of the Sahara. They spoke a language, proto-Bantu ("Bantu" means "the people"), which is the parent tongue of a large number of Bantu languages still spoken throughout sub-Sahara Africa. Why and how these people spread out into central and southern Africa remains a mystery, but archaeologists believe that their iron weapons allowed them to conquer their hunting-gathering opponents, who still used stone implements. Still, the process is uncertain, and peaceful migration-or simply rapid demographic growth-may have also caused the Bantu explosion. | 1108.txt | 1 |
[
"The climate was becoming milder, allowingfor a greater variety of crops to be grown.",
"Although periods of drying forced peoplesouth, they returned once their food supply was secure.",
"Population growth along rivers and lakeswas dramatically decreasing the availability of fish.",
"A region that had once supported manypeople was becoming a desert where few could survive."
]
| In paragraph 1, what does the author imply about changes in the African environment during this time period? | There is evidence of agriculture in Africa prior to 3000 B.C. It may have developed independently, but many scholars believe that the spread of agriculture and iron throughout Africa linked it to the major centers of the Near East and Mediterranean world. The drying up of what is now the Sahara desert had pushed many peoples to the south into sub-Sahara Africa. These peoples settled at first in scattered hunting-and-gathering bands, although in some places near lakes and rivers, people who fished, with a more secure food supply, lived in larger population concentrations. Agriculture seems to have reached these people from the Near East, since the first domesticated crops were millets and sorghums whose origins are not African but west Asian. Once the idea of planting diffused, Africans began to develop their own crops, such as certain varieties of rice, and they demonstrated a continued receptiveness to new imports. The proposed areas of the domestication of African crops lie in a band that extends from Ethiopia across southern Sudan to West Africa. Subsequently, other crops, such as bananas, were introduced from Southeast Asia.
Livestock also came from outside Africa. Cattle were introduced from Asia, as probably were domestic sheep and goats. Horses were apparently introduced by the Hyksos invaders of Egypt (1780-1560 B.C.) and then spread across the Sudan to West Africa. Rock paintings in the Sahara indicate that horses and chariots were used to traverse the desert and that by 300-200 B.C., there were trade routes across the Sahara. Horses were adopted by peoples of the West African savannah, and later their powerful cavalry forces allowed them to carve out large empires. Finally, the camel was introduced around the first century A.D. This was an important innovation, because the camel's abilities to thrive in harsh desert conditions and to carry large loads cheaply made it an effective and efficient means of transportation. The camel transformed the desert from a barrier into a still difficult, but more accessible, route of trade and communication.
Iron came from West Asia, although its routes of diffusion were somewhat different than those of agriculture. Most of Africa presents a curious case in which societies moved directly from a technology of stone to iron without passing through the intermediate stage of copper or bronze metallurgy, although some early copper-working sites have been found in West Africa. Knowledge of iron making penetrated into the forest and savannahs of West Africa at roughly the same time that iron making was reaching Europe. Evidence of iron making has been found in Nigeria, Ghana, and Mali.
This technological shift caused profound changes in the complexity of African societies. Iron represented power. In West Africa the blacksmith who made tools and weapons had an important place in society, often with special religious powers and functions. Iron hoes, which made the land more productive, and iron weapons, which made the warrior more powerful, had symbolic meaning in a number of West African societies. Those who knew the secrets of making iron gained ritual and sometimes political power.
Unlike in the Americas, where metallurgy was a very late and limited development, Africans had iron from a relatively early date, developing ingenious furnaces to produce the high heat needed for production and to control the amount of air that reached the carbon and iron ore necessary for making iron. Much of Africa moved right into the Iron Age, taking the basic technology and adapting it to local conditions and resources.
The diffusion of agriculture and later of iron was accompanied by a great movement of people who may have carried these innovations. These people probably originated in eastern Nigeria. Their migration may have been set in motion by an increase in population caused by a movement of peoples fleeing the desiccation, or drying up, of the Sahara. They spoke a language, proto-Bantu ("Bantu" means "the people"), which is the parent tongue of a large number of Bantu languages still spoken throughout sub-Sahara Africa. Why and how these people spread out into central and southern Africa remains a mystery, but archaeologists believe that their iron weapons allowed them to conquer their hunting-gathering opponents, who still used stone implements. Still, the process is uncertain, and peaceful migration-or simply rapid demographic growth-may have also caused the Bantu explosion. | 1108.txt | 3 |
[
"were the first domesticated animal to be introduced to Africa",
"allowed the people of the West African savannahs to carve out large empires",
"helped African peoples defend themselves against Egyptian invaders",
"made it cheaper and easier to cross the Sahara"
]
| According to paragraph 2, camels were important because they | There is evidence of agriculture in Africa prior to 3000 B.C. It may have developed independently, but many scholars believe that the spread of agriculture and iron throughout Africa linked it to the major centers of the Near East and Mediterranean world. The drying up of what is now the Sahara desert had pushed many peoples to the south into sub-Sahara Africa. These peoples settled at first in scattered hunting-and-gathering bands, although in some places near lakes and rivers, people who fished, with a more secure food supply, lived in larger population concentrations. Agriculture seems to have reached these people from the Near East, since the first domesticated crops were millets and sorghums whose origins are not African but west Asian. Once the idea of planting diffused, Africans began to develop their own crops, such as certain varieties of rice, and they demonstrated a continued receptiveness to new imports. The proposed areas of the domestication of African crops lie in a band that extends from Ethiopia across southern Sudan to West Africa. Subsequently, other crops, such as bananas, were introduced from Southeast Asia.
Livestock also came from outside Africa. Cattle were introduced from Asia, as probably were domestic sheep and goats. Horses were apparently introduced by the Hyksos invaders of Egypt (1780-1560 B.C.) and then spread across the Sudan to West Africa. Rock paintings in the Sahara indicate that horses and chariots were used to traverse the desert and that by 300-200 B.C., there were trade routes across the Sahara. Horses were adopted by peoples of the West African savannah, and later their powerful cavalry forces allowed them to carve out large empires. Finally, the camel was introduced around the first century A.D. This was an important innovation, because the camel's abilities to thrive in harsh desert conditions and to carry large loads cheaply made it an effective and efficient means of transportation. The camel transformed the desert from a barrier into a still difficult, but more accessible, route of trade and communication.
Iron came from West Asia, although its routes of diffusion were somewhat different than those of agriculture. Most of Africa presents a curious case in which societies moved directly from a technology of stone to iron without passing through the intermediate stage of copper or bronze metallurgy, although some early copper-working sites have been found in West Africa. Knowledge of iron making penetrated into the forest and savannahs of West Africa at roughly the same time that iron making was reaching Europe. Evidence of iron making has been found in Nigeria, Ghana, and Mali.
This technological shift caused profound changes in the complexity of African societies. Iron represented power. In West Africa the blacksmith who made tools and weapons had an important place in society, often with special religious powers and functions. Iron hoes, which made the land more productive, and iron weapons, which made the warrior more powerful, had symbolic meaning in a number of West African societies. Those who knew the secrets of making iron gained ritual and sometimes political power.
Unlike in the Americas, where metallurgy was a very late and limited development, Africans had iron from a relatively early date, developing ingenious furnaces to produce the high heat needed for production and to control the amount of air that reached the carbon and iron ore necessary for making iron. Much of Africa moved right into the Iron Age, taking the basic technology and adapting it to local conditions and resources.
The diffusion of agriculture and later of iron was accompanied by a great movement of people who may have carried these innovations. These people probably originated in eastern Nigeria. Their migration may have been set in motion by an increase in population caused by a movement of peoples fleeing the desiccation, or drying up, of the Sahara. They spoke a language, proto-Bantu ("Bantu" means "the people"), which is the parent tongue of a language of a large number of Bantu languages still spoken throughout sub-Sahara Africa. Why and how these people spread out into central and southern Africa remains a mystery, but archaeologists believe that their iron weapons allowed them to conquer their hunting-gathering opponents, who still used stone implements. Still, the process is uncertain, and peaceful migration-or simply rapid demographic growth-may have also caused the Bantu explosion. | 1108.txt | 3 |
[
"Horses and chariots",
"Sheep and goats",
"Hyksos invaders from Egypt",
"Camels and cattle"
]
| According to paragraph 2, which of the following were subjects of rock paintings in the Sahara? | There is evidence of agriculture in Africa prior to 3000 B.C. It may have developed independently, but many scholars believe that the spread of agriculture and iron throughout Africa linked it to the major centers of the Near East and Mediterranean world. The drying up of what is now the Sahara desert had pushed many peoples to the south into sub-Sahara Africa. These peoples settled at first in scattered hunting-and-gathering bands, although in some places near lakes and rivers, people who fished, with a more secure food supply, lived in larger population concentrations. Agriculture seems to have reached these people from the Near East, since the first domesticated crops were millets and sorghums whose origins are not African but west Asian. Once the idea of planting diffused, Africans began to develop their own crops, such as certain varieties of rice, and they demonstrated a continued receptiveness to new imports. The proposed areas of the domestication of African crops lie in a band that extends from Ethiopia across southern Sudan to West Africa. Subsequently, other crops, such as bananas, were introduced from Southeast Asia.
Livestock also came from outside Africa. Cattle were introduced from Asia, as probably were domestic sheep and goats. Horses were apparently introduced by the Hyksos invaders of Egypt (1780-1560 B.C.) and then spread across the Sudan to West Africa. Rock paintings in the Sahara indicate that horses and chariots were used to traverse the desert and that by 300-200 B.C., there were trade routes across the Sahara. Horses were adopted by peoples of the West African savannah, and later their powerful cavalry forces allowed them to carve out large empires. Finally, the camel was introduced around the first century A.D. This was an important innovation, because the camel's abilities to thrive in harsh desert conditions and to carry large loads cheaply made it an effective and efficient means of transportation. The camel transformed the desert from a barrier into a still difficult, but more accessible, route of trade and communication.
Iron came from West Asia, although its routes of diffusion were somewhat different than those of agriculture. Most of Africa presents a curious case in which societies moved directly from a technology of stone to iron without passing through the intermediate stage of copper or bronze metallurgy, although some early copper-working sites have been found in West Africa. Knowledge of iron making penetrated into the forest and savannahs of West Africa at roughly the same time that iron making was reaching Europe. Evidence of iron making has been found in Nigeria, Ghana, and Mali.
This technological shift caused profound changes in the complexity of African societies. Iron represented power. In West Africa the blacksmith who made tools and weapons had an important place in society, often with special religious powers and functions. Iron hoes, which made the land more productive, and iron weapons, which made the warrior more powerful, had symbolic meaning in a number of West African societies. Those who knew the secrets of making iron gained ritual and sometimes political power.
Unlike in the Americas, where metallurgy was a very late and limited development, Africans had iron from a relatively early date, developing ingenious furnaces to produce the high heat needed for production and to control the amount of air that reached the carbon and iron ore necessary for making iron. Much of Africa moved right into the Iron Age, taking the basic technology and adapting it to local conditions and resources.
The diffusion of agriculture and later of iron was accompanied by a great movement of people who may have carried these innovations. These people probably originated in eastern Nigeria. Their migration may have been set in motion by an increase in population caused by a movement of peoples fleeing the desiccation, or drying up, of the Sahara. They spoke a language, proto-Bantu ("Bantu" means "the people"), which is the parent tongue of a language of a large number of Bantu languages still spoken throughout sub-Sahara Africa. Why and how these people spread out into central and southern Africa remains a mystery, but archaeologists believe that their iron weapons allowed them to conquer their hunting-gathering opponents, who still used stone implements. Still, the process is uncertain, and peaceful migration-or simply rapid demographic growth-may have also caused the Bantu explosion. | 1108.txt | 0 |
[
"It contrasts the development of iron technology in West Asia and West Africa.",
"It discusses a non-agricultural contribution to Africa from Asia.",
"It introduces evidence that a knowledge of copper working reached Africa and Europe at the same time.",
"It compares the rates at which iron technology developed in different parts of Africa."
]
| What function does paragraph 3 serve in the organization of the passage as a whole? | There is evidence of agriculture in Africa prior to 3000 B.C. It may have developed independently, but many scholars believe that the spread of agriculture and iron throughout Africa linked it to the major centers of the Near East and Mediterranean world. The drying up of what is now the Sahara desert had pushed many peoples to the south into sub-Sahara Africa. These peoples settled at first in scattered hunting-and-gathering bands, although in some places near lakes and rivers, people who fished, with a more secure food supply, lived in larger population concentrations. Agriculture seems to have reached these people from the Near East, since the first domesticated crops were millets and sorghums whose origins are not African but west Asian. Once the idea of planting diffused, Africans began to develop their own crops, such as certain varieties of rice, and they demonstrated a continued receptiveness to new imports. The proposed areas of the domestication of African crops lie in a band that extends from Ethiopia across southern Sudan to West Africa. Subsequently, other crops, such as bananas, were introduced from Southeast Asia.
Livestock also came from outside Africa. Cattle were introduced from Asia, as probably were domestic sheep and goats. Horses were apparently introduced by the Hyksos invaders of Egypt (1780-1560 B.C.) and then spread across the Sudan to West Africa. Rock paintings in the Sahara indicate that horses and chariots were used to traverse the desert and that by 300-200 B.C., there were trade routes across the Sahara. Horses were adopted by peoples of the West African savannah, and later their powerful cavalry forces allowed them to carve out large empires. Finally, the camel was introduced around the first century A.D. This was an important innovation, because the camel's abilities to thrive in harsh desert conditions and to carry large loads cheaply made it an effective and efficient means of transportation. The camel transformed the desert from a barrier into a still difficult, but more accessible, route of trade and communication.
Iron came from West Asia, although its routes of diffusion were somewhat different than those of agriculture. Most of Africa presents a curious case in which societies moved directly from a technology of stone to iron without passing through the intermediate stage of copper or bronze metallurgy, although some early copper-working sites have been found in West Africa. Knowledge of iron making penetrated into the forest and savannahs of West Africa at roughly the same time that iron making was reaching Europe. Evidence of iron making has been found in Nigeria, Ghana, and Mali.
This technological shift caused profound changes in the complexity of African societies. Iron represented power. In West Africa the blacksmith who made tools and weapons had an important place in society, often with special religious powers and functions. Iron hoes, which made the land more productive, and iron weapons, which made the warrior more powerful, had symbolic meaning in a number of West African societies. Those who knew the secrets of making iron gained ritual and sometimes political power.
Unlike in the Americas, where metallurgy was a very late and limited development, Africans had iron from a relatively early date, developing ingenious furnaces to produce the high heat needed for production and to control the amount of air that reached the carbon and iron ore necessary for making iron. Much of Africa moved right into the Iron Age, taking the basic technology and adapting it to local conditions and resources.
The diffusion of agriculture and later of iron was accompanied by a great movement of people who may have carried these innovations. These people probably originated in eastern Nigeria. Their migration may have been set in motion by an increase in population caused by a movement of peoples fleeing the desiccation, or drying up, of the Sahara. They spoke a language, proto-Bantu ("Bantu" means "the people"), which is the parent tongue of a language of a large number of Bantu languages still spoken throughout sub-Sahara Africa. Why and how these people spread out into central and southern Africa remains a mystery, but archaeologists believe that their iron weapons allowed them to conquer their hunting-gathering opponents, who still used stone implements. Still, the process is uncertain, and peaceful migration-or simply rapid demographic growth-may have also caused the Bantu explosion. | 1108.txt | 1 |
[
"fascinating",
"far-reaching",
"necessary",
"temporary"
]
| The word "profound" in the passage (paragraph 4) is closest in meaning to | There is evidence of agriculture in Africa prior to 3000 B.C. It may have developed independently, but many scholars believe that the spread of agriculture and iron throughout Africa linked it to the major centers of the Near East and Mediterranean world. The drying up of what is now the Sahara desert had pushed many peoples to the south into sub-Sahara Africa. These peoples settled at first in scattered hunting-and-gathering bands, although in some places near lakes and rivers, people who fished, with a more secure food supply, lived in larger population concentrations. Agriculture seems to have reached these people from the Near East, since the first domesticated crops were millets and sorghums whose origins are not African but west Asian. Once the idea of planting diffused, Africans began to develop their own crops, such as certain varieties of rice, and they demonstrated a continued receptiveness to new imports. The proposed areas of the domestication of African crops lie in a band that extends from Ethiopia across southern Sudan to West Africa. Subsequently, other crops, such as bananas, were introduced from Southeast Asia.
Livestock also came from outside Africa. Cattle were introduced from Asia, as probably were domestic sheep and goats. Horses were apparently introduced by the Hyksos invaders of Egypt (1780-1560 B.C.) and then spread across the Sudan to West Africa. Rock paintings in the Sahara indicate that horses and chariots were used to traverse the desert and that by 300-200 B.C., there were trade routes across the Sahara. Horses were adopted by peoples of the West African savannah, and later their powerful cavalry forces allowed them to carve out large empires. Finally, the camel was introduced around the first century A.D. This was an important innovation, because the camel's abilities to thrive in harsh desert conditions and to carry large loads cheaply made it an effective and efficient means of transportation. The camel transformed the desert from a barrier into a still difficult, but more accessible, route of trade and communication.
Iron came from West Asia, although its routes of diffusion were somewhat different than those of agriculture. Most of Africa presents a curious case in which societies moved directly from a technology of stone to iron without passing through the intermediate stage of copper or bronze metallurgy, although some early copper-working sites have been found in West Africa. Knowledge of iron making penetrated into the forest and savannahs of West Africa at roughly the same time that iron making was reaching Europe. Evidence of iron making has been found in Nigeria, Ghana, and Mali.
This technological shift caused profound changes in the complexity of African societies. Iron represented power. In West Africa the blacksmith who made tools and weapons had an important place in society, often with special religious powers and functions. Iron hoes, which made the land more productive, and iron weapons, which made the warrior more powerful, had symbolic meaning in a number of West African societies. Those who knew the secrets of making iron gained ritual and sometimes political power.
Unlike in the Americas, where metallurgy was a very late and limited development, Africans had iron from a relatively early date, developing ingenious furnaces to produce the high heat needed for production and to control the amount of air that reached the carbon and iron ore necessary for making iron. Much of Africa moved right into the Iron Age, taking the basic technology and adapting it to local conditions and resources.
The diffusion of agriculture and later of iron was accompanied by a great movement of people who may have carried these innovations. These people probably originated in eastern Nigeria. Their migration may have been set in motion by an increase in population caused by a movement of peoples fleeing the desiccation, or drying up, of the Sahara. They spoke a language, proto-Bantu ("Bantu" means "the people"), which is the parent tongue of a language of a large number of Bantu languages still spoken throughout sub-Sahara Africa. Why and how these people spread out into central and southern Africa remains a mystery, but archaeologists believe that their iron weapons allowed them to conquer their hunting-gathering opponents, who still used stone implements. Still, the process is uncertain, and peaceful migration-or simply rapid demographic growth-may have also caused the Bantu explosion. | 1108.txt | 1 |
[
"military",
"physical",
"ceremonial",
"permanent"
]
| The word "ritual" in the passage (paragraph 4) is closest in meaning to | There is evidence of agriculture in Africa prior to 3000 B.C. It may have developed independently, but many scholars believe that the spread of agriculture and iron throughout Africa linked it to the major centers of the Near East and Mediterranean world. The drying up of what is now the Sahara desert had pushed many peoples to the south into sub-Sahara Africa. These peoples settled at first in scattered hunting-and-gathering bands, although in some places near lakes and rivers, people who fished, with a more secure food supply, lived in larger population concentrations. Agriculture seems to have reached these people from the Near East, since the first domesticated crops were millets and sorghums whose origins are not African but west Asian. Once the idea of planting diffused, Africans began to develop their own crops, such as certain varieties of rice, and they demonstrated a continued receptiveness to new imports. The proposed areas of the domestication of African crops lie in a band that extends from Ethiopia across southern Sudan to West Africa. Subsequently, other crops, such as bananas, were introduced from Southeast Asia.
Livestock also came from outside Africa. Cattle were introduced from Asia, as probably were domestic sheep and goats. Horses were apparently introduced by the Hyksos invaders of Egypt (1780-1560 B.C.) and then spread across the Sudan to West Africa. Rock paintings in the Sahara indicate that horses and chariots were used to traverse the desert and that by 300-200 B.C., there were trade routes across the Sahara. Horses were adopted by peoples of the West African savannah, and later their powerful cavalry forces allowed them to carve out large empires. Finally, the camel was introduced around the first century A.D. This was an important innovation, because the camel's abilities to thrive in harsh desert conditions and to carry large loads cheaply made it an effective and efficient means of transportation. The camel transformed the desert from a barrier into a still difficult, but more accessible, route of trade and communication.
Iron came from West Asia, although its routes of diffusion were somewhat different than those of agriculture. Most of Africa presents a curious case in which societies moved directly from a technology of stone to iron without passing through the intermediate stage of copper or bronze metallurgy, although some early copper-working sites have been found in West Africa. Knowledge of iron making penetrated into the forest and savannahs of West Africa at roughly the same time that iron making was reaching Europe. Evidence of iron making has been found in Nigeria, Ghana, and Mali.
This technological shift caused profound changes in the complexity of African societies. Iron represented power. In West Africa the blacksmith who made tools and weapons had an important place in society, often with special religious powers and functions. Iron hoes, which made the land more productive, and iron weapons, which made the warrior more powerful, had symbolic meaning in a number of West African societies. Those who knew the secrets of making iron gained ritual and sometimes political power.
Unlike in the Americas, where metallurgy was a very late and limited development, Africans had iron from a relatively early date, developing ingenious furnaces to produce the high heat needed for production and to control the amount of air that reached the carbon and iron ore necessary for making iron. Much of Africa moved right into the Iron Age, taking the basic technology and adapting it to local conditions and resources.
The diffusion of agriculture and later of iron was accompanied by a great movement of people who may have carried these innovations. These people probably originated in eastern Nigeria. Their migration may have been set in motion by an increase in population caused by a movement of peoples fleeing the desiccation, or drying up, of the Sahara. They spoke a language, proto-Bantu ("Bantu" means "the people"), which is the parent tongue of a language of a large number of Bantu languages still spoken throughout sub-Sahara Africa. Why and how these people spread out into central and southern Africa remains a mystery, but archaeologists believe that their iron weapons allowed them to conquer their hunting-gathering opponents, who still used stone implements. Still, the process is uncertain, and peaceful migration-or simply rapid demographic growth-may have also caused the Bantu explosion. | 1108.txt | 2 |
[
"Access to metal tools and weapons created greater social equality.",
"Metal weapons increased the power of warriors.",
"Iron tools helped increase the food supply.",
"Technical knowledge gave religious power to its holders."
]
| According to paragraph 4, all of the following were social effects of the new metal technology in Africa EXCEPT: | There is evidence of agriculture in Africa prior to 3000 B.C. It may have developed independently, but many scholars believe that the spread of agriculture and iron throughout Africa linked it to the major centers of the Near East and Mediterranean world. The drying up of what is now the Sahara desert had pushed many peoples to the south into sub-Sahara Africa. These peoples settled at first in scattered hunting-and-gathering bands, although in some places near lakes and rivers, people who fished, with a more secure food supply, lived in larger population concentrations. Agriculture seems to have reached these people from the Near East, since the first domesticated crops were millets and sorghums whose origins are not African but west Asian. Once the idea of planting diffused, Africans began to develop their own crops, such as certain varieties of rice, and they demonstrated a continued receptiveness to new imports. The proposed areas of the domestication of African crops lie in a band that extends from Ethiopia across southern Sudan to West Africa. Subsequently, other crops, such as bananas, were introduced from Southeast Asia.
Livestock also came from outside Africa. Cattle were introduced from Asia, as probably were domestic sheep and goats. Horses were apparently introduced by the Hyksos invaders of Egypt (1780-1560 B.C.) and then spread across the Sudan to West Africa. Rock paintings in the Sahara indicate that horses and chariots were used to traverse the desert and that by 300-200 B.C., there were trade routes across the Sahara. Horses were adopted by peoples of the West African savannah, and later their powerful cavalry forces allowed them to carve out large empires. Finally, the camel was introduced around the first century A.D. This was an important innovation, because the camel's abilities to thrive in harsh desert conditions and to carry large loads cheaply made it an effective and efficient means of transportation. The camel transformed the desert from a barrier into a still difficult, but more accessible, route of trade and communication.
Iron came from West Asia, although its routes of diffusion were somewhat different than those of agriculture. Most of Africa presents a curious case in which societies moved directly from a technology of stone to iron without passing through the intermediate stage of copper or bronze metallurgy, although some early copper-working sites have been found in West Africa. Knowledge of iron making penetrated into the forest and savannahs of West Africa at roughly the same time that iron making was reaching Europe. Evidence of iron making has been found in Nigeria, Ghana, and Mali.
This technological shift caused profound changes in the complexity of African societies. Iron represented power. In West Africa the blacksmith who made tools and weapons had an important place in society, often with special religious powers and functions. Iron hoes, which made the land more productive, and iron weapons, which made the warrior more powerful, had symbolic meaning in a number of West African societies. Those who knew the secrets of making iron gained ritual and sometimes political power.
Unlike in the Americas, where metallurgy was a very late and limited development, Africans had iron from a relatively early date, developing ingenious furnaces to produce the high heat needed for production and to control the amount of air that reached the carbon and iron ore necessary for making iron. Much of Africa moved right into the Iron Age, taking the basic technology and adapting it to local conditions and resources.
The diffusion of agriculture and later of iron was accompanied by a great movement of people who may have carried these innovations. These people probably originated in eastern Nigeria. Their migration may have been set in motion by an increase in population caused by a movement of peoples fleeing the desiccation, or drying up, of the Sahara. They spoke a language, proto-Bantu ("Bantu" means "the people"), which is the parent tongue of a language of a large number of Bantu languages still spoken throughout sub-Sahara Africa. Why and how these people spread out into central and southern Africa remains a mystery, but archaeologists believe that their iron weapons allowed them to conquer their hunting-gathering opponents, who still used stone implements. Still, the process is uncertain, and peaceful migration-or simply rapid demographic growth-may have also caused the Bantu explosion. | 1108.txt | 0 |
[
"afraid of",
"displaced by",
"running away from",
"responding to"
]
| The word "fleeing" in the passage (paragraph 6) is closest in meaning to | There is evidence of agriculture in Africa prior to 3000 B.C. It may have developed independently, but many scholars believe that the spread of agriculture and iron throughout Africa linked it to the major centers of the Near East and Mediterranean world. The drying up of what is now the Sahara desert had pushed many peoples to the south into sub-Sahara Africa. These peoples settled at first in scattered hunting-and-gathering bands, although in some places near lakes and rivers, people who fished, with a more secure food supply, lived in larger population concentrations. Agriculture seems to have reached these people from the Near East, since the first domesticated crops were millets and sorghums whose origins are not African but west Asian. Once the idea of planting diffused, Africans began to develop their own crops, such as certain varieties of rice, and they demonstrated a continued receptiveness to new imports. The proposed areas of the domestication of African crops lie in a band that extends from Ethiopia across southern Sudan to West Africa. Subsequently, other crops, such as bananas, were introduced from Southeast Asia.
Livestock also came from outside Africa. Cattle were introduced from Asia, as probably were domestic sheep and goats. Horses were apparently introduced by the Hyksos invaders of Egypt (1780-1560 B.C.) and then spread across the Sudan to West Africa. Rock paintings in the Sahara indicate that horses and chariots were used to traverse the desert and that by 300-200 B.C., there were trade routes across the Sahara. Horses were adopted by peoples of the West African savannah, and later their powerful cavalry forces allowed them to carve out large empires. Finally, the camel was introduced around the first century A.D. This was an important innovation, because the camel's abilities to thrive in harsh desert conditions and to carry large loads cheaply made it an effective and efficient means of transportation. The camel transformed the desert from a barrier into a still difficult, but more accessible, route of trade and communication.
Iron came from West Asia, although its routes of diffusion were somewhat different than those of agriculture. Most of Africa presents a curious case in which societies moved directly from a technology of stone to iron without passing through the intermediate stage of copper or bronze metallurgy, although some early copper-working sites have been found in West Africa. Knowledge of iron making penetrated into the forest and savannahs of West Africa at roughly the same time that iron making was reaching Europe. Evidence of iron making has been found in Nigeria, Ghana, and Mali.
This technological shift caused profound changes in the complexity of African societies. Iron represented power. In West Africa the blacksmith who made tools and weapons had an important place in society, often with special religious powers and functions. Iron hoes, which made the land more productive, and iron weapons, which made the warrior more powerful, had symbolic meaning in a number of West African societies. Those who knew the secrets of making iron gained ritual and sometimes political power.
Unlike in the Americas, where metallurgy was a very late and limited development, Africans had iron from a relatively early date, developing ingenious furnaces to produce the high heat needed for production and to control the amount of air that reached the carbon and iron ore necessary for making iron. Much of Africa moved right into the Iron Age, taking the basic technology and adapting it to local conditions and resources.
The diffusion of agriculture and later of iron was accompanied by a great movement of people who may have carried these innovations. These people probably originated in eastern Nigeria. Their migration may have been set in motion by an increase in population caused by a movement of peoples fleeing the desiccation, or drying up, of the Sahara. They spoke a language, proto-Bantu ("Bantu" means "the people"), which is the parent tongue of a language of a large number of Bantu languages still spoken throughout sub-Sahara Africa. Why and how these people spread out into central and southern Africa remains a mystery, but archaeologists believe that their iron weapons allowed them to conquer their hunting-gathering opponents, who still used stone implements. Still, the process is uncertain, and peaceful migration-or simply rapid demographic growth-may have also caused the Bantu explosion. | 1108.txt | 2 |
[
"superior weapons",
"better hunting skills",
"peaceful migration",
"increased population"
]
| Paragraph 6 mentions all of the following as possible causes of the "Bantu explosion" EXCEPT | There is evidence of agriculture in Africa prior to 3000 B.C. It may have developed independently, but many scholars believe that the spread of agriculture and iron throughout Africa linked it to the major centers of the Near East and Mediterranean world. The drying up of what is now the Sahara desert had pushed many peoples to the south into sub-Sahara Africa. These peoples settled at first in scattered hunting-and-gathering bands, although in some places near lakes and rivers, people who fished, with a more secure food supply, lived in larger population concentrations. Agriculture seems to have reached these people from the Near East, since the first domesticated crops were millets and sorghums whose origins are not African but west Asian. Once the idea of planting diffused, Africans began to develop their own crops, such as certain varieties of rice, and they demonstrated a continued receptiveness to new imports. The proposed areas of the domestication of African crops lie in a band that extends from Ethiopia across southern Sudan to West Africa. Subsequently, other crops, such as bananas, were introduced from Southeast Asia.
Livestock also came from outside Africa. Cattle were introduced from Asia, as probably were domestic sheep and goats. Horses were apparently introduced by the Hyksos invaders of Egypt (1780-1560 B.C.) and then spread across the Sudan to West Africa. Rock paintings in the Sahara indicate that horses and chariots were used to traverse the desert and that by 300-200 B.C., there were trade routes across the Sahara. Horses were adopted by peoples of the West African savannah, and later their powerful cavalry forces allowed them to carve out large empires. Finally, the camel was introduced around the first century A.D. This was an important innovation, because the camel's abilities to thrive in harsh desert conditions and to carry large loads cheaply made it an effective and efficient means of transportation. The camel transformed the desert from a barrier into a still difficult, but more accessible, route of trade and communication.
Iron came from West Asia, although its routes of diffusion were somewhat different than those of agriculture. Most of Africa presents a curious case in which societies moved directly from a technology of stone to iron without passing through the intermediate stage of copper or bronze metallurgy, although some early copper-working sites have been found in West Africa. Knowledge of iron making penetrated into the forest and savannahs of West Africa at roughly the same time that iron making was reaching Europe. Evidence of iron making has been found in Nigeria, Ghana, and Mali.
This technological shift caused profound changes in the complexity of African societies. Iron represented power. In West Africa the blacksmith who made tools and weapons had an important place in society, often with special religious powers and functions. Iron hoes, which made the land more productive, and iron weapons, which made the warrior more powerful, had symbolic meaning in a number of West African societies. Those who knew the secrets of making iron gained ritual and sometimes political power.
Unlike in the Americas, where metallurgy was a very late and limited development, Africans had iron from a relatively early date, developing ingenious furnaces to produce the high heat needed for production and to control the amount of air that reached the carbon and iron ore necessary for making iron. Much of Africa moved right into the Iron Age, taking the basic technology and adapting it to local conditions and resources.
The diffusion of agriculture and later of iron was accompanied by a great movement of people who may have carried these innovations. These people probably originated in eastern Nigeria. Their migration may have been set in motion by an increase in population caused by a movement of peoples fleeing the desiccation, or drying up, of the Sahara. They spoke a language, proto-Bantu ("Bantu" means "the people"), which is the parent tongue of a language of a large number of Bantu languages still spoken throughout sub-Sahara Africa. Why and how these people spread out into central and southern Africa remains a mystery, but archaeologists believe that their iron weapons allowed them to conquer their hunting-gathering opponents, who still used stone implements. Still, the process is uncertain, and peaceful migration-or simply rapid demographic growth-may have also caused the Bantu explosion. | 1108.txt | 1 |
[
"an ideal egg donor",
"not necessarily an intelligent person",
"more influenced by her parents than by anything else",
"more likely to carry smart-kid genes"
]
| In the author's eyes, a female student from an Ivy League college is _ . | Plowing through the New York Times on a recent Sunday, I read in the Metro Section that infertile couples in the market for smart-kid genes regularly place advertisements in the newspapers of their own Ivy League alma maters offering female undergraduates $7,500 for a donated egg. Before I could get that news comfortably digested, I came across an article in the Magazine section describing SAT prep courses for which parents spend thousands in the hope of raising their child's test scores enough to make admission to an Ivy League college possible. So how can people who have found a potential egg donor at an Ivy League college tell whether the donor carries genuine smart-kid genes or just pushy-parents genes?
The donor herself may not even be aware that such a distinction exists. After years of expensive private schooling and math tutors and tennis camps and SAT prep courses and letters of recommendation from important family friends, she's been told that, unlike beneficiaries of affirmative action, she got into an Ivy League college on pure merit.
Since it is probably safe to assume that people intent on securing high-priced Ivy League eggs are carrying some pushy-parents genes themselves, their joining forces with a donor who got into an Ivy League college by dint of her family's willingness to fork over 10 grand to an SAT prep course could result in a child with somewhere between a dose and a half and 2 1/2 doses of pushy-parents genes. Apparently the egg seekers aren't troubled by the prospect of having their grandchildren raised by this sort of person.
If you have any doubts about whether the dosages I cite are based on a thorough grounding in genetics and statistics and advanced microbiology, rest assured that I attended an Ivy League college myself. That was in the days, I'll admit, when any number of people were admitted to such institutions without having shown any evidence of carrying smart-kid genes even in trace elements. Somehow, most of these dimmer bulbs managed to graduate--every class needs a lower third in order to have an upper two-thirds--and somehow most of them are now millionaires on Wall Street.
One element many of them had going for them in the admissions process was that they were identified as "legacies"--the offspring of alumni. In Ivy League colleges, alumni children are even now admitted at twice the rate of other applicants. For that reason, egg seekers may not actually need genuine smart-kid genes for their children: after all, an applicant whose mother and father and egg donor were all alumni could be considered a triple legacy.
But how about the college-admission prospects of the grandchildren? As methods are perfected of enhancing a college application through increasingly expensive services--one young man mentioned in the magazine article had $25,000 worth of SAT preparation--it might become more important to have a parent who's a Wall Street millionaire than to have smart-kid genes. Maybe it would be prudent to add a sentence to those ads in college papers: "Preference given to respondents in the lower third of the class." | 1072.txt | 1 |
[
"her own merits",
"the affirmative action",
"her smart-kid genes",
"her parents' efforts"
]
| According to the author, what may chiefly be the reason for the donor's admission in an Ivy League college? | Plowing through the New York Times on a recent Sunday, I read in the Metro Section that infertile couples in the market for smart-kid genes regularly place advertisements in the newspapers of their own Ivy League alma maters offering female undergraduates $7,500 for a donated egg. Before I could get that news comfortably digested, I came across an article in the Magazine section describing SAT prep courses for which parents spend thousands in the hope of raising their child's test scores enough to make admission to an Ivy League college possible. So how can people who have found a potential egg donor at an Ivy League college tell whether the donor carries genuine smart-kid genes or just pushy-parents genes?
The donor herself may not even be aware that such a distinction exists. After years of expensive private schooling and math tutors and tennis camps and SAT prep courses and letters of recommendation from important family friends, she's been told that, unlike beneficiaries of affirmative action, she got into an Ivy League college on pure merit.
Since it is probably safe to assume that people intent on securing high-priced Ivy League eggs are carrying some pushy-parents genes themselves, their joining forces with a donor who got into an Ivy League college by dint of her family's willingness to fork over 10 grand to an SAT prep course could result in a child with somewhere between a dose and a half and 2 1/2 doses of pushy-parents genes. Apparently the egg seekers aren't troubled by the prospect of having their grandchildren raised by this sort of person.
If you have any doubts about whether the dosages I cite are based on a thorough grounding in genetics and statistics and advanced microbiology, rest assured that I attended an Ivy League college myself. That was in the days, I'll admit, when any number of people were admitted to such institutions without having shown any evidence of carrying smart-kid genes even in trace elements. Somehow, most of these dimmer bulbs managed to graduate--every class needs a lower third in order to have an upper two-thirds--and somehow most of them are now millionaires on Wall Street.
One element many of them had going for them in the admissions process was that they were identified as "legacies"--the offspring of alumni. In Ivy League colleges, alumni children are even now admitted at twice the rate of other applicants. For that reason, egg seekers may not actually need genuine smart-kid genes for their children: after all, an applicant whose mother and father and egg donor were all alumni could be considered a triple legacy.
But how about the college-admission prospects of the grandchildren? As methods are perfected of enhancing a college application through increasingly expensive services--one young man mentioned in the magazine article had $25,000 worth of SAT preparation--it might become more important to have a parent who's a Wall Street millionaire than to have smart-kid genes. Maybe it would be prudent to add a sentence to those ads in college papers: "Preference given to respondents in the lower third of the class." | 1072.txt | 3 |
[
"American parents would send their children into an Ivy League college at any cost",
"Ivy League colleges used to admit students who showed no sign of intelligence",
"alumni children stand a better chance to be admitted than other applicants",
"egg-seekers care nothing about the pushy-parents genes"
]
| Which of the following is true according to the author? | Plowing through the New York Times on a recent Sunday, I read in the Metro Section that infertile couples in the market for smart-kid genes regularly place advertisements in the newspapers of their own Ivy League alma maters offering female undergraduates $7,500 for a donated egg. Before I could get that news comfortably digested, I came across an article in the Magazine section describing SAT prep courses for which parents spend thousands in the hope of raising their child's test scores enough to make admission to an Ivy League college possible. So how can people who have found a potential egg donor at an Ivy League college tell whether the donor carries genuine smart-kid genes or just pushy-parents genes?
The donor herself may not even be aware that such a distinction exists. After years of expensive private schooling and math tutors and tennis camps and SAT prep courses and letters of recommendation from important family friends, she's been told that, unlike beneficiaries of affirmative action, she got into an Ivy League college on pure merit.
Since it is probably safe to assume that people intent on securing high-priced Ivy League eggs are carrying some pushy-parents genes themselves, their joining forces with a donor who got into an Ivy League college by dint of her family's willingness to fork over 10 grand to an SAT prep course could result in a child with somewhere between a dose and a half and 2 1/2 doses of pushy-parents genes. Apparently the egg seekers aren't troubled by the prospect of having their grandchildren raised by this sort of person.
If you have any doubts about whether the dosages I cite are based on a thorough grounding in genetics and statistics and advanced microbiology, rest assured that I attended an Ivy League college myself. That was in the days, I'll admit, when any number of people were admitted to such institutions without having shown any evidence of carrying smart-kid genes even in trace elements. Somehow, most of these dimmer bulbs managed to graduate--every class needs a lower third in order to have an upper two-thirds--and somehow most of them are now millionaires on Wall Street.
One element many of them had going for them in the admissions process was that they were identified as "legacies"--the offspring of alumni. In Ivy League colleges, alumni children are even now admitted at twice the rate of other applicants. For that reason, egg seekers may not actually need genuine smart-kid genes for their children: after all, an applicant whose mother and father and egg donor were all alumni could be considered a triple legacy.
But how about the college-admission prospects of the grandchildren? As methods are perfected of enhancing a college application through increasingly expensive services--one young man mentioned in the magazine article had $25,000 worth of SAT preparation--it might become more important to have a parent who's a Wall Street millionaire than to have smart-kid genes. Maybe it would be prudent to add a sentence to those ads in college papers: "Preference given to respondents in the lower third of the class." | 1072.txt | 2 |
[
"approving",
"objective",
"indifferent",
"ironic"
]
| The author's attitude towards the issue seems to be _ . | Plowing through the New York Times on a recent Sunday, I read in the Metro Section that infertile couples in the market for smart-kid genes regularly place advertisements in the newspapers of their own Ivy League alma maters offering female undergraduates $7,500 for a donated egg. Before I could get that news comfortably digested, I came across an article in the Magazine section describing SAT prep courses for which parents spend thousands in the hope of raising their child's test scores enough to make admission to an Ivy League college possible. So how can people who have found a potential egg donor at an Ivy League college tell whether the donor carries genuine smart-kid genes or just pushy-parents genes?
The donor herself may not even be aware that such a distinction exists. After years of expensive private schooling and math tutors and tennis camps and SAT prep courses and letters of recommendation from important family friends, she's been told that, unlike beneficiaries of affirmative action, she got into an Ivy League college on pure merit.
Since it is probably safe to assume that people intent on securing high-priced Ivy League eggs are carrying some pushy-parents genes themselves, their joining forces with a donor who got into an Ivy League college by dint of her family's willingness to fork over 10 grand to an SAT prep course could result in a child with somewhere between a dose and a half and 2 1/2 doses of pushy-parents genes. Apparently the egg seekers aren't troubled by the prospect of having their grandchildren raised by this sort of person.
If you have any doubts about whether the dosages I cite are based on a thorough grounding in genetics and statistics and advanced microbiology, rest assured that I attended an Ivy League college myself. That was in the days, I'll admit, when any number of people were admitted to such institutions without having shown any evidence of carrying smart-kid genes even in trace elements. Somehow, most of these dimmer bulbs managed to graduate--every class needs a lower third in order to have an upper two-thirds--and somehow most of them are now millionaires on Wall Street.
One element many of them had going for them in the admissions process was that they were identified as "legacies"--the offspring of alumni. In Ivy League colleges, alumni children are even now admitted at twice the rate of other applicants. For that reason, egg seekers may not actually need genuine smart-kid genes for their children: after all, an applicant whose mother and father and egg donor were all alumni could be considered a triple legacy.
But how about the college-admission prospects of the grandchildren? As methods are perfected of enhancing a college application through increasingly expensive services--one young man mentioned in the magazine article had $25,000 worth of SAT preparation--it might become more important to have a parent who's a Wall Street millionaire than to have smart-kid genes. Maybe it would be prudent to add a sentence to those ads in college papers: "Preference given to respondents in the lower third of the class." | 1072.txt | 3 |
[
"wealth is more important than intelligence in application for Ivy League colleges",
"Ivy League colleges are increasingly expensive",
"egg-seekers can get better genes from millionaires",
"the prospects of college-admission are gloomy"
]
| It could be inferred from the text that _ . | Plowing through the New York Times on a recent Sunday, I read in the Metro Section that infertile couples in the market for smart-kid genes regularly place advertisements in the newspapers of their own Ivy League alma maters offering female undergraduates $7,500 for a donated egg. Before I could get that news comfortably digested, I came across an article in the Magazine section describing SAT prep courses for which parents spend thousands in the hope of raising their child's test scores enough to make admission to an Ivy League college possible. So how can people who have found a potential egg donor at an Ivy League college tell whether the donor carries genuine smart-kid genes or just pushy-parents genes?
The donor herself may not even be aware that such a distinction exists. After years of expensive private schooling and math tutors and tennis camps and SAT prep courses and letters of recommendation from important family friends, she's been told that, unlike beneficiaries of affirmative action, she got into an Ivy League college on pure merit.
Since it is probably safe to assume that people intent on securing high-priced Ivy League eggs are carrying some pushy-parents genes themselves, their joining forces with a donor who got into an Ivy League college by dint of her family's willingness to fork over 10 grand to an SAT prep course could result in a child with somewhere between a dose and a half and 2 1/2 doses of pushy-parents genes. Apparently the egg seekers aren't troubled by the prospect of having their grandchildren raised by this sort of person.
If you have any doubts about whether the dosages I cite are based on a thorough grounding in genetics and statistics and advanced microbiology, rest assured that I attended an Ivy League college myself. That was in the days, I'll admit, when any number of people were admitted to such institutions without having shown any evidence of carrying smart-kid genes even in trace elements. Somehow, most of these dimmer bulbs managed to graduate--every class needs a lower third in order to have an upper two-thirds--and somehow most of them are now millionaires on Wall Street.
One element many of them had going for them in the admissions process was that they were identified as "legacies"--the offspring of alumni. In Ivy League colleges, alumni children are even now admitted at twice the rate of other applicants. For that reason, egg seekers may not actually need genuine smart-kid genes for their children: after all, an applicant whose mother and father and egg donor were all alumni could be considered a triple legacy.
But how about the college-admission prospects of the grandchildren? As methods are perfected of enhancing a college application through increasingly expensive services--one young man mentioned in the magazine article had $25,000 worth of SAT preparation--it might become more important to have a parent who's a Wall Street millionaire than to have smart-kid genes. Maybe it would be prudent to add a sentence to those ads in college papers: "Preference given to respondents in the lower third of the class." | 1072.txt | 0 |
[
"free education can do nothing to help the world",
"free education will provide us a perfect world",
"all the problems of society can't be solved by education",
"farmers are more important than professors"
]
| From the passage we can conclude that _ . | Education is not an end, but a means to an end. In other words, we do not educate children only for the purpose of educating them. Our purpose is to fit them for life.
In some modern countries it has for some time been fashionable to think that by free education for all-whether rich or poor, clever or stupid-one can solve all the problems of society and build a perfect nation. But we can already see that free education for all is not enough; we find in such countries a far larger number of people with university degrees. They refuse to do what they think "low" work, and, in fact, work with hands is thought to be dirty and shameful in such countries. But we have only to think a moment to understand that the work of a completely uneducated farmer is far more important than that of a professor; we can live without education, but we die if we have no food. If no one cleaned our streets and took the rubbish away from our houses, we should get terrible diseases in our towns…
In fact, when we say that all of us must be educated to fit us for life, it means that we must be educated in such a way that, firstly, each of us can do whatever work is suited to his brain and ability and, secondly, that we can realize that all jobs are necessary to society, and that it is very bad to be ashamed of one's work. Only such a type of education can be considered valuable to society. | 4028.txt | 2 |
[
"our society needs different kinds of people doing all kinds of work",
"work with hands is the most valuable in the world",
"we should respect farmers, for we can't live without them",
"farmers and dustmen do not need education as their jobs are very simple"
]
| It is suggested in this passage that _ . | Education is not an end, but a means to an end. In other words, we do not educate children only for the purpose of educating them. Our purpose is to fit them for life.
In some modern countries it has for some time been fashionable to think that by free education for all-whether rich or poor, clever or stupid-one can solve all the problems of society and build a perfect nation. But we can already see that free education for all is not enough; we find in such countries a far larger number of people with university degrees. They refuse to do what they think "low" work, and, in fact, work with hands is thought to be dirty and shameful in such countries. But we have only to think a moment to understand that the work of a completely uneducated farmer is far more important than that of a professor; we can live without education, but we die if we have no food. If no one cleaned our streets and took the rubbish away from our houses, we should get terrible diseases in our towns…
In fact, when we say that all of us must be educated to fit us for life, it means that we must be educated in such a way that, firstly, each of us can do whatever work is suited to his brain and ability and, secondly, that we can realize that all jobs are necessary to society, and that it is very bad to be ashamed of one's work. Only such a type of education can be considered valuable to society. | 4028.txt | 0 |
[
"to let everyone get free education",
"to let people not think it is shameful to work with one's own hands",
"to make children get ready for their future work",
"to choose a system of education"
]
| According to the writer, the purpose of education is _ . | Education is not an end, but a means to an end. In other words, we do not educate children only for the purpose of educating them. Our purpose is to fit them for life.
In some modern countries it has for some time been fashionable to think that by free education for all--whether rich or poor, clever or stupid--one can solve all the problems of society and build a perfect nation. But we can already see that free education for all is not enough; we find in such countries a far larger number of people with university degrees. They refuse to do what they think "low" work, and, in fact, work with hands is thought to be dirty and shameful in such countries. But we have only to think a moment to understand that the work of a completely uneducated farmer is far more important than that of a professor; we can live without education, but we die if we have no food. If no one cleaned our streets and took the rubbish away from our houses, we should get terrible diseases in our towns…
In fact, when we say that all of us must be educated to fit us for life, it means that we must be educated in such a way that, firstly, each of us can do whatever work is suited to his brain and ability and, secondly, that we can realize that all jobs are necessary to society, and that it is very bad to be ashamed of one's work. Only such a type of education can be considered valuable to society. | 4028.txt | 2 |
[
"the means of education",
"the value of education",
"the work children should do in the future",
"the advantage of education"
]
| The passage mainly tells us about _ . | Education is not an end, but a means to an end. In other words, we do not educate children only for the purpose of educating them. Our purpose is to fit them for life.
In some modern countries it has for some time been fashionable to think that by free education for all--whether rich or poor, clever or stupid--one can solve all the problems of society and build a perfect nation. But we can already see that free education for all is not enough; we find in such countries a far larger number of people with university degrees. They refuse to do what they think "low" work, and, in fact, work with hands is thought to be dirty and shameful in such countries. But we have only to think a moment to understand that the work of a completely uneducated farmer is far more important than that of a professor; we can live without education, but we die if we have no food. If no one cleaned our streets and took the rubbish away from our houses, we should get terrible diseases in our towns…
In fact, when we say that all of us must be educated to fit us for life, it means that we must be educated in such a way that, firstly, each of us can do whatever work is suited to his brain and ability and, secondly, that we can realize that all jobs are necessary to society, and that it is very bad to be ashamed of one's work. Only such a type of education can be considered valuable to society. | 4028.txt | 1 |
[
"To emphasize the variety of environments in which people used sun and water clocks to tell time.",
"To illustrate the disadvantage of sun and water clocks.",
"To provide an example of an area where water clocks have an advantage over sun clocks.",
"To counter the claim that sun and water clocks were used all over Europe."
]
| Why does the author provide the information that "in northern Europe the sun may be hidden by clouds for weeks at a time, while temperatures vary not only seasonally but from day to night"? | In Europe, before the introduction of the mechanical clock, people told time by sun (using, for example, shadow sticks or sun dials) and water clocks. Sun clocks worked, of course, only on clear days; water clocks misbehaved when the temperature fell toward freezing, to say nothing of long-run drift as the result of sedimentation and clogging. Both these devices worked well in sunny climates; but in northern Europe the sun may be hidden by clouds for weeks at a time, while temperatures vary not only seasonally but from day to night.
Medieval Europe gave new importance to reliable time. The Catholic Church had its seven daily prayers, one of which was at night, requiring an alarm arrangement to waken monks before dawn. And then the new cities and towns, squeezed by their walls, had to know and order time in order to organize collective activity and ration space. They set a time to go to sleep. All this was compatible with older devices so long as there was only one authoritative timekeeper; but with urban growth and the multiplication of time signals, discrepancy brought discord and strife. Society needed a more dependable instrument of time measurement and found it in the mechanical clock.
We do not know who invented this machine, or where. It seems to have appeared in Italy and England (perhaps simultaneous invention) between 1275 and 1300. Once known, it spread rapidly, driving out water clocks but not solar dials, which were needed to check the new machines against the timekeeper of last resort. These early versions were rudimentary, inaccurate, and prone to breakdown.
Ironically, the new machine tended to undermine Catholic Church authority. Although church ritual had sustained an interest in timekeeping throughout the centuries of urban collapse that followed the fall of Rome, church time was nature's time. Day and night were divided into the same number of parts, so that except at the equinoxes, day and night hours were unequal; and then of course the length of these hours varied with the seasons. But the mechanical clock kept equal hours, and this implied a new time reckoning. The Catholic Church resisted, not coming over to the new hours for about a century. From the start, however, the towns and cities took equal hours as their standard, and the public clocks installed in town halls and market squares became the very symbol of a new, secular municipal authority. Every town wanted one; conquerors seized them as especially precious spoils of war; tourists came to see and hear these machines the way they made pilgrimages to sacred relics.
The clock was the greatest achievement of medieval mechanical ingenuity. Its general accuracy could be checked against easily observed phenomena, like the rising and setting of the sun. The result was relentless pressure to improve technique and design. At every stage, clockmakers led the way to accuracy and precision; they became masters of miniaturization, detectors and correctors of error, searchers for new and better. They were thus the pioneers of mechanical engineering and served as examples and teachers to other branches of engineering.
The clock brought order and control, both collective and personal. Its public display and private possession laid the basis for temporal autonomy: people could now coordinate comings and goings without dictation from above. The clock provided the punctuation marks for group activity, while enabling individuals to order their own work (and that of others) so as to enhance productivity. Indeed, the very notion of productivity is a by-product of the clock: once one can relate performance to uniform time units, work is never the same. One moves from the task-oriented time consciousness of the peasant (working one job after another, as time and light permit) and the time-filling busyness of the domestic servant (who always had something to do) to an effort to maximize product per unit of time. | 3894.txt | 1 |
[
"the need of different towns to coordinate timekeeping with each other.",
"the setting of specific times for the opening and closing of markets.",
"the setting of specific time for the start and finish of the working day.",
"the regulation of the performance of daily church rituals."
]
| According to paragraph 2, all of the following are examples of the importance of timekeeping to medieval European society EXCEPT | In Europe, before the introduction of the mechanical clock, people told time by sun (using, for example, shadow sticks or sun dials) and water clocks. Sun clocks worked, of course, only on clear days; water clocks misbehaved when the temperature fell toward freezing, to say nothing of long-run drift as the result of sedimentation and clogging. Both these devices worked well in sunny climates; but in northern Europe the sun may be hidden by clouds for weeks at a time, while temperatures vary not only seasonally but from day to night.
Medieval Europe gave new importance to reliable time. The Catholic Church had its seven daily prayers, one of which was at night, requiring an alarm arrangement to waken monks before dawn. And then the new cities and towns, squeezed by their walls, had to know and order time in order to organize collective activity and ration space. They set a time to go to sleep. All this was compatible with older devices so long as there was only one authoritative timekeeper; but with urban growth and the multiplication of time signals, discrepancy brought discord and strife. Society needed a more dependable instrument of time measurement and found it in the mechanical clock.
We do not know who invented this machine, or where. It seems to have appeared in Italy and England (perhaps simultaneous invention) between 1275 and 1300. Once known, it spread rapidly, driving out water clocks but not solar dials, which were needed to check the new machines against the timekeeper of last resort. These early versions were rudimentary, inaccurate, and prone to breakdown.
Ironically, the new machine tended to undermine Catholic Church authority. Although church ritual had sustained an interest in timekeeping throughout the centuries of urban collapse that followed the fall of Rome, church time was nature's time. Day and night were divided into the same number of parts, so that except at the equinoxes, day and night hours were unequal; and then of course the length of these hours varied with the seasons. But the mechanical clock kept equal hours, and this implied a new time reckoning. The Catholic Church resisted, not coming over to the new hours for about a century. From the start, however, the towns and cities took equal hours as their standard, and the public clocks installed in town halls and market squares became the very symbol of a new, secular municipal authority. Every town wanted one; conquerors seized them as especially precious spoils of war; tourists came to see and hear these machines the way they made pilgrimages to sacred relics.
The clock was the greatest achievement of medieval mechanical ingenuity. Its general accuracy could be checked against easily observed phenomena, like the rising and setting of the sun. The result was relentless pressure to improve technique and design. At every stage, clockmakers led the way to accuracy and precision; they became masters of miniaturization, detectors and correctors of error, searchers for new and better. They were thus the pioneers of mechanical engineering and served as examples and teachers to other branches of engineering.
The clock brought order and control, both collective and personal. Its public display and private possession laid the basis for temporal autonomy: people could now coordinate comings and goings without dictation from above. The clock provided the punctuation marks for group activity, while enabling individuals to order their own work (and that of others) so as to enhance productivity. Indeed, the very notion of productivity is a by-product of the clock: once one can relate performance to uniform time units, work is never the same. One moves from the task-oriented time consciousness of the peasant (working one job after another, as time and light permit) and the time-filling busyness of the domestic servant (who always had something to do) to an effort to maximize product per unit of time. | 3894.txt | 0 |
[
"The alarm warned the monks of discord or strife in the town.",
"The church was responsible for regulating working hours and market hours.",
"The alarm was needed in case fires were not put out each night.",
"One of the church's daily rituals occurred during the night."
]
| According to paragraph 2, why did the medieval church need an alarm arrangement? | In Europe, before the introduction of the mechanical clock, people told time by sun (using, for example, shadow sticks or sun dials) and water clocks. Sun clocks worked, of course, only on clear days; water clocks misbehaved when the temperature fell toward freezing, to say nothing of long-run drift as the result of sedimentation and clogging. Both these devices worked well in sunny climates; but in northern Europe the sun may be hidden by clouds for weeks at a time, while temperatures vary not only seasonally but from day to night.
Medieval Europe gave new importance to reliable time. The Catholic Church had its seven daily prayers, one of which was at night, requiring an alarm arrangement to waken monks before dawn. And then the new cities and towns, squeezed by their walls, had to know and order time in order to organize collective activity and ration space. They set a time to go to sleep. All this was compatible with older devices so long as there was only one authoritative timekeeper; but with urban growth and the multiplication of time signals, discrepancy brought discord and strife. Society needed a more dependable instrument of time measurement and found it in the mechanical clock.
We do not know who invented this machine, or where. It seems to have appeared in Italy and England (perhaps simultaneous invention) between 1275 and 1300. Once known, it spread rapidly, driving out water clocks but not solar dials, which were needed to check the new machines against the timekeeper of last resort. These early versions were rudimentary, inaccurate, and prone to breakdown.
Ironically, the new machine tended to undermine Catholic Church authority. Although church ritual had sustained an interest in timekeeping throughout the centuries of urban collapse that followed the fall of Rome, church time was nature's time. Day and night were divided into the same number of parts, so that except at the equinoxes, day and night hours were unequal; and then of course the length of these hours varied with the seasons. But the mechanical clock kept equal hours, and this implied a new time reckoning. The Catholic Church resisted, not coming over to the new hours for about a century. From the start, however, the towns and cities took equal hours as their standard, and the public clocks installed in town halls and market squares became the very symbol of a new, secular municipal authority. Every town wanted one; conquerors seized them as especially precious spoils of war; tourists came to see and hear these machines the way they made pilgrimages to sacred relics.
The clock was the greatest achievement of medieval mechanical ingenuity. Its general accuracy could be checked against easily observed phenomena, like the rising and setting of the sun. The result was relentless pressure to improve technique and design. At every stage, clockmakers led the way to accuracy and precision; they became masters of miniaturization, detectors and correctors of error, searchers for new and better. They were thus the pioneers of mechanical engineering and served as examples and teachers to other branches of engineering.
The clock brought order and control, both collective and personal. Its public display and private possession laid the basis for temporal autonomy: people could now coordinate comings and goings without dictation from above. The clock provided the punctuation marks for group activity, while enabling individuals to order their own work (and that of others) so as to enhance productivity. Indeed, the very notion of productivity is a by-product of the clock: once one can relate performance to uniform time units, work is never the same. One moves from the task-oriented time consciousness of the peasant (working one job after another, as time and light permit) and the time-filling busyness of the domestic servant (who always had something to do) to an effort to maximize product per unit of time. | 3894.txt | 3 |
[
"actual.",
"important.",
"official.",
"effective."
]
| The word "authoritative" in the passage (paragraph 2) is closest in meaning to | In Europe, before the introduction of the mechanical clock, people told time by sun (using, for example, shadow sticks or sun dials) and water clocks. Sun clocks worked, of course, only on clear days; water clocks misbehaved when the temperature fell toward freezing, to say nothing of long-run drift as the result of sedimentation and clogging. Both these devices worked well in sunny climates; but in northern Europe the sun may be hidden by clouds for weeks at a time, while temperatures vary not only seasonally but from day to night.
Medieval Europe gave new importance to reliable time. The Catholic Church had its seven daily prayers, one of which was at night, requiring an alarm arrangement to waken monks before dawn. And then the new cities and towns, squeezed by their walls, had to know and order time in order to organize collective activity and ration space. They set a time to go to sleep. All this was compatible with older devices so long as there was only one authoritative timekeeper; but with urban growth and the multiplication of time signals, discrepancy brought discord and strife. Society needed a more dependable instrument of time measurement and found it in the mechanical clock.
We do not know who invented this machine, or where. It seems to have appeared in Italy and England (perhaps simultaneous invention) between 1275 and 1300. Once known, it spread rapidly, driving out water clocks but not solar dials, which were needed to check the new machines against the timekeeper of last resort. These early versions were rudimentary, inaccurate, and prone to breakdown.
Ironically, the new machine tended to undermine Catholic Church authority. Although church ritual had sustained an interest in timekeeping throughout the centuries of urban collapse that followed the fall of Rome, church time was nature's time. Day and night were divided into the same number of parts, so that except at the equinoxes, day and night hours were unequal; and then of course the length of these hours varied with the seasons. But the mechanical clock kept equal hours, and this implied a new time reckoning. The Catholic Church resisted, not coming over to the new hours for about a century. From the start, however, the towns and cities took equal hours as their standard, and the public clocks installed in town halls and market squares became the very symbol of a new, secular municipal authority. Every town wanted one; conquerors seized them as especially precious spoils of war; tourists came to see and hear these machines the way they made pilgrimages to sacred relics.
The clock was the greatest achievement of medieval mechanical ingenuity. Its general accuracy could be checked against easily observed phenomena, like the rising and setting of the sun. The result was relentless pressure to improve technique and design. At every stage, clockmakers led the way to accuracy and precision; they became masters of miniaturization, detectors and correctors of error, searchers for new and better. They were thus the pioneers of mechanical engineering and served as examples and teachers to other branches of engineering.
The clock brought order and control, both collective and personal. Its public display and private possession laid the basis for temporal autonomy: people could now coordinate comings and goings without dictation from above. The clock provided the punctuation marks for group activity, while enabling individuals to order their own work (and that of others) so as to enhance productivity. Indeed, the very notion of productivity is a by-product of the clock: once one can relate performance to uniform time units, work is never the same. One moves from the task-oriented time consciousness of the peasant (working one job after another, as time and light permit) and the time-filling busyness of the domestic servant (who always had something to do) to an effort to maximize product per unit of time. | 3894.txt | 2 |
[
"water clocks.",
"the sun.",
"mechanical clocks.",
"the church."
]
| The author uses the phrase "the timekeeper of last resort" to refer to | In Europe, before the introduction of the mechanical clock, people told time by sun (using, for example, shadow sticks or sun dials) and water clocks. Sun clocks worked, of course, only on clear days; water clocks misbehaved when the temperature fell toward freezing, to say nothing of long-run drift as the result of sedimentation and clogging. Both these devices worked well in sunny climates; but in northern Europe the sun may be hidden by clouds for weeks at a time, while temperatures vary not only seasonally but from day to night.
Medieval Europe gave new importance to reliable time. The Catholic Church had its seven daily prayers, one of which was at night, requiring an alarm arrangement to waken monks before dawn. And then the new cities and towns, squeezed by their walls, had to know and order time in order to organize collective activity and ration space. They set a time to go to sleep. All this was compatible with older devices so long as there was only one authoritative timekeeper; but with urban growth and the multiplication of time signals, discrepancy brought discord and strife. Society needed a more dependable instrument of time measurement and found it in the mechanical clock.
We do not know who invented this machine, or where. It seems to have appeared in Italy and England (perhaps simultaneous invention) between 1275 and 1300. Once known, it spread rapidly, driving out water clocks but not solar dials, which were needed to check the new machines against the timekeeper of last resort. These early versions were rudimentary, inaccurate, and prone to breakdown.
Ironically, the new machine tended to undermine Catholic Church authority. Although church ritual had sustained an interest in timekeeping throughout the centuries of urban collapse that followed the fall of Rome, church time was nature's time. Day and night were divided into the same number of parts, so that except at the equinoxes, day and night hours were unequal; and then of course the length of these hours varied with the seasons. But the mechanical clock kept equal hours, and this implied a new time reckoning. The Catholic Church resisted, not coming over to the new hours for about a century. From the start, however, the towns and cities took equal hours as their standard, and the public clocks installed in town halls and market squares became the very symbol of a new, secular municipal authority. Every town wanted one; conquerors seized them as especially precious spoils of war; tourists came to see and hear these machines the way they made pilgrimages to sacred relics.
The clock was the greatest achievement of medieval mechanical ingenuity. Its general accuracy could be checked against easily observed phenomena, like the rising and setting of the sun. The result was relentless pressure to improve technique and design. At every stage, clockmakers led the way to accuracy and precision; they became masters of miniaturization, detectors and correctors of error, searchers for new and better. They were thus the pioneers of mechanical engineering and served as examples and teachers to other branches of engineering.
The clock brought order and control, both collective and personal. Its public display and private possession laid the basis for temporal autonomy: people could now coordinate comings and goings without dictation from above. The clock provided the punctuation marks for group activity, while enabling individuals to order their own work (and that of others) so as to enhance productivity. Indeed, the very notion of productivity is a by-product of the clock: once one can relate performance to uniform time units, work is never the same. One moves from the task-oriented time consciousness of the peasant (working one job after another, as time and light permit) and the time-filling busyness of the domestic servant (who always had something to do) to an effort to maximize product per unit of time. | 3894.txt | 1 |
[
"rare.",
"small.",
"impractical.",
"basic."
]
| The word "rudimentary" in the passage (paragraph 3) is closest in meaning to | In Europe, before the introduction of the mechanical clock, people told time by sun (using, for example, shadow sticks or sun dials) and water clocks. Sun clocks worked, of course, only on clear days; water clocks misbehaved when the temperature fell toward freezing, to say nothing of long-run drift as the result of sedimentation and clogging. Both these devices worked well in sunny climates; but in northern Europe the sun may be hidden by clouds for weeks at a time, while temperatures vary not only seasonally but from day to night.
Medieval Europe gave new importance to reliable time. The Catholic Church had its seven daily prayers, one of which was at night, requiring an alarm arrangement to waken monks before dawn. And then the new cities and towns, squeezed by their walls, had to know and order time in order to organize collective activity and ration space. They set a time to go to sleep. All this was compatible with older devices so long as there was only one authoritative timekeeper; but with urban growth and the multiplication of time signals, discrepancy brought discord and strife. Society needed a more dependable instrument of time measurement and found it in the mechanical clock.
We do not know who invented this machine, or where. It seems to have appeared in Italy and England (perhaps simultaneous invention) between 1275 and 1300. Once known, it spread rapidly, driving out water clocks but not solar dials, which were needed to check the new machines against the timekeeper of last resort. These early versions were rudimentary, inaccurate, and prone to breakdown.
Ironically, the new machine tended to undermine Catholic Church authority. Although church ritual had sustained an interest in timekeeping throughout the centuries of urban collapse that followed the fall of Rome, church time was nature's time. Day and night were divided into the same number of parts, so that except at the equinoxes, day and night hours were unequal; and then of course the length of these hours varied with the seasons. But the mechanical clock kept equal hours, and this implied a new time reckoning. The Catholic Church resisted, not coming over to the new hours for about a century. From the start, however, the towns and cities took equal hours as their standard, and the public clocks installed in town halls and market squares became the very symbol of a new, secular municipal authority. Every town wanted one; conquerors seized them as especially precious spoils of war; tourists came to see and hear these machines the way they made pilgrimages to sacred relics.
The clock was the greatest achievement of medieval mechanical ingenuity. Its general accuracy could be checked against easily observed phenomena, like the rising and setting of the sun. The result was relentless pressure to improve technique and design. At every stage, clockmakers led the way to accuracy and precision; they became masters of miniaturization, detectors and correctors of error, searchers for new and better. They were thus the pioneers of mechanical engineering and served as examples and teachers to other branches of engineering.
The clock brought order and control, both collective and personal. Its public display and private possession laid the basis for temporal autonomy: people could now coordinate comings and goings without dictation from above. The clock provided the punctuation marks for group activity, while enabling individuals to order their own work (and that of others) so as to enhance productivity. Indeed, the very notion of productivity is a by-product of the clock: once one can relate performance to uniform time units, work is never the same. One moves from the task-oriented time consciousness of the peasant (working one job after another, as time and light permit) and the time-filling busyness of the domestic servant (who always had something to do) to an effort to maximize product per unit of time. | 3894.txt | 3 |
[
"It used mechanical clocks through the period of urban collapse.",
"It used clocks to better understand natural phenomena, like equinoxes.",
"It tried to preserve its own method of keeping time, which was different from mechanical-clock time.",
"It used mechanical clocks to challenge secular, town authorities."
]
| According to paragraph 4, how did the Catholic Church react to the introduction of mechanical clocks? | In Europe, before the introduction of the mechanical clock, people told time by sun (using, for example, shadow sticks or sun dials) and water clocks. Sun clocks worked, of course, only on clear days; water clocks misbehaved when the temperature fell toward freezing, to say nothing of long-run drift as the result of sedimentation and clogging. Both these devices worked well in sunny climates; but in northern Europe the sun may be hidden by clouds for weeks at a time, while temperatures vary not only seasonally but from day to night.
Medieval Europe gave new importance to reliable time. The Catholic Church had its seven daily prayers, one of which was at night, requiring an alarm arrangement to waken monks before dawn. And then the new cities and towns, squeezed by their walls, had to know and order time in order to organize collective activity and ration space. They set a time to go to sleep. All this was compatible with older devices so long as there was only one authoritative timekeeper; but with urban growth and the multiplication of time signals, discrepancy brought discord and strife. Society needed a more dependable instrument of time measurement and found it in the mechanical clock.
We do not know who invented this machine, or where. It seems to have appeared in Italy and England (perhaps simultaneous invention) between 1275 and 1300. Once known, it spread rapidly, driving out water clocks but not solar dials, which were needed to check the new machines against the timekeeper of last resort. These early versions were rudimentary, inaccurate, and prone to breakdown.
Ironically, the new machine tended to undermine Catholic Church authority. Although church ritual had sustained an interest in timekeeping throughout the centuries of urban collapse that followed the fall of Rome, church time was nature's time. Day and night were divided into the same number of parts, so that except at the equinoxes, day and night hours were unequal; and then of course the length of these hours varied with the seasons. But the mechanical clock kept equal hours, and this implied a new time reckoning. The Catholic Church resisted, not coming over to the new hours for about a century. From the start, however, the towns and cities took equal hours as their standard, and the public clocks installed in town halls and market squares became the very symbol of a new, secular municipal authority. Every town wanted one; conquerors seized them as especially precious spoils of war; tourists came to see and hear these machines the way they made pilgrimages to sacred relics.
The clock was the greatest achievement of medieval mechanical ingenuity. Its general accuracy could be checked against easily observed phenomena, like the rising and setting of the sun. The result was relentless pressure to improve technique and design. At every stage, clockmakers led the way to accuracy and precision; they became masters of miniaturization, detectors and correctors of error, searchers for new and better. They were thus the pioneers of mechanical engineering and served as examples and teachers to other branches of engineering.
The clock brought order and control, both collective and personal. Its public display and private possession laid the basis for temporal autonomy: people could now coordinate comings and goings without dictation from above. The clock provided the punctuation marks for group activity, while enabling individuals to order their own work (and that of others) so as to enhance productivity. Indeed, the very notion of productivity is a by-product of the clock: once one can relate performance to uniform time units, work is never the same. One moves from the task-oriented time consciousness of the peasant (working one job after another, as time and light permit) and the time-filling busyness of the domestic servant (who always had something to do) to an effort to maximize product per unit of time. | 3894.txt | 2 |
[
"required.",
"expected by the majority of people.",
"standardized.",
"put in place."
]
| The word "installed" in the passage (paragraph 4) is closest in meaning to | In Europe, before the introduction of the mechanical clock, people told time by sun (using, for example, shadow sticks or sun dials) and water clocks. Sun clocks worked, of course, only on clear days; water clocks misbehaved when the temperature fell toward freezing, to say nothing of long-run drift as the result of sedimentation and clogging. Both these devices worked well in sunny climates; but in northern Europe the sun may be hidden by clouds for weeks at a time, while temperatures vary not only seasonally but from day to night.
Medieval Europe gave new importance to reliable time. The Catholic Church had its seven daily prayers, one of which was at night, requiring an alarm arrangement to waken monks before dawn. And then the new cities and towns, squeezed by their walls, had to know and order time in order to organize collective activity and ration space. They set a time to go to sleep. All this was compatible with older devices so long as there was only one authoritative timekeeper; but with urban growth and the multiplication of time signals, discrepancy brought discord and strife. Society needed a more dependable instrument of time measurement and found it in the mechanical clock.
We do not know who invented this machine, or where. It seems to have appeared in Italy and England (perhaps simultaneous invention) between 1275 and 1300. Once known, it spread rapidly, driving out water clocks but not solar dials, which were needed to check the new machines against the timekeeper of last resort. These early versions were rudimentary, inaccurate, and prone to breakdown.
Ironically, the new machine tended to undermine Catholic Church authority. Although church ritual had sustained an interest in timekeeping throughout the centuries of urban collapse that followed the fall of Rome, church time was nature's time. Day and night were divided into the same number of parts, so that except at the equinoxes, day and night hours were unequal; and then of course the length of these hours varied with the seasons. But the mechanical clock kept equal hours, and this implied a new time reckoning. The Catholic Church resisted, not coming over to the new hours for about a century. From the start, however, the towns and cities took equal hours as their standard, and the public clocks installed in town halls and market squares became the very symbol of a new, secular municipal authority. Every town wanted one; conquerors seized them as especially precious spoils of war; tourists came to see and hear these machines the way they made pilgrimages to sacred relics.
The clock was the greatest achievement of medieval mechanical ingenuity. Its general accuracy could be checked against easily observed phenomena, like the rising and setting of the sun. The result was relentless pressure to improve technique and design. At every stage, clockmakers led the way to accuracy and precision; they became masters of miniaturization, detectors and correctors of error, searchers for new and better. They were thus the pioneers of mechanical engineering and served as examples and teachers to other branches of engineering.
The clock brought order and control, both collective and personal. Its public display and private possession laid the basis for temporal autonomy: people could now coordinate comings and goings without dictation from above. The clock provided the punctuation marks for group activity, while enabling individuals to order their own work (and that of others) so as to enhance productivity. Indeed, the very notion of productivity is a by-product of the clock: once one can relate performance to uniform time units, work is never the same. One moves from the task-oriented time consciousness of the peasant (working one job after another, as time and light permit) and the time-filling busyness of the domestic servant (who always had something to do) to an effort to maximize product per unit of time. | 3894.txt | 3 |
[
"were able to continually make improvements in the accuracy of mechanical clocks.",
"were sometimes not well respected by other engineers.",
"sometimes made claims about the accuracy of mechanical clocks that were not true.",
"rarely shared their expertise with other engineers."
]
| It can be inferred from paragraph 5 that medieval clockmakers | In Europe, before the introduction of the mechanical clock, people told time by sun (using, for example, shadow sticks or sun dials) and water clocks. Sun clocks worked, of course, only on clear days; water clocks misbehaved when the temperature fell toward freezing, to say nothing of long-run drift as the result of sedimentation and clogging. Both these devices worked well in sunny climates; but in northern Europe the sun may be hidden by clouds for weeks at a time, while temperatures vary not only seasonally but from day to night.
Medieval Europe gave new importance to reliable time. The Catholic Church had its seven daily prayers, one of which was at night, requiring an alarm arrangement to waken monks before dawn. And then the new cities and towns, squeezed by their walls, had to know and order time in order to organize collective activity and ration space. They set a time to go to sleep. All this was compatible with older devices so long as there was only one authoritative timekeeper; but with urban growth and the multiplication of time signals, discrepancy brought discord and strife. Society needed a more dependable instrument of time measurement and found it in the mechanical clock.
We do not know who invented this machine, or where. It seems to have appeared in Italy and England (perhaps simultaneous invention) between 1275 and 1300. Once known, it spread rapidly, driving out water clocks but not solar dials, which were needed to check the new machines against the timekeeper of last resort. These early versions were rudimentary, inaccurate, and prone to breakdown.
Ironically, the new machine tended to undermine Catholic Church authority. Although church ritual had sustained an interest in timekeeping throughout the centuries of urban collapse that followed the fall of Rome, church time was nature's time. Day and night were divided into the same number of parts, so that except at the equinoxes, day and night hours were unequal; and then of course the length of these hours varied with the seasons. But the mechanical clock kept equal hours, and this implied a new time reckoning. The Catholic Church resisted, not coming over to the new hours for about a century. From the start, however, the towns and cities took equal hours as their standard, and the public clocks installed in town halls and market squares became the very symbol of a new, secular municipal authority. Every town wanted one; conquerors seized them as especially precious spoils of war; tourists came to see and hear these machines the way they made pilgrimages to sacred relics.
The clock was the greatest achievement of medieval mechanical ingenuity. Its general accuracy could be checked against easily observed phenomena, like the rising and setting of the sun. The result was relentless pressure to improve technique and design. At every stage, clockmakers led the way to accuracy and precision; they became masters of miniaturization, detectors and correctors of error, searchers for new and better. They were thus the pioneers of mechanical engineering and served as examples and teachers to other branches of engineering.
The clock brought order and control, both collective and personal. Its public display and private possession laid the basis for temporal autonomy: people could now coordinate comings and goings without dictation from above. The clock provided the punctuation marks for group activity, while enabling individuals to order their own work (and that of others) so as to enhance productivity. Indeed, the very notion of productivity is a by-product of the clock: once one can relate performance to uniform time units, work is never the same. One moves from the task-oriented time consciousness of the peasant (working one job after another, as time and light permit) and the time-filling busyness of the domestic servant (who always had something to do) to an effort to maximize product per unit of time. | 3894.txt | 0 |
[
"How did early mechanical clocks work",
"Why did the design of mechanical clocks affect engineering in general",
"How were mechanical clocks made",
"What influenced the design of the first mechanical clock"
]
| Paragraph 5 answers which of the following questions about mechanical clocks. | In Europe, before the introduction of the mechanical clock, people told time by sun (using, for example, shadow sticks or sun dials) and water clocks. Sun clocks worked, of course, only on clear days; water clocks misbehaved when the temperature fell toward freezing, to say nothing of long-run drift as the result of sedimentation and clogging. Both these devices worked well in sunny climates; but in northern Europe the sun may be hidden by clouds for weeks at a time, while temperatures vary not only seasonally but from day to night.
Medieval Europe gave new importance to reliable time. The Catholic Church had its seven daily prayers, one of which was at night, requiring an alarm arrangement to waken monks before dawn. And then the new cities and towns, squeezed by their walls, had to know and order time in order to organize collective activity and ration space. They set a time to go to sleep. All this was compatible with older devices so long as there was only one authoritative timekeeper; but with urban growth and the multiplication of time signals, discrepancy brought discord and strife. Society needed a more dependable instrument of time measurement and found it in the mechanical clock.
We do not know who invented this machine, or where. It seems to have appeared in Italy and England (perhaps simultaneous invention) between 1275 and 1300. Once known, it spread rapidly, driving out water clocks but not solar dials, which were needed to check the new machines against the timekeeper of last resort. These early versions were rudimentary, inaccurate, and prone to breakdown.
Ironically, the new machine tended to undermine Catholic Church authority. Although church ritual had sustained an interest in timekeeping throughout the centuries of urban collapse that followed the fall of Rome, church time was nature's time. Day and night were divided into the same number of parts, so that except at the equinoxes, day and night hours were unequal; and then of course the length of these hours varied with the seasons. But the mechanical clock kept equal hours, and this implied a new time reckoning. The Catholic Church resisted, not coming over to the new hours for about a century. From the start, however, the towns and cities took equal hours as their standard, and the public clocks installed in town halls and market squares became the very symbol of a new, secular municipal authority. Every town wanted one; conquerors seized them as especially precious spoils of war; tourists came to see and hear these machines the way they made pilgrimages to sacred relics.
The clock was the greatest achievement of medieval mechanical ingenuity. Its general accuracy could be checked against easily observed phenomena, like the rising and setting of the sun. The result was relentless pressure to improve technique and design. At every stage, clockmakers led the way to accuracy and precision; they became masters of miniaturization, detectors and correctors of error, searchers for new and better. They were thus the pioneers of mechanical engineering and served as examples and teachers to other branches of engineering.
The clock brought order and control, both collective and personal. Its public display and private possession laid the basis for temporal autonomy: people could now coordinate comings and goings without dictation from above. The clock provided the punctuation marks for group activity, while enabling individuals to order their own work (and that of others) so as to enhance productivity. Indeed, the very notion of productivity is a by-product of the clock: once one can relate performance to uniform time units, work is never the same. One moves from the task-oriented time consciousness of the peasant (working one job after another, as time and light permit) and the time-filling busyness of the domestic servant (who always had something to do) to an effort to maximize product per unit of time. | 3894.txt | 1 |
[
"leaders.",
"opponents.",
"employers.",
"guardians."
]
| The word "pioneers" in the passage is closest in meaning to | In Europe, before the introduction of the mechanical clock, people told time by sun (using, for example, shadow sticks or sun dials) and water clocks. Sun clocks worked, of course, only on clear days; water clocks misbehaved when the temperature fell toward freezing, to say nothing of long-run drift as the result of sedimentation and clogging. Both these devices worked well in sunny climates; but in northern Europe the sun may be hidden by clouds for weeks at a time, while temperatures vary not only seasonally but from day to night.
Medieval Europe gave new importance to reliable time. The Catholic Church had its seven daily prayers, one of which was at night, requiring an alarm arrangement to waken monks before dawn. And then the new cities and towns, squeezed by their walls, had to know and order time in order to organize collective activity and ration space. They set a time to go to sleep. All this was compatible with older devices so long as there was only one authoritative timekeeper; but with urban growth and the multiplication of time signals, discrepancy brought discord and strife. Society needed a more dependable instrument of time measurement and found it in the mechanical clock.
We do not know who invented this machine, or where. It seems to have appeared in Italy and England (perhaps simultaneous invention) between 1275 and 1300. Once known, it spread rapidly, driving out water clocks but not solar dials, which were needed to check the new machines against the timekeeper of last resort. These early versions were rudimentary, inaccurate, and prone to breakdown.
Ironically, the new machine tended to undermine Catholic Church authority. Although church ritual had sustained an interest in timekeeping throughout the centuries of urban collapse that followed the fall of Rome, church time was nature's time. Day and night were divided into the same number of parts, so that except at the equinoxes, day and night hours were unequal; and then of course the length of these hours varied with the seasons. But the mechanical clock kept equal hours, and this implied a new time reckoning. The Catholic Church resisted, not coming over to the new hours for about a century. From the start, however, the towns and cities took equal hours as their standard, and the public clocks installed in town halls and market squares became the very symbol of a new, secular municipal authority. Every town wanted one; conquerors seized them as especially precious spoils of war; tourists came to see and hear these machines the way they made pilgrimages to sacred relics.
The clock was the greatest achievement of medieval mechanical ingenuity. Its general accuracy could be checked against easily observed phenomena, like the rising and setting of the sun. The result was relentless pressure to improve technique and design. At every stage, clockmakers led the way to accuracy and precision; they became masters of miniaturization, detectors and correctors of error, searchers for new and better. They were thus the pioneers of mechanical engineering and served as examples and teachers to other branches of engineering.
The clock brought order and control, both collective and personal. Its public display and private possession laid the basis for temporal autonomy: people could now coordinate comings and goings without dictation from above. The clock provided the punctuation marks for group activity, while enabling individuals to order their own work (and that of others) so as to enhance productivity. Indeed, the very notion of productivity is a by-product of the clock: once one can relate performance to uniform time units, work is never the same. One moves from the task-oriented time consciousness of the peasant (working one job after another, as time and light permit) and the time-filling busyness of the domestic servant (who always had something to do) to an effort to maximize product per unit of time. | 3894.txt | 0 |
[
"It encouraged workers to do more time-filling busywork.",
"It enabled workers to be more task oriented.",
"It pushed workers to work more hours every day.",
"It led to a focus on productivity."
]
| According to paragraph 6, how did the mechanical clock affect labor? | In Europe, before the introduction of the mechanical clock, people told time by sun (using, for example, shadow sticks or sun dials) and water clocks. Sun clocks worked, of course, only on clear days; water clocks misbehaved when the temperature fell toward freezing, to say nothing of long-run drift as the result of sedimentation and clogging. Both these devices worked well in sunny climates; but in northern Europe the sun may be hidden by clouds for weeks at a time, while temperatures vary not only seasonally but from day to night.
Medieval Europe gave new importance to reliable time. The Catholic Church had its seven daily prayers, one of which was at night, requiring an alarm arrangement to waken monks before dawn. And then the new cities and towns, squeezed by their walls, had to know and order time in order to organize collective activity and ration space. They set a time to go to sleep. All this was compatible with older devices so long as there was only one authoritative timekeeper; but with urban growth and the multiplication of time signals, discrepancy brought discord and strife. Society needed a more dependable instrument of time measurement and found it in the mechanical clock.
We do not know who invented this machine, or where. It seems to have appeared in Italy and England (perhaps simultaneous invention) between 1275 and 1300. Once known, it spread rapidly, driving out water clocks but not solar dials, which were needed to check the new machines against the timekeeper of last resort. These early versions were rudimentary, inaccurate, and prone to breakdown.
Ironically, the new machine tended to undermine Catholic Church authority. Although church ritual had sustained an interest in timekeeping throughout the centuries of urban collapse that followed the fall of Rome, church time was nature's time. Day and night were divided into the same number of parts, so that except at the equinoxes, day and night hours were unequal; and then of course the length of these hours varied with the seasons. But the mechanical clock kept equal hours, and this implied a new time reckoning. The Catholic Church resisted, not coming over to the new hours for about a century. From the start, however, the towns and cities took equal hours as their standard, and the public clocks installed in town halls and market squares became the very symbol of a new, secular municipal authority. Every town wanted one; conquerors seized them as especially precious spoils of war; tourists came to see and hear these machines the way they made pilgrimages to sacred relics.
The clock was the greatest achievement of medieval mechanical ingenuity. Its general accuracy could be checked against easily observed phenomena, like the rising and setting of the sun. The result was relentless pressure to improve technique and design. At every stage, clockmakers led the way to accuracy and precision; they became masters of miniaturization, detectors and correctors of error, searchers for new and better. They were thus the pioneers of mechanical engineering and served as examples and teachers to other branches of engineering.
The clock brought order and control, both collective and personal. Its public display and private possession laid the basis for temporal autonomy: people could now coordinate comings and goings without dictation from above. The clock provided the punctuation marks for group activity, while enabling individuals to order their own work (and that of others) so as to enhance productivity. Indeed, the very notion of productivity is a by-product of the clock: once one can relate performance to uniform time units, work is never the same. One moves from the task-oriented time consciousness of the peasant (working one job after another, as time and light permit) and the time-filling busyness of the domestic servant (who always had something to do) to an effort to maximize product per unit of time. | 3894.txt | 3 |
[
"types of natural selection.",
"dangers of natural selection.",
"problems natural selection solves.",
"ways natural selection works."
]
| The phrase "mechanisms of natural selection" in the passage(paragraph 1)is closest in meaning to | When several individuals of the same species or of several different species depend on the same limited resource, a situation may arise that is referred to as competition. The existence of competition has been long known to naturalists; its effects were described by Darwin in considerable detail. Competition among individuals of the same species (intraspecies competition), one of the major mechanisms of natural selection, is the concern of evolutionary biology. Competition among the individuals of different species (interspecies competition) is a major concern of ecology. It is one of the factors controlling the size of competing populations, and extreme cases it may lead to the extinction of one of the competing species. This was described by Darwin for indigenous New Zealand species of animals and plants, which died out when competing species from Europe were introduced.
No serious competition exists when the major needed resource is in superabundant supply, as in most cases of the coexistence of herbivores (plant eaters). Furthermore, most species do not depend entirely on a single resource, if the major resource for a species becomes scarce, the species can usually shift to alternative resources. If more than one species is competing for a scarce resource, the competing species usually switch to different alternative resources. Competition is usually most severe among close relatives with similar demands on the environment. But it may also occur among totally unrelated forms that compete for the same resource, such as seed-eating rodents and ants. The effects of such competition are graphically demonstrated when all the animals or all the plants in an ecosystem come into competition, as happened 2 million years ago at the end of Pliocene, when North and South America became joined by the Isthmus of Panama. North and South American species migrating across the Isthmus now came into competition with each other. The result was the extermination of a large fraction of the South American mammals, which were apparently unable to withstand the competition from invading North American speciesalthough added predation was also an important factor.
To what extent competition determines the composition of a community and the density of particular species has been the source of considerable controversy. The problem is that competition ordinarily cannot be observed directly but must be inferred from the spread or increase of one species and the concurrent reduction or disappearance of another species. The Russian biologist G. F. Gause performed numerous two-species experiments in the laboratory, in which one of the species became extinct when only a single kind of resource was available. On the basis of these experiments and of field observations, the so-called law of competitive exclusion was formulated, according to which no two species can occupy the same niche. Numerous seeming exceptions to this law have since been found, but they can usually be explained as cases in which the two species, even though competing for a major joint resource, did not really occupy exactly the same niche.
Competition among species is of considerable evolutionary importance. The physical structure of species competing for resources in the same ecological niche tends to gradually evolve in ways that allow them to occupy different niches. Competing species also tend to change their ranges so that their territories no longer overlap. The evolutionary effect of competition on species has been referred to as "species selection"; however, this description is potentially misleading. Only the individuals of a species are subject to the pressures of natural selection. The effect on the well-being and existence of a species is just the result of the effects of selection on all the individuals of the species. Thus species selection is actually a result of individual selection.
Competition may occur for any needed resource. In the case of animals it is usually food; in the case of forest plants it may be light; in the case of substrate inhabitants it may be space, as in many shallow-water bottom-dwelling marine organisms. Indeed, it may be for any of the factors, physical as well as biotic, that are essential for organisms. Competition is usually more severe the denser the population. Together with predation, it is the most important density-dependent factor in regulating population growth. | 3924.txt | 3
[
"It results in the eventual elimination of the resource for which they are competing.",
"It leads to competition among individuals of the same species.",
"It encourages new species to immigrate to an area.",
"It controls the number of individuals in the competing populations."
]
| According to paragraph 1, what is one effect of competition among individuals of different species? | When several individuals of the same species or of several different species depend on the same limited resource, a situation may arise that is referred to as competition. The existence of competition has long been known to naturalists; its effects were described by Darwin in considerable detail. Competition among individuals of the same species (intraspecies competition), one of the major mechanisms of natural selection, is the concern of evolutionary biology. Competition among the individuals of different species (interspecies competition) is a major concern of ecology. It is one of the factors controlling the size of competing populations, and in extreme cases it may lead to the extinction of one of the competing species. This was described by Darwin for indigenous New Zealand species of animals and plants, which died out when competing species from Europe were introduced.
No serious competition exists when the major needed resource is in superabundant supply, as in most cases of the coexistence of herbivores (plant eaters). Furthermore, most species do not depend entirely on a single resource. If the major resource for a species becomes scarce, the species can usually shift to alternative resources. If more than one species is competing for a scarce resource, the competing species usually switch to different alternative resources. Competition is usually most severe among close relatives with similar demands on the environment. But it may also occur among totally unrelated forms that compete for the same resource, such as seed-eating rodents and ants. The effects of such competition are graphically demonstrated when all the animals or all the plants in an ecosystem come into competition, as happened 2 million years ago at the end of the Pliocene, when North and South America became joined by the Isthmus of Panama. North and South American species migrating across the Isthmus now came into competition with each other. The result was the extermination of a large fraction of the South American mammals, which were apparently unable to withstand the competition from invading North American species, although added predation was also an important factor.
To what extent competition determines the composition of a community and the density of particular species has been the source of considerable controversy. The problem is that competition ordinarily cannot be observed directly but must be inferred from the spread or increase of one species and the concurrent reduction or disappearance of another species. The Russian biologist G. F. Gause performed numerous two-species experiments in the laboratory, in which one of the species became extinct when only a single kind of resource was available. On the basis of these experiments and of field observations, the so-called law of competitive exclusion was formulated, according to which no two species can occupy the same niche. Numerous seeming exceptions to this law have since been found, but they can usually be explained as cases in which the two species, even though competing for a major joint resource, did not really occupy exactly the same niche.
Competition among species is of considerable evolutionary importance. The physical structure of species competing for resources in the same ecological niche tends to gradually evolve in ways that allow them to occupy different niches. Competing species also tend to change their ranges so that their territories no longer overlap. The evolutionary effect of competition on species has been referred to as "species selection"; however, this description is potentially misleading. Only the individuals of a species are subject to the pressures of natural selection. The effect on the well-being and existence of a species is just the result of the effects of selection on all the individuals of the species. Thus species selection is actually a result of individual selection.
Competition may occur for any needed resource. In the case of animals it is usually food; in the case of forest plants it may be light; in the case of substrate inhabitants it may be space, as in many shallow-water bottom-dwelling marine organisms. Indeed, it may be for any of the factors, physical as well as biotic, that are essential for organisms. Competition is usually more severe the denser the population. Together with predation, it is the most important density-dependent factor in regulating population growth. | 3924.txt | 3