input (stringlengths 2.6k–28.8k) | output (stringlengths 4–150)
---|---
Context:
in 1738. the spinning jenny, invented in 1764, was a machine that used multiple spinning wheels ; however, it produced low quality thread. the water frame patented by richard arkwright in 1767, produced a better quality thread than the spinning jenny. the spinning mule, patented in 1779 by samuel crompton, produced a high quality thread. the power loom was invented by edmund cartwright in 1787. in the mid - 1750s, the steam engine was applied to the water power - constrained iron, copper and lead industries for powering blast bellows. these industries were located near the mines, some of which were using steam engines for mine pumping. steam engines were too powerful for leather bellows, so cast iron blowing cylinders were developed in 1768. steam powered blast furnaces achieved higher temperatures, allowing the use of more lime in iron blast furnace feed. ( lime rich slag was not free - flowing at the previously used temperatures. ) with a sufficient lime ratio, sulfur from coal or coke fuel reacts with the slag so that the sulfur does not contaminate the iron. coal and coke were cheaper and more abundant fuel. as a result, iron production rose significantly during the last decades of the 18th century. coal converted to coke fueled higher temperature blast furnaces and produced cast iron in much larger amounts than before, allowing the creation of a range of structures such as the iron bridge. cheap coal meant that industry was no longer constrained by water resources driving the mills, although it continued as a valuable source of power. the steam engine helped drain the mines, so more coal reserves could be accessed, and the output of coal increased. the development of the high - pressure steam engine made locomotives possible, and a transport revolution followed. the steam engine which had existed since the early 18th century, was practically applied to both steamboat and railway transportation. the liverpool and manchester railway, the first purpose - built railway line, opened in 1830, the rocket locomotive of robert stephenson being one of its first working locomotives used. manufacture of ships ' pulley blocks by all - metal machines at the portsmouth block mills in 1803 instigated the age of sustained mass production. machine tools used by engineers to manufacture parts began in the first decade of the century, notably by richard roberts and joseph whitworth. the development of interchangeable parts through what is now called the american system of manufacturing began in the firearms industry at the u. s. federal arsenals in the early 19th century, and became widely used by the end of the century. until the enlightenment era, little progress
masculinity and warmth. the five phases β fire, earth, metal, wood, and water β described a cycle of transformations in nature. the water turned into wood, which turned into the fire when it burned. the ashes left by fire were earth. using these principles, chinese philosophers and doctors explored human anatomy, characterizing organs as predominantly yin or yang, and understood the relationship between the pulse, the heart, and the flow of blood in the body centuries before it became accepted in the west. little evidence survives of how ancient indian cultures around the indus river understood nature, but some of their perspectives may be reflected in the vedas, a set of sacred hindu texts. they reveal a conception of the universe as ever - expanding and constantly being recycled and reformed. surgeons in the ayurvedic tradition saw health and illness as a combination of three humors : wind, bile and phlegm. a healthy life resulted from a balance among these humors. in ayurvedic thought, the body consisted of five elements : earth, water, fire, wind, and space. ayurvedic surgeons performed complex surgeries and developed a detailed understanding of human anatomy. pre - socratic philosophers in ancient greek culture brought natural philosophy a step closer to direct inquiry about cause and effect in nature between 600 and 400 bc. however, an element of magic and mythology remained. natural phenomena such as earthquakes and eclipses were explained increasingly in the context of nature itself instead of being attributed to angry gods. thales of miletus, an early philosopher who lived from 625 to 546 bc, explained earthquakes by theorizing that the world floated on water and that water was the fundamental element in nature. in the 5th century bc, leucippus was an early exponent of atomism, the idea that the world is made up of fundamental indivisible particles. pythagoras applied greek innovations in mathematics to astronomy and suggested that the earth was spherical. = = = aristotelian natural philosophy ( 400 bc β 1100 ad ) = = = later socratic and platonic thought focused on ethics, morals, and art and did not attempt an investigation of the physical world ; plato criticized pre - socratic thinkers as materialists and anti - religionists. aristotle, however, a student of plato who lived from 384 to 322 bc, paid closer attention to the natural world in his philosophy. in his history of animals, he described the inner workings of 110 species, including the stingray, catfish and
was used before copper smelting was known. copper smelting is believed to have originated when the technology of pottery kilns allowed sufficiently high temperatures. the concentration of various elements such as arsenic increase with depth in copper ore deposits and smelting of these ores yields arsenical bronze, which can be sufficiently work hardened to be suitable for making tools. bronze is an alloy of copper with tin ; the latter being found in relatively few deposits globally caused a long time to elapse before true tin bronze became widespread. ( see : tin sources and trade in ancient times ) bronze was a major advancement over stone as a material for making tools, both because of its mechanical properties like strength and ductility and because it could be cast in molds to make intricately shaped objects. bronze significantly advanced shipbuilding technology with better tools and bronze nails. bronze nails replaced the old method of attaching boards of the hull with cord woven through drilled holes. better ships enabled long - distance trade and the advance of civilization. this technological trend apparently began in the fertile crescent and spread outward over time. these developments were not, and still are not, universal. the three - age system does not accurately describe the technology history of groups outside of eurasia, and does not apply at all in the case of some isolated populations, such as the spinifex people, the sentinelese, and various amazonian tribes, which still make use of stone age technology, and have not developed agricultural or metal technology. these villages preserve traditional customs in the face of global modernity, exhibiting a remarkable resistance to the rapid advancement of technology. = = = = iron age = = = = before iron smelting was developed the only iron was obtained from meteorites and is usually identified by having nickel content. meteoric iron was rare and valuable, but was sometimes used to make tools and other implements, such as fish hooks. the iron age involved the adoption of iron smelting technology. it generally replaced bronze and made it possible to produce tools which were stronger, lighter and cheaper to make than bronze equivalents. the raw materials to make iron, such as ore and limestone, are far more abundant than copper and especially tin ores. consequently, iron was produced in many areas. it was not possible to mass manufacture steel or pure iron because of the high temperatures required. furnaces could reach melting temperature but the crucibles and molds needed for melting and casting had not been developed. steel could be produced by forging bloomery iron to reduce the carbon content in a
casting, also called the lost wax process, die casting, centrifugal casting, both vertical and horizontal, and continuous castings. each of these forms has advantages for certain metals and applications considering factors like magnetism and corrosion. forging β a red - hot billet is hammered into shape. rolling β a billet is passed through successively narrower rollers to create a sheet. extrusion β a hot and malleable metal is forced under pressure through a die, which shapes it before it cools. machining β lathes, milling machines and drills cut the cold metal to shape. sintering β a powdered metal is heated in a non - oxidizing environment after being compressed into a die. fabrication β sheets of metal are cut with guillotines or gas cutters and bent and welded into structural shape. laser cladding β metallic powder is blown through a movable laser beam ( e. g. mounted on a nc 5 - axis machine ). the resulting melted metal reaches a substrate to form a melt pool. by moving the laser head, it is possible to stack the tracks and build up a three - dimensional piece. 3d printing β sintering or melting amorphous powder metal in a 3d space to make any object to shape. cold - working processes, in which the product ' s shape is altered by rolling, fabrication or other processes, while the product is cold, can increase the strength of the product by a process called work hardening. work hardening creates microscopic defects in the metal, which resist further changes of shape. = = = heat treatment = = = metals can be heat - treated to alter the properties of strength, ductility, toughness, hardness and resistance to corrosion. common heat treatment processes include annealing, precipitation strengthening, quenching, and tempering : annealing process softens the metal by heating it and then allowing it to cool very slowly, which gets rid of stresses in the metal and makes the grain structure large and soft - edged so that, when the metal is hit or stressed it dents or perhaps bends, rather than breaking ; it is also easier to sand, grind, or cut annealed metal. quenching is the process of cooling metal very quickly after heating, thus " freezing " the metal ' s molecules in the very hard martensite form, which makes the metal harder. tempering relieves stresses in the metal that were caused by the hardening process ; tempering makes the metal less hard while making it better able to sustain
high quality thread. the power loom was invented by edmund cartwright in 1787. in the mid - 1750s, the steam engine was applied to the water power - constrained iron, copper and lead industries for powering blast bellows. these industries were located near the mines, some of which were using steam engines for mine pumping. steam engines were too powerful for leather bellows, so cast iron blowing cylinders were developed in 1768. steam powered blast furnaces achieved higher temperatures, allowing the use of more lime in iron blast furnace feed. ( lime rich slag was not free - flowing at the previously used temperatures. ) with a sufficient lime ratio, sulfur from coal or coke fuel reacts with the slag so that the sulfur does not contaminate the iron. coal and coke were cheaper and more abundant fuel. as a result, iron production rose significantly during the last decades of the 18th century. coal converted to coke fueled higher temperature blast furnaces and produced cast iron in much larger amounts than before, allowing the creation of a range of structures such as the iron bridge. cheap coal meant that industry was no longer constrained by water resources driving the mills, although it continued as a valuable source of power. the steam engine helped drain the mines, so more coal reserves could be accessed, and the output of coal increased. the development of the high - pressure steam engine made locomotives possible, and a transport revolution followed. the steam engine which had existed since the early 18th century, was practically applied to both steamboat and railway transportation. the liverpool and manchester railway, the first purpose - built railway line, opened in 1830, the rocket locomotive of robert stephenson being one of its first working locomotives used. manufacture of ships ' pulley blocks by all - metal machines at the portsmouth block mills in 1803 instigated the age of sustained mass production. machine tools used by engineers to manufacture parts began in the first decade of the century, notably by richard roberts and joseph whitworth. the development of interchangeable parts through what is now called the american system of manufacturing began in the firearms industry at the u. s. federal arsenals in the early 19th century, and became widely used by the end of the century. until the enlightenment era, little progress was made in water supply and sanitation and the engineering skills of the romans were largely neglected throughout europe. the first documented use of sand filters to purify the water supply dates to 1804, when the owner of a bleachery in paisley, scotland, john gibb, installed an experimental filter, selling his unwanted
a tradition of scientific inquiry also emerged in ancient china, where taoist alchemists and philosophers experimented with elixirs to extend life and cure ailments. they focused on the yin and yang, or contrasting elements in nature ; the yin was associated with femininity and coldness, while yang was associated with masculinity and warmth. the five phases β fire, earth, metal, wood, and water β described a cycle of transformations in nature. the water turned into wood, which turned into the fire when it burned. the ashes left by fire were earth. using these principles, chinese philosophers and doctors explored human anatomy, characterizing organs as predominantly yin or yang, and understood the relationship between the pulse, the heart, and the flow of blood in the body centuries before it became accepted in the west. little evidence survives of how ancient indian cultures around the indus river understood nature, but some of their perspectives may be reflected in the vedas, a set of sacred hindu texts. they reveal a conception of the universe as ever - expanding and constantly being recycled and reformed. surgeons in the ayurvedic tradition saw health and illness as a combination of three humors : wind, bile and phlegm. a healthy life resulted from a balance among these humors. in ayurvedic thought, the body consisted of five elements : earth, water, fire, wind, and space. ayurvedic surgeons performed complex surgeries and developed a detailed understanding of human anatomy. pre - socratic philosophers in ancient greek culture brought natural philosophy a step closer to direct inquiry about cause and effect in nature between 600 and 400 bc. however, an element of magic and mythology remained. natural phenomena such as earthquakes and eclipses were explained increasingly in the context of nature itself instead of being attributed to angry gods. thales of miletus, an early philosopher who lived from 625 to 546 bc, explained earthquakes by theorizing that the world floated on water and that water was the fundamental element in nature. in the 5th century bc, leucippus was an early exponent of atomism, the idea that the world is made up of fundamental indivisible particles. pythagoras applied greek innovations in mathematics to astronomy and suggested that the earth was spherical. = = = aristotelian natural philosophy ( 400 bc β 1100 ad ) = = = later socratic and platonic thought focused on ethics, morals, and art and did not attempt an investigation of the physical world ; plato criticized
, heat from friction during rolling can cause problems for metal bearings ; problems which are reduced by the use of ceramics. ceramics are also more chemically resistant and can be used in wet environments where steel bearings would rust. the major drawback to using ceramics is a significantly higher cost. in many cases their electrically insulating properties may also be valuable in bearings. in the early 1980s, toyota researched production of an adiabatic ceramic engine which can run at a temperature of over 6000 Β°f ( 3300 Β°c ). ceramic engines do not require a cooling system and hence allow a major weight reduction and therefore greater fuel efficiency. fuel efficiency of the engine is also higher at high temperature, as shown by carnot ' s theorem. in a conventional metallic engine, much of the energy released from the fuel must be dissipated as waste heat in order to prevent a meltdown of the metallic parts. despite all of these desirable properties, such engines are not in production because the manufacturing of ceramic parts in the requisite precision and durability is difficult. imperfection in the ceramic leads to cracks, which can lead to potentially dangerous equipment failure. such engines are possible in laboratory settings, but mass - production is not feasible with current technology. work is being done in developing ceramic parts for gas turbine engines. currently, even blades made of advanced metal alloys used in the engines ' hot section require cooling and careful limiting of operating temperatures. turbine engines made with ceramics could operate more efficiently, giving aircraft greater range and payload for a set amount of fuel. recently, there have been advances in ceramics which include bio - ceramics, such as dental implants and synthetic bones. hydroxyapatite, the natural mineral component of bone, has been made synthetically from a number of biological and chemical sources and can be formed into ceramic materials. orthopedic implants made from these materials bond readily to bone and other tissues in the body without rejection or inflammatory reactions. because of this, they are of great interest for gene delivery and tissue engineering scaffolds. most hydroxyapatite ceramics are very porous and lack mechanical strength and are used to coat metal orthopedic devices to aid in forming a bond to bone or as bone fillers. they are also used as fillers for orthopedic plastic screws to aid in reducing the inflammation and increase absorption of these plastic materials. work is being done to make strong, fully dense nano crystalline hydroxyapatite ceramic materials for orthopedic weight bearing devices, replacing foreign metal and plastic orthopedic materials
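The efficiency claim above leans on Carnot's theorem: the hotter the working temperature relative to the surroundings, the higher the ideal efficiency bound, which is why an uncooled ceramic engine could in principle use its fuel better. A minimal sketch in Python; the temperatures for the conventional engine and the ambient are illustrative assumptions, and only the ~3300 °C figure comes from the passage.

```python
# Illustrative comparison of ideal Carnot efficiencies for a conventional,
# cooled metallic engine versus a hypothetical uncooled ceramic engine.
# All temperatures except the ~3300 degC figure quoted above are assumptions.

def carnot_efficiency(t_hot_c: float, t_cold_c: float) -> float:
    """Upper bound on thermal efficiency, eta = 1 - Tc/Th (temperatures in kelvin)."""
    t_hot_k = t_hot_c + 273.15
    t_cold_k = t_cold_c + 273.15
    return 1.0 - t_cold_k / t_hot_k

# Conventional metallic engine: gas temperature limited by the cooling system.
print(f"metal engine  : {carnot_efficiency(900.0, 25.0):.2f}")   # ~0.75
# Uncooled ceramic engine running near the quoted 3300 degC.
print(f"ceramic engine: {carnot_efficiency(3300.0, 25.0):.2f}")  # ~0.92
```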
his sickle to one location. ( he realized it was a sickle by testing various blades on an animal carcass and comparing the wounds. ) flies, attracted by the smell of blood, eventually gathered on a single sickle. in light of this, the owner of that sickle confessed to the murder. the book also described how to distinguish between a drowning ( water in the lungs ) and strangulation ( broken neck cartilage ), and described evidence from examining corpses to determine if a death was caused by murder, suicide or accident. methods from around the world involved saliva and examination of the mouth and tongue to determine innocence or guilt, as a precursor to the polygraph test. in ancient india, some suspects were made to fill their mouths with dried rice and spit it back out. similarly, in ancient china, those accused of a crime would have rice powder placed in their mouths. in ancient middle - eastern cultures, the accused were made to lick hot metal rods briefly. it is thought that these tests had some validity since a guilty person would produce less saliva and thus have a drier mouth ; the accused would be considered guilty if rice was sticking to their mouths in abundance or if their tongues were severely burned due to lack of shielding from saliva. = = education and training = = initial glance, forensic intelligence may appear as a nascent facet of forensic science facilitated by advancements in information technologies such as computers, databases, and data - flow management software. however, a more profound examination reveals that forensic intelligence represents a genuine and emerging inclination among forensic practitioners to actively participate in investigative and policing strategies. in doing so, it elucidates existing practices within scientific literature, advocating for a paradigm shift from the prevailing conception of forensic science as a conglomerate of disciplines merely aiding the criminal justice system. instead, it urges a perspective that views forensic science as a discipline studying the informative potential of traces β remnants of criminal activity. embracing this transformative shift poses a significant challenge for education, necessitating a shift in learners ' mindset to accept concepts and methodologies in forensic intelligence. recent calls advocating for the integration of forensic scientists into the criminal justice system, as well as policing and intelligence missions, underscore the necessity for the establishment of educational and training initiatives in the field of forensic intelligence. this article contends that a discernible gap exists between the perceived and actual comprehension of forensic intelligence among law enforcement and forensic science managers, positing that this asymmetry can be rectified only through educational interventions.
is also higher at high temperature, as shown by carnot ' s theorem. in a conventional metallic engine, much of the energy released from the fuel must be dissipated as waste heat in order to prevent a meltdown of the metallic parts. despite all of these desirable properties, such engines are not in production because the manufacturing of ceramic parts in the requisite precision and durability is difficult. imperfection in the ceramic leads to cracks, which can lead to potentially dangerous equipment failure. such engines are possible in laboratory settings, but mass - production is not feasible with current technology. work is being done in developing ceramic parts for gas turbine engines. currently, even blades made of advanced metal alloys used in the engines ' hot section require cooling and careful limiting of operating temperatures. turbine engines made with ceramics could operate more efficiently, giving aircraft greater range and payload for a set amount of fuel. recently, there have been advances in ceramics which include bio - ceramics, such as dental implants and synthetic bones. hydroxyapatite, the natural mineral component of bone, has been made synthetically from a number of biological and chemical sources and can be formed into ceramic materials. orthopedic implants made from these materials bond readily to bone and other tissues in the body without rejection or inflammatory reactions. because of this, they are of great interest for gene delivery and tissue engineering scaffolds. most hydroxyapatite ceramics are very porous and lack mechanical strength and are used to coat metal orthopedic devices to aid in forming a bond to bone or as bone fillers. they are also used as fillers for orthopedic plastic screws to aid in reducing the inflammation and increase absorption of these plastic materials. work is being done to make strong, fully dense nano crystalline hydroxyapatite ceramic materials for orthopedic weight bearing devices, replacing foreign metal and plastic orthopedic materials with a synthetic, but naturally occurring, bone mineral. ultimately these ceramic materials may be used as bone replacements or with the incorporation of protein collagens, synthetic bones. durable actinide - containing ceramic materials have many applications such as in nuclear fuels for burning excess pu and in chemically - inert sources of alpha irradiation for power supply of unmanned space vehicles or to produce electricity for microelectronic devices. both use and disposal of radioactive actinides require their immobilization in a durable host material. nuclear waste long - lived radionuclides such as actinides are immobilized using chemical
. the first major technologies were tied to survival, hunting, and food preparation. stone tools and weapons, fire, and clothing were technological developments of major importance during this period. human ancestors have been using stone and other tools since long before the emergence of homo sapiens approximately 300, 000 years ago. the earliest direct evidence of tool usage was found in ethiopia within the great rift valley, dating back to 2. 5 million years ago. the earliest methods of stone tool making, known as the oldowan " industry ", date back to at least 2. 3 million years ago. this era of stone tool use is called the paleolithic, or " old stone age ", and spans all of human history up to the development of agriculture approximately 12, 000 years ago. to make a stone tool, a " core " of hard stone with specific flaking properties ( such as flint ) was struck with a hammerstone. this flaking produced sharp edges which could be used as tools, primarily in the form of choppers or scrapers. these tools greatly aided the early humans in their hunter - gatherer lifestyle to perform a variety of tasks including butchering carcasses ( and breaking bones to get at the marrow ) ; chopping wood ; cracking open nuts ; skinning an animal for its hide, and even forming other tools out of softer materials such as bone and wood. the earliest stone tools were irrelevant, being little more than a fractured rock. in the acheulian era, beginning approximately 1. 65 million years ago, methods of working these stones into specific shapes, such as hand axes emerged. this early stone age is described as the lower paleolithic. the middle paleolithic, approximately 300, 000 years ago, saw the introduction of the prepared - core technique, where multiple blades could be rapidly formed from a single core stone. the upper paleolithic, beginning approximately 40, 000 years ago, saw the introduction of pressure flaking, where a wood, bone, or antler punch could be used to shape a stone very finely. the end of the last ice age about 10, 000 years ago is taken as the end point of the upper paleolithic and the beginning of the epipaleolithic / mesolithic. the mesolithic technology included the use of microliths as composite stone tools, along with wood, bone, and antler tools. the later stone age, during which the rudiments of agricultural technology were developed, is called the neolithic period. during this period,
Question: A metal spoon was left in a pot of boiling soup. The cook burned a finger by touching the spoon. Why did the finger get burned?
A) The metal spoon chemically reacted with the cook's hand.
B) The metal spoon conducted electricity to the cook's hand.
C) The metal spoon conducted heat to the cook's hand.
D) The metal spoon insulated the cook's hand.
|
C) The metal spoon conducted heat to the cook's hand.
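The answer hinges on thermal conduction. A minimal sketch of steady-state conduction through the spoon handle using Fourier's law, Q = k·A·ΔT/L; the handle dimensions and conductivity values are illustrative assumptions, not data from the question.

```python
# Rough steady-state heat flow through a spoon handle, Q = k * A * dT / L.
# Dimensions and conductivities below are illustrative assumptions only.

def heat_flow_w(k_w_per_m_k: float, area_m2: float, dt_k: float, length_m: float) -> float:
    """Fourier's law for one-dimensional steady conduction."""
    return k_w_per_m_k * area_m2 * dt_k / length_m

area = 5e-3 * 2e-3          # 5 mm x 2 mm handle cross-section
dt = 100.0 - 35.0           # boiling soup versus skin temperature, in kelvin
length = 0.15               # 15 cm handle

for name, k in [("stainless steel", 16.0), ("wood", 0.15)]:
    print(f"{name:>15}: {heat_flow_w(k, area, dt, length):.3f} W")
# The steel spoon conducts heat roughly two orders of magnitude faster than a
# wooden one, which is why the cook's finger gets burned.
```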
|
Context:
in gravitational lensing, the concept of optical depth assumes the lens is dark. several microlensing detections have now been made where the lens may be bright. relations are developed between apparent and absolute optical depth in the regime of the apparent and absolute brightness of the lens. an apparent optical depth through bright lenses is always less than the true, absolute optical depth. the greater the intrinsic brightness of the lens, the more likely it will be found nearer the source.
##physical processes which take place in human beings as they make sense of information received through the visual system. the subject of the image. when developing an imaging system, designers must consider the observables associated with the subjects which will be imaged. these observables generally take the form of emitted or reflected energy, such as electromagnetic energy or mechanical energy. the capture device. once the observables associated with the subject are characterized, designers can then identify and integrate the technologies needed to capture those observables. for example, in the case of consumer digital cameras, those technologies include optics for collecting energy in the visible portion of the electromagnetic spectrum, and electronic detectors for converting the electromagnetic energy into an electronic signal. the processor. for all digital imaging systems, the electronic signals produced by the capture device must be manipulated by an algorithm which formats the signals so they can be displayed as an image. in practice, there are often multiple processors involved in the creation of a digital image. the display. the display takes the electronic signals which have been manipulated by the processor and renders them on some visual medium. examples include paper ( for printed, or " hard copy " images ), television, computer monitor, or projector. note that some imaging scientists will include additional " links " in their description of the imaging chain. for example, some will include the " source " of the energy which " illuminates " or interacts with the subject of the image. others will include storage and / or transmission systems. = = subfields = = subfields within imaging science include : image processing, computer vision, 3d computer graphics, animations, atmospheric optics, astronomical imaging, biological imaging, digital image restoration, digital imaging, color science, digital photography, holography, magnetic resonance imaging, medical imaging, microdensitometry, optics, photography, remote sensing, radar imaging, radiometry, silver halide, ultrasound imaging, photoacoustic imaging, thermal imaging, visual perception, and various printing technologies. = = methodologies = = acoustic imaging coherent imaging uses an active coherent illumination source, such as in radar, synthetic aperture radar ( sar ), medical ultrasound and optical coherence tomography ; non - coherent imaging systems include fluorescent microscopes, optical microscopes, and telescopes. chemical imaging, the simultaneous measurement of spectra and pictures digital imaging, creating digital images, generally by scanning or through digital photography disk image, a file which contains the exact content of a data storage medium document imaging, replicating documents commonly
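The "links" described above (subject observables, capture device, processor, display) form a pipeline in which each stage hands a signal to the next. A toy sketch of that chain; the stage names and the trivial transformations are assumptions made up purely for illustration.

```python
from dataclasses import dataclass
from typing import Callable, List

# Toy model of the imaging chain: observables -> capture -> processor -> display.
# Each stage is just a function from one signal to the next.

@dataclass
class Stage:
    name: str
    transform: Callable[[List[float]], List[float]]

def run_chain(observables: List[float], stages: List[Stage]) -> List[float]:
    signal = observables
    for stage in stages:
        signal = stage.transform(signal)
        print(f"after {stage.name}: {signal}")
    return signal

chain = [
    Stage("capture device", lambda s: [min(x, 1.0) for x in s]),       # clip to sensor range
    Stage("processor", lambda s: [round(x * 255.0, 0) for x in s]),    # quantise to 8-bit levels
    Stage("display", lambda s: s),                                     # render as-is
]
run_chain([0.2, 0.8, 1.4], chain)
```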
the luminosity variation of a stellar source due to the gravitational microlensing effect can be considered also if the light rays are defocused ( instead of focused ) toward the observer. in this case, we should detect a gap instead of a peak in the light curve of the source. actually, we describe how the phenomenon depends on the relative position of source and lens with respect to the observer : if the lens is between, we have focusing, if the lens is behind, we have defocusing. it is shown that the number of events with predicted gaps is equal to the number of events with peaks in the light curves.
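For the ordinary focusing case mentioned above, the peak in the light curve follows the standard point-lens magnification formula. A minimal sketch assuming a single point lens with illustrative event parameters; the defocusing (gap) case described in the abstract is not reproduced here.

```python
import math

def magnification(u: float) -> float:
    """Standard point-lens microlensing magnification, A(u) = (u^2 + 2) / (u * sqrt(u^2 + 4))."""
    return (u * u + 2.0) / (u * math.sqrt(u * u + 4.0))

def light_curve(t: float, t0: float, tE: float, u0: float) -> float:
    """Magnification at time t for impact parameter u0 and Einstein crossing time tE (days)."""
    u = math.sqrt(u0 * u0 + ((t - t0) / tE) ** 2)
    return magnification(u)

# The peak occurs at t = t0, the moment of closest lens-source approach.
for t in range(-30, 31, 10):
    print(f"t = {t:+3d} d  A = {light_curve(t, t0=0.0, tE=20.0, u0=0.3):.2f}")
```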
of measuring methods. x - rays and gamma rays are used in industrial radiography to make images of the inside of solid products, as a means of nondestructive testing and inspection. the piece to be radiographed is placed between the source and a photographic film in a cassette. after a certain exposure time, the film is developed and it shows any internal defects of the material. gauges - gauges use the exponential absorption law of gamma rays level indicators : source and detector are placed at opposite sides of a container, indicating the presence or absence of material in the horizontal radiation path. beta or gamma sources are used, depending on the thickness and the density of the material to be measured. the method is used for containers of liquids or of grainy substances thickness gauges : if the material is of constant density, the signal measured by the radiation detector depends on the thickness of the material. this is useful for continuous production, like of paper, rubber, etc. electrostatic control - to avoid the build - up of static electricity in production of paper, plastics, synthetic textiles, etc., a ribbon - shaped source of the alpha emitter 241am can be placed close to the material at the end of the production line. the source ionizes the air to remove electric charges on the material. radioactive tracers - since radioactive isotopes behave, chemically, mostly like the inactive element, the behavior of a certain chemical substance can be followed by tracing the radioactivity. examples : adding a gamma tracer to a gas or liquid in a closed system makes it possible to find a hole in a tube. adding a tracer to the surface of the component of a motor makes it possible to measure wear by measuring the activity of the lubricating oil. oil and gas exploration - nuclear well logging is used to help predict the commercial viability of new or existing wells. the technology involves the use of a neutron or gamma - ray source and a radiation detector which are lowered into boreholes to determine the properties of the surrounding rock such as porosity and lithography. [ 1 ] road construction - nuclear moisture / density gauges are used to determine the density of soils, asphalt, and concrete. typically a cesium - 137 source is used. = = = commercial applications = = = radioluminescence tritium illumination : tritium is used with phosphor in rifle sights to increase nighttime firing accuracy. some runway markers and building exit signs use the same technology, to remain illuminated during blackouts. betavoltaics
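The thickness gauge described above works by inverting the exponential absorption law I = I0·exp(−μx). A minimal sketch; the attenuation coefficient and count rates are illustrative assumptions, not tabulated values for any particular material or gamma energy.

```python
import math

# Thickness gauging from the exponential absorption law I = I0 * exp(-mu * x).
# mu (linear attenuation coefficient) is an assumed, illustrative value.

def thickness_from_counts(i_measured: float, i_unattenuated: float, mu_per_mm: float) -> float:
    """Invert the attenuation law to estimate material thickness in mm."""
    return -math.log(i_measured / i_unattenuated) / mu_per_mm

mu = 0.02            # assumed attenuation coefficient, 1/mm
i0 = 10_000.0        # detector counts with no material in the beam
for counts in (9000.0, 7500.0, 5000.0):
    print(f"{counts:7.0f} counts -> {thickness_from_counts(counts, i0, mu):6.2f} mm")
```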
the curvature radiation is applied to the explain the circular polarization of frbs. significant circular polarization is reported in both apparently non - repeating and repeating frbs. curvature radiation can produce significant circular polarization at the wing of the radiation beam. in the curvature radiation scenario, in order to see significant circular polarization in frbs ( 1 ) more energetic bursts, ( 2 ) burst with electrons having higher lorentz factor, ( 3 ) a slowly rotating neutron star at the centre are required. different rotational period of the central neutron star may explain why some frbs have high circular polarization, while others don ' t. considering possible difference in refractive index for the parallel and perpendicular component of electric field, the position angle may change rapidly over the narrow pulse window of the radiation beam. the position angle swing in frbs may also be explained by this non - geometric origin, besides that of the rotating vector model.
it is hard for us humans to recognize things in nature until we have invented them ourselves. for image - forming optics, nature has made virtually every kind of lens humans have devised. but what about lensless " imaging "? recently, we showed that a bare array of sensors on a curved substrate could achieve resolution not limited by diffraction - without any lens at all provided that the objects imaged conform to our a priori assumptions. is it possible that somewhere in nature we will find this kind of vision system? we think so and provide examples that seem to make no sense whatever unless they are using something like our lensless imaging work.
beacon transmits two signals simultaneously on different frequencies. a directional antenna transmits a beam of radio waves that rotates like a lighthouse at a fixed rate, 30 times per second. when the directional beam is facing north, an omnidirectional antenna transmits a pulse. by measuring the difference in phase of these two signals, an aircraft can determine its bearing ( or " radial " ) from the station accurately. by taking a bearing on two vor beacons an aircraft can determine its position ( called a " fix " ) to an accuracy of about 90 metres ( 300 ft ). most vor beacons also have a distance measuring capability, called distance measuring equipment ( dme ) ; these are called vor / dme ' s. the aircraft transmits a radio signal to the vor / dme beacon and a transponder transmits a return signal. from the propagation delay between the transmitted and received signal the aircraft can calculate its distance from the beacon. this allows an aircraft to determine its location " fix " from only one vor beacon. since line - of - sight vhf frequencies are used vor beacons have a range of about 200 miles for aircraft at cruising altitude. tacan is a similar military radio beacon system which transmits in 962 β 1213 mhz, and a combined vor and tacan beacon is called a vortac. the number of vor beacons is declining as aviation switches to the rnav system that relies on global positioning system satellite navigation. instrument landing system ( ils ) - a short range radio navigation aid at airports which guides aircraft landing in low visibility conditions. it consists of multiple antennas at the end of each runway that radiate two beams of radio waves along the approach to the runway : the localizer ( 108 to 111. 95 mhz frequency ), which provides horizontal guidance, a heading line to keep the aircraft centered on the runway, and the glideslope ( 329. 15 to 335 mhz ) for vertical guidance, to keep the aircraft descending at the proper rate for a smooth touchdown at the correct point on the runway. each aircraft has a receiver instrument and antenna which receives the beams, with an indicator to tell the pilot whether he is on the correct horizontal and vertical approach. the ils beams are receivable for at least 15 miles, and have a radiated power of 25 watts. ils systems at airports are being replaced by systems that use satellite navigation. non - directional beacon ( ndb ) β legacy fixed radio beacons used before the vo
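Two back-of-the-envelope sketches of the calculations described above: the radial from the delay between the north reference pulse and the rotating beam, and the DME slant range from the round-trip delay. The fixed transponder reply delay is an assumption added for illustration; the passage only says a return signal is transmitted.

```python
# (a) VOR radial: the beam sweeps a full circle 30 times per second and a
#     reference pulse marks the instant it points north, so the delay between
#     that pulse and the beam passing the aircraft maps linearly to the radial.
# (b) DME distance: half the round-trip time multiplied by the speed of light,
#     after subtracting an assumed fixed transponder reply delay.

ROTATION_HZ = 30.0
C_KM_PER_S = 299_792.458

def vor_radial_deg(delay_after_north_pulse_s: float) -> float:
    """Bearing from the station, in degrees, from the sweep delay."""
    return (delay_after_north_pulse_s * ROTATION_HZ * 360.0) % 360.0

def dme_distance_km(round_trip_s: float, reply_delay_s: float = 50e-6) -> float:
    """Slant range from the aircraft to the DME transponder."""
    return (round_trip_s - reply_delay_s) * C_KM_PER_S / 2.0

print(f"radial  : {vor_radial_deg(1.0 / 120.0):.1f} deg")   # a quarter of a sweep -> 90 deg
print(f"distance: {dme_distance_km(400e-6):.1f} km")        # ~52.5 km
```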
##hography is quite small, large area patterns must be created by stitching together the small fields. ion track technology is a deep cutting tool with a resolution limit around 8 nm applicable to radiation resistant minerals, glasses and polymers. it is capable of generating holes in thin films without any development process. structural depth can be defined either by ion range or by material thickness. aspect ratios up to several 104 can be reached. the technique can shape and texture materials at a defined inclination angle. random pattern, single - ion track structures and an aimed pattern consisting of individual single tracks can be generated. x - ray lithography is a process used in the electronic industry to selectively remove parts of a thin film. it uses x - rays to transfer a geometric pattern from a mask to a light - sensitive chemical photoresist, or simply " resist ", on the substrate. a series of chemical treatments then engraves the produced pattern into the material underneath the photoresist. diamond patterning is a method of forming diamond mems. it is achieved by the lithographic application of diamond films to a substrate such as silicon. the patterns can be formed by selective deposition through a silicon dioxide mask, or by deposition followed by micromachining or focused ion beam milling. = = = etching processes = = = there are two basic categories of etching processes : wet etching and dry etching. in the former, the material is dissolved when immersed in a chemical solution. in the latter, the material is sputtered or dissolved using reactive ions or a vapor phase etchant. = = = = wet etching = = = = wet chemical etching consists of the selective removal of material by dipping a substrate into a solution that dissolves it. the chemical nature of this etching process provides good selectivity, which means the etching rate of the target material is considerably higher than the mask material if selected carefully. wet etching can be performed using either isotropic wet etchants or anisotropic wet etchants. isotropic wet etchant etch in all directions of the crystalline silicon at approximately equal rates. anisotropic wet etchants preferably etch along certain crystal planes at faster rates than other planes, thereby allowing more complicated 3 - d microstructures to be implemented. wet anisotropic etchants are often used in conjunction with boron etch stops wherein the surface of the silicon is heavily doped with boron resulting in a silicon material layer that is
the group velocity of light has been measured at eight different wavelengths between 385 nm and 532 nm in the mediterranean sea at a depth of about 2. 2 km with the antares optical beacon systems. a parametrisation of the dependence of the refractive index on wavelength based on the salinity, pressure and temperature of the sea water at the antares site is in good agreement with these measurements.
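The measurement above relates a wavelength-dependent refractive index to the group velocity via v_g = c / n_g with n_g = n − λ·dn/dλ. A toy sketch assuming a simple Cauchy-style dispersion; the coefficients are illustrative and are not the ANTARES sea-water parametrisation referred to in the abstract.

```python
C_M_PER_NS = 0.299792458  # speed of light in m/ns

def phase_index(lmbda_nm: float, a: float = 1.35, b: float = 4000.0) -> float:
    """Toy Cauchy-style dispersion, n(lambda) = a + b / lambda^2 (lambda in nm)."""
    return a + b / lmbda_nm**2

def group_index(lmbda_nm: float) -> float:
    """n_g = n - lambda * dn/dlambda, evaluated with a central difference."""
    d = 1e-3
    dn_dlambda = (phase_index(lmbda_nm + d) - phase_index(lmbda_nm - d)) / (2.0 * d)
    return phase_index(lmbda_nm) - lmbda_nm * dn_dlambda

for lam in (385.0, 450.0, 532.0):
    vg = C_M_PER_NS / group_index(lam)
    print(f"{lam:5.0f} nm: n_g = {group_index(lam):.4f}, v_g = {vg:.4f} m/ns")
```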
##directional range ( vor ) β a worldwide aircraft radio navigation system consisting of fixed ground radio beacons transmitting between 108. 00 and 117. 95 mhz in the very high frequency ( vhf ) band. an automated navigational instrument on the aircraft displays a bearing to a nearby vor transmitter. a vor beacon transmits two signals simultaneously on different frequencies. a directional antenna transmits a beam of radio waves that rotates like a lighthouse at a fixed rate, 30 times per second. when the directional beam is facing north, an omnidirectional antenna transmits a pulse. by measuring the difference in phase of these two signals, an aircraft can determine its bearing ( or " radial " ) from the station accurately. by taking a bearing on two vor beacons an aircraft can determine its position ( called a " fix " ) to an accuracy of about 90 metres ( 300 ft ). most vor beacons also have a distance measuring capability, called distance measuring equipment ( dme ) ; these are called vor / dme ' s. the aircraft transmits a radio signal to the vor / dme beacon and a transponder transmits a return signal. from the propagation delay between the transmitted and received signal the aircraft can calculate its distance from the beacon. this allows an aircraft to determine its location " fix " from only one vor beacon. since line - of - sight vhf frequencies are used vor beacons have a range of about 200 miles for aircraft at cruising altitude. tacan is a similar military radio beacon system which transmits in 962 β 1213 mhz, and a combined vor and tacan beacon is called a vortac. the number of vor beacons is declining as aviation switches to the rnav system that relies on global positioning system satellite navigation. instrument landing system ( ils ) - a short range radio navigation aid at airports which guides aircraft landing in low visibility conditions. it consists of multiple antennas at the end of each runway that radiate two beams of radio waves along the approach to the runway : the localizer ( 108 to 111. 95 mhz frequency ), which provides horizontal guidance, a heading line to keep the aircraft centered on the runway, and the glideslope ( 329. 15 to 335 mhz ) for vertical guidance, to keep the aircraft descending at the proper rate for a smooth touchdown at the correct point on the runway. each aircraft has a receiver instrument and antenna which receives the beams, with an indicator to tell the pilot whether he is
Question: Light rays are focused by the lens of a camera through the process of
A) reflection.
B) refraction.
C) dispersion.
D) diffraction.
|
B) refraction.
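Refraction is the bending of rays at the boundary between media of different refractive index, which is what lets a lens focus light. A minimal Snell's-law sketch; the indices are typical textbook values assumed for illustration.

```python
import math

def refracted_angle_deg(theta1_deg: float, n1: float, n2: float) -> float:
    """Angle of the transmitted ray from the surface normal, via n1*sin(t1) = n2*sin(t2)."""
    s = n1 * math.sin(math.radians(theta1_deg)) / n2
    return math.degrees(math.asin(s))

# A ray hitting a glass lens surface (air n = 1.00 -> crown glass n = 1.52):
print(f"{refracted_angle_deg(30.0, 1.00, 1.52):.1f} deg")  # ~19.2 deg, bent toward the normal
```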
|
Context:
have evolved from the earliest emergence of life to present day. earth formed about 4. 5 billion years ago and all life on earth, both living and extinct, descended from a last universal common ancestor that lived about 3. 5 billion years ago. geologists have developed a geologic time scale that divides the history of the earth into major divisions, starting with four eons ( hadean, archean, proterozoic, and phanerozoic ), the first three of which are collectively known as the precambrian, which lasted approximately 4 billion years. each eon can be divided into eras, with the phanerozoic eon that began 539 million years ago being subdivided into paleozoic, mesozoic, and cenozoic eras. these three eras together comprise eleven periods ( cambrian, ordovician, silurian, devonian, carboniferous, permian, triassic, jurassic, cretaceous, tertiary, and quaternary ). the similarities among all known present - day species indicate that they have diverged through the process of evolution from their common ancestor. biologists regard the ubiquity of the genetic code as evidence of universal common descent for all bacteria, archaea, and eukaryotes. microbial mats of coexisting bacteria and archaea were the dominant form of life in the early archean eon and many of the major steps in early evolution are thought to have taken place in this environment. the earliest evidence of eukaryotes dates from 1. 85 billion years ago, and while they may have been present earlier, their diversification accelerated when they started using oxygen in their metabolism. later, around 1. 7 billion years ago, multicellular organisms began to appear, with differentiated cells performing specialised functions. algae - like multicellular land plants are dated back to about 1 billion years ago, although evidence suggests that microorganisms formed the earliest terrestrial ecosystems, at least 2. 7 billion years ago. microorganisms are thought to have paved the way for the inception of land plants in the ordovician period. land plants were so successful that they are thought to have contributed to the late devonian extinction event. ediacara biota appear during the ediacaran period, while vertebrates, along with most other modern phyla originated about 525 million years ago during the cambrian explosion. during the permian period, synapsids, including the ancestors of mammals, dominated the land, but most of this group became
the fundamental constants could not influence different elements uniformly, and a comparison between each of the elements ' resulting unique chronological timescales would then give inconsistent time estimates. in refutation of young earth claims of inconstant decay rates affecting the reliability of radiometric dating, roger c. wiens, a physicist specializing in isotope dating states : there are only three quite technical instances where a half - life changes, and these do not affect the dating methods : " only one technical exception occurs under terrestrial conditions, and this is not for an isotope used for dating.... the artificially - produced isotope, beryllium - 7 has been shown to change by up to 1. 5 %, depending on its chemical environment.... heavier atoms are even less subject to these minute changes, so the dates of rocks made by electron - capture decays would only be off by at most a few hundredths of a percent. " "... another case is material inside of stars, which is in a plasma state where electrons are not bound to atoms. in the extremely hot stellar environment, a completely different kind of decay can occur. ' bound - state beta decay ' occurs when the nucleus emits an electron into a bound electronic state close to the nucleus.... all normal matter, such as everything on earth, the moon, meteorites, etc. has electrons in normal positions, so these instances never apply to rocks, or anything colder than several hundred thousand degrees. " " the last case also involves very fast - moving matter. it has been demonstrated by atomic clocks in very fast spacecraft. these atomic clocks slow down very slightly ( only a second or so per year ) as predicted by einstein ' s theory of relativity. no rocks in our solar system are going fast enough to make a noticeable change in their dates. " = = = = radiohaloes = = = = in the 1970s, young earth creationist robert v. gentry proposed that radiohaloes in certain granites represented evidence for the earth being created instantaneously rather than gradually. this idea has been criticized by physicists and geologists on many grounds including that the rocks gentry studied were not primordial and that the radionuclides in question need not have been in the rocks initially. thomas a. baillieul, a geologist and retired senior environmental scientist with the united states department of energy, disputed gentry ' s claims in an article entitled, " ' polonium haloes ' refuted : a review of ' radioactive halos in a radio
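The argument above rests on decay rates being constant, which is what makes the age arithmetic work: with a fixed half-life, the daughter-to-parent ratio fixes the age as t = (t_half / ln 2)·ln(1 + D/P). A minimal sketch; the half-life and ratios are illustrative, and real potassium-argon dating also accounts for the branching fraction to argon, which is ignored here.

```python
import math

def radiometric_age_yr(daughter_to_parent: float, half_life_yr: float) -> float:
    """Age from the daughter/parent ratio, t = (t_half / ln 2) * ln(1 + D/P)."""
    return half_life_yr / math.log(2.0) * math.log(1.0 + daughter_to_parent)

# Isotope with a ~1.25-billion-year half-life (roughly that of potassium-40):
for ratio in (0.1, 0.5, 1.0):
    print(f"D/P = {ratio:4.1f} -> age ~ {radiometric_age_yr(ratio, 1.25e9):.3e} yr")
# D/P = 1.0 corresponds to exactly one half-life having elapsed.
```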
variation in total solar irradiance is thought to have little effect on the earth ' s surface temperature because of the thermal time constant - - the characteristic response time of the earth ' s global surface temperature to changes in forcing. this time constant is large enough to smooth annual variations but not necessarily variations having a longer period such as those due to solar inertial motion ; the magnitude of these surface temperature variations is estimated.
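The smoothing argument above is the behaviour of a first-order system: forcing with a period short compared with the time constant is strongly attenuated, while long-period forcing passes through almost unchanged. A minimal sketch; the time constant value is an illustrative assumption, not an estimate from the abstract.

```python
import math

# First-order low-pass response: sinusoidal forcing with period P through a
# system with time constant tau is attenuated by 1 / sqrt(1 + (2*pi*tau/P)^2).

def attenuation(period_yr: float, tau_yr: float) -> float:
    return 1.0 / math.sqrt(1.0 + (2.0 * math.pi * tau_yr / period_yr) ** 2)

TAU = 5.0  # assumed thermal time constant, years
for period in (1.0, 11.0, 100.0):
    print(f"forcing period {period:6.1f} yr -> amplitude retained {attenuation(period, TAU):.2f}")
# Annual variation is almost entirely smoothed out; century-scale forcing is not.
```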
earth. it emphasizes the study of how humans use and interact with freshwater supplies. study of water ' s movement is closely related to geomorphology and other branches of earth science. applied hydrology involves engineering to maintain aquatic environments and distribute water supplies. subdisciplines of hydrology include oceanography, hydrogeology, ecohydrology, and glaciology. oceanography is the study of oceans. hydrogeology is the study of groundwater. it includes the mapping of groundwater supplies and the analysis of groundwater contaminants. applied hydrogeology seeks to prevent contamination of groundwater and mineral springs and make it available as drinking water. the earliest exploitation of groundwater resources dates back to 3000 bc, and hydrogeology as a science was developed by hydrologists beginning in the 17th century. ecohydrology is the study of ecological systems in the hydrosphere. it can be divided into the physical study of aquatic ecosystems and the biological study of aquatic organisms. ecohydrology includes the effects that organisms and aquatic ecosystems have on one another as well as how these ecoystems are affected by humans. glaciology is the study of the cryosphere, including glaciers and coverage of the earth by ice and snow. concerns of glaciology include access to glacial freshwater, mitigation of glacial hazards, obtaining resources that exist beneath frozen land, and addressing the effects of climate change on the cryosphere. = = ecology = = ecology is the study of the biosphere. this includes the study of nature and of how living things interact with the earth and one another and the consequences of that. it considers how living things use resources such as oxygen, water, and nutrients from the earth to sustain themselves. it also considers how humans and other living creatures cause changes to nature. = = physical geography = = physical geography is the study of earth ' s systems and how they interact with one another as part of a single self - contained system. it incorporates astronomy, mathematical geography, meteorology, climatology, geology, geomorphology, biology, biogeography, pedology, and soils geography. physical geography is distinct from human geography, which studies the human populations on earth, though it does include human effects on the environment. = = methodology = = methodologies vary depending on the nature of the subjects being studied. studies typically fall into one of three categories : observational, experimental, or theoretical. earth scientists often conduct sophisticated computer analysis or visit an interesting location to study earth phenomena (
##rozoic eon that began 539 million years ago being subdivided into paleozoic, mesozoic, and cenozoic eras. these three eras together comprise eleven periods ( cambrian, ordovician, silurian, devonian, carboniferous, permian, triassic, jurassic, cretaceous, tertiary, and quaternary ). the similarities among all known present - day species indicate that they have diverged through the process of evolution from their common ancestor. biologists regard the ubiquity of the genetic code as evidence of universal common descent for all bacteria, archaea, and eukaryotes. microbial mats of coexisting bacteria and archaea were the dominant form of life in the early archean eon and many of the major steps in early evolution are thought to have taken place in this environment. the earliest evidence of eukaryotes dates from 1. 85 billion years ago, and while they may have been present earlier, their diversification accelerated when they started using oxygen in their metabolism. later, around 1. 7 billion years ago, multicellular organisms began to appear, with differentiated cells performing specialised functions. algae - like multicellular land plants are dated back to about 1 billion years ago, although evidence suggests that microorganisms formed the earliest terrestrial ecosystems, at least 2. 7 billion years ago. microorganisms are thought to have paved the way for the inception of land plants in the ordovician period. land plants were so successful that they are thought to have contributed to the late devonian extinction event. ediacara biota appear during the ediacaran period, while vertebrates, along with most other modern phyla originated about 525 million years ago during the cambrian explosion. during the permian period, synapsids, including the ancestors of mammals, dominated the land, but most of this group became extinct in the permian β triassic extinction event 252 million years ago. during the recovery from this catastrophe, archosaurs became the most abundant land vertebrates ; one archosaur group, the dinosaurs, dominated the jurassic and cretaceous periods. after the cretaceous β paleogene extinction event 66 million years ago killed off the non - avian dinosaurs, mammals increased rapidly in size and diversity. such mass extinctions may have accelerated evolution by providing opportunities for new groups of organisms to diversify. = = diversity = = = = = bacteria and archaea = = = bacteria are a type of cell that constitute a large domain of prokar
##sphere ( or lithosphere ). earth science can be considered to be a branch of planetary science but with a much older history. = = geology = = geology is broadly the study of earth ' s structure, substance, and processes. geology is largely the study of the lithosphere, or earth ' s surface, including the crust and rocks. it includes the physical characteristics and processes that occur in the lithosphere as well as how they are affected by geothermal energy. it incorporates aspects of chemistry, physics, and biology as elements of geology interact. historical geology is the application of geology to interpret earth history and how it has changed over time. geochemistry studies the chemical components and processes of the earth. geophysics studies the physical properties of the earth. paleontology studies fossilized biological material in the lithosphere. planetary geology studies geoscience as it pertains to extraterrestrial bodies. geomorphology studies the origin of landscapes. structural geology studies the deformation of rocks to produce mountains and lowlands. resource geology studies how energy resources can be obtained from minerals. environmental geology studies how pollution and contaminants affect soil and rock. mineralogy is the study of minerals and includes the study of mineral formation, crystal structure, hazards associated with minerals, and the physical and chemical properties of minerals. petrology is the study of rocks, including the formation and composition of rocks. petrography is a branch of petrology that studies the typology and classification of rocks. = = earth ' s interior = = plate tectonics, mountain ranges, volcanoes, and earthquakes are geological phenomena that can be explained in terms of physical and chemical processes in the earth ' s crust. beneath the earth ' s crust lies the mantle which is heated by the radioactive decay of heavy elements. the mantle is not quite solid and consists of magma which is in a state of semi - perpetual convection. this convection process causes the lithospheric plates to move, albeit slowly. the resulting process is known as plate tectonics. areas of the crust where new crust is created are called divergent boundaries, those where it is brought back into the earth are convergent boundaries and those where plates slide past each other, but no new lithospheric material is created or destroyed, are referred to as transform ( or conservative ) boundaries. earthquakes result from the movement of the lithospheric plates, and they often occur near convergent boundaries where parts of the crust are forced into the earth as
earth science or geoscience includes all fields of natural science related to the planet earth. this is a branch of science dealing with the physical, chemical, and biological complex constitutions and synergistic linkages of earth ' s four spheres : the biosphere, hydrosphere / cryosphere, atmosphere, and geosphere ( or lithosphere ). earth science can be considered to be a branch of planetary science but with a much older history. = = geology = = geology is broadly the study of earth ' s structure, substance, and processes. geology is largely the study of the lithosphere, or earth ' s surface, including the crust and rocks. it includes the physical characteristics and processes that occur in the lithosphere as well as how they are affected by geothermal energy. it incorporates aspects of chemistry, physics, and biology as elements of geology interact. historical geology is the application of geology to interpret earth history and how it has changed over time. geochemistry studies the chemical components and processes of the earth. geophysics studies the physical properties of the earth. paleontology studies fossilized biological material in the lithosphere. planetary geology studies geoscience as it pertains to extraterrestrial bodies. geomorphology studies the origin of landscapes. structural geology studies the deformation of rocks to produce mountains and lowlands. resource geology studies how energy resources can be obtained from minerals. environmental geology studies how pollution and contaminants affect soil and rock. mineralogy is the study of minerals and includes the study of mineral formation, crystal structure, hazards associated with minerals, and the physical and chemical properties of minerals. petrology is the study of rocks, including the formation and composition of rocks. petrography is a branch of petrology that studies the typology and classification of rocks. = = earth ' s interior = = plate tectonics, mountain ranges, volcanoes, and earthquakes are geological phenomena that can be explained in terms of physical and chemical processes in the earth ' s crust. beneath the earth ' s crust lies the mantle which is heated by the radioactive decay of heavy elements. the mantle is not quite solid and consists of magma which is in a state of semi - perpetual convection. this convection process causes the lithospheric plates to move, albeit slowly. the resulting process is known as plate tectonics. areas of the crust where new crust is created are called divergent boundaries, those where it is brought back into the earth are convergent boundaries and
. microbial mats of coexisting bacteria and archaea were the dominant form of life in the early archean eon and many of the major steps in early evolution are thought to have taken place in this environment. the earliest evidence of eukaryotes dates from 1. 85 billion years ago, and while they may have been present earlier, their diversification accelerated when they started using oxygen in their metabolism. later, around 1. 7 billion years ago, multicellular organisms began to appear, with differentiated cells performing specialised functions. algae - like multicellular land plants are dated back to about 1 billion years ago, although evidence suggests that microorganisms formed the earliest terrestrial ecosystems, at least 2. 7 billion years ago. microorganisms are thought to have paved the way for the inception of land plants in the ordovician period. land plants were so successful that they are thought to have contributed to the late devonian extinction event. ediacara biota appear during the ediacaran period, while vertebrates, along with most other modern phyla originated about 525 million years ago during the cambrian explosion. during the permian period, synapsids, including the ancestors of mammals, dominated the land, but most of this group became extinct in the permian β triassic extinction event 252 million years ago. during the recovery from this catastrophe, archosaurs became the most abundant land vertebrates ; one archosaur group, the dinosaurs, dominated the jurassic and cretaceous periods. after the cretaceous β paleogene extinction event 66 million years ago killed off the non - avian dinosaurs, mammals increased rapidly in size and diversity. such mass extinctions may have accelerated evolution by providing opportunities for new groups of organisms to diversify. = = diversity = = = = = bacteria and archaea = = = bacteria are a type of cell that constitute a large domain of prokaryotic microorganisms. typically a few micrometers in length, bacteria have a number of shapes, ranging from spheres to rods and spirals. bacteria were among the first life forms to appear on earth, and are present in most of its habitats. bacteria inhabit soil, water, acidic hot springs, radioactive waste, and the deep biosphere of the earth ' s crust. bacteria also live in symbiotic and parasitic relationships with plants and animals. most bacteria have not been characterised, and only about 27 percent of the bacterial phyla have species that can be grown in the laboratory. archaea constitute the other domain of
they may have been present earlier, their diversification accelerated when they started using oxygen in their metabolism. later, around 1. 7 billion years ago, multicellular organisms began to appear, with differentiated cells performing specialised functions. algae - like multicellular land plants are dated back to about 1 billion years ago, although evidence suggests that microorganisms formed the earliest terrestrial ecosystems, at least 2. 7 billion years ago. microorganisms are thought to have paved the way for the inception of land plants in the ordovician period. land plants were so successful that they are thought to have contributed to the late devonian extinction event. ediacara biota appear during the ediacaran period, while vertebrates, along with most other modern phyla originated about 525 million years ago during the cambrian explosion. during the permian period, synapsids, including the ancestors of mammals, dominated the land, but most of this group became extinct in the permian β triassic extinction event 252 million years ago. during the recovery from this catastrophe, archosaurs became the most abundant land vertebrates ; one archosaur group, the dinosaurs, dominated the jurassic and cretaceous periods. after the cretaceous β paleogene extinction event 66 million years ago killed off the non - avian dinosaurs, mammals increased rapidly in size and diversity. such mass extinctions may have accelerated evolution by providing opportunities for new groups of organisms to diversify. = = diversity = = = = = bacteria and archaea = = = bacteria are a type of cell that constitute a large domain of prokaryotic microorganisms. typically a few micrometers in length, bacteria have a number of shapes, ranging from spheres to rods and spirals. bacteria were among the first life forms to appear on earth, and are present in most of its habitats. bacteria inhabit soil, water, acidic hot springs, radioactive waste, and the deep biosphere of the earth ' s crust. bacteria also live in symbiotic and parasitic relationships with plants and animals. most bacteria have not been characterised, and only about 27 percent of the bacterial phyla have species that can be grown in the laboratory. archaea constitute the other domain of prokaryotic cells and were initially classified as bacteria, receiving the name archaebacteria ( in the archaebacteria kingdom ), a term that has fallen out of use. archaeal cells have unique properties separating them from the other two domains, bacteria and eukaryota. archaea
enough to rise to the surface β giving birth to volcanoes. = = atmospheric science = = atmospheric science initially developed in the late - 19th century as a means to forecast the weather through meteorology, the study of weather. atmospheric chemistry was developed in the 20th century to measure air pollution and expanded in the 1970s in response to acid rain. climatology studies the climate and climate change. the troposphere, stratosphere, mesosphere, thermosphere, and exosphere are the five layers which make up earth ' s atmosphere. 75 % of the mass in the atmosphere is located within the troposphere, the lowest layer. in all, the atmosphere is made up of about 78. 0 % nitrogen, 20. 9 % oxygen, and 0. 92 % argon, and small amounts of other gases including co2 and water vapor. water vapor and co2 cause the earth ' s atmosphere to catch and hold the sun ' s energy through the greenhouse effect. this makes earth ' s surface warm enough for liquid water and life. in addition to trapping heat, the atmosphere also protects living organisms by shielding the earth ' s surface from cosmic rays. the magnetic field β created by the internal motions of the core β produces the magnetosphere which protects earth ' s atmosphere from the solar wind. as the earth is 4. 5 billion years old, it would have lost its atmosphere by now if there were no protective magnetosphere. = = earth ' s magnetic field = = = = hydrology = = hydrology is the study of the hydrosphere and the movement of water on earth. it emphasizes the study of how humans use and interact with freshwater supplies. study of water ' s movement is closely related to geomorphology and other branches of earth science. applied hydrology involves engineering to maintain aquatic environments and distribute water supplies. subdisciplines of hydrology include oceanography, hydrogeology, ecohydrology, and glaciology. oceanography is the study of oceans. hydrogeology is the study of groundwater. it includes the mapping of groundwater supplies and the analysis of groundwater contaminants. applied hydrogeology seeks to prevent contamination of groundwater and mineral springs and make it available as drinking water. the earliest exploitation of groundwater resources dates back to 3000 bc, and hydrogeology as a science was developed by hydrologists beginning in the 17th century. ecohydrology is the study of ecological systems in the hydrosphere. it can be divided into the physical study of aquatic ecosystems and the
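The passage above notes that water vapor and CO2 trap solar energy and keep Earth's surface warm enough for liquid water. A quick back-of-the-envelope sketch of that point is below; the solar constant, albedo, and Stefan-Boltzmann values are standard textbook figures assumed for illustration, not numbers taken from the passage.

```python
# Rough radiative-balance estimate illustrating why the greenhouse effect matters.
# The solar constant and albedo below are standard textbook values (assumptions),
# not quantities stated in the passage.
SOLAR_CONSTANT = 1361.0   # W/m^2 arriving at the top of the atmosphere
ALBEDO = 0.30             # fraction of sunlight reflected back to space
SIGMA = 5.67e-8           # Stefan-Boltzmann constant, W/m^2/K^4

# Absorbed sunlight averaged over the whole sphere (factor 4 = sphere area / disc area).
absorbed = SOLAR_CONSTANT * (1 - ALBEDO) / 4.0

# Temperature of a bare, airless Earth that simply re-radiates what it absorbs.
t_effective = (absorbed / SIGMA) ** 0.25

print(f"effective temperature without an atmosphere: {t_effective:.0f} K")  # roughly 255 K
print("observed mean surface temperature:            about 288 K")
# The ~33 K gap is the extra warming supplied by greenhouse gases such as CO2 and water vapor.
```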
Question: Which of these provides the best evidence that the distribution of Earth's oceans has changed over time?
A) hot spots on ocean floors
B) seismic activity along plate boundaries
C) sediment buildup on the continental slope
D) marine fossils found on land masses
|
D) marine fossils found on land masses
|
Context:
a minus sign is inserted, for good reason, into the formula for the energy - momentum tensor for tachyons. this leads to remarkable theoretical consequences and a plausible explanation for the phenomenon called dark energy in the cosmos.
and ancient egyptian cultures, which produced the first known written evidence of natural philosophy, the precursor of natural science. while the writings show an interest in astronomy, mathematics, and other aspects of the physical world, the ultimate aim of inquiry about nature ' s workings was, in all cases, religious or mythological, not scientific. a tradition of scientific inquiry also emerged in ancient china, where taoist alchemists and philosophers experimented with elixirs to extend life and cure ailments. they focused on the yin and yang, or contrasting elements in nature ; the yin was associated with femininity and coldness, while yang was associated with masculinity and warmth. the five phases β fire, earth, metal, wood, and water β described a cycle of transformations in nature. the water turned into wood, which turned into the fire when it burned. the ashes left by fire were earth. using these principles, chinese philosophers and doctors explored human anatomy, characterizing organs as predominantly yin or yang, and understood the relationship between the pulse, the heart, and the flow of blood in the body centuries before it became accepted in the west. little evidence survives of how ancient indian cultures around the indus river understood nature, but some of their perspectives may be reflected in the vedas, a set of sacred hindu texts. they reveal a conception of the universe as ever - expanding and constantly being recycled and reformed. surgeons in the ayurvedic tradition saw health and illness as a combination of three humors : wind, bile and phlegm. a healthy life resulted from a balance among these humors. in ayurvedic thought, the body consisted of five elements : earth, water, fire, wind, and space. ayurvedic surgeons performed complex surgeries and developed a detailed understanding of human anatomy. pre - socratic philosophers in ancient greek culture brought natural philosophy a step closer to direct inquiry about cause and effect in nature between 600 and 400 bc. however, an element of magic and mythology remained. natural phenomena such as earthquakes and eclipses were explained increasingly in the context of nature itself instead of being attributed to angry gods. thales of miletus, an early philosopher who lived from 625 to 546 bc, explained earthquakes by theorizing that the world floated on water and that water was the fundamental element in nature. in the 5th century bc, leucippus was an early exponent of atomism, the idea that the world is made up of fundamental indivisible particles. pytha
if the hazard rate $\frac{F'(x)}{1 - F(x)}$ is increasing (in $x$), then $\mathbb{E}\,(X_{n:n} - X_{n-1:n})$ is decreasing (in $n$), and moreover, completely monotone.
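A quick numerical illustration of the claim above is sketched below. The choice of a Weibull distribution with shape 2 is an assumption made purely for the demonstration (it has hazard rate proportional to $x$, hence increasing); the simulation is not taken from the cited abstract.

```python
import numpy as np

# Monte Carlo check of the stated monotonicity: for an increasing-hazard distribution,
# the expected gap between the two largest of n samples should shrink as n grows.
rng = np.random.default_rng(0)

def mean_top_gap(n, trials=50_000, shape=2.0):
    # Draw `trials` independent samples of size n from Weibull(shape),
    # sort each sample, and average the gap between the largest two values.
    x = rng.weibull(shape, size=(trials, n))
    x.sort(axis=1)
    return (x[:, -1] - x[:, -2]).mean()   # estimate of E[X_{n:n} - X_{n-1:n}]

for n in (2, 5, 20, 100):
    print(n, round(float(mean_top_gap(n)), 4))
# The printed means decrease as n increases, consistent with the stated result.
```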
the following purposes : allowing cell attachment and migration, delivering and retaining cells and biochemical factors, enabling diffusion of vital cell nutrients and expressed products, and exerting certain mechanical and biological influences to modify the behaviour of the cell phase. in 2009, an interdisciplinary team led by the thoracic surgeon thorsten walles implanted the first bioartificial transplant that provides an innate vascular network for post - transplant graft supply successfully into a patient awaiting tracheal reconstruction. to achieve the goal of tissue reconstruction, scaffolds must meet some specific requirements. high porosity and adequate pore size are necessary to facilitate cell seeding and diffusion throughout the whole structure of both cells and nutrients. biodegradability is often an essential factor since scaffolds should preferably be absorbed by the surrounding tissues without the necessity of surgical removal. the rate at which degradation occurs has to coincide as much as possible with the rate of tissue formation : this means that while cells are fabricating their own natural matrix structure around themselves, the scaffold is able to provide structural integrity within the body and eventually it will break down leaving the newly formed tissue which will take over the mechanical load. injectability is also important for clinical uses. recent research on organ printing is showing how crucial a good control of the 3d environment is to ensure reproducibility of experiments and offer better results. = = = materials = = = material selection is an essential aspect of producing a scaffold. the materials utilized can be natural or synthetic and can be biodegradable or non - biodegradable. additionally, they must be biocompatible, meaning that they do not cause any adverse effects to cells. silicone, for example, is a synthetic, non - biodegradable material commonly used as a drug delivery material, while gelatin is a biodegradable, natural material commonly used in cell - culture scaffolds the material needed for each application is different, and dependent on the desired mechanical properties of the material. tissue engineering of long bone defects for example, will require a rigid scaffold with a compressive strength similar to that of cortical bone ( 100 - 150 mpa ), which is much higher compared to a scaffold for skin regeneration. there are a few versatile synthetic materials used for many different scaffold applications. one of these commonly used materials is polylactic acid ( pla ), a synthetic polymer. pla β polylactic acid. this is a polyester which
what are the implications if the total ' information ' in the universe is conserved? black holes might be ' logic gates ' recomputing the ' lost information ' from incoming ' signals ' from outside their event horizons into outgoing ' signals ' representing evaporative or radiative decay ' products ' of the reconfiguration process of the black hole quantum logic ' gate '. apparent local imbalances in the information flow can be corrected by including the effects of the coupling of the vacuum ' reservoir ' of information as part of the total information involved in any evolutionary process. in this way perhaps the ' vacuum ' computes the future of the observable universe.
the less of it people would be prepared to buy ( other things unchanged ). as the price of a commodity falls, consumers move toward it from relatively more expensive goods ( the substitution effect ). in addition, purchasing power from the price decline increases ability to buy ( the income effect ). other factors can change demand ; for example an increase in income will shift the demand curve for a normal good outward relative to the origin, as in the figure. all determinants are predominantly taken as constant factors of demand and supply. supply is the relation between the price of a good and the quantity available for sale at that price. it may be represented as a table or graph relating price and quantity supplied. producers, for example business firms, are hypothesised to be profit maximisers, meaning that they attempt to produce and supply the amount of goods that will bring them the highest profit. supply is typically represented as a function relating price and quantity, if other factors are unchanged. that is, the higher the price at which the good can be sold, the more of it producers will supply, as in the figure. the higher price makes it profitable to increase production. just as on the demand side, the position of the supply can shift, say from a change in the price of a productive input or a technical improvement. the " law of supply " states that, in general, a rise in price leads to an expansion in supply and a fall in price leads to a contraction in supply. here as well, the determinants of supply, such as price of substitutes, cost of production, technology applied and various factors inputs of production are all taken to be constant for a specific time period of evaluation of supply. market equilibrium occurs where quantity supplied equals quantity demanded, the intersection of the supply and demand curves in the figure above. at a price below equilibrium, there is a shortage of quantity supplied compared to quantity demanded. this is posited to bid the price up. at a price above equilibrium, there is a surplus of quantity supplied compared to quantity demanded. this pushes the price down. the model of supply and demand predicts that for given supply and demand curves, price and quantity will stabilise at the price that makes quantity supplied equal to quantity demanded. similarly, demand - and - supply theory predicts a new price - quantity combination from a shift in demand ( as to the figure ), or in supply. = = = firms = = = people frequently do not trade directly on markets. instead, on the supply side, they may work
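The passage above describes equilibrium as the price at which quantity demanded equals quantity supplied, with shortages bidding the price up and surpluses pushing it down. The sketch below works that mechanism through with invented linear demand and supply curves; the particular coefficients are assumptions for illustration only.

```python
# Toy market-equilibrium example with made-up linear curves:
#   demand: Qd = 100 - 2P   (quantity demanded falls as price rises)
#   supply: Qs = -20 + 4P   (quantity supplied rises as price rises)
def demand(p):
    return 100 - 2 * p

def supply(p):
    return -20 + 4 * p

# Equilibrium where quantity demanded equals quantity supplied: 100 - 2P = -20 + 4P.
p_eq = 120 / 6          # = 20
q_eq = demand(p_eq)     # = 60
print(f"equilibrium price {p_eq}, quantity {q_eq}")

# Below the equilibrium price there is a shortage that bids the price up;
# above it there is a surplus that pushes the price down, as the passage describes.
print("price 15:", demand(15), "demanded vs", supply(15), "supplied  -> shortage")
print("price 25:", demand(25), "demanded vs", supply(25), "supplied  -> surplus")
```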
masculinity and warmth. the five phases β fire, earth, metal, wood, and water β described a cycle of transformations in nature. the water turned into wood, which turned into the fire when it burned. the ashes left by fire were earth. using these principles, chinese philosophers and doctors explored human anatomy, characterizing organs as predominantly yin or yang, and understood the relationship between the pulse, the heart, and the flow of blood in the body centuries before it became accepted in the west. little evidence survives of how ancient indian cultures around the indus river understood nature, but some of their perspectives may be reflected in the vedas, a set of sacred hindu texts. they reveal a conception of the universe as ever - expanding and constantly being recycled and reformed. surgeons in the ayurvedic tradition saw health and illness as a combination of three humors : wind, bile and phlegm. a healthy life resulted from a balance among these humors. in ayurvedic thought, the body consisted of five elements : earth, water, fire, wind, and space. ayurvedic surgeons performed complex surgeries and developed a detailed understanding of human anatomy. pre - socratic philosophers in ancient greek culture brought natural philosophy a step closer to direct inquiry about cause and effect in nature between 600 and 400 bc. however, an element of magic and mythology remained. natural phenomena such as earthquakes and eclipses were explained increasingly in the context of nature itself instead of being attributed to angry gods. thales of miletus, an early philosopher who lived from 625 to 546 bc, explained earthquakes by theorizing that the world floated on water and that water was the fundamental element in nature. in the 5th century bc, leucippus was an early exponent of atomism, the idea that the world is made up of fundamental indivisible particles. pythagoras applied greek innovations in mathematics to astronomy and suggested that the earth was spherical. = = = aristotelian natural philosophy ( 400 bc β 1100 ad ) = = = later socratic and platonic thought focused on ethics, morals, and art and did not attempt an investigation of the physical world ; plato criticized pre - socratic thinkers as materialists and anti - religionists. aristotle, however, a student of plato who lived from 384 to 322 bc, paid closer attention to the natural world in his philosophy. in his history of animals, he described the inner workings of 110 species, including the stingray, catfish and
in space, can adversely affect the earth ' s environment. some hypergolic rocket propellants, such as hydrazine, are highly toxic prior to combustion, but decompose into less toxic compounds after burning. rockets using hydrocarbon fuels, such as kerosene, release carbon dioxide and soot in their exhaust. carbon dioxide emissions are insignificant compared to those from other sources ; on average, the united states consumed 803 million us gal ( 3. 0 million m3 ) of liquid fuels per day in 2014, while a single falcon 9 rocket first stage burns around 25, 000 us gallons ( 95 m3 ) of kerosene fuel per launch. even if a falcon 9 were launched every single day, it would only represent 0. 006 % of liquid fuel consumption ( and carbon dioxide emissions ) for that day. additionally, the exhaust from lox - and lh2 - fueled engines, like the ssme, is almost entirely water vapor. nasa addressed environmental concerns with its canceled constellation program in accordance with the national environmental policy act in 2011. in contrast, ion engines use harmless noble gases like xenon for propulsion. an example of nasa ' s environmental efforts is the nasa sustainability base. additionally, the exploration sciences building was awarded the leed gold rating in 2010. on may 8, 2003, the environmental protection agency recognized nasa as the first federal agency to directly use landfill gas to produce energy at one of its facilities β the goddard space flight center, greenbelt, maryland. in 2018, nasa along with other companies including sensor coating systems, pratt & whitney, monitor coating and utrc launched the project caution ( coatings for ultra high temperature detection ). this project aims to enhance the temperature range of the thermal history coating up to 1, 500 Β°c ( 2, 730 Β°f ) and beyond. the final goal of this project is improving the safety of jet engines as well as increasing efficiency and reducing co2 emissions. = = = climate change = = = nasa also researches and publishes on climate change. its statements concur with the global scientific consensus that the climate is warming. bob walker, who has advised former us president donald trump on space issues, has advocated that nasa should focus on space exploration and that its climate study operations should be transferred to other agencies such as noaa. former nasa atmospheric scientist j. marshall shepherd countered that earth science study was built into nasa ' s mission at its creation in the 1958 national aeronautics and space act. nasa won the 2020 webby people ' s voice award for green in the category
when fragile molecules such as glycine, polyglicine, alkanes, and alkanethiols are embedded in liquid helium nanodroplets, electron - impact ionization of the beam leads to fragmentation which is as extensive as that of isolated gas - phase molecules. however, it turns out that if a few molecules of water are co - embedded with the peptide and alkane chains, their fragmentation is drastically reduced or completely eliminated. on the other hand, the fragmentation of alkanethiols remains unaffected. on the basis of these observations, it is proposed that the fragmentation " buffering " effect may correlate with the magnitude of the impurity ' s electric dipole moment, which steers the migration of the ionizing he ^ + hole in the droplet.
we point out consequences of the assumption that supersymmetry breaking is of cosmological origin.
Question: Which change would have the greatest negative impact on the survival of an owl species?
A) an increase in primary consumer population
B) a decrease in acid rain
C) a decrease in size of habitat
D) an increase in producers
|
C) a decrease in size of habitat
|
Context:
cells into the decellularized rat heart. tissue - engineered blood vessels : blood vessels that have been grown in a lab and can be used to repair damaged blood vessels without eliciting an immune response. tissue engineered blood vessels have been developed by many different approaches. they could be implanted as pre - seeded cellularized blood vessels, as acellular vascular grafts made with decellularized vessels or synthetic vascular grafts. artificial skin constructed from human skin cells embedded in a hydrogel, such as in the case of bio - printed constructs for battlefield burn repairs. artificial bone marrow : bone marrow cultured in vitro to be transplanted serves as a " just cells " approach to tissue engineering. tissue engineered bone : a structural matrix can be composed of metals such as titanium, polymers of varying degradation rates, or certain types of ceramics. materials are often chosen to recruit osteoblasts to aid in reforming the bone and returning biological function. various types of cells can be added directly into the matrix to expedite the process. laboratory - grown penis : decellularized scaffolds of rabbit penises were recellularised with smooth muscle and endothelial cells. the organ was then transplanted to live rabbits and functioned comparably to the native organ, suggesting potential as treatment for genital trauma. oral mucosa tissue engineering uses a cells and scaffold approach to replicate the 3 dimensional structure and function of oral mucosa. = = cells as building blocks = = cells are one of the main components for the success of tissue engineering approaches. tissue engineering uses cells as strategies for creation / replacement of new tissue. examples include fibroblasts used for skin repair or renewal, chondrocytes used for cartilage repair ( maci β fda approved product ), and hepatocytes used in liver support systems cells can be used alone or with support matrices for tissue engineering applications. an adequate environment for promoting cell growth, differentiation, and integration with the existing tissue is a critical factor for cell - based building blocks. manipulation of any of these cell processes create alternative avenues for the development of new tissue ( e. g., cell reprogramming - somatic cells, vascularization ). = = = isolation = = = techniques for cell isolation depend on the cell source. centrifugation and apheresis are techniques used for extracting cells from biofluids ( e. g., blood ). whereas digestion processes, typically using enzymes to remove the extra
this scaffold and cells were placed in a bioreactor, where it matured to become a partially or fully transplantable organ. the work was called a " landmark ". the lab first stripped the cells away from a rat heart ( a process called " decellularization " ) and then injected rat stem cells into the decellularized rat heart. tissue - engineered blood vessels : blood vessels that have been grown in a lab and can be used to repair damaged blood vessels without eliciting an immune response. tissue engineered blood vessels have been developed by many different approaches. they could be implanted as pre - seeded cellularized blood vessels, as acellular vascular grafts made with decellularized vessels or synthetic vascular grafts. artificial skin constructed from human skin cells embedded in a hydrogel, such as in the case of bio - printed constructs for battlefield burn repairs. artificial bone marrow : bone marrow cultured in vitro to be transplanted serves as a " just cells " approach to tissue engineering. tissue engineered bone : a structural matrix can be composed of metals such as titanium, polymers of varying degradation rates, or certain types of ceramics. materials are often chosen to recruit osteoblasts to aid in reforming the bone and returning biological function. various types of cells can be added directly into the matrix to expedite the process. laboratory - grown penis : decellularized scaffolds of rabbit penises were recellularised with smooth muscle and endothelial cells. the organ was then transplanted to live rabbits and functioned comparably to the native organ, suggesting potential as treatment for genital trauma. oral mucosa tissue engineering uses a cells and scaffold approach to replicate the 3 dimensional structure and function of oral mucosa. = = cells as building blocks = = cells are one of the main components for the success of tissue engineering approaches. tissue engineering uses cells as strategies for creation / replacement of new tissue. examples include fibroblasts used for skin repair or renewal, chondrocytes used for cartilage repair ( maci β fda approved product ), and hepatocytes used in liver support systems cells can be used alone or with support matrices for tissue engineering applications. an adequate environment for promoting cell growth, differentiation, and integration with the existing tissue is a critical factor for cell - based building blocks. manipulation of any of these cell processes create alternative avenues for the development of new tissue ( e. g., cell reprogramming - somatic
##logous in nature, and can be used in a myriad of ways, from helping repair skeletal tissue to replenishing beta cells in diabetic patients. allogenic : cells are obtained from the body of a donor of the same species as the recipient. while there are some ethical constraints to the use of human cells for in vitro studies ( i. e. human brain tissue chimera development ), the employment of dermal fibroblasts from human foreskin demonstrates an immunologically safe and thus a viable choice for allogenic tissue engineering of the skin. xenogenic : these cells are derived isolated cells from alternate species from the recipient. a notable example of xenogeneic tissue utilization is cardiovascular implant construction via animal cells. chimeric human - animal farming raises ethical concerns around the potential for improved consciousness from implanting human organs in animals. syngeneic or isogenic : these cells describe those borne from identical genetic code. this imparts an immunologic benefit similar to autologous cell lines ( see above ). autologous cells can be considered syngenic, but the classification also extends to non - autologously derived cells such as those from an identical twin, from genetically identical ( cloned ) research models, or induced stem cells ( isc ) as related to the donor. = = = stem cells = = = stem cells are undifferentiated cells with the ability to divide in culture and give rise to different forms of specialized cells. stem cells are divided into " adult " and " embryonic " stem cells according to their source. while there is still a large ethical debate related to the use of embryonic stem cells, it is thought that another alternative source β induced pluripotent stem cells β may be useful for the repair of diseased or damaged tissues, or may be used to grow new organs. totipotent cells are stem cells which can divide into further stem cells or differentiate into any cell type in the body, including extra - embryonic tissue. pluripotent cells are stem cells which can differentiate into any cell type in the body except extra - embryonic tissue. induced pluripotent stem cells ( ipscs ) are subclass of pluripotent stem cells resembling embryonic stem cells ( escs ) that have been derived from adult differentiated cells. ipscs are created by altering the expression of transcriptional factors in adult cells until they become like embryonic stem cells. multipotent stem cells can be differentiated into any cell
##ilage generated without the use of exogenous scaffold material. in this methodology, all material in the construct is cellular produced directly by the cells. bioartificial heart : doris taylor ' s lab constructed a biocompatible rat heart by re - cellularising a de - cellularised rat heart. this scaffold and cells were placed in a bioreactor, where it matured to become a partially or fully transplantable organ. the work was called a " landmark ". the lab first stripped the cells away from a rat heart ( a process called " decellularization " ) and then injected rat stem cells into the decellularized rat heart. tissue - engineered blood vessels : blood vessels that have been grown in a lab and can be used to repair damaged blood vessels without eliciting an immune response. tissue engineered blood vessels have been developed by many different approaches. they could be implanted as pre - seeded cellularized blood vessels, as acellular vascular grafts made with decellularized vessels or synthetic vascular grafts. artificial skin constructed from human skin cells embedded in a hydrogel, such as in the case of bio - printed constructs for battlefield burn repairs. artificial bone marrow : bone marrow cultured in vitro to be transplanted serves as a " just cells " approach to tissue engineering. tissue engineered bone : a structural matrix can be composed of metals such as titanium, polymers of varying degradation rates, or certain types of ceramics. materials are often chosen to recruit osteoblasts to aid in reforming the bone and returning biological function. various types of cells can be added directly into the matrix to expedite the process. laboratory - grown penis : decellularized scaffolds of rabbit penises were recellularised with smooth muscle and endothelial cells. the organ was then transplanted to live rabbits and functioned comparably to the native organ, suggesting potential as treatment for genital trauma. oral mucosa tissue engineering uses a cells and scaffold approach to replicate the 3 dimensional structure and function of oral mucosa. = = cells as building blocks = = cells are one of the main components for the success of tissue engineering approaches. tissue engineering uses cells as strategies for creation / replacement of new tissue. examples include fibroblasts used for skin repair or renewal, chondrocytes used for cartilage repair ( maci β fda approved product ), and hepatocytes used in liver support systems cells can be used alone or with
". = = extraction = = extractive metallurgy is the practice of removing valuable metals from an ore and refining the extracted raw metals into a purer form. in order to convert a metal oxide or sulphide to a purer metal, the ore must be reduced physically, chemically, or electrolytically. extractive metallurgists are interested in three primary streams : feed, concentrate ( metal oxide / sulphide ) and tailings ( waste ). after mining, large pieces of the ore feed are broken through crushing or grinding in order to obtain particles small enough, where each particle is either mostly valuable or mostly waste. concentrating the particles of value in a form supporting separation enables the desired metal to be removed from waste products. mining may not be necessary, if the ore body and physical environment are conducive to leaching. leaching dissolves minerals in an ore body and results in an enriched solution. the solution is collected and processed to extract valuable metals. ore bodies often contain more than one valuable metal. tailings of a previous process may be used as a feed in another process to extract a secondary product from the original ore. additionally, a concentrate may contain more than one valuable metal. that concentrate would then be processed to separate the valuable metals into individual constituents. = = metal and its alloys = = much effort has been placed on understanding iron β carbon alloy system, which includes steels and cast irons. plain carbon steels ( those that contain essentially only carbon as an alloying element ) are used in low - cost, high - strength applications, where neither weight nor corrosion are a major concern. cast irons, including ductile iron, are also part of the iron - carbon system. iron - manganese - chromium alloys ( hadfield - type steels ) are also used in non - magnetic applications such as directional drilling. other engineering metals include aluminium, chromium, copper, magnesium, nickel, titanium, zinc, and silicon. these metals are most often used as alloys with the noted exception of silicon, which is not a metal. other forms include : stainless steel, particularly austenitic stainless steels, galvanized steel, nickel alloys, titanium alloys, or occasionally copper alloys are used, where resistance to corrosion is important. aluminium alloys and magnesium alloys are commonly used, when a lightweight strong part is required such as in automotive and aerospace applications. copper - nickel alloys ( such as monel ) are used in highly corrosive environments and for non - magnetic applications
of cells = = = autologous : the donor and the recipient of the cells are the same individual. cells are harvested, cultured or stored, and then reintroduced to the host. as a result of the host ' s own cells being reintroduced, an antigenic response is not elicited. the body ' s immune system recognizes these re - implanted cells as its own, and does not target them for attack. autologous cell dependence on host cell health and donor site morbidity may be deterrents to their use. adipose - derived and bone marrow - derived mesenchymal stem cells are commonly autologous in nature, and can be used in a myriad of ways, from helping repair skeletal tissue to replenishing beta cells in diabetic patients. allogenic : cells are obtained from the body of a donor of the same species as the recipient. while there are some ethical constraints to the use of human cells for in vitro studies ( i. e. human brain tissue chimera development ), the employment of dermal fibroblasts from human foreskin demonstrates an immunologically safe and thus a viable choice for allogenic tissue engineering of the skin. xenogenic : these cells are derived isolated cells from alternate species from the recipient. a notable example of xenogeneic tissue utilization is cardiovascular implant construction via animal cells. chimeric human - animal farming raises ethical concerns around the potential for improved consciousness from implanting human organs in animals. syngeneic or isogenic : these cells describe those borne from identical genetic code. this imparts an immunologic benefit similar to autologous cell lines ( see above ). autologous cells can be considered syngenic, but the classification also extends to non - autologously derived cells such as those from an identical twin, from genetically identical ( cloned ) research models, or induced stem cells ( isc ) as related to the donor. = = = stem cells = = = stem cells are undifferentiated cells with the ability to divide in culture and give rise to different forms of specialized cells. stem cells are divided into " adult " and " embryonic " stem cells according to their source. while there is still a large ethical debate related to the use of embryonic stem cells, it is thought that another alternative source β induced pluripotent stem cells β may be useful for the repair of diseased or damaged tissues, or may be used to grow new organs. totipotent cells
, characterizing organs as predominantly yin or yang, and understood the relationship between the pulse, the heart, and the flow of blood in the body centuries before it became accepted in the west. little evidence survives of how ancient indian cultures around the indus river understood nature, but some of their perspectives may be reflected in the vedas, a set of sacred hindu texts. they reveal a conception of the universe as ever - expanding and constantly being recycled and reformed. surgeons in the ayurvedic tradition saw health and illness as a combination of three humors : wind, bile and phlegm. a healthy life resulted from a balance among these humors. in ayurvedic thought, the body consisted of five elements : earth, water, fire, wind, and space. ayurvedic surgeons performed complex surgeries and developed a detailed understanding of human anatomy. pre - socratic philosophers in ancient greek culture brought natural philosophy a step closer to direct inquiry about cause and effect in nature between 600 and 400 bc. however, an element of magic and mythology remained. natural phenomena such as earthquakes and eclipses were explained increasingly in the context of nature itself instead of being attributed to angry gods. thales of miletus, an early philosopher who lived from 625 to 546 bc, explained earthquakes by theorizing that the world floated on water and that water was the fundamental element in nature. in the 5th century bc, leucippus was an early exponent of atomism, the idea that the world is made up of fundamental indivisible particles. pythagoras applied greek innovations in mathematics to astronomy and suggested that the earth was spherical. = = = aristotelian natural philosophy ( 400 bc β 1100 ad ) = = = later socratic and platonic thought focused on ethics, morals, and art and did not attempt an investigation of the physical world ; plato criticized pre - socratic thinkers as materialists and anti - religionists. aristotle, however, a student of plato who lived from 384 to 322 bc, paid closer attention to the natural world in his philosophy. in his history of animals, he described the inner workings of 110 species, including the stingray, catfish and bee. he investigated chick embryos by breaking open eggs and observing them at various stages of development. aristotle ' s works were influential through the 16th century, and he is considered to be the father of biology for his pioneering work in that science. he also presented philosophies about physics, nature, and astronomy using
##artificial liver device, " temporary liver ", extracorporeal liver assist device ( elad ) : the human hepatocyte cell line ( c3a line ) in a hollow fiber bioreactor can mimic the hepatic function of the liver for acute instances of liver failure. a fully capable elad would temporarily function as an individual ' s liver, thus avoiding transplantation and allowing regeneration of their own liver. artificial pancreas : research involves using islet cells to regulate the body ' s blood sugar, particularly in cases of diabetes. biochemical factors may be used to cause human pluripotent stem cells to differentiate ( turn into ) cells that function similarly to beta cells, which are in an islet cell in charge of producing insulin. artificial bladders : anthony atala ( wake forest university ) has successfully implanted artificial bladders, constructed of cultured cells seeded onto a bladder - shaped scaffold, into seven out of approximately 20 human test subjects as part of a long - term experiment. cartilage : lab - grown cartilage, cultured in vitro on a scaffold, was successfully used as an autologous transplant to repair patients ' knees. scaffold - free cartilage : cartilage generated without the use of exogenous scaffold material. in this methodology, all material in the construct is cellular produced directly by the cells. bioartificial heart : doris taylor ' s lab constructed a biocompatible rat heart by re - cellularising a de - cellularised rat heart. this scaffold and cells were placed in a bioreactor, where it matured to become a partially or fully transplantable organ. the work was called a " landmark ". the lab first stripped the cells away from a rat heart ( a process called " decellularization " ) and then injected rat stem cells into the decellularized rat heart. tissue - engineered blood vessels : blood vessels that have been grown in a lab and can be used to repair damaged blood vessels without eliciting an immune response. tissue engineered blood vessels have been developed by many different approaches. they could be implanted as pre - seeded cellularized blood vessels, as acellular vascular grafts made with decellularized vessels or synthetic vascular grafts. artificial skin constructed from human skin cells embedded in a hydrogel, such as in the case of bio - printed constructs for battlefield burn repairs. artificial bone marrow : bone marrow cultured in vitro to
capable elad would temporarily function as an individual ' s liver, thus avoiding transplantation and allowing regeneration of their own liver. artificial pancreas : research involves using islet cells to regulate the body ' s blood sugar, particularly in cases of diabetes. biochemical factors may be used to cause human pluripotent stem cells to differentiate ( turn into ) cells that function similarly to beta cells, which are in an islet cell in charge of producing insulin. artificial bladders : anthony atala ( wake forest university ) has successfully implanted artificial bladders, constructed of cultured cells seeded onto a bladder - shaped scaffold, into seven out of approximately 20 human test subjects as part of a long - term experiment. cartilage : lab - grown cartilage, cultured in vitro on a scaffold, was successfully used as an autologous transplant to repair patients ' knees. scaffold - free cartilage : cartilage generated without the use of exogenous scaffold material. in this methodology, all material in the construct is cellular produced directly by the cells. bioartificial heart : doris taylor ' s lab constructed a biocompatible rat heart by re - cellularising a de - cellularised rat heart. this scaffold and cells were placed in a bioreactor, where it matured to become a partially or fully transplantable organ. the work was called a " landmark ". the lab first stripped the cells away from a rat heart ( a process called " decellularization " ) and then injected rat stem cells into the decellularized rat heart. tissue - engineered blood vessels : blood vessels that have been grown in a lab and can be used to repair damaged blood vessels without eliciting an immune response. tissue engineered blood vessels have been developed by many different approaches. they could be implanted as pre - seeded cellularized blood vessels, as acellular vascular grafts made with decellularized vessels or synthetic vascular grafts. artificial skin constructed from human skin cells embedded in a hydrogel, such as in the case of bio - printed constructs for battlefield burn repairs. artificial bone marrow : bone marrow cultured in vitro to be transplanted serves as a " just cells " approach to tissue engineering. tissue engineered bone : a structural matrix can be composed of metals such as titanium, polymers of varying degradation rates, or certain types of ceramics. materials are often chosen to recruit osteoblasts to aid in reforming the bone and returning biological function
human blood primarily comprises plasma, red blood cells, white blood cells, and platelets. it plays a vital role in transporting nutrients to different organs, where it stores essential health - related data about the human body. blood cells are utilized to defend the body against diverse infections, including fungi, viruses, and bacteria. hence, blood analysis can help physicians assess an individual ' s physiological condition. blood cells have been sub - classified into eight groups : neutrophils, eosinophils, basophils, lymphocytes, monocytes, immature granulocytes ( promyelocytes, myelocytes, and metamyelocytes ), erythroblasts, and platelets or thrombocytes on the basis of their nucleus, shape, and cytoplasm. traditionally, pathologists and hematologists in laboratories have examined these blood cells using a microscope before manually classifying them. the manual approach is slower and more prone to human error. therefore, it is essential to automate this process. in our paper, transfer learning with cnn pre - trained models. vgg16, vgg19, resnet - 50, resnet - 101, resnet - 152, inceptionv3, mobilenetv2, and densenet - 20 applied to the pbc dataset ' s normal dib. the overall accuracy achieved with these models lies between 91. 375 and 94. 72 %. hence, inspired by these pre - trained architectures, a model has been proposed to automatically classify the ten types of blood cells with increased accuracy. a novel cnn - based framework has been presented to improve accuracy. the proposed cnn model has been tested on the pbc dataset normal dib. the outcomes of the experiments demonstrate that our cnn - based framework designed for blood cell classification attains an accuracy of 99. 91 % on the pbc dataset. our proposed convolutional neural network model performs competitively when compared to earlier results reported in the literature.
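The passage above reports transfer-learning results on the PBC dataset with several pre-trained CNN backbones. The snippet below is a generic sketch of how such a transfer-learning classifier is typically assembled in Keras; it is not the authors' proposed model, and the data directory, image size, class count, and training settings are placeholder assumptions.

```python
import tensorflow as tf

# Generic transfer-learning sketch in the spirit of the passage: a frozen pre-trained
# backbone (MobileNetV2 here) with a new classification head for blood cell types.
IMG_SIZE = (224, 224)
NUM_CLASSES = 8   # the passage mentions both eight groups and ten types; adjust to the dataset used

train_ds = tf.keras.utils.image_dataset_from_directory(
    "pbc_dataset/train",          # hypothetical path to labelled cell images
    image_size=IMG_SIZE,
    batch_size=32,
)

base = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
base.trainable = False            # freeze the ImageNet features; train only the new head

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),   # MobileNetV2 expects inputs in [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=5)
```

Fine-tuning (unfreezing the top layers of the backbone at a small learning rate) is the usual next step once the head has converged, which is one common way frameworks like those named in the passage reach higher accuracies.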
Question: Which organ removes cell waste from the blood?
A) the large intestine
B) the small intestine
C) the kidney
D) the heart
|
C) the kidney
|
Context:
the universe is found to have undergone several phases in which the gravitational constant had different behaviors. during some epochs the energy density of the universe remained constant and the universe remained static. in the radiation dominated epoch the radiation field satisfies stefan ' s formula while the scale factor varies linearly with time. the model enhances the formation of the structure in the universe as observed today.
quantum mechanics is interpreted by the adjacent vacuum that behaves as a virtual particle to be absorbed and emitted by its matter. as described in the vacuum universe model, the adjacent vacuum is derived from the pre - inflationary universe in which the pre - adjacent vacuum is absorbed by the pre - matter. this absorbed pre - adjacent vacuum is emitted to become the added space for the inflation in the inflationary universe whose space - time is separated from the pre - inflationary universe. this added space is the adjacent vacuum. the absorption of the adjacent vacuum as the added space results in the adjacent zero space ( no space ), quantum mechanics is the interaction between matter and the three different types of vacuum : the adjacent vacuum, the adjacent zero space, and the empty space. the absorption of the adjacent vacuum results in the empty space superimposed with the adjacent zero space, confining the matter in the form of particle. when the absorbed vacuum is emitted, the adjacent vacuum can be anywhere instantly in the empty space superimposed with the adjacent zero space where any point can be the starting point ( zero point ) of space - time. consequently, the matter that expands into the adjacent vacuum has the probability to be anywhere instantly in the form of wavefunction. in the vacuum universe model, the universe not only gains its existence from the vacuum but also fattens itself with the vacuum. during the inflation, the adjacent vacuum also generates the periodic table of elementary particles to account for all elementary particles and their masses in a good agreement with the observed values.
one of the greatest discoveries of modern times is that of the expanding universe, almost invariably attributed to hubble ( 1929 ). what is not widely known is that the original treatise by lemaitre ( 1927 ) contained a rich fusion of both theory and of observation. stiglers law of eponymy is yet again affirmed : no scientific discovery is named after its original discoverer ( merton, 1957 ). an appeal is made for a lemaitre telescope, to honour the discoverer of the expanding universe.
the world is changing at an ever - increasing pace. and it has changed in a much more fundamental way than one would think, primarily because it has become more connected and interdependent than in our entire history. every new product, every new invention can be combined with those that existed before, thereby creating an explosion of complexity : structural complexity, dynamic complexity, functional complexity, and algorithmic complexity. how to respond to this challenge? and what are the costs?
the myth that the expansion of the universe was discovered by hubble was first propagated by humason ( 1931 ). the true nature of this discovery turns out to have been both more complex and more interesting.
it seems natural to ask why the universe exists at all. modern physics suggests that the universe can exist all by itself as a self - contained system, without anything external to create or sustain it. but there might not be an absolute answer to why it exists. i argue that any attempt to account for the existence of something rather than nothing must ultimately bottom out in a set of brute facts ; the universe simply is, without ultimate cause or explanation.
the latest news from $ ^ 3 $ he universe are presented together with the extended map of the universe.
dust grains absorb half of the radiation emitted by stars throughout the history of the universe, re - emitting this energy at infrared wavelengths. polycyclic aromatic hydrocarbons ( pahs ) are large organic molecules that trace millimeter - size dust grains and regulate the cooling of the interstellar gas within galaxies. observations of pah features in very distant galaxies have been difficult due to the limited sensitivity and wavelength coverage of previous infrared telescopes. here we present jwst observations that detect the 3. 3um pah feature in a galaxy observed less than 1. 5 billion years after the big bang. the high equivalent width of the pah feature indicates that star formation, rather than black hole accretion, dominates the infrared emission throughout the galaxy. the light from pah molecules, large dust grains, and stars and hot dust are spatially distinct from one another, leading to order - of - magnitude variations in the pah equivalent width and the ratio of pah to total infrared luminosity across the galaxy. the spatial variations we observe suggest either a physical offset between the pahs and large dust grains or wide variations in the local ultraviolet radiation field. our observations demonstrate that differences in the emission from pah molecules and large dust grains are a complex result of localized processes within early galaxies.
be the more significant to modern soil theory than fallou ' s. previously, soil had been considered a product of chemical transformations of rocks, a dead substrate from which plants derive nutritious elements. soil and bedrock were in fact equated. dokuchaev considers the soil as a natural body having its own genesis and its own history of development, a body with complex and multiform processes taking place within it. the soil is considered as different from bedrock. the latter becomes soil under the influence of a series of soil - formation factors ( climate, vegetation, country, relief and age ). according to him, soil should be called the " daily " or outward horizons of rocks regardless of the type ; they are changed naturally by the common effect of water, air and various kinds of living and dead organisms. a 1914 encyclopedic definition : " the different forms of earth on the surface of the rocks, formed by the breaking down or weathering of rocks ". serves to illustrate the historic view of soil which persisted from the 19th century. dokuchaev ' s late 19th century soil concept developed in the 20th century to one of soil as earthy material that has been altered by living processes. a corollary concept is that soil without a living component is simply a part of earth ' s outer layer. further refinement of the soil concept is occurring in view of an appreciation of energy transport and transformation within soil. the term is popularly applied to the material on the surface of the earth ' s moon and mars, a usage acceptable within a portion of the scientific community. accurate to this modern understanding of soil is nikiforoff ' s 1959 definition of soil as the " excited skin of the sub aerial part of the earth ' s crust ". = = areas of practice = = academically, soil scientists tend to be drawn to one of five areas of specialization : microbiology, pedology, edaphology, physics, or chemistry. yet the work specifics are very much dictated by the challenges facing our civilization ' s desire to sustain the land that supports it, and the distinctions between the sub - disciplines of soil science often blur in the process. soil science professionals commonly stay current in soil chemistry, soil physics, soil microbiology, pedology, and applied soil science in related disciplines. one exciting effort drawing in soil scientists in the u. s. as of 2004 is the soil quality initiative. central to the soil quality initiative is developing indices of soil health and then monitoring them in a way
this is an experimentalist ' s list of questions concerning the physics of the charmed baryon sector which have no satisfactory answer.
Question: According to the Big Bang Theory, how is the universe changing?
A) It is contracting.
B) It is expanding.
C) Only the rim is expanding.
D) Only the center is contracting.
|
B) It is expanding.
|
Context:
the earth ' s atmosphere, and supplies most of the energy necessary for life on earth. photosynthesis has four stages : light absorption, electron transport, atp synthesis, and carbon fixation. light absorption is the initial step of photosynthesis whereby light energy is absorbed by chlorophyll pigments attached to proteins in the thylakoid membranes. the absorbed light energy is used to remove electrons from a donor ( water ) to a primary electron acceptor, a quinone designated as q. in the second stage, electrons move from the quinone primary electron acceptor through a series of electron carriers until they reach a final electron acceptor, which is usually the oxidized form of nadp +, which is reduced to nadph, a process that takes place in a protein complex called photosystem i ( psi ). the transport of electrons is coupled to the movement of protons ( or hydrogen ) from the stroma to the thylakoid membrane, which forms a ph gradient across the membrane as hydrogen becomes more concentrated in the lumen than in the stroma. this is analogous to the proton - motive force generated across the inner mitochondrial membrane in aerobic respiration. during the third stage of photosynthesis, the movement of protons down their concentration gradients from the thylakoid lumen to the stroma through the atp synthase is coupled to the synthesis of atp by that same atp synthase. the nadph and atps generated by the light - dependent reactions in the second and third stages, respectively, provide the energy and electrons to drive the synthesis of glucose by fixing atmospheric carbon dioxide into existing organic carbon compounds, such as ribulose bisphosphate ( rubp ) in a sequence of light - independent ( or dark ) reactions called the calvin cycle. = = = cell signaling = = = cell signaling ( or communication ) is the ability of cells to receive, process, and transmit signals with its environment and with itself. signals can be non - chemical such as light, electrical impulses, and heat, or chemical signals ( or ligands ) that interact with receptors, which can be found embedded in the cell membrane of another cell or located deep inside a cell. there are generally four types of chemical signals : autocrine, paracrine, juxtacrine, and hormones. in autocrine signaling, the ligand affects the same cell that releases it. tumor cells, for example, can reproduce uncontrollably because they release signals that initiate their
is stored in carbohydrate molecules, such as sugars, which are synthesized from carbon dioxide and water. in most cases, oxygen is released as a waste product. most plants, algae, and cyanobacteria perform photosynthesis, which is largely responsible for producing and maintaining the oxygen content of the earth ' s atmosphere, and supplies most of the energy necessary for life on earth. photosynthesis has four stages : light absorption, electron transport, atp synthesis, and carbon fixation. light absorption is the initial step of photosynthesis whereby light energy is absorbed by chlorophyll pigments attached to proteins in the thylakoid membranes. the absorbed light energy is used to remove electrons from a donor ( water ) to a primary electron acceptor, a quinone designated as q. in the second stage, electrons move from the quinone primary electron acceptor through a series of electron carriers until they reach a final electron acceptor, which is usually the oxidized form of nadp +, which is reduced to nadph, a process that takes place in a protein complex called photosystem i ( psi ). the transport of electrons is coupled to the movement of protons ( or hydrogen ) from the stroma to the thylakoid membrane, which forms a ph gradient across the membrane as hydrogen becomes more concentrated in the lumen than in the stroma. this is analogous to the proton - motive force generated across the inner mitochondrial membrane in aerobic respiration. during the third stage of photosynthesis, the movement of protons down their concentration gradients from the thylakoid lumen to the stroma through the atp synthase is coupled to the synthesis of atp by that same atp synthase. the nadph and atps generated by the light - dependent reactions in the second and third stages, respectively, provide the energy and electrons to drive the synthesis of glucose by fixing atmospheric carbon dioxide into existing organic carbon compounds, such as ribulose bisphosphate ( rubp ) in a sequence of light - independent ( or dark ) reactions called the calvin cycle. = = = cell signaling = = = cell signaling ( or communication ) is the ability of cells to receive, process, and transmit signals with its environment and with itself. signals can be non - chemical such as light, electrical impulses, and heat, or chemical signals ( or ligands ) that interact with receptors, which can be found embedded in the cell membrane of another cell or located deep inside
pigment chlorophyll a. chlorophyll a ( as well as its plant and green algal - specific cousin chlorophyll b ) absorbs light in the blue - violet and orange / red parts of the spectrum while reflecting and transmitting the green light that we see as the characteristic colour of these organisms. the energy in the red and blue light that these pigments absorb is used by chloroplasts to make energy - rich carbon compounds from carbon dioxide and water by oxygenic photosynthesis, a process that generates molecular oxygen ( o2 ) as a by - product. the light energy captured by chlorophyll a is initially in the form of electrons ( and later a proton gradient ) that is used to make molecules of atp and nadph which temporarily store and transport energy. their energy is used in the light - independent reactions of the calvin cycle by the enzyme rubisco to produce molecules of the 3 - carbon sugar glyceraldehyde 3 - phosphate ( g3p ). glyceraldehyde 3 - phosphate is the first product of photosynthesis and the raw material from which glucose and almost all other organic molecules of biological origin are synthesised. some of the glucose is converted to starch which is stored in the chloroplast. starch is the characteristic energy store of most land plants and algae, while inulin, a polymer of fructose is used for the same purpose in the sunflower family asteraceae. some of the glucose is converted to sucrose ( common table sugar ) for export to the rest of the plant. unlike in animals ( which lack chloroplasts ), plants and their eukaryote relatives have delegated many biochemical roles to their chloroplasts, including synthesising all their fatty acids, and most amino acids. the fatty acids that chloroplasts make are used for many things, such as providing material to build cell membranes out of and making the polymer cutin which is found in the plant cuticle that protects land plants from drying out. plants synthesise a number of unique polymers like the polysaccharide molecules cellulose, pectin and xyloglucan from which the land plant cell wall is constructed. vascular land plants make lignin, a polymer used to strengthen the secondary cell walls of xylem tracheids and vessels to keep them from collapsing when a plant sucks water through them under water stress. lignin
proteins in the thylakoid membranes. the absorbed light energy is used to remove electrons from a donor ( water ) to a primary electron acceptor, a quinone designated as q. in the second stage, electrons move from the quinone primary electron acceptor through a series of electron carriers until they reach a final electron acceptor, which is usually the oxidized form of nadp +, which is reduced to nadph, a process that takes place in a protein complex called photosystem i ( psi ). the transport of electrons is coupled to the movement of protons ( or hydrogen ) from the stroma to the thylakoid membrane, which forms a ph gradient across the membrane as hydrogen becomes more concentrated in the lumen than in the stroma. this is analogous to the proton - motive force generated across the inner mitochondrial membrane in aerobic respiration. during the third stage of photosynthesis, the movement of protons down their concentration gradients from the thylakoid lumen to the stroma through the atp synthase is coupled to the synthesis of atp by that same atp synthase. the nadph and atps generated by the light - dependent reactions in the second and third stages, respectively, provide the energy and electrons to drive the synthesis of glucose by fixing atmospheric carbon dioxide into existing organic carbon compounds, such as ribulose bisphosphate ( rubp ) in a sequence of light - independent ( or dark ) reactions called the calvin cycle. = = = cell signaling = = = cell signaling ( or communication ) is the ability of cells to receive, process, and transmit signals with its environment and with itself. signals can be non - chemical such as light, electrical impulses, and heat, or chemical signals ( or ligands ) that interact with receptors, which can be found embedded in the cell membrane of another cell or located deep inside a cell. there are generally four types of chemical signals : autocrine, paracrine, juxtacrine, and hormones. in autocrine signaling, the ligand affects the same cell that releases it. tumor cells, for example, can reproduce uncontrollably because they release signals that initiate their own self - division. in paracrine signaling, the ligand diffuses to nearby cells and affects them. for example, brain cells called neurons release ligands called neurotransmitters that diffuse across a synaptic cleft to bind with a receptor on an adjacent cell such as another neuron or muscle
in gravitational lensing, the concept of optical depth assumes the lens is dark. several microlensing detections have now been made where the lens may be bright. relations are developed between apparent and absolute optical depth in the regime of the apparent and absolute brightness of the lens. an apparent optical depth through bright lenses is always less than the true, absolute optical depth. the greater the intrinsic brightness of the lens, the more likely it will be found nearer the source.
substrate - level phosphorylation, which does not require oxygen. = = = photosynthesis = = = photosynthesis is a process used by plants and other organisms to convert light energy into chemical energy that can later be released to fuel the organism ' s metabolic activities via cellular respiration. this chemical energy is stored in carbohydrate molecules, such as sugars, which are synthesized from carbon dioxide and water. in most cases, oxygen is released as a waste product. most plants, algae, and cyanobacteria perform photosynthesis, which is largely responsible for producing and maintaining the oxygen content of the earth ' s atmosphere, and supplies most of the energy necessary for life on earth. photosynthesis has four stages : light absorption, electron transport, atp synthesis, and carbon fixation. light absorption is the initial step of photosynthesis whereby light energy is absorbed by chlorophyll pigments attached to proteins in the thylakoid membranes. the absorbed light energy is used to remove electrons from a donor ( water ) to a primary electron acceptor, a quinone designated as q. in the second stage, electrons move from the quinone primary electron acceptor through a series of electron carriers until they reach a final electron acceptor, which is usually the oxidized form of nadp +, which is reduced to nadph, a process that takes place in a protein complex called photosystem i ( psi ). the transport of electrons is coupled to the movement of protons ( or hydrogen ) from the stroma to the thylakoid membrane, which forms a ph gradient across the membrane as hydrogen becomes more concentrated in the lumen than in the stroma. this is analogous to the proton - motive force generated across the inner mitochondrial membrane in aerobic respiration. during the third stage of photosynthesis, the movement of protons down their concentration gradients from the thylakoid lumen to the stroma through the atp synthase is coupled to the synthesis of atp by that same atp synthase. the nadph and atps generated by the light - dependent reactions in the second and third stages, respectively, provide the energy and electrons to drive the synthesis of glucose by fixing atmospheric carbon dioxide into existing organic carbon compounds, such as ribulose bisphosphate ( rubp ) in a sequence of light - independent ( or dark ) reactions called the calvin cycle. = = = cell signaling = = = cell signaling ( or communication ) is the
of these organisms. the energy in the red and blue light that these pigments absorb is used by chloroplasts to make energy - rich carbon compounds from carbon dioxide and water by oxygenic photosynthesis, a process that generates molecular oxygen ( o2 ) as a by - product. the light energy captured by chlorophyll a is initially in the form of electrons ( and later a proton gradient ) that is used to make molecules of atp and nadph which temporarily store and transport energy. their energy is used in the light - independent reactions of the calvin cycle by the enzyme rubisco to produce molecules of the 3 - carbon sugar glyceraldehyde 3 - phosphate ( g3p ). glyceraldehyde 3 - phosphate is the first product of photosynthesis and the raw material from which glucose and almost all other organic molecules of biological origin are synthesised. some of the glucose is converted to starch which is stored in the chloroplast. starch is the characteristic energy store of most land plants and algae, while inulin, a polymer of fructose is used for the same purpose in the sunflower family asteraceae. some of the glucose is converted to sucrose ( common table sugar ) for export to the rest of the plant. unlike in animals ( which lack chloroplasts ), plants and their eukaryote relatives have delegated many biochemical roles to their chloroplasts, including synthesising all their fatty acids, and most amino acids. the fatty acids that chloroplasts make are used for many things, such as providing material to build cell membranes out of and making the polymer cutin which is found in the plant cuticle that protects land plants from drying out. plants synthesise a number of unique polymers like the polysaccharide molecules cellulose, pectin and xyloglucan from which the land plant cell wall is constructed. vascular land plants make lignin, a polymer used to strengthen the secondary cell walls of xylem tracheids and vessels to keep them from collapsing when a plant sucks water through them under water stress. lignin is also used in other cell types like sclerenchyma fibres that provide structural support for a plant and is a major constituent of wood. sporopollenin is a chemically resistant polymer found in the outer cell walls of spores and pollen of land plants responsible for the survival of early land plant spores and
in order to obtain the keys in this system, a key must be inserted and turned ( like the key at the bottom of the system of the picture ). once the key is turned, the operator may retrieve the remaining keys that will be used to open other doors. once all keys are returned, then the operator will be allowed to take out the original key from the beginning. the key will not turn unless the remaining keys are put back in place. another example is an electric kiln. to prevent access to the inside of an electric kiln, a trapped key system may be used to interlock a disconnecting switch and the kiln door. while the switch is turned on, the key is held by the interlock attached to the disconnecting switch. to open the kiln door, the switch is first opened, which releases the key. the key can then be used to unlock the kiln door. while the key is removed from the switch interlock, a plunger from the interlock mechanically prevents the switch from closing. power cannot be re - applied to the kiln until the kiln door is locked, releasing the key, and the key is then returned to the disconnecting switch interlock. a similar two - part interlock system can be used anywhere it is necessary to ensure the energy supply to a machine is interrupted before the machine is entered for adjustment or maintenance. = = mechanical = = interlocks may be strictly mechanical. an example of a mechanical interlock is a steering wheel of a car. in modern days, most cars have an anti - theft feature that restricts the turning of the steering wheel if the key is not inserted in the ignition. this prevents an individual from pushing the car since the mechanical interlock restricts the directional motion of the front wheels of the car. in the operation of a device such as a press or cutter that is hand fed or the workpiece hand removed, the use of two buttons to actuate the device, one for each hand, greatly reduces the possibility of operation endangering the operator. no such system is fool - proof, and such systems are often augmented by the use of cable β pulled gloves worn by the operator ; these are retracted away from the danger area by the stroke of the machine. a major problem in engineering operator safety is the tendency of operators to ignore safety precautions or even outright disabling forced interlocks due to work pressure and other factors. therefore, such safeties require and perhaps must facilitate operator cooperation. = = electrical =
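The trapped-key logic described above amounts to a small state machine: the key is only released once the energy source is isolated, and power can only be restored after the key is back in the switch interlock. A minimal sketch in Python of the kiln example; the class and method names are choices made for illustration, not part of any particular interlock product:

    class TrappedKeyInterlock:
        """Toy model of the kiln example: one disconnect switch, one door, one key."""

        def __init__(self):
            self.switch_on = True      # kiln energised
            self.key_in_switch = True  # key is trapped while the switch is closed
            self.door_open = False

        def open_switch_and_release_key(self):
            # isolating the supply frees the key; a plunger now blocks re-closing
            self.switch_on = False
            self.key_in_switch = False

        def unlock_door(self):
            if self.key_in_switch:
                raise RuntimeError("key still trapped: open the switch first")
            self.door_open = True

        def close_door_and_return_key(self):
            self.door_open = False
            self.key_in_switch = True

        def close_switch(self):
            if not self.key_in_switch:
                raise RuntimeError("key removed: switch is mechanically blocked")
            self.switch_on = True

    # typical maintenance sequence
    kiln = TrappedKeyInterlock()
    kiln.open_switch_and_release_key()
    kiln.unlock_door()               # allowed: key is out, power is off
    kiln.close_door_and_return_key()
    kiln.close_switch()              # power returns only after the key is back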
psychophysical processes which take place in human beings as they make sense of information received through the visual system. the subject of the image. when developing an imaging system, designers must consider the observables associated with the subjects which will be imaged. these observables generally take the form of emitted or reflected energy, such as electromagnetic energy or mechanical energy. the capture device. once the observables associated with the subject are characterized, designers can then identify and integrate the technologies needed to capture those observables. for example, in the case of consumer digital cameras, those technologies include optics for collecting energy in the visible portion of the electromagnetic spectrum, and electronic detectors for converting the electromagnetic energy into an electronic signal. the processor. for all digital imaging systems, the electronic signals produced by the capture device must be manipulated by an algorithm which formats the signals so they can be displayed as an image. in practice, there are often multiple processors involved in the creation of a digital image. the display. the display takes the electronic signals which have been manipulated by the processor and renders them on some visual medium. examples include paper ( for printed, or " hard copy " images ), television, computer monitor, or projector. note that some imaging scientists will include additional " links " in their description of the imaging chain. for example, some will include the " source " of the energy which " illuminates " or interacts with the subject of the image. others will include storage and / or transmission systems. = = subfields = = subfields within imaging science include : image processing, computer vision, 3d computer graphics, animations, atmospheric optics, astronomical imaging, biological imaging, digital image restoration, digital imaging, color science, digital photography, holography, magnetic resonance imaging, medical imaging, microdensitometry, optics, photography, remote sensing, radar imaging, radiometry, silver halide, ultrasound imaging, photoacoustic imaging, thermal imaging, visual perception, and various printing technologies. = = methodologies = = acoustic imaging coherent imaging uses an active coherent illumination source, such as in radar, synthetic aperture radar ( sar ), medical ultrasound and optical coherence tomography ; non - coherent imaging systems include fluorescent microscopes, optical microscopes, and telescopes. chemical imaging, the simultaneous measurement of spectra and pictures digital imaging, creating digital images, generally by scanning or through digital photography disk image, a file which contains the exact content of a data storage medium document imaging, replicating documents commonly
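The "links" of the imaging chain sketched above (subject observables, capture device, processor, display) compose naturally as a pipeline. A minimal illustrative sketch in Python; the function names and the simple contrast-stretch processing step are assumptions made for the example, not the behaviour of any specific imaging system:

    import numpy as np

    def capture(scene: np.ndarray) -> np.ndarray:
        # capture device: convert incoming energy (here a 2-D irradiance map)
        # into a quantised electronic signal, e.g. an 8-bit sensor reading
        return np.clip(scene * 255, 0, 255).astype(np.uint8)

    def process(raw: np.ndarray) -> np.ndarray:
        # processor: format the signal for display; here a simple contrast stretch
        lo, hi = raw.min(), raw.max()
        return ((raw - lo) / max(hi - lo, 1) * 255).astype(np.uint8)

    def display(image: np.ndarray) -> None:
        # display: render on some visual medium; here just report basic statistics
        print(f"{image.shape[1]}x{image.shape[0]} image, mean level {image.mean():.1f}")

    scene = np.random.rand(480, 640)   # stand-in for energy emitted or reflected by the subject
    display(process(capture(scene)))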
photosensitive material by selective exposure to a radiation source such as light. a photosensitive material is a material that experiences a change in its physical properties when exposed to a radiation source. if a photosensitive material is selectively exposed to radiation ( e. g. by masking some of the radiation ) the pattern of the radiation on the material is transferred to the material exposed, as the properties of the exposed and unexposed regions differ. this exposed region can then be removed or treated providing a mask for the underlying substrate. photolithography is typically used with metal or other thin film deposition, wet and dry etching. sometimes, photolithography is used to create structure without any kind of post etching. one example is su8 based lens where su8 based square blocks are generated. then the photoresist is melted to form a semi - sphere which acts as a lens. electron beam lithography ( often abbreviated as e - beam lithography ) is the practice of scanning a beam of electrons in a patterned fashion across a surface covered with a film ( called the resist ), ( " exposing " the resist ) and of selectively removing either exposed or non - exposed regions of the resist ( " developing " ). the purpose, as with photolithography, is to create very small structures in the resist that can subsequently be transferred to the substrate material, often by etching. it was developed for manufacturing integrated circuits, and is also used for creating nanotechnology architectures. the primary advantage of electron beam lithography is that it is one of the ways to beat the diffraction limit of light and make features in the nanometer range. this form of maskless lithography has found wide usage in photomask - making used in photolithography, low - volume production of semiconductor components, and research & development. the key limitation of electron beam lithography is throughput, i. e., the very long time it takes to expose an entire silicon wafer or glass substrate. a long exposure time leaves the user vulnerable to beam drift or instability which may occur during the exposure. also, the turn - around time for reworking or re - design is lengthened unnecessarily if the pattern is not being changed the second time. it is known that focused - ion beam lithography has the capability of writing extremely fine lines ( less than 50 nm line and space has been achieved ) without proximity effect. however, because the writing field in ion - beam lit
Question: Light enters the human eye through the
A) retina.
B) pupil.
C) iris.
D) lens.
|
B) pupil.
|
Context:
of these cellular components. the different stages of mitosis all together define the mitotic phase of an animal cell cycle β the division of the mother cell into two genetically identical daughter cells. the cell cycle is a vital process by which a single - celled fertilized egg develops into a mature organism, as well as the process by which hair, skin, blood cells, and some internal organs are renewed. after cell division, each of the daughter cells begin the interphase of a new cycle. in contrast to mitosis, meiosis results in four haploid daughter cells by undergoing one round of dna replication followed by two divisions. homologous chromosomes are separated in the first division ( meiosis i ), and sister chromatids are separated in the second division ( meiosis ii ). both of these cell division cycles are used in the process of sexual reproduction at some point in their life cycle. both are believed to be present in the last eukaryotic common ancestor. prokaryotes ( i. e., archaea and bacteria ) can also undergo cell division ( or binary fission ). unlike the processes of mitosis and meiosis in eukaryotes, binary fission in prokaryotes takes place without the formation of a spindle apparatus on the cell. before binary fission, dna in the bacterium is tightly coiled. after it has uncoiled and duplicated, it is pulled to the separate poles of the bacterium as it increases the size to prepare for splitting. growth of a new cell wall begins to separate the bacterium ( triggered by ftsz polymerization and " z - ring " formation ). the new cell wall ( septum ) fully develops, resulting in the complete split of the bacterium. the new daughter cells have tightly coiled dna rods, ribosomes, and plasmids. = = = sexual reproduction and meiosis = = = meiosis is a central feature of sexual reproduction in eukaryotes, and the most fundamental function of meiosis appears to be conservation of the integrity of the genome that is passed on to progeny by parents. two aspects of sexual reproduction, meiotic recombination and outcrossing, are likely maintained respectively by the adaptive advantages of recombinational repair of genomic dna damage and genetic complementation which masks the expression of deleterious recessive mutations. the beneficial effect of genetic complementation, derived from outcrossing ( cross - fertilization ) is also referred to as hybrid vigor or heterosis. charles
( division of the nucleus ) is preceded by the s stage of interphase ( during which the dna is replicated ) and is often followed by telophase and cytokinesis ; which divides the cytoplasm, organelles and cell membrane of one cell into two new cells containing roughly equal shares of these cellular components. the different stages of mitosis all together define the mitotic phase of an animal cell cycle β the division of the mother cell into two genetically identical daughter cells. the cell cycle is a vital process by which a single - celled fertilized egg develops into a mature organism, as well as the process by which hair, skin, blood cells, and some internal organs are renewed. after cell division, each of the daughter cells begin the interphase of a new cycle. in contrast to mitosis, meiosis results in four haploid daughter cells by undergoing one round of dna replication followed by two divisions. homologous chromosomes are separated in the first division ( meiosis i ), and sister chromatids are separated in the second division ( meiosis ii ). both of these cell division cycles are used in the process of sexual reproduction at some point in their life cycle. both are believed to be present in the last eukaryotic common ancestor. prokaryotes ( i. e., archaea and bacteria ) can also undergo cell division ( or binary fission ). unlike the processes of mitosis and meiosis in eukaryotes, binary fission in prokaryotes takes place without the formation of a spindle apparatus on the cell. before binary fission, dna in the bacterium is tightly coiled. after it has uncoiled and duplicated, it is pulled to the separate poles of the bacterium as it increases the size to prepare for splitting. growth of a new cell wall begins to separate the bacterium ( triggered by ftsz polymerization and " z - ring " formation ). the new cell wall ( septum ) fully develops, resulting in the complete split of the bacterium. the new daughter cells have tightly coiled dna rods, ribosomes, and plasmids. = = = sexual reproduction and meiosis = = = meiosis is a central feature of sexual reproduction in eukaryotes, and the most fundamental function of meiosis appears to be conservation of the integrity of the genome that is passed on to progeny by parents. two aspects of sexual reproduction, meiotic recombination and outcrossing, are likely maintained respectively by
consume organic material, breathe oxygen, are able to move, can reproduce sexually, and grow from a hollow sphere of cells, the blastula, during embryonic development. over 1. 5 million living animal species have been described β of which around 1 million are insects β but it has been estimated there are over 7 million animal species in total. they have complex interactions with each other and their environments, forming intricate food webs. = = = viruses = = = viruses are submicroscopic infectious agents that replicate inside the cells of organisms. viruses infect all types of life forms, from animals and plants to microorganisms, including bacteria and archaea. more than 6, 000 virus species have been described in detail. viruses are found in almost every ecosystem on earth and are the most numerous type of biological entity. the origins of viruses in the evolutionary history of life are unclear : some may have evolved from plasmids β pieces of dna that can move between cells β while others may have evolved from bacteria. in evolution, viruses are an important means of horizontal gene transfer, which increases genetic diversity in a way analogous to sexual reproduction. because viruses possess some but not all characteristics of life, they have been described as " organisms at the edge of life ", and as self - replicators. = = ecology = = ecology is the study of the distribution and abundance of life, the interaction between organisms and their environment. = = = ecosystems = = = the community of living ( biotic ) organisms in conjunction with the nonliving ( abiotic ) components ( e. g., water, light, radiation, temperature, humidity, atmosphere, acidity, and soil ) of their environment is called an ecosystem. these biotic and abiotic components are linked together through nutrient cycles and energy flows. energy from the sun enters the system through photosynthesis and is incorporated into plant tissue. by feeding on plants and on one another, animals move matter and energy through the system. they also influence the quantity of plant and microbial biomass present. by breaking down dead organic matter, decomposers release carbon back to the atmosphere and facilitate nutrient cycling by converting nutrients stored in dead biomass back to a form that can be readily used by plants and other microbes. = = = populations = = = a population is the group of organisms of the same species that occupies an area and reproduce from generation to generation. population size can be estimated by multiplying population density by the area or volume. the carrying capacity of an environment
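The population-size estimate mentioned above (density multiplied by the area or volume occupied) is simple arithmetic; a one-line check in Python, with made-up illustrative numbers rather than survey data:

    density_per_km2 = 12.5       # hypothetical: organisms counted per square kilometre
    area_km2 = 40.0              # hypothetical: area of the habitat surveyed
    estimated_population = density_per_km2 * area_km2
    print(estimated_population)  # 500.0 individuals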
devonian period, several groups, including the lycopods, sphenophylls and progymnosperms, had independently evolved " megaspory " β their spores were of two distinct sizes, larger megaspores and smaller microspores. their reduced gametophytes developed from megaspores retained within the spore - producing organs ( megasporangia ) of the sporophyte, a condition known as endospory. seeds consist of an endosporic megasporangium surrounded by one or two sheathing layers ( integuments ). the young sporophyte develops within the seed, which on germination splits to release it. the earliest known seed plants date from the latest devonian famennian stage. following the evolution of the seed habit, seed plants diversified, giving rise to a number of now - extinct groups, including seed ferns, as well as the modern gymnosperms and angiosperms. gymnosperms produce " naked seeds " not fully enclosed in an ovary ; modern representatives include conifers, cycads, ginkgo, and gnetales. angiosperms produce seeds enclosed in a structure such as a carpel or an ovary. ongoing research on the molecular phylogenetics of living plants appears to show that the angiosperms are a sister clade to the gymnosperms. = = plant physiology = = plant physiology encompasses all the internal chemical and physical activities of plants associated with life. chemicals obtained from the air, soil and water form the basis of all plant metabolism. the energy of sunlight, captured by oxygenic photosynthesis and released by cellular respiration, is the basis of almost all life. photoautotrophs, including all green plants, algae and cyanobacteria gather energy directly from sunlight by photosynthesis. heterotrophs including all animals, all fungi, all completely parasitic plants, and non - photosynthetic bacteria take in organic molecules produced by photoautotrophs and respire them or use them in the construction of cells and tissues. respiration is the oxidation of carbon compounds by breaking them down into simpler structures to release the energy they contain, essentially the opposite of photosynthesis. molecules are moved within plants by transport processes that operate at a variety of spatial scales. subcellular transport of ions, electrons and molecules such as water and enzymes occurs across cell membranes. minerals and water are transported from roots to other parts of the plant in
be classified as belonging to one of three domains : archaea ( originally archaebacteria ), bacteria ( originally eubacteria ), or eukarya ( includes the fungi, plant, and animal kingdoms ). = = = history of life = = = the history of life on earth traces how organisms have evolved from the earliest emergence of life to present day. earth formed about 4. 5 billion years ago and all life on earth, both living and extinct, descended from a last universal common ancestor that lived about 3. 5 billion years ago. geologists have developed a geologic time scale that divides the history of the earth into major divisions, starting with four eons ( hadean, archean, proterozoic, and phanerozoic ), the first three of which are collectively known as the precambrian, which lasted approximately 4 billion years. each eon can be divided into eras, with the phanerozoic eon that began 539 million years ago being subdivided into paleozoic, mesozoic, and cenozoic eras. these three eras together comprise eleven periods ( cambrian, ordovician, silurian, devonian, carboniferous, permian, triassic, jurassic, cretaceous, tertiary, and quaternary ). the similarities among all known present - day species indicate that they have diverged through the process of evolution from their common ancestor. biologists regard the ubiquity of the genetic code as evidence of universal common descent for all bacteria, archaea, and eukaryotes. microbial mats of coexisting bacteria and archaea were the dominant form of life in the early archean eon and many of the major steps in early evolution are thought to have taken place in this environment. the earliest evidence of eukaryotes dates from 1. 85 billion years ago, and while they may have been present earlier, their diversification accelerated when they started using oxygen in their metabolism. later, around 1. 7 billion years ago, multicellular organisms began to appear, with differentiated cells performing specialised functions. algae - like multicellular land plants are dated back to about 1 billion years ago, although evidence suggests that microorganisms formed the earliest terrestrial ecosystems, at least 2. 7 billion years ago. microorganisms are thought to have paved the way for the inception of land plants in the ordovician period. land plants were so successful that they are thought to have contributed to the late devonian extinction event
the non - avian dinosaurs, mammals increased rapidly in size and diversity. such mass extinctions may have accelerated evolution by providing opportunities for new groups of organisms to diversify. = = diversity = = = = = bacteria and archaea = = = bacteria are a type of cell that constitute a large domain of prokaryotic microorganisms. typically a few micrometers in length, bacteria have a number of shapes, ranging from spheres to rods and spirals. bacteria were among the first life forms to appear on earth, and are present in most of its habitats. bacteria inhabit soil, water, acidic hot springs, radioactive waste, and the deep biosphere of the earth ' s crust. bacteria also live in symbiotic and parasitic relationships with plants and animals. most bacteria have not been characterised, and only about 27 percent of the bacterial phyla have species that can be grown in the laboratory. archaea constitute the other domain of prokaryotic cells and were initially classified as bacteria, receiving the name archaebacteria ( in the archaebacteria kingdom ), a term that has fallen out of use. archaeal cells have unique properties separating them from the other two domains, bacteria and eukaryota. archaea are further divided into multiple recognized phyla. archaea and bacteria are generally similar in size and shape, although a few archaea have very different shapes, such as the flat and square cells of haloquadratum walsbyi. despite this morphological similarity to bacteria, archaea possess genes and several metabolic pathways that are more closely related to those of eukaryotes, notably for the enzymes involved in transcription and translation. other aspects of archaeal biochemistry are unique, such as their reliance on ether lipids in their cell membranes, including archaeols. archaea use more energy sources than eukaryotes : these range from organic compounds, such as sugars, to ammonia, metal ions or even hydrogen gas. salt - tolerant archaea ( the haloarchaea ) use sunlight as an energy source, and other species of archaea fix carbon, but unlike plants and cyanobacteria, no known species of archaea does both. archaea reproduce asexually by binary fission, fragmentation, or budding ; unlike bacteria, no known species of archaea form endospores. the first observed archaea were extremophiles, living in extreme environments, such as hot springs and salt lakes with no other organisms. improved molecular detection
two steps. in the first variation, the etch cycle is as follows : ( i ) sf6 isotropic etch ; ( ii ) c4f8 passivation ; ( iii ) sf6 anisotropic etch for floor cleaning. in the 2nd variation, steps ( i ) and ( iii ) are combined. both variations operate similarly. the c4f8 creates a polymer on the surface of the substrate, and the second gas composition ( sf6 and o2 ) etches the substrate. the polymer is immediately sputtered away by the physical part of the etching, but only on the horizontal surfaces and not the sidewalls. since the polymer only dissolves very slowly in the chemical part of the etching, it builds up on the sidewalls and protects them from etching. as a result, etching aspect ratios of 50 to 1 can be achieved. the process can easily be used to etch completely through a silicon substrate, and etch rates are 3 β 6 times higher than wet etching. after preparing a large number of mems devices on a silicon wafer, individual dies have to be separated, which is called die preparation in semiconductor technology. for some applications, the separation is preceded by wafer backgrinding in order to reduce the wafer thickness. wafer dicing may then be performed either by sawing using a cooling liquid or a dry laser process called stealth dicing. = = manufacturing technologies = = bulk micromachining is the oldest paradigm of silicon - based mems. the whole thickness of a silicon wafer is used for building the micro - mechanical structures. silicon is machined using various etching processes. bulk micromachining has been essential in enabling high performance pressure sensors and accelerometers that changed the sensor industry in the 1980s and 1990s. surface micromachining uses layers deposited on the surface of a substrate as the structural materials, rather than using the substrate itself. surface micromachining was created in the late 1980s to render micromachining of silicon more compatible with planar integrated circuit technology, with the goal of combining mems and integrated circuits on the same silicon wafer. the original surface micromachining concept was based on thin polycrystalline silicon layers patterned as movable mechanical structures and released by sacrificial etching of the underlying oxide layer. interdigital comb electrodes were used to produce in - plane forces and to detect in - plane movement capacitively. this
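The alternating passivation/etch cycle described above lends itself to a simple loop model relating cycle count, trench depth and aspect ratio. A rough sketch in Python; the per-cycle etch depth and the trench width are hypothetical numbers chosen only for illustration, not measured process data:

    depth_per_cycle_um = 0.8   # hypothetical depth gained per SF6 etch step
    trench_width_um = 10.0     # hypothetical mask opening

    depth_um = 0.0
    cycles = 0
    # keep cycling until the sidewall-protected trench reaches a 50:1 aspect ratio
    while depth_um / trench_width_um < 50:
        # C4F8 step: polymer coats the trench floor and sidewalls
        # SF6 step: the directional etch sputters the floor polymer away and deepens
        # the trench, while the sidewall polymer survives and keeps the walls vertical
        depth_um += depth_per_cycle_um
        cycles += 1

    print(cycles, depth_um)    # cycles needed to reach roughly 500 um of depth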
casting, also called the lost wax process, die casting, centrifugal casting, both vertical and horizontal, and continuous castings. each of these forms has advantages for certain metals and applications considering factors like magnetism and corrosion. forging β a red - hot billet is hammered into shape. rolling β a billet is passed through successively narrower rollers to create a sheet. extrusion β a hot and malleable metal is forced under pressure through a die, which shapes it before it cools. machining β lathes, milling machines and drills cut the cold metal to shape. sintering β a powdered metal is heated in a non - oxidizing environment after being compressed into a die. fabrication β sheets of metal are cut with guillotines or gas cutters and bent and welded into structural shape. laser cladding β metallic powder is blown through a movable laser beam ( e. g. mounted on a nc 5 - axis machine ). the resulting melted metal reaches a substrate to form a melt pool. by moving the laser head, it is possible to stack the tracks and build up a three - dimensional piece. 3d printing β sintering or melting amorphous powder metal in a 3d space to make any object to shape. cold - working processes, in which the product ' s shape is altered by rolling, fabrication or other processes, while the product is cold, can increase the strength of the product by a process called work hardening. work hardening creates microscopic defects in the metal, which resist further changes of shape. = = = heat treatment = = = metals can be heat - treated to alter the properties of strength, ductility, toughness, hardness and resistance to corrosion. common heat treatment processes include annealing, precipitation strengthening, quenching, and tempering : annealing process softens the metal by heating it and then allowing it to cool very slowly, which gets rid of stresses in the metal and makes the grain structure large and soft - edged so that, when the metal is hit or stressed it dents or perhaps bends, rather than breaking ; it is also easier to sand, grind, or cut annealed metal. quenching is the process of cooling metal very quickly after heating, thus " freezing " the metal ' s molecules in the very hard martensite form, which makes the metal harder. tempering relieves stresses in the metal that were caused by the hardening process ; tempering makes the metal less hard while making it better able to sustain
described as having homologous features ( or synapomorphy ). phylogeny provides the basis of biological classification. this classification system is rank - based, with the highest rank being the domain followed by kingdom, phylum, class, order, family, genus, and species. all organisms can be classified as belonging to one of three domains : archaea ( originally archaebacteria ), bacteria ( originally eubacteria ), or eukarya ( includes the fungi, plant, and animal kingdoms ). = = = history of life = = = the history of life on earth traces how organisms have evolved from the earliest emergence of life to present day. earth formed about 4. 5 billion years ago and all life on earth, both living and extinct, descended from a last universal common ancestor that lived about 3. 5 billion years ago. geologists have developed a geologic time scale that divides the history of the earth into major divisions, starting with four eons ( hadean, archean, proterozoic, and phanerozoic ), the first three of which are collectively known as the precambrian, which lasted approximately 4 billion years. each eon can be divided into eras, with the phanerozoic eon that began 539 million years ago being subdivided into paleozoic, mesozoic, and cenozoic eras. these three eras together comprise eleven periods ( cambrian, ordovician, silurian, devonian, carboniferous, permian, triassic, jurassic, cretaceous, tertiary, and quaternary ). the similarities among all known present - day species indicate that they have diverged through the process of evolution from their common ancestor. biologists regard the ubiquity of the genetic code as evidence of universal common descent for all bacteria, archaea, and eukaryotes. microbial mats of coexisting bacteria and archaea were the dominant form of life in the early archean eon and many of the major steps in early evolution are thought to have taken place in this environment. the earliest evidence of eukaryotes dates from 1. 85 billion years ago, and while they may have been present earlier, their diversification accelerated when they started using oxygen in their metabolism. later, around 1. 7 billion years ago, multicellular organisms began to appear, with differentiated cells performing specialised functions. algae - like multicellular land plants are dated back to about 1 billion years ago, although evidence suggests
extinct in the permian β triassic extinction event 252 million years ago. during the recovery from this catastrophe, archosaurs became the most abundant land vertebrates ; one archosaur group, the dinosaurs, dominated the jurassic and cretaceous periods. after the cretaceous β paleogene extinction event 66 million years ago killed off the non - avian dinosaurs, mammals increased rapidly in size and diversity. such mass extinctions may have accelerated evolution by providing opportunities for new groups of organisms to diversify. = = diversity = = = = = bacteria and archaea = = = bacteria are a type of cell that constitute a large domain of prokaryotic microorganisms. typically a few micrometers in length, bacteria have a number of shapes, ranging from spheres to rods and spirals. bacteria were among the first life forms to appear on earth, and are present in most of its habitats. bacteria inhabit soil, water, acidic hot springs, radioactive waste, and the deep biosphere of the earth ' s crust. bacteria also live in symbiotic and parasitic relationships with plants and animals. most bacteria have not been characterised, and only about 27 percent of the bacterial phyla have species that can be grown in the laboratory. archaea constitute the other domain of prokaryotic cells and were initially classified as bacteria, receiving the name archaebacteria ( in the archaebacteria kingdom ), a term that has fallen out of use. archaeal cells have unique properties separating them from the other two domains, bacteria and eukaryota. archaea are further divided into multiple recognized phyla. archaea and bacteria are generally similar in size and shape, although a few archaea have very different shapes, such as the flat and square cells of haloquadratum walsbyi. despite this morphological similarity to bacteria, archaea possess genes and several metabolic pathways that are more closely related to those of eukaryotes, notably for the enzymes involved in transcription and translation. other aspects of archaeal biochemistry are unique, such as their reliance on ether lipids in their cell membranes, including archaeols. archaea use more energy sources than eukaryotes : these range from organic compounds, such as sugars, to ammonia, metal ions or even hydrogen gas. salt - tolerant archaea ( the haloarchaea ) use sunlight as an energy source, and other species of archaea fix carbon, but unlike plants and cyanobacteria, no known species of archaea does both
Question: The stages in the life cycle of an organism are shown below. birth -> growth -> development -> reproduction -> death In which life cycle stage will a new organism be made?
A) growth
B) development
C) reproduction
D) death
|
C) reproduction
|
Context:
or fuselage, or in some cases where stealth is applied to an extant aircraft, install baffles in the air intakes, so that the compressor blades are not visible to radar. a stealthy shape must be devoid of complex bumps or protrusions of any kind, meaning that weapons, fuel tanks, and other stores must not be carried externally. any stealthy vehicle becomes un - stealthy when a door or hatch opens. parallel alignment of edges or even surfaces is also often used in stealth designs. the technique involves using a small number of edge orientations in the shape of the structure. for example, on the f - 22a raptor, the leading edges of the wing and the tail planes are set at the same angle. other smaller structures, such as the air intake bypass doors and the air refueling aperture, also use the same angles. the effect of this is to return a narrow radar signal in a very specific direction away from the radar emitter rather than returning a diffuse signal detectable at many angles. the effect is sometimes called " glitter " after the very brief signal seen when the reflected beam passes across a detector. it can be difficult for the radar operator to distinguish between a glitter event and a digital glitch in the processing system. stealth airframes sometimes display distinctive serrations on some exposed edges, such as the engine ports. the yf - 23 has such serrations on the exhaust ports. this is another example in the parallel alignment of features, this time on the external airframe. the shaping requirements detracted greatly from the f - 117 ' s aerodynamic properties. it is inherently unstable, and cannot be flown without a fly - by - wire control system. similarly, coating the cockpit canopy with a thin film transparent conductor ( vapor - deposited gold or indium tin oxide ) helps to reduce the aircraft ' s radar profile, because radar waves would normally enter the cockpit, reflect off objects ( the inside of a cockpit has a complex shape, with a pilot helmet alone forming a sizeable return ), and possibly return to the radar, but the conductive coating creates a controlled shape that deflects the incoming radar waves away from the radar. the coating is thin enough that it has no adverse effect on pilot vision. = = = = ships = = = = ships have also adopted similar methods. though the earlier american arleigh burke - class destroyers incorporated some signature - reduction features. the norwegian skjold - class corvettes was the first coastal defence and the french la fayette - class frigates the
an electron inside liquid helium forms a bubble of 17 \ aa in radius. in an external magnetic field, the two - level system of a spin 1 / 2 electron is ideal for the implementation of a qubit for quantum computing. the electron spin is well isolated from other thermal reservoirs so that the qubit should have very long coherence time. by confining a chain of single electron bubbles in a linear rf quadrupole trap, a multi - bit quantum register can be implemented. all spins in the register can be initialized to the ground state either by establishing thermal equilibrium at a temperature around 0. 1 k and at a magnetic field of 1 t or by sorting the bubbles to be loaded into the trap with magnetic separation. schemes are designed to address individual spins and to do two - qubit cnot operations between the neighboring spins. the final readout can be carried out through a measurement similar to the stern - gerlach experiment.
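The two-qubit CNOT operation mentioned for neighbouring spins can be illustrated abstractly, independently of the electron-bubble hardware, by its 4x4 matrix acting on a two-spin state. A minimal numpy sketch; the basis ordering (first qubit as control, second as target) and the example input state are choices made for illustration:

    import numpy as np

    # basis order: |00>, |01>, |10>, |11>  (first qubit = control, second = target)
    CNOT = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]], dtype=complex)

    ket_10 = np.array([0, 0, 1, 0], dtype=complex)  # control spin up, target spin down
    print(CNOT @ ket_10)                            # result is |11>: target flips because control is set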
angles. stealth aircraft such as the f - 117 use a different arrangement, tilting the tail surfaces to reduce corner reflections formed between them. a more radical method is to omit the tail, as in the b - 2 spirit. the b - 2 ' s clean, low - drag flying wing configuration gives it exceptional range and reduces its radar profile. the flying wing design most closely resembles a so - called infinite flat plate ( as vertical control surfaces dramatically increase rcs ), the perfect stealth shape, as it would have no angles to reflect back radar waves. in addition to altering the tail, stealth design must bury the engines within the wing or fuselage, or in some cases where stealth is applied to an extant aircraft, install baffles in the air intakes, so that the compressor blades are not visible to radar. a stealthy shape must be devoid of complex bumps or protrusions of any kind, meaning that weapons, fuel tanks, and other stores must not be carried externally. any stealthy vehicle becomes un - stealthy when a door or hatch opens. parallel alignment of edges or even surfaces is also often used in stealth designs. the technique involves using a small number of edge orientations in the shape of the structure. for example, on the f - 22a raptor, the leading edges of the wing and the tail planes are set at the same angle. other smaller structures, such as the air intake bypass doors and the air refueling aperture, also use the same angles. the effect of this is to return a narrow radar signal in a very specific direction away from the radar emitter rather than returning a diffuse signal detectable at many angles. the effect is sometimes called " glitter " after the very brief signal seen when the reflected beam passes across a detector. it can be difficult for the radar operator to distinguish between a glitter event and a digital glitch in the processing system. stealth airframes sometimes display distinctive serrations on some exposed edges, such as the engine ports. the yf - 23 has such serrations on the exhaust ports. this is another example in the parallel alignment of features, this time on the external airframe. the shaping requirements detracted greatly from the f - 117 ' s aerodynamic properties. it is inherently unstable, and cannot be flown without a fly - by - wire control system. similarly, coating the cockpit canopy with a thin film transparent conductor ( vapor - deposited gold or indium tin oxide ) helps to reduce the aircraft ' s radar profile, because radar waves would normally enter the cockpit
an alternative explanation of 1 / f - noise in manganites is suggested and discussed
and reduces its radar profile. the flying wing design most closely resembles a so - called infinite flat plate ( as vertical control surfaces dramatically increase rcs ), the perfect stealth shape, as it would have no angles to reflect back radar waves. in addition to altering the tail, stealth design must bury the engines within the wing or fuselage, or in some cases where stealth is applied to an extant aircraft, install baffles in the air intakes, so that the compressor blades are not visible to radar. a stealthy shape must be devoid of complex bumps or protrusions of any kind, meaning that weapons, fuel tanks, and other stores must not be carried externally. any stealthy vehicle becomes un - stealthy when a door or hatch opens. parallel alignment of edges or even surfaces is also often used in stealth designs. the technique involves using a small number of edge orientations in the shape of the structure. for example, on the f - 22a raptor, the leading edges of the wing and the tail planes are set at the same angle. other smaller structures, such as the air intake bypass doors and the air refueling aperture, also use the same angles. the effect of this is to return a narrow radar signal in a very specific direction away from the radar emitter rather than returning a diffuse signal detectable at many angles. the effect is sometimes called " glitter " after the very brief signal seen when the reflected beam passes across a detector. it can be difficult for the radar operator to distinguish between a glitter event and a digital glitch in the processing system. stealth airframes sometimes display distinctive serrations on some exposed edges, such as the engine ports. the yf - 23 has such serrations on the exhaust ports. this is another example in the parallel alignment of features, this time on the external airframe. the shaping requirements detracted greatly from the f - 117 ' s aerodynamic properties. it is inherently unstable, and cannot be flown without a fly - by - wire control system. similarly, coating the cockpit canopy with a thin film transparent conductor ( vapor - deposited gold or indium tin oxide ) helps to reduce the aircraft ' s radar profile, because radar waves would normally enter the cockpit, reflect off objects ( the inside of a cockpit has a complex shape, with a pilot helmet alone forming a sizeable return ), and possibly return to the radar, but the conductive coating creates a controlled shape that deflects the incoming radar waves away from the radar. the coating is thin enough that it has
baby while they are in other parts of the house. the wavebands used vary by region, but analog baby monitors generally transmit with low power in the 16, 9. 3 β 49. 9 or 900 mhz wavebands, and digital systems in the 2. 4 ghz waveband. many baby monitors have duplex channels so the parent can talk to the baby, and cameras to show video of the baby. wireless microphone β a battery - powered microphone with a short - range transmitter that is handheld or worn on a person ' s body which transmits its sound by radio to a nearby receiver unit connected to a sound system. wireless microphones are used by public speakers, performers, and television personalities so they can move freely without trailing a microphone cord. traditionally, analog models transmit in fm on unused portions of the television broadcast frequencies in the vhf and uhf bands. some models transmit on two frequency channels for diversity reception to prevent nulls from interrupting transmission as the performer moves around. some models use digital modulation to prevent unauthorized reception by scanner radio receivers ; these operate in the 900 mhz, 2. 4 ghz or 6 ghz ism bands. european standards also support wireless multichannel audio systems ( wmas ) that can better support the use of large numbers of wireless microphones at a single event or venue. as of 2021, u. s. regulators were considering adopting rules for wmas. = = = data communication = = = wireless networking β automated radio links which transmit digital data between computers and other wireless devices using radio waves, linking the devices together transparently in a computer network. computer networks can transmit any form of data : in addition to email and web pages, they also carry phone calls ( voip ), audio, and video content ( called streaming media ). security is more of an issue for wireless networks than for wired networks since anyone nearby with a wireless modem can access the signal and attempt to log in. the radio signals of wireless networks are encrypted using wpa. wireless lan ( wireless local area network or wi - fi ) β based on the ieee 802. 11 standards, these are the most widely used computer networks, used to implement local area networks without cables, linking computers, laptops, cell phones, video game consoles, smart tvs and printers in a home or office together, and to a wireless router connecting them to the internet with a wire or cable connection. wireless routers in public places like libraries, hotels and coffee shops create wireless access points ( hotspots ) to allow the public to
the boron buckyball avoids the high symmetry icosahedral cage structure. the previously reported ih symmetric structure is not an energy minimum in the potential energy surface and exhibits a spontaneous symmetry breaking to yield a puckered cage with a rare th symmetry. the homo - lumo gap is twice as large as the reported value and amounts to 1. 94 ev at b3lyp / 6 - 31g ( d ) level. the valence orbital structure of boron buckyball is identical to the one in the carbon analogue.
a theory is put forward that the electronic phase transition at 0. 2 k in ni - doped bi$_2$sr$_2$cacu$_2$o$_8$ is a result of the formation of a spin density wave in the system of ni impurities. the driving force for the transition is the exchange interaction between the impurity spins and the spins of the conduction electrons. this creates a small gap at two of the four nodes of the superconducting gap. the effect is to reduce the thermal conductivity by a factor of two, as observed.
behavioral responses to different stimuli, one can understand something about how those stimuli are processed. lewandowski & strohmetz ( 2009 ) reviewed a collection of innovative uses of behavioral measurement in psychology including behavioral traces, behavioral observations, and behavioral choice. behavioral traces are pieces of evidence that indicate behavior occurred, but the actor is not present ( e. g., litter in a parking lot or readings on an electric meter ). behavioral observations involve the direct witnessing of the actor engaging in the behavior ( e. g., watching how close a person sits next to another person ). behavioral choices are when a person selects between two or more options ( e. g., voting behavior, choice of a punishment for another participant ). reaction time. the time between the presentation of a stimulus and an appropriate response can indicate differences between two cognitive processes, and can indicate some things about their nature. for example, if in a search task the reaction times vary proportionally with the number of elements, then it is evident that this cognitive process of searching involves serial instead of parallel processing. psychophysical responses. psychophysical experiments are an old psychological technique, which has been adopted by cognitive psychology. they typically involve making judgments of some physical property, e. g. the loudness of a sound. correlation of subjective scales between individuals can show cognitive or sensory biases as compared to actual physical measurements. some examples include : sameness judgments for colors, tones, textures, etc. threshold differences for colors, tones, textures, etc. eye tracking. this methodology is used to study a variety of cognitive processes, most notably visual perception and language processing. the fixation point of the eyes is linked to an individual ' s focus of attention. thus, by monitoring eye movements, we can study what information is being processed at a given time. eye tracking allows us to study cognitive processes on extremely short time scales. eye movements reflect online decision making during a task, and they provide us with some insight into the ways in which those decisions may be processed. = = = brain imaging = = = brain imaging involves analyzing activity within the brain while performing various tasks. this allows us to link behavior and brain function to help understand how information is processed. different types of imaging techniques vary in their temporal ( time - based ) and spatial ( location - based ) resolution. brain imaging is often used in cognitive neuroscience. single - photon emission computed tomography and positron emission tomography. spect and pet use radioactive isotopes, which are injected into the subject ' s bloodstream
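to make the serial-versus-parallel search argument above concrete, here is a minimal sketch of the standard slope analysis: fit a line to mean reaction time as a function of display set size. the reaction-time numbers are invented for illustration; a slope of tens of milliseconds per added item is usually read as serial search, while a slope near zero suggests parallel processing.

```python
# sketch: reaction time vs. set size in a visual search task (hypothetical data).
import numpy as np

set_size = np.array([2, 4, 8, 16, 32])         # number of display elements
rt_ms = np.array([520, 560, 650, 830, 1190])   # mean reaction times in ms (invented)

slope, intercept = np.polyfit(set_size, rt_ms, 1)
print(f"slope ~ {slope:.1f} ms per item, intercept ~ {intercept:.0f} ms")
# slope >> 0 ms/item: consistent with serial, item-by-item processing
# slope ~  0 ms/item: consistent with parallel processing across the display
```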
##s ( sometimes called pressurized caissons ), which penetrate soft mud, are bottomless boxes sealed at the top and filled with compressed air to keep water and mud out at depth. an airlock allows access to the chamber. workers, called sandhogs in american english, move mud and rock debris ( called muck ) from the edge of the workspace to a water - filled pit, connected by a tube ( called the muck tube ) to the surface. a crane at the surface removes the soil with a clamshell bucket. the water pressure in the tube balances the air pressure, with excess air escaping up the muck tube. the pressurized air flow must be constant to ensure regular air changes for the workers and prevent excessive inflow of mud or water at the base of the caisson. when the caisson hits bedrock, the sandhogs exit through the airlock and fill the box with concrete, forming a solid foundation pier. a pneumatic ( compressed - air ) caisson has the advantage of providing dry working conditions, which is better for placing concrete. it is also well suited for foundations for which other methods might cause settlement of adjacent structures. construction workers who leave the pressurized environment of the caisson must decompress at a rate that allows symptom - free release of inert gases dissolved in the body tissues if they are to avoid decompression sickness, a condition first identified in caisson workers, and originally named " caisson disease " in recognition of the occupational hazard. construction of the brooklyn bridge, which was built with the help of pressurised caissons, resulted in numerous workers being either killed or permanently injured by caisson disease during its construction. barotrauma of the ears, sinus cavities and lungs and dysbaric osteonecrosis are other risks. = = other uses = = caissons have also been used in the installation of hydraulic elevators where a single - stage ram is installed below the ground level. caissons, codenamed phoenix, were an integral part of the mulberry harbours used during the world war ii allied invasion of normandy. = = other meanings = = boat lift caissons : the word caisson is also used as a synonym for the moving trough part of caisson locks, canal lifts and inclines in which boats and ships rest while being lifted from one canal elevation to another ; the water is retained on the inside of the caisson, or excluded from the caisson
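a rough numerical illustration of the pressure balance described above (the depth and water density are assumed example values, not figures from the source): the compressed air in the working chamber has to be held near the hydrostatic gauge pressure rho*g*h at the cutting edge, which is also the pressure the sandhogs must later decompress from.

```python
# sketch: gauge air pressure needed to balance water at the base of a caisson.
# depth and density are assumed example values.
RHO_WATER = 1025.0   # kg/m^3, rough value for silty river/harbour water (assumption)
G = 9.81             # m/s^2
DEPTH_M = 20.0       # assumed depth of the cutting edge below the waterline

gauge_pa = RHO_WATER * G * DEPTH_M   # p = rho * g * h
print(f"required gauge pressure ~ {gauge_pa/1000:.0f} kPa "
      f"(~{gauge_pa/101_325:.1f} atm above atmospheric)")
```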
Question: Baby chicks peck their way out of their shells when they hatch. This activity is an example of which of the following types of behavior?
A) instinctive
B) learned
C) planned
D) social
|
A) instinctive
|
Context:
it seems natural to ask why the universe exists at all. modern physics suggests that the universe can exist all by itself as a self - contained system, without anything external to create or sustain it. but there might not be an absolute answer to why it exists. i argue that any attempt to account for the existence of something rather than nothing must ultimately bottom out in a set of brute facts ; the universe simply is, without ultimate cause or explanation.
? if the latter, an important question is how the internal experiences of others can be measured. self - reports of feelings and beliefs may not be reliable because, even in cases in which there is no apparent incentive for subjects to intentionally deceive in their answers, self - deception or selective memory may affect their responses. then even in the case of accurate self - reports, how can responses be compared across individuals? even if two individuals respond with the same answer on a likert scale, they may be experiencing very different things. other issues in philosophy of psychology are philosophical questions about the nature of mind, brain, and cognition, and are perhaps more commonly thought of as part of cognitive science, or philosophy of mind. for example, are humans rational creatures? is there any sense in which they have free will, and how does that relate to the experience of making choices? philosophy of psychology also closely monitors contemporary work conducted in cognitive neuroscience, psycholinguistics, and artificial intelligence, questioning what they can and cannot explain in psychology. philosophy of psychology is a relatively young field, because psychology only became a discipline of its own in the late 1800s. in particular, neurophilosophy has just recently become its own field with the works of paul churchland and patricia churchland. philosophy of mind, by contrast, has been a well - established discipline since before psychology was a field of study at all. it is concerned with questions about the very nature of mind, the qualities of experience, and particular issues like the debate between dualism and monism. = = = philosophy of social science = = = the philosophy of social science is the study of the logic and method of the social sciences, such as sociology and cultural anthropology. philosophers of social science are concerned with the differences and similarities between the social and the natural sciences, causal relationships between social phenomena, the possible existence of social laws, and the ontological significance of structure and agency. the french philosopher, auguste comte ( 1798 β 1857 ), established the epistemological perspective of positivism in the course in positivist philosophy, a series of texts published between 1830 and 1842. the first three volumes of the course dealt chiefly with the natural sciences already in existence ( geoscience, astronomy, physics, chemistry, biology ), whereas the latter two emphasised the inevitable coming of social science : " sociologie ". for comte, the natural sciences had to necessarily arrive first, before humanity could adequately channel its efforts into the most challenging and complex " queen science " of human society
snake called jormungandr. the norse creation account preserved in gylfaginning ( viii ) states that during the creation of the earth, an impassable sea was placed around it : and jafnharr said : " of the blood, which ran and welled forth freely out of his wounds, they made the sea, when they had formed and made firm the earth together, and laid the sea in a ring round. about her ; and it may well seem a hard thing to most men to cross over it. " the late norse konungs skuggsja, on the other hand, explains earth ' s shape as a sphere : if you take a lighted candle and set it in a room, you may expect it to light up the entire interior, unless something should hinder, though the room be quite large. but if you take an apple and hang it close to the flame, so near that it is heated, the apple will darken nearly half the room or even more. however, if you hang the apple near the wall, it will not get hot ; the candle will light up the whole house ; and the shadow on the wall where the apple hangs will be scarcely half as large as the apple itself. from this you may infer that the earth - circle is round like a ball and not equally near the sun at every point. but where the curved surface lies nearest the sun ' s path, there will the greatest heat be ; and some of the lands that lie continuously under the unbroken rays cannot be inhabited. = = = = east asia = = = = in ancient china, the prevailing belief was that the earth was flat and square, while the heavens were round, an assumption virtually unquestioned until the introduction of european astronomy in the 17th century. the english sinologist cullen emphasizes the point that there was no concept of a round earth in ancient chinese astronomy : chinese thought on the form of the earth remained almost unchanged from early times until the first contacts with modern science through the medium of jesuit missionaries in the seventeenth century. while the heavens were variously described as being like an umbrella covering the earth ( the kai tian theory ), or like a sphere surrounding it ( the hun tian theory ), or as being without substance while the heavenly bodies float freely ( the hsuan yeh theory ), the earth was at all times flat, although perhaps bulging up slightly. the model of an egg was often used by chinese astronomers such as zhang heng ( 78 β 139 ad ) to
the end ( for human scientists ) is nigh? the posit of this discourse is that the majority, if not all, of scientific research will eventually be undertaken by one, or a number of, weak artificial intelligences.
##morphology studies the origin of landscapes. structural geology studies the deformation of rocks to produce mountains and lowlands. resource geology studies how energy resources can be obtained from minerals. environmental geology studies how pollution and contaminants affect soil and rock. mineralogy is the study of minerals and includes the study of mineral formation, crystal structure, hazards associated with minerals, and the physical and chemical properties of minerals. petrology is the study of rocks, including the formation and composition of rocks. petrography is a branch of petrology that studies the typology and classification of rocks. = = earth ' s interior = = plate tectonics, mountain ranges, volcanoes, and earthquakes are geological phenomena that can be explained in terms of physical and chemical processes in the earth ' s crust. beneath the earth ' s crust lies the mantle which is heated by the radioactive decay of heavy elements. the mantle is not quite solid and consists of magma which is in a state of semi - perpetual convection. this convection process causes the lithospheric plates to move, albeit slowly. the resulting process is known as plate tectonics. areas of the crust where new crust is created are called divergent boundaries, those where it is brought back into the earth are convergent boundaries and those where plates slide past each other, but no new lithospheric material is created or destroyed, are referred to as transform ( or conservative ) boundaries. earthquakes result from the movement of the lithospheric plates, and they often occur near convergent boundaries where parts of the crust are forced into the earth as part of subduction. plate tectonics might be thought of as the process by which the earth is resurfaced. as the result of seafloor spreading, new crust and lithosphere is created by the flow of magma from the mantle to the near surface, through fissures, where it cools and solidifies. through subduction, oceanic crust and lithosphere vehemently returns to the convecting mantle. volcanoes result primarily from the melting of subducted crust material. crust material that is forced into the asthenosphere melts, and some portion of the melted material becomes light enough to rise to the surface β giving birth to volcanoes. = = atmospheric science = = atmospheric science initially developed in the late - 19th century as a means to forecast the weather through meteorology, the study of weather. atmospheric chemistry was developed in the 20th century to measure air pollution and expanded in the 1970s in response to
##sphere ( or lithosphere ). earth science can be considered to be a branch of planetary science but with a much older history. = = geology = = geology is broadly the study of earth ' s structure, substance, and processes. geology is largely the study of the lithosphere, or earth ' s surface, including the crust and rocks. it includes the physical characteristics and processes that occur in the lithosphere as well as how they are affected by geothermal energy. it incorporates aspects of chemistry, physics, and biology as elements of geology interact. historical geology is the application of geology to interpret earth history and how it has changed over time. geochemistry studies the chemical components and processes of the earth. geophysics studies the physical properties of the earth. paleontology studies fossilized biological material in the lithosphere. planetary geology studies geoscience as it pertains to extraterrestrial bodies. geomorphology studies the origin of landscapes. structural geology studies the deformation of rocks to produce mountains and lowlands. resource geology studies how energy resources can be obtained from minerals. environmental geology studies how pollution and contaminants affect soil and rock. mineralogy is the study of minerals and includes the study of mineral formation, crystal structure, hazards associated with minerals, and the physical and chemical properties of minerals. petrology is the study of rocks, including the formation and composition of rocks. petrography is a branch of petrology that studies the typology and classification of rocks. = = earth ' s interior = = plate tectonics, mountain ranges, volcanoes, and earthquakes are geological phenomena that can be explained in terms of physical and chemical processes in the earth ' s crust. beneath the earth ' s crust lies the mantle which is heated by the radioactive decay of heavy elements. the mantle is not quite solid and consists of magma which is in a state of semi - perpetual convection. this convection process causes the lithospheric plates to move, albeit slowly. the resulting process is known as plate tectonics. areas of the crust where new crust is created are called divergent boundaries, those where it is brought back into the earth are convergent boundaries and those where plates slide past each other, but no new lithospheric material is created or destroyed, are referred to as transform ( or conservative ) boundaries. earthquakes result from the movement of the lithospheric plates, and they often occur near convergent boundaries where parts of the crust are forced into the earth as
there is an odd tension in electroweak physics. perturbation theory is extremely successful. at the same time, fundamental field theory gives manifold reasons why this should not be the case. this tension is resolved by the fröhlich-morchio-strocchi mechanism. however, the legacy of this work goes far beyond the resolution of this tension, and may usher in a fundamentally and ontologically different perspective on elementary particles, and even quantum gravity.
the hun tian theory ), or as being without substance while the heavenly bodies float freely ( the hsuan yeh theory ), the earth was at all times flat, although perhaps bulging up slightly. the model of an egg was often used by chinese astronomers such as zhang heng ( 78 β 139 ad ) to describe the heavens as spherical : the heavens are like a hen ' s egg and as round as a crossbow bullet ; the earth is like the yolk of the egg, and lies in the centre. this analogy with a curved egg led some modern historians, notably joseph needham, to conjecture that chinese astronomers were, after all, aware of the earth ' s sphericity. the egg reference, however, was rather meant to clarify the relative position of the flat earth to the heavens : in a passage of zhang heng ' s cosmogony not translated by needham, zhang himself says : " heaven takes its body from the yang, so it is round and in motion. earth takes its body from the yin, so it is flat and quiescent ". the point of the egg analogy is simply to stress that the earth is completely enclosed by heaven, rather than merely covered from above as the kai tian describes. chinese astronomers, many of them brilliant men by any standards, continued to think in flat - earth terms until the seventeenth century ; this surprising fact might be the starting - point for a re - examination of the apparent facility with which the idea of a spherical earth found acceptance in fifth - century bc greece. further examples cited by needham supposed to demonstrate dissenting voices from the ancient chinese consensus actually refer without exception to the earth being square, not to it being flat. accordingly, the 13th - century scholar li ye, who argued that the movements of the round heaven would be hindered by a square earth, did not advocate a spherical earth, but rather that its edge should be rounded off so as to be circular. however, needham disagrees, affirming that li ye believed the earth to be spherical, similar in shape to the heavens but much smaller. this was preconceived by the 4th - century scholar yu xi, who argued for the infinity of outer space surrounding the earth and that the latter could be either square or round, in accordance to the shape of the heavens. when chinese geographers of the 17th century, influenced by european cartography and astronomy, showed the earth as a sphere that could be circumnavigated by sailing around the globe, they
##rozoic eon that began 539 million years ago being subdivided into paleozoic, mesozoic, and cenozoic eras. these three eras together comprise eleven periods ( cambrian, ordovician, silurian, devonian, carboniferous, permian, triassic, jurassic, cretaceous, tertiary, and quaternary ). the similarities among all known present - day species indicate that they have diverged through the process of evolution from their common ancestor. biologists regard the ubiquity of the genetic code as evidence of universal common descent for all bacteria, archaea, and eukaryotes. microbial mats of coexisting bacteria and archaea were the dominant form of life in the early archean eon and many of the major steps in early evolution are thought to have taken place in this environment. the earliest evidence of eukaryotes dates from 1. 85 billion years ago, and while they may have been present earlier, their diversification accelerated when they started using oxygen in their metabolism. later, around 1. 7 billion years ago, multicellular organisms began to appear, with differentiated cells performing specialised functions. algae - like multicellular land plants are dated back to about 1 billion years ago, although evidence suggests that microorganisms formed the earliest terrestrial ecosystems, at least 2. 7 billion years ago. microorganisms are thought to have paved the way for the inception of land plants in the ordovician period. land plants were so successful that they are thought to have contributed to the late devonian extinction event. ediacara biota appear during the ediacaran period, while vertebrates, along with most other modern phyla originated about 525 million years ago during the cambrian explosion. during the permian period, synapsids, including the ancestors of mammals, dominated the land, but most of this group became extinct in the permian β triassic extinction event 252 million years ago. during the recovery from this catastrophe, archosaurs became the most abundant land vertebrates ; one archosaur group, the dinosaurs, dominated the jurassic and cretaceous periods. after the cretaceous β paleogene extinction event 66 million years ago killed off the non - avian dinosaurs, mammals increased rapidly in size and diversity. such mass extinctions may have accelerated evolution by providing opportunities for new groups of organisms to diversify. = = diversity = = = = = bacteria and archaea = = = bacteria are a type of cell that constitute a large domain of prokar
antipodes, that is to say, men on the opposite side of the earth, where the sun rises when it sets to us, men who walk with their feet opposite ours that is on no ground credible. and, indeed, it is not affirmed that this has been learned by historical knowledge, but by scientific conjecture, on the ground that the earth is suspended within the concavity of the sky, and that it has as much room on the one side of it as on the other : hence they say that the part that is beneath must also be inhabited. but they do not remark that, although it be supposed or scientifically demonstrated that the world is of a round and spherical form, yet it does not follow that the other side of the earth is bare of water ; nor even, though it be bare, does it immediately follow that it is peopled. for scripture, which proves the truth of its historical statements by the accomplishment of its prophecies, gives no false information ; and it is too absurd to say, that some men might have taken ship and traversed the whole wide ocean, and crossed from this side of the world to the other, and that thus even the inhabitants of that distant region are descended from that one first man. some historians do not view augustine ' s scriptural commentaries as endorsing any particular cosmological model, endorsing instead the view that augustine shared the common view of his contemporaries that the earth is spherical, in line with his endorsement of science in de genesi ad litteram. c. p. e. nothaft, responding to writers like leo ferrari who described augustine as endorsing a flat earth, says that "... other recent writers on the subject treat augustine ' s acceptance of the earth ' s spherical shape as a well - established fact ". while it always remained a minority view, from the mid - fourth to the seventh centuries ad, the flat - earth view experienced a revival, around the time when diodorus of tarsus founded the exegetical school known as the school of antioch, which sought to counter what he saw as the pagan cosmology of the greeks with a return to the traditional cosmology. the writings of diodorus did not survive, but are reconstructed from later criticism. this revival primarily took place in the east syriac world ( with little influence on the latin west ) where it gained proponents such as ephrem the syrian and in the popular hexaemeral homilies of jacob of serugh. chrys
Question: Which conclusion can be made about earthworms because they do not have an internal skeleton?
A) They are invertebrates.
B) They have radial symmetry.
C) They are made of one segment.
D) They have an open circulatory system.
|
A) They are invertebrates.
|
Context:
the structural components of cells. as a by - product of photosynthesis, plants release oxygen into the atmosphere, a gas that is required by nearly all living things to carry out cellular respiration. in addition, they are influential in the global carbon and water cycles and plant roots bind and stabilise soils, preventing soil erosion. plants are crucial to the future of human society as they provide food, oxygen, biochemicals, and products for people, as well as creating and preserving soil. historically, all living things were classified as either animals or plants and botany covered the study of all organisms not considered animals. botanists examine both the internal functions and processes within plant organelles, cells, tissues, whole plants, plant populations and plant communities. at each of these levels, a botanist may be concerned with the classification ( taxonomy ), phylogeny and evolution, structure ( anatomy and morphology ), or function ( physiology ) of plant life. the strictest definition of " plant " includes only the " land plants " or embryophytes, which include seed plants ( gymnosperms, including the pines, and flowering plants ) and the free - sporing cryptogams including ferns, clubmosses, liverworts, hornworts and mosses. embryophytes are multicellular eukaryotes descended from an ancestor that obtained its energy from sunlight by photosynthesis. they have life cycles with alternating haploid and diploid phases. the sexual haploid phase of embryophytes, known as the gametophyte, nurtures the developing diploid embryo sporophyte within its tissues for at least part of its life, even in the seed plants, where the gametophyte itself is nurtured by its parent sporophyte. other groups of organisms that were previously studied by botanists include bacteria ( now studied in bacteriology ), fungi ( mycology ) β including lichen - forming fungi ( lichenology ), non - chlorophyte algae ( phycology ), and viruses ( virology ). however, attention is still given to these groups by botanists, and fungi ( including lichens ) and photosynthetic protists are usually covered in introductory botany courses. palaeobotanists study ancient plants in the fossil record to provide information about the evolutionary history of plants. cyanobacteria, the first oxygen - releasing photosynthetic organisms on earth, are thought to have given rise to the
the internal functions and processes within plant organelles, cells, tissues, whole plants, plant populations and plant communities. at each of these levels, a botanist may be concerned with the classification ( taxonomy ), phylogeny and evolution, structure ( anatomy and morphology ), or function ( physiology ) of plant life. the strictest definition of " plant " includes only the " land plants " or embryophytes, which include seed plants ( gymnosperms, including the pines, and flowering plants ) and the free - sporing cryptogams including ferns, clubmosses, liverworts, hornworts and mosses. embryophytes are multicellular eukaryotes descended from an ancestor that obtained its energy from sunlight by photosynthesis. they have life cycles with alternating haploid and diploid phases. the sexual haploid phase of embryophytes, known as the gametophyte, nurtures the developing diploid embryo sporophyte within its tissues for at least part of its life, even in the seed plants, where the gametophyte itself is nurtured by its parent sporophyte. other groups of organisms that were previously studied by botanists include bacteria ( now studied in bacteriology ), fungi ( mycology ) β including lichen - forming fungi ( lichenology ), non - chlorophyte algae ( phycology ), and viruses ( virology ). however, attention is still given to these groups by botanists, and fungi ( including lichens ) and photosynthetic protists are usually covered in introductory botany courses. palaeobotanists study ancient plants in the fossil record to provide information about the evolutionary history of plants. cyanobacteria, the first oxygen - releasing photosynthetic organisms on earth, are thought to have given rise to the ancestor of plants by entering into an endosymbiotic relationship with an early eukaryote, ultimately becoming the chloroplasts in plant cells. the new photosynthetic plants ( along with their algal relatives ) accelerated the rise in atmospheric oxygen started by the cyanobacteria, changing the ancient oxygen - free, reducing, atmosphere to one in which free oxygen has been abundant for more than 2 billion years. among the important botanical questions of the 21st century are the role of plants as primary producers in the global cycling of life ' s basic ingredients : energy, carbon, oxygen, nitrogen and water, and ways
soil erosion. plants are crucial to the future of human society as they provide food, oxygen, biochemicals, and products for people, as well as creating and preserving soil. historically, all living things were classified as either animals or plants and botany covered the study of all organisms not considered animals. botanists examine both the internal functions and processes within plant organelles, cells, tissues, whole plants, plant populations and plant communities. at each of these levels, a botanist may be concerned with the classification ( taxonomy ), phylogeny and evolution, structure ( anatomy and morphology ), or function ( physiology ) of plant life. the strictest definition of " plant " includes only the " land plants " or embryophytes, which include seed plants ( gymnosperms, including the pines, and flowering plants ) and the free - sporing cryptogams including ferns, clubmosses, liverworts, hornworts and mosses. embryophytes are multicellular eukaryotes descended from an ancestor that obtained its energy from sunlight by photosynthesis. they have life cycles with alternating haploid and diploid phases. the sexual haploid phase of embryophytes, known as the gametophyte, nurtures the developing diploid embryo sporophyte within its tissues for at least part of its life, even in the seed plants, where the gametophyte itself is nurtured by its parent sporophyte. other groups of organisms that were previously studied by botanists include bacteria ( now studied in bacteriology ), fungi ( mycology ) β including lichen - forming fungi ( lichenology ), non - chlorophyte algae ( phycology ), and viruses ( virology ). however, attention is still given to these groups by botanists, and fungi ( including lichens ) and photosynthetic protists are usually covered in introductory botany courses. palaeobotanists study ancient plants in the fossil record to provide information about the evolutionary history of plants. cyanobacteria, the first oxygen - releasing photosynthetic organisms on earth, are thought to have given rise to the ancestor of plants by entering into an endosymbiotic relationship with an early eukaryote, ultimately becoming the chloroplasts in plant cells. the new photosynthetic plants ( along with their algal relatives ) accelerated the rise in atmospheric oxygen started by the cyanobacteria, changing the
smallest genomes among flowering plants. arabidopsis was the first plant to have its genome sequenced, in 2000. the sequencing of some other relatively small genomes, of rice ( oryza sativa ) and brachypodium distachyon, has made them important model species for understanding the genetics, cellular and molecular biology of cereals, grasses and monocots generally. model plants such as arabidopsis thaliana are used for studying the molecular biology of plant cells and the chloroplast. ideally, these organisms have small genomes that are well known or completely sequenced, small stature and short generation times. corn has been used to study mechanisms of photosynthesis and phloem loading of sugar in c4 plants. the single celled green alga chlamydomonas reinhardtii, while not an embryophyte itself, contains a green - pigmented chloroplast related to that of land plants, making it useful for study. a red alga cyanidioschyzon merolae has also been used to study some basic chloroplast functions. spinach, peas, soybeans and a moss physcomitrella patens are commonly used to study plant cell biology. agrobacterium tumefaciens, a soil rhizosphere bacterium, can attach to plant cells and infect them with a callus - inducing ti plasmid by horizontal gene transfer, causing a callus infection called crown gall disease. schell and van montagu ( 1977 ) hypothesised that the ti plasmid could be a natural vector for introducing the nif gene responsible for nitrogen fixation in the root nodules of legumes and other plant species. today, genetic modification of the ti plasmid is one of the main techniques for introduction of transgenes to plants and the creation of genetically modified crops. = = = epigenetics = = = epigenetics is the study of heritable changes in gene function that cannot be explained by changes in the underlying dna sequence but cause the organism ' s genes to behave ( or " express themselves " ) differently. one example of epigenetic change is the marking of the genes by dna methylation which determines whether they will be expressed or not. gene expression can also be controlled by repressor proteins that attach to silencer regions of the dna and prevent that region of the dna code from being expressed. epigenetic marks may be added
venus flytrap and bladderworts, and the pollinia of orchids. the hypothesis that plant growth and development is coordinated by plant hormones or plant growth regulators first emerged in the late 19th century. darwin experimented on the movements of plant shoots and roots towards light and gravity, and concluded " it is hardly an exaggeration to say that the tip of the radicle.. acts like the brain of one of the lower animals.. directing the several movements ". about the same time, the role of auxins ( from the greek auxein, to grow ) in control of plant growth was first outlined by the dutch scientist frits went. the first known auxin, indole - 3 - acetic acid ( iaa ), which promotes cell growth, was only isolated from plants about 50 years later. this compound mediates the tropic responses of shoots and roots towards light and gravity. the finding in 1939 that plant callus could be maintained in culture containing iaa, followed by the observation in 1947 that it could be induced to form roots and shoots by controlling the concentration of growth hormones were key steps in the development of plant biotechnology and genetic modification. cytokinins are a class of plant hormones named for their control of cell division ( especially cytokinesis ). the natural cytokinin zeatin was discovered in corn, zea mays, and is a derivative of the purine adenine. zeatin is produced in roots and transported to shoots in the xylem where it promotes cell division, bud development, and the greening of chloroplasts. the gibberelins, such as gibberelic acid are diterpenes synthesised from acetyl coa via the mevalonate pathway. they are involved in the promotion of germination and dormancy - breaking in seeds, in regulation of plant height by controlling stem elongation and the control of flowering. abscisic acid ( aba ) occurs in all land plants except liverworts, and is synthesised from carotenoids in the chloroplasts and other plastids. it inhibits cell division, promotes seed maturation, and dormancy, and promotes stomatal closure. it was so named because it was originally thought to control abscission. ethylene is a gaseous hormone that is produced in all higher plant tissues from methionine. it is now known to be the hormone that stimulates or regulates fruit ripening and abscission,
##aggeration to say that the tip of the radicle.. acts like the brain of one of the lower animals.. directing the several movements ". about the same time, the role of auxins ( from the greek auxein, to grow ) in control of plant growth was first outlined by the dutch scientist frits went. the first known auxin, indole - 3 - acetic acid ( iaa ), which promotes cell growth, was only isolated from plants about 50 years later. this compound mediates the tropic responses of shoots and roots towards light and gravity. the finding in 1939 that plant callus could be maintained in culture containing iaa, followed by the observation in 1947 that it could be induced to form roots and shoots by controlling the concentration of growth hormones were key steps in the development of plant biotechnology and genetic modification. cytokinins are a class of plant hormones named for their control of cell division ( especially cytokinesis ). the natural cytokinin zeatin was discovered in corn, zea mays, and is a derivative of the purine adenine. zeatin is produced in roots and transported to shoots in the xylem where it promotes cell division, bud development, and the greening of chloroplasts. the gibberelins, such as gibberelic acid are diterpenes synthesised from acetyl coa via the mevalonate pathway. they are involved in the promotion of germination and dormancy - breaking in seeds, in regulation of plant height by controlling stem elongation and the control of flowering. abscisic acid ( aba ) occurs in all land plants except liverworts, and is synthesised from carotenoids in the chloroplasts and other plastids. it inhibits cell division, promotes seed maturation, and dormancy, and promotes stomatal closure. it was so named because it was originally thought to control abscission. ethylene is a gaseous hormone that is produced in all higher plant tissues from methionine. it is now known to be the hormone that stimulates or regulates fruit ripening and abscission, and it, or the synthetic growth regulator ethephon which is rapidly metabolised to produce ethylene, are used on industrial scale to promote ripening of cotton, pineapples and other climacteric crops. another class of phytohormones is the jasmonates, first isolated
or lipids ( elaioplasts ). uniquely, streptophyte cells and those of the green algal order trentepohliales divide by construction of a phragmoplast as a template for building a cell plate late in cell division. the bodies of vascular plants including clubmosses, ferns and seed plants ( gymnosperms and angiosperms ) generally have aerial and subterranean subsystems. the shoots consist of stems bearing green photosynthesising leaves and reproductive structures. the underground vascularised roots bear root hairs at their tips and generally lack chlorophyll. non - vascular plants, the liverworts, hornworts and mosses do not produce ground - penetrating vascular roots and most of the plant participates in photosynthesis. the sporophyte generation is nonphotosynthetic in liverworts but may be able to contribute part of its energy needs by photosynthesis in mosses and hornworts. the root system and the shoot system are interdependent β the usually nonphotosynthetic root system depends on the shoot system for food, and the usually photosynthetic shoot system depends on water and minerals from the root system. cells in each system are capable of creating cells of the other and producing adventitious shoots or roots. stolons and tubers are examples of shoots that can grow roots. roots that spread out close to the surface, such as those of willows, can produce shoots and ultimately new plants. in the event that one of the systems is lost, the other can often regrow it. in fact it is possible to grow an entire plant from a single leaf, as is the case with plants in streptocarpus sect. saintpaulia, or even a single cell β which can dedifferentiate into a callus ( a mass of unspecialised cells ) that can grow into a new plant. in vascular plants, the xylem and phloem are the conductive tissues that transport resources between shoots and roots. roots are often adapted to store food such as sugars or starch, as in sugar beets and carrots. stems mainly provide support to the leaves and reproductive structures, but can store water in succulent plants such as cacti, food as in potato tubers, or reproduce vegetatively as in the stolons of strawberry plants or in the process of layering. leaves gather sunlight and carry out photosyn
the strictest definition of " plant " includes only the " land plants " or embryophytes, which include seed plants ( gymnosperms, including the pines, and flowering plants ) and the free - sporing cryptogams including ferns, clubmosses, liverworts, hornworts and mosses. embryophytes are multicellular eukaryotes descended from an ancestor that obtained its energy from sunlight by photosynthesis. they have life cycles with alternating haploid and diploid phases. the sexual haploid phase of embryophytes, known as the gametophyte, nurtures the developing diploid embryo sporophyte within its tissues for at least part of its life, even in the seed plants, where the gametophyte itself is nurtured by its parent sporophyte. other groups of organisms that were previously studied by botanists include bacteria ( now studied in bacteriology ), fungi ( mycology ) β including lichen - forming fungi ( lichenology ), non - chlorophyte algae ( phycology ), and viruses ( virology ). however, attention is still given to these groups by botanists, and fungi ( including lichens ) and photosynthetic protists are usually covered in introductory botany courses. palaeobotanists study ancient plants in the fossil record to provide information about the evolutionary history of plants. cyanobacteria, the first oxygen - releasing photosynthetic organisms on earth, are thought to have given rise to the ancestor of plants by entering into an endosymbiotic relationship with an early eukaryote, ultimately becoming the chloroplasts in plant cells. the new photosynthetic plants ( along with their algal relatives ) accelerated the rise in atmospheric oxygen started by the cyanobacteria, changing the ancient oxygen - free, reducing, atmosphere to one in which free oxygen has been abundant for more than 2 billion years. among the important botanical questions of the 21st century are the role of plants as primary producers in the global cycling of life ' s basic ingredients : energy, carbon, oxygen, nitrogen and water, and ways that our plant stewardship can help address the global environmental issues of resource management, conservation, human food security, biologically invasive organisms, carbon sequestration, climate change, and sustainability. = = = human nutrition = = = virtually all staple foods come either directly from primary production by plants, or indirectly from animals that
sugar ) for export to the rest of the plant. unlike in animals ( which lack chloroplasts ), plants and their eukaryote relatives have delegated many biochemical roles to their chloroplasts, including synthesising all their fatty acids, and most amino acids. the fatty acids that chloroplasts make are used for many things, such as providing material to build cell membranes out of and making the polymer cutin which is found in the plant cuticle that protects land plants from drying out. plants synthesise a number of unique polymers like the polysaccharide molecules cellulose, pectin and xyloglucan from which the land plant cell wall is constructed. vascular land plants make lignin, a polymer used to strengthen the secondary cell walls of xylem tracheids and vessels to keep them from collapsing when a plant sucks water through them under water stress. lignin is also used in other cell types like sclerenchyma fibres that provide structural support for a plant and is a major constituent of wood. sporopollenin is a chemically resistant polymer found in the outer cell walls of spores and pollen of land plants responsible for the survival of early land plant spores and the pollen of seed plants in the fossil record. it is widely regarded as a marker for the start of land plant evolution during the ordovician period. the concentration of carbon dioxide in the atmosphere today is much lower than it was when plants emerged onto land during the ordovician and silurian periods. many monocots like maize and the pineapple and some dicots like the asteraceae have since independently evolved pathways like crassulacean acid metabolism and the c4 carbon fixation pathway for photosynthesis which avoid the losses resulting from photorespiration in the more common c3 carbon fixation pathway. these biochemical strategies are unique to land plants. = = = medicine and materials = = = phytochemistry is a branch of plant biochemistry primarily concerned with the chemical substances produced by plants during secondary metabolism. some of these compounds are toxins such as the alkaloid coniine from hemlock. others, such as the essential oils peppermint oil and lemon oil are useful for their aroma, as flavourings and spices ( e. g., capsaicin ), and in medicine as pharmaceuticals as in opium from opium poppies. many medicinal and recreational drugs, such as tetrahydrocannabino
frits went. the first known auxin, indole - 3 - acetic acid ( iaa ), which promotes cell growth, was only isolated from plants about 50 years later. this compound mediates the tropic responses of shoots and roots towards light and gravity. the finding in 1939 that plant callus could be maintained in culture containing iaa, followed by the observation in 1947 that it could be induced to form roots and shoots by controlling the concentration of growth hormones were key steps in the development of plant biotechnology and genetic modification. cytokinins are a class of plant hormones named for their control of cell division ( especially cytokinesis ). the natural cytokinin zeatin was discovered in corn, zea mays, and is a derivative of the purine adenine. zeatin is produced in roots and transported to shoots in the xylem where it promotes cell division, bud development, and the greening of chloroplasts. the gibberelins, such as gibberelic acid are diterpenes synthesised from acetyl coa via the mevalonate pathway. they are involved in the promotion of germination and dormancy - breaking in seeds, in regulation of plant height by controlling stem elongation and the control of flowering. abscisic acid ( aba ) occurs in all land plants except liverworts, and is synthesised from carotenoids in the chloroplasts and other plastids. it inhibits cell division, promotes seed maturation, and dormancy, and promotes stomatal closure. it was so named because it was originally thought to control abscission. ethylene is a gaseous hormone that is produced in all higher plant tissues from methionine. it is now known to be the hormone that stimulates or regulates fruit ripening and abscission, and it, or the synthetic growth regulator ethephon which is rapidly metabolised to produce ethylene, are used on industrial scale to promote ripening of cotton, pineapples and other climacteric crops. another class of phytohormones is the jasmonates, first isolated from the oil of jasminum grandiflorum which regulates wound responses in plants by unblocking the expression of genes required in the systemic acquired resistance response to pathogen attack. in addition to being the primary energy source for plants, light functions as a signalling device, providing information to the plant, such as how
Question: The smallest unit of a plant that can perform all of the processes of life is the
A) leaf.
B) cell.
C) tissue.
D) root.
|
B) cell.
|
Context:
an electron inside liquid helium forms a bubble of 17 Å in radius. in an external magnetic field, the two-level system of a spin-1/2 electron is ideal for the implementation of a qubit for quantum computing. the electron spin is well isolated from other thermal reservoirs so that the qubit should have a very long coherence time. by confining a chain of single-electron bubbles in a linear rf quadrupole trap, a multi-bit quantum register can be implemented. all spins in the register can be initialized to the ground state either by establishing thermal equilibrium at a temperature around 0.1 k and at a magnetic field of 1 t or by sorting the bubbles to be loaded into the trap with magnetic separation. schemes are designed to address individual spins and to do two-qubit cnot operations between neighboring spins. the final readout can be carried out through a measurement similar to the stern-gerlach experiment.
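a back-of-the-envelope check of the thermal initialization step described above. the constants are standard values and the free-electron g-factor is assumed for the bubble-bound electron; the sketch compares the zeeman splitting at 1 t with the thermal energy at 0.1 k to estimate how completely the spins settle into the ground state.

```python
# sketch: zeeman splitting vs. thermal energy for spin initialization at 1 T, 0.1 K.
# the free-electron g-factor is an assumption for the bubble-bound electron.
import math

MU_B = 9.274e-24   # bohr magneton, J/T
K_B = 1.381e-23    # boltzmann constant, J/K
G_E = 2.002        # free-electron g-factor (assumed)

B_FIELD = 1.0      # tesla, as quoted in the passage
TEMP = 0.1         # kelvin, as quoted in the passage

delta_e = G_E * MU_B * B_FIELD              # zeeman splitting between spin states
ratio = delta_e / (K_B * TEMP)              # splitting in units of k_B * T
p_ground = 1.0 / (1.0 + math.exp(-ratio))   # boltzmann weight of the lower state

print(f"zeeman splitting ~ {delta_e:.2e} J  (~{ratio:.1f} k_B T)")
print(f"ground-state population ~ {p_ground:.6f}")
```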
in this article i explain in detail a method for making small amounts of liquid oxygen in the classroom if there is no access to a cylinder of compressed oxygen gas. i also discuss two methods for identifying the fact that it is liquid oxygen as opposed to liquid nitrogen.
of substances dissolved in aqueous solution ( that is, in water ). less familiar phases include plasmas, bose β einstein condensates and fermionic condensates and the paramagnetic and ferromagnetic phases of magnetic materials. while most familiar phases deal with three - dimensional systems, it is also possible to define analogs in two - dimensional systems, which has received attention for its relevance to systems in biology. = = = bonding = = = atoms sticking together in molecules or crystals are said to be bonded with one another. a chemical bond may be visualized as the multipole balance between the positive charges in the nuclei and the negative charges oscillating about them. more than simple attraction and repulsion, the energies and distributions characterize the availability of an electron to bond to another atom. the chemical bond can be a covalent bond, an ionic bond, a hydrogen bond or just because of van der waals force. each of these kinds of bonds is ascribed to some potential. these potentials create the interactions which hold atoms together in molecules or crystals. in many simple compounds, valence bond theory, the valence shell electron pair repulsion model ( vsepr ), and the concept of oxidation number can be used to explain molecular structure and composition. an ionic bond is formed when a metal loses one or more of its electrons, becoming a positively charged cation, and the electrons are then gained by the non - metal atom, becoming a negatively charged anion. the two oppositely charged ions attract one another, and the ionic bond is the electrostatic force of attraction between them. for example, sodium ( na ), a metal, loses one electron to become an na + cation while chlorine ( cl ), a non - metal, gains this electron to become clβ. the ions are held together due to electrostatic attraction, and that compound sodium chloride ( nacl ), or common table salt, is formed. in a covalent bond, one or more pairs of valence electrons are shared by two atoms : the resulting electrically neutral group of bonded atoms is termed a molecule. atoms will share valence electrons in such a way as to create a noble gas electron configuration ( eight electrons in their outermost shell ) for each atom. atoms that tend to combine in such a way that they each have eight electrons in their valence shell are said to follow the octet rule. however, some elements like hydrogen and lithium need only two electrons in their outermost shell to
##ulating the liquid below from the cold air above. water has the capacity to absorb energy, giving it a higher specific heat capacity than other solvents such as ethanol. thus, a large amount of energy is needed to break the hydrogen bonds between water molecules to convert liquid water into water vapor. as a molecule, water is not completely stable as each water molecule continuously dissociates into hydrogen and hydroxyl ions before reforming into a water molecule again. in pure water, the number of hydrogen ions balances ( or equals ) the number of hydroxyl ions, resulting in a ph that is neutral. = = = organic compounds = = = organic compounds are molecules that contain carbon bonded to another element such as hydrogen. with the exception of water, nearly all the molecules that make up each organism contain carbon. carbon can form covalent bonds with up to four other atoms, enabling it to form diverse, large, and complex molecules. for example, a single carbon atom can form four single covalent bonds such as in methane, two double covalent bonds such as in carbon dioxide ( co2 ), or a triple covalent bond such as in carbon monoxide ( co ). moreover, carbon can form very long chains of interconnecting carbon β carbon bonds such as octane or ring - like structures such as glucose. the simplest form of an organic molecule is the hydrocarbon, which is a large family of organic compounds that are composed of hydrogen atoms bonded to a chain of carbon atoms. a hydrocarbon backbone can be substituted by other elements such as oxygen ( o ), hydrogen ( h ), phosphorus ( p ), and sulfur ( s ), which can change the chemical behavior of that compound. groups of atoms that contain these elements ( o -, h -, p -, and s - ) and are bonded to a central carbon atom or skeleton are called functional groups. there are six prominent functional groups that can be found in organisms : amino group, carboxyl group, carbonyl group, hydroxyl group, phosphate group, and sulfhydryl group. in 1953, the miller β urey experiment showed that organic compounds could be synthesized abiotically within a closed system mimicking the conditions of early earth, thus suggesting that complex organic molecules could have arisen spontaneously in early earth ( see abiogenesis ). = = = macromolecules = = = macromolecules are large molecules made up of smaller subunits or monomers. monomers include sugars, amino acids,
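a tiny numerical restatement of the neutrality point above: when the hydrogen-ion and hydroxyl-ion concentrations are equal, each is 1e-7 mol/l and the ph comes out as 7. the ion product kw = 1.0e-14 at 25 °c is a standard textbook value and an added assumption here, not something stated in the passage.

```python
# sketch: ph of pure water when [H+] equals [OH-].
# kw = 1.0e-14 (mol/L)^2 at 25 degC is an assumed standard value.
import math

KW = 1.0e-14
h_plus = math.sqrt(KW)      # [H+] = [OH-]  =>  [H+]**2 = Kw
ph = -math.log10(h_plus)

print(f"[H+] = {h_plus:.1e} mol/L  ->  pH = {ph:.1f}")   # pH = 7.0
```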
passage of carbon dioxide as aluminum and glass. = = = ceramics and glasses = = = another application of materials science is the study of ceramics and glasses, typically the most brittle materials with industrial relevance. many ceramics and glasses exhibit covalent or ionic - covalent bonding with sio2 ( silica ) as a fundamental building block. ceramics β not to be confused with raw, unfired clay β are usually seen in crystalline form. the vast majority of commercial glasses contain a metal oxide fused with silica. at the high temperatures used to prepare glass, the material is a viscous liquid which solidifies into a disordered state upon cooling. windowpanes and eyeglasses are important examples. fibers of glass are also used for long - range telecommunication and optical transmission. scratch resistant corning gorilla glass is a well - known example of the application of materials science to drastically improve the properties of common components. engineering ceramics are known for their stiffness and stability under high temperatures, compression and electrical stress. alumina, silicon carbide, and tungsten carbide are made from a fine powder of their constituents in a process of sintering with a binder. hot pressing provides higher density material. chemical vapor deposition can place a film of a ceramic on another material. cermets are ceramic particles containing some metals. the wear resistance of tools is derived from cemented carbides with the metal phase of cobalt and nickel typically added to modify properties. ceramics can be significantly strengthened for engineering applications using the principle of crack deflection. this process involves the strategic addition of second - phase particles within a ceramic matrix, optimizing their shape, size, and distribution to direct and control crack propagation. this approach enhances fracture toughness, paving the way for the creation of advanced, high - performance ceramics in various industries. = = = composites = = = another application of materials science in industry is making composite materials. these are structured materials composed of two or more macroscopic phases. applications range from structural elements such as steel - reinforced concrete, to the thermal insulating tiles, which play a key and integral role in nasa ' s space shuttle thermal protection system, which is used to protect the surface of the shuttle from the heat of re - entry into the earth ' s atmosphere. one example is reinforced carbon - carbon ( rcc ), the light gray material, which withstands re - entry temperatures up to 1, 510 Β°c ( 2, 750 Β°f ) and protects the space shuttle ' s wing leading edges and nose cap
based on 1 / 10 and 1 / 100 weight percentages of the carbon and other alloying elements they contain. thus, the extracting and purifying methods used to extract iron in a blast furnace can affect the quality of steel that is produced. solid materials are generally grouped into three basic classifications : ceramics, metals, and polymers. this broad classification is based on the empirical makeup and atomic structure of the solid materials, and most solids fall into one of these broad categories. an item that is often made from each of these materials types is the beverage container. the material types used for beverage containers accordingly provide different advantages and disadvantages, depending on the material used. ceramic ( glass ) containers are optically transparent, impervious to the passage of carbon dioxide, relatively inexpensive, and are easily recycled, but are also heavy and fracture easily. metal ( aluminum alloy ) is relatively strong, is a good barrier to the diffusion of carbon dioxide, and is easily recycled. however, the cans are opaque, expensive to produce, and are easily dented and punctured. polymers ( polyethylene plastic ) are relatively strong, can be optically transparent, are inexpensive and lightweight, and can be recyclable, but are not as impervious to the passage of carbon dioxide as aluminum and glass. = = = ceramics and glasses = = = another application of materials science is the study of ceramics and glasses, typically the most brittle materials with industrial relevance. many ceramics and glasses exhibit covalent or ionic - covalent bonding with sio2 ( silica ) as a fundamental building block. ceramics β not to be confused with raw, unfired clay β are usually seen in crystalline form. the vast majority of commercial glasses contain a metal oxide fused with silica. at the high temperatures used to prepare glass, the material is a viscous liquid which solidifies into a disordered state upon cooling. windowpanes and eyeglasses are important examples. fibers of glass are also used for long - range telecommunication and optical transmission. scratch resistant corning gorilla glass is a well - known example of the application of materials science to drastically improve the properties of common components. engineering ceramics are known for their stiffness and stability under high temperatures, compression and electrical stress. alumina, silicon carbide, and tungsten carbide are made from a fine powder of their constituents in a process of sintering with a binder. hot pressing provides higher density material. chemical vapor deposition can place a film of a ceramic on another
single carbon atom can form four single covalent bonds such as in methane, two double covalent bonds such as in carbon dioxide ( co2 ), or a triple covalent bond such as in carbon monoxide ( co ). moreover, carbon can form very long chains of interconnecting carbon β carbon bonds such as octane or ring - like structures such as glucose. the simplest form of an organic molecule is the hydrocarbon, which is a large family of organic compounds that are composed of hydrogen atoms bonded to a chain of carbon atoms. a hydrocarbon backbone can be substituted by other elements such as oxygen ( o ), hydrogen ( h ), phosphorus ( p ), and sulfur ( s ), which can change the chemical behavior of that compound. groups of atoms that contain these elements ( o -, h -, p -, and s - ) and are bonded to a central carbon atom or skeleton are called functional groups. there are six prominent functional groups that can be found in organisms : amino group, carboxyl group, carbonyl group, hydroxyl group, phosphate group, and sulfhydryl group. in 1953, the miller β urey experiment showed that organic compounds could be synthesized abiotically within a closed system mimicking the conditions of early earth, thus suggesting that complex organic molecules could have arisen spontaneously in early earth ( see abiogenesis ). = = = macromolecules = = = macromolecules are large molecules made up of smaller subunits or monomers. monomers include sugars, amino acids, and nucleotides. carbohydrates include monomers and polymers of sugars. lipids are the only class of macromolecules that are not made up of polymers. they include steroids, phospholipids, and fats, largely nonpolar and hydrophobic ( water - repelling ) substances. proteins are the most diverse of the macromolecules. they include enzymes, transport proteins, large signaling molecules, antibodies, and structural proteins. the basic unit ( or monomer ) of a protein is an amino acid. twenty amino acids are used in proteins. nucleic acids are polymers of nucleotides. their function is to store, transmit, and express hereditary information. = = cells = = cell theory states that cells are the fundamental units of life, that all living things are composed of one or more cells, and that all cells arise from preexisting cells through cell division
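the monomer - polymer relationships described above can likewise be tabulated in a short sketch ( an illustration only, not from the source ):

macromolecule_monomers = {
    "carbohydrates": "sugars",
    "proteins": "amino acids (twenty are used in proteins)",
    "nucleic acids": "nucleotides",
    "lipids": None,  # per the passage, lipids are the only macromolecule class not built as polymers
}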
the recent report on laser cooling of a liquid may contradict the law of energy conservation.
the goal of project gauss is to return samples from the dwarf planet ceres. ceres is the most accessible ocean world candidate and the largest reservoir of water in the inner solar system. it shows active cryovolcanism and hydrothermal activities in recent history that resulted in minerals not found in any other planets to date except for earth ' s upper crust. the possible occurrence of recent subsurface ocean on ceres and the complex geochemistry suggest possible past habitability and even the potential for ongoing habitability. aiming to answer a broad spectrum of questions about the origin and evolution of ceres and its potential habitability, gauss will return samples from this possible ocean world for the first time. the project will address the following top - level scientific questions : 1 ) what is the origin of ceres and the origin and transfer of water and other volatiles in the inner solar system? 2 ) what are the physical properties and internal structure of ceres? what do they tell us about the evolutionary and aqueous alteration history of icy dwarf planets? 3 ) what are the astrobiological implications of ceres? was it habitable in the past and is it still today? 4 ) what are the mineralogical connections between ceres and our current collections of primitive meteorites? gauss will first perform a high - resolution global remote sensing investigation, characterizing the geophysical and geochemical properties of ceres. candidate sampling sites will then be identified, and observation campaigns will be run for an in - depth assessment of the candidate sites. once the sampling site is selected, a lander will be deployed on the surface to collect samples and return them to earth in cryogenic conditions that preserves the volatile and organic composition as well as the original physical status as much as possible.
to dye denim and the artist ' s pigments gamboge and rose madder. sugar, starch, cotton, linen, hemp, some types of rope, wood and particle boards, papyrus and paper, vegetable oils, wax, and natural rubber are examples of commercially important materials made from plant tissues or their secondary products. charcoal, a pure form of carbon made by pyrolysis of wood, has a long history as a metal - smelting fuel, as a filter material and adsorbent and as an artist ' s material and is one of the three ingredients of gunpowder. cellulose, the world ' s most abundant organic polymer, can be converted into energy, fuels, materials and chemical feedstock. products made from cellulose include rayon and cellophane, wallpaper paste, biobutanol and gun cotton. sugarcane, rapeseed and soy are some of the plants with a highly fermentable sugar or oil content that are used as sources of biofuels, important alternatives to fossil fuels, such as biodiesel. sweetgrass was used by native americans to ward off bugs like mosquitoes. these bug repelling properties of sweetgrass were later found by the american chemical society in the molecules phytol and coumarin. = = plant ecology = = plant ecology is the science of the functional relationships between plants and their habitats β the environments where they complete their life cycles. plant ecologists study the composition of local and regional floras, their biodiversity, genetic diversity and fitness, the adaptation of plants to their environment, and their competitive or mutualistic interactions with other species. some ecologists even rely on empirical data from indigenous people that is gathered by ethnobotanists. this information can relay a great deal of information on how the land once was thousands of years ago and how it has changed over that time. the goals of plant ecology are to understand the causes of their distribution patterns, productivity, environmental impact, evolution, and responses to environmental change. plants depend on certain edaphic ( soil ) and climatic factors in their environment but can modify these factors too. for example, they can change their environment ' s albedo, increase runoff interception, stabilise mineral soils and develop their organic content, and affect local temperature. plants compete with other organisms in their ecosystem for resources. they interact with their neighbours at a variety of spatial scales in groups, populations and communities that collectively constitute vegetation. regions with characteristic vegetation types and dominant plants as well as similar abiot
Question: Soda water is a liquid that has bubbles of carbon dioxide in it. Which term best describes soda water?
A) a mixture
B) a molecule
C) an element
D) a compound
|
A) a mixture
|
Context:
the mechanism of stabilization of neutron - excess nuclei in stars is considered. this mechanism must produce the neutronisation process in hot stars in the same way as it occurs in the dwarfs.
while the modern stellar imf shows a rapid decline with increasing mass, theoretical investigations suggest that very massive stars ( > 100 solar masses ) may have been abundant in the early universe. other calculations also indicate that, lacking metals, these same stars reach their late evolutionary stages without appreciable mass loss. after central helium burning, they encounter the electron - positron pair instability, collapse, and burn oxygen and silicon explosively. if sufficient energy is released by the burning, these stars explode as brilliant supernovae with energies up to 100 times that of an ordinary core collapse supernova. they also eject up to 50 solar masses of radioactive ni56. stars less massive than 140 solar masses or more massive than 260 solar masses should collapse into black holes instead of exploding, thus bounding the pair - creation supernovae with regions of stellar mass that are nucleosynthetically sterile. pair - instability supernovae might be detectable in the near infrared out to redshifts of 20 or more and their ashes should leave a distinctive nucleosynthetic pattern.
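the mass windows quoted above can be expressed as a minimal classification sketch ( an illustration restricted to the metal - free, very massive stars discussed in the passage, not a general stellar - evolution model ; the function name is an assumption ):

def fate_of_metal_free_massive_star(mass_solar: float) -> str:
    # thresholds taken from the passage: pair-instability supernovae occur roughly
    # between 140 and 260 solar masses; outside that window such a star is expected
    # to collapse to a black hole instead of exploding.
    if 140 <= mass_solar <= 260:
        return "pair-instability supernova"
    return "collapse to a black hole (nucleosynthetically sterile)"

print(fate_of_metal_free_massive_star(200))  # pair-instability supernova
print(fate_of_metal_free_massive_star(300))  # collapse to a black hole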
there are a few different mechanisms that can cause white dwarf stars to vary in brightness, providing opportunities to probe the physics, structures, and formation of these compact stellar remnants. the observational characteristics of the three most common types of white dwarf variability are summarized : stellar pulsations, rotation, and ellipsoidal variations from tidal distortion in binary systems. stellar pulsations are emphasized as the most complex type of variability, which also has the greatest potential to reveal the conditions of white dwarf interiors.
dynamical evolution of spiral galaxies is strongly dependent on non - axisymmetric patterns that develop from gravitational instabilities, either spontaneously or externally triggered. some evolutionary sequences are described through which a galaxy could possibly concentrate mass and build bulges, how external gas accretion from cosmic filaments could be funneled to the galaxy disks, and intermittently driven to the galaxy center, to form nuclear starbursts and fuel an active nucleus. the frequency of both bars and lopsidedness can be used to constrain the gas accretion rate.
planetary nebulae retain the signature of the nucleosynthesis and mixing events that occurred during the previous agb phase. observational signatures complement observations of agb and post - agb stars and their binary companions. the abundances of the elements heavier than iron such as kr and xe in planetary nebulae can be used to complement abundances of sr / y / zr and ba / la / ce in agb stars, respectively, to determine the operation of the slow neutron - capture process ( the s process ) in agb stars. additionally, observations of the rb abundance in type i planetary nebulae may allow us to infer the initial mass of the central star. several noble gas components present in meteoritic stardust silicon carbide ( sic ) grains are associated with implantation into the dust grains in the high - energy environment connected to the fast winds from the central stars during the planetary nebulae phase.
the origin of the arc - shaped stellar complexes in the lmc4 region is still unknown. these perfect arcs could not have been formed by o - stars and sne in their centers ; strong arguments also exist against the possibility of their formation from infalling gas clouds. an origin from microquasars / grb jets is not excluded, because there is a strong concentration of x - ray binaries in the same region, and the massive old cluster ngc 1978, a probable site of formation of binaries with compact components, is there as well. the last possibility is that the source of energy for the formation of the stellar arcs and the lmc4 supershell might be the giant jet from the nucleus of the milky way, which might have been active a dozen myr ago.
galactic nuclei are unique laboratories for the study of processes connected with the accretion of gas onto supermassive black holes. at the same time, they represent challenging environments from the point of view of stellar dynamics due to their extreme densities and masses involved. there is a growing evidence about the importance of the mutual interaction of stars with gas in galactic nuclei. gas rich environment may lead to stellar formation which, on the other hand, may regulate accretion onto the central mass. gas in the form of massive torus or accretion disc further influences stellar dynamics in the central parsec either via gravitational or hydrodynamical interaction. eccentricity oscillations on one hand and energy dissipation on the other hand lead to increased rate of infall of stars into the supermassive black hole. last, but not least, processes related to the stellar dynamics may be detectable with forthcoming gravitational waves detectors.
the formation of supermassive black holes ( smbh ) is intimately related to galaxy formation, although precisely how remains a mystery. i speculate that formation of, and feedback from, smbh may alleviate problems that have arisen in our understanding of the cores of dark halos of galaxies.
this process may release or absorb energy. when the resulting nucleus is lighter than that of iron, energy is normally released ; when the nucleus is heavier than that of iron, energy is generally absorbed. this process of fusion occurs in stars, which derive their energy from hydrogen and helium. they form, through stellar nucleosynthesis, the light elements ( lithium to calcium ) as well as some of the heavy elements ( beyond iron and nickel, via the s - process ). the remaining abundance of heavy elements, from nickel to uranium and beyond, is due to supernova nucleosynthesis, the r - process. of course, these natural processes of astrophysics are not examples of nuclear " technology ". because of the very strong repulsion of nuclei, fusion is difficult to achieve in a controlled fashion. hydrogen bombs, formally known as thermonuclear weapons, obtain their enormous destructive power from fusion, but their energy cannot be controlled. controlled fusion is achieved in particle accelerators ; this is how many synthetic elements are produced. a fusor can also produce controlled fusion and is a useful neutron source. however, both of these devices operate at a net energy loss. controlled, viable fusion power has proven elusive, despite the occasional hoax. technical and theoretical difficulties have hindered the development of working civilian fusion technology, though research continues to this day around the world. nuclear fusion was initially pursued only in theoretical stages during world war ii, when scientists on the manhattan project ( led by edward teller ) investigated it as a method to build a bomb. the project abandoned fusion after concluding that it would require a fission reaction to detonate. it took until 1952 for the first full hydrogen bomb to be detonated, so - called because it used reactions between deuterium and tritium. fusion reactions are much more energetic per unit mass of fuel than fission reactions, but starting the fusion chain reaction is much more difficult. = = nuclear weapons = = a nuclear weapon is an explosive device that derives its destructive force from nuclear reactions, either fission or a combination of fission and fusion. both reactions release vast quantities of energy from relatively small amounts of matter. even small nuclear devices can devastate a city by blast, fire and radiation. nuclear weapons are considered weapons of mass destruction, and their use and control has been a major aspect of international policy since their debut. the design of a nuclear weapon is more complicated than it might seem. such a weapon must hold one or more subcritical fissile masses stable for deployment, then induce criticality
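the rule of thumb stated at the start of the passage ( energy released when the product nucleus is lighter than iron, absorbed when heavier ) can be encoded as a minimal sketch ; using mass number 56 as a stand - in for "iron" is an assumption, since the passage names only the element :

IRON_MASS_NUMBER = 56  # assumption: iron-56 taken as the reference near the binding-energy peak

def fusion_energy_balance(product_mass_number: int) -> str:
    # per the passage: fusing to a nucleus lighter than iron normally releases energy,
    # while fusing to a nucleus heavier than iron generally absorbs energy.
    if product_mass_number < IRON_MASS_NUMBER:
        return "energy normally released"
    if product_mass_number > IRON_MASS_NUMBER:
        return "energy generally absorbed"
    return "near the binding-energy peak"

print(fusion_energy_balance(4))    # helium-4: energy normally released
print(fusion_energy_balance(238))  # uranium-238: energy generally absorbed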
we bring you, as usual, the sun and moon and stars, plus some galaxies and a new section on astrobiology. some highlights are short ( the newly identified class of gamma - ray bursts, and the deep impact on comet 9p / tempel 1 ), some long ( the age of the universe, which will be found to have the earth at its center ), and a few metonymic, for instance the term " down - sizing " to describe the evolution of star formation rates with redshift.
Question: All stars start forming in the same manner. Some follow the life cycle of the Sun, while others turn into neutron stars or black holes. Which property determines the fate of a star as it develops?
A) mass
B) location
C) luminosity
D) temperature
|
A) mass
|
Context:
to explain molecular structure and composition. an ionic bond is formed when a metal loses one or more of its electrons, becoming a positively charged cation, and the electrons are then gained by the non - metal atom, becoming a negatively charged anion. the two oppositely charged ions attract one another, and the ionic bond is the electrostatic force of attraction between them. for example, sodium ( na ), a metal, loses one electron to become an na + cation while chlorine ( cl ), a non - metal, gains this electron to become clβ. the ions are held together due to electrostatic attraction, and that compound sodium chloride ( nacl ), or common table salt, is formed. in a covalent bond, one or more pairs of valence electrons are shared by two atoms : the resulting electrically neutral group of bonded atoms is termed a molecule. atoms will share valence electrons in such a way as to create a noble gas electron configuration ( eight electrons in their outermost shell ) for each atom. atoms that tend to combine in such a way that they each have eight electrons in their valence shell are said to follow the octet rule. however, some elements like hydrogen and lithium need only two electrons in their outermost shell to attain this stable configuration ; these atoms are said to follow the duet rule, and in this way they are reaching the electron configuration of the noble gas helium, which has two electrons in its outer shell. similarly, theories from classical physics can be used to predict many ionic structures. with more complicated compounds, such as metal complexes, valence bond theory is less applicable and alternative approaches, such as the molecular orbital theory, are generally used. = = = energy = = = in the context of chemistry, energy is an attribute of a substance as a consequence of its atomic, molecular or aggregate structure. since a chemical transformation is accompanied by a change in one or more of these kinds of structures, it is invariably accompanied by an increase or decrease of energy of the substances involved. some energy is transferred between the surroundings and the reactants of the reaction in the form of heat or light ; thus the products of a reaction may have more or less energy than the reactants. a reaction is said to be exergonic if the final state is lower on the energy scale than the initial state ; in the case of endergonic reactions the situation is the reverse. a reaction is said to be exothermic if the reaction releases heat to the surroundings ; in the case of
ion or cation. when an atom gains an electron and thus has more electrons than protons, the atom is a negatively charged ion or anion. cations and anions can form a crystalline lattice of neutral salts, such as the na + and clβ ions forming sodium chloride, or nacl. examples of polyatomic ions that do not split up during acid β base reactions are hydroxide ( ohβ ) and phosphate ( po43β ). plasma is composed of gaseous matter that has been completely ionized, usually through high temperature. = = = acidity and basicity = = = a substance can often be classified as an acid or a base. there are several different theories which explain acid β base behavior. the simplest is arrhenius theory, which states that an acid is a substance that produces hydronium ions when it is dissolved in water, and a base is one that produces hydroxide ions when dissolved in water. according to brΓΈnsted β lowry acid β base theory, acids are substances that donate a positive hydrogen ion to another substance in a chemical reaction ; by extension, a base is the substance which receives that hydrogen ion. a third common theory is lewis acid β base theory, which is based on the formation of new chemical bonds. lewis theory explains that an acid is a substance which is capable of accepting a pair of electrons from another substance during the process of bond formation, while a base is a substance which can provide a pair of electrons to form a new bond. there are several other ways in which a substance may be classified as an acid or a base, as is evident in the history of this concept. acid strength is commonly measured by two methods. one measurement, based on the arrhenius definition of acidity, is ph, which is a measurement of the hydronium ion concentration in a solution, as expressed on a negative logarithmic scale. thus, solutions that have a low ph have a high hydronium ion concentration and can be said to be more acidic. the other measurement, based on the brΓΈnsted β lowry definition, is the acid dissociation constant ( ka ), which measures the relative ability of a substance to act as an acid under the brΓΈnsted β lowry definition of an acid. that is, substances with a higher ka are more likely to donate hydrogen ions in chemical reactions than those with lower ka values. = = = redox = = = redox ( reduction - oxidation ) reactions include all chemical reactions in which atoms have their
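as a worked example of the ph measurement described above ( a sketch ; the hydronium concentration of 1 x 10^-3 mol / l is an illustrative assumption, not from the source ):

import math

def ph_from_hydronium(concentration_mol_per_l: float) -> float:
    # ph is the negative base-10 logarithm of the hydronium ion concentration
    return -math.log10(concentration_mol_per_l)

print(ph_from_hydronium(1e-3))  # 3.0 -> a low ph, i.e. a comparatively acidic solution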
set of chemical reactions with other substances. however, this definition only works well for substances that are composed of molecules, which is not true of many substances ( see below ). molecules are typically a set of atoms bound together by covalent bonds, such that the structure is electrically neutral and all valence electrons are paired with other electrons either in bonds or in lone pairs. thus, molecules exist as electrically neutral units, unlike ions. when this rule is broken, giving the " molecule " a charge, the result is sometimes named a molecular ion or a polyatomic ion. however, the discrete and separate nature of the molecular concept usually requires that molecular ions be present only in well - separated form, such as a directed beam in a vacuum in a mass spectrometer. charged polyatomic collections residing in solids ( for example, common sulfate or nitrate ions ) are generally not considered " molecules " in chemistry. some molecules contain one or more unpaired electrons, creating radicals. most radicals are comparatively reactive, but some, such as nitric oxide ( no ) can be stable. the " inert " or noble gas elements ( helium, neon, argon, krypton, xenon and radon ) are composed of lone atoms as their smallest discrete unit, but the other isolated chemical elements consist of either molecules or networks of atoms bonded to each other in some way. identifiable molecules compose familiar substances such as water, air, and many organic compounds like alcohol, sugar, gasoline, and the various pharmaceuticals. however, not all substances or chemical compounds consist of discrete molecules, and indeed most of the solid substances that make up the solid crust, mantle, and core of the earth are chemical compounds without molecules. these other types of substances, such as ionic compounds and network solids, are organized in such a way as to lack the existence of identifiable molecules per se. instead, these substances are discussed in terms of formula units or unit cells as the smallest repeating structure within the substance. examples of such substances are mineral salts ( such as table salt ), solids like carbon and diamond, metals, and familiar silica and silicate minerals such as quartz and granite. one of the main characteristics of a molecule is its geometry often called its structure. while the structure of diatomic, triatomic or tetra - atomic molecules may be trivial, ( linear, angular pyramidal etc. ) the structure of polyatomic molecules, that are constituted of more than six atoms ( of several elements ) can be crucial for its chemical nature.
##als force. each of these kinds of bonds is ascribed to some potential. these potentials create the interactions which hold atoms together in molecules or crystals. in many simple compounds, valence bond theory, the valence shell electron pair repulsion model ( vsepr ), and the concept of oxidation number can be used to explain molecular structure and composition. an ionic bond is formed when a metal loses one or more of its electrons, becoming a positively charged cation, and the electrons are then gained by the non - metal atom, becoming a negatively charged anion. the two oppositely charged ions attract one another, and the ionic bond is the electrostatic force of attraction between them. for example, sodium ( na ), a metal, loses one electron to become an na + cation while chlorine ( cl ), a non - metal, gains this electron to become clβ. the ions are held together due to electrostatic attraction, and that compound sodium chloride ( nacl ), or common table salt, is formed. in a covalent bond, one or more pairs of valence electrons are shared by two atoms : the resulting electrically neutral group of bonded atoms is termed a molecule. atoms will share valence electrons in such a way as to create a noble gas electron configuration ( eight electrons in their outermost shell ) for each atom. atoms that tend to combine in such a way that they each have eight electrons in their valence shell are said to follow the octet rule. however, some elements like hydrogen and lithium need only two electrons in their outermost shell to attain this stable configuration ; these atoms are said to follow the duet rule, and in this way they are reaching the electron configuration of the noble gas helium, which has two electrons in its outer shell. similarly, theories from classical physics can be used to predict many ionic structures. with more complicated compounds, such as metal complexes, valence bond theory is less applicable and alternative approaches, such as the molecular orbital theory, are generally used. = = = energy = = = in the context of chemistry, energy is an attribute of a substance as a consequence of its atomic, molecular or aggregate structure. since a chemical transformation is accompanied by a change in one or more of these kinds of structures, it is invariably accompanied by an increase or decrease of energy of the substances involved. some energy is transferred between the surroundings and the reactants of the reaction in the form of heat or light ; thus the products of a reaction may have more or less energy than the reactants
is the electrostatic force of attraction between them. for example, sodium ( na ), a metal, loses one electron to become an na + cation while chlorine ( cl ), a non - metal, gains this electron to become clβ. the ions are held together due to electrostatic attraction, and that compound sodium chloride ( nacl ), or common table salt, is formed. in a covalent bond, one or more pairs of valence electrons are shared by two atoms : the resulting electrically neutral group of bonded atoms is termed a molecule. atoms will share valence electrons in such a way as to create a noble gas electron configuration ( eight electrons in their outermost shell ) for each atom. atoms that tend to combine in such a way that they each have eight electrons in their valence shell are said to follow the octet rule. however, some elements like hydrogen and lithium need only two electrons in their outermost shell to attain this stable configuration ; these atoms are said to follow the duet rule, and in this way they are reaching the electron configuration of the noble gas helium, which has two electrons in its outer shell. similarly, theories from classical physics can be used to predict many ionic structures. with more complicated compounds, such as metal complexes, valence bond theory is less applicable and alternative approaches, such as the molecular orbital theory, are generally used. = = = energy = = = in the context of chemistry, energy is an attribute of a substance as a consequence of its atomic, molecular or aggregate structure. since a chemical transformation is accompanied by a change in one or more of these kinds of structures, it is invariably accompanied by an increase or decrease of energy of the substances involved. some energy is transferred between the surroundings and the reactants of the reaction in the form of heat or light ; thus the products of a reaction may have more or less energy than the reactants. a reaction is said to be exergonic if the final state is lower on the energy scale than the initial state ; in the case of endergonic reactions the situation is the reverse. a reaction is said to be exothermic if the reaction releases heat to the surroundings ; in the case of endothermic reactions, the reaction absorbs heat from the surroundings. chemical reactions are invariably not possible unless the reactants surmount an energy barrier known as the activation energy. the speed of a chemical reaction ( at given temperature t ) is related to the activation energy e, by the boltzmann ' s population
scale. thus, solutions that have a low ph have a high hydronium ion concentration and can be said to be more acidic. the other measurement, based on the brΓΈnsted β lowry definition, is the acid dissociation constant ( ka ), which measures the relative ability of a substance to act as an acid under the brΓΈnsted β lowry definition of an acid. that is, substances with a higher ka are more likely to donate hydrogen ions in chemical reactions than those with lower ka values. = = = redox = = = redox ( reduction - oxidation ) reactions include all chemical reactions in which atoms have their oxidation state changed by either gaining electrons ( reduction ) or losing electrons ( oxidation ). substances that have the ability to oxidize other substances are said to be oxidative and are known as oxidizing agents, oxidants or oxidizers. an oxidant removes electrons from another substance. similarly, substances that have the ability to reduce other substances are said to be reductive and are known as reducing agents, reductants, or reducers. a reductant transfers electrons to another substance and is thus oxidized itself. and because it " donates " electrons it is also called an electron donor. oxidation and reduction properly refer to a change in oxidation number β the actual transfer of electrons may never occur. thus, oxidation is better defined as an increase in oxidation number, and reduction as a decrease in oxidation number. = = = equilibrium = = = although the concept of equilibrium is widely used across sciences, in the context of chemistry, it arises whenever a number of different states of the chemical composition are possible, as for example, in a mixture of several chemical compounds that can react with one another, or when a substance can be present in more than one kind of phase. a system of chemical substances at equilibrium, even though having an unchanging composition, is most often not static ; molecules of the substances continue to react with one another thus giving rise to a dynamic equilibrium. thus the concept describes the state in which the parameters such as chemical composition remain unchanged over time. = = = chemical laws = = = chemical reactions are governed by certain laws, which have become fundamental concepts in chemistry. some of them are : = = history = = the history of chemistry spans a period from the ancient past to the present. since several millennia bc, civilizations were using technologies that would eventually form the basis of the various branches of chemistry. examples include extracting metals from ores
charges in the nuclei and the negative charges oscillating about them. more than simple attraction and repulsion, the energies and distributions characterize the availability of an electron to bond to another atom. the chemical bond can be a covalent bond, an ionic bond, a hydrogen bond or just because of van der waals force. each of these kinds of bonds is ascribed to some potential. these potentials create the interactions which hold atoms together in molecules or crystals. in many simple compounds, valence bond theory, the valence shell electron pair repulsion model ( vsepr ), and the concept of oxidation number can be used to explain molecular structure and composition. an ionic bond is formed when a metal loses one or more of its electrons, becoming a positively charged cation, and the electrons are then gained by the non - metal atom, becoming a negatively charged anion. the two oppositely charged ions attract one another, and the ionic bond is the electrostatic force of attraction between them. for example, sodium ( na ), a metal, loses one electron to become an na + cation while chlorine ( cl ), a non - metal, gains this electron to become clβ. the ions are held together due to electrostatic attraction, and that compound sodium chloride ( nacl ), or common table salt, is formed. in a covalent bond, one or more pairs of valence electrons are shared by two atoms : the resulting electrically neutral group of bonded atoms is termed a molecule. atoms will share valence electrons in such a way as to create a noble gas electron configuration ( eight electrons in their outermost shell ) for each atom. atoms that tend to combine in such a way that they each have eight electrons in their valence shell are said to follow the octet rule. however, some elements like hydrogen and lithium need only two electrons in their outermost shell to attain this stable configuration ; these atoms are said to follow the duet rule, and in this way they are reaching the electron configuration of the noble gas helium, which has two electrons in its outer shell. similarly, theories from classical physics can be used to predict many ionic structures. with more complicated compounds, such as metal complexes, valence bond theory is less applicable and alternative approaches, such as the molecular orbital theory, are generally used. = = = energy = = = in the context of chemistry, energy is an attribute of a substance as a consequence of its atomic, molecular or aggregate structure. since a chemical transformation is accompanied by a change
i. e. ' microscopic chemical events ' ). = = = ions and salts = = = an ion is a charged species, an atom or a molecule, that has lost or gained one or more electrons. when an atom loses an electron and thus has more protons than electrons, the atom is a positively charged ion or cation. when an atom gains an electron and thus has more electrons than protons, the atom is a negatively charged ion or anion. cations and anions can form a crystalline lattice of neutral salts, such as the na + and clβ ions forming sodium chloride, or nacl. examples of polyatomic ions that do not split up during acid β base reactions are hydroxide ( ohβ ) and phosphate ( po43β ). plasma is composed of gaseous matter that has been completely ionized, usually through high temperature. = = = acidity and basicity = = = a substance can often be classified as an acid or a base. there are several different theories which explain acid β base behavior. the simplest is arrhenius theory, which states that an acid is a substance that produces hydronium ions when it is dissolved in water, and a base is one that produces hydroxide ions when dissolved in water. according to brΓΈnsted β lowry acid β base theory, acids are substances that donate a positive hydrogen ion to another substance in a chemical reaction ; by extension, a base is the substance which receives that hydrogen ion. a third common theory is lewis acid β base theory, which is based on the formation of new chemical bonds. lewis theory explains that an acid is a substance which is capable of accepting a pair of electrons from another substance during the process of bond formation, while a base is a substance which can provide a pair of electrons to form a new bond. there are several other ways in which a substance may be classified as an acid or a base, as is evident in the history of this concept. acid strength is commonly measured by two methods. one measurement, based on the arrhenius definition of acidity, is ph, which is a measurement of the hydronium ion concentration in a solution, as expressed on a negative logarithmic scale. thus, solutions that have a low ph have a high hydronium ion concentration and can be said to be more acidic. the other measurement, based on the brΓΈnsted β lowry definition, is the acid dissociation constant ( ka ), which measures the relative ability of a substance to act as an
has rest mass and volume ( it takes up space ) and is made up of particles. the particles that make up matter have rest mass as well β not all particles have rest mass, such as the photon. matter can be a pure chemical substance or a mixture of substances. = = = = atom = = = = the atom is the basic unit of chemistry. it consists of a dense core called the atomic nucleus surrounded by a space occupied by an electron cloud. the nucleus is made up of positively charged protons and uncharged neutrons ( together called nucleons ), while the electron cloud consists of negatively charged electrons which orbit the nucleus. in a neutral atom, the negatively charged electrons balance out the positive charge of the protons. the nucleus is dense ; the mass of a nucleon is approximately 1, 836 times that of an electron, yet the radius of an atom is about 10, 000 times that of its nucleus. the atom is also the smallest entity that can be envisaged to retain the chemical properties of the element, such as electronegativity, ionization potential, preferred oxidation state ( s ), coordination number, and preferred types of bonds to form ( e. g., metallic, ionic, covalent ). = = = = element = = = = a chemical element is a pure substance which is composed of a single type of atom, characterized by its particular number of protons in the nuclei of its atoms, known as the atomic number and represented by the symbol z. the mass number is the sum of the number of protons and neutrons in a nucleus. although all the nuclei of all atoms belonging to one element will have the same atomic number, they may not necessarily have the same mass number ; atoms of an element which have different mass numbers are known as isotopes. for example, all atoms with 6 protons in their nuclei are atoms of the chemical element carbon, but atoms of carbon may have mass numbers of 12 or 13. the standard presentation of the chemical elements is in the periodic table, which orders elements by atomic number. the periodic table is arranged in groups, or columns, and periods, or rows. the periodic table is useful in identifying periodic trends. = = = = compound = = = = a compound is a pure chemical substance composed of more than one element. the properties of a compound bear little similarity to those of its elements. the standard nomenclature of compounds is set by the international union of pure and applied chemistry ( iupac ). organic compounds are named
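the mass - number bookkeeping described above can be illustrated with the carbon isotopes mentioned in the passage ( a minimal sketch ; the helper name is an assumption ):

def neutron_count(mass_number: int, atomic_number: int) -> int:
    # mass number = protons + neutrons, so neutrons = mass number - atomic number (z)
    return mass_number - atomic_number

CARBON_Z = 6
print(neutron_count(12, CARBON_Z))  # 6 neutrons in carbon-12
print(neutron_count(13, CARBON_Z))  # 7 neutrons in carbon-13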
other electrons either in bonds or in lone pairs. thus, molecules exist as electrically neutral units, unlike ions. when this rule is broken, giving the " molecule " a charge, the result is sometimes named a molecular ion or a polyatomic ion. however, the discrete and separate nature of the molecular concept usually requires that molecular ions be present only in well - separated form, such as a directed beam in a vacuum in a mass spectrometer. charged polyatomic collections residing in solids ( for example, common sulfate or nitrate ions ) are generally not considered " molecules " in chemistry. some molecules contain one or more unpaired electrons, creating radicals. most radicals are comparatively reactive, but some, such as nitric oxide ( no ) can be stable. the " inert " or noble gas elements ( helium, neon, argon, krypton, xenon and radon ) are composed of lone atoms as their smallest discrete unit, but the other isolated chemical elements consist of either molecules or networks of atoms bonded to each other in some way. identifiable molecules compose familiar substances such as water, air, and many organic compounds like alcohol, sugar, gasoline, and the various pharmaceuticals. however, not all substances or chemical compounds consist of discrete molecules, and indeed most of the solid substances that make up the solid crust, mantle, and core of the earth are chemical compounds without molecules. these other types of substances, such as ionic compounds and network solids, are organized in such a way as to lack the existence of identifiable molecules per se. instead, these substances are discussed in terms of formula units or unit cells as the smallest repeating structure within the substance. examples of such substances are mineral salts ( such as table salt ), solids like carbon and diamond, metals, and familiar silica and silicate minerals such as quartz and granite. one of the main characteristics of a molecule is its geometry often called its structure. while the structure of diatomic, triatomic or tetra - atomic molecules may be trivial, ( linear, angular pyramidal etc. ) the structure of polyatomic molecules, that are constituted of more than six atoms ( of several elements ) can be crucial for its chemical nature. = = = = substance and mixture = = = = a chemical substance is a kind of matter with a definite composition and set of properties. a collection of substances is called a mixture. examples of mixtures are air and alloys. = = = = mole and amount of substance = = = = the mole is a unit
Question: If a neutral atom loses an electron, what is formed?
A) A gas
B) An ion
C) An acid
D) A molecule
|
B) An ion
|
Context:
. a reaction is said to be exergonic if the final state is lower on the energy scale than the initial state ; in the case of endergonic reactions the situation is the reverse. a reaction is said to be exothermic if the reaction releases heat to the surroundings ; in the case of endothermic reactions, the reaction absorbs heat from the surroundings. chemical reactions are invariably not possible unless the reactants surmount an energy barrier known as the activation energy. the speed of a chemical reaction ( at given temperature t ) is related to the activation energy e, by the boltzmann ' s population factor e β e / k t { \ displaystyle e ^ { - e / kt } } β that is the probability of a molecule to have energy greater than or equal to e at the given temperature t. this exponential dependence of a reaction rate on temperature is known as the arrhenius equation. the activation energy necessary for a chemical reaction to occur can be in the form of heat, light, electricity or mechanical force in the form of ultrasound. a related concept free energy, which also incorporates entropy considerations, is a very useful means for predicting the feasibility of a reaction and determining the state of equilibrium of a chemical reaction, in chemical thermodynamics. a reaction is feasible only if the total change in the gibbs free energy is negative, Ξ΄ g β€ 0 { \ displaystyle \ delta g \ leq 0 \, } ; if it is equal to zero the chemical reaction is said to be at equilibrium. there exist only limited possible states of energy for electrons, atoms and molecules. these are determined by the rules of quantum mechanics, which require quantization of energy of a bound system. the atoms / molecules in a higher energy state are said to be excited. the molecules / atoms of substance in an excited energy state are often much more reactive ; that is, more amenable to chemical reactions. the phase of a substance is invariably determined by its energy and the energy of its surroundings. when the intermolecular forces of a substance are such that the energy of the surroundings is not sufficient to overcome them, it occurs in a more ordered phase like liquid or solid as is the case with water ( h2o ) ; a liquid at room temperature because its molecules are bound by hydrogen bonds. whereas hydrogen sulfide ( h2s ) is a gas at room temperature and standard pressure, as its molecules are bound by weaker dipole β dipole interactions. the transfer of
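the boltzmann population factor mentioned above, exp(-e / kt), can be evaluated numerically ; the sketch below is an illustration only, and the chosen per - molecule energy and temperatures are assumptions rather than values from the source :

import math

BOLTZMANN_K = 1.380649e-23  # boltzmann constant in joules per kelvin

def boltzmann_factor(energy_j: float, temperature_k: float) -> float:
    # probability weight that a molecule has energy >= e at temperature t: exp(-e / (k * t))
    return math.exp(-energy_j / (BOLTZMANN_K * temperature_k))

# illustrative numbers only: an activation energy of 8e-20 j per molecule (~0.5 ev)
# evaluated at room temperature and at the boiling point of water
print(boltzmann_factor(8e-20, 298))
print(boltzmann_factor(8e-20, 373))
# the factor grows steeply with temperature, which is the exponential dependence
# of reaction rate on temperature that the arrhenius equation expresses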
in one or more of these kinds of structures, it is invariably accompanied by an increase or decrease of energy of the substances involved. some energy is transferred between the surroundings and the reactants of the reaction in the form of heat or light ; thus the products of a reaction may have more or less energy than the reactants. a reaction is said to be exergonic if the final state is lower on the energy scale than the initial state ; in the case of endergonic reactions the situation is the reverse. a reaction is said to be exothermic if the reaction releases heat to the surroundings ; in the case of endothermic reactions, the reaction absorbs heat from the surroundings. chemical reactions are invariably not possible unless the reactants surmount an energy barrier known as the activation energy. the speed of a chemical reaction ( at given temperature t ) is related to the activation energy e, by the boltzmann ' s population factor e β e / k t { \ displaystyle e ^ { - e / kt } } β that is the probability of a molecule to have energy greater than or equal to e at the given temperature t. this exponential dependence of a reaction rate on temperature is known as the arrhenius equation. the activation energy necessary for a chemical reaction to occur can be in the form of heat, light, electricity or mechanical force in the form of ultrasound. a related concept free energy, which also incorporates entropy considerations, is a very useful means for predicting the feasibility of a reaction and determining the state of equilibrium of a chemical reaction, in chemical thermodynamics. a reaction is feasible only if the total change in the gibbs free energy is negative, Ξ΄ g β€ 0 { \ displaystyle \ delta g \ leq 0 \, } ; if it is equal to zero the chemical reaction is said to be at equilibrium. there exist only limited possible states of energy for electrons, atoms and molecules. these are determined by the rules of quantum mechanics, which require quantization of energy of a bound system. the atoms / molecules in a higher energy state are said to be excited. the molecules / atoms of substance in an excited energy state are often much more reactive ; that is, more amenable to chemical reactions. the phase of a substance is invariably determined by its energy and the energy of its surroundings. when the intermolecular forces of a substance are such that the energy of the surroundings is not sufficient to overcome them, it occurs in a more ordered phase like liquid
, valence bond theory is less applicable and alternative approaches, such as the molecular orbital theory, are generally used. = = = energy = = = in the context of chemistry, energy is an attribute of a substance as a consequence of its atomic, molecular or aggregate structure. since a chemical transformation is accompanied by a change in one or more of these kinds of structures, it is invariably accompanied by an increase or decrease of energy of the substances involved. some energy is transferred between the surroundings and the reactants of the reaction in the form of heat or light ; thus the products of a reaction may have more or less energy than the reactants. a reaction is said to be exergonic if the final state is lower on the energy scale than the initial state ; in the case of endergonic reactions the situation is the reverse. a reaction is said to be exothermic if the reaction releases heat to the surroundings ; in the case of endothermic reactions, the reaction absorbs heat from the surroundings. chemical reactions are invariably not possible unless the reactants surmount an energy barrier known as the activation energy. the speed of a chemical reaction ( at given temperature t ) is related to the activation energy e, by the boltzmann ' s population factor e β e / k t { \ displaystyle e ^ { - e / kt } } β that is the probability of a molecule to have energy greater than or equal to e at the given temperature t. this exponential dependence of a reaction rate on temperature is known as the arrhenius equation. the activation energy necessary for a chemical reaction to occur can be in the form of heat, light, electricity or mechanical force in the form of ultrasound. a related concept free energy, which also incorporates entropy considerations, is a very useful means for predicting the feasibility of a reaction and determining the state of equilibrium of a chemical reaction, in chemical thermodynamics. a reaction is feasible only if the total change in the gibbs free energy is negative, Ξ΄ g β€ 0 { \ displaystyle \ delta g \ leq 0 \, } ; if it is equal to zero the chemical reaction is said to be at equilibrium. there exist only limited possible states of energy for electrons, atoms and molecules. these are determined by the rules of quantum mechanics, which require quantization of energy of a bound system. the atoms / molecules in a higher energy state are said to be excited. the molecules / atoms of substance in an excited energy state are often much more reactive
endothermic reactions, the reaction absorbs heat from the surroundings. chemical reactions are invariably not possible unless the reactants surmount an energy barrier known as the activation energy. the speed of a chemical reaction ( at given temperature t ) is related to the activation energy e, by the boltzmann ' s population factor e β e / k t { \ displaystyle e ^ { - e / kt } } β that is the probability of a molecule to have energy greater than or equal to e at the given temperature t. this exponential dependence of a reaction rate on temperature is known as the arrhenius equation. the activation energy necessary for a chemical reaction to occur can be in the form of heat, light, electricity or mechanical force in the form of ultrasound. a related concept free energy, which also incorporates entropy considerations, is a very useful means for predicting the feasibility of a reaction and determining the state of equilibrium of a chemical reaction, in chemical thermodynamics. a reaction is feasible only if the total change in the gibbs free energy is negative, Ξ΄ g β€ 0 { \ displaystyle \ delta g \ leq 0 \, } ; if it is equal to zero the chemical reaction is said to be at equilibrium. there exist only limited possible states of energy for electrons, atoms and molecules. these are determined by the rules of quantum mechanics, which require quantization of energy of a bound system. the atoms / molecules in a higher energy state are said to be excited. the molecules / atoms of substance in an excited energy state are often much more reactive ; that is, more amenable to chemical reactions. the phase of a substance is invariably determined by its energy and the energy of its surroundings. when the intermolecular forces of a substance are such that the energy of the surroundings is not sufficient to overcome them, it occurs in a more ordered phase like liquid or solid as is the case with water ( h2o ) ; a liquid at room temperature because its molecules are bound by hydrogen bonds. whereas hydrogen sulfide ( h2s ) is a gas at room temperature and standard pressure, as its molecules are bound by weaker dipole β dipole interactions. the transfer of energy from one chemical substance to another depends on the size of energy quanta emitted from one substance. however, heat energy is often transferred more easily from almost any substance to another because the phonons responsible for vibrational and rotational energy levels in a substance have much less energy than photons invoked for the electronic energy transfer
, like the woodward β hoffmann rules often come in handy while proposing a mechanism for a chemical reaction. according to the iupac gold book, a chemical reaction is " a process that results in the interconversion of chemical species. " accordingly, a chemical reaction may be an elementary reaction or a stepwise reaction. an additional caveat is made, in that this definition includes cases where the interconversion of conformers is experimentally observable. such detectable chemical reactions normally involve sets of molecular entities as indicated by this definition, but it is often conceptually convenient to use the term also for changes involving single molecular entities ( i. e. ' microscopic chemical events ' ). = = = ions and salts = = = an ion is a charged species, an atom or a molecule, that has lost or gained one or more electrons. when an atom loses an electron and thus has more protons than electrons, the atom is a positively charged ion or cation. when an atom gains an electron and thus has more electrons than protons, the atom is a negatively charged ion or anion. cations and anions can form a crystalline lattice of neutral salts, such as the na + and clβ ions forming sodium chloride, or nacl. examples of polyatomic ions that do not split up during acid β base reactions are hydroxide ( ohβ ) and phosphate ( po43β ). plasma is composed of gaseous matter that has been completely ionized, usually through high temperature. = = = acidity and basicity = = = a substance can often be classified as an acid or a base. there are several different theories which explain acid β base behavior. the simplest is arrhenius theory, which states that an acid is a substance that produces hydronium ions when it is dissolved in water, and a base is one that produces hydroxide ions when dissolved in water. according to brΓΈnsted β lowry acid β base theory, acids are substances that donate a positive hydrogen ion to another substance in a chemical reaction ; by extension, a base is the substance which receives that hydrogen ion. a third common theory is lewis acid β base theory, which is based on the formation of new chemical bonds. lewis theory explains that an acid is a substance which is capable of accepting a pair of electrons from another substance during the process of bond formation, while a base is a substance which can provide a pair of electrons to form a new bond. there are several other ways in which a substance may be classified as an acid
. oxidation, reduction, dissociation, acid β base neutralization and molecular rearrangement are some examples of common chemical reactions. a chemical reaction can be symbolically depicted through a chemical equation. while in a non - nuclear chemical reaction the number and kind of atoms on both sides of the equation are equal, for a nuclear reaction this holds true only for the nuclear particles viz. protons and neutrons. the sequence of steps in which the reorganization of chemical bonds may be taking place in the course of a chemical reaction is called its mechanism. a chemical reaction can be envisioned to take place in a number of steps, each of which may have a different speed. many reaction intermediates with variable stability can thus be envisaged during the course of a reaction. reaction mechanisms are proposed to explain the kinetics and the relative product mix of a reaction. many physical chemists specialize in exploring and proposing the mechanisms of various chemical reactions. several empirical rules, like the woodward β hoffmann rules often come in handy while proposing a mechanism for a chemical reaction. according to the iupac gold book, a chemical reaction is " a process that results in the interconversion of chemical species. " accordingly, a chemical reaction may be an elementary reaction or a stepwise reaction. an additional caveat is made, in that this definition includes cases where the interconversion of conformers is experimentally observable. such detectable chemical reactions normally involve sets of molecular entities as indicated by this definition, but it is often conceptually convenient to use the term also for changes involving single molecular entities ( i. e. ' microscopic chemical events ' ). = = = ions and salts = = = an ion is a charged species, an atom or a molecule, that has lost or gained one or more electrons. when an atom loses an electron and thus has more protons than electrons, the atom is a positively charged ion or cation. when an atom gains an electron and thus has more electrons than protons, the atom is a negatively charged ion or anion. cations and anions can form a crystalline lattice of neutral salts, such as the na + and clβ ions forming sodium chloride, or nacl. examples of polyatomic ions that do not split up during acid β base reactions are hydroxide ( ohβ ) and phosphate ( po43β ). plasma is composed of gaseous matter that has been completely ionized, usually through high temperature. = = = acidity and basicity = = = a substance can often be
the interaction between tannin and bovine serum albumin ( bsa ) was examined by fluorescence quenching. the quenching of bsa by tannin was a static ( stationary - state ) process, and the coupling coefficient was one. the interaction force between tannin and bsa was hydrophobic.
attain this stable configuration ; these atoms are said to follow the duet rule, and in this way they are reaching the electron configuration of the noble gas helium, which has two electrons in its outer shell. similarly, theories from classical physics can be used to predict many ionic structures. with more complicated compounds, such as metal complexes, valence bond theory is less applicable and alternative approaches, such as the molecular orbital theory, are generally used. = = = energy = = = in the context of chemistry, energy is an attribute of a substance as a consequence of its atomic, molecular or aggregate structure. since a chemical transformation is accompanied by a change in one or more of these kinds of structures, it is invariably accompanied by an increase or decrease of energy of the substances involved. some energy is transferred between the surroundings and the reactants of the reaction in the form of heat or light ; thus the products of a reaction may have more or less energy than the reactants. a reaction is said to be exergonic if the final state is lower on the energy scale than the initial state ; in the case of endergonic reactions the situation is the reverse. a reaction is said to be exothermic if the reaction releases heat to the surroundings ; in the case of endothermic reactions, the reaction absorbs heat from the surroundings. chemical reactions are invariably not possible unless the reactants surmount an energy barrier known as the activation energy. the speed of a chemical reaction ( at given temperature t ) is related to the activation energy e, by the boltzmann ' s population factor e β e / k t { \ displaystyle e ^ { - e / kt } } β that is the probability of a molecule to have energy greater than or equal to e at the given temperature t. this exponential dependence of a reaction rate on temperature is known as the arrhenius equation. the activation energy necessary for a chemical reaction to occur can be in the form of heat, light, electricity or mechanical force in the form of ultrasound. a related concept free energy, which also incorporates entropy considerations, is a very useful means for predicting the feasibility of a reaction and determining the state of equilibrium of a chemical reaction, in chemical thermodynamics. a reaction is feasible only if the total change in the gibbs free energy is negative, Ξ΄ g β€ 0 { \ displaystyle \ delta g \ leq 0 \, } ; if it is equal to zero the chemical reaction is said to be at equilibrium. there exist only limited
##ist, sio2, silicon nitride, and various metals for masking. its reaction to silicon is " plasmaless ", is purely chemical and spontaneous and is often operated in pulsed mode. models of the etching action are available, and university laboratories and various commercial tools offer solutions using this approach. modern vlsi processes avoid wet etching, and use plasma etching instead. plasma etchers can operate in several modes by adjusting the parameters of the plasma. ordinary plasma etching operates between 0. 1 and 5 torr. ( this unit of pressure, commonly used in vacuum engineering, equals approximately 133. 3 pascals. ) the plasma produces energetic free radicals, neutrally charged, that react at the surface of the wafer. since neutral particles attack the wafer from all angles, this process is isotropic. plasma etching can be isotropic, i. e., exhibiting a lateral undercut rate on a patterned surface approximately the same as its downward etch rate, or can be anisotropic, i. e., exhibiting a smaller lateral undercut rate than its downward etch rate. such anisotropy is maximized in deep reactive ion etching. the use of the term anisotropy for plasma etching should not be conflated with the use of the same term when referring to orientation - dependent etching. the source gas for the plasma usually contains small molecules rich in chlorine or fluorine. for instance, carbon tetrachloride ( ccl4 ) etches silicon and aluminium, and trifluoromethane etches silicon dioxide and silicon nitride. a plasma containing oxygen is used to oxidize ( " ash " ) photoresist and facilitate its removal. ion milling, or sputter etching, uses lower pressures, often as low as 10−4 torr ( 10 mpa ). it bombards the wafer with energetic ions of noble gases, often ar +, which knock atoms from the substrate by transferring momentum. because the etching is performed by ions, which approach the wafer approximately from one direction, this process is highly anisotropic. on the other hand, it tends to display poor selectivity. reactive - ion etching ( rie ) operates under conditions intermediate between sputter and plasma etching ( between 10−3 and 10−1 torr ). deep reactive - ion etching ( drie ) modifies the rie technique to produce deep, narrow features.
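The pressure regimes quoted above are easier to compare in SI units; the short Python sketch below converts torr to pascals (using 1 torr ≈ 133.3 Pa, as stated in the passage) and computes a simple anisotropy figure from lateral and vertical etch rates. The etch-rate numbers are made-up illustrative values, and the 1 - lateral/vertical definition is one common convention, not something taken from the passage.

# minimal sketch: convert etching pressures to pascals and compute an
# anisotropy figure of merit; etch-rate values below are assumed examples
TORR_TO_PA = 133.3  # approximate conversion used in the passage

for name, torr in [("plasma etching, low end", 0.1),
                   ("plasma etching, high end", 5.0),
                   ("ion milling", 1e-4),
                   ("reactive-ion etching", 1e-2)]:
    print(f"{name}: {torr} torr = {torr * TORR_TO_PA:.4g} Pa")

def anisotropy(lateral_rate, vertical_rate):
    # 0 means fully isotropic (equal undercut), 1 means perfectly anisotropic
    return 1.0 - lateral_rate / vertical_rate

print(anisotropy(lateral_rate=90.0, vertical_rate=100.0))   # nearly isotropic
print(anisotropy(lateral_rate=2.0, vertical_rate=100.0))    # strongly anisotropic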
a nuclear reaction this holds true only for the nuclear particles viz. protons and neutrons. the sequence of steps in which the reorganization of chemical bonds may be taking place in the course of a chemical reaction is called its mechanism. a chemical reaction can be envisioned to take place in a number of steps, each of which may have a different speed. many reaction intermediates with variable stability can thus be envisaged during the course of a reaction. reaction mechanisms are proposed to explain the kinetics and the relative product mix of a reaction. many physical chemists specialize in exploring and proposing the mechanisms of various chemical reactions. several empirical rules, like the woodward – hoffmann rules often come in handy while proposing a mechanism for a chemical reaction. according to the iupac gold book, a chemical reaction is " a process that results in the interconversion of chemical species. " accordingly, a chemical reaction may be an elementary reaction or a stepwise reaction. an additional caveat is made, in that this definition includes cases where the interconversion of conformers is experimentally observable. such detectable chemical reactions normally involve sets of molecular entities as indicated by this definition, but it is often conceptually convenient to use the term also for changes involving single molecular entities ( i. e. ' microscopic chemical events ' ). = = = ions and salts = = = an ion is a charged species, an atom or a molecule, that has lost or gained one or more electrons. when an atom loses an electron and thus has more protons than electrons, the atom is a positively charged ion or cation. when an atom gains an electron and thus has more electrons than protons, the atom is a negatively charged ion or anion. cations and anions can form a crystalline lattice of neutral salts, such as the na + and cl− ions forming sodium chloride, or nacl. examples of polyatomic ions that do not split up during acid – base reactions are hydroxide ( oh− ) and phosphate ( po43− ). plasma is composed of gaseous matter that has been completely ionized, usually through high temperature. = = = acidity and basicity = = = a substance can often be classified as an acid or a base. there are several different theories which explain acid – base behavior. the simplest is arrhenius theory, which states that an acid is a substance that produces hydronium ions when it is dissolved in water, and a base is one that produces hydroxide ions when dissolved in water.
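As a small worked complement to the Arrhenius picture of acids and bases sketched above, the following Python lines compute pH from an assumed hydronium-ion concentration; the concentrations are illustrative, and pH itself is not defined in the passage.

# minimal sketch: pH = -log10([H3O+]), with illustrative concentrations
import math

for conc in (1e-3, 1e-7, 1e-11):   # mol/L, assumed example values
    ph = -math.log10(conc)
    kind = "acidic" if ph < 7 else "basic" if ph > 7 else "neutral"
    print(f"[H3O+] = {conc:.0e} mol/L  ->  pH = {ph:.1f} ({kind})")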
Question: A student mixed baking soda with vinegar and observed that the reaction was endothermic. When is a reaction endothermic?
A) when it is reversible
B) when it can be repeated
C) when it requires heat to make it happen
D) when energy is released by the process
|
C) when it requires heat to make it happen
|
Context:
becomes quite gentle. accordingly, in large basins, rivers in most cases begin as torrents with a variable flow, and end as gently flowing rivers with a comparatively regular discharge. the irregular flow of rivers throughout their course forms one of the main difficulties in devising works for mitigating inundations or for increasing the navigable capabilities of rivers. in tropical countries subject to periodical rains, the rivers are in flood during the rainy season and have hardly any flow during the rest of the year, while in temperate regions, where the rainfall is more evenly distributed throughout the year, evaporation causes the available rainfall to be much less in hot summer weather than in the winter months, so that the rivers fall to their low stage in the summer and are liable to be in flood in the winter. in fact, with a temperate climate, the year may be divided into a warm and a cold season, extending from may to october and from november to april in the northern hemisphere respectively ; the rivers are low and moderate floods are of rare occurrence during the warm period, and the rivers are high and subject to occasional heavy floods after a considerable rainfall during the cold period in most years. the only exceptions are rivers which have their sources amongst mountains clad with perpetual snow and are fed by glaciers ; their floods occur in the summer from the melting of snow and ice, as exemplified by the rhone above the lake of geneva, and the arve which joins it below. but even these rivers are liable to have their flow modified by the influx of tributaries subject to different conditions, so that the rhone below lyon has a more uniform discharge than most rivers, as the summer floods of the arve are counteracted to a great extent by the low stage of the saone flowing into the rhone at lyon, which has its floods in the winter when the arve, on the contrary, is low. another serious obstacle encountered in river engineering consists in the large quantity of detritus they bring down in flood - time, derived mainly from the disintegration of the surface layers of the hills and slopes in the upper parts of the valleys by glaciers, frost and rain. the power of a current to transport materials varies with its velocity, so that torrents with a rapid fall near the sources of rivers can carry down rocks, boulders and large stones, which are by degrees ground by attrition in their onward course into slate, gravel, sand and silt, simultaneously with the gradual reduction in fall, and, consequently, in the transporting force of the current. accordingly, under
navigable capabilities of rivers. in tropical countries subject to periodical rains, the rivers are in flood during the rainy season and have hardly any flow during the rest of the year, while in temperate regions, where the rainfall is more evenly distributed throughout the year, evaporation causes the available rainfall to be much less in hot summer weather than in the winter months, so that the rivers fall to their low stage in the summer and are liable to be in flood in the winter. in fact, with a temperate climate, the year may be divided into a warm and a cold season, extending from may to october and from november to april in the northern hemisphere respectively ; the rivers are low and moderate floods are of rare occurrence during the warm period, and the rivers are high and subject to occasional heavy floods after a considerable rainfall during the cold period in most years. the only exceptions are rivers which have their sources amongst mountains clad with perpetual snow and are fed by glaciers ; their floods occur in the summer from the melting of snow and ice, as exemplified by the rhone above the lake of geneva, and the arve which joins it below. but even these rivers are liable to have their flow modified by the influx of tributaries subject to different conditions, so that the rhone below lyon has a more uniform discharge than most rivers, as the summer floods of the arve are counteracted to a great extent by the low stage of the saone flowing into the rhone at lyon, which has its floods in the winter when the arve, on the contrary, is low. another serious obstacle encountered in river engineering consists in the large quantity of detritus they bring down in flood - time, derived mainly from the disintegration of the surface layers of the hills and slopes in the upper parts of the valleys by glaciers, frost and rain. the power of a current to transport materials varies with its velocity, so that torrents with a rapid fall near the sources of rivers can carry down rocks, boulders and large stones, which are by degrees ground by attrition in their onward course into slate, gravel, sand and silt, simultaneously with the gradual reduction in fall, and, consequently, in the transporting force of the current. accordingly, under ordinary conditions, most of the materials brought down from the high lands by torrential water courses are carried forward by the main river to the sea, or partially strewn over flat alluvial plains during floods ; the size of the materials forming the bed of the river or borne along by the stream is gradually reduced on proceeding sea
, lightning strikes, tornadoes, building fires, wildfires, and mass shootings disabling most of the system if not the entirety of it. geographic redundancy locations can be more than 621 miles ( 999 km ) continental, more than 62 miles apart and less than 93 miles ( 150 km ) apart, less than 62 miles apart, but not on the same campus, or different buildings that are more than 300 feet ( 91 m ) apart on the same campus. the following methods can reduce the risks of damage by a fire conflagration : large buildings at least 80 feet ( 24 m ) to 110 feet ( 34 m ) apart, but sometimes a minimum of 210 feet ( 64 m ) apart. : 9 high - rise buildings at least 82 feet ( 25 m ) apart : 12 open spaces clear of flammable vegetation within 200 feet ( 61 m ) on each side of objects different wings on the same building, in rooms that are separated by more than 300 feet ( 91 m ) different floors on the same wing of a building in rooms that are horizontally offset by a minimum of 70 feet ( 21 m ) with fire walls between the rooms that are on different floors two rooms separated by another room, leaving at least a 70 - foot gap between the two rooms there should be a minimum of two separated fire walls and on opposite sides of a corridor geographic redundancy is used by amazon web services ( aws ), google cloud platform ( gcp ), microsoft azure, netflix, dropbox, salesforce, linkedin, paypal, twitter, facebook, apple icloud, cisco meraki, and many others to provide geographic redundancy, high availability, fault tolerance and to ensure availability and reliability for their cloud services. as another example, to minimize risk of damage from severe windstorms or water damage, buildings can be located at least 2 miles ( 3. 2 km ) away from the shore, with an elevation of at least 5 feet ( 1. 5 m ) above sea level. for additional protection, they can be located at least 100 feet ( 30 m ) away from flood plain areas. = = functions of redundancy = = the two functions of redundancy are passive redundancy and active redundancy. both functions prevent performance decline from exceeding specification limits without human intervention using extra capacity. passive redundancy uses excess capacity to reduce the impact of component failures. one common form of passive redundancy is the extra strength of cabling and struts used in bridges.
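The distance tiers listed above for geographic redundancy can be checked programmatically; a minimal Python sketch using the standard haversine great-circle formula is shown below. The site coordinates and the way the tier thresholds are encoded are assumptions for illustration (the passage only gives the thresholds in miles).

# minimal sketch: classify two sites into rough geographic-redundancy tiers
# using the haversine great-circle distance; coordinates are made up
import math

def haversine_miles(lat1, lon1, lat2, lon2):
    # great-circle distance on a spherical earth (radius ~3959 miles)
    r = 3959.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def redundancy_tier(distance_miles):
    # thresholds follow the figures quoted in the passage (62, 93, 621 miles)
    if distance_miles > 621:
        return "continental"
    if distance_miles > 93:
        return "more than 93 miles apart"
    if distance_miles > 62:
        return "between 62 and 93 miles apart"
    return "less than 62 miles apart"

d = haversine_miles(40.7, -74.0, 39.0, -77.0)   # two assumed site locations
print(f"{d:.0f} miles -> {redundancy_tier(d)}")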
weather than in the winter months, so that the rivers fall to their low stage in the summer and are liable to be in flood in the winter. in fact, with a temperate climate, the year may be divided into a warm and a cold season, extending from may to october and from november to april in the northern hemisphere respectively ; the rivers are low and moderate floods are of rare occurrence during the warm period, and the rivers are high and subject to occasional heavy floods after a considerable rainfall during the cold period in most years. the only exceptions are rivers which have their sources amongst mountains clad with perpetual snow and are fed by glaciers ; their floods occur in the summer from the melting of snow and ice, as exemplified by the rhone above the lake of geneva, and the arve which joins it below. but even these rivers are liable to have their flow modified by the influx of tributaries subject to different conditions, so that the rhone below lyon has a more uniform discharge than most rivers, as the summer floods of the arve are counteracted to a great extent by the low stage of the saone flowing into the rhone at lyon, which has its floods in the winter when the arve, on the contrary, is low. another serious obstacle encountered in river engineering consists in the large quantity of detritus they bring down in flood - time, derived mainly from the disintegration of the surface layers of the hills and slopes in the upper parts of the valleys by glaciers, frost and rain. the power of a current to transport materials varies with its velocity, so that torrents with a rapid fall near the sources of rivers can carry down rocks, boulders and large stones, which are by degrees ground by attrition in their onward course into slate, gravel, sand and silt, simultaneously with the gradual reduction in fall, and, consequently, in the transporting force of the current. accordingly, under ordinary conditions, most of the materials brought down from the high lands by torrential water courses are carried forward by the main river to the sea, or partially strewn over flat alluvial plains during floods ; the size of the materials forming the bed of the river or borne along by the stream is gradually reduced on proceeding seawards, so that in the po river in italy, for instance, pebbles and gravel are found for about 140 miles below turin, sand along the next 100 miles, and silt and mud in the last 110 miles ( 176 km ). = = channelization = = the removal of obstructions, natural or artificial
approximately corresponds to the slope of the country it traverses ; as rivers rise close to the highest part of their basins, generally in hilly regions, their fall is rapid near their source and gradually diminishes, with occasional irregularities, until, in traversing plains along the latter part of their course, their fall usually becomes quite gentle. accordingly, in large basins, rivers in most cases begin as torrents with a variable flow, and end as gently flowing rivers with a comparatively regular discharge. the irregular flow of rivers throughout their course forms one of the main difficulties in devising works for mitigating inundations or for increasing the navigable capabilities of rivers. in tropical countries subject to periodical rains, the rivers are in flood during the rainy season and have hardly any flow during the rest of the year, while in temperate regions, where the rainfall is more evenly distributed throughout the year, evaporation causes the available rainfall to be much less in hot summer weather than in the winter months, so that the rivers fall to their low stage in the summer and are liable to be in flood in the winter. in fact, with a temperate climate, the year may be divided into a warm and a cold season, extending from may to october and from november to april in the northern hemisphere respectively ; the rivers are low and moderate floods are of rare occurrence during the warm period, and the rivers are high and subject to occasional heavy floods after a considerable rainfall during the cold period in most years. the only exceptions are rivers which have their sources amongst mountains clad with perpetual snow and are fed by glaciers ; their floods occur in the summer from the melting of snow and ice, as exemplified by the rhone above the lake of geneva, and the arve which joins it below. but even these rivers are liable to have their flow modified by the influx of tributaries subject to different conditions, so that the rhone below lyon has a more uniform discharge than most rivers, as the summer floods of the arve are counteracted to a great extent by the low stage of the saone flowing into the rhone at lyon, which has its floods in the winter when the arve, on the contrary, is low. another serious obstacle encountered in river engineering consists in the large quantity of detritus they bring down in flood - time, derived mainly from the disintegration of the surface layers of the hills and slopes in the upper parts of the valleys by glaciers, frost and rain. the power of a current to transport materials varies with its velocity, so that torrents with
radar signal transmit a return microwave signal. this causes the aircraft to show up more strongly on the radar screen. the radar which triggers the transponder and receives the return beam, usually mounted on top of the primary radar dish, is called the secondary surveillance radar. since radar cannot measure an aircraft ' s altitude with any accuracy, the transponder also transmits back the aircraft ' s altitude measured by its altimeter, and an id number identifying the aircraft, which is displayed on the radar screen. electronic countermeasures ( ecm ) – military defensive electronic systems designed to degrade enemy radar effectiveness, or deceive it with false information, to prevent enemies from locating local forces. it often consists of powerful microwave transmitters that can mimic enemy radar signals to create false target indications on the enemy radar screens. marine radar – an s or x band radar on ships used to detect nearby ships and obstructions like bridges. a rotating antenna sweeps a vertical fan - shaped beam of microwaves around the water surface surrounding the craft out to the horizon. weather radar – a doppler radar which maps weather precipitation intensities and wind speeds with the echoes returned from raindrops and their radial velocity by their doppler shift. phased - array radar – a radar set that uses a phased array, a computer - controlled antenna that can steer the radar beam quickly to point in different directions without moving the antenna. phased - array radars were developed by the military to track fast - moving missiles and aircraft. they are widely used in military equipment and are now spreading to civilian applications. synthetic aperture radar ( sar ) – a specialized airborne radar set that produces a high - resolution map of ground terrain. the radar is mounted on an aircraft or spacecraft and the radar antenna radiates a beam of radio waves sideways at right angles to the direction of motion, toward the ground. in processing the return radar signal, the motion of the vehicle is used to simulate a large antenna, giving the radar a higher resolution. ground - penetrating radar – a specialized radar instrument that is rolled along the ground surface in a cart and transmits a beam of radio waves into the ground, producing an image of subsurface objects. frequencies from 100 mhz to a few ghz are used. since radio waves cannot penetrate very far into earth, the depth of gpr is limited to about 50 feet. collision avoidance system – a short range radar or lidar system on an automobile or vehicle that detects if the vehicle is about to collide with an object and applies the brakes to
annual levels of us landfalling hurricane activity averaged over the last 11 years ( 1995 - 2005 ) are higher than those averaged over the previous 95 years ( 1900 - 1994 ). how, then, should we best predict hurricane activity rates for next year? based on the assumption that the higher rates will continue we use an optimal combination of averages over the long and short time - periods to produce a prediction that minimises mse.
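The idea described above of blending a long-period and a short-period average with a weight chosen to minimise mean squared error (MSE) can be illustrated in a few lines of Python; the synthetic landfall counts and the simple grid search below are assumptions for illustration and do not reproduce the authors' actual method or data.

# minimal sketch: blend a long-term and a short-term mean with a weight w
# chosen to minimise mse against synthetic future years; all numbers are made up
import numpy as np

rng = np.random.default_rng(0)
old = rng.poisson(1.5, size=95)      # synthetic 1900-1994 annual counts
recent = rng.poisson(2.2, size=11)   # synthetic 1995-2005 annual counts (higher rate)
future = rng.poisson(2.2, size=500)  # synthetic "next years" drawn from the recent rate

long_mean = np.concatenate([old, recent]).mean()
short_mean = recent.mean()

best_w, best_mse = None, np.inf
for w in np.linspace(0.0, 1.0, 101):
    pred = w * short_mean + (1 - w) * long_mean
    mse = np.mean((future - pred) ** 2)
    if mse < best_mse:
        best_w, best_mse = w, mse

print(f"optimal weight on the short-term mean: {best_w:.2f}, mse = {best_mse:.3f}")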
when fast radio burst ( frb ) waves propagate through the local ( < 1 pc ) environment of the frb source, electrons in the plasma undergo large - amplitude oscillations. the finite - amplitude effects cause the effective plasma frequency and cyclotron frequency to be dependent on the wave strength. the dispersion measure and rotation measure should therefore vary slightly from burst to burst for a repeating source, depending on the luminosity and frequency of the individual burst. furthermore, free - free absorption of strong waves is suppressed due to the accelerated electrons ' reduced energy exchange in coulomb collisions. this allows bright low - frequency bursts to propagate through an environment that would be optically thick to low - amplitude waves. given a large sample of bursts from a repeating source, it would be possible to use the deficit of low - frequency and low - luminosity bursts to infer the emission measure of the local intervening plasma and its distance from the source. information about the local environment will shed light on the nature of frb sources.
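For readers unfamiliar with the dispersion measure (DM) mentioned above, the standard cold-plasma dispersion delay between observing frequencies can be evaluated with a short Python sketch; the DM value and frequencies chosen here are illustrative assumptions, not values from the abstract.

# minimal sketch: frequency-dependent dispersion delay of a radio burst
# delay(nu) ~ 4.149 ms * DM * nu_GHz**-2, with DM in pc cm^-3 (standard cold-plasma result)
K_DM_MS = 4.149  # ms GHz^2 cm^3 / pc

def dispersion_delay_ms(dm, nu_ghz):
    return K_DM_MS * dm * nu_ghz ** -2

dm = 500.0  # pc cm^-3, assumed example value
for nu in (1.4, 0.8, 0.4):  # observing frequencies in GHz
    print(f"nu = {nu} GHz: delay = {dispersion_delay_ms(dm, nu):.1f} ms")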
hemisphere respectively ; the rivers are low and moderate floods are of rare occurrence during the warm period, and the rivers are high and subject to occasional heavy floods after a considerable rainfall during the cold period in most years. the only exceptions are rivers which have their sources amongst mountains clad with perpetual snow and are fed by glaciers ; their floods occur in the summer from the melting of snow and ice, as exemplified by the rhone above the lake of geneva, and the arve which joins it below. but even these rivers are liable to have their flow modified by the influx of tributaries subject to different conditions, so that the rhone below lyon has a more uniform discharge than most rivers, as the summer floods of the arve are counteracted to a great extent by the low stage of the saone flowing into the rhone at lyon, which has its floods in the winter when the arve, on the contrary, is low. another serious obstacle encountered in river engineering consists in the large quantity of detritus they bring down in flood - time, derived mainly from the disintegration of the surface layers of the hills and slopes in the upper parts of the valleys by glaciers, frost and rain. the power of a current to transport materials varies with its velocity, so that torrents with a rapid fall near the sources of rivers can carry down rocks, boulders and large stones, which are by degrees ground by attrition in their onward course into slate, gravel, sand and silt, simultaneously with the gradual reduction in fall, and, consequently, in the transporting force of the current. accordingly, under ordinary conditions, most of the materials brought down from the high lands by torrential water courses are carried forward by the main river to the sea, or partially strewn over flat alluvial plains during floods ; the size of the materials forming the bed of the river or borne along by the stream is gradually reduced on proceeding seawards, so that in the po river in italy, for instance, pebbles and gravel are found for about 140 miles below turin, sand along the next 100 miles, and silt and mud in the last 110 miles ( 176 km ). = = channelization = = the removal of obstructions, natural or artificial ( e. g., trunks of trees, boulders and accumulations of gravel ) from a river bed furnishes a simple and efficient means of increasing the discharging capacity of its channel. such removals will consequently lower the height of floods upstream. every impediment to the flow, in proportion to
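The statement above that a current's power to transport material varies with its velocity is often quantified by the classical "sixth-power law" (the mass of the largest particle a stream can move scales roughly as the sixth power of velocity); that rule of thumb is not stated in the passage, and the calibration numbers in the Python sketch below are purely illustrative.

# minimal sketch: sixth-power law for stream competence
# (a classical rule of thumb, not stated in the passage; numbers are illustrative)
def largest_movable_mass(velocity, reference_velocity=1.0, reference_mass=1.0):
    # reference: a 1 m/s current assumed able to move a 1 kg particle (assumed calibration)
    return reference_mass * (velocity / reference_velocity) ** 6

for v in (0.5, 1.0, 2.0, 4.0):
    print(f"v = {v} m/s -> largest movable particle ~ {largest_movable_mass(v):.3g} kg")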
the injuries of the inundations they have been designed to prevent, as the escape of floods from the raised river must occur sooner or later. inadequate planning controls which have permitted development on floodplains have been blamed for the flooding of domestic properties. channelization was done under the auspices or overall direction of engineers employed by the local authority or the national government. one of the most heavily channelized areas in the united states is west tennessee, where every major stream with one exception ( the hatchie river ) has been partially or completely channelized. channelization of a stream may be undertaken for several reasons. one is to make a stream more suitable for navigation or for navigation by larger vessels with deep draughts. another is to restrict water to a certain area of a stream ' s natural bottom lands so that the bulk of such lands can be made available for agriculture. a third reason is flood control, with the idea of giving a stream a sufficiently large and deep channel so that flooding beyond those limits will be minimal or nonexistent, at least on a routine basis. one major reason is to reduce natural erosion ; as a natural waterway curves back and forth, it usually deposits sand and gravel on the inside of the corners where the water flows slowly, and cuts sand, gravel, subsoil, and precious topsoil from the outside corners where it flows rapidly due to a change in direction. unlike sand and gravel, the topsoil that is eroded does not get deposited on the inside of the next corner of the river. it simply washes away. = = loss of wetlands = = channelization has several predictable and negative effects. one of them is loss of wetlands. wetlands are an excellent habitat for multiple forms of wildlife, and additionally serve as a " filter " for much of the world ' s surface fresh water. another is the fact that channelized streams are almost invariably straightened. for example, the channelization of florida ' s kissimmee river has been cited as a cause contributing to the loss of wetlands. this straightening causes the streams to flow more rapidly, which can, in some instances, vastly increase soil erosion. it can also increase flooding downstream from the channelized area, as larger volumes of water traveling more rapidly than normal can reach choke points over a shorter period of time than they otherwise would, with a net effect of flood control in one area coming at the expense of aggravated flooding in another. in addition, studies have shown that stream channelization results in declines of river fish populations. : 3 - 1ff a
Question: What is the primary energy source that drives all weather events, including precipitation, hurricanes, and tornados?
A) the Sun
B) the Moon
C) Earth's gravity
D) Earth's rotation
|
A) the Sun
|
Context:
participates as a consumer, resource, or both in consumer – resource interactions, which form the core of food chains or food webs. there are different trophic levels within any food web, with the lowest level being the primary producers ( or autotrophs ) such as plants and algae that convert energy and inorganic material into organic compounds, which can then be used by the rest of the community. at the next level are the heterotrophs, which are the species that obtain energy by breaking apart organic compounds from other organisms. heterotrophs that consume plants are primary consumers ( or herbivores ) whereas heterotrophs that consume herbivores are secondary consumers ( or carnivores ). and those that eat secondary consumers are tertiary consumers and so on. omnivorous heterotrophs are able to consume at multiple levels. finally, there are decomposers that feed on the waste products or dead bodies of organisms. on average, the total amount of energy incorporated into the biomass of a trophic level per unit of time is about one - tenth of the energy of the trophic level that it consumes. waste and dead material used by decomposers as well as heat lost from metabolism make up the other ninety percent of energy that is not consumed by the next trophic level. = = = biosphere = = = in the global ecosystem or biosphere, matter exists as different interacting compartments, which can be biotic or abiotic as well as accessible or inaccessible, depending on their forms and locations. for example, matter from terrestrial autotrophs are both biotic and accessible to other organisms whereas the matter in rocks and minerals are abiotic and inaccessible. a biogeochemical cycle is a pathway by which specific elements of matter are turned over or moved through the biotic ( biosphere ) and the abiotic ( lithosphere, atmosphere, and hydrosphere ) compartments of earth. there are biogeochemical cycles for nitrogen, carbon, and water. = = = conservation = = = conservation biology is the study of the conservation of earth ' s biodiversity with the aim of protecting species, their habitats, and ecosystems from excessive rates of extinction and the erosion of biotic interactions. it is concerned with factors that influence the maintenance, loss, and restoration of biodiversity and the science of sustaining evolutionary processes that engender genetic, population, species, and ecosystem diversity. the concern stems from estimates suggesting that up to 50 % of all species on the planet
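The roughly ten-percent energy transfer between trophic levels described above can be turned into a tiny worked example; the starting primary-production value and the number of levels in this Python sketch are assumptions for illustration.

# minimal sketch: energy passed up a food chain assuming ~10% transfer per level
# the starting value (10,000 units at the producer level) is an assumed example
energy = 10_000.0
levels = ["primary producers", "primary consumers", "secondary consumers", "tertiary consumers"]
for level in levels:
    print(f"{level}: {energy:.0f} units")
    energy *= 0.10   # ~90% lost as waste, decomposer food and metabolic heat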
##physical processes which take place in human beings as they make sense of information received through the visual system. the subject of the image. when developing an imaging system, designers must consider the observables associated with the subjects which will be imaged. these observables generally take the form of emitted or reflected energy, such as electromagnetic energy or mechanical energy. the capture device. once the observables associated with the subject are characterized, designers can then identify and integrate the technologies needed to capture those observables. for example, in the case of consumer digital cameras, those technologies include optics for collecting energy in the visible portion of the electromagnetic spectrum, and electronic detectors for converting the electromagnetic energy into an electronic signal. the processor. for all digital imaging systems, the electronic signals produced by the capture device must be manipulated by an algorithm which formats the signals so they can be displayed as an image. in practice, there are often multiple processors involved in the creation of a digital image. the display. the display takes the electronic signals which have been manipulated by the processor and renders them on some visual medium. examples include paper ( for printed, or " hard copy " images ), television, computer monitor, or projector. note that some imaging scientists will include additional " links " in their description of the imaging chain. for example, some will include the " source " of the energy which " illuminates " or interacts with the subject of the image. others will include storage and / or transmission systems. = = subfields = = subfields within imaging science include : image processing, computer vision, 3d computer graphics, animations, atmospheric optics, astronomical imaging, biological imaging, digital image restoration, digital imaging, color science, digital photography, holography, magnetic resonance imaging, medical imaging, microdensitometry, optics, photography, remote sensing, radar imaging, radiometry, silver halide, ultrasound imaging, photoacoustic imaging, thermal imaging, visual perception, and various printing technologies. = = methodologies = = acoustic imaging coherent imaging uses an active coherent illumination source, such as in radar, synthetic aperture radar ( sar ), medical ultrasound and optical coherence tomography ; non - coherent imaging systems include fluorescent microscopes, optical microscopes, and telescopes. chemical imaging, the simultaneous measurement of spectra and pictures digital imaging, creating digital images, generally by scanning or through digital photography disk image, a file which contains the exact content of a data storage medium document imaging, replicating documents commonly
digest foods outside their bodies, secreting digestive enzymes that break down large food molecules before absorbing them through their cell membranes. many fungi are also saprobes, feeding on dead organic matter, making them important decomposers in ecological systems. animals are multicellular eukaryotes. with few exceptions, animals consume organic material, breathe oxygen, are able to move, can reproduce sexually, and grow from a hollow sphere of cells, the blastula, during embryonic development. over 1. 5 million living animal species have been described – of which around 1 million are insects – but it has been estimated there are over 7 million animal species in total. they have complex interactions with each other and their environments, forming intricate food webs. = = = viruses = = = viruses are submicroscopic infectious agents that replicate inside the cells of organisms. viruses infect all types of life forms, from animals and plants to microorganisms, including bacteria and archaea. more than 6, 000 virus species have been described in detail. viruses are found in almost every ecosystem on earth and are the most numerous type of biological entity. the origins of viruses in the evolutionary history of life are unclear : some may have evolved from plasmids – pieces of dna that can move between cells – while others may have evolved from bacteria. in evolution, viruses are an important means of horizontal gene transfer, which increases genetic diversity in a way analogous to sexual reproduction. because viruses possess some but not all characteristics of life, they have been described as " organisms at the edge of life ", and as self - replicators. = = ecology = = ecology is the study of the distribution and abundance of life, the interaction between organisms and their environment. = = = ecosystems = = = the community of living ( biotic ) organisms in conjunction with the nonliving ( abiotic ) components ( e. g., water, light, radiation, temperature, humidity, atmosphere, acidity, and soil ) of their environment is called an ecosystem. these biotic and abiotic components are linked together through nutrient cycles and energy flows. energy from the sun enters the system through photosynthesis and is incorporated into plant tissue. by feeding on plants and on one another, animals move matter and energy through the system. they also influence the quantity of plant and microbial biomass present. by breaking down dead organic matter, decomposers release carbon back to the atmosphere and facilitate nutrient cycling by converting nutrients stored in dead biomass back to a form
eat them. plants and other photosynthetic organisms are at the base of most food chains because they use the energy from the sun and nutrients from the soil and atmosphere, converting them into a form that can be used by animals. this is what ecologists call the first trophic level. the modern forms of the major staple foods, such as hemp, teff, maize, rice, wheat and other cereal grasses, pulses, bananas and plantains, as well as hemp, flax and cotton grown for their fibres, are the outcome of prehistoric selection over thousands of years from among wild ancestral plants with the most desirable characteristics. botanists study how plants produce food and how to increase yields, for example through plant breeding, making their work important to humanity ' s ability to feed the world and provide food security for future generations. botanists also study weeds, which are a considerable problem in agriculture, and the biology and control of plant pathogens in agriculture and natural ecosystems. ethnobotany is the study of the relationships between plants and people. when applied to the investigation of historical plant – people relationships ethnobotany may be referred to as archaeobotany or palaeoethnobotany. some of the earliest plant - people relationships arose between the indigenous people of canada in identifying edible plants from inedible plants. this relationship the indigenous people had with plants was recorded by ethnobotanists. = = plant biochemistry = = plant biochemistry is the study of the chemical processes used by plants. some of these processes are used in their primary metabolism like the photosynthetic calvin cycle and crassulacean acid metabolism. others make specialised materials like the cellulose and lignin used to build their bodies, and secondary products like resins and aroma compounds. plants and various other groups of photosynthetic eukaryotes collectively known as " algae " have unique organelles known as chloroplasts. chloroplasts are thought to be descended from cyanobacteria that formed endosymbiotic relationships with ancient plant and algal ancestors. chloroplasts and cyanobacteria contain the blue - green pigment chlorophyll a. chlorophyll a ( as well as its plant and green algal - specific cousin chlorophyll b ) absorbs light in the blue - violet and orange / red parts of the spectrum while reflecting and transmitting the green light that we see as the characteristic colour
system is the main communication channel for air traffic control. for most communication in overland flights in air corridors a vhf - am system using channels between 108 and 137 mhz in the vhf band is used. this system has a typical transmission range of 200 miles ( 320 km ) for aircraft flying at cruising altitude. for flights in more remote areas, such as transoceanic airline flights, aircraft use the hf band or channels on the inmarsat or iridium satphone satellites. military aircraft also use a dedicated uhf - am band from 225. 0 to 399. 95 mhz. marine radio – medium - range transceivers on ships, used for ship - to - ship, ship - to - air, and ship - to - shore communication with harbormasters. they use fm channels between 156 and 174 mhz in the vhf band with up to 25 watts power, giving them a range of about 60 miles ( 97 km ). some channels are half - duplex and some are full - duplex, to be compatible with the telephone network, to allow users to make telephone calls through a marine operator. amateur radio – long - range half - duplex two - way radio used by hobbyists for non - commercial purposes : recreational radio contacts with other amateurs, volunteer emergency communication during disasters, contests, and experimentation. radio amateurs must hold an amateur radio license and are given a unique callsign that must be used as an identifier in transmissions. amateur radio is restricted to small frequency bands, the amateur radio bands, spaced throughout the radio spectrum starting at 136 khz. within these bands, amateurs are allowed the freedom to transmit on any frequency using a wide variety of voice modulation methods, along with other forms of communication, such as slow - scan television ( sstv ), and radioteletype ( rtty ). additionally, amateurs are among the only radio operators still using morse code radiotelegraphy. = = = = one - way voice communication = = = = one way, unidirectional radio transmission is called simplex. baby monitor – a crib - side appliance for parents of infants that transmits the baby ' s sounds to a receiver carried by the parent, so they can monitor the baby while they are in other parts of the house. the wavebands used vary by region, but analog baby monitors generally transmit with low power in the 16, 9. 3 – 49. 9 or 900 mhz wavebands, and digital systems in the 2. 4 ghz waveband. many baby monitors have du
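The quoted 200-mile (320 km) VHF range for aircraft at cruising altitude is roughly the line-of-sight radio horizon; the Python sketch below uses the common approximation d ≈ 3.57 √h (d in km, antenna height h in metres) with an assumed cruising altitude and tower height, so the exact numbers are illustrative rather than taken from the passage.

# minimal sketch: geometric line-of-sight radio horizon, d_km ~ 3.57 * sqrt(h_m)
# cruising altitude and ground-station height below are assumed examples
import math

def horizon_km(height_m):
    return 3.57 * math.sqrt(height_m)

aircraft_m = 10_700.0          # ~35,000 ft cruising altitude (assumed)
ground_station_m = 30.0        # assumed tower height
d_km = horizon_km(aircraft_m) + horizon_km(ground_station_m)
print(f"combined radio horizon ~ {d_km:.0f} km ({d_km * 0.621:.0f} miles)")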
side aspect rcs ), compared with three or more on most other types. while writing about radar systems, authors simon kingsley and shaun quegan singled out the vulcan ' s shape as acting to reduce the rcs. in contrast, the tupolev tu - 95 russian long - range bomber ( nato reporting name ' bear ' ) was conspicuous on radar. it is now known that propellers and jet turbine blades produce a bright radar image ; the bear has four pairs of large 18 - foot ( 5. 6 m ) diameter contra - rotating propellers. another important factor is internal construction. some stealth aircraft have skin that is radar transparent or absorbing, behind which are structures termed reentrant triangles. radar waves penetrating the skin get trapped in these structures, reflecting off the internal faces and losing energy. this method was first used on the blackbird series : a - 12, yf - 12a, lockheed sr - 71 blackbird. the most efficient way to reflect radar waves back to the emitting radar is with orthogonal metal plates, forming a corner reflector consisting of either a dihedral ( two plates ) or a trihedral ( three orthogonal plates ). this configuration occurs in the tail of a conventional aircraft, where the vertical and horizontal components of the tail are set at right angles. stealth aircraft such as the f - 117 use a different arrangement, tilting the tail surfaces to reduce corner reflections formed between them. a more radical method is to omit the tail, as in the b - 2 spirit. the b - 2 ' s clean, low - drag flying wing configuration gives it exceptional range and reduces its radar profile. the flying wing design most closely resembles a so - called infinite flat plate ( as vertical control surfaces dramatically increase rcs ), the perfect stealth shape, as it would have no angles to reflect back radar waves. in addition to altering the tail, stealth design must bury the engines within the wing or fuselage, or in some cases where stealth is applied to an extant aircraft, install baffles in the air intakes, so that the compressor blades are not visible to radar. a stealthy shape must be devoid of complex bumps or protrusions of any kind, meaning that weapons, fuel tanks, and other stores must not be carried externally. any stealthy vehicle becomes un - stealthy when a door or hatch opens. parallel alignment of edges or even surfaces is also often used in stealth designs. the technique involves using a small number of edge orientations in the shape of the structure. for example, on the f - 22
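To give a feel for why flat, radar-facing surfaces and corner geometries matter so much in the discussion above, the sketch below evaluates the textbook peak radar cross-section of a flat plate at normal incidence, sigma = 4 * pi * A^2 / lambda^2; the plate area and radar frequency are assumed illustrative values and are not taken from the passage.

# minimal sketch: peak radar cross-section (rcs) of a flat plate at normal incidence
# sigma = 4 * pi * A**2 / lambda**2; area and frequency are assumed example values
import math

c = 3.0e8                    # speed of light, m/s
freq_hz = 10e9               # X-band radar, 10 GHz (assumed)
wavelength = c / freq_hz     # ~0.03 m
area_m2 = 1.0                # 1 square metre of flat plate facing the radar (assumed)

sigma = 4 * math.pi * area_m2 ** 2 / wavelength ** 2
print(f"peak rcs ~ {sigma:.0f} m^2 ({10 * math.log10(sigma):.1f} dBsm)")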
the most puzzling issue in the foundations of quantum mechanics is perhaps that of the status of the wave function of a system in a quantum universe. is the wave function objective or subjective? does it represent the physical state of the system or merely our information about the system? and if the former, does it provide a complete description of the system or only a partial description? we shall address these questions here mainly from a bohmian perspective, and shall argue that part of the difficulty in ascertaining the status of the wave function in quantum mechanics arises from the fact that there are two different sorts of wave functions involved. the most fundamental wave function is that of the universe. from it, together with the configuration of the universe, one can define the wave function of a subsystem. we argue that the fundamental wave function, the wave function of the universe, has a law - like character.
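The construction alluded to above, defining a subsystem's wave function from the universal wave function plus the configuration, is usually written as the Bohmian conditional wave function; the LaTeX lines below give the standard textbook form and are added only as a reading aid, not as part of the abstract.

% conditional wave function of a subsystem with coordinates x, given the
% actual bohmian configuration Y_t of the rest of the universe
\psi_t(x) \;=\; \Psi_t(x, Y_t),
\qquad \text{where } \Psi_t(x, y) \text{ is the universal wave function.}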
as a traditional tool of external assistance, crutches play an important role in society. they have a wide range of applications to help either the elderly and disabled to walk or to treat certain illnesses or for post - operative rehabilitation. but there are many different types of crutches, including shoulder crutches and elbow crutches. how to choose has become an issue that deserves to be debated. because while crutches help people walk, they also have an impact on the body. inappropriate choice of crutches or long - term misuse can lead to problems such as scoliosis. previous studies were mainly experimental measurements or the construction of dynamic models to calculate the load on joints with crutches. these studies focus only on the level of the joints, ignoring the role that muscles play in this process. although some also take into account the degree of muscle activation, there is still a lack of quantitative analysis. the traditional dynamic model can be used to calculate the load on each joint. however, due to the activation of the muscle, this situation only causes part of the load transmitted to the joint, and the work of the chair will compensate the other part of the load. analysis at the muscle level allows a better understanding of the impact of crutches on the body. by comparing the levels of activation of the trunk muscles, it was found that the use of crutches for walking, especially a single crutch, can cause a large difference in the activation of the back muscles on the left and right sides, and this difference will cause muscle degeneration for a long time, leading to scoliosis. in this article taking scoliosis as an example, by analyzing the muscles around the spine, we can better understand the pathology and can better prevent diseases. the objective of this article is to analyze normal walking compared to walking with one or two crutches using opensim software to obtain the degree of activation of different muscles in order to analyze the impact of crutches on the body.
##icellular ancestor of plantae. unlike glaucophytes, the other algal clades such as red and green algae are multicellular. green algae comprise three major clades : chlorophytes, coleochaetophytes, and stoneworts. fungi are eukaryotes that digest foods outside their bodies, secreting digestive enzymes that break down large food molecules before absorbing them through their cell membranes. many fungi are also saprobes, feeding on dead organic matter, making them important decomposers in ecological systems. animals are multicellular eukaryotes. with few exceptions, animals consume organic material, breathe oxygen, are able to move, can reproduce sexually, and grow from a hollow sphere of cells, the blastula, during embryonic development. over 1. 5 million living animal species have been described – of which around 1 million are insects – but it has been estimated there are over 7 million animal species in total. they have complex interactions with each other and their environments, forming intricate food webs. = = = viruses = = = viruses are submicroscopic infectious agents that replicate inside the cells of organisms. viruses infect all types of life forms, from animals and plants to microorganisms, including bacteria and archaea. more than 6, 000 virus species have been described in detail. viruses are found in almost every ecosystem on earth and are the most numerous type of biological entity. the origins of viruses in the evolutionary history of life are unclear : some may have evolved from plasmids – pieces of dna that can move between cells – while others may have evolved from bacteria. in evolution, viruses are an important means of horizontal gene transfer, which increases genetic diversity in a way analogous to sexual reproduction. because viruses possess some but not all characteristics of life, they have been described as " organisms at the edge of life ", and as self - replicators. = = ecology = = ecology is the study of the distribution and abundance of life, the interaction between organisms and their environment. = = = ecosystems = = = the community of living ( biotic ) organisms in conjunction with the nonliving ( abiotic ) components ( e. g., water, light, radiation, temperature, humidity, atmosphere, acidity, and soil ) of their environment is called an ecosystem. these biotic and abiotic components are linked together through nutrient cycles and energy flows. energy from the sun enters the system through photosynthesis and is incorporated into plant tissue
of cells = = = autologous : the donor and the recipient of the cells are the same individual. cells are harvested, cultured or stored, and then reintroduced to the host. as a result of the host ' s own cells being reintroduced, an antigenic response is not elicited. the body ' s immune system recognizes these re - implanted cells as its own, and does not target them for attack. autologous cell dependence on host cell health and donor site morbidity may be deterrents to their use. adipose - derived and bone marrow - derived mesenchymal stem cells are commonly autologous in nature, and can be used in a myriad of ways, from helping repair skeletal tissue to replenishing beta cells in diabetic patients. allogenic : cells are obtained from the body of a donor of the same species as the recipient. while there are some ethical constraints to the use of human cells for in vitro studies ( i. e. human brain tissue chimera development ), the employment of dermal fibroblasts from human foreskin demonstrates an immunologically safe and thus a viable choice for allogenic tissue engineering of the skin. xenogenic : these are cells isolated from a species other than that of the recipient. a notable example of xenogeneic tissue utilization is cardiovascular implant construction via animal cells. chimeric human - animal farming raises ethical concerns around the potential for improved consciousness from implanting human organs in animals. syngeneic or isogenic : these cells describe those borne from identical genetic code. this imparts an immunologic benefit similar to autologous cell lines ( see above ). autologous cells can be considered syngeneic, but the classification also extends to non - autologously derived cells such as those from an identical twin, from genetically identical ( cloned ) research models, or induced stem cells ( isc ) as related to the donor. = = = stem cells = = = stem cells are undifferentiated cells with the ability to divide in culture and give rise to different forms of specialized cells. stem cells are divided into " adult " and " embryonic " stem cells according to their source. while there is still a large ethical debate related to the use of embryonic stem cells, it is thought that another alternative source – induced pluripotent stem cells – may be useful for the repair of diseased or damaged tissues, or may be used to grow new organs. totipotent cells
Question: Which system absorbs and carries food from the digestive system to the rest of the body?
A) nervous system
B) muscular system
C) circulatory system
D) respiratory system
|
C) circulatory system
|
Context:
outer satellites of the planets have distant, eccentric orbits that can be highly inclined or even retrograde relative to the equatorial planes of their planets. these irregular orbits cannot have formed by circumplanetary accretion and are likely products of early capture from heliocentric orbit. the irregular satellites may be the only small bodies remaining which are still relatively near their formation locations within the giant planet region. the study of the irregular satellites provides a unique window on processes operating in the young solar system and allows us to probe possible planet formation mechanisms and the composition of the solar nebula between the rocky objects in the main asteroid belt and the very volatile rich objects in the kuiper belt. the gas and ice giant planets all appear to have very similar irregular satellite systems irrespective of their mass or formation timescales and mechanisms. water ice has been detected on some of the outer satellites of saturn and neptune whereas none has been observed on jupiter ' s outer satellites.
planetary systems can evolve dynamically even after the full growth of the planets themselves. there is actually circumstantial evidence that most planetary systems become unstable after the disappearance of gas from the protoplanetary disk. these instabilities can be due to the original system being too crowded and too closely packed or to external perturbations such as tides, planetesimal scattering, or torques from distant stellar companions. the solar system was not exceptional in this sense. in its inner part, a crowded system of planetary embryos became unstable, leading to a series of mutual impacts that built the terrestrial planets on a timescale of ~ 100 my. in its outer part, the giant planets became temporarily unstable and their orbital configuration expanded under the effect of mutual encounters. a planet might have been ejected in this phase. thus, the orbital distributions of planetary systems that we observe today, both solar and extrasolar ones, can be different from those emerging from the formation process, and it is important to consider possible long - term evolutionary effects to connect the two.
the gas giant planets in the solar system have a retinue of icy moons, and we expect giant exoplanets to have similar satellite systems. if a jupiter - like planet were to migrate toward its parent star the icy moons orbiting it would evaporate, creating atmospheres and possible habitable surface oceans. here, we examine how long the surface ice and possible oceans would last before being hydrodynamically lost to space. the hydrodynamic loss rate from the moons is determined, in large part, by the stellar flux available for absorption, which increases as the giant planet and icy moons migrate closer to the star. at some planet - star distance the stellar flux incident on the icy moons becomes so great that they enter a runaway greenhouse state. this runaway greenhouse state rapidly transfers all available surface water to the atmosphere as vapor, where it is easily lost from the small moons. however, for icy moons of ganymede ' s size around a sun - like star we found that surface water ( either ice or liquid ) can persist indefinitely outside the runaway greenhouse orbital distance. in contrast, the surface water on smaller moons of europa ' s size will only persist on timescales greater than 1 gyr at distances ranging 1. 49 to 0. 74 au around a sun - like star for bond albedos of 0. 2 and 0. 8, where the lower albedo becomes relevant if ice melts. consequently, small moons can lose their icy shells, which would create a torus of h atoms around their host planet that might be detectable in future observations.
armed with an astrolabe and kepler ' s laws one can arrive at accurate estimates of the orbits of planets.
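The remark above about Kepler's laws can be made concrete: for a body orbiting the Sun, Kepler's third law gives the semi-major axis a (in AU) from the period P (in years) via a^3 = P^2. The Python sketch below applies it to a few familiar periods; the planet list is only an illustration.

# minimal sketch: kepler's third law for solar orbits, a_AU**3 = P_years**2
periods_years = {"mercury": 0.241, "venus": 0.615, "earth": 1.0, "mars": 1.881}

for name, p in periods_years.items():
    a = p ** (2.0 / 3.0)   # semi-major axis in astronomical units
    print(f"{name}: P = {p} yr -> a = {a:.2f} au")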
also launched missions to mercury in 2004, with the messenger probe demonstrating the first use of a solar sail. nasa also launched probes to the outer solar system starting in the 1960s. pioneer 10 was the first probe to the outer planets, flying by jupiter, while pioneer 11 provided the first close - up view of the planet. both probes became the first objects to leave the solar system. the voyager program launched in 1977, conducting flybys of jupiter and saturn, neptune, and uranus on a trajectory to leave the solar system. the galileo spacecraft, deployed from the space shuttle flight sts - 34, was the first spacecraft to orbit jupiter, discovering evidence of subsurface oceans on europa and observing that the moon may hold ice or liquid water. a joint nasa - european space agency - italian space agency mission, cassini – huygens, was sent to saturn ' s moon titan, which, along with mars and europa, are the only celestial bodies in the solar system suspected of being capable of harboring life. cassini discovered three new moons of saturn and the huygens probe entered titan ' s atmosphere. the mission discovered evidence of liquid hydrocarbon lakes on titan and subsurface water oceans on the moon of enceladus, which could harbor life. finally launched in 2006, the new horizons mission was the first spacecraft to visit pluto and the kuiper belt. beyond interplanetary probes, nasa has launched many space telescopes. launched in the 1960s, the orbiting astronomical observatories were nasa ' s first orbital telescopes, providing ultraviolet, gamma - ray, x - ray, and infrared observations. nasa launched the orbiting geophysical observatory in the 1960s and 1970s to look down at earth and observe its interactions with the sun. the uhuru satellite was the first dedicated x - ray telescope, mapping 85 % of the sky and discovering a large number of black holes. launched in the 1990s and early 2000s, the great observatories are among nasa ' s most powerful telescopes. the hubble space telescope was launched in 1990 on sts - 31 from the discovery and could view galaxies 15 billion light years away. a major defect in the telescope ' s mirror could have crippled the program, had nasa not used computer enhancement to compensate for the imperfection and launched five space shuttle servicing flights to replace the damaged components. the compton gamma ray observatory was launched from the atlantis on sts - 37 in 1991, discovering a possible source of antimatter at the center of the milky way and observing that the majority of gamma - ray bursts
three planets with minimum masses less than 10 earth masses orbit the star hd 40307, suggesting these planets may be rocky. however, with only radial velocity data, it is impossible to determine if these planets are rocky or gaseous. here we exploit various dynamical features of the system in order to assess the physical properties of the planets. observations allow for circular orbits, but a numerical integration shows that the eccentricities must be at least 0. 0001. also, planets b and c are so close to the star that tidal effects are significant. if planet b has tidal parameters similar to the terrestrial planets in the solar system and a remnant eccentricity larger than 0. 001, then, going back in time, the system would have been unstable within the lifetime of the star ( which we estimate to be 6. 1 + / - 1. 6 gyr ). moreover, if the eccentricities are that large and the inner planet is rocky, then its tidal heating may be an order of magnitude greater than extremely volcanic io, on a per unit surface area basis. if planet b is not terrestrial, e. g. neptune - like, these physical constraints would not apply. this analysis suggests the planets are not terrestrial - like, and are more like our giant planets. in either case, we find that the planets probably formed at larger radii and migrated early - on ( via disk interactions ) into their current orbits. this study demonstrates how the orbital and dynamical properties of exoplanet systems may be used to constrain the planets ' physical properties.
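The link between a small remnant eccentricity and strong tidal heating can be made explicit with a standard equilibrium-tide expression (a common textbook form, not quoted in the abstract), in which the heating rate scales with the square of the eccentricity and steeply with the orbital distance:

$\dot{E}_{\rm tide} \simeq \frac{21}{2}\,\frac{k_2}{Q}\,\frac{G M_*^2 R_p^5\, n\, e^2}{a^6}, \qquad h_{\rm tide} = \frac{\dot{E}_{\rm tide}}{4\pi R_p^2},$

where $k_2/Q$ is the planet's tidal response, $n = 2\pi/P$ is the mean motion, $R_p$ is the planetary radius, and $h_{\rm tide}$ is the per-unit-surface-area heating flux that the abstract compares with io's.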
recent surveys have revealed a lack of close - in planets around evolved stars more massive than 1. 2 msun. such planets are common around solar - mass stars. we have calculated the orbital evolution of planets around stars with a range of initial masses, and have shown how planetary orbits are affected by the evolution of the stars all the way to the tip of the red giant branch ( rgb ). we find that tidal interaction can lead to the engulfment of close - in planets by evolved stars. the engulfment is more efficient for more - massive planets and less - massive stars. these results may explain the observed semi - major axis distribution of planets around evolved stars with masses larger than 1. 5 msun. our results also suggest that massive planets may form more efficiently around intermediate - mass stars.
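An order-of-magnitude form of the stellar tidal decay rate (not taken from the paper; the numerical prefactor depends on the tidal model adopted) makes the quoted trends explicit:

$\frac{1}{a}\frac{da}{dt} \;\sim\; -\,\frac{k_{2,*}}{Q_*}\,\frac{M_p}{M_*}\left(\frac{R_*}{a}\right)^{5} n,$

with $n$ the orbital mean motion; the rate grows with the planet's mass and, through the $(R_*/a)^5$ factor, becomes dramatically faster as the star expands on the rgb, which is why engulfment is more efficient for more-massive planets and for stars that reach larger radii relative to the orbit.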
three major planets, venus, earth, and mercury formed out of the solar nebula. a fourth planetesimal, theia, also formed near earth where it collided in a giant impact, rebounding as the planet mars. during this impact earth lost $\approx 4\%$ of its crust and mantle that is now found on mars and the moon. at the antipode of the giant impact, $\approx 60\%$ of earth ' s crust, atmosphere, and a large amount of mantle were ejected into space forming the moon. the lost crust never reformed and became the earth ' s ocean basins. the theia impact site corresponds to the indian ocean gravitational anomaly on earth and the hellas basin on mars. the dynamics of the giant impact are consistent with the rotational rates and axial tilts of both earth and mars. the giant impact removed sufficient co$_2$ from earth ' s atmosphere to avoid a runaway greenhouse effect, initiated plate tectonics, and gave life time to form near geothermal vents at the continental margins. mercury formed near venus where, on a close approach, it was slingshot into the sun ' s convective zone, losing 94\% of its mass, much of which remains there today. black carbon, from co$_2$ decomposed by the intense heat, is still found on the surface of mercury. arriving at 616 km / s, mercury dramatically altered the sun ' s rotational energy, explaining both its anomalously slow rotation rate and axial tilt. these results are quantitatively supported by mass balances, the current locations of the terrestrial planets, and the orientations of their major orbital axes.
a 4 mj planet with a 15. 8 - day orbital period has been detected from very precise radial velocity measurements with the coralie echelle spectrograph. a second remote and more massive companion has also been detected. all the planetary companions so far detected in orbit closer than 0. 08 au have a parent star with a statistically higher metal content compared to the metallicity distribution of other stars with planets. different processes occurring during their formation may provide a possible explanation for this observation.
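To see how a radial velocity amplitude translates into such a minimum mass, here is a small sketch of the keplerian semi-amplitude formula (illustrative only; the solar-mass host and circular orbit are assumptions, not values from the passage):

import math

G = 6.674e-11          # gravitational constant, SI
M_SUN = 1.989e30       # kg
M_JUP = 1.898e27       # kg
DAY = 86400.0          # s

def rv_semi_amplitude(m_planet_mjup, period_days, m_star_msun=1.0, ecc=0.0, sin_i=1.0):
    # stellar radial-velocity semi-amplitude (m/s) for a keplerian orbit
    mp = m_planet_mjup * M_JUP
    ms = m_star_msun * M_SUN
    p = period_days * DAY
    return ((2.0 * math.pi * G / p) ** (1.0 / 3.0) * mp * sin_i
            / ((ms + mp) ** (2.0 / 3.0) * math.sqrt(1.0 - ecc ** 2)))

# a 4 mj planet on a 15.8-day circular orbit around a 1 msun star would
# swing the star by roughly 300 m/s
print(rv_semi_amplitude(4.0, 15.8))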
both probes became the first objects to leave the solar system. the voyager program launched in 1977, conducting flybys of jupiter, saturn, uranus, and neptune on a trajectory to leave the solar system. the galileo spacecraft, deployed from the space shuttle flight sts - 34, was the first spacecraft to orbit jupiter, discovering evidence of subsurface oceans on europa and observing that the moon may hold ice or liquid water. a joint nasa - european space agency - italian space agency mission, cassini - huygens, was sent to saturn ' s moon titan, which, along with mars and europa, is among the only celestial bodies in the solar system suspected of being capable of harboring life. cassini discovered three new moons of saturn and the huygens probe entered titan ' s atmosphere. the mission discovered evidence of liquid hydrocarbon lakes on titan and subsurface water oceans on the moon enceladus, which could harbor life. finally, launched in 2006, the new horizons mission was the first spacecraft to visit pluto and the kuiper belt. beyond interplanetary probes, nasa has launched many space telescopes. launched in the 1960s, the orbiting astronomical observatories were nasa ' s first orbital telescopes, providing ultraviolet, gamma - ray, x - ray, and infrared observations. nasa launched the orbiting geophysical observatory in the 1960s and 1970s to look down at earth and observe its interactions with the sun. the uhuru satellite was the first dedicated x - ray telescope, mapping 85 % of the sky and discovering a large number of black holes. launched in the 1990s and early 2000s, the telescopes of the great observatories program are among nasa ' s most powerful. the hubble space telescope was launched in 1990 on sts - 31 from the discovery and could view galaxies 15 billion light years away. a major defect in the telescope ' s mirror could have crippled the program, had nasa not used computer enhancement to compensate for the imperfection and launched five space shuttle servicing flights to replace the damaged components. the compton gamma ray observatory was launched from the atlantis on sts - 37 in 1991, discovering a possible source of antimatter at the center of the milky way and observing that the majority of gamma - ray bursts occur outside of the milky way galaxy. the chandra x - ray observatory was launched from the columbia on sts - 93 in 1999, observing black holes, quasars, supernovae, and dark matter. it provided critical observations on the sagittarius a * black hole at the center of the milky way galaxy and
Question: Why do planets stay in orbit around the Sun?
A) attraction of gravity
B) effect of inertia
C) frictional force
D) rotational force
|
A) attraction of gravity
|
Context:
all christian authors held that the earth was round. athenagoras, an eastern christian writing around the year 175 ad, said that the earth was spherical. methodius ( c. 290 ad ), an eastern christian writing against " the theory of the chaldeans and the egyptians " said : " let us first lay bare... the theory of the chaldeans and the egyptians. they say that the circumference of the universe is likened to the turnings of a well - rounded globe, the earth being a central point. they say that since its outline is spherical,... the earth should be the center of the universe, around which the heaven is whirling. " arnobius, another eastern christian writing sometime around 305 ad, described the round earth : " in the first place, indeed, the world itself is neither right nor left. it has neither upper nor lower regions, nor front nor back. for whatever is round and bounded on every side by the circumference of a solid sphere, has no beginning or end... " other advocates of a round earth included eusebius, hilary of poitiers, irenaeus, hippolytus of rome, firmicus maternus, ambrose, jerome, prudentius, favonius eulogius, and others. the only exceptions to this consensus up until the mid - fourth century were theophilus of antioch and lactantius, both of whom held anti - hellenistic views and associated the round - earth view with pagan cosmology. lactantius, a western christian writer and advisor to the first christian roman emperor, constantine, writing sometime between 304 and 313 ad, ridiculed the notion of antipodes and the philosophers who fancied that " the universe is round like a ball. they also thought that heaven revolves in accordance with the motion of the heavenly bodies.... for that reason, they constructed brass globes, as though after the figure of the universe. " the influential theologian and philosopher saint augustine, one of the four great church fathers of the western church, similarly objected to the " fable " of antipodes : but as to the fable that there are antipodes, that is to say, men on the opposite side of the earth, where the sun rises when it sets to us, men who walk with their feet opposite ours that is on no ground credible. and, indeed, it is not affirmed that this has been learned by historical knowledge, but by scientific conjecture
bare... the theory of the chaldeans and the egyptians. they say that the circumference of the universe is likened to the turnings of a well - rounded globe, the earth being a central point. they say that since its outline is spherical,... the earth should be the center of the universe, around which the heaven is whirling. " arnobius, another eastern christian writing sometime around 305 ad, described the round earth : " in the first place, indeed, the world itself is neither right nor left. it has neither upper nor lower regions, nor front nor back. for whatever is round and bounded on every side by the circumference of a solid sphere, has no beginning or end... " other advocates of a round earth included eusebius, hilary of poitiers, irenaeus, hippolytus of rome, firmicus maternus, ambrose, jerome, prudentius, favonius eulogius, and others. the only exceptions to this consensus up until the mid - fourth century were theophilus of antioch and lactantius, both of whom held anti - hellenistic views and associated the round - earth view with pagan cosmology. lactantius, a western christian writer and advisor to the first christian roman emperor, constantine, writing sometime between 304 and 313 ad, ridiculed the notion of antipodes and the philosophers who fancied that " the universe is round like a ball. they also thought that heaven revolves in accordance with the motion of the heavenly bodies.... for that reason, they constructed brass globes, as though after the figure of the universe. " the influential theologian and philosopher saint augustine, one of the four great church fathers of the western church, similarly objected to the " fable " of antipodes : but as to the fable that there are antipodes, that is to say, men on the opposite side of the earth, where the sun rises when it sets to us, men who walk with their feet opposite ours that is on no ground credible. and, indeed, it is not affirmed that this has been learned by historical knowledge, but by scientific conjecture, on the ground that the earth is suspended within the concavity of the sky, and that it has as much room on the one side of it as on the other : hence they say that the part that is beneath must also be inhabited. but they do not remark that, although it be supposed or scientifically
curvature radiation is applied to explain the circular polarization of frbs. significant circular polarization is reported in both apparently non - repeating and repeating frbs. curvature radiation can produce significant circular polarization at the wing of the radiation beam. in the curvature radiation scenario, in order to see significant circular polarization in frbs, ( 1 ) more energetic bursts, ( 2 ) bursts with electrons having higher lorentz factors, and ( 3 ) a slowly rotating neutron star at the centre are required. different rotational periods of the central neutron star may explain why some frbs have high circular polarization, while others don ' t. considering a possible difference in refractive index between the parallel and perpendicular components of the electric field, the position angle may change rapidly over the narrow pulse window of the radiation beam. the position angle swing in frbs may also be explained by this non - geometric origin, besides that of the rotating vector model.
.... for that reason, they constructed brass globes, as though after the figure of the universe. " the influential theologian and philosopher saint augustine, one of the four great church fathers of the western church, similarly objected to the " fable " of antipodes : but as to the fable that there are antipodes, that is to say, men on the opposite side of the earth, where the sun rises when it sets to us, men who walk with their feet opposite ours that is on no ground credible. and, indeed, it is not affirmed that this has been learned by historical knowledge, but by scientific conjecture, on the ground that the earth is suspended within the concavity of the sky, and that it has as much room on the one side of it as on the other : hence they say that the part that is beneath must also be inhabited. but they do not remark that, although it be supposed or scientifically demonstrated that the world is of a round and spherical form, yet it does not follow that the other side of the earth is bare of water ; nor even, though it be bare, does it immediately follow that it is peopled. for scripture, which proves the truth of its historical statements by the accomplishment of its prophecies, gives no false information ; and it is too absurd to say, that some men might have taken ship and traversed the whole wide ocean, and crossed from this side of the world to the other, and that thus even the inhabitants of that distant region are descended from that one first man. some historians do not view augustine ' s scriptural commentaries as endorsing any particular cosmological model, endorsing instead the view that augustine shared the common view of his contemporaries that the earth is spherical, in line with his endorsement of science in de genesi ad litteram. c. p. e. nothaft, responding to writers like leo ferrari who described augustine as endorsing a flat earth, says that "... other recent writers on the subject treat augustine ' s acceptance of the earth ' s spherical shape as a well - established fact ". while it always remained a minority view, from the mid - fourth to the seventh centuries ad, the flat - earth view experienced a revival, around the time when diodorus of tarsus founded the exegetical school known as the school of antioch, which sought to counter what he saw as the pagan cosmology of the greeks with a return to the traditional cosmology. the writings
oscillations of the sun have been used to understand its interior structure. the extension of similar studies to more distant stars has raised many difficulties despite the strong efforts of the international community over the past decades. the corot ( convection rotation and planetary transits ) satellite, launched in december 2006, has now measured oscillations and the stellar granulation signature in three main sequence stars that are noticeably hotter than the sun. the oscillation amplitudes are about 1. 5 times as large as those in the sun ; the stellar granulation is up to three times as high. the stellar amplitudes are about 25 % below the theoretic values, providing a measurement of the nonadiabaticity of the process ruling the oscillations in the outer layers of the stars.
three major planets, venus, earth, and mercury formed out of the solar nebula. a fourth planetesimal, theia, also formed near earth where it collided in a giant impact, rebounding as the planet mars. during this impact earth lost $\approx 4\%$ of its crust and mantle that is now found on mars and the moon. at the antipode of the giant impact, $\approx 60\%$ of earth ' s crust, atmosphere, and a large amount of mantle were ejected into space forming the moon. the lost crust never reformed and became the earth ' s ocean basins. the theia impact site corresponds to the indian ocean gravitational anomaly on earth and the hellas basin on mars. the dynamics of the giant impact are consistent with the rotational rates and axial tilts of both earth and mars. the giant impact removed sufficient co$_2$ from earth ' s atmosphere to avoid a runaway greenhouse effect, initiated plate tectonics, and gave life time to form near geothermal vents at the continental margins. mercury formed near venus where, on a close approach, it was slingshot into the sun ' s convective zone, losing 94\% of its mass, much of which remains there today. black carbon, from co$_2$ decomposed by the intense heat, is still found on the surface of mercury. arriving at 616 km / s, mercury dramatically altered the sun ' s rotational energy, explaining both its anomalously slow rotation rate and axial tilt. these results are quantitatively supported by mass balances, the current locations of the terrestrial planets, and the orientations of their major orbital axes.
outer satellites of the planets have distant, eccentric orbits that can be highly inclined or even retrograde relative to the equatorial planes of their planets. these irregular orbits cannot have formed by circumplanetary accretion and are likely products of early capture from heliocentric orbit. the irregular satellites may be the only small bodies remaining which are still relatively near their formation locations within the giant planet region. the study of the irregular satellites provides a unique window on processes operating in the young solar system and allows us to probe possible planet formation mechanisms and the composition of the solar nebula between the rocky objects in the main asteroid belt and the very volatile rich objects in the kuiper belt. the gas and ice giant planets all appear to have very similar irregular satellite systems irrespective of their mass or formation timescales and mechanisms. water ice has been detected on some of the outer satellites of saturn and neptune whereas none has been observed on jupiter ' s outer satellites.
mike lockwood and mathew owens discuss how eclipse observations are aiding the development of a climatology of near - earth space
when 0 is said to be neither positive nor negative, the following phrases may refer to the sign of a number : a number is positive if it is greater than zero. a number is negative if it is less than zero. a number is non - negative if it is greater than or equal to zero. a number is non - positive if it is less than or equal to zero. when 0 is said to be both positive and negative, modified phrases are used to refer to the sign of a number : a number is strictly positive if it is greater than zero. a number is strictly negative if it is less than zero. a number is positive if it is greater than or equal to zero. a number is negative if it is less than or equal to zero. for example, the absolute value of a real number is always " non - negative ", but is not necessarily " positive " in the first interpretation, whereas in the second interpretation, it is called " positive " - though not necessarily " strictly positive ". the same terminology is sometimes used for functions that yield real or other signed values. for example, a function would be called a positive function if its values are positive for all arguments of its domain, or a non - negative function if all of its values are non - negative. = = = complex numbers = = = complex numbers are impossible to order, so they cannot carry the structure of an ordered ring, and, accordingly, cannot be partitioned into positive and negative complex numbers. they do, however, share an attribute with the reals, which is called absolute value or magnitude. magnitudes are always non - negative real numbers, and to any non - zero number there belongs a positive real number, its absolute value. for example, the absolute value of -3 and the absolute value of 3 are both equal to 3. this is written in symbols as | -3 | = 3 and | 3 | = 3. in general, any arbitrary real value can be specified by its magnitude and its sign. in the standard encoding, any real value is given by the product of its magnitude and its sign. this relation can be generalized to define a sign for complex numbers. since the real and complex numbers both form a field and contain the positive reals, they also contain the reciprocals of the magnitudes of all non - zero numbers. this means that any non - zero number may be multiplied with the reciprocal of its magnitude, that is, divided by its magnitude. it is immediate that the quotient
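A short worked example of the construction described here (a sketch, not from the source): dividing a non-zero complex number by its magnitude gives its generalized sign, a point on the unit circle, and for real numbers this recovers the usual +1 or -1.

def sign(z):
    # generalized sign: 0 for 0, otherwise the number divided by its magnitude
    return 0 if z == 0 else z / abs(z)

print(sign(-3))       # -1.0
print(sign(3))        # 1.0
print(sign(3 + 4j))   # (0.6+0.8j), which has magnitude 1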
antipodes, that is to say, men on the opposite side of the earth, where the sun rises when it sets to us, men who walk with their feet opposite ours that is on no ground credible. and, indeed, it is not affirmed that this has been learned by historical knowledge, but by scientific conjecture, on the ground that the earth is suspended within the concavity of the sky, and that it has as much room on the one side of it as on the other : hence they say that the part that is beneath must also be inhabited. but they do not remark that, although it be supposed or scientifically demonstrated that the world is of a round and spherical form, yet it does not follow that the other side of the earth is bare of water ; nor even, though it be bare, does it immediately follow that it is peopled. for scripture, which proves the truth of its historical statements by the accomplishment of its prophecies, gives no false information ; and it is too absurd to say, that some men might have taken ship and traversed the whole wide ocean, and crossed from this side of the world to the other, and that thus even the inhabitants of that distant region are descended from that one first man. some historians do not view augustine ' s scriptural commentaries as endorsing any particular cosmological model, endorsing instead the view that augustine shared the common view of his contemporaries that the earth is spherical, in line with his endorsement of science in de genesi ad litteram. c. p. e. nothaft, responding to writers like leo ferrari who described augustine as endorsing a flat earth, says that "... other recent writers on the subject treat augustine ' s acceptance of the earth ' s spherical shape as a well - established fact ". while it always remained a minority view, from the mid - fourth to the seventh centuries ad, the flat - earth view experienced a revival, around the time when diodorus of tarsus founded the exegetical school known as the school of antioch, which sought to counter what he saw as the pagan cosmology of the greeks with a return to the traditional cosmology. the writings of diodorus did not survive, but are reconstructed from later criticism. this revival primarily took place in the east syriac world ( with little influence on the latin west ) where it gained proponents such as ephrem the syrian and in the popular hexaemeral homilies of jacob of serugh. chrys
Question: Earth rotates on its north-south axis. Which statement best describes one complete rotation?
A) It takes six months and causes summer and winter seasons.
B) It takes 24 hours and causes night and day.
C) It takes 29 days, which represents one cycle of the Moon's phases.
D) It takes 365 days, which represents one Earth year.
|
B) It takes 24 hours and causes night and day.
|
Context:
geomorphology studies the origin of landscapes. structural geology studies the deformation of rocks to produce mountains and lowlands. resource geology studies how energy resources can be obtained from minerals. environmental geology studies how pollution and contaminants affect soil and rock. mineralogy is the study of minerals and includes the study of mineral formation, crystal structure, hazards associated with minerals, and the physical and chemical properties of minerals. petrology is the study of rocks, including the formation and composition of rocks. petrography is a branch of petrology that studies the typology and classification of rocks. = = earth ' s interior = = plate tectonics, mountain ranges, volcanoes, and earthquakes are geological phenomena that can be explained in terms of physical and chemical processes in the earth ' s crust. beneath the earth ' s crust lies the mantle which is heated by the radioactive decay of heavy elements. the mantle is not quite solid and consists of magma which is in a state of semi - perpetual convection. this convection process causes the lithospheric plates to move, albeit slowly. the resulting process is known as plate tectonics. areas of the crust where new crust is created are called divergent boundaries, those where it is brought back into the earth are convergent boundaries and those where plates slide past each other, but no new lithospheric material is created or destroyed, are referred to as transform ( or conservative ) boundaries. earthquakes result from the movement of the lithospheric plates, and they often occur near convergent boundaries where parts of the crust are forced into the earth as part of subduction. plate tectonics might be thought of as the process by which the earth is resurfaced. as the result of seafloor spreading, new crust and lithosphere is created by the flow of magma from the mantle to the near surface, through fissures, where it cools and solidifies. through subduction, oceanic crust and lithosphere returns to the convecting mantle. volcanoes result primarily from the melting of subducted crust material. crust material that is forced into the asthenosphere melts, and some portion of the melted material becomes light enough to rise to the surface - giving birth to volcanoes. = = atmospheric science = = atmospheric science initially developed in the late - 19th century as a means to forecast the weather through meteorology, the study of weather. atmospheric chemistry was developed in the 20th century to measure air pollution and expanded in the 1970s in response to
geosphere ( or lithosphere ). earth science can be considered to be a branch of planetary science but with a much older history. = = geology = = geology is broadly the study of earth ' s structure, substance, and processes. geology is largely the study of the lithosphere, or earth ' s surface, including the crust and rocks. it includes the physical characteristics and processes that occur in the lithosphere as well as how they are affected by geothermal energy. it incorporates aspects of chemistry, physics, and biology as elements of geology interact. historical geology is the application of geology to interpret earth history and how it has changed over time. geochemistry studies the chemical components and processes of the earth. geophysics studies the physical properties of the earth. paleontology studies fossilized biological material in the lithosphere. planetary geology studies geoscience as it pertains to extraterrestrial bodies. geomorphology studies the origin of landscapes. structural geology studies the deformation of rocks to produce mountains and lowlands. resource geology studies how energy resources can be obtained from minerals. environmental geology studies how pollution and contaminants affect soil and rock. mineralogy is the study of minerals and includes the study of mineral formation, crystal structure, hazards associated with minerals, and the physical and chemical properties of minerals. petrology is the study of rocks, including the formation and composition of rocks. petrography is a branch of petrology that studies the typology and classification of rocks. = = earth ' s interior = = plate tectonics, mountain ranges, volcanoes, and earthquakes are geological phenomena that can be explained in terms of physical and chemical processes in the earth ' s crust. beneath the earth ' s crust lies the mantle which is heated by the radioactive decay of heavy elements. the mantle is not quite solid and consists of magma which is in a state of semi - perpetual convection. this convection process causes the lithospheric plates to move, albeit slowly. the resulting process is known as plate tectonics. areas of the crust where new crust is created are called divergent boundaries, those where it is brought back into the earth are convergent boundaries and those where plates slide past each other, but no new lithospheric material is created or destroyed, are referred to as transform ( or conservative ) boundaries. earthquakes result from the movement of the lithospheric plates, and they often occur near convergent boundaries where parts of the crust are forced into the earth as
##thic, or " old stone age ", and spans all of human history up to the development of agriculture approximately 12, 000 years ago. to make a stone tool, a " core " of hard stone with specific flaking properties ( such as flint ) was struck with a hammerstone. this flaking produced sharp edges which could be used as tools, primarily in the form of choppers or scrapers. these tools greatly aided the early humans in their hunter - gatherer lifestyle to perform a variety of tasks including butchering carcasses ( and breaking bones to get at the marrow ) ; chopping wood ; cracking open nuts ; skinning an animal for its hide, and even forming other tools out of softer materials such as bone and wood. the earliest stone tools were irrelevant, being little more than a fractured rock. in the acheulian era, beginning approximately 1. 65 million years ago, methods of working these stones into specific shapes, such as hand axes emerged. this early stone age is described as the lower paleolithic. the middle paleolithic, approximately 300, 000 years ago, saw the introduction of the prepared - core technique, where multiple blades could be rapidly formed from a single core stone. the upper paleolithic, beginning approximately 40, 000 years ago, saw the introduction of pressure flaking, where a wood, bone, or antler punch could be used to shape a stone very finely. the end of the last ice age about 10, 000 years ago is taken as the end point of the upper paleolithic and the beginning of the epipaleolithic / mesolithic. the mesolithic technology included the use of microliths as composite stone tools, along with wood, bone, and antler tools. the later stone age, during which the rudiments of agricultural technology were developed, is called the neolithic period. during this period, polished stone tools were made from a variety of hard rocks such as flint, jade, jadeite, and greenstone, largely by working exposures as quarries, but later the valuable rocks were pursued by tunneling underground, the first steps in mining technology. the polished axes were used for forest clearance and the establishment of crop farming and were so effective as to remain in use when bronze and iron appeared. these stone axes were used alongside a continued use of stone tools such as a range of projectiles, knives, and scrapers, as well as tools, made from organic materials such as wood, bone, and antler. stone age cultures
skinning an animal for its hide, and even forming other tools out of softer materials such as bone and wood. the earliest stone tools were crude, being little more than a fractured rock. in the acheulian era, beginning approximately 1. 65 million years ago, methods of working these stones into specific shapes, such as hand axes emerged. this early stone age is described as the lower paleolithic. the middle paleolithic, approximately 300, 000 years ago, saw the introduction of the prepared - core technique, where multiple blades could be rapidly formed from a single core stone. the upper paleolithic, beginning approximately 40, 000 years ago, saw the introduction of pressure flaking, where a wood, bone, or antler punch could be used to shape a stone very finely. the end of the last ice age about 10, 000 years ago is taken as the end point of the upper paleolithic and the beginning of the epipaleolithic / mesolithic. the mesolithic technology included the use of microliths as composite stone tools, along with wood, bone, and antler tools. the later stone age, during which the rudiments of agricultural technology were developed, is called the neolithic period. during this period, polished stone tools were made from a variety of hard rocks such as flint, jade, jadeite, and greenstone, largely by working exposures as quarries, but later the valuable rocks were pursued by tunneling underground, the first steps in mining technology. the polished axes were used for forest clearance and the establishment of crop farming and were so effective as to remain in use when bronze and iron appeared. these stone axes were used alongside a continued use of stone tools such as a range of projectiles, knives, and scrapers, as well as tools, made from organic materials such as wood, bone, and antler. stone age cultures developed music and engaged in organized warfare. stone age humans developed ocean - worthy outrigger canoe technology, leading to migration across the malay archipelago, across the indian ocean to madagascar and also across the pacific ocean, which required knowledge of the ocean currents, weather patterns, sailing, and celestial navigation. although paleolithic cultures left no written records, the shift from nomadic life to settlement and agriculture can be inferred from a range of archaeological evidence. such evidence includes ancient tools, cave paintings, and other prehistoric art, such as the venus of willendorf. human remains also provide direct evidence, both through the examination of bones, and
". = = extraction = = extractive metallurgy is the practice of removing valuable metals from an ore and refining the extracted raw metals into a purer form. in order to convert a metal oxide or sulphide to a purer metal, the ore must be reduced physically, chemically, or electrolytically. extractive metallurgists are interested in three primary streams : feed, concentrate ( metal oxide / sulphide ) and tailings ( waste ). after mining, large pieces of the ore feed are broken through crushing or grinding in order to obtain particles small enough, where each particle is either mostly valuable or mostly waste. concentrating the particles of value in a form supporting separation enables the desired metal to be removed from waste products. mining may not be necessary, if the ore body and physical environment are conducive to leaching. leaching dissolves minerals in an ore body and results in an enriched solution. the solution is collected and processed to extract valuable metals. ore bodies often contain more than one valuable metal. tailings of a previous process may be used as a feed in another process to extract a secondary product from the original ore. additionally, a concentrate may contain more than one valuable metal. that concentrate would then be processed to separate the valuable metals into individual constituents. = = metal and its alloys = = much effort has been placed on understanding iron β carbon alloy system, which includes steels and cast irons. plain carbon steels ( those that contain essentially only carbon as an alloying element ) are used in low - cost, high - strength applications, where neither weight nor corrosion are a major concern. cast irons, including ductile iron, are also part of the iron - carbon system. iron - manganese - chromium alloys ( hadfield - type steels ) are also used in non - magnetic applications such as directional drilling. other engineering metals include aluminium, chromium, copper, magnesium, nickel, titanium, zinc, and silicon. these metals are most often used as alloys with the noted exception of silicon, which is not a metal. other forms include : stainless steel, particularly austenitic stainless steels, galvanized steel, nickel alloys, titanium alloys, or occasionally copper alloys are used, where resistance to corrosion is important. aluminium alloys and magnesium alloys are commonly used, when a lightweight strong part is required such as in automotive and aerospace applications. copper - nickel alloys ( such as monel ) are used in highly corrosive environments and for non - magnetic applications
of tool usage was found in ethiopia within the great rift valley, dating back to 2. 5 million years ago. the earliest methods of stone tool making, known as the oldowan " industry ", date back to at least 2. 3 million years ago. this era of stone tool use is called the paleolithic, or " old stone age ", and spans all of human history up to the development of agriculture approximately 12, 000 years ago. to make a stone tool, a " core " of hard stone with specific flaking properties ( such as flint ) was struck with a hammerstone. this flaking produced sharp edges which could be used as tools, primarily in the form of choppers or scrapers. these tools greatly aided the early humans in their hunter - gatherer lifestyle to perform a variety of tasks including butchering carcasses ( and breaking bones to get at the marrow ) ; chopping wood ; cracking open nuts ; skinning an animal for its hide, and even forming other tools out of softer materials such as bone and wood. the earliest stone tools were crude, being little more than a fractured rock. in the acheulian era, beginning approximately 1. 65 million years ago, methods of working these stones into specific shapes, such as hand axes emerged. this early stone age is described as the lower paleolithic. the middle paleolithic, approximately 300, 000 years ago, saw the introduction of the prepared - core technique, where multiple blades could be rapidly formed from a single core stone. the upper paleolithic, beginning approximately 40, 000 years ago, saw the introduction of pressure flaking, where a wood, bone, or antler punch could be used to shape a stone very finely. the end of the last ice age about 10, 000 years ago is taken as the end point of the upper paleolithic and the beginning of the epipaleolithic / mesolithic. the mesolithic technology included the use of microliths as composite stone tools, along with wood, bone, and antler tools. the later stone age, during which the rudiments of agricultural technology were developed, is called the neolithic period. during this period, polished stone tools were made from a variety of hard rocks such as flint, jade, jadeite, and greenstone, largely by working exposures as quarries, but later the valuable rocks were pursued by tunneling underground, the first steps in mining technology. the polished axes were used for forest clearance and the establishment of crop
is collected and processed to extract valuable metals. ore bodies often contain more than one valuable metal. tailings of a previous process may be used as a feed in another process to extract a secondary product from the original ore. additionally, a concentrate may contain more than one valuable metal. that concentrate would then be processed to separate the valuable metals into individual constituents. = = metal and its alloys = = much effort has been placed on understanding the iron - carbon alloy system, which includes steels and cast irons. plain carbon steels ( those that contain essentially only carbon as an alloying element ) are used in low - cost, high - strength applications, where neither weight nor corrosion are a major concern. cast irons, including ductile iron, are also part of the iron - carbon system. iron - manganese - chromium alloys ( hadfield - type steels ) are also used in non - magnetic applications such as directional drilling. other engineering metals include aluminium, chromium, copper, magnesium, nickel, titanium, zinc, and silicon. these metals are most often used as alloys with the noted exception of silicon, which is not a metal. other forms include : stainless steel, particularly austenitic stainless steels, galvanized steel, nickel alloys, titanium alloys, or occasionally copper alloys are used, where resistance to corrosion is important. aluminium alloys and magnesium alloys are commonly used, when a lightweight strong part is required such as in automotive and aerospace applications. copper - nickel alloys ( such as monel ) are used in highly corrosive environments and for non - magnetic applications. nickel - based superalloys like inconel are used in high - temperature applications such as gas turbines, turbochargers, pressure vessels, and heat exchangers. for extremely high temperatures, single crystal alloys are used to minimize creep. in modern electronics, high purity single crystal silicon is essential for metal - oxide - silicon transistors ( mos ) and integrated circuits. = = production = = in production engineering, metallurgy is concerned with the production of metallic components for use in consumer or engineering products. this involves production of alloys, shaping, heat treatment and surface treatment of product. the task of the metallurgist is to achieve balance between material properties, such as cost, weight, strength, toughness, hardness, corrosion, fatigue resistance and performance in temperature extremes. to achieve this goal, the operating environment must be carefully considered. determining the hardness of the metal using the rockwell, vickers, and brinell hardness scales
the walls of a victim ' s stomach. toxicology, a subfield of forensic chemistry, focuses on detecting and identifying drugs, poisons, and other toxic substances in biological samples. forensic toxicologists work on cases involving drug overdoses, poisoning, and substance abuse. their work is critical in determining whether harmful substances play a role in a person ' s death or impairment. james marsh was the first to apply this new science to the art of forensics. he was called by the prosecution in a murder trial to give evidence as a chemist in 1832. the defendant, john bodle, was accused of poisoning his grandfather with arsenic - laced coffee. marsh performed the standard test by mixing a suspected sample with hydrogen sulfide and hydrochloric acid. while he was able to detect arsenic as yellow arsenic trisulfide, when it was shown to the jury it had deteriorated, allowing the suspect to be acquitted due to reasonable doubt. annoyed by that, marsh developed a much better test. he combined a sample containing arsenic with sulfuric acid and arsenic - free zinc, resulting in arsine gas. the gas was ignited, and it decomposed to pure metallic arsenic, which, when passed to a cold surface, would appear as a silvery - black deposit. so sensitive was the test, known formally as the marsh test, that it could detect as little as one - fiftieth of a milligram of arsenic. he first described this test in the edinburgh philosophical journal in 1836. = = = ballistics and firearms = = = ballistics is " the science of the motion of projectiles in flight ". in forensic science, analysts examine the patterns left on bullets and cartridge casings after being ejected from a weapon. when fired, a bullet is left with indentations and markings that are unique to the barrel and firing pin of the firearm that ejected the bullet. this examination can help scientists identify possible makes and models of weapons connected to a crime. henry goddard at scotland yard pioneered the use of bullet comparison in 1835. he noticed a flaw in the bullet that killed the victim and was able to trace this back to the mold that was used in the manufacturing process. = = = anthropometry = = = the french police officer alphonse bertillon was the first to apply the anthropological technique of anthropometry to law enforcement, thereby creating an identification system based on physical measurements. before that time, criminals could be identified only by name or photograph. dissatisfied with the ad hoc methods used to identify captured
and rock properties and existing underground infrastructure in construction projects. surface exploration can include on - foot surveys, geological mapping, geophysical methods, and photogrammetry. geological mapping and interpretation of geomorphology are typically completed in consultation with a geologist or engineering geologist. subsurface exploration usually involves in - situ testing ( for example, the standard penetration test and cone penetration test ). the digging of test pits and trenching ( particularly for locating faults and slide planes ) may also be used to learn about soil conditions at depth. large - diameter borings are rarely used due to safety concerns and expense. still, they are sometimes used to allow a geologist or engineer to be lowered into the borehole for direct visual and manual examination of the soil and rock stratigraphy. various soil samplers exist to meet the needs of different engineering projects. the standard penetration test, which uses a thick - walled split spoon sampler, is the most common way to collect disturbed samples. piston samplers, employing a thin - walled tube, are most commonly used to collect less disturbed samples. more advanced methods, such as the sherbrooke block sampler, are superior but expensive. coring frozen ground provides high - quality undisturbed samples from ground conditions, such as fill, sand, moraine, and rock fracture zones. geotechnical centrifuge modeling is another method of testing physical - scale models of geotechnical problems. the use of a centrifuge enhances the similarity of the scale model tests involving soil because soil ' s strength and stiffness are susceptible to the confining pressure. the centrifugal acceleration allows a researcher to obtain large ( prototype - scale ) stresses in small physical models. = = = foundation design = = = the foundation of a structure ' s infrastructure transmits loads from the structure to the earth. geotechnical engineers design foundations based on the load characteristics of the structure and the properties of the soils and bedrock at the site. generally, geotechnical engineers first estimate the magnitude and location of loads to be supported before developing an investigation plan to explore the subsurface and determine the necessary soil parameters through field and lab testing. following this, they may begin the design of an engineering foundation. the primary considerations for a geotechnical engineer in foundation design are bearing capacity, settlement, and ground movement beneath the foundations. = = = earthworks = = = geotechnical engineers are also involved in the planning and execution of earthworks, which include ground improvement, slope stabilization, and
, crystal structure, hazards associated with minerals, and the physical and chemical properties of minerals. petrology is the study of rocks, including the formation and composition of rocks. petrography is a branch of petrology that studies the typology and classification of rocks. = = earth ' s interior = = plate tectonics, mountain ranges, volcanoes, and earthquakes are geological phenomena that can be explained in terms of physical and chemical processes in the earth ' s crust. beneath the earth ' s crust lies the mantle which is heated by the radioactive decay of heavy elements. the mantle is not quite solid and consists of magma which is in a state of semi - perpetual convection. this convection process causes the lithospheric plates to move, albeit slowly. the resulting process is known as plate tectonics. areas of the crust where new crust is created are called divergent boundaries, those where it is brought back into the earth are convergent boundaries and those where plates slide past each other, but no new lithospheric material is created or destroyed, are referred to as transform ( or conservative ) boundaries. earthquakes result from the movement of the lithospheric plates, and they often occur near convergent boundaries where parts of the crust are forced into the earth as part of subduction. plate tectonics might be thought of as the process by which the earth is resurfaced. as the result of seafloor spreading, new crust and lithosphere is created by the flow of magma from the mantle to the near surface, through fissures, where it cools and solidifies. through subduction, oceanic crust and lithosphere returns to the convecting mantle. volcanoes result primarily from the melting of subducted crust material. crust material that is forced into the asthenosphere melts, and some portion of the melted material becomes light enough to rise to the surface - giving birth to volcanoes. = = atmospheric science = = atmospheric science initially developed in the late - 19th century as a means to forecast the weather through meteorology, the study of weather. atmospheric chemistry was developed in the 20th century to measure air pollution and expanded in the 1970s in response to acid rain. climatology studies the climate and climate change. the troposphere, stratosphere, mesosphere, thermosphere, and exosphere are the five layers which make up earth ' s atmosphere. 75 % of the mass in the atmosphere is located within the troposphere, the lowest
Question: A rock sample will most likely contain
A) plants.
B) minerals.
C) water.
D) wood.
|
B) minerals.
|
Context:
angles. stealth aircraft such as the f - 117 use a different arrangement, tilting the tail surfaces to reduce corner reflections formed between them. a more radical method is to omit the tail, as in the b - 2 spirit. the b - 2 ' s clean, low - drag flying wing configuration gives it exceptional range and reduces its radar profile. the flying wing design most closely resembles a so - called infinite flat plate ( as vertical control surfaces dramatically increase rcs ), the perfect stealth shape, as it would have no angles to reflect back radar waves. in addition to altering the tail, stealth design must bury the engines within the wing or fuselage, or in some cases where stealth is applied to an extant aircraft, install baffles in the air intakes, so that the compressor blades are not visible to radar. a stealthy shape must be devoid of complex bumps or protrusions of any kind, meaning that weapons, fuel tanks, and other stores must not be carried externally. any stealthy vehicle becomes un - stealthy when a door or hatch opens. parallel alignment of edges or even surfaces is also often used in stealth designs. the technique involves using a small number of edge orientations in the shape of the structure. for example, on the f - 22a raptor, the leading edges of the wing and the tail planes are set at the same angle. other smaller structures, such as the air intake bypass doors and the air refueling aperture, also use the same angles. the effect of this is to return a narrow radar signal in a very specific direction away from the radar emitter rather than returning a diffuse signal detectable at many angles. the effect is sometimes called " glitter " after the very brief signal seen when the reflected beam passes across a detector. it can be difficult for the radar operator to distinguish between a glitter event and a digital glitch in the processing system. stealth airframes sometimes display distinctive serrations on some exposed edges, such as the engine ports. the yf - 23 has such serrations on the exhaust ports. this is another example in the parallel alignment of features, this time on the external airframe. the shaping requirements detracted greatly from the f - 117 ' s aerodynamic properties. it is inherently unstable, and cannot be flown without a fly - by - wire control system. similarly, coating the cockpit canopy with a thin film transparent conductor ( vapor - deposited gold or indium tin oxide ) helps to reduce the aircraft ' s radar profile, because radar waves would normally enter the cockpit
the project consists of determining, mathematically, the trajectory that an artificial satellite will follow as it fights against air resistance. during our work, we had to consider that our satellite will crash onto the surface of our planet. we started our study by understanding the system of forces acting between our satellite and the earth. in this work, we had to study newton ' s second law, taking into account the air friction and the speed of the satellite, which helped us to find the equation that relates the trajectory of the satellite, its speed, and the density of the air as a function of altitude. finally, we had to find a mathematical relation that links the density with the altitude and then put it into our equation of motion. in order to verify our model, we will see what happens if we give the satellite zero velocity.
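Because the report's own equations are not reproduced in this summary, the following is only a minimal sketch of the approach it describes: integrate newton's second law with gravity plus a drag force that uses an assumed exponential air-density profile, and step the satellite forward until it reaches the surface. Every numerical parameter below (drag coefficient, cross-section, mass, scale height, starting altitude) is an illustrative assumption.

import math

G = 6.674e-11          # gravitational constant, SI
M_EARTH = 5.972e24     # earth mass, kg
R_EARTH = 6.371e6      # earth radius, m
RHO0, H_SCALE = 1.225, 8500.0      # assumed sea-level density (kg/m^3) and scale height (m)
CD, AREA, MASS = 2.2, 1.0, 100.0   # assumed drag coefficient, cross-section (m^2), satellite mass (kg)

def density(h):
    # simple exponential model of air density versus altitude
    return RHO0 * math.exp(-h / H_SCALE)

def step(x, y, vx, vy, dt):
    # one forward-euler step of newton's second law with gravity and drag
    r = math.hypot(x, y)
    v = math.hypot(vx, vy)
    grav = -G * M_EARTH / r ** 3                                 # multiplies the position vector
    drag = -0.5 * density(r - R_EARTH) * CD * AREA * v / MASS    # multiplies the velocity vector
    ax = grav * x + drag * vx
    ay = grav * y + drag * vy
    return x + vx * dt, y + vy * dt, vx + ax * dt, vy + ay * dt

# start at 150 km altitude with circular orbital speed and integrate until impact
x, y = R_EARTH + 150e3, 0.0
vx, vy = 0.0, math.sqrt(G * M_EARTH / x)
t, dt = 0.0, 1.0
while math.hypot(x, y) > R_EARTH and t < 30 * 86400:
    x, y, vx, vy = step(x, y, vx, vy, dt)
    t += dt
status = "impact" if math.hypot(x, y) <= R_EARTH else "still in orbit"
print(status, "after", round(t / 86400, 2), "days")

Setting the initial velocity to zero instead, as the report's final check suggests, simply makes the satellite fall straight down, which is a quick sanity test of the model.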
resistant to the wet etchants. this has been used in mems pressure sensor manufacturing for example. etching progresses at the same speed in all directions. long and narrow holes in a mask will produce v - shaped grooves in the silicon. the surface of these grooves can be atomically smooth if the etch is carried out correctly, with dimensions and angles being extremely accurate. some single crystal materials, such as silicon, will have different etching rates depending on the crystallographic orientation of the substrate. this is known as anisotropic etching and one of the most common examples is the etching of silicon in koh ( potassium hydroxide ), where si < 111 > planes etch approximately 100 times slower than other planes ( crystallographic orientations ). therefore, etching a rectangular hole in a ( 100 ) - si wafer results in a pyramid shaped etch pit with 54. 7° walls, instead of a hole with curved sidewalls as with isotropic etching. hydrofluoric acid is commonly used as an aqueous etchant for silicon dioxide ( sio2, also known as box for soi ), usually in 49 % concentrated form, 5 : 1, 10 : 1 or 20 : 1 boe ( buffered oxide etchant ) or bhf ( buffered hf ). they were first used in medieval times for glass etching. it was used in ic fabrication for patterning the gate oxide until the process step was replaced by rie. hydrofluoric acid is considered one of the more dangerous acids in the cleanroom. electrochemical etching ( ece ) for dopant - selective removal of silicon is a common method to automate and to selectively control etching. an active p - n diode junction is required, and either type of dopant can be the etch - resistant ( " etch - stop " ) material. boron is the most common etch - stop dopant. in combination with wet anisotropic etching as described above, ece has been used successfully for controlling silicon diaphragm thickness in commercial piezoresistive silicon pressure sensors. selectively doped regions can be created either by implantation, diffusion, or epitaxial deposition of silicon. = = = = dry etching = = = = xenon difluoride ( xef2 ) is a dry vapor phase isotropic etch for silicon originally applied for me
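The 54.7° sidewall angle quoted above can be checked with a one-line computation (a sketch, not from the source): it is the angle between the ( 100 ) wafer surface and the slow-etching ( 111 ) planes, whose normals have a dot product of 1 and magnitudes of 1 and sqrt(3).

import math

# angle between the (100) and (111) plane normals: cos(theta) = 1 / sqrt(3)
theta = math.degrees(math.acos(1.0 / math.sqrt(3.0)))
print(round(theta, 2))   # 54.74, matching the koh sidewall angle quoted above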
reflect radar waves back to the emitting radar is with orthogonal metal plates, forming a corner reflector consisting of either a dihedral ( two plates ) or a trihedral ( three orthogonal plates ). this configuration occurs in the tail of a conventional aircraft, where the vertical and horizontal components of the tail are set at right angles. stealth aircraft such as the f - 117 use a different arrangement, tilting the tail surfaces to reduce corner reflections formed between them. a more radical method is to omit the tail, as in the b - 2 spirit. the b - 2 ' s clean, low - drag flying wing configuration gives it exceptional range and reduces its radar profile. the flying wing design most closely resembles a so - called infinite flat plate ( as vertical control surfaces dramatically increase rcs ), the perfect stealth shape, as it would have no angles to reflect back radar waves. in addition to altering the tail, stealth design must bury the engines within the wing or fuselage, or in some cases where stealth is applied to an extant aircraft, install baffles in the air intakes, so that the compressor blades are not visible to radar. a stealthy shape must be devoid of complex bumps or protrusions of any kind, meaning that weapons, fuel tanks, and other stores must not be carried externally. any stealthy vehicle becomes un - stealthy when a door or hatch opens. parallel alignment of edges or even surfaces is also often used in stealth designs. the technique involves using a small number of edge orientations in the shape of the structure. for example, on the f - 22a raptor, the leading edges of the wing and the tail planes are set at the same angle. other smaller structures, such as the air intake bypass doors and the air refueling aperture, also use the same angles. the effect of this is to return a narrow radar signal in a very specific direction away from the radar emitter rather than returning a diffuse signal detectable at many angles. the effect is sometimes called " glitter " after the very brief signal seen when the reflected beam passes across a detector. it can be difficult for the radar operator to distinguish between a glitter event and a digital glitch in the processing system. stealth airframes sometimes display distinctive serrations on some exposed edges, such as the engine ports. the yf - 23 has such serrations on the exhaust ports. this is another example in the parallel alignment of features, this time on the external airframe. the shaping requirements detracted greatly from the f - 117 '
( potassium hydroxide ), where si < 111 > planes etch approximately 100 times slower than other planes ( crystallographic orientations ). therefore, etching a rectangular hole in a ( 100 ) - si wafer results in a pyramid shaped etch pit with 54. 7Β° walls, instead of a hole with curved sidewalls as with isotropic etching. hydrofluoric acid is commonly used as an aqueous etchant for silicon dioxide ( sio2, also known as box for soi ), usually in 49 % concentrated form, 5 : 1, 10 : 1 or 20 : 1 boe ( buffered oxide etchant ) or bhf ( buffered hf ). they were first used in medieval times for glass etching. it was used in ic fabrication for patterning the gate oxide until the process step was replaced by rie. hydrofluoric acid is considered one of the more dangerous acids in the cleanroom. electrochemical etching ( ece ) for dopant - selective removal of silicon is a common method to automate and to selectively control etching. an active p β n diode junction is required, and either type of dopant can be the etch - resistant ( " etch - stop " ) material. boron is the most common etch - stop dopant. in combination with wet anisotropic etching as described above, ece has been used successfully for controlling silicon diaphragm thickness in commercial piezoresistive silicon pressure sensors. selectively doped regions can be created either by implantation, diffusion, or epitaxial deposition of silicon. = = = = dry etching = = = = xenon difluoride ( xef2 ) is a dry vapor phase isotropic etch for silicon originally applied for mems in 1995 at university of california, los angeles. primarily used for releasing metal and dielectric structures by undercutting silicon, xef2 has the advantage of a stiction - free release unlike wet etchants. its etch selectivity to silicon is very high, allowing it to work with photoresist, sio2, silicon nitride, and various metals for masking. its reaction to silicon is " plasmaless ", is purely chemical and spontaneous and is often operated in pulsed mode. models of the etching action are available, and university laboratories and various commercial tools offer solutions using this approach. modern
the use of the term anisotropy for plasma etching should not be conflated with the use of the same term when referring to orientation - dependent etching. the source gas for the plasma usually contains small molecules rich in chlorine or fluorine. for instance, carbon tetrachloride ( ccl4 ) etches silicon and aluminium, and trifluoromethane etches silicon dioxide and silicon nitride. a plasma containing oxygen is used to oxidize ( " ash " ) photoresist and facilitate its removal. ion milling, or sputter etching, uses lower pressures, often as low as 10 ^ - 4 torr ( 10 mpa ). it bombards the wafer with energetic ions of noble gases, often ar +, which knock atoms from the substrate by transferring momentum. because the etching is performed by ions, which approach the wafer approximately from one direction, this process is highly anisotropic. on the other hand, it tends to display poor selectivity. reactive - ion etching ( rie ) operates under conditions intermediate between sputter and plasma etching ( between 10 ^ - 3 and 10 ^ - 1 torr ). deep reactive - ion etching ( drie ) modifies the rie technique to produce deep, narrow features. in reactive - ion etching ( rie ), the substrate is placed inside a reactor, and several gases are introduced. a plasma is struck in the gas mixture using an rf power source, which breaks the gas molecules into ions. the ions accelerate towards, and react with, the surface of the material being etched, forming another gaseous material. this is known as the chemical part of reactive - ion etching. there is also a physical part, which is similar to the sputtering deposition process. if the ions have high enough energy, they can knock atoms out of the material to be etched without a chemical reaction. it is a very complex task to develop dry etch processes that balance chemical and physical etching, since there are many parameters to adjust. by changing the balance it is possible to influence the anisotropy of the etching : since the chemical part is isotropic and the physical part highly anisotropic, the combination can form sidewalls with shapes ranging from rounded to vertical. deep reactive - ion etching ( drie ) is a special subclass of rie that is growing in popularity. in this process, etch depths of hundreds of micrometers are achieved with almost vertical sidewalls. the primary technology is based on the
##ist, sio2, silicon nitride, and various metals for masking. its reaction to silicon is " plasmaless ", is purely chemical and spontaneous and is often operated in pulsed mode. models of the etching action are available, and university laboratories and various commercial tools offer solutions using this approach. modern vlsi processes avoid wet etching, and use plasma etching instead. plasma etchers can operate in several modes by adjusting the parameters of the plasma. ordinary plasma etching operates between 0. 1 and 5 torr. ( this unit of pressure, commonly used in vacuum engineering, equals approximately 133. 3 pascals. ) the plasma produces energetic free radicals, neutrally charged, that react at the surface of the wafer. since neutral particles attack the wafer from all angles, this process is isotropic. plasma etching can be isotropic, i. e., exhibiting a lateral undercut rate on a patterned surface approximately the same as its downward etch rate, or can be anisotropic, i. e., exhibiting a smaller lateral undercut rate than its downward etch rate. such anisotropy is maximized in deep reactive ion etching. the use of the term anisotropy for plasma etching should not be conflated with the use of the same term when referring to orientation - dependent etching. the source gas for the plasma usually contains small molecules rich in chlorine or fluorine. for instance, carbon tetrachloride ( ccl4 ) etches silicon and aluminium, and trifluoromethane etches silicon dioxide and silicon nitride. a plasma containing oxygen is used to oxidize ( " ash " ) photoresist and facilitate its removal. ion milling, or sputter etching, uses lower pressures, often as low as 10β4 torr ( 10 mpa ). it bombards the wafer with energetic ions of noble gases, often ar +, which knock atoms from the substrate by transferring momentum. because the etching is performed by ions, which approach the wafer approximately from one direction, this process is highly anisotropic. on the other hand, it tends to display poor selectivity. reactive - ion etching ( rie ) operates under conditions intermediate between sputter and plasma etching ( between 10β3 and 10β1 torr ). deep reactive - ion etching ( drie ) modifies the rie technique to produce deep, narrow features.
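The passage quotes each dry-etch regime in torr and notes that one torr is roughly 133.3 pascals, so the ranges can be compared directly in SI units. The minimal Python sketch below converts the quoted regimes; the upper bound shown for ion milling is an assumed illustrative value, since the passage only says it runs "as low as" 10^-4 torr.

```python
TORR_TO_PA = 133.322  # 1 torr in pascals, consistent with the ~133.3 Pa noted above

# Operating-pressure regimes quoted in the passage, in torr (low, high).
regimes = {
    "ordinary plasma etching": (0.1, 5.0),
    "reactive-ion etching (RIE)": (1e-3, 1e-1),
    "ion milling / sputter etching": (1e-4, 1e-3),  # upper bound assumed for illustration
}

for name, (lo, hi) in regimes.items():
    print(f"{name}: {lo * TORR_TO_PA:.3g} Pa to {hi * TORR_TO_PA:.3g} Pa")
```

Run as-is, this shows ordinary plasma etching spanning roughly 13 to 670 Pa while ion milling sits around 0.01 Pa (about 10 mPa), which is why the ion-milling regime is described as collisionless and highly directional.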
and reduces its radar profile. the flying wing design most closely resembles a so - called infinite flat plate ( as vertical control surfaces dramatically increase rcs ), the perfect stealth shape, as it would have no angles to reflect back radar waves. in addition to altering the tail, stealth design must bury the engines within the wing or fuselage, or in some cases where stealth is applied to an extant aircraft, install baffles in the air intakes, so that the compressor blades are not visible to radar. a stealthy shape must be devoid of complex bumps or protrusions of any kind, meaning that weapons, fuel tanks, and other stores must not be carried externally. any stealthy vehicle becomes un - stealthy when a door or hatch opens. parallel alignment of edges or even surfaces is also often used in stealth designs. the technique involves using a small number of edge orientations in the shape of the structure. for example, on the f - 22a raptor, the leading edges of the wing and the tail planes are set at the same angle. other smaller structures, such as the air intake bypass doors and the air refueling aperture, also use the same angles. the effect of this is to return a narrow radar signal in a very specific direction away from the radar emitter rather than returning a diffuse signal detectable at many angles. the effect is sometimes called " glitter " after the very brief signal seen when the reflected beam passes across a detector. it can be difficult for the radar operator to distinguish between a glitter event and a digital glitch in the processing system. stealth airframes sometimes display distinctive serrations on some exposed edges, such as the engine ports. the yf - 23 has such serrations on the exhaust ports. this is another example in the parallel alignment of features, this time on the external airframe. the shaping requirements detracted greatly from the f - 117 ' s aerodynamic properties. it is inherently unstable, and cannot be flown without a fly - by - wire control system. similarly, coating the cockpit canopy with a thin film transparent conductor ( vapor - deposited gold or indium tin oxide ) helps to reduce the aircraft ' s radar profile, because radar waves would normally enter the cockpit, reflect off objects ( the inside of a cockpit has a complex shape, with a pilot helmet alone forming a sizeable return ), and possibly return to the radar, but the conductive coating creates a controlled shape that deflects the incoming radar waves away from the radar. the coating is thin enough that it has
fluid dynamics video demonstrating the evolution of dynamic stall on a wind turbine blade.
this is a comment on phys. rev. lett. 98, 180403 ( 2007 ) [ arxiv : 0704. 2162 ].
Question: A jet plane is moving at a constant velocity on a flat surface. Which forces act against the forward motion of the plane?
A) gravity and engine thrust
B) engine thrust and friction
C) friction and air resistance
D) air resistance and gravity
|
C) friction and air resistance
|
Context:
higher concentrations of atmospheric nitrous oxide ( n2o ) are expected to slightly warm earth ' s surface because of increases in radiative forcing. radiative forcing is the difference in the net upward thermal radiation flux from the earth through a transparent atmosphere and radiation through an otherwise identical atmosphere with greenhouse gases. radiative forcing, normally measured in w / m ^ 2, depends on latitude, longitude and altitude, but it is often quoted for the tropopause, about 11 km of altitude for temperate latitudes, or for the top of the atmosphere at around 90 km. for current concentrations of greenhouse gases, the radiative forcing per added n2o molecule is about 230 times larger than the forcing per added carbon dioxide ( co2 ) molecule. this is due to the heavy saturation of the absorption band of the relatively abundant greenhouse gas, co2, compared to the much smaller saturation of the absorption bands of the trace greenhouse gas n2o. but the rate of increase of co2 molecules, about 2. 5 ppm / year ( ppm = part per million by mole ), is about 3000 times larger than the rate of increase of n2o molecules, which has held steady at around 0. 00085 ppm / year since 1985. so, the contribution of nitrous oxide to the annual increase in forcing is 230 / 3000 or about 1 / 13 that of co2. if the main greenhouse gases, co2, ch4 and n2o have contributed about 0. 1 c / decade of the warming observed over the past few decades, this would correspond to about 0. 00064 k per year or 0. 064 k per century of warming from n2o. proposals to place harsh restrictions on nitrous oxide emissions because of warming fears are not justified by these facts. restrictions would cause serious harm ; for example, by jeopardizing world food supplies.
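The 1/13 figure in the passage is a back-of-envelope ratio that can be checked directly from the numbers it quotes: a per-molecule forcing ratio of about 230 and concentration growth rates of 2.5 and 0.00085 ppm per year. The Python sketch below only reproduces that arithmetic; it makes no independent climate estimate.

```python
# Back-of-envelope check of the ratio quoted in the passage above.
forcing_per_molecule_ratio = 230.0   # N2O vs CO2 forcing per added molecule (as stated)
co2_growth_ppm_per_year = 2.5        # as stated
n2o_growth_ppm_per_year = 0.00085    # as stated

rate_ratio = co2_growth_ppm_per_year / n2o_growth_ppm_per_year  # quoted as "about 3000"
relative_contribution = forcing_per_molecule_ratio / rate_ratio

print(f"CO2/N2O molecular growth-rate ratio: {rate_ratio:.0f}")
print(f"N2O share of the annual forcing increase relative to CO2: "
      f"{relative_contribution:.3f} (about 1/{1.0 / relative_contribution:.0f})")
```

The computed ratio comes out near 2940, and 230 divided by that is about 0.078, i.e. roughly 1/13, matching the passage's statement.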
enough to rise to the surface β giving birth to volcanoes. = = atmospheric science = = atmospheric science initially developed in the late - 19th century as a means to forecast the weather through meteorology, the study of weather. atmospheric chemistry was developed in the 20th century to measure air pollution and expanded in the 1970s in response to acid rain. climatology studies the climate and climate change. the troposphere, stratosphere, mesosphere, thermosphere, and exosphere are the five layers which make up earth ' s atmosphere. 75 % of the mass in the atmosphere is located within the troposphere, the lowest layer. in all, the atmosphere is made up of about 78. 0 % nitrogen, 20. 9 % oxygen, and 0. 92 % argon, and small amounts of other gases including co2 and water vapor. water vapor and co2 cause the earth ' s atmosphere to catch and hold the sun ' s energy through the greenhouse effect. this makes earth ' s surface warm enough for liquid water and life. in addition to trapping heat, the atmosphere also protects living organisms by shielding the earth ' s surface from cosmic rays. the magnetic field β created by the internal motions of the core β produces the magnetosphere which protects earth ' s atmosphere from the solar wind. as the earth is 4. 5 billion years old, it would have lost its atmosphere by now if there were no protective magnetosphere. = = earth ' s magnetic field = = = = hydrology = = hydrology is the study of the hydrosphere and the movement of water on earth. it emphasizes the study of how humans use and interact with freshwater supplies. study of water ' s movement is closely related to geomorphology and other branches of earth science. applied hydrology involves engineering to maintain aquatic environments and distribute water supplies. subdisciplines of hydrology include oceanography, hydrogeology, ecohydrology, and glaciology. oceanography is the study of oceans. hydrogeology is the study of groundwater. it includes the mapping of groundwater supplies and the analysis of groundwater contaminants. applied hydrogeology seeks to prevent contamination of groundwater and mineral springs and make it available as drinking water. the earliest exploitation of groundwater resources dates back to 3000 bc, and hydrogeology as a science was developed by hydrologists beginning in the 17th century. ecohydrology is the study of ecological systems in the hydrosphere. it can be divided into the physical study of aquatic ecosystems and the
the transition of our energy system to renewable energies is necessary in order not to heat up the climate any further and to achieve climate neutrality. the use of wind energy plays an important role in this transition in germany. but how much wind energy can be used and what are the possible consequences for the atmosphere if more and more wind energy is used?
the standard theory of ideal gases ignores the interaction of the gas particles with the thermal radiation ( photon gas ) that fills the otherwise vacuum space between them. this is an unphysical feature since every material absorbs and radiates thermal energy. this interaction may be important in gases since the latter, unlike solids and liquids are capable of undergoing conspicuous volume changes. taking it into account makes the behaviour of the ideal gases more realistic and removes gibbs ' paradox.
ambient air ( see lockheed f - 117 nighthawk, rectangular nozzles on the lockheed martin f - 22 raptor, and serrated nozzle flaps on the lockheed martin f - 35 lightning ). often, cool air is deliberately injected into the exhaust flow to boost this process ( see ryan aqm - 91 firefly and northrop b - 2 spirit ). the stefan - boltzmann law shows how this results in less energy ( thermal radiation in the infrared spectrum ) being released and thus reduces the heat signature. in some aircraft, the jet exhaust is vented above the wing surface to shield it from observers below, as in the lockheed f - 117 nighthawk, and the unstealthy fairchild republic a - 10 thunderbolt ii. to achieve infrared stealth, the exhaust gas is cooled to temperatures where the brightest wavelengths it radiates are absorbed by atmospheric carbon dioxide and water vapor, greatly reducing the infrared visibility of the exhaust plume. another way to reduce the exhaust temperature is to circulate coolant fluids such as fuel inside the exhaust pipe, where the fuel tanks serve as heat sinks cooled by the flow of air along the wings. ground combat includes the use of both active and passive infrared sensors. thus, the united states marine corps ( usmc ) ground combat uniform requirements document specifies infrared reflective quality standards. = = reducing radio frequency ( rf ) emissions = = in addition to reducing infrared and acoustic emissions, a stealth vehicle must avoid radiating any other detectable energy, such as from onboard radars, communications systems, or rf leakage from electronics enclosures. the f - 117 uses passive infrared and low light level television sensor systems to aim its weapons, and the f - 22 raptor has an advanced lpi radar which can illuminate enemy aircraft without triggering a radar warning receiver response. = = measuring = = the size of a target ' s image on radar is measured by the rcs, often represented by the symbol σ and expressed in square meters. this does not equal geometric area. a perfectly conducting sphere of projected cross sectional area 1 m2 ( i. e. a diameter of 1. 13 m ) will have an rcs of 1 m2. note that for radar wavelengths much less than the diameter of the sphere, rcs is independent of frequency. conversely, a square flat plate of area 1 m2 will have an rcs of σ = 4π a ^ 2 / λ ^ 2 ( where a = area, λ = wavelength ), or 13, 982 m2 at 10 ghz if the radar is perpendicular to the flat
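The flat-plate figure quoted above follows directly from the formula given in the passage, σ = 4π a^2 / λ^2, with the wavelength computed as λ = c / f. The short Python sketch below verifies it; it assumes the broadside (perpendicular) orientation the passage describes.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def flat_plate_rcs(area_m2: float, freq_hz: float) -> float:
    """Broadside RCS of a flat plate: sigma = 4*pi*A^2 / lambda^2, as in the passage."""
    wavelength_m = C / freq_hz
    return 4.0 * math.pi * area_m2**2 / wavelength_m**2

if __name__ == "__main__":
    sigma = flat_plate_rcs(area_m2=1.0, freq_hz=10e9)
    print(f"1 m^2 plate at 10 GHz: RCS = {sigma:,.0f} m^2")  # ~13,982 m^2, matching the text
```

The same one-square-meter plate returns some fourteen thousand square meters of radar cross section only when viewed exactly face-on, which is why stealth shaping tilts flat surfaces so that the strong specular return is steered away from the emitting radar.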
shuttle from the heat of re - entry into the earth ' s atmosphere. one example is reinforced carbon - carbon ( rcc ), the light gray material, which withstands re - entry temperatures up to 1, 510 Β°c ( 2, 750 Β°f ) and protects the space shuttle ' s wing leading edges and nose cap. rcc is a laminated composite material made from graphite rayon cloth and impregnated with a phenolic resin. after curing at high temperature in an autoclave, the laminate is pyrolized to convert the resin to carbon, impregnated with furfuryl alcohol in a vacuum chamber, and cured - pyrolized to convert the furfuryl alcohol to carbon. to provide oxidation resistance for reusability, the outer layers of the rcc are converted to silicon carbide. other examples can be seen in the " plastic " casings of television sets, cell - phones and so on. these plastic casings are usually a composite material made up of a thermoplastic matrix such as acrylonitrile butadiene styrene ( abs ) in which calcium carbonate chalk, talc, glass fibers or carbon fibers have been added for added strength, bulk, or electrostatic dispersion. these additions may be termed reinforcing fibers, or dispersants, depending on their purpose. = = = polymers = = = polymers are chemical compounds made up of a large number of identical components linked together like chains. polymers are the raw materials ( the resins ) used to make what are commonly called plastics and rubber. plastics and rubber are the final product, created after one or more polymers or additives have been added to a resin during processing, which is then shaped into a final form. plastics in former and in current widespread use include polyethylene, polypropylene, polyvinyl chloride ( pvc ), polystyrene, nylons, polyesters, acrylics, polyurethanes, and polycarbonates. rubbers include natural rubber, styrene - butadiene rubber, chloroprene, and butadiene rubber. plastics are generally classified as commodity, specialty and engineering plastics. polyvinyl chloride ( pvc ) is widely used, inexpensive, and annual production quantities are large. it lends itself to a vast array of applications, from artificial leather to electrical insulation and cabling, packaging, and containers. its fabrication and processing are simple and well - established.
a minimum atmospheric temperature, or tropopause, occurs at a pressure of around 0. 1 bar in the atmospheres of earth, titan, jupiter, saturn, uranus and neptune, despite great differences in atmospheric composition, gravity, internal heat and sunlight. in all these bodies, the tropopause separates a stratosphere with a temperature profile that is controlled by the absorption of shortwave solar radiation, from a region below characterised by convection, weather, and clouds. however, it is not obvious why the tropopause occurs at the specific pressure near 0. 1 bar. here we use a physically - based model to demonstrate that, at atmospheric pressures lower than 0. 1 bar, transparency to thermal radiation allows shortwave heating to dominate, creating a stratosphere. at higher pressures, atmospheres become opaque to thermal radiation, causing temperatures to increase with depth and convection to ensue. a common dependence of infrared opacity on pressure, arising from the shared physics of molecular absorption, sets the 0. 1 bar tropopause. we hypothesize that a tropopause at a pressure of approximately 0. 1 bar is characteristic of many thick atmospheres, including exoplanets and exomoons in our galaxy and beyond. judicious use of this rule could help constrain the atmospheric structure, and thus the surface environments and habitability, of exoplanets.
single carbon atom can form four single covalent bonds such as in methane, two double covalent bonds such as in carbon dioxide ( co2 ), or a triple covalent bond such as in carbon monoxide ( co ). moreover, carbon can form very long chains of interconnecting carbon β carbon bonds such as octane or ring - like structures such as glucose. the simplest form of an organic molecule is the hydrocarbon, which is a large family of organic compounds that are composed of hydrogen atoms bonded to a chain of carbon atoms. a hydrocarbon backbone can be substituted by other elements such as oxygen ( o ), hydrogen ( h ), phosphorus ( p ), and sulfur ( s ), which can change the chemical behavior of that compound. groups of atoms that contain these elements ( o -, h -, p -, and s - ) and are bonded to a central carbon atom or skeleton are called functional groups. there are six prominent functional groups that can be found in organisms : amino group, carboxyl group, carbonyl group, hydroxyl group, phosphate group, and sulfhydryl group. in 1953, the miller β urey experiment showed that organic compounds could be synthesized abiotically within a closed system mimicking the conditions of early earth, thus suggesting that complex organic molecules could have arisen spontaneously in early earth ( see abiogenesis ). = = = macromolecules = = = macromolecules are large molecules made up of smaller subunits or monomers. monomers include sugars, amino acids, and nucleotides. carbohydrates include monomers and polymers of sugars. lipids are the only class of macromolecules that are not made up of polymers. they include steroids, phospholipids, and fats, largely nonpolar and hydrophobic ( water - repelling ) substances. proteins are the most diverse of the macromolecules. they include enzymes, transport proteins, large signaling molecules, antibodies, and structural proteins. the basic unit ( or monomer ) of a protein is an amino acid. twenty amino acids are used in proteins. nucleic acids are polymers of nucleotides. their function is to store, transmit, and express hereditary information. = = cells = = cell theory states that cells are the fundamental units of life, that all living things are composed of one or more cells, and that all cells arise from preexisting cells through cell division
cools and solidifies. through subduction, oceanic crust and lithosphere vehemently returns to the convecting mantle. volcanoes result primarily from the melting of subducted crust material. crust material that is forced into the asthenosphere melts, and some portion of the melted material becomes light enough to rise to the surface β giving birth to volcanoes. = = atmospheric science = = atmospheric science initially developed in the late - 19th century as a means to forecast the weather through meteorology, the study of weather. atmospheric chemistry was developed in the 20th century to measure air pollution and expanded in the 1970s in response to acid rain. climatology studies the climate and climate change. the troposphere, stratosphere, mesosphere, thermosphere, and exosphere are the five layers which make up earth ' s atmosphere. 75 % of the mass in the atmosphere is located within the troposphere, the lowest layer. in all, the atmosphere is made up of about 78. 0 % nitrogen, 20. 9 % oxygen, and 0. 92 % argon, and small amounts of other gases including co2 and water vapor. water vapor and co2 cause the earth ' s atmosphere to catch and hold the sun ' s energy through the greenhouse effect. this makes earth ' s surface warm enough for liquid water and life. in addition to trapping heat, the atmosphere also protects living organisms by shielding the earth ' s surface from cosmic rays. the magnetic field β created by the internal motions of the core β produces the magnetosphere which protects earth ' s atmosphere from the solar wind. as the earth is 4. 5 billion years old, it would have lost its atmosphere by now if there were no protective magnetosphere. = = earth ' s magnetic field = = = = hydrology = = hydrology is the study of the hydrosphere and the movement of water on earth. it emphasizes the study of how humans use and interact with freshwater supplies. study of water ' s movement is closely related to geomorphology and other branches of earth science. applied hydrology involves engineering to maintain aquatic environments and distribute water supplies. subdisciplines of hydrology include oceanography, hydrogeology, ecohydrology, and glaciology. oceanography is the study of oceans. hydrogeology is the study of groundwater. it includes the mapping of groundwater supplies and the analysis of groundwater contaminants. applied hydrogeology seeks to prevent contamination of groundwater and mineral springs and make
horticultural botany, phytopathology, and phytopharmacology. = = scope and importance = = the study of plants is vital because they underpin almost all animal life on earth by generating a large proportion of the oxygen and food that provide humans and other organisms with aerobic respiration with the chemical energy they need to exist. plants, algae and cyanobacteria are the major groups of organisms that carry out photosynthesis, a process that uses the energy of sunlight to convert water and carbon dioxide into sugars that can be used both as a source of chemical energy and of organic molecules that are used in the structural components of cells. as a by - product of photosynthesis, plants release oxygen into the atmosphere, a gas that is required by nearly all living things to carry out cellular respiration. in addition, they are influential in the global carbon and water cycles and plant roots bind and stabilise soils, preventing soil erosion. plants are crucial to the future of human society as they provide food, oxygen, biochemicals, and products for people, as well as creating and preserving soil. historically, all living things were classified as either animals or plants and botany covered the study of all organisms not considered animals. botanists examine both the internal functions and processes within plant organelles, cells, tissues, whole plants, plant populations and plant communities. at each of these levels, a botanist may be concerned with the classification ( taxonomy ), phylogeny and evolution, structure ( anatomy and morphology ), or function ( physiology ) of plant life. the strictest definition of " plant " includes only the " land plants " or embryophytes, which include seed plants ( gymnosperms, including the pines, and flowering plants ) and the free - sporing cryptogams including ferns, clubmosses, liverworts, hornworts and mosses. embryophytes are multicellular eukaryotes descended from an ancestor that obtained its energy from sunlight by photosynthesis. they have life cycles with alternating haploid and diploid phases. the sexual haploid phase of embryophytes, known as the gametophyte, nurtures the developing diploid embryo sporophyte within its tissues for at least part of its life, even in the seed plants, where the gametophyte itself is nurtured by its parent sporophyte. other groups of organisms that were previously studied by botanists include bacteria ( now studied in bacteriology )
Question: Atmospheric greenhouse gases help heat the atmosphere by
A) increasing the amount of solar radiation reaching Earth.
B) storing energy produced by human activity.
C) absorbing infrared radiation released by Earth.
D) increasing the average density of air.
|
C) absorbing infrared radiation released by Earth.
|
Context:
the maximum strength of gravity at the surface of an object of a given mass is not attained for a spherical shape, but for a small departure from sphericity.
is the science / subject of measuring and modelling the process of care in health and social care systems. nosology is the classification of diseases for various purposes. occupational medicine is the provision of health advice to organizations and individuals to ensure that the highest standards of health and safety at work can be achieved and maintained. pain management ( also called pain medicine, or algiatry ) is the medical discipline concerned with the relief of pain. pharmacogenomics is a form of individualized medicine. podiatric medicine is the study of, diagnosis, and medical treatment of disorders of the foot, ankle, lower limb, hip and lower back. sexual medicine is concerned with diagnosing, assessing and treating all disorders related to sexuality. sports medicine deals with the treatment and prevention and rehabilitation of sports / exercise injuries such as muscle spasms, muscle tears, injuries to ligaments ( ligament tears or ruptures ) and their repair in athletes, amateur and professional. therapeutics is the field, more commonly referenced in earlier periods of history, of the various remedies that can be used to treat disease and promote health. travel medicine or emporiatrics deals with health problems of international travelers or travelers across highly different environments. tropical medicine deals with the prevention and treatment of tropical diseases. it is studied separately in temperate climates where those diseases are quite unfamiliar to medical practitioners and their local clinical needs. urgent care focuses on delivery of unscheduled, walk - in care outside of the hospital emergency department for injuries and illnesses that are not severe enough to require care in an emergency department. in some jurisdictions this function is combined with the emergency department. veterinary medicine ; veterinarians apply similar techniques as physicians to the care of non - human animals. wilderness medicine entails the practice of medicine in the wild, where conventional medical facilities may not be available. = = education and legal controls = = medical education and training varies around the world. it typically involves entry level education at a university medical school, followed by a period of supervised practice or internship, or residency. this can be followed by postgraduate vocational training. a variety of teaching methods have been employed in medical education, still itself a focus of active research. in canada and the united states of america, a doctor of medicine degree, often abbreviated m. d., or a doctor of osteopathic medicine degree, often abbreviated as d. o. and unique to the united states, must be completed in and delivered from a recognized university. since knowledge, techniques, and medical technology continue to evolve at a
the formation of supermassive black holes ( smbh ) is intimately related to galaxy formation, although precisely how remains a mystery. i speculate that formation of, and feedback from, smbh may alleviate problems that have arisen in our understanding of the cores of dark halos of galaxies.
current in passing over from one concave bank to the next on the opposite side. the lowering of such a shoal by dredging merely effects a temporary deepening, for it soon forms again from the causes which produced it. the removal, moreover, of the rocky obstructions at rapids, though increasing the depth and equalizing the flow at these places, produces a lowering of the river above the rapids by facilitating the efflux, which may result in the appearance of fresh shoals at the low stage of the river. where, however, narrow rocky reefs or other hard shoals stretch across the bottom of a river and present obstacles to the erosion by the current of the soft materials forming the bed of the river above and below, their removal may result in permanent improvement by enabling the river to deepen its bed by natural scour. the capability of a river to provide a waterway for navigation during the summer or throughout the dry season depends on the depth that can be secured in the channel at the lowest stage. the problem in the dry season is the small discharge and deficiency in scour during this period. a typical solution is to restrict the width of the low - water channel, concentrate all of the flow in it, and also to fix its position so that it is scoured out every year by the floods which follow the deepest part of the bed along the line of the strongest current. this can be effected by closing subsidiary low - water channels with dikes across them, and narrowing the channel at the low stage by low - dipping cross dikes extending from the river banks down the slope and pointing slightly up - stream so as to direct the water flowing over them into a central channel. = = estuarine works = = the needs of navigation may also require that a stable, continuous, navigable channel is prolonged from the navigable river to deep water at the mouth of the estuary. the interaction of river flow and tide needs to be modeled by computer or using scale models, moulded to the configuration of the estuary under consideration and reproducing in miniature the tidal ebb and flow and fresh - water discharge over a bed of fine sand, in which various lines of training walls can be successively inserted. the models should be capable of furnishing valuable indications of the respective effects and comparative merits of the different schemes proposed for works. = = see also = = bridge scour flood control = = references = = = = external links = = u. s. army corps of engineers β civil works program river morphology and stream restoration references
was used before copper smelting was known. copper smelting is believed to have originated when the technology of pottery kilns allowed sufficiently high temperatures. the concentration of various elements such as arsenic increase with depth in copper ore deposits and smelting of these ores yields arsenical bronze, which can be sufficiently work hardened to be suitable for making tools. bronze is an alloy of copper with tin ; the latter being found in relatively few deposits globally caused a long time to elapse before true tin bronze became widespread. ( see : tin sources and trade in ancient times ) bronze was a major advancement over stone as a material for making tools, both because of its mechanical properties like strength and ductility and because it could be cast in molds to make intricately shaped objects. bronze significantly advanced shipbuilding technology with better tools and bronze nails. bronze nails replaced the old method of attaching boards of the hull with cord woven through drilled holes. better ships enabled long - distance trade and the advance of civilization. this technological trend apparently began in the fertile crescent and spread outward over time. these developments were not, and still are not, universal. the three - age system does not accurately describe the technology history of groups outside of eurasia, and does not apply at all in the case of some isolated populations, such as the spinifex people, the sentinelese, and various amazonian tribes, which still make use of stone age technology, and have not developed agricultural or metal technology. these villages preserve traditional customs in the face of global modernity, exhibiting a remarkable resistance to the rapid advancement of technology. = = = = iron age = = = = before iron smelting was developed the only iron was obtained from meteorites and is usually identified by having nickel content. meteoric iron was rare and valuable, but was sometimes used to make tools and other implements, such as fish hooks. the iron age involved the adoption of iron smelting technology. it generally replaced bronze and made it possible to produce tools which were stronger, lighter and cheaper to make than bronze equivalents. the raw materials to make iron, such as ore and limestone, are far more abundant than copper and especially tin ores. consequently, iron was produced in many areas. it was not possible to mass manufacture steel or pure iron because of the high temperatures required. furnaces could reach melting temperature but the crucibles and molds needed for melting and casting had not been developed. steel could be produced by forging bloomery iron to reduce the carbon content in a
a legal document in many jurisdictions. follow - ups may be shorter but follow the same general procedure, and specialists follow a similar process. the diagnosis and treatment may take only a few minutes or a few weeks, depending on the complexity of the issue. the components of the medical interview and encounter are : chief complaint ( cc ) : the reason for the current medical visit. these are the symptoms. they are in the patient ' s own words and are recorded along with the duration of each one. also called chief concern or presenting complaint. current activity : occupation, hobbies, what the patient actually does. family history ( fh ) : listing of diseases in the family that may impact the patient. a family tree is sometimes used. history of present illness ( hpi ) : the chronological order of events of symptoms and further clarification of each symptom. distinguishable from history of previous illness, often called past medical history ( pmh ). medical history comprises hpi and pmh. medications ( rx ) : what drugs the patient takes including prescribed, over - the - counter, and home remedies, as well as alternative and herbal medicines or remedies. allergies are also recorded. past medical history ( pmh / pmhx ) : concurrent medical problems, past hospitalizations and operations, injuries, past infectious diseases or vaccinations, history of known allergies. review of systems ( ros ) or systems inquiry : a set of additional questions to ask, which may be missed on hpi : a general enquiry ( have you noticed any weight loss, change in sleep quality, fevers, lumps and bumps? etc. ), followed by questions on the body ' s main organ systems ( heart, lungs, digestive tract, urinary tract, etc. ). social history ( sh ) : birthplace, residences, marital history, social and economic status, habits ( including diet, medications, tobacco, alcohol ). the physical examination is the examination of the patient for medical signs of disease that are objective and observable, in contrast to symptoms that are volunteered by the patient and are not necessarily objectively observable. the healthcare provider uses sight, hearing, touch, and sometimes smell ( e. g., in infection, uremia, diabetic ketoacidosis ). four actions are the basis of physical examination : inspection, palpation ( feel ), percussion ( tap to determine resonance characteristics ), and auscultation (
suppose that s is a surface of genus two or more, with exactly one boundary component. then the curve complex of s has one end.
##drate - rich plant products such as barley ( beer ), rice ( sake ) and grapes ( wine ). native americans have used various plants as ways of treating illness or disease for thousands of years. this knowledge native americans have on plants has been recorded by enthnobotanists and then in turn has been used by pharmaceutical companies as a way of drug discovery. plants can synthesise coloured dyes and pigments such as the anthocyanins responsible for the red colour of red wine, yellow weld and blue woad used together to produce lincoln green, indoxyl, source of the blue dye indigo traditionally used to dye denim and the artist ' s pigments gamboge and rose madder. sugar, starch, cotton, linen, hemp, some types of rope, wood and particle boards, papyrus and paper, vegetable oils, wax, and natural rubber are examples of commercially important materials made from plant tissues or their secondary products. charcoal, a pure form of carbon made by pyrolysis of wood, has a long history as a metal - smelting fuel, as a filter material and adsorbent and as an artist ' s material and is one of the three ingredients of gunpowder. cellulose, the world ' s most abundant organic polymer, can be converted into energy, fuels, materials and chemical feedstock. products made from cellulose include rayon and cellophane, wallpaper paste, biobutanol and gun cotton. sugarcane, rapeseed and soy are some of the plants with a highly fermentable sugar or oil content that are used as sources of biofuels, important alternatives to fossil fuels, such as biodiesel. sweetgrass was used by native americans to ward off bugs like mosquitoes. these bug repelling properties of sweetgrass were later found by the american chemical society in the molecules phytol and coumarin. = = plant ecology = = plant ecology is the science of the functional relationships between plants and their habitats β the environments where they complete their life cycles. plant ecologists study the composition of local and regional floras, their biodiversity, genetic diversity and fitness, the adaptation of plants to their environment, and their competitive or mutualistic interactions with other species. some ecologists even rely on empirical data from indigenous people that is gathered by ethnobotanists. this information can relay a great deal of information on how the land once was thousands of years ago and how it has changed over that time. the goals of
background : african swine fever is among the most devastating viral diseases of pigs. despite nearly a century of research, there is still no safe and effective vaccine available. the current situation is that either vaccines are safe but not effective, or they are effective but not safe. findings : the asf vaccine prepared using the inactivation method with propiolactone provided 98. 6 % protection within 100 days after three intranasal immunizations, spaced 7 days apart. conclusions : an inactivated vaccine made from complete african swine fever virus particles using propiolactone is safe and effective for controlling asf through mucosal immunity.
the study of microorganisms, including protozoa, bacteria, fungi, and viruses. molecular biology is the study of molecular underpinnings of the process of replication, transcription and translation of the genetic material. neuroscience includes those disciplines of science that are related to the study of the nervous system. a main focus of neuroscience is the biology and physiology of the human brain and spinal cord. some related clinical specialties include neurology, neurosurgery and psychiatry. nutrition science ( theoretical focus ) and dietetics ( practical focus ) is the study of the relationship of food and drink to health and disease, especially in determining an optimal diet. medical nutrition therapy is done by dietitians and is prescribed for diabetes, cardiovascular diseases, weight and eating disorders, allergies, malnutrition, and neoplastic diseases. pathology as a science is the study of disease β the causes, course, progression and resolution thereof. pharmacology is the study of drugs and their actions. photobiology is the study of the interactions between non - ionizing radiation and living organisms. physiology is the study of the normal functioning of the body and the underlying regulatory mechanisms. radiobiology is the study of the interactions between ionizing radiation and living organisms. toxicology is the study of hazardous effects of drugs and poisons. = = = specialties = = = in the broadest meaning of " medicine ", there are many different specialties. in the uk, most specialities have their own body or college, which has its own entrance examination. these are collectively known as the royal colleges, although not all currently use the term " royal ". the development of a speciality is often driven by new technology ( such as the development of effective anaesthetics ) or ways of working ( such as emergency departments ) ; the new specialty leads to the formation of a unifying body of doctors and the prestige of administering their own examination. within medical circles, specialities usually fit into one of two broad categories : " medicine " and " surgery ". " medicine " refers to the practice of non - operative medicine, and most of its subspecialties require preliminary training in internal medicine. in the uk, this was traditionally evidenced by passing the examination for the membership of the royal college of physicians ( mrcp ) or the equivalent college in scotland or ireland. " surgery " refers to the practice of operative medicine, and most subspecialties in this area require preliminary training in general surgery, which in the uk leads to
Question: Scurvy is a disease that sailors often got on long voyages. It was discovered that scurvy could be prevented by eating oranges and lemons. This suggests that scurvy is a disease caused by
A) exposure to sea air
B) a nutritional deficiency
C) a microorganism
D) lack of exercise
|
B) a nutritional deficiency
|
Context:
also known as the gradient or slope. when two rivers of different sizes have the same fall, the larger river has the quicker flow, as its retardation by friction against its bed and banks is less in proportion to its volume than is the case with the smaller river. the fall available in a section of a river approximately corresponds to the slope of the country it traverses ; as rivers rise close to the highest part of their basins, generally in hilly regions, their fall is rapid near their source and gradually diminishes, with occasional irregularities, until, in traversing plains along the latter part of their course, their fall usually becomes quite gentle. accordingly, in large basins, rivers in most cases begin as torrents with a variable flow, and end as gently flowing rivers with a comparatively regular discharge. the irregular flow of rivers throughout their course forms one of the main difficulties in devising works for mitigating inundations or for increasing the navigable capabilities of rivers. in tropical countries subject to periodical rains, the rivers are in flood during the rainy season and have hardly any flow during the rest of the year, while in temperate regions, where the rainfall is more evenly distributed throughout the year, evaporation causes the available rainfall to be much less in hot summer weather than in the winter months, so that the rivers fall to their low stage in the summer and are liable to be in flood in the winter. in fact, with a temperate climate, the year may be divided into a warm and a cold season, extending from may to october and from november to april in the northern hemisphere respectively ; the rivers are low and moderate floods are of rare occurrence during the warm period, and the rivers are high and subject to occasional heavy floods after a considerable rainfall during the cold period in most years. the only exceptions are rivers which have their sources amongst mountains clad with perpetual snow and are fed by glaciers ; their floods occur in the summer from the melting of snow and ice, as exemplified by the rhone above the lake of geneva, and the arve which joins it below. but even these rivers are liable to have their flow modified by the influx of tributaries subject to different conditions, so that the rhone below lyon has a more uniform discharge than most rivers, as the summer floods of the arve are counteracted to a great extent by the low stage of the saone flowing into the rhone at lyon, which has its floods in the winter when the arve, on the contrary, is low. another serious obstacle encountered in river engineering consists in
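The claim that, for the same fall, a larger river flows faster because friction at its bed and banks is smaller in proportion to its volume is the hydraulic-radius effect. The passage does not give a formula, so the Python sketch below illustrates the idea with Manning's equation; the roughness coefficient and the two rectangular channel geometries are assumed values for illustration, not data from the text.

```python
def manning_velocity(area_m2: float, wetted_perimeter_m: float,
                     slope: float, n: float = 0.035) -> float:
    """Mean velocity from Manning's formula (SI units): V = (1/n) * R^(2/3) * S^(1/2),
    where R = area / wetted perimeter is the hydraulic radius."""
    r = area_m2 / wetted_perimeter_m
    return (1.0 / n) * r ** (2.0 / 3.0) * slope ** 0.5

if __name__ == "__main__":
    slope = 0.0005  # the same fall (0.5 m per km) for both channels
    channels = {"small river": (10.0, 1.0), "large river": (100.0, 5.0)}  # (width m, depth m)
    for name, (width, depth) in channels.items():
        area, perimeter = width * depth, width + 2.0 * depth
        v = manning_velocity(area, perimeter, slope)
        print(f"{name}: hydraulic radius {area / perimeter:.2f} m, velocity {v:.2f} m/s")
```

With identical slope and roughness, the larger channel's hydraulic radius is several times greater and its mean velocity comes out roughly three times higher, which is the quantitative version of the passage's statement about retardation by friction.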
remediation include ; soil contamination, hazardous waste, groundwater contamination, oil, gas and chemical spills. there are three most common types of environmental remediation. these include soil, water, and sediment remediation. soil remediation consists of removing contaminants in soil, as these pose great risks to humans and the ecosystem. some examples of this are heavy metals, pesticides, and radioactive materials. depending on the contaminant the remedial processes can be physical, chemical, thermal, or biological. water remediation is one of the most important considering water is an essential natural resource. depending on the source of water there will be different contaminants. surface water contamination mainly consists of agricultural, animal, and industrial waste, as well as acid mine drainage. there has been a rise in the need for water remediation due to the increased discharge of industrial waste, leading to a demand for sustainable water solutions. the market for water remediation is expected to consistently increase to $ 19. 6 billion by 2030. sediment remediation consists of removing contaminated sediments. is it almost similar to soil remediation except it is often more sophisticated as it involves additional contaminants. to reduce the contaminants it is likely to use physical, chemical, and biological processes that help with source control, but if these processes are executed correctly, there ' s a risk of contamination resurfacing. = = = solid waste management = = = solid waste management is the purification, consumption, reuse, disposal, and treatment of solid waste that is undertaken by the government or the ruling bodies of a city / town. it refers to the collection, treatment, and disposal of non - soluble, solid waste material. solid waste is associated with both industrial, institutional, commercial and residential activities. hazardous solid waste, when improperly disposed can encourage the infestation of insects and rodents, contributing to the spread of diseases. some of the most common types of solid waste management include ; landfills, vermicomposting, composting, recycling, and incineration. however, a major barrier for solid waste management practices is the high costs associated with recycling and the risks of creating more pollution. = = = e - waste recycling = = = the recycling of electronic waste ( e - waste ) has seen significant technological advancements due to increasing environmental concerns and the growing volume of electronic product disposals. traditional e - waste recycling methods, which often involve manual disassemb
the injuries of the inundations they have been designed to prevent, as the escape of floods from the raised river must occur sooner or later. inadequate planning controls which have permitted development on floodplains have been blamed for the flooding of domestic properties. channelization was done under the auspices or overall direction of engineers employed by the local authority or the national government. one of the most heavily channelized areas in the united states is west tennessee, where every major stream with one exception ( the hatchie river ) has been partially or completely channelized. channelization of a stream may be undertaken for several reasons. one is to make a stream more suitable for navigation or for navigation by larger vessels with deep draughts. another is to restrict water to a certain area of a stream ' s natural bottom lands so that the bulk of such lands can be made available for agriculture. a third reason is flood control, with the idea of giving a stream a sufficiently large and deep channel so that flooding beyond those limits will be minimal or nonexistent, at least on a routine basis. one major reason is to reduce natural erosion ; as a natural waterway curves back and forth, it usually deposits sand and gravel on the inside of the corners where the water flows slowly, and cuts sand, gravel, subsoil, and precious topsoil from the outside corners where it flows rapidly due to a change in direction. unlike sand and gravel, the topsoil that is eroded does not get deposited on the inside of the next corner of the river. it simply washes away. = = loss of wetlands = = channelization has several predictable and negative effects. one of them is loss of wetlands. wetlands are an excellent habitat for multiple forms of wildlife, and additionally serve as a " filter " for much of the world ' s surface fresh water. another is the fact that channelized streams are almost invariably straightened. for example, the channelization of florida ' s kissimmee river has been cited as a cause contributing to the loss of wetlands. this straightening causes the streams to flow more rapidly, which can, in some instances, vastly increase soil erosion. it can also increase flooding downstream from the channelized area, as larger volumes of water traveling more rapidly than normal can reach choke points over a shorter period of time than they otherwise would, with a net effect of flood control in one area coming at the expense of aggravated flooding in another. in addition, studies have shown that stream channelization results in declines of river fish populations. : 3 - 1ff a
= = = = = = environmental remediation = = = environmental remediation is the process through which contaminants or pollutants in soil, water and other media are removed to improve environmental quality. the main focus is the reduction of hazardous substances within the environment. some of the areas involved in environmental remediation include ; soil contamination, hazardous waste, groundwater contamination, oil, gas and chemical spills. there are three most common types of environmental remediation. these include soil, water, and sediment remediation. soil remediation consists of removing contaminants in soil, as these pose great risks to humans and the ecosystem. some examples of this are heavy metals, pesticides, and radioactive materials. depending on the contaminant the remedial processes can be physical, chemical, thermal, or biological. water remediation is one of the most important considering water is an essential natural resource. depending on the source of water there will be different contaminants. surface water contamination mainly consists of agricultural, animal, and industrial waste, as well as acid mine drainage. there has been a rise in the need for water remediation due to the increased discharge of industrial waste, leading to a demand for sustainable water solutions. the market for water remediation is expected to consistently increase to $ 19. 6 billion by 2030. sediment remediation consists of removing contaminated sediments. is it almost similar to soil remediation except it is often more sophisticated as it involves additional contaminants. to reduce the contaminants it is likely to use physical, chemical, and biological processes that help with source control, but if these processes are executed correctly, there ' s a risk of contamination resurfacing. = = = solid waste management = = = solid waste management is the purification, consumption, reuse, disposal, and treatment of solid waste that is undertaken by the government or the ruling bodies of a city / town. it refers to the collection, treatment, and disposal of non - soluble, solid waste material. solid waste is associated with both industrial, institutional, commercial and residential activities. hazardous solid waste, when improperly disposed can encourage the infestation of insects and rodents, contributing to the spread of diseases. some of the most common types of solid waste management include ; landfills, vermicomposting, composting, recycling, and incineration. however, a major barrier for solid waste management practices is the high costs associated with recycling
depends on the extent of the continent in which it is situated, its position in relation to the hilly regions in which rivers generally arise and the sea into which they flow, and the distance between the source and the outlet into the sea of the river draining it. the rate of flow of rivers depends mainly upon their fall, also known as the gradient or slope. when two rivers of different sizes have the same fall, the larger river has the quicker flow, as its retardation by friction against its bed and banks is less in proportion to its volume than is the case with the smaller river. the fall available in a section of a river approximately corresponds to the slope of the country it traverses ; as rivers rise close to the highest part of their basins, generally in hilly regions, their fall is rapid near their source and gradually diminishes, with occasional irregularities, until, in traversing plains along the latter part of their course, their fall usually becomes quite gentle. accordingly, in large basins, rivers in most cases begin as torrents with a variable flow, and end as gently flowing rivers with a comparatively regular discharge. the irregular flow of rivers throughout their course forms one of the main difficulties in devising works for mitigating inundations or for increasing the navigable capabilities of rivers. in tropical countries subject to periodical rains, the rivers are in flood during the rainy season and have hardly any flow during the rest of the year, while in temperate regions, where the rainfall is more evenly distributed throughout the year, evaporation causes the available rainfall to be much less in hot summer weather than in the winter months, so that the rivers fall to their low stage in the summer and are liable to be in flood in the winter. in fact, with a temperate climate, the year may be divided into a warm and a cold season, extending from may to october and from november to april in the northern hemisphere respectively ; the rivers are low and moderate floods are of rare occurrence during the warm period, and the rivers are high and subject to occasional heavy floods after a considerable rainfall during the cold period in most years. the only exceptions are rivers which have their sources amongst mountains clad with perpetual snow and are fed by glaciers ; their floods occur in the summer from the melting of snow and ice, as exemplified by the rhone above the lake of geneva, and the arve which joins it below. but even these rivers are liable to have their flow modified by the influx of tributaries subject to different conditions, so that the rhone below lyon has a more uniform
from the insignificant drainage areas of streams rising on high ground near the coast and flowing straight down into the sea, up to immense tracts of continents, where rivers rising on the slopes of mountain ranges far inland have to traverse vast stretches of valleys and plains before reaching the ocean. the size of the largest river basin of any country depends on the extent of the continent in which it is situated, its position in relation to the hilly regions in which rivers generally arise and the sea into which they flow, and the distance between the source and the outlet into the sea of the river draining it. the rate of flow of rivers depends mainly upon their fall, also known as the gradient or slope. when two rivers of different sizes have the same fall, the larger river has the quicker flow, as its retardation by friction against its bed and banks is less in proportion to its volume than is the case with the smaller river. the fall available in a section of a river approximately corresponds to the slope of the country it traverses ; as rivers rise close to the highest part of their basins, generally in hilly regions, their fall is rapid near their source and gradually diminishes, with occasional irregularities, until, in traversing plains along the latter part of their course, their fall usually becomes quite gentle. accordingly, in large basins, rivers in most cases begin as torrents with a variable flow, and end as gently flowing rivers with a comparatively regular discharge. the irregular flow of rivers throughout their course forms one of the main difficulties in devising works for mitigating inundations or for increasing the navigable capabilities of rivers. in tropical countries subject to periodical rains, the rivers are in flood during the rainy season and have hardly any flow during the rest of the year, while in temperate regions, where the rainfall is more evenly distributed throughout the year, evaporation causes the available rainfall to be much less in hot summer weather than in the winter months, so that the rivers fall to their low stage in the summer and are liable to be in flood in the winter. in fact, with a temperate climate, the year may be divided into a warm and a cold season, extending from may to october and from november to april in the northern hemisphere respectively ; the rivers are low and moderate floods are of rare occurrence during the warm period, and the rivers are high and subject to occasional heavy floods after a considerable rainfall during the cold period in most years. the only exceptions are rivers which have their sources amongst mountains clad with perpetual snow and are fed by glaciers ; their
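The relationship between fall and rate of flow described above can be illustrated with a standard open-channel approximation. The sketch below uses Manning's empirical formula, which the passage does not name; the roughness coefficient and channel sizes are assumed values chosen only to show the trend.

```python
# A minimal sketch (not from the passage): Manning's empirical formula for
# open-channel flow, v = (1/n) * R**(2/3) * S**(1/2), illustrates the two
# claims above -- velocity rises with the fall (slope S), and a larger river
# (larger hydraulic radius R) flows faster than a smaller one at the same fall,
# because friction against bed and banks is smaller in proportion to volume.

def manning_velocity(hydraulic_radius_m: float, slope: float, roughness_n: float = 0.035) -> float:
    """Mean flow velocity in m/s; roughness_n ~0.035 is a typical natural-channel value (assumed)."""
    return (1.0 / roughness_n) * hydraulic_radius_m ** (2.0 / 3.0) * slope ** 0.5

# Same fall (slope), different channel sizes: the larger river is quicker.
for radius in (0.5, 5.0):          # small stream vs. large river (illustrative numbers)
    print(radius, round(manning_velocity(radius, slope=0.0005), 2), "m/s")
```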
equalizing the flow at these places, produces a lowering of the river above the rapids by facilitating the efflux, which may result in the appearance of fresh shoals at the low stage of the river. where, however, narrow rocky reefs or other hard shoals stretch across the bottom of a river and present obstacles to the erosion by the current of the soft materials forming the bed of the river above and below, their removal may result in permanent improvement by enabling the river to deepen its bed by natural scour. the capability of a river to provide a waterway for navigation during the summer or throughout the dry season depends on the depth that can be secured in the channel at the lowest stage. the problem in the dry season is the small discharge and deficiency in scour during this period. a typical solution is to restrict the width of the low - water channel, concentrate all of the flow in it, and also to fix its position so that it is scoured out every year by the floods which follow the deepest part of the bed along the line of the strongest current. this can be effected by closing subsidiary low - water channels with dikes across them, and narrowing the channel at the low stage by low - dipping cross dikes extending from the river banks down the slope and pointing slightly up - stream so as to direct the water flowing over them into a central channel. = = estuarine works = = the needs of navigation may also require that a stable, continuous, navigable channel is prolonged from the navigable river to deep water at the mouth of the estuary. the interaction of river flow and tide needs to be modeled by computer or using scale models, moulded to the configuration of the estuary under consideration and reproducing in miniature the tidal ebb and flow and fresh - water discharge over a bed of fine sand, in which various lines of training walls can be successively inserted. the models should be capable of furnishing valuable indications of the respective effects and comparative merits of the different schemes proposed for works. = = see also = = bridge scour flood control = = references = = = = external links = = u. s. army corps of engineers β civil works program river morphology and stream restoration references - wildland hydrology at the library of congress web archives ( archived 2002 - 08 - 13 )
##ructing the channel depends on the nature of the shoals. a soft shoal in the bed of a river is due to deposit from a diminution in velocity of flow, produced by a reduction in fall and by a widening of the channel, or to a loss in concentration of the scour of the main current in passing over from one concave bank to the next on the opposite side. the lowering of such a shoal by dredging merely effects a temporary deepening, for it soon forms again from the causes which produced it. the removal, moreover, of the rocky obstructions at rapids, though increasing the depth and equalizing the flow at these places, produces a lowering of the river above the rapids by facilitating the efflux, which may result in the appearance of fresh shoals at the low stage of the river. where, however, narrow rocky reefs or other hard shoals stretch across the bottom of a river and present obstacles to the erosion by the current of the soft materials forming the bed of the river above and below, their removal may result in permanent improvement by enabling the river to deepen its bed by natural scour. the capability of a river to provide a waterway for navigation during the summer or throughout the dry season depends on the depth that can be secured in the channel at the lowest stage. the problem in the dry season is the small discharge and deficiency in scour during this period. a typical solution is to restrict the width of the low - water channel, concentrate all of the flow in it, and also to fix its position so that it is scoured out every year by the floods which follow the deepest part of the bed along the line of the strongest current. this can be effected by closing subsidiary low - water channels with dikes across them, and narrowing the channel at the low stage by low - dipping cross dikes extending from the river banks down the slope and pointing slightly up - stream so as to direct the water flowing over them into a central channel. = = estuarine works = = the needs of navigation may also require that a stable, continuous, navigable channel is prolonged from the navigable river to deep water at the mouth of the estuary. the interaction of river flow and tide needs to be modeled by computer or using scale models, moulded to the configuration of the estuary under consideration and reproducing in miniature the tidal ebb and flow and fresh - water discharge over a bed of fine sand, in which various lines of training walls can be successively inserted. the models
the broad definition of " utilizing a biotechnological system to make products ". indeed, the cultivation of plants may be viewed as the earliest biotechnological enterprise. agriculture has been theorized to have become the dominant way of producing food since the neolithic revolution. through early biotechnology, the earliest farmers selected and bred the best - suited crops ( e. g., those with the highest yields ) to produce enough food to support a growing population. as crops and fields became increasingly large and difficult to maintain, it was discovered that specific organisms and their by - products could effectively fertilize, restore nitrogen, and control pests. throughout the history of agriculture, farmers have inadvertently altered the genetics of their crops through introducing them to new environments and breeding them with other plants β one of the first forms of biotechnology. these processes also were included in early fermentation of beer. these processes were introduced in early mesopotamia, egypt, china and india, and still use the same basic biological methods. in brewing, malted grains ( containing enzymes ) convert starch from grains into sugar and then adding specific yeasts to produce beer. in this process, carbohydrates in the grains broke down into alcohols, such as ethanol. later, other cultures produced the process of lactic acid fermentation, which produced other preserved foods, such as soy sauce. fermentation was also used in this time period to produce leavened bread. although the process of fermentation was not fully understood until louis pasteur ' s work in 1857, it is still the first use of biotechnology to convert a food source into another form. before the time of charles darwin ' s work and life, animal and plant scientists had already used selective breeding. darwin added to that body of work with his scientific observations about the ability of science to change species. these accounts contributed to darwin ' s theory of natural selection. for thousands of years, humans have used selective breeding to improve the production of crops and livestock to use them for food. in selective breeding, organisms with desirable characteristics are mated to produce offspring with the same characteristics. for example, this technique was used with corn to produce the largest and sweetest crops. in the early twentieth century scientists gained a greater understanding of microbiology and explored ways of manufacturing specific products. in 1917, chaim weizmann first used a pure microbiological culture in an industrial process, that of manufacturing corn starch using clostridium acetobutylicum, to produce acetone, which the united
becomes quite gentle. accordingly, in large basins, rivers in most cases begin as torrents with a variable flow, and end as gently flowing rivers with a comparatively regular discharge. the irregular flow of rivers throughout their course forms one of the main difficulties in devising works for mitigating inundations or for increasing the navigable capabilities of rivers. in tropical countries subject to periodical rains, the rivers are in flood during the rainy season and have hardly any flow during the rest of the year, while in temperate regions, where the rainfall is more evenly distributed throughout the year, evaporation causes the available rainfall to be much less in hot summer weather than in the winter months, so that the rivers fall to their low stage in the summer and are liable to be in flood in the winter. in fact, with a temperate climate, the year may be divided into a warm and a cold season, extending from may to october and from november to april in the northern hemisphere respectively ; the rivers are low and moderate floods are of rare occurrence during the warm period, and the rivers are high and subject to occasional heavy floods after a considerable rainfall during the cold period in most years. the only exceptions are rivers which have their sources amongst mountains clad with perpetual snow and are fed by glaciers ; their floods occur in the summer from the melting of snow and ice, as exemplified by the rhone above the lake of geneva, and the arve which joins it below. but even these rivers are liable to have their flow modified by the influx of tributaries subject to different conditions, so that the rhone below lyon has a more uniform discharge than most rivers, as the summer floods of the arve are counteracted to a great extent by the low stage of the saone flowing into the rhone at lyon, which has its floods in the winter when the arve, on the contrary, is low. another serious obstacle encountered in river engineering consists in the large quantity of detritus they bring down in flood - time, derived mainly from the disintegration of the surface layers of the hills and slopes in the upper parts of the valleys by glaciers, frost and rain. the power of a current to transport materials varies with its velocity, so that torrents with a rapid fall near the sources of rivers can carry down rocks, boulders and large stones, which are by degrees ground by attrition in their onward course into slate, gravel, sand and silt, simultaneously with the gradual reduction in fall, and, consequently, in the transporting force of the current. accordingly, under
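The statement above that a current's power to transport materials varies with its velocity is often quantified by the so-called sixth-power rule of thumb, which the passage itself does not give; the sketch below is illustrative only, with an arbitrary scaling constant.

```python
# Illustrative only: a commonly cited rule of thumb in river hydraulics (the
# "sixth-power law", not stated in the passage) holds that the weight of the
# largest particle a current can move grows roughly as the sixth power of its
# velocity. It shows why mountain torrents shift boulders while gently flowing
# lowland reaches carry only sand and silt.

def max_particle_weight(velocity_ms: float, k: float = 1.0) -> float:
    """Relative competence; k is an arbitrary scaling constant (assumed)."""
    return k * velocity_ms ** 6

for v in (0.5, 1.0, 2.0, 4.0):     # slow plain river ... steep torrent (m/s)
    print(f"v = {v} m/s -> relative competence {max_particle_weight(v):,.2f}")
```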
Question: Runoff from farms that use fertilizers is entering a small lake. This will most directly affect the lake by causing
A) the lake to dry up.
B) algae to grow in the lake.
C) the lake to become deeper.
D) water in the lake to become solid.
|
B) algae to grow in the lake.
|
Context:
, there are many biomolecules such as proteins and nucleic acids. in addition to biomolecules, eukaryotic cells have specialized structures called organelles that have their own lipid bilayers or are spatially units. these organelles include the cell nucleus, which contains most of the cell ' s dna, or mitochondria, which generate adenosine triphosphate ( atp ) to power cellular processes. other organelles such as endoplasmic reticulum and golgi apparatus play a role in the synthesis and packaging of proteins, respectively. biomolecules such as proteins can be engulfed by lysosomes, another specialized organelle. plant cells have additional organelles that distinguish them from animal cells such as a cell wall that provides support for the plant cell, chloroplasts that harvest sunlight energy to produce sugar, and vacuoles that provide storage and structural support as well as being involved in reproduction and breakdown of plant seeds. eukaryotic cells also have cytoskeleton that is made up of microtubules, intermediate filaments, and microfilaments, all of which provide support for the cell and are involved in the movement of the cell and its organelles. in terms of their structural composition, the microtubules are made up of tubulin ( e. g., Ξ± - tubulin and Ξ² - tubulin ) whereas intermediate filaments are made up of fibrous proteins. microfilaments are made up of actin molecules that interact with other strands of proteins. = = = metabolism = = = all cells require energy to sustain cellular processes. metabolism is the set of chemical reactions in an organism. the three main purposes of metabolism are : the conversion of food to energy to run cellular processes ; the conversion of food / fuel to monomer building blocks ; and the elimination of metabolic wastes. these enzyme - catalyzed reactions allow organisms to grow and reproduce, maintain their structures, and respond to their environments. metabolic reactions may be categorized as catabolic β the breaking down of compounds ( for example, the breaking down of glucose to pyruvate by cellular respiration ) ; or anabolic β the building up ( synthesis ) of compounds ( such as proteins, carbohydrates, lipids, and nucleic acids ). usually, catabolism releases energy, and anabolism consumes energy. the chemical reactions of metabolism are organized into metabolic pathways, in which
and nucleotides. carbohydrates include monomers and polymers of sugars. lipids are the only class of macromolecules that are not made up of polymers. they include steroids, phospholipids, and fats, largely nonpolar and hydrophobic ( water - repelling ) substances. proteins are the most diverse of the macromolecules. they include enzymes, transport proteins, large signaling molecules, antibodies, and structural proteins. the basic unit ( or monomer ) of a protein is an amino acid. twenty amino acids are used in proteins. nucleic acids are polymers of nucleotides. their function is to store, transmit, and express hereditary information. = = cells = = cell theory states that cells are the fundamental units of life, that all living things are composed of one or more cells, and that all cells arise from preexisting cells through cell division. most cells are very small, with diameters ranging from 1 to 100 micrometers and are therefore only visible under a light or electron microscope. there are generally two types of cells : eukaryotic cells, which contain a nucleus, and prokaryotic cells, which do not. prokaryotes are single - celled organisms such as bacteria, whereas eukaryotes can be single - celled or multicellular. in multicellular organisms, every cell in the organism ' s body is derived ultimately from a single cell in a fertilized egg. = = = cell structure = = = every cell is enclosed within a cell membrane that separates its cytoplasm from the extracellular space. a cell membrane consists of a lipid bilayer, including cholesterols that sit between phospholipids to maintain their fluidity at various temperatures. cell membranes are semipermeable, allowing small molecules such as oxygen, carbon dioxide, and water to pass through while restricting the movement of larger molecules and charged particles such as ions. cell membranes also contain membrane proteins, including integral membrane proteins that go across the membrane serving as membrane transporters, and peripheral proteins that loosely attach to the outer side of the cell membrane, acting as enzymes shaping the cell. cell membranes are involved in various cellular processes such as cell adhesion, storing electrical energy, and cell signalling and serve as the attachment surface for several extracellular structures such as a cell wall, glycocalyx, and cytoskeleton. within the cytoplasm of a cell
in 2015 the fda approved the first gm salmon for commercial production and consumption. there is a scientific consensus that currently available food derived from gm crops poses no greater risk to human health than conventional food, but that each gm food needs to be tested on a case - by - case basis before introduction. nonetheless, members of the public are much less likely than scientists to perceive gm foods as safe. the legal and regulatory status of gm foods varies by country, with some nations banning or restricting them, and others permitting them with widely differing degrees of regulation. gm crops also provide a number of ecological benefits, if not used in excess. insect - resistant crops have proven to lower pesticide usage, therefore reducing the environmental impact of pesticides as a whole. however, opponents have objected to gm crops per se on several grounds, including environmental concerns, whether food produced from gm crops is safe, whether gm crops are needed to address the world ' s food needs, and economic concerns raised by the fact these organisms are subject to intellectual property law. biotechnology has several applications in the realm of food security. crops like golden rice are engineered to have higher nutritional content, and there is potential for food products with longer shelf lives. though not a form of agricultural biotechnology, vaccines can help prevent diseases found in animal agriculture. additionally, agricultural biotechnology can expedite breeding processes in order to yield faster results and provide greater quantities of food. transgenic biofortification in cereals has been considered as a promising method to combat malnutrition in india and other countries. = = = industrial = = = industrial biotechnology ( known mainly in europe as white biotechnology ) is the application of biotechnology for industrial purposes, including industrial fermentation. it includes the practice of using cells such as microorganisms, or components of cells like enzymes, to generate industrially useful products in sectors such as chemicals, food and feed, detergents, paper and pulp, textiles and biofuels. in the current decades, significant progress has been done in creating genetically modified organisms ( gmos ) that enhance the diversity of applications and economical viability of industrial biotechnology. by using renewable raw materials to produce a variety of chemicals and fuels, industrial biotechnology is actively advancing towards lowering greenhouse gas emissions and moving away from a petrochemical - based economy. synthetic biology is considered one of the essential cornerstones in industrial biotechnology due to its financial and sustainable contribution to the manufacturing sector. jointly biotechnology and synthetic biology play a crucial role in generating cost - effective products with nature - friendly features by using bio - based
new crop traits as well as a far greater control over a food ' s genetic structure than previously afforded by methods such as selective breeding and mutation breeding. commercial sale of genetically modified foods began in 1994, when calgene first marketed its flavr savr delayed ripening tomato. to date most genetic modification of foods have primarily focused on cash crops in high demand by farmers such as soybean, corn, canola, and cotton seed oil. these have been engineered for resistance to pathogens and herbicides and better nutrient profiles. gm livestock have also been experimentally developed ; in november 2013 none were available on the market, but in 2015 the fda approved the first gm salmon for commercial production and consumption. there is a scientific consensus that currently available food derived from gm crops poses no greater risk to human health than conventional food, but that each gm food needs to be tested on a case - by - case basis before introduction. nonetheless, members of the public are much less likely than scientists to perceive gm foods as safe. the legal and regulatory status of gm foods varies by country, with some nations banning or restricting them, and others permitting them with widely differing degrees of regulation. gm crops also provide a number of ecological benefits, if not used in excess. insect - resistant crops have proven to lower pesticide usage, therefore reducing the environmental impact of pesticides as a whole. however, opponents have objected to gm crops per se on several grounds, including environmental concerns, whether food produced from gm crops is safe, whether gm crops are needed to address the world ' s food needs, and economic concerns raised by the fact these organisms are subject to intellectual property law. biotechnology has several applications in the realm of food security. crops like golden rice are engineered to have higher nutritional content, and there is potential for food products with longer shelf lives. though not a form of agricultural biotechnology, vaccines can help prevent diseases found in animal agriculture. additionally, agricultural biotechnology can expedite breeding processes in order to yield faster results and provide greater quantities of food. transgenic biofortification in cereals has been considered as a promising method to combat malnutrition in india and other countries. = = = industrial = = = industrial biotechnology ( known mainly in europe as white biotechnology ) is the application of biotechnology for industrial purposes, including industrial fermentation. it includes the practice of using cells such as microorganisms, or components of cells like enzymes, to generate industrially useful products in sectors such as chemicals, food and feed, detergents, paper
the cell ' s dna, or mitochondria, which generate adenosine triphosphate ( atp ) to power cellular processes. other organelles such as endoplasmic reticulum and golgi apparatus play a role in the synthesis and packaging of proteins, respectively. biomolecules such as proteins can be engulfed by lysosomes, another specialized organelle. plant cells have additional organelles that distinguish them from animal cells such as a cell wall that provides support for the plant cell, chloroplasts that harvest sunlight energy to produce sugar, and vacuoles that provide storage and structural support as well as being involved in reproduction and breakdown of plant seeds. eukaryotic cells also have cytoskeleton that is made up of microtubules, intermediate filaments, and microfilaments, all of which provide support for the cell and are involved in the movement of the cell and its organelles. in terms of their structural composition, the microtubules are made up of tubulin ( e. g., Ξ± - tubulin and Ξ² - tubulin ) whereas intermediate filaments are made up of fibrous proteins. microfilaments are made up of actin molecules that interact with other strands of proteins. = = = metabolism = = = all cells require energy to sustain cellular processes. metabolism is the set of chemical reactions in an organism. the three main purposes of metabolism are : the conversion of food to energy to run cellular processes ; the conversion of food / fuel to monomer building blocks ; and the elimination of metabolic wastes. these enzyme - catalyzed reactions allow organisms to grow and reproduce, maintain their structures, and respond to their environments. metabolic reactions may be categorized as catabolic β the breaking down of compounds ( for example, the breaking down of glucose to pyruvate by cellular respiration ) ; or anabolic β the building up ( synthesis ) of compounds ( such as proteins, carbohydrates, lipids, and nucleic acids ). usually, catabolism releases energy, and anabolism consumes energy. the chemical reactions of metabolism are organized into metabolic pathways, in which one chemical is transformed through a series of steps into another chemical, each step being facilitated by a specific enzyme. enzymes are crucial to metabolism because they allow organisms to drive desirable reactions that require energy that will not occur by themselves, by coupling them to spontaneous reactions that release energy. enzymes act as catalysts β they allow a
have primarily focused on cash crops in high demand by farmers such as soybean, corn, canola, and cotton seed oil. these have been engineered for resistance to pathogens and herbicides and better nutrient profiles. gm livestock have also been experimentally developed ; in november 2013 none were available on the market, but in 2015 the fda approved the first gm salmon for commercial production and consumption. there is a scientific consensus that currently available food derived from gm crops poses no greater risk to human health than conventional food, but that each gm food needs to be tested on a case - by - case basis before introduction. nonetheless, members of the public are much less likely than scientists to perceive gm foods as safe. the legal and regulatory status of gm foods varies by country, with some nations banning or restricting them, and others permitting them with widely differing degrees of regulation. gm crops also provide a number of ecological benefits, if not used in excess. insect - resistant crops have proven to lower pesticide usage, therefore reducing the environmental impact of pesticides as a whole. however, opponents have objected to gm crops per se on several grounds, including environmental concerns, whether food produced from gm crops is safe, whether gm crops are needed to address the world ' s food needs, and economic concerns raised by the fact these organisms are subject to intellectual property law. biotechnology has several applications in the realm of food security. crops like golden rice are engineered to have higher nutritional content, and there is potential for food products with longer shelf lives. though not a form of agricultural biotechnology, vaccines can help prevent diseases found in animal agriculture. additionally, agricultural biotechnology can expedite breeding processes in order to yield faster results and provide greater quantities of food. transgenic biofortification in cereals has been considered as a promising method to combat malnutrition in india and other countries. = = = industrial = = = industrial biotechnology ( known mainly in europe as white biotechnology ) is the application of biotechnology for industrial purposes, including industrial fermentation. it includes the practice of using cells such as microorganisms, or components of cells like enzymes, to generate industrially useful products in sectors such as chemicals, food and feed, detergents, paper and pulp, textiles and biofuels. in the current decades, significant progress has been done in creating genetically modified organisms ( gmos ) that enhance the diversity of applications and economical viability of industrial biotechnology. by using renewable raw materials to produce a variety of chemicals and fuels, industrial biotechnology is actively advancing towards lowering greenhouse
when the hydration shell of a protein is filled with at least 0. 6 grams of water per gram of protein, a significant anti - correlation between the vibrational free energy and the potential energy of energy - minimized conformers is observed. this means that low potential energy, well - hydrated protein conformers tend to be more rigid than high - energy ones. on the other hand, in the case of casp target 624, when its hydration shell is filled, a significant average energy gap is observed between the crystal structure and the best conformers proposed during the prediction experiment, strongly suggesting that including explicit water molecules may help identify unlikely conformers among seemingly good ones.
##tion, and pasteurization in order to become products that can be sold. there are three levels of food processing : primary, secondary, and tertiary. primary food processing involves turning agricultural products into other products that can be turned into food, secondary food processing is the making of food from readily available ingredients, and tertiary food processing is commercial production of ready - to eat or heat - and - serve foods. drying, pickling, salting, and fermenting foods were some of the oldest food processing techniques used to preserve food by preventing yeasts, molds, and bacteria to cause spoiling. methods for preserving food have evolved to meet current standards of food safety but still use the same processes as the past. biochemical engineers also work to improve the nutritional value of food products, such as in golden rice, which was developed to prevent vitamin a deficiency in certain areas where this was an issue. efforts to advance preserving technologies can also ensure lasting retention of nutrients as foods are stored. packaging plays a key role in preserving as well as ensuring the safety of the food by protecting the product from contamination, physical damage, and tampering. packaging can also make it easier to transport and serve food. a common job for biochemical engineers working in the food industry is to design ways to perform all these processes on a large scale in order to meet the demands of the population. responsibilities for this career path include designing and performing experiments, optimizing processes, consulting with groups to develop new technologies, and preparing project plans for equipment and facilities. = = = pharmaceuticals = = = in the pharmaceutical industry, bioprocess engineering plays a crucial role in the large - scale production of biopharmaceuticals, such as monoclonal antibodies, vaccines, and therapeutic proteins. the development and optimization of bioreactors and fermentation systems are essential for the mass production of these products, ensuring consistent quality and high yields. for example, recombinant proteins like insulin and erythropoietin are produced through cell culture systems using genetically modified cells. the bioprocess engineer β s role is to optimize variables like temperature, ph, nutrient availability, and oxygen levels to maximize the efficiency of these systems. the growing field of gene therapy also relies on bioprocessing techniques to produce viral vectors, which are used to deliver therapeutic genes to patients. this involves scaling up processes from laboratory to industrial scale while maintaining safety and regulatory compliance. as the demand for biopharmaceutical products increases, advancements
small ubiquitin - related modifier ( sumo ) proteins are widely expressed in eukaryotic cells and are reversibly coupled to their substrates through motif recognition, a process called sumoylation. two interesting questions are : 1 ) how many potential sumo substrates are included in mammalian proteomes such as human and mouse, and 2 ) given a sumo substrate, can we recognize its sumoylation sites? to answer these two questions, previous prediction systems for sumo substrates mainly adopted pattern recognition methods, which achieve high sensitivity but produce relatively many potential false positives. we therefore use phylogenetic conservation between mouse and human to reduce the number of potential false positives.
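A minimal sketch of the idea described in this abstract, assuming the commonly used psi-K-x-E/D sumoylation consensus motif and a crude window-matching conservation check; the function names, window size, matching rule, and example sequences are my own illustration, not the authors' published method.

```python
import re

# Toy illustration: scan a human protein for the psi-K-x-E/D sumoylation
# consensus, then keep only candidate sites whose local sequence window also
# occurs in the mouse ortholog, discarding poorly conserved (likely false)
# positives. psi stands for a large hydrophobic residue.
CONSENSUS = re.compile(r"[IVLFM]K.[ED]")

def candidate_sites(sequence: str):
    """Return (0-based index of the modified lysine, matched motif) pairs."""
    return [(m.start() + 1, m.group()) for m in CONSENSUS.finditer(sequence)]

def conserved_sites(human_seq: str, mouse_ortholog_seq: str, window: int = 7):
    kept = []
    for k_index, motif in candidate_sites(human_seq):
        context = human_seq[max(0, k_index - window): k_index + window + 1]
        if context in mouse_ortholog_seq:      # crude conservation check
            kept.append((k_index, motif))
    return kept

# Tiny made-up example sequences (illustrative only).
human = "MASIKTEDGGSAVKQEPLAG"
mouse = "MTSIKTEDGGSAVKQEPLAG"
print(conserved_sites(human, mouse))   # -> [(13, 'VKQE')]
```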
not always mean it is required, especially when dealing with genetic or functional redundancy. tracking experiments, which seek to gain information about the localisation and interaction of the desired protein. one way to do this is to replace the wild - type gene with a ' fusion ' gene, which is a juxtaposition of the wild - type gene with a reporting element such as green fluorescent protein ( gfp ) that will allow easy visualisation of the products of the genetic modification. while this is a useful technique, the manipulation can destroy the function of the gene, creating secondary effects and possibly calling into question the results of the experiment. more sophisticated techniques are now in development that can track protein products without mitigating their function, such as the addition of small sequences that will serve as binding motifs to monoclonal antibodies. expression studies aim to discover where and when specific proteins are produced. in these experiments, the dna sequence before the dna that codes for a protein, known as a gene ' s promoter, is reintroduced into an organism with the protein coding region replaced by a reporter gene such as gfp or an enzyme that catalyses the production of a dye. thus the time and place where a particular protein is produced can be observed. expression studies can be taken a step further by altering the promoter to find which pieces are crucial for the proper expression of the gene and are actually bound by transcription factor proteins ; this process is known as promoter bashing. = = = industrial = = = organisms can have their cells transformed with a gene coding for a useful protein, such as an enzyme, so that they will overexpress the desired protein. mass quantities of the protein can then be manufactured by growing the transformed organism in bioreactor equipment using industrial fermentation, and then purifying the protein. some genes do not work well in bacteria, so yeast, insect cells or mammalian cells can also be used. these techniques are used to produce medicines such as insulin, human growth hormone, and vaccines, supplements such as tryptophan, aid in the production of food ( chymosin in cheese making ) and fuels. other applications with genetically engineered bacteria could involve making them perform tasks outside their natural cycle, such as making biofuels, cleaning up oil spills, carbon and other toxic waste and detecting arsenic in drinking water. certain genetically modified microbes can also be used in biomining and bioremediation, due to their ability to extract heavy metals from their environment and incorporate them into compounds that are more easily recover
Question: Why is protein an important part of a healthy diet?
A) It is needed to change glucose to energy.
B) It is needed to store nutrients.
C) It is needed to repair tissue.
D) It is needed to produce water.
|
C) It is needed to repair tissue.
|
Context:
has rest mass and volume ( it takes up space ) and is made up of particles. the particles that make up matter have rest mass as well β not all particles have rest mass, such as the photon. matter can be a pure chemical substance or a mixture of substances. = = = = atom = = = = the atom is the basic unit of chemistry. it consists of a dense core called the atomic nucleus surrounded by a space occupied by an electron cloud. the nucleus is made up of positively charged protons and uncharged neutrons ( together called nucleons ), while the electron cloud consists of negatively charged electrons which orbit the nucleus. in a neutral atom, the negatively charged electrons balance out the positive charge of the protons. the nucleus is dense ; the mass of a nucleon is approximately 1, 836 times that of an electron, yet the radius of an atom is about 10, 000 times that of its nucleus. the atom is also the smallest entity that can be envisaged to retain the chemical properties of the element, such as electronegativity, ionization potential, preferred oxidation state ( s ), coordination number, and preferred types of bonds to form ( e. g., metallic, ionic, covalent ). = = = = element = = = = a chemical element is a pure substance which is composed of a single type of atom, characterized by its particular number of protons in the nuclei of its atoms, known as the atomic number and represented by the symbol z. the mass number is the sum of the number of protons and neutrons in a nucleus. although all the nuclei of all atoms belonging to one element will have the same atomic number, they may not necessarily have the same mass number ; atoms of an element which have different mass numbers are known as isotopes. for example, all atoms with 6 protons in their nuclei are atoms of the chemical element carbon, but atoms of carbon may have mass numbers of 12 or 13. the standard presentation of the chemical elements is in the periodic table, which orders elements by atomic number. the periodic table is arranged in groups, or columns, and periods, or rows. the periodic table is useful in identifying periodic trends. = = = = compound = = = = a compound is a pure chemical substance composed of more than one element. the properties of a compound bear little similarity to those of its elements. the standard nomenclature of compounds is set by the international union of pure and applied chemistry ( iupac ). organic compounds are named
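A small sketch restating the definitions above in code: the atomic number counts protons, the mass number counts protons plus neutrons, so isotopes of one element share the former but differ in the latter. The example nuclides are standard textbook values.

```python
# Neutron count follows directly from the definitions of atomic number (Z)
# and mass number (A): neutrons = A - Z. In a neutral atom, electrons = Z.
def neutron_count(mass_number: int, atomic_number: int) -> int:
    return mass_number - atomic_number

for name, Z, A in [("carbon-12", 6, 12), ("carbon-13", 6, 13), ("uranium-235", 92, 235)]:
    print(f"{name}: protons = {Z}, neutrons = {neutron_count(A, Z)}, electrons (neutral atom) = {Z}")
```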
, they can fission as well, leading to a chain reaction. the average number of neutrons released per nucleus that go on to fission another nucleus is referred to as k. values of k larger than 1 mean that the fission reaction is releasing more neutrons than it absorbs, and therefore is referred to as a self - sustaining chain reaction. a mass of fissile material large enough ( and in a suitable configuration ) to induce a self - sustaining chain reaction is called a critical mass. when a neutron is captured by a suitable nucleus, fission may occur immediately, or the nucleus may persist in an unstable state for a short time. if there are enough immediate decays to carry on the chain reaction, the mass is said to be prompt critical, and the energy release will grow rapidly and uncontrollably, usually leading to an explosion. when discovered on the eve of world war ii, this insight led multiple countries to begin programs investigating the possibility of constructing an atomic bomb β a weapon which utilized fission reactions to generate far more energy than could be created with chemical explosives. the manhattan project, run by the united states with the help of the united kingdom and canada, developed multiple fission weapons which were used against japan in 1945 at hiroshima and nagasaki. during the project, the first fission reactors were developed as well, though they were primarily for weapons manufacture and did not generate electricity. in 1951, the first nuclear fission power plant was the first to produce electricity at the experimental breeder reactor no. 1 ( ebr - 1 ), in arco, idaho, ushering in the " atomic age " of more intensive human energy use. however, if the mass is critical only when the delayed neutrons are included, then the reaction can be controlled, for example by the introduction or removal of neutron absorbers. this is what allows nuclear reactors to be built. fast neutrons are not easily captured by nuclei ; they must be slowed ( slow neutrons ), generally by collision with the nuclei of a neutron moderator, before they can be easily captured. today, this type of fission is commonly used to generate electricity. = = = nuclear fusion = = = if nuclei are forced to collide, they can undergo nuclear fusion. this process may release or absorb energy. when the resulting nucleus is lighter than that of iron, energy is normally released ; when the nucleus is heavier than that of iron, energy is generally absorbed. this process of fusion occurs in stars, which derive their energy from hydrogen and helium. they form, through stellar nucleos
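The multiplication factor k described above lends itself to a one-line model: each generation multiplies the neutron population by roughly k, so after n generations the population is about k**n. The sketch below is a deliberately simplified illustration, ignoring geometry, neutron leakage, and the distinction between prompt and delayed neutrons.

```python
# Simplified chain-reaction arithmetic: k < 1 dies out (subcritical),
# k = 1 holds steady (critical), k > 1 grows rapidly (supercritical).
def neutron_population(k: float, generations: int, start: float = 1.0) -> float:
    population = start
    for _ in range(generations):
        population *= k
    return population

for k in (0.9, 1.0, 1.1):
    print(f"k = {k}: after 50 generations ~ {neutron_population(k, 50):,.2f} neutrons")
```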
it seems natural to ask why the universe exists at all. modern physics suggests that the universe can exist all by itself as a self - contained system, without anything external to create or sustain it. but there might not be an absolute answer to why it exists. i argue that any attempt to account for the existence of something rather than nothing must ultimately bottom out in a set of brute facts ; the universe simply is, without ultimate cause or explanation.
it is believed that there may have been a large number of black holes formed in the very early universe. these would have quantised masses. a charged ` ` elementary black hole ' ' ( with the minimum possible mass ) can capture electrons, protons and other charged particles to form a ` ` black hole atom ' '. we find the spectrum of such an object with a view to laboratory and astronomical observation of them, and estimate the lifetime of the bound states. there is no limit to the charge of the black hole, which gives us the possibility of observing z > 137 bound states and transitions at the lower continuum. negatively charged black holes can capture protons. for z > 1, the orbiting protons will coalesce to form a nucleus ( after beta - decay of some protons to neutrons ), with a stability curve different to that of free nuclei. in this system there is also the distinct possibility of single quark capture. this leads to the formation of a coloured black hole that plays the role of an extremely heavy quark interacting strongly with the other two quarks. finally we consider atoms formed with much larger black holes.
- sustaining chain reaction. a mass of fissile material large enough ( and in a suitable configuration ) to induce a self - sustaining chain reaction is called a critical mass. when a neutron is captured by a suitable nucleus, fission may occur immediately, or the nucleus may persist in an unstable state for a short time. if there are enough immediate decays to carry on the chain reaction, the mass is said to be prompt critical, and the energy release will grow rapidly and uncontrollably, usually leading to an explosion. when discovered on the eve of world war ii, this insight led multiple countries to begin programs investigating the possibility of constructing an atomic bomb β a weapon which utilized fission reactions to generate far more energy than could be created with chemical explosives. the manhattan project, run by the united states with the help of the united kingdom and canada, developed multiple fission weapons which were used against japan in 1945 at hiroshima and nagasaki. during the project, the first fission reactors were developed as well, though they were primarily for weapons manufacture and did not generate electricity. in 1951, the first nuclear fission power plant was the first to produce electricity at the experimental breeder reactor no. 1 ( ebr - 1 ), in arco, idaho, ushering in the " atomic age " of more intensive human energy use. however, if the mass is critical only when the delayed neutrons are included, then the reaction can be controlled, for example by the introduction or removal of neutron absorbers. this is what allows nuclear reactors to be built. fast neutrons are not easily captured by nuclei ; they must be slowed ( slow neutrons ), generally by collision with the nuclei of a neutron moderator, before they can be easily captured. today, this type of fission is commonly used to generate electricity. = = = nuclear fusion = = = if nuclei are forced to collide, they can undergo nuclear fusion. this process may release or absorb energy. when the resulting nucleus is lighter than that of iron, energy is normally released ; when the nucleus is heavier than that of iron, energy is generally absorbed. this process of fusion occurs in stars, which derive their energy from hydrogen and helium. they form, through stellar nucleosynthesis, the light elements ( lithium to calcium ) as well as some of the heavy elements ( beyond iron and nickel, via the s - process ). the remaining abundance of heavy elements, from nickel to uranium and beyond, is due to supernova nucleosynthesis, the r - process. of course
the fundamental constants could not influence different elements uniformly, and a comparison between each of the elements ' resulting unique chronological timescales would then give inconsistent time estimates. in refutation of young earth claims of inconstant decay rates affecting the reliability of radiometric dating, roger c. wiens, a physicist specializing in isotope dating states : there are only three quite technical instances where a half - life changes, and these do not affect the dating methods : " only one technical exception occurs under terrestrial conditions, and this is not for an isotope used for dating.... the artificially - produced isotope, beryllium - 7 has been shown to change by up to 1. 5 %, depending on its chemical environment.... heavier atoms are even less subject to these minute changes, so the dates of rocks made by electron - capture decays would only be off by at most a few hundredths of a percent. " "... another case is material inside of stars, which is in a plasma state where electrons are not bound to atoms. in the extremely hot stellar environment, a completely different kind of decay can occur. ' bound - state beta decay ' occurs when the nucleus emits an electron into a bound electronic state close to the nucleus.... all normal matter, such as everything on earth, the moon, meteorites, etc. has electrons in normal positions, so these instances never apply to rocks, or anything colder than several hundred thousand degrees. " " the last case also involves very fast - moving matter. it has been demonstrated by atomic clocks in very fast spacecraft. these atomic clocks slow down very slightly ( only a second or so per year ) as predicted by einstein ' s theory of relativity. no rocks in our solar system are going fast enough to make a noticeable change in their dates. " = = = = radiohaloes = = = = in the 1970s, young earth creationist robert v. gentry proposed that radiohaloes in certain granites represented evidence for the earth being created instantaneously rather than gradually. this idea has been criticized by physicists and geologists on many grounds including that the rocks gentry studied were not primordial and that the radionuclides in question need not have been in the rocks initially. thomas a. baillieul, a geologist and retired senior environmental scientist with the united states department of energy, disputed gentry ' s claims in an article entitled, " ' polonium haloes ' refuted : a review of ' radioactive halos in a radio
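The half-life arithmetic that radiometric dating rests on can be made explicit with the standard decay relation, which the passage assumes but does not write out; the potassium-40 half-life used below is an approximate textbook value and the helper functions are illustrative.

```python
import math

# Standard textbook relation (not taken from the passage): after time t, the
# fraction of a parent isotope remaining is (1/2)**(t / t_half); inverting it
# gives an age estimate from a measured parent fraction.
def remaining_fraction(t_years: float, half_life_years: float) -> float:
    return 0.5 ** (t_years / half_life_years)

def age_from_fraction(fraction_remaining: float, half_life_years: float) -> float:
    return half_life_years * math.log2(1.0 / fraction_remaining)

half_life = 1.25e9                                # roughly potassium-40, in years
print(remaining_fraction(2.5e9, half_life))       # two half-lives -> 0.25
print(f"{age_from_fraction(0.25, half_life):.3e} years")  # recovers ~2.5e9
```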
on earth in suitable amounts. one isotope of uranium, namely uranium - 235, is naturally occurring and sufficiently unstable, but it is always found mixed with the more stable isotope uranium - 238. the latter accounts for more than 99 % of the weight of natural uranium. therefore, some method of isotope separation based on the weight of three neutrons must be performed to enrich ( isolate ) uranium - 235. alternatively, the element plutonium possesses an isotope that is sufficiently unstable for this process to be usable. terrestrial plutonium does not currently occur naturally in sufficient quantities for such use, so it must be manufactured in a nuclear reactor. ultimately, the manhattan project manufactured nuclear weapons based on each of these elements. they detonated the first nuclear weapon in a test code - named " trinity ", near alamogordo, new mexico, on july 16, 1945. the test was conducted to ensure that the implosion method of detonation would work, which it did. a uranium bomb, little boy, was dropped on the japanese city hiroshima on august 6, 1945, followed three days later by the plutonium - based fat man on nagasaki. in the wake of unprecedented devastation and casualties from a single weapon, the japanese government soon surrendered, ending world war ii. since these bombings, no nuclear weapons have been deployed offensively. nevertheless, they prompted an arms race to develop increasingly destructive bombs to provide a nuclear deterrent. just over four years later, on august 29, 1949, the soviet union detonated its first fission weapon. the united kingdom followed on october 2, 1952 ; france, on february 13, 1960 ; and china component to a nuclear weapon. approximately half of the deaths from hiroshima and nagasaki died two to five years afterward from radiation exposure. a radiological weapon is a type of nuclear weapon designed to distribute hazardous nuclear material in enemy areas. such a weapon would not have the explosive capability of a fission or fusion bomb, but would kill many people and contaminate a large area. a radiological weapon has never been deployed. while considered useless by a conventional military, such a weapon raises concerns over nuclear terrorism. there have been over 2, 000 nuclear tests conducted since 1945. in 1963, all nuclear and many non - nuclear states signed the limited test ban treaty, pledging to refrain from testing nuclear weapons in the atmosphere, underwater, or in outer space. the treaty permitted underground nuclear testing. france continued atmospheric testing until 1974, while china continued up until 1980. the last underground test by the united states was in 1992, the soviet union
schrödinger ' s cat puzzle is resolved. the reason why we do not see a macroscopic superposition of states becomes clear in the light of everett ' s formulation of quantum mechanics.
##ting the principle of conservation of mass and developing a new system of chemical nomenclature used to this day. english scientist john dalton proposed the modern theory of atoms ; that all substances are composed of indivisible ' atoms ' of matter and that different atoms have varying atomic weights. the development of the electrochemical theory of chemical combinations occurred in the early 19th century as the result of the work of two scientists in particular, jons jacob berzelius and humphry davy, made possible by the prior invention of the voltaic pile by alessandro volta. davy discovered nine new elements including the alkali metals by extracting them from their oxides with electric current. british william prout first proposed ordering all the elements by their atomic weight as all atoms had a weight that was an exact multiple of the atomic weight of hydrogen. j. a. r. newlands devised an early table of elements, which was then developed into the modern periodic table of elements in the 1860s by dmitri mendeleev and independently by several other scientists including julius lothar meyer. the inert gases, later called the noble gases were discovered by william ramsay in collaboration with lord rayleigh at the end of the century, thereby filling in the basic structure of the table. organic chemistry was developed by justus von liebig and others, following friedrich wohler ' s synthesis of urea. other crucial 19th century advances were ; an understanding of valence bonding ( edward frankland in 1852 ) and the application of thermodynamics to chemistry ( j. w. gibbs and svante arrhenius in the 1870s ). at the turn of the twentieth century the theoretical underpinnings of chemistry were finally understood due to a series of remarkable discoveries that succeeded in probing and discovering the very nature of the internal structure of atoms. in 1897, j. j. thomson of the university of cambridge discovered the electron and soon after the french scientist becquerel as well as the couple pierre and marie curie investigated the phenomenon of radioactivity. in a series of pioneering scattering experiments ernest rutherford at the university of manchester discovered the internal structure of the atom and the existence of the proton, classified and explained the different types of radioactivity and successfully transmuted the first element by bombarding nitrogen with alpha particles. his work on atomic structure was improved on by his students, the danish physicist niels bohr, the englishman henry moseley and the german otto hahn, who went on to father the emerging nuclear chemistry and discovered nuclear fission. the electronic theory
in isotope dating states : there are only three quite technical instances where a half - life changes, and these do not affect the dating methods : " only one technical exception occurs under terrestrial conditions, and this is not for an isotope used for dating.... the artificially - produced isotope, beryllium - 7 has been shown to change by up to 1. 5 %, depending on its chemical environment.... heavier atoms are even less subject to these minute changes, so the dates of rocks made by electron - capture decays would only be off by at most a few hundredths of a percent. " "... another case is material inside of stars, which is in a plasma state where electrons are not bound to atoms. in the extremely hot stellar environment, a completely different kind of decay can occur. ' bound - state beta decay ' occurs when the nucleus emits an electron into a bound electronic state close to the nucleus.... all normal matter, such as everything on earth, the moon, meteorites, etc. has electrons in normal positions, so these instances never apply to rocks, or anything colder than several hundred thousand degrees. " " the last case also involves very fast - moving matter. it has been demonstrated by atomic clocks in very fast spacecraft. these atomic clocks slow down very slightly ( only a second or so per year ) as predicted by einstein ' s theory of relativity. no rocks in our solar system are going fast enough to make a noticeable change in their dates. " = = = = radiohaloes = = = = in the 1970s, young earth creationist robert v. gentry proposed that radiohaloes in certain granites represented evidence for the earth being created instantaneously rather than gradually. this idea has been criticized by physicists and geologists on many grounds including that the rocks gentry studied were not primordial and that the radionuclides in question need not have been in the rocks initially. thomas a. baillieul, a geologist and retired senior environmental scientist with the united states department of energy, disputed gentry ' s claims in an article entitled, " ' polonium haloes ' refuted : a review of ' radioactive halos in a radio - chronological and cosmological perspective ' by robert v. gentry. " baillieul noted that gentry was a physicist with no background in geology and given the absence of this background, gentry had misrepresented the geological context from which the specimens were collected. additionally, he noted that gentry relied on research from the
Question: An atom will always have
A) a single, negatively-charged nucleus.
B) equal numbers of protons and electrons.
C) "shared" electrons from another atom.
D) a stable number of charged neutrons.
|
B) equal numbers of protons and electrons.
|
Context:
on a large scale provided protection from insect pests or tolerance to herbicides. fungal and virus resistant crops have also been developed or are in development. this makes the insect and weed management of crops easier and can indirectly increase crop yield. gm crops that directly improve yield by accelerating growth or making the plant more hardy ( by improving salt, cold or drought tolerance ) are also under development. in 2016 salmon have been genetically modified with growth hormones to reach normal adult size much faster. gmos have been developed that modify the quality of produce by increasing the nutritional value or providing more industrially useful qualities or quantities. the amflora potato produces a more industrially useful blend of starches. soybeans and canola have been genetically modified to produce more healthy oils. the first commercialised gm food was a tomato that had delayed ripening, increasing its shelf life. plants and animals have been engineered to produce materials they do not normally make. pharming uses crops and animals as bioreactors to produce vaccines, drug intermediates, or the drugs themselves ; the useful product is purified from the harvest and then used in the standard pharmaceutical production process. cows and goats have been engineered to express drugs and other proteins in their milk, and in 2009 the fda approved a drug produced in goat milk. = = = other applications = = = genetic engineering has potential applications in conservation and natural area management. gene transfer through viral vectors has been proposed as a means of controlling invasive species as well as vaccinating threatened fauna from disease. transgenic trees have been suggested as a way to confer resistance to pathogens in wild populations. with the increasing risks of maladaptation in organisms as a result of climate change and other perturbations, facilitated adaptation through gene tweaking could be one solution to reducing extinction risks. applications of genetic engineering in conservation are thus far mostly theoretical and have yet to be put into practice. genetic engineering is also being used to create microbial art. some bacteria have been genetically engineered to create black and white photographs. novelty items such as lavender - colored carnations, blue roses, and glowing fish, have also been produced through genetic engineering. = = regulation = = the regulation of genetic engineering concerns the approaches taken by governments to assess and manage the risks associated with the development and release of gmos. the development of a regulatory framework began in 1975, at asilomar, california. the asilomar meeting recommended a set of voluntary guidelines regarding the use of recombinant technology. as the technology improved
the broad definition of " utilizing a biotechnological system to make products ". indeed, the cultivation of plants may be viewed as the earliest biotechnological enterprise. agriculture has been theorized to have become the dominant way of producing food since the neolithic revolution. through early biotechnology, the earliest farmers selected and bred the best - suited crops ( e. g., those with the highest yields ) to produce enough food to support a growing population. as crops and fields became increasingly large and difficult to maintain, it was discovered that specific organisms and their by - products could effectively fertilize, restore nitrogen, and control pests. throughout the history of agriculture, farmers have inadvertently altered the genetics of their crops through introducing them to new environments and breeding them with other plants β one of the first forms of biotechnology. these processes also were included in early fermentation of beer. these processes were introduced in early mesopotamia, egypt, china and india, and still use the same basic biological methods. in brewing, malted grains ( containing enzymes ) convert starch from grains into sugar and then adding specific yeasts to produce beer. in this process, carbohydrates in the grains broke down into alcohols, such as ethanol. later, other cultures produced the process of lactic acid fermentation, which produced other preserved foods, such as soy sauce. fermentation was also used in this time period to produce leavened bread. although the process of fermentation was not fully understood until louis pasteur ' s work in 1857, it is still the first use of biotechnology to convert a food source into another form. before the time of charles darwin ' s work and life, animal and plant scientists had already used selective breeding. darwin added to that body of work with his scientific observations about the ability of science to change species. these accounts contributed to darwin ' s theory of natural selection. for thousands of years, humans have used selective breeding to improve the production of crops and livestock to use them for food. in selective breeding, organisms with desirable characteristics are mated to produce offspring with the same characteristics. for example, this technique was used with corn to produce the largest and sweetest crops. in the early twentieth century scientists gained a greater understanding of microbiology and explored ways of manufacturing specific products. in 1917, chaim weizmann first used a pure microbiological culture in an industrial process, that of manufacturing corn starch using clostridium acetobutylicum, to produce acetone, which the united
cellular and molecular biology of cereals, grasses and monocots generally. model plants such as arabidopsis thaliana are used for studying the molecular biology of plant cells and the chloroplast. ideally, these organisms have small genomes that are well known or completely sequenced, small stature and short generation times. corn has been used to study mechanisms of photosynthesis and phloem loading of sugar in c4 plants. the single celled green alga chlamydomonas reinhardtii, while not an embryophyte itself, contains a green - pigmented chloroplast related to that of land plants, making it useful for study. a red alga cyanidioschyzon merolae has also been used to study some basic chloroplast functions. spinach, peas, soybeans and a moss physcomitrella patens are commonly used to study plant cell biology. agrobacterium tumefaciens, a soil rhizosphere bacterium, can attach to plant cells and infect them with a callus - inducing ti plasmid by horizontal gene transfer, causing a callus infection called crown gall disease. schell and van montagu ( 1977 ) hypothesised that the ti plasmid could be a natural vector for introducing the nif gene responsible for nitrogen fixation in the root nodules of legumes and other plant species. today, genetic modification of the ti plasmid is one of the main techniques for introduction of transgenes to plants and the creation of genetically modified crops. = = = epigenetics = = = epigenetics is the study of heritable changes in gene function that cannot be explained by changes in the underlying dna sequence but cause the organism ' s genes to behave ( or " express themselves " ) differently. one example of epigenetic change is the marking of the genes by dna methylation which determines whether they will be expressed or not. gene expression can also be controlled by repressor proteins that attach to silencer regions of the dna and prevent that region of the dna code from being expressed. epigenetic marks may be added or removed from the dna during programmed stages of development of the plant, and are responsible, for example, for the differences between anthers, petals and normal leaves, despite the fact that they all have the same underlying genetic code. epigenetic changes may be temporary or may remain through successive cell divisions for the remainder of
options ( e. g., voting behavior, choice of a punishment for another participant ). reaction time. the time between the presentation of a stimulus and an appropriate response can indicate differences between two cognitive processes, and can indicate some things about their nature. for example, if in a search task the reaction times vary proportionally with the number of elements, then it is evident that this cognitive process of searching involves serial instead of parallel processing. psychophysical responses. psychophysical experiments are an old psychological technique, which has been adopted by cognitive psychology. they typically involve making judgments of some physical property, e. g. the loudness of a sound. correlation of subjective scales between individuals can show cognitive or sensory biases as compared to actual physical measurements. some examples include : sameness judgments for colors, tones, textures, etc. threshold differences for colors, tones, textures, etc. eye tracking. this methodology is used to study a variety of cognitive processes, most notably visual perception and language processing. the fixation point of the eyes is linked to an individual ' s focus of attention. thus, by monitoring eye movements, we can study what information is being processed at a given time. eye tracking allows us to study cognitive processes on extremely short time scales. eye movements reflect online decision making during a task, and they provide us with some insight into the ways in which those decisions may be processed. = = = brain imaging = = = brain imaging involves analyzing activity within the brain while performing various tasks. this allows us to link behavior and brain function to help understand how information is processed. different types of imaging techniques vary in their temporal ( time - based ) and spatial ( location - based ) resolution. brain imaging is often used in cognitive neuroscience. single - photon emission computed tomography and positron emission tomography. spect and pet use radioactive isotopes, which are injected into the subject ' s bloodstream and taken up by the brain. by observing which areas of the brain take up the radioactive isotope, we can see which areas of the brain are more active than other areas. pet has similar spatial resolution to fmri, but it has extremely poor temporal resolution. electroencephalography. eeg measures the electrical fields generated by large populations of neurons in the cortex by placing a series of electrodes on the scalp of the subject. this technique has an extremely high temporal resolution, but a relatively poor spatial resolution. functional magnetic resonance imaging. fmri measures the relative amount of oxygenated blood flowing to different parts of the brain. more oxygen
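The reaction-time argument above (a serial search adds a roughly constant cost per item inspected, while a parallel search does not) can be illustrated with a small simulation; the timing values and slopes below are invented for illustration, not experimental data.

```python
import numpy as np

rng = np.random.default_rng(0)
set_sizes = np.repeat([2, 4, 8, 16], 50)

# Hypothetical reaction times in ms: serial search adds ~45 ms per item,
# parallel search shows no per-item cost; both get measurement noise.
rt_serial = 400 + 45 * set_sizes + rng.normal(0, 30, set_sizes.size)
rt_parallel = 400 + rng.normal(0, 30, set_sizes.size)

slope_serial = np.polyfit(set_sizes, rt_serial, 1)[0]
slope_parallel = np.polyfit(set_sizes, rt_parallel, 1)[0]
print(f"serial slope ~{slope_serial:.1f} ms/item, parallel slope ~{slope_parallel:.1f} ms/item")
```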
generation times. corn has been used to study mechanisms of photosynthesis and phloem loading of sugar in c4 plants. the single celled green alga chlamydomonas reinhardtii, while not an embryophyte itself, contains a green - pigmented chloroplast related to that of land plants, making it useful for study. a red alga cyanidioschyzon merolae has also been used to study some basic chloroplast functions. spinach, peas, soybeans and a moss physcomitrella patens are commonly used to study plant cell biology. agrobacterium tumefaciens, a soil rhizosphere bacterium, can attach to plant cells and infect them with a callus - inducing ti plasmid by horizontal gene transfer, causing a callus infection called crown gall disease. schell and van montagu ( 1977 ) hypothesised that the ti plasmid could be a natural vector for introducing the nif gene responsible for nitrogen fixation in the root nodules of legumes and other plant species. today, genetic modification of the ti plasmid is one of the main techniques for introduction of transgenes to plants and the creation of genetically modified crops. = = = epigenetics = = = epigenetics is the study of heritable changes in gene function that cannot be explained by changes in the underlying dna sequence but cause the organism ' s genes to behave ( or " express themselves " ) differently. one example of epigenetic change is the marking of the genes by dna methylation which determines whether they will be expressed or not. gene expression can also be controlled by repressor proteins that attach to silencer regions of the dna and prevent that region of the dna code from being expressed. epigenetic marks may be added or removed from the dna during programmed stages of development of the plant, and are responsible, for example, for the differences between anthers, petals and normal leaves, despite the fact that they all have the same underlying genetic code. epigenetic changes may be temporary or may remain through successive cell divisions for the remainder of the cell ' s life. some epigenetic changes have been shown to be heritable, while others are reset in the germ cells. epigenetic changes in eukaryotic biology serve to regulate the process of cellular differentiation. during morphogenesis, totipotent stem cells become the various
new crop traits as well as a far greater control over a food ' s genetic structure than previously afforded by methods such as selective breeding and mutation breeding. commercial sale of genetically modified foods began in 1994, when calgene first marketed its flavr savr delayed ripening tomato. to date most genetic modification of foods have primarily focused on cash crops in high demand by farmers such as soybean, corn, canola, and cotton seed oil. these have been engineered for resistance to pathogens and herbicides and better nutrient profiles. gm livestock have also been experimentally developed ; in november 2013 none were available on the market, but in 2015 the fda approved the first gm salmon for commercial production and consumption. there is a scientific consensus that currently available food derived from gm crops poses no greater risk to human health than conventional food, but that each gm food needs to be tested on a case - by - case basis before introduction. nonetheless, members of the public are much less likely than scientists to perceive gm foods as safe. the legal and regulatory status of gm foods varies by country, with some nations banning or restricting them, and others permitting them with widely differing degrees of regulation. gm crops also provide a number of ecological benefits, if not used in excess. insect - resistant crops have proven to lower pesticide usage, therefore reducing the environmental impact of pesticides as a whole. however, opponents have objected to gm crops per se on several grounds, including environmental concerns, whether food produced from gm crops is safe, whether gm crops are needed to address the world ' s food needs, and economic concerns raised by the fact these organisms are subject to intellectual property law. biotechnology has several applications in the realm of food security. crops like golden rice are engineered to have higher nutritional content, and there is potential for food products with longer shelf lives. though not a form of agricultural biotechnology, vaccines can help prevent diseases found in animal agriculture. additionally, agricultural biotechnology can expedite breeding processes in order to yield faster results and provide greater quantities of food. transgenic biofortification in cereals has been considered as a promising method to combat malnutrition in india and other countries. = = = industrial = = = industrial biotechnology ( known mainly in europe as white biotechnology ) is the application of biotechnology for industrial purposes, including industrial fermentation. it includes the practice of using cells such as microorganisms, or components of cells like enzymes, to generate industrially useful products in sectors such as chemicals, food and feed, detergents, paper
the other hand, multiplication does not have this same property, as distance is not invariant under multiplication. angles and ratios of distances are invariant under scalings, rotations, translations and reflections. these transformations produce similar shapes, which is the basis of trigonometry. in contrast, angles and ratios are not invariant under non - uniform scaling ( such as stretching ). the sum of a triangle ' s interior angles ( 180° ) is invariant under all the above operations. as another example, all circles are similar : they can be transformed into each other and the ratio of the circumference to the diameter is invariant ( denoted by the greek letter π ( pi ) ). some more complicated examples : the real part and the absolute value of a complex number are invariant under complex conjugation. the tricolorability of knots. the degree of a polynomial is invariant under a linear change of variables. the dimension and homology groups of a topological object are invariant under homeomorphism. the number of fixed points of a dynamical system is invariant under many mathematical operations. euclidean distance is invariant under orthogonal transformations. area is invariant under linear maps which have determinant ±1 ( see equiareal map § linear transformations ). some invariants of projective transformations include collinearity of three or more points, concurrency of three or more lines, conic sections, and the cross - ratio. the determinant, trace, eigenvectors, and eigenvalues of a linear endomorphism are invariant under a change of basis. in other words, the spectrum of a matrix is invariant under a change of basis. the principal invariants of tensors do not change with rotation of the coordinate system ( see invariants of tensors ). the singular values of a matrix are invariant under orthogonal transformations. lebesgue measure is invariant under translations. the variance of a probability distribution is invariant under translations of the real line. hence the variance of a random variable is unchanged after the addition of a constant. the fixed points of a transformation are the elements in the domain that are invariant under the transformation. they may, depending on the application, be called symmetric with respect to that transformation. for example, objects with translational symmetry are invariant under certain translations. the integral $\textstyle \int_{m} k \, d\mu$ of the gaussian curvature $k$ of a two - dimensional riemannian manifold $( m, g )$
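Two of the invariances listed above are easy to check numerically: variance under translation of the real line, and euclidean distance under an orthogonal transformation. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=1000)

# Variance is invariant under translation (adding a constant).
print(np.isclose(x.var(), (x + 7.3).var()))

# Euclidean distance is invariant under an orthogonal transformation (a rotation).
theta = 0.8
rotation = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
p, q = np.array([1.0, 2.0]), np.array([-3.0, 0.5])
d_before = np.linalg.norm(p - q)
d_after = np.linalg.norm(rotation @ p - rotation @ q)
print(np.isclose(d_before, d_after))
```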
industrial applications. this branch of biotechnology is the most used for the industries of refining and combustion principally on the production of bio - oils with photosynthetic micro - algae. green biotechnology is biotechnology applied to agricultural processes. an example would be the selection and domestication of plants via micropropagation. another example is the designing of transgenic plants to grow under specific environments in the presence ( or absence ) of chemicals. one hope is that green biotechnology might produce more environmentally friendly solutions than traditional industrial agriculture. an example of this is the engineering of a plant to express a pesticide, thereby ending the need of external application of pesticides. an example of this would be bt corn. whether or not green biotechnology products such as this are ultimately more environmentally friendly is a topic of considerable debate. it is commonly considered as the next phase of green revolution, which can be seen as a platform to eradicate world hunger by using technologies which enable the production of more fertile and resistant, towards biotic and abiotic stress, plants and ensures application of environmentally friendly fertilizers and the use of biopesticides, it is mainly focused on the development of agriculture. on the other hand, some of the uses of green biotechnology involve microorganisms to clean and reduce waste. red biotechnology is the use of biotechnology in the medical and pharmaceutical industries, and health preservation. this branch involves the production of vaccines and antibiotics, regenerative therapies, creation of artificial organs and new diagnostics of diseases. as well as the development of hormones, stem cells, antibodies, sirna and diagnostic tests. white biotechnology, also known as industrial biotechnology, is biotechnology applied to industrial processes. an example is the designing of an organism to produce a useful chemical. another example is the using of enzymes as industrial catalysts to either produce valuable chemicals or destroy hazardous / polluting chemicals. white biotechnology tends to consume less in resources than traditional processes used to produce industrial goods. yellow biotechnology refers to the use of biotechnology in food production ( food industry ), for example in making wine ( winemaking ), cheese ( cheesemaking ), and beer ( brewing ) by fermentation. it has also been used to refer to biotechnology applied to insects. this includes biotechnology - based approaches for the control of harmful insects, the characterisation and utilisation of active ingredients or genes of insects for research, or application in agriculture and medicine and various other approaches. gray biotechnology is dedicated to environmental applications, and focused on the maintenance of biodiversity and the remotion of poll
triangulation from bearings taken by two rdf stations separated geographically, as the point where the two bearing lines cross ; this is called a " fix ". military forces use rdf to locate enemy forces by their tactical radio transmissions, counterintelligence services use it to locate clandestine transmitters used by espionage agents, and governments use it to locate unlicensed transmitters or interference sources. older rdf receivers used rotatable loop antennas ; the antenna is rotated until the radio signal strength is weakest, indicating the transmitter is in one of the antenna ' s two nulls. the nulls are used since they are sharper than the antenna ' s lobes ( maxima ). more modern receivers use phased array antennas which have a much greater angular resolution. animal migration tracking – a widely used technique in wildlife biology, conservation biology, and wildlife management in which small battery - powered radio transmitters are attached to wild animals so their movements can be tracked with a directional rdf receiver. sometimes the transmitter is implanted in the animal. the vhf band is typically used since antennas in this band are fairly compact. the receiver has a directional antenna ( typically a small yagi ) which is rotated until the received signal is strongest ; at this point the antenna is pointing in the direction of the animal. sophisticated systems used in recent years use satellites to track the animal, or geolocation tags with gps receivers which record and transmit a log of the animal ' s location. = = = = remote control = = = = radio remote control is the use of electronic control signals sent by radio waves from a transmitter to control the actions of a device at a remote location. remote control systems may also include telemetry channels in the other direction, used to transmit real - time information on the state of the device back to the control station. uncrewed spacecraft are an example of remote - controlled machines, controlled by commands transmitted by satellite ground stations. most handheld remote controls used to control consumer electronics products like televisions or dvd players actually operate by infrared light rather than radio waves, so are not examples of radio remote control. a security concern with remote control systems is spoofing, in which an unauthorized person transmits an imitation of the control signal to take control of the device. examples of radio remote control : unmanned aerial vehicle ( uav, drone ) – a drone is an aircraft without an onboard pilot, flown by remote control by a pilot in another location, usually in a piloting station on the ground. they are used by the military for reconnaissance and ground attack, and
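The "fix" described above is simply the intersection of two bearing lines. A minimal sketch of that computation, assuming bearings are measured in degrees clockwise from north and station positions are given as (east, north) pairs; the station coordinates and bearings in the example are made up.

```python
import math

def fix_from_bearings(station_a, bearing_a_deg, station_b, bearing_b_deg):
    """Intersect two bearing lines. Returns the (east, north) fix, or None
    if the bearings are parallel."""
    # Direction vectors: bearing 0 = north, 90 = east.
    da = (math.sin(math.radians(bearing_a_deg)), math.cos(math.radians(bearing_a_deg)))
    db = (math.sin(math.radians(bearing_b_deg)), math.cos(math.radians(bearing_b_deg)))
    ax, ay = station_a
    bx, by = station_b
    # Solve station_a + t*da = station_b + s*db for t (Cramer's rule).
    denom = da[0] * (-db[1]) - da[1] * (-db[0])
    if abs(denom) < 1e-12:
        return None
    t = ((bx - ax) * (-db[1]) - (by - ay) * (-db[0])) / denom
    return (ax + t * da[0], ay + t * da[1])

# Hypothetical example: two stations 10 km apart on an east-west baseline.
print(fix_from_bearings((0.0, 0.0), 45.0, (10.0, 0.0), 315.0))  # ~(5.0, 5.0)
```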
depends on the extent of the continent in which it is situated, its position in relation to the hilly regions in which rivers generally arise and the sea into which they flow, and the distance between the source and the outlet into the sea of the river draining it. the rate of flow of rivers depends mainly upon their fall, also known as the gradient or slope. when two rivers of different sizes have the same fall, the larger river has the quicker flow, as its retardation by friction against its bed and banks is less in proportion to its volume than is the case with the smaller river. the fall available in a section of a river approximately corresponds to the slope of the country it traverses ; as rivers rise close to the highest part of their basins, generally in hilly regions, their fall is rapid near their source and gradually diminishes, with occasional irregularities, until, in traversing plains along the latter part of their course, their fall usually becomes quite gentle. accordingly, in large basins, rivers in most cases begin as torrents with a variable flow, and end as gently flowing rivers with a comparatively regular discharge. the irregular flow of rivers throughout their course forms one of the main difficulties in devising works for mitigating inundations or for increasing the navigable capabilities of rivers. in tropical countries subject to periodical rains, the rivers are in flood during the rainy season and have hardly any flow during the rest of the year, while in temperate regions, where the rainfall is more evenly distributed throughout the year, evaporation causes the available rainfall to be much less in hot summer weather than in the winter months, so that the rivers fall to their low stage in the summer and are liable to be in flood in the winter. in fact, with a temperate climate, the year may be divided into a warm and a cold season, extending from may to october and from november to april in the northern hemisphere respectively ; the rivers are low and moderate floods are of rare occurrence during the warm period, and the rivers are high and subject to occasional heavy floods after a considerable rainfall during the cold period in most years. the only exceptions are rivers which have their sources amongst mountains clad with perpetual snow and are fed by glaciers ; their floods occur in the summer from the melting of snow and ice, as exemplified by the rhone above the lake of geneva, and the arve which joins it below. but even these rivers are liable to have their flow modified by the influx of tributaries subject to different conditions, so that the rhone below lyon has a more uniform
Question: Students on two different school campuses are comparing the growth rate of grass three weeks after fertilizer has been applied. The same fertilizer and the same amount of water are used on both campuses. Which additional variable is most important to control when the results of the two investigations are compared?
A) type of grass used
B) amount of trees in the area
C) the weather conditions of the day
D) the time of day the measurements are taken
|
A) type of grass used
|
Context:
and the creation of genetically modified crops. = = = epigenetics = = = epigenetics is the study of heritable changes in gene function that cannot be explained by changes in the underlying dna sequence but cause the organism ' s genes to behave ( or " express themselves " ) differently. one example of epigenetic change is the marking of the genes by dna methylation which determines whether they will be expressed or not. gene expression can also be controlled by repressor proteins that attach to silencer regions of the dna and prevent that region of the dna code from being expressed. epigenetic marks may be added or removed from the dna during programmed stages of development of the plant, and are responsible, for example, for the differences between anthers, petals and normal leaves, despite the fact that they all have the same underlying genetic code. epigenetic changes may be temporary or may remain through successive cell divisions for the remainder of the cell ' s life. some epigenetic changes have been shown to be heritable, while others are reset in the germ cells. epigenetic changes in eukaryotic biology serve to regulate the process of cellular differentiation. during morphogenesis, totipotent stem cells become the various pluripotent cell lines of the embryo, which in turn become fully differentiated cells. a single fertilised egg cell, the zygote, gives rise to the many different plant cell types including parenchyma, xylem vessel elements, phloem sieve tubes, guard cells of the epidermis, etc. as it continues to divide. the process results from the epigenetic activation of some genes and inhibition of others. unlike animals, many plant cells, particularly those of the parenchyma, do not terminally differentiate, remaining totipotent with the ability to give rise to a new individual plant. exceptions include highly lignified cells, the sclerenchyma and xylem which are dead at maturity, and the phloem sieve tubes which lack nuclei. while plants use many of the same epigenetic mechanisms as animals, such as chromatin remodelling, an alternative hypothesis is that plants set their gene expression patterns using positional information from the environment and surrounding cells to determine their developmental fate. epigenetic changes can lead to paramutations, which do not follow the mendelian heritage rules. these epigenetic marks are carried from one generation to the next,
is a verma module transformed into another verma module by a self - equivalence? the answer is affirmative, and the proof suggests a notion of standard object in the category of harish - chandra modules that coincides often, but not always, with the usual one.
the world is changing at an ever - increasing pace. and it has changed in a much more fundamental way than one would think, primarily because it has become more connected and interdependent than in our entire history. every new product, every new invention can be combined with those that existed before, thereby creating an explosion of complexity : structural complexity, dynamic complexity, functional complexity, and algorithmic complexity. how to respond to this challenge? and what are the costs?
or removed from the dna during programmed stages of development of the plant, and are responsible, for example, for the differences between anthers, petals and normal leaves, despite the fact that they all have the same underlying genetic code. epigenetic changes may be temporary or may remain through successive cell divisions for the remainder of the cell ' s life. some epigenetic changes have been shown to be heritable, while others are reset in the germ cells. epigenetic changes in eukaryotic biology serve to regulate the process of cellular differentiation. during morphogenesis, totipotent stem cells become the various pluripotent cell lines of the embryo, which in turn become fully differentiated cells. a single fertilised egg cell, the zygote, gives rise to the many different plant cell types including parenchyma, xylem vessel elements, phloem sieve tubes, guard cells of the epidermis, etc. as it continues to divide. the process results from the epigenetic activation of some genes and inhibition of others. unlike animals, many plant cells, particularly those of the parenchyma, do not terminally differentiate, remaining totipotent with the ability to give rise to a new individual plant. exceptions include highly lignified cells, the sclerenchyma and xylem which are dead at maturity, and the phloem sieve tubes which lack nuclei. while plants use many of the same epigenetic mechanisms as animals, such as chromatin remodelling, an alternative hypothesis is that plants set their gene expression patterns using positional information from the environment and surrounding cells to determine their developmental fate. epigenetic changes can lead to paramutations, which do not follow the mendelian heritage rules. these epigenetic marks are carried from one generation to the next, with one allele inducing a change on the other. = = plant evolution = = the chloroplasts of plants have a number of biochemical, structural and genetic similarities to cyanobacteria, ( commonly but incorrectly known as " blue - green algae " ) and are thought to be derived from an ancient endosymbiotic relationship between an ancestral eukaryotic cell and a cyanobacterial resident. the algae are a polyphyletic group and are placed in various divisions, some more closely related to plants than others. there are many differences between them in features such as cell wall composition, biochemistry,
a particular example of chaos can be conceived in the interaction of a non - linear oscillator with a harmonic gravitational wave. when the linear potential force is replaced by the term sin ( x ), the solution becomes sensitive to external perturbation. although the perturbation produced by the gravitational wave is weak, standard estimates allow one to predict the appearance of chaos in a definite range of parameters. this qualitative change in the character of the motion immediately signals the impact of the gravitational wave. another advantage relates to a broad range of frequencies, so that a narrow resonance band is not required.
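The abstract describes a nonlinear oscillator whose restoring force is sin(x), weakly driven by a periodic perturbation. Below is a minimal numerical sketch of such a system (a damped, periodically driven pendulum integrated with fixed-step RK4); the parameter values are arbitrary and not taken from the paper, and an actual chaos diagnostic would additionally compare nearby trajectories or estimate a Lyapunov exponent.

```python
import math

def driven_pendulum(eps=1.2, omega=0.66, gamma=0.2, dt=0.01, steps=100_000):
    """Integrate x'' + gamma*x' + sin(x) = eps*cos(omega*t) with fixed-step RK4."""
    def deriv(t, x, v):
        return v, -gamma * v - math.sin(x) + eps * math.cos(omega * t)

    x, v, t = 0.1, 0.0, 0.0
    for _ in range(steps):
        k1x, k1v = deriv(t, x, v)
        k2x, k2v = deriv(t + dt / 2, x + dt / 2 * k1x, v + dt / 2 * k1v)
        k3x, k3v = deriv(t + dt / 2, x + dt / 2 * k2x, v + dt / 2 * k2v)
        k4x, k4v = deriv(t + dt, x + dt * k3x, v + dt * k3v)
        x += dt / 6 * (k1x + 2 * k2x + 2 * k3x + k4x)
        v += dt / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
        t += dt
    return x, v

print(driven_pendulum())
```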
much sunlight the plant receives each day. this can result in adaptive changes in a process known as photomorphogenesis. phytochromes are the photoreceptors in a plant that are sensitive to light. = = plant anatomy and morphology = = plant anatomy is the study of the structure of plant cells and tissues, whereas plant morphology is the study of their external form. all plants are multicellular eukaryotes, their dna stored in nuclei. the characteristic features of plant cells that distinguish them from those of animals and fungi include a primary cell wall composed of the polysaccharides cellulose, hemicellulose and pectin, larger vacuoles than in animal cells and the presence of plastids with unique photosynthetic and biosynthetic functions as in the chloroplasts. other plastids contain storage products such as starch ( amyloplasts ) or lipids ( elaioplasts ). uniquely, streptophyte cells and those of the green algal order trentepohliales divide by construction of a phragmoplast as a template for building a cell plate late in cell division. the bodies of vascular plants including clubmosses, ferns and seed plants ( gymnosperms and angiosperms ) generally have aerial and subterranean subsystems. the shoots consist of stems bearing green photosynthesising leaves and reproductive structures. the underground vascularised roots bear root hairs at their tips and generally lack chlorophyll. non - vascular plants, the liverworts, hornworts and mosses do not produce ground - penetrating vascular roots and most of the plant participates in photosynthesis. the sporophyte generation is nonphotosynthetic in liverworts but may be able to contribute part of its energy needs by photosynthesis in mosses and hornworts. the root system and the shoot system are interdependent β the usually nonphotosynthetic root system depends on the shoot system for food, and the usually photosynthetic shoot system depends on water and minerals from the root system. cells in each system are capable of creating cells of the other and producing adventitious shoots or roots. stolons and tubers are examples of shoots that can grow roots. roots that spread out close to the surface, such as those of willows, can produce shoots and ultimately new plants. in the event that one of the systems is lost
simple examples are constructed that show the entanglement of two qubits being both increased and decreased by interactions on just one of them. one of the two qubits interacts with a third qubit, a control, which is never entangled or correlated with either of the two entangled qubits individually and never becomes entangled with, but does become correlated with, the system of those two qubits. the two entangled qubits do not interact, but their state can change from maximally entangled to separable or from separable to maximally entangled. similar changes for the two qubits are made with a swap operation between one of the qubits and a control ; then there are compensating changes of entanglement that involve the control. when the entanglement increases, the map that describes the change of the state of the two entangled qubits is not completely positive. a combination of two independent interactions that individually give exponential decay of the entanglement can cause the entanglement not to decay exponentially but, instead, go to zero at a finite time.
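Changes of two-qubit entanglement of the kind described above are commonly quantified with the Wootters concurrence. The sketch below implements that standard measure and evaluates it for a Bell state and a product state; it does not reproduce the paper's specific control-qubit construction.

```python
import numpy as np

def concurrence(rho):
    """Wootters concurrence of a two-qubit density matrix rho."""
    sy = np.array([[0, -1j], [1j, 0]])
    flip = np.kron(sy, sy)
    rho_tilde = flip @ rho.conj() @ flip
    # Square roots of the eigenvalues of rho * rho_tilde, in decreasing order.
    lam = np.sort(np.sqrt(np.abs(np.linalg.eigvals(rho @ rho_tilde))))[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

bell = np.zeros(4, dtype=complex); bell[0] = bell[3] = 1 / np.sqrt(2)   # (|00> + |11>)/sqrt(2)
product = np.zeros(4, dtype=complex); product[0] = 1.0                   # |00>

print(concurrence(np.outer(bell, bell.conj())))        # ~1.0 (maximally entangled)
print(concurrence(np.outer(product, product.conj())))  # 0.0 (separable)
```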
if the hazard rate $\frac{F'(x)}{1-F(x)}$ is increasing ( in $x$ ), then $\mathbb{E}\,(X_{n:n} - X_{n-1:n})$ is decreasing ( in $n$ ), and moreover, completely monotone.
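A quick Monte Carlo illustration of this statement, using a Weibull distribution with shape parameter 2 (which has an increasing hazard rate); the sample sizes and trial count are arbitrary choices for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(2)

def mean_top_spacing(n, trials=200_000, shape=2.0):
    """Monte Carlo estimate of E(X_{n:n} - X_{n-1:n}) for Weibull(shape) samples.
    Shape > 1 gives an increasing hazard rate, so the expected spacing between
    the two largest order statistics should decrease as n grows."""
    samples = rng.weibull(shape, size=(trials, n))
    top_two = np.sort(samples, axis=1)[:, -2:]
    return (top_two[:, 1] - top_two[:, 0]).mean()

for n in (2, 5, 10, 20):
    print(n, round(mean_top_spacing(n), 4))
```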
are invariant under homeomorphism. the number of fixed points of a dynamical system is invariant under many mathematical operations. euclidean distance is invariant under orthogonal transformations. area is invariant under linear maps which have determinant ±1 ( see equiareal map § linear transformations ). some invariants of projective transformations include collinearity of three or more points, concurrency of three or more lines, conic sections, and the cross - ratio. the determinant, trace, eigenvectors, and eigenvalues of a linear endomorphism are invariant under a change of basis. in other words, the spectrum of a matrix is invariant under a change of basis. the principal invariants of tensors do not change with rotation of the coordinate system ( see invariants of tensors ). the singular values of a matrix are invariant under orthogonal transformations. lebesgue measure is invariant under translations. the variance of a probability distribution is invariant under translations of the real line. hence the variance of a random variable is unchanged after the addition of a constant. the fixed points of a transformation are the elements in the domain that are invariant under the transformation. they may, depending on the application, be called symmetric with respect to that transformation. for example, objects with translational symmetry are invariant under certain translations. the integral $\textstyle \int_{m} k \, d\mu$ of the gaussian curvature $k$ of a two - dimensional riemannian manifold $( m, g )$ is invariant under changes of the riemannian metric $g$. this is the gauss – bonnet theorem. = = = mu puzzle = = = the mu puzzle is a good example of a logical problem where determining an invariant is of use for an impossibility proof. the puzzle asks one to start with the word mi and transform it into the word mu, using in each step one of the following transformation rules : if a string ends with an i, a u may be appended ( xi → xiu ) the string after the m may be completely duplicated ( mx → mxx ) any three consecutive i ' s ( iii ) may be replaced with a single u ( xiiiy → xuy ) any two consecutive u ' s may be removed ( xuuy → xy ) an example derivation ( with superscripts indicating the applied rules ) is mi →2 mii →
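The invariant that settles the mu puzzle is the count of i's modulo 3: none of the four rules can move it from a nonzero value to zero, so mu (with zero i's) is unreachable from mi (with one). A small sketch that enumerates a few steps of the rewrite system and checks the invariant:

```python
def i_count_mod3(s):
    return s.count("i") % 3

def apply_rules(s):
    """Yield every string reachable from s in one step of the MU-puzzle rules."""
    if s.endswith("i"):
        yield s + "u"                          # xi -> xiu
    yield s[0] + s[1:] * 2                     # mx -> mxx
    for k in range(len(s) - 2):
        if s[k:k + 3] == "iii":
            yield s[:k] + "u" + s[k + 3:]      # xiiiy -> xuy
    for k in range(len(s) - 1):
        if s[k:k + 2] == "uu":
            yield s[:k] + s[k + 2:]            # xuuy -> xy

# The i-count mod 3 never reaches zero, so "mu" never appears.
frontier, seen = {"mi"}, {"mi"}
for _ in range(6):  # explore a few rewrite steps; the invariant holds at every depth
    frontier = {t for s in frontier for t in apply_rules(s)} - seen
    seen |= frontier
print(all(i_count_mod3(s) != 0 for s in seen))  # True
```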
i transform the trapdoor problem of hfe into a linear algebra problem.
Question: A caterpillar changing into a butterfly is an example of
A) instinct.
B) duplication.
C) reproduction.
D) metamorphosis.
|
D) metamorphosis.
|
Context:
of cells = = = autologous : the donor and the recipient of the cells are the same individual. cells are harvested, cultured or stored, and then reintroduced to the host. as a result of the host ' s own cells being reintroduced, an antigenic response is not elicited. the body ' s immune system recognizes these re - implanted cells as its own, and does not target them for attack. autologous cell dependence on host cell health and donor site morbidity may be deterrents to their use. adipose - derived and bone marrow - derived mesenchymal stem cells are commonly autologous in nature, and can be used in a myriad of ways, from helping repair skeletal tissue to replenishing beta cells in diabetic patients. allogenic : cells are obtained from the body of a donor of the same species as the recipient. while there are some ethical constraints to the use of human cells for in vitro studies ( i. e. human brain tissue chimera development ), the employment of dermal fibroblasts from human foreskin demonstrates an immunologically safe and thus a viable choice for allogenic tissue engineering of the skin. xenogenic : these cells are derived isolated cells from alternate species from the recipient. a notable example of xenogeneic tissue utilization is cardiovascular implant construction via animal cells. chimeric human - animal farming raises ethical concerns around the potential for improved consciousness from implanting human organs in animals. syngeneic or isogenic : these cells describe those borne from identical genetic code. this imparts an immunologic benefit similar to autologous cell lines ( see above ). autologous cells can be considered syngenic, but the classification also extends to non - autologously derived cells such as those from an identical twin, from genetically identical ( cloned ) research models, or induced stem cells ( isc ) as related to the donor. = = = stem cells = = = stem cells are undifferentiated cells with the ability to divide in culture and give rise to different forms of specialized cells. stem cells are divided into " adult " and " embryonic " stem cells according to their source. while there is still a large ethical debate related to the use of embryonic stem cells, it is thought that another alternative source β induced pluripotent stem cells β may be useful for the repair of diseased or damaged tissues, or may be used to grow new organs. totipotent cells
s immune system recognizes these re - implanted cells as its own, and does not target them for attack. autologous cell dependence on host cell health and donor site morbidity may be deterrents to their use. adipose - derived and bone marrow - derived mesenchymal stem cells are commonly autologous in nature, and can be used in a myriad of ways, from helping repair skeletal tissue to replenishing beta cells in diabetic patients. allogenic : cells are obtained from the body of a donor of the same species as the recipient. while there are some ethical constraints to the use of human cells for in vitro studies ( i. e. human brain tissue chimera development ), the employment of dermal fibroblasts from human foreskin demonstrates an immunologically safe and thus a viable choice for allogenic tissue engineering of the skin. xenogenic : these cells are derived isolated cells from alternate species from the recipient. a notable example of xenogeneic tissue utilization is cardiovascular implant construction via animal cells. chimeric human - animal farming raises ethical concerns around the potential for improved consciousness from implanting human organs in animals. syngeneic or isogenic : these cells describe those borne from identical genetic code. this imparts an immunologic benefit similar to autologous cell lines ( see above ). autologous cells can be considered syngenic, but the classification also extends to non - autologously derived cells such as those from an identical twin, from genetically identical ( cloned ) research models, or induced stem cells ( isc ) as related to the donor. = = = stem cells = = = stem cells are undifferentiated cells with the ability to divide in culture and give rise to different forms of specialized cells. stem cells are divided into " adult " and " embryonic " stem cells according to their source. while there is still a large ethical debate related to the use of embryonic stem cells, it is thought that another alternative source β induced pluripotent stem cells β may be useful for the repair of diseased or damaged tissues, or may be used to grow new organs. totipotent cells are stem cells which can divide into further stem cells or differentiate into any cell type in the body, including extra - embryonic tissue. pluripotent cells are stem cells which can differentiate into any cell type in the body except extra - embryonic tissue. induced pluripotent stem cells ( ipscs )
irradiation is the process of exposing food to ionizing radiation in order to destroy microorganisms, bacteria, viruses, or insects that might be present in the food. the radiation sources used include radioisotope gamma ray sources, x - ray generators and electron accelerators. further applications include sprout inhibition, delay of ripening, increase of juice yield, and improvement of re - hydration. irradiation is a more general term for deliberate exposure of materials to radiation to achieve a technical goal ( in this context ' ionizing radiation ' is implied ). as such it is also used on non - food items, such as medical hardware, plastics, tubes for gas - pipelines, hoses for floor - heating, shrink - foils for food packaging, automobile parts, wires and cables ( insulation ), tires, and even gemstones. compared to the amount of food irradiated, the volume of those every - day applications is huge but not noticed by the consumer. the genuine effect of processing food by ionizing radiation relates to damage to the dna, the basic genetic information for life. microorganisms can no longer proliferate and continue their malignant or pathogenic activities. spoilage - causing micro - organisms cannot continue their activities. insects do not survive or become incapable of procreation. plants cannot continue the natural ripening or aging process. all these effects are beneficial to the consumer and the food industry alike. the amount of energy imparted for effective food irradiation is low compared to cooking the same food ; even at a typical dose of 10 kgy most food, which is ( with regard to warming ) physically equivalent to water, would warm by only about 2. 5 °c ( 4. 5 °f ). the special feature of processing food by ionizing radiation is that the energy density per atomic transition is very high ; it can cleave molecules and induce ionization ( hence the name ), which cannot be achieved by mere heating. this is the reason for new beneficial effects, but at the same time also for new concerns. the treatment of solid food by ionizing radiation can provide an effect similar to heat pasteurization of liquids, such as milk. however, the use of the term cold pasteurization to describe irradiated foods is controversial, because pasteurization and irradiation are fundamentally different processes, although the intended end results can in some cases be similar. detractors of food irradiation have concerns about the health hazards of induced radioact
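The "10 kGy warms water-like food by only about 2.5 °C" figure follows directly from the definition of the gray (1 joule absorbed per kilogram) and the specific heat of water; a one-line check:

```python
# Temperature rise from an absorbed dose, treating the food as water
# (specific heat ~4184 J/(kg*K)); 1 gray = 1 joule per kilogram.
def temperature_rise_celsius(dose_gray, specific_heat=4184.0):
    return dose_gray / specific_heat

print(temperature_rise_celsius(10_000))  # ~2.4 C for a 10 kGy dose, consistent with the passage
```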
an antibody is to be generated. usually this is done by a series of injections of the antigen in question, over the course of several weeks. these injections are typically followed by the use of in vivo electroporation, which significantly enhances the immune response. once splenocytes are isolated from the mammal ' s spleen, the b cells are fused with immortalised myeloma cells. the fusion of the b cells with myeloma cells can be done using electrofusion. electrofusion causes the b cells and myeloma cells to align and fuse with the application of an electric field. alternatively, the b - cells and myelomas can be made to fuse by chemical protocols, most often using polyethylene glycol. the myeloma cells are selected beforehand to ensure they are not secreting antibody themselves and that they lack the hypoxanthine - guanine phosphoribosyltransferase ( hgprt ) gene, making them sensitive ( or vulnerable ) to the hat medium ( see below ). fused cells are incubated in hat medium ( hypoxanthine - aminopterin - thymidine medium ) for roughly 10 to 14 days. aminopterin blocks the pathway that allows for nucleotide synthesis. hence, unfused myeloma cells die, as they cannot produce nucleotides by the de novo or salvage pathways because they lack hgprt. removal of the unfused myeloma cells is necessary because they have the potential to outgrow other cells, especially weakly established hybridomas. unfused b cells die as they have a short life span. in this way, only the b cell - myeloma hybrids survive, since the hgprt gene coming from the b cells is functional. these cells produce antibodies ( a property of b cells ) and are immortal ( a property of myeloma cells ). the incubated medium is then diluted into multi - well plates to such an extent that each well contains only one cell. since the antibodies in a well are produced by the same b cell, they will be directed towards the same epitope, and are thus monoclonal antibodies. the next stage is a rapid primary screening process, which identifies and selects only those hybridomas that produce antibodies of appropriate specificity. the first screening technique used is called elisa. the hybridoma culture supernatant, secondary enzyme labeled conjugate, and chromogenic substrate, are then inc
cell. in juxtacrine signaling, there is direct contact between the signaling and responding cells. finally, hormones are ligands that travel through the circulatory systems of animals or vascular systems of plants to reach their target cells. once a ligand binds with a receptor, it can influence the behavior of another cell, depending on the type of receptor. for instance, neurotransmitters that bind with an inotropic receptor can alter the excitability of a target cell. other types of receptors include protein kinase receptors ( e. g., receptor for the hormone insulin ) and g protein - coupled receptors. activation of g protein - coupled receptors can initiate second messenger cascades. the process by which a chemical or physical signal is transmitted through a cell as a series of molecular events is called signal transduction. = = = cell cycle = = = the cell cycle is a series of events that take place in a cell that cause it to divide into two daughter cells. these events include the duplication of its dna and some of its organelles, and the subsequent partitioning of its cytoplasm into two daughter cells in a process called cell division. in eukaryotes ( i. e., animal, plant, fungal, and protist cells ), there are two distinct types of cell division : mitosis and meiosis. mitosis is part of the cell cycle, in which replicated chromosomes are separated into two new nuclei. cell division gives rise to genetically identical cells in which the total number of chromosomes is maintained. in general, mitosis ( division of the nucleus ) is preceded by the s stage of interphase ( during which the dna is replicated ) and is often followed by telophase and cytokinesis ; which divides the cytoplasm, organelles and cell membrane of one cell into two new cells containing roughly equal shares of these cellular components. the different stages of mitosis all together define the mitotic phase of an animal cell cycle β the division of the mother cell into two genetically identical daughter cells. the cell cycle is a vital process by which a single - celled fertilized egg develops into a mature organism, as well as the process by which hair, skin, blood cells, and some internal organs are renewed. after cell division, each of the daughter cells begin the interphase of a new cycle. in contrast to mitosis, meiosis results in four haploid daughter cells by undergoing one round of dna replication followed by two divisions
aspirates and peripheral blood, further development of this method is necessary before it can be used routinely. one major drawback of immuno - cytochemistry is that only tumor - associated and not tumor - specific monoclonal antibodies are used, and as a result, some cross - reaction with normal cells can occur. in order to effectively stage breast cancer and assess the efficacy of purging regimens prior to autologous stem cell infusion, it is important to detect even small quantities of breast cancer cells. immuno - histochemical methods are ideal for this purpose because they are simple, sensitive, and quite specific. franklin et al. performed a sensitive immuno - cytochemical assay by using a combination of four monoclonal antibodies ( 260f9, 520c9, 317g5 and bre - 3 ) against tumor cell surface glycoproteins to identify breast tumour cells in bone marrow and peripheral blood. they concluded from the results that immuno - cytochemical staining of bone marrow and peripheral blood is a sensitive and simple way to detect and quantify breast cancer cells. one of the main reasons for metastatic relapse in patients with solid tumours is the early dissemination of malignant cells. the use of monoclonal antibodies ( mabs ) specific for cytokeratins can identify disseminated individual epithelial tumor cells in the bone marrow. one study reports on having developed an immuno - cytochemical procedure for simultaneous labeling of cytokeratin component no. 18 ( ck18 ) and prostate specific antigen ( psa ). this would help in the further characterization of disseminated individual epithelial tumor cells in patients with prostate cancer. the twelve control aspirates from patients with benign prostatic hyperplasia showed negative staining, which further supports the specificity of ck18 in detecting epithelial tumour cells in bone marrow. in most cases of malignant disease complicated by effusion, neoplastic cells can be easily recognized. however, in some cases, malignant cells are not so easily seen or their presence is too doubtful to call it a positive report. the use of immuno - cytochemical techniques increases diagnostic accuracy in these cases. ghosh, mason and spriggs analysed 53 samples of pleural or peritoneal fluid from 41 patients with malignant disease. conventional cytological examination had not revealed any neoplastic cells. three monocl
a property of myeloma cells ). the incubated medium is then diluted into multi - well plates to such an extent that each well contains only one cell. since the antibodies in a well are produced by the same b cell, they will be directed towards the same epitope, and are thus monoclonal antibodies. the next stage is a rapid primary screening process, which identifies and selects only those hybridomas that produce antibodies of appropriate specificity. the first screening technique used is called elisa. the hybridoma culture supernatant, secondary enzyme labeled conjugate, and chromogenic substrate, are then incubated, and the formation of a colored product indicates a positive hybridoma. alternatively, immunocytochemical, western blot, and immunoprecipitation - mass spectrometry. unlike western blot assays, immunoprecipitation - mass spectrometry facilitates screening and ranking of clones which bind to the native ( non - denaturated ) forms of antigen proteins. flow cytometry screening has been used for primary screening of a large number ( ~ 1000 ) of hybridoma clones recognizing the native form of the antigen on the cell surface. in the flow cytometry - based screening, a mixture of antigen - negative cells and antigen - positive cells is used as the antigen to be tested for each hybridoma supernatant sample. the b cell that produces the desired antibodies can be cloned to produce many identical daughter clones. supplemental media containing interleukin - 6 ( such as briclone ) are essential for this step. once a hybridoma colony is established, it will continually grow in culture medium like rpmi - 1640 ( with antibiotics and fetal bovine serum ) and produce antibodies. multiwell plates are used initially to grow the hybridomas, and after selection, are changed to larger tissue culture flasks. this maintains the well - being of the hybridomas and provides enough cells for cryopreservation and supernatant for subsequent investigations. the culture supernatant can yield 1 to 60 ΞΌg / ml of monoclonal antibody, which is maintained at - 20 Β°c or lower until required. by using culture supernatant or a purified immunoglobulin preparation, further analysis of a potential monoclonal antibody producing hybridoma can be made in terms of reactivity, specificity, and cross - reactivity. = = applications = = the use of mono
phosphatase, human chorionic gonadotrophin, α - fetoprotein and others are organ - associated antigens, and the production of monoclonal antibodies against these antigens helps in determining the nature of a primary tumor. monoclonal antibodies are especially useful in distinguishing morphologically similar lesions, like pleural and peritoneal mesothelioma, adenocarcinoma, and in the determination of the organ or tissue origin of undifferentiated metastases. selected monoclonal antibodies help in the detection of occult metastases ( cancer of unknown primary origin ) by immuno - cytological analysis of bone marrow, other tissue aspirates, as well as lymph nodes and other tissues, and can have increased sensitivity over normal histopathological staining. one study performed a sensitive immuno - histochemical assay on bone marrow aspirates of 20 patients with localized prostate cancer. three monoclonal antibodies ( t16, c26, and ae - 1 ), capable of recognizing membrane and cytoskeletal antigens expressed by epithelial cells to detect tumour cells, were used in the assay. bone marrow aspirates of 22 % of patients with localized prostate cancer ( stage b, 0 / 5 ; stage c, 2 / 4 ), and 36 % of patients with metastatic prostate cancer ( stage d1, 0 / 7 patients ; stage d2, 4 / 4 patients ) had antigen - positive cells in their bone marrow. it was concluded that immuno - histochemical staining of bone marrow aspirates is very useful to detect occult bone marrow metastases in patients with apparently localized prostate cancer. although immuno - cytochemistry using tumor - associated monoclonal antibodies has led to an improved ability to detect occult breast cancer cells in bone marrow aspirates and peripheral blood, further development of this method is necessary before it can be used routinely. one major drawback of immuno - cytochemistry is that only tumor - associated and not tumor - specific monoclonal antibodies are used, and as a result, some cross - reaction with normal cells can occur. in order to effectively stage breast cancer and assess the efficacy of purging regimens prior to autologous stem cell infusion, it is important to detect even small quantities of breast cancer cells. immuno - histochemical methods are ideal for this purpose because they are simple, sensitive, and quite specific
naturally take up foreign dna. this ability can be induced in other bacteria via stress ( e. g. thermal or electric shock ), which increases the cell membrane ' s permeability to dna ; up - taken dna can either integrate with the genome or exist as extrachromosomal dna. dna is generally inserted into animal cells using microinjection, where it can be injected through the cell ' s nuclear envelope directly into the nucleus, or through the use of viral vectors. plant genomes can be engineered by physical methods or by use of agrobacterium for the delivery of sequences hosted in t - dna binary vectors. in plants the dna is often inserted using agrobacterium - mediated transformation, taking advantage of the agrobacteriums t - dna sequence that allows natural insertion of genetic material into plant cells. other methods include biolistics, where particles of gold or tungsten are coated with dna and then shot into young plant cells, and electroporation, which involves using an electric shock to make the cell membrane permeable to plasmid dna. as only a single cell is transformed with genetic material, the organism must be regenerated from that single cell. in plants this is accomplished through the use of tissue culture. in animals it is necessary to ensure that the inserted dna is present in the embryonic stem cells. bacteria consist of a single cell and reproduce clonally so regeneration is not necessary. selectable markers are used to easily differentiate transformed from untransformed cells. these markers are usually present in the transgenic organism, although a number of strategies have been developed that can remove the selectable marker from the mature transgenic plant. further testing using pcr, southern hybridization, and dna sequencing is conducted to confirm that an organism contains the new gene. these tests can also confirm the chromosomal location and copy number of the inserted gene. the presence of the gene does not guarantee it will be expressed at appropriate levels in the target tissue so methods that look for and measure the gene products ( rna and protein ) are also used. these include northern hybridisation, quantitative rt - pcr, western blot, immunofluorescence, elisa and phenotypic analysis. the new genetic material can be inserted randomly within the host genome or targeted to a specific location. the technique of gene targeting uses homologous recombination to make desired changes to a specific endogenous gene. this tends to occur at a relatively low frequency in plants and animals and generally
in order to effectively stage breast cancer and assess the efficacy of purging regimens prior to autologous stem cell infusion, it is important to detect even small quantities of breast cancer cells. immuno - histochemical methods are ideal for this purpose because they are simple, sensitive, and quite specific. franklin et al. performed a sensitive immuno - cytochemical assay by using a combination of four monoclonal antibodies ( 260f9, 520c9, 317g5 and bre - 3 ) against tumor cell surface glycoproteins to identify breast tumour cells in bone marrow and peripheral blood. they concluded from the results that immuno - cytochemical staining of bone marrow and peripheral blood is a sensitive and simple way to detect and quantify breast cancer cells. one of the main reasons for metastatic relapse in patients with solid tumours is the early dissemination of malignant cells. the use of monoclonal antibodies ( mabs ) specific for cytokeratins can identify disseminated individual epithelial tumor cells in the bone marrow. one study reports on having developed an immuno - cytochemical procedure for simultaneous labeling of cytokeratin component no. 18 ( ck18 ) and prostate specific antigen ( psa ). this would help in the further characterization of disseminated individual epithelial tumor cells in patients with prostate cancer. the twelve control aspirates from patients with benign prostatic hyperplasia showed negative staining, which further supports the specificity of ck18 in detecting epithelial tumour cells in bone marrow. in most cases of malignant disease complicated by effusion, neoplastic cells can be easily recognized. however, in some cases, malignant cells are not so easily seen or their presence is too doubtful to call it a positive report. the use of immuno - cytochemical techniques increases diagnostic accuracy in these cases. ghosh, mason and spriggs analysed 53 samples of pleural or peritoneal fluid from 41 patients with malignant disease. conventional cytological examination had not revealed any neoplastic cells. three monoclonal antibodies ( anti - cea, ca 1 and hmfg - 2 ) were used to search for malignant cells. immunocytochemical labelling was performed on unstained smears, which had been stored at - 20 °c up to 18 months. twelve of the forty - one cases
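The effusion study described above calls a sample positive when any antibody in the panel labels malignant cells. As a loose illustration of that kind of multi-marker decision rule, the Python sketch below combines three marker results; only the marker names come from the passage, and the staining values are invented.

    # Minimal sketch of a multi-antibody decision rule: a sample is flagged
    # positive if any marker in the panel stains suspicious cells.
    # Marker names follow the passage; the results below are invented.
    panel = ["anti-CEA", "Ca 1", "HMFG-2"]

    def immunocytochemistry_call(staining: dict) -> str:
        return "positive" if any(staining.get(m, False) for m in panel) else "negative"

    sample = {"anti-CEA": False, "Ca 1": True, "HMFG-2": False}
    print(immunocytochemistry_call(sample))  # positive, because one marker stained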
Question: Some immune cells are actively involved in ingesting, destroying, and presenting invading microbial antigens on their surface to stimulate other cells to produce antibodies. Which of these cells is responsible for initiating such an immune response?
A) mast cells
B) phagocytes
C) B-lymphocytes
D) T-lymphocytes
|
B) phagocytes
|
Context:
it to divide into two daughter cells. these events include the duplication of its dna and some of its organelles, and the subsequent partitioning of its cytoplasm into two daughter cells in a process called cell division. in eukaryotes ( i. e., animal, plant, fungal, and protist cells ), there are two distinct types of cell division : mitosis and meiosis. mitosis is part of the cell cycle, in which replicated chromosomes are separated into two new nuclei. cell division gives rise to genetically identical cells in which the total number of chromosomes is maintained. in general, mitosis ( division of the nucleus ) is preceded by the s stage of interphase ( during which the dna is replicated ) and is often followed by telophase and cytokinesis ; which divides the cytoplasm, organelles and cell membrane of one cell into two new cells containing roughly equal shares of these cellular components. the different stages of mitosis all together define the mitotic phase of an animal cell cycle β the division of the mother cell into two genetically identical daughter cells. the cell cycle is a vital process by which a single - celled fertilized egg develops into a mature organism, as well as the process by which hair, skin, blood cells, and some internal organs are renewed. after cell division, each of the daughter cells begin the interphase of a new cycle. in contrast to mitosis, meiosis results in four haploid daughter cells by undergoing one round of dna replication followed by two divisions. homologous chromosomes are separated in the first division ( meiosis i ), and sister chromatids are separated in the second division ( meiosis ii ). both of these cell division cycles are used in the process of sexual reproduction at some point in their life cycle. both are believed to be present in the last eukaryotic common ancestor. prokaryotes ( i. e., archaea and bacteria ) can also undergo cell division ( or binary fission ). unlike the processes of mitosis and meiosis in eukaryotes, binary fission in prokaryotes takes place without the formation of a spindle apparatus on the cell. before binary fission, dna in the bacterium is tightly coiled. after it has uncoiled and duplicated, it is pulled to the separate poles of the bacterium as it increases the size to prepare for splitting. growth of a new cell wall begins to separate the bacterium ( triggered by ft
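Because binary fission splits one cell into two, a population that has divided n times has grown by a factor of 2 to the n. The short Python sketch below makes that doubling explicit; the division counts are chosen only for illustration.

    # Each round of binary fission doubles the population: after n divisions,
    # one cell has become 2**n cells. Division counts here are illustrative.
    def population_after(divisions: int, starting_cells: int = 1) -> int:
        return starting_cells * 2 ** divisions

    for n in (1, 3, 10):
        print(f"after {n} divisions: {population_after(n)} cells")
    # after 1 divisions: 2 cells / after 3 divisions: 8 cells / after 10 divisions: 1024 cells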
. most cells are very small, with diameters ranging from 1 to 100 micrometers and are therefore only visible under a light or electron microscope. there are generally two types of cells : eukaryotic cells, which contain a nucleus, and prokaryotic cells, which do not. prokaryotes are single - celled organisms such as bacteria, whereas eukaryotes can be single - celled or multicellular. in multicellular organisms, every cell in the organism ' s body is derived ultimately from a single cell in a fertilized egg. = = = cell structure = = = every cell is enclosed within a cell membrane that separates its cytoplasm from the extracellular space. a cell membrane consists of a lipid bilayer, including cholesterols that sit between phospholipids to maintain their fluidity at various temperatures. cell membranes are semipermeable, allowing small molecules such as oxygen, carbon dioxide, and water to pass through while restricting the movement of larger molecules and charged particles such as ions. cell membranes also contain membrane proteins, including integral membrane proteins that go across the membrane serving as membrane transporters, and peripheral proteins that loosely attach to the outer side of the cell membrane, acting as enzymes shaping the cell. cell membranes are involved in various cellular processes such as cell adhesion, storing electrical energy, and cell signalling and serve as the attachment surface for several extracellular structures such as a cell wall, glycocalyx, and cytoskeleton. within the cytoplasm of a cell, there are many biomolecules such as proteins and nucleic acids. in addition to biomolecules, eukaryotic cells have specialized structures called organelles that have their own lipid bilayers or are spatially units. these organelles include the cell nucleus, which contains most of the cell ' s dna, or mitochondria, which generate adenosine triphosphate ( atp ) to power cellular processes. other organelles such as endoplasmic reticulum and golgi apparatus play a role in the synthesis and packaging of proteins, respectively. biomolecules such as proteins can be engulfed by lysosomes, another specialized organelle. plant cells have additional organelles that distinguish them from animal cells such as a cell wall that provides support for the plant cell, chloroplasts that harvest sunlight energy to produce sugar, and vacuoles that provide storage and structural support
of these cellular components. the different stages of mitosis all together define the mitotic phase of an animal cell cycle β the division of the mother cell into two genetically identical daughter cells. the cell cycle is a vital process by which a single - celled fertilized egg develops into a mature organism, as well as the process by which hair, skin, blood cells, and some internal organs are renewed. after cell division, each of the daughter cells begin the interphase of a new cycle. in contrast to mitosis, meiosis results in four haploid daughter cells by undergoing one round of dna replication followed by two divisions. homologous chromosomes are separated in the first division ( meiosis i ), and sister chromatids are separated in the second division ( meiosis ii ). both of these cell division cycles are used in the process of sexual reproduction at some point in their life cycle. both are believed to be present in the last eukaryotic common ancestor. prokaryotes ( i. e., archaea and bacteria ) can also undergo cell division ( or binary fission ). unlike the processes of mitosis and meiosis in eukaryotes, binary fission in prokaryotes takes place without the formation of a spindle apparatus on the cell. before binary fission, dna in the bacterium is tightly coiled. after it has uncoiled and duplicated, it is pulled to the separate poles of the bacterium as it increases the size to prepare for splitting. growth of a new cell wall begins to separate the bacterium ( triggered by ftsz polymerization and " z - ring " formation ). the new cell wall ( septum ) fully develops, resulting in the complete split of the bacterium. the new daughter cells have tightly coiled dna rods, ribosomes, and plasmids. = = = sexual reproduction and meiosis = = = meiosis is a central feature of sexual reproduction in eukaryotes, and the most fundamental function of meiosis appears to be conservation of the integrity of the genome that is passed on to progeny by parents. two aspects of sexual reproduction, meiotic recombination and outcrossing, are likely maintained respectively by the adaptive advantages of recombinational repair of genomic dna damage and genetic complementation which masks the expression of deleterious recessive mutations. the beneficial effect of genetic complementation, derived from outcrossing ( cross - fertilization ) is also referred to as hybrid vigor or heterosis. charles
protist cells ), there are two distinct types of cell division : mitosis and meiosis. mitosis is part of the cell cycle, in which replicated chromosomes are separated into two new nuclei. cell division gives rise to genetically identical cells in which the total number of chromosomes is maintained. in general, mitosis ( division of the nucleus ) is preceded by the s stage of interphase ( during which the dna is replicated ) and is often followed by telophase and cytokinesis ; which divides the cytoplasm, organelles and cell membrane of one cell into two new cells containing roughly equal shares of these cellular components. the different stages of mitosis all together define the mitotic phase of an animal cell cycle β the division of the mother cell into two genetically identical daughter cells. the cell cycle is a vital process by which a single - celled fertilized egg develops into a mature organism, as well as the process by which hair, skin, blood cells, and some internal organs are renewed. after cell division, each of the daughter cells begin the interphase of a new cycle. in contrast to mitosis, meiosis results in four haploid daughter cells by undergoing one round of dna replication followed by two divisions. homologous chromosomes are separated in the first division ( meiosis i ), and sister chromatids are separated in the second division ( meiosis ii ). both of these cell division cycles are used in the process of sexual reproduction at some point in their life cycle. both are believed to be present in the last eukaryotic common ancestor. prokaryotes ( i. e., archaea and bacteria ) can also undergo cell division ( or binary fission ). unlike the processes of mitosis and meiosis in eukaryotes, binary fission in prokaryotes takes place without the formation of a spindle apparatus on the cell. before binary fission, dna in the bacterium is tightly coiled. after it has uncoiled and duplicated, it is pulled to the separate poles of the bacterium as it increases the size to prepare for splitting. growth of a new cell wall begins to separate the bacterium ( triggered by ftsz polymerization and " z - ring " formation ). the new cell wall ( septum ) fully develops, resulting in the complete split of the bacterium. the new daughter cells have tightly coiled dna rods, ribosomes, and plasmids. = = = sexual reproduction and meiosis = = = mei
are single - celled organisms such as bacteria, whereas eukaryotes can be single - celled or multicellular. in multicellular organisms, every cell in the organism ' s body is derived ultimately from a single cell in a fertilized egg. = = = cell structure = = = every cell is enclosed within a cell membrane that separates its cytoplasm from the extracellular space. a cell membrane consists of a lipid bilayer, including cholesterols that sit between phospholipids to maintain their fluidity at various temperatures. cell membranes are semipermeable, allowing small molecules such as oxygen, carbon dioxide, and water to pass through while restricting the movement of larger molecules and charged particles such as ions. cell membranes also contain membrane proteins, including integral membrane proteins that go across the membrane serving as membrane transporters, and peripheral proteins that loosely attach to the outer side of the cell membrane, acting as enzymes shaping the cell. cell membranes are involved in various cellular processes such as cell adhesion, storing electrical energy, and cell signalling and serve as the attachment surface for several extracellular structures such as a cell wall, glycocalyx, and cytoskeleton. within the cytoplasm of a cell, there are many biomolecules such as proteins and nucleic acids. in addition to biomolecules, eukaryotic cells have specialized structures called organelles that have their own lipid bilayers or are spatially units. these organelles include the cell nucleus, which contains most of the cell ' s dna, or mitochondria, which generate adenosine triphosphate ( atp ) to power cellular processes. other organelles such as endoplasmic reticulum and golgi apparatus play a role in the synthesis and packaging of proteins, respectively. biomolecules such as proteins can be engulfed by lysosomes, another specialized organelle. plant cells have additional organelles that distinguish them from animal cells such as a cell wall that provides support for the plant cell, chloroplasts that harvest sunlight energy to produce sugar, and vacuoles that provide storage and structural support as well as being involved in reproduction and breakdown of plant seeds. eukaryotic cells also have cytoskeleton that is made up of microtubules, intermediate filaments, and microfilaments, all of which provide support for the cell and are involved in the movement of the cell and its
the process by which hair, skin, blood cells, and some internal organs are renewed. after cell division, each of the daughter cells begin the interphase of a new cycle. in contrast to mitosis, meiosis results in four haploid daughter cells by undergoing one round of dna replication followed by two divisions. homologous chromosomes are separated in the first division ( meiosis i ), and sister chromatids are separated in the second division ( meiosis ii ). both of these cell division cycles are used in the process of sexual reproduction at some point in their life cycle. both are believed to be present in the last eukaryotic common ancestor. prokaryotes ( i. e., archaea and bacteria ) can also undergo cell division ( or binary fission ). unlike the processes of mitosis and meiosis in eukaryotes, binary fission in prokaryotes takes place without the formation of a spindle apparatus on the cell. before binary fission, dna in the bacterium is tightly coiled. after it has uncoiled and duplicated, it is pulled to the separate poles of the bacterium as it increases the size to prepare for splitting. growth of a new cell wall begins to separate the bacterium ( triggered by ftsz polymerization and " z - ring " formation ). the new cell wall ( septum ) fully develops, resulting in the complete split of the bacterium. the new daughter cells have tightly coiled dna rods, ribosomes, and plasmids. = = = sexual reproduction and meiosis = = = meiosis is a central feature of sexual reproduction in eukaryotes, and the most fundamental function of meiosis appears to be conservation of the integrity of the genome that is passed on to progeny by parents. two aspects of sexual reproduction, meiotic recombination and outcrossing, are likely maintained respectively by the adaptive advantages of recombinational repair of genomic dna damage and genetic complementation which masks the expression of deleterious recessive mutations. the beneficial effect of genetic complementation, derived from outcrossing ( cross - fertilization ) is also referred to as hybrid vigor or heterosis. charles darwin in his 1878 book the effects of cross and self - fertilization in the vegetable kingdom at the start of chapter xii noted β the first and most important of the conclusions which may be drawn from the observations given in this volume, is that generally cross - fertilisation is beneficial and self - fertilis
invertebrates, or protozoans, the protist grouping is not a formal taxonomic group but is used for convenience. most protists are unicellular ; these are called microbial eukaryotes. plants are mainly multicellular organisms, predominantly photosynthetic eukaryotes of the kingdom plantae, which would exclude fungi and some algae. plant cells were derived by endosymbiosis of a cyanobacterium into an early eukaryote about one billion years ago, which gave rise to chloroplasts. the first several clades that emerged following primary endosymbiosis were aquatic and most of the aquatic photosynthetic eukaryotic organisms are collectively described as algae, which is a term of convenience as not all algae are closely related. algae comprise several distinct clades such as glaucophytes, which are microscopic freshwater algae that may have resembled in form to the early unicellular ancestor of plantae. unlike glaucophytes, the other algal clades such as red and green algae are multicellular. green algae comprise three major clades : chlorophytes, coleochaetophytes, and stoneworts. fungi are eukaryotes that digest foods outside their bodies, secreting digestive enzymes that break down large food molecules before absorbing them through their cell membranes. many fungi are also saprobes, feeding on dead organic matter, making them important decomposers in ecological systems. animals are multicellular eukaryotes. with few exceptions, animals consume organic material, breathe oxygen, are able to move, can reproduce sexually, and grow from a hollow sphere of cells, the blastula, during embryonic development. over 1. 5 million living animal species have been described β of which around 1 million are insects β but it has been estimated there are over 7 million animal species in total. they have complex interactions with each other and their environments, forming intricate food webs. = = = viruses = = = viruses are submicroscopic infectious agents that replicate inside the cells of organisms. viruses infect all types of life forms, from animals and plants to microorganisms, including bacteria and archaea. more than 6, 000 virus species have been described in detail. viruses are found in almost every ecosystem on earth and are the most numerous type of biological entity. the origins of viruses in the evolutionary history of life are unclear : some may have evolved from plasmids β pieces of dna
likely that protists share a common ancestor ( the last eukaryotic common ancestor ), protists by themselves do not constitute a separate clade as some protists may be more closely related to plants, fungi, or animals than they are to other protists. like groupings such as algae, invertebrates, or protozoans, the protist grouping is not a formal taxonomic group but is used for convenience. most protists are unicellular ; these are called microbial eukaryotes. plants are mainly multicellular organisms, predominantly photosynthetic eukaryotes of the kingdom plantae, which would exclude fungi and some algae. plant cells were derived by endosymbiosis of a cyanobacterium into an early eukaryote about one billion years ago, which gave rise to chloroplasts. the first several clades that emerged following primary endosymbiosis were aquatic and most of the aquatic photosynthetic eukaryotic organisms are collectively described as algae, which is a term of convenience as not all algae are closely related. algae comprise several distinct clades such as glaucophytes, which are microscopic freshwater algae that may have resembled in form to the early unicellular ancestor of plantae. unlike glaucophytes, the other algal clades such as red and green algae are multicellular. green algae comprise three major clades : chlorophytes, coleochaetophytes, and stoneworts. fungi are eukaryotes that digest foods outside their bodies, secreting digestive enzymes that break down large food molecules before absorbing them through their cell membranes. many fungi are also saprobes, feeding on dead organic matter, making them important decomposers in ecological systems. animals are multicellular eukaryotes. with few exceptions, animals consume organic material, breathe oxygen, are able to move, can reproduce sexually, and grow from a hollow sphere of cells, the blastula, during embryonic development. over 1. 5 million living animal species have been described β of which around 1 million are insects β but it has been estimated there are over 7 million animal species in total. they have complex interactions with each other and their environments, forming intricate food webs. = = = viruses = = = viruses are submicroscopic infectious agents that replicate inside the cells of organisms. viruses infect all types of life forms, from animals and plants to microorganisms,
mitochondria and chloroplasts, both of which are now part of modern - day eukaryotic cells. the major lineages of eukaryotes diversified in the precambrian about 1. 5 billion years ago and can be classified into eight major clades : alveolates, excavates, stramenopiles, plants, rhizarians, amoebozoans, fungi, and animals. five of these clades are collectively known as protists, which are mostly microscopic eukaryotic organisms that are not plants, fungi, or animals. while it is likely that protists share a common ancestor ( the last eukaryotic common ancestor ), protists by themselves do not constitute a separate clade as some protists may be more closely related to plants, fungi, or animals than they are to other protists. like groupings such as algae, invertebrates, or protozoans, the protist grouping is not a formal taxonomic group but is used for convenience. most protists are unicellular ; these are called microbial eukaryotes. plants are mainly multicellular organisms, predominantly photosynthetic eukaryotes of the kingdom plantae, which would exclude fungi and some algae. plant cells were derived by endosymbiosis of a cyanobacterium into an early eukaryote about one billion years ago, which gave rise to chloroplasts. the first several clades that emerged following primary endosymbiosis were aquatic and most of the aquatic photosynthetic eukaryotic organisms are collectively described as algae, which is a term of convenience as not all algae are closely related. algae comprise several distinct clades such as glaucophytes, which are microscopic freshwater algae that may have resembled in form to the early unicellular ancestor of plantae. unlike glaucophytes, the other algal clades such as red and green algae are multicellular. green algae comprise three major clades : chlorophytes, coleochaetophytes, and stoneworts. fungi are eukaryotes that digest foods outside their bodies, secreting digestive enzymes that break down large food molecules before absorbing them through their cell membranes. many fungi are also saprobes, feeding on dead organic matter, making them important decomposers in ecological systems. animals are multicellular eukaryotes. with few exceptions, animals
into major divisions, starting with four eons ( hadean, archean, proterozoic, and phanerozoic ), the first three of which are collectively known as the precambrian, which lasted approximately 4 billion years. each eon can be divided into eras, with the phanerozoic eon that began 539 million years ago being subdivided into paleozoic, mesozoic, and cenozoic eras. these three eras together comprise eleven periods ( cambrian, ordovician, silurian, devonian, carboniferous, permian, triassic, jurassic, cretaceous, tertiary, and quaternary ). the similarities among all known present - day species indicate that they have diverged through the process of evolution from their common ancestor. biologists regard the ubiquity of the genetic code as evidence of universal common descent for all bacteria, archaea, and eukaryotes. microbial mats of coexisting bacteria and archaea were the dominant form of life in the early archean eon and many of the major steps in early evolution are thought to have taken place in this environment. the earliest evidence of eukaryotes dates from 1. 85 billion years ago, and while they may have been present earlier, their diversification accelerated when they started using oxygen in their metabolism. later, around 1. 7 billion years ago, multicellular organisms began to appear, with differentiated cells performing specialised functions. algae - like multicellular land plants are dated back to about 1 billion years ago, although evidence suggests that microorganisms formed the earliest terrestrial ecosystems, at least 2. 7 billion years ago. microorganisms are thought to have paved the way for the inception of land plants in the ordovician period. land plants were so successful that they are thought to have contributed to the late devonian extinction event. ediacara biota appear during the ediacaran period, while vertebrates, along with most other modern phyla originated about 525 million years ago during the cambrian explosion. during the permian period, synapsids, including the ancestors of mammals, dominated the land, but most of this group became extinct in the permian β triassic extinction event 252 million years ago. during the recovery from this catastrophe, archosaurs became the most abundant land vertebrates ; one archosaur group, the dinosaurs, dominated the jurassic and cretaceous periods. after the cretaceous β paleogene extinction event 66 million years ago killed off
Question: A single prokaryotic cell can divide several times in an hour. Few eukaryotic cells can divide as quickly. Which of the following statements best explains this difference?
A) Eukaryotic cells are smaller than prokaryotic cells.
B) Eukaryotic cells have less DNA than prokaryotic cells.
C) Eukaryotic cells have more cell walls than prokaryotic cells.
D) Eukaryotic cells are more structurally complex than prokaryotic cells.
|
D) Eukaryotic cells are more structurally complex than prokaryotic cells.
|
Context:
is the scientific study of inheritance. mendelian inheritance, specifically, is the process by which genes and traits are passed on from parents to offspring. it has several principles. the first is that genetic characteristics, alleles, are discrete and have alternate forms ( e. g., purple vs. white or tall vs. dwarf ), each inherited from one of two parents. based on the law of dominance and uniformity, which states that some alleles are dominant while others are recessive ; an organism with at least one dominant allele will display the phenotype of that dominant allele. during gamete formation, the alleles for each gene segregate, so that each gamete carries only one allele for each gene. heterozygotic individuals produce gametes with an equal frequency of two alleles. finally, the law of independent assortment, states that genes of different traits can segregate independently during the formation of gametes, i. e., genes are unlinked. an exception to this rule would include traits that are sex - linked. test crosses can be performed to experimentally determine the underlying genotype of an organism with a dominant phenotype. a punnett square can be used to predict the results of a test cross. the chromosome theory of inheritance, which states that genes are found on chromosomes, was supported by thomas morgans ' s experiments with fruit flies, which established the sex linkage between eye color and sex in these insects. = = = genes and dna = = = a gene is a unit of heredity that corresponds to a region of deoxyribonucleic acid ( dna ) that carries genetic information that controls form or function of an organism. dna is composed of two polynucleotide chains that coil around each other to form a double helix. it is found as linear chromosomes in eukaryotes, and circular chromosomes in prokaryotes. the set of chromosomes in a cell is collectively known as its genome. in eukaryotes, dna is mainly in the cell nucleus. in prokaryotes, the dna is held within the nucleoid. the genetic information is held within genes, and the complete assemblage in an organism is called its genotype. dna replication is a semiconservative process whereby each strand serves as a template for a new strand of dna. mutations are heritable changes in dna. they can arise spontaneously as a result of replication errors that were not corrected by proofreading or can
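The test cross and Punnett square mentioned above can be made concrete with a short Python sketch: crossing a heterozygote (Aa) with a homozygous recessive (aa) and tabulating the offspring genotypes gives the 1 : 1 ratio that the law of segregation predicts.

    from collections import Counter
    from itertools import product

    # Punnett square for a monohybrid cross: each parent contributes one
    # allele per gamete (law of segregation); offspring combine one from each.
    def punnett(parent1: str, parent2: str) -> Counter:
        offspring = ("".join(sorted(g1 + g2)) for g1, g2 in product(parent1, parent2))
        return Counter(offspring)

    print(punnett("Aa", "aa"))  # Counter({'Aa': 2, 'aa': 2}) -> the expected 1:1 test-cross ratio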
cytokinesis. this can occur early in development to produce an autopolyploid or partly autopolyploid organism, or during normal processes of cellular differentiation to produce some cell types that are polyploid ( endopolyploidy ), or during gamete formation. an allopolyploid plant may result from a hybridisation event between two different species. both autopolyploid and allopolyploid plants can often reproduce normally, but may be unable to cross - breed successfully with the parent population because there is a mismatch in chromosome numbers. these plants that are reproductively isolated from the parent species but live within the same geographical area, may be sufficiently successful to form a new species. some otherwise sterile plant polyploids can still reproduce vegetatively or by seed apomixis, forming clonal populations of identical individuals. durum wheat is a fertile tetraploid allopolyploid, while bread wheat is a fertile hexaploid. the commercial banana is an example of a sterile, seedless triploid hybrid. common dandelion is a triploid that produces viable seeds by apomictic seed. as in other eukaryotes, the inheritance of endosymbiotic organelles like mitochondria and chloroplasts in plants is non - mendelian. chloroplasts are inherited through the male parent in gymnosperms but often through the female parent in flowering plants. = = = molecular genetics = = = a considerable amount of new knowledge about plant function comes from studies of the molecular genetics of model plants such as the thale cress, arabidopsis thaliana, a weedy species in the mustard family ( brassicaceae ). the genome or hereditary information contained in the genes of this species is encoded by about 135 million base pairs of dna, forming one of the smallest genomes among flowering plants. arabidopsis was the first plant to have its genome sequenced, in 2000. the sequencing of some other relatively small genomes, of rice ( oryza sativa ) and brachypodium distachyon, has made them important model species for understanding the genetics, cellular and molecular biology of cereals, grasses and monocots generally. model plants such as arabidopsis thaliana are used for studying the molecular biology of plant cells and the chloroplast. ideally, these organisms have small genomes that are well known or completely sequenced, small stature and short
for the treatment of diabetes, was previously extracted from the pancreas of abattoir animals ( cattle or pigs ). the genetically engineered bacteria are able to produce large quantities of synthetic human insulin at relatively low cost. biotechnology has also enabled emerging therapeutics like gene therapy. the application of biotechnology to basic science ( for example through the human genome project ) has also dramatically improved our understanding of biology and as our scientific knowledge of normal and disease biology has increased, our ability to develop new medicines to treat previously untreatable diseases has increased as well. genetic testing allows the genetic diagnosis of vulnerabilities to inherited diseases, and can also be used to determine a child ' s parentage ( genetic mother and father ) or in general a person ' s ancestry. in addition to studying chromosomes to the level of individual genes, genetic testing in a broader sense includes biochemical tests for the possible presence of genetic diseases, or mutant forms of genes associated with increased risk of developing genetic disorders. genetic testing identifies changes in chromosomes, genes, or proteins. most of the time, testing is used to find changes that are associated with inherited disorders. the results of a genetic test can confirm or rule out a suspected genetic condition or help determine a person ' s chance of developing or passing on a genetic disorder. as of 2011 several hundred genetic tests were in use. since genetic testing may open up ethical or psychological problems, genetic testing is often accompanied by genetic counseling. = = = agriculture = = = genetically modified crops ( " gm crops ", or " biotech crops " ) are plants used in agriculture, the dna of which has been modified with genetic engineering techniques. in most cases, the main aim is to introduce a new trait that does not occur naturally in the species. biotechnology firms can contribute to future food security by improving the nutrition and viability of urban agriculture. furthermore, the protection of intellectual property rights encourages private sector investment in agrobiotechnology. examples in food crops include resistance to certain pests, diseases, stressful environmental conditions, resistance to chemical treatments ( e. g. resistance to a herbicide ), reduction of spoilage, or improving the nutrient profile of the crop. examples in non - food crops include production of pharmaceutical agents, biofuels, and other industrially useful goods, as well as for bioremediation. farmers have widely adopted gm technology. between 1996 and 2011, the total surface area of land cultivated with gm crops had increased by a factor of 94, from 17, 000 to 1, 600, 000 square
of several types of apomixis that occur in plants. apomixis can also happen in a seed, producing a seed that contains an embryo genetically identical to the parent. most sexually reproducing organisms are diploid, with paired chromosomes, but doubling of their chromosome number may occur due to errors in cytokinesis. this can occur early in development to produce an autopolyploid or partly autopolyploid organism, or during normal processes of cellular differentiation to produce some cell types that are polyploid ( endopolyploidy ), or during gamete formation. an allopolyploid plant may result from a hybridisation event between two different species. both autopolyploid and allopolyploid plants can often reproduce normally, but may be unable to cross - breed successfully with the parent population because there is a mismatch in chromosome numbers. these plants that are reproductively isolated from the parent species but live within the same geographical area, may be sufficiently successful to form a new species. some otherwise sterile plant polyploids can still reproduce vegetatively or by seed apomixis, forming clonal populations of identical individuals. durum wheat is a fertile tetraploid allopolyploid, while bread wheat is a fertile hexaploid. the commercial banana is an example of a sterile, seedless triploid hybrid. common dandelion is a triploid that produces viable seeds by apomictic seed. as in other eukaryotes, the inheritance of endosymbiotic organelles like mitochondria and chloroplasts in plants is non - mendelian. chloroplasts are inherited through the male parent in gymnosperms but often through the female parent in flowering plants. = = = molecular genetics = = = a considerable amount of new knowledge about plant function comes from studies of the molecular genetics of model plants such as the thale cress, arabidopsis thaliana, a weedy species in the mustard family ( brassicaceae ). the genome or hereditary information contained in the genes of this species is encoded by about 135 million base pairs of dna, forming one of the smallest genomes among flowering plants. arabidopsis was the first plant to have its genome sequenced, in 2000. the sequencing of some other relatively small genomes, of rice ( oryza sativa ) and brachypodium distachyon, has made them important model species for understanding the genetics,
gametes, i. e., genes are unlinked. an exception to this rule would include traits that are sex - linked. test crosses can be performed to experimentally determine the underlying genotype of an organism with a dominant phenotype. a punnett square can be used to predict the results of a test cross. the chromosome theory of inheritance, which states that genes are found on chromosomes, was supported by thomas morgan ' s experiments with fruit flies, which established the sex linkage between eye color and sex in these insects. = = = genes and dna = = = a gene is a unit of heredity that corresponds to a region of deoxyribonucleic acid ( dna ) that carries genetic information that controls form or function of an organism. dna is composed of two polynucleotide chains that coil around each other to form a double helix. it is found as linear chromosomes in eukaryotes, and circular chromosomes in prokaryotes. the set of chromosomes in a cell is collectively known as its genome. in eukaryotes, dna is mainly in the cell nucleus. in prokaryotes, the dna is held within the nucleoid. the genetic information is held within genes, and the complete assemblage in an organism is called its genotype. dna replication is a semiconservative process whereby each strand serves as a template for a new strand of dna. mutations are heritable changes in dna. they can arise spontaneously as a result of replication errors that were not corrected by proofreading or can be induced by an environmental mutagen such as a chemical ( e. g., nitrous acid, benzopyrene ) or radiation ( e. g., x - ray, gamma ray, ultraviolet radiation, particles emitted by unstable isotopes ). mutations can lead to phenotypic effects such as loss - of - function, gain - of - function, and conditional mutations. some mutations are beneficial, as they are a source of genetic variation for evolution. others are harmful if they were to result in a loss of function of genes needed for survival. = = = gene expression = = = gene expression is the molecular process by which a genotype encoded in dna gives rise to an observable phenotype in the proteins of an organism ' s body. this process is summarized by the central dogma of molecular biology, which was formulated by francis crick in 1958. according to the central dogma, genetic information flows from dna
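Because each strand serves as a template for a new strand, the base-pairing rule (A with T, G with C) determines the daughter strand completely. The minimal Python sketch below builds the complementary strand for an arbitrary example sequence.

    # Base-pairing rule used when each parental strand templates a new strand:
    # A pairs with T and G pairs with C. The example sequence is arbitrary.
    PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

    def complementary_strand(template: str) -> str:
        # read along the template; the new strand is antiparallel,
        # so the complement is reported in reverse order
        return "".join(PAIR[base] for base in reversed(template))

    print(complementary_strand("ATGCGT"))  # ACGCAT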
tubers in potato is one example. particularly in arctic or alpine habitats, where opportunities for fertilisation of flowers by animals are rare, plantlets or bulbs, may develop instead of flowers, replacing sexual reproduction with asexual reproduction and giving rise to clonal populations genetically identical to the parent. this is one of several types of apomixis that occur in plants. apomixis can also happen in a seed, producing a seed that contains an embryo genetically identical to the parent. most sexually reproducing organisms are diploid, with paired chromosomes, but doubling of their chromosome number may occur due to errors in cytokinesis. this can occur early in development to produce an autopolyploid or partly autopolyploid organism, or during normal processes of cellular differentiation to produce some cell types that are polyploid ( endopolyploidy ), or during gamete formation. an allopolyploid plant may result from a hybridisation event between two different species. both autopolyploid and allopolyploid plants can often reproduce normally, but may be unable to cross - breed successfully with the parent population because there is a mismatch in chromosome numbers. these plants that are reproductively isolated from the parent species but live within the same geographical area, may be sufficiently successful to form a new species. some otherwise sterile plant polyploids can still reproduce vegetatively or by seed apomixis, forming clonal populations of identical individuals. durum wheat is a fertile tetraploid allopolyploid, while bread wheat is a fertile hexaploid. the commercial banana is an example of a sterile, seedless triploid hybrid. common dandelion is a triploid that produces viable seeds by apomictic seed. as in other eukaryotes, the inheritance of endosymbiotic organelles like mitochondria and chloroplasts in plants is non - mendelian. chloroplasts are inherited through the male parent in gymnosperms but often through the female parent in flowering plants. = = = molecular genetics = = = a considerable amount of new knowledge about plant function comes from studies of the molecular genetics of model plants such as the thale cress, arabidopsis thaliana, a weedy species in the mustard family ( brassicaceae ). the genome or hereditary information contained in the genes of this species is encoded by about 135 million base pairs of dna, forming one of the
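The wheat ploidy levels mentioned above translate into chromosome counts by simple multiplication. The sketch below (Python) assumes the commonly cited base chromosome number of x = 7 for wheat, which is not stated in the passage itself.

    # Chromosome count = base chromosome number (x) times ploidy level.
    # The base number x = 7 for wheat is a commonly cited value, assumed here.
    def chromosome_count(base_number: int, ploidy: int) -> int:
        return base_number * ploidy

    print("durum wheat (tetraploid):", chromosome_count(7, 4))  # 28 chromosomes
    print("bread wheat (hexaploid):", chromosome_count(7, 6))   # 42 chromosomes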
to chromatin, which is a complex of dna and protein found in eukaryotic cells. = = = genes, development, and evolution = = = development is the process by which a multicellular organism ( plant or animal ) goes through a series of changes, starting from a single cell, and taking on various forms that are characteristic of its life cycle. there are four key processes that underlie development : determination, differentiation, morphogenesis, and growth. determination sets the developmental fate of a cell, which becomes more restrictive during development. differentiation is the process by which specialized cells arise from less specialized cells such as stem cells. stem cells are undifferentiated or partially differentiated cells that can differentiate into various types of cells and proliferate indefinitely to produce more of the same stem cell. cellular differentiation dramatically changes a cell ' s size, shape, membrane potential, metabolic activity, and responsiveness to signals, which are largely due to highly controlled modifications in gene expression and epigenetics. with a few exceptions, cellular differentiation almost never involves a change in the dna sequence itself. thus, different cells can have very different physical characteristics despite having the same genome. morphogenesis, or the development of body form, is the result of spatial differences in gene expression. a small fraction of the genes in an organism ' s genome called the developmental - genetic toolkit control the development of that organism. these toolkit genes are highly conserved among phyla, meaning that they are ancient and very similar in widely separated groups of animals. differences in deployment of toolkit genes affect the body plan and the number, identity, and pattern of body parts. among the most important toolkit genes are the hox genes. hox genes determine where repeating parts, such as the many vertebrae of snakes, will grow in a developing embryo or larva. = = evolution = = = = = evolutionary processes = = = evolution is a central organizing concept in biology. it is the change in heritable characteristics of populations over successive generations. in artificial selection, animals were selectively bred for specific traits. given that traits are inherited, populations contain a varied mix of traits, and reproduction is able to increase any population, darwin argued that in the natural world, it was nature that played the role of humans in selecting for specific traits. darwin inferred that individuals who possessed heritable traits better adapted to their environments are more likely to survive and produce more offspring than other individuals. he further inferred that this would lead to the
phenotypic analysis. the new genetic material can be inserted randomly within the host genome or targeted to a specific location. the technique of gene targeting uses homologous recombination to make desired changes to a specific endogenous gene. this tends to occur at a relatively low frequency in plants and animals and generally requires the use of selectable markers. the frequency of gene targeting can be greatly enhanced through genome editing. genome editing uses artificially engineered nucleases that create specific double - stranded breaks at desired locations in the genome, and use the cell ' s endogenous mechanisms to repair the induced break by the natural processes of homologous recombination and nonhomologous end - joining. there are four families of engineered nucleases : meganucleases, zinc finger nucleases, transcription activator - like effector nucleases ( talens ), and the cas9 - guiderna system ( adapted from crispr ). talen and crispr are the two most commonly used and each has its own advantages. talens have greater target specificity, while crispr is easier to design and more efficient. in addition to enhancing gene targeting, engineered nucleases can be used to introduce mutations at endogenous genes that generate a gene knockout. = = applications = = genetic engineering has applications in medicine, research, industry and agriculture and can be used on a wide range of plants, animals and microorganisms. bacteria, the first organisms to be genetically modified, can have plasmid dna inserted containing new genes that code for medicines or enzymes that process food and other substrates. plants have been modified for insect protection, herbicide resistance, virus resistance, enhanced nutrition, tolerance to environmental pressures and the production of edible vaccines. most commercialised gmos are insect resistant or herbicide tolerant crop plants. genetically modified animals have been used for research, model animals and the production of agricultural or pharmaceutical products. the genetically modified animals include animals with genes knocked out, increased susceptibility to disease, hormones for extra growth and the ability to express proteins in their milk. = = = medicine = = = genetic engineering has many applications to medicine that include the manufacturing of drugs, creation of model animals that mimic human conditions and gene therapy. one of the earliest uses of genetic engineering was to mass - produce human insulin in bacteria. this application has now been applied to human growth hormones, follicle stimulating hormones ( for treating infertility ), human albumin,
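To make the guide-RNA idea concrete, the Python sketch below scans an example sequence for candidate Cas9 target sites, taking a roughly 20-nucleotide protospacer followed by an NGG PAM (the textbook parameters for the commonly used SpCas9, not details given in the passage); the sequence itself is invented.

    # Scan a DNA sequence for candidate SpCas9 target sites: a 20-nt protospacer
    # immediately followed by an NGG PAM. Sequence and parameters are illustrative.
    def find_cas9_sites(seq: str, protospacer_len: int = 20):
        sites = []
        for i in range(len(seq) - protospacer_len - 2):
            pam = seq[i + protospacer_len : i + protospacer_len + 3]
            if pam[1:] == "GG":  # N-G-G
                sites.append((i, seq[i : i + protospacer_len], pam))
        return sites

    example = "TTACGATCGATCGGCTAGCTAACCTGGAAACCTTGGTT"
    for pos, protospacer, pam in find_cas9_sites(example):
        print(pos, protospacer, pam)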
for natural scientists, with the creation of transgenic organisms one of the most important tools for analysis of gene function. genes and other genetic information from a wide range of organisms can be inserted into bacteria for storage and modification, creating genetically modified bacteria in the process. bacteria are cheap, easy to grow, clonal, multiply quickly, relatively easy to transform and can be stored at - 80 Β°c almost indefinitely. once a gene is isolated it can be stored inside the bacteria providing an unlimited supply for research. organisms are genetically engineered to discover the functions of certain genes. this could be the effect on the phenotype of the organism, where the gene is expressed or what other genes it interacts with. these experiments generally involve loss of function, gain of function, tracking and expression. loss of function experiments, such as in a gene knockout experiment, in which an organism is engineered to lack the activity of one or more genes. in a simple knockout a copy of the desired gene has been altered to make it non - functional. embryonic stem cells incorporate the altered gene, which replaces the already present functional copy. these stem cells are injected into blastocysts, which are implanted into surrogate mothers. this allows the experimenter to analyse the defects caused by this mutation and thereby determine the role of particular genes. it is used especially frequently in developmental biology. when this is done by creating a library of genes with point mutations at every position in the area of interest, or even every position in the whole gene, this is called " scanning mutagenesis ". the simplest method, and the first to be used, is " alanine scanning ", where every position in turn is mutated to the unreactive amino acid alanine. gain of function experiments, the logical counterpart of knockouts. these are sometimes performed in conjunction with knockout experiments to more finely establish the function of the desired gene. the process is much the same as that in knockout engineering, except that the construct is designed to increase the function of the gene, usually by providing extra copies of the gene or inducing synthesis of the protein more frequently. gain of function is used to tell whether or not a protein is sufficient for a function, but does not always mean it is required, especially when dealing with genetic or functional redundancy. tracking experiments, which seek to gain information about the localisation and interaction of the desired protein. one way to do this is to replace the wild - type gene with a ' fusion ' gene, which is a juxtaposition
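The alanine-scanning procedure described above, in which every position is mutated in turn to alanine, can be expressed directly in code. The Python sketch below generates the full set of single-alanine variants for an invented peptide.

    # Alanine scanning: mutate each residue of a protein sequence, one position
    # at a time, to alanine (A). The peptide below is invented for illustration.
    def alanine_scan(sequence: str):
        for i, residue in enumerate(sequence):
            if residue != "A":  # positions that are already alanine are skipped
                yield f"{residue}{i + 1}A", sequence[:i] + "A" + sequence[i + 1 :]

    for label, variant in alanine_scan("MKTWQ"):
        print(label, variant)
    # M1A AKTWQ / K2A MATWQ / T3A MKAWQ / W4A MKTAQ / Q5A MKTWA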
multiply quickly, relatively easy to transform and can be stored at - 80 °c almost indefinitely. once a gene is isolated it can be stored inside the bacteria providing an unlimited supply for research. organisms are genetically engineered to discover the functions of certain genes. this could be the effect on the phenotype of the organism, where the gene is expressed or what other genes it interacts with. these experiments generally involve loss of function, gain of function, tracking and expression. loss of function experiments, such as in a gene knockout experiment, in which an organism is engineered to lack the activity of one or more genes. in a simple knockout a copy of the desired gene has been altered to make it non - functional. embryonic stem cells incorporate the altered gene, which replaces the already present functional copy. these stem cells are injected into blastocysts, which are implanted into surrogate mothers. this allows the experimenter to analyse the defects caused by this mutation and thereby determine the role of particular genes. it is used especially frequently in developmental biology. when this is done by creating a library of genes with point mutations at every position in the area of interest, or even every position in the whole gene, this is called " scanning mutagenesis ". the simplest method, and the first to be used, is " alanine scanning ", where every position in turn is mutated to the unreactive amino acid alanine. gain of function experiments, the logical counterpart of knockouts. these are sometimes performed in conjunction with knockout experiments to more finely establish the function of the desired gene. the process is much the same as that in knockout engineering, except that the construct is designed to increase the function of the gene, usually by providing extra copies of the gene or inducing synthesis of the protein more frequently. gain of function is used to tell whether or not a protein is sufficient for a function, but does not always mean it is required, especially when dealing with genetic or functional redundancy. tracking experiments, which seek to gain information about the localisation and interaction of the desired protein. one way to do this is to replace the wild - type gene with a ' fusion ' gene, which is a juxtaposition of the wild - type gene with a reporting element such as green fluorescent protein ( gfp ) that will allow easy visualisation of the products of the genetic modification. while this is a useful technique, the manipulation can destroy the function of the gene, creating secondary effects and possibly calling into question the results of the experiment.
Question: Which biological process determines the probability that particular alleles will be found in any given gamete?
A) mutation
B) meiosis
C) cell cycle
D) protein synthesis
|
B) meiosis
|
Context:
it is stronger, then demodulates it, extracting the original modulation signal from the modulated carrier wave. the modulation signal is converted by a transducer back to a human - usable form : an audio signal is converted to sound waves by a loudspeaker or earphones, a video signal is converted to images by a display, while a digital signal is applied to a computer or microprocessor, which interacts with human users. the radio waves from many transmitters pass through the air simultaneously without interfering with each other because each transmitter ' s radio waves oscillate at a different frequency, measured in hertz ( hz ), kilohertz ( khz ), megahertz ( mhz ) or gigahertz ( ghz ). the receiving antenna typically picks up the radio signals of many transmitters. the receiver uses tuned circuits to select the radio signal desired out of all the signals picked up by the antenna and reject the others. a tuned circuit acts like a resonator, similar to a tuning fork. it has a natural resonant frequency at which it oscillates. the resonant frequency of the receiver ' s tuned circuit is adjusted by the user to the frequency of the desired radio station ; this is called tuning. the oscillating radio signal from the desired station causes the tuned circuit to oscillate in sympathy, and it passes the signal on to the rest of the receiver. radio signals at other frequencies are blocked by the tuned circuit and not passed on. = = = bandwidth = = = a modulated radio wave, carrying an information signal, occupies a range of frequencies. the information in a radio signal is usually concentrated in narrow frequency bands called sidebands ( sb ) just above and below the carrier frequency. the width in hertz of the frequency range that the radio signal occupies, the highest frequency minus the lowest frequency, is called its bandwidth ( bw ). for any given signal - to - noise ratio, a given bandwidth can carry the same amount of information regardless of where in the radio frequency spectrum it is located ; bandwidth is a measure of information - carrying capacity. the bandwidth required by a radio transmission depends on the data rate of the information being sent, and the spectral efficiency of the modulation method used ; how much data it can transmit in each unit of bandwidth. different types of information signals carried by radio have different data rates. for example, a television signal has a greater data rate than an audio signal. the radio spectrum, the total range of
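Bandwidth as defined above is simply the highest occupied frequency minus the lowest. As a worked example with invented figures, a conventional double-sideband AM signal carrying audio up to 5 kHz on a 1,000 kHz carrier occupies roughly 995 kHz to 1,005 kHz, about 10 kHz of bandwidth; the Python sketch below performs the same subtraction.

    # Bandwidth = highest occupied frequency minus lowest occupied frequency.
    # For conventional double-sideband AM, sidebands extend one audio-bandwidth
    # above and below the carrier. Carrier and audio figures are illustrative.
    def occupied_band(carrier_hz: float, max_audio_hz: float):
        return carrier_hz - max_audio_hz, carrier_hz + max_audio_hz

    lo, hi = occupied_band(carrier_hz=1_000_000, max_audio_hz=5_000)
    print(f"occupied: {lo/1e3:.0f} kHz to {hi/1e3:.0f} kHz, bandwidth {(hi-lo)/1e3:.0f} kHz")
    # occupied: 995 kHz to 1005 kHz, bandwidth 10 kHz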
a live / sound reinforcement engineer hears source material and tries to correlate that sonic experience with system performance. wireless microphone engineer, or a2. this position is responsible for wireless microphones during a theatre production, a sports event or a corporate event. foldback or monitor engineer β a person running foldback sound during a live event. the term foldback comes from the old practice of folding back audio signals from the front of house ( foh ) mixing console to the stage so musicians can hear themselves while performing. monitor engineers usually have a separate audio system from the foh engineer and manipulate audio signals independently from what the audience hears so they can satisfy the requirements of each performer on stage. in - ear systems, digital and analog mixing consoles, and a variety of speaker enclosures are typically used by monitor engineers. in addition, most monitor engineers must be familiar with wireless or rf ( radio - frequency ) equipment and often must communicate personally with the artist ( s ) during each performance. systems engineer β responsible for the design setup of modern pa systems, which are often very complex. a systems engineer is usually also referred to as a crew chief on tour and is responsible for the performance and day - to - day job requirements of the audio crew as a whole along with the foh audio system. this is a sound - only position concerned with implementation, not to be confused with the interdisciplinary field of system engineering, which typically requires a college degree. re - recording mixer β a person in post - production who mixes audio tracks for feature films or television programs. = = equipment = = an audio engineer is proficient with different types of recording media, such as analog tape, digital multi - track recorders and workstations, plug - ins and computer knowledge. with the advent of the digital age, it is increasingly important for the audio engineer to understand software and hardware integration, from synchronization to analog to digital transfers. in their daily work, audio engineers use many tools, including : tape machines analog - to - digital converters digital - to - analog converters digital audio workstations ( daws ) audio plug - ins dynamic range compressors audio data compressors equalization ( audio ) music sequencers signal processors headphones microphones preamplifiers mixing consoles amplifiers loudspeakers = = notable audio engineers = = = = = recording = = = = = = mastering = = = = = = live sound = = = = = see also = = = = references = = = = external links = = audio engineering society audio engineering
produces. the mastering engineer makes any final adjustments to the overall sound of the record in the final step before commercial duplication. mastering engineers use principles of equalization, compression and limiting to fine - tune the sound timbre and dynamics and to achieve a louder recording. sound designer β broadly an artist who produces soundtracks or sound effects content for media. live sound engineer front of house ( foh ) engineer, or a1. β a person dealing with live sound reinforcement. this usually includes planning and installation of loudspeakers, cabling and equipment and mixing sound during the show. this may or may not include running the foldback sound. a live / sound reinforcement engineer hears source material and tries to correlate that sonic experience with system performance. wireless microphone engineer, or a2. this position is responsible for wireless microphones during a theatre production, a sports event or a corporate event. foldback or monitor engineer β a person running foldback sound during a live event. the term foldback comes from the old practice of folding back audio signals from the front of house ( foh ) mixing console to the stage so musicians can hear themselves while performing. monitor engineers usually have a separate audio system from the foh engineer and manipulate audio signals independently from what the audience hears so they can satisfy the requirements of each performer on stage. in - ear systems, digital and analog mixing consoles, and a variety of speaker enclosures are typically used by monitor engineers. in addition, most monitor engineers must be familiar with wireless or rf ( radio - frequency ) equipment and often must communicate personally with the artist ( s ) during each performance. systems engineer β responsible for the design setup of modern pa systems, which are often very complex. a systems engineer is usually also referred to as a crew chief on tour and is responsible for the performance and day - to - day job requirements of the audio crew as a whole along with the foh audio system. this is a sound - only position concerned with implementation, not to be confused with the interdisciplinary field of system engineering, which typically requires a college degree. re - recording mixer β a person in post - production who mixes audio tracks for feature films or television programs. = = equipment = = an audio engineer is proficient with different types of recording media, such as analog tape, digital multi - track recorders and workstations, plug - ins and computer knowledge. with the advent of the digital age, it is increasingly important for the audio engineer to understand software and hardware integration, from synchronization to analog to digital transfers
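The mastering and equipment descriptions above mention dynamic range compressors and limiting. As a rough sketch only (the threshold and ratio values are assumptions, not taken from the text), the code below computes the static gain curve of a basic downward compressor: levels below the threshold pass unchanged, and any increase above the threshold is divided by the ratio.

```python
def compressed_level_db(input_db: float, threshold_db: float = -20.0, ratio: float = 4.0) -> float:
    """Static curve of a simple hard-knee downward compressor."""
    if input_db <= threshold_db:
        return input_db
    return threshold_db + (input_db - threshold_db) / ratio

# Sweep some input levels; a very high ratio approximates a limiter.
for level in (-40, -30, -20, -10, 0):
    print(f"{level} dB in -> {compressed_level_db(level):.1f} dB out")
```

Reducing the dynamic range this way, then raising the overall gain, is one reason compression and limiting can make a master sound louder, as the passage notes.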
baby while they are in other parts of the house. the wavebands used vary by region, but analog baby monitors generally transmit with low power in the 16, 9. 3 β 49. 9 or 900 mhz wavebands, and digital systems in the 2. 4 ghz waveband. many baby monitors have duplex channels so the parent can talk to the baby, and cameras to show video of the baby. wireless microphone β a battery - powered microphone with a short - range transmitter that is handheld or worn on a person ' s body which transmits its sound by radio to a nearby receiver unit connected to a sound system. wireless microphones are used by public speakers, performers, and television personalities so they can move freely without trailing a microphone cord. traditionally, analog models transmit in fm on unused portions of the television broadcast frequencies in the vhf and uhf bands. some models transmit on two frequency channels for diversity reception to prevent nulls from interrupting transmission as the performer moves around. some models use digital modulation to prevent unauthorized reception by scanner radio receivers ; these operate in the 900 mhz, 2. 4 ghz or 6 ghz ism bands. european standards also support wireless multichannel audio systems ( wmas ) that can better support the use of large numbers of wireless microphones at a single event or venue. as of 2021, u. s. regulators were considering adopting rules for wmas. = = = data communication = = = wireless networking β automated radio links which transmit digital data between computers and other wireless devices using radio waves, linking the devices together transparently in a computer network. computer networks can transmit any form of data : in addition to email and web pages, they also carry phone calls ( voip ), audio, and video content ( called streaming media ). security is more of an issue for wireless networks than for wired networks since anyone nearby with a wireless modem can access the signal and attempt to log in. the radio signals of wireless networks are encrypted using wpa. wireless lan ( wireless local area network or wi - fi ) β based on the ieee 802. 11 standards, these are the most widely used computer networks, used to implement local area networks without cables, linking computers, laptops, cell phones, video game consoles, smart tvs and printers in a home or office together, and to a wireless router connecting them to the internet with a wire or cable connection. wireless routers in public places like libraries, hotels and coffee shops create wireless access points ( hotspots ) to allow the public to
the higher microwave band 3 β 6 ghz, and millimeter wave band, around 28 and 39 ghz. since these frequencies have a shorter range than previous cellphone bands, the cells will be smaller than the cells in previous cellular networks which could be many miles across. millimeter - wave cells will only be a few blocks long, and instead of a cell base station and antenna tower, they will have many small antennas attached to utility poles and buildings. satellite phone ( satphone ) β a portable wireless telephone similar to a cell phone, connected to the telephone network through a radio link to an orbiting communications satellite instead of through cell towers. they are more expensive than cell phones ; but their advantage is that, unlike a cell phone which is limited to areas covered by cell towers, satphones can be used over most or all of the geographical area of the earth. in order for the phone to communicate with a satellite using a small omnidirectional antenna, first - generation systems use satellites in low earth orbit, about 400 β 700 miles ( 640 β 1, 100 km ) above the surface. with an orbital period of about 100 minutes, a satellite can only be in view of a phone for about 4 β 15 minutes, so the call is " handed off " to another satellite when one passes beyond the local horizon. therefore, large numbers of satellites, about 40 to 70, are required to ensure that at least one satellite is in view continuously from each point on earth. other satphone systems use satellites in geostationary orbit in which only a few satellites are needed, but these cannot be used at high latitudes because of terrestrial interference. cordless phone β a landline telephone in which the handset is portable and communicates with the rest of the phone by a short - range full duplex radio link, instead of being attached by a cord. both the handset and the base station have low - power radio transceivers that handle the short - range bidirectional radio link. as of 2022, cordless phones in most nations use the dect transmission standard. land mobile radio system β short - range mobile or portable half - duplex radio transceivers operating in the vhf or uhf band that can be used without a license. they are often installed in vehicles, with the mobile units communicating with a dispatcher at a fixed base station. special systems with reserved frequencies are used by first responder services ; police, fire, ambulance, and emergency services, and other government services. other systems are made for
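The satellite-phone description above states that low-earth-orbit satellites at roughly 400-700 miles altitude circle the Earth in about 100 minutes. That figure follows from Kepler's third law for a circular orbit; the short sketch below checks it (the constants are standard values and the altitudes are just sample points within the quoted range, not data from the passage).

```python
import math

MU_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371_000.0       # mean Earth radius, m

def orbital_period_minutes(altitude_m: float) -> float:
    """Circular-orbit period T = 2*pi*sqrt(a^3 / mu), with a = R_earth + altitude."""
    a = R_EARTH + altitude_m
    return 2.0 * math.pi * math.sqrt(a ** 3 / MU_EARTH) / 60.0

for miles in (400, 550, 700):          # sample altitudes from the quoted range
    altitude = miles * 1609.344        # miles -> metres
    print(f"{miles} mi altitude: {orbital_period_minutes(altitude):.1f} min per orbit")
```

The result, roughly 97 to 108 minutes across that altitude range, matches the "about 100 minutes" quoted in the passage and explains why each satellite is only briefly in view of a given phone.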
. older 2g, 3g, and 4g networks use frequencies in the uhf and low microwave range, between 700 mhz and 3 ghz. the cell phone transmitter adjusts its power output to use the minimum power necessary to communicate with the cell tower ; 0. 6 w when near the tower, up to 3 w when farther away. cell tower channel transmitter power is 50 w. current generation phones, called smartphones, have many functions besides making telephone calls, and therefore have several other radio transmitters and receivers that connect them with other networks : usually a wi - fi modem, a bluetooth modem, and a gps receiver. 5g cellular network β next - generation cellular networks which began deployment in 2019. their major advantage is much higher data rates than previous cellular networks, up to 10 gbps ; 100 times faster than the previous cellular technology, 4g lte. the higher data rates are achieved partly by using higher frequency radio waves, in the higher microwave band 3 β 6 ghz, and millimeter wave band, around 28 and 39 ghz. since these frequencies have a shorter range than previous cellphone bands, the cells will be smaller than the cells in previous cellular networks which could be many miles across. millimeter - wave cells will only be a few blocks long, and instead of a cell base station and antenna tower, they will have many small antennas attached to utility poles and buildings. satellite phone ( satphone ) β a portable wireless telephone similar to a cell phone, connected to the telephone network through a radio link to an orbiting communications satellite instead of through cell towers. they are more expensive than cell phones ; but their advantage is that, unlike a cell phone which is limited to areas covered by cell towers, satphones can be used over most or all of the geographical area of the earth. in order for the phone to communicate with a satellite using a small omnidirectional antenna, first - generation systems use satellites in low earth orbit, about 400 β 700 miles ( 640 β 1, 100 km ) above the surface. with an orbital period of about 100 minutes, a satellite can only be in view of a phone for about 4 β 15 minutes, so the call is " handed off " to another satellite when one passes beyond the local horizon. therefore, large numbers of satellites, about 40 to 70, are required to ensure that at least one satellite is in view continuously from each point on earth. other satphone systems use satellites in geostationary orbit in which only a few satellites are needed, but these cannot
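The 5G paragraph above notes that the higher microwave and millimetre-wave frequencies have a shorter range than earlier cellphone bands, which is why the cells must be smaller. One common way to see the trend (ignoring absorption and blockage, which make it worse) is the free-space path loss formula, FSPL(dB) = 20*log10(d_km) + 20*log10(f_MHz) + 32.44. The sketch below compares a 700 MHz legacy band with the 28 GHz and 39 GHz bands mentioned in the text; the 1 km distance is an illustrative assumption.

```python
import math

def fspl_db(distance_km: float, frequency_mhz: float) -> float:
    """Free-space path loss between isotropic antennas, in decibels."""
    return 20 * math.log10(distance_km) + 20 * math.log10(frequency_mhz) + 32.44

# Illustrative comparison at a 1 km cell radius (distance assumed for illustration).
for f_mhz in (700, 28_000, 39_000):
    print(f"{f_mhz / 1000:.1f} GHz: {fspl_db(1.0, f_mhz):.1f} dB loss over 1 km")
```

The millimetre-wave bands lose on the order of 30 dB more than the 700 MHz band over the same distance, consistent with the passage's point that millimetre-wave cells span only a few blocks.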
are combined in the proper order into one bitstream. many other types of modulation are also used. in some types, the carrier wave is suppressed, and only one or both modulation sidebands are transmitted. the modulated carrier is amplified in the transmitter and applied to a transmitting antenna which radiates the energy as radio waves. the radio waves carry the information to the receiver location. at the receiver, the radio wave induces a tiny oscillating voltage in the receiving antenna β a weaker replica of the current in the transmitting antenna. this voltage is applied to the radio receiver, which amplifies the weak radio signal so it is stronger, then demodulates it, extracting the original modulation signal from the modulated carrier wave. the modulation signal is converted by a transducer back to a human - usable form : an audio signal is converted to sound waves by a loudspeaker or earphones, a video signal is converted to images by a display, while a digital signal is applied to a computer or microprocessor, which interacts with human users. the radio waves from many transmitters pass through the air simultaneously without interfering with each other because each transmitter ' s radio waves oscillate at a different frequency, measured in hertz ( hz ), kilohertz ( khz ), megahertz ( mhz ) or gigahertz ( ghz ). the receiving antenna typically picks up the radio signals of many transmitters. the receiver uses tuned circuits to select the radio signal desired out of all the signals picked up by the antenna and reject the others. a tuned circuit acts like a resonator, similar to a tuning fork. it has a natural resonant frequency at which it oscillates. the resonant frequency of the receiver ' s tuned circuit is adjusted by the user to the frequency of the desired radio station ; this is called tuning. the oscillating radio signal from the desired station causes the tuned circuit to oscillate in sympathy, and it passes the signal on to the rest of the receiver. radio signals at other frequencies are blocked by the tuned circuit and not passed on. = = = bandwidth = = = a modulated radio wave, carrying an information signal, occupies a range of frequencies. the information in a radio signal is usually concentrated in narrow frequency bands called sidebands ( sb ) just above and below the carrier frequency. the width in hertz of the frequency range that the radio signal occupies, the highest frequency minus the lowest frequency,
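The paragraph above walks through modulation at the transmitter and demodulation at the receiver, and notes that the information sits in sidebands around the carrier. The numpy sketch below illustrates the idea with simple amplitude modulation and an envelope detector; the sample rate, carrier frequency and message frequency are arbitrary choices for the demonstration, not values from the text.

```python
import numpy as np

fs = 200_000                        # sample rate, Hz (assumed for the demo)
t = np.arange(0, 0.02, 1 / fs)      # 20 ms of signal
f_carrier, f_message = 20_000, 1_000

message = np.sin(2 * np.pi * f_message * t)     # information (modulation) signal
carrier = np.cos(2 * np.pi * f_carrier * t)
am = (1 + 0.5 * message) * carrier              # amplitude-modulated carrier

# Envelope detection: rectify, then low-pass with a short moving average.
rectified = np.abs(am)
window = int(fs / f_carrier)                    # about one carrier period
envelope = np.convolve(rectified, np.ones(window) / window, mode="same")
recovered = envelope - envelope.mean()          # remove the DC offset

# AM places sidebands at f_carrier +/- f_message, so the occupied bandwidth
# is about twice the highest message frequency.
print("approx. occupied bandwidth:", 2 * f_message, "Hz")
print("correlation of recovered signal with original:",
      round(float(np.corrcoef(recovered, message)[0, 1]), 3))
```

The recovered waveform tracks the original message closely, which is the demodulation step the passage describes, and the printed bandwidth shows why a signal with a higher data rate (a wider message spectrum) needs more radio bandwidth.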
Question: When a person speaks into a telephone, sound energy is changed mostly into which form of energy?
A) heat
B) light
C) electrical
D) chemical
|
C) electrical
|
Context:
the tests, assays, and procedures needed for providing the specific services. subspecialties include transfusion medicine, cellular pathology, clinical chemistry, hematology, clinical microbiology and clinical immunology. clinical neurophysiology is concerned with testing the physiology or function of the central and peripheral aspects of the nervous system. these kinds of tests can be divided into recordings of : ( 1 ) spontaneous or continuously running electrical activity, or ( 2 ) stimulus evoked responses. subspecialties include electroencephalography, electromyography, evoked potential, nerve conduction study and polysomnography. sometimes these tests are performed by techs without a medical degree, but the interpretation of these tests is done by a medical professional. diagnostic radiology is concerned with imaging of the body, e. g. by x - rays, x - ray computed tomography, ultrasonography, and nuclear magnetic resonance tomography. interventional radiologists can access areas in the body under imaging for an intervention or diagnostic sampling. nuclear medicine is concerned with studying human organ systems by administering radiolabelled substances ( radiopharmaceuticals ) to the body, which can then be imaged outside the body by a gamma camera or a pet scanner. each radiopharmaceutical consists of two parts : a tracer that is specific for the function under study ( e. g., neurotransmitter pathway, metabolic pathway, blood flow, or other ), and a radionuclide ( usually either a gamma - emitter or a positron emitter ). there is a degree of overlap between nuclear medicine and radiology, as evidenced by the emergence of combined devices such as the pet / ct scanner. pathology as a medical specialty is the branch of medicine that deals with the study of diseases and the morphologic, physiologic changes produced by them. as a diagnostic specialty, pathology can be considered the basis of modern scientific medical knowledge and plays a large role in evidence - based medicine. many modern molecular tests such as flow cytometry, polymerase chain reaction ( pcr ), immunohistochemistry, cytogenetics, gene rearrangements studies and fluorescent in situ hybridization ( fish ) fall within the territory of pathology. = = = = other major specialties = = = = the following are some major medical specialties that do not directly fit into any of the above - mentioned groups : anesthesiology ( also
cross - fertilization that takes place among the various fields. psychology differs from biology and neuroscience in that it is primarily concerned with the interaction of mental processes and behaviour, and of the overall processes of a system, and not simply the biological or neural processes themselves, though the subfield of neuropsychology combines the study of the actual neural processes with the study of the mental effects they have subjectively produced. many people associate psychology with clinical psychology, which focuses on assessment and treatment of problems in living and psychopathology. in reality, psychology has myriad specialties including social psychology, developmental psychology, cognitive psychology, educational psychology, industrial - organizational psychology, mathematical psychology, neuropsychology, and quantitative analysis of behaviour. psychology is a very broad science that is rarely tackled as a whole, major block. although some subfields encompass a natural science base and a social science application, others can be clearly distinguished as having little to do with the social sciences or having a lot to do with the social sciences. for example, biological psychology is considered a natural science with a social scientific application ( as is clinical medicine ), social and occupational psychology are, generally speaking, purely social sciences, whereas neuropsychology is a natural science that lacks application out of the scientific tradition entirely. in british universities, emphasis on what tenet of psychology a student has studied and / or concentrated is communicated through the degree conferred : bpsy indicates a balance between natural and social sciences, bsc indicates a strong ( or entire ) scientific concentration, whereas a ba underlines a majority of social science credits. this is not always necessarily the case however, and in many uk institutions students studying the bpsy, bsc, and ba follow the same curriculum as outlined by the british psychological society and have the same options of specialism open to them regardless of whether they choose a balance, a heavy science basis, or heavy social science basis to their degree. if they applied to read the ba. for example, but specialized in heavily science - based modules, then they will still generally be awarded the ba. = = = sociology = = = sociology is the systematic study of society, individuals ' relationship to their societies, the consequences of difference, and other aspects of human social action. the meaning of the word comes from the suffix - logy, which means " study of ", derived from ancient greek, and the stem soci -, which is from the latin word socius, meaning " companion ", or society in general. auguste comte ( 1798 β 1857 ) coined
? if the latter, an important question is how the internal experiences of others can be measured. self - reports of feelings and beliefs may not be reliable because, even in cases in which there is no apparent incentive for subjects to intentionally deceive in their answers, self - deception or selective memory may affect their responses. then even in the case of accurate self - reports, how can responses be compared across individuals? even if two individuals respond with the same answer on a likert scale, they may be experiencing very different things. other issues in philosophy of psychology are philosophical questions about the nature of mind, brain, and cognition, and are perhaps more commonly thought of as part of cognitive science, or philosophy of mind. for example, are humans rational creatures? is there any sense in which they have free will, and how does that relate to the experience of making choices? philosophy of psychology also closely monitors contemporary work conducted in cognitive neuroscience, psycholinguistics, and artificial intelligence, questioning what they can and cannot explain in psychology. philosophy of psychology is a relatively young field, because psychology only became a discipline of its own in the late 1800s. in particular, neurophilosophy has just recently become its own field with the works of paul churchland and patricia churchland. philosophy of mind, by contrast, has been a well - established discipline since before psychology was a field of study at all. it is concerned with questions about the very nature of mind, the qualities of experience, and particular issues like the debate between dualism and monism. = = = philosophy of social science = = = the philosophy of social science is the study of the logic and method of the social sciences, such as sociology and cultural anthropology. philosophers of social science are concerned with the differences and similarities between the social and the natural sciences, causal relationships between social phenomena, the possible existence of social laws, and the ontological significance of structure and agency. the french philosopher, auguste comte ( 1798 β 1857 ), established the epistemological perspective of positivism in the course in positivist philosophy, a series of texts published between 1830 and 1842. the first three volumes of the course dealt chiefly with the natural sciences already in existence ( geoscience, astronomy, physics, chemistry, biology ), whereas the latter two emphasised the inevitable coming of social science : " sociologie ". for comte, the natural sciences had to necessarily arrive first, before humanity could adequately channel its efforts into the most challenging and complex " queen science " of human society
the operating room, the anesthesiology physician also serves the same function in the labor and delivery ward, and some are specialized in critical medicine. emergency medicine is concerned with the diagnosis and treatment of acute or life - threatening conditions, including trauma, surgical, medical, pediatric, and psychiatric emergencies. family medicine, family practice, general practice or primary care is, in many countries, the first port - of - call for patients with non - emergency medical problems. family physicians often provide services across a broad range of settings including office based practices, emergency department coverage, inpatient care, and nursing home care. medical genetics is concerned with the diagnosis and management of hereditary disorders. neurology is concerned with diseases of the nervous system. in the uk, neurology is a subspecialty of general medicine. obstetrics and gynecology ( often abbreviated as ob / gyn ( american english ) or obs & gynae ( british english ) ) are concerned respectively with childbirth and the female reproductive and associated organs. reproductive medicine and fertility medicine are generally practiced by gynecological specialists. pediatrics ( ae ) or paediatrics ( be ) is devoted to the care of infants, children, and adolescents. like internal medicine, there are many pediatric subspecialties for specific age ranges, organ systems, disease classes, and sites of care delivery. pharmaceutical medicine is the medical scientific discipline concerned with the discovery, development, evaluation, registration, monitoring and medical aspects of marketing of medicines for the benefit of patients and public health. physical medicine and rehabilitation ( or physiatry ) is concerned with functional improvement after injury, illness, or congenital disorders. podiatric medicine is the study of, diagnosis, and medical and surgical treatment of disorders of the foot, ankle, lower limb, hip and lower back. preventive medicine is the branch of medicine concerned with preventing disease. community health or public health is an aspect of health services concerned with threats to the overall health of a community based on population health analysis. psychiatry is the branch of medicine concerned with the bio - psycho - social study of the etiology, diagnosis, treatment and prevention of cognitive, perceptual, emotional and behavioral disorders. related fields include psychotherapy and clinical psychology. = = = interdisciplinary fields = = = some interdisciplinary sub - specialties of medicine include : addiction medicine deals with the treatment of addiction. aerospace medicine deals with medical problems related to flying and space travel. biomedical engineering is a field dealing with the application of engineering principles to medical practice
the nervous system. these kinds of tests can be divided into recordings of : ( 1 ) spontaneous or continuously running electrical activity, or ( 2 ) stimulus evoked responses. subspecialties include electroencephalography, electromyography, evoked potential, nerve conduction study and polysomnography. sometimes these tests are performed by techs without a medical degree, but the interpretation of these tests is done by a medical professional. diagnostic radiology is concerned with imaging of the body, e. g. by x - rays, x - ray computed tomography, ultrasonography, and nuclear magnetic resonance tomography. interventional radiologists can access areas in the body under imaging for an intervention or diagnostic sampling. nuclear medicine is concerned with studying human organ systems by administering radiolabelled substances ( radiopharmaceuticals ) to the body, which can then be imaged outside the body by a gamma camera or a pet scanner. each radiopharmaceutical consists of two parts : a tracer that is specific for the function under study ( e. g., neurotransmitter pathway, metabolic pathway, blood flow, or other ), and a radionuclide ( usually either a gamma - emitter or a positron emitter ). there is a degree of overlap between nuclear medicine and radiology, as evidenced by the emergence of combined devices such as the pet / ct scanner. pathology as a medical specialty is the branch of medicine that deals with the study of diseases and the morphologic, physiologic changes produced by them. as a diagnostic specialty, pathology can be considered the basis of modern scientific medical knowledge and plays a large role in evidence - based medicine. many modern molecular tests such as flow cytometry, polymerase chain reaction ( pcr ), immunohistochemistry, cytogenetics, gene rearrangements studies and fluorescent in situ hybridization ( fish ) fall within the territory of pathology. = = = = other major specialties = = = = the following are some major medical specialties that do not directly fit into any of the above - mentioned groups : anesthesiology ( also known as anaesthetics ) : concerned with the perioperative management of the surgical patient. the anesthesiologist ' s role during surgery is to prevent derangement in the vital organs ' ( i. e. brain, heart, kidneys ) functions and postoperative pain. outside of
as subjects perceive the sensory world, different stimuli elicit a number of neural representations. here, a subjective distance between stimuli is defined, measuring the degree of similarity between the underlying representations. as an example, the subjective distance between different locations in space is calculated from the activity of rodent hippocampal place cells, and lateral septal cells. such a distance is compared to the real distance, between locations. as the number of sampled neurons increases, the subjective distance shows a tendency to resemble the metrics of real space.
##ry. immunology is the study of the immune system, which includes the innate and adaptive immune system in humans, for example. lifestyle medicine is the study of the chronic conditions, and how to prevent, treat and reverse them. medical physics is the study of the applications of physics principles in medicine. microbiology is the study of microorganisms, including protozoa, bacteria, fungi, and viruses. molecular biology is the study of molecular underpinnings of the process of replication, transcription and translation of the genetic material. neuroscience includes those disciplines of science that are related to the study of the nervous system. a main focus of neuroscience is the biology and physiology of the human brain and spinal cord. some related clinical specialties include neurology, neurosurgery and psychiatry. nutrition science ( theoretical focus ) and dietetics ( practical focus ) is the study of the relationship of food and drink to health and disease, especially in determining an optimal diet. medical nutrition therapy is done by dietitians and is prescribed for diabetes, cardiovascular diseases, weight and eating disorders, allergies, malnutrition, and neoplastic diseases. pathology as a science is the study of disease β the causes, course, progression and resolution thereof. pharmacology is the study of drugs and their actions. photobiology is the study of the interactions between non - ionizing radiation and living organisms. physiology is the study of the normal functioning of the body and the underlying regulatory mechanisms. radiobiology is the study of the interactions between ionizing radiation and living organisms. toxicology is the study of hazardous effects of drugs and poisons. = = = specialties = = = in the broadest meaning of " medicine ", there are many different specialties. in the uk, most specialities have their own body or college, which has its own entrance examination. these are collectively known as the royal colleges, although not all currently use the term " royal ". the development of a speciality is often driven by new technology ( such as the development of effective anaesthetics ) or ways of working ( such as emergency departments ) ; the new specialty leads to the formation of a unifying body of doctors and the prestige of administering their own examination. within medical circles, specialities usually fit into one of two broad categories : " medicine " and " surgery ". " medicine " refers to the practice of non - operative medicine, and most of its subspecialties require preliminary training in internal medicine. in the uk
decision making during a task, and they provide us with some insight into the ways in which those decisions may be processed. = = = brain imaging = = = brain imaging involves analyzing activity within the brain while performing various tasks. this allows us to link behavior and brain function to help understand how information is processed. different types of imaging techniques vary in their temporal ( time - based ) and spatial ( location - based ) resolution. brain imaging is often used in cognitive neuroscience. single - photon emission computed tomography and positron emission tomography. spect and pet use radioactive isotopes, which are injected into the subject ' s bloodstream and taken up by the brain. by observing which areas of the brain take up the radioactive isotope, we can see which areas of the brain are more active than other areas. pet has similar spatial resolution to fmri, but it has extremely poor temporal resolution. electroencephalography. eeg measures the electrical fields generated by large populations of neurons in the cortex by placing a series of electrodes on the scalp of the subject. this technique has an extremely high temporal resolution, but a relatively poor spatial resolution. functional magnetic resonance imaging. fmri measures the relative amount of oxygenated blood flowing to different parts of the brain. more oxygenated blood in a particular region is assumed to correlate with an increase in neural activity in that part of the brain. this allows us to localize particular functions within different brain regions. fmri has moderate spatial and temporal resolution. optical imaging. this technique uses infrared transmitters and receivers to measure the amount of light reflectance by blood near different areas of the brain. since oxygenated and deoxygenated blood reflects light by different amounts, we can study which areas are more active ( i. e., those that have more oxygenated blood ). optical imaging has moderate temporal resolution, but poor spatial resolution. it also has the advantage that it is extremely safe and can be used to study infants ' brains. magnetoencephalography. meg measures magnetic fields resulting from cortical activity. it is similar to eeg, except that it has improved spatial resolution since the magnetic fields it measures are not as blurred or attenuated by the scalp, meninges and so forth as the electrical activity measured in eeg is. meg uses squid sensors to detect tiny magnetic fields. = = = computational modeling = = = computational models require a mathematically and logically formal representation of a problem. computer models are used in the simulation and experimental verification of different
functions of the human body, if necessary, through the use of technology. modern medicine can replace several of the body ' s functions through the use of artificial organs and can significantly alter the function of the human body through artificial devices such as, for example, brain implants and pacemakers. the fields of bionics and medical bionics are dedicated to the study of synthetic implants pertaining to natural systems. conversely, some engineering disciplines view the human body as a biological machine worth studying and are dedicated to emulating many of its functions by replacing biology with technology. this has led to fields such as artificial intelligence, neural networks, fuzzy logic, and robotics. there are also substantial interdisciplinary interactions between engineering and medicine. both fields provide solutions to real world problems. this often requires moving forward before phenomena are completely understood in a more rigorous scientific sense and therefore experimentation and empirical knowledge is an integral part of both. medicine, in part, studies the function of the human body. the human body, as a biological machine, has many functions that can be modeled using engineering methods. the heart for example functions much like a pump, the skeleton is like a linked structure with levers, the brain produces electrical signals etc. these similarities as well as the increasing importance and application of engineering principles in medicine, led to the development of the field of biomedical engineering that uses concepts developed in both disciplines. newly emerging branches of science, such as systems biology, are adapting analytical tools traditionally used for engineering, such as systems modeling and computational analysis, to the description of biological systems. = = = art = = = there are connections between engineering and art, for example, architecture, landscape architecture and industrial design ( even to the extent that these disciplines may sometimes be included in a university ' s faculty of engineering ). the art institute of chicago, for instance, held an exhibition about the art of nasa ' s aerospace design. robert maillart ' s bridge design is perceived by some to have been deliberately artistic. at the university of south florida, an engineering professor, through a grant with the national science foundation, has developed a course that connects art and engineering. among famous historical figures, leonardo da vinci is a well - known renaissance artist and engineer, and a prime example of the nexus between art and engineering. = = = business = = = business engineering deals with the relationship between professional engineering, it systems, business administration and change management. engineering management or " management engineering " is a specialized field of management concerned with engineering practice or the engineering industry sector. the demand for management
assuming that the e ( 38 ) boson candidate recently observed at the jinr nuclotron is produced in a bremsstrahlung - like manner and decays only to two photons, its coupling constant to light quarks is estimated to be $\sim 10^{-4}$.
Question: In the human body, what part of the central nervous system connects with other nerves outside of the central nervous system?
A) dendrite
B) cerebrum
C) cerebellum
D) spinal cord
|
D) spinal cord
|
Context:
substrate - level phosphorylation, which does not require oxygen. = = = photosynthesis = = = photosynthesis is a process used by plants and other organisms to convert light energy into chemical energy that can later be released to fuel the organism ' s metabolic activities via cellular respiration. this chemical energy is stored in carbohydrate molecules, such as sugars, which are synthesized from carbon dioxide and water. in most cases, oxygen is released as a waste product. most plants, algae, and cyanobacteria perform photosynthesis, which is largely responsible for producing and maintaining the oxygen content of the earth ' s atmosphere, and supplies most of the energy necessary for life on earth. photosynthesis has four stages : light absorption, electron transport, atp synthesis, and carbon fixation. light absorption is the initial step of photosynthesis whereby light energy is absorbed by chlorophyll pigments attached to proteins in the thylakoid membranes. the absorbed light energy is used to remove electrons from a donor ( water ) to a primary electron acceptor, a quinone designated as q. in the second stage, electrons move from the quinone primary electron acceptor through a series of electron carriers until they reach a final electron acceptor, which is usually the oxidized form of nadp +, which is reduced to nadph, a process that takes place in a protein complex called photosystem i ( psi ). the transport of electrons is coupled to the movement of protons ( or hydrogen ) from the stroma to the thylakoid membrane, which forms a ph gradient across the membrane as hydrogen becomes more concentrated in the lumen than in the stroma. this is analogous to the proton - motive force generated across the inner mitochondrial membrane in aerobic respiration. during the third stage of photosynthesis, the movement of protons down their concentration gradients from the thylakoid lumen to the stroma through the atp synthase is coupled to the synthesis of atp by that same atp synthase. the nadph and atps generated by the light - dependent reactions in the second and third stages, respectively, provide the energy and electrons to drive the synthesis of glucose by fixing atmospheric carbon dioxide into existing organic carbon compounds, such as ribulose bisphosphate ( rubp ) in a sequence of light - independent ( or dark ) reactions called the calvin cycle. = = = cell signaling = = = cell signaling ( or communication ) is the
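The photosynthesis passage above says the ATP and NADPH produced in the light-dependent stages drive carbon fixation in the Calvin cycle. The figures in the sketch below (3 ATP and 2 NADPH consumed per CO2 fixed, hence 18 ATP and 12 NADPH per glucose) are standard textbook stoichiometry rather than numbers stated in the passage.

```python
# Textbook Calvin-cycle stoichiometry (assumed values, not from the passage):
ATP_PER_CO2 = 3
NADPH_PER_CO2 = 2
CO2_PER_GLUCOSE = 6   # C6H12O6 contains six carbon atoms

atp_needed = ATP_PER_CO2 * CO2_PER_GLUCOSE
nadph_needed = NADPH_PER_CO2 * CO2_PER_GLUCOSE
print(f"Per glucose: {atp_needed} ATP and {nadph_needed} NADPH from the light reactions")
```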
the basis of all plant metabolism. the energy of sunlight, captured by oxygenic photosynthesis and released by cellular respiration, is the basis of almost all life. photoautotrophs, including all green plants, algae and cyanobacteria gather energy directly from sunlight by photosynthesis. heterotrophs including all animals, all fungi, all completely parasitic plants, and non - photosynthetic bacteria take in organic molecules produced by photoautotrophs and respire them or use them in the construction of cells and tissues. respiration is the oxidation of carbon compounds by breaking them down into simpler structures to release the energy they contain, essentially the opposite of photosynthesis. molecules are moved within plants by transport processes that operate at a variety of spatial scales. subcellular transport of ions, electrons and molecules such as water and enzymes occurs across cell membranes. minerals and water are transported from roots to other parts of the plant in the transpiration stream. diffusion, osmosis, and active transport and mass flow are all different ways transport can occur. examples of elements that plants need to transport are nitrogen, phosphorus, potassium, calcium, magnesium, and sulfur. in vascular plants, these elements are extracted from the soil as soluble ions by the roots and transported throughout the plant in the xylem. most of the elements required for plant nutrition come from the chemical breakdown of soil minerals. sucrose produced by photosynthesis is transported from the leaves to other parts of the plant in the phloem and plant hormones are transported by a variety of processes. = = = plant hormones = = = plants are not passive, but respond to external signals such as light, touch, and injury by moving or growing towards or away from the stimulus, as appropriate. tangible evidence of touch sensitivity is the almost instantaneous collapse of leaflets of mimosa pudica, the insect traps of venus flytrap and bladderworts, and the pollinia of orchids. the hypothesis that plant growth and development is coordinated by plant hormones or plant growth regulators first emerged in the late 19th century. darwin experimented on the movements of plant shoots and roots towards light and gravity, and concluded " it is hardly an exaggeration to say that the tip of the radicle.. acts like the brain of one of the lower animals.. directing the several movements ". about the same time, the role of auxins ( from the greek auxein, to grow ) in control of plant growth was first outlined by the dutch scientist
of these organisms. the energy in the red and blue light that these pigments absorb is used by chloroplasts to make energy - rich carbon compounds from carbon dioxide and water by oxygenic photosynthesis, a process that generates molecular oxygen ( o2 ) as a by - product. the light energy captured by chlorophyll a is initially in the form of electrons ( and later a proton gradient ) that is used to make molecules of atp and nadph which temporarily store and transport energy. their energy is used in the light - independent reactions of the calvin cycle by the enzyme rubisco to produce molecules of the 3 - carbon sugar glyceraldehyde 3 - phosphate ( g3p ). glyceraldehyde 3 - phosphate is the first product of photosynthesis and the raw material from which glucose and almost all other organic molecules of biological origin are synthesised. some of the glucose is converted to starch which is stored in the chloroplast. starch is the characteristic energy store of most land plants and algae, while inulin, a polymer of fructose is used for the same purpose in the sunflower family asteraceae. some of the glucose is converted to sucrose ( common table sugar ) for export to the rest of the plant. unlike in animals ( which lack chloroplasts ), plants and their eukaryote relatives have delegated many biochemical roles to their chloroplasts, including synthesising all their fatty acids, and most amino acids. the fatty acids that chloroplasts make are used for many things, such as providing material to build cell membranes out of and making the polymer cutin which is found in the plant cuticle that protects land plants from drying out. plants synthesise a number of unique polymers like the polysaccharide molecules cellulose, pectin and xyloglucan from which the land plant cell wall is constructed. vascular land plants make lignin, a polymer used to strengthen the secondary cell walls of xylem tracheids and vessels to keep them from collapsing when a plant sucks water through them under water stress. lignin is also used in other cell types like sclerenchyma fibres that provide structural support for a plant and is a major constituent of wood. sporopollenin is a chemically resistant polymer found in the outer cell walls of spores and pollen of land plants responsible for the survival of early land plant spores and
liver glycogen. during recovery, when oxygen becomes available, nad + attaches to hydrogen from lactate to form atp. in yeast, the waste products are ethanol and carbon dioxide. this type of fermentation is known as alcoholic or ethanol fermentation. the atp generated in this process is made by substrate - level phosphorylation, which does not require oxygen. = = = photosynthesis = = = photosynthesis is a process used by plants and other organisms to convert light energy into chemical energy that can later be released to fuel the organism ' s metabolic activities via cellular respiration. this chemical energy is stored in carbohydrate molecules, such as sugars, which are synthesized from carbon dioxide and water. in most cases, oxygen is released as a waste product. most plants, algae, and cyanobacteria perform photosynthesis, which is largely responsible for producing and maintaining the oxygen content of the earth ' s atmosphere, and supplies most of the energy necessary for life on earth. photosynthesis has four stages : light absorption, electron transport, atp synthesis, and carbon fixation. light absorption is the initial step of photosynthesis whereby light energy is absorbed by chlorophyll pigments attached to proteins in the thylakoid membranes. the absorbed light energy is used to remove electrons from a donor ( water ) to a primary electron acceptor, a quinone designated as q. in the second stage, electrons move from the quinone primary electron acceptor through a series of electron carriers until they reach a final electron acceptor, which is usually the oxidized form of nadp +, which is reduced to nadph, a process that takes place in a protein complex called photosystem i ( psi ). the transport of electrons is coupled to the movement of protons ( or hydrogen ) from the stroma to the thylakoid membrane, which forms a ph gradient across the membrane as hydrogen becomes more concentrated in the lumen than in the stroma. this is analogous to the proton - motive force generated across the inner mitochondrial membrane in aerobic respiration. during the third stage of photosynthesis, the movement of protons down their concentration gradients from the thylakoid lumen to the stroma through the atp synthase is coupled to the synthesis of atp by that same atp synthase. the nadph and atps generated by the light - dependent reactions in the second and third stages, respectively, provide the energy and
energy they need to exist. plants, algae and cyanobacteria are the major groups of organisms that carry out photosynthesis, a process that uses the energy of sunlight to convert water and carbon dioxide into sugars that can be used both as a source of chemical energy and of organic molecules that are used in the structural components of cells. as a by - product of photosynthesis, plants release oxygen into the atmosphere, a gas that is required by nearly all living things to carry out cellular respiration. in addition, they are influential in the global carbon and water cycles and plant roots bind and stabilise soils, preventing soil erosion. plants are crucial to the future of human society as they provide food, oxygen, biochemicals, and products for people, as well as creating and preserving soil. historically, all living things were classified as either animals or plants and botany covered the study of all organisms not considered animals. botanists examine both the internal functions and processes within plant organelles, cells, tissues, whole plants, plant populations and plant communities. at each of these levels, a botanist may be concerned with the classification ( taxonomy ), phylogeny and evolution, structure ( anatomy and morphology ), or function ( physiology ) of plant life. the strictest definition of " plant " includes only the " land plants " or embryophytes, which include seed plants ( gymnosperms, including the pines, and flowering plants ) and the free - sporing cryptogams including ferns, clubmosses, liverworts, hornworts and mosses. embryophytes are multicellular eukaryotes descended from an ancestor that obtained its energy from sunlight by photosynthesis. they have life cycles with alternating haploid and diploid phases. the sexual haploid phase of embryophytes, known as the gametophyte, nurtures the developing diploid embryo sporophyte within its tissues for at least part of its life, even in the seed plants, where the gametophyte itself is nurtured by its parent sporophyte. other groups of organisms that were previously studied by botanists include bacteria ( now studied in bacteriology ), fungi ( mycology ) β including lichen - forming fungi ( lichenology ), non - chlorophyte algae ( phycology ), and viruses ( virology ). however, attention is still given to these groups by botanists, and fungi ( including lichens ) and photos
the earth ' s atmosphere, and supplies most of the energy necessary for life on earth. photosynthesis has four stages : light absorption, electron transport, atp synthesis, and carbon fixation. light absorption is the initial step of photosynthesis whereby light energy is absorbed by chlorophyll pigments attached to proteins in the thylakoid membranes. the absorbed light energy is used to remove electrons from a donor ( water ) to a primary electron acceptor, a quinone designated as q. in the second stage, electrons move from the quinone primary electron acceptor through a series of electron carriers until they reach a final electron acceptor, which is usually the oxidized form of nadp +, which is reduced to nadph, a process that takes place in a protein complex called photosystem i ( psi ). the transport of electrons is coupled to the movement of protons ( or hydrogen ) from the stroma to the thylakoid membrane, which forms a ph gradient across the membrane as hydrogen becomes more concentrated in the lumen than in the stroma. this is analogous to the proton - motive force generated across the inner mitochondrial membrane in aerobic respiration. during the third stage of photosynthesis, the movement of protons down their concentration gradients from the thylakoid lumen to the stroma through the atp synthase is coupled to the synthesis of atp by that same atp synthase. the nadph and atps generated by the light - dependent reactions in the second and third stages, respectively, provide the energy and electrons to drive the synthesis of glucose by fixing atmospheric carbon dioxide into existing organic carbon compounds, such as ribulose bisphosphate ( rubp ) in a sequence of light - independent ( or dark ) reactions called the calvin cycle. = = = cell signaling = = = cell signaling ( or communication ) is the ability of cells to receive, process, and transmit signals with its environment and with itself. signals can be non - chemical such as light, electrical impulses, and heat, or chemical signals ( or ligands ) that interact with receptors, which can be found embedded in the cell membrane of another cell or located deep inside a cell. there are generally four types of chemical signals : autocrine, paracrine, juxtacrine, and hormones. in autocrine signaling, the ligand affects the same cell that releases it. tumor cells, for example, can reproduce uncontrollably because they release signals that initiate their
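The four-stage description above can be condensed into standard textbook stoichiometry. The two equations below are an editorial summary added for clarity, not text quoted from the passage: water is the electron donor that is split (at photosystem II, which the passage does not name), and the combined light-dependent and light-independent reactions fix carbon dioxide into glucose.

\[ 2\,\mathrm{H_2O} \;\rightarrow\; \mathrm{O_2} + 4\,\mathrm{H^+} + 4\,e^- \]
\[ 6\,\mathrm{CO_2} + 6\,\mathrm{H_2O} \;\xrightarrow{\text{light}}\; \mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2} \]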
pigment chlorophyll a. chlorophyll a ( as well as its plant and green algal - specific cousin chlorophyll b ) absorbs light in the blue - violet and orange / red parts of the spectrum while reflecting and transmitting the green light that we see as the characteristic colour of these organisms. the energy in the red and blue light that these pigments absorb is used by chloroplasts to make energy - rich carbon compounds from carbon dioxide and water by oxygenic photosynthesis, a process that generates molecular oxygen ( o2 ) as a by - product. the light energy captured by chlorophyll a is initially in the form of electrons ( and later a proton gradient ) that is used to make molecules of atp and nadph which temporarily store and transport energy. their energy is used in the light - independent reactions of the calvin cycle by the enzyme rubisco to produce molecules of the 3 - carbon sugar glyceraldehyde 3 - phosphate ( g3p ). glyceraldehyde 3 - phosphate is the first product of photosynthesis and the raw material from which glucose and almost all other organic molecules of biological origin are synthesised. some of the glucose is converted to starch which is stored in the chloroplast. starch is the characteristic energy store of most land plants and algae, while inulin, a polymer of fructose is used for the same purpose in the sunflower family asteraceae. some of the glucose is converted to sucrose ( common table sugar ) for export to the rest of the plant. unlike in animals ( which lack chloroplasts ), plants and their eukaryote relatives have delegated many biochemical roles to their chloroplasts, including synthesising all their fatty acids, and most amino acids. the fatty acids that chloroplasts make are used for many things, such as providing material to build cell membranes out of and making the polymer cutin which is found in the plant cuticle that protects land plants from drying out. plants synthesise a number of unique polymers like the polysaccharide molecules cellulose, pectin and xyloglucan from which the land plant cell wall is constructed. vascular land plants make lignin, a polymer used to strengthen the secondary cell walls of xylem tracheids and vessels to keep them from collapsing when a plant sucks water through them under water stress. lignin
by chlorophyll a is initially in the form of electrons ( and later a proton gradient ) that is used to make molecules of atp and nadph which temporarily store and transport energy. their energy is used in the light - independent reactions of the calvin cycle by the enzyme rubisco to produce molecules of the 3 - carbon sugar glyceraldehyde 3 - phosphate ( g3p ). glyceraldehyde 3 - phosphate is the first product of photosynthesis and the raw material from which glucose and almost all other organic molecules of biological origin are synthesised. some of the glucose is converted to starch which is stored in the chloroplast. starch is the characteristic energy store of most land plants and algae, while inulin, a polymer of fructose is used for the same purpose in the sunflower family asteraceae. some of the glucose is converted to sucrose ( common table sugar ) for export to the rest of the plant. unlike in animals ( which lack chloroplasts ), plants and their eukaryote relatives have delegated many biochemical roles to their chloroplasts, including synthesising all their fatty acids, and most amino acids. the fatty acids that chloroplasts make are used for many things, such as providing material to build cell membranes out of and making the polymer cutin which is found in the plant cuticle that protects land plants from drying out. plants synthesise a number of unique polymers like the polysaccharide molecules cellulose, pectin and xyloglucan from which the land plant cell wall is constructed. vascular land plants make lignin, a polymer used to strengthen the secondary cell walls of xylem tracheids and vessels to keep them from collapsing when a plant sucks water through them under water stress. lignin is also used in other cell types like sclerenchyma fibres that provide structural support for a plant and is a major constituent of wood. sporopollenin is a chemically resistant polymer found in the outer cell walls of spores and pollen of land plants responsible for the survival of early land plant spores and the pollen of seed plants in the fossil record. it is widely regarded as a marker for the start of land plant evolution during the ordovician period. the concentration of carbon dioxide in the atmosphere today is much lower than it was when plants emerged onto land during the ordovician and silurian periods.
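As a rough consistency check on the Calvin cycle description above, the commonly cited per-G3P stoichiometry (a textbook figure, not stated in the passage, and omitting water and protons) is:

\[ 3\,\mathrm{CO_2} + 9\,\mathrm{ATP} + 6\,\mathrm{NADPH} \;\rightarrow\; \mathrm{G3P} + 9\,\mathrm{ADP} + 8\,\mathrm{P_i} + 6\,\mathrm{NADP^+} \]

Since one glucose molecule is built from two G3P molecules, fixing its six carbons costs on the order of 18 ATP and 12 NADPH supplied by the light-dependent reactions.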
river valley during ancient times. the papyrus was harvested by field workers and brought to processing centers where it was cut into thin strips. the strips were then laid - out side by side and covered in plant resin. the second layer of strips was laid on perpendicularly, then both pressed together until the sheet was dry. the sheets were then joined to form a roll and later used for writing. egyptian society made several significant advances during dynastic periods in many areas of technology. according to hossam elanzeery, they were the first civilization to use timekeeping devices such as sundials, shadow clocks, and obelisks and successfully leveraged their knowledge of astronomy to create a calendar model that society still uses today. they developed shipbuilding technology that saw them progress from papyrus reed vessels to cedar wood ships while also pioneering the use of rope trusses and stem - mounted rudders. the egyptians also used their knowledge of anatomy to lay the foundation for many modern medical techniques and practiced the earliest known version of neuroscience. elanzeery also states that they used and furthered mathematical science, as evidenced in the building of the pyramids. ancient egyptians also invented and pioneered many food technologies that have become the basis of modern food technology processes. based on paintings and reliefs found in tombs, as well as archaeological artifacts, scholars like paul t nicholson believe that the ancient egyptians established systematic farming practices, engaged in cereal processing, brewed beer and baked bread, processed meat, practiced viticulture and created the basis for modern wine production, and created condiments to complement, preserve and mask the flavors of their food. = = = = indus valley = = = = the indus valley civilization, situated in a resource - rich area ( in modern pakistan and northwestern india ), is notable for its early application of city planning, sanitation technologies, and plumbing. indus valley construction and architecture, called ' vaastu shastra ', suggests a thorough understanding of materials engineering, hydrology, and sanitation. = = = = china = = = = the chinese made many first - known discoveries and developments. major technological contributions from china include the earliest known form of the binary code and epigenetic sequencing, early seismological detectors, matches, paper, helicopter rotor, raised - relief map, the double - action piston pump, cast iron, water powered blast furnace bellows, the iron plough, the multi - tube seed drill, the wheelbarrow, the parachute, the compass, the rudder, the crossbow, the south pointing chariot and gunpowder
Question: Photovoltaic cells capture photons of sunlight and transform them directly into electricity. Many of Earth's other energy resources are simply transformed solar energy. Which two energy resources store energy that did not begin as solar energy?
A) oil and coal
B) wind and wood
C) nuclear and geothermal
D) hydropower and natural gas
|
C) nuclear and geothermal
|
Context:
of these cellular components. the different stages of mitosis all together define the mitotic phase of an animal cell cycle β the division of the mother cell into two genetically identical daughter cells. the cell cycle is a vital process by which a single - celled fertilized egg develops into a mature organism, as well as the process by which hair, skin, blood cells, and some internal organs are renewed. after cell division, each of the daughter cells begin the interphase of a new cycle. in contrast to mitosis, meiosis results in four haploid daughter cells by undergoing one round of dna replication followed by two divisions. homologous chromosomes are separated in the first division ( meiosis i ), and sister chromatids are separated in the second division ( meiosis ii ). both of these cell division cycles are used in the process of sexual reproduction at some point in their life cycle. both are believed to be present in the last eukaryotic common ancestor. prokaryotes ( i. e., archaea and bacteria ) can also undergo cell division ( or binary fission ). unlike the processes of mitosis and meiosis in eukaryotes, binary fission in prokaryotes takes place without the formation of a spindle apparatus on the cell. before binary fission, dna in the bacterium is tightly coiled. after it has uncoiled and duplicated, it is pulled to the separate poles of the bacterium as it increases the size to prepare for splitting. growth of a new cell wall begins to separate the bacterium ( triggered by ftsz polymerization and " z - ring " formation ). the new cell wall ( septum ) fully develops, resulting in the complete split of the bacterium. the new daughter cells have tightly coiled dna rods, ribosomes, and plasmids. = = = sexual reproduction and meiosis = = = meiosis is a central feature of sexual reproduction in eukaryotes, and the most fundamental function of meiosis appears to be conservation of the integrity of the genome that is passed on to progeny by parents. two aspects of sexual reproduction, meiotic recombination and outcrossing, are likely maintained respectively by the adaptive advantages of recombinational repair of genomic dna damage and genetic complementation which masks the expression of deleterious recessive mutations. the beneficial effect of genetic complementation, derived from outcrossing ( cross - fertilization ) is also referred to as hybrid vigor or heterosis. charles
( division of the nucleus ) is preceded by the s stage of interphase ( during which the dna is replicated ) and is often followed by telophase and cytokinesis ; which divides the cytoplasm, organelles and cell membrane of one cell into two new cells containing roughly equal shares of these cellular components. the different stages of mitosis all together define the mitotic phase of an animal cell cycle β the division of the mother cell into two genetically identical daughter cells. the cell cycle is a vital process by which a single - celled fertilized egg develops into a mature organism, as well as the process by which hair, skin, blood cells, and some internal organs are renewed. after cell division, each of the daughter cells begin the interphase of a new cycle. in contrast to mitosis, meiosis results in four haploid daughter cells by undergoing one round of dna replication followed by two divisions. homologous chromosomes are separated in the first division ( meiosis i ), and sister chromatids are separated in the second division ( meiosis ii ). both of these cell division cycles are used in the process of sexual reproduction at some point in their life cycle. both are believed to be present in the last eukaryotic common ancestor. prokaryotes ( i. e., archaea and bacteria ) can also undergo cell division ( or binary fission ). unlike the processes of mitosis and meiosis in eukaryotes, binary fission in prokaryotes takes place without the formation of a spindle apparatus on the cell. before binary fission, dna in the bacterium is tightly coiled. after it has uncoiled and duplicated, it is pulled to the separate poles of the bacterium as it increases the size to prepare for splitting. growth of a new cell wall begins to separate the bacterium ( triggered by ftsz polymerization and " z - ring " formation ). the new cell wall ( septum ) fully develops, resulting in the complete split of the bacterium. the new daughter cells have tightly coiled dna rods, ribosomes, and plasmids. = = = sexual reproduction and meiosis = = = meiosis is a central feature of sexual reproduction in eukaryotes, and the most fundamental function of meiosis appears to be conservation of the integrity of the genome that is passed on to progeny by parents. two aspects of sexual reproduction, meiotic recombination and outcrossing, are likely maintained respectively by
. the phase of matter is defined by the phase transition, which is when energy put into or taken out of the system goes into rearranging the structure of the system, instead of changing the bulk conditions. sometimes the distinction between phases can be continuous instead of having a discrete boundary ; in this case the matter is considered to be in a supercritical state. when three states meet based on the conditions, it is known as a triple point and since this is invariant, it is a convenient way to define a set of conditions. the most familiar examples of phases are solids, liquids, and gases. many substances exhibit multiple solid phases. for example, there are three phases of solid iron ( alpha, gamma, and delta ) that vary based on temperature and pressure. a principal difference between solid phases is the crystal structure, or arrangement, of the atoms. another phase commonly encountered in the study of chemistry is the aqueous phase, which is the state of substances dissolved in aqueous solution ( that is, in water ). less familiar phases include plasmas, bose β einstein condensates and fermionic condensates and the paramagnetic and ferromagnetic phases of magnetic materials. while most familiar phases deal with three - dimensional systems, it is also possible to define analogs in two - dimensional systems, which has received attention for its relevance to systems in biology. = = = bonding = = = atoms sticking together in molecules or crystals are said to be bonded with one another. a chemical bond may be visualized as the multipole balance between the positive charges in the nuclei and the negative charges oscillating about them. more than simple attraction and repulsion, the energies and distributions characterize the availability of an electron to bond to another atom. the chemical bond can be a covalent bond, an ionic bond, a hydrogen bond or just because of van der waals force. each of these kinds of bonds is ascribed to some potential. these potentials create the interactions which hold atoms together in molecules or crystals. in many simple compounds, valence bond theory, the valence shell electron pair repulsion model ( vsepr ), and the concept of oxidation number can be used to explain molecular structure and composition. an ionic bond is formed when a metal loses one or more of its electrons, becoming a positively charged cation, and the electrons are then gained by the non - metal atom, becoming a negatively charged anion. the two oppositely charged ions attract one another, and the ionic bond
and measuring radiation levels. the surveyor program conducted uncrewed lunar landings and takeoffs, as well as taking surface and regolith observations. despite the setback caused by the apollo 1 fire, which killed three astronauts, the program proceeded. apollo 8 was the first crewed spacecraft to leave low earth orbit and the first human spaceflight to reach the moon. the crew orbited the moon ten times on december 24 and 25, 1968, and then traveled safely back to earth. the three apollo 8 astronauts β frank borman, james lovell, and william anders β were the first humans to see the earth as a globe in space, the first to witness an earthrise, and the first to see and manually photograph the far side of the moon. the first lunar landing was conducted by apollo 11. commanded by neil armstrong with astronauts buzz aldrin and michael collins, apollo 11 was one of the most significant missions in nasa ' s history, marking the end of the space race when the soviet union gave up its lunar ambitions. as the first human to step on the surface of the moon, neil armstrong uttered the now famous words : that ' s one small step for man, one giant leap for mankind. nasa would conduct six total lunar landings as part of the apollo program, with apollo 17 concluding the program in 1972. = = = = end of apollo = = = = wernher von braun had advocated for nasa to develop a space station since the agency was created. in 1973, following the end of the apollo lunar missions, nasa launched its first space station, skylab, on the final launch of the saturn v. skylab reused a significant amount of apollo and saturn hardware, with a repurposed saturn v third stage serving as the primary module for the space station. damage to skylab during its launch required spacewalks to be performed by the first crew to make it habitable and operational. skylab hosted nine missions and was decommissioned in 1974 and deorbited in 1979, two years prior to the first launch of the space shuttle and any possibility of boosting its orbit. in 1975, the apollo β soyuz mission was the first ever international spaceflight and a major diplomatic accomplishment between the cold war rivals, which also marked the last flight of the apollo capsule. flown in 1975, a us apollo spacecraft docked with a soviet soyuz capsule. = = = interplanetary exploration and space science = = = during the 1960s, nasa started its space science and interplanetary probe program. the mariner program was its flagship
classifications ; however, some more exotic phases are incompatible with certain chemical properties. a phase is a set of states of a chemical system that have similar bulk structural properties, over a range of conditions, such as pressure or temperature. physical properties, such as density and refractive index tend to fall within values characteristic of the phase. the phase of matter is defined by the phase transition, which is when energy put into or taken out of the system goes into rearranging the structure of the system, instead of changing the bulk conditions. sometimes the distinction between phases can be continuous instead of having a discrete boundary ; in this case the matter is considered to be in a supercritical state. when three states meet based on the conditions, it is known as a triple point and since this is invariant, it is a convenient way to define a set of conditions. the most familiar examples of phases are solids, liquids, and gases. many substances exhibit multiple solid phases. for example, there are three phases of solid iron ( alpha, gamma, and delta ) that vary based on temperature and pressure. a principal difference between solid phases is the crystal structure, or arrangement, of the atoms. another phase commonly encountered in the study of chemistry is the aqueous phase, which is the state of substances dissolved in aqueous solution ( that is, in water ). less familiar phases include plasmas, bose β einstein condensates and fermionic condensates and the paramagnetic and ferromagnetic phases of magnetic materials. while most familiar phases deal with three - dimensional systems, it is also possible to define analogs in two - dimensional systems, which has received attention for its relevance to systems in biology. = = = bonding = = = atoms sticking together in molecules or crystals are said to be bonded with one another. a chemical bond may be visualized as the multipole balance between the positive charges in the nuclei and the negative charges oscillating about them. more than simple attraction and repulsion, the energies and distributions characterize the availability of an electron to bond to another atom. the chemical bond can be a covalent bond, an ionic bond, a hydrogen bond or just because of van der waals force. each of these kinds of bonds is ascribed to some potential. these potentials create the interactions which hold atoms together in molecules or crystals. in many simple compounds, valence bond theory, the valence shell electron pair repulsion model ( vsepr ), and the concept of oxidation number can be used
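The ionic-bond description above can be made concrete with the textbook sodium chloride example (added for illustration; the passage itself names no specific compound): the metal atom transfers an electron to the non-metal, and the resulting oppositely charged ions attract electrostatically.

\[ \mathrm{Na} \rightarrow \mathrm{Na^+} + e^-, \qquad \mathrm{Cl} + e^- \rightarrow \mathrm{Cl^-}, \qquad \mathrm{Na^+} + \mathrm{Cl^-} \rightarrow \mathrm{NaCl} \]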
the origin of the martian moons deimos and phobos is controversial. one hypothesis for their origin is that they are captured asteroids, but the mechanism requires an extremely dense martian atmosphere, and the mechanism by which an asteroid in solar orbit could shed sufficient orbital energy to be captured into mars orbit has not been well elucidated. since the discovery by the space probe galileo that the asteroid ida has a moon " dactyl ", a significant number of asteroids have been discovered to have smaller asteroids in orbit about them. the existence of asteroid moons provides a mechanism for the capture of the martian moons ( and the small moons of the outer planets ). when a binary asteroid makes a close approach to a planet, tidal forces can strip the moon from the asteroid. depending on the phasing, the asteroid can then be captured. clearly, the same process can be used to explain the origin of any of the small moons in the solar system.
the gas giant planets in the solar system have a retinue of icy moons, and we expect giant exoplanets to have similar satellite systems. if a jupiter - like planet were to migrate toward its parent star the icy moons orbiting it would evaporate, creating atmospheres and possible habitable surface oceans. here, we examine how long the surface ice and possible oceans would last before being hydrodynamically lost to space. the hydrodynamic loss rate from the moons is determined, in large part, by the stellar flux available for absorption, which increases as the giant planet and icy moons migrate closer to the star. at some planet - star distance the stellar flux incident on the icy moons becomes so great that they enter a runaway greenhouse state. this runaway greenhouse state rapidly transfers all available surface water to the atmosphere as vapor, where it is easily lost from the small moons. however, for icy moons of ganymede ' s size around a sun - like star we found that surface water ( either ice or liquid ) can persist indefinitely outside the runaway greenhouse orbital distance. in contrast, the surface water on smaller moons of europa ' s size will only persist on timescales greater than 1 gyr at distances ranging 1. 49 to 0. 74 au around a sun - like star for bond albedos of 0. 2 and 0. 8, where the lower albedo becomes relevant if ice melts. consequently, small moons can lose their icy shells, which would create a torus of h atoms around their host planet that might be detectable in future observations.
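The orbital distances quoted in the abstract above follow from a simple radiative-balance argument; the equilibrium-temperature formula below is the standard zeroth-order estimate for a rapidly rotating body and is an editorial addition, not an equation taken from the abstract. Here L_* is the stellar luminosity, A the bond albedo, sigma the Stefan-Boltzmann constant, and d the planet-star (and hence moon-star) distance; the runaway greenhouse sets in roughly where T_eq crosses the threshold for rapid water loss, which is why the critical distance shifts with albedo.

\[ T_{\mathrm{eq}} = \left( \frac{L_{\star}\,(1 - A)}{16\,\pi\,\sigma\,d^{2}} \right)^{1/4} \]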
protist cells ), there are two distinct types of cell division : mitosis and meiosis. mitosis is part of the cell cycle, in which replicated chromosomes are separated into two new nuclei. cell division gives rise to genetically identical cells in which the total number of chromosomes is maintained. in general, mitosis ( division of the nucleus ) is preceded by the s stage of interphase ( during which the dna is replicated ) and is often followed by telophase and cytokinesis ; which divides the cytoplasm, organelles and cell membrane of one cell into two new cells containing roughly equal shares of these cellular components. the different stages of mitosis all together define the mitotic phase of an animal cell cycle β the division of the mother cell into two genetically identical daughter cells. the cell cycle is a vital process by which a single - celled fertilized egg develops into a mature organism, as well as the process by which hair, skin, blood cells, and some internal organs are renewed. after cell division, each of the daughter cells begin the interphase of a new cycle. in contrast to mitosis, meiosis results in four haploid daughter cells by undergoing one round of dna replication followed by two divisions. homologous chromosomes are separated in the first division ( meiosis i ), and sister chromatids are separated in the second division ( meiosis ii ). both of these cell division cycles are used in the process of sexual reproduction at some point in their life cycle. both are believed to be present in the last eukaryotic common ancestor. prokaryotes ( i. e., archaea and bacteria ) can also undergo cell division ( or binary fission ). unlike the processes of mitosis and meiosis in eukaryotes, binary fission in prokaryotes takes place without the formation of a spindle apparatus on the cell. before binary fission, dna in the bacterium is tightly coiled. after it has uncoiled and duplicated, it is pulled to the separate poles of the bacterium as it increases the size to prepare for splitting. growth of a new cell wall begins to separate the bacterium ( triggered by ftsz polymerization and " z - ring " formation ). the new cell wall ( septum ) fully develops, resulting in the complete split of the bacterium. the new daughter cells have tightly coiled dna rods, ribosomes, and plasmids. = = = sexual reproduction and meiosis = = = mei
the lunar university network for astrophysics research ( lunar ) is a team of researchers and students at leading universities, nasa centers, and federal research laboratories undertaking investigations aimed at using the moon as a platform for space science. lunar research includes lunar interior physics & gravitation using lunar laser ranging ( llr ), low frequency cosmology and astrophysics ( lfca ), planetary science and the lunar ionosphere, radio heliophysics, and exploration science. the lunar team is exploring technologies that are likely to have a dual purpose, serving both exploration and science. there is a certain degree of commonality in much of lunar ' s research. specifically, the technology development for a lunar radio telescope involves elements from lfca, heliophysics, exploration science, and planetary science ; similarly the drilling technology developed for llr applies broadly to both exploration and lunar science.
high speed photometry of kuv 01584 - 0939 ( alias cet3 ) shows that it has a period of 620. 26 s. combined with its hydrogen - deficient spectrum, this implies that it is an am cvn star. the optical modulation is probably a superhump, in which case the orbital period will be slightly shorter than what we have observed.
Question: What determines how long the Moon takes to complete one cycle of phases?
A) the period of rotation of the Moon around its axis
B) the period of revolution of the Moon around Earth
C) the period of rotation of Earth around its axis
D) the period of revolution of Earth around the Sun
|
B) the period of revolution of the Moon around Earth
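For completeness, the cycle of phases (the synodic month) is slightly longer than the Moon's revolution period (the sidereal month) because Earth itself moves around the Sun during that time; the relation and the approximate values below are a standard illustration added here, not part of the source passages.

\[ \frac{1}{P_{\mathrm{syn}}} = \frac{1}{P_{\mathrm{sid}}} - \frac{1}{P_{\oplus}} \;\approx\; \frac{1}{27.32\ \mathrm{d}} - \frac{1}{365.25\ \mathrm{d}} \quad\Rightarrow\quad P_{\mathrm{syn}} \approx 29.5\ \mathrm{d} \]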
|
Context:
in 1738. the spinning jenny, invented in 1764, was a machine that used multiple spinning wheels ; however, it produced low quality thread. the water frame patented by richard arkwright in 1767, produced a better quality thread than the spinning jenny. the spinning mule, patented in 1779 by samuel crompton, produced a high quality thread. the power loom was invented by edmund cartwright in 1787. in the mid - 1750s, the steam engine was applied to the water power - constrained iron, copper and lead industries for powering blast bellows. these industries were located near the mines, some of which were using steam engines for mine pumping. steam engines were too powerful for leather bellows, so cast iron blowing cylinders were developed in 1768. steam powered blast furnaces achieved higher temperatures, allowing the use of more lime in iron blast furnace feed. ( lime rich slag was not free - flowing at the previously used temperatures. ) with a sufficient lime ratio, sulfur from coal or coke fuel reacts with the slag so that the sulfur does not contaminate the iron. coal and coke were cheaper and more abundant fuel. as a result, iron production rose significantly during the last decades of the 18th century. coal converted to coke fueled higher temperature blast furnaces and produced cast iron in much larger amounts than before, allowing the creation of a range of structures such as the iron bridge. cheap coal meant that industry was no longer constrained by water resources driving the mills, although it continued as a valuable source of power. the steam engine helped drain the mines, so more coal reserves could be accessed, and the output of coal increased. the development of the high - pressure steam engine made locomotives possible, and a transport revolution followed. the steam engine which had existed since the early 18th century, was practically applied to both steamboat and railway transportation. the liverpool and manchester railway, the first purpose - built railway line, opened in 1830, the rocket locomotive of robert stephenson being one of its first working locomotives used. manufacture of ships ' pulley blocks by all - metal machines at the portsmouth block mills in 1803 instigated the age of sustained mass production. machine tools used by engineers to manufacture parts began in the first decade of the century, notably by richard roberts and joseph whitworth. the development of interchangeable parts through what is now called the american system of manufacturing began in the firearms industry at the u. s. federal arsenals in the early 19th century, and became widely used by the end of the century. until the enlightenment era, little progress
. historically, metallurgy has predominately focused on the production of metals. metal production begins with the processing of ores to extract the metal, and includes the mixture of metals to make alloys. metal alloys are often a blend of at least two different metallic elements. however, non - metallic elements are often added to alloys in order to achieve properties suitable for an application. the study of metal production is subdivided into ferrous metallurgy ( also known as black metallurgy ) and non - ferrous metallurgy, also known as colored metallurgy. ferrous metallurgy involves processes and alloys based on iron, while non - ferrous metallurgy involves processes and alloys based on other metals. the production of ferrous metals accounts for 95 % of world metal production. modern metallurgists work in both emerging and traditional areas as part of an interdisciplinary team alongside material scientists and other engineers. some traditional areas include mineral processing, metal production, heat treatment, failure analysis, and the joining of metals ( including welding, brazing, and soldering ). emerging areas for metallurgists include nanotechnology, superconductors, composites, biomedical materials, electronic materials ( semiconductors ) and surface engineering. = = etymology and pronunciation = = metallurgy derives from the ancient greek μεταλλου
ργός, metallourgos, " worker in metal ", from μέταλλον, metallon, " mine, metal " + ἔργον, ergon, " work " the word was originally an alchemist ' s term for the extraction of metals from minerals, the ending - urgy signifying a process, especially manufacturing : it was discussed in this sense in the 1797 encyclopædia britannica. in the late 19th century, metallurgy ' s definition was extended to the more general scientific study of metals, alloys, and related processes. in english, the pronunciation is the more common one in the united kingdom. the pronunciation is the more common one in the us and is the first - listed variant in various american dictionaries, including merriam - webster collegiate and american heritage. = = history = = the earliest metal employed by humans appears to be gold, which can be found " native ". small amounts of natural gold, dating to the late paleolithic period, 40, 000 bc, have been found in spanish caves. silver, copper, tin and meteoric iron
high quality thread. the power loom was invented by edmund cartwright in 1787. in the mid - 1750s, the steam engine was applied to the water power - constrained iron, copper and lead industries for powering blast bellows. these industries were located near the mines, some of which were using steam engines for mine pumping. steam engines were too powerful for leather bellows, so cast iron blowing cylinders were developed in 1768. steam powered blast furnaces achieved higher temperatures, allowing the use of more lime in iron blast furnace feed. ( lime rich slag was not free - flowing at the previously used temperatures. ) with a sufficient lime ratio, sulfur from coal or coke fuel reacts with the slag so that the sulfur does not contaminate the iron. coal and coke were cheaper and more abundant fuel. as a result, iron production rose significantly during the last decades of the 18th century. coal converted to coke fueled higher temperature blast furnaces and produced cast iron in much larger amounts than before, allowing the creation of a range of structures such as the iron bridge. cheap coal meant that industry was no longer constrained by water resources driving the mills, although it continued as a valuable source of power. the steam engine helped drain the mines, so more coal reserves could be accessed, and the output of coal increased. the development of the high - pressure steam engine made locomotives possible, and a transport revolution followed. the steam engine which had existed since the early 18th century, was practically applied to both steamboat and railway transportation. the liverpool and manchester railway, the first purpose - built railway line, opened in 1830, the rocket locomotive of robert stephenson being one of its first working locomotives used. manufacture of ships ' pulley blocks by all - metal machines at the portsmouth block mills in 1803 instigated the age of sustained mass production. machine tools used by engineers to manufacture parts began in the first decade of the century, notably by richard roberts and joseph whitworth. the development of interchangeable parts through what is now called the american system of manufacturing began in the firearms industry at the u. s. federal arsenals in the early 19th century, and became widely used by the end of the century. until the enlightenment era, little progress was made in water supply and sanitation and the engineering skills of the romans were largely neglected throughout europe. the first documented use of sand filters to purify the water supply dates to 1804, when the owner of a bleachery in paisley, scotland, john gibb, installed an experimental filter, selling his unwanted
iron - peroxide intermediates are central in the reaction cycle of many iron - containing biomolecules. we trapped iron ( iii ) - ( hydro ) peroxo species in crystals of superoxide reductase ( sor ), a nonheme mononuclear iron enzyme that scavenges superoxide radicals. x - ray diffraction data at 1. 95 angstrom resolution and raman spectra recorded in crystallo revealed iron - ( hydro ) peroxo intermediates with the ( hydro ) peroxo group bound end - on. the dynamic sor active site promotes the formation of transient hydrogen bond networks, which presumably assist the cleavage of the iron - oxygen bond in order to release the reaction product, hydrogen peroxide.
is further subdivided into two broad categories : chemical metallurgy and physical metallurgy. chemical metallurgy is chiefly concerned with the reduction and oxidation of metals, and the chemical performance of metals. subjects of study in chemical metallurgy include mineral processing, the extraction of metals, thermodynamics, electrochemistry, and chemical degradation ( corrosion ). in contrast, physical metallurgy focuses on the mechanical properties of metals, the physical properties of metals, and the physical performance of metals. topics studied in physical metallurgy include crystallography, material characterization, mechanical metallurgy, phase transformations, and failure mechanisms. historically, metallurgy has predominately focused on the production of metals. metal production begins with the processing of ores to extract the metal, and includes the mixture of metals to make alloys. metal alloys are often a blend of at least two different metallic elements. however, non - metallic elements are often added to alloys in order to achieve properties suitable for an application. the study of metal production is subdivided into ferrous metallurgy ( also known as black metallurgy ) and non - ferrous metallurgy, also known as colored metallurgy. ferrous metallurgy involves processes and alloys based on iron, while non - ferrous metallurgy involves processes and alloys based on other metals. the production of ferrous metals accounts for 95 % of world metal production. modern metallurgists work in both emerging and traditional areas as part of an interdisciplinary team alongside material scientists and other engineers. some traditional areas include mineral processing, metal production, heat treatment, failure analysis, and the joining of metals ( including welding, brazing, and soldering ). emerging areas for metallurgists include nanotechnology, superconductors, composites, biomedical materials, electronic materials ( semiconductors ) and surface engineering. = = etymology and pronunciation = = metallurgy derives from the ancient greek μεταλλου
ργός, metallourgos, " worker in metal ", from μέταλλον, metallon, " mine, metal " + ἔργον, ergon, " work " the word was originally an alchemist ' s term for the extraction of metals from minerals, the ending - urgy signifying a process, especially manufacturing : it was discussed in this sense in the 1797 encyclopædia britannica. in the late 19th century, metallurgy '
joints. = = = metal alloys = = = the alloys of iron ( steel, stainless steel, cast iron, tool steel, alloy steels ) make up the largest proportion of metals today both by quantity and commercial value. iron alloyed with various proportions of carbon gives low, mid and high carbon steels. an iron - carbon alloy is only considered steel if the carbon level is between 0. 01 % and 2. 00 % by weight. for steels, the hardness and tensile strength of the steel is related to the amount of carbon present, with increasing carbon levels also leading to lower ductility and toughness. heat treatment processes such as quenching and tempering can significantly change these properties, however. in contrast, certain metal alloys exhibit unique properties where their size and density remain unchanged across a range of temperatures. cast iron is defined as an iron β carbon alloy with more than 2. 00 %, but less than 6. 67 % carbon. stainless steel is defined as a regular steel alloy with greater than 10 % by weight alloying content of chromium. nickel and molybdenum are typically also added in stainless steels. other significant metallic alloys are those of aluminium, titanium, copper and magnesium. copper alloys have been known for a long time ( since the bronze age ), while the alloys of the other three metals have been relatively recently developed. due to the chemical reactivity of these metals, the electrolytic extraction processes required were only developed relatively recently. the alloys of aluminium, titanium and magnesium are also known and valued for their high strength to weight ratios and, in the case of magnesium, their ability to provide electromagnetic shielding. these materials are ideal for situations where high strength to weight ratios are more important than bulk cost, such as in the aerospace industry and certain automotive engineering applications. = = = semiconductors = = = a semiconductor is a material that has a resistivity between a conductor and insulator. modern day electronics run on semiconductors, and the industry had an estimated us $ 530 billion market in 2021. its electronic properties can be greatly altered through intentionally introducing impurities in a process referred to as doping. semiconductor materials are used to build diodes, transistors, light - emitting diodes ( leds ), and analog and digital electric circuits, among their many uses. semiconductor devices have replaced thermionic devices like vacuum tubes in most applications. semiconductor devices are manufactured both as single discrete devices and as integrated circuits ( ics ), which consist of a number β from a
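The carbon and chromium thresholds quoted in the passage above lend themselves to a small worked example. The sketch below is purely illustrative (the function name and structure are editorial choices, not an industrial classification standard); it simply encodes the cut-offs as stated: 0.01-2.00 wt% carbon for steel, more than 2.00 wt% but less than 6.67 wt% carbon for cast iron, and more than 10 wt% chromium for a stainless steel.

def classify_iron_alloy(carbon_wt_pct, chromium_wt_pct=0.0):
    # Thresholds follow the passage above; this is an illustrative sketch only.
    if 0.01 <= carbon_wt_pct <= 2.00:
        # A steel; call it stainless if the chromium alloying content exceeds 10 wt%.
        return "stainless steel" if chromium_wt_pct > 10.0 else "steel"
    if 2.00 < carbon_wt_pct < 6.67:
        return "cast iron"
    return "outside the steel / cast-iron carbon range"

# Example uses (hypothetical compositions): a mid-carbon steel, a grey cast iron,
# and a low-carbon, 18 wt% chromium stainless steel.
print(classify_iron_alloy(0.45))        # -> steel
print(classify_iron_alloy(3.2))         # -> cast iron
print(classify_iron_alloy(0.08, 18.0))  # -> stainless steel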
during aqueous corrosion, atoms in the solid react chemically with oxygen, leading either to the formation of an oxide film or to the dissolution of the host material. commonly, the first step in corrosion involves an oxygen atom from the dissociated water that reacts with the surface atoms and breaks near surface bonds. in contrast, hydrogen on the surface often functions as a passivating species. here, we discovered that the roles of o and h are reversed in the early corrosion stages on a si terminated sic surface. o forms stable species on the surface, and chemical attack occurs by h that breaks the si - c bonds. this so - called hydrogen scission reaction is enabled by a newly discovered metastable bridging hydroxyl group that can form during water dissociation. the si atom that is displaced from the surface during water attack subsequently forms h2sio3, which is a known precursor to the formation of silica and silicic acid. this study suggests that the roles of h and o in oxidation need to be reconsidered.
is collected and processed to extract valuable metals. ore bodies often contain more than one valuable metal. tailings of a previous process may be used as a feed in another process to extract a secondary product from the original ore. additionally, a concentrate may contain more than one valuable metal. that concentrate would then be processed to separate the valuable metals into individual constituents. = = metal and its alloys = = much effort has been placed on understanding iron β carbon alloy system, which includes steels and cast irons. plain carbon steels ( those that contain essentially only carbon as an alloying element ) are used in low - cost, high - strength applications, where neither weight nor corrosion are a major concern. cast irons, including ductile iron, are also part of the iron - carbon system. iron - manganese - chromium alloys ( hadfield - type steels ) are also used in non - magnetic applications such as directional drilling. other engineering metals include aluminium, chromium, copper, magnesium, nickel, titanium, zinc, and silicon. these metals are most often used as alloys with the noted exception of silicon, which is not a metal. other forms include : stainless steel, particularly austenitic stainless steels, galvanized steel, nickel alloys, titanium alloys, or occasionally copper alloys are used, where resistance to corrosion is important. aluminium alloys and magnesium alloys are commonly used, when a lightweight strong part is required such as in automotive and aerospace applications. copper - nickel alloys ( such as monel ) are used in highly corrosive environments and for non - magnetic applications. nickel - based superalloys like inconel are used in high - temperature applications such as gas turbines, turbochargers, pressure vessels, and heat exchangers. for extremely high temperatures, single crystal alloys are used to minimize creep. in modern electronics, high purity single crystal silicon is essential for metal - oxide - silicon transistors ( mos ) and integrated circuits. = = production = = in production engineering, metallurgy is concerned with the production of metallic components for use in consumer or engineering products. this involves production of alloys, shaping, heat treatment and surface treatment of product. the task of the metallurgist is to achieve balance between material properties, such as cost, weight, strength, toughness, hardness, corrosion, fatigue resistance and performance in temperature extremes. to achieve this goal, the operating environment must be carefully considered. determining the hardness of the metal using the rockwell, vickers, and brinell hardness scales
pumping. steam engines were too powerful for leather bellows, so cast iron blowing cylinders were developed in 1768. steam powered blast furnaces achieved higher temperatures, allowing the use of more lime in iron blast furnace feed. ( lime rich slag was not free - flowing at the previously used temperatures. ) with a sufficient lime ratio, sulfur from coal or coke fuel reacts with the slag so that the sulfur does not contaminate the iron. coal and coke were cheaper and more abundant fuel. as a result, iron production rose significantly during the last decades of the 18th century. coal converted to coke fueled higher temperature blast furnaces and produced cast iron in much larger amounts than before, allowing the creation of a range of structures such as the iron bridge. cheap coal meant that industry was no longer constrained by water resources driving the mills, although it continued as a valuable source of power. the steam engine helped drain the mines, so more coal reserves could be accessed, and the output of coal increased. the development of the high - pressure steam engine made locomotives possible, and a transport revolution followed. the steam engine which had existed since the early 18th century, was practically applied to both steamboat and railway transportation. the liverpool and manchester railway, the first purpose - built railway line, opened in 1830, the rocket locomotive of robert stephenson being one of its first working locomotives used. manufacture of ships ' pulley blocks by all - metal machines at the portsmouth block mills in 1803 instigated the age of sustained mass production. machine tools used by engineers to manufacture parts began in the first decade of the century, notably by richard roberts and joseph whitworth. the development of interchangeable parts through what is now called the american system of manufacturing began in the firearms industry at the u. s. federal arsenals in the early 19th century, and became widely used by the end of the century. until the enlightenment era, little progress was made in water supply and sanitation and the engineering skills of the romans were largely neglected throughout europe. the first documented use of sand filters to purify the water supply dates to 1804, when the owner of a bleachery in paisley, scotland, john gibb, installed an experimental filter, selling his unwanted surplus to the public. the first treated public water supply in the world was installed by engineer james simpson for the chelsea waterworks company in london in 1829. the first screw - down water tap was patented in 1845 by guest and chrimes, a brass foundry in rotherham. the practice of water treatment soon became mainstream,
electrochemistry, and chemical degradation ( corrosion ). in contrast, physical metallurgy focuses on the mechanical properties of metals, the physical properties of metals, and the physical performance of metals. topics studied in physical metallurgy include crystallography, material characterization, mechanical metallurgy, phase transformations, and failure mechanisms. historically, metallurgy has predominately focused on the production of metals. metal production begins with the processing of ores to extract the metal, and includes the mixture of metals to make alloys. metal alloys are often a blend of at least two different metallic elements. however, non - metallic elements are often added to alloys in order to achieve properties suitable for an application. the study of metal production is subdivided into ferrous metallurgy ( also known as black metallurgy ) and non - ferrous metallurgy, also known as colored metallurgy. ferrous metallurgy involves processes and alloys based on iron, while non - ferrous metallurgy involves processes and alloys based on other metals. the production of ferrous metals accounts for 95 % of world metal production. modern metallurgists work in both emerging and traditional areas as part of an interdisciplinary team alongside material scientists and other engineers. some traditional areas include mineral processing, metal production, heat treatment, failure analysis, and the joining of metals ( including welding, brazing, and soldering ). emerging areas for metallurgists include nanotechnology, superconductors, composites, biomedical materials, electronic materials ( semiconductors ) and surface engineering. = = etymology and pronunciation = = metallurgy derives from the ancient greek μεταλλου
ργός, metallourgos, " worker in metal ", from μέταλλον, metallon, " mine, metal " + ἔργον, ergon, " work " the word was originally an alchemist ' s term for the extraction of metals from minerals, the ending - urgy signifying a process, especially manufacturing : it was discussed in this sense in the 1797 encyclopædia britannica. in the late 19th century, metallurgy ' s definition was extended to the more general scientific study of metals, alloys, and related processes. in english, the pronunciation is the more common one in the united kingdom. the pronunciation is the more common one in the us and is the first - listed variant in various american dictionaries, including merriam - webster collegiate
Question: Oxygen reacts with iron to produce rust and with hydrogen to produce water. Which statement describes both reactions?
A) A different mixture is formed in each case.
B) A different solution is formed in each case.
C) Both a change of state and of elements is involved.
D) New molecules are formed but the same elements exist.
|
D) New molecules are formed but the same elements exist.
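The answer can be made concrete with balanced equations for the two reactions named in the question; these are standard chemistry equations added for illustration (rust is treated here as iron(III) oxide, a simplification of real rust chemistry):

\[ 4\,\mathrm{Fe} + 3\,\mathrm{O_2} \rightarrow 2\,\mathrm{Fe_2O_3}, \qquad 2\,\mathrm{H_2} + \mathrm{O_2} \rightarrow 2\,\mathrm{H_2O} \]

In both cases new molecules form while the same elements (iron, hydrogen, oxygen) are conserved, which is why option D describes both reactions.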
|
Context:
used by pharmaceutical companies as a way of drug discovery. plants can synthesise coloured dyes and pigments such as the anthocyanins responsible for the red colour of red wine, yellow weld and blue woad used together to produce lincoln green, indoxyl, source of the blue dye indigo traditionally used to dye denim and the artist ' s pigments gamboge and rose madder. sugar, starch, cotton, linen, hemp, some types of rope, wood and particle boards, papyrus and paper, vegetable oils, wax, and natural rubber are examples of commercially important materials made from plant tissues or their secondary products. charcoal, a pure form of carbon made by pyrolysis of wood, has a long history as a metal - smelting fuel, as a filter material and adsorbent and as an artist ' s material and is one of the three ingredients of gunpowder. cellulose, the world ' s most abundant organic polymer, can be converted into energy, fuels, materials and chemical feedstock. products made from cellulose include rayon and cellophane, wallpaper paste, biobutanol and gun cotton. sugarcane, rapeseed and soy are some of the plants with a highly fermentable sugar or oil content that are used as sources of biofuels, important alternatives to fossil fuels, such as biodiesel. sweetgrass was used by native americans to ward off bugs like mosquitoes. these bug repelling properties of sweetgrass were later found by the american chemical society in the molecules phytol and coumarin. = = plant ecology = = plant ecology is the science of the functional relationships between plants and their habitats β the environments where they complete their life cycles. plant ecologists study the composition of local and regional floras, their biodiversity, genetic diversity and fitness, the adaptation of plants to their environment, and their competitive or mutualistic interactions with other species. some ecologists even rely on empirical data from indigenous people that is gathered by ethnobotanists. this information can relay a great deal of information on how the land once was thousands of years ago and how it has changed over that time. the goals of plant ecology are to understand the causes of their distribution patterns, productivity, environmental impact, evolution, and responses to environmental change. plants depend on certain edaphic ( soil ) and climatic factors in their environment but can modify these factors too. for example, they can change their environment ' s albedo, increase runoff interception
##drate - rich plant products such as barley ( beer ), rice ( sake ) and grapes ( wine ). native americans have used various plants as ways of treating illness or disease for thousands of years. this knowledge native americans have on plants has been recorded by enthnobotanists and then in turn has been used by pharmaceutical companies as a way of drug discovery. plants can synthesise coloured dyes and pigments such as the anthocyanins responsible for the red colour of red wine, yellow weld and blue woad used together to produce lincoln green, indoxyl, source of the blue dye indigo traditionally used to dye denim and the artist ' s pigments gamboge and rose madder. sugar, starch, cotton, linen, hemp, some types of rope, wood and particle boards, papyrus and paper, vegetable oils, wax, and natural rubber are examples of commercially important materials made from plant tissues or their secondary products. charcoal, a pure form of carbon made by pyrolysis of wood, has a long history as a metal - smelting fuel, as a filter material and adsorbent and as an artist ' s material and is one of the three ingredients of gunpowder. cellulose, the world ' s most abundant organic polymer, can be converted into energy, fuels, materials and chemical feedstock. products made from cellulose include rayon and cellophane, wallpaper paste, biobutanol and gun cotton. sugarcane, rapeseed and soy are some of the plants with a highly fermentable sugar or oil content that are used as sources of biofuels, important alternatives to fossil fuels, such as biodiesel. sweetgrass was used by native americans to ward off bugs like mosquitoes. these bug repelling properties of sweetgrass were later found by the american chemical society in the molecules phytol and coumarin. = = plant ecology = = plant ecology is the science of the functional relationships between plants and their habitats β the environments where they complete their life cycles. plant ecologists study the composition of local and regional floras, their biodiversity, genetic diversity and fitness, the adaptation of plants to their environment, and their competitive or mutualistic interactions with other species. some ecologists even rely on empirical data from indigenous people that is gathered by ethnobotanists. this information can relay a great deal of information on how the land once was thousands of years ago and how it has changed over that time. the goals of
this scaffold and cells were placed in a bioreactor, where it matured to become a partially or fully transplantable organ. the work was called a " landmark ". the lab first stripped the cells away from a rat heart ( a process called " decellularization " ) and then injected rat stem cells into the decellularized rat heart. tissue - engineered blood vessels : blood vessels that have been grown in a lab and can be used to repair damaged blood vessels without eliciting an immune response. tissue engineered blood vessels have been developed by many different approaches. they could be implanted as pre - seeded cellularized blood vessels, as acellular vascular grafts made with decellularized vessels or synthetic vascular grafts. artificial skin constructed from human skin cells embedded in a hydrogel, such as in the case of bio - printed constructs for battlefield burn repairs. artificial bone marrow : bone marrow cultured in vitro to be transplanted serves as a " just cells " approach to tissue engineering. tissue engineered bone : a structural matrix can be composed of metals such as titanium, polymers of varying degradation rates, or certain types of ceramics. materials are often chosen to recruit osteoblasts to aid in reforming the bone and returning biological function. various types of cells can be added directly into the matrix to expedite the process. laboratory - grown penis : decellularized scaffolds of rabbit penises were recellularised with smooth muscle and endothelial cells. the organ was then transplanted to live rabbits and functioned comparably to the native organ, suggesting potential as treatment for genital trauma. oral mucosa tissue engineering uses a cells and scaffold approach to replicate the 3 dimensional structure and function of oral mucosa. = = cells as building blocks = = cells are one of the main components for the success of tissue engineering approaches. tissue engineering uses cells as strategies for creation / replacement of new tissue. examples include fibroblasts used for skin repair or renewal, chondrocytes used for cartilage repair ( maci β fda approved product ), and hepatocytes used in liver support systems cells can be used alone or with support matrices for tissue engineering applications. an adequate environment for promoting cell growth, differentiation, and integration with the existing tissue is a critical factor for cell - based building blocks. manipulation of any of these cell processes create alternative avenues for the development of new tissue ( e. g., cell reprogramming - somatic
cells into the decellularized rat heart. tissue - engineered blood vessels : blood vessels that have been grown in a lab and can be used to repair damaged blood vessels without eliciting an immune response. tissue engineered blood vessels have been developed by many different approaches. they could be implanted as pre - seeded cellularized blood vessels, as acellular vascular grafts made with decellularized vessels or synthetic vascular grafts. artificial skin constructed from human skin cells embedded in a hydrogel, such as in the case of bio - printed constructs for battlefield burn repairs. artificial bone marrow : bone marrow cultured in vitro to be transplanted serves as a " just cells " approach to tissue engineering. tissue engineered bone : a structural matrix can be composed of metals such as titanium, polymers of varying degradation rates, or certain types of ceramics. materials are often chosen to recruit osteoblasts to aid in reforming the bone and returning biological function. various types of cells can be added directly into the matrix to expedite the process. laboratory - grown penis : decellularized scaffolds of rabbit penises were recellularised with smooth muscle and endothelial cells. the organ was then transplanted to live rabbits and functioned comparably to the native organ, suggesting potential as treatment for genital trauma. oral mucosa tissue engineering uses a cells and scaffold approach to replicate the 3 dimensional structure and function of oral mucosa. = = cells as building blocks = = cells are one of the main components for the success of tissue engineering approaches. tissue engineering uses cells as strategies for creation / replacement of new tissue. examples include fibroblasts used for skin repair or renewal, chondrocytes used for cartilage repair ( maci β fda approved product ), and hepatocytes used in liver support systems cells can be used alone or with support matrices for tissue engineering applications. an adequate environment for promoting cell growth, differentiation, and integration with the existing tissue is a critical factor for cell - based building blocks. manipulation of any of these cell processes create alternative avenues for the development of new tissue ( e. g., cell reprogramming - somatic cells, vascularization ). = = = isolation = = = techniques for cell isolation depend on the cell source. centrifugation and apheresis are techniques used for extracting cells from biofluids ( e. g., blood ). whereas digestion processes, typically using enzymes to remove the extra
waste. red biotechnology is the use of biotechnology in the medical and pharmaceutical industries, and health preservation. this branch involves the production of vaccines and antibiotics, regenerative therapies, creation of artificial organs and new diagnostics of diseases. as well as the development of hormones, stem cells, antibodies, sirna and diagnostic tests. white biotechnology, also known as industrial biotechnology, is biotechnology applied to industrial processes. an example is the designing of an organism to produce a useful chemical. another example is the using of enzymes as industrial catalysts to either produce valuable chemicals or destroy hazardous / polluting chemicals. white biotechnology tends to consume less in resources than traditional processes used to produce industrial goods. yellow biotechnology refers to the use of biotechnology in food production ( food industry ), for example in making wine ( winemaking ), cheese ( cheesemaking ), and beer ( brewing ) by fermentation. it has also been used to refer to biotechnology applied to insects. this includes biotechnology - based approaches for the control of harmful insects, the characterisation and utilisation of active ingredients or genes of insects for research, or application in agriculture and medicine and various other approaches. gray biotechnology is dedicated to environmental applications, and focused on the maintenance of biodiversity and the remotion of pollutants. brown biotechnology is related to the management of arid lands and deserts. one application is the creation of enhanced seeds that resist extreme environmental conditions of arid regions, which is related to the innovation, creation of agriculture techniques and management of resources. violet biotechnology is related to law, ethical and philosophical issues around biotechnology. microbial biotechnology has been proposed for the rapidly emerging area of biotechnology applications in space and microgravity ( space bioeconomy ) dark biotechnology is the color associated with bioterrorism or biological weapons and biowarfare which uses microorganisms, and toxins to cause diseases and death in humans, livestock and crops. = = = medicine = = = in medicine, modern biotechnology has many applications in areas such as pharmaceutical drug discoveries and production, pharmacogenomics, and genetic testing ( or genetic screening ). in 2021, nearly 40 % of the total company value of pharmaceutical biotech companies worldwide were active in oncology with neurology and rare diseases being the other two big applications. pharmacogenomics ( a combination of pharmacology and genomics ) is the technology that analyses how genetic makeup affects an individual ' s response to drugs. researchers in the field investigate the influence of genetic variation on drug responses in patients by
cartilage generated without the use of exogenous scaffold material. in this methodology, all material in the construct is cellular, produced directly by the cells. bioartificial heart : doris taylor ' s lab constructed a biocompatible rat heart by re - cellularising a de - cellularised rat heart. this scaffold and cells were placed in a bioreactor, where it matured to become a partially or fully transplantable organ. the work was called a " landmark ". the lab first stripped the cells away from a rat heart ( a process called " decellularization " ) and then injected rat stem cells into the decellularized rat heart. tissue - engineered blood vessels : blood vessels that have been grown in a lab and can be used to repair damaged blood vessels without eliciting an immune response. tissue engineered blood vessels have been developed by many different approaches. they could be implanted as pre - seeded cellularized blood vessels, as acellular vascular grafts made with decellularized vessels or synthetic vascular grafts. artificial skin constructed from human skin cells embedded in a hydrogel, such as in the case of bio - printed constructs for battlefield burn repairs. artificial bone marrow : bone marrow cultured in vitro to be transplanted serves as a " just cells " approach to tissue engineering. tissue engineered bone : a structural matrix can be composed of metals such as titanium, polymers of varying degradation rates, or certain types of ceramics. materials are often chosen to recruit osteoblasts to aid in reforming the bone and returning biological function. various types of cells can be added directly into the matrix to expedite the process. laboratory - grown penis : decellularized scaffolds of rabbit penises were recellularised with smooth muscle and endothelial cells. the organ was then transplanted to live rabbits and functioned comparably to the native organ, suggesting potential as treatment for genital trauma. oral mucosa tissue engineering uses a cells and scaffold approach to replicate the 3 dimensional structure and function of oral mucosa. = = cells as building blocks = = cells are one of the main components for the success of tissue engineering approaches. tissue engineering uses cells as strategies for creation / replacement of new tissue. examples include fibroblasts used for skin repair or renewal, chondrocytes used for cartilage repair ( maci β fda approved product ), and hepatocytes used in liver support systems cells can be used alone or with
human blood primarily comprises plasma, red blood cells, white blood cells, and platelets. it plays a vital role in transporting nutrients to different organs, where it stores essential health - related data about the human body. blood cells are utilized to defend the body against diverse infections, including fungi, viruses, and bacteria. hence, blood analysis can help physicians assess an individual ' s physiological condition. blood cells have been sub - classified into eight groups : neutrophils, eosinophils, basophils, lymphocytes, monocytes, immature granulocytes ( promyelocytes, myelocytes, and metamyelocytes ), erythroblasts, and platelets or thrombocytes on the basis of their nucleus, shape, and cytoplasm. traditionally, pathologists and hematologists in laboratories have examined these blood cells using a microscope before manually classifying them. the manual approach is slower and more prone to human error. therefore, it is essential to automate this process. in our paper, transfer learning with cnn pre - trained models. vgg16, vgg19, resnet - 50, resnet - 101, resnet - 152, inceptionv3, mobilenetv2, and densenet - 20 applied to the pbc dataset ' s normal dib. the overall accuracy achieved with these models lies between 91. 375 and 94. 72 %. hence, inspired by these pre - trained architectures, a model has been proposed to automatically classify the ten types of blood cells with increased accuracy. a novel cnn - based framework has been presented to improve accuracy. the proposed cnn model has been tested on the pbc dataset normal dib. the outcomes of the experiments demonstrate that our cnn - based framework designed for blood cell classification attains an accuracy of 99. 91 % on the pbc dataset. our proposed convolutional neural network model performs competitively when compared to earlier results reported in the literature.
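The passage above outlines transfer learning with ImageNet-pre-trained CNN backbones (VGG, ResNet, Inception, MobileNet, DenseNet) for classifying blood-cell images from the PBC dataset. The sketch below is an editorial illustration of that general fine-tuning recipe in PyTorch/torchvision, not the authors' actual model or code; the folder path, image size, class count, batch size, learning rate and epoch count are all assumptions.

```python
# Minimal transfer-learning sketch (editorial illustration, not the paper's code).
# Assumed layout: blood-cell images stored in class-named subfolders under data_dir.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

data_dir = "pbc_dataset_normal_dib"    # hypothetical path, not taken from the passage
num_classes = 10                       # the passage reports ten blood-cell types

# Standard ImageNet preprocessing so the pre-trained weights remain meaningful.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

train_set = datasets.ImageFolder(data_dir, transform=preprocess)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Load an ImageNet-pre-trained ResNet-50 and swap in a new classification head.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, num_classes)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for epoch in range(5):                 # short schedule purely for illustration
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

Freezing the backbone and training only the new head, or unfreezing layers gradually, are the usual variations of this recipe; the 91 - 95 % baseline accuracies mentioned in the passage presumably come from fine-tuning different backbones in roughly this manner.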
generation times. corn has been used to study mechanisms of photosynthesis and phloem loading of sugar in c4 plants. the single celled green alga chlamydomonas reinhardtii, while not an embryophyte itself, contains a green - pigmented chloroplast related to that of land plants, making it useful for study. a red alga cyanidioschyzon merolae has also been used to study some basic chloroplast functions. spinach, peas, soybeans and a moss physcomitrella patens are commonly used to study plant cell biology. agrobacterium tumefaciens, a soil rhizosphere bacterium, can attach to plant cells and infect them with a callus - inducing ti plasmid by horizontal gene transfer, causing a callus infection called crown gall disease. schell and van montagu ( 1977 ) hypothesised that the ti plasmid could be a natural vector for introducing the nif gene responsible for nitrogen fixation in the root nodules of legumes and other plant species. today, genetic modification of the ti plasmid is one of the main techniques for introduction of transgenes to plants and the creation of genetically modified crops. = = = epigenetics = = = epigenetics is the study of heritable changes in gene function that cannot be explained by changes in the underlying dna sequence but cause the organism ' s genes to behave ( or " express themselves " ) differently. one example of epigenetic change is the marking of the genes by dna methylation which determines whether they will be expressed or not. gene expression can also be controlled by repressor proteins that attach to silencer regions of the dna and prevent that region of the dna code from being expressed. epigenetic marks may be added or removed from the dna during programmed stages of development of the plant, and are responsible, for example, for the differences between anthers, petals and normal leaves, despite the fact that they all have the same underlying genetic code. epigenetic changes may be temporary or may remain through successive cell divisions for the remainder of the cell ' s life. some epigenetic changes have been shown to be heritable, while others are reset in the germ cells. epigenetic changes in eukaryotic biology serve to regulate the process of cellular differentiation. during morphogenesis, totipotent stem cells become the various
cellular and molecular biology of cereals, grasses and monocots generally. model plants such as arabidopsis thaliana are used for studying the molecular biology of plant cells and the chloroplast. ideally, these organisms have small genomes that are well known or completely sequenced, small stature and short generation times. corn has been used to study mechanisms of photosynthesis and phloem loading of sugar in c4 plants. the single celled green alga chlamydomonas reinhardtii, while not an embryophyte itself, contains a green - pigmented chloroplast related to that of land plants, making it useful for study. a red alga cyanidioschyzon merolae has also been used to study some basic chloroplast functions. spinach, peas, soybeans and a moss physcomitrella patens are commonly used to study plant cell biology. agrobacterium tumefaciens, a soil rhizosphere bacterium, can attach to plant cells and infect them with a callus - inducing ti plasmid by horizontal gene transfer, causing a callus infection called crown gall disease. schell and van montagu ( 1977 ) hypothesised that the ti plasmid could be a natural vector for introducing the nif gene responsible for nitrogen fixation in the root nodules of legumes and other plant species. today, genetic modification of the ti plasmid is one of the main techniques for introduction of transgenes to plants and the creation of genetically modified crops. = = = epigenetics = = = epigenetics is the study of heritable changes in gene function that cannot be explained by changes in the underlying dna sequence but cause the organism ' s genes to behave ( or " express themselves " ) differently. one example of epigenetic change is the marking of the genes by dna methylation which determines whether they will be expressed or not. gene expression can also be controlled by repressor proteins that attach to silencer regions of the dna and prevent that region of the dna code from being expressed. epigenetic marks may be added or removed from the dna during programmed stages of development of the plant, and are responsible, for example, for the differences between anthers, petals and normal leaves, despite the fact that they all have the same underlying genetic code. epigenetic changes may be temporary or may remain through successive cell divisions for the remainder of
and peripheral blood. they concluded from the results that immuno - cytochemical staining of bone marrow and peripheral blood is a sensitive and simple way to detect and quantify breast cancer cells. one of the main reasons for metastatic relapse in patients with solid tumours is the early dissemination of malignant cells. the use of monoclonal antibodies ( mabs ) specific for cytokeratins can identify disseminated individual epithelial tumor cells in the bone marrow. one study reports on having developed an immuno - cytochemical procedure for simultaneous labeling of cytokeratin component no. 18 ( ck18 ) and prostate specific antigen ( psa ). this would help in the further characterization of disseminated individual epithelial tumor cells in patients with prostate cancer. the twelve control aspirates from patients with benign prostatic hyperplasia showed negative staining, which further supports the specificity of ck18 in detecting epithelial tumour cells in bone marrow. in most cases of malignant disease complicated by effusion, neoplastic cells can be easily recognized. however, in some cases, malignant cells are not so easily seen or their presence is too doubtful to call it a positive report. the use of immuno - cytochemical techniques increases diagnostic accuracy in these cases. ghosh, mason and spriggs analysed 53 samples of pleural or peritoneal fluid from 41 patients with malignant disease. conventional cytological examination had not revealed any neoplastic cells. three monoclonal antibodies ( anti - cea, ca 1 and hmfg - 2 ) were used to search for malignant cells. immunocytochemical labelling was performed on unstained smears, which had been stored at - 20 Β°c up to 18 months. twelve of the forty - one cases in which immuno - cytochemical staining was performed, revealed malignant cells. the result represented an increase in diagnostic accuracy of approximately 20 %. the study concluded that in patients with suspected malignant disease, immuno - cytochemical labeling should be used routinely in the examination of cytologically negative samples and has important implications with respect to patient management. another application of immuno - cytochemical staining is for the detection of two antigens in the same smear. double staining with light chain antibodies and with t and b cell markers can indicate the neoplastic origin of a lymph
Question: What is the primary job of red blood cells?
A) transport oxygen
B) remove waste
C) fight disease
D) allow reproduction
|
A) transport oxygen
|
Context:
their primary metabolism like the photosynthetic calvin cycle and crassulacean acid metabolism. others make specialised materials like the cellulose and lignin used to build their bodies, and secondary products like resins and aroma compounds. plants and various other groups of photosynthetic eukaryotes collectively known as " algae " have unique organelles known as chloroplasts. chloroplasts are thought to be descended from cyanobacteria that formed endosymbiotic relationships with ancient plant and algal ancestors. chloroplasts and cyanobacteria contain the blue - green pigment chlorophyll a. chlorophyll a ( as well as its plant and green algal - specific cousin chlorophyll b ) absorbs light in the blue - violet and orange / red parts of the spectrum while reflecting and transmitting the green light that we see as the characteristic colour of these organisms. the energy in the red and blue light that these pigments absorb is used by chloroplasts to make energy - rich carbon compounds from carbon dioxide and water by oxygenic photosynthesis, a process that generates molecular oxygen ( o2 ) as a by - product. the light energy captured by chlorophyll a is initially in the form of electrons ( and later a proton gradient ) that is used to make molecules of atp and nadph which temporarily store and transport energy. their energy is used in the light - independent reactions of the calvin cycle by the enzyme rubisco to produce molecules of the 3 - carbon sugar glyceraldehyde 3 - phosphate ( g3p ). glyceraldehyde 3 - phosphate is the first product of photosynthesis and the raw material from which glucose and almost all other organic molecules of biological origin are synthesised. some of the glucose is converted to starch which is stored in the chloroplast. starch is the characteristic energy store of most land plants and algae, while inulin, a polymer of fructose is used for the same purpose in the sunflower family asteraceae. some of the glucose is converted to sucrose ( common table sugar ) for export to the rest of the plant. unlike in animals ( which lack chloroplasts ), plants and their eukaryote relatives have delegated many biochemical roles to their chloroplasts, including synthesising all their fatty acids, and most amino acids. the fatty acids that
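As a brief worked summary (an editorial addition, not wording from the source passage) of the oxygenic photosynthesis described above, the light reactions and the Calvin cycle together amount to the familiar net equation, with G3P as the first stable product and two G3P molecules later combined to yield one glucose:

$$6\,\mathrm{CO_2} + 6\,\mathrm{H_2O} \xrightarrow{\ \text{light}\ } \mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2}$$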
aquatic and most of the aquatic photosynthetic eukaryotic organisms are collectively described as algae, which is a term of convenience as not all algae are closely related. algae comprise several distinct clades such as glaucophytes, which are microscopic freshwater algae that may have resembled in form to the early unicellular ancestor of plantae. unlike glaucophytes, the other algal clades such as red and green algae are multicellular. green algae comprise three major clades : chlorophytes, coleochaetophytes, and stoneworts. fungi are eukaryotes that digest foods outside their bodies, secreting digestive enzymes that break down large food molecules before absorbing them through their cell membranes. many fungi are also saprobes, feeding on dead organic matter, making them important decomposers in ecological systems. animals are multicellular eukaryotes. with few exceptions, animals consume organic material, breathe oxygen, are able to move, can reproduce sexually, and grow from a hollow sphere of cells, the blastula, during embryonic development. over 1. 5 million living animal species have been described β of which around 1 million are insects β but it has been estimated there are over 7 million animal species in total. they have complex interactions with each other and their environments, forming intricate food webs. = = = viruses = = = viruses are submicroscopic infectious agents that replicate inside the cells of organisms. viruses infect all types of life forms, from animals and plants to microorganisms, including bacteria and archaea. more than 6, 000 virus species have been described in detail. viruses are found in almost every ecosystem on earth and are the most numerous type of biological entity. the origins of viruses in the evolutionary history of life are unclear : some may have evolved from plasmids β pieces of dna that can move between cells β while others may have evolved from bacteria. in evolution, viruses are an important means of horizontal gene transfer, which increases genetic diversity in a way analogous to sexual reproduction. because viruses possess some but not all characteristics of life, they have been described as " organisms at the edge of life ", and as self - replicators. = = ecology = = ecology is the study of the distribution and abundance of life, the interaction between organisms and their environment. = = = ecosystems = = = the community of living ( biotic ) organisms in conjunction with the nonliving ( abiotic ) components ( e.
known as " algae " have unique organelles known as chloroplasts. chloroplasts are thought to be descended from cyanobacteria that formed endosymbiotic relationships with ancient plant and algal ancestors. chloroplasts and cyanobacteria contain the blue - green pigment chlorophyll a. chlorophyll a ( as well as its plant and green algal - specific cousin chlorophyll b ) absorbs light in the blue - violet and orange / red parts of the spectrum while reflecting and transmitting the green light that we see as the characteristic colour of these organisms. the energy in the red and blue light that these pigments absorb is used by chloroplasts to make energy - rich carbon compounds from carbon dioxide and water by oxygenic photosynthesis, a process that generates molecular oxygen ( o2 ) as a by - product. the light energy captured by chlorophyll a is initially in the form of electrons ( and later a proton gradient ) that is used to make molecules of atp and nadph which temporarily store and transport energy. their energy is used in the light - independent reactions of the calvin cycle by the enzyme rubisco to produce molecules of the 3 - carbon sugar glyceraldehyde 3 - phosphate ( g3p ). glyceraldehyde 3 - phosphate is the first product of photosynthesis and the raw material from which glucose and almost all other organic molecules of biological origin are synthesised. some of the glucose is converted to starch which is stored in the chloroplast. starch is the characteristic energy store of most land plants and algae, while inulin, a polymer of fructose is used for the same purpose in the sunflower family asteraceae. some of the glucose is converted to sucrose ( common table sugar ) for export to the rest of the plant. unlike in animals ( which lack chloroplasts ), plants and their eukaryote relatives have delegated many biochemical roles to their chloroplasts, including synthesising all their fatty acids, and most amino acids. the fatty acids that chloroplasts make are used for many things, such as providing material to build cell membranes out of and making the polymer cutin which is found in the plant cuticle that protects land plants from drying out. plants synthesise a number of unique polymers like the polysaccharide molecules cellulose,
, which would exclude fungi and some algae. plant cells were derived by endosymbiosis of a cyanobacterium into an early eukaryote about one billion years ago, which gave rise to chloroplasts. the first several clades that emerged following primary endosymbiosis were aquatic and most of the aquatic photosynthetic eukaryotic organisms are collectively described as algae, which is a term of convenience as not all algae are closely related. algae comprise several distinct clades such as glaucophytes, which are microscopic freshwater algae that may have resembled in form to the early unicellular ancestor of plantae. unlike glaucophytes, the other algal clades such as red and green algae are multicellular. green algae comprise three major clades : chlorophytes, coleochaetophytes, and stoneworts. fungi are eukaryotes that digest foods outside their bodies, secreting digestive enzymes that break down large food molecules before absorbing them through their cell membranes. many fungi are also saprobes, feeding on dead organic matter, making them important decomposers in ecological systems. animals are multicellular eukaryotes. with few exceptions, animals consume organic material, breathe oxygen, are able to move, can reproduce sexually, and grow from a hollow sphere of cells, the blastula, during embryonic development. over 1. 5 million living animal species have been described β of which around 1 million are insects β but it has been estimated there are over 7 million animal species in total. they have complex interactions with each other and their environments, forming intricate food webs. = = = viruses = = = viruses are submicroscopic infectious agents that replicate inside the cells of organisms. viruses infect all types of life forms, from animals and plants to microorganisms, including bacteria and archaea. more than 6, 000 virus species have been described in detail. viruses are found in almost every ecosystem on earth and are the most numerous type of biological entity. the origins of viruses in the evolutionary history of life are unclear : some may have evolved from plasmids β pieces of dna that can move between cells β while others may have evolved from bacteria. in evolution, viruses are an important means of horizontal gene transfer, which increases genetic diversity in a way analogous to sexual reproduction. because viruses possess some but not all characteristics of life, they have been described as " organisms at the edge of life ",
with one allele inducing a change on the other. = = plant evolution = = the chloroplasts of plants have a number of biochemical, structural and genetic similarities to cyanobacteria, ( commonly but incorrectly known as " blue - green algae " ) and are thought to be derived from an ancient endosymbiotic relationship between an ancestral eukaryotic cell and a cyanobacterial resident. the algae are a polyphyletic group and are placed in various divisions, some more closely related to plants than others. there are many differences between them in features such as cell wall composition, biochemistry, pigmentation, chloroplast structure and nutrient reserves. the algal division charophyta, sister to the green algal division chlorophyta, is considered to contain the ancestor of true plants. the charophyte class charophyceae and the land plant sub - kingdom embryophyta together form the monophyletic group or clade streptophytina. nonvascular land plants are embryophytes that lack the vascular tissues xylem and phloem. they include mosses, liverworts and hornworts. pteridophytic vascular plants with true xylem and phloem that reproduced by spores germinating into free - living gametophytes evolved during the silurian period and diversified into several lineages during the late silurian and early devonian. representatives of the lycopods have survived to the present day. by the end of the devonian period, several groups, including the lycopods, sphenophylls and progymnosperms, had independently evolved " megaspory " β their spores were of two distinct sizes, larger megaspores and smaller microspores. their reduced gametophytes developed from megaspores retained within the spore - producing organs ( megasporangia ) of the sporophyte, a condition known as endospory. seeds consist of an endosporic megasporangium surrounded by one or two sheathing layers ( integuments ). the young sporophyte develops within the seed, which on germination splits to release it. the earliest known seed plants date from the latest devonian famennian stage. following the evolution of the seed habit, seed plants diversified, giving rise to a number of now - extinct groups, including seed ferns, as well as the modern gym
invertebrates, or protozoans, the protist grouping is not a formal taxonomic group but is used for convenience. most protists are unicellular ; these are called microbial eukaryotes. plants are mainly multicellular organisms, predominantly photosynthetic eukaryotes of the kingdom plantae, which would exclude fungi and some algae. plant cells were derived by endosymbiosis of a cyanobacterium into an early eukaryote about one billion years ago, which gave rise to chloroplasts. the first several clades that emerged following primary endosymbiosis were aquatic and most of the aquatic photosynthetic eukaryotic organisms are collectively described as algae, which is a term of convenience as not all algae are closely related. algae comprise several distinct clades such as glaucophytes, which are microscopic freshwater algae that may have resembled in form to the early unicellular ancestor of plantae. unlike glaucophytes, the other algal clades such as red and green algae are multicellular. green algae comprise three major clades : chlorophytes, coleochaetophytes, and stoneworts. fungi are eukaryotes that digest foods outside their bodies, secreting digestive enzymes that break down large food molecules before absorbing them through their cell membranes. many fungi are also saprobes, feeding on dead organic matter, making them important decomposers in ecological systems. animals are multicellular eukaryotes. with few exceptions, animals consume organic material, breathe oxygen, are able to move, can reproduce sexually, and grow from a hollow sphere of cells, the blastula, during embryonic development. over 1. 5 million living animal species have been described β of which around 1 million are insects β but it has been estimated there are over 7 million animal species in total. they have complex interactions with each other and their environments, forming intricate food webs. = = = viruses = = = viruses are submicroscopic infectious agents that replicate inside the cells of organisms. viruses infect all types of life forms, from animals and plants to microorganisms, including bacteria and archaea. more than 6, 000 virus species have been described in detail. viruses are found in almost every ecosystem on earth and are the most numerous type of biological entity. the origins of viruses in the evolutionary history of life are unclear : some may have evolved from plasmids β pieces of dna
- people relationships arose between the indigenous people of canada in identifying edible plants from inedible plants. this relationship the indigenous people had with plants was recorded by ethnobotanists. = = plant biochemistry = = plant biochemistry is the study of the chemical processes used by plants. some of these processes are used in their primary metabolism like the photosynthetic calvin cycle and crassulacean acid metabolism. others make specialised materials like the cellulose and lignin used to build their bodies, and secondary products like resins and aroma compounds. plants and various other groups of photosynthetic eukaryotes collectively known as " algae " have unique organelles known as chloroplasts. chloroplasts are thought to be descended from cyanobacteria that formed endosymbiotic relationships with ancient plant and algal ancestors. chloroplasts and cyanobacteria contain the blue - green pigment chlorophyll a. chlorophyll a ( as well as its plant and green algal - specific cousin chlorophyll b ) absorbs light in the blue - violet and orange / red parts of the spectrum while reflecting and transmitting the green light that we see as the characteristic colour of these organisms. the energy in the red and blue light that these pigments absorb is used by chloroplasts to make energy - rich carbon compounds from carbon dioxide and water by oxygenic photosynthesis, a process that generates molecular oxygen ( o2 ) as a by - product. the light energy captured by chlorophyll a is initially in the form of electrons ( and later a proton gradient ) that is used to make molecules of atp and nadph which temporarily store and transport energy. their energy is used in the light - independent reactions of the calvin cycle by the enzyme rubisco to produce molecules of the 3 - carbon sugar glyceraldehyde 3 - phosphate ( g3p ). glyceraldehyde 3 - phosphate is the first product of photosynthesis and the raw material from which glucose and almost all other organic molecules of biological origin are synthesised. some of the glucose is converted to starch which is stored in the chloroplast. starch is the characteristic energy store of most land plants and algae, while inulin, a polymer of fructose is used for the same purpose in the sunflower family asteraceae. some of the glucose is converted to sucrose ( common table
pigmentation, chloroplast structure and nutrient reserves. the algal division charophyta, sister to the green algal division chlorophyta, is considered to contain the ancestor of true plants. the charophyte class charophyceae and the land plant sub - kingdom embryophyta together form the monophyletic group or clade streptophytina. nonvascular land plants are embryophytes that lack the vascular tissues xylem and phloem. they include mosses, liverworts and hornworts. pteridophytic vascular plants with true xylem and phloem that reproduced by spores germinating into free - living gametophytes evolved during the silurian period and diversified into several lineages during the late silurian and early devonian. representatives of the lycopods have survived to the present day. by the end of the devonian period, several groups, including the lycopods, sphenophylls and progymnosperms, had independently evolved " megaspory " β their spores were of two distinct sizes, larger megaspores and smaller microspores. their reduced gametophytes developed from megaspores retained within the spore - producing organs ( megasporangia ) of the sporophyte, a condition known as endospory. seeds consist of an endosporic megasporangium surrounded by one or two sheathing layers ( integuments ). the young sporophyte develops within the seed, which on germination splits to release it. the earliest known seed plants date from the latest devonian famennian stage. following the evolution of the seed habit, seed plants diversified, giving rise to a number of now - extinct groups, including seed ferns, as well as the modern gymnosperms and angiosperms. gymnosperms produce " naked seeds " not fully enclosed in an ovary ; modern representatives include conifers, cycads, ginkgo, and gnetales. angiosperms produce seeds enclosed in a structure such as a carpel or an ovary. ongoing research on the molecular phylogenetics of living plants appears to show that the angiosperms are a sister clade to the gymnosperms. = = plant physiology = = plant physiology encompasses all the internal chemical and physical activities of plants associated with life. chemicals obtained from the air, soil and water form
ancient endosymbiotic relationship between an ancestral eukaryotic cell and a cyanobacterial resident. the algae are a polyphyletic group and are placed in various divisions, some more closely related to plants than others. there are many differences between them in features such as cell wall composition, biochemistry, pigmentation, chloroplast structure and nutrient reserves. the algal division charophyta, sister to the green algal division chlorophyta, is considered to contain the ancestor of true plants. the charophyte class charophyceae and the land plant sub - kingdom embryophyta together form the monophyletic group or clade streptophytina. nonvascular land plants are embryophytes that lack the vascular tissues xylem and phloem. they include mosses, liverworts and hornworts. pteridophytic vascular plants with true xylem and phloem that reproduced by spores germinating into free - living gametophytes evolved during the silurian period and diversified into several lineages during the late silurian and early devonian. representatives of the lycopods have survived to the present day. by the end of the devonian period, several groups, including the lycopods, sphenophylls and progymnosperms, had independently evolved " megaspory " β their spores were of two distinct sizes, larger megaspores and smaller microspores. their reduced gametophytes developed from megaspores retained within the spore - producing organs ( megasporangia ) of the sporophyte, a condition known as endospory. seeds consist of an endosporic megasporangium surrounded by one or two sheathing layers ( integuments ). the young sporophyte develops within the seed, which on germination splits to release it. the earliest known seed plants date from the latest devonian famennian stage. following the evolution of the seed habit, seed plants diversified, giving rise to a number of now - extinct groups, including seed ferns, as well as the modern gymnosperms and angiosperms. gymnosperms produce " naked seeds " not fully enclosed in an ovary ; modern representatives include conifers, cycads, ginkgo, and gnetales. angiosperms produce seeds enclosed in a structure such as a carpel or an o
pigment chlorophyll a. chlorophyll a ( as well as its plant and green algal - specific cousin chlorophyll b ) absorbs light in the blue - violet and orange / red parts of the spectrum while reflecting and transmitting the green light that we see as the characteristic colour of these organisms. the energy in the red and blue light that these pigments absorb is used by chloroplasts to make energy - rich carbon compounds from carbon dioxide and water by oxygenic photosynthesis, a process that generates molecular oxygen ( o2 ) as a by - product. the light energy captured by chlorophyll a is initially in the form of electrons ( and later a proton gradient ) that is used to make molecules of atp and nadph which temporarily store and transport energy. their energy is used in the light - independent reactions of the calvin cycle by the enzyme rubisco to produce molecules of the 3 - carbon sugar glyceraldehyde 3 - phosphate ( g3p ). glyceraldehyde 3 - phosphate is the first product of photosynthesis and the raw material from which glucose and almost all other organic molecules of biological origin are synthesised. some of the glucose is converted to starch which is stored in the chloroplast. starch is the characteristic energy store of most land plants and algae, while inulin, a polymer of fructose is used for the same purpose in the sunflower family asteraceae. some of the glucose is converted to sucrose ( common table sugar ) for export to the rest of the plant. unlike in animals ( which lack chloroplasts ), plants and their eukaryote relatives have delegated many biochemical roles to their chloroplasts, including synthesising all their fatty acids, and most amino acids. the fatty acids that chloroplasts make are used for many things, such as providing material to build cell membranes out of and making the polymer cutin which is found in the plant cuticle that protects land plants from drying out. plants synthesise a number of unique polymers like the polysaccharide molecules cellulose, pectin and xyloglucan from which the land plant cell wall is constructed. vascular land plants make lignin, a polymer used to strengthen the secondary cell walls of xylem tracheids and vessels to keep them from collapsing when a plant sucks water through them under water stress. lignin
Question: What kingdom contains organisms that are multicellular, have no chlorophyll, and absorb nutrients from decaying tissue?
A) Fungi
B) Plantae
C) Protista
D) Animalia
|
A) Fungi
|
Context:
based on 1 / 10 and 1 / 100 weight percentages of the carbon and other alloying elements they contain. thus, the extracting and purifying methods used to extract iron in a blast furnace can affect the quality of steel that is produced. solid materials are generally grouped into three basic classifications : ceramics, metals, and polymers. this broad classification is based on the empirical makeup and atomic structure of the solid materials, and most solids fall into one of these broad categories. an item that is often made from each of these materials types is the beverage container. the material types used for beverage containers accordingly provide different advantages and disadvantages, depending on the material used. ceramic ( glass ) containers are optically transparent, impervious to the passage of carbon dioxide, relatively inexpensive, and are easily recycled, but are also heavy and fracture easily. metal ( aluminum alloy ) is relatively strong, is a good barrier to the diffusion of carbon dioxide, and is easily recycled. however, the cans are opaque, expensive to produce, and are easily dented and punctured. polymers ( polyethylene plastic ) are relatively strong, can be optically transparent, are inexpensive and lightweight, and can be recyclable, but are not as impervious to the passage of carbon dioxide as aluminum and glass. = = = ceramics and glasses = = = another application of materials science is the study of ceramics and glasses, typically the most brittle materials with industrial relevance. many ceramics and glasses exhibit covalent or ionic - covalent bonding with sio2 ( silica ) as a fundamental building block. ceramics β not to be confused with raw, unfired clay β are usually seen in crystalline form. the vast majority of commercial glasses contain a metal oxide fused with silica. at the high temperatures used to prepare glass, the material is a viscous liquid which solidifies into a disordered state upon cooling. windowpanes and eyeglasses are important examples. fibers of glass are also used for long - range telecommunication and optical transmission. scratch resistant corning gorilla glass is a well - known example of the application of materials science to drastically improve the properties of common components. engineering ceramics are known for their stiffness and stability under high temperatures, compression and electrical stress. alumina, silicon carbide, and tungsten carbide are made from a fine powder of their constituents in a process of sintering with a binder. hot pressing provides higher density material. chemical vapor deposition can place a film of a ceramic on another
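The statement that steels are classified "based on 1 / 10 and 1 / 100 weight percentages of the carbon and other alloying elements" can be made concrete with a small worked example. As an editorial illustration using the common AISI-SAE numbering, which the passage itself does not name, the last two digits of a plain carbon steel grade give its nominal carbon content in hundredths of a weight percent:

$$\text{AISI 1045:}\quad C \approx \frac{45}{100}\ \text{wt\%} = 0.45\ \text{wt\% carbon}$$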
casting, foundry methods, blast furnace extraction, and electrolytic extraction are all part of the required knowledge of a materials engineer. often the presence, absence, or variation of minute quantities of secondary elements and compounds in a bulk material will greatly affect the final properties of the materials produced. for example, steels are classified based on 1 / 10 and 1 / 100 weight percentages of the carbon and other alloying elements they contain. thus, the extracting and purifying methods used to extract iron in a blast furnace can affect the quality of steel that is produced. solid materials are generally grouped into three basic classifications : ceramics, metals, and polymers. this broad classification is based on the empirical makeup and atomic structure of the solid materials, and most solids fall into one of these broad categories. an item that is often made from each of these materials types is the beverage container. the material types used for beverage containers accordingly provide different advantages and disadvantages, depending on the material used. ceramic ( glass ) containers are optically transparent, impervious to the passage of carbon dioxide, relatively inexpensive, and are easily recycled, but are also heavy and fracture easily. metal ( aluminum alloy ) is relatively strong, is a good barrier to the diffusion of carbon dioxide, and is easily recycled. however, the cans are opaque, expensive to produce, and are easily dented and punctured. polymers ( polyethylene plastic ) are relatively strong, can be optically transparent, are inexpensive and lightweight, and can be recyclable, but are not as impervious to the passage of carbon dioxide as aluminum and glass. = = = ceramics and glasses = = = another application of materials science is the study of ceramics and glasses, typically the most brittle materials with industrial relevance. many ceramics and glasses exhibit covalent or ionic - covalent bonding with sio2 ( silica ) as a fundamental building block. ceramics β not to be confused with raw, unfired clay β are usually seen in crystalline form. the vast majority of commercial glasses contain a metal oxide fused with silica. at the high temperatures used to prepare glass, the material is a viscous liquid which solidifies into a disordered state upon cooling. windowpanes and eyeglasses are important examples. fibers of glass are also used for long - range telecommunication and optical transmission. scratch resistant corning gorilla glass is a well - known example of the application of materials science to drastically improve the properties of common components. engineering ceramics are known for their stiffness and
is collected and processed to extract valuable metals. ore bodies often contain more than one valuable metal. tailings of a previous process may be used as a feed in another process to extract a secondary product from the original ore. additionally, a concentrate may contain more than one valuable metal. that concentrate would then be processed to separate the valuable metals into individual constituents. = = metal and its alloys = = much effort has been placed on understanding iron β carbon alloy system, which includes steels and cast irons. plain carbon steels ( those that contain essentially only carbon as an alloying element ) are used in low - cost, high - strength applications, where neither weight nor corrosion are a major concern. cast irons, including ductile iron, are also part of the iron - carbon system. iron - manganese - chromium alloys ( hadfield - type steels ) are also used in non - magnetic applications such as directional drilling. other engineering metals include aluminium, chromium, copper, magnesium, nickel, titanium, zinc, and silicon. these metals are most often used as alloys with the noted exception of silicon, which is not a metal. other forms include : stainless steel, particularly austenitic stainless steels, galvanized steel, nickel alloys, titanium alloys, or occasionally copper alloys are used, where resistance to corrosion is important. aluminium alloys and magnesium alloys are commonly used, when a lightweight strong part is required such as in automotive and aerospace applications. copper - nickel alloys ( such as monel ) are used in highly corrosive environments and for non - magnetic applications. nickel - based superalloys like inconel are used in high - temperature applications such as gas turbines, turbochargers, pressure vessels, and heat exchangers. for extremely high temperatures, single crystal alloys are used to minimize creep. in modern electronics, high purity single crystal silicon is essential for metal - oxide - silicon transistors ( mos ) and integrated circuits. = = production = = in production engineering, metallurgy is concerned with the production of metallic components for use in consumer or engineering products. this involves production of alloys, shaping, heat treatment and surface treatment of product. the task of the metallurgist is to achieve balance between material properties, such as cost, weight, strength, toughness, hardness, corrosion, fatigue resistance and performance in temperature extremes. to achieve this goal, the operating environment must be carefully considered. determining the hardness of the metal using the rockwell, vickers, and brinell hardness scales
". = = extraction = = extractive metallurgy is the practice of removing valuable metals from an ore and refining the extracted raw metals into a purer form. in order to convert a metal oxide or sulphide to a purer metal, the ore must be reduced physically, chemically, or electrolytically. extractive metallurgists are interested in three primary streams : feed, concentrate ( metal oxide / sulphide ) and tailings ( waste ). after mining, large pieces of the ore feed are broken through crushing or grinding in order to obtain particles small enough, where each particle is either mostly valuable or mostly waste. concentrating the particles of value in a form supporting separation enables the desired metal to be removed from waste products. mining may not be necessary, if the ore body and physical environment are conducive to leaching. leaching dissolves minerals in an ore body and results in an enriched solution. the solution is collected and processed to extract valuable metals. ore bodies often contain more than one valuable metal. tailings of a previous process may be used as a feed in another process to extract a secondary product from the original ore. additionally, a concentrate may contain more than one valuable metal. that concentrate would then be processed to separate the valuable metals into individual constituents. = = metal and its alloys = = much effort has been placed on understanding iron β carbon alloy system, which includes steels and cast irons. plain carbon steels ( those that contain essentially only carbon as an alloying element ) are used in low - cost, high - strength applications, where neither weight nor corrosion are a major concern. cast irons, including ductile iron, are also part of the iron - carbon system. iron - manganese - chromium alloys ( hadfield - type steels ) are also used in non - magnetic applications such as directional drilling. other engineering metals include aluminium, chromium, copper, magnesium, nickel, titanium, zinc, and silicon. these metals are most often used as alloys with the noted exception of silicon, which is not a metal. other forms include : stainless steel, particularly austenitic stainless steels, galvanized steel, nickel alloys, titanium alloys, or occasionally copper alloys are used, where resistance to corrosion is important. aluminium alloys and magnesium alloys are commonly used, when a lightweight strong part is required such as in automotive and aerospace applications. copper - nickel alloys ( such as monel ) are used in highly corrosive environments and for non - magnetic applications
, calorimetry, nuclear microscopy ( hefib ), rutherford backscattering, neutron diffraction, small - angle x - ray scattering ( saxs ), etc. ). besides material characterization, the material scientist or engineer also deals with extracting materials and converting them into useful forms. thus ingot casting, foundry methods, blast furnace extraction, and electrolytic extraction are all part of the required knowledge of a materials engineer. often the presence, absence, or variation of minute quantities of secondary elements and compounds in a bulk material will greatly affect the final properties of the materials produced. for example, steels are classified based on 1 / 10 and 1 / 100 weight percentages of the carbon and other alloying elements they contain. thus, the extracting and purifying methods used to extract iron in a blast furnace can affect the quality of steel that is produced. solid materials are generally grouped into three basic classifications : ceramics, metals, and polymers. this broad classification is based on the empirical makeup and atomic structure of the solid materials, and most solids fall into one of these broad categories. an item that is often made from each of these materials types is the beverage container. the material types used for beverage containers accordingly provide different advantages and disadvantages, depending on the material used. ceramic ( glass ) containers are optically transparent, impervious to the passage of carbon dioxide, relatively inexpensive, and are easily recycled, but are also heavy and fracture easily. metal ( aluminum alloy ) is relatively strong, is a good barrier to the diffusion of carbon dioxide, and is easily recycled. however, the cans are opaque, expensive to produce, and are easily dented and punctured. polymers ( polyethylene plastic ) are relatively strong, can be optically transparent, are inexpensive and lightweight, and can be recyclable, but are not as impervious to the passage of carbon dioxide as aluminum and glass. = = = ceramics and glasses = = = another application of materials science is the study of ceramics and glasses, typically the most brittle materials with industrial relevance. many ceramics and glasses exhibit covalent or ionic - covalent bonding with sio2 ( silica ) as a fundamental building block. ceramics β not to be confused with raw, unfired clay β are usually seen in crystalline form. the vast majority of commercial glasses contain a metal oxide fused with silica. at the high temperatures used to prepare glass, the material is a viscous liquid which solidifies into a disordered state upon
the valuable metals into individual constituents. = = metal and its alloys = = much effort has been placed on understanding iron β carbon alloy system, which includes steels and cast irons. plain carbon steels ( those that contain essentially only carbon as an alloying element ) are used in low - cost, high - strength applications, where neither weight nor corrosion are a major concern. cast irons, including ductile iron, are also part of the iron - carbon system. iron - manganese - chromium alloys ( hadfield - type steels ) are also used in non - magnetic applications such as directional drilling. other engineering metals include aluminium, chromium, copper, magnesium, nickel, titanium, zinc, and silicon. these metals are most often used as alloys with the noted exception of silicon, which is not a metal. other forms include : stainless steel, particularly austenitic stainless steels, galvanized steel, nickel alloys, titanium alloys, or occasionally copper alloys are used, where resistance to corrosion is important. aluminium alloys and magnesium alloys are commonly used, when a lightweight strong part is required such as in automotive and aerospace applications. copper - nickel alloys ( such as monel ) are used in highly corrosive environments and for non - magnetic applications. nickel - based superalloys like inconel are used in high - temperature applications such as gas turbines, turbochargers, pressure vessels, and heat exchangers. for extremely high temperatures, single crystal alloys are used to minimize creep. in modern electronics, high purity single crystal silicon is essential for metal - oxide - silicon transistors ( mos ) and integrated circuits. = = production = = in production engineering, metallurgy is concerned with the production of metallic components for use in consumer or engineering products. this involves production of alloys, shaping, heat treatment and surface treatment of product. the task of the metallurgist is to achieve balance between material properties, such as cost, weight, strength, toughness, hardness, corrosion, fatigue resistance and performance in temperature extremes. to achieve this goal, the operating environment must be carefully considered. determining the hardness of the metal using the rockwell, vickers, and brinell hardness scales is a commonly used practice that helps better understand the metal ' s elasticity and plasticity for different applications and production processes. in a saltwater environment, most ferrous metals and some non - ferrous alloys corrode quickly. metals exposed to cold or cryogenic conditions may undergo a ductile to brittle
. historically, metallurgy has predominately focused on the production of metals. metal production begins with the processing of ores to extract the metal, and includes the mixture of metals to make alloys. metal alloys are often a blend of at least two different metallic elements. however, non - metallic elements are often added to alloys in order to achieve properties suitable for an application. the study of metal production is subdivided into ferrous metallurgy ( also known as black metallurgy ) and non - ferrous metallurgy, also known as colored metallurgy. ferrous metallurgy involves processes and alloys based on iron, while non - ferrous metallurgy involves processes and alloys based on other metals. the production of ferrous metals accounts for 95 % of world metal production. modern metallurgists work in both emerging and traditional areas as part of an interdisciplinary team alongside material scientists and other engineers. some traditional areas include mineral processing, metal production, heat treatment, failure analysis, and the joining of metals ( including welding, brazing, and soldering ). emerging areas for metallurgists include nanotechnology, superconductors, composites, biomedical materials, electronic materials ( semiconductors ) and surface engineering. = = etymology and pronunciation = = metallurgy derives from the ancient greek μεταλλου
ργός, metallourgos, " worker in metal ", from μέταλλον, metallon, " mine, metal " + ἔργον, ergon, " work " the word was originally an alchemist ' s term for the extraction of metals from minerals, the ending - urgy signifying a process, especially manufacturing : it was discussed in this sense in the 1797 encyclopædia britannica. in the late 19th century, metallurgy ' s definition was extended to the more general scientific study of metals, alloys, and related processes. in english, the pronunciation is the more common one in the united kingdom. the pronunciation is the more common one in the us and is the first - listed variant in various american dictionaries, including merriam - webster collegiate and american heritage. = = history = = the earliest metal employed by humans appears to be gold, which can be found " native ". small amounts of natural gold, dating to the late paleolithic period, 40, 000 bc, have been found in spanish caves. silver, copper, tin and meteoric iron
building block. ceramics - not to be confused with raw, unfired clay - are usually seen in crystalline form. the vast majority of commercial glasses contain a metal oxide fused with silica. at the high temperatures used to prepare glass, the material is a viscous liquid which solidifies into a disordered state upon cooling. windowpanes and eyeglasses are important examples. fibers of glass are also used for long - range telecommunication and optical transmission. scratch resistant corning gorilla glass is a well - known example of the application of materials science to drastically improve the properties of common components. engineering ceramics are known for their stiffness and stability under high temperatures, compression and electrical stress. alumina, silicon carbide, and tungsten carbide are made from a fine powder of their constituents in a process of sintering with a binder. hot pressing provides higher density material. chemical vapor deposition can place a film of a ceramic on another material. cermets are ceramic particles containing some metals. the wear resistance of tools is derived from cemented carbides with the metal phase of cobalt and nickel typically added to modify properties. ceramics can be significantly strengthened for engineering applications using the principle of crack deflection. this process involves the strategic addition of second - phase particles within a ceramic matrix, optimizing their shape, size, and distribution to direct and control crack propagation. this approach enhances fracture toughness, paving the way for the creation of advanced, high - performance ceramics in various industries. = = = composites = = = another application of materials science in industry is making composite materials. these are structured materials composed of two or more macroscopic phases. applications range from structural elements such as steel - reinforced concrete, to the thermal insulating tiles, which play a key and integral role in nasa ' s space shuttle thermal protection system, which is used to protect the surface of the shuttle from the heat of re - entry into the earth ' s atmosphere. one example is reinforced carbon - carbon ( rcc ), the light gray material, which withstands re - entry temperatures up to 1, 510 °c ( 2, 750 °f ) and protects the space shuttle ' s wing leading edges and nose cap. rcc is a laminated composite material made from graphite rayon cloth and impregnated with a phenolic resin. after curing at high temperature in an autoclave, the laminate is pyrolized to convert the resin to carbon, impregnated with furfuryl alcohol in a
use less energy than conventional thermal separation processes such as distillation, sublimation or crystallization. the separation process is purely physical and both fractions ( permeate and retentate ) can be obtained as useful products. cold separation using membrane technology is widely used in the food technology, biotechnology and pharmaceutical industries. furthermore, using membranes enables separations to take place that would be impossible using thermal separation methods. for example, it is impossible to separate the constituents of azeotropic liquids or solutes which form isomorphic crystals by distillation or recrystallization but such separations can be achieved using membrane technology. depending on the type of membrane, the selective separation of certain individual substances or substance mixtures is possible. important technical applications include the production of drinking water by reverse osmosis. in waste water treatment, membrane technology is becoming increasingly important. ultra / microfiltration can be very effective in removing colloids and macromolecules from wastewater. this is needed if wastewater is discharged into sensitive waters especially those designated for contact water sports and recreation. about half of the market is in medical applications such as artificial kidneys to remove toxic substances by hemodialysis and as artificial lung for bubble - free supply of oxygen in the blood. the importance of membrane technology is growing in the field of environmental protection ( nano - mem - pro ippc database ). even in modern energy recovery techniques, membranes are increasingly used, for example in fuel cells and in osmotic power plants. = = mass transfer = = two basic models can be distinguished for mass transfer through the membrane : the solution - diffusion model and the hydrodynamic model. in real membranes, these two transport mechanisms certainly occur side by side, especially during ultra - filtration. = = = solution - diffusion model = = = in the solution - diffusion model, transport occurs only by diffusion. the component that needs to be transported must first be dissolved in the membrane. the general approach of the solution - diffusion model is to assume that the chemical potential of the feed and permeate fluids are in equilibrium with the adjacent membrane surfaces such that appropriate expressions for the chemical potential in the fluid and membrane phases can be equated at the solution - membrane interface. this principle is more important for dense membranes without natural pores such as those used for reverse osmosis and in fuel cells. during the filtration process a boundary layer forms on the membrane. this concentration gradient is created by molecules which cannot pass through the membrane. the
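To make the solution-diffusion description above concrete, the minimal sketch below reduces it to Fick's first law across a dense membrane, assuming a constant diffusion coefficient and a constant partition (solubility) coefficient so that permeability is simply their product; all numbers in the example are placeholders, not data from the passage.

```python
# Minimal sketch of the solution-diffusion picture reduced to Fick's first law:
# flux J = (D * K / l) * (c_feed - c_permeate), with permeability P = D * K.
# Values below are hypothetical placeholders.

def solution_diffusion_flux(diffusivity_m2_s: float,
                            partition_coeff: float,
                            thickness_m: float,
                            c_feed_mol_m3: float,
                            c_permeate_mol_m3: float) -> float:
    """Steady-state molar flux (mol m^-2 s^-1) through a dense membrane."""
    permeability = diffusivity_m2_s * partition_coeff
    return permeability * (c_feed_mol_m3 - c_permeate_mol_m3) / thickness_m

# Example with made-up numbers: a 100-micrometre dense film.
print(solution_diffusion_flux(1e-10, 0.8, 100e-6, 50.0, 5.0))
```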
joints. = = = metal alloys = = = the alloys of iron ( steel, stainless steel, cast iron, tool steel, alloy steels ) make up the largest proportion of metals today both by quantity and commercial value. iron alloyed with various proportions of carbon gives low, mid and high carbon steels. an iron - carbon alloy is only considered steel if the carbon level is between 0. 01 % and 2. 00 % by weight. for steels, the hardness and tensile strength of the steel is related to the amount of carbon present, with increasing carbon levels also leading to lower ductility and toughness. heat treatment processes such as quenching and tempering can significantly change these properties, however. in contrast, certain metal alloys exhibit unique properties where their size and density remain unchanged across a range of temperatures. cast iron is defined as an iron - carbon alloy with more than 2. 00 %, but less than 6. 67 % carbon. stainless steel is defined as a regular steel alloy with greater than 10 % by weight alloying content of chromium. nickel and molybdenum are typically also added in stainless steels. other significant metallic alloys are those of aluminium, titanium, copper and magnesium. copper alloys have been known for a long time ( since the bronze age ), while the alloys of the other three metals have been relatively recently developed. due to the chemical reactivity of these metals, the electrolytic extraction processes required were only developed relatively recently. the alloys of aluminium, titanium and magnesium are also known and valued for their high strength to weight ratios and, in the case of magnesium, their ability to provide electromagnetic shielding. these materials are ideal for situations where high strength to weight ratios are more important than bulk cost, such as in the aerospace industry and certain automotive engineering applications. = = = semiconductors = = = a semiconductor is a material that has a resistivity between a conductor and insulator. modern day electronics run on semiconductors, and the industry had an estimated us $ 530 billion market in 2021. its electronic properties can be greatly altered through intentionally introducing impurities in a process referred to as doping. semiconductor materials are used to build diodes, transistors, light - emitting diodes ( leds ), and analog and digital electric circuits, among their many uses. semiconductor devices have replaced thermionic devices like vacuum tubes in most applications. semiconductor devices are manufactured both as single discrete devices and as integrated circuits ( ics ), which consist of a number - from a
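The composition limits quoted in the passage above (steel at 0.01-2.00 wt% carbon, cast iron at 2.00-6.67 wt% carbon, stainless steel at more than 10 wt% chromium) can be turned into a toy classifier; this is only a sketch of those quoted thresholds, not a substitute for real alloy specifications.

```python
# Toy classifier using only the composition limits quoted in the passage:
# steel if 0.01-2.00 wt% C, cast iron if 2.00-6.67 wt% C,
# and "stainless" if the steel carries more than 10 wt% chromium.

def classify_iron_alloy(carbon_wt_pct: float, chromium_wt_pct: float = 0.0) -> str:
    if 0.01 <= carbon_wt_pct <= 2.00:
        return "stainless steel" if chromium_wt_pct > 10.0 else "steel"
    if 2.00 < carbon_wt_pct < 6.67:
        return "cast iron"
    return "outside the steel / cast-iron range"

print(classify_iron_alloy(0.45))          # plain carbon steel
print(classify_iron_alloy(0.08, 18.0))    # stainless steel
print(classify_iron_alloy(3.5))           # cast iron
```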
Question: A salvage yard contains a mixture of iron, glass, aluminum, and plastic. Which property of iron does the salvage yard take advantage of when separating the iron from the rest of the materials?
A) magnetic
B) electrical
C) ductility
D) malleability
|
A) magnetic
|
Context:
the group velocity of light has been measured at eight different wavelengths between 385 nm and 532 nm in the mediterranean sea at a depth of about 2. 2 km with the antares optical beacon systems. a parametrisation of the dependence of the refractive index on wavelength based on the salinity, pressure and temperature of the sea water at the antares site is in good agreement with these measurements.
subsea engineering and the ability to detect, track and destroy submarines ( anti - submarine warfare ) required the parallel development of a host of marine scientific instrumentation and sensors. visible light is not transferred far underwater, so the medium for transmission of data is primarily acoustic. high - frequency sound is used to measure the depth of the ocean, determine the nature of the seafloor, and detect submerged objects. the higher the frequency, the higher the definition of the data that is returned. sound navigation and ranging or sonar was developed during the first world war to detect submarines, and has been greatly refined through to the present day. submarines similarly use sonar equipment to detect and target other submarines and surface ships, and to detect submerged obstacles such as seamounts that pose a navigational obstacle. simple echo - sounders point straight down and can give an accurate reading of ocean depth ( or look up at the underside of sea - ice ). more advanced echo sounders use a fan - shaped beam or sound, or multiple beams to derive highly detailed images of the ocean floor. high power systems can penetrate the soil and seabed rocks to give information about the geology of the seafloor, and are widely used in geophysics for the discovery of hydrocarbons, or for engineering survey. for close - range underwater communications, optical transmission is possible, mainly using blue lasers. these have a high bandwidth compared with acoustic systems, but the range is usually only a few tens of metres, and ideally at night. as well as acoustic communications and navigation, sensors have been developed to measure ocean parameters such as temperature, salinity, oxygen levels and other properties including nitrate levels, levels of trace chemicals and environmental dna. the industry trend has been towards smaller, more accurate and more affordable systems so that they can be purchased and used by university departments and small companies as well as large corporations, research organisations and governments. the sensors and instruments are fitted to autonomous and remotely - operated systems as well as ships, and are enabling these systems to take on tasks that hitherto required an expensive human - crewed platform. manufacture of marine sensors and instruments mainly takes place in asia, europe and north america. products are advertised in specialist journals, and through trade shows such as oceanology international and ocean business which help raise awareness of the products. = = = environmental engineering = = = in every coastal and offshore project, environmental sustainability is an important consideration for the preservation of ocean ecosystems and natural resources. instances in which marine engineers benefit from knowledge of environmental engineering include creation of fisheries, clean
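As a back-of-the-envelope companion to the echo-sounder description above, depth follows from the two-way travel time of the acoustic pulse and the speed of sound in seawater; the nominal 1500 m/s used below is an assumed round figure (the true value varies with temperature, salinity and pressure).

```python
# Echo-sounder depth estimate: depth = c * t / 2, where t is the two-way
# travel time of the pulse and c the (assumed nominal) speed of sound in seawater.

def echo_depth_m(two_way_travel_s: float, sound_speed_m_s: float = 1500.0) -> float:
    """Water depth below the transducer, in metres."""
    return sound_speed_m_s * two_way_travel_s / 2.0

print(echo_depth_m(4.0))   # roughly 3000 m of water for a 4-second echo
```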
current in passing over from one concave bank to the next on the opposite side. the lowering of such a shoal by dredging merely effects a temporary deepening, for it soon forms again from the causes which produced it. the removal, moreover, of the rocky obstructions at rapids, though increasing the depth and equalizing the flow at these places, produces a lowering of the river above the rapids by facilitating the efflux, which may result in the appearance of fresh shoals at the low stage of the river. where, however, narrow rocky reefs or other hard shoals stretch across the bottom of a river and present obstacles to the erosion by the current of the soft materials forming the bed of the river above and below, their removal may result in permanent improvement by enabling the river to deepen its bed by natural scour. the capability of a river to provide a waterway for navigation during the summer or throughout the dry season depends on the depth that can be secured in the channel at the lowest stage. the problem in the dry season is the small discharge and deficiency in scour during this period. a typical solution is to restrict the width of the low - water channel, concentrate all of the flow in it, and also to fix its position so that it is scoured out every year by the floods which follow the deepest part of the bed along the line of the strongest current. this can be effected by closing subsidiary low - water channels with dikes across them, and narrowing the channel at the low stage by low - dipping cross dikes extending from the river banks down the slope and pointing slightly up - stream so as to direct the water flowing over them into a central channel. = = estuarine works = = the needs of navigation may also require that a stable, continuous, navigable channel is prolonged from the navigable river to deep water at the mouth of the estuary. the interaction of river flow and tide needs to be modeled by computer or using scale models, moulded to the configuration of the estuary under consideration and reproducing in miniature the tidal ebb and flow and fresh - water discharge over a bed of fine sand, in which various lines of training walls can be successively inserted. the models should be capable of furnishing valuable indications of the respective effects and comparative merits of the different schemes proposed for works. = = see also = = bridge scour flood control = = references = = = = external links = = u. s. army corps of engineers β civil works program river morphology and stream restoration references
, and carpentry. the trade of the ship - wright. the trade of the wheel - wright. the trade of the wainwright : making wagons. ( the latin word for a two - wheeled wagon is carpentum, the maker of which was a carpenter. ) ( wright is the agent form of the word wrought, which itself is the original past passive participle of the word work, now superseded by the weak verb forms worker and worked respectively. ) blacksmithing and the various related smithing and metal - crafts. folk music played on acoustic instruments. mathematics ( particularly, pure mathematics ) organic farming and animal husbandry ( i. e. ; agriculture as practiced by all american farmers prior to world war ii ). milling in the sense of operating hand - constructed equipment with the intent to either grind grain, or the reduction of timber to lumber as practiced in a saw - mill. fulling, felting, drop spindle spinning, hand knitting, crochet, & similar textile preparation. the production of charcoal by the collier, for use in home heating, foundry operations, smelting, the various smithing trades, and for brushing ones teeth as in colonial america. glass - blowing. various subskills of food preservation : smoking salting pickling drying note : home canning is a counter example of a low technology since some of the supplies needed to pursue this skill rely on a global trade network and an existing manufacturing infrastructure. the production of various alcoholic beverages : wine : poorly preserved fruit juice. beer : a way to preserve the calories of grain products from decay. whiskey : an improved ( distilled ) form of beer. flint - knapping masonry as used in castles, cathedrals, and root cellars. = = = domestic or consumer = = = ( non exhaustive ) list of low - tech in a westerner ' s everyday life : getting around by bike, and repairing it with second - hand materials using a cargo bike to carry loads ( rather than a gasoline vehicle ) drying clothes on a clothesline or on a drying rack washing clothes by hand, or in a human - powered washing machine cooling one ' s home with a fan or an air expander ( rather than electrical appliances such as air conditioners ) using a bell as door bell a cellar, " desert fridge ", or icebox ( rather than a fridge or freezer ) long - distance travel by sailing boat ( rather than by plane ) a wicker bag or a tote bag ( rather than a plastic bag ) to
impediment to up - stream navigation, and there are generally variations in water level, and when the discharge becomes small in the dry season. it is impossible to maintain a sufficient depth of water in the low - water channel. the possibility to secure uniformity of depth in a river by lowering the shoals obstructing the channel depends on the nature of the shoals. a soft shoal in the bed of a river is due to deposit from a diminution in velocity of flow, produced by a reduction in fall and by a widening of the channel, or to a loss in concentration of the scour of the main current in passing over from one concave bank to the next on the opposite side. the lowering of such a shoal by dredging merely effects a temporary deepening, for it soon forms again from the causes which produced it. the removal, moreover, of the rocky obstructions at rapids, though increasing the depth and equalizing the flow at these places, produces a lowering of the river above the rapids by facilitating the efflux, which may result in the appearance of fresh shoals at the low stage of the river. where, however, narrow rocky reefs or other hard shoals stretch across the bottom of a river and present obstacles to the erosion by the current of the soft materials forming the bed of the river above and below, their removal may result in permanent improvement by enabling the river to deepen its bed by natural scour. the capability of a river to provide a waterway for navigation during the summer or throughout the dry season depends on the depth that can be secured in the channel at the lowest stage. the problem in the dry season is the small discharge and deficiency in scour during this period. a typical solution is to restrict the width of the low - water channel, concentrate all of the flow in it, and also to fix its position so that it is scoured out every year by the floods which follow the deepest part of the bed along the line of the strongest current. this can be effected by closing subsidiary low - water channels with dikes across them, and narrowing the channel at the low stage by low - dipping cross dikes extending from the river banks down the slope and pointing slightly up - stream so as to direct the water flowing over them into a central channel. = = estuarine works = = the needs of navigation may also require that a stable, continuous, navigable channel is prolonged from the navigable river to deep water at the mouth of the estuary. the interaction of river
, behind which are structures termed reentrant triangles. radar waves penetrating the skin get trapped in these structures, reflecting off the internal faces and losing energy. this method was first used on the blackbird series : a - 12, yf - 12a, lockheed sr - 71 blackbird. the most efficient way to reflect radar waves back to the emitting radar is with orthogonal metal plates, forming a corner reflector consisting of either a dihedral ( two plates ) or a trihedral ( three orthogonal plates ). this configuration occurs in the tail of a conventional aircraft, where the vertical and horizontal components of the tail are set at right angles. stealth aircraft such as the f - 117 use a different arrangement, tilting the tail surfaces to reduce corner reflections formed between them. a more radical method is to omit the tail, as in the b - 2 spirit. the b - 2 ' s clean, low - drag flying wing configuration gives it exceptional range and reduces its radar profile. the flying wing design most closely resembles a so - called infinite flat plate ( as vertical control surfaces dramatically increase rcs ), the perfect stealth shape, as it would have no angles to reflect back radar waves. in addition to altering the tail, stealth design must bury the engines within the wing or fuselage, or in some cases where stealth is applied to an extant aircraft, install baffles in the air intakes, so that the compressor blades are not visible to radar. a stealthy shape must be devoid of complex bumps or protrusions of any kind, meaning that weapons, fuel tanks, and other stores must not be carried externally. any stealthy vehicle becomes un - stealthy when a door or hatch opens. parallel alignment of edges or even surfaces is also often used in stealth designs. the technique involves using a small number of edge orientations in the shape of the structure. for example, on the f - 22a raptor, the leading edges of the wing and the tail planes are set at the same angle. other smaller structures, such as the air intake bypass doors and the air refueling aperture, also use the same angles. the effect of this is to return a narrow radar signal in a very specific direction away from the radar emitter rather than returning a diffuse signal detectable at many angles. the effect is sometimes called " glitter " after the very brief signal seen when the reflected beam passes across a detector. it can be difficult for the radar operator to distinguish between a glitter event and a digital glitch in the processing system. stealth air
for inland navigation in the lower portion of their course, as, for instance, the rhine, the danube and the mississippi. river engineering works are only required to prevent changes in the course of the stream, to regulate its depth, and especially to fix the low - water channel and concentrate the flow in it, so as to increase as far as practicable the navigable depth at the lowest stage of the water level. engineering works to increase the navigability of rivers can only be advantageously undertaken in large rivers with a moderate fall and a fair discharge at their lowest stage, for with a large fall the current presents a great impediment to up - stream navigation, and there are generally variations in water level, and when the discharge becomes small in the dry season. it is impossible to maintain a sufficient depth of water in the low - water channel. the possibility to secure uniformity of depth in a river by lowering the shoals obstructing the channel depends on the nature of the shoals. a soft shoal in the bed of a river is due to deposit from a diminution in velocity of flow, produced by a reduction in fall and by a widening of the channel, or to a loss in concentration of the scour of the main current in passing over from one concave bank to the next on the opposite side. the lowering of such a shoal by dredging merely effects a temporary deepening, for it soon forms again from the causes which produced it. the removal, moreover, of the rocky obstructions at rapids, though increasing the depth and equalizing the flow at these places, produces a lowering of the river above the rapids by facilitating the efflux, which may result in the appearance of fresh shoals at the low stage of the river. where, however, narrow rocky reefs or other hard shoals stretch across the bottom of a river and present obstacles to the erosion by the current of the soft materials forming the bed of the river above and below, their removal may result in permanent improvement by enabling the river to deepen its bed by natural scour. the capability of a river to provide a waterway for navigation during the summer or throughout the dry season depends on the depth that can be secured in the channel at the lowest stage. the problem in the dry season is the small discharge and deficiency in scour during this period. a typical solution is to restrict the width of the low - water channel, concentrate all of the flow in it, and also to fix its position so that it is
river - beds ), but not for where there may be large obstructions in the ground. an open caisson that is used in soft grounds or high water tables, where open trench excavations are impractical, can also be used to install deep manholes, pump stations and reception / launch pits for microtunnelling, pipe jacking and other operations. a caisson is sunk by self - weight, concrete or water ballast placed on top, or by hydraulic jacks. the leading edge ( or cutting shoe ) of the caisson is sloped out at a sharp angle to aid sinking in a vertical manner ; it is usually made of steel. the shoe is generally wider than the caisson to reduce friction, and the leading edge may be supplied with pressurised bentonite slurry, which swells in water, stabilizing settlement by filling depressions and voids. an open caisson may fill with water during sinking. the material is excavated by clamshell excavator bucket on crane. the formation level subsoil may still not be suitable for excavation or bearing capacity. the water in the caisson ( due to a high water table ) balances the upthrust forces of the soft soils underneath. if dewatered, the base may " pipe " or " boil ", causing the caisson to sink. to combat this problem, piles may be driven from the surface to act as : load - bearing walls, in that they transmit loads to deeper soils. anchors, in that they resist flotation because of the friction at the interface between their surfaces and the surrounding earth into which they have been driven. h - beam sections ( typical column sections, due to resistance to bending in all axis ) may be driven at angles " raked " to rock or other firmer soils ; the h - beams are left extended above the base. a reinforced concrete plug may be placed under the water, a process known as tremie concrete placement. when the caisson is dewatered, this plug acts as a pile cap, resisting the upward forces of the subsoil. = = = monolithic = = = a monolithic caisson ( or simply a monolith ) is larger than the other types of caisson, but similar to open caissons. such caissons are often found in quay walls, where resistance to impact from ships is required. = = = pneumatic = = = shallow caissons may be open to the air, whereas pneumatic caisson
microtunnelling, pipe jacking and other operations. a caisson is sunk by self - weight, concrete or water ballast placed on top, or by hydraulic jacks. the leading edge ( or cutting shoe ) of the caisson is sloped out at a sharp angle to aid sinking in a vertical manner ; it is usually made of steel. the shoe is generally wider than the caisson to reduce friction, and the leading edge may be supplied with pressurised bentonite slurry, which swells in water, stabilizing settlement by filling depressions and voids. an open caisson may fill with water during sinking. the material is excavated by clamshell excavator bucket on crane. the formation level subsoil may still not be suitable for excavation or bearing capacity. the water in the caisson ( due to a high water table ) balances the upthrust forces of the soft soils underneath. if dewatered, the base may " pipe " or " boil ", causing the caisson to sink. to combat this problem, piles may be driven from the surface to act as : load - bearing walls, in that they transmit loads to deeper soils. anchors, in that they resist flotation because of the friction at the interface between their surfaces and the surrounding earth into which they have been driven. h - beam sections ( typical column sections, due to resistance to bending in all axes ) may be driven at angles " raked " to rock or other firmer soils ; the h - beams are left extended above the base. a reinforced concrete plug may be placed under the water, a process known as tremie concrete placement. when the caisson is dewatered, this plug acts as a pile cap, resisting the upward forces of the subsoil. = = = monolithic = = = a monolithic caisson ( or simply a monolith ) is larger than the other types of caisson, but similar to open caissons. such caissons are often found in quay walls, where resistance to impact from ships is required. = = = pneumatic = = = shallow caissons may be open to the air, whereas pneumatic caissons ( sometimes called pressurized caissons ), which penetrate soft mud, are bottomless boxes sealed at the top and filled with compressed air to keep water and mud out at depth. an airlock allows access to the chamber. workers, called sandhogs in american english, move mud and rock debris ( called
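The sinking criterion described in the caisson passages above (self-weight plus ballast driving the caisson down against buoyant upthrust and wall friction) can be sketched as a simple force balance; the weights, volume and friction figure below are hypothetical and the friction term is treated as a single lumped value rather than computed from soil properties.

```python
# Rough force balance for sinking an open caisson (illustrative assumptions only):
# the caisson sinks while self-weight plus added ballast exceeds buoyant upthrust
# plus skin friction on the embedded walls.

def caisson_sinks(self_weight_kn: float,
                  ballast_kn: float,
                  submerged_volume_m3: float,
                  skin_friction_kn: float,
                  water_unit_weight_kn_m3: float = 9.81) -> bool:
    """True if the driving weight exceeds the resisting upthrust plus friction."""
    upthrust_kn = water_unit_weight_kn_m3 * submerged_volume_m3  # Archimedes
    return self_weight_kn + ballast_kn > upthrust_kn + skin_friction_kn

print(caisson_sinks(self_weight_kn=8000.0, ballast_kn=2500.0,
                    submerged_volume_m3=600.0, skin_friction_kn=3500.0))
```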
bear ' ) was conspicuous on radar. it is now known that propellers and jet turbine blades produce a bright radar image ; the bear has four pairs of large 18 - foot ( 5. 6 m ) diameter contra - rotating propellers. another important factor is internal construction. some stealth aircraft have skin that is radar transparent or absorbing, behind which are structures termed reentrant triangles. radar waves penetrating the skin get trapped in these structures, reflecting off the internal faces and losing energy. this method was first used on the blackbird series : a - 12, yf - 12a, lockheed sr - 71 blackbird. the most efficient way to reflect radar waves back to the emitting radar is with orthogonal metal plates, forming a corner reflector consisting of either a dihedral ( two plates ) or a trihedral ( three orthogonal plates ). this configuration occurs in the tail of a conventional aircraft, where the vertical and horizontal components of the tail are set at right angles. stealth aircraft such as the f - 117 use a different arrangement, tilting the tail surfaces to reduce corner reflections formed between them. a more radical method is to omit the tail, as in the b - 2 spirit. the b - 2 ' s clean, low - drag flying wing configuration gives it exceptional range and reduces its radar profile. the flying wing design most closely resembles a so - called infinite flat plate ( as vertical control surfaces dramatically increase rcs ), the perfect stealth shape, as it would have no angles to reflect back radar waves. in addition to altering the tail, stealth design must bury the engines within the wing or fuselage, or in some cases where stealth is applied to an extant aircraft, install baffles in the air intakes, so that the compressor blades are not visible to radar. a stealthy shape must be devoid of complex bumps or protrusions of any kind, meaning that weapons, fuel tanks, and other stores must not be carried externally. any stealthy vehicle becomes un - stealthy when a door or hatch opens. parallel alignment of edges or even surfaces is also often used in stealth designs. the technique involves using a small number of edge orientations in the shape of the structure. for example, on the f - 22a raptor, the leading edges of the wing and the tail planes are set at the same angle. other smaller structures, such as the air intake bypass doors and the air refueling aperture, also use the same angles. the effect of this is to return a narrow radar signal in a very specific direction away from the radar
Question: While a sailor was out fishing, he got lost at sea. Which object would help him direct the boat back to shore?
A) Rope
B) Magnetic compass
C) Blanket
D) Measuring tape
|
B) Magnetic compass
|
Context:
geomorphology studies the origin of landscapes. structural geology studies the deformation of rocks to produce mountains and lowlands. resource geology studies how energy resources can be obtained from minerals. environmental geology studies how pollution and contaminants affect soil and rock. mineralogy is the study of minerals and includes the study of mineral formation, crystal structure, hazards associated with minerals, and the physical and chemical properties of minerals. petrology is the study of rocks, including the formation and composition of rocks. petrography is a branch of petrology that studies the typology and classification of rocks. = = earth ' s interior = = plate tectonics, mountain ranges, volcanoes, and earthquakes are geological phenomena that can be explained in terms of physical and chemical processes in the earth ' s crust. beneath the earth ' s crust lies the mantle which is heated by the radioactive decay of heavy elements. the mantle is not quite solid and consists of magma which is in a state of semi - perpetual convection. this convection process causes the lithospheric plates to move, albeit slowly. the resulting process is known as plate tectonics. areas of the crust where new crust is created are called divergent boundaries, those where it is brought back into the earth are convergent boundaries and those where plates slide past each other, but no new lithospheric material is created or destroyed, are referred to as transform ( or conservative ) boundaries. earthquakes result from the movement of the lithospheric plates, and they often occur near convergent boundaries where parts of the crust are forced into the earth as part of subduction. plate tectonics might be thought of as the process by which the earth is resurfaced. as the result of seafloor spreading, new crust and lithosphere is created by the flow of magma from the mantle to the near surface, through fissures, where it cools and solidifies. through subduction, oceanic crust and lithosphere vehemently returns to the convecting mantle. volcanoes result primarily from the melting of subducted crust material. crust material that is forced into the asthenosphere melts, and some portion of the melted material becomes light enough to rise to the surface - giving birth to volcanoes. = = atmospheric science = = atmospheric science initially developed in the late - 19th century as a means to forecast the weather through meteorology, the study of weather. atmospheric chemistry was developed in the 20th century to measure air pollution and expanded in the 1970s in response to
water content and the internal evolution of terrestrial planets and icy bodies are closely linked. the distribution of water in planetary systems is controlled by the temperature structure in the protoplanetary disk and dynamics and migration of planetesimals and planetary embryos. this results in the formation of planetesimals and planetary embryos with a great variety of compositions, water contents and degrees of oxidation. the internal evolution and especially the formation time of planetesimals relative to the timescale of radiogenic heating by short - lived 26al decay may govern the amount of hydrous silicates and leftover rock - ice mixtures available in the late stages of their evolution. in turn, water content may affect the early internal evolution of the planetesimals and in particular metal - silicate separation processes. moreover, water content may contribute to an increase of oxygen fugacity and thus affect the concentrations of siderophile elements within the silicate reservoirs of solar system objects. finally, the water content strongly influences the differentiation rate of the icy moons, controls their internal evolution and governs the alteration processes occurring in their deep interiors.
electrochemistry, and chemical degradation ( corrosion ). in contrast, physical metallurgy focuses on the mechanical properties of metals, the physical properties of metals, and the physical performance of metals. topics studied in physical metallurgy include crystallography, material characterization, mechanical metallurgy, phase transformations, and failure mechanisms. historically, metallurgy has predominately focused on the production of metals. metal production begins with the processing of ores to extract the metal, and includes the mixture of metals to make alloys. metal alloys are often a blend of at least two different metallic elements. however, non - metallic elements are often added to alloys in order to achieve properties suitable for an application. the study of metal production is subdivided into ferrous metallurgy ( also known as black metallurgy ) and non - ferrous metallurgy, also known as colored metallurgy. ferrous metallurgy involves processes and alloys based on iron, while non - ferrous metallurgy involves processes and alloys based on other metals. the production of ferrous metals accounts for 95 % of world metal production. modern metallurgists work in both emerging and traditional areas as part of an interdisciplinary team alongside material scientists and other engineers. some traditional areas include mineral processing, metal production, heat treatment, failure analysis, and the joining of metals ( including welding, brazing, and soldering ). emerging areas for metallurgists include nanotechnology, superconductors, composites, biomedical materials, electronic materials ( semiconductors ) and surface engineering. = = etymology and pronunciation = = metallurgy derives from the ancient greek μεταλλου
ργός, metallourgos, " worker in metal ", from μέταλλον, metallon, " mine, metal " + ἔργον, ergon, " work " the word was originally an alchemist ' s term for the extraction of metals from minerals, the ending - urgy signifying a process, especially manufacturing : it was discussed in this sense in the 1797 encyclopædia britannica. in the late 19th century, metallurgy ' s definition was extended to the more general scientific study of metals, alloys, and related processes. in english, the pronunciation is the more common one in the united kingdom. the pronunciation is the more common one in the us and is the first - listed variant in various american dictionaries, including merriam - webster collegiate
are the cryosphere ( corresponding to ice ) as a distinct portion of the hydrosphere and the pedosphere ( corresponding to soil ) as an active and intermixed sphere. the following fields of science are generally categorized within the earth sciences : geology describes the rocky parts of the earth ' s crust ( or lithosphere ) and its historic development. major subdisciplines are mineralogy and petrology, geomorphology, paleontology, stratigraphy, structural geology, engineering geology, and sedimentology. physical geography focuses on geography as an earth science. physical geography is the study of earth ' s seasons, climate, atmosphere, soil, streams, landforms, and oceans. physical geography can be divided into several branches or related fields, as follows : geomorphology, biogeography, environmental geography, palaeogeography, climatology, meteorology, coastal geography, hydrology, ecology, glaciology. geophysics and geodesy investigate the shape of the earth, its reaction to forces and its magnetic and gravity fields. geophysicists explore the earth ' s core and mantle as well as the tectonic and seismic activity of the lithosphere. geophysics is commonly used to supplement the work of geologists in developing a comprehensive understanding of crustal geology, particularly in mineral and petroleum exploration. seismologists use geophysics to understand plate tectonic movement, as well as predict seismic activity. geochemistry studies the processes that control the abundance, composition, and distribution of chemical compounds and isotopes in geologic environments. geochemists use the tools and principles of chemistry to study the earth ' s composition, structure, processes, and other physical aspects. major subdisciplines are aqueous geochemistry, cosmochemistry, isotope geochemistry and biogeochemistry. soil science covers the outermost layer of the earth ' s crust that is subject to soil formation processes ( or pedosphere ). major subdivisions in this field of study include edaphology and pedology. ecology covers the interactions between organisms and their environment. this field of study differentiates the study of earth from other planets in the solar system, earth being the only planet teeming with life. hydrology, oceanography and limnology are studies which focus on the movement, distribution, and quality of the water and involve all the components of the hydrologic cycle on the earth and its atmosphere ( or hydrosphere ). "
navigable capabilities of rivers. in tropical countries subject to periodical rains, the rivers are in flood during the rainy season and have hardly any flow during the rest of the year, while in temperate regions, where the rainfall is more evenly distributed throughout the year, evaporation causes the available rainfall to be much less in hot summer weather than in the winter months, so that the rivers fall to their low stage in the summer and are liable to be in flood in the winter. in fact, with a temperate climate, the year may be divided into a warm and a cold season, extending from may to october and from november to april in the northern hemisphere respectively ; the rivers are low and moderate floods are of rare occurrence during the warm period, and the rivers are high and subject to occasional heavy floods after a considerable rainfall during the cold period in most years. the only exceptions are rivers which have their sources amongst mountains clad with perpetual snow and are fed by glaciers ; their floods occur in the summer from the melting of snow and ice, as exemplified by the rhone above the lake of geneva, and the arve which joins it below. but even these rivers are liable to have their flow modified by the influx of tributaries subject to different conditions, so that the rhone below lyon has a more uniform discharge than most rivers, as the summer floods of the arve are counteracted to a great extent by the low stage of the saone flowing into the rhone at lyon, which has its floods in the winter when the arve, on the contrary, is low. another serious obstacle encountered in river engineering consists in the large quantity of detritus they bring down in flood - time, derived mainly from the disintegration of the surface layers of the hills and slopes in the upper parts of the valleys by glaciers, frost and rain. the power of a current to transport materials varies with its velocity, so that torrents with a rapid fall near the sources of rivers can carry down rocks, boulders and large stones, which are by degrees ground by attrition in their onward course into slate, gravel, sand and silt, simultaneously with the gradual reduction in fall, and, consequently, in the transporting force of the current. accordingly, under ordinary conditions, most of the materials brought down from the high lands by torrential water courses are carried forward by the main river to the sea, or partially strewn over flat alluvial plains during floods ; the size of the materials forming the bed of the river or borne along by the stream is gradually reduced on proceeding sea
earth science or geoscience includes all fields of natural science related to the planet earth. this is a branch of science dealing with the physical, chemical, and biological complex constitutions and synergistic linkages of earth ' s four spheres : the biosphere, hydrosphere / cryosphere, atmosphere, and geosphere ( or lithosphere ). earth science can be considered to be a branch of planetary science but with a much older history. = = geology = = geology is broadly the study of earth ' s structure, substance, and processes. geology is largely the study of the lithosphere, or earth ' s surface, including the crust and rocks. it includes the physical characteristics and processes that occur in the lithosphere as well as how they are affected by geothermal energy. it incorporates aspects of chemistry, physics, and biology as elements of geology interact. historical geology is the application of geology to interpret earth history and how it has changed over time. geochemistry studies the chemical components and processes of the earth. geophysics studies the physical properties of the earth. paleontology studies fossilized biological material in the lithosphere. planetary geology studies geoscience as it pertains to extraterrestrial bodies. geomorphology studies the origin of landscapes. structural geology studies the deformation of rocks to produce mountains and lowlands. resource geology studies how energy resources can be obtained from minerals. environmental geology studies how pollution and contaminants affect soil and rock. mineralogy is the study of minerals and includes the study of mineral formation, crystal structure, hazards associated with minerals, and the physical and chemical properties of minerals. petrology is the study of rocks, including the formation and composition of rocks. petrography is a branch of petrology that studies the typology and classification of rocks. = = earth ' s interior = = plate tectonics, mountain ranges, volcanoes, and earthquakes are geological phenomena that can be explained in terms of physical and chemical processes in the earth ' s crust. beneath the earth ' s crust lies the mantle which is heated by the radioactive decay of heavy elements. the mantle is not quite solid and consists of magma which is in a state of semi - perpetual convection. this convection process causes the lithospheric plates to move, albeit slowly. the resulting process is known as plate tectonics. areas of the crust where new crust is created are called divergent boundaries, those where it is brought back into the earth are convergent boundaries and
the injuries of the inundations they have been designed to prevent, as the escape of floods from the raised river must occur sooner or later. inadequate planning controls which have permitted development on floodplains have been blamed for the flooding of domestic properties. channelization was done under the auspices or overall direction of engineers employed by the local authority or the national government. one of the most heavily channelized areas in the united states is west tennessee, where every major stream with one exception ( the hatchie river ) has been partially or completely channelized. channelization of a stream may be undertaken for several reasons. one is to make a stream more suitable for navigation or for navigation by larger vessels with deep draughts. another is to restrict water to a certain area of a stream ' s natural bottom lands so that the bulk of such lands can be made available for agriculture. a third reason is flood control, with the idea of giving a stream a sufficiently large and deep channel so that flooding beyond those limits will be minimal or nonexistent, at least on a routine basis. one major reason is to reduce natural erosion ; as a natural waterway curves back and forth, it usually deposits sand and gravel on the inside of the corners where the water flows slowly, and cuts sand, gravel, subsoil, and precious topsoil from the outside corners where it flows rapidly due to a change in direction. unlike sand and gravel, the topsoil that is eroded does not get deposited on the inside of the next corner of the river. it simply washes away. = = loss of wetlands = = channelization has several predictable and negative effects. one of them is loss of wetlands. wetlands are an excellent habitat for multiple forms of wildlife, and additionally serve as a " filter " for much of the world ' s surface fresh water. another is the fact that channelized streams are almost invariably straightened. for example, the channelization of florida ' s kissimmee river has been cited as a cause contributing to the loss of wetlands. this straightening causes the streams to flow more rapidly, which can, in some instances, vastly increase soil erosion. it can also increase flooding downstream from the channelized area, as larger volumes of water traveling more rapidly than normal can reach choke points over a shorter period of time than they otherwise would, with a net effect of flood control in one area coming at the expense of aggravated flooding in another. in addition, studies have shown that stream channelization results in declines of river fish populations. : 3 - 1ff a
s surface, including the crust and rocks. it includes the physical characteristics and processes that occur in the lithosphere as well as how they are affected by geothermal energy. it incorporates aspects of chemistry, physics, and biology as elements of geology interact. historical geology is the application of geology to interpret earth history and how it has changed over time. geochemistry studies the chemical components and processes of the earth. geophysics studies the physical properties of the earth. paleontology studies fossilized biological material in the lithosphere. planetary geology studies geoscience as it pertains to extraterrestrial bodies. geomorphology studies the origin of landscapes. structural geology studies the deformation of rocks to produce mountains and lowlands. resource geology studies how energy resources can be obtained from minerals. environmental geology studies how pollution and contaminants affect soil and rock. mineralogy is the study of minerals and includes the study of mineral formation, crystal structure, hazards associated with minerals, and the physical and chemical properties of minerals. petrology is the study of rocks, including the formation and composition of rocks. petrography is a branch of petrology that studies the typology and classification of rocks. = = earth ' s interior = = plate tectonics, mountain ranges, volcanoes, and earthquakes are geological phenomena that can be explained in terms of physical and chemical processes in the earth ' s crust. beneath the earth ' s crust lies the mantle which is heated by the radioactive decay of heavy elements. the mantle is not quite solid and consists of magma which is in a state of semi - perpetual convection. this convection process causes the lithospheric plates to move, albeit slowly. the resulting process is known as plate tectonics. areas of the crust where new crust is created are called divergent boundaries, those where it is brought back into the earth are convergent boundaries and those where plates slide past each other, but no new lithospheric material is created or destroyed, are referred to as transform ( or conservative ) boundaries. earthquakes result from the movement of the lithospheric plates, and they often occur near convergent boundaries where parts of the crust are forced into the earth as part of subduction. plate tectonics might be thought of as the process by which the earth is resurfaced. as the result of seafloor spreading, new crust and lithosphere is created by the flow of magma from the mantle to the near surface, through fissures, where it
natural science disciplines are not always sharp, and they share many cross - discipline fields. physics plays a significant role in the other natural sciences, as represented by astrophysics, geophysics, chemical physics and biophysics. likewise chemistry is represented by such fields as biochemistry, physical chemistry, geochemistry and astrochemistry. a particular example of a scientific discipline that draws upon multiple natural sciences is environmental science. this field studies the interactions of physical, chemical, geological, and biological components of the environment, with particular regard to the effect of human activities and the impact on biodiversity and sustainability. this science also draws upon expertise from other fields, such as economics, law, and social sciences. a comparable discipline is oceanography, as it draws upon a similar breadth of scientific disciplines. oceanography is sub - categorized into more specialized cross - disciplines, such as physical oceanography and marine biology. as the marine ecosystem is vast and diverse, marine biology is further divided into many subfields, including specializations in particular species. there is also a subset of cross - disciplinary fields with strong currents that run counter to specialization by the nature of the problems they address. put another way : in some fields of integrative application, specialists in more than one field are a key part of most scientific discourse. such integrative fields, for example, include nanoscience, astrobiology, and complex system informatics. = = = materials science = = = materials science is a relatively new, interdisciplinary field that deals with the study of matter and its properties and the discovery and design of new materials. originally developed through the field of metallurgy, the study of the properties of materials and solids has now expanded into all materials. the field covers the chemistry, physics, and engineering applications of materials, including metals, ceramics, artificial polymers, and many others. the field ' s core deals with relating the structure of materials with their properties. materials science is at the forefront of research in science and engineering. it is an essential part of forensic engineering ( the investigation of materials, products, structures, or components that fail or do not operate or function as intended, causing personal injury or damage to property ) and failure analysis, the latter being the key to understanding, for example, the cause of various aviation accidents. many of the most pressing scientific problems that are faced today are due to the limitations of the materials that are available, and, as a result, breakthroughs in this field are likely to have a significant impact on the future of technology. the basis of materials science involves
use less energy than conventional thermal separation processes such as distillation, sublimation or crystallization. the separation process is purely physical and both fractions ( permeate and retentate ) can be obtained as useful products. cold separation using membrane technology is widely used in the food technology, biotechnology and pharmaceutical industries. furthermore, using membranes enables separations to take place that would be impossible using thermal separation methods. for example, it is impossible to separate the constituents of azeotropic liquids or solutes which form isomorphic crystals by distillation or recrystallization but such separations can be achieved using membrane technology. depending on the type of membrane, the selective separation of certain individual substances or substance mixtures is possible. important technical applications include the production of drinking water by reverse osmosis. in waste water treatment, membrane technology is becoming increasingly important. ultra / microfiltration can be very effective in removing colloids and macromolecules from wastewater. this is needed if wastewater is discharged into sensitive waters especially those designated for contact water sports and recreation. about half of the market is in medical applications such as artificial kidneys to remove toxic substances by hemodialysis and as artificial lung for bubble - free supply of oxygen in the blood. the importance of membrane technology is growing in the field of environmental protection ( nano - mem - pro ippc database ). even in modern energy recovery techniques, membranes are increasingly used, for example in fuel cells and in osmotic power plants. = = mass transfer = = two basic models can be distinguished for mass transfer through the membrane : the solution - diffusion model and the hydrodynamic model. in real membranes, these two transport mechanisms certainly occur side by side, especially during ultra - filtration. = = = solution - diffusion model = = = in the solution - diffusion model, transport occurs only by diffusion. the component that needs to be transported must first be dissolved in the membrane. the general approach of the solution - diffusion model is to assume that the chemical potential of the feed and permeate fluids are in equilibrium with the adjacent membrane surfaces such that appropriate expressions for the chemical potential in the fluid and membrane phases can be equated at the solution - membrane interface. this principle is more important for dense membranes without natural pores such as those used for reverse osmosis and in fuel cells. during the filtration process a boundary layer forms on the membrane. this concentration gradient is created by molecules which cannot pass through the membrane. the
Question: Water is a very important part of the physical weathering of rock. Which of these properties of water is most important in causing some of the physical weathering of rock?
A) Water is a liquid at room temperature.
B) Water can contain different minerals.
C) Water expands when it freezes.
D) Water dissolves many chemicals.
|
C) Water expands when it freezes.
|
Context:
scientists look through telescopes, study images on electronic screens, record meter readings, and so on. generally, on a basic level, they can agree on what they see, e. g., the thermometer shows 37. 9 degrees c. but, if these scientists have different ideas about the theories that have been developed to explain these basic observations, they may disagree about what they are observing. for example, before albert einstein ' s general theory of relativity, observers would have likely interpreted an image of the einstein cross as five different objects in space. in light of that theory, however, astronomers will tell you that there are actually only two objects, one in the center and four different images of a second object around the sides. alternatively, if other scientists suspect that something is wrong with the telescope and only one object is actually being observed, they are operating under yet another theory. observations that cannot be separated from theoretical interpretation are said to be theory - laden. all observation involves both perception and cognition. that is, one does not make an observation passively, but rather is actively engaged in distinguishing the phenomenon being observed from surrounding sensory data. therefore, observations are affected by one ' s underlying understanding of the way in which the world functions, and that understanding may influence what is perceived, noticed, or deemed worthy of consideration. in this sense, it can be argued that all observation is theory - laden. = = = the purpose of science = = = should science aim to determine ultimate truth, or are there questions that science cannot answer? scientific realists claim that science aims at truth and that one ought to regard scientific theories as true, approximately true, or likely true. conversely, scientific anti - realists argue that science does not aim ( or at least does not succeed ) at truth, especially truth about unobservables like electrons or other universes. instrumentalists argue that scientific theories should only be evaluated on whether they are useful. in their view, whether theories are true or not is beside the point, because the purpose of science is to make predictions and enable effective technology. realists often point to the success of recent scientific theories as evidence for the truth ( or near truth ) of current theories. antirealists point to either the many false theories in the history of science, epistemic morals, the success of false modeling assumptions, or widely termed postmodern criticisms of objectivity as evidence against scientific realism. antirealists attempt to explain the success of scientific theories without reference to truth. some antirealists claim that scientific
v735 sgr was known as an enigmatic star with rapid brightness variations. long - term ogle photometry, brightness measurements in infrared bands, and recently obtained moderate resolution spectrum from the 6. 5 - m magellan telescope show that this star is an active young stellar object of herbig ae / be type.
there are a few different mechanisms that can cause white dwarf stars to vary in brightness, providing opportunities to probe the physics, structures, and formation of these compact stellar remnants. the observational characteristics of the three most common types of white dwarf variability are summarized : stellar pulsations, rotation, and ellipsoidal variations from tidal distortion in binary systems. stellar pulsations are emphasized as the most complex type of variability, which also has the greatest potential to reveal the conditions of white dwarf interiors.
two planetary nebulae are shown to belong to the sagittarius dwarf galaxy, on the basis of their radial velocities. this is only the second dwarf spheroidal galaxy, after fornax, found to contain planetary nebulae. their existence confirms that this galaxy is at least as massive as the fornax dwarf spheroidal which has a single planetary nebula, and suggests a mass of a few times 10^7 solar masses. the two planetary nebulae are located along the major axis of the galaxy, near the base of the tidal tail. there is a further candidate, situated at a very large distance along the direction of the tidal tail, for which no velocity measurement is available. the location of the planetary nebulae and globular clusters of the sagittarius dwarf galaxy suggests that a significant fraction of its mass is contained within the tidal tail.
, including objects we can see with our naked eyes. it is one of the oldest sciences. astronomers of early civilizations performed methodical observations of the night sky, and astronomical artifacts have been found from much earlier periods. there are two types of astronomy : observational astronomy and theoretical astronomy. observational astronomy is focused on acquiring and analyzing data, mainly using basic principles of physics. in contrast, theoretical astronomy is oriented towards developing computer or analytical models to describe astronomical objects and phenomena. this discipline is the science of celestial objects and phenomena that originate outside the earth ' s atmosphere. it is concerned with the evolution, physics, chemistry, meteorology, geology, and motion of celestial objects, as well as the formation and development of the universe. astronomy includes examining, studying, and modeling stars, planets, and comets. most of the information used by astronomers is gathered by remote observation. however, some laboratory reproduction of celestial phenomena has been performed ( such as the molecular chemistry of the interstellar medium ). there is considerable overlap with physics and in some areas of earth science. there are also interdisciplinary fields such as astrophysics, planetary sciences, and cosmology, along with allied disciplines such as space physics and astrochemistry. while the study of celestial features and phenomena can be traced back to antiquity, the scientific methodology of this field began to develop in the middle of the 17th century. a key factor was galileo ' s introduction of the telescope to examine the night sky in more detail. the mathematical treatment of astronomy began with newton ' s development of celestial mechanics and the laws of gravitation. however, it was triggered by earlier work of astronomers such as kepler. by the 19th century, astronomy had developed into formal science, with the introduction of instruments such as the spectroscope and photography, along with much - improved telescopes and the creation of professional observatories. = = interdisciplinary studies = = the distinctions between the natural science disciplines are not always sharp, and they share many cross - discipline fields. physics plays a significant role in the other natural sciences, as represented by astrophysics, geophysics, chemical physics and biophysics. likewise chemistry is represented by such fields as biochemistry, physical chemistry, geochemistry and astrochemistry. a particular example of a scientific discipline that draws upon multiple natural sciences is environmental science. this field studies the interactions of physical, chemical, geological, and biological components of the environment, with particular regard to the effect of human activities and the impact on biodiversity and sustainability. this science also draws upon expertise from other fields, such
occur outside of the milky way galaxy. the chandra x - ray observatory was launched from the columbia on sts - 93 in 1999, observing black holes, quasars, supernovae, and dark matter. it provided critical observations on the sagittarius a * black hole at the center of the milky way galaxy and the separation of dark and regular matter during galactic collisions. finally, the spitzer space telescope is an infrared telescope launched in 2003 from a delta ii rocket. it is in a trailing orbit around the sun, following the earth, and discovered the existence of brown dwarf stars. other telescopes, such as the cosmic background explorer and the wilkinson microwave anisotropy probe, provided evidence to support the big bang. the james webb space telescope, named after the nasa administrator who led the apollo program, is an infrared observatory launched in 2021. the james webb space telescope is a direct successor to the hubble space telescope, intended to observe the formation of the first galaxies. other space telescopes include the kepler space telescope, launched in 2009 to identify planets orbiting extrasolar stars that may be terran and possibly harbor life. the first exoplanet that the kepler space telescope confirmed was kepler - 22b, orbiting within the habitable zone of its star. nasa also launched a number of different satellites to study earth, such as television infrared observation satellite ( tiros ) in 1960, which was the first weather satellite. nasa and the united states weather bureau cooperated on future tiros and the second generation nimbus program of weather satellites. it also worked with the environmental science services administration on a series of weather satellites and the agency launched its experimental applications technology satellites into geosynchronous orbit. nasa ' s first dedicated earth observation satellite, landsat, was launched in 1972. this led to nasa and the national oceanic and atmospheric administration jointly developing the geostationary operational environmental satellite and discovering ozone depletion. = = = space shuttle = = = nasa had been pursuing spaceplane development since the 1960s, blending the administration ' s dual aeronautics and space missions. nasa viewed a spaceplane as part of a larger program, providing routine and economical logistical support to a space station in earth orbit that would be used as a hub for lunar and mars missions. a reusable launch vehicle would then have ended the need for expensive and expendable boosters like the saturn v. in 1969, nasa designated the johnson space center as the lead center for the design, development, and manufacturing of the space shuttle orbiter, while the marshall space flight center
oscillations of the sun have been used to understand its interior structure. the extension of similar studies to more distant stars has raised many difficulties despite the strong efforts of the international community over the past decades. the corot ( convection rotation and planetary transits ) satellite, launched in december 2006, has now measured oscillations and the stellar granulation signature in three main sequence stars that are noticeably hotter than the sun. the oscillation amplitudes are about 1. 5 times as large as those in the sun ; the stellar granulation is up to three times as high. the stellar amplitudes are about 25 % below the theoretic values, providing a measurement of the nonadiabaticity of the process ruling the oscillations in the outer layers of the stars.
planetary nebulae retain the signature of the nucleosynthesis and mixing events that occurred during the previous agb phase. observational signatures complement observations of agb and post - agb stars and their binary companions. the abundances of the elements heavier than iron such as kr and xe in planetary nebulae can be used to complement abundances of sr / y / zr and ba / la / ce in agb stars, respectively, to determine the operation of the slow neutron - capture process ( the s process ) in agb stars. additionally, observations of the rb abundance in type i planetary nebulae may allow us to infer the initial mass of the central star. several noble gas components present in meteoritic stardust silicon carbide ( sic ) grains are associated with implantation into the dust grains in the high - energy environment connected to the fast winds from the central stars during the planetary nebulae phase.
the infrared excess around the white dwarf g29 - 38 can be explained by emission from an opaque flat ring of dust with an inner radius 0. 14 of the radius of the sun and an outer radius approximately equal to the sun ' s. this ring lies within the roche region of the white dwarf where an asteroid could have been tidally destroyed, producing a system reminiscent of saturn ' s rings. accretion onto the white dwarf from this circumstellar dust can explain the observed calcium abundance in the atmosphere of g29 - 38. either as a bombardment by a series of asteroids or because of one large disruption, the total amount of matter accreted onto the white dwarf may have been comparable to the total mass of asteroids in the solar system, or, equivalently, about 1 % of the mass in the asteroid belt around the main sequence star zeta lep.
armed with an astrolabe and kepler ' s laws one can arrive at accurate estimates of the orbits of planets.
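As an editorial sketch of the claim above (the function below is a hypothetical illustration, not something described in the source), Kepler's third law alone already fixes a planet's orbital period from its semi-major axis: for orbits around the Sun, with the semi-major axis a in astronomical units and the period T in years, T^2 = a^3.

# Hypothetical sketch: Kepler's third law for orbits around the Sun.
# With a in astronomical units (AU) and T in years, T**2 = a**3, so T = a**1.5.

def orbital_period_years(semi_major_axis_au: float) -> float:
    """Return the orbital period in years for a solar orbit with the given semi-major axis in AU."""
    return semi_major_axis_au ** 1.5

# Example: Mars has a semi-major axis of about 1.524 AU, giving a period of
# roughly 1.88 years, which matches the observed Martian year.
print(round(orbital_period_years(1.524), 2))  # -> 1.88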
Question: A student uses a telescope to view stars at night. The student notices some of the stars are different colors. The color of a star is determined most by its
A) size.
B) distance from Earth.
C) mass.
D) temperature.
|
D) temperature.
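The physics behind this answer can be made quantitative (an editorial note, not part of the source item): to a good approximation a star radiates as a blackbody, and Wien's displacement law relates the wavelength of peak emission to the surface temperature,

\[ \lambda_{\mathrm{max}} = \frac{b}{T}, \qquad b \approx 2.898 \times 10^{-3}\ \mathrm{m\,K}, \]

so hotter stars peak at shorter, bluer wavelengths and cooler stars at longer, redder wavelengths, which is why temperature, rather than size, mass, or distance, chiefly determines a star's color.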
|
Context:
shaping the cell. cell membranes are involved in various cellular processes such as cell adhesion, storing electrical energy, and cell signalling and serve as the attachment surface for several extracellular structures such as a cell wall, glycocalyx, and cytoskeleton. within the cytoplasm of a cell, there are many biomolecules such as proteins and nucleic acids. in addition to biomolecules, eukaryotic cells have specialized structures called organelles that have their own lipid bilayers or are spatially units. these organelles include the cell nucleus, which contains most of the cell ' s dna, or mitochondria, which generate adenosine triphosphate ( atp ) to power cellular processes. other organelles such as endoplasmic reticulum and golgi apparatus play a role in the synthesis and packaging of proteins, respectively. biomolecules such as proteins can be engulfed by lysosomes, another specialized organelle. plant cells have additional organelles that distinguish them from animal cells such as a cell wall that provides support for the plant cell, chloroplasts that harvest sunlight energy to produce sugar, and vacuoles that provide storage and structural support as well as being involved in reproduction and breakdown of plant seeds. eukaryotic cells also have cytoskeleton that is made up of microtubules, intermediate filaments, and microfilaments, all of which provide support for the cell and are involved in the movement of the cell and its organelles. in terms of their structural composition, the microtubules are made up of tubulin ( e. g., Ξ± - tubulin and Ξ² - tubulin ) whereas intermediate filaments are made up of fibrous proteins. microfilaments are made up of actin molecules that interact with other strands of proteins. = = = metabolism = = = all cells require energy to sustain cellular processes. metabolism is the set of chemical reactions in an organism. the three main purposes of metabolism are : the conversion of food to energy to run cellular processes ; the conversion of food / fuel to monomer building blocks ; and the elimination of metabolic wastes. these enzyme - catalyzed reactions allow organisms to grow and reproduce, maintain their structures, and respond to their environments. metabolic reactions may be categorized as catabolic β the breaking down of compounds ( for example, the breaking down of glucose to pyruvate by cellular respiration
to a region of deoxyribonucleic acid ( dna ) that carries genetic information that controls form or function of an organism. dna is composed of two polynucleotide chains that coil around each other to form a double helix. it is found as linear chromosomes in eukaryotes, and circular chromosomes in prokaryotes. the set of chromosomes in a cell is collectively known as its genome. in eukaryotes, dna is mainly in the cell nucleus. in prokaryotes, the dna is held within the nucleoid. the genetic information is held within genes, and the complete assemblage in an organism is called its genotype. dna replication is a semiconservative process whereby each strand serves as a template for a new strand of dna. mutations are heritable changes in dna. they can arise spontaneously as a result of replication errors that were not corrected by proofreading or can be induced by an environmental mutagen such as a chemical ( e. g., nitrous acid, benzopyrene ) or radiation ( e. g., x - ray, gamma ray, ultraviolet radiation, particles emitted by unstable isotopes ). mutations can lead to phenotypic effects such as loss - of - function, gain - of - function, and conditional mutations. some mutations are beneficial, as they are a source of genetic variation for evolution. others are harmful if they were to result in a loss of function of genes needed for survival. = = = gene expression = = = gene expression is the molecular process by which a genotype encoded in dna gives rise to an observable phenotype in the proteins of an organism ' s body. this process is summarized by the central dogma of molecular biology, which was formulated by francis crick in 1958. according to the central dogma, genetic information flows from dna to rna to protein. there are two gene expression processes : transcription ( dna to rna ) and translation ( rna to protein ). = = = gene regulation = = = the regulation of gene expression by environmental factors and during different stages of development can occur at each step of the process such as transcription, rna splicing, translation, and post - translational modification of a protein. gene expression can be influenced by positive or negative regulation, depending on which of the two types of regulatory proteins called transcription factors bind to the dna sequence close to or at a promoter. a cluster of genes that share the same promoter is called an operon,
plant cells and tissues, whereas plant morphology is the study of their external form. all plants are multicellular eukaryotes, their dna stored in nuclei. the characteristic features of plant cells that distinguish them from those of animals and fungi include a primary cell wall composed of the polysaccharides cellulose, hemicellulose and pectin, larger vacuoles than in animal cells and the presence of plastids with unique photosynthetic and biosynthetic functions as in the chloroplasts. other plastids contain storage products such as starch ( amyloplasts ) or lipids ( elaioplasts ). uniquely, streptophyte cells and those of the green algal order trentepohliales divide by construction of a phragmoplast as a template for building a cell plate late in cell division. the bodies of vascular plants including clubmosses, ferns and seed plants ( gymnosperms and angiosperms ) generally have aerial and subterranean subsystems. the shoots consist of stems bearing green photosynthesising leaves and reproductive structures. the underground vascularised roots bear root hairs at their tips and generally lack chlorophyll. non - vascular plants, the liverworts, hornworts and mosses do not produce ground - penetrating vascular roots and most of the plant participates in photosynthesis. the sporophyte generation is nonphotosynthetic in liverworts but may be able to contribute part of its energy needs by photosynthesis in mosses and hornworts. the root system and the shoot system are interdependent β the usually nonphotosynthetic root system depends on the shoot system for food, and the usually photosynthetic shoot system depends on water and minerals from the root system. cells in each system are capable of creating cells of the other and producing adventitious shoots or roots. stolons and tubers are examples of shoots that can grow roots. roots that spread out close to the surface, such as those of willows, can produce shoots and ultimately new plants. in the event that one of the systems is lost, the other can often regrow it. in fact it is possible to grow an entire plant from a single leaf, as is the case with plants in streptocarpus sect. saintpaulia, or even a single cell β which can dedifferentiate into a callus ( a mass of
, there are many biomolecules such as proteins and nucleic acids. in addition to biomolecules, eukaryotic cells have specialized structures called organelles that have their own lipid bilayers or are spatially units. these organelles include the cell nucleus, which contains most of the cell ' s dna, or mitochondria, which generate adenosine triphosphate ( atp ) to power cellular processes. other organelles such as endoplasmic reticulum and golgi apparatus play a role in the synthesis and packaging of proteins, respectively. biomolecules such as proteins can be engulfed by lysosomes, another specialized organelle. plant cells have additional organelles that distinguish them from animal cells such as a cell wall that provides support for the plant cell, chloroplasts that harvest sunlight energy to produce sugar, and vacuoles that provide storage and structural support as well as being involved in reproduction and breakdown of plant seeds. eukaryotic cells also have cytoskeleton that is made up of microtubules, intermediate filaments, and microfilaments, all of which provide support for the cell and are involved in the movement of the cell and its organelles. in terms of their structural composition, the microtubules are made up of tubulin ( e. g., Ξ± - tubulin and Ξ² - tubulin ) whereas intermediate filaments are made up of fibrous proteins. microfilaments are made up of actin molecules that interact with other strands of proteins. = = = metabolism = = = all cells require energy to sustain cellular processes. metabolism is the set of chemical reactions in an organism. the three main purposes of metabolism are : the conversion of food to energy to run cellular processes ; the conversion of food / fuel to monomer building blocks ; and the elimination of metabolic wastes. these enzyme - catalyzed reactions allow organisms to grow and reproduce, maintain their structures, and respond to their environments. metabolic reactions may be categorized as catabolic β the breaking down of compounds ( for example, the breaking down of glucose to pyruvate by cellular respiration ) ; or anabolic β the building up ( synthesis ) of compounds ( such as proteins, carbohydrates, lipids, and nucleic acids ). usually, catabolism releases energy, and anabolism consumes energy. the chemical reactions of metabolism are organized into metabolic pathways, in which
. most cells are very small, with diameters ranging from 1 to 100 micrometers and are therefore only visible under a light or electron microscope. there are generally two types of cells : eukaryotic cells, which contain a nucleus, and prokaryotic cells, which do not. prokaryotes are single - celled organisms such as bacteria, whereas eukaryotes can be single - celled or multicellular. in multicellular organisms, every cell in the organism ' s body is derived ultimately from a single cell in a fertilized egg. = = = cell structure = = = every cell is enclosed within a cell membrane that separates its cytoplasm from the extracellular space. a cell membrane consists of a lipid bilayer, including cholesterols that sit between phospholipids to maintain their fluidity at various temperatures. cell membranes are semipermeable, allowing small molecules such as oxygen, carbon dioxide, and water to pass through while restricting the movement of larger molecules and charged particles such as ions. cell membranes also contain membrane proteins, including integral membrane proteins that go across the membrane serving as membrane transporters, and peripheral proteins that loosely attach to the outer side of the cell membrane, acting as enzymes shaping the cell. cell membranes are involved in various cellular processes such as cell adhesion, storing electrical energy, and cell signalling and serve as the attachment surface for several extracellular structures such as a cell wall, glycocalyx, and cytoskeleton. within the cytoplasm of a cell, there are many biomolecules such as proteins and nucleic acids. in addition to biomolecules, eukaryotic cells have specialized structures called organelles that have their own lipid bilayers or are spatially units. these organelles include the cell nucleus, which contains most of the cell ' s dna, or mitochondria, which generate adenosine triphosphate ( atp ) to power cellular processes. other organelles such as endoplasmic reticulum and golgi apparatus play a role in the synthesis and packaging of proteins, respectively. biomolecules such as proteins can be engulfed by lysosomes, another specialized organelle. plant cells have additional organelles that distinguish them from animal cells such as a cell wall that provides support for the plant cell, chloroplasts that harvest sunlight energy to produce sugar, and vacuoles that provide storage and structural support
cross. the chromosome theory of inheritance, which states that genes are found on chromosomes, was supported by thomas morgans ' s experiments with fruit flies, which established the sex linkage between eye color and sex in these insects. = = = genes and dna = = = a gene is a unit of heredity that corresponds to a region of deoxyribonucleic acid ( dna ) that carries genetic information that controls form or function of an organism. dna is composed of two polynucleotide chains that coil around each other to form a double helix. it is found as linear chromosomes in eukaryotes, and circular chromosomes in prokaryotes. the set of chromosomes in a cell is collectively known as its genome. in eukaryotes, dna is mainly in the cell nucleus. in prokaryotes, the dna is held within the nucleoid. the genetic information is held within genes, and the complete assemblage in an organism is called its genotype. dna replication is a semiconservative process whereby each strand serves as a template for a new strand of dna. mutations are heritable changes in dna. they can arise spontaneously as a result of replication errors that were not corrected by proofreading or can be induced by an environmental mutagen such as a chemical ( e. g., nitrous acid, benzopyrene ) or radiation ( e. g., x - ray, gamma ray, ultraviolet radiation, particles emitted by unstable isotopes ). mutations can lead to phenotypic effects such as loss - of - function, gain - of - function, and conditional mutations. some mutations are beneficial, as they are a source of genetic variation for evolution. others are harmful if they were to result in a loss of function of genes needed for survival. = = = gene expression = = = gene expression is the molecular process by which a genotype encoded in dna gives rise to an observable phenotype in the proteins of an organism ' s body. this process is summarized by the central dogma of molecular biology, which was formulated by francis crick in 1958. according to the central dogma, genetic information flows from dna to rna to protein. there are two gene expression processes : transcription ( dna to rna ) and translation ( rna to protein ). = = = gene regulation = = = the regulation of gene expression by environmental factors and during different stages of development can occur at each step of the process such as transcription, rna splicing
the internal functions and processes within plant organelles, cells, tissues, whole plants, plant populations and plant communities. at each of these levels, a botanist may be concerned with the classification ( taxonomy ), phylogeny and evolution, structure ( anatomy and morphology ), or function ( physiology ) of plant life. the strictest definition of " plant " includes only the " land plants " or embryophytes, which include seed plants ( gymnosperms, including the pines, and flowering plants ) and the free - sporing cryptogams including ferns, clubmosses, liverworts, hornworts and mosses. embryophytes are multicellular eukaryotes descended from an ancestor that obtained its energy from sunlight by photosynthesis. they have life cycles with alternating haploid and diploid phases. the sexual haploid phase of embryophytes, known as the gametophyte, nurtures the developing diploid embryo sporophyte within its tissues for at least part of its life, even in the seed plants, where the gametophyte itself is nurtured by its parent sporophyte. other groups of organisms that were previously studied by botanists include bacteria ( now studied in bacteriology ), fungi ( mycology ) β including lichen - forming fungi ( lichenology ), non - chlorophyte algae ( phycology ), and viruses ( virology ). however, attention is still given to these groups by botanists, and fungi ( including lichens ) and photosynthetic protists are usually covered in introductory botany courses. palaeobotanists study ancient plants in the fossil record to provide information about the evolutionary history of plants. cyanobacteria, the first oxygen - releasing photosynthetic organisms on earth, are thought to have given rise to the ancestor of plants by entering into an endosymbiotic relationship with an early eukaryote, ultimately becoming the chloroplasts in plant cells. the new photosynthetic plants ( along with their algal relatives ) accelerated the rise in atmospheric oxygen started by the cyanobacteria, changing the ancient oxygen - free, reducing, atmosphere to one in which free oxygen has been abundant for more than 2 billion years. among the important botanical questions of the 21st century are the role of plants as primary producers in the global cycling of life ' s basic ingredients : energy, carbon, oxygen, nitrogen and water, and ways
, depending on the type of receptor. for instance, neurotransmitters that bind with an ionotropic receptor can alter the excitability of a target cell. other types of receptors include protein kinase receptors ( e. g., receptor for the hormone insulin ) and g protein - coupled receptors. activation of g protein - coupled receptors can initiate second messenger cascades. the process by which a chemical or physical signal is transmitted through a cell as a series of molecular events is called signal transduction. = = = cell cycle = = = the cell cycle is a series of events that take place in a cell that cause it to divide into two daughter cells. these events include the duplication of its dna and some of its organelles, and the subsequent partitioning of its cytoplasm into two daughter cells in a process called cell division. in eukaryotes ( i. e., animal, plant, fungal, and protist cells ), there are two distinct types of cell division : mitosis and meiosis. mitosis is part of the cell cycle, in which replicated chromosomes are separated into two new nuclei. cell division gives rise to genetically identical cells in which the total number of chromosomes is maintained. in general, mitosis ( division of the nucleus ) is preceded by the s stage of interphase ( during which the dna is replicated ) and is often followed by telophase and cytokinesis ; which divides the cytoplasm, organelles and cell membrane of one cell into two new cells containing roughly equal shares of these cellular components. the different stages of mitosis all together define the mitotic phase of an animal cell cycle β the division of the mother cell into two genetically identical daughter cells. the cell cycle is a vital process by which a single - celled fertilized egg develops into a mature organism, as well as the process by which hair, skin, blood cells, and some internal organs are renewed. after cell division, each of the daughter cells begin the interphase of a new cycle. in contrast to mitosis, meiosis results in four haploid daughter cells by undergoing one round of dna replication followed by two divisions. homologous chromosomes are separated in the first division ( meiosis i ), and sister chromatids are separated in the second division ( meiosis ii ). both of these cell division cycles are used in the process of sexual reproduction at some point in their life cycle. both are believed to be present in
activation of g protein - coupled receptors can initiate second messenger cascades. the process by which a chemical or physical signal is transmitted through a cell as a series of molecular events is called signal transduction. = = = cell cycle = = = the cell cycle is a series of events that take place in a cell that cause it to divide into two daughter cells. these events include the duplication of its dna and some of its organelles, and the subsequent partitioning of its cytoplasm into two daughter cells in a process called cell division. in eukaryotes ( i. e., animal, plant, fungal, and protist cells ), there are two distinct types of cell division : mitosis and meiosis. mitosis is part of the cell cycle, in which replicated chromosomes are separated into two new nuclei. cell division gives rise to genetically identical cells in which the total number of chromosomes is maintained. in general, mitosis ( division of the nucleus ) is preceded by the s stage of interphase ( during which the dna is replicated ) and is often followed by telophase and cytokinesis ; which divides the cytoplasm, organelles and cell membrane of one cell into two new cells containing roughly equal shares of these cellular components. the different stages of mitosis all together define the mitotic phase of an animal cell cycle β the division of the mother cell into two genetically identical daughter cells. the cell cycle is a vital process by which a single - celled fertilized egg develops into a mature organism, as well as the process by which hair, skin, blood cells, and some internal organs are renewed. after cell division, each of the daughter cells begin the interphase of a new cycle. in contrast to mitosis, meiosis results in four haploid daughter cells by undergoing one round of dna replication followed by two divisions. homologous chromosomes are separated in the first division ( meiosis i ), and sister chromatids are separated in the second division ( meiosis ii ). both of these cell division cycles are used in the process of sexual reproduction at some point in their life cycle. both are believed to be present in the last eukaryotic common ancestor. prokaryotes ( i. e., archaea and bacteria ) can also undergo cell division ( or binary fission ). unlike the processes of mitosis and meiosis in eukaryotes, binary fission in prokaryotes takes place without the formation of a
are single - celled organisms such as bacteria, whereas eukaryotes can be single - celled or multicellular. in multicellular organisms, every cell in the organism ' s body is derived ultimately from a single cell in a fertilized egg. = = = cell structure = = = every cell is enclosed within a cell membrane that separates its cytoplasm from the extracellular space. a cell membrane consists of a lipid bilayer, including cholesterols that sit between phospholipids to maintain their fluidity at various temperatures. cell membranes are semipermeable, allowing small molecules such as oxygen, carbon dioxide, and water to pass through while restricting the movement of larger molecules and charged particles such as ions. cell membranes also contain membrane proteins, including integral membrane proteins that go across the membrane serving as membrane transporters, and peripheral proteins that loosely attach to the outer side of the cell membrane, acting as enzymes shaping the cell. cell membranes are involved in various cellular processes such as cell adhesion, storing electrical energy, and cell signalling and serve as the attachment surface for several extracellular structures such as a cell wall, glycocalyx, and cytoskeleton. within the cytoplasm of a cell, there are many biomolecules such as proteins and nucleic acids. in addition to biomolecules, eukaryotic cells have specialized structures called organelles that have their own lipid bilayers or are spatially units. these organelles include the cell nucleus, which contains most of the cell ' s dna, or mitochondria, which generate adenosine triphosphate ( atp ) to power cellular processes. other organelles such as endoplasmic reticulum and golgi apparatus play a role in the synthesis and packaging of proteins, respectively. biomolecules such as proteins can be engulfed by lysosomes, another specialized organelle. plant cells have additional organelles that distinguish them from animal cells such as a cell wall that provides support for the plant cell, chloroplasts that harvest sunlight energy to produce sugar, and vacuoles that provide storage and structural support as well as being involved in reproduction and breakdown of plant seeds. eukaryotic cells also have cytoskeleton that is made up of microtubules, intermediate filaments, and microfilaments, all of which provide support for the cell and are involved in the movement of the cell and its
Question: Which function is performed at similarly structured sites in prokaryotic and eukaryotic cells?
A) protein synthesis
B) packaging and transport of proteins
C) storage of genetic material
D) release of energy from storage forms
|
A) protein synthesis
|
Context:
cell - culture scaffolds the material needed for each application is different, and dependent on the desired mechanical properties of the material. tissue engineering of long bone defects for example, will require a rigid scaffold with a compressive strength similar to that of cortical bone ( 100 - 150 mpa ), which is much higher compared to a scaffold for skin regeneration. there are a few versatile synthetic materials used for many different scaffold applications. one of these commonly used materials is polylactic acid ( pla ), a synthetic polymer. pla β polylactic acid. this is a polyester which degrades within the human body to form lactic acid, a naturally occurring chemical which is easily removed from the body. similar materials are polyglycolic acid ( pga ) and polycaprolactone ( pcl ) : their degradation mechanism is similar to that of pla, but pcl degrades slower and pga degrades faster. pla is commonly combined with pga to create poly - lactic - co - glycolic acid ( plga ). this is especially useful because the degradation of plga can be tailored by altering the weight percentages of pla and pga : more pla β slower degradation, more pga β faster degradation. this tunability, along with its biocompatibility, makes it an extremely useful material for scaffold creation. scaffolds may also be constructed from natural materials : in particular different derivatives of the extracellular matrix have been studied to evaluate their ability to support cell growth. protein based materials β such as collagen, or fibrin, and polysaccharidic materials - like chitosan or glycosaminoglycans ( gags ), have all proved suitable in terms of cell compatibility. among gags, hyaluronic acid, possibly in combination with cross linking agents ( e. g. glutaraldehyde, water - soluble carbodiimide, etc. ), is one of the possible choices as scaffold material. due to the covalent attachment of thiol groups to these polymers, they can crosslink via disulfide bond formation. the use of thiolated polymers ( thiomers ) as scaffold material for tissue engineering was initially introduced at the 4th central european symposium on pharmaceutical technology in vienna 2001. as thiomers are biocompatible, exhibit cellular mimicking properties and efficiently support proliferation and differentiation of various cell types,
chloroplasts make are used for many things, such as providing material to build cell membranes out of and making the polymer cutin which is found in the plant cuticle that protects land plants from drying out. plants synthesise a number of unique polymers like the polysaccharide molecules cellulose, pectin and xyloglucan from which the land plant cell wall is constructed. vascular land plants make lignin, a polymer used to strengthen the secondary cell walls of xylem tracheids and vessels to keep them from collapsing when a plant sucks water through them under water stress. lignin is also used in other cell types like sclerenchyma fibres that provide structural support for a plant and is a major constituent of wood. sporopollenin is a chemically resistant polymer found in the outer cell walls of spores and pollen of land plants responsible for the survival of early land plant spores and the pollen of seed plants in the fossil record. it is widely regarded as a marker for the start of land plant evolution during the ordovician period. the concentration of carbon dioxide in the atmosphere today is much lower than it was when plants emerged onto land during the ordovician and silurian periods. many monocots like maize and the pineapple and some dicots like the asteraceae have since independently evolved pathways like crassulacean acid metabolism and the c4 carbon fixation pathway for photosynthesis which avoid the losses resulting from photorespiration in the more common c3 carbon fixation pathway. these biochemical strategies are unique to land plants. = = = medicine and materials = = = phytochemistry is a branch of plant biochemistry primarily concerned with the chemical substances produced by plants during secondary metabolism. some of these compounds are toxins such as the alkaloid coniine from hemlock. others, such as the essential oils peppermint oil and lemon oil are useful for their aroma, as flavourings and spices ( e. g., capsaicin ), and in medicine as pharmaceuticals as in opium from opium poppies. many medicinal and recreational drugs, such as tetrahydrocannabinol ( active ingredient in cannabis ), caffeine, morphine and nicotine come directly from plants. others are simple derivatives of botanical natural products. for example, the pain killer aspirin is the acetyl ester of salicylic acid, originally isolated from the bark of willow trees, and
to dye denim and the artist ' s pigments gamboge and rose madder. sugar, starch, cotton, linen, hemp, some types of rope, wood and particle boards, papyrus and paper, vegetable oils, wax, and natural rubber are examples of commercially important materials made from plant tissues or their secondary products. charcoal, a pure form of carbon made by pyrolysis of wood, has a long history as a metal - smelting fuel, as a filter material and adsorbent and as an artist ' s material and is one of the three ingredients of gunpowder. cellulose, the world ' s most abundant organic polymer, can be converted into energy, fuels, materials and chemical feedstock. products made from cellulose include rayon and cellophane, wallpaper paste, biobutanol and gun cotton. sugarcane, rapeseed and soy are some of the plants with a highly fermentable sugar or oil content that are used as sources of biofuels, important alternatives to fossil fuels, such as biodiesel. sweetgrass was used by native americans to ward off bugs like mosquitoes. these bug repelling properties of sweetgrass were later found by the american chemical society in the molecules phytol and coumarin. = = plant ecology = = plant ecology is the science of the functional relationships between plants and their habitats β the environments where they complete their life cycles. plant ecologists study the composition of local and regional floras, their biodiversity, genetic diversity and fitness, the adaptation of plants to their environment, and their competitive or mutualistic interactions with other species. some ecologists even rely on empirical data from indigenous people that is gathered by ethnobotanists. this information can relay a great deal of information on how the land once was thousands of years ago and how it has changed over that time. the goals of plant ecology are to understand the causes of their distribution patterns, productivity, environmental impact, evolution, and responses to environmental change. plants depend on certain edaphic ( soil ) and climatic factors in their environment but can modify these factors too. for example, they can change their environment ' s albedo, increase runoff interception, stabilise mineral soils and develop their organic content, and affect local temperature. plants compete with other organisms in their ecosystem for resources. they interact with their neighbours at a variety of spatial scales in groups, populations and communities that collectively constitute vegetation. regions with characteristic vegetation types and dominant plants as well as similar abiot
##able. additionally, they must be biocompatible, meaning that they do not cause any adverse effects to cells. silicone, for example, is a synthetic, non - biodegradable material commonly used as a drug delivery material, while gelatin is a biodegradable, natural material commonly used in cell - culture scaffolds the material needed for each application is different, and dependent on the desired mechanical properties of the material. tissue engineering of long bone defects for example, will require a rigid scaffold with a compressive strength similar to that of cortical bone ( 100 - 150 mpa ), which is much higher compared to a scaffold for skin regeneration. there are a few versatile synthetic materials used for many different scaffold applications. one of these commonly used materials is polylactic acid ( pla ), a synthetic polymer. pla β polylactic acid. this is a polyester which degrades within the human body to form lactic acid, a naturally occurring chemical which is easily removed from the body. similar materials are polyglycolic acid ( pga ) and polycaprolactone ( pcl ) : their degradation mechanism is similar to that of pla, but pcl degrades slower and pga degrades faster. pla is commonly combined with pga to create poly - lactic - co - glycolic acid ( plga ). this is especially useful because the degradation of plga can be tailored by altering the weight percentages of pla and pga : more pla β slower degradation, more pga β faster degradation. this tunability, along with its biocompatibility, makes it an extremely useful material for scaffold creation. scaffolds may also be constructed from natural materials : in particular different derivatives of the extracellular matrix have been studied to evaluate their ability to support cell growth. protein based materials β such as collagen, or fibrin, and polysaccharidic materials - like chitosan or glycosaminoglycans ( gags ), have all proved suitable in terms of cell compatibility. among gags, hyaluronic acid, possibly in combination with cross linking agents ( e. g. glutaraldehyde, water - soluble carbodiimide, etc. ), is one of the possible choices as scaffold material. due to the covalent attachment of thiol groups to these polymers, they can crosslink via disulfide bond
some references for the breaking strength of fused silica fibers compiled in 1999.
as dental implants and synthetic bones. hydroxyapatite, the natural mineral component of bone, has been made synthetically from a number of biological and chemical sources and can be formed into ceramic materials. orthopedic implants made from these materials bond readily to bone and other tissues in the body without rejection or inflammatory reactions. because of this, they are of great interest for gene delivery and tissue engineering scaffolds. most hydroxyapatite ceramics are very porous and lack mechanical strength and are used to coat metal orthopedic devices to aid in forming a bond to bone or as bone fillers. they are also used as fillers for orthopedic plastic screws to aid in reducing the inflammation and increase absorption of these plastic materials. work is being done to make strong, fully dense nano crystalline hydroxyapatite ceramic materials for orthopedic weight bearing devices, replacing foreign metal and plastic orthopedic materials with a synthetic, but naturally occurring, bone mineral. ultimately these ceramic materials may be used as bone replacements or with the incorporation of protein collagens, synthetic bones. durable actinide - containing ceramic materials have many applications such as in nuclear fuels for burning excess pu and in chemically - inert sources of alpha irradiation for power supply of unmanned space vehicles or to produce electricity for microelectronic devices. both use and disposal of radioactive actinides require their immobilization in a durable host material. nuclear waste long - lived radionuclides such as actinides are immobilized using chemically - durable crystalline materials based on polycrystalline ceramics and large single crystals. alumina ceramics are widely utilized in the chemical industry due to their excellent chemical stability and high resistance to corrosion. it is used as acid - resistant pump impellers and pump bodies, ensuring long - lasting performance in transferring aggressive fluids. they are also used in acid - carrying pipe linings to prevent contamination and maintain fluid purity, which is crucial in industries like pharmaceuticals and food processing. valves made from alumina ceramics demonstrate exceptional durability and resistance to chemical attack, making them reliable for controlling the flow of corrosive liquids. = = glass - ceramics = = glass - ceramic materials share many properties with both glasses and ceramics. glass - ceramics have an amorphous phase and one or more crystalline phases and are produced by a so - called " controlled crystallization ", which is typically avoided in glass manufacturing. glass - ceramics often contain a crystalline phase
made of steel. the shoe is generally wider than the caisson to reduce friction, and the leading edge may be supplied with pressurised bentonite slurry, which swells in water, stabilizing settlement by filling depressions and voids. an open caisson may fill with water during sinking. the material is excavated by clamshell excavator bucket on crane. the formation level subsoil may still not be suitable for excavation or bearing capacity. the water in the caisson ( due to a high water table ) balances the upthrust forces of the soft soils underneath. if dewatered, the base may " pipe " or " boil ", causing the caisson to sink. to combat this problem, piles may be driven from the surface to act as : load - bearing walls, in that they transmit loads to deeper soils. anchors, in that they resist flotation because of the friction at the interface between their surfaces and the surrounding earth into which they have been driven. h - beam sections ( typical column sections, due to resistance to bending in all axis ) may be driven at angles " raked " to rock or other firmer soils ; the h - beams are left extended above the base. a reinforced concrete plug may be placed under the water, a process known as tremie concrete placement. when the caisson is dewatered, this plug acts as a pile cap, resisting the upward forces of the subsoil. = = = monolithic = = = a monolithic caisson ( or simply a monolith ) is larger than the other types of caisson, but similar to open caissons. such caissons are often found in quay walls, where resistance to impact from ships is required. = = = pneumatic = = = shallow caissons may be open to the air, whereas pneumatic caissons ( sometimes called pressurized caissons ), which penetrate soft mud, are bottomless boxes sealed at the top and filled with compressed air to keep water and mud out at depth. an airlock allows access to the chamber. workers, called sandhogs in american english, move mud and rock debris ( called muck ) from the edge of the workspace to a water - filled pit, connected by a tube ( called the muck tube ) to the surface. a crane at the surface removes the soil with a clamshell bucket. the water pressure in the tube balances the air pressure, with excess air escaping up
##drate - rich plant products such as barley ( beer ), rice ( sake ) and grapes ( wine ). native americans have used various plants as ways of treating illness or disease for thousands of years. this knowledge native americans have on plants has been recorded by enthnobotanists and then in turn has been used by pharmaceutical companies as a way of drug discovery. plants can synthesise coloured dyes and pigments such as the anthocyanins responsible for the red colour of red wine, yellow weld and blue woad used together to produce lincoln green, indoxyl, source of the blue dye indigo traditionally used to dye denim and the artist ' s pigments gamboge and rose madder. sugar, starch, cotton, linen, hemp, some types of rope, wood and particle boards, papyrus and paper, vegetable oils, wax, and natural rubber are examples of commercially important materials made from plant tissues or their secondary products. charcoal, a pure form of carbon made by pyrolysis of wood, has a long history as a metal - smelting fuel, as a filter material and adsorbent and as an artist ' s material and is one of the three ingredients of gunpowder. cellulose, the world ' s most abundant organic polymer, can be converted into energy, fuels, materials and chemical feedstock. products made from cellulose include rayon and cellophane, wallpaper paste, biobutanol and gun cotton. sugarcane, rapeseed and soy are some of the plants with a highly fermentable sugar or oil content that are used as sources of biofuels, important alternatives to fossil fuels, such as biodiesel. sweetgrass was used by native americans to ward off bugs like mosquitoes. these bug repelling properties of sweetgrass were later found by the american chemical society in the molecules phytol and coumarin. = = plant ecology = = plant ecology is the science of the functional relationships between plants and their habitats β the environments where they complete their life cycles. plant ecologists study the composition of local and regional floras, their biodiversity, genetic diversity and fitness, the adaptation of plants to their environment, and their competitive or mutualistic interactions with other species. some ecologists even rely on empirical data from indigenous people that is gathered by ethnobotanists. this information can relay a great deal of information on how the land once was thousands of years ago and how it has changed over that time. the goals of
pectin and xyloglucan from which the land plant cell wall is constructed. vascular land plants make lignin, a polymer used to strengthen the secondary cell walls of xylem tracheids and vessels to keep them from collapsing when a plant sucks water through them under water stress. lignin is also used in other cell types like sclerenchyma fibres that provide structural support for a plant and is a major constituent of wood. sporopollenin is a chemically resistant polymer found in the outer cell walls of spores and pollen of land plants responsible for the survival of early land plant spores and the pollen of seed plants in the fossil record. it is widely regarded as a marker for the start of land plant evolution during the ordovician period. the concentration of carbon dioxide in the atmosphere today is much lower than it was when plants emerged onto land during the ordovician and silurian periods. many monocots like maize and the pineapple and some dicots like the asteraceae have since independently evolved pathways like crassulacean acid metabolism and the c4 carbon fixation pathway for photosynthesis which avoid the losses resulting from photorespiration in the more common c3 carbon fixation pathway. these biochemical strategies are unique to land plants. = = = medicine and materials = = = phytochemistry is a branch of plant biochemistry primarily concerned with the chemical substances produced by plants during secondary metabolism. some of these compounds are toxins such as the alkaloid coniine from hemlock. others, such as the essential oils peppermint oil and lemon oil are useful for their aroma, as flavourings and spices ( e. g., capsaicin ), and in medicine as pharmaceuticals as in opium from opium poppies. many medicinal and recreational drugs, such as tetrahydrocannabinol ( active ingredient in cannabis ), caffeine, morphine and nicotine come directly from plants. others are simple derivatives of botanical natural products. for example, the pain killer aspirin is the acetyl ester of salicylic acid, originally isolated from the bark of willow trees, and a wide range of opiate painkillers like heroin are obtained by chemical modification of morphine obtained from the opium poppy. popular stimulants come from plants, such as caffeine from coffee, tea and chocolate, and nicotine from tobacco. most alcoholic beverages come from fermentation of carbohy
1975. skin tissue - engineered skin is a type of bioartificial organ that is often used to treat burns, diabetic foot ulcers, or other large wounds that cannot heal well on their own. artificial skin can be made from autografts, allografts, and xenografts. autografted skin comes from a patient ' s own skin, which allows the dermis to have a faster healing rate, and the donor site can be re - harvested a few times. allograft skin often comes from cadaver skin and is mostly used to treat burn victims. lastly, xenografted skin comes from animals and provides a temporary healing structure for the skin. xenografts assist in dermal regeneration, but cannot become part of the host skin. tissue - engineered skin is now available in commercial products. integra, originally used only to treat burns, consists of a collagen matrix and chondroitin sulfate that can be used as a skin replacement. the chondroitin sulfate functions as a component of proteoglycans, which helps to form the extracellular matrix. integra can be repopulated and revascularized while maintaining its dermal collagen architecture, making it a bioartificial organ. dermagraft, another commercially made tissue - engineered skin product, is made out of living fibroblasts. these fibroblasts proliferate and produce growth factors, collagen, and ecm proteins that help build granulation tissue. = = = = heart = = = = since the number of patients awaiting a heart transplant is continuously increasing over time, and the number of patients on the waiting list surpasses organ availability, artificial organs used as replacement therapy for terminal heart failure would help alleviate this difficulty. artificial hearts are usually used as a bridge to heart transplantation or can be applied as replacement therapy for terminal heart malfunction. the total artificial heart ( tah ), first introduced by dr. vladimir p. demikhov in 1937, emerged as an ideal alternative. since then it has been developed and improved as a mechanical pump that provides long - term circulatory support and replaces diseased or damaged heart ventricles that cannot properly pump the blood, thus restoring the pulmonary and systemic flow. some of the current tahs include abiocor, an fda - approved device that comprises two artificial ventricles and their valves, does not require subcutaneous connections, and is indicated for
Question: Which material is the best natural resource to use for making water-resistant shoes?
A) cotton
B) leather
C) plastic
D) wool
|
B) leather
|
Context:
time - dependent distribution of the global extinction of megafauna is compared with the growth of human population. there is no correlation between the two processes. furthermore, the size of human population and its growth rate were far too small to have any significant impact on the environment and on the life of megafauna.
and their competitive or mutualistic interactions with other species. some ecologists even rely on empirical data from indigenous people that is gathered by ethnobotanists. this information can relay a great deal of information on how the land once was thousands of years ago and how it has changed over that time. the goals of plant ecology are to understand the causes of their distribution patterns, productivity, environmental impact, evolution, and responses to environmental change. plants depend on certain edaphic ( soil ) and climatic factors in their environment but can modify these factors too. for example, they can change their environment ' s albedo, increase runoff interception, stabilise mineral soils and develop their organic content, and affect local temperature. plants compete with other organisms in their ecosystem for resources. they interact with their neighbours at a variety of spatial scales in groups, populations and communities that collectively constitute vegetation. regions with characteristic vegetation types and dominant plants as well as similar abiotic and biotic factors, climate, and geography make up biomes like tundra or tropical rainforest. herbivores eat plants, but plants can defend themselves and some species are parasitic or even carnivorous. other organisms form mutually beneficial relationships with plants. for example, mycorrhizal fungi and rhizobia provide plants with nutrients in exchange for food, ants are recruited by ant plants to provide protection, honey bees, bats and other animals pollinate flowers and humans and other animals act as dispersal vectors to spread spores and seeds. = = = plants, climate and environmental change = = = plant responses to climate and other environmental changes can inform our understanding of how these changes affect ecosystem function and productivity. for example, plant phenology can be a useful proxy for temperature in historical climatology, and the biological impact of climate change and global warming. palynology, the analysis of fossil pollen deposits in sediments from thousands or millions of years ago allows the reconstruction of past climates. estimates of atmospheric co2 concentrations since the palaeozoic have been obtained from stomatal densities and the leaf shapes and sizes of ancient land plants. ozone depletion can expose plants to higher levels of ultraviolet radiation - b ( uv - b ), resulting in lower growth rates. moreover, information from studies of community ecology, plant systematics, and taxonomy is essential to understanding vegetation change, habitat destruction and species extinction. = = genetics = = inheritance in plants follows the same fundamental principles of genetics as in other multicellular organisms. gregor mendel discovered the genetic laws of inheritance by studying
= = = environmental remediation = = = environmental remediation is the process through which contaminants or pollutants in soil, water and other media are removed to improve environmental quality. the main focus is the reduction of hazardous substances within the environment. some of the areas involved in environmental remediation include soil contamination, hazardous waste, groundwater contamination, oil, gas and chemical spills. the three most common types of environmental remediation are soil, water, and sediment remediation. soil remediation consists of removing contaminants in soil, as these pose great risks to humans and the ecosystem. some examples of this are heavy metals, pesticides, and radioactive materials. depending on the contaminant, the remedial processes can be physical, chemical, thermal, or biological. water remediation is one of the most important, since water is an essential natural resource. depending on the source of the water, there will be different contaminants. surface water contamination mainly consists of agricultural, animal, and industrial waste, as well as acid mine drainage. there has been a rise in the need for water remediation due to the increased discharge of industrial waste, leading to a demand for sustainable water solutions. the market for water remediation is expected to consistently increase to $ 19. 6 billion by 2030. sediment remediation consists of removing contaminated sediments. it is similar to soil remediation, except it is often more sophisticated as it involves additional contaminants. to reduce the contaminants, physical, chemical, and biological processes that help with source control are typically used, but if these processes are not executed correctly, there ' s a risk of contamination resurfacing. = = = solid waste management = = = solid waste management is the purification, consumption, reuse, disposal, and treatment of solid waste that is undertaken by the government or the ruling bodies of a city / town. it refers to the collection, treatment, and disposal of non - soluble, solid waste material. solid waste is associated with industrial, institutional, commercial and residential activities. hazardous solid waste, when improperly disposed of, can encourage the infestation of insects and rodents, contributing to the spread of diseases. some of the most common types of solid waste management include landfills, vermicomposting, composting, recycling, and incineration. however, a major barrier to solid waste management practices is the high cost associated with recycling
species occupying the same geographical area at the same time. a biological interaction is the effect that a pair of organisms living together in a community have on each other. they can be either of the same species ( intraspecific interactions ), or of different species ( interspecific interactions ). these effects may be short - term, like pollination and predation, or long - term ; both often strongly influence the evolution of the species involved. a long - term interaction is called a symbiosis. symbioses range from mutualism, beneficial to both partners, to competition, harmful to both partners. every species participates as a consumer, resource, or both in consumer β resource interactions, which form the core of food chains or food webs. there are different trophic levels within any food web, with the lowest level being the primary producers ( or autotrophs ) such as plants and algae that convert energy and inorganic material into organic compounds, which can then be used by the rest of the community. at the next level are the heterotrophs, which are the species that obtain energy by breaking apart organic compounds from other organisms. heterotrophs that consume plants are primary consumers ( or herbivores ) whereas heterotrophs that consume herbivores are secondary consumers ( or carnivores ). and those that eat secondary consumers are tertiary consumers and so on. omnivorous heterotrophs are able to consume at multiple levels. finally, there are decomposers that feed on the waste products or dead bodies of organisms. on average, the total amount of energy incorporated into the biomass of a trophic level per unit of time is about one - tenth of the energy of the trophic level that it consumes. waste and dead material used by decomposers as well as heat lost from metabolism make up the other ninety percent of energy that is not consumed by the next trophic level. = = = biosphere = = = in the global ecosystem or biosphere, matter exists as different interacting compartments, which can be biotic or abiotic as well as accessible or inaccessible, depending on their forms and locations. for example, matter from terrestrial autotrophs are both biotic and accessible to other organisms whereas the matter in rocks and minerals are abiotic and inaccessible. a biogeochemical cycle is a pathway by which specific elements of matter are turned over or moved through the biotic ( biosphere ) and the abiotic ( lithos
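The roughly ten - percent transfer of energy between trophic levels described in the passage above can be made concrete with a short worked example. The following Python sketch is purely illustrative: the 0.1 transfer efficiency restates the passage's rule of thumb, while the 10,000 kJ of primary production and the function name are assumptions invented for the example.

```python
# Illustrative sketch of the ~10% trophic transfer rule described above.
# The 10,000 kJ starting energy is an assumed example value, not from the passage.
TRANSFER_EFFICIENCY = 0.1  # fraction of energy passed on to the next trophic level

def energy_by_trophic_level(primary_production_kj: float, levels: int) -> list[float]:
    """Return the energy (kJ) incorporated at each trophic level per unit time."""
    energies = [primary_production_kj]
    for _ in range(levels - 1):
        energies.append(energies[-1] * TRANSFER_EFFICIENCY)
    return energies

if __name__ == "__main__":
    # producers -> herbivores -> carnivores -> tertiary consumers
    for level, energy in enumerate(energy_by_trophic_level(10_000, 4), start=1):
        print(f"trophic level {level}: {energy:,.0f} kJ")
    # The remaining ~90% at each step is lost to decomposers and metabolic heat.
```

Running the sketch prints 10,000, 1,000, 100 and 10 kJ, which is simply the passage's one - tenth rule applied three times.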
radiation exposure. marie curie died from aplastic anemia which resulted from her high levels of exposure. two scientists, an american and a canadian respectively, harry daghlian and louis slotin, died after mishandling the same plutonium mass. unlike conventional weapons, the intense light, heat, and explosive force are not the only deadly components of a nuclear weapon. approximately half of the deaths at hiroshima and nagasaki occurred two to five years afterward from radiation exposure. civilian nuclear and radiological accidents primarily involve nuclear power plants. most common are nuclear leaks that expose workers to hazardous material. a nuclear meltdown refers to the more serious hazard of releasing nuclear material into the surrounding environment. the most significant meltdowns occurred at three mile island in pennsylvania and chernobyl in the soviet ukraine. the earthquake and tsunami on march 11, 2011 caused serious damage to three nuclear reactors and a spent fuel storage pond at the fukushima daiichi nuclear power plant in japan. military reactors that experienced similar accidents were windscale in the united kingdom and sl - 1 in the united states. military accidents usually involve the loss or unexpected detonation of nuclear weapons. the castle bravo test in 1954 produced a larger yield than expected, which contaminated nearby islands, a japanese fishing boat ( with one fatality ), and raised concerns about contaminated fish in japan. in the 1950s through 1970s, several nuclear bombs were lost from submarines and aircraft, some of which have never been recovered. the last twenty years have seen a marked decline in such accidents. = = examples of environmental benefits = = proponents of nuclear energy note that annually, nuclear - generated electricity avoids 470 million metric tons of carbon dioxide emissions that would otherwise come from fossil fuels. additionally, the comparatively small amount of waste that nuclear energy does create is safely disposed of by the large scale nuclear energy production facilities or is repurposed / recycled for other energy uses. proponents of nuclear energy also bring to attention the opportunity cost of utilizing other forms of electricity. for example, the environmental protection agency estimates that coal kills 30, 000 people a year, as a result of its environmental impact, while 60 people died in the chernobyl disaster. a real world example of impact provided by proponents of nuclear energy is the 650, 000 ton increase in carbon emissions in the two months following the closure of the vermont yankee nuclear plant.
contaminants from the air to reduce the potential adverse effects on humans and the environment. the process of air purification may be performed using methods such as mechanical filtration, ionization, activated carbon adsorption, photocatalytic oxidation, and ultraviolet light germicidal irradiation. = = = sewage treatment = = = = = = environmental remediation = = = environmental remediation is the process through which contaminants or pollutants in soil, water and other media are removed to improve environmental quality. the main focus is the reduction of hazardous substances within the environment. some of the areas involved in environmental remediation include soil contamination, hazardous waste, groundwater contamination, oil, gas and chemical spills. the three most common types of environmental remediation are soil, water, and sediment remediation. soil remediation consists of removing contaminants in soil, as these pose great risks to humans and the ecosystem. some examples of this are heavy metals, pesticides, and radioactive materials. depending on the contaminant, the remedial processes can be physical, chemical, thermal, or biological. water remediation is one of the most important, since water is an essential natural resource. depending on the source of the water, there will be different contaminants. surface water contamination mainly consists of agricultural, animal, and industrial waste, as well as acid mine drainage. there has been a rise in the need for water remediation due to the increased discharge of industrial waste, leading to a demand for sustainable water solutions. the market for water remediation is expected to consistently increase to $ 19. 6 billion by 2030. sediment remediation consists of removing contaminated sediments. it is similar to soil remediation, except it is often more sophisticated as it involves additional contaminants. to reduce the contaminants, physical, chemical, and biological processes that help with source control are typically used, but if these processes are not executed correctly, there ' s a risk of contamination resurfacing. = = = solid waste management = = = solid waste management is the purification, consumption, reuse, disposal, and treatment of solid waste that is undertaken by the government or the ruling bodies of a city / town. it refers to the collection, treatment, and disposal of non - soluble, solid waste material. solid waste is associated with industrial, institutional, commercial and residential activities. hazardous solid waste, when improperly disposed of, can encourage the
, lightning strikes, tornadoes, building fires, wildfires, and mass shootings disabling most of the system if not the entirety of it. geographic redundancy locations can be more than 621 miles ( 999 km ) continental, more than 62 miles apart and less than 93 miles ( 150 km ) apart, less than 62 miles apart, but not on the same campus, or different buildings that are more than 300 feet ( 91 m ) apart on the same campus. the following methods can reduce the risks of damage by a fire conflagration : large buildings at least 80 feet ( 24 m ) to 110 feet ( 34 m ) apart, but sometimes a minimum of 210 feet ( 64 m ) apart. : 9 high - rise buildings at least 82 feet ( 25 m ) apart : 12 open spaces clear of flammable vegetation within 200 feet ( 61 m ) on each side of objects different wings on the same building, in rooms that are separated by more than 300 feet ( 91 m ) different floors on the same wing of a building in rooms that are horizontally offset by a minimum of 70 feet ( 21 m ) with fire walls between the rooms that are on different floors two rooms separated by another room, leaving at least a 70 - foot gap between the two rooms there should be a minimum of two separated fire walls and on opposite sides of a corridor geographic redundancy is used by amazon web services ( aws ), google cloud platform ( gcp ), microsoft azure, netflix, dropbox, salesforce, linkedin, paypal, twitter, facebook, apple icloud, cisco meraki, and many others to provide geographic redundancy, high availability, fault tolerance and to ensure availability and reliability for their cloud services. as another example, to minimize risk of damage from severe windstorms or water damage, buildings can be located at least 2 miles ( 3. 2 km ) away from the shore, with an elevation of at least 5 feet ( 1. 5 m ) above sea level. for additional protection, they can be located at least 100 feet ( 30 m ) away from flood plain areas. = = functions of redundancy = = the two functions of redundancy are passive redundancy and active redundancy. both functions prevent performance decline from exceeding specification limits without human intervention using extra capacity. passive redundancy uses excess capacity to reduce the impact of component failures. one common form of passive redundancy is the extra strength of cabling and struts used in bridges.
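The separation figures quoted in the passage above amount to a small set of siting rules, so they can be expressed as a configuration check. The Python sketch below is hypothetical: the function name, the category labels and the decision to treat anything outside the quoted bands as unclassified are assumptions made for illustration, not part of any standard cited by the passage.

```python
# Hypothetical sketch: map the separation between two redundant sites onto the
# distance bands quoted in the passage above (the numeric thresholds come from it).
FEET_PER_MILE = 5280

def classify_separation(distance_miles: float, same_campus: bool) -> str:
    """Return a rough geographic-redundancy tier for two sites a given distance apart."""
    if same_campus:
        # buildings more than 300 feet (91 m) apart on the same campus
        if distance_miles * FEET_PER_MILE > 300:
            return "same campus, separate buildings"
        return "below the listed thresholds"
    if distance_miles > 621:            # more than 621 miles (999 km)
        return "continental"
    if 62 < distance_miles < 93:        # between 62 and 93 miles (150 km)
        return "regional, 62 - 93 miles"
    if distance_miles < 62:             # under 62 miles, different campuses
        return "local, different campuses"
    return "between the bands named in the passage"

if __name__ == "__main__":
    print(classify_separation(700.0, same_campus=False))  # continental
    print(classify_separation(75.0, same_campus=False))   # regional, 62 - 93 miles
    print(classify_separation(0.1, same_campus=True))     # same campus, separate buildings
```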
the injuries of the inundations they have been designed to prevent, as the escape of floods from the raised river must occur sooner or later. inadequate planning controls which have permitted development on floodplains have been blamed for the flooding of domestic properties. channelization was done under the auspices or overall direction of engineers employed by the local authority or the national government. one of the most heavily channelized areas in the united states is west tennessee, where every major stream with one exception ( the hatchie river ) has been partially or completely channelized. channelization of a stream may be undertaken for several reasons. one is to make a stream more suitable for navigation or for navigation by larger vessels with deep draughts. another is to restrict water to a certain area of a stream ' s natural bottom lands so that the bulk of such lands can be made available for agriculture. a third reason is flood control, with the idea of giving a stream a sufficiently large and deep channel so that flooding beyond those limits will be minimal or nonexistent, at least on a routine basis. one major reason is to reduce natural erosion ; as a natural waterway curves back and forth, it usually deposits sand and gravel on the inside of the corners where the water flows slowly, and cuts sand, gravel, subsoil, and precious topsoil from the outside corners where it flows rapidly due to a change in direction. unlike sand and gravel, the topsoil that is eroded does not get deposited on the inside of the next corner of the river. it simply washes away. = = loss of wetlands = = channelization has several predictable and negative effects. one of them is loss of wetlands. wetlands are an excellent habitat for multiple forms of wildlife, and additionally serve as a " filter " for much of the world ' s surface fresh water. another is the fact that channelized streams are almost invariably straightened. for example, the channelization of florida ' s kissimmee river has been cited as a cause contributing to the loss of wetlands. this straightening causes the streams to flow more rapidly, which can, in some instances, vastly increase soil erosion. it can also increase flooding downstream from the channelized area, as larger volumes of water traveling more rapidly than normal can reach choke points over a shorter period of time than they otherwise would, with a net effect of flood control in one area coming at the expense of aggravated flooding in another. in addition, studies have shown that stream channelization results in declines of river fish populations. : 3 - 1ff a
huge but not noticed by the consumer. the genuine effect of processing food by ionizing radiation relates to damage to the dna, the basic genetic information for life. microorganisms can no longer proliferate and continue their malignant or pathogenic activities. spoilage - causing micro - organisms cannot continue their activities. insects do not survive or become incapable of procreation. plants cannot continue the natural ripening or aging process. all these effects are beneficial to the consumer and the food industry alike. the amount of energy imparted for effective food irradiation is low compared to cooking the same food ; even at a typical dose of 10 kgy most food, which is ( with regard to warming ) physically equivalent to water, would warm by only about 2. 5 °c ( 4. 5 °f ). the special feature of processing food by ionizing radiation is the fact that the energy density per atomic transition is very high ; it can cleave molecules and induce ionization ( hence the name ), which cannot be achieved by mere heating. this is the reason for new beneficial effects, but at the same time also for new concerns. the treatment of solid food by ionizing radiation can provide an effect similar to heat pasteurization of liquids, such as milk. however, the use of the term cold pasteurization to describe irradiated foods is controversial, because pasteurization and irradiation are fundamentally different processes, although the intended end results can in some cases be similar. detractors of food irradiation have concerns about the health hazards of induced radioactivity. a report for the industry advocacy group american council on science and health entitled " irradiated foods " states : " the types of radiation sources approved for the treatment of foods have specific energy levels well below that which would cause any element in food to become radioactive. food undergoing irradiation does not become any more radioactive than luggage passing through an airport x - ray scanner or teeth that have been x - rayed. " food irradiation is currently permitted by over 40 countries and volumes are estimated to exceed 500, 000 metric tons ( 490, 000 long tons ; 550, 000 short tons ) annually worldwide. food irradiation is essentially a non - nuclear technology ; it relies on the use of ionizing radiation which may be generated by accelerators for electrons and conversion into bremsstrahlung, but which may also use gamma - rays from nuclear decay. there is a worldwide industry for processing by ionizing radiation, the majority by number
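The passage's figure of about 2.5 °C of warming for a 10 kGy dose follows directly from the definition of the gray (1 Gy = 1 J of absorbed energy per kg) together with the specific heat of water. A minimal sketch of that arithmetic is given below; the specific heat constant is a standard physical value rather than something stated in the passage.

```python
# Rough check of the warming figure quoted above for a 10 kGy dose.
# 1 gray = 1 joule of absorbed energy per kilogram of material.
SPECIFIC_HEAT_WATER = 4184.0  # J/(kg*K), standard value; food treated as water-like

def temperature_rise_celsius(dose_gray: float) -> float:
    """Temperature rise of a water-equivalent material for a given absorbed dose."""
    return dose_gray / SPECIFIC_HEAT_WATER

if __name__ == "__main__":
    print(f"10 kGy -> ~{temperature_rise_celsius(10_000):.1f} deg C rise")  # ~2.4 deg C
```

The result, roughly 2.4 °C, is consistent with the passage's quoted value of about 2.5 °C; the small difference comes only from rounding and from real food not being exactly water.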
and by processing power using accelerators. food irradiation is only a niche application compared to medical supplies, plastic materials, raw materials, gemstones, cables and wires, etc. = = accidents = = nuclear accidents, because of the powerful forces involved, are often very dangerous. historically, the first incidents involved fatal radiation exposure. marie curie died from aplastic anemia which resulted from her high levels of exposure. two scientists, an american and a canadian respectively, harry daghlian and louis slotin, died after mishandling the same plutonium mass. unlike conventional weapons, the intense light, heat, and explosive force are not the only deadly components of a nuclear weapon. approximately half of the deaths at hiroshima and nagasaki occurred two to five years afterward from radiation exposure. civilian nuclear and radiological accidents primarily involve nuclear power plants. most common are nuclear leaks that expose workers to hazardous material. a nuclear meltdown refers to the more serious hazard of releasing nuclear material into the surrounding environment. the most significant meltdowns occurred at three mile island in pennsylvania and chernobyl in the soviet ukraine. the earthquake and tsunami on march 11, 2011 caused serious damage to three nuclear reactors and a spent fuel storage pond at the fukushima daiichi nuclear power plant in japan. military reactors that experienced similar accidents were windscale in the united kingdom and sl - 1 in the united states. military accidents usually involve the loss or unexpected detonation of nuclear weapons. the castle bravo test in 1954 produced a larger yield than expected, which contaminated nearby islands, a japanese fishing boat ( with one fatality ), and raised concerns about contaminated fish in japan. in the 1950s through 1970s, several nuclear bombs were lost from submarines and aircraft, some of which have never been recovered. the last twenty years have seen a marked decline in such accidents. = = examples of environmental benefits = = proponents of nuclear energy note that annually, nuclear - generated electricity avoids 470 million metric tons of carbon dioxide emissions that would otherwise come from fossil fuels. additionally, the comparatively small amount of waste that nuclear energy does create is safely disposed of by the large scale nuclear energy production facilities or is repurposed / recycled for other energy uses. proponents of nuclear energy also bring to attention the opportunity cost of utilizing other forms of electricity. for example, the environmental protection agency estimates that coal kills 30, 000 people a year, as a result of its environmental impact, while 60 people died in the chernobyl disaster. a real world example of impact provided by proponents of nuclear energy is
Question: When hunting by humans causes a species to become extinct, this may produce damaging effects throughout the ecosystem of the extinct species. What is the cause of this damage?
A) alteration of a food web
B) degradation of a habitat
C) modification of a climate
D) reversal of a flow of energy
|
A) alteration of a food web
|
Context:
several thoughts are presented on the long - standing difficulties that both students and academics face with calculus 101. some of these thoughts may be of more general interest.
we make a few comments on some misleading statements in the above paper.
honorable rector, honorable professors, and students of this university : in these times of political and economic struggle and nationalistic fragmentation, it is a particular joy for me to see people assembling here to give their attention exclusively to the highest values that are common to us all. i am glad to be in this blessed land before a small circle of people who are interested in topics of science to speak on those issues that, in essence, are the subject of my own meditations... [ abridged ].
learning to use math in physics involves combining ( blending ) our everyday experiences and the conceptual ideas of physics with symbolic mathematical representations. graphs are one of the best ways to learn to build the blend. they are a mathematical representation that builds on visual recognition to create a bridge between words and equations. but students in introductory physics classes often see a graph as an endpoint, a task the teacher asks them to complete, rather than as a tool to help them make sense of a physical system. and most of the graph problems in traditional introductory physics texts simply ask students to extract a number from a graph. but if graphs are used appropriately, they can be a powerful tool in helping students learn to build the blend and develop their physical intuition and ability to think with math.
by reference to undefined terms : the subject in which we never know what we are talking about, nor whether what we are saying is true. bertrand russell 1901 many other attempts to characterize mathematics have led to humor or poetic prose : a mathematician is a blind man in a dark room looking for a black cat which isn ' t there. charles darwin a mathematician, like a painter or poet, is a maker of patterns. if his patterns are more permanent than theirs, it is because they are made with ideas. g. h. hardy, 1940 mathematics is the art of giving the same name to different things. henri poincare mathematics is the science of skillful operations with concepts and rules invented just for this purpose. [ this purpose being the skillful operation.... ] eugene wigner mathematics is not a book confined within a cover and bound between brazen clasps, whose contents it needs only patience to ransack ; it is not a mine, whose treasures may take long to reduce into possession, but which fill only a limited number of veins and lodes ; it is not a soil, whose fertility can be exhausted by the yield of successive harvests ; it is not a continent or an ocean, whose area can be mapped out and its contour defined : it is limitless as that space which it finds too narrow for its aspirations ; its possibilities are as infinite as the worlds which are forever crowding in and multiplying upon the astronomer ' s gaze ; it is as incapable of being restricted within assigned boundaries or being reduced to definitions of permanent validity, as the consciousness of life, which seems to slumber in each monad, in every atom of matter, in each leaf and bud cell, and is forever ready to burst forth into new forms of vegetable and animal existence. james joseph sylvester what is mathematics? what is it for? what are mathematicians doing nowadays? wasn ' t it all finished long ago? how many new numbers can you invent anyway? is today ' s mathematics just a matter of huge calculations, with the mathematician as a kind of zookeeper, making sure the precious computers are fed and watered? if it ' s not, what is it other than the incomprehensible outpourings of superpowered brainboxes with their heads in the clouds and their feet dangling from the lofty balconies of their ivory towers? mathematics is all of these, and none. mostly, it ' s just different. it ' s not what you expect it to be, you turn your back for
mixes of multi - track recordings. it is common to record a commercial record at one studio and have it mixed by different engineers in other studios. mastering engineer β the person who masters the final mixed stereo tracks ( or sometimes a series of audio stems, which consists in a mix of the main sections ) that the mix engineer produces. the mastering engineer makes any final adjustments to the overall sound of the record in the final step before commercial duplication. mastering engineers use principles of equalization, compression and limiting to fine - tune the sound timbre and dynamics and to achieve a louder recording. sound designer β broadly an artist who produces soundtracks or sound effects content for media. live sound engineer front of house ( foh ) engineer, or a1. β a person dealing with live sound reinforcement. this usually includes planning and installation of loudspeakers, cabling and equipment and mixing sound during the show. this may or may not include running the foldback sound. a live / sound reinforcement engineer hears source material and tries to correlate that sonic experience with system performance. wireless microphone engineer, or a2. this position is responsible for wireless microphones during a theatre production, a sports event or a corporate event. foldback or monitor engineer β a person running foldback sound during a live event. the term foldback comes from the old practice of folding back audio signals from the front of house ( foh ) mixing console to the stage so musicians can hear themselves while performing. monitor engineers usually have a separate audio system from the foh engineer and manipulate audio signals independently from what the audience hears so they can satisfy the requirements of each performer on stage. in - ear systems, digital and analog mixing consoles, and a variety of speaker enclosures are typically used by monitor engineers. in addition, most monitor engineers must be familiar with wireless or rf ( radio - frequency ) equipment and often must communicate personally with the artist ( s ) during each performance. systems engineer β responsible for the design setup of modern pa systems, which are often very complex. a systems engineer is usually also referred to as a crew chief on tour and is responsible for the performance and day - to - day job requirements of the audio crew as a whole along with the foh audio system. this is a sound - only position concerned with implementation, not to be confused with the interdisciplinary field of system engineering, which typically requires a college degree. re - recording mixer β a person in post - production who mixes audio tracks for feature films or television programs. = = equipment = = an audio engineer is
this is an " essay - review " of a book with the same title, by jeffrey bub ( cambridge university press, 1997 ).
there are four puzzling questions about by the magnitudes of neutrino mixings and mass splittings. a brief sketch is given of the various kinds of models of neutrino masses and how they answer these questions. special attention is given to so - called " lopsided " models.
incubated, and the formation of a colored product indicates a positive hybridoma. alternatively, immunocytochemical screening, western blot, and immunoprecipitation - mass spectrometry can be used. unlike western blot assays, immunoprecipitation - mass spectrometry facilitates screening and ranking of clones which bind to the native ( non - denatured ) forms of antigen proteins. flow cytometry screening has been used for primary screening of a large number ( ~ 1000 ) of hybridoma clones recognizing the native form of the antigen on the cell surface. in the flow cytometry - based screening, a mixture of antigen - negative cells and antigen - positive cells is used as the antigen to be tested for each hybridoma supernatant sample. the b cell that produces the desired antibodies can be cloned to produce many identical daughter clones. supplemental media containing interleukin - 6 ( such as briclone ) are essential for this step. once a hybridoma colony is established, it will continually grow in culture medium like rpmi - 1640 ( with antibiotics and fetal bovine serum ) and produce antibodies. multiwell plates are used initially to grow the hybridomas, and after selection, are changed to larger tissue culture flasks. this maintains the well - being of the hybridomas and provides enough cells for cryopreservation and supernatant for subsequent investigations. the culture supernatant can yield 1 to 60 μg / ml of monoclonal antibody, which is maintained at - 20 °c or lower until required. by using culture supernatant or a purified immunoglobulin preparation, further analysis of a potential monoclonal antibody - producing hybridoma can be made in terms of reactivity, specificity, and cross - reactivity. = = applications = = the uses of monoclonal antibodies are numerous and include the prevention, diagnosis, and treatment of disease. for example, monoclonal antibodies can distinguish subsets of b cells and t cells, which is helpful in identifying different types of leukaemias. in addition, specific monoclonal antibodies have been used to define cell surface markers on white blood cells and other cell types. this led to the cluster of differentiation series of markers. these are often referred to as cd markers and define several hundred different cell surface components of cells, each specified by binding of a particular monoclonal antibody. such antibodies are extremely useful for fluorescence - activated cell sorting,
a highly - asymmetric " psi ' ' factory " may be the best approach for studying d0 anti - d0 mixing.
Question: Sarah's class is learning about mixtures and solutions. Her teacher writes four statements on the board. Which statement best describes a mixture?
A) Both substances mix evenly.
B) Both substances can evaporate.
C) One substance dissolves into another.
D) One substance can be separated from the other.
|
D) One substance can be separated from the other.
|
Context:
in the year 1598 philipp uffenbach published a printed diptych sundial, which is a forerunner of franz ritter ' s horizontal sundial. uffenbach ' s sundial contains, apart from the usual information on a sundial, ascending signs of the zodiac, several of the brightest stars, an almucantar and, most importantly, the oldest gnomonic world map known so far. the sundial is constructed for the polar height of 50 1 / 6 degrees, the latitude of frankfurt / main, the town of his citizenship.
the scientific revolution. aristotle also contributed to theories of the elements and the cosmos. he believed that the celestial bodies ( such as the planets and the sun ) had something called an unmoved mover that put the celestial bodies in motion. aristotle tried to explain everything through mathematics and physics, but sometimes explained things such as the motion of celestial bodies through a higher power such as god. aristotle did not have the technological advancements that would have explained the motion of celestial bodies. in addition, aristotle had many views on the elements. he believed that everything was derived of the elements earth, water, air, fire, and lastly the aether. the aether was a celestial element, and therefore made up the matter of the celestial bodies. the elements of earth, water, air and fire were derived of a combination of two of the characteristics of hot, wet, cold, and dry, and all had their inevitable place and motion. the motion of these elements begins with earth being the closest to " the earth, " then water, air, fire, and finally aether. in addition to the makeup of all things, aristotle came up with theories as to why things did not return to their natural motion. he understood that water sits above earth, air above water, and fire above air in their natural state. he explained that although all elements must return to their natural state, the human body and other living things have a constraint on the elements β thus not allowing the elements making one who they are to return to their natural state. the important legacy of this period included substantial advances in factual knowledge, especially in anatomy, zoology, botany, mineralogy, geography, mathematics and astronomy ; an awareness of the importance of certain scientific problems, especially those related to the problem of change and its causes ; and a recognition of the methodological importance of applying mathematics to natural phenomena and of undertaking empirical research. in the hellenistic age scholars frequently employed the principles developed in earlier greek thought : the application of mathematics and deliberate empirical research, in their scientific investigations. thus, clear unbroken lines of influence lead from ancient greek and hellenistic philosophers, to medieval muslim philosophers and scientists, to the european renaissance and enlightenment, to the secular sciences of the modern day. neither reason nor inquiry began with the ancient greeks, but the socratic method did, along with the idea of forms, give great advances in geometry, logic, and the natural sciences. according to benjamin farrington, former professor of classics at swansea university : " men were weighing for thousands of years before archimedes worked out the
oscillations of the sun have been used to understand its interior structure. the extension of similar studies to more distant stars has raised many difficulties despite the strong efforts of the international community over the past decades. the corot ( convection rotation and planetary transits ) satellite, launched in december 2006, has now measured oscillations and the stellar granulation signature in three main sequence stars that are noticeably hotter than the sun. the oscillation amplitudes are about 1. 5 times as large as those in the sun ; the stellar granulation is up to three times as high. the stellar amplitudes are about 25 % below the theoretic values, providing a measurement of the nonadiabaticity of the process ruling the oscillations in the outer layers of the stars.
on biological causation and the diversity of life. he made countless observations of nature, especially the habits and attributes of plants and animals on lesbos, classified more than 540 animal species, and dissected at least 50. aristotle ' s writings profoundly influenced subsequent islamic and european scholarship, though they were eventually superseded in the scientific revolution. aristotle also contributed to theories of the elements and the cosmos. he believed that the celestial bodies ( such as the planets and the sun ) had something called an unmoved mover that put the celestial bodies in motion. aristotle tried to explain everything through mathematics and physics, but sometimes explained things such as the motion of celestial bodies through a higher power such as god. aristotle did not have the technological advancements that would have explained the motion of celestial bodies. in addition, aristotle had many views on the elements. he believed that everything was derived of the elements earth, water, air, fire, and lastly the aether. the aether was a celestial element, and therefore made up the matter of the celestial bodies. the elements of earth, water, air and fire were derived of a combination of two of the characteristics of hot, wet, cold, and dry, and all had their inevitable place and motion. the motion of these elements begins with earth being the closest to " the earth, " then water, air, fire, and finally aether. in addition to the makeup of all things, aristotle came up with theories as to why things did not return to their natural motion. he understood that water sits above earth, air above water, and fire above air in their natural state. he explained that although all elements must return to their natural state, the human body and other living things have a constraint on the elements β thus not allowing the elements making one who they are to return to their natural state. the important legacy of this period included substantial advances in factual knowledge, especially in anatomy, zoology, botany, mineralogy, geography, mathematics and astronomy ; an awareness of the importance of certain scientific problems, especially those related to the problem of change and its causes ; and a recognition of the methodological importance of applying mathematics to natural phenomena and of undertaking empirical research. in the hellenistic age scholars frequently employed the principles developed in earlier greek thought : the application of mathematics and deliberate empirical research, in their scientific investigations. thus, clear unbroken lines of influence lead from ancient greek and hellenistic philosophers, to medieval muslim philosophers and scientists, to the european renaissance and enlightenment, to the secular sciences of the modern day. neither reason
the motion of celestial bodies through a higher power such as god. aristotle did not have the technological advancements that would have explained the motion of celestial bodies. in addition, aristotle had many views on the elements. he believed that everything was derived of the elements earth, water, air, fire, and lastly the aether. the aether was a celestial element, and therefore made up the matter of the celestial bodies. the elements of earth, water, air and fire were derived of a combination of two of the characteristics of hot, wet, cold, and dry, and all had their inevitable place and motion. the motion of these elements begins with earth being the closest to " the earth, " then water, air, fire, and finally aether. in addition to the makeup of all things, aristotle came up with theories as to why things did not return to their natural motion. he understood that water sits above earth, air above water, and fire above air in their natural state. he explained that although all elements must return to their natural state, the human body and other living things have a constraint on the elements β thus not allowing the elements making one who they are to return to their natural state. the important legacy of this period included substantial advances in factual knowledge, especially in anatomy, zoology, botany, mineralogy, geography, mathematics and astronomy ; an awareness of the importance of certain scientific problems, especially those related to the problem of change and its causes ; and a recognition of the methodological importance of applying mathematics to natural phenomena and of undertaking empirical research. in the hellenistic age scholars frequently employed the principles developed in earlier greek thought : the application of mathematics and deliberate empirical research, in their scientific investigations. thus, clear unbroken lines of influence lead from ancient greek and hellenistic philosophers, to medieval muslim philosophers and scientists, to the european renaissance and enlightenment, to the secular sciences of the modern day. neither reason nor inquiry began with the ancient greeks, but the socratic method did, along with the idea of forms, give great advances in geometry, logic, and the natural sciences. according to benjamin farrington, former professor of classics at swansea university : " men were weighing for thousands of years before archimedes worked out the laws of equilibrium ; they must have had practical and intuitional knowledge of the principals involved. what archimedes did was to sort out the theoretical implications of this practical knowledge and present the resulting body of knowledge as a logically coherent system. " and again : " with astonishment we find ourselves on the threshold of modern science
so mars below means blood and war ", is a false cause fallacy. : 26 many astrologers claim that astrology is scientific. if one were to attempt to try to explain it scientifically, there are only four fundamental forces ( conventionally ), limiting the choice of possible natural mechanisms. : 65 some astrologers have proposed conventional causal agents such as electromagnetism and gravity. the strength of these forces drops off with distance. : 65 scientists reject these proposed mechanisms as implausible since, for example, the magnetic field, when measured from earth, of a large but distant planet such as jupiter is far smaller than that produced by ordinary household appliances. astronomer phil plait noted that in terms of magnitude, the sun is the only object with an electromagnetic field of note, but astrology isn ' t based just off the sun alone. : 65 while astrologers could try to suggest a fifth force, this is inconsistent with the trends in physics with the unification of electromagnetism and the weak force into the electroweak force. if the astrologer insisted on being inconsistent with the current understanding and evidential basis of physics, that would be an extraordinary claim. : 65 it would also be inconsistent with the other forces which drop off with distance. : 65 if distance is irrelevant, then, logically, all objects in space should be taken into account. : 66 carl jung sought to invoke synchronicity, the claim that two events have some sort of acausal connection, to explain the lack of statistically significant results on astrology from a single study he conducted. however, synchronicity itself is considered neither testable nor falsifiable. the study was subsequently heavily criticised for its non - random sample and its use of statistics and also its lack of consistency with astrology. = = psychology = = psychological studies have not found any robust relationship between astrological signs and life outcomes. for example, a study showed that zodiac signs are no more effective than random numbers in predicting subjective well - being and quality of life. it has also been shown that confirmation bias is a psychological factor that contributes to belief in astrology. : 344 : 180 β 181 : 42 β 48 confirmation bias is a form of cognitive bias. : 553 from the literature, astrology believers often tend to selectively remember those predictions that turned out to be true and do not remember those that turned out false. another, separate, form of confirmation bias also plays a role, where believers often fail to
the magnetic field of the sun is the underlying cause of the many diverse phenomena combined under the heading of solar activity. here we describe the magnetic field as it threads its way from the bottom of the convection zone, where it is built up by the solar dynamo, to the solar surface, where it manifests itself in the form of sunspots and faculae, and beyond into the outer solar atmosphere and, finally, into the heliosphere. on the way, it transports energy from the surface and the subsurface layers into the solar corona, where it heats the gas and accelerates the solar wind.
the first observations of saturn ' s visible - wavelength aurora were made by the cassini camera. the aurora was observed between 2006 and 2013 in the northern and southern hemispheres. the color of the aurora changes from pink at a few hundred km above the horizon to purple at 1000 - 1500 km above the horizon. the spectrum observed in 9 filters spanning wavelengths from 250 nm to 1000 nm has a prominent h - alpha line and roughly agrees with laboratory simulated auroras. auroras in both hemispheres vary dramatically with longitude. auroras form bright arcs between 70 and 80 degree latitude north and between 65 and 80 degree latitude south, which sometimes spiral around the pole, and sometimes form double arcs. a large 10, 000 - km - scale longitudinal brightness structure persists for more than 100 hours. this structure rotates approximately together with saturn. on top of the large steady structure, the auroras brighten suddenly on the timescales of a few minutes. these brightenings repeat with a period of about 1 hour. smaller, 1000 - km - scale structures may move faster or lag behind saturn ' s rotation on timescales of tens of minutes. the persistence of nearly - corotating large bright longitudinal structure in the auroral oval seen in two movies spanning 8 and 11 rotations gives an estimate on the period of 10. 65 $ \ pm $ 0. 15 h for 2009 in the northern oval and 10. 8 $ \ pm $ 0. 1 h for 2012 in the southern oval. the 2009 north aurora period is close to the north branch of saturn kilometric radiation ( skr ) detected at that time.
also launched missions to mercury in 2004, with the messenger probe demonstrating as the first use of a solar sail. nasa also launched probes to the outer solar system starting in the 1960s. pioneer 10 was the first probe to the outer planets, flying by jupiter, while pioneer 11 provided the first close up view of the planet. both probes became the first objects to leave the solar system. the voyager program launched in 1977, conducting flybys of jupiter and saturn, neptune, and uranus on a trajectory to leave the solar system. the galileo spacecraft, deployed from the space shuttle flight sts - 34, was the first spacecraft to orbit jupiter, discovering evidence of subsurface oceans on the europa and observed that the moon may hold ice or liquid water. a joint nasa - european space agency - italian space agency mission, cassini β huygens, was sent to saturn ' s moon titan, which, along with mars and europa, are the only celestial bodies in the solar system suspected of being capable of harboring life. cassini discovered three new moons of saturn and the huygens probe entered titan ' s atmosphere. the mission discovered evidence of liquid hydrocarbon lakes on titan and subsurface water oceans on the moon of enceladus, which could harbor life. finally launched in 2006, the new horizons mission was the first spacecraft to visit pluto and the kuiper belt. beyond interplanetary probes, nasa has launched many space telescopes. launched in the 1960s, the orbiting astronomical observatory were nasa ' s first orbital telescopes, providing ultraviolet, gamma - ray, x - ray, and infrared observations. nasa launched the orbiting geophysical observatory in the 1960s and 1970s to look down at earth and observe its interactions with the sun. the uhuru satellite was the first dedicated x - ray telescope, mapping 85 % of the sky and discovering a large number of black holes. launched in the 1990s and early 2000s, the great observatories program are among nasa ' s most powerful telescopes. the hubble space telescope was launched in 1990 on sts - 31 from the discovery and could view galaxies 15 billion light years away. a major defect in the telescope ' s mirror could have crippled the program, had nasa not used computer enhancement to compensate for the imperfection and launched five space shuttle servicing flights to replace the damaged components. the compton gamma ray observatory was launched from the atlantis on sts - 37 in 1991, discovering a possible source of antimatter at the center of the milky way and observing that the majority of gamma - ray bursts
unversed in geometry enter here, " and also turned out many notable philosophers. plato ' s student aristotle introduced empiricism and the notion that universal truths can be arrived at via observation and induction, thereby laying the foundations of the scientific method. aristotle also produced many biological writings that were empirical in nature, focusing on biological causation and the diversity of life. he made countless observations of nature, especially the habits and attributes of plants and animals on lesbos, classified more than 540 animal species, and dissected at least 50. aristotle ' s writings profoundly influenced subsequent islamic and european scholarship, though they were eventually superseded in the scientific revolution. aristotle also contributed to theories of the elements and the cosmos. he believed that the celestial bodies ( such as the planets and the sun ) had something called an unmoved mover that put the celestial bodies in motion. aristotle tried to explain everything through mathematics and physics, but sometimes explained things such as the motion of celestial bodies through a higher power such as god. aristotle did not have the technological advancements that would have explained the motion of celestial bodies. in addition, aristotle had many views on the elements. he believed that everything was derived of the elements earth, water, air, fire, and lastly the aether. the aether was a celestial element, and therefore made up the matter of the celestial bodies. the elements of earth, water, air and fire were derived of a combination of two of the characteristics of hot, wet, cold, and dry, and all had their inevitable place and motion. the motion of these elements begins with earth being the closest to " the earth, " then water, air, fire, and finally aether. in addition to the makeup of all things, aristotle came up with theories as to why things did not return to their natural motion. he understood that water sits above earth, air above water, and fire above air in their natural state. he explained that although all elements must return to their natural state, the human body and other living things have a constraint on the elements β thus not allowing the elements making one who they are to return to their natural state. the important legacy of this period included substantial advances in factual knowledge, especially in anatomy, zoology, botany, mineralogy, geography, mathematics and astronomy ; an awareness of the importance of certain scientific problems, especially those related to the problem of change and its causes ; and a recognition of the methodological importance of applying mathematics to natural phenomena and of undertaking empirical research. in the hellenistic age scholars
Question: Which of the following best explains why the Sun appears to move across the sky every day?
A) The Sun rotates on its axis.
B) Earth rotates on its axis.
C) The Sun orbits around Earth.
D) Earth orbits around the Sun.
|
B) Earth rotates on its axis.
|
Context:
the pomeron phenomenon remains a mystery. a short review of the experimental situation in diffractive physics and an account of some spectacular manifestations of the pomeron are given.
the theory outright... lakatos sought to reconcile the rationalism of popperian falsificationism with what seemed to be its own refutation by history ". many philosophers have tried to solve the problem of demarcation in the following terms : a statement constitutes knowledge if sufficiently many people believe it sufficiently strongly. but the history of thought shows us that many people were totally committed to absurd beliefs. if the strengths of beliefs were a hallmark of knowledge, we should have to rank some tales about demons, angels, devils, and of heaven and hell as knowledge. scientists, on the other hand, are very sceptical even of their best theories. newton ' s is the most powerful theory science has yet produced, but newton himself never believed that bodies attract each other at a distance. so no degree of commitment to beliefs makes them knowledge. indeed, the hallmark of scientific behaviour is a certain scepticism even towards one ' s most cherished theories. blind commitment to a theory is not an intellectual virtue : it is an intellectual crime. thus a statement may be pseudoscientific even if it is eminently ' plausible ' and everybody believes in it, and it may be scientifically valuable even if it is unbelievable and nobody believes in it. a theory may even be of supreme scientific value even if no one understands it, let alone believes in it. the boundary between science and pseudoscience is disputed and difficult to determine analytically, even after more than a century of study by philosophers of science and scientists, and despite some basic agreements on the fundamentals of the scientific method. the concept of pseudoscience rests on an understanding that the scientific method has been misrepresented or misapplied with respect to a given theory, but many philosophers of science maintain that different kinds of methods are held as appropriate across different fields and different eras of human history. according to lakatos, the typical descriptive unit of great scientific achievements is not an isolated hypothesis but " a powerful problem - solving machinery, which, with the help of sophisticated mathematical techniques, digests anomalies and even turns them into positive evidence ". to popper, pseudoscience uses induction to generate theories, and only performs experiments to seek to verify them. to popper, falsifiability is what determines the scientific status of a theory. taking a historical approach, kuhn observed that scientists did not follow popper ' s rule, and might ignore falsifying data, unless overwhelming. to kuhn, puzzle - solving within
it seems natural to ask why the universe exists at all. modern physics suggests that the universe can exist all by itself as a self - contained system, without anything external to create or sustain it. but there might not be an absolute answer to why it exists. i argue that any attempt to account for the existence of something rather than nothing must ultimately bottom out in a set of brute facts ; the universe simply is, without ultimate cause or explanation.
little information is known about the polarization of gluons inside a longitudinally polarized proton. i report on the sensitivity of photoproduction experiments to it. both jet and heavy quark production are considered.
process of lactic acid fermentation, which produced other preserved foods, such as soy sauce. fermentation was also used in this time period to produce leavened bread. although the process of fermentation was not fully understood until louis pasteur ' s work in 1857, it is still the first use of biotechnology to convert a food source into another form. before the time of charles darwin ' s work and life, animal and plant scientists had already used selective breeding. darwin added to that body of work with his scientific observations about the ability of science to change species. these accounts contributed to darwin ' s theory of natural selection. for thousands of years, humans have used selective breeding to improve the production of crops and livestock to use them for food. in selective breeding, organisms with desirable characteristics are mated to produce offspring with the same characteristics. for example, this technique was used with corn to produce the largest and sweetest crops. in the early twentieth century scientists gained a greater understanding of microbiology and explored ways of manufacturing specific products. in 1917, chaim weizmann first used a pure microbiological culture in an industrial process, that of manufacturing corn starch using clostridium acetobutylicum, to produce acetone, which the united kingdom desperately needed to manufacture explosives during world war i. biotechnology has also led to the development of antibiotics. in 1928, alexander fleming discovered the mold penicillium. his work led to the purification of the antibiotic formed by the mold by howard florey, ernst boris chain and norman heatley β to form what we today know as penicillin. in 1940, penicillin became available for medicinal use to treat bacterial infections in humans. the field of modern biotechnology is generally thought of as having been born in 1971 when paul berg ' s ( stanford ) experiments in gene splicing had early success. herbert w. boyer ( univ. calif. at san francisco ) and stanley n. cohen ( stanford ) significantly advanced the new technology in 1972 by transferring genetic material into a bacterium, such that the imported material would be reproduced. the commercial viability of a biotechnology industry was significantly expanded on june 16, 1980, when the united states supreme court ruled that a genetically modified microorganism could be patented in the case of diamond v. chakrabarty. indian - born ananda chakrabarty, working for general electric, had modified a bacterium ( of the genus pseudomonas ) capable of breaking down crude oil, which he proposed to
the origins of the series of european cosmic - ray symposia are briefly described. the first meeting in the series, on hadronic interactions and extensive air showers, held in lodz, poland in 1968, was attended by the author : some memories are recounted.
there is an odd tension in electroweak physics. perturbation theory is extremely successful. at the same time, fundamental field theory gives manifold reasons why this should not be the case. this tension is resolved by the fröhlich - morchio - strocchi mechanism. however, the legacy of this work goes far beyond the resolution of this tension, and may usher in a fundamentally and ontologically different perspective on elementary particles, and even quantum gravity.
xylem and phloem that reproduced by spores germinating into free - living gametophytes evolved during the silurian period and diversified into several lineages during the late silurian and early devonian. representatives of the lycopods have survived to the present day. by the end of the devonian period, several groups, including the lycopods, sphenophylls and progymnosperms, had independently evolved " megaspory " β their spores were of two distinct sizes, larger megaspores and smaller microspores. their reduced gametophytes developed from megaspores retained within the spore - producing organs ( megasporangia ) of the sporophyte, a condition known as endospory. seeds consist of an endosporic megasporangium surrounded by one or two sheathing layers ( integuments ). the young sporophyte develops within the seed, which on germination splits to release it. the earliest known seed plants date from the latest devonian famennian stage. following the evolution of the seed habit, seed plants diversified, giving rise to a number of now - extinct groups, including seed ferns, as well as the modern gymnosperms and angiosperms. gymnosperms produce " naked seeds " not fully enclosed in an ovary ; modern representatives include conifers, cycads, ginkgo, and gnetales. angiosperms produce seeds enclosed in a structure such as a carpel or an ovary. ongoing research on the molecular phylogenetics of living plants appears to show that the angiosperms are a sister clade to the gymnosperms. = = plant physiology = = plant physiology encompasses all the internal chemical and physical activities of plants associated with life. chemicals obtained from the air, soil and water form the basis of all plant metabolism. the energy of sunlight, captured by oxygenic photosynthesis and released by cellular respiration, is the basis of almost all life. photoautotrophs, including all green plants, algae and cyanobacteria gather energy directly from sunlight by photosynthesis. heterotrophs including all animals, all fungi, all completely parasitic plants, and non - photosynthetic bacteria take in organic molecules produced by photoautotrophs and respire them or use them in the construction of cells and tissues. respiration is the oxidation of carbon compounds by breaking them down into simpler structures to
this is an experimentalist ' s list of questions concerning the physics of the charmed baryon sector which have no satisfactory answer.
an essay on horndeski gravity, how it was formulated in the early 1970s and how it was ' re - discovered ' and widely adopted by cosmologists more than thirty years later.
Question: The theory of spontaneous generation was eventually disproved scientifically by
A) arguments in philosophy.
B) chemical analysis of material.
C) examining models of the process.
D) conducting a controlled experiment.
|
D) conducting a controlled experiment.
|
Context:
= = = environmental remediation = = = environmental remediation is the process through which contaminants or pollutants in soil, water and other media are removed to improve environmental quality. the main focus is the reduction of hazardous substances within the environment. some of the areas involved in environmental remediation include soil contamination, hazardous waste, groundwater contamination, and oil, gas and chemical spills. the three most common types of environmental remediation are soil, water, and sediment remediation. soil remediation consists of removing contaminants from soil, as these pose great risks to humans and the ecosystem. some examples of such contaminants are heavy metals, pesticides, and radioactive materials. depending on the contaminant, the remedial processes can be physical, chemical, thermal, or biological. water remediation is one of the most important, considering water is an essential natural resource. depending on the source of the water, there will be different contaminants. surface water contamination mainly consists of agricultural, animal, and industrial waste, as well as acid mine drainage. there has been a rise in the need for water remediation due to the increased discharge of industrial waste, leading to a demand for sustainable water solutions. the market for water remediation is expected to increase steadily to $ 19. 6 billion by 2030. sediment remediation consists of removing contaminated sediments. it is similar to soil remediation, except it is often more sophisticated as it involves additional contaminants. to reduce the contaminants, physical, chemical, and biological processes that help with source control are typically used, but if these processes are not executed correctly, there ' s a risk of the contamination resurfacing. = = = solid waste management = = = solid waste management is the purification, consumption, reuse, disposal, and treatment of solid waste that is undertaken by the government or the ruling bodies of a city / town. it refers to the collection, treatment, and disposal of non - soluble, solid waste material. solid waste is associated with industrial, institutional, commercial and residential activities. hazardous solid waste, when improperly disposed of, can encourage the infestation of insects and rodents, contributing to the spread of diseases. some of the most common types of solid waste management include landfills, vermicomposting, composting, recycling, and incineration. however, a major barrier for solid waste management practices is the high costs associated with recycling
and by processing power using accelerators. food irradiation is only a niche application compared to medical supplies, plastic materials, raw materials, gemstones, cables and wires, etc. = = accidents = = nuclear accidents, because of the powerful forces involved, are often very dangerous. historically, the first incidents involved fatal radiation exposure. marie curie died from aplastic anemia which resulted from her high levels of exposure. two scientists, an american and canadian respectively, harry daghlian and louis slotin, died after mishandling the same plutonium mass. unlike conventional weapons, the intense light, heat, and explosive force is not the only deadly component to a nuclear weapon. approximately half of the deaths from hiroshima and nagasaki died two to five years afterward from radiation exposure. civilian nuclear and radiological accidents primarily involve nuclear power plants. most common are nuclear leaks that expose workers to hazardous material. a nuclear meltdown refers to the more serious hazard of releasing nuclear material into the surrounding environment. the most significant meltdowns occurred at three mile island in pennsylvania and chernobyl in the soviet ukraine. the earthquake and tsunami on march 11, 2011 caused serious damage to three nuclear reactors and a spent fuel storage pond at the fukushima daiichi nuclear power plant in japan. military reactors that experienced similar accidents were windscale in the united kingdom and sl - 1 in the united states. military accidents usually involve the loss or unexpected detonation of nuclear weapons. the castle bravo test in 1954 produced a larger yield than expected, which contaminated nearby islands, a japanese fishing boat ( with one fatality ), and raised concerns about contaminated fish in japan. in the 1950s through 1970s, several nuclear bombs were lost from submarines and aircraft, some of which have never been recovered. the last twenty years have seen a marked decline in such accidents. = = examples of environmental benefits = = proponents of nuclear energy note that annually, nuclear - generated electricity reduces 470 million metric tons of carbon dioxide emissions that would otherwise come from fossil fuels. additionally, the amount of comparatively low waste that nuclear energy does create is safely disposed of by the large scale nuclear energy production facilities or it is repurposed / recycled for other energy uses. proponents of nuclear energy also bring to attention the opportunity cost of utilizing other forms of electricity. for example, the environmental protection agency estimates that coal kills 30, 000 people a year, as a result of its environmental impact, while 60 people died in the chernobyl disaster. a real world example of impact provided by proponents of nuclear energy is
and cell phones are a particular challenge because the stream of data can interfere with focusing and learning. although these technologies affect adults too, young people may be more influenced by it as their developing brains can easily become habituated to switching tasks and become unaccustomed to sustaining attention. too much information, coming too rapidly, can overwhelm thinking. technology is " rapidly and profoundly altering our brains. " high exposure levels stimulate brain cell alteration and release neurotransmitters, which causes the strengthening of some neural pathways and the weakening of others. this leads to heightened stress levels on the brain that, at first, boost energy levels, but, over time, actually augment memory, impair cognition, lead to depression, and alter the neural circuitry of the hippocampus, amygdala and prefrontal cortex. these are the brain regions that control mood and thought. if unchecked, the underlying structure of the brain could be altered. overstimulation due to technology may begin too young. when children are exposed before the age of seven, important developmental tasks may be delayed, and bad learning habits might develop, which " deprives children of the exploration and play that they need to develop. " media psychology is an emerging specialty field that embraces electronic devices and the sensory behaviors occurring from the use of educational technology in learning. = = = sociocultural criticism = = = according to lai, " the learning environment is a complex system where the interplay and interactions of many things impact the outcome of learning. " when technology is brought into an educational setting, the pedagogical setting changes in that technology - driven teaching can change the entire meaning of an activity without adequate research validation. if technology monopolizes an activity, students can begin to develop the sense that " life would scarcely be thinkable without technology. " leo marx considered the word " technology " itself as problematic, susceptible to reification and " phantom objectivity ", which conceals its fundamental nature as something that is only valuable insofar as it benefits the human condition. technology ultimately comes down to affecting the relations between people, but this notion is obfuscated when technology is treated as an abstract notion devoid of good and evil. langdon winner makes a similar point by arguing that the underdevelopment of the philosophy of technology leaves us with an overly simplistic reduction in our discourse to the supposedly dichotomous notions of the " making " versus the " uses " of new technologies and that a narrow focus on " use
wearable technology is any technology that is designed to be used while worn. common types of wearable technology include smartwatches, fitness trackers, and smartglasses. wearable electronic devices are often close to or on the surface of the skin, where they detect, analyze, and transmit information such as vital signs, and / or ambient data and which allow in some cases immediate biofeedback to the wearer. wearable devices collect vast amounts of data from users making use of different behavioral and physiological sensors, which monitor their health status and activity levels. wrist - worn devices include smartwatches with a touchscreen display, while wristbands are mainly used for fitness tracking but do not contain a touchscreen display. wearable devices such as activity trackers are an example of the internet of things, since " things " such as electronics, software, sensors, and connectivity are effectors that enable objects to exchange data ( including data quality ) through the internet with a manufacturer, operator, and / or other connected devices, without requiring human intervention. wearable technology offers a wide range of possible uses, from communication and entertainment to improving health and fitness, however, there are worries about privacy and security because wearable devices have the ability to collect personal data. wearable technology has a variety of use cases which is growing as the technology is developed and the market expands. it can be used to encourage individuals to be more active and improve their lifestyle choices. healthy behavior is encouraged by tracking activity levels and providing useful feedback to enable goal setting. this can be shared with interested stakeholders such as healthcare providers. wearables are popular in consumer electronics, most commonly in the form factors of smartwatches, smart rings, and implants. apart from commercial uses, wearable technology is being incorporated into navigation systems, advanced textiles ( e - textiles ), and healthcare. as wearable technology is being proposed for use in critical applications, like other technology, it is vetted for its reliability and security properties. = = history = = in the 1500s, german inventor peter henlein ( 1485 β 1542 ) created small watches that were worn as necklaces. a century later, pocket watches grew in popularity as waistcoats became fashionable for men. wristwatches were created in the late 1600s but were worn mostly by women as bracelets. pedometers were developed around the same time as pocket watches. the concept of a pedometer was described by leonardo da vinci around 1500, and the germanic national museum in nuremberg has a
and child health in boston, said of the digital generation, " their brains are rewarded not for staying on task, but for jumping to the next thing. the worry is we ' re raising a generation of kids in front of screens whose brains are going to be wired differently. " students have always faced distractions ; computers and cell phones are a particular challenge because the stream of data can interfere with focusing and learning. although these technologies affect adults too, young people may be more influenced by it as their developing brains can easily become habituated to switching tasks and become unaccustomed to sustaining attention. too much information, coming too rapidly, can overwhelm thinking. technology is " rapidly and profoundly altering our brains. " high exposure levels stimulate brain cell alteration and release neurotransmitters, which causes the strengthening of some neural pathways and the weakening of others. this leads to heightened stress levels on the brain that, at first, boost energy levels, but, over time, actually augment memory, impair cognition, lead to depression, and alter the neural circuitry of the hippocampus, amygdala and prefrontal cortex. these are the brain regions that control mood and thought. if unchecked, the underlying structure of the brain could be altered. overstimulation due to technology may begin too young. when children are exposed before the age of seven, important developmental tasks may be delayed, and bad learning habits might develop, which " deprives children of the exploration and play that they need to develop. " media psychology is an emerging specialty field that embraces electronic devices and the sensory behaviors occurring from the use of educational technology in learning. = = = sociocultural criticism = = = according to lai, " the learning environment is a complex system where the interplay and interactions of many things impact the outcome of learning. " when technology is brought into an educational setting, the pedagogical setting changes in that technology - driven teaching can change the entire meaning of an activity without adequate research validation. if technology monopolizes an activity, students can begin to develop the sense that " life would scarcely be thinkable without technology. " leo marx considered the word " technology " itself as problematic, susceptible to reification and " phantom objectivity ", which conceals its fundamental nature as something that is only valuable insofar as it benefits the human condition. technology ultimately comes down to affecting the relations between people, but this notion is obfuscated when technology is treated as an abstract notion devoid of
radiation exposure. marie curie died from aplastic anemia which resulted from her high levels of exposure. two scientists, an american and canadian respectively, harry daghlian and louis slotin, died after mishandling the same plutonium mass. unlike conventional weapons, the intense light, heat, and explosive force is not the only deadly component to a nuclear weapon. approximately half of the deaths from hiroshima and nagasaki died two to five years afterward from radiation exposure. civilian nuclear and radiological accidents primarily involve nuclear power plants. most common are nuclear leaks that expose workers to hazardous material. a nuclear meltdown refers to the more serious hazard of releasing nuclear material into the surrounding environment. the most significant meltdowns occurred at three mile island in pennsylvania and chernobyl in the soviet ukraine. the earthquake and tsunami on march 11, 2011 caused serious damage to three nuclear reactors and a spent fuel storage pond at the fukushima daiichi nuclear power plant in japan. military reactors that experienced similar accidents were windscale in the united kingdom and sl - 1 in the united states. military accidents usually involve the loss or unexpected detonation of nuclear weapons. the castle bravo test in 1954 produced a larger yield than expected, which contaminated nearby islands, a japanese fishing boat ( with one fatality ), and raised concerns about contaminated fish in japan. in the 1950s through 1970s, several nuclear bombs were lost from submarines and aircraft, some of which have never been recovered. the last twenty years have seen a marked decline in such accidents. = = examples of environmental benefits = = proponents of nuclear energy note that annually, nuclear - generated electricity reduces 470 million metric tons of carbon dioxide emissions that would otherwise come from fossil fuels. additionally, the amount of comparatively low waste that nuclear energy does create is safely disposed of by the large scale nuclear energy production facilities or it is repurposed / recycled for other energy uses. proponents of nuclear energy also bring to attention the opportunity cost of utilizing other forms of electricity. for example, the environmental protection agency estimates that coal kills 30, 000 people a year, as a result of its environmental impact, while 60 people died in the chernobyl disaster. a real world example of impact provided by proponents of nuclear energy is the 650, 000 ton increase in carbon emissions in the two months following the closure of the vermont yankee nuclear plant. = = see also = = atomic age lists of nuclear disasters and radioactive incidents nuclear power debate outline of nuclear technology radiology = = references = = = = external links = = nuclear energy institute β beneficial uses
be a low - cost, feasible, and accessible way for promoting pa. " essentially, this insinuates that wearable technology can be beneficial to everyone and really is not cost prohibited. also, when consistently seeing wearable technology being actually utilized and worn by other people, it promotes the idea of physical activity and pushes more individuals to take part. wearable technology also helps with chronic disease development and monitoring physical activity in terms of context. for example, according to the american journal of preventive medicine, " wearables can be used across different chronic disease trajectory phases ( e. g., pre - versus post - surgery ) and linked to medical record data to obtain granular data on how activity frequency, intensity, and duration changes over the disease course and with different treatments. " wearable technology can be beneficial in tracking and helping analyze data in terms of how one is performing as time goes on, and how they may be performing with different changes in their diet, workout routine, or sleep patterns. also, not only can wearable technology be helpful in measuring results pre and post surgery, but it can also help measure results as someone may be rehabbing from a chronic disease such as cancer, or heart disease, etc. wearable technology has the potential to create new and improved ways of how we look at health and how we actually interpret that science behind our health. it can propel us into higher levels of medicine and has already made a significant impact on how patients are diagnosed, treated, and rehabbed over time. however, extensive research still needs to be continued on how to properly integrate wearable technology into health care and how to best utilize it. in addition, despite the reaping benefits of wearable technology, a lot of research still also has to be completed in order to start transitioning wearable technology towards very sick high risk patients. = = = sense - making of the data = = = while wearables can collect data in aggregate form, most of them are limited in their ability to analyze or make conclusions based on this data β thus, most are used primarily for general health information. end user perception of how their data is used plays a big role in how such datasets can be fully optimized. exception include seizure - alerting wearables, which continuously analyze the wearer ' s data and make a decision about calling for help β the data collected can then provide doctors with objective evidence that they may find useful in diagnoses. wearables can account for individual differences, although most
##olithic / mesolithic. the mesolithic technology included the use of microliths as composite stone tools, along with wood, bone, and antler tools. the later stone age, during which the rudiments of agricultural technology were developed, is called the neolithic period. during this period, polished stone tools were made from a variety of hard rocks such as flint, jade, jadeite, and greenstone, largely by working exposures as quarries, but later the valuable rocks were pursued by tunneling underground, the first steps in mining technology. the polished axes were used for forest clearance and the establishment of crop farming and were so effective as to remain in use when bronze and iron appeared. these stone axes were used alongside a continued use of stone tools such as a range of projectiles, knives, and scrapers, as well as tools, made from organic materials such as wood, bone, and antler. stone age cultures developed music and engaged in organized warfare. stone age humans developed ocean - worthy outrigger canoe technology, leading to migration across the malay archipelago, across the indian ocean to madagascar and also across the pacific ocean, which required knowledge of the ocean currents, weather patterns, sailing, and celestial navigation. although paleolithic cultures left no written records, the shift from nomadic life to settlement and agriculture can be inferred from a range of archaeological evidence. such evidence includes ancient tools, cave paintings, and other prehistoric art, such as the venus of willendorf. human remains also provide direct evidence, both through the examination of bones, and the study of mummies. scientists and historians have been able to form significant inferences about the lifestyle and culture of various prehistoric peoples, and especially their technology. = = = ancient = = = = = = = copper and bronze ages = = = = metallic copper occurs on the surface of weathered copper ore deposits and copper was used before copper smelting was known. copper smelting is believed to have originated when the technology of pottery kilns allowed sufficiently high temperatures. the concentration of various elements such as arsenic increase with depth in copper ore deposits and smelting of these ores yields arsenical bronze, which can be sufficiently work hardened to be suitable for making tools. bronze is an alloy of copper with tin ; the latter being found in relatively few deposits globally caused a long time to elapse before true tin bronze became widespread. ( see : tin sources and trade in ancient times ) bronze was a major advancement over stone as a material for
delay of ripening, increase of juice yield, and improvement of re - hydration. irradiation is a more general term of deliberate exposure of materials to radiation to achieve a technical goal ( in this context ' ionizing radiation ' is implied ). as such it is also used on non - food items, such as medical hardware, plastics, tubes for gas - pipelines, hoses for floor - heating, shrink - foils for food packaging, automobile parts, wires and cables ( isolation ), tires, and even gemstones. compared to the amount of food irradiated, the volume of those every - day applications is huge but not noticed by the consumer. the genuine effect of processing food by ionizing radiation relates to damages to the dna, the basic genetic information for life. microorganisms can no longer proliferate and continue their malignant or pathogenic activities. spoilage causing micro - organisms cannot continue their activities. insects do not survive or become incapable of procreation. plants cannot continue the natural ripening or aging process. all these effects are beneficial to the consumer and the food industry, likewise. the amount of energy imparted for effective food irradiation is low compared to cooking the same ; even at a typical dose of 10 kgy most food, which is ( with regard to warming ) physically equivalent to water, would warm by only about 2. 5 Β°c ( 4. 5 Β°f ). the specialty of processing food by ionizing radiation is the fact, that the energy density per atomic transition is very high, it can cleave molecules and induce ionization ( hence the name ) which cannot be achieved by mere heating. this is the reason for new beneficial effects, however at the same time, for new concerns. the treatment of solid food by ionizing radiation can provide an effect similar to heat pasteurization of liquids, such as milk. however, the use of the term, cold pasteurization, to describe irradiated foods is controversial, because pasteurization and irradiation are fundamentally different processes, although the intended end results can in some cases be similar. detractors of food irradiation have concerns about the health hazards of induced radioactivity. a report for the industry advocacy group american council on science and health entitled " irradiated foods " states : " the types of radiation sources approved for the treatment of foods have specific energy levels well below that which would cause any element in food to become radioactive. food undergoing irradiation does not become any more
no offspring, to reduce the population. in industrial and food applications, radiation is used for sterilization of tools and equipment. an advantage is that the object may be sealed in plastic before sterilization. an emerging use in food production is the sterilization of food using food irradiation. food irradiation is the process of exposing food to ionizing radiation in order to destroy microorganisms, bacteria, viruses, or insects that might be present in the food. the radiation sources used include radioisotope gamma ray sources, x - ray generators and electron accelerators. further applications include sprout inhibition, delay of ripening, increase of juice yield, and improvement of re - hydration. irradiation is a more general term of deliberate exposure of materials to radiation to achieve a technical goal ( in this context ' ionizing radiation ' is implied ). as such it is also used on non - food items, such as medical hardware, plastics, tubes for gas - pipelines, hoses for floor - heating, shrink - foils for food packaging, automobile parts, wires and cables ( isolation ), tires, and even gemstones. compared to the amount of food irradiated, the volume of those every - day applications is huge but not noticed by the consumer. the genuine effect of processing food by ionizing radiation relates to damages to the dna, the basic genetic information for life. microorganisms can no longer proliferate and continue their malignant or pathogenic activities. spoilage causing micro - organisms cannot continue their activities. insects do not survive or become incapable of procreation. plants cannot continue the natural ripening or aging process. all these effects are beneficial to the consumer and the food industry, likewise. the amount of energy imparted for effective food irradiation is low compared to cooking the same ; even at a typical dose of 10 kgy most food, which is ( with regard to warming ) physically equivalent to water, would warm by only about 2. 5 Β°c ( 4. 5 Β°f ). the specialty of processing food by ionizing radiation is the fact, that the energy density per atomic transition is very high, it can cleave molecules and induce ionization ( hence the name ) which cannot be achieved by mere heating. this is the reason for new beneficial effects, however at the same time, for new concerns. the treatment of solid food by ionizing radiation can provide an effect similar to heat pasteurization of liquids, such as milk. however
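The ~2.5 °c warming quoted above for a typical 10 kGy dose follows directly from the definition of absorbed dose (1 kGy is 1 kJ deposited per kg) and the specific heat of water. The short Python sketch below reproduces that arithmetic as an illustration; the numeric constants and the assumption that no heat is lost are mine, not taken from the passage.

```python
# Rough check of the warming figure quoted for food irradiation.
# Assumptions (not from the passage): the food is thermally equivalent
# to water, all of the absorbed dose ends up as heat, and none is lost.

dose_kGy = 10.0               # absorbed dose; 1 kGy = 1 kJ deposited per kg of food
c_water = 4.186               # specific heat of water in kJ/(kg*K)

delta_T = dose_kGy / c_water  # temperature rise in kelvin (equivalently, deg C)
print(f"temperature rise at {dose_kGy:g} kGy: about {delta_T:.1f} deg C")
# prints roughly 2.4 deg C, in line with the ~2.5 deg C figure quoted above
```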
Question: Discarded electronic devices, such as outdated computers and cell phones, contain materials that can be toxic to the environment. Which statement best explains why humans continue to use these technologies?
A) Electronics have become less expensive over time.
B) Some areas have recycling programs for electronics.
C) Industries that produce electronics help the economy.
D) Humans value the benefits of these devices over their cost.
|
D) Humans value the benefits of these devices over their cost.
|
Context:
three of what is called the six simple machines, from which all machines are based. these machines are the inclined plane, the wedge, and the lever, which allowed the ancient egyptians to move millions of limestone blocks which weighed approximately 3. 5 tons ( 7, 000 lbs. ) each into place to create structures like the great pyramid of giza, which is 481 feet ( 147 meters ) high. they also made writing medium similar to paper from papyrus, which joshua mark states is the foundation for modern paper. papyrus is a plant ( cyperus papyrus ) which grew in plentiful amounts in the egyptian delta and throughout the nile river valley during ancient times. the papyrus was harvested by field workers and brought to processing centers where it was cut into thin strips. the strips were then laid - out side by side and covered in plant resin. the second layer of strips was laid on perpendicularly, then both pressed together until the sheet was dry. the sheets were then joined to form a roll and later used for writing. egyptian society made several significant advances during dynastic periods in many areas of technology. according to hossam elanzeery, they were the first civilization to use timekeeping devices such as sundials, shadow clocks, and obelisks and successfully leveraged their knowledge of astronomy to create a calendar model that society still uses today. they developed shipbuilding technology that saw them progress from papyrus reed vessels to cedar wood ships while also pioneering the use of rope trusses and stem - mounted rudders. the egyptians also used their knowledge of anatomy to lay the foundation for many modern medical techniques and practiced the earliest known version of neuroscience. elanzeery also states that they used and furthered mathematical science, as evidenced in the building of the pyramids. ancient egyptians also invented and pioneered many food technologies that have become the basis of modern food technology processes. based on paintings and reliefs found in tombs, as well as archaeological artifacts, scholars like paul t nicholson believe that the ancient egyptians established systematic farming practices, engaged in cereal processing, brewed beer and baked bread, processed meat, practiced viticulture and created the basis for modern wine production, and created condiments to complement, preserve and mask the flavors of their food. = = = = indus valley = = = = the indus valley civilization, situated in a resource - rich area ( in modern pakistan and northwestern india ), is notable for its early application of city planning, sanitation technologies, and plumbing. indus valley construction and architecture, called ' vaastu
the paper erroneously assumed that the normal carriers giving rise to the backflow could be either electrons or holes.
great pyramid of giza, which is 481 feet ( 147 meters ) high. they also made writing medium similar to paper from papyrus, which joshua mark states is the foundation for modern paper. papyrus is a plant ( cyperus papyrus ) which grew in plentiful amounts in the egyptian delta and throughout the nile river valley during ancient times. the papyrus was harvested by field workers and brought to processing centers where it was cut into thin strips. the strips were then laid - out side by side and covered in plant resin. the second layer of strips was laid on perpendicularly, then both pressed together until the sheet was dry. the sheets were then joined to form a roll and later used for writing. egyptian society made several significant advances during dynastic periods in many areas of technology. according to hossam elanzeery, they were the first civilization to use timekeeping devices such as sundials, shadow clocks, and obelisks and successfully leveraged their knowledge of astronomy to create a calendar model that society still uses today. they developed shipbuilding technology that saw them progress from papyrus reed vessels to cedar wood ships while also pioneering the use of rope trusses and stem - mounted rudders. the egyptians also used their knowledge of anatomy to lay the foundation for many modern medical techniques and practiced the earliest known version of neuroscience. elanzeery also states that they used and furthered mathematical science, as evidenced in the building of the pyramids. ancient egyptians also invented and pioneered many food technologies that have become the basis of modern food technology processes. based on paintings and reliefs found in tombs, as well as archaeological artifacts, scholars like paul t nicholson believe that the ancient egyptians established systematic farming practices, engaged in cereal processing, brewed beer and baked bread, processed meat, practiced viticulture and created the basis for modern wine production, and created condiments to complement, preserve and mask the flavors of their food. = = = = indus valley = = = = the indus valley civilization, situated in a resource - rich area ( in modern pakistan and northwestern india ), is notable for its early application of city planning, sanitation technologies, and plumbing. indus valley construction and architecture, called ' vaastu shastra ', suggests a thorough understanding of materials engineering, hydrology, and sanitation. = = = = china = = = = the chinese made many first - known discoveries and developments. major technological contributions from china include the earliest known form of the binary code and epigenetic sequencing, early seismological detectors,
this article is withdrawn because of a mistake in the main result of the paper.
the paper is withdrawn by the author because it is superseded by cond - mat / 0303357.
the paper has been withdrawn by the author since the protocol is not new. it is just the oldest version of bb84.
we make two tiny corrections to our previous paper with the same title, and also obtain, as a bonus, something new.
paper has been withdrawn due to non - compliance with ijcsi terms and conditions.
paper withdrawn due to a crucial algebraic error in section 3.
, even if the idempotence property is lost. an everyday example of a projection is the casting of shadows onto a plane ( sheet of paper ) : the projection of a point is its shadow on the sheet of paper, and the projection ( shadow ) of a point on the sheet of paper is that point itself ( idempotency ). the shadow of a three - dimensional sphere is a disk. originally, the notion of projection was introduced in euclidean geometry to denote the projection of the three - dimensional euclidean space onto a plane in it, like the shadow example. the two main projections of this kind are : the projection from a point onto a plane or central projection : if c is a point, called the center of projection, then the projection of a point p different from c onto a plane that does not contain c is the intersection of the line cp with the plane. the points p such that the line cp is parallel to the plane does not have any image by the projection, but one often says that they project to a point at infinity of the plane ( see projective geometry for a formalization of this terminology ). the projection of the point c itself is not defined. the projection parallel to a direction d, onto a plane or parallel projection : the image of a point p is the intersection of the plane with the line parallel to d passing through p. see affine space Β§ projection for an accurate definition, generalized to any dimension. the concept of projection in mathematics is a very old one, and most likely has its roots in the phenomenon of the shadows cast by real - world objects on the ground. this rudimentary idea was refined and abstracted, first in a geometric context and later in other branches of mathematics. over time different versions of the concept developed, but today, in a sufficiently abstract setting, we can unify these variations. in cartography, a map projection is a map of a part of the surface of the earth onto a plane, which, in some cases, but not always, is the restriction of a projection in the above meaning. the 3d projections are also at the basis of the theory of perspective. the need for unifying the two kinds of projections and of defining the image by a central projection of any point different of the center of projection are at the origin of projective geometry. = = definition = = generally, a mapping where the domain and codomain are the same set ( or mathematical structure ) is a projection if the mapping is idempotent, which means that a projection is
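The passage above characterizes a projection by its idempotence. As a concrete illustration (not part of the source text), the Python sketch below builds the matrix of the orthogonal projection onto the plane z = 0, the "shadow on a sheet of paper" example, and checks numerically that applying it twice gives the same result as applying it once; the particular plane and test point are arbitrary choices made for the example.

```python
import numpy as np

# Orthogonal projection onto the plane z = 0 (the "shadow on a sheet of paper").
P = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0]])

point = np.array([2.0, -1.0, 5.0])  # an arbitrary 3-d point (illustrative choice)
shadow = P @ point                  # its projection onto the plane

# Idempotency: projecting twice is the same as projecting once (P @ P == P).
assert np.allclose(P @ P, P)
assert np.allclose(P @ shadow, shadow)
print("projection of", point, "is", shadow)  # -> [ 2. -1.  0.]
```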
Question: A student crumples up a sheet of paper. Which property of the paper has changed?
A) color
B) mass
C) state
D) shape
|
D) shape
|
Context:
the structural components of cells. as a by - product of photosynthesis, plants release oxygen into the atmosphere, a gas that is required by nearly all living things to carry out cellular respiration. in addition, they are influential in the global carbon and water cycles and plant roots bind and stabilise soils, preventing soil erosion. plants are crucial to the future of human society as they provide food, oxygen, biochemicals, and products for people, as well as creating and preserving soil. historically, all living things were classified as either animals or plants and botany covered the study of all organisms not considered animals. botanists examine both the internal functions and processes within plant organelles, cells, tissues, whole plants, plant populations and plant communities. at each of these levels, a botanist may be concerned with the classification ( taxonomy ), phylogeny and evolution, structure ( anatomy and morphology ), or function ( physiology ) of plant life. the strictest definition of " plant " includes only the " land plants " or embryophytes, which include seed plants ( gymnosperms, including the pines, and flowering plants ) and the free - sporing cryptogams including ferns, clubmosses, liverworts, hornworts and mosses. embryophytes are multicellular eukaryotes descended from an ancestor that obtained its energy from sunlight by photosynthesis. they have life cycles with alternating haploid and diploid phases. the sexual haploid phase of embryophytes, known as the gametophyte, nurtures the developing diploid embryo sporophyte within its tissues for at least part of its life, even in the seed plants, where the gametophyte itself is nurtured by its parent sporophyte. other groups of organisms that were previously studied by botanists include bacteria ( now studied in bacteriology ), fungi ( mycology ) β including lichen - forming fungi ( lichenology ), non - chlorophyte algae ( phycology ), and viruses ( virology ). however, attention is still given to these groups by botanists, and fungi ( including lichens ) and photosynthetic protists are usually covered in introductory botany courses. palaeobotanists study ancient plants in the fossil record to provide information about the evolutionary history of plants. cyanobacteria, the first oxygen - releasing photosynthetic organisms on earth, are thought to have given rise to the
to be separated conceptually from geology and crop production and treated as a whole. as a founding father of soil science, fallou has primacy in time. fallou was working on the origins of soil before dokuchaev was born ; however dokuchaev ' s work was more extensive and is considered to be the more significant to modern soil theory than fallou ' s. previously, soil had been considered a product of chemical transformations of rocks, a dead substrate from which plants derive nutritious elements. soil and bedrock were in fact equated. dokuchaev considers the soil as a natural body having its own genesis and its own history of development, a body with complex and multiform processes taking place within it. the soil is considered as different from bedrock. the latter becomes soil under the influence of a series of soil - formation factors ( climate, vegetation, country, relief and age ). according to him, soil should be called the " daily " or outward horizons of rocks regardless of the type ; they are changed naturally by the common effect of water, air and various kinds of living and dead organisms. a 1914 encyclopedic definition : " the different forms of earth on the surface of the rocks, formed by the breaking down or weathering of rocks ". serves to illustrate the historic view of soil which persisted from the 19th century. dokuchaev ' s late 19th century soil concept developed in the 20th century to one of soil as earthy material that has been altered by living processes. a corollary concept is that soil without a living component is simply a part of earth ' s outer layer. further refinement of the soil concept is occurring in view of an appreciation of energy transport and transformation within soil. the term is popularly applied to the material on the surface of the earth ' s moon and mars, a usage acceptable within a portion of the scientific community. accurate to this modern understanding of soil is nikiforoff ' s 1959 definition of soil as the " excited skin of the sub aerial part of the earth ' s crust ". = = areas of practice = = academically, soil scientists tend to be drawn to one of five areas of specialization : microbiology, pedology, edaphology, physics, or chemistry. yet the work specifics are very much dictated by the challenges facing our civilization ' s desire to sustain the land that supports it, and the distinctions between the sub - disciplines of soil science often blur in the process. soil science professionals commonly stay current
be the more significant to modern soil theory than fallou ' s. previously, soil had been considered a product of chemical transformations of rocks, a dead substrate from which plants derive nutritious elements. soil and bedrock were in fact equated. dokuchaev considers the soil as a natural body having its own genesis and its own history of development, a body with complex and multiform processes taking place within it. the soil is considered as different from bedrock. the latter becomes soil under the influence of a series of soil - formation factors ( climate, vegetation, country, relief and age ). according to him, soil should be called the " daily " or outward horizons of rocks regardless of the type ; they are changed naturally by the common effect of water, air and various kinds of living and dead organisms. a 1914 encyclopedic definition : " the different forms of earth on the surface of the rocks, formed by the breaking down or weathering of rocks ". serves to illustrate the historic view of soil which persisted from the 19th century. dokuchaev ' s late 19th century soil concept developed in the 20th century to one of soil as earthy material that has been altered by living processes. a corollary concept is that soil without a living component is simply a part of earth ' s outer layer. further refinement of the soil concept is occurring in view of an appreciation of energy transport and transformation within soil. the term is popularly applied to the material on the surface of the earth ' s moon and mars, a usage acceptable within a portion of the scientific community. accurate to this modern understanding of soil is nikiforoff ' s 1959 definition of soil as the " excited skin of the sub aerial part of the earth ' s crust ". = = areas of practice = = academically, soil scientists tend to be drawn to one of five areas of specialization : microbiology, pedology, edaphology, physics, or chemistry. yet the work specifics are very much dictated by the challenges facing our civilization ' s desire to sustain the land that supports it, and the distinctions between the sub - disciplines of soil science often blur in the process. soil science professionals commonly stay current in soil chemistry, soil physics, soil microbiology, pedology, and applied soil science in related disciplines. one exciting effort drawing in soil scientists in the u. s. as of 2004 is the soil quality initiative. central to the soil quality initiative is developing indices of soil health and then monitoring them in a way
heterotrophs including all animals, all fungi, all completely parasitic plants, and non - photosynthetic bacteria take in organic molecules produced by photoautotrophs and respire them or use them in the construction of cells and tissues. respiration is the oxidation of carbon compounds by breaking them down into simpler structures to release the energy they contain, essentially the opposite of photosynthesis. molecules are moved within plants by transport processes that operate at a variety of spatial scales. subcellular transport of ions, electrons and molecules such as water and enzymes occurs across cell membranes. minerals and water are transported from roots to other parts of the plant in the transpiration stream. diffusion, osmosis, and active transport and mass flow are all different ways transport can occur. examples of elements that plants need to transport are nitrogen, phosphorus, potassium, calcium, magnesium, and sulfur. in vascular plants, these elements are extracted from the soil as soluble ions by the roots and transported throughout the plant in the xylem. most of the elements required for plant nutrition come from the chemical breakdown of soil minerals. sucrose produced by photosynthesis is transported from the leaves to other parts of the plant in the phloem and plant hormones are transported by a variety of processes. = = = plant hormones = = = plants are not passive, but respond to external signals such as light, touch, and injury by moving or growing towards or away from the stimulus, as appropriate. tangible evidence of touch sensitivity is the almost instantaneous collapse of leaflets of mimosa pudica, the insect traps of venus flytrap and bladderworts, and the pollinia of orchids. the hypothesis that plant growth and development is coordinated by plant hormones or plant growth regulators first emerged in the late 19th century. darwin experimented on the movements of plant shoots and roots towards light and gravity, and concluded " it is hardly an exaggeration to say that the tip of the radicle.. acts like the brain of one of the lower animals.. directing the several movements ". about the same time, the role of auxins ( from the greek auxein, to grow ) in control of plant growth was first outlined by the dutch scientist frits went. the first known auxin, indole - 3 - acetic acid ( iaa ), which promotes cell growth, was only isolated from plants about 50 years later. this compound mediates the tropic responses of shoots and roots towards light and gravity. the finding in 1939 that plant callus
or removed from the dna during programmed stages of development of the plant, and are responsible, for example, for the differences between anthers, petals and normal leaves, despite the fact that they all have the same underlying genetic code. epigenetic changes may be temporary or may remain through successive cell divisions for the remainder of the cell ' s life. some epigenetic changes have been shown to be heritable, while others are reset in the germ cells. epigenetic changes in eukaryotic biology serve to regulate the process of cellular differentiation. during morphogenesis, totipotent stem cells become the various pluripotent cell lines of the embryo, which in turn become fully differentiated cells. a single fertilised egg cell, the zygote, gives rise to the many different plant cell types including parenchyma, xylem vessel elements, phloem sieve tubes, guard cells of the epidermis, etc. as it continues to divide. the process results from the epigenetic activation of some genes and inhibition of others. unlike animals, many plant cells, particularly those of the parenchyma, do not terminally differentiate, remaining totipotent with the ability to give rise to a new individual plant. exceptions include highly lignified cells, the sclerenchyma and xylem which are dead at maturity, and the phloem sieve tubes which lack nuclei. while plants use many of the same epigenetic mechanisms as animals, such as chromatin remodelling, an alternative hypothesis is that plants set their gene expression patterns using positional information from the environment and surrounding cells to determine their developmental fate. epigenetic changes can lead to paramutations, which do not follow the mendelian heritage rules. these epigenetic marks are carried from one generation to the next, with one allele inducing a change on the other. = = plant evolution = = the chloroplasts of plants have a number of biochemical, structural and genetic similarities to cyanobacteria, ( commonly but incorrectly known as " blue - green algae " ) and are thought to be derived from an ancient endosymbiotic relationship between an ancestral eukaryotic cell and a cyanobacterial resident. the algae are a polyphyletic group and are placed in various divisions, some more closely related to plants than others. there are many differences between them in features such as cell wall composition, biochemistry,
soil erosion. plants are crucial to the future of human society as they provide food, oxygen, biochemicals, and products for people, as well as creating and preserving soil. historically, all living things were classified as either animals or plants and botany covered the study of all organisms not considered animals. botanists examine both the internal functions and processes within plant organelles, cells, tissues, whole plants, plant populations and plant communities. at each of these levels, a botanist may be concerned with the classification ( taxonomy ), phylogeny and evolution, structure ( anatomy and morphology ), or function ( physiology ) of plant life. the strictest definition of " plant " includes only the " land plants " or embryophytes, which include seed plants ( gymnosperms, including the pines, and flowering plants ) and the free - sporing cryptogams including ferns, clubmosses, liverworts, hornworts and mosses. embryophytes are multicellular eukaryotes descended from an ancestor that obtained its energy from sunlight by photosynthesis. they have life cycles with alternating haploid and diploid phases. the sexual haploid phase of embryophytes, known as the gametophyte, nurtures the developing diploid embryo sporophyte within its tissues for at least part of its life, even in the seed plants, where the gametophyte itself is nurtured by its parent sporophyte. other groups of organisms that were previously studied by botanists include bacteria ( now studied in bacteriology ), fungi ( mycology ) β including lichen - forming fungi ( lichenology ), non - chlorophyte algae ( phycology ), and viruses ( virology ). however, attention is still given to these groups by botanists, and fungi ( including lichens ) and photosynthetic protists are usually covered in introductory botany courses. palaeobotanists study ancient plants in the fossil record to provide information about the evolutionary history of plants. cyanobacteria, the first oxygen - releasing photosynthetic organisms on earth, are thought to have given rise to the ancestor of plants by entering into an endosymbiotic relationship with an early eukaryote, ultimately becoming the chloroplasts in plant cells. the new photosynthetic plants ( along with their algal relatives ) accelerated the rise in atmospheric oxygen started by the cyanobacteria, changing the
slow, controlled release of energy from the series of reactions. sugar in the form of glucose is the main nutrient used by animal and plant cells in respiration. cellular respiration involving oxygen is called aerobic respiration, which has four stages : glycolysis, citric acid cycle ( or krebs cycle ), electron transport chain, and oxidative phosphorylation. glycolysis is a metabolic process that occurs in the cytoplasm whereby glucose is converted into two pyruvates, with two net molecules of atp being produced at the same time. each pyruvate is then oxidized into acetyl - coa by the pyruvate dehydrogenase complex, which also generates nadh and carbon dioxide. acetyl - coa enters the citric acid cycle, which takes place inside the mitochondrial matrix. at the end of the cycle, the total yield from 1 glucose ( or 2 pyruvates ) is 6 nadh, 2 fadh2, and 2 atp molecules. finally, the next stage is oxidative phosphorylation, which, in eukaryotes, occurs in the mitochondrial cristae. oxidative phosphorylation comprises the electron transport chain, which is a series of four protein complexes that transfer electrons from one complex to another, thereby releasing energy from nadh and fadh2 that is coupled to the pumping of protons ( hydrogen ions ) across the inner mitochondrial membrane ( chemiosmosis ), which generates a proton motive force. energy from the proton motive force drives the enzyme atp synthase to synthesize more atps by phosphorylating adps. the transfer of electrons terminates with molecular oxygen being the final electron acceptor. if oxygen is not present, pyruvate is not metabolized by cellular respiration but instead undergoes a process of fermentation. the pyruvate is not transported into the mitochondrion but remains in the cytoplasm, where it is converted to waste products that may be removed from the cell. this serves the purpose of oxidizing the electron carriers so that they can perform glycolysis again and removing the excess pyruvate. fermentation oxidizes nadh to nad + so it can be re - used in glycolysis. in the absence of oxygen, fermentation prevents the buildup of nadh in the cytoplasm and provides nad + for gly
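The stage-by-stage numbers above can be tallied to show where the familiar "about 30 ATP per glucose" figure comes from. A minimal sketch in Python, assuming the commonly cited textbook conversion factors of roughly 2.5 ATP per NADH and 1.5 ATP per FADH2; these factors are assumptions, not values stated in the passage:

```python
# Rough ATP bookkeeping for aerobic respiration of one glucose molecule,
# following the stage-by-stage yields described in the passage above.
# ATP_PER_NADH and ATP_PER_FADH2 are common textbook approximations,
# not figures taken from the passage.

ATP_PER_NADH = 2.5   # approximate ATP per NADH oxidised by the electron transport chain
ATP_PER_FADH2 = 1.5  # approximate ATP per FADH2

stages = {
    # stage: (ATP made directly, NADH produced, FADH2 produced) per glucose
    "glycolysis":         (2, 2, 0),
    "pyruvate oxidation": (0, 2, 0),
    "citric acid cycle":  (2, 6, 2),
}

direct_atp = sum(atp for atp, _, _ in stages.values())
nadh = sum(n for _, n, _ in stages.values())
fadh2 = sum(f for _, _, f in stages.values())

oxidative_atp = nadh * ATP_PER_NADH + fadh2 * ATP_PER_FADH2
total = direct_atp + oxidative_atp

print(f"direct (substrate-level) ATP: {direct_atp}")
print(f"NADH: {nadh}, FADH2: {fadh2}")
print(f"ATP from oxidative phosphorylation: {oxidative_atp}")
print(f"approximate total ATP per glucose: {total}")  # about 30-32 in most textbooks
```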
, the other can often regrow it. in fact it is possible to grow an entire plant from a single leaf, as is the case with plants in streptocarpus sect. saintpaulia, or even a single cell β which can dedifferentiate into a callus ( a mass of unspecialised cells ) that can grow into a new plant. in vascular plants, the xylem and phloem are the conductive tissues that transport resources between shoots and roots. roots are often adapted to store food such as sugars or starch, as in sugar beets and carrots. stems mainly provide support to the leaves and reproductive structures, but can store water in succulent plants such as cacti, food as in potato tubers, or reproduce vegetatively as in the stolons of strawberry plants or in the process of layering. leaves gather sunlight and carry out photosynthesis. large, flat, flexible, green leaves are called foliage leaves. gymnosperms, such as conifers, cycads, ginkgo, and gnetophytes are seed - producing plants with open seeds. angiosperms are seed - producing plants that produce flowers and have enclosed seeds. woody plants, such as azaleas and oaks, undergo a secondary growth phase resulting in two additional types of tissues : wood ( secondary xylem ) and bark ( secondary phloem and cork ). all gymnosperms and many angiosperms are woody plants. some plants reproduce sexually, some asexually, and some via both means. although reference to major morphological categories such as root, stem, leaf, and trichome are useful, one has to keep in mind that these categories are linked through intermediate forms so that a continuum between the categories results. furthermore, structures can be seen as processes, that is, process combinations. = = systematic botany = = systematic botany is part of systematic biology, which is concerned with the range and diversity of organisms and their relationships, particularly as determined by their evolutionary history. it involves, or is related to, biological classification, scientific taxonomy and phylogenetics. biological classification is the method by which botanists group organisms into categories such as genera or species. biological classification is a form of scientific taxonomy. modern taxonomy is rooted in the work of carl linnaeus, who grouped species according to shared physical characteristics. these groupings have since been revised to align better with the darwinian principle of common descent β grouping organisms
, dendrology is the study of woody plants. many divisions of biology have botanical subfields. these are commonly denoted by prefixing the word plant ( e. g. plant taxonomy, plant ecology, plant anatomy, plant morphology, plant systematics ), or prefixing or substituting the prefix phyto - ( e. g. phytochemistry, phytogeography ). the study of fossil plants is called palaeobotany. other fields are denoted by adding or substituting the word botany ( e. g. systematic botany ). phytosociology is a subfield of plant ecology that classifies and studies communities of plants. the intersection of fields from the above pair of categories gives rise to fields such as bryogeography, the study of the distribution of mosses. different parts of plants also give rise to their own subfields, including xylology, carpology ( or fructology ), and palynology, these being the study of wood, fruit and pollen / spores respectively. botany also overlaps on the one hand with agriculture, horticulture and silviculture, and on the other hand with medicine and pharmacology, giving rise to fields such as agronomy, horticultural botany, phytopathology, and phytopharmacology. = = scope and importance = = the study of plants is vital because they underpin almost all animal life on earth by generating a large proportion of the oxygen and food that provide humans and other organisms with aerobic respiration with the chemical energy they need to exist. plants, algae and cyanobacteria are the major groups of organisms that carry out photosynthesis, a process that uses the energy of sunlight to convert water and carbon dioxide into sugars that can be used both as a source of chemical energy and of organic molecules that are used in the structural components of cells. as a by - product of photosynthesis, plants release oxygen into the atmosphere, a gas that is required by nearly all living things to carry out cellular respiration. in addition, they are influential in the global carbon and water cycles and plant roots bind and stabilise soils, preventing soil erosion. plants are crucial to the future of human society as they provide food, oxygen, biochemicals, and products for people, as well as creating and preserving soil. historically, all living things were classified as either animals or plants and botany covered the study of all organisms not considered animals. botanists examine both
the basis of all plant metabolism. the energy of sunlight, captured by oxygenic photosynthesis and released by cellular respiration, is the basis of almost all life. photoautotrophs, including all green plants, algae and cyanobacteria gather energy directly from sunlight by photosynthesis. heterotrophs including all animals, all fungi, all completely parasitic plants, and non - photosynthetic bacteria take in organic molecules produced by photoautotrophs and respire them or use them in the construction of cells and tissues. respiration is the oxidation of carbon compounds by breaking them down into simpler structures to release the energy they contain, essentially the opposite of photosynthesis. molecules are moved within plants by transport processes that operate at a variety of spatial scales. subcellular transport of ions, electrons and molecules such as water and enzymes occurs across cell membranes. minerals and water are transported from roots to other parts of the plant in the transpiration stream. diffusion, osmosis, and active transport and mass flow are all different ways transport can occur. examples of elements that plants need to transport are nitrogen, phosphorus, potassium, calcium, magnesium, and sulfur. in vascular plants, these elements are extracted from the soil as soluble ions by the roots and transported throughout the plant in the xylem. most of the elements required for plant nutrition come from the chemical breakdown of soil minerals. sucrose produced by photosynthesis is transported from the leaves to other parts of the plant in the phloem and plant hormones are transported by a variety of processes. = = = plant hormones = = = plants are not passive, but respond to external signals such as light, touch, and injury by moving or growing towards or away from the stimulus, as appropriate. tangible evidence of touch sensitivity is the almost instantaneous collapse of leaflets of mimosa pudica, the insect traps of venus flytrap and bladderworts, and the pollinia of orchids. the hypothesis that plant growth and development is coordinated by plant hormones or plant growth regulators first emerged in the late 19th century. darwin experimented on the movements of plant shoots and roots towards light and gravity, and concluded " it is hardly an exaggeration to say that the tip of the radicle.. acts like the brain of one of the lower animals.. directing the several movements ". about the same time, the role of auxins ( from the greek auxein, to grow ) in control of plant growth was first outlined by the dutch scientist
Question: When a plant dies, it often decomposes and becomes part of the soil. This process is one step in which cycle?
A) lunar
B) water
C) carbon
D) energy
|
C) carbon
|
Context:
the fundamental constants could not influence different elements uniformly, and a comparison between each of the elements ' resulting unique chronological timescales would then give inconsistent time estimates. in refutation of young earth claims of inconstant decay rates affecting the reliability of radiometric dating, roger c. wiens, a physicist specializing in isotope dating states : there are only three quite technical instances where a half - life changes, and these do not affect the dating methods : " only one technical exception occurs under terrestrial conditions, and this is not for an isotope used for dating.... the artificially - produced isotope, beryllium - 7 has been shown to change by up to 1. 5 %, depending on its chemical environment.... heavier atoms are even less subject to these minute changes, so the dates of rocks made by electron - capture decays would only be off by at most a few hundredths of a percent. " "... another case is material inside of stars, which is in a plasma state where electrons are not bound to atoms. in the extremely hot stellar environment, a completely different kind of decay can occur. ' bound - state beta decay ' occurs when the nucleus emits an electron into a bound electronic state close to the nucleus.... all normal matter, such as everything on earth, the moon, meteorites, etc. has electrons in normal positions, so these instances never apply to rocks, or anything colder than several hundred thousand degrees. " " the last case also involves very fast - moving matter. it has been demonstrated by atomic clocks in very fast spacecraft. these atomic clocks slow down very slightly ( only a second or so per year ) as predicted by einstein ' s theory of relativity. no rocks in our solar system are going fast enough to make a noticeable change in their dates. " = = = = radiohaloes = = = = in the 1970s, young earth creationist robert v. gentry proposed that radiohaloes in certain granites represented evidence for the earth being created instantaneously rather than gradually. this idea has been criticized by physicists and geologists on many grounds including that the rocks gentry studied were not primordial and that the radionuclides in question need not have been in the rocks initially. thomas a. baillieul, a geologist and retired senior environmental scientist with the united states department of energy, disputed gentry ' s claims in an article entitled, " ' polonium haloes ' refuted : a review of ' radioactive halos in a radio
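The half-life argument above rests on the exponential decay law, which is easy to state explicitly. A minimal sketch, assuming an illustrative parent isotope and half-life; potassium-40 at roughly 1.25 billion years is a standard textbook value, not one taken from the passage:

```python
# Decay law used in radiometric dating: the surviving fraction of a parent
# isotope halves once per half-life, so the age follows from the fraction left.
import math

HALF_LIFE_K40 = 1.25e9  # years, approximate half-life of potassium-40 (illustrative)

def remaining_fraction(age_years, half_life):
    """Fraction of the parent isotope left after age_years."""
    return 0.5 ** (age_years / half_life)

def age_from_fraction(fraction_remaining, half_life):
    """Invert the decay law: age implied by the surviving parent fraction."""
    return half_life * math.log(1.0 / fraction_remaining, 2)

# A rock retaining 1/8 of its original potassium-40 has passed three half-lives:
print(age_from_fraction(1 / 8, HALF_LIFE_K40))    # 3.75e9 years
print(remaining_fraction(3.75e9, HALF_LIFE_K40))  # 0.125
```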
geosphere ( or lithosphere ). earth science can be considered to be a branch of planetary science but with a much older history. = = geology = = geology is broadly the study of earth ' s structure, substance, and processes. geology is largely the study of the lithosphere, or earth ' s surface, including the crust and rocks. it includes the physical characteristics and processes that occur in the lithosphere as well as how they are affected by geothermal energy. it incorporates aspects of chemistry, physics, and biology as elements of geology interact. historical geology is the application of geology to interpret earth history and how it has changed over time. geochemistry studies the chemical components and processes of the earth. geophysics studies the physical properties of the earth. paleontology studies fossilized biological material in the lithosphere. planetary geology studies geoscience as it pertains to extraterrestrial bodies. geomorphology studies the origin of landscapes. structural geology studies the deformation of rocks to produce mountains and lowlands. resource geology studies how energy resources can be obtained from minerals. environmental geology studies how pollution and contaminants affect soil and rock. mineralogy is the study of minerals and includes the study of mineral formation, crystal structure, hazards associated with minerals, and the physical and chemical properties of minerals. petrology is the study of rocks, including the formation and composition of rocks. petrography is a branch of petrology that studies the typology and classification of rocks. = = earth ' s interior = = plate tectonics, mountain ranges, volcanoes, and earthquakes are geological phenomena that can be explained in terms of physical and chemical processes in the earth ' s crust. beneath the earth ' s crust lies the mantle which is heated by the radioactive decay of heavy elements. the mantle is not quite solid and consists of magma which is in a state of semi - perpetual convection. this convection process causes the lithospheric plates to move, albeit slowly. the resulting process is known as plate tectonics. areas of the crust where new crust is created are called divergent boundaries, those where it is brought back into the earth are convergent boundaries and those where plates slide past each other, but no new lithospheric material is created or destroyed, are referred to as transform ( or conservative ) boundaries. earthquakes result from the movement of the lithospheric plates, and they often occur near convergent boundaries where parts of the crust are forced into the earth as
have evolved from the earliest emergence of life to present day. earth formed about 4. 5 billion years ago and all life on earth, both living and extinct, descended from a last universal common ancestor that lived about 3. 5 billion years ago. geologists have developed a geologic time scale that divides the history of the earth into major divisions, starting with four eons ( hadean, archean, proterozoic, and phanerozoic ), the first three of which are collectively known as the precambrian, which lasted approximately 4 billion years. each eon can be divided into eras, with the phanerozoic eon that began 539 million years ago being subdivided into paleozoic, mesozoic, and cenozoic eras. these three eras together comprise eleven periods ( cambrian, ordovician, silurian, devonian, carboniferous, permian, triassic, jurassic, cretaceous, tertiary, and quaternary ). the similarities among all known present - day species indicate that they have diverged through the process of evolution from their common ancestor. biologists regard the ubiquity of the genetic code as evidence of universal common descent for all bacteria, archaea, and eukaryotes. microbial mats of coexisting bacteria and archaea were the dominant form of life in the early archean eon and many of the major steps in early evolution are thought to have taken place in this environment. the earliest evidence of eukaryotes dates from 1. 85 billion years ago, and while they may have been present earlier, their diversification accelerated when they started using oxygen in their metabolism. later, around 1. 7 billion years ago, multicellular organisms began to appear, with differentiated cells performing specialised functions. algae - like multicellular land plants are dated back to about 1 billion years ago, although evidence suggests that microorganisms formed the earliest terrestrial ecosystems, at least 2. 7 billion years ago. microorganisms are thought to have paved the way for the inception of land plants in the ordovician period. land plants were so successful that they are thought to have contributed to the late devonian extinction event. ediacara biota appear during the ediacaran period, while vertebrates, along with most other modern phyla originated about 525 million years ago during the cambrian explosion. during the permian period, synapsids, including the ancestors of mammals, dominated the land, but most of this group became
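The divisions named above nest inside one another, which is easier to see written out as a data structure. A minimal sketch using only the eons, eras and periods listed in the passage; the assignment of individual periods to eras follows the standard time scale and is not spelled out in the passage itself:

```python
# Geologic time scale as described above: four eons, the first three grouped
# as the Precambrian, and the Phanerozoic subdivided into three eras whose
# periods together number eleven.
geologic_time = {
    "Hadean": {},
    "Archean": {},
    "Proterozoic": {},
    "Phanerozoic": {  # began about 539 million years ago
        "Paleozoic": ["Cambrian", "Ordovician", "Silurian", "Devonian",
                      "Carboniferous", "Permian"],
        "Mesozoic": ["Triassic", "Jurassic", "Cretaceous"],
        "Cenozoic": ["Tertiary", "Quaternary"],
    },
}
precambrian = ["Hadean", "Archean", "Proterozoic"]  # roughly the first 4 billion years

periods = [p for eras in geologic_time["Phanerozoic"].values() for p in eras]
assert len(periods) == 11  # matches the eleven periods listed in the passage
```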
on biological causation and the diversity of life. he made countless observations of nature, especially the habits and attributes of plants and animals on lesbos, classified more than 540 animal species, and dissected at least 50. aristotle ' s writings profoundly influenced subsequent islamic and european scholarship, though they were eventually superseded in the scientific revolution. aristotle also contributed to theories of the elements and the cosmos. he believed that the celestial bodies ( such as the planets and the sun ) had something called an unmoved mover that put the celestial bodies in motion. aristotle tried to explain everything through mathematics and physics, but sometimes explained things such as the motion of celestial bodies through a higher power such as god. aristotle did not have the technological advancements that would have explained the motion of celestial bodies. in addition, aristotle had many views on the elements. he believed that everything was derived of the elements earth, water, air, fire, and lastly the aether. the aether was a celestial element, and therefore made up the matter of the celestial bodies. the elements of earth, water, air and fire were derived of a combination of two of the characteristics of hot, wet, cold, and dry, and all had their inevitable place and motion. the motion of these elements begins with earth being the closest to " the earth, " then water, air, fire, and finally aether. in addition to the makeup of all things, aristotle came up with theories as to why things did not return to their natural motion. he understood that water sits above earth, air above water, and fire above air in their natural state. he explained that although all elements must return to their natural state, the human body and other living things have a constraint on the elements β thus not allowing the elements making one who they are to return to their natural state. the important legacy of this period included substantial advances in factual knowledge, especially in anatomy, zoology, botany, mineralogy, geography, mathematics and astronomy ; an awareness of the importance of certain scientific problems, especially those related to the problem of change and its causes ; and a recognition of the methodological importance of applying mathematics to natural phenomena and of undertaking empirical research. in the hellenistic age scholars frequently employed the principles developed in earlier greek thought : the application of mathematics and deliberate empirical research, in their scientific investigations. thus, clear unbroken lines of influence lead from ancient greek and hellenistic philosophers, to medieval muslim philosophers and scientists, to the european renaissance and enlightenment, to the secular sciences of the modern day. neither reason
be the more significant to modern soil theory than fallou ' s. previously, soil had been considered a product of chemical transformations of rocks, a dead substrate from which plants derive nutritious elements. soil and bedrock were in fact equated. dokuchaev considers the soil as a natural body having its own genesis and its own history of development, a body with complex and multiform processes taking place within it. the soil is considered as different from bedrock. the latter becomes soil under the influence of a series of soil - formation factors ( climate, vegetation, country, relief and age ). according to him, soil should be called the " daily " or outward horizons of rocks regardless of the type ; they are changed naturally by the common effect of water, air and various kinds of living and dead organisms. a 1914 encyclopedic definition : " the different forms of earth on the surface of the rocks, formed by the breaking down or weathering of rocks ". serves to illustrate the historic view of soil which persisted from the 19th century. dokuchaev ' s late 19th century soil concept developed in the 20th century to one of soil as earthy material that has been altered by living processes. a corollary concept is that soil without a living component is simply a part of earth ' s outer layer. further refinement of the soil concept is occurring in view of an appreciation of energy transport and transformation within soil. the term is popularly applied to the material on the surface of the earth ' s moon and mars, a usage acceptable within a portion of the scientific community. accurate to this modern understanding of soil is nikiforoff ' s 1959 definition of soil as the " excited skin of the sub aerial part of the earth ' s crust ". = = areas of practice = = academically, soil scientists tend to be drawn to one of five areas of specialization : microbiology, pedology, edaphology, physics, or chemistry. yet the work specifics are very much dictated by the challenges facing our civilization ' s desire to sustain the land that supports it, and the distinctions between the sub - disciplines of soil science often blur in the process. soil science professionals commonly stay current in soil chemistry, soil physics, soil microbiology, pedology, and applied soil science in related disciplines. one exciting effort drawing in soil scientists in the u. s. as of 2004 is the soil quality initiative. central to the soil quality initiative is developing indices of soil health and then monitoring them in a way
the union of space telescopes and interstellar spaceships guarantees that if extraterrestrial civilizations were common, someone would have come here long ago.
the scientific revolution. aristotle also contributed to theories of the elements and the cosmos. he believed that the celestial bodies ( such as the planets and the sun ) had something called an unmoved mover that put the celestial bodies in motion. aristotle tried to explain everything through mathematics and physics, but sometimes explained things such as the motion of celestial bodies through a higher power such as god. aristotle did not have the technological advancements that would have explained the motion of celestial bodies. in addition, aristotle had many views on the elements. he believed that everything was derived of the elements earth, water, air, fire, and lastly the aether. the aether was a celestial element, and therefore made up the matter of the celestial bodies. the elements of earth, water, air and fire were derived of a combination of two of the characteristics of hot, wet, cold, and dry, and all had their inevitable place and motion. the motion of these elements begins with earth being the closest to " the earth, " then water, air, fire, and finally aether. in addition to the makeup of all things, aristotle came up with theories as to why things did not return to their natural motion. he understood that water sits above earth, air above water, and fire above air in their natural state. he explained that although all elements must return to their natural state, the human body and other living things have a constraint on the elements β thus not allowing the elements making one who they are to return to their natural state. the important legacy of this period included substantial advances in factual knowledge, especially in anatomy, zoology, botany, mineralogy, geography, mathematics and astronomy ; an awareness of the importance of certain scientific problems, especially those related to the problem of change and its causes ; and a recognition of the methodological importance of applying mathematics to natural phenomena and of undertaking empirical research. in the hellenistic age scholars frequently employed the principles developed in earlier greek thought : the application of mathematics and deliberate empirical research, in their scientific investigations. thus, clear unbroken lines of influence lead from ancient greek and hellenistic philosophers, to medieval muslim philosophers and scientists, to the european renaissance and enlightenment, to the secular sciences of the modern day. neither reason nor inquiry began with the ancient greeks, but the socratic method did, along with the idea of forms, give great advances in geometry, logic, and the natural sciences. according to benjamin farrington, former professor of classics at swansea university : " men were weighing for thousands of years before archimedes worked out the
earth science or geoscience includes all fields of natural science related to the planet earth. this is a branch of science dealing with the physical, chemical, and biological complex constitutions and synergistic linkages of earth ' s four spheres : the biosphere, hydrosphere / cryosphere, atmosphere, and geosphere ( or lithosphere ). earth science can be considered to be a branch of planetary science but with a much older history. = = geology = = geology is broadly the study of earth ' s structure, substance, and processes. geology is largely the study of the lithosphere, or earth ' s surface, including the crust and rocks. it includes the physical characteristics and processes that occur in the lithosphere as well as how they are affected by geothermal energy. it incorporates aspects of chemistry, physics, and biology as elements of geology interact. historical geology is the application of geology to interpret earth history and how it has changed over time. geochemistry studies the chemical components and processes of the earth. geophysics studies the physical properties of the earth. paleontology studies fossilized biological material in the lithosphere. planetary geology studies geoscience as it pertains to extraterrestrial bodies. geomorphology studies the origin of landscapes. structural geology studies the deformation of rocks to produce mountains and lowlands. resource geology studies how energy resources can be obtained from minerals. environmental geology studies how pollution and contaminants affect soil and rock. mineralogy is the study of minerals and includes the study of mineral formation, crystal structure, hazards associated with minerals, and the physical and chemical properties of minerals. petrology is the study of rocks, including the formation and composition of rocks. petrography is a branch of petrology that studies the typology and classification of rocks. = = earth ' s interior = = plate tectonics, mountain ranges, volcanoes, and earthquakes are geological phenomena that can be explained in terms of physical and chemical processes in the earth ' s crust. beneath the earth ' s crust lies the mantle which is heated by the radioactive decay of heavy elements. the mantle is not quite solid and consists of magma which is in a state of semi - perpetual convection. this convection process causes the lithospheric plates to move, albeit slowly. the resulting process is known as plate tectonics. areas of the crust where new crust is created are called divergent boundaries, those where it is brought back into the earth are convergent boundaries and
the world is changing at an ever - increasing pace. and it has changed in a much more fundamental way than one would think, primarily because it has become more connected and interdependent than in our entire history. every new product, every new invention can be combined with those that existed before, thereby creating an explosion of complexity : structural complexity, dynamic complexity, functional complexity, and algorithmic complexity. how to respond to this challenge? and what are the costs?
one of the greatest discoveries of modern times is that of the expanding universe, almost invariably attributed to hubble ( 1929 ). what is not widely known is that the original treatise by lemaitre ( 1927 ) contained a rich fusion of both theory and observation. stigler ' s law of eponymy is yet again affirmed : no scientific discovery is named after its original discoverer ( merton, 1957 ). an appeal is made for a lemaitre telescope, to honour the discoverer of the expanding universe.
Question: Which would best aid a scientist in discovering how Earth may have changed over time?
A) finding the nest of a bald eagle
B) tracking the footprints of a wolf
C) analyzing the pollination of a sunflower
D) discovering a fossil of a seashell in a wooded area
|
D) discovering a fossil of a seashell in a wooded area
|
Context:
you noticed any weight loss, change in sleep quality, fevers, lumps and bumps? etc. ), followed by questions on the body ' s main organ systems ( heart, lungs, digestive tract, urinary tract, etc. ). social history ( sh ) : birthplace, residences, marital history, social and economic status, habits ( including diet, medications, tobacco, alcohol ). the physical examination is the examination of the patient for medical signs of disease that are objective and observable, in contrast to symptoms that are volunteered by the patient and are not necessarily objectively observable. the healthcare provider uses sight, hearing, touch, and sometimes smell ( e. g., in infection, uremia, diabetic ketoacidosis ). four actions are the basis of physical examination : inspection, palpation ( feel ), percussion ( tap to determine resonance characteristics ), and auscultation ( listen ), generally in that order, although auscultation occurs prior to percussion and palpation for abdominal assessments. the clinical examination involves the study of : abdomen and rectum cardiovascular ( heart and blood vessels ) general appearance of the patient and specific indicators of disease ( nutritional status, presence of jaundice, pallor or clubbing ) genitalia ( and pregnancy if the patient is or could be pregnant ) head, eye, ear, nose, and throat ( heent ) musculoskeletal ( including spine and extremities ) neurological ( consciousness, awareness, brain, vision, cranial nerves, spinal cord and peripheral nerves ) psychiatric ( orientation, mental state, mood, evidence of abnormal perception or thought ). respiratory ( large airways and lungs ) skin vital signs including height, weight, body temperature, blood pressure, pulse, respiration rate, and hemoglobin oxygen saturation it is to likely focus on areas of interest highlighted in the medical history and may not include everything listed above. the treatment plan may include ordering additional medical laboratory tests and medical imaging studies, starting therapy, referral to a specialist, or watchful observation. a follow - up may be advised. depending upon the health insurance plan and the managed care system, various forms of " utilization review ", such as prior authorization of tests, may place barriers on accessing expensive services. the medical decision - making ( mdm ) process includes the analysis and synthesis of all the above data to come up with a list of possible diagnoses ( the differential diagnoses ),
) : concurrent medical problems, past hospitalizations and operations, injuries, past infectious diseases or vaccinations, history of known allergies. review of systems ( ros ) or systems inquiry : a set of additional questions to ask, which may be missed on hpi : a general enquiry ( have you noticed any weight loss, change in sleep quality, fevers, lumps and bumps? etc. ), followed by questions on the body ' s main organ systems ( heart, lungs, digestive tract, urinary tract, etc. ). social history ( sh ) : birthplace, residences, marital history, social and economic status, habits ( including diet, medications, tobacco, alcohol ). the physical examination is the examination of the patient for medical signs of disease that are objective and observable, in contrast to symptoms that are volunteered by the patient and are not necessarily objectively observable. the healthcare provider uses sight, hearing, touch, and sometimes smell ( e. g., in infection, uremia, diabetic ketoacidosis ). four actions are the basis of physical examination : inspection, palpation ( feel ), percussion ( tap to determine resonance characteristics ), and auscultation ( listen ), generally in that order, although auscultation occurs prior to percussion and palpation for abdominal assessments. the clinical examination involves the study of : abdomen and rectum cardiovascular ( heart and blood vessels ) general appearance of the patient and specific indicators of disease ( nutritional status, presence of jaundice, pallor or clubbing ) genitalia ( and pregnancy if the patient is or could be pregnant ) head, eye, ear, nose, and throat ( heent ) musculoskeletal ( including spine and extremities ) neurological ( consciousness, awareness, brain, vision, cranial nerves, spinal cord and peripheral nerves ) psychiatric ( orientation, mental state, mood, evidence of abnormal perception or thought ). respiratory ( large airways and lungs ) skin vital signs including height, weight, body temperature, blood pressure, pulse, respiration rate, and hemoglobin oxygen saturation it is to likely focus on areas of interest highlighted in the medical history and may not include everything listed above. the treatment plan may include ordering additional medical laboratory tests and medical imaging studies, starting therapy, referral to a specialist, or watchful observation. a follow - up may be advised. depending upon the health insurance plan and the managed care system
a comparison of the sensitivities of methods which allow us to determine the coordinates of a moving hot body is made.
we have combined measurements of the kinematics, morphology, and oxygen abundance of the ionized gas in i zw 18, one of the most metal - poor galaxies known, to examine the star formation history and chemical mixing processes.
, characterizing organs as predominantly yin or yang, and understood the relationship between the pulse, the heart, and the flow of blood in the body centuries before it became accepted in the west. little evidence survives of how ancient indian cultures around the indus river understood nature, but some of their perspectives may be reflected in the vedas, a set of sacred hindu texts. they reveal a conception of the universe as ever - expanding and constantly being recycled and reformed. surgeons in the ayurvedic tradition saw health and illness as a combination of three humors : wind, bile and phlegm. a healthy life resulted from a balance among these humors. in ayurvedic thought, the body consisted of five elements : earth, water, fire, wind, and space. ayurvedic surgeons performed complex surgeries and developed a detailed understanding of human anatomy. pre - socratic philosophers in ancient greek culture brought natural philosophy a step closer to direct inquiry about cause and effect in nature between 600 and 400 bc. however, an element of magic and mythology remained. natural phenomena such as earthquakes and eclipses were explained increasingly in the context of nature itself instead of being attributed to angry gods. thales of miletus, an early philosopher who lived from 625 to 546 bc, explained earthquakes by theorizing that the world floated on water and that water was the fundamental element in nature. in the 5th century bc, leucippus was an early exponent of atomism, the idea that the world is made up of fundamental indivisible particles. pythagoras applied greek innovations in mathematics to astronomy and suggested that the earth was spherical. = = = aristotelian natural philosophy ( 400 bc β 1100 ad ) = = = later socratic and platonic thought focused on ethics, morals, and art and did not attempt an investigation of the physical world ; plato criticized pre - socratic thinkers as materialists and anti - religionists. aristotle, however, a student of plato who lived from 384 to 322 bc, paid closer attention to the natural world in his philosophy. in his history of animals, he described the inner workings of 110 species, including the stingray, catfish and bee. he investigated chick embryos by breaking open eggs and observing them at various stages of development. aristotle ' s works were influential through the 16th century, and he is considered to be the father of biology for his pioneering work in that science. he also presented philosophies about physics, nature, and astronomy using
an article for the springer encyclopedia of complexity and system science
modeling of the x - ray spectra of the galactic superluminal jet sources grs 1915 + 105 and gro j1655 - 40 reveal a three - layered atmospheric structure in the inner region of their accretion disks. above the cold and optically thick disk of a temperature 0. 2 - 0. 5 kev, there is a warm layer with a temperature of 1. 0 - 1. 5 kev and an optical depth around 10. sometimes there is also a much hotter, optically thin corona above the warm layer, with a temperature of 100 kev or higher and an optical depth around unity. the structural similarity between the accretion disks and the solar atmosphere suggest that similar physical processes may be operating in these different systems.
the most puzzling issue in the foundations of quantum mechanics is perhaps that of the status of the wave function of a system in a quantum universe. is the wave function objective or subjective? does it represent the physical state of the system or merely our information about the system? and if the former, does it provide a complete description of the system or only a partial description? we shall address these questions here mainly from a bohmian perspective, and shall argue that part of the difficulty in ascertaining the status of the wave function in quantum mechanics arises from the fact that there are two different sorts of wave functions involved. the most fundamental wave function is that of the universe. from it, together with the configuration of the universe, one can define the wave function of a subsystem. we argue that the fundamental wave function, the wave function of the universe, has a law - like character.
oscillations of the sun have been used to understand its interior structure. the extension of similar studies to more distant stars has raised many difficulties despite the strong efforts of the international community over the past decades. the corot ( convection rotation and planetary transits ) satellite, launched in december 2006, has now measured oscillations and the stellar granulation signature in three main sequence stars that are noticeably hotter than the sun. the oscillation amplitudes are about 1. 5 times as large as those in the sun ; the stellar granulation is up to three times as high. the stellar amplitudes are about 25 % below the theoretic values, providing a measurement of the nonadiabaticity of the process ruling the oscillations in the outer layers of the stars.
notes of the lectures delivered in les houches during the summer school on complex systems ( july 2006 ).
Question: Kendall studied the ways in which human body systems work together. He compared the respiratory and circulatory systems. In which way are these two systems similar to each other?
A) They both bring oxygen to the body.
B) They both send messages to the body.
C) They both digest nutrients for the body.
D) They both pump blood through the body.
|
A) They both bring oxygen to the body.
|
Context:
plate tectonics, mountain ranges, volcanoes, and earthquakes are geological phenomena that can be explained in terms of physical and chemical processes in the earth ' s crust. beneath the earth ' s crust lies the mantle which is heated by the radioactive decay of heavy elements. the mantle is not quite solid and consists of magma which is in a state of semi - perpetual convection. this convection process causes the lithospheric plates to move, albeit slowly. the resulting process is known as plate tectonics. areas of the crust where new crust is created are called divergent boundaries, those where it is brought back into the earth are convergent boundaries and those where plates slide past each other, but no new lithospheric material is created or destroyed, are referred to as transform ( or conservative ) boundaries. earthquakes result from the movement of the lithospheric plates, and they often occur near convergent boundaries where parts of the crust are forced into the earth as part of subduction. plate tectonics might be thought of as the process by which the earth is resurfaced. as the result of seafloor spreading, new crust and lithosphere is created by the flow of magma from the mantle to the near surface, through fissures, where it cools and solidifies. through subduction, oceanic crust and lithosphere vehemently returns to the convecting mantle. volcanoes result primarily from the melting of subducted crust material. crust material that is forced into the asthenosphere melts, and some portion of the melted material becomes light enough to rise to the surface β giving birth to volcanoes. = = atmospheric science = = atmospheric science initially developed in the late - 19th century as a means to forecast the weather through meteorology, the study of weather. atmospheric chemistry was developed in the 20th century to measure air pollution and expanded in the 1970s in response to acid rain. climatology studies the climate and climate change. the troposphere, stratosphere, mesosphere, thermosphere, and exosphere are the five layers which make up earth ' s atmosphere. 75 % of the mass in the atmosphere is located within the troposphere, the lowest layer. in all, the atmosphere is made up of about 78. 0 % nitrogen, 20. 9 % oxygen, and 0. 92 % argon, and small amounts of other gases including co2 and water vapor. water vapor and co2 cause the earth ' s atmosphere to catch and hold the sun ' s
a state of semi - perpetual convection. this convection process causes the lithospheric plates to move, albeit slowly. the resulting process is known as plate tectonics. areas of the crust where new crust is created are called divergent boundaries, those where it is brought back into the earth are convergent boundaries and those where plates slide past each other, but no new lithospheric material is created or destroyed, are referred to as transform ( or conservative ) boundaries. earthquakes result from the movement of the lithospheric plates, and they often occur near convergent boundaries where parts of the crust are forced into the earth as part of subduction. plate tectonics might be thought of as the process by which the earth is resurfaced. as the result of seafloor spreading, new crust and lithosphere is created by the flow of magma from the mantle to the near surface, through fissures, where it cools and solidifies. through subduction, oceanic crust and lithosphere vehemently returns to the convecting mantle. volcanoes result primarily from the melting of subducted crust material. crust material that is forced into the asthenosphere melts, and some portion of the melted material becomes light enough to rise to the surface β giving birth to volcanoes. = = atmospheric science = = atmospheric science initially developed in the late - 19th century as a means to forecast the weather through meteorology, the study of weather. atmospheric chemistry was developed in the 20th century to measure air pollution and expanded in the 1970s in response to acid rain. climatology studies the climate and climate change. the troposphere, stratosphere, mesosphere, thermosphere, and exosphere are the five layers which make up earth ' s atmosphere. 75 % of the mass in the atmosphere is located within the troposphere, the lowest layer. in all, the atmosphere is made up of about 78. 0 % nitrogen, 20. 9 % oxygen, and 0. 92 % argon, and small amounts of other gases including co2 and water vapor. water vapor and co2 cause the earth ' s atmosphere to catch and hold the sun ' s energy through the greenhouse effect. this makes earth ' s surface warm enough for liquid water and life. in addition to trapping heat, the atmosphere also protects living organisms by shielding the earth ' s surface from cosmic rays. the magnetic field β created by the internal motions of the core β produces the magnetosphere which protects earth '
, crystal structure, hazards associated with minerals, and the physical and chemical properties of minerals. petrology is the study of rocks, including the formation and composition of rocks. petrography is a branch of petrology that studies the typology and classification of rocks. = = earth ' s interior = = plate tectonics, mountain ranges, volcanoes, and earthquakes are geological phenomena that can be explained in terms of physical and chemical processes in the earth ' s crust. beneath the earth ' s crust lies the mantle which is heated by the radioactive decay of heavy elements. the mantle is not quite solid and consists of magma which is in a state of semi - perpetual convection. this convection process causes the lithospheric plates to move, albeit slowly. the resulting process is known as plate tectonics. areas of the crust where new crust is created are called divergent boundaries, those where it is brought back into the earth are convergent boundaries and those where plates slide past each other, but no new lithospheric material is created or destroyed, are referred to as transform ( or conservative ) boundaries. earthquakes result from the movement of the lithospheric plates, and they often occur near convergent boundaries where parts of the crust are forced into the earth as part of subduction. plate tectonics might be thought of as the process by which the earth is resurfaced. as the result of seafloor spreading, new crust and lithosphere is created by the flow of magma from the mantle to the near surface, through fissures, where it cools and solidifies. through subduction, oceanic crust and lithosphere vehemently returns to the convecting mantle. volcanoes result primarily from the melting of subducted crust material. crust material that is forced into the asthenosphere melts, and some portion of the melted material becomes light enough to rise to the surface β giving birth to volcanoes. = = atmospheric science = = atmospheric science initially developed in the late - 19th century as a means to forecast the weather through meteorology, the study of weather. atmospheric chemistry was developed in the 20th century to measure air pollution and expanded in the 1970s in response to acid rain. climatology studies the climate and climate change. the troposphere, stratosphere, mesosphere, thermosphere, and exosphere are the five layers which make up earth ' s atmosphere. 75 % of the mass in the atmosphere is located within the troposphere, the lowest
are the cryosphere ( corresponding to ice ) as a distinct portion of the hydrosphere and the pedosphere ( corresponding to soil ) as an active and intermixed sphere. the following fields of science are generally categorized within the earth sciences : geology describes the rocky parts of the earth ' s crust ( or lithosphere ) and its historic development. major subdisciplines are mineralogy and petrology, geomorphology, paleontology, stratigraphy, structural geology, engineering geology, and sedimentology. physical geography focuses on geography as an earth science. physical geography is the study of earth ' s seasons, climate, atmosphere, soil, streams, landforms, and oceans. physical geography can be divided into several branches or related fields, as follows : geomorphology, biogeography, environmental geography, palaeogeography, climatology, meteorology, coastal geography, hydrology, ecology, glaciology. geophysics and geodesy investigate the shape of the earth, its reaction to forces and its magnetic and gravity fields. geophysicists explore the earth ' s core and mantle as well as the tectonic and seismic activity of the lithosphere. geophysics is commonly used to supplement the work of geologists in developing a comprehensive understanding of crustal geology, particularly in mineral and petroleum exploration. seismologists use geophysics to understand plate tectonic movement, as well as predict seismic activity. geochemistry studies the processes that control the abundance, composition, and distribution of chemical compounds and isotopes in geologic environments. geochemists use the tools and principles of chemistry to study the earth ' s composition, structure, processes, and other physical aspects. major subdisciplines are aqueous geochemistry, cosmochemistry, isotope geochemistry and biogeochemistry. soil science covers the outermost layer of the earth ' s crust that is subject to soil formation processes ( or pedosphere ). major subdivisions in this field of study include edaphology and pedology. ecology covers the interactions between organisms and their environment. this field of study differentiates the study of earth from other planets in the solar system, earth being the only planet teeming with life. hydrology, oceanography and limnology are studies which focus on the movement, distribution, and quality of the water and involve all the components of the hydrologic cycle on the earth and its atmosphere ( or hydrosphere ). "
##sphere ( or lithosphere ). earth science can be considered to be a branch of planetary science but with a much older history. = = geology = = geology is broadly the study of earth ' s structure, substance, and processes. geology is largely the study of the lithosphere, or earth ' s surface, including the crust and rocks. it includes the physical characteristics and processes that occur in the lithosphere as well as how they are affected by geothermal energy. it incorporates aspects of chemistry, physics, and biology as elements of geology interact. historical geology is the application of geology to interpret earth history and how it has changed over time. geochemistry studies the chemical components and processes of the earth. geophysics studies the physical properties of the earth. paleontology studies fossilized biological material in the lithosphere. planetary geology studies geoscience as it pertains to extraterrestrial bodies. geomorphology studies the origin of landscapes. structural geology studies the deformation of rocks to produce mountains and lowlands. resource geology studies how energy resources can be obtained from minerals. environmental geology studies how pollution and contaminants affect soil and rock. mineralogy is the study of minerals and includes the study of mineral formation, crystal structure, hazards associated with minerals, and the physical and chemical properties of minerals. petrology is the study of rocks, including the formation and composition of rocks. petrography is a branch of petrology that studies the typology and classification of rocks. = = earth ' s interior = = plate tectonics, mountain ranges, volcanoes, and earthquakes are geological phenomena that can be explained in terms of physical and chemical processes in the earth ' s crust. beneath the earth ' s crust lies the mantle which is heated by the radioactive decay of heavy elements. the mantle is not quite solid and consists of magma which is in a state of semi - perpetual convection. this convection process causes the lithospheric plates to move, albeit slowly. the resulting process is known as plate tectonics. areas of the crust where new crust is created are called divergent boundaries, those where it is brought back into the earth are convergent boundaries and those where plates slide past each other, but no new lithospheric material is created or destroyed, are referred to as transform ( or conservative ) boundaries. earthquakes result from the movement of the lithospheric plates, and they often occur near convergent boundaries where parts of the crust are forced into the earth as
geomorphology studies the origin of landscapes. structural geology studies the deformation of rocks to produce mountains and lowlands. resource geology studies how energy resources can be obtained from minerals. environmental geology studies how pollution and contaminants affect soil and rock. mineralogy is the study of minerals and includes the study of mineral formation, crystal structure, hazards associated with minerals, and the physical and chemical properties of minerals. petrology is the study of rocks, including the formation and composition of rocks. petrography is a branch of petrology that studies the typology and classification of rocks. = = earth ' s interior = = plate tectonics, mountain ranges, volcanoes, and earthquakes are geological phenomena that can be explained in terms of physical and chemical processes in the earth ' s crust. beneath the earth ' s crust lies the mantle which is heated by the radioactive decay of heavy elements. the mantle is not quite solid and consists of magma which is in a state of semi - perpetual convection. this convection process causes the lithospheric plates to move, albeit slowly. the resulting process is known as plate tectonics. areas of the crust where new crust is created are called divergent boundaries, those where it is brought back into the earth are convergent boundaries and those where plates slide past each other, but no new lithospheric material is created or destroyed, are referred to as transform ( or conservative ) boundaries. earthquakes result from the movement of the lithospheric plates, and they often occur near convergent boundaries where parts of the crust are forced into the earth as part of subduction. plate tectonics might be thought of as the process by which the earth is resurfaced. as the result of seafloor spreading, new crust and lithosphere is created by the flow of magma from the mantle to the near surface, through fissures, where it cools and solidifies. through subduction, oceanic crust and lithosphere vehemently returns to the convecting mantle. volcanoes result primarily from the melting of subducted crust material. crust material that is forced into the asthenosphere melts, and some portion of the melted material becomes light enough to rise to the surface β giving birth to volcanoes. = = atmospheric science = = atmospheric science initially developed in the late - 19th century as a means to forecast the weather through meteorology, the study of weather. atmospheric chemistry was developed in the 20th century to measure air pollution and expanded in the 1970s in response to
consisting of several distinct layers, often referred to as spheres : the lithosphere, the hydrosphere, the atmosphere, and the biosphere. this concept of spheres is a useful tool for understanding the earth ' s surface and its various processes ; these correspond to rocks, water, air and life. also included by some are the cryosphere ( corresponding to ice ) as a distinct portion of the hydrosphere and the pedosphere ( corresponding to soil ) as an active and intermixed sphere. the following fields of science are generally categorized within the earth sciences : geology describes the rocky parts of the earth ' s crust ( or lithosphere ) and its historic development. major subdisciplines are mineralogy and petrology, geomorphology, paleontology, stratigraphy, structural geology, engineering geology, and sedimentology. physical geography focuses on geography as an earth science. physical geography is the study of earth ' s seasons, climate, atmosphere, soil, streams, landforms, and oceans. physical geography can be divided into several branches or related fields, as follows : geomorphology, biogeography, environmental geography, palaeogeography, climatology, meteorology, coastal geography, hydrology, ecology, glaciology. geophysics and geodesy investigate the shape of the earth, its reaction to forces and its magnetic and gravity fields. geophysicists explore the earth ' s core and mantle as well as the tectonic and seismic activity of the lithosphere. geophysics is commonly used to supplement the work of geologists in developing a comprehensive understanding of crustal geology, particularly in mineral and petroleum exploration. seismologists use geophysics to understand plate tectonic movement, as well as predict seismic activity. geochemistry studies the processes that control the abundance, composition, and distribution of chemical compounds and isotopes in geologic environments. geochemists use the tools and principles of chemistry to study the earth ' s composition, structure, processes, and other physical aspects. major subdisciplines are aqueous geochemistry, cosmochemistry, isotope geochemistry and biogeochemistry. soil science covers the outermost layer of the earth ' s crust that is subject to soil formation processes ( or pedosphere ). major subdivisions in this field of study include edaphology and pedology. ecology covers the interactions between organisms and their environment. this field of study differentiates the study of earth
cools and solidifies. through subduction, oceanic crust and lithosphere returns to the convecting mantle. volcanoes result primarily from the melting of subducted crust material. crust material that is forced into the asthenosphere melts, and some portion of the melted material becomes light enough to rise to the surface β giving birth to volcanoes. = = atmospheric science = = atmospheric science initially developed in the late - 19th century as a means to forecast the weather through meteorology, the study of weather. atmospheric chemistry was developed in the 20th century to measure air pollution and expanded in the 1970s in response to acid rain. climatology studies the climate and climate change. the troposphere, stratosphere, mesosphere, thermosphere, and exosphere are the five layers which make up earth ' s atmosphere. 75 % of the mass in the atmosphere is located within the troposphere, the lowest layer. in all, the atmosphere is made up of about 78. 0 % nitrogen, 20. 9 % oxygen, and 0. 92 % argon, and small amounts of other gases including co2 and water vapor. water vapor and co2 cause the earth ' s atmosphere to catch and hold the sun ' s energy through the greenhouse effect. this makes earth ' s surface warm enough for liquid water and life. in addition to trapping heat, the atmosphere also protects living organisms by shielding the earth ' s surface from cosmic rays. the magnetic field β created by the internal motions of the core β produces the magnetosphere which protects earth ' s atmosphere from the solar wind. as the earth is 4. 5 billion years old, it would have lost its atmosphere by now if there were no protective magnetosphere. = = earth ' s magnetic field = = = = hydrology = = hydrology is the study of the hydrosphere and the movement of water on earth. it emphasizes the study of how humans use and interact with freshwater supplies. study of water ' s movement is closely related to geomorphology and other branches of earth science. applied hydrology involves engineering to maintain aquatic environments and distribute water supplies. subdisciplines of hydrology include oceanography, hydrogeology, ecohydrology, and glaciology. oceanography is the study of oceans. hydrogeology is the study of groundwater. it includes the mapping of groundwater supplies and the analysis of groundwater contaminants. applied hydrogeology seeks to prevent contamination of groundwater and mineral springs and make
lithosphere ) and its historic development. major subdisciplines are mineralogy and petrology, geomorphology, paleontology, stratigraphy, structural geology, engineering geology, and sedimentology. physical geography focuses on geography as an earth science. physical geography is the study of earth ' s seasons, climate, atmosphere, soil, streams, landforms, and oceans. physical geography can be divided into several branches or related fields, as follows : geomorphology, biogeography, environmental geography, palaeogeography, climatology, meteorology, coastal geography, hydrology, ecology, glaciology. geophysics and geodesy investigate the shape of the earth, its reaction to forces and its magnetic and gravity fields. geophysicists explore the earth ' s core and mantle as well as the tectonic and seismic activity of the lithosphere. geophysics is commonly used to supplement the work of geologists in developing a comprehensive understanding of crustal geology, particularly in mineral and petroleum exploration. seismologists use geophysics to understand plate tectonic movement, as well as predict seismic activity. geochemistry studies the processes that control the abundance, composition, and distribution of chemical compounds and isotopes in geologic environments. geochemists use the tools and principles of chemistry to study the earth ' s composition, structure, processes, and other physical aspects. major subdisciplines are aqueous geochemistry, cosmochemistry, isotope geochemistry and biogeochemistry. soil science covers the outermost layer of the earth ' s crust that is subject to soil formation processes ( or pedosphere ). major subdivisions in this field of study include edaphology and pedology. ecology covers the interactions between organisms and their environment. this field of study differentiates the study of earth from other planets in the solar system, earth being the only planet teeming with life. hydrology, oceanography and limnology are studies which focus on the movement, distribution, and quality of the water and involve all the components of the hydrologic cycle on the earth and its atmosphere ( or hydrosphere ). " sub - disciplines of hydrology include hydrometeorology, surface water hydrology, hydrogeology, watershed science, forest hydrology, and water chemistry. " glaciology covers the icy parts of the earth ( or cryosphere ). atmospheric sciences cover the gaseous parts of the earth ( or atmosphere
earth science or geoscience includes all fields of natural science related to the planet earth. this is a branch of science dealing with the physical, chemical, and biological complex constitutions and synergistic linkages of earth ' s four spheres : the biosphere, hydrosphere / cryosphere, atmosphere, and geosphere ( or lithosphere ). earth science can be considered to be a branch of planetary science but with a much older history. = = geology = = geology is broadly the study of earth ' s structure, substance, and processes. geology is largely the study of the lithosphere, or earth ' s surface, including the crust and rocks. it includes the physical characteristics and processes that occur in the lithosphere as well as how they are affected by geothermal energy. it incorporates aspects of chemistry, physics, and biology as elements of geology interact. historical geology is the application of geology to interpret earth history and how it has changed over time. geochemistry studies the chemical components and processes of the earth. geophysics studies the physical properties of the earth. paleontology studies fossilized biological material in the lithosphere. planetary geology studies geoscience as it pertains to extraterrestrial bodies. geomorphology studies the origin of landscapes. structural geology studies the deformation of rocks to produce mountains and lowlands. resource geology studies how energy resources can be obtained from minerals. environmental geology studies how pollution and contaminants affect soil and rock. mineralogy is the study of minerals and includes the study of mineral formation, crystal structure, hazards associated with minerals, and the physical and chemical properties of minerals. petrology is the study of rocks, including the formation and composition of rocks. petrography is a branch of petrology that studies the typology and classification of rocks. = = earth ' s interior = = plate tectonics, mountain ranges, volcanoes, and earthquakes are geological phenomena that can be explained in terms of physical and chemical processes in the earth ' s crust. beneath the earth ' s crust lies the mantle which is heated by the radioactive decay of heavy elements. the mantle is not quite solid and consists of magma which is in a state of semi - perpetual convection. this convection process causes the lithospheric plates to move, albeit slowly. the resulting process is known as plate tectonics. areas of the crust where new crust is created are called divergent boundaries, those where it is brought back into the earth are convergent boundaries and
Question: A difference between the oceanic crust and the continental crust is that the oceanic crust is
A) composed chiefly of sedimentary rocks.
B) more dense than the continental crust.
C) older than the continental crust.
D) continually being created.
|
B) more dense than the continental crust.
|
Context:
) of the mass of all organisms, with calcium, phosphorus, sulfur, sodium, chlorine, and magnesium constituting essentially all the remainder. different elements can combine to form compounds such as water, which is fundamental to life. biochemistry is the study of chemical processes within and relating to living organisms. molecular biology is the branch of biology that seeks to understand the molecular basis of biological activity in and between cells, including molecular synthesis, modification, mechanisms, and interactions. = = = water = = = life arose from the earth ' s first ocean, which formed some 3. 8 billion years ago. since then, water continues to be the most abundant molecule in every organism. water is important to life because it is an effective solvent, capable of dissolving solutes such as sodium and chloride ions or other small molecules to form an aqueous solution. once dissolved in water, these solutes are more likely to come in contact with one another and therefore take part in chemical reactions that sustain life. in terms of its molecular structure, water is a small polar molecule with a bent shape formed by the polar covalent bonds of two hydrogen ( h ) atoms to one oxygen ( o ) atom ( h2o ). because the o β h bonds are polar, the oxygen atom has a slight negative charge and the two hydrogen atoms have a slight positive charge. this polar property of water allows it to attract other water molecules via hydrogen bonds, which makes water cohesive. surface tension results from the cohesive force due to the attraction between molecules at the surface of the liquid. water is also adhesive as it is able to adhere to the surface of any polar or charged non - water molecules. water is denser as a liquid than it is as a solid ( or ice ). this unique property of water allows ice to float above liquid water such as ponds, lakes, and oceans, thereby insulating the liquid below from the cold air above. water has the capacity to absorb energy, giving it a higher specific heat capacity than other solvents such as ethanol. thus, a large amount of energy is needed to break the hydrogen bonds between water molecules to convert liquid water into water vapor. as a molecule, water is not completely stable as each water molecule continuously dissociates into hydrogen and hydroxyl ions before reforming into a water molecule again. in pure water, the number of hydrogen ions balances ( or equals ) the number of hydroxyl ions, resulting in a ph that is neutral. = = = organic compounds =
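The neutrality of pure water described in the passage above can be made concrete with a short back-of-envelope calculation. The sketch below is illustrative only: it assumes the standard ion product of water at 25 °C (Kw = 1.0e-14), a value not stated in the passage, and the function name is made up for the example.

```python
import math

# Ion product of pure water at 25 degrees C (assumed standard value, not taken from the passage).
KW_25C = 1.0e-14

def neutral_ph(kw: float = KW_25C) -> float:
    """pH when hydrogen and hydroxyl ion concentrations balance: [H+] = [OH-] = sqrt(Kw)."""
    h_conc = math.sqrt(kw)          # mol/L of H+ ions in pure water
    return -math.log10(h_conc)      # pH = -log10([H+])

print(neutral_ph())  # approximately 7.0 -> the neutral pH mentioned in the text
```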
organic compounds, such as sugars, to ammonia, metal ions or even hydrogen gas. salt - tolerant archaea ( the haloarchaea ) use sunlight as an energy source, and other species of archaea fix carbon, but unlike plants and cyanobacteria, no known species of archaea does both. archaea reproduce asexually by binary fission, fragmentation, or budding ; unlike bacteria, no known species of archaea form endospores. the first observed archaea were extremophiles, living in extreme environments, such as hot springs and salt lakes with no other organisms. improved molecular detection tools led to the discovery of archaea in almost every habitat, including soil, oceans, and marshlands. archaea are particularly numerous in the oceans, and the archaea in plankton may be one of the most abundant groups of organisms on the planet. archaea are a major part of earth ' s life. they are part of the microbiota of all organisms. in the human microbiome, they are important in the gut, mouth, and on the skin. their morphological, metabolic, and geographical diversity permits them to play multiple ecological roles : carbon fixation ; nitrogen cycling ; organic compound turnover ; and maintaining microbial symbiotic and syntrophic communities, for example. = = = eukaryotes = = = eukaryotes are hypothesized to have split from archaea, which was followed by their endosymbioses with bacteria ( or symbiogenesis ) that gave rise to mitochondria and chloroplasts, both of which are now part of modern - day eukaryotic cells. the major lineages of eukaryotes diversified in the precambrian about 1. 5 billion years ago and can be classified into eight major clades : alveolates, excavates, stramenopiles, plants, rhizarians, amoebozoans, fungi, and animals. five of these clades are collectively known as protists, which are mostly microscopic eukaryotic organisms that are not plants, fungi, or animals. while it is likely that protists share a common ancestor ( the last eukaryotic common ancestor ), protists by themselves do not constitute a separate clade as some protists may be more closely related to plants, fungi, or animals than they are to other protists. like groupings such as algae,
##ting the principle of conservation of mass and developing a new system of chemical nomenclature used to this day. english scientist john dalton proposed the modern theory of atoms ; that all substances are composed of indivisible ' atoms ' of matter and that different atoms have varying atomic weights. the development of the electrochemical theory of chemical combinations occurred in the early 19th century as the result of the work of two scientists in particular, jons jacob berzelius and humphry davy, made possible by the prior invention of the voltaic pile by alessandro volta. davy discovered nine new elements including the alkali metals by extracting them from their oxides with electric current. british william prout first proposed ordering all the elements by their atomic weight as all atoms had a weight that was an exact multiple of the atomic weight of hydrogen. j. a. r. newlands devised an early table of elements, which was then developed into the modern periodic table of elements in the 1860s by dmitri mendeleev and independently by several other scientists including julius lothar meyer. the inert gases, later called the noble gases were discovered by william ramsay in collaboration with lord rayleigh at the end of the century, thereby filling in the basic structure of the table. organic chemistry was developed by justus von liebig and others, following friedrich wohler ' s synthesis of urea. other crucial 19th century advances were ; an understanding of valence bonding ( edward frankland in 1852 ) and the application of thermodynamics to chemistry ( j. w. gibbs and svante arrhenius in the 1870s ). at the turn of the twentieth century the theoretical underpinnings of chemistry were finally understood due to a series of remarkable discoveries that succeeded in probing and discovering the very nature of the internal structure of atoms. in 1897, j. j. thomson of the university of cambridge discovered the electron and soon after the french scientist becquerel as well as the couple pierre and marie curie investigated the phenomenon of radioactivity. in a series of pioneering scattering experiments ernest rutherford at the university of manchester discovered the internal structure of the atom and the existence of the proton, classified and explained the different types of radioactivity and successfully transmuted the first element by bombarding nitrogen with alpha particles. his work on atomic structure was improved on by his students, the danish physicist niels bohr, the englishman henry moseley and the german otto hahn, who went on to father the emerging nuclear chemistry and discovered nuclear fission. the electronic theory
joints. = = = metal alloys = = = the alloys of iron ( steel, stainless steel, cast iron, tool steel, alloy steels ) make up the largest proportion of metals today both by quantity and commercial value. iron alloyed with various proportions of carbon gives low, mid and high carbon steels. an iron - carbon alloy is only considered steel if the carbon level is between 0. 01 % and 2. 00 % by weight. for steels, the hardness and tensile strength of the steel is related to the amount of carbon present, with increasing carbon levels also leading to lower ductility and toughness. heat treatment processes such as quenching and tempering can significantly change these properties, however. in contrast, certain metal alloys exhibit unique properties where their size and density remain unchanged across a range of temperatures. cast iron is defined as an iron β carbon alloy with more than 2. 00 %, but less than 6. 67 % carbon. stainless steel is defined as a regular steel alloy with greater than 10 % by weight alloying content of chromium. nickel and molybdenum are typically also added in stainless steels. other significant metallic alloys are those of aluminium, titanium, copper and magnesium. copper alloys have been known for a long time ( since the bronze age ), while the alloys of the other three metals have been relatively recently developed. due to the chemical reactivity of these metals, the electrolytic extraction processes required were only developed relatively recently. the alloys of aluminium, titanium and magnesium are also known and valued for their high strength to weight ratios and, in the case of magnesium, their ability to provide electromagnetic shielding. these materials are ideal for situations where high strength to weight ratios are more important than bulk cost, such as in the aerospace industry and certain automotive engineering applications. = = = semiconductors = = = a semiconductor is a material that has a resistivity between a conductor and insulator. modern day electronics run on semiconductors, and the industry had an estimated us $ 530 billion market in 2021. its electronic properties can be greatly altered through intentionally introducing impurities in a process referred to as doping. semiconductor materials are used to build diodes, transistors, light - emitting diodes ( leds ), and analog and digital electric circuits, among their many uses. semiconductor devices have replaced thermionic devices like vacuum tubes in most applications. semiconductor devices are manufactured both as single discrete devices and as integrated circuits ( ics ), which consist of a number β from a
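As a rough illustration of the carbon-content boundaries quoted in the passage above (steel between 0.01 % and 2.00 % carbon by weight, cast iron above 2.00 % and below 6.67 %), here is a minimal sketch. The thresholds are taken directly from the passage; the function name and the fallback label are invented for the example.

```python
def classify_iron_carbon_alloy(carbon_pct: float) -> str:
    """Classify an iron-carbon alloy by weight-percent carbon, using the passage's thresholds."""
    if 0.01 <= carbon_pct <= 2.00:
        return "steel"       # low, mid and high carbon steels all fall in this band
    if 2.00 < carbon_pct < 6.67:
        return "cast iron"
    return "outside the steel / cast-iron ranges given in the text"

print(classify_iron_carbon_alloy(0.4))   # steel
print(classify_iron_carbon_alloy(3.5))   # cast iron
```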
index chemical substances. in this scheme each chemical substance is identifiable by a number known as its cas registry number. = = = = molecule = = = = a molecule is the smallest indivisible portion of a pure chemical substance that has its unique set of chemical properties, that is, its potential to undergo a certain set of chemical reactions with other substances. however, this definition only works well for substances that are composed of molecules, which is not true of many substances ( see below ). molecules are typically a set of atoms bound together by covalent bonds, such that the structure is electrically neutral and all valence electrons are paired with other electrons either in bonds or in lone pairs. thus, molecules exist as electrically neutral units, unlike ions. when this rule is broken, giving the " molecule " a charge, the result is sometimes named a molecular ion or a polyatomic ion. however, the discrete and separate nature of the molecular concept usually requires that molecular ions be present only in well - separated form, such as a directed beam in a vacuum in a mass spectrometer. charged polyatomic collections residing in solids ( for example, common sulfate or nitrate ions ) are generally not considered " molecules " in chemistry. some molecules contain one or more unpaired electrons, creating radicals. most radicals are comparatively reactive, but some, such as nitric oxide ( no ) can be stable. the " inert " or noble gas elements ( helium, neon, argon, krypton, xenon and radon ) are composed of lone atoms as their smallest discrete unit, but the other isolated chemical elements consist of either molecules or networks of atoms bonded to each other in some way. identifiable molecules compose familiar substances such as water, air, and many organic compounds like alcohol, sugar, gasoline, and the various pharmaceuticals. however, not all substances or chemical compounds consist of discrete molecules, and indeed most of the solid substances that make up the solid crust, mantle, and core of the earth are chemical compounds without molecules. these other types of substances, such as ionic compounds and network solids, are organized in such a way as to lack the existence of identifiable molecules per se. instead, these substances are discussed in terms of formula units or unit cells as the smallest repeating structure within the substance. examples of such substances are mineral salts ( such as table salt ), solids like carbon and diamond, metals, and familiar silica and silicate minerals such as quartz and granite. one of the main characteristics of a molecule is its geometry
contains a unique number that when added with any number leaves the latter unchanged. this unique number is known as the system ' s additive identity element. for example, the integers has the structure of an ordered ring. this number is generally denoted as 0. because of the total order in this ring, there are numbers greater than zero, called the positive numbers. another property required for a ring to be ordered is that, for each positive number, there exists a unique corresponding number less than 0 whose sum with the original positive number is 0. these numbers less than 0 are called the negative numbers. the numbers in each such pair are their respective additive inverses. this attribute of a number, being exclusively either zero ( 0 ), positive ( + ), or negative ( β ), is called its sign, and is often encoded to the real numbers 0, 1, and β1, respectively ( similar to the way the sign function is defined ). since rational and real numbers are also ordered rings ( in fact ordered fields ), the sign attribute also applies to these number systems. when a minus sign is used in between two numbers, it represents the binary operation of subtraction. when a minus sign is written before a single number, it represents the unary operation of yielding the additive inverse ( sometimes called negation ) of the operand. abstractly then, the difference of two number is the sum of the minuend with the additive inverse of the subtrahend. while 0 is its own additive inverse ( β0 = 0 ), the additive inverse of a positive number is negative, and the additive inverse of a negative number is positive. a double application of this operation is written as β ( β3 ) = 3. the plus sign is predominantly used in algebra to denote the binary operation of addition, and only rarely to emphasize the positivity of an expression. in common numeral notation ( used in arithmetic and elsewhere ), the sign of a number is often made explicit by placing a plus or a minus sign before the number. for example, + 3 denotes " positive three ", and β3 denotes " negative three " ( algebraically : the additive inverse of 3 ). without specific context ( or when no explicit sign is given ), a number is interpreted per default as positive. this notation establishes a strong association of the minus sign " β " with negative numbers, and the plus sign " + " with positive numbers. = = = sign of zero = = = within the convention of zero being neither positive nor negative,
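The sign-and-additive-inverse bookkeeping described in the passage above can be summarised in a few lines. This is only a sketch of the stated convention (sign encoded as -1, 0, or +1, negation as the unary additive inverse); the function names are invented and nothing here comes from a specific library.

```python
def sign(x: float) -> int:
    """Encode the sign attribute as -1, 0, or +1, mirroring the sign function in the text."""
    return (x > 0) - (x < 0)

def additive_inverse(x: float) -> float:
    """The unique number whose sum with x is the additive identity 0."""
    return -x

assert sign(3) == 1 and sign(-3) == -1 and sign(0) == 0
assert additive_inverse(additive_inverse(3)) == 3   # a double application, -(-3) = 3, as in the passage
assert 5 - 3 == 5 + additive_inverse(3)             # subtraction as adding the additive inverse
```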
has rest mass and volume ( it takes up space ) and is made up of particles. the particles that make up matter have rest mass as well β not all particles have rest mass, such as the photon. matter can be a pure chemical substance or a mixture of substances. = = = = atom = = = = the atom is the basic unit of chemistry. it consists of a dense core called the atomic nucleus surrounded by a space occupied by an electron cloud. the nucleus is made up of positively charged protons and uncharged neutrons ( together called nucleons ), while the electron cloud consists of negatively charged electrons which orbit the nucleus. in a neutral atom, the negatively charged electrons balance out the positive charge of the protons. the nucleus is dense ; the mass of a nucleon is approximately 1, 836 times that of an electron, yet the radius of an atom is about 10, 000 times that of its nucleus. the atom is also the smallest entity that can be envisaged to retain the chemical properties of the element, such as electronegativity, ionization potential, preferred oxidation state ( s ), coordination number, and preferred types of bonds to form ( e. g., metallic, ionic, covalent ). = = = = element = = = = a chemical element is a pure substance which is composed of a single type of atom, characterized by its particular number of protons in the nuclei of its atoms, known as the atomic number and represented by the symbol z. the mass number is the sum of the number of protons and neutrons in a nucleus. although all the nuclei of all atoms belonging to one element will have the same atomic number, they may not necessarily have the same mass number ; atoms of an element which have different mass numbers are known as isotopes. for example, all atoms with 6 protons in their nuclei are atoms of the chemical element carbon, but atoms of carbon may have mass numbers of 12 or 13. the standard presentation of the chemical elements is in the periodic table, which orders elements by atomic number. the periodic table is arranged in groups, or columns, and periods, or rows. the periodic table is useful in identifying periodic trends. = = = = compound = = = = a compound is a pure chemical substance composed of more than one element. the properties of a compound bear little similarity to those of its elements. the standard nomenclature of compounds is set by the international union of pure and applied chemistry ( iupac ). organic compounds are named
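The relationship stated above between atomic number, neutron count and mass number (mass number = protons + neutrons, isotopes share a proton count) is simple enough to check mechanically. The carbon figures below are the ones given in the passage; the class itself is purely illustrative.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Nuclide:
    protons: int    # atomic number Z, which fixes the element
    neutrons: int

    @property
    def mass_number(self) -> int:
        return self.protons + self.neutrons  # A = Z + N

carbon12 = Nuclide(protons=6, neutrons=6)
carbon13 = Nuclide(protons=6, neutrons=7)

# Same element (same Z), different mass numbers -> isotopes, as described in the text.
assert carbon12.protons == carbon13.protons
assert (carbon12.mass_number, carbon13.mass_number) == (12, 13)
```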
Β§ other meanings below. = = sign of a number = = numbers from various number systems, like integers, rationals, complex numbers, quaternions, octonions,... may have multiple attributes, that fix certain properties of a number. a number system that bears the structure of an ordered ring contains a unique number that when added with any number leaves the latter unchanged. this unique number is known as the system ' s additive identity element. for example, the integers has the structure of an ordered ring. this number is generally denoted as 0. because of the total order in this ring, there are numbers greater than zero, called the positive numbers. another property required for a ring to be ordered is that, for each positive number, there exists a unique corresponding number less than 0 whose sum with the original positive number is 0. these numbers less than 0 are called the negative numbers. the numbers in each such pair are their respective additive inverses. this attribute of a number, being exclusively either zero ( 0 ), positive ( + ), or negative ( β ), is called its sign, and is often encoded to the real numbers 0, 1, and β1, respectively ( similar to the way the sign function is defined ). since rational and real numbers are also ordered rings ( in fact ordered fields ), the sign attribute also applies to these number systems. when a minus sign is used in between two numbers, it represents the binary operation of subtraction. when a minus sign is written before a single number, it represents the unary operation of yielding the additive inverse ( sometimes called negation ) of the operand. abstractly then, the difference of two number is the sum of the minuend with the additive inverse of the subtrahend. while 0 is its own additive inverse ( β0 = 0 ), the additive inverse of a positive number is negative, and the additive inverse of a negative number is positive. a double application of this operation is written as β ( β3 ) = 3. the plus sign is predominantly used in algebra to denote the binary operation of addition, and only rarely to emphasize the positivity of an expression. in common numeral notation ( used in arithmetic and elsewhere ), the sign of a number is often made explicit by placing a plus or a minus sign before the number. for example, + 3 denotes " positive three ", and β3 denotes " negative three " ( algebraically : the additive inverse of 3 ). without specific context ( or when
in summary, a set of the real numbers is an interval, if and only if it is an open interval, a closed interval, or a half - open interval. the only intervals that appear twice in the above classification are ∅
and ℝ that are both open and closed. a degenerate interval is any set consisting of a single real number ( i. e., an interval of the form [ a, a ] ). some authors include the empty set in this definition. a real interval that is neither empty nor degenerate is said to be proper, and has infinitely many elements. an interval is said to be left - bounded or right - bounded, if there is some real number that is, respectively, smaller than or larger than all its elements. an interval is said to be bounded, if it is both left - and right - bounded ; and is said to be unbounded otherwise. intervals that are bounded at only one end are said to be half - bounded. the empty set is bounded, and the set of all reals is the only interval that is unbounded at both ends. bounded intervals are also commonly known as finite intervals. bounded intervals are bounded sets, in the sense that their diameter ( which is equal to the absolute difference between the endpoints ) is finite. the diameter may be called the length, width, measure, range, or size of the interval. the size of unbounded intervals is usually defined as + ∞, and the size of the empty interval may be defined as 0 ( or left undefined ). the centre ( midpoint ) of a bounded interval with endpoints a and b is ( a + b ) / 2, and its radius is the half - length | a - b | / 2. these concepts are undefined for empty or unbounded intervals. an interval is said to be left - open if and only if it contains no minimum ( an element that is smaller than all other elements ) ; right - open if it contains no maximum ; and open if it contains neither. the interval [ 0, 1 ) = { x | 0 ≤ x < 1 }, for example, is left - closed and right - open. the empty set and the set of all reals are both open and closed intervals, while the set of non - negative reals is a closed interval that is right - open but not left - open.
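To make the interval vocabulary above concrete (midpoint ( a + b ) / 2, radius | a - b | / 2, and the left-closed, right-open example [ 0, 1 )), here is a small illustrative sketch. It only covers bounded intervals, the formulas are the ones quoted in the passage, and the function names are invented for the example.

```python
def midpoint(a: float, b: float) -> float:
    """Centre of the bounded interval with endpoints a and b."""
    return (a + b) / 2

def radius(a: float, b: float) -> float:
    """Half-length |a - b| / 2 of the bounded interval."""
    return abs(a - b) / 2

def in_left_closed_right_open(x: float, a: float, b: float) -> bool:
    """Membership test for [a, b) = {x | a <= x < b}, like the [0, 1) example in the text."""
    return a <= x < b

assert midpoint(0, 1) == 0.5 and radius(0, 1) == 0.5
assert in_left_closed_right_open(0, 0, 1) and not in_left_closed_right_open(1, 0, 1)
```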
a proof that the set of real numbers is not denumerable is given.
Question: Which of these elements is found in the greatest amount in organisms?
A) carbon
B) iron
C) lead
D) neon
|
A) carbon
|
Context:
##morphology studies the origin of landscapes. structural geology studies the deformation of rocks to produce mountains and lowlands. resource geology studies how energy resources can be obtained from minerals. environmental geology studies how pollution and contaminants affect soil and rock. mineralogy is the study of minerals and includes the study of mineral formation, crystal structure, hazards associated with minerals, and the physical and chemical properties of minerals. petrology is the study of rocks, including the formation and composition of rocks. petrography is a branch of petrology that studies the typology and classification of rocks. = = earth ' s interior = = plate tectonics, mountain ranges, volcanoes, and earthquakes are geological phenomena that can be explained in terms of physical and chemical processes in the earth ' s crust. beneath the earth ' s crust lies the mantle which is heated by the radioactive decay of heavy elements. the mantle is not quite solid and consists of magma which is in a state of semi - perpetual convection. this convection process causes the lithospheric plates to move, albeit slowly. the resulting process is known as plate tectonics. areas of the crust where new crust is created are called divergent boundaries, those where it is brought back into the earth are convergent boundaries and those where plates slide past each other, but no new lithospheric material is created or destroyed, are referred to as transform ( or conservative ) boundaries. earthquakes result from the movement of the lithospheric plates, and they often occur near convergent boundaries where parts of the crust are forced into the earth as part of subduction. plate tectonics might be thought of as the process by which the earth is resurfaced. as the result of seafloor spreading, new crust and lithosphere is created by the flow of magma from the mantle to the near surface, through fissures, where it cools and solidifies. through subduction, oceanic crust and lithosphere vehemently returns to the convecting mantle. volcanoes result primarily from the melting of subducted crust material. crust material that is forced into the asthenosphere melts, and some portion of the melted material becomes light enough to rise to the surface β giving birth to volcanoes. = = atmospheric science = = atmospheric science initially developed in the late - 19th century as a means to forecast the weather through meteorology, the study of weather. atmospheric chemistry was developed in the 20th century to measure air pollution and expanded in the 1970s in response to
to be separated conceptually from geology and crop production and treated as a whole. as a founding father of soil science, fallou has primacy in time. fallou was working on the origins of soil before dokuchaev was born ; however dokuchaev ' s work was more extensive and is considered to be the more significant to modern soil theory than fallou ' s. previously, soil had been considered a product of chemical transformations of rocks, a dead substrate from which plants derive nutritious elements. soil and bedrock were in fact equated. dokuchaev considers the soil as a natural body having its own genesis and its own history of development, a body with complex and multiform processes taking place within it. the soil is considered as different from bedrock. the latter becomes soil under the influence of a series of soil - formation factors ( climate, vegetation, country, relief and age ). according to him, soil should be called the " daily " or outward horizons of rocks regardless of the type ; they are changed naturally by the common effect of water, air and various kinds of living and dead organisms. a 1914 encyclopedic definition : " the different forms of earth on the surface of the rocks, formed by the breaking down or weathering of rocks ". serves to illustrate the historic view of soil which persisted from the 19th century. dokuchaev ' s late 19th century soil concept developed in the 20th century to one of soil as earthy material that has been altered by living processes. a corollary concept is that soil without a living component is simply a part of earth ' s outer layer. further refinement of the soil concept is occurring in view of an appreciation of energy transport and transformation within soil. the term is popularly applied to the material on the surface of the earth ' s moon and mars, a usage acceptable within a portion of the scientific community. accurate to this modern understanding of soil is nikiforoff ' s 1959 definition of soil as the " excited skin of the sub aerial part of the earth ' s crust ". = = areas of practice = = academically, soil scientists tend to be drawn to one of five areas of specialization : microbiology, pedology, edaphology, physics, or chemistry. yet the work specifics are very much dictated by the challenges facing our civilization ' s desire to sustain the land that supports it, and the distinctions between the sub - disciplines of soil science often blur in the process. soil science professionals commonly stay current
be the more significant to modern soil theory than fallou ' s. previously, soil had been considered a product of chemical transformations of rocks, a dead substrate from which plants derive nutritious elements. soil and bedrock were in fact equated. dokuchaev considers the soil as a natural body having its own genesis and its own history of development, a body with complex and multiform processes taking place within it. the soil is considered as different from bedrock. the latter becomes soil under the influence of a series of soil - formation factors ( climate, vegetation, country, relief and age ). according to him, soil should be called the " daily " or outward horizons of rocks regardless of the type ; they are changed naturally by the common effect of water, air and various kinds of living and dead organisms. a 1914 encyclopedic definition : " the different forms of earth on the surface of the rocks, formed by the breaking down or weathering of rocks ". serves to illustrate the historic view of soil which persisted from the 19th century. dokuchaev ' s late 19th century soil concept developed in the 20th century to one of soil as earthy material that has been altered by living processes. a corollary concept is that soil without a living component is simply a part of earth ' s outer layer. further refinement of the soil concept is occurring in view of an appreciation of energy transport and transformation within soil. the term is popularly applied to the material on the surface of the earth ' s moon and mars, a usage acceptable within a portion of the scientific community. accurate to this modern understanding of soil is nikiforoff ' s 1959 definition of soil as the " excited skin of the sub aerial part of the earth ' s crust ". = = areas of practice = = academically, soil scientists tend to be drawn to one of five areas of specialization : microbiology, pedology, edaphology, physics, or chemistry. yet the work specifics are very much dictated by the challenges facing our civilization ' s desire to sustain the land that supports it, and the distinctions between the sub - disciplines of soil science often blur in the process. soil science professionals commonly stay current in soil chemistry, soil physics, soil microbiology, pedology, and applied soil science in related disciplines. one exciting effort drawing in soil scientists in the u. s. as of 2004 is the soil quality initiative. central to the soil quality initiative is developing indices of soil health and then monitoring them in a way
. throughout the history of agriculture, farmers have inadvertently altered the genetics of their crops through introducing them to new environments and breeding them with other plants β one of the first forms of biotechnology. these processes also were included in early fermentation of beer. these processes were introduced in early mesopotamia, egypt, china and india, and still use the same basic biological methods. in brewing, malted grains ( containing enzymes ) convert starch from grains into sugar and then adding specific yeasts to produce beer. in this process, carbohydrates in the grains broke down into alcohols, such as ethanol. later, other cultures produced the process of lactic acid fermentation, which produced other preserved foods, such as soy sauce. fermentation was also used in this time period to produce leavened bread. although the process of fermentation was not fully understood until louis pasteur ' s work in 1857, it is still the first use of biotechnology to convert a food source into another form. before the time of charles darwin ' s work and life, animal and plant scientists had already used selective breeding. darwin added to that body of work with his scientific observations about the ability of science to change species. these accounts contributed to darwin ' s theory of natural selection. for thousands of years, humans have used selective breeding to improve the production of crops and livestock to use them for food. in selective breeding, organisms with desirable characteristics are mated to produce offspring with the same characteristics. for example, this technique was used with corn to produce the largest and sweetest crops. in the early twentieth century scientists gained a greater understanding of microbiology and explored ways of manufacturing specific products. in 1917, chaim weizmann first used a pure microbiological culture in an industrial process, that of manufacturing corn starch using clostridium acetobutylicum, to produce acetone, which the united kingdom desperately needed to manufacture explosives during world war i. biotechnology has also led to the development of antibiotics. in 1928, alexander fleming discovered the mold penicillium. his work led to the purification of the antibiotic formed by the mold by howard florey, ernst boris chain and norman heatley β to form what we today know as penicillin. in 1940, penicillin became available for medicinal use to treat bacterial infections in humans. the field of modern biotechnology is generally thought of as having been born in 1971 when paul berg ' s ( stanford ) experiments in gene splicing had early success. herbert w. boyer
paleolithic, or " old stone age ", and spans all of human history up to the development of agriculture approximately 12, 000 years ago. to make a stone tool, a " core " of hard stone with specific flaking properties ( such as flint ) was struck with a hammerstone. this flaking produced sharp edges which could be used as tools, primarily in the form of choppers or scrapers. these tools greatly aided the early humans in their hunter - gatherer lifestyle to perform a variety of tasks including butchering carcasses ( and breaking bones to get at the marrow ) ; chopping wood ; cracking open nuts ; skinning an animal for its hide, and even forming other tools out of softer materials such as bone and wood. the earliest stone tools were crude, being little more than a fractured rock. in the acheulian era, beginning approximately 1. 65 million years ago, methods of working these stones into specific shapes, such as hand axes emerged. this early stone age is described as the lower paleolithic. the middle paleolithic, approximately 300, 000 years ago, saw the introduction of the prepared - core technique, where multiple blades could be rapidly formed from a single core stone. the upper paleolithic, beginning approximately 40, 000 years ago, saw the introduction of pressure flaking, where a wood, bone, or antler punch could be used to shape a stone very finely. the end of the last ice age about 10, 000 years ago is taken as the end point of the upper paleolithic and the beginning of the epipaleolithic / mesolithic. the mesolithic technology included the use of microliths as composite stone tools, along with wood, bone, and antler tools. the later stone age, during which the rudiments of agricultural technology were developed, is called the neolithic period. during this period, polished stone tools were made from a variety of hard rocks such as flint, jade, jadeite, and greenstone, largely by working exposures as quarries, but later the valuable rocks were pursued by tunneling underground, the first steps in mining technology. the polished axes were used for forest clearance and the establishment of crop farming and were so effective as to remain in use when bronze and iron appeared. these stone axes were used alongside a continued use of stone tools such as a range of projectiles, knives, and scrapers, as well as tools, made from organic materials such as wood, bone, and antler. stone age cultures
. the first major technologies were tied to survival, hunting, and food preparation. stone tools and weapons, fire, and clothing were technological developments of major importance during this period. human ancestors have been using stone and other tools since long before the emergence of homo sapiens approximately 300, 000 years ago. the earliest direct evidence of tool usage was found in ethiopia within the great rift valley, dating back to 2. 5 million years ago. the earliest methods of stone tool making, known as the oldowan " industry ", date back to at least 2. 3 million years ago. this era of stone tool use is called the paleolithic, or " old stone age ", and spans all of human history up to the development of agriculture approximately 12, 000 years ago. to make a stone tool, a " core " of hard stone with specific flaking properties ( such as flint ) was struck with a hammerstone. this flaking produced sharp edges which could be used as tools, primarily in the form of choppers or scrapers. these tools greatly aided the early humans in their hunter - gatherer lifestyle to perform a variety of tasks including butchering carcasses ( and breaking bones to get at the marrow ) ; chopping wood ; cracking open nuts ; skinning an animal for its hide, and even forming other tools out of softer materials such as bone and wood. the earliest stone tools were irrelevant, being little more than a fractured rock. in the acheulian era, beginning approximately 1. 65 million years ago, methods of working these stones into specific shapes, such as hand axes emerged. this early stone age is described as the lower paleolithic. the middle paleolithic, approximately 300, 000 years ago, saw the introduction of the prepared - core technique, where multiple blades could be rapidly formed from a single core stone. the upper paleolithic, beginning approximately 40, 000 years ago, saw the introduction of pressure flaking, where a wood, bone, or antler punch could be used to shape a stone very finely. the end of the last ice age about 10, 000 years ago is taken as the end point of the upper paleolithic and the beginning of the epipaleolithic / mesolithic. the mesolithic technology included the use of microliths as composite stone tools, along with wood, bone, and antler tools. the later stone age, during which the rudiments of agricultural technology were developed, is called the neolithic period. during this period,
skinning an animal for its hide, and even forming other tools out of softer materials such as bone and wood. the earliest stone tools were crude, being little more than a fractured rock. in the acheulian era, beginning approximately 1. 65 million years ago, methods of working these stones into specific shapes, such as hand axes emerged. this early stone age is described as the lower paleolithic. the middle paleolithic, approximately 300, 000 years ago, saw the introduction of the prepared - core technique, where multiple blades could be rapidly formed from a single core stone. the upper paleolithic, beginning approximately 40, 000 years ago, saw the introduction of pressure flaking, where a wood, bone, or antler punch could be used to shape a stone very finely. the end of the last ice age about 10, 000 years ago is taken as the end point of the upper paleolithic and the beginning of the epipaleolithic / mesolithic. the mesolithic technology included the use of microliths as composite stone tools, along with wood, bone, and antler tools. the later stone age, during which the rudiments of agricultural technology were developed, is called the neolithic period. during this period, polished stone tools were made from a variety of hard rocks such as flint, jade, jadeite, and greenstone, largely by working exposures as quarries, but later the valuable rocks were pursued by tunneling underground, the first steps in mining technology. the polished axes were used for forest clearance and the establishment of crop farming and were so effective as to remain in use when bronze and iron appeared. these stone axes were used alongside a continued use of stone tools such as a range of projectiles, knives, and scrapers, as well as tools, made from organic materials such as wood, bone, and antler. stone age cultures developed music and engaged in organized warfare. stone age humans developed ocean - worthy outrigger canoe technology, leading to migration across the malay archipelago, across the indian ocean to madagascar and also across the pacific ocean, which required knowledge of the ocean currents, weather patterns, sailing, and celestial navigation. although paleolithic cultures left no written records, the shift from nomadic life to settlement and agriculture can be inferred from a range of archaeological evidence. such evidence includes ancient tools, cave paintings, and other prehistoric art, such as the venus of willendorf. human remains also provide direct evidence, both through the examination of bones, and
of tool usage was found in ethiopia within the great rift valley, dating back to 2. 5 million years ago. the earliest methods of stone tool making, known as the oldowan " industry ", date back to at least 2. 3 million years ago. this era of stone tool use is called the paleolithic, or " old stone age ", and spans all of human history up to the development of agriculture approximately 12, 000 years ago. to make a stone tool, a " core " of hard stone with specific flaking properties ( such as flint ) was struck with a hammerstone. this flaking produced sharp edges which could be used as tools, primarily in the form of choppers or scrapers. these tools greatly aided the early humans in their hunter - gatherer lifestyle to perform a variety of tasks including butchering carcasses ( and breaking bones to get at the marrow ) ; chopping wood ; cracking open nuts ; skinning an animal for its hide, and even forming other tools out of softer materials such as bone and wood. the earliest stone tools were irrelevant, being little more than a fractured rock. in the acheulian era, beginning approximately 1. 65 million years ago, methods of working these stones into specific shapes, such as hand axes emerged. this early stone age is described as the lower paleolithic. the middle paleolithic, approximately 300, 000 years ago, saw the introduction of the prepared - core technique, where multiple blades could be rapidly formed from a single core stone. the upper paleolithic, beginning approximately 40, 000 years ago, saw the introduction of pressure flaking, where a wood, bone, or antler punch could be used to shape a stone very finely. the end of the last ice age about 10, 000 years ago is taken as the end point of the upper paleolithic and the beginning of the epipaleolithic / mesolithic. the mesolithic technology included the use of microliths as composite stone tools, along with wood, bone, and antler tools. the later stone age, during which the rudiments of agricultural technology were developed, is called the neolithic period. during this period, polished stone tools were made from a variety of hard rocks such as flint, jade, jadeite, and greenstone, largely by working exposures as quarries, but later the valuable rocks were pursued by tunneling underground, the first steps in mining technology. the polished axes were used for forest clearance and the establishment of crop
the broad definition of " utilizing a biotechnological system to make products ". indeed, the cultivation of plants may be viewed as the earliest biotechnological enterprise. agriculture has been theorized to have become the dominant way of producing food since the neolithic revolution. through early biotechnology, the earliest farmers selected and bred the best - suited crops ( e. g., those with the highest yields ) to produce enough food to support a growing population. as crops and fields became increasingly large and difficult to maintain, it was discovered that specific organisms and their by - products could effectively fertilize, restore nitrogen, and control pests. throughout the history of agriculture, farmers have inadvertently altered the genetics of their crops through introducing them to new environments and breeding them with other plants β one of the first forms of biotechnology. these processes also were included in early fermentation of beer. these processes were introduced in early mesopotamia, egypt, china and india, and still use the same basic biological methods. in brewing, malted grains ( containing enzymes ) convert starch from grains into sugar and then adding specific yeasts to produce beer. in this process, carbohydrates in the grains broke down into alcohols, such as ethanol. later, other cultures produced the process of lactic acid fermentation, which produced other preserved foods, such as soy sauce. fermentation was also used in this time period to produce leavened bread. although the process of fermentation was not fully understood until louis pasteur ' s work in 1857, it is still the first use of biotechnology to convert a food source into another form. before the time of charles darwin ' s work and life, animal and plant scientists had already used selective breeding. darwin added to that body of work with his scientific observations about the ability of science to change species. these accounts contributed to darwin ' s theory of natural selection. for thousands of years, humans have used selective breeding to improve the production of crops and livestock to use them for food. in selective breeding, organisms with desirable characteristics are mated to produce offspring with the same characteristics. for example, this technique was used with corn to produce the largest and sweetest crops. in the early twentieth century scientists gained a greater understanding of microbiology and explored ways of manufacturing specific products. in 1917, chaim weizmann first used a pure microbiological culture in an industrial process, that of manufacturing corn starch using clostridium acetobutylicum, to produce acetone, which the united
which could be used as tools, primarily in the form of choppers or scrapers. these tools greatly aided the early humans in their hunter - gatherer lifestyle to perform a variety of tasks including butchering carcasses ( and breaking bones to get at the marrow ) ; chopping wood ; cracking open nuts ; skinning an animal for its hide, and even forming other tools out of softer materials such as bone and wood. the earliest stone tools were irrelevant, being little more than a fractured rock. in the acheulian era, beginning approximately 1. 65 million years ago, methods of working these stones into specific shapes, such as hand axes emerged. this early stone age is described as the lower paleolithic. the middle paleolithic, approximately 300, 000 years ago, saw the introduction of the prepared - core technique, where multiple blades could be rapidly formed from a single core stone. the upper paleolithic, beginning approximately 40, 000 years ago, saw the introduction of pressure flaking, where a wood, bone, or antler punch could be used to shape a stone very finely. the end of the last ice age about 10, 000 years ago is taken as the end point of the upper paleolithic and the beginning of the epipaleolithic / mesolithic. the mesolithic technology included the use of microliths as composite stone tools, along with wood, bone, and antler tools. the later stone age, during which the rudiments of agricultural technology were developed, is called the neolithic period. during this period, polished stone tools were made from a variety of hard rocks such as flint, jade, jadeite, and greenstone, largely by working exposures as quarries, but later the valuable rocks were pursued by tunneling underground, the first steps in mining technology. the polished axes were used for forest clearance and the establishment of crop farming and were so effective as to remain in use when bronze and iron appeared. these stone axes were used alongside a continued use of stone tools such as a range of projectiles, knives, and scrapers, as well as tools, made from organic materials such as wood, bone, and antler. stone age cultures developed music and engaged in organized warfare. stone age humans developed ocean - worthy outrigger canoe technology, leading to migration across the malay archipelago, across the indian ocean to madagascar and also across the pacific ocean, which required knowledge of the ocean currents, weather patterns, sailing, and celestial navigation. although paleolithic cultures
Question: After mining removes layers of rock from a hillside, new plants begin to grow in the cracks of the bare rock. The plants beginning to grow are an example of which natural process?
A) secondary succession in an existing ecosystem
B) new species developing in an ecosystem
C) species competition in a community
D) primary succession in a new habitat
|
D) primary succession in a new habitat
|
Context:
best - known and controversial applications of genetic engineering is the creation and use of genetically modified crops or genetically modified livestock to produce genetically modified food. crops have been developed to increase production, increase tolerance to abiotic stresses, alter the composition of the food, or to produce novel products. the first crops to be released commercially on a large scale provided protection from insect pests or tolerance to herbicides. fungal and virus resistant crops have also been developed or are in development. this makes the insect and weed management of crops easier and can indirectly increase crop yield. gm crops that directly improve yield by accelerating growth or making the plant more hardy ( by improving salt, cold or drought tolerance ) are also under development. in 2016 salmon have been genetically modified with growth hormones to reach normal adult size much faster. gmos have been developed that modify the quality of produce by increasing the nutritional value or providing more industrially useful qualities or quantities. the amflora potato produces a more industrially useful blend of starches. soybeans and canola have been genetically modified to produce more healthy oils. the first commercialised gm food was a tomato that had delayed ripening, increasing its shelf life. plants and animals have been engineered to produce materials they do not normally make. pharming uses crops and animals as bioreactors to produce vaccines, drug intermediates, or the drugs themselves ; the useful product is purified from the harvest and then used in the standard pharmaceutical production process. cows and goats have been engineered to express drugs and other proteins in their milk, and in 2009 the fda approved a drug produced in goat milk. = = = other applications = = = genetic engineering has potential applications in conservation and natural area management. gene transfer through viral vectors has been proposed as a means of controlling invasive species as well as vaccinating threatened fauna from disease. transgenic trees have been suggested as a way to confer resistance to pathogens in wild populations. with the increasing risks of maladaptation in organisms as a result of climate change and other perturbations, facilitated adaptation through gene tweaking could be one solution to reducing extinction risks. applications of genetic engineering in conservation are thus far mostly theoretical and have yet to be put into practice. genetic engineering is also being used to create microbial art. some bacteria have been genetically engineered to create black and white photographs. novelty items such as lavender - colored carnations, blue roses, and glowing fish, have also been produced through genetic engineering. = = regulation = = the regulation of genetic engineering
on a large scale provided protection from insect pests or tolerance to herbicides. fungal and virus resistant crops have also been developed or are in development. this makes the insect and weed management of crops easier and can indirectly increase crop yield. gm crops that directly improve yield by accelerating growth or making the plant more hardy ( by improving salt, cold or drought tolerance ) are also under development. in 2016 salmon have been genetically modified with growth hormones to reach normal adult size much faster. gmos have been developed that modify the quality of produce by increasing the nutritional value or providing more industrially useful qualities or quantities. the amflora potato produces a more industrially useful blend of starches. soybeans and canola have been genetically modified to produce more healthy oils. the first commercialised gm food was a tomato that had delayed ripening, increasing its shelf life. plants and animals have been engineered to produce materials they do not normally make. pharming uses crops and animals as bioreactors to produce vaccines, drug intermediates, or the drugs themselves ; the useful product is purified from the harvest and then used in the standard pharmaceutical production process. cows and goats have been engineered to express drugs and other proteins in their milk, and in 2009 the fda approved a drug produced in goat milk. = = = other applications = = = genetic engineering has potential applications in conservation and natural area management. gene transfer through viral vectors has been proposed as a means of controlling invasive species as well as vaccinating threatened fauna from disease. transgenic trees have been suggested as a way to confer resistance to pathogens in wild populations. with the increasing risks of maladaptation in organisms as a result of climate change and other perturbations, facilitated adaptation through gene tweaking could be one solution to reducing extinction risks. applications of genetic engineering in conservation are thus far mostly theoretical and have yet to be put into practice. genetic engineering is also being used to create microbial art. some bacteria have been genetically engineered to create black and white photographs. novelty items such as lavender - colored carnations, blue roses, and glowing fish, have also been produced through genetic engineering. = = regulation = = the regulation of genetic engineering concerns the approaches taken by governments to assess and manage the risks associated with the development and release of gmos. the development of a regulatory framework began in 1975, at asilomar, california. the asilomar meeting recommended a set of voluntary guidelines regarding the use of recombinant technology. as the technology improved
the broad definition of " utilizing a biotechnological system to make products ". indeed, the cultivation of plants may be viewed as the earliest biotechnological enterprise. agriculture has been theorized to have become the dominant way of producing food since the neolithic revolution. through early biotechnology, the earliest farmers selected and bred the best - suited crops ( e. g., those with the highest yields ) to produce enough food to support a growing population. as crops and fields became increasingly large and difficult to maintain, it was discovered that specific organisms and their by - products could effectively fertilize, restore nitrogen, and control pests. throughout the history of agriculture, farmers have inadvertently altered the genetics of their crops by introducing them to new environments and breeding them with other plants, one of the first forms of biotechnology. these processes were also part of the early fermentation of beer. these processes were introduced in early mesopotamia, egypt, china and india, and still use the same basic biological methods. in brewing, malted grains ( containing enzymes ) convert starch from grains into sugar ; specific yeasts are then added to produce beer. in this process, carbohydrates in the grains break down into alcohols, such as ethanol. later, other cultures developed the process of lactic acid fermentation, which produced other preserved foods, such as soy sauce. fermentation was also used in this time period to produce leavened bread. although the process of fermentation was not fully understood until louis pasteur ' s work in 1857, it is still considered the first use of biotechnology to convert a food source into another form. before the time of charles darwin ' s work and life, animal and plant scientists had already used selective breeding. darwin added to that body of work with his scientific observations about the ability of selective breeding to change species. these accounts contributed to darwin ' s theory of natural selection. for thousands of years, humans have used selective breeding to improve the production of crops and livestock used for food. in selective breeding, organisms with desirable characteristics are mated to produce offspring with the same characteristics. for example, this technique was used with corn to produce the largest and sweetest crops. in the early twentieth century scientists gained a greater understanding of microbiology and explored ways of manufacturing specific products. in 1917, chaim weizmann first used a pure microbiological culture in an industrial process, that of fermenting corn starch with clostridium acetobutylicum to produce acetone, which the united
the best - suited crops ( e. g., those with the highest yields ) to produce enough food to support a growing population. as crops and fields became increasingly large and difficult to maintain, it was discovered that specific organisms and their by - products could effectively fertilize, restore nitrogen, and control pests. throughout the history of agriculture, farmers have inadvertently altered the genetics of their crops through introducing them to new environments and breeding them with other plants β one of the first forms of biotechnology. these processes also were included in early fermentation of beer. these processes were introduced in early mesopotamia, egypt, china and india, and still use the same basic biological methods. in brewing, malted grains ( containing enzymes ) convert starch from grains into sugar and then adding specific yeasts to produce beer. in this process, carbohydrates in the grains broke down into alcohols, such as ethanol. later, other cultures produced the process of lactic acid fermentation, which produced other preserved foods, such as soy sauce. fermentation was also used in this time period to produce leavened bread. although the process of fermentation was not fully understood until louis pasteur ' s work in 1857, it is still the first use of biotechnology to convert a food source into another form. before the time of charles darwin ' s work and life, animal and plant scientists had already used selective breeding. darwin added to that body of work with his scientific observations about the ability of science to change species. these accounts contributed to darwin ' s theory of natural selection. for thousands of years, humans have used selective breeding to improve the production of crops and livestock to use them for food. in selective breeding, organisms with desirable characteristics are mated to produce offspring with the same characteristics. for example, this technique was used with corn to produce the largest and sweetest crops. in the early twentieth century scientists gained a greater understanding of microbiology and explored ways of manufacturing specific products. in 1917, chaim weizmann first used a pure microbiological culture in an industrial process, that of manufacturing corn starch using clostridium acetobutylicum, to produce acetone, which the united kingdom desperately needed to manufacture explosives during world war i. biotechnology has also led to the development of antibiotics. in 1928, alexander fleming discovered the mold penicillium. his work led to the purification of the antibiotic formed by the mold by howard florey, ernst boris chain and norman heatley β to form
. throughout the history of agriculture, farmers have inadvertently altered the genetics of their crops through introducing them to new environments and breeding them with other plants β one of the first forms of biotechnology. these processes also were included in early fermentation of beer. these processes were introduced in early mesopotamia, egypt, china and india, and still use the same basic biological methods. in brewing, malted grains ( containing enzymes ) convert starch from grains into sugar and then adding specific yeasts to produce beer. in this process, carbohydrates in the grains broke down into alcohols, such as ethanol. later, other cultures produced the process of lactic acid fermentation, which produced other preserved foods, such as soy sauce. fermentation was also used in this time period to produce leavened bread. although the process of fermentation was not fully understood until louis pasteur ' s work in 1857, it is still the first use of biotechnology to convert a food source into another form. before the time of charles darwin ' s work and life, animal and plant scientists had already used selective breeding. darwin added to that body of work with his scientific observations about the ability of science to change species. these accounts contributed to darwin ' s theory of natural selection. for thousands of years, humans have used selective breeding to improve the production of crops and livestock to use them for food. in selective breeding, organisms with desirable characteristics are mated to produce offspring with the same characteristics. for example, this technique was used with corn to produce the largest and sweetest crops. in the early twentieth century scientists gained a greater understanding of microbiology and explored ways of manufacturing specific products. in 1917, chaim weizmann first used a pure microbiological culture in an industrial process, that of manufacturing corn starch using clostridium acetobutylicum, to produce acetone, which the united kingdom desperately needed to manufacture explosives during world war i. biotechnology has also led to the development of antibiotics. in 1928, alexander fleming discovered the mold penicillium. his work led to the purification of the antibiotic formed by the mold by howard florey, ernst boris chain and norman heatley β to form what we today know as penicillin. in 1940, penicillin became available for medicinal use to treat bacterial infections in humans. the field of modern biotechnology is generally thought of as having been born in 1971 when paul berg ' s ( stanford ) experiments in gene splicing had early success. herbert w. boyer
industrial applications. this branch of biotechnology is the most used for the industries of refining and combustion principally on the production of bio - oils with photosynthetic micro - algae. green biotechnology is biotechnology applied to agricultural processes. an example would be the selection and domestication of plants via micropropagation. another example is the designing of transgenic plants to grow under specific environments in the presence ( or absence ) of chemicals. one hope is that green biotechnology might produce more environmentally friendly solutions than traditional industrial agriculture. an example of this is the engineering of a plant to express a pesticide, thereby ending the need of external application of pesticides. an example of this would be bt corn. whether or not green biotechnology products such as this are ultimately more environmentally friendly is a topic of considerable debate. it is commonly considered as the next phase of green revolution, which can be seen as a platform to eradicate world hunger by using technologies which enable the production of more fertile and resistant, towards biotic and abiotic stress, plants and ensures application of environmentally friendly fertilizers and the use of biopesticides, it is mainly focused on the development of agriculture. on the other hand, some of the uses of green biotechnology involve microorganisms to clean and reduce waste. red biotechnology is the use of biotechnology in the medical and pharmaceutical industries, and health preservation. this branch involves the production of vaccines and antibiotics, regenerative therapies, creation of artificial organs and new diagnostics of diseases. as well as the development of hormones, stem cells, antibodies, sirna and diagnostic tests. white biotechnology, also known as industrial biotechnology, is biotechnology applied to industrial processes. an example is the designing of an organism to produce a useful chemical. another example is the using of enzymes as industrial catalysts to either produce valuable chemicals or destroy hazardous / polluting chemicals. white biotechnology tends to consume less in resources than traditional processes used to produce industrial goods. yellow biotechnology refers to the use of biotechnology in food production ( food industry ), for example in making wine ( winemaking ), cheese ( cheesemaking ), and beer ( brewing ) by fermentation. it has also been used to refer to biotechnology applied to insects. this includes biotechnology - based approaches for the control of harmful insects, the characterisation and utilisation of active ingredients or genes of insects for research, or application in agriculture and medicine and various other approaches. gray biotechnology is dedicated to environmental applications, and focused on the maintenance of biodiversity and the remotion of poll
. an example of this would be bt corn. whether or not green biotechnology products such as this are ultimately more environmentally friendly is a topic of considerable debate. green biotechnology is commonly considered the next phase of the green revolution : a platform for helping to eradicate world hunger through technologies that produce plants which are more fertile and more resistant to biotic and abiotic stress, and that support environmentally friendly fertilizers and the use of biopesticides. it is mainly focused on the development of agriculture. on the other hand, some uses of green biotechnology involve microorganisms to clean up and reduce waste. red biotechnology is the use of biotechnology in the medical and pharmaceutical industries, and health preservation. this branch involves the production of vaccines and antibiotics, regenerative therapies, the creation of artificial organs and new diagnostics of diseases, as well as the development of hormones, stem cells, antibodies, sirna and diagnostic tests. white biotechnology, also known as industrial biotechnology, is biotechnology applied to industrial processes. an example is the design of an organism to produce a useful chemical. another example is the use of enzymes as industrial catalysts to either produce valuable chemicals or destroy hazardous / polluting chemicals. white biotechnology tends to consume fewer resources than traditional processes used to produce industrial goods. yellow biotechnology refers to the use of biotechnology in food production ( food industry ), for example in making wine ( winemaking ), cheese ( cheesemaking ), and beer ( brewing ) by fermentation. it has also been used to refer to biotechnology applied to insects. this includes biotechnology - based approaches for the control of harmful insects, the characterisation and utilisation of active ingredients or genes of insects for research, or application in agriculture and medicine, and various other approaches. gray biotechnology is dedicated to environmental applications, and is focused on the maintenance of biodiversity and the removal of pollutants. brown biotechnology is related to the management of arid lands and deserts. one application is the creation of enhanced seeds that resist the extreme environmental conditions of arid regions, which is related to innovation, the creation of agricultural techniques and the management of resources. violet biotechnology is related to law, and to ethical and philosophical issues around biotechnology. microbial biotechnology has been proposed for the rapidly emerging area of biotechnology applications in space and microgravity ( the space bioeconomy ). dark biotechnology is the color associated with bioterrorism or biological weapons and biowarfare, which uses microorganisms and toxins to cause disease and death in humans, livestock and
the designing of transgenic plants to grow under specific environments in the presence ( or absence ) of chemicals. one hope is that green biotechnology might produce more environmentally friendly solutions than traditional industrial agriculture. an example of this is the engineering of a plant to express a pesticide, thereby ending the need of external application of pesticides. an example of this would be bt corn. whether or not green biotechnology products such as this are ultimately more environmentally friendly is a topic of considerable debate. it is commonly considered as the next phase of green revolution, which can be seen as a platform to eradicate world hunger by using technologies which enable the production of more fertile and resistant, towards biotic and abiotic stress, plants and ensures application of environmentally friendly fertilizers and the use of biopesticides, it is mainly focused on the development of agriculture. on the other hand, some of the uses of green biotechnology involve microorganisms to clean and reduce waste. red biotechnology is the use of biotechnology in the medical and pharmaceutical industries, and health preservation. this branch involves the production of vaccines and antibiotics, regenerative therapies, creation of artificial organs and new diagnostics of diseases. as well as the development of hormones, stem cells, antibodies, sirna and diagnostic tests. white biotechnology, also known as industrial biotechnology, is biotechnology applied to industrial processes. an example is the designing of an organism to produce a useful chemical. another example is the using of enzymes as industrial catalysts to either produce valuable chemicals or destroy hazardous / polluting chemicals. white biotechnology tends to consume less in resources than traditional processes used to produce industrial goods. yellow biotechnology refers to the use of biotechnology in food production ( food industry ), for example in making wine ( winemaking ), cheese ( cheesemaking ), and beer ( brewing ) by fermentation. it has also been used to refer to biotechnology applied to insects. this includes biotechnology - based approaches for the control of harmful insects, the characterisation and utilisation of active ingredients or genes of insects for research, or application in agriculture and medicine and various other approaches. gray biotechnology is dedicated to environmental applications, and focused on the maintenance of biodiversity and the remotion of pollutants. brown biotechnology is related to the management of arid lands and deserts. one application is the creation of enhanced seeds that resist extreme environmental conditions of arid regions, which is related to the innovation, creation of agriculture techniques and management of resources. violet biotechnology is related to law, ethical and philosophical issues around biotechnology. micro
new crop traits as well as a far greater control over a food ' s genetic structure than previously afforded by methods such as selective breeding and mutation breeding. commercial sale of genetically modified foods began in 1994, when calgene first marketed its flavr savr delayed ripening tomato. to date most genetic modification of foods have primarily focused on cash crops in high demand by farmers such as soybean, corn, canola, and cotton seed oil. these have been engineered for resistance to pathogens and herbicides and better nutrient profiles. gm livestock have also been experimentally developed ; in november 2013 none were available on the market, but in 2015 the fda approved the first gm salmon for commercial production and consumption. there is a scientific consensus that currently available food derived from gm crops poses no greater risk to human health than conventional food, but that each gm food needs to be tested on a case - by - case basis before introduction. nonetheless, members of the public are much less likely than scientists to perceive gm foods as safe. the legal and regulatory status of gm foods varies by country, with some nations banning or restricting them, and others permitting them with widely differing degrees of regulation. gm crops also provide a number of ecological benefits, if not used in excess. insect - resistant crops have proven to lower pesticide usage, therefore reducing the environmental impact of pesticides as a whole. however, opponents have objected to gm crops per se on several grounds, including environmental concerns, whether food produced from gm crops is safe, whether gm crops are needed to address the world ' s food needs, and economic concerns raised by the fact these organisms are subject to intellectual property law. biotechnology has several applications in the realm of food security. crops like golden rice are engineered to have higher nutritional content, and there is potential for food products with longer shelf lives. though not a form of agricultural biotechnology, vaccines can help prevent diseases found in animal agriculture. additionally, agricultural biotechnology can expedite breeding processes in order to yield faster results and provide greater quantities of food. transgenic biofortification in cereals has been considered as a promising method to combat malnutrition in india and other countries. = = = industrial = = = industrial biotechnology ( known mainly in europe as white biotechnology ) is the application of biotechnology for industrial purposes, including industrial fermentation. it includes the practice of using cells such as microorganisms, or components of cells like enzymes, to generate industrially useful products in sectors such as chemicals, food and feed, detergents, paper
and still use the same basic biological methods. in brewing, malted grains ( containing enzymes ) convert starch from grains into sugar and then adding specific yeasts to produce beer. in this process, carbohydrates in the grains broke down into alcohols, such as ethanol. later, other cultures produced the process of lactic acid fermentation, which produced other preserved foods, such as soy sauce. fermentation was also used in this time period to produce leavened bread. although the process of fermentation was not fully understood until louis pasteur ' s work in 1857, it is still the first use of biotechnology to convert a food source into another form. before the time of charles darwin ' s work and life, animal and plant scientists had already used selective breeding. darwin added to that body of work with his scientific observations about the ability of science to change species. these accounts contributed to darwin ' s theory of natural selection. for thousands of years, humans have used selective breeding to improve the production of crops and livestock to use them for food. in selective breeding, organisms with desirable characteristics are mated to produce offspring with the same characteristics. for example, this technique was used with corn to produce the largest and sweetest crops. in the early twentieth century scientists gained a greater understanding of microbiology and explored ways of manufacturing specific products. in 1917, chaim weizmann first used a pure microbiological culture in an industrial process, that of manufacturing corn starch using clostridium acetobutylicum, to produce acetone, which the united kingdom desperately needed to manufacture explosives during world war i. biotechnology has also led to the development of antibiotics. in 1928, alexander fleming discovered the mold penicillium. his work led to the purification of the antibiotic formed by the mold by howard florey, ernst boris chain and norman heatley β to form what we today know as penicillin. in 1940, penicillin became available for medicinal use to treat bacterial infections in humans. the field of modern biotechnology is generally thought of as having been born in 1971 when paul berg ' s ( stanford ) experiments in gene splicing had early success. herbert w. boyer ( univ. calif. at san francisco ) and stanley n. cohen ( stanford ) significantly advanced the new technology in 1972 by transferring genetic material into a bacterium, such that the imported material would be reproduced. the commercial viability of a biotechnology industry was significantly expanded on june 16, 1980, when the united states
Question: Improvements in farming technology would most likely
A) increase the amount of food produced.
B) change global climate conditions.
C) promote unhealthy dietary choices.
D) decrease the amount of daily exercise.
|
A) increase the amount of food produced.
|
Context:
which applies forces that result in fracturing ), and impact ( which employs a milling medium or the particles themselves to cause fracturing ). attrition milling equipment includes the wet scrubber ( also called the planetary mill or wet attrition mill ), which has paddles in water creating vortexes in which the material collides and breaks up. compression mills include the jaw crusher, roller crusher and cone crusher. impact mills include the ball mill, which has media that tumble and fracture the material, or the resonantacoustic mixer. shaft impactors cause particle - to - particle attrition and compression. batching is the process of weighing the oxides according to recipes, and preparing them for mixing and drying. mixing occurs after batching and is performed with various machines, such as dry mixing ribbon mixers ( a type of cement mixer ), resonantacoustic mixers, mueller mixers, and pug mills. wet mixing generally involves the same equipment. forming is making the mixed material into shapes, ranging from toilet bowls to spark plug insulators. forming can involve : ( 1 ) extrusion, such as extruding " slugs " to make bricks, ( 2 ) pressing to make shaped parts, ( 3 ) slip casting, as in making toilet bowls, wash basins and ornamentals like ceramic statues. forming produces a " green " part, ready for drying. green parts are soft, pliable, and over time will lose shape. handling the green product will change its shape. for example, a green brick can be " squeezed ", and after squeezing it will stay that way. drying is removing the water or binder from the formed material. spray drying is widely used to prepare powder for pressing operations. other dryers are tunnel dryers and periodic dryers. controlled heat is applied in this two - stage process. first, heat removes water. this step needs careful control, as rapid heating causes cracks and surface defects. the dried part is smaller than the green part, and is brittle, necessitating careful handling, since a small impact will cause crumbling and breaking. sintering is where the dried parts pass through a controlled heating process, and the oxides are chemically changed to cause bonding and densification. the fired part will be smaller than the dried part. = = forming methods = = ceramic forming techniques include throwing, slipcasting, tape casting, freeze - casting, injection molding, dry pressing, isostatic pressing, hot isostatic pressing
a discontinuity of a turbulent ideal fluid is considered. it is assumed to be split and dispersed, or spread, in the stochastic environment, forming a gas without hydrostatic pressure. two equal - mass fragments of a discontinuity are indistinguishable from each other. a gas that possesses such properties must behave as the madelung medium.
temperature changes up to 1000 °c. = = processing steps = = the traditional ceramic process generally follows this sequence : milling → batching → mixing → forming → drying → firing → assembly. milling is the process by which materials are reduced from a large size to a smaller size. milling may involve breaking up cemented material ( in which case individual particles retain their shape ) or pulverization ( which involves grinding the particles themselves to a smaller size ). milling is generally done by mechanical means, including attrition ( which is particle - to - particle collision that results in agglomerate break up or particle shearing ), compression ( which applies forces that result in fracturing ), and impact ( which employs a milling medium or the particles themselves to cause fracturing ). attrition milling equipment includes the wet scrubber ( also called the planetary mill or wet attrition mill ), which has paddles in water creating vortexes in which the material collides and breaks up. compression mills include the jaw crusher, roller crusher and cone crusher. impact mills include the ball mill, which has media that tumble and fracture the material, or the resonantacoustic mixer. shaft impactors cause particle - to - particle attrition and compression. batching is the process of weighing the oxides according to recipes, and preparing them for mixing and drying. mixing occurs after batching and is performed with various machines, such as dry mixing ribbon mixers ( a type of cement mixer ), resonantacoustic mixers, mueller mixers, and pug mills. wet mixing generally involves the same equipment. forming is making the mixed material into shapes, ranging from toilet bowls to spark plug insulators. forming can involve : ( 1 ) extrusion, such as extruding " slugs " to make bricks, ( 2 ) pressing to make shaped parts, ( 3 ) slip casting, as in making toilet bowls, wash basins and ornamentals like ceramic statues. forming produces a " green " part, ready for drying. green parts are soft, pliable, and over time will lose shape. handling the green product will change its shape. for example, a green brick can be " squeezed ", and after squeezing it will stay that way. drying is removing the water or binder from the formed material. spray drying is widely used to prepare powder for pressing operations. other dryers are tunnel dryers and periodic dryers. controlled heat is applied in this two - stage process. first,
the hun tian theory ), or as being without substance while the heavenly bodies float freely ( the hsuan yeh theory ), the earth was at all times flat, although perhaps bulging up slightly. the model of an egg was often used by chinese astronomers such as zhang heng ( 78 β 139 ad ) to describe the heavens as spherical : the heavens are like a hen ' s egg and as round as a crossbow bullet ; the earth is like the yolk of the egg, and lies in the centre. this analogy with a curved egg led some modern historians, notably joseph needham, to conjecture that chinese astronomers were, after all, aware of the earth ' s sphericity. the egg reference, however, was rather meant to clarify the relative position of the flat earth to the heavens : in a passage of zhang heng ' s cosmogony not translated by needham, zhang himself says : " heaven takes its body from the yang, so it is round and in motion. earth takes its body from the yin, so it is flat and quiescent ". the point of the egg analogy is simply to stress that the earth is completely enclosed by heaven, rather than merely covered from above as the kai tian describes. chinese astronomers, many of them brilliant men by any standards, continued to think in flat - earth terms until the seventeenth century ; this surprising fact might be the starting - point for a re - examination of the apparent facility with which the idea of a spherical earth found acceptance in fifth - century bc greece. further examples cited by needham supposed to demonstrate dissenting voices from the ancient chinese consensus actually refer without exception to the earth being square, not to it being flat. accordingly, the 13th - century scholar li ye, who argued that the movements of the round heaven would be hindered by a square earth, did not advocate a spherical earth, but rather that its edge should be rounded off so as to be circular. however, needham disagrees, affirming that li ye believed the earth to be spherical, similar in shape to the heavens but much smaller. this was preconceived by the 4th - century scholar yu xi, who argued for the infinity of outer space surrounding the earth and that the latter could be either square or round, in accordance to the shape of the heavens. when chinese geographers of the 17th century, influenced by european cartography and astronomy, showed the earth as a sphere that could be circumnavigated by sailing around the globe, they
pug mills. wet mixing generally involves the same equipment. forming is making the mixed material into shapes, ranging from toilet bowls to spark plug insulators. forming can involve : ( 1 ) extrusion, such as extruding " slugs " to make bricks, ( 2 ) pressing to make shaped parts, ( 3 ) slip casting, as in making toilet bowls, wash basins and ornamentals like ceramic statues. forming produces a " green " part, ready for drying. green parts are soft, pliable, and over time will lose shape. handling the green product will change its shape. for example, a green brick can be " squeezed ", and after squeezing it will stay that way. drying is removing the water or binder from the formed material. spray drying is widely used to prepare powder for pressing operations. other dryers are tunnel dryers and periodic dryers. controlled heat is applied in this two - stage process. first, heat removes water. this step needs careful control, as rapid heating causes cracks and surface defects. the dried part is smaller than the green part, and is brittle, necessitating careful handling, since a small impact will cause crumbling and breaking. sintering is where the dried parts pass through a controlled heating process, and the oxides are chemically changed to cause bonding and densification. the fired part will be smaller than the dried part. = = forming methods = = ceramic forming techniques include throwing, slipcasting, tape casting, freeze - casting, injection molding, dry pressing, isostatic pressing, hot isostatic pressing ( hip ), 3d printing and others. methods for forming ceramic powders into complex shapes are desirable in many areas of technology. such methods are required for producing advanced, high - temperature structural parts such as heat engine components and turbines. materials other than ceramics which are used in these processes may include : wood, metal, water, plaster and epoxy, most of which will be eliminated upon firing. a ceramic - filled epoxy, such as martyte, is sometimes used to protect structural steel under conditions of rocket exhaust impingement. these forming techniques are well known for providing tools and other components with dimensional stability, surface quality, high ( near theoretical ) density and microstructural uniformity. the increasing use and diversity of specialty forms of ceramics adds to the diversity of process technologies to be used. thus, reinforcing fibers and filaments are mainly made by polymer, sol - gel, or cvd processes, but melt
final version. to appear in discrete and continuous dynamical systems - a.
; however, a successful large - scale industrial application of the process was the development of continuous freeze drying of coffee. high - temperature short time processing β these processes, for the most part, are characterized by rapid heating and cooling, holding for a short time at a relatively high temperature and filling aseptically into sterile containers. decaffeination of coffee and tea β decaffeinated coffee and tea was first developed on a commercial basis in europe around 1900. the process is described in u. s. patent 897, 763. green coffee beans are treated with water, heat and solvents to remove the caffeine from the beans. process optimization β food technology now allows production of foods to be more efficient, oil saving technologies are now available on different forms. production methods and methodology have also become increasingly sophisticated. aseptic packaging β the process of filling a commercially sterile product into a sterile container and hermetically sealing the containers so that re - infection is prevented. thus, this results into a shelf stable product at ambient conditions. food irradiation β the process of exposing food and food packaging to ionizing radiation can effectively destroy organisms responsible for spoilage and foodborne illness and inhibit sprouting, extending shelf life. commercial fruit ripening rooms using ethylene as a plant hormone. food delivery β an order is typically made either through a restaurant or grocer ' s website or mobile app, or through a food ordering company. the ordered food is typically delivered in boxes or bags to the customer ' s doorsteps. = = categories = = technology has innovated these categories from the food industry : agricultural technology β or agtech, it is the use of technology in agriculture, horticulture, and aquaculture with the aim of improving yield, efficiency, and profitability. agricultural technology can be products, services or applications derived from agriculture that improve various input / output processes. food science β technology in this sector focuses on the development of new functional ingredients and alternative proteins. foodservice β technology innovated the way establishments prepare, supply, and serve food outside the home. there ' s a tendency to create the conditions for the restaurant of the future with robotics and cloudkitchens. consumer tech β technology allows what we call consumer electronics, which is the equipment of consumers with devices that facilitates the cooking process. food delivery β as the food delivery market is growing, companies and startups are rapidly revolutionizing the communication process between consumers and food establishments, with platform - to - consumer delivery as the
it is well - known that liquid and saturated vapor, separated by a flat interface in an unbounded space, are in equilibrium. one would similarly expect a liquid drop, sitting on a flat substrate, to be in equilibrium with the vapor surrounding it. yet, it is not : as shown in this work, the drop evaporates. mathematically, this conclusion is deduced using the diffuse - interface model, but it can also be reformulated in terms of the maximum - entropy principle, suggesting model independence. physically, evaporation of drops is due to the so - called kelvin effect, which gives rise to a liquid - to - vapor mass flux in all cases where the boundary of the liquid phase is convex.
wrought, which itself is the original past passive participle of the word work, now superseded by the weak verb forms worker and worked respectively. ) blacksmithing and the various related smithing and metal - crafts. folk music played on acoustic instruments. mathematics ( particularly, pure mathematics ) organic farming and animal husbandry ( i. e. ; agriculture as practiced by all american farmers prior to world war ii ). milling in the sense of operating hand - constructed equipment with the intent to either grind grain, or the reduction of timber to lumber as practiced in a saw - mill. fulling, felting, drop spindle spinning, hand knitting, crochet, & similar textile preparation. the production of charcoal by the collier, for use in home heating, foundry operations, smelting, the various smithing trades, and for brushing ones teeth as in colonial america. glass - blowing. various subskills of food preservation : smoking salting pickling drying note : home canning is a counter example of a low technology since some of the supplies needed to pursue this skill rely on a global trade network and an existing manufacturing infrastructure. the production of various alcoholic beverages : wine : poorly preserved fruit juice. beer : a way to preserve the calories of grain products from decay. whiskey : an improved ( distilled ) form of beer. flint - knapping masonry as used in castles, cathedrals, and root cellars. = = = domestic or consumer = = = ( non exhaustive ) list of low - tech in a westerner ' s everyday life : getting around by bike, and repairing it with second - hand materials using a cargo bike to carry loads ( rather than a gasoline vehicle ) drying clothes on a clothesline or on a drying rack washing clothes by hand, or in a human - powered washing machine cooling one ' s home with a fan or an air expander ( rather than electrical appliances such as air conditioners ) using a bell as door bell a cellar, " desert fridge ", or icebox ( rather than a fridge or freezer ) long - distance travel by sailing boat ( rather than by plane ) a wicker bag or a tote bag ( rather than a plastic bag ) to carry things swedish lighter ( rather than disposable lighter or matches ) a hand drill, instead of an electric one lighting with sunlight or candles hemp textiles to water plants with drip irrigation paper sheets for note - taking to clean with a broom ( rather than a vacuum cleaner ) to find one ' s way with map
which constitutes anywhere from 30 % [ m / m ] to 90 % [ m / m ] of its composition by volume, yielding an array of materials with interesting thermomechanical properties. in the processing of glass - ceramics, molten glass is cooled down gradually before reheating and annealing. in this heat treatment the glass partly crystallizes. in many cases, so - called ' nucleation agents ' are added in order to regulate and control the crystallization process. because there is usually no pressing and sintering, glass - ceramics do not contain the volume fraction of porosity typically present in sintered ceramics. the term mainly refers to a mix of lithium and aluminosilicates which yields an array of materials with interesting thermomechanical properties. the most commercially important of these have the distinction of being impervious to thermal shock. thus, glass - ceramics have become extremely useful for countertop cooking. the negative thermal expansion coefficient ( tec ) of the crystalline ceramic phase can be balanced with the positive tec of the glassy phase. at a certain point ( ~ 70 % crystalline ) the glass - ceramic has a net tec near zero. this type of glass - ceramic exhibits excellent mechanical properties and can sustain repeated and quick temperature changes up to 1000 °c. = = processing steps = = the traditional ceramic process generally follows this sequence : milling → batching → mixing → forming → drying → firing → assembly. milling is the process by which materials are reduced from a large size to a smaller size. milling may involve breaking up cemented material ( in which case individual particles retain their shape ) or pulverization ( which involves grinding the particles themselves to a smaller size ). milling is generally done by mechanical means, including attrition ( which is particle - to - particle collision that results in agglomerate break up or particle shearing ), compression ( which applies forces that result in fracturing ), and impact ( which employs a milling medium or the particles themselves to cause fracturing ). attrition milling equipment includes the wet scrubber ( also called the planetary mill or wet attrition mill ), which has paddles in water creating vortexes in which the material collides and breaks up. compression mills include the jaw crusher, roller crusher and cone crusher. impact mills include the ball mill, which has media that tumble and fracture the material, or the resonantacoustic mixer. shaft impactors cause particle - to - particle attrition and compression
Question: In a container, a mixture of water and salt is stirred so that the salt dissolves completely. Sand is added to this solution and allowed to settle to the bottom of the container. If the container is placed on a heat source and the liquid evaporates completely, what will be left in the container?
A) Nothing will remain in the container.
B) Only salt will remain in the container.
C) Only sand will remain in the container.
D) Salt and sand will both remain in the container.
|
D) Salt and sand will both remain in the container.
|
Context:
the broad definition of " utilizing a biotechnological system to make products ". indeed, the cultivation of plants may be viewed as the earliest biotechnological enterprise. agriculture has been theorized to have become the dominant way of producing food since the neolithic revolution. through early biotechnology, the earliest farmers selected and bred the best - suited crops ( e. g., those with the highest yields ) to produce enough food to support a growing population. as crops and fields became increasingly large and difficult to maintain, it was discovered that specific organisms and their by - products could effectively fertilize, restore nitrogen, and control pests. throughout the history of agriculture, farmers have inadvertently altered the genetics of their crops by introducing them to new environments and breeding them with other plants, one of the first forms of biotechnology. these processes were also part of the early fermentation of beer. these processes were introduced in early mesopotamia, egypt, china and india, and still use the same basic biological methods. in brewing, malted grains ( containing enzymes ) convert starch from grains into sugar ; specific yeasts are then added to produce beer. in this process, carbohydrates in the grains break down into alcohols, such as ethanol. later, other cultures developed the process of lactic acid fermentation, which produced other preserved foods, such as soy sauce. fermentation was also used in this time period to produce leavened bread. although the process of fermentation was not fully understood until louis pasteur ' s work in 1857, it is still considered the first use of biotechnology to convert a food source into another form. before the time of charles darwin ' s work and life, animal and plant scientists had already used selective breeding. darwin added to that body of work with his scientific observations about the ability of selective breeding to change species. these accounts contributed to darwin ' s theory of natural selection. for thousands of years, humans have used selective breeding to improve the production of crops and livestock used for food. in selective breeding, organisms with desirable characteristics are mated to produce offspring with the same characteristics. for example, this technique was used with corn to produce the largest and sweetest crops. in the early twentieth century scientists gained a greater understanding of microbiology and explored ways of manufacturing specific products. in 1917, chaim weizmann first used a pure microbiological culture in an industrial process, that of fermenting corn starch with clostridium acetobutylicum to produce acetone, which the united
the best - suited crops ( e. g., those with the highest yields ) to produce enough food to support a growing population. as crops and fields became increasingly large and difficult to maintain, it was discovered that specific organisms and their by - products could effectively fertilize, restore nitrogen, and control pests. throughout the history of agriculture, farmers have inadvertently altered the genetics of their crops through introducing them to new environments and breeding them with other plants β one of the first forms of biotechnology. these processes also were included in early fermentation of beer. these processes were introduced in early mesopotamia, egypt, china and india, and still use the same basic biological methods. in brewing, malted grains ( containing enzymes ) convert starch from grains into sugar and then adding specific yeasts to produce beer. in this process, carbohydrates in the grains broke down into alcohols, such as ethanol. later, other cultures produced the process of lactic acid fermentation, which produced other preserved foods, such as soy sauce. fermentation was also used in this time period to produce leavened bread. although the process of fermentation was not fully understood until louis pasteur ' s work in 1857, it is still the first use of biotechnology to convert a food source into another form. before the time of charles darwin ' s work and life, animal and plant scientists had already used selective breeding. darwin added to that body of work with his scientific observations about the ability of science to change species. these accounts contributed to darwin ' s theory of natural selection. for thousands of years, humans have used selective breeding to improve the production of crops and livestock to use them for food. in selective breeding, organisms with desirable characteristics are mated to produce offspring with the same characteristics. for example, this technique was used with corn to produce the largest and sweetest crops. in the early twentieth century scientists gained a greater understanding of microbiology and explored ways of manufacturing specific products. in 1917, chaim weizmann first used a pure microbiological culture in an industrial process, that of manufacturing corn starch using clostridium acetobutylicum, to produce acetone, which the united kingdom desperately needed to manufacture explosives during world war i. biotechnology has also led to the development of antibiotics. in 1928, alexander fleming discovered the mold penicillium. his work led to the purification of the antibiotic formed by the mold by howard florey, ernst boris chain and norman heatley β to form
( division of the nucleus ) is preceded by the s stage of interphase ( during which the dna is replicated ) and is often followed by telophase and cytokinesis ; which divides the cytoplasm, organelles and cell membrane of one cell into two new cells containing roughly equal shares of these cellular components. the different stages of mitosis all together define the mitotic phase of an animal cell cycle β the division of the mother cell into two genetically identical daughter cells. the cell cycle is a vital process by which a single - celled fertilized egg develops into a mature organism, as well as the process by which hair, skin, blood cells, and some internal organs are renewed. after cell division, each of the daughter cells begin the interphase of a new cycle. in contrast to mitosis, meiosis results in four haploid daughter cells by undergoing one round of dna replication followed by two divisions. homologous chromosomes are separated in the first division ( meiosis i ), and sister chromatids are separated in the second division ( meiosis ii ). both of these cell division cycles are used in the process of sexual reproduction at some point in their life cycle. both are believed to be present in the last eukaryotic common ancestor. prokaryotes ( i. e., archaea and bacteria ) can also undergo cell division ( or binary fission ). unlike the processes of mitosis and meiosis in eukaryotes, binary fission in prokaryotes takes place without the formation of a spindle apparatus on the cell. before binary fission, dna in the bacterium is tightly coiled. after it has uncoiled and duplicated, it is pulled to the separate poles of the bacterium as it increases the size to prepare for splitting. growth of a new cell wall begins to separate the bacterium ( triggered by ftsz polymerization and " z - ring " formation ). the new cell wall ( septum ) fully develops, resulting in the complete split of the bacterium. the new daughter cells have tightly coiled dna rods, ribosomes, and plasmids. = = = sexual reproduction and meiosis = = = meiosis is a central feature of sexual reproduction in eukaryotes, and the most fundamental function of meiosis appears to be conservation of the integrity of the genome that is passed on to progeny by parents. two aspects of sexual reproduction, meiotic recombination and outcrossing, are likely maintained respectively by
. throughout the history of agriculture, farmers have inadvertently altered the genetics of their crops through introducing them to new environments and breeding them with other plants β one of the first forms of biotechnology. these processes also were included in early fermentation of beer. these processes were introduced in early mesopotamia, egypt, china and india, and still use the same basic biological methods. in brewing, malted grains ( containing enzymes ) convert starch from grains into sugar and then adding specific yeasts to produce beer. in this process, carbohydrates in the grains broke down into alcohols, such as ethanol. later, other cultures produced the process of lactic acid fermentation, which produced other preserved foods, such as soy sauce. fermentation was also used in this time period to produce leavened bread. although the process of fermentation was not fully understood until louis pasteur ' s work in 1857, it is still the first use of biotechnology to convert a food source into another form. before the time of charles darwin ' s work and life, animal and plant scientists had already used selective breeding. darwin added to that body of work with his scientific observations about the ability of science to change species. these accounts contributed to darwin ' s theory of natural selection. for thousands of years, humans have used selective breeding to improve the production of crops and livestock to use them for food. in selective breeding, organisms with desirable characteristics are mated to produce offspring with the same characteristics. for example, this technique was used with corn to produce the largest and sweetest crops. in the early twentieth century scientists gained a greater understanding of microbiology and explored ways of manufacturing specific products. in 1917, chaim weizmann first used a pure microbiological culture in an industrial process, that of manufacturing corn starch using clostridium acetobutylicum, to produce acetone, which the united kingdom desperately needed to manufacture explosives during world war i. biotechnology has also led to the development of antibiotics. in 1928, alexander fleming discovered the mold penicillium. his work led to the purification of the antibiotic formed by the mold by howard florey, ernst boris chain and norman heatley β to form what we today know as penicillin. in 1940, penicillin became available for medicinal use to treat bacterial infections in humans. the field of modern biotechnology is generally thought of as having been born in 1971 when paul berg ' s ( stanford ) experiments in gene splicing had early success. herbert w. boyer
the strictest definition of " plant " includes only the " land plants " or embryophytes, which include seed plants ( gymnosperms, including the pines, and flowering plants ) and the free - sporing cryptogams including ferns, clubmosses, liverworts, hornworts and mosses. embryophytes are multicellular eukaryotes descended from an ancestor that obtained its energy from sunlight by photosynthesis. they have life cycles with alternating haploid and diploid phases. the sexual haploid phase of embryophytes, known as the gametophyte, nurtures the developing diploid embryo sporophyte within its tissues for at least part of its life, even in the seed plants, where the gametophyte itself is nurtured by its parent sporophyte. other groups of organisms that were previously studied by botanists include bacteria ( now studied in bacteriology ), fungi ( mycology ) β including lichen - forming fungi ( lichenology ), non - chlorophyte algae ( phycology ), and viruses ( virology ). however, attention is still given to these groups by botanists, and fungi ( including lichens ) and photosynthetic protists are usually covered in introductory botany courses. palaeobotanists study ancient plants in the fossil record to provide information about the evolutionary history of plants. cyanobacteria, the first oxygen - releasing photosynthetic organisms on earth, are thought to have given rise to the ancestor of plants by entering into an endosymbiotic relationship with an early eukaryote, ultimately becoming the chloroplasts in plant cells. the new photosynthetic plants ( along with their algal relatives ) accelerated the rise in atmospheric oxygen started by the cyanobacteria, changing the ancient oxygen - free, reducing, atmosphere to one in which free oxygen has been abundant for more than 2 billion years. among the important botanical questions of the 21st century are the role of plants as primary producers in the global cycling of life ' s basic ingredients : energy, carbon, oxygen, nitrogen and water, and ways that our plant stewardship can help address the global environmental issues of resource management, conservation, human food security, biologically invasive organisms, carbon sequestration, climate change, and sustainability. = = = human nutrition = = = virtually all staple foods come either directly from primary production by plants, or indirectly from animals that
of these cellular components. the different stages of mitosis all together define the mitotic phase of an animal cell cycle β the division of the mother cell into two genetically identical daughter cells. the cell cycle is a vital process by which a single - celled fertilized egg develops into a mature organism, as well as the process by which hair, skin, blood cells, and some internal organs are renewed. after cell division, each of the daughter cells begin the interphase of a new cycle. in contrast to mitosis, meiosis results in four haploid daughter cells by undergoing one round of dna replication followed by two divisions. homologous chromosomes are separated in the first division ( meiosis i ), and sister chromatids are separated in the second division ( meiosis ii ). both of these cell division cycles are used in the process of sexual reproduction at some point in their life cycle. both are believed to be present in the last eukaryotic common ancestor. prokaryotes ( i. e., archaea and bacteria ) can also undergo cell division ( or binary fission ). unlike the processes of mitosis and meiosis in eukaryotes, binary fission in prokaryotes takes place without the formation of a spindle apparatus on the cell. before binary fission, dna in the bacterium is tightly coiled. after it has uncoiled and duplicated, it is pulled to the separate poles of the bacterium as it increases the size to prepare for splitting. growth of a new cell wall begins to separate the bacterium ( triggered by ftsz polymerization and " z - ring " formation ). the new cell wall ( septum ) fully develops, resulting in the complete split of the bacterium. the new daughter cells have tightly coiled dna rods, ribosomes, and plasmids. = = = sexual reproduction and meiosis = = = meiosis is a central feature of sexual reproduction in eukaryotes, and the most fundamental function of meiosis appears to be conservation of the integrity of the genome that is passed on to progeny by parents. two aspects of sexual reproduction, meiotic recombination and outcrossing, are likely maintained respectively by the adaptive advantages of recombinational repair of genomic dna damage and genetic complementation which masks the expression of deleterious recessive mutations. the beneficial effect of genetic complementation, derived from outcrossing ( cross - fertilization ) is also referred to as hybrid vigor or heterosis. charles
process of lactic acid fermentation, which produced other preserved foods, such as soy sauce. fermentation was also used in this time period to produce leavened bread. although the process of fermentation was not fully understood until louis pasteur ' s work in 1857, it is still the first use of biotechnology to convert a food source into another form. before the time of charles darwin ' s work and life, animal and plant scientists had already used selective breeding. darwin added to that body of work with his scientific observations about the ability of science to change species. these accounts contributed to darwin ' s theory of natural selection. for thousands of years, humans have used selective breeding to improve the production of crops and livestock to use them for food. in selective breeding, organisms with desirable characteristics are mated to produce offspring with the same characteristics. for example, this technique was used with corn to produce the largest and sweetest crops. in the early twentieth century scientists gained a greater understanding of microbiology and explored ways of manufacturing specific products. in 1917, chaim weizmann first used a pure microbiological culture in an industrial process, that of manufacturing corn starch using clostridium acetobutylicum, to produce acetone, which the united kingdom desperately needed to manufacture explosives during world war i. biotechnology has also led to the development of antibiotics. in 1928, alexander fleming discovered the mold penicillium. his work led to the purification of the antibiotic formed by the mold by howard florey, ernst boris chain and norman heatley β to form what we today know as penicillin. in 1940, penicillin became available for medicinal use to treat bacterial infections in humans. the field of modern biotechnology is generally thought of as having been born in 1971 when paul berg ' s ( stanford ) experiments in gene splicing had early success. herbert w. boyer ( univ. calif. at san francisco ) and stanley n. cohen ( stanford ) significantly advanced the new technology in 1972 by transferring genetic material into a bacterium, such that the imported material would be reproduced. the commercial viability of a biotechnology industry was significantly expanded on june 16, 1980, when the united states supreme court ruled that a genetically modified microorganism could be patented in the case of diamond v. chakrabarty. indian - born ananda chakrabarty, working for general electric, had modified a bacterium ( of the genus pseudomonas ) capable of breaking down crude oil, which he proposed to
in his 1878 book the effects of cross and self - fertilization in the vegetable kingdom at the start of chapter xii noted " the first and most important of the conclusions which may be drawn from the observations given in this volume, is that generally cross - fertilisation is beneficial and self - fertilisation often injurious, at least with the plants on which i experimented. " an important adaptive benefit of outcrossing is that it allows the masking of deleterious mutations in the genome of progeny. this beneficial effect is also known as hybrid vigor or heterosis. once outcrossing is established, subsequent switching to inbreeding becomes disadvantageous since it allows expression of the previously masked deleterious recessive mutations, commonly referred to as inbreeding depression. unlike in higher animals, where parthenogenesis is rare, asexual reproduction may occur in plants by several different mechanisms. the formation of stem tubers in potato is one example. particularly in arctic or alpine habitats, where opportunities for fertilisation of flowers by animals are rare, plantlets or bulbs, may develop instead of flowers, replacing sexual reproduction with asexual reproduction and giving rise to clonal populations genetically identical to the parent. this is one of several types of apomixis that occur in plants. apomixis can also happen in a seed, producing a seed that contains an embryo genetically identical to the parent. most sexually reproducing organisms are diploid, with paired chromosomes, but doubling of their chromosome number may occur due to errors in cytokinesis. this can occur early in development to produce an autopolyploid or partly autopolyploid organism, or during normal processes of cellular differentiation to produce some cell types that are polyploid ( endopolyploidy ), or during gamete formation. an allopolyploid plant may result from a hybridisation event between two different species. both autopolyploid and allopolyploid plants can often reproduce normally, but may be unable to cross - breed successfully with the parent population because there is a mismatch in chromosome numbers. these plants that are reproductively isolated from the parent species but live within the same geographical area, may be sufficiently successful to form a new species. some otherwise sterile plant polyploids can still reproduce vegetatively or by seed apomixis, forming clonal populations of identical individuals. durum wheat is a fertile tetraploid allopolyploid
pluripotent cell lines of the embryo, which in turn become fully differentiated cells. a single fertilised egg cell, the zygote, gives rise to the many different plant cell types including parenchyma, xylem vessel elements, phloem sieve tubes, guard cells of the epidermis, etc. as it continues to divide. the process results from the epigenetic activation of some genes and inhibition of others. unlike animals, many plant cells, particularly those of the parenchyma, do not terminally differentiate, remaining totipotent with the ability to give rise to a new individual plant. exceptions include highly lignified cells, the sclerenchyma and xylem which are dead at maturity, and the phloem sieve tubes which lack nuclei. while plants use many of the same epigenetic mechanisms as animals, such as chromatin remodelling, an alternative hypothesis is that plants set their gene expression patterns using positional information from the environment and surrounding cells to determine their developmental fate. epigenetic changes can lead to paramutations, which do not follow the mendelian heritage rules. these epigenetic marks are carried from one generation to the next, with one allele inducing a change on the other. = = plant evolution = = the chloroplasts of plants have a number of biochemical, structural and genetic similarities to cyanobacteria, ( commonly but incorrectly known as " blue - green algae " ) and are thought to be derived from an ancient endosymbiotic relationship between an ancestral eukaryotic cell and a cyanobacterial resident. the algae are a polyphyletic group and are placed in various divisions, some more closely related to plants than others. there are many differences between them in features such as cell wall composition, biochemistry, pigmentation, chloroplast structure and nutrient reserves. the algal division charophyta, sister to the green algal division chlorophyta, is considered to contain the ancestor of true plants. the charophyte class charophyceae and the land plant sub - kingdom embryophyta together form the monophyletic group or clade streptophytina. nonvascular land plants are embryophytes that lack the vascular tissues xylem and phloem. they include mosses, liverworts and hornworts. pteridophytic vascular plants with true xyle
i discuss the possible instanton - induced multiparticle production in hard processes in qcd. figures are available upon request.
Question: Which of the following is formed immediately after fertilization?
A) egg
B) sperm
C) zygote
D) embryo
|
C) zygote
|
Context:
in 1738. the spinning jenny, invented in 1764, was a machine that used multiple spinning wheels ; however, it produced low quality thread. the water frame patented by richard arkwright in 1767, produced a better quality thread than the spinning jenny. the spinning mule, patented in 1779 by samuel crompton, produced a high quality thread. the power loom was invented by edmund cartwright in 1787. in the mid - 1750s, the steam engine was applied to the water power - constrained iron, copper and lead industries for powering blast bellows. these industries were located near the mines, some of which were using steam engines for mine pumping. steam engines were too powerful for leather bellows, so cast iron blowing cylinders were developed in 1768. steam powered blast furnaces achieved higher temperatures, allowing the use of more lime in iron blast furnace feed. ( lime rich slag was not free - flowing at the previously used temperatures. ) with a sufficient lime ratio, sulfur from coal or coke fuel reacts with the slag so that the sulfur does not contaminate the iron. coal and coke were cheaper and more abundant fuel. as a result, iron production rose significantly during the last decades of the 18th century. coal converted to coke fueled higher temperature blast furnaces and produced cast iron in much larger amounts than before, allowing the creation of a range of structures such as the iron bridge. cheap coal meant that industry was no longer constrained by water resources driving the mills, although it continued as a valuable source of power. the steam engine helped drain the mines, so more coal reserves could be accessed, and the output of coal increased. the development of the high - pressure steam engine made locomotives possible, and a transport revolution followed. the steam engine which had existed since the early 18th century, was practically applied to both steamboat and railway transportation. the liverpool and manchester railway, the first purpose - built railway line, opened in 1830, the rocket locomotive of robert stephenson being one of its first working locomotives used. manufacture of ships ' pulley blocks by all - metal machines at the portsmouth block mills in 1803 instigated the age of sustained mass production. machine tools used by engineers to manufacture parts began in the first decade of the century, notably by richard roberts and joseph whitworth. the development of interchangeable parts through what is now called the american system of manufacturing began in the firearms industry at the u. s. federal arsenals in the early 19th century, and became widely used by the end of the century. until the enlightenment era, little progress
and irrigation in the alluvial south, and catchment systems stretching for tens of kilometers in the hilly north. their palaces had sophisticated drainage systems. writing was invented in mesopotamia, using the cuneiform script. many records on clay tablets and stone inscriptions have survived. these civilizations were early adopters of bronze technologies which they used for tools, weapons and monumental statuary. by 1200 bc they could cast objects 5 m long in a single piece. several of the six classic simple machines were invented in mesopotamia. mesopotamians have been credited with the invention of the wheel. the wheel and axle mechanism first appeared with the potter ' s wheel, invented in mesopotamia ( modern iraq ) during the 5th millennium bc. this led to the invention of the wheeled vehicle in mesopotamia during the early 4th millennium bc. depictions of wheeled wagons found on clay tablet pictographs at the eanna district of uruk are dated between 3700 and 3500 bc. the lever was used in the shadoof water - lifting device, the first crane machine, which appeared in mesopotamia circa 3000 bc, and then in ancient egyptian technology circa 2000 bc. the earliest evidence of pulleys date back to mesopotamia in the early 2nd millennium bc. the screw, the last of the simple machines to be invented, first appeared in mesopotamia during the neo - assyrian period ( 911 β 609 ) bc. the assyrian king sennacherib ( 704 β 681 bc ) claims to have invented automatic sluices and to have been the first to use water screw pumps, of up to 30 tons weight, which were cast using two - part clay molds rather than by the ' lost wax ' process. the jerwan aqueduct ( c. 688 bc ) is made with stone arches and lined with waterproof concrete. the babylonian astronomical diaries spanned 800 years. they enabled meticulous astronomers to plot the motions of the planets and to predict eclipses. the earliest evidence of water wheels and watermills date back to the ancient near east in the 4th century bc, specifically in the persian empire before 350 bc, in the regions of mesopotamia ( iraq ) and persia ( iran ). this pioneering use of water power constituted the first human - devised motive force not to rely on muscle power ( besides the sail ). = = = = egypt = = = = the egyptians, known for building pyramids centuries before the creation of modern tools, invented and used many simple machines, such as the ramp to aid construction processes. historians and archaeologists have found evidence that the pyramids were built using
; austrian experts have established that the wheel is between 5, 100 and 5, 350 years old. the invention of the wheel revolutionized trade and war. it did not take long to discover that wheeled wagons could be used to carry heavy loads. the ancient sumerians used a potter ' s wheel and may have invented it. a stone pottery wheel found in the city - state of ur dates to around 3, 429 bce, and even older fragments of wheel - thrown pottery have been found in the same area. fast ( rotary ) potters ' wheels enabled early mass production of pottery, but it was the use of the wheel as a transformer of energy ( through water wheels, windmills, and even treadmills ) that revolutionized the application of nonhuman power sources. the first two - wheeled carts were derived from travois and were first used in mesopotamia and iran in around 3, 000 bce. the oldest known constructed roadways are the stone - paved streets of the city - state of ur, dating to c. 4, 000 bce, and timber roads leading through the swamps of glastonbury, england, dating to around the same period. the first long - distance road, which came into use around 3, 500 bce, spanned 2, 400 km from the persian gulf to the mediterranean sea, but was not paved and was only partially maintained. in around 2, 000 bce, the minoans on the greek island of crete built a 50 km road leading from the palace of gortyn on the south side of the island, through the mountains, to the palace of knossos on the north side of the island. unlike the earlier road, the minoan road was completely paved. ancient minoan private homes had running water. a bathtub virtually identical to modern ones was unearthed at the palace of knossos. several minoan private homes also had toilets, which could be flushed by pouring water down the drain. the ancient romans had many public flush toilets, which emptied into an extensive sewage system. the primary sewer in rome was the cloaca maxima ; construction began on it in the sixth century bce and it is still in use today. the ancient romans also had a complex system of aqueducts, which were used to transport water across long distances. the first roman aqueduct was built in 312 bce. the eleventh and final ancient roman aqueduct was built in 226 ce. put together, the roman aqueducts extended over 450 km, but less than 70 km of this was above ground
was done using the spinning wheel and weaving was done on a hand - and - foot - operated loom. it took from three to five spinners to supply one weaver. the invention of the flying shuttle in 1733 doubled the output of a weaver, creating a shortage of spinners. the spinning frame for wool was invented in 1738. the spinning jenny, invented in 1764, was a machine that used multiple spinning wheels ; however, it produced low quality thread. the water frame patented by richard arkwright in 1767, produced a better quality thread than the spinning jenny. the spinning mule, patented in 1779 by samuel crompton, produced a high quality thread. the power loom was invented by edmund cartwright in 1787. in the mid - 1750s, the steam engine was applied to the water power - constrained iron, copper and lead industries for powering blast bellows. these industries were located near the mines, some of which were using steam engines for mine pumping. steam engines were too powerful for leather bellows, so cast iron blowing cylinders were developed in 1768. steam powered blast furnaces achieved higher temperatures, allowing the use of more lime in iron blast furnace feed. ( lime rich slag was not free - flowing at the previously used temperatures. ) with a sufficient lime ratio, sulfur from coal or coke fuel reacts with the slag so that the sulfur does not contaminate the iron. coal and coke were cheaper and more abundant fuel. as a result, iron production rose significantly during the last decades of the 18th century. coal converted to coke fueled higher temperature blast furnaces and produced cast iron in much larger amounts than before, allowing the creation of a range of structures such as the iron bridge. cheap coal meant that industry was no longer constrained by water resources driving the mills, although it continued as a valuable source of power. the steam engine helped drain the mines, so more coal reserves could be accessed, and the output of coal increased. the development of the high - pressure steam engine made locomotives possible, and a transport revolution followed. the steam engine which had existed since the early 18th century, was practically applied to both steamboat and railway transportation. the liverpool and manchester railway, the first purpose - built railway line, opened in 1830, the rocket locomotive of robert stephenson being one of its first working locomotives used. manufacture of ships ' pulley blocks by all - metal machines at the portsmouth block mills in 1803 instigated the age of sustained mass production. machine tools used by engineers to manufacture parts began in the first decade of the century,
electric motors, servo - mechanisms, and other electrical systems in conjunction with special software. a common example of a mechatronics system is a cd - rom drive. mechanical systems open and close the drive, spin the cd and move the laser, while an optical system reads the data on the cd and converts it to bits. integrated software controls the process and communicates the contents of the cd to the computer. robotics is the application of mechatronics to create robots, which are often used in industry to perform tasks that are dangerous, unpleasant, or repetitive. these robots may be of any shape and size, but all are preprogrammed and interact physically with the world. to create a robot, an engineer typically employs kinematics ( to determine the robot ' s range of motion ) and mechanics ( to determine the stresses within the robot ). robots are used extensively in industrial automation engineering. they allow businesses to save money on labor, perform tasks that are either too dangerous or too precise for humans to perform them economically, and to ensure better quality. many companies employ assembly lines of robots, especially in automotive industries and some factories are so robotized that they can run by themselves. outside the factory, robots have been employed in bomb disposal, space exploration, and many other fields. robots are also sold for various residential applications, from recreation to domestic applications. = = = structural analysis = = = structural analysis is the branch of mechanical engineering ( and also civil engineering ) devoted to examining why and how objects fail and to fix the objects and their performance. structural failures occur in two general modes : static failure, and fatigue failure. static structural failure occurs when, upon being loaded ( having a force applied ) the object being analyzed either breaks or is deformed plastically, depending on the criterion for failure. fatigue failure occurs when an object fails after a number of repeated loading and unloading cycles. fatigue failure occurs because of imperfections in the object : a microscopic crack on the surface of the object, for instance, will grow slightly with each cycle ( propagation ) until the crack is large enough to cause ultimate failure. failure is not simply defined as when a part breaks, however ; it is defined as when a part does not operate as intended. some systems, such as the perforated top sections of some plastic bags, are designed to break. if these systems do not break, failure analysis might be employed to determine the cause. structural analysis is often used by mechanical engineers after a failure has occurred, or when designing to prevent failure
used for tools, weapons and monumental statuary. by 1200 bc they could cast objects 5 m long in a single piece. several of the six classic simple machines were invented in mesopotamia. mesopotamians have been credited with the invention of the wheel. the wheel and axle mechanism first appeared with the potter ' s wheel, invented in mesopotamia ( modern iraq ) during the 5th millennium bc. this led to the invention of the wheeled vehicle in mesopotamia during the early 4th millennium bc. depictions of wheeled wagons found on clay tablet pictographs at the eanna district of uruk are dated between 3700 and 3500 bc. the lever was used in the shadoof water - lifting device, the first crane machine, which appeared in mesopotamia circa 3000 bc, and then in ancient egyptian technology circa 2000 bc. the earliest evidence of pulleys date back to mesopotamia in the early 2nd millennium bc. the screw, the last of the simple machines to be invented, first appeared in mesopotamia during the neo - assyrian period ( 911 β 609 ) bc. the assyrian king sennacherib ( 704 β 681 bc ) claims to have invented automatic sluices and to have been the first to use water screw pumps, of up to 30 tons weight, which were cast using two - part clay molds rather than by the ' lost wax ' process. the jerwan aqueduct ( c. 688 bc ) is made with stone arches and lined with waterproof concrete. the babylonian astronomical diaries spanned 800 years. they enabled meticulous astronomers to plot the motions of the planets and to predict eclipses. the earliest evidence of water wheels and watermills date back to the ancient near east in the 4th century bc, specifically in the persian empire before 350 bc, in the regions of mesopotamia ( iraq ) and persia ( iran ). this pioneering use of water power constituted the first human - devised motive force not to rely on muscle power ( besides the sail ). = = = = egypt = = = = the egyptians, known for building pyramids centuries before the creation of modern tools, invented and used many simple machines, such as the ramp to aid construction processes. historians and archaeologists have found evidence that the pyramids were built using three of what is called the six simple machines, from which all machines are based. these machines are the inclined plane, the wedge, and the lever, which allowed the ancient egyptians to move millions of limestone blocks which weighed approximately 3. 5 tons ( 7, 000 lbs. ) each into place to create structures like the
river valley during ancient times. the papyrus was harvested by field workers and brought to processing centers where it was cut into thin strips. the strips were then laid - out side by side and covered in plant resin. the second layer of strips was laid on perpendicularly, then both pressed together until the sheet was dry. the sheets were then joined to form a roll and later used for writing. egyptian society made several significant advances during dynastic periods in many areas of technology. according to hossam elanzeery, they were the first civilization to use timekeeping devices such as sundials, shadow clocks, and obelisks and successfully leveraged their knowledge of astronomy to create a calendar model that society still uses today. they developed shipbuilding technology that saw them progress from papyrus reed vessels to cedar wood ships while also pioneering the use of rope trusses and stem - mounted rudders. the egyptians also used their knowledge of anatomy to lay the foundation for many modern medical techniques and practiced the earliest known version of neuroscience. elanzeery also states that they used and furthered mathematical science, as evidenced in the building of the pyramids. ancient egyptians also invented and pioneered many food technologies that have become the basis of modern food technology processes. based on paintings and reliefs found in tombs, as well as archaeological artifacts, scholars like paul t nicholson believe that the ancient egyptians established systematic farming practices, engaged in cereal processing, brewed beer and baked bread, processed meat, practiced viticulture and created the basis for modern wine production, and created condiments to complement, preserve and mask the flavors of their food. = = = = indus valley = = = = the indus valley civilization, situated in a resource - rich area ( in modern pakistan and northwestern india ), is notable for its early application of city planning, sanitation technologies, and plumbing. indus valley construction and architecture, called ' vaastu shastra ', suggests a thorough understanding of materials engineering, hydrology, and sanitation. = = = = china = = = = the chinese made many first - known discoveries and developments. major technological contributions from china include the earliest known form of the binary code and epigenetic sequencing, early seismological detectors, matches, paper, helicopter rotor, raised - relief map, the double - action piston pump, cast iron, water powered blast furnace bellows, the iron plough, the multi - tube seed drill, the wheelbarrow, the parachute, the compass, the rudder, the crossbow, the south pointing chariot and gunpowder
water, and used in the gristmilling and sugarcane industries. sugar mills first appeared in the medieval islamic world. they were first driven by watermills, and then windmills from the 9th and 10th centuries in what are today afghanistan, pakistan and iran. crops such as almonds and citrus fruit were brought to europe through al - andalus, and sugar cultivation was gradually adopted across europe. arab merchants dominated trade in the indian ocean until the arrival of the portuguese in the 16th century. the muslim world adopted papermaking from china. the earliest paper mills appeared in abbasid - era baghdad during 794 β 795. the knowledge of gunpowder was also transmitted from china via predominantly islamic countries, where formulas for pure potassium nitrate were developed. the spinning wheel was invented in the islamic world by the early 11th century. it was later widely adopted in europe, where it was adapted into the spinning jenny, a key device during the industrial revolution. the crankshaft was invented by al - jazari in 1206, and is central to modern machinery such as the steam engine, internal combustion engine and automatic controls. the camshaft was also first described by al - jazari in 1206. early programmable machines were also invented in the muslim world. the first music sequencer, a programmable musical instrument, was an automated flute player invented by the banu musa brothers, described in their book of ingenious devices, in the 9th century. in 1206, al - jazari invented programmable automata / robots. he described four automaton musicians, including two drummers operated by a programmable drum machine, where the drummer could be made to play different rhythms and different drum patterns. the castle clock, a hydropowered mechanical astronomical clock invented by al - jazari, was an early programmable analog computer. in the ottoman empire, a practical impulse steam turbine was invented in 1551 by taqi ad - din muhammad ibn ma ' ruf in ottoman egypt. he described a method for rotating a spit by means of a jet of steam playing on rotary vanes around the periphery of a wheel. known as a steam jack, a similar device for rotating a spit was also later described by john wilkins in 1648. = = = = medieval europe = = = = while medieval technology has been long depicted as a step backward in the evolution of western technology, a generation of medievalists ( like the american historian of science lynn white ) stressed from the 1940s onwards the innovative character of many medieval techniques. genuine medieval contributions include
be a low - cost, feasible, and accessible way for promoting pa. " essentially, this insinuates that wearable technology can be beneficial to everyone and really is not cost prohibitive. also, consistently seeing wearable technology actually being utilized and worn by other people promotes the idea of physical activity and pushes more individuals to take part. wearable technology also helps with monitoring chronic disease development and physical activity in terms of context. for example, according to the american journal of preventive medicine, " wearables can be used across different chronic disease trajectory phases ( e. g., pre - versus post - surgery ) and linked to medical record data to obtain granular data on how activity frequency, intensity, and duration changes over the disease course and with different treatments. " wearable technology can be beneficial in tracking and helping analyze data in terms of how one is performing as time goes on, and how they may be performing with different changes in their diet, workout routine, or sleep patterns. also, not only can wearable technology be helpful in measuring results pre and post surgery, but it can also help measure results as someone may be rehabbing from a chronic disease such as cancer or heart disease, etc. wearable technology has the potential to create new and improved ways of how we look at health and how we actually interpret the science behind our health. it can propel us into higher levels of medicine and has already made a significant impact on how patients are diagnosed, treated, and rehabbed over time. however, extensive research is still needed on how to properly integrate wearable technology into health care and how to best utilize it. in addition, despite the benefits of wearable technology, a lot of research still has to be completed in order to start transitioning wearable technology towards very sick, high - risk patients. = = = sense - making of the data = = = while wearables can collect data in aggregate form, most of them are limited in their ability to analyze or make conclusions based on this data β thus, most are used primarily for general health information. end user perception of how their data is used plays a big role in how such datasets can be fully optimized. exceptions include seizure - alerting wearables, which continuously analyze the wearer ' s data and make a decision about calling for help β the data collected can then provide doctors with objective evidence that they may find useful in diagnoses. wearables can account for individual differences, although most
i describe the early history, from the nineteen sixties, of attempts at quantizing general relativity.
Question: Which of the following helped lead to the invention of personal computers?
A) Internet
B) keyboard
C) wireless transmitter
D) integrated circuit
|
D) integrated circuit
|
Context:
pigment chlorophyll a. chlorophyll a ( as well as its plant and green algal - specific cousin chlorophyll b ) absorbs light in the blue - violet and orange / red parts of the spectrum while reflecting and transmitting the green light that we see as the characteristic colour of these organisms. the energy in the red and blue light that these pigments absorb is used by chloroplasts to make energy - rich carbon compounds from carbon dioxide and water by oxygenic photosynthesis, a process that generates molecular oxygen ( o2 ) as a by - product. the light energy captured by chlorophyll a is initially in the form of electrons ( and later a proton gradient ) that is used to make molecules of atp and nadph which temporarily store and transport energy. their energy is used in the light - independent reactions of the calvin cycle by the enzyme rubisco to produce molecules of the 3 - carbon sugar glyceraldehyde 3 - phosphate ( g3p ). glyceraldehyde 3 - phosphate is the first product of photosynthesis and the raw material from which glucose and almost all other organic molecules of biological origin are synthesised. some of the glucose is converted to starch which is stored in the chloroplast. starch is the characteristic energy store of most land plants and algae, while inulin, a polymer of fructose is used for the same purpose in the sunflower family asteraceae. some of the glucose is converted to sucrose ( common table sugar ) for export to the rest of the plant. unlike in animals ( which lack chloroplasts ), plants and their eukaryote relatives have delegated many biochemical roles to their chloroplasts, including synthesising all their fatty acids, and most amino acids. the fatty acids that chloroplasts make are used for many things, such as providing material to build cell membranes out of and making the polymer cutin which is found in the plant cuticle that protects land plants from drying out. plants synthesise a number of unique polymers like the polysaccharide molecules cellulose, pectin and xyloglucan from which the land plant cell wall is constructed. vascular land plants make lignin, a polymer used to strengthen the secondary cell walls of xylem tracheids and vessels to keep them from collapsing when a plant sucks water through them under water stress. lignin
is stored in carbohydrate molecules, such as sugars, which are synthesized from carbon dioxide and water. in most cases, oxygen is released as a waste product. most plants, algae, and cyanobacteria perform photosynthesis, which is largely responsible for producing and maintaining the oxygen content of the earth ' s atmosphere, and supplies most of the energy necessary for life on earth. photosynthesis has four stages : light absorption, electron transport, atp synthesis, and carbon fixation. light absorption is the initial step of photosynthesis whereby light energy is absorbed by chlorophyll pigments attached to proteins in the thylakoid membranes. the absorbed light energy is used to remove electrons from a donor ( water ) to a primary electron acceptor, a quinone designated as q. in the second stage, electrons move from the quinone primary electron acceptor through a series of electron carriers until they reach a final electron acceptor, which is usually the oxidized form of nadp +, which is reduced to nadph, a process that takes place in a protein complex called photosystem i ( psi ). the transport of electrons is coupled to the movement of protons ( or hydrogen ) from the stroma to the thylakoid membrane, which forms a ph gradient across the membrane as hydrogen becomes more concentrated in the lumen than in the stroma. this is analogous to the proton - motive force generated across the inner mitochondrial membrane in aerobic respiration. during the third stage of photosynthesis, the movement of protons down their concentration gradients from the thylakoid lumen to the stroma through the atp synthase is coupled to the synthesis of atp by that same atp synthase. the nadph and atps generated by the light - dependent reactions in the second and third stages, respectively, provide the energy and electrons to drive the synthesis of glucose by fixing atmospheric carbon dioxide into existing organic carbon compounds, such as ribulose bisphosphate ( rubp ) in a sequence of light - independent ( or dark ) reactions called the calvin cycle. = = = cell signaling = = = cell signaling ( or communication ) is the ability of cells to receive, process, and transmit signals with its environment and with itself. signals can be non - chemical such as light, electrical impulses, and heat, or chemical signals ( or ligands ) that interact with receptors, which can be found embedded in the cell membrane of another cell or located deep inside
the structural components of cells. as a by - product of photosynthesis, plants release oxygen into the atmosphere, a gas that is required by nearly all living things to carry out cellular respiration. in addition, they are influential in the global carbon and water cycles and plant roots bind and stabilise soils, preventing soil erosion. plants are crucial to the future of human society as they provide food, oxygen, biochemicals, and products for people, as well as creating and preserving soil. historically, all living things were classified as either animals or plants and botany covered the study of all organisms not considered animals. botanists examine both the internal functions and processes within plant organelles, cells, tissues, whole plants, plant populations and plant communities. at each of these levels, a botanist may be concerned with the classification ( taxonomy ), phylogeny and evolution, structure ( anatomy and morphology ), or function ( physiology ) of plant life. the strictest definition of " plant " includes only the " land plants " or embryophytes, which include seed plants ( gymnosperms, including the pines, and flowering plants ) and the free - sporing cryptogams including ferns, clubmosses, liverworts, hornworts and mosses. embryophytes are multicellular eukaryotes descended from an ancestor that obtained its energy from sunlight by photosynthesis. they have life cycles with alternating haploid and diploid phases. the sexual haploid phase of embryophytes, known as the gametophyte, nurtures the developing diploid embryo sporophyte within its tissues for at least part of its life, even in the seed plants, where the gametophyte itself is nurtured by its parent sporophyte. other groups of organisms that were previously studied by botanists include bacteria ( now studied in bacteriology ), fungi ( mycology ) β including lichen - forming fungi ( lichenology ), non - chlorophyte algae ( phycology ), and viruses ( virology ). however, attention is still given to these groups by botanists, and fungi ( including lichens ) and photosynthetic protists are usually covered in introductory botany courses. palaeobotanists study ancient plants in the fossil record to provide information about the evolutionary history of plants. cyanobacteria, the first oxygen - releasing photosynthetic organisms on earth, are thought to have given rise to the
of these organisms. the energy in the red and blue light that these pigments absorb is used by chloroplasts to make energy - rich carbon compounds from carbon dioxide and water by oxygenic photosynthesis, a process that generates molecular oxygen ( o2 ) as a by - product. the light energy captured by chlorophyll a is initially in the form of electrons ( and later a proton gradient ) that is used to make molecules of atp and nadph which temporarily store and transport energy. their energy is used in the light - independent reactions of the calvin cycle by the enzyme rubisco to produce molecules of the 3 - carbon sugar glyceraldehyde 3 - phosphate ( g3p ). glyceraldehyde 3 - phosphate is the first product of photosynthesis and the raw material from which glucose and almost all other organic molecules of biological origin are synthesised. some of the glucose is converted to starch which is stored in the chloroplast. starch is the characteristic energy store of most land plants and algae, while inulin, a polymer of fructose is used for the same purpose in the sunflower family asteraceae. some of the glucose is converted to sucrose ( common table sugar ) for export to the rest of the plant. unlike in animals ( which lack chloroplasts ), plants and their eukaryote relatives have delegated many biochemical roles to their chloroplasts, including synthesising all their fatty acids, and most amino acids. the fatty acids that chloroplasts make are used for many things, such as providing material to build cell membranes out of and making the polymer cutin which is found in the plant cuticle that protects land plants from drying out. plants synthesise a number of unique polymers like the polysaccharide molecules cellulose, pectin and xyloglucan from which the land plant cell wall is constructed. vascular land plants make lignin, a polymer used to strengthen the secondary cell walls of xylem tracheids and vessels to keep them from collapsing when a plant sucks water through them under water stress. lignin is also used in other cell types like sclerenchyma fibres that provide structural support for a plant and is a major constituent of wood. sporopollenin is a chemically resistant polymer found in the outer cell walls of spores and pollen of land plants responsible for the survival of early land plant spores and
the earth ' s atmosphere, and supplies most of the energy necessary for life on earth. photosynthesis has four stages : light absorption, electron transport, atp synthesis, and carbon fixation. light absorption is the initial step of photosynthesis whereby light energy is absorbed by chlorophyll pigments attached to proteins in the thylakoid membranes. the absorbed light energy is used to remove electrons from a donor ( water ) to a primary electron acceptor, a quinone designated as q. in the second stage, electrons move from the quinone primary electron acceptor through a series of electron carriers until they reach a final electron acceptor, which is usually the oxidized form of nadp +, which is reduced to nadph, a process that takes place in a protein complex called photosystem i ( psi ). the transport of electrons is coupled to the movement of protons ( or hydrogen ) from the stroma to the thylakoid membrane, which forms a ph gradient across the membrane as hydrogen becomes more concentrated in the lumen than in the stroma. this is analogous to the proton - motive force generated across the inner mitochondrial membrane in aerobic respiration. during the third stage of photosynthesis, the movement of protons down their concentration gradients from the thylakoid lumen to the stroma through the atp synthase is coupled to the synthesis of atp by that same atp synthase. the nadph and atps generated by the light - dependent reactions in the second and third stages, respectively, provide the energy and electrons to drive the synthesis of glucose by fixing atmospheric carbon dioxide into existing organic carbon compounds, such as ribulose bisphosphate ( rubp ) in a sequence of light - independent ( or dark ) reactions called the calvin cycle. = = = cell signaling = = = cell signaling ( or communication ) is the ability of cells to receive, process, and transmit signals with its environment and with itself. signals can be non - chemical such as light, electrical impulses, and heat, or chemical signals ( or ligands ) that interact with receptors, which can be found embedded in the cell membrane of another cell or located deep inside a cell. there are generally four types of chemical signals : autocrine, paracrine, juxtacrine, and hormones. in autocrine signaling, the ligand affects the same cell that releases it. tumor cells, for example, can reproduce uncontrollably because they release signals that initiate their
energy they need to exist. plants, algae and cyanobacteria are the major groups of organisms that carry out photosynthesis, a process that uses the energy of sunlight to convert water and carbon dioxide into sugars that can be used both as a source of chemical energy and of organic molecules that are used in the structural components of cells. as a by - product of photosynthesis, plants release oxygen into the atmosphere, a gas that is required by nearly all living things to carry out cellular respiration. in addition, they are influential in the global carbon and water cycles and plant roots bind and stabilise soils, preventing soil erosion. plants are crucial to the future of human society as they provide food, oxygen, biochemicals, and products for people, as well as creating and preserving soil. historically, all living things were classified as either animals or plants and botany covered the study of all organisms not considered animals. botanists examine both the internal functions and processes within plant organelles, cells, tissues, whole plants, plant populations and plant communities. at each of these levels, a botanist may be concerned with the classification ( taxonomy ), phylogeny and evolution, structure ( anatomy and morphology ), or function ( physiology ) of plant life. the strictest definition of " plant " includes only the " land plants " or embryophytes, which include seed plants ( gymnosperms, including the pines, and flowering plants ) and the free - sporing cryptogams including ferns, clubmosses, liverworts, hornworts and mosses. embryophytes are multicellular eukaryotes descended from an ancestor that obtained its energy from sunlight by photosynthesis. they have life cycles with alternating haploid and diploid phases. the sexual haploid phase of embryophytes, known as the gametophyte, nurtures the developing diploid embryo sporophyte within its tissues for at least part of its life, even in the seed plants, where the gametophyte itself is nurtured by its parent sporophyte. other groups of organisms that were previously studied by botanists include bacteria ( now studied in bacteriology ), fungi ( mycology ) β including lichen - forming fungi ( lichenology ), non - chlorophyte algae ( phycology ), and viruses ( virology ). however, attention is still given to these groups by botanists, and fungi ( including lichens ) and photos
substrate - level phosphorylation, which does not require oxygen. = = = photosynthesis = = = photosynthesis is a process used by plants and other organisms to convert light energy into chemical energy that can later be released to fuel the organism ' s metabolic activities via cellular respiration. this chemical energy is stored in carbohydrate molecules, such as sugars, which are synthesized from carbon dioxide and water. in most cases, oxygen is released as a waste product. most plants, algae, and cyanobacteria perform photosynthesis, which is largely responsible for producing and maintaining the oxygen content of the earth ' s atmosphere, and supplies most of the energy necessary for life on earth. photosynthesis has four stages : light absorption, electron transport, atp synthesis, and carbon fixation. light absorption is the initial step of photosynthesis whereby light energy is absorbed by chlorophyll pigments attached to proteins in the thylakoid membranes. the absorbed light energy is used to remove electrons from a donor ( water ) to a primary electron acceptor, a quinone designated as q. in the second stage, electrons move from the quinone primary electron acceptor through a series of electron carriers until they reach a final electron acceptor, which is usually the oxidized form of nadp +, which is reduced to nadph, a process that takes place in a protein complex called photosystem i ( psi ). the transport of electrons is coupled to the movement of protons ( or hydrogen ) from the stroma to the thylakoid membrane, which forms a ph gradient across the membrane as hydrogen becomes more concentrated in the lumen than in the stroma. this is analogous to the proton - motive force generated across the inner mitochondrial membrane in aerobic respiration. during the third stage of photosynthesis, the movement of protons down their concentration gradients from the thylakoid lumen to the stroma through the atp synthase is coupled to the synthesis of atp by that same atp synthase. the nadph and atps generated by the light - dependent reactions in the second and third stages, respectively, provide the energy and electrons to drive the synthesis of glucose by fixing atmospheric carbon dioxide into existing organic carbon compounds, such as ribulose bisphosphate ( rubp ) in a sequence of light - independent ( or dark ) reactions called the calvin cycle. = = = cell signaling = = = cell signaling ( or communication ) is the
the basis of all plant metabolism. the energy of sunlight, captured by oxygenic photosynthesis and released by cellular respiration, is the basis of almost all life. photoautotrophs, including all green plants, algae and cyanobacteria gather energy directly from sunlight by photosynthesis. heterotrophs including all animals, all fungi, all completely parasitic plants, and non - photosynthetic bacteria take in organic molecules produced by photoautotrophs and respire them or use them in the construction of cells and tissues. respiration is the oxidation of carbon compounds by breaking them down into simpler structures to release the energy they contain, essentially the opposite of photosynthesis. molecules are moved within plants by transport processes that operate at a variety of spatial scales. subcellular transport of ions, electrons and molecules such as water and enzymes occurs across cell membranes. minerals and water are transported from roots to other parts of the plant in the transpiration stream. diffusion, osmosis, and active transport and mass flow are all different ways transport can occur. examples of elements that plants need to transport are nitrogen, phosphorus, potassium, calcium, magnesium, and sulfur. in vascular plants, these elements are extracted from the soil as soluble ions by the roots and transported throughout the plant in the xylem. most of the elements required for plant nutrition come from the chemical breakdown of soil minerals. sucrose produced by photosynthesis is transported from the leaves to other parts of the plant in the phloem and plant hormones are transported by a variety of processes. = = = plant hormones = = = plants are not passive, but respond to external signals such as light, touch, and injury by moving or growing towards or away from the stimulus, as appropriate. tangible evidence of touch sensitivity is the almost instantaneous collapse of leaflets of mimosa pudica, the insect traps of venus flytrap and bladderworts, and the pollinia of orchids. the hypothesis that plant growth and development is coordinated by plant hormones or plant growth regulators first emerged in the late 19th century. darwin experimented on the movements of plant shoots and roots towards light and gravity, and concluded " it is hardly an exaggeration to say that the tip of the radicle.. acts like the brain of one of the lower animals.. directing the several movements ". about the same time, the role of auxins ( from the greek auxein, to grow ) in control of plant growth was first outlined by the dutch scientist
in this article i explain in detail a method for making small amounts of liquid oxygen in the classroom if there is no access to a cylinder of compressed oxygen gas. i also discuss two methods for identifying the fact that it is liquid oxygen as opposed to liquid nitrogen.
known as " algae " have unique organelles known as chloroplasts. chloroplasts are thought to be descended from cyanobacteria that formed endosymbiotic relationships with ancient plant and algal ancestors. chloroplasts and cyanobacteria contain the blue - green pigment chlorophyll a. chlorophyll a ( as well as its plant and green algal - specific cousin chlorophyll b ) absorbs light in the blue - violet and orange / red parts of the spectrum while reflecting and transmitting the green light that we see as the characteristic colour of these organisms. the energy in the red and blue light that these pigments absorb is used by chloroplasts to make energy - rich carbon compounds from carbon dioxide and water by oxygenic photosynthesis, a process that generates molecular oxygen ( o2 ) as a by - product. the light energy captured by chlorophyll a is initially in the form of electrons ( and later a proton gradient ) that is used to make molecules of atp and nadph which temporarily store and transport energy. their energy is used in the light - independent reactions of the calvin cycle by the enzyme rubisco to produce molecules of the 3 - carbon sugar glyceraldehyde 3 - phosphate ( g3p ). glyceraldehyde 3 - phosphate is the first product of photosynthesis and the raw material from which glucose and almost all other organic molecules of biological origin are synthesised. some of the glucose is converted to starch which is stored in the chloroplast. starch is the characteristic energy store of most land plants and algae, while inulin, a polymer of fructose is used for the same purpose in the sunflower family asteraceae. some of the glucose is converted to sucrose ( common table sugar ) for export to the rest of the plant. unlike in animals ( which lack chloroplasts ), plants and their eukaryote relatives have delegated many biochemical roles to their chloroplasts, including synthesising all their fatty acids, and most amino acids. the fatty acids that chloroplasts make are used for many things, such as providing material to build cell membranes out of and making the polymer cutin which is found in the plant cuticle that protects land plants from drying out. plants synthesise a number of unique polymers like the polysaccharide molecules cellulose,
Question: Blood absorbs oxygen in the
A) heart.
B) lungs.
C) stomach.
D) muscles.
|
B) lungs.
|
Context:
which applies a force that results in fracturing ), and impact ( which employs a milling medium or the particles themselves to cause fracturing ). attrition milling equipment includes the wet scrubber ( also called the planetary mill or wet attrition mill ), which has paddles in water creating vortexes in which the material collides and breaks up. compression mills include the jaw crusher, roller crusher and cone crusher. impact mills include the ball mill, which has media that tumble and fracture the material, or the resonantacoustic mixer. shaft impactors cause particle - to - particle attrition and compression. batching is the process of weighing the oxides according to recipes, and preparing them for mixing and drying. mixing occurs after batching and is performed with various machines, such as dry mixing ribbon mixers ( a type of cement mixer ), resonantacoustic mixers, mueller mixers, and pug mills. wet mixing generally involves the same equipment. forming is making the mixed material into shapes, ranging from toilet bowls to spark plug insulators. forming can involve : ( 1 ) extrusion, such as extruding " slugs " to make bricks, ( 2 ) pressing to make shaped parts, ( 3 ) slip casting, as in making toilet bowls, wash basins and ornamentals like ceramic statues. forming produces a " green " part, ready for drying. green parts are soft, pliable, and over time will lose shape. handling the green product will change its shape. for example, a green brick can be " squeezed ", and after squeezing it will stay that way. drying is removing the water or binder from the formed material. spray drying is widely used to prepare powder for pressing operations. other dryers are tunnel dryers and periodic dryers. controlled heat is applied in this two - stage process. first, heat removes water. this step needs careful control, as rapid heating causes cracks and surface defects. the dried part is smaller than the green part, and is brittle, necessitating careful handling, since a small impact will cause crumbling and breaking. sintering is where the dried parts pass through a controlled heating process, and the oxides are chemically changed to cause bonding and densification. the fired part will be smaller than the dried part. = = forming methods = = ceramic forming techniques include throwing, slipcasting, tape casting, freeze - casting, injection molding, dry pressing, isostatic pressing, hot isostatic pressing
temperature changes up to 1000 °c. = = processing steps = = the traditional ceramic process generally follows this sequence : milling → batching → mixing → forming → drying → firing → assembly. milling is the process by which materials are reduced from a large size to a smaller size. milling may involve breaking up cemented material ( in which case individual particles retain their shape ) or pulverization ( which involves grinding the particles themselves to a smaller size ). milling is generally done by mechanical means, including attrition ( which is particle - to - particle collision that results in agglomerate break up or particle shearing ), compression ( which applies a force that results in fracturing ), and impact ( which employs a milling medium or the particles themselves to cause fracturing ). attrition milling equipment includes the wet scrubber ( also called the planetary mill or wet attrition mill ), which has paddles in water creating vortexes in which the material collides and breaks up. compression mills include the jaw crusher, roller crusher and cone crusher. impact mills include the ball mill, which has media that tumble and fracture the material, or the resonantacoustic mixer. shaft impactors cause particle - to - particle attrition and compression. batching is the process of weighing the oxides according to recipes, and preparing them for mixing and drying. mixing occurs after batching and is performed with various machines, such as dry mixing ribbon mixers ( a type of cement mixer ), resonantacoustic mixers, mueller mixers, and pug mills. wet mixing generally involves the same equipment. forming is making the mixed material into shapes, ranging from toilet bowls to spark plug insulators. forming can involve : ( 1 ) extrusion, such as extruding " slugs " to make bricks, ( 2 ) pressing to make shaped parts, ( 3 ) slip casting, as in making toilet bowls, wash basins and ornamentals like ceramic statues. forming produces a " green " part, ready for drying. green parts are soft, pliable, and over time will lose shape. handling the green product will change its shape. for example, a green brick can be " squeezed ", and after squeezing it will stay that way. drying is removing the water or binder from the formed material. spray drying is widely used to prepare powder for pressing operations. other dryers are tunnel dryers and periodic dryers. controlled heat is applied in this two - stage process. first,
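the sequence and the repeated shrinkage described above ( the dried part is smaller than the green part, and the fired part is smaller again ) can be made concrete with a small sketch. the python below is purely illustrative: the step order follows the passage, but the shrinkage fractions and part length are hypothetical placeholder values, since the passage gives no numbers.

```python
# illustrative sketch only: the step sequence follows the passage
# (milling -> batching -> mixing -> forming -> drying -> firing -> assembly),
# but the shrinkage fractions below are hypothetical placeholders, not real process data.

PROCESS_STEPS = [
    "milling", "batching", "mixing", "forming", "drying", "firing", "assembly",
]

# assumed linear shrinkage at each size-changing step (fraction of length lost)
ASSUMED_SHRINKAGE = {
    "drying": 0.03,   # dried part is smaller than the green part
    "firing": 0.10,   # fired part is smaller again after sintering
}

def final_length(green_length_mm: float) -> float:
    """apply the assumed drying and firing shrinkages to a green-part length."""
    length = green_length_mm
    for step in PROCESS_STEPS:
        length *= 1.0 - ASSUMED_SHRINKAGE.get(step, 0.0)
    return length

if __name__ == "__main__":
    green = 100.0  # mm, hypothetical green-part length
    print(f"green part: {green:.1f} mm -> fired part: {final_length(green):.1f} mm")
```

with these placeholder values a 100 mm green part comes out at about 87 mm after drying and firing, which is only meant to make the successive - shrinkage point concrete.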
wrought, which itself is the original past passive participle of the word work, now superseded by the weak verb forms worker and worked respectively. ) blacksmithing and the various related smithing and metal - crafts. folk music played on acoustic instruments. mathematics ( particularly, pure mathematics ) organic farming and animal husbandry ( i. e. ; agriculture as practiced by all american farmers prior to world war ii ). milling in the sense of operating hand - constructed equipment with the intent to either grind grain, or the reduction of timber to lumber as practiced in a saw - mill. fulling, felting, drop spindle spinning, hand knitting, crochet, & similar textile preparation. the production of charcoal by the collier, for use in home heating, foundry operations, smelting, the various smithing trades, and for brushing ones teeth as in colonial america. glass - blowing. various subskills of food preservation : smoking salting pickling drying note : home canning is a counter example of a low technology since some of the supplies needed to pursue this skill rely on a global trade network and an existing manufacturing infrastructure. the production of various alcoholic beverages : wine : poorly preserved fruit juice. beer : a way to preserve the calories of grain products from decay. whiskey : an improved ( distilled ) form of beer. flint - knapping masonry as used in castles, cathedrals, and root cellars. = = = domestic or consumer = = = ( non exhaustive ) list of low - tech in a westerner ' s everyday life : getting around by bike, and repairing it with second - hand materials using a cargo bike to carry loads ( rather than a gasoline vehicle ) drying clothes on a clothesline or on a drying rack washing clothes by hand, or in a human - powered washing machine cooling one ' s home with a fan or an air expander ( rather than electrical appliances such as air conditioners ) using a bell as door bell a cellar, " desert fridge ", or icebox ( rather than a fridge or freezer ) long - distance travel by sailing boat ( rather than by plane ) a wicker bag or a tote bag ( rather than a plastic bag ) to carry things swedish lighter ( rather than disposable lighter or matches ) a hand drill, instead of an electric one lighting with sunlight or candles hemp textiles to water plants with drip irrigation paper sheets for note - taking to clean with a broom ( rather than a vacuum cleaner ) to find one ' s way with map
. throughout the history of agriculture, farmers have inadvertently altered the genetics of their crops through introducing them to new environments and breeding them with other plants β one of the first forms of biotechnology. these processes also were included in early fermentation of beer. these processes were introduced in early mesopotamia, egypt, china and india, and still use the same basic biological methods. in brewing, malted grains ( containing enzymes ) convert starch from grains into sugar and then adding specific yeasts to produce beer. in this process, carbohydrates in the grains broke down into alcohols, such as ethanol. later, other cultures produced the process of lactic acid fermentation, which produced other preserved foods, such as soy sauce. fermentation was also used in this time period to produce leavened bread. although the process of fermentation was not fully understood until louis pasteur ' s work in 1857, it is still the first use of biotechnology to convert a food source into another form. before the time of charles darwin ' s work and life, animal and plant scientists had already used selective breeding. darwin added to that body of work with his scientific observations about the ability of science to change species. these accounts contributed to darwin ' s theory of natural selection. for thousands of years, humans have used selective breeding to improve the production of crops and livestock to use them for food. in selective breeding, organisms with desirable characteristics are mated to produce offspring with the same characteristics. for example, this technique was used with corn to produce the largest and sweetest crops. in the early twentieth century scientists gained a greater understanding of microbiology and explored ways of manufacturing specific products. in 1917, chaim weizmann first used a pure microbiological culture in an industrial process, that of manufacturing corn starch using clostridium acetobutylicum, to produce acetone, which the united kingdom desperately needed to manufacture explosives during world war i. biotechnology has also led to the development of antibiotics. in 1928, alexander fleming discovered the mold penicillium. his work led to the purification of the antibiotic formed by the mold by howard florey, ernst boris chain and norman heatley β to form what we today know as penicillin. in 1940, penicillin became available for medicinal use to treat bacterial infections in humans. the field of modern biotechnology is generally thought of as having been born in 1971 when paul berg ' s ( stanford ) experiments in gene splicing had early success. herbert w. boyer
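the conversion of grain sugars into ethanol described above can be summarised by the overall equation for alcoholic fermentation by yeast. this balanced equation is a standard textbook fact added for clarity, not something quoted from the passage:

\[ \mathrm{C_6H_{12}O_6} \longrightarrow 2\,\mathrm{C_2H_5OH} + 2\,\mathrm{CO_2} \]

each molecule of glucose is split into two molecules of ethanol and two of carbon dioxide; the released carbon dioxide is also what makes leavened bread rise.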
higher education and advanced scientific research lead to the social, economic, and political development of any country. developed societies, such as the current 2022 g7 countries ( canada, france, germany, italy, japan, the uk, and the us ), have invested heavily not only in higher education but also in advanced scientific research. similarly, for african countries to develop socially, economically, and politically, they must follow suit by investing substantially in higher education and local scientific research.
water, and used in the gristmilling and sugarcane industries. sugar mills first appeared in the medieval islamic world. they were first driven by watermills, and then windmills from the 9th and 10th centuries in what are today afghanistan, pakistan and iran. crops such as almonds and citrus fruit were brought to europe through al - andalus, and sugar cultivation was gradually adopted across europe. arab merchants dominated trade in the indian ocean until the arrival of the portuguese in the 16th century. the muslim world adopted papermaking from china. the earliest paper mills appeared in abbasid - era baghdad during 794 β 795. the knowledge of gunpowder was also transmitted from china via predominantly islamic countries, where formulas for pure potassium nitrate were developed. the spinning wheel was invented in the islamic world by the early 11th century. it was later widely adopted in europe, where it was adapted into the spinning jenny, a key device during the industrial revolution. the crankshaft was invented by al - jazari in 1206, and is central to modern machinery such as the steam engine, internal combustion engine and automatic controls. the camshaft was also first described by al - jazari in 1206. early programmable machines were also invented in the muslim world. the first music sequencer, a programmable musical instrument, was an automated flute player invented by the banu musa brothers, described in their book of ingenious devices, in the 9th century. in 1206, al - jazari invented programmable automata / robots. he described four automaton musicians, including two drummers operated by a programmable drum machine, where the drummer could be made to play different rhythms and different drum patterns. the castle clock, a hydropowered mechanical astronomical clock invented by al - jazari, was an early programmable analog computer. in the ottoman empire, a practical impulse steam turbine was invented in 1551 by taqi ad - din muhammad ibn ma ' ruf in ottoman egypt. he described a method for rotating a spit by means of a jet of steam playing on rotary vanes around the periphery of a wheel. known as a steam jack, a similar device for rotating a spit was also later described by john wilkins in 1648. = = = = medieval europe = = = = while medieval technology has been long depicted as a step backward in the evolution of western technology, a generation of medievalists ( like the american historian of science lynn white ) stressed from the 1940s onwards the innovative character of many medieval techniques. genuine medieval contributions include
in the present - day universe, it appears that most, and perhaps all, massive stars are born in star clusters. it also appears that all star clusters contain stars drawn from an approximately universal initial mass function, so that almost all rich young star clusters contain massive stars. in this review i discuss the physical processes associated with both massive star formation and with star cluster formation. first i summarize the observed properties of star - forming gas clumps, then address the following questions. how do these clumps emerge from giant molecular clouds? in these clustered environments, how do individual stars form and gain mass? can a forming star cluster be treated as an equilibrium system or is this process too rapid for equilibrium to be established? how does feedback affect the formation process?
learning to use math in physics involves combining ( blending ) our everyday experiences and the conceptual ideas of physics with symbolic mathematical representations. graphs are one of the best ways to learn to build the blend. they are a mathematical representation that builds on visual recognition to create a bridge between words and equations. but students in introductory physics classes often see a graph as an endpoint, a task the teacher asks them to complete, rather than as a tool to help them make sense of a physical system. and most of the graph problems in traditional introductory physics texts simply ask students to extract a number from a graph. but if graphs are used appropriately, they can be a powerful tool in helping students learn to build the blend and develop their physical intuition and ability to think with math.
, only competed the national cheerleaders & dance association ( nca & nda ) college nationals along with buzz and the goldrush dance team competing here as well. however, in the 2022 season, goldrush competed at the universal cheerleaders & dance association ( uca & uda ) college nationals for the first time and in 2023 the cheer team will compete here for the first time as well. the institute mascots are buzz and the ramblin ' wreck. the institute ' s traditional football rival is the university of georgia ; the rivalry is considered one of the fiercest in college football. the rivalry is commonly referred to as clean, old - fashioned hate, which is also the title of a book about the subject. there is also a long - standing rivalry with clemson. tech has eighteen varsity sports : football, women ' s and men ' s basketball, baseball, softball, volleyball, golf, men ' s and women ' s tennis, men ' s and women ' s swimming and diving, men ' s and women ' s track and field, men ' s and women ' s cross country, and coed cheerleading. four georgia tech football teams were selected as national champions in news polls : 1917, 1928, 1952, and 1990. in may 2007, the women ' s tennis team won the ncaa national championship with a 4 β 2 victory over ucla, the first ever national title granted by the ncaa to tech. = = = fight songs = = = tech ' s fight song " i ' m a ramblin ' wreck from georgia tech " is known worldwide. first published in the 1908 blue print, it was adapted from an old drinking song ( " son of a gambolier " ) and embellished with trumpet flourishes by frank roman. then - vice president richard nixon and soviet premier nikita khrushchev sang the song together when they met in moscow in 1958 to reduce the tension between them. as the story goes, nixon did not know any russian songs, but khrushchev knew that one american song as it had been sung on the ed sullivan show. " i ' m a ramblin ' wreck " has had many other notable moments in its history. it is reportedly the first school song to have been played in space. gregory peck sang the song while strumming a ukulele in the movie the man in the gray flannel suit. john wayne whistled it in the high and the mighty. tim holt ' s character sings a few bars of it in
asymptotic giant branch ( agb ) winds from evolved stars not only provide a non - trivial amount of mass and energy return, but also produce dust grains in massive elliptical galaxies. due to the fast stellar velocity and the high ambient temperature, the wind is thought to form a comet - like tail, similar to mira in the local bubble. many massive elliptical galaxies and cluster central galaxies host extended dusty cold filaments. the fate of the cold dusty stellar wind and its relation to cold filaments are not well understood. in this work, we carry out both analytical and numerical studies of the interaction between an agb wind and the surrounding hot gas. we find that the cooling time of the tail is inversely proportional to the ambient pressure. in the absence of cooling, or in low pressure environments ( e. g., the outskirts of elliptical galaxies ), agb winds are quickly mixed into the hot gas, and all the agb winds have similar appearance and head - to - tail ratio. in high pressure environments, such as the local bubble and the central regions of massive elliptical galaxies, some of the gas in the mixing layer between the stellar wind and the surrounding hot gas can cool efficiently and cause the tail to become longer. our simulated tail of mira itself has similar length and velocity to that observed, and appears similar to the simulated agb tail in the central regions of massive galaxies. we speculate that instead of thermal instability, the induced condensation at the mixing layer of agb winds may be the origin of cold filaments in massive galaxies and galaxy clusters. this naturally explains the existence of dust and pah in the filaments.
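the pressure scaling reported above can be restated symbolically; the symbols below are chosen here for illustration and simply rewrite the stated proportionality:

\[ t_{\mathrm{cool}} \propto \frac{1}{P_{\mathrm{ambient}}} \]

so in high pressure surroundings such as the local bubble or the centres of massive ellipticals the mixed gas cools quickly and the tail lengthens, while in low pressure outskirts the wind mixes into the hot gas before it can cool.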
Question: A student adds sugar, spices, and salt to a bowl of peanuts and stirs them together. What has the student made?
A) a compound
B) a substance
C) a mixture
D) a solution
|
C) a mixture
|
Context:
tectonics, mountain ranges, volcanoes, and earthquakes are geological phenomena that can be explained in terms of physical and chemical processes in the earth ' s crust. beneath the earth ' s crust lies the mantle which is heated by the radioactive decay of heavy elements. the mantle is not quite solid and consists of magma which is in a state of semi - perpetual convection. this convection process causes the lithospheric plates to move, albeit slowly. the resulting process is known as plate tectonics. areas of the crust where new crust is created are called divergent boundaries, those where it is brought back into the earth are convergent boundaries and those where plates slide past each other, but no new lithospheric material is created or destroyed, are referred to as transform ( or conservative ) boundaries. earthquakes result from the movement of the lithospheric plates, and they often occur near convergent boundaries where parts of the crust are forced into the earth as part of subduction. plate tectonics might be thought of as the process by which the earth is resurfaced. as the result of seafloor spreading, new crust and lithosphere is created by the flow of magma from the mantle to the near surface, through fissures, where it cools and solidifies. through subduction, oceanic crust and lithosphere vehemently returns to the convecting mantle. volcanoes result primarily from the melting of subducted crust material. crust material that is forced into the asthenosphere melts, and some portion of the melted material becomes light enough to rise to the surface – giving birth to volcanoes. = = atmospheric science = = atmospheric science initially developed in the late - 19th century as a means to forecast the weather through meteorology, the study of weather. atmospheric chemistry was developed in the 20th century to measure air pollution and expanded in the 1970s in response to acid rain. climatology studies the climate and climate change. the troposphere, stratosphere, mesosphere, thermosphere, and exosphere are the five layers which make up earth ' s atmosphere. 75 % of the mass in the atmosphere is located within the troposphere, the lowest layer. in all, the atmosphere is made up of about 78. 0 % nitrogen, 20. 9 % oxygen, and 0. 92 % argon, and small amounts of other gases including co2 and water vapor. water vapor and co2 cause the earth ' s atmosphere to catch and hold the sun ' s
, crystal structure, hazards associated with minerals, and the physical and chemical properties of minerals. petrology is the study of rocks, including the formation and composition of rocks. petrography is a branch of petrology that studies the typology and classification of rocks. = = earth ' s interior = = plate tectonics, mountain ranges, volcanoes, and earthquakes are geological phenomena that can be explained in terms of physical and chemical processes in the earth ' s crust. beneath the earth ' s crust lies the mantle which is heated by the radioactive decay of heavy elements. the mantle is not quite solid and consists of magma which is in a state of semi - perpetual convection. this convection process causes the lithospheric plates to move, albeit slowly. the resulting process is known as plate tectonics. areas of the crust where new crust is created are called divergent boundaries, those where it is brought back into the earth are convergent boundaries and those where plates slide past each other, but no new lithospheric material is created or destroyed, are referred to as transform ( or conservative ) boundaries. earthquakes result from the movement of the lithospheric plates, and they often occur near convergent boundaries where parts of the crust are forced into the earth as part of subduction. plate tectonics might be thought of as the process by which the earth is resurfaced. as the result of seafloor spreading, new crust and lithosphere is created by the flow of magma from the mantle to the near surface, through fissures, where it cools and solidifies. through subduction, oceanic crust and lithosphere vehemently returns to the convecting mantle. volcanoes result primarily from the melting of subducted crust material. crust material that is forced into the asthenosphere melts, and some portion of the melted material becomes light enough to rise to the surface β giving birth to volcanoes. = = atmospheric science = = atmospheric science initially developed in the late - 19th century as a means to forecast the weather through meteorology, the study of weather. atmospheric chemistry was developed in the 20th century to measure air pollution and expanded in the 1970s in response to acid rain. climatology studies the climate and climate change. the troposphere, stratosphere, mesosphere, thermosphere, and exosphere are the five layers which make up earth ' s atmosphere. 75 % of the mass in the atmosphere is located within the troposphere, the lowest
geomorphology studies the origin of landscapes. structural geology studies the deformation of rocks to produce mountains and lowlands. resource geology studies how energy resources can be obtained from minerals. environmental geology studies how pollution and contaminants affect soil and rock. mineralogy is the study of minerals and includes the study of mineral formation, crystal structure, hazards associated with minerals, and the physical and chemical properties of minerals. petrology is the study of rocks, including the formation and composition of rocks. petrography is a branch of petrology that studies the typology and classification of rocks. = = earth ' s interior = = plate tectonics, mountain ranges, volcanoes, and earthquakes are geological phenomena that can be explained in terms of physical and chemical processes in the earth ' s crust. beneath the earth ' s crust lies the mantle which is heated by the radioactive decay of heavy elements. the mantle is not quite solid and consists of magma which is in a state of semi - perpetual convection. this convection process causes the lithospheric plates to move, albeit slowly. the resulting process is known as plate tectonics. areas of the crust where new crust is created are called divergent boundaries, those where it is brought back into the earth are convergent boundaries and those where plates slide past each other, but no new lithospheric material is created or destroyed, are referred to as transform ( or conservative ) boundaries. earthquakes result from the movement of the lithospheric plates, and they often occur near convergent boundaries where parts of the crust are forced into the earth as part of subduction. plate tectonics might be thought of as the process by which the earth is resurfaced. as the result of seafloor spreading, new crust and lithosphere is created by the flow of magma from the mantle to the near surface, through fissures, where it cools and solidifies. through subduction, oceanic crust and lithosphere vehemently returns to the convecting mantle. volcanoes result primarily from the melting of subducted crust material. crust material that is forced into the asthenosphere melts, and some portion of the melted material becomes light enough to rise to the surface – giving birth to volcanoes. = = atmospheric science = = atmospheric science initially developed in the late - 19th century as a means to forecast the weather through meteorology, the study of weather. atmospheric chemistry was developed in the 20th century to measure air pollution and expanded in the 1970s in response to
a state of semi - perpetual convection. this convection process causes the lithospheric plates to move, albeit slowly. the resulting process is known as plate tectonics. areas of the crust where new crust is created are called divergent boundaries, those where it is brought back into the earth are convergent boundaries and those where plates slide past each other, but no new lithospheric material is created or destroyed, are referred to as transform ( or conservative ) boundaries. earthquakes result from the movement of the lithospheric plates, and they often occur near convergent boundaries where parts of the crust are forced into the earth as part of subduction. plate tectonics might be thought of as the process by which the earth is resurfaced. as the result of seafloor spreading, new crust and lithosphere is created by the flow of magma from the mantle to the near surface, through fissures, where it cools and solidifies. through subduction, oceanic crust and lithosphere vehemently returns to the convecting mantle. volcanoes result primarily from the melting of subducted crust material. crust material that is forced into the asthenosphere melts, and some portion of the melted material becomes light enough to rise to the surface β giving birth to volcanoes. = = atmospheric science = = atmospheric science initially developed in the late - 19th century as a means to forecast the weather through meteorology, the study of weather. atmospheric chemistry was developed in the 20th century to measure air pollution and expanded in the 1970s in response to acid rain. climatology studies the climate and climate change. the troposphere, stratosphere, mesosphere, thermosphere, and exosphere are the five layers which make up earth ' s atmosphere. 75 % of the mass in the atmosphere is located within the troposphere, the lowest layer. in all, the atmosphere is made up of about 78. 0 % nitrogen, 20. 9 % oxygen, and 0. 92 % argon, and small amounts of other gases including co2 and water vapor. water vapor and co2 cause the earth ' s atmosphere to catch and hold the sun ' s energy through the greenhouse effect. this makes earth ' s surface warm enough for liquid water and life. in addition to trapping heat, the atmosphere also protects living organisms by shielding the earth ' s surface from cosmic rays. the magnetic field β created by the internal motions of the core β produces the magnetosphere which protects earth '
lithosphere ) and its historic development. major subdisciplines are mineralogy and petrology, geomorphology, paleontology, stratigraphy, structural geology, engineering geology, and sedimentology. physical geography focuses on geography as an earth science. physical geography is the study of earth ' s seasons, climate, atmosphere, soil, streams, landforms, and oceans. physical geography can be divided into several branches or related fields, as follows : geomorphology, biogeography, environmental geography, palaeogeography, climatology, meteorology, coastal geography, hydrology, ecology, glaciology. geophysics and geodesy investigate the shape of the earth, its reaction to forces and its magnetic and gravity fields. geophysicists explore the earth ' s core and mantle as well as the tectonic and seismic activity of the lithosphere. geophysics is commonly used to supplement the work of geologists in developing a comprehensive understanding of crustal geology, particularly in mineral and petroleum exploration. seismologists use geophysics to understand plate tectonic movement, as well as predict seismic activity. geochemistry studies the processes that control the abundance, composition, and distribution of chemical compounds and isotopes in geologic environments. geochemists use the tools and principles of chemistry to study the earth ' s composition, structure, processes, and other physical aspects. major subdisciplines are aqueous geochemistry, cosmochemistry, isotope geochemistry and biogeochemistry. soil science covers the outermost layer of the earth ' s crust that is subject to soil formation processes ( or pedosphere ). major subdivisions in this field of study include edaphology and pedology. ecology covers the interactions between organisms and their environment. this field of study differentiates the study of earth from other planets in the solar system, earth being the only planet teeming with life. hydrology, oceanography and limnology are studies which focus on the movement, distribution, and quality of the water and involve all the components of the hydrologic cycle on the earth and its atmosphere ( or hydrosphere ). " sub - disciplines of hydrology include hydrometeorology, surface water hydrology, hydrogeology, watershed science, forest hydrology, and water chemistry. " glaciology covers the icy parts of the earth ( or cryosphere ). atmospheric sciences cover the gaseous parts of the earth ( or atmosphere
have evolved from the earliest emergence of life to present day. earth formed about 4. 5 billion years ago and all life on earth, both living and extinct, descended from a last universal common ancestor that lived about 3. 5 billion years ago. geologists have developed a geologic time scale that divides the history of the earth into major divisions, starting with four eons ( hadean, archean, proterozoic, and phanerozoic ), the first three of which are collectively known as the precambrian, which lasted approximately 4 billion years. each eon can be divided into eras, with the phanerozoic eon that began 539 million years ago being subdivided into paleozoic, mesozoic, and cenozoic eras. these three eras together comprise eleven periods ( cambrian, ordovician, silurian, devonian, carboniferous, permian, triassic, jurassic, cretaceous, tertiary, and quaternary ). the similarities among all known present - day species indicate that they have diverged through the process of evolution from their common ancestor. biologists regard the ubiquity of the genetic code as evidence of universal common descent for all bacteria, archaea, and eukaryotes. microbial mats of coexisting bacteria and archaea were the dominant form of life in the early archean eon and many of the major steps in early evolution are thought to have taken place in this environment. the earliest evidence of eukaryotes dates from 1. 85 billion years ago, and while they may have been present earlier, their diversification accelerated when they started using oxygen in their metabolism. later, around 1. 7 billion years ago, multicellular organisms began to appear, with differentiated cells performing specialised functions. algae - like multicellular land plants are dated back to about 1 billion years ago, although evidence suggests that microorganisms formed the earliest terrestrial ecosystems, at least 2. 7 billion years ago. microorganisms are thought to have paved the way for the inception of land plants in the ordovician period. land plants were so successful that they are thought to have contributed to the late devonian extinction event. ediacara biota appear during the ediacaran period, while vertebrates, along with most other modern phyla originated about 525 million years ago during the cambrian explosion. during the permian period, synapsids, including the ancestors of mammals, dominated the land, but most of this group became
s seasons, climate, atmosphere, soil, streams, landforms, and oceans. physical geography can be divided into several branches or related fields, as follows : geomorphology, biogeography, environmental geography, palaeogeography, climatology, meteorology, coastal geography, hydrology, ecology, glaciology. geophysics and geodesy investigate the shape of the earth, its reaction to forces and its magnetic and gravity fields. geophysicists explore the earth ' s core and mantle as well as the tectonic and seismic activity of the lithosphere. geophysics is commonly used to supplement the work of geologists in developing a comprehensive understanding of crustal geology, particularly in mineral and petroleum exploration. seismologists use geophysics to understand plate tectonic movement, as well as predict seismic activity. geochemistry studies the processes that control the abundance, composition, and distribution of chemical compounds and isotopes in geologic environments. geochemists use the tools and principles of chemistry to study the earth ' s composition, structure, processes, and other physical aspects. major subdisciplines are aqueous geochemistry, cosmochemistry, isotope geochemistry and biogeochemistry. soil science covers the outermost layer of the earth ' s crust that is subject to soil formation processes ( or pedosphere ). major subdivisions in this field of study include edaphology and pedology. ecology covers the interactions between organisms and their environment. this field of study differentiates the study of earth from other planets in the solar system, earth being the only planet teeming with life. hydrology, oceanography and limnology are studies which focus on the movement, distribution, and quality of the water and involve all the components of the hydrologic cycle on the earth and its atmosphere ( or hydrosphere ). " sub - disciplines of hydrology include hydrometeorology, surface water hydrology, hydrogeology, watershed science, forest hydrology, and water chemistry. " glaciology covers the icy parts of the earth ( or cryosphere ). atmospheric sciences cover the gaseous parts of the earth ( or atmosphere ) between the surface and the exosphere ( about 1000 km ). major subdisciplines include meteorology, climatology, atmospheric chemistry, and atmospheric physics. = = = earth science breakup = = = = = see also = = = = references = = = = = sources = = = = =
, glaciology. geophysics and geodesy investigate the shape of the earth, its reaction to forces and its magnetic and gravity fields. geophysicists explore the earth ' s core and mantle as well as the tectonic and seismic activity of the lithosphere. geophysics is commonly used to supplement the work of geologists in developing a comprehensive understanding of crustal geology, particularly in mineral and petroleum exploration. seismologists use geophysics to understand plate tectonic movement, as well as predict seismic activity. geochemistry studies the processes that control the abundance, composition, and distribution of chemical compounds and isotopes in geologic environments. geochemists use the tools and principles of chemistry to study the earth ' s composition, structure, processes, and other physical aspects. major subdisciplines are aqueous geochemistry, cosmochemistry, isotope geochemistry and biogeochemistry. soil science covers the outermost layer of the earth ' s crust that is subject to soil formation processes ( or pedosphere ). major subdivisions in this field of study include edaphology and pedology. ecology covers the interactions between organisms and their environment. this field of study differentiates the study of earth from other planets in the solar system, earth being the only planet teeming with life. hydrology, oceanography and limnology are studies which focus on the movement, distribution, and quality of the water and involve all the components of the hydrologic cycle on the earth and its atmosphere ( or hydrosphere ). " sub - disciplines of hydrology include hydrometeorology, surface water hydrology, hydrogeology, watershed science, forest hydrology, and water chemistry. " glaciology covers the icy parts of the earth ( or cryosphere ). atmospheric sciences cover the gaseous parts of the earth ( or atmosphere ) between the surface and the exosphere ( about 1000 km ). major subdisciplines include meteorology, climatology, atmospheric chemistry, and atmospheric physics. = = = earth science breakup = = = = = see also = = = = references = = = = = sources = = = = = further reading = = = = external links = = earth science picture of the day, a service of universities space research association, sponsored by nasa goddard space flight center. geoethics in planetary and space exploration. geology buzz : earth science archived 2021 - 11 - 04 at the wayback machine
##sphere ( or lithosphere ). earth science can be considered to be a branch of planetary science but with a much older history. = = geology = = geology is broadly the study of earth ' s structure, substance, and processes. geology is largely the study of the lithosphere, or earth ' s surface, including the crust and rocks. it includes the physical characteristics and processes that occur in the lithosphere as well as how they are affected by geothermal energy. it incorporates aspects of chemistry, physics, and biology as elements of geology interact. historical geology is the application of geology to interpret earth history and how it has changed over time. geochemistry studies the chemical components and processes of the earth. geophysics studies the physical properties of the earth. paleontology studies fossilized biological material in the lithosphere. planetary geology studies geoscience as it pertains to extraterrestrial bodies. geomorphology studies the origin of landscapes. structural geology studies the deformation of rocks to produce mountains and lowlands. resource geology studies how energy resources can be obtained from minerals. environmental geology studies how pollution and contaminants affect soil and rock. mineralogy is the study of minerals and includes the study of mineral formation, crystal structure, hazards associated with minerals, and the physical and chemical properties of minerals. petrology is the study of rocks, including the formation and composition of rocks. petrography is a branch of petrology that studies the typology and classification of rocks. = = earth ' s interior = = plate tectonics, mountain ranges, volcanoes, and earthquakes are geological phenomena that can be explained in terms of physical and chemical processes in the earth ' s crust. beneath the earth ' s crust lies the mantle which is heated by the radioactive decay of heavy elements. the mantle is not quite solid and consists of magma which is in a state of semi - perpetual convection. this convection process causes the lithospheric plates to move, albeit slowly. the resulting process is known as plate tectonics. areas of the crust where new crust is created are called divergent boundaries, those where it is brought back into the earth are convergent boundaries and those where plates slide past each other, but no new lithospheric material is created or destroyed, are referred to as transform ( or conservative ) boundaries. earthquakes result from the movement of the lithospheric plates, and they often occur near convergent boundaries where parts of the crust are forced into the earth as
how it has changed over time. geochemistry studies the chemical components and processes of the earth. geophysics studies the physical properties of the earth. paleontology studies fossilized biological material in the lithosphere. planetary geology studies geoscience as it pertains to extraterrestrial bodies. geomorphology studies the origin of landscapes. structural geology studies the deformation of rocks to produce mountains and lowlands. resource geology studies how energy resources can be obtained from minerals. environmental geology studies how pollution and contaminants affect soil and rock. mineralogy is the study of minerals and includes the study of mineral formation, crystal structure, hazards associated with minerals, and the physical and chemical properties of minerals. petrology is the study of rocks, including the formation and composition of rocks. petrography is a branch of petrology that studies the typology and classification of rocks. = = earth ' s interior = = plate tectonics, mountain ranges, volcanoes, and earthquakes are geological phenomena that can be explained in terms of physical and chemical processes in the earth ' s crust. beneath the earth ' s crust lies the mantle which is heated by the radioactive decay of heavy elements. the mantle is not quite solid and consists of magma which is in a state of semi - perpetual convection. this convection process causes the lithospheric plates to move, albeit slowly. the resulting process is known as plate tectonics. areas of the crust where new crust is created are called divergent boundaries, those where it is brought back into the earth are convergent boundaries and those where plates slide past each other, but no new lithospheric material is created or destroyed, are referred to as transform ( or conservative ) boundaries. earthquakes result from the movement of the lithospheric plates, and they often occur near convergent boundaries where parts of the crust are forced into the earth as part of subduction. plate tectonics might be thought of as the process by which the earth is resurfaced. as the result of seafloor spreading, new crust and lithosphere is created by the flow of magma from the mantle to the near surface, through fissures, where it cools and solidifies. through subduction, oceanic crust and lithosphere vehemently returns to the convecting mantle. volcanoes result primarily from the melting of subducted crust material. crust material that is forced into the asthenosphere melts, and some portion of the melted material becomes light
Question: Over time, non-volcanic mountains can form due to the interaction of plate boundaries. Which interaction is most likely associated with the formation of non-volcanic mountains?
A) oceanic plates colliding with oceanic plates
B) oceanic plates separating from oceanic plates
C) continental plates colliding with continental plates
D) continental plates separating from continental plates
|
C) continental plates colliding with continental plates
|
Context:
there are a few different mechanisms that can cause white dwarf stars to vary in brightness, providing opportunities to probe the physics, structures, and formation of these compact stellar remnants. the observational characteristics of the three most common types of white dwarf variability are summarized : stellar pulsations, rotation, and ellipsoidal variations from tidal distortion in binary systems. stellar pulsations are emphasized as the most complex type of variability, which also has the greatest potential to reveal the conditions of white dwarf interiors.
two planetary nebulae are shown to belong to the sagittarius dwarf galaxy, on the basis of their radial velocities. this is only the second dwarf spheroidal galaxy, after fornax, found to contain planetary nebulae. their existence confirms that this galaxy is at least as massive as the fornax dwarf spheroidal which has a single planetary nebula, and suggests a mass of a few times 10 * * 7 solar masses. the two planetary nebulae are located along the major axis of the galaxy, near the base of the tidal tail. there is a further candidate, situated at a very large distance along the direction of the tidal tail, for which no velocity measurement is available. the location of the planetary nebulae and globular clusters of the sagittarius dwarf galaxy suggests that a significant fraction of its mass is contained within the tidal tail.
the infrared excess around the white dwarf g29 - 38 can be explained by emission from an opaque flat ring of dust with an inner radius 0. 14 of the radius of the sun and an outer radius approximately equal to the sun ' s. this ring lies within the roche region of the white dwarf where an asteroid could have been tidally destroyed, producing a system reminiscent of saturn ' s rings. accretion onto the white dwarf from this circumstellar dust can explain the observed calcium abundance in the atmosphere of g29 - 38. either as a bombardment by a series of asteroids or because of one large disruption, the total amount of matter accreted onto the white dwarf may have been comparable to the total mass of asteroids in the solar system, or, equivalently, about 1 % of the mass in the asteroid belt around the main sequence star zeta lep.
while the modern stellar imf shows a rapid decline with increasing mass, theoretical investigations suggest that very massive stars ( > 100 solar masses ) may have been abundant in the early universe. other calculations also indicate that, lacking metals, these same stars reach their late evolutionary stages without appreciable mass loss. after central helium burning, they encounter the electron - positron pair instability, collapse, and burn oxygen and silicon explosively. if sufficient energy is released by the burning, these stars explode as brilliant supernovae with energies up to 100 times that of an ordinary core collapse supernova. they also eject up to 50 solar masses of radioactive ni56. stars less massive than 140 solar masses or more massive than 260 solar masses should collapse into black holes instead of exploding, thus bounding the pair - creation supernovae with regions of stellar mass that are nucleosynthetically sterile. pair - instability supernovae might be detectable in the near infrared out to redshifts of 20 or more and their ashes should leave a distinctive nucleosynthetic pattern.
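to make the mass windows quoted above concrete, the sketch below encodes them as a simple classification for a metal - free very massive star. the boundary values ( roughly 100, 140 and 260 solar masses ) are taken directly from the abstract, while the function name and return strings are illustrative only.

```python
def very_massive_star_fate(initial_mass_msun: float) -> str:
    """schematic fate of a metal-free very massive star, using the mass
    windows quoted in the abstract above (masses in solar masses)."""
    if initial_mass_msun < 100:
        return "below the very-massive-star regime discussed here"
    if initial_mass_msun < 140:
        return "collapses to a black hole (below the pair-instability window)"
    if initial_mass_msun <= 260:
        return "explodes as a pair-instability supernova"
    return "collapses to a black hole (above the pair-instability window)"

if __name__ == "__main__":
    for mass in (120, 200, 300):
        print(mass, "->", very_massive_star_fate(mass))
```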
two types of stars are known to have strong, large scale magnetic fields : the main sequence ap stars and the magnetic white dwarfs. this suggest that the former might be the progenitors of the latter. in order to test this idea, i have carried out a search for large scale magnetic fields in stars with evolutionary states which are intermediate, i. e. in horizontal branch stars and in hot subdwarfs.
occur outside of the milky way galaxy. the chandra x - ray observatory was launched from the columbia on sts - 93 in 1999, observing black holes, quasars, supernova, and dark matter. it provided critical observations on the sagittarius a * black hole at the center of the milky way galaxy and the separation of dark and regular matter during galactic collisions. finally, the spitzer space telescope is an infrared telescope launched in 2003 from a delta ii rocket. it is in a trailing orbit around the sun, following the earth and discovered the existence of brown dwarf stars. other telescopes, such as the cosmic background explorer and the wilkinson microwave anisotropy probe, provided evidence to support the big bang. the james webb space telescope, named after the nasa administrator who lead the apollo program, is an infrared observatory launched in 2021. the james webb space telescope is a direct successor to the hubble space telescope, intended to observe the formation of the first galaxies. other space telescopes include the kepler space telescope, launched in 2009 to identify planets orbiting extrasolar stars that may be terran and possibly harbor life. the first exoplanet that the kepler space telescope confirmed was kepler - 22b, orbiting within the habitable zone of its star. nasa also launched a number of different satellites to study earth, such as television infrared observation satellite ( tiros ) in 1960, which was the first weather satellite. nasa and the united states weather bureau cooperated on future tiros and the second generation nimbus program of weather satellites. it also worked with the environmental science services administration on a series of weather satellites and the agency launched its experimental applications technology satellites into geosynchronous orbit. nasa ' s first dedicated earth observation satellite, landsat, was launched in 1972. this led to nasa and the national oceanic and atmospheric administration jointly developing the geostationary operational environmental satellite and discovering ozone depletion. = = = space shuttle = = = nasa had been pursuing spaceplane development since the 1960s, blending the administration ' s dual aeronautics and space missions. nasa viewed a spaceplane as part of a larger program, providing routine and economical logistical support to a space station in earth orbit that would be used as a hub for lunar and mars missions. a reusable launch vehicle would then have ended the need for expensive and expendable boosters like the saturn v. in 1969, nasa designated the johnson space center as the lead center for the design, development, and manufacturing of the space shuttle orbiter, while the marshall space flight center
planetary nebulae retain the signature of the nucleosynthesis and mixing events that occurred during the previous agb phase. observational signatures complement observations of agb and post - agb stars and their binary companions. the abundances of the elements heavier than iron such as kr and xe in planetary nebulae can be used to complement abundances of sr / y / zr and ba / la / ce in agb stars, respectively, to determine the operation of the slow neutron - capture process ( the s process ) in agb stars. additionally, observations of the rb abundance in type i planetary nebulae may allow us to infer the initial mass of the central star. several noble gas components present in meteoritic stardust silicon carbide ( sic ) grains are associated with implantation into the dust grains in the high - energy environment connected to the fast winds from the central stars during the planetary nebulae phase.
the origin of the arc - shaped stellar complexes in the lmc4 region is still unknown. these perfect arcs could not have been formed by o - stars and sne in their centers ; the strong arguments exist also against the possibility of their formation from infalling gas clouds. the origin from microquasars / grb jets is not excluded, because there is the strong concentration of x - ray binaries in the same region and the massive old cluster ngc 1978, probable site of formation of binaries with compact components, is there also. the last possibility is that the source of energy for formation of the stellar arcs and the lmc4 supershell might be the the giant jet from the nucleus of the milky way, which might be active a dozen myr ago.
we bring you, as usual, the sun and moon and stars, plus some galaxies and a new section on astrobiology. some highlights are short ( the newly identified class of gamma - ray bursts, and the deep impact on comet 9p / tempel 1 ), some long ( the age of the universe, which will be found to have the earth at its center ), and a few metonymic, for instance the term " down - sizing " to describe the evolution of star formation rates with redshift.
v735 sgr was known as an enigmatic star with rapid brightness variations. long - term ogle photometry, brightness measurements in infrared bands, and recently obtained moderate resolution spectrum from the 6. 5 - m magellan telescope show that this star is an active young stellar object of herbig ae / be type.
Question: Most stars in the Milky Way are like the Sun. The Sun will eventually become a red giant. After the red giant stage, what determines whether a star will become a white dwarf or a supernova?
A) the mass of the star
B) the diameter of the star
C) the brightness of the star
D) the type of gas in the star
|
A) the mass of the star
|
Context:
interventions lacked sufficient evidence to support either benefit or harm. in modern clinical practice, physicians and physician assistants personally assess patients to diagnose, prognose, treat, and prevent disease using clinical judgment. the doctor - patient relationship typically begins with an interaction with an examination of the patient ' s medical history and medical record, followed by a medical interview and a physical examination. basic diagnostic medical devices ( e. g., stethoscope, tongue depressor ) are typically used. after examining for signs and interviewing for symptoms, the doctor may order medical tests ( e. g., blood tests ), take a biopsy, or prescribe pharmaceutical drugs or other therapies. differential diagnosis methods help to rule out conditions based on the information provided. during the encounter, properly informing the patient of all relevant facts is an important part of the relationship and the development of trust. the medical encounter is then documented in the medical record, which is a legal document in many jurisdictions. follow - ups may be shorter but follow the same general procedure, and specialists follow a similar process. the diagnosis and treatment may take only a few minutes or a few weeks, depending on the complexity of the issue. the components of the medical interview and encounter are : chief complaint ( cc ) : the reason for the current medical visit. these are the symptoms. they are in the patient ' s own words and are recorded along with the duration of each one. also called chief concern or presenting complaint. current activity : occupation, hobbies, what the patient actually does. family history ( fh ) : listing of diseases in the family that may impact the patient. a family tree is sometimes used. history of present illness ( hpi ) : the chronological order of events of symptoms and further clarification of each symptom. distinguishable from history of previous illness, often called past medical history ( pmh ). medical history comprises hpi and pmh. medications ( rx ) : what drugs the patient takes including prescribed, over - the - counter, and home remedies, as well as alternative and herbal medicines or remedies. allergies are also recorded. past medical history ( pmh / pmhx ) : concurrent medical problems, past hospitalizations and operations, injuries, past infectious diseases or vaccinations, history of known allergies. review of systems ( ros ) or systems inquiry : a set of additional questions to ask, which may be missed on hpi : a general enquiry ( have
, or prescribe pharmaceutical drugs or other therapies. differential diagnosis methods help to rule out conditions based on the information provided. during the encounter, properly informing the patient of all relevant facts is an important part of the relationship and the development of trust. the medical encounter is then documented in the medical record, which is a legal document in many jurisdictions. follow - ups may be shorter but follow the same general procedure, and specialists follow a similar process. the diagnosis and treatment may take only a few minutes or a few weeks, depending on the complexity of the issue. the components of the medical interview and encounter are : chief complaint ( cc ) : the reason for the current medical visit. these are the symptoms. they are in the patient ' s own words and are recorded along with the duration of each one. also called chief concern or presenting complaint. current activity : occupation, hobbies, what the patient actually does. family history ( fh ) : listing of diseases in the family that may impact the patient. a family tree is sometimes used. history of present illness ( hpi ) : the chronological order of events of symptoms and further clarification of each symptom. distinguishable from history of previous illness, often called past medical history ( pmh ). medical history comprises hpi and pmh. medications ( rx ) : what drugs the patient takes including prescribed, over - the - counter, and home remedies, as well as alternative and herbal medicines or remedies. allergies are also recorded. past medical history ( pmh / pmhx ) : concurrent medical problems, past hospitalizations and operations, injuries, past infectious diseases or vaccinations, history of known allergies. review of systems ( ros ) or systems inquiry : a set of additional questions to ask, which may be missed on hpi : a general enquiry ( have you noticed any weight loss, change in sleep quality, fevers, lumps and bumps? etc. ), followed by questions on the body ' s main organ systems ( heart, lungs, digestive tract, urinary tract, etc. ). social history ( sh ) : birthplace, residences, marital history, social and economic status, habits ( including diet, medications, tobacco, alcohol ). the physical examination is the examination of the patient for medical signs of disease that are objective and observable, in contrast to symptoms that are volunteered by the patient and are not necessarily objectively observable. the healthcare provider uses
, followed by a medical interview and a physical examination. basic diagnostic medical devices ( e. g., stethoscope, tongue depressor ) are typically used. after examining for signs and interviewing for symptoms, the doctor may order medical tests ( e. g., blood tests ), take a biopsy, or prescribe pharmaceutical drugs or other therapies. differential diagnosis methods help to rule out conditions based on the information provided. during the encounter, properly informing the patient of all relevant facts is an important part of the relationship and the development of trust. the medical encounter is then documented in the medical record, which is a legal document in many jurisdictions. follow - ups may be shorter but follow the same general procedure, and specialists follow a similar process. the diagnosis and treatment may take only a few minutes or a few weeks, depending on the complexity of the issue. the components of the medical interview and encounter are : chief complaint ( cc ) : the reason for the current medical visit. these are the symptoms. they are in the patient ' s own words and are recorded along with the duration of each one. also called chief concern or presenting complaint. current activity : occupation, hobbies, what the patient actually does. family history ( fh ) : listing of diseases in the family that may impact the patient. a family tree is sometimes used. history of present illness ( hpi ) : the chronological order of events of symptoms and further clarification of each symptom. distinguishable from history of previous illness, often called past medical history ( pmh ). medical history comprises hpi and pmh. medications ( rx ) : what drugs the patient takes including prescribed, over - the - counter, and home remedies, as well as alternative and herbal medicines or remedies. allergies are also recorded. past medical history ( pmh / pmhx ) : concurrent medical problems, past hospitalizations and operations, injuries, past infectious diseases or vaccinations, history of known allergies. review of systems ( ros ) or systems inquiry : a set of additional questions to ask, which may be missed on hpi : a general enquiry ( have you noticed any weight loss, change in sleep quality, fevers, lumps and bumps? etc. ), followed by questions on the body ' s main organ systems ( heart, lungs, digestive tract, urinary tract, etc. ). social history ( sh ) : birthplace, residences, marital history
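The interview components and examination findings listed above form a fairly regular structure, so a structured record is a natural way to capture them. Below is a minimal sketch, assuming invented field names and a plain Python dataclass rather than any real electronic-health-record schema; all example values are made up purely for illustration.

```python
# Minimal sketch (assumed field names, not a real EHR standard) of how the documented
# components of a medical encounter could be organized as a structured record.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class MedicalEncounter:
    chief_complaint: str                                              # CC: reason for the visit, in the patient's own words
    history_present_illness: str                                      # HPI: chronological account of current symptoms
    past_medical_history: List[str] = field(default_factory=list)     # PMH: prior problems, operations, known allergies
    medications: List[str] = field(default_factory=list)              # Rx: prescribed, over-the-counter, herbal remedies
    family_history: List[str] = field(default_factory=list)           # FH: relevant diseases in the family
    social_history: Dict[str, str] = field(default_factory=dict)      # SH: habits, occupation, living situation
    review_of_systems: Dict[str, str] = field(default_factory=dict)   # ROS: per-system screening questions
    physical_exam: Dict[str, str] = field(default_factory=dict)       # objective signs found on examination


# Example usage: a toy encounter with invented values, for illustration only.
encounter = MedicalEncounter(
    chief_complaint="cough for two weeks",
    history_present_illness="dry cough, worse at night, no fever",
    medications=["ibuprofen (over-the-counter)"],
    review_of_systems={"respiratory": "no shortness of breath"},
    physical_exam={"lungs": "clear to auscultation"},
)
print(encounter.chief_complaint)
```

The split between subjective history fields and the objective physical_exam field mirrors the distinction the passage draws between symptoms volunteered by the patient and signs observed by the provider.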
technology developed, medicine became more reliant upon medications. throughout history and in europe right until the late 18th century, not only plant products were used as medicine, but also animal ( including human ) body parts and fluids. pharmacology developed in part from herbalism and some drugs are still derived from plants ( atropine, ephedrine, warfarin, aspirin, digoxin, vinca alkaloids, taxol, hyoscine, etc. ). vaccines were discovered by edward jenner and louis pasteur. the first antibiotic was arsphenamine ( salvarsan ) discovered by paul ehrlich in 1908 after he observed that bacteria took up toxic dyes that human cells did not. the first major class of antibiotics was the sulfa drugs, derived by german chemists originally from azo dyes. pharmacology has become increasingly sophisticated ; modern biotechnology allows drugs targeted towards specific physiological processes to be developed, sometimes designed for compatibility with the body to reduce side - effects. genomics and knowledge of human genetics and human evolution is having increasingly significant influence on medicine, as the causative genes of most monogenic genetic disorders have now been identified, and the development of techniques in molecular biology, evolution, and genetics are influencing medical technology, practice and decision - making. evidence - based medicine is a contemporary movement to establish the most effective algorithms of practice ( ways of doing things ) through the use of systematic reviews and meta - analysis. the movement is facilitated by modern global information science, which allows as much of the available evidence as possible to be collected and analyzed according to standard protocols that are then disseminated to healthcare providers. the cochrane collaboration leads this movement. a 2001 review of 160 cochrane systematic reviews revealed that, according to two readers, 21. 3 % of the reviews concluded insufficient evidence, 20 % concluded evidence of no effect, and 22. 5 % concluded positive effect. = = quality, efficiency, and access = = evidence - based medicine, prevention of medical error ( and other " iatrogenesis " ), and avoidance of unnecessary health care are a priority in modern medical systems. these topics generate significant political and public policy attention, particularly in the united states where healthcare is regarded as excessively costly but population health metrics lag similar nations. globally, many developing countries lack access to care and access to medicines. as of 2015, most wealthy developed countries provide health care to all citizens, with a few exceptions such as the united states where lack of health insurance
; kitasato shibasaburo ( japan ) ; jean - martin charcot, claude bernard, paul broca ( france ) ; adolfo lutz ( brazil ) ; nikolai korotkov ( russia ) ; sir william osler ( canada ) ; and harvey cushing ( united states ). as science and technology developed, medicine became more reliant upon medications. throughout history and in europe right until the late 18th century, not only plant products were used as medicine, but also animal ( including human ) body parts and fluids. pharmacology developed in part from herbalism and some drugs are still derived from plants ( atropine, ephedrine, warfarin, aspirin, digoxin, vinca alkaloids, taxol, hyoscine, etc. ). vaccines were discovered by edward jenner and louis pasteur. the first antibiotic was arsphenamine ( salvarsan ) discovered by paul ehrlich in 1908 after he observed that bacteria took up toxic dyes that human cells did not. the first major class of antibiotics was the sulfa drugs, derived by german chemists originally from azo dyes. pharmacology has become increasingly sophisticated ; modern biotechnology allows drugs targeted towards specific physiological processes to be developed, sometimes designed for compatibility with the body to reduce side - effects. genomics and knowledge of human genetics and human evolution is having increasingly significant influence on medicine, as the causative genes of most monogenic genetic disorders have now been identified, and the development of techniques in molecular biology, evolution, and genetics are influencing medical technology, practice and decision - making. evidence - based medicine is a contemporary movement to establish the most effective algorithms of practice ( ways of doing things ) through the use of systematic reviews and meta - analysis. the movement is facilitated by modern global information science, which allows as much of the available evidence as possible to be collected and analyzed according to standard protocols that are then disseminated to healthcare providers. the cochrane collaboration leads this movement. a 2001 review of 160 cochrane systematic reviews revealed that, according to two readers, 21. 3 % of the reviews concluded insufficient evidence, 20 % concluded evidence of no effect, and 22. 5 % concluded positive effect. = = quality, efficiency, and access = = evidence - based medicine, prevention of medical error ( and other " iatrogenesis " ), and avoidance of unnecessary health care are a priority in modern medical systems. these topics generate significant political and public policy attention, particularly in
considered the father of modern neuroscience. from new zealand and australia came maurice wilkins, howard florey, and frank macfarlane burnet. others that did significant work include william williams keen, william coley, james d. watson ( united states ) ; salvador luria ( italy ) ; alexandre yersin ( switzerland ) ; kitasato shibasaburo ( japan ) ; jean - martin charcot, claude bernard, paul broca ( france ) ; adolfo lutz ( brazil ) ; nikolai korotkov ( russia ) ; sir william osler ( canada ) ; and harvey cushing ( united states ). as science and technology developed, medicine became more reliant upon medications. throughout history and in europe right until the late 18th century, not only plant products were used as medicine, but also animal ( including human ) body parts and fluids. pharmacology developed in part from herbalism and some drugs are still derived from plants ( atropine, ephedrine, warfarin, aspirin, digoxin, vinca alkaloids, taxol, hyoscine, etc. ). vaccines were discovered by edward jenner and louis pasteur. the first antibiotic was arsphenamine ( salvarsan ) discovered by paul ehrlich in 1908 after he observed that bacteria took up toxic dyes that human cells did not. the first major class of antibiotics was the sulfa drugs, derived by german chemists originally from azo dyes. pharmacology has become increasingly sophisticated ; modern biotechnology allows drugs targeted towards specific physiological processes to be developed, sometimes designed for compatibility with the body to reduce side - effects. genomics and knowledge of human genetics and human evolution is having increasingly significant influence on medicine, as the causative genes of most monogenic genetic disorders have now been identified, and the development of techniques in molecular biology, evolution, and genetics are influencing medical technology, practice and decision - making. evidence - based medicine is a contemporary movement to establish the most effective algorithms of practice ( ways of doing things ) through the use of systematic reviews and meta - analysis. the movement is facilitated by modern global information science, which allows as much of the available evidence as possible to be collected and analyzed according to standard protocols that are then disseminated to healthcare providers. the cochrane collaboration leads this movement. a 2001 review of 160 cochrane systematic reviews revealed that, according to two readers, 21. 3 % of the reviews concluded insufficient evidence, 20 % concluded evidence of no effect,
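As a quick back-of-the-envelope check on the figures quoted above, the percentages can be converted into approximate counts out of the 160 Cochrane reviews; rounding to whole reviews is an assumption of this sketch, not part of the original study.

```python
# Convert the quoted shares of the 160 Cochrane reviews into approximate counts.
# The percentages come from the text; rounding to whole reviews is an assumption.
total_reviews = 160
shares = {
    "insufficient evidence": 0.213,
    "evidence of no effect": 0.20,
    "positive effect": 0.225,
}
for conclusion, share in shares.items():
    print(f"{conclusion}: ~{round(total_reviews * share)} of {total_reviews} reviews")

# The three quoted categories cover about 64% of the reviews; the remaining ~36%
# fall outside them (e.g., other or mixed conclusions not broken out in the text).
```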
judgments to the practice of medicine. medical humanities includes the humanities ( literature, philosophy, ethics, history and religion ), social science ( anthropology, cultural studies, psychology, sociology ), and the arts ( literature, theater, film, and visual arts ) and their application to medical education and practice. nosokinetics is the science / subject of measuring and modelling the process of care in health and social care systems. nosology is the classification of diseases for various purposes. occupational medicine is the provision of health advice to organizations and individuals to ensure that the highest standards of health and safety at work can be achieved and maintained. pain management ( also called pain medicine, or algiatry ) is the medical discipline concerned with the relief of pain. pharmacogenomics is a form of individualized medicine. podiatric medicine is the study of, diagnosis, and medical treatment of disorders of the foot, ankle, lower limb, hip and lower back. sexual medicine is concerned with diagnosing, assessing and treating all disorders related to sexuality. sports medicine deals with the treatment and prevention and rehabilitation of sports / exercise injuries such as muscle spasms, muscle tears, injuries to ligaments ( ligament tears or ruptures ) and their repair in athletes, amateur and professional. therapeutics is the field, more commonly referenced in earlier periods of history, of the various remedies that can be used to treat disease and promote health. travel medicine or emporiatrics deals with health problems of international travelers or travelers across highly different environments. tropical medicine deals with the prevention and treatment of tropical diseases. it is studied separately in temperate climates where those diseases are quite unfamiliar to medical practitioners and their local clinical needs. urgent care focuses on delivery of unscheduled, walk - in care outside of the hospital emergency department for injuries and illnesses that are not severe enough to require care in an emergency department. in some jurisdictions this function is combined with the emergency department. veterinary medicine ; veterinarians apply similar techniques as physicians to the care of non - human animals. wilderness medicine entails the practice of medicine in the wild, where conventional medical facilities may not be available. = = education and legal controls = = medical education and training varies around the world. it typically involves entry level education at a university medical school, followed by a period of supervised practice or internship, or residency. this can be followed by postgraduate vocational training. a variety of teaching methods have been employed in medical education, still itself a focus of active research. in canada and the united states of
also called pain medicine, or algiatry ) is the medical discipline concerned with the relief of pain. pharmacogenomics is a form of individualized medicine. podiatric medicine is the study of, diagnosis, and medical treatment of disorders of the foot, ankle, lower limb, hip and lower back. sexual medicine is concerned with diagnosing, assessing and treating all disorders related to sexuality. sports medicine deals with the treatment and prevention and rehabilitation of sports / exercise injuries such as muscle spasms, muscle tears, injuries to ligaments ( ligament tears or ruptures ) and their repair in athletes, amateur and professional. therapeutics is the field, more commonly referenced in earlier periods of history, of the various remedies that can be used to treat disease and promote health. travel medicine or emporiatrics deals with health problems of international travelers or travelers across highly different environments. tropical medicine deals with the prevention and treatment of tropical diseases. it is studied separately in temperate climates where those diseases are quite unfamiliar to medical practitioners and their local clinical needs. urgent care focuses on delivery of unscheduled, walk - in care outside of the hospital emergency department for injuries and illnesses that are not severe enough to require care in an emergency department. in some jurisdictions this function is combined with the emergency department. veterinary medicine ; veterinarians apply similar techniques as physicians to the care of non - human animals. wilderness medicine entails the practice of medicine in the wild, where conventional medical facilities may not be available. = = education and legal controls = = medical education and training varies around the world. it typically involves entry level education at a university medical school, followed by a period of supervised practice or internship, or residency. this can be followed by postgraduate vocational training. a variety of teaching methods have been employed in medical education, still itself a focus of active research. in canada and the united states of america, a doctor of medicine degree, often abbreviated m. d., or a doctor of osteopathic medicine degree, often abbreviated as d. o. and unique to the united states, must be completed in and delivered from a recognized university. since knowledge, techniques, and medical technology continue to evolve at a rapid rate, many regulatory authorities require continuing medical education. medical practitioners upgrade their knowledge in various ways, including medical journals, seminars, conferences, and online programs. a database of objectives covering medical knowledge, as suggested by national societies across the united states, can be searched at http : / / data. medobjectives
used to manufacture existing medicines relatively easily and cheaply. the first genetically engineered products were medicines designed to treat human diseases. to cite one example, in 1978 genentech developed synthetic humanized insulin by joining its gene with a plasmid vector inserted into the bacterium escherichia coli. insulin, widely used for the treatment of diabetes, was previously extracted from the pancreas of abattoir animals ( cattle or pigs ). the genetically engineered bacteria are able to produce large quantities of synthetic human insulin at relatively low cost. biotechnology has also enabled emerging therapeutics like gene therapy. the application of biotechnology to basic science ( for example through the human genome project ) has also dramatically improved our understanding of biology and as our scientific knowledge of normal and disease biology has increased, our ability to develop new medicines to treat previously untreatable diseases has increased as well. genetic testing allows the genetic diagnosis of vulnerabilities to inherited diseases, and can also be used to determine a child ' s parentage ( genetic mother and father ) or in general a person ' s ancestry. in addition to studying chromosomes to the level of individual genes, genetic testing in a broader sense includes biochemical tests for the possible presence of genetic diseases, or mutant forms of genes associated with increased risk of developing genetic disorders. genetic testing identifies changes in chromosomes, genes, or proteins. most of the time, testing is used to find changes that are associated with inherited disorders. the results of a genetic test can confirm or rule out a suspected genetic condition or help determine a person ' s chance of developing or passing on a genetic disorder. as of 2011 several hundred genetic tests were in use. since genetic testing may open up ethical or psychological problems, genetic testing is often accompanied by genetic counseling. = = = agriculture = = = genetically modified crops ( " gm crops ", or " biotech crops " ) are plants used in agriculture, the dna of which has been modified with genetic engineering techniques. in most cases, the main aim is to introduce a new trait that does not occur naturally in the species. biotechnology firms can contribute to future food security by improving the nutrition and viability of urban agriculture. furthermore, the protection of intellectual property rights encourages private sector investment in agrobiotechnology. examples in food crops include resistance to certain pests, diseases, stressful environmental conditions, resistance to chemical treatments ( e. g. resistance to a herbicide ), reduction of spoilage, or improving the nutrient profile of the crop. examples in non - food crops include production of
medical history comprises hpi and pmh. medications ( rx ) : what drugs the patient takes including prescribed, over - the - counter, and home remedies, as well as alternative and herbal medicines or remedies. allergies are also recorded. past medical history ( pmh / pmhx ) : concurrent medical problems, past hospitalizations and operations, injuries, past infectious diseases or vaccinations, history of known allergies. review of systems ( ros ) or systems inquiry : a set of additional questions to ask, which may be missed on hpi : a general enquiry ( have you noticed any weight loss, change in sleep quality, fevers, lumps and bumps? etc. ), followed by questions on the body ' s main organ systems ( heart, lungs, digestive tract, urinary tract, etc. ). social history ( sh ) : birthplace, residences, marital history, social and economic status, habits ( including diet, medications, tobacco, alcohol ). the physical examination is the examination of the patient for medical signs of disease that are objective and observable, in contrast to symptoms that are volunteered by the patient and are not necessarily objectively observable. the healthcare provider uses sight, hearing, touch, and sometimes smell ( e. g., in infection, uremia, diabetic ketoacidosis ). four actions are the basis of physical examination : inspection, palpation ( feel ), percussion ( tap to determine resonance characteristics ), and auscultation ( listen ), generally in that order, although auscultation occurs prior to percussion and palpation for abdominal assessments. the clinical examination involves the study of : abdomen and rectum cardiovascular ( heart and blood vessels ) general appearance of the patient and specific indicators of disease ( nutritional status, presence of jaundice, pallor or clubbing ) genitalia ( and pregnancy if the patient is or could be pregnant ) head, eye, ear, nose, and throat ( heent ) musculoskeletal ( including spine and extremities ) neurological ( consciousness, awareness, brain, vision, cranial nerves, spinal cord and peripheral nerves ) psychiatric ( orientation, mental state, mood, evidence of abnormal perception or thought ). respiratory ( large airways and lungs ) skin vital signs including height, weight, body temperature, blood pressure, pulse, respiration rate, and hemoglobin oxygen saturation it is to likely focus on
Question: How should researchers share their findings with other scientists to validate their study of the effectiveness of a new medication?
A) e-mail the research results to local newspapers
B) publish the research results in a scientific journal
C) publish the results in brochures for doctors
D) discuss the research results on the Internet
|
B) publish the research results in a scientific journal
|
Context:
polyatomic ions that do not split up during acid β base reactions are hydroxide ( ohβ ) and phosphate ( po43β ). plasma is composed of gaseous matter that has been completely ionized, usually through high temperature. = = = acidity and basicity = = = a substance can often be classified as an acid or a base. there are several different theories which explain acid β base behavior. the simplest is arrhenius theory, which states that an acid is a substance that produces hydronium ions when it is dissolved in water, and a base is one that produces hydroxide ions when dissolved in water. according to brΓΈnsted β lowry acid β base theory, acids are substances that donate a positive hydrogen ion to another substance in a chemical reaction ; by extension, a base is the substance which receives that hydrogen ion. a third common theory is lewis acid β base theory, which is based on the formation of new chemical bonds. lewis theory explains that an acid is a substance which is capable of accepting a pair of electrons from another substance during the process of bond formation, while a base is a substance which can provide a pair of electrons to form a new bond. there are several other ways in which a substance may be classified as an acid or a base, as is evident in the history of this concept. acid strength is commonly measured by two methods. one measurement, based on the arrhenius definition of acidity, is ph, which is a measurement of the hydronium ion concentration in a solution, as expressed on a negative logarithmic scale. thus, solutions that have a low ph have a high hydronium ion concentration and can be said to be more acidic. the other measurement, based on the brΓΈnsted β lowry definition, is the acid dissociation constant ( ka ), which measures the relative ability of a substance to act as an acid under the brΓΈnsted β lowry definition of an acid. that is, substances with a higher ka are more likely to donate hydrogen ions in chemical reactions than those with lower ka values. = = = redox = = = redox ( reduction - oxidation ) reactions include all chemical reactions in which atoms have their oxidation state changed by either gaining electrons ( reduction ) or losing electrons ( oxidation ). substances that have the ability to oxidize other substances are said to be oxidative and are known as oxidizing agents, oxidants or oxidizers. an oxidant removes electrons from another substance. similarly,
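The pH measure described above is just the negative base-10 logarithm of the hydronium ion concentration, so a lower pH corresponds to a higher concentration and a more acidic solution. A minimal sketch follows; the function name and the sample concentrations are illustrative assumptions, not values taken from the text.

```python
import math


def ph_from_hydronium(concentration_mol_per_l: float) -> float:
    """pH = -log10([H3O+]); lower pH means a higher hydronium concentration (more acidic)."""
    return -math.log10(concentration_mol_per_l)


# Illustrative concentrations in mol/L, chosen only to show the logarithmic scale.
samples = [
    ("strongly acidic solution", 1e-2),
    ("pure water", 1e-7),
    ("strongly basic solution", 1e-11),
]
for label, conc in samples:
    print(f"{label}: [H3O+] = {conc:.0e} mol/L -> pH ~ {ph_from_hydronium(conc):.1f}")
```

Because the scale is logarithmic, each unit drop in pH corresponds to a tenfold increase in hydronium concentration.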
= = organic compounds are molecules that contain carbon bonded to another element such as hydrogen. with the exception of water, nearly all the molecules that make up each organism contain carbon. carbon can form covalent bonds with up to four other atoms, enabling it to form diverse, large, and complex molecules. for example, a single carbon atom can form four single covalent bonds such as in methane, two double covalent bonds such as in carbon dioxide ( co2 ), or a triple covalent bond such as in carbon monoxide ( co ). moreover, carbon can form very long chains of interconnecting carbon β carbon bonds such as octane or ring - like structures such as glucose. the simplest form of an organic molecule is the hydrocarbon, which is a large family of organic compounds that are composed of hydrogen atoms bonded to a chain of carbon atoms. a hydrocarbon backbone can be substituted by other elements such as oxygen ( o ), hydrogen ( h ), phosphorus ( p ), and sulfur ( s ), which can change the chemical behavior of that compound. groups of atoms that contain these elements ( o -, h -, p -, and s - ) and are bonded to a central carbon atom or skeleton are called functional groups. there are six prominent functional groups that can be found in organisms : amino group, carboxyl group, carbonyl group, hydroxyl group, phosphate group, and sulfhydryl group. in 1953, the miller β urey experiment showed that organic compounds could be synthesized abiotically within a closed system mimicking the conditions of early earth, thus suggesting that complex organic molecules could have arisen spontaneously in early earth ( see abiogenesis ). = = = macromolecules = = = macromolecules are large molecules made up of smaller subunits or monomers. monomers include sugars, amino acids, and nucleotides. carbohydrates include monomers and polymers of sugars. lipids are the only class of macromolecules that are not made up of polymers. they include steroids, phospholipids, and fats, largely nonpolar and hydrophobic ( water - repelling ) substances. proteins are the most diverse of the macromolecules. they include enzymes, transport proteins, large signaling molecules, antibodies, and structural proteins. the basic unit ( or monomer ) of a protein is an amino acid. twenty amino acids are used in proteins. nucleic acids
enough to rise to the surface β giving birth to volcanoes. = = atmospheric science = = atmospheric science initially developed in the late - 19th century as a means to forecast the weather through meteorology, the study of weather. atmospheric chemistry was developed in the 20th century to measure air pollution and expanded in the 1970s in response to acid rain. climatology studies the climate and climate change. the troposphere, stratosphere, mesosphere, thermosphere, and exosphere are the five layers which make up earth ' s atmosphere. 75 % of the mass in the atmosphere is located within the troposphere, the lowest layer. in all, the atmosphere is made up of about 78. 0 % nitrogen, 20. 9 % oxygen, and 0. 92 % argon, and small amounts of other gases including co2 and water vapor. water vapor and co2 cause the earth ' s atmosphere to catch and hold the sun ' s energy through the greenhouse effect. this makes earth ' s surface warm enough for liquid water and life. in addition to trapping heat, the atmosphere also protects living organisms by shielding the earth ' s surface from cosmic rays. the magnetic field β created by the internal motions of the core β produces the magnetosphere which protects earth ' s atmosphere from the solar wind. as the earth is 4. 5 billion years old, it would have lost its atmosphere by now if there were no protective magnetosphere. = = earth ' s magnetic field = = = = hydrology = = hydrology is the study of the hydrosphere and the movement of water on earth. it emphasizes the study of how humans use and interact with freshwater supplies. study of water ' s movement is closely related to geomorphology and other branches of earth science. applied hydrology involves engineering to maintain aquatic environments and distribute water supplies. subdisciplines of hydrology include oceanography, hydrogeology, ecohydrology, and glaciology. oceanography is the study of oceans. hydrogeology is the study of groundwater. it includes the mapping of groundwater supplies and the analysis of groundwater contaminants. applied hydrogeology seeks to prevent contamination of groundwater and mineral springs and make it available as drinking water. the earliest exploitation of groundwater resources dates back to 3000 bc, and hydrogeology as a science was developed by hydrologists beginning in the 17th century. ecohydrology is the study of ecological systems in the hydrosphere. it can be divided into the physical study of aquatic ecosystems and the
classified as an acid or a base. there are several different theories which explain acid β base behavior. the simplest is arrhenius theory, which states that an acid is a substance that produces hydronium ions when it is dissolved in water, and a base is one that produces hydroxide ions when dissolved in water. according to brΓΈnsted β lowry acid β base theory, acids are substances that donate a positive hydrogen ion to another substance in a chemical reaction ; by extension, a base is the substance which receives that hydrogen ion. a third common theory is lewis acid β base theory, which is based on the formation of new chemical bonds. lewis theory explains that an acid is a substance which is capable of accepting a pair of electrons from another substance during the process of bond formation, while a base is a substance which can provide a pair of electrons to form a new bond. there are several other ways in which a substance may be classified as an acid or a base, as is evident in the history of this concept. acid strength is commonly measured by two methods. one measurement, based on the arrhenius definition of acidity, is ph, which is a measurement of the hydronium ion concentration in a solution, as expressed on a negative logarithmic scale. thus, solutions that have a low ph have a high hydronium ion concentration and can be said to be more acidic. the other measurement, based on the brΓΈnsted β lowry definition, is the acid dissociation constant ( ka ), which measures the relative ability of a substance to act as an acid under the brΓΈnsted β lowry definition of an acid. that is, substances with a higher ka are more likely to donate hydrogen ions in chemical reactions than those with lower ka values. = = = redox = = = redox ( reduction - oxidation ) reactions include all chemical reactions in which atoms have their oxidation state changed by either gaining electrons ( reduction ) or losing electrons ( oxidation ). substances that have the ability to oxidize other substances are said to be oxidative and are known as oxidizing agents, oxidants or oxidizers. an oxidant removes electrons from another substance. similarly, substances that have the ability to reduce other substances are said to be reductive and are known as reducing agents, reductants, or reducers. a reductant transfers electrons to another substance and is thus oxidized itself. and because it " donates " electrons it is also called an electron
single carbon atom can form four single covalent bonds such as in methane, two double covalent bonds such as in carbon dioxide ( co2 ), or a triple covalent bond such as in carbon monoxide ( co ). moreover, carbon can form very long chains of interconnecting carbon β carbon bonds such as octane or ring - like structures such as glucose. the simplest form of an organic molecule is the hydrocarbon, which is a large family of organic compounds that are composed of hydrogen atoms bonded to a chain of carbon atoms. a hydrocarbon backbone can be substituted by other elements such as oxygen ( o ), hydrogen ( h ), phosphorus ( p ), and sulfur ( s ), which can change the chemical behavior of that compound. groups of atoms that contain these elements ( o -, h -, p -, and s - ) and are bonded to a central carbon atom or skeleton are called functional groups. there are six prominent functional groups that can be found in organisms : amino group, carboxyl group, carbonyl group, hydroxyl group, phosphate group, and sulfhydryl group. in 1953, the miller β urey experiment showed that organic compounds could be synthesized abiotically within a closed system mimicking the conditions of early earth, thus suggesting that complex organic molecules could have arisen spontaneously in early earth ( see abiogenesis ). = = = macromolecules = = = macromolecules are large molecules made up of smaller subunits or monomers. monomers include sugars, amino acids, and nucleotides. carbohydrates include monomers and polymers of sugars. lipids are the only class of macromolecules that are not made up of polymers. they include steroids, phospholipids, and fats, largely nonpolar and hydrophobic ( water - repelling ) substances. proteins are the most diverse of the macromolecules. they include enzymes, transport proteins, large signaling molecules, antibodies, and structural proteins. the basic unit ( or monomer ) of a protein is an amino acid. twenty amino acids are used in proteins. nucleic acids are polymers of nucleotides. their function is to store, transmit, and express hereditary information. = = cells = = cell theory states that cells are the fundamental units of life, that all living things are composed of one or more cells, and that all cells arise from preexisting cells through cell division
according to brΓΈnsted β lowry acid β base theory, acids are substances that donate a positive hydrogen ion to another substance in a chemical reaction ; by extension, a base is the substance which receives that hydrogen ion. a third common theory is lewis acid β base theory, which is based on the formation of new chemical bonds. lewis theory explains that an acid is a substance which is capable of accepting a pair of electrons from another substance during the process of bond formation, while a base is a substance which can provide a pair of electrons to form a new bond. there are several other ways in which a substance may be classified as an acid or a base, as is evident in the history of this concept. acid strength is commonly measured by two methods. one measurement, based on the arrhenius definition of acidity, is ph, which is a measurement of the hydronium ion concentration in a solution, as expressed on a negative logarithmic scale. thus, solutions that have a low ph have a high hydronium ion concentration and can be said to be more acidic. the other measurement, based on the brΓΈnsted β lowry definition, is the acid dissociation constant ( ka ), which measures the relative ability of a substance to act as an acid under the brΓΈnsted β lowry definition of an acid. that is, substances with a higher ka are more likely to donate hydrogen ions in chemical reactions than those with lower ka values. = = = redox = = = redox ( reduction - oxidation ) reactions include all chemical reactions in which atoms have their oxidation state changed by either gaining electrons ( reduction ) or losing electrons ( oxidation ). substances that have the ability to oxidize other substances are said to be oxidative and are known as oxidizing agents, oxidants or oxidizers. an oxidant removes electrons from another substance. similarly, substances that have the ability to reduce other substances are said to be reductive and are known as reducing agents, reductants, or reducers. a reductant transfers electrons to another substance and is thus oxidized itself. and because it " donates " electrons it is also called an electron donor. oxidation and reduction properly refer to a change in oxidation number β the actual transfer of electrons may never occur. thus, oxidation is better defined as an increase in oxidation number, and reduction as a decrease in oxidation number. = = = equilibrium = = = although the concept of equilibrium is widely used across sciences, in
that molecular ions be present only in well - separated form, such as a directed beam in a vacuum in a mass spectrometer. charged polyatomic collections residing in solids ( for example, common sulfate or nitrate ions ) are generally not considered " molecules " in chemistry. some molecules contain one or more unpaired electrons, creating radicals. most radicals are comparatively reactive, but some, such as nitric oxide ( no ) can be stable. the " inert " or noble gas elements ( helium, neon, argon, krypton, xenon and radon ) are composed of lone atoms as their smallest discrete unit, but the other isolated chemical elements consist of either molecules or networks of atoms bonded to each other in some way. identifiable molecules compose familiar substances such as water, air, and many organic compounds like alcohol, sugar, gasoline, and the various pharmaceuticals. however, not all substances or chemical compounds consist of discrete molecules, and indeed most of the solid substances that make up the solid crust, mantle, and core of the earth are chemical compounds without molecules. these other types of substances, such as ionic compounds and network solids, are organized in such a way as to lack the existence of identifiable molecules per se. instead, these substances are discussed in terms of formula units or unit cells as the smallest repeating structure within the substance. examples of such substances are mineral salts ( such as table salt ), solids like carbon and diamond, metals, and familiar silica and silicate minerals such as quartz and granite. one of the main characteristics of a molecule is its geometry often called its structure. while the structure of diatomic, triatomic or tetra - atomic molecules may be trivial, ( linear, angular pyramidal etc. ) the structure of polyatomic molecules, that are constituted of more than six atoms ( of several elements ) can be crucial for its chemical nature. = = = = substance and mixture = = = = a chemical substance is a kind of matter with a definite composition and set of properties. a collection of substances is called a mixture. examples of mixtures are air and alloys. = = = = mole and amount of substance = = = = the mole is a unit of measurement that denotes an amount of substance ( also called chemical amount ). one mole is defined to contain exactly 6. 02214076Γ1023 particles ( atoms, molecules, ions, or electrons ), where the number of particles per mole is known as the avogadro constant. molar concentration is
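The mole and molar concentration definitions above reduce to two simple ratios: a particle count divided by the Avogadro constant gives the amount of substance, and that amount divided by the solution volume gives the molar concentration. A minimal sketch, with numbers chosen only so the arithmetic is easy to follow:

```python
# Amount of substance and molar concentration, using the Avogadro constant quoted in the text.
AVOGADRO = 6.02214076e23  # particles per mole


def moles_from_particles(n_particles: float) -> float:
    """Amount of substance (mol) corresponding to a given particle count."""
    return n_particles / AVOGADRO


def molar_concentration(moles: float, volume_litres: float) -> float:
    """Molar concentration (mol/L) = amount of substance / volume of solution."""
    return moles / volume_litres


# Illustrative example: about 3.01e23 molecules dissolved in 0.5 L of solution.
n = moles_from_particles(3.011e23)                       # ~0.5 mol
print(f"amount: {n:.3f} mol")
print(f"concentration: {molar_concentration(n, 0.5):.2f} mol/L")  # ~1.00 mol/L
```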
the most abundant molecule in every organism. water is important to life because it is an effective solvent, capable of dissolving solutes such as sodium and chloride ions or other small molecules to form an aqueous solution. once dissolved in water, these solutes are more likely to come in contact with one another and therefore take part in chemical reactions that sustain life. in terms of its molecular structure, water is a small polar molecule with a bent shape formed by the polar covalent bonds of two hydrogen ( h ) atoms to one oxygen ( o ) atom ( h2o ). because the o β h bonds are polar, the oxygen atom has a slight negative charge and the two hydrogen atoms have a slight positive charge. this polar property of water allows it to attract other water molecules via hydrogen bonds, which makes water cohesive. surface tension results from the cohesive force due to the attraction between molecules at the surface of the liquid. water is also adhesive as it is able to adhere to the surface of any polar or charged non - water molecules. water is denser as a liquid than it is as a solid ( or ice ). this unique property of water allows ice to float above liquid water such as ponds, lakes, and oceans, thereby insulating the liquid below from the cold air above. water has the capacity to absorb energy, giving it a higher specific heat capacity than other solvents such as ethanol. thus, a large amount of energy is needed to break the hydrogen bonds between water molecules to convert liquid water into water vapor. as a molecule, water is not completely stable as each water molecule continuously dissociates into hydrogen and hydroxyl ions before reforming into a water molecule again. in pure water, the number of hydrogen ions balances ( or equals ) the number of hydroxyl ions, resulting in a ph that is neutral. = = = organic compounds = = = organic compounds are molecules that contain carbon bonded to another element such as hydrogen. with the exception of water, nearly all the molecules that make up each organism contain carbon. carbon can form covalent bonds with up to four other atoms, enabling it to form diverse, large, and complex molecules. for example, a single carbon atom can form four single covalent bonds such as in methane, two double covalent bonds such as in carbon dioxide ( co2 ), or a triple covalent bond such as in carbon monoxide ( co ). moreover, carbon can form very long chains of interconnecting carbon β carbon bonds such
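The statement that equal hydrogen and hydroxyl ion concentrations make pure water neutral can be made concrete with the ion-product constant of water, Kw, about 1.0 x 10^-14 at 25 °C; that constant is a standard textbook value rather than something quoted in the passage above. A minimal sketch:

```python
import math

# In pure water the hydrogen and hydroxyl ion concentrations are equal, as stated above.
# Kw ~ 1.0e-14 at 25 degrees C is a standard value, not taken from the passage.
KW_25C = 1.0e-14

h_concentration = math.sqrt(KW_25C)    # [H+] = [OH-] = sqrt(Kw) ~ 1e-7 mol/L
ph = -math.log10(h_concentration)      # pH = -log10([H+])
print(f"[H+] = {h_concentration:.1e} mol/L -> neutral pH = {ph:.1f}")  # pH = 7.0
```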
ion or cation. when an atom gains an electron and thus has more electrons than protons, the atom is a negatively charged ion or anion. cations and anions can form a crystalline lattice of neutral salts, such as the na + and clβ ions forming sodium chloride, or nacl. examples of polyatomic ions that do not split up during acid β base reactions are hydroxide ( ohβ ) and phosphate ( po43β ). plasma is composed of gaseous matter that has been completely ionized, usually through high temperature. = = = acidity and basicity = = = a substance can often be classified as an acid or a base. there are several different theories which explain acid β base behavior. the simplest is arrhenius theory, which states that an acid is a substance that produces hydronium ions when it is dissolved in water, and a base is one that produces hydroxide ions when dissolved in water. according to brΓΈnsted β lowry acid β base theory, acids are substances that donate a positive hydrogen ion to another substance in a chemical reaction ; by extension, a base is the substance which receives that hydrogen ion. a third common theory is lewis acid β base theory, which is based on the formation of new chemical bonds. lewis theory explains that an acid is a substance which is capable of accepting a pair of electrons from another substance during the process of bond formation, while a base is a substance which can provide a pair of electrons to form a new bond. there are several other ways in which a substance may be classified as an acid or a base, as is evident in the history of this concept. acid strength is commonly measured by two methods. one measurement, based on the arrhenius definition of acidity, is ph, which is a measurement of the hydronium ion concentration in a solution, as expressed on a negative logarithmic scale. thus, solutions that have a low ph have a high hydronium ion concentration and can be said to be more acidic. the other measurement, based on the brΓΈnsted β lowry definition, is the acid dissociation constant ( ka ), which measures the relative ability of a substance to act as an acid under the brΓΈnsted β lowry definition of an acid. that is, substances with a higher ka are more likely to donate hydrogen ions in chemical reactions than those with lower ka values. = = = redox = = = redox ( reduction - oxidation ) reactions include all chemical reactions in which atoms have their
) of the mass of all organisms, with calcium, phosphorus, sulfur, sodium, chlorine, and magnesium constituting essentially all the remainder. different elements can combine to form compounds such as water, which is fundamental to life. biochemistry is the study of chemical processes within and relating to living organisms. molecular biology is the branch of biology that seeks to understand the molecular basis of biological activity in and between cells, including molecular synthesis, modification, mechanisms, and interactions. = = = water = = = life arose from the earth ' s first ocean, which formed some 3. 8 billion years ago. since then, water continues to be the most abundant molecule in every organism. water is important to life because it is an effective solvent, capable of dissolving solutes such as sodium and chloride ions or other small molecules to form an aqueous solution. once dissolved in water, these solutes are more likely to come in contact with one another and therefore take part in chemical reactions that sustain life. in terms of its molecular structure, water is a small polar molecule with a bent shape formed by the polar covalent bonds of two hydrogen ( h ) atoms to one oxygen ( o ) atom ( h2o ). because the o β h bonds are polar, the oxygen atom has a slight negative charge and the two hydrogen atoms have a slight positive charge. this polar property of water allows it to attract other water molecules via hydrogen bonds, which makes water cohesive. surface tension results from the cohesive force due to the attraction between molecules at the surface of the liquid. water is also adhesive as it is able to adhere to the surface of any polar or charged non - water molecules. water is denser as a liquid than it is as a solid ( or ice ). this unique property of water allows ice to float above liquid water such as ponds, lakes, and oceans, thereby insulating the liquid below from the cold air above. water has the capacity to absorb energy, giving it a higher specific heat capacity than other solvents such as ethanol. thus, a large amount of energy is needed to break the hydrogen bonds between water molecules to convert liquid water into water vapor. as a molecule, water is not completely stable as each water molecule continuously dissociates into hydrogen and hydroxyl ions before reforming into a water molecule again. in pure water, the number of hydrogen ions balances ( or equals ) the number of hydroxyl ions, resulting in a ph that is neutral. = = = organic compounds =
Question: Gaseous sulfur dioxide is a compound that combines with water in the atmosphere to form acid rain. What is the primary source of sulfur dioxide?
A) volcanic emissions
B) combustion of fossil fuels
C) destruction of tropical forests
D) mining and mineral extraction
|
B) combustion of fossil fuels
|
Context:
the scientific revolution. aristotle also contributed to theories of the elements and the cosmos. he believed that the celestial bodies ( such as the planets and the sun ) had something called an unmoved mover that put the celestial bodies in motion. aristotle tried to explain everything through mathematics and physics, but sometimes explained things such as the motion of celestial bodies through a higher power such as god. aristotle did not have the technological advancements that would have explained the motion of celestial bodies. in addition, aristotle had many views on the elements. he believed that everything was derived of the elements earth, water, air, fire, and lastly the aether. the aether was a celestial element, and therefore made up the matter of the celestial bodies. the elements of earth, water, air and fire were derived of a combination of two of the characteristics of hot, wet, cold, and dry, and all had their inevitable place and motion. the motion of these elements begins with earth being the closest to " the earth, " then water, air, fire, and finally aether. in addition to the makeup of all things, aristotle came up with theories as to why things did not return to their natural motion. he understood that water sits above earth, air above water, and fire above air in their natural state. he explained that although all elements must return to their natural state, the human body and other living things have a constraint on the elements β thus not allowing the elements making one who they are to return to their natural state. the important legacy of this period included substantial advances in factual knowledge, especially in anatomy, zoology, botany, mineralogy, geography, mathematics and astronomy ; an awareness of the importance of certain scientific problems, especially those related to the problem of change and its causes ; and a recognition of the methodological importance of applying mathematics to natural phenomena and of undertaking empirical research. in the hellenistic age scholars frequently employed the principles developed in earlier greek thought : the application of mathematics and deliberate empirical research, in their scientific investigations. thus, clear unbroken lines of influence lead from ancient greek and hellenistic philosophers, to medieval muslim philosophers and scientists, to the european renaissance and enlightenment, to the secular sciences of the modern day. neither reason nor inquiry began with the ancient greeks, but the socratic method did, along with the idea of forms, give great advances in geometry, logic, and the natural sciences. according to benjamin farrington, former professor of classics at swansea university : " men were weighing for thousands of years before archimedes worked out the
three major planets, venus, earth, and mercury formed out of the solar nebula. a fourth planetesimal, theia, also formed near earth where it collided in a giant impact, rebounding as the planet mars. during this impact earth lost $\approx 4\%$ of its crust and mantle that is now found on mars and the moon. at the antipode of the giant impact, $\approx 60\%$ of earth ' s crust, atmosphere, and a large amount of mantle were ejected into space forming the moon. the lost crust never reformed and became the earth ' s ocean basins. the theia impact site corresponds to the indian ocean gravitational anomaly on earth and the hellas basin on mars. the dynamics of the giant impact are consistent with the rotational rates and axial tilts of both earth and mars. the giant impact removed sufficient co$_2$ from earth ' s atmosphere to avoid a runaway greenhouse effect, initiated plate tectonics, and gave life time to form near geothermal vents at the continental margins. mercury formed near venus where on a close approach it was slingshot into the sun ' s convective zone, losing 94\% of its mass, much of which remains there today. black carbon, from co$_2$ decomposed by the intense heat, is still found on the surface of mercury. arriving at 616 km / s, mercury dramatically altered the sun ' s rotational energy, explaining both its anomalously slow rotation rate and axial tilt. these results are quantitatively supported by mass balances, the current locations of the terrestrial planets, and the orientations of their major orbital axes.
on biological causation and the diversity of life. he made countless observations of nature, especially the habits and attributes of plants and animals on lesbos, classified more than 540 animal species, and dissected at least 50. aristotle ' s writings profoundly influenced subsequent islamic and european scholarship, though they were eventually superseded in the scientific revolution. aristotle also contributed to theories of the elements and the cosmos. he believed that the celestial bodies ( such as the planets and the sun ) had something called an unmoved mover that put the celestial bodies in motion. aristotle tried to explain everything through mathematics and physics, but sometimes explained things such as the motion of celestial bodies through a higher power such as god. aristotle did not have the technological advancements that would have explained the motion of celestial bodies. in addition, aristotle had many views on the elements. he believed that everything was derived of the elements earth, water, air, fire, and lastly the aether. the aether was a celestial element, and therefore made up the matter of the celestial bodies. the elements of earth, water, air and fire were derived of a combination of two of the characteristics of hot, wet, cold, and dry, and all had their inevitable place and motion. the motion of these elements begins with earth being the closest to " the earth, " then water, air, fire, and finally aether. in addition to the makeup of all things, aristotle came up with theories as to why things did not return to their natural motion. he understood that water sits above earth, air above water, and fire above air in their natural state. he explained that although all elements must return to their natural state, the human body and other living things have a constraint on the elements β thus not allowing the elements making one who they are to return to their natural state. the important legacy of this period included substantial advances in factual knowledge, especially in anatomy, zoology, botany, mineralogy, geography, mathematics and astronomy ; an awareness of the importance of certain scientific problems, especially those related to the problem of change and its causes ; and a recognition of the methodological importance of applying mathematics to natural phenomena and of undertaking empirical research. in the hellenistic age scholars frequently employed the principles developed in earlier greek thought : the application of mathematics and deliberate empirical research, in their scientific investigations. thus, clear unbroken lines of influence lead from ancient greek and hellenistic philosophers, to medieval muslim philosophers and scientists, to the european renaissance and enlightenment, to the secular sciences of the modern day. neither reason
the motion of celestial bodies through a higher power such as god. aristotle did not have the technological advancements that would have explained the motion of celestial bodies. in addition, aristotle had many views on the elements. he believed that everything was derived of the elements earth, water, air, fire, and lastly the aether. the aether was a celestial element, and therefore made up the matter of the celestial bodies. the elements of earth, water, air and fire were derived of a combination of two of the characteristics of hot, wet, cold, and dry, and all had their inevitable place and motion. the motion of these elements begins with earth being the closest to " the earth, " then water, air, fire, and finally aether. in addition to the makeup of all things, aristotle came up with theories as to why things did not return to their natural motion. he understood that water sits above earth, air above water, and fire above air in their natural state. he explained that although all elements must return to their natural state, the human body and other living things have a constraint on the elements β thus not allowing the elements making one who they are to return to their natural state. the important legacy of this period included substantial advances in factual knowledge, especially in anatomy, zoology, botany, mineralogy, geography, mathematics and astronomy ; an awareness of the importance of certain scientific problems, especially those related to the problem of change and its causes ; and a recognition of the methodological importance of applying mathematics to natural phenomena and of undertaking empirical research. in the hellenistic age scholars frequently employed the principles developed in earlier greek thought : the application of mathematics and deliberate empirical research, in their scientific investigations. thus, clear unbroken lines of influence lead from ancient greek and hellenistic philosophers, to medieval muslim philosophers and scientists, to the european renaissance and enlightenment, to the secular sciences of the modern day. neither reason nor inquiry began with the ancient greeks, but the socratic method did, along with the idea of forms, give great advances in geometry, logic, and the natural sciences. according to benjamin farrington, former professor of classics at swansea university : " men were weighing for thousands of years before archimedes worked out the laws of equilibrium ; they must have had practical and intuitional knowledge of the principals involved. what archimedes did was to sort out the theoretical implications of this practical knowledge and present the resulting body of knowledge as a logically coherent system. " and again : " with astonishment we find ourselves on the threshold of modern science
so mars below means blood and war ", is a false cause fallacy. : 26 many astrologers claim that astrology is scientific. if one were to attempt to try to explain it scientifically, there are only four fundamental forces ( conventionally ), limiting the choice of possible natural mechanisms. : 65 some astrologers have proposed conventional causal agents such as electromagnetism and gravity. the strength of these forces drops off with distance. : 65 scientists reject these proposed mechanisms as implausible since, for example, the magnetic field, when measured from earth, of a large but distant planet such as jupiter is far smaller than that produced by ordinary household appliances. astronomer phil plait noted that in terms of magnitude, the sun is the only object with an electromagnetic field of note, but astrology isn ' t based just off the sun alone. : 65 while astrologers could try to suggest a fifth force, this is inconsistent with the trends in physics with the unification of electromagnetism and the weak force into the electroweak force. if the astrologer insisted on being inconsistent with the current understanding and evidential basis of physics, that would be an extraordinary claim. : 65 it would also be inconsistent with the other forces which drop off with distance. : 65 if distance is irrelevant, then, logically, all objects in space should be taken into account. : 66 carl jung sought to invoke synchronicity, the claim that two events have some sort of acausal connection, to explain the lack of statistically significant results on astrology from a single study he conducted. however, synchronicity itself is considered neither testable nor falsifiable. the study was subsequently heavily criticised for its non - random sample and its use of statistics and also its lack of consistency with astrology. = = psychology = = psychological studies have not found any robust relationship between astrological signs and life outcomes. for example, a study showed that zodiac signs are no more effective than random numbers in predicting subjective well - being and quality of life. it has also been shown that confirmation bias is a psychological factor that contributes to belief in astrology. : 344 : 180 β 181 : 42 β 48 confirmation bias is a form of cognitive bias. : 553 from the literature, astrology believers often tend to selectively remember those predictions that turned out to be true and do not remember those that turned out false. another, separate, form of confirmation bias also plays a role, where believers often fail to
variation in total solar irradiance is thought to have little effect on the earth ' s surface temperature because of the thermal time constant, the characteristic response time of the earth ' s global surface temperature to changes in forcing. this time constant is large enough to smooth annual variations but not necessarily variations having a longer period, such as those due to solar inertial motion ; the magnitude of these surface temperature variations is estimated.
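The smoothing role of a thermal time constant can be illustrated by treating the global surface temperature as a first-order low-pass filter. The sketch below is a minimal example of that idea; the time constant used is an assumed placeholder, not a figure from the abstract.

```python
import math

# Sketch: a first-order system with time constant tau responds to sinusoidal forcing
# of period P with amplitude reduced by 1 / sqrt(1 + (2*pi*tau/P)**2).
# tau below is an assumed illustrative value, not a measured quantity.

def attenuation(period_years, tau_years):
    """Fractional amplitude response of a first-order system to sinusoidal forcing."""
    return 1.0 / math.sqrt(1.0 + (2.0 * math.pi * tau_years / period_years) ** 2)

tau = 5.0  # assumed effective thermal time constant, years
for period in (1.0, 11.0, 180.0):  # annual cycle, solar cycle, longer-period variation
    print(f"period {period:6.1f} yr -> response fraction {attenuation(period, tau):.3f}")
```

With these assumed numbers the annual cycle is strongly damped while a multi-century variation passes through almost unattenuated, which is the qualitative point made in the passage.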
all christian authors held that the earth was round. athenagoras, an eastern christian writing around the year 175 ad, said that the earth was spherical. methodius ( c. 290 ad ), an eastern christian writing against " the theory of the chaldeans and the egyptians " said : " let us first lay bare... the theory of the chaldeans and the egyptians. they say that the circumference of the universe is likened to the turnings of a well - rounded globe, the earth being a central point. they say that since its outline is spherical,... the earth should be the center of the universe, around which the heaven is whirling. " arnobius, another eastern christian writing sometime around 305 ad, described the round earth : " in the first place, indeed, the world itself is neither right nor left. it has neither upper nor lower regions, nor front nor back. for whatever is round and bounded on every side by the circumference of a solid sphere, has no beginning or end... " other advocates of a round earth included eusebius, hilary of poitiers, irenaeus, hippolytus of rome, firmicus maternus, ambrose, jerome, prudentius, favonius eulogius, and others. the only exceptions to this consensus up until the mid - fourth century were theophilus of antioch and lactantius, both of whom held anti - hellenistic views and associated the round - earth view with pagan cosmology. lactantius, a western christian writer and advisor to the first christian roman emperor, constantine, writing sometime between 304 and 313 ad, ridiculed the notion of antipodes and the philosophers who fancied that " the universe is round like a ball. they also thought that heaven revolves in accordance with the motion of the heavenly bodies.... for that reason, they constructed brass globes, as though after the figure of the universe. " the influential theologian and philosopher saint augustine, one of the four great church fathers of the western church, similarly objected to the " fable " of antipodes : but as to the fable that there are antipodes, that is to say, men on the opposite side of the earth, where the sun rises when it sets to us, men who walk with their feet opposite ours that is on no ground credible. and, indeed, it is not affirmed that this has been learned by historical knowledge, but by scientific conjecture
outer satellites of the planets have distant, eccentric orbits that can be highly inclined or even retrograde relative to the equatorial planes of their planets. these irregular orbits cannot have formed by circumplanetary accretion and are likely products of early capture from heliocentric orbit. the irregular satellites may be the only small bodies remaining which are still relatively near their formation locations within the giant planet region. the study of the irregular satellites provides a unique window on processes operating in the young solar system and allows us to probe possible planet formation mechanisms and the composition of the solar nebula between the rocky objects in the main asteroid belt and the very volatile rich objects in the kuiper belt. the gas and ice giant planets all appear to have very similar irregular satellite systems irrespective of their mass or formation timescales and mechanisms. water ice has been detected on some of the outer satellites of saturn and neptune whereas none has been observed on jupiter ' s outer satellites.
the presence of a co - orbital companion induces the splitting of the well known keplerian spin - orbit resonances. it leads to chaotic rotation when those resonances overlap.
armed with an astrolabe and kepler ' s laws, one can arrive at accurate estimates of the orbits of the planets.
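For concreteness, a minimal sketch of the Kepler's-third-law estimate: for bodies orbiting the Sun, with the semi-major axis in astronomical units and the period in years, T squared equals a cubed. The semi-major axes below are approximate textbook values used only as examples.

```python
# Kepler's third law for Sun-centred orbits: T**2 = a**3 with a in AU and T in years.

def orbital_period_years(a_au):
    """Orbital period in years from semi-major axis in AU (orbits around the Sun)."""
    return a_au ** 1.5

for name, a in [("Mercury", 0.39), ("Earth", 1.0), ("Mars", 1.52), ("Jupiter", 5.20)]:
    print(f"{name:8s} a = {a:4.2f} AU -> T ~ {orbital_period_years(a):6.2f} yr")
```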
Question: The motion of Earth is responsible for several celestial events. Which of the following events is caused by Earth revolving around the sun?
A) the days in a year
B) the hours in a day
C) the changes in the atmosphere of Earth
D) the position of the constellations in space
|
A) the days in a year
|
Context:
an important question of theoretical physics is whether sound is able to propagate in a vacuum at all ; if it is, this must lead to the reinterpretation of one zero - restmass particle which corresponds to vacuum - sound waves. taking the electron - neutrino as the corresponding particle, its observed non - vanishing rest - energy may only appear for neutrino propagation inside material media. the idea may also influence the physics of dense matter, restricting the maximum speed of sound, both in a vacuum and in matter, to the speed of light.
the thickness and the density of the material to be measured. the method is used for containers of liquids or of grainy substances. thickness gauges : if the material is of constant density, the signal measured by the radiation detector depends on the thickness of the material. this is useful for continuous production of materials such as paper, rubber, etc. electrostatic control - to avoid the build - up of static electricity in production of paper, plastics, synthetic textiles, etc., a ribbon - shaped source of the alpha emitter 241am can be placed close to the material at the end of the production line. the source ionizes the air to remove electric charges on the material. radioactive tracers - since radioactive isotopes behave, chemically, mostly like the inactive element, the behavior of a certain chemical substance can be followed by tracing the radioactivity. examples : adding a gamma tracer to a gas or liquid in a closed system makes it possible to find a hole in a tube. adding a tracer to the surface of the component of a motor makes it possible to measure wear by measuring the activity of the lubricating oil. oil and gas exploration - nuclear well logging is used to help predict the commercial viability of new or existing wells. the technology involves the use of a neutron or gamma - ray source and a radiation detector which are lowered into boreholes to determine the properties of the surrounding rock such as porosity and lithology. road construction - nuclear moisture / density gauges are used to determine the density of soils, asphalt, and concrete. typically a cesium - 137 source is used. = = = commercial applications = = = radioluminescence tritium illumination : tritium is used with phosphor in rifle sights to increase nighttime firing accuracy. some runway markers and building exit signs use the same technology, to remain illuminated during blackouts. betavoltaics. smoke detector : an ionization smoke detector includes a tiny mass of radioactive americium - 241, which is a source of alpha radiation. two ionisation chambers are placed next to each other. both contain a small source of 241am that gives rise to a small constant current. one is closed and serves for comparison, the other is open to ambient air ; it has a gridded electrode. when smoke enters the open chamber, the current is disrupted as the smoke particles attach to the charged ions and restore them to a neutral electrical state. this reduces the current in the open chamber. when the current drops below a certain threshold, the
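The thickness-gauge idea described above can be sketched with a simple exponential-attenuation model: for material of constant density the transmitted intensity falls roughly as I = I0 * exp(-mu * x), so the thickness can be recovered from the detector reading. The attenuation coefficient and count rates below are invented example values, not properties of any specific source or material.

```python
import math

# Minimal sketch of a radiometric thickness gauge based on exponential attenuation.
# mu, i0 and i are placeholder example numbers.

def thickness_from_intensity(i_measured, i_source, mu_per_mm):
    """Infer material thickness (mm) from transmitted vs unattenuated intensity."""
    return -math.log(i_measured / i_source) / mu_per_mm

mu = 0.05          # assumed attenuation coefficient, 1/mm
i0 = 1000.0        # counts per second with no material in the beam
i = 750.0          # counts per second measured behind the material

print(f"estimated thickness ~ {thickness_from_intensity(i, i0, mu):.1f} mm")
```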
when fast radio burst ( frb ) waves propagate through the local ( < 1 pc ) environment of the frb source, electrons in the plasma undergo large - amplitude oscillations. the finite - amplitude effects cause the effective plasma frequency and cyclotron frequency to be dependent on the wave strength. the dispersion measure and rotation measure should therefore vary slightly from burst to burst for a repeating source, depending on the luminosity and frequency of the individual burst. furthermore, free - free absorption of strong waves is suppressed due to the accelerated electrons ' reduced energy exchange in coulomb collisions. this allows bright low - frequency bursts to propagate through an environment that would be optically thick to low - amplitude waves. given a large sample of bursts from a repeating source, it would be possible to use the deficit of low - frequency and low - luminosity bursts to infer the emission measure of the local intervening plasma and its distance from the source. information about the local environment will shed light on the nature of frb sources.
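For context, the sketch below evaluates the ordinary low-amplitude cold-plasma dispersion delay that defines the dispersion measure; the abstract above concerns small burst-to-burst corrections to this baseline relation. The DM value and band edges used here are arbitrary examples.

```python
# Standard (low-amplitude) dispersion delay: delta_t ~ 4.149 ms * DM * (nu_lo^-2 - nu_hi^-2)
# with DM in pc cm^-3 and frequencies in GHz.

def dispersion_delay_ms(dm_pc_cm3, nu_lo_ghz, nu_hi_ghz):
    """Arrival-time delay (ms) of the low-frequency band edge relative to the high edge."""
    return 4.149 * dm_pc_cm3 * (nu_lo_ghz ** -2 - nu_hi_ghz ** -2)

# Example: an assumed DM of 500 pc cm^-3 observed across a 1.0-1.4 GHz band
print(f"delay ~ {dispersion_delay_ms(500.0, 1.0, 1.4):.0f} ms")
```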
beam reveals the object ' s location. since radio waves travel at a constant speed close to the speed of light, by measuring the brief time delay between the outgoing pulse and the received " echo ", the range to the target can be calculated. the targets are often displayed graphically on a map display called a radar screen. doppler radar can measure a moving object ' s velocity, by measuring the change in frequency of the return radio waves due to the doppler effect. radar sets mainly use high frequencies in the microwave bands, because these frequencies create strong reflections from objects the size of vehicles and can be focused into narrow beams with compact antennas. parabolic ( dish ) antennas are widely used. in most radars the transmitting antenna also serves as the receiving antenna ; this is called a monostatic radar. a radar which uses separate transmitting and receiving antennas is called a bistatic radar. airport surveillance radar β in aviation, radar is the main tool of air traffic control. a rotating dish antenna sweeps a vertical fan - shaped beam of microwaves around the airspace and the radar set shows the location of aircraft as " blips " of light on a display called a radar screen. airport radar operates at 2. 7 β 2. 9 ghz in the microwave s band. in large airports the radar image is displayed on multiple screens in an operations room called the tracon ( terminal radar approach control ), where air traffic controllers direct the aircraft by radio to maintain safe aircraft separation. secondary surveillance radar β aircraft carry radar transponders, transceivers which when triggered by the incoming radar signal transmit a return microwave signal. this causes the aircraft to show up more strongly on the radar screen. the radar which triggers the transponder and receives the return beam, usually mounted on top of the primary radar dish, is called the secondary surveillance radar. since radar cannot measure an aircraft ' s altitude with any accuracy, the transponder also transmits back the aircraft ' s altitude measured by its altimeter, and an id number identifying the aircraft, which is displayed on the radar screen. electronic countermeasures ( ecm ) β military defensive electronic systems designed to degrade enemy radar effectiveness, or deceive it with false information, to prevent enemies from locating local forces. it often consists of powerful microwave transmitters that can mimic enemy radar signals to create false target indications on the enemy radar screens. marine radar β an s or x band radar on ships used to detect nearby ships and obstructions like bridges. a rotating antenna sweeps a vertical
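The two basic radar relations mentioned above, range from the round-trip echo delay and radial speed from the Doppler shift, can be sketched as follows. The pulse delay, carrier frequency and Doppler shift are invented example numbers used only to show the arithmetic.

```python
# range = c * dt / 2 (round trip), radial speed ~ c * df / (2 * f) for a monostatic radar.

C = 3.0e8  # speed of light, m/s

def radar_range_m(echo_delay_s):
    """Target range from the time between transmitted pulse and received echo."""
    return C * echo_delay_s / 2.0

def radial_speed_m_s(doppler_shift_hz, carrier_hz):
    """Radial target speed from the Doppler shift of the returned signal."""
    return C * doppler_shift_hz / (2.0 * carrier_hz)

dt = 200e-6        # 200 microseconds between pulse and echo (example)
f0 = 2.8e9         # 2.8 GHz S-band carrier (example)
df = 1.9e3         # 1.9 kHz measured Doppler shift (example)

print(f"target range ~ {radar_range_m(dt) / 1000:.0f} km")
print(f"radial speed ~ {radial_speed_m_s(df, f0):.0f} m/s")
```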
wave, carrying an information signal, occupies a range of frequencies. the information in a radio signal is usually concentrated in narrow frequency bands called sidebands ( sb ) just above and below the carrier frequency. the width in hertz of the frequency range that the radio signal occupies, the highest frequency minus the lowest frequency, is called its bandwidth ( bw ). for any given signal - to - noise ratio, a given bandwidth can carry the same amount of information regardless of where in the radio frequency spectrum it is located ; bandwidth is a measure of information - carrying capacity. the bandwidth required by a radio transmission depends on the data rate of the information being sent, and the spectral efficiency of the modulation method used ; how much data it can transmit in each unit of bandwidth. different types of information signals carried by radio have different data rates. for example, a television signal has a greater data rate than an audio signal. the radio spectrum, the total range of radio frequencies that can be used for communication in a given area, is a limited resource. each radio transmission occupies a portion of the total bandwidth available. radio bandwidth is regarded as an economic good which has a monetary cost and is in increasing demand. in some parts of the radio spectrum, the right to use a frequency band or even a single radio channel is bought and sold for millions of dollars. so there is an incentive to employ technology to minimize the bandwidth used by radio services. a slow transition from analog to digital radio transmission technologies began in the late 1990s. part of the reason for this is that digital modulation can often transmit more information ( a greater data rate ) in a given bandwidth than analog modulation, by using data compression algorithms, which reduce redundancy in the data to be sent, and more efficient modulation. other reasons for the transition is that digital modulation has greater noise immunity than analog, digital signal processing chips have more power and flexibility than analog circuits, and a wide variety of types of information can be transmitted using the same digital modulation. because it is a fixed resource which is in demand by an increasing number of users, the radio spectrum has become increasingly congested in recent decades, and the need to use it more effectively is driving many additional radio innovations such as trunked radio systems, spread spectrum ( ultra - wideband ) transmission, frequency reuse, dynamic spectrum management, frequency pooling, and cognitive radio. = = = itu frequency bands = = = the itu arbitrarily divides the radio spectrum into 12 bands, each beginning at a wavelength which is a power
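The statement that, for a given signal-to-noise ratio, bandwidth fixes the information-carrying capacity is captured by the Shannon-Hartley limit, C = B * log2(1 + SNR). The sketch below uses arbitrary example values for the bandwidth and SNR; it is an upper bound, not the rate of any particular modulation scheme.

```python
import math

# Shannon-Hartley channel capacity as an upper bound on error-free data rate.

def channel_capacity_bps(bandwidth_hz, snr_linear):
    """Capacity in bits per second for a given bandwidth and linear SNR."""
    return bandwidth_hz * math.log2(1.0 + snr_linear)

bw = 200e3            # 200 kHz channel (example)
for snr_db in (10, 20, 30):
    snr = 10 ** (snr_db / 10)
    print(f"SNR {snr_db:2d} dB -> capacity ~ {channel_capacity_bps(bw, snr) / 1e3:.0f} kbit/s")
```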
a graviton laser works, in principle, by the stimulated emission of coherent gravitons from a lasing medium. for significant amplification, we must have a very long path length and / or very high densities. black holes and the existence of weakly interacting sub - ev dark matter particles ( wisps ) solve both of these obstacles. orbiting trajectories for massless particles around black holes are well understood \ cite { mtw } and allow for arbitrarily long graviton path lengths. superradiance from kerr black holes of wisps can provide the sufficiently high density \ cite { abh }. this suggests that black holes can act as efficient graviton lasers. thus directed graviton laser beams have been emitted since the beginning of the universe and give rise to new sources of gravitational wave signals. to be in the path of particularly harmfully amplified graviton death rays will not be pleasant.
is called its bandwidth ( bw ). for any given signal - to - noise ratio, a given bandwidth can carry the same amount of information regardless of where in the radio frequency spectrum it is located ; bandwidth is a measure of information - carrying capacity. the bandwidth required by a radio transmission depends on the data rate of the information being sent, and the spectral efficiency of the modulation method used ; how much data it can transmit in each unit of bandwidth. different types of information signals carried by radio have different data rates. for example, a television signal has a greater data rate than an audio signal. the radio spectrum, the total range of radio frequencies that can be used for communication in a given area, is a limited resource. each radio transmission occupies a portion of the total bandwidth available. radio bandwidth is regarded as an economic good which has a monetary cost and is in increasing demand. in some parts of the radio spectrum, the right to use a frequency band or even a single radio channel is bought and sold for millions of dollars. so there is an incentive to employ technology to minimize the bandwidth used by radio services. a slow transition from analog to digital radio transmission technologies began in the late 1990s. part of the reason for this is that digital modulation can often transmit more information ( a greater data rate ) in a given bandwidth than analog modulation, by using data compression algorithms, which reduce redundancy in the data to be sent, and more efficient modulation. other reasons for the transition is that digital modulation has greater noise immunity than analog, digital signal processing chips have more power and flexibility than analog circuits, and a wide variety of types of information can be transmitted using the same digital modulation. because it is a fixed resource which is in demand by an increasing number of users, the radio spectrum has become increasingly congested in recent decades, and the need to use it more effectively is driving many additional radio innovations such as trunked radio systems, spread spectrum ( ultra - wideband ) transmission, frequency reuse, dynamic spectrum management, frequency pooling, and cognitive radio. = = = itu frequency bands = = = the itu arbitrarily divides the radio spectrum into 12 bands, each beginning at a wavelength which is a power of ten ( 10n ) metres, with corresponding frequency of 3 times a power of ten, and each covering a decade of frequency or wavelength. each of these bands has a traditional name : it can be seen that the bandwidth, the range of frequencies, contained in each band is not equal but increases exponentially as the
nanodust, which undergoes stochastic heating by single starlight photons in the interstellar medium, ranges from angstrom - sized large molecules containing tens to thousands of atoms ( e. g. polycyclic aromatic hydrocarbon molecules ) to grains of a couple tens of nanometers. the presence of nanograins in astrophysical environments has been revealed by a variety of interstellar phenomena : the optical luminescence, the near - and mid - infrared emission, the galactic foreground microwave emission, and the ultraviolet extinction which are ubiquitously seen in the interstellar medium of the milky way and beyond. nanograins ( e. g. nanodiamonds ) have also been identified as presolar in primitive meteorites based on their isotopically anomalous composition. considering the very processes that lead to the detection of nanodust in the ism for the nanodust in the solar system shows that the observation of solar system nanodust by these processes is less likely.
the optical activity of a chiral medium is discussed from the viewpoint of energy transfer. the energy absorbed from polarized light in the optically active medium is transferred to the mechanical rotation of the chiral molecules. the molecules acquire a helicity - dependent geometric phase due to the passage of the polarized light, which loses energy in producing the optical rotation. the entanglement of a polarized photon and a fermion is the source of this behavior. this theoretical knowledge is reflected in an experimental study of six essential and five non - essential amino acids.
of a light source can be measured with a spectroradiometer, which works by optically collecting the light, then passing it through a monochromator before reading it in narrow bands of wavelength. reflected color can be measured using a spectrophotometer ( also called spectroreflectometer or reflectometer ), which takes measurements in the visible region ( and a little beyond ) of a given color sample. if the custom of taking readings at 10 nanometer increments is followed, the visible light range of 400 β 700 nm will yield 31 readings. these readings are typically used to draw the sample ' s spectral reflectance curve ( how much it reflects, as a function of wavelength ) β the most accurate data that can be provided regarding its characteristics. the readings by themselves are typically not as useful as their tristimulus values, which can be converted into chromaticity co - ordinates and manipulated through color space transformations. for this purpose, a spectrocolorimeter may be used. a spectrocolorimeter is simply a spectrophotometer that can estimate tristimulus values by numerical integration ( of the color matching functions ' inner product with the illuminant ' s spectral power distribution ). one benefit of spectrocolorimeters over tristimulus colorimeters is that they do not have optical filters, which are subject to manufacturing variance, and have a fixed spectral transmittance curve β until they age. on the other hand, tristimulus colorimeters are purpose - built, cheaper, and easier to use. the cie ( international commission on illumination ) recommends using measurement intervals under 5 nm, even for smooth spectra. sparser measurements fail to accurately characterize spiky emission spectra, such as that of the red phosphor of a crt display, depicted aside. = = = color temperature meter = = = photographers and cinematographers use information provided by these meters to decide what color balancing should be done to make different light sources appear to have the same color temperature. if the user enters the reference color temperature, the meter can calculate the mired difference between the measurement and the reference, enabling the user to choose a corrective color gel or photographic filter with the closest mired factor. internally the meter is typically a silicon photodiode tristimulus colorimeter. the correlated color temperature can be calculated from the tristimulus values by first calculating the chromaticity co - ordinates in the cie 1960 color space, then finding the closest
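The numerical integration a spectrocolorimeter performs can be sketched as below: tristimulus values are inner products of the colour-matching functions with the illuminant's spectral power distribution times the sample reflectance, sampled at 10 nm steps over 400-700 nm (31 bands). The colour-matching-function, illuminant and reflectance arrays here are random placeholders standing in for real measured data, so only the computation pattern is meaningful.

```python
import numpy as np

# Sketch of tristimulus computation by numerical integration at 10 nm steps.
# The arrays are random stand-ins for real CIE colour-matching functions,
# an illuminant SPD and a measured reflectance curve.

wavelengths = np.arange(400, 701, 10)                     # 31 readings, 400-700 nm
rng = np.random.default_rng(0)
x_bar, y_bar, z_bar = rng.random((3, wavelengths.size))   # placeholder CMFs
illuminant = rng.random(wavelengths.size)                 # placeholder illuminant SPD
reflectance = rng.random(wavelengths.size)                # placeholder sample reflectance

stimulus = illuminant * reflectance
k = 100.0 / np.sum(y_bar * illuminant)    # normalise so a perfect reflector has Y = 100

X = k * np.sum(x_bar * stimulus)
Y = k * np.sum(y_bar * stimulus)
Z = k * np.sum(z_bar * stimulus)
print(f"X = {X:.1f}, Y = {Y:.1f}, Z = {Z:.1f}")
```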
Question: What takes place as a light wave enters a denser medium?
A) it is reflected
B) it is absorbed
C) it is refracted
D) it is compressed
|
C) it is refracted
|
Context:
time - dependent distribution of the global extinction of megafauna is compared with the growth of human population. there is no correlation between the two processes. furthermore, the size of human population and its growth rate were far too small to have any significant impact on the environment and on the life of megafauna.
, lightning strikes, tornadoes, building fires, wildfires, and mass shootings disabling most of the system if not the entirety of it. geographic redundancy locations can be more than 621 miles ( 999 km ) continental, more than 62 miles apart and less than 93 miles ( 150 km ) apart, less than 62 miles apart, but not on the same campus, or different buildings that are more than 300 feet ( 91 m ) apart on the same campus. the following methods can reduce the risks of damage by a fire conflagration : large buildings at least 80 feet ( 24 m ) to 110 feet ( 34 m ) apart, but sometimes a minimum of 210 feet ( 64 m ) apart. : 9 high - rise buildings at least 82 feet ( 25 m ) apart : 12 open spaces clear of flammable vegetation within 200 feet ( 61 m ) on each side of objects different wings on the same building, in rooms that are separated by more than 300 feet ( 91 m ) different floors on the same wing of a building in rooms that are horizontally offset by a minimum of 70 feet ( 21 m ) with fire walls between the rooms that are on different floors two rooms separated by another room, leaving at least a 70 - foot gap between the two rooms there should be a minimum of two separated fire walls and on opposite sides of a corridor geographic redundancy is used by amazon web services ( aws ), google cloud platform ( gcp ), microsoft azure, netflix, dropbox, salesforce, linkedin, paypal, twitter, facebook, apple icloud, cisco meraki, and many others to provide geographic redundancy, high availability, fault tolerance and to ensure availability and reliability for their cloud services. as another example, to minimize risk of damage from severe windstorms or water damage, buildings can be located at least 2 miles ( 3. 2 km ) away from the shore, with an elevation of at least 5 feet ( 1. 5 m ) above sea level. for additional protection, they can be located at least 100 feet ( 30 m ) away from flood plain areas. = = functions of redundancy = = the two functions of redundancy are passive redundancy and active redundancy. both functions prevent performance decline from exceeding specification limits without human intervention using extra capacity. passive redundancy uses excess capacity to reduce the impact of component failures. one common form of passive redundancy is the extra strength of cabling and struts used in bridges.
they may have been present earlier, their diversification accelerated when they started using oxygen in their metabolism. later, around 1. 7 billion years ago, multicellular organisms began to appear, with differentiated cells performing specialised functions. algae - like multicellular land plants are dated back to about 1 billion years ago, although evidence suggests that microorganisms formed the earliest terrestrial ecosystems, at least 2. 7 billion years ago. microorganisms are thought to have paved the way for the inception of land plants in the ordovician period. land plants were so successful that they are thought to have contributed to the late devonian extinction event. ediacara biota appear during the ediacaran period, while vertebrates, along with most other modern phyla originated about 525 million years ago during the cambrian explosion. during the permian period, synapsids, including the ancestors of mammals, dominated the land, but most of this group became extinct in the permian β triassic extinction event 252 million years ago. during the recovery from this catastrophe, archosaurs became the most abundant land vertebrates ; one archosaur group, the dinosaurs, dominated the jurassic and cretaceous periods. after the cretaceous β paleogene extinction event 66 million years ago killed off the non - avian dinosaurs, mammals increased rapidly in size and diversity. such mass extinctions may have accelerated evolution by providing opportunities for new groups of organisms to diversify. = = diversity = = = = = bacteria and archaea = = = bacteria are a type of cell that constitute a large domain of prokaryotic microorganisms. typically a few micrometers in length, bacteria have a number of shapes, ranging from spheres to rods and spirals. bacteria were among the first life forms to appear on earth, and are present in most of its habitats. bacteria inhabit soil, water, acidic hot springs, radioactive waste, and the deep biosphere of the earth ' s crust. bacteria also live in symbiotic and parasitic relationships with plants and animals. most bacteria have not been characterised, and only about 27 percent of the bacterial phyla have species that can be grown in the laboratory. archaea constitute the other domain of prokaryotic cells and were initially classified as bacteria, receiving the name archaebacteria ( in the archaebacteria kingdom ), a term that has fallen out of use. archaeal cells have unique properties separating them from the other two domains, bacteria and eukaryota. archaea
and their competitive or mutualistic interactions with other species. some ecologists even rely on empirical data from indigenous people that is gathered by ethnobotanists. this information can relay a great deal of information on how the land once was thousands of years ago and how it has changed over that time. the goals of plant ecology are to understand the causes of their distribution patterns, productivity, environmental impact, evolution, and responses to environmental change. plants depend on certain edaphic ( soil ) and climatic factors in their environment but can modify these factors too. for example, they can change their environment ' s albedo, increase runoff interception, stabilise mineral soils and develop their organic content, and affect local temperature. plants compete with other organisms in their ecosystem for resources. they interact with their neighbours at a variety of spatial scales in groups, populations and communities that collectively constitute vegetation. regions with characteristic vegetation types and dominant plants as well as similar abiotic and biotic factors, climate, and geography make up biomes like tundra or tropical rainforest. herbivores eat plants, but plants can defend themselves and some species are parasitic or even carnivorous. other organisms form mutually beneficial relationships with plants. for example, mycorrhizal fungi and rhizobia provide plants with nutrients in exchange for food, ants are recruited by ant plants to provide protection, honey bees, bats and other animals pollinate flowers and humans and other animals act as dispersal vectors to spread spores and seeds. = = = plants, climate and environmental change = = = plant responses to climate and other environmental changes can inform our understanding of how these changes affect ecosystem function and productivity. for example, plant phenology can be a useful proxy for temperature in historical climatology, and the biological impact of climate change and global warming. palynology, the analysis of fossil pollen deposits in sediments from thousands or millions of years ago allows the reconstruction of past climates. estimates of atmospheric co2 concentrations since the palaeozoic have been obtained from stomatal densities and the leaf shapes and sizes of ancient land plants. ozone depletion can expose plants to higher levels of ultraviolet radiation - b ( uv - b ), resulting in lower growth rates. moreover, information from studies of community ecology, plant systematics, and taxonomy is essential to understanding vegetation change, habitat destruction and species extinction. = = genetics = = inheritance in plants follows the same fundamental principles of genetics as in other multicellular organisms. gregor mendel discovered the genetic laws of inheritance by studying
industrial applications. this branch of biotechnology is the most used for the industries of refining and combustion principally on the production of bio - oils with photosynthetic micro - algae. green biotechnology is biotechnology applied to agricultural processes. an example would be the selection and domestication of plants via micropropagation. another example is the designing of transgenic plants to grow under specific environments in the presence ( or absence ) of chemicals. one hope is that green biotechnology might produce more environmentally friendly solutions than traditional industrial agriculture. an example of this is the engineering of a plant to express a pesticide, thereby ending the need of external application of pesticides. an example of this would be bt corn. whether or not green biotechnology products such as this are ultimately more environmentally friendly is a topic of considerable debate. it is commonly considered as the next phase of green revolution, which can be seen as a platform to eradicate world hunger by using technologies which enable the production of more fertile and resistant, towards biotic and abiotic stress, plants and ensures application of environmentally friendly fertilizers and the use of biopesticides, it is mainly focused on the development of agriculture. on the other hand, some of the uses of green biotechnology involve microorganisms to clean and reduce waste. red biotechnology is the use of biotechnology in the medical and pharmaceutical industries, and health preservation. this branch involves the production of vaccines and antibiotics, regenerative therapies, creation of artificial organs and new diagnostics of diseases. as well as the development of hormones, stem cells, antibodies, sirna and diagnostic tests. white biotechnology, also known as industrial biotechnology, is biotechnology applied to industrial processes. an example is the designing of an organism to produce a useful chemical. another example is the using of enzymes as industrial catalysts to either produce valuable chemicals or destroy hazardous / polluting chemicals. white biotechnology tends to consume less in resources than traditional processes used to produce industrial goods. yellow biotechnology refers to the use of biotechnology in food production ( food industry ), for example in making wine ( winemaking ), cheese ( cheesemaking ), and beer ( brewing ) by fermentation. it has also been used to refer to biotechnology applied to insects. this includes biotechnology - based approaches for the control of harmful insects, the characterisation and utilisation of active ingredients or genes of insects for research, or application in agriculture and medicine and various other approaches. gray biotechnology is dedicated to environmental applications, and focused on the maintenance of biodiversity and the remotion of poll
by which botanists group organisms into categories such as genera or species. biological classification is a form of scientific taxonomy. modern taxonomy is rooted in the work of carl linnaeus, who grouped species according to shared physical characteristics. these groupings have since been revised to align better with the darwinian principle of common descent β grouping organisms by ancestry rather than superficial characteristics. while scientists do not always agree on how to classify organisms, molecular phylogenetics, which uses dna sequences as data, has driven many recent revisions along evolutionary lines and is likely to continue to do so. the dominant classification system is called linnaean taxonomy. it includes ranks and binomial nomenclature. the nomenclature of botanical organisms is codified in the international code of nomenclature for algae, fungi, and plants ( icn ) and administered by the international botanical congress. kingdom plantae belongs to domain eukaryota and is broken down recursively until each species is separately classified. the order is : kingdom ; phylum ( or division ) ; class ; order ; family ; genus ( plural genera ) ; species. the scientific name of a plant represents its genus and its species within the genus, resulting in a single worldwide name for each organism. for example, the tiger lily is lilium columbianum. lilium is the genus, and columbianum the specific epithet. the combination is the name of the species. when writing the scientific name of an organism, it is proper to capitalise the first letter in the genus and put all of the specific epithet in lowercase. additionally, the entire term is ordinarily italicised ( or underlined when italics are not available ). the evolutionary relationships and heredity of a group of organisms is called its phylogeny. phylogenetic studies attempt to discover phylogenies. the basic approach is to use similarities based on shared inheritance to determine relationships. as an example, species of pereskia are trees or bushes with prominent leaves. they do not obviously resemble a typical leafless cactus such as an echinocactus. however, both pereskia and echinocactus have spines produced from areoles ( highly specialised pad - like structures ) suggesting that the two genera are indeed related. judging relationships based on shared characters requires care, since plants may resemble one another through convergent evolution in which characters have arisen independently. some euphorbias have leafless, rounded bodies adapted to water conservation similar to those of globular cacti, but characters such as the structure of their flowers make it clear that the
the designing of transgenic plants to grow under specific environments in the presence ( or absence ) of chemicals. one hope is that green biotechnology might produce more environmentally friendly solutions than traditional industrial agriculture. an example of this is the engineering of a plant to express a pesticide, thereby ending the need of external application of pesticides. an example of this would be bt corn. whether or not green biotechnology products such as this are ultimately more environmentally friendly is a topic of considerable debate. it is commonly considered as the next phase of green revolution, which can be seen as a platform to eradicate world hunger by using technologies which enable the production of more fertile and resistant, towards biotic and abiotic stress, plants and ensures application of environmentally friendly fertilizers and the use of biopesticides, it is mainly focused on the development of agriculture. on the other hand, some of the uses of green biotechnology involve microorganisms to clean and reduce waste. red biotechnology is the use of biotechnology in the medical and pharmaceutical industries, and health preservation. this branch involves the production of vaccines and antibiotics, regenerative therapies, creation of artificial organs and new diagnostics of diseases. as well as the development of hormones, stem cells, antibodies, sirna and diagnostic tests. white biotechnology, also known as industrial biotechnology, is biotechnology applied to industrial processes. an example is the designing of an organism to produce a useful chemical. another example is the using of enzymes as industrial catalysts to either produce valuable chemicals or destroy hazardous / polluting chemicals. white biotechnology tends to consume less in resources than traditional processes used to produce industrial goods. yellow biotechnology refers to the use of biotechnology in food production ( food industry ), for example in making wine ( winemaking ), cheese ( cheesemaking ), and beer ( brewing ) by fermentation. it has also been used to refer to biotechnology applied to insects. this includes biotechnology - based approaches for the control of harmful insects, the characterisation and utilisation of active ingredients or genes of insects for research, or application in agriculture and medicine and various other approaches. gray biotechnology is dedicated to environmental applications, and focused on the maintenance of biodiversity and the remotion of pollutants. brown biotechnology is related to the management of arid lands and deserts. one application is the creation of enhanced seeds that resist extreme environmental conditions of arid regions, which is related to the innovation, creation of agriculture techniques and management of resources. violet biotechnology is related to law, ethical and philosophical issues around biotechnology. micro
= = = environmental remediation = = = environmental remediation is the process through which contaminants or pollutants in soil, water and other media are removed to improve environmental quality. the main focus is the reduction of hazardous substances within the environment. some of the areas involved in environmental remediation include soil contamination, hazardous waste, groundwater contamination, oil, gas and chemical spills. the three most common types of environmental remediation are soil, water, and sediment remediation. soil remediation consists of removing contaminants in soil, as these pose great risks to humans and the ecosystem. some examples of this are heavy metals, pesticides, and radioactive materials. depending on the contaminant, the remedial processes can be physical, chemical, thermal, or biological. water remediation is one of the most important types, since water is an essential natural resource. depending on the source of the water, different contaminants will be present. surface water contamination mainly consists of agricultural, animal, and industrial waste, as well as acid mine drainage. there has been a rise in the need for water remediation due to the increased discharge of industrial waste, leading to a demand for sustainable water solutions. the market for water remediation is expected to consistently increase to $ 19. 6 billion by 2030. sediment remediation consists of removing contaminated sediments. it is broadly similar to soil remediation, except that it is often more sophisticated, as it involves additional contaminants. to reduce the contaminants, physical, chemical, and biological processes are typically used to help with source control, but if these processes are not executed correctly, there is a risk of contamination resurfacing. = = = solid waste management = = = solid waste management is the purification, consumption, reuse, disposal, and treatment of solid waste that is undertaken by the government or the ruling bodies of a city / town. it refers to the collection, treatment, and disposal of non - soluble, solid waste material. solid waste is associated with industrial, institutional, commercial and residential activities. hazardous solid waste, when improperly disposed of, can encourage the infestation of insects and rodents, contributing to the spread of diseases. some of the most common types of solid waste management include landfills, vermicomposting, composting, recycling, and incineration. however, a major barrier for solid waste management practices is the high costs associated with recycling
approximately corresponds to the slope of the country it traverses ; as rivers rise close to the highest part of their basins, generally in hilly regions, their fall is rapid near their source and gradually diminishes, with occasional irregularities, until, in traversing plains along the latter part of their course, their fall usually becomes quite gentle. accordingly, in large basins, rivers in most cases begin as torrents with a variable flow, and end as gently flowing rivers with a comparatively regular discharge. the irregular flow of rivers throughout their course forms one of the main difficulties in devising works for mitigating inundations or for increasing the navigable capabilities of rivers. in tropical countries subject to periodical rains, the rivers are in flood during the rainy season and have hardly any flow during the rest of the year, while in temperate regions, where the rainfall is more evenly distributed throughout the year, evaporation causes the available rainfall to be much less in hot summer weather than in the winter months, so that the rivers fall to their low stage in the summer and are liable to be in flood in the winter. in fact, with a temperate climate, the year may be divided into a warm and a cold season, extending from may to october and from november to april in the northern hemisphere respectively ; the rivers are low and moderate floods are of rare occurrence during the warm period, and the rivers are high and subject to occasional heavy floods after a considerable rainfall during the cold period in most years. the only exceptions are rivers which have their sources amongst mountains clad with perpetual snow and are fed by glaciers ; their floods occur in the summer from the melting of snow and ice, as exemplified by the rhone above the lake of geneva, and the arve which joins it below. but even these rivers are liable to have their flow modified by the influx of tributaries subject to different conditions, so that the rhone below lyon has a more uniform discharge than most rivers, as the summer floods of the arve are counteracted to a great extent by the low stage of the saone flowing into the rhone at lyon, which has its floods in the winter when the arve, on the contrary, is low. another serious obstacle encountered in river engineering consists in the large quantity of detritus they bring down in flood - time, derived mainly from the disintegration of the surface layers of the hills and slopes in the upper parts of the valleys by glaciers, frost and rain. the power of a current to transport materials varies with its velocity, so that torrents with
depends on the extent of the continent in which it is situated, its position in relation to the hilly regions in which rivers generally arise and the sea into which they flow, and the distance between the source and the outlet into the sea of the river draining it. the rate of flow of rivers depends mainly upon their fall, also known as the gradient or slope. when two rivers of different sizes have the same fall, the larger river has the quicker flow, as its retardation by friction against its bed and banks is less in proportion to its volume than is the case with the smaller river. the fall available in a section of a river approximately corresponds to the slope of the country it traverses ; as rivers rise close to the highest part of their basins, generally in hilly regions, their fall is rapid near their source and gradually diminishes, with occasional irregularities, until, in traversing plains along the latter part of their course, their fall usually becomes quite gentle. accordingly, in large basins, rivers in most cases begin as torrents with a variable flow, and end as gently flowing rivers with a comparatively regular discharge. the irregular flow of rivers throughout their course forms one of the main difficulties in devising works for mitigating inundations or for increasing the navigable capabilities of rivers. in tropical countries subject to periodical rains, the rivers are in flood during the rainy season and have hardly any flow during the rest of the year, while in temperate regions, where the rainfall is more evenly distributed throughout the year, evaporation causes the available rainfall to be much less in hot summer weather than in the winter months, so that the rivers fall to their low stage in the summer and are liable to be in flood in the winter. in fact, with a temperate climate, the year may be divided into a warm and a cold season, extending from may to october and from november to april in the northern hemisphere respectively ; the rivers are low and moderate floods are of rare occurrence during the warm period, and the rivers are high and subject to occasional heavy floods after a considerable rainfall during the cold period in most years. the only exceptions are rivers which have their sources amongst mountains clad with perpetual snow and are fed by glaciers ; their floods occur in the summer from the melting of snow and ice, as exemplified by the rhone above the lake of geneva, and the arve which joins it below. but even these rivers are liable to have their flow modified by the influx of tributaries subject to different conditions, so that the rhone below lyon has a more uniform
Question: Deforestation in the rainforest can lead to localized extinctions of populations of some organisms. Once an area is deforested, which is most likely to decrease and result in organisms' extinctions?
A) the amount of annual rainfall
B) competition between consumers
C) the energy available to producers
D) diversity of the resources in the habitat
|
D) diversity of the resources in the habitat
|
Context:
##lling, pipe jacking and other operations. a caisson is sunk by self - weight, concrete or water ballast placed on top, or by hydraulic jacks. the leading edge ( or cutting shoe ) of the caisson is sloped out at a sharp angle to aid sinking in a vertical manner ; it is usually made of steel. the shoe is generally wider than the caisson to reduce friction, and the leading edge may be supplied with pressurised bentonite slurry, which swells in water, stabilizing settlement by filling depressions and voids. an open caisson may fill with water during sinking. the material is excavated by clamshell excavator bucket on crane. the formation level subsoil may still not be suitable for excavation or bearing capacity. the water in the caisson ( due to a high water table ) balances the upthrust forces of the soft soils underneath. if dewatered, the base may " pipe " or " boil ", causing the caisson to sink. to combat this problem, piles may be driven from the surface to act as : load - bearing walls, in that they transmit loads to deeper soils. anchors, in that they resist flotation because of the friction at the interface between their surfaces and the surrounding earth into which they have been driven. h - beam sections ( typical column sections, due to resistance to bending in all axis ) may be driven at angles " raked " to rock or other firmer soils ; the h - beams are left extended above the base. a reinforced concrete plug may be placed under the water, a process known as tremie concrete placement. when the caisson is dewatered, this plug acts as a pile cap, resisting the upward forces of the subsoil. = = = monolithic = = = a monolithic caisson ( or simply a monolith ) is larger than the other types of caisson, but similar to open caissons. such caissons are often found in quay walls, where resistance to impact from ships is required. = = = pneumatic = = = shallow caissons may be open to the air, whereas pneumatic caissons ( sometimes called pressurized caissons ), which penetrate soft mud, are bottomless boxes sealed at the top and filled with compressed air to keep water and mud out at depth. an airlock allows access to the chamber. workers, called sandhogs in american english, move mud and rock debris ( called
is collected and processed to extract valuable metals. ore bodies often contain more than one valuable metal. tailings of a previous process may be used as a feed in another process to extract a secondary product from the original ore. additionally, a concentrate may contain more than one valuable metal. that concentrate would then be processed to separate the valuable metals into individual constituents. = = metal and its alloys = = much effort has been placed on understanding iron β carbon alloy system, which includes steels and cast irons. plain carbon steels ( those that contain essentially only carbon as an alloying element ) are used in low - cost, high - strength applications, where neither weight nor corrosion are a major concern. cast irons, including ductile iron, are also part of the iron - carbon system. iron - manganese - chromium alloys ( hadfield - type steels ) are also used in non - magnetic applications such as directional drilling. other engineering metals include aluminium, chromium, copper, magnesium, nickel, titanium, zinc, and silicon. these metals are most often used as alloys with the noted exception of silicon, which is not a metal. other forms include : stainless steel, particularly austenitic stainless steels, galvanized steel, nickel alloys, titanium alloys, or occasionally copper alloys are used, where resistance to corrosion is important. aluminium alloys and magnesium alloys are commonly used, when a lightweight strong part is required such as in automotive and aerospace applications. copper - nickel alloys ( such as monel ) are used in highly corrosive environments and for non - magnetic applications. nickel - based superalloys like inconel are used in high - temperature applications such as gas turbines, turbochargers, pressure vessels, and heat exchangers. for extremely high temperatures, single crystal alloys are used to minimize creep. in modern electronics, high purity single crystal silicon is essential for metal - oxide - silicon transistors ( mos ) and integrated circuits. = = production = = in production engineering, metallurgy is concerned with the production of metallic components for use in consumer or engineering products. this involves production of alloys, shaping, heat treatment and surface treatment of product. the task of the metallurgist is to achieve balance between material properties, such as cost, weight, strength, toughness, hardness, corrosion, fatigue resistance and performance in temperature extremes. to achieve this goal, the operating environment must be carefully considered. determining the hardness of the metal using the rockwell, vickers, and brinell hardness scales
river - beds ), but not for where there may be large obstructions in the ground. an open caisson that is used in soft grounds or high water tables, where open trench excavations are impractical, can also be used to install deep manholes, pump stations and reception / launch pits for microtunnelling, pipe jacking and other operations. a caisson is sunk by self - weight, concrete or water ballast placed on top, or by hydraulic jacks. the leading edge ( or cutting shoe ) of the caisson is sloped out at a sharp angle to aid sinking in a vertical manner ; it is usually made of steel. the shoe is generally wider than the caisson to reduce friction, and the leading edge may be supplied with pressurised bentonite slurry, which swells in water, stabilizing settlement by filling depressions and voids. an open caisson may fill with water during sinking. the material is excavated by clamshell excavator bucket on crane. the formation level subsoil may still not be suitable for excavation or bearing capacity. the water in the caisson ( due to a high water table ) balances the upthrust forces of the soft soils underneath. if dewatered, the base may " pipe " or " boil ", causing the caisson to sink. to combat this problem, piles may be driven from the surface to act as : load - bearing walls, in that they transmit loads to deeper soils. anchors, in that they resist flotation because of the friction at the interface between their surfaces and the surrounding earth into which they have been driven. h - beam sections ( typical column sections, due to resistance to bending in all axis ) may be driven at angles " raked " to rock or other firmer soils ; the h - beams are left extended above the base. a reinforced concrete plug may be placed under the water, a process known as tremie concrete placement. when the caisson is dewatered, this plug acts as a pile cap, resisting the upward forces of the subsoil. = = = monolithic = = = a monolithic caisson ( or simply a monolith ) is larger than the other types of caisson, but similar to open caissons. such caissons are often found in quay walls, where resistance to impact from ships is required. = = = pneumatic = = = shallow caissons may be open to the air, whereas pneumatic caisson
made of steel. the shoe is generally wider than the caisson to reduce friction, and the leading edge may be supplied with pressurised bentonite slurry, which swells in water, stabilizing settlement by filling depressions and voids. an open caisson may fill with water during sinking. the material is excavated by clamshell excavator bucket on crane. the formation level subsoil may still not be suitable for excavation or bearing capacity. the water in the caisson ( due to a high water table ) balances the upthrust forces of the soft soils underneath. if dewatered, the base may " pipe " or " boil ", causing the caisson to sink. to combat this problem, piles may be driven from the surface to act as : load - bearing walls, in that they transmit loads to deeper soils. anchors, in that they resist flotation because of the friction at the interface between their surfaces and the surrounding earth into which they have been driven. h - beam sections ( typical column sections, due to resistance to bending in all axis ) may be driven at angles " raked " to rock or other firmer soils ; the h - beams are left extended above the base. a reinforced concrete plug may be placed under the water, a process known as tremie concrete placement. when the caisson is dewatered, this plug acts as a pile cap, resisting the upward forces of the subsoil. = = = monolithic = = = a monolithic caisson ( or simply a monolith ) is larger than the other types of caisson, but similar to open caissons. such caissons are often found in quay walls, where resistance to impact from ships is required. = = = pneumatic = = = shallow caissons may be open to the air, whereas pneumatic caissons ( sometimes called pressurized caissons ), which penetrate soft mud, are bottomless boxes sealed at the top and filled with compressed air to keep water and mud out at depth. an airlock allows access to the chamber. workers, called sandhogs in american english, move mud and rock debris ( called muck ) from the edge of the workspace to a water - filled pit, connected by a tube ( called the muck tube ) to the surface. a crane at the surface removes the soil with a clamshell bucket. the water pressure in the tube balances the air pressure, with excess air escaping up
radiation exposure. marie curie died from aplastic anemia which resulted from her high levels of exposure. two scientists, an american and canadian respectively, harry daghlian and louis slotin, died after mishandling the same plutonium mass. unlike conventional weapons, the intense light, heat, and explosive force is not the only deadly component to a nuclear weapon. approximately half of the deaths from hiroshima and nagasaki died two to five years afterward from radiation exposure. civilian nuclear and radiological accidents primarily involve nuclear power plants. most common are nuclear leaks that expose workers to hazardous material. a nuclear meltdown refers to the more serious hazard of releasing nuclear material into the surrounding environment. the most significant meltdowns occurred at three mile island in pennsylvania and chernobyl in the soviet ukraine. the earthquake and tsunami on march 11, 2011 caused serious damage to three nuclear reactors and a spent fuel storage pond at the fukushima daiichi nuclear power plant in japan. military reactors that experienced similar accidents were windscale in the united kingdom and sl - 1 in the united states. military accidents usually involve the loss or unexpected detonation of nuclear weapons. the castle bravo test in 1954 produced a larger yield than expected, which contaminated nearby islands, a japanese fishing boat ( with one fatality ), and raised concerns about contaminated fish in japan. in the 1950s through 1970s, several nuclear bombs were lost from submarines and aircraft, some of which have never been recovered. the last twenty years have seen a marked decline in such accidents. = = examples of environmental benefits = = proponents of nuclear energy note that annually, nuclear - generated electricity reduces 470 million metric tons of carbon dioxide emissions that would otherwise come from fossil fuels. additionally, the amount of comparatively low waste that nuclear energy does create is safely disposed of by the large scale nuclear energy production facilities or it is repurposed / recycled for other energy uses. proponents of nuclear energy also bring to attention the opportunity cost of utilizing other forms of electricity. for example, the environmental protection agency estimates that coal kills 30, 000 people a year, as a result of its environmental impact, while 60 people died in the chernobyl disaster. a real world example of impact provided by proponents of nuclear energy is the 650, 000 ton increase in carbon emissions in the two months following the closure of the vermont yankee nuclear plant. = = see also = = atomic age lists of nuclear disasters and radioactive incidents nuclear power debate outline of nuclear technology radiology = = references = = = = external links = = nuclear energy institute β beneficial uses
iron - peroxide intermediates are central in the reaction cycle of many iron - containing biomolecules. we trapped iron ( iii ) - ( hydro ) peroxo species in crystals of superoxide reductase ( sor ), a nonheme mononuclear iron enzyme that scavenges superoxide radicals. x - ray diffraction data at 1. 95 angstrom resolution and raman spectra recorded in crystallo revealed iron - ( hydro ) peroxo intermediates with the ( hydro ) peroxo group bound end - on. the dynamic sor active site promotes the formation of transient hydrogen bond networks, which presumably assist the cleavage of the iron - oxygen bond in order to release the reaction product, hydrogen peroxide.
joints. = = = metal alloys = = = the alloys of iron ( steel, stainless steel, cast iron, tool steel, alloy steels ) make up the largest proportion of metals today both by quantity and commercial value. iron alloyed with various proportions of carbon gives low, mid and high carbon steels. an iron - carbon alloy is only considered steel if the carbon level is between 0. 01 % and 2. 00 % by weight. for steels, the hardness and tensile strength of the steel is related to the amount of carbon present, with increasing carbon levels also leading to lower ductility and toughness. heat treatment processes such as quenching and tempering can significantly change these properties, however. in contrast, certain metal alloys exhibit unique properties where their size and density remain unchanged across a range of temperatures. cast iron is defined as an iron β carbon alloy with more than 2. 00 %, but less than 6. 67 % carbon. stainless steel is defined as a regular steel alloy with greater than 10 % by weight alloying content of chromium. nickel and molybdenum are typically also added in stainless steels. other significant metallic alloys are those of aluminium, titanium, copper and magnesium. copper alloys have been known for a long time ( since the bronze age ), while the alloys of the other three metals have been relatively recently developed. due to the chemical reactivity of these metals, the electrolytic extraction processes required were only developed relatively recently. the alloys of aluminium, titanium and magnesium are also known and valued for their high strength to weight ratios and, in the case of magnesium, their ability to provide electromagnetic shielding. these materials are ideal for situations where high strength to weight ratios are more important than bulk cost, such as in the aerospace industry and certain automotive engineering applications. = = = semiconductors = = = a semiconductor is a material that has a resistivity between a conductor and insulator. modern day electronics run on semiconductors, and the industry had an estimated us $ 530 billion market in 2021. its electronic properties can be greatly altered through intentionally introducing impurities in a process referred to as doping. semiconductor materials are used to build diodes, transistors, light - emitting diodes ( leds ), and analog and digital electric circuits, among their many uses. semiconductor devices have replaced thermionic devices like vacuum tubes in most applications. semiconductor devices are manufactured both as single discrete devices and as integrated circuits ( ics ), which consist of a number β from a
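The carbon and chromium thresholds quoted above amount to a simple classification rule. The sketch below encodes those cut-offs directly; the function name and the label strings are illustrative choices, and real alloy specifications of course involve far more than two elements.

# minimal sketch of the composition thresholds quoted above :
# steel : 0.01 - 2.00 wt% carbon ; cast iron : > 2.00 and < 6.67 wt% carbon ;
# stainless steel : a steel with more than 10 wt% chromium.
def classify_iron_alloy(carbon_wt_pct: float, chromium_wt_pct: float = 0.0) -> str:
    if 0.01 <= carbon_wt_pct <= 2.00:
        return "stainless steel" if chromium_wt_pct > 10.0 else "steel"
    if 2.00 < carbon_wt_pct < 6.67:
        return "cast iron"
    return "outside the steel / cast iron range"

print(classify_iron_alloy(0.4))           # steel (mid-carbon)
print(classify_iron_alloy(0.08, 18.0))    # stainless steel
print(classify_iron_alloy(3.5))           # cast iron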
onset of electro - chemical corrosion. similar problems are encountered in coastal and offshore structures. = = = anti - fouling = = = anti - fouling is the process of eliminating obstructive organisms from essential components of seawater systems. depending on the nature and location of marine growth, this process is performed in a number of different ways : marine organisms may grow and attach to the surfaces of the outboard suction inlets used to obtain water for cooling systems. electro - chlorination involves running a high electrical current through sea water, altering the water ' s chemical composition to create sodium hypochlorite, purging any bio - matter. an electrolytic method of anti - fouling involves running electrical current through two anodes ( scardino, 2009 ). these anodes typically consist of copper and aluminum ( or alternatively, iron ). the first metal, the copper anode, releases its ions into the water, creating an environment that is too toxic for bio - matter. the second metal, aluminum, coats the inside of the pipes to prevent corrosion. other forms of marine growth such as mussels and algae may attach themselves to the bottom of a ship ' s hull. this growth interferes with the smoothness and uniformity of the ship ' s hull, giving the ship a less hydrodynamic shape that makes it slower and less fuel - efficient. marine growth on the hull can be remedied by using special paint that prevents the growth of such organisms. = = = pollution control = = = = = = = sulfur emission = = = = the burning of marine fuels releases harmful pollutants into the atmosphere. ships burn marine diesel in addition to heavy fuel oil. heavy fuel oil, being the heaviest of refined oils, releases sulfur dioxide when burned. sulfur dioxide emissions have the potential to raise atmospheric and ocean acidity, causing harm to marine life. however, heavy fuel oil may only be burned in international waters due to the pollution created. it is commercially advantageous due to its cost effectiveness compared to other marine fuels. it is projected that heavy fuel oil will be phased out of commercial use by the year 2020 ( smith, 2018 ). = = = = oil and water discharge = = = = water, oil, and other substances collect at the bottom of the ship in what is known as the bilge. bilge water is pumped overboard, but must pass a pollution threshold test of 15 ppm ( parts per million ) of oil to be discharged. water is tested
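The 15 ppm oil-in-water limit for bilge discharge described above is effectively a pass/fail threshold test, which the toy check below makes explicit; the sample readings are invented for illustration only.

# toy check of the 15 ppm oil - in - water discharge threshold mentioned above.
BILGE_OIL_LIMIT_PPM = 15.0

def may_discharge(oil_content_ppm: float) -> bool:
    """True if the bilge water sample is at or below the discharge limit."""
    return oil_content_ppm <= BILGE_OIL_LIMIT_PPM

for reading in (4.2, 15.0, 27.8):  # invented sample readings
    verdict = "discharge allowed" if may_discharge(reading) else "retain / treat"
    print(reading, "ppm ->", verdict)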
the third millennium bc in palmela, portugal, los millares, spain, and stonehenge, united kingdom. the precise beginnings, however, have not been clearly ascertained, and new discoveries are ongoing. in approximately 1900 bc, ancient iron smelting sites existed in tamil nadu. in the near east, about 3, 500 bc, it was discovered that by combining copper and tin, a superior metal could be made, an alloy called bronze. this represented a major technological shift known as the bronze age. the extraction of iron from its ore into a workable metal is much more difficult than for copper or tin. the process appears to have been invented by the hittites in about 1200 bc, beginning the iron age. the secret of extracting and working iron was a key factor in the success of the philistines. historical developments in ferrous metallurgy can be found in a wide variety of past cultures and civilizations. these include the ancient and medieval kingdoms and empires of the middle east and near east, ancient iran, ancient egypt, ancient nubia, and anatolia in present - day turkey, ancient nok, carthage, the celts, greeks and romans of ancient europe, medieval europe, ancient and medieval china, ancient and medieval india, and ancient and medieval japan, amongst others. a 16th century book by georg agricola, de re metallica, describes the highly developed and complex processes of mining metal ores, metal extraction, and metallurgy of the time. agricola has been described as the " father of metallurgy ". = = extraction = = extractive metallurgy is the practice of removing valuable metals from an ore and refining the extracted raw metals into a purer form. in order to convert a metal oxide or sulphide to a purer metal, the ore must be reduced physically, chemically, or electrolytically. extractive metallurgists are interested in three primary streams : feed, concentrate ( metal oxide / sulphide ) and tailings ( waste ). after mining, large pieces of the ore feed are broken through crushing or grinding in order to obtain particles small enough that each particle is either mostly valuable or mostly waste. concentrating the particles of value in a form supporting separation enables the desired metal to be removed from waste products. mining may not be necessary if the ore body and physical environment are conducive to leaching. leaching dissolves minerals in an ore body and results in an enriched solution. the solution
the valuable metals into individual constituents. = = metal and its alloys = = much effort has been placed on understanding iron β carbon alloy system, which includes steels and cast irons. plain carbon steels ( those that contain essentially only carbon as an alloying element ) are used in low - cost, high - strength applications, where neither weight nor corrosion are a major concern. cast irons, including ductile iron, are also part of the iron - carbon system. iron - manganese - chromium alloys ( hadfield - type steels ) are also used in non - magnetic applications such as directional drilling. other engineering metals include aluminium, chromium, copper, magnesium, nickel, titanium, zinc, and silicon. these metals are most often used as alloys with the noted exception of silicon, which is not a metal. other forms include : stainless steel, particularly austenitic stainless steels, galvanized steel, nickel alloys, titanium alloys, or occasionally copper alloys are used, where resistance to corrosion is important. aluminium alloys and magnesium alloys are commonly used, when a lightweight strong part is required such as in automotive and aerospace applications. copper - nickel alloys ( such as monel ) are used in highly corrosive environments and for non - magnetic applications. nickel - based superalloys like inconel are used in high - temperature applications such as gas turbines, turbochargers, pressure vessels, and heat exchangers. for extremely high temperatures, single crystal alloys are used to minimize creep. in modern electronics, high purity single crystal silicon is essential for metal - oxide - silicon transistors ( mos ) and integrated circuits. = = production = = in production engineering, metallurgy is concerned with the production of metallic components for use in consumer or engineering products. this involves production of alloys, shaping, heat treatment and surface treatment of product. the task of the metallurgist is to achieve balance between material properties, such as cost, weight, strength, toughness, hardness, corrosion, fatigue resistance and performance in temperature extremes. to achieve this goal, the operating environment must be carefully considered. determining the hardness of the metal using the rockwell, vickers, and brinell hardness scales is a commonly used practice that helps better understand the metal ' s elasticity and plasticity for different applications and production processes. in a saltwater environment, most ferrous metals and some non - ferrous alloys corrode quickly. metals exposed to cold or cryogenic conditions may undergo a ductile to brittle
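For the hardness testing mentioned above, the Vickers number is a simple function of the applied load and the size of the indentation: HV = 1.8544 * F / d^2, with F in kilograms-force and d the mean indent diagonal in millimetres. The sketch below works one example; the load and diagonal values are invented for illustration.

# vickers hardness from the standard relation hv = 1.8544 * f / d ** 2,
# with f the load in kilograms - force and d the mean indent diagonal in mm.
def vickers_hardness(load_kgf: float, mean_diagonal_mm: float) -> float:
    return 1.8544 * load_kgf / mean_diagonal_mm ** 2

# invented example : a 30 kgf load leaving a 0.43 mm mean diagonal
print(round(vickers_hardness(30.0, 0.43)))  # ~301 HV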
Question: Which of the following causes a shipβs iron anchor to sink to the ocean floor when it is released overboard?
A) chemical forces
B) gravity
C) magnetism
D) nuclear forces
|
B) gravity
|
Context:
, lightning strikes, tornadoes, building fires, wildfires, and mass shootings disabling most of the system if not the entirety of it. geographic redundancy locations can be more than 621 miles ( 999 km ) continental, more than 62 miles apart and less than 93 miles ( 150 km ) apart, less than 62 miles apart, but not on the same campus, or different buildings that are more than 300 feet ( 91 m ) apart on the same campus. the following methods can reduce the risks of damage by a fire conflagration : large buildings at least 80 feet ( 24 m ) to 110 feet ( 34 m ) apart, but sometimes a minimum of 210 feet ( 64 m ) apart. : 9 high - rise buildings at least 82 feet ( 25 m ) apart : 12 open spaces clear of flammable vegetation within 200 feet ( 61 m ) on each side of objects different wings on the same building, in rooms that are separated by more than 300 feet ( 91 m ) different floors on the same wing of a building in rooms that are horizontally offset by a minimum of 70 feet ( 21 m ) with fire walls between the rooms that are on different floors two rooms separated by another room, leaving at least a 70 - foot gap between the two rooms there should be a minimum of two separated fire walls and on opposite sides of a corridor geographic redundancy is used by amazon web services ( aws ), google cloud platform ( gcp ), microsoft azure, netflix, dropbox, salesforce, linkedin, paypal, twitter, facebook, apple icloud, cisco meraki, and many others to provide geographic redundancy, high availability, fault tolerance and to ensure availability and reliability for their cloud services. as another example, to minimize risk of damage from severe windstorms or water damage, buildings can be located at least 2 miles ( 3. 2 km ) away from the shore, with an elevation of at least 5 feet ( 1. 5 m ) above sea level. for additional protection, they can be located at least 100 feet ( 30 m ) away from flood plain areas. = = functions of redundancy = = the two functions of redundancy are passive redundancy and active redundancy. both functions prevent performance decline from exceeding specification limits without human intervention using extra capacity. passive redundancy uses excess capacity to reduce the impact of component failures. one common form of passive redundancy is the extra strength of cabling and struts used in bridges.
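The separation distances listed above amount to a lookup from site spacing to a redundancy category. The sketch below is one possible encoding; the tier names are invented labels rather than standard terminology, and the boundaries simply mirror the figures quoted in the passage.

# illustrative mapping of site separation ( in miles ) to the distance tiers
# quoted above ; tier names are invented, not standard terminology.
def redundancy_tier(separation_miles: float, same_campus: bool = False) -> str:
    if separation_miles > 621:
        return "continental (more than 621 miles / 999 km)"
    if 62 < separation_miles < 93:
        return "regional (between 62 and 93 miles)"
    if separation_miles < 62 and not same_campus:
        return "metro (under 62 miles, different campuses)"
    if same_campus and separation_miles * 5280 > 300:  # compare in feet
        return "campus (buildings more than 300 ft apart)"
    return "not one of the tiers listed in the passage"

print(redundancy_tier(800))          # continental
print(redundancy_tier(75))           # regional
print(redundancy_tier(0.1, True))    # campus (about 528 ft apart)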
equalizing the flow at these places, produces a lowering of the river above the rapids by facilitating the efflux, which may result in the appearance of fresh shoals at the low stage of the river. where, however, narrow rocky reefs or other hard shoals stretch across the bottom of a river and present obstacles to the erosion by the current of the soft materials forming the bed of the river above and below, their removal may result in permanent improvement by enabling the river to deepen its bed by natural scour. the capability of a river to provide a waterway for navigation during the summer or throughout the dry season depends on the depth that can be secured in the channel at the lowest stage. the problem in the dry season is the small discharge and deficiency in scour during this period. a typical solution is to restrict the width of the low - water channel, concentrate all of the flow in it, and also to fix its position so that it is scoured out every year by the floods which follow the deepest part of the bed along the line of the strongest current. this can be effected by closing subsidiary low - water channels with dikes across them, and narrowing the channel at the low stage by low - dipping cross dikes extending from the river banks down the slope and pointing slightly up - stream so as to direct the water flowing over them into a central channel. = = estuarine works = = the needs of navigation may also require that a stable, continuous, navigable channel is prolonged from the navigable river to deep water at the mouth of the estuary. the interaction of river flow and tide needs to be modeled by computer or using scale models, moulded to the configuration of the estuary under consideration and reproducing in miniature the tidal ebb and flow and fresh - water discharge over a bed of fine sand, in which various lines of training walls can be successively inserted. the models should be capable of furnishing valuable indications of the respective effects and comparative merits of the different schemes proposed for works. = = see also = = bridge scour flood control = = references = = = = external links = = u. s. army corps of engineers β civil works program river morphology and stream restoration references - wildland hydrology at the library of congress web archives ( archived 2002 - 08 - 13 )
emitter rather than returning a diffuse signal detectable at many angles. the effect is sometimes called " glitter " after the very brief signal seen when the reflected beam passes across a detector. it can be difficult for the radar operator to distinguish between a glitter event and a digital glitch in the processing system. stealth airframes sometimes display distinctive serrations on some exposed edges, such as the engine ports. the yf - 23 has such serrations on the exhaust ports. this is another example of the parallel alignment of features, this time on the external airframe. the shaping requirements detracted greatly from the f - 117 ' s aerodynamic properties. it is inherently unstable, and cannot be flown without a fly - by - wire control system. similarly, coating the cockpit canopy with a thin film transparent conductor ( vapor - deposited gold or indium tin oxide ) helps to reduce the aircraft ' s radar profile, because radar waves would normally enter the cockpit, reflect off objects ( the inside of a cockpit has a complex shape, with a pilot helmet alone forming a sizeable return ), and possibly return to the radar, but the conductive coating creates a controlled shape that deflects the incoming radar waves away from the radar. the coating is thin enough that it has no adverse effect on pilot vision. = = = = ships = = = = ships have also adopted similar methods. though the earlier american arleigh burke - class destroyers incorporated some signature - reduction features, the norwegian skjold - class corvettes were the first coastal defence vessels and the french la fayette - class frigates the first ocean - going stealth ships to enter service. other examples are the dutch de zeven provincien - class frigates, the taiwanese tuo chiang - class corvettes, german sachsen - class frigates, the swedish visby - class corvette, the american san antonio - class amphibious transport docks, and most modern warship designs. = = = materials = = = = = = = non - metallic airframe = = = = dielectric composite materials are more transparent to radar, whereas electrically conductive materials such as metals and carbon fibers reflect electromagnetic energy incident on the material ' s surface. composites may also contain ferrites to optimize the dielectric and magnetic properties of a material for its application. = = = = radar - absorbent material = = = = radiation - absorbent material ( ram ), often applied as paint, is used especially on the edges of metal surfaces. while the material and thickness of ram coatings can
the mean apparent magnitude of starlink mini direct - to - cell ( dtc ) satellites is 4. 62 while the mean of magnitudes adjusted to a uniform distance of 1000 km is 5. 50. dtcs average 4. 9 times brighter than other starlink mini spacecraft at a common distance. we cannot currently separate the effects of the dtc antenna itself, the different attitude modes that may be required for dtc operations and to what extent brightness mitigation procedures were in place at the times of our observations. in a best case scenario, where dtc brightness mitigation is as successful as that for other minis and the dtc antenna does not add significantly to brightness, we estimate that dtcs will be about 2. 6 times as bright as the others based upon their lower altitudes. the dtcs spend a greater fraction of their time in the earth ' s shadow than satellites at higher altitudes. that will offset some of their impact on astronomical observing.
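The 1000 km adjustment and the brightness ratios above follow from the standard magnitude relations: normalising to a common range subtracts 5 * log10(d / 1000 km), and a magnitude difference dm corresponds to a flux ratio of 10^(0.4 * dm). The sketch below reproduces that arithmetic; the two altitudes used for the inverse-square estimate are illustrative assumptions, not values taken from the observations.

import math

def adjust_to_1000_km(apparent_mag: float, range_km: float) -> float:
    """Normalise an observed magnitude to a common 1000 km range."""
    return apparent_mag - 5.0 * math.log10(range_km / 1000.0)

def flux_ratio(delta_mag: float) -> float:
    """Brightness ratio corresponding to a magnitude difference."""
    return 10.0 ** (0.4 * delta_mag)

# a 4.9x brightness ratio corresponds to a magnitude gap of about 1.7 :
print(round(2.5 * math.log10(4.9), 2))    # ~1.73 mag
print(round(flux_ratio(1.73), 2))         # ~4.9x

# inverse - square estimate with assumed altitudes of 340 km (DTC) and
# 550 km (other minis) -- illustrative numbers, not from the observations :
print(round((550.0 / 340.0) ** 2, 1))     # ~2.6x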
are the stone - paved streets of the city - state of ur, dating to c. 4, 000 bce, and timber roads leading through the swamps of glastonbury, england, dating to around the same period. the first long - distance road, which came into use around 3, 500 bce, spanned 2, 400 km from the persian gulf to the mediterranean sea, but was not paved and was only partially maintained. in around 2, 000 bce, the minoans on the greek island of crete built a 50 km road leading from the palace of gortyn on the south side of the island, through the mountains, to the palace of knossos on the north side of the island. unlike the earlier road, the minoan road was completely paved. ancient minoan private homes had running water. a bathtub virtually identical to modern ones was unearthed at the palace of knossos. several minoan private homes also had toilets, which could be flushed by pouring water down the drain. the ancient romans had many public flush toilets, which emptied into an extensive sewage system. the primary sewer in rome was the cloaca maxima ; construction began on it in the sixth century bce and it is still in use today. the ancient romans also had a complex system of aqueducts, which were used to transport water across long distances. the first roman aqueduct was built in 312 bce. the eleventh and final ancient roman aqueduct was built in 226 ce. put together, the roman aqueducts extended over 450 km, but less than 70 km of this was above ground and supported by arches. = = = pre - modern = = = innovations continued through the middle ages with the introduction of silk production ( in asia and later europe ), the horse collar, and horseshoes. simple machines ( such as the lever, the screw, and the pulley ) were combined into more complicated tools, such as the wheelbarrow, windmills, and clocks. a system of universities developed and spread scientific ideas and practices, including oxford and cambridge. the renaissance era produced many innovations, including the introduction of the movable type printing press to europe, which facilitated the communication of knowledge. technology became increasingly influenced by science, beginning a cycle of mutual advancement. = = = modern = = = starting in the united kingdom in the 18th century, the discovery of steam power set off the industrial revolution, which saw wide - ranging technological discoveries, particularly in the areas of agriculture, manufacturing, mining, metallurgy, and transport, and the
navigable capabilities of rivers. in tropical countries subject to periodical rains, the rivers are in flood during the rainy season and have hardly any flow during the rest of the year, while in temperate regions, where the rainfall is more evenly distributed throughout the year, evaporation causes the available rainfall to be much less in hot summer weather than in the winter months, so that the rivers fall to their low stage in the summer and are liable to be in flood in the winter. in fact, with a temperate climate, the year may be divided into a warm and a cold season, extending from may to october and from november to april in the northern hemisphere respectively ; the rivers are low and moderate floods are of rare occurrence during the warm period, and the rivers are high and subject to occasional heavy floods after a considerable rainfall during the cold period in most years. the only exceptions are rivers which have their sources amongst mountains clad with perpetual snow and are fed by glaciers ; their floods occur in the summer from the melting of snow and ice, as exemplified by the rhone above the lake of geneva, and the arve which joins it below. but even these rivers are liable to have their flow modified by the influx of tributaries subject to different conditions, so that the rhone below lyon has a more uniform discharge than most rivers, as the summer floods of the arve are counteracted to a great extent by the low stage of the saone flowing into the rhone at lyon, which has its floods in the winter when the arve, on the contrary, is low. another serious obstacle encountered in river engineering consists in the large quantity of detritus they bring down in flood - time, derived mainly from the disintegration of the surface layers of the hills and slopes in the upper parts of the valleys by glaciers, frost and rain. the power of a current to transport materials varies with its velocity, so that torrents with a rapid fall near the sources of rivers can carry down rocks, boulders and large stones, which are by degrees ground by attrition in their onward course into slate, gravel, sand and silt, simultaneously with the gradual reduction in fall, and, consequently, in the transporting force of the current. accordingly, under ordinary conditions, most of the materials brought down from the high lands by torrential water courses are carried forward by the main river to the sea, or partially strewn over flat alluvial plains during floods ; the size of the materials forming the bed of the river or borne along by the stream is gradually reduced on proceeding sea
approximately corresponds to the slope of the country it traverses ; as rivers rise close to the highest part of their basins, generally in hilly regions, their fall is rapid near their source and gradually diminishes, with occasional irregularities, until, in traversing plains along the latter part of their course, their fall usually becomes quite gentle. accordingly, in large basins, rivers in most cases begin as torrents with a variable flow, and end as gently flowing rivers with a comparatively regular discharge. the irregular flow of rivers throughout their course forms one of the main difficulties in devising works for mitigating inundations or for increasing the navigable capabilities of rivers. in tropical countries subject to periodical rains, the rivers are in flood during the rainy season and have hardly any flow during the rest of the year, while in temperate regions, where the rainfall is more evenly distributed throughout the year, evaporation causes the available rainfall to be much less in hot summer weather than in the winter months, so that the rivers fall to their low stage in the summer and are liable to be in flood in the winter. in fact, with a temperate climate, the year may be divided into a warm and a cold season, extending from may to october and from november to april in the northern hemisphere respectively ; the rivers are low and moderate floods are of rare occurrence during the warm period, and the rivers are high and subject to occasional heavy floods after a considerable rainfall during the cold period in most years. the only exceptions are rivers which have their sources amongst mountains clad with perpetual snow and are fed by glaciers ; their floods occur in the summer from the melting of snow and ice, as exemplified by the rhone above the lake of geneva, and the arve which joins it below. but even these rivers are liable to have their flow modified by the influx of tributaries subject to different conditions, so that the rhone below lyon has a more uniform discharge than most rivers, as the summer floods of the arve are counteracted to a great extent by the low stage of the saone flowing into the rhone at lyon, which has its floods in the winter when the arve, on the contrary, is low. another serious obstacle encountered in river engineering consists in the large quantity of detritus they bring down in flood - time, derived mainly from the disintegration of the surface layers of the hills and slopes in the upper parts of the valleys by glaciers, frost and rain. the power of a current to transport materials varies with its velocity, so that torrents with
##ediment to up - stream navigation, and there are generally variations in water level, and when the discharge becomes small in the dry season. it is impossible to maintain a sufficient depth of water in the low - water channel. the possibility to secure uniformity of depth in a river by lowering the shoals obstructing the channel depends on the nature of the shoals. a soft shoal in the bed of a river is due to deposit from a diminution in velocity of flow, produced by a reduction in fall and by a widening of the channel, or to a loss in concentration of the scour of the main current in passing over from one concave bank to the next on the opposite side. the lowering of such a shoal by dredging merely effects a temporary deepening, for it soon forms again from the causes which produced it. the removal, moreover, of the rocky obstructions at rapids, though increasing the depth and equalizing the flow at these places, produces a lowering of the river above the rapids by facilitating the efflux, which may result in the appearance of fresh shoals at the low stage of the river. where, however, narrow rocky reefs or other hard shoals stretch across the bottom of a river and present obstacles to the erosion by the current of the soft materials forming the bed of the river above and below, their removal may result in permanent improvement by enabling the river to deepen its bed by natural scour. the capability of a river to provide a waterway for navigation during the summer or throughout the dry season depends on the depth that can be secured in the channel at the lowest stage. the problem in the dry season is the small discharge and deficiency in scour during this period. a typical solution is to restrict the width of the low - water channel, concentrate all of the flow in it, and also to fix its position so that it is scoured out every year by the floods which follow the deepest part of the bed along the line of the strongest current. this can be effected by closing subsidiary low - water channels with dikes across them, and narrowing the channel at the low stage by low - dipping cross dikes extending from the river banks down the slope and pointing slightly up - stream so as to direct the water flowing over them into a central channel. = = estuarine works = = the needs of navigation may also require that a stable, continuous, navigable channel is prolonged from the navigable river to deep water at the mouth of the estuary. the interaction of river
to increase as far as practicable the navigable depth at the lowest stage of the water level. engineering works to increase the navigability of rivers can only be advantageously undertaken in large rivers with a moderate fall and a fair discharge at their lowest stage, for with a large fall the current presents a great impediment to up - stream navigation, and there are generally variations in water level, and when the discharge becomes small in the dry season. it is impossible to maintain a sufficient depth of water in the low - water channel. the possibility to secure uniformity of depth in a river by lowering the shoals obstructing the channel depends on the nature of the shoals. a soft shoal in the bed of a river is due to deposit from a diminution in velocity of flow, produced by a reduction in fall and by a widening of the channel, or to a loss in concentration of the scour of the main current in passing over from one concave bank to the next on the opposite side. the lowering of such a shoal by dredging merely effects a temporary deepening, for it soon forms again from the causes which produced it. the removal, moreover, of the rocky obstructions at rapids, though increasing the depth and equalizing the flow at these places, produces a lowering of the river above the rapids by facilitating the efflux, which may result in the appearance of fresh shoals at the low stage of the river. where, however, narrow rocky reefs or other hard shoals stretch across the bottom of a river and present obstacles to the erosion by the current of the soft materials forming the bed of the river above and below, their removal may result in permanent improvement by enabling the river to deepen its bed by natural scour. the capability of a river to provide a waterway for navigation during the summer or throughout the dry season depends on the depth that can be secured in the channel at the lowest stage. the problem in the dry season is the small discharge and deficiency in scour during this period. a typical solution is to restrict the width of the low - water channel, concentrate all of the flow in it, and also to fix its position so that it is scoured out every year by the floods which follow the deepest part of the bed along the line of the strongest current. this can be effected by closing subsidiary low - water channels with dikes across them, and narrowing the channel at the low stage by low - dipping cross dikes extending from the river banks down the
received within a limited distance of its transmitter. systems that broadcast from satellites can generally be received over an entire country or continent. older terrestrial radio and television are paid for by commercial advertising or governments. in subscription systems like satellite television and satellite radio the customer pays a monthly fee. in these systems, the radio signal is encrypted and can only be decrypted by the receiver, which is controlled by the company and can be deactivated if the customer does not pay. broadcasting uses several parts of the radio spectrum, depending on the type of signals transmitted and the desired target audience. longwave and medium wave signals can give reliable coverage of areas several hundred kilometers across, but have a more limited information - carrying capacity and so work best with audio signals ( speech and music ), and the sound quality can be degraded by radio noise from natural and artificial sources. the shortwave bands have a greater potential range but are more subject to interference by distant stations and varying atmospheric conditions that affect reception. in the very high frequency band, greater than 30 megahertz, the earth ' s atmosphere has less of an effect on the range of signals, and line - of - sight propagation becomes the principal mode. these higher frequencies permit the great bandwidth required for television broadcasting. since natural and artificial noise sources are less present at these frequencies, high - quality audio transmission is possible, using frequency modulation. = = = = audio : radio broadcasting = = = = radio broadcasting means transmission of audio ( sound ) to radio receivers belonging to a public audience. analog audio is the earliest form of radio broadcast. am broadcasting began around 1920. fm broadcasting was introduced in the late 1930s with improved fidelity. a broadcast radio receiver is called a radio. most radios can receive both am and fm. am ( amplitude modulation ) β in am, the amplitude ( strength ) of the radio carrier wave is varied by the audio signal. am broadcasting, the oldest broadcasting technology, is allowed in the am broadcast bands, between 148 and 283 khz in the low frequency ( lf ) band for longwave broadcasts and between 526 and 1706 khz in the medium frequency ( mf ) band for medium - wave broadcasts. because waves in these bands travel as ground waves following the terrain, am radio stations can be received beyond the horizon at hundreds of miles distance, but am has lower fidelity than fm. radiated power ( erp ) of am stations in the us is usually limited to a maximum of 10 kw, although a few ( clear - channel stations ) are allowed to transmit at 50
Question: In clear weather, a bright light can be seen for a long distance. In conditions of heavy fog, the visibility is greatly reduced. Which of the following explains the reduced visibility?
A) Light is refracted by water vapor in the air.
B) Light is scattered by water droplets in the air.
C) Light is absorbed by water vapor near the ground.
D) Light is reflected by water droplets on the ground.
|
B) Light is scattered by water droplets in the air.
|
Context:
on a large scale provided protection from insect pests or tolerance to herbicides. fungal and virus resistant crops have also been developed or are in development. this makes the insect and weed management of crops easier and can indirectly increase crop yield. gm crops that directly improve yield by accelerating growth or making the plant more hardy ( by improving salt, cold or drought tolerance ) are also under development. in 2016 salmon have been genetically modified with growth hormones to reach normal adult size much faster. gmos have been developed that modify the quality of produce by increasing the nutritional value or providing more industrially useful qualities or quantities. the amflora potato produces a more industrially useful blend of starches. soybeans and canola have been genetically modified to produce more healthy oils. the first commercialised gm food was a tomato that had delayed ripening, increasing its shelf life. plants and animals have been engineered to produce materials they do not normally make. pharming uses crops and animals as bioreactors to produce vaccines, drug intermediates, or the drugs themselves ; the useful product is purified from the harvest and then used in the standard pharmaceutical production process. cows and goats have been engineered to express drugs and other proteins in their milk, and in 2009 the fda approved a drug produced in goat milk. = = = other applications = = = genetic engineering has potential applications in conservation and natural area management. gene transfer through viral vectors has been proposed as a means of controlling invasive species as well as vaccinating threatened fauna from disease. transgenic trees have been suggested as a way to confer resistance to pathogens in wild populations. with the increasing risks of maladaptation in organisms as a result of climate change and other perturbations, facilitated adaptation through gene tweaking could be one solution to reducing extinction risks. applications of genetic engineering in conservation are thus far mostly theoretical and have yet to be put into practice. genetic engineering is also being used to create microbial art. some bacteria have been genetically engineered to create black and white photographs. novelty items such as lavender - colored carnations, blue roses, and glowing fish, have also been produced through genetic engineering. = = regulation = = the regulation of genetic engineering concerns the approaches taken by governments to assess and manage the risks associated with the development and release of gmos. the development of a regulatory framework began in 1975, at asilomar, california. the asilomar meeting recommended a set of voluntary guidelines regarding the use of recombinant technology. as the technology improved
life, but most current gm crops are modified to increase resistance to insects and herbicides. glofish, the first gmo designed as a pet, was sold in the united states in december 2003. in 2016 salmon modified with a growth hormone were sold. genetic engineering has been applied in numerous fields including research, medicine, industrial biotechnology and agriculture. in research, gmos are used to study gene function and expression through loss of function, gain of function, tracking and expression experiments. by knocking out genes responsible for certain conditions it is possible to create animal model organisms of human diseases. as well as producing hormones, vaccines and other drugs, genetic engineering has the potential to cure genetic diseases through gene therapy. chinese hamster ovary ( cho ) cells are used in industrial genetic engineering. additionally mrna vaccines are made through genetic engineering to prevent infections by viruses such as covid - 19. the same techniques that are used to produce drugs can also have industrial applications such as producing enzymes for laundry detergent, cheeses and other products. the rise of commercialised genetically modified crops has provided economic benefit to farmers in many different countries, but has also been the source of most of the controversy surrounding the technology. this has been present since its early use ; the first field trials were destroyed by anti - gm activists. although there is a scientific consensus that currently available food derived from gm crops poses no greater risk to human health than conventional food, critics consider gm food safety a leading concern. gene flow, impact on non - target organisms, control of the food supply and intellectual property rights have also been raised as potential issues. these concerns have led to the development of a regulatory framework, which started in 1975. it has led to an international treaty, the cartagena protocol on biosafety, that was adopted in 2000. individual countries have developed their own regulatory systems regarding gmos, with the most marked differences occurring between the united states and europe. = = overview = = genetic engineering is a process that alters the genetic structure of an organism by either removing or introducing dna, or modifying existing genetic material in situ. unlike traditional animal and plant breeding, which involves doing multiple crosses and then selecting for the organism with the desired phenotype, genetic engineering takes the gene directly from one organism and delivers it to the other. this is much faster, can be used to insert any genes from any organism ( even ones from different domains ) and prevents other undesirable genes from also being added. genetic engineering could potentially fix severe genetic disorders in humans by replacing the
herbicides. the people ' s republic of china was the first country to commercialise transgenic plants, introducing a virus - resistant tobacco in 1992. in 1994 calgene attained approval to commercially release the first genetically modified food, the flavr savr, a tomato engineered to have a longer shelf life. in 1994, the european union approved tobacco engineered to be resistant to the herbicide bromoxynil, making it the first genetically engineered crop commercialised in europe. in 1995, bt potato was approved safe by the environmental protection agency, after having been approved by the fda, making it the first pesticide producing crop to be approved in the us. in 2009 11 transgenic crops were grown commercially in 25 countries, the largest of which by area grown were the us, brazil, argentina, india, canada, china, paraguay and south africa. in 2010, scientists at the j. craig venter institute created the first synthetic genome and inserted it into an empty bacterial cell. the resulting bacterium, named mycoplasma laboratorium, could replicate and produce proteins. four years later this was taken a step further when a bacterium was developed that replicated a plasmid containing a unique base pair, creating the first organism engineered to use an expanded genetic alphabet. in 2012, jennifer doudna and emmanuelle charpentier collaborated to develop the crispr / cas9 system, a technique which can be used to easily and specifically alter the genome of almost any organism. = = process = = creating a gmo is a multi - step process. genetic engineers must first choose what gene they wish to insert into the organism. this is driven by what the aim is for the resultant organism and is built on earlier research. genetic screens can be carried out to determine potential genes and further tests then used to identify the best candidates. the development of microarrays, transcriptomics and genome sequencing has made it much easier to find suitable genes. luck also plays its part ; the roundup ready gene was discovered after scientists noticed a bacterium thriving in the presence of the herbicide. = = = gene isolation and cloning = = = the next step is to isolate the candidate gene. the cell containing the gene is opened and the dna is purified. the gene is separated by using restriction enzymes to cut the dna into fragments or polymerase chain reaction ( pcr ) to amplify up the gene segment. these segments can then be extracted through gel electrophoresis. if the chosen gene or the donor organism ' s
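The gene-isolation step described above relies on restriction enzymes recognising short, specific sequences. As a toy illustration, the sketch below scans a DNA string for the EcoRI recognition site GAATTC and reports the fragments a cut would produce; the input sequence is invented, and real workflows use dedicated bioinformatics libraries rather than plain string search.

# toy illustration of restriction - enzyme digestion : find ecori sites (GAATTC)
# in a dna string and list the fragments a cut after the first G would give.
ECORI_SITE = "GAATTC"
CUT_OFFSET = 1  # EcoRI cuts between G and AATTC

def digest(sequence: str) -> list[str]:
    fragments, start = [], 0
    pos = sequence.find(ECORI_SITE)
    while pos != -1:
        fragments.append(sequence[start:pos + CUT_OFFSET])
        start = pos + CUT_OFFSET
        pos = sequence.find(ECORI_SITE, pos + 1)
    fragments.append(sequence[start:])
    return fragments

print(digest("ATGCGAATTCTTAGGCGAATTCAA"))  # ['ATGCG', 'AATTCTTAGGCG', 'AATTCAA']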
and irrigation in the alluvial south, and catchment systems stretching for tens of kilometers in the hilly north. their palaces had sophisticated drainage systems. writing was invented in mesopotamia, using the cuneiform script. many records on clay tablets and stone inscriptions have survived. these civilizations were early adopters of bronze technologies which they used for tools, weapons and monumental statuary. by 1200 bc they could cast objects 5 m long in a single piece. several of the six classic simple machines were invented in mesopotamia. mesopotamians have been credited with the invention of the wheel. the wheel and axle mechanism first appeared with the potter ' s wheel, invented in mesopotamia ( modern iraq ) during the 5th millennium bc. this led to the invention of the wheeled vehicle in mesopotamia during the early 4th millennium bc. depictions of wheeled wagons found on clay tablet pictographs at the eanna district of uruk are dated between 3700 and 3500 bc. the lever was used in the shadoof water - lifting device, the first crane machine, which appeared in mesopotamia circa 3000 bc, and then in ancient egyptian technology circa 2000 bc. the earliest evidence of pulleys date back to mesopotamia in the early 2nd millennium bc. the screw, the last of the simple machines to be invented, first appeared in mesopotamia during the neo - assyrian period ( 911 β 609 ) bc. the assyrian king sennacherib ( 704 β 681 bc ) claims to have invented automatic sluices and to have been the first to use water screw pumps, of up to 30 tons weight, which were cast using two - part clay molds rather than by the ' lost wax ' process. the jerwan aqueduct ( c. 688 bc ) is made with stone arches and lined with waterproof concrete. the babylonian astronomical diaries spanned 800 years. they enabled meticulous astronomers to plot the motions of the planets and to predict eclipses. the earliest evidence of water wheels and watermills date back to the ancient near east in the 4th century bc, specifically in the persian empire before 350 bc, in the regions of mesopotamia ( iraq ) and persia ( iran ). this pioneering use of water power constituted the first human - devised motive force not to rely on muscle power ( besides the sail ). = = = = egypt = = = = the egyptians, known for building pyramids centuries before the creation of modern tools, invented and used many simple machines, such as the ramp to aid construction processes. historians and archaeologists have found evidence that the pyramids were built using
dissipation. as well as making for highly repeatable motion, this also makes silicon very reliable as it suffers very little fatigue and can have service lifetimes in the range of billions to trillions of cycles without breaking. semiconductor nanostructures based on silicon are gaining increasing importance in the field of microelectronics and mems in particular. silicon nanowires, fabricated through the thermal oxidation of silicon, are of further interest in electrochemical conversion and storage, including nanowire batteries and photovoltaic systems. polymers even though the electronics industry provides an economy of scale for the silicon industry, crystalline silicon is still a complex and relatively expensive material to produce. polymers on the other hand can be produced in huge volumes, with a great variety of material characteristics. mems devices can be made from polymers by processes such as injection molding, embossing or stereolithography and are especially well suited to microfluidic applications such as disposable blood testing cartridges. metals metals can also be used to create mems elements. while metals do not have some of the advantages displayed by silicon in terms of mechanical properties, when used within their limitations, metals can exhibit very high degrees of reliability. metals can be deposited by electroplating, evaporation, and sputtering processes. commonly used metals include gold, nickel, aluminium, copper, chromium, titanium, tungsten, platinum, and silver. ceramics the nitrides of silicon, aluminium and titanium as well as silicon carbide and other ceramics are increasingly applied in mems fabrication due to advantageous combinations of material properties. aln crystallizes in the wurtzite structure and thus shows pyroelectric and piezoelectric properties enabling sensors, for instance, with sensitivity to normal and shear forces. tin, on the other hand, exhibits a high electrical conductivity and large elastic modulus, making it possible to implement electrostatic mems actuation schemes with ultrathin beams. moreover, the high resistance of tin against biocorrosion qualifies the material for applications in biogenic environments. the figure shows an electron - microscopic picture of a mems biosensor with a 50 nm thin bendable tin beam above a tin ground plate. both can be driven as opposite electrodes of a capacitor, since the beam is fixed in electrically isolating side walls. when a fluid is suspended in the cavity its viscosity may be derived from bending the beam by electrical attraction to the ground
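The TiN beam and ground plate described above form a parallel-plate capacitor, so the electrostatic pull that bends the beam can be estimated from the standard relation F = 0.5 * eps0 * eps_r * A * V^2 / d^2, neglecting fringing fields. The dimensions and drive voltage in the sketch below are invented for illustration and are not taken from the device in the text.

# parallel - plate estimate of the electrostatic force pulling a mems beam
# toward its ground plate ( fringing fields neglected ; example values only ).
EPS0 = 8.854e-12  # F / m

def electrostatic_force(area_m2: float, gap_m: float, volts: float, eps_r: float = 1.0) -> float:
    return 0.5 * EPS0 * eps_r * area_m2 * volts ** 2 / gap_m ** 2

# assumed 100 um x 5 um beam face, 1 um gap, 5 V drive
f = electrostatic_force(area_m2=100e-6 * 5e-6, gap_m=1e-6, volts=5.0)
print(f"{f:.2e} N")  # on the order of tens of nanonewtons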
##rs in their design. from that time on transistors were almost exclusively used for computer logic circuits and peripheral devices. however, early junction transistors were relatively bulky devices that were difficult to manufacture on a mass - production basis, which limited them to a number of specialised applications. the mosfet was invented at bell labs between 1955 and 1960. it was the first truly compact transistor that could be miniaturised and mass - produced for a wide range of uses. its advantages include high scalability, affordability, low power consumption, and high density. it revolutionized the electronics industry, becoming the most widely used electronic device in the world. the mosfet is the basic element in most modern electronic equipment. as the complexity of circuits grew, problems arose. one problem was the size of the circuit. a complex circuit like a computer was dependent on speed. if the components were large, the wires interconnecting them must be long. the electric signals took time to go through the circuit, thus slowing the computer. the invention of the integrated circuit by jack kilby and robert noyce solved this problem by making all the components and the chip out of the same block ( monolith ) of semiconductor material. the circuits could be made smaller, and the manufacturing process could be automated. this led to the idea of integrating all components on a single - crystal silicon wafer, which led to small - scale integration ( ssi ) in the early 1960s, and then medium - scale integration ( msi ) in the late 1960s, followed by vlsi. in 2008, billion - transistor processors became commercially available. = = subfields = = = = devices and components = = an electronic component is any component in an electronic system either active or passive. components are connected together, usually by being soldered to a printed circuit board ( pcb ), to create an electronic circuit with a particular function. components may be packaged singly, or in more complex groups as integrated circuits. passive electronic components are capacitors, inductors, resistors, whilst active components are such as semiconductor devices ; transistors and thyristors, which control current flow at electron level. = = types of circuits = = electronic circuit functions can be divided into two function groups : analog and digital. a particular device may consist of circuitry that has either or a mix of the two types. analog circuits are becoming less common, as many of their functions are being digitized. = = = analog circuits = =
the broad definition of " utilizing a biotechnological system to make products ". indeed, the cultivation of plants may be viewed as the earliest biotechnological enterprise. agriculture has been theorized to have become the dominant way of producing food since the neolithic revolution. through early biotechnology, the earliest farmers selected and bred the best - suited crops ( e. g., those with the highest yields ) to produce enough food to support a growing population. as crops and fields became increasingly large and difficult to maintain, it was discovered that specific organisms and their by - products could effectively fertilize, restore nitrogen, and control pests. throughout the history of agriculture, farmers have inadvertently altered the genetics of their crops through introducing them to new environments and breeding them with other plants β one of the first forms of biotechnology. these processes also were included in early fermentation of beer. these processes were introduced in early mesopotamia, egypt, china and india, and still use the same basic biological methods. in brewing, malted grains ( containing enzymes ) convert starch from grains into sugar and then adding specific yeasts to produce beer. in this process, carbohydrates in the grains broke down into alcohols, such as ethanol. later, other cultures produced the process of lactic acid fermentation, which produced other preserved foods, such as soy sauce. fermentation was also used in this time period to produce leavened bread. although the process of fermentation was not fully understood until louis pasteur ' s work in 1857, it is still the first use of biotechnology to convert a food source into another form. before the time of charles darwin ' s work and life, animal and plant scientists had already used selective breeding. darwin added to that body of work with his scientific observations about the ability of science to change species. these accounts contributed to darwin ' s theory of natural selection. for thousands of years, humans have used selective breeding to improve the production of crops and livestock to use them for food. in selective breeding, organisms with desirable characteristics are mated to produce offspring with the same characteristics. for example, this technique was used with corn to produce the largest and sweetest crops. in the early twentieth century scientists gained a greater understanding of microbiology and explored ways of manufacturing specific products. in 1917, chaim weizmann first used a pure microbiological culture in an industrial process, that of manufacturing corn starch using clostridium acetobutylicum, to produce acetone, which the united
##dians, assyrians and babylonians ) lived in cities from c. 4000 bc, and developed a sophisticated architecture in mud - brick and stone, including the use of the true arch. the walls of babylon were so massive they were quoted as a wonder of the world. they developed extensive water systems ; canals for transport and irrigation in the alluvial south, and catchment systems stretching for tens of kilometers in the hilly north. their palaces had sophisticated drainage systems. writing was invented in mesopotamia, using the cuneiform script. many records on clay tablets and stone inscriptions have survived. these civilizations were early adopters of bronze technologies which they used for tools, weapons and monumental statuary. by 1200 bc they could cast objects 5 m long in a single piece. several of the six classic simple machines were invented in mesopotamia. mesopotamians have been credited with the invention of the wheel. the wheel and axle mechanism first appeared with the potter ' s wheel, invented in mesopotamia ( modern iraq ) during the 5th millennium bc. this led to the invention of the wheeled vehicle in mesopotamia during the early 4th millennium bc. depictions of wheeled wagons found on clay tablet pictographs at the eanna district of uruk are dated between 3700 and 3500 bc. the lever was used in the shadoof water - lifting device, the first crane machine, which appeared in mesopotamia circa 3000 bc, and then in ancient egyptian technology circa 2000 bc. the earliest evidence of pulleys date back to mesopotamia in the early 2nd millennium bc. the screw, the last of the simple machines to be invented, first appeared in mesopotamia during the neo - assyrian period ( 911 β 609 ) bc. the assyrian king sennacherib ( 704 β 681 bc ) claims to have invented automatic sluices and to have been the first to use water screw pumps, of up to 30 tons weight, which were cast using two - part clay molds rather than by the ' lost wax ' process. the jerwan aqueduct ( c. 688 bc ) is made with stone arches and lined with waterproof concrete. the babylonian astronomical diaries spanned 800 years. they enabled meticulous astronomers to plot the motions of the planets and to predict eclipses. the earliest evidence of water wheels and watermills date back to the ancient near east in the 4th century bc, specifically in the persian empire before 350 bc, in the regions of mesopotamia ( iraq ) and persia ( iran ). this pioneering use of water power constituted the first human - devised motive force not to
in 2015 the fda approved the first gm salmon for commercial production and consumption. there is a scientific consensus that currently available food derived from gm crops poses no greater risk to human health than conventional food, but that each gm food needs to be tested on a case - by - case basis before introduction. nonetheless, members of the public are much less likely than scientists to perceive gm foods as safe. the legal and regulatory status of gm foods varies by country, with some nations banning or restricting them, and others permitting them with widely differing degrees of regulation. gm crops also provide a number of ecological benefits, if not used in excess. insect - resistant crops have proven to lower pesticide usage, therefore reducing the environmental impact of pesticides as a whole. however, opponents have objected to gm crops per se on several grounds, including environmental concerns, whether food produced from gm crops is safe, whether gm crops are needed to address the world ' s food needs, and economic concerns raised by the fact these organisms are subject to intellectual property law. biotechnology has several applications in the realm of food security. crops like golden rice are engineered to have higher nutritional content, and there is potential for food products with longer shelf lives. though not a form of agricultural biotechnology, vaccines can help prevent diseases found in animal agriculture. additionally, agricultural biotechnology can expedite breeding processes in order to yield faster results and provide greater quantities of food. transgenic biofortification in cereals has been considered as a promising method to combat malnutrition in india and other countries. = = = industrial = = = industrial biotechnology ( known mainly in europe as white biotechnology ) is the application of biotechnology for industrial purposes, including industrial fermentation. it includes the practice of using cells such as microorganisms, or components of cells like enzymes, to generate industrially useful products in sectors such as chemicals, food and feed, detergents, paper and pulp, textiles and biofuels. in the current decades, significant progress has been done in creating genetically modified organisms ( gmos ) that enhance the diversity of applications and economical viability of industrial biotechnology. by using renewable raw materials to produce a variety of chemicals and fuels, industrial biotechnology is actively advancing towards lowering greenhouse gas emissions and moving away from a petrochemical - based economy. synthetic biology is considered one of the essential cornerstones in industrial biotechnology due to its financial and sustainable contribution to the manufacturing sector. jointly biotechnology and synthetic biology play a crucial role in generating cost - effective products with nature - friendly features by using bio - based
best - known and controversial applications of genetic engineering is the creation and use of genetically modified crops or genetically modified livestock to produce genetically modified food. crops have been developed to increase production, increase tolerance to abiotic stresses, alter the composition of the food, or to produce novel products. the first crops to be released commercially on a large scale provided protection from insect pests or tolerance to herbicides. fungal and virus resistant crops have also been developed or are in development. this makes the insect and weed management of crops easier and can indirectly increase crop yield. gm crops that directly improve yield by accelerating growth or making the plant more hardy ( by improving salt, cold or drought tolerance ) are also under development. in 2016 salmon have been genetically modified with growth hormones to reach normal adult size much faster. gmos have been developed that modify the quality of produce by increasing the nutritional value or providing more industrially useful qualities or quantities. the amflora potato produces a more industrially useful blend of starches. soybeans and canola have been genetically modified to produce more healthy oils. the first commercialised gm food was a tomato that had delayed ripening, increasing its shelf life. plants and animals have been engineered to produce materials they do not normally make. pharming uses crops and animals as bioreactors to produce vaccines, drug intermediates, or the drugs themselves ; the useful product is purified from the harvest and then used in the standard pharmaceutical production process. cows and goats have been engineered to express drugs and other proteins in their milk, and in 2009 the fda approved a drug produced in goat milk. = = = other applications = = = genetic engineering has potential applications in conservation and natural area management. gene transfer through viral vectors has been proposed as a means of controlling invasive species as well as vaccinating threatened fauna from disease. transgenic trees have been suggested as a way to confer resistance to pathogens in wild populations. with the increasing risks of maladaptation in organisms as a result of climate change and other perturbations, facilitated adaptation through gene tweaking could be one solution to reducing extinction risks. applications of genetic engineering in conservation are thus far mostly theoretical and have yet to be put into practice. genetic engineering is also being used to create microbial art. some bacteria have been genetically engineered to create black and white photographs. novelty items such as lavender - colored carnations, blue roses, and glowing fish, have also been produced through genetic engineering. = = regulation = = the regulation of genetic engineering
Question: When the pesticide DDT was first used, it killed nearly every mosquito it touched. Within a few years, however, many mosquitoes became resistant to DDT and survived. What enabled this to happen?
A) meiosis
B) migration
C) immune responses
D) gene mutations
|
D) gene mutations
|
Context:
winds from agn and quasars will form large amounts of dust, as the cool gas in these winds passes through the ( pressure, temperature ) region where dust is formed in agb stars. conditions in the gas are benign to dust at these radii. as a result quasar winds may be a major source of dust at high redshifts, obviating a difficulty with current observations, and requiring far less dust to exist at early epochs.
ambient air ( see lockheed f - 117 nighthawk, rectangular nozzles on the lockheed martin f - 22 raptor, and serrated nozzle flaps on the lockheed martin f - 35 lightning ). often, cool air is deliberately injected into the exhaust flow to boost this process ( see ryan aqm - 91 firefly and northrop b - 2 spirit ). the stefan – boltzmann law shows how this results in less energy ( thermal radiation in infrared spectrum ) being released and thus reduces the heat signature. in some aircraft, the jet exhaust is vented above the wing surface to shield it from observers below, as in the lockheed f - 117 nighthawk, and the unstealthy fairchild republic a - 10 thunderbolt ii. to achieve infrared stealth, the exhaust gas is cooled to the temperatures where the brightest wavelengths it radiates are absorbed by atmospheric carbon dioxide and water vapor, greatly reducing the infrared visibility of the exhaust plume. another way to reduce the exhaust temperature is to circulate coolant fluids such as fuel inside the exhaust pipe, where the fuel tanks serve as heat sinks cooled by the flow of air along the wings. ground combat includes the use of both active and passive infrared sensors. thus, the united states marine corps ( usmc ) ground combat uniform requirements document specifies infrared reflective quality standards. = = reducing radio frequency ( rf ) emissions = = in addition to reducing infrared and acoustic emissions, a stealth vehicle must avoid radiating any other detectable energy, such as from onboard radars, communications systems, or rf leakage from electronics enclosures. the f - 117 uses passive infrared and low light level television sensor systems to aim its weapons and the f - 22 raptor has an advanced lpi radar which can illuminate enemy aircraft without triggering a radar warning receiver response. = = measuring = = the size of a target ' s image on radar is measured by the rcs, often represented by the symbol σ and expressed in square meters. this does not equal geometric area. a perfectly conducting sphere of projected cross sectional area 1 m2 ( i. e. a diameter of 1. 13 m ) will have an rcs of 1 m2. note that for radar wavelengths much less than the diameter of the sphere, rcs is independent of frequency. conversely, a square flat plate of area 1 m2 will have an rcs of σ = 4π a2 / λ2 ( where a = area, λ = wavelength ), or 13, 982 m2 at 10 ghz if the radar is perpendicular to the flat
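The flat-plate figure quoted in the passage above can be checked numerically. A minimal sketch of the normal-incidence formula σ = 4π a2 / λ2; the function and variable names are ours, not from the passage:

import math

def flat_plate_rcs(area_m2, freq_hz):
    # peak (normal-incidence) radar cross-section of a flat conducting plate:
    # sigma = 4 * pi * A^2 / lambda^2
    wavelength = 299_792_458.0 / freq_hz   # radar wavelength in metres
    return 4.0 * math.pi * area_m2 ** 2 / wavelength ** 2

print(round(flat_plate_rcs(1.0, 10e9)))    # ~13982 m^2 for a 1 m^2 plate at 10 GHz, as quoted above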
in steady state, the fuel cycle of a fusion plasma requires inward particle fluxes of fuel ions. these particle flows are also accompanied by heating. in the case of classical transport in a rotating cylindrical plasma, this heating can proceed through several distinct channels depending on the physical mechanisms involved. some channels directly heat the fuel ions themselves, whereas others heat electrons. which channel dominates depends, in general, on the details of the temperature, density, and rotation profiles of the plasma constituents. however, remarkably, under relatively few assumptions concerning these profiles, if the alpha particles, the byproducts of the fusion reaction, can be removed directly by other means, a hot - ion mode tends to emerge naturally.
higher concentrations of atmospheric nitrous oxide ( n2o ) are expected to slightly warm earth ' s surface because of increases in radiative forcing. radiative forcing is the difference in the net upward thermal radiation flux from the earth through a transparent atmosphere and radiation through an otherwise identical atmosphere with greenhouse gases. radiative forcing, normally measured in w / m ^ 2, depends on latitude, longitude and altitude, but it is often quoted for the tropopause, about 11 km of altitude for temperate latitudes, or for the top of the atmosphere at around 90 km. for current concentrations of greenhouse gases, the radiative forcing per added n2o molecule is about 230 times larger than the forcing per added carbon dioxide ( co2 ) molecule. this is due to the heavy saturation of the absorption band of the relatively abundant greenhouse gas, co2, compared to the much smaller saturation of the absorption bands of the trace greenhouse gas n2o. but the rate of increase of co2 molecules, about 2. 5 ppm / year ( ppm = part per million by mole ), is about 3000 times larger than the rate of increase of n2o molecules, which has held steady at around 0. 00085 ppm / year since 1985. so, the contribution of nitrous oxide to the annual increase in forcing is 230 / 3000 or about 1 / 13 that of co2. if the main greenhouse gases, co2, ch4 and n2o have contributed about 0. 1 c / decade of the warming observed over the past few decades, this would correspond to about 0. 00064 k per year or 0. 064 k per century of warming from n2o. proposals to place harsh restrictions on nitrous oxide emissions because of warming fears are not justified by these facts. restrictions would cause serious harm ; for example, by jeopardizing world food supplies.
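The "about 1 / 13" figure above follows directly from the numbers given in the passage. A quick sketch reproducing that arithmetic (the inputs are the passage's own values, not independently verified here):

forcing_ratio_per_molecule = 230        # added N2O molecule vs added CO2 molecule (from the passage)
co2_growth_ppm_per_year = 2.5           # from the passage
n2o_growth_ppm_per_year = 0.00085       # from the passage

growth_ratio = co2_growth_ppm_per_year / n2o_growth_ppm_per_year    # ~2941, the passage's "about 3000"
n2o_share = forcing_ratio_per_molecule / growth_ratio                # ~0.078, i.e. roughly 1/13
print(round(growth_ratio), round(n2o_share, 3))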
casting, also called the lost wax process, die casting, centrifugal casting, both vertical and horizontal, and continuous castings. each of these forms has advantages for certain metals and applications considering factors like magnetism and corrosion. forging β a red - hot billet is hammered into shape. rolling β a billet is passed through successively narrower rollers to create a sheet. extrusion β a hot and malleable metal is forced under pressure through a die, which shapes it before it cools. machining β lathes, milling machines and drills cut the cold metal to shape. sintering β a powdered metal is heated in a non - oxidizing environment after being compressed into a die. fabrication β sheets of metal are cut with guillotines or gas cutters and bent and welded into structural shape. laser cladding β metallic powder is blown through a movable laser beam ( e. g. mounted on a nc 5 - axis machine ). the resulting melted metal reaches a substrate to form a melt pool. by moving the laser head, it is possible to stack the tracks and build up a three - dimensional piece. 3d printing β sintering or melting amorphous powder metal in a 3d space to make any object to shape. cold - working processes, in which the product ' s shape is altered by rolling, fabrication or other processes, while the product is cold, can increase the strength of the product by a process called work hardening. work hardening creates microscopic defects in the metal, which resist further changes of shape. = = = heat treatment = = = metals can be heat - treated to alter the properties of strength, ductility, toughness, hardness and resistance to corrosion. common heat treatment processes include annealing, precipitation strengthening, quenching, and tempering : annealing process softens the metal by heating it and then allowing it to cool very slowly, which gets rid of stresses in the metal and makes the grain structure large and soft - edged so that, when the metal is hit or stressed it dents or perhaps bends, rather than breaking ; it is also easier to sand, grind, or cut annealed metal. quenching is the process of cooling metal very quickly after heating, thus " freezing " the metal ' s molecules in the very hard martensite form, which makes the metal harder. tempering relieves stresses in the metal that were caused by the hardening process ; tempering makes the metal less hard while making it better able to sustain
the results of hydrodynamic simulations of the virgo and perseus clusters suggest that thermal conduction is not responsible for the observed temperature and density profiles. as a result it seems that thermal conduction occurs at a much lower level than the spitzer value. comparing cavity enthalpies to the radiative losses within the cooling radius for seven clusters suggests that some clusters are probably heated by sporadic, but extremely powerful, agn outflows interspersed between more frequent but lower power outflows.
enough to rise to the surface β giving birth to volcanoes. = = atmospheric science = = atmospheric science initially developed in the late - 19th century as a means to forecast the weather through meteorology, the study of weather. atmospheric chemistry was developed in the 20th century to measure air pollution and expanded in the 1970s in response to acid rain. climatology studies the climate and climate change. the troposphere, stratosphere, mesosphere, thermosphere, and exosphere are the five layers which make up earth ' s atmosphere. 75 % of the mass in the atmosphere is located within the troposphere, the lowest layer. in all, the atmosphere is made up of about 78. 0 % nitrogen, 20. 9 % oxygen, and 0. 92 % argon, and small amounts of other gases including co2 and water vapor. water vapor and co2 cause the earth ' s atmosphere to catch and hold the sun ' s energy through the greenhouse effect. this makes earth ' s surface warm enough for liquid water and life. in addition to trapping heat, the atmosphere also protects living organisms by shielding the earth ' s surface from cosmic rays. the magnetic field β created by the internal motions of the core β produces the magnetosphere which protects earth ' s atmosphere from the solar wind. as the earth is 4. 5 billion years old, it would have lost its atmosphere by now if there were no protective magnetosphere. = = earth ' s magnetic field = = = = hydrology = = hydrology is the study of the hydrosphere and the movement of water on earth. it emphasizes the study of how humans use and interact with freshwater supplies. study of water ' s movement is closely related to geomorphology and other branches of earth science. applied hydrology involves engineering to maintain aquatic environments and distribute water supplies. subdisciplines of hydrology include oceanography, hydrogeology, ecohydrology, and glaciology. oceanography is the study of oceans. hydrogeology is the study of groundwater. it includes the mapping of groundwater supplies and the analysis of groundwater contaminants. applied hydrogeology seeks to prevent contamination of groundwater and mineral springs and make it available as drinking water. the earliest exploitation of groundwater resources dates back to 3000 bc, and hydrogeology as a science was developed by hydrologists beginning in the 17th century. ecohydrology is the study of ecological systems in the hydrosphere. it can be divided into the physical study of aquatic ecosystems and the
within protogalaxies, thermal instability leads to the formation of a population of cool fragments, confined by the pressure of residual hot gas. the hot gas remains in quasi - hydrostatic equilibrium, at approximately the virial temperature of the dark matter halo. it is heated by compression and shock dissipation and is cooled by bremsstrahlung emission and conductive losses into the cool clouds. the cool fragments are photoionized and heated by the extragalactic uv background and nearby massive stars. the smallest clouds are evaporated due to conductive heat transfer from the hot gas. all are subject to disruption due to hydrodynamic instabilities. they also gain mass due to collisions and mergers and condensation from the hot gas due to conduction. the size distribution of the fragments in turn determines the rate and efficiency of star formation during the early phase of galactic evolution. we have performed one - dimensional hydrodynamic simulations of the evolution of the hot and cool gas. the cool clouds are assumed to follow a power - law size distribution, and fall into the galactic potential, subject to drag from the hot gas. the relative amounts of the hot and cool gas is determined by the processes discussed above, and star formation occurs at a rate sufficient to maintain the cool clouds at 10 $ ^ 4 $ k. we present density distributions for the two phases and also for the stars for several cases, parametrized by the circular speeds of the potentials. under some conditions, primarily low densities of the hot gas, conduction is more efficient than radiative processes at cooling the hot gas, limiting the x - ray radiation from the halo gas.
modeling of the x - ray spectra of the galactic superluminal jet sources grs 1915 + 105 and gro j1655 - 40 reveal a three - layered atmospheric structure in the inner region of their accretion disks. above the cold and optically thick disk of a temperature 0. 2 - 0. 5 kev, there is a warm layer with a temperature of 1. 0 - 1. 5 kev and an optical depth around 10. sometimes there is also a much hotter, optically thin corona above the warm layer, with a temperature of 100 kev or higher and an optical depth around unity. the structural similarity between the accretion disks and the solar atmosphere suggest that similar physical processes may be operating in these different systems.
is also higher at high temperature, as shown by carnot ' s theorem. in a conventional metallic engine, much of the energy released from the fuel must be dissipated as waste heat in order to prevent a meltdown of the metallic parts. despite all of these desirable properties, such engines are not in production because the manufacturing of ceramic parts in the requisite precision and durability is difficult. imperfection in the ceramic leads to cracks, which can lead to potentially dangerous equipment failure. such engines are possible in laboratory settings, but mass - production is not feasible with current technology. work is being done in developing ceramic parts for gas turbine engines. currently, even blades made of advanced metal alloys used in the engines ' hot section require cooling and careful limiting of operating temperatures. turbine engines made with ceramics could operate more efficiently, giving aircraft greater range and payload for a set amount of fuel. recently, there have been advances in ceramics which include bio - ceramics, such as dental implants and synthetic bones. hydroxyapatite, the natural mineral component of bone, has been made synthetically from a number of biological and chemical sources and can be formed into ceramic materials. orthopedic implants made from these materials bond readily to bone and other tissues in the body without rejection or inflammatory reactions. because of this, they are of great interest for gene delivery and tissue engineering scaffolds. most hydroxyapatite ceramics are very porous and lack mechanical strength and are used to coat metal orthopedic devices to aid in forming a bond to bone or as bone fillers. they are also used as fillers for orthopedic plastic screws to aid in reducing the inflammation and increase absorption of these plastic materials. work is being done to make strong, fully dense nano crystalline hydroxyapatite ceramic materials for orthopedic weight bearing devices, replacing foreign metal and plastic orthopedic materials with a synthetic, but naturally occurring, bone mineral. ultimately these ceramic materials may be used as bone replacements or with the incorporation of protein collagens, synthetic bones. durable actinide - containing ceramic materials have many applications such as in nuclear fuels for burning excess pu and in chemically - inert sources of alpha irradiation for power supply of unmanned space vehicles or to produce electricity for microelectronic devices. both use and disposal of radioactive actinides require their immobilization in a durable host material. nuclear waste long - lived radionuclides such as actinides are immobilized using chemical
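The Carnot argument at the start of this passage can be made concrete. A minimal illustration of the ideal Carnot bound η = 1 − T_cold / T_hot; the temperatures below are illustrative placeholders rather than figures from the passage, and real engines fall well short of this limit:

def carnot_limit(t_hot_kelvin, t_cold_kelvin):
    # ideal Carnot efficiency bound: eta = 1 - T_cold / T_hot
    return 1.0 - t_cold_kelvin / t_hot_kelvin

# illustrative comparison: ~300 K heat rejection, conventional hot section vs a much hotter ceramic one
print(carnot_limit(1200.0, 300.0))   # 0.75
print(carnot_limit(3600.0, 300.0))   # ~0.92 -- higher operating temperature, higher ideal efficiency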
Question: When air is heated, it will most likely
A) expand and fall.
B) expand and rise.
C) condense and fall.
D) condense and rise.
|
B) expand and rise.
|
Context:
; austrian experts have established that the wheel is between 5, 100 and 5, 350 years old. the invention of the wheel revolutionized trade and war. it did not take long to discover that wheeled wagons could be used to carry heavy loads. the ancient sumerians used a potter ' s wheel and may have invented it. a stone pottery wheel found in the city - state of ur dates to around 3, 429 bce, and even older fragments of wheel - thrown pottery have been found in the same area. fast ( rotary ) potters ' wheels enabled early mass production of pottery, but it was the use of the wheel as a transformer of energy ( through water wheels, windmills, and even treadmills ) that revolutionized the application of nonhuman power sources. the first two - wheeled carts were derived from travois and were first used in mesopotamia and iran in around 3, 000 bce. the oldest known constructed roadways are the stone - paved streets of the city - state of ur, dating to c. 4, 000 bce, and timber roads leading through the swamps of glastonbury, england, dating to around the same period. the first long - distance road, which came into use around 3, 500 bce, spanned 2, 400 km from the persian gulf to the mediterranean sea, but was not paved and was only partially maintained. in around 2, 000 bce, the minoans on the greek island of crete built a 50 km road leading from the palace of gortyn on the south side of the island, through the mountains, to the palace of knossos on the north side of the island. unlike the earlier road, the minoan road was completely paved. ancient minoan private homes had running water. a bathtub virtually identical to modern ones was unearthed at the palace of knossos. several minoan private homes also had toilets, which could be flushed by pouring water down the drain. the ancient romans had many public flush toilets, which emptied into an extensive sewage system. the primary sewer in rome was the cloaca maxima ; construction began on it in the sixth century bce and it is still in use today. the ancient romans also had a complex system of aqueducts, which were used to transport water across long distances. the first roman aqueduct was built in 312 bce. the eleventh and final ancient roman aqueduct was built in 226 ce. put together, the roman aqueducts extended over 450 km, but less than 70 km of this was above ground
, heat from friction during rolling can cause problems for metal bearings ; problems which are reduced by the use of ceramics. ceramics are also more chemically resistant and can be used in wet environments where steel bearings would rust. the major drawback to using ceramics is a significantly higher cost. in many cases their electrically insulating properties may also be valuable in bearings. in the early 1980s, toyota researched production of an adiabatic ceramic engine which can run at a temperature of over 6000 Β°f ( 3300 Β°c ). ceramic engines do not require a cooling system and hence allow a major weight reduction and therefore greater fuel efficiency. fuel efficiency of the engine is also higher at high temperature, as shown by carnot ' s theorem. in a conventional metallic engine, much of the energy released from the fuel must be dissipated as waste heat in order to prevent a meltdown of the metallic parts. despite all of these desirable properties, such engines are not in production because the manufacturing of ceramic parts in the requisite precision and durability is difficult. imperfection in the ceramic leads to cracks, which can lead to potentially dangerous equipment failure. such engines are possible in laboratory settings, but mass - production is not feasible with current technology. work is being done in developing ceramic parts for gas turbine engines. currently, even blades made of advanced metal alloys used in the engines ' hot section require cooling and careful limiting of operating temperatures. turbine engines made with ceramics could operate more efficiently, giving aircraft greater range and payload for a set amount of fuel. recently, there have been advances in ceramics which include bio - ceramics, such as dental implants and synthetic bones. hydroxyapatite, the natural mineral component of bone, has been made synthetically from a number of biological and chemical sources and can be formed into ceramic materials. orthopedic implants made from these materials bond readily to bone and other tissues in the body without rejection or inflammatory reactions. because of this, they are of great interest for gene delivery and tissue engineering scaffolds. most hydroxyapatite ceramics are very porous and lack mechanical strength and are used to coat metal orthopedic devices to aid in forming a bond to bone or as bone fillers. they are also used as fillers for orthopedic plastic screws to aid in reducing the inflammation and increase absorption of these plastic materials. work is being done to make strong, fully dense nano crystalline hydroxyapatite ceramic materials for orthopedic weight bearing devices, replacing foreign metal and plastic orthopedic materials
in 1738. the spinning jenny, invented in 1764, was a machine that used multiple spinning wheels ; however, it produced low quality thread. the water frame patented by richard arkwright in 1767, produced a better quality thread than the spinning jenny. the spinning mule, patented in 1779 by samuel crompton, produced a high quality thread. the power loom was invented by edmund cartwright in 1787. in the mid - 1750s, the steam engine was applied to the water power - constrained iron, copper and lead industries for powering blast bellows. these industries were located near the mines, some of which were using steam engines for mine pumping. steam engines were too powerful for leather bellows, so cast iron blowing cylinders were developed in 1768. steam powered blast furnaces achieved higher temperatures, allowing the use of more lime in iron blast furnace feed. ( lime rich slag was not free - flowing at the previously used temperatures. ) with a sufficient lime ratio, sulfur from coal or coke fuel reacts with the slag so that the sulfur does not contaminate the iron. coal and coke were cheaper and more abundant fuel. as a result, iron production rose significantly during the last decades of the 18th century. coal converted to coke fueled higher temperature blast furnaces and produced cast iron in much larger amounts than before, allowing the creation of a range of structures such as the iron bridge. cheap coal meant that industry was no longer constrained by water resources driving the mills, although it continued as a valuable source of power. the steam engine helped drain the mines, so more coal reserves could be accessed, and the output of coal increased. the development of the high - pressure steam engine made locomotives possible, and a transport revolution followed. the steam engine which had existed since the early 18th century, was practically applied to both steamboat and railway transportation. the liverpool and manchester railway, the first purpose - built railway line, opened in 1830, the rocket locomotive of robert stephenson being one of its first working locomotives used. manufacture of ships ' pulley blocks by all - metal machines at the portsmouth block mills in 1803 instigated the age of sustained mass production. machine tools used by engineers to manufacture parts began in the first decade of the century, notably by richard roberts and joseph whitworth. the development of interchangeable parts through what is now called the american system of manufacturing began in the firearms industry at the u. s. federal arsenals in the early 19th century, and became widely used by the end of the century. until the enlightenment era, little progress
electric motors, servo - mechanisms, and other electrical systems in conjunction with special software. a common example of a mechatronics system is a cd - rom drive. mechanical systems open and close the drive, spin the cd and move the laser, while an optical system reads the data on the cd and converts it to bits. integrated software controls the process and communicates the contents of the cd to the computer. robotics is the application of mechatronics to create robots, which are often used in industry to perform tasks that are dangerous, unpleasant, or repetitive. these robots may be of any shape and size, but all are preprogrammed and interact physically with the world. to create a robot, an engineer typically employs kinematics ( to determine the robot ' s range of motion ) and mechanics ( to determine the stresses within the robot ). robots are used extensively in industrial automation engineering. they allow businesses to save money on labor, perform tasks that are either too dangerous or too precise for humans to perform them economically, and to ensure better quality. many companies employ assembly lines of robots, especially in automotive industries and some factories are so robotized that they can run by themselves. outside the factory, robots have been employed in bomb disposal, space exploration, and many other fields. robots are also sold for various residential applications, from recreation to domestic applications. = = = structural analysis = = = structural analysis is the branch of mechanical engineering ( and also civil engineering ) devoted to examining why and how objects fail and to fix the objects and their performance. structural failures occur in two general modes : static failure, and fatigue failure. static structural failure occurs when, upon being loaded ( having a force applied ) the object being analyzed either breaks or is deformed plastically, depending on the criterion for failure. fatigue failure occurs when an object fails after a number of repeated loading and unloading cycles. fatigue failure occurs because of imperfections in the object : a microscopic crack on the surface of the object, for instance, will grow slightly with each cycle ( propagation ) until the crack is large enough to cause ultimate failure. failure is not simply defined as when a part breaks, however ; it is defined as when a part does not operate as intended. some systems, such as the perforated top sections of some plastic bags, are designed to break. if these systems do not break, failure analysis might be employed to determine the cause. structural analysis is often used by mechanical engineers after a failure has occurred, or when designing to prevent failure
time estimates range from 5, 500 to 3, 000 bce with most experts putting it closer to 4, 000 bce. the oldest artifacts with drawings depicting wheeled carts date from about 3, 500 bce. more recently, the oldest - known wooden wheel in the world as of 2024 was found in the ljubljana marsh of slovenia ; austrian experts have established that the wheel is between 5, 100 and 5, 350 years old. the invention of the wheel revolutionized trade and war. it did not take long to discover that wheeled wagons could be used to carry heavy loads. the ancient sumerians used a potter ' s wheel and may have invented it. a stone pottery wheel found in the city - state of ur dates to around 3, 429 bce, and even older fragments of wheel - thrown pottery have been found in the same area. fast ( rotary ) potters ' wheels enabled early mass production of pottery, but it was the use of the wheel as a transformer of energy ( through water wheels, windmills, and even treadmills ) that revolutionized the application of nonhuman power sources. the first two - wheeled carts were derived from travois and were first used in mesopotamia and iran in around 3, 000 bce. the oldest known constructed roadways are the stone - paved streets of the city - state of ur, dating to c. 4, 000 bce, and timber roads leading through the swamps of glastonbury, england, dating to around the same period. the first long - distance road, which came into use around 3, 500 bce, spanned 2, 400 km from the persian gulf to the mediterranean sea, but was not paved and was only partially maintained. in around 2, 000 bce, the minoans on the greek island of crete built a 50 km road leading from the palace of gortyn on the south side of the island, through the mountains, to the palace of knossos on the north side of the island. unlike the earlier road, the minoan road was completely paved. ancient minoan private homes had running water. a bathtub virtually identical to modern ones was unearthed at the palace of knossos. several minoan private homes also had toilets, which could be flushed by pouring water down the drain. the ancient romans had many public flush toilets, which emptied into an extensive sewage system. the primary sewer in rome was the cloaca maxima ; construction began on it in the sixth century bce and it is still in use today. the ancient romans
three of what is called the six simple machines, from which all machines are based. these machines are the inclined plane, the wedge, and the lever, which allowed the ancient egyptians to move millions of limestone blocks which weighed approximately 3. 5 tons ( 7, 000 lbs. ) each into place to create structures like the great pyramid of giza, which is 481 feet ( 147 meters ) high. they also made writing medium similar to paper from papyrus, which joshua mark states is the foundation for modern paper. papyrus is a plant ( cyperus papyrus ) which grew in plentiful amounts in the egyptian delta and throughout the nile river valley during ancient times. the papyrus was harvested by field workers and brought to processing centers where it was cut into thin strips. the strips were then laid - out side by side and covered in plant resin. the second layer of strips was laid on perpendicularly, then both pressed together until the sheet was dry. the sheets were then joined to form a roll and later used for writing. egyptian society made several significant advances during dynastic periods in many areas of technology. according to hossam elanzeery, they were the first civilization to use timekeeping devices such as sundials, shadow clocks, and obelisks and successfully leveraged their knowledge of astronomy to create a calendar model that society still uses today. they developed shipbuilding technology that saw them progress from papyrus reed vessels to cedar wood ships while also pioneering the use of rope trusses and stem - mounted rudders. the egyptians also used their knowledge of anatomy to lay the foundation for many modern medical techniques and practiced the earliest known version of neuroscience. elanzeery also states that they used and furthered mathematical science, as evidenced in the building of the pyramids. ancient egyptians also invented and pioneered many food technologies that have become the basis of modern food technology processes. based on paintings and reliefs found in tombs, as well as archaeological artifacts, scholars like paul t nicholson believe that the ancient egyptians established systematic farming practices, engaged in cereal processing, brewed beer and baked bread, processed meat, practiced viticulture and created the basis for modern wine production, and created condiments to complement, preserve and mask the flavors of their food. = = = = indus valley = = = = the indus valley civilization, situated in a resource - rich area ( in modern pakistan and northwestern india ), is notable for its early application of city planning, sanitation technologies, and plumbing. indus valley construction and architecture, called ' vaastu
material. silicon nitride parts are used in ceramic ball bearings. their higher hardness means that they are much less susceptible to wear and can offer more than triple lifetimes. they also deform less under load meaning they have less contact with the bearing retainer walls and can roll faster. in very high speed applications, heat from friction during rolling can cause problems for metal bearings ; problems which are reduced by the use of ceramics. ceramics are also more chemically resistant and can be used in wet environments where steel bearings would rust. the major drawback to using ceramics is a significantly higher cost. in many cases their electrically insulating properties may also be valuable in bearings. in the early 1980s, toyota researched production of an adiabatic ceramic engine which can run at a temperature of over 6000 Β°f ( 3300 Β°c ). ceramic engines do not require a cooling system and hence allow a major weight reduction and therefore greater fuel efficiency. fuel efficiency of the engine is also higher at high temperature, as shown by carnot ' s theorem. in a conventional metallic engine, much of the energy released from the fuel must be dissipated as waste heat in order to prevent a meltdown of the metallic parts. despite all of these desirable properties, such engines are not in production because the manufacturing of ceramic parts in the requisite precision and durability is difficult. imperfection in the ceramic leads to cracks, which can lead to potentially dangerous equipment failure. such engines are possible in laboratory settings, but mass - production is not feasible with current technology. work is being done in developing ceramic parts for gas turbine engines. currently, even blades made of advanced metal alloys used in the engines ' hot section require cooling and careful limiting of operating temperatures. turbine engines made with ceramics could operate more efficiently, giving aircraft greater range and payload for a set amount of fuel. recently, there have been advances in ceramics which include bio - ceramics, such as dental implants and synthetic bones. hydroxyapatite, the natural mineral component of bone, has been made synthetically from a number of biological and chemical sources and can be formed into ceramic materials. orthopedic implants made from these materials bond readily to bone and other tissues in the body without rejection or inflammatory reactions. because of this, they are of great interest for gene delivery and tissue engineering scaffolds. most hydroxyapatite ceramics are very porous and lack mechanical strength and are used to coat metal orthopedic devices to aid in forming a bond to bone or as bone fillers. they are
it. a stone pottery wheel found in the city - state of ur dates to around 3, 429 bce, and even older fragments of wheel - thrown pottery have been found in the same area. fast ( rotary ) potters ' wheels enabled early mass production of pottery, but it was the use of the wheel as a transformer of energy ( through water wheels, windmills, and even treadmills ) that revolutionized the application of nonhuman power sources. the first two - wheeled carts were derived from travois and were first used in mesopotamia and iran in around 3, 000 bce. the oldest known constructed roadways are the stone - paved streets of the city - state of ur, dating to c. 4, 000 bce, and timber roads leading through the swamps of glastonbury, england, dating to around the same period. the first long - distance road, which came into use around 3, 500 bce, spanned 2, 400 km from the persian gulf to the mediterranean sea, but was not paved and was only partially maintained. in around 2, 000 bce, the minoans on the greek island of crete built a 50 km road leading from the palace of gortyn on the south side of the island, through the mountains, to the palace of knossos on the north side of the island. unlike the earlier road, the minoan road was completely paved. ancient minoan private homes had running water. a bathtub virtually identical to modern ones was unearthed at the palace of knossos. several minoan private homes also had toilets, which could be flushed by pouring water down the drain. the ancient romans had many public flush toilets, which emptied into an extensive sewage system. the primary sewer in rome was the cloaca maxima ; construction began on it in the sixth century bce and it is still in use today. the ancient romans also had a complex system of aqueducts, which were used to transport water across long distances. the first roman aqueduct was built in 312 bce. the eleventh and final ancient roman aqueduct was built in 226 ce. put together, the roman aqueducts extended over 450 km, but less than 70 km of this was above ground and supported by arches. = = = pre - modern = = = innovations continued through the middle ages with the introduction of silk production ( in asia and later europe ), the horse collar, and horseshoes. simple machines ( such as the lever, the screw, and the pulley ) were combined into more complicated tools
a highly - asymmetric " psi ' ' factory " may be the best approach for studying d0 anti - d0 mixing.
water, and used in the gristmilling and sugarcane industries. sugar mills first appeared in the medieval islamic world. they were first driven by watermills, and then windmills from the 9th and 10th centuries in what are today afghanistan, pakistan and iran. crops such as almonds and citrus fruit were brought to europe through al - andalus, and sugar cultivation was gradually adopted across europe. arab merchants dominated trade in the indian ocean until the arrival of the portuguese in the 16th century. the muslim world adopted papermaking from china. the earliest paper mills appeared in abbasid - era baghdad during 794 β 795. the knowledge of gunpowder was also transmitted from china via predominantly islamic countries, where formulas for pure potassium nitrate were developed. the spinning wheel was invented in the islamic world by the early 11th century. it was later widely adopted in europe, where it was adapted into the spinning jenny, a key device during the industrial revolution. the crankshaft was invented by al - jazari in 1206, and is central to modern machinery such as the steam engine, internal combustion engine and automatic controls. the camshaft was also first described by al - jazari in 1206. early programmable machines were also invented in the muslim world. the first music sequencer, a programmable musical instrument, was an automated flute player invented by the banu musa brothers, described in their book of ingenious devices, in the 9th century. in 1206, al - jazari invented programmable automata / robots. he described four automaton musicians, including two drummers operated by a programmable drum machine, where the drummer could be made to play different rhythms and different drum patterns. the castle clock, a hydropowered mechanical astronomical clock invented by al - jazari, was an early programmable analog computer. in the ottoman empire, a practical impulse steam turbine was invented in 1551 by taqi ad - din muhammad ibn ma ' ruf in ottoman egypt. he described a method for rotating a spit by means of a jet of steam playing on rotary vanes around the periphery of a wheel. known as a steam jack, a similar device for rotating a spit was also later described by john wilkins in 1648. = = = = medieval europe = = = = while medieval technology has been long depicted as a step backward in the evolution of western technology, a generation of medievalists ( like the american historian of science lynn white ) stressed from the 1940s onwards the innovative character of many medieval techniques. genuine medieval contributions include
Question: The wheels and gears of a machine are greased in order to decrease
A) potential energy
B) efficiency
C) output
D) friction
|
D) friction
|
Context:
have evolved from the earliest emergence of life to present day. earth formed about 4. 5 billion years ago and all life on earth, both living and extinct, descended from a last universal common ancestor that lived about 3. 5 billion years ago. geologists have developed a geologic time scale that divides the history of the earth into major divisions, starting with four eons ( hadean, archean, proterozoic, and phanerozoic ), the first three of which are collectively known as the precambrian, which lasted approximately 4 billion years. each eon can be divided into eras, with the phanerozoic eon that began 539 million years ago being subdivided into paleozoic, mesozoic, and cenozoic eras. these three eras together comprise eleven periods ( cambrian, ordovician, silurian, devonian, carboniferous, permian, triassic, jurassic, cretaceous, tertiary, and quaternary ). the similarities among all known present - day species indicate that they have diverged through the process of evolution from their common ancestor. biologists regard the ubiquity of the genetic code as evidence of universal common descent for all bacteria, archaea, and eukaryotes. microbial mats of coexisting bacteria and archaea were the dominant form of life in the early archean eon and many of the major steps in early evolution are thought to have taken place in this environment. the earliest evidence of eukaryotes dates from 1. 85 billion years ago, and while they may have been present earlier, their diversification accelerated when they started using oxygen in their metabolism. later, around 1. 7 billion years ago, multicellular organisms began to appear, with differentiated cells performing specialised functions. algae - like multicellular land plants are dated back to about 1 billion years ago, although evidence suggests that microorganisms formed the earliest terrestrial ecosystems, at least 2. 7 billion years ago. microorganisms are thought to have paved the way for the inception of land plants in the ordovician period. land plants were so successful that they are thought to have contributed to the late devonian extinction event. ediacara biota appear during the ediacaran period, while vertebrates, along with most other modern phyla originated about 525 million years ago during the cambrian explosion. during the permian period, synapsids, including the ancestors of mammals, dominated the land, but most of this group became
three major planets, venus, earth, and mercury formed out of the solar nebula. a fourth planetesimal, theia, also formed near earth where it collided in a giant impact, rebounding as the planet mars. during this impact earth lost ≈ 4 % of its crust and mantle that is now found on mars and the moon. at the antipode of the giant impact, ≈ 60 % of earth ' s crust, atmosphere, and a large amount of mantle were ejected into space forming the moon. the lost crust never reformed and became the earth ' s ocean basins. the theia impact site corresponds to the indian ocean gravitational anomaly on earth and the hellas basin on mars. the dynamics of the giant impact are consistent with the rotational rates and axial tilts of both earth and mars. the giant impact removed sufficient co2 from earth ' s atmosphere to avoid a runaway greenhouse effect, initiated plate tectonics, and gave life time to form near geothermal vents at the continental margins. mercury formed near venus where on a close approach it was slingshot into the sun ' s convective zone losing 94 % of its mass, much of which remains there today. black carbon, from co2 decomposed by the intense heat, is still found on the surface of mercury. arriving at 616 km / s, mercury dramatically altered the sun ' s rotational energy, explaining both its anomalously slow rotation rate and axial tilt. these results are quantitatively supported by mass balances, the current locations of the terrestrial planets, and the orientations of their major orbital axes.
the fundamental constants could not influence different elements uniformly, and a comparison between each of the elements ' resulting unique chronological timescales would then give inconsistent time estimates. in refutation of young earth claims of inconstant decay rates affecting the reliability of radiometric dating, roger c. wiens, a physicist specializing in isotope dating states : there are only three quite technical instances where a half - life changes, and these do not affect the dating methods : " only one technical exception occurs under terrestrial conditions, and this is not for an isotope used for dating.... the artificially - produced isotope, beryllium - 7 has been shown to change by up to 1. 5 %, depending on its chemical environment.... heavier atoms are even less subject to these minute changes, so the dates of rocks made by electron - capture decays would only be off by at most a few hundredths of a percent. " "... another case is material inside of stars, which is in a plasma state where electrons are not bound to atoms. in the extremely hot stellar environment, a completely different kind of decay can occur. ' bound - state beta decay ' occurs when the nucleus emits an electron into a bound electronic state close to the nucleus.... all normal matter, such as everything on earth, the moon, meteorites, etc. has electrons in normal positions, so these instances never apply to rocks, or anything colder than several hundred thousand degrees. " " the last case also involves very fast - moving matter. it has been demonstrated by atomic clocks in very fast spacecraft. these atomic clocks slow down very slightly ( only a second or so per year ) as predicted by einstein ' s theory of relativity. no rocks in our solar system are going fast enough to make a noticeable change in their dates. " = = = = radiohaloes = = = = in the 1970s, young earth creationist robert v. gentry proposed that radiohaloes in certain granites represented evidence for the earth being created instantaneously rather than gradually. this idea has been criticized by physicists and geologists on many grounds including that the rocks gentry studied were not primordial and that the radionuclides in question need not have been in the rocks initially. thomas a. baillieul, a geologist and retired senior environmental scientist with the united states department of energy, disputed gentry ' s claims in an article entitled, " ' polonium haloes ' refuted : a review of ' radioactive halos in a radio
variation in total solar irradiance is thought to have little effect on the earth ' s surface temperature because of the thermal time constant - - the characteristic response time of the earth ' s global surface temperature to changes in forcing. this time constant is large enough to smooth annual variations but not necessarily variations having a longer period such as those due to solar inertial motion ; the magnitude of these surface temperature variations is estimated.
in isotope dating states : there are only three quite technical instances where a half - life changes, and these do not affect the dating methods : " only one technical exception occurs under terrestrial conditions, and this is not for an isotope used for dating.... the artificially - produced isotope, beryllium - 7 has been shown to change by up to 1. 5 %, depending on its chemical environment.... heavier atoms are even less subject to these minute changes, so the dates of rocks made by electron - capture decays would only be off by at most a few hundredths of a percent. " "... another case is material inside of stars, which is in a plasma state where electrons are not bound to atoms. in the extremely hot stellar environment, a completely different kind of decay can occur. ' bound - state beta decay ' occurs when the nucleus emits an electron into a bound electronic state close to the nucleus.... all normal matter, such as everything on earth, the moon, meteorites, etc. has electrons in normal positions, so these instances never apply to rocks, or anything colder than several hundred thousand degrees. " " the last case also involves very fast - moving matter. it has been demonstrated by atomic clocks in very fast spacecraft. these atomic clocks slow down very slightly ( only a second or so per year ) as predicted by einstein ' s theory of relativity. no rocks in our solar system are going fast enough to make a noticeable change in their dates. " = = = = radiohaloes = = = = in the 1970s, young earth creationist robert v. gentry proposed that radiohaloes in certain granites represented evidence for the earth being created instantaneously rather than gradually. this idea has been criticized by physicists and geologists on many grounds including that the rocks gentry studied were not primordial and that the radionuclides in question need not have been in the rocks initially. thomas a. baillieul, a geologist and retired senior environmental scientist with the united states department of energy, disputed gentry ' s claims in an article entitled, " ' polonium haloes ' refuted : a review of ' radioactive halos in a radio - chronological and cosmological perspective ' by robert v. gentry. " baillieul noted that gentry was a physicist with no background in geology and given the absence of this background, gentry had misrepresented the geological context from which the specimens were collected. additionally, he noted that gentry relied on research from the
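The "few hundredths of a percent" claim above reflects how a decay age scales with the half-life. A minimal sketch using the standard closed-system decay-age relation t = (T_half / ln 2) · ln(1 + D/P); the isotope ratio and half-life below are illustrative values, not taken from the passage:

import math

def decay_age_years(daughter_to_parent_ratio, half_life_years):
    # standard closed-system radiometric age: t = (T_half / ln 2) * ln(1 + D/P)
    return half_life_years / math.log(2.0) * math.log(1.0 + daughter_to_parent_ratio)

# the computed age is proportional to the half-life, so a 0.05 % shift in the half-life
# shifts the date by the same 0.05 % -- "a few hundredths of a percent"
t_ref = decay_age_years(0.5, 1.25e9)
t_alt = decay_age_years(0.5, 1.25e9 * 1.0005)
print(t_ref, (t_alt - t_ref) / t_ref)    # ~7.3e8 years, fractional change ~5e-4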
armed with an astrolabe and kepler ' s laws one can arrive at accurate estimates of the orbits of planets.
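As a worked illustration of the one-line abstract above: for a body orbiting the sun, Kepler's third law in convenient units gives a ( au ) = T ( years ) ^ ( 2 / 3 ), assuming a near-circular orbit and negligible planet mass. Applied to the 88-day orbital period that appears in the question at the end of this context block, it recovers Mercury's distance:

def semi_major_axis_au(period_days):
    # Kepler's third law for a solar orbit: a^3 = T^2, with a in au and T in years
    return (period_days / 365.25) ** (2.0 / 3.0)

print(round(semi_major_axis_au(88.0), 3))   # ~0.387 au, Mercury's actual mean distance from the Sun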
##sphere ( or lithosphere ). earth science can be considered to be a branch of planetary science but with a much older history. = = geology = = geology is broadly the study of earth ' s structure, substance, and processes. geology is largely the study of the lithosphere, or earth ' s surface, including the crust and rocks. it includes the physical characteristics and processes that occur in the lithosphere as well as how they are affected by geothermal energy. it incorporates aspects of chemistry, physics, and biology as elements of geology interact. historical geology is the application of geology to interpret earth history and how it has changed over time. geochemistry studies the chemical components and processes of the earth. geophysics studies the physical properties of the earth. paleontology studies fossilized biological material in the lithosphere. planetary geology studies geoscience as it pertains to extraterrestrial bodies. geomorphology studies the origin of landscapes. structural geology studies the deformation of rocks to produce mountains and lowlands. resource geology studies how energy resources can be obtained from minerals. environmental geology studies how pollution and contaminants affect soil and rock. mineralogy is the study of minerals and includes the study of mineral formation, crystal structure, hazards associated with minerals, and the physical and chemical properties of minerals. petrology is the study of rocks, including the formation and composition of rocks. petrography is a branch of petrology that studies the typology and classification of rocks. = = earth ' s interior = = plate tectonics, mountain ranges, volcanoes, and earthquakes are geological phenomena that can be explained in terms of physical and chemical processes in the earth ' s crust. beneath the earth ' s crust lies the mantle which is heated by the radioactive decay of heavy elements. the mantle is not quite solid and consists of magma which is in a state of semi - perpetual convection. this convection process causes the lithospheric plates to move, albeit slowly. the resulting process is known as plate tectonics. areas of the crust where new crust is created are called divergent boundaries, those where it is brought back into the earth are convergent boundaries and those where plates slide past each other, but no new lithospheric material is created or destroyed, are referred to as transform ( or conservative ) boundaries. earthquakes result from the movement of the lithospheric plates, and they often occur near convergent boundaries where parts of the crust are forced into the earth as
prehistory. the oldest gold treasure in the world, dating from 4, 600 bc to 4, 200 bc, was discovered at the site. the gold piece dating from 4, 500 bc, found in 2019 in durankulak, near varna is another important example. other signs of early metals are found from the third millennium bc in palmela, portugal, los millares, spain, and stonehenge, united kingdom. the precise beginnings, however, have not be clearly ascertained and new discoveries are both continuous and ongoing. in approximately 1900 bc, ancient iron smelting sites existed in tamil nadu. in the near east, about 3, 500 bc, it was discovered that by combining copper and tin, a superior metal could be made, an alloy called bronze. this represented a major technological shift known as the bronze age. the extraction of iron from its ore into a workable metal is much more difficult than for copper or tin. the process appears to have been invented by the hittites in about 1200 bc, beginning the iron age. the secret of extracting and working iron was a key factor in the success of the philistines. historical developments in ferrous metallurgy can be found in a wide variety of past cultures and civilizations. this includes the ancient and medieval kingdoms and empires of the middle east and near east, ancient iran, ancient egypt, ancient nubia, and anatolia in present - day turkey, ancient nok, carthage, the celts, greeks and romans of ancient europe, medieval europe, ancient and medieval china, ancient and medieval india, ancient and medieval japan, amongst others. a 16th century book by georg agricola, de re metallica, describes the highly developed and complex processes of mining metal ores, metal extraction, and metallurgy of the time. agricola has been described as the " father of metallurgy ". = = extraction = = extractive metallurgy is the practice of removing valuable metals from an ore and refining the extracted raw metals into a purer form. in order to convert a metal oxide or sulphide to a purer metal, the ore must be reduced physically, chemically, or electrolytically. extractive metallurgists are interested in three primary streams : feed, concentrate ( metal oxide / sulphide ) and tailings ( waste ). after mining, large pieces of the ore feed are broken through crushing or grinding in order to obtain particles small enough, where each particle is either mostly valuable or
a 4 mj ( jupiter - mass ) planet with a 15. 8 day orbital period has been detected from very precise radial velocity measurements with the coralie echelle spectrograph. a second remote and more massive companion has also been detected. all the planetary companions so far detected in orbit closer than 0. 08 au have a parent star with a statistically higher metal content compared to the metallicity distribution of other stars with planets. different processes occurring during their formation may provide a possible explanation for this observation.
the gas giant planets in the solar system have a retinue of icy moons, and we expect giant exoplanets to have similar satellite systems. if a jupiter - like planet were to migrate toward its parent star the icy moons orbiting it would evaporate, creating atmospheres and possible habitable surface oceans. here, we examine how long the surface ice and possible oceans would last before being hydrodynamically lost to space. the hydrodynamic loss rate from the moons is determined, in large part, by the stellar flux available for absorption, which increases as the giant planet and icy moons migrate closer to the star. at some planet - star distance the stellar flux incident on the icy moons becomes so great that they enter a runaway greenhouse state. this runaway greenhouse state rapidly transfers all available surface water to the atmosphere as vapor, where it is easily lost from the small moons. however, for icy moons of ganymede ' s size around a sun - like star we found that surface water ( either ice or liquid ) can persist indefinitely outside the runaway greenhouse orbital distance. in contrast, the surface water on smaller moons of europa ' s size will only persist on timescales greater than 1 gyr at distances ranging 1. 49 to 0. 74 au around a sun - like star for bond albedos of 0. 2 and 0. 8, where the lower albedo becomes relevant if ice melts. consequently, small moons can lose their icy shells, which would create a torus of h atoms around their host planet that might be detectable in future observations.
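A quick consistency check on the two distances quoted above, under the assumption (not stated in the abstract) that the runaway greenhouse begins at a fixed absorbed stellar flux: the critical orbital distance then scales as the square root of (1 - albedo), so the two quoted distances should differ by a factor of roughly sqrt(0.8 / 0.2) = 2. A minimal Python sketch:

    import math

    # Assumption: runaway greenhouse starts at a fixed absorbed flux F_crit, so
    # (1 - A) * L / (4 * pi * d**2) = F_crit, which gives d proportional to sqrt(1 - A).
    def critical_distance_ratio(albedo_low, albedo_high):
        return math.sqrt((1.0 - albedo_low) / (1.0 - albedo_high))

    print(critical_distance_ratio(0.2, 0.8))  # ~2.0
    print(1.49 / 0.74)                        # ~2.01, consistent with the quoted distances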
Question: A student learns that one year on Mercury is about 88 Earth days. This means it takes 88 Earth days for Mercury to
A) travel in orbit around its moon.
B) complete one rotation on its axis.
C) switch orbits with the nearest planet.
D) make one complete orbit around the Sun.
|
D) make one complete orbit around the Sun.
|
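The 88-day figure can be checked with Kepler's third law; the sketch below assumes Mercury's semi-major axis is about 0.387 au, a standard value that is not given in the question itself.

    # Kepler's third law for orbits around the Sun: P[years]**2 = a[au]**3.
    a_mercury_au = 0.387                  # approximate semi-major axis of Mercury (assumed)
    period_years = a_mercury_au ** 1.5
    print(period_years * 365.25)          # ~88 Earth days for one complete orbit around the Sun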
Context:
so - called " bosch process ", named after the german company robert bosch, which filed the original patent, where two different gas compositions alternate in the reactor. currently, there are two variations of the drie. the first variation consists of three distinct steps ( the original bosch process ) while the second variation only consists of two steps. in the first variation, the etch cycle is as follows : ( i ) sf6 isotropic etch ; ( ii ) c4f8 passivation ; ( iii ) sf6 anisotropic etch for floor cleaning. in the 2nd variation, steps ( i ) and ( iii ) are combined. both variations operate similarly. the c4f8 creates a polymer on the surface of the substrate, and the second gas composition ( sf6 and o2 ) etches the substrate. the polymer is immediately sputtered away by the physical part of the etching, but only on the horizontal surfaces and not the sidewalls. since the polymer only dissolves very slowly in the chemical part of the etching, it builds up on the sidewalls and protects them from etching. as a result, etching aspect ratios of 50 to 1 can be achieved. the process can easily be used to etch completely through a silicon substrate, and etch rates are 3 β 6 times higher than wet etching. after preparing a large number of mems devices on a silicon wafer, individual dies have to be separated, which is called die preparation in semiconductor technology. for some applications, the separation is preceded by wafer backgrinding in order to reduce the wafer thickness. wafer dicing may then be performed either by sawing using a cooling liquid or a dry laser process called stealth dicing. = = manufacturing technologies = = bulk micromachining is the oldest paradigm of silicon - based mems. the whole thickness of a silicon wafer is used for building the micro - mechanical structures. silicon is machined using various etching processes. bulk micromachining has been essential in enabling high performance pressure sensors and accelerometers that changed the sensor industry in the 1980s and 1990s. surface micromachining uses layers deposited on the surface of a substrate as the structural materials, rather than using the substrate itself. surface micromachining was created in the late 1980s to render micromachining of silicon more compatible with planar integrated circuit technology, with the goal of combining mems and integrated circuits on the same silicon wafer. the original surface micro
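The alternating passivation/etch cycle described above can be sketched numerically; the per-cycle figures below are purely illustrative assumptions (the passage gives only aspect ratios and relative etch rates), meant to show how cycle count, depth, and aspect ratio relate in a Bosch-type process.

    # Hypothetical numbers for a Bosch-type deep reactive-ion etch (illustrative only).
    etch_per_cycle_um = 0.5      # assumed silicon removed per SF6 etch step, in micrometres
    wafer_thickness_um = 400.0   # assumed wafer thickness to etch through
    trench_width_um = 10.0       # assumed mask opening

    cycles = 0
    depth_um = 0.0
    while depth_um < wafer_thickness_um:
        # C4F8 step: deposits polymer that protects the sidewalls (no net depth change).
        # SF6 step: sputters the polymer off the floor, then etches the exposed silicon.
        depth_um += etch_per_cycle_um
        cycles += 1

    print(cycles)                          # passivation/etch cycles needed to etch through the wafer
    print(depth_um / trench_width_um)      # resulting aspect ratio, here 40:1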
current in passing over from one concave bank to the next on the opposite side. the lowering of such a shoal by dredging merely effects a temporary deepening, for it soon forms again from the causes which produced it. the removal, moreover, of the rocky obstructions at rapids, though increasing the depth and equalizing the flow at these places, produces a lowering of the river above the rapids by facilitating the efflux, which may result in the appearance of fresh shoals at the low stage of the river. where, however, narrow rocky reefs or other hard shoals stretch across the bottom of a river and present obstacles to the erosion by the current of the soft materials forming the bed of the river above and below, their removal may result in permanent improvement by enabling the river to deepen its bed by natural scour. the capability of a river to provide a waterway for navigation during the summer or throughout the dry season depends on the depth that can be secured in the channel at the lowest stage. the problem in the dry season is the small discharge and deficiency in scour during this period. a typical solution is to restrict the width of the low - water channel, concentrate all of the flow in it, and also to fix its position so that it is scoured out every year by the floods which follow the deepest part of the bed along the line of the strongest current. this can be effected by closing subsidiary low - water channels with dikes across them, and narrowing the channel at the low stage by low - dipping cross dikes extending from the river banks down the slope and pointing slightly up - stream so as to direct the water flowing over them into a central channel. = = estuarine works = = the needs of navigation may also require that a stable, continuous, navigable channel is prolonged from the navigable river to deep water at the mouth of the estuary. the interaction of river flow and tide needs to be modeled by computer or using scale models, moulded to the configuration of the estuary under consideration and reproducing in miniature the tidal ebb and flow and fresh - water discharge over a bed of fine sand, in which various lines of training walls can be successively inserted. the models should be capable of furnishing valuable indications of the respective effects and comparative merits of the different schemes proposed for works. = = see also = = bridge scour flood control = = references = = = = external links = = u. s. army corps of engineers β civil works program river morphology and stream restoration references
winds from agn and quasars will form large amounts of dust, as the cool gas in these winds passes through the ( pressure, temperature ) region where dust is formed in agb stars. conditions in the gas are benign to dust at these radii. as a result quasar winds may be a major source of dust at high redshifts, obviating a difficulty with current observations, and requiring far less dust to exist at early epochs.
i suggest that the main process that amplifies magnetic fields in cooling flows in clusters and groups of galaxies is a jet - driven dynamo ( jedd ). the main processes behind the jedd are the turbulence that is formed by the many vortices formed in the inflation processes of bubbles, and the large scale shear formed by the propagating jet. it is sufficient that strong turbulence exists in the vicinity of the jets and bubbles, just where the shear is large. the typical amplification time of magnetic fields by the jedd near the jets and bubbles is approximately a hundred million years. the amplification time in the entire cooling flow region is somewhat longer. the vortices that create the turbulence are those that also transfer energy from the jets to the intra - cluster medium, by mixing shocked jet gas with the intra - cluster medium gas, and by exciting sound waves. the jedd model adds magnetic fields to the cyclical behavior of energy and mass in the jet - feedback mechanism ( jfm ) in cooling flows.
##ructing the channel depends on the nature of the shoals. a soft shoal in the bed of a river is due to deposit from a diminution in velocity of flow, produced by a reduction in fall and by a widening of the channel, or to a loss in concentration of the scour of the main current in passing over from one concave bank to the next on the opposite side. the lowering of such a shoal by dredging merely effects a temporary deepening, for it soon forms again from the causes which produced it. the removal, moreover, of the rocky obstructions at rapids, though increasing the depth and equalizing the flow at these places, produces a lowering of the river above the rapids by facilitating the efflux, which may result in the appearance of fresh shoals at the low stage of the river. where, however, narrow rocky reefs or other hard shoals stretch across the bottom of a river and present obstacles to the erosion by the current of the soft materials forming the bed of the river above and below, their removal may result in permanent improvement by enabling the river to deepen its bed by natural scour. the capability of a river to provide a waterway for navigation during the summer or throughout the dry season depends on the depth that can be secured in the channel at the lowest stage. the problem in the dry season is the small discharge and deficiency in scour during this period. a typical solution is to restrict the width of the low - water channel, concentrate all of the flow in it, and also to fix its position so that it is scoured out every year by the floods which follow the deepest part of the bed along the line of the strongest current. this can be effected by closing subsidiary low - water channels with dikes across them, and narrowing the channel at the low stage by low - dipping cross dikes extending from the river banks down the slope and pointing slightly up - stream so as to direct the water flowing over them into a central channel. = = estuarine works = = the needs of navigation may also require that a stable, continuous, navigable channel is prolonged from the navigable river to deep water at the mouth of the estuary. the interaction of river flow and tide needs to be modeled by computer or using scale models, moulded to the configuration of the estuary under consideration and reproducing in miniature the tidal ebb and flow and fresh - water discharge over a bed of fine sand, in which various lines of training walls can be successively inserted. the models
we make a few comments on some misleading statements in the above paper.
genesis and its own history of development, a body with complex and multiform processes taking place within it. the soil is considered as different from bedrock. the latter becomes soil under the influence of a series of soil - formation factors ( climate, vegetation, country, relief and age ). according to him, soil should be called the " daily " or outward horizons of rocks regardless of the type ; they are changed naturally by the common effect of water, air and various kinds of living and dead organisms. a 1914 encyclopedic definition : " the different forms of earth on the surface of the rocks, formed by the breaking down or weathering of rocks ". serves to illustrate the historic view of soil which persisted from the 19th century. dokuchaev ' s late 19th century soil concept developed in the 20th century to one of soil as earthy material that has been altered by living processes. a corollary concept is that soil without a living component is simply a part of earth ' s outer layer. further refinement of the soil concept is occurring in view of an appreciation of energy transport and transformation within soil. the term is popularly applied to the material on the surface of the earth ' s moon and mars, a usage acceptable within a portion of the scientific community. accurate to this modern understanding of soil is nikiforoff ' s 1959 definition of soil as the " excited skin of the sub aerial part of the earth ' s crust ". = = areas of practice = = academically, soil scientists tend to be drawn to one of five areas of specialization : microbiology, pedology, edaphology, physics, or chemistry. yet the work specifics are very much dictated by the challenges facing our civilization ' s desire to sustain the land that supports it, and the distinctions between the sub - disciplines of soil science often blur in the process. soil science professionals commonly stay current in soil chemistry, soil physics, soil microbiology, pedology, and applied soil science in related disciplines. one exciting effort drawing in soil scientists in the u. s. as of 2004 is the soil quality initiative. central to the soil quality initiative is developing indices of soil health and then monitoring them in a way that gives us long - term ( decade - to - decade ) feedback on our performance as stewards of the planet. the effort includes understanding the functions of soil microbiotic crusts and exploring the potential to sequester atmospheric carbon in soil organic matter. relating the concept of agriculture to soil quality, however, has not
##ning an animal for its hide, and even forming other tools out of softer materials such as bone and wood. the earliest stone tools were irrelevant, being little more than a fractured rock. in the acheulian era, beginning approximately 1. 65 million years ago, methods of working these stones into specific shapes, such as hand axes emerged. this early stone age is described as the lower paleolithic. the middle paleolithic, approximately 300, 000 years ago, saw the introduction of the prepared - core technique, where multiple blades could be rapidly formed from a single core stone. the upper paleolithic, beginning approximately 40, 000 years ago, saw the introduction of pressure flaking, where a wood, bone, or antler punch could be used to shape a stone very finely. the end of the last ice age about 10, 000 years ago is taken as the end point of the upper paleolithic and the beginning of the epipaleolithic / mesolithic. the mesolithic technology included the use of microliths as composite stone tools, along with wood, bone, and antler tools. the later stone age, during which the rudiments of agricultural technology were developed, is called the neolithic period. during this period, polished stone tools were made from a variety of hard rocks such as flint, jade, jadeite, and greenstone, largely by working exposures as quarries, but later the valuable rocks were pursued by tunneling underground, the first steps in mining technology. the polished axes were used for forest clearance and the establishment of crop farming and were so effective as to remain in use when bronze and iron appeared. these stone axes were used alongside a continued use of stone tools such as a range of projectiles, knives, and scrapers, as well as tools, made from organic materials such as wood, bone, and antler. stone age cultures developed music and engaged in organized warfare. stone age humans developed ocean - worthy outrigger canoe technology, leading to migration across the malay archipelago, across the indian ocean to madagascar and also across the pacific ocean, which required knowledge of the ocean currents, weather patterns, sailing, and celestial navigation. although paleolithic cultures left no written records, the shift from nomadic life to settlement and agriculture can be inferred from a range of archaeological evidence. such evidence includes ancient tools, cave paintings, and other prehistoric art, such as the venus of willendorf. human remains also provide direct evidence, both through the examination of bones, and
made of steel. the shoe is generally wider than the caisson to reduce friction, and the leading edge may be supplied with pressurised bentonite slurry, which swells in water, stabilizing settlement by filling depressions and voids. an open caisson may fill with water during sinking. the material is excavated by clamshell excavator bucket on crane. the formation level subsoil may still not be suitable for excavation or bearing capacity. the water in the caisson ( due to a high water table ) balances the upthrust forces of the soft soils underneath. if dewatered, the base may " pipe " or " boil ", causing the caisson to sink. to combat this problem, piles may be driven from the surface to act as : load - bearing walls, in that they transmit loads to deeper soils. anchors, in that they resist flotation because of the friction at the interface between their surfaces and the surrounding earth into which they have been driven. h - beam sections ( typical column sections, due to resistance to bending in all axis ) may be driven at angles " raked " to rock or other firmer soils ; the h - beams are left extended above the base. a reinforced concrete plug may be placed under the water, a process known as tremie concrete placement. when the caisson is dewatered, this plug acts as a pile cap, resisting the upward forces of the subsoil. = = = monolithic = = = a monolithic caisson ( or simply a monolith ) is larger than the other types of caisson, but similar to open caissons. such caissons are often found in quay walls, where resistance to impact from ships is required. = = = pneumatic = = = shallow caissons may be open to the air, whereas pneumatic caissons ( sometimes called pressurized caissons ), which penetrate soft mud, are bottomless boxes sealed at the top and filled with compressed air to keep water and mud out at depth. an airlock allows access to the chamber. workers, called sandhogs in american english, move mud and rock debris ( called muck ) from the edge of the workspace to a water - filled pit, connected by a tube ( called the muck tube ) to the surface. a crane at the surface removes the soil with a clamshell bucket. the water pressure in the tube balances the air pressure, with excess air escaping up
##thic, or " old stone age ", and spans all of human history up to the development of agriculture approximately 12, 000 years ago. to make a stone tool, a " core " of hard stone with specific flaking properties ( such as flint ) was struck with a hammerstone. this flaking produced sharp edges which could be used as tools, primarily in the form of choppers or scrapers. these tools greatly aided the early humans in their hunter - gatherer lifestyle to perform a variety of tasks including butchering carcasses ( and breaking bones to get at the marrow ) ; chopping wood ; cracking open nuts ; skinning an animal for its hide, and even forming other tools out of softer materials such as bone and wood. the earliest stone tools were irrelevant, being little more than a fractured rock. in the acheulian era, beginning approximately 1. 65 million years ago, methods of working these stones into specific shapes, such as hand axes emerged. this early stone age is described as the lower paleolithic. the middle paleolithic, approximately 300, 000 years ago, saw the introduction of the prepared - core technique, where multiple blades could be rapidly formed from a single core stone. the upper paleolithic, beginning approximately 40, 000 years ago, saw the introduction of pressure flaking, where a wood, bone, or antler punch could be used to shape a stone very finely. the end of the last ice age about 10, 000 years ago is taken as the end point of the upper paleolithic and the beginning of the epipaleolithic / mesolithic. the mesolithic technology included the use of microliths as composite stone tools, along with wood, bone, and antler tools. the later stone age, during which the rudiments of agricultural technology were developed, is called the neolithic period. during this period, polished stone tools were made from a variety of hard rocks such as flint, jade, jadeite, and greenstone, largely by working exposures as quarries, but later the valuable rocks were pursued by tunneling underground, the first steps in mining technology. the polished axes were used for forest clearance and the establishment of crop farming and were so effective as to remain in use when bronze and iron appeared. these stone axes were used alongside a continued use of stone tools such as a range of projectiles, knives, and scrapers, as well as tools, made from organic materials such as wood, bone, and antler. stone age cultures
Question: Two processes are involved in the formation of a sand dune. Which two processes best describe how a sand dune forms?
A) wind erosion then deposition
B) plate movement then deposition
C) wind erosion then water erosion
D) water erosion then plate movement
|
A) wind erosion then deposition
|
Context:
factor e^{-e/kt}, that is, the probability of a molecule to have energy greater than or equal to e at the given temperature t. this exponential dependence of a reaction rate on temperature is known as the arrhenius equation. the activation energy necessary for a chemical reaction to occur can be in the form of heat, light, electricity or mechanical force in the form of ultrasound. a related concept free energy, which also incorporates entropy considerations, is a very useful means for predicting the feasibility of a reaction and determining the state of equilibrium of a chemical reaction, in chemical thermodynamics. a reaction is feasible only if the total change in the gibbs free energy is negative, Δg ≤ 0 ; if it is equal to zero the chemical reaction is said to be at equilibrium. there exist only limited possible states of energy for electrons, atoms and molecules. these are determined by the rules of quantum mechanics, which require quantization of energy of a bound system. the atoms / molecules in a higher energy state are said to be excited. the molecules / atoms of substance in an excited energy state are often much more reactive ; that is, more amenable to chemical reactions. the phase of a substance is invariably determined by its energy and the energy of its surroundings. when the intermolecular forces of a substance are such that the energy of the surroundings is not sufficient to overcome them, it occurs in a more ordered phase like liquid or solid as is the case with water ( h2o ) ; a liquid at room temperature because its molecules are bound by hydrogen bonds. whereas hydrogen sulfide ( h2s ) is a gas at room temperature and standard pressure, as its molecules are bound by weaker dipole - dipole interactions. the transfer of energy from one chemical substance to another depends on the size of energy quanta emitted from one substance. however, heat energy is often transferred more easily from almost any substance to another because the phonons responsible for vibrational and rotational energy levels in a substance have much less energy than photons invoked for the electronic energy transfer. thus, because vibrational and rotational energy levels are more closely spaced than electronic energy levels, heat is more easily transferred between substances relative to light or other forms of electronic energy. for example, ultraviolet electromagnetic radiation is not transferred with as much efficacy from one substance to another as thermal or electrical energy. the existence of characteristic
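The exponential temperature dependence described above is easy to see numerically; in the sketch below the activation energy and temperatures are arbitrary illustrative values, not taken from the text.

    import math

    k_B = 8.617e-5   # Boltzmann constant in eV per kelvin
    E_a = 0.5        # assumed activation energy in eV (illustrative)

    def boltzmann_factor(E, T):
        # Fraction of molecules with energy >= E at temperature T (the Arrhenius-type factor).
        return math.exp(-E / (k_B * T))

    for T in (300.0, 350.0, 400.0):
        print(T, boltzmann_factor(E_a, T))
    # The factor grows by roughly two orders of magnitude between 300 K and 400 K,
    # which is why reaction rates are so sensitive to temperature.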
learning to use math in physics involves combining ( blending ) our everyday experiences and the conceptual ideas of physics with symbolic mathematical representations. graphs are one of the best ways to learn to build the blend. they are a mathematical representation that builds on visual recognition to create a bridge between words and equations. but students in introductory physics classes often see a graph as an endpoint, a task the teacher asks them to complete, rather than as a tool to help them make sense of a physical system. and most of the graph problems in traditional introductory physics texts simply ask students to extract a number from a graph. but if graphs are used appropriately, they can be a powerful tool in helping students learn to build the blend and develop their physical intuition and ability to think with math.
endothermic reactions, the reaction absorbs heat from the surroundings. chemical reactions are invariably not possible unless the reactants surmount an energy barrier known as the activation energy. the speed of a chemical reaction ( at given temperature t ) is related to the activation energy e, by the boltzmann ' s population factor e^{-e/kt}, that is, the probability of a molecule to have energy greater than or equal to e at the given temperature t. this exponential dependence of a reaction rate on temperature is known as the arrhenius equation. the activation energy necessary for a chemical reaction to occur can be in the form of heat, light, electricity or mechanical force in the form of ultrasound. a related concept free energy, which also incorporates entropy considerations, is a very useful means for predicting the feasibility of a reaction and determining the state of equilibrium of a chemical reaction, in chemical thermodynamics. a reaction is feasible only if the total change in the gibbs free energy is negative, Δg ≤ 0 ; if it is equal to zero the chemical reaction is said to be at equilibrium. there exist only limited possible states of energy for electrons, atoms and molecules. these are determined by the rules of quantum mechanics, which require quantization of energy of a bound system. the atoms / molecules in a higher energy state are said to be excited. the molecules / atoms of substance in an excited energy state are often much more reactive ; that is, more amenable to chemical reactions. the phase of a substance is invariably determined by its energy and the energy of its surroundings. when the intermolecular forces of a substance are such that the energy of the surroundings is not sufficient to overcome them, it occurs in a more ordered phase like liquid or solid as is the case with water ( h2o ) ; a liquid at room temperature because its molecules are bound by hydrogen bonds. whereas hydrogen sulfide ( h2s ) is a gas at room temperature and standard pressure, as its molecules are bound by weaker dipole - dipole interactions. the transfer of energy from one chemical substance to another depends on the size of energy quanta emitted from one substance. however, heat energy is often transferred more easily from almost any substance to another because the phonons responsible for vibrational and rotational energy levels in a substance have much less energy than photons invoked for the electronic energy transfer
and ability to incorporate electronic functionality make silicon attractive for a wide variety of mems applications. silicon also has significant advantages engendered through its material properties. in single crystal form, silicon is an almost perfect hookean material, meaning that when it is flexed there is virtually no hysteresis and hence almost no energy dissipation. as well as making for highly repeatable motion, this also makes silicon very reliable as it suffers very little fatigue and can have service lifetimes in the range of billions to trillions of cycles without breaking. semiconductor nanostructures based on silicon are gaining increasing importance in the field of microelectronics and mems in particular. silicon nanowires, fabricated through the thermal oxidation of silicon, are of further interest in electrochemical conversion and storage, including nanowire batteries and photovoltaic systems. polymers even though the electronics industry provides an economy of scale for the silicon industry, crystalline silicon is still a complex and relatively expensive material to produce. polymers on the other hand can be produced in huge volumes, with a great variety of material characteristics. mems devices can be made from polymers by processes such as injection molding, embossing or stereolithography and are especially well suited to microfluidic applications such as disposable blood testing cartridges. metals metals can also be used to create mems elements. while metals do not have some of the advantages displayed by silicon in terms of mechanical properties, when used within their limitations, metals can exhibit very high degrees of reliability. metals can be deposited by electroplating, evaporation, and sputtering processes. commonly used metals include gold, nickel, aluminium, copper, chromium, titanium, tungsten, platinum, and silver. ceramics the nitrides of silicon, aluminium and titanium as well as silicon carbide and other ceramics are increasingly applied in mems fabrication due to advantageous combinations of material properties. aln crystallizes in the wurtzite structure and thus shows pyroelectric and piezoelectric properties enabling sensors, for instance, with sensitivity to normal and shear forces. tin, on the other hand, exhibits a high electrical conductivity and large elastic modulus, making it possible to implement electrostatic mems actuation schemes with ultrathin beams. moreover, the high resistance of tin against biocorrosion qualifies the material for applications in biogenic environments. the figure shows an electron - microscopic picture of a mems biosensor with a
temperature changes up to 1000 °c. = = processing steps = = the traditional ceramic process generally follows this sequence : milling → batching → mixing → forming → drying → firing → assembly. milling is the process by which materials are reduced from a large size to a smaller size. milling may involve breaking up cemented material ( in which case individual particles retain their shape ) or pulverization ( which involves grinding the particles themselves to a smaller size ). milling is generally done by mechanical means, including attrition ( which is particle - to - particle collision that results in agglomerate break up or particle shearing ), compression ( which applies a force that results in fracturing ), and impact ( which employs a milling medium or the particles themselves to cause fracturing ). attrition milling equipment includes the wet scrubber ( also called the planetary mill or wet attrition mill ), which has paddles in water creating vortexes in which the material collides and breaks up. compression mills include the jaw crusher, roller crusher and cone crusher. impact mills include the ball mill, which has media that tumble and fracture the material, or the resonantacoustic mixer. shaft impactors cause particle - to - particle attrition and compression. batching is the process of weighing the oxides according to recipes, and preparing them for mixing and drying. mixing occurs after batching and is performed with various machines, such as dry mixing ribbon mixers ( a type of cement mixer ), resonantacoustic mixers, mueller mixers, and pug mills. wet mixing generally involves the same equipment. forming is making the mixed material into shapes, ranging from toilet bowls to spark plug insulators. forming can involve : ( 1 ) extrusion, such as extruding " slugs " to make bricks, ( 2 ) pressing to make shaped parts, ( 3 ) slip casting, as in making toilet bowls, wash basins and ornamentals like ceramic statues. forming produces a " green " part, ready for drying. green parts are soft, pliable, and over time will lose shape. handling the green product will change its shape. for example, a green brick can be " squeezed ", and after squeezing it will stay that way. drying is removing the water or binder from the formed material. spray drying is widely used to prepare powder for pressing operations. other dryers are tunnel dryers and periodic dryers. controlled heat is applied in this two - stage process. first,
dissipation. as well as making for highly repeatable motion, this also makes silicon very reliable as it suffers very little fatigue and can have service lifetimes in the range of billions to trillions of cycles without breaking. semiconductor nanostructures based on silicon are gaining increasing importance in the field of microelectronics and mems in particular. silicon nanowires, fabricated through the thermal oxidation of silicon, are of further interest in electrochemical conversion and storage, including nanowire batteries and photovoltaic systems. polymers even though the electronics industry provides an economy of scale for the silicon industry, crystalline silicon is still a complex and relatively expensive material to produce. polymers on the other hand can be produced in huge volumes, with a great variety of material characteristics. mems devices can be made from polymers by processes such as injection molding, embossing or stereolithography and are especially well suited to microfluidic applications such as disposable blood testing cartridges. metals metals can also be used to create mems elements. while metals do not have some of the advantages displayed by silicon in terms of mechanical properties, when used within their limitations, metals can exhibit very high degrees of reliability. metals can be deposited by electroplating, evaporation, and sputtering processes. commonly used metals include gold, nickel, aluminium, copper, chromium, titanium, tungsten, platinum, and silver. ceramics the nitrides of silicon, aluminium and titanium as well as silicon carbide and other ceramics are increasingly applied in mems fabrication due to advantageous combinations of material properties. aln crystallizes in the wurtzite structure and thus shows pyroelectric and piezoelectric properties enabling sensors, for instance, with sensitivity to normal and shear forces. tin, on the other hand, exhibits a high electrical conductivity and large elastic modulus, making it possible to implement electrostatic mems actuation schemes with ultrathin beams. moreover, the high resistance of tin against biocorrosion qualifies the material for applications in biogenic environments. the figure shows an electron - microscopic picture of a mems biosensor with a 50 nm thin bendable tin beam above a tin ground plate. both can be driven as opposite electrodes of a capacitor, since the beam is fixed in electrically isolating side walls. when a fluid is suspended in the cavity its viscosity may be derived from bending the beam by electrical attraction to the ground
al - kimia is derived from the ancient greek χημία, which is in turn derived from the word kemet, which is the ancient name of egypt in the egyptian language. alternately, al - kimia may derive from χημεία ' cast together '. = = modern principles = = the current model of atomic structure is the quantum mechanical model. traditional chemistry starts with the study of elementary particles, atoms, molecules, substances, metals, crystals and other aggregates of matter. matter can be studied in solid, liquid, gas and plasma states, in isolation or in combination. the interactions, reactions and transformations that are studied in chemistry are usually the result of interactions between atoms, leading to rearrangements of the chemical bonds which hold atoms together. such behaviors are studied in a chemistry laboratory. the chemistry laboratory stereotypically uses various forms of laboratory glassware. however glassware is not central to chemistry, and a great deal of experimental ( as well as applied / industrial ) chemistry is done without it. a chemical reaction is a transformation of some substances into one or more different substances. the basis of such a chemical transformation is the rearrangement of electrons in the chemical bonds between atoms. it can be symbolically depicted through a chemical equation, which usually involves atoms as subjects. the number of atoms on the left and the right in the equation for a chemical transformation is equal. ( when the number of atoms on either side is unequal, the transformation is referred to as a nuclear reaction or radioactive decay. ) the type of chemical reactions a substance may undergo and the energy changes that may accompany it are constrained by certain basic rules, known as chemical laws. energy and entropy considerations are invariably important in almost all chemical studies. chemical substances are classified in terms of their structure, phase, as well as their chemical compositions. they can be analyzed using the tools of chemical analysis, e. g. spectroscopy and chromatography. scientists engaged in chemical research are known as chemists. most chemists specialize in one or more sub - disciplines. several concepts are essential for the study of chemistry ; some of them are : = = = matter = = = in chemistry, matter is defined as anything that has rest mass and volume ( it takes up space ) and is made up of particles. the particles that make up matter have rest mass as well - not all particles have rest mass, such as the photon. matter can be a pure chemical substance or a mixture of substances. = = = = atom = = = =
assuming only statistical mechanics and general relativity, we calculate the maximal temperature of gas of particles placed in ads space - time. if two particles with a given center of mass energy come close enough, according to classical gravity they will form a black hole. we focus only on the black holes with hawking temperature lower than the environment, because they do not disappear. the number density of such black holes grows with the temperature in the system. at a certain finite temperature, the thermodynamical system will be dominated by black holes. this critical temperature is lower than the planck temperature for the values of the ads vacuum energy density below the planck density. this result might be interesting from the ads / cft correspondence point of view, since it is different from the hawking - page phase transition, and it is not immediately clear what effect dynamically limits the maximal temperature of the thermal state on the cft side of the correspondence.
course material for mathematical methods of theoretical physics intended for an undergraduate audience.
galactic collisions are normally modeled in a cdm model by assuming the dm consists of a small number of very massive objects. this note shows that the behaviour of a cdm halo during collisions depends critically on the mass of the particles that make it up, and in particular, all halo particles below a certain characteristic mass are likely to be lost.
Question: When a student uses the equation, mass multiplied by change in temperature multiplied by specific heat, what is being calculated? q = m \times C \times \Delta T
A) a phase change
B) stored energy
C) heat convection
D) heat gain or heat loss
|
D) heat gain or heat loss
|
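A worked instance of the specific-heat relation from the question above, using illustrative values that are not taken from the source:

    # q = m * C * delta_T gives heat gained (positive) or lost (negative).
    m = 0.250         # mass of water in kg (assumed)
    C = 4186.0        # specific heat of water in J/(kg*K)
    delta_T = 20.0    # temperature change in kelvin (assumed)

    q = m * C * delta_T
    print(q)          # about 20900 J gained; a negative delta_T would give heat lost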
Context:
##sphere ( or lithosphere ). earth science can be considered to be a branch of planetary science but with a much older history. = = geology = = geology is broadly the study of earth ' s structure, substance, and processes. geology is largely the study of the lithosphere, or earth ' s surface, including the crust and rocks. it includes the physical characteristics and processes that occur in the lithosphere as well as how they are affected by geothermal energy. it incorporates aspects of chemistry, physics, and biology as elements of geology interact. historical geology is the application of geology to interpret earth history and how it has changed over time. geochemistry studies the chemical components and processes of the earth. geophysics studies the physical properties of the earth. paleontology studies fossilized biological material in the lithosphere. planetary geology studies geoscience as it pertains to extraterrestrial bodies. geomorphology studies the origin of landscapes. structural geology studies the deformation of rocks to produce mountains and lowlands. resource geology studies how energy resources can be obtained from minerals. environmental geology studies how pollution and contaminants affect soil and rock. mineralogy is the study of minerals and includes the study of mineral formation, crystal structure, hazards associated with minerals, and the physical and chemical properties of minerals. petrology is the study of rocks, including the formation and composition of rocks. petrography is a branch of petrology that studies the typology and classification of rocks. = = earth ' s interior = = plate tectonics, mountain ranges, volcanoes, and earthquakes are geological phenomena that can be explained in terms of physical and chemical processes in the earth ' s crust. beneath the earth ' s crust lies the mantle which is heated by the radioactive decay of heavy elements. the mantle is not quite solid and consists of magma which is in a state of semi - perpetual convection. this convection process causes the lithospheric plates to move, albeit slowly. the resulting process is known as plate tectonics. areas of the crust where new crust is created are called divergent boundaries, those where it is brought back into the earth are convergent boundaries and those where plates slide past each other, but no new lithospheric material is created or destroyed, are referred to as transform ( or conservative ) boundaries. earthquakes result from the movement of the lithospheric plates, and they often occur near convergent boundaries where parts of the crust are forced into the earth as
##ning an animal for its hide, and even forming other tools out of softer materials such as bone and wood. the earliest stone tools were irrelevant, being little more than a fractured rock. in the acheulian era, beginning approximately 1. 65 million years ago, methods of working these stones into specific shapes, such as hand axes emerged. this early stone age is described as the lower paleolithic. the middle paleolithic, approximately 300, 000 years ago, saw the introduction of the prepared - core technique, where multiple blades could be rapidly formed from a single core stone. the upper paleolithic, beginning approximately 40, 000 years ago, saw the introduction of pressure flaking, where a wood, bone, or antler punch could be used to shape a stone very finely. the end of the last ice age about 10, 000 years ago is taken as the end point of the upper paleolithic and the beginning of the epipaleolithic / mesolithic. the mesolithic technology included the use of microliths as composite stone tools, along with wood, bone, and antler tools. the later stone age, during which the rudiments of agricultural technology were developed, is called the neolithic period. during this period, polished stone tools were made from a variety of hard rocks such as flint, jade, jadeite, and greenstone, largely by working exposures as quarries, but later the valuable rocks were pursued by tunneling underground, the first steps in mining technology. the polished axes were used for forest clearance and the establishment of crop farming and were so effective as to remain in use when bronze and iron appeared. these stone axes were used alongside a continued use of stone tools such as a range of projectiles, knives, and scrapers, as well as tools, made from organic materials such as wood, bone, and antler. stone age cultures developed music and engaged in organized warfare. stone age humans developed ocean - worthy outrigger canoe technology, leading to migration across the malay archipelago, across the indian ocean to madagascar and also across the pacific ocean, which required knowledge of the ocean currents, weather patterns, sailing, and celestial navigation. although paleolithic cultures left no written records, the shift from nomadic life to settlement and agriculture can be inferred from a range of archaeological evidence. such evidence includes ancient tools, cave paintings, and other prehistoric art, such as the venus of willendorf. human remains also provide direct evidence, both through the examination of bones, and
was used before copper smelting was known. copper smelting is believed to have originated when the technology of pottery kilns allowed sufficiently high temperatures. the concentration of various elements such as arsenic increase with depth in copper ore deposits and smelting of these ores yields arsenical bronze, which can be sufficiently work hardened to be suitable for making tools. bronze is an alloy of copper with tin ; the latter being found in relatively few deposits globally caused a long time to elapse before true tin bronze became widespread. ( see : tin sources and trade in ancient times ) bronze was a major advancement over stone as a material for making tools, both because of its mechanical properties like strength and ductility and because it could be cast in molds to make intricately shaped objects. bronze significantly advanced shipbuilding technology with better tools and bronze nails. bronze nails replaced the old method of attaching boards of the hull with cord woven through drilled holes. better ships enabled long - distance trade and the advance of civilization. this technological trend apparently began in the fertile crescent and spread outward over time. these developments were not, and still are not, universal. the three - age system does not accurately describe the technology history of groups outside of eurasia, and does not apply at all in the case of some isolated populations, such as the spinifex people, the sentinelese, and various amazonian tribes, which still make use of stone age technology, and have not developed agricultural or metal technology. these villages preserve traditional customs in the face of global modernity, exhibiting a remarkable resistance to the rapid advancement of technology. = = = = iron age = = = = before iron smelting was developed the only iron was obtained from meteorites and is usually identified by having nickel content. meteoric iron was rare and valuable, but was sometimes used to make tools and other implements, such as fish hooks. the iron age involved the adoption of iron smelting technology. it generally replaced bronze and made it possible to produce tools which were stronger, lighter and cheaper to make than bronze equivalents. the raw materials to make iron, such as ore and limestone, are far more abundant than copper and especially tin ores. consequently, iron was produced in many areas. it was not possible to mass manufacture steel or pure iron because of the high temperatures required. furnaces could reach melting temperature but the crucibles and molds needed for melting and casting had not been developed. steel could be produced by forging bloomery iron to reduce the carbon content in a
prehistory. the oldest gold treasure in the world, dating from 4, 600 bc to 4, 200 bc, was discovered at the site. the gold piece dating from 4, 500 bc, found in 2019 in durankulak, near varna is another important example. other signs of early metals are found from the third millennium bc in palmela, portugal, los millares, spain, and stonehenge, united kingdom. the precise beginnings, however, have not be clearly ascertained and new discoveries are both continuous and ongoing. in approximately 1900 bc, ancient iron smelting sites existed in tamil nadu. in the near east, about 3, 500 bc, it was discovered that by combining copper and tin, a superior metal could be made, an alloy called bronze. this represented a major technological shift known as the bronze age. the extraction of iron from its ore into a workable metal is much more difficult than for copper or tin. the process appears to have been invented by the hittites in about 1200 bc, beginning the iron age. the secret of extracting and working iron was a key factor in the success of the philistines. historical developments in ferrous metallurgy can be found in a wide variety of past cultures and civilizations. this includes the ancient and medieval kingdoms and empires of the middle east and near east, ancient iran, ancient egypt, ancient nubia, and anatolia in present - day turkey, ancient nok, carthage, the celts, greeks and romans of ancient europe, medieval europe, ancient and medieval china, ancient and medieval india, ancient and medieval japan, amongst others. a 16th century book by georg agricola, de re metallica, describes the highly developed and complex processes of mining metal ores, metal extraction, and metallurgy of the time. agricola has been described as the " father of metallurgy ". = = extraction = = extractive metallurgy is the practice of removing valuable metals from an ore and refining the extracted raw metals into a purer form. in order to convert a metal oxide or sulphide to a purer metal, the ore must be reduced physically, chemically, or electrolytically. extractive metallurgists are interested in three primary streams : feed, concentrate ( metal oxide / sulphide ) and tailings ( waste ). after mining, large pieces of the ore feed are broken through crushing or grinding in order to obtain particles small enough, where each particle is either mostly valuable or
the third millennium bc in palmela, portugal, los millares, spain, and stonehenge, united kingdom. the precise beginnings, however, have not be clearly ascertained and new discoveries are both continuous and ongoing. in approximately 1900 bc, ancient iron smelting sites existed in tamil nadu. in the near east, about 3, 500 bc, it was discovered that by combining copper and tin, a superior metal could be made, an alloy called bronze. this represented a major technological shift known as the bronze age. the extraction of iron from its ore into a workable metal is much more difficult than for copper or tin. the process appears to have been invented by the hittites in about 1200 bc, beginning the iron age. the secret of extracting and working iron was a key factor in the success of the philistines. historical developments in ferrous metallurgy can be found in a wide variety of past cultures and civilizations. this includes the ancient and medieval kingdoms and empires of the middle east and near east, ancient iran, ancient egypt, ancient nubia, and anatolia in present - day turkey, ancient nok, carthage, the celts, greeks and romans of ancient europe, medieval europe, ancient and medieval china, ancient and medieval india, ancient and medieval japan, amongst others. a 16th century book by georg agricola, de re metallica, describes the highly developed and complex processes of mining metal ores, metal extraction, and metallurgy of the time. agricola has been described as the " father of metallurgy ". = = extraction = = extractive metallurgy is the practice of removing valuable metals from an ore and refining the extracted raw metals into a purer form. in order to convert a metal oxide or sulphide to a purer metal, the ore must be reduced physically, chemically, or electrolytically. extractive metallurgists are interested in three primary streams : feed, concentrate ( metal oxide / sulphide ) and tailings ( waste ). after mining, large pieces of the ore feed are broken through crushing or grinding in order to obtain particles small enough, where each particle is either mostly valuable or mostly waste. concentrating the particles of value in a form supporting separation enables the desired metal to be removed from waste products. mining may not be necessary, if the ore body and physical environment are conducive to leaching. leaching dissolves minerals in an ore body and results in an enriched solution. the solution
, crystal structure, hazards associated with minerals, and the physical and chemical properties of minerals. petrology is the study of rocks, including the formation and composition of rocks. petrography is a branch of petrology that studies the typology and classification of rocks. = = earth ' s interior = = plate tectonics, mountain ranges, volcanoes, and earthquakes are geological phenomena that can be explained in terms of physical and chemical processes in the earth ' s crust. beneath the earth ' s crust lies the mantle which is heated by the radioactive decay of heavy elements. the mantle is not quite solid and consists of magma which is in a state of semi - perpetual convection. this convection process causes the lithospheric plates to move, albeit slowly. the resulting process is known as plate tectonics. areas of the crust where new crust is created are called divergent boundaries, those where it is brought back into the earth are convergent boundaries and those where plates slide past each other, but no new lithospheric material is created or destroyed, are referred to as transform ( or conservative ) boundaries. earthquakes result from the movement of the lithospheric plates, and they often occur near convergent boundaries where parts of the crust are forced into the earth as part of subduction. plate tectonics might be thought of as the process by which the earth is resurfaced. as the result of seafloor spreading, new crust and lithosphere is created by the flow of magma from the mantle to the near surface, through fissures, where it cools and solidifies. through subduction, oceanic crust and lithosphere vehemently returns to the convecting mantle. volcanoes result primarily from the melting of subducted crust material. crust material that is forced into the asthenosphere melts, and some portion of the melted material becomes light enough to rise to the surface β giving birth to volcanoes. = = atmospheric science = = atmospheric science initially developed in the late - 19th century as a means to forecast the weather through meteorology, the study of weather. atmospheric chemistry was developed in the 20th century to measure air pollution and expanded in the 1970s in response to acid rain. climatology studies the climate and climate change. the troposphere, stratosphere, mesosphere, thermosphere, and exosphere are the five layers which make up earth ' s atmosphere. 75 % of the mass in the atmosphere is located within the troposphere, the lowest
##morphology studies the origin of landscapes. structural geology studies the deformation of rocks to produce mountains and lowlands. resource geology studies how energy resources can be obtained from minerals. environmental geology studies how pollution and contaminants affect soil and rock. mineralogy is the study of minerals and includes the study of mineral formation, crystal structure, hazards associated with minerals, and the physical and chemical properties of minerals. petrology is the study of rocks, including the formation and composition of rocks. petrography is a branch of petrology that studies the typology and classification of rocks. = = earth ' s interior = = plate tectonics, mountain ranges, volcanoes, and earthquakes are geological phenomena that can be explained in terms of physical and chemical processes in the earth ' s crust. beneath the earth ' s crust lies the mantle which is heated by the radioactive decay of heavy elements. the mantle is not quite solid and consists of magma which is in a state of semi - perpetual convection. this convection process causes the lithospheric plates to move, albeit slowly. the resulting process is known as plate tectonics. areas of the crust where new crust is created are called divergent boundaries, those where it is brought back into the earth are convergent boundaries and those where plates slide past each other, but no new lithospheric material is created or destroyed, are referred to as transform ( or conservative ) boundaries. earthquakes result from the movement of the lithospheric plates, and they often occur near convergent boundaries where parts of the crust are forced into the earth as part of subduction. plate tectonics might be thought of as the process by which the earth is resurfaced. as the result of seafloor spreading, new crust and lithosphere is created by the flow of magma from the mantle to the near surface, through fissures, where it cools and solidifies. through subduction, oceanic crust and lithosphere vehemently returns to the convecting mantle. volcanoes result primarily from the melting of subducted crust material. crust material that is forced into the asthenosphere melts, and some portion of the melted material becomes light enough to rise to the surface β giving birth to volcanoes. = = atmospheric science = = atmospheric science initially developed in the late - 19th century as a means to forecast the weather through meteorology, the study of weather. atmospheric chemistry was developed in the 20th century to measure air pollution and expanded in the 1970s in response to
lomekwi, turkana, dating from 3. 3 million years ago. stone tools diversified through the pleistocene period, which ended ~ 12, 000 years ago. the earliest evidence of warfare between two groups is recorded at the site of nataruk in turkana, kenya, where human skeletons with major traumatic injuries to the head, neck, ribs, knees and hands, including an embedded obsidian bladelet on a skull, are evidence of inter - group conflict between groups of nomadic hunter - gatherers 10, 000 years ago. humans entered the bronze age as they learned to smelt copper into an alloy with tin to make weapons. in asia where copper - tin ores are rare, this development was delayed until trading in bronze began in the third millennium bce. in the middle east and southern european regions, the bronze age follows the neolithic period, but in other parts of the world, the copper age is a transition from neolithic to the bronze age. although the iron age generally follows the bronze age, in some areas the iron age intrudes directly on the neolithic from outside the region, with the exception of sub - saharan africa where it was developed independently. the first large - scale use of iron weapons began in asia minor around the 14th century bce and in central europe around the 11th century bce followed by the middle east ( about 1000 bce ) and india and china. the assyrians are credited with the introduction of horse cavalry in warfare and the extensive use of iron weapons by 1100 bce. assyrians were also the first to use iron - tipped arrows. = = = post - classical technology = = = the wujing zongyao ( essentials of the military arts ), written by zeng gongliang, ding du, and others at the order of emperor renzong around 1043 during the song dynasty illustrates the era ' s focus on advancing intellectual issues and military technology due to the significance of warfare between the song and the liao, jin, and yuan to their north. the book covers topics of military strategy, training, and the production and employment of advanced weaponry. advances in military technology aided the song dynasty in its defense against hostile neighbors to the north. the flamethrower found its origins in byzantine - era greece, employing greek fire ( a chemically complex, highly flammable petrol fluid ) in a device with a siphon hose by the 7th century. : 77 the earliest reference to greek fire in china was made in 917, written by wu renchen in his spring and autumn annals of the ten kingdoms. : 80 in 91
. the first major technologies were tied to survival, hunting, and food preparation. stone tools and weapons, fire, and clothing were technological developments of major importance during this period. human ancestors have been using stone and other tools since long before the emergence of homo sapiens approximately 300, 000 years ago. the earliest direct evidence of tool usage was found in ethiopia within the great rift valley, dating back to 2. 5 million years ago. the earliest methods of stone tool making, known as the oldowan " industry ", date back to at least 2. 3 million years ago. this era of stone tool use is called the paleolithic, or " old stone age ", and spans all of human history up to the development of agriculture approximately 12, 000 years ago. to make a stone tool, a " core " of hard stone with specific flaking properties ( such as flint ) was struck with a hammerstone. this flaking produced sharp edges which could be used as tools, primarily in the form of choppers or scrapers. these tools greatly aided the early humans in their hunter - gatherer lifestyle to perform a variety of tasks including butchering carcasses ( and breaking bones to get at the marrow ) ; chopping wood ; cracking open nuts ; skinning an animal for its hide, and even forming other tools out of softer materials such as bone and wood. the earliest stone tools were crude, being little more than a fractured rock. in the acheulian era, beginning approximately 1. 65 million years ago, methods of working these stones into specific shapes, such as hand axes emerged. this early stone age is described as the lower paleolithic. the middle paleolithic, approximately 300, 000 years ago, saw the introduction of the prepared - core technique, where multiple blades could be rapidly formed from a single core stone. the upper paleolithic, beginning approximately 40, 000 years ago, saw the introduction of pressure flaking, where a wood, bone, or antler punch could be used to shape a stone very finely. the end of the last ice age about 10, 000 years ago is taken as the end point of the upper paleolithic and the beginning of the epipaleolithic / mesolithic. the mesolithic technology included the use of microliths as composite stone tools, along with wood, bone, and antler tools. the later stone age, during which the rudiments of agricultural technology were developed, is called the neolithic period. during this period,
pneumatic caissons ( sometimes called pressurized caissons ), which penetrate soft mud, are bottomless boxes sealed at the top and filled with compressed air to keep water and mud out at depth. an airlock allows access to the chamber. workers, called sandhogs in american english, move mud and rock debris ( called muck ) from the edge of the workspace to a water - filled pit, connected by a tube ( called the muck tube ) to the surface. a crane at the surface removes the soil with a clamshell bucket. the water pressure in the tube balances the air pressure, with excess air escaping up the muck tube. the pressurized air flow must be constant to ensure regular air changes for the workers and prevent excessive inflow of mud or water at the base of the caisson. when the caisson hits bedrock, the sandhogs exit through the airlock and fill the box with concrete, forming a solid foundation pier. a pneumatic ( compressed - air ) caisson has the advantage of providing dry working conditions, which is better for placing concrete. it is also well suited for foundations for which other methods might cause settlement of adjacent structures. construction workers who leave the pressurized environment of the caisson must decompress at a rate that allows symptom - free release of inert gases dissolved in the body tissues if they are to avoid decompression sickness, a condition first identified in caisson workers, and originally named " caisson disease " in recognition of the occupational hazard. construction of the brooklyn bridge, which was built with the help of pressurised caissons, resulted in numerous workers being either killed or permanently injured by caisson disease during its construction. barotrauma of the ears, sinus cavities and lungs and dysbaric osteonecrosis are other risks. = = other uses = = caissons have also been used in the installation of hydraulic elevators where a single - stage ram is installed below the ground level. caissons, codenamed phoenix, were an integral part of the mulberry harbours used during the world war ii allied invasion of normandy. = = other meanings = = boat lift caissons : the word caisson is also used as a synonym for the moving trough part of caisson locks, canal lifts and inclines in which boats and ships rest while being lifted from one canal elevation to another ; the water is retained on the inside of the caisson, or excluded from the caisson
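The pressure balance described in this passage (compressed air holding back the water at the caisson's working depth) follows from simple hydrostatics. The sketch below is an illustrative calculation rather than anything quoted in the passage: the depths and the water density are assumed example values.

```python
# Illustrative hydrostatic estimate of the air pressure needed inside a
# pneumatic caisson so that water and mud are kept out at the working depth.
# The depths and water density are assumed example values, not figures from
# the passage.

RHO_WATER = 1000.0   # kg/m^3, fresh water (sea water would be ~1025)
G = 9.81             # m/s^2
P_ATM = 101_325.0    # Pa, atmospheric pressure at the surface

def required_air_pressure(depth_m: float) -> float:
    """Absolute air pressure (Pa) that balances the water column at this depth."""
    return P_ATM + RHO_WATER * G * depth_m

if __name__ == "__main__":
    for depth in (10.0, 20.0, 30.0):  # metres below the water table
        p = required_air_pressure(depth)
        print(f"depth {depth:4.1f} m -> {p/1e5:.2f} bar absolute "
              f"({(p - P_ATM)/1e5:.2f} bar gauge)")
```

At a 30 m working depth this is roughly 3 bar of overpressure, which is why the staged decompression described in the passage matters for the workers leaving the chamber.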
Question: Which rock is most likely to contain fossil seashells?
A) basalt
B) gneiss
C) granite
D) limestone
|
D) limestone
|
Context:
all christian authors held that the earth was round. athenagoras, an eastern christian writing around the year 175 ad, said that the earth was spherical. methodius ( c. 290 ad ), an eastern christian writing against " the theory of the chaldeans and the egyptians " said : " let us first lay bare... the theory of the chaldeans and the egyptians. they say that the circumference of the universe is likened to the turnings of a well - rounded globe, the earth being a central point. they say that since its outline is spherical,... the earth should be the center of the universe, around which the heaven is whirling. " arnobius, another eastern christian writing sometime around 305 ad, described the round earth : " in the first place, indeed, the world itself is neither right nor left. it has neither upper nor lower regions, nor front nor back. for whatever is round and bounded on every side by the circumference of a solid sphere, has no beginning or end... " other advocates of a round earth included eusebius, hilary of poitiers, irenaeus, hippolytus of rome, firmicus maternus, ambrose, jerome, prudentius, favonius eulogius, and others. the only exceptions to this consensus up until the mid - fourth century were theophilus of antioch and lactantius, both of whom held anti - hellenistic views and associated the round - earth view with pagan cosmology. lactantius, a western christian writer and advisor to the first christian roman emperor, constantine, writing sometime between 304 and 313 ad, ridiculed the notion of antipodes and the philosophers who fancied that " the universe is round like a ball. they also thought that heaven revolves in accordance with the motion of the heavenly bodies.... for that reason, they constructed brass globes, as though after the figure of the universe. " the influential theologian and philosopher saint augustine, one of the four great church fathers of the western church, similarly objected to the " fable " of antipodes : but as to the fable that there are antipodes, that is to say, men on the opposite side of the earth, where the sun rises when it sets to us, men who walk with their feet opposite ours that is on no ground credible. and, indeed, it is not affirmed that this has been learned by historical knowledge, but by scientific conjecture
three major planets, venus, earth, and mercury formed out of the solar nebula. a fourth planetesimal, theia, also formed near earth where it collided in a giant impact, rebounding as the planet mars. during this impact earth lost $ { \ approx } 4 $ \ % of its crust and mantle that is now is found on mars and the moon. at the antipode of the giant impact, $ \ approx $ 60 \ % of earth ' s crust, atmosphere, and a large amount of mantle were ejected into space forming the moon. the lost crust never reformed and became the earth ' s ocean basins. the theia impact site corresponds to indian ocean gravitational anomaly on earth and the hellas basin on mars. the dynamics of the giant impact are consistent with the rotational rates and axial tilts of both earth and mars. the giant impact removed sufficient co $ _ 2 $ from earth ' s atmosphere to avoid a runaway greenhouse effect, initiated plate tectonics, and gave life time to form near geothermal vents at the continental margins. mercury formed near venus where on a close approach it was slingshot into the sun ' s convective zone losing 94 \ % of its mass, much of which remains there today. black carbon, from co $ _ 2 $ decomposed by the intense heat, is still found on the surface of mercury. arriving at 616 km / s, mercury dramatically altered the sun ' s rotational energy, explaining both its anomalously slow rotation rate and axial tilt. these results are quantitatively supported by mass balances, the current locations of the terrestrial planets, and the orientations of their major orbital axes.
the presence of a co - orbital companion induces the splitting of the well known keplerian spin - orbit resonances. it leads to chaotic rotation when those resonances overlap.
variation in total solar irradiance is thought to have little effect on the earth ' s surface temperature because of the thermal time constant - - the characteristic response time of the earth ' s global surface temperature to changes in forcing. this time constant is large enough to smooth annual variations but not necessarily variations having a longer period such as those due to solar inertial motion ; the magnitude of these surface temperature variations is estimated.
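The claim that the time constant smooths annual variations but not necessarily longer-period ones is the behaviour of a first-order (single time constant) response, whose amplitude attenuation factor for periodic forcing is 1 / sqrt(1 + (omega * tau)^2). The sketch below evaluates that factor; the time constant and forcing periods are assumed illustrative values, not numbers from the abstract.

```python
# Attenuation of periodic forcing by a single-time-constant (first-order)
# thermal response: amplitude ratio = 1 / sqrt(1 + (omega * tau)^2).
# tau and the forcing periods below are assumed example values.

import math

TAU_YEARS = 5.0  # assumed thermal time constant of the surface, in years

def attenuation(period_years: float, tau_years: float = TAU_YEARS) -> float:
    """Steady-state amplitude ratio of the temperature response to the forcing."""
    omega = 2.0 * math.pi / period_years
    return 1.0 / math.sqrt(1.0 + (omega * tau_years) ** 2)

if __name__ == "__main__":
    for period in (1.0, 11.0, 22.0, 180.0):  # annual, solar cycle, and longer periods
        print(f"forcing period {period:6.1f} yr -> response retains "
              f"{attenuation(period):.2%} of its relative amplitude")
```

With these placeholder numbers the annual cycle is suppressed to a few percent while multi-decadal and longer variations pass through almost unattenuated, which is the qualitative point the abstract makes.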
a state of semi - perpetual convection. this convection process causes the lithospheric plates to move, albeit slowly. the resulting process is known as plate tectonics. areas of the crust where new crust is created are called divergent boundaries, those where it is brought back into the earth are convergent boundaries and those where plates slide past each other, but no new lithospheric material is created or destroyed, are referred to as transform ( or conservative ) boundaries. earthquakes result from the movement of the lithospheric plates, and they often occur near convergent boundaries where parts of the crust are forced into the earth as part of subduction. plate tectonics might be thought of as the process by which the earth is resurfaced. as the result of seafloor spreading, new crust and lithosphere is created by the flow of magma from the mantle to the near surface, through fissures, where it cools and solidifies. through subduction, oceanic crust and lithosphere vehemently returns to the convecting mantle. volcanoes result primarily from the melting of subducted crust material. crust material that is forced into the asthenosphere melts, and some portion of the melted material becomes light enough to rise to the surface β giving birth to volcanoes. = = atmospheric science = = atmospheric science initially developed in the late - 19th century as a means to forecast the weather through meteorology, the study of weather. atmospheric chemistry was developed in the 20th century to measure air pollution and expanded in the 1970s in response to acid rain. climatology studies the climate and climate change. the troposphere, stratosphere, mesosphere, thermosphere, and exosphere are the five layers which make up earth ' s atmosphere. 75 % of the mass in the atmosphere is located within the troposphere, the lowest layer. in all, the atmosphere is made up of about 78. 0 % nitrogen, 20. 9 % oxygen, and 0. 92 % argon, and small amounts of other gases including co2 and water vapor. water vapor and co2 cause the earth ' s atmosphere to catch and hold the sun ' s energy through the greenhouse effect. this makes earth ' s surface warm enough for liquid water and life. in addition to trapping heat, the atmosphere also protects living organisms by shielding the earth ' s surface from cosmic rays. the magnetic field β created by the internal motions of the core β produces the magnetosphere which protects earth '
plate tectonics, mountain ranges, volcanoes, and earthquakes are geological phenomena that can be explained in terms of physical and chemical processes in the earth ' s crust. beneath the earth ' s crust lies the mantle which is heated by the radioactive decay of heavy elements. the mantle is not quite solid and consists of magma which is in a state of semi - perpetual convection. this convection process causes the lithospheric plates to move, albeit slowly. the resulting process is known as plate tectonics. areas of the crust where new crust is created are called divergent boundaries, those where it is brought back into the earth are convergent boundaries and those where plates slide past each other, but no new lithospheric material is created or destroyed, are referred to as transform ( or conservative ) boundaries. earthquakes result from the movement of the lithospheric plates, and they often occur near convergent boundaries where parts of the crust are forced into the earth as part of subduction. plate tectonics might be thought of as the process by which the earth is resurfaced. as the result of seafloor spreading, new crust and lithosphere is created by the flow of magma from the mantle to the near surface, through fissures, where it cools and solidifies. through subduction, oceanic crust and lithosphere vehemently returns to the convecting mantle. volcanoes result primarily from the melting of subducted crust material. crust material that is forced into the asthenosphere melts, and some portion of the melted material becomes light enough to rise to the surface β giving birth to volcanoes. = = atmospheric science = = atmospheric science initially developed in the late - 19th century as a means to forecast the weather through meteorology, the study of weather. atmospheric chemistry was developed in the 20th century to measure air pollution and expanded in the 1970s in response to acid rain. climatology studies the climate and climate change. the troposphere, stratosphere, mesosphere, thermosphere, and exosphere are the five layers which make up earth ' s atmosphere. 75 % of the mass in the atmosphere is located within the troposphere, the lowest layer. in all, the atmosphere is made up of about 78. 0 % nitrogen, 20. 9 % oxygen, and 0. 92 % argon, and small amounts of other gases including co2 and water vapor. water vapor and co2 cause the earth ' s atmosphere to catch and hold the sun ' s
the hun tian theory ), or as being without substance while the heavenly bodies float freely ( the hsuan yeh theory ), the earth was at all times flat, although perhaps bulging up slightly. the model of an egg was often used by chinese astronomers such as zhang heng ( 78 β 139 ad ) to describe the heavens as spherical : the heavens are like a hen ' s egg and as round as a crossbow bullet ; the earth is like the yolk of the egg, and lies in the centre. this analogy with a curved egg led some modern historians, notably joseph needham, to conjecture that chinese astronomers were, after all, aware of the earth ' s sphericity. the egg reference, however, was rather meant to clarify the relative position of the flat earth to the heavens : in a passage of zhang heng ' s cosmogony not translated by needham, zhang himself says : " heaven takes its body from the yang, so it is round and in motion. earth takes its body from the yin, so it is flat and quiescent ". the point of the egg analogy is simply to stress that the earth is completely enclosed by heaven, rather than merely covered from above as the kai tian describes. chinese astronomers, many of them brilliant men by any standards, continued to think in flat - earth terms until the seventeenth century ; this surprising fact might be the starting - point for a re - examination of the apparent facility with which the idea of a spherical earth found acceptance in fifth - century bc greece. further examples cited by needham supposed to demonstrate dissenting voices from the ancient chinese consensus actually refer without exception to the earth being square, not to it being flat. accordingly, the 13th - century scholar li ye, who argued that the movements of the round heaven would be hindered by a square earth, did not advocate a spherical earth, but rather that its edge should be rounded off so as to be circular. however, needham disagrees, affirming that li ye believed the earth to be spherical, similar in shape to the heavens but much smaller. this was preconceived by the 4th - century scholar yu xi, who argued for the infinity of outer space surrounding the earth and that the latter could be either square or round, in accordance to the shape of the heavens. when chinese geographers of the 17th century, influenced by european cartography and astronomy, showed the earth as a sphere that could be circumnavigated by sailing around the globe, they
the transition of our energy system to renewable energies is necessary in order not to heat up the climate any further and to achieve climate neutrality. the use of wind energy plays an important role in this transition in germany. but how much wind energy can be used and what are the possible consequences for the atmosphere if more and more wind energy is used?
this is an erratum of the paper [ phys. rev. lett. { \ bf 84 }, 4260 ( 2000 ) ]
the curvature radiation is applied to explain the circular polarization of frbs. significant circular polarization is reported in both apparently non - repeating and repeating frbs. curvature radiation can produce significant circular polarization at the wing of the radiation beam. in the curvature radiation scenario, in order to see significant circular polarization in frbs, ( 1 ) more energetic bursts, ( 2 ) bursts with electrons having higher lorentz factor, ( 3 ) a slowly rotating neutron star at the centre are required. different rotational periods of the central neutron star may explain why some frbs have high circular polarization, while others don ' t. considering possible difference in refractive index for the parallel and perpendicular component of electric field, the position angle may change rapidly over the narrow pulse window of the radiation beam. the position angle swing in frbs may also be explained by this non - geometric origin, besides that of the rotating vector model.
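For orientation on the Lorentz-factor point above, curvature radiation from an electron moving along a field line of curvature radius rho peaks near the characteristic frequency nu_c = 3 c gamma^3 / (4 pi rho). The sketch below evaluates that standard relation; the gamma and rho values are assumed for illustration and are not taken from the abstract.

```python
# Characteristic frequency of curvature radiation: nu_c = 3 c gamma^3 / (4 pi rho).
# gamma (electron Lorentz factor) and rho (field-line curvature radius) are
# assumed example values, not numbers from the abstract.

import math

C = 2.998e8  # speed of light, m/s

def curvature_frequency(gamma: float, rho_m: float) -> float:
    """Characteristic curvature-radiation frequency in Hz."""
    return 3.0 * C * gamma ** 3 / (4.0 * math.pi * rho_m)

if __name__ == "__main__":
    rho = 1.0e4  # m, assumed curvature radius in a neutron-star magnetosphere
    for gamma in (100.0, 300.0, 1000.0):
        nu = curvature_frequency(gamma, rho)
        print(f"gamma = {gamma:6.0f} -> nu_c ~ {nu/1e9:10.2f} GHz")
```

The steep gamma-cubed scaling is why the assumed Lorentz factor matters so much in this picture of where the emission lands relative to the observing band.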
Question: Which occurs as a result of Earth's tilt on its rotating axis?
A) movement of the tides
B) prevalent or trade winds
C) seasonal changes in the climate
D) light and dark changes of day and night
|
C) seasonal changes in the climate
|
Context:
outer satellites of the planets have distant, eccentric orbits that can be highly inclined or even retrograde relative to the equatorial planes of their planets. these irregular orbits cannot have formed by circumplanetary accretion and are likely products of early capture from heliocentric orbit. the irregular satellites may be the only small bodies remaining which are still relatively near their formation locations within the giant planet region. the study of the irregular satellites provides a unique window on processes operating in the young solar system and allows us to probe possible planet formation mechanisms and the composition of the solar nebula between the rocky objects in the main asteroid belt and the very volatile rich objects in the kuiper belt. the gas and ice giant planets all appear to have very similar irregular satellite systems irrespective of their mass or formation timescales and mechanisms. water ice has been detected on some of the outer satellites of saturn and neptune whereas none has been observed on jupiter ' s outer satellites.
planetary systems can evolve dynamically even after the full growth of the planets themselves. there is actually circumstantial evidence that most planetary systems become unstable after the disappearance of gas from the protoplanetary disk. these instabilities can be due to the original system being too crowded and too closely packed or to external perturbations such as tides, planetesimal scattering, or torques from distant stellar companions. the solar system was not exceptional in this sense. in its inner part, a crowded system of planetary embryos became unstable, leading to a series of mutual impacts that built the terrestrial planets on a timescale of ~ 100 my. in its outer part, the giant planets became temporarily unstable and their orbital configuration expanded under the effect of mutual encounters. a planet might have been ejected in this phase. thus, the orbital distributions of planetary systems that we observe today, both solar and extrasolar ones, can be different from the those emerging from the formation process and it is important to consider possible long - term evolutionary effects to connect the two.
armed with an astrolabe and kepler ' s laws one can arrive at accurate estimates of the orbits of planets.
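For orbits around the Sun, Kepler's third law reduces to P^2 = a^3 when the period P is in years and the semi-major axis a is in astronomical units, which is the kind of estimate the sentence above alludes to. The sketch below simply evaluates that relation for a few example distances.

```python
# Kepler's third law for bodies orbiting the Sun: P^2 = a^3, with P in years
# and a in astronomical units (valid when the orbiting mass is negligible).

def orbital_period_years(a_au: float) -> float:
    """Orbital period in years for a semi-major axis given in AU."""
    return a_au ** 1.5

if __name__ == "__main__":
    for name, a in (("Mercury", 0.387), ("Earth", 1.0), ("Mars", 1.524), ("Jupiter", 5.203)):
        print(f"{name:8s} a = {a:5.3f} AU -> P = {orbital_period_years(a):6.2f} yr")
```

Rearranged, the same relation gives the semi-major axis from an observed period, which is essentially what positional measurements plus Kepler's laws allow.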
the gas giant planets in the solar system have a retinue of icy moons, and we expect giant exoplanets to have similar satellite systems. if a jupiter - like planet were to migrate toward its parent star the icy moons orbiting it would evaporate, creating atmospheres and possible habitable surface oceans. here, we examine how long the surface ice and possible oceans would last before being hydrodynamically lost to space. the hydrodynamic loss rate from the moons is determined, in large part, by the stellar flux available for absorption, which increases as the giant planet and icy moons migrate closer to the star. at some planet - star distance the stellar flux incident on the icy moons becomes so great that they enter a runaway greenhouse state. this runaway greenhouse state rapidly transfers all available surface water to the atmosphere as vapor, where it is easily lost from the small moons. however, for icy moons of ganymede ' s size around a sun - like star we found that surface water ( either ice or liquid ) can persist indefinitely outside the runaway greenhouse orbital distance. in contrast, the surface water on smaller moons of europa ' s size will only persist on timescales greater than 1 gyr at distances ranging 1. 49 to 0. 74 au around a sun - like star for bond albedos of 0. 2 and 0. 8, where the lower albedo becomes relevant if ice melts. consequently, small moons can lose their icy shells, which would create a torus of h atoms around their host planet that might be detectable in future observations.
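The inward-migration argument here rests on how the incident stellar flux, and hence the equilibrium temperature, rises as the orbital distance shrinks. A standard way to see the scaling is T_eq = [ L (1 - A) / (16 pi sigma d^2) ]^(1/4). The sketch below evaluates it for a Sun-like luminosity and the two Bond albedos quoted in the abstract; it only illustrates the distance and albedo dependence, not the paper's actual runaway-greenhouse or escape calculation.

```python
# Equilibrium temperature of a body at distance d from a Sun-like star:
# T_eq = (L * (1 - A) / (16 * pi * sigma * d^2)) ** 0.25.
# The distances other than those quoted in the abstract are assumed examples.

import math

L_SUN = 3.828e26      # W, solar luminosity
SIGMA = 5.670e-8      # W m^-2 K^-4, Stefan-Boltzmann constant
AU = 1.496e11         # m

def t_eq(d_au: float, albedo: float, lum: float = L_SUN) -> float:
    """Equilibrium temperature in kelvin."""
    d = d_au * AU
    return (lum * (1.0 - albedo) / (16.0 * math.pi * SIGMA * d ** 2)) ** 0.25

if __name__ == "__main__":
    for albedo in (0.2, 0.8):                # the two Bond albedos from the abstract
        for d_au in (1.49, 1.0, 0.74, 0.5):  # orbital distances in AU
            print(f"A = {albedo:.1f}, d = {d_au:4.2f} AU -> T_eq ~ {t_eq(d_au, albedo):5.1f} K")
```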
three major planets, venus, earth, and mercury formed out of the solar nebula. a fourth planetesimal, theia, also formed near earth where it collided in a giant impact, rebounding as the planet mars. during this impact earth lost $ { \ approx } 4 $ \ % of its crust and mantle that is now is found on mars and the moon. at the antipode of the giant impact, $ \ approx $ 60 \ % of earth ' s crust, atmosphere, and a large amount of mantle were ejected into space forming the moon. the lost crust never reformed and became the earth ' s ocean basins. the theia impact site corresponds to indian ocean gravitational anomaly on earth and the hellas basin on mars. the dynamics of the giant impact are consistent with the rotational rates and axial tilts of both earth and mars. the giant impact removed sufficient co $ _ 2 $ from earth ' s atmosphere to avoid a runaway greenhouse effect, initiated plate tectonics, and gave life time to form near geothermal vents at the continental margins. mercury formed near venus where on a close approach it was slingshot into the sun ' s convective zone losing 94 \ % of its mass, much of which remains there today. black carbon, from co $ _ 2 $ decomposed by the intense heat, is still found on the surface of mercury. arriving at 616 km / s, mercury dramatically altered the sun ' s rotational energy, explaining both its anomalously slow rotation rate and axial tilt. these results are quantitatively supported by mass balances, the current locations of the terrestrial planets, and the orientations of their major orbital axes.
recent surveys have revealed a lack of close - in planets around evolved stars more massive than 1. 2 msun. such planets are common around solar - mass stars. we have calculated the orbital evolution of planets around stars with a range of initial masses, and have shown how planetary orbits are affected by the evolution of the stars all the way to the tip of the red giant branch ( rgb ). we find that tidal interaction can lead to the engulfment of close - in planets by evolved stars. the engulfment is more efficient for more - massive planets and less - massive stars. these results may explain the observed semi - major axis distribution of planets around evolved stars with masses larger than 1. 5 msun. our results also suggest that massive planets may form more efficiently around intermediate - mass stars.
also launched missions to mercury in 2004, with the messenger probe demonstrating as the first use of a solar sail. nasa also launched probes to the outer solar system starting in the 1960s. pioneer 10 was the first probe to the outer planets, flying by jupiter, while pioneer 11 provided the first close up view of the planet. both probes became the first objects to leave the solar system. the voyager program launched in 1977, conducting flybys of jupiter and saturn, neptune, and uranus on a trajectory to leave the solar system. the galileo spacecraft, deployed from the space shuttle flight sts - 34, was the first spacecraft to orbit jupiter, discovering evidence of subsurface oceans on the europa and observed that the moon may hold ice or liquid water. a joint nasa - european space agency - italian space agency mission, cassini β huygens, was sent to saturn ' s moon titan, which, along with mars and europa, are the only celestial bodies in the solar system suspected of being capable of harboring life. cassini discovered three new moons of saturn and the huygens probe entered titan ' s atmosphere. the mission discovered evidence of liquid hydrocarbon lakes on titan and subsurface water oceans on the moon of enceladus, which could harbor life. finally launched in 2006, the new horizons mission was the first spacecraft to visit pluto and the kuiper belt. beyond interplanetary probes, nasa has launched many space telescopes. launched in the 1960s, the orbiting astronomical observatory were nasa ' s first orbital telescopes, providing ultraviolet, gamma - ray, x - ray, and infrared observations. nasa launched the orbiting geophysical observatory in the 1960s and 1970s to look down at earth and observe its interactions with the sun. the uhuru satellite was the first dedicated x - ray telescope, mapping 85 % of the sky and discovering a large number of black holes. launched in the 1990s and early 2000s, the great observatories program are among nasa ' s most powerful telescopes. the hubble space telescope was launched in 1990 on sts - 31 from the discovery and could view galaxies 15 billion light years away. a major defect in the telescope ' s mirror could have crippled the program, had nasa not used computer enhancement to compensate for the imperfection and launched five space shuttle servicing flights to replace the damaged components. the compton gamma ray observatory was launched from the atlantis on sts - 37 in 1991, discovering a possible source of antimatter at the center of the milky way and observing that the majority of gamma - ray bursts
three planets with minimum masses less than 10 earth masses orbit the star hd 40307, suggesting these planets may be rocky. however, with only radial velocity data, it is impossible to determine if these planets are rocky or gaseous. here we exploit various dynamical features of the system in order to assess the physical properties of the planets. observations allow for circular orbits, but a numerical integration shows that the eccentricities must be at least 0. 0001. also, planets b and c are so close to the star that tidal effects are significant. if planet b has tidal parameters similar to the terrestrial planets in the solar system and a remnant eccentricity larger than 0. 001, then, going back in time, the system would have been unstable within the lifetime of the star ( which we estimate to be 6. 1 + / - 1. 6 gyr ). moreover, if the eccentricities are that large and the inner planet is rocky, then its tidal heating may be an order of magnitude greater than extremely volcanic io, on a per unit surface area basis. if planet b is not terrestrial, e. g. neptune - like, these physical constraints would not apply. this analysis suggests the planets are not terrestrial - like, and are more like our giant planets. in either case, we find that the planets probably formed at larger radii and migrated early - on ( via disk interactions ) into their current orbits. this study demonstrates how the orbital and dynamical properties of exoplanet systems may be used to constrain the planets ' physical properties.
a 4mj planet with a 15. 8day orbital period has been detected from very precise radial velocity measurements with the coralie echelle spectrograph. a second remote and more massive companion has also been detected. all the planetary companions so far detected in orbit closer than 0. 08 au have a parent star with a statistically higher metal content compared to the metallicity distribution of other stars with planets. different processes occuring during their formation may provide a possible explanation for this observation.
light and cold extrasolar planets such as ogle 2005 - blg - 390lb, a 5. 5 earth - mass planet detected via microlensing, could be frequent in the galaxy according to some preliminary results from microlensing experiments. these planets can be frozen rocky - or ocean - planets, situated beyond the snow line and, therefore, beyond the habitable zone of their system. they can nonetheless host a layer of liquid water, heated by radiogenic energy, underneath an ice shell surface for billions of years, before freezing completely. these results suggest that oceans under ice, like those suspected to be present on icy moons in the solar system, could be a common feature of cold low - mass extrasolar planets.
Question: Planets remain in orbit around the Sun because of
A) gravity.
B) friction.
C) solar energy.
D) centrifugal force.
|
A) gravity.
|
Context:
naturally take up foreign dna. this ability can be induced in other bacteria via stress ( e. g. thermal or electric shock ), which increases the cell membrane ' s permeability to dna ; up - taken dna can either integrate with the genome or exist as extrachromosomal dna. dna is generally inserted into animal cells using microinjection, where it can be injected through the cell ' s nuclear envelope directly into the nucleus, or through the use of viral vectors. plant genomes can be engineered by physical methods or by use of agrobacterium for the delivery of sequences hosted in t - dna binary vectors. in plants the dna is often inserted using agrobacterium - mediated transformation, taking advantage of the agrobacteriums t - dna sequence that allows natural insertion of genetic material into plant cells. other methods include biolistics, where particles of gold or tungsten are coated with dna and then shot into young plant cells, and electroporation, which involves using an electric shock to make the cell membrane permeable to plasmid dna. as only a single cell is transformed with genetic material, the organism must be regenerated from that single cell. in plants this is accomplished through the use of tissue culture. in animals it is necessary to ensure that the inserted dna is present in the embryonic stem cells. bacteria consist of a single cell and reproduce clonally so regeneration is not necessary. selectable markers are used to easily differentiate transformed from untransformed cells. these markers are usually present in the transgenic organism, although a number of strategies have been developed that can remove the selectable marker from the mature transgenic plant. further testing using pcr, southern hybridization, and dna sequencing is conducted to confirm that an organism contains the new gene. these tests can also confirm the chromosomal location and copy number of the inserted gene. the presence of the gene does not guarantee it will be expressed at appropriate levels in the target tissue so methods that look for and measure the gene products ( rna and protein ) are also used. these include northern hybridisation, quantitative rt - pcr, western blot, immunofluorescence, elisa and phenotypic analysis. the new genetic material can be inserted randomly within the host genome or targeted to a specific location. the technique of gene targeting uses homologous recombination to make desired changes to a specific endogenous gene. this tends to occur at a relatively low frequency in plants and animals and generally
background : african swine fever is among the most devastating viral diseases of pigs. despite nearly a century of research, there is still no safe and effective vaccine available. the current situation is that either vaccines are safe but not effective, or they are effective but not safe. findings : the asf vaccine prepared using the inactivation method with propiolactone provided 98. 6 % protection within 100 days after three intranasal immunizations, spaced 7 days apart. conclusions : an inactivated vaccine made from complete african swine fever virus particles using propiolactone is safe and effective for controlling asf through mucosal immunity.
are single - celled organisms such as bacteria, whereas eukaryotes can be single - celled or multicellular. in multicellular organisms, every cell in the organism ' s body is derived ultimately from a single cell in a fertilized egg. = = = cell structure = = = every cell is enclosed within a cell membrane that separates its cytoplasm from the extracellular space. a cell membrane consists of a lipid bilayer, including cholesterols that sit between phospholipids to maintain their fluidity at various temperatures. cell membranes are semipermeable, allowing small molecules such as oxygen, carbon dioxide, and water to pass through while restricting the movement of larger molecules and charged particles such as ions. cell membranes also contain membrane proteins, including integral membrane proteins that go across the membrane serving as membrane transporters, and peripheral proteins that loosely attach to the outer side of the cell membrane, acting as enzymes shaping the cell. cell membranes are involved in various cellular processes such as cell adhesion, storing electrical energy, and cell signalling and serve as the attachment surface for several extracellular structures such as a cell wall, glycocalyx, and cytoskeleton. within the cytoplasm of a cell, there are many biomolecules such as proteins and nucleic acids. in addition to biomolecules, eukaryotic cells have specialized structures called organelles that have their own lipid bilayers or are spatially units. these organelles include the cell nucleus, which contains most of the cell ' s dna, or mitochondria, which generate adenosine triphosphate ( atp ) to power cellular processes. other organelles such as endoplasmic reticulum and golgi apparatus play a role in the synthesis and packaging of proteins, respectively. biomolecules such as proteins can be engulfed by lysosomes, another specialized organelle. plant cells have additional organelles that distinguish them from animal cells such as a cell wall that provides support for the plant cell, chloroplasts that harvest sunlight energy to produce sugar, and vacuoles that provide storage and structural support as well as being involved in reproduction and breakdown of plant seeds. eukaryotic cells also have cytoskeleton that is made up of microtubules, intermediate filaments, and microfilaments, all of which provide support for the cell and are involved in the movement of the cell and its
animal cells using microinjection, where it can be injected through the cell ' s nuclear envelope directly into the nucleus, or through the use of viral vectors. plant genomes can be engineered by physical methods or by use of agrobacterium for the delivery of sequences hosted in t - dna binary vectors. in plants the dna is often inserted using agrobacterium - mediated transformation, taking advantage of the agrobacteriums t - dna sequence that allows natural insertion of genetic material into plant cells. other methods include biolistics, where particles of gold or tungsten are coated with dna and then shot into young plant cells, and electroporation, which involves using an electric shock to make the cell membrane permeable to plasmid dna. as only a single cell is transformed with genetic material, the organism must be regenerated from that single cell. in plants this is accomplished through the use of tissue culture. in animals it is necessary to ensure that the inserted dna is present in the embryonic stem cells. bacteria consist of a single cell and reproduce clonally so regeneration is not necessary. selectable markers are used to easily differentiate transformed from untransformed cells. these markers are usually present in the transgenic organism, although a number of strategies have been developed that can remove the selectable marker from the mature transgenic plant. further testing using pcr, southern hybridization, and dna sequencing is conducted to confirm that an organism contains the new gene. these tests can also confirm the chromosomal location and copy number of the inserted gene. the presence of the gene does not guarantee it will be expressed at appropriate levels in the target tissue so methods that look for and measure the gene products ( rna and protein ) are also used. these include northern hybridisation, quantitative rt - pcr, western blot, immunofluorescence, elisa and phenotypic analysis. the new genetic material can be inserted randomly within the host genome or targeted to a specific location. the technique of gene targeting uses homologous recombination to make desired changes to a specific endogenous gene. this tends to occur at a relatively low frequency in plants and animals and generally requires the use of selectable markers. the frequency of gene targeting can be greatly enhanced through genome editing. genome editing uses artificially engineered nucleases that create specific double - stranded breaks at desired locations in the genome, and use the cell ' s endogenous mechanisms to repair the induced break by the natural processes
in plants the dna is often inserted using agrobacterium - mediated transformation, taking advantage of the agrobacteriums t - dna sequence that allows natural insertion of genetic material into plant cells. other methods include biolistics, where particles of gold or tungsten are coated with dna and then shot into young plant cells, and electroporation, which involves using an electric shock to make the cell membrane permeable to plasmid dna. as only a single cell is transformed with genetic material, the organism must be regenerated from that single cell. in plants this is accomplished through the use of tissue culture. in animals it is necessary to ensure that the inserted dna is present in the embryonic stem cells. bacteria consist of a single cell and reproduce clonally so regeneration is not necessary. selectable markers are used to easily differentiate transformed from untransformed cells. these markers are usually present in the transgenic organism, although a number of strategies have been developed that can remove the selectable marker from the mature transgenic plant. further testing using pcr, southern hybridization, and dna sequencing is conducted to confirm that an organism contains the new gene. these tests can also confirm the chromosomal location and copy number of the inserted gene. the presence of the gene does not guarantee it will be expressed at appropriate levels in the target tissue so methods that look for and measure the gene products ( rna and protein ) are also used. these include northern hybridisation, quantitative rt - pcr, western blot, immunofluorescence, elisa and phenotypic analysis. the new genetic material can be inserted randomly within the host genome or targeted to a specific location. the technique of gene targeting uses homologous recombination to make desired changes to a specific endogenous gene. this tends to occur at a relatively low frequency in plants and animals and generally requires the use of selectable markers. the frequency of gene targeting can be greatly enhanced through genome editing. genome editing uses artificially engineered nucleases that create specific double - stranded breaks at desired locations in the genome, and use the cell ' s endogenous mechanisms to repair the induced break by the natural processes of homologous recombination and nonhomologous end - joining. there are four families of engineered nucleases : meganucleases, zinc finger nucleases, transcription activator - like effector nucleases ( talens ), and the cas9 - guide
cells into the decellularized rat heart. tissue - engineered blood vessels : blood vessels that have been grown in a lab and can be used to repair damaged blood vessels without eliciting an immune response. tissue engineered blood vessels have been developed by many different approaches. they could be implanted as pre - seeded cellularized blood vessels, as acellular vascular grafts made with decellularized vessels or synthetic vascular grafts. artificial skin constructed from human skin cells embedded in a hydrogel, such as in the case of bio - printed constructs for battlefield burn repairs. artificial bone marrow : bone marrow cultured in vitro to be transplanted serves as a " just cells " approach to tissue engineering. tissue engineered bone : a structural matrix can be composed of metals such as titanium, polymers of varying degradation rates, or certain types of ceramics. materials are often chosen to recruit osteoblasts to aid in reforming the bone and returning biological function. various types of cells can be added directly into the matrix to expedite the process. laboratory - grown penis : decellularized scaffolds of rabbit penises were recellularised with smooth muscle and endothelial cells. the organ was then transplanted to live rabbits and functioned comparably to the native organ, suggesting potential as treatment for genital trauma. oral mucosa tissue engineering uses a cells and scaffold approach to replicate the 3 dimensional structure and function of oral mucosa. = = cells as building blocks = = cells are one of the main components for the success of tissue engineering approaches. tissue engineering uses cells as strategies for creation / replacement of new tissue. examples include fibroblasts used for skin repair or renewal, chondrocytes used for cartilage repair ( maci β fda approved product ), and hepatocytes used in liver support systems cells can be used alone or with support matrices for tissue engineering applications. an adequate environment for promoting cell growth, differentiation, and integration with the existing tissue is a critical factor for cell - based building blocks. manipulation of any of these cell processes create alternative avenues for the development of new tissue ( e. g., cell reprogramming - somatic cells, vascularization ). = = = isolation = = = techniques for cell isolation depend on the cell source. centrifugation and apheresis are techniques used for extracting cells from biofluids ( e. g., blood ). whereas digestion processes, typically using enzymes to remove the extra
for natural scientists, with the creation of transgenic organisms one of the most important tools for analysis of gene function. genes and other genetic information from a wide range of organisms can be inserted into bacteria for storage and modification, creating genetically modified bacteria in the process. bacteria are cheap, easy to grow, clonal, multiply quickly, relatively easy to transform and can be stored at - 80 Β°c almost indefinitely. once a gene is isolated it can be stored inside the bacteria providing an unlimited supply for research. organisms are genetically engineered to discover the functions of certain genes. this could be the effect on the phenotype of the organism, where the gene is expressed or what other genes it interacts with. these experiments generally involve loss of function, gain of function, tracking and expression. loss of function experiments, such as in a gene knockout experiment, in which an organism is engineered to lack the activity of one or more genes. in a simple knockout a copy of the desired gene has been altered to make it non - functional. embryonic stem cells incorporate the altered gene, which replaces the already present functional copy. these stem cells are injected into blastocysts, which are implanted into surrogate mothers. this allows the experimenter to analyse the defects caused by this mutation and thereby determine the role of particular genes. it is used especially frequently in developmental biology. when this is done by creating a library of genes with point mutations at every position in the area of interest, or even every position in the whole gene, this is called " scanning mutagenesis ". the simplest method, and the first to be used, is " alanine scanning ", where every position in turn is mutated to the unreactive amino acid alanine. gain of function experiments, the logical counterpart of knockouts. these are sometimes performed in conjunction with knockout experiments to more finely establish the function of the desired gene. the process is much the same as that in knockout engineering, except that the construct is designed to increase the function of the gene, usually by providing extra copies of the gene or inducing synthesis of the protein more frequently. gain of function is used to tell whether or not a protein is sufficient for a function, but does not always mean it is required, especially when dealing with genetic or functional redundancy. tracking experiments, which seek to gain information about the localisation and interaction of the desired protein. one way to do this is to replace the wild - type gene with a ' fusion ' gene, which is a juxtaposition
monoclonal antibodies, antihemophilic factors, vaccines and many other drugs. mouse hybridomas, cells fused together to create monoclonal antibodies, have been adapted through genetic engineering to create human monoclonal antibodies. genetically engineered viruses are being developed that can still confer immunity, but lack the infectious sequences. genetic engineering is also used to create animal models of human diseases. genetically modified mice are the most common genetically engineered animal model. they have been used to study and model cancer ( the oncomouse ), obesity, heart disease, diabetes, arthritis, substance abuse, anxiety, aging and parkinson disease. potential cures can be tested against these mouse models. gene therapy is the genetic engineering of humans, generally by replacing defective genes with effective ones. clinical research using somatic gene therapy has been conducted with several diseases, including x - linked scid, chronic lymphocytic leukemia ( cll ), and parkinson ' s disease. in 2012, alipogene tiparvovec became the first gene therapy treatment to be approved for clinical use. in 2015 a virus was used to insert a healthy gene into the skin cells of a boy suffering from a rare skin disease, epidermolysis bullosa, in order to grow, and then graft healthy skin onto 80 percent of the boy ' s body which was affected by the illness. germline gene therapy would result in any change being inheritable, which has raised concerns within the scientific community. in 2015, crispr was used to edit the dna of non - viable human embryos, leading scientists of major world academies to call for a moratorium on inheritable human genome edits. there are also concerns that the technology could be used not just for treatment, but for enhancement, modification or alteration of a human beings ' appearance, adaptability, intelligence, character or behavior. the distinction between cure and enhancement can also be difficult to establish. in november 2018, he jiankui announced that he had edited the genomes of two human embryos, to attempt to disable the ccr5 gene, which codes for a receptor that hiv uses to enter cells. the work was widely condemned as unethical, dangerous, and premature. currently, germline modification is banned in 40 countries. scientists that do this type of research will often let embryos grow for a few days without allowing it to develop into a baby. researchers are altering the genome of pigs to induce the growth of human organs, with the aim of increasing the success of
. most cells are very small, with diameters ranging from 1 to 100 micrometers and are therefore only visible under a light or electron microscope. there are generally two types of cells : eukaryotic cells, which contain a nucleus, and prokaryotic cells, which do not. prokaryotes are single - celled organisms such as bacteria, whereas eukaryotes can be single - celled or multicellular. in multicellular organisms, every cell in the organism ' s body is derived ultimately from a single cell in a fertilized egg. = = = cell structure = = = every cell is enclosed within a cell membrane that separates its cytoplasm from the extracellular space. a cell membrane consists of a lipid bilayer, including cholesterols that sit between phospholipids to maintain their fluidity at various temperatures. cell membranes are semipermeable, allowing small molecules such as oxygen, carbon dioxide, and water to pass through while restricting the movement of larger molecules and charged particles such as ions. cell membranes also contain membrane proteins, including integral membrane proteins that go across the membrane serving as membrane transporters, and peripheral proteins that loosely attach to the outer side of the cell membrane, acting as enzymes shaping the cell. cell membranes are involved in various cellular processes such as cell adhesion, storing electrical energy, and cell signalling and serve as the attachment surface for several extracellular structures such as a cell wall, glycocalyx, and cytoskeleton. within the cytoplasm of a cell, there are many biomolecules such as proteins and nucleic acids. in addition to biomolecules, eukaryotic cells have specialized structures called organelles that have their own lipid bilayers or are spatially units. these organelles include the cell nucleus, which contains most of the cell ' s dna, or mitochondria, which generate adenosine triphosphate ( atp ) to power cellular processes. other organelles such as endoplasmic reticulum and golgi apparatus play a role in the synthesis and packaging of proteins, respectively. biomolecules such as proteins can be engulfed by lysosomes, another specialized organelle. plant cells have additional organelles that distinguish them from animal cells such as a cell wall that provides support for the plant cell, chloroplasts that harvest sunlight energy to produce sugar, and vacuoles that provide storage and structural support
include the manufacturing of drugs, creation of model animals that mimic human conditions and gene therapy. one of the earliest uses of genetic engineering was to mass - produce human insulin in bacteria. this application has now been applied to human growth hormones, follicle stimulating hormones ( for treating infertility ), human albumin, monoclonal antibodies, antihemophilic factors, vaccines and many other drugs. mouse hybridomas, cells fused together to create monoclonal antibodies, have been adapted through genetic engineering to create human monoclonal antibodies. genetically engineered viruses are being developed that can still confer immunity, but lack the infectious sequences. genetic engineering is also used to create animal models of human diseases. genetically modified mice are the most common genetically engineered animal model. they have been used to study and model cancer ( the oncomouse ), obesity, heart disease, diabetes, arthritis, substance abuse, anxiety, aging and parkinson disease. potential cures can be tested against these mouse models. gene therapy is the genetic engineering of humans, generally by replacing defective genes with effective ones. clinical research using somatic gene therapy has been conducted with several diseases, including x - linked scid, chronic lymphocytic leukemia ( cll ), and parkinson ' s disease. in 2012, alipogene tiparvovec became the first gene therapy treatment to be approved for clinical use. in 2015 a virus was used to insert a healthy gene into the skin cells of a boy suffering from a rare skin disease, epidermolysis bullosa, in order to grow, and then graft healthy skin onto 80 percent of the boy ' s body which was affected by the illness. germline gene therapy would result in any change being inheritable, which has raised concerns within the scientific community. in 2015, crispr was used to edit the dna of non - viable human embryos, leading scientists of major world academies to call for a moratorium on inheritable human genome edits. there are also concerns that the technology could be used not just for treatment, but for enhancement, modification or alteration of a human beings ' appearance, adaptability, intelligence, character or behavior. the distinction between cure and enhancement can also be difficult to establish. in november 2018, he jiankui announced that he had edited the genomes of two human embryos, to attempt to disable the ccr5 gene, which codes for a receptor that hiv uses to enter cells. the work was widely condemned as unethical, dangerous,
Question: Which cells help to destroy pathogens such as bacteria that enter the human body?
A) red blood cells
B) liver cells
C) white blood cells
D) brain cells
|
C) white blood cells
|
Context:
the group velocity of light has been measured at eight different wavelengths between 385 nm and 532 nm in the mediterranean sea at a depth of about 2. 2 km with the antares optical beacon systems. a parametrisation of the dependence of the refractive index on wavelength based on the salinity, pressure and temperature of the sea water at the antares site is in good agreement with these measurements.
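A measurement like this compares pulse arrival times with the group refractive index n_g(lambda) = n(lambda) - lambda * dn/dlambda, so that v_g = c / n_g. The snippet below shows that bookkeeping with a simple Cauchy-type dispersion law whose coefficients are made-up placeholders; it is not the salinity, pressure and temperature parametrisation referred to for the ANTARES site.

```python
# Group velocity from a phase refractive index n(lambda):
#   n_g = n - lambda * dn/dlambda,   v_g = c / n_g.
# The Cauchy coefficients A and B are illustrative placeholders, not the
# sea-water parametrisation mentioned in the passage.

C_VAC = 2.998e8  # m/s

A, B = 1.340, 4.0e-15  # dimensionless, m^2 (assumed values)

def n_phase(lam_m: float) -> float:
    """Toy Cauchy dispersion law n(lambda) = A + B / lambda^2."""
    return A + B / lam_m ** 2

def n_group(lam_m: float) -> float:
    """Group index n_g = n - lambda * dn/dlambda, which equals A + 3B / lambda^2 here."""
    return A + 3.0 * B / lam_m ** 2

if __name__ == "__main__":
    for lam_nm in (385.0, 460.0, 532.0):  # the passage's wavelength range; 460 nm is an assumed midpoint
        lam = lam_nm * 1e-9
        vg = C_VAC / n_group(lam)
        print(f"lambda = {lam_nm:5.1f} nm: n = {n_phase(lam):.4f}, "
              f"n_g = {n_group(lam):.4f}, v_g = {vg/1e8:.4f} x 10^8 m/s")
```

The point of the bookkeeping is that the measured quantity is the group velocity, so it is n_g rather than the phase index n that the wavelength-dependent parametrisation has to reproduce.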
the status of the theory of color confinement is discussed.
pigment chlorophyll a. chlorophyll a ( as well as its plant and green algal - specific cousin chlorophyll b ) absorbs light in the blue - violet and orange / red parts of the spectrum while reflecting and transmitting the green light that we see as the characteristic colour of these organisms. the energy in the red and blue light that these pigments absorb is used by chloroplasts to make energy - rich carbon compounds from carbon dioxide and water by oxygenic photosynthesis, a process that generates molecular oxygen ( o2 ) as a by - product. the light energy captured by chlorophyll a is initially in the form of electrons ( and later a proton gradient ) that is used to make molecules of atp and nadph which temporarily store and transport energy. their energy is used in the light - independent reactions of the calvin cycle by the enzyme rubisco to produce molecules of the 3 - carbon sugar glyceraldehyde 3 - phosphate ( g3p ). glyceraldehyde 3 - phosphate is the first product of photosynthesis and the raw material from which glucose and almost all other organic molecules of biological origin are synthesised. some of the glucose is converted to starch which is stored in the chloroplast. starch is the characteristic energy store of most land plants and algae, while inulin, a polymer of fructose is used for the same purpose in the sunflower family asteraceae. some of the glucose is converted to sucrose ( common table sugar ) for export to the rest of the plant. unlike in animals ( which lack chloroplasts ), plants and their eukaryote relatives have delegated many biochemical roles to their chloroplasts, including synthesising all their fatty acids, and most amino acids. the fatty acids that chloroplasts make are used for many things, such as providing material to build cell membranes out of and making the polymer cutin which is found in the plant cuticle that protects land plants from drying out. plants synthesise a number of unique polymers like the polysaccharide molecules cellulose, pectin and xyloglucan from which the land plant cell wall is constructed. vascular land plants make lignin, a polymer used to strengthen the secondary cell walls of xylem tracheids and vessels to keep them from collapsing when a plant sucks water through them under water stress. lignin
in old age, some people who suffer from specific diseases need special care, since they can have a stroke during their normal daily routine. patients of any age who are not able to walk also need personal care, but for this they either have to stay in hospital or someone such as a nurse has to be with them. this is costly in terms of money and manpower, as a person is needed for 24x7 care of these people. to help in this respect we propose a vision - based system which takes input from the patient and provides information to a specified person who may not currently be in the patient room. this reduces the need for manpower, and continuous monitoring is no longer required. the system uses ms kinect for gesture detection for better accuracy and can easily be installed at home or in a hospital. the system provides a gui for simple usage and gives visual and audio feedback to the user. it works on natural hand interaction, needs no training before use, and requires no glove or color strip to be worn.
used by pharmaceutical companies as a way of drug discovery. plants can synthesise coloured dyes and pigments such as the anthocyanins responsible for the red colour of red wine, yellow weld and blue woad used together to produce lincoln green, indoxyl, source of the blue dye indigo traditionally used to dye denim and the artist ' s pigments gamboge and rose madder. sugar, starch, cotton, linen, hemp, some types of rope, wood and particle boards, papyrus and paper, vegetable oils, wax, and natural rubber are examples of commercially important materials made from plant tissues or their secondary products. charcoal, a pure form of carbon made by pyrolysis of wood, has a long history as a metal - smelting fuel, as a filter material and adsorbent and as an artist ' s material and is one of the three ingredients of gunpowder. cellulose, the world ' s most abundant organic polymer, can be converted into energy, fuels, materials and chemical feedstock. products made from cellulose include rayon and cellophane, wallpaper paste, biobutanol and gun cotton. sugarcane, rapeseed and soy are some of the plants with a highly fermentable sugar or oil content that are used as sources of biofuels, important alternatives to fossil fuels, such as biodiesel. sweetgrass was used by native americans to ward off bugs like mosquitoes. these bug repelling properties of sweetgrass were later found by the american chemical society in the molecules phytol and coumarin. = = plant ecology = = plant ecology is the science of the functional relationships between plants and their habitats β the environments where they complete their life cycles. plant ecologists study the composition of local and regional floras, their biodiversity, genetic diversity and fitness, the adaptation of plants to their environment, and their competitive or mutualistic interactions with other species. some ecologists even rely on empirical data from indigenous people that is gathered by ethnobotanists. this information can relay a great deal of information on how the land once was thousands of years ago and how it has changed over that time. the goals of plant ecology are to understand the causes of their distribution patterns, productivity, environmental impact, evolution, and responses to environmental change. plants depend on certain edaphic ( soil ) and climatic factors in their environment but can modify these factors too. for example, they can change their environment ' s albedo, increase runoff interception
##d product that is the focus of a tooling drawing. lines can also be classified by a letter classification in which each line is given a letter. type a lines show the outline of the feature of an object. they are the thickest lines on a drawing and done with a pencil softer than hb. type b lines are dimension lines and are used for dimensioning, projecting, extending, or leaders. a harder pencil should be used, such as a 2h pencil. type c lines are used for breaks when the whole object is not shown. these are freehand drawn and only for short breaks. 2h pencil type d lines are similar to type c, except these are zigzagged and only for longer breaks. 2h pencil type e lines indicate hidden outlines of internal features of an object. these are dotted lines. 2h pencil type f lines are type e lines, except these are used for drawings in electrotechnology. 2h pencil type g lines are used for centre lines. these are dotted lines, but a long line of 10 β 20 mm, then a 1 mm gap, then a small line of 2 mm. 2h pencil type h lines are the same as type g, except that every second long line is thicker. these indicate the cutting plane of an object. 2h pencil type k lines indicate the alternate positions of an object and the line taken by that object. these are drawn with a long line of 10 β 20 mm, then a small gap, then a small line of 2 mm, then a gap, then another small line. 2h pencil. = = = multiple views and projections = = = in most cases, a single view is not sufficient to show all necessary features, and several views are used. types of views include the following : = = = = multiview projection = = = = a multiview projection is a type of orthographic projection that shows the object as it looks from the front, right, left, top, bottom, or back ( e. g. the primary views ), and is typically positioned relative to each other according to the rules of either first - angle or third - angle projection. the origin and vector direction of the projectors ( also called projection lines ) differs, as explained below. in first - angle projection, the parallel projectors originate as if radiated from behind the viewer and pass through the 3d object to project a 2d image onto the orthogonal plane behind it. the 3d object is projected into 2d " paper " space as if you were looking at
reflectometer ), which takes measurements in the visible region ( and a little beyond ) of a given color sample. if the custom of taking readings at 10 nanometer increments is followed, the visible light range of 400 β 700 nm will yield 31 readings. these readings are typically used to draw the sample ' s spectral reflectance curve ( how much it reflects, as a function of wavelength ) β the most accurate data that can be provided regarding its characteristics. the readings by themselves are typically not as useful as their tristimulus values, which can be converted into chromaticity co - ordinates and manipulated through color space transformations. for this purpose, a spectrocolorimeter may be used. a spectrocolorimeter is simply a spectrophotometer that can estimate tristimulus values by numerical integration ( of the color matching functions ' inner product with the illuminant ' s spectral power distribution ). one benefit of spectrocolorimeters over tristimulus colorimeters is that they do not have optical filters, which are subject to manufacturing variance, and have a fixed spectral transmittance curve β until they age. on the other hand, tristimulus colorimeters are purpose - built, cheaper, and easier to use. the cie ( international commission on illumination ) recommends using measurement intervals under 5 nm, even for smooth spectra. sparser measurements fail to accurately characterize spiky emission spectra, such as that of the red phosphor of a crt display, depicted aside. = = = color temperature meter = = = photographers and cinematographers use information provided by these meters to decide what color balancing should be done to make different light sources appear to have the same color temperature. if the user enters the reference color temperature, the meter can calculate the mired difference between the measurement and the reference, enabling the user to choose a corrective color gel or photographic filter with the closest mired factor. internally the meter is typically a silicon photodiode tristimulus colorimeter. the correlated color temperature can be calculated from the tristimulus values by first calculating the chromaticity co - ordinates in the cie 1960 color space, then finding the closest point on the planckian locus. = = see also = = color science photometry radiometry = = references = = = = further reading = = schanda, janos d. ( 1997 ). " colorimetry " ( pdf ). in casimer decusatis ( ed. ). handbook
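As a rough illustration of the numerical integration mentioned above, the sketch below estimates tristimulus values from 31 reflectance readings taken at 10 nm increments over 400-700 nm. The colour-matching functions and the illuminant spectral power distribution are assumed inputs (placeholders for the CIE 1931 tables and a standard illuminant such as D65), so this is a schematic of the calculation rather than reference colorimetry code.

```python
# Schematic of the integration described above: tristimulus values from a sample's
# spectral reflectance sampled at 10 nm steps over 400-700 nm (31 readings).
# The colour-matching functions (xbar, ybar, zbar) and the illuminant spectral power
# distribution are assumed to be supplied at the same 31 wavelengths; real code would
# use the CIE 1931 tables and a standard illuminant such as D65.

def tristimulus(reflectance, illuminant, xbar, ybar, zbar, step_nm=10.0):
    """Approximate X, Y, Z by a Riemann sum over the sampled spectra."""
    X = sum(r * s * x for r, s, x in zip(reflectance, illuminant, xbar)) * step_nm
    Y = sum(r * s * y for r, s, y in zip(reflectance, illuminant, ybar)) * step_nm
    Z = sum(r * s * z for r, s, z in zip(reflectance, illuminant, zbar)) * step_nm
    # Normalise so a perfect diffuse reflector (reflectance 1.0 everywhere) gives Y = 100.
    k = 100.0 / (sum(s * y for s, y in zip(illuminant, ybar)) * step_nm)
    return k * X, k * Y, k * Z

def chromaticity_xy(X, Y, Z):
    """CIE xy chromaticity coordinates from tristimulus values."""
    total = X + Y + Z
    return X / total, Y / total
```

A spectrocolorimeter effectively performs the first calculation in firmware; converting the result to chromaticity coordinates is then a simple normalisation, as in the second function.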
, characterizing organs as predominantly yin or yang, and understood the relationship between the pulse, the heart, and the flow of blood in the body centuries before it became accepted in the west. little evidence survives of how ancient indian cultures around the indus river understood nature, but some of their perspectives may be reflected in the vedas, a set of sacred hindu texts. they reveal a conception of the universe as ever - expanding and constantly being recycled and reformed. surgeons in the ayurvedic tradition saw health and illness as a combination of three humors : wind, bile and phlegm. a healthy life resulted from a balance among these humors. in ayurvedic thought, the body consisted of five elements : earth, water, fire, wind, and space. ayurvedic surgeons performed complex surgeries and developed a detailed understanding of human anatomy. pre - socratic philosophers in ancient greek culture brought natural philosophy a step closer to direct inquiry about cause and effect in nature between 600 and 400 bc. however, an element of magic and mythology remained. natural phenomena such as earthquakes and eclipses were explained increasingly in the context of nature itself instead of being attributed to angry gods. thales of miletus, an early philosopher who lived from 625 to 546 bc, explained earthquakes by theorizing that the world floated on water and that water was the fundamental element in nature. in the 5th century bc, leucippus was an early exponent of atomism, the idea that the world is made up of fundamental indivisible particles. pythagoras applied greek innovations in mathematics to astronomy and suggested that the earth was spherical. = = = aristotelian natural philosophy ( 400 bc β 1100 ad ) = = = later socratic and platonic thought focused on ethics, morals, and art and did not attempt an investigation of the physical world ; plato criticized pre - socratic thinkers as materialists and anti - religionists. aristotle, however, a student of plato who lived from 384 to 322 bc, paid closer attention to the natural world in his philosophy. in his history of animals, he described the inner workings of 110 species, including the stingray, catfish and bee. he investigated chick embryos by breaking open eggs and observing them at various stages of development. aristotle ' s works were influential through the 16th century, and he is considered to be the father of biology for his pioneering work in that science. he also presented philosophies about physics, nature, and astronomy using
ammonium hydrosulphide has long since been postulated to exist at least in certain layers of the giant planets. its radiation products may be the reason for the red colour seen on jupiter. several ammonium salts, the products of nh3 and an acid, have previously been detected at comet 67p / churyumov - gerasimenko. the acid h2s is the fifth most abundant molecule in the coma of 67p followed by nh3. in order to look for the salt nh4 + sh -, we analysed in situ measurements from the rosetta / rosina double focusing mass spectrometer during the rosetta mission. nh3 and h2s appear to be independent of each other when sublimating directly from the nucleus. however, we observe a strong correlation between the two species during dust impacts, clearly pointing to the salt. we find that nh4 + sh - is by far the most abundant salt, more abundant in the dust impacts than even water. we also find all previously detected ammonium salts and for the first time ammonium fluoride. the amount of ammonia and acids balance each other, confirming that ammonia is mostly in the form of salt embedded into dust grains. allotropes s2 and s3 are strongly enhanced in the impacts, while h2s2 and its fragment hs2 are not detected, which is most probably the result of radiolysis of nh4 + sh -. this makes a prestellar origin of the salt likely. our findings may explain the apparent depletion of nitrogen in comets and maybe help to solve the riddle of the missing sulphur in star forming regions.
it is hard for us humans to recognize things in nature until we have invented them ourselves. for image - forming optics, nature has made virtually every kind of lens humans have devised. but what about lensless " imaging "? recently, we showed that a bare array of sensors on a curved substrate could achieve resolution not limited by diffraction - without any lens at all provided that the objects imaged conform to our a priori assumptions. is it possible that somewhere in nature we will find this kind of vision system? we think so and provide examples that seem to make no sense whatever unless they are using something like our lensless imaging work.
Question: Eye color in human beings is an
A) instinct.
B) acquired trait.
C) inherited trait.
D) environmentally influenced trait.
|
C) inherited trait.
|
Context:
astronomically, there are viable mechanisms for distributing organic material throughout the milky way. biologically, the destructive effects of ultraviolet light and cosmic rays mean that the majority of organisms arrive broken and dead on a new world. the likelihood of conventional forms of panspermia must therefore be considered low. however, the information content of damaged biological molecules might serve to seed new life ( necropanspermia ).
as medical hardware, plastics, tubes for gas - pipelines, hoses for floor - heating, shrink - foils for food packaging, automobile parts, wires and cables ( isolation ), tires, and even gemstones. compared to the amount of food irradiated, the volume of those every - day applications is huge but not noticed by the consumer. the genuine effect of processing food by ionizing radiation relates to damages to the dna, the basic genetic information for life. microorganisms can no longer proliferate and continue their malignant or pathogenic activities. spoilage causing micro - organisms cannot continue their activities. insects do not survive or become incapable of procreation. plants cannot continue the natural ripening or aging process. all these effects are beneficial to the consumer and the food industry, likewise. the amount of energy imparted for effective food irradiation is low compared to cooking the same ; even at a typical dose of 10 kgy most food, which is ( with regard to warming ) physically equivalent to water, would warm by only about 2. 5 Β°c ( 4. 5 Β°f ). the specialty of processing food by ionizing radiation is the fact, that the energy density per atomic transition is very high, it can cleave molecules and induce ionization ( hence the name ) which cannot be achieved by mere heating. this is the reason for new beneficial effects, however at the same time, for new concerns. the treatment of solid food by ionizing radiation can provide an effect similar to heat pasteurization of liquids, such as milk. however, the use of the term, cold pasteurization, to describe irradiated foods is controversial, because pasteurization and irradiation are fundamentally different processes, although the intended end results can in some cases be similar. detractors of food irradiation have concerns about the health hazards of induced radioactivity. a report for the industry advocacy group american council on science and health entitled " irradiated foods " states : " the types of radiation sources approved for the treatment of foods have specific energy levels well below that which would cause any element in food to become radioactive. food undergoing irradiation does not become any more radioactive than luggage passing through an airport x - ray scanner or teeth that have been x - rayed. " food irradiation is currently permitted by over 40 countries and volumes are estimated to exceed 500, 000 metric tons ( 490, 000 long tons ; 550, 000 short tons ) annually worldwide. food irradiation
the time - dependent distribution of the global extinction of megafauna is compared with the growth of the human population. there is no correlation between the two processes. furthermore, the size of the human population and its growth rate were far too small to have any significant impact on the environment and on the life of megafauna.
references to recent papers and a discussion of experimental feasibility are added. the paper will not be published in a hard - copy journal.
process of lactic acid fermentation, which produced other preserved foods, such as soy sauce. fermentation was also used in this time period to produce leavened bread. although the process of fermentation was not fully understood until louis pasteur ' s work in 1857, it is still the first use of biotechnology to convert a food source into another form. before the time of charles darwin ' s work and life, animal and plant scientists had already used selective breeding. darwin added to that body of work with his scientific observations about the ability of science to change species. these accounts contributed to darwin ' s theory of natural selection. for thousands of years, humans have used selective breeding to improve the production of crops and livestock to use them for food. in selective breeding, organisms with desirable characteristics are mated to produce offspring with the same characteristics. for example, this technique was used with corn to produce the largest and sweetest crops. in the early twentieth century scientists gained a greater understanding of microbiology and explored ways of manufacturing specific products. in 1917, chaim weizmann first used a pure microbiological culture in an industrial process, that of manufacturing corn starch using clostridium acetobutylicum, to produce acetone, which the united kingdom desperately needed to manufacture explosives during world war i. biotechnology has also led to the development of antibiotics. in 1928, alexander fleming discovered the mold penicillium. his work led to the purification of the antibiotic formed by the mold by howard florey, ernst boris chain and norman heatley β to form what we today know as penicillin. in 1940, penicillin became available for medicinal use to treat bacterial infections in humans. the field of modern biotechnology is generally thought of as having been born in 1971 when paul berg ' s ( stanford ) experiments in gene splicing had early success. herbert w. boyer ( univ. calif. at san francisco ) and stanley n. cohen ( stanford ) significantly advanced the new technology in 1972 by transferring genetic material into a bacterium, such that the imported material would be reproduced. the commercial viability of a biotechnology industry was significantly expanded on june 16, 1980, when the united states supreme court ruled that a genetically modified microorganism could be patented in the case of diamond v. chakrabarty. indian - born ananda chakrabarty, working for general electric, had modified a bacterium ( of the genus pseudomonas ) capable of breaking down crude oil, which he proposed to
often injurious, at least with the plants on which i experimented. " an important adaptive benefit of outcrossing is that it allows the masking of deleterious mutations in the genome of progeny. this beneficial effect is also known as hybrid vigor or heterosis. once outcrossing is established, subsequent switching to inbreeding becomes disadvantageous since it allows expression of the previously masked deleterious recessive mutations, commonly referred to as inbreeding depression. unlike in higher animals, where parthenogenesis is rare, asexual reproduction may occur in plants by several different mechanisms. the formation of stem tubers in potato is one example. particularly in arctic or alpine habitats, where opportunities for fertilisation of flowers by animals are rare, plantlets or bulbs, may develop instead of flowers, replacing sexual reproduction with asexual reproduction and giving rise to clonal populations genetically identical to the parent. this is one of several types of apomixis that occur in plants. apomixis can also happen in a seed, producing a seed that contains an embryo genetically identical to the parent. most sexually reproducing organisms are diploid, with paired chromosomes, but doubling of their chromosome number may occur due to errors in cytokinesis. this can occur early in development to produce an autopolyploid or partly autopolyploid organism, or during normal processes of cellular differentiation to produce some cell types that are polyploid ( endopolyploidy ), or during gamete formation. an allopolyploid plant may result from a hybridisation event between two different species. both autopolyploid and allopolyploid plants can often reproduce normally, but may be unable to cross - breed successfully with the parent population because there is a mismatch in chromosome numbers. these plants that are reproductively isolated from the parent species but live within the same geographical area, may be sufficiently successful to form a new species. some otherwise sterile plant polyploids can still reproduce vegetatively or by seed apomixis, forming clonal populations of identical individuals. durum wheat is a fertile tetraploid allopolyploid, while bread wheat is a fertile hexaploid. the commercial banana is an example of a sterile, seedless triploid hybrid. common dandelion is a triploid that produces viable seeds by apomictic seed. as in other eukaryotes, the inheritance of endosymbiotic organelles like
do not survive or become incapable of procreation. plants cannot continue the natural ripening or aging process. all these effects are beneficial to the consumer and the food industry, likewise. the amount of energy imparted for effective food irradiation is low compared to cooking the same ; even at a typical dose of 10 kgy most food, which is ( with regard to warming ) physically equivalent to water, would warm by only about 2. 5 Β°c ( 4. 5 Β°f ). the specialty of processing food by ionizing radiation is the fact, that the energy density per atomic transition is very high, it can cleave molecules and induce ionization ( hence the name ) which cannot be achieved by mere heating. this is the reason for new beneficial effects, however at the same time, for new concerns. the treatment of solid food by ionizing radiation can provide an effect similar to heat pasteurization of liquids, such as milk. however, the use of the term, cold pasteurization, to describe irradiated foods is controversial, because pasteurization and irradiation are fundamentally different processes, although the intended end results can in some cases be similar. detractors of food irradiation have concerns about the health hazards of induced radioactivity. a report for the industry advocacy group american council on science and health entitled " irradiated foods " states : " the types of radiation sources approved for the treatment of foods have specific energy levels well below that which would cause any element in food to become radioactive. food undergoing irradiation does not become any more radioactive than luggage passing through an airport x - ray scanner or teeth that have been x - rayed. " food irradiation is currently permitted by over 40 countries and volumes are estimated to exceed 500, 000 metric tons ( 490, 000 long tons ; 550, 000 short tons ) annually worldwide. food irradiation is essentially a non - nuclear technology ; it relies on the use of ionizing radiation which may be generated by accelerators for electrons and conversion into bremsstrahlung, but which may use also gamma - rays from nuclear decay. there is a worldwide industry for processing by ionizing radiation, the majority by number and by processing power using accelerators. food irradiation is only a niche application compared to medical supplies, plastic materials, raw materials, gemstones, cables and wires, etc. = = accidents = = nuclear accidents, because of the powerful forces involved, are often very dangerous. historically, the first incidents involved fatal
tubers in potato is one example. particularly in arctic or alpine habitats, where opportunities for fertilisation of flowers by animals are rare, plantlets or bulbs, may develop instead of flowers, replacing sexual reproduction with asexual reproduction and giving rise to clonal populations genetically identical to the parent. this is one of several types of apomixis that occur in plants. apomixis can also happen in a seed, producing a seed that contains an embryo genetically identical to the parent. most sexually reproducing organisms are diploid, with paired chromosomes, but doubling of their chromosome number may occur due to errors in cytokinesis. this can occur early in development to produce an autopolyploid or partly autopolyploid organism, or during normal processes of cellular differentiation to produce some cell types that are polyploid ( endopolyploidy ), or during gamete formation. an allopolyploid plant may result from a hybridisation event between two different species. both autopolyploid and allopolyploid plants can often reproduce normally, but may be unable to cross - breed successfully with the parent population because there is a mismatch in chromosome numbers. these plants that are reproductively isolated from the parent species but live within the same geographical area, may be sufficiently successful to form a new species. some otherwise sterile plant polyploids can still reproduce vegetatively or by seed apomixis, forming clonal populations of identical individuals. durum wheat is a fertile tetraploid allopolyploid, while bread wheat is a fertile hexaploid. the commercial banana is an example of a sterile, seedless triploid hybrid. common dandelion is a triploid that produces viable seeds by apomictic seed. as in other eukaryotes, the inheritance of endosymbiotic organelles like mitochondria and chloroplasts in plants is non - mendelian. chloroplasts are inherited through the male parent in gymnosperms but often through the female parent in flowering plants. = = = molecular genetics = = = a considerable amount of new knowledge about plant function comes from studies of the molecular genetics of model plants such as the thale cress, arabidopsis thaliana, a weedy species in the mustard family ( brassicaceae ). the genome or hereditary information contained in the genes of this species is encoded by about 135 million base pairs of dna, forming one of the
young plant cells, and electroporation, which involves using an electric shock to make the cell membrane permeable to plasmid dna. as only a single cell is transformed with genetic material, the organism must be regenerated from that single cell. in plants this is accomplished through the use of tissue culture. in animals it is necessary to ensure that the inserted dna is present in the embryonic stem cells. bacteria consist of a single cell and reproduce clonally so regeneration is not necessary. selectable markers are used to easily differentiate transformed from untransformed cells. these markers are usually present in the transgenic organism, although a number of strategies have been developed that can remove the selectable marker from the mature transgenic plant. further testing using pcr, southern hybridization, and dna sequencing is conducted to confirm that an organism contains the new gene. these tests can also confirm the chromosomal location and copy number of the inserted gene. the presence of the gene does not guarantee it will be expressed at appropriate levels in the target tissue so methods that look for and measure the gene products ( rna and protein ) are also used. these include northern hybridisation, quantitative rt - pcr, western blot, immunofluorescence, elisa and phenotypic analysis. the new genetic material can be inserted randomly within the host genome or targeted to a specific location. the technique of gene targeting uses homologous recombination to make desired changes to a specific endogenous gene. this tends to occur at a relatively low frequency in plants and animals and generally requires the use of selectable markers. the frequency of gene targeting can be greatly enhanced through genome editing. genome editing uses artificially engineered nucleases that create specific double - stranded breaks at desired locations in the genome, and use the cell ' s endogenous mechanisms to repair the induced break by the natural processes of homologous recombination and nonhomologous end - joining. there are four families of engineered nucleases : meganucleases, zinc finger nucleases, transcription activator - like effector nucleases ( talens ), and the cas9 - guiderna system ( adapted from crispr ). talen and crispr are the two most commonly used and each has its own advantages. talens have greater target specificity, while crispr is easier to design and more efficient. in addition to enhancing gene targeting, engineered nucleases can be used to introduce mutations
, subsequent switching to inbreeding becomes disadvantageous since it allows expression of the previously masked deleterious recessive mutations, commonly referred to as inbreeding depression. unlike in higher animals, where parthenogenesis is rare, asexual reproduction may occur in plants by several different mechanisms. the formation of stem tubers in potato is one example. particularly in arctic or alpine habitats, where opportunities for fertilisation of flowers by animals are rare, plantlets or bulbs, may develop instead of flowers, replacing sexual reproduction with asexual reproduction and giving rise to clonal populations genetically identical to the parent. this is one of several types of apomixis that occur in plants. apomixis can also happen in a seed, producing a seed that contains an embryo genetically identical to the parent. most sexually reproducing organisms are diploid, with paired chromosomes, but doubling of their chromosome number may occur due to errors in cytokinesis. this can occur early in development to produce an autopolyploid or partly autopolyploid organism, or during normal processes of cellular differentiation to produce some cell types that are polyploid ( endopolyploidy ), or during gamete formation. an allopolyploid plant may result from a hybridisation event between two different species. both autopolyploid and allopolyploid plants can often reproduce normally, but may be unable to cross - breed successfully with the parent population because there is a mismatch in chromosome numbers. these plants that are reproductively isolated from the parent species but live within the same geographical area, may be sufficiently successful to form a new species. some otherwise sterile plant polyploids can still reproduce vegetatively or by seed apomixis, forming clonal populations of identical individuals. durum wheat is a fertile tetraploid allopolyploid, while bread wheat is a fertile hexaploid. the commercial banana is an example of a sterile, seedless triploid hybrid. common dandelion is a triploid that produces viable seeds by apomictic seed. as in other eukaryotes, the inheritance of endosymbiotic organelles like mitochondria and chloroplasts in plants is non - mendelian. chloroplasts are inherited through the male parent in gymnosperms but often through the female parent in flowering plants. = = = molecular genetics = = = a considerable amount of new knowledge about plant function comes from
Question: If a species is no longer able to reproduce, it will
A) adapt to its environment
B) become immune to disease
C) become extinct
D) increase its population
|
C) become extinct
|
Context:
nuclear jets containing relativistic ` ` hot ' ' particles close to the central engine cool dramatically by producing high energy radiation. the radiative dissipation is similar to the famous compton drag acting upon ` ` cold ' ' thermal particles in a relativistic bulk flow. highly relativistic protons induce anisotropic showers raining electromagnetic power down onto the putative accretion disk. thus, the radiative signature of hot hadronic jets is x - ray irradiation of cold thermal matter. the synchrotron radio emission of the accelerated electrons is self - absorbed due to the strong magnetic fields close to the magnetic nozzle.
10 kgy most food, which is ( with regard to warming ) physically equivalent to water, would warm by only about 2. 5 Β°c ( 4. 5 Β°f ). the specialty of processing food by ionizing radiation is the fact, that the energy density per atomic transition is very high, it can cleave molecules and induce ionization ( hence the name ) which cannot be achieved by mere heating. this is the reason for new beneficial effects, however at the same time, for new concerns. the treatment of solid food by ionizing radiation can provide an effect similar to heat pasteurization of liquids, such as milk. however, the use of the term, cold pasteurization, to describe irradiated foods is controversial, because pasteurization and irradiation are fundamentally different processes, although the intended end results can in some cases be similar. detractors of food irradiation have concerns about the health hazards of induced radioactivity. a report for the industry advocacy group american council on science and health entitled " irradiated foods " states : " the types of radiation sources approved for the treatment of foods have specific energy levels well below that which would cause any element in food to become radioactive. food undergoing irradiation does not become any more radioactive than luggage passing through an airport x - ray scanner or teeth that have been x - rayed. " food irradiation is currently permitted by over 40 countries and volumes are estimated to exceed 500, 000 metric tons ( 490, 000 long tons ; 550, 000 short tons ) annually worldwide. food irradiation is essentially a non - nuclear technology ; it relies on the use of ionizing radiation which may be generated by accelerators for electrons and conversion into bremsstrahlung, but which may use also gamma - rays from nuclear decay. there is a worldwide industry for processing by ionizing radiation, the majority by number and by processing power using accelerators. food irradiation is only a niche application compared to medical supplies, plastic materials, raw materials, gemstones, cables and wires, etc. = = accidents = = nuclear accidents, because of the powerful forces involved, are often very dangerous. historically, the first incidents involved fatal radiation exposure. marie curie died from aplastic anemia which resulted from her high levels of exposure. two scientists, an american and canadian respectively, harry daghlian and louis slotin, died after mishandling the same plutonium mass. unlike conventional weapons, the intense light, heat, and explosive force is
a comparison of the sensitivities of methods which allow us to determine the coordinates of a moving hot body is made.
higher concentrations of atmospheric nitrous oxide ( n2o ) are expected to slightly warm earth ' s surface because of increases in radiative forcing. radiative forcing is the difference in the net upward thermal radiation flux from the earth through a transparent atmosphere and radiation through an otherwise identical atmosphere with greenhouse gases. radiative forcing, normally measured in w / m ^ 2, depends on latitude, longitude and altitude, but it is often quoted for the tropopause, about 11 km of altitude for temperate latitudes, or for the top of the atmosphere at around 90 km. for current concentrations of greenhouse gases, the radiative forcing per added n2o molecule is about 230 times larger than the forcing per added carbon dioxide ( co2 ) molecule. this is due to the heavy saturation of the absorption band of the relatively abundant greenhouse gas, co2, compared to the much smaller saturation of the absorption bands of the trace greenhouse gas n2o. but the rate of increase of co2 molecules, about 2. 5 ppm / year ( ppm = part per million by mole ), is about 3000 times larger than the rate of increase of n2o molecules, which has held steady at around 0. 00085 ppm / year since 1985. so, the contribution of nitrous oxide to the annual increase in forcing is 230 / 3000 or about 1 / 13 that of co2. if the main greenhouse gases, co2, ch4 and n2o have contributed about 0. 1 c / decade of the warming observed over the past few decades, this would correspond to about 0. 00064 k per year or 0. 064 k per century of warming from n2o. proposals to place harsh restrictions on nitrous oxide emissions because of warming fears are not justified by these facts. restrictions would cause serious harm ; for example, by jeopardizing world food supplies.
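The abstract's headline ratios can be checked with a few lines of arithmetic; the sketch below uses only the values quoted in the text (a 230-fold forcing ratio per added molecule, 2.5 ppm/yr CO2 growth, 0.00085 ppm/yr N2O growth).

```python
# Back-of-envelope check of the arithmetic quoted in the abstract; all inputs are
# values stated in the text.
forcing_per_molecule_ratio = 230.0   # forcing per added N2O molecule vs per added CO2 molecule
co2_growth_ppm_per_year = 2.5
n2o_growth_ppm_per_year = 0.00085

growth_ratio = co2_growth_ppm_per_year / n2o_growth_ppm_per_year
n2o_share_of_co2 = forcing_per_molecule_ratio / growth_ratio

print(round(growth_ratio))            # ~2941, quoted as "about 3000"
print(round(1.0 / n2o_share_of_co2))  # ~13, i.e. N2O adds roughly 1/13 of the CO2 forcing increase

# The 0.064 K per century figure additionally depends on how the assumed 0.1 C/decade
# is apportioned among CO2, CH4 and N2O, which the abstract does not spell out, so it
# is not recomputed here.
```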
ambient air ( see lockheed f - 117 nighthawk, rectangular nozzles on the lockheed martin f - 22 raptor, and serrated nozzle flaps on the lockheed martin f - 35 lightning ). often, cool air is deliberately injected into the exhaust flow to boost this process ( see ryan aqm - 91 firefly and northrop b - 2 spirit ). the stefan - boltzmann law shows how this results in less energy ( thermal radiation in infrared spectrum ) being released and thus reduces the heat signature. in some aircraft, the jet exhaust is vented above the wing surface to shield it from observers below, as in the lockheed f - 117 nighthawk, and the unstealthy fairchild republic a - 10 thunderbolt ii. to achieve infrared stealth, the exhaust gas is cooled to the temperatures where the brightest wavelengths it radiates are absorbed by atmospheric carbon dioxide and water vapor, greatly reducing the infrared visibility of the exhaust plume. another way to reduce the exhaust temperature is to circulate coolant fluids such as fuel inside the exhaust pipe, where the fuel tanks serve as heat sinks cooled by the flow of air along the wings. ground combat includes the use of both active and passive infrared sensors. thus, the united states marine corps ( usmc ) ground combat uniform requirements document specifies infrared reflective quality standards. = = reducing radio frequency ( rf ) emissions = = in addition to reducing infrared and acoustic emissions, a stealth vehicle must avoid radiating any other detectable energy, such as from onboard radars, communications systems, or rf leakage from electronics enclosures. the f - 117 uses passive infrared and low light level television sensor systems to aim its weapons and the f - 22 raptor has an advanced lpi radar which can illuminate enemy aircraft without triggering a radar warning receiver response. = = measuring = = the size of a target ' s image on radar is measured by the rcs, often represented by the symbol σ and expressed in square meters. this does not equal geometric area. a perfectly conducting sphere of projected cross sectional area 1 m2 ( i. e. a diameter of 1. 13 m ) will have an rcs of 1 m2. note that for radar wavelengths much less than the diameter of the sphere, rcs is independent of frequency. conversely, a square flat plate of area 1 m2 will have an rcs of σ = 4πa² / λ² ( where a = area, λ = wavelength ), or 13, 982 m2 at 10 ghz if the radar is perpendicular to the flat
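The flat-plate formula quoted at the end of that passage is easy to evaluate directly; the sketch below reproduces the roughly 13,982 m² figure for a 1 m² plate at 10 GHz.

```python
import math

def flat_plate_rcs(area_m2, freq_hz):
    """Broadside RCS of a flat conducting plate: sigma = 4 * pi * A**2 / wavelength**2."""
    c = 299_792_458.0  # speed of light, m/s
    wavelength = c / freq_hz
    return 4.0 * math.pi * area_m2 ** 2 / wavelength ** 2

# 1 m^2 plate at 10 GHz, the example quoted in the passage (~13,982 m^2).
print(round(flat_plate_rcs(1.0, 10e9)))

# By contrast, the passage notes that a large perfectly conducting sphere has an RCS
# equal to its projected cross-sectional area, i.e. 1 m^2 regardless of wavelength.
```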
molecules and induce ionization ( hence the name ) which cannot be achieved by mere heating. this is the reason for new beneficial effects, however at the same time, for new concerns. the treatment of solid food by ionizing radiation can provide an effect similar to heat pasteurization of liquids, such as milk. however, the use of the term, cold pasteurization, to describe irradiated foods is controversial, because pasteurization and irradiation are fundamentally different processes, although the intended end results can in some cases be similar. detractors of food irradiation have concerns about the health hazards of induced radioactivity. a report for the industry advocacy group american council on science and health entitled " irradiated foods " states : " the types of radiation sources approved for the treatment of foods have specific energy levels well below that which would cause any element in food to become radioactive. food undergoing irradiation does not become any more radioactive than luggage passing through an airport x - ray scanner or teeth that have been x - rayed. " food irradiation is currently permitted by over 40 countries and volumes are estimated to exceed 500, 000 metric tons ( 490, 000 long tons ; 550, 000 short tons ) annually worldwide. food irradiation is essentially a non - nuclear technology ; it relies on the use of ionizing radiation which may be generated by accelerators for electrons and conversion into bremsstrahlung, but which may use also gamma - rays from nuclear decay. there is a worldwide industry for processing by ionizing radiation, the majority by number and by processing power using accelerators. food irradiation is only a niche application compared to medical supplies, plastic materials, raw materials, gemstones, cables and wires, etc. = = accidents = = nuclear accidents, because of the powerful forces involved, are often very dangerous. historically, the first incidents involved fatal radiation exposure. marie curie died from aplastic anemia which resulted from her high levels of exposure. two scientists, an american and canadian respectively, harry daghlian and louis slotin, died after mishandling the same plutonium mass. unlike conventional weapons, the intense light, heat, and explosive force is not the only deadly component to a nuclear weapon. approximately half of the deaths from hiroshima and nagasaki died two to five years afterward from radiation exposure. civilian nuclear and radiological accidents primarily involve nuclear power plants. most common are nuclear leaks that expose workers to hazardous material. a nuclear meltdown refers to the more serious hazard of
building block. ceramics β not to be confused with raw, unfired clay β are usually seen in crystalline form. the vast majority of commercial glasses contain a metal oxide fused with silica. at the high temperatures used to prepare glass, the material is a viscous liquid which solidifies into a disordered state upon cooling. windowpanes and eyeglasses are important examples. fibers of glass are also used for long - range telecommunication and optical transmission. scratch resistant corning gorilla glass is a well - known example of the application of materials science to drastically improve the properties of common components. engineering ceramics are known for their stiffness and stability under high temperatures, compression and electrical stress. alumina, silicon carbide, and tungsten carbide are made from a fine powder of their constituents in a process of sintering with a binder. hot pressing provides higher density material. chemical vapor deposition can place a film of a ceramic on another material. cermets are ceramic particles containing some metals. the wear resistance of tools is derived from cemented carbides with the metal phase of cobalt and nickel typically added to modify properties. ceramics can be significantly strengthened for engineering applications using the principle of crack deflection. this process involves the strategic addition of second - phase particles within a ceramic matrix, optimizing their shape, size, and distribution to direct and control crack propagation. this approach enhances fracture toughness, paving the way for the creation of advanced, high - performance ceramics in various industries. = = = composites = = = another application of materials science in industry is making composite materials. these are structured materials composed of two or more macroscopic phases. applications range from structural elements such as steel - reinforced concrete, to the thermal insulating tiles, which play a key and integral role in nasa ' s space shuttle thermal protection system, which is used to protect the surface of the shuttle from the heat of re - entry into the earth ' s atmosphere. one example is reinforced carbon - carbon ( rcc ), the light gray material, which withstands re - entry temperatures up to 1, 510 Β°c ( 2, 750 Β°f ) and protects the space shuttle ' s wing leading edges and nose cap. rcc is a laminated composite material made from graphite rayon cloth and impregnated with a phenolic resin. after curing at high temperature in an autoclave, the laminate is pyrolized to convert the resin to carbon, impregnated with furfuryl alcohol in a
is also higher at high temperature, as shown by carnot ' s theorem. in a conventional metallic engine, much of the energy released from the fuel must be dissipated as waste heat in order to prevent a meltdown of the metallic parts. despite all of these desirable properties, such engines are not in production because the manufacturing of ceramic parts in the requisite precision and durability is difficult. imperfection in the ceramic leads to cracks, which can lead to potentially dangerous equipment failure. such engines are possible in laboratory settings, but mass - production is not feasible with current technology. work is being done in developing ceramic parts for gas turbine engines. currently, even blades made of advanced metal alloys used in the engines ' hot section require cooling and careful limiting of operating temperatures. turbine engines made with ceramics could operate more efficiently, giving aircraft greater range and payload for a set amount of fuel. recently, there have been advances in ceramics which include bio - ceramics, such as dental implants and synthetic bones. hydroxyapatite, the natural mineral component of bone, has been made synthetically from a number of biological and chemical sources and can be formed into ceramic materials. orthopedic implants made from these materials bond readily to bone and other tissues in the body without rejection or inflammatory reactions. because of this, they are of great interest for gene delivery and tissue engineering scaffolds. most hydroxyapatite ceramics are very porous and lack mechanical strength and are used to coat metal orthopedic devices to aid in forming a bond to bone or as bone fillers. they are also used as fillers for orthopedic plastic screws to aid in reducing the inflammation and increase absorption of these plastic materials. work is being done to make strong, fully dense nano crystalline hydroxyapatite ceramic materials for orthopedic weight bearing devices, replacing foreign metal and plastic orthopedic materials with a synthetic, but naturally occurring, bone mineral. ultimately these ceramic materials may be used as bone replacements or with the incorporation of protein collagens, synthetic bones. durable actinide - containing ceramic materials have many applications such as in nuclear fuels for burning excess pu and in chemically - inert sources of alpha irradiation for power supply of unmanned space vehicles or to produce electricity for microelectronic devices. both use and disposal of radioactive actinides require their immobilization in a durable host material. nuclear waste long - lived radionuclides such as actinides are immobilized using chemical
nanodust, which undergoes stochastic heating by single starlight photons in the interstellar medium, ranges from angstrom - sized large molecules containing tens to thousands of atoms ( e. g. polycyclic aromatic hydrocarbon molecules ) to grains of a couple tens of nanometers. the presence of nanograins in astrophysical environments has been revealed by a variety of interstellar phenomena : the optical luminescence, the near - and mid - infrared emission, the galactic foreground microwave emission, and the ultraviolet extinction which are ubiquitously seen in the interstellar medium of the milky way and beyond. nanograins ( e. g. nanodiamonds ) have also been identified as presolar in primitive meteorites based on their isotopically anomalous composition. considering the very processes that lead to the detection of nanodust in the ism for the nanodust in the solar system shows that the observation of solar system nanodust by these processes is less likely.
modeling of the x - ray spectra of the galactic superluminal jet sources grs 1915 + 105 and gro j1655 - 40 reveal a three - layered atmospheric structure in the inner region of their accretion disks. above the cold and optically thick disk of a temperature 0. 2 - 0. 5 kev, there is a warm layer with a temperature of 1. 0 - 1. 5 kev and an optical depth around 10. sometimes there is also a much hotter, optically thin corona above the warm layer, with a temperature of 100 kev or higher and an optical depth around unity. the structural similarity between the accretion disks and the solar atmosphere suggest that similar physical processes may be operating in these different systems.
Question: Which of these is hottest?
A) The Earth
B) Mars
C) The Moon
D) The Sun
|
D) The Sun
|
Context:
enough to rise to the surface β giving birth to volcanoes. = = atmospheric science = = atmospheric science initially developed in the late - 19th century as a means to forecast the weather through meteorology, the study of weather. atmospheric chemistry was developed in the 20th century to measure air pollution and expanded in the 1970s in response to acid rain. climatology studies the climate and climate change. the troposphere, stratosphere, mesosphere, thermosphere, and exosphere are the five layers which make up earth ' s atmosphere. 75 % of the mass in the atmosphere is located within the troposphere, the lowest layer. in all, the atmosphere is made up of about 78. 0 % nitrogen, 20. 9 % oxygen, and 0. 92 % argon, and small amounts of other gases including co2 and water vapor. water vapor and co2 cause the earth ' s atmosphere to catch and hold the sun ' s energy through the greenhouse effect. this makes earth ' s surface warm enough for liquid water and life. in addition to trapping heat, the atmosphere also protects living organisms by shielding the earth ' s surface from cosmic rays. the magnetic field β created by the internal motions of the core β produces the magnetosphere which protects earth ' s atmosphere from the solar wind. as the earth is 4. 5 billion years old, it would have lost its atmosphere by now if there were no protective magnetosphere. = = earth ' s magnetic field = = = = hydrology = = hydrology is the study of the hydrosphere and the movement of water on earth. it emphasizes the study of how humans use and interact with freshwater supplies. study of water ' s movement is closely related to geomorphology and other branches of earth science. applied hydrology involves engineering to maintain aquatic environments and distribute water supplies. subdisciplines of hydrology include oceanography, hydrogeology, ecohydrology, and glaciology. oceanography is the study of oceans. hydrogeology is the study of groundwater. it includes the mapping of groundwater supplies and the analysis of groundwater contaminants. applied hydrogeology seeks to prevent contamination of groundwater and mineral springs and make it available as drinking water. the earliest exploitation of groundwater resources dates back to 3000 bc, and hydrogeology as a science was developed by hydrologists beginning in the 17th century. ecohydrology is the study of ecological systems in the hydrosphere. it can be divided into the physical study of aquatic ecosystems and the
the transition of our energy system to renewable energies is necessary in order not to heat up the climate any further and to achieve climate neutrality. the use of wind energy plays an important role in this transition in germany. but how much wind energy can be used and what are the possible consequences for the atmosphere if more and more wind energy is used?
higher concentrations of atmospheric nitrous oxide ( n2o ) are expected to slightly warm earth ' s surface because of increases in radiative forcing. radiative forcing is the difference in the net upward thermal radiation flux from the earth through a transparent atmosphere and radiation through an otherwise identical atmosphere with greenhouse gases. radiative forcing, normally measured in w / m ^ 2, depends on latitude, longitude and altitude, but it is often quoted for the tropopause, about 11 km of altitude for temperate latitudes, or for the top of the atmosphere at around 90 km. for current concentrations of greenhouse gases, the radiative forcing per added n2o molecule is about 230 times larger than the forcing per added carbon dioxide ( co2 ) molecule. this is due to the heavy saturation of the absorption band of the relatively abundant greenhouse gas, co2, compared to the much smaller saturation of the absorption bands of the trace greenhouse gas n2o. but the rate of increase of co2 molecules, about 2. 5 ppm / year ( ppm = part per million by mole ), is about 3000 times larger than the rate of increase of n2o molecules, which has held steady at around 0. 00085 ppm / year since 1985. so, the contribution of nitrous oxide to the annual increase in forcing is 230 / 3000 or about 1 / 13 that of co2. if the main greenhouse gases, co2, ch4 and n2o have contributed about 0. 1 c / decade of the warming observed over the past few decades, this would correspond to about 0. 00064 k per year or 0. 064 k per century of warming from n2o. proposals to place harsh restrictions on nitrous oxide emissions because of warming fears are not justified by these facts. restrictions would cause serious harm ; for example, by jeopardizing world food supplies.
the world is changing at an ever - increasing pace. and it has changed in a much more fundamental way than one would think, primarily because it has become more connected and interdependent than in our entire history. every new product, every new invention can be combined with those that existed before, thereby creating an explosion of complexity : structural complexity, dynamic complexity, functional complexity, and algorithmic complexity. how to respond to this challenge? and what are the costs?
this paper has been withdrawn by the authors until some changes are made.
cools and solidifies. through subduction, oceanic crust and lithosphere vehemently returns to the convecting mantle. volcanoes result primarily from the melting of subducted crust material. crust material that is forced into the asthenosphere melts, and some portion of the melted material becomes light enough to rise to the surface β giving birth to volcanoes. = = atmospheric science = = atmospheric science initially developed in the late - 19th century as a means to forecast the weather through meteorology, the study of weather. atmospheric chemistry was developed in the 20th century to measure air pollution and expanded in the 1970s in response to acid rain. climatology studies the climate and climate change. the troposphere, stratosphere, mesosphere, thermosphere, and exosphere are the five layers which make up earth ' s atmosphere. 75 % of the mass in the atmosphere is located within the troposphere, the lowest layer. in all, the atmosphere is made up of about 78. 0 % nitrogen, 20. 9 % oxygen, and 0. 92 % argon, and small amounts of other gases including co2 and water vapor. water vapor and co2 cause the earth ' s atmosphere to catch and hold the sun ' s energy through the greenhouse effect. this makes earth ' s surface warm enough for liquid water and life. in addition to trapping heat, the atmosphere also protects living organisms by shielding the earth ' s surface from cosmic rays. the magnetic field β created by the internal motions of the core β produces the magnetosphere which protects earth ' s atmosphere from the solar wind. as the earth is 4. 5 billion years old, it would have lost its atmosphere by now if there were no protective magnetosphere. = = earth ' s magnetic field = = = = hydrology = = hydrology is the study of the hydrosphere and the movement of water on earth. it emphasizes the study of how humans use and interact with freshwater supplies. study of water ' s movement is closely related to geomorphology and other branches of earth science. applied hydrology involves engineering to maintain aquatic environments and distribute water supplies. subdisciplines of hydrology include oceanography, hydrogeology, ecohydrology, and glaciology. oceanography is the study of oceans. hydrogeology is the study of groundwater. it includes the mapping of groundwater supplies and the analysis of groundwater contaminants. applied hydrogeology seeks to prevent contamination of groundwater and mineral springs and make
industrial applications. this branch of biotechnology is used mainly in the refining and combustion industries, principally for the production of bio - oils from photosynthetic micro - algae. green biotechnology is biotechnology applied to agricultural processes. an example would be the selection and domestication of plants via micropropagation. another example is the design of transgenic plants to grow under specific environments in the presence ( or absence ) of chemicals. one hope is that green biotechnology might produce more environmentally friendly solutions than traditional industrial agriculture. an example of this is the engineering of a plant to express a pesticide, thereby ending the need for external application of pesticides. an example of this would be bt corn. whether or not green biotechnology products such as this are ultimately more environmentally friendly is a topic of considerable debate. it is commonly considered the next phase of the green revolution : a platform for reducing world hunger through technologies that yield plants which are more fertile and more resistant to biotic and abiotic stress, together with environmentally friendly fertilizers and biopesticides ; it is mainly focused on the development of agriculture. on the other hand, some of the uses of green biotechnology involve microorganisms to clean and reduce waste. red biotechnology is the use of biotechnology in the medical and pharmaceutical industries, and in health preservation. this branch involves the production of vaccines and antibiotics, regenerative therapies, the creation of artificial organs and new diagnostics of diseases, as well as the development of hormones, stem cells, antibodies, sirna and diagnostic tests. white biotechnology, also known as industrial biotechnology, is biotechnology applied to industrial processes. an example is the design of an organism to produce a useful chemical. another example is the use of enzymes as industrial catalysts to either produce valuable chemicals or destroy hazardous / polluting chemicals. white biotechnology tends to consume fewer resources than traditional processes used to produce industrial goods. yellow biotechnology refers to the use of biotechnology in food production ( food industry ), for example in making wine ( winemaking ), cheese ( cheesemaking ), and beer ( brewing ) by fermentation. it has also been used to refer to biotechnology applied to insects. this includes biotechnology - based approaches for the control of harmful insects, the characterisation and utilisation of active ingredients or genes of insects for research, or application in agriculture and medicine, and various other approaches. gray biotechnology is dedicated to environmental applications, and is focused on the maintenance of biodiversity and the removal of pollutants.
dust grains absorb half of the radiation emitted by stars throughout the history of the universe, re - emitting this energy at infrared wavelengths. polycyclic aromatic hydrocarbons ( pahs ) are large organic molecules that trace millimeter - size dust grains and regulate the cooling of the interstellar gas within galaxies. observations of pah features in very distant galaxies have been difficult due to the limited sensitivity and wavelength coverage of previous infrared telescopes. here we present jwst observations that detect the 3. 3um pah feature in a galaxy observed less than 1. 5 billion years after the big bang. the high equivalent width of the pah feature indicates that star formation, rather than black hole accretion, dominates the infrared emission throughout the galaxy. the light from pah molecules, large dust grains, and stars and hot dust are spatially distinct from one another, leading to order - of - magnitude variations in the pah equivalent width and the ratio of pah to total infrared luminosity across the galaxy. the spatial variations we observe suggest either a physical offset between the pahs and large dust grains or wide variations in the local ultraviolet radiation field. our observations demonstrate that differences in the emission from pah molecules and large dust grains are a complex result of localized processes within early galaxies.
a focused modernization of sophus lie ' s brilliant writings on the foundations of geometry, which every contemporary geometer should look at at least once. translated, updated, and supplied with commentary.
contaminants from the air to reduce the potential adverse effects on humans and the environment. the process of air purification may be performed using methods such as mechanical filtration, ionization, activated carbon adsorption, photocatalytic oxidation, and ultraviolet light germicidal irradiation. = = = sewage treatment = = = = = = environmental remediation = = = environmental remediation is the process through which contaminants or pollutants in soil, water and other media are removed to improve environmental quality. the main focus is the reduction of hazardous substances within the environment. some of the areas involved in environmental remediation include soil contamination, hazardous waste, groundwater contamination, and oil, gas and chemical spills. the three most common types of environmental remediation are soil, water, and sediment remediation. soil remediation consists of removing contaminants in soil, as these pose great risks to humans and the ecosystem. some examples of this are heavy metals, pesticides, and radioactive materials. depending on the contaminant, the remedial processes can be physical, chemical, thermal, or biological. water remediation is one of the most important, since water is an essential natural resource. depending on the source of water there will be different contaminants. surface water contamination mainly consists of agricultural, animal, and industrial waste, as well as acid mine drainage. there has been a rise in the need for water remediation due to the increased discharge of industrial waste, leading to a demand for sustainable water solutions. the market for water remediation is expected to grow steadily, to $ 19. 6 billion by 2030. sediment remediation consists of removing contaminated sediments. it is broadly similar to soil remediation, except that it is often more sophisticated because it involves additional contaminants. reducing the contaminants typically relies on physical, chemical, and biological processes that help with source control, but if these processes are not executed correctly, there ' s a risk of contamination resurfacing. = = = solid waste management = = = solid waste management is the purification, consumption, reuse, disposal, and treatment of solid waste that is undertaken by the government or the ruling bodies of a city / town. it refers to the collection, treatment, and disposal of non - soluble, solid waste material. solid waste is associated with industrial, institutional, commercial and residential activities. hazardous solid waste, when improperly disposed of, can encourage the
Question: In large industrial cities, the emissions from fossil fuels cause the atmosphere to change. Which process allows the atmosphere to change?
A) increased inorganic matter in soil
B) increased use of fertilizers on crops
C) increased buildup of greenhouse gases
D) increased rainfall rates near power plants
|
C) increased buildup of greenhouse gases
|
Context:
10 kgy, most food, which is ( with regard to warming ) physically equivalent to water, would warm by only about 2. 5 °c ( 4. 5 °f ). what is special about processing food by ionizing radiation is that the energy delivered per atomic transition is very high : it can cleave molecules and induce ionization ( hence the name ), which cannot be achieved by mere heating. this is the reason for new beneficial effects, but at the same time for new concerns. the treatment of solid food by ionizing radiation can provide an effect similar to heat pasteurization of liquids, such as milk. however, the use of the term, cold pasteurization, to describe irradiated foods is controversial, because pasteurization and irradiation are fundamentally different processes, although the intended end results can in some cases be similar. detractors of food irradiation have concerns about the health hazards of induced radioactivity. a report for the industry advocacy group american council on science and health entitled " irradiated foods " states : " the types of radiation sources approved for the treatment of foods have specific energy levels well below that which would cause any element in food to become radioactive. food undergoing irradiation does not become any more radioactive than luggage passing through an airport x - ray scanner or teeth that have been x - rayed. " food irradiation is currently permitted by over 40 countries and volumes are estimated to exceed 500, 000 metric tons ( 490, 000 long tons ; 550, 000 short tons ) annually worldwide. food irradiation is essentially a non - nuclear technology ; it relies on the use of ionizing radiation which may be generated by accelerators for electrons and conversion into bremsstrahlung, but which may also use gamma - rays from nuclear decay. there is a worldwide industry for processing by ionizing radiation, the majority of it, by number of facilities and by processing power, using accelerators. food irradiation is only a niche application compared to medical supplies, plastic materials, raw materials, gemstones, cables and wires, etc. = = accidents = = nuclear accidents, because of the powerful forces involved, are often very dangerous. historically, the first incidents involved fatal radiation exposure. marie curie died from aplastic anemia which resulted from her high levels of exposure. two scientists, harry daghlian and louis slotin, an american and a canadian respectively, died after mishandling the same plutonium mass. unlike conventional weapons, the intense light, heat, and explosive force is
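The roughly 2.5 °c figure quoted above follows directly from the definition of absorbed dose (1 kGy is 1 kJ deposited per kg) and the specific heat of water. A short sketch of that arithmetic; the 10 kGy dose and the water-equivalence assumption are taken from the passage.

```python
# Temperature rise of a water-equivalent food from a given absorbed dose of ionizing radiation.
dose_kgy = 10.0                       # absorbed dose in kilogray; 1 kGy = 1 kJ deposited per kg
energy_kj_per_kg = dose_kgy * 1.0     # 10 kJ/kg of deposited energy
c_water_kj_per_kg_k = 4.18            # specific heat capacity of water, kJ/(kg*K)

delta_t_c = energy_kj_per_kg / c_water_kj_per_kg_k
print(f"temperature rise: {delta_t_c:.1f} degrees C")   # ~2.4 C, consistent with the ~2.5 C quoted
```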
higher concentrations of atmospheric nitrous oxide ( n2o ) are expected to slightly warm earth ' s surface because of increases in radiative forcing. radiative forcing is the difference in the net upward thermal radiation flux from the earth through a transparent atmosphere and radiation through an otherwise identical atmosphere with greenhouse gases. radiative forcing, normally measured in w / m ^ 2, depends on latitude, longitude and altitude, but it is often quoted for the tropopause, about 11 km of altitude for temperate latitudes, or for the top of the atmosphere at around 90 km. for current concentrations of greenhouse gases, the radiative forcing per added n2o molecule is about 230 times larger than the forcing per added carbon dioxide ( co2 ) molecule. this is due to the heavy saturation of the absorption band of the relatively abundant greenhouse gas, co2, compared to the much smaller saturation of the absorption bands of the trace greenhouse gas n2o. but the rate of increase of co2 molecules, about 2. 5 ppm / year ( ppm = part per million by mole ), is about 3000 times larger than the rate of increase of n2o molecules, which has held steady at around 0. 00085 ppm / year since 1985. so, the contribution of nitrous oxide to the annual increase in forcing is 230 / 3000 or about 1 / 13 that of co2. if the main greenhouse gases, co2, ch4 and n2o have contributed about 0. 1 c / decade of the warming observed over the past few decades, this would correspond to about 0. 00064 k per year or 0. 064 k per century of warming from n2o. proposals to place harsh restrictions on nitrous oxide emissions because of warming fears are not justified by these facts. restrictions would cause serious harm ; for example, by jeopardizing world food supplies.
do not survive or become incapable of procreation. plants cannot continue the natural ripening or aging process. all these effects are beneficial to the consumer and the food industry alike. the amount of energy imparted for effective food irradiation is low compared to cooking the same food ; even at a typical dose of 10 kgy, most food, which is ( with regard to warming ) physically equivalent to water, would warm by only about 2. 5 °c ( 4. 5 °f ). what is special about processing food by ionizing radiation is that the energy delivered per atomic transition is very high : it can cleave molecules and induce ionization ( hence the name ), which cannot be achieved by mere heating. this is the reason for new beneficial effects, but at the same time for new concerns. the treatment of solid food by ionizing radiation can provide an effect similar to heat pasteurization of liquids, such as milk. however, the use of the term, cold pasteurization, to describe irradiated foods is controversial, because pasteurization and irradiation are fundamentally different processes, although the intended end results can in some cases be similar. detractors of food irradiation have concerns about the health hazards of induced radioactivity. a report for the industry advocacy group american council on science and health entitled " irradiated foods " states : " the types of radiation sources approved for the treatment of foods have specific energy levels well below that which would cause any element in food to become radioactive. food undergoing irradiation does not become any more radioactive than luggage passing through an airport x - ray scanner or teeth that have been x - rayed. " food irradiation is currently permitted by over 40 countries and volumes are estimated to exceed 500, 000 metric tons ( 490, 000 long tons ; 550, 000 short tons ) annually worldwide. food irradiation is essentially a non - nuclear technology ; it relies on the use of ionizing radiation which may be generated by accelerators for electrons and conversion into bremsstrahlung, but which may also use gamma - rays from nuclear decay. there is a worldwide industry for processing by ionizing radiation, the majority of it, by number of facilities and by processing power, using accelerators. food irradiation is only a niche application compared to medical supplies, plastic materials, raw materials, gemstones, cables and wires, etc. = = accidents = = nuclear accidents, because of the powerful forces involved, are often very dangerous. historically, the first incidents involved fatal
variation in total solar irradiance is thought to have little effect on the earth ' s surface temperature because of the thermal time constant - - the characteristic response time of the earth ' s global surface temperature to changes in forcing. this time constant is large enough to smooth annual variations but not necessarily variations having a longer period such as those due to solar inertial motion ; the magnitude of these surface temperature variations is estimated.
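The smoothing role of the thermal time constant described above can be illustrated with a first-order (exponential-lag) response, whose amplitude falls off for forcing periods shorter than the lag. This is only an illustrative toy model; the time-constant value and the set of periods are placeholders, not estimates from the passage.

```python
import math

# Illustrative only: a first-order lag with time constant tau responds to a sinusoidal
# forcing of period P with its amplitude reduced by 1 / sqrt(1 + (2*pi*tau/P)^2).
def response_amplitude(period_years: float, tau_years: float) -> float:
    omega = 2.0 * math.pi / period_years
    return 1.0 / math.sqrt(1.0 + (omega * tau_years) ** 2)

tau = 5.0  # placeholder thermal time constant in years, not a value from the passage
for period in (1.0, 11.0, 180.0):  # annual cycle, solar activity cycle, a longer-period variation
    print(f"forcing period {period:>5.0f} yr -> relative temperature response {response_amplitude(period, tau):.2f}")
# Short periods (annual) are strongly attenuated; much longer periods pass through almost unchanged.
```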
significantly greater strength and fracture toughness. another major change in the body during the firing or sintering process will be the establishment of the polycrystalline nature of the solid. significant grain growth tends to occur during sintering, with this growth depending on temperature and duration of the sintering process. the growth of grains will result in some form of grain size distribution, which will have a significant impact on the ultimate physical properties of the material. in particular, abnormal grain growth in which certain grains grow very large in a matrix of finer grains will significantly alter the physical and mechanical properties of the obtained ceramic. in the sintered body, grain sizes are a product of the thermal processing parameters as well as the initial particle size, or possibly the sizes of aggregates or particle clusters which arise during the initial stages of processing. the ultimate microstructure ( and thus the physical properties ) of the final product will be limited by and subject to the form of the structural template or precursor which is created in the initial stages of chemical synthesis and physical forming. hence the importance of chemical powder and polymer processing as it pertains to the synthesis of industrial ceramics, glasses and glass - ceramics. there are numerous possible refinements of the sintering process. some of the most common involve pressing the green body to give the densification a head start and reduce the sintering time needed. sometimes organic binders such as polyvinyl alcohol are added to hold the green body together ; these burn out during the firing ( at 200 β 350 Β°c ). sometimes organic lubricants are added during pressing to increase densification. it is common to combine these, and add binders and lubricants to a powder, then press. ( the formulation of these organic chemical additives is an art in itself. this is particularly important in the manufacture of high performance ceramics such as those used by the billions for electronics, in capacitors, inductors, sensors, etc. ) a slurry can be used in place of a powder, and then cast into a desired shape, dried and then sintered. indeed, traditional pottery is done with this type of method, using a plastic mixture worked with the hands. if a mixture of different materials is used together in a ceramic, the sintering temperature is sometimes above the melting point of one minor component β a liquid phase sintering. this results in shorter sintering times compared to solid state sintering. such liquid phase sintering involves in faster diffusion processes and may result in abnormal grain
in one or more of these kinds of structures, it is invariably accompanied by an increase or decrease of energy of the substances involved. some energy is transferred between the surroundings and the reactants of the reaction in the form of heat or light ; thus the products of a reaction may have more or less energy than the reactants. a reaction is said to be exergonic if the final state is lower on the energy scale than the initial state ; in the case of endergonic reactions the situation is the reverse. a reaction is said to be exothermic if the reaction releases heat to the surroundings ; in the case of endothermic reactions, the reaction absorbs heat from the surroundings. chemical reactions are invariably not possible unless the reactants surmount an energy barrier known as the activation energy. the speed of a chemical reaction ( at given temperature T ) is related to the activation energy E by the boltzmann population factor e^{-E/kT}, that is, the probability of a molecule to have energy greater than or equal to E at the given temperature T. this exponential dependence of a reaction rate on temperature is known as the arrhenius equation. the activation energy necessary for a chemical reaction to occur can be in the form of heat, light, electricity or mechanical force in the form of ultrasound. a related concept, free energy, which also incorporates entropy considerations, is a very useful means for predicting the feasibility of a reaction and determining the state of equilibrium of a chemical reaction, in chemical thermodynamics. a reaction is feasible only if the total change in the gibbs free energy is negative, ΔG ≤ 0 ; if it is equal to zero the chemical reaction is said to be at equilibrium. there exist only limited possible states of energy for electrons, atoms and molecules. these are determined by the rules of quantum mechanics, which require quantization of energy of a bound system. the atoms / molecules in a higher energy state are said to be excited. the molecules / atoms of substance in an excited energy state are often much more reactive ; that is, more amenable to chemical reactions. the phase of a substance is invariably determined by its energy and the energy of its surroundings. when the intermolecular forces of a substance are such that the energy of the surroundings is not sufficient to overcome them, it occurs in a more ordered phase like liquid
a reaction is said to be exergonic if the final state is lower on the energy scale than the initial state ; in the case of endergonic reactions the situation is the reverse. a reaction is said to be exothermic if the reaction releases heat to the surroundings ; in the case of endothermic reactions, the reaction absorbs heat from the surroundings. chemical reactions are invariably not possible unless the reactants surmount an energy barrier known as the activation energy. the speed of a chemical reaction ( at given temperature T ) is related to the activation energy E by the boltzmann population factor e^{-E/kT}, that is, the probability of a molecule to have energy greater than or equal to E at the given temperature T. this exponential dependence of a reaction rate on temperature is known as the arrhenius equation. the activation energy necessary for a chemical reaction to occur can be in the form of heat, light, electricity or mechanical force in the form of ultrasound. a related concept, free energy, which also incorporates entropy considerations, is a very useful means for predicting the feasibility of a reaction and determining the state of equilibrium of a chemical reaction, in chemical thermodynamics. a reaction is feasible only if the total change in the gibbs free energy is negative, ΔG ≤ 0 ; if it is equal to zero the chemical reaction is said to be at equilibrium. there exist only limited possible states of energy for electrons, atoms and molecules. these are determined by the rules of quantum mechanics, which require quantization of energy of a bound system. the atoms / molecules in a higher energy state are said to be excited. the molecules / atoms of substance in an excited energy state are often much more reactive ; that is, more amenable to chemical reactions. the phase of a substance is invariably determined by its energy and the energy of its surroundings. when the intermolecular forces of a substance are such that the energy of the surroundings is not sufficient to overcome them, it occurs in a more ordered phase like liquid or solid as is the case with water ( h2o ) ; a liquid at room temperature because its molecules are bound by hydrogen bonds. whereas hydrogen sulfide ( h2s ) is a gas at room temperature and standard pressure, as its molecules are bound by weaker dipole - dipole interactions. the transfer of
factor e^{-E/kT}, that is, the probability of a molecule to have energy greater than or equal to E at the given temperature T. this exponential dependence of a reaction rate on temperature is known as the arrhenius equation. the activation energy necessary for a chemical reaction to occur can be in the form of heat, light, electricity or mechanical force in the form of ultrasound. a related concept, free energy, which also incorporates entropy considerations, is a very useful means for predicting the feasibility of a reaction and determining the state of equilibrium of a chemical reaction, in chemical thermodynamics. a reaction is feasible only if the total change in the gibbs free energy is negative, ΔG ≤ 0 ; if it is equal to zero the chemical reaction is said to be at equilibrium. there exist only limited possible states of energy for electrons, atoms and molecules. these are determined by the rules of quantum mechanics, which require quantization of energy of a bound system. the atoms / molecules in a higher energy state are said to be excited. the molecules / atoms of substance in an excited energy state are often much more reactive ; that is, more amenable to chemical reactions. the phase of a substance is invariably determined by its energy and the energy of its surroundings. when the intermolecular forces of a substance are such that the energy of the surroundings is not sufficient to overcome them, it occurs in a more ordered phase like liquid or solid as is the case with water ( h2o ) ; a liquid at room temperature because its molecules are bound by hydrogen bonds. whereas hydrogen sulfide ( h2s ) is a gas at room temperature and standard pressure, as its molecules are bound by weaker dipole - dipole interactions. the transfer of energy from one chemical substance to another depends on the size of energy quanta emitted from one substance. however, heat energy is often transferred more easily from almost any substance to another because the phonons responsible for vibrational and rotational energy levels in a substance have much less energy than photons invoked for the electronic energy transfer. thus, because vibrational and rotational energy levels are more closely spaced than electronic energy levels, heat is more easily transferred between substances relative to light or other forms of electronic energy. for example, ultraviolet electromagnetic radiation is not transferred with as much efficacy from one substance to another as thermal or electrical energy. the existence of characteristic
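A small numerical illustration of the Boltzmann/Arrhenius temperature dependence described in the passages above; the activation energy and temperatures are arbitrary illustrative values, not figures from the text.

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def boltzmann_factor(ea_j_per_mol: float, t_kelvin: float) -> float:
    """exp(-Ea/RT): the weight of molecules able to surmount the activation barrier at temperature T."""
    return math.exp(-ea_j_per_mol / (R * t_kelvin))

ea = 50_000.0  # illustrative activation energy: 50 kJ/mol
for t in (298.0, 308.0, 350.0):
    print(f"T = {t:.0f} K  ->  exp(-Ea/RT) = {boltzmann_factor(ea, t):.3e}")

# The exponential growth of this factor with temperature is the content of the Arrhenius
# equation, k = A * exp(-Ea/RT): here a 10 K increase roughly doubles the factor, which is
# the familiar rule of thumb that modest heating can change a reaction rate by a large amount.
```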
not only is the bekenstein expression for the entropy of a black hole a convex function of the energy, rather than the concave function it must be, but it also predicts a final equilibrium temperature given by the harmonic mean. this violates the third law and the principle of maximum work. the property that means are monotonically increasing functions of their arguments underscores the error of passing from means of the temperature to means of the internal energy when the energy is not a monotonically increasing function of temperature. whereas the former leads to an increase in entropy, the latter leads to a decrease in entropy, thereby violating the second law. the internal energy cannot increase at a slower rate than the temperature itself.
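The harmonic-mean claim in the abstract above can be seen from a short calculation, sketched here under the assumption that the entropy has the Bekenstein form S = αE², i.e. quadratic (hence convex) in the energy; the symbol α is just an illustrative constant.

```latex
% Assumption for the sketch: each black hole has entropy S(E) = \alpha E^2.
\frac{1}{T} = \frac{dS}{dE} = 2\alpha E
  \quad\Longrightarrow\quad
  E = \frac{1}{2\alpha T}, \qquad
  \frac{d^{2}S}{dE^{2}} = 2\alpha > 0 \ \ (\text{convex, not concave}).
% Two such bodies exchange energy at fixed total E_1 + E_2 until their temperatures are equal:
\frac{2}{2\alpha T_f} = \frac{1}{2\alpha T_1} + \frac{1}{2\alpha T_2}
  \quad\Longrightarrow\quad
  T_f = \frac{2\,T_1 T_2}{T_1 + T_2},
% i.e. the harmonic mean of the initial temperatures. Because S is convex in E, this
% equal-temperature point is an entropy minimum rather than a maximum, which is the
% pathology the abstract objects to.
```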
is also higher at high temperature, as shown by carnot ' s theorem. in a conventional metallic engine, much of the energy released from the fuel must be dissipated as waste heat in order to prevent a meltdown of the metallic parts. despite all of these desirable properties, such engines are not in production because the manufacturing of ceramic parts in the requisite precision and durability is difficult. imperfection in the ceramic leads to cracks, which can lead to potentially dangerous equipment failure. such engines are possible in laboratory settings, but mass - production is not feasible with current technology. work is being done in developing ceramic parts for gas turbine engines. currently, even blades made of advanced metal alloys used in the engines ' hot section require cooling and careful limiting of operating temperatures. turbine engines made with ceramics could operate more efficiently, giving aircraft greater range and payload for a set amount of fuel. recently, there have been advances in ceramics which include bio - ceramics, such as dental implants and synthetic bones. hydroxyapatite, the natural mineral component of bone, has been made synthetically from a number of biological and chemical sources and can be formed into ceramic materials. orthopedic implants made from these materials bond readily to bone and other tissues in the body without rejection or inflammatory reactions. because of this, they are of great interest for gene delivery and tissue engineering scaffolds. most hydroxyapatite ceramics are very porous and lack mechanical strength and are used to coat metal orthopedic devices to aid in forming a bond to bone or as bone fillers. they are also used as fillers for orthopedic plastic screws to aid in reducing the inflammation and increase absorption of these plastic materials. work is being done to make strong, fully dense nano crystalline hydroxyapatite ceramic materials for orthopedic weight bearing devices, replacing foreign metal and plastic orthopedic materials with a synthetic, but naturally occurring, bone mineral. ultimately these ceramic materials may be used as bone replacements or with the incorporation of protein collagens, synthetic bones. durable actinide - containing ceramic materials have many applications such as in nuclear fuels for burning excess pu and in chemically - inert sources of alpha irradiation for power supply of unmanned space vehicles or to produce electricity for microelectronic devices. both use and disposal of radioactive actinides require their immobilization in a durable host material. nuclear waste long - lived radionuclides such as actinides are immobilized using chemical
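The efficiency gain from hotter operation invoked above follows from the Carnot limit, eta = 1 - T_cold / T_hot with temperatures in kelvin. A minimal sketch; the temperatures below are illustrative placeholders, not figures for any particular metallic or ceramic engine.

```python
def carnot_efficiency(t_hot_k: float, t_cold_k: float) -> float:
    """Maximum fraction of heat convertible to work between two reservoir temperatures (kelvin)."""
    return 1.0 - t_cold_k / t_hot_k

t_cold = 300.0  # roughly ambient, K
for t_hot in (1100.0, 1500.0):   # illustrative hot-section temperatures (lower vs higher operating limit)
    print(f"T_hot = {t_hot:.0f} K -> Carnot limit {carnot_efficiency(t_hot, t_cold):.0%}")
# Raising the allowable hot-side temperature raises the theoretical ceiling on efficiency,
# which is why parts that tolerate higher temperatures than metal alloys are attractive.
```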
Question: Which is the most likely response of a human to an increase in temperature?
A) increasing the amount of perspiration
B) increasing waste elimination from the bladder
C) a reduction in the desire for liquids
D) a contraction in blood vessels in the skin
|
A) increasing the amount of perspiration
|
Context:
various forms that are characteristic of its life cycle. there are four key processes that underlie development : determination, differentiation, morphogenesis, and growth. determination sets the developmental fate of a cell, which becomes more restrictive during development. differentiation is the process by which specialized cells arise from less specialized cells such as stem cells. stem cells are undifferentiated or partially differentiated cells that can differentiate into various types of cells and proliferate indefinitely to produce more of the same stem cell. cellular differentiation dramatically changes a cell ' s size, shape, membrane potential, metabolic activity, and responsiveness to signals, which are largely due to highly controlled modifications in gene expression and epigenetics. with a few exceptions, cellular differentiation almost never involves a change in the dna sequence itself. thus, different cells can have very different physical characteristics despite having the same genome. morphogenesis, or the development of body form, is the result of spatial differences in gene expression. a small fraction of the genes in an organism ' s genome called the developmental - genetic toolkit control the development of that organism. these toolkit genes are highly conserved among phyla, meaning that they are ancient and very similar in widely separated groups of animals. differences in deployment of toolkit genes affect the body plan and the number, identity, and pattern of body parts. among the most important toolkit genes are the hox genes. hox genes determine where repeating parts, such as the many vertebrae of snakes, will grow in a developing embryo or larva. = = evolution = = = = = evolutionary processes = = = evolution is a central organizing concept in biology. it is the change in heritable characteristics of populations over successive generations. in artificial selection, animals were selectively bred for specific traits. given that traits are inherited, populations contain a varied mix of traits, and reproduction is able to increase any population, darwin argued that in the natural world, it was nature that played the role of humans in selecting for specific traits. darwin inferred that individuals who possessed heritable traits better adapted to their environments are more likely to survive and produce more offspring than other individuals. he further inferred that this would lead to the accumulation of favorable traits over successive generations, thereby increasing the match between the organisms and their environment. = = = speciation = = = a species is a group of organisms that mate with one another and speciation is the process by which one lineage splits into two lineages as a result of having evolved independently from each other
is the scientific study of inheritance. mendelian inheritance, specifically, is the process by which genes and traits are passed on from parents to offspring. it has several principles. the first is that genetic characteristics, alleles, are discrete and have alternate forms ( e. g., purple vs. white or tall vs. dwarf ), each inherited from one of two parents. the second is the law of dominance and uniformity, which states that some alleles are dominant while others are recessive ; an organism with at least one dominant allele will display the phenotype of that dominant allele. during gamete formation, the alleles for each gene segregate, so that each gamete carries only one allele for each gene. heterozygous individuals produce gametes with an equal frequency of the two alleles. finally, the law of independent assortment states that genes of different traits can segregate independently during the formation of gametes, i. e., genes are unlinked. exceptions to this rule include traits that are sex - linked. test crosses can be performed to experimentally determine the underlying genotype of an organism with a dominant phenotype. a punnett square can be used to predict the results of a test cross. the chromosome theory of inheritance, which states that genes are found on chromosomes, was supported by thomas morgan ' s experiments with fruit flies, which established the sex linkage of eye color in these insects. = = = genes and dna = = = a gene is a unit of heredity that corresponds to a region of deoxyribonucleic acid ( dna ) that carries genetic information that controls form or function of an organism. dna is composed of two polynucleotide chains that coil around each other to form a double helix. it is found as linear chromosomes in eukaryotes, and circular chromosomes in prokaryotes. the set of chromosomes in a cell is collectively known as its genome. in eukaryotes, dna is mainly in the cell nucleus. in prokaryotes, the dna is held within the nucleoid. the genetic information is held within genes, and the complete assemblage in an organism is called its genotype. dna replication is a semiconservative process whereby each strand serves as a template for a new strand of dna. mutations are heritable changes in dna. they can arise spontaneously as a result of replication errors that were not corrected by proofreading or can
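The Punnett-square prediction mentioned above is just the cross product of the two parents' gametes. A small sketch for a monohybrid test cross (a dominant-phenotype heterozygote crossed with a homozygous recessive); the allele symbols are illustrative.

```python
from collections import Counter
from itertools import product

def punnett(parent1: str, parent2: str) -> Counter:
    """Offspring genotype counts for a single-gene cross: each gamete carries one allele."""
    offspring = ("".join(sorted(g1 + g2)) for g1, g2 in product(parent1, parent2))
    return Counter(offspring)

# Test cross: heterozygote (Aa, dominant phenotype) x homozygous recessive (aa).
print(punnett("Aa", "aa"))   # Counter({'Aa': 2, 'aa': 2}) -> the expected 1:1 phenotype ratio
```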
gametes, i. e., genes are unlinked. exceptions to this rule include traits that are sex - linked. test crosses can be performed to experimentally determine the underlying genotype of an organism with a dominant phenotype. a punnett square can be used to predict the results of a test cross. the chromosome theory of inheritance, which states that genes are found on chromosomes, was supported by thomas morgan ' s experiments with fruit flies, which established the sex linkage of eye color in these insects. = = = genes and dna = = = a gene is a unit of heredity that corresponds to a region of deoxyribonucleic acid ( dna ) that carries genetic information that controls form or function of an organism. dna is composed of two polynucleotide chains that coil around each other to form a double helix. it is found as linear chromosomes in eukaryotes, and circular chromosomes in prokaryotes. the set of chromosomes in a cell is collectively known as its genome. in eukaryotes, dna is mainly in the cell nucleus. in prokaryotes, the dna is held within the nucleoid. the genetic information is held within genes, and the complete assemblage in an organism is called its genotype. dna replication is a semiconservative process whereby each strand serves as a template for a new strand of dna. mutations are heritable changes in dna. they can arise spontaneously as a result of replication errors that were not corrected by proofreading or can be induced by an environmental mutagen such as a chemical ( e. g., nitrous acid, benzopyrene ) or radiation ( e. g., x - ray, gamma ray, ultraviolet radiation, particles emitted by unstable isotopes ). mutations can lead to phenotypic effects such as loss - of - function, gain - of - function, and conditional mutations. some mutations are beneficial, as they are a source of genetic variation for evolution. others are harmful if they result in a loss of function of genes needed for survival. = = = gene expression = = = gene expression is the molecular process by which a genotype encoded in dna gives rise to an observable phenotype in the proteins of an organism ' s body. this process is summarized by the central dogma of molecular biology, which was formulated by francis crick in 1958. according to the central dogma, genetic information flows from dna
genetic engineering, also called genetic modification or genetic manipulation, is the modification and manipulation of an organism ' s genes using technology. it is a set of technologies used to change the genetic makeup of cells, including the transfer of genes within and across species boundaries to produce improved or novel organisms. new dna is obtained by either isolating and copying the genetic material of interest using recombinant dna methods or by artificially synthesising the dna. a construct is usually created and used to insert this dna into the host organism. the first recombinant dna molecule was made by paul berg in 1972 by combining dna from the monkey virus sv40 with the lambda virus. as well as inserting genes, the process can be used to remove, or " knock out ", genes. the new dna can be inserted randomly, or targeted to a specific part of the genome. an organism that is generated through genetic engineering is considered to be genetically modified ( gm ) and the resulting entity is a genetically modified organism ( gmo ). the first gmo was a bacterium generated by herbert boyer and stanley cohen in 1973. rudolf jaenisch created the first gm animal when he inserted foreign dna into a mouse in 1974. the first company to focus on genetic engineering, genentech, was founded in 1976 and started the production of human proteins. genetically engineered human insulin was produced in 1978 and insulin - producing bacteria were commercialised in 1982. genetically modified food has been sold since 1994, with the release of the flavr savr tomato. the flavr savr was engineered to have a longer shelf life, but most current gm crops are modified to increase resistance to insects and herbicides. glofish, the first gmo designed as a pet, was sold in the united states in december 2003. in 2016 salmon modified with a growth hormone were sold. genetic engineering has been applied in numerous fields including research, medicine, industrial biotechnology and agriculture. in research, gmos are used to study gene function and expression through loss of function, gain of function, tracking and expression experiments. by knocking out genes responsible for certain conditions it is possible to create animal model organisms of human diseases. as well as producing hormones, vaccines and other drugs, genetic engineering has the potential to cure genetic diseases through gene therapy. chinese hamster ovary ( cho ) cells are used in industrial genetic engineering. additionally mrna vaccines are made through genetic engineering to prevent infections by viruses such as covid - 19. the same techniques that are used to produce drugs can also have industrial applications such
multiply quickly, relatively easy to transform and can be stored at - 80 °c almost indefinitely. once a gene is isolated it can be stored inside the bacteria, providing an unlimited supply for research. organisms are genetically engineered to discover the functions of certain genes. this could be the effect on the phenotype of the organism, where the gene is expressed or what other genes it interacts with. these experiments generally involve loss of function, gain of function, tracking and expression. loss of function experiments, such as in a gene knockout experiment, in which an organism is engineered to lack the activity of one or more genes. in a simple knockout a copy of the desired gene has been altered to make it non - functional. embryonic stem cells incorporate the altered gene, which replaces the already present functional copy. these stem cells are injected into blastocysts, which are implanted into surrogate mothers. this allows the experimenter to analyse the defects caused by this mutation and thereby determine the role of particular genes. it is used especially frequently in developmental biology. when this is done by creating a library of genes with point mutations at every position in the area of interest, or even every position in the whole gene, this is called " scanning mutagenesis ". the simplest method, and the first to be used, is " alanine scanning ", where every position in turn is mutated to the unreactive amino acid alanine. gain of function experiments, the logical counterpart of knockouts. these are sometimes performed in conjunction with knockout experiments to more finely establish the function of the desired gene. the process is much the same as that in knockout engineering, except that the construct is designed to increase the function of the gene, usually by providing extra copies of the gene or inducing synthesis of the protein more frequently. gain of function is used to tell whether or not a protein is sufficient for a function, but does not always mean it is required, especially when dealing with genetic or functional redundancy. tracking experiments, which seek to gain information about the localisation and interaction of the desired protein. one way to do this is to replace the wild - type gene with a ' fusion ' gene, which is a juxtaposition of the wild - type gene with a reporting element such as green fluorescent protein ( gfp ) that will allow easy visualisation of the products of the genetic modification. while this is a useful technique, the manipulation can destroy the function of the gene, creating secondary effects and possibly calling into question the results of the experiment.
to chromatin, which is a complex of dna and protein found in eukaryotic cells. = = = genes, development, and evolution = = = development is the process by which a multicellular organism ( plant or animal ) goes through a series of changes, starting from a single cell, and taking on various forms that are characteristic of its life cycle. there are four key processes that underlie development : determination, differentiation, morphogenesis, and growth. determination sets the developmental fate of a cell, which becomes more restrictive during development. differentiation is the process by which specialized cells arise from less specialized cells such as stem cells. stem cells are undifferentiated or partially differentiated cells that can differentiate into various types of cells and proliferate indefinitely to produce more of the same stem cell. cellular differentiation dramatically changes a cell ' s size, shape, membrane potential, metabolic activity, and responsiveness to signals, which are largely due to highly controlled modifications in gene expression and epigenetics. with a few exceptions, cellular differentiation almost never involves a change in the dna sequence itself. thus, different cells can have very different physical characteristics despite having the same genome. morphogenesis, or the development of body form, is the result of spatial differences in gene expression. a small fraction of the genes in an organism ' s genome called the developmental - genetic toolkit control the development of that organism. these toolkit genes are highly conserved among phyla, meaning that they are ancient and very similar in widely separated groups of animals. differences in deployment of toolkit genes affect the body plan and the number, identity, and pattern of body parts. among the most important toolkit genes are the hox genes. hox genes determine where repeating parts, such as the many vertebrae of snakes, will grow in a developing embryo or larva. = = evolution = = = = = evolutionary processes = = = evolution is a central organizing concept in biology. it is the change in heritable characteristics of populations over successive generations. in artificial selection, animals were selectively bred for specific traits. given that traits are inherited, populations contain a varied mix of traits, and reproduction is able to increase any population, darwin argued that in the natural world, it was nature that played the role of humans in selecting for specific traits. darwin inferred that individuals who possessed heritable traits better adapted to their environments are more likely to survive and produce more offspring than other individuals. he further inferred that this would lead to the
can be activated by inducers are called inducible genes, in contrast to constitutive genes that are almost constantly active. in contrast to both, structural genes encode proteins that are not involved in gene regulation. in addition to regulatory events involving the promoter, gene expression can also be regulated by epigenetic changes to chromatin, which is a complex of dna and protein found in eukaryotic cells. = = = genes, development, and evolution = = = development is the process by which a multicellular organism ( plant or animal ) goes through a series of changes, starting from a single cell, and taking on various forms that are characteristic of its life cycle. there are four key processes that underlie development : determination, differentiation, morphogenesis, and growth. determination sets the developmental fate of a cell, which becomes more restrictive during development. differentiation is the process by which specialized cells arise from less specialized cells such as stem cells. stem cells are undifferentiated or partially differentiated cells that can differentiate into various types of cells and proliferate indefinitely to produce more of the same stem cell. cellular differentiation dramatically changes a cell ' s size, shape, membrane potential, metabolic activity, and responsiveness to signals, which are largely due to highly controlled modifications in gene expression and epigenetics. with a few exceptions, cellular differentiation almost never involves a change in the dna sequence itself. thus, different cells can have very different physical characteristics despite having the same genome. morphogenesis, or the development of body form, is the result of spatial differences in gene expression. a small fraction of the genes in an organism ' s genome called the developmental - genetic toolkit control the development of that organism. these toolkit genes are highly conserved among phyla, meaning that they are ancient and very similar in widely separated groups of animals. differences in deployment of toolkit genes affect the body plan and the number, identity, and pattern of body parts. among the most important toolkit genes are the hox genes. hox genes determine where repeating parts, such as the many vertebrae of snakes, will grow in a developing embryo or larva. = = evolution = = = = = evolutionary processes = = = evolution is a central organizing concept in biology. it is the change in heritable characteristics of populations over successive generations. in artificial selection, animals were selectively bred for specific traits. given that traits are inherited, populations contain a varied mix of traits, and reproduction is able to increase any population,
for natural scientists, with the creation of transgenic organisms one of the most important tools for analysis of gene function. genes and other genetic information from a wide range of organisms can be inserted into bacteria for storage and modification, creating genetically modified bacteria in the process. bacteria are cheap, easy to grow, clonal, multiply quickly, relatively easy to transform and can be stored at - 80 Β°c almost indefinitely. once a gene is isolated it can be stored inside the bacteria providing an unlimited supply for research. organisms are genetically engineered to discover the functions of certain genes. this could be the effect on the phenotype of the organism, where the gene is expressed or what other genes it interacts with. these experiments generally involve loss of function, gain of function, tracking and expression. loss of function experiments, such as in a gene knockout experiment, in which an organism is engineered to lack the activity of one or more genes. in a simple knockout a copy of the desired gene has been altered to make it non - functional. embryonic stem cells incorporate the altered gene, which replaces the already present functional copy. these stem cells are injected into blastocysts, which are implanted into surrogate mothers. this allows the experimenter to analyse the defects caused by this mutation and thereby determine the role of particular genes. it is used especially frequently in developmental biology. when this is done by creating a library of genes with point mutations at every position in the area of interest, or even every position in the whole gene, this is called " scanning mutagenesis ". the simplest method, and the first to be used, is " alanine scanning ", where every position in turn is mutated to the unreactive amino acid alanine. gain of function experiments, the logical counterpart of knockouts. these are sometimes performed in conjunction with knockout experiments to more finely establish the function of the desired gene. the process is much the same as that in knockout engineering, except that the construct is designed to increase the function of the gene, usually by providing extra copies of the gene or inducing synthesis of the protein more frequently. gain of function is used to tell whether or not a protein is sufficient for a function, but does not always mean it is required, especially when dealing with genetic or functional redundancy. tracking experiments, which seek to gain information about the localisation and interaction of the desired protein. one way to do this is to replace the wild - type gene with a ' fusion ' gene, which is a juxtaposition
and the creation of genetically modified crops. = = = epigenetics = = = epigenetics is the study of heritable changes in gene function that cannot be explained by changes in the underlying dna sequence but cause the organism ' s genes to behave ( or " express themselves " ) differently. one example of epigenetic change is the marking of the genes by dna methylation which determines whether they will be expressed or not. gene expression can also be controlled by repressor proteins that attach to silencer regions of the dna and prevent that region of the dna code from being expressed. epigenetic marks may be added or removed from the dna during programmed stages of development of the plant, and are responsible, for example, for the differences between anthers, petals and normal leaves, despite the fact that they all have the same underlying genetic code. epigenetic changes may be temporary or may remain through successive cell divisions for the remainder of the cell ' s life. some epigenetic changes have been shown to be heritable, while others are reset in the germ cells. epigenetic changes in eukaryotic biology serve to regulate the process of cellular differentiation. during morphogenesis, totipotent stem cells become the various pluripotent cell lines of the embryo, which in turn become fully differentiated cells. a single fertilised egg cell, the zygote, gives rise to the many different plant cell types including parenchyma, xylem vessel elements, phloem sieve tubes, guard cells of the epidermis, etc. as it continues to divide. the process results from the epigenetic activation of some genes and inhibition of others. unlike animals, many plant cells, particularly those of the parenchyma, do not terminally differentiate, remaining totipotent with the ability to give rise to a new individual plant. exceptions include highly lignified cells, the sclerenchyma and xylem which are dead at maturity, and the phloem sieve tubes which lack nuclei. while plants use many of the same epigenetic mechanisms as animals, such as chromatin remodelling, an alternative hypothesis is that plants set their gene expression patterns using positional information from the environment and surrounding cells to determine their developmental fate. epigenetic changes can lead to paramutations, which do not follow the mendelian heritage rules. these epigenetic marks are carried from one generation to the next,
for the treatment of diabetes, was previously extracted from the pancreas of abattoir animals ( cattle or pigs ). the genetically engineered bacteria are able to produce large quantities of synthetic human insulin at relatively low cost. biotechnology has also enabled emerging therapeutics like gene therapy. the application of biotechnology to basic science ( for example through the human genome project ) has also dramatically improved our understanding of biology and as our scientific knowledge of normal and disease biology has increased, our ability to develop new medicines to treat previously untreatable diseases has increased as well. genetic testing allows the genetic diagnosis of vulnerabilities to inherited diseases, and can also be used to determine a child ' s parentage ( genetic mother and father ) or in general a person ' s ancestry. in addition to studying chromosomes to the level of individual genes, genetic testing in a broader sense includes biochemical tests for the possible presence of genetic diseases, or mutant forms of genes associated with increased risk of developing genetic disorders. genetic testing identifies changes in chromosomes, genes, or proteins. most of the time, testing is used to find changes that are associated with inherited disorders. the results of a genetic test can confirm or rule out a suspected genetic condition or help determine a person ' s chance of developing or passing on a genetic disorder. as of 2011 several hundred genetic tests were in use. since genetic testing may open up ethical or psychological problems, genetic testing is often accompanied by genetic counseling. = = = agriculture = = = genetically modified crops ( " gm crops ", or " biotech crops " ) are plants used in agriculture, the dna of which has been modified with genetic engineering techniques. in most cases, the main aim is to introduce a new trait that does not occur naturally in the species. biotechnology firms can contribute to future food security by improving the nutrition and viability of urban agriculture. furthermore, the protection of intellectual property rights encourages private sector investment in agrobiotechnology. examples in food crops include resistance to certain pests, diseases, stressful environmental conditions, resistance to chemical treatments ( e. g. resistance to a herbicide ), reduction of spoilage, or improving the nutrient profile of the crop. examples in non - food crops include production of pharmaceutical agents, biofuels, and other industrially useful goods, as well as for bioremediation. farmers have widely adopted gm technology. between 1996 and 2011, the total surface area of land cultivated with gm crops had increased by a factor of 94, from 17, 000 to 1, 600, 000 square
Question: An organism's traits are largely determined by the genetic makeup of its parents. A mutation in which kinds of cells in a parent could cause a new trait to appear in the parent's offspring?
A) sperm or egg
B) egg or nerve
C) nerve or muscle
D) muscle or sperm
|
A) sperm or egg
|
Context:
the gravitational waves are non - physical sinuosities generated, in the last analysis, by undulating reference frames.
is called its bandwidth ( bw ). for any given signal - to - noise ratio, a given bandwidth can carry the same amount of information regardless of where in the radio frequency spectrum it is located ; bandwidth is a measure of information - carrying capacity. the bandwidth required by a radio transmission depends on the data rate of the information being sent, and the spectral efficiency of the modulation method used ; how much data it can transmit in each unit of bandwidth. different types of information signals carried by radio have different data rates. for example, a television signal has a greater data rate than an audio signal. the radio spectrum, the total range of radio frequencies that can be used for communication in a given area, is a limited resource. each radio transmission occupies a portion of the total bandwidth available. radio bandwidth is regarded as an economic good which has a monetary cost and is in increasing demand. in some parts of the radio spectrum, the right to use a frequency band or even a single radio channel is bought and sold for millions of dollars. so there is an incentive to employ technology to minimize the bandwidth used by radio services. a slow transition from analog to digital radio transmission technologies began in the late 1990s. part of the reason for this is that digital modulation can often transmit more information ( a greater data rate ) in a given bandwidth than analog modulation, by using data compression algorithms, which reduce redundancy in the data to be sent, and more efficient modulation. other reasons for the transition is that digital modulation has greater noise immunity than analog, digital signal processing chips have more power and flexibility than analog circuits, and a wide variety of types of information can be transmitted using the same digital modulation. because it is a fixed resource which is in demand by an increasing number of users, the radio spectrum has become increasingly congested in recent decades, and the need to use it more effectively is driving many additional radio innovations such as trunked radio systems, spread spectrum ( ultra - wideband ) transmission, frequency reuse, dynamic spectrum management, frequency pooling, and cognitive radio. = = = itu frequency bands = = = the itu arbitrarily divides the radio spectrum into 12 bands, each beginning at a wavelength which is a power of ten ( 10^n ) metres, with corresponding frequency of 3 times a power of ten, and each covering a decade of frequency or wavelength. each of these bands has a traditional name : it can be seen that the bandwidth, the range of frequencies, contained in each band is not equal but increases exponentially as the
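The statement above that bandwidth, together with signal-to-noise ratio, fixes information-carrying capacity is usually quantified with the Shannon-Hartley theorem, C = B * log2(1 + S/N). A minimal sketch with illustrative numbers; the passage itself does not give any specific channel figures.

```python
import math

def shannon_capacity_bps(bandwidth_hz: float, snr_linear: float) -> float:
    """Upper bound on error-free data rate for a bandwidth-limited channel with additive white Gaussian noise."""
    return bandwidth_hz * math.log2(1.0 + snr_linear)

# Example: a 10 kHz channel at 30 dB SNR; the result is the same wherever in the spectrum the channel sits.
snr_db = 30.0
snr = 10 ** (snr_db / 10)
print(f"{shannon_capacity_bps(10_000, snr):,.0f} bit/s")   # roughly 100,000 bit/s
```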
world made wide use of hydropower, along with early uses of tidal power, wind power, fossil fuels such as petroleum, and large factory complexes ( tiraz in arabic ). a variety of industrial mills were employed in the islamic world, including fulling mills, gristmills, hullers, sawmills, ship mills, stamp mills, steel mills, and tide mills. by the 11th century, every province throughout the islamic world had these industrial mills in operation. muslim engineers also employed water turbines and gears in mills and water - raising machines, and pioneered the use of dams as a source of water power, used to provide additional power to watermills and water - raising machines. many of these technologies were transferred to medieval europe. wind - powered machines used to grind grain and pump water, the windmill and wind pump, first appeared in what are now iran, afghanistan and pakistan by the 9th century. they were used to grind grains and draw up water, and used in the gristmilling and sugarcane industries. sugar mills first appeared in the medieval islamic world. they were first driven by watermills, and then windmills from the 9th and 10th centuries in what are today afghanistan, pakistan and iran. crops such as almonds and citrus fruit were brought to europe through al - andalus, and sugar cultivation was gradually adopted across europe. arab merchants dominated trade in the indian ocean until the arrival of the portuguese in the 16th century. the muslim world adopted papermaking from china. the earliest paper mills appeared in abbasid - era baghdad during 794 β 795. the knowledge of gunpowder was also transmitted from china via predominantly islamic countries, where formulas for pure potassium nitrate were developed. the spinning wheel was invented in the islamic world by the early 11th century. it was later widely adopted in europe, where it was adapted into the spinning jenny, a key device during the industrial revolution. the crankshaft was invented by al - jazari in 1206, and is central to modern machinery such as the steam engine, internal combustion engine and automatic controls. the camshaft was also first described by al - jazari in 1206. early programmable machines were also invented in the muslim world. the first music sequencer, a programmable musical instrument, was an automated flute player invented by the banu musa brothers, described in their book of ingenious devices, in the 9th century. in 1206, al - jazari invented programmable automata / robots. he described four automaton musicians, including two
- power radio transceivers that handle the short - range bidirectional radio link. as of 2022, cordless phones in most nations use the dect transmission standard. land mobile radio system β short - range mobile or portable half - duplex radio transceivers operating in the vhf or uhf band that can be used without a license. they are often installed in vehicles, with the mobile units communicating with a dispatcher at a fixed base station. special systems with reserved frequencies are used by first responder services ; police, fire, ambulance, and emergency services, and other government services. other systems are made for use by commercial firms such as taxi and delivery services. vhf systems use channels in the range 30 β 50 mhz and 150 β 172 mhz. uhf systems use the 450 β 470 mhz band and in some areas the 470 β 512 mhz range. in general, vhf systems have a longer range than uhf but require longer antennas. am or fm modulation is mainly used, but digital systems such as dmr are being introduced. the radiated power is typically limited to 4 watts. these systems have a fairly limited range, usually 3 to 20 miles ( 4. 8 to 32 km ) depending on terrain. repeaters installed on tall buildings, hills, or mountain peaks are often used to increase the range when it is desired to cover a larger area than line - of - sight. examples of land mobile systems are cb, frs, gmrs, and murs. modern digital systems, called trunked radio systems, have a digital channel management system using a control channel that automatically assigns frequency channels to user groups. walkie - talkie β a battery - powered portable handheld half - duplex two - way radio, used in land mobile radio systems. airband β half - duplex radio system used by aircraft pilots to talk to other aircraft and ground - based air traffic controllers. this vital system is the main communication channel for air traffic control. for most communication in overland flights in air corridors a vhf - am system using channels between 108 and 137 mhz in the vhf band is used. this system has a typical transmission range of 200 miles ( 320 km ) for aircraft flying at cruising altitude. for flights in more remote areas, such as transoceanic airline flights, aircraft use the hf band or channels on the inmarsat or iridium satphone satellites. military aircraft also use a dedicated uhf - am band from 225. 0 to 399. 95 mhz. marine radio β medium - range transceivers on
wireless communication ( or just wireless, when the context allows ) is the transfer of information ( telecommunication ) between two or more points without the use of an electrical conductor, optical fiber or other continuous guided medium for the transfer. the most common wireless technologies use radio waves. with radio waves, intended distances can be short, such as a few meters for bluetooth, or as far as millions of kilometers for deep - space radio communications. it encompasses various types of fixed, mobile, and portable applications, including two - way radios, cellular telephones, personal digital assistants ( pdas ), and wireless networking. other examples of applications of radio wireless technology include gps units, garage door openers, wireless computer mouse, keyboards and headsets, headphones, radio receivers, satellite television, broadcast television and cordless telephones. somewhat less common methods of achieving wireless communications involve other electromagnetic phenomena, such as light and magnetic or electric fields, or the use of sound. the term wireless has been used twice in communications history, with slightly different meanings. it was initially used from about 1890 for the first radio transmitting and receiving technology, as in wireless telegraphy, until the new word radio replaced it around 1920. radio sets in the uk and the english - speaking world that were not portable continued to be referred to as wireless sets into the 1960s. the term wireless was revived in the 1980s and 1990s mainly to distinguish digital devices that communicate without wires, such as the examples listed in the previous paragraph, from those that require wires or cables. this became its primary usage in the 2000s, due to the advent of technologies such as mobile broadband, wi - fi, and bluetooth. wireless operations permit services, such as mobile and interplanetary communications, that are impossible or impractical to implement with the use of wires. the term is commonly used in the telecommunications industry to refer to telecommunications systems ( e. g. radio transmitters and receivers, remote controls, etc. ) that use some form of energy ( e. g. radio waves and acoustic energy ) to transfer information without the use of wires. information is transferred in this manner over both short and long distances. = = history = = = = = photophone = = = the first wireless telephone conversation occurred in 1880 when alexander graham bell and charles sumner tainter invented the photophone, a telephone that sent audio over a beam of light. the photophone required sunlight to operate, and a clear line of sight between the transmitter and receiver, which greatly decreased the viability of the photophone in any practical use
the transition of our energy system to renewable energies is necessary to avoid heating the climate any further and to achieve climate neutrality. the use of wind energy plays an important role in this transition in germany. but how much wind energy can be used, and what are the possible consequences for the atmosphere if more and more wind energy is used?
reference to recent papers and experimental feasibility are added. the paper will not be published in a hard - copy journal.
, they use the energy of plants ( agricultural revolution ). in the fourth, they learn to use the energy of natural resources : coal, oil, gas. in the fifth, they harness nuclear energy. white introduced the formula p = e / t, where p is the development index, e is a measure of energy consumed, and t is the measure of the efficiency of technical factors using the energy. in his own words, " culture evolves as the amount of energy harnessed per capita per year is increased, or as the efficiency of the instrumental means of putting the energy to work is increased ". nikolai kardashev extrapolated his theory, creating the kardashev scale, which categorizes the energy use of advanced civilizations. lenski ' s approach focuses on information. the more information and knowledge ( especially allowing the shaping of natural environment ) a given society has, the more advanced it is. he identifies four stages of human development, based on advances in the history of communication. in the first stage, information is passed by genes. in the second, when humans gain sentience, they can learn and pass information through experience. in the third, the humans start using signs and develop logic. in the fourth, they can create symbols, develop language and writing. advancements in communications technology translate into advancements in the economic system and political system, distribution of wealth, social inequality and other spheres of social life. he also differentiates societies based on their level of technology, communication, and economy : hunter - gatherer, simple agricultural, advanced agricultural, industrial, special ( such as fishing societies ). in economics, productivity is a measure of technological progress. productivity increases when fewer inputs ( classically labor and capital but some measures include energy and materials ) are used in the production of a unit of output. another indicator of technological progress is the development of new products and services, which is necessary to offset unemployment that would otherwise result as labor inputs are reduced. in developed countries productivity growth has been slowing since the late 1970s ; however, productivity growth was higher in some economic sectors, such as manufacturing. for example, employment in manufacturing in the united states declined from over 30 % in the 1940s to just over 10 % 70 years later. similar changes occurred in other developed countries. this stage is referred to as post - industrial. in the late 1970s sociologists and anthropologists like alvin toffler ( author of future shock ), daniel bell and john naisbitt have approached the theories of post - industrial societies,
be used at high latitudes because of terrestrial interference. cordless phone β a landline telephone in which the handset is portable and communicates with the rest of the phone by a short - range full duplex radio link, instead of being attached by a cord. both the handset and the base station have low - power radio transceivers that handle the short - range bidirectional radio link. as of 2022, cordless phones in most nations use the dect transmission standard. land mobile radio system β short - range mobile or portable half - duplex radio transceivers operating in the vhf or uhf band that can be used without a license. they are often installed in vehicles, with the mobile units communicating with a dispatcher at a fixed base station. special systems with reserved frequencies are used by first responder services ; police, fire, ambulance, and emergency services, and other government services. other systems are made for use by commercial firms such as taxi and delivery services. vhf systems use channels in the range 30 β 50 mhz and 150 β 172 mhz. uhf systems use the 450 β 470 mhz band and in some areas the 470 β 512 mhz range. in general, vhf systems have a longer range than uhf but require longer antennas. am or fm modulation is mainly used, but digital systems such as dmr are being introduced. the radiated power is typically limited to 4 watts. these systems have a fairly limited range, usually 3 to 20 miles ( 4. 8 to 32 km ) depending on terrain. repeaters installed on tall buildings, hills, or mountain peaks are often used to increase the range when it is desired to cover a larger area than line - of - sight. examples of land mobile systems are cb, frs, gmrs, and murs. modern digital systems, called trunked radio systems, have a digital channel management system using a control channel that automatically assigns frequency channels to user groups. walkie - talkie β a battery - powered portable handheld half - duplex two - way radio, used in land mobile radio systems. airband β half - duplex radio system used by aircraft pilots to talk to other aircraft and ground - based air traffic controllers. this vital system is the main communication channel for air traffic control. for most communication in overland flights in air corridors a vhf - am system using channels between 108 and 137 mhz in the vhf band is used. this system has a typical transmission range of 200 miles ( 320 km ) for aircraft flying at cruising altitude. for flights in more
evolution of culture is energy. for white, " the primary function of culture " is to " harness and control energy. " white differentiates between five stages of human development : in the first, people use the energy of their own muscles. in the second, they use the energy of domesticated animals. in the third, they use the energy of plants ( agricultural revolution ). in the fourth, they learn to use the energy of natural resources : coal, oil, gas. in the fifth, they harness nuclear energy. white introduced the formula p = e / t, where p is the development index, e is a measure of energy consumed, and t is the measure of the efficiency of technical factors using the energy. in his own words, " culture evolves as the amount of energy harnessed per capita per year is increased, or as the efficiency of the instrumental means of putting the energy to work is increased ". nikolai kardashev extrapolated his theory, creating the kardashev scale, which categorizes the energy use of advanced civilizations. lenski ' s approach focuses on information. the more information and knowledge ( especially allowing the shaping of natural environment ) a given society has, the more advanced it is. he identifies four stages of human development, based on advances in the history of communication. in the first stage, information is passed by genes. in the second, when humans gain sentience, they can learn and pass information through experience. in the third, the humans start using signs and develop logic. in the fourth, they can create symbols, develop language and writing. advancements in communications technology translate into advancements in the economic system and political system, distribution of wealth, social inequality and other spheres of social life. he also differentiates societies based on their level of technology, communication, and economy : hunter - gatherer, simple agricultural, advanced agricultural, industrial, special ( such as fishing societies ). in economics, productivity is a measure of technological progress. productivity increases when fewer inputs ( classically labor and capital but some measures include energy and materials ) are used in the production of a unit of output. another indicator of technological progress is the development of new products and services, which is necessary to offset unemployment that would otherwise result as labor inputs are reduced. in developed countries productivity growth has been slowing since the late 1970s ; however, productivity growth was higher in some economic sectors, such as manufacturing. for example, employment in manufacturing in the united states declined from over 30 % in the 1940s to just
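White's development index is more often written multiplicatively, P = E * T, which matches the statement that the index rises when either the energy harnessed per capita or the efficiency of the technology using it increases. A minimal sketch under that multiplicative reading, with purely illustrative numbers:

```python
def development_index(energy_per_capita: float, efficiency: float) -> float:
    """White's formula read multiplicatively: P = E * T.

    energy_per_capita: energy harnessed per capita per year (arbitrary units)
    efficiency: efficiency of the technical means putting that energy to work
    """
    return energy_per_capita * efficiency

# Hypothetical numbers only: doubling either factor doubles the index.
print(development_index(100.0, 0.2))   # 20.0
print(development_index(200.0, 0.2))   # 40.0 -> more energy harnessed
print(development_index(100.0, 0.4))   # 40.0 -> more efficient technology
```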
Question: Electricity to play your radio can be made using renewable or nonrenewable resources. Which of the following resources are renewable?
A) wind and oil
B) wind and sunlight
C) natural gas and oil
D) natural gas and coal
|
B) wind and sunlight
|
Context:
the recent report on laser cooling of liquid may contradict the law of energy conservation.
the branch of biology that seeks to understand the molecular basis of biological activity in and between cells, including molecular synthesis, modification, mechanisms, and interactions. = = = water = = = life arose from the earth ' s first ocean, which formed some 3. 8 billion years ago. since then, water continues to be the most abundant molecule in every organism. water is important to life because it is an effective solvent, capable of dissolving solutes such as sodium and chloride ions or other small molecules to form an aqueous solution. once dissolved in water, these solutes are more likely to come in contact with one another and therefore take part in chemical reactions that sustain life. in terms of its molecular structure, water is a small polar molecule with a bent shape formed by the polar covalent bonds of two hydrogen ( h ) atoms to one oxygen ( o ) atom ( h2o ). because the o β h bonds are polar, the oxygen atom has a slight negative charge and the two hydrogen atoms have a slight positive charge. this polar property of water allows it to attract other water molecules via hydrogen bonds, which makes water cohesive. surface tension results from the cohesive force due to the attraction between molecules at the surface of the liquid. water is also adhesive as it is able to adhere to the surface of any polar or charged non - water molecules. water is denser as a liquid than it is as a solid ( or ice ). this unique property of water allows ice to float above liquid water such as ponds, lakes, and oceans, thereby insulating the liquid below from the cold air above. water has the capacity to absorb energy, giving it a higher specific heat capacity than other solvents such as ethanol. thus, a large amount of energy is needed to break the hydrogen bonds between water molecules to convert liquid water into water vapor. as a molecule, water is not completely stable as each water molecule continuously dissociates into hydrogen and hydroxyl ions before reforming into a water molecule again. in pure water, the number of hydrogen ions balances ( or equals ) the number of hydroxyl ions, resulting in a ph that is neutral. = = = organic compounds = = = organic compounds are molecules that contain carbon bonded to another element such as hydrogen. with the exception of water, nearly all the molecules that make up each organism contain carbon. carbon can form covalent bonds with up to four other atoms, enabling it to form diverse, large, and complex molecules. for example, a
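The passage notes that in pure water the hydrogen and hydroxyl ions balance, giving a neutral pH. Putting a number on that with the standard definition pH = -log10([H+]) (the formula is an addition here, not from the passage):

```python
import math

def ph_from_hydrogen_ion_molarity(h_molar: float) -> float:
    """pH from the hydrogen-ion concentration in mol/L: pH = -log10([H+])."""
    return -math.log10(h_molar)

# In pure water at 25 C, [H+] = [OH-] = 1e-7 mol/L, so the pH is neutral.
print(ph_from_hydrogen_ion_molarity(1e-7))   # 7.0
```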
##ediment to up - stream navigation, and there are generally variations in water level, and when the discharge becomes small in the dry season. it is impossible to maintain a sufficient depth of water in the low - water channel. the possibility to secure uniformity of depth in a river by lowering the shoals obstructing the channel depends on the nature of the shoals. a soft shoal in the bed of a river is due to deposit from a diminution in velocity of flow, produced by a reduction in fall and by a widening of the channel, or to a loss in concentration of the scour of the main current in passing over from one concave bank to the next on the opposite side. the lowering of such a shoal by dredging merely effects a temporary deepening, for it soon forms again from the causes which produced it. the removal, moreover, of the rocky obstructions at rapids, though increasing the depth and equalizing the flow at these places, produces a lowering of the river above the rapids by facilitating the efflux, which may result in the appearance of fresh shoals at the low stage of the river. where, however, narrow rocky reefs or other hard shoals stretch across the bottom of a river and present obstacles to the erosion by the current of the soft materials forming the bed of the river above and below, their removal may result in permanent improvement by enabling the river to deepen its bed by natural scour. the capability of a river to provide a waterway for navigation during the summer or throughout the dry season depends on the depth that can be secured in the channel at the lowest stage. the problem in the dry season is the small discharge and deficiency in scour during this period. a typical solution is to restrict the width of the low - water channel, concentrate all of the flow in it, and also to fix its position so that it is scoured out every year by the floods which follow the deepest part of the bed along the line of the strongest current. this can be effected by closing subsidiary low - water channels with dikes across them, and narrowing the channel at the low stage by low - dipping cross dikes extending from the river banks down the slope and pointing slightly up - stream so as to direct the water flowing over them into a central channel. = = estuarine works = = the needs of navigation may also require that a stable, continuous, navigable channel is prolonged from the navigable river to deep water at the mouth of the estuary. the interaction of river
during aqueous corrosion, atoms in the solid react chemically with oxygen, leading either to the formation of an oxide film or to the dissolution of the host material. commonly, the first step in corrosion involves an oxygen atom from the dissociated water that reacts with the surface atoms and breaks near surface bonds. in contrast, hydrogen on the surface often functions as a passivating species. here, we discovered that the roles of o and h are reversed in the early corrosion stages on a si terminated sic surface. o forms stable species on the surface, and chemical attack occurs by h that breaks the si - c bonds. this so - called hydrogen scission reaction is enabled by a newly discovered metastable bridging hydroxyl group that can form during water dissociation. the si atom that is displaced from the surface during water attack subsequently forms h2sio3, which is a known precursor to the formation of silica and silicic acid. this study suggests that the roles of h and o in oxidation need to be reconsidered.
superdielectric behavior was observed in pastes made of high surface area alumina filled to the level of incipient wetness with water containing dissolved sodium chloride ( table salt ). in some cases the dielectric constants were greater than 10 ^ 10.
the realization of karl popper ' s epr - like experiment by shih and kim ( published 1999 ) produced the result that popper hoped for : no ` ` action at a distance ' ' on one photon of an entangled pair when a measurement is made on the other photon. this experimental result is interpretable in local realistic terms : each photon has a definite position and transverse momentum most of the time ; the position measurement on one photon ( localization within a slit ) disturbs the transverse momentum of that photon in a non - predictable way in accordance with the uncertainty principle ; however, there is no effect on the other photon ( the photon that is not in a slit ) no action at a distance. the position measurement ( localization within a slit ) of the one photon destroys the entanglement between the photons ; i. e. decoherence occurs.
fluid dynamics video demonstrating the evolution of dynamic stall on a wind turbine blade.
current in passing over from one concave bank to the next on the opposite side. the lowering of such a shoal by dredging merely effects a temporary deepening, for it soon forms again from the causes which produced it. the removal, moreover, of the rocky obstructions at rapids, though increasing the depth and equalizing the flow at these places, produces a lowering of the river above the rapids by facilitating the efflux, which may result in the appearance of fresh shoals at the low stage of the river. where, however, narrow rocky reefs or other hard shoals stretch across the bottom of a river and present obstacles to the erosion by the current of the soft materials forming the bed of the river above and below, their removal may result in permanent improvement by enabling the river to deepen its bed by natural scour. the capability of a river to provide a waterway for navigation during the summer or throughout the dry season depends on the depth that can be secured in the channel at the lowest stage. the problem in the dry season is the small discharge and deficiency in scour during this period. a typical solution is to restrict the width of the low - water channel, concentrate all of the flow in it, and also to fix its position so that it is scoured out every year by the floods which follow the deepest part of the bed along the line of the strongest current. this can be effected by closing subsidiary low - water channels with dikes across them, and narrowing the channel at the low stage by low - dipping cross dikes extending from the river banks down the slope and pointing slightly up - stream so as to direct the water flowing over them into a central channel. = = estuarine works = = the needs of navigation may also require that a stable, continuous, navigable channel is prolonged from the navigable river to deep water at the mouth of the estuary. the interaction of river flow and tide needs to be modeled by computer or using scale models, moulded to the configuration of the estuary under consideration and reproducing in miniature the tidal ebb and flow and fresh - water discharge over a bed of fine sand, in which various lines of training walls can be successively inserted. the models should be capable of furnishing valuable indications of the respective effects and comparative merits of the different schemes proposed for works. = = see also = = bridge scour flood control = = references = = = = external links = = u. s. army corps of engineers β civil works program river morphology and stream restoration references
equalizing the flow at these places, produces a lowering of the river above the rapids by facilitating the efflux, which may result in the appearance of fresh shoals at the low stage of the river. where, however, narrow rocky reefs or other hard shoals stretch across the bottom of a river and present obstacles to the erosion by the current of the soft materials forming the bed of the river above and below, their removal may result in permanent improvement by enabling the river to deepen its bed by natural scour. the capability of a river to provide a waterway for navigation during the summer or throughout the dry season depends on the depth that can be secured in the channel at the lowest stage. the problem in the dry season is the small discharge and deficiency in scour during this period. a typical solution is to restrict the width of the low - water channel, concentrate all of the flow in it, and also to fix its position so that it is scoured out every year by the floods which follow the deepest part of the bed along the line of the strongest current. this can be effected by closing subsidiary low - water channels with dikes across them, and narrowing the channel at the low stage by low - dipping cross dikes extending from the river banks down the slope and pointing slightly up - stream so as to direct the water flowing over them into a central channel. = = estuarine works = = the needs of navigation may also require that a stable, continuous, navigable channel is prolonged from the navigable river to deep water at the mouth of the estuary. the interaction of river flow and tide needs to be modeled by computer or using scale models, moulded to the configuration of the estuary under consideration and reproducing in miniature the tidal ebb and flow and fresh - water discharge over a bed of fine sand, in which various lines of training walls can be successively inserted. the models should be capable of furnishing valuable indications of the respective effects and comparative merits of the different schemes proposed for works. = = see also = = bridge scour flood control = = references = = = = external links = = u. s. army corps of engineers β civil works program river morphology and stream restoration references - wildland hydrology at the library of congress web archives ( archived 2002 - 08 - 13 )
and measuring radiation levels. the surveyor program conducted uncrewed lunar landings and takeoffs, as well as taking surface and regolith observations. despite the setback caused by the apollo 1 fire, which killed three astronauts, the program proceeded. apollo 8 was the first crewed spacecraft to leave low earth orbit and the first human spaceflight to reach the moon. the crew orbited the moon ten times on december 24 and 25, 1968, and then traveled safely back to earth. the three apollo 8 astronauts β frank borman, james lovell, and william anders β were the first humans to see the earth as a globe in space, the first to witness an earthrise, and the first to see and manually photograph the far side of the moon. the first lunar landing was conducted by apollo 11. commanded by neil armstrong with astronauts buzz aldrin and michael collins, apollo 11 was one of the most significant missions in nasa ' s history, marking the end of the space race when the soviet union gave up its lunar ambitions. as the first human to step on the surface of the moon, neil armstrong uttered the now famous words : that ' s one small step for man, one giant leap for mankind. nasa would conduct six total lunar landings as part of the apollo program, with apollo 17 concluding the program in 1972. = = = = end of apollo = = = = wernher von braun had advocated for nasa to develop a space station since the agency was created. in 1973, following the end of the apollo lunar missions, nasa launched its first space station, skylab, on the final launch of the saturn v. skylab reused a significant amount of apollo and saturn hardware, with a repurposed saturn v third stage serving as the primary module for the space station. damage to skylab during its launch required spacewalks to be performed by the first crew to make it habitable and operational. skylab hosted nine missions and was decommissioned in 1974 and deorbited in 1979, two years prior to the first launch of the space shuttle and any possibility of boosting its orbit. in 1975, the apollo β soyuz mission was the first ever international spaceflight and a major diplomatic accomplishment between the cold war rivals, which also marked the last flight of the apollo capsule. flown in 1975, a us apollo spacecraft docked with a soviet soyuz capsule. = = = interplanetary exploration and space science = = = during the 1960s, nasa started its space science and interplanetary probe program. the mariner program was its flagship
Question: The action of turning off the water while brushing teeth is an example of
A) recycling.
B) adaptation.
C) conservation.
D) resourcefulness.
|
C) conservation.
|
Context:
a measurable and testable value of a vehicle ' s ability to perform in various conditions. performance can be considered in a wide variety of tasks, but it generally considers how quickly a car can accelerate ( e. g. standing start 1 / 4 mile elapsed time, 0 β 60 mph, etc. ), its top speed, how short and quickly a car can come to a complete stop from a set speed ( e. g. 70 - 0 mph ), how much g - force a car can generate without losing grip, recorded lap - times, cornering speed, brake fade, etc. performance can also reflect the amount of control in inclement weather ( snow, ice, rain ). shift quality : shift quality is the driver ' s perception of the vehicle to an automatic transmission shift event. this is influenced by the powertrain ( internal combustion engine, transmission ), and the vehicle ( driveline, suspension, engine and powertrain mounts, etc. ) shift feel is both a tactile ( felt ) and audible ( heard ) response of the vehicle. shift quality is experienced as various events : transmission shifts are felt as an upshift at acceleration ( 1 β 2 ), or a downshift maneuver in passing ( 4 β 2 ). shift engagements of the vehicle are also evaluated, as in park to reverse, etc. durability / corrosion engineering : durability and corrosion engineering is the evaluation testing of a vehicle for its useful life. tests include mileage accumulation, severe driving conditions, and corrosive salt baths. drivability : drivability is the vehicle ' s response to general driving conditions. cold starts and stalls, rpm dips, idle response, launch hesitations and stumbles, and performance levels all contribute to the overall drivability of any given vehicle. cost : the cost of a vehicle program is typically split into the effect on the variable cost of the vehicle, and the up - front tooling and fixed costs associated with developing the vehicle. there are also costs associated with warranty reductions and marketing. program timing : to some extent programs are timed with respect to the market, and also to the production - schedules of assembly plants. any new part in the design must support the development and manufacturing schedule of the model. design for manufacturability ( dfm ) : dfm refers to designing vehicular components in such a way that they are not only feasible to manufacture, but also such that they are cost - efficient to produce while resulting in acceptable
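As a rough illustration of how metrics such as a 0-60 mph time, a 70-0 mph stop, and cornering g relate to simple kinematics, the sketch below converts a hypothetical acceleration time and stopping distance into average g values; the figures are made up for illustration, not measured data:

```python
G = 9.81          # standard gravity, m/s^2
MPH = 0.44704     # metres per second in one mile per hour

def avg_accel_g(speed_mph: float, time_s: float) -> float:
    """Average acceleration, in g, for a standing start to the given speed."""
    return (speed_mph * MPH / time_s) / G

def avg_braking_g(speed_mph: float, stop_distance_m: float) -> float:
    """Average deceleration, in g, for a stop from the given speed (v^2 = 2*a*d)."""
    v = speed_mph * MPH
    return v ** 2 / (2 * stop_distance_m) / G

# Hypothetical figures: a 6.0 s 0-60 mph run and a 50 m stop from 70 mph.
print(avg_accel_g(60, 6.0))      # ~0.46 g
print(avg_braking_g(70, 50.0))   # ~1.0 g
```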
vehicle crashes. fuel economy / emissions : fuel economy is the measured fuel efficiency of the vehicle in miles per gallon or kilometers per liter. emissions - testing covers the measurement of vehicle emissions, including hydrocarbons, nitrogen oxides ( nox ), carbon monoxide ( co ), carbon dioxide ( co2 ), and evaporative emissions. nvh engineering ( noise, vibration, and harshness ) : nvh involves customer feedback ( both tactile [ felt ] and audible [ heard ] ) concerning a vehicle. while sound can be interpreted as a rattle, squeal, or hot, a tactile response can be seat vibration or a buzz in the steering wheel. this feedback is generated by components either rubbing, vibrating, or rotating. nvh response can be classified in various ways : powertrain nvh, road noise, wind noise, component noise, and squeak and rattle. note, there are both good and bad nvh qualities. the nvh engineer works to either eliminate bad nvh or change the " bad nvh " to good ( i. e., exhaust tones ). vehicle electronics : automotive electronics is an increasingly important aspect of automotive engineering. modern vehicles employ dozens of electronic systems. these systems are responsible for operational controls such as the throttle, brake and steering controls ; as well as many comfort - and - convenience systems such as the hvac, infotainment, and lighting systems. it would not be possible for automobiles to meet modern safety and fuel - economy requirements without electronic controls. performance : performance is a measurable and testable value of a vehicle ' s ability to perform in various conditions. performance can be considered in a wide variety of tasks, but it generally considers how quickly a car can accelerate ( e. g. standing start 1 / 4 mile elapsed time, 0 β 60 mph, etc. ), its top speed, how short and quickly a car can come to a complete stop from a set speed ( e. g. 70 - 0 mph ), how much g - force a car can generate without losing grip, recorded lap - times, cornering speed, brake fade, etc. performance can also reflect the amount of control in inclement weather ( snow, ice, rain ). shift quality : shift quality is the driver ' s perception of the vehicle to an automatic transmission shift event. this is influenced by the powertrain ( internal combustion engine, transmission ), and the vehicle ( driveline, suspension, engine and power
and bad nvh qualities. the nvh engineer works to either eliminate bad nvh or change the " bad nvh " to good ( i. e., exhaust tones ). vehicle electronics : automotive electronics is an increasingly important aspect of automotive engineering. modern vehicles employ dozens of electronic systems. these systems are responsible for operational controls such as the throttle, brake and steering controls ; as well as many comfort - and - convenience systems such as the hvac, infotainment, and lighting systems. it would not be possible for automobiles to meet modern safety and fuel - economy requirements without electronic controls. performance : performance is a measurable and testable value of a vehicle ' s ability to perform in various conditions. performance can be considered in a wide variety of tasks, but it generally considers how quickly a car can accelerate ( e. g. standing start 1 / 4 mile elapsed time, 0 β 60 mph, etc. ), its top speed, how short and quickly a car can come to a complete stop from a set speed ( e. g. 70 - 0 mph ), how much g - force a car can generate without losing grip, recorded lap - times, cornering speed, brake fade, etc. performance can also reflect the amount of control in inclement weather ( snow, ice, rain ). shift quality : shift quality is the driver ' s perception of the vehicle to an automatic transmission shift event. this is influenced by the powertrain ( internal combustion engine, transmission ), and the vehicle ( driveline, suspension, engine and powertrain mounts, etc. ) shift feel is both a tactile ( felt ) and audible ( heard ) response of the vehicle. shift quality is experienced as various events : transmission shifts are felt as an upshift at acceleration ( 1 β 2 ), or a downshift maneuver in passing ( 4 β 2 ). shift engagements of the vehicle are also evaluated, as in park to reverse, etc. durability / corrosion engineering : durability and corrosion engineering is the evaluation testing of a vehicle for its useful life. tests include mileage accumulation, severe driving conditions, and corrosive salt baths. drivability : drivability is the vehicle ' s response to general driving conditions. cold starts and stalls, rpm dips, idle response, launch hesitations and stumbles, and performance levels all contribute to the overall drivability of any given vehicle. cost : the cost of a vehicle program is typically split into the effect
systems are responsible for operational controls such as the throttle, brake and steering controls ; as well as many comfort - and - convenience systems such as the hvac, infotainment, and lighting systems. it would not be possible for automobiles to meet modern safety and fuel - economy requirements without electronic controls. performance : performance is a measurable and testable value of a vehicle ' s ability to perform in various conditions. performance can be considered in a wide variety of tasks, but it generally considers how quickly a car can accelerate ( e. g. standing start 1 / 4 mile elapsed time, 0 β 60 mph, etc. ), its top speed, how short and quickly a car can come to a complete stop from a set speed ( e. g. 70 - 0 mph ), how much g - force a car can generate without losing grip, recorded lap - times, cornering speed, brake fade, etc. performance can also reflect the amount of control in inclement weather ( snow, ice, rain ). shift quality : shift quality is the driver ' s perception of the vehicle to an automatic transmission shift event. this is influenced by the powertrain ( internal combustion engine, transmission ), and the vehicle ( driveline, suspension, engine and powertrain mounts, etc. ) shift feel is both a tactile ( felt ) and audible ( heard ) response of the vehicle. shift quality is experienced as various events : transmission shifts are felt as an upshift at acceleration ( 1 β 2 ), or a downshift maneuver in passing ( 4 β 2 ). shift engagements of the vehicle are also evaluated, as in park to reverse, etc. durability / corrosion engineering : durability and corrosion engineering is the evaluation testing of a vehicle for its useful life. tests include mileage accumulation, severe driving conditions, and corrosive salt baths. drivability : drivability is the vehicle ' s response to general driving conditions. cold starts and stalls, rpm dips, idle response, launch hesitations and stumbles, and performance levels all contribute to the overall drivability of any given vehicle. cost : the cost of a vehicle program is typically split into the effect on the variable cost of the vehicle, and the up - front tooling and fixed costs associated with developing the vehicle. there are also costs associated with warranty reductions and marketing. program timing : to some extent programs are timed with respect to the market, and also to the production - schedules of assembly plants. any new
missiles, ships, vehicles, and also to map weather patterns and terrain. a radar set consists of a transmitter and receiver. the transmitter emits a narrow beam of radio waves which is swept around the surrounding space. when the beam strikes a target object, radio waves are reflected back to the receiver. the direction of the beam reveals the object ' s location. since radio waves travel at a constant speed close to the speed of light, by measuring the brief time delay between the outgoing pulse and the received " echo ", the range to the target can be calculated. the targets are often displayed graphically on a map display called a radar screen. doppler radar can measure a moving object ' s velocity, by measuring the change in frequency of the return radio waves due to the doppler effect. radar sets mainly use high frequencies in the microwave bands, because these frequencies create strong reflections from objects the size of vehicles and can be focused into narrow beams with compact antennas. parabolic ( dish ) antennas are widely used. in most radars the transmitting antenna also serves as the receiving antenna ; this is called a monostatic radar. a radar which uses separate transmitting and receiving antennas is called a bistatic radar. airport surveillance radar β in aviation, radar is the main tool of air traffic control. a rotating dish antenna sweeps a vertical fan - shaped beam of microwaves around the airspace and the radar set shows the location of aircraft as " blips " of light on a display called a radar screen. airport radar operates at 2. 7 β 2. 9 ghz in the microwave s band. in large airports the radar image is displayed on multiple screens in an operations room called the tracon ( terminal radar approach control ), where air traffic controllers direct the aircraft by radio to maintain safe aircraft separation. secondary surveillance radar β aircraft carry radar transponders, transceivers which when triggered by the incoming radar signal transmit a return microwave signal. this causes the aircraft to show up more strongly on the radar screen. the radar which triggers the transponder and receives the return beam, usually mounted on top of the primary radar dish, is called the secondary surveillance radar. since radar cannot measure an aircraft ' s altitude with any accuracy, the transponder also transmits back the aircraft ' s altitude measured by its altimeter, and an id number identifying the aircraft, which is displayed on the radar screen. electronic countermeasures ( ecm ) β military defensive electronic systems designed to degrade enemy radar effectiveness, or deceive it
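The range and velocity measurements described here come down to two short formulas: range = c * (round-trip delay) / 2, since the pulse travels out and back, and radial velocity v ~= c * (Doppler shift) / (2 * transmit frequency). A minimal sketch with illustrative values:

```python
C = 299_792_458.0  # speed of light, m/s

def radar_range_m(echo_delay_s: float) -> float:
    """Target range from the pulse round-trip time: range = c * delay / 2."""
    return C * echo_delay_s / 2

def doppler_velocity_ms(tx_freq_hz: float, freq_shift_hz: float) -> float:
    """Approximate radial target velocity from the Doppler shift of the echo:
    v ~= c * df / (2 * f), positive for an approaching target."""
    return C * freq_shift_hz / (2 * tx_freq_hz)

# A 100 microsecond echo delay puts the target about 15 km away.
print(radar_range_m(100e-6))                 # ~14990 m
# At 2.8 GHz (airport S band), a +1 kHz shift is roughly 54 m/s of closing speed.
print(doppler_velocity_ms(2.8e9, 1_000.0))   # ~53.5 m/s
```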
beam reveals the object ' s location. since radio waves travel at a constant speed close to the speed of light, by measuring the brief time delay between the outgoing pulse and the received " echo ", the range to the target can be calculated. the targets are often displayed graphically on a map display called a radar screen. doppler radar can measure a moving object ' s velocity, by measuring the change in frequency of the return radio waves due to the doppler effect. radar sets mainly use high frequencies in the microwave bands, because these frequencies create strong reflections from objects the size of vehicles and can be focused into narrow beams with compact antennas. parabolic ( dish ) antennas are widely used. in most radars the transmitting antenna also serves as the receiving antenna ; this is called a monostatic radar. a radar which uses separate transmitting and receiving antennas is called a bistatic radar. airport surveillance radar β in aviation, radar is the main tool of air traffic control. a rotating dish antenna sweeps a vertical fan - shaped beam of microwaves around the airspace and the radar set shows the location of aircraft as " blips " of light on a display called a radar screen. airport radar operates at 2. 7 β 2. 9 ghz in the microwave s band. in large airports the radar image is displayed on multiple screens in an operations room called the tracon ( terminal radar approach control ), where air traffic controllers direct the aircraft by radio to maintain safe aircraft separation. secondary surveillance radar β aircraft carry radar transponders, transceivers which when triggered by the incoming radar signal transmit a return microwave signal. this causes the aircraft to show up more strongly on the radar screen. the radar which triggers the transponder and receives the return beam, usually mounted on top of the primary radar dish, is called the secondary surveillance radar. since radar cannot measure an aircraft ' s altitude with any accuracy, the transponder also transmits back the aircraft ' s altitude measured by its altimeter, and an id number identifying the aircraft, which is displayed on the radar screen. electronic countermeasures ( ecm ) β military defensive electronic systems designed to degrade enemy radar effectiveness, or deceive it with false information, to prevent enemies from locating local forces. it often consists of powerful microwave transmitters that can mimic enemy radar signals to create false target indications on the enemy radar screens. marine radar β an s or x band radar on ships used to detect nearby ships and obstructions like bridges. a rotating antenna sweeps a vertical
##ent governmental regulations. some of these requirements include : seat belt and air bag functionality testing, front and side - impact testing, and tests of rollover resistance. assessments are done with various methods and tools, including computer crash simulation ( typically finite element analysis ), crash - test dummy, and partial system sled and full vehicle crashes. fuel economy / emissions : fuel economy is the measured fuel efficiency of the vehicle in miles per gallon or kilometers per liter. emissions - testing covers the measurement of vehicle emissions, including hydrocarbons, nitrogen oxides ( nox ), carbon monoxide ( co ), carbon dioxide ( co2 ), and evaporative emissions. nvh engineering ( noise, vibration, and harshness ) : nvh involves customer feedback ( both tactile [ felt ] and audible [ heard ] ) concerning a vehicle. while sound can be interpreted as a rattle, squeal, or hot, a tactile response can be seat vibration or a buzz in the steering wheel. this feedback is generated by components either rubbing, vibrating, or rotating. nvh response can be classified in various ways : powertrain nvh, road noise, wind noise, component noise, and squeak and rattle. note, there are both good and bad nvh qualities. the nvh engineer works to either eliminate bad nvh or change the " bad nvh " to good ( i. e., exhaust tones ). vehicle electronics : automotive electronics is an increasingly important aspect of automotive engineering. modern vehicles employ dozens of electronic systems. these systems are responsible for operational controls such as the throttle, brake and steering controls ; as well as many comfort - and - convenience systems such as the hvac, infotainment, and lighting systems. it would not be possible for automobiles to meet modern safety and fuel - economy requirements without electronic controls. performance : performance is a measurable and testable value of a vehicle ' s ability to perform in various conditions. performance can be considered in a wide variety of tasks, but it generally considers how quickly a car can accelerate ( e. g. standing start 1 / 4 mile elapsed time, 0 β 60 mph, etc. ), its top speed, how short and quickly a car can come to a complete stop from a set speed ( e. g. 70 - 0 mph ), how much g - force a car can generate without losing grip, recorded lap - times, cornering speed, brake fade, etc. performance can also reflect the
; austrian experts have established that the wheel is between 5, 100 and 5, 350 years old. the invention of the wheel revolutionized trade and war. it did not take long to discover that wheeled wagons could be used to carry heavy loads. the ancient sumerians used a potter ' s wheel and may have invented it. a stone pottery wheel found in the city - state of ur dates to around 3, 429 bce, and even older fragments of wheel - thrown pottery have been found in the same area. fast ( rotary ) potters ' wheels enabled early mass production of pottery, but it was the use of the wheel as a transformer of energy ( through water wheels, windmills, and even treadmills ) that revolutionized the application of nonhuman power sources. the first two - wheeled carts were derived from travois and were first used in mesopotamia and iran in around 3, 000 bce. the oldest known constructed roadways are the stone - paved streets of the city - state of ur, dating to c. 4, 000 bce, and timber roads leading through the swamps of glastonbury, england, dating to around the same period. the first long - distance road, which came into use around 3, 500 bce, spanned 2, 400 km from the persian gulf to the mediterranean sea, but was not paved and was only partially maintained. in around 2, 000 bce, the minoans on the greek island of crete built a 50 km road leading from the palace of gortyn on the south side of the island, through the mountains, to the palace of knossos on the north side of the island. unlike the earlier road, the minoan road was completely paved. ancient minoan private homes had running water. a bathtub virtually identical to modern ones was unearthed at the palace of knossos. several minoan private homes also had toilets, which could be flushed by pouring water down the drain. the ancient romans had many public flush toilets, which emptied into an extensive sewage system. the primary sewer in rome was the cloaca maxima ; construction began on it in the sixth century bce and it is still in use today. the ancient romans also had a complex system of aqueducts, which were used to transport water across long distances. the first roman aqueduct was built in 312 bce. the eleventh and final ancient roman aqueduct was built in 226 ce. put together, the roman aqueducts extended over 450 km, but less than 70 km of this was above ground
the galactic microquasar ss 433 is a member of a binary system but there is a lack of data on the orbital velocities of the components. the emission lines of the c ii doublet at 7231 and 7236 angstrom have been tracked nightly over two orbital cycles. the spectra are adequate to establish that these lines are eclipsed by the companion and hence to extract a measure of the orbital velocity of the compact object ; the lines are formed in the disk photosphere. this velocity is 176 plus / minus 13 km / s. could xshooter do better?
electric motors, servo - mechanisms, and other electrical systems in conjunction with special software. a common example of a mechatronics system is a cd - rom drive. mechanical systems open and close the drive, spin the cd and move the laser, while an optical system reads the data on the cd and converts it to bits. integrated software controls the process and communicates the contents of the cd to the computer. robotics is the application of mechatronics to create robots, which are often used in industry to perform tasks that are dangerous, unpleasant, or repetitive. these robots may be of any shape and size, but all are preprogrammed and interact physically with the world. to create a robot, an engineer typically employs kinematics ( to determine the robot ' s range of motion ) and mechanics ( to determine the stresses within the robot ). robots are used extensively in industrial automation engineering. they allow businesses to save money on labor, perform tasks that are either too dangerous or too precise for humans to perform them economically, and to ensure better quality. many companies employ assembly lines of robots, especially in automotive industries and some factories are so robotized that they can run by themselves. outside the factory, robots have been employed in bomb disposal, space exploration, and many other fields. robots are also sold for various residential applications, from recreation to domestic applications. = = = structural analysis = = = structural analysis is the branch of mechanical engineering ( and also civil engineering ) devoted to examining why and how objects fail and to fix the objects and their performance. structural failures occur in two general modes : static failure, and fatigue failure. static structural failure occurs when, upon being loaded ( having a force applied ) the object being analyzed either breaks or is deformed plastically, depending on the criterion for failure. fatigue failure occurs when an object fails after a number of repeated loading and unloading cycles. fatigue failure occurs because of imperfections in the object : a microscopic crack on the surface of the object, for instance, will grow slightly with each cycle ( propagation ) until the crack is large enough to cause ultimate failure. failure is not simply defined as when a part breaks, however ; it is defined as when a part does not operate as intended. some systems, such as the perforated top sections of some plastic bags, are designed to break. if these systems do not break, failure analysis might be employed to determine the cause. structural analysis is often used by mechanical engineers after a failure has occurred, or when designing to prevent failure
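As a minimal illustration of the static-failure criterion mentioned here (a part yields once the applied stress exceeds what the material can carry), the sketch below checks nominal axial stress against an assumed yield strength; the rod size, load, and yield value are hypothetical:

```python
import math

def axial_stress_pa(force_n: float, area_m2: float) -> float:
    """Nominal axial stress: sigma = F / A."""
    return force_n / area_m2

def yields_statically(force_n: float, area_m2: float, yield_strength_pa: float) -> bool:
    """Simple static criterion: the part deforms plastically once the nominal
    stress exceeds the material's yield strength."""
    return axial_stress_pa(force_n, area_m2) > yield_strength_pa

# Hypothetical case: a 10 mm diameter rod (yield strength ~250 MPa) under 25 kN.
area = math.pi * (0.010 / 2) ** 2              # ~7.85e-5 m^2
print(axial_stress_pa(25_000, area) / 1e6)     # ~318 MPa
print(yields_statically(25_000, area, 250e6))  # True -> the rod would yield
```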
Question: Which objects are the most useful for collecting data on the speed of a toy car?
A) microscope, computer, ruler
B) thermometer, calculator, magnet
C) stopwatch, calculator, meter stick
D) camera, digital recorder, safety goggles
|
C) stopwatch, calculator, meter stick
|
Context:
##ry. immunology is the study of the immune system, which includes the innate and adaptive immune system in humans, for example. lifestyle medicine is the study of the chronic conditions, and how to prevent, treat and reverse them. medical physics is the study of the applications of physics principles in medicine. microbiology is the study of microorganisms, including protozoa, bacteria, fungi, and viruses. molecular biology is the study of molecular underpinnings of the process of replication, transcription and translation of the genetic material. neuroscience includes those disciplines of science that are related to the study of the nervous system. a main focus of neuroscience is the biology and physiology of the human brain and spinal cord. some related clinical specialties include neurology, neurosurgery and psychiatry. nutrition science ( theoretical focus ) and dietetics ( practical focus ) is the study of the relationship of food and drink to health and disease, especially in determining an optimal diet. medical nutrition therapy is done by dietitians and is prescribed for diabetes, cardiovascular diseases, weight and eating disorders, allergies, malnutrition, and neoplastic diseases. pathology as a science is the study of disease β the causes, course, progression and resolution thereof. pharmacology is the study of drugs and their actions. photobiology is the study of the interactions between non - ionizing radiation and living organisms. physiology is the study of the normal functioning of the body and the underlying regulatory mechanisms. radiobiology is the study of the interactions between ionizing radiation and living organisms. toxicology is the study of hazardous effects of drugs and poisons. = = = specialties = = = in the broadest meaning of " medicine ", there are many different specialties. in the uk, most specialities have their own body or college, which has its own entrance examination. these are collectively known as the royal colleges, although not all currently use the term " royal ". the development of a speciality is often driven by new technology ( such as the development of effective anaesthetics ) or ways of working ( such as emergency departments ) ; the new specialty leads to the formation of a unifying body of doctors and the prestige of administering their own examination. within medical circles, specialities usually fit into one of two broad categories : " medicine " and " surgery ". " medicine " refers to the practice of non - operative medicine, and most of its subspecialties require preliminary training in internal medicine. in the uk
include the manufacturing of drugs, creation of model animals that mimic human conditions and gene therapy. one of the earliest uses of genetic engineering was to mass - produce human insulin in bacteria. this application has now been applied to human growth hormones, follicle stimulating hormones ( for treating infertility ), human albumin, monoclonal antibodies, antihemophilic factors, vaccines and many other drugs. mouse hybridomas, cells fused together to create monoclonal antibodies, have been adapted through genetic engineering to create human monoclonal antibodies. genetically engineered viruses are being developed that can still confer immunity, but lack the infectious sequences. genetic engineering is also used to create animal models of human diseases. genetically modified mice are the most common genetically engineered animal model. they have been used to study and model cancer ( the oncomouse ), obesity, heart disease, diabetes, arthritis, substance abuse, anxiety, aging and parkinson disease. potential cures can be tested against these mouse models. gene therapy is the genetic engineering of humans, generally by replacing defective genes with effective ones. clinical research using somatic gene therapy has been conducted with several diseases, including x - linked scid, chronic lymphocytic leukemia ( cll ), and parkinson ' s disease. in 2012, alipogene tiparvovec became the first gene therapy treatment to be approved for clinical use. in 2015 a virus was used to insert a healthy gene into the skin cells of a boy suffering from a rare skin disease, epidermolysis bullosa, in order to grow, and then graft healthy skin onto 80 percent of the boy ' s body which was affected by the illness. germline gene therapy would result in any change being inheritable, which has raised concerns within the scientific community. in 2015, crispr was used to edit the dna of non - viable human embryos, leading scientists of major world academies to call for a moratorium on inheritable human genome edits. there are also concerns that the technology could be used not just for treatment, but for enhancement, modification or alteration of a human beings ' appearance, adaptability, intelligence, character or behavior. the distinction between cure and enhancement can also be difficult to establish. in november 2018, he jiankui announced that he had edited the genomes of two human embryos, to attempt to disable the ccr5 gene, which codes for a receptor that hiv uses to enter cells. the work was widely condemned as unethical, dangerous,
the study of microorganisms, including protozoa, bacteria, fungi, and viruses. molecular biology is the study of molecular underpinnings of the process of replication, transcription and translation of the genetic material. neuroscience includes those disciplines of science that are related to the study of the nervous system. a main focus of neuroscience is the biology and physiology of the human brain and spinal cord. some related clinical specialties include neurology, neurosurgery and psychiatry. nutrition science ( theoretical focus ) and dietetics ( practical focus ) is the study of the relationship of food and drink to health and disease, especially in determining an optimal diet. medical nutrition therapy is done by dietitians and is prescribed for diabetes, cardiovascular diseases, weight and eating disorders, allergies, malnutrition, and neoplastic diseases. pathology as a science is the study of disease β the causes, course, progression and resolution thereof. pharmacology is the study of drugs and their actions. photobiology is the study of the interactions between non - ionizing radiation and living organisms. physiology is the study of the normal functioning of the body and the underlying regulatory mechanisms. radiobiology is the study of the interactions between ionizing radiation and living organisms. toxicology is the study of hazardous effects of drugs and poisons. = = = specialties = = = in the broadest meaning of " medicine ", there are many different specialties. in the uk, most specialities have their own body or college, which has its own entrance examination. these are collectively known as the royal colleges, although not all currently use the term " royal ". the development of a speciality is often driven by new technology ( such as the development of effective anaesthetics ) or ways of working ( such as emergency departments ) ; the new specialty leads to the formation of a unifying body of doctors and the prestige of administering their own examination. within medical circles, specialities usually fit into one of two broad categories : " medicine " and " surgery ". " medicine " refers to the practice of non - operative medicine, and most of its subspecialties require preliminary training in internal medicine. in the uk, this was traditionally evidenced by passing the examination for the membership of the royal college of physicians ( mrcp ) or the equivalent college in scotland or ireland. " surgery " refers to the practice of operative medicine, and most subspecialties in this area require preliminary training in general surgery, which in the uk leads to
process of lactic acid fermentation, which produced other preserved foods, such as soy sauce. fermentation was also used in this time period to produce leavened bread. although the process of fermentation was not fully understood until louis pasteur ' s work in 1857, it is still the first use of biotechnology to convert a food source into another form. before the time of charles darwin ' s work and life, animal and plant scientists had already used selective breeding. darwin added to that body of work with his scientific observations about the ability of science to change species. these accounts contributed to darwin ' s theory of natural selection. for thousands of years, humans have used selective breeding to improve the production of crops and livestock to use them for food. in selective breeding, organisms with desirable characteristics are mated to produce offspring with the same characteristics. for example, this technique was used with corn to produce the largest and sweetest crops. in the early twentieth century scientists gained a greater understanding of microbiology and explored ways of manufacturing specific products. in 1917, chaim weizmann first used a pure microbiological culture in an industrial process, that of manufacturing corn starch using clostridium acetobutylicum, to produce acetone, which the united kingdom desperately needed to manufacture explosives during world war i. biotechnology has also led to the development of antibiotics. in 1928, alexander fleming discovered the mold penicillium. his work led to the purification of the antibiotic formed by the mold by howard florey, ernst boris chain and norman heatley β to form what we today know as penicillin. in 1940, penicillin became available for medicinal use to treat bacterial infections in humans. the field of modern biotechnology is generally thought of as having been born in 1971 when paul berg ' s ( stanford ) experiments in gene splicing had early success. herbert w. boyer ( univ. calif. at san francisco ) and stanley n. cohen ( stanford ) significantly advanced the new technology in 1972 by transferring genetic material into a bacterium, such that the imported material would be reproduced. the commercial viability of a biotechnology industry was significantly expanded on june 16, 1980, when the united states supreme court ruled that a genetically modified microorganism could be patented in the case of diamond v. chakrabarty. indian - born ananda chakrabarty, working for general electric, had modified a bacterium ( of the genus pseudomonas ) capable of breaking down crude oil, which he proposed to
. throughout the history of agriculture, farmers have inadvertently altered the genetics of their crops through introducing them to new environments and breeding them with other plants β one of the first forms of biotechnology. these processes also were included in early fermentation of beer. these processes were introduced in early mesopotamia, egypt, china and india, and still use the same basic biological methods. in brewing, malted grains ( containing enzymes ) convert starch from grains into sugar and then adding specific yeasts to produce beer. in this process, carbohydrates in the grains broke down into alcohols, such as ethanol. later, other cultures produced the process of lactic acid fermentation, which produced other preserved foods, such as soy sauce. fermentation was also used in this time period to produce leavened bread. although the process of fermentation was not fully understood until louis pasteur ' s work in 1857, it is still the first use of biotechnology to convert a food source into another form. before the time of charles darwin ' s work and life, animal and plant scientists had already used selective breeding. darwin added to that body of work with his scientific observations about the ability of science to change species. these accounts contributed to darwin ' s theory of natural selection. for thousands of years, humans have used selective breeding to improve the production of crops and livestock to use them for food. in selective breeding, organisms with desirable characteristics are mated to produce offspring with the same characteristics. for example, this technique was used with corn to produce the largest and sweetest crops. in the early twentieth century scientists gained a greater understanding of microbiology and explored ways of manufacturing specific products. in 1917, chaim weizmann first used a pure microbiological culture in an industrial process, that of manufacturing corn starch using clostridium acetobutylicum, to produce acetone, which the united kingdom desperately needed to manufacture explosives during world war i. biotechnology has also led to the development of antibiotics. in 1928, alexander fleming discovered the mold penicillium. his work led to the purification of the antibiotic formed by the mold by howard florey, ernst boris chain and norman heatley β to form what we today know as penicillin. in 1940, penicillin became available for medicinal use to treat bacterial infections in humans. the field of modern biotechnology is generally thought of as having been born in 1971 when paul berg ' s ( stanford ) experiments in gene splicing had early success. herbert w. boyer
the best - suited crops ( e. g., those with the highest yields ) to produce enough food to support a growing population. as crops and fields became increasingly large and difficult to maintain, it was discovered that specific organisms and their by - products could effectively fertilize, restore nitrogen, and control pests. throughout the history of agriculture, farmers have inadvertently altered the genetics of their crops through introducing them to new environments and breeding them with other plants β one of the first forms of biotechnology. these processes also were included in early fermentation of beer. these processes were introduced in early mesopotamia, egypt, china and india, and still use the same basic biological methods. in brewing, malted grains ( containing enzymes ) convert starch from grains into sugar and then adding specific yeasts to produce beer. in this process, carbohydrates in the grains broke down into alcohols, such as ethanol. later, other cultures produced the process of lactic acid fermentation, which produced other preserved foods, such as soy sauce. fermentation was also used in this time period to produce leavened bread. although the process of fermentation was not fully understood until louis pasteur ' s work in 1857, it is still the first use of biotechnology to convert a food source into another form. before the time of charles darwin ' s work and life, animal and plant scientists had already used selective breeding. darwin added to that body of work with his scientific observations about the ability of science to change species. these accounts contributed to darwin ' s theory of natural selection. for thousands of years, humans have used selective breeding to improve the production of crops and livestock to use them for food. in selective breeding, organisms with desirable characteristics are mated to produce offspring with the same characteristics. for example, this technique was used with corn to produce the largest and sweetest crops. in the early twentieth century scientists gained a greater understanding of microbiology and explored ways of manufacturing specific products. in 1917, chaim weizmann first used a pure microbiological culture in an industrial process, that of manufacturing corn starch using clostridium acetobutylicum, to produce acetone, which the united kingdom desperately needed to manufacture explosives during world war i. biotechnology has also led to the development of antibiotics. in 1928, alexander fleming discovered the mold penicillium. his work led to the purification of the antibiotic formed by the mold by howard florey, ernst boris chain and norman heatley β to form
and still use the same basic biological methods. in brewing, malted grains ( containing enzymes ) convert starch from grains into sugar and then adding specific yeasts to produce beer. in this process, carbohydrates in the grains broke down into alcohols, such as ethanol. later, other cultures produced the process of lactic acid fermentation, which produced other preserved foods, such as soy sauce. fermentation was also used in this time period to produce leavened bread. although the process of fermentation was not fully understood until louis pasteur ' s work in 1857, it is still the first use of biotechnology to convert a food source into another form. before the time of charles darwin ' s work and life, animal and plant scientists had already used selective breeding. darwin added to that body of work with his scientific observations about the ability of science to change species. these accounts contributed to darwin ' s theory of natural selection. for thousands of years, humans have used selective breeding to improve the production of crops and livestock to use them for food. in selective breeding, organisms with desirable characteristics are mated to produce offspring with the same characteristics. for example, this technique was used with corn to produce the largest and sweetest crops. in the early twentieth century scientists gained a greater understanding of microbiology and explored ways of manufacturing specific products. in 1917, chaim weizmann first used a pure microbiological culture in an industrial process, that of manufacturing corn starch using clostridium acetobutylicum, to produce acetone, which the united kingdom desperately needed to manufacture explosives during world war i. biotechnology has also led to the development of antibiotics. in 1928, alexander fleming discovered the mold penicillium. his work led to the purification of the antibiotic formed by the mold by howard florey, ernst boris chain and norman heatley β to form what we today know as penicillin. in 1940, penicillin became available for medicinal use to treat bacterial infections in humans. the field of modern biotechnology is generally thought of as having been born in 1971 when paul berg ' s ( stanford ) experiments in gene splicing had early success. herbert w. boyer ( univ. calif. at san francisco ) and stanley n. cohen ( stanford ) significantly advanced the new technology in 1972 by transferring genetic material into a bacterium, such that the imported material would be reproduced. the commercial viability of a biotechnology industry was significantly expanded on june 16, 1980, when the united states
fertile and resistant, towards biotic and abiotic stress, plants and ensures application of environmentally friendly fertilizers and the use of biopesticides, it is mainly focused on the development of agriculture. on the other hand, some of the uses of green biotechnology involve microorganisms to clean and reduce waste. red biotechnology is the use of biotechnology in the medical and pharmaceutical industries, and health preservation. this branch involves the production of vaccines and antibiotics, regenerative therapies, creation of artificial organs and new diagnostics of diseases. as well as the development of hormones, stem cells, antibodies, sirna and diagnostic tests. white biotechnology, also known as industrial biotechnology, is biotechnology applied to industrial processes. an example is the designing of an organism to produce a useful chemical. another example is the using of enzymes as industrial catalysts to either produce valuable chemicals or destroy hazardous / polluting chemicals. white biotechnology tends to consume less in resources than traditional processes used to produce industrial goods. yellow biotechnology refers to the use of biotechnology in food production ( food industry ), for example in making wine ( winemaking ), cheese ( cheesemaking ), and beer ( brewing ) by fermentation. it has also been used to refer to biotechnology applied to insects. this includes biotechnology - based approaches for the control of harmful insects, the characterisation and utilisation of active ingredients or genes of insects for research, or application in agriculture and medicine and various other approaches. gray biotechnology is dedicated to environmental applications, and focused on the maintenance of biodiversity and the remotion of pollutants. brown biotechnology is related to the management of arid lands and deserts. one application is the creation of enhanced seeds that resist extreme environmental conditions of arid regions, which is related to the innovation, creation of agriculture techniques and management of resources. violet biotechnology is related to law, ethical and philosophical issues around biotechnology. microbial biotechnology has been proposed for the rapidly emerging area of biotechnology applications in space and microgravity ( space bioeconomy ) dark biotechnology is the color associated with bioterrorism or biological weapons and biowarfare which uses microorganisms, and toxins to cause diseases and death in humans, livestock and crops. = = = medicine = = = in medicine, modern biotechnology has many applications in areas such as pharmaceutical drug discoveries and production, pharmacogenomics, and genetic testing ( or genetic screening ). in 2021, nearly 40 % of the total company value of pharmaceutical biotech companies worldwide were active in oncology
genetic engineering takes the gene directly from one organism and delivers it to the other. this is much faster, can be used to insert any genes from any organism ( even ones from different domains ) and prevents other undesirable genes from also being added. genetic engineering could potentially fix severe genetic disorders in humans by replacing the defective gene with a functioning one. it is an important tool in research that allows the function of specific genes to be studied. drugs, vaccines and other products have been harvested from organisms engineered to produce them. crops have been developed that aid food security by increasing yield, nutritional value and tolerance to environmental stresses. the dna can be introduced directly into the host organism or into a cell that is then fused or hybridised with the host. this relies on recombinant nucleic acid techniques to form new combinations of heritable genetic material followed by the incorporation of that material either indirectly through a vector system or directly through micro - injection, macro - injection or micro - encapsulation. genetic engineering does not normally include traditional breeding, in vitro fertilisation, induction of polyploidy, mutagenesis and cell fusion techniques that do not use recombinant nucleic acids or a genetically modified organism in the process. however, some broad definitions of genetic engineering include selective breeding. cloning and stem cell research, although not considered genetic engineering, are closely related and genetic engineering can be used within them. synthetic biology is an emerging discipline that takes genetic engineering a step further by introducing artificially synthesised material into an organism. plants, animals or microorganisms that have been changed through genetic engineering are termed genetically modified organisms or gmos. if genetic material from another species is added to the host, the resulting organism is called transgenic. if genetic material from the same species or a species that can naturally breed with the host is used the resulting organism is called cisgenic. if genetic engineering is used to remove genetic material from the target organism the resulting organism is termed a knockout organism. in europe genetic modification is synonymous with genetic engineering while within the united states of america and canada genetic modification can also be used to refer to more conventional breeding methods. = = history = = humans have altered the genomes of species for thousands of years through selective breeding, or artificial selection : 1 : 1 as contrasted with natural selection. more recently, mutation breeding has used exposure to chemicals or radiation to produce a high frequency of random mutations, for selective breeding purposes. genetic engineering as the direct manipulation of dna by humans outside breeding and
, and includes, but is not limited to, the study of epidemics. genetics is the study of genes, and their role in biological inheritance. gynecology is the study of female reproductive system. histology is the study of the structures of biological tissues by light microscopy, electron microscopy and immunohistochemistry. immunology is the study of the immune system, which includes the innate and adaptive immune system in humans, for example. lifestyle medicine is the study of the chronic conditions, and how to prevent, treat and reverse them. medical physics is the study of the applications of physics principles in medicine. microbiology is the study of microorganisms, including protozoa, bacteria, fungi, and viruses. molecular biology is the study of molecular underpinnings of the process of replication, transcription and translation of the genetic material. neuroscience includes those disciplines of science that are related to the study of the nervous system. a main focus of neuroscience is the biology and physiology of the human brain and spinal cord. some related clinical specialties include neurology, neurosurgery and psychiatry. nutrition science ( theoretical focus ) and dietetics ( practical focus ) is the study of the relationship of food and drink to health and disease, especially in determining an optimal diet. medical nutrition therapy is done by dietitians and is prescribed for diabetes, cardiovascular diseases, weight and eating disorders, allergies, malnutrition, and neoplastic diseases. pathology as a science is the study of disease β the causes, course, progression and resolution thereof. pharmacology is the study of drugs and their actions. photobiology is the study of the interactions between non - ionizing radiation and living organisms. physiology is the study of the normal functioning of the body and the underlying regulatory mechanisms. radiobiology is the study of the interactions between ionizing radiation and living organisms. toxicology is the study of hazardous effects of drugs and poisons. = = = specialties = = = in the broadest meaning of " medicine ", there are many different specialties. in the uk, most specialities have their own body or college, which has its own entrance examination. these are collectively known as the royal colleges, although not all currently use the term " royal ". the development of a speciality is often driven by new technology ( such as the development of effective anaesthetics ) or ways of working ( such as emergency departments ) ; the new specialty leads to the formation of a unifying body of
Question: Some microorganisms cause human disease. Other microorganisms are used in making cheese, yogurt, and bread. Based on this information, the relationship between humans and microorganisms can be
A) beneficial, only
B) harmful, only
C) beneficial or harmful
|
C) beneficial or harmful
|
Context:
bear ' ) was conspicuous on radar. it is now known that propellers and jet turbine blades produce a bright radar image ; the bear has four pairs of large 18 - foot ( 5. 6 m ) diameter contra - rotating propellers. another important factor is internal construction. some stealth aircraft have skin that is radar transparent or absorbing, behind which are structures termed reentrant triangles. radar waves penetrating the skin get trapped in these structures, reflecting off the internal faces and losing energy. this method was first used on the blackbird series : a - 12, yf - 12a, lockheed sr - 71 blackbird. the most efficient way to reflect radar waves back to the emitting radar is with orthogonal metal plates, forming a corner reflector consisting of either a dihedral ( two plates ) or a trihedral ( three orthogonal plates ). this configuration occurs in the tail of a conventional aircraft, where the vertical and horizontal components of the tail are set at right angles. stealth aircraft such as the f - 117 use a different arrangement, tilting the tail surfaces to reduce corner reflections formed between them. a more radical method is to omit the tail, as in the b - 2 spirit. the b - 2 ' s clean, low - drag flying wing configuration gives it exceptional range and reduces its radar profile. the flying wing design most closely resembles a so - called infinite flat plate ( as vertical control surfaces dramatically increase rcs ), the perfect stealth shape, as it would have no angles to reflect back radar waves. in addition to altering the tail, stealth design must bury the engines within the wing or fuselage, or in some cases where stealth is applied to an extant aircraft, install baffles in the air intakes, so that the compressor blades are not visible to radar. a stealthy shape must be devoid of complex bumps or protrusions of any kind, meaning that weapons, fuel tanks, and other stores must not be carried externally. any stealthy vehicle becomes un - stealthy when a door or hatch opens. parallel alignment of edges or even surfaces is also often used in stealth designs. the technique involves using a small number of edge orientations in the shape of the structure. for example, on the f - 22a raptor, the leading edges of the wing and the tail planes are set at the same angle. other smaller structures, such as the air intake bypass doors and the air refueling aperture, also use the same angles. the effect of this is to return a narrow radar signal in a very specific direction away from the radar
in mathematics, a reflection ( also spelled reflexion ) is a mapping from a euclidean space to itself that is an isometry with a hyperplane as the set of fixed points ; this set is called the axis ( in dimension 2 ) or plane ( in dimension 3 ) of reflection. the image of a figure by a reflection is its mirror image in the axis or plane of reflection. for example the mirror image of the small latin letter p for a reflection with respect to a vertical axis ( a vertical reflection ) would look like q. its image by reflection in a horizontal axis ( a horizontal reflection ) would look like b. a reflection is an involution : when applied twice in succession, every point returns to its original location, and every geometrical object is restored to its original state. the term reflection is sometimes used for a larger class of mappings from a euclidean space to itself, namely the non - identity isometries that are involutions. the set of fixed points ( the " mirror " ) of such an isometry is an affine subspace, but is possibly smaller than a hyperplane. for instance a reflection through a point is an involutive isometry with just one fixed point ; the image of the letter p under it would look like a d. this operation is also known as a central inversion ( coxeter 1969, § 7. 2 ), and exhibits euclidean space as a symmetric space. in a euclidean vector space, the reflection in the point situated at the origin is the same as vector negation. other examples include reflections in a line in three - dimensional space. typically, however, unqualified use of the term " reflection " means reflection in a hyperplane. some mathematicians use " flip " as a synonym for " reflection ". = = construction = = in a plane ( or, respectively, 3 - dimensional ) geometry, to find the reflection of a point drop a perpendicular from the point to the line ( plane ) used for reflection, and extend it the same distance on the other side. to find the reflection of a figure, reflect each point in the figure. to reflect point p through the line ab using compass and straightedge, proceed as follows ( see figure ) : step 1 ( red ) : construct a circle with center at p and some fixed radius r to create points a ′ and b ′ on the line ab, which will be equidistant from p. step 2 ( green ) : construct circles centered at a ′ and b ′ having radius r
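The perpendicular-drop construction above translates directly into coordinates. The following is a minimal sketch (the function name and sample points are my own, not from the text): it reflects a 2D point across the line through two points a and b by projecting the point onto the line (the foot of the perpendicular) and extending the same distance to the other side, and it then checks the involution property that reflecting twice restores the original point.

```python
# Reflect a 2D point p across the line through points a and b.
# Names and sample values are illustrative only.

def reflect_across_line(p, a, b):
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay                       # direction of the mirror line
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    fx, fy = ax + t * dx, ay + t * dy               # foot of the perpendicular from p
    return (2 * fx - px, 2 * fy - py)               # same distance on the other side

p = (3.0, 4.0)
axis_a, axis_b = (0.0, 0.0), (1.0, 0.0)             # the x-axis as the mirror line
image = reflect_across_line(p, axis_a, axis_b)
print(image)                                        # (3.0, -4.0)

# a reflection is an involution: applying it twice returns the original point
print(reflect_across_line(image, axis_a, axis_b))   # (3.0, 4.0)
```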
in gravitational lensing, the concept of optical depth assumes the lens is dark. several microlensing detections have now been made where the lens may be bright. relations are developed between apparent and absolute optical depth in the regime of the apparent and absolute brightness of the lens. an apparent optical depth through bright lenses is always less than the true, absolute optical depth. the greater the intrinsic brightness of the lens, the more likely it will be found nearer the source.
also called projection lines ) differs, as explained below. in first - angle projection, the parallel projectors originate as if radiated from behind the viewer and pass through the 3d object to project a 2d image onto the orthogonal plane behind it. the 3d object is projected into 2d " paper " space as if you were looking at a radiograph of the object : the top view is under the front view, the right view is at the left of the front view. first - angle projection is the iso standard and is primarily used in europe. in third - angle projection, the parallel projectors originate as if radiated from the far side of the object and pass through the 3d object to project a 2d image onto the orthogonal plane in front of it. the views of the 3d object are like the panels of a box that envelopes the object, and the panels pivot as they open up flat into the plane of the drawing. thus the left view is placed on the left and the top view on the top ; and the features closest to the front of the 3d object will appear closest to the front view in the drawing. third - angle projection is primarily used in the united states and canada, where it is the default projection system according to asme standard asme y14. 3m. until the late 19th century, first - angle projection was the norm in north america as well as europe ; but circa the 1890s, third - angle projection spread throughout the north american engineering and manufacturing communities to the point of becoming a widely followed convention, and it was an asa standard by the 1950s. circa world war i, british practice was frequently mixing the use of both projection methods. as shown above, the determination of what surface constitutes the front, back, top, and bottom varies depending on the projection method used. not all views are necessarily used. generally only as many views are used as are necessary to convey all needed information clearly and economically. the front, top, and right - side views are commonly considered the core group of views included by default, but any combination of views may be used depending on the needs of the particular design. in addition to the six principal views ( front, back, top, bottom, right side, left side ), any auxiliary views or sections may be included as serve the purposes of part definition and its communication. view lines or section lines ( lines with arrows marked " a - a ", " b - b ", etc. ) define the direction and location of viewing or sectioning. sometimes a note tells the reader in which zone
, behind which are structures termed reentrant triangles. radar waves penetrating the skin get trapped in these structures, reflecting off the internal faces and losing energy. this method was first used on the blackbird series : a - 12, yf - 12a, lockheed sr - 71 blackbird. the most efficient way to reflect radar waves back to the emitting radar is with orthogonal metal plates, forming a corner reflector consisting of either a dihedral ( two plates ) or a trihedral ( three orthogonal plates ). this configuration occurs in the tail of a conventional aircraft, where the vertical and horizontal components of the tail are set at right angles. stealth aircraft such as the f - 117 use a different arrangement, tilting the tail surfaces to reduce corner reflections formed between them. a more radical method is to omit the tail, as in the b - 2 spirit. the b - 2 ' s clean, low - drag flying wing configuration gives it exceptional range and reduces its radar profile. the flying wing design most closely resembles a so - called infinite flat plate ( as vertical control surfaces dramatically increase rcs ), the perfect stealth shape, as it would have no angles to reflect back radar waves. in addition to altering the tail, stealth design must bury the engines within the wing or fuselage, or in some cases where stealth is applied to an extant aircraft, install baffles in the air intakes, so that the compressor blades are not visible to radar. a stealthy shape must be devoid of complex bumps or protrusions of any kind, meaning that weapons, fuel tanks, and other stores must not be carried externally. any stealthy vehicle becomes un - stealthy when a door or hatch opens. parallel alignment of edges or even surfaces is also often used in stealth designs. the technique involves using a small number of edge orientations in the shape of the structure. for example, on the f - 22a raptor, the leading edges of the wing and the tail planes are set at the same angle. other smaller structures, such as the air intake bypass doors and the air refueling aperture, also use the same angles. the effect of this is to return a narrow radar signal in a very specific direction away from the radar emitter rather than returning a diffuse signal detectable at many angles. the effect is sometimes called " glitter " after the very brief signal seen when the reflected beam passes across a detector. it can be difficult for the radar operator to distinguish between a glitter event and a digital glitch in the processing system. stealth air
reflect radar waves back to the emitting radar is with orthogonal metal plates, forming a corner reflector consisting of either a dihedral ( two plates ) or a trihedral ( three orthogonal plates ). this configuration occurs in the tail of a conventional aircraft, where the vertical and horizontal components of the tail are set at right angles. stealth aircraft such as the f - 117 use a different arrangement, tilting the tail surfaces to reduce corner reflections formed between them. a more radical method is to omit the tail, as in the b - 2 spirit. the b - 2 ' s clean, low - drag flying wing configuration gives it exceptional range and reduces its radar profile. the flying wing design most closely resembles a so - called infinite flat plate ( as vertical control surfaces dramatically increase rcs ), the perfect stealth shape, as it would have no angles to reflect back radar waves. in addition to altering the tail, stealth design must bury the engines within the wing or fuselage, or in some cases where stealth is applied to an extant aircraft, install baffles in the air intakes, so that the compressor blades are not visible to radar. a stealthy shape must be devoid of complex bumps or protrusions of any kind, meaning that weapons, fuel tanks, and other stores must not be carried externally. any stealthy vehicle becomes un - stealthy when a door or hatch opens. parallel alignment of edges or even surfaces is also often used in stealth designs. the technique involves using a small number of edge orientations in the shape of the structure. for example, on the f - 22a raptor, the leading edges of the wing and the tail planes are set at the same angle. other smaller structures, such as the air intake bypass doors and the air refueling aperture, also use the same angles. the effect of this is to return a narrow radar signal in a very specific direction away from the radar emitter rather than returning a diffuse signal detectable at many angles. the effect is sometimes called " glitter " after the very brief signal seen when the reflected beam passes across a detector. it can be difficult for the radar operator to distinguish between a glitter event and a digital glitch in the processing system. stealth airframes sometimes display distinctive serrations on some exposed edges, such as the engine ports. the yf - 23 has such serrations on the exhaust ports. this is another example in the parallel alignment of features, this time on the external airframe. the shaping requirements detracted greatly from the f - 117 '
sight, hearing, touch, and sometimes smell ( e. g., in infection, uremia, diabetic ketoacidosis ). four actions are the basis of physical examination : inspection, palpation ( feel ), percussion ( tap to determine resonance characteristics ), and auscultation ( listen ), generally in that order, although auscultation occurs prior to percussion and palpation for abdominal assessments. the clinical examination involves the study of : abdomen and rectum cardiovascular ( heart and blood vessels ) general appearance of the patient and specific indicators of disease ( nutritional status, presence of jaundice, pallor or clubbing ) genitalia ( and pregnancy if the patient is or could be pregnant ) head, eye, ear, nose, and throat ( heent ) musculoskeletal ( including spine and extremities ) neurological ( consciousness, awareness, brain, vision, cranial nerves, spinal cord and peripheral nerves ) psychiatric ( orientation, mental state, mood, evidence of abnormal perception or thought ). respiratory ( large airways and lungs ) skin vital signs including height, weight, body temperature, blood pressure, pulse, respiration rate, and hemoglobin oxygen saturation it is to likely focus on areas of interest highlighted in the medical history and may not include everything listed above. the treatment plan may include ordering additional medical laboratory tests and medical imaging studies, starting therapy, referral to a specialist, or watchful observation. a follow - up may be advised. depending upon the health insurance plan and the managed care system, various forms of " utilization review ", such as prior authorization of tests, may place barriers on accessing expensive services. the medical decision - making ( mdm ) process includes the analysis and synthesis of all the above data to come up with a list of possible diagnoses ( the differential diagnoses ), along with an idea of what needs to be done to obtain a definitive diagnosis that would explain the patient ' s problem. on subsequent visits, the process may be repeated in an abbreviated manner to obtain any new history, symptoms, physical findings, lab or imaging results, or specialist consultations. = = institutions = = contemporary medicine is, in general, conducted within health care systems. legal, credentialing, and financing frameworks are established by individual governments, augmented on occasion by international organizations, such as churches. the characteristics of any given health care system have a significant impact on the way medical care is provided. from ancient times,
scientists look through telescopes, study images on electronic screens, record meter readings, and so on. generally, on a basic level, they can agree on what they see, e. g., the thermometer shows 37. 9 degrees c. but, if these scientists have different ideas about the theories that have been developed to explain these basic observations, they may disagree about what they are observing. for example, before albert einstein ' s general theory of relativity, observers would have likely interpreted an image of the einstein cross as five different objects in space. in light of that theory, however, astronomers will tell you that there are actually only two objects, one in the center and four different images of a second object around the sides. alternatively, if other scientists suspect that something is wrong with the telescope and only one object is actually being observed, they are operating under yet another theory. observations that cannot be separated from theoretical interpretation are said to be theory - laden. all observation involves both perception and cognition. that is, one does not make an observation passively, but rather is actively engaged in distinguishing the phenomenon being observed from surrounding sensory data. therefore, observations are affected by one ' s underlying understanding of the way in which the world functions, and that understanding may influence what is perceived, noticed, or deemed worthy of consideration. in this sense, it can be argued that all observation is theory - laden. = = = the purpose of science = = = should science aim to determine ultimate truth, or are there questions that science cannot answer? scientific realists claim that science aims at truth and that one ought to regard scientific theories as true, approximately true, or likely true. conversely, scientific anti - realists argue that science does not aim ( or at least does not succeed ) at truth, especially truth about unobservables like electrons or other universes. instrumentalists argue that scientific theories should only be evaluated on whether they are useful. in their view, whether theories are true or not is beside the point, because the purpose of science is to make predictions and enable effective technology. realists often point to the success of recent scientific theories as evidence for the truth ( or near truth ) of current theories. antirealists point to either the many false theories in the history of science, epistemic morals, the success of false modeling assumptions, or widely termed postmodern criticisms of objectivity as evidence against scientific realism. antirealists attempt to explain the success of scientific theories without reference to truth. some antirealists claim that scientific
that shows the object as it looks from the front, right, left, top, bottom, or back ( e. g. the primary views ), and is typically positioned relative to each other according to the rules of either first - angle or third - angle projection. the origin and vector direction of the projectors ( also called projection lines ) differs, as explained below. in first - angle projection, the parallel projectors originate as if radiated from behind the viewer and pass through the 3d object to project a 2d image onto the orthogonal plane behind it. the 3d object is projected into 2d " paper " space as if you were looking at a radiograph of the object : the top view is under the front view, the right view is at the left of the front view. first - angle projection is the iso standard and is primarily used in europe. in third - angle projection, the parallel projectors originate as if radiated from the far side of the object and pass through the 3d object to project a 2d image onto the orthogonal plane in front of it. the views of the 3d object are like the panels of a box that envelopes the object, and the panels pivot as they open up flat into the plane of the drawing. thus the left view is placed on the left and the top view on the top ; and the features closest to the front of the 3d object will appear closest to the front view in the drawing. third - angle projection is primarily used in the united states and canada, where it is the default projection system according to asme standard asme y14. 3m. until the late 19th century, first - angle projection was the norm in north america as well as europe ; but circa the 1890s, third - angle projection spread throughout the north american engineering and manufacturing communities to the point of becoming a widely followed convention, and it was an asa standard by the 1950s. circa world war i, british practice was frequently mixing the use of both projection methods. as shown above, the determination of what surface constitutes the front, back, top, and bottom varies depending on the projection method used. not all views are necessarily used. generally only as many views are used as are necessary to convey all needed information clearly and economically. the front, top, and right - side views are commonly considered the core group of views included by default, but any combination of views may be used depending on the needs of the particular design. in addition to the six principal views ( front, back, top, bottom, right side, left side ),
nanodust, which undergoes stochastic heating by single starlight photons in the interstellar medium, ranges from angstrom - sized large molecules containing tens to thousands of atoms ( e. g. polycyclic aromatic hydrocarbon molecules ) to grains of a couple tens of nanometers. the presence of nanograins in astrophysical environments has been revealed by a variety of interstellar phenomena : the optical luminescence, the near - and mid - infrared emission, the galactic foreground microwave emission, and the ultraviolet extinction which are ubiquitously seen in the interstellar medium of the milky way and beyond. nanograins ( e. g. nanodiamonds ) have also been identified as presolar in primitive meteorites based on their isotopically anomalous composition. applying the very processes that lead to the detection of nanodust in the ism to the nanodust in the solar system shows that observing solar system nanodust through these processes is less likely.
Question: Which of these objects is visible because it reflects light toward the eye?
A) burning candle
B) flashlight bulb
C) glowing campfire log
D) shiny metallic balloon
|
D) shiny metallic balloon
|
Context:
behavioral responses to different stimuli, one can understand something about how those stimuli are processed. lewandowski & strohmetz ( 2009 ) reviewed a collection of innovative uses of behavioral measurement in psychology including behavioral traces, behavioral observations, and behavioral choice. behavioral traces are pieces of evidence that indicate behavior occurred, but the actor is not present ( e. g., litter in a parking lot or readings on an electric meter ). behavioral observations involve the direct witnessing of the actor engaging in the behavior ( e. g., watching how close a person sits next to another person ). behavioral choices are when a person selects between two or more options ( e. g., voting behavior, choice of a punishment for another participant ). reaction time. the time between the presentation of a stimulus and an appropriate response can indicate differences between two cognitive processes, and can indicate some things about their nature. for example, if in a search task the reaction times vary proportionally with the number of elements, then it is evident that this cognitive process of searching involves serial instead of parallel processing. psychophysical responses. psychophysical experiments are an old psychological technique, which has been adopted by cognitive psychology. they typically involve making judgments of some physical property, e. g. the loudness of a sound. correlation of subjective scales between individuals can show cognitive or sensory biases as compared to actual physical measurements. some examples include : sameness judgments for colors, tones, textures, etc. threshold differences for colors, tones, textures, etc. eye tracking. this methodology is used to study a variety of cognitive processes, most notably visual perception and language processing. the fixation point of the eyes is linked to an individual ' s focus of attention. thus, by monitoring eye movements, we can study what information is being processed at a given time. eye tracking allows us to study cognitive processes on extremely short time scales. eye movements reflect online decision making during a task, and they provide us with some insight into the ways in which those decisions may be processed. = = = brain imaging = = = brain imaging involves analyzing activity within the brain while performing various tasks. this allows us to link behavior and brain function to help understand how information is processed. different types of imaging techniques vary in their temporal ( time - based ) and spatial ( location - based ) resolution. brain imaging is often used in cognitive neuroscience. single - photon emission computed tomography and positron emission tomography. spect and pet use radioactive isotopes, which are injected into the subject ' s bloodstream
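The reaction-time argument in the passage above (search times that grow in proportion to the number of display items point to serial rather than parallel processing) can be shown with a few lines of code. This is a minimal sketch with made-up numbers; the function name and the example data are assumptions for illustration only.

```python
# Toy illustration: estimate how much reaction time grows per extra item.
# A clearly positive slope suggests serial search; a near-flat slope
# suggests parallel processing. The data are fabricated for the example.

def slope_ms_per_item(set_sizes, reaction_times_ms):
    """Least-squares slope of reaction time against display set size."""
    n = len(set_sizes)
    mean_x = sum(set_sizes) / n
    mean_y = sum(reaction_times_ms) / n
    num = sum((x - mean_x) * (y - mean_y)
              for x, y in zip(set_sizes, reaction_times_ms))
    den = sum((x - mean_x) ** 2 for x in set_sizes)
    return num / den

set_sizes = [2, 4, 8, 16]
serial_like_rts = [520, 610, 790, 1150]      # grows roughly 45 ms per extra item
parallel_like_rts = [515, 520, 518, 525]     # roughly flat across set sizes

print(round(slope_ms_per_item(set_sizes, serial_like_rts), 1))    # large positive slope -> serial
print(round(slope_ms_per_item(set_sizes, parallel_like_rts), 1))  # near zero -> parallel
```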
little is known about the polarization of gluons inside a longitudinally polarized proton. i report on the sensitivity of photoproduction experiments to this polarization. both jet and heavy quark production are considered.
is not present ( e. g., litter in a parking lot or readings on an electric meter ). behavioral observations involve the direct witnessing of the actor engaging in the behavior ( e. g., watching how close a person sits next to another person ). behavioral choices are when a person selects between two or more options ( e. g., voting behavior, choice of a punishment for another participant ). reaction time. the time between the presentation of a stimulus and an appropriate response can indicate differences between two cognitive processes, and can indicate some things about their nature. for example, if in a search task the reaction times vary proportionally with the number of elements, then it is evident that this cognitive process of searching involves serial instead of parallel processing. psychophysical responses. psychophysical experiments are an old psychological technique, which has been adopted by cognitive psychology. they typically involve making judgments of some physical property, e. g. the loudness of a sound. correlation of subjective scales between individuals can show cognitive or sensory biases as compared to actual physical measurements. some examples include : sameness judgments for colors, tones, textures, etc. threshold differences for colors, tones, textures, etc. eye tracking. this methodology is used to study a variety of cognitive processes, most notably visual perception and language processing. the fixation point of the eyes is linked to an individual ' s focus of attention. thus, by monitoring eye movements, we can study what information is being processed at a given time. eye tracking allows us to study cognitive processes on extremely short time scales. eye movements reflect online decision making during a task, and they provide us with some insight into the ways in which those decisions may be processed. = = = brain imaging = = = brain imaging involves analyzing activity within the brain while performing various tasks. this allows us to link behavior and brain function to help understand how information is processed. different types of imaging techniques vary in their temporal ( time - based ) and spatial ( location - based ) resolution. brain imaging is often used in cognitive neuroscience. single - photon emission computed tomography and positron emission tomography. spect and pet use radioactive isotopes, which are injected into the subject ' s bloodstream and taken up by the brain. by observing which areas of the brain take up the radioactive isotope, we can see which areas of the brain are more active than other areas. pet has similar spatial resolution to fmri, but it has extremely poor temporal resolution. electroencephalography. eeg measures the electrical fields
can be activated by inducers are called inducible genes, in contrast to constitutive genes that are almost constantly active. in contrast to both, structural genes encode proteins that are not involved in gene regulation. in addition to regulatory events involving the promoter, gene expression can also be regulated by epigenetic changes to chromatin, which is a complex of dna and protein found in eukaryotic cells. = = = genes, development, and evolution = = = development is the process by which a multicellular organism ( plant or animal ) goes through a series of changes, starting from a single cell, and taking on various forms that are characteristic of its life cycle. there are four key processes that underlie development : determination, differentiation, morphogenesis, and growth. determination sets the developmental fate of a cell, which becomes more restrictive during development. differentiation is the process by which specialized cells arise from less specialized cells such as stem cells. stem cells are undifferentiated or partially differentiated cells that can differentiate into various types of cells and proliferate indefinitely to produce more of the same stem cell. cellular differentiation dramatically changes a cell ' s size, shape, membrane potential, metabolic activity, and responsiveness to signals, which are largely due to highly controlled modifications in gene expression and epigenetics. with a few exceptions, cellular differentiation almost never involves a change in the dna sequence itself. thus, different cells can have very different physical characteristics despite having the same genome. morphogenesis, or the development of body form, is the result of spatial differences in gene expression. a small fraction of the genes in an organism ' s genome called the developmental - genetic toolkit control the development of that organism. these toolkit genes are highly conserved among phyla, meaning that they are ancient and very similar in widely separated groups of animals. differences in deployment of toolkit genes affect the body plan and the number, identity, and pattern of body parts. among the most important toolkit genes are the hox genes. hox genes determine where repeating parts, such as the many vertebrae of snakes, will grow in a developing embryo or larva. = = evolution = = = = = evolutionary processes = = = evolution is a central organizing concept in biology. it is the change in heritable characteristics of populations over successive generations. in artificial selection, animals were selectively bred for specific traits. given that traits are inherited, populations contain a varied mix of traits, and reproduction is able to increase any population,
usability engineering, it ' s important target and identify human errors when interacting with the product of interest because if a user is expected to engage with a product, interface, or service in some way, the very introduction of a human in that engagement increases the potential of encountering human error. error should be reduced as much as possible in order to avoid frustration or injury. there are two main types of human errors which are categorized as slips and mistakes. slips are a very common kind of error involving automatic behaviors ( i. e. typos, hitting the wrong menu item ). when we experience slips, we have the correct goal in mind, but execute the wrong action. mistakes on the other hand involve conscious deliberation that result in the incorrect conclusion. when we experience mistakes, we have the wrong goal in mind and thereby execute the wrong action. even though slips are the more common type of error, they are no less dangerous. a certain type of slip error, a mode error, can be especially dangerous if a user is executing a high - risk task. for instance, if a user is operating a vehicle and does not realize they are in the wrong mode ( i. e. reverse ), they might step on the gas intending to drive, but instead accelerate into a garage wall or another car. in order to avoid modal errors, designers often employ modeless states in which users do not have to choose a mode at all, or they must execute a continuous action while intending to execute a certain mode ( i. e. pressing a key continuously in order to activate " lasso " mode in photoshop ). = = evaluation methods = = usability engineers conduct usability evaluations of existing or proposed interfaces and their findings are fed back to the designer for use in design or redesign. common usability evaluation methods include : card sorting cognitive task analysis cognitive walkthroughs contextual inquiry focus groups heuristic evaluations interviews questionnaires rite method surveys think aloud protocol usability testing = = software applications and development tools = = there are a variety of online resources that make the job of a usability engineer a little easier. online tools are only a useful tool, and do not substitute for a complete usability engineering analysis. some examples of these include : = = = the web metrics tool suite = = = this is a product of the national institute of standards and technology. this toolkit is focused on evaluating the html of a website versus a wide range of usability guidelines and includes : web static analyzer tool
as possible in order to avoid frustration or injury. there are two main types of human errors which are categorized as slips and mistakes. slips are a very common kind of error involving automatic behaviors ( i. e. typos, hitting the wrong menu item ). when we experience slips, we have the correct goal in mind, but execute the wrong action. mistakes on the other hand involve conscious deliberation that result in the incorrect conclusion. when we experience mistakes, we have the wrong goal in mind and thereby execute the wrong action. even though slips are the more common type of error, they are no less dangerous. a certain type of slip error, a mode error, can be especially dangerous if a user is executing a high - risk task. for instance, if a user is operating a vehicle and does not realize they are in the wrong mode ( i. e. reverse ), they might step on the gas intending to drive, but instead accelerate into a garage wall or another car. in order to avoid modal errors, designers often employ modeless states in which users do not have to choose a mode at all, or they must execute a continuous action while intending to execute a certain mode ( i. e. pressing a key continuously in order to activate " lasso " mode in photoshop ). = = evaluation methods = = usability engineers conduct usability evaluations of existing or proposed interfaces and their findings are fed back to the designer for use in design or redesign. common usability evaluation methods include : card sorting cognitive task analysis cognitive walkthroughs contextual inquiry focus groups heuristic evaluations interviews questionnaires rite method surveys think aloud protocol usability testing = = software applications and development tools = = there are a variety of online resources that make the job of a usability engineer a little easier. online tools are only a useful tool, and do not substitute for a complete usability engineering analysis. some examples of these include : = = = the web metrics tool suite = = = this is a product of the national institute of standards and technology. this toolkit is focused on evaluating the html of a website versus a wide range of usability guidelines and includes : web static analyzer tool ( websat ) β checks web page html against typical usability guidelines web category analysis tool ( webcat ) β lets the usability engineer construct and conduct a web category analysis web variable instrumenter program ( webvip ) β instruments a website to capture a log of user interaction framework for logging usability data ( flu
octet hyperon charge radii are calculated in a chiral constituent quark model including electromagnetic exchange currents between quarks. in the impulse approximation one observes a decrease of the hyperon charge radii with increasing strangeness. this effect is reduced by exchange currents. due to exchange currents, the charge radii of the negatively charged hyperons are close to the proton charge radius.
from the oil of jasminum grandiflorum which regulates wound responses in plants by unblocking the expression of genes required in the systemic acquired resistance response to pathogen attack. in addition to being the primary energy source for plants, light functions as a signalling device, providing information to the plant, such as how much sunlight the plant receives each day. this can result in adaptive changes in a process known as photomorphogenesis. phytochromes are the photoreceptors in a plant that are sensitive to light. = = plant anatomy and morphology = = plant anatomy is the study of the structure of plant cells and tissues, whereas plant morphology is the study of their external form. all plants are multicellular eukaryotes, their dna stored in nuclei. the characteristic features of plant cells that distinguish them from those of animals and fungi include a primary cell wall composed of the polysaccharides cellulose, hemicellulose and pectin, larger vacuoles than in animal cells and the presence of plastids with unique photosynthetic and biosynthetic functions as in the chloroplasts. other plastids contain storage products such as starch ( amyloplasts ) or lipids ( elaioplasts ). uniquely, streptophyte cells and those of the green algal order trentepohliales divide by construction of a phragmoplast as a template for building a cell plate late in cell division. the bodies of vascular plants including clubmosses, ferns and seed plants ( gymnosperms and angiosperms ) generally have aerial and subterranean subsystems. the shoots consist of stems bearing green photosynthesising leaves and reproductive structures. the underground vascularised roots bear root hairs at their tips and generally lack chlorophyll. non - vascular plants, the liverworts, hornworts and mosses do not produce ground - penetrating vascular roots and most of the plant participates in photosynthesis. the sporophyte generation is nonphotosynthetic in liverworts but may be able to contribute part of its energy needs by photosynthesis in mosses and hornworts. the root system and the shoot system are interdependent β the usually nonphotosynthetic root system depends on the shoot system for food, and the usually photosynthetic shoot system depends on water and minerals from the root system. cells in each system are capable
metal hydrides have earlier been suggested for utilization in solar cells. with this as a motivation, we have prepared thin films of yttrium hydride by reactive magnetron sputter deposition. the resulting films are metallic for low partial pressure of hydrogen during the deposition, and black or yellow - transparent for higher partial pressure of hydrogen. both metallic and semiconducting transparent yhx films have been prepared directly in - situ, without the need for capping layers and post - deposition hydrogenation. optically the films are similar to what is found for yhx films prepared by other techniques, but the crystal structure of the transparent films differs from the well - known yh3 phase, as they have an fcc lattice instead of hcp.
to that of a flat crack through the plain matrix. the magnitude of the toughening is determined by the mismatch strain caused by thermal contraction incompatibility and the microfracture resistance of the particle / matrix interface. the toughening becomes noticeable with a narrow size distribution of appropriately sized particles, and researchers typically accept that deflection effects in materials with roughly equiaxial grains may increase the fracture toughness by about twice the grain boundary value. the model reveals that the increase in toughness is dependent on particle shape and the volume fraction of the second phase, with the most effective morphology being the rod of high aspect ratio, which can account for a fourfold increase in fracture toughness. the toughening arises primarily from the twist of the crack front between particles, as indicated by deflection profiles. disc - shaped particles and spheres are less effective in toughening. fracture toughness, regardless of morphology, is determined by the twist of the crack front at its most severe configuration, rather than the initial tilt of the crack front. only for disc - shaped particles does the initial tilting of the crack front provide significant toughening ; however, the twist component still overrides the tilt - derived toughening. additional important features of the deflection analysis include the appearance of asymptotic toughening for the three morphologies at volume fractions in excess of 0. 2. it is also noted that a significant influence on the toughening by spherical particles is exerted by the interparticle spacing distribution ; greater toughening is afforded when spheres are nearly contacting such that twist angles approach Ο / 2. these predictions provide the basis for the design of high - toughness two - phase ceramic materials. the ideal second phase, in addition to maintaining chemical compatibility, should be present in amounts of 10 to 20 volume percent. greater amounts may diminish the toughness increase due to overlapping particles. particles with high aspect ratios, especially those with rod - shaped morphologies, are most suitable for maximum toughening. this model is often used to determine the factors that contribute to the increase in fracture toughness in ceramics which is ultimately useful in the development of advanced ceramic materials with improved performance. = = theory of chemical processing = = = = = microstructural uniformity = = = in the processing of fine ceramics, the irregular particle sizes and shapes in a typical powder often lead to non - uniform packing morphologies that result in packing density variations in the powder compact. uncontrolled aggl
Question: Which dog trait is a learned behavior?
A) blinking its eyes
B) scratching an itch
C) panting to cool off
D) jumping to catch a ball
|
D) jumping to catch a ball
|
Context:
has rest mass and volume ( it takes up space ) and is made up of particles. the particles that make up matter have rest mass as well β not all particles have rest mass, such as the photon. matter can be a pure chemical substance or a mixture of substances. = = = = atom = = = = the atom is the basic unit of chemistry. it consists of a dense core called the atomic nucleus surrounded by a space occupied by an electron cloud. the nucleus is made up of positively charged protons and uncharged neutrons ( together called nucleons ), while the electron cloud consists of negatively charged electrons which orbit the nucleus. in a neutral atom, the negatively charged electrons balance out the positive charge of the protons. the nucleus is dense ; the mass of a nucleon is approximately 1, 836 times that of an electron, yet the radius of an atom is about 10, 000 times that of its nucleus. the atom is also the smallest entity that can be envisaged to retain the chemical properties of the element, such as electronegativity, ionization potential, preferred oxidation state ( s ), coordination number, and preferred types of bonds to form ( e. g., metallic, ionic, covalent ). = = = = element = = = = a chemical element is a pure substance which is composed of a single type of atom, characterized by its particular number of protons in the nuclei of its atoms, known as the atomic number and represented by the symbol z. the mass number is the sum of the number of protons and neutrons in a nucleus. although all the nuclei of all atoms belonging to one element will have the same atomic number, they may not necessarily have the same mass number ; atoms of an element which have different mass numbers are known as isotopes. for example, all atoms with 6 protons in their nuclei are atoms of the chemical element carbon, but atoms of carbon may have mass numbers of 12 or 13. the standard presentation of the chemical elements is in the periodic table, which orders elements by atomic number. the periodic table is arranged in groups, or columns, and periods, or rows. the periodic table is useful in identifying periodic trends. = = = = compound = = = = a compound is a pure chemical substance composed of more than one element. the properties of a compound bear little similarity to those of its elements. the standard nomenclature of compounds is set by the international union of pure and applied chemistry ( iupac ). organic compounds are named
set of chemical reactions with other substances. however, this definition only works well for substances that are composed of molecules, which is not true of many substances ( see below ). molecules are typically a set of atoms bound together by covalent bonds, such that the structure is electrically neutral and all valence electrons are paired with other electrons either in bonds or in lone pairs. thus, molecules exist as electrically neutral units, unlike ions. when this rule is broken, giving the " molecule " a charge, the result is sometimes named a molecular ion or a polyatomic ion. however, the discrete and separate nature of the molecular concept usually requires that molecular ions be present only in well - separated form, such as a directed beam in a vacuum in a mass spectrometer. charged polyatomic collections residing in solids ( for example, common sulfate or nitrate ions ) are generally not considered " molecules " in chemistry. some molecules contain one or more unpaired electrons, creating radicals. most radicals are comparatively reactive, but some, such as nitric oxide ( no ) can be stable. the " inert " or noble gas elements ( helium, neon, argon, krypton, xenon and radon ) are composed of lone atoms as their smallest discrete unit, but the other isolated chemical elements consist of either molecules or networks of atoms bonded to each other in some way. identifiable molecules compose familiar substances such as water, air, and many organic compounds like alcohol, sugar, gasoline, and the various pharmaceuticals. however, not all substances or chemical compounds consist of discrete molecules, and indeed most of the solid substances that make up the solid crust, mantle, and core of the earth are chemical compounds without molecules. these other types of substances, such as ionic compounds and network solids, are organized in such a way as to lack the existence of identifiable molecules per se. instead, these substances are discussed in terms of formula units or unit cells as the smallest repeating structure within the substance. examples of such substances are mineral salts ( such as table salt ), solids like carbon and diamond, metals, and familiar silica and silicate minerals such as quartz and granite. one of the main characteristics of a molecule is its geometry often called its structure. while the structure of diatomic, triatomic or tetra - atomic molecules may be trivial, ( linear, angular pyramidal etc. ) the structure of polyatomic molecules, that are constituted of more than six atoms ( of several elements ) can be crucial for its chemical nature.
strangelets ( stable lumps of quark matter ) can have masses and charges much higher than those of nuclei, but have very low charge - to - mass ratios. this is confirmed in a relativistic thomas - fermi model. the high charge allows astrophysical strangelet acceleration to energies orders of magnitude higher than for protons. in addition, strangelets are much less susceptible to the interactions with the cosmic microwave background that suppress the flux of cosmic ray protons and nuclei above energies of $10^{19}$--$10^{20}$ ev ( the gzk - cutoff ). this makes strangelets an interesting possibility for explaining ultra - high energy cosmic rays.
other electrons either in bonds or in lone pairs. thus, molecules exist as electrically neutral units, unlike ions. when this rule is broken, giving the " molecule " a charge, the result is sometimes named a molecular ion or a polyatomic ion. however, the discrete and separate nature of the molecular concept usually requires that molecular ions be present only in well - separated form, such as a directed beam in a vacuum in a mass spectrometer. charged polyatomic collections residing in solids ( for example, common sulfate or nitrate ions ) are generally not considered " molecules " in chemistry. some molecules contain one or more unpaired electrons, creating radicals. most radicals are comparatively reactive, but some, such as nitric oxide ( no ) can be stable. the " inert " or noble gas elements ( helium, neon, argon, krypton, xenon and radon ) are composed of lone atoms as their smallest discrete unit, but the other isolated chemical elements consist of either molecules or networks of atoms bonded to each other in some way. identifiable molecules compose familiar substances such as water, air, and many organic compounds like alcohol, sugar, gasoline, and the various pharmaceuticals. however, not all substances or chemical compounds consist of discrete molecules, and indeed most of the solid substances that make up the solid crust, mantle, and core of the earth are chemical compounds without molecules. these other types of substances, such as ionic compounds and network solids, are organized in such a way as to lack the existence of identifiable molecules per se. instead, these substances are discussed in terms of formula units or unit cells as the smallest repeating structure within the substance. examples of such substances are mineral salts ( such as table salt ), solids like carbon and diamond, metals, and familiar silica and silicate minerals such as quartz and granite. one of the main characteristics of a molecule is its geometry often called its structure. while the structure of diatomic, triatomic or tetra - atomic molecules may be trivial, ( linear, angular pyramidal etc. ) the structure of polyatomic molecules, that are constituted of more than six atoms ( of several elements ) can be crucial for its chemical nature. = = = = substance and mixture = = = = a chemical substance is a kind of matter with a definite composition and set of properties. a collection of substances is called a mixture. examples of mixtures are air and alloys. = = = = mole and amount of substance = = = = the mole is a unit
ion or cation. when an atom gains an electron and thus has more electrons than protons, the atom is a negatively charged ion or anion. cations and anions can form a crystalline lattice of neutral salts, such as the na + and clβ ions forming sodium chloride, or nacl. examples of polyatomic ions that do not split up during acid β base reactions are hydroxide ( ohβ ) and phosphate ( po43β ). plasma is composed of gaseous matter that has been completely ionized, usually through high temperature. = = = acidity and basicity = = = a substance can often be classified as an acid or a base. there are several different theories which explain acid β base behavior. the simplest is arrhenius theory, which states that an acid is a substance that produces hydronium ions when it is dissolved in water, and a base is one that produces hydroxide ions when dissolved in water. according to brΓΈnsted β lowry acid β base theory, acids are substances that donate a positive hydrogen ion to another substance in a chemical reaction ; by extension, a base is the substance which receives that hydrogen ion. a third common theory is lewis acid β base theory, which is based on the formation of new chemical bonds. lewis theory explains that an acid is a substance which is capable of accepting a pair of electrons from another substance during the process of bond formation, while a base is a substance which can provide a pair of electrons to form a new bond. there are several other ways in which a substance may be classified as an acid or a base, as is evident in the history of this concept. acid strength is commonly measured by two methods. one measurement, based on the arrhenius definition of acidity, is ph, which is a measurement of the hydronium ion concentration in a solution, as expressed on a negative logarithmic scale. thus, solutions that have a low ph have a high hydronium ion concentration and can be said to be more acidic. the other measurement, based on the brΓΈnsted β lowry definition, is the acid dissociation constant ( ka ), which measures the relative ability of a substance to act as an acid under the brΓΈnsted β lowry definition of an acid. that is, substances with a higher ka are more likely to donate hydrogen ions in chemical reactions than those with lower ka values. = = = redox = = = redox ( reduction - oxidation ) reactions include all chemical reactions in which atoms have their
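Illustrative note (not part of the source passage): the arrhenius-based pH measurement described above is simply the negative base-10 logarithm of the hydronium ion concentration, and the Bronsted-Lowry comparison comes down to ranking acid dissociation constants. A minimal Python sketch; the concentrations and Ka values below are hypothetical and chosen only for illustration.

```python
import math

def ph_from_hydronium(h3o_molar: float) -> float:
    # pH = -log10([H3O+]); a lower pH means a higher hydronium concentration (more acidic)
    return -math.log10(h3o_molar)

print(ph_from_hydronium(1e-3))  # 3.0 -> acidic solution
print(ph_from_hydronium(1e-7))  # 7.0 -> neutral water at room temperature

# Bronsted-Lowry-style comparison: a larger acid dissociation constant (Ka)
# means the substance donates hydrogen ions more readily. Values are made up.
hypothetical_acids = {"acid_a": 1.8e-5, "acid_b": 6.3e-2}
print(max(hypothetical_acids, key=hypothetical_acids.get))  # acid_b is the stronger acid
```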
it is believed that there may have been a large number of black holes formed in the very early universe. these would have quantised masses. a charged ` ` elementary black hole ' ' ( with the minimum possible mass ) can capture electrons, protons and other charged particles to form a ` ` black hole atom ' '. we find the spectrum of such an object with a view to laboratory and astronomical observation of them, and estimate the lifetime of the bound states. there is no limit to the charge of the black hole, which gives us the possibility of observing z > 137 bound states and transitions at the lower continuum. negatively charged black holes can capture protons. for z > 1, the orbiting protons will coalesce to form a nucleus ( after beta - decay of some protons to neutrons ), with a stability curve different to that of free nuclei. in this system there is also the distinct possibility of single quark capture. this leads to the formation of a coloured black hole that plays the role of an extremely heavy quark interacting strongly with the other two quarks. finally we consider atoms formed with much larger black holes.
index chemical substances. in this scheme each chemical substance is identifiable by a number known as its cas registry number. = = = = molecule = = = = a molecule is the smallest indivisible portion of a pure chemical substance that has its unique set of chemical properties, that is, its potential to undergo a certain set of chemical reactions with other substances. however, this definition only works well for substances that are composed of molecules, which is not true of many substances ( see below ). molecules are typically a set of atoms bound together by covalent bonds, such that the structure is electrically neutral and all valence electrons are paired with other electrons either in bonds or in lone pairs. thus, molecules exist as electrically neutral units, unlike ions. when this rule is broken, giving the " molecule " a charge, the result is sometimes named a molecular ion or a polyatomic ion. however, the discrete and separate nature of the molecular concept usually requires that molecular ions be present only in well - separated form, such as a directed beam in a vacuum in a mass spectrometer. charged polyatomic collections residing in solids ( for example, common sulfate or nitrate ions ) are generally not considered " molecules " in chemistry. some molecules contain one or more unpaired electrons, creating radicals. most radicals are comparatively reactive, but some, such as nitric oxide ( no ) can be stable. the " inert " or noble gas elements ( helium, neon, argon, krypton, xenon and radon ) are composed of lone atoms as their smallest discrete unit, but the other isolated chemical elements consist of either molecules or networks of atoms bonded to each other in some way. identifiable molecules compose familiar substances such as water, air, and many organic compounds like alcohol, sugar, gasoline, and the various pharmaceuticals. however, not all substances or chemical compounds consist of discrete molecules, and indeed most of the solid substances that make up the solid crust, mantle, and core of the earth are chemical compounds without molecules. these other types of substances, such as ionic compounds and network solids, are organized in such a way as to lack the existence of identifiable molecules per se. instead, these substances are discussed in terms of formula units or unit cells as the smallest repeating structure within the substance. examples of such substances are mineral salts ( such as table salt ), solids like carbon and diamond, metals, and familiar silica and silicate minerals such as quartz and granite. one of the main characteristics of a molecule is its geometry
the united rest mass and charge of a particle correspond to two forms of the same regularity in the unified nature of its ultimate structure. each of them contains electric, weak, strong and gravitational contributions. as a consequence, the force of attraction between two neutrinos and the force of their repulsion must be defined from the point of view of any of the existing types of interaction. therefore, to understand the nature of micro world interactions at the fundamental level, one must use the fact that each of the four well - known forces includes both a newton - like and a coulomb - like component. the opinion has been expressed that the existence of the gravitational parts of the united rest mass and charge would imply the availability of a fifth force which comes forward in the system as a unified whole.
charges in the nuclei and the negative charges oscillating about them. more than simple attraction and repulsion, the energies and distributions characterize the availability of an electron to bond to another atom. the chemical bond can be a covalent bond, an ionic bond, a hydrogen bond or just because of van der waals force. each of these kinds of bonds is ascribed to some potential. these potentials create the interactions which hold atoms together in molecules or crystals. in many simple compounds, valence bond theory, the valence shell electron pair repulsion model ( vsepr ), and the concept of oxidation number can be used to explain molecular structure and composition. an ionic bond is formed when a metal loses one or more of its electrons, becoming a positively charged cation, and the electrons are then gained by the non - metal atom, becoming a negatively charged anion. the two oppositely charged ions attract one another, and the ionic bond is the electrostatic force of attraction between them. for example, sodium ( na ), a metal, loses one electron to become an na + cation while chlorine ( cl ), a non - metal, gains this electron to become clβ. the ions are held together due to electrostatic attraction, and that compound sodium chloride ( nacl ), or common table salt, is formed. in a covalent bond, one or more pairs of valence electrons are shared by two atoms : the resulting electrically neutral group of bonded atoms is termed a molecule. atoms will share valence electrons in such a way as to create a noble gas electron configuration ( eight electrons in their outermost shell ) for each atom. atoms that tend to combine in such a way that they each have eight electrons in their valence shell are said to follow the octet rule. however, some elements like hydrogen and lithium need only two electrons in their outermost shell to attain this stable configuration ; these atoms are said to follow the duet rule, and in this way they are reaching the electron configuration of the noble gas helium, which has two electrons in its outer shell. similarly, theories from classical physics can be used to predict many ionic structures. with more complicated compounds, such as metal complexes, valence bond theory is less applicable and alternative approaches, such as the molecular orbital theory, are generally used. = = = energy = = = in the context of chemistry, energy is an attribute of a substance as a consequence of its atomic, molecular or aggregate structure. since a chemical transformation is accompanied by a change
g. spectroscopy and chromatography. scientists engaged in chemical research are known as chemists. most chemists specialize in one or more sub - disciplines. several concepts are essential for the study of chemistry ; some of them are : = = = matter = = = in chemistry, matter is defined as anything that has rest mass and volume ( it takes up space ) and is made up of particles. the particles that make up matter have rest mass as well β not all particles have rest mass, such as the photon. matter can be a pure chemical substance or a mixture of substances. = = = = atom = = = = the atom is the basic unit of chemistry. it consists of a dense core called the atomic nucleus surrounded by a space occupied by an electron cloud. the nucleus is made up of positively charged protons and uncharged neutrons ( together called nucleons ), while the electron cloud consists of negatively charged electrons which orbit the nucleus. in a neutral atom, the negatively charged electrons balance out the positive charge of the protons. the nucleus is dense ; the mass of a nucleon is approximately 1, 836 times that of an electron, yet the radius of an atom is about 10, 000 times that of its nucleus. the atom is also the smallest entity that can be envisaged to retain the chemical properties of the element, such as electronegativity, ionization potential, preferred oxidation state ( s ), coordination number, and preferred types of bonds to form ( e. g., metallic, ionic, covalent ). = = = = element = = = = a chemical element is a pure substance which is composed of a single type of atom, characterized by its particular number of protons in the nuclei of its atoms, known as the atomic number and represented by the symbol z. the mass number is the sum of the number of protons and neutrons in a nucleus. although all the nuclei of all atoms belonging to one element will have the same atomic number, they may not necessarily have the same mass number ; atoms of an element which have different mass numbers are known as isotopes. for example, all atoms with 6 protons in their nuclei are atoms of the chemical element carbon, but atoms of carbon may have mass numbers of 12 or 13. the standard presentation of the chemical elements is in the periodic table, which orders elements by atomic number. the periodic table is arranged in groups, or columns, and periods, or rows. the periodic table is useful in identifying periodic trends
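Illustrative note (not part of the source passage): the relationships described above between atomic number, mass number, isotopes, and charge neutrality reduce to simple counting. A short Python sketch using the carbon example from the passage:

```python
def neutron_count(mass_number: int, atomic_number: int) -> int:
    # mass number = protons + neutrons; atomic number = protons
    return mass_number - atomic_number

Z_CARBON = 6  # every carbon atom has 6 protons
for mass_number in (12, 13):  # the two carbon isotopes mentioned in the passage
    neutrons = neutron_count(mass_number, Z_CARBON)
    # in a neutral atom, the electron count equals the proton count
    print(f"carbon-{mass_number}: protons=6, neutrons={neutrons}, electrons=6")
```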
Question: When an atom has a neutral charge, which particles within the atom have equal numbers?
A) electrons and neutrons
B) protons and electrons
C) neutrons and protons
D) ions and neutrons
|
B) protons and electrons
|
Context:
general modes : static failure, and fatigue failure. static structural failure occurs when, upon being loaded ( having a force applied ) the object being analyzed either breaks or is deformed plastically, depending on the criterion for failure. fatigue failure occurs when an object fails after a number of repeated loading and unloading cycles. fatigue failure occurs because of imperfections in the object : a microscopic crack on the surface of the object, for instance, will grow slightly with each cycle ( propagation ) until the crack is large enough to cause ultimate failure. failure is not simply defined as when a part breaks, however ; it is defined as when a part does not operate as intended. some systems, such as the perforated top sections of some plastic bags, are designed to break. if these systems do not break, failure analysis might be employed to determine the cause. structural analysis is often used by mechanical engineers after a failure has occurred, or when designing to prevent failure. engineers often use online documents and books such as those published by asm to aid them in determining the type of failure and possible causes. once theory is applied to a mechanical design, physical testing is often performed to verify calculated results. structural analysis may be used in an office when designing parts, in the field to analyze failed parts, or in laboratories where parts might undergo controlled failure tests. = = = thermodynamics and thermo - science = = = thermodynamics is an applied science used in several branches of engineering, including mechanical and chemical engineering. at its simplest, thermodynamics is the study of energy, its use and transformation through a system. typically, engineering thermodynamics is concerned with changing energy from one form to another. as an example, automotive engines convert chemical energy ( enthalpy ) from the fuel into heat, and then into mechanical work that eventually turns the wheels. thermodynamics principles are used by mechanical engineers in the fields of heat transfer, thermofluids, and energy conversion. mechanical engineers use thermo - science to design engines and power plants, heating, ventilation, and air - conditioning ( hvac ) systems, heat exchangers, heat sinks, radiators, refrigeration, insulation, and others. = = = design and drafting = = = drafting or technical drawing is the means by which mechanical engineers design products and create instructions for manufacturing parts. a technical drawing can be a computer model or hand - drawn schematic showing all the dimensions necessary to manufacture a
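Illustrative note (not part of the source passage): the fatigue-failure description above, in which a microscopic crack grows slightly with every load cycle until it is large enough to cause ultimate failure, can be pictured with a deliberately simplistic toy loop. The constant per-cycle growth increment and all numbers below are made up for illustration and are not a real crack-growth law.

```python
def cycles_to_failure(initial_crack_mm: float, critical_crack_mm: float,
                      growth_per_cycle_mm: float) -> int:
    # count load/unload cycles until the crack reaches the critical length
    crack, cycles = initial_crack_mm, 0
    while crack < critical_crack_mm:
        crack += growth_per_cycle_mm  # toy assumption: constant growth each cycle
        cycles += 1
    return cycles

# hypothetical numbers: a 0.01 mm surface flaw, failure at 2 mm, 1e-4 mm growth per cycle
print(cycles_to_failure(0.01, 2.0, 1e-4))  # ~19,900 cycles before the part fails
```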
are commonly referred to as " cross - hatching ". phantom β ( not shown ) are alternately long - and double short - dashed thin lines used to represent a feature or component that is not part of the specified part or assembly. e. g. billet ends that may be used for testing, or the machined product that is the focus of a tooling drawing. lines can also be classified by a letter classification in which each line is given a letter. type a lines show the outline of the feature of an object. they are the thickest lines on a drawing and done with a pencil softer than hb. type b lines are dimension lines and are used for dimensioning, projecting, extending, or leaders. a harder pencil should be used, such as a 2h pencil. type c lines are used for breaks when the whole object is not shown. these are freehand drawn and only for short breaks. 2h pencil type d lines are similar to type c, except these are zigzagged and only for longer breaks. 2h pencil type e lines indicate hidden outlines of internal features of an object. these are dotted lines. 2h pencil type f lines are type e lines, except these are used for drawings in electrotechnology. 2h pencil type g lines are used for centre lines. these are dotted lines, but a long line of 10 β 20 mm, then a 1 mm gap, then a small line of 2 mm. 2h pencil type h lines are the same as type g, except that every second long line is thicker. these indicate the cutting plane of an object. 2h pencil type k lines indicate the alternate positions of an object and the line taken by that object. these are drawn with a long line of 10 β 20 mm, then a small gap, then a small line of 2 mm, then a gap, then another small line. 2h pencil. = = = multiple views and projections = = = in most cases, a single view is not sufficient to show all necessary features, and several views are used. types of views include the following : = = = = multiview projection = = = = a multiview projection is a type of orthographic projection that shows the object as it looks from the front, right, left, top, bottom, or back ( e. g. the primary views ), and is typically positioned relative to each other according to the rules of either first - angle or third - angle projection. the origin and vector direction of the projectors (
##d product that is the focus of a tooling drawing. lines can also be classified by a letter classification in which each line is given a letter. type a lines show the outline of the feature of an object. they are the thickest lines on a drawing and done with a pencil softer than hb. type b lines are dimension lines and are used for dimensioning, projecting, extending, or leaders. a harder pencil should be used, such as a 2h pencil. type c lines are used for breaks when the whole object is not shown. these are freehand drawn and only for short breaks. 2h pencil type d lines are similar to type c, except these are zigzagged and only for longer breaks. 2h pencil type e lines indicate hidden outlines of internal features of an object. these are dotted lines. 2h pencil type f lines are type e lines, except these are used for drawings in electrotechnology. 2h pencil type g lines are used for centre lines. these are dotted lines, but a long line of 10 β 20 mm, then a 1 mm gap, then a small line of 2 mm. 2h pencil type h lines are the same as type g, except that every second long line is thicker. these indicate the cutting plane of an object. 2h pencil type k lines indicate the alternate positions of an object and the line taken by that object. these are drawn with a long line of 10 β 20 mm, then a small gap, then a small line of 2 mm, then a gap, then another small line. 2h pencil. = = = multiple views and projections = = = in most cases, a single view is not sufficient to show all necessary features, and several views are used. types of views include the following : = = = = multiview projection = = = = a multiview projection is a type of orthographic projection that shows the object as it looks from the front, right, left, top, bottom, or back ( e. g. the primary views ), and is typically positioned relative to each other according to the rules of either first - angle or third - angle projection. the origin and vector direction of the projectors ( also called projection lines ) differs, as explained below. in first - angle projection, the parallel projectors originate as if radiated from behind the viewer and pass through the 3d object to project a 2d image onto the orthogonal plane behind it. the 3d object is projected into 2d " paper " space as if you were looking at
are continuous lines used to depict edges directly visible from a particular angle. hidden β are short - dashed lines that may be used to represent edges that are not directly visible. center β are alternately long - and short - dashed lines that may be used to represent the axes of circular features. cutting plane β are thin, medium - dashed lines, or thick alternately long - and double short - dashed that may be used to define sections for section views. section β are thin lines in a pattern ( pattern determined by the material being " cut " or " sectioned " ) used to indicate surfaces in section views resulting from " cutting ". section lines are commonly referred to as " cross - hatching ". phantom β ( not shown ) are alternately long - and double short - dashed thin lines used to represent a feature or component that is not part of the specified part or assembly. e. g. billet ends that may be used for testing, or the machined product that is the focus of a tooling drawing. lines can also be classified by a letter classification in which each line is given a letter. type a lines show the outline of the feature of an object. they are the thickest lines on a drawing and done with a pencil softer than hb. type b lines are dimension lines and are used for dimensioning, projecting, extending, or leaders. a harder pencil should be used, such as a 2h pencil. type c lines are used for breaks when the whole object is not shown. these are freehand drawn and only for short breaks. 2h pencil type d lines are similar to type c, except these are zigzagged and only for longer breaks. 2h pencil type e lines indicate hidden outlines of internal features of an object. these are dotted lines. 2h pencil type f lines are type e lines, except these are used for drawings in electrotechnology. 2h pencil type g lines are used for centre lines. these are dotted lines, but a long line of 10 β 20 mm, then a 1 mm gap, then a small line of 2 mm. 2h pencil type h lines are the same as type g, except that every second long line is thicker. these indicate the cutting plane of an object. 2h pencil type k lines indicate the alternate positions of an object and the line taken by that object. these are drawn with a long line of 10 β 20 mm, then a small gap, then a small line of 2 mm, then a gap, then another small line. 2h
three of what is called the six simple machines, from which all machines are based. these machines are the inclined plane, the wedge, and the lever, which allowed the ancient egyptians to move millions of limestone blocks which weighed approximately 3. 5 tons ( 7, 000 lbs. ) each into place to create structures like the great pyramid of giza, which is 481 feet ( 147 meters ) high. they also made writing medium similar to paper from papyrus, which joshua mark states is the foundation for modern paper. papyrus is a plant ( cyperus papyrus ) which grew in plentiful amounts in the egyptian delta and throughout the nile river valley during ancient times. the papyrus was harvested by field workers and brought to processing centers where it was cut into thin strips. the strips were then laid - out side by side and covered in plant resin. the second layer of strips was laid on perpendicularly, then both pressed together until the sheet was dry. the sheets were then joined to form a roll and later used for writing. egyptian society made several significant advances during dynastic periods in many areas of technology. according to hossam elanzeery, they were the first civilization to use timekeeping devices such as sundials, shadow clocks, and obelisks and successfully leveraged their knowledge of astronomy to create a calendar model that society still uses today. they developed shipbuilding technology that saw them progress from papyrus reed vessels to cedar wood ships while also pioneering the use of rope trusses and stem - mounted rudders. the egyptians also used their knowledge of anatomy to lay the foundation for many modern medical techniques and practiced the earliest known version of neuroscience. elanzeery also states that they used and furthered mathematical science, as evidenced in the building of the pyramids. ancient egyptians also invented and pioneered many food technologies that have become the basis of modern food technology processes. based on paintings and reliefs found in tombs, as well as archaeological artifacts, scholars like paul t nicholson believe that the ancient egyptians established systematic farming practices, engaged in cereal processing, brewed beer and baked bread, processed meat, practiced viticulture and created the basis for modern wine production, and created condiments to complement, preserve and mask the flavors of their food. = = = = indus valley = = = = the indus valley civilization, situated in a resource - rich area ( in modern pakistan and northwestern india ), is notable for its early application of city planning, sanitation technologies, and plumbing. indus valley construction and architecture, called ' vaastu
- dashed lines, or thick alternately long - and double short - dashed that may be used to define sections for section views. section β are thin lines in a pattern ( pattern determined by the material being " cut " or " sectioned " ) used to indicate surfaces in section views resulting from " cutting ". section lines are commonly referred to as " cross - hatching ". phantom β ( not shown ) are alternately long - and double short - dashed thin lines used to represent a feature or component that is not part of the specified part or assembly. e. g. billet ends that may be used for testing, or the machined product that is the focus of a tooling drawing. lines can also be classified by a letter classification in which each line is given a letter. type a lines show the outline of the feature of an object. they are the thickest lines on a drawing and done with a pencil softer than hb. type b lines are dimension lines and are used for dimensioning, projecting, extending, or leaders. a harder pencil should be used, such as a 2h pencil. type c lines are used for breaks when the whole object is not shown. these are freehand drawn and only for short breaks. 2h pencil type d lines are similar to type c, except these are zigzagged and only for longer breaks. 2h pencil type e lines indicate hidden outlines of internal features of an object. these are dotted lines. 2h pencil type f lines are type e lines, except these are used for drawings in electrotechnology. 2h pencil type g lines are used for centre lines. these are dotted lines, but a long line of 10 β 20 mm, then a 1 mm gap, then a small line of 2 mm. 2h pencil type h lines are the same as type g, except that every second long line is thicker. these indicate the cutting plane of an object. 2h pencil type k lines indicate the alternate positions of an object and the line taken by that object. these are drawn with a long line of 10 β 20 mm, then a small gap, then a small line of 2 mm, then a gap, then another small line. 2h pencil. = = = multiple views and projections = = = in most cases, a single view is not sufficient to show all necessary features, and several views are used. types of views include the following : = = = = multiview projection = = = = a multiview projection is a type of orthographic projection
this article is withdrawn because of a mistake in the main result of the paper.
paper withdrawn due to a crucial algebraic error in section 3.
industry is making composite materials. these are structured materials composed of two or more macroscopic phases. applications range from structural elements such as steel - reinforced concrete, to the thermal insulating tiles, which play a key and integral role in nasa ' s space shuttle thermal protection system, which is used to protect the surface of the shuttle from the heat of re - entry into the earth ' s atmosphere. one example is reinforced carbon - carbon ( rcc ), the light gray material, which withstands re - entry temperatures up to 1, 510 Β°c ( 2, 750 Β°f ) and protects the space shuttle ' s wing leading edges and nose cap. rcc is a laminated composite material made from graphite rayon cloth and impregnated with a phenolic resin. after curing at high temperature in an autoclave, the laminate is pyrolized to convert the resin to carbon, impregnated with furfuryl alcohol in a vacuum chamber, and cured - pyrolized to convert the furfuryl alcohol to carbon. to provide oxidation resistance for reusability, the outer layers of the rcc are converted to silicon carbide. other examples can be seen in the " plastic " casings of television sets, cell - phones and so on. these plastic casings are usually a composite material made up of a thermoplastic matrix such as acrylonitrile butadiene styrene ( abs ) in which calcium carbonate chalk, talc, glass fibers or carbon fibers have been added for added strength, bulk, or electrostatic dispersion. these additions may be termed reinforcing fibers, or dispersants, depending on their purpose. = = = polymers = = = polymers are chemical compounds made up of a large number of identical components linked together like chains. polymers are the raw materials ( the resins ) used to make what are commonly called plastics and rubber. plastics and rubber are the final product, created after one or more polymers or additives have been added to a resin during processing, which is then shaped into a final form. plastics in former and in current widespread use include polyethylene, polypropylene, polyvinyl chloride ( pvc ), polystyrene, nylons, polyesters, acrylics, polyurethanes, and polycarbonates. rubbers include natural rubber, styrene - butadiene rubber, chloroprene, and butadiene rubber. plastics are generally classified as commodity
river valley during ancient times. the papyrus was harvested by field workers and brought to processing centers where it was cut into thin strips. the strips were then laid - out side by side and covered in plant resin. the second layer of strips was laid on perpendicularly, then both pressed together until the sheet was dry. the sheets were then joined to form a roll and later used for writing. egyptian society made several significant advances during dynastic periods in many areas of technology. according to hossam elanzeery, they were the first civilization to use timekeeping devices such as sundials, shadow clocks, and obelisks and successfully leveraged their knowledge of astronomy to create a calendar model that society still uses today. they developed shipbuilding technology that saw them progress from papyrus reed vessels to cedar wood ships while also pioneering the use of rope trusses and stem - mounted rudders. the egyptians also used their knowledge of anatomy to lay the foundation for many modern medical techniques and practiced the earliest known version of neuroscience. elanzeery also states that they used and furthered mathematical science, as evidenced in the building of the pyramids. ancient egyptians also invented and pioneered many food technologies that have become the basis of modern food technology processes. based on paintings and reliefs found in tombs, as well as archaeological artifacts, scholars like paul t nicholson believe that the ancient egyptians established systematic farming practices, engaged in cereal processing, brewed beer and baked bread, processed meat, practiced viticulture and created the basis for modern wine production, and created condiments to complement, preserve and mask the flavors of their food. = = = = indus valley = = = = the indus valley civilization, situated in a resource - rich area ( in modern pakistan and northwestern india ), is notable for its early application of city planning, sanitation technologies, and plumbing. indus valley construction and architecture, called ' vaastu shastra ', suggests a thorough understanding of materials engineering, hydrology, and sanitation. = = = = china = = = = the chinese made many first - known discoveries and developments. major technological contributions from china include the earliest known form of the binary code and epigenetic sequencing, early seismological detectors, matches, paper, helicopter rotor, raised - relief map, the double - action piston pump, cast iron, water powered blast furnace bellows, the iron plough, the multi - tube seed drill, the wheelbarrow, the parachute, the compass, the rudder, the crossbow, the south pointing chariot and gunpowder
Question: A paper bag is ripped into pieces. Which of these BEST describes the pieces of the bag?
A) Stronger than the whole bag
B) Thicker than the whole bag
C) Smaller than the whole bag
D) Darker than the whole bag
|
C) Smaller than the whole bag
|
Context:
it is explained why excessive mu to e gamma can be a problem in susy gut see - saw models of neutrino mass, and ways that this problem might be avoided are discussed.
pigmentation, chloroplast structure and nutrient reserves. the algal division charophyta, sister to the green algal division chlorophyta, is considered to contain the ancestor of true plants. the charophyte class charophyceae and the land plant sub - kingdom embryophyta together form the monophyletic group or clade streptophytina. nonvascular land plants are embryophytes that lack the vascular tissues xylem and phloem. they include mosses, liverworts and hornworts. pteridophytic vascular plants with true xylem and phloem that reproduced by spores germinating into free - living gametophytes evolved during the silurian period and diversified into several lineages during the late silurian and early devonian. representatives of the lycopods have survived to the present day. by the end of the devonian period, several groups, including the lycopods, sphenophylls and progymnosperms, had independently evolved " megaspory " β their spores were of two distinct sizes, larger megaspores and smaller microspores. their reduced gametophytes developed from megaspores retained within the spore - producing organs ( megasporangia ) of the sporophyte, a condition known as endospory. seeds consist of an endosporic megasporangium surrounded by one or two sheathing layers ( integuments ). the young sporophyte develops within the seed, which on germination splits to release it. the earliest known seed plants date from the latest devonian famennian stage. following the evolution of the seed habit, seed plants diversified, giving rise to a number of now - extinct groups, including seed ferns, as well as the modern gymnosperms and angiosperms. gymnosperms produce " naked seeds " not fully enclosed in an ovary ; modern representatives include conifers, cycads, ginkgo, and gnetales. angiosperms produce seeds enclosed in a structure such as a carpel or an ovary. ongoing research on the molecular phylogenetics of living plants appears to show that the angiosperms are a sister clade to the gymnosperms. = = plant physiology = = plant physiology encompasses all the internal chemical and physical activities of plants associated with life. chemicals obtained from the air, soil and water form
which constitutes anywhere from 30 % [ m / m ] to 90 % [ m / m ] of its composition by volume, yielding an array of materials with interesting thermomechanical properties. in the processing of glass - ceramics, molten glass is cooled down gradually before reheating and annealing. in this heat treatment the glass partly crystallizes. in many cases, so - called ' nucleation agents ' are added in order to regulate and control the crystallization process. because there is usually no pressing and sintering, glass - ceramics do not contain the volume fraction of porosity typically present in sintered ceramics. the term mainly refers to a mix of lithium and aluminosilicates which yields an array of materials with interesting thermomechanical properties. the most commercially important of these have the distinction of being impervious to thermal shock. thus, glass - ceramics have become extremely useful for countertop cooking. the negative thermal expansion coefficient ( tec ) of the crystalline ceramic phase can be balanced with the positive tec of the glassy phase. at a certain point ( ~ 70 % crystalline ) the glass - ceramic has a net tec near zero. this type of glass - ceramic exhibits excellent mechanical properties and can sustain repeated and quick temperature changes up to 1000 Β°c. = = processing steps = = the traditional ceramic process generally follows this sequence : milling β batching β mixing β forming β drying β firing β assembly. milling is the process by which materials are reduced from a large size to a smaller size. milling may involve breaking up cemented material ( in which case individual particles retain their shape ) or pulverization ( which involves grinding the particles themselves to a smaller size ). milling is generally done by mechanical means, including attrition ( which is particle - to - particle collision that results in agglomerate break up or particle shearing ), compression ( which applies a forces that results in fracturing ), and impact ( which employs a milling medium or the particles themselves to cause fracturing ). attrition milling equipment includes the wet scrubber ( also called the planetary mill or wet attrition mill ), which has paddles in water creating vortexes in which the material collides and break up. compression mills include the jaw crusher, roller crusher and cone crusher. impact mills include the ball mill, which has media that tumble and fracture the material, or the resonantacoustic mixer. shaft impactors cause particle - to particle attrition and compression
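Illustrative note (not part of the source passage): the passage above states that the negative thermal expansion coefficient of the crystalline phase can balance the positive coefficient of the glassy phase, giving a net TEC near zero at roughly 70 % crystalline content. A minimal sketch assuming a simple linear rule of mixtures (an assumption not stated in the text) and made-up coefficients:

```python
def composite_tec(f_crystal: float, tec_crystal: float, tec_glass: float) -> float:
    # linear rule-of-mixtures estimate: volume-weighted average of the two phases
    return f_crystal * tec_crystal + (1.0 - f_crystal) * tec_glass

def zero_tec_fraction(tec_crystal: float, tec_glass: float) -> float:
    # crystalline volume fraction at which the net TEC crosses zero
    return tec_glass / (tec_glass - tec_crystal)

tec_crystal, tec_glass = -2.0e-6, 5.0e-6  # hypothetical coefficients, 1/K
f = zero_tec_fraction(tec_crystal, tec_glass)
print(f"net TEC ~ 0 at about {f:.0%} crystalline")   # ~71% with these made-up numbers
print(composite_tec(f, tec_crystal, tec_glass))      # ~0
```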
( or underlined when italics are not available ). the evolutionary relationships and heredity of a group of organisms is called its phylogeny. phylogenetic studies attempt to discover phylogenies. the basic approach is to use similarities based on shared inheritance to determine relationships. as an example, species of pereskia are trees or bushes with prominent leaves. they do not obviously resemble a typical leafless cactus such as an echinocactus. however, both pereskia and echinocactus have spines produced from areoles ( highly specialised pad - like structures ) suggesting that the two genera are indeed related. judging relationships based on shared characters requires care, since plants may resemble one another through convergent evolution in which characters have arisen independently. some euphorbias have leafless, rounded bodies adapted to water conservation similar to those of globular cacti, but characters such as the structure of their flowers make it clear that the two groups are not closely related. the cladistic method takes a systematic approach to characters, distinguishing between those that carry no information about shared evolutionary history β such as those evolved separately in different groups ( homoplasies ) or those left over from ancestors ( plesiomorphies ) β and derived characters, which have been passed down from innovations in a shared ancestor ( apomorphies ). only derived characters, such as the spine - producing areoles of cacti, provide evidence for descent from a common ancestor. the results of cladistic analyses are expressed as cladograms : tree - like diagrams showing the pattern of evolutionary branching and descent. from the 1990s onwards, the predominant approach to constructing phylogenies for living plants has been molecular phylogenetics, which uses molecular characters, particularly dna sequences, rather than morphological characters like the presence or absence of spines and areoles. the difference is that the genetic code itself is used to decide evolutionary relationships, instead of being used indirectly via the characters it gives rise to. clive stace describes this as having " direct access to the genetic basis of evolution. " as a simple example, prior to the use of genetic evidence, fungi were thought either to be plants or to be more closely related to plants than animals. genetic evidence suggests that the true evolutionary relationship of multicelled organisms is as shown in the cladogram below β fungi are more closely related to animals than to plants. in 1998, the angiosperm phylogeny group published a phylogeny for flowering plants based on an analysis of
the model of the neutrino mass matrix with minimal texture is now tightly constrained by experiment, so that it can yield a prediction for the phase of cp violation. this phase is predicted to lie in the range $0.77\pi \le \delta_{cp} \le 1.24\pi$. if neutrino oscillation experiments find the cp violation phase outside this range, the minimal - texture neutrino mass matrix, whose elements are all real, fails, and the neutrino mass matrix must be complex, i. e., a phase must be present that can be responsible for leptogenesis.
##ta together form the monophyletic group or clade streptophytina. nonvascular land plants are embryophytes that lack the vascular tissues xylem and phloem. they include mosses, liverworts and hornworts. pteridophytic vascular plants with true xylem and phloem that reproduced by spores germinating into free - living gametophytes evolved during the silurian period and diversified into several lineages during the late silurian and early devonian. representatives of the lycopods have survived to the present day. by the end of the devonian period, several groups, including the lycopods, sphenophylls and progymnosperms, had independently evolved " megaspory " β their spores were of two distinct sizes, larger megaspores and smaller microspores. their reduced gametophytes developed from megaspores retained within the spore - producing organs ( megasporangia ) of the sporophyte, a condition known as endospory. seeds consist of an endosporic megasporangium surrounded by one or two sheathing layers ( integuments ). the young sporophyte develops within the seed, which on germination splits to release it. the earliest known seed plants date from the latest devonian famennian stage. following the evolution of the seed habit, seed plants diversified, giving rise to a number of now - extinct groups, including seed ferns, as well as the modern gymnosperms and angiosperms. gymnosperms produce " naked seeds " not fully enclosed in an ovary ; modern representatives include conifers, cycads, ginkgo, and gnetales. angiosperms produce seeds enclosed in a structure such as a carpel or an ovary. ongoing research on the molecular phylogenetics of living plants appears to show that the angiosperms are a sister clade to the gymnosperms. = = plant physiology = = plant physiology encompasses all the internal chemical and physical activities of plants associated with life. chemicals obtained from the air, soil and water form the basis of all plant metabolism. the energy of sunlight, captured by oxygenic photosynthesis and released by cellular respiration, is the basis of almost all life. photoautotrophs, including all green plants, algae and cyanobacteria gather energy directly from sunlight by photosynthesis. hetero
= glass - ceramics = = glass - ceramic materials share many properties with both glasses and ceramics. glass - ceramics have an amorphous phase and one or more crystalline phases and are produced by a so - called " controlled crystallization ", which is typically avoided in glass manufacturing. glass - ceramics often contain a crystalline phase which constitutes anywhere from 30 % to 90 % of its composition by volume, yielding an array of materials with interesting thermomechanical properties. in the processing of glass - ceramics, molten glass is cooled down gradually before reheating and annealing. in this heat treatment the glass partly crystallizes. in many cases, so - called ' nucleation agents ' are added in order to regulate and control the crystallization process. because there is usually no pressing and sintering, glass - ceramics do not contain the volume fraction of porosity typically present in sintered ceramics. the term mainly refers to a mix of lithium and aluminosilicates which yields an array of materials with interesting thermomechanical properties. the most commercially important of these have the distinction of being impervious to thermal shock. thus, glass - ceramics have become extremely useful for countertop cooking. the negative thermal expansion coefficient ( tec ) of the crystalline ceramic phase can be balanced with the positive tec of the glassy phase. at a certain point ( ~ 70 % crystalline ) the glass - ceramic has a net tec near zero. this type of glass - ceramic exhibits excellent mechanical properties and can sustain repeated and quick temperature changes up to 1000 Β°c. = = processing steps = = the traditional ceramic process generally follows this sequence : milling β†’ batching β†’ mixing β†’ forming β†’ drying β†’ firing β†’ assembly. milling is the process by which materials are reduced from a large size to a smaller size. milling may involve breaking up cemented material ( in which case individual particles retain their shape ) or pulverization ( which involves grinding the particles themselves to a smaller size ). milling is generally done by mechanical means, including attrition ( which is particle - to - particle collision that results in agglomerate break - up or particle shearing ), compression ( which applies a force that results in fracturing ), and impact ( which employs a milling medium or the particles themselves to cause fracturing ). attrition milling equipment includes the wet scrubber ( also called the planetary mill or wet attrition mill ), which has paddles in water creating vortexes in which
one may identify the general properties of the neutrino mass matrix by generating many random mass matrices and testing them against the results of the neutrino experiments.
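The passage above describes what amounts to a Monte Carlo scan. The sketch below is a minimal illustration of that idea, assuming real symmetric 3x3 candidate matrices; the target mass-squared differences, the tolerance, and the sampling scale are assumed for the example and are not taken from any specific experiment.

import numpy as np

rng = np.random.default_rng(0)

# assumed illustrative targets: mass-squared differences in eV^2
DM21_SQ, DM31_SQ = 7.4e-5, 2.5e-3
TOL = 0.2  # accept matrices within 20 % of the assumed targets

def random_real_symmetric(scale=0.05):
    """draw a random real symmetric 3x3 matrix (in eV) as a candidate mass matrix."""
    a = rng.normal(0.0, scale, size=(3, 3))
    return (a + a.T) / 2.0

accepted = []
for _ in range(100_000):
    m = random_real_symmetric()
    masses = np.sort(np.abs(np.linalg.eigvalsh(m)))        # |m1| <= |m2| <= |m3|
    dm21 = masses[1]**2 - masses[0]**2
    dm31 = masses[2]**2 - masses[0]**2
    if abs(dm21 - DM21_SQ) < TOL * DM21_SQ and abs(dm31 - DM31_SQ) < TOL * DM31_SQ:
        accepted.append(m)

print(f"{len(accepted)} of 100000 random matrices pass the assumed constraints")
# the surviving sample can then be inspected for common patterns (texture zeros,
# hierarchies among elements), which is the "general properties" step described above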
##ubated, and the formation of a colored product indicates a positive hybridoma. alternatively, immunocytochemical, western blot, and immunoprecipitation - mass spectrometry. unlike western blot assays, immunoprecipitation - mass spectrometry facilitates screening and ranking of clones which bind to the native ( non - denaturated ) forms of antigen proteins. flow cytometry screening has been used for primary screening of a large number ( ~ 1000 ) of hybridoma clones recognizing the native form of the antigen on the cell surface. in the flow cytometry - based screening, a mixture of antigen - negative cells and antigen - positive cells is used as the antigen to be tested for each hybridoma supernatant sample. the b cell that produces the desired antibodies can be cloned to produce many identical daughter clones. supplemental media containing interleukin - 6 ( such as briclone ) are essential for this step. once a hybridoma colony is established, it will continually grow in culture medium like rpmi - 1640 ( with antibiotics and fetal bovine serum ) and produce antibodies. multiwell plates are used initially to grow the hybridomas, and after selection, are changed to larger tissue culture flasks. this maintains the well - being of the hybridomas and provides enough cells for cryopreservation and supernatant for subsequent investigations. the culture supernatant can yield 1 to 60 ΞΌg / ml of monoclonal antibody, which is maintained at - 20 Β°c or lower until required. by using culture supernatant or a purified immunoglobulin preparation, further analysis of a potential monoclonal antibody producing hybridoma can be made in terms of reactivity, specificity, and cross - reactivity. = = applications = = the use of monoclonal antibodies is numerous and includes the prevention, diagnosis, and treatment of disease. for example, monoclonal antibodies can distinguish subsets of b cells and t cells, which is helpful in identifying different types of leukaemias. in addition, specific monoclonal antibodies have been used to define cell surface markers on white blood cells and other cell types. this led to the cluster of differentiation series of markers. these are often referred to as cd markers and define several hundred different cell surface components of cells, each specified by binding of a particular monoclonal antibody. such antibodies are extremely useful for fluorescence - activated cell sorting,
industry is making composite materials. these are structured materials composed of two or more macroscopic phases. applications range from structural elements such as steel - reinforced concrete, to the thermal insulating tiles, which play a key and integral role in nasa ' s space shuttle thermal protection system, which is used to protect the surface of the shuttle from the heat of re - entry into the earth ' s atmosphere. one example is reinforced carbon - carbon ( rcc ), the light gray material, which withstands re - entry temperatures up to 1, 510 Β°c ( 2, 750 Β°f ) and protects the space shuttle ' s wing leading edges and nose cap. rcc is a laminated composite material made from graphite rayon cloth and impregnated with a phenolic resin. after curing at high temperature in an autoclave, the laminate is pyrolized to convert the resin to carbon, impregnated with furfuryl alcohol in a vacuum chamber, and cured - pyrolized to convert the furfuryl alcohol to carbon. to provide oxidation resistance for reusability, the outer layers of the rcc are converted to silicon carbide. other examples can be seen in the " plastic " casings of television sets, cell - phones and so on. these plastic casings are usually a composite material made up of a thermoplastic matrix such as acrylonitrile butadiene styrene ( abs ) in which calcium carbonate chalk, talc, glass fibers or carbon fibers have been added for added strength, bulk, or electrostatic dispersion. these additions may be termed reinforcing fibers, or dispersants, depending on their purpose. = = = polymers = = = polymers are chemical compounds made up of a large number of identical components linked together like chains. polymers are the raw materials ( the resins ) used to make what are commonly called plastics and rubber. plastics and rubber are the final product, created after one or more polymers or additives have been added to a resin during processing, which is then shaped into a final form. plastics in former and in current widespread use include polyethylene, polypropylene, polyvinyl chloride ( pvc ), polystyrene, nylons, polyesters, acrylics, polyurethanes, and polycarbonates. rubbers include natural rubber, styrene - butadiene rubber, chloroprene, and butadiene rubber. plastics are generally classified as commodity
Question: A lizard most likely would be protected from its enemies if it has which characteristic?
A) claws to catch prey
B) a long tail to climb trees
C) eyes that can see far distances
D) skin color that matches its habitat
|
D) skin color that matches its habitat
|
Context:
applications, where neither weight nor corrosion are a major concern. cast irons, including ductile iron, are also part of the iron - carbon system. iron - manganese - chromium alloys ( hadfield - type steels ) are also used in non - magnetic applications such as directional drilling. other engineering metals include aluminium, chromium, copper, magnesium, nickel, titanium, zinc, and silicon. these metals are most often used as alloys with the noted exception of silicon, which is not a metal. other forms include : stainless steel, particularly austenitic stainless steels, galvanized steel, nickel alloys, titanium alloys, or occasionally copper alloys are used, where resistance to corrosion is important. aluminium alloys and magnesium alloys are commonly used, when a lightweight strong part is required such as in automotive and aerospace applications. copper - nickel alloys ( such as monel ) are used in highly corrosive environments and for non - magnetic applications. nickel - based superalloys like inconel are used in high - temperature applications such as gas turbines, turbochargers, pressure vessels, and heat exchangers. for extremely high temperatures, single crystal alloys are used to minimize creep. in modern electronics, high purity single crystal silicon is essential for metal - oxide - silicon transistors ( mos ) and integrated circuits. = = production = = in production engineering, metallurgy is concerned with the production of metallic components for use in consumer or engineering products. this involves production of alloys, shaping, heat treatment and surface treatment of product. the task of the metallurgist is to achieve balance between material properties, such as cost, weight, strength, toughness, hardness, corrosion, fatigue resistance and performance in temperature extremes. to achieve this goal, the operating environment must be carefully considered. determining the hardness of the metal using the rockwell, vickers, and brinell hardness scales is a commonly used practice that helps better understand the metal ' s elasticity and plasticity for different applications and production processes. in a saltwater environment, most ferrous metals and some non - ferrous alloys corrode quickly. metals exposed to cold or cryogenic conditions may undergo a ductile to brittle transition and lose their toughness, becoming more brittle and prone to cracking. metals under continual cyclic loading can suffer from metal fatigue. metals under constant stress at elevated temperatures can creep. = = = metalworking processes = = = casting β molten metal is poured into a shaped mold. variants of casting include sand casting, investment
was used before copper smelting was known. copper smelting is believed to have originated when the technology of pottery kilns allowed sufficiently high temperatures. the concentration of various elements such as arsenic increase with depth in copper ore deposits and smelting of these ores yields arsenical bronze, which can be sufficiently work hardened to be suitable for making tools. bronze is an alloy of copper with tin ; the latter being found in relatively few deposits globally caused a long time to elapse before true tin bronze became widespread. ( see : tin sources and trade in ancient times ) bronze was a major advancement over stone as a material for making tools, both because of its mechanical properties like strength and ductility and because it could be cast in molds to make intricately shaped objects. bronze significantly advanced shipbuilding technology with better tools and bronze nails. bronze nails replaced the old method of attaching boards of the hull with cord woven through drilled holes. better ships enabled long - distance trade and the advance of civilization. this technological trend apparently began in the fertile crescent and spread outward over time. these developments were not, and still are not, universal. the three - age system does not accurately describe the technology history of groups outside of eurasia, and does not apply at all in the case of some isolated populations, such as the spinifex people, the sentinelese, and various amazonian tribes, which still make use of stone age technology, and have not developed agricultural or metal technology. these villages preserve traditional customs in the face of global modernity, exhibiting a remarkable resistance to the rapid advancement of technology. = = = = iron age = = = = before iron smelting was developed the only iron was obtained from meteorites and is usually identified by having nickel content. meteoric iron was rare and valuable, but was sometimes used to make tools and other implements, such as fish hooks. the iron age involved the adoption of iron smelting technology. it generally replaced bronze and made it possible to produce tools which were stronger, lighter and cheaper to make than bronze equivalents. the raw materials to make iron, such as ore and limestone, are far more abundant than copper and especially tin ores. consequently, iron was produced in many areas. it was not possible to mass manufacture steel or pure iron because of the high temperatures required. furnaces could reach melting temperature but the crucibles and molds needed for melting and casting had not been developed. steel could be produced by forging bloomery iron to reduce the carbon content in a
the valuable metals into individual constituents. = = metal and its alloys = = much effort has been placed on understanding iron β carbon alloy system, which includes steels and cast irons. plain carbon steels ( those that contain essentially only carbon as an alloying element ) are used in low - cost, high - strength applications, where neither weight nor corrosion are a major concern. cast irons, including ductile iron, are also part of the iron - carbon system. iron - manganese - chromium alloys ( hadfield - type steels ) are also used in non - magnetic applications such as directional drilling. other engineering metals include aluminium, chromium, copper, magnesium, nickel, titanium, zinc, and silicon. these metals are most often used as alloys with the noted exception of silicon, which is not a metal. other forms include : stainless steel, particularly austenitic stainless steels, galvanized steel, nickel alloys, titanium alloys, or occasionally copper alloys are used, where resistance to corrosion is important. aluminium alloys and magnesium alloys are commonly used, when a lightweight strong part is required such as in automotive and aerospace applications. copper - nickel alloys ( such as monel ) are used in highly corrosive environments and for non - magnetic applications. nickel - based superalloys like inconel are used in high - temperature applications such as gas turbines, turbochargers, pressure vessels, and heat exchangers. for extremely high temperatures, single crystal alloys are used to minimize creep. in modern electronics, high purity single crystal silicon is essential for metal - oxide - silicon transistors ( mos ) and integrated circuits. = = production = = in production engineering, metallurgy is concerned with the production of metallic components for use in consumer or engineering products. this involves production of alloys, shaping, heat treatment and surface treatment of product. the task of the metallurgist is to achieve balance between material properties, such as cost, weight, strength, toughness, hardness, corrosion, fatigue resistance and performance in temperature extremes. to achieve this goal, the operating environment must be carefully considered. determining the hardness of the metal using the rockwell, vickers, and brinell hardness scales is a commonly used practice that helps better understand the metal ' s elasticity and plasticity for different applications and production processes. in a saltwater environment, most ferrous metals and some non - ferrous alloys corrode quickly. metals exposed to cold or cryogenic conditions may undergo a ductile to brittle
computer networking. coaxial cable is widely used for cable television systems, office buildings, and other work - sites for local area networks. transmission speed ranges from 200 million bits per second to more than 500 million bits per second. itu - t g. hn technology uses existing home wiring ( coaxial cable, phone lines and power lines ) to create a high - speed local area network. twisted pair cabling is used for wired ethernet and other standards. it typically consists of 4 pairs of copper cabling that can be utilized for both voice and data transmission. the use of two wires twisted together helps to reduce crosstalk and electromagnetic induction. the transmission speed ranges from 2 mbit / s to 10 gbit / s. twisted pair cabling comes in two forms : unshielded twisted pair ( utp ) and shielded twisted - pair ( stp ). each form comes in several category ratings, designed for use in various scenarios. an optical fiber is a glass fiber. it carries pulses of light that represent data via lasers and optical amplifiers. some advantages of optical fibers over metal wires are very low transmission loss and immunity to electrical interference. using dense wave division multiplexing, optical fibers can simultaneously carry multiple streams of data on different wavelengths of light, which greatly increases the rate that data can be sent to up to trillions of bits per second. optic fibers can be used for long runs of cable carrying very high data rates, and are used for undersea communications cables to interconnect continents. there are two basic types of fiber optics, single - mode optical fiber ( smf ) and multi - mode optical fiber ( mmf ). single - mode fiber has the advantage of being able to sustain a coherent signal for dozens or even a hundred kilometers. multimode fiber is cheaper to terminate but is limited to a few hundred or even only a few dozens of meters, depending on the data rate and cable grade. = = = wireless = = = network connections can be established wirelessly using radio or other electromagnetic means of communication. terrestrial microwave β terrestrial microwave communication uses earth - based transmitters and receivers resembling satellite dishes. terrestrial microwaves are in the low gigahertz range, which limits all communications to line - of - sight. relay stations are spaced approximately 40 miles ( 64 km ) apart. communications satellites β satellites also communicate via microwave. the satellites are stationed in space, typically in geosynchronous orbit 35, 400 km ( 22, 000 mi ) above the equator. these earth - orbiting systems are
, phone lines and power lines ) to create a high - speed local area network. twisted pair cabling is used for wired ethernet and other standards. it typically consists of 4 pairs of copper cabling that can be utilized for both voice and data transmission. the use of two wires twisted together helps to reduce crosstalk and electromagnetic induction. the transmission speed ranges from 2 mbit / s to 10 gbit / s. twisted pair cabling comes in two forms : unshielded twisted pair ( utp ) and shielded twisted - pair ( stp ). each form comes in several category ratings, designed for use in various scenarios. an optical fiber is a glass fiber. it carries pulses of light that represent data via lasers and optical amplifiers. some advantages of optical fibers over metal wires are very low transmission loss and immunity to electrical interference. using dense wave division multiplexing, optical fibers can simultaneously carry multiple streams of data on different wavelengths of light, which greatly increases the rate that data can be sent to up to trillions of bits per second. optic fibers can be used for long runs of cable carrying very high data rates, and are used for undersea communications cables to interconnect continents. there are two basic types of fiber optics, single - mode optical fiber ( smf ) and multi - mode optical fiber ( mmf ). single - mode fiber has the advantage of being able to sustain a coherent signal for dozens or even a hundred kilometers. multimode fiber is cheaper to terminate but is limited to a few hundred or even only a few dozens of meters, depending on the data rate and cable grade. = = = wireless = = = network connections can be established wirelessly using radio or other electromagnetic means of communication. terrestrial microwave β terrestrial microwave communication uses earth - based transmitters and receivers resembling satellite dishes. terrestrial microwaves are in the low gigahertz range, which limits all communications to line - of - sight. relay stations are spaced approximately 40 miles ( 64 km ) apart. communications satellites β satellites also communicate via microwave. the satellites are stationed in space, typically in geosynchronous orbit 35, 400 km ( 22, 000 mi ) above the equator. these earth - orbiting systems are capable of receiving and relaying voice, data, and tv signals. cellular networks use several radio communications technologies. the systems divide the region covered into multiple geographic areas. each area is served by a low - power transceiver. radio and spread spectrum technologies β wireless lans use a high - frequency radio technology similar to
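As a concrete illustration of how dense wave division multiplexing reaches multi-terabit aggregate rates on a single fiber, the figures below are assumed, round numbers chosen for the example, not values stated in the passage.

# illustrative dwdm aggregate-capacity estimate (all values assumed, round numbers)
channels = 80                 # assumed number of wavelengths carried on one fiber
per_channel_bps = 100e9       # assumed 100 gbit/s per wavelength
fiber_pairs = 4               # assumed fiber pairs in one undersea cable

per_fiber = channels * per_channel_bps
cable_total = per_fiber * fiber_pairs
print(f"per fiber: {per_fiber / 1e12:.1f} tbit/s")
print(f"per cable: {cable_total / 1e12:.1f} tbit/s")
# 80 wavelengths at 100 gbit/s already give 8 tbit/s on one fiber, which is how the
# "trillions of bits per second" figure in the passage comes about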
##electronics and mems in particular. silicon nanowires, fabricated through the thermal oxidation of silicon, are of further interest in electrochemical conversion and storage, including nanowire batteries and photovoltaic systems. polymers even though the electronics industry provides an economy of scale for the silicon industry, crystalline silicon is still a complex and relatively expensive material to produce. polymers on the other hand can be produced in huge volumes, with a great variety of material characteristics. mems devices can be made from polymers by processes such as injection molding, embossing or stereolithography and are especially well suited to microfluidic applications such as disposable blood testing cartridges. metals metals can also be used to create mems elements. while metals do not have some of the advantages displayed by silicon in terms of mechanical properties, when used within their limitations, metals can exhibit very high degrees of reliability. metals can be deposited by electroplating, evaporation, and sputtering processes. commonly used metals include gold, nickel, aluminium, copper, chromium, titanium, tungsten, platinum, and silver. ceramics the nitrides of silicon, aluminium and titanium as well as silicon carbide and other ceramics are increasingly applied in mems fabrication due to advantageous combinations of material properties. aln crystallizes in the wurtzite structure and thus shows pyroelectric and piezoelectric properties enabling sensors, for instance, with sensitivity to normal and shear forces. tin, on the other hand, exhibits a high electrical conductivity and large elastic modulus, making it possible to implement electrostatic mems actuation schemes with ultrathin beams. moreover, the high resistance of tin against biocorrosion qualifies the material for applications in biogenic environments. the figure shows an electron - microscopic picture of a mems biosensor with a 50 nm thin bendable tin beam above a tin ground plate. both can be driven as opposite electrodes of a capacitor, since the beam is fixed in electrically isolating side walls. when a fluid is suspended in the cavity its viscosity may be derived from bending the beam by electrical attraction to the ground plate and measuring the bending velocity. = = basic processes = = = = = deposition processes = = = one of the basic building blocks in mems processing is the ability to deposit thin films of material with a thickness anywhere from one micrometre to about 100 micrometres. the nems process is the same,
insights from stripe incommensurabilities and antiferromagnetic stability indicate that the magnetic moments of both host cu ^ 2 + ions and cu atoms from electron doping support the thermal hall effect in cuprates, whereas those of o atoms from hole doping oppose it.
still a complex and relatively expensive material to produce. polymers on the other hand can be produced in huge volumes, with a great variety of material characteristics. mems devices can be made from polymers by processes such as injection molding, embossing or stereolithography and are especially well suited to microfluidic applications such as disposable blood testing cartridges. metals metals can also be used to create mems elements. while metals do not have some of the advantages displayed by silicon in terms of mechanical properties, when used within their limitations, metals can exhibit very high degrees of reliability. metals can be deposited by electroplating, evaporation, and sputtering processes. commonly used metals include gold, nickel, aluminium, copper, chromium, titanium, tungsten, platinum, and silver. ceramics the nitrides of silicon, aluminium and titanium as well as silicon carbide and other ceramics are increasingly applied in mems fabrication due to advantageous combinations of material properties. aln crystallizes in the wurtzite structure and thus shows pyroelectric and piezoelectric properties enabling sensors, for instance, with sensitivity to normal and shear forces. tin, on the other hand, exhibits a high electrical conductivity and large elastic modulus, making it possible to implement electrostatic mems actuation schemes with ultrathin beams. moreover, the high resistance of tin against biocorrosion qualifies the material for applications in biogenic environments. the figure shows an electron - microscopic picture of a mems biosensor with a 50 nm thin bendable tin beam above a tin ground plate. both can be driven as opposite electrodes of a capacitor, since the beam is fixed in electrically isolating side walls. when a fluid is suspended in the cavity its viscosity may be derived from bending the beam by electrical attraction to the ground plate and measuring the bending velocity. = = basic processes = = = = = deposition processes = = = one of the basic building blocks in mems processing is the ability to deposit thin films of material with a thickness anywhere from one micrometre to about 100 micrometres. the nems process is the same, although the measurement of film deposition ranges from a few nanometres to one micrometre. there are two types of deposition processes, as follows. = = = = physical deposition = = = = physical vapor deposition ( " pvd " ) consists of a process in which a material is removed from a target, and
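The capacitive drive of the thin beam described above can be made more concrete with a parallel-plate estimate of the electrostatic pull. The sketch below is a rough back-of-the-envelope calculation; the gap, voltage, and beam footprint are assumed values, and only the 50 nm beam thickness comes from the passage.

# illustrative estimate of the electrostatic pull on a thin mems beam driven as one
# electrode of a parallel-plate capacitor (geometry and voltage are assumed)
EPS0 = 8.854e-12          # vacuum permittivity, F/m
gap = 300e-9              # assumed beam-to-ground-plate gap, m
voltage = 1.0             # assumed drive voltage, V
beam_length = 20e-6       # assumed beam length, m
beam_width = 2e-6         # assumed beam width, m

area = beam_length * beam_width
# electrostatic pressure between parallel plates: eps0 * V^2 / (2 * d^2)
pressure = EPS0 * voltage**2 / (2.0 * gap**2)
force = pressure * area
print(f"electrostatic pressure: {pressure:.1f} Pa, force on beam: {force * 1e12:.1f} pN")
# a more viscous fluid in the cavity slows the beam's approach to the ground plate,
# so measuring the bending velocity under a known pull gives a handle on viscosity,
# as described in the passage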
is collected and processed to extract valuable metals. ore bodies often contain more than one valuable metal. tailings of a previous process may be used as a feed in another process to extract a secondary product from the original ore. additionally, a concentrate may contain more than one valuable metal. that concentrate would then be processed to separate the valuable metals into individual constituents. = = metal and its alloys = = much effort has been placed on understanding iron β carbon alloy system, which includes steels and cast irons. plain carbon steels ( those that contain essentially only carbon as an alloying element ) are used in low - cost, high - strength applications, where neither weight nor corrosion are a major concern. cast irons, including ductile iron, are also part of the iron - carbon system. iron - manganese - chromium alloys ( hadfield - type steels ) are also used in non - magnetic applications such as directional drilling. other engineering metals include aluminium, chromium, copper, magnesium, nickel, titanium, zinc, and silicon. these metals are most often used as alloys with the noted exception of silicon, which is not a metal. other forms include : stainless steel, particularly austenitic stainless steels, galvanized steel, nickel alloys, titanium alloys, or occasionally copper alloys are used, where resistance to corrosion is important. aluminium alloys and magnesium alloys are commonly used, when a lightweight strong part is required such as in automotive and aerospace applications. copper - nickel alloys ( such as monel ) are used in highly corrosive environments and for non - magnetic applications. nickel - based superalloys like inconel are used in high - temperature applications such as gas turbines, turbochargers, pressure vessels, and heat exchangers. for extremely high temperatures, single crystal alloys are used to minimize creep. in modern electronics, high purity single crystal silicon is essential for metal - oxide - silicon transistors ( mos ) and integrated circuits. = = production = = in production engineering, metallurgy is concerned with the production of metallic components for use in consumer or engineering products. this involves production of alloys, shaping, heat treatment and surface treatment of product. the task of the metallurgist is to achieve balance between material properties, such as cost, weight, strength, toughness, hardness, corrosion, fatigue resistance and performance in temperature extremes. to achieve this goal, the operating environment must be carefully considered. determining the hardness of the metal using the rockwell, vickers, and brinell hardness scales
work hardened to be suitable for making tools. bronze is an alloy of copper with tin ; the latter being found in relatively few deposits globally caused a long time to elapse before true tin bronze became widespread. ( see : tin sources and trade in ancient times ) bronze was a major advancement over stone as a material for making tools, both because of its mechanical properties like strength and ductility and because it could be cast in molds to make intricately shaped objects. bronze significantly advanced shipbuilding technology with better tools and bronze nails. bronze nails replaced the old method of attaching boards of the hull with cord woven through drilled holes. better ships enabled long - distance trade and the advance of civilization. this technological trend apparently began in the fertile crescent and spread outward over time. these developments were not, and still are not, universal. the three - age system does not accurately describe the technology history of groups outside of eurasia, and does not apply at all in the case of some isolated populations, such as the spinifex people, the sentinelese, and various amazonian tribes, which still make use of stone age technology, and have not developed agricultural or metal technology. these villages preserve traditional customs in the face of global modernity, exhibiting a remarkable resistance to the rapid advancement of technology. = = = = iron age = = = = before iron smelting was developed the only iron was obtained from meteorites and is usually identified by having nickel content. meteoric iron was rare and valuable, but was sometimes used to make tools and other implements, such as fish hooks. the iron age involved the adoption of iron smelting technology. it generally replaced bronze and made it possible to produce tools which were stronger, lighter and cheaper to make than bronze equivalents. the raw materials to make iron, such as ore and limestone, are far more abundant than copper and especially tin ores. consequently, iron was produced in many areas. it was not possible to mass manufacture steel or pure iron because of the high temperatures required. furnaces could reach melting temperature but the crucibles and molds needed for melting and casting had not been developed. steel could be produced by forging bloomery iron to reduce the carbon content in a somewhat controllable way, but steel produced by this method was not homogeneous. in many eurasian cultures, the iron age was the last major step before the development of written language, though again this was not universally the case. in europe, large hill forts were built either as a refuge in time of war or sometimes as
Question: Copper is used in house wiring because it is
A) magnetic.
B) an insulator.
C) an electrical conductor.
D) hard to bend into new shapes.
|
C) an electrical conductor.
|
Context:
pigment chlorophyll a. chlorophyll a ( as well as its plant and green algal - specific cousin chlorophyll b ) absorbs light in the blue - violet and orange / red parts of the spectrum while reflecting and transmitting the green light that we see as the characteristic colour of these organisms. the energy in the red and blue light that these pigments absorb is used by chloroplasts to make energy - rich carbon compounds from carbon dioxide and water by oxygenic photosynthesis, a process that generates molecular oxygen ( o2 ) as a by - product. the light energy captured by chlorophyll a is initially in the form of electrons ( and later a proton gradient ) that is used to make molecules of atp and nadph which temporarily store and transport energy. their energy is used in the light - independent reactions of the calvin cycle by the enzyme rubisco to produce molecules of the 3 - carbon sugar glyceraldehyde 3 - phosphate ( g3p ). glyceraldehyde 3 - phosphate is the first product of photosynthesis and the raw material from which glucose and almost all other organic molecules of biological origin are synthesised. some of the glucose is converted to starch which is stored in the chloroplast. starch is the characteristic energy store of most land plants and algae, while inulin, a polymer of fructose is used for the same purpose in the sunflower family asteraceae. some of the glucose is converted to sucrose ( common table sugar ) for export to the rest of the plant. unlike in animals ( which lack chloroplasts ), plants and their eukaryote relatives have delegated many biochemical roles to their chloroplasts, including synthesising all their fatty acids, and most amino acids. the fatty acids that chloroplasts make are used for many things, such as providing material to build cell membranes out of and making the polymer cutin which is found in the plant cuticle that protects land plants from drying out. plants synthesise a number of unique polymers like the polysaccharide molecules cellulose, pectin and xyloglucan from which the land plant cell wall is constructed. vascular land plants make lignin, a polymer used to strengthen the secondary cell walls of xylem tracheids and vessels to keep them from collapsing when a plant sucks water through them under water stress. lignin
known as " algae " have unique organelles known as chloroplasts. chloroplasts are thought to be descended from cyanobacteria that formed endosymbiotic relationships with ancient plant and algal ancestors. chloroplasts and cyanobacteria contain the blue - green pigment chlorophyll a. chlorophyll a ( as well as its plant and green algal - specific cousin chlorophyll b ) absorbs light in the blue - violet and orange / red parts of the spectrum while reflecting and transmitting the green light that we see as the characteristic colour of these organisms. the energy in the red and blue light that these pigments absorb is used by chloroplasts to make energy - rich carbon compounds from carbon dioxide and water by oxygenic photosynthesis, a process that generates molecular oxygen ( o2 ) as a by - product. the light energy captured by chlorophyll a is initially in the form of electrons ( and later a proton gradient ) that is used to make molecules of atp and nadph which temporarily store and transport energy. their energy is used in the light - independent reactions of the calvin cycle by the enzyme rubisco to produce molecules of the 3 - carbon sugar glyceraldehyde 3 - phosphate ( g3p ). glyceraldehyde 3 - phosphate is the first product of photosynthesis and the raw material from which glucose and almost all other organic molecules of biological origin are synthesised. some of the glucose is converted to starch which is stored in the chloroplast. starch is the characteristic energy store of most land plants and algae, while inulin, a polymer of fructose is used for the same purpose in the sunflower family asteraceae. some of the glucose is converted to sucrose ( common table sugar ) for export to the rest of the plant. unlike in animals ( which lack chloroplasts ), plants and their eukaryote relatives have delegated many biochemical roles to their chloroplasts, including synthesising all their fatty acids, and most amino acids. the fatty acids that chloroplasts make are used for many things, such as providing material to build cell membranes out of and making the polymer cutin which is found in the plant cuticle that protects land plants from drying out. plants synthesise a number of unique polymers like the polysaccharide molecules cellulose,
the curvature radiation is applied to explain the circular polarization of frbs. significant circular polarization is reported in both apparently non - repeating and repeating frbs. curvature radiation can produce significant circular polarization at the wing of the radiation beam. in the curvature radiation scenario, in order to see significant circular polarization in frbs, ( 1 ) more energetic bursts, ( 2 ) bursts with electrons having higher lorentz factors, and ( 3 ) a slowly rotating neutron star at the centre are required. different rotational periods of the central neutron star may explain why some frbs have high circular polarization, while others don ' t. considering a possible difference in refractive index between the parallel and perpendicular components of the electric field, the position angle may change rapidly over the narrow pulse window of the radiation beam. the position angle swing in frbs may also be explained by this non - geometric origin, besides that of the rotating vector model.
another and therefore take part in chemical reactions that sustain life. in terms of its molecular structure, water is a small polar molecule with a bent shape formed by the polar covalent bonds of two hydrogen ( h ) atoms to one oxygen ( o ) atom ( h2o ). because the o β h bonds are polar, the oxygen atom has a slight negative charge and the two hydrogen atoms have a slight positive charge. this polar property of water allows it to attract other water molecules via hydrogen bonds, which makes water cohesive. surface tension results from the cohesive force due to the attraction between molecules at the surface of the liquid. water is also adhesive as it is able to adhere to the surface of any polar or charged non - water molecules. water is denser as a liquid than it is as a solid ( or ice ). this unique property of water allows ice to float above liquid water such as ponds, lakes, and oceans, thereby insulating the liquid below from the cold air above. water has the capacity to absorb energy, giving it a higher specific heat capacity than other solvents such as ethanol. thus, a large amount of energy is needed to break the hydrogen bonds between water molecules to convert liquid water into water vapor. as a molecule, water is not completely stable as each water molecule continuously dissociates into hydrogen and hydroxyl ions before reforming into a water molecule again. in pure water, the number of hydrogen ions balances ( or equals ) the number of hydroxyl ions, resulting in a ph that is neutral. = = = organic compounds = = = organic compounds are molecules that contain carbon bonded to another element such as hydrogen. with the exception of water, nearly all the molecules that make up each organism contain carbon. carbon can form covalent bonds with up to four other atoms, enabling it to form diverse, large, and complex molecules. for example, a single carbon atom can form four single covalent bonds such as in methane, two double covalent bonds such as in carbon dioxide ( co2 ), or a triple covalent bond such as in carbon monoxide ( co ). moreover, carbon can form very long chains of interconnecting carbon β carbon bonds such as octane or ring - like structures such as glucose. the simplest form of an organic molecule is the hydrocarbon, which is a large family of organic compounds that are composed of hydrogen atoms bonded to a chain of carbon atoms. a hydrocarbon backbone can be substituted by other elements such as oxygen ( o ), hydrogen
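The statement above that pure water is neutral because hydrogen and hydroxyl ions balance can be made concrete with the usual pH arithmetic; the short sketch below simply restates that calculation for pure water at room temperature.

# ph of pure water from the hydrogen-ion concentration (mol/L)
import math

h_ions = 1.0e-7        # [h+] in pure water at 25 degC
oh_ions = 1.0e-7       # [oh-] balances [h+] in pure water
ph = -math.log10(h_ions)
poh = -math.log10(oh_ions)
print(f"ph = {ph:.1f}, poh = {poh:.1f}, ph + poh = {ph + poh:.1f}")
# equal ion concentrations give ph 7, i.e. the neutral value described in the passage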
their primary metabolism like the photosynthetic calvin cycle and crassulacean acid metabolism. others make specialised materials like the cellulose and lignin used to build their bodies, and secondary products like resins and aroma compounds. plants and various other groups of photosynthetic eukaryotes collectively known as " algae " have unique organelles known as chloroplasts. chloroplasts are thought to be descended from cyanobacteria that formed endosymbiotic relationships with ancient plant and algal ancestors. chloroplasts and cyanobacteria contain the blue - green pigment chlorophyll a. chlorophyll a ( as well as its plant and green algal - specific cousin chlorophyll b ) absorbs light in the blue - violet and orange / red parts of the spectrum while reflecting and transmitting the green light that we see as the characteristic colour of these organisms. the energy in the red and blue light that these pigments absorb is used by chloroplasts to make energy - rich carbon compounds from carbon dioxide and water by oxygenic photosynthesis, a process that generates molecular oxygen ( o2 ) as a by - product. the light energy captured by chlorophyll a is initially in the form of electrons ( and later a proton gradient ) that is used to make molecules of atp and nadph which temporarily store and transport energy. their energy is used in the light - independent reactions of the calvin cycle by the enzyme rubisco to produce molecules of the 3 - carbon sugar glyceraldehyde 3 - phosphate ( g3p ). glyceraldehyde 3 - phosphate is the first product of photosynthesis and the raw material from which glucose and almost all other organic molecules of biological origin are synthesised. some of the glucose is converted to starch which is stored in the chloroplast. starch is the characteristic energy store of most land plants and algae, while inulin, a polymer of fructose is used for the same purpose in the sunflower family asteraceae. some of the glucose is converted to sucrose ( common table sugar ) for export to the rest of the plant. unlike in animals ( which lack chloroplasts ), plants and their eukaryote relatives have delegated many biochemical roles to their chloroplasts, including synthesising all their fatty acids, and most amino acids. the fatty acids that
of these organisms. the energy in the red and blue light that these pigments absorb is used by chloroplasts to make energy - rich carbon compounds from carbon dioxide and water by oxygenic photosynthesis, a process that generates molecular oxygen ( o2 ) as a by - product. the light energy captured by chlorophyll a is initially in the form of electrons ( and later a proton gradient ) that is used to make molecules of atp and nadph which temporarily store and transport energy. their energy is used in the light - independent reactions of the calvin cycle by the enzyme rubisco to produce molecules of the 3 - carbon sugar glyceraldehyde 3 - phosphate ( g3p ). glyceraldehyde 3 - phosphate is the first product of photosynthesis and the raw material from which glucose and almost all other organic molecules of biological origin are synthesised. some of the glucose is converted to starch which is stored in the chloroplast. starch is the characteristic energy store of most land plants and algae, while inulin, a polymer of fructose is used for the same purpose in the sunflower family asteraceae. some of the glucose is converted to sucrose ( common table sugar ) for export to the rest of the plant. unlike in animals ( which lack chloroplasts ), plants and their eukaryote relatives have delegated many biochemical roles to their chloroplasts, including synthesising all their fatty acids, and most amino acids. the fatty acids that chloroplasts make are used for many things, such as providing material to build cell membranes out of and making the polymer cutin which is found in the plant cuticle that protects land plants from drying out. plants synthesise a number of unique polymers like the polysaccharide molecules cellulose, pectin and xyloglucan from which the land plant cell wall is constructed. vascular land plants make lignin, a polymer used to strengthen the secondary cell walls of xylem tracheids and vessels to keep them from collapsing when a plant sucks water through them under water stress. lignin is also used in other cell types like sclerenchyma fibres that provide structural support for a plant and is a major constituent of wood. sporopollenin is a chemically resistant polymer found in the outer cell walls of spores and pollen of land plants responsible for the survival of early land plant spores and
the first observations of saturn ' s visible - wavelength aurora were made by the cassini camera. the aurora was observed between 2006 and 2013 in the northern and southern hemispheres. the color of the aurora changes from pink at a few hundred km above the horizon to purple at 1000 - 1500 km above the horizon. the spectrum observed in 9 filters spanning wavelengths from 250 nm to 1000 nm has a prominent h - alpha line and roughly agrees with laboratory simulated auroras. auroras in both hemispheres vary dramatically with longitude. auroras form bright arcs between 70 and 80 degrees latitude north and between 65 and 80 degrees latitude south, which sometimes spiral around the pole, and sometimes form double arcs. a large 10, 000 - km - scale longitudinal brightness structure persists for more than 100 hours. this structure rotates approximately together with saturn. on top of the large steady structure, the auroras brighten suddenly on timescales of a few minutes. these brightenings repeat with a period of about 1 hour. smaller, 1000 - km - scale structures may move faster or lag behind saturn ' s rotation on timescales of tens of minutes. the persistence of the nearly - corotating large bright longitudinal structure in the auroral oval, seen in two movies spanning 8 and 11 rotations, gives an estimate of the period of 10.65 $\pm$ 0.15 h for 2009 in the northern oval and 10.8 $\pm$ 0.1 h for 2012 in the southern oval. the 2009 north aurora period is close to the north branch of saturn kilometric radiation ( skr ) detected at that time.
##simal cube of material relative to a reference configuration. mechanical strains are caused by mechanical stress, see stress - strain curve. the relationship between stress and strain is generally linear and reversible up until the yield point and the deformation is elastic. elasticity in materials occurs when applied stress does not surpass the energy required to break molecular bonds, allowing the material to deform reversibly and return to its original shape once the stress is removed. the linear relationship for a material is known as young ' s modulus. above the yield point, some degree of permanent distortion remains after unloading and is termed plastic deformation. the determination of the stress and strain throughout a solid object is given by the field of strength of materials and for a structure by structural analysis. in the above figure, it can be seen that the compressive loading ( indicated by the arrow ) has caused deformation in the cylinder so that the original shape ( dashed lines ) has changed ( deformed ) into one with bulging sides. the sides bulge because the material, although strong enough to not crack or otherwise fail, is not strong enough to support the load without change. as a result, the material is forced out laterally. internal forces ( in this case at right angles to the deformation ) resist the applied load. = = types of deformation = = depending on the type of material, size and geometry of the object, and the forces applied, various types of deformation may result. the image to the right shows the engineering stress vs. strain diagram for a typical ductile material such as steel. different deformation modes may occur under different conditions, as can be depicted using a deformation mechanism map. permanent deformation is irreversible ; the deformation stays even after removal of the applied forces, while the temporary deformation is recoverable as it disappears after the removal of applied forces. temporary deformation is also called elastic deformation, while the permanent deformation is called plastic deformation. = = = elastic deformation = = = the study of temporary or elastic deformation in the case of engineering strain is applied to materials used in mechanical and structural engineering, such as concrete and steel, which are subjected to very small deformations. engineering strain is modeled by infinitesimal strain theory, also called small strain theory, small deformation theory, small displacement theory, or small displacement - gradient theory where strains and rotations are both small. for some materials, e. g. elastomers and polymers, subjected to large deformations, the engineering definition of strain is not applicable, e. g. typical engineering strains
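The linear stress-strain relationship and the yield point described above can be summarised in a few lines of code; the modulus and yield stress below are typical textbook values for a structural steel, assumed here purely for illustration.

# minimal sketch of the linear (elastic) part of a stress-strain curve
# assumed, typical values for a structural steel
YOUNGS_MODULUS = 200e9    # Pa
YIELD_STRESS = 250e6      # Pa

def stress(strain):
    """hooke's law: stress = E * strain, valid only below the yield point."""
    s = YOUNGS_MODULUS * strain
    if s > YIELD_STRESS:
        raise ValueError("beyond the yield point - deformation is no longer purely elastic")
    return s

for eps in (0.0005, 0.001, 0.00125):
    print(f"strain {eps:.5f} -> stress {stress(eps) / 1e6:.0f} MPa")
# strains above yield_stress / youngs_modulus = 0.00125 leave permanent (plastic)
# deformation, which is why the loop stops at that value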
becomes quite gentle. accordingly, in large basins, rivers in most cases begin as torrents with a variable flow, and end as gently flowing rivers with a comparatively regular discharge. the irregular flow of rivers throughout their course forms one of the main difficulties in devising works for mitigating inundations or for increasing the navigable capabilities of rivers. in tropical countries subject to periodical rains, the rivers are in flood during the rainy season and have hardly any flow during the rest of the year, while in temperate regions, where the rainfall is more evenly distributed throughout the year, evaporation causes the available rainfall to be much less in hot summer weather than in the winter months, so that the rivers fall to their low stage in the summer and are liable to be in flood in the winter. in fact, with a temperate climate, the year may be divided into a warm and a cold season, extending from may to october and from november to april in the northern hemisphere respectively ; the rivers are low and moderate floods are of rare occurrence during the warm period, and the rivers are high and subject to occasional heavy floods after a considerable rainfall during the cold period in most years. the only exceptions are rivers which have their sources amongst mountains clad with perpetual snow and are fed by glaciers ; their floods occur in the summer from the melting of snow and ice, as exemplified by the rhone above the lake of geneva, and the arve which joins it below. but even these rivers are liable to have their flow modified by the influx of tributaries subject to different conditions, so that the rhone below lyon has a more uniform discharge than most rivers, as the summer floods of the arve are counteracted to a great extent by the low stage of the saone flowing into the rhone at lyon, which has its floods in the winter when the arve, on the contrary, is low. another serious obstacle encountered in river engineering consists in the large quantity of detritus they bring down in flood - time, derived mainly from the disintegration of the surface layers of the hills and slopes in the upper parts of the valleys by glaciers, frost and rain. the power of a current to transport materials varies with its velocity, so that torrents with a rapid fall near the sources of rivers can carry down rocks, boulders and large stones, which are by degrees ground by attrition in their onward course into slate, gravel, sand and silt, simultaneously with the gradual reduction in fall, and, consequently, in the transporting force of the current. accordingly, under
if a finite group g acts topologically and faithfully on r ^ 3, then g is a subgroup of o ( 3 ).
Question: Which process causes light to bend and form a rainbow?
A) frequency
B) resonance
C) refraction
D) reflection
|
C) refraction
|
Context:
= = organic compounds are molecules that contain carbon bonded to another element such as hydrogen. with the exception of water, nearly all the molecules that make up each organism contain carbon. carbon can form covalent bonds with up to four other atoms, enabling it to form diverse, large, and complex molecules. for example, a single carbon atom can form four single covalent bonds such as in methane, two double covalent bonds such as in carbon dioxide ( co2 ), or a triple covalent bond such as in carbon monoxide ( co ). moreover, carbon can form very long chains of interconnecting carbon β carbon bonds such as octane or ring - like structures such as glucose. the simplest form of an organic molecule is the hydrocarbon, which is a large family of organic compounds that are composed of hydrogen atoms bonded to a chain of carbon atoms. a hydrocarbon backbone can be substituted by other elements such as oxygen ( o ), hydrogen ( h ), phosphorus ( p ), and sulfur ( s ), which can change the chemical behavior of that compound. groups of atoms that contain these elements ( o -, h -, p -, and s - ) and are bonded to a central carbon atom or skeleton are called functional groups. there are six prominent functional groups that can be found in organisms : amino group, carboxyl group, carbonyl group, hydroxyl group, phosphate group, and sulfhydryl group. in 1953, the miller β urey experiment showed that organic compounds could be synthesized abiotically within a closed system mimicking the conditions of early earth, thus suggesting that complex organic molecules could have arisen spontaneously in early earth ( see abiogenesis ). = = = macromolecules = = = macromolecules are large molecules made up of smaller subunits or monomers. monomers include sugars, amino acids, and nucleotides. carbohydrates include monomers and polymers of sugars. lipids are the only class of macromolecules that are not made up of polymers. they include steroids, phospholipids, and fats, largely nonpolar and hydrophobic ( water - repelling ) substances. proteins are the most diverse of the macromolecules. they include enzymes, transport proteins, large signaling molecules, antibodies, and structural proteins. the basic unit ( or monomer ) of a protein is an amino acid. twenty amino acids are used in proteins. nucleic acids
single carbon atom can form four single covalent bonds such as in methane, two double covalent bonds such as in carbon dioxide ( co2 ), or a triple covalent bond such as in carbon monoxide ( co ). moreover, carbon can form very long chains of interconnecting carbon β carbon bonds such as octane or ring - like structures such as glucose. the simplest form of an organic molecule is the hydrocarbon, which is a large family of organic compounds that are composed of hydrogen atoms bonded to a chain of carbon atoms. a hydrocarbon backbone can be substituted by other elements such as oxygen ( o ), hydrogen ( h ), phosphorus ( p ), and sulfur ( s ), which can change the chemical behavior of that compound. groups of atoms that contain these elements ( o -, h -, p -, and s - ) and are bonded to a central carbon atom or skeleton are called functional groups. there are six prominent functional groups that can be found in organisms : amino group, carboxyl group, carbonyl group, hydroxyl group, phosphate group, and sulfhydryl group. in 1953, the miller β urey experiment showed that organic compounds could be synthesized abiotically within a closed system mimicking the conditions of early earth, thus suggesting that complex organic molecules could have arisen spontaneously in early earth ( see abiogenesis ). = = = macromolecules = = = macromolecules are large molecules made up of smaller subunits or monomers. monomers include sugars, amino acids, and nucleotides. carbohydrates include monomers and polymers of sugars. lipids are the only class of macromolecules that are not made up of polymers. they include steroids, phospholipids, and fats, largely nonpolar and hydrophobic ( water - repelling ) substances. proteins are the most diverse of the macromolecules. they include enzymes, transport proteins, large signaling molecules, antibodies, and structural proteins. the basic unit ( or monomer ) of a protein is an amino acid. twenty amino acids are used in proteins. nucleic acids are polymers of nucleotides. their function is to store, transmit, and express hereditary information. = = cells = = cell theory states that cells are the fundamental units of life, that all living things are composed of one or more cells, and that all cells arise from preexisting cells through cell division
to dye denim and the artist ' s pigments gamboge and rose madder. sugar, starch, cotton, linen, hemp, some types of rope, wood and particle boards, papyrus and paper, vegetable oils, wax, and natural rubber are examples of commercially important materials made from plant tissues or their secondary products. charcoal, a pure form of carbon made by pyrolysis of wood, has a long history as a metal - smelting fuel, as a filter material and adsorbent and as an artist ' s material and is one of the three ingredients of gunpowder. cellulose, the world ' s most abundant organic polymer, can be converted into energy, fuels, materials and chemical feedstock. products made from cellulose include rayon and cellophane, wallpaper paste, biobutanol and gun cotton. sugarcane, rapeseed and soy are some of the plants with a highly fermentable sugar or oil content that are used as sources of biofuels, important alternatives to fossil fuels, such as biodiesel. sweetgrass was used by native americans to ward off bugs like mosquitoes. these bug repelling properties of sweetgrass were later found by the american chemical society in the molecules phytol and coumarin. = = plant ecology = = plant ecology is the science of the functional relationships between plants and their habitats β the environments where they complete their life cycles. plant ecologists study the composition of local and regional floras, their biodiversity, genetic diversity and fitness, the adaptation of plants to their environment, and their competitive or mutualistic interactions with other species. some ecologists even rely on empirical data from indigenous people that is gathered by ethnobotanists. this information can relay a great deal of information on how the land once was thousands of years ago and how it has changed over that time. the goals of plant ecology are to understand the causes of their distribution patterns, productivity, environmental impact, evolution, and responses to environmental change. plants depend on certain edaphic ( soil ) and climatic factors in their environment but can modify these factors too. for example, they can change their environment ' s albedo, increase runoff interception, stabilise mineral soils and develop their organic content, and affect local temperature. plants compete with other organisms in their ecosystem for resources. they interact with their neighbours at a variety of spatial scales in groups, populations and communities that collectively constitute vegetation. regions with characteristic vegetation types and dominant plants as well as similar abiot
liver glycogen. during recovery, when oxygen becomes available, nad + attaches to hydrogen from lactate to form atp. in yeast, the waste products are ethanol and carbon dioxide. this type of fermentation is known as alcoholic or ethanol fermentation. the atp generated in this process is made by substrate - level phosphorylation, which does not require oxygen. = = = photosynthesis = = = photosynthesis is a process used by plants and other organisms to convert light energy into chemical energy that can later be released to fuel the organism ' s metabolic activities via cellular respiration. this chemical energy is stored in carbohydrate molecules, such as sugars, which are synthesized from carbon dioxide and water. in most cases, oxygen is released as a waste product. most plants, algae, and cyanobacteria perform photosynthesis, which is largely responsible for producing and maintaining the oxygen content of the earth ' s atmosphere, and supplies most of the energy necessary for life on earth. photosynthesis has four stages : light absorption, electron transport, atp synthesis, and carbon fixation. light absorption is the initial step of photosynthesis whereby light energy is absorbed by chlorophyll pigments attached to proteins in the thylakoid membranes. the absorbed light energy is used to remove electrons from a donor ( water ) to a primary electron acceptor, a quinone designated as q. in the second stage, electrons move from the quinone primary electron acceptor through a series of electron carriers until they reach a final electron acceptor, which is usually the oxidized form of nadp +, which is reduced to nadph, a process that takes place in a protein complex called photosystem i ( psi ). the transport of electrons is coupled to the movement of protons ( or hydrogen ) from the stroma to the thylakoid membrane, which forms a ph gradient across the membrane as hydrogen becomes more concentrated in the lumen than in the stroma. this is analogous to the proton - motive force generated across the inner mitochondrial membrane in aerobic respiration. during the third stage of photosynthesis, the movement of protons down their concentration gradients from the thylakoid lumen to the stroma through the atp synthase is coupled to the synthesis of atp by that same atp synthase. the nadph and atps generated by the light - dependent reactions in the second and third stages, respectively, provide the energy and
nadh. during anaerobic glycolysis, nad + regenerates when pairs of hydrogen combine with pyruvate to form lactate. lactate formation is catalyzed by lactate dehydrogenase in a reversible reaction. lactate can also be used as an indirect precursor for liver glycogen. during recovery, when oxygen becomes available, nad + attaches to hydrogen from lactate to form atp. in yeast, the waste products are ethanol and carbon dioxide. this type of fermentation is known as alcoholic or ethanol fermentation. the atp generated in this process is made by substrate - level phosphorylation, which does not require oxygen. = = = photosynthesis = = = photosynthesis is a process used by plants and other organisms to convert light energy into chemical energy that can later be released to fuel the organism ' s metabolic activities via cellular respiration. this chemical energy is stored in carbohydrate molecules, such as sugars, which are synthesized from carbon dioxide and water. in most cases, oxygen is released as a waste product. most plants, algae, and cyanobacteria perform photosynthesis, which is largely responsible for producing and maintaining the oxygen content of the earth ' s atmosphere, and supplies most of the energy necessary for life on earth. photosynthesis has four stages : light absorption, electron transport, atp synthesis, and carbon fixation. light absorption is the initial step of photosynthesis whereby light energy is absorbed by chlorophyll pigments attached to proteins in the thylakoid membranes. the absorbed light energy is used to remove electrons from a donor ( water ) to a primary electron acceptor, a quinone designated as q. in the second stage, electrons move from the quinone primary electron acceptor through a series of electron carriers until they reach a final electron acceptor, which is usually the oxidized form of nadp +, which is reduced to nadph, a process that takes place in a protein complex called photosystem i ( psi ). the transport of electrons is coupled to the movement of protons ( or hydrogen ) from the stroma to the thylakoid membrane, which forms a ph gradient across the membrane as hydrogen becomes more concentrated in the lumen than in the stroma. this is analogous to the proton - motive force generated across the inner mitochondrial membrane in aerobic respiration. during the third stage of photosynthesis, the movement of
##colysis. this waste product varies depending on the organism. in skeletal muscles, the waste product is lactic acid. this type of fermentation is called lactic acid fermentation. in strenuous exercise, when energy demands exceed energy supply, the respiratory chain cannot process all of the hydrogen atoms joined by nadh. during anaerobic glycolysis, nad + regenerates when pairs of hydrogen combine with pyruvate to form lactate. lactate formation is catalyzed by lactate dehydrogenase in a reversible reaction. lactate can also be used as an indirect precursor for liver glycogen. during recovery, when oxygen becomes available, nad + attaches to hydrogen from lactate to form atp. in yeast, the waste products are ethanol and carbon dioxide. this type of fermentation is known as alcoholic or ethanol fermentation. the atp generated in this process is made by substrate - level phosphorylation, which does not require oxygen. = = = photosynthesis = = = photosynthesis is a process used by plants and other organisms to convert light energy into chemical energy that can later be released to fuel the organism ' s metabolic activities via cellular respiration. this chemical energy is stored in carbohydrate molecules, such as sugars, which are synthesized from carbon dioxide and water. in most cases, oxygen is released as a waste product. most plants, algae, and cyanobacteria perform photosynthesis, which is largely responsible for producing and maintaining the oxygen content of the earth ' s atmosphere, and supplies most of the energy necessary for life on earth. photosynthesis has four stages : light absorption, electron transport, atp synthesis, and carbon fixation. light absorption is the initial step of photosynthesis whereby light energy is absorbed by chlorophyll pigments attached to proteins in the thylakoid membranes. the absorbed light energy is used to remove electrons from a donor ( water ) to a primary electron acceptor, a quinone designated as q. in the second stage, electrons move from the quinone primary electron acceptor through a series of electron carriers until they reach a final electron acceptor, which is usually the oxidized form of nadp +, which is reduced to nadph, a process that takes place in a protein complex called photosystem i ( psi ). the transport of electrons is coupled to the movement of protons ( or hydrogen ) from the stroma
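The four stages of photosynthesis described above add up to the familiar overall reaction (a standard textbook summary rather than a quotation from the passages):

$$6\,\mathrm{CO_2} + 6\,\mathrm{H_2O} + \text{light energy} \longrightarrow \mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2}$$

The light-dependent stages supply the ATP and NADPH consumed during carbon fixation, and the oxygen released as a waste product comes from the water that donates the electrons.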
used by pharmaceutical companies as a way of drug discovery. plants can synthesise coloured dyes and pigments such as the anthocyanins responsible for the red colour of red wine, yellow weld and blue woad used together to produce lincoln green, indoxyl, source of the blue dye indigo traditionally used to dye denim and the artist ' s pigments gamboge and rose madder. sugar, starch, cotton, linen, hemp, some types of rope, wood and particle boards, papyrus and paper, vegetable oils, wax, and natural rubber are examples of commercially important materials made from plant tissues or their secondary products. charcoal, a pure form of carbon made by pyrolysis of wood, has a long history as a metal - smelting fuel, as a filter material and adsorbent and as an artist ' s material and is one of the three ingredients of gunpowder. cellulose, the world ' s most abundant organic polymer, can be converted into energy, fuels, materials and chemical feedstock. products made from cellulose include rayon and cellophane, wallpaper paste, biobutanol and gun cotton. sugarcane, rapeseed and soy are some of the plants with a highly fermentable sugar or oil content that are used as sources of biofuels, important alternatives to fossil fuels, such as biodiesel. sweetgrass was used by native americans to ward off bugs like mosquitoes. these bug repelling properties of sweetgrass were later found by the american chemical society in the molecules phytol and coumarin. = = plant ecology = = plant ecology is the science of the functional relationships between plants and their habitats β the environments where they complete their life cycles. plant ecologists study the composition of local and regional floras, their biodiversity, genetic diversity and fitness, the adaptation of plants to their environment, and their competitive or mutualistic interactions with other species. some ecologists even rely on empirical data from indigenous people that is gathered by ethnobotanists. this information can relay a great deal of information on how the land once was thousands of years ago and how it has changed over that time. the goals of plant ecology are to understand the causes of their distribution patterns, productivity, environmental impact, evolution, and responses to environmental change. plants depend on certain edaphic ( soil ) and climatic factors in their environment but can modify these factors too. for example, they can change their environment ' s albedo, increase runoff interception
and nucleotides. carbohydrates include monomers and polymers of sugars. lipids are the only class of macromolecules that are not made up of polymers. they include steroids, phospholipids, and fats, largely nonpolar and hydrophobic ( water - repelling ) substances. proteins are the most diverse of the macromolecules. they include enzymes, transport proteins, large signaling molecules, antibodies, and structural proteins. the basic unit ( or monomer ) of a protein is an amino acid. twenty amino acids are used in proteins. nucleic acids are polymers of nucleotides. their function is to store, transmit, and express hereditary information. = = cells = = cell theory states that cells are the fundamental units of life, that all living things are composed of one or more cells, and that all cells arise from preexisting cells through cell division. most cells are very small, with diameters ranging from 1 to 100 micrometers and are therefore only visible under a light or electron microscope. there are generally two types of cells : eukaryotic cells, which contain a nucleus, and prokaryotic cells, which do not. prokaryotes are single - celled organisms such as bacteria, whereas eukaryotes can be single - celled or multicellular. in multicellular organisms, every cell in the organism ' s body is derived ultimately from a single cell in a fertilized egg. = = = cell structure = = = every cell is enclosed within a cell membrane that separates its cytoplasm from the extracellular space. a cell membrane consists of a lipid bilayer, including cholesterols that sit between phospholipids to maintain their fluidity at various temperatures. cell membranes are semipermeable, allowing small molecules such as oxygen, carbon dioxide, and water to pass through while restricting the movement of larger molecules and charged particles such as ions. cell membranes also contain membrane proteins, including integral membrane proteins that go across the membrane serving as membrane transporters, and peripheral proteins that loosely attach to the outer side of the cell membrane, acting as enzymes shaping the cell. cell membranes are involved in various cellular processes such as cell adhesion, storing electrical energy, and cell signalling and serve as the attachment surface for several extracellular structures such as a cell wall, glycocalyx, and cytoskeleton. within the cytoplasm of a cell
( h ), phosphorus ( p ), and sulfur ( s ), which can change the chemical behavior of that compound. groups of atoms that contain these elements ( o -, h -, p -, and s - ) and are bonded to a central carbon atom or skeleton are called functional groups. there are six prominent functional groups that can be found in organisms : amino group, carboxyl group, carbonyl group, hydroxyl group, phosphate group, and sulfhydryl group. in 1953, the miller β urey experiment showed that organic compounds could be synthesized abiotically within a closed system mimicking the conditions of early earth, thus suggesting that complex organic molecules could have arisen spontaneously in early earth ( see abiogenesis ). = = = macromolecules = = = macromolecules are large molecules made up of smaller subunits or monomers. monomers include sugars, amino acids, and nucleotides. carbohydrates include monomers and polymers of sugars. lipids are the only class of macromolecules that are not made up of polymers. they include steroids, phospholipids, and fats, largely nonpolar and hydrophobic ( water - repelling ) substances. proteins are the most diverse of the macromolecules. they include enzymes, transport proteins, large signaling molecules, antibodies, and structural proteins. the basic unit ( or monomer ) of a protein is an amino acid. twenty amino acids are used in proteins. nucleic acids are polymers of nucleotides. their function is to store, transmit, and express hereditary information. = = cells = = cell theory states that cells are the fundamental units of life, that all living things are composed of one or more cells, and that all cells arise from preexisting cells through cell division. most cells are very small, with diameters ranging from 1 to 100 micrometers and are therefore only visible under a light or electron microscope. there are generally two types of cells : eukaryotic cells, which contain a nucleus, and prokaryotic cells, which do not. prokaryotes are single - celled organisms such as bacteria, whereas eukaryotes can be single - celled or multicellular. in multicellular organisms, every cell in the organism ' s body is derived ultimately from a single cell in a fertilized egg. = = = cell structure = = = every cell is enclosed
prominent functional groups that can be found in organisms : amino group, carboxyl group, carbonyl group, hydroxyl group, phosphate group, and sulfhydryl group. in 1953, the miller β urey experiment showed that organic compounds could be synthesized abiotically within a closed system mimicking the conditions of early earth, thus suggesting that complex organic molecules could have arisen spontaneously in early earth ( see abiogenesis ). = = = macromolecules = = = macromolecules are large molecules made up of smaller subunits or monomers. monomers include sugars, amino acids, and nucleotides. carbohydrates include monomers and polymers of sugars. lipids are the only class of macromolecules that are not made up of polymers. they include steroids, phospholipids, and fats, largely nonpolar and hydrophobic ( water - repelling ) substances. proteins are the most diverse of the macromolecules. they include enzymes, transport proteins, large signaling molecules, antibodies, and structural proteins. the basic unit ( or monomer ) of a protein is an amino acid. twenty amino acids are used in proteins. nucleic acids are polymers of nucleotides. their function is to store, transmit, and express hereditary information. = = cells = = cell theory states that cells are the fundamental units of life, that all living things are composed of one or more cells, and that all cells arise from preexisting cells through cell division. most cells are very small, with diameters ranging from 1 to 100 micrometers and are therefore only visible under a light or electron microscope. there are generally two types of cells : eukaryotic cells, which contain a nucleus, and prokaryotic cells, which do not. prokaryotes are single - celled organisms such as bacteria, whereas eukaryotes can be single - celled or multicellular. in multicellular organisms, every cell in the organism ' s body is derived ultimately from a single cell in a fertilized egg. = = = cell structure = = = every cell is enclosed within a cell membrane that separates its cytoplasm from the extracellular space. a cell membrane consists of a lipid bilayer, including cholesterols that sit between phospholipids to maintain their fluidity at various temperatures. cell membranes are semipermeable, allowing small molecules such as
Question: Sugar is composed of carbon, hydrogen, and oxygen. Sugar is an example of which of the following?
A) an atom
B) a compound
C) an electron
D) a mixture
|
B) a compound
|
Context:
grasping an object is a matter of first moving a prehensile organ to some position in the world, and then managing the contact relationship between the prehensile organ and the object. once the contact relationship has been established and made stable, the object is part of the body and it can move in the world. like any action, the action of grasping is ontologically anchored in physical space, while the correlative movement originates in the space of the body. evolution has found remarkable solutions that allow organisms to rapidly and efficiently manage the relationship between their body and the world. it is therefore natural that roboticists consider taking inspiration from these natural solutions, while also contributing to a better understanding of their origin.
as possible in order to avoid frustration or injury. there are two main types of human errors which are categorized as slips and mistakes. slips are a very common kind of error involving automatic behaviors ( i. e. typos, hitting the wrong menu item ). when we experience slips, we have the correct goal in mind, but execute the wrong action. mistakes on the other hand involve conscious deliberation that result in the incorrect conclusion. when we experience mistakes, we have the wrong goal in mind and thereby execute the wrong action. even though slips are the more common type of error, they are no less dangerous. a certain type of slip error, a mode error, can be especially dangerous if a user is executing a high - risk task. for instance, if a user is operating a vehicle and does not realize they are in the wrong mode ( i. e. reverse ), they might step on the gas intending to drive, but instead accelerate into a garage wall or another car. in order to avoid modal errors, designers often employ modeless states in which users do not have to choose a mode at all, or they must execute a continuous action while intending to execute a certain mode ( i. e. pressing a key continuously in order to activate " lasso " mode in photoshop ). = = evaluation methods = = usability engineers conduct usability evaluations of existing or proposed interfaces and their findings are fed back to the designer for use in design or redesign. common usability evaluation methods include : card sorting cognitive task analysis cognitive walkthroughs contextual inquiry focus groups heuristic evaluations interviews questionnaires rite method surveys think aloud protocol usability testing = = software applications and development tools = = there are a variety of online resources that make the job of a usability engineer a little easier. online tools are only a useful tool, and do not substitute for a complete usability engineering analysis. some examples of these include : = = = the web metrics tool suite = = = this is a product of the national institute of standards and technology. this toolkit is focused on evaluating the html of a website versus a wide range of usability guidelines and includes : web static analyzer tool ( websat ) β checks web page html against typical usability guidelines web category analysis tool ( webcat ) β lets the usability engineer construct and conduct a web category analysis web variable instrumenter program ( webvip ) β instruments a website to capture a log of user interaction framework for logging usability data ( flu
the dynamic impedance of a sphere oscillating in an elastic medium is considered. oestreicher ' s formula for the impedance of a sphere bonded to the surrounding medium can be expressed simply in terms of three lumped impedances associated with the displaced mass and the longitudinal and transverse waves. if the surface of the sphere slips while the normal velocity remains continuous, the impedance formula is modified by adjusting the definition of the transverse impedance to include the interfacial impedance.
usability engineering, it ' s important to target and identify human errors when interacting with the product of interest because if a user is expected to engage with a product, interface, or service in some way, the very introduction of a human in that engagement increases the potential of encountering human error. error should be reduced as much as possible in order to avoid frustration or injury. there are two main types of human errors which are categorized as slips and mistakes. slips are a very common kind of error involving automatic behaviors ( i. e. typos, hitting the wrong menu item ). when we experience slips, we have the correct goal in mind, but execute the wrong action. mistakes, on the other hand, involve conscious deliberation that results in the incorrect conclusion. when we experience mistakes, we have the wrong goal in mind and thereby execute the wrong action. even though slips are the more common type of error, they are no less dangerous. a certain type of slip error, a mode error, can be especially dangerous if a user is executing a high - risk task. for instance, if a user is operating a vehicle and does not realize they are in the wrong mode ( i. e. reverse ), they might step on the gas intending to drive, but instead accelerate into a garage wall or another car. in order to avoid modal errors, designers often employ modeless states in which users do not have to choose a mode at all, or they must execute a continuous action while intending to execute a certain mode ( i. e. pressing a key continuously in order to activate " lasso " mode in photoshop ). = = evaluation methods = = usability engineers conduct usability evaluations of existing or proposed interfaces and their findings are fed back to the designer for use in design or redesign. common usability evaluation methods include : card sorting cognitive task analysis cognitive walkthroughs contextual inquiry focus groups heuristic evaluations interviews questionnaires rite method surveys think aloud protocol usability testing = = software applications and development tools = = there are a variety of online resources that make the job of a usability engineer a little easier. online tools are only a useful tool, and do not substitute for a complete usability engineering analysis. some examples of these include : = = = the web metrics tool suite = = = this is a product of the national institute of standards and technology. this toolkit is focused on evaluating the html of a website versus a wide range of usability guidelines and includes : web static analyzer tool
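A minimal sketch of the "hold a key to stay in the mode" defence against mode errors described above; the Canvas class and its handler names are hypothetical, invented for this example rather than taken from any particular toolkit:

# Spring-loaded "quasimode": the lasso tool is active only while the key is held,
# so the user cannot forget which mode the interface is in.
class Canvas:
    def __init__(self):
        self.lasso_active = False

    def on_key_down(self, key):
        if key == "q":              # hold 'q' to select with the lasso
            self.lasso_active = True

    def on_key_up(self, key):
        if key == "q":              # releasing the key always returns to the neutral state
            self.lasso_active = False

    def on_drag(self, x, y):
        if self.lasso_active:
            print(f"extend lasso selection to ({x}, {y})")
        else:
            print(f"pan the canvas to ({x}, {y})")

canvas = Canvas()
canvas.on_key_down("q")
canvas.on_drag(10, 12)   # lasso, because the key is held
canvas.on_key_up("q")
canvas.on_drag(15, 18)   # back to panning automatically

Because the mode exists only while the key is physically held down, the user cannot be in the wrong mode without noticing, which is precisely the failure behind the reverse-gear example in the passage.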
the gravitational poynting vector provides a mechanism for the transfer of gravitational energy to a system of falling objects. in the following, we show that the gravitational poynting vector, together with the gravitational larmor theorem, also provides a mechanism to explain how massive bodies acquire rotational kinetic energy when external mechanical forces are applied to them.
general modes : static failure, and fatigue failure. static structural failure occurs when, upon being loaded ( having a force applied ) the object being analyzed either breaks or is deformed plastically, depending on the criterion for failure. fatigue failure occurs when an object fails after a number of repeated loading and unloading cycles. fatigue failure occurs because of imperfections in the object : a microscopic crack on the surface of the object, for instance, will grow slightly with each cycle ( propagation ) until the crack is large enough to cause ultimate failure. failure is not simply defined as when a part breaks, however ; it is defined as when a part does not operate as intended. some systems, such as the perforated top sections of some plastic bags, are designed to break. if these systems do not break, failure analysis might be employed to determine the cause. structural analysis is often used by mechanical engineers after a failure has occurred, or when designing to prevent failure. engineers often use online documents and books such as those published by asm to aid them in determining the type of failure and possible causes. once theory is applied to a mechanical design, physical testing is often performed to verify calculated results. structural analysis may be used in an office when designing parts, in the field to analyze failed parts, or in laboratories where parts might undergo controlled failure tests. = = = thermodynamics and thermo - science = = = thermodynamics is an applied science used in several branches of engineering, including mechanical and chemical engineering. at its simplest, thermodynamics is the study of energy, its use and transformation through a system. typically, engineering thermodynamics is concerned with changing energy from one form to another. as an example, automotive engines convert chemical energy ( enthalpy ) from the fuel into heat, and then into mechanical work that eventually turns the wheels. thermodynamics principles are used by mechanical engineers in the fields of heat transfer, thermofluids, and energy conversion. mechanical engineers use thermo - science to design engines and power plants, heating, ventilation, and air - conditioning ( hvac ) systems, heat exchangers, heat sinks, radiators, refrigeration, insulation, and others. = = = design and drafting = = = drafting or technical drawing is the means by which mechanical engineers design products and create instructions for manufacturing parts. a technical drawing can be a computer model or hand - drawn schematic showing all the dimensions necessary to manufacture a
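A rough numerical sketch of the cycle-by-cycle crack growth described above. The passage only states that a microscopic crack grows slightly with each load cycle until it reaches a critical size; the Paris-law growth rate and all material constants below are illustrative assumptions, not values taken from the text:

import math

# illustrative constants for a generic steel component (assumed values)
C = 1.0e-12           # Paris-law coefficient, m/cycle per (MPa*sqrt(m))^m_exp
m_exp = 3.0           # Paris-law exponent
Y = 1.12              # geometry factor for a shallow surface crack
delta_sigma = 180.0   # stress range per load cycle, MPa
K_ic = 60.0           # fracture toughness, MPa*sqrt(m)

a = 0.5e-3                                            # initial crack length, m
a_crit = (K_ic / (Y * delta_sigma)) ** 2 / math.pi    # length at which fast fracture occurs

cycles = 0
while a < a_crit:
    delta_K = Y * delta_sigma * math.sqrt(math.pi * a)   # stress-intensity range this cycle
    a += C * delta_K ** m_exp                            # incremental growth this cycle
    cycles += 1

print(f"crack grows from 0.5 mm to {a_crit * 1000:.1f} mm in about {cycles:,} cycles")

The loop makes the fatigue picture concrete: the part survives millions of seemingly harmless cycles, yet fails suddenly once the crack reaches its critical length.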
a measurable and testable value of a vehicle ' s ability to perform in various conditions. performance can be considered in a wide variety of tasks, but it generally considers how quickly a car can accelerate ( e. g. standing start 1 / 4 mile elapsed time, 0 β 60 mph, etc. ), its top speed, how short and quickly a car can come to a complete stop from a set speed ( e. g. 70 - 0 mph ), how much g - force a car can generate without losing grip, recorded lap - times, cornering speed, brake fade, etc. performance can also reflect the amount of control in inclement weather ( snow, ice, rain ). shift quality : shift quality is the driver ' s perception of the vehicle to an automatic transmission shift event. this is influenced by the powertrain ( internal combustion engine, transmission ), and the vehicle ( driveline, suspension, engine and powertrain mounts, etc. ) shift feel is both a tactile ( felt ) and audible ( heard ) response of the vehicle. shift quality is experienced as various events : transmission shifts are felt as an upshift at acceleration ( 1 β 2 ), or a downshift maneuver in passing ( 4 β 2 ). shift engagements of the vehicle are also evaluated, as in park to reverse, etc. durability / corrosion engineering : durability and corrosion engineering is the evaluation testing of a vehicle for its useful life. tests include mileage accumulation, severe driving conditions, and corrosive salt baths. drivability : drivability is the vehicle ' s response to general driving conditions. cold starts and stalls, rpm dips, idle response, launch hesitations and stumbles, and performance levels all contribute to the overall drivability of any given vehicle. cost : the cost of a vehicle program is typically split into the effect on the variable cost of the vehicle, and the up - front tooling and fixed costs associated with developing the vehicle. there are also costs associated with warranty reductions and marketing. program timing : to some extent programs are timed with respect to the market, and also to the production - schedules of assembly plants. any new part in the design must support the development and manufacturing schedule of the model. design for manufacturability ( dfm ) : dfm refers to designing vehicular components in such a way that they are not only feasible to manufacture, but also such that they are cost - efficient to produce while resulting in acceptable
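Two of the headline figures in the list above come straight from constant-acceleration kinematics; the deceleration and acceleration values in this sketch are assumed, illustrative numbers rather than data for any particular vehicle:

MPH_TO_MS = 0.44704   # miles per hour to metres per second
G = 9.81              # gravitational acceleration, m/s^2

# 70-0 mph stopping distance, assuming a steady 0.9 g deceleration
v0 = 70 * MPH_TO_MS
braking_distance = v0 ** 2 / (2 * 0.9 * G)     # d = v^2 / (2a)
print(f"70-0 mph stopping distance: {braking_distance:.1f} m")

# 0-60 mph time, assuming a constant average acceleration of 0.5 g
v_target = 60 * MPH_TO_MS
print(f"0-60 mph time: {v_target / (0.5 * G):.1f} s")

Real vehicles do not brake or accelerate at a perfectly constant rate, so measured figures differ, but the calculation shows how the quoted metrics relate to the underlying grip and power limits.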
the project consists in determining, mathematically, the trajectory of an artificial satellite as it fights against air resistance. throughout this work we assume that the satellite will eventually crash onto the surface of the planet. we began the study by examining the system of forces acting between the satellite and the earth. applying newton ' s second law, and taking into account the air friction and the speed of the satellite, we obtained the equation that relates the trajectory of the satellite, its speed and the density of the air as a function of altitude. finally, we found a mathematical relation linking the density to the altitude and substituted it into the equation of motion. to verify the model, we also examine what happens if the satellite is given zero initial velocity.
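A minimal sketch of the calculation the abstract describes: Newton's second law with gravity and air friction, an assumed exponential density-altitude relation, and a simple time-stepping scheme in which velocity is updated before position for stability. Every satellite and atmosphere parameter here is an assumed, illustrative value:

import math

G_M = 3.986e14        # earth's gravitational parameter, m^3/s^2
R_E = 6.371e6         # mean earth radius, m
RHO0, H_SCALE = 1.225, 8500.0     # sea-level density (kg/m^3) and scale height (m) of the assumed atmosphere
CD, AREA, MASS = 2.2, 1.0, 100.0  # drag coefficient, cross-section (m^2) and mass (kg) of the satellite

def density(altitude):
    # assumed exponential relation between density and altitude
    return RHO0 * math.exp(-altitude / H_SCALE)

# assumed initial condition: circular orbit 200 km above the surface
r = [R_E + 200e3, 0.0]
v = [0.0, math.sqrt(G_M / r[0])]
dt, t = 2.0, 0.0

while math.hypot(r[0], r[1]) > R_E:
    dist = math.hypot(r[0], r[1])
    speed = math.hypot(v[0], v[1])
    # drag deceleration opposite to the velocity, capped so one step cannot reverse the motion
    if speed > 0.0:
        a_drag = 0.5 * density(dist - R_E) * CD * AREA * speed ** 2 / MASS
        dv_drag = min(a_drag * dt, speed)
        v = [v[i] - dv_drag * v[i] / speed for i in range(2)]
    # gravitational acceleration directed toward the centre of the earth
    v = [v[i] - G_M * r[i] / dist ** 3 * dt for i in range(2)]
    # position is advanced with the updated velocity (semi-implicit step)
    r = [r[i] + v[i] * dt for i in range(2)]
    t += dt

print(f"the satellite reaches the surface after roughly {t / 3600.0:.1f} hours")

With a denser atmosphere model or a larger drag area the decay is faster; giving the satellite zero initial velocity, as the abstract proposes, simply turns the problem into a vertical fall limited by terminal velocity.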
to that of a flat crack through the plain matrix. the magnitude of the toughening is determined by the mismatch strain caused by thermal contraction incompatibility and the microfracture resistance of the particle / matrix interface. the toughening becomes noticeable with a narrow size distribution of appropriately sized particles, and researchers typically accept that deflection effects in materials with roughly equiaxial grains may increase the fracture toughness by about twice the grain boundary value. the model reveals that the increase in toughness is dependent on particle shape and the volume fraction of the second phase, with the most effective morphology being the rod of high aspect ratio, which can account for a fourfold increase in fracture toughness. the toughening arises primarily from the twist of the crack front between particles, as indicated by deflection profiles. disc - shaped particles and spheres are less effective in toughening. fracture toughness, regardless of morphology, is determined by the twist of the crack front at its most severe configuration, rather than the initial tilt of the crack front. only for disc - shaped particles does the initial tilting of the crack front provide significant toughening ; however, the twist component still overrides the tilt - derived toughening. additional important features of the deflection analysis include the appearance of asymptotic toughening for the three morphologies at volume fractions in excess of 0. 2. it is also noted that a significant influence on the toughening by spherical particles is exerted by the interparticle spacing distribution ; greater toughening is afforded when spheres are nearly contacting such that twist angles approach Ο / 2. these predictions provide the basis for the design of high - toughness two - phase ceramic materials. the ideal second phase, in addition to maintaining chemical compatibility, should be present in amounts of 10 to 20 volume percent. greater amounts may diminish the toughness increase due to overlapping particles. particles with high aspect ratios, especially those with rod - shaped morphologies, are most suitable for maximum toughening. this model is often used to determine the factors that contribute to the increase in fracture toughness in ceramics which is ultimately useful in the development of advanced ceramic materials with improved performance. = = theory of chemical processing = = = = = microstructural uniformity = = = in the processing of fine ceramics, the irregular particle sizes and shapes in a typical powder often lead to non - uniform packing morphologies that result in packing density variations in the powder compact. uncontrolled aggl
variation in total solar irradiance is thought to have little effect on the earth ' s surface temperature because of the thermal time constant - - the characteristic response time of the earth ' s global surface temperature to changes in forcing. this time constant is large enough to smooth annual variations but not necessarily variations having a longer period such as those due to solar inertial motion ; the magnitude of these surface temperature variations is estimated.
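The smoothing argument can be made quantitative by treating the global surface temperature as a first-order (single time-constant) response to the forcing; the time constant used below is an assumed, illustrative figure rather than a value taken from the abstract. For sinusoidal forcing of period $T$ and thermal time constant $\tau$, the response amplitude is reduced by the factor

$$\frac{1}{\sqrt{1 + (2\pi\tau/T)^{2}}}.$$

With $\tau = 5$ years, an annual variation ($T = 1$ yr) is attenuated to roughly 3% of its equilibrium amplitude, whereas a 22-year variation is attenuated only to about 57%, so slow variations such as those associated with solar inertial motion pass through far less smoothed than annual ones.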
Question: The tendency of a stationary object to resist being put into motion is known as
A) acceleration.
B) inertia.
C) weight.
D) velocity.
|
B) inertia.
|