Sequence assembly
Influence of technological changes
The complexity of sequence assembly is driven by two major factors: the number of fragments and their lengths. While more and longer fragments allow better identification of sequence overlaps, they also pose problems, as the underlying algorithms show quadratic or even exponential complexity in both the number of fragments and their length. And while shorter sequences are faster to align, they also complicate the layout phase of an assembly, as shorter reads are more difficult to place unambiguously within repeats or near-identical repeats.
Sequence assembly
Influence of technological changes
In the earliest days of DNA sequencing, scientists could obtain only a few short sequences (a few dozen bases) after weeks of work in the laboratory. Hence, these sequences could be aligned by hand in a few minutes.
Sequence assembly
Influence of technological changes
In 1975, the dideoxy termination method (also known as Sanger sequencing) was invented, and until shortly after 2000 the technology was improved to the point where fully automated machines could churn out sequences in a highly parallelised mode 24 hours a day. Large genome centers around the world housed complete farms of these sequencing machines, which in turn made it necessary to optimise assemblers for sequences from whole-genome shotgun sequencing projects, where the reads are about 800–900 bases long, contain sequencing artifacts like sequencing and cloning vectors, and have error rates between 0.5% and 10%. With the Sanger technology, bacterial projects with 20,000 to 200,000 reads could easily be assembled on one computer. Larger projects, like the human genome with approximately 35 million reads, needed large computing farms and distributed computing.
Sequence assembly
Influence of technological changes
By 2004/2005, pyrosequencing had been brought to commercial viability by 454 Life Sciences. This new sequencing method generated reads much shorter than those of Sanger sequencing: initially about 100 bases, later 400–500 bases. Its much higher throughput and lower cost (compared to Sanger sequencing) pushed the adoption of this technology by genome centers, which in turn pushed development of sequence assemblers that could efficiently handle the read sets. The sheer amount of data coupled with technology-specific error patterns in the reads delayed development of assemblers; at the beginning, in 2004, only the Newbler assembler from 454 was available. Released in mid-2007, the hybrid version of the MIRA assembler by Chevreux et al. was the first freely available assembler that could assemble 454 reads as well as mixtures of 454 reads and Sanger reads. Assembling sequences from different sequencing technologies was subsequently termed hybrid assembly.
Sequence assembly
Influence of technological changes
From 2006, the Illumina (previously Solexa) technology has been available and can generate about 100 million reads per run on a single sequencing machine. Compare this to the 35 million reads of the human genome project, which needed several years to be produced on hundreds of sequencing machines. Illumina reads were initially limited to a length of only 36 bases, making the technology less suitable for de novo assembly (such as de novo transcriptome assembly), but newer iterations achieve read lengths above 100 bases from both ends of a 300–400 bp clone. Announced at the end of 2007, the SHARCGS assembler by Dohm et al. was the first published assembler used for an assembly with Solexa reads. It was quickly followed by a number of others.
Sequence assembly
Influence of technological changes
Later, new technologies like SOLiD from Applied Biosystems, Ion Torrent and SMRT were released, and new technologies (e.g. Nanopore sequencing) continue to emerge. Despite the higher error rates of these technologies, they are important for assembly because their longer read length helps to address the repeat problem. It is impossible to assemble through a perfect repeat that is longer than the maximum read length; however, as reads become longer, the chance of a perfect repeat that large becomes small. This gives longer sequencing reads an advantage in assembling repeats even if they have low accuracy (~85%).
Sequence assembly
Assembly algorithms
Different organisms have distinct regions of higher complexity within their genome; hence, different computational approaches are needed. Some of the commonly used approaches are:
Graph assembly: based on graph theory in computer science. The de Bruijn graph is an example of this approach and uses k-mers to assemble a contig from reads.
Greedy graph assembly: this approach scores each read added to the assembly and selects the highest-scoring overlap. Given a set of sequence fragments, the objective is to find a longer sequence that contains all the fragments (see figure under Types of Sequence Assembly):
1. Calculate pairwise alignments of all fragments.
2. Choose the two fragments with the largest overlap.
3. Merge the chosen fragments.
4. Repeat steps 2 and 3 until only one fragment is left.
The result might not be an optimal solution to the problem; a minimal sketch of the procedure follows.
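The following is a minimal Python sketch of the greedy procedure just described, assuming exact (error-free) overlaps and toy fragments; real assemblers use scored alignments and must handle sequencing errors and repeats.

def overlap(a, b, min_len=3):
    # Length of the longest suffix of a that matches a prefix of b.
    start = 0
    while True:
        start = a.find(b[:min_len], start)
        if start == -1:
            return 0
        if b.startswith(a[start:]):
            return len(a) - start
        start += 1

def greedy_assemble(fragments):
    frags = list(fragments)
    while len(frags) > 1:
        best_len, best_i, best_j = 0, None, None
        # Steps 1-2: pairwise comparison, pick the largest overlap.
        for i in range(len(frags)):
            for j in range(len(frags)):
                if i != j:
                    olen = overlap(frags[i], frags[j])
                    if olen > best_len:
                        best_len, best_i, best_j = olen, i, j
        if best_i is None:           # no overlaps left
            return "".join(frags)
        # Step 3: merge the chosen pair.
        merged = frags[best_i] + frags[best_j][best_len:]
        frags = [f for k, f in enumerate(frags) if k not in (best_i, best_j)]
        frags.append(merged)
    return frags[0]

print(greedy_assemble(["ATTAGACCTG", "CCTGCCGGAA", "AGACCTGCCG"]))
# -> ATTAGACCTGCCGGAA

As the text notes, the greedy result is not guaranteed to be optimal: a locally largest overlap can preclude a globally better layout.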
Sequence assembly
Programs
For a list of de novo assemblers, see De novo sequence assemblers. For a list of mapping aligners, see List of sequence alignment software § Short-read sequence alignment. Some of the common tools used in different assembly steps are listed in the following table:
Nolisting
Nolisting
Nolisting is the name given to a technique for defending electronic mail domain names against e-mail spam. Each domain name on the internet has a series of one or more MX records specifying mail servers responsible for accepting email messages on behalf of that domain, each with a preference value. Nolisting is simply the addition of an MX record pointing to a non-existent server as the "primary" (i.e. that with the lowest preference value), which means that an initial mail contact will always fail. Many spam sources don't retry on failure, so the spammer will move on to the next victim, while legitimate email servers should retry the next higher-numbered MX, and normal email will be delivered with only a small delay.
Nolisting
Implementation
A simple example of MX records that demonstrates the technique:
MX 10 dummy.example.com.
MX 20 real-primary-mail-server.example.com.
This defeats spam programs that only connect to the highest-priority (lowest-numbered) MX and do not follow the standard error handling of retrying the next-priority MX.
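As an illustration of why this works, here is a minimal Python sketch (not a real MTA) of the standard delivery logic: sort MX records by preference and try each in turn. The connect() stub and hostnames are hypothetical.

MX_RECORDS = [
    (10, "dummy.example.com"),                      # nolisting decoy, never answers
    (20, "real-primary-mail-server.example.com"),   # actual mail server
]

def connect(host):
    # Stub: pretend only the real server accepts SMTP connections on port 25.
    return host == "real-primary-mail-server.example.com"

def deliver(mx_records):
    # RFC-compliant behaviour: try MX hosts in order of preference.
    for preference, host in sorted(mx_records):
        if connect(host):
            return "delivered via " + host
    return "delivery failed"

print(deliver(MX_RECORDS))   # falls through to the real server

Naive spamware that tries only the first (lowest-numbered) record gives up exactly where this loop continues.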
Nolisting
Drawbacks
The technique relies on spammers using simple software that doesn't retry the next priority MX, and so becomes ineffective if or when spammers begin using more sophisticated software.
Nolisting
Drawbacks
Some legitimate SMTP applications are also very simple and only send to the lowest-numbered MX record. This might be the case with simple devices such as printers or data loggers, or with older legacy software. Mail from them will also fail unless there is some mechanism, such as a whitelist of IPs, that allows them access to the mailserver via the lowest-numbered MX record.
Nolisting
Drawbacks
It is important that the highest-priority (lowest-numbered) MX should be completely unresponsive on port 25. If it is open and responds with a 4xx error (i.e. "retry later"), then email from some MTAs (such as qmail) may be lost if they do not step to the next MX record, but instead wait and continually retry the first one.
Nolisting
Similar techniques
There are alternative techniques that suggest "sandwiching" the valid MX records between non-responsive ones. Some variants also suggest configuring the highest-numbered hosts to always return 4xx errors (i.e. "retry later"). A simple example of MX records that demonstrates the technique:
MX 10 dummy1.example.com.
MX 20 real-primary-mail-server.example.com.
MX 30 dummy2.example.com.
Greylisting also relies on the fact that spammers often use custom software which will not persevere to deliver a message in the correct RFC-compliant way.
Mir-744 microRNA precursor family
Mir-744 microRNA precursor family
In molecular biology, mir-744 microRNA is a short RNA molecule. MicroRNAs regulate the expression levels of other genes by several mechanisms.
Mir-744 microRNA precursor family
miR-744 and cancer in mice
miR-744 plays a role in tumour development and growth in mouse cell lines. Its expression induces cyclin B1 expression, whilst its knockdown results in decreased levels of mouse cyclin B1, encoded by the Ccnb1 gene. Short-term overexpression of miR-744 in mouse cell lines has been seen to enhance cell proliferation, whilst prolonged expression is accompanied by chromosomal instability and in vivo tumour suppression.
Mir-744 microRNA precursor family
TGF-β1 repression
Multiple miR-744 binding sites have been identified in the proximal 3' untranslated region of transforming growth factor beta 1 (TGF-β1). Direct targeting of TGF-β1 by miR-744 has been demonstrated, and transfection with miR-744 inhibits endogenous TGF-β1 synthesis through post-transcriptional regulation.
Mir-744 microRNA precursor family
EEF1A2 repression
miR-744 directly targets the translation elongation factor and known proto-oncogene EEF1A2. miR-744 is also upregulated during resveratrol treatment of MCF7 breast cancer cells.
Degrees of freedom problem
Degrees of freedom problem
In neuroscience and motor control, the degrees of freedom problem or motor equivalence problem states that there are multiple ways for humans or animals to perform a movement in order to achieve the same goal. In other words, under normal circumstances, no simple one-to-one correspondence exists between a motor problem (or task) and a motor solution to the problem. The motor equivalence problem was first formulated by the Russian neurophysiologist Nikolai Bernstein: "It is clear that the basic difficulties for co-ordination consist precisely in the extreme abundance of degrees of freedom, with which the [nervous] centre is not at first in a position to deal." Although the question of how the nervous system selects which particular degrees of freedom (DOFs) to use in a movement may be a problem to scientists, the abundance of DOFs is almost certainly an advantage to the mammalian and the invertebrate nervous systems. The human body has redundant anatomical DOFs (at muscles and joints), redundant kinematic DOFs (movements can have different trajectories, velocities, and accelerations and yet achieve the same goal), and redundant neurophysiological DOFs (multiple motoneurons synapsing on the same muscle, and vice versa). How the nervous system "chooses" a subset of these near-infinite DOFs is an overarching difficulty in understanding motor control and motor learning.
Degrees of freedom problem
History
The study of motor control historically breaks down into two broad areas: "Western" neurophysiological studies, and "Bernsteinian" functional analysis of movement. The latter has become predominant in motor control, as Bernstein's theories have held up well and are considered founding principles of the field as it exists today.
Degrees of freedom problem
History
Pre-Bernstein
In the late 19th and early 20th centuries, many scientists believed that all motor control came from the spinal cord, as experiments with stimulation in frogs displayed patterned movement ("motor primitives"), and spinalized cats were shown to be able to walk. This tradition was closely tied to the strict nervous system localizationism advocated during that period; since stimulation of the frog spinal cord in different places produced different movements, it was thought that all motor impulses were localized in the spinal cord. However, fixed structure and localizationism were slowly abandoned as the central dogma of neuroscience. It is now known that the primary motor cortex and premotor cortex at the highest level are responsible for most voluntary movements. Animal models, though, remain relevant in motor control, and spinal cord reflexes and central pattern generators are still a topic of study.
Degrees of freedom problem
History
Bernstein
Although Lashley (1933) first formulated the motor equivalence problem, it was Bernstein who articulated the DOF problem in its current form. In Bernstein's formulation, the problem results from infinite redundancy, yet flexibility between movements; thus, the nervous system apparently must choose a particular motor solution every time it acts. In Bernstein's formulation, a single muscle never acts in isolation. Rather, large numbers of "nervous centres" cooperate in order to make a whole movement possible. Nervous impulses from different parts of the CNS may converge on the periphery in combination to produce a movement; however, there is great difficulty for scientists in understanding and coordinating the facts linking impulses to a movement. Bernstein's rational understanding of movement and prediction of motor learning via what we now call "plasticity" was revolutionary for his time. In Bernstein's view, movements must always reflect what is contained in the "central impulse", in one way or another. However, he recognized that effectors (feed-forward) were not the only important component of movement; feedback was also necessary. Thus, Bernstein was one of the first to understand movement as a closed circle of interaction between the nervous system and the sensory environment, rather than a simple arc toward a goal. He defined motor coordination as a means for overcoming indeterminacy due to redundant peripheral DOFs. With increasing DOFs, it is increasingly necessary for the nervous system to have a more complex, delicate organizational control. Because humans are adapted to survive, the "most important" movements tend to be reflexes: pain or defensive reflexes needed to be carried out in very short time scales in order for ancient humans to survive their harsh environment. Most of our movements, though, are voluntary; voluntary control had historically been under-emphasized or even disregarded altogether. Bernstein saw voluntary movements as structured around a "motor problem", where the nervous system needed two factors to act: a full and complete perception of reality, as accomplished by multisensory integration, and objectivity of perception through constant and correct recognition of signals by the nervous system. Only with both may the nervous system choose an appropriate motor solution.
Degrees of freedom problem
Difficulties
The DOF problem is still a topic of study because of the complexity of the neuromuscular system of the human body. Not only is the problem itself exceedingly difficult to tackle, but the vastness of the field of study makes synthesis of theories a challenge.
Degrees of freedom problem
Difficulties
Counting degrees of freedom
One of the largest difficulties in motor control is quantifying the exact number of DOFs in the complex neuromuscular system of the human body. In addition to having redundant muscles and joints, muscles may span multiple joints, further complicating the system. Properties of muscle change as the muscle length itself changes, making mechanical models difficult to create and understand. Individual muscles are innervated by multiple nerve fibers (motor units), and the manner in which these units are recruited is similarly complex. While each joint is commonly understood as having an agonist-antagonist pair, not all joint movement is controlled locally. Finally, movement kinematics are not identical even when performing the same motion repeatedly; natural variation in position, velocity, and acceleration of the limb occurs even during seemingly identical movements.
Degrees of freedom problem
Difficulties
Types of studies
Another difficulty in motor control is unifying the different ways to study movements. Three distinct areas in studying motor control have emerged: limb mechanics, neurophysiology, and motor behavior.
Degrees of freedom problem
Difficulties
Limb mechanics
Studies of limb mechanics focus on the peripheral motor system as a filter which converts patterns of muscle activation into purposeful movement. In this paradigm, the building block is a motor unit (a neuron and all the muscle fibers it innervates) and complex models are built to understand the multitude of biological factors influencing motion. These models become increasingly complicated when multiple joints or environmental factors such as ground reaction forces are introduced.
Degrees of freedom problem
Difficulties
Neurophysiology
In neurophysiological studies, the motor system is modeled as a distributed, often hierarchical system with the spinal cord controlling the "most automatic" of movements such as stretch reflexes, and the cortex controlling the "most voluntary" actions such as reaching for an object, with the brainstem performing a function somewhere in between the two. Such studies seek to investigate how the primary motor cortex (M1) controls planning and execution of motor tasks. Traditionally, neurophysiological studies have used animal models with electrophysiological recordings and stimulation to better understand human motor control.
Degrees of freedom problem
Difficulties
Motor behavior
Studies of motor behavior focus on the adaptive and feedback properties of the nervous system in motor control. The motor system has been shown to adapt to changes in its mechanical environment on relatively short timescales while simultaneously producing smooth movements; these studies investigate how this remarkable feedback takes place. Such studies investigate which variables the nervous system controls, which variables are less tightly controlled, and how this control is implemented. Common paradigms of study include voluntary reaching tasks and perturbations of standing balance in humans.
Degrees of freedom problem
Difficulties
Abundance or redundancy
Finally, the very nature of the DOF problem poses questions. For example, does the nervous system really have difficulty in choosing from DOFs, or is the abundance of DOFs necessary for evolutionary survival? In very extreme movements, humans may exhaust the limits of their DOFs; in these cases, the nervous system only has one choice. Therefore, DOFs are not always infinite. Bernstein suggested that our vast number of DOFs allows motor learning to take place, wherein the nervous system "explores" the set of possible motor solutions before settling on an optimal solution (learning to walk and ride a bike, for example). Finally, additional DOFs often allow patients with brain or spinal cord injury to retain movement while relying on a reduced set of biomechanical DOFs. Therefore, the "degrees of freedom problem" may be a misnomer and is better understood as the "motor equivalence problem", with redundant DOFs offering an evolutionary solution to this problem.
Degrees of freedom problem
Hypotheses and proposed solutions
There have been many attempts to offer solutions or conceptual models that explain the DOF problem. One of the first hypotheses was Fitts' Law, which states that a trade-off must occur between movement speed and movement accuracy in a reaching task. Since then, many other theories have been offered.
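For reference, Fitts' Law is commonly written in the following classic form, where MT is the movement time, D is the distance to the target, W is the target width, and a and b are empirically fitted constants:

$$ MT = a + b \log_2\!\left(\frac{2D}{W}\right) $$

The logarithmic term is the index of difficulty: doubling the distance or halving the width of the target adds one "bit" of difficulty and a fixed increment b to the movement time.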
Degrees of freedom problem
Hypotheses and proposed solutions
Optimal control hypothesis
A general paradigm for understanding motor control, optimal control has been defined as "optimizing motor control for a given aspect of task performance," or as a way to minimize a certain "cost" associated with a movement. This "cost function" may be different depending on the task goal; for example, minimum energy expenditure might be a task variable associated with locomotion, while precise trajectory and positional control could be a task variable associated with reaching for an object. Furthermore, the cost function may be quite complex (for instance, it may be a functional instead of a function) and may also be related to representations in the internal space. For example, the speech produced by biomechanical tongue models (BTM), controlled by an internal model which minimizes the length of the path traveled in the internal space under constraints related to the executed task (e.g., quality of speech, stiffness of tongue), was found to be quite realistic. In essence, the goal of optimal control is to "reduce degrees of freedom in a principled way." Two key components of all optimal control systems are: a "state estimator", which tells the nervous system what it is doing, including afferent sensory feedback and an efferent copy of the motor command; and adjustable feedback gains based on task goals. A component of these adjustable gains might be a "minimum intervention principle", where the nervous system only performs selective error correction rather than heavily modulating the entirety of a movement.
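A toy numerical illustration of this idea, in Python: a point mass must be driven to a target while a cost combining "effort" and endpoint error is minimized. The mass, horizon, target, and cost weights are all arbitrary assumptions for the sketch, not values from the motor control literature.

import numpy as np
from scipy.optimize import minimize

N, dt, mass, target = 50, 0.02, 1.0, 0.15   # steps, s, kg, metres

def final_position(u):
    # Euler integration of F = m a for a force profile u.
    x = v = 0.0
    for f in u:
        v += (f / mass) * dt
        x += v * dt
    return x

def cost(u):
    effort = np.sum(u ** 2) * dt                  # "energy" term
    error = (final_position(u) - target) ** 2     # task-goal term
    return 1e-3 * effort + 1e3 * error

solution = minimize(cost, np.zeros(N), method="L-BFGS-B")
print(round(final_position(solution.x), 4))       # close to 0.15

Changing the relative weights changes the chosen force profile, which is the point: the "cost function" encodes what the task cares about.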
Degrees of freedom problem
Hypotheses and proposed solutions
Open and closed-loop models
Both open-loop and closed-loop models of optimal control have been studied; the former generally ignores the role of sensory feedback, while the latter attempts to incorporate sensory feedback, which includes delays and uncertainty associated with the sensory systems involved in movement. Open-loop models are simpler but have severe limitations: they model a movement as prerecorded in the nervous system, ignoring sensory feedback, and also fail to model variability between movements with the same task-goal. In both models, the primary difficulty is identifying the cost associated with a movement. A mix of cost variables such as minimum energy expenditure and a "smoothness" function is the most likely choice for a common performance criterion.
Degrees of freedom problem
Hypotheses and proposed solutions
Learning and optimal control
Bernstein suggested that as humans learn a movement, we first reduce our DOFs by stiffening the musculature in order to have tight control, then gradually "loosen up" and explore the available DOFs as the task becomes more comfortable, and from there find an optimal solution. In terms of optimal control, it has been postulated that the nervous system can learn to find task-specific variables through an optimal control search strategy. It has been shown that adaptation in a visuomotor reaching task becomes optimally tuned so that the cost of movement trajectories decreases over trials. These results suggest that the nervous system is capable of both nonadaptive and adaptive processes of optimal control. Furthermore, these and other results suggest that rather than being a control variable, consistent movement trajectories and velocity profiles are the natural outcome of an adaptive optimal control process.
Degrees of freedom problem
Hypotheses and proposed solutions
Limits of optimal control
Optimal control is a way of understanding motor control and the motor equivalence problem, but as with most mathematical theories about the nervous system, it has limitations. The theory must have certain information provided before it can make a behavioral prediction: what the costs and rewards of a movement are, what the constraints on the task are, and how state estimation takes place. In essence, the difficulty with optimal control lies in understanding how the nervous system precisely executes a control strategy. Multiple operational time-scales complicate the process, including sensory delays, muscle fatigue, changing of the external environment, and cost-learning.
Degrees of freedom problem
Hypotheses and proposed solutions
Muscle synergy hypothesis
In order to reduce the number of musculoskeletal DOFs upon which the nervous system must operate, it has been proposed that the nervous system controls muscle synergies, or groups of co-activated muscles, rather than individual muscles. Specifically, a muscle synergy has been defined as "a vector specifying a pattern of relative muscle activation; absolute activation of each synergy is thought to be modulated by a single neural command signal." Multiple muscles are contained within each synergy at fixed ratios of co-activation, and multiple synergies can contain the same muscle. It has been proposed that muscle synergies emerge from an interaction between constraints and properties of the nervous and musculoskeletal systems. This organization may require less computational effort for the nervous system than individual muscle control because fewer synergies are needed to explain a behavior than individual muscles. Furthermore, it has been proposed that synergies themselves may change as behaviors are learned and/or optimized. However, synergies may also be innate to some degree, as suggested by postural responses of humans at very young ages. A key point of the muscle synergy hypothesis is that synergies are low-dimensional and thus just a few synergies may account for a complex movement. Evidence for this structure comes from electromyographical (EMG) data in frogs, cats, and humans, where various mathematical methods such as principal components analysis and non-negative matrix factorization are used to "extract" synergies from muscle activation patterns. Similarities have been observed in synergy structure even across different tasks such as kicking, jumping, swimming and walking in frogs. Further evidence comes from stroke patients, who have been observed to use fewer synergies in certain tasks; some stroke patients used a comparable number of synergies as healthy subjects, but with reduced motor performance. These data suggest that a synergy formulation is robust and may lie at the lowest level of a hierarchical neural controller.
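A minimal Python sketch of synergy extraction by non-negative matrix factorization, as mentioned above, using scikit-learn. The "EMG" here is synthetic, built from three known synergies, so the numbers of samples, muscles, and synergies are assumptions of the sketch.

import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
n_samples, n_muscles, n_synergies = 200, 8, 3

H_true = rng.random((n_synergies, n_muscles))   # muscle weightings per synergy
W_true = rng.random((n_samples, n_synergies))   # synergy activations over time
emg = W_true @ H_true + 0.01 * rng.random((n_samples, n_muscles))

model = NMF(n_components=n_synergies, init="nndsvd", max_iter=500, random_state=0)
W = model.fit_transform(emg)    # extracted time-varying activations
H = model.components_           # extracted synergies

vaf = 1 - np.sum((emg - W @ H) ** 2) / np.sum(emg ** 2)
print(round(vaf, 3))            # a few synergies explain nearly all variance

In real studies the EMG is recorded, rectified, and filtered, and the number of synergies is chosen by how much variance successive components account for.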
Degrees of freedom problem
Hypotheses and proposed solutions
Equilibrium point hypothesis and threshold control
In the equilibrium point hypothesis, all movements are generated by the nervous system through a gradual transition of equilibrium points along a desired trajectory. "Equilibrium point" in this sense is taken to mean a state where a field has zero force, meaning opposing muscles are in a state of balance with each other, like two rubber bands pulling the joint to a stable position. Equilibrium point control is also called "threshold control" because signals sent from the CNS to the periphery are thought to modulate the threshold length of each muscle. In this theory, motor neurons send commands to muscles, which changes the force–length relation within a muscle, resulting in a shift of the system's equilibrium point. The nervous system would not need to directly estimate limb dynamics; rather, muscles and spinal reflexes would provide all the necessary information about the system's state. The equilibrium point hypothesis is also reported to be well suited for the design of biomechanical robots controlled by appropriate internal models.
Degrees of freedom problem
Hypotheses and proposed solutions
Force control and internal models
The force control hypothesis states that the nervous system uses calculation and direct specification of forces to determine movement trajectories and reduce DOFs. In this theory, the nervous system must form internal models: a representation of the body's dynamics in terms of the surrounding environment. A nervous system which controls force must generate torques based on predicted kinematics, a process called inverse dynamics. Both feed-forward (predictive) and feedback models of motion in the nervous system may play a role in this process.
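A single-joint Python sketch of the inverse-dynamics computation: given a desired kinematic trajectory, compute the torque that would produce it under standard rigid-pendulum assumptions. The inertia, damping, mass, and centre-of-mass values are illustrative, not physiological measurements.

import numpy as np

I, b, m, g, lc = 0.05, 0.1, 1.5, 9.81, 0.15   # inertia, damping, mass, gravity, COM distance

t = np.linspace(0, 1, 200)
theta = 0.5 * (1 - np.cos(np.pi * t))   # smooth 0 -> 1 rad flexion
omega = np.gradient(theta, t)           # angular velocity
alpha = np.gradient(omega, t)           # angular acceleration

# Inverse dynamics: tau = I*alpha + b*omega + m*g*lc*sin(theta)
tau = I * alpha + b * omega + m * g * lc * np.sin(theta)
print(round(float(np.max(np.abs(tau))), 2), "N·m peak torque")

A forward (feed-forward) model runs the same equation the other way: given torques, predict the resulting motion.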
Degrees of freedom problem
Hypotheses and proposed solutions
Uncontrolled manifold (UCM) hypothesis
It has been noted that the nervous system controls particular variables relevant to the performance of a task, while leaving other variables free to vary; this is called the uncontrolled manifold (UCM) hypothesis. The uncontrolled manifold is defined as the set of variables not affecting task performance; variables perpendicular to this set in Jacobian space are considered controlled variables (CM). For example, during a sit-to-stand task, head and center-of-mass position in the horizontal plane are more tightly controlled than other variables such as hand motion. Another study indicates that the quality of tongue movements produced by bio-robots, which are controlled by a specially designed internal model, is practically uncorrelated with the stiffness of the tongue; in other words, during speech production the relevant parameter is the quality of speech, while the stiffness is largely irrelevant. At the same time, strictly prescribing the stiffness level of the tongue's body affects speech production and creates some variability, which is, however, not significant for the quality of speech (at least within a reasonable range of stiffness levels). UCM theory makes sense in terms of Bernstein's original theory because it constrains the nervous system to controlling only variables relevant to task performance, rather than controlling individual muscles or joints.
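A minimal numerical sketch of the UCM idea in Python: a planar three-joint arm with a two-dimensional fingertip task has a one-dimensional null space of the task Jacobian (the UCM), and joint-angle variability concentrated in that null space barely moves the fingertip. Link lengths, the reference posture, and the noise magnitudes are arbitrary assumptions.

import numpy as np
from scipy.linalg import null_space

L = np.array([0.3, 0.25, 0.2])   # link lengths (m), assumed

def fingertip(q):
    a = np.cumsum(q)             # absolute segment angles
    return np.array([np.sum(L * np.cos(a)), np.sum(L * np.sin(a))])

def jacobian(q, eps=1e-6):
    J = np.zeros((2, 3))
    for i in range(3):
        dq = np.zeros(3); dq[i] = eps
        J[:, i] = (fingertip(q + dq) - fingertip(q - dq)) / (2 * eps)
    return J

q0 = np.array([0.4, 0.8, 0.5])   # reference posture
ucm = null_space(jacobian(q0))   # 3x1 basis of the uncontrolled manifold

rng = np.random.default_rng(1)
trials = q0 + rng.normal(0, 0.05, (500, 1)) * ucm.T \
            + rng.normal(0, 0.01, (500, 3))          # mostly-UCM joint noise

dev = trials - q0
along = dev @ ucm                                    # components within the UCM
v_ucm = float(np.mean(along ** 2))                   # per-DOF variance in the UCM
v_ort = float(np.mean(np.sum((dev - along @ ucm.T) ** 2, axis=1))) / 2
print(round(v_ucm, 5), ">", round(v_ort, 5))         # variance "hides" in the UCM

In UCM analyses of real data, a within-manifold to orthogonal variance ratio greater than one is taken as evidence that the task variable is selectively stabilized.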
Degrees of freedom problem
Unifying theories
Not all theories about the selection of movement are mutually exclusive. Necessarily, they all involve reduction or elimination of redundant DOFs. Optimal feedback control is related to UCM theory in the sense that the optimal control law may not act along certain dimensions (the UCM) of lesser importance to the nervous system. Furthermore, this lack of control in certain directions implies that controlled variables will be more tightly correlated; this correlation is seen in the low-dimensionality of muscle synergies. Furthermore, most of these theories incorporate some sort of feedback and feed-forward models that the nervous system must utilize. Most of these theories also incorporate some sort of hierarchical neural control scheme, usually with cortical areas at the top and peripheral outputs at the lowest level. However, none of the theories is perfect; the DOF problem will continue to be relevant as long as the nervous system is imperfectly understood.
President (video game)
President (video game)
President is a 1987 game released by Kevin Toms for the Amstrad CPC, Commodore 64 and ZX Spectrum.
President (video game)
Gameplay
Following on from Toms' Football Manager and Software Star games, President is a game where the player takes control of a small country and decides whether to be a dictator or a hero. The player has to balance the wants and needs of their virtual citizens, while also balancing the books and trying to build up an army and search for oil.
President (video game)
Reception
Your Sinclair gave the game a positive review, awarding it 7/10. Similarly, Sinclair User gave the game 4/5. However, Crash were less favourable, only awarding it 29%.
Travel guitar
Travel guitar
Travel guitars are small guitars with a full or nearly full scale-length. In contrast, a reduced scale-length is typical for guitars intended for children, which have scale-lengths of one-quarter (ukulele guitar, or guitalele), one-half, and three-quarter.
Travel guitar
Examples
Examples of travel guitars include the following:
C. F. Martin
Model: Backpacker. A very small guitar with a body shaped like an elongated triangle, similar in shape to certain types of psaltery, and designed to be very portable and inexpensive while still being constructed of quality woods. The guitar is famous for having originally been designed by Robert McAnally before Martin took over the design, and was the first guitar to be taken into space. The guitar has also been taken up Mount Everest.
Model: Little Martin
Taylor
Model: Baby Taylor
Pot (poker)
Pot (poker)
The pot in poker refers to the sum of money that players wager during a single hand or game, according to the betting rules of the variant being played. It is likely that the word pot is related to or derived from the word jackpot.
Pot (poker)
Pot (poker)
At the conclusion of a hand, either by all but one player folding, or by showdown, the pot is won or shared by the player or players holding the winning cards. Sometimes a pot can be split between many players. This is particularly true in high-low games where not only the highest hand can win, but under appropriate conditions, the lowest hand will win a share of the pot.
Pot (poker)
Pot (poker)
See "all in" for more information about side pots.
Equivalent narcotic depth
Equivalent narcotic depth
Equivalent narcotic depth (END) (historically also equivalent nitrogen depth) is used in technical diving as a way of estimating the narcotic effect of a breathing gas mixture, such as nitrox, heliox or trimix. The method is used, for a given breathing gas mix and dive depth, to calculate the equivalent depth which would produce about the same narcotic effect when breathing air. The equivalent narcotic depth of a breathing gas mix at a particular depth is calculated by finding the depth at which breathing air would have the same total partial pressure of narcotic components as the breathing gas in question. Since air is composed of approximately 21% oxygen and 79% nitrogen, it makes a difference whether oxygen is considered narcotic, and how narcotic it is considered relative to nitrogen. If oxygen is considered to be equally narcotic to nitrogen, the narcotic gases make up 100% of the mix, or equivalently the fraction of the total gases which are narcotic is 1.0. Oxygen is assumed equivalent in narcotic effect to nitrogen for this purpose by some authorities and certification agencies. In contrast, other authorities and agencies consider oxygen to be non-narcotic, and group it with helium and other potential non-narcotic components, or less narcotic, and group it with gases like hydrogen, which has a narcotic effect estimated at about 55% of nitrogen based on lipid solubility. Research continues into the nature and mechanism of inert gas narcosis, and for objective methods of measurement for comparison of the severity at different depths and different gas compositions.
Equivalent narcotic depth
Oxygen narcosis
Although oxygen has greater lipid solubility than nitrogen, and therefore should be more narcotic according to the Meyer-Overton correlation, it is likely that some of the oxygen is metabolised, thus reducing its effect to a level similar to that of nitrogen or less. There are also known exceptions to the Meyer-Overton correlation. Some gases that should be very narcotic based on their high solubility in oil are much less narcotic than predicted. Anesthetic research has shown that for a gas to be narcotic, its molecule must bind to receptors on the neurons, and some molecules have a shape that is not conducive to such binding. It is unknown if and how oxygen binds to neuronal receptors, so the measurable fact that oxygen is more oil-soluble than nitrogen does not necessarily mean it is more narcotic than nitrogen. Since there is some evidence that oxygen plays a part in the narcotic effects of a gas mixture, some organisations prefer assuming that it is narcotic to the previous method of considering only the nitrogen component as narcotic, since this assumption is more conservative; the NOAA diving manual recommends treating oxygen and nitrogen as equally narcotic as a way to simplify calculations, given that no measured value is available. The situation is further complicated by the effects of inert gas narcosis being significantly variable between divers using the same gas mixture, and between occasions for the same diver on the same gas and dive profile.
Equivalent narcotic depth
Oxygen narcosis
Objective testing has failed to demonstrate oxygen narcosis, and research continues. There has been difficulty in identifying a reliable method of objectively measuring gas narcosis, but quantitative electroencephalography (EEG) has produced interesting results. Quantification of the more subtle effects of inert gas narcosis is difficult. Psychometric tests can be variable and affected by learning effects, and participant motivation. In principle, objective neurophysiological measurements like quantitative electroencephalogram (qEEG) analysis and the critical flicker fusion frequency (CFFF) could be used to get objective measurements. Some studies have shown a decrease in CFFF during air-breathing dives at 4 bar (30 msw), but have not detected a change with partial pressure of pure oxygen within the breathable range. The results with CFFF for nitrogen do not scale well with partial pressure at greater depths. Hyperbaric inert gas narcosis is associated with depressed brain activity when measured with an EEG. A functional connectivity metric based on the so-called mutual information analysis has been developed, and summarized using the global efficiency network measure. This method has successfully differentiated between breathing air at the surface and air at 50 m, and even showed an effect at 18 m on air, but did not show a difference associated with pressure for heliox exposures. The lack of change with heliox suggests that the effect of hyperbaric nitrogen is measured, and not a direct pressure effect. The EEG functional connectivity metric did not change while breathing hyperbaric oxygen within the safe range for testing, which indicates that oxygen does not produce the same changes in brain electrical activity associated with high partial pressures of nitrogen, which suggests that oxygen is not narcotic in the same way as nitrogen.
Equivalent narcotic depth
Carbon dioxide narcosis
Although carbon dioxide (CO2) is known to be more narcotic than nitrogen – a rise in end-tidal alveolar partial pressure of CO2 of 10 millimetres of mercury (13 mbar) caused an impairment of both mental and psychomotor functions of approximately 10% – the effects of carbon dioxide retention are not considered in these calculations, as the concentration of CO2 in the supplied breathing gas is normally low, and the alveolar concentration is mostly affected by diver exertion and ventilation issues, and indirectly by work of breathing due to equipment and gas density effects. The driving mechanism of CO2 narcosis in divers is acute hypercapnia. The potential causes can be split into four groups: insufficient ventilation, excessive dead space, increased metabolic carbon dioxide production, and high carbon dioxide content of the breathing gas, usually only a problem with rebreathers.
Equivalent narcotic depth
Other components of the breathing gas mixture
It is generally accepted as of 2023, that helium has no known narcotic effect at any depth at which gas can be breathed, and can be disregarded as a contributor to inert gas narcosis. Other gases which may be considered include hydrogen and neon.
Equivalent narcotic depth
Standards
The standards recommended by the recreational certification agencies are basically arbitrary, as the actual effects of breathing gas narcosis are poorly understood, and the effects quite variable between individual divers. Some standards are more conservative than others, and in almost all cases it is the responsibility of the individual diver to make the choice and accept the consequence of their decision, except during training programs where standards can be enforced if the agency chooses to do so. One agency, GUE, prescribes the gas mixtures their members are allowed to use, but even that requirement and membership of the organisation is ultimately the choice of the diver. Professional divers may be legally obliged to comply with the codes of practice under which they work, and contractually obliged to follow the requirements of the operations manual of their employer, in terms of occupational health and safety legislation.
Equivalent narcotic depth
Standards
Some training agencies, such as CMAS, GUE, and PADI, include oxygen as equivalent to nitrogen in their equivalent narcotic depth (END) calculations. PSAI considers oxygen narcotic but less so than nitrogen. Others, like BSAC, IANTD, NAUI and TDI, do not consider oxygen narcotic.
Equivalent narcotic depth
Calculations
In diving calculations it is assumed, unless otherwise stipulated, that the atmospheric pressure is 1 bar or 1 atm and that the diving medium is water. The ambient pressure at depth is the sum of the hydrostatic pressure due to depth and the atmospheric pressure at the surface. Some early (1978) experimental results suggest that, at raised partial pressures, nitrogen, oxygen and carbon dioxide have narcotic properties, and that the mechanism of CO2 narcosis differs fundamentally from that of N2 and O2 narcosis; more recent work suggests a significant difference between the N2 and O2 mechanisms. Other components of breathing gases for diving may include hydrogen, neon, and argon, all of which are known or thought to be narcotic to some extent. The formula can be extended to include these gases if desired. The argon normally found in air at about 1% by volume is assumed to be present in the nitrogen component in the same ratio to nitrogen as in air, which simplifies calculation.
Equivalent narcotic depth
Calculations
Since in the absence of conclusive evidence, oxygen may or may not be considered narcotic, there are two ways to calculate END depending on which opinion is followed.
Equivalent narcotic depth
Calculations
Oxygen considered narcotic
Since for these calculations oxygen is usually assumed to be equally narcotic to nitrogen, the ratio considered is that of the sum of nitrogen and oxygen in the breathing gas to that in air, where air is approximated as consisting entirely of narcotic gas. In this system all nitrox mixtures are assumed to be narcotically indistinguishable from air. The other common calculation assumes that oxygen is not narcotic, in which case oxygen is multiplied by a relative narcotic value of 0 on both sides of the equation.
Equivalent narcotic depth
Calculations
Metres
The partial pressure in bar of a component gas in a mixture at a particular depth in metres is given by:

partial pressure = fraction of gas × (depth/10 + 1)

So the equivalent narcotic depth can be calculated by setting the partial pressure of narcotic gases in air at the END equal to the partial pressure of narcotic gases in the trimix at the given depth:

(fraction of O2 × relative narcotic strength + fraction of N2 × 1) in air × (END/10 + 1) = (fraction of O2 × relative narcotic strength + fraction of N2 × 1) in trimix × (depth/10 + 1)

which gives, for oxygen deemed equal in narcotic strength to nitrogen:

1.0 × (END/10 + 1) = (fraction of O2 + fraction of N2) in trimix × (depth/10 + 1)

resulting in:

END = (depth + 10) × (fraction of O2 + fraction of N2) in trimix − 10

Since (fraction of O2 + fraction of N2) in a trimix = (1 − fraction of helium), the following formula is equivalent:

END = (depth + 10) × (1 − fraction of helium) − 10

For example, for a gas mix containing 40% helium being used at 60 metres, the END is:

END = (60 + 10) × (1 − 0.4) − 10
END = 70 × 0.6 − 10
END = 42 − 10
END = 32 metres

So at 60 metres on this mix, the diver would feel approximately the same narcotic effect as a dive on air to 32 metres.
Equivalent narcotic depth
Calculations
Feet
The partial pressure of a gas in a mixture at a particular depth in feet is given by:

partial pressure = fraction of gas × (depth/33 + 1)

So the equivalent narcotic depth can be calculated by setting the partial pressure of narcotic gases in air at the END equal to the partial pressure of narcotic gases in the trimix at the given depth:

(fraction of O2 + fraction of N2) in air × (END/33 + 1) = (fraction of O2 + fraction of N2) in trimix × (depth/33 + 1)

which gives:

1.0 × (END/33 + 1) = (fraction of O2 + fraction of N2) in trimix × (depth/33 + 1)

resulting in:

END = (depth + 33) × (fraction of O2 + fraction of N2) in trimix − 33

Since (fraction of O2 + fraction of N2) in a trimix = (1 − fraction of helium), the following formula is equivalent:

END = (depth + 33) × (1 − fraction of helium) − 33

As an example, for a gas mix containing 40% helium being used at 200 feet, the END is:

END = (200 + 33) × (1 − 0.4) − 33
END = 233 × 0.6 − 33
END = 140 − 33
END = 107 feet

So at 200 feet on this mix, the diver would feel the same narcotic effect as a dive on air to 107 feet.
Equivalent narcotic depth
Calculations
Oxygen not considered equally narcotic to nitrogen
The ratio of nitrogen between the gas mixture and air is considered. Oxygen may be factored in at a narcotic ratio chosen by the user, or assumed to be negligible. In this system nitrox mixtures are not considered equivalent to air.
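A small Python sketch of both conventions in metres, following the formulas above and assuming a 1 bar surface pressure and a 79% nitrogen fraction for air; the example mix (20% O2, 40% He) is an assumption consistent with the worked example above.

def end_metres(depth, f_o2, f_he, oxygen_narcotic=True):
    # Equivalent narcotic depth in metres for a trimix at 'depth' metres.
    f_n2 = 1.0 - f_o2 - f_he
    if oxygen_narcotic:
        narcotic, air_reference = f_n2 + f_o2, 1.0    # all non-helium gas counts
    else:
        narcotic, air_reference = f_n2, 0.79          # only nitrogen counts
    return (depth + 10) * narcotic / air_reference - 10

print(end_metres(60, 0.20, 0.40, oxygen_narcotic=True))    # 32.0, as worked above
print(end_metres(60, 0.20, 0.40, oxygen_narcotic=False))   # shallower END, N2-only

Note that under the oxygen-narcotic convention only the helium fraction matters, which is why the worked examples above never needed the oxygen fraction.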
Decoy (chess)
Decoy (chess)
In chess, a decoy is a tactic that lures an enemy man off its square and away from its defensive role. Typically this means away from a square on which it defends another piece or threat. The tactic is also called a deflection. Usually the piece is decoyed to a particular square via the sacrifice of a piece on that square. A piece so sacrificed is called a decoy. When the piece decoyed or deflected is the king, the tactic is known as attraction. In general in the middlegame, the sacrifice of a decoy piece is called a diversionary sacrifice.
Decoy (chess)
Examples
The game Honfi–Barczay, Kecskemet 1977, with Black to play, illustrates two separate decoys. First, the white queen is set up on c4 for a knight fork: 1...Rxc4! 2.Qxc4 Next, the fork is executed by removing the sole defender of the a3-square: 2...Qxb2+! 3.Rxb2 Na3+ 4.Kc1 Finally, a zwischenzug decoys (attracts) the king to b2: 4...Bxb2+ After either 5.Kxb2 Nxc4+ 6.Kc3 Rxe4, or 5.Kd1 Nxc4, Black is two pawns ahead and should win comfortably.
Decoy (chess)
Examples
In this position, after the moves 1.Rf8+ Kxf8 (forced) 2.Nd7+ Ke7 3.Nxb6, White wins the queen and the game. A similar, but more complex position is described by Huczek.
Decoy (chess)
Examples
In the diagrammed position from Vidmar–Euwe, Carlsbad 1929, Black had just played 33...Qf4, threatening mate on h2. White now uncorks the elegant combination 34.Re8+ Bf8 (forced) 35.Rxf8+ (attraction) Kxf8 (forced) 36.Nf5+ (discovered check) Kg8 (36...Ke8 37.Qe7#) 37.Qf8+ (attraction) 1–0 Black resigns. (If 37...Kxf8 then 38.Rd8#. If 37...Kh7 then 38.Qg7#.) The combination after 33...Qf4 features two separate examples of the attraction motif.
Decoy (chess)
Examples
This example shows a position from the game Dementiev–Dzindzichashvili, URS 1972. White had just played 61.g6 (with the threat 62.Qh7+ Kf8 63.Rxf5+). However, Black continued with the crushing 61...Rh1+ (attraction) 62. Kxh1 (best) Nxg3+ (the white rook is pinned) 63.Kh2 Nxh5 and White has dropped his queen to the knight fork. In the game, White resigned after 61...Rh1+.
Decoy (chess)
Examples
Perhaps the most celebrated game featuring a decoy theme is Petrosian–Pachman, Bled 1961, which also involved a queen sacrifice. Pachman resigned after 19.Qxf6+ (attraction) Kxf6 20.Be5+ Kg5 21.Bg7! setting a mating net. In the game Menchik–Graf, Semmering 1937, Graf resigned after 21.Rd7, deflecting Black's queen. (If 21...Qxd7, then 22.Qxh5 with mate to follow; 21.Qxh5 immediately wins only a pawn after 21...Qxh2+.) Often a wing pawn serves as a decoy in endgames. In the game Ivkov–Taimanov, Belgrade 1956, Black resigned in the position shown because White has an easy win by using his passed a2-pawn as a decoy to lure Black's king away from the center and to the queenside, allowing easy promotion of the h6-pawn.
Western Disturbance
Western Disturbance
A western disturbance is an extratropical storm originating in the Mediterranean region that brings sudden winter rain to the northern parts of the Indian subcontinent, extending as far east as northern Bangladesh and southeastern Nepal. It is a non-monsoonal precipitation pattern driven by the westerlies. The moisture in these storms usually originates over the Mediterranean Sea, the Caspian Sea and the Black Sea. Extratropical storms are a global phenomenon, with moisture usually carried in the upper atmosphere, unlike their tropical counterparts where the moisture is carried in the lower atmosphere. In the case of the Indian subcontinent, moisture is sometimes shed as rain when the storm system encounters the Himalayas. Western disturbances are more frequent and stronger in the winter season. They are important for the development of the rabi crop, which includes the locally important staple wheat.
Western Disturbance
Formation
Western disturbances originate in the Mediterranean region. A high-pressure area over Ukraine and its neighbourhood consolidates, causing the intrusion of cold air from polar regions towards an area of relatively warmer air with high moisture. This generates favorable conditions for cyclogenesis in the upper atmosphere, which promotes the formation of an eastward-moving extratropical depression. Traveling at speeds up to 12 m/s (43 km/h; 27 mph), the disturbance moves towards the Indian subcontinent until the Himalayas inhibit its development, upon which the depression rapidly weakens. Western disturbances are embedded in the mid-latitude subtropical westerly jet stream.
Western Disturbance
Significance and impact
Western disturbances, specifically the ones in winter, bring moderate to heavy rain in low-lying areas and heavy snow to mountainous areas of the Indian Subcontinent. They are the cause of most winter and post-monsoon season rainfall across northwest India. Precipitation during the winter season has great importance in agriculture, particularly for the rabi crops. Among them, wheat is one of the most important crops, helping to meet India's food security. An average of four to five western disturbances form during the winter season. The rainfall distribution and amount vary with every western disturbance.
Western Disturbance
Significance and impact
Western disturbances are usually associated with cloudy sky, higher night temperatures and unusual rain. Excessive precipitation due to western disturbances can cause crop damage, landslides, floods and avalanches. Over the Indo-Gangetic plains, they occasionally bring cold wave conditions and dense fog. These conditions remain stable until disturbed by another western disturbance. When western disturbances move across northwest India before the onset of monsoon, a temporary advancement of monsoon current appears over the region.
Western Disturbance
Significance and impact
The strongest western disturbances usually occur in the northern parts of Pakistan, where flooding is reported a number of times during the winter season.
Western Disturbance
Effects on monsoon
Western disturbances start declining in numbers after winter. During the summer months of April and May, they move across north India. The southwest monsoon current generally progresses from east to west in the northern Himalayan region, unlike western disturbances, which follow a west-to-east trend in north India, with a consequent rise in pressure carrying a cold pool of air. This helps in the activation of the monsoon in certain parts of northwest India. It also causes pre-monsoon rainfall, especially in northern India. The interaction of the monsoon trough with western disturbances may occasionally cause dense clouding and heavy precipitation. The 2013 North India floods, which killed more than 5,000 people in a span of three days, are said to be a result of one such interaction.
Relcovaptan
Relcovaptan
Relcovaptan (SR-49059) is a non-peptide vasopressin receptor antagonist, selective for the V1a subtype. It has shown positive initial results for the treatment of Raynaud's disease and dysmenorrhoea, and as a tocolytic, although it is not yet approved for clinical use.
NGC 89
NGC 89
NGC 89 is a barred spiral or lenticular galaxy, part of Robert's Quartet, a group of four interacting galaxies. This member has a Seyfert 2 nucleus with extra-planar features emitting H-alpha radiation. There are filamentary features on each side of the disk, including a jet-like structure extending about 4 kpc in the NE direction. It may have lost its neutral hydrogen (H I) gas due to interactions with the other members of the group, most likely NGC 92.
Banach measure
Banach measure
In the mathematical discipline of measure theory, a Banach measure is a certain type of content used to formalize geometric area in problems vulnerable to the axiom of choice. Traditionally, intuitive notions of area are formalized as a classical, countably additive measure. This has the unfortunate effect of leaving some sets with no well-defined area; a consequence is that some geometric transformations do not leave area invariant, the substance of the Banach–Tarski paradox. A Banach measure is a type of generalized measure that sidesteps this problem. A Banach measure on a set Ω is a finite, finitely additive measure μ ≠ 0, defined on every subset of Ω (that is, on all of ℘(Ω)), and whose value is 0 on finite subsets.
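Stated symbolically, the definition above reads:

$$ \mu : \wp(\Omega) \to [0, \infty), \qquad \mu \neq 0, \qquad \mu(A \cup B) = \mu(A) + \mu(B) \ \text{for disjoint } A, B \subseteq \Omega, \qquad \mu(F) = 0 \ \text{for every finite } F \subseteq \Omega. $$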
Banach measure
Banach measure
A Banach measure on Ω which takes values in {0, 1} is called an Ulam measure on Ω. As Vitali's paradox shows, Banach measures cannot be strengthened to countably additive ones.
Banach measure
Banach measure
Stefan Banach showed that it is possible to define a Banach measure for the Euclidean plane, consistent with the usual Lebesgue measure. This means that every Lebesgue-measurable subset of R2 is also Banach-measurable, and the two measures agree on such sets. The existence of this measure proves the impossibility of a Banach–Tarski paradox in two dimensions: it is not possible to decompose a two-dimensional set of finite Lebesgue measure into finitely many sets that can be reassembled into a set with a different measure, because this would violate the properties of the Banach measure that extends the Lebesgue measure.
Acefylline
Acefylline
Acefylline (INN), also known as 7-theophyllineacetic acid, is a stimulant drug of the xanthine chemical class. It acts as an adenosine receptor antagonist. It is combined with diphenhydramine in the pharmaceutical preparation etanautine to help offset diphenhydramine-induced drowsiness. A silanol–mannuronic acid conjugate of acefylline, acefylline methylsilanol mannuronate (INCI; trade name Xantalgosil C), is marketed as a lipolytic phosphodiesterase inhibitor. It is used as an ingredient in cosmeceuticals for the treatment of cellulite and as a skin conditioner.
Differential algebraic group
Differential algebraic group
In mathematics, a differential algebraic group is a differential algebraic variety with a compatible group structure. Differential algebraic groups were introduced by Cassidy (1972).
Biotextile
Biotextile
Biotextiles are structures composed of textile fibers designed for use in specific biological environments where their performance depends on biocompatibility and biostability with cells and biological fluids. Biotextiles include implantable devices such as surgical sutures, hernia repair fabrics, arterial grafts, artificial skin and parts of artificial hearts.
Biotextile
Biotextile
They were first created 30 years ago by Dr. Martin W. King, a professor in North Carolina State University’s College of Textiles. Medical textiles are a broader group which also includes bandages, wound dressings, hospital linen, preventive clothing, etc. Antiseptic biotextiles are textiles used to fight cutaneous bacterial proliferation. Zeolite and triclosan are currently the most widely used molecules. This property inhibits the development of odours and bacterial proliferation in the diabetic foot.
Biotextile
New developments
In the new paradigm of tissue engineering, professionals are trying to develop new textiles so that the body can form new tissue around these devices, rather than relying solely on synthetic foreign implanted material. Graduate student Jessica Gluck has demonstrated that viable and functioning liver cells can be grown on textile scaffolds.
DIY Kindle Scanner
DIY Kindle Scanner
The DIY Kindle Scanner, or Do It Yourself Kindle Scanner, is a robotic device made from Lego Mindstorms which was designed and built by Peter Purgathofer from 2012 to 2013. The robot interfaces with Purgathofer's personal computer and a Kindle to make a copy of the Kindle e-book. This robot in effect bypasses the digital rights management system set in place to protect Kindle e-books.
DIY Kindle Scanner
Background
Peter Purgathofer is an associate professor at the Vienna University of Technology in Austria.
DIY Kindle Scanner
Background
When he released a video on Vimeo documenting the operation of the device, Purgathofer wrote that the project was meant to be an artistic reflection connecting the ideas of “book scanning, copyright, and digital rights management.” In a reply to an email, Purgathofer stated that the project was not meant to be a negative reaction against Kindle e-books, but rather a way to use both Lego Mindstorms and the Kindle in a way that neither was usually intended to be used.
DIY Kindle Scanner
Operation
The robot is first set up so that it can operate the computer as well as hold the Kindle. The image capture software must already be running on the computer and the Kindle must be open to the first page of the book to be scanned into the computer. The robot then runs through a loop where it hits the spacebar to activate the camera on the computer and then uses finger-like robotic appendages to turn to the next page on the Kindle. This loop is then repeated until all pages have been scanned into the computer. Optical character recognition (OCR) software is then used to convert the scanned images into a duplicate of the original Kindle e-book in a plain text file.
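A purely illustrative Python sketch of this loop follows; every helper is a stub standing in for a hardware action (there is no actual Mindstorms or Kindle API here), and the three-page book length is hypothetical.

TOTAL_PAGES = 3                  # assumed book length for the sketch
page = 0

def pages_remaining():
    return page < TOTAL_PAGES

def press_spacebar():
    pass                         # stub: robot arm taps the spacebar

def capture_image():
    return "image_of_page_%d" % (page + 1)   # stub: camera photographs the screen

def turn_page():
    global page
    page += 1                    # stub: robotic "finger" swipes the Kindle

def run_ocr(images):
    return "\n".join("[text of %s]" % img for img in images)   # stub OCR pass

captures = []
while pages_remaining():
    press_spacebar()             # trigger the computer's image capture
    captures.append(capture_image())
    turn_page()                  # advance the Kindle to the next page
print(run_ocr(captures))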
DIY Kindle Scanner
Reaction
Several critics have recognized that more direct means of bypassing digital rights management are available. In this context, the DIY Kindle Scanner has been labeled a type of Rube Goldberg machine. Additionally, Cory Doctorow made the claim that the project was in fact a legal means of bypassing digital rights management. This claim has been supported with the argument that the DIY Kindle Scanner simply exploits the analog hole, which is applicable to all digital rights management systems. In light of the question of the legality of this project, Purgathofer has scanned only one e-book with this method, and he explains that he has not shared the copy with anyone because he is worried that "It would get me in deep trouble." Furthermore, Purgathofer states that this project should not be associated with his academic work. In explanation, he said, "It’s a private project."
DIY Kindle Scanner
General References
"DIY Kindle Scanner", Post-Digital Publishing Archive. Retrieved October 6, 2015. Hoffelder, Nate. "Kindle Plus Legos Plus Mac Equals DIY Scanner (video)", The Digital Reader. Retrieved October 27, 2015. Love, Dylan. "This Lego Robot Can Outwit Amazon's Kindle And Make Copies Of Your E-Books", Business Insider. Retrieved October 6, 2015.
Monosaccharide
Monosaccharide
Monosaccharides (from Greek monos: single, sacchar: sugar), also called simple sugars, are the simplest forms of sugar and the most basic units (monomers) from which all carbohydrates are built. They are usually colorless, water-soluble, crystalline solids. Contrary to their name (sugars), only some monosaccharides have a sweet taste. Most monosaccharides have the formula (CH2O)x (though not all molecules with this formula are monosaccharides).
Monosaccharide
Monosaccharide
Examples of monosaccharides include glucose (dextrose), fructose (levulose), and galactose. Monosaccharides are the building blocks of disaccharides (such as sucrose and lactose) and polysaccharides (such as cellulose and starch). The table sugar of everyday vernacular is itself a disaccharide, sucrose, comprising one molecule of each of the two monosaccharides D-glucose and D-fructose. Each carbon atom that supports a hydroxyl group is chiral, except those at the ends of the chain. This gives rise to a number of isomeric forms, all with the same chemical formula. For instance, galactose and glucose are both aldohexoses but have different physical structures and chemical properties.
Monosaccharide
Monosaccharide
The monosaccharide glucose plays a pivotal role in metabolism, where the chemical energy is extracted through glycolysis and the citric acid cycle to provide energy to living organisms.
Monosaccharide
Structure and nomenclature
With few exceptions (e.g., deoxyribose), monosaccharides have this chemical formula: (CH2O)x, where conventionally x ≥ 3. Monosaccharides can be classified by the number x of carbon atoms they contain: triose (3), tetrose (4), pentose (5), hexose (6), heptose (7), and so on.
Monosaccharide
Structure and nomenclature
Glucose, used as an energy source and for the synthesis of starch, glycogen and cellulose, is a hexose. Ribose and deoxyribose (in RNA and DNA, respectively) are pentose sugars. Examples of heptoses include the ketoses mannoheptulose and sedoheptulose. Monosaccharides with eight or more carbons are rarely observed, as they are quite unstable. In aqueous solutions, monosaccharides with more than four carbons exist as rings.
Monosaccharide
Structure and nomenclature
Linear-chain monosaccharides
Simple monosaccharides have a linear and unbranched carbon skeleton with one carbonyl (C=O) functional group and one hydroxyl (OH) group on each of the remaining carbon atoms. Therefore, the molecular structure of a simple monosaccharide can be written as H(CHOH)n(C=O)(CHOH)mH, where n + 1 + m = x, so that its elemental formula is CxH2xOx. By convention, the carbon atoms are numbered from 1 to x along the backbone, starting from the end that is closest to the C=O group.
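As a quick illustration of this formula, the following hypothetical Python helper (an illustrative sketch, not part of any chemistry library) assembles the open-chain structure for given n and m and shows that the elemental formula works out to CxH2xOx:

    # Hypothetical sketch: assemble H(CHOH)n(C=O)(CHOH)mH and derive the
    # elemental formula CxH2xOx, with x = n + 1 + m total carbons.
    def linear_chain(n, m):
        x = n + 1 + m
        structure = "H" + "(CHOH)" * n + "(C=O)" + "(CHOH)" * m + "H"
        # Each CHOH unit contributes CH2O; the carbonyl carbon plus the
        # two terminal H atoms supply the remaining CO and H2.
        formula = "C%dH%dO%d" % (x, 2 * x, x)
        return structure, formula

    print(linear_chain(0, 4))  # an aldopentose: H(C=O)(CHOH)(CHOH)(CHOH)(CHOH)H, C5H10O5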
Monosaccharide
Structure and nomenclature
If the carbonyl is at position 1 (that is, n or m is zero), the molecule begins with a formyl group H(C=O)− and is technically an aldehyde. In that case, the compound is termed an aldose. Otherwise, the molecule has a ketone group, a carbonyl −(C=O)− between two carbons; then it is formally a ketone, and is termed a ketose. Ketoses of biological interest usually have the carbonyl at position 2.
Monosaccharide
Structure and nomenclature
The various classifications above can be combined, resulting in names such as "aldohexose" and "ketotriose".
Monosaccharide
Structure and nomenclature
A more general nomenclature for open-chain monosaccharides combines a Greek prefix to indicate the number of carbons (tri-, tetr-, pent-, hex-, etc.) with the suffixes "-ose" for aldoses and "-ulose" for ketoses. In the latter case, if the carbonyl is not at position 2, its position is then indicated by a numeric infix. So, for example, H(C=O)(CHOH)4H is pentose, H(CHOH)(C=O)(CHOH)3H is pentulose, and H(CHOH)2(C=O)(CHOH)2H is pent-3-ulose.
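This naming scheme is mechanical enough to express in a few lines of code. A minimal Python sketch follows (the function and prefix table are hypothetical illustrations, not an established library):

    # Hypothetical sketch of the open-chain naming scheme described above.
    PREFIXES = {3: "tri", 4: "tetr", 5: "pent", 6: "hex", 7: "hept"}

    def systematic_name(x, carbonyl_pos):
        prefix = PREFIXES[x]
        if carbonyl_pos == 1:
            return prefix + "ose"      # aldose
        if carbonyl_pos == 2:
            return prefix + "ulose"    # ketose at the default position
        return "%s-%d-ulose" % (prefix, carbonyl_pos)  # numeric infix otherwise

    print(systematic_name(5, 1))  # pentose
    print(systematic_name(5, 2))  # pentulose
    print(systematic_name(5, 3))  # pent-3-ulose

These outputs reproduce the three examples given above.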
Monosaccharide
Structure and nomenclature
Open-chain stereoisomers
Two monosaccharides with equivalent molecular graphs (same chain length and same carbonyl position) may still be distinct stereoisomers, whose molecules differ in spatial orientation. This happens only if the molecule contains a stereogenic center, specifically a carbon atom that is chiral (connected to four distinct molecular sub-structures). Those four bonds can have either of two configurations in space, distinguished by their handedness. In a simple open-chain monosaccharide, every carbon is chiral except the first and the last atoms of the chain, and (in ketoses) the carbon with the keto group.
Monosaccharide
Structure and nomenclature
For example, the ketotriose H(CHOH)(C=O)(CHOH)H (glycerone, dihydroxyacetone) has no stereogenic center, and therefore exists as a single stereoisomer. The other triose, the aldose H(C=O)(CHOH)2H (glyceraldehyde), has one chiral carbon, the central one (number 2), which is bonded to the groups −H, −OH, −CH2OH, and −(C=O)H. Therefore, it exists as two stereoisomers whose molecules are mirror images of each other (like a left and a right glove). Monosaccharides with four or more carbons may contain multiple chiral carbons, so they typically have more than two stereoisomers. The number of distinct stereoisomers with the same diagram is bounded by 2^c, where c is the total number of chiral carbons.
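This counting rule can be made concrete with a short sketch (a hypothetical helper, assuming the simple open-chain case where every carbon is chiral except the two chain ends and, for ketoses, the carbonyl carbon):

    # Hypothetical sketch: upper bound 2^c on the number of stereoisomers
    # of a simple open-chain monosaccharide with x carbons.
    def max_stereoisomers(x, is_ketose=False):
        c = x - 2 - (1 if is_ketose else 0)  # number of chiral carbons
        return 2 ** max(c, 0)

    print(max_stereoisomers(3))                  # glyceraldehyde: 2
    print(max_stereoisomers(3, is_ketose=True))  # dihydroxyacetone: 1
    print(max_stereoisomers(6))                  # aldohexoses: 16

For aldohexoses the bound 2^4 = 16 is actually attained: there are eight D and eight L aldohexoses.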
Monosaccharide
Structure and nomenclature
The Fischer projection is a systematic way of drawing the skeletal formula of an acyclic monosaccharide so that the handedness of each chiral carbon is well specified. Each stereoisomer of a simple open-chain monosaccharide can be identified by the positions (right or left) in the Fischer diagram of the chiral hydroxyls (the hydroxyls attached to the chiral carbons). Most stereoisomers are themselves chiral (distinct from their mirror images). In the Fischer projection, two mirror-image isomers differ by having the positions of all chiral hydroxyls reversed right-to-left. Mirror-image isomers are chemically identical in non-chiral environments, but usually have very different biochemical properties and occurrences in nature.
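Since a Fischer projection reduces each chiral carbon to a left-or-right choice, the mirror image can be computed by flipping every position. A minimal sketch follows (the tuple encoding is an illustrative assumption, not standard cheminformatics practice):

    # Hypothetical sketch: encode the chiral hydroxyls of a Fischer
    # projection as 'R'/'L' positions; the enantiomer flips every one.
    def enantiomer(projection):
        flip = {"R": "L", "L": "R"}
        return tuple(flip[p] for p in projection)

    d_glucose = ("R", "L", "R", "R")  # hydroxyl sides at C2 to C5 of D-glucose
    print(enantiomer(d_glucose))      # ('L', 'R', 'L', 'L'), i.e. L-glucose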