title | section | text
---|---|---|
Runcicantitruncated tesseractic honeycomb
|
Runcicantitruncated tesseractic honeycomb
|
In four-dimensional Euclidean geometry, the runcicantitruncated tesseractic honeycomb is a uniform space-filling tessellation (or honeycomb) in Euclidean 4-space.
|
Runcicantitruncated tesseractic honeycomb
|
Related honeycombs
|
The [4,3,3,4] Coxeter group generates 31 permutations of uniform tessellations, 21 with distinct symmetry and 20 with distinct geometry. The expanded tesseractic honeycomb (also known as the stericated tesseractic honeycomb) is geometrically identical to the tesseractic honeycomb. Three of the symmetric honeycombs are shared in the [3,4,3,3] family. Two alternations (13) and (17), and the quarter tesseractic (2) are repeated in other families.
|
Runcicantitruncated tesseractic honeycomb
|
Related honeycombs
|
The [4,3,3^{1,1}] Coxeter group generates 31 permutations of uniform tessellations, 23 with distinct symmetry and 4 with distinct geometry. There are two alternated forms: the alternations (19) and (24) have the same geometry as the 16-cell honeycomb and snub 24-cell honeycomb respectively.
|
SRPX2
|
SRPX2
|
Sushi repeat-containing protein SRPX2 is a protein that in humans is encoded by the SRPX2 gene, on the X chromosome. It has roles in glutamatergic synapse formation in the cerebral cortex and is more highly expressed in childhood. Bioinformatics analysis suggests the SRPX2 protein is a peroxiredoxin.
|
SRPX2
|
Function
|
SRPX2 is distributed on synapses throughout the cerebral cortex and hippocampus, largely in the same areas as vesicular glutamate transporter 1 and DLG4. It is involved in synapse formation and is more highly expressed in childhood. Overexpression of SRPX2 results in increased density of vesicular glutamate transporter 1 and DLG4 clusters on cortical neurons. Deficiency results in decreased dendritic spine density of excitatory glutamatergic synapses, while inhibitory GABAergic synapses are unaffected. SRPX2 does not, however, affect the length or shape of spines.
|
SRPX2
|
Clinical significance
|
Mutations in SRPX2 were linked in one 2006 study to a family with a form of Rolandic epilepsy with intellectual disability and speech dyspraxia; however, later studies showed that mutations in SRPX2 do not necessarily lead to epilepsy or intellectual disability, and no further mutations in SRPX2 have since been reported in association with Rolandic epilepsy. In mice, mutations in SRPX2 lead to decreased frequency of ultrasonic vocalisations in pups when separated from their mothers.
|
SRPX2
|
Interactions
|
FOXP2 directly reduces SRPX2 expression by binding to its promoter. However, FOXP2 also reduces dendritic length, which SRPX2 does not affect, indicating that FOXP2 has additional regulatory roles in dendritic morphology.
|
Vector field reconstruction
|
Vector field reconstruction
|
Vector field reconstruction is a method of creating a vector field from experimental or computer-generated data, usually with the goal of finding a differential equation model of the system.
|
Vector field reconstruction
|
Vector field reconstruction
|
A differential equation model is one that describes the value of dependent variables as they evolve in time or space by giving equations involving those variables and their derivatives with respect to some independent variables, usually time and/or space. An ordinary differential equation is one in which the system's dependent variables are functions of only one independent variable. Many physical, chemical, biological and electrical systems are well described by ordinary differential equations. Frequently we assume a system is governed by differential equations, but we do not have exact knowledge of the influence of various factors on the state of the system. For instance, we may have an electrical circuit that in theory is described by a system of ordinary differential equations, but due to the tolerance of resistors, variations of the supply voltage or interference from outside influences we do not know the exact parameters of the system. For some systems, especially those that support chaos, a small change in parameter values can cause a large change in the behavior of the system, so an accurate model is extremely important. Therefore, it may be necessary to construct more exact differential equations by building them up based on the actual system performance rather than a theoretical model. Ideally, one would measure all the dynamical variables involved over an extended period of time, using many different initial conditions, then build or fine tune a differential equation model based on these measurements.
|
Vector field reconstruction
|
Vector field reconstruction
|
In some cases we may not know enough about the processes involved in a system to formulate a model at all. In other cases, we may have access to only one dynamical variable for our measurements, i.e., we have a scalar time series. If we only have a scalar time series, we need to use the method of time-delay embedding or derivative coordinates to obtain a large enough set of dynamical variables to describe the system.
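As an illustration of the delay-embedding step, the following minimal Python sketch (assuming NumPy; the function name and the sine-wave stand-in signal are illustrative only) builds delay-coordinate state vectors from a scalar series:

```python
import numpy as np

def delay_embed(series, dim, tau):
    """Build delay vectors [s(t), s(t - tau), ..., s(t - (dim - 1) * tau)]
    from a scalar time series, yielding a multidimensional state."""
    series = np.asarray(series)
    n = len(series) - (dim - 1) * tau              # number of complete delay vectors
    return np.column_stack([series[i * tau: i * tau + n] for i in range(dim)])

# Example: a 3-dimensional embedding with a delay of 5 samples.
s = np.sin(np.linspace(0, 20 * np.pi, 2000))       # stand-in for a measured scalar series
states = delay_embed(s, dim=3, tau=5)              # shape (1990, 3)
```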
|
Vector field reconstruction
|
Vector field reconstruction
|
In a nutshell, once we have a set of measurements of the system state over some period of time, we find the derivatives of these measurements, which gives us a local vector field; we then determine a global vector field consistent with this local field, usually by a least squares fit to the derivative data.
|
Vector field reconstruction
|
Formulation
|
In the best possible case, one has data streams of measurements of all the system variables, equally spaced in time, say s1(t), s2(t), ..., sk(t) for t = t1, t2, ..., tn, beginning at several different initial conditions. The task of finding a vector field, and thus a differential equation model, then consists of fitting functions (for instance, a cubic spline) to the data to obtain a set of continuous-time functions x1(t), x2(t), ..., xk(t); computing the time derivatives dx1/dt, dx2/dt, ..., dxk/dt of these functions; and then making a least squares fit, using some sort of orthogonal basis functions (orthogonal polynomials, radial basis functions, etc.), to each component of the tangent vectors to find a global vector field. A differential equation can then be read off the global vector field.
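A minimal Python sketch of this pipeline (assuming NumPy and SciPy; the two synthetic data streams are placeholders, and plain quadratic monomials are used here for brevity in place of an orthogonal basis):

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical measurements: two variables sampled at equally spaced times
# (stand-ins for s1(t), s2(t); any number k of streams works the same way).
t = np.linspace(0, 10, 500)
data = np.column_stack([np.sin(t), np.cos(t)])

# 1. Fit continuous-time functions x_i(t) to the data and differentiate them.
splines = [CubicSpline(t, data[:, i]) for i in range(data.shape[1])]
X = np.column_stack([s(t) for s in splines])                  # smoothed states x_i(t)
dX = np.column_stack([s.derivative()(t) for s in splines])    # derivatives dx_i/dt

# 2. Least-squares fit of each derivative onto basis functions of the state.
x1, x2 = X[:, 0], X[:, 1]
basis = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x1 * x2, x2**2])
coeffs, *_ = np.linalg.lstsq(basis, dX, rcond=None)           # one column per dx_i/dt

# The global vector field, and hence the model ODE, can be read off as
# dx_i/dt = sum_j coeffs[j, i] * basis_j(x1, x2).
```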
|
Vector field reconstruction
|
Formulation
|
There are various methods of creating the basis functions for the least squares fit. The most common is the Gram–Schmidt process, which creates a set of orthogonal basis vectors that can then easily be normalized. This method begins by selecting any basis β = {v1, v2, ..., vn}. First, set u1 = v1. Next, set u2 = v2 − proj_u1(v2). The process is repeated for all k vectors, the general step being u_k = v_k − Σ_{j=1}^{k−1} proj_{u_j}(v_k). This produces a set of mutually orthogonal basis vectors.
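A short Python sketch of the procedure (the modified, numerically more stable variant; the input vectors are arbitrary examples):

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthogonalize v1, ..., vk: u1 = v1 and u_k = v_k - sum_{j<k} proj_{u_j}(v_k),
    then normalize the result to obtain an orthonormal basis."""
    ortho = []
    for v in vectors:
        u = np.array(v, dtype=float)
        for q in ortho:
            u -= (np.dot(q, u) / np.dot(q, q)) * q    # subtract the projection onto q
        ortho.append(u)
    return [u / np.linalg.norm(u) for u in ortho]

basis = gram_schmidt([[1.0, 1.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])
```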
|
Vector field reconstruction
|
Formulation
|
The reason for using an orthogonal basis rather than a standard basis arises from the least squares fitting done next. Creating a least-squares fit begins by assuming some functional form, in the case of the reconstruction an nth-degree polynomial, and fitting the curve to the data by adjusting its constant coefficients. The accuracy of the fit can be increased by increasing the degree of the polynomial used to fit the data. If a set of non-orthogonal basis functions were used, the constant coefficients would have to be recalculated every time the degree of the fit is changed; with an orthogonal set of basis functions, the coefficients already computed remain valid and do not need to be recalculated.
|
Vector field reconstruction
|
Applications
|
Vector field reconstruction has several applications and many different approaches. Some mathematicians have used not only radial basis functions and polynomials to reconstruct a vector field, but also Lyapunov exponents and singular value decomposition. Gouesbet and Letellier used a multivariate polynomial approximation and least squares to reconstruct their vector field. This method was applied to the Rössler system and the Lorenz system, as well as to thermal lens oscillations.
|
Vector field reconstruction
|
Applications
|
The Rössler system, the Lorenz system and thermal lens oscillations can all be written as differential equations in the standard form X' = Y, Y' = Z and Z' = F(X, Y, Z), where F(X, Y, Z) is known as the standard function.
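As a sketch of what such a standard-form model looks like once fitted, the following Python fragment (assuming SciPy; the coefficients inside F are illustrative placeholders, not a fitted result for any of the named systems) integrates the reconstructed system:

```python
from scipy.integrate import solve_ivp

def F(X, Y, Z):
    # In practice these polynomial coefficients come from the least squares fit
    # described above; the values here are placeholders for illustration.
    return -0.5 * Z - Y - 0.3 * X + 0.1 * X * Y

def standard_system(t, state):
    X, Y, Z = state
    return [Y, Z, F(X, Y, Z)]          # X' = Y, Y' = Z, Z' = F(X, Y, Z)

sol = solve_ivp(standard_system, (0.0, 100.0), [0.1, 0.0, 0.0], max_step=0.01)
```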
|
Vector field reconstruction
|
Implementation issues
|
In some situations the model is not very efficient, and difficulties can arise if the model has a large number of coefficients or demonstrates a divergent solution; nonautonomous differential equations, for example, give the results described previously. In such cases, modifying the standard approach in its application gives a better way of further developing global vector field reconstruction.
Usually the system being modeled in this way is a chaotic dynamical system, because chaotic systems explore a large part of the phase space and the estimate of the global dynamics based on the local dynamics will be better than with a system exploring only a small part of the space.
|
Vector field reconstruction
|
Implementation issues
|
Frequently, one has only a single scalar time series measurement from a system known to have more than one degree of freedom. The time series may not even be from a system variable, but may instead be a function of all the variables, such as the temperature in a stirred tank reactor involving several chemical species. In this case, one must use the technique of delay coordinate embedding, where a state vector consisting of the data at time t and several delayed versions of the data is constructed.
|
Vector field reconstruction
|
Implementation issues
|
A comprehensive review of the topic is available from
|
6in4
|
6in4
|
6in4 is an IPv6 transition mechanism for migrating from Internet Protocol version 4 (IPv4) to IPv6. It is a tunneling protocol that encapsulates IPv6 packets on specially configured IPv4 links according to the specifications of RFC 4213. The IP protocol number for 6in4 is 41, per IANA reservation. The 6in4 packet format consists of the IPv6 packet preceded by an IPv4 packet header; the encapsulation overhead is thus the 20 bytes of the IPv4 header. On Ethernet with a maximum transmission unit (MTU) of 1500 bytes, IPv6 packets of up to 1480 bytes may therefore be transmitted without fragmentation. 6in4 tunneling is also referred to as proto-41 static because the endpoints are configured statically. Although 6in4 tunnels are generally manually configured, the utility AICCU can configure tunnel parameters automatically after retrieving information from a Tunnel Information and Control Protocol (TIC) server.
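To make the packet format concrete, here is a minimal Python sketch (standard library only; the helper name, the zeroed header checksum and the fixed TTL are simplifications for illustration, not part of the RFC) that prepends a 20-byte IPv4 header carrying protocol 41 to an IPv6 packet:

```python
import struct

IPV4_HEADER_LEN = 20   # bytes of encapsulation overhead
PROTO_IPV6 = 41        # IANA protocol number for IPv6 encapsulated in IPv4

def encapsulate_6in4(ipv6_packet: bytes, src: bytes, dst: bytes) -> bytes:
    """Prepend a minimal IPv4 header (protocol 41) to an IPv6 packet.
    src and dst are the 4-byte IPv4 addresses of the tunnel endpoints;
    the header checksum is left at zero here for brevity."""
    total_length = IPV4_HEADER_LEN + len(ipv6_packet)
    header = struct.pack(
        "!BBHHHBBH4s4s",
        (4 << 4) | 5,      # version 4, IHL = 5 words (20 bytes)
        0,                 # DSCP/ECN
        total_length,
        0, 0,              # identification, flags/fragment offset
        64,                # TTL
        PROTO_IPV6,        # protocol 41 = encapsulated IPv6
        0,                 # header checksum (omitted in this sketch)
        src, dst,
    )
    return header + ipv6_packet

# A 1500-byte Ethernet MTU leaves 1500 - 20 = 1480 bytes for the IPv6 packet.
inner = bytes(40) + b"payload"   # stand-in for a real IPv6 packet (40-byte header + data)
frame = encapsulate_6in4(inner, src=bytes([192, 0, 2, 1]), dst=bytes([198, 51, 100, 1]))
assert len(frame) == IPV4_HEADER_LEN + len(inner)
```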
|
6in4
|
6in4
|
The similarly named methods 6to4 and 6over4 describe different mechanisms. The 6to4 method also makes use of proto-41, but the endpoint IPv4 address information is derived from the IPv6 addresses within the IPv6 packet header, instead of from static configuration of the endpoints.
|
6in4
|
Network address translators
|
When an endpoint of a 6in4 tunnel is inside a network that uses network address translation (NAT) to external networks, the DMZ feature of a NAT router may be used to enable the service. Some NAT devices automatically permit transparent operation of 6in4.
|
6in4
|
Dynamic 6in4 tunnels and heartbeat
|
Even though 6in4 tunnels are static in nature, one can still have dynamic tunnel endpoints with the help of, for example, the heartbeat protocol, which signals the other side of the tunnel with its current endpoint location. A tool such as AICCU can then update the endpoints, in effect making the endpoint dynamic while still using the 6in4 protocol. Tunnels of this kind are generally called 'proto-41 heartbeat' tunnels.
|
6in4
|
Security issues
|
The 6in4 protocol has no security features, so one can inject IPv6 packets by spoofing the source IPv4 address of a tunnel endpoint and sending them to the other endpoint. This problem can be partially solved by implementing network ingress filtering (not near the exit point but close to the true source) or with IPsec. The packet injection loophole of 6in4 was exploited for research purposes in a method called IPv6 Tunnel Discovery, which allowed researchers to discover operating IPv6 tunnels around the world.
|
6in4
|
Specifications
|
RFC 1933, Transition Mechanisms for IPv6 Hosts and Routers, R. Gilligan and E. Nordmark, 1996
RFC 2893, Transition Mechanisms for IPv6 Hosts and Routers, R. Gilligan and E. Nordmark, 2000
RFC 4213, Basic Transition Mechanisms for IPv6 Hosts and Routers, E. Nordmark and R. Gilligan, 2005
|
Pitching machine
|
Pitching machine
|
A pitching machine is a machine that automatically pitches a baseball to a batter at different speeds and styles. Most machines are hand-fed, but some feed automatically. There are multiple types of pitching machines: softball, baseball, youth, adult, and combination softball/baseball machines.
|
Pitching machine
|
History
|
In 1897, mathematics instructor Charles Hinton designed a gunpowder-powered baseball pitching machine for the Princeton University baseball team's batting practice. According to one source it caused several injuries, and may have been in part responsible for Hinton's dismissal from Princeton that year. However, the machine was versatile: it was capable of throwing variable speeds with an adjustable breech size and firing curve balls by the use of two rubber-coated steel fingers at the muzzle of the pitcher. Hinton successfully introduced the machine to the University of Minnesota where he worked as an assistant professor until 1900.
|
Pitching machine
|
History
|
The arm-type pitching machine was designed by Paul Giovagnoli in 1952, for use on his driving range. Using a metal arm mounted to a large gear, this type of machine simulates the motion of an actual pitcher, throwing balls with consistent speed and direction. One- and two-wheel style machines were originally patented by Bartley N. Marty in 1916.
|
Pitching machine
|
Design
|
Pitching machines come in a variety of styles. However, the two most popular machines are an arm action machine and a circular wheel machine. The arm action machine simulates the delivery of a pitcher and carries a ball at the end of a bracket, much like a hand would. The arm action machine then delivers the ball in an overhand motion. The circular wheel machine contains one, two or three wheels that spin much like a bike tire. The wheels on these machines are usually set in either a horizontal or vertical fashion. With a circular machine, a ball shoots out towards the hitter after it is fed into the wheel or wheels. Three-wheel machines are more easily adjusted to be able to throw a variety of pitches and they can be used for a wide range of other practice scenarios, such as ground work or flyballs.
|
Pitching machine
|
Design
|
The use of pitching machines allows baseball and softball players the opportunity to get batting practice on their own. Most batting machines are set up in a batting cage, a netted area that will contain the balls after they are hit. By using a pitching machine and a batting cage, hitters can get as much batting practice as they desire without necessitating the cooperation of a human pitcher (and without wearing out the arm of one willing to cooperate). The cost of pitching machines varies greatly.
|
Pitching machine
|
Use in Little League
|
In the youngest divisions of Little League, and other youth baseball organizations, pitching machines are used instead of live pitching. This is done to give the kids more experience hitting the ball, as pitchers at that age would tend to throw few strikes. Simple spring-loaded manual models are common (such as from Louisville Slugger) as are battery-powered compressed-air machines (such as from Zooka).
|
ISO 31-2
|
ISO 31-2
|
ISO 31 (Quantities and units, International Organization for Standardization, 1992) is a superseded international standard concerning physical quantities, units of measurement, their interrelationships and their presentation. It was revised and replaced by ISO/IEC 80000.
|
ISO 31-2
|
Parts
|
The standard comes in 14 parts:
ISO 31-0: General principles (replaced by ISO/IEC 80000-1:2009)
ISO 31-1: Space and time (replaced by ISO/IEC 80000-3:2007)
ISO 31-2: Periodic and related phenomena (replaced by ISO/IEC 80000-3:2007)
ISO 31-3: Mechanics (replaced by ISO/IEC 80000-4:2006)
ISO 31-4: Heat (replaced by ISO/IEC 80000-5)
ISO 31-5: Electricity and magnetism (replaced by ISO/IEC 80000-6)
ISO 31-6: Light and related electromagnetic radiations (replaced by ISO/IEC 80000-7)
ISO 31-7: Acoustics (replaced by ISO/IEC 80000-8:2007)
ISO 31-8: Physical chemistry and molecular physics (replaced by ISO/IEC 80000-9)
ISO 31-9: Atomic and nuclear physics (replaced by ISO/IEC 80000-10)
ISO 31-10: Nuclear reactions and ionizing radiations (replaced by ISO/IEC 80000-10)
ISO 31-11: Mathematical signs and symbols for use in the physical sciences and technology (replaced by ISO 80000-2:2009)
ISO 31-12: Characteristic numbers (replaced by ISO/IEC 80000-11)
ISO 31-13: Solid state physics (replaced by ISO/IEC 80000-12)
A second international standard on quantities and units was IEC 60027. The ISO 31 and IEC 60027 standards were revised by the two standardization organizations in collaboration ([1], [2]) to integrate both standards into a joint standard, ISO/IEC 80000, Quantities and units, in which the quantities and equations used with the SI are to be referred to as the International System of Quantities (ISQ). ISO/IEC 80000 supersedes both ISO 31 and part of IEC 60027.
|
ISO 31-2
|
Coined words
|
ISO 31-0 introduced several new words into the English language that are direct spelling-calques from the French. Some of these words have been used in scientific literature.
|
ISO 31-2
|
Related national standards
|
Canada: CAN/CSA-Z234-1-89 Canadian Metric Practice Guide (covers some aspects of ISO 31-0, but is not a comprehensive list of physical quantities comparable to ISO 31)
United States: There are several national SI guidance documents, such as NIST SP 811, NIST SP 330, NIST SP 814, IEEE/ASTM SI 10, and SAE J916. These cover many aspects of the ISO 31-0 standard, but lack the comprehensive list of quantities and units defined in the remaining parts of ISO 31.
|
Vitamin K deficiency
|
Vitamin K deficiency
|
Vitamin K deficiency results from insufficient dietary vitamin K1 or vitamin K2 or both.
|
Vitamin K deficiency
|
Signs and symptoms
|
Symptoms include bruising, petechiae, and haematoma.
|
Vitamin K deficiency
|
Signs and symptoms
|
Vitamin K is changed to its active form in the liver by the enzyme vitamin K epoxide reductase. Activated vitamin K is then used to gamma-carboxylate (and thus activate) certain enzymes involved in coagulation: factors II, VII, IX and X, and protein C and protein S. Inability to activate the clotting cascade via these factors leads to the bleeding symptoms mentioned above. Notably, when one examines the lab values in vitamin K deficiency [see below], the prothrombin time (PT) is elevated, but the partial thromboplastin time (PTT) is normal or only mildly prolonged. This may seem counterintuitive given that the deficiency leads to decreased activity in factors of both the intrinsic pathway (factor IX), which is monitored by PTT, and the extrinsic pathway (factor VII), which is monitored by PT. However, factor VII has the shortest half-life of all the factors carboxylated by vitamin K; therefore, in deficiency, it is the PT that rises first, since activated factor VII is the first to "disappear". In later stages of deficiency, the other factors (which have longer half-lives) are able to "catch up", and the PTT becomes elevated as well.
|
Vitamin K deficiency
|
Cause
|
Vitamin K1-deficiency may occur by disturbed intestinal uptake (such as would occur in a bile duct obstruction), by therapeutic or accidental intake of a vitamin K1-antagonist such as warfarin, or, very rarely, by nutritional vitamin K1 deficiency. As a result, Gla-residues are inadequately formed and the Gla-proteins are insufficiently active.
|
Vitamin K deficiency
|
Epidemiology
|
The prevalence of vitamin K deficiency varies by geographic region. For infants in the United States, vitamin K1 deficiency without bleeding may occur in as many as 50% of infants younger than 5 days old, with the classic hemorrhagic disease occurring in 0.25–1.7% of infants. Therefore, the Committee on Nutrition of the American Academy of Pediatrics recommends that 0.5 to 1.0 mg of vitamin K1 be administered to all newborns shortly after birth. Postmenopausal and elderly women in Thailand have a high risk of vitamin K2 deficiency compared with young women of reproductive age.
|
Vitamin K deficiency
|
Epidemiology
|
Current dosage recommendations for vitamin K may be too low. The deposition of calcium in soft tissues, including arterial walls, is quite common, especially in those who have atherosclerosis, suggesting that vitamin K deficiency is more common than previously thought. Because colonic bacteria synthesize a significant portion of the vitamin K required for human needs, individuals with disruptions to or insufficient amounts of these bacteria can be at risk for vitamin K deficiency. Newborns, as mentioned above, fit into this category, as their colons are frequently not adequately colonized in the first five to seven days of life. Another at-risk population comprises individuals on any sort of long-term antibiotic therapy, as this can diminish the population of normal gut flora.
|
Fenestrel
|
Fenestrel
|
Fenestrel (INN, USAN) (developmental code name ORF-3858) is a synthetic, nonsteroidal estrogen that was developed as a postcoital contraceptive in the 1960s but was never marketed. Synthesized by Ortho Pharmaceutical in 1961 and studied extensively, it was dubbed a "morning-after pill" or "postcoital antifertility agent". Fenestrel is a seco analogue of doisynolic acid, and a member of the cyclohexenecarboxylic acid series of estrogens.
|
Polymerase chain reaction optimization
|
Polymerase chain reaction optimization
|
The polymerase chain reaction (PCR) is a commonly used molecular biology tool for amplifying DNA, and various techniques for PCR optimization have been developed by molecular biologists to improve PCR performance and minimize failure.
|
Polymerase chain reaction optimization
|
Contamination and PCR
|
The PCR method is extremely sensitive, requiring only a few DNA molecules in a single reaction for amplification across several orders of magnitude. Therefore, adequate measures to avoid contamination from any DNA present in the lab environment (bacteria, viruses, or human sources) are required. Because products from previous PCR amplifications are a common source of contamination, many molecular biology labs have implemented procedures that involve dividing the lab into separate areas. One lab area is dedicated to preparation and handling of pre-PCR reagents and the setup of the PCR reaction, and another area to post-PCR processing, such as gel electrophoresis or PCR product purification. For the setup of PCR reactions, many standard operating procedures involve using pipettes with filter tips and wearing fresh laboratory gloves, and in some cases a laminar flow cabinet with a UV lamp as a work station (to destroy any extraneous DNA). PCR is routinely assessed against a negative control reaction that is set up identically to the experimental PCR, but without template DNA, and performed alongside the experimental PCR.
|
Polymerase chain reaction optimization
|
Hairpins
|
Secondary structures in the DNA can result in folding or knotting of DNA template or primers, leading to decreased product yield or failure of the reaction. Hairpins, which consist of internal folds caused by base-pairing between nucleotides in inverted repeats within single-stranded DNA, are common secondary structures and may result in failed PCRs.
Typically, PCRs with a history of failure due to suspected DNA hairpins are optimized by designing primers with a check for potential secondary structures, or by adding DMSO or glycerol to the reaction to minimize secondary structures in the DNA template.
|
Polymerase chain reaction optimization
|
Polymerase errors
|
Taq polymerase lacks 3′ to 5′ exonuclease activity. Thus, Taq has no proofreading activity, which consists of excising any newly misincorporated nucleotide base from the nascent (i.e., extending) DNA strand that does not match its opposite base in the complementary DNA strand. The lack of 3′ to 5′ proofreading in the Taq enzyme results in a high error rate (mutations per nucleotide per cycle) of approximately 1 in 10,000 bases, which affects the fidelity of the PCR, especially if errors occur early in the PCR with low amounts of starting material, causing accumulation of a large proportion of amplified DNA with incorrect sequence in the final product. Several "high-fidelity" DNA polymerases, having engineered 3′ to 5′ exonuclease activity, have become available that permit more accurate amplification for use in PCRs for sequencing or cloning of products. Examples of polymerases with 3′ to 5′ exonuclease activity include: KOD DNA polymerase, a recombinant form of Thermococcus kodakaraensis KOD1; Vent, which is extracted from Thermococcus litoralis; Pfu DNA polymerase, which is extracted from Pyrococcus furiosus; Pwo, which is extracted from Pyrococcus woesii; and Q5 polymerase, with about 280-fold higher-fidelity amplification compared with Taq.
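A rough back-of-envelope in Python of how such an error rate accumulates (this assumes errors are independent and that each final molecule descends from roughly one copying event per cycle; the amplicon length and cycle count are illustrative):

```python
error_rate = 1e-4       # approximate Taq misincorporations per base per cycle
amplicon_len = 1000     # bases (illustrative)
cycles = 30             # PCR cycles (illustrative)

# Expected errors per final molecule and the fraction of error-free molecules.
expected_errors = error_rate * amplicon_len * cycles                 # = 3.0
fraction_error_free = (1 - error_rate) ** (amplicon_len * cycles)    # about 0.05
print(expected_errors, fraction_error_free)
```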
|
Polymerase chain reaction optimization
|
Magnesium concentration
|
Magnesium is required as a co-factor for thermostable DNA polymerase. Taq polymerase is a magnesium-dependent enzyme, and determining the optimum concentration to use is critical to the success of the PCR reaction. Some components of the reaction mixture, such as template concentration, dNTPs and the presence of chelating agents (EDTA) or proteins, can reduce the amount of free magnesium present, thus reducing the activity of the enzyme. Primers that bind to incorrect template sites are stabilized in the presence of excessive magnesium concentrations, resulting in decreased specificity of the reaction. Excessive magnesium concentrations also stabilize double-stranded DNA and prevent complete denaturation of the DNA during PCR, reducing the product yield. Inadequate thawing of MgCl2 may result in the formation of concentration gradients within the magnesium chloride solution supplied with the DNA polymerase and also contributes to many failed experiments.
|
Polymerase chain reaction optimization
|
Size and other limitations
|
PCR works readily with a DNA template of up to two to three thousand base pairs in length. However, above this size, product yields often decrease, as with increasing length stochastic effects such as premature termination by the polymerase begin to affect the efficiency of the PCR. It is possible to amplify larger pieces of up to 50,000 base pairs with a slower heating cycle and special polymerases. These are polymerases fused to a processivity-enhancing DNA-binding protein, enhancing adherence of the polymerase to the DNA. Other valuable properties of the chimeric polymerases TopoTaq and PfuC2 include enhanced thermostability, specificity and resistance to contaminants and inhibitors. They were engineered using the unique helix-hairpin-helix (HhH) DNA binding domains of topoisomerase V from the hyperthermophile Methanopyrus kandleri. Chimeric polymerases overcome many limitations of native enzymes and are used in direct PCR amplification from cell cultures and even food samples, bypassing laborious DNA isolation steps. A robust strand-displacement activity of the hybrid TopoTaq polymerase helps solve PCR problems that can be caused by hairpins and G-loaded double helices. Helices with a high G-C content possess a higher melting temperature, often impairing PCR, depending on the conditions.
|
Polymerase chain reaction optimization
|
Non-specific priming
|
Non-specific binding of primers frequently occurs and can have several causes. These include repeat sequences in the DNA template, non-specific binding between primer and template, high or low G-C content in the template, or incomplete primer binding, leaving the 5' end of the primer unattached to the template. Non-specific binding of degenerate primers is also common. Manipulation of annealing temperature and magnesium ion concentration may be used to increase specificity. For example, lower concentrations of magnesium or other cations may prevent non-specific primer interactions, thus enabling successful PCR. A "hot-start" polymerase enzyme, whose activity is blocked unless it is heated to high temperature (e.g., 90–98 °C) during the denaturation step of the first cycle, is commonly used to prevent non-specific priming during reaction preparation at lower temperatures. Chemically mediated hot-start PCRs require higher temperatures and longer incubation times for polymerase activation, compared with antibody- or aptamer-based hot-start PCRs. Other methods to increase specificity include nested PCR and touchdown PCR.
|
Polymerase chain reaction optimization
|
Non-specific priming
|
Computer simulations of theoretical PCR results (electronic PCR) may be performed to assist in primer design. Touchdown polymerase chain reaction (touchdown PCR) is a method by which primers avoid amplifying nonspecific sequences. The annealing temperature during a polymerase chain reaction determines the specificity of primer annealing. The melting point of the primer sets the upper limit on annealing temperature. At temperatures just below this point, only very specific base pairing between the primer and the template will occur. At lower temperatures, the primers bind less specifically. Nonspecific primer binding obscures polymerase chain reaction results, as the nonspecific sequences to which primers anneal in early steps of amplification will "swamp out" any specific sequences because of the exponential nature of polymerase amplification.
|
Polymerase chain reaction optimization
|
Non-specific priming
|
The earliest cycles of a touchdown polymerase chain reaction have high annealing temperatures. The annealing temperature is decreased in increments for every subsequent set of cycles (the number of cycles per set and the size of each temperature decrement are chosen by the experimenter). The primer will anneal at the highest temperature it is able to tolerate, which is the temperature least permissive of nonspecific binding. Thus, the first sequence amplified is the one between the regions of greatest primer specificity; it is most likely that this is the sequence of interest. These fragments will be further amplified during subsequent rounds at lower temperatures, and will outcompete the nonspecific sequences to which the primers may bind at those lower temperatures. If the primer initially (during the higher-temperature phases) binds to the sequence of interest, subsequent rounds of polymerase chain reaction can be performed upon the product to further amplify those fragments.
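A minimal Python sketch of such a stepped annealing schedule (all temperatures, increments and cycle counts below are illustrative placeholders, not a validated protocol):

```python
def touchdown_schedule(start_temp=65.0, end_temp=55.0, step=1.0,
                       cycles_per_step=2, final_cycles=15):
    """Return a per-cycle list of annealing temperatures: start just below the
    primer melting temperature, drop by `step` every `cycles_per_step` cycles,
    then hold the final temperature for the remaining cycles."""
    schedule, temp = [], start_temp
    while temp > end_temp:
        schedule.extend([temp] * cycles_per_step)
        temp -= step
    schedule.extend([end_temp] * final_cycles)
    return schedule

print(touchdown_schedule())   # e.g. [65.0, 65.0, 64.0, 64.0, ..., 55.0, 55.0]
```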
|
Polymerase chain reaction optimization
|
Primer dimers
|
Annealing of the 3' end of one primer to itself or to the second primer may cause primer extension, resulting in the formation of so-called primer dimers, visible as low-molecular-weight bands on PCR gels. Primer dimer formation often competes with formation of the DNA fragment of interest, and may be avoided by designing primers so that they lack complementarity, especially at the 3' ends, to themselves or to the other primer used in the reaction. If primer design is constrained by other factors and primer dimers do occur, methods to limit their formation may include optimisation of the MgCl2 concentration or increasing the annealing temperature in the PCR.
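A small Python sketch of the kind of 3'-end check a primer-design tool performs (the function name and the 4-base window are assumptions for illustration):

```python
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def three_prime_dimer_risk(primer1: str, primer2: str, window: int = 4) -> bool:
    """Return True if the last `window` bases of primer1 can base-pair, in
    antiparallel orientation, with the 3' end of primer2 (pass the same primer
    twice to check for self-dimers)."""
    tail1 = primer1[-window:]
    tail2_revcomp = primer2[-window:].translate(COMPLEMENT)[::-1]
    return tail1 == tail2_revcomp

print(three_prime_dimer_risk("ATCGGCGC", "TTAAGCGC"))   # True: GCGC pairs with GCGC
print(three_prime_dimer_risk("ATCGGCAT", "TTAAGCTT"))   # False
```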
|
Polymerase chain reaction optimization
|
Deoxynucleotides
|
Deoxynucleotides (dNTPs) may bind Mg2+ ions and thus affect the concentration of free magnesium ions in the reaction. In addition, excessive amounts of dNTPs can increase the error rate of DNA polymerase and even inhibit the reaction. An imbalance in the proportion of the four dNTPs can result in misincorporation into the newly formed DNA strand and contribute to a decrease in the fidelity of DNA polymerase.
|
X Window System
|
X Window System
|
The X Window System (X11, or simply X) is a windowing system for bitmap displays, common on Unix-like operating systems.
X provides the basic framework for a GUI environment: drawing and moving windows on the display device and interacting with a mouse and keyboard. X does not mandate the user interface – this is handled by individual programs. As such, the visual styling of X-based environments varies greatly; different programs may present radically different interfaces.
X originated as part of Project Athena at Massachusetts Institute of Technology (MIT) in 1984. The X protocol has been at version 11 (hence "X11") since September 1987. The X.Org Foundation leads the X project, with the current reference implementation, X.Org Server, available as free and open-source software under the MIT License and similar permissive licenses.
|
X Window System
|
Purpose and abilities
|
X is an architecture-independent system for remote graphical user interfaces and input device capabilities. Each person using a networked terminal has the ability to interact with the display with any type of user input device.
In its standard distribution it is a complete, albeit simple, display and interface solution which delivers a standard toolkit and protocol stack for building graphical user interfaces on most Unix-like operating systems and OpenVMS, and has been ported to many other contemporary general purpose operating systems.
|
X Window System
|
Purpose and abilities
|
X provides the basic framework, or primitives, for building such GUI environments: drawing and moving windows on the display and interacting with a mouse, keyboard or touchscreen. X does not mandate the user interface; individual client programs handle this. Programs may use X's graphical abilities with no user interface. As such, the visual styling of X-based environments varies greatly; different programs may present radically different interfaces.
|
X Window System
|
Purpose and abilities
|
Unlike most earlier display protocols, X was specifically designed to be used over network connections rather than on an integral or attached display device. X features network transparency, which means an X program running on a computer somewhere on a network (such as the Internet) can display its user interface on an X server running on some other computer on the network. The X server is typically the provider of graphics resources and keyboard/mouse events to X clients, meaning that the X server is usually running on the computer in front of a human user, while the X client applications run anywhere on the network and communicate with the user's computer to request the rendering of graphics content and receive events from input devices including keyboards and mice.
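As a concrete illustration of the client side of this arrangement, the following minimal sketch uses the third-party python-xlib bindings (an assumption; any Xlib binding would serve) to connect to whichever server the DISPLAY environment variable names, possibly on another machine, and ask it to create and show a window:

```python
from Xlib import X, display

d = display.Display()                 # connect to the server named by $DISPLAY
screen = d.screen()
win = screen.root.create_window(
    10, 10, 300, 200, 1,              # x, y, width, height, border width
    screen.root_depth,
    background_pixel=screen.white_pixel,
    event_mask=X.ExposureMask | X.KeyPressMask,
)
win.map()                             # request that the server display the window
d.flush()

while True:
    event = d.next_event()            # input events arrive from the server
    if event.type == X.KeyPress:      # exit on the first key press
        break
```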
|
X Window System
|
Purpose and abilities
|
The fact that the term "server" is applied to the software in front of the user is often surprising to users accustomed to their programs being clients to services on remote computers. Here, rather than a remote database being the resource for a local app, the user's graphic display and input devices become resources made available by the local X server to both local and remotely hosted X client programs that need to share the user's graphics and input devices to communicate with the user.
|
X Window System
|
Purpose and abilities
|
X's network protocol is based on X command primitives. This approach allows both 2D and (through extensions like GLX) 3D operations by an X client application which might be running on a different computer to still be fully accelerated on the X server's display. For example, in classic OpenGL (before version 3.0), display lists containing large numbers of objects could be constructed and stored entirely in the X server by a remote X client program, and each then rendered by sending a single glCallList(which) across the network.
|
X Window System
|
Purpose and abilities
|
X provides no native support for audio; several projects exist to fill this niche, some also providing transparent network support.
|
X Window System
|
Software architecture
|
X uses a client–server model: an X server communicates with various client programs. The server accepts requests for graphical output (windows) and sends back user input (from keyboard, mouse, or touchscreen). The server may function as:
an application displaying to a window of another display system;
a system program controlling the video output of a PC; or
a dedicated piece of hardware.
This client–server terminology – the user's terminal being the server and the applications being the clients – often confuses new X users, because the terms appear reversed. But X takes the perspective of the application, rather than that of the end-user: X provides display and I/O services to applications, so it is a server; applications use these services, thus they are clients.
|
X Window System
|
Software architecture
|
The communication protocol between server and client operates network-transparently: the client and server may run on the same machine or on different ones, possibly with different architectures and operating systems. A client and server can even communicate securely over the Internet by tunneling the connection over an encrypted network session.
An X client itself may emulate an X server by providing display services to other clients. This is known as "X nesting". Open-source clients such as Xnest and Xephyr support such X nesting.
|
X Window System
|
Software architecture
|
Remote desktop
To run an X client application on a remote machine, the user may do the following:
on the local machine, open a terminal window;
use the ssh -X command to connect to the remote machine;
request a local display/input service (e.g., export DISPLAY=[user's machine]:0 if not using SSH with X forwarding enabled).
The remote X client application will then make a connection to the user's local X server, providing display and input to the user.
|
X Window System
|
Software architecture
|
Alternatively, the local machine may run a small program that connects to the remote machine and starts the client application.
|
X Window System
|
Software architecture
|
Practical examples of remote clients include:
administering a remote machine graphically (similar to using remote desktop, but with single windows);
using a client application to join with large numbers of other terminal users in collaborative workgroups;
running a computationally intensive simulation on a remote machine and displaying the results on a local desktop machine;
running graphical software on several machines at once, controlled by a single display, keyboard and mouse.
|
X Window System
|
User interfaces
|
X primarily defines protocol and graphics primitives – it deliberately contains no specification for application user-interface design, such as button, menu, or window title-bar styles. Instead, application software – such as window managers, GUI widget toolkits and desktop environments, or application-specific graphical user interfaces – define and provide such details. As a result, there is no typical X interface and several different desktop environments have become popular among users.
|
X Window System
|
User interfaces
|
A window manager controls the placement and appearance of application windows. This may result in desktop interfaces reminiscent of those of Microsoft Windows or of the Apple Macintosh (examples include GNOME 2, KDE, Xfce) or have radically different controls (such as a tiling window manager, like wmii or Ratpoison). Some interfaces such as Sugar or ChromeOS eschew the desktop metaphor altogether, simplifying their interfaces for specialized applications. Window managers range in sophistication and complexity from the bare-bones (e.g., twm, the basic window manager supplied with X, or evilwm, an extremely light window manager) to the more comprehensive desktop environments such as Enlightenment and even to application-specific window managers for vertical markets such as point-of-sale.
|
X Window System
|
User interfaces
|
Many users use X with a desktop environment, which, aside from the window manager, includes various applications using a consistent user interface. Popular desktop environments include GNOME, KDE Plasma and Xfce. The UNIX 98 standard environment is the Common Desktop Environment (CDE). The freedesktop.org initiative addresses interoperability between desktops and the components needed for a competitive X desktop.
|
X Window System
|
Implementations
|
The X.Org implementation is the canonical implementation of X. Owing to liberal licensing, a number of variations, both free and open source and proprietary, have appeared. Commercial Unix vendors have tended to take the reference implementation and adapt it for their hardware, usually customizing it and adding proprietary extensions.
|
X Window System
|
Implementations
|
Until 2004, XFree86 provided the most common X variant on free Unix-like systems. XFree86 started as a port of X to 386-compatible PCs and, by the end of the 1990s, had become the greatest source of technical innovation in X and the de facto standard of X development. Since 2004, however, the X.Org Server, a fork of XFree86, has become predominant.
|
X Window System
|
Implementations
|
While it is common to associate X with Unix, X servers also exist natively within other graphical environments. VMS Software Inc.'s OpenVMS operating system includes a version of X with Common Desktop Environment (CDE), known as DECwindows, as its standard desktop environment. Apple originally ported X to macOS in the form of X11.app, but that has been deprecated in favor of the XQuartz implementation. Third-party servers under Apple's older operating systems in the 1990s, System 7, and Mac OS 8 and 9, included Apple's MacX and White Pine Software's eXodus.
|
X Window System
|
Implementations
|
Microsoft Windows is not shipped with support for X, but many third-party implementations exist, as free and open source software such as Cygwin/X, and proprietary products such as Exceed, MKS X/Server, Reflection X, X-Win32 and Xming.
There are also Java implementations of X servers. WeirdX runs on any platform supporting Swing 1.1, and will run as an applet within most browsers. The Android X Server is an open source Java implementation that runs on Android devices.
When an operating system with a native windowing system hosts X in addition, the X system can either use its own normal desktop in a separate host window or it can run rootless, meaning the X desktop is hidden and the host windowing environment manages the geometry and appearance of the hosted X windows within the host screen.
|
X Window System
|
Implementations
|
X terminals
An X terminal is a thin client that only runs an X server. This architecture became popular for building inexpensive terminal parks for many users to simultaneously use the same large computer server to execute application programs as clients of each user's X terminal. This use is very much aligned with the original intention of the MIT project.
|
X Window System
|
Implementations
|
X terminals explore the network (the local broadcast domain) using the X Display Manager Control Protocol to generate a list of available hosts that are allowed as clients. One of the client hosts should run an X display manager.
A limitation of X terminals and most thin clients is that they are not capable of any input or output other than the keyboard, mouse, and display. All relevant data is assumed to exist solely on the remote server, and the X terminal user has no methods available to save or load data from a local peripheral device.
Dedicated (hardware) X terminals have fallen out of use; a PC or modern thin client with an X server typically provides the same functionality at the same, or lower, cost.
|
X Window System
|
Limitations and criticism
|
The Unix-Haters Handbook (1994) devoted a full chapter to the problems of X. Why X Is Not Our Ideal Window System (1990) by Gajewska, Manasse and McCormack detailed problems in the protocol with recommendations for improvement.
|
X Window System
|
Limitations and criticism
|
User interface issues
The lack of design guidelines in X has resulted in several vastly different interfaces, and in applications that have not always worked well together. The Inter-Client Communication Conventions Manual (ICCCM), a specification for client interoperability, has a reputation for being difficult to implement correctly. Further standards efforts such as Motif and CDE did not alleviate problems. This has frustrated users and programmers. Graphics programmers now generally address consistency of application look and feel and communication by coding to a specific desktop environment or to a specific widget toolkit, which also avoids having to deal directly with the ICCCM.
|
X Window System
|
Limitations and criticism
|
X also lacks native support for user-defined stored procedures on the X server, in the manner of NeWS – there is no Turing-complete scripting facility. Various desktop environments may thus offer their own (usually mutually incompatible) facilities.
|
X Window System
|
Limitations and criticism
|
Computer accessibility related issues
Systems built upon X may have accessibility issues that make utilization of a computer difficult for disabled users, including right click, double click, middle click, mouse-over, and focus stealing. Some X11 clients deal with accessibility issues better than others, so persons with accessibility problems are not locked out of using X11. However, there is no accessibility standard or accessibility guidelines for X11. Within the X11 standards process there is no working group on accessibility; however, accessibility needs are being addressed by software projects to provide these features on top of X.
|
X Window System
|
Limitations and criticism
|
The Orca project adds accessibility support to the X Window System, including implementing an API (AT-SPI). This is coupled with GNOME's ATK to allow for accessibility features to be implemented in X programs using the GNOME/GTK APIs. KDE provides a different set of accessibility software, including a text-to-speech converter and a screen magnifier. The other major desktops (LXDE, Xfce and Enlightenment) attempt to be compatible with ATK.
|
X Window System
|
Limitations and criticism
|
Network
An X client cannot generally be detached from one server and reattached to another unless its code specifically provides for it (Emacs is one of the few common programs with this ability). As such, moving an entire session from one X server to another is generally not possible. However, approaches like Virtual Network Computing (VNC), NX and Xpra allow a virtual session to be reached from different X servers (in a manner similar to GNU Screen in relation to terminals), and other applications and toolkits provide related facilities. Workarounds like x11vnc (VNC :0 viewers), Xpra's shadow mode and NX's nxagent shadow mode also exist to make the current X-server screen available. This ability allows the user interface (mouse, keyboard, monitor) of a running application to be switched from one location to another without stopping and restarting the application.
|
X Window System
|
Limitations and criticism
|
Network traffic between an X server and remote X clients is not encrypted by default. An attacker with a packet sniffer can intercept it, making it possible to view anything displayed to or sent from the user's screen. The most common way to encrypt X traffic is to establish a Secure Shell (SSH) tunnel for communication.
|
X Window System
|
Limitations and criticism
|
Like all thin clients, when using X across a network, bandwidth limitations can impede the use of bitmap-intensive applications that require rapidly updating large portions of the screen with low latency, such as 3D animation or photo editing. Even a relatively small uncompressed 640×480×24 bit 30 fps video stream (~211 Mbit/s) can easily outstrip the bandwidth of a 100 Mbit/s network for a single client. In contrast, modern versions of X generally have extensions such as Mesa allowing local display of a local program's graphics to be optimized to bypass the network model and directly control the video card, for use of full-screen video, rendered 3D applications, and other such applications.
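The quoted figure can be checked with a one-line calculation; the difference between roughly 221 and 211 is whether decimal (Mbit) or binary (Mibit) prefixes are used:

```python
width, height, bits_per_pixel, fps = 640, 480, 24, 30

bits_per_second = width * height * bits_per_pixel * fps   # 221,184,000 bit/s
print(bits_per_second / 1e6)     # ~221.2 (decimal megabits per second)
print(bits_per_second / 2**20)   # ~210.9 (binary mebibits per second)
```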
|
X Window System
|
Limitations and criticism
|
Client–server separation
X's design requires the clients and server to operate separately, and device independence and the separation of client and server incur overhead. Most of the overhead comes from network round-trip delay time between client and server (latency) rather than from the protocol itself: the best solutions to performance issues depend on efficient application design. A common criticism of X is that its network features result in excessive complexity and decreased performance if only used locally.
|
X Window System
|
Limitations and criticism
|
Modern X implementations use Unix domain sockets for efficient connections on the same host. Additionally shared memory (via the MIT-SHM extension) can be employed for faster client–server communication. However, the programmer must still explicitly activate and use the shared memory extension. It is also necessary to provide fallback paths in order to stay compatible with older implementations, and in order to communicate with non-local X servers.
|
X Window System
|
Competitors
|
Some people have attempted writing alternatives to and replacements for X. Historical alternatives include Sun's NeWS and NeXT's Display PostScript, both PostScript-based systems supporting user-definable display-side procedures, which X lacked. Current alternatives include: macOS (and its mobile counterpart, iOS) implements its windowing system, which is known as Quartz. When Apple Computer bought NeXT, and used NeXTSTEP to construct Mac OS X, it replaced Display PostScript with Quartz. Mike Paquette, one of the authors of Quartz, explained that if Apple had added support for all the features it wanted to include into X11, it would not bear much resemblance to X11 nor be compatible with other servers anyway.
|
X Window System
|
Competitors
|
Wayland is being developed by several X.Org developers as a prospective replacement for X. It works directly with the GPU hardware, via DRI. Wayland can run an X server as a Wayland compositor, which can be rootless. A proprietary port of the Wayland backend to the Raspberry Pi was completed in 2013. The project reached version 1.0 in 2012. Like Android, Wayland is EGL-based.
|
X Window System
|
Competitors
|
Mir was a project from Canonical Ltd. with goals similar to Wayland. Mir was intended to work with mobile devices using ARM chipsets (a stated goal was compatibility with Android device-drivers) as well as x86 desktops. Like Android, Mir/UnityNext were EGL-based. Backwards compatibility with X client-applications was accomplished via Xmir. The project has since moved to being a Wayland compositor instead of being an alternative display server.
|
X Window System
|
Competitors
|
Other alternatives attempt to avoid the overhead of X by working directly with the hardware; such projects include DirectFB. The Direct Rendering Infrastructure (DRI) provides a kernel-level interface to the framebuffer. Additional ways to achieve a functional form of the "network transparency" feature of X, via network transmissibility of graphical services, include:
Virtual Network Computing (VNC), a very low-level system which sends compressed bitmaps across the network; the Unix implementation includes an X server;
Remote Desktop Protocol (RDP), which is similar to VNC in purpose, but originated on Microsoft Windows before being ported to Unix-like systems, e.g. NX;
Citrix XenApp, an X-like protocol and application stack for Microsoft Windows;
Tarantella, which provides a Java-based remote GUI client for use in web browsers.
|
X Window System
|
History
|
Predecessors
Several bitmap display systems preceded X. From Xerox came the Alto (1973) and the Star (1981). From Apollo Computer came Display Manager (1981). From Apple came the Lisa (1983) and the Macintosh (1984). The Unix world had the Andrew Project (1982) and Rob Pike's Blit terminal (1982).
Carnegie Mellon University produced a remote-access application called Alto Terminal, that displayed overlapping windows on the Xerox Alto, and made remote hosts (typically DEC VAX systems running Unix) responsible for handling window-exposure events and refreshing window contents as necessary.
X derives its name as a successor to a pre-1983 window system called W (the letter preceding X in the English alphabet). W ran under the V operating system. W used a network protocol supporting terminal and graphics windows, the server maintaining display lists.
|
X Window System
|
History
|
Origin and early development
The original idea of X emerged at MIT in 1984 as a collaboration between Jim Gettys (of Project Athena) and Bob Scheifler (of the MIT Laboratory for Computer Science). Scheifler needed a usable display environment for debugging the Argus system. Project Athena (a joint project between DEC, MIT and IBM to provide easy access to computing resources for all students) needed a platform-independent graphics system to link together its heterogeneous multiple-vendor systems; the window system then under development in Carnegie Mellon University's Andrew Project did not make licenses available, and no alternatives existed.
|
X Window System
|
History
|
The project solved this by creating a protocol that could both run local applications and call on remote resources. In mid-1983 an initial port of W to Unix ran at one-fifth of its speed under V; in May 1984, Scheifler replaced the synchronous protocol of W with an asynchronous protocol and the display lists with immediate mode graphics to make X version 1. X became the first windowing system environment to offer true hardware independence and vendor independence.
|
X Window System
|
History
|
Scheifler, Gettys and Ron Newman set to work and X progressed rapidly. They released Version 6 in January 1985. DEC, then preparing to release its first Ultrix workstation, judged X the only windowing system likely to become available in time. DEC engineers ported X6 to DEC's QVSS display on MicroVAX.
In the second quarter of 1985, X acquired color support to function in the DEC VAXstation-II/GPX, forming what became version 9.
|
X Window System
|
History
|
A group at Brown University ported version 9 to the IBM RT PC, but problems with reading unaligned data on the RT forced an incompatible protocol change, leading to version 10 in late 1985. By 1986, outside organizations had begun asking for X. X10R2 was released in January 1986, then X10R3 in February 1986. Although MIT had licensed X6 to some outside groups for a fee, it decided at this time to license X10R3 and future versions under what became known as the MIT License, intending to popularize X further and, in return, hoping that many more applications would become available. X10R3 became the first version to achieve wide deployment, with both DEC and Hewlett-Packard releasing products based on it. Other groups ported X10 to Apollo and to Sun workstations and even to the IBM PC/AT. Demonstrations of the first commercial application for X (a mechanical computer-aided engineering system from Cognition Inc. that ran on VAXes and remotely displayed on PCs running an X server ported by Jim Fulton and Jan Hardenbergh) took place at the Autofact trade show at that time. The last version of X10, X10R4, appeared in December 1986. Attempts were made to enable X servers as real-time collaboration devices, much as Virtual Network Computing (VNC) would later allow a desktop to be shared. One such early effort was Philip J. Gust's SharedX tool.
|
X Window System
|
History
|
Although X10 offered interesting and powerful functionality, it had become obvious that the X protocol could use a more hardware-neutral redesign before it became too widely deployed, but MIT alone would not have the resources available for such a complete redesign. As it happened, DEC's Western Software Laboratory found itself between projects with an experienced team. Smokey Wallace of DEC WSL and Jim Gettys proposed that DEC WSL build X11 and make it freely available under the same terms as X9 and X10. This process started in May 1986, with the protocol finalized in August. Alpha testing of the software started in February 1987, beta-testing in May; the release of X11 finally occurred on 15 September 1987. The X11 protocol design, led by Scheifler, was extensively discussed on open mailing lists on the nascent Internet that were bridged to USENET newsgroups. Gettys moved to California to help lead the X11 development work at WSL from DEC's Systems Research Center, where Phil Karlton and Susan Angebrandt led the X11 sample server design and implementation. X therefore represents one of the first very large-scale distributed free and open source software projects.
|
X Window System
|
History
|
The MIT X Consortium and the X Consortium, Inc.
|
X Window System
|
History
|
By the late 1980s X was, Simson Garfinkel wrote in 1989, "Athena's most important single achievement to date". DEC reportedly believed that its development alone had made the company's donation to MIT worthwhile. Gettys joined the design team for the VAXstation 2000 to ensure that X—which DEC called DECwindows—would run on it, and the company assigned 1,200 employees to port X to both Ultrix and VMS. In 1987, with the success of X11 becoming apparent, MIT wished to relinquish the stewardship of X, but at a June 1987 meeting with nine vendors, the vendors told MIT that they believed in the need for a neutral party to keep X from fragmenting in the marketplace. In January 1988, the MIT X Consortium formed as a non-profit vendor group, with Scheifler as director, to direct the future development of X in a neutral atmosphere inclusive of commercial and educational interests.
|
X Window System
|
History
|
Jim Fulton joined in January 1988 and Keith Packard in March 1988 as senior developers, with Jim focusing on Xlib, fonts, window managers, and utilities; and Keith re-implementing the server. Donna Converse, Chris D. Peterson, and Stephen Gildea joined later that year, focusing on toolkits and widget sets, working closely with Ralph Swick of MIT Project Athena. The MIT X Consortium produced several significant revisions to X11, the first (Release 2 – X11R2) in February 1988. Jay Hersh joined the staff in January 1991 to work on the PEX and X11 3D functionality. He was followed soon after by Ralph Mor (who also worked on PEX) and Dave Sternlicht. In 1993, as the MIT X Consortium prepared to depart from MIT, the staff were joined by R. Gary Cutbill, Kaleb Keithley, and David Wiggins.
|
X Window System
|
History
|
In 1993, the X Consortium, Inc. (a non-profit corporation) formed as the successor to the MIT X Consortium. It released X11R6 on 16 May 1994. In 1995 it took on the development of the Motif toolkit and of the Common Desktop Environment for Unix systems. The X Consortium dissolved at the end of 1996, producing a final revision, X11R6.3, and a legacy of increasing commercial influence in the development.
|
X Window System
|
History
|
The Open Group
In January 1997, the X Consortium passed stewardship of X to The Open Group, a vendor group formed in early 1996 by the merger of the Open Software Foundation and X/Open.
|
X Window System
|
History
|
The Open Group released X11R6.4 in early 1998. Controversially, X11R6.4 departed from the traditional liberal licensing terms, as the Open Group sought to assure funding for the development of X, and specifically cited XFree86 as not significantly contributing to X. The new terms would have made X no longer free software: zero-cost for noncommercial use, but a fee otherwise. After XFree86 seemed poised to fork, the Open Group relicensed X11R6.4 under the traditional license in September 1998. The Open Group's last release came as X11R6.4 patch 3.
|