title | section | text
---|---|---|
Neo-futurism
|
People
|
The relaunch of neo-futurism in the 21st century has been creatively inspired by the Pritzker Architecture Prize-winning architect Zaha Hadid and architect Santiago Calatrava. Neo-futurist architects, designers and artists include Denis Laming, Patrick Jouin, Yuima Nakazato, artist Simon Stålenhag and artist Charis Tsevis. Neo-futurism has absorbed some high-tech architectural themes and ideas, incorporating elements of high-tech industry and technology into building design. Technology and context have been a focus for architects such as Buckminster Fuller, Norman Foster, Kenzo Tange, Renzo Piano and Richard Rogers.
|
Signomial
|
Signomial
|
A signomial is an algebraic function of one or more independent variables. It is perhaps most easily thought of as an algebraic extension of multivariable polynomials—an extension that permits exponents to be arbitrary real numbers (rather than just non-negative integers) while requiring the independent variables to be strictly positive (so that division by zero and other inappropriate algebraic operations are not encountered).
|
Signomial
|
Signomial
|
Formally, a signomial is a function with domain $\mathbb{R}_{>0}^{n}$ which takes values
$$ f(x_1, x_2, \ldots, x_n) = \sum_{i=1}^{M} \left( c_i \prod_{j=1}^{n} x_j^{a_{ij}} \right) $$
where the coefficients $c_i$ and the exponents $a_{ij}$ are real numbers. Signomials are closed under addition, subtraction, multiplication, and scaling.
If we restrict all $c_i$ to be positive, then the function $f$ is a posynomial. Consequently, each signomial is either a posynomial, the negative of a posynomial, or the difference of two posynomials. If, in addition, all exponents $a_{ij}$ are non-negative integers, then the signomial becomes a polynomial whose domain is the positive orthant.
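To make the definition concrete, here is a minimal Python sketch (the function names and data layout are illustrative, not taken from the source) that evaluates a signomial from its coefficient vector and exponent matrix and checks whether it is a posynomial:

```python
import math

def eval_signomial(c, A, x):
    """Evaluate f(x) = sum_i c[i] * prod_j x[j]**A[i][j] for x[j] > 0."""
    assert all(xj > 0 for xj in x), "signomials are defined on the positive orthant"
    return sum(ci * math.prod(xj ** aij for xj, aij in zip(x, Ai))
               for ci, Ai in zip(c, A))

def is_posynomial(c):
    """A signomial with all positive coefficients is a posynomial."""
    return all(ci > 0 for ci in c)

# f(x1, x2) = 2.5 * x1**0.5 * x2**-1.2  -  3 * x1**2
c = [2.5, -3.0]
A = [[0.5, -1.2], [2.0, 0.0]]
print(eval_signomial(c, A, [1.0, 4.0]))   # evaluate at a positive point
print(is_posynomial(c))                   # False: one coefficient is negative
```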
|
Signomial
|
Signomial
|
For example, $x_1^{2.7} x_2^{0.7} - 2x_1^{-4} x_3^{2/5}$ is a signomial. The term "signomial" was introduced by Richard J. Duffin and Elmor L. Peterson in their seminal joint work on general algebraic optimization, published in the late 1960s and early 1970s. A more recent introductory exposition treats signomial optimization problems. Nonlinear optimization problems with constraints and/or objectives defined by signomials are harder to solve than those defined by only posynomials, because (unlike posynomials) signomials cannot necessarily be made convex by applying a logarithmic change of variables. Nevertheless, signomial optimization problems often provide a much more accurate mathematical representation of real-world nonlinear optimization problems.
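The convexity remark can be made precise with a short, standard derivation (a sketch not spelled out in the source, assuming the substitutions $x_j = e^{y_j}$ and, for posynomials, $c_i = e^{b_i}$):

```latex
% Why the logarithmic change of variables convexifies posynomials but not signomials:
\[
  f(x) = \sum_{i=1}^{M} c_i \prod_{j=1}^{n} x_j^{a_{ij}}
  \;\xrightarrow{\;x_j = e^{y_j}\;}\;
  \sum_{i=1}^{M} c_i \exp\!\Big(\textstyle\sum_{j} a_{ij} y_j\Big).
\]
\[
  \text{If all } c_i > 0 \text{ (posynomial), write } c_i = e^{b_i}:\quad
  \log f\big(e^{y}\big)
  = \log \sum_{i=1}^{M} \exp\!\Big(b_i + \textstyle\sum_{j} a_{ij} y_j\Big),
\]
% a log-sum-exp of affine functions of y, hence convex.
% A negative coefficient c_i breaks this argument, so a general signomial
% need not become convex under the same change of variables.
```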
|
Klout
|
Klout
|
Klout was a website and mobile app that used social media analytics to rate its users according to online social influence via the "Klout Score", a numerical value between 1 and 100. In determining the user score, Klout measured the size of a user's social media network and correlated the content created to measure how other users interacted with that content. Klout launched in 2008. Lithium Technologies, which acquired the site in March 2014, closed the service on May 25, 2018. Klout used Bing, Facebook, Foursquare, Google+, Instagram, LinkedIn (individuals' pages, not corporate/business), Twitter, Wikipedia, and YouTube data to create Klout user profiles that were assigned a "Klout Score". Klout scores ranged from 1 to 100, with higher scores corresponding to a higher ranking of the breadth and strength of one's online social influence. While all Twitter users were assigned a score, users who registered at Klout could link multiple social networks, whose data was then aggregated to influence the user's Klout Score.
|
Klout
|
Methodology
|
Klout measured influence by using data points from Twitter, such as following count, follower count, retweets, list memberships, how many spam/dead accounts were following a user, how influential the people who retweeted a user were, and unique mentions. This information was combined with data from a number of other social networks' followings and interactions to come up with the Klout Score. Other accounts such as Flickr, Blogger, Tumblr, Last.fm, and WordPress could also be linked by users, but they did not weigh into the Klout Score. Microsoft announced a strategic investment in Klout in September 2012, whereby Bing would have access to Klout influence technology, and Klout would have access to Bing search data for its scoring algorithm. Klout scores were supplemented with three nominally more specific measures, which Klout called "true reach", "amplification" and "network impact". True reach was based on the size of a user's engaged audience, the people who actively engage with the user's messages. Amplification related to the likelihood that one's messages would generate actions, such as retweets, mentions, likes and comments. Network impact reflected the computed influence value of a person's engaged audience.
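Klout never published its full algorithm, so the following Python sketch is a purely hypothetical illustration of the general idea described above: several engagement metrics are normalized and combined into a bounded 1-100 score. The metric names and weights are invented for the example and do not reflect Klout's actual formula.

```python
import math

# Hypothetical engagement metrics for one user (not Klout's real inputs or weights).
metrics = {"followers": 1200, "retweets": 85, "mentions": 40, "list_memberships": 12}
weights = {"followers": 0.4, "retweets": 0.3, "mentions": 0.2, "list_memberships": 0.1}

def toy_influence_score(metrics, weights):
    """Combine log-scaled metrics into a 1-100 score (illustrative only)."""
    raw = sum(w * math.log1p(metrics[name]) for name, w in weights.items())
    # Squash the weighted sum into (0, 1), then map onto the 1-100 range.
    squashed = 1 - math.exp(-raw / 5)
    return round(1 + 99 * squashed)

print(toy_influence_score(metrics, weights))
```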
|
Klout
|
History
|
In 2007, Joe Fernandez underwent surgery that required his jaw to be wired shut. Because he could not speak for three months, he turned to Facebook and Twitter for social interaction. During this period, he became obsessed with the idea that "word of mouth was measurable." Pulling data from Twitter's API, he created a prototype that would assign users a score out of 100 to measure their influence. Midway into 2008, he showed the prototype to some friends, who told him it was "the dumbest thing ever."
In May 2018, Klout announced that it would cease operations on May 25, 2018. The closure had been planned for some time and was accelerated by the entry into force of the General Data Protection Regulation.
|
Klout
|
Business model
|
Perks
The primary business model for Klout involved companies paying Klout for Perks campaigns, in which a company offered free services or products to Klout users who matched a pre-defined set of criteria including their scores, topics, and geographic locations. While Klout users who had received Perks were under no obligation to write about them, the hope was that they would effectively advertise the products on social media. Klout offered the Perks program beginning in 2010. According to Klout CEO Joe Fernandez, about 50 partnerships had been established as of November 2011. In May 2013, Klout announced that its users had claimed more than 1 million Perks across over 400 campaigns.
|
Klout
|
Business model
|
Klout for business
In March 2013, Klout announced its intention to begin displaying business analytics aimed at helping business and brand users learn about their online audiences.
Content page
In September 2012, Klout announced an information-sharing partnership with the Bing search engine, showing Klout scores in Bing searches and allowing Klout users to post items selected by Bing to social media.
|
Klout
|
Criticism
|
Several objections to Klout's methodology were raised regarding both the process by which scores were generated and the overall societal effect. Critics pointed out that Klout scores were not representative of the influence a person really has, highlighted by Barack Obama, then President of the United States, having a lower influence score than a number of bloggers. Other social critics argued that the Klout score devalued authentic online communication and promoted social ranking and stratification by trying to quantify human interaction. Klout attempted to address some of these criticisms and updated its algorithms so that Barack Obama's importance was better reflected.
The site was criticized for violating the privacy of minors and for exploiting users for its own profit. John Scalzi described the principle behind Klout's operation as "socially evil" in its exploitation of its users' status anxiety. Charles Stross described the service as "the Internet equivalent of herpes," blogging that his analysis of Klout's terms and conditions revealed that the company's business model was illegal in the United Kingdom, where it conflicted with the Data Protection Act 1998; Stross advised readers to delete their Klout accounts and opt out of Klout services. Ben Rothke concluded that "Klout has its work cut out, and it seems like they need to be in beta a while longer. Klout can and should be applauded for trying to measure this monstrosity called social influence; but their results of influence should, in truth, carry very little influence."
Klout was also criticized for the opacity of its methodology. While it was claimed that advanced machine learning techniques were used, leveraging network theory, Sean Golliher analysed Klout scores of Twitter users and found that the simple logarithm of the number of followers was sufficient to explain 95% of the variance. In November 2015 Klout released an academic paper discussing its methodology at the IEEE BigData 2015 Conference.
In spite of the controversy, some employers made hiring decisions based on Klout scores. As reported in an article for Wired, a man recruited for a VP position with fifteen years of experience consulting for companies including America Online, Ford and Kraft was eliminated as a candidate specifically because of his Klout score, which at the time was 34, in favour of a candidate with a score of 67.
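Golliher's finding is the kind of relationship that is easy to check with an ordinary least-squares fit. The Python sketch below is illustrative only, using made-up data rather than real Klout scores, and simply shows how one would regress scores against the logarithm of follower counts and report the explained variance (R²):

```python
import math

# Hypothetical (score, follower-count) pairs; real Klout data is not reproduced here.
data = [(25, 150), (40, 1_200), (55, 9_000), (62, 30_000), (78, 400_000), (85, 2_000_000)]

ys = [score for score, _ in data]
xs = [math.log10(followers) for _, followers in data]

# Ordinary least squares for y = a + b * log10(followers).
n = len(data)
mx, my = sum(xs) / n, sum(ys) / n
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
a = my - b * mx

# Coefficient of determination R^2: fraction of score variance explained by log(followers).
ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
ss_tot = sum((y - my) ** 2 for y in ys)
print(f"score ~ {a:.1f} + {b:.1f} * log10(followers),  R^2 = {1 - ss_res / ss_tot:.3f}")
```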
|
Klout
|
Notable events
|
September 2011: Klout integrated with Google+.
October 2011: Klout changed its scoring algorithm, lowering many scores and creating complaints.
November 2011: Klout partnered with Wahooly for their beta launch.
January 2012: Klout was able to raise an estimated $30 million from a host of venture capital firms.
February 2012: Klout acquired local and mobile neighborhood app Blockboard.
May 2012: Klout announced growth of 2000 new partners over a one-year period.
August 14, 2012: Klout changed its algorithm again.
September 2012: Microsoft announced a strategic investment in Klout for an undisclosed sum.
March 28, 2013: Klout announced inclusion of Instagram analytics in factoring Klout scores.
May 13, 2013: Klout users had claimed more than 1 million Perks across over 400 campaigns.
March 27, 2014: Lithium Technologies acquired Klout.
September 14, 2015: Engagement on YouTube content was factored into the Klout Score.
October 29, 2015: Klout exposed the inner workings of the Klout Score.
May 10, 2018: Lithium announced that they would be ending the service on May 25, 2018.
|
Climate Data Record
|
Climate Data Record
|
A Climate Data Record (CDR) is a specific definition of a climate data series, developed by the Committee on Climate Data Records from NOAA Operational Satellites of the National Research Council at the request of NOAA in the context of satellite records. It is defined as "a time series of measurements of sufficient length, consistency, and continuity to determine climate variability and climate change." Such measurements provide an objective basis for the understanding and prediction of climate and its variability, such as global warming.
|
Climate Data Record
|
Interim Climate Data Record (ICDR)
|
An Interim Climate Data Record (ICDR) is a dataset that has been forward processed, using the baselined CDR algorithm and processing environment but whose consistency and continuity have not been verified. Eventually it will be necessary to perform a new reprocessing of the CDR and ICDR parts together to guarantee consistency, and the new reprocessed data record will replace the old CDR.
|
Climate Data Record
|
Fundamental Climate Data Record (FCDR)
|
A Fundamental Climate Data Record is a long-term data record of calibrated and quality-controlled data designed to allow the generation of homogeneous products that are accurate and stable enough for climate monitoring.
|
Climate Data Record
|
Examples of CDRs
|
AVHRR Pathfinder Sea Surface Temperature
GHRSST-PP Reanalysis Project, on the website for GHRSST-PP
Snow and Ice
NOAA's Climate Data Records homepage
|
2-Phospho-L-lactate guanylyltransferase
|
2-Phospho-L-lactate guanylyltransferase
|
2-Phospho-L-lactate guanylyltransferase (EC 2.7.7.68, CofC, MJ0887) is an enzyme with systematic name GTP:2-phospho-L-lactate guanylyltransferase. This enzyme catalyses the following chemical reaction:
(2S)-2-phospholactate + GTP ⇌ (2S)-lactyl-2-diphospho-5'-guanosine + diphosphate
This enzyme is involved in the biosynthesis of coenzyme F420.
|
Knotenschiefer
|
Knotenschiefer
|
Knotenschiefer is a variety of spotted slate characterized by conspicuous subspherical or polyhedral clots that are often individual minerals such as cordierite, biotite, chlorite, andalusite and others. Like fleckschiefer, fruchtschiefer and garbenschiefer, knotenschiefer is a variety of contact metamorphic slate. It is formed at temperatures of around 400 °C and its dark coloration is caused by graphite; fruchtschiefer occurs at 500 °C. Knotenschiefer is characterised by small nodules, up to one centimetre in size, and nodular deposits of mica that result from the growth in grain size during metamorphism. The nodules consist of iron minerals, carbonaceous matter and mica; as the metamorphic temperature rises, minerals such as andalusite or chiastolite increasingly occur.
|
GEN2PHEN
|
GEN2PHEN
|
Genotype to Phenotype Databases: a Holistic Approach (GEN2PHEN) is a European project aiming to integrate information from genotype to phenotype in a unifying knowledge web portal, the Knowledge Centre.
|
GEN2PHEN
|
Summary and Objectives
|
The GEN2PHEN project aims to unify human and model organism genetic variation databases towards increasingly holistic views into Genotype-To-Phenotype (G2P) data, and to link this system into other biomedical knowledge sources via genome browser functionality. By the project's end, it will have established the technological building blocks needed for the evolution of today's diverse G2P databases into a future seamless G2P biomedical knowledge environment. This will consist of a European-centred but globally networked hierarchy of bioinformatics GRID-linked databases, tools and standards, all tied into the Ensembl genome browser. The project has the following specific objectives:
To analyse the G2P field and thus determine emerging needs and practices
To develop key standards for the G2P database field
To create generic database components, services and integration infrastructures for the G2P database domain
To create search modalities and data presentation solutions for G2P knowledge
To facilitate the process of populating G2P databases
To build a major G2P internet portal
To deploy GEN2PHEN solutions to the community
To address system durability and long-term financing
To undertake a whole-system utility and validation pilot study
The GEN2PHEN Consortium members have been selected from a talented pool of European research groups and companies that are interested in the G2P database challenge. Additionally, a few non-EU participants have been included to bring extra capabilities to the initiative. The final constellation is characterised by broad and proven competence, a network of established working relationships, and high-level roles/connections within other significant projects in this domain.
|
GEN2PHEN
|
Background and Concept
|
By providing a complete Homo sapiens ‘parts list’ (the gene sequences) and a powerful ‘toolkit’ (technologies), the Human Genome Project has revolutionised mankind’s ability to explore how genes cause disease and other phenotypes. Studies in this domain are proceeding at a rapid and ever-increasing pace, generating unprecedented amounts of raw and processed data. It is now imperative that the scientific community finds ways to effectively manage and exploit this flood of information for knowledge creation and practical benefit to society. This fundamental goal lies at the heart of the “Genotype-To-Phenotype Databases: A Holistic Solution (GEN2PHEN)” project.
|
GEN2PHEN
|
Background and Concept
|
Previous genetics studies have shown that inter-individual genome variation plays a major role in differential normal development and disease processes. However, the details of how these relationships work are far from clear, even in the case of most Mendelian disorders where single genetic alterations are fully penetrant (essentially causative, rather than risk modifying). Background genetic effects (modifier genes), epistasis, somatic variation, and environmental factors all complicate the situation. This is particularly the case in complex, multi-factorial disorders (e.g., cancer, heart disease, diabetes, dementia) that will affect most of us at some stage in our lifetime. Strategies do, however, now exist to study the genetics of these disorders, and such investigations are a major focus of research throughout Europe and beyond. A common thread in these studies is the need to create ever-larger datasets and integrate these more effectively.
|
GEN2PHEN
|
Related Projects and Applications
|
GWAS Central
Leiden Open Variation Database
Locus Reference Genomic (LRG)
|
GEN2PHEN
|
Partners
|
University of Leicester, UK
European Molecular Biology Laboratory, Germany
Fundació IMIM, Spain
Leiden University Medical Center, Netherlands
Institut National de la Santé et de la Recherche Médicale, France
Karolinska Institutet, Sweden
Foundation for Research and Technology – Hellas, Greece
Commissariat à l’Energie Atomique, France
Erasmus University Medical Center, Netherlands
Institute for Molecular Medicine Finland, University of Helsinki, Finland
University of Aveiro – IEETA, Portugal
University of Western Cape, South Africa
Council of Scientific and Industrial Research, India
Swiss Institute of Bioinformatics, Switzerland
University of Manchester, UK
BioBase GmbH, Germany
deCODE genetics ehf, Iceland
Biocomputing Platforms Ltd Oy, Finland
University of Patras, Greece
University Medical Center Groningen (UMCG), Netherlands (from March 2012)
University of Lund (ULUND), Sweden (from March 2012)
Synapse Research Management Partners, Spain (from March 2012)
|
Tap water
|
Tap water
|
Tap water (also known as faucet water, running water, or municipal water) is water supplied through a tap, a water dispenser valve. In many countries, tap water usually has the quality of drinking water. Tap water is commonly used for drinking, cooking, washing, and toilet flushing. Indoor tap water is distributed through indoor plumbing, which has existed since antiquity but was available to very few people until the second half of the 19th century when it began to spread in popularity in what are now developed countries. Tap water became common in many regions during the 20th century, and is now lacking mainly among people in poverty, especially in developing countries.
|
Tap water
|
Tap water
|
Governmental agencies commonly regulate tap water quality. Household water purification methods such as water filters, boiling, or distillation can be used to treat tap water's microbial contamination to improve its potability. The application of technologies (such as water treatment plants) involved in providing clean water to homes, businesses, and public buildings is a major subfield of sanitary engineering. Calling a water supply "tap water" distinguishes it from the other main types of fresh water which may be available; these include water from rainwater-collecting cisterns, water from village pumps or town pumps, water from wells, or water carried from streams, rivers, or lakes (whose potability may vary).
|
Tap water
|
Background
|
Providing tap water to large urban or suburban populations requires a complex and carefully designed system of collection, storage, treatment and distribution, and is commonly the responsibility of a government agency. Publicly available treated water has historically been associated with major increases in life expectancy and improved public health. Water disinfection can greatly reduce the risks of waterborne diseases such as typhoid and cholera. There is a great need around the world to disinfect drinking water. Chlorination is currently the most widely used water disinfection method, although chlorine compounds can react with substances in water and produce disinfection by-products (DBPs) that pose problems to human health. Local geological conditions affecting groundwater determine the presence of various metal ions, often rendering the water "soft" or "hard".
Tap water remains susceptible to biological or chemical contamination. Water contamination remains a serious health issue around the world, and diseases resulting from consuming contaminated water cause the death of 1.6 million children each year. In the event of contamination deemed dangerous to public health, government officials typically issue an advisory regarding water consumption. In the case of biological contamination, residents are usually advised to boil their water before consumption or to use bottled water as an alternative. In the case of chemical contamination, residents may be advised to refrain from consuming tap water entirely until the matter is resolved.
|
Tap water
|
Background
|
In many areas, a low concentration of fluoride (< 1.0 ppm F) is intentionally added to tap water to improve dental health, although in some communities "fluoridation" remains a controversial issue (see water fluoridation controversy). However, long-term consumption of water with a high fluoride concentration (> 1.5 ppm F) can have serious undesirable consequences such as dental fluorosis (mottling of the enamel) and skeletal fluorosis (bone deformities in children). Fluorosis severity depends on how much fluoride is present in the water, as well as on people's diet and physical activity. Defluoridation methods include membrane-based methods, precipitation, adsorption, and electrocoagulation.
|
Tap water
|
Fixtures and appliances
|
Everything in a building that uses water falls under one of two categories: fixture or appliance. As these consumption points perform their functions, most produce waste/sewage components that require removal by the waste/sewage side of the system. The minimum protection against backflow is an air gap; see cross-connection control and backflow prevention for an overview of backflow prevention methods and devices currently in use, based on both mechanical and physical principles. Fixtures are devices that use water without an additional source of power.
|
Tap water
|
Fixtures and appliances
|
Fittings and valves Potable water supply systems are composed of pipes, fittings and valves.
|
Tap water
|
Fixtures and appliances
|
Materials
The installation of water pipes can be done using the following plastic and metal materials:
Plastics
polybutylene (PB)
high-density cross-linked polyethylene (PE-X)
block copolymer of polypropylene (PP-B)
polypropylene homopolymer (PP-H)
random copolymer of polypropylene (PP-R)
multilayer: cross-linked polyethylene, aluminium, high-density polyethylene (PE-X/Al/PE-HD)
multilayer: cross-linked polyethylene, aluminium, cross-linked polyethylene (PE-X/Al/PE-X)
multilayer: polypropylene random copolymer, aluminium, polypropylene random copolymer (PP-R/Al/PP-R)
chlorinated polyvinyl chloride (PVC-C)
unplasticized polyvinyl chloride, cold water only (PVC-U)
Metals
ordinary galvanized carbon steel
corrosion-resistant steel
deoxidized high-phosphorus copper (Cu-DHP)
lead (no longer used for new installations due to its toxicity)
Other materials may be used if pipes made from them have been approved for circulation and are in widespread use in the construction of water supply systems.
|
Tap water
|
Fixtures and appliances
|
Lead pipes
For many centuries, water pipes were made of lead because of its ease of processing and durability. The use of lead pipes was a cause of health problems due to ignorance of the dangers of lead to the human body; lead exposure causes miscarriages and high death rates in newborns. Lead pipes, which were installed mostly in the late 1800s in the US, are still common today, many of them located in the Northeast and the Midwest. Their impact is relatively small because fouling and scale deposits inside the pipes limit the amount of lead that leaches into the water; however, lead pipes are still detrimental. Most of the lead pipes that exist today are being removed and replaced with more common materials, copper or some type of plastic.
|
Tap water
|
Fixtures and appliances
|
The historical use of lead pipes survives in the words some languages use for the experts involved in the installation, repair and maintenance of water supply systems: the English word 'plumber' and the French word 'plombier' both derive from the Latin word for lead, plumbum.
|
Tap water
|
Potable water supply
|
Potable water is water that is drinkable and does not pose a risk to health. This supply may come from several possible sources.
|
Tap water
|
Potable water supply
|
Municipal water supply
Water wells
Processed water from creeks, streams, rivers, lakes, rainwater, etc.
Domestic water systems have been evolving since people first located their homes near a running water supply, such as a stream or river. The water flow also allowed sending wastewater away from the residences. Modern plumbing delivers clean, safe, potable water to each service point in the water distribution system, including taps. It is important that the clean water not be contaminated by the wastewater (disposal) side of the process system. Historically, this contamination of drinking water has been one of the largest killers of humans.
Most of the mandates for enforcing drinking water quality standards apply not to the distribution system but to the treatment plant. Even though the water distribution system is supposed to deliver the treated water to the consumers' taps without water quality degradation, complicated physical, chemical, and biological factors within the system can cause contamination of tap water.
There is a huge gap in potable water supply between the developed and the developing world. In general, Africa, especially Sub-Saharan Africa, has the poorest water supply system in the world because of insufficient access to the system and the low quality of the water in the region, while Finland has the best tap water quality in the world, according to reports by UNICEF and UNESCO.
Tap water can sometimes appear cloudy, which is often mistaken for mineral impurities in the water. The cloudiness is usually caused by air bubbles coming out of solution due to a change in temperature or pressure. Because cold water holds more dissolved gas than warm water, water with a high dissolved gas content that is heated or depressurized can hold less gas, and small bubbles appear. The harmless cloudiness disappears quickly as the gas is released from the water.
|
Tap water
|
Potable water supply
|
Hot water supply
Domestic hot water is provided by means of water heater appliances, or through district heating. The hot water from these units is then piped to the various fixtures and appliances that require hot water, such as lavatories, sinks, bathtubs, showers, washing machines, and dishwashers.
Water flow reduction
Water flow through a tap can be reduced by inexpensive small plastic flow reducers. These restrict flow by between 15% and 50%, aiding water conservation and reducing the burden on both water supply and treatment facilities.
|
Tap water
|
Comparison to bottled water
|
United States
Contaminant levels found in tap water vary between households and plumbing systems. While the majority of US households have access to high-quality tap water, demand for bottled water continues to increase. In 2002, a Gallup public opinion poll revealed that the possible health risk associated with tap water consumption is one of the main reasons American consumers prefer bottled water over tap water. The level of trust in tap water depends on various criteria, including existing governmental regulations on water quality and their enforcement. In 1993, a cryptosporidium outbreak in Milwaukee, Wisconsin, sickened more than 400,000 residents and is considered the largest such outbreak in US history. Severe violations of tap water standards contribute to a decrease in public trust.
The difference in water quality between bottled and tap water is debatable. In 1999, the Natural Resources Defense Council (NRDC) released controversial findings from a four-year study on bottled water. The study claimed that one-third of the tested waters were contaminated with synthetic organic chemicals, bacteria, and arsenic. At least one sample exceeded state guidelines for contamination levels in bottled water.
In the United States, some municipalities make an effort to use tap water over bottled water on governmental properties and at events. Voters in Washington State repealed a bottled water tax via citizen initiative.
|
Tap water
|
Regulation and compliance
|
United States
The US Environmental Protection Agency (EPA) regulates the allowable levels of some contaminants in public water systems. There may also be numerous contaminants in tap water that are not regulated by the EPA and yet are potentially harmful to human health. Community water systems—those systems that serve the same people throughout the year—must provide an annual "Consumer Confidence Report" to customers. The report identifies contaminants, if any, in the water system and explains the potential health impacts. After the Flint lead crisis (2014), researchers have paid special attention to studying quality trends in drinking water across the US. Unsafe levels of lead were found in tap water in different cities, such as Sebring, Ohio, in August 2015, and Washington, DC, in 2001. Several studies show that a Safe Drinking Water Act (SDWA) health violation occurs in around 7-8% of community water systems (CWSs) in an average year. Around 16 million cases of acute gastroenteritis occur each year in the US due to contaminants in drinking water.
|
Tap water
|
Regulation and compliance
|
The USGS has tested tap water from 716 locations across the United States, finding PFAS exceeding the EPA advisories in approximately 75% of samples from urban areas and in approximately 25% of samples from rural areas. Before a water supply system is constructed or modified, the designer and contractor are required to consult the local plumbing code and obtain a building permit prior to construction. Replacing an existing water heater may require a permit and inspection of the work. The US national standard for potable water piping is NSF/ANSI 61, which certifies materials for use in drinking water systems. NSF/ANSI also sets standards for certifying polytanks, though the Food and Drug Administration (FDA) approves the materials.
|
Tap water
|
Regulation and compliance
|
Japan
To improve water quality, Japan's Ministry of Health revised its water quality standards, which took effect in April 2004. Numerous professionals developed the drinking water standards and determined ways to manage a high-quality water system. In 2008, revised regulations were introduced to further improve water quality and reduce the risk of water contamination.
|
Calcium 2-aminoethylphosphate
|
Calcium 2-aminoethylphosphate
|
Calcium 2-aminoethylphosphate (Ca-AEP or Ca-2AEP) is a compound discovered by the biochemist Erwin Chargaff in 1941. It is the calcium salt of phosphorylethanolamine. It was patented by Hans Alfred Nieper and Franz Kohler.
|
Calcium 2-aminoethylphosphate
|
Terminology and glossary
|
Calcium 2-amino ethyl phosphoric acid (Ca-AEP or Ca-2AEP) is also called calcium ethylamino-phosphate (calcium EAP), calcium colamine phosphate, calcium 2-aminoethyl ester of phosphoric acid, and calcium 2-amino ethanol phosphate. 2-AEP plays a role as a component of the cell membrane and at the same time has the property of forming complexes with minerals. This mineral transporter goes into the outer layer of the outer cell membrane, where it releases its associated mineral and is itself metabolized into the structure of the cell membrane.
|
Calcium 2-aminoethylphosphate
|
History, treatments, uses, and risks
|
Ca-AEP was discovered by Erwin Chargaff in 1953. According to the U.S. National Multiple Sclerosis Society, calcium EAP is often promoted as a cure or therapy for multiple sclerosis and many other diseases. However, the society states that the treatment is not recommended by its medical advisory board, and also notes that the Food and Drug Administration has classified it as unsafe and unapproved for use. Calcium 2-AEP is manufactured by numerous nutraceutical companies and is sold online and in health food stores.
|
Host microbe interactions in Caenorhabditis elegans
|
Host microbe interactions in Caenorhabditis elegans
|
Caenorhabditis elegans–microbe interactions are defined as any interaction that encompasses the association with microbes that temporarily or permanently live in or on the nematode C. elegans. The microbes can engage in a commensal, mutualistic or pathogenic interaction with the host. These include bacterial, viral, unicellular eukaryotic, and fungal interactions. In nature C. elegans harbours a diverse set of microbes. In contrast, C. elegans strains that are cultivated in laboratories for research purposes have lost the naturally associated microbial communities and are commonly maintained on a single bacterial strain, Escherichia coli OP50. However, E. coli OP50 does not allow for reverse genetic screens because RNAi libraries have only been generated in strain HT115, which limits the ability to study bacterial effects on host phenotypes. The host-microbe interactions of C. elegans are closely studied because many of the genes and pathways involved have orthologs in humans; the better the host interactions of C. elegans are understood, the better the corresponding interactions within the human body can be understood.
|
Host microbe interactions in Caenorhabditis elegans
|
Natural ecology
|
C. elegans is a well-established model organism in different research fields, yet its ecology is only poorly understood. The worms have a short development cycle lasting only three days, with a total life span of about two weeks. C. elegans was previously considered a soil-living nematode, but in the last 10 years it has been shown that its natural habitats are microbe-rich, such as compost heaps, rotten plant material, and rotten fruits. Most studies on C. elegans are based on the N2 strain, which has adapted to laboratory conditions. Only in the last few years has the natural ecology of C. elegans been studied in more detail, and one current research focus is its interaction with microbes. As C. elegans feeds on bacteria (microbivory), the intestine of worms isolated from the wild is usually filled with a large number of bacteria. In contrast to the very high diversity of bacteria in the natural habitat of C. elegans, the lab strains are fed with only one bacterial strain, the Escherichia coli derivative OP50. OP50 was not co-isolated with C. elegans from nature, but was rather chosen because of its high convenience for laboratory maintenance. Bleaching is a common method in the laboratory to clean C. elegans of contamination and to synchronize a population of worms. During bleaching the worms are treated with 5 N NaOH and household bleach, leading to the death of all worms and survival of only the nematode eggs. The larvae hatching from these eggs lack any microbes, as none of the currently known C. elegans-associated microbes can be transferred vertically. Since most laboratory strains are kept under these gnotobiotic conditions, nothing is known about the composition of the C. elegans microbiota. The ecology of C. elegans can only be fully understood in the light of the multiple interactions with the microorganisms it encounters in the wild. The effect of microbes on C. elegans can vary from beneficial to lethal.
|
Host microbe interactions in Caenorhabditis elegans
|
Beneficial microbes
|
In its natural habitat C. elegans is constantly confronted with a variety of bacteria that can have both negative and positive effects on its fitness. To date, most research on C. elegans–microbe interactions has focused on interactions with pathogens. Only recently have some studies addressed the role of commensal and mutualistic bacteria in C. elegans fitness. In these studies, C. elegans was exposed to various soil bacteria, either isolated in a different context or from C. elegans lab strains transferred to soil. These bacteria can affect C. elegans either directly through specific metabolites, or they can cause a change in the environmental conditions and thus induce a physiological response in the host.
|
Host microbe interactions in Caenorhabditis elegans
|
Beneficial microbes
|
Beneficial bacteria can have a positive effect on the lifespan, generate certain pathogen resistances, or influence the development of C. elegans.
|
Host microbe interactions in Caenorhabditis elegans
|
Beneficial microbes
|
Lifespan extension
The lifespan of C. elegans is prolonged when grown on plates with Pseudomonas sp. or Bacillus megaterium compared to individuals living on E. coli. The lifespan extension mediated by B. megaterium is greater than that caused by Pseudomonas sp. As determined by microarray analysis (a method which allows the identification of C. elegans genes that are differentially expressed in response to different bacteria), 14 immune defence genes were up-regulated when C. elegans was grown on B. megaterium, while only two were up-regulated when fed with Pseudomonas sp. In addition to immune defence genes, other upregulated genes are involved in the synthesis of collagen and other cuticle components, indicating that the cuticle might play an important role in the interaction with microbes. Although some of the genes are known to be important for C. elegans lifespan extension, the precise underlying mechanisms remain unclear.
|
Host microbe interactions in Caenorhabditis elegans
|
Beneficial microbes
|
Protection against microbes
The microbial communities residing inside the host body are now recognized to be important for effective immune responses, yet the molecular mechanisms underlying this protection are largely unknown. Bacteria can help the host fight against pathogens either by directly stimulating the immune response or by competing with the pathogenic bacteria for available resources. In C. elegans, some associated bacteria seem to confer protection against pathogens. For example, when C. elegans is grown on Bacillus megaterium or Pseudomonas mendocina, worms are more resistant to infection with the pathogenic bacterium Pseudomonas aeruginosa, which is a common bacterium in C. elegans' natural environment and therefore a potential natural pathogen. This protection is characterized by prolonged survival on P. aeruginosa in combination with a delayed colonization of C. elegans by the pathogen. Due to its comparatively large size, B. megaterium is not an optimal food source for C. elegans, resulting in delayed development and a reduced reproductive rate. The ability of B. megaterium to enhance resistance against infection with P. aeruginosa seems to be linked to the decrease in reproductive rate. However, the protection against P. aeruginosa infection provided by P. mendocina is reproduction-independent and depends on the p38 mitogen-activated protein kinase pathway: P. mendocina is able to activate the p38 MAPK pathway and thus stimulate the immune response of C. elegans against the pathogen. A common way for an organism to protect itself against microbes is to increase fecundity, raising the number of surviving individuals in the face of an attack. This defense against parasites is genetically linked to stress response pathways and depends on the innate immune system.
|
Host microbe interactions in Caenorhabditis elegans
|
Beneficial microbes
|
Effects on development
Under natural conditions it might be advantageous for C. elegans to develop as fast as possible in order to reproduce rapidly. The bacterium Comamonas DA1877 accelerates the development of C. elegans. Neither TOR (target of rapamycin) nor insulin signalling seems to mediate this effect on accelerated development. It is thus possible that secreted metabolites of Comamonas, which might be sensed by C. elegans, lead to faster development. Worms that were fed with Comamonas DA1877 also showed a reduced number of offspring and a reduced lifespan. Another microbe that accelerates C. elegans growth is L. sphaericus; this bacterium significantly increased the growth rate of C. elegans compared to the normal diet of E. coli OP50. C. elegans is mostly grown and observed in a controlled laboratory with a controlled diet, and may therefore show different growth rates with naturally occurring microbes.
|
Host microbe interactions in Caenorhabditis elegans
|
Pathogenic microbes
|
In its natural environment C. elegans is confronted with a variety of different potential pathogens. C. elegans has been used intensively as a model organism for studying host-pathogen interactions and the immune system. These studies revealed that C. elegans has well-functioning innate immune defenses. The first line of defense is the extremely tough cuticle, which provides an external barrier against pathogen invasion. In addition, several conserved signaling pathways contribute to defense, including the DAF-2/DAF-16 insulin-like receptor pathway and several MAP kinase pathways, which activate physiological immune responses. Finally, pathogen avoidance behavior represents another line of C. elegans immune defense. All these defense mechanisms do not work independently, but jointly ensure an optimal defense response against pathogens. Many microorganisms have been found to be pathogenic for C. elegans under laboratory conditions. To identify potential C. elegans pathogens, worms in the L4 larval stage are transferred to a medium that contains the organism of interest, which in most cases is a bacterium. Pathogenicity of the organism can be inferred by measuring the lifespan of the worms. There are several known human pathogens that have a negative effect on C. elegans survival. Pathogenic bacteria can also form biofilms, whose sticky exopolymer matrix can impede C. elegans motility and cloak bacterial quorum-sensing chemoattractants from predator detection. However, only very few natural C. elegans pathogens are currently known.
|
Host microbe interactions in Caenorhabditis elegans
|
Pathogenic microbes
|
Eukaryotic microbes
One of the best studied natural pathogens of C. elegans is the microsporidium Nematocida parisii, which was directly isolated from wild-caught C. elegans. N. parisii is an intracellular parasite that is exclusively transmitted horizontally from one animal to another. The microsporidian spores are likely to exit the cells by disrupting a conserved cytoskeletal structure in the intestine called the terminal web. It seems that none of the known immune pathways of C. elegans is involved in mediating resistance against N. parisii. Microsporidia were found in several nematodes isolated from different locations, indicating that microsporidia are common natural parasites of C. elegans. The N. parisii–C. elegans system represents a very useful tool to study infection mechanisms of intracellular parasites. Additionally, a new species of microsporidia was recently found in a wild-caught C. elegans; genome sequencing places it in the same genus Nematocida as the microsporidia previously seen in these nematodes. This new species was named Nematocida displodere, after a phenotype seen in late-infected worms that explode at the vulva to release infectious spores. N. displodere was shown to infect a broad range of tissues and cell types in C. elegans, including the epidermis, muscle, neurons, intestine, seam cells, and coelomocytes. Strangely, the majority of the intestinal infection fails to grow to later parasite stages, while the muscle and epidermal infection thrives. This is in stark contrast to N. parisii, which infects and completes its entire life cycle in the C. elegans intestine. These related Nematocida species are being used to study the host and pathogen mechanisms responsible for allowing or blocking eukaryotic parasite growth in different tissue niches. Another eukaryotic pathogen is the fungus Drechmeria coniospora, which has not been directly co-isolated with C. elegans from nature, but is still considered to be a natural pathogen of C. elegans. D. coniospora attaches to the cuticle of the worm at the vulva, mouth, and anus, and its hyphae penetrate the cuticle. In this way D. coniospora infects the worm from the outside, while the majority of bacterial pathogens infect the worm from the intestinal lumen.
|
Host microbe interactions in Caenorhabditis elegans
|
Pathogenic microbes
|
Viral pathogens
In 2011 the first naturally associated virus was isolated from C. elegans collected outside of a laboratory. The Orsay virus is an RNA virus that is closely related to nodaviruses. The virus is not stably integrated into the host genome and is transmitted horizontally under laboratory conditions. An antiviral RNAi pathway is essential for C. elegans resistance against Orsay virus infection. Before this discovery, no virus, other intracellular pathogen, or multicellular parasite had been known to naturally infect the nematode, which precluded the use of C. elegans as an experimental system for such interactions. In 2005, two reports showed that vesicular stomatitis virus (VSV), an arbovirus with a broad invertebrate and vertebrate host range, could replicate in primary cells derived from C. elegans embryos.
|
Host microbe interactions in Caenorhabditis elegans
|
Pathogenic microbes
|
Bacterial pathogens
Two bacterial strains of the genus Leucobacter were co-isolated from nature with the two Caenorhabditis species C. briggsae and C. sp. 11, and named Verde 1 and Verde 2. These two Leucobacter strains showed contrasting pathogenic effects in C. elegans. Worms that were infected with Verde 2 developed a deformed anal region (the "Dar" phenotype), while infections with Verde 1 resulted in slower growth due to coating of the cuticle with the bacterial strain. In liquid culture, Verde 1-infected worms stuck together with their tails and formed so-called "worm stars". The trapped worms cannot free themselves and eventually die; the dead C. elegans are then used as a food source by the bacteria. Only larvae in the L4 stage seem to be able to escape, by autotomy: they split their bodies in half, so that the anterior half can escape. These "half-worms" remain viable for several days. The Gram-positive bacterium Bacillus thuringiensis is likely associated with C. elegans in nature. B. thuringiensis is a soil bacterium that is often used in infection experiments with C. elegans. It produces pore-forming toxins, called crystal (Cry) toxins, which are associated with spores; the toxins and spores are jointly taken up by C. elegans orally. Inside the host, the toxins bind to the surface of intestinal cells, inducing the formation of pores that destroy these cells. The resulting change in the gut milieu leads to germination of the spores, which subsequently proliferate in the worm body. A notable aspect of the C. elegans–B. thuringiensis system is the high variability in pathogenicity between different strains: there are highly pathogenic strains, but also strains that are less pathogenic or even non-pathogenic.
|
Kamikaze 1NT
|
Kamikaze 1NT
|
Kamikaze 1NT is a preemptive 1NT opening in the game of contract bridge and in common practice shows a balanced hand with 10-12 high-card points (HCP) - also known as the mini-notrump range. It is used in first or second seat hoping to make 1NT opposite an average hand of about 10 HCP.
Originally developed by John Kierein as part of a bidding system to indicate 9-12 HCP, he modified the point range to 10-13 HCP because American Contract Bridge League (ACBL) rules on conventions did not allow the use of Stayman on opening notrump bids with a lower limit below 10 HCP.
|
Bug bash
|
Bug bash
|
In software development, a bug bash is a procedure where all the developers, testers, program managers, usability researchers, designers, documentation folks, and even sometimes marketing people put aside their regular day-to-day duties and "pound on the product"—that is, each exercises the product in every way they can think of. Because each person will use the product in slightly different (or very different) ways, and the product is getting a great deal of use in a short amount of time, this approach may reveal bugs relatively quickly.
The use of bug-bashing sessions is one possible tool in the testing methodology TMap (test management approach). Bug-bashing sessions are usually announced to the organization some days or weeks ahead of time. The test management team may specify that only some parts of the product need testing. It may give detailed instructions to each participant about how to test, and how to record bugs found.
|
Bug bash
|
Bug bash
|
In some organizations, a bug-bashing session is followed by a party and a prize to the person who finds the worst bug, and/or the person who finds the greatest total of bugs.
A bug bash is a collaboration event; a step-by-step procedure is given in the article 'Bug Bash—A Collaboration Episode'.
|
Urmetazoan
|
Urmetazoan
|
The Urmetazoan is the hypothetical last common ancestor of all animals, or metazoans. It is universally accepted to be a multicellular heterotroph — with the novelties of a germline and oogamy, an extracellular matrix (ECM) and basement membrane, cell-cell and cell-ECM adhesions and signaling pathways, collagen IV and fibrillar collagen, different cell types (as well as expanded gene and protein families), spatial regulation and a complex developmental plan, and relegated unicellular stages.
|
Urmetazoan
|
Choanoflagellates
|
All animals are posited to have evolved from a flagellated eukaryote. Their closest known living relatives are the choanoflagellates, collared flagellates whose cell morphology is similar to the choanocyte cells of certain sponges.
Molecular studies place animals in a supergroup called the opisthokonts, which also includes the choanoflagellates, fungi, and a few small parasitic protists. The name comes from the posterior location of the flagellum in motile cells, such as most animal spermatozoa, whereas other eukaryotes tend to have anterior flagella.
|
Urmetazoan
|
Hypotheses
|
Several different hypotheses for the animals' last common ancestor have been suggested.
|
Urmetazoan
|
Hypotheses
|
The placula hypothesis, proposed by Otto Bütschli, holds that the last common ancestor of animals was an amorphous blob with no symmetry or axis. The center of this blob rose slightly above the silt, forming a hollow that aided feeding on the sea floor underneath. As the cavity grew deeper and deeper, the organisms resembled a thimble, with an inside and an outside. This body shape is found in sponges and cnidaria. This explanation leads to the formation of the bilaterian body plan; the urbilaterian would develop its symmetry when one end of the placula became adapted for forward movement, resulting in left-right symmetry.
The planula hypothesis, proposed by Otto Bütschli, suggests that metazoa are derived from a planula; that is, the larva of certain cnidaria, or the adult form of the placozoans. Under this hypothesis, the larva became sexually mature through paedomorphosis, and could reproduce without passing through a sessile phase.
The gastraea hypothesis was proposed by Ernst Haeckel in 1874, shortly after his work on the calcareous sponges. He proposed that this group of sponges is monophyletic with all eumetazoans, including the bilaterians. This suggests that gastrulation and the gastrula stage are universal for eumetazoans. It has been perceived as problematic that gastrulation by invagination is by no means universal among eumetazoans. Only recently has an invagination been confirmed in a Calcarea sponge, albeit too early to form a remaining inner space (archenteron).
The bilaterogastraea hypothesis was developed by Gösta Jägersten as an adaptation of Ernst Haeckel's gastraea hypothesis. He proposed that the Bilaterogastraea had a two-stage life cycle, with a pelagic juvenile and a benthic adult stage. He saw the invagination of the original gastrula stage as bilaterally symmetric rather than radially symmetric.
The phagocytella hypothesis was proposed by Élie Metchnikoff.
|
Mechanical aptitude
|
Mechanical aptitude
|
According to Paul Muchinsky in his textbook Psychology Applied to Work, "mechanical aptitude tests require a person to recognize which mechanical principle is suggested by a test item." The underlying concepts measured by these items include sound and heat conduction, velocity, gravity, and force.
A number of tests of mechanical comprehension and mechanical aptitude have been developed and are predictive of performance in, for instance, manufacturing/production and technical jobs.
|
Mechanical aptitude
|
Background information
|
Military information
Aptitude tests have been used for military purposes since World War I to screen recruits for military service. The Army Alpha and Army Beta tests were developed in 1917-1918 so that commanders could measure the ability of personnel. The Army Alpha was a test that assessed verbal ability, numerical ability, ability to follow directions, and general knowledge of specific information. The Army Beta was its non-verbal counterpart, used to evaluate the aptitude of illiterate, unschooled, or non-English-speaking draftees or volunteers.
|
Mechanical aptitude
|
Background information
|
During World War II, the Army Alpha and Beta tests were replaced by the Army General Classification Test (AGCT) and the Navy General Classification Test (NGCT). The AGCT was described as a test of general learning ability and was used by the Army and Marine Corps to assign recruits to military jobs. About 12 million recruits were tested using the AGCT during World War II; the NGCT was similarly used by the Navy to assign sailors to military jobs.
Additional classification tests were developed early in World War II to supplement the AGCT and the NGCT. These included specialized aptitude tests related to the technical fields (mechanical, electrical, and later, electronics), clerical and administrative tests, radio code operation tests, language tests, and driver selection tests.
|
Mechanical aptitude
|
Background information
|
Mechanical aptitude and spatial relations
Mechanical aptitude tests are often coupled together with spatial relations tests. Mechanical aptitude is a complex function and is the sum of several different capacities, one of which is the ability to perceive spatial relations. Some research has shown that spatial ability is the most important part of mechanical aptitude for certain jobs. Because of this, spatial relations tests are often given separately, or in part with mechanical aptitude tests.
|
Mechanical aptitude
|
Background information
|
Gender differences
There is no evidence of a general intelligence difference between men and women. In recent years, another mechanical aptitude test was created with the main purpose of giving women a fair chance to perform at or above the level of men. Males still perform at a much higher level than females on such tests, but the gap between men's and women's scores has narrowed. Little research has been devoted to why men perform at a much higher level than women on these tests. However, studies have found that those with lower spatial ability usually do worse on mechanical reasoning, and this might be tied to women's lower performance in mechanical tasks. Studies have also found that prenatal androgens such as testosterone positively affect performance in both spatial and mechanical abilities.
|
Mechanical aptitude
|
Uses
|
The major uses for mechanical aptitude testing are to:
Identify candidates with good spatial perception and mechanical reasoning ability
Assess a candidate's working knowledge of basic mechanical operations and physical laws
Recognize an aptitude for learning mechanical processes and tasks
Predict employee success and appropriately align your workforce
These tests are used mostly for industries involving:
Manufacturing/Production
Energy/Utilities
The major occupations that these tests are relevant to are:
Automotive and Aircraft Mechanics
Engineers
Installation/Maintenance/Repairpersons
Industrial/Technical (Non-Retail) Sales Representatives
Skilled Tradespersons such as Electricians, Welders, and Carpenters
Transportation Trades/Equipment Operators such as Truck Driver and Heavy Equipment Operator
|
Mechanical aptitude
|
Types of tests
|
US Department of Defense Test of Mechanical Aptitude
The mechanical comprehension subtest of the Armed Services Vocational Aptitude Battery (ASVAB) is one of the most widely used mechanical aptitude tests in the world. The ASVAB consists of ten subject-specific tests that measure your knowledge of and ability to perform in different areas, and it provides an indication of your level of academic ability. The military asks all recruits to take this exam to help place them in the correct job while enrolled in the military. At the beginning of World War I, the U.S. Army developed the Army Alpha and Beta Tests, which grouped the draftees and recruits for military service. The Army Alpha test measured recruits' knowledge, verbal and numerical ability, and ability to follow directions using 212 multiple-choice questions.
|
Mechanical aptitude
|
Types of tests
|
However, during World War II, the U.S. Army replaced these tests with an improved instrument, the Army General Classification Test, which went through many revisions before settling into regular use. The current battery exists in three versions: two administered with paper and pencil and one administered by computer. Scores from the different versions are statistically linked, so a given score has the same meaning regardless of which version is taken. Some candidates score higher on the computer version than on the paper versions, in part because the computer-based exam adapts to the test-taker's demonstrated ability level. These tests are useful because they measure potential and indicate where a candidate's talents lie; by reviewing the scores, candidates can make informed career decisions, and higher scores open up more job opportunities.
|
Mechanical aptitude
|
Types of tests
|
Wiesen Test of Mechanical Aptitude The Wiesen Test of Mechanical Aptitude measures a person's mechanical aptitude, defined here as the ability to use machinery properly and keep equipment in good working order. The test takes 30 minutes and contains 60 items that help predict performance in occupations involving the operation, maintenance, and servicing of tools, equipment, and machinery; such occupations both require and are facilitated by mechanical aptitude. The test was designed as an evolution of earlier mechanical aptitude tests, such as the Bennett Test of Mechanical Comprehension, intended to address their shortcomings, and it was reorganized to lessen certain gender and racial biases. The required reading level has been estimated at a sixth-grade level, and a Spanish-language version is available for Spanish-speaking mechanical workers. Overall, the test has been shown to have less adverse impact on the gender and racial groups noted above than previous mechanical aptitude tests.
|
Mechanical aptitude
|
Types of tests
|
Each individual taking the test receives two scores: a raw score and a percentile ranking. The raw score is the number of questions (out of the 60 total) answered correctly, and the percentile ranking is a relative performance score that indicates how the individual's score compares with the scores of other people who have taken this particular mechanical aptitude test.
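As a rough illustration of how a raw score can be turned into a percentile ranking against a norm sample (the sample scores and helper function below are hypothetical, not part of the published test):

```python
from bisect import bisect_left

def percentile_rank(raw_score, norm_sample):
    """Percentage of norm-sample scores that fall strictly below the given raw score."""
    ordered = sorted(norm_sample)
    below = bisect_left(ordered, raw_score)  # count of scores below raw_score
    return 100.0 * below / len(ordered)

# Hypothetical norm sample of raw scores (out of 60) from previous test-takers.
norm_sample = [22, 28, 31, 35, 38, 40, 42, 44, 47, 51]
print(percentile_rank(44, norm_sample))  # 70.0 -> scored above 70% of the sample
```

In practice the publisher's own norm tables would be used, but the principle is the same: the percentile is defined relative to a reference group rather than to the number of items answered correctly.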
|
Mechanical aptitude
|
Types of tests
|
Average test scores for the Wiesen Test of Mechanical Aptitude were determined by giving the test to a sample of 1,817 workers aged 18 and older employed in the industrial occupations mentioned previously. Using this sample, the test was found to have very high reliability (.97) in measuring mechanical aptitude as it relates to performance in mechanical occupations.
|
Mechanical aptitude
|
Types of tests
|
Bennett Test of Mechanical Comprehension The Bennett Mechanical Comprehension Test (BMCT) is an assessment tool for measuring a candidate's ability to perceive and understand the relationship of physical forces and mechanical elements in practical situations. This aptitude is important in jobs and training programs that require the understanding and application of mechanical principles. The current BMCT Forms, S and T, have been used to predict performance in a variety of vocational and technical training settings and have been popular selection tools for mechanical, technical, engineering, and similar occupations for many years.
|
Mechanical aptitude
|
Types of tests
|
The BMCT is composed of 68 items administered under a 30-minute time limit; the items are illustrations of simple, commonly encountered mechanisms drawn from many different settings. It is considered a timed power test rather than a speeded test, and cut scores can be set to reflect different job requirements for employers. The reading level required for the test is at or below a sixth-grade level.
|
Mechanical aptitude
|
Types of tests
|
Current studies of internal consistency reliability, compared with estimates from previous studies, report a range from .84 to .92, indicating that the BMCT has high internal consistency. Muchinsky (1993) evaluated the relationships between the BMCT, a general mental ability test, an aptitude classification test focused on mechanics, and supervisory ratings of overall performance for 192 manufacturing employees. Of the three tests, he found the BMCT to be the best single predictor of job performance (r = .38, p < .01), and the incremental gain in predictability from the other tests was not significant.
|
Mechanical aptitude
|
Types of tests
|
From an employer's standpoint, selection typically draws on cognitive ability tests, aptitude tests, personality tests, and similar instruments. The BMCT has been used for electrical and mechanical positions, and companies also use it for computer operators and manufacturing operators. The test helps employers identify which applicants may need further training and instruction, showing who has already mastered the trade they are applying for and highlighting those who still have some "catching up" to do.
|
Mechanical aptitude
|
Types of tests
|
Stenquist Test of Mechanical Aptitude The Stenquist Test consists of a series of problems presented as pictures, in which each respondent tries to determine which picture best belongs with another group of pictures. The pictures are mostly of common mechanical objects that are not affiliated with any particular trade or profession, and the visuals require no prior experience or knowledge. Other variations of the test examine a person's perception of mechanical objects and ability to reason through a mechanical problem. For example, the Stenquist Mechanical Assembling Test Series III, created for young males, provided physical mechanical parts from which each boy individually constructed items.
|
Minimax theorem
|
Minimax theorem
|
In the mathematical area of game theory, a minimax theorem is a theorem providing conditions that guarantee that the max–min inequality is also an equality. The first theorem in this sense is von Neumann's minimax theorem about zero-sum games published in 1928, which was considered the starting point of game theory. Von Neumann is quoted as saying "As far as I can see, there could be no theory of games ... without that theorem ... I thought there was nothing worth publishing until the Minimax Theorem was proved".
|
Minimax theorem
|
Minimax theorem
|
Since then, several generalizations and alternative versions of von Neumann's original theorem have appeared in the literature. Formally, von Neumann's minimax theorem states: Let X ⊂ R^n and Y ⊂ R^m be compact convex sets. If f : X × Y → R is a continuous function that is concave–convex, i.e.
f(⋅, y) : X → R is concave for fixed y, and f(x, ⋅) : Y → R is convex for fixed x, then
max_{x∈X} min_{y∈Y} f(x, y) = min_{y∈Y} max_{x∈X} f(x, y).
|
Minimax theorem
|
Special case: Bilinear function
|
The theorem holds in particular if f(x, y) is a linear function in both of its arguments (and therefore is bilinear), since a linear function is both concave and convex. Thus, if f(x, y) = xᵀAy for a finite matrix A ∈ R^(n×m), we have:
max_{x∈X} min_{y∈Y} xᵀAy = min_{y∈Y} max_{x∈X} xᵀAy.
The bilinear special case is particularly important for zero-sum games, when the strategy set of each player consists of lotteries over actions (mixed strategies), and payoffs are induced by expected value. In the above formulation, A is the payoff matrix.
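A small numerical sketch of the bilinear case (assuming NumPy and SciPy are available; the payoff matrix is an arbitrary example, not one from the text) computes the value of a zero-sum game by linear programming over the row player's mixed strategies:

```python
import numpy as np
from scipy.optimize import linprog

def game_value(A):
    """Value of the zero-sum game with payoff matrix A (row player maximizes x^T A y)."""
    n, m = A.shape
    # Variables: x_1..x_n (row player's mixed strategy) and v (guaranteed payoff).
    c = np.zeros(n + 1)
    c[-1] = -1.0                            # maximize v  <=>  minimize -v
    # For every pure column strategy j:  v - sum_i A[i, j] * x_i <= 0
    A_ub = np.hstack([-A.T, np.ones((m, 1))])
    b_ub = np.zeros(m)
    A_eq = np.array([[1.0] * n + [0.0]])    # probabilities sum to 1
    b_eq = np.array([1.0])
    bounds = [(0, None)] * n + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:n], res.x[-1]

A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])                 # matching pennies
strategy, value = game_value(A)
print(strategy, value)                      # approx. [0.5, 0.5] and value 0.0
```

By the minimax theorem, solving the analogous program for the column player yields the same game value, which is why a single linear program suffices to characterize the game.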
|
Allopoiesis
|
Allopoiesis
|
Allopoiesis is the process whereby a system produces something other than the system itself. One example of this is an assembly line, where the final product (such as a car) is distinct from the machines doing the producing. This is in contrast with autopoiesis. Allopoiesis is a compound word formed from allo- (Greek prefix meaning other or different) and -poiesis (Greek suffix meaning production, creation or formation).
|
ISO/IEC 12207
|
ISO/IEC 12207
|
ISO/IEC/IEEE 12207 Systems and software engineering – Software life cycle processes is an international standard for software lifecycle processes. First introduced in 1995, it aims to be a primary standard that defines all the processes required for developing and maintaining software systems, including the outcomes and/or activities of each process.
|
ISO/IEC 12207
|
Revision history
|
ISO/IEC/IEEE 12207:2017 is the newest version, published in November 2017. The IEEE Computer Society joined directly with ISO/IEC JTC 1/SC 7/WG 7 in the editing process for this version. A significant change is that it adopts a process model identical to the ISO/IEC/IEEE 15288:2015 process model (there is one name change: the 15288 "System Requirements Definition" process is renamed the "System/Software Requirements Definition" process). This harmonization of the two standards led to the removal of separate software development and software reuse processes, reducing the 43 processes of 12207 to the 30 processes defined in 15288. It also caused changes to the quality management and quality assurance process activities and outcomes. Additionally, the definition of "audit" and the related audit activities were updated. Annex I of ISO/IEC/IEEE 12207:2017 provides a process mapping between the 2017 version and the previous version, including the primary process alignments between the two versions; this is intended to enable traceability and ease transition for users of the previous version.
|
ISO/IEC 12207
|
Revision history
|
Prior versions include:
ISO/IEC 12207:2008, published in February 2008
ISO/IEC 12207:1995/Amd 2:2004, an amended version of the prior, published in November 2004
ISO/IEC 12207:1995/Amd 1:2002, an amended version of the prior, published in May 2002
ISO/IEC 12207:1995, the first iteration, published in July 1995; originally divided into five primary processes (acquisition, supply, development, operation, and maintenance), with eight supporting and four organizational life cycle processes
IEEE versions: Prior to the IEEE Computer Society formally joining the editing process (becoming a major stakeholder) for the 2017 release, the IEEE maintained its own versions of ISO/IEC 12207, initially with modifications made jointly with the Electronic Industries Alliance (EIA). With the 2008 update came a "shared strategy of ISO/IEC JTC 1/SC 7 and the IEEE to harmonize their respective collections of standards," resulting in identical standards thereafter, but with slightly different names. Those IEEE versions included:
IEEE Std. 12207-2008: "integrates ISO/IEC 12207:1995 with its two amendments and was coordinated with the parallel revision of ISO/IEC 15288:2002 (System life cycle processes) to align structure, terms, and corresponding organizational and project processes"; superseded by ISO/IEC/IEEE 12207:2017
IEEE/EIA 12207.2-1997: "provides implementation consideration guidance for the normative clauses of IEEE/EIA 12207.0"; superseded/made obsolete by IEEE Std. 12207-2008, which was then superseded by ISO/IEC/IEEE 12207:2017
IEEE/EIA 12207.1-1997: "provides guidance for recording life cycle data resulting from the life cycle processes of IEEE/EIA 12207.0"; superseded by ISO/IEC/IEEE 15289:2011, which was then superseded by ISO/IEC/IEEE 15289:2017
IEEE/EIA 12207.0-1996: "consists of the clarifications, additions, and changes [to ISO/IEC 12207:1995 for industry implementation] accepted by the Institute of Electrical and Electronics Engineers (IEEE) and the Electronic Industries Alliance (EIA) as formulated by a joint project of the two organizations"; superseded by IEEE Std. 12207-2008, which was then superseded by ISO/IEC/IEEE 12207:2017
IEEE/EIA 12207 officially replaced MIL-STD-498 (released in December 1994) for the development of DoD software systems on May 27, 1998.
|
ISO/IEC 12207
|
Processes not stages
|
The standard establishes a set of processes for managing the life cycle of software. The standard "does not prescribe a specific software life cycle model, development methodology, method, modelling approach, or technique". Instead, the standard (as well as ISO/IEC/IEEE 15288) distinguishes between a "stage" and a "process" as follows: stage: "period within the life cycle of an entity that relates to the state of its description or realization". A stage is typically a period of time and ends with a "primary decision gate".
|
ISO/IEC 12207
|
Processes not stages
|
process: "set of interrelated or interacting activities that transforms inputs into outputs". The same process often recurs within different stages.Stages (aka phases) are not the same as processes, and this standard only defines specific processes - it does not define any particular stages. Instead, the standard acknowledges that software life cycles vary, and may be divided into stages (also called phases) that represent major life cycle periods and give rise to primary decision gates. No particular set of stages is normative, but it does mention two examples: The system life cycle stages from ISO/IEC TS 24748-1 could be used (concept, development, production, utilization, support, and retirement).
|
ISO/IEC 12207
|
Processes not stages
|
It also notes that a common set of stages for software is concept exploration, development, sustainment, and retirement. The life cycle processes the standard defines are not aligned to any specific stage in a software life cycle. Indeed, the life cycle processes that involve planning, performance, and evaluation "should be considered for use at every stage". In practice, processes occur whenever they are needed within any stage.
|
ISO/IEC 12207
|
Processes
|
ISO/IEC/IEEE 12207:2017 divides software life cycle processes into four main process groups: agreement, organizational project-enabling, technical management, and technical processes. Under each of those four process groups are a variety of sub-categories, including the primary activities of acquisition and supply (agreement); configuration (technical management); and operation, maintenance, and disposal (technical).
|
ISO/IEC 12207
|
Processes
|
Agreement processes Here ISO/IEC/IEEE 12207:2017 includes the acquisition and supply processes, which are activities related to establishing an agreement between a supplier and acquirer. Acquisition covers all the activities involved in initiating a project. The acquisition phase can be divided into different activities and deliverables that are completed chronologically. During the supply phase a project management plan is developed. This plan contains information about the project such as different milestones that need to be reached.
|
ISO/IEC 12207
|
Processes
|
Organizational project-enabling processes Detailed here are life cycle model management, infrastructure management, portfolio management, human resource management, quality management, and knowledge management processes. These processes help a business or organization enable, control, and support the system life cycle and related projects. Life cycle model management helps ensure acquisition and supply efforts are supported, while infrastructure and portfolio management supports business and project-specific initiatives during the entire system life cycle. The rest ensure the necessary resources and quality controls are in place to support the business' project and system endeavors. If an organization does not have an appropriate set of organizational processes, a project executed by the organization may apply those processes directly to the project instead.
|
ISO/IEC 12207
|
Processes
|
Technical management processes ISO/IEC/IEEE 12207:2017 places eight different processes here: project planning; project assessment and control; decision management; risk management; configuration management; information management; measurement; and quality assurance. These processes deal with the planning, assessment, and control of software and other projects during the life cycle, ensuring quality along the way.
|
ISO/IEC 12207
|
Processes
|
Technical processes The technical processes of ISO/IEC/IEEE 12207:2017 encompass 14 different processes, some of which came from the old software-specific processes that were phased out of the 2008 version. The full list includes: business or mission analysis; stakeholder needs and requirements definition; system/software requirements definition; architecture definition; design definition; system analysis; implementation; integration; verification; transition; validation; operation; maintenance; and disposal. These processes involve technical activities and personnel (information technology staff, troubleshooters, software specialists, etc.) before, during, and after operation. The analysis and definition processes early on set the stage for how software and projects are implemented. Additional processes of integration, verification, transition, and validation help ensure quality and readiness. The operation and maintenance phases occur simultaneously, with the operation phase consisting of activities like assisting users in working with the implemented software product, and the maintenance phase consisting of maintenance tasks to keep the product up and running. The disposal process describes how the system/project will be retired and cleaned up, if necessary.
|
ISO/IEC 12207
|
Conformance
|
Clause 4 describes the document's intended use and conformance requirements. It is expected that particular projects "may not need to use all of the processes provided by this document." In practice, conforming to this standard normally involves selecting and declaring the set of suitable processes. This can be done through either "full conformance" or "tailored conformance".
"Full conformance" can be claimed in one of two ways. "Full conformance to tasks" can be claimed if all requirements of the declared processes' activities and tasks are met. "Full conformance to outcomes" can be claimed if all required outcomes of the declared processes are met. The latter permits more variation.
"Tailored conformance" may be declared when specific clauses are selected or modified through the tailoring process also defined in the document.
|
Clearing factor
|
Clearing factor
|
In centrifugation the clearing factor or k factor represents the relative pelleting efficiency of a given centrifuge rotor at maximum rotation speed. It can be used to estimate the time t (in hours) required for sedimentation of a fraction with a known sedimentation coefficient s (in svedbergs):
t = k/s
The value of the clearing factor depends on the maximum angular velocity ω of a centrifuge (in rad/s) and the minimum and maximum radius r of the rotor:
k = (ln(r_max/r_min) / ω²) × (10¹³ / 3600)
As the rotational speed of a centrifuge is usually specified in RPM, the following formula is often used for convenience:
k = 2.53 × 10⁵ × ln(r_max/r_min) / (RPM/1000)²
Centrifuge manufacturers usually specify the minimum, maximum and average radius of a rotor, as well as the k factor of a centrifuge-rotor combination.
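As a rough sketch of how these formulas are applied (the rotor radii, speed, and sedimentation coefficient below are illustrative values, not a specific manufacturer's data):

```python
import math

def clearing_factor(rpm, r_min_mm, r_max_mm):
    """k factor for a rotor spun at `rpm`; only the ratio of the radii matters."""
    return 2.53e5 * math.log(r_max_mm / r_min_mm) / (rpm / 1000.0) ** 2

def pelleting_time_hours(k, s_svedberg):
    """Estimated run time (hours) to pellet a particle with sedimentation coefficient s."""
    return k / s_svedberg

k = clearing_factor(rpm=40_000, r_min_mm=40.0, r_max_mm=80.0)
print(round(k, 1))                             # k factor of this rotor at 40,000 RPM
print(round(pelleting_time_hours(k, 80), 2))   # hours needed to pellet an 80 S particle
```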
|
Clearing factor
|
Clearing factor
|
For runs with a rotational speed lower than the maximum rotor speed, the k factor has to be adjusted:
k_adjusted = k × (maximum rotor speed / actual rotor speed)²
The k factor is related to the sedimentation coefficient S by the formula
T = k/S
where T is the time in hours to pellet a certain particle. Since S is a constant for a given particle, this relationship can be used to interconvert between different rotors.
|
Clearing factor
|
Clearing factor
|
T1/K1 = T2/K2
where T1 is the time to pellet in one rotor and K1 is the k factor of that rotor. Given K2, the k factor of the other rotor, T2, the time to pellet in the other rotor, can be calculated. In this manner, one does not need access to the exact rotor cited in a protocol, as long as the k factor can be calculated. Many online calculators are available to perform the calculations for common rotors.
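Continuing the same illustrative sketch (all rotor speeds and k factors below are made-up example values), the speed adjustment and the rotor-to-rotor conversion look like this:

```python
def adjusted_k(k_max, max_rpm, actual_rpm):
    """k factor when the rotor is run below its maximum rated speed."""
    return k_max * (max_rpm / actual_rpm) ** 2

def convert_time(t1_hours, k1, k2):
    """Time in rotor 2 giving the same pelleting as t1_hours in rotor 1 (T1/K1 = T2/K2)."""
    return t1_hours * k2 / k1

print(round(adjusted_k(k_max=110.0, max_rpm=40_000, actual_rpm=30_000), 1))  # larger k => longer run
print(round(convert_time(t1_hours=2.0, k1=110.0, k2=220.0), 1))              # 4.0 hours in the slower rotor
```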
|
Hentriacontylic acid
|
Hentriacontylic acid
|
Hentriacontylic acid (also hentriacontanoic acid, henatriacontylic acid, or henatriacontanoic acid) is a carboxylic saturated fatty acid.
|
Hentriacontylic acid
|
Sources
|
Hentriacontylic acid can be derived from peat wax and montan wax.
The olefin triacontene-1 can be reacted to yield linear n-henatriacontanoic acid.
|
Triolein
|
Triolein
|
Triolein is a symmetrical triglyceride derived from glycerol and three units of the unsaturated fatty acid oleic acid. Most triglycerides are unsymmetrical, being derived from mixtures of fatty acids. Triolein represents 4–30% of olive oil. Triolein is also known as glyceryl trioleate and is one of the two components of Lorenzo's oil. The oxidation of triolein follows the formula:
C57H104O6 + 80 O2 → 57 CO2 + 52 H2O
This gives a respiratory quotient of 57/80, or 0.7125. The heat of combustion is 8,389 kcal (35,100 kJ) per mole or 9.474 kcal (39.64 kJ) per gram. Per mole of oxygen it is 104.9 kcal (439 kJ).
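These figures follow directly from the balanced equation and can be checked with a short calculation (the molar mass below is computed from standard atomic weights and is only an approximation used for the per-gram figure):

```python
# Checking the stoichiometric figures for triolein (C57H104O6).
co2_moles, o2_moles = 57, 80
print(co2_moles / o2_moles)                       # respiratory quotient = 0.7125

heat_per_mole_kcal = 8389                         # heat of combustion per mole (from the text)
print(round(heat_per_mole_kcal / o2_moles, 1))    # ~104.9 kcal per mole of O2

# Approximate molar mass of C57H104O6 from standard atomic weights.
molar_mass = 57 * 12.011 + 104 * 1.008 + 6 * 15.999
print(round(heat_per_mole_kcal / molar_mass, 3))  # ~9.474 kcal per gram
```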
|
Maximum satisfiability problem
|
Maximum satisfiability problem
|
In computational complexity theory, the maximum satisfiability problem (MAX-SAT) is the problem of determining the maximum number of clauses, of a given Boolean formula in conjunctive normal form, that can be made true by an assignment of truth values to the variables of the formula. It is a generalization of the Boolean satisfiability problem, which asks whether there exists a truth assignment that makes all clauses true.
|
Maximum satisfiability problem
|
Example
|
The conjunctive normal form formula (x0∨x1)∧(x0∨¬x1)∧(¬x0∨x1)∧(¬x0∨¬x1) is not satisfiable: no matter which truth values are assigned to its two variables, at least one of its four clauses will be false.
However, it is possible to assign truth values in such a way as to make three out of four clauses true; indeed, every truth assignment will do this.
Therefore, if this formula is given as an instance of the MAX-SAT problem, the solution to the problem is the number three.
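A brute-force enumeration over all truth assignments confirms this count for the example formula; the clause encoding below is just one convenient representation chosen for the sketch, not a standard input format:

```python
# Brute-force MAX-SAT for (x0 ∨ x1) ∧ (x0 ∨ ¬x1) ∧ (¬x0 ∨ x1) ∧ (¬x0 ∨ ¬x1).
# Each clause is a list of (variable_index, wanted_value) literals.
from itertools import product

clauses = [
    [(0, True), (1, True)],
    [(0, True), (1, False)],
    [(0, False), (1, True)],
    [(0, False), (1, False)],
]

def satisfied(assignment, clause):
    """A clause is satisfied if at least one of its literals matches the assignment."""
    return any(assignment[var] == wanted for var, wanted in clause)

best = max(
    sum(satisfied(assignment, c) for c in clauses)
    for assignment in product([False, True], repeat=2)
)
print(best)  # 3 — every assignment satisfies exactly three of the four clauses
```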
|