Harrington–Hollingsworth experiment
Experiment
The experiment was undertaken in 1950 by William J. Harrington and James W. Hollingsworth, who postulated that in patients with idiopathic thrombocytopenic purpura (ITP), a blood factor caused the destruction of platelets. To test this hypothesis, Harrington received 500 ml of blood from a patient with ITP. Within three hours, his platelets dropped to dangerously low levels and he experienced a seizure. His platelet count remained extremely low for four days, finally returning to normal levels by the fifth day. A bone marrow biopsy from Harrington's sternum demonstrated normal megakaryocytes, the cells necessary for platelet production. Subsequently, the experiment was repeated on all suitable staff members at the Barnes-Jewish Hospital. All subjects developed low platelet counts within three hours, and all recovered after a period of several days.
Harrington–Hollingsworth experiment
Implications
Schwartz notes that the Harrington–Hollingsworth experiment was a turning point in the understanding of ITP's pathophysiology: The Harrington–Hollingsworth experiment changed the meaning of the "I" in ITP from idiopathic to immune, but "immune" in this case means "autoimmune," because the antibodies bind to and cause the destruction of the patient's own platelets.
Harrington–Hollingsworth experiment
Implications
The experiment was the first to demonstrate that infusion of an ITP patient's plasma into a normal patient caused a precipitous drop in platelet count. This suggested that low platelet counts (thrombocytopenia) in patients with ITP was caused by a circulating factor found in the blood. Many studies performed since then have demonstrated that this circulating factor is in fact a collection of immunoglobulins.
Harrington–Hollingsworth experiment
Implications
Many physician-scientists believe the findings had a major influence on the field of autoimmunity, which was not universally accepted at the time as a mechanism of human disease.
Von Braun amide degradation
Von Braun amide degradation
The von Braun amide degradation is the chemical reaction of a monosubstituted amide with phosphorus pentachloride or thionyl chloride to give a nitrile and an organohalide. It is named after Julius Jacob von Braun, who first reported the reaction.
Von Braun amide degradation
Reaction mechanism
The secondary amide 1 reacts via its enolized form with phosphorus pentachloride to form the oxonium ion 2. This produces a chloride ion, which deprotonates the oxonium ion to give an imine 3 and hydrogen chloride. These then react with one another, with loss of the phosphorus chloride residue, to give the β-chloroimine 4. The β-chloroimine 4 is unstable and undergoes internal elimination to form a nitrilium cation 5, which is cleaved by attack of chloride to form a nitrile 6a and a haloalkane 6b.
Protegrin
Protegrin
Protegrins are small peptides containing 16–18 amino acid residues. They were first discovered in porcine leukocytes and were found to have antimicrobial activity against bacteria, fungi, and some enveloped viruses. The amino acid composition of protegrins includes six positively charged arginine residues and four cysteine residues. Their secondary structure places them among the cysteine-rich β-sheet antimicrobial peptides (AMPs), which display limited sequence similarity to certain defensins and tachyplesins. In solution, the peptides fold to form an antiparallel β-sheet, with the structure stabilized by two cysteine bridges formed between the four cysteine residues. Recent studies suggest that protegrins can bind to lipopolysaccharide, a property that may help them insert into the membranes of gram-negative bacteria and permeabilize them.
Protegrin
Structure
There are five known porcine protegrins, PG-1 to PG-5. Three were identified biochemically, and the rest were deduced from DNA sequences.
Protegrin
Structure
The protegrins are synthesized from quadripartite genes as 147- to 149-amino-acid precursors with a cathelin-like propiece. The protegrin sequences are similar to certain prodefensins and to the tachyplesins, antibiotic peptides derived from the horseshoe crab. Protegrin-1, which consists of 18 amino acids, six of which are arginine residues, forms two antiparallel β-strands with a β-turn. Protegrin-2 is missing two carboxy-terminal amino acids, so it is shorter than protegrin-1 and carries one less positive charge. Protegrin-3 substitutes a glycine for an arginine at position 4 and likewise has one less positive charge. Protegrin-4 substitutes a phenylalanine for a valine at position 14 and differs in sequence in the β-turn; these differences make protegrin-4 less polar than the others and less positively charged. Protegrin-5 substitutes a proline for an arginine, with one less positive charge.
Protegrin
Mechanism of action
Protegrin-1 induces membrane disruption by forming a pore or channel that leads to cell death, an ability that depends on its secondary structure. It assembles into an oligomeric structure in the membrane that creates a pore. Two modes of self-association of protegrin-1 into a dimeric β-sheet have been suggested: an antiparallel β-sheet with a turn-next-to-tail association, or a parallel β-sheet with a turn-next-to-turn association. The activity can be restored by stabilizing the peptide structure with the two disulfide bonds. The interaction with membranes depends on membrane lipid composition, and the cationic, amphipathic character of protegrin-1 underlies this membrane interaction. The insertion of protegrin-1 into the lipid layer disorders lipid packing, leading to membrane disruption.
Protegrin
Antimicrobial activity
The protegrins are highly microbicidal against Candida albicans, Escherichia coli, Listeria monocytogenes, Neisseria gonorrhoeae, and the virions of the human immunodeficiency virus in vitro, under conditions which mimic the tonicity of the extracellular milieu. The mechanism of this microbicidal activity is believed to involve membrane disruption, similar to many other antibiotic peptides.
Protegrin
Mimetics as antibiotics
Protegrin-1 (PG-1) peptidomimetics developed by Polyphor AG and the University of Zurich are based on the β-hairpin-stabilizing D-Pro-L-Pro template, which promotes the β-hairpin loop structure found in PG-1. Fully synthetic cyclic peptide libraries built on this peptidomimetic template produced compounds with antimicrobial activity like that of PG-1 but with reduced hemolytic activity against human red blood cells. Iterative rounds of synthesis and optimization led to the Pseudomonas-specific clinical candidate murepavadin, which successfully completed phase-II clinical trials in hospital patients with life-threatening Pseudomonas lung infections.
Naomi Leonard
Naomi Leonard
Naomi Ehrich Leonard is the Edwin S. Wilsey Professor of Mechanical and Aerospace Engineering at Princeton University. She is the director of the Princeton Council on Science and Technology and an associated faculty member in the Program in Applied & Computational Mathematics, Princeton Neuroscience Institute, and the Program in Quantitative and Computational Biology. She is the founding editor of the Annual Review of Control, Robotics, and Autonomous Systems.
Naomi Leonard
Life
Leonard graduated from Princeton University with a B.S.E. degree in mechanical engineering in 1985. From 1985 to 1989, she worked in the electric power industry. She graduated from the University of Maryland with an M.S. in 1991 and a Ph.D. in 1994, both in electrical engineering, under the supervision of P. S. Krishnaprasad. She joined Princeton's faculty as an assistant professor of mechanical and aerospace engineering in 1994.
Naomi Leonard
Research
Leonard's research is in the area of dynamics and control theory. Her early work involved the development of "energy-shaping" methods of feedback control for single vehicles. It has applications to the control theory of more general mechanical systems.
Naomi Leonard
Research
She later expanded her work to the control of multi-agent systems, with an emphasis on collective sensing, decision-making, and motion. Her work includes the study of multi-agent systems in nature and the application of insights from nature to man-made systems. Many of Leonard's projects have involved the control of aquatic vehicles. She operates the underwater robotic tank lab at Princeton. She has worked for a number of years with the Autonomous Ocean Sampling Network. In 2006, she led the Adaptive Sampling and Prediction project, which used 10 underwater vehicles to form an automated and adaptive ocean observing system in Monterey Bay. In developing algorithms for robot control, she integrates physics and fluid mechanics with research about uncertainty and collective decision-making. She draws upon nature for her models, studying the animal flocking behavior of fish, honeybees, and birds. Her autonomous robotic swarms mimic schools of fish and are used to collect data and explore their marine environment.
Naomi Leonard
Awards
1995 National Science Foundation CAREER Award
2004 MacArthur Fellows Program
2007 IEEE Fellow
2011 ASME Fellow
2012 Fellow of the Society for Industrial and Applied Mathematics
2014 Fellow of the International Federation of Automatic Control
Pregnanediol
Pregnanediol
Pregnanediol, or 5β-pregnane-3α,20α-diol, is an inactive metabolic product of progesterone. A test can be done to measure the amount of pregnanediol in urine, which offers an indirect way to measure progesterone levels in the body. From the urine of pregnant women attending London clinics, Guy Frederic Marrian isolated a substance that contained two hydroxyl groups and could be converted into a diacetate with acetic anhydride; its structure, however, had not yet been fully elucidated. At almost the same time, Adolf Butenandt at the Chemical University Laboratory in Göttingen investigated the constituents of pregnancy urine and clarified the structure of the diol. The name pregnanediol, coined by Butenandt, is derived from the Latin praegnans (pregnant), or the English pregnant and pregnancy. This gave rise to the name pregnane for the underlying parent hydrocarbon. In 1936, Venning and Browne demonstrated the presence of pregnanediol, specifically its glucuronide, in pregnancy urine. Their study extracted pregnanediol from pregnancy urine and showed that the pregnanediol concentration in urine indicates the amount of progesterone excreted. Since progesterone levels indicate the functionality of a corpus luteum, and pregnanediol represents 40-45% of the progesterone excreted, estimations of pregnanediol reveal the functionality of a corpus luteum. However, pregnanediol concentrations vary with the phases of the menstrual cycle, so it is essential to consider the cycle phase when interpreting them. Furthermore, current research has demonstrated that the pregnanediol concentration in urine is also a measure of ovarian activity.
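The 40-45% recovery figure implies a simple back-calculation from measured urinary pregnanediol to excreted progesterone. The sketch below is illustrative arithmetic only (the function name and the sample value are invented for the example), not a clinical formula:

```python
# Back-of-the-envelope sketch of the Venning and Browne relationship:
# urinary pregnanediol is taken to represent 40-45% of excreted
# progesterone, so a measured pregnanediol amount bounds the estimate.
# Purely illustrative; not a clinical calculation.
def progesterone_range_mg(pregnanediol_mg, low=0.40, high=0.45):
    """Return (min, max) excreted progesterone consistent with the recovery range."""
    return pregnanediol_mg / high, pregnanediol_mg / low

# A hypothetical urinary pregnanediol measurement of 4.5 mg implies
# roughly 10.0-11.25 mg of excreted progesterone.
lo, hi = progesterone_range_mg(4.5)
print(lo, hi)
```

Because the recovery fraction is a range rather than a constant, the estimate is necessarily an interval, not a single value.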
Distributed.net
Distributed.net
Distributed.net is a volunteer computing effort that is attempting to solve large scale problems using otherwise idle CPU or GPU time. It is governed by Distributed Computing Technologies, Incorporated (DCTI), a non-profit organization under U.S. tax code 501(c)(3).
Distributed.net
Distributed.net
Distributed.net is working on RC5-72 (breaking RC5 with a 72-bit key). The RC5-72 project is on pace to exhaust the keyspace in just under 47 years, although the project will end whenever the required key is found. RC5 has eight unsolved challenges from RSA Security, although in May 2007, RSA Security announced that it would no longer provide prize money for a correct key to any of its secret-key challenges. As a result, distributed.net has decided to sponsor the original prize offer for finding the key. In 2001, distributed.net was estimated to have a throughput of over 30 TFLOPS. As of August 2019, the throughput was estimated to be the same as that of a Cray XC40, as used in the Lonestar 5 supercomputer, or around 1.25 petaFLOPS.
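The "just under 47 years" figure is straightforward arithmetic over the 2^72-key space. The sketch below shows the calculation; the aggregate key rate used is an assumed illustrative figure chosen to reproduce the quoted estimate, not an official distributed.net statistic:

```python
# Time to sweep the full RC5-72 keyspace at a fixed aggregate key rate.
# The rate below is an assumption for illustration, not project data.
KEYSPACE = 2 ** 72  # total number of 72-bit RC5 keys

def years_to_exhaust(keys_per_second: float) -> float:
    """Return the number of years needed to test every key at the given rate."""
    seconds = KEYSPACE / keys_per_second
    return seconds / (365.25 * 24 * 3600)

# An assumed aggregate rate of ~3.2 trillion keys/second gives roughly
# 47 years for the full sweep; on average the correct key turns up after
# searching about half the space.
rate = 3.2e12
print(round(years_to_exhaust(rate)))  # whole-keyspace estimate, in years
```

Doubling the key rate halves the estimate, which is why GPU clients (discussed below in the source) changed the project's pace so dramatically.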
Distributed.net
History
A coordinated effort was started in February 1997 by Earle Ady and Christopher G. Stach II of Hotjobs.com and New Media Labs, as an effort to break the RC5-56 portion of the RSA Secret-Key Challenge, a 56-bit encryption algorithm that had a $10,000 USD prize available to anyone who could find the key. Unfortunately, this initial effort had to be suspended as the result of SYN flood attacks by participants upon the server. A new independent effort, named distributed.net, was coordinated by Jeffrey A. Lawson, Adam L. Beberg, and David C. McNett along with several others who would serve on the board and operate infrastructure. By late March 1997 new proxies were released to resume RC5-56 and work began on enhanced clients. A cow head was selected as the icon of the application and the project's mascot. The RC5-56 challenge was solved on October 19, 1997 after 250 days. The correct key was "0x532B744CC20999" and the plaintext message read "The unknown message is: It's time to move to a longer key length". The RC5-64 challenge was solved on July 14, 2002 after 1,757 days. The correct key was "0x63DE7DC154F4D039" and the plaintext message read "The unknown message is: Some things are better left unread". The searches for OGRs of order 24, 25, 26, 27 and 28 were completed by distributed.net on 13 October 2004, 25 October 2008, 24 February 2009, 19 February 2014, and 23 November 2022 respectively.
Distributed.net
Client
"DNETC" is the file name of the software application which users run to participate in any active distributed.net project. It is a command-line program with an interface to configure it, available for a wide variety of platforms. distributed.net refers to the software application simply as the "client". As of April 2019, volunteers running 32-bit Windows with ATI/AMD Stream-enabled GPUs have contributed the most processing power to the RC5-72 project, and volunteers running 64-bit Linux have contributed the most processing power to the OGR-28 project. Portions of the source code for the client are publicly available, although users are not permitted to distribute modified versions themselves. Distributed.net's RC5-72 project is available on the BOINC client through the Moo! Wrapper.
Distributed.net
Development of GPU-enabled clients
In recent years, most of the work on the RC5-72 project has been submitted by clients that run on the GPU of modern graphics cards. Although the project had already been underway for almost 6 years when the first GPUs began submitting results, as of May 2023, GPUs represent 86% of all completed work units, and complete more than 93% of all work units each day.
Distributed.net
Development of GPU-enabled clients
NVIDIA
In late 2007, work began on the implementation of new RC5-72 cores designed to run on NVIDIA CUDA-enabled hardware, with the first completed work units reported in November 2008. On high-end NVIDIA video cards at the time, upwards of 600 million keys/second was observed. For comparison, a 2008-era high-end single CPU working on RC5-72 achieved about 50 million keys/second, representing a very significant advancement for RC5-72. As of May 2023, CUDA clients have completed 11% of all work on the RC5-72 project.
ATI
Similarly, near the end of 2008, work began on the implementation of new RC5-72 cores designed to run on ATI Stream-enabled hardware. Some of the products in the Radeon HD 5000 and 6000 series provided key rates in excess of 1.8 billion keys/second. As of May 2023, Stream clients have completed nearly 28% of all work on the RC5-72 project. Daily production from Stream clients has dropped below 0.5%, as the majority of AMD GPU contributors now use the OpenCL client.
OpenCL
An OpenCL client entered beta testing in late 2012 and was released in 2013. As of May 2023, OpenCL clients have completed more than 47% of all work on the RC5-72 project. No breakdown of OpenCL production by GPU manufacturer exists, as AMD, NVIDIA, and Intel GPUs all support OpenCL.
Distributed.net
Timeline of distributed.net projects
Current
RSA Lab's 72-bit RC5 Encryption Challenge — In progress, 10.413% complete as of 28 July 2023 (although RSA Labs has discontinued sponsorship)
Cryptography
RSA Lab's 56-bit RC5 Encryption Challenge — Completed 19 October 1997 (after 250 days and 47% of the key space tested).
Distributed.net
Timeline of distributed.net projects
RSA Lab's 56-bit DES-II-1 Encryption Challenge — Completed 23 February 1998 (after 39 days)
RSA Lab's 56-bit DES-II-2 Encryption Challenge — Ended 15 July 1998 (found independently by the EFF DES cracker after 2.5 days)
RSA Lab's 56-bit DES-III Encryption Challenge — Completed 19 January 1999 (after 22.5 hours with the help of the EFF DES cracker)
CS-Cipher Challenge — Completed 16 January 2000 (after 60 days and 98% of the key space tested).
Distributed.net
Timeline of distributed.net projects
RSA Lab's 64-bit RC5 Encryption Challenge — Completed 14 July 2002 (after 1726 days and 83% of the key space tested).
Golomb rulers
Optimal Golomb Rulers (OGR-24) — Completed 13 October 2004 (after 1552 days, confirmed predicted best ruler)
Optimal Golomb Rulers (OGR-25) — Completed 24 October 2008 (after 3006 days, confirmed predicted best ruler)
Optimal Golomb Rulers (OGR-26) — Completed 24 February 2009 (after 121 days, confirmed predicted best ruler)
Optimal Golomb Rulers (OGR-27) — Completed 19 February 2014 (after 1822 days, confirmed predicted best ruler)
Optimal Golomb Rulers (OGR-28) — Completed 23 November 2022 (after 3199 days, confirmed predicted best ruler)
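The OGR projects search for optimal Golomb rulers: sets of integer marks in which every pair of marks measures a distinct distance, with no shorter ruler of the same order existing. Verifying the Golomb property itself is simple; the hard, distributed part is the exhaustive search over candidate rulers. A minimal sketch of the property check (not distributed.net's actual search code):

```python
from itertools import combinations

def is_golomb(marks):
    """True if every pair of marks measures a distinct distance."""
    diffs = [b - a for a, b in combinations(sorted(marks), 2)]
    return len(diffs) == len(set(diffs))

# {0, 1, 4, 6} is the known optimal order-4 ruler: its six pairwise
# distances (1, 2, 3, 4, 5, 6) are all distinct.
print(is_golomb([0, 1, 4, 6]))  # True
print(is_golomb([0, 1, 2, 4]))  # False: the distance 1 occurs twice
```

An order-n candidate has n(n-1)/2 pairwise distances, so the check is cheap; the combinatorial explosion lies in the number of candidates, which is why confirming OGR-28 took distributed.net 3199 days.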
Jackie Chan J-Mat Fitness
Jackie Chan J-Mat Fitness
The Jackie Chan J-Mat Fitness is a mat-based video game that requires a XaviXPORT console to operate. This 2005 game, similar to the later-released Nintendo game Wii Fit, is designed to make players exercise. The players control Jackie Chan in a variety of modes, such as a reflex mode, running, and exercising (which is played in a style similar to the Dance Dance Revolution games).
Jackie Chan J-Mat Fitness
Articles
Kotaku - "Jackie Chan Kind of Invented Wii Fit"
Siliconera - "Jackie Chan's Take on Wii Fit"
Shell game
Shell game
The shell game (also known as thimblerig, three shells and a pea, the old army game) is often portrayed as a gambling game, but in reality, when a wager for money is made, it is almost always a confidence trick used to perpetrate fraud. In confidence trick slang, this swindle is referred to as a short-con because it is quick and easy to pull off. The shell game is related to the cups and balls conjuring trick, which is performed purely for entertainment purposes without any purported gambling element.
Shell game
Play
In the shell game, three or more identical containers (which may be cups, shells, bottle caps, or anything else) are placed face-down on a surface. A small ball is placed beneath one of these containers so that it cannot be seen, and they are then shuffled by the operator in plain view. One or more players are invited to bet on which container holds the ball – typically, the operator offers to double the player's stake if they guess right. Where the game is played honestly, the operator can win if he shuffles the containers in a way which the player cannot follow. In practice, however, the shell game is notorious for its use by confidence tricksters who will typically rig the game using sleight of hand to move or hide the ball during play and replace it as required. Fraudulent shell games are also known for the use of psychological tricks to convince potential players of the legitimacy of the game – for example, by using shills or by allowing a player to win a few times before beginning the scam.
Shell game
History
The shell game dates back at least to Ancient Greece. It can be seen in several paintings of the European Middle Ages. Later, walnut shells were used, and today the use of bottle caps or matchboxes is common. The game has also been called "thimblerig", as it could be played using sewing thimbles. The first recorded use of the term "thimblerig" is in 1826. The swindle became very popular throughout the nineteenth century, and games were often set up in or around traveling fairs. A thimblerig team (comprising operator and confederates) was depicted in William Powell Frith's 1858 painting, The Derby Day. In his 1888 My Autobiography and Reminiscences, the painter-turned-memoirist leaves an account of his encounter with a thimble-rig team of operator and accomplices. Fear of jail and the need to find new "flats" (victims) kept these "sharps" (shell men or "operators") traveling from one town to the next, never staying in one place very long. One of the most infamous confidence men of the nineteenth century, Jefferson Randolph Smith, known as Soapy Smith, led organized gangs of shell men throughout the mid-western United States, and later in Alaska.
Shell game
History
Today, the game is still played for money in many major cities around the world, usually at locations with a high concentration of tourists (for example: La Rambla in Barcelona, Gran Via in Madrid, Westminster Bridge in London, Kurfürstendamm in Berlin, Bahnhofsviertel in Frankfurt am Main, and public spaces in Paris, Buenos Aires, Benidorm, New York City, Chicago, and Los Angeles). The swindle is classified as a confidence trick game and is illegal to play for money in most countries. The game also inspired a pricing game on the game show The Price Is Right, in which contestants attempt to win a larger prize by pricing smaller prizes to earn attempts at finding a ball hidden under one of four shells designed to resemble walnut shells. Although the ball is not shown during the game, and the host shuffles the shells before the game starts, contestants can win either by winning all four attempts or by winning enough attempts (marked with large "chips" placed by the shells) and then picking the shell that hides the ball. Shuffling is allowed only before the pricing part of the game begins; once the first small prize is announced, no further shuffling is permitted. Federal game show regulations are designed to ensure that the game can legitimately be won.
Proof-carrying code
Proof-carrying code
Proof-carrying code (PCC) is a software mechanism that allows a host system to verify properties about an application via a formal proof that accompanies the application's executable code. The host system can quickly verify the validity of the proof, and it can compare the conclusions of the proof to its own security policy to determine whether the application is safe to execute. This can be particularly useful in ensuring memory safety (i.e. preventing issues like buffer overflows).
Proof-carrying code
Proof-carrying code
Proof-carrying code was originally described in 1996 by George Necula and Peter Lee.
Proof-carrying code
Packet filter example
The original publication on proof-carrying code in 1996 used packet filters as an example: a user-mode application hands a function written in machine code to the kernel that determines whether or not an application is interested in processing a particular network packet. Because the packet filter runs in kernel mode, it could compromise the integrity of the system if it contains malicious code that writes to kernel data structures. Traditional approaches to this problem include interpreting a domain-specific language for packet filtering, inserting checks on each memory access (software fault isolation), and writing the filter in a high-level language which is compiled by the kernel before it is run. These approaches have performance disadvantages for code as frequently run as a packet filter, except for the in-kernel compilation approach, which only compiles the code when it is loaded, not every time it is executed.
Proof-carrying code
Packet filter example
With proof-carrying code, the kernel publishes a security policy specifying properties that any packet filter must obey: for example, will not access memory outside of the packet and its scratch memory area. A theorem prover is used to show that the machine code satisfies this policy. The steps of this proof are recorded and attached to the machine code which is given to the kernel program loader. The program loader can then rapidly validate the proof, allowing it to thereafter run the machine code without any additional checks. If a malicious party modifies either the machine code or the proof, the resulting proof-carrying code is either invalid or harmless (still satisfies the security policy).
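The division of labor described above — an expensive proof produced once by the code supplier, a cheap check performed by the loader — can be illustrated with a drastically simplified sketch. This is not Necula and Lee's actual logical framework: here the "proof" is reduced to a single claimed maximum memory offset, and the instruction set and names are invented for illustration.

```python
# Toy sketch of the proof-carrying-code idea. The producer ships a
# packet filter together with a safety annotation (a claimed maximum
# load offset); the loader verifies the annotation against the code,
# then checks it against the kernel's policy, before running anything.
PACKET_SIZE = 64  # policy: every load must stay inside the packet buffer

def check_safety(code, claimed_max_offset):
    """Loader-side check: validate the claim, then validate the policy."""
    for op, arg in code:
        if op == "load" and arg > claimed_max_offset:
            return False  # the annotation does not cover this access
    return claimed_max_offset < PACKET_SIZE  # the claim meets the policy

good_filter = [("load", 12), ("load", 14), ("accept", None)]
bad_filter = [("load", 4096), ("accept", None)]

print(check_safety(good_filter, 14))  # True: safe to run without runtime checks
print(check_safety(bad_filter, 14))   # False: rejected before it ever runs
```

As in real PCC, tampering is harmless: weakening the annotation makes it fail the code check, and inserting an out-of-bounds load makes the code fail the annotation, so the loader rejects either way.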
Managed private cloud
Managed private cloud
Managed private cloud (also known as "hosted private cloud") refers to a principle in software architecture where a single instance of the software runs on a server, serves a single client organization (tenant), and is managed by a third party. The third-party provider is responsible for providing the hardware for the server and also for preliminary maintenance. This is in contrast to multitenancy, where multiple client organizations share a single server, or an on-premises deployment, where the client organization hosts its software instance.
Managed private cloud
Managed private cloud
Managed private clouds also fall under the larger umbrella of cloud computing.
Managed private cloud
Adoption
The need for private clouds arose from enterprises requiring dedicated service and infrastructure for their cloud computing needs, such as for business-critical operations, improved security, and better control over their resources. Managed private cloud adoption is a popular choice among organizations and has been on the rise, as enterprises require a dedicated cloud environment but prefer to avoid the management, maintenance, and future upgrade costs of the associated infrastructure and services. Such operational costs are unavoidable in on-premises private cloud data centers.
Managed private cloud
Advantages and challenges of managed private cloud
A managed private cloud cuts down on upkeep costs by outsourcing infrastructure management and maintenance to the managed cloud provider. It is easier to integrate an organization's existing software, services, and applications into a dedicated cloud hosting infrastructure, which can be customized to the client's needs, than into a public cloud platform, whose hardware and infrastructure/software platform cannot be individualized for each client. Customers who choose a managed private cloud deployment usually do so out of a desire for an efficient cloud deployment combined with a need for service customization or integration available only in a single-tenant environment.
Managed private cloud
Advantages and challenges of managed private cloud
Charts accompanying this section compared the key benefits and key drawbacks of the different types of deployments, and the overlap between these cloud solutions.
Managed private cloud
Advantages and challenges of managed private cloud
Since deployments are done in a single-tenant environment, a managed private cloud is usually cost-prohibitive for small and medium-sized businesses. While server upkeep and maintenance, including network management and security, are handled by the service provider, the client is charged for all such services. It is up to the potential client to determine whether a managed private cloud solution aligns with their business objectives and budget. Although the service provider maintains the upkeep of servers, network, and platform infrastructure, sensitive data is typically not stored on managed private clouds, as it may leave business-critical information prone to breaches via third-party attacks on the cloud service provider.
Managed private cloud
Advantages and challenges of managed private cloud
Common customizations and integrations include:
Active Directory
Single Sign-on
Learning Management Systems
Video Teleconferencing
Managed private cloud
Deployment strategies and service providers
Software companies have taken a variety of strategies in the managed private cloud realm. Some software organizations, such as Microsoft, have provided managed private cloud options internally. Companies that offer an on-premises deployment option by definition enable third-party companies to market managed private cloud solutions. A few managed private cloud service providers are: Adobe Connect: Adobe Connect may be purchased for on-premises deployment, multi-tenant hosted deployment, managed private cloud as ACMS, or managed by the third-party managed private cloud provider ConnectSolutions.
Managed private cloud
Deployment strategies and service providers
Rackspace
CenturyLink
Microsoft: licenses for Lync, SharePoint and Exchange may be purchased for on-premises deployment, a multi-tenant hosted deployment via Office 365, or managed by third-party cloud hosting from Azaleos, ConnectSolutions and others.
Others: Popular web conferencing products such as Cisco WebEx, Citrix GoToMeeting and Skype are available via multitenancy, and are not available in a managed private cloud environment.
Days of Memories
Days of Memories
Days of Memories is a series of dating sims from SNK for cell phones, beginning in 2005. SNK released a compilation of the first three games for the Nintendo DS in 2007, with new graphics and an extra viewing mode.
Days of Memories
Summary
The games are dating sims starring SNK and ADK characters that take place in a parallel world to their own. In each game, the player is given the month of July to start a relationship with one of the girls featured in the game, in order to finish the game with the beginnings of a workable relationship.
Days of Memories
Games
Days of Memories ~Boku to Kanojo no Atsui Natsu~ (Days of Memories 〜僕と彼女の熱い夏〜)
Released on October 17th, 2005. The cast of this game is considered to be fan favorites from their respective debut games. Features - Athena Asamiya, Kasumi Todoh, B. Jenet, King, Mai Shiranui, Yuri Sakazaki, Leona Heidern, Kula Diamond. Male characters - Kyoya Kaido (original).
Days of Memories 2 ~Boku no Ichiban Taisetsu na Kimi e~ (Days of Memories 2 〜僕の一番大切な君へ〜)
Released on February 1st, 2006. Debuted the first unique Days of Memories character. Features - Hotaru Futaba, Kisarah Westfield, Fiolina "Fio" Germi, Chizuru Kagura, Mature, Blue Mary. Male characters - Kyo Kusanagi, Iori Yagami. Exclusive character - Shizuku Misawa.
Days of Memories ~Ōedo Ren'ai Emaki~ (Days of Memories 〜大江戸恋愛絵巻〜)
Released on May 15th, 2006. Set during the era of feudal Japan, it is the first game in the series to show where the girls are. Features - Nakoruru, Mina Majikina, Rinka Yoshino, Saya, Mikoto, Shiki, Iroha. Male characters - Haohmaru, Genjuro Kibagami, Ukyo Tachibana, Kyouemon (original). Exclusive characters - Shino, Chiyo. This game features only Samurai Shodown characters, rather than the normal cast of The King of Fighters characters.
Days of Memories ~Kare to Watashi no Atsui Natsu~ (Days of Memories 〜彼と私の熱い夏〜)
Released on November 1st, 2006. This game is marketed as a dating game for girls, rather than the normal male perspective. Features - Kyo Kusanagi, Iori Yagami, K', Ash Crimson, Terry Bogard, Rock Howard, Alba Meira, Ryo Sakazaki.
Days of Memories ~Koi wa Good Job!~ (Days of Memories 〜恋はグッジョブ!〜)
Released on April 3rd, 2007. This game focuses on characters at work in various jobs, related to their normal game appearances. Features - Kisarah Westfield, King, Kasumi Todoh, Mai Shiranui, Ai, Athena Asamiya. Male characters - Geese Howard, Wolfgang Krauser, Konoe Hideki (original). Exclusive character - Karen Ōkain. All characters except Ai and Karen appeared first in the original two games.
Days of Memories
Released on June 14th, 2007. Compilation of the first three Days of Memories games for the Nintendo DS.
Days of Memories ~Junpaku no Tenshitachi~ (Days of Memories 〜純白の天使たち〜)
Released on June 19th, 2007. The character roster is taken from The King of Fighters XI and KOF: Maximum Impact 2. Features - Ninon Beart, Elisabeth Blanctorche, Luise Meyrink, Momoko, Malin, Vanessa, Kaoru Watabe (Athena Asamiya's fan and friend), Alice Garnet Nakata (from the Fatal Fury slot machine; Alice would later appear in The King of Fighters XIV). Side characters - Mignon Beart. Male characters - Magaki, Shion. Exclusive characters - Ayame Ichitsuka, Tsugumi Ichitsuka.
Days of Memories 2
Released on April 24th, 2008. Compilation of the fourth to sixth Days of Memories games for the Nintendo DS.
Days of Memories ~Boku to Kanojo to Koto no Koi~ (Days of Memories 〜僕と彼女と古都の恋〜)
Released on May 5th, 2008. This is the first game in the series to include characters from The Last Blade series. Features - Athena Asamiya, Leona Heidern, Kula Diamond, Angel, Whip. Side characters - Rimururu, Tsunami (from the exclusive Iroha game). Male characters - Kyo Kusanagi, K', Ash Crimson, Haohmaru, Genjuro Kibagami, Setsuna, Kojiroh Sanada. Exclusive character - Kamisaki Misato.
Imidazolidinone
Imidazolidinone
Imidazolidinones or imidazolinones are a class of 5-membered ring heterocycles structurally related to imidazole. Imidazolidinones feature an otherwise saturated C3N2 nucleus bearing a urea or amide functional group at the 2- or 4-position.
Imidazolidinone
2-Imidazolidinones
The 2-imidazolidinones are cyclic derivatives of urea. 1,3-Dimethyl-2-imidazolidinone is a polar solvent and Lewis base. Drugs featuring this ring system include emicerfont, imidapril, and azlocillin. Dimethylol ethylene urea is the reagent used in permanent press clothing.
Imidazolidinone
4-Imidazolidinones
4-Imidazolidinones can be prepared from phenylalanine in two chemical steps (amidation with methylamine followed by a condensation reaction with acetone). Imidazolidinone catalysts work by forming an iminium ion with the carbonyl groups of α,β-unsaturated aldehydes (enals) and enones in a rapid chemical equilibrium. This iminium activation lowers the substrate's LUMO. Several 4-imidazolidinones have been investigated. Drugs featuring the 4-imidazolidinone ring include hetacillin, NNC 63-0532, spiperone, and spiroxatrine.
Imidazolidinone
Imidazolones
Imidazolones (also called imidazolinones) are oxo derivatives of imidazoline (dihydroimidazoles). Examples include imidazol-4-one-5-propionic acid, a product of the catabolism of histidine, and imazaquin, a member of the imidazolinone class of herbicide.
Carbon tetrachloride
Carbon tetrachloride
Carbon tetrachloride, also known by many other names (including carbon tet for short, and tetrachloromethane, the name recognised by the IUPAC), is a chemical compound with the chemical formula CCl4. It is a non-flammable, colourless liquid with a "sweet" chloroform-like smell that can be detected at low levels. It was formerly widely used in fire extinguishers, as a precursor to refrigerants and as a cleaning agent, but has since been phased out because of environmental and safety concerns. Exposure to high concentrations of carbon tetrachloride can affect the central nervous system and degenerate the liver and kidneys. Prolonged exposure can be fatal.
Carbon tetrachloride
Carbon tetrachloride
Tradenames include: Carbon-Tet, Katharin (Germany, 1890s), Benzinoform, Carbona and Thawpit in the cleaning industry, Halon-104 in firefighting, Refrigerant-10 in HVACR, and Necatorina and Seretin as a medication.
Carbon tetrachloride
Properties
In the carbon tetrachloride molecule, four chlorine atoms are positioned symmetrically as corners in a tetrahedral configuration joined to a central carbon atom by single covalent bonds. Because of this symmetric geometry, CCl4 is non-polar. Methane gas has the same structure, making carbon tetrachloride a halomethane. As a solvent, it is well suited to dissolving other non-polar compounds such as fats and oils. It can also dissolve iodine. It is volatile, giving off vapors with a smell characteristic of other chlorinated solvents, somewhat similar to the tetrachloroethylene smell reminiscent of dry cleaners' shops.
Carbon tetrachloride
Properties
Solid tetrachloromethane has two polymorphs: crystalline II below −47.5 °C (225.6 K) and crystalline I above −47.5 °C. At −47.3 °C it has a monoclinic crystal structure with space group C2/c and lattice constants a = 20.3, b = 11.6, c = 19.9 (×10−1 nm), β = 111°. With a specific gravity greater than 1, carbon tetrachloride will be present as a dense nonaqueous phase liquid if sufficient quantities are spilled in the environment.
Carbon tetrachloride
Reactions
Despite being generally inert, carbon tetrachloride can undergo various reactions. Hydrogen or an acid in the presence of an iron catalyst can reduce carbon tetrachloride to chloroform, dichloromethane, chloromethane and even methane. When its vapours are passed through a red-hot tube, carbon tetrachloride dechlorinates to tetrachloroethylene and hexachloroethane. Carbon tetrachloride, when treated with HF, gives various compounds such as trichlorofluoromethane (R-11), dichlorodifluoromethane (R-12), chlorotrifluoromethane (R-13) and carbon tetrafluoride, with HCl as the by-product:

CCl4 + HF → CCl3F + HCl
CCl3F + HF → CCl2F2 + HCl
CCl2F2 + HF → CClF3 + HCl
CClF3 + HF → CF4 + HCl

This was once one of the main uses of carbon tetrachloride, as R-11 and R-12 were widely used as refrigerants. An alcoholic solution of potassium hydroxide decomposes it to potassium chloride and potassium carbonate in water:

CCl4 + 6 KOH → 4 KCl + K2CO3 + 3 H2O

When a mixture of carbon tetrachloride and carbon dioxide is heated to 350 °C, it gives phosgene:

CCl4 + CO2 → 2 COCl2

A similar reaction with carbon monoxide instead gives phosgene and tetrachloroethylene:

2 CCl4 + 2 CO → 2 COCl2 + C2Cl4

Reaction with hydrogen sulfide gives thiophosgene:

CCl4 + H2S → CSCl2 + 2 HCl

Reaction with sulfur trioxide gives phosgene and pyrosulfuryl chloride:

CCl4 + 2 SO3 → COCl2 + S2O5Cl2

Reaction with phosphoric anhydride gives phosgene and phosphoryl chloride:

3 CCl4 + P2O5 → 3 COCl2 + 2 POCl3

Carbon tetrachloride reacts with dry zinc oxide at 200 °C to yield zinc chloride, phosgene and carbon dioxide:

2 CCl4 + 3 ZnO → 3 ZnCl2 + COCl2 + CO2
Carbon tetrachloride
History and synthesis
Carbon tetrachloride was originally synthesized in 1820 by Michael Faraday, who named it "protochloride of carbon", by decomposition of hexachloroethane ("perchloride of carbon"), which he synthesized by chlorination of ethylene. The protochloride of carbon had previously been misidentified as tetrachloroethylene because both were made by the same reaction from hexachloroethane. Later in the 19th century, the name protochloride of carbon was used for tetrachloroethylene, and carbon tetrachloride was called "bichloride of carbon" or "perchloride of carbon". In 1839, Henri Victor Regnault developed another method to synthesise carbon tetrachloride, from chloroform, chloroethane or methanol with excess chlorine. Kolbe made carbon tetrachloride in 1845 by passing chlorine over carbon disulfide through a porcelain tube. Prior to the 1950s, carbon tetrachloride was manufactured by the chlorination of carbon disulfide at 105 to 130 °C:

CS2 + 3 Cl2 → CCl4 + S2Cl2

Today it is mainly produced from methane:

CH4 + 4 Cl2 → CCl4 + 4 HCl

The production often utilizes by-products of other chlorination reactions, such as from the syntheses of dichloromethane and chloroform. Higher chlorocarbons are also subjected to this process, named "chlorinolysis":

C2Cl6 + Cl2 → 2 CCl4

The production of carbon tetrachloride has steeply declined since the 1980s due to environmental concerns and the decreased demand for CFCs, which were derived from carbon tetrachloride. In 1992, production in the U.S./Europe/Japan was estimated at 720,000 tonnes.
Carbon tetrachloride
History and synthesis
Natural occurrence
Carbon tetrachloride has been found, along with chloromethane and chloroform, in oceans, marine algae and volcanoes. Natural emissions of carbon tetrachloride are small compared to those from anthropogenic sources; for example, the Momotombo Volcano in Nicaragua emits carbon tetrachloride at a flux of 82 grams per year, while global industrial emissions were 2 × 1010 grams per year. Carbon tetrachloride has been found in the red algae Asparagopsis taxiformis and Asparagopsis armata. It has also been detected in Southern California ecosystems, salt lakes of the Kalmykian Steppe and a common liverwort in Czechia.
Carbon tetrachloride
Safety
At high temperatures in air, it decomposes or burns to produce poisonous phosgene. This was a common problem when carbon tetrachloride was used as a fire extinguisher: deaths due to its conversion to phosgene have been reported. Carbon tetrachloride is a suspected human carcinogen based on sufficient evidence of carcinogenicity from studies in experimental animals. The World Health Organization reports that carbon tetrachloride can induce hepatocellular carcinomas (hepatomas) in mice and rats. The doses inducing hepatic tumours are higher than those inducing cell toxicity. The International Agency for Research on Cancer (IARC) classified this compound in Group 2B, "possibly carcinogenic to humans". Carbon tetrachloride is one of the most potent hepatotoxins (toxic to the liver), so much so that it is widely used in scientific research to evaluate hepatoprotective agents. Exposure to high concentrations of carbon tetrachloride (including vapor) can affect the central nervous system and degenerate the liver and kidneys, and prolonged exposure may lead to coma or death. Chronic exposure to carbon tetrachloride can cause liver and kidney damage and could result in cancer. See safety data sheets. Consumption of alcohol increases the toxic effects of carbon tetrachloride and may cause more severe organ damage, such as acute renal failure, in heavy drinkers; doses that cause mild toxicity in non-drinkers can be fatal to drinkers. The effects of carbon tetrachloride on human health and the environment were assessed under REACH in 2012 in the context of the substance evaluation by France. In 2008, a study of common cleaning products found the presence of carbon tetrachloride in "very high concentrations" (up to 101 mg/m3) as a result of manufacturers' mixing of surfactants or soap with sodium hypochlorite (bleach). Carbon tetrachloride is also both ozone-depleting and a greenhouse gas.
However, since 1992 its atmospheric concentrations have been in decline for the reasons described above (see atmospheric concentration graphs in the gallery). CCl4 has an atmospheric lifetime of 85 years.
Carbon tetrachloride
Uses
In organic chemistry, carbon tetrachloride serves as a source of chlorine in the Appel reaction. Carbon tetrachloride made from heavy chlorine-37 has been used in the detection of neutrinos.
Carbon tetrachloride
Historical uses
Carbon tetrachloride was widely used as a dry cleaning solvent, as a refrigerant, and in lava lamps. In the last case, carbon tetrachloride is a key ingredient that adds weight to the otherwise buoyant wax.
Carbon tetrachloride
Historical uses
One specialty use of carbon tetrachloride was in stamp collecting, to reveal watermarks on postage stamps without damaging them. A small amount of the liquid is placed on the back of a stamp, sitting in a black glass or obsidian tray. The letters or design of the watermark can then be seen clearly. Today, this is done on lit tables without using carbon tetrachloride.
Carbon tetrachloride
Historical uses
Cleaning
Being a good solvent for many materials (such as grease and tar), carbon tetrachloride was widely used as a cleaning fluid for nearly 70 years. It is nonflammable and nonexplosive, and unlike gasoline, which was also used for cleaning at the time, it did not leave any odour on the cleaned material; it was marketed as a "safe" alternative to gasoline. It was first marketed as Katharin in 1892, and later as Benzinoform. Carbon tetrachloride was the first chlorinated solvent used in dry cleaning and remained in use until the 1950s. Because it was corrosive to dry-cleaning equipment and caused illness among dry-cleaning operators, it was replaced by trichloroethylene, tetrachloroethylene and methyl chloroform (trichloroethane). Carbon tetrachloride was also used as an alternative to petrol (gasoline) in dry shampoos, from the beginning of 1903 to the 1930s. Several women fainted from its fumes while having their hair washed in barber shops, so hairdressers often used electric fans to blow the fumes away. In 1909, a baronet's daughter, Helenora Elphinstone-Dalrymple (aged 29), died after having her hair shampooed with carbon tetrachloride. Carbon tetrachloride was reportedly still used as a dry-cleaning solvent in North Korea as of 2006.
Carbon tetrachloride
Historical uses
Medical uses
Carbon tetrachloride was briefly used as a volatile inhalation anaesthetic and as an analgesic for intense menstruation pains and headaches in the mid-19th century. Its anaesthetic effects were known as early as 1847 or 1848. It was introduced as a safer alternative to chloroform by Doctor Protheroe Smith in 1864. In December 1865, James Young Simpson, the Scottish obstetrician who discovered the anaesthetic effects of chloroform on humans, experimented with carbon tetrachloride as an anaesthetic. Simpson named the compound "chlorocarbon" for its similarity to chloroform. His experiments involved injecting carbon tetrachloride into two women's vaginas. Simpson also consumed carbon tetrachloride orally and described it as having "the same effect as swallowing a capsule of chloroform". Because of the higher number of chlorine atoms in its molecule (compared to chloroform), carbon tetrachloride has a stronger anaesthetic effect than chloroform, and a smaller amount was required. Its anaesthetic action was likened to that of ether rather than the related chloroform. It is less volatile than chloroform, so it was more difficult to apply and needed warm water to evaporate. Its smell has been described as "fruity", quince-like and "more pleasant than chloroform", and it had a "pleasant taste". Carbon tetrachloride for anaesthetic use was made by the chlorination of carbon disulfide. It was used on at least 50 patients, most of whom were women in labour. During anaesthesia, carbon tetrachloride caused violent muscular contractions and negative effects on the heart in some patients, so severe that it had to be substituted with chloroform or ether. Such use was experimental, and the anaesthetic use of carbon tetrachloride never gained popularity due to its potential toxicity.
Carbon tetrachloride
Historical uses
The veterinary doctor Maurice Crowther Hall (1881–1938) discovered in 1921 that carbon tetrachloride, when ingested, was incredibly effective as an anthelminthic in eradicating hookworm. Beginning in 1922, capsules of pure carbon tetrachloride were marketed by Merck under the name Necatorina (variants include Neo-necatorina and Necatorine). Necatorina was used as a medication against parasitic diseases in humans, most prevalently in Latin American countries. Its toxicity was not well understood at the time, and toxic effects were attributed to impurities in the capsules rather than to carbon tetrachloride itself.
Carbon tetrachloride
Historical uses
Solvent
It was once a popular solvent in organic chemistry, but because of its adverse health effects, it is rarely used today. It is sometimes useful as a solvent for infrared spectroscopy because it has no significant absorption bands above 1600 cm−1. Because carbon tetrachloride does not have any hydrogen atoms, it was historically used in proton NMR spectroscopy; in addition to being toxic, however, its dissolving power is low, and its use in NMR spectroscopy has been largely superseded by deuterated solvents (mainly deuterochloroform). The use of carbon tetrachloride in the determination of oil has been replaced by various other solvents, such as tetrachloroethylene. Because it has no C–H bonds, carbon tetrachloride does not easily undergo free-radical reactions, which makes it a useful solvent for halogenations either by an elemental halogen or by a halogenation reagent such as N-bromosuccinimide (these conditions are known as Wohl–Ziegler bromination).
Carbon tetrachloride
Historical uses
Fire suppression
In 1910, the Pyrene Manufacturing Company of Delaware filed a patent to use carbon tetrachloride to extinguish fires. The liquid was vaporized by the heat of combustion and extinguished flames, an early form of gaseous fire suppression. At the time it was believed the gas simply displaced oxygen in the area near the fire, but later research found that the gas actually inhibits the chemical chain reaction of the combustion process. In 1911, Pyrene patented a small, portable extinguisher that used the chemical. The extinguisher consisted of a brass bottle with an integrated hand-pump that was used to expel a jet of liquid toward the fire. As the container was unpressurized, it could easily be refilled after use. Carbon tetrachloride was suitable for liquid and electrical fires, and the extinguishers were often carried on aircraft or motor vehicles. However, as early as 1920, there were reports of fatalities caused by the chemical when used to fight a fire in a confined space. In the first half of the 20th century, another common fire extinguisher was a single-use, sealed glass globe known as a "fire grenade", filled with either carbon tetrachloride or salt water. The bulb could be thrown at the base of the flames to quench the fire. The carbon tetrachloride type could also be installed in a spring-loaded wall fixture with a solder-based restraint. When the solder was melted by high heat, the spring would either break the globe or launch it out of the bracket, allowing the extinguishing agent to be automatically dispersed into the fire. A well-known brand of fire grenade was the "Red Comet", which was variously manufactured with other fire-fighting equipment in the Denver, Colorado area by the Red Comet Manufacturing Company from its founding in 1919 until manufacturing operations were closed in the early 1980s. Since carbon tetrachloride freezes at −23 °C, the fire extinguishers would contain only 89–90% carbon tetrachloride, with 10% trichloroethylene (m.p. −85 °C) or chloroform (m.p. −63 °C) to lower the freezing point. The extinguishers with 10% trichloroethylene would also contain 1% carbon disulfide as a stabiliser.
Carbon tetrachloride
Historical uses
Refrigerants
Prior to the Montreal Protocol, large quantities of carbon tetrachloride were used to produce the chlorofluorocarbon refrigerants R-11 (trichlorofluoromethane) and R-12 (dichlorodifluoromethane). However, these refrigerants play a role in ozone depletion and have been phased out. Carbon tetrachloride is still used to manufacture less destructive refrigerants.

Fumigant
Carbon tetrachloride was widely used as a fumigant to kill insect pests in stored grain. It was employed in a mixture known as 80/20, which was 80% carbon tetrachloride and 20% carbon disulfide. The United States Environmental Protection Agency banned its use in 1985.
Carbon tetrachloride
Society and culture
The French writer René Daumal intoxicated himself by inhaling the carbon tetrachloride he used to kill the beetles he collected, voluntarily plunging himself into intoxications close to comatose states in order to "encounter other worlds". Carbon tetrachloride is listed (along with salicylic acid, toluene, sodium tetraborate, silica gel, methanol, potassium carbonate, ethyl acetate and "BHA") as an ingredient in Peter Parker's (Spider-Man) custom web fluid formula in the book The Wakanda Files: A Technological Exploration of the Avengers and Beyond.
Carbon tetrachloride
Society and culture
In 2019, Australian YouTuber Tom of Explosions&Fire and Extractions&Ire made a video on extracting carbon tetrachloride from an old fire extinguisher, and later experimented with it by mixing it with sodium. The chemical gained a fan base called the "Tet Gang" on social media (especially on Reddit), and the channel owner later used carbon tetrachloride-themed designs in the channel's merch.
Carbon tetrachloride
Society and culture
In the Ramones song "Carbona Not Glue", released in 1977, the narrator says that huffing the vapours of Carbona, a carbon tetrachloride-based stain remover, was better than huffing glue. The song was later removed from the album because Carbona was a corporate trademark.

Famous deaths from carbon tetrachloride poisoning
Evalyn Bostock (1917–1944), British actress who died after accidentally drinking carbon tetrachloride, having mistaken it for her drink while working in a photographic darkroom. Harry Edwards (1887–1952), American director who died from carbon tetrachloride poisoning shortly after directing his first television production. Zilphia Horton (1910–1952), American musician and activist who died after accidentally drinking a glass of carbon tetrachloride-based typewriter cleaning fluid that she mistook for water. Margo Jones (1911–1955), American stage director who was exposed to the fumes of carbon tetrachloride used to clean paint off a carpet; she died a week later from kidney failure. Jim Beck (1919–1956), American record producer who died after exposure to carbon tetrachloride fumes while cleaning recording equipment. Tommy Tucker (1933–1982), American blues singer who died after using carbon tetrachloride in floor refinishing.
Similac
Similac
Similac (for "similar to lactation") is a brand of infant formula that was developed by Alfred Bosworth of Tufts University and marketed by Abbott Laboratories. It was first released in the late 1920s, and then reformulated and concentrated in 1951. Today, Similac is sold in 96 countries worldwide.
Similac
History
1903 - Harry C. Moores and Stanley M. Ross launch the Moores & Ross Milk Company, which specialized in bottling milk for home delivery. 1925 - Alfred Bosworth creates an infant formula called "Franklin Infant Food", later renamed Similac. 1928 - The company renames itself "M&R Dietetic Laboratories", sells off its regular milk operations to Borden and focuses on infant milk. 1950 - The company introduces "Similac Concentrated Liquid" in the USA, a non-powder infant formula. 1959 - The company launches "Similac with Iron", an iron-fortified infant formula. 1961 - Similac opens a new plant in the Netherlands, its first factory outside of the US. 1962 - Similac begins offering "Similac PM 60/40", for babies with specific medical conditions. 1964 - The company merges with Abbott Laboratories. 1966 - Similac introduces "Isomil", a soy-based formula. 1970 - Similac arrives in Israel. 1994 - Similac launches "NeoCare", a formula tailored to premature babies, later renamed "Similac NeoSure". 1999 - Similac creates the "Similac with Iron Ready to Feed" formula bottle. 2000 - Similac starts offering "Human Milk Fortifier". 2002 - Similac introduces "Similac Advance with Iron", an infant formula with DHA and ARA. 2006 - Similac launches "Similac Organic", a certified USDA organic infant formula. 2011 - Similac launches "Similac Advance Plus", "Similac LeMehadrin" and "Similac Gentle" (a lactose-free formula). 2013 - Similac begins offering "Similac Human Milk Fortifier Concentrated Liquid" for preterm babies in NICUs, launches a formula designed for breastfeeding moms who choose to supplement, and launches "The Baby Journal" app, Diaper Decoder and Ecodu developmental kits. 2014 - Similac promotes "Similac Breastfeeding Supplement" for nursing mothers. 2015 - Similac introduces "Similac Advance NON-GMO", a formula with ingredients that are not genetically engineered.
Similac also delivers a "big hit" commercial, in which Hilary and Haylie Duff teamed up with Similac "to help raise awareness against mom-on-mom bullying". 2016 - Similac introduces "Go & Grow by Similac Food Mix-Ins", a supplement designed to mix into the food of toddlers.
Similac
History
Similac begins offering "Pure Bliss by Similac", a formula starting with fresh milk from grass-fed cows that has no artificial growth hormones or antibiotics. Similac launches "Similac Pro-Advance" and "Similac Pro-Sensitive", formulas containing 2'-FL human milk oligosaccharide. 2022 - By February 2022, Abbott had initiated a voluntary recall of some Similac and Alimentum powdered infant formula (PIF) after finding evidence of Cronobacter sakazakii in some areas of Abbott's Sturgis, Michigan facility, known for manufacturing Similac, the leading PIF brand. In the United States, about 90% of the multibillion-dollar PIF market is controlled by only four companies, including Abbott, and the Sturgis facility is Abbott's largest. Most of Abbott's powdered formula was produced there, mainly under the Similac brand name, representing 40% of the US market. The Office of the Commissioner of the Food and Drug Administration (FDA) published a May 2022 update on the recall of certain Similac, Alimentum and EleCare products as it investigated four cases of hospitalized infants involving Cronobacter sakazakii infection following the infants' consumption of PIF produced in the Sturgis plant. Abbott shut down the Sturgis plant out of an abundance of caution; there was no evidence that the infants' infections were caused by the powdered formula. The closure of the Sturgis plant for five months exacerbated the 2022 United States infant formula shortage, which peaked in May. As of June 2022, the FDA was unable to prove a causal relationship between the deaths of nine infants who had consumed Abbott's PIF and Abbott products. The plant reopened in June.
Similac
Product Lineup
Premature
Newborn & Infants
Toddlers
For Mothers
Similac
Ingredients
Each formula contains various ingredients, but most have OptiGRO, a mixture containing DHA, lutein, vitamin E, nucleotides, antioxidants and prebiotics.
Epimestrol
Epimestrol
Epimestrol (INN, USAN, BAN) (brand names Alene, Stimovul; former developmental code name ORG-817), also known as 3-methoxy-17-epiestriol, is a synthetic, steroidal estrogen and an estrogen ether and prodrug of 17-epiestriol. It has been used as a component of ovulation induction in combination with gonadotropin-releasing hormone.
Mediation (statistics)
Mediation (statistics)
In statistics, a mediation model seeks to identify and explain the mechanism or process that underlies an observed relationship between an independent variable and a dependent variable via the inclusion of a third hypothetical variable, known as a mediator variable (also a mediating variable, intermediary variable, or intervening variable). Rather than a direct causal relationship between the independent variable and the dependent variable, a mediation model proposes that the independent variable influences the mediator variable, which in turn influences the dependent variable. Thus, the mediator variable serves to clarify the nature of the relationship between the independent and dependent variables.Mediation analyses are employed to understand a known relationship by exploring the underlying mechanism or process by which one variable influences another variable through a mediator variable. In particular, mediation analysis can contribute to better understanding the relationship between an independent variable and a dependent variable when these variables do not have an obvious direct connection.
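The mediation structure described above can be made concrete with a small simulation. This is a hypothetical sketch: the variable names and path coefficients are illustrative, not drawn from any cited study.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Hypothetical fully mediated model: X influences Y only through M.
x = rng.normal(size=n)
m = 0.6 * x + rng.normal(size=n)   # independent variable -> mediator (path a = 0.6)
y = 0.5 * m + rng.normal(size=n)   # mediator -> dependent variable (path b = 0.5)

# X and Y are nonetheless associated: the observed X-Y slope is
# roughly a * b = 0.3, transmitted entirely through the mediator.
xy_slope = np.polyfit(x, y, 1)[0]
print(round(xy_slope, 2))
```

Even though no term links X to Y directly, the fitted X-Y slope is close to the product of the two paths, which is exactly the relationship a mediation analysis seeks to expose.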
Mediation (statistics)
Baron and Kenny's (1986) steps for mediation analysis
Baron and Kenny (1986) laid out several requirements that must be met to form a true mediation relationship. They are outlined below using a real-world example. See the diagram above for a visual representation of the overall mediating relationship to be explained. Note: Hayes (2009) critiqued Baron and Kenny's mediation steps approach, and as of 2019, David A. Kenny on his website stated that mediation can exist in the absence of a 'significant' total effect, and therefore step 1 below may not be needed. This situation is sometimes referred to as "inconsistent mediation". Later publications by Hayes also questioned the concepts of full or partial mediation and advocated for these terms, along with the classical mediation steps approach outlined below, to be abandoned.
Mediation (statistics)
Baron and Kenny's (1986) steps for mediation analysis
Step 1
Regress the dependent variable on the independent variable to confirm that the independent variable is a significant predictor of the dependent variable.

Independent variable → dependent variable:
Y = β10 + β11X + ε1, where β11 is significant.

Step 2
Regress the mediator on the independent variable to confirm that the independent variable is a significant predictor of the mediator. If the mediator is not associated with the independent variable, then it could not possibly mediate anything.

Independent variable → mediator:
Me = β20 + β21X + ε2, where β21 is significant.

Step 3
Regress the dependent variable on both the mediator and independent variable to confirm that a) the mediator is a significant predictor of the dependent variable, and b) the strength of the coefficient of the previously significant independent variable in Step 1 is now greatly reduced, if not rendered nonsignificant.
Mediation (statistics)
Baron and Kenny's (1986) steps for mediation analysis
Y = β30 + β31X + β32Me + ε3, where β32 is significant and β31 is smaller in absolute value than the original effect for the independent variable (β11 above).

Example
The following example, drawn from Howell (2009), explains each step of Baron and Kenny's requirements to understand further how a mediation effect is characterized. Steps 1 and 2 use simple regression analysis, whereas step 3 uses multiple regression analysis.
Mediation (statistics)
Baron and Kenny's (1986) steps for mediation analysis
How you were parented (i.e., independent variable) predicts how confident you feel about parenting your own children (i.e., dependent variable). How you were parented (i.e., independent variable) predicts your feelings of competence and self-esteem (i.e., mediator).
Mediation (statistics)
Baron and Kenny's (1986) steps for mediation analysis
Your feelings of competence and self-esteem (i.e., mediator) predict how confident you feel about parenting your own children (i.e., dependent variable), while controlling for how you were parented (i.e., independent variable). Such findings would lead to the conclusion that your feelings of competence and self-esteem mediate the relationship between how you were parented and how confident you feel about parenting your own children.
Mediation (statistics)
Baron and Kenny's (1986) steps for mediation analysis
If step 1 does not yield a significant result, one may still have grounds to move to step 2. Sometimes there actually is a significant relationship between the independent and dependent variables, but because of small sample sizes or other extraneous factors, there may not be enough statistical power to detect the effect that exists.
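The three regression steps above can be sketched with ordinary least squares on simulated data. This is an illustration only: the coefficients (a = 0.7, b = 0.4, direct effect 0.2) are hypothetical, chosen to produce a clear partial-mediation pattern.

```python
import numpy as np

def ols(cols, y):
    """Least-squares fit with an intercept; returns [intercept, coefficients...]."""
    X = np.column_stack([np.ones(len(y))] + list(cols))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)                       # independent variable
m = 0.7 * x + rng.normal(size=n)             # mediator
y = 0.4 * m + 0.2 * x + rng.normal(size=n)   # dependent variable (partial mediation)

c = ols([x], y)[1]        # Step 1: total effect of X on Y (about 0.2 + 0.7*0.4 = 0.48)
a = ols([x], m)[1]        # Step 2: effect of X on the mediator (about 0.7)
c_prime, b = ols([x, m], y)[1:]   # Step 3: X and M together predict Y

assert abs(c_prime) < abs(c)      # Step 3 criterion: X's coefficient is reduced
print(round(c, 2), round(a, 2), round(c_prime, 2), round(b, 2))
```

With the mediator in the model, the coefficient on X shrinks from the total effect toward the direct effect, which is the pattern Baron and Kenny's step 3 looks for (significance testing of each coefficient is omitted here for brevity).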
Mediation (statistics)
Direct versus indirect effects
In the diagram shown above, the indirect effect is the product of path coefficients "A" and "B". The direct effect is the coefficient " C' ". The direct effect measures the extent to which the dependent variable changes when the independent variable increases by one unit and the mediator variable remains unaltered. In contrast, the indirect effect measures the extent to which the dependent variable changes when the independent variable is held constant and the mediator variable changes by the amount it would have changed had the independent variable increased by one unit.
Mediation (statistics)
Direct versus indirect effects
In linear systems, the total effect is equal to the sum of the direct and indirect effects (C' + AB in the model above). In nonlinear models, the total effect is not generally equal to the sum of the direct and indirect effects, but to a modified combination of the two.
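The linear decomposition can be checked with a line of arithmetic, using hypothetical path coefficients (not taken from the source):

```python
# Hypothetical path coefficients: A is X->M, B is M->Y, C' is the direct X->Y path.
A, B, C_prime = 0.7, 0.4, 0.2

indirect = round(A * B, 2)          # indirect effect through the mediator
total = round(C_prime + A * B, 2)   # in a linear system, total = C' + A*B
print(indirect, total)
```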
Mediation (statistics)
Full versus partial mediation
A mediator variable can either account for all or some of the observed relationship between two variables. Full mediation Maximum evidence for mediation, also called full mediation, would occur if inclusion of the mediation variable drops the relationship between the independent variable and dependent variable (see pathway c in diagram above) to zero. Partial mediation Partial mediation maintains that the mediating variable accounts for some, but not all, of the relationship between the independent variable and dependent variable. Partial mediation implies that there is not only a significant relationship between the mediator and the dependent variable, but also some direct relationship between the independent and dependent variable.
Mediation (statistics)
Full versus partial mediation
In order for either full or partial mediation to be established, the reduction in variance explained by the independent variable must be significant as determined by one of several tests, such as the Sobel test. The effect of an independent variable on the dependent variable can become nonsignificant when the mediator is introduced simply because a trivial amount of variance is explained (i.e., not true mediation). Thus, it is imperative to show a significant reduction in variance explained by the independent variable before asserting either full or partial mediation. It is possible to have statistically significant indirect effects in the absence of a total effect. This can be explained by the presence of several mediating paths that cancel each other out, and become noticeable when one of the cancelling mediators is controlled for. This implies that the terms 'partial' and 'full' mediation should always be interpreted relative to the set of variables that are present in the model. In all cases, the operation of "fixing a variable" must be distinguished from that of "controlling for a variable," which has been inappropriately used in the literature. The former stands for physically fixing, while the latter stands for conditioning on, adjusting for, or adding to the regression model. The two notions coincide only when all error terms (not shown in the diagram) are statistically uncorrelated. When errors are correlated, adjustments must be made to neutralize those correlations before embarking on mediation analysis (see Bayesian network).
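The cancellation phenomenon mentioned above can be demonstrated by simulation. In this hypothetical sketch, a direct effect of −0.3 offsets an indirect effect of 0.6 × 0.5 = 0.3, so the total effect is near zero even though the indirect path is real.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 5000
x = rng.normal(size=n)
m = 0.6 * x + rng.normal(size=n)            # indirect path: a = 0.6
y = 0.5 * m - 0.3 * x + rng.normal(size=n)  # b = 0.5, direct effect c' = -0.3

total = np.polyfit(x, y, 1)[0]              # near zero: the two paths cancel
X2 = np.column_stack([np.ones(n), x, m])
_, direct, b = np.linalg.lstsq(X2, y, rcond=None)[0]
a = np.polyfit(x, m, 1)[0]

print(round(total, 2), round(a * b, 2), round(direct, 2))
```

Testing only the total effect (Baron and Kenny's step 1) would miss this mediation entirely, which is why later authors argued that a significant total effect should not be a prerequisite.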
Sobel's test
Sobel's test is performed to determine whether the relationship between the independent variable and the dependent variable has been significantly reduced after inclusion of the mediator variable. In other words, the test assesses whether a mediation effect is significant by comparing the relationship between the independent and dependent variables before and after the mediator is included in the model.
The Sobel test is more accurate than the Baron and Kenny steps explained above; however, it has low statistical power. As such, large sample sizes are required in order to have sufficient power to detect significant effects. This is because a key assumption of Sobel's test is normality: because the test evaluates its statistic against the normal distribution, small sample sizes and skewness of the sampling distribution can be problematic (see Normal distribution for more details). Thus, the rule of thumb suggested by MacKinnon et al. (2002) is that a sample size of 1000 is required to detect a small effect, a sample size of 100 is sufficient to detect a medium effect, and a sample size of 50 is required to detect a large effect.
The equation for Sobel's test statistic is

z = \frac{ab}{\sqrt{b^{2}s_{a}^{2} + a^{2}s_{b}^{2}}}

where a is the coefficient of the independent variable in the regression predicting the mediator, b is the coefficient of the mediator in the regression predicting the dependent variable, and s_a and s_b are their respective standard errors.
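A minimal sketch of the computation, assuming a, b and their standard errors s_a, s_b have already been obtained from the two component regressions; the numeric values below are invented for illustration.

```python
import math

def sobel_z(a, s_a, b, s_b):
    """Sobel test statistic z = ab / sqrt(b^2*s_a^2 + a^2*s_b^2)."""
    return (a * b) / math.sqrt(b ** 2 * s_a ** 2 + a ** 2 * s_b ** 2)

def two_sided_p(z):
    """Two-sided p-value of z under the standard normal distribution."""
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

# Hypothetical regression results: a, s_a from M ~ X and b, s_b from Y ~ X + M.
z = sobel_z(a=0.5, s_a=0.1, b=0.4, s_b=0.1)
print(f"z = {z:.3f}, p = {two_sided_p(z):.4f}")
```

A p-value below the chosen significance level would indicate that the indirect effect ab is significantly different from zero.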
Preacher–Hayes bootstrap method
The bootstrapping method provides some advantages over Sobel's test, primarily an increase in power. The Preacher and Hayes bootstrapping method is a non-parametric test and does not impose the assumption of normality; therefore, if the raw data are available, the bootstrap method is recommended. Bootstrapping involves repeatedly drawing random samples of observations, with replacement, from the data set and computing the desired statistic in each resample. Computing over hundreds or thousands of bootstrap resamples provides an approximation of the sampling distribution of the statistic of interest. The Preacher–Hayes method provides point estimates and confidence intervals by which one can assess the significance or nonsignificance of a mediation effect. The point estimate is the mean over the bootstrapped samples, and if zero does not fall within the resulting confidence interval, one can confidently conclude that there is a significant mediation effect to report.
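A minimal sketch of a percentile-bootstrap confidence interval for the indirect effect ab, in the spirit of the Preacher–Hayes approach. The synthetic data and helper function are invented for illustration; a real analysis would typically use dedicated software rather than hand-rolled code.

```python
import random

random.seed(1)
n = 300
x = [random.gauss(0, 1) for _ in range(n)]
m = [0.6 * xi + random.gauss(0, 0.5) for xi in x]
y = [0.2 * xi + 0.5 * mi + random.gauss(0, 0.5) for xi, mi in zip(x, m)]

def indirect_effect(xs, ms, ys):
    """a*b from the regressions M ~ X and Y ~ X + M (centered OLS)."""
    mux, mum, muy = (sum(v) / len(v) for v in (xs, ms, ys))
    cx = [v - mux for v in xs]
    cm = [v - mum for v in ms]
    cy = [v - muy for v in ys]
    Sxx = sum(v * v for v in cx)
    Smm = sum(v * v for v in cm)
    Sxm = sum(p * q for p, q in zip(cx, cm))
    Sxy = sum(p * q for p, q in zip(cx, cy))
    Smy = sum(p * q for p, q in zip(cm, cy))
    a = Sxm / Sxx                                          # path a from M ~ X
    b = (Sxx * Smy - Sxm * Sxy) / (Sxx * Smm - Sxm ** 2)   # path b from Y ~ X + M
    return a * b

reps = 2000
boots = []
for _ in range(reps):
    idx = [random.randrange(n) for _ in range(n)]          # resample rows with replacement
    boots.append(indirect_effect([x[i] for i in idx],
                                 [m[i] for i in idx],
                                 [y[i] for i in idx]))
boots.sort()
lo, hi = boots[int(0.025 * reps)], boots[int(0.975 * reps) - 1]
print(f"95% percentile CI for a*b: ({lo:.3f}, {hi:.3f})")
```

If zero lies outside the printed interval, the indirect effect would be reported as significant for this sample.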
Significance of mediation
As outlined above, there are a few different options one can choose from to evaluate a mediation model.
Bootstrapping is becoming the most popular method of testing mediation because it does not require the normality assumption to be met and because it can be used effectively with smaller sample sizes (N < 25). However, mediation continues to be determined most frequently using the logic of Baron and Kenny or the Sobel test. It is becoming increasingly difficult to publish tests of mediation based purely on the Baron and Kenny method, or on tests that make distributional assumptions such as the Sobel test. Thus, it is important to consider one's options when choosing which test to conduct.
Approaches to mediation
While the concept of mediation as defined within psychology is theoretically appealing, the methods used to study mediation empirically have been challenged by statisticians and epidemiologists and interpreted formally.

Experimental-causal-chain design
An experimental-causal-chain design is used when the proposed mediator is experimentally manipulated. Such a design implies that one manipulates a controlled third variable that one has reason to believe could be the underlying mechanism of a given relationship.

Measurement-of-mediation design
A measurement-of-mediation design can be conceptualized as a statistical approach. Such a design implies that one measures the proposed intervening variable and then uses statistical analyses to establish mediation. This approach does not involve manipulation of the hypothesized mediating variable; it involves only measurement.
Criticisms of mediation measurement
Experimental approaches to mediation must be carried out with caution. First, it is important to have strong theoretical support for the exploratory investigation of a potential mediating variable. A criticism of the mediation approach rests on the ability to manipulate and measure a mediating variable: one must be able to manipulate the proposed mediator in an acceptable and ethical fashion, and one must be able to measure the intervening process without interfering with the outcome. One must also be able to establish the construct validity of the manipulation. One of the most common criticisms of the measurement-of-mediation approach is that it is ultimately a correlational design; consequently, it is possible that some other third variable, independent of the proposed mediator, could be responsible for the proposed effect. However, researchers have worked hard to provide counter-evidence to this criticism. Specifically, the following counter-arguments have been put forward:

Temporal precedence
For example, if the independent variable precedes the dependent variable in time, this would provide evidence suggesting a directional, and potentially causal, link from the independent variable to the dependent variable.
Nonspuriousness and/or no confounds
For example, should one identify other third variables and demonstrate that they do not alter the relationship between the independent variable and the dependent variable, one would have a stronger argument for the mediation effect (see Other third variables below).

Mediation can be an extremely useful and powerful statistical test; however, it must be used properly. It is important that the measures used to assess the mediator and the dependent variable are theoretically distinct, and that the independent variable and the mediator cannot interact. Should there be an interaction between the independent variable and the mediator, one would have grounds to investigate moderation.
Other third variables
Confounding
Another model that is often tested is one in which the competing variables are alternative potential mediators or an unmeasured cause of the dependent variable. An additional variable in a causal model may obscure or confound the relationship between the independent and dependent variables. Potential confounders are variables that may have a causal impact on both the independent variable and the dependent variable. They include common sources of measurement error (as discussed above) as well as other influences shared by both the independent and dependent variables.