title | section | text
---|---|---|
LibGDX
|
History
|
libGDX Jam From 18 December 2015 to 18 January 2016 a libGDX game jam was organized together with RoboVM, itch.io and Robotality. From an initial 180 theme suggestions, "Life in space" was chosen as the jam's main theme, and 83 games were created over the course of the competition.
|
LibGDX
|
Architecture
|
libGDX allows the developer to write, test, and debug their application on their own desktop PC and use the same code on Android. It abstracts away the differences between a common Windows/Linux application and an Android application. The usual development cycle consists of staying on the desktop PC as much as possible while periodically verifying that the project still works on Android. Its main goal is to provide total compatibility between desktop and mobile devices, the main differences being speed and processing power.
|
LibGDX
|
Architecture
|
Backends The library transparently uses platform-specific code through various backends to access the capabilities of the host platform. Most of the time the developer does not have to write platform-specific code, except for starter classes (also called launchers) that require different setup depending on the backend.
On the desktop the Lightweight Java Game Library (LWJGL) is used. There is also an experimental JGLFW backend that is no longer maintained. Version 1.8 introduced a new LWJGL 3 backend, intended to replace the older LWJGL 2 backend.
The HTML5 backend uses the Google Web Toolkit (GWT) for compiling the Java to JavaScript code, which is then run in a normal browser environment. libGDX provides several implementations of standard APIs that are not directly supported there, most notably reflection.
The Android backend runs Java code compiled for Android with the Android SDK.
For iOS a custom fork of RoboVM is used to compile Java to native iOS instructions. Intel's Multi-OS Engine has been provided as an alternative since the discontinuation of RoboVM.
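For illustration, a minimal desktop starter class for the LWJGL 3 backend might look like the following sketch (the MyGame and DesktopLauncher class names are hypothetical placeholders; Lwjgl3Application and Lwjgl3ApplicationConfiguration are the classes provided by the backend):

    import com.badlogic.gdx.ApplicationAdapter;
    import com.badlogic.gdx.backends.lwjgl3.Lwjgl3Application;
    import com.badlogic.gdx.backends.lwjgl3.Lwjgl3ApplicationConfiguration;

    // Hypothetical cross-platform game class; the same code runs on all backends.
    class MyGame extends ApplicationAdapter {
        @Override
        public void render() {
            // game loop body
        }
    }

    // Desktop starter class: typically the only platform-specific code needed.
    public class DesktopLauncher {
        public static void main(String[] args) {
            Lwjgl3ApplicationConfiguration config = new Lwjgl3ApplicationConfiguration();
            config.setTitle("MyGame");
            config.setWindowedMode(800, 480);
            new Lwjgl3Application(new MyGame(), config);
        }
    }

An equivalent starter class for Android or iOS would use that backend's application and configuration classes instead, while the game class itself stays unchanged.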
Other JVM languages While libGDX is written primarily in Java, the compiled bytecode is language-independent, allowing many other JVM languages to use the library directly. The documentation specifically covers interoperability with Ceylon, Clojure, Kotlin, Jython, JRuby and Scala.
|
LibGDX
|
Extensions
|
Several official and third-party extensions exist that add additional functionality to the library.
gdxAI An artificial intelligence (AI) framework that was split from the main library with version 1.4.1 in October 2014 and moved into its own repository. While it was initially made for libGDX, it can be used with other frameworks as well. The project focuses on AI useful for games, among them pathfinding, decision making and movement.
gdx-freetype Can be used to render FreeType fonts at run time instead of using static bitmap images, which do not scale as well.
Box2D A wrapper for the Box2D physics library was introduced in 2010 and moved to an extension with the 1.0 release.
packr A helper tool that bundles a custom JRE with the application so end users do not need to have one installed themselves.
|
LibGDX
|
Notable games
|
Drag Racing: Streets, Ingress (before it was relaunched as Ingress Prime), Slay the Spire, Delver, HOPLITE, Deep Town, Sandship, Unciv, Mindustry, Space Haven, Pathway, Halfway, Riiablo, Mirage Realms, Raindancer, PokeMMO, Zombie Age 3, Epic Heroes War, Shattered Pixel Dungeon, Hair Dash, Antiyoy, Wildermyth, Line-Of-Five, Labyrinthian
|
Diethyl selenide
|
Diethyl selenide
|
Diethyl selenide is an organoselenium compound with the formula C4H10Se. First reported in 1836, it was the first organoselenium compound to be discovered. It is the selenium analogue of diethyl ether. It has a strong and unpleasant smell.
|
Diethyl selenide
|
Occurrence
|
Diethyl selenide has been detected in biofuel produced from plantain peel.
It is also a minor air pollutant in some areas.
|
Diethyl selenide
|
Preparation
|
It may be prepared by a substitution reaction similar to the Williamson ether synthesis: reaction of a metal selenide, such as sodium selenide, with two equivalents of ethyl iodide or similar reagent to supply the ethyl groups:
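Na2Se + 2 C2H5I → (C2H5)2Se + 2 NaI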
|
ZMYND11
|
ZMYND11
|
Zinc finger MYND domain-containing protein 11 is a protein that in humans is encoded by the ZMYND11 gene.
|
ZMYND11
|
Function
|
The protein encoded by this gene was first identified by its ability to bind the adenovirus E1A protein. The protein localizes to the nucleus. It functions as a transcriptional repressor, and expression of E1A inhibits this repression. Alternatively spliced transcript variants encoding different isoforms have been identified.
|
ZMYND11
|
Interactions
|
ZMYND11 has been shown to interact with BMPR1A, C11orf30, ETS2, and TAB1. It also recognizes the histone modification H3.3K36me3.
|
K-index (meteorology)
|
K-index (meteorology)
|
The K-Index or George's Index is a measure of thunderstorm potential in meteorology. According to the National Weather Service, the index harnesses measurements such as "vertical temperature lapse rate, moisture content of the lower atmosphere, and the vertical extent of the moist layer." It was developed by the American meteorologist Joseph J. George, and published in the 1960 book Weather Forecasting for Aeronautics.
|
K-index (meteorology)
|
Definition
|
The index is derived arithmetically by:

K = (T850 − T500) + Td850 − (T700 − Td700)

where:
Td850 = dew point at 850 hPa
T850 = temperature at 850 hPa
Td700 = dew point at 700 hPa
T700 = temperature at 700 hPa
T500 = temperature at 500 hPa
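As a worked example with hypothetical sounding values of T850 = 20 °C, Td850 = 15 °C, T700 = 8 °C, Td700 = 2 °C and T500 = −12 °C: K = (20 − (−12)) + 15 − (8 − 2) = 41.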
|
K-index (meteorology)
|
Interpretation
|
The K-index is related to the probability of occurrence of a thunderstorm. It was developed with the idea that thunderstorm potential = 4 × (KI − 15), interpreted as a probability in percent.
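For example, a K-index of 30 would correspond to a thunderstorm potential of 4 × (30 − 15) = 60%.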
|
Millosevichite
|
Millosevichite
|
Millosevichite is a rare sulfate mineral with the chemical formula Al2(SO4)3. Aluminium is often substituted by iron. It forms finely crystalline and often porous masses.
|
Millosevichite
|
Millosevichite
|
It was first described in 1913 for an occurrence in Grotta dell'Allume, Porto Levante, Vulcano Island, Lipari, Aeolian Islands, Sicily. It was named for the Italian mineralogist Federico Millosevich (1875–1942) of the University of Rome. The mineral is mainly known from burning coal dumps, where it is one of the main minerals forming sulfate crusts. It can also be found in volcanic solfatara environments.
|
Millosevichite
|
Millosevichite
|
It occurs with native sulfur, sal ammoniac, letovicite, alunogen and boussingaultite.
|
Gudjonsson suggestibility scale
|
Gudjonsson suggestibility scale
|
The Gudjonsson suggestibility scale (GSS) is a psychological test that measures suggestibility of a subject. It was created in 1983 by Icelandic psychologist Gísli Hannes Guðjónsson. It involves reading a short story to the subject and testing recall. This test has been used in court cases in several jurisdictions but has been the subject of various criticisms.
|
Gudjonsson suggestibility scale
|
History
|
The Gudjonsson suggestibility scale (GSS) was created in 1983 by Icelandic psychologist Gísli Hannes Guðjónsson. Given his large number of publications on suggestibility, Gísli was often called as an expert witness in court cases where the suggestibility of those involved in the case was crucial to the proceedings. To measure suggestibility, Gísli created a scale that was relatively straightforward and could be administered in a wide variety of settings. He noticed that while there was a significant body of research on the effects of leading questions on suggestibility, less was known about the effects of "specific instruction" and "interpersonal pressure". Previous methods of measuring suggestibility were aimed primarily at "hypnotic phenomena"; Gísli's scale was the first created specifically for use in conjunction with interrogative events. His test relies on two different aspects of interrogative suggestibility: it measures how much an interrogated person yields to leading questions, as well as how much an interrogated person shifts their responses when additional interrogative pressure is applied. The test is designed specifically to measure the effects of suggestive questions and instructions. Although originally developed in English, the scale has been translated into several different languages, including Portuguese, Italian, Dutch, and Polish.
|
Gudjonsson suggestibility scale
|
History
|
Method The GSS involves reading a short story to the subject, followed by a general recall activity, a test, and a retest. It begins with a short story being read to the subject: Anna Thomson of South Croydon was on holiday in Spain when she was held up outside her hotel and robbed of her handbag, which contained $50 worth of traveler's checks and her passport. She screamed for help and attempted to put up a fight by kicking one of the assailants in the shins. A police car shortly arrived and the woman was taken to the nearest police station, where she was interviewed by Detective Sergeant Delgado. The woman reported that she had been attacked by three men, one of whom she described as oriental looking. The men were said to be slim and in their early twenties. The police officer was touched by the woman’s story and advised her to contact the British Embassy. Six days later, the police recovered the lady’s handbag, but the contents were never found. Three men were subsequently charged, two of whom were convicted and given prison sentences. Only one had had previous convictions for similar offences. The lady returned to Britain with her husband Simon and two friends but remained frightened of being out on her own.
|
Gudjonsson suggestibility scale
|
History
|
The subject is instructed to listen carefully to the story being read to them because they will have to report what they remember afterward. After the researcher reads the story aloud, the subject is asked to engage in free recall, reporting everything they remember of what was just read. To make the assessment more difficult, subjects may be asked to report these facts after 50 minutes in addition to immediately following the story. This part of the assessment is scored based on how many facts the subject recalls correctly. The second part of the assessment consists of the actual scale: twenty questions regarding the short story, fifteen of them suggestive and five neutral. The fifteen suggestive questions can be separated into three types of suggestibility: leading questions, affirmative questions, and false alternative questions. Their purpose is to measure how much a participant "yields" to suggestive questions.
|
Gudjonsson suggestibility scale
|
History
|
Leading questions contain some "salient precedence" and are worded in such a way that they seem plausible and lend themselves to an affirmative answer. A leading question on the GSS would ask, "Did the woman's glasses break in the struggle?" Affirmative questions present facts that did not appear in the story but carry an affirmative response bias. An example of an affirmative question would be "Were the assailants convicted six weeks after their arrest?" False alternative questions also contain information not present in the story; however, these questions focus specifically on objects, people, and events not found in the story. One of these questions would be, "Did the woman hit one of the assailants with her fist or handbag?" The five neutral questions have a correct answer that is affirmative. After 1987, the GSS was altered so that these five questions were included in the shift score as well. This version is referred to as the Gudjonsson suggestibility scale 2, or GSS2.
|
Gudjonsson suggestibility scale
|
History
|
The twenty questions are dispersed within the assessment in order to conceal its aim. The person under interrogation is told in a "forceful manner" that there are errors in their story, and they must answer the questions a second time. After answering the initial questionnaire, the subjects are told that they made a certain number of errors and are instructed to go over the assessment again and correct any errors they detect. Any changes made in the suggestive questions are recorded.
|
Gudjonsson suggestibility scale
|
History
|
Scoring Scoring can be broken down into two main categories: memory recall and suggestibility. Memory recall refers to the number of facts the subject correctly remembered during the free recall. Each fact is worth one point, and the subject can earn a maximum of forty points for this section. The suggestibility section is broken into three subcategories: Yield, Shift, and Total. Yield refers to the number of suggestive questions answered incorrectly, based on the original story. With each question being worth one point, subjects can score up to fifteen points on this section. If the subject engaged in two recall activities, the score for the second trial is not included in the scoring. Shift refers to any notable change in the participant's answers after they were told to go over their original answers and correct their mistakes. Subjects can also score up to fifteen points on this section. The Total score is the sum of the Yield and Shift scores. In a sample of 195 people, the Yield 1 mean score was 4.9, with a standard deviation of 3.0. The Yield 2 mean score was 6.9, with a standard deviation of 3.4. The average Shift score was 3.6, with a standard deviation of 2.7. For total suggestibility (Yield + Shift), the average score was 8.5, with a standard deviation of 4.3. The average memory recall score was 19.2, with a standard deviation of 8.0.
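As an illustration of the scoring arithmetic described above, the following sketch (in Java, with hypothetical per-question flags recorded by an examiner; not an official scoring implementation) computes the three suggestibility scores:

    // Hypothetical per-question results for the fifteen suggestive questions.
    // yielded[i] is true if question i was answered in line with the suggestion;
    // shifted[i] is true if the answer to question i changed after negative feedback.
    public class GssScoring {
        static int count(boolean[] flags) {
            int n = 0;
            for (boolean f : flags) {
                if (f) n++;
            }
            return n;
        }

        public static void main(String[] args) {
            boolean[] yielded = new boolean[15];
            boolean[] shifted = new boolean[15];
            int yield1 = count(yielded); // Yield: 0-15 points
            int shift = count(shifted);  // Shift: 0-15 points
            int total = yield1 + shift;  // Total suggestibility: 0-30 points
            System.out.println("Yield=" + yield1 + " Shift=" + shift + " Total=" + total);
        }
    }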
|
Gudjonsson suggestibility scale
|
History
|
Measures of reliability and validity Internal consistency scores between Yield 1 and Shift for the GSS range from −.23 to .28. Internal consistency for the fifteen Yield and fifteen Shift questions was reportedly 0.77 and 0.67, respectively. The GSS2 showed higher internal consistency than the GSS1. Test-retest reliability was reportedly 0.55. Overall, Shift scores showed the lowest internal consistency, at 0.11. Other scores were significant. External validity, tested with the Portuguese version of the GSS, showed no correlation between interrogative suggestibility and personality factors, nor between interrogative suggestibility and anxiety. Immediate recall and delayed recall correlated negatively with all suggestibility scores.
|
Gudjonsson suggestibility scale
|
Uses in the justice system
|
Use in criminal proceedings The GSS is used most often in criminal justice systems. Human memory is known to be unreliable, as is eyewitness testimony. Western countries nevertheless rely strongly on such testimony, and wrongful convictions based on incorrect eyewitness testimony have been publicized, raising the issue with the wider public. The GSS allows psychologists to identify individuals who may be susceptible to giving false accounts of events when questioned. The GSS could be useful in a situation where a defendant is being interrogated or cross-examined. There is evidence that GSS scores vary between inmates and the general population. In the general population, high scores on the GSS are associated with an increased likelihood of false confession. Pires (2014) studied 40 Portuguese prisoners and found that inmates had higher suggestibility scores than the general population. This group had the lowest scores in the immediate recall portion of the GSS, suggesting that their higher suggestibility was due to their lower memory capacity. A possible explanation is that the inmates participated in the study voluntarily and were told that participation would have no negative effect on them. Therefore, even for inmates with antisocial personality disorder, the study took place in a "cooperative atmosphere". Inmates who had a negative attitude toward the test situation or the examiner had decreased vulnerability to suggestion. Additionally, repeat offenders were more resistant to interrogative pressure than those without prior convictions; this may be due to their experience in interrogation settings. Studies have found that GSS scores are higher in people who confess to crimes they did not commit than in people who are more resistant to police questioning. The use of the GSS in court proceedings has been met with mixed responses. In the United States, courts in many states have ruled that the GSS does not meet either the Frye standard or the Daubert standard for the admissibility of expert testimony. In Soares v. Massachusetts (2001), for example, the Massachusetts Appeals Court stated that the case was "devoid of evidence demonstrating either the scientific validity or reliability of the GSS as a measure of susceptibility to suggestion or appropriate applications of the test results." In the same year, the Wisconsin Supreme Court, in Summers v. Wisconsin, affirmed the trial court's decision to exclude the defense's expert testimony on the GSS because it was "vague regarding what information or insights the expert could offer that would assist the jury and the scientific bases of these insights." Despite these decisions, the GSS has been permitted in several court cases. For example, in Oregon v. Romero (2003), the Oregon Court of Appeals held that the testimony of a defense expert about the results of a Gudjonsson suggestibility test, offered in support of the defendant's claim that her confession to police was involuntary, met "the threshold for admissibility" because "It would have been probative, relevant, and helpful to the trier of fact." Experts have linked GSS suggestibility to the voluntary aspect of Miranda waivers during legal proceedings. Despite this, there are very few appellate cases in which the GSS has been presented to a court with any reference to whether a suspect's waiver of Miranda rights was voluntary. Rogers (2010) specifically examined the GSS in terms of its ability to predict people's ability to understand and agree to Miranda rights. This study found that suggestibility, as assessed by the GSS, appeared to be unrelated to "Miranda comprehension, reasoning, and detainees' perceptions of police coercion". Defendants with high compliance were found to have significantly lower Miranda comprehension and ability to reason about exercising Miranda rights when compared to counterparts with low compliance.
|
Gudjonsson suggestibility scale
|
Uses in the justice system
|
Use in juvenile delinquency proceedings Scores of adolescents in the justice system differ from those of adults. Richardson (1995) administered the GSS to 65 juvenile offenders. When matched with adult offenders on IQ and memory, juveniles were much more susceptible to giving in to interrogative pressure (Shift), specifically by changing their answers after they were given negative feedback. Their answers to the leading questions, however, were no more affected by suggestibility than those of their adult cohorts. These results were likely not due to memory capacity, as studies have shown that the information children can retrieve during free recall increases with age and is equal to that of adults by around age 12. Singh (1992) compared non-offending adults and adolescents, and showed that adolescents still had higher suggestibility scores than adults. A study comparing delinquent adolescents to normal adults found the same results. Researchers suggest that police interviewers not place adolescent suspects and witnesses under excessive pressure by criticizing their answers.
|
Gudjonsson suggestibility scale
|
Critiques
|
Use with people with intellectual disabilities Use of the GSS with people who have an intellectual disability has been met with criticism. This controversy is partially due to the large memory component of the GSS. Research has shown that the high levels of suggestibility demonstrated by people with intellectual disabilities are related to poor memory for the information presented in the GSS. People with intellectual disabilities have difficulty remembering aspects of the GSS's fictional story because it is not relevant to them. When those with intellectual disabilities are tested on events that are of personal significance to them, suggestibility decreases significantly. In terms of false confession, which involves an event at which the defendant was not present, the GSS might have more relevance to confessions than it does to witness testimony. Another context in which the GSS is sometimes used is as part of the assessment of whether people accused of a crime have the capacity to plead to the charge. Despite this perceived usefulness, it is advised that the GSS not be used in court for this purpose, as its results may not accurately represent a defendant's ability to understand the charges against them or to stand trial.
|
Gudjonsson suggestibility scale
|
Critiques
|
Internal consistency reliability One issue with the GSS is internal consistency reliability, specifically with regard to the Shift portion of the measure. Both Shift-positive and Shift-negative are associated with internal consistency reliability below .60, and internal Shift scores have been reported at .60, which is "unacceptably low". These numbers offer a possible explanation for why studies have not found "theoretically meaningful correlations" between the Shift sub-scale and other external criteria. Researchers argue against the use of a Total suggestibility composite due to evidence that Yield 1 and Shift scores do not significantly correlate with each other. This absence of a correlation is problematic because it "suggests that yielding to a leading question and yielding to negative feedback from an interviewer operate under completely different processes". Other researchers have found that there are two types of suggestibility, direct and indirect, and the failure to take these into account may have led to methodological problems with the GSS. Researchers suggest that until these issues have been addressed, use of the GSS should be limited to the Yield sub-scale.
|
Gudjonsson suggestibility scale
|
Critiques
|
Effects of cognitive load on suggestibility Drake et al. (2013) aimed to discover the effects that increasing cognitive load has on GSS suggestibility scores, and specifically on attempts at faking interrogative suggestibility. The study was conducted using 80 undergraduate students, each of whom was assigned to one of four conditions combining instruction type (genuine or instructed faking) and concurrent task (yes or no). Findings showed that instructed fakers not performing a concurrent task scored significantly higher on Yield 1 compared with "genuine interviewees", while instructed fakers performing a concurrent task scored significantly lower on Yield 1. Genuine (non-faking) participants did not exhibit this pattern in response to cognitive load differences. These results suggest that an increase in cognitive load may expose an attempt at faking on the Yield portion of the GSS. Increasing cognitive load may facilitate the detection of deception because it is more difficult to act deceptively under such conditions.
|
Gudjonsson suggestibility scale
|
Critiques
|
Validity One possible issue with the GSS is its validity: whether it measures genuine "internalization of the suggested materials" or simply "compliance with the interrogator". To test this, Mastroberardino (2013) conducted two experiments. In the first, participants were administered the GSS2 and then immediately performed a "source identification task" for the items on the scale. In the second experiment, half of the participants were administered this identification task immediately while the other half were administered it after 24 hours. Both experiments found a higher proportion of compliant responses. Participants internalized more suggested information after Yield 1, and made more compliant responses during the Shift portion of the assessment. In the second experiment, participants in the delayed condition internalized less material than those in the immediate condition. These results support the idea that different processes underlie the Yield 1 and Shift parts of the GSS2: Yield 1 may involve internalization of suggested materials as well as compliance, while Shift may be due mostly to compliance with the interrogator. The GSS is not able to differentiate between compliance and suggestibility, as the outcome behaviors of these two cognitive processes are the same.
|
Gudjonsson suggestibility scale
|
Critiques
|
Suggestibility and false memory Leavitt (1997) compared suggestibility (evaluated by the GSS) in participants who had recovered memories of sexual assault with that of participants without a history of sexual trauma. Those who had recovered memories had a lower average suggestibility score than those who did not have a history of sexual abuse: 6.7 versus 10.6. These results suggest that suggestibility does not play as large a role in the formation of recovered memories as previously assumed.
|
4-Methoxyestriol
|
4-Methoxyestriol
|
4-Methoxyestriol (4-MeO-E3) is an endogenous estrogen metabolite. It is the 4-methyl ether of 4-hydroxyestriol and a metabolite of estriol and 4-hydroxyestriol. 4-Methoxyestriol has very low affinities for the estrogen receptors. Its relative binding affinities (RBAs) for estrogen receptor alpha (ERα) and estrogen receptor beta (ERβ) are both about 1% of those of estradiol. For comparison, estriol had RBAs of 11% and 35%, respectively.
|
Phenyl-C61-butyric acid methyl ester
|
Phenyl-C61-butyric acid methyl ester
|
PCBM is the common abbreviation for the fullerene derivative [6,6]-phenyl-C61-butyric acid methyl ester. It is being investigated in organic solar cells. PCBM is a fullerene derivative of the C60 buckyball that was first synthesized in the 1990s. It is an electron acceptor material and is often used in organic solar cells (plastic solar cells) or flexible electronics in conjunction with electron donor materials such as P3HT or other conductive polymers. It is a more practical choice for an electron acceptor than unmodified fullerenes because of its solubility in chlorobenzene, which allows for solution-processable donor/acceptor mixes, a necessary property for "printable" solar cells. However, considering the cost of fabricating fullerenes, it is not certain that this derivative can be synthesized on a large scale for commercial applications.
|
Electrocochleography
|
Electrocochleography
|
Electrocochleography (abbreviated ECochG or ECOG) is a technique of recording electrical potentials generated in the inner ear and auditory nerve in response to sound stimulation, using an electrode placed in the ear canal or tympanic membrane. The test is performed by an otologist or audiologist with specialized training, and is used for detection of elevated inner ear pressure (endolymphatic hydrops) or for the testing and monitoring of inner ear and auditory nerve function during surgery.
|
Electrocochleography
|
Clinical applications
|
The most common clinical applications of electrocochleography include: objective identification and monitoring of Ménière's disease and endolymphatic hydrops (EH); intraoperative monitoring of auditory system function during surgery on the brainstem or cerebellum; enhancement of wave I of the auditory brainstem response, particularly in patients who are hard of hearing; and diagnosis of auditory neuropathy.
|
Electrocochleography
|
Cochlear physiology
|
The basilar membrane and the hair cells of the cochlea function as a sharply tuned frequency analyzer. Sound is transmitted to the inner ear via vibration of the tympanic membrane, leading to movement of the middle ear bones (malleus, incus, and stapes). Movement of the stapes on the oval window generates a pressure wave in the perilymph within the cochlea, causing the basilar membrane to vibrate. Sounds of different frequencies vibrate different parts of the basilar membrane, and the point of maximal vibration amplitude depends on the sound frequency. As the basilar membrane vibrates, the hair cells attached to this membrane are rhythmically pushed up against the tectorial membrane, bending the hair cell stereocilia. This opens mechanically gated ion channels on the hair cell, allowing influx of potassium (K+) and calcium (Ca2+) ions. The flow of ions generates an AC current through the hair cell surface, at the same frequency as the acoustic stimulus. This measurable AC voltage is called the cochlear microphonic (CM), which mimics the stimulus. The hair cells function as a transducer, converting the mechanical movement of the basilar membrane into electrical voltage, in a process requiring ATP from the stria vascularis as an energy source.
|
Electrocochleography
|
Cochlear physiology
|
The depolarized hair cell releases neurotransmitters across a synapse to primary auditory neurons of the spiral ganglion. Upon reaching receptors on the postsynaptic spiral ganglion neurons, the neurotransmitters induce a postsynaptic potential or generator potential in the neuronal projections. When a certain threshold potential is reached, the spiral ganglion neuron fires an action potential, which enters the auditory processing pathway of the brain.
|
Electrocochleography
|
Cochlear physiology
|
Cochlear potentials The resting endolymphatic potential of a normal cochlea is +80 mV. At least three other potentials are generated upon cochlear stimulation: the cochlear microphonic (CM), the summating potential (SP), and the action potential (AP). As described above, the cochlear microphonic (CM) is an alternating current (AC) voltage that mirrors the waveform of the acoustic stimulus. It is dominated by the outer hair cells of the organ of Corti. The magnitude of the recording depends on the proximity of the recording electrodes to the hair cells. The CM is proportional to the displacement of the basilar membrane. A fourth potential, the auditory nerve neurophonic, is sometimes dissociated from the CM. The neurophonic represents the neural part (auditory nerve spikes) phase-locked to the stimulus and is similar to the frequency-following response. The summating potential (SP), first described by Tasaki et al. in 1954, represents the direct current (DC) response of the hair cells as they move in conjunction with the basilar membrane, as well as the DC response from dendritic and axonal potentials of the auditory nerve. The SP is the stimulus-related potential of the cochlea. Although historically it has been the least studied, renewed interest has surfaced due to changes in the SP reported in cases of endolymphatic hydrops or Ménière's disease.
|
Electrocochleography
|
Cochlear physiology
|
The auditory nerve action potential, also called the compound action potential (CAP), is the most widely studied component in ECochG. The AP represents the summed response of the synchronous firing of the nerve fibers. It also appears as an AC voltage. The first and largest wave (N1) is identical to wave I of auditory brainstem response (ABR). Following this is N2, which is identical to wave II of the ABR. The magnitude of the action potential reflects the number of fibers that are firing. The latency of the AP is measured as the time between the onset and the peak of the N1 wave.
|
Electrocochleography
|
Procedure and recording parameters
|
ECochG can be performed with either invasive or non-invasive electrodes. Invasive electrodes, such as transtympanic (TT) needles, give clearer, more robust electrical responses (with larger amplitudes) since the electrodes are very close to the voltage generators. The needle is placed on the promontory wall of the middle ear, near the round window. Non-invasive, or extratympanic (ET), electrodes have the advantage of not causing pain or discomfort to the patient. Unlike with invasive electrodes, there is no need for sedation, anesthesia, or medical supervision. The responses, however, are smaller in magnitude.
|
Electrocochleography
|
Procedure and recording parameters
|
Auditory stimuli in the form of broadband clicks 100 microseconds in duration are used. The stimulus polarity can be rarefaction polarity, condensation polarity, or alternating polarity. Signals are recorded from a primary recording (non-inverted) electrode located in the ear canal, tympanic membrane, or promontory (depending on type of electrode used). Reference (inverting) electrodes can be placed on the contralateral earlobe, mastoid, or ear canal.
|
Electrocochleography
|
Procedure and recording parameters
|
The signal is processed, including signal amplification (by as much as a factor of 100,000 for extratympanic electrode recordings), noise filtration, and signal averaging. A band-pass filter from 10 Hz to 1.5 kHz is often used.
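As a minimal sketch of the signal-averaging step (not a clinical implementation; array sizes are illustrative), time-locked sweeps can simply be averaged sample by sample, which attenuates uncorrelated noise by roughly the square root of the number of sweeps while preserving the stimulus-locked components:

    // Average time-locked sweeps sample by sample. Stimulus-locked components
    // (CM, SP, AP) add coherently; uncorrelated noise averages toward zero.
    public class SweepAverager {
        static double[] average(double[][] sweeps) {
            double[] avg = new double[sweeps[0].length];
            for (double[] sweep : sweeps) {
                for (int i = 0; i < avg.length; i++) {
                    avg[i] += sweep[i] / sweeps.length;
                }
            }
            return avg;
        }

        public static void main(String[] args) {
            double[][] sweeps = new double[500][1024]; // e.g., 500 sweeps of 1024 samples
            double[] waveform = average(sweeps);
            System.out.println("Averaged samples: " + waveform.length);
        }
    }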
|
Electrocochleography
|
Interpretation of results
|
The CM, SP, and AP are all used in the diagnosis of endolymphatic hydrops and Ménière's disease. In particular, an abnormally high SP and a high SP:AP ratio are signs of Ménière's disease. An SP:AP ratio of 0.45 or greater is considered abnormal.
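For example, given a hypothetical SP amplitude of 0.5 μV and an AP amplitude of 1.0 μV, the SP:AP ratio would be 0.5/1.0 = 0.5, above the 0.45 threshold and therefore considered abnormal.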
|
Electrocochleography
|
History
|
The CM was first discovered in 1930 by Ernest Wever and Charles Bray in cats. Wever and Bray mistakenly concluded that this recording was generated by the auditory nerve. They named the discovery the "Wever-Bray effect". Hallowell Davis and A.J. Derbyshire from Harvard replicated the study and concluded that the waves were in fact of cochlear origin and not from the auditory nerve. Fromm et al. were the first investigators to employ the ECochG technique in humans, inserting a wire electrode through the tympanic membrane and recording the CM from the niche of the round window and the cochlear promontory. Their first measurement of the CM in humans was in 1935. They also discovered the N1, N2, and N3 waves following the CM, but it was Tasaki who identified these waves as auditory nerve action potentials.
|
Electrocochleography
|
History
|
Fisch and Ruben were the first to record the compound action potentials from both the round window and the eighth cranial nerve (CN VIII) in cats and mice. Ruben was also the first person to use CM and AP clinically.
|
Electrocochleography
|
History
|
The summating potential, a stimulus-related hair cell potential, was first described by Tasaki and colleagues in 1954. Ernest J. Moore was the first investigator to record the CM from surface electrodes. In 1971, Moore conducted five experiments in which he recorded the CM and AP from 38 human subjects using surface electrodes. The purpose of the experiments was to establish the validity of the responses and to develop an artifact-free earphone system. Unfortunately, the bulk of his work was never published.
|
Graphene morphology
|
Graphene morphology
|
A graphene morphology is any of the structures related to, and formed from, single sheets of graphene. 'Graphene' is typically used to refer to the crystalline monolayer of the naturally occurring material graphite. Due to quantum confinement of electrons within the material at these low dimensions, small differences in graphene morphology can greatly impact the physical and chemical properties of these materials. Commonly studied graphene morphologies include the monolayer sheets, bilayer sheets, graphene nanoribbons and other 3D structures formed from stacking of the monolayer sheets.
|
Graphene morphology
|
Monolayer sheets
|
In 2013 researchers developed a production unit that produces continuous sheets of high-strength monolayer graphene (HSMG). The process is based on graphene growth on a liquid metal matrix.
|
Graphene morphology
|
Bilayer
|
Bilayer graphene displays the anomalous quantum Hall effect, a tunable band gap and potential for excitonic condensation. Bilayer graphene is typically found either in twisted configurations, where the two layers are rotated relative to each other, or in graphitic Bernal-stacked configurations, where half the atoms in one layer lie atop half the atoms in the other. Stacking order and orientation govern its optical and electronic properties.
|
Graphene morphology
|
Bilayer
|
One synthesis method is chemical vapor deposition, which can produce large bilayer regions that almost exclusively conform to a Bernal stack geometry.
|
Graphene morphology
|
Superlattices
|
Periodically stacked graphene and its insulating isomorph provide a fascinating structural element in implementing highly functional superlattices at the atomic scale, which offers possibilities in designing nanoelectronic and photonic devices. Various types of superlattices can be obtained by stacking graphene and its related forms. The energy band in layer-stacked superlattices is more sensitive to the barrier width than that in conventional III–V semiconductor superlattices. When adding more than one atomic layer to the barrier in each period, the coupling of electronic wavefunctions in neighboring potential wells can be significantly reduced, which leads to the degeneration of continuous subbands into quantized energy levels. When varying the well width, the energy levels in the potential wells along the L–M direction behave distinctly from those along the K–H direction.
|
Graphene morphology
|
Superlattices
|
Precisely aligned graphene on h-BN always produces a giant superlattice known as a moiré pattern. The observed moiré patterns, and the sensitivity of moiré interferometry, show that graphene grains can align precisely with the underlying h-BN lattice to within an error of less than 0.05°. The occurrence of the moiré pattern indicates that the graphene locks into the h-BN via van der Waals epitaxy, with its interfacial stress greatly released.
|
Graphene morphology
|
Superlattices
|
The existence of the giant moiré pattern in a graphene nanoribbon (GNR) embedded in h-BN indicates that the graphene is highly crystalline and precisely aligned with the h-BN underneath. The moiré pattern appeared stretched along the GNR while relaxed laterally. This trend differs from the regular hexagons with a periodicity of ~14 nm that are otherwise observed for well-aligned graphene domains on h-BN. This observation strongly indicates in-plane epitaxy between the graphene and the h-BN at the edges of the trench, where the graphene is stretched by tensile strain along the ribbon due to the lattice mismatch between graphene and h-BN.
|
Graphene morphology
|
Nanoribbons
|
Graphene nanoribbons ("nanostripes" in the "zig-zag" orientation), at low temperatures, show spin-polarized metallic edge currents, which suggest spintronics applications. (In the "armchair" orientation, the edges behave like semiconductors.)
|
Graphene morphology
|
Fiber
|
In 2011, researchers reported making fibers using chemical vapor deposition grown graphene films. The method was scalable and controllable, delivering tunable morphology and pore structure by controlling the evaporation of solvents with suitable surface tension. Flexible all-solid-state supercapacitors based on such fibers were demonstrated in 2013. In 2015, intercalating small graphene fragments into the gaps formed by larger, coiled graphene sheets after annealing provided pathways for conduction, while the fragments helped reinforce the fibers. The resulting fibers offered better thermal and electrical conductivity and mechanical strength. Thermal conductivity reached 1290 watts per meter per kelvin, while tensile strength reached 1080 megapascals. In 2016, kilometer-scale continuous graphene fibers with outstanding mechanical properties and excellent electrical conductivity were produced by high-throughput wet-spinning of graphene oxide liquid crystals followed by graphitization through a full-scale synergetic defect-engineering strategy.
|
Graphene morphology
|
3D
|
Three-dimensional bilayer graphene was reported in 2012 and 2014. In 2013, a three-dimensional honeycomb of hexagonally arranged carbon was termed 3D graphene, and self-supporting 3D graphene was produced that year. Researchers at Stony Brook University have reported a novel radical-initiated crosslinking method to fabricate porous 3D free-standing architectures of graphene and carbon nanotubes, using nanomaterials as building blocks without any polymer matrix as support. 3D structures can be fabricated by using either CVD or solution-based methods. A 2016 review summarized the techniques for fabrication of 3D graphene and other related two-dimensional materials. These 3D graphene (all-carbon) scaffolds/foams have potential applications in fields such as energy storage, filtration, thermal management and biomedical devices and implants. In 2016, a box-shaped graphene (BSG) nanostructure resulting from mechanical cleavage of pyrolytic graphite was reported. The discovered nanostructure is a multilayer system of parallel hollow nanochannels located along the surface and displaying a quadrangular cross-section. The thickness of the channel walls is approximately 1 nm, and the typical width of the channel facets is about 25 nm. Potential applications include: ultra-sensitive detectors, high-performance catalytic cells, nanochannels for DNA sequencing and manipulation, high-performance heat sinking surfaces, rechargeable batteries of enhanced performance, nanomechanical resonators, electron multiplication channels in emission nanoelectronic devices, and high-capacity sorbents for safe hydrogen storage.
|
Graphene morphology
|
3D
|
In 2017 researchers simulated a graphene gyroid that has five percent of the density of steel, yet is ten times as strong, with an enormous surface-area-to-volume ratio. They compressed heated graphene flakes, then constructed high-resolution 3D-printed plastic models of various configurations, similar to the gyroids that graphene forms naturally, though thousands of times larger. These shapes were then tested for tensile strength and compression, and compared with the computer simulations. When the graphene was swapped out for polymers or metals, similar gains in strength were seen. A film of graphene soaked in solvent to make it swell and become malleable was overlaid on an underlying substrate "former". The solvent evaporated, leaving behind a layer of graphene that had taken on the shape of the underlying structure. In this way the team was able to produce a range of relatively intricate micro-structured shapes, with features varying from 3.5 to 50 μm. Pure graphene and gold-decorated graphene were each successfully integrated with the substrate. An aerogel made of graphene layers separated by carbon nanotubes was measured at 0.16 milligrams per cubic centimeter. A solution of graphene and carbon nanotubes in a mold is freeze-dried to dehydrate the solution, leaving the aerogel. The material has superior elasticity and absorption: it can recover completely after more than 90% compression and absorb up to 900 times its weight in oil, at a rate of 68.8 grams per second. At the end of 2017, fabrication of freestanding graphene gyroids with 35 nm and 60 nm unit cells was reported. The gyroids were made via controlled direct chemical vapor deposition, are self-supporting, and can be transferred onto a variety of substrates. Furthermore, they represent the smallest free-standing periodic graphene 3D structures yet produced, with a pore size of tens of nm. Due to their high mechanical strength, good conductivity (sheet resistance: 240 Ω/sq) and huge surface-area-to-volume ratio, the graphene gyroids might find their way into various applications, ranging from batteries and supercapacitors to filtration and optoelectronics.
|
Graphene morphology
|
Pillared
|
Pillared graphene is a hybrid carbon structure consisting of an oriented array of carbon nanotubes connected at each end to a graphene sheet. It was first described theoretically in 2008. Pillared graphene has not been synthesized in the laboratory.
|
Graphene morphology
|
Reinforced
|
Graphene sheets reinforced with embedded carbon nanotubes ("rebar") are easier to manipulate, while improving the electrical and mechanical qualities of both materials. Functionalized single- or multiwalled carbon nanotubes are spin-coated on copper foils and then heated and cooled, using the nanotubes as the carbon source. Under heating, the functional carbon groups decompose into graphene, while the nanotubes partially split and form in-plane covalent bonds with the graphene, adding strength. π–π stacking domains add more strength. The nanotubes can overlap, making the material a better conductor than standard CVD-grown graphene. The nanotubes effectively bridge the grain boundaries found in conventional graphene. The technique eliminates the traces of substrate on which later-separated sheets were deposited using epitaxy. Stacks of a few layers have been proposed as a cost-effective and physically flexible replacement for indium tin oxide (ITO) used in displays and photovoltaic cells.
|
Graphene morphology
|
Nanocoil
|
In 2015 a coiled form of graphene was discovered in graphitic carbon (coal). The spiraling effect is produced by defects in the material's hexagonal grid that cause it to spiral along its edge, mimicking a Riemann surface, with the graphene surface approximately perpendicular to the axis. When voltage is applied to such a coil, current flows around the spiral, producing a magnetic field. The phenomenon applies to spirals with either zigzag or armchair orientations, although with different current distributions. Computer simulations indicated that a conventional spiral inductor 205 microns in diameter could be matched by a nanocoil just 70 nanometers wide, with a field strength reaching as much as 1 tesla, about the same as the coils found in typical loudspeakers and as the field strength of some MRI machines. They found the magnetic field would be strongest in the hollow, nanometer-wide cavity at the spiral's center. A solenoid made with such a coil behaves as a quantum conductor whose current distribution between the core and exterior varies with applied voltage, resulting in nonlinear inductance.
|
H.241
|
H.241
|
H.241 is a Recommendation from the ITU Telecommunication Standardization Sector (ITU-T) that defines extended video procedures and control signals for H.300-series terminals, including H.323 and H.320.
This Recommendation defines the use of advanced video codecs, including H.264, covering: command and indication; capability exchange signaling; transport, which requires support of the single NAL unit mode (packetization mode 0) of RFC 6184; Reduced-Complexity Decoding Operation (RCDO) for H.264 baseline profile bit streams; and negotiation of video submodes.
|
Transformers: Super-God Masterforce
|
Transformers: Super-God Masterforce
|
Transformers: Super-God Masterforce (トランスフォーマー 超神マスターフォース, Toransufōmā: Chōjin Masutāfōsu) is a Japanese Transformers line of toys and anime series that ran from April 12, 1988, to March 7, 1989, for 42 episodes. On July 3, 2006, the series was released on DVD in the UK, and it was aired on AnimeCentral in the UK a few years later. In 2008, Madman Entertainment released the series on DVD in Australia in Region 4, PAL format. On May 1, 2012, the series was released on DVD in the US. It serves as the second sequel series to the Japanese dub of the original The Transformers cartoon series as part of the Generation 1 franchise, preceded by Transformers: The Headmasters and followed by Transformers: Victory.
|
Transformers: Super-God Masterforce
|
Story
|
The core concept of Masterforce is that human beings themselves rise up to fight and defend their home, rather than the alien Transformers doing it for them. Going hand-in-hand with this idea, the Japanese incarnations of the Autobot Pretenders actually shrink down to pass for normal human beings, whose emotions and strengths they value and wish to safeguard. The Decepticon Pretenders, by contrast, tend to remain large monsters unless they battle in their robot forms. Later, children and adults are recruited to become Headmaster Juniors for both the Autobots and Decepticons, but as the story progresses the focus shifts to the Godmasters (released as Powermasters in the West), who become the most powerful Transformers on the show. The Godmasters are human beings with the ability to merge with their Transtectors (robot bodies). Most of the Godmasters are adults, with the exception of Clouder, who is about the same age as the Headmaster Juniors. Other characters appear later, including Black Zarak, who merges with the Decepticon leader Devil Z for the final battle, and, on the Autobot side, Grand Maximus, who has a Pretender guise and is Fortress Maximus' younger brother. The Firecons also make a brief appearance in one episode, and a robot who transforms into a gun (similar to G1 Megatron) is given to Cancer of the Decepticon Headmaster Juniors as a gift from Lady Mega; his name is Browning (or BM in the dub). The Decepticons also have the Targetmaster Seacons under their command, but like the Pretenders these are sentient robots and do not require humans to operate them. The Autobots gain the help of another sentient robot, Sixknight (known outside Japan as Quickswitch), who appears on Earth as a travelling warrior wanting to challenge Ginrai (the Godmaster of the body of Optimus Prime) to a battle, but soon decides to fight for the Autobot cause. The story follows the efforts of the heroic Autobot forces as they protect the Earth from the Decepticons, with human characters playing a more important role than in other Transformers series.
|
Transformers: Super-God Masterforce
|
Development
|
With the conclusion of the US Transformers cartoon series in 1987, Japan produced its first exclusive anime series, Transformers: The Headmasters, to replace the fourth and final US season and to carry on the story concepts begun in The Transformers: The Movie and continued through the third season, using the existing cast and adding the eponymous Headmasters into the mix. With the completion of that series, the evil Decepticons had finally been forced off Earth, and the stage was set for the beginning of Super-God Masterforce.
|
Transformers: Super-God Masterforce
|
Development
|
Although nominally occurring in the same continuity as the previous Transformers series, there was a very obvious effort on head writer Masumi Kaneda's part to make Masterforce a "fresh start" as a mecha story, introducing an entirely new cast of characters from scratch rather than using any of the previous ones. To this end, although the toys are mostly the same in both Japan and the West (barring some different color schemes), the characters they represent are vastly different. Most prominently, Powermaster Optimus Prime's counterpart is Ginrai, a human trucker who combines with a transtector (a non-sentient Transformer body, a concept lifted from Headmasters) to become a Transformer himself; the same applies to the other Powermasters' counterparts, the Godmasters. The Pretender figures released during that year were the same, but in Masterforce the Autobot Pretenders disguise themselves as regular-sized humans who can wear normal clothing, instead of being giant humans wearing armor as they were in the contemporary Marvel comics.
|
Transformers: Super-God Masterforce
|
Development
|
The attempt to start things afresh with Masterforce does give rise to some continuity quirks, however, such as Earth technology being portrayed as contemporary rather than futuristic as in 2010 and Headmasters, and some characters being totally unaware of what Transformers are, even though they have been public figures for over two decades. Similarly, the show never supplied the viewer with the full backstory: within the main 42 episodes of the series, important aspects, such as what the true villain Devil Z is or who Black Zarak is, are never explained. Even the timeframe of the show was never revealed, with the series taking place an indeterminate amount of time after Headmasters. Most of these facts would be revealed later in made-for-video clip shows and other media, including a Special Secrets episode in which Shuta and Grand Maximus explain and reveal several pieces of trivia about the show.
|
Transformers: Super-God Masterforce
|
Adaptations
|
The series was dubbed into English in Hong Kong by the dubbing company Omni Productions for broadcast on the Malaysian TV channel RTM1, along with Headmasters and the following series, Victory. These dubs, however, are more famous for their time on the Singapore satellite channel Star TV, where they were grouped under the umbrella title of "Transformers Takara" and all given Victory's opening sequence. Later acquired by the US Transformers animated series creator Sunbow Productions, they were given English-language closing credits (even including the English Transformers theme), but no official release of them has ever been carried in the US because of their poor quality. Performed by a small group of fewer than half a dozen actors, the dubs feature many incorrect names and nonsensical translations; in the case of Masterforce especially, all the English-equivalent names are used for the characters, so throughout the series the clearly human Ginrai is referred to as "Optimus Prime", and the little blonde girl called Minerva is referred to by the inappropriate name "Nightbeat".
|
Transformers: Super-God Masterforce
|
Adaptations
|
In 2006, the complete series was released in Region 2 with the Japanese audio and English subtitles (although, like the Shout! Factory release, it does not contain the English dub). For the Shout! Factory release, the Cybertronians are still referred to as Autobots and the Destrons as Decepticons, and many of the characters are given the names of the American releases of their toys.
|
Transformers: Super-God Masterforce
|
Adaptations
|
A twelve-chapter manga adaptation of this anime was written by Masumi Kaneda and illustrated by Ban Magami.
|
Transformers: Super-God Masterforce
|
Theme songs
|
Openings"Super-God Masterforce Theme" (超神マスターフォースのテーマ, Chōjin Masutāfōsu no Tēma) April 12, 1988 - March 7, 1989 Lyricist: Machiko Ryu / Composer: Masahiro Kawasaki / Arranger: Masahiro Kawasaki / String Arranger: Tomoyuki Asakawa / Singers: Toshiya Igarashi Episodes: 1–47Endings"Let's Go! Transformers" (燃えろ!トランスフォーマー, Moero! Toransufōmā) April 12, 1988 - March 7, 1989 Lyricist: Machiko Ryu / Composer: Masahiro Kawasaki / Arranger: Masahiro Kawasaki/ String Arranger: Tomoyuki Asakawa / Singers: Toshiya Igarashi, Mori no Ki Jido Gassho-dan Episodes: 1–47Insert Songs"Miracle Transformers" (奇跡のトランスフォーマー, Kiseki no Toransufōmā) September 13, 1988, November 1, 1988, November 15, 1988, December 6, 1988 Lyricist: Machiko Ryu / Composer: Masahiro Kawasaki / Arranger: Masahiro Kawasaki / Singers: Toshiya Igarashi Episodes: 20, 27, 29, 32 "Advance! Super-God Masterforce" (進め!超神マスターフォース, Susume! Chōjin Masutāfōsu) September 27, 1988, November 8, 1988 Lyricist: Machiko Ryu / Composer: Masahiro Kawasaki / Arranger: Masahiro Kawasaki / Singers: Toshiya Igarashi Episodes: 22, 28 "WE BELIEVE TOMORROW" December 13, 1988, February 28, 1989 Lyricist: Machiko Ryu / Composer: Komune Negishi / Arranger: Kimio Nomura / Singers: Toshiya Igarashi Episodes: 33, 42 "Super Ginrai Theme" (スーパージンライのテーマ, Sūpā Jinrai no Tēma) Lyricist: Machiko Ryu / Composer: Komune Negishi / Arranger: Katsunori Ishida / Singers: Toshiya Igarashi Episodes: 34, 39 "Transform! Godmaster" (変身!ゴッドマスター, Henshin! Goddomasutā) Lyricist: Machiko Ryu / Composer: Masahiro Kawasaki / Arranger: Kimio Nomura / Singers: Toshiya Igarashi Episodes: None "Small Warrior: Headmaster Jr Theme" (小さな勇士~ヘッドマスターJrのテーマ~, Chīsana Yūshi: Heddomasutā Junia no Tēma) Lyricist: Kayoko Fuyusha / Composer: Komune Negishi / Arranger: Katsunori Ishida / Singers: Yumi Toma, Hiroko Emori, Yuriko Yamamoto Episodes: None "See See Seacons" (See See シーコンズ, Sī Sī Shīkonzu) Lyricist: Kayoko Fuyusha / Composer: Komune Negishi / Arranger: Katsunori Ishida / Singers: Masato Hirano Episodes: None "Ruler of the Universe: Devil Z" (宇宙の支配者・デビルZ, Uchū no Shihaisha: Debiru Zetto) Lyricist: Machiko Ryu / Composer: Komune Negishi / Arranger: Katsunori Ishida / Singers: Toshiya Igarashi Episodes: None
|
Completeness (cryptography)
|
Completeness (cryptography)
|
In cryptography, a boolean function is said to be complete if the value of each output bit depends on all input bits. This is a desirable property for an encryption cipher, so that if one bit of the input (plaintext) is changed, every bit of the output (ciphertext) changes with an average probability of 50%. The easiest way to show why this is good is the following: consider a cipher that encrypts each byte of an 8-byte plaintext independently, so that changing the plaintext's last byte only affects the 8th byte of the ciphertext. This would mean that if the attacker gathered 256 plaintext-ciphertext pairs (one for each possible value of that byte), he would always know the last byte of every 8-byte sequence we send (effectively 12.5% of all our data). Finding 256 plaintext-ciphertext pairs is not hard at all in the internet world, given that standard protocols are used, and standard protocols have standard headers and commands (e.g. "get", "put", "mail from:", etc.) which the attacker can safely guess. On the other hand, if our cipher has this property (and is generally secure in other ways, too), the attacker would need to collect 2^64 (about 1.8 × 10^19) plaintext-ciphertext pairs to crack the cipher in this way.
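The diffusion behavior described above can be checked empirically. The following sketch uses SHA-256 from the Java standard library as a stand-in for a primitive with good diffusion (it is a hash, not one of the ciphers discussed here, and the input string is arbitrary); it flips a single input bit and counts how many output bits change, which should come out near half:

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;

    public class AvalancheDemo {
        // Count differing bits between two equal-length byte arrays.
        static int hammingDistance(byte[] a, byte[] b) {
            int d = 0;
            for (int i = 0; i < a.length; i++) {
                d += Integer.bitCount((a[i] ^ b[i]) & 0xFF);
            }
            return d;
        }

        public static void main(String[] args) throws Exception {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            byte[] input = "mail from: alice".getBytes(StandardCharsets.US_ASCII);
            byte[] flipped = input.clone();
            flipped[flipped.length - 1] ^= 0x01; // flip one bit of the last byte

            int changed = hammingDistance(md.digest(input), md.digest(flipped));
            // With complete diffusion, roughly 128 of the 256 output bits differ.
            System.out.println("Bits changed: " + changed + " / 256");
        }
    }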
|
FASTOPEN
|
FASTOPEN
|
In computing, FASTOPEN is a DOS terminate-and-stay-resident command, introduced in MS-DOS version 3.3, that provides accelerated access to frequently-used files and directories. The command is also available in SISNE plus.
|
FASTOPEN
|
Overview
|
The command works with hard disks, but not with diskettes (probably for safety, since diskettes can be swapped) or with network drives (probably because such drives do not offer block-level access, only file-level access).
It is possible to specify the drives on which FASTOPEN should operate, how many files and directories should be cached on each (10 by default, up to 999 in total), how many regions of each drive should be cached, and whether the cache should be located in conventional or expanded memory.
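For illustration, a typical invocation might look as follows; the drive letter and cache size are examples, and the available switches vary between DOS versions.

```bat
REM Illustrative only: cache up to 100 files/directories on drive C:
REM and keep FASTOPEN's buffers in expanded memory (/X switch).
FASTOPEN C:=100 /X

REM From MS-DOS 4.0 onwards it can also be loaded via CONFIG.SYS:
INSTALL=C:\DOS\FASTOPEN.EXE C:=100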
If a disk defragmenter tool is used, or if files or directories are moved (e.g. with Windows Explorer), while FASTOPEN is installed, it is necessary to reboot the computer afterwards, because FASTOPEN would remember the old positions of files and directories, causing MS-DOS to display garbage if, for example, "DIR" was performed.
DR DOS 6.0 includes an implementation of the FASTOPEN command. FASTOPEN is also part of the Windows XP MS-DOS subsystem, to maintain compatibility with MS-DOS and MS OS/2 version 1.x. It is not available on Windows XP 64-Bit Edition. The "fastopen" name has since been reused for various other "accelerating" software products.
|
Calculation of glass properties
|
Calculation of glass properties
|
The calculation of glass properties (glass modeling) is used to predict glass properties of interest or glass behavior under certain conditions (e.g., during production) without experimental investigation, based on past data and experience, with the intention to save time, material, financial, and environmental resources, or to gain scientific insight. It was first practised at the end of the 19th century by A. Winkelmann and O. Schott. The combination of several glass models together with other relevant functions can be used for optimization and six sigma procedures. In the form of statistical analysis glass modeling can aid with accreditation of new data, experimental procedures, and measurement institutions (glass laboratories).
|
Calculation of glass properties
|
History
|
Historically, the calculation of glass properties is directly related to the founding of glass science. At the end of the 19th century the physicist Ernst Abbe, working in Jena, Germany, developed equations that allow calculating the design of optimized optical microscopes, stimulated by co-operation with the optical workshop of Carl Zeiss. Before Ernst Abbe's time the building of microscopes was mainly a work of art and experienced craftsmanship, resulting in very expensive optical microscopes of variable quality. Ernst Abbe now knew exactly how to construct an excellent microscope, but unfortunately the required lenses and prisms with specific ratios of refractive index and dispersion did not exist. Ernst Abbe was not able to find answers to his needs from glass artists and engineers; glass making was not based on science at this time.

In 1879 the young glass engineer Otto Schott sent Abbe glass samples of a special composition (lithium silicate glass) that he had prepared himself and that he hoped would show special optical properties. Ernst Abbe's measurements showed that Schott's glass samples did not have the desired properties, and they were also not as homogeneous as desired. Nevertheless, Ernst Abbe invited Otto Schott to work on the problem further and to evaluate all possible glass components systematically. Finally, Schott succeeded in producing homogeneous glass samples, and he invented borosilicate glass with the optical properties Abbe needed. These inventions gave rise to the well-known companies Zeiss and Schott Glass (see also Timeline of microscope technology). Systematic glass research was born. In 1908, Eugene Sullivan founded glass research in the United States as well (Corning, New York).

At the beginning of glass research it was most important to know the relation between the glass composition and its properties. For this purpose Otto Schott introduced the additivity principle in several publications for the calculation of glass properties. This principle implies that the relation between the glass composition and a specific property is linear in all glass component concentrations, assuming an ideal mixture, with Ci and bi representing specific glass component concentrations and related coefficients respectively in the equation below:

$\text{glass property} = \sum_{i=1}^{n} b_i C_i$

The additivity principle is a simplification and only valid within narrow composition ranges, as seen in the displayed diagrams for the refractive index and the viscosity. Nevertheless, the application of the additivity principle led the way to many of Schott's inventions, including optical glasses, glasses with low thermal expansion for cooking and laboratory ware (Duran), and glasses with reduced freezing point depression for mercury thermometers. Subsequently, English and Gehlhoff et al. published similar additive glass property calculation models. Schott's additivity principle is still widely in use today in glass research and technology.
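As a toy numeric illustration of the additivity principle, the following sketch evaluates the sum for one composition; the coefficients are invented for demonstration and are not Schott's measured values.

```python
# Additivity principle: property = sum(b_i * C_i) over all glass components.
# The coefficients b_i below are hypothetical, purely for illustration.
b = {"SiO2": 0.0047, "Na2O": 0.0059, "CaO": 0.0077}   # invented coefficients
C = {"SiO2": 74.0, "Na2O": 16.0, "CaO": 10.0}          # concentrations in %

prop = sum(b[comp] * C[comp] for comp in C)
print(f"Estimated property: {prop:.3f}")
```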
|
Calculation of glass properties
|
Global models
|
Schott and many scientists and engineers afterwards applied the additivity principle to experimental data measured in their own laboratories within sufficiently narrow composition ranges (local glass models). This is most convenient because disagreements between laboratories and non-linear glass component interactions do not need to be considered. In the course of several decades of systematic glass research, thousands of glass compositions were studied, resulting in millions of published glass properties, collected in glass databases. This huge pool of experimental data was not investigated as a whole until Bottinga, Kucuk, Priven, Choudhary, Mazurin, and Fluegel published their global glass models, using various approaches. In contrast to the models by Schott, the global models consider many independent data sources, making the model estimates more reliable. In addition, global models can reveal and quantify non-additive influences of certain glass component combinations on the properties, such as the mixed-alkali effect, as seen in the adjacent diagram, or the boron anomaly. Global models also reflect interesting developments in glass property measurement accuracy, e.g., a decreasing accuracy of experimental data in modern scientific literature for some glass properties, shown in the diagram. They can be used for accreditation of new data, experimental procedures, and measurement institutions (glass laboratories). In the following sections (except melting enthalpy), empirical modeling techniques are presented, which seem to be a successful way of handling huge amounts of experimental data. The resulting models are applied in contemporary engineering and research for the calculation of glass properties.
|
Calculation of glass properties
|
Global models
|
Non-empirical (deductive) glass models also exist. They are often not created to obtain reliable glass property predictions in the first place (except for melting enthalpy), but to establish relations among several properties (e.g. atomic radius, atomic mass, chemical bond strength and angles, chemical valency, heat capacity) in order to gain scientific insight. In the future, the investigation of property relations in deductive models may ultimately lead to reliable predictions for all desired properties, provided the property relations are well understood and all required experimental data are available.
|
Calculation of glass properties
|
Methods
|
Glass properties and glass behavior during production can be calculated through statistical analysis of glass databases such as GE-SYSTEM, SciGlass and Interglad, sometimes combined with the finite element method. For estimating the melting enthalpy, thermodynamic databases are used.
|
Calculation of glass properties
|
Methods
|
Linear regression If the desired glass property is not related to crystallization (e.g., liquidus temperature) or phase separation, linear regression can be applied using common polynomial functions up to the third degree. Below is an example equation of the second degree. The C-values are the glass component concentrations like Na2O or CaO in percent or other fractions, the b-values are coefficients, and n is the total number of glass components. The glass main component silica (SiO2) is excluded from the equation below because of over-parametrization, due to the constraint that all components sum up to 100%. Many terms in the equation below can be neglected based on correlation and significance analysis. Systematic errors, such as those seen in the picture, are quantified by dummy variables. Further details and examples are available in an online tutorial by Fluegel.
|
Calculation of glass properties
|
Methods
|
$\text{glass property} = b_0 + \sum_{i=1}^{n} \left( b_i C_i + \sum_{k=i}^{n} b_{ik} C_i C_k \right)$

Non-linear regression The liquidus temperature has been modeled by non-linear regression using neural networks and disconnected peak functions. The disconnected peak functions approach is based on the observation that within one primary crystalline phase field linear regression can be applied and at eutectic points sudden changes occur.
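To make the linear-regression approach concrete, the sketch below fits the second-degree model from the Linear regression subsection by ordinary least squares with NumPy; all concentrations, property values, and resulting coefficients are invented for illustration.

```python
import numpy as np

# Hypothetical data set: concentrations of Na2O and CaO in % (SiO2 is the
# excluded main component) and an invented measured property per glass.
C = np.array([
    [16.0, 10.0],
    [14.0, 12.0],
    [12.0, 14.0],
    [18.0,  8.0],
    [15.0, 11.0],
    [13.0,  9.0],
    [17.0,  9.0],
    [11.0, 13.0],
])
y = np.array([1.520, 1.522, 1.525, 1.517, 1.521, 1.515, 1.518, 1.524])

na, ca = C[:, 0], C[:, 1]
# Design matrix for the second-degree model:
# property = b0 + b1*Na2O + b2*CaO + b11*Na2O^2 + b12*Na2O*CaO + b22*CaO^2
X = np.column_stack([np.ones(len(C)), na, ca, na**2, na * ca, ca**2])

# Ordinary least squares via NumPy.
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
print("Fitted coefficients:", np.round(coeffs, 6))

# Predict the property of a new, untested composition.
new = np.array([1.0, 15.0, 10.0, 15.0**2, 15.0 * 10.0, 10.0**2])
print("Predicted property:", float(new @ coeffs))
```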
|
Calculation of glass properties
|
Methods
|
Glass melting enthalpy The glass melting enthalpy reflects the amount of energy needed to convert the mix of raw materials (batch) into a glass melt. It depends on the batch and glass compositions, on the efficiency of the furnace and heat regeneration systems, on the average residence time of the glass in the furnace, and on many other factors. A pioneering article about the subject was written by Carl Kröger in 1953.
|
Calculation of glass properties
|
Methods
|
Finite element method For modeling of the glass flow in a glass melting furnace the finite element method is applied commercially, based on data or models for viscosity, density, thermal conductivity, heat capacity, absorption spectra, and other relevant properties of the glass melt. The finite element method may also be applied to glass forming processes.
Optimization It is often required to optimize several glass properties simultaneously, including production costs.
|
Calculation of glass properties
|
Methods
|
This can be performed, e.g., by simplex search, or in a spreadsheet as follows:
1. Listing of the desired properties;
2. Entering of models for the reliable calculation of properties based on the glass composition, including a formula for estimating the production costs;
3. Calculation of the squares of the differences (errors) between desired and calculated properties;
4. Reduction of the sum of square errors using the Solver option in Microsoft Excel, with the glass components as variables.
Other software (e.g. Microcal Origin) can also be used to perform these optimizations. It is possible to weight the desired properties differently. Basic information about the principle can be found in an article by Huff et al. The combination of several glass models together with further relevant technological and financial functions can be used in six sigma optimization; a minimal sketch of the procedure follows below.
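The sketch below reproduces the spreadsheet procedure in Python, assuming the SciPy library is available; the property models, targets, and weights are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical linear property models mapping a composition vector
# x = [Na2O, CaO] in % to property values; all coefficients are invented.
def refractive_index(x):
    return 1.458 + 0.0021 * x[0] + 0.0038 * x[1]

def cost_per_kg(x):
    return 0.05 + 0.012 * x[0] + 0.007 * x[1]

# Desired property values and their weights (target, weight per model).
targets = [(refractive_index, 1.520, 1.0), (cost_per_kg, 0.25, 0.5)]

def objective(x):
    # Weighted sum of squared differences between desired and calculated
    # properties: the same quantity a spreadsheet Solver would minimize.
    return sum(w * (f(x) - t) ** 2 for f, t, w in targets)

# Nelder-Mead is the simplex search mentioned in the text above.
result = minimize(objective, x0=np.array([15.0, 10.0]), method="Nelder-Mead")
print("Optimized composition [Na2O %, CaO %]:", np.round(result.x, 2))
print("Residual weighted sum of squares:", result.fun)
```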
|
CSI 300 Index
|
CSI 300 Index
|
The CSI 300 (Chinese: 沪深300) is a capitalization-weighted stock market index designed to replicate the performance of the top 300 stocks traded on the Shanghai Stock Exchange and the Shenzhen Stock Exchange. It has two sub-indexes: the CSI 100 Index and the CSI 200 Index. Over the years, it has been deemed the Chinese counterpart of the S&P 500 index and a better gauge of the Chinese stock market than the more traditional SSE Composite Index.
|
CSI 300 Index
|
CSI 300 Index
|
The index is compiled by the China Securities Index Company, Ltd. It has been calculated since April 8, 2005. Its value is normalized relative to a base of 1000 on December 31, 2004. It is considered to be a blue chip index for Mainland China stock exchanges.
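In simplified terms, the normalization means the index level equals 1000 times the ratio of the constituents' current total market capitalization to their capitalization on the base date. The sketch below shows this arithmetic with invented figures; the actual CSI 300 methodology additionally applies free-float adjustment and divisor maintenance.

```python
# Simplified arithmetic for a capitalization-weighted index normalized to a
# base value of 1000. Market capitalizations are invented for illustration.
BASE_VALUE = 1000.0

base_caps    = {"StockA": 120e9, "StockB": 80e9, "StockC": 50e9}  # base date
current_caps = {"StockA": 150e9, "StockB": 70e9, "StockC": 65e9}  # today

index_level = BASE_VALUE * sum(current_caps.values()) / sum(base_caps.values())
print(f"Index level: {index_level:.2f}")  # 1000 * 285/250 = 1140.00
```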
|
CSI 300 Index
|
Annual Returns
|
The following table shows the annual development of the CSI 300 Index since 2005.
|
CSI 300 Index
|
Sub-Indices
|
Moreover, there are the following ten sub-indices, which reflect specific sectors:
CSI 300 Energy Index
CSI 300 Materials Index
CSI 300 Industrials Index
CSI 300 Consumer Discretionary Index
CSI 300 Consumer Staples Index
CSI 300 Health Care Index
CSI 300 Financial Index
CSI 300 Information Technology Index
CSI 300 Telecommunications Index
CSI 300 Utilities Index
The CSI 300 Index is also split into the CSI 100 Index and the CSI 200 Index, covering the top 100 companies and the 101st to 300th companies respectively.
|
Degenerative disease
|
Degenerative disease
|
Degenerative disease is the result of a continuous process based on degenerative cell changes, affecting tissues or organs, which increasingly deteriorate over time. In neurodegenerative diseases, cells of the central nervous system stop working or die via neurodegeneration. An example of this is Alzheimer's disease. The other two common groups of degenerative diseases are those that affect the circulatory system (e.g. coronary artery disease) and neoplastic diseases (e.g. cancers). Many degenerative diseases exist, and some are related to aging. Normal bodily wear or lifestyle choices (such as exercise or eating habits) may worsen degenerative diseases, but this depends on the disease. Sometimes the main or partial cause behind such diseases is genetic; thus some, like Huntington's disease, are clearly hereditary. Sometimes the cause is viruses, poisons or other chemicals. The cause may also be unknown. Some degenerative diseases can be cured; in those that cannot, it may be possible to alleviate the symptoms.
|
Gaia philosophy
|
Gaia philosophy
|
Gaia philosophy (named after Gaia, the Greek goddess of the Earth) is a broadly inclusive term for concepts about humanity as an effect of the life of this planet.
|
Gaia philosophy
|
Gaia philosophy
|
The Gaia hypothesis holds that all organisms on a life-giving planet regulate the biosphere in such a way as to promote its habitability. Gaia concepts draw a connection between the survivability of a species (hence its evolutionary course) and its usefulness to the survival of other species. While there were a number of precursors to the Gaia hypothesis, the first scientific form of this idea was proposed as the Gaia hypothesis by James Lovelock, a UK chemist, in 1970. The Gaia hypothesis deals with the concept of biological homeostasis, and claims that the resident life forms of a host planet, coupled with their environment, have acted and still act like a single, self-regulating system. This system includes the near-surface rocks, the soil, and the atmosphere. Today, many scientists consider such ideas to be unsupported by, or at odds with, the available evidence (see Gaia hypothesis criticism). These theories are, however, significant in green politics.
|
Gaia philosophy
|
Predecessors to the Gaia theory
|
There are some mystical, scientific and religious predecessors to the Gaia philosophy, which had a Gaia-like conceptual basis. Many religious mythologies had a view of Earth as being a whole that is greater than the sum of its parts (e.g. some Native American religions and various forms of shamanism).
|
Gaia philosophy
|
Predecessors to the Gaia theory
|
Isaac Newton wrote of the earth, "Thus this Earth resembles a great animal or rather inanimate vegetable, draws in æthereall breath for its dayly refreshment & vitall ferment & transpires again with gross exhalations, And according to the condition of all other things living ought to have its times of beginning youth old age & perishing."
Pierre Teilhard de Chardin, a paleontologist and geologist, believed that evolution fractally unfolded from cell to organism to planet to solar system and ultimately the whole universe, as we humans see it from our limited perspective. Teilhard later influenced Thomas Berry and many Catholic humanist thinkers of the 20th century.
|
Gaia philosophy
|
Predecessors to the Gaia theory
|
Lewis Thomas believed that Earth should be viewed as a single cell; he derived this view from Johannes Kepler's view of Earth as a single round organism.
Buckminster Fuller is generally credited with making the idea respectable in Western scientific circles in the 20th century. Building to some degree on his observations and artifacts, e.g. the Dymaxion map of the Earth he created, others began to ask if there was a way to make the Gaia theory scientifically sound.
|
Gaia philosophy
|
Predecessors to the Gaia theory
|
In 1931, L.G.M. Baas Becking delivered an inaugural lecture about Gaia in the sense of life and earth.
Oberon Zell-Ravenheart, in a 1970 article in Green Egg Magazine, independently articulated the Gaia Thesis.
Many believe that these ideas cannot be considered scientific hypotheses; by definition a scientific hypothesis must make testable predictions. As the above claims are not currently testable, they are outside the bounds of current science. This does not mean that these ideas are not theoretically testable: since one can postulate tests that could be applied given enough time and space, these ideas should be seen as scientific hypotheses.
|
Gaia philosophy
|
Predecessors to the Gaia theory
|
These are conjectures and perhaps can only be considered as social and maybe political philosophy; they may have implications for theology, or thealogy as Zell-Ravenheart and Isaac Bonewits put it.
|
Gaia philosophy
|
Range of views
|
According to James Kirchner there is a spectrum of Gaia hypotheses, ranging from the undeniable to the radical. At one end is the undeniable statement that the organisms on the Earth have radically altered its composition. A stronger position is that the Earth's biosphere effectively acts as if it were a self-organizing system which works in such a way as to keep its systems in some kind of equilibrium that is conducive to life. Today many scientists consider that such a view (and any stronger view) is unlikely to be correct. An even stronger claim is that all lifeforms are part of a single planetary being, called Gaia. In this view, the atmosphere, the seas and the terrestrial crust would be the result of interventions carried out by Gaia through the coevolving diversity of living organisms.
|
Gaia philosophy
|
Range of views
|
The most extreme form of Gaia theory is that the entire Earth is a single unified organism with a highly intelligent mind that arose as an emergent property of the whole biosphere. In this view, the Earth's biosphere is consciously manipulating the climate in order to make conditions more conducive to life. Scientists contend that there is no evidence at all to support this last point of view, and that it has come about because many people do not understand the concept of homeostasis. Many non-scientists instinctively and incorrectly see homeostasis as a process that requires conscious control. The more speculative versions of Gaia, including versions in which it is believed that the Earth is actually conscious, sentient, and highly intelligent, are usually considered outside the bounds of science.
|
Gaia philosophy
|
Gaia in biology and science
|
Buckminster Fuller has been credited as the first to incorporate scientific ideas into a Gaia theory, which he did with his Dymaxion map of the Earth.
The first scientifically rigorous theory was the Gaia hypothesis by James Lovelock, a UK chemist.
A variant of this hypothesis was developed by Lynn Margulis, a microbiologist, in 1979.
Her version is sometimes called the "Gaia Theory" (note uppercase-T). Her model is more limited in scope than the one that Lovelock proposed.
|