title | section | text
---|---|---|
Tetramethyl acetyloctahydronaphthalenes
|
Chemical Summary
|
OTNE is the abbreviation for the fragrance material with Chemical Abstract Service (CAS) numbers 68155-66-8, 54464-57-2 and 68155-67-9 and EC List number 915-730-3. It is a multi-constituent isomer mixture containing:
1-(1,2,3,4,5,6,7,8-octahydro-2,3,8,8-tetramethyl-2-naphthyl)ethan-1-one (CAS 54464-57-2)
1-(1,2,3,5,6,7,8,8a-octahydro-2,3,8,8-tetramethyl-2-naphthyl)ethan-1-one (CAS 68155-66-8)
1-(1,2,3,4,6,7,8,8a-octahydro-2,3,8,8-tetramethyl-2-naphthyl)ethan-1-one (CAS 68155-67-9)
All isomers conform to the chemical formula C16H26O and have a molecular weight of 234.4 g/mol.
|
Tetramethyl acetyloctahydronaphthalenes
|
Physical-chemical properties
|
OTNE is a clear yellow liquid at 20 °C. Its melting point is below −20 °C at atmospheric pressure, and its boiling point was determined to be around 290 °C (modified OECD 103 method). All physicochemical data have been obtained from the OTNE REACH registration dossier.
|
Tetramethyl acetyloctahydronaphthalenes
|
Safety
|
Iso E Super may cause allergic reactions detectable by patch tests in humans, and chronic exposure to Iso E Super from perfumes may result in permanent hypersensitivity. In a study with female mice, Iso E Super was positive in the local lymph node assay (LLNA) and irritancy assay (IRR), but negative in the mouse ear swelling test (MEST). No data were available regarding chemical disposition, metabolism, or toxicokinetics; acute, short-term, subchronic, or chronic toxicity; synergistic or antagonistic activity; reproductive or teratological effects; carcinogenicity; genotoxicity; or immunotoxicity. The International Fragrance Association (IFRA) has published safe use levels for Iso E Super in consumer products.
OTNE is not toxic and is not a CMR substance. OTNE is classified as a skin irritant (R38 EU DSD, H315 EU CLP) and is positive in the Local Lymph Node Assay (LLNA – OECD 429) and therefore classified as a skin sensitiser (R43 EU DSD, H317 EU CLP), though OTNE lacks any structural alerts for sensitisation in in silico prediction models (DEREK) and is not identified as an allergen in in vivo Human Repeated Patch Tests. Several health-related studies have been conducted on OTNE, and based on these studies OTNE has been determined to be safe under the current conditions of use.
Given the sensitisation classification of OTNE and its use in fragrances, the International Fragrance Association (IFRA) has published safe use levels for OTNE in consumer products, which have been in effect since August 2009.
|
Tetramethyl acetyloctahydronaphthalenes
|
Environmental data
|
OTNE is classified as H410 Very toxic to aquatic life with long-lasting effects (EU-CLP) or R51/53 Toxic to aquatic organisms, may cause long-term adverse effects in the aquatic environment (EU DSD).
The biodegradation half-life (T1/2) of OTNE in fresh water is at most 40 days, and at most 120 days in sediment (OECD 314 test), though biodegradation within the 28-day window was around 11% (OECD 301-C). Given the outcome of the OECD 314 test, OTNE does not meet the criteria for “Persistent” (P) or “very Persistent” (vP).
The measured Bioconcentration Factor (BCF) is 391 L/kg, well below the EU limit of 2000 and the US limit of 1000 for a Bioaccumulation (B) classification. The log Kow of OTNE has been measured as 5.65. OTNE is therefore not classified as a PBT or vPvB substance under EU or any other global criteria.
OTNE has been detected in surface water at levels of 29–180 ng/L. These values are well below the Predicted No Effect Concentration (PNEC), and as a result the overall environmental risk ratio (also referred to as RCR or PEC/PNEC) is determined to be below 1.
|
Tetramethyl acetyloctahydronaphthalenes
|
Regulatory status
|
OTNE is listed on all major chemical inventories (US, Japan, China, Korea, Philippines, and Australia) and was registered under EU REACH in 2010.
|
Tetramethyl acetyloctahydronaphthalenes
|
Regulatory status
|
In 2014 the US National Toxicology Program (NTP) conducted a 13-week repeat-dose toxicity study and found no adverse effects. OTNE has been recommended for inclusion in an update to the EU fragrance allergen labelling requirements for cosmetic products, based on a small number of positive reactions (around 0.2% to 1.7% of patients tested) in three studies at dermatological clinics. If the proposed SCCS Opinion is taken forward into legislation, OTNE will have to be labelled on cosmetic products in the EU several years after the new legislation is published.
|
Tetramethyl acetyloctahydronaphthalenes
|
Commercial products
|
The fragrance Molecule 01 (Escentric Molecules, 2005) is built around a specific isomer of Iso E Super produced by the company IFF. Its partner fragrance Escentric 01 contains Iso E Super along with ambroxan, pink pepper and green lime, with balsamic notes such as benzoin, mastic and incense.
The fragrance Eternity by Calvin Klein (1988) contained 11.7% Iso E Super in the fragrance portion of the formula.
The fragrance Scent of a Dream by Charlotte Tilbury contains Iso E Super.
The fragrance No.1 Invisible by Perfume Extract contains Iso E Super.
|
Tetramethyl acetyloctahydronaphthalenes
|
History
|
OTNE was patented in 1975 as an invention of International Flavors and Fragrances.
|
Bromoxylene
|
Bromoxylene
|
A bromoxylene is an aromatic compound consisting of a benzene ring bearing two methyl groups and one bromine atom. Several isomers exist.
|
Phaclofen
|
Phaclofen
|
Phaclofen, or phosphonobaclofen, is a selective antagonist of the GABAB receptor. It was the first selective GABAB antagonist discovered, but its utility was limited by the fact that it does not cross the blood–brain barrier.
|
Dernford Fen
|
Dernford Fen
|
Dernford Fen is a 10.3-hectare (25-acre) biological Site of Special Scientific Interest north-west of Sawston in Cambridgeshire. The site is a rare surviving example of rough fen and carr. Other habitats are dry grassland and scrub, together with ditches and a chalk stream. There are breeding warblers, and the diverse habitats are valuable for amphibians and reptiles. The site is private land with no public access.
|
Operator associativity
|
Operator associativity
|
In programming language theory, the associativity of an operator is a property that determines how operators of the same precedence are grouped in the absence of parentheses. If an operand is both preceded and followed by operators (for example, ^ 3 ^), and those operators have equal precedence, then the operand may be used as input to two different operations (i.e. the two operations indicated by the two operators). The choice of which operations to apply the operand to is determined by the associativity of the operators. Operators may be associative (meaning the operations can be grouped arbitrarily), left-associative (meaning the operations are grouped from the left), right-associative (meaning the operations are grouped from the right) or non-associative (meaning operations cannot be chained, often because the output type is incompatible with the input types). The associativity and precedence of an operator are part of the definition of the programming language; different programming languages may have different associativity and precedence for the same type of operator.
|
Operator associativity
|
Operator associativity
|
Consider the expression a ~ b ~ c. If the operator ~ has left associativity, this expression would be interpreted as (a ~ b) ~ c. If the operator has right associativity, the expression would be interpreted as a ~ (b ~ c). If the operator is non-associative, the expression might be a syntax error, or it might have some special meaning. Some mathematical operators have inherent associativity. For example, subtraction and division, as used in conventional math notation, are inherently left-associative. Addition and multiplication, by contrast, are both left and right associative. (e.g. (a * b) * c = a * (b * c)).
|
Operator associativity
|
Operator associativity
|
Many programming language manuals provide a table of operator precedence and associativity; see, for example, the table for C and C++.
|
Operator associativity
|
Operator associativity
|
The concept of notational associativity described here is related to, but different from, the mathematical associativity. An operation that is mathematically associative requires, by definition, no notational associativity. (For example, addition has the associative property, therefore it does not have to be either left associative or right associative.) An operation that is not mathematically associative, however, must be notationally left-, right-, or non-associative. (For example, subtraction does not have the associative property, therefore it must have notational associativity.)
|
Operator associativity
|
Examples
|
Associativity is only needed when the operators in an expression have the same precedence. Usually + and - have the same precedence. Consider the expression 7 - 4 + 2. The result could be either (7 - 4) + 2 = 5 or 7 - (4 + 2) = 1. The former result corresponds to the case when + and - are left-associative, the latter to when + and - are right-associative.
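As a quick illustration, the following short Python sketch makes the two groupings explicit (Python's own + and - happen to be left-associative, so the bare expression matches the first grouping):

# Explicit groupings of the expression 7 - 4 + 2
left_grouping = (7 - 4) + 2   # left-associative reading
right_grouping = 7 - (4 + 2)  # right-associative reading

print(left_grouping)   # 5
print(right_grouping)  # 1

# Python's + and - are left-associative, so the bare expression
# matches the left grouping:
assert 7 - 4 + 2 == left_grouping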
|
Operator associativity
|
Examples
|
In order to reflect normal usage, addition, subtraction, multiplication, and division operators are usually left-associative, while for an exponentiation operator (if present) and Knuth's up-arrow operators there is no general agreement. Any assignment operators are typically right-associative. To prevent cases where operands would be associated with two operators, or no operator at all, operators with the same precedence must have the same associativity.
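For instance, Python's exponentiation operator ** is right-associative, so a bare chain is grouped from the right (a minimal sketch):

# ** groups from the right: 2 ** 3 ** 2 is read as 2 ** (3 ** 2)
assert 2 ** 3 ** 2 == 2 ** (3 ** 2) == 512
# A left-associative reading would give a different value
assert (2 ** 3) ** 2 == 64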
|
Operator associativity
|
Examples
|
A detailed example
Consider the expression 5^4^3^2, in which ^ is taken to be a right-associative exponentiation operator. A parser reading the tokens from left to right would apply the associativity rule to a branch, because of the right-associativity of ^, in the following way:
Term 5 is read.
Nonterminal ^ is read. Node: "5^".
Term 4 is read. Node: "5^4".
Nonterminal ^ is read, triggering the right-associativity rule. Associativity decides node: "5^(4^".
Term 3 is read. Node: "5^(4^3".
Nonterminal ^ is read, triggering the re-application of the right-associativity rule. Node "5^(4^(3^".
Term 2 is read. Node "5^(4^(3^2".
No tokens to read. Apply associativity to produce parse tree "5^(4^(3^2))". This can then be evaluated depth-first, starting at the top node (the first ^): The evaluator walks down the tree, from the first, over the second, to the third ^ expression.
It evaluates as: 3^2 = 9. The result replaces the expression branch as the second operand of the second ^.
Evaluation continues one level up the parse tree as: 4^9 = 262,144. Again, the result replaces the expression branch as the second operand of the first ^.
Again, the evaluator steps up the tree to the root expression and evaluates as: 5^262144 ≈ 6.2060699×10^183230. The last remaining branch collapses and the result becomes the overall result, therefore completing overall evaluation.
A left-associative evaluation would have resulted in the parse tree ((5^4)^3)^2 and the completely different result (625^3)^2 = 244,140,625^2 ≈ 5.9604645×10^16.
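The parse just described can be reproduced with a tiny recursive-descent evaluator. The sketch below is illustrative only: it handles a single right-associative operator ^ over integer terms and omits error handling.

def evaluate(tokens):
    # Parse and evaluate a right-associative chain: term (^ term)*
    # Right associativity falls out of the recursion: the right-hand
    # side of each ^ is parsed (and therefore grouped) first.
    def parse(pos):
        left = int(tokens[pos])          # a term
        if pos + 1 < len(tokens) and tokens[pos + 1] == "^":
            right, pos = parse(pos + 2)  # recurse on everything to the right
            return left ** right, pos
        return left, pos

    value, _ = parse(0)
    return value

# "5 ^ 4 ^ 3 ^ 2" would be grouped as 5 ^ (4 ^ (3 ^ 2))
print(evaluate("3 ^ 2".split()))      # 9
print(evaluate("4 ^ 3 ^ 2".split()))  # 262144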
|
Operator associativity
|
Right-associativity of assignment operators
|
In many imperative programming languages, the assignment operator is defined to be right-associative, and assignment is defined to be an expression (which evaluates to a value), not just a statement. This allows chained assignment by using the value of one assignment expression as the right operand of the next assignment expression.
|
Operator associativity
|
Right-associativity of assignment operators
|
In C, the assignment a = b is an expression that evaluates to the same value as the expression b converted to the type of a, with the side effect of storing the R-value of b into the L-value of a. Therefore the expression a = (b = c) can be interpreted as b = c; a = b;. The alternative expression (a = b) = c raises an error because a = b is not an L-value expression, i.e. it has an R-value but not an L-value in which to store the R-value of c. The right-associativity of the = operator allows expressions such as a = b = c to be interpreted as a = (b = c).
|
Operator associativity
|
Right-associativity of assignment operators
|
In C++, the assignment a = b is an expression that evaluates to the same value as the expression a, with the side effect of storing the R-value of b into the L-value of a. Therefore the expression a = (b = c) can still be interpreted as b = c; a = b;. And the alternative expression (a = b) = c can be interpreted as a = b; a = c; instead of raising an error. The right-associativity of the = operator allows expressions such as a = b = c to be interpreted as a = (b = c).
|
Operator associativity
|
Non-associative operators
|
Non-associative operators are operators that have no defined behavior when used in sequence in an expression. In Prolog the infix operator :- is non-associative because constructs such as "a :- b :- c" constitute syntax errors.
|
Operator associativity
|
Non-associative operators
|
Another possibility is that sequences of certain operators are interpreted in some other way, which cannot be expressed as associativity. This generally means that syntactically, there is a special rule for sequences of these operations, and semantically the behavior is different. A good example is in Python, which has several such constructs. Since assignments are statements, not operations, the assignment operator does not have a value and is not associative. Chained assignment is instead implemented by having a grammar rule for sequences of assignments a = b = c, which are then assigned left-to-right. Further, combinations of assignment and augmented assignment, like a = b += c, are not legal in Python, though they are legal in C. Another example is the comparison operators, such as >, ==, and <=. A chained comparison like a < b < c is interpreted as (a < b) and (b < c), not equivalent to either (a < b) < c or a < (b < c).
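A minimal Python sketch of the behaviour described above (the chained comparison expands to a conjunction, and mixing assignment with augmented assignment is rejected by the grammar):

a, b, c = 1, 5, 3

# Chained comparison: a < b < c means (a < b) and (b < c)
assert (a < b < c) == ((a < b) and (b < c))   # both are False here

# Assignment is a statement, so "a = b += c" is a syntax error:
import ast
try:
    ast.parse("a = b += c")
except SyntaxError:
    print("a = b += c is not legal Python")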
|
Calligra Words
|
Calligra Words
|
Calligra Words is a word processor, which is part of Calligra Suite and developed by KDE as free software.
|
Calligra Words
|
History
|
When the Calligra Suite was formed, Words, unlike the other Calligra applications, was not a continuation of the corresponding KOffice application, KWord. Words was largely written from scratch; in May 2011 a completely new layout engine was announced. The first release was made available on April 11, 2012, using the version number 2.4 to match the rest of Calligra Suite.
|
Calligra Words
|
Reception
|
Initial reception of Calligra Words shortly after the 2.4 release was mixed. While Linux Pro Magazine Online's Bruce Byfield wrote “Calligra needed an impressive first release. Perhaps surprisingly, and to the development team’s credit, it has managed one in 2.4.”, he also noted that “Words in particular is still lacking features”. He concluded that Calligra is “worth keeping an eye on”. On the other hand, Calligra Words became the default word processor in Kubuntu 12.04 – replacing LibreOffice Writer.
|
Calligra Words
|
Formula editor
|
Formulas in Calligra Words are provided by the Formula plugin. It is a formula editor with a WYSIWYG interface.
|
Temporary equilibrium method
|
Temporary equilibrium method
|
The temporary equilibrium method was devised by Alfred Marshall for analyzing economic systems that comprise interdependent variables adjusting at different speeds. It is sometimes referred to as the moving equilibrium method.
|
Temporary equilibrium method
|
Temporary equilibrium method
|
For example, assume an industry with a certain capacity that produces a certain commodity. Given this capacity, the supply offered by the industry will depend on the prevailing price. The corresponding supply schedule gives short-run supply. The demand depends on the market price. The price in the market declines if supply exceeds demand, and it increases if supply is less than demand. The price mechanism leads to market clearing in the short run. However, if this short-run equilibrium price is sufficiently high, production will be very profitable, and capacity will increase. This shifts the short-run supply schedule to the right, and a new short-run equilibrium price will be obtained. The short-run equilibria in the resulting sequence are termed temporary equilibria. The overall system involves two state variables: price and capacity. Using the temporary equilibrium method, it can be reduced to a system involving only one state variable. This is possible because each short-run equilibrium price will be a function of the prevailing capacity, and the change of capacity will be determined by the prevailing price. Hence the change of capacity will be determined by the prevailing capacity. The method works if the price adjusts fast and capacity adjustment is comparatively slow. The mathematical background is provided by the Moving equilibrium theorem.
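As an illustration only, the following Python sketch uses assumed linear demand and supply schedules (not Marshall's own specification): at each step the price jumps immediately to its short-run market-clearing value for the current capacity, while capacity drifts slowly toward the level the prevailing price makes profitable, so the pair converges to the long-run (moving) equilibrium of the reduced one-variable system.

# Illustrative only: functional forms and parameters are assumed.
def short_run_price(capacity):
    # Market clearing for demand D(p) = 100 - p and supply S(p) = capacity * p:
    # 100 - p = capacity * p  =>  p = 100 / (1 + capacity)
    return 100.0 / (1.0 + capacity)

capacity = 0.5           # slow state variable
adjustment_speed = 0.05  # capacity reacts sluggishly to profitability

for step in range(200):
    price = short_run_price(capacity)      # fast variable: always at its temporary equilibrium
    target_capacity = price / 10.0         # assumed: a higher price justifies more capacity
    capacity += adjustment_speed * (target_capacity - capacity)

print(round(price, 2), round(capacity, 2))  # approximate moving equilibrium of the reduced system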
|
Temporary equilibrium method
|
Temporary equilibrium method
|
In physics, the method is known as scale separation.
|
Undergrowth
|
Undergrowth
|
In forestry and ecology, understory (American English), or understorey (Commonwealth English), also known as underbrush or undergrowth, includes plant life growing beneath the forest canopy without penetrating it to any great extent, but above the forest floor. Only a small percentage of light penetrates the canopy so understory vegetation is generally shade-tolerant. The understory typically consists of trees stunted through lack of light, other small trees with low light requirements, saplings, shrubs, vines and undergrowth. Small trees such as holly and dogwood are understory specialists.
|
Undergrowth
|
Undergrowth
|
In temperate deciduous forests, many understory plants start into growth earlier in the year than the canopy trees, to make use of the greater availability of light at that particular time of year. A gap in the canopy caused by the death of a tree stimulates the potential emergent trees into competitive growth as they grow upwards to fill the gap. These trees tend to have straight trunks and few lower branches. At the same time, the bushes, undergrowth, and plant life on the forest floor become denser. The understory experiences greater humidity than the canopy, and the shaded ground does not vary in temperature as much as open ground. This causes a proliferation of ferns, mosses, and fungi and encourages nutrient recycling, which provides favorable habitats for many animals and plants.
|
Undergrowth
|
Understory structure
|
The understory is the underlying layer of vegetation in a forest or wooded area, especially the trees and shrubs growing between the forest canopy and the forest floor.
|
Undergrowth
|
Understory structure
|
Plants in the understory comprise an assortment of seedlings and saplings of canopy trees together with specialist understory shrubs and herbs. Young canopy trees often persist in the understory for decades as suppressed juveniles until an opening in the forest overstory permits their growth into the canopy. In contrast, understory shrubs complete their life cycles in the shade of the forest canopy. Some smaller tree species, such as dogwood and holly, rarely grow tall and generally are understory trees.
|
Undergrowth
|
Understory structure
|
The canopy of a tropical forest is typically about 10 m thick and intercepts around 95% of the sunlight. Plants in the understory therefore receive less intense light than plants in the canopy, and such light as does penetrate is impoverished in the wavelengths that are most effective for photosynthesis. Understory plants therefore must be shade tolerant—they must be able to photosynthesize adequately using such light as does reach their leaves. They often are able to use wavelengths that canopy plants cannot. In temperate deciduous forests towards the end of the leafless season, understory plants take advantage of the shelter of the still leafless canopy plants to "leaf out" before the canopy trees do. This is important because it provides the understory plants with a window in which to photosynthesize without the canopy shading them. This brief period (usually 1–2 weeks) is often a crucial period in which the plant can maintain a net positive carbon balance over the course of the year.
|
Undergrowth
|
Understory structure
|
As a rule forest understories also experience higher humidity than exposed areas. The forest canopy reduces solar radiation, so the ground does not heat up or cool down as rapidly as open ground. Consequently, the understory dries out more slowly than more exposed areas do. The greater humidity encourages epiphytes such as ferns and mosses, and allows fungi and other decomposers to flourish. This drives nutrient cycling, and provides favorable microclimates for many animals and plants, such as the pygmy marmoset.
|
Ventricular-brain ratio
|
Ventricular-brain ratio
|
Ventricular-brain ratio (VBR), also known as the ventricle-to-brain ratio or ventricle-brain ratio, is the ratio of total ventricle area to total brain area, which can be calculated with planimetry from brain imaging techniques such as CT scans.
|
Ventricular-brain ratio
|
Ventricular-brain ratio
|
It is a common measure of ventricular dilation or cerebral atrophy in patients with traumatic brain injury or hydrocephalus ex vacuo. VBR also tends to increase with age. Generally, a higher VBR means a worse prognosis for recovering from a brain injury. For example, VBR is significantly correlated with performance on the Luria-Nebraska neuropsychological battery. Studies have found that people with schizophrenia have larger third ventricles and a larger VBR. Correlational studies have found that ventricle-brain ratio is associated with binge eating and is inversely related to plasma thyroid hormone concentration.
|
Collaborative e-democracy
|
Collaborative e-democracy
|
Collaborative e-democracy refers to a hybrid democratic model combining elements of direct democracy, representative democracy, and e-democracy (the incorporation of ICTs into democratic processes). This concept, first introduced at international academic conferences in 2009, offers a pathway for citizens to directly or indirectly engage in policymaking. Steven Brams and Peter Fishburn describe it as an "innovative way to engage citizens in the democratic process" that potentially makes government "more transparent, accountable, and responsive to the needs of the people."
Collaborative e-democracy is a political system that enables governmental stakeholders (such as politicians, parties, ministers, MPs) and non-governmental stakeholders (including NGOs, political lobbies, local communities, and individual citizens) to collaborate in the development of public laws and policies. This collaborative policymaking process occurs through a government-sanctioned social networking site, with all citizens as members, thus facilitating collaborative e-policy-making. Michael Gallagher suggests that it can be a "powerful tool that can be used to improve the quality of decision-making." Andrew Reynolds even believes that "collaborative e-democracy is the future of democracy."
In this system, directly elected government officials, or ‘proxy representatives’, would undertake most law- and policy-making processes, embodying aspects of representative democracy. However, citizens retain final voting power on each issue, a feature of direct democracy. Furthermore, every citizen is empowered to propose their own policies and, where relevant, initiate new policy processes (initiative). Collaboratively formulated policies, considering the views of a larger proportion of the citizenry, may result in more just, sustainable, and therefore implementable outcomes. As Steven Brams and Peter Fishburn suggest, "collaborative e-democracy can help to ensure that all voices are heard, and that decisions are made in the best interests of the community." They argue that this can lead to "more just and sustainable outcomes."
Collaborative e-democracy can also help to improve the quality of decision-making, as noted by Michael Gallagher, who states, "By involving a wider range of people in the decision-making process, collaborative e-democracy can help to ensure that decisions are made on the basis of sound evidence and reasoning." Gallagher further proposes that this collaborative approach can contribute to "more sustainable outcomes."
Andrew Reynolds posits that "Collaborative e-democracy can help to make government more responsive to the needs of the people. By giving citizens a direct say in the decision-making process, collaborative e-democracy can help to ensure that government is more accountable to the people. This can lead to more implementable outcomes, as decisions are more likely to be supported by the people." Additional references support the idea that collaborative e-democracy can lead to more just, sustainable, and implementable outcomes.
|
Collaborative e-democracy
|
Theoretical Framework
|
Collaborative e-democracy encompasses the following theoretical components: Collaborative Democracy: A political framework where electors and elected officials actively collaborate to achieve optimal solutions using technologies that facilitate broad citizen participation in government.
|
Collaborative e-democracy
|
Theoretical Framework
|
Collaborative e-Policymaking (CPM): A software-facilitated, five-phase policy process in which citizens participate either directly or indirectly via proxy representatives. This process unfolds on a government-backed social networking site, with all citizens as members. Each member can propose issues, evaluate and rank other members' suggestions, and vote on laws and policies that will affect them. In a broader context, CPM is a universal process that could enable every organization (e.g., businesses, governments) or self-selected group (e.g., unions, online communities) to co-create their own regulations (such as laws or codes of conduct) and strategies (e.g., governmental actions, business strategies), involving all stakeholders in the respective decision-making processes.
|
Collaborative e-democracy
|
Theoretical Framework
|
Proxy voting and Liquid Democracy: In a collaborative e-democracy, the system takes into account the limitations of direct democracy, where each citizen is expected to vote on every policy issue. Recognizing that this could impose an excessive burden, collaborative e-democracy allows citizens to delegate voting power to trusted representatives, or proxies, for issues or domains where they lack the time, interest, or expertise for direct participation. Despite this delegation, the original citizen maintains final voting power on each issue, amalgamating the benefits of both direct and representative democracy on the social networking platform.
|
Collaborative e-democracy
|
Policy Process
|
Collaborative e-democracy engages various stakeholders such as affected individuals, domain experts, and parties capable of implementing solutions in the process of shaping public laws and policies. The cycle of each policy begins with the identification of a common issue or objective by the collective participants - citizens, experts, and proxy representatives. As Steven Brams and Peter Fishburn argue, "collaborative e-democracy can help to ensure that all voices are heard, and that decisions are made in the best interests of the community."
Suggestion & Ranking Phase: Participants are prompted to offer policy solutions aimed at resolving the identified issue or reaching the proposed goal, a method known as policy crowdsourcing. Subsequently, these suggestions are ranked, with those having the most support taking precedence. This process, according to Michael Gallagher, helps to "improve the quality of decision-making" by involving a wider range of people, ensuring that "decisions are made on the basis of sound evidence and reasoning."
Evaluation Phase: For each top-ranking proposal (i.e., law or government action), pros and cons of its implementation are identified, enabling the collective to assess how they might be impacted by each policy. Independent domain experts assist this evaluation process.
|
Collaborative e-democracy
|
Policy Process
|
Voting Phase: Based on the collectively created information, the group votes for the proposal perceived as the most optimal solution for the identified issue or goal. The outcome of this phase may result in the introduction of a new law or execution of a new government action. As Andrew Reynolds notes, giving citizens a "direct say in the decision-making process... can lead to more implementable outcomes, as decisions are more likely to be supported by the people."
Revision Phase: A predetermined period post-implementation, the collective is consulted to ascertain whether the policy enacted was successful in resolving the issue or attaining the goal. If the policy is deemed successful, the cycle concludes; if not, the process reinitiates with the suggestion phase until a resolution is reached.
Note that as a software process, CPM is automated and conducted on a governmental social networking site.
|
Collaborative e-democracy
|
Principles
|
Collaborative e-democracy operates on several key principles: Self-government and Direct Democracy: Collaborative e-democracy is grounded in the ideal of self-governance and direct democracy. It embodies the ancient Roman law maxim, quod omnes tangit ab omnibus approbetur, which translates to “that which affects all people must be approved by all people.” This stands in stark contrast to representative democracy, which is often influenced by corporate lobbies (Corporatocracy).
|
Collaborative e-democracy
|
Principles
|
Open source governance: This philosophy promotes the application of open source and open content principles to democracy, enabling any engaged citizen to contribute to policy creation.
Aggregation: The social networking platform plays a role in gathering citizens' opinions on different issues, such as agreement with a specific policy. Based on these common views, ad hoc groups may form to address these concerns.
Collaboration: The platform also encourages collaboration of like-minded individuals on shared issues, aiding the co-creation of policy proposals within or between groups. Groups with contrasting strategies or perspectives but similar goals can compete with each other.
Collective intelligence: The CPM process leverages collective intelligence — a group intelligence emerging from aggregation, collaboration, competition, and consensus decision-making. This collective intelligence helps identify issues and co-create solutions beneficial for most people, reflecting the design pattern of Web 2.0.
|
Collaborative e-democracy
|
Principles
|
Collective Learning & Adoption: The direct democracy aspect of collaborative e-democracy shifts policymaking responsibility from government teams (top-down) to the citizen collective (bottom-up). The repercussions of their decisions initiate a collective learning process. Collaborative e-democracy, being flexible and adaptable, integrates learning experiences quickly and adjusts to new social, economic, or environmental circumstances. This principle mirrors 'Perpetual Beta,' another design pattern of Web 2.0.
|
Collaborative e-democracy
|
Benefits and Limitations
|
Collaborative e-democracy aims to bring forth several benefits:
Transparency and Accessibility: The CPM process aspires to provide transparency and make governmental operations accessible to all citizens via the internet.
Political efficacy: Engaging citizens in governmental processes could heighten political efficacy and help counter the democratic deficit.
Deliberation: The governmental social networking site, serving as the primary platform for political information and communication, could enhance deliberation quality among the nation's various governmental and non-governmental stakeholders.
Collective Awareness: Large-scale online participation could boost public awareness of collective problems, goals, or policy issues, including minority opinions, and facilitate harnessing the nation's collective intelligence for policy development.
However, collaborative e-democracy has its limitations:
Constitutional Constraints: Many democratic nations have constitutional limits on direct democracy, and governments may be reluctant to surrender policymaking authority to the collective.
Digital divide: People without internet access could be at a disadvantage in a collaborative e-democracy. Traditional democratic procedures need to remain available until the digital divide is resolved.
Majority rule: As in most democratic decision processes, majorities could overshadow minorities. The evaluation process could provide advance notice if a minority group would be significantly disadvantaged by a proposed policy.
Potential for Naive Voting: Voters may not have a comprehensive understanding of the facts and data related to their options, leading to votes that do not represent their actual intentions. However, the system's proxy voting/delegation, coupled with potential improvement in education, critical thinking, and reasoning skills (potentially fostered by a better form of government and internet usage), should help mitigate this issue. Additionally, the CPM process incorporates proxies and experts to educate people on policy implications before decisions are made.
|
Collaborative e-democracy
|
Research and Development
|
The concepts of collaborative e-democracy and collaborative e-policy-making were first introduced at two academic conferences on e-governance and e-democracy in 2009. The key presentations were:
Petrik, Klaus (2009). “Participation and e-Democracy: How to Utilize Web 2.0 for Policy Decision-Making.” Presented at the 10th International Digital Government Research Conference: "Social Networks: Making Connections between Citizens, Data & Government" in Puebla, Mexico.
Petrik, Klaus (2009). “Deliberation and Collaboration in the Policy Process: A Web 2.0 Approach.” Presented at The 3rd Conference on Electronic Democracy in Vienna, Austria.
An additional publication appeared in the "Journal of eDemocracy and Open Government", Vol 2, No 1 (2010).
|
Cadbury Creme Egg
|
Cadbury Creme Egg
|
A Cadbury Creme Egg, originally named Fry's Creme Egg, is a chocolate confection produced in the shape of an egg. It originated from the British chocolatier Fry's in 1963 before being renamed by Cadbury in 1971. The product consists of a thick chocolate shell containing a sweet white and yellow filling that resembles fondant. The filling mimics the albumen and yolk of a soft boiled egg.
|
Cadbury Creme Egg
|
Cadbury Creme Egg
|
The confectionery is produced by Cadbury in the United Kingdom, by The Hershey Company in the United States, and by Cadbury Adams in Canada.
|
Cadbury Creme Egg
|
History
|
While filled eggs were first manufactured by the Cadbury Brothers in 1923, the Creme Egg in its current form was introduced in 1963. Initially sold as "Fry's Creme Eggs" (incorporating the Fry's brand, after the British chocolatier), they were renamed "Cadbury's Creme Eggs" in 1971.
|
Cadbury Creme Egg
|
Composition
|
Cadbury Creme Eggs are manufactured as two chocolate half shells, each of which is filled with a white fondant made from sugar, glucose syrup, inverted sugar syrup, dried egg white, and flavouring. The fondant in each half is topped with a smaller amount of the same mixture coloured yellow with paprika extract, to mimic the yolk and white of a real egg. Both halves are then quickly joined and cooled, the shell bonding together in the process. The solid eggs are removed from the moulds and wrapped in foil.
During an interview in a 2007 episode of Late Night with Conan O'Brien, actor B. J. Novak drew attention to the fact that American market Cadbury Creme Eggs had decreased in size, despite the official Cadbury website stating otherwise. American Creme Eggs at the time weighed 34 g (1.2 oz) and contained 150 kcal. Before 2006, the eggs marketed by Hershey were identical to the UK version, weighing 39 g (1.4 oz) and containing 170 kcal.
In 2015, the British Cadbury company under the American Mondelēz International conglomerate announced that it had changed the formula of the Cadbury Creme Egg by replacing its Cadbury Dairy Milk chocolate with "standard cocoa mix chocolate". It had also reduced the packaging from 6 eggs to 5 with a less than proportionate decrease in price. This resulted in a large number of complaints from consumers. Analysts at IRI found that Cadbury lost more than $12 million in Creme Egg sales in the UK.
|
Cadbury Creme Egg
|
Manufacture and sales
|
Creme Eggs are produced by Cadbury in the United Kingdom, by The Hershey Company in the United States, and by Cadbury Adams in Canada. They are sold by Mondelez International in all markets except the US, where The Hershey Company has the local marketing rights. At the Bournville factory in Birmingham in the UK, they are manufactured at a rate of 1.5 million per day. The Creme Egg was also previously manufactured in New Zealand, but has been imported from the UK since 2009. A YouGov poll saw the Creme Egg ranked as the most famous confectionery in the UK.
As of 2011 the Creme Egg was the best-selling confectionery item between New Year's Day and Easter in the UK, with annual sales in excess of 200 million eggs and a brand value of approximately £55 million. However, in 2016 sales plummeted after the controversial decision to change the recipe from the original Cadbury Dairy Milk chocolate to a cheaper substitute, with reports of a loss of more than £6M in sales.
|
Cadbury Creme Egg
|
Manufacture and sales
|
Creme Eggs are available individually and in boxes, with the numbers of eggs per package varying per country. The foil wrapping of the eggs was traditionally pink, blue, purple, and yellow in the United Kingdom and Ireland, though green was removed and purple replaced blue early in the 21st century. In the United States, some green is incorporated into the design, which previously featured the product's mascot, the Creme Egg Chick. As of 2015, the packaging in Canada has been changed to a 34 g (1.2 oz), purple, red and yellow soft plastic shell.
|
Cadbury Creme Egg
|
Manufacture and sales
|
Creme Eggs are available annually between 1 January and Easter Sunday. In the UK in the 1980s, Cadbury made Creme Eggs available year-round but sales dropped and they returned to seasonal availability. In 2018, white chocolate versions of the Creme Eggs were made available. These eggs were not given a wrapper that clearly marked them as white chocolate eggs, and were mixed in with the normal Creme Eggs in the United Kingdom. Individuals who discovered an egg would win money via a ticket that had a code printed on it inside of the wrapper.
Creme Eggs were manufactured in New Zealand at the Cadbury factory in Dunedin from 1983 to 2009. Cadbury in New Zealand and Australia went through a restructuring process, with most Cadbury products previously produced in New Zealand being manufactured instead at Cadbury factories in Australia. Cadbury Australia produces some Creme Eggs products for the Australian market, most prominently the Mini Creme Egg. New Zealand's Dunedin plant later received a $69 million upgrade to specialise in boxed products such as Cadbury Roses, and Creme Eggs were no longer produced there. The result of the changes meant that Creme Eggs were instead imported from the United Kingdom. The change also saw the range of Creme Eggs available for sale decrease. The size also dropped from 40 g (1.4 oz) to 39 g (1.4 oz) in this time. The response from New Zealanders was not positive, with complaints including the filling not being as runny as the New Zealand version. As of 2023, Cadbury Australia continue to produce the Mini Egg variant.
|
Cadbury Creme Egg
|
Advertising
|
The Creme Egg has been marketed in the UK and Ireland with the question "How do you eat yours?" and in New Zealand with the slogan "Don't get caught with egg on your face". Australia and New Zealand have also used a variation of the UK question, using the slogan "How do you do it?"
In the US, Creme Eggs are advertised on television with a small white rabbit called the Cadbury Bunny (alluding to the Easter Bunny) which clucks like a chicken. Other animals dressed with bunny ears have also been used in the television ads, and in 2021, out of over 12,000 submissions in the Hershey Company's third annual tryouts, an Australian tree frog named Betty was named the newest Cadbury Bunny. Ads for caramel eggs use a larger gold-coloured rabbit which also clucks, and chocolate eggs use a large brown rabbit which clucks in a deep voice. The advertisements use the slogan "Nobunny knows Easter better than him", spoken by TV personality Mason Adams. The adverts have continued to air nearly unchanged into the high definition era and after Adams's death in 2005, though currently the ad image is slightly zoomed to fill the screen. The majority of rabbits used in the Cadbury commercials are Flemish Giants.
In the UK, around the year 2000, selected stores were provided standalone paperboard cutouts of something resembling a "love tester". The shopper would press a button in the centre and a "spinner" (a series of LED lights) would select at random a way of eating the Creme Egg, e.g. "with chips". These were withdrawn within a year. There are also the "Creme Egg Cars", which are, as the name suggests, egg-shaped vehicles painted to look like Creme Eggs. They are driven to various places to advertise the eggs but are based mainly at the Cadbury factory in Bournville. Five "Creme Egg Cars" were built from Bedford Rascal chassis. The headlights are taken from a Citroën 2CV.
For the 2008/2009 season, advertising in the UK, Ireland, Australia, New Zealand and Canada consisted of stop-motion adverts in the "Here Today, Goo Tomorrow" campaign, which comprised a Creme Egg stripping its wrapper off and then breaking its own shell, usually with household appliances and equipment, while making various 'goo' sounds, and a 'relieved' noise when finally able to break its shell. The Cadbury's Creme Egg website featured games where the player had to prevent the egg from finding a way to release its goo.
|
Cadbury Creme Egg
|
Advertising
|
A similar advertising campaign in 2010 featured animated Creme Eggs destroying themselves in large numbers, such as gathering together at a cinema before crashing into each other to release all of the eggs' goo, and another advert featured eggs being destroyed by mouse traps.
For Halloween 2011, 2012 and 2013, advertising in Canada and New Zealand featured the "Screme Egg".
|
Cadbury Creme Egg
|
Advertising
|
Campaigns/slogans
c. 1970s: "Shopkeeper" campaign in which a boy asks for 6000 Cadbury Creme Eggs; "Irresistibly" campaign showing characters prepared to do something unusual for a Creme Egg, similar to the "What would you do for a Klondike bar?" campaign
Early 1980s: "Can't Resist Them"
1985: The "How Do You Eat Yours?" campaign
Mid-1980s–present: "Nobunny Knows Easter Better than Cadbury"
1985–1996: "Don't get caught with egg on your face"
1990–1993: The first television campaign to use the "How Do You Eat Yours?" theme, featuring the zodiac signs
1994–1996: Spitting Image characters continued "How Do You Eat Yours?"
1997–1999: Matt Lucas, with the catchphrase "I've seen the future, and it's egg shaped!"
2000–2003: The "Pointing Finger"
2004: The "Roadshow" finger
2005: "Licky, Sticky, Happy"
2006–2007: "Eat It Your Way"
2008–2010: "Here Today, Goo Tomorrow"
2008–2009: "Unleash the Goo"
2009: "Release the Goo"
2010: "You’ll Miss Me When I’m Gone"
2011: "Goo Dares Wins"
2011: "Get Your Goo On!"
2012: "Gooing For Gold"
2012: "It's Goo Time"
2013–2016: "Have a fling with a Creme Egg"
2017–2019: "It's Hunting Season"
2020–2021: "Creme Egg Eatertainment"
2021: "Creme Egg Golden Goobilee"
2022–2023: "How do you NOT eat yours?"
Creme Egg Café
In 2016, Cadbury opened a pop-up café titled "Crème de la Creme Egg Café" in London. Tickets for the café sold out within an hour of being published online. The café on Greek Street, Soho, was open every Friday, Saturday and Sunday from 22 January to 6 March 2016.
|
Cadbury Creme Egg
|
Advertising
|
Creme Egg Camp
In 2018, Cadbury opened a pop-up camp. The camp in Last Days of Shoreditch, Old Street, was open every Thursday to Sunday from 19 January to 18 February 2018.
|
Cadbury Creme Egg
|
Varieties
|
Cadbury has introduced many variants of the original Creme Egg. Other products include:
Creme Egg Fondant in a narrow cardboard tube (limited edition)
Creme Egg ice cream with a fondant sauce in milk chocolate
Creme Egg Pots Of Joy – melted Cadbury milk chocolate with a fondant layer
Screme Egg Pots Of Joy – melted Cadbury milk chocolate but with a layer of Screme Egg fondant
Creme Egg Layers Of Joy – a layered sharing dessert with Cadbury milk chocolate, chocolate mousse, chocolate chip cookie and fondant dessert with a creamy topping.
|
Cadbury Creme Egg
|
Varieties
|
Jaffa Egg – manufactured in New Zealand; dark chocolate with orange filling
Marble Egg – manufactured in New Zealand; Dairy Milk and Dream chocolate swirled together
|
Fibrous tunic of eyeball
|
Fibrous tunic of eyeball
|
The sclera and cornea form the fibrous tunic of the bulb of the eye; the sclera is opaque, and constitutes the posterior five-sixths of the tunic; the cornea is transparent, and forms the anterior sixth.
The term "corneosclera" is also used to describe the sclera and cornea together.
|
Noncommutative projective geometry
|
Noncommutative projective geometry
|
In mathematics, noncommutative projective geometry is a noncommutative analog of projective geometry in the setting of noncommutative algebraic geometry.
|
Noncommutative projective geometry
|
Examples
|
The quantum plane, the most basic example, is the quotient ring of the free ring: k⟨x, y⟩/(yx − qxy). More generally, the quantum polynomial ring is the quotient ring: k⟨x_1, …, x_n⟩/(x_i x_j − q_{ij} x_j x_i).
|
Noncommutative projective geometry
|
Proj construction
|
By definition, the Proj of a graded ring R is the quotient category of the category of finitely generated graded modules over R by the subcategory of torsion modules. If R is a commutative Noetherian graded ring generated by degree-one elements, then the Proj of R in this sense is equivalent to the category of coherent sheaves on the usual Proj of R. Hence, the construction can be thought of as a generalization of the Proj construction for a commutative graded ring.
|
Federated learning
|
Federated learning
|
Federated learning (also known as collaborative learning) is a machine learning technique that trains an algorithm via multiple independent sessions, each using its own dataset. This approach stands in contrast to traditional centralized machine learning techniques where local datasets are merged into one training session, as well as to approaches that assume that local data samples are identically distributed.
|
Federated learning
|
Federated learning
|
Federated learning enables multiple actors to build a common, robust machine learning model without sharing data, thus addressing critical issues such as data privacy, data security, data access rights and access to heterogeneous data. Its applications span industries including defense, telecommunications, Internet of Things, and pharmaceuticals. A major open question is when/whether federated learning is preferable to pooled data learning. Another open question concerns the trustworthiness of the devices and the impact of malicious actors on the learned model.
|
Federated learning
|
Definition
|
Federated learning aims at training a machine learning algorithm, for instance deep neural networks, on multiple local datasets contained in local nodes without explicitly exchanging data samples. The general principle consists in training local models on local data samples and exchanging parameters (e.g. the weights and biases of a deep neural network) between these local nodes at some frequency to generate a global model shared by all nodes.
|
Federated learning
|
Definition
|
The main difference between federated learning and distributed learning lies in the assumptions made on the properties of the local datasets, as distributed learning originally aims at parallelizing computing power where federated learning originally aims at training on heterogeneous datasets. While distributed learning also aims at training a single model on multiple servers, a common underlying assumption is that the local datasets are independent and identically distributed (i.i.d.) and roughly have the same size. None of these hypotheses are made for federated learning; instead, the datasets are typically heterogeneous and their sizes may span several orders of magnitude. Moreover, the clients involved in federated learning may be unreliable as they are subject to more failures or drop out since they commonly rely on less powerful communication media (i.e. Wi-Fi) and battery-powered systems (i.e. smartphones and IoT devices) compared to distributed learning where nodes are typically datacenters that have powerful computational capabilities and are connected to one another with fast networks.
|
Federated learning
|
Definition
|
Mathematical formulation
The objective function for federated learning is as follows:
f(x_1, …, x_K) = (1/K) Σ_{i=1}^{K} f_i(x_i)
where K is the number of nodes, x_i are the weights of the model as viewed by node i, and f_i is node i's local objective function, which describes how the model weights x_i conform to node i's local dataset.
The goal of federated learning is to train a common model on all of the nodes' local datasets, in other words:
Optimizing the objective function f(x_1, …, x_K);
Achieving consensus on x_i, i.e. x_1, …, x_K converge to some common x at the end of the training process.
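As a sketch of this formulation in code (the local objective below is a hypothetical placeholder, a mean-squared-error loss for a linear model), the global objective is simply the average of the per-node objectives evaluated at each node's own weights:

import numpy as np

def local_objective(weights, local_data):
    # Placeholder f_i: mean squared error of a linear model on node i's data
    X, y = local_data
    return float(np.mean((X @ weights - y) ** 2))

def federated_objective(weights_per_node, data_per_node):
    # f(x_1, ..., x_K) = (1/K) * sum_i f_i(x_i)
    K = len(weights_per_node)
    return sum(local_objective(w, d) for w, d in zip(weights_per_node, data_per_node)) / K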
|
Federated learning
|
Definition
|
Centralized federated learning
In the centralized federated learning setting, a central server is used to orchestrate the different steps of the algorithms and coordinate all the participating nodes during the learning process. The server is responsible for the nodes selection at the beginning of the training process and for the aggregation of the received model updates. Since all the selected nodes have to send updates to a single entity, the server may become a bottleneck of the system.
|
Federated learning
|
Definition
|
Decentralized federated learning
In the decentralized federated learning setting, the nodes are able to coordinate themselves to obtain the global model. This setup prevents single point failures as the model updates are exchanged only between interconnected nodes without the orchestration of the central server. Nevertheless, the specific network topology may affect the performances of the learning process. See blockchain-based federated learning and the references therein.
|
Federated learning
|
Definition
|
Heterogeneous federated learning
An increasing number of application domains involve a large set of heterogeneous clients, e.g., mobile phones and IoT devices. Most of the existing federated learning strategies assume that local models share the same global model architecture. Recently, a new federated learning framework named HeteroFL was developed to address heterogeneous clients equipped with very different computation and communication capabilities. The HeteroFL technique can enable the training of heterogeneous local models with dynamically varying computation and non-IID data complexities while still producing a single accurate global inference model.
|
Federated learning
|
Main features
|
Iterative learning
To ensure good task performance of a final, central machine learning model, federated learning relies on an iterative process broken up into an atomic set of client-server interactions known as a federated learning round. Each round of this process consists in transmitting the current global model state to participating nodes, training local models on these local nodes to produce a set of potential model updates at each node, and then aggregating and processing these local updates into a single global update and applying it to the global model.
In the methodology below, a central server is used for aggregation, while local nodes perform local training depending on the central server's orders. However, other strategies lead to the same results without central servers, in a peer-to-peer approach, using gossip or consensus methodologies.
Assuming a federated round composed of one iteration of the learning process, the learning procedure can be summarized as follows:
Initialization: according to the server inputs, a machine learning model (e.g., linear regression, neural network, boosting) is chosen to be trained on local nodes and initialized. Then, nodes are activated and wait for the central server to give the calculation tasks.
|
Federated learning
|
Main features
|
Client selection: a fraction of local nodes are selected to start training on local data. The selected nodes acquire the current statistical model while the others wait for the next federated round.
Configuration: the central server orders selected nodes to undergo training of the model on their local data in a pre-specified fashion (e.g., for some mini-batch updates of gradient descent).
Reporting: each selected node sends its local model to the server for aggregation. The central server aggregates the received models and sends back the model updates to the nodes. It also handles failures for disconnected nodes or lost model updates. The next federated round is started returning to the client selection phase.
|
Federated learning
|
Main features
|
Termination: once a pre-defined termination criterion is met (e.g., a maximum number of iterations is reached or the model accuracy is greater than a threshold), the central server aggregates the updates and finalizes the global model.
The procedure considered before assumes synchronized model updates. Recent federated learning developments have introduced novel techniques to tackle asynchronicity during the training process, or training with dynamically varying models. Compared to synchronous approaches, where local models are exchanged once the computations have been performed for all layers of the neural network, asynchronous ones leverage the properties of neural networks to exchange model updates as soon as the computations of a certain layer are available. These techniques are also commonly referred to as split learning and they can be applied both at training and inference time, regardless of centralized or decentralized federated learning settings.
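A minimal sketch of one synchronous round as described above, using dataset-size-weighted averaging of client updates (a FedAvg-style aggregation); the linear-regression local training routine and the client data are hypothetical placeholders:

import numpy as np

def local_update(global_weights, X, y, lr=0.01, epochs=5):
    # Placeholder local training: a few gradient steps of linear regression on this node's data
    w = global_weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_round(global_weights, clients, fraction=0.5, rng=np.random.default_rng(0)):
    # Client selection: a random fraction of nodes participates in this round
    k = max(1, int(fraction * len(clients)))
    selected = rng.choice(len(clients), size=k, replace=False)
    # Configuration + reporting: each selected node trains locally and returns its weights
    updates = [local_update(global_weights, *clients[i]) for i in selected]
    sizes = np.array([len(clients[i][1]) for i in selected], dtype=float)
    # Aggregation: average the client updates, weighted by local dataset size
    return np.average(updates, axis=0, weights=sizes)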
|
Federated learning
|
Main features
|
Non-IID data
In most cases, the assumption of independent and identically distributed samples across local nodes does not hold for federated learning setups. Under this setting, the performances of the training process may vary significantly according to the unbalanced local data samples as well as the particular probability distribution of the training examples (i.e., features and labels) stored at the local nodes. To further investigate the effects of non-IID data, the following description considers the main categories presented in the preprint by Peter Kairouz et al. from 2019.
The description of non-IID data relies on the analysis of the joint probability between features and labels for each node.
|
Federated learning
|
Main features
|
This makes it possible to decouple each contribution according to the specific distribution available at the local nodes.
The main categories of non-IID data can be summarized as follows:
Covariate shift: local nodes may store examples that have different statistical distributions compared to other nodes. An example occurs in natural language processing datasets where people typically write the same digits/letters with different stroke widths or slants.
Prior probability shift: local nodes may store labels that have different statistical distributions compared to other nodes. This can happen if datasets are regional and/or demographically partitioned. For example, datasets containing images of animals vary significantly from country to country.
Concept drift (same label, different features): local nodes may share the same labels but some of them correspond to different features at different local nodes. For example, images that depict a particular object can vary according to the weather condition in which they were captured.
Concept shift (same features, different labels): local nodes may share the same features but some of them correspond to different labels at different local nodes. For example, in natural language processing, the sentiment analysis may yield different sentiments even if the same text is observed.
Unbalanced: the amount of data available at the local nodes may vary significantly in size. The loss in accuracy due to non-IID data can be bounded by using more sophisticated means of data normalization than batch normalization.
|
Federated learning
|
Algorithmic hyper-parameters
|
Network topology The way the statistical local outputs are pooled and the way the nodes communicate with each other can differ from the centralized model explained in the previous section. This leads to a variety of federated learning approaches: for instance, no central orchestrating server, or stochastic communication. In particular, orchestrator-less distributed networks are one important variation. In this case, there is no central server dispatching queries to local nodes and aggregating local models. Each local node sends its outputs to several randomly selected others, which aggregate their results locally. This constrains the number of transactions, thereby sometimes reducing training time and computing cost.
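A minimal sketch of the orchestrator-less idea, assuming each node pushes its parameter vector to a few random peers and then averages its own parameters with whatever it received (illustrative only; real gossip protocols add versioning, scheduling and convergence control):

```python
import random
import numpy as np

def gossip_round(node_params, fanout=3):
    """One gossip round: every node sends its parameter vector to `fanout`
    random peers, then each node averages its own vector with what it received."""
    n = len(node_params)
    inbox = [[] for _ in range(n)]
    for i, params in enumerate(node_params):
        peers = random.sample([k for k in range(n) if k != i], min(fanout, n - 1))
        for j in peers:
            inbox[j].append(params)
    # Each node aggregates locally; no central server is involved.
    return [np.mean([node_params[i]] + inbox[i], axis=0) for i in range(n)]
```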
|
Federated learning
|
Algorithmic hyper-parameters
|
Federated learning parameters Once the topology of the node network is chosen, one can control different parameters of the federated learning process (in addition to the machine learning model's own hyperparameters) to optimize learning:
Number of federated learning rounds: T
Total number of nodes used in the process: K
Fraction of nodes used at each iteration: C
Local batch size used at each learning iteration: B
Other model-dependent parameters can also be tuned, such as:
Number of iterations for local training before pooling: N
Local learning rate: η
These parameters have to be optimized depending on the constraints of the machine learning application (e.g., available computing power, available memory, bandwidth). For instance, stochastically choosing a limited fraction C of nodes for each iteration diminishes computing cost and may prevent overfitting, in the same way that stochastic gradient descent can reduce overfitting.
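A small sketch gathering these knobs in one place (the field names and default values are arbitrary illustrations; the symbols follow the list above):

```python
from dataclasses import dataclass

@dataclass
class FederatedConfig:
    rounds: int = 100            # T: number of federated learning rounds
    num_nodes: int = 1000        # K: total number of nodes in the process
    fraction: float = 0.1        # C: fraction of nodes selected at each round
    batch_size: int = 32         # B: local batch size at each learning iteration
    local_iterations: int = 1    # N: local training iterations before pooling
    learning_rate: float = 0.01  # η: local learning rate
```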
|
Federated learning
|
Technical limitations
|
Federated learning requires frequent communication between nodes during the learning process. Thus, it requires not only enough local computing power and memory, but also high-bandwidth connections to be able to exchange the parameters of the machine learning model. However, the technology also avoids the transfer of raw data, which can require significant resources before centralized machine learning can even start. Nevertheless, the devices typically employed in federated learning are communication-constrained: for example, IoT devices or smartphones are generally connected to Wi-Fi networks, so even though models are commonly less expensive to transmit than raw data, federated learning mechanisms may not be suitable in their general form. Federated learning raises several statistical challenges: Heterogeneity between the different local datasets: each node may have some bias with respect to the general population, and the size of the datasets may vary significantly; Temporal heterogeneity: each local dataset's distribution may vary with time; Interoperability of each node's dataset is a prerequisite; Each node's dataset may require regular curation; Hiding training data might allow attackers to inject backdoors into the global model; Lack of access to global training data makes it harder to identify unwanted biases entering the training, e.g. age, gender, sexual orientation; Partial or total loss of model updates due to node failures affecting the global model; Lack of annotations or labels on the client side.
|
Federated learning
|
Federated learning variations
|
In this section, the notation of the paper published by H. Brendan McMahan et al. in 2017 is followed. To describe the federated strategies, let us introduce some notation:
K: total number of clients;
k: index of clients;
n_k: number of data samples available during training for client k;
w_t^k: model's weight vector on client k at federated round t;
ℓ(w, b): loss function for weights w and batch b;
E: number of local updates.
Federated stochastic gradient descent (FedSGD) Deep learning training mainly relies on variants of stochastic gradient descent, where gradients are computed on a random subset of the total dataset and then used to make one step of the gradient descent.
|
Federated learning
|
Federated learning variations
|
Federated stochastic gradient descent is the direct transposition of this algorithm to the federated setting, but using a random fraction C of the nodes and all the data on each of these nodes. The gradients are averaged by the server proportionally to the number of training samples on each node, and used to make a gradient descent step.
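With the notation above, one FedSGD step can be written as a sample-size-weighted average of the client gradients (here S_t denotes the randomly selected fraction C of nodes at round t and η the learning rate; this formulation is an illustration consistent with the description above):

```latex
g_k = \nabla \ell(w_t, b_k), \qquad
w_{t+1} = w_t - \eta \sum_{k \in S_t} \frac{n_k}{n}\, g_k , \qquad
n = \sum_{k \in S_t} n_k
```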
|
Federated learning
|
Federated learning variations
|
Federated averaging Federated averaging (FedAvg) is a generalization of FedSGD, which allows local nodes to perform more than one batch update on local data and exchanges the updated weights rather than the gradients. The rationale behind this generalization is that in FedSGD, if all local nodes start from the same initialization, averaging the gradients is strictly equivalent to averaging the weights themselves. Further, averaging tuned weights coming from the same initialization does not necessarily hurt the resulting averaged model's performance.
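A minimal sketch of the client-side FedAvg step, assuming a hypothetical grad_fn(weights, batch) callable that returns the mini-batch gradient; the server then averages the returned weight vectors with the same n_k/n coefficients used by FedSGD:

```python
import numpy as np

def fedavg_local_update(weights, batches, grad_fn, lr=0.01, epochs=1):
    """Client-side FedAvg update: starting from the current global weights, run
    several passes of mini-batch gradient descent on local data and return the
    new weights (the weights, not the gradients, are sent back to the server)."""
    w = np.array(weights, dtype=float)
    for _ in range(epochs):              # E local passes over the local data
        for batch in batches:            # mini-batches of size B
            w -= lr * grad_fn(w, batch)  # one SGD step on the local batch
    return w
```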
|
Federated learning
|
Federated learning variations
|
Federated Learning with Dynamic Regularization (FedDyn) Federated learning methods suffer when the device datasets are heterogeneously distributed. A fundamental dilemma in the heterogeneously distributed device setting is that minimizing the device loss functions is not the same as minimizing the global loss objective. In 2021, Acar et al. introduced the FedDyn method as a solution to the heterogeneous dataset setting. FedDyn dynamically regularizes each device's loss function so that the modified device losses converge to the actual global loss. Since the local losses are aligned, FedDyn is robust to different heterogeneity levels and can safely perform full minimization on each device. Theoretically, FedDyn converges to the optimum (a stationary point for non-convex losses) while being agnostic to the heterogeneity levels. These claims are verified with extensive experiments on various datasets. Minimizing the number of communications is the gold standard for comparison in federated learning, but one may also want to decrease the local computation level per device in each round. FedDynOneGD is an extension of FedDyn with lower local compute requirements. FedDynOneGD computes only one gradient per device in each round and updates the model with a regularized version of the gradient, so the computation complexity is linear in the local dataset size. Moreover, the gradient computation can be parallelized within each device, in contrast to successive SGD steps. Theoretically, FedDynOneGD achieves the same convergence guarantees as FedDyn with less local computation.
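As a rough, simplified sketch of the idea (an illustration only, not the exact algorithm from Acar et al.), the dynamically regularized device objective can be thought of as the usual local loss plus a linear correction built from a locally maintained gradient state and a proximal term pulling the solution toward the current server model:

```python
import numpy as np

def feddyn_local_objective(theta, local_loss, grad_state, server_theta, alpha=0.01):
    """Simplified FedDyn-style device objective.
    theta        : candidate local weights
    local_loss   : callable returning the device's empirical loss at theta
    grad_state   : locally maintained gradient correction, updated across rounds
    server_theta : current global model received from the server
    alpha        : regularization strength
    """
    linear_term = -float(np.dot(grad_state, theta))
    proximal_term = 0.5 * alpha * float(np.sum((theta - server_theta) ** 2))
    return local_loss(theta) + linear_term + proximal_term
```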
|
Federated learning
|
Federated learning variations
|
Personalized Federated Learning by Pruning (Sub-FedAvg) Federated learning methods cannot achieve good global performance under non-IID settings, which motivates the participating clients to yield personalized models in federation. Recently, Vahidian et al. introduced Sub-FedAvg, opening a new personalized FL algorithm paradigm by proposing hybrid pruning (structured + unstructured pruning) with averaging on the intersection of clients' drawn subnetworks, which simultaneously handles communication efficiency, resource constraints and personalized model accuracy. Sub-FedAvg is the first work to show, through experiments, the existence of personalized winning tickets for clients in federated learning. It also proposes two algorithms for how to effectively draw the personalized subnetworks. Sub-FedAvg tries to extend the "Lottery Ticket Hypothesis", originally formulated for centrally trained neural networks, to neural networks trained with federated learning, leading to this open research problem: "Do winning tickets exist for clients' neural networks being trained in federated learning? If yes, how can the personalized subnetworks be effectively drawn for each client?"
Dynamic Aggregation - Inverse Distance Aggregation IDA (Inverse Distance Aggregation) is a novel adaptive weighting approach for clients based on meta-information which handles unbalanced and non-IID data. It uses the distance of the model parameters as a strategy to minimize the effect of outliers and improve the model's convergence rate.
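A minimal sketch of an inverse-distance weighting rule of this kind, assuming the server weights each client update inversely to its parameter-space distance from the mean update (illustrative only; the published IDA method defines its meta-information and normalization in more detail):

```python
import numpy as np

def inverse_distance_weights(client_params, eps=1e-8):
    """Clients whose updates sit far from the average (potential outliers)
    receive smaller aggregation weights."""
    stacked = np.stack(client_params)
    center = stacked.mean(axis=0)
    dists = np.linalg.norm(stacked - center, axis=1)
    inv = 1.0 / (dists + eps)
    return inv / inv.sum()   # normalized aggregation weights, one per client
```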
|
Federated learning
|
Federated learning variations
|
Hybrid Federated Dual Coordinate Ascent (HyFDCA) Few methods exist for hybrid federated learning, where clients hold only subsets of both features and samples, yet this scenario is very important in practical settings. Hybrid Federated Dual Coordinate Ascent (HyFDCA) is an algorithm proposed in 2022 that solves convex problems in the hybrid FL setting. This algorithm extends CoCoA, a primal-dual distributed optimization algorithm introduced by Jaggi et al. (2014) and Smith et al. (2017), to the case where both samples and features are partitioned across clients.
|
Federated learning
|
Federated learning variations
|
HyFDCA claims several improvements over existing algorithms:
HyFDCA is a provably convergent primal-dual algorithm for hybrid FL in at least the following settings:
Hybrid Federated Setting with Complete Client Participation;
Horizontal Federated Setting with Random Subsets of Available Clients: the authors show HyFDCA enjoys a convergence rate of O(1/t), which matches the convergence rate of FedAvg (see below);
Vertical Federated Setting with Incomplete Client Participation: the authors show HyFDCA enjoys a convergence rate of O(log(t)/t), whereas FedBCD exhibits a slower O(1/sqrt(t)) convergence rate and requires full client participation.
HyFDCA provides privacy steps that ensure the privacy of client data in the primal-dual setting. These principles apply to future efforts in developing primal-dual algorithms for FL.
|
Federated learning
|
Federated learning variations
|
HyFDCA empirically outperforms FedAvg in loss function value and validation accuracy across a multitude of problem settings and datasets. The authors also introduce a hyperparameter selection framework for FL with competing metrics, using ideas from multiobjective optimization. Only one other algorithm focuses on hybrid FL, HyFEM, proposed by Zhang et al. (2020). This algorithm uses a feature-matching formulation that balances clients building accurate local models and the server learning an accurate global model. This requires a matching regularizer constant that must be tuned based on user goals, and it results in disparate local and global models. Furthermore, the convergence results provided for HyFEM only prove convergence of the matching formulation, not of the original global problem. This work is substantially different from HyFDCA's approach, which uses data on local clients to build a global model that converges to the same solution as if the model were trained centrally. Furthermore, the local and global models are synchronized and do not require the adjustment of a matching parameter between local and global models. However, HyFEM is suitable for a vast array of architectures, including deep learning architectures, whereas HyFDCA is designed for convex problems like logistic regression and support vector machines.
|
Federated learning
|
Federated learning variations
|
Federated ViT using Dynamic Aggregation (FED-REV) Federated learning (FL) provides training of a global shared model using decentralized data sources on edge nodes while preserving data privacy. However, its performance in computer vision applications using convolutional neural networks (CNNs) lags considerably behind that of centralized training, due to limited communication resources and low processing capability at edge nodes. Alternatively, pure vision transformer models (ViT) outperform CNNs by almost four times in terms of computational efficiency and accuracy. FED-REV is an FL model with a reconstructive strategy that illustrates how attention-based structures (pure vision transformers) enhance FL accuracy over large and diverse data distributed over edge nodes. Its reconstruction strategy determines the influence of the dimensions of each stage of the vision transformer and then reduces their complexity, which lowers the computation cost of edge devices while preserving the accuracy achieved with the pure vision transformer.
|
Federated learning
|
Current research topics
|
Federated learning started to emerge as an important research topic in 2015 and 2016, with the first publications on federated averaging in telecommunication settings. Another important aspect of active research is the reduction of the communication burden during the federated learning process. In 2017 and 2018, publications emphasized the development of resource allocation strategies, especially to reduce communication requirements between nodes with gossip algorithms, as well as the characterization of robustness to differential privacy attacks. Other research activities focus on the reduction of bandwidth during training through sparsification and quantization methods, where the machine learning models are sparsified and/or compressed before they are shared with other nodes. Developing ultra-light DNN architectures is essential for device- and edge-learning, and recent work recognises both the energy-efficiency requirements for future federated learning and the need to compress deep learning, especially during learning. Recent research advancements are starting to consider real-world propagating channels, as previous implementations assumed ideal channels. Another active direction of research is the development of federated learning for training heterogeneous local models with varying computation complexities and producing a single powerful global inference model. A learning framework named Assisted learning was recently developed to improve each agent's learning capabilities without transmitting private data, models, or even learning objectives. Compared with federated learning, which often requires a central controller to orchestrate the learning and optimization, Assisted learning aims to provide protocols for the agents to optimize and learn among themselves without a global model.
|
Federated learning
|
Use cases
|
Federated learning typically applies when individual actors need to train models on larger datasets than their own, but cannot afford to share the data itself with others (e.g., for legal, strategic or economic reasons). The technology nevertheless requires good connections between local servers and a minimum of computational power for each node.
|
Federated learning
|
Use cases
|
Transportation: self-driving cars Self-driving cars encapsulate many machine learning technologies to function: computer vision for analyzing obstacles, machine learning for adapting their pace to the environment (e.g., bumpiness of the road). Due to the potentially high number of self-driving cars and the need for them to quickly respond to real-world situations, the traditional cloud approach may generate safety risks. Federated learning can represent a solution for limiting the volume of data transfer and accelerating learning processes.
|
Federated learning
|
Use cases
|
Industry 4.0: smart manufacturing In Industry 4.0, there is widespread adoption of machine learning techniques to improve the efficiency and effectiveness of industrial processes while guaranteeing a high level of safety. Nevertheless, the privacy of sensitive data of industries and manufacturing companies is of paramount importance. Federated learning algorithms can be applied to these problems as they do not disclose any sensitive data. In addition, FL has also been implemented for PM2.5 prediction to support smart-city sensing applications.
|
Federated learning
|
Use cases
|
Medicine: digital health Federated learning seeks to address the problem of data governance and privacy by training algorithms collaboratively without exchanging the data itself. Today's standard approach of centralizing data from multiple centers comes at the cost of critical concerns regarding patient privacy and data protection. To solve this problem, the ability to train machine learning models at scale across multiple medical institutions without moving the data is a critical technology. Nature Digital Medicine published the paper "The Future of Digital Health with Federated Learning" in September 2020, in which the authors explore how federated learning may provide a solution for the future of digital health and highlight the challenges and considerations that need to be addressed. Recently, a collaboration of 20 different institutions around the world validated the utility of training AI models using federated learning. In a paper published in Nature Medicine, "Federated learning for predicting clinical outcomes in patients with COVID-19", they showcased the accuracy and generalizability of a federated AI model for the prediction of oxygen needs in patients with COVID-19 infections. Furthermore, in the published paper "A Systematic Review of Federated Learning in the Healthcare Area: From the Perspective of Data Properties and Applications", the authors attempt to provide a set of FL challenges from a medical data-centric perspective.
|
Federated learning
|
Use cases
|
Robotics Robotics includes a wide range of applications of machine learning methods, from perception and decision-making to control. As robotic technologies have been increasingly deployed from simple and repetitive tasks (e.g., repetitive manipulation) to complex and unpredictable tasks (e.g., autonomous navigation), the need for machine learning grows. Federated learning provides a solution to improve over conventional machine learning training methods. In one paper, mobile robots learned navigation over diverse environments using an FL-based method, helping generalization. In another, federated learning was applied to improve multi-robot navigation under limited communication bandwidth, a current challenge in real-world learning-based robotic tasks. In a third, federated learning was used to learn vision-based navigation, helping better sim-to-real transfer.
|
LibGDX
|
LibGDX
|
libGDX is a free and open-source game-development application framework written in the Java programming language, with some C and C++ components for performance-dependent code. It allows for the development of desktop and mobile games using the same code base. It is cross-platform, supporting Windows, Linux, Mac OS X, Android, iOS, BlackBerry and web browsers with WebGL support.
|
LibGDX
|
History
|
In the middle of 2009 Mario Zechner, the creator of libGDX, wanted to write Android games and started developing a framework called AFX (Android Effects) for this. When he found that deploying the changes from desktop to an Android device was cumbersome, he modified AFX to work on the desktop as well, making it easier to test programs. This was the first step toward the game framework later known as libGDX.

In March 2010 Zechner decided to open-source AFX, hosting it on Google Code under the GNU Lesser General Public License (LGPL). However, at the time he stated that "It's not the intention of the framework to be used for creating desktop games anyway", intending the framework to primarily target Android. In April, it got its first contributor. When Zechner created a Box2D JNI wrapper, this attracted more users and contributors because physics games were popular at the time, and many of the issues with Android were resolved because of this. Because many users suggested switching to a different license, the LGPL not being suitable for Android, libGDX changed its license to the Apache License 2.0 in July 2010, making it possible to use the framework in closed-source commercial games. The same month its phpBB forum was launched.

Due to issues with Java Sound, the desktop audio implementation switched to OpenAL in January 2011. Development of a small image manipulation library called Gdx2D was finished as well, which depends on the open source STB library. The rest of 2011 was spent adding a UI library and working on the basics of a 3D API.

At the start of 2012 Zechner created a small helper library called gdx-jnigen for easing the development of JNI bindings. This made it possible for the gdx-audio and gdx-freetype extensions to be developed over the following months. Inspired by Google's PlayN cross-platform game development framework, which used Google Web Toolkit (GWT) to compile Java to JavaScript code, Zechner wrote an HTML/JavaScript backend over the course of several weeks, which allowed libGDX applications to run in any browser with WebGL support. After Google abandoned PlayN, it was maintained by Michael Bayne, who added iOS support to it. libGDX used parts of this work for its own MonoTouch-based backend.

In August 2012 the project switched its version control system from Subversion to Git, moving from Google Code to GitHub. However, the issue tracker and wiki remained on Google Code for another year. The main build system was also changed to Maven, making it easier for developers with different IDEs to work together. Because of issues with the MonoTouch iOS backend, Niklas Thernig wrote a RoboVM backend for libGDX in March 2013, which was integrated into the project in September. From March to May 2013 a new 3D API was developed as well and integrated into the library.

In June 2013 the project's website was redone, now featuring a gallery where users can submit their games created with libGDX. As of January 2016 more than 3000 games had been submitted. After the source code migration to GitHub the year before, in September 2013 the issue tracker and wiki were also moved there from Google Code. The same month the build and dependency management system was switched from Maven to Gradle.

After a cleanup phase in the first months of 2014, libGDX version 1.0 was released on 20 April, more than four years after the start of the project. In 2014 libGDX was one of the annual Duke's Choice Award winners, being chosen for its focus on platform independence.
|
LibGDX
|
History
|
From a diverse team of open source enthusiasts comes libGDX, a cross-platform game development framework that allows programmers to write, test, and debug Java games on a desktop PC running Windows, Linux, or Mac OS X and deploy that same code to Android, iOS and WebGL-enabled browsers—something not widely available right now. The goal of libGDX, says creator Mario Zechner, "is to fulfill the 'write once, run anywhere' promise of the Java platform specifically for game development."

In April 2016 it was announced that libGDX would switch to Intel's Multi-OS Engine on the iOS backend after the discontinuation of RoboVM. With the release of libGDX 1.9.3 on 16 May 2016, Multi-OS Engine is provided as an alternative, while by default the library uses its own fork of the open source version of RoboVM.
|