A sizable body of research has emerged over the past 30 years examining the role that justification for aggression may play in the enjoyment of drama. For the most part, this research has suggested that audiences enjoy seeing good characters rewarded and bad characters punished, whereas images of good characters receiving undue punishment or bad characters receiving benefits are met largely with repugnance (Zillmann, 2000). Research has also shown that these factors may play a role in attitudes toward and empathy for the characters in question. For example, Zillmann and Bryant (1975) found that even disliked characters are met with a certain degree of empathy if they receive punishment that exceeds some sort of predetermined range of acceptable retribution.
Although these studies contribute greatly to the understanding of audience enjoyment and response to violence, two issues are apparent. First, notions regarding the circumstances that distinguish "just" and "unjust" actions have always been operationally defined in terms of varying degrees of retribution severity. The possibility of the same act being judged just or unjust depending on dispositional and motivational characteristics surrounding the exchange has gone largely unexplored. Instead, over-retributive and under-retributive sanctions have been seen invariably as unjust. Second, these studies have paid little attention to the likelihood that attitudes toward and empathy for victims may be contingent upon attitudes toward the perpetrator and perceptions of the motive for aggression. To that end, the current study attempts to explore the manner in which incongruity between the dispositional features linked to perpetrators and motives for their violence can influence both subsequent enjoyment of narratives and attitudes toward victims and perpetrators.
Perceptions of Justified Violence
Kohlberg (1958) posited that at basic levels of moral deliberation, perceptions of justice are contingent upon evaluations of whether an act of aggressive reprisal is strictly equal to the provoking act. For young children, these simple determinations can be superseded by the evaluation of some authority figure (e.g., "it's wrong because my mom said so"). In later stages of cognitive development, the essential feature of justification becomes the notion of strict equivalence. An act of violent reprisal is just if its inherent qualities are equivalent to the violence that preceded it, and unjust if violence in the reprisal falls below or exceeds the initiating violent act.
However, in more complicated judgment circumstances, Kohlberg maintained that justice appraisals are moderated by consideration of the actors involved and an appraisal of the context surrounding the exchange. Moral appraisals are therefore ones in which "strict equality and literal reciprocity are modified by reference to shared norms or to motives that indicate a good or bad person or deservingness" (Colby & Kohlberg, 1987, p. 27). Among adults, moral judgment is based on the evaluation of whether or not an act falls within a range of behaviors considered equitable given the provocation. This appraisal is then moderated by the observer's disposition toward the participants involved and perceptions of their motives for the provoking and retaliatory acts.
Justified Violence and Latitude of Moral Sanctions. Zillmann's (2000) moral-sanction theory of delight and repugnance distinguished the more deliberate process of forming "moral judgments" from less contemplative "moral sanctions." Whereas moral judgment can be characterized by comparatively formal thought processes which may prescribe specific rewards and punishments for particular acts, moral sanctions are thought of more simply as a "readiness to accept, in moral terms," the observed outcomes of events (Zillmann, 2000, p. 59). In this sense, moral sanctions include any and all behaviors one is ready to accept. Thus, instead of a clear-cut judgment of an act's morality based on its deviating from specific retribution called for by an exacting moral code, the comparatively impulsive "readiness to accept" nature of moral-sanction appraisals allows for broader latitude in determining which acts are deemed morally acceptable or justified. Understood this way, the perception of justified violence can be conceptually defined as an appraisal of violent retribution based on its relationship to the normatively determined range of retribution acts an individual deems morally acceptable---or one's "latitude of moral sanction" for violent reprisal (Zillmann, 2000, p. 59). Raney and Bryant (2002) trace this approach, which understands justice in terms of a range of acts deemed morally acceptable, to work on Balance Theory (Heider, 1958).
Cognitive Consistency, Justice, and Latitude-of-Moral-Sanction. Heider (1958) argued that humans prefer situations in which relative harmony exists between their feelings toward an object (i.e., person or event) and circumstances surrounding the object--a condition called "cognitive consistency" (p. 201). Disharmony is an unpleasant state that motivates people to act in ways that restore cognitive consistency by producing circumstances consistent with their disposition, or a disposition consistent with the situation. For example, people hearing a message with which they strongly disagree from somebody whom they respect may attribute more credibility to the message in order to create harmony between their perceptions of the message source and the events under consideration. Similarly, they may change their attitude toward the speaker to consider him/her less credible.
Heider explicated justice in terms of the cognitive consistency between one's thoughts about observed events and people involved in the events. Justice is perceived when there is a match between the outcome of events observed and the latitude of events considered appropriate by the observer given the person and the circumstances involved. In terms that foreshadow Zillmann's (2000) discussion of disposition's role in forming moral sanctions, Heider (1958) argued that, on the whole, harmony and perceived justice occur when observers see reward, happiness, and fortune fall upon those who are judged as "good," and correspondingly when ill fortune, punishment, and discord fall upon those who are judged as "evil." If any of these outcomes were observed, they would fall within the observer's latitude of appropriate outcomes and be experienced as harmonious states. Such harmonious states are seen as instances of justice, and disharmonious states are considered unjustified.
Raney and Bryant (2002) applied logic from work on cognitive consistency and latitudes of moral sanction to their theoretical model of moral judgment in crime-drama enjoyment. They asserted that the evaluation of crime drama is based on observation of a "justice sequence" (p. 404) comprised of some act of provocation and subsequent retribution. Each person views a justice sequence with an idea of appropriate retribution defined by the range of behaviors falling within their "latitude of moral sanction" (p. 411). This range is based on consideration of audience inputs (individual differences in readiness to accept) and message inputs (content related to provocation and reprisal). According to the model, the degree to which message inputs are consistent with audience inputs will affect appraisal of reprisal as just or unjust. When the level of violence contained in the act of reprisal falls within the latitude of moral sanction that results from the combination of message and audience inputs, viewers will appraise the reprisal as justified and enjoy the observed violence.
Raney and Bryant's (2002) discussion of audience and message inputs that influence viewer perceptions of the justice sequence pointed to factors that moderate perceptions of justified violence, a position consistent with the definition of justified violence adopted for use here. Raney (2004) further argued that the formation of dispositions often results from heuristic judgments of characters as good or bad before moral appraisal of their behavior occurs. In other words, viewers often evaluate the appropriateness of behavior using frames based on preexisting dispositions toward the perpetrator or victim.
In submitting that perception of justified violence is best understood as the range in levels of violence one is ready to accept as moral, the current study maintains that one's readiness-to-accept is moderated by critical audience and message factors: the audience member's disposition toward perpetrator and victim, and the motivations for retribution made implicit by the message. Although reasoning from Zillmann's (2000) moral-sanction theory, along with Raney and Bryant's (2002) model of moral judgment, suggests that perpetrator motive, together with disposition toward perpetrator and victim, should influence audience reactions to violent reprisal, neither theory nor logic provides a clear prediction about their combined influence. Disposition-based research would suggest that even beyond dispositional concerns, viewers will only enjoy witnessing violent acts if the extent of the violence meets some level considered appropriate given the events that surround the act (Zillmann & Bryant, 1975; Zillmann & Cantor, 1977). Audiences for the most part enjoy seeing fair and due punishment to those who deserve it. Likewise, audiences are likely to express liking or disliking of perceived violent perpetrators and victims based on the degree to which they perceive the act as just.
The logic underlying thought in this area is supported in research conducted by Zillmann and Bryant (1975). In this study, children at different stages of cognitive...
“Brad DeLong”:http://delong.typepad.com/sdj/2008/01/do-the-cossacks.html disagrees with “Timothy Burke”:http://weblogs.swarthmore.edu/burke/?p=486 on the practical consequences of the inscrutability of motivations among key figures in the Bush administration. Not only do I think Brad is right on this, but his arguments (with the addition of a healthy dollop of economic sociology) help elucidate what’s happening in this “post”:http://balkin.blogspot.com/2008/01/real-cia-tapes-scandal-that-everyone-is.html by Marty Lederman.
First Tim:
One of the consequences of the perspective I’m taking is that I’m perpetually skeptical about whether we ever ought to talk about individual intentions in an atomistic way, e.g., where we break down what an individual meant to do and assign proportionate value to different components of intention, and equally skeptical about whether we can ever atomistically describe the relationship between intention and result. That’s just with one individual, but it’s even more so once we talk about how a decision actually is made by small groups of advisors and is then transmitted to larger institutional networks. … one of the interesting bits of information to come out of the Iraq War so far has to do with why US intelligence was so off about Hussein’s possession of weapons of mass destruction. People who want to argue that intelligence was purely concocted for political purposes are too simplistic, people who want to reduce it all to the will of Dick Cheney or a few neocons are too simplistic, people who want to make it a sincere mistake are too simplistic. …
very indirectly, almost “culturally” or ideologically, actors inside the Bush Administration made it known that they, even more than their predecessors, would not welcome intelligence which blatantly contradicted beliefs or assumptions that they were inclined to make. No one ever sends an order down that says, “Here’s the casus belli we need, please write it up! kthnx.” This kind of pressure gets exerted when someone like Cheney says in a conversation that includes key advisors and heads of executive departments that intelligence has been “too timid” in the past, or is too dominated by experts who are unwilling to act. …
Cheney (or various neocons) could believe that statement as a reaction to some factual understanding of the history of US intelligence, could say it as a reflection of a much more intuitive kind of personal, emotional orientation towards leadership (think John Bolton here), and so on–and could not entirely know themselves why they say it, or how that statement is likely to be received or interpreted. … the movement of information through institutions is rather like a game of telephone, that there is a kind of drift and transformation which has less to do with intentionality and more to do with processes of translation, reparsing, repackaging and repurposing as information travels from office to office, up and down hierarchies. So at one level of action and knowledge, you can get a very granular, nuanced understanding of the extremely limited value of a source like “Curveball”, but a process rather like genetic drift starts to mutate that knowledge into something else by the time it reaches the layer where ultimate decisions are made. … I do think traditional political and diplomatic history sometimes mirrors a flaw of a lot of social science. Some social scientists confuse explanatory models for empirical reality; some political historians confuse explanatory narratives about decision-making for the messy processes that shape intentions and translate intentions into action and event.
Now Brad:
Tim Burke is both right and wrong. He is right: courts are the natural habitats of deceitful courtiers who tell the princes exactly what the princes want to hear, the people on the spot who control implementation matter in ways that the people around polished walnut tables in rooms with green silk walls do not, and the movement of information through bureaucracies does resemble a game of telephone with distortions amplified at every link. But. Those with sufficient virtu to become princes in this modern age are well aware of all these deficiencies of bureaucracies and courts. …
When Lloyd Bentsen became Secretary of the U.S. Treasury, he scattered his–loyal–senate staff throughout the Treasury Department at all levels, and used them as a second, separate, parallel web of communication in order to gauge the distortions that were being introduced into the paper that crossed his desk by the game of bureaucratic telephone. When Kangxi became Emperor of China, he scattered the–loyal–hereditary bondsmen of his Manchu clan throughout the imperial Chinese bureaucracy at all levels, with instructions to write to him regularly through secret channels to tell him what was really going on, as a second, separate, parallel web of communication … But by the time anyone (a) possesses sufficient virtu, (b) is forty-five or fifty-five or sixty-five, and (c) has seen the world, there is no excuse for not understanding that as a czar your cossacks respond to the incentives you set them, that you can change those incentives, and that you are responsible for the behavior that your incentives elicit. …
Richard Cheney and Donald Rumsfeld knew damned well–unless they are much farther into their dotage than I believe–that their confidence in Saddam Hussein’s WMD program was based not on intelligence but on their judgment that they would have active WMD programs if they were Saddam Hussein. The frictions and distortions of the bureaucracy and the court exist. They are, however, counterbalanced by the intelligence, the sophistication, and the energy of the principals at the top. If the czar wishes, the cossacks _do_ work for him.
As I said, I’m with Brad on this, but I want to go one step further. The very fact of ambiguous motivations and uncertainty about what the people at the top really want can be a _crucial source of strategic power_ for those people. By combining ambiguous information about the motivations of those in power with implicit incentives to please them, powerful people can strategically shape the things that underlings do and do not do, without ever specifically demanding that they do anything. This is a point that John Padgett and Christopher Ansell develop in their “classic article”:http://links.jstor.org/sici?sici=0002-9602(199305)98%3A6%3C1259%3ARAATRO%3E2.0.CO%3B2-J on ‘robust action’ in the court of Cosimo de Medici. As Padgett and Ansell describe it, Cosimo de Medici, far from being decisive and goal-oriented as Machiavelli’s ideal princes were supposed to be, was an indecipherable sphinx who preferred to lurk in the background. He never assumed public office and hardly ever gave a public speech. His actions were reactive rather than proactive, responding to a flow of requests in a way that ‘just happened’ to serve his multiple interests. He was able to engage in this “robust action” precisely because single actions can be judged from multiple perspectives simultaneously, and can be moves in many games. Because requests had to flow to him, others, not Cosimo himself, struggled to infer and thus to serve Cosimo’s inscrutable interests. “Control was diffused throughout the structure of others’ self-fashionings” (Padgett and Ansell, p. 1264).
This concept of robust action is one in which the actors at the center of the network never want to disclose their absolute interests and desires, because this would limit their options. Instead, they prefer to make _others_ disclose their desires. Crucial for maintaining discretion is not to pursue specific goals, for:
“in nasty strategic games like Florence or chess, positional play is the maneuvering of opponents into the forced clarification of their (but not your) tactical lines of action. Locked-in commitment to lines of action, and thence to goals, is the product not of individual choice but at least as much of others’ successful “ecological control” over you.” (Padgett and Ansell, p. 1264)
But in modern contexts, robust action helps the powerful in other ways. It allows the powerful to evade responsibility for their actions. If you never issue a direct order, instead allowing inferiors to infer your desires from what you don’t explicitly forbid, you make it _extremely difficult_ for others to hold you accountable for what your inferiors end up doing. This is most likely what happened in Abu Ghraib and elsewhere. There likely never _were_ any formal orders to torture and humiliate inmates – instead, there was a diffuse understanding, encouraged by those at the top of the hierarchy, that torture and humiliation were appropriate and acceptable tools of interrogation. The same thing seems to have happened with the destruction of the CIA waterboarding tapes, as per Marty Lederman:
when the Commission closed up shop, mid-level CIA lawyers Steven Hermes and Robert Eatinger told Jose Rodriguez that the destruction would then be lawful. (This advice was probably equivocal and might well have been mistaken. In light of the potential breadth of the broadly worded federal obstruction statutes, and the warnings that had been repeatedly given to the CIA not to destroy the tapes, it is unlikely that good lawyers could have advised Rodriguez that the coast was clear with any degree of confidence.) Rodriguez knew that if he asked anyone else, he might get conflicting legal advice, or even a directive not to destroy. And if Rodriguez didn’t ask for a direct order one way or the other, no one was eager to give him one. …
CIA General Counsel John Rizzo “advised” against the destruction. And then-CIA Director Porter Goss “recommended” against it. These are the verbs of officials who hope their advice goes unheeded: Notably, no one actually _instructed_ Rodriguez not to destroy the tapes, or that it would be illegal to do so. Rodriguez therefore interpreted the repeated failure of his superiors to require retention of the tapes as an implicit green light to destroy—and he may well have been right about that, as a practical (if not a legal) matter. … Personally, I think it would be unfortunate to point the finger exclusively at Rodriguez and others at his level and below. The obvious wrongdoers were those in the CIA and White House who implicitly or expressly condoned the destruction by repeatedly failing to say “no.” But it is, of course, much more difficult to establish criminal culpability for such willful blindness. As the many at the CIA feared all along, the political folks who pushed for the program have left the career officials holding the bag
Marty may be right on grounds of fairness to say that Rodriguez shouldn’t be held entirely accountable, but there is an incentive problem for the future in letting mid-level people like him go. If underlings have well-grounded reason to fear that when they are prosecuted for their actions, they will be hung out to dry by their superiors in the absence of any explicit orders, then they are likely to demand explicit orders so as to protect themselves. And often, those superiors aren’t likely to want to give those orders explicitly, for all the obvious reasons.
More generally, the problem of ambiguity reflects, as Brad says, to a very considerable degree the desires of those at the top. Moreover, it may be a crucial source of power for them. It allows them to blur lines of accountability and responsibility, by making underlings guess what they want, while never having the comfort of explicit instructions. Hence decisions by underlings to torture, to destroy tapes, or to skew intelligence in one way rather than another, decisions that are based on _well grounded inferences_ about the preferences of those above, but which don’t allow others later to reconstruct clear chains of causation and responsibility leading from those at the top to those who implement their wishes. That motivations may not be unambiguously discernible from context doesn’t mean that those motivations don’t exist, or that beliefs about those motivations aren’t important. Moreover, precisely that ambiguity over motivations allows for all sorts of strategic actions that wouldn’t be possible otherwise.
I think what Tim is doing is to over-emphasize the epistemological consequences of this ambiguity (we can never be _entirely sure_ that Porter Goss wanted those tapes destroyed, and almost certainly we can never prosecute him for it), and under-emphasize the strategic consequences (that we can never prove what Porter Goss wanted allows Porter Goss to get away with a lot of stuff that he couldn’t get away with otherwise). There may be contexts in which the epistemological consequences are more important than the strategic consequences, but I strongly suspect that the inner workings of the 2000-2008 Bush administration aren’t one of those contexts.
The potential impact of Brexit on the creative sector is explored in a recent report from the Digital, Culture, Media and Sport Select Committee, The potential impact of Brexit on the creative industries, tourism and the digital single market.
The report highlights concerns and actions including: maintaining access to talent, UK production tax credits, and clarity around regulatory equivalence with the EU.
Key recommendations included:
- Workforce – need for reliable data on the workforce and its skills gaps, should access to talent not be maintained post-Brexit
- Immigration – a call for clarity on proposed immigration rules allowing creative sector businesses time to prepare for any new Brexit environment
- Funding – ease long-term funding concerns with a government mapping exercise into current and future direct European funding streams for creative and cultural organisations
- Copyright protection – detail on the government’s intentions for copyright protection and enforcement with our European neighbours
- Country of Origin rules – address the uncertainty around the Country of Origin rules framework and contingencies if the existing framework ends after Brexit
The committee took evidence in five sessions (including one in Belfast on the specific issues facing Northern Ireland), received 150 written submissions, and made fact-finding trips to Berlin and Barcelona.
Chair of the DCMS Committee, Damian Collins MP, said:
“The UK is a global leader in the creative and digital technology sectors, including telecommunications, and our tourism industry is also one of the largest and most innovative in the world. Our creativity, favourable production tax credits, and the access to talent, all underpin our success in these areas. The challenge of Brexit is to maintain these advantages in a new regulatory environment, and to remove uncertainty for businesses and organisations, in particular those that work from the UK, with employees, suppliers and customers across Europe.
“An honest assessment of likely outcomes of the Brexit negotiations—whether regarding regulatory equivalence or divergence, the workforce or the effects of losing direct EU funding—is needed from the Government.
“London—Europe’s most visited city—is likely to be sufficiently well-established to withstand challenges from other potential European creative hubs, although other major European cities—including Berlin, Paris, Amsterdam, Barcelona and Dublin—do have ambitions of their own, which should not be under-estimated.
“It is essential that we get clarity of proposed revised immigration rules and reliable data about possible skills gaps.
“British institutions are already missing out on funding. The Government should publish a map of all EU funding streams that support tourism and creative projects.
“Brexit presents challenges for all these industries because of the uncertain nature of the future regulatory environment. The Government should set out as a matter of urgency those areas where it believes that Brexit offers an opportunity for beneficial regulatory reforms, and how it intends to capitalise on any such opportunities. It should also set out where it believes that maintaining equivalence would be the most favourable outcome, for the industries and consumers alike.”
The summary and full report, including a downloadable PDF version, are available from the UK Parliament website.
Tolo, S (2016) Uncertainty Quantification and Risk Assessment Methods for Complex Systems subject to Natural Hazards. PhD thesis, University of Liverpool.
Abstract
The interaction between natural events and technological installations involves complex mechanisms with the potential to affect multiple critical systems simultaneously, nullifying the redundancy measures common to industrial safety systems and endangering the integrity of facilities. The concerns related to this kind of event are far from being restricted to a merely economic or industrial nature. On the contrary, due to the sensitivity of most processes performed in industrial plants and the negative consequences of eventual releases of hazardous materials, the impact of simultaneous failures also embraces the environment and the population surrounding the installations. The risk is further widened by the trend of climate extremes: both observations over the past century and projections for the next decades suggest an increase in the severity of extreme weather events and in their frequency, on both local and global scales. The rise of sea water levels, together with the exacerbation of extreme winds and precipitation, enlarges the geographic area at risk and raises the likelihood of accidents in regions historically susceptible to natural hazards. The prevention of technological accidents triggered by natural hazards lies unavoidably with the development of efficient theoretical and computational tools for the vulnerability assessment of industrial installations and the identification of effective strategies to tackle the growing risks to which they are subject. In spite of the increasing trend of the risk and the high-impact consequences, the current scientific literature still lacks robust means to tackle these issues effectively. The research presented in this dissertation addresses the critical need for novel theoretical and computational methods tailored for the risk assessment of complex systems threatened by extreme natural events.
The specific requirements associated with modelling the interaction between external hazards and engineering systems have been determined, resulting in the identification of two main bottlenecks. On the one hand, this kind of analysis has to deal with the difficulty of accurately representing the complexity of technological systems and the mutual influence among their subsystems. On the other, the high degree of uncertainty affecting climate variables (due to their inherently aleatory nature and the restricted information generally available) strongly limits the accuracy and credibility of the results on which risk-informed decisions must be made. In this work, well-known traditional approaches (such as Bayesian networks and Monte Carlo methods) as well as cutting-edge methods from different sectors of the scientific literature have been adopted and integrated in order to obtain a novel theoretical strategy and computational tool able to overcome the limitations of the current state of the art. The result of the research is a complete tool for risk assessment and decision-making support, based on the use of probabilistic graphical models and able to fully represent a wide spectrum of variable types and their uncertainty, and to provide the implementation of flexible computational models as well as their computation and uncertainty quantification.
Hiring within the private sector has fallen to its lowest level in three years in the face of uncertainty over the Brexit vote. Financial centres in London and Edinburgh were worst hit by this hiring freeze, according to the Employment Outlook survey from Manpower, one of the world’s largest recruitment firms.
The company surveyed 2,000 employers throughout the UK on their hiring intentions for the second quarter of 2017. Most sectors said they would keep staff numbers about the same, with only the construction industry reporting an improvement in their hiring plans.
Until now, hiring has remained rather steady despite predictions from economists to the contrary. However, the job market appears to have softened following the Brexit vote and continued uncertainty about how the negotiation process will play out. The latest news that Article 50 won’t be triggered until the end of March has only fuelled the uncertainty for employers. There has also been a downturn in graduates from the EU seeking work in the UK, which could point to a potential skills shortage if freedom of movement comes to an end.
Mark Cahill, managing director of ManpowerGroup UK said: “With huge uncertainty surrounding sectors like banking and financial services — critical to the economy in London and Edinburgh — it’s no surprise that confidence in these regions is suffering.”
Uncertainty over the outcome of the Brexit negotiations has fuelled this latest slump, although the UK employment rate has continued to grow to the highest rate since records began in 1971. However, the number of people in work between October–December 2016 increased by 37,000, which is much lower than the average increase of 137,000 per quarter between 2012 and 2015.
Jonas Prising, chairman & chief executive of ManpowerGroup, said: “Having seen the surprising election results in the UK and US in 2016, European businesses know to expect the unexpected.”
Alberta legislation, the Matrimonial Property Act, governs the division of the assets and debts of a marriage after separation. It creates a legal regime of common property that in many ways ignores legal title in the context of the marriage, on the basis that marriage is also an important financial partnership. The legislation describes the default rules for property division and also permits people, under strict conditions, to modify or even opt out of those rules by written agreement. Such agreements can be made before (a Prenuptial Agreement), during (Marriage Agreement) or after (Separation Agreement) a marriage.
In the vast majority of situations, the division of marital property is negotiated and confirmed in a written agreement or by order. A deep understanding of the law enables appropriate, timely and fair negotiations. Competent advice from experienced counsel can make a profoundly positive difference in outcomes, which can have long-term financial consequences after a separation.
Although legislative reforms are actively under way, no legislation currently exists in Alberta governing the division of property of an unmarried couple. Instead, complex judge-made law applies, built up from a large body of Court decisions written over decades. Unlike for married couples, the Courts do not start with a presumption that property acquired during an unmarried relationship will be divided equally after separation. Instead, a partner who makes a claim against property owned by their spouse at separation must prove that they contributed in some way, directly or indirectly, to that property but were not fairly compensated for their contribution (unjust enrichment). Contributions can be financial or personal, involve time and labour, or relate to the parties’ roles and intentions in their relationship and in raising their children.
Court decisions have increasingly recognized that unmarried persons can be engaged in a “joint family venture”, where their intentions have been to build assets together and share in the financial rewards of their relationship, quite similar to the basic concept underlying the Matrimonial Property Act. However, the legal concept of a “joint family venture” is still based on the idea of proving that a person’s contributions to property made during the relationship have gone uncompensated, and that it would be unjust for one partner to come out of the relationship financially better off than the other as a result.
Because of the state of the law, common-law property division in Alberta is highly fact-specific. The Courts have broad discretion to consider the parties’ circumstances and history, and then decide how most fairly to deal with their competing claims to each other’s property. Until Alberta law in this area is reformed through legislation, the division of property for unmarried people will continue to be very challenging. Leamy Family Law has extensive experience in the legal principles underlying common-law property division. We will discuss your particular circumstances, educate you on the extent of your potential claim, and then develop a plan to achieve your goals so you can move on with your life.
Currently, there is no government legislation in Alberta that indicates how to divide property when an unmarried couple breaks up, resulting in uncertainty and costly legal battles.
Bill 28 updates the Matrimonial Property Act to make it easier for unmarried partners to divide their property if their relationship breaks down.
Existing property division agreements that were enforceable under the law when they were signed would still be enforceable when the new legislation comes into force.
The global economy is now projected to grow at 3.3% in 2019, down from 3.6% in 2018, according to the April 2019 edition of the IMF’s World Economic Outlook (IMF 2019). The IMF points out several developments that have prompted the downward revision. These include the escalation of US–China trade tensions, the need for credit tightening in China, economic stress in countries such as Argentina and Turkey, and disruptions to the auto sector in Germany caused by the introduction of new emissions standards.
Behind all the different reasons for the downward revision of global growth, there is one thing in common: rising uncertainty. Chapter 1 of the World Economic Outlook – which focuses on the prospects and policies for the global economy – mentions the word “uncertain” and its variants 36 times. Some of the references discuss the impact of uncertainty on global economic growth. For instance, the report notes that “amid high policy uncertainty and weakening prospects for global demand, industrial production decelerated… The slowdown was broad based, notably across advanced economies”. The report also points out that political uncertainties “add downside risk to global investment and growth. These include policy uncertainty about the agenda of new administrations or surrounding elections, geo-political conflict in the Middle East, and tensions in east Asia”. On the impact of uncertainty and trade tensions, the report notes that “higher trade policy uncertainty and concerns of escalation and retaliation would reduce business investment, disrupt supply chains, and slow productivity growth”.
Similarly, the IMF’s report also discusses the impact of uncertainty on economic growth for specific countries. For instance, the report points out that a downward revision for growth in the UK partly reflects the “negative effect of prolonged uncertainty about the Brexit outcome”. And for South Africa, the downward revision for growth reflects “continued policy uncertainty”.
Rising uncertainty – here, there, and everywhere
These references are in line with the latest reading of the World Uncertainty Index (WUI). The latest WUI data show a sharp increase in global uncertainty in the first quarter of 2019 (Figure 1).
Uncertainty is rising in many parts of the world – in advanced, emerging, and low-income countries alike (Figure 2). Examples include uncertainty in Ireland regarding the outcome of Brexit, in Gabon related to President Ali Bongo Ondimba being admitted to the hospital after reportedly suffering a stroke, in South Africa around key policies, especially land reform, and in the Democratic Republic of Congo regarding the recent elections.
Implications of higher uncertainty
Higher uncertainty matters because it has serious consequences for the economy. In times of high uncertainty, companies may reduce investment and delay projects (Bloom et al. 2018, Dixit and Pindyck 1994). They do this because it is costly to reverse investment, so they prefer to ‘wait and see’. Similarly, households reduce consumption as they wait for less uncertain times (Carroll 1997). Rising uncertainty also affects all sectors of the economy; for instance, it can increase the cost of credit to households and firms (Kelly et al. 2014, Gilchrist et al. 2010).
In our work, we find that increases in uncertainty foreshadow significant output declines. Based on our estimates, the increase in uncertainty observed in the first quarter could be enough to knock up to 0.5 percentage points off global growth over the course of the year. This average effect, however, masks significant heterogeneity within and across countries. Within countries, we find larger effects for sectors with higher financial constraints. Across countries, the effect is estimated to be larger and more persistent in countries with lower institutional quality (Figure 3).
I have recently found myself thinking a lot about good intentions.
Inherently, they seem positive, right? However, they have a deeper negative impact that many do not fully appreciate. Good intentions are nothing more than idyllic dreams masquerading as concrete goals.
Good intentions are often things we say we want to do, we intend to do, yet there is not a clear plan in place to put them into action. They are the behaviors that we are preaching and projecting on to some undefinable, unclear ‘later’ date.
Good intentions speak to life as we imagine it in our hopes, dreams, fears, wants, wishes, attitudes, expectations and perceptions. Yet promoting our good intentions does not seem to protect us from bad — or at least unexpected — outcomes.
This often happens because we haven’t fully thought out all the potential consequences of acting on our good intentions. Even the simplest actions undertaken for the best reasons, can produce results we didn’t anticipate.
Life is complicated. A spontaneous gift for one of our children can lead to hurt and resentment in our other children, who then take that resentment out on the “privileged” sibling. Other times our intentions may be adequate, but our ability to follow through is lacking. We might want to really surprise our spouse by balancing the checkbook, only to make an even bigger mess of it because we just aren’t very good at it.
Good intentions are ultimately self-indulgent, counterproductive and often lead to destructive choices. Beneath our wishful hopes, the motivation that compels us to act is rooted in antagonism masked as altruism. These are acts of paternal protection, exhibited through controlling behaviors used to prove our worth and hide our inability to cope with uncertainty.
Good intentions are only good for ourselves and most of the time they are very bad in relation to someone else. It turns out that even when we are seemingly doing ‘good things’, we can be almost certain that we are doing ‘bad things’ from the viewpoint of others.
Good intentions can be understood in terms of four motivations that were modeled during childhood.
The first type of good intention is the over-ambitious person, who decides what others should be. Until the person becomes what others want, they feel worthless and inadequate. Over-ambitious good intentions set others up to fail, leaving them living in the future and feeling incompetent in the meantime.
The second type of good intention is the over-critical person who finds fault with everything others do because they only want them to be their best, which means perfect. This teaches others that they cannot do anything right and cannot trust their own judgment.
The third type of good intention is the over-indulgent person who gives others everything they want and more. Because the others are not taught to work for anything, they become dependent on others and full of self-doubt when alone.
The fourth type of good intention is the over-protective person who teaches others that danger is lurking around the corner; something bad is bound to happen soon. People end up feeling inadequate to cope and scared of everything. This is a recipe for anxiety.
What are real intentions? Real intentions involve acting in accordance with the demands of the present situation. Real intentions arise from:
1) Perceiving reality and its demands clearly.
2) Accurately assessing what the situation requires us to do.
3) Deciding on an appropriate intervention.
4) Implementing our decision in the reality that exists in the here and now.
Reality is the world as it is, not as we imagine it in our hopes, dreams, fears, wants, wishes, attitudes, expectations and perceptions. We can do what reality requires and use real intentions to:
(Received from the Office of Governor Bob McDonnell)
Governor Bob McDonnell announced today that total general fund revenue collections rose by 15.7 percent in October, primarily due to growth in individual withholding, nonwithholding and corporate income tax payments. Two additional deposit days in October 2012 compared with October 2011 also contributed to the growth; October is typically not a significant month for revenue collections.
Comparing October 2012 to October 2011, collections of payroll withholding taxes rose 13.8 percent. Collections of individual nonwithholding taxes rose 43.2 percent. Collections of sales and use taxes, reflecting September sales, rose 1.1 percent in October. Due in part to late corporate September payments, collections of corporate income taxes rose 143.0 percent.
On a year-to-date basis, total revenue collections rose 4.8 percent through October, ahead of the annual forecast of 2.9 percent growth. Adjusting for the accelerated sales tax program, total revenues grew 4.0 percent through October, ahead of the adjusted forecast of 2.7 percent growth.
Governor McDonnell said "October's increase in revenue after a 0.7 percent decrease in September is a reminder of the continued volatility in the Commonwealth's financial outlook. While any increase in revenues is certainly positive, the continuing uncertainty surrounding our federal government's financial outlook, and the looming fiscal cliff, mean Virginia must look beyond these short-term increases and prudently prepare for how to weather any potential financial challenges in the coming months. To not do so would be irresponsible.
"That is why earlier this week, all state agencies were asked to submit plans outlining how they would best reduce spending in their departments, in this case by 4 percent, should such reductions become necessary. No final decisions have been made. Our future budgetary actions will be determined by this nation's economic recovery, and how leaders in Washington D.C. address the looming fiscal cliff. A failure to find a resolution prior to this fast approaching deadline would have negative economic consequences on all the states, Virginia included. At this time, we are simply preparing for this possible, but still avoidable, outcome. I continue to urge leaders in both parties to work together to find a solution to this pressing issue.
"As we continue to prepare Virginia's budget and ready ourselves for any changes that may occur at the federal level, and their impact on Virginia's finances, we are aware that more than 250,000 Virginians are still out of work, an unacceptable statistic that hits home throughout the state. We continue to contend with federal policies that are detrimental to private-sector job creation and are making any recovery more difficult. Our focus will continue to be on taking every step necessary to help the private sector create good paying jobs for our citizens. That includes making state government more efficient and effective, and ensuring that we spend taxpayer dollars wisely and responsibly."
Workplace violence has been a major issue for many businesses and healthcare systems in the U.S. for decades. According to the Occupational Safety and Health Administration (OSHA), almost 2 million U.S. workers report having been a victim of violence at work each year. This number is even more staggering when you consider that about 3 out of every 4 workplace violence incidents occur in the healthcare or social services industries.
According to the Bureau of Labor Statistics, 20,790 workers in the private industry experienced trauma from nonfatal workplace violence in 2018. Of these workers, over 70% were female and 20% required a month or longer away from work to recover.
These jarring statistics alone should provide enough motivation for employers to commit to a non-violent workplace environment, but the data also show significant financial consequences of persistent workplace violence. In healthcare, for instance, over 58 incidents occur annually per 10,000 workers, with an average cost per incident of over $3,000. This means that, per 10,000 workers, healthcare systems are losing nearly $200,000 annually.
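The "nearly $200,000" figure follows directly from the two numbers cited above. A quick sketch of the arithmetic (a rough lower bound using only the article's per-incident cost; indirect costs such as lost productivity and turnover would push it higher):

```python
# Back-of-the-envelope annual cost of workplace violence in healthcare,
# using the incident rate and average cost cited above.
incidents_per_10k_workers = 58     # annual incidents per 10,000 workers
avg_cost_per_incident = 3_000      # average cost per incident, USD

annual_cost_per_10k = incidents_per_10k_workers * avg_cost_per_incident
print(f"Annual cost per 10,000 workers: ${annual_cost_per_10k:,}")
# Annual cost per 10,000 workers: $174,000
```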
Despite all this negativity, there are many things that your organization can do today to help prevent workplace violence incidents from occurring regularly.
1. Identify workplace violence types
There are 4 main types of workplace violence that can be categorized depending on the perpetrator’s relationship to the victim and/or place of employment. Understanding these types can be crucial for identifying potential patterns or risk levels at your particular workplace, thereby allowing you to take efficient action to protect against further incidents. The 4 types are as follows:
Criminal intent
The perpetrator has no legitimate business relationship to the workplace and usually enters the affected workplace to commit a robbery or other criminal act.
Prevention strategy for this type: cash control, lighting control, entry and exit control, surveillance, signage, training on robbery response, training on dealing with aggressive and disorderly persons.
Customer/client
The perpetrator is either the recipient or the object of a service provided by the affected workplace or the victim. The assailant may be a current or former client, patient, customer, passenger, criminal suspect, inmate, or prisoner.
Prevention strategy for this type: Adequate staffing and training (low quality service can result in frustrated customers or clients), training to recognize behavioural cues, violence de-escalation techniques, interpersonal communication skills and proper restraint/take-down techniques.
Co-worker
The perpetrator has some employment-related involvement with the affected workplace. Usually this involves an assault by a current or former employee, supervisor or manager.
Prevention strategy for this type: Maintain a thorough hiring process (conduct criminal background screens and check former employee references), train employees regularly on company policies and how/when to report signs of workplace violence.
Personal relationship
The perpetrator is someone who does not work there but has or is known to have had a personal relationship with an employee.
Prevention strategy for this type: Training employees to identify victims or perpetrators of intimate partner violence (IPV), maintain a culture of support (no penalties for coming forward, confidentiality, safety and security protocols implemented, community service referrals offered).
2. Create a plan for reducing workplace violence
As a rule of thumb, your organization should always have an updated Emergency Action Plan (EAP) in place, and communicating this plan to employees and other community members is vital. But not every organization elects to include workplace violence prevention in its EAP. Some organizations may feel they don’t need it because workplace violence is uncommon in their industry. This is risky, as it increases liability and the probability of an incident occurring.
If your organization has an EAP in place, check whether it includes workplace violence prevention. If not, suggest changes to remedy this oversight (or begin planning the changes yourself if you are able to).
Remember that after an EAP is created, it’s imperative that staff members are trained on how to respond to certain scenarios and what their specific roles and responsibilities might be during these critical situations.
3. Assess potential threats
A threat assessment team is a committee of employees from various levels and areas of expertise within an organization whose role is to assess the seriousness and likelihood of a threat after it has been recognized. Employers may need to seek outside talent to assist in workplace violence prevention, intervention and risk management. The goal of the threat assessment team should be to review non-emergency incident reports and recommend appropriate action.
To help prevent co-worker violence in the workplace, the following are some examples of questions that can be asked to individuals familiar with an offender after threatening comments or behaviour has occurred:
- How does the offender cope with disappointment, loss or failure?
- Does the offender blame others for their failures?
- Does the offender indicate they are being treated unfairly by the company?
- Does the offender have problems with supervisors or management?
- Does the offender speak of personal problems such as divorce, death in the family, health problems, or other personal losses or issues?
- Is the offender obsessed with others or engaged in any stalking or surveillance activity?
- Has the offender spoken about homicide or suicide?
4. Encourage reporting
It’s vital that employers do everything they can to encourage their employees to report concerns about workplace violence. Employees need to feel secure in their positions and confident that their reports will be taken seriously. It’s also important that reports of such incidents remain confidential and that the methods in place for submitting them protect that confidentiality.
Proper training of newer employees is a good way to start, though reminders about reporting procedures and policies should be given to all employees at regular intervals, even when there are no updates. It’s always better to encourage over-reporting than to run the risk of someone not reporting something that ends up being costly in the long run.
5. Proactive prevention
Preventing workplace violence happens in many phases. It can be tempting to frame the issue as simply ‘finding the bad person’ and swooping in to stop them, but that is an oversimplification of a complicated problem. It’s important for employers to provide a variety of supportive resources to employees before frustrations with the workplace or with supervisors have a chance to build.
For example, providing comprehensive mental health resources to employees will likely give them confidence that their employer’s priorities are on point, and that employees are encouraged to express themselves in a non-threatening environment.
It’s also important that employees feel that they are heard at work as well, even if they’re not reporting a workplace violence incident or sign. It’s far more likely that an employee will report a workplace violence indicator if they already trust the employer for providing them additional resources at other times as well.
Reversing the trend
Workplace violence is no longer a secret, but unfortunately that doesn’t mean the trend has reversed course for the better quite yet. Incidents are still under-reported in many industries and it can even be seen as ‘part of the job’ for some nurses and healthcare workers. This devastating reality has to stop, and it’s up to each of us to play our part in the fight against violence in the workplace.
Evolution of DNA
DNA has been an important discovery for many reasons, a key one being its relationship to evolutionary theory. Evolutionists have been particularly excited by DNA advances because DNA can be utilised to document the history of evolution. By comparing the DNA sequences of genes from one organism to another, we can learn an enormous amount about their relationships; in fact, this learning goes far beyond what we can learn from morphology alone. At the same time, scientists are aware that the DNA record of history, so to speak, has gaps and is somewhat fragmented. They must therefore remain mindful of how genetic changes occur, or risk biased conclusions about how evolution proceeded.
Using DNA to Study Variation Among Organisms

DNA has been used to study many aspects of the evolution of organisms. By investigating variation among species and the structure of different populations, scientists have learned a great deal about molecular evolution. Evolutionists look for specific patterns of DNA variation and then make logical inferences from the information. There is an enormous amount of variation within species, including humans: your DNA will differ from another person's, and no two individuals share the same DNA, with the exception of identical twins. The result is phenotypic variation, which refers to differences in the appearance and behaviour of organisms within the same species.
Genetic Mutations and Recombination

Variation can arise through mutations, which happen when DNA copying is faulty and there is a difference between parent and offspring genes. While a mutation can be 'fixed' by DNA repair systems, some may have negligible effects. Others, however, can impact large pieces of DNA and lead to changes within a species. Recombination occurs when two parental genomes are essentially mixed to create an offspring, a common occurrence in sexual reproduction. Although the parents are typically members of the same species, occasionally genes can be moved between less related organisms; this is more likely to occur in organisms such as bacteria.
Evolution of the Genetic Code and Genetic Relatedness

DNA actually has numerous roles, the most well-known being its ability to code for proteins. This protein-coding role, however, is only one aspect of how genetic information works. Our genes include those that are expressed; they also have special sequences that dictate precisely when and where DNA is transcribed into another molecule, RNA, for the creation of proteins. Our genetic code is the system that allows an RNA strand to be translated into the necessary sequence of amino acids.
Genomes for organisms hold a significant amount of evidence for evolution, given that living species share the commonality of basic hereditary systems that use DNA or RNA to pass on genes from parent to offspring. By quantifying the similar aspects as well as the differences between and within species, scientists can assess the relationships between species. This tells us which species are closely or distantly related, and the pattern relays what is essentially Darwin's branching tree of life.
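As a minimal sketch of what "quantifying the similar aspects as well as the differences" can look like, the fraction of matching bases between two aligned sequences gives a crude relatedness score. The sequences below are invented for illustration, not real genes; real analyses use many genes, substitution models, and proper alignment algorithms.

```python
def percent_identity(seq_a: str, seq_b: str) -> float:
    """Percentage of positions with the same base in two equal-length aligned sequences."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to the same length")
    matches = sum(a == b for a, b in zip(seq_a, seq_b))
    return 100.0 * matches / len(seq_a)

# Toy aligned sequences (hypothetical, for illustration only)
species_a = "ATGGTCCTA"
species_b = "ATGGTCGTA"   # differs from species_a at 1 of 9 positions
species_c = "ATGCACGTT"   # differs from species_a at 4 of 9 positions

print(percent_identity(species_a, species_b))  # higher score: more closely related
print(percent_identity(species_a, species_c))  # lower score: more distantly related
```

Under this toy scoring, species_a and species_b look closely related, while species_c sits further away; scoring many such pairwise comparisons is the raw material from which tree-building methods infer relationships.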
Investigating Genetic Similarities

DNA supports genetic similarities, and these similarities help researchers understand the effects of human genes through research on other species. For instance, genes governing DNA repair systems in bacteria, flies and rodents have counterparts that have been found to be implicated in cancers in humans.
Evolution is certainly an area that generates debate, although the scientific community at large is confident in the theory of evolution, and it is mandated in the United Kingdom curriculum as well as in most other places in the world. The discovery of DNA has supported evolutionary theory, and our continued understanding of this molecule can help scientists make predictions about the direction of evolution in the future.
Scientists must collect accurate information that allows them to make evolutionary connections among organisms. Similar to detective work, scientists must use evidence to uncover the facts. In the case of phylogeny, evolutionary investigations focus on two types of evidence: morphologic (form and function) and genetic.
Two Options for Similarities
In general, organisms that share similar physical features and genomes are more closely related than those that do not. We refer to such features that overlap both morphologically (in form) and genetically as homologous structures. They stem from developmental similarities that are based on evolution. For example, the bones in bat and bird wings have homologous structures (Figure).
Bat and bird wings are homologous structures, indicating that bats and birds share a common evolutionary past. (credit a: modification of work by Steve Hillebrand, USFWS; credit b: modification of work by U.S. DOI BLM)
Molecular Comparisons
The advancement of DNA technology has given rise to molecular systematics, which is the use of molecular data in taxonomy and biological geography (biogeography). New computer programs not only confirm many earlier classifications, but also uncover previously made errors. As with physical characteristics, even the DNA sequence can be tricky to read in some cases. In some situations, two very closely related organisms can appear unrelated if a mutation has caused a shift in the reading frame: an insertion or deletion moves each downstream nucleotide base over one place, causing two similar codes to appear unrelated.
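The frameshift effect described above is easy to demonstrate: inserting a single base shifts every downstream codon, so two nearly identical sequences read out completely different triplets. A small sketch, with sequences invented for illustration:

```python
def codons(seq: str, frame: int = 0) -> list[str]:
    """Split a DNA sequence into triplet codons, starting at the given reading frame."""
    return [seq[i:i + 3] for i in range(frame, len(seq), 3)]

original = "ATGGCATTC"        # reads as ATG GCA TTC
mutated = "A" + original      # a single inserted base at the front

print(codons(original))  # ['ATG', 'GCA', 'TTC']
print(codons(mutated))   # ['AAT', 'GGC', 'ATT', 'C'] -- every downstream codon shifts
```

Reading the mutated sequence in frame 1 recovers the original codons, which is one way alignment software detects that such sequences are in fact closely related.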
Sometimes two segments of DNA code in distantly related organisms randomly share a high percentage of bases in the same locations, causing these organisms to appear closely related when they are not. For both of these situations, computer technologies help identify the actual relationships, and, ultimately, the coupled use of both morphologic and molecular information is more effective in determining phylogeny.
Why Does Phylogeny Matter?Evolutionary biologists could list many reasons why understanding phylogeny is important to everyday life in human society. For botanists, phylogeny acts as a guide to discovering new plants that can be used to benefit people. Think of all the ways humans use plants—food, medicine, and clothing are a few examples. If a plant contains a compound that is effective in treating cancer, scientists might want to examine all of the compounds for other useful drugs.
A research team in China identified a DNA segment that they thought to be common to some medicinal plants in the family Fabaceae (the legume family). They worked to identify which species had this segment ((Figure)). After testing plant species in this family, the team found a DNA marker (a known location on a chromosome that enabled them to identify the species) present. Then, using the DNA to uncover phylogenetic relationships, the team could identify whether a newly discovered plant was in this family and assess its potential medicinal properties.
Dalbergia sissoo (D. sissoo) is in the Fabaceae, or legume family. Scientists found that D. sissoo shares a DNA marker with species within the Fabaceae family that have antifungal properties. Subsequently, researchers found that D. sissoo had fungicidal activity, supporting the idea that DNA markers are useful to screen plants with potential medicinal properties.
Which animals in this figure belong to a clade that includes animals with hair? Which evolved first, hair or the amniotic egg?
Rabbits and humans belong in the clade that includes animals with hair. The amniotic egg evolved before hair because the Amniota clade is larger than the clade that encompasses animals with hair.
Clades can vary in size depending on which branch point one references. The important factor is that all organisms in the clade or monophyletic group stem from a single point on the tree. You can remember this because monophyletic breaks down into “mono,” meaning one, and “phyletic,” meaning evolutionary relationship. (Figure) shows various clade examples. Notice how each clade comes from a single point; whereas, the non-clade groups show branches that do not share a single point.
All the organisms within a clade stem from a single point on the tree. A clade may contain multiple groups, as in the case of animals, fungi and plants, or a single group, as in the case of flagellates. Groups that diverge at a different branch point, or that do not include all groups in a single branch point, are not clades.
Shared Characteristics

Organisms evolve from common ancestors and then diversify. Scientists use the phrase “descent with modification” because even though related organisms have many of the same characteristics and genetic codes, changes occur. This pattern repeats as one goes through the phylogenetic tree of life:

1. A change in an organism’s genetic makeup leads to a new trait which becomes prevalent in the group.
2. Many organisms descend from this point and have this trait.
3. New variations continue to arise: some are adaptive and persist, leading to new traits.
4. With new traits, a new branch point is determined (go back to step 1 and repeat).
If a characteristic is found in the ancestor of a group, it is considered a shared ancestral character because all of the organisms in the taxon or clade have that trait. The vertebral column in (Figure) is a shared ancestral character. Now consider the amniotic egg characteristic in the same figure. Only some of the organisms in (Figure) have this trait, and for those that do, it is called a shared derived character because this trait arose at some point but is not shared by all of the ancestors in the tree.
The tricky aspect to shared ancestral and shared derived characters is that these terms are relative. We can consider the same trait one or the other depending on the particular diagram that we use. Returning to (Figure), note that the amniotic egg is a shared ancestral character for the Amniota clade, while having hair is a shared derived character for some organisms in this group. These terms help scientists distinguish between clades in building phylogenetic trees.
Choosing the Right Relationships
Imagine being the person responsible for organizing all department store items properly—an overwhelming task. Organizing the evolutionary relationships of all life on Earth proves much more difficult: scientists must span enormous blocks of time and work with information from long-extinct organisms. Trying to decipher the proper connections, especially given the presence of homologies and analogies, makes the task of building an accurate tree of life extraordinarily difficult. Add to that advancing DNA technology, which now provides large quantities of genetic sequences for researchers to use and analyze. Taxonomy is a subjective discipline: many organisms have more than one connection to each other, so each taxonomist will decide the order of connections.
To aid in the tremendous task of describing phylogenies accurately, scientists often use the concept of maximum parsimony, which means that events occurred in the simplest, most obvious way. For example, if a group of people entered a forest preserve to hike, based on the principle of maximum parsimony, one could predict that most would hike on established trails rather than forge new ones.
For scientists deciphering evolutionary pathways, the same idea is used: the pathway of evolution probably includes the fewest major events that coincide with the evidence at hand. Starting with all of the homologous traits in a group of organisms, scientists look for the most obvious and simple order of evolutionary events that led to the occurrence of those traits.
Head to this website to learn how researchers use maximum parsimony to create phylogenetic trees.
These tools and concepts are only a few strategies scientists use to tackle the task of revealing the evolutionary history of life on Earth. Recently, newer technologies have uncovered surprising discoveries with unexpected relationships, such as the fact that people seem to be more closely related to fungi than fungi are to plants. Sound unbelievable? As the information about DNA sequences grows, scientists will become closer to mapping the evolutionary history of all life on Earth.
Section Summary
To build phylogenetic trees, scientists must collect accurate information that allows them to make evolutionary connections between organisms. Using morphologic and molecular data, scientists work to identify homologous characteristics and genes. Similarities between organisms can stem either from shared evolutionary history (homologies) or from separate evolutionary paths (analogies). Scientists can use newer technologies to help distinguish homologies from analogies. After identifying homologous information, scientists use cladistics to organize these events as a means to determine an evolutionary timeline. They then apply the concept of maximum parsimony, which states that the order of events probably occurred in the most obvious and simple way with the fewest steps. For evolutionary events, this would be the path with the fewest major divergences that correlate with the evidence.
(Figure) Which animals in this figure belong to a clade that includes animals with hair? Which evolved first, hair or the amniotic egg?
(Figure) Rabbits and humans belong in the clade that includes animals with hair. The amniotic egg evolved before hair because the Amniota clade is larger than the clade that encompasses animals with hair.
(Figure) What is the largest clade in this diagram?
(Figure) The largest clade encompasses the entire tree.
Review Questions
Which statement about analogies is correct?
a. They occur only as errors.
b. They are synonymous with homologous traits.
c. They are derived by similar environmental constraints.
d. They are a form of mutation.

What do scientists use to apply cladistics?
a. homologous traits
b. homoplasies
c. analogous traits
d. monophyletic groups

What is true about organisms that are a part of the same clade?
a. They all share the same basic characteristics.
b. They evolved from a shared ancestor.
c. They usually fall into the same classification taxa.
d. They have identical phylogenies.

Why do scientists apply the concept of maximum parsimony?
a. to decipher accurate phylogenies
b. to eliminate analogous traits
c. to identify mutations in DNA codes
d. to locate homoplasies
Dolphins and fish have similar body shapes. Is this feature more likely a homologous or analogous trait?
Dolphins are mammals and fish are not, which means that their evolutionary paths (phylogenies) are quite separate. Dolphins probably adapted to have a similar body plan after returning to an aquatic lifestyle, and, therefore, this trait is probably analogous.
Why is it so important for scientists to distinguish between homologous and analogous characteristics before building phylogenetic trees?
Phylogenetic trees are based on evolutionary connections. If an analogous similarity were used on a tree, this would be erroneous and, furthermore, would cause the subsequent branches to be inaccurate.
Maximum parsimony hypothesizes that events occurred in the simplest, most obvious way, and the pathway of evolution probably includes the fewest major events that coincide with the evidence at hand.
Does a 6,000-year-old earth match the findings of modern science? Secular scientists have answered forcefully in the negative for generations. However, their arguments rest on the assumption of constant natural processes and constant rates, and new discoveries from ICR’s geneticists present a strong challenge to these claims.
Genetic “Clocks”
Ticking within every species is a “clock” of sorts that measures the length of time that a species has existed on the earth. Since DNA is passed on imperfectly from parent to offspring, each generation grows more genetically distant from prior generations. Consequently, with each successive generation, reproductively isolated groups within species grow more and more genetically distant from each other.
This is true for DNA found not only in the nucleus of the cell but also in the cellular energy factories termed mitochondria. Mitochondrial DNA is present in both males and females, but unlike nuclear DNA, it is inherited only from mothers. Thus, mitochondrial DNA differences among modern individuals within a created “kind” trace back to the maternal ancestor of the kind.
For kinds that survived the Flood on board the Ark, modern differences are mutated versions of the mitochondrial DNA sequence that was present in the female representative on the Ark (one representative for the unclean kinds; several representatives for the clean kinds). For kinds that survived off the Ark, modern differences may trace back to the individual females that God created during days three through six of the creation week. Since God likely created many individuals of each kind, some modern mitochondrial DNA differences for off-Ark kinds may be due to God creating DNA differences among individuals and not to mutation over time. Nevertheless, for both on-Ark and off-Ark kinds, most mitochondrial DNA differences among members of the same kind likely reflect the length of time that the kind has existed on Earth, to a first approximation, and thus represent the “ticks” of the mitochondrial DNA clock.
These biological facts create a new venue in which to compare the young-earth creation timescale to the secular timescale head to head. The true age of any given kind will be reflected in the amount of mitochondrial DNA diversity among its modern descendants. If kinds have existed on this planet for millions of years, then they should be quite genetically diverse. In contrast, if their origins trace back only 6,000 years, then they should be more genetically homogeneous.
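The clock logic above, in which pairwise differences grow with time of separation, can be illustrated with a toy simulation. All parameters below are assumed round numbers chosen for illustration, not measured rates: two lineages split from one ancestor, each independently accumulates mutations at random genome positions, and their pairwise distance grows at roughly twice the per-lineage rate.

```python
import random

random.seed(1)  # fixed seed so the toy run is repeatable

def simulate_divergence(rate, generations, genome_len=10_000):
    """Two lineages diverge from an identical ancestor. Each generation,
    each lineage gains a mutation (a bit flip at a random position) with
    probability `rate`. Returns the number of differing positions."""
    a = [0] * genome_len
    b = [0] * genome_len
    for _ in range(generations):
        for seq in (a, b):
            if random.random() < rate:
                seq[random.randrange(genome_len)] ^= 1
    return sum(x != y for x, y in zip(a, b))

short = simulate_divergence(rate=0.0005, generations=10_000)
long_ = simulate_divergence(rate=0.0005, generations=180_000)
print(short, long_)  # differences scale roughly with 2 * rate * time
```

With these assumed inputs the short separation yields on the order of 10 differences and the long separation on the order of 180, matching the linear d = 2rt relationship the article states next (back-mutations cause only a slight shortfall at these mutation counts).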
These qualitative statements can be restated with mathematical rigor. Predicting mitochondrial DNA diversity with precision is a straightforward calculation. Secular scientists have spent many years developing the equations for estimating DNA differences over time. The mitochondrial DNA differences between isolated groups of individuals are a product of twice the DNA mutation rate and their time of separation.1,2 We can show this in mathematical notation as follows:
(1) d = 2*r*t
where
d = DNA differences between two individuals
r = the measured mutation rate in the species or lineage
t = time of origin derived from each origins model
As long as the mitochondrial mutation rate has been accurately measured in the laboratory, equation (1) can be used to predict genetic diversity.
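Equation (1) can be evaluated directly. The snippet below plugs in the human mutation rate and the two timescales quoted in this article; the only inputs are numbers taken from the text.

```python
def predicted_differences(rate, years):
    """Equation (1): d = 2 * r * t."""
    return 2 * rate * years

HUMAN_RATE = 0.00048  # mutations per year, as quoted in this article

creation = predicted_differences(HUMAN_RATE, 10_000)    # creation timescale
evolution = predicted_differences(HUMAN_RATE, 180_000)  # evolutionary timescale
print(round(creation, 1), round(evolution, 1))
# ≈ 9.6 and ≈ 172.8, which the article rounds to about 10 and about 174
```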
Secular scientists have measured the mitochondrial DNA mutation rate for four species—humans, fruit flies, roundworms, and water fleas. The Bible puts the origin of each of these about 6,000 years ago, and we rounded it up to 10,000 years.3 However, the published evolutionary literature puts the origin of modern humans about 180,000 years ago; fruit flies, about 20 million years ago; roundworms, about 18 million years ago; and water fleas, about 7.6 million years ago.4
Plugging these numbers into equation (1) reveals a sharp contrast between the creation and evolutionary predictions (Figures 1 and 2). For example, the measured mitochondrial DNA mutation rate for humans is, on average, ~0.00048 mutations per year.4,5 Multiplying 0.00048 by 2 and by 10,000 years yields a prediction of about 10 mutations after 10,000 years of existence. Conversely, multiplying 0.00048 by 2 and by 180,000 years yields a prediction of about 174 mutations after 180,000 years of existence.4,6
Comparing these predictions to the range of actual human mitochondrial DNA diversity shows a striking result (Figure 1).4 On average, human mitochondrial DNA sequences differ at 10 positions. The biblical model predicts a range of diversity that accurately captures this value. In contrast, the evolutionary timescale (and, by extension, the old-earth creation timescale) predicts levels of genetic diversity that are 12–29 times higher than the actual DNA differences we see today (124–290 predicted mitochondrial DNA differences versus 10 observed).
Similar calculations for fruit flies, roundworms, and water fleas depict the same result—evolutionary predictions that are orders of magnitude off from the real DNA differences we see today and creation predictions that either match actual diversity or are very close to it (Figures 2A–C).
The evolutionary results cannot in any way be explained by invoking a slower mutation rate in the past. First, this would be inconsistent with the assumption of constant rates and constant processes invoked in astronomy and geology. Second, for species to be as genetically similar as they are today yet as old as the evolutionists claim, they would need to mutate only once every 21,000–36,000 years and consistently so for millions of years (Table 1). This incredibly slow rate is completely counter to the actual mutation rates observed in genetics; in fact, rates this slow seem biologically impossible. These results appear to present a dramatic challenge to the millions of years espoused by evolution and old-earth creation, and they seem to powerfully confirm the biblical account.
Answering Objections
However, a case this simple and powerful will be met with some measure of opposition. Can the evolutionists find a hole in these arguments? Let’s look at their possible objections.
Objection 1: The results of this study are contradicted by the many evolutionary molecular clocks published previously.
Comparing the clock in this study to the evolutionary molecular “clock” is essentially an apples-to-oranges comparison. While both clocks are based on the same biological principles, the evolutionists have used a shortcut to determine the mutation rate in their version of the clock. Rather than measure the actual rate of genetic change in the laboratory, evolutionists have determined the “ticking” of the clock from the dates they have assigned to the layers in the fossil record. This would be analogous to a young-earth creationist determining the mutation rate by measuring the genetic differences between two species, assuming a date of origin of 6,000 years, calculating the mutation rate from the genetic differences divided by 6,000 years, and then claiming that modern genetic differences confirm a 6,000-year origin for these species. Hence, evolutionary molecular “clocks” are actually a form of circular reasoning, not independent scientific data points, and they cannot logically contradict the results noted in Figures 1 and 2.
Objection 2: The results of this study are contradicted by empirically measured molecular clocks for nuclear DNA.
The nuclear DNA clocks that evolutionists use assume that all nuclear DNA differences are the product of mutation, and this interpretation is in error. Unlike mitochondrial DNA, nuclear DNA comes in two copies and is inherited from both parents, which means that DNA differences in offspring are the result of DNA mutation and of pre-existing DNA variation in the parents. For example, under the creation model, some of this pre-existing variation in humans traces back ultimately to the two parents, Adam and Eve, whom God created with pre-existing DNA differences. When this fact is accounted for, nuclear-genetic clocks point to recent creation, not millions of years.7
Objection 3: The results of this study are based on flawed methods—too few modern individuals were represented.
Inclusion of more modern individuals when tallying actual genetic diversity fails to help the evolutionary model for two reasons. First, the rate of mitochondrial DNA mutation might be different in the lineages that led up to these additional individuals. Hence, if the evolutionists wish to better represent the worldwide diversity in mitochondrial DNA sequences, then they must also better represent the worldwide diversity in mitochondrial DNA mutation rates. Any increase in actual genetic diversity afforded by more individual sequences might be counteracted by the discovery of faster mutation rates in these new lineages. Second, the gap between actual diversity and predicted diversity is far too great to be bridged by even a hundred more DNA sequences from additional individuals or species. Sampling error does not reconcile evolutionary predictions with reality.
Objection 4: The results of this study are based on flawed methods—too few historic/fossil individuals were represented.
Fossil DNA sequences were deliberately omitted from this study because they are too fraught with scientific uncertainty. Our own in-house analysis revealed that most fossil DNA sequences are highly degraded and unreliable. Furthermore, it is currently impossible to verify the accuracy of these sequences, even when they do not appear degraded, since no independent means of validation exists. (The evolutionary interpretation of the fossil record is not an independent test.) Finally, even if fossil sequences were reliable, the gap between actual diversity and predicted diversity is far too great to be bridged by the inclusion of fossil DNA sequences. Again, sampling error simply does not reconcile evolutionary predictions with reality.
Objection 5: The results of this study falsely represent the evolutionary expectations. Mutational saturation and homoplasy (independently acquired identical mutations) would lower the absolute value of the expected DNA differences under the evolutionary model.
Mutational saturation could theoretically rescue the evolutionary model but fails to do so for lack of scientific evidence in its favor. If the individuals in this study had mutated to saturation such that every DNA position had been mutated, then the DNA identity between them should have been no different than a random alignment of DNA sequences. Since every position in a DNA sequence has four possibilities due to the four bases—A, T, G, C—in the DNA code, a random alignment matches 25% of the time by chance and mismatches 75% of the time. None of the comparisons in this study even came close to 25% identity. The lowest match was 86%—far in excess of 25%.
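The 25% chance-identity baseline invoked above is easy to verify by simulation: generate two long random sequences over the four-base alphabet and measure the fraction of matching positions.

```python
import random

random.seed(42)  # fixed seed for a repeatable run
BASES = "ATGC"
N = 100_000  # alignment length; long enough that sampling noise is tiny

seq1 = random.choices(BASES, k=N)
seq2 = random.choices(BASES, k=N)
identity = sum(a == b for a, b in zip(seq1, seq2)) / N
print(f"{identity:.3f}")  # ≈ 0.25, far below the >=86% identity reported
```

Because each position matches independently with probability 1/4, the observed identity converges to 25% as the alignment grows, which is the baseline a fully saturated comparison would approach.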
Likewise, independently acquired identical mutations could also theoretically rescue the evolutionary model, but this explanation strains credulity. Under the evolutionary model, these creatures have undergone hundreds of thousands to millions of random mutations to a DNA sequence that is less than 20,000 DNA bases long, yet they have maintained sequence identities of 86% or greater. To postulate that these high identities resulted from separate species arriving at the same mutation repeatedly, by chance, over millions of years is to invoke a statistical miracle—a practice evolutionists reject in other fields. Clearly, homoplasy is not a tenable scientific explanation for these results.
Objection 6: The results of this study failed to account for all evolutionary mechanisms. Natural selection would have eliminated millions of deleterious mutations over the past several million years in each of these species.
This evolutionary rescuing device could potentially solve the numerical discrepancy problem, but it is entirely ad hoc and, therefore, unscientific. Scientific explanations must make testable predictions, and if natural selection explains why the evolutionary predictions are so far off from reality, then it must also predict levels of genetic diversity in species for which diversity is currently unknown. Until evolutionists actually make these predictions, this line of reasoning does not pass muster.
Objection 7: The results of this study are simply a statistical artifact. Four species do not represent biological diversity on Earth.
This objection is perhaps the strongest that the evolutionists could raise, and it appears compelling at first pass. Four species is a far cry from the millions of species that currently exist on Earth. However, these four belong to three separate phyla (humans—Chordata; fruit flies and water fleas—Arthropoda; roundworms—Nematoda) that allegedly diverged deeply in evolutionary history. Therefore, the results from these species span a broad swath of life and of supposed evolutionary time. Any evolutionary explanation that seeks to dismiss these results has very broad implications for the history of this planet, and simple explanations will not come easily, especially in light of the fact that the mutation rates for each species were obtained independently. Hence, statistical error is not a compelling explanation, and I encourage any reader to perform mutation-rate studies on organisms for which this rate has yet to be measured. (I predict similar results to those depicted in this article.)
None of the above objections has yet appeared in the peer-reviewed scientific literature—either in creationist or secular journals. In fact, no peer-reviewed objections have been published at all to date. The objections above are ones that I anticipate or that have been expressed in popular forums such as evolutionary blogs—a common source of origins information and ideas that are unencumbered by the accountability of scientific professionals.
Will evolutionists be able to reconcile these genetic data with their millions-of-years claims in geology and astronomy? The results in this article were derived using the same assumptions pervading these latter fields—for example, the constancy of the rate of change—yet these results flatly contradict the secular conclusions in geology and astronomy. Can the evolutionary community resolve this great paradox without undermining the logical foundations of their arguments for deep time?
References
- Futuyma, D. J. 2009. Evolution. Sunderland, MA: Sinauer Associates.
- Howell, N. et al. 2003. The pedigree rate of sequence divergence in the human mitochondrial genome: There is a difference between phylogenetic and pedigree rates. American Journal of Human Genetics. 72 (3): 659–670.
- We allowed the extra time because Septuagint manuscripts differ in their ages for the patriarchs.
- Jeanson, N. T. 2013. Recent, Functionally Diverse Origin for Mitochondrial Genes from ~2700 Metazoan Species. Answers Research Journal. 6: 467-501.
- This is for a subset of the mitochondrial DNA, the “D-loop,” the only region of the human mitochondrial DNA for which a mutation rate has been measured to appropriate statistical confidence.
- When the statistical variation in the measured mutation rate is factored in, both creation and evolutionary predictions yield a range of values, but these ranges (creation = 7–16 mutations; evolution = 124–290 mutations) are still very distinct from one another.
- Carter, R. The Non-Mythical Adam and Eve! Refuting errors by Francis Collins and BioLogos. Creation Ministries International. Posted on creation.com August 20, 2011, accessed January 9, 2014.
Figures adapted with permission from Answers Research Journal
* Dr. Jeanson is Deputy Director for Life Sciences Research at the Institute for Creation Research and received his Ph.D. in cell and developmental biology from Harvard University.
In 1969, Roy J. Britten and Eric H. Davidson published “Gene Regulation for Higher Cells: A Theory,” in Science. “A Theory” proposes a minimal model of gene regulation, in which various types of ‘genes’ interact to control the differentiation of cells through differential gene expression. Britten worked at the Carnegie Institute of Washington in Washington, D.C., while Davidson worked at the California Institute of Technology in Pasadena, California. Their paper was an early theoretical and mechanistic description of gene regulation in higher organisms.
“A Theory” stated the hypothesis that repetitive non-coding sequences are at the core of genetic regulation, which was unconventional in the late 1960s and early 1970s. While working at the City of Hope Medical Center in Duarte, California, Susumu Ohno in 1972 termed repetitive non-coding DNA “junk DNA,” but by the end of the 2000s many biologists abandoned that term based on multiple discoveries that indicated the functional roles of these non-coding DNA strands. “A Theory” concludes that the model proposed in it allows for both the profound developmental consistency within species and the remarkable variation that is observed in nature. “A Theory” indicates how evolutionary novelties can arise without the requirement of chance beneficial mutations. The model predicts that duplications and/or relocations of regulatory regions, if those events lead to changes in the location, level, and timing of transcription, could allow for stable systems of genes that enable evolutionary novelties, ultimately leading to the diversity of life. While researchers later accumulated evidence for such an explanation of body plan evolution, many still debated the importance of regulatory expansion in evolution as a whole. The descriptions and implications proposed in “A Theory” inspired thousands of papers on gene regulation, most of which treat “A Theory” as a cornerstone of evolutionary developmental biology.
Prior to “A Theory,” the concept of gene regulation had little theoretical support. In his 1940 book “The Material Basis of Evolution,” Richard Goldschmidt at the University of California in Berkeley, California, had considered how mutations could lead to changes in genetic regulation and impact evolution. In 1940 Edgar Stedman and Ellen Stedman at the University of Edinburgh in Edinburgh, UK, suggested that cells may differentiate due to gene activity, and they highlighted evidence that most cells in an organism contain identical genomes and yet the protein contents of cells vary. Despite the suggestions that gene regulation may be an important biological mechanism, these musings provided no causal explanations in the form of molecular mechanisms. Without such explanations, the Stedmans and other biologists could not infer how regulation could contribute to the structure, function, and evolution of higher organisms. Britten’s discovery of highly repetitive DNA in higher organisms in 1968 and Davidson’s work on gene expression and RNA synthesis preceded “Gene Regulation for Higher Cells: A Theory.” This theory of gene regulation incorporated current findings, proposed a detailed mechanistic model, and suggested broad evolutionary implications of such a regulatory mechanism.
The introduction of “A Theory” recapitulates pieces of evidence that, put together, led the authors to develop the model. It also introduces the main elements of the model. “A Theory” proposes that there must be a minimum of five interactive elements in a regulatory model to allow a genetic system to differentially respond to various stimuli.
First, there must be a producer gene, which produces a protein. This concept is analogous to the later concept of the coding region of genes. Second, there must be a receptor gene, a DNA sequence linked to a producer gene that, when bound, promotes the producer’s transcription.
Third, activator RNA binds to the receptor in a sequence-specific manner and signals transcription of the producer. While the authors dubbed this element RNA for simplicity, they predicted that activator RNA may be a protein, a prediction confirmed by later research into transcription factors and RNA polymerase.
Fourth, the genes that code for activator RNAs Britten and Davidson named integrator genes, and later research showed that they code for transcription factors. The authors argued that integrator genes need not be spatially linked to the genes that the activator RNA interacts with.
Finally, there must be DNA sequences that activator RNA can recognize and bind to, thus influencing the rate of transcription of the producer gene. These are termed sensor genes, and are analogous to the cis-regulatory elements described by later research.
“A Theory” argues that sensor genes most likely bind to an intermediary structure that can then bind to non-genetic stimuli. For example, the intermediary structure could be a specific protein that has the ability to interact with non-genetic stimuli like hormones. This indirect binding between a non-genetic agent and a sensor gene influences the transcription of the producer or integrator genes that sensors connect to.
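Taken together, the five elements form a simple signal chain. The toy sketch below is purely illustrative — the gene and sensor names are invented, and the 1969 paper contains no such formalism. It traces the chain the text describes: a stimulus trips a sensor, the sensor's integrator emits an activator, and every producer gene whose receptor matches that activator is transcribed.

```python
# Toy rendition of the Britten-Davidson elements (illustrative names only).
# Chain: stimulus -> sensor -> integrator -> activator -> receptor -> producer.

# Each sensor, when tripped, triggers its integrator, which emits an activator.
integrators = {"sensor_hormone_X": "activatorA"}

# Each producer gene is linked to a receptor that one activator recognizes.
producers = {
    "gene1": "activatorA",  # receptor recognized by activatorA
    "gene2": "activatorA",  # a second gene sharing the same receptor sequence
    "gene3": "activatorB",  # responds only to a different activator
}

def respond(stimulated_sensors):
    """Return the set of producer genes transcribed for the given stimuli."""
    activators = {integrators[s] for s in stimulated_sensors if s in integrators}
    return {gene for gene, receptor in producers.items() if receptor in activators}

print(sorted(respond({"sensor_hormone_X"})))  # ['gene1', 'gene2']
```

Note how two unlinked producer genes respond as a unit because they carry the same receptor sequence — the same logic the paper later uses to explain how repeated regulatory sequences coordinate batteries of genes.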
The second section of the paper describes the integrative function of the model’s elements in more detail, and it demonstrates their relationships using hypothetical wiring diagrams that Davidson would refine in later publications. Many biologists at the time held that histone proteins bind to DNA and inhibit transcription. The authors diverge from this idea, suggesting that histones are only general inhibitors and cannot control transcription in any meaningful way. Using the wiring diagrams as a proof of concept, “A Theory” illustrates a fine-tuned response mechanism at the level of individual genes. This illustration describes a system that can regulate the development of complex higher organisms.
As the authors explore the idea of the sensitivity of a genetic regulatory system, they propose their first genetic regulatory motif: the feed-forward loop. The model predicts that a fine-tuned network response to a specific initiating event requires that sensor genes be sensitive to the gene product that they activate. The authors state that self-regulation of a gene could be the reason for sequential patterns of gene activation that result in the stabilization and subsequent differentiation of cell types. This type of interaction, in which a transcription factor maintains transcription of its own gene via a sensor gene, Davidson later dubbed the feed-forward loop. Researchers have found feed-forward loops in nearly all genetic networks. Consistent with “A Theory’s” predictions, this motif helps lock in a cell’s differentiated state.
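The lock-in behavior attributed to this motif can be shown with a two-state toy model (an illustration of the idea, not the paper's mathematics): a gene is transcribed if either the external stimulus is present or its own product from the previous step is, so a transient stimulus leaves the gene permanently on.

```python
def run(stimulus_timeline):
    """Boolean toy model of a self-maintaining gene. At each step, the gene
    is on if the external stimulus is present OR the gene's own product
    (from the previous step) feeds back onto its sensor."""
    on = False
    states = []
    for stimulus in stimulus_timeline:
        on = stimulus or on  # self-maintaining loop: once on, stays on
        states.append(on)
    return states

# Stimulus present only at step 2, then removed.
print(run([False, True, False, False, False]))
# [False, True, True, True, True] -- the differentiated state persists
```

The gene stays on after the initiating event disappears, which is exactly the stabilization of a differentiated cell state that the paragraph describes.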
The third section of “A Theory” discusses the model’s implications for evolution. First, the authors discuss genome size as it relates to phenotypic complexity. Citing previous studies on genome sizes in extant taxa, the paper suggests that an increase in producer gene sequences cannot account for the nearly thirty-fold increase in genome size between simple organisms such as sponges and higher organisms such as mammals. “A Theory” suggests that the principal difference between organisms of different complexity must be due to an expansion in regulatory genes, resulting in an expanding range of cellular activities and complexity. Researchers confirmed that proposal in the 1980s when they discovered Hox genes in many different taxa and correlated expansions and duplications of Hox genes with established explosions in physiological diversity. The model suggests that the most efficient way to facilitate evolutionary change is not to evolve novel gene function, but to make use of existing network components in novel ways. The model contains all the elements that enable such a regulatory expansion to happen.
The fourth section of “A Theory” addresses the experimental justification of the elements of the model. The authors admit that while the definitions of the elements of their model may not be strictly accurate, the functions described must be present in the true mechanism of gene regulation. The evidence that supports the existence of each element is then discussed using specific examples, such as the two subunits of hemoglobin, whose producer genes were known to be genetically unlinked but both essential to the proper biological function of red blood cells (erythrocytes). The model indicates that separate genes can be co-opted into a single network by the presence of the same sensor genes, and that these sensor genes would be repeated throughout the genome. Roy Britten and David Kohne’s 1968 publication “Repeated Sequences in DNA. Hundreds of Thousands of Copies of DNA Sequences Have Been Incorporated into the Genomes of Higher Organisms” described highly repetitive sequences in the genomes of higher organisms. Building on that description, “A Theory” indicates that sensor genes may be utilized repeatedly throughout the genome, and suggests that the widespread expansion of regulatory genes throughout the genome correlates with the complexity of higher organisms.
The authors state that activator RNA is the heart of the regulatory model, though its existence posed a problem for their model, as the existence of nuclear-confined RNA was unclear in 1969. While there was evidence for the ability of RNA to bind to DNA, there was no evidence that this happened in situ in the nucleus. However, the authors’ proposal that activator RNA may not be RNA at all, but a protein that can bind to DNA, was confirmed with the discovery of transcription factors in the early 1970s. As predicted in “A Theory,” transcription factors are at the heart of gene regulation.
In the next section “A Theory” suggests that if the products of integrator genes impact a multitude of different genes, then a mutation to an integrator gene should have many effects in various tissues throughout an organism. Throughout the 1920s Thomas Hunt Morgan, working at Columbia University in New York, New York, cataloged the extensive range of phenotypic abnormalities caused by mutations in the Notch locus in the fruit fly Drosophila. The wide range of abnormal morphologies caused by Notch mutants seemed to provide evidence for the existence of integrator genes. While it has since been discovered that the Notch gene is not an integrator by the original definition, the Notch signaling pathway does influence development by indirectly regulating transcription.
In the final section, “A Theory” contemplates the evolutionary and genetic consequences of the model. First, because DNA sequences are inactive in transcription unless something turns them on, the genome of an organism can withstand potentially deleterious or neutral mutations in inactive regions of the genome. Only through direct incorporation into a regulatory system would the biological repercussions of a divergent DNA sequence be tested. Second, the authors argue that the model balances extreme consistency with flexibility. Conservation of developmental patterns is maintained through complex and often redundant regulatory interactions, while flexibility is allowed through integration of gene products in different tissues without changing the genes themselves. After decades of theoretical and methodological contributions, Eric Davidson published the first experimentally validated, systematic description of a complete gene regulatory network in 2002. “A Genomic Regulatory Network for Development” describes the gene regulatory network controlling specification of the endomesoderm of the purple sea urchin, Strongylocentrotus purpuratus. Inspired by the regulatory circuit first described in “A Theory”, the publication maps the complex interactions of over forty genes, and the network architecture reveals many features of the early development of an organism. “A Theory” predicts that development unfolds through the activation and interaction of multiple gene networks, such as the endomesoderm specification network in S. purpuratus, and that the architectural flexibility of those networks results in countless combinations of genetic interactions.
“Gene Regulation for Higher Cells: A Theory” influenced the fields of genetics and development. Thousands of publications on gene regulation have appeared since “A Theory” was published, including more than 350 by Davidson. By incorporating molecular descriptions of development into evolutionary accounts of variation, the publication was a forerunner of evolutionary developmental biology, which, according to evolutionary and developmental biologists such as Sean Carroll and Günter Wagner, both working in the US, holds some of the most promising perspectives on evolutionary biology.
Sources
- Britten, Roy J., and Eric H. Davidson. "Gene regulation for higher cells: a theory." Science 165 (1969):349–57.
- Britten, Roy J. and Eric H. Davidson. "Repetitive and Non-Repetitive DNA Sequences and a Speculation on Origins of Evolutionary Novelty." Quarterly Review of Biology 46 (1971):111–38.
- Britten, Roy J., and David E. Kohne. "Repeated Sequences in DNA. Hundreds of Thousands of Copies of DNA Sequences Have Been Incorporated into the Genomes of Higher Organisms." Science 161 (1968):529–40.
- Davidson, Eric. H., Jonathan P. Rast, Paola Oliveri, Andrew Ransick, Cristina Calestani, Chiou-Hwa Yuh, Takuya Minokawa, Gabriele Amore, Veronica Hinman, Cesar Arenas-Mena, Ochan Otim, C. Titus Brown, Carolina B. Livi, Pei Yun Lee, Roger Revilla, Alistair G. Rust, Zheng jun Pan, Maria J. Schilstra, Peter J.C. Clarke, Maria I. Arnone, Lee Rowen, R. Andrew Cameron, David R. McClay, Leroy Hood, and Hamid Bolouri. "A Genomic Regulatory Network For Development." Science 295 (2002):1669–78.
- Depew, David J., and Bruce H. Weber. Darwinism Evolving: Systems Dynamics and the Genealogy of Natural Selection. Cambridge, MA: MIT Press, 1995.
- Goldschmidt, Richard. The Material Basis of Evolution. New Haven, CT: Yale University Press, 1940.
- Laubichler, Manfred. "Evolutionary Developmental Biology." In The Philosophy of Biology, eds. David L. Hull and Michael Ruse, 342–60. Cambridge: Cambridge University Press, 2007.
- Ohno, Susumu. "So Much "Junk" DNA in our Genome." Brookhaven Symposia in Biology 23 (1972): 366–70.
- Stedman, Edgar, and Ellen Stedman. "Cell Specificity of Histones." Nature 166 (1950):780–1.
[By Aaron Dubrow, originally published on the TACC website]
E. coli bacteria multiplied in their Erlenmeyer flasks, evolving slowly over more than 50,000 generations. Throughout that time, scientists at Michigan State University kept records of each successive generation, waiting for the telltale signs of evolution to show themselves.
The most dramatic sign first appeared after approximately 33,000 generations in the form of a cloudy bloom of bacteria in one flask.
"Over time, the E. coli developed a mutation that allowed them to have a leap in function, an innovation," said Jeffrey Barrick, a former researcher in the Michigan State lab, and now professor of chemistry and biochemistry at The University of Texas at Austin. "It was feeding on citrate, an untapped carbon source that had been there all along. This enabled the bacteria that mutated to have a huge advantage, to take over the whole population, and even grow to a higher population density inside of this flask, which you could see by eye."
The inability of E. coli to consume citrate under normal conditions is a defining feature of the species. How did it suddenly overcome this deep-rooted limitation?
Drawing on the records of each successive generation, the researchers identified not only the final cells that had realized this new potential, but also all the preceding lines whose subtle, latent changes had enabled the final ability to emerge.
"About 70 or 80 changes occurred between the ancestor and this final bacterium," Barrick said. "We picked a bunch of individuals from the population at different time points and created a phylogenetic tree."
Using next-generation DNA sequencers and the powerful Lonestar and Ranger supercomputers at the Texas Advanced Computing Center (TACC) to test 40 genomes from the population, the researchers traced the key changes that potentiated the mutation and showed the role of promoter capture and altered gene regulation in evolutionary innovations.
The results of the study were published in Nature in September 2012.
To make this discovery, Barrick used a host of technologies that have made DNA sequencing cheaper, faster, and more accurate in recent years. He also developed tools, including breseq, that can find more mutations, including kinds that are difficult to locate.
"None of the other tools were up to par," he explained. "They didn't find these other categories of mutations. And from other evidence, we knew they weren't finding about one-third of the mutations that were happening."
Duplications, reordering, and mobile genetic elements that can jump to different parts of the genome are hard to identify with conventional tools, but are important for the evolution of novelty because they change the genome more than one letter at a time, sometimes rearranging sequences entirely.
"Breseq is a transformative application for our field," said Vaughn Cooper, an associate professor of molecular, cellular and biomedical sciences at the University of New Hampshire. "It has made what was typically a month-long process of data filtering and analysis possible within a day or two, even without high-power computing. Supercomputing would make the output from his tool applied to even larger data sets virtually instantaneous."
If the E. coli experiment showed the potential of genetic analysis to reveal the path to evolutionary innovation and to map it across generations, the work that interests Barrick now applies this understanding to the design of artificial life.
"I think a lot about the engineering aspect: making bacteria do useful things," Barrick said. "I would like bacteria to solve our energy crisis, whether that means making biofuels or something crazy like putting molecular motors in algae that push water to run a generator."
Despite all we know about E. coli, scientists are hard pressed to pick mutations that make bacteria really good at a given process. But Barrick believes that understanding where to make those changes, and how to make different kinds of changes, could make the process of evolutionary design smarter and more efficient.
His report on efforts to use computer algorithms to test strategies for improving the evolution of complex functional nucleic acids was awarded the Best Synthetic Biology Paper prize at Artificial Life XIII, the Thirteenth International Conference on the Synthesis and Simulation of Living Systems, in July 2012.
Across the field, advanced computing is allowing researchers like Barrick to analyze genomes, develop and test synthetic organisms, and experiment with artificial populations, all of which help scientists explore evolution on a far finer-grained level.
"When this E. coli experiment started, all they could measure was fitness. They had no clue why certain E. coli strains were better," Barrick said. "Now we can understand at the molecular level what's going on, and that's really powerful."
# Mitochondrial DNA
Mitochondrial DNA (mtDNA or mDNA) is the DNA located in mitochondria, cellular organelles within eukaryotic cells that convert chemical energy from food into a form that cells can use, such as adenosine triphosphate (ATP). Mitochondrial DNA is only a small portion of the DNA in a eukaryotic cell; most of the DNA can be found in the cell nucleus and, in plants and algae, also in plastids such as chloroplasts.
Human mitochondrial DNA was the first significant part of the human genome to be sequenced. This sequencing revealed that the human mtDNA includes 16,569 base pairs and encodes 13 proteins.
Since animal mtDNA evolves faster than nuclear genetic markers, it represents a mainstay of phylogenetics and evolutionary biology. It also permits an examination of the relatedness of populations, and so has become important in anthropology and biogeography.
## Origin
Nuclear and mitochondrial DNA are thought to be of separate evolutionary origin, with the mtDNA being derived from the circular genomes of bacteria engulfed by the early ancestors of today's eukaryotic cells. This theory is called the endosymbiotic theory. In the cells of extant organisms, the vast majority of the proteins present in the mitochondria (numbering approximately 1500 different types in mammals) are coded for by nuclear DNA, but the genes for some, if not most, of them are thought to have originally been of bacterial origin, having since been transferred to the eukaryotic nucleus during evolution.
The reasons mitochondria have retained some genes are debated. The existence in some species of mitochondrion-derived organelles lacking a genome suggests that complete gene loss is possible, and transferring mitochondrial genes to the nucleus has several advantages. The difficulty of targeting remotely-produced hydrophobic protein products to the mitochondrion is one hypothesis for why some genes are retained in mtDNA; colocalisation for redox regulation is another, citing the desirability of localised control over mitochondrial machinery. Recent analysis of a wide range of mtDNA genomes suggests that both these features may dictate mitochondrial gene retention.
## Genome structure and diversity
Across all organisms, there are six main genome types found in mitochondrial genomes, classified by their structure (i.e., circular versus linear), size, presence of introns or plasmid-like structures, and whether the genetic material is a single molecule or a collection of homogeneous or heterogeneous molecules.
In many unicellular organisms (e.g., the ciliate Tetrahymena and the green alga Chlamydomonas reinhardtii), and in rare cases also in multicellular organisms (e.g. in some species of Cnidaria), the mtDNA is found as linearly organized DNA. Most of these linear mtDNAs possess telomerase-independent telomeres (i.e., the ends of the linear DNA) with different modes of replication, which have made them interesting objects of research because many of these unicellular organisms with linear mtDNA are known pathogens.
### Animals
Most animals, specifically bilaterian animals, have a circular mitochondrial genome. However, the clades Medusozoa and Calcarea contain species with linear mitochondrial chromosomes.
In terms of base pairs, the anemone Isarachnanthus nocturnus has the largest mitochondrial genome of any animal at 80,923 bp.
In February 2020, a jellyfish-related parasite – Henneguya salminicola – was discovered that lacks a mitochondrial genome but retains structures deemed mitochondrion-related organelles. Moreover, nuclear DNA genes involved in aerobic respiration and in mitochondrial DNA replication and transcription were either absent or present only as pseudogenes. It is the first multicellular organism known to lack aerobic respiration entirely, living completely free of oxygen dependency.
### Plants and fungi
There are three different mitochondrial genome types found in plants and fungi. The first is a circular genome with introns (type 2), ranging from 19 to 1000 kbp in length. The second is a circular genome (about 20–1000 kbp) that also has a plasmid-like structure of about 1 kb (type 3). The final genome type found in plants and fungi is a linear genome made up of homogeneous DNA molecules (type 5).
Great variation in mtDNA gene content and size exists among fungi and plants, although there appears to be a core subset of genes that are present in all eukaryotes (except for the few that have no mitochondria at all). In Fungi, however, there is no single gene shared among all mitogenomes. Some plant species have enormous mitochondrial genomes, with Silene conica mtDNA containing as many as 11,300,000 base pairs. Surprisingly, even those huge mtDNAs contain the same number and kinds of genes as related plants with much smaller mtDNAs. The genome of the mitochondrion of the cucumber (Cucumis sativus) consists of three circular chromosomes (lengths 1556, 84 and 45 kilobases), which are entirely or largely autonomous with regard to their replication.
### Protists
Protists contain the most diverse mitochondrial genomes, with five different types found in this kingdom. Type 2, type 3 and type 5 mentioned in the plant and fungal genomes also exist in some protists, as do two unique genome types. One of these unique types is a heterogeneous collection of circular DNA molecules (type 4) while the other is a heterogeneous collection of linear molecules (type 6). Genome types 4 and 6 each range from 1–200 kbp in size.
The smallest mitochondrial genome sequenced to date is the 5,967 bp mtDNA of the parasite Plasmodium falciparum.
Endosymbiotic gene transfer, the process by which genes that were coded in the mitochondrial genome are transferred to the cell's main genome, likely explains why more complex organisms such as humans have smaller mitochondrial genomes than simpler organisms such as protists.
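The six genome types surveyed across the animal, plant/fungal, and protist sections can be collected into a small lookup table. The descriptions below paraphrase this article; type 1 is not spelled out in the text and is assumed here to be the single circular molecule typical of bilaterian animals.

```python
# Lookup table for the six mitochondrial genome types described in this
# article, keyed by type number. Type 1 is an assumption (the typical
# circular animal mtDNA); the rest paraphrase the sections above.
MT_GENOME_TYPES = {
    1: "single circular molecule (typical of bilaterian animals; assumed)",
    2: "circular, with introns; ~19-1000 kbp; plants, fungi, some protists",
    3: "circular, ~20-1000 kbp, with a ~1 kb plasmid-like structure",
    4: "heterogeneous collection of circular molecules; protists; 1-200 kbp",
    5: "linear, homogeneous DNA molecules; plants, fungi, some protists",
    6: "heterogeneous collection of linear molecules; protists; 1-200 kbp",
}

def describe(type_numbers):
    """Look up the structural description for each reported type."""
    return {t: MT_GENOME_TYPES[t] for t in sorted(type_numbers)}

# the three types reported for plants and fungi in this article
for t, desc in describe({2, 3, 5}).items():
    print(t, desc)
```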
## Replication
Mitochondrial DNA is replicated by the DNA polymerase gamma complex which is composed of a 140 kDa catalytic DNA polymerase encoded by the POLG gene and two 55 kDa accessory subunits encoded by the POLG2 gene. The replisome machinery is formed by DNA polymerase, TWINKLE and mitochondrial SSB proteins. TWINKLE is a helicase, which unwinds short stretches of dsDNA in the 5' to 3' direction. All these polypeptides are encoded in the nuclear genome.
During embryogenesis, replication of mtDNA is strictly down-regulated from the fertilized oocyte through the preimplantation embryo. The resulting reduction in per-cell copy number of mtDNA plays a role in the mitochondrial bottleneck, exploiting cell-to-cell variability to ameliorate the inheritance of damaging mutations. According to Justin St. John and colleagues, "At the blastocyst stage, the onset of mtDNA replication is specific to the cells of the trophectoderm. In contrast, the cells of the inner cell mass restrict mtDNA replication until they receive the signals to differentiate to specific cell types."
## Genes on the human mtDNA and their transcription
The two strands of the human mitochondrial DNA are distinguished as the heavy strand and the light strand. The heavy strand is rich in guanine and encodes 12 subunits of the oxidative phosphorylation system, two ribosomal RNAs (12S and 16S), and 14 transfer RNAs (tRNAs). The light strand encodes one subunit, and 8 tRNAs. So, altogether mtDNA encodes for two rRNAs, 22 tRNAs, and 13 protein subunits, all of which are involved in the oxidative phosphorylation process.
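A quick bookkeeping check of the strand-by-strand counts quoted above confirms the totals of 13 protein subunits, 2 rRNAs, and 22 tRNAs, i.e. 37 mitochondrial genes in all:

```python
# Bookkeeping check of the human mtDNA gene counts quoted above:
# heavy strand: 12 OXPHOS subunits, 2 rRNAs, 14 tRNAs;
# light strand: 1 subunit, no rRNAs, 8 tRNAs.
heavy = {"proteins": 12, "rRNAs": 2, "tRNAs": 14}
light = {"proteins": 1, "rRNAs": 0, "tRNAs": 8}

totals = {k: heavy[k] + light[k] for k in heavy}
print(totals)                       # {'proteins': 13, 'rRNAs': 2, 'tRNAs': 22}
assert totals == {"proteins": 13, "rRNAs": 2, "tRNAs": 22}
assert sum(totals.values()) == 37   # 37 genes in total
```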
*Figure: the complete sequence of the human mitochondrial DNA in graphic form.*
Between most (but not all) protein-coding regions, tRNAs are present (see the human mitochondrial genome map). During transcription, the tRNAs acquire their characteristic L-shape that gets recognized and cleaved by specific enzymes. With the mitochondrial RNA processing, individual mRNA, rRNA, and tRNA sequences are released from the primary transcript. Folded tRNAs therefore act as secondary structure punctuations.
### Regulation of transcription
The promoters for the initiation of the transcription of the heavy and light strands are located in the main non-coding region of the mtDNA called the displacement loop, the D-loop. There is evidence that the transcription of the mitochondrial rRNAs is regulated by the heavy-strand promoter 1 (HSP1), and the transcription of the polycistronic transcripts coding for the protein subunits are regulated by HSP2.
Measurement of the levels of the mtDNA-encoded RNAs in bovine tissues has shown that there are major differences in the expression of the mitochondrial RNAs relative to total tissue RNA. Among the 12 tissues examined the highest level of expression was observed in heart, followed by brain and steroidogenic tissue samples.
As demonstrated by the effect of the trophic hormone ACTH on adrenal cortex cells, the expression of the mitochondrial genes may be strongly regulated by external factors, apparently to enhance the synthesis of mitochondrial proteins necessary for energy production. Interestingly, while the expression of protein-encoding genes was stimulated by ACTH, the levels of the mitochondrial 16S rRNA showed no significant change.
## Mitochondrial inheritance
In most multicellular organisms, mtDNA is inherited from the mother (maternally inherited). Mechanisms for this include simple dilution (an egg contains on average 200,000 mtDNA molecules, whereas a healthy human sperm has been reported to contain on average 5 molecules), degradation of sperm mtDNA in the male genital tract and in the fertilized egg; and, at least in a few organisms, failure of sperm mtDNA to enter the egg. Whatever the mechanism, this single parent (uniparental inheritance) pattern of mtDNA inheritance is found in most animals, most plants and also in fungi.
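The simple-dilution mechanism can be put in numbers using the figures quoted above: even if every paternal molecule entered the zygote and survived, it would account for only about 0.0025% of the zygote's mtDNA.

```python
# Simple-dilution arithmetic from the figures quoted above: an egg
# carries ~200,000 mtDNA molecules versus ~5 in a sperm, so the
# maximum possible paternal share of the zygote's mtDNA is tiny.
egg, sperm = 200_000, 5
paternal_fraction = sperm / (egg + sperm)
print(f"{paternal_fraction:.6%}")   # about 0.0025% of zygote mtDNA
```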
In a study published in 2018, human babies were reported to inherit mtDNA from both their fathers and their mothers resulting in mtDNA heteroplasmy.
### Female inheritance
In sexual reproduction, mitochondria are normally inherited exclusively from the mother; the mitochondria in mammalian sperm are usually destroyed by the egg cell after fertilization. Also, mitochondria are located only in the sperm tail, which is used to propel the sperm cell and is sometimes lost during fertilization. In 1999 it was reported that paternal sperm mitochondria (containing mtDNA) are marked with ubiquitin to select them for later destruction inside the embryo. Some in vitro fertilization techniques, particularly the injection of a sperm into an oocyte, may interfere with this.
The fact that mitochondrial DNA is mostly maternally inherited enables genealogical researchers to trace maternal lineage far back in time. (Y-chromosomal DNA, paternally inherited, is used in an analogous way to determine the patrilineal history.) This is usually accomplished on human mitochondrial DNA by sequencing the hypervariable control regions (HVR1 or HVR2), and sometimes the complete molecule of the mitochondrial DNA, as a genealogical DNA test. HVR1, for example, consists of about 440 base pairs. These 440 base pairs are compared to the same regions of other individuals (either specific people or subjects in a database) to determine maternal lineage. Most often, the comparison is made with the revised Cambridge Reference Sequence. Vilà et al. have published studies tracing the matrilineal descent of domestic dogs from wolves. The concept of the Mitochondrial Eve is based on the same type of analysis, attempting to discover the origin of humanity by tracking the lineage back in time.
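The comparison workflow described above (sequence a hypervariable region, then compare it position by position with a reference such as the revised Cambridge Reference Sequence) can be sketched minimally. The sequences below are short invented stand-ins, not real rCRS data.

```python
# Minimal sketch of a control-region comparison: report the 1-based
# positions at which an aligned sample differs from a reference.
# The ten-base sequences are made-up placeholders, not real HVR1 data.

def diff_positions(reference, sample):
    """Positions (1-based) where two equal-length aligned sequences disagree."""
    if len(reference) != len(sample):
        raise ValueError("sequences must be aligned to equal length")
    return [i + 1 for i, (r, s) in enumerate(zip(reference, sample)) if r != s]

rcrs_hvr1 = "ACGTACGTAC"    # placeholder reference fragment
subject   = "ACGTTCGTAA"
print(diff_positions(rcrs_hvr1, subject))   # [5, 10]
```

Two samples from the same matriline would show the same difference list against the reference, which is the logic behind the haplotype comparison described in the text.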
### The mitochondrial bottleneck
Entities subject to uniparental inheritance and with little to no recombination may be expected to be subject to Muller's ratchet, the accumulation of deleterious mutations until functionality is lost. Animal populations of mitochondria avoid this through a developmental process known as the mtDNA bottleneck. The bottleneck exploits random processes in the cell to increase the cell-to-cell variability in mutant load as an organism develops: a single egg cell with some proportion of mutant mtDNA thus produces an embryo in which different cells have different mutant loads. Cell-level selection may then act to remove those cells with more mutant mtDNA, leading to a stabilisation or reduction in mutant load between generations. The mechanism underlying the bottleneck is debated, with a recent mathematical and experimental metastudy providing evidence for a combination of the random partitioning of mtDNAs at cell divisions and the random turnover of mtDNA molecules within the cell.
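A toy simulation can illustrate the bottleneck's key property: random partitioning of molecules at cell division leaves the average mutant load roughly unchanged while spreading cell-to-cell variability, giving cell-level selection something to act on. The copy numbers and division scheme below are illustrative assumptions, not a model from the literature.

```python
# Toy mtDNA bottleneck: each division randomly partitions a cell's
# mutant and wild-type molecules between two daughters, which then
# re-amplify back to a fixed copy number. Parameters are illustrative.
import random
random.seed(1)

def divide(cell, generations):
    """Binomially partition each cell's molecules over successive divisions."""
    cells = [cell]                       # (total copies, mutant copies)
    for _ in range(generations):
        next_gen = []
        for n_total, n_mut in cells:
            # each molecule independently goes to daughter A or B
            mut_a = sum(random.random() < 0.5 for _ in range(n_mut))
            wt_a = sum(random.random() < 0.5 for _ in range(n_total - n_mut))
            for n, k in ((mut_a + wt_a, mut_a),
                         (n_total - mut_a - wt_a, n_mut - mut_a)):
                # re-amplify the daughter to n_total copies, keeping its ratio
                load = k / n if n else 0.0
                next_gen.append((n_total, round(load * n_total)))
        cells = next_gen
    return [m / t for t, m in cells]

loads = divide((100, 20), generations=8)   # start at 20% mutant load
mean = sum(loads) / len(loads)
spread = max(loads) - min(loads)
print(round(mean, 3), round(spread, 3))    # mean stays near 0.20, spread grows
```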
### Male inheritance
Male mitochondrial DNA inheritance has been discovered in Plymouth Rock chickens. Evidence supports rare instances of male mitochondrial inheritance in some mammals as well. Specifically, documented occurrences exist for mice, where the male-inherited mitochondria were subsequently rejected. It has also been found in sheep, and in cloned cattle. Rare cases of male mitochondrial inheritance have been documented in humans. Although many of these cases involve cloned embryos or subsequent rejection of the paternal mitochondria, others document in vivo inheritance and persistence under lab conditions.
Doubly uniparental inheritance of mtDNA is observed in bivalve mollusks. In those species, females have only one type of mtDNA (F), whereas males have F type mtDNA in their somatic cells, but M type of mtDNA (which can be as much as 30% divergent) in germline cells. Paternally inherited mitochondria have additionally been reported in some insects such as fruit flies, honeybees, and periodical cicadas.
### Mitochondrial donation
An IVF technique known as mitochondrial donation or mitochondrial replacement therapy (MRT) results in offspring containing mtDNA from a donor female, and nuclear DNA from the mother and father. In the spindle transfer procedure, the nucleus of an egg is inserted into the cytoplasm of an egg from a donor female which has had its nucleus removed, but still contains the donor female's mtDNA. The composite egg is then fertilized with the male's sperm. The procedure is used when a woman with genetically defective mitochondria wishes to procreate and produce offspring with healthy mitochondria. The first known child to be born as a result of mitochondrial donation was a boy born to a Jordanian couple in Mexico on 6 April 2016.
## Mutations and disease
### Susceptibility
The concept that mtDNA is particularly susceptible to reactive oxygen species generated by the respiratory chain due to its proximity remains controversial. mtDNA does not accumulate any more oxidative base damage than nuclear DNA. It has been reported that at least some types of oxidative DNA damage are repaired more efficiently in mitochondria than they are in the nucleus. mtDNA is packaged with proteins which appear to be as protective as proteins of the nuclear chromatin. Moreover, mitochondria evolved a unique mechanism which maintains mtDNA integrity through degradation of excessively damaged genomes followed by replication of intact/repaired mtDNA. This mechanism is not present in the nucleus and is enabled by multiple copies of mtDNA present in mitochondria. The outcome of mutation in mtDNA may be an alteration in the coding instructions for some proteins, which may have an effect on organism metabolism and/or fitness.
### Genetic illness
Mutations of mitochondrial DNA can lead to a number of illnesses including exercise intolerance and Kearns–Sayre syndrome (KSS), which causes a person to lose full function of heart, eye, and muscle movements. Some evidence suggests that they might be major contributors to the aging process and age-associated pathologies. Particularly in the context of disease, the proportion of mutant mtDNA molecules in a cell is termed heteroplasmy. The within-cell and between-cell distributions of heteroplasmy dictate the onset and severity of disease and are influenced by complicated stochastic processes within the cell and during development.
Mutations in mitochondrial tRNAs can be responsible for severe diseases like the MELAS and MERRF syndromes.
Mutations in nuclear genes that encode proteins that mitochondria use can also contribute to mitochondrial diseases. These diseases do not follow mitochondrial inheritance patterns, but instead follow Mendelian inheritance patterns.
### Use in disease diagnosis
Recently, a mutation in mtDNA has been used to help diagnose prostate cancer in patients with negative prostate biopsies. mtDNA alterations can be detected in the bio-fluids of patients with cancer. mtDNA is characterized by a high rate of polymorphisms and mutations, some of which are increasingly recognized as important causes of human pathologies such as oxidative phosphorylation (OXPHOS) disorders, maternally inherited diabetes and deafness (MIDD), type 2 diabetes mellitus, neurodegenerative disease, heart failure, and cancer.
### Relationship with aging
Though the idea is controversial, some evidence suggests a link between aging and mitochondrial genome dysfunction. In essence, mutations in mtDNA upset a careful balance of reactive oxygen species (ROS) production and enzymatic ROS scavenging (by enzymes like superoxide dismutase, catalase, glutathione peroxidase, and others). However, some mutations that increase ROS production (e.g., by reducing antioxidant defenses) in worms increase, rather than decrease, their longevity. Also, naked mole rats, rodents about the size of mice, live about eight times longer than mice despite having reduced antioxidant defenses and increased oxidative damage to biomolecules compared to mice. Once, there was thought to be a positive feedback loop at work (a 'vicious cycle'): as mitochondrial DNA accumulates genetic damage caused by free radicals, the mitochondria lose function and leak free radicals into the cytosol, and the resulting decrease in mitochondrial function reduces overall metabolic efficiency. However, this concept was conclusively disproved when it was demonstrated that mice genetically altered to accumulate mtDNA mutations at an accelerated rate do age prematurely, but their tissues do not produce more ROS as predicted by the 'vicious cycle' hypothesis. Supporting a link between longevity and mitochondrial DNA, some studies have found correlations between biochemical properties of the mitochondrial DNA and the longevity of species. Extensive research is being conducted to further investigate this link and methods to combat aging; presently, gene therapy and nutraceutical supplementation are popular areas of ongoing research. Bjelakovic et al. analyzed the results of 78 studies between 1977 and 2012, involving a total of 296,707 participants, and concluded that antioxidant supplements do not reduce all-cause mortality or extend lifespan, while some of them, such as beta carotene, vitamin E, and higher doses of vitamin A, may actually increase mortality.
A recent study showed that dietary restriction can reverse aging alterations by affecting the accumulation of mtDNA damage in several organs of rats. For example, dietary restriction prevented age-related accumulation of mtDNA damage in the cortex and decreased it in the lung and testis.
### Neurodegenerative diseases
Increased mtDNA damage is a feature of several neurodegenerative diseases.
The brains of individuals with Alzheimer's disease have elevated levels of oxidative DNA damage in both nuclear DNA and mtDNA, but the mtDNA has approximately 10-fold higher levels than nuclear DNA. It has been proposed that aged mitochondria are the critical factor in the origin of neurodegeneration in Alzheimer's disease.
In Huntington’s disease, mutant huntingtin protein causes mitochondrial dysfunction involving inhibition of mitochondrial electron transport, higher levels of reactive oxygen species and increased oxidative stress. Mutant huntingtin protein promotes oxidative damage to mtDNA, as well as nuclear DNA, that may contribute to Huntington’s disease pathology.
The DNA oxidation product 8-oxoguanine (8-oxoG) is a well-established marker of oxidative DNA damage. In persons with amyotrophic lateral sclerosis (ALS), the enzymes that normally repair 8-oxoG DNA damages in the mtDNA of spinal motor neurons are impaired. Thus oxidative damage to mtDNA of motor neurons may be a significant factor in the etiology of ALS.
### Correlation of the mtDNA base composition with animal life spans
Over the past decade, an Israeli research group led by Professor Vadim Fraifeld has shown that strong and significant correlations exist between the mtDNA base composition and animal species-specific maximum life spans. As demonstrated in their work, higher mtDNA guanine + cytosine content (GC%) strongly associates with longer maximum life spans across animal species. An additional observation is that the mtDNA GC% correlation with the maximum life spans is independent of the well-known correlation between animal species metabolic rate and maximum life spans. The mtDNA GC% and resting metabolic rate explain the differences in animal species maximum life spans in a multiplicative manner (i.e., species maximum life span = their mtDNA GC% * metabolic rate). To support the scientific community in carrying out comparative analyses between mtDNA features and longevity across animals, a dedicated database was built named MitoAge.
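The multiplicative relation quoted above (maximum life span proportional to mtDNA GC% times metabolic rate) becomes additive after a log transform, which is how such models are typically fit. The numbers and coefficients below are invented for illustration only, not values from the Fraifeld group's work.

```python
# Toy multiplicative model of the form described above:
#   lifespan = e^c * (GC%)^a * (metabolic rate)^b
# so log(lifespan) is linear in log(GC%) and log(rate).
# Coefficients and inputs here are hypothetical placeholders.
import math

def predict_lifespan(gc_percent, metabolic_rate, a=1.0, b=1.0, c=0.0):
    """Evaluate the multiplicative model for one species."""
    return math.exp(c) * gc_percent ** a * metabolic_rate ** b

# log-linearity check with the default coefficients (a = b = 1, c = 0):
lhs = math.log(predict_lifespan(44.0, 0.5))
rhs = math.log(44.0) + math.log(0.5)
print(math.isclose(lhs, rhs))   # True
```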
### Relationship with non-B (non-canonical) DNA structures
Deletion breakpoints frequently occur within or near regions showing non-canonical (non-B) conformations, namely hairpins, cruciforms and cloverleaf-like elements. Moreover, there is data supporting the involvement of helix-distorting intrinsically curved regions and long G-tetrads in eliciting instability events. In addition, higher breakpoint densities were consistently observed within GC-skewed regions and in the close vicinity of the degenerate sequence motif YMMYMNNMMHM.
## Use in forensics
Unlike nuclear DNA, which is inherited from both parents and in which genes are rearranged in the process of recombination, there is usually no change in mtDNA from parent to offspring. Although mtDNA also recombines, it does so with copies of itself within the same mitochondrion. Because of this and because the mutation rate of animal mtDNA is higher than that of nuclear DNA, mtDNA is a powerful tool for tracking ancestry through females (matrilineage) and has been used in this role to track the ancestry of many species back hundreds of generations.
mtDNA testing can be used by forensic scientists in cases where nuclear DNA is severely degraded. Autosomal cells only have two copies of nuclear DNA, but can have hundreds of copies of mtDNA due to the multiple mitochondria present in each cell. This means highly degraded evidence that would not be beneficial for STR analysis could be used in mtDNA analysis. mtDNA may be present in bones, teeth, or hair, which could be the only remains left in the case of severe degradation. In contrast to STR analysis, mtDNA sequencing uses Sanger sequencing. The known sequence and questioned sequence are both compared to the Revised Cambridge Reference Sequence to generate their respective haplotypes. If the known sample sequence and questioned sequence originated from the same matriline, one would expect to see identical sequences and identical differences from the rCRS. Cases arise where there are no known samples to collect and the unknown sequence can be searched in a database such as EMPOP. The Scientific Working Group on DNA Analysis Methods recommends three conclusions for describing the differences between a known mtDNA sequence and a questioned mtDNA sequence: exclusion for two or more differences between the sequences, inconclusive if there is one nucleotide difference, or cannot exclude if there are no nucleotide differences between the two sequences.
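The SWGDAM reporting rule quoted above reduces to a small decision function. This sketch assumes the two haplotypes are already aligned to equal length; the four-base sequences are placeholders, not real haplotypes.

```python
# The SWGDAM reporting rule described above: two or more nucleotide
# differences -> exclusion; exactly one -> inconclusive; none -> cannot
# exclude. Assumes pre-aligned haplotypes of equal length.

def swgdam_conclusion(known, questioned):
    """Compare two aligned mtDNA haplotypes and report per SWGDAM."""
    if len(known) != len(questioned):
        raise ValueError("haplotypes must be aligned to equal length")
    differences = sum(k != q for k, q in zip(known, questioned))
    if differences >= 2:
        return "exclusion"
    return "inconclusive" if differences == 1 else "cannot exclude"

print(swgdam_conclusion("ACGT", "ACGT"))  # cannot exclude
print(swgdam_conclusion("ACGT", "ACGA"))  # inconclusive
print(swgdam_conclusion("ACGT", "TCGA"))  # exclusion
```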
The rapid mutation rate (in animals) makes mtDNA useful for assessing genetic relationships of individuals or groups within a species and also for identifying and quantifying the phylogeny (evolutionary relationships; see phylogenetics) among different species. To do this, biologists determine and then compare the mtDNA sequences from different individuals or species. Data from the comparisons are used to construct a network of relationships among the sequences, which provides an estimate of the relationships among the individuals or species from which the mtDNAs were taken. mtDNA can be used to estimate the relationship between both closely related and distantly related species. Due to the high mutation rate of mtDNA in animals, the third positions of the codons change relatively rapidly, and thus provide information about the genetic distances among closely related individuals or species. On the other hand, the substitution rate of mitochondrial proteins is very low, so amino acid changes accumulate slowly (with correspondingly slow changes at first and second codon positions) and thus provide information about the genetic distances of distantly related species. Statistical models that treat substitution rates among codon positions separately can thus be used to simultaneously estimate phylogenies that contain both closely and distantly related species.
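The idea that third codon positions carry the signal for close relationships, while first and second positions carry the signal for distant ones, can be illustrated with a toy counter. The function and the sequences below are invented for illustration; they are not real mtDNA data.

```python
def diffs_by_codon_position(seq1: str, seq2: str):
    """Count pairwise differences between two aligned coding sequences,
    split into 1st/2nd codon positions (slow-evolving) and 3rd codon
    positions (fast-evolving)."""
    slow = fast = 0
    for i, (a, b) in enumerate(zip(seq1, seq2)):
        if a != b:
            if i % 3 == 2:   # third position of each codon
                fast += 1
            else:            # first or second position
                slow += 1
    return slow, fast

# Toy aligned sequences: all differences fall at third codon positions,
# the pattern expected for closely related lineages.
s1 = "ATGCTAGGA"
s2 = "ATACTGGGT"
slow, fast = diffs_by_codon_position(s1, s2)
```

A model that fits separate rates to these two classes of sites is, in effect, what lets a single analysis resolve both shallow and deep divergences.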
Mitochondrial DNA was admitted into evidence for the first time ever in a United States courtroom in 1996 during State of Tennessee v. Paul Ware.
In the 1998 United States court case of Commonwealth of Pennsylvania v. Patricia Lynne Rorrer, mitochondrial DNA was admitted into evidence in the State of Pennsylvania for the first time. The case was featured in episode 55 of season 5 of the true crime series Forensic Files.
Mitochondrial DNA was first admitted into evidence in California, United States, in the successful prosecution of David Westerfield for the 2002 kidnapping and murder of 7-year-old Danielle van Dam in San Diego: it was used for both human and dog identification. This was the first trial in the U.S. to admit canine DNA.
The remains of King Richard III, who died in 1485, were identified by comparing his mtDNA with that of two matrilineal descendants of his sister who were alive in 2013, 527 years after he died.
## Use in evolutionary biology and systematic biology
mtDNA is conserved across eukaryotic organisms, given the critical role of mitochondria in cellular respiration. However, because its DNA repair is less efficient than that of nuclear DNA, it has a relatively high mutation rate (though slow compared to other DNA regions such as microsatellites), which makes it useful for studying the evolutionary relationships—phylogeny—of organisms. Biologists can determine and then compare mtDNA sequences among different species and use the comparisons to build an evolutionary tree for the species examined.
For instance, while most nuclear genes are nearly identical between humans and chimpanzees, their mitochondrial genomes are 9.8% different. Human and gorilla mitochondrial genomes are 11.8% different, suggesting that we may be more closely related to chimpanzees than to gorillas.
## mtDNA in nuclear DNA
Whole genome sequences of more than 66,000 people revealed that most of them had some mitochondrial DNA inserted into their nuclear genomes. More than 90% of these nuclear-mitochondrial segments (NUMTs) were inserted into the nuclear genome within the last 5 or 6 million years, that is, after humans diverged from apes. Results indicate that such transfers currently occur as frequently as once in every ~4,000 human births.
It appears that organellar DNA is much more often transferred to nuclear DNA than previously thought. This observation also supports the endosymbiont theory: that eukaryotes evolved from endosymbionts which turned into organelles while transferring most of their DNA to the nucleus, so that the organellar genome shrank in the process.
## History
Mitochondrial DNA was discovered in the 1960s by Margit M. K. Nass and Sylvan Nass by electron microscopy as DNase-sensitive threads inside mitochondria, and by Ellen Haslbrunner, Hans Tuppy and Gottfried Schatz by biochemical assays on highly purified mitochondrial fractions.
## Mitochondrial sequence databases
Several specialized databases have been founded to collect mitochondrial genome sequences and other information. Although most of them focus on sequence data, some of them include phylogenetic or functional information.
- AmtDB: a database of ancient human mitochondrial genomes.
- InterMitoBase: an annotated database and analysis platform of protein-protein interactions for human mitochondria (apparently last updated in 2010, but still available).
- MitoBreak: the mitochondrial DNA breakpoints database.
- MitoFish and MitoAnnotator: a mitochondrial genome database of fish. See also Cawthorn et al.
- Mitome: a database for comparative mitochondrial genomics in metazoan animals (no longer available).
- MitoRes: a resource of nuclear-encoded mitochondrial genes and their products in metazoa (apparently no longer being updated).
- MitoSatPlant: mitochondrial microsatellites database of Viridiplantae.
- MitoZoa 2.0: a database for comparative and evolutionary analyses of mitochondrial genomes in Metazoa (no longer available).
## MtDNA-phenotype association databases
Genome-wide association studies can reveal associations of mtDNA genes and their mutations with phenotypes including lifespan and disease risks. In 2021, the largest, UK Biobank-based, genome-wide association study of mitochondrial DNA unveiled 260 new associations with phenotypes including lifespan and disease risks for e.g. type 2 diabetes.
### Mitochondrial mutation databases
Several specialized databases exist that report polymorphisms and mutations in the human mitochondrial DNA, together with the assessment of their pathogenicity.
- MitImpact: a collection of pre-computed pathogenicity predictions for all nucleotide changes that cause non-synonymous substitutions in human mitochondrial protein-coding genes.
- MITOMAP: a compendium of polymorphisms and mutations in human mitochondrial DNA. | https://en.wikipedia.org/wiki/MtDNA |
Seven things about evolution
A quick look at the basics of biological evolution, and what sets it apart from other processes of change.
What is evolution?
In its original sense, evolution meant “unrolling”, as if a papyrus scroll were being unrolled to reveal its contents. We may talk about the “evolution” of many things, from an individual’s lifetime to the evolution of the universe. In the most general sense, evolution means “change”.
Biologists are very specific about the kinds of processes that qualify as “evolution” in the biological sense. Biological evolution is genetic change in a population over time. Populations and individuals change in many ways, but only some changes are evolution.
Here’s a list of seven things about evolution. It’s not comprehensive but it hits on several important issues that help to understand how evolutionary biologists think about the process of evolutionary change.
- Evolution is change in a population. Individuals change during their lifetimes, even day to day. Those changes are not biological evolution, although they may be products of evolution in past populations. Likewise, a forest may change over time, as some kinds of trees proliferate and others disappear. Those changes in community structure are not themselves biological evolution, although they may influence the evolution of the populations of trees composing the forest.
- Evolution is genetic change. Many kinds of phenotypic changes don’t involve evolution. For example, many human populations have markedly increased in lifespan during the last 100 years, mostly as a result of improvements in nutrition and reductions in disease. Those changes are important and highly visible, but they are not biological evolution. Physical characteristics and behaviors can only evolve if they have some genetic contribution to their variation in the population – that is, if they are heritable.
- Many kinds of genetic changes are important to evolution. Mutations happen when a DNA sequence is not replicated perfectly. A sequence may undergo a mutation to a single nucleotide, small sequences of nucleotides can be inserted or deleted, large parts of chromosomes can be duplicated or transposed into other chromosomes. Some plant populations have undergone duplications or triplications of their entire genomes. These patterns of genetic change can have a wide range of effects on the physical form and behavior of organisms, or may have no effects at all. But all of them follow the same mathematical principles as they change in frequency within populations.
- Evolution can be non-random. Populations of organisms cannot grow in numbers indefinitely, so individuals that successfully reproduce will have their genes increase in proportion over time. Among the genes carried by such successful individuals may be some that actually cause them to survive or reproduce, because they fit the environment better. The survival and proliferation of such genes is not a matter of chance; it is a result of their value in the environment. This process is called natural selection, and it is the reason why populations come to have forms and behaviors that are well-suited to their environments.
- Evolution can be random, too. Many genetic changes are invisible and make no difference to the organisms. Many changes that do make a noticeable difference to the organisms' form or behavior nevertheless still do not change the chance of reproducing. Even individuals with the best genes still have a strong random component to their reproduction, and in sexual organisms genes assort randomly into sperm and egg cells. As a result, even when an individual has a beneficial gene that increases the chance of reproducing, that valuable gene is still very likely to disappear quickly after it first appears in the population. Genetic drift is strongest when populations are small or genes rare, but it is there all the time. Random chance has a continual role in evolutionary change.
- Populations evolve all the time. No population can stay static for long. Reproduction is not uniform, and no organism replicates DNA perfectly. The genome of the simplest bacterium has hundreds of thousands of nucleotides; ours has billions. Keeping these sequences constant, generation after generation, is a task no population has ever managed to do. Genetic variation is constantly introduced into populations by mutation and immigration, rare genetic variations are constantly disappearing when individuals who carry them don't pass them on, and occasionally rare genes become common – whether by natural selection or genetic drift. If a population's physical form remains the same for a long time, we have a good reason to suspect that natural selection is working to oppose random changes.
- Evolutionary theory has changed a lot since Darwin’s day. Charles Darwin recognized several key insights about biological evolution, including the process of natural selection, the tree-like pattern of relationships among species, and the potential for significant changes when processes act through small, incremental steps across geological timescales. But we know a lot more now than Darwin knew. We understand the molecular basis of genetic changes, and many of the ways that the features of organisms can be affected by genetic and environmental change. We have learned much about the limits of evolution, the alternative patterns of change caused by environments, and the importance of randomness. We now know much about the changing pace of evolution, seeing it as a dynamic process that can happen in fits and starts.
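The claim above that even a beneficial new gene is very likely to disappear soon after it arises can be checked with a small simulation. This is a minimal Wright-Fisher-style sketch; the population size, selection coefficient, generation count and number of trials are arbitrary choices made for illustration.

```python
import random

def allele_survives(pop_size=200, s=0.05, generations=60, rng=None):
    """Track one new beneficial allele (initial copy number 1) in a
    Wright-Fisher-style population with selection coefficient s.
    Returns True if any copies remain after the given generations."""
    rng = rng or random.Random()
    count = 1
    for _ in range(generations):
        if count == 0:
            return False
        p = count / pop_size
        # Selection-weighted expected frequency in the next generation
        p_sel = p * (1 + s) / (p * (1 + s) + (1 - p))
        # Binomial resampling of the population = genetic drift
        count = sum(1 for _ in range(pop_size) if rng.random() < p_sel)
    return count > 0

rng = random.Random(42)
trials = 500
lost = sum(1 for _ in range(trials) if not allele_survives(rng=rng))
# Classical theory gives a fixation probability of roughly 2s (about 10%)
# for a single new beneficial mutation, so most replicates lose it.
```

Even with a 5% fitness advantage, the large majority of replicates lose the allele within a few generations, exactly as the drift argument predicts.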
Evolution is the most powerful idea in biology, organizing our knowledge about the history and diversity of life. We understand our own origins using the same tools that we use for organisms across the tree of life, from the simplest bacteria to the largest whales.
| https://johnhawks.net/weblog/seven-things-about-evolution/ |
Here is an interesting tidbit for the creation-evolution debate, which concerns the creationist vs. the mainstream scientific takes on the true age of mitochondrial Eve. I may be the first to spot this particular connection, so I'm throwing it out there as a rather hasty blog post.
[Image: The infamous King Richard III of England. Picture from Wikipedia.]
I have previously written a blog post (in Swedish) about evidence and how it can be used to confirm or falsify hypotheses. In that blog post I mentioned the case of the missing and rediscovered remains of Richard III of England, the ill-reputed usurper and probable regicide (see the Rex Factor Podcast). The historical chronicles and the modern archaeological case regarding Richard III are also absolutely fascinating stories all by themselves. In short: the recently rediscovered remains of Richard III were identified by comparing mitochondrial DNA from his skeleton with that of two modern-day matrilineal descendants of his sister. I originally used this only as an illustrative example of evidentiary support for a hypothesis, but during a discussion in the Evolution Fairytale Forum it occurred to me that mitochondrial data of this kind also has general relevance for estimating mutation rates in mitochondrial DNA. Which brings us to the case of mitochondrial Eve. I am indebted to forum member Mike the Wiz for throwing mitochondrial Eve into the discussion (and probably not realizing in advance that by doing so he was biting off a lot more than he could chew).
Estimates of mutation rates (= base substitution rates) in mitochondrial DNA over time have been performed from various sources, based on combined palaeontological, archaeological, historical and genetic evidence (see e.g. Rieux et al., Molecular Biology and Evolution 2014). Based on these rates, and the overall variation between different human mitochondrial DNA lineages from within and outside Africa (see Wikipedia, Oven and Kayser 2008, Endicott et al. 2009, and Mellars 2006), a time point for their divergence from the last matrilineal common ancestor of all humans can be calculated. These mainstream calculations clock in at approximately 200,000 years ago, with the likely point of divergence somewhere in Africa.
However, other studies of mutation rates have yielded substantially higher estimates. These rates are obtained from modern comparisons of changes over a few generations, as determined from pedigrees or cell lines (see e.g. Parsons et al., Nature Genetics 1997; Howell et al., American Journal of Human Genetics 2003; Madrigal et al., American Journal of Physical Anthropology 2012). Creationists have leapt on this to claim that, based on these rates, a better time estimate for mitochondrial Eve would be around 6,000–6,500 years, bringing her close to their biblical estimate for the time of creation and the biblical Eve. The creationist take on the mitochondrial Eve story is presented as a factual case (number 4) in Don Batten's list of 101 (supposed) evidences for a young age of the earth and the universe.
Note that the discrepancies between estimates over short-term and long-term time scales are considered genuine, and acknowledged by both mainstream and creationist publications. See an example here from the creationist publication Creation Ex Nihilo Technical Journal / Journal of Creation:
What are we to make of this? How, then, should we resolve these discrepancies between mutation rates over different time scales? Should either the short-term or the long-term rates be considered artefacts, or is there some explanatory model that would allow us to reconcile them?
The most parsimonious explanation for the observed data is probably that both rates are valid, but that the differences between short-term and long-term mutation rates represent two different processes acting over different time scales. The short-term rates represent the instantaneous frequency of mutations in the mitochondrial DNA molecule, whereas the long-term rates represent what remains in the gene pool after some filtering process has acted on the genetic variation over several generations and removed 90–95% of all mitochondrial DNA variants. Evidence that mitochondrial DNA is subjected to such purifying selection can be found in the differing substitution rates among bases in the first and second codon positions versus the third (synonymous) codon position (Rieux et al. 2014). This explanation allows us to harmonize the differing mutation rates, and provides strong support for the view that the long-term rates are indeed most relevant for comparisons over longer time scales.
When confronted with this very reasonable solution, however, creationists predictably respond with the usual canard about "observational" science. From the creationist point of view, the case appears shut: the short-term, higher rates should be preferred over long-term rates, ostensibly because they constitute "real" observations, rather than being calibrated estimates based on "evolutionary assumptions". In reality, both time frames are well supported by independent lines of evidence, and both estimates should therefore be considered factual. Anyone arguing this sensible point of view would probably have a hard time breaking through the creationist trenches, fortified as they are with rhetoric about evolutionary assumptions being a necessary presupposition for the long-term rates.
Here is where good old (or bad old) Richard Plantagenet comes to our rescue, literally from the grave. The time frames for the Plantagenet dynasty and Richard’s short reign are not subject to any evolutionary assumptions, but constitute reliable historical facts that could hardly be questioned even by the most ardent creationist. As we have access to mitochondrial DNA from Richard himself, as well as from two independent matrilineal relatives, we can directly calculate a factual mutation rate over medium-range time scales, which can be compared with both short-term and long-term rates.
The main creationist case for a mitochondrial Eve living roughly 6,000 years ago comes from the study by Parsons et al. 1997, which found an unprecedentedly high rate of one mutation per 33 generations per mtDNA molecule, or 2.5 mutations per base pair per million years (2.5 × 10^-6 per base per year). The forensic material from Richard and his modern relatives allows us to directly test the validity of the creationist approach of simply extrapolating short-term mutation rates over longer time frames. Have a look at the genealogy of Richard's matrilineal relatives until the present day, from King et al. 2014:
There is a time span of roughly 550 years between the birth of Richard’s mother Cecily Neville and the modern descendants whose mitochondrial DNA was sequenced along with Richard’s. From this follows that the mitochondrial DNA molecule of each of the modern relatives will have accumulated mutations for roughly 550 years, compared with Richard’s template DNA. This means that we could expect the following number of mutations to have occurred:
16,570 (the length of the human mtDNA in base pairs) × 550 (years) × 2.5 / 1,000,000 = 22–23 mutations.
In the King et al. 2014 study, the scientists sequenced the full mtDNA molecules of both Richard III and his modern relatives and found…wait for it…that the mitochondrial DNA of one relative was identical with Richard’s, whereas that of the second relative differed only by a single mutation!
This is far from the expected value if the short-term rates were valid, even over a 550-year time span, and constitutes a slam dunk against the validity of the creationist approach to estimate longer time frames based on short-term mutation rates.
The two mitochondrial DNA lineages of Richard's modern relatives constituted a single lineage for roughly 115 years, until they split apart with the sisters Barbara and Everhilda Constable in the 1530s. These two mitochondrial genomes have thus accumulated mutations for a total of 115 + 2 × 435 = 985 years. With a single mutation over this time, the mutation rate can be calculated as:
1 / (16,570 × 985) ≈ 6 × 10^-8 mutations per base per year.
This is almost exactly 40 times lower than the rate found by Parsons et al. 1997. It is also very close to the mutation rates of approximately 2–5 × 10^-8 found by Rieux et al. 2014, based on different estimates using medium-to-long time frames. The data from the mitochondrial DNA of Richard III and his modern relatives thus constitute a factual demonstration that mutation rates can approach the long-term rates after only ca. 500 years. This also provides a strong case that purifying selection or some other process is at work to filter out most mutations in the mitochondrial DNA after a very limited time span.
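The arithmetic above can be re-checked in a few lines. This sketch uses only numbers already quoted in the text (with 16,570 bp as the mtDNA length throughout); it adds no new data.

```python
MTDNA_LENGTH = 16_570     # length of the human mtDNA molecule, bp
SHORT_TERM_RATE = 2.5e-6  # Parsons et al. 1997, mutations per base per year
YEARS = 550               # Cecily Neville to the modern relatives

# Expected mutations between Richard's mtDNA and a modern relative
# if the short-term rate held over five centuries:
expected = MTDNA_LENGTH * YEARS * SHORT_TERM_RATE    # about 22.8

# Observed: one mutation over the two lineages' combined history
# (115 shared years + 2 x 435 separate years = 985 years):
combined_years = 115 + 2 * 435                       # 985
observed_rate = 1 / (MTDNA_LENGTH * combined_years)  # about 6.1e-8

ratio = SHORT_TERM_RATE / observed_rate              # about 41
```

The ratio of roughly 40 between the two rates is exactly the discrepancy the text describes.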
Unlike the long-range mutation rate estimates, the creationists cannot bring out their old canards against the case of Richard III. The estimated rate in this case rests on only three parameters, none of which can be in reasonable doubt:
1) The confirmed relationships between the skeleton identified as Richard III and the two modern matrilineal relatives. King et al. 2014 build a very strong cumulative case for this in their paper, based on genealogy, genetics, and historical descriptions of Richard’s battle wounds and the damage and deformities of the skeleton.
2) The genealogy and historical dates of the members of the Plantagenet dynasty and their descendants.
3) The sequences of the mitochondrial DNA molecules from Richard’s skeleton and those of two of his modern-day maternal relatives with strictly matrilineal descent, all of which have been confirmed by means of multiple sequencing.
Thus, the genetic case for a mitochondrial Eve living more than 100,000 years ago stands stronger than ever. On the other hand, based on the approximately 1 mutation per 1,000 years per mtDNA molecule found between Richard and his relatives, the creationist case for a most recent common mitochondrial ancestor only 6,000 years ago is more absurd than ever. More cases like that of Richard III could probably be found, in order to strengthen the validity of this approach even further.
When we take a step back and survey the overall situation with regards to biogeographical age estimates for the human species and its populations, we find an amazing consistency between different types of evidence from historical sources, archaeology, paleontology, radiometric dating, and population genetics (Endicott et al. 2009; Rieux et al. 2014). The scientific case for age estimates of the human species does not rest on any “evolutionary presuppositions”, but constitutes a bottom-up reconstruction based on real historical data with a high degree of concordance between multiple independent lines of evidence.
Let us contrast this scientific success story with the creationist approach to epistemological and scientific consistency. If you recall the original publications by Carl Wieland and CMI, the ostensible justification for preferring modern, short-term estimates for mitochondrial mutation rates was that these represent present-day observations, rather than shady and uncertain values calibrated by evolutionary assumptions. This may seem like an epistemically sound approach, but is it consistent with how the same scientists accept other modern, observed rates in other contexts? (see Age of Rocks for examples):
What is their take on modern, observed rates of radiometric decay?
And modern, observed amounts of C14 in the atmosphere (calibrated tens of thousands of years back by tree rings, sediment varves, and speleothems)?
Or modern, observed rates of continental spread from the mid-Atlantic ridge?
In those cases they seem perfectly willing to accept any fairy-tale alternative for historical differences in rates, with little or no evidentiary support.
A more cynical, but entirely justified, interpretation of the creationist modus operandi is that they will always favour the alternative that supports their presuppositions and prior commitments to a literalist biblical interpretation. Verily, I say unto you, the level of hypocrisy routinely displayed by the "scientific" representatives of the major creationist organizations would make even a hardened pharisee blush. | http://mattiaswebarchive.blogspot.se/2015/02/ |
Falsification of the Junk DNA Paradigm
For example, University of Chicago geneticist Dr. Jerry A. Coyne offered philosophical arguments to defend his conclusion that human DNA was not intelligently designed. These arguments were founded on the existence of perceived worthless segments of genetic code. In defending evolution, he wrote,
"Perfect design would truly be the sign of a skilled and intelligent designer. Imperfect design is the mark of evolution... we expect to find, in the genomes of many species, silenced, or 'dead,' genes: genes that once were useful but are no longer intact or expressed. These are called pseudogenes... the evolutionary prediction that we'll find pseudogenes has been fulfilled—amply. Indeed, our genome—and that of other species—are truly well populated graveyards of dead genes"
Brown University biologist Kenneth R. Miller wrote,
"The human genome is littered with pseudogenes, gene fragments, "orphaned" genes, "junk" DNA, and so many repeated copies of pointless DNA sequences that cannot be attributed to anything that resembles intelligent design.... In fact, the genome resembles nothing so much as a hodgepodge of borrowed, copied, mutated, and discarded sequences and commands that has been cobbled together by millions of years of trial and error against the relentless test of survival."
In both of these statements, it should be noted that the founding evidence for evolution is the philosophical belief that DNA does not appear intelligently designed.
Evolutionary biologists believe that the driving mechanism of evolution has been incremental fine tuning of a mutated genetic code, made possible through natural selection through the ages. At the same time, it is concluded that natural selection has replaced 98% of the genome with useless leftovers that it has been incapable of eliminating. If the doctrine of natural selection is to be believed, it remains to be explained how natural selection is capable of creating millions of finely balanced intricacies of nature and is incapable of ridding the human genome of 98% of its baggage. Specifically, why would a mutated offspring which was endowed with a piece of junk DNA be favored by natural selection such that its newly mutated DNA replaced all non-mutated individuals in the population who had not been endowed with similarly useless information?
During the last twenty years, geneticists have documented massive evidence that non-protein coding DNA segments are indeed functional.
Dr. Jonathan Wells summarized these findings in this commentary:
"That view [junk DNA] has turned out to be spectacularly wrong. Since 1990—and especially after completion of the Human Genome Project in 2003—many hundreds of articles have appeared in the scientific literature documenting the various functions of non-protein coding DNA, and more are being published every week."
In reference to the collapse of the junk DNA paradigm, evolutionist Dr. John Mattick, director for the Institute of Molecular Bioscience (Queensland, Australia), wrote,
"The failure to recognize the full implications of this--particularly the possibility that the intervening noncoding sequences may be transmitting parallel information... may well go down as one of the biggest mistakes in the history of molecular biology."
The evidence of the important functionality of what was previously referred to as "junk" is now undeniable. This is a profound blow to the entire theory of evolution. The greater the percentage of DNA that is shown to be functional, the weaker the evolutionary hypothesis becomes. With the progressive expansion of man's knowledge of the sophisticated integrated components and elaborate control systems of DNA, geneticists are becoming increasingly aware that any proposed naturalistic origin of the genetic code is unthinkable.
Evolutionary theorists have relied heavily on the existence of non-functional DNA to counter probability challenges to evolution. Also, the presumed large repository of "junk" DNA" has been cited to effectively deny the deterioration of the human genome. Although mutations occur with each generation, they were believed to occur mostly in non-functional segments and therefore have been considered to be irrelevant. The existence of "junk DNA" is a mathematical necessity to justify the assumption that random mutations can result in purposeful changes in genetic code.
What was proclaimed as "predicted" by evolution just a few years ago now is viewed by many biologists as evidentiary of intelligent design by virtue of the great complexity of the genetic code and the extreme difficulty of explaining its existence in terms of evolutionary mechanisms. Now that the junk DNA paradigm has been falsified, leading evolutionary biologists are attempting to disavow themselves of their previous predictions, claiming that evolution predicts any percentage of non-functional DNA. Although evolution is commonly declared to be a unifying principle of all fields of biology, this example demonstrates how rigid adherence to evolutionary dogma has obstructed man's understanding of fundamental principles of molecular biology.
Failed predictions of a hypothesis should prompt careful re-evaluation of its fundamental premises. It is a serious error to contrive ad hoc explanations for such unexpected results in an attempt to preserve a theory that cannot be supported by empirical observation. The junk DNA paradigm unfortunately remains an icon of evolution. It is still widely propagated in popular science books and reviews, despite having been invalidated by results published in numerous scientific journals. This illustrates the embarrassing lack of objective peer review that is so characteristic of evolutionary biology today. Most evolutionary biologists are not re-evaluating their commitment to the general theory of evolution or even to the junk DNA paradigm, but continue searching for naturalistic explanations to account for observations. Such proposals only require one to accept even greater complexities of nature at the molecular level.
One of the functions of DNA is that it encodes for proteins. This means that the sequencing of complex proteins is directed by DNA. The idea that most of DNA was "junk" was based on the assumption that segments of DNA that did not direct protein synthesis were useless. This conclusion was drawn in plain view of the fact that the encoding of proteins represents only one small aspect of the genesis of complex living organisms. Proteins themselves have no ability to oversee and direct many biologic processes. The shape of a person's skull, the complex integrated circuitry of the brain, subtle traits such as the sound of one's voice, the creation of instinctive behavior, the direction of cell differentiation and numerous other vital biologic processes are all controlled by DNA. The same voices that proclaimed that DNA controls homosexual behavior and every other conceivable human trait also believe that all non-protein-encoding DNA segments are useless. | https://www.maskofscience.com/junk-dna-paradigm |
Green et al.1 have recently reported the sequencing of a full-length Neandertal mitochondrial genome. This is not a complete nuclear genome, but only that of one small organelle (the mitochondrion) that exists within all animal cells. From their analysis they concluded, ‘Neandertals made no lasting contribution to the modern human mtDNA gene pool.’ While this primary conclusion does not necessarily conflict with the creationist position that Neandertals lived after the Flood and are fully human, there are a lot of evolutionary assumptions behind that statement that must be carefully considered. There are actually three separate issues here: is the sequence accurate? Does the sequence prove that Neandertals were a different species? Do the number of variations between Neandertal and modern humans prove that a vast time span separates us?
The accuracy of ancient DNA sequencing
Ancient DNA (aDNA) is problematic. DNA is a long macromolecule that breaks easily, especially between G–T residues where breaks are nearly three times more likely than at other positions.1,2 In this particular case, the DNA fragments recovered from the Neandertal bone had an average length of only 69.4 bp. That means that thousands of pieces were required to reassemble the 16.5 thousand bp mtDNA genome, and multiple copies of each section are required to correct for the high error rates inherent in sequencing aDNA. Green et al. estimated they would need 12-fold coverage to achieve an error-rate of 1 in 10,000. To put that in perspective, the Human Genome Project required only 4–5-fold coverage to complete the draft sequence. The Neandertal mtDNA was completed with a 34.9-fold average coverage, but without a complete modern human mtDNA for comparison, the Neandertal assembly would have been impossible.
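The scale of the reassembly task can be illustrated with some back-of-the-envelope arithmetic using the figures quoted above. This is a sketch only: the gap estimate assumes random fragment placement (a Lander–Waterman-style simplification that real aDNA libraries violate).

```python
# Coverage arithmetic for the figures quoted in the text: a 16,569 bp
# mtDNA genome, 69.4 bp average fragment length, 34.9-fold coverage.
import math

GENOME_LEN = 16_569      # human/Neandertal mtDNA length, bp
FRAG_LEN = 69.4          # mean recovered fragment length, bp
COVERAGE = 34.9          # reported average fold-coverage

# Number of fragments needed to reach the reported average coverage
n_fragments = COVERAGE * GENOME_LEN / FRAG_LEN
print(f"fragments sequenced ~ {n_fragments:,.0f}")   # roughly 8,300

# Under random placement, the expected fraction of the genome left
# uncovered at mean depth c is e^(-c) (a simplifying assumption).
uncovered = math.exp(-COVERAGE)
print(f"expected uncovered fraction ~ {uncovered:.1e}")
```

This makes the point in the text concrete: thousands of short, error-prone pieces had to be tiled together, which is only tractable with a complete modern reference to scaffold against.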
Contamination of ancient samples by modern DNA is a constant issue, for the sequencing reactions tend to amplify high-quality modern DNA at the expense of fragmented aDNA. The presence of nuclear copies of the mtDNA is also a concern. The nuclear copies are not exactly identical to the mtDNA and separating the two can be difficult, especially with the short average read length. There are actually four types of mtDNA that the authors had to be concerned about: the fragmented Neandertal mtDNA, low copy number fragmented nuclear copies of Neandertal mtDNA, contaminating modern mtDNA, and low copy number nuclear copies of contaminating modern mtDNA. The authors went to great lengths to address this problem and probably could not have done much more, given the nature of the material.
In ancient DNA, individual DNA residues are chemically altered over time. In particular, frequent deamination of cytosine residues leads to high rates of C–T transitions (and A–G transitions on the complementary strand).2 This occurs more often close to the ends of DNA fragments,3 which is a considerable problem when one considers the small average size of the recovered DNA fragments. The reported Neandertal mtDNA differs from the standard human mitochondrial sequence (the Revised Cambridge Reference Sequence,4 or rCRS) by 206 nucleotides (1.2% of the 16,569-nucleotide mitochondrial genome), including 195 transitions and 11 transversions.5 To put that in perspective, any two modern humans selected at random will differ by an average of about 40 nucleotides, and the most divergent mtDNAs from living humans differ at just over 120 nucleotides.6 The mutations found in the Neandertal mtDNA are fairly standard. No large indels were found and transversions are uncommon. In fact, the bulk of the differences found between the Neandertal and modern mtDNA are C–T transitions. These are among the most common mutations that occur within living organisms, but it is not clear if they are the result of ancestry or post-mortem alteration of the Neandertal sequence. Many of these mutations might be indicative of errors in the genome assembly that, despite the authors’ best efforts, carried through their analysis.
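The transition/transversion tally reported above comes from a straightforward pairwise classification of aligned sequences. A minimal sketch with toy sequences (not the real rCRS or Neandertal data):

```python
# Classify pairwise differences between two aligned sequences as
# transitions (purine<->purine or pyrimidine<->pyrimidine) or
# transversions (purine<->pyrimidine). Toy sequences for illustration.
PURINES = {"A", "G"}
PYRIMIDINES = {"C", "T"}

def classify_differences(seq1, seq2):
    transitions = transversions = 0
    for a, b in zip(seq1, seq2):
        if a == b:
            continue
        same_class = ({a, b} <= PURINES) or ({a, b} <= PYRIMIDINES)
        if same_class:
            transitions += 1
        else:
            transversions += 1
    return transitions, transversions

# G->A at position 3 is a transition; T->A at position 8 a transversion.
ts, tv = classify_differences("ACGTACGT", "ACATACGA")
print(ts, tv)  # 1 1
```

The heavy skew toward transitions in the Neandertal data (195 vs 11) is exactly what both normal replication errors and post-mortem cytosine deamination produce, which is why the two are hard to tell apart.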
Of particular concern is the discovery of several non-synonymous amino acid changes in protein-coding regions of the Neandertal mitochondrial genome, especially in subunit 2 of the cytochrome c oxidase gene. The authors claim this is evidence that purifying selection in the Neandertal mtDNA was reduced, probably due to a small population size. Such substitutions are normally rare, both because most are assumed to be detrimental and because selection breaks down in small populations due to high rates of random shifts in gene frequencies (the probability that a new mutation drifts to fixation is inversely proportional to population size). But small populations are also at risk due to the high rate of mutation accumulation,7 which eventually leads to extinction through ‘error catastrophe’. The accumulation of non-synonymous mutations in important genes is evidence for a high mutation rate acting on a small population under threat of extinction. It could also indicate the presence of post-mortem DNA degeneration that their techniques could not discern. If the results are valid, the accumulation of deleterious mutations might help to explain the disappearance of the Neandertals. However, the adaptive significance of the synonymous to non-synonymous ratio has recently come under fire,8,9 so we must interpret these findings carefully.
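The claim that selection breaks down in small populations can be made concrete with Kimura's classic diffusion approximation for the fixation probability of a new mutant. This is an illustrative calculation with arbitrary parameter values, not an analysis of the actual Neandertal data:

```python
# Kimura's diffusion approximation for the fixation probability of a
# new mutant with selection coefficient s in a diploid population of
# size N. For small N, a mildly deleterious mutation fixes almost as
# often as a neutral one, i.e. selection is effectively "switched off".
import math

def p_fix(s, N):
    if s == 0:
        return 1 / (2 * N)                       # neutral case
    return (1 - math.exp(-2 * s)) / (1 - math.exp(-4 * N * s))

s = -0.001                                        # mildly deleterious
for N in (100, 10_000):
    ratio = p_fix(s, N) / (1 / (2 * N))           # relative to neutral
    print(f"N={N}: fixation prob. is {ratio:.2g}x the neutral value")
# Small N gives a ratio near 1 (quasi-neutral behaviour);
# large N gives a ratio near 0 (selection purges the mutation).
```

The same arithmetic underlies the argument in the text: in a small, inbreeding Neandertal population, mildly deleterious non-synonymous changes could accumulate almost as freely as neutral ones.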
There exists a large body of literature dealing with the pitfalls and assumptions inherent in working with ancient DNA. The authors are aware of this knowledge and did their best to avoid potential problems, but time has been a fickle judge of previous aDNA sequencing efforts. Green et al. concluded that this single Neandertal mtDNA ‘unequivocally’ falls outside the range of modern humans. While this is true at face value, it assumes the sequence is accurate. See Criswell for a detailed discussion on post-mortem DNA decay and problems with current Neandertal mtDNA sequencing efforts.10
Were Neandertals a different species?
Let us assume the Neandertal mtDNA sequence is accurate. Even then, comparing a single Neandertal to a representative sample of modern humans is not highly informative. It may be that Neandertals were a unique side branch of modern humans with limited genetic diversity due to inbreeding. Alternatively, it may be that Neandertals were a highly heterogeneous group with a rich genetic heritage that encompasses modern humans. I suspect the former is true, but we will have to wait for additional Neandertal sequences to become available before we can make strong conclusions.
It is entirely possible that Neandertals accumulated mutations very rapidly in the years after the Flood. Based on this single sample, Neandertals have many mutations not seen in any modern human. Even so, this Neandertal sequence is closer to modern humans than many living chimps are to one another! Diversity within living chimpanzees is three- to four-fold higher than within the modern human population,11,12 even though chimpanzees are descended from a single pre-Flood pair and thus should have less genetic diversity than humans. This is evidence for a chimpanzee genome in rapid decline and might indicate that some degree of entropy was acting on the Neandertal genome.
Coalescence theory13 predicts that living populations should be descended from only a small fraction of the ancestral population. The Recent African Origins Theory14 originally postulated that all people alive today descend from a single female (‘Mitochondrial Eve’) living in Africa about 200,000 years ago (the estimated date varies from author to author). This does not mean that she was the only female alive at that time, but that the lineages of every living person coalesce in this single person. The theory has since been expanded to include ‘Y Chromosome Adam’.15 Coalescence has been demonstrated in the Icelandic population, where only 6.6% of the females and 10% of the males alive between 1698–1742 are, respectively, the ancestors of 62% of females and 71% of the males alive today.16 Coalescence might be a general phenomenon in all populations, acting like a funnel to channel genetic diversity from a limited pool of ancestors. It seems there has been a loss of variation within the English population over the past 1,000 years17 due to disease and other factors. If processes like this have existed throughout human history, we should not expect modern humans and Neandertals to share the same ‘mutations’.
Coalescence in small populations might occur several times in its history. One founder might be the mitochondrial ancestor of the entire population, fixing those founder mutations. If the population remains small, a second founder event could occur, adding the mutations that have accumulated in a later individual to the pool of fixed mutations. This is a concern for the captive maintenance of endangered species in zoos and becomes evident in various breeds of domestic animals when they display characteristic debilitating mutations. Small populations drift rapidly and this is what may have happened to Neandertals, allowing for the rapid accumulation of new mutations.
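The founder-coalescence argument above can be illustrated with a toy Wright-Fisher simulation of matrilineal inheritance: track which founding females remain ancestral to the present generation. The population size and generation count are arbitrary illustrative values, not estimates for any real population.

```python
# Toy Wright-Fisher simulation: each individual in the next generation
# picks a parent uniformly at random, and we track which FOUNDER each
# individual descends from. Most founding lineages are lost to drift.
import random

random.seed(1)  # fixed seed so the sketch is reproducible

def surviving_founders(pop_size=500, generations=300):
    # lineage[i] = index of the founding female ancestral to individual i
    lineage = list(range(pop_size))
    for _ in range(generations):
        lineage = [random.choice(lineage) for _ in range(pop_size)]
    return set(lineage)

founders_left = surviving_founders()
print(f"{len(founders_left)} of 500 founders still have descendants")
# Typically only a small handful of founding matrilines survive.
```

Run for long enough, every such population coalesces to a single matrilineal founder, which is the funnelling effect described above.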
Green et al. made explicit the standard assumption that mtDNA is only maternally inherited and that mitochondrial recombination does not occur. They then conclude that Neandertals made no lasting contribution to the modern human mtDNA gene pool. Although these two assumptions have been argued back and forth for several years in the literature, the latest evidence seems to indicate that they may in fact be incorrect.18 If evidence for mitochondrial recombination continues to accumulate, this conclusion will need to be re-evaluated, for it might then be possible that parts of the Neandertal mitochondrial genome are present in modern humans. Green et al. found one mutation in a modern human that is found nowhere else but in Neandertal and they attributed this to a reversion back to the Neandertal/ancestral state. The correct conclusion is probably that the mutation appeared twice in two separate lineages, but it is an interesting observation.
Is this evidence of great age?
Green et al. date the divergence of the modern human and Neandertal lineages to 660,000 ± 140,000 years bp. They explicitly based this estimate on the assumption of a molecular clock, on an assumed human-chimpanzee split 6–8 million years ago, and on the Standard Neutral Model of evolution.19 The neutral model assumes that mutation accumulation in mitochondria occurs without natural selection and that the genetic (and cultural) factors that control mutation rates do not vary across the human population or over time. But since we do not know current mutation rates, since we do not know historic mutation rates, and since the assumptions behind the Standard Neutral Model are all questionable,20 we must conclude that the degree of relatedness of this single Neandertal specimen to modern humans is unknown at this time.
They make an interesting admission, ‘However, if the estimated date of the divergence between humans and chimpanzees, or current assumptions about how the mtDNA evolves, were found to be incorrect, the estimates in calendar years of the divergence of the Neandertal and human mtDNAs would need to be revised.’ They also admit that ‘the evolutionary dates are clearly dependent on many tenuous assumptions’.
There are several particular issues that pertain to the age of this fossil that I would like to discuss.
Mutation rates are unknown
Most mutation rate estimates in the scientific literature are biased downwards because of the assumption of deep-time evolution. This is a problem for two reasons. First, they may be calculating divergence times for two species that were created separately (e.g. chimps and humans) and are thus not technically comparable. Second, divergence rate calculations are calibrated by comparison to presumed past events. For example, if humans and chimps are X% different, mutation rates are calculated by dividing X by the 6–8 million years since the two lineages supposedly diverged. The timing of the split between modern humans and Neandertals is then based on a divergence rate calculated from this assumed human-chimp divergence time. Mutation rates based on genealogy are much higher than those based on phylogeny20 and are probably much more realistic. Recent studies have shown that measurable mutation rates are much higher than either the phylogenetic or even the genealogical methods predict.21
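The calibration arithmetic criticized here is simple enough to sketch: an inferred divergence date is just the observed per-site difference divided by twice the assumed per-year rate, so the date scales directly with whatever rate the calibration produces. The two rates below are illustrative round numbers, not the values actually used by Green et al.

```python
# Divergence date = per-site difference / (2 * per-site per-year rate).
# The factor of 2 reflects mutations accumulating on both lineages.
MT_LEN = 16_569          # mtDNA genome length, bp
DIFFS = 206              # Neandertal vs rCRS differences (from the text)

def divergence_years(rate_per_site_per_year):
    per_site = DIFFS / MT_LEN
    return per_site / (2 * rate_per_site_per_year)

phylo_rate = 1e-8        # illustrative phylogenetically calibrated rate
pedigree_rate = 1e-7     # illustrative ten-fold faster pedigree rate
print(f"{divergence_years(phylo_rate):,.0f} years")     # ~620,000
print(f"{divergence_years(pedigree_rate):,.0f} years")  # ~62,000
```

A ten-fold change in the assumed rate moves the date by a factor of ten, which is why the choice between phylogenetic and genealogical calibrations matters so much to the conclusions.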
Neutral theory does not allow for mutations in DNA polymerase or in anything that affects DNA copying or repair to occur in only a single subpopulation. That would destroy the very notion of a molecular clock, for then mutations would not be expected to accumulate evenly across the board. But we can measure the fidelity of DNA polymerases, including the human mitochondrial DNA polymerase,22 and we know that mutations in DNA polymerases can elevate error rates in human mitochondria.23
Rapid mutation in harsh environments
Bruce Ames, a member of the prestigious US National Academy of Sciences, has suggested that genetic damage can be directly linked to poor nutrition.24 According to this theory, when under starvation conditions the body has to decide which systems to keep working and which to shut down. This genetic ‘triage’ mechanism would keep an organism alive, but at the expense of less-than-critical cellular operations like DNA repair.
It has been suggested by several creationists that the Neandertal population lived in Europe under less-than-ideal conditions and was subjected to nutrient limitations, specifically vitamin D deficiency due to the perpetually cloudy weather during the post-Flood Ice Age. Add a harsh environment and poor nutrition to a small inbreeding population and you have an instant recipe for the rapid accumulation of mutations in any human population.
Predetermined mutation pathways?
One of the assumptions behind neutral theory is that all mutations are independent and random. It is probably a mistake to believe that mutations occur at random and that they do not interact, however, for any mutation can only occur in the context of the surrounding genetic information. One mutation may be excluded by another (because the combination might be deadly) or may lead to a series of other mutations (because some mutation sets may be excluded by specific individual mutations). Evidence for this kind of mutation interaction is limited, but it does exist.21 If some mutations lead to others, we should not expect two separate lineages to follow the same mutational pathway. This is especially true if a mutation affects the fidelity of the DNA copying mechanism.
Conclusions
While evolutionists (including theistic evolutionists) and ‘progressive creationists’ will probably be trumpeting this new paper as evidence that Neandertals and modern humans are two distinct species, I believe their conclusions are premature. As I have briefly outlined above, there is a lot we do not know about the science of modern genetics. And there are factors like coalescence, rapid genetic drift, and genetic triage in small isolated populations that can potentially explain the findings. In any case, modern chimps can differ more from each other in their mtDNA than modern humans differ from this Neandertal specimen, so beware of anyone who claims Neandertals are a separate species based on genetic differences.
When we approach evidence like this, we need to be skeptical, we need to understand the theory that led to the conclusions and we need to question the assumptions behind the theory. If we do these three things, we need not be afraid that Neandertal Man will in some way fall outside the biblical creation model.
References
- Green, R.E. et al., A complete Neandertal mitochondrial genome sequence determined by high-throughput sequencing, Cell 134:416–426, 2008. Return to text.
- Briggs, A.W. et al., Patterns of damage in genomic DNA sequences from a Neandertal, Proceedings of the National Academy of Science (USA) 104:14616–14621, 2007. Return to text.
- See Green et al., ref. 1, supplementary information for details on the deamination of residues close to fragment ends. Return to text.
- Andrews, R.M. et al., Reanalysis and revision of the Cambridge reference sequence for human mitochondrial DNA, Nature Genetics 23:147, 1999. Return to text.
- A transition is a mutational change in DNA from one purine (A or G) to another or from one pyrimidine (T or C) to another. Transitions are much more likely to occur than transversions, which involve the replacement of a purine with a pyrimidine (e.g. A to C) or vice versa. Return to text.
- Carter, R., Mitochondrial diversity within modern human populations, Nucleic Acids Research 35(9):3039–3045, 2007. Return to text.
- Lynch, M., Conery, J. and Burger, R., Mutation accumulation and the extinction of small populations, American Naturalist 146:489–518, 1995. Return to text.
- Albu, M. et al., Uncorrected nucleotide bias in mtDNA can mimic the effects of positive Darwinian selection, Molecular Biology and Evolution 25(12):2521–2524, 2008. Return to text.
- Hasegawa, M., Cao, Y. and Yang, Z., Preponderance of slightly deleterious polymorphism in mitochondrial DNA: nonsynonymous/synonymous rate ratio is much higher within species than between species, Molecular Biology and Evolution 15(11):1499–1505, 1998. Return to text.
- Criswell, D., Neandertal DNA and modern humans, Creation Research Society Quarterly, 2009 (in press). Return to text.
- Becquet, C. et al., Genetic structure of chimpanzee populations, PLoS Genetics 3(4):617–626, 2007. See also <www.sciencedaily.com/releases/2007/04/070420104723.htm>. Return to text.
- Kaessmann, H., Wiebe, V. and Pääbo, S., Extensive nuclear DNA sequence diversity among chimpanzees, Science 286:1159–1162, 1999. See also <www.sciencedaily.com/releases/1999/11/991108090738.htm>. Return to text.
- Kingman, J.F.C., Origins of the coalescent: 1974–1982, Genetics 156:1461–1463, 2000. Return to text.
- Cann R.L., Stoneking, M. and Wilson, A.C., Mitochondrial DNA and human evolution, Nature 325:31–36, 1987. Return to text.
- Ke, Y. et al., African origin of modern humans in East Asia: a tale of 12,000 Y chromosomes, Science 292:1151–1153, 2001. Return to text.
- Helgason, A. et al., A populationwide coalescent analysis of Icelandic matrilineal and patrilineal genealogies: evidence for a faster evolutionary rate of mtDNA lineages than Y chromosomes, American Journal of Human Genetics 72:1370–1388, 2003. Return to text.
- Töpf, A.L. et al., Ancient human mtDNA genotypes from England reveal lost variation over the last millennium, Biology Letters 3:550–553, 2005. Return to text.
- Zsurka, G. et al., Inheritance of mitochondrial DNA recombinants in double-heteroplasmic families: potential implications for phylogenetic analysis, American Journal of Human Genetics 80:298–305, 2007. Return to text.
- Kimura, M., Evolutionary rate at the molecular level, Nature 217:624–626, 1968. See also <en.wikipedia.org/wiki/Neutral_theory_of_molecular_evolution>. Return to text.
- Howell, N. et al., The pedigree rate of sequence divergence in the human mitochondrial genome: there is a difference between phylogenetic and pedigree rates, American Journal of Human Genetics 72:659–670, 2003. Return to text.
- Elliott, H.R. et al., Pathogenic mitochondrial DNA mutations are common in the general population, American Journal of Human Genetics 83:254–260, 2008. Return to text.
- Lee, H.R. and Johnson, K.A., Fidelity of the human mitochondrial DNA polymerase, The Journal of Biological Chemistry 281(47): 36236–36240, 2006. Return to text.
- Copeland, W.C. et al., Mutations in DNA polymerase gamma cause error prone DNA synthesis in human mitochondrial disorders, Acta Biochemica Polonica 50(1):155–167, 2003. Return to text.
- Ames, B., Low micronutrient intake may accelerate the degenerative diseases of aging through allocation of scarce micronutrients by triage, Proceedings of the National Academy of Science (USA) 103(47):17589–17594, 2006. Return to text.
Multiple lines of evidence indicate that important functional properties are embedded in the non-coding portion of the human genome, but identifying and defining these features remains a major challenge. An initial estimate of the magnitude of functional non-coding DNA was derived from comparative analysis of the first available mammalian genomes (human and mouse), which indicated that fewer than half of the evolutionarily constrained sequences in the human genome encode proteins1, a prospect that gained further support when additional vertebrate genomes became available for comparative genomic analyses2.
The overall impact of these presumably functional non-coding sequences on human biology was initially unclear. A considerable urgency to define their locations and functions came from a growing number of known associations of non-coding sequence variants with common human diseases. Specifically, genome-wide association studies (GWAS) have revealed a large number of disease susceptibility regions that do not overlap protein-coding genes but rather map to non-coding intervals. For example, a 58-kilobase linkage disequilibrium block located at human chromosome 9p21 was shown to be reproducibly associated with an increased risk for coronary artery disease, yet the risk interval lies more than 60 kilobases away from the nearest known protein-coding gene3, 4. To estimate the global contribution of variation in non-coding sequences to phenotypic and disease traits, we performed a meta-analysis of 1,200 single-nucleotide polymorphisms (SNPs) identified as the most significantly associated variants in GWAS published so far (ref. 5, accessed 2 March 2009). Using conservative parameters that tend to overestimate the size of linkage disequilibrium blocks, we found that in 40% of cases (472 of 1,170) no known exons overlap either the linked SNP or its associated haplotype block, suggesting that in more than one-third of cases non-coding sequence variation causally contributes to the traits under investigation.
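The 40% figure rests on a simple interval test: for each lead SNP, ask whether its linkage disequilibrium block overlaps any annotated exon. A minimal sketch with invented toy coordinates (not the study's data):

```python
# For each LD block, check overlap against a list of exon intervals.
# Two closed intervals [a_start, a_end] and [b_start, b_end] overlap
# iff each starts before the other ends.
def overlaps(a_start, a_end, b_start, b_end):
    return a_start <= b_end and b_start <= a_end

exons = [(100, 200), (1_000, 1_200)]                     # toy exons
ld_blocks = [(150, 400), (500, 900), (950, 1_050), (2_000, 2_500)]

noncoding = [
    blk for blk in ld_blocks
    if not any(overlaps(*blk, *ex) for ex in exons)
]
print(f"{len(noncoding)}/{len(ld_blocks)} blocks overlap no exon")
# -> 2/4 here; the meta-analysis above found 472 of 1,170 (~40%)
```

The real analysis additionally has to decide how far each LD block extends, and the text notes that conservative (block-enlarging) parameters were chosen, which biases the test toward finding exon overlap.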
One possibility that could explain these GWAS hits is that the non-coding intervals contain enhancers, a category of gene regulatory sequence that can act over long distances. A simplified view of the current understanding of the role of enhancers in regulating genes is summarized in Fig. 1. The docking of RNA polymerase II to proximal promoter sequences and transcription initiation are fairly well characterized; by contrast, the mechanisms by which insulator and silencer elements buffer or repress gene regulation, respectively, are less well understood6. Transcriptional enhancers are regulatory sequences that can be located upstream of, downstream of or within their target gene and can modulate expression independently of their orientation7. In vertebrates, enhancer sequences are thought to comprise densely clustered aggregations of transcription-factor-binding sites8. When appropriate occupancy of transcription-factor-binding sites is achieved, recruitment of transcriptional coactivators and chromatin-remodelling proteins occurs. The resultant protein aggregates are thought to facilitate DNA looping and ultimately promoter-mediated gene activation (see page 212). In-depth studies of individual genes such as APOE or NKX2-5 (reviewed in ref. 9) have shown that many genes are regulated by complex arrays of enhancers, each driving distinct aspects of the messenger RNA expression pattern. These modular properties of mammalian enhancers are also supported by their additive regulatory activities in heterologous recombination experiments10.
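The "densely clustered binding sites" model can be caricatured with a simple motif scan: find candidate sites for two factors and flag windows where both could bind. The motifs and sequence below are hypothetical toys, not real transcription-factor consensus sequences, and real site prediction uses probabilistic weight matrices rather than exact matching.

```python
# Exact-match scan for two hypothetical transcription-factor motifs,
# then test whether hits for both factors fall within a short window,
# mimicking the "clustered binding sites" view of enhancers.
def motif_hits(seq, motif):
    return [i for i in range(len(seq) - len(motif) + 1)
            if seq[i:i + len(motif)] == motif]

seq = "TTGACGTCAGGGATTAACGTCATGGGATTA"
hits_a = motif_hits(seq, "ACGTCA")   # hypothetical factor A consensus
hits_b = motif_hits(seq, "GGATTA")   # hypothetical factor B consensus

# Call a region "enhancer-like" if hits for both factors lie
# within 20 bp of each other.
clustered = any(abs(a - b) <= 20 for a in hits_a for b in hits_b)
print(hits_a, hits_b, clustered)  # [3, 16] [10, 24] True
```

Under the model described above, only a tissue expressing both factors would activate such an element, which is how one enhancer sequence encodes tissue-specific activity.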
Figure 1: Overview of gene regulation by distant-acting enhancers.
a, For many genes, the regulatory information embedded in the promoter is insufficient to drive the complex expression pattern observed at the messenger RNA level. For example, a gene could be expressed both in the brain and in the limbs during embryonic development (red), even if the promoter by itself is not active in either of these structures, suggesting that appropriate expression depends on additional sequences that are distant-acting and cis-regulatory. However, defining the genomic locations of such regulatory elements (question marks) and their activities in time and space (arrows) is a major challenge. b, c, Tissue-specific enhancers are thought to contain combinations of binding sites for different transcription factors. Only when all required transcription factors are present in a tissue does the enhancer become active: it binds to transcriptional coactivators, relocates into physical proximity with the gene promoter (through a looping mechanism) and activates transcription by RNA polymerase II. In any given tissue, only a subset of enhancers is active, as schematically shown in b and c for the example gene pictured in a, whose expression is controlled by two separate enhancers with brain-specific and limb-specific activities. Insulator elements prevent enhancer–promoter interactions and can thus restrict the activity of enhancers to defined chromatin domains. In addition to activation by enhancers, negative regulatory elements (including repressors and silencers) can contribute to transcriptional regulation (not shown).
The purely genetic evidence from GWAS does not allow any direct inferences regarding the underlying molecular mechanisms, but a number of in-depth studies of individual loci (see below) suggest that variation in distant-acting enhancer sequences and the resultant changes in their activities can contribute to human disorders. Although we anticipate a variety of other non-coding functional categories such as negative gene regulators or non-coding RNAs to have a role in human disease, in this Review we focus on the role of enhancers and on strategies to define their location and function throughout the genome.
Beginning with the discovery that an inherited change in the β-globin gene alters one of the coded amino acids and thereby causes sickle-cell anaemia11,12, thousands of mutations in the coding regions of genes have been identified as responsible for monogenic disorders over the past half century. By contrast, the role of mutations not involving primary gene structural sequences has been minimally explored, largely owing to our inability to recognize relevant non-coding sequences, much less predict their function. The molecular genetic identification of individual enhancers involved in disease has been, in most cases, a painstaking and inefficient endeavour. Nevertheless, a number of successful studies have shown that distant-acting gene enhancers exist in the human genome and that variation in their sequences can contribute to disease. In this section, we discuss three examples in which enhancers were directly shown to play a role in human disease: thalassaemias resulting from deletions or rearrangements of β-globin gene (HBB) enhancers, preaxial polydactyly resulting from sonic hedgehog (SHH) limb-enhancer point mutations, and susceptibility to Hirschsprung’s disease associated with a RET proto-oncogene enhancer variant.
The extensive studies of the human globin system and its role in haemoglobinopathies have historically served as a test bed for defining not only the role of coding sequences in disease11, 12 but also that of non-coding sequences. The α-thalassaemias and β-thalassaemias are haemoglobinopathies resulting from imbalances in the ratio of α-globin to β-globin chains in red blood cells. The molecular basis of these conditions was initially elucidated in cases in which inactivation or deletion of globin structural genes could be readily identified13. However, although gene deletion or sequence changes resulting in a truncated or non-functional gene product explained some thalassaemia cases, for a subset of patients intensive sequencing efforts failed to reveal abnormalities in globin protein-coding sequences. Through extensive long-range mapping and sequencing of DNA from individuals diagnosed with thalassaemia but lacking globin coding mutations, it was eventually discovered that many of these globin chain imbalances were due to deletion or chromosome rearrangements that resulted in the repositioning of distant-acting enhancers required for normal globin gene expression14, 15. These early molecular genetic studies revealed a clear role for non-coding regulatory elements as a cause of human disorders through their impact on gene expression. Since then, many such examples of ‘position effects’, defined as changes in the expression of a gene when its location in a chromosome is changed, often by translocation, have been found16.
In addition to the pathological consequences of the removal or the repositioning of distant-acting enhancers, there are also examples of single-nucleotide changes within enhancer elements as a cause of human disorders. One example of this category of disease-causing non-coding mutation involves the limb-specific long-distance enhancer ZRS (also known as MFCS1) of SHH (Fig. 2). This enhancer is located at the extreme distance of approximately 1 megabase from SHH, within the intron of a neighbouring gene17, 18. Of interest is that, initially, the gene in which the enhancer resides was thought to be relevant for limb development and was therefore named limb region 1 (LMBR1)19. Facilitated by the functional knowledge of the ZRS enhancer from mouse studies, targeted resequencing screens of this enhancer in humans revealed that it is associated with preaxial polydactyly. Approximately a dozen different single-nucleotide variations in this regulatory element have been identified in humans with preaxial polydactyly and segregate with the limb abnormality in families18, 20. Studies of the impact of the human ZRS sequence changes have been carried out in transgenic mice, in which the single-nucleotide changes result in ectopic anterior-limb expression during development, consistent with preaxial digit outgrowth21. Furthermore, sequence changes in the orthologous enhancers were found in mice, as well as in cats, with preaxial polydactyly22, 23, and targeted deletion of the enhancer in mice caused truncation of limbs17. These studies illustrate the importance of first experimentally identifying distant-acting enhancers in allowing subsequent human genetic studies to explore the potential role of disease-causing mutation in functional non-coding sequences.
Phylogenetic analyses of nuclear ribosomal DNA internal transcribed spacer region (ITS) and chloroplast DNA sequence data reveal Linaria-Nuttallanthus as a monophyletic group composed of seven supported major clades that match partly with the current subgeneric treatment of the genus.
Quaternary radiation of bifid toadflaxes (Linaria sect. Versicolores) in the Iberian Peninsula: low taxonomic signal but high geographic structure of plastid DNA lineages
- BiologyPlant Systematics and Evolution
- 2014
Interestingly, a high geographic structure of plastid DNA lineages was revealed, with a major genetic discontinuity separating south-eastern populations from those of the rest of Iberia, and a role of edaphic specialization in differentiation of the two major clades is hypothesized.
A phylogenetic study of the tribe Antirrhineae: Genome duplications and long-distance dispersals from the Old World to the New World.
- BiologyAmerican journal of botany
- 2016
On an updated Antirrhineae phylogeny, it was showed that the three out of four dispersals from the Old World to the New World were coupled with changes in ploidy levels, suggesting that increases in ploidsy levels may facilitate dispersing into new environments.
A synopsis of the Iberian clade of Linaria subsect. Versicolores (Antirrhineae, Plantaginaceae) based on integrative taxonomy
- Biology · Plant Systematics and Evolution
- 2018
A taxonomic synopsis of the Iberian clade of Linaria subsect. Versicolores, based on recently published morphometric and phylogenomic data, will provide the basis for future research on speciation and evolution of the clade.
Molecular phylogeny of the mainly Mediterranean genera Chaenorhinum, Kickxia and Nanorrhinum (Plantaginaceae, tribe Antirrhineae), with focus on taxa in the Flora Iranica region
- Biology
- 2016
To examine the monophyly, relationships and rank of Kickxia, Nanorrhinum and Chaenorhinum, a phylogenetic analysis of the nuclear ribosomal DNA internal transcribed spacer region (ITS) and chloroplast DNA (rpl32-trnL) sequence data was conducted, with special focus on the Flora Iranica region.
Corolla morphology influences diversification rates in bifid toadflaxes (Linaria sect. Versicolores).
- Biology · Annals of botany
- 2013
It is confirmed that different forms of floral specialization can lead to dissimilar evolutionary success in terms of diversification and suggested that opposing individual-level and species-level selection pressures may have driven the evolution of pollinator-restrictive traits in bifid toadflaxes.
Congruence between distribution modelling and phylogeographical analyses reveals Quaternary survival of a toadflax species (Linaria elegans) in oceanic climate areas of a mountain ring range.
- Environmental Science · The New phytologist
- 2013
The Atlantic distribution of inferred refugia suggests that the oceanic (buffered)-continental (harsh) gradient may have played a key and previously unrecognized role in determining Quaternary distribution shifts of Mediterranean plants.
Narrow endemics to Mediterranean islands: Moderate genetic diversity but narrow climatic niche of the ancient, critically endangered Naufraga (Apiaceae)
- Environmental Science
- 2014
Molecular evidence supports ancient long-distance dispersal for the amphi-Atlantic disjunction in the giant yellow shrimp plant (Barleria oenotheroides).
- Environmental Science · American journal of botany
- 2016
This study demonstrates the native status of Barleria in the New World, resolving one of only three presumed natural Old World-New World disjunctions at the species level among Acanthaceae.
Autecological traits determined two evolutionary strategies in Mediterranean plants during the Quaternary: low differentiation and range expansion versus geographical speciation in Linaria
- Biology, Environmental Science · Molecular ecology
- 2013
It is argued that a few traits contributed to the adoption of two contrasting strategies that may have been predominant in the evolution of Mediterranean angiosperms, including selfing and self‐incompatibility.
References
SHOWING 1-10 OF 80 REFERENCES
Phylogenetic relationships of North American Antirrhinum (Veronicaceae).
- Biology · American journal of botany
- 2004
Phylogenetic analyses of sequences of the internal transcribed spacer region of nuclear ribosomal DNA were conducted and confirmed the monophyly of Antirrhinum given the inclusion of the small genus Mohavea and exclusion of A. cyathiferum.
Documentation of reticulate evolution in peonies (Paeonia) using internal transcribed spacer sequences of nuclear ribosomal DNA: implications for biogeography and concerted evolution.
- Biology · Proceedings of the National Academy of Sciences of the United States of America
- 1995
Reconstruction of reticulate evolution with sequence data provides gene records for distributional histories of some of the parental species and demonstrates that the sequence data could be highly informative and accurate for detecting hybridization.
Molecular evidence for naturalness of genera in the tribe Antirrhineae (Scrophulariaceae) and three independent evolutionary lineages from the New World and the Old
- Biology · Plant Systematics and Evolution
- 2004
Parsimony (cladistics), distance-based (Neighbor-Joining), and Bayesian inference reveal that the tribe is a natural group and genera such as Linaria, Schweinfurthia, Kickxia, and Antirrhinum also form natural groups.
Piecing together the "new" Plantaginaceae.
- Biology · American journal of botany
- 2005
In a phylogenetic study of 47 members of Plantaginaceae and seven outgroups based on 3561 aligned characters from four DNA regions, the relationships within this clade were analyzed, and the results from parsimony and Bayesian analyses support the removal of the Lindernieae from Gratioleae to a position outside Plantaginaceae.
Phylogeny of the tribe Antirrhineae (Scrophulariaceae) based on morphological andndhF sequence data
- Biology · Plant Systematics and Evolution
- 2004
It is concluded that hummingbird pollination has evolved independently within Antirrhineae at least three times from bee-pollinated ancestors.
Historical Isolation versus Recent Long-Distance Connections between Europe and Africa in Bifid Toadflaxes (Linaria sect. Versicolores)
- Geography · PloS one
- 2011
Four events of post-Messinian colonization following long-distance dispersal from northern Africa to the Iberian Peninsula, Sicily and Greece are strongly inferred, providing new evidence for the biogeographic complexity of the Mediterranean region.
Phylogenetic analysis of Sorghum and related taxa using internal transcribed spacers of nuclear ribosomal DNA
- Biology, Medicine · Theoretical and Applied Genetics
- 2004
The phylogenetic relationships of the genus Sorghum and related genera were studied by sequencing the nuclear ribosomal DNA (rDNA) internal transcribed spacer region (ITS), and the results indicate that S. arundinaceum race aethiopicum may be the closest wild relative of cultivated sorghum.
Coalescent Simulations Reveal Hybridization and Incomplete Lineage Sorting in Mediterranean Linaria
- Environmental Science · PloS one
- 2012
This methodology is presented as a functional tool to disclose the evolutionary history of species complexes that have experienced both hybridization and incomplete lineage sorting, including the Quaternary-type climatic oscillations in the Mediterranean flora.
A Comparison of Antirrhinoside Distribution in the Organs of Two Related Plantaginaceae Species with Different Reproductive Strategies
- Biology, Medicine · Journal of Chemical Ecology
- 2009
Findings are consistent with Optimal Defense Theory (ODT) and further work on the distribution of antirrhinoside and the effect of insect herbivory on plant fitness in other related species is needed.
Characterization of Linaria KNOX genes suggests a role in petal-spur development.
- Biology · The Plant journal: for cell and molecular biology
- 2011
A model in which KNOX gene expression during early petal-spur development promotes and maintains further morphogenetic potential of the petal, as previously described, is proposed, indicating that petal spurs could have evolved by changes in regulatory gene expression that cause rapid and potentially saltational phenotypic modifications. | https://www.semanticscholar.org/paper/A-Phylogeny-of-Toadflaxes-(Linaria-Mill.)-Based-on-Fern%C3%A1ndez%E2%80%90Mazuecos-Blanco%E2%80%90Pastor/7dd3c9430887663eb71753ef4533feaa2d57096f |
As the field of genetics emerged in the early twentieth century, it became apparent that Darwinism's natural selection and adaptability were not sufficient as sole contributors to the creation of new species. The combination of Darwinian theory and genetic mutation, however, seemed to present a more credible evolutionary approach, now referred to as neo-Darwinism. Where Darwinism alone could not provide a framework for creating new species, neo-Darwinian thinking was expected to close the gap and supply the missing pieces. In addition to genetic mutations, the modified theory still relied on the availability of time and chance as contributing agents for change.
The discovery in 1953 of the double helix, the twisted-ladder structure of deoxyribonucleic acid (DNA), by James Watson and Francis Crick marked a milestone in the history of science and gave rise to modern molecular biology, which is largely concerned with understanding how genes control the chemical processes within cells. In short order, their discovery yielded ground-breaking insights into genetic code and protein synthesis.
The following briefly describes the complexities of DNA and its role in providing irrefutable evidence in the Creation/Evolution debate. Even if there were millions of years with billions of genetic mutations (copying errors), the creation of the DNA molecule, and of life itself, remains impossible without a predetermined design plan. The double-helix DNA molecule and the human brain are perhaps the most intricate structures in the universe. DNA is but one component of the complex "machinery" that resides within the cell wall.
The embedded DNA consists of a "list" of instructions describing every detail of the organism. DNA is structured as a double-helix molecule, envisioned as "rungs on a ladder" with each "rung" termed a base pair. Each DNA strand has ~3 billion base pairs that comprise some 20,000-25,000 genes. Genes are further organized into 23 pairs of chromosomes. To understand the magnitude of complexity designed into the genetic system, chromosome 1 alone consists of 249 million base pairs. If unwound, a single DNA molecule would stretch to a length of ~6 ft (2 m). With a few exceptions, DNA resides in each of the 3-trillion cells in the human body.
It defies the imagination to even attempt to understand how evolutionary philosophy can suggest that a random, unguided process could create such a complex set of steps through a chain consisting of over 3 billion coded instructions (base pairs), each requiring a very specific combination and order.
DNA Replication: The animation (below) demonstrates the process by which DNA is replicated, presenting only a small part in the process of God's creation of life. The animation should surely raise doubt as to how this amazing "machine" could have evolved by time and chance alone, and without the intervention of intelligence and design.
DNA replication is the process of producing two identical copies from one original DNA molecule. This biological process occurs in all living organisms and is the basis for biological inheritance. DNA is composed of two strands and each strand of the original DNA molecule serves as a template for the production of the complementary strand, a process referred to as semiconservative replication. Cellular proofreading and error-checking mechanisms ensure near-perfect fidelity for DNA replication.
DNA mutations are a result of copying errors during replication. The discussion below may help to raise common sense questions regarding the notion that mutations can direct evolutionary design without intelligent planning. The opposable thumb on the human hand will serve as an example.
The Grasping Hand: The grasping hands of primates are an adaptation to life in the trees. The common ancestors of all primates evolved an opposable thumb that helped them grasp branches. As the grasping hand evolved, claws disappeared. Today, most primates instead have flat fingernails and larger fingertip pads, which help them to hold on. The hands of many higher primates can grasp and manipulate even very small objects.
What makes human thumbs unique?: The human opposable thumb is longer, compared to finger length, than any other primate thumb. The long thumb and its ability to easily touch the other fingers allows humans to firmly grasp and manipulate objects of many different shapes. The human hand can grip with strength and with fine control, so it can throw a baseball or sign a name on the dotted line.
Comment: Although adaptation and mutation are presented as agents of evolutionary development, the progeny are still of the same species. Natural selection and adaptation are among the causes of variation within species; a process called micro-evolution. As an example, the created dog "kind" would have all the genetic traits to produce variation (various breeds, sizes of dogs, temperament, etc.) but cannot create a new species, even if millions of years were available.
Evolutionary change that supposedly creates a new species is termed macro-evolution, or, the creation of an entirely new and identifiable organism. The most prevalent force for macro-change, according to the annals of evolution, is genetic mutation. These mutations are rare and predominantly occur during the DNA replication process. Mutations are copying errors that are mostly neutral or lethal. It is also possible that mutations could contribute to a positive outcome, but they are not in any way capable of creating a new species. The "Grasping Hand" comments below will help to understand why an organism cannot increase in complexity per the story told by neo-Darwinian theory.
The American Museum of Natural History (AMNH) cites the opposable thumb as simply a continuum of evolution in action. The fact is that climbing trees can no more cause change in a species than living in a tree house. Consider the fact that a change, even a localized one such as the opposable thumb, would require an undeterminable number of mutations with all occurrences having a focus on renovating the appendage. The AMNH states that due to the dexterity of the thumb, "The human hand can grip with strength and with fine control, so it can throw a baseball or sign a name on the dotted line."
Even if there were thousands of mutations to work with, there is no reason that any would continue to advance the evolutionary process. Evolution is dependent upon mutations (copying errors) and is not guided by intelligence or planning. In summary, every mutation would occur independent from the previous and there is no scientific principle that would cause it to act in concert with its predecessors. A hand with an opposable thumb would require an intricate design that includes nerves, muscles, bones, tendons, arteries, veins, etc.; all working in coordination with the hand, arm, and brain. One would have to admit that the design of even one lonely appendage would require engineering skills that are non-existent at any natural level.
Evolutionary genetics has determined that the rate of mutation is highly variable but still contends that a calibration can be made to fit what is identified as the molecular clock. The "clock" is supposedly useful to evolutionists when calculating the divergence of a species from its parent.
The DNA replication process is intricate in the extreme, and there is no doubt that mutations occur. It is scientifically elusive, however, to attempt to gauge a rate of mutation over great ages, since a mutation is a random copying error. As mentioned previously, the problem is further compounded by the fact that there is no available scientific mechanism that even hints at the possibility of a copying error building on previous errors to form a more complex organism.
For the past 40 years, evolutionary biologists have been investigating the possibility that some evolutionary changes occur in a clock-like fashion. Over the course of millions of years, mutations may build up in any given stretch of DNA at a reliable rate. For example, the gene that codes for the protein alpha-globin (a component of hemoglobin) experiences base changes at a rate of 0.56 changes per base pair per billion years. If this rate is reliable, the gene could be used as a molecular clock.
Comment: Emphasized words in the above such as possibility or the inference that mutations may build up might be useful for scientific introspection to create an environment for testing, but they do not constitute discovery at any level. The mention of clock-like fashion suggests that a degree of accuracy exists that can be calculated over the millions of years period. The statement, "If this rate is reliable," places the entire argument in the category of subjective guesswork.
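Whatever one concludes about its reliability, the clock arithmetic quoted above is simple to state explicitly. The sketch below uses only the 0.56 changes per base pair per billion years figure from the quotation; the gene length is an illustrative assumption, not a value from the text.

```python
# Back-of-the-envelope molecular-clock arithmetic using the quoted
# alpha-globin rate. The gene length is an assumed, illustrative value.
RATE = 0.56e-9          # substitutions per base pair per year (quoted rate)
GENE_LENGTH = 429       # assumed alpha-globin coding length in bp

def expected_substitutions(years, length_bp=GENE_LENGTH, rate=RATE):
    """Expected substitutions accumulated in one lineage over `years`."""
    return rate * length_bp * years

def divergence_time(observed_subs, length_bp=GENE_LENGTH, rate=RATE):
    """Naive clock estimate of time since two lineages split (years).

    Both lineages accumulate changes independently after the split,
    hence the factor of 2.
    """
    return observed_subs / (2 * rate * length_bp)
```

This is only the arithmetic behind the claim; whether the rate is constant enough to justify it is precisely the point in dispute here.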
Is there a need for intelligence?
An extract follows from a more lengthy quotation by Dr. George Wald, Nobel Laureate, regarding his belief in the formation of the first cell. The quote may be seen with more context in the chapter on The Origin of Life.
Given so much time, the "impossible" becomes possible, the possible probable, and the probable virtually certain. One has only to wait: time itself performs the miracles. | https://in6days.org/c.02.01.02.02-evol.mutation.html |
What traits did Linnaeus consider when classifying organisms?
Organisms could look the same but not be the same thing
What problems are faced by taxonomists who rely on body-structure comparisons?
True. Biologists now group organisms into categories that represent lines of evolutionary descent.
True or False: Darwin's theory of evolution changed the way biologists thought about classification.
Into categories that represent lines of evolutionary descent
How do biologists now group organisms into categories?
False
True or False: Genera placed within a family should be less closely related to one another than to members of any other family.
The strategy of grouping organisms together based on their evolutionary history
evolutionary classification:
T
F
T
T
Cladistic Analysis (True or False):
--It considers only traits that are evolutionary innovations
--It considers all traits that can be measured.
--It considers only similarities in body structure.
--It is a method of evolutionary classification.
Characteristics that appear in recent parts of a lineage, but not in older members
derived characters:
A diagram that shows evolutionary relationships among a group of organisms
cladogram:
True
True or False: Derived characters are used to construct a cladogram.
False. All organisms have DNA.
True or False: Some organisms do not have DNA or RNA.
Common DNA strip
How do similarities in genes show humans and yeast share a common ancestry?
A model that uses DNA comparisons to estimate the length of time that two species have been evolving independently.
molecular clock:
Mutation.
A molecular clock relies on the repeating process of ______.
Neutral mutations have no effect on phenotype. They accumulate in the DNA of different species at about the same rate. A comparison of such DNA sequences in two species can reveal how dissimilar the genes are.
Why are only neutral mutations useful for molecular clocks?
True.
True or False: The degree of dissimilarity in DNA sequences is an indication of how long ago two species shared a common ancestor.
There are many molecular clocks in a genome because some genes accumulate mutations faster than others. These different clocks allow researchers to time different kinds of events.
Why are there many molecular clocks in a genome instead of just one?
| https://quizlet.com/74271895/biology-millerlevine-chapter-18-classification-section-18-2-modern-evolutionary-classification-flash-cards/
Over the weekend the WHO declared the monkeypox outbreak a “public health emergency of international concern” (PHEIC). Monkeypox (MPX) is typically found in western and central Africa. In the rest of the world only a handful of cases have been diagnosed in people who travelled from these countries, or were linked to animal trafficking. However, as of early May 2022, growing numbers of cases have been detected around the globe. As of 25 July, there have been 16,016 confirmed cases in 74 countries, of which over 11,000 are in Europe and over 3,700 from the Americas. While MPX can cause rare complications and death in 1-10% of cases, depending on the strain, no deaths have been reported by previously unaffected countries.
Containment of MPX outbreaks relies on a strategy of educating people about the disease and finding cases early to reduce the chance of transmission. There are vaccines which can prevent disease even if used up to four days after a person is exposed to the virus. Track and trace procedures to identify close contacts of people with MPX can identify people who could receive these vaccines. Some countries are also offering the vaccine to at risk populations.
This outbreak is exposing gaps in our knowledge about the disease. MPX is not highly transmissible, but its continued circulation is a concern. One complication is that the extent of asymptomatic infection is unknown; undetected infections could hamper control efforts and delay testing.
The use of sequencing
Sequencing the MPX virus to read its genetic code has an important part to play in understanding the outbreak. It allows researchers to piece together transmission networks, gives clues as to how the virus spreads from person to person, and can reveal any genetic changes that might make the virus more pathogenic or transmissible.
Right now, the world is particularly well placed to implement genomic epidemiology measures, which have been established and widely used during the COVID-19 pandemic. Genomic sequencing centres that analyse COVID-19 samples have been redeployed to sequence MPX, and sequencing of wastewater is being used to understand how widespread the outbreak is.
The true burden of MPX in countries where it is endemic, and the diversity and extent of the animal reservoir, are poorly understood, yet both are critical for preventing and controlling future outbreaks as well as for understanding the current one. Animal reservoirs can lead to animal-to-human infections, which are the main cause of outbreaks in endemic countries. Understanding what genetic changes separate the virus circulating in animal reservoirs from the outbreak strain could provide clues about the human-to-human transmission occurring in this outbreak. Supporting these countries so they can investigate these questions should be a priority.
The MPX genome is complex
MPX is a DNA virus and its genome structure poses some challenges for sequencing. It is large – around 200,000 DNA letters long, about seven times the size of SARS-CoV-2.
The first full draft sequence from the current outbreak, covering 92% of the MPX reference genome, was released by a Portuguese team in May 2022. This genomic information was refined and updated as more sequencing was done.
Despite some gaps, this draft has helped scientists identify some important features in the early stages of the outbreak. For example, the strain of MPX virus in the current outbreak closely resembles a West African strain carried by travellers from Nigeria to Singapore, Israel, and the UK in 2018-19. The West African strain has a fatality rate of around 1-3.5%, less severe than the 10% fatality rate of the Congo strain. Knowing which strain is circulating and causing an outbreak can be vital in determining the urgency of any public health response.
The virus changes
Changes in the viral genome can affect its biology – e.g. making it more or less infectious – and also provide scientists with a tool to track the evolution of the virus and compare current and past outbreaks. The MPX virus has a slow evolutionary rate – around one genetic letter change per year – making it more difficult to study over short time periods. However, sequences from the current outbreak have up to 50 changes when compared to the 2018 MPX viruses. This is an unexpectedly high number: no more than five to ten changes would normally be expected in that time frame. The explanation for these differences is not yet clear; however, further sequencing, including sequencing of archived MPX samples from endemic countries, may provide further insight, including a more accurate evolutionary rate.
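As a rough sanity check, the mismatch just described can be written out numerically. The clock rate and observed change count are the figures quoted in this post; treating the 2018-2022 interval as four years is an assumption made only for illustration.

```python
# Observed vs expected genetic changes in 2022 MPX outbreak sequences,
# using figures quoted in the text. The 4-year interval is an assumption.
CLOCK_RATE_PER_YEAR = 1     # ~1 genetic letter change per genome per year
YEARS_SINCE_2018 = 4        # 2018 reference strain -> 2022 outbreak (assumed)
OBSERVED_CHANGES = 50       # changes reported relative to 2018 viruses

expected = CLOCK_RATE_PER_YEAR * YEARS_SINCE_2018   # ~4 changes expected
excess_factor = OBSERVED_CHANGES / expected         # observed is >10x higher
```

The order-of-magnitude gap, not the exact ratio, is what prompted the speculation about altered mutational processes discussed below.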
For now, the implications of these changes for disease severity or transmissibility are unclear. Initially some scientists speculated that a new, more transmissible form of MPX might have emerged, but the genomic sequence data does not appear to support this. Some now think that sustained onward transmission might explain the current outbreak rather than increased transmissibility. One possible reason could be the time it takes for symptoms to appear, leading to continued transmission before someone realises they are infected. What is clear is that a threshold has been crossed that is allowing for human-to-human transmission. Scientists hope that further sequencing will help address these uncertainties, as well as clarify exactly when and how the outbreak began.
Sequencing data from MPX virus in the UK shows a similar number of mutations, 48, compared to the 2018 strain. These mutations are distributed across the genome and three are classified as high priority since they occur in genes coding for proteins that are involved in virus transmission, virulence or interaction with antiviral drugs. More work is needed to assess their exact impact.
Data sharing
As of 20 July, 429 sequences from 24 countries have been uploaded onto the database NCBI Virus GenBank. As has been demonstrated during the COVID-19 pandemic, sharing of virus sequence information in public databases enables real-time accurate comparisons of sequences, supporting faster insights into the spread of the disease, and development of appropriate public health interventions. It is hoped that as the outbreak progresses, further sequences will become available in public databases and help answer many remaining questions about the MPX virus.
Going forward
With the WHO declaring the MPX outbreak a PHEIC, it is likely that it will lead to greater international cooperation on research and vaccines to battle the outbreak. The MPX outbreak illustrates how vital it is to monitor viruses that have the potential to cause a major health crisis. Every new outbreak is a chance for us to refine surveillance, including use of genetic epidemiology to monitor and understand pathogens, so that we can be better prepared to prevent future outbreaks. | https://phgfoundation.org/blog/sequencing-monkeypox |
4.8: Chemical Reactions in Aqueous Solutions
Chemical substances interact in many different ways. Certain chemical reactions exhibit common patterns of reactivity. Due to the vast number of chemical reactions, it becomes necessary to classify them based on the observed patterns of interaction.
Water is a good solvent that can dissolve many substances. For this reason, many chemical reactions take place in water. Such reactions are called aqueous reactions. The three most common types of aqueous reactions are precipitation, acid-base, and oxidation-reduction.
Reactions in Aqueous Solutions
A precipitation reaction involves the exchange of ions between ionic compounds in aqueous solution to form an insoluble salt or a precipitate. In an acid-base reaction, an acid reacts with a base, and the two neutralize each other, producing salt and water. An oxidation–reduction reaction involves the transfer of electrons between reacting species. The reactant that loses electrons is said to be oxidized, and the reactant that gains electrons is said to be reduced.
Equations for Aqueous Reactions
When ions are involved, there are various ways of representing the reactions that take place in aqueous media, each with a different level of detail. To understand this, let us take an example of a precipitation reaction. The reaction is between aqueous solutions of ionic compounds, like BaCl2 and AgNO3. The products of the reaction are aqueous Ba(NO3)2 and solid AgCl:

BaCl2 (aq) + 2AgNO3 (aq) --> 2AgCl (s) + Ba(NO3)2 (aq)
This balanced equation is called a molecular equation. Molecular equations provide stoichiometric information for quantitative calculations and also help identify the reagents used and the products formed. However, a molecular equation does not provide the details of the reaction process in solution; that is, it does not indicate the different ionic species that are present.
Ionic compounds such as BaCl2, AgNO3, and Ba(NO3)2 are water-soluble. They dissolve by dissociating into their constituent ions, and their ions are homogeneously dispersed in solution.
Since AgCl is an insoluble salt, it does not dissociate into ions and stays in solution as a solid. Considering the above factors, a more realistic representation of the reaction would be:

Ba2+ (aq) + 2Cl- (aq) + 2Ag+ (aq) + 2NO3- (aq) --> 2AgCl (s) + Ba2+ (aq) + 2NO3- (aq)
This is the complete ionic equation in which all dissolved ions are explicitly represented.
This complete ionic equation indicates two chemical species that are present in identical form on both sides, Ba2+ (aq) and NO3− (aq). These are called spectator ions. The presence of these ions is required to maintain charge neutrality. Since they are neither chemically nor physically changed by the process, they may be eliminated from the equation.
This equation can be further simplified to give:

Ag+ (aq) + Cl- (aq) --> AgCl (s)
This is the net ionic equation. It indicates that solid silver chloride may be produced from dissolved chloride and silver ions, regardless of the source of these ions.
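The spectator-ion cancellation described above is mechanical enough to sketch in code. The ion lists below hand-encode the BaCl2/AgNO3 example from this section; this is an illustration of the cancellation step only, not a general equation parser.

```python
from collections import Counter

def cancel_spectators(reactant_ions, product_ions):
    """Remove species that appear identically on both sides of a
    complete ionic equation (the spectator ions)."""
    left, right = Counter(reactant_ions), Counter(product_ions)
    spectators = left & right                      # multiset intersection
    net_left = list((left - spectators).elements())
    net_right = list((right - spectators).elements())
    return net_left, net_right, list(spectators.elements())

# Complete ionic equation: Ba2+ + 2Cl- + 2Ag+ + 2NO3- --> 2AgCl(s) + Ba2+ + 2NO3-
lhs = ["Ba2+", "Cl-", "Cl-", "Ag+", "Ag+", "NO3-", "NO3-"]
rhs = ["AgCl(s)", "AgCl(s)", "Ba2+", "NO3-", "NO3-"]
net_lhs, net_rhs, spectators = cancel_spectators(lhs, rhs)
# Net ionic equation: 2Ag+ + 2Cl- --> 2AgCl(s); Ba2+ and NO3- are spectators
```

Counting each ion once per formula unit makes the cancellation a plain multiset subtraction, which is exactly what is done by hand when striking out spectator ions.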
This text is adapted from OpenStax Chemistry 2e, Section 4.1: Writing and Balancing Chemical Equations. | https://www.jove.com/science-education/11266/chemische-reaktionen-in-wssrigen-lsungen?language=German |
•A precipitate is an insoluble solid that forms when two or more solutions are mixed.
•It appears cloudy in solution since the small suspended particles scatter light.
•Precipitates form when the ions experience an electrostatic attraction to each other and they stick together, forming crystals.
A precipitate can be removed by filtration.
IONIC EQUATIONS
•An ionic equation only shows the reacting species.
•Eg. Barium nitrate reacts with Sodium sulfate to produce the precipitate Barium sulfate.
Ba(NO3)2 (aq) + Na2SO4 (aq) --> BaSO4 (s) + 2NaNO3 (aq)
Showing individual ions:
Ba2+ (aq) + 2NO3- (aq) + 2Na+ (aq) + SO4 2- (aq) ---> BaSO4 (s) + 2Na+ (aq) + 2NO3- (aq)
Ionic equation- only shows reacting species:

Ba2+ (aq) + SO4 2- (aq) ---> BaSO4 (s)
ChemSeparate the following balanced chemical equation into its total ionic equation. AgNO3(aq)+NaCl(aq) --> NaNO3(aq)+AgCl(s) (List the ions in order of the above equation.) Write the net ionic equation for the reaction above.
-
chemistryAn explosive whose chemical formula is C3H6N6O6 produces water, carbon dioxide, and nitrogen gas when detonated in oxygen. write the chemical equation for the detonation reaction of this explosive.
-
biologyWhich of the following is true about the relationship between energy and chemical reactions? Activation energy is required for a chemical reaction to occur. Activitation energy is not necessary in a chemical reaction which
-
chemistryPrecipitation Reactions Write a molecular equation for the precipitation reaction that occurs (if any) when the following solutions are mixed. If no reaction occurs, write NO REACTION. Express your answer as a chemical equation.
-
Chemistry1) Write a balanced chemical equation for the reaction between Cu(NO3)2 * 3 H2O and NaOH. Underline the formula for the precipitate produced by this reaction. (The water of hydration in Cu(NO3)2 * 3 H2O appears as liquid water on
-
ScienceWrite a balanced chemical equations, ionic equation, and net ionic equation for this reaction: aqueous solution of sulfuric acid and potassium.
-
science (chem)what is the chemical eqzn of Na2CO3 + HClO4? Na2CO3 + HClO4 ==> NaClO4 + H2O + CO2 I will leave it for you to balance. how did you get that DrBob The rule on carbonates is: An acid added to a carbonate yields carbon dioxide,
-
Chemistrywhen heated, sulfuric acid will decompose into sulfur trioxide and water. Write a chemical equation, using words, to represent this chemical reaction. Identify reactants and products. (I'm having trouble identifying the reactants
| https://www.jiskha.com/questions/173489/how-do-you-write-a-chemical-equation-with-this-reaction-can-u-help-me-solve-these
When chlorine is added to acetylene, 1,1,2,2-tetrachloroethane is formed: 2 Cl2(g) + C2H2(g) --> C2H2Cl4(l) How many dm3 of chlorine will be needed to make 75.0 grams of C2H2Cl4?
BaCl2(aq) + 2AgNO3(aq) --> 2AgCl(s) + Ba(NO3)2(aq) Excess barium chloride solution is added to 25.00 cm3 of 0.100 M silver nitrate solution. What mass of silver chloride is formed?
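The silver chloride question above can be checked with a short mole-ratio calculation. This is a sketch: the molar mass of AgCl (about 143.32 g/mol) is supplied by hand rather than computed, and the measured AgNO3 solution is assumed to be the limiting reagent, as the problem states.

```python
def precipitate_mass(molarity, volume_ml, molar_mass_product, product_per_reactant=1.0):
    """Mass of precipitate formed from a measured volume of the limiting solution.

    molarity, volume_ml: concentration (mol/L) and volume (mL) of the limiting reagent.
    product_per_reactant: stoichiometric mol of product per mol of limiting reagent.
    """
    moles_reactant = molarity * volume_ml / 1000.0   # mL -> L, then mol
    return moles_reactant * product_per_reactant * molar_mass_product

# 25.00 cm3 of 0.100 M AgNO3; 1 mol AgCl per mol AgNO3; AgCl ~ 143.32 g/mol (assumed value)
print(round(precipitate_mass(0.100, 25.00, 143.32), 3))  # 0.358 g of AgCl
```

The same pattern (volume × molarity → moles → ratio → grams) covers most of the stoichiometry questions in this compilation.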
What is the substance present in the smallest amount in a solution?
Solute
Which process defines how molecular compounds form ions upon dissolution?
Ionization
Which of these compounds is a strong electrolyte?
KOH
Based on the solubility rules, which one of these compounds should be insoluble in water?
AgBr
Based on the solubility rules, which one of these compounds should be soluble in water?
Na2S
Based on the solubility rules, which of these processes will occur if solutions of CuSO4 (aq) and BaCl2 (aq) are mixed?
BaSO4 will precipitate; Cu2+ and Cl- are spectator ions
Select the correct set of products for the following reaction
Ba(OH)2 (aq) + HNO3 (aq)----->
Ba(NO3)2 (aq) + H2O(l)
What is the chemical formula of the salt produced by the neutralization of nitric acid with calcium hydroxide?
Ca(NO3)2
Complete the following reaction and identify the Bronsted acid:
NaOH(aq) + HCl(aq) ----->
H2O(l) + NaCl (aq); HCl is the acid
Which of the following is oxidized in the following reaction?
Fe + Ag2O---->FeO + 2Ag
Fe
The oxidation number of Cr in Cr2O72- is
+6
Which one of these equations describes a redox reaction?
2Al(s) + 3H2SO4(aq) ----> Al2(SO4)3 (aq) + 3H2 (g)
Identify the reducing agent in the chemical reaction
Cd + NiO2 + 2H2O ----> Cd (OH)2 + Ni(OH)2
Cd
A 50.0 mL sample of 0.436 M NH4NO3 is diluted with water to a total volume of 250.0 mL. What is the ammonium nitrate concentration in the resulting solution?
8.72 x 10^-2 M
50.0 mL x 0.436 M / 250.0 mL = 0.0872 M
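The arithmetic on this card is the dilution relation M1V1 = M2V2. A quick check in Python, using the card's numbers:

```python
def diluted_concentration(m_initial, v_initial, v_final):
    """Concentration after dilution, from M1*V1 = M2*V2 (both volumes in the same unit)."""
    return m_initial * v_initial / v_final

# 50.0 mL of 0.436 M NH4NO3 diluted with water to 250.0 mL total
m_final = diluted_concentration(0.436, 50.0, 250.0)
print(round(m_final, 4))  # 0.0872, i.e. 8.72 x 10^-2 M
```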
A standard solution of 0.243M NaOH was used to determine the concentration of a hydrochloric acid solution. If 46.33 mL of NaOH is needed to neutralize 10.00 mL of the acid, what is the molar concentration of the acid?
1.13 M
34.62 mL of 0.1510 M NaOH was needed to neutralize 50.0 mL of an H2SO4 solution. What is the concentration of the original sulfuric acid solution?
0.0523 M
Which of the following is/are characteristic of gases?
High compressibility, relatively large distances between molecules, and formation of homogenous mixtures regardless of the nature of gases
If the atmospheric pressure in Denver is 0.88 atm, then what is this in mmHg (1 atm = 101,325 Pa = 760 torr; 1 torr = 1 mmHg)?
668.8 mmHg
The pressure of an ideal gas is inversely proportional to its volume at constant temperature and number of moles; this is a statement of ____ law.
Boyle's
A sample of a gas has an initial pressure of 0.987 atm and a volume of 12.8L. What is the final pressure if the volume is increased to 25.6L?
0.494 atm
A sample of carbon dioxide gas at 125C and 248 torr occupies a volume of 275L. What will the gas pressure be if the volume is increased to 321L at 125C?
212 torr
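Both gas-law cards above use the same relation, Boyle's law: P1V1 = P2V2 at constant temperature and moles. A minimal sketch checking both answers:

```python
def boyle_final_pressure(p1, v1, v2):
    """Final pressure at constant temperature and moles (Boyle's law): P1*V1 = P2*V2."""
    return p1 * v1 / v2

# 0.987 atm at 12.8 L, expanded to 25.6 L: volume doubles, so pressure halves
p2 = boyle_final_pressure(0.987, 12.8, 25.6)   # ~0.494 atm

# 248 torr at 275 L, expanded to 321 L at constant 125 C
print(round(boyle_final_pressure(248, 275, 321)))  # 212 torr
```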
What are the conditions of STP?
0 °C (273.15 K) and 1 atm
Much as a cook uses recipes to guide her/him in preparing dishes, chemists use chemical equations as ways of expressing the results of chemical reactions. Sometimes it is more descriptive to use the ions involved in a reaction rather than the ionic compounds. For example, to express that hydrochloric acid reacts with sodium hydroxide to give water and sodium chloride, we could write:
Molecular equation: HCl(aq) + NaOH(aq) → H2O(l) + NaCl(aq)
The above equation is fine in some ways, but in actuality NaOH and HCl solutions do not contain NaOH and HCl molecules, and there are no NaCl molecules in the solution after the reaction occurs. The better description for this reaction is:
Total ionic equation: H+(aq) + Cl–(aq) + Na+(aq) + OH–(aq) → Cl–(aq) + Na+(aq) + H2O(l)
Net ionic equation: H+(aq) + OH-(aq) → H2O(l)
The first equation is called the total ionic equation, while the second is termed the net ionic equation. A net ionic equation summarizes the changes that have taken place as a result of a chemical reaction.
Background
The solubility of a substance in a solvent is the maximum amount of the substance that can be dissolved in a given amount of solvent. While there is no exact definition for the boundary between soluble and not soluble, a general guide might be:
SOLUBILITY | TERM
> 0.1 M | Soluble
0.01 to 0.1 M | Moderately soluble
< 0.01 M | Slightly soluble
Some guidelines have been prepared to help estimate a compound’s solubility. These are written so that statements higher in the list take priority over statements below them. For example, an alkali metal carbonate is soluble, since “all salts of alkali metals (#2) are soluble” takes priority over “all carbonates are insoluble.” [There are exceptions that are not included here. (See textbook)]
Solubility Rules:
- All nitrates (NO3-) and acetates (C2H3O2-) are soluble.
- All salts of alkali metals and NH4+ are soluble.
- All common halides (Cl-, Br–, I–) are soluble, except those of Ag+, Pb2+, Cu+, Hg22+
- Hydroxides (OH-) are insoluble except for Na+, Ca2+, and Ba2+.
- All sulfates (SO42-) are soluble, except BaSO4 which is slightly soluble, and CaSO4 and Ag2SO4, which are moderately soluble.
- All carbonates (CO32-) and phosphates (PO43-) are insoluble.
Procedure
Note: perform this experiment with a partner, but be sure to observe the tests together.
Make up the following standard solutions:
50.0 mL of 0.10 M KI(aq) from KI(s)
25.0 mL of 0.10 M CuCl2 (aq) from CuCl2 ● 2H2O (s)
50.0 mL of 0.10 M H2SO4 (aq) from 1.00 M H2SO4(aq)
25.0 mL of 0.10 M NaOH (aq) from 1.00 M NaOH(aq)
50.0 mL of 0.10 M Na2CO3 (aq) from Na2CO3(s)
Three test solutions will be prepared for you: 0.10 M AgNO3, 0.10 M BaCl2, and 0.10 M Co(C2H3O2) 2.
To determine how to prepare a solution from a solution of higher concentration, use the formula: Ci Vi = Cf Vf
where:
Ci = initial concentration of higher concentration solution
Vi = initial volume of the higher concentration solution
Cf = final concentration (of the diluted solution)
Vf = final volume of the diluted solution (volume of the initial solution plus water)
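The Ci Vi = Cf Vf relation above can be rearranged to find how much stock solution to measure out before topping up with water. A sketch, using the 0.10 M H2SO4 preparation from the list above:

```python
def stock_volume_needed(c_stock, c_final, v_final):
    """Volume of stock solution required, from Ci*Vi = Cf*Vf (volumes in the same unit)."""
    return c_final * v_final / c_stock

# 50.0 mL of 0.10 M H2SO4 prepared from 1.00 M H2SO4 stock
vi = stock_volume_needed(1.00, 0.10, 50.0)
print(vi)  # 5.0 mL of stock, then dilute with water to 50.0 mL total
```

The same function answers the 0.10 M NaOH preparation (2.5 mL of 1.00 M stock diluted to 25.0 mL).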
Tests for chemical reactions:
For each of the solutions you prepared, test each solution with the following test solutions:
0.10 M AgNO3
0.10 M BaCl2
0.10 M Co(C2H3O2) 2
0.10 M Na2CO3 (the solution you prepared)
A test involves taking 5 mL of your solution and adding 3 – 5 drops of the test solution. Make observations in the table on the worksheet.
Things to note:
(a) a change of color;
(b) a separate phase is formed;
(c) any other change.
Report
Complete the worksheet for your lab report. Optional survey.
The following procedure may help you when writing a net reaction for any chemical reaction.
- Determine the principal forms present of each reactant in solution.
- Determine the principal forms present after the reaction has occurred.
- Write a balanced equation for the reaction.
- Cross out reactants and products that do not change (the spectator ions).
Learning Objectives
- Recognize chemical reactions as single-replacement reactions and double-replacement reactions.
- Use the periodic table, an activity series, or solubility rules to predict whether single-replacement or double-replacement reactions will occur.
Up to now, we have presented chemical reactions as a topic, but we have not discussed how the products of a chemical reaction can be predicted. Here we will begin our study of certain types of chemical reactions that allow us to predict what the products of the reaction will be.
A single-replacement reaction is a chemical reaction in which one element is substituted for another element in a compound, generating a new element and a new compound as products. For example,
2 HCl(aq) + Zn(s) → ZnCl2(aq) + H2(g)
is an example of a single-replacement reaction. The hydrogen atoms in HCl are replaced by Zn atoms, and in the process a new element, hydrogen, is formed. Another example of a single-replacement reaction is
2 NaCl(aq) + F2(g) → 2 NaF(s) + Cl2(g)
Here the negatively charged ion changes from chloride to fluoride. A common characteristic of a single-replacement reaction is that there is one element as a reactant and another element as a product.
Not all proposed single-replacement reactions will occur between two given reactants. This is most easily demonstrated with fluorine, chlorine, bromine, and iodine. Collectively, these elements are called the halogens and are in the next-to-last column on the periodic table (see Figure 4.1 "Halogens on the Periodic Table"). The elements at the top of the column will replace the elements below them on the periodic table, but not the other way around. Thus, the reaction represented by
CaI2(s) + Cl2(g) → CaCl2(s) + I2(s)
will occur, but the reaction
CaF2(s) + Br2(ℓ) → CaBr2(s) + F2(g)
will not, because bromine is below fluorine on the periodic table. This is just one of many ways the periodic table helps us understand chemistry.
Figure 4.1 Halogens on the Periodic Table
The halogens are the elements in the next-to-last column on the periodic table.
Example 2
Will a single-replacement reaction occur? If so, determine the products.
1. MgCl2 + I2 → ?
2. CaBr2 + F2 → ?
Solution
1. Because iodine is below chlorine on the periodic table, a single-replacement reaction will not occur.
2. Because fluorine is above bromine on the periodic table, a single-replacement reaction will occur, and the products of the reaction will be CaF2 and Br2.
Test Yourself
Will a single-replacement reaction occur? If so, identify the products.
FeI2 + Cl2 → ?
Answer
Yes; FeCl2 and I2
Chemical reactivity trends are easy to predict when replacing anions in simple ionic compounds: simply use their relative positions on the periodic table. However, when replacing the cations, the trends are not as straightforward. This is partly because there are so many elements that can form cations; an element in one column on the periodic table may replace another element nearby, or it may not. A list called the activity series does the same thing the periodic table does for halogens: it lists the elements that will replace elements below them in single-replacement reactions. A simple activity series is shown below.
Activity Series for Cation Replacement in Single-Replacement Reactions:
Li > K > Ba > Sr > Ca > Na > Mg > Al > Mn > Zn > Cr > Fe > Ni > Sn > Pb > H2 > Cu > Hg > Ag > Pd > Pt > Au
Using the activity series is similar to using the positions of the halogens on the periodic table. An element at the top of the list will replace an element below it in compounds undergoing a single-replacement reaction. Elements will not replace elements above them in compounds.
Example 3
Use the activity series to predict the products, if any, of each equation.
1. FeCl2 + Zn → ?
2. HNO3 + Au → ?
Solution
1. Because zinc is above iron in the activity series, it will replace iron in the compound. The products of this single-replacement reaction are ZnCl2 and Fe.
2. Gold is below hydrogen in the activity series. As such, it will not replace hydrogen in a compound with the nitrate ion. No reaction is predicted.
Test Yourself
Use the activity series to predict the products, if any, of this equation.
AlPO4 + Mg → ?
Answer
Mg3(PO4)2 and Al
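The position comparisons in these examples lend themselves to a small lookup. The list below is the activity series given above; the function simply compares positions, which is a simplification (it ignores concentration and other real-world exceptions):

```python
# Activity series from the text, most easily oxidized first.
ACTIVITY_SERIES = ["Li", "K", "Ba", "Sr", "Ca", "Na", "Mg", "Al", "Mn", "Zn",
                   "Cr", "Fe", "Ni", "Sn", "Pb", "H2", "Cu", "Hg", "Ag", "Pd", "Pt", "Au"]

def will_replace(free_element, element_in_compound):
    """True if the free element sits above the compound's element, so it can replace it."""
    return ACTIVITY_SERIES.index(free_element) < ACTIVITY_SERIES.index(element_in_compound)

print(will_replace("Zn", "Fe"))   # True:  Zn + FeCl2 -> ZnCl2 + Fe
print(will_replace("Au", "H2"))   # False: gold does not replace hydrogen in HNO3
print(will_replace("Mg", "Al"))   # True:  Mg replaces Al in AlPO4
```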
A double-replacement reaction occurs when parts of two ionic compounds are exchanged, making two new compounds. A characteristic of a double-replacement equation is that there are two compounds as reactants and two different compounds as products. An example is
CuCl2(aq) + 2 AgNO3(aq) → Cu(NO3)2(aq) + 2 AgCl(s)
There are two equivalent ways of considering a double-replacement equation: either the cations are swapped, or the anions are swapped. (You cannot swap both; you would end up with the same substances you started with.) Either perspective should allow you to predict the proper products, as long as you pair a cation with an anion and not a cation with a cation or an anion with an anion.
Example 4
Predict the products of this double-replacement equation: BaCl2 + Na2SO4 → ?
Solution
Thinking about the reaction as either switching the cations or switching the anions, we would expect the products to be BaSO4 and NaCl.
Test Yourself
Predict the products of this double-replacement equation: KBr + AgNO3 → ?
Answer
KNO3 and AgBr
Predicting whether a double-replacement reaction occurs is somewhat more difficult than predicting a single-replacement reaction. However, there is one type of double-replacement reaction that we can predict: the precipitation reaction. A precipitation reaction occurs when two ionic compounds are dissolved in water and form a new ionic compound that does not dissolve; this new compound falls out of solution as a solid precipitate. The formation of a solid precipitate is the driving force that makes the reaction proceed.
To judge whether double-replacement reactions will occur, we need to know what kinds of ionic compounds form precipitates. For this, we use solubility rules, which are general statements that predict which ionic compounds dissolve (are soluble) and which do not (are insoluble). Table 4.1 "Some Useful Solubility Rules" lists some general solubility rules. We need to consider each ionic compound (both the reactants and the possible products) in light of the solubility rules in Table 4.1. If a compound is soluble, we use the (aq) label with it, indicating it dissolves. If a compound is not soluble, we use the (s) label with it and assume that it will precipitate out of solution. If everything is soluble, then no reaction will be expected.
Table 4.1 Some Useful Solubility Rules
|These compounds generally dissolve in water (are soluble):|Exceptions:|
|All compounds of Li+, Na+, K+, Rb+, Cs+, and NH4+|None|
|All compounds of NO3− and C2H3O2−|None|
|Compounds of Cl−, Br−, I−|Ag+, Hg22+, Pb2+|
|Compounds of SO42−|Hg22+, Pb2+, Sr2+, Ba2+|
|These compounds generally do not dissolve in water (are insoluble):|Exceptions:|
|Compounds of CO32− and PO43−|Compounds of Li+, Na+, K+, Rb+, Cs+, and NH4+|
|Compounds of OH−|Compounds of Li+, Na+, K+, Rb+, Cs+, NH4+, Sr2+, and Ba2+|
For example, consider the possible double-replacement reaction between Na2SO4 and SrCl2. The solubility rules say that all ionic sodium compounds are soluble and all ionic chloride compounds are soluble except those of Ag+, Hg22+, and Pb2+, which are not being considered here. Therefore, Na2SO4 and SrCl2 are both soluble. The possible double-replacement reaction products are NaCl and SrSO4. Are these soluble? NaCl is (by the same rule we just quoted), but what about SrSO4? Compounds of the sulfate ion are generally soluble, but Sr2+ is an exception: we expect it to be insoluble, a precipitate. Therefore, we expect a reaction to occur, and the balanced chemical equation would be
Na2SO4(aq) + SrCl2(aq) → 2 NaCl(aq) + SrSO4(s)
You would expect to see a visual change corresponding to SrSO4 precipitating out of solution (Figure 4.2 "Double-Replacement Reactions").
Figure 4.2 Double-Replacement Reactions
Some double-replacement reactions are obvious because you can see a solid precipitate coming out of solution.
Source: picture courtesy of Choij, http://commons.wikimedia.org/wiki/File:Copper_solution.jpg.
Example 5
Will a double-replacement reaction occur? If so, identify the products.
Ca(NO3)2 + KBr → ?
Solution
According to the solubility rules, both Ca(NO3)2 and KBr are soluble. Now we consider what the double-replacement products would be by switching the cations (or the anions): namely, CaBr2 and KNO3. However, the solubility rules predict that these two substances would also be soluble, so no precipitate would form. Thus, we predict no reaction in this case.
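The reasoning in Examples 4 and 5 can be sketched as a tiny rule lookup. The function below encodes only the entries of Table 4.1 that this section actually uses, with ion names written as plain strings; real solubility tables carry more exceptions, so treat this as illustrative:

```python
ALWAYS_SOLUBLE_CATIONS = {"Li+", "Na+", "K+", "Rb+", "Cs+", "NH4+"}

def is_soluble(cation, anion):
    """Rough encoding of Table 4.1; covers only the ions used in this section."""
    if cation in ALWAYS_SOLUBLE_CATIONS or anion in {"NO3-", "C2H3O2-"}:
        return True
    if anion in {"Cl-", "Br-", "I-"}:
        return cation not in {"Ag+", "Hg2^2+", "Pb2+"}
    if anion == "SO4^2-":
        return cation not in {"Hg2^2+", "Pb2+", "Sr2+", "Ba2+"}
    if anion == "OH-":
        return cation in {"Sr2+", "Ba2+"}
    if anion in {"CO3^2-", "PO4^3-"}:
        return False
    raise ValueError(f"no rule for {cation} {anion}")

def predict_precipitate(salt1, salt2):
    """Swap partners of two dissolved salts; return any insoluble products."""
    (c1, a1), (c2, a2) = salt1, salt2
    return [p for p in [(c1, a2), (c2, a1)] if not is_soluble(*p)]

print(predict_precipitate(("Na+", "SO4^2-"), ("Sr2+", "Cl-")))  # [('Sr2+', 'SO4^2-')] -> SrSO4(s)
print(predict_precipitate(("Ca2+", "NO3-"), ("K+", "Br-")))     # [] -> no reaction
```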
October 10th, 2012 Do Now: Compare Homework with a partner, how did you do? With your partner, write a list of steps on how to solve a limiting reagent problem to add to your notebook.
Rxn’s in Aqueous Solution
General Properties of Aqueous Solutions
Define the word solution. A homogeneous mixture of two or more substances. How do you differentiate between the solute and the solvent in a solution? The solute is dissolved into the solvent. Which is present in the smallest amount, the solute or the solvent? The solute. How do you determine if a solution is considered aqueous? Solutions in which water is the solvent are aqueous.
A high percentage of naturally occurring chemical reactions occur with water as the solvent; why is water such an excellent solvent?
Why is water considered a polar molecule? How does this benefit water as a solvent?
Electrolytic Properties
How do we know, prior to testing for electricity, whether or not a substance will contain electrolytes? If a substance forms ions in solution (e.g., NaCl). How does a nonelectrolyte differ from an electrolyte? It may dissolve in water but does not dissociate into ions; electrolytes conduct electricity, nonelectrolytes do not.
Ionic Compounds in Water
…So what just happened here?
Summarize your observations of what occurred on the molecular level when solid NaCl was added to water. The ionic compound dissolves in water and the ions dissociate; each ion is surrounded by several water molecules (aqueous ions, "(aq)"), and these ions are considered "solvated." Illustrate this concept in your notebook. How is the electric current created?
Molecular Compounds in Water
When a molecular compound (e.g., CH3OH) dissolves in water, the solution usually consists of intact molecules dispersed homogeneously throughout the solution. There is nothing in solution to transport electric charge, and therefore most molecular compounds are non-electrolytes. **Important exceptions: e.g., HCl, NH3
Strong Vs. Weak Electrolyte
Compounds whose aqueous solutions conduct electricity well are called "strong" electrolytes (they exist in solution mostly as ions). Compounds whose aqueous solutions conduct electricity poorly are called weak electrolytes.
Strong Electrolytes NaCl(aq) → Na+(aq) + Cl-(aq) Why is a single arrow used to show this dissociation? Soluble ionic compounds, strong acids, and soluble strong bases are considered strong electrolytes.
Weak Electrolytes Molecular compounds that produce a small concentration of ions when dissolved = weak electrolytes. E.g., acetic acid, HC2H3O2, is primarily present in solution as molecules; approx. 1 percent is present as ions. **Note: DO NOT confuse the extent to which an electrolyte dissolves with whether it is strong or weak. For example, HC2H3O2 is extremely soluble in water but is a weak electrolyte. In contrast, Ba(OH)2 is not very soluble, but the amount of the substance that does dissolve dissociates almost completely. Therefore, Ba(OH)2 is a strong electrolyte.
Weak Electrolytes How does this dissociation differ from the previous dissociation of NaCl? Why is the double arrow important in this equation?
October 11th, 2012 Do Now: Summarize yesterday’s lesson.
How can we classify this type of reaction? Define the word precipitate. An insoluble solid formed by a reaction in solution. E.g.: Pb(NO3)2 + 2KI → ??
Solubility Define the word solubility.
The amount of a particular substance that can be dissolved in a given quantity of solvent at that temperature. When does a substance become regarded as insoluble? Substance with a solubility of less than 0.01 mol/L
How to memorize solubility rules:
N: NITRATES A: ACETATES G: GROUP 1 (ALKALI METALS) S: SULFATES (except PMS and CaStroBear) A: AMMONIUM G: GROUP 17 (except PMS)
Insoluble? C: CARBONATES (except for G1, Ammonium)
A: HYDROXIDES (except for G1, CaStroBear, Ammonium) P: PHOSPHATES (except for G1, Ammonium) S: SULFIDES (except for G1, CaStroBear, Ammonium)
Practice Are the following compounds soluble or insoluble and why?
KNO3 Li3PO4 AgCl NH4OH Ba3(PO4)2 Hg2S Na2CO3
Exchange/Metathesis Reactions
Predict a general formula for an "exchange" or metathesis reaction. AX + BY → AY + BX How would you describe an "exchange" reaction? Swapping of ions in solution. Precipitation and acid-base reactions exhibit this pattern.
Ionic Equations Consider the following: 2KI(aq) + Pb(NO3)2(aq) → PbI2(s) 2KNO3(aq) Both Reactants are colorless solutions. When mixed, they form a bright yellow precipitate of PbI2 and a solution of KNO3. Final Product contains solid PbI2, Aqueous K+ ions and Aqueous NO3- ions. HOW MIGHT WE DIFFERENTIATE BETWEEN THE MOLECULAR, COMPLETE IONIC, and NET IONIC EQUATIONS?
Molecular: lists all species in complete chemical forms 2KI(aq) + Pb(NO3)2(aq) → PbI2(s) 2KNO3(aq) Complete Ionic: lists all strong electrolytes in rxn as ions Pb2+(aq) + 2NO3-(aq) + 2K+(aq) + 2I-(aq) → PbI2(s) + 2K+(aq) +2NO3- (aq) Only strong electrolytes dissolved in solution are written in ionic form. Weak electrolytes and non-electrolytes are written in complete chemical form
Net Ionic Equations
Lists only the ions that are not common to both sides of the equation. Pb2+(aq) + 2NO3-(aq) + 2K+(aq) + 2I-(aq) → PbI2(s) + 2K+(aq) + 2NO3-(aq) Which ions can we remove from each side? 2NO3- and 2K+ Predict what the net ionic equation will look like: Pb2+(aq) + 2I-(aq) → PbI2(s)
Why get rid of ions? How do we define spectator?
Why are the ions we removed considered spectator ions? Formulate a step-by-step “how to” list for creating the net-ionic equation. 1) Write a balanced molecular equation 2) Rewrite equation to show ions that were present after dissociation (only strong electrolytes) 3) Identify and cancel “spectator” ions
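The three steps above can be sketched as simple bookkeeping: write each side of the total ionic equation as a multiset of species, then cancel the intersection (the spectators). The species are plain strings here, so this is arithmetic on labels, not chemistry:

```python
from collections import Counter

def net_ionic(reactant_ions, product_ions):
    """Cancel species common to both sides (spectators); return the net equation sides."""
    left, right = Counter(reactant_ions), Counter(product_ions)
    spectators = left & right                       # multiset intersection
    return list((left - spectators).elements()), list((right - spectators).elements())

# Pb(NO3)2(aq) + 2KI(aq) -> PbI2(s) + 2KNO3(aq), with strong electrolytes written as ions
lhs = ["Pb2+", "NO3-", "NO3-", "K+", "K+", "I-", "I-"]
rhs = ["PbI2(s)", "K+", "K+", "NO3-", "NO3-"]
print(net_ionic(lhs, rhs))  # (['Pb2+', 'I-', 'I-'], ['PbI2(s)'])
```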
Practice! Complete the worksheet with a partner, be prepared to instruct the class on how you reached your answers.
Acids/bases/neutralization
Define the word acid. Substances that are able to ionize in aqueous solutions to form H+. Using this information, how might we define the word base? Proton acceptors. How might we differentiate between a monoprotic acid and a diprotic acid? Monoprotic acids ionize to form 1 H+ ion; diprotic acids ionize to form 2 H+ ions.
Strong acids/strong Bases
How soluble are strong acids and strong bases? Very. They completely ionize in solution. Strong Bases: Group 1A metal hydroxides, Ca(OH)2, Ba(OH)2 and Sr(OH)2 Strong Acids: HCl, HBr, HI, HClO3, H2SO4, and HNO3 Write the ionization of a strong acid.
Weak Acids/Weak Bases How does the ionization of a weak acid/base compare to that of strong acids/bases? Partially ionized in solution. HF(aq) is a weak acid; most acids are weak acids. Write the ionization of a weak acid.
October 22nd, 2012 Do Now: How do you determine a weak acid/base.
Write the Net Ionic Equation for the following reaction: When aqueous solutions of sodium phosphate and calcium chloride are mixed together, an insoluble white solid forms.
Neutralization Reactions and Salts
Write a generalized equation for a neutralization reaction. Acid + Base → Water + Salt. Define the word salt. Any ionic compound whose cation (+) comes from a base and anion (-) comes from an acid. Example: Mg(OH)2(s) is a suspension and HCl is added. Write the net ionic equation. Mg(OH)2(s) + 2H+ (aq) → Mg2+ (aq) + 2H2O (l)
Neutralization Reactions with Gas Formations
How do sulfides act as bases? There are many bases besides OH- that react with H+ to form molecular compounds. The reaction of sulfides with acids gives rise to H2S in gaseous form. Write the net ionic equation of sodium sulfide reacting with hydrochloric acid. 2H+ (aq) + S2- (aq) → H2S (g) Carbonates and hydrogen carbonates (or bicarbonates) will form CO2 (g) when treated with an acid. Write the net ionic equation of sodium bicarbonate reacting with hydrochloric acid. H+ (aq) + HCO3- (aq) → H2O (l) + CO2 (g)
BOMBS AWAY!
Oxidation-Reduction Reactions
Oxidation-Reduction How is it determined which substance is undergoing oxidation vs. reduction? Loss of electrons = Oxidation Gain of electrons = Reduction Why are oxidation numbers useful? Oxidation numbers help us keep track of electrons during chemical reactions. How do we determine the oxidation numbers of atoms? RULES!
Hey! Why so Serious? What are you looking for?
Are you sure?? I lost an electron! I’M POSITIVE! ? ?
RULES FOR OXIDATION STATES
For an atom in its elemental state, the oxidation number is 0. For any monatomic ion, the oxidation number equals the charge on the ion. The oxidation number of oxygen is usually -2; the major exception is in peroxides (containing the O22- ion), where it is -1. The oxidation state of hydrogen is usually +1 (-1 in metal hydrides). The oxidation number of fluorine is always -1. The sum of oxidation numbers in a polyatomic ion equals the charge of the ion (and zero for a neutral compound).
Practice determining oxidation state:
Complete the worksheet handed out!
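The last rule (the oxidation numbers sum to the overall charge) turns these drills into one-line algebra: assign known states to the other atoms and solve for the unknown. A sketch:

```python
def unknown_oxidation_state(overall_charge, unknown_count, known):
    """Solve unknown_count*x + sum(known contributions) = overall_charge for x.

    known: list of (oxidation_state, atom_count) pairs for atoms with fixed states.
    """
    known_sum = sum(state * count for state, count in known)
    return (overall_charge - known_sum) / unknown_count

# Cr in Cr2O7 2-: two Cr atoms, seven O at -2, overall charge -2
print(unknown_oxidation_state(-2, 2, [(-2, 7)]))           # 6.0 -> Cr is +6
# S in H2SO4: one S, two H at +1, four O at -2, neutral molecule
print(unknown_oxidation_state(0, 1, [(+1, 2), (-2, 4)]))   # 6.0 -> S is +6
```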
Identifying Oxidation Vs. Reduction
PRACTICE SHEET!
October 24th, 2012 Do Now: Find the oxidation states of each of the elements in the following compounds: P2O5 NaH Cr2O72- SnBr4 BaO2
Motivation
Oxidation of Metals by Acids and Salts
How can we generalize a reaction between a metal and an acid or a salt? Write a general equation for this type of reaction. Define this type of reaction. Given this example: zinc reacting with hydrochloric acid, predict the products and determine which species is being reduced and which is being oxidized.
Net ionic equations within displacement reactions
Write the molecular and net ionic equation for the following reaction: Zinc reacting with hydrobromic acid. Why is writing the net ionic equation helpful when discussing oxidation and reduction reactions?
Practice! Complete 1 through 3 on the worksheet given to you.
Metals oxidized in the Presence of salts
It is possible for a metal to be oxidized in the presence of a salt. Fe(s) + Ni(NO3)2(aq) → ? Predict the products and write a molecular and net ionic equation for the reaction above. Will a metal always be oxidized in the presence of a salt? Explain your answer.
Activity series The activity series lists metals in order of decreasing ease of oxidation. How can we differentiate between active metals and noble metals? How do we determine if a reaction will occur using the activity series?
The activity series Answer the aim in a summary paragraph.
October 25th, 2012 Do Now: It is said that HClO4 is a strong acid, whereas HClO2 is a weak acid. What does this mean in terms of the extent to which they ionize in solution? Classify each of the following as a nonelectrolyte, weak electrolyte, or strong electrolyte in water: H2SO3 C2H5OH (ethanol) NH3 KClO3
Strong vs. Concentrated
How can we differentiate between strong and concentrated solutions? Define concentration. How do we express the concentration of a solution?
Molarity Molarity (symbol M) expresses concentration of solution (number of moles solute in a liter of solution) Calculate the molarity of a solution made by dissolving 23.4 grams of sodium sulfate in enough water to form 125 mL of solution.
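The worked molarity example above can be checked numerically. The molar masses here (Na2SO4 about 142.04 g/mol, glucose about 180.16 g/mol) are hard-coded approximate values rather than computed from atomic masses:

```python
def molarity(grams, molar_mass, volume_ml):
    """mol/L from a mass of solute and the final solution volume in mL."""
    moles = grams / molar_mass
    return moles / (volume_ml / 1000.0)

# 23.4 g Na2SO4 (~142.04 g/mol) dissolved to make 125 mL of solution
print(round(molarity(23.4, 142.04, 125.0), 2))   # 1.32 M
# 5.00 g glucose C6H12O6 (~180.16 g/mol) dissolved to make 100. mL of solution
print(round(molarity(5.00, 180.16, 100.0), 3))   # 0.278 M
```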
Practice Problems Calculate the molarity of a solution made by dissolving 5.00 grams of glucose in sufficient water to form exactly 100 mL of solution. Calculate the number of grams of solute in L of M KBr.
Expressing concentration of an electrolyte
When an ionic compound dissolves, the relative concentrations of the ions depend on the chemical formula of the compound. In a 1.0 M solution of NaCl, what are the concentrations of Na+ ions and Cl- ions? In a 1.0 M solution of Na2SO4, what are the concentrations of the Na+ ions and the SO42- ions?
Practice! What is the molar concentration of K+ ions in a M solution of potassium carbonate? Which will have the greatest concentration of potassium ion: 0.20 M KCl, 0.15 M K2CrO4, or M K3PO4?
Interconverting Molarity, Moles, and Volume
If we know any two quantities involved in the molarity equation, we can solve for the third. Calculate the number of moles of HNO3 in 2.0 L of a M HNO3 solution.
Dilution How do we dilute solutions? Moles solute in concentrated solution = moles solute in dilute solution. We want to prepare mL (convert to L) of M CuSO4 solution by diluting a stock solution containing 1.00 M CuSO4. How many mL of concentrated solution do we need?
October 26th, 2012 Do Now: How many milliliters of 3.0 M H2SO4 are needed to make 450 mL of 0.10 M H2SO4? What volume of 2.50 M lead (II) nitrate solution contains mol of Pb2+ ions?
Solution stoichiometry
Two types of units exist: laboratory units and chemical units. How do you think these differ?
Always convert laboratory units into chemical units first.
Grams → moles using molar mass. Volume/molarity → moles using M = mol/L. Use stoichiometric coefficients to move between products and reactants. Convert lab units back into required units.
Titrations Why do we use titrations?
How are titrations helpful in determining concentration?
Example Suppose we know the molarity of an NaOH solution and we want to find the molarity of a given HCl solution. What do we already know? What do we want to know? How do we get there? How do we know when they are neutralized? Define Equivalence Point. After the titration is complete, how can we now solve for the molarity?
Practice What mass of NaCl is needed to precipitate the silver ions from a 20.0 mL of M AgNO3 solution? How many mL of M HCl are needed to completely neutralize 50.0 mL of M Ba(OH)2 solution? If 42.7 mL of M HCl solution is needed to neutralize a solution of Ca(OH)2, how many grams of Ca(OH)2 must be in the solution?
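The titration workflow on these slides (lab units → moles → coefficient ratio → back to lab units) can be sketched as below. The slide's own numbers were lost in extraction, so the example reuses the NaOH/H2SO4 and NaOH/HCl figures from the flashcards earlier in this document:

```python
def analyte_molarity(titrant_molarity, titrant_ml, analyte_ml, titrant_per_analyte):
    """Molarity of the analyte from titration data at the equivalence point.

    titrant_per_analyte: stoichiometric moles of titrant per mole of analyte.
    """
    titrant_moles = titrant_molarity * titrant_ml / 1000.0
    analyte_moles = titrant_moles / titrant_per_analyte
    return analyte_moles / (analyte_ml / 1000.0)

# H2SO4 + 2 NaOH -> Na2SO4 + 2 H2O: 34.62 mL of 0.1510 M NaOH neutralizes 50.0 mL of H2SO4
print(round(analyte_molarity(0.1510, 34.62, 50.0, 2), 4))   # 0.0523 M
# HCl + NaOH (1:1 ratio): 46.33 mL of 0.243 M NaOH neutralizes 10.00 mL of HCl
print(round(analyte_molarity(0.243, 46.33, 10.00, 1), 2))   # 1.13 M
```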
There are five basic types of chemical reactions namely synthesis, decomposition, displacement, double displacement, and combustion reactions. Understanding these types of reactions will be useful in predicting the products of chemical reactions when given only the reactants along with conditions.
Some other common types of chemical reactions including neutralization, redox, and precipitation reaction are also discussed here.
Types of chemical reactions1. Synthesis reactions
In a synthesis reaction, two or more substances combine to form one new substance. This type of reaction is also known as a combination reaction.
In general form it can be written as:
A + B → AB
Synthesis reactions release energy meaning that they are exothermic.
One typical example is the formation of sodium chloride from its constituent ions.
Na+ + Cl– → NaCl
Another example of synthesis reactions is the combination of metals (I and II group metals) with oxygen forming their respective oxides.
2Mg(s) + O2 (g) → 2MgO(s)
O2 + 2Ca → 2CaO
2. Decomposition reactions
The reactions in which a compound breaks down into two or more simpler substances are called decomposition reactions.
The general form of decomposition reactions is:
AB → A + B
Most decomposition reactions are endothermic. Unlike synthesis reactions, they require energy to break the bonds present in the reactant.
An example of this type of chemical reaction is the decomposition of hydrogen peroxide into water and oxygen gas.
2H2O2 → 2H2O + O2
Mercuric oxide decomposes into mercury and oxygen upon heating.
2HgO(s) → 2Hg (l) + O2 (g)
Other metal oxides also give decomposition reactions upon heating. The decomposition, however, occurs at specific temperatures for different metal oxides.
Metal carbonates break down forming metal oxides and carbon dioxide. For example, limestone (calcium carbonate) forms calcium oxide and carbon dioxide when decomposed.
CaCO3 (s) → CaO(s) + CO2 (g)
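As a worked sketch of the stoichiometry behind this decomposition (the 100 g sample size and the variable names are my own, not from the original text):

```python
# CaCO3 → CaO + CO2: a 1:1 mole ratio between CaCO3 and CO2.
molar_mass_caco3 = 40.08 + 12.01 + 3 * 16.00  # ≈ 100.09 g/mol
molar_mass_co2 = 12.01 + 2 * 16.00            # ≈ 44.01 g/mol
mass_caco3 = 100.0                            # g (example value)
mol_co2 = mass_caco3 / molar_mass_caco3       # moles of CO2 released
print(round(mol_co2 * molar_mass_co2, 2))     # → 43.97 g of CO2
```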
Electrolytic decomposition of water is also an example of decomposition reactions.
2H2O(l) → 2H2 (g) + O2 (g)
3. Single Displacement reactions
Single displacement reactions are also known as single replacement reactions. Here one element replaces a similar element in a compound. The general representation of this type of chemical reaction is:
A + BC → AC + B
In this reaction A replaces B. Keep in mind that metals can replace metals and nonmetals will replace nonmetals in a compound.
So if A and B are both metals, A can replace B and thus, form the product AC.
Similarly, if both A and B are nonmetals, A can displace B to give product AC.
The displacement cannot happen if:
- A is metal and B is a nonmetal.
- A is nonmetal and B is a metal.
An example of a single displacement reaction is the reaction of tin(II) chloride and zinc. Zinc, being a more reactive metal (as per the electrochemical series), replaces tin, leading to the formation of zinc chloride.
SnCl2 + Zn → ZnCl2 + Sn
Similarly, magnesium is more reactive than copper.
Mg(s) + Cu(NO3)2 (aq) → Mg(NO3)2 (aq) + Cu(s)
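The "more reactive metal replaces the less reactive one" rule above can be sketched as a simple lookup against the activity series (the ordering below is the standard qualitative series; the function name is my own):

```python
# Partial activity series, most reactive first.
activity = ["K", "Ca", "Na", "Mg", "Al", "Zn", "Fe", "Sn", "Pb",
            "H", "Cu", "Ag", "Au"]

def can_displace(free_metal, bound_metal):
    # A free metal displaces a bound one only if it is more reactive
    # (i.e. earlier in the series).
    return activity.index(free_metal) < activity.index(bound_metal)

print(can_displace("Zn", "Sn"))  # → True  (Zn + SnCl2 → ZnCl2 + Sn)
print(can_displace("Cu", "Mg"))  # → False (Cu cannot displace Mg)
```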
4. Double displacement reactions
In double displacement reactions, two compounds exchange ions to form completely different compounds. The exchange during this type of reaction is either of cations or anions. It’s never both at the same time.
The general form is:
AB + CD → AD + CB
Double displacement reactions are also called metathesis. They usually occur in aqueous solutions.
An example of double displacement reaction is the reaction between potassium iodide and lead nitrate where lead cation and potassium ion switch places.
Pb(NO3)2 (aq) + 2 KI (aq) → 2KNO3 (aq) + PbI2 (s)
One of the most common double replacement reactions is the reaction between sodium chloride and silver nitrate resulting in sodium nitrate and silver chloride (ppt).
NaCl(aq) + AgNO3 (aq) → NaNO3 (aq) + AgCl(s)
5. Combustion reactions
Combustion reaction is often included in the basic types of chemical reactions. It involves burning of compounds and the release of energy in the form of heat and light. One of the reactants in combustion reactions is necessarily oxygen.
An example is the burning of naphthalene.
C10H8 (g) + 12O2 (g) → 10CO2 (g) + 4H2O (g)
Remember that hydrocarbons are usually burned as fuels. Their combustion always results in the formation of carbon dioxide and water. There is also a release of a huge amount of energy which is used for domestic as well as commercial purposes as fuel output.
Methane, also called natural gas, is the most used hydrocarbon. Burning one mole of methane (16 g) releases about 810 kJ of energy.
CH4 (g) + 2O2 (g) → CO2 (g) + 2H2O (g)
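As a rough sketch of how that figure scales with mass (the 100 g example and the variable names are my own; the ~810 kJ/mol value is simply taken from the text above):

```python
# Scaling the quoted ~810 kJ per mole (16 g) of CH4 to an arbitrary mass.
molar_mass_ch4 = 16.0     # g/mol
energy_per_mol = 810.0    # kJ/mol, figure quoted in the text above
mass_ch4 = 100.0          # g (example value)
energy = mass_ch4 / molar_mass_ch4 * energy_per_mol
print(round(energy, 1))   # → 5062.5 kJ
```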
Other common types of reactions
These are the 7 common types of chemical reactions:
Neutralization reaction
An acid-base reaction that results in the formation of water and salt is known as a neutralization reaction. It is a double displacement reaction in which hydrogen ions from an acid react with hydroxyl ions from a base to form salt and water. It is the usual mechanism of an acid-base neutralization reaction but the products may change in some cases.
The general form of neutralization is:
HA + BOH → H2O + BA
A common example is the reaction of HCl and NaOH.
HCl + NaOH → NaCl + H2O
The hydrogen ion from the acid combines with the hydroxide ion from the base to form water, while the sodium and chloride ions form the salt: the neutralization products.
Redox reaction
A redox reaction is an oxidation-reduction reaction in which one molecule, atom, or ion changes its state by gaining electron(s) and another one changes it by losing electrons. The atom that gains electron(s) is said to be reduced and the one that loses electron(s) is oxidized. In other words, the oxidation number of elements gets changed during this type of reaction.
Note that a redox reaction can also be a synthesis, decomposition, single displacement, or combustion reaction. Double displacement reactions, in which no oxidation numbers change, are not redox reactions.
An example of a redox reaction is the reaction between thiosulfate ion and iodine, where I2 is reduced to I– and S2O32– is oxidized to S4O62–.
2 S2O32−(aq) + I2 (aq) → S4O62−(aq) + 2 I−(aq)
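One way to verify the oxidation-number change claimed above is to solve for the average sulfur oxidation state, assuming oxygen is −2 (a minimal sketch; the helper name is my own):

```python
# Average oxidation state x of S in a sulfur oxyanion S_n O_m^charge,
# assuming O is -2:  n*x + m*(-2) = charge  =>  x = (charge + 2*m) / n.
def avg_s_oxstate(n_s, n_o, charge):
    return (charge + 2 * n_o) / n_s

print(avg_s_oxstate(2, 3, -2))  # thiosulfate S2O3^2-   → 2.0
print(avg_s_oxstate(4, 6, -2))  # tetrathionate S4O6^2- → 2.5 (oxidized)
```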
Precipitation reaction
Precipitation reaction involves the formation of precipitates, the insoluble solid products that separate out.
Like the neutralization reaction, it also qualifies as a double displacement reaction.
An example of a precipitation reaction is:
NaCl(aq) + AgNO3 (aq) → AgCl(s)↓ + NaNO3 (aq)
In the above reaction, AgCl precipitates out and settles at the bottom of the vessel.
Polymerization
Polymerization is also known as a chain reaction. It is the formation of a chain-like network called a polymer as a result of the chemical combination of relatively small units called monomers.
For example, the polymerization of ethylene (C2H4) produces polyethylene -(-C2H4-)-n.
n C2H4 → –(C2H4)n–
Hydrolysis
Hydro means water and lysis means to break down. As the name suggests it is the breakdown of molecules with the help of water.
The general formula of a hydrolysis reaction is.
AB + H2O → AH + BOH
For example, dissolving sulfuric acid (H2SO4) in water (H2O) yields bisulfate ions (HSO4–) and hydronium ions (H3O+).

H2SO4 + H2O → HSO4– + H3O+
Dehydration Synthesis
Dehydration is the opposite of hydrolysis where two molecules combine to form a new molecule with the elimination of water.
The term dehydration is used for losing water and the word synthesis suggests the formation of a new substance.
For example, dehydration of ethanol at 170 ⁰C gives ethene.
CH3–CH2–OH → CH2=CH2 + H2O
Photochemical reactions
Photochemical reaction is the type of chemical reaction triggered by the absorption of light by molecules within a substance.
A common example is the photosynthesis reaction.
Similarly, silver chloride (AgCl) decomposes into silver (Ag) and chlorine (Cl2) gas upon exposure to radiation (light).
2 AgCl + hν → 2 Ag + Cl2
Concepts Berg
What are the 5 primary types of chemical reactions?
- Synthesis reactions
- Decomposition reactions
- Single displacement reactions
- Double displacement reactions
- Combustion reactions
What type of chemical reaction is 2NO → N2 + O2?
It is a decomposition reaction.
Why are chemical reactions considered to be important?
Chemical reactions help us understand the properties of matter. In a way, simple chemical reactions enable us to understand the vast and complicated processes going on around the universe and inside living organisms.
What is the basis for balancing a chemical equation?
A chemical equation is balanced based on the law of conservation of mass, which states that matter can neither be created nor destroyed.
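The conservation-of-mass idea can be checked mechanically by counting atoms on each side of an equation. A minimal sketch (formulas are pre-parsed into element counts by hand; the names are my own):

```python
from collections import Counter

# Verify mass balance by counting atoms on each side of the equation.
def side(*terms):  # each term: (coefficient, {element: count})
    total = Counter()
    for coeff, counts in terms:
        for el, n in counts.items():
            total[el] += coeff * n
    return total

h2 = {"H": 2}
o2 = {"O": 2}
h2o = {"H": 2, "O": 1}

lhs = side((2, h2), (1, o2))
rhs = side((2, h2o))
print(lhs == rhs)  # → True: 2H2 + O2 → 2H2O is balanced
```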
Are all combination reactions exothermic?
Most combination reactions are exothermic, but not all. Some endothermic combination reactions also exist, such as the reaction of nitrogen with oxygen to form nitric oxide (N2 + O2 → 2NO).
How do I write a net ionic equation for: lead(II) nitrate and sodium carbonate react to form lead carbonate and sodium nitrate?
Oct 11, 2008 · Suppose that aqueous solutions of barium nitrate and potassium carbonate are mixed. What is the name of the compound or compounds that precipitate?
Write and balance the equation for the reaction of barium chloride and potassium carbonate: BaCl2 + K2CO3 → BaCO3 + 2KCl. Similarly, copper nitrate + potassium carbonate → copper carbonate + potassium nitrate.
Jun 17, 2008 · Barium nitrate + sodium carbonate → barium carbonate + sodium nitrate: Ba(NO3)2 (aq) + Na2CO3 (aq) → BaCO3 (s) + 2NaNO3 (aq), where BaCO3 is a white solid. Barium nitrate (barium dinitrate, the barium salt of nitric acid) has molecular formula Ba(NO3)2, molar mass 261.34 g/mol, appears as white crystals, has density 3.24 g/cm3 as a solid, and melts at 590 °C with decomposition.
Nov 21, 2016 · Barium carbonate is quite insoluble in aqueous solution (Ksp = 2.58 × 10–9) and should precipitate in this metathesis (partner-exchange) reaction. Would you expect BaCO3 to be more soluble in a weakly acidic solution? Why or why not?
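Assuming simple dissolution BaCO3(s) ⇌ Ba2+ + CO32− with no carbonate hydrolysis, the molar solubility follows directly from Ksp (a hedged sketch, not part of the original answer):

```python
import math

# BaCO3(s) ⇌ Ba2+(aq) + CO3^2-(aq), so Ksp = [Ba2+][CO3^2-] = s^2
# and the molar solubility is s = sqrt(Ksp). Ignoring hydrolysis of the
# carbonate ion (which is why BaCO3 is more soluble in acid).
ksp = 2.58e-9
s = math.sqrt(ksp)
print(f"{s:.2e}")  # → 5.08e-05 mol/L
```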
Writing Net Ionic Equations

Writing net ionic equations is simpler than you might think. First of all, we MUST start with an equation that includes the physical state: (s) for solid, (l) for liquid, (g) for gas, and (aq) for aqueous solution. The three rules for writing net ionic equations are really quite straightforward:
1. Only consider breaking up the (aq) substances.
2. Only break up strong electrolytes.
3. Delete any ions that appear on both sides of the equation.
Clearly, rule 2 is the tricky one. You need to know your strong electrolytes:
- strong acids: HCl, HBr, HI, HNO3, HClO3, HClO4, and H2SO4
- strong bases: NaOH, KOH, LiOH, Ba(OH)2, and Ca(OH)2
- salts: NaCl, KBr, MgCl2, and many, many more, all containing metals or NH4+
Another Example

Here's one more example: HF(aq) + AgNO3(aq) → AgF(s) + HNO3(aq)

Separating the aqueous strong electrolytes, we have: HF(aq) + Ag+(aq) + NO3–(aq) → AgF(s) + H+(aq) + NO3–(aq)

Keep in mind that HF is a weak acid, so we leave it together. Because AgF is written as a solid, we are saying that it precipitates from the reaction, and it wouldn't be right to separate it into its ions. The spectator ion in this case is NO3–. It starts out in solution and ends up in solution also, with no role in the actual reaction. We leave it out in writing the final net ionic equation: HF(aq) + Ag+(aq) → AgF(s) + H+(aq)

Again, if you want to emphasize that H+ is hydrated, then you can write: HF(aq) + Ag+(aq) + H2O → AgF(s) + H3O+(aq)
What if I don't have the products?

In some situations you only know the reactants. For instance, one might need to know the net ionic equation for "the reaction between NaHSO4 and NH3." What then? There are two ways to proceed: determine the "molecular equation" and continue as above. This works fine as long as you can figure out the products in the first place!
Therefore, H+ must be transferred from the HSO4– to the NH3: HSO4–(aq) + NH3(aq) → NH4+(aq) + SO42–(aq)
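Rule 3 (delete ions that appear on both sides) can be sketched as a toy list comparison, using the HF + AgNO3 example from above (the string shorthand is my own):

```python
# Toy sketch of rule 3: spectator ions are those appearing on both sides.
lhs = ["HF(aq)", "Ag+", "NO3-"]
rhs = ["AgF(s)", "H+", "NO3-"]

spectators = [ion for ion in lhs if ion in rhs]
net_lhs = [sp for sp in lhs if sp not in spectators]
net_rhs = [sp for sp in rhs if sp not in spectators]

print(spectators)              # → ['NO3-']
print(net_lhs, "->", net_rhs)  # → ['HF(aq)', 'Ag+'] -> ['AgF(s)', 'H+']
```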
Important Instructions:
(1) There is only one correct option.
(2) The duration of the test is 1 hour and the question paper contains 30 questions. The maximum marks are 120.
(3) Each correct answer carries 4 marks, while 1 mark will be deducted for every wrong answer, so random guessing is discouraged.
1 / 30
The electrode potential of Mg2+/ Mg electrode, in which conc. of Mg2+ is 0.01M, is (E0Mg2+/Mg = -2.36V)
2 / 30
Among the following compounds (I-III) the correct order of reactivity with electrophile is
3 / 30
Consider the reaction: H2SO3(aq) + Sn4+(aq) + H2O(l) → Sn2+(aq) + HSO4–(aq) + 3H+(aq)
Which of the following statements is correct?
4 / 30
One desires to prepare a positively charged sol of silver iodide. This can be achieved by
5 / 30
The process of the extraction of Au and Ag is based on their solubility in
6 / 30
Which of the following species involves sp3 hybridisation?
7 / 30
Which of the following statement is wrong ?
8 / 30
(i) HCN(aq) + H2O(l) ⇌ H3O+(aq) + CN –(aq) Ka = 6.2 × 10–10
(ii) CN–(aq) + H2O(l) ⇌ HCN(aq) + OH– (aq) Kb = 1.6 × 10–5.
These equilibria show the following order of the relative base strength,
9 / 30
Which of the following complexes will give white precipitate with BaCl2 (aq)?
10 / 30
The enthalpy of combustion of H2(g), to give H2O(g) is –249 kJ mol–1 and bond enthalpies of H – H and O = O are 433 kJ mol–1 and 492 kJ mol–1 respectively. The bond enthalpy of O – H is
11 / 30
An organic compound (A) reacts with sodium metal and forms (B). On heating with conc. H2SO4, (A) gives diethyl ether, (A) and (B) are
12 / 30
The freezing point of a solution, prepared from 1.25 gm of a non-electrolyte and 20 gm of water, is 271.9 K. If molar depression constant is 1.86 K mole–1, then molar mass of the solute will be
13 / 30
In recovery of silver from photographic film, you have decided to dissolve the silver ion with dilute nitric acid. Addition of dilute HCl to precipitate AgCl seems to result in unacceptable losses. You might improve recovery by addition of_______ in the latter step.
14 / 30
Which of the following is true?
15 / 30
Which one of the following esters cannot undergo Claisen self-condensation?
16 / 30
Which of the following compounds has the highest melting point and which the lowest, respectively? (1) CsF (2) LiF (3) HCl (4) HF
17 / 30
An unknown alochol is treated with the “Lucas reagent” to determine whether the alcohol is primary, secondary or tertiary. Which alcohol reacts fastest and by what mechanism :
18 / 30
In an atom how many orbital(s) will have the quantum numbers; n = 3, l = 2 and ml = + 2 ?
19 / 30
In the reaction,
20 / 30
Four species are listed below:
i. HCO3- ii. H3O+ iii. HSO4- iv. HSO3F
Which one of the following is the correct sequence of their acid strength?
21 / 30
In the reaction of KMnO4 with an oxalate in acidic medium, MnO4- is reduced to Mn2+ and C2O42- is oxidised to CO2. Hence, 50 mL of 0.02 M KMnO4 is equivalent to
22 / 30
The appearance of colour in solid alkali metal halides is generally due to:
23 / 30
An azeotropic mixture of two liquids boil at a lower temperature than either of them when
24 / 30
Which one of the following substituents at the para-position is most effective in stabilizing the phenoxide ion?
25 / 30
Lassaigne’s test for nitrogen is positive for which compound?
26 / 30
At very high pressures, the compressibility factor of one mole of a gas is given by :
27 / 30
Which of the following undergoes hydrolysis by SN1 mechanism :
28 / 30
is
29 / 30
Silver bromide, when dissolved in hypo solution, gives the complex ..... in which the oxidation state of silver is ....
30 / 30
Alkali metals dissolve in liquid NH3 then which of the following observations is not true?
Its chemical formula is BaCl2. The crystalline dihydrate, BaCl2·2H2O, has a molar mass of 244.3 g/mol. Barium chloride is water-soluble, reacts readily with sulfate (it is used to test for sulfuric acid and sulfate ions), and has been used as a veterinary drug. Barium is a cheap raw material.
Production and Reactions
BaCl2 dissociates into its ions in aqueous solution. Barium ion reacts with sulfate ion to form barium sulfate, and with oxalate ion to form barium oxalate.

Ba2+(aq) + SO42–(aq) → BaSO4(s)

Ba2+(aq) + C2O42–(aq) → BaC2O4(s)
When mixed with sodium hydroxide, it yields barium hydroxide, which is moderately soluble in water. In production, barium sulfate is reduced with carbon to obtain barium sulfide (BaS); carbon monoxide (CO) gas is released during this process.

BaSO4(s) + 4C(s) → BaS(s) + 4CO(g)
The resulting BAS reacts with calcium chloride at high temperature to form Barium Chloride.
BaS CaCl2? BaCl2 CaS
Usage areas
Chemistry
In laboratories, it is often used for testing for the sulfate ion. Barium chloride is also used in chlor-alkali (caustic chlorine) plants for the purification of brine solutions.
Fireworks
BaCl2 is used to give bright green color in fireworks.
Other
With sodium sulphate it forms barium sulfate, which is used as a white pigment and filler in the production of leather, rubber, fabric, and photographic paper; barium chloride itself is also used in heat-treatment salt baths.
A solution is prepared by dissolving 23.7 g of CaCl2 in 375 g of water. The density of the resulting solution is 1.05 g/mL. The concentration of CaCl2 is ________% by mass. Answer: 5.94. Can someone tell me the steps?
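A hedged sketch of the steps (note the density is not actually needed for mass percent; it would matter only for molarity or mass/volume concentration):

```python
# Mass percent = mass of solute / total mass of solution × 100.
mass_cacl2 = 23.7    # g of CaCl2
mass_water = 375.0   # g of water
mass_percent = mass_cacl2 / (mass_cacl2 + mass_water) * 100
print(round(mass_percent, 2))  # → 5.94
```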
A bright yellow precipitate of lead(II) iodide is formed. Which of the possible products is the precipitate, and how do you know?
Write a formula equation for the reaction. When a few crystals of lead nitrate and potassium iodide are added to opposite sides of a Petri dish containing deionized water, after a few minutes a line of bright yellow lead(II) iodide precipitate forms down the middle of the dish. When lead nitrate reacts with potassium iodide, the resulting products are lead iodide and potassium nitrate. This experiment starts with two soluble ionic compounds: potassium iodide and lead(II) nitrate. TL;DR: when you add lead nitrate to potassium iodide, their particles combine and create two new compounds, a yellow solid called lead iodide and potassium nitrate (a white solid when dry, though it stays dissolved here): Pb(NO3)2 + 2KI → PbI2 + 2KNO3. When aqueous solutions of lead(II) nitrate and potassium iodide are mixed, what is the formula for the insoluble solid (precipitate) that forms?
When lead nitrate and potassium iodide are mixed, we get potassium nitrate and an insoluble solid [precipitate], lead iodide. Both chemicals can cause skin, eye and respiratory irritation. Question: solutions of lead(II) nitrate and potassium iodide were combined in a test tube. A cloudy yellow precipitate – an insoluble solid that comes from a liquid solution – forms.
Yellow clouds indicate that the chemical change has taken place. Lead nitrate and potassium iodide should both be considered hazardous.
A. Lead(II) nitrate reacts with potassium iodide in solution to produce the spectator species potassium nitrate, and a bright-yellow lead(II) iodide precipitate. Pb(NO3)2 (Lead Nitrate) in an aqueous solution plus KI (potassium iodide) will form a precipitate.
Potassium iodide reacts with lead(II) nitrate in the following precipitation reaction: 2KI(aq) + Pb(NO3)2(aq) → 2KNO3(aq) + PbI2(s). Calculate the mass of precipitate produced when 50 mL of 0.45 M potassium iodide solution and 75 mL of 0.55 M lead(II) nitrate solution are mixed. What minimum volume of 0.200 M potassium iodide solution is required to completely precipitate all of the lead in 155.0 mL of a 0.112 M lead(II) nitrate … Write a balanced equation for this reaction, including (aq) and (s).
Safety: when lead nitrate and potassium iodide are mixed, what precipitate is formed? Name the compound involved, and give the chemical equation and the type of reaction. Write a complete ionic equation for the reaction and identify the spectator ions. The compound currently has a few specialized applications, such as the manufacture of solar cells and X-ray and gamma-ray detectors.
Lead(II) iodide or lead iodide is a salt with the formula PbI2. At room temperature, it is a bright yellow, odorless crystalline solid that becomes orange and red when heated. Double Replacement Reaction Lab (November 14, 2010). Introduction/Purpose: the purpose of this experiment is to combine a solution of potassium iodide and a solution of lead(II) nitrate and produce a precipitate. These are dissolved in water to form colourless solutions, and then mixed together.
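For the PbI2 mass question quoted above, a limiting-reagent sketch (the approximate atomic masses and variable names are my own, not from the original page):

```python
# Reaction: 2 KI(aq) + Pb(NO3)2(aq) → 2 KNO3(aq) + PbI2(s)
mol_ki = 0.050 * 0.45                 # 0.0225 mol KI
mol_pb = 0.075 * 0.55                 # 0.04125 mol Pb(NO3)2
mol_pbi2 = min(mol_ki / 2, mol_pb)    # 2 mol KI give 1 mol PbI2; KI limits
molar_mass_pbi2 = 207.2 + 2 * 126.90  # g/mol (Pb + 2 I), approximate
mass_pbi2 = mol_pbi2 * molar_mass_pbi2
print(round(mass_pbi2, 2))  # → 5.19 (g of PbI2)
```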
Balance the following equations, and then write the net ionic equation.
(a) (NH4)2CO3(aq) + Cu(NO3)2(aq) → CuCO3(s) + NH4NO3(aq)
(b) Pb(OH)2(s) + HCl(aq) → PbCl2(s) + H2O(ℓ)
(c) BaCO3(s) + HCl(aq) → BaCl2(aq) + H2O(ℓ) + CO2(g)
(d) CH3CO2H(aq) + Ni(OH)2(s) → Ni(CH3CO2)2(aq) + H2O(ℓ)
What type of reaction is this: 2H2O2(l) → 2H2O(l) + O2(g)? a. single replacement reaction b. decomposition reaction c. acid-base neutralization reaction d. no reaction
That being said, thallium is a heavy metal (that's a hint about the solubility). Problem #20: Zinc chloride solution is poured into a solution of ammonium carbonate. Another possible problem is that the copper(II) hydroxide will be treated as soluble and written as the ions rather than the solid. Notice that the products are liquid water and gaseous carbon dioxide. Problem #12: Write balanced molecular and net ionic equations for the following reactions. What is the reaction that occurs when Co(NO3)2 (cobalt(II) nitrate) is added to Na2CO3 (sodium carbonate) in aqueous solution? HCN, however, is a weak acid and is always written in molecular form. In cobalt(II) hydroxide, the anion and cation packing are like those in cadmium iodide, in which the cobalt(II) cations have octahedral geometry; cobalt(II) hydroxide precipitates as a solid when an alkali metal hydroxide is added to an aqueous solution of a Co2+ salt. 3) Predict products and write equations for precipitation reactions. The more chemically correct net ionic equation may differ from the one your teacher (or an answer in an online chemistry class) expects. By the way, it helps that the question text tips off that this reaction should be treated as an acid-base reaction: Ba2+(aq) + 2OH–(aq) + 2H+(aq) + SO42–(aq) → BaSO4(s) + 2H2O(ℓ)
, Cobaltous hydroxide, cobalt hydroxide, β-cobalt(II) hydroxide, Except where otherwise noted, data are given for materials in their. Note: ammonium does not always break down into ammonia gas. Privacy , The pure (β) form of cobalt(II) hydroxide has the brucite crystal structure.
Notes on writing net ionic equations (compiled from several worked Q&A examples):

General points:
- In predicting products, H2CO3(aq) is never a possibility; it decomposes to CO2 and H2O. The same is true of sulfurous acid: "There is no evidence that sulfurous acid exists in solution, but the molecule has been detected in the gas phase."
- NH4OH is an unstable "compound" that decomposes immediately to ammonia and water, so when ammonium reacts with a base, ammonia is produced.
- The HSO4- ion is a weak acid and is not written as dissociated. In reality, sulfuric acid is strongly ionized in its first hydrogen and then not strongly ionized in its second, but most people treat it as strongly ionized (meaning 100%) in both hydrogens.
- Acetic acid is a weak acid, so it is written in molecular form; in aqueous solution it is only a few percent ionized. Phosphoric acid and water are molecular compounds and are likewise not written in ionic form.
- Strong electrolytes such as sodium bicarbonate (and NaCN) are written fully ionized.
- The solubility of a compound is predicted from a solubility chart, but some compounds (for example thallium or vanadium salts) are not usually included on such charts. That TlI or V2(CO3)5 precipitates is the kind of thing one learns while studying what is soluble, what is not, and what exceptions to the rules exist. Similarly, ammonium dihydrogen phosphate is quite soluble, but it evidently precipitates when the solution is very acidic.
- Include state symbols such as (s) or (aq); (aq) is not used for an insoluble product.

Example (no reaction): cobalt(II) nitrate reacts with sodium chloride.
molecular: Co(NO3)2(aq) + 2NaCl(aq) → 2NaNO3(aq) + CoCl2(aq)
All four substances are soluble ionic compounds (remember: all nitrates and all chlorates are soluble) and all ionize, so every ion is a spectator and the answer is NR (no reaction). You may see a series of example reactions in which something happens and then, on a test, an NR appears without its possibility ever having been mentioned.

Example (precipitation): sodium hydroxide and cobalt(II) chloride.
molecular: 2NaOH(aq) + CoCl2(aq) → 2NaCl(aq) + Co(OH)2(s)
net ionic: Co2+(aq) + 2OH-(aq) → Co(OH)2(s)
The sodium ion and the chloride ion are spectator ions. The pure compound, often called the "beta form" (β-Co(OH)2), is a pink solid insoluble in water; a blue, rather unstable form also exists. Co(OH)2 reacts with strong bases to form solutions with dark blue cobaltate(II) anions, [Co(OH)4]2- and [Co(OH)6]4-.

Example (acid-base): NH4Cl(aq) + NaH2PO4(aq) →
Treated as a double replacement, the sodium ion and the chloride ion are the spectator ions. The key is to recognize that the ammonium ion can only be an acid (it has no capacity to accept a proton, which is what a base would do), and that forces the dihydrogen phosphate into the base role. A related net ionic equation with hydroxide as the base: 6OH-(aq) + 3H2PO4-(aq) → 3PO43-(aq) + 6H2O(ℓ)

Further exercises of the same type: ammonium carbonate reacting with barium hydroxide; sodium hydroxide added to ammonium carbonate (H2O is formed and ammonia gas, NH3, is released when the solution is heated); oxalic acid (H2C2O4) with sodium hydroxide; sodium hydrogen sulfite with hydrobromic acid; and sodium carbonate with a cobalt(II) salt (cobalt is insoluble with carbonate and forms the precipitate CoCO3, and some Co(OH)2 may also form because sodium carbonate is a weak base). | http://keszthelyiplebania.hu/journal/aggljkp.php?7de930=cobalt%28ii%29-nitrate-and-sodium-hydroxide-net-ionic-equation
Activity 1.2, NCERT Science Class 10, Chemical Reactions and Equations.
Brief procedure: Activity 1.2 asks us to mix an aqueous solution of lead nitrate with potassium iodide to check what happens.
Observation: A yellow colour precipitate appears at the bottom.
Explanation: Lead nitrate and potassium iodide are both colourless. They react with each other to form a yellow precipitate of lead iodide, which settles at the bottom of the tube.
Pb(NO3)2(aq)+ 2KI(aq) → PbI2(s) + 2KNO3(aq)
Next: Reaction of zinc with dilute acid. Activity 1.3.
See also:
Burning of Magnesium ribbon in the air Activity 1.1
Solved questions and activities of chapter 1 Chemical reactions and equations. | https://www.studdy.org/activity-1-2-ncert-science-class-10-chemical-reactions/comment-page-1/ |
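A balanced equation like the one in Activity 1.2 can be checked mechanically by counting the atoms of each element on both sides. The following is an illustrative Python sketch, not part of the NCERT text; `atom_counts` is a made-up helper that handles only one level of parentheses:

```python
import re
from collections import Counter

def atom_counts(formula, mult=1):
    """Count atoms in a simple formula like 'Pb(NO3)2' (one nesting level)."""
    # Expand parenthesised groups first, e.g. (NO3)2 -> NO3NO3
    def expand(m):
        return m.group(1) * int(m.group(2) or 1)
    flat = re.sub(r"\(([A-Za-z0-9]+)\)(\d*)", expand, formula)
    counts = Counter()
    for el, n in re.findall(r"([A-Z][a-z]?)(\d*)", flat):
        counts[el] += (int(n) if n else 1) * mult
    return counts

# Pb(NO3)2 + 2 KI -> PbI2 + 2 KNO3
left = atom_counts("Pb(NO3)2") + atom_counts("KI", 2)
right = atom_counts("PbI2") + atom_counts("KNO3", 2)
print(left == right)  # True: the equation is balanced
```

The same helper can be pointed at any of the simple equations in this chapter to confirm they balance.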
All chemical equations must be balanced. This means that the number of atoms of each element must be same on both sides of the equation.
The following steps must be followed to write an ionic equation:
Write the balanced chemical equation
Separate each aqueous compound into its component ions. (This cannot be done for liquids such as water, or for solids formed as products.)
Any ions which appear on both sides of the equation should be cancelled.
There are some rules which help to write ionic equations much easily.
For any acid-alkali (neutralization) reaction, the products are a salt and water. The ionic equation can be written as:
H+ (aq) + OH - (aq) → H2O (l)
It does not matter whether the acid is monobasic such as HCl or HNO3, or dibasic acids such as H2SO4. It also does not matter if the alkali is NaOH or Ca(OH)2.
When two solutions are mixed together and solid precipitate is formed, we call them precipitation reactions. For such reactions, you only write those two ions which make the precipitate.
Let us consider the reaction of hydrochloric acid with silver nitrate. The products are silver chloride and nitric acid. Silver chloride is a precipitate and a solid.
Ag+ (aq) + Cl – (aq) → AgCl (s)
Remember, solid, liquid (water), and gases do not ionize.
But if the reactant is a solid, chemical equation should be written first. Ions would be written in the next line, and common ions which appear on both sides, are cancelled.
Solid magnesium reacts with HCl to produce MgCl2 and H2.
The chemical equation is: Mg(s) + 2HCl(aq) → MgCl2(aq) + H2(g)
The ionic equation is: Mg(s) + 2H+(aq) → Mg2+(aq) + H2(g)
Solid magnesium carbonate reacts with HCl to produce MgCl2, CO2 and H2O. | https://www.tutorfair.com/resource/947/ionic-equations |
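The spectator-ion cancellation described in these steps can be sketched in code. This is an illustrative Python sketch rather than anything from the original tutorial; `net_ionic` is a made-up helper, and the example reuses the silver nitrate reaction discussed above:

```python
from collections import Counter

def net_ionic(reactant_ions, product_ions):
    """Cancel ions appearing on both sides (the spectators) and
    return (net_reactants, net_products, spectators)."""
    left, right = Counter(reactant_ions), Counter(product_ions)
    spectators = left & right            # ions common to both sides
    net_left = list((left - spectators).elements())
    net_right = list((right - spectators).elements())
    return net_left, net_right, sorted(spectators)

# HCl(aq) + AgNO3(aq) -> AgCl(s) + HNO3(aq)
# Dissolved species are split into ions; the solid AgCl is not.
reactants = ["H+", "Cl-", "Ag+", "NO3-"]
products = ["AgCl(s)", "H+", "NO3-"]

left, right, spect = net_ionic(reactants, products)
print(left, "->", right)     # ['Cl-', 'Ag+'] -> ['AgCl(s)']
print("spectators:", spect)  # ['H+', 'NO3-']
```

This mirrors the manual procedure exactly: ionize the aqueous species, leave solids, liquids, and gases intact, then strike out whatever appears unchanged on both sides.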
Test for sulfates, carbonates and ammonium / PAG4
Testing for negative ions (anions) - 3.1.4 Qualitative analysis

Testing for the presence of a carbonate: add any dilute acid and observe effervescence; fizzing due to CO2 is observed if a carbonate is present. Bubble the gas through limewater to test for CO2 - it will turn the limewater cloudy. 2HCl + Na2CO3 → 2NaCl + H2O + CO2

Testing for the presence of a sulfate: acidified BaCl2 solution is used as the reagent to test for sulfate ions. If barium chloride is added to a solution that contains sulfate ions, a white precipitate forms: Ba2+(aq) + SO4^2-(aq) → BaSO4(s). Other anions should give a negative result, which is no precipitate forming. The acid is needed to react with carbonate impurities that are often found in salts, which would otherwise form a white barium carbonate precipitate and so give a false result. Sulfuric acid cannot be used to acidify the mixture because it contains sulfate ions, which would themselves form a precipitate. The sequence of tests required is carbonate, then sulfate, then halide. (This prevents false results, as both BaCO3 and Ag2SO4 are insoluble.)

Testing for positive ions (cations): test for the ammonium ion, NH4+, by reaction with warm NaOH(aq), forming NH3. Ammonia gas can be identified by its pungent smell or by turning red litmus paper blue. | https://learnah.org/ocr/module-3/qualitative-analysis/
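The required test order can be sketched as a small decision procedure. This is an illustrative Python sketch, not part of the original notes; the function and argument names are invented, and each argument stands for the observed outcome of one test:

```python
def classify_anion(fizzes_with_acid, white_ppt_with_acidified_bacl2,
                   ppt_with_acidified_agno3):
    """Apply the tests in the required order: carbonate, then sulfate,
    then halide, so BaCO3 / Ag2SO4 cannot give false positives."""
    if fizzes_with_acid:
        return "carbonate"
    if white_ppt_with_acidified_bacl2:
        return "sulfate"
    if ppt_with_acidified_agno3:
        return "halide"
    return "none of these"

print(classify_anion(False, True, False))  # sulfate
```

The point of encoding the order is the same as in the notes: a carbonate must be ruled out before the barium chloride test, and a sulfate before the silver nitrate test.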
I do not understand this problem: The Ksp of CaSO4 is 4.93×10^-5. Calculate the solubility (in g/L) of CaSO4(s) in 0.500 M Na2SO4(aq) at 25 °C.
-
Chemistry
Sodium sulfate is slowly added to a solution containing 0.0500 M Ca^2+ (aq) and 0.0390 M Ag^+ (aq). What will be the concentration of Ca^2+ (aq) when Ag2SO4(s) begins to precipitate? What percentage of the Ca^2+ (aq) can be
-
chemistry 2
A saturated solution of Mg(OH)2 has a pH of 10.52. What is the Hydronium concentration? Whats the hydroxide concentration? Is this solution acidic,or basic?
-
Physical Chemistry
A saturated solution of Na2SO4, with excess of the solid, is present at equilibrium with its vapor in a closed vessel. (a) How many phases and components are present? (b) What is the number of degrees of freedom o the system?
-
Chemistry
Determine the solubility of a sodium sulfate, Na2SO4, in grams per 100g of water, if 0.94 g of Na2SO4 is dissolved in 20 g of water to make a saturated solution.
-
AP Chemistry
Consider the following reaction: CaSO4(s) ⇌ Ca2+(aq) + SO4^2-(aq). At 25°C the equilibrium constant for this reaction is Kc = 2.4×10^-5. (a) If excess CaSO4(s) is mixed with water at 25°C to produce a saturated solution of CaSO4,
-
Analytical chemistry
A 25.0-mL solution of 0.0660 M EDTA was added to a 33.0-mL sample containing an unknown concentration of V3 . All V3 present formed a complex, leaving excess EDTA in solution. This solution was back-titrated with a 0.0450 M Ga3
-
CHEM 1411
A 0.4550g solid mixture containing CaSO4 dissolved in water and treated with an excess of Ba(NO3)2, resulting in precipitation of 0.6168g of BaSO4. What is the concentration ( weight percent) of CaSO4 in the mixture?
-
chemistry
a saturated solution of milk of magnesia, Mg(OH)2, has a pH of 10.5. What is the hydronium concentration of the solution? is the solution acidic or basic?
-
Chemistry
I have discovered a new chemical compound with the formula A2B. If a saturated solution of A2B has a concentration of 4.35x10-4M. a) What is the concentration of B2- ions? b. What is the concentration of A1+ ions? c. What is the
-
Chemistry
In a saturated solution of silver phosphate, the concentration of silver ion is 4.5×10^-4 mol/L. The Ksp of silver phosphate would be which of the following? 6.8×10^-8; 1.0×10^-11; 1.4×10^-14; none of the above; 1.5
You can view more similar questions or ask a new question. | https://www.jiskha.com/questions/1344225/after-20-0g-of-na2so4-is-added-to-a-0-5l-saturated-solution-of-caso4-does-the |
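The first question above (CaSO4 in 0.500 M Na2SO4) is a common-ion-effect problem. A quick numeric sketch, assuming the usual approximation that the dissolved amount s is negligible next to 0.500 M, and taking the molar mass of CaSO4 as roughly 136.14 g/mol (an assumption, not given in the question):

```python
ksp = 4.93e-5         # Ksp of CaSO4 at 25 °C, from the question
common_so4 = 0.500    # M, sulfate already supplied by the Na2SO4
molar_mass = 136.14   # g/mol for CaSO4 (approximate)

# Ksp = [Ca2+][SO4^2-] ≈ s * (0.500 + s) ≈ s * 0.500 when s is small
s = ksp / common_so4                # molar solubility, mol/L
grams_per_litre = s * molar_mass    # convert mol/L to g/L

print(f"s = {s:.2e} M -> {grams_per_litre:.3f} g/L")  # ~0.013 g/L
```

The common ion suppresses the solubility considerably compared with pure water, which is the point the question is testing.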
Replication lets you see patterns and trends in your results. This affirms your work, making it stronger and better able to support your claims, and it helps maintain the integrity of your data. Repeating experiments also allows you to identify mistakes, flukes, and falsifications. Mistakes may be a misread result or incorrectly entered data; these are sometimes inevitable, as we are only human. More importantly, replication can expose falsifications, which can carry serious implications down the line.
2. Peer review
If someone is to thoroughly peer review your work, they would carry out the experiments again themselves. And if someone wants to replicate an experiment, the first scientist should do everything possible to allow that replication.
3. Publications
If your work is to be published, it is crucial to include a section on your methods. These should be replicable so that others can repeat your methodology. If your methods are reliable, your results are more likely to be reliable too. A replicable method also indicates that your data was collected in a generally accepted way that others can repeat.
4. Variable checking
Being able to replicate experiments and the resulting data allows you to check the extraneous variables. These are variables that you are not actually testing, but that may be influencing your results. Through replication, you can see how and if any extraneous variables have affected your experiment and if they need to be made note of. Through replication, you are more likely to be able to identify the undesirable variables and then decrease or control their influence where possible.
5. Avoid retractions
Replicating data yourself, as well as others doing it, is advisable before you publish the work, if that is your intention. This is because if the data has been replicated and confirmed before publication, it is again more likely to have integrity. In turn, the chance of your paper being retracted decreases. Making it easier for others to replicate data then makes it easier for them to support your data and claims, so it is definitely in your interest to make data replicable.
1. Record everything you do
While carrying out your experiment, you should record every step you take in the process. This is not only because it is good practice and is often required to track what you are doing, but it provides a log to look back at. This, in turn, gives you something to refer back to and enables you to repeat the experiment. It also makes it easier for others to follow the same steps to see if they obtain the same results, which is the whole aim of replicability.
2. Be totally transparent
Sometimes it can be tempting to ignore mistakes or to write results up more favorably than they actually came out. This also applies when you repeat experiments: if one run is a bit of an outlier, don't brush it under the rug. That is the point of repeats, to check your methods and equipment. If you are not truthful with those who will read your work and carry out the experiments in the future, you could significantly skew their results.
3. Make your raw data available
You should make your raw data available to others, so long as doing so does not compromise patents or similar interests. The data should be accompanied by the step-by-step process you went through and a description of each step. Having the raw data to compare against makes repeating the experiment easier, whether it is you repeating it or others replicating it in the future, since there is something concrete to refer back to.
4. Store you data in an electronic lab notebook
All of these problems with regards to data reproducibility can be tackled using an electronic lab notebook. ELNs’ clever data management allows you to enter data directly into your lab notebook, with an automatic full audit trail. This includes dates and times of creation, editing, deletion, signing and witnessing. Moreover, with an ELN you can create and share protocols or templates, thus making reproducible instructions for future use. If you would like to find out more as to why an ELN may just change your life (in the lab), click here for a comprehensive guide on ELNs
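The automatic audit trail described above can be illustrated with a minimal sketch. This is not how any particular ELN product is implemented; the class and field names are invented, and the point is only the append-only, timestamped structure of such a trail:

```python
import datetime

class AuditedNotebook:
    """Append-only log: every action is recorded with a UTC timestamp
    and an author, and past entries are never overwritten."""
    def __init__(self):
        self.trail = []

    def record(self, author, action, entry):
        self.trail.append({
            "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "author": author,
            "action": action,  # e.g. "create", "edit", "sign", "witness"
            "entry": entry,
        })

nb = AuditedNotebook()
nb.record("alice", "create", "Trial 1: 0.94 g Na2SO4 in 20 g water")
nb.record("bob", "witness", "Trial 1")
print(len(nb.trail), nb.trail[0]["action"])  # 2 create
```

Because the log only ever grows, anyone reviewing the notebook can reconstruct exactly who did what and when, which is the property that makes replication and auditing possible.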
Data reproducibility is one of the main conditions for data integrity. Research also has to demonstrate research reproducibility. These may sound similar, but they are actually quite different. Follow the links to find out the difference between data and research reproducibility. | https://www.labfolder.com/importance-of-replicable-data/
Overview:
Replication, which confirms the accuracy of empirical findings in research studies, is crucial to psychological science (Brandt et al., 2013). For a replication study to be considered successful, it must use conditions similar to those of the initial experiment and yield the same effects (MLP; Klein et al., 2014). Unfortunately, psychological science has been facing a replication crisis in which the replication rate is low. However, some researchers argue that the replication rate may be underestimated and is in fact much higher than what is commonly reported.
In the Open Science Collaboration (OSC, 2015), researchers conducted replications of 100 experimental and correlational studies. Their findings were disheartening: although ninety-seven percent of the original studies had significant results, only 36% of the replications yielded p-values less than .05. Moreover, the effect sizes of the replicated studies were on average only about half those of the original studies.
What are the potential reasons for the lack of replicability?
The low replicability rate may be related to factors such as small sample sizes, p-hacking (for example, increasing the sample size until the p-value becomes significant), the file-drawer phenomenon (not presenting results with null findings), and publication bias (negative results have a low probability of getting published), all of which can lead to a high rate of false positives. The publish-or-perish pressure present in the academic world, for example, pushes scientists to produce publications with positive results at a very high rate.
The bias against null results further compromises the integrity of the field, as researchers are aware that such work would have a low probability of publication. This may partially explain selective reporting, the tendency to report only the results that were significant. That is, even if a study has multiple hypotheses, only the ones that generated significant results will be presented.
Why is this a problem?
Replication increases the precision of effect-size estimates and establishes the generalizability of an effect, and replication studies that do not yield the same results as the original provide information on the conditions necessary for the expected effects (Nosek & Lakens, 2014). Given that one goal of research is to yield results that are externally valid, or generalizable across individuals and contexts, results that fail to replicate in other studies can be implemented inaccurately. Failure to identify the conditions under which a phenomenon occurs can lead to overgeneralization of results (Henry, MacLeod, Phillips, & Crawford, 2004; MLP; Klein et al., 2014).
What could be leading to the low replicability statistics?
Gilbert and colleagues (2016) present potential reasons for the low replicability of original studies, specifically those conducted in the Open Science Collaboration. They argue that one explanation could be errors in the methodologies the replications used, rather than the effects of the original studies being non-reproducible. The OSC had multiple sources of error in its data, including research-method infidelities that were taken into account when reporting the results (Gilbert, King, Pettigrew, & Wilson, 2016). Moreover, they argue that the OSC replications were underpowered. Finally, Gilbert and colleagues (2016) state that the OSC replication studies may have been biased toward failure; that is, the replicators expected low replicability of the original studies, and their methodologies and results served to confirm that bias.
Conclusion:
So, is the field of psychology facing a replication crisis? Lynch and colleagues (2005) argue that non-replication does not imply falsehood. Given that exact replication of studies is impossible, and even unnecessary, replications can fail because of differences in operational variables and in the construct (concept) itself. | https://africapost.info/how-to-start-a-replication-crisis/
- describing Avery, MacLeod and McCarty’s important experiment. In your own words, write a few paragraphs that could serve as an abstract for their landmark paper. (Review the procedures for devising an abstract in the “Scientific Method” commentary in Module 1.) Abstracts may be quite different in different journals or for different purposes, such as poster presentations. What I am looking for here is a brief summary of the research, including details about how the experiment was carried out.
- Discuss different models for DNA replication that were considered by early researchers. Explain why the current model for DNA replication was accepted following experiments by Meselson and Stahl. Describe this model in detail and interpret the significance of helicases, gyrases, RNA primer, DNA polymerase, Okazaki fragments, and DNA ligase
In no science or engineering discipline does it make sense to speak of isolated experiments. The results of a single experiment cannot be viewed as representative of the underlying reality. Experiment replication is the repetition of an experiment to double-check its results. Multiple replications of an experiment increase the confidence in its results. Software engineering has tried its hand at the identical (exact) replication of experiments in the way of the natural sciences (physics, chemistry, etc.). After numerous attempts over the years, apart from experiments replicated by the same researchers at the same site, no exact replications have yet been achieved. One key reason for this is the complexity of the software development setting, which prevents the many experimental conditions from being identically reproduced. This paper reports research into whether non-exact replications can be of any use. We propose a process aimed at researchers running non-exact replications. Researchers enacting this process will be able to identify new variables that are possibly having an effect on experiment results. The process consists of four phases: replication definition and planning, replication operation and analysis, replication interpretation, and analysis of the replication’s contribution. To test the effectiveness of the proposed process, we have conducted a multiple-case study, revealing the variables learned from two different replications of an experiment.
Data resilience is the availability of the data that is needed in a production environment. There are several technologies, which address the data resilience requirements that are described in the “Benefits of High Availability” section. These technologies can be split into two main categories on IBM® i – logical or software replication and hardware or disk replication.
Logical replication
Logical replication is a widely deployed multisystem data resiliency topology for high availability (HA) in the IBM i space. It is typically deployed through a product that is provided by a high availability independent software vendor (ISV). Replication is run (through software methods) on objects. Changes to the objects (for example file, member, data area, or program) are replicated to a backup copy. The replication is near or in real time (synchronous remote journaling) for all journaled objects. Typically if the object such as a file is journaled, replication is handled at a record level. For such objects as user spaces that are not journaled, replication is handled typically at the object level. In this case, the entire object is replicated after each set of changes to the object is complete.
Most logical replication solutions allow for additional features beyond object replication. For example, you can achieve additional auditing capabilities, observe the replication status in real time, automatically add newly created objects to those being replicated, and replicate only a subset of objects in a given library or directory.
To build an efficient and reliable multisystem HA solution using logical replication, synchronous remote journaling as a transport mechanism is preferable. With remote journaling, IBM i continuously moves the newly arriving data in the journal receiver to the backup server journal receiver. At this point, a software solution is employed to “replay” these journal updates, placing them into the object on the backup server. After this environment is established, there are two separate yet identical objects, one on the primary server and one on the backup server.
With this solution in place, you can rapidly activate your production environment on the backup server by doing a role-swap operation.
A key advantage of this solution category is that the backup database file is live. That is, it can be accessed in real time for backup operations or for other read-only application types such as building reports. In addition, that normally means minimal recovery is needed when switching over to the backup copy.
The challenge with this solution category is the complexity that can be involved with setting up and maintaining the environment. One of the fundamental challenges lies in not strictly policing undisciplined modification of the live copies of objects residing on the backup server. Failure to properly enforce such a discipline can lead to instances in which users and programmers make changes against the live copy so that it no longer matches the production copy. If this happens, the primary and the backup versions of your files are no longer identical.
Another challenge that is associated with this approach is that objects that are not journaled must go through a check point, be saved, and then sent separately to the backup server. Therefore, the granularity of the real-time nature of the process may be limited to the granularity of the largest object being replicated for a given operation.
For example, a program updates a record residing within a journaled file. As part of the same operation, it also updates an object, such as a user space, that is not journaled. The backup copy becomes completely consistent when the user space is entirely replicated to the backup system. Practically speaking, if the primary system fails, and the user space object is not yet fully replicated, a manual recovery process is required to reconcile the state of the non-journaled user space to match the last valid operation whose data was completely replicated.
Logical replication solutions can typically cover all types of outages, depending on the implementation. Recovery point objective (RPO) can be 0 if the distance between systems allows for synchronous remote journaling and all replicated objects are journaled. Using asynchronous remote journaling and having objects that must be replicated from the audit journal increases the RPO.
Another possible challenge that is associated with this approach lies in the latency of the replication process. This refers to the amount of lag time between the time at which changes are made on the source system and the time at which those changes become available on the backup system. Synchronous remote journal can mitigate this to a large extent. Regardless of the transmission mechanism that is used, you must adequately project your transmission volume and size your communication lines and speeds properly to help ensure that your environment can manage replication volumes when they reach their peak. In a high volume environment, replay backlog and latency may be an issue on the target side even if your transmission facilities are properly sized.
Hardware replication
Hardware replication is done at the operating system or disk level instead of at the object level. An advantage of these technologies over logical replication is that the replication is done at a lower level, and when done synchronously, there is a guarantee that both copies of the data are identical. The disadvantage of the technology is that the data is only accessible from one copy, and the second copy cannot be used during active replication.
Within hardware replication, there are again two categories, independent auxiliary storage pool (IASP) replication and full system replication. IBM PowerHA® SystemMirror® for i delivers several hardware replication technologies based on independent auxiliary storage pools or IASPs. An independent ASP or IASP is a set of disk units, which can be configured separately from a specific host system and can be independently varied on or off. An IASP is used to segregate application data from the operating system. Thus, the application data can be replicated by using hardware replication while not replicating the operating system. The IBM i implementation of IASPs supports both directory objects (such as the integrated file system (IFS)) and library objects (such as database files). While migrating the application data into the IASP is a separate step in setting up the environment, there are several advantages to only replicating the data and not the operating system. Planned and unplanned switches to the backup system are faster than if the entire system is replicated. The backup system contains a separate copy of the OS and can be used for other work while it is also used as a backup system for production. These technologies can be used for planned OS upgrades since there are again two copies of the operating system.
If migrating the application data into an IASP is not feasible, it is also possible to use hardware replication at the system level, typically called full system replication. Geographic mirroring, an IBM i replication technology, can be used in an IBM i hosted environment to replicate a production system. The replication technologies provided by the IBM storage systems can also be used to replicate an entire system. While easier to set up initially, full system replication does require more bandwidth than IASP-based replication. Full system replication is considered more of a disaster recovery technology than high availability, since there is only one production environment and it must be IPL'd on another physical system for a planned or unplanned outage. Tools and service agreements are available from IBM Lab Services that help automate and customize a full system replication environment if desired.
The just-released International Journal of Epidemiology (IJE) suite of publications reexamining the effectiveness of deworming in Kenya demonstrates the potential impact of replication research. The headline publication is a 3ie-funded replication study. The paper has been published alongside three additional commentaries: a synopsis of a systematic review of deworming evidence, a response from the original authors, and a response from the replication researchers. The publication of these papers in a respected journal puts the role of replication squarely where we think it needs and deserves to be to promote valuable public discourse around highly relevant evaluation evidence. We’re excited about these publications for a number of reasons.
First, replication research is being published! From 3ie’s perspective, this is an accomplishment in itself. We designed the Replication Programme to help change incentives. To encourage more replication papers that will improve the evidence base for policymaking, we need to convince researchers that publication outlets exist for these time-intensive replication studies. The IJE arguably has the highest impact within the field of epidemiology (see information on IJE’s impact factor here). The editors’ decision to publish these papers is a testament to the value they see in replication research. This in turn helps change publication incentives.
Second, the replication studies have sparked a larger conversation around the existing deworming evidence. We’re discovering that a significant grey area exists regarding the ability of replication researchers to recreate originally published results. These replication studies cannot simply be lumped into hard-to-define success or failure categories. Rather, they provide researchers with a valuable space to discuss analytical decisions and the robustness of published results. We believe these discussions improve the science around these evaluations, which in turn enhances the quality of the evidence on which policymakers rely to spend limited development funds.
Third, these conversations are public, which allows for scrutiny of the findings and a general discussion of the research. Miguel and Kremer helpfully provide their data for replication efforts here and a replication guide for their original paper here, both of which ease the process for replication researchers to reproduce their paper. The original authors have been very open in assisting researchers interested in reanalysing their results (as an example, here’s a report on GiveWell’s replication study). But most of the subsequent replication studies of their original deworming paper don’t appear to be widely circulated or publicly posted. That has now changed with the publication of the replication results and these commentaries on the deworming evidence in the IJE. The discourse is now open for interpretation by everyone, from researchers and policymakers to funders and implementers. Regardless of which side of the deworming debate one falls on, some facts remain incontrovertible. The revised tables in the original authors’ response to the replication study correct agreed-upon errors in the original publication. This clearly demonstrates the power of replication research.
Ultimately, 3ie, through our Replication Programme, seeks to change the incentive structure around replication research. If the reanalysis process becomes standardised and journals agree to publish this type of research, there will be a genuine opportunity for researchers to provide more robust evidence for policymaking. This open discourse around replication results will help normalise the replication process. We’re hoping to see more of these open discussions in the future.
Video: Benjamin DK Wood talks about how the publishing of this 3ie-funded study opens up the discourse on replications of impact evaluations.
Ruby on Rails
Ruby on Rails (RoR) is becoming one of the most popular web application frameworks. It is quite natural for developers to compare Ruby on Rails with other languages. Since the programming language used to write Rails is Ruby, the comparison is between Ruby and other programming languages, such as Perl, Python, and Java. Let’s look at the general differences between Ruby and other languages. The comparison below gives clear insight into their advantages and disadvantages in web application development:
Ruby versus PHP:
- PHP code executes faster than RoR code. However, a Ruby on Rails application has fewer lines of code than the same application written in PHP.
- Ruby on Rails applications need a UNIX-based server while the majority of the Web hosting companies support PHP applications.
- Testing code in a Ruby on Rails application is simple. In PHP, testing modules and code is a bit more difficult.
- A RoR application has a clearer code structure than PHP, which makes collaboration easier.
- A comprehensive range of frameworks, such as Zend, CodeIgniter, and CakePHP, supports PHP. Likewise, Ruby is supported by frameworks such as Vintage, Sinatra, and Rails.
- PHP requires less memory space than Ruby, and PHP applications generally run faster than Ruby on Rails applications.
Ruby versus Perl:
- Ruby is more object-oriented than Perl.
- Perl supports more Unicode properties, full case mapping, and grapheme handling. Ruby's support is more limited, and its string encoding is more explicit.
- Ruby has a larger set of third-party libraries than Perl.
- Perl supports multiple variable types, while Ruby has only one: a variable is a reference to an object.
- Perl supports automatic conversion of data types, while Ruby requires the programmer to convert types explicitly.
Ruby versus Java:
- Java and Ruby follow the same object-oriented principles.
- The biggest advantage of Ruby over Java is that you can accomplish tasks by writing fewer lines of code. This aids bug fixing and increases development speed.
- Ruby code is interpreted and does not need a separate compilation step. Java code must be compiled to bytecode before it is executed.
- Ruby offers flexibility and readability while Java offers better application performance.
- Java follows a strict C-style syntax, while Ruby allows the programmer to omit much of that boilerplate.
- Java code execution is faster than Ruby's, because Java code is compiled to bytecode that the Java Virtual Machine executes efficiently, typically just-in-time compiling it to machine code.
- Java is a well-known technology, and you can easily find experts to guide you. Ruby is comparatively new and availability of suitable support for an open-source technology takes some time.
- Ruby does not have type declarations, and you can bind a name to a variable as needed. In Java, every member variable belongs to some class, and the programmer must declare a variable's type and name before using it in code.
- Java and Ruby can be used together, and they complement each other. JRuby is an implementation of the Ruby programming language on the Java Virtual Machine.
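The type-declaration contrast above can be sketched in Python, whose variable model is close to Ruby's (this is an illustration of dynamic typing in general, not Ruby or Java code):

```python
# A name carries no declared type: it is a reference that can be
# rebound to an object of any class, and the object carries the type.
x = 42
print(type(x).__name__)    # int
x = "forty-two"
print(type(x).__name__)    # str

# The rough Java equivalent needs a declaration and is fixed:
#   int x = 42;   // x can never be rebound to a String
```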
A number of standards are emerging in the auto industry such as the well established CAN bus, the AUTomotive Open System Architecture (AUTOSAR), the Media Oriented Systems Transport (MOST), and the ISO 26262 standard. While the first three standards address the external interaction and networking issues within automobiles, ISO 26262 is a set of requirements for safety and reliability. It is important for applications from different suppliers and groups within a manufacturer to conform to these standards, though only ISO 26262 calls for the use of well-defined coding rules to prevent bugs that can lurk within code and cause sometimes life-threatening malfunctions (figures 1 and 2).
Figure 1: The AUTOSAR system links together various software applications running on different electronic control units (ECUs) within the automobile.
An embedded C coding standard is therefore needed that can be followed and understood by all teams and members of teams developing these interactive, networked systems. This is especially important because different teams and different suppliers will be using different development tools, compilers, and analysis tools. Therefore it is imperative to establish a common ground at the coding level.
Figure 2: C coding is at the heart of ISO 26262 compliance but the standard does not specify coding rules or standards at the C level.
What's to gain from a coding standard?
The adoption of a coding standard by a team or a company has many benefits. For example, a coding standard increases the readability and portability of software, so that software may be maintained and reused at lower cost. A coding standard also benefits a team of software developers and their entire organisation by reducing the time required by individual team members to understand or review the work of peers.
However, one of the biggest potential benefits of a coding standard has been too long overlooked: a coding standard can help keep bugs out. It's cheaper and easier to prevent a bug from creeping into code than it is to find and kill it after it has entered. Thus, a key strategy for keeping the cost of firmware development down is to write code in which the compiler, linker, or a static-analysis tool can keep bugs out automatically—in other words, before the code is allowed to execute. While it is certainly important to use tools to verify and certify conformance to the standards mentioned above, such certification does not guarantee that the underlying code is bug-free.
That is because there are many sources of bugs in software programs. The original programmer creates some of the bugs, a few lurking in the dark shadows only to emerge months or years later. Additional bugs result from misunderstandings by those who later maintain, extend, port, and/or reuse the code.
The number and severity of bugs introduced by the original programmer can be reduced through disciplined conformance with certain coding practices, such as the placement of constants on the left side of each equivalence (==) test.
The original programmer can also influence the number and severity of bugs introduced by maintenance programmers. For example, appropriate use of portable fixed-width integer types (such as int32_t) ensures that no future port of the code to a new compiler or target processor will encounter an unexpected overflow.
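The int32_t point can even be demonstrated from Python, whose ctypes module exposes C's fixed-width types; a value one past INT32_MAX silently wraps around, which is exactly the overflow a port to a narrower integer type can introduce (an illustrative sketch, not part of any coding standard):

```python
import ctypes

INT32_MAX = 2**31 - 1              # 2147483647, the largest int32_t value

wrapped = ctypes.c_int32(INT32_MAX + 1).value
print(wrapped)                     # -2147483648: silent wraparound
```

Fixing the width in the source code makes this behavior identical on every compiler and target, which is why standards favor int32_t over plain int.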
9 Signs You’re Meant to Become a Programmer
If you grew up around computers and have a knack for all things IT, you may have what it takes to become a programmer. It takes a lot of work to compete in the modern job market, so already having the right skills gives you an advantage.
Here’s everything you need to know about being a computer programmer. It should help you work out whether you’re meant for this career or not.
What Is a Computer Programmer?
A programmer deals with computers and their coding, working independently or under contract. They use different programming languages to create software or adjust their performance, whether it’s to do with functionality or appearance.
Responsibilities vary from job to job, but the typical tasks of a programmer involve:
- Fixing problems
- Updating and testing code
- Optimizing systems to suit the client’s needs
- Helping people with IT issues
That said, there are dozens of ways to earn money from coding and programming today. For example, you can design your own apps and open-source tools or pass on your skills with YouTube tutorials.
While pure talent can take you far, the more credentials you have, the better your career prospects. Considering how many industries have turned digital, programmers have opportunities everywhere, from fintech companies to online magazines.
What Skills Does a Computer Programmer Need?
How to become a programmer comes down to both hard and soft skills. To begin with, you need to know your way around a computer and as many programming languages as possible. These include:
- HTML
- CSS
- C++
- Java
- PHP
- SQL
You should also be good at fast problem-solving. If you like maths or puzzles, programming will give you plenty of chances to exercise that brain of yours. Attention to detail and multi-tasking go hand in hand with this too.
In terms of other soft skills, communication is a must. Unlike machines, people usually need simple words to understand what their computer is doing. When working with others, you need to be able to explain your work clearly and effectively, especially in reports.
Finally, how well you apply all these skills and turn them into profit depends on how organized you are. Without a realistic structure to your routines, it’s easy to lose track of tasks and waste both your and your employer’s time.
These are the key qualities of a successful computer programmer, worth expanding with additional skills. To give you a better idea of your prospects, the base salary for a senior software programmer in Mexico is between $97,000 and $732,000 per year, according to Payscale.
For now, let’s look at nine basic hints that you’re perfect for the programming life.
1. You Feel Comfortable Around Computers
Being computer literate isn’t just about knowing everything about computers. It also means you’re able to find your way around a new operating system or software and edit its code without too much trouble.
This kind of flexibility is invaluable for programmers.
2. You Know Lots of Handy Coding
Knowing several programming languages is great, but being able to whip up the most useful coding for each occasion is far more important. That’s the point of programmer jobs: good, quick, and easy solutions.
If you have this skill, even with one computer language like Python, you’re already a programmer.
3. You’re Good at Solving Computer Problems
To use the right coding, you need to know the problem. To recognize the problem and its solution, you need IT know-how alongside troubleshooting skills.
This is where a passion for puzzles can be an asset, making your bug-fixing efforts more fun than frustrating—a good attitude for a programmer’s workplace.
4. You’re Fast at Spotting Important Details
Get to know standard programming patterns well enough, and abnormalities should pop out. Working with pages and pages of code is even easier with such an eye for detail.
See if your experience and instinct tick this box. Otherwise, do what you can to develop good attention to detail. It’ll make you more effective and valuable to employers.
5. You Like Learning More About IT
A hunger for knowledge is common in programmers. If you like exploring a computer’s capabilities, taking apart and updating its coding, and just learning all you can about IT, you have a programmer’s heart and curiosity. And that is critical when tackling mounds of tasks as a professional.
6. You’re Good at Explaining the Ins and Outs of Computers
When it comes to working as a professional programmer, good communication skills are essential and can distinguish you from the competition.
If you can have casual conversations about programming with people who know nothing about it, and they can understand you, you have a powerful advantage.
As a programmer, you’ll be able to talk and write about your work in a way that benefits your employers, colleagues, or trainees. So, you’ll provide value in more ways than just fixing their computers.
7. You Can Work on Different Tasks at the Same Time
Fixing a bug can take several steps. Employers may ask for a bunch of tasks, some urgent for the company’s performance. For example, you could end up doing anything from troubleshooting people’s accounts and tweaking multimedia software to fine-tuning firewalls and countering cyber threats. So, an ability to manage multiple projects at once is a major plus.
You must be able to keep yourself motivated and on-schedule while jumping from job to job. For extra support, using Asana to track any project can be a life-saver.
8. You Can Manage Your Tasks and Time Effectively
Breaking down the previous point in more detail, you must have a good sense of what’s important and what isn’t. How much time do you have per day to work? Which tasks demand your immediate attention? Is there something small you can tweak at the same time?
If you already think and work this way, you’re ready to deal with most programming environments. It’s also a great stepping stone to build experience and prepare yourself for more challenging roles.
9. You Can Think Outside the Box
Sometimes, the solution to a programming problem isn’t the obvious or traditional one. IT literacy, curiosity, and creativity produce another essential skill: the ability to come up with new ideas to fix things.
Being well-versed in this kind of lateral thinking will make your resume shine. If you aren’t, start working on your ideas or explore online communities like Stack Overflow for unusual programming tricks you can add to your arsenal.
Learn to Code Like a Professional Programmer
There are many ways to learn coding: alone and with training, paid and free. You don’t need to love math to create a career in programming, but becoming a computer programmer everyone wants is a matter of dedication and hard work. If you tick even some of the boxes above, you’re on the right track.
For more real-life experience and to build a stronger resume, keep putting your skills to the test with jobs, courses, and challenges. These won’t just enhance your speed and abilities; they’ll also boost your confidence as a programmer.
You can’t learn to code for free. Unless you give these tried and tested resources a go, of course.
In the past, coding and programming were conflated; in fact, the terms were used interchangeably before the industry separated them. Today, however, both are needed for the development and maintenance of applications. A coder converts human instructions into computer languages. Professional developers tend to prefer the term programming, as it involves more resources and an orderly methodology. Unlike coding, it is not easy to master, and while both are vital to a software project, a programmer can expect to spend a considerable amount of time completing one.

Coding and programming are closely related but have different requirements. While a coder focuses on producing code, a programmer focuses on the overall process of developing software. A programmer is expected to have the expertise and experience needed to advance a project from inception to completion. A good programmer must be highly organized and able to communicate efficiently, and should also have a thorough familiarity with mathematical models and event management.

Although the two roles are similar, they produce different outcomes. When deciding between coding and programming, you must consider the desired complexity of the final product. A coder alone cannot create software that is multipurpose or visually appealing, and typically does not work with a designer, so you need to decide whether you want to work with a programmer or an artist. If you wish to build software with an interactive and attractive interface, a programmer may be the best choice.
Without a doubt, terms like developer, programmer, and Full-Stack engineer are becoming buzzwords among the public, yet the element of programming is common to all of them. This article, in a nutshell, helps you understand the foundation behind computer science and software engineering. It is worth mentioning that the terms coding and programming are often used interchangeably.
Put simply, a programmer is a person who writes computer software. The term computer programmer can refer to a specialist in one computer programming language or to a generalist who writes code for many kinds of software. (The terms software and application are often used interchangeably among IT professionals.) One who practices or professes a formal approach to programming may also be known as a programmer analyst. A programmer's primary computer language (Assembly, COBOL, C, C++, C#, Java, Lisp, Python, etc.) is often prefixed to these titles, and those who work in a web environment often prefix their titles with web. A full spectrum of occupations (software developers, web developers, mobile application developers, embedded firmware developers, cloud engineers, software engineers, computer scientists, and software analysts), while they do involve programming, also requires a range of other skills. For instance, all programmers need good analytical and problem-solving skills as well as soft skills like communication. Programmers also need to be creative, as they often must come up with unique solutions to new problems. Lastly, changes in the IT landscape require programmers to learn new techniques, tools, and terminology frequently; indeed, many companies have routine plans to upskill their technical teams.
Computer programming is a process that leads from an original formulation of a computing problem to executable computer programs. Programming involves activities such as analysis, developing understanding, generating algorithms, verifying that algorithms meet their requirements (including correctness and resource consumption), and implementing (coding) algorithms in a target programming language. Source code is written in one or more programming languages. The purpose of programming is to find a sequence of instructions that will automate performing a specific task or solving a given problem. The process of programming thus often requires expertise in many different subjects, including knowledge of the application domain, specialized algorithms, and formal logic. It is worth mentioning that all programming languages have one thing in common: the programming logic. For instance, conditional-statement logic is the same in Java, C++, and PHP; the languages simply use different syntax to express it. At a very high level, there are six major career tracks for IT professionals: web developer, mobile app developer, software engineer, cloud engineer, system administrator, and blockchain developer. Here is a free course for learning more about these six career tracks.
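To make the shared-logic point concrete, here is the kind of conditional logic described above, written in Python; Java, C++, or PHP would express the identical branching with their own syntax (a purely illustrative sketch):

```python
def grade(score):
    # The branching logic is identical in Java, C++, or PHP;
    # only the surface syntax (braces, keywords, sigils) differs.
    if score >= 90:
        return "A"
    elif score >= 75:
        return "B"
    else:
        return "C"

print(grade(92), grade(80), grade(10))  # A B C
```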
Other related tasks of programmers are testing, debugging, and maintaining the source code, implementing the build system, and managing derived artifacts such as the machine code of computer programs. These might be considered part of the programming process, but the term software (or application) development is often used for this larger process, with the terms programming, implementation, or coding reserved for the actual writing of source code. Software engineering combines engineering techniques with software development practices. One of the most popular and well-established methodologies for software engineering is the Software Development Life Cycle (SDLC). It follows the six essential steps below, in sequence, for building high-quality software applications.
- Requirement analysis
- Planning
- Software design such as architectural design
- Software development
- Testing
- Deployment
A computer program is a collection of instructions that performs a specific task when executed by a computer. A computer requires programs to function and typically executes a program's instructions in a central processing unit. Unlike humans, computers blindly follow their instructions and cannot judge whether those instructions make sense. As such, if a computer receives erroneous instructions or input, it will produce errors or misleading results. Designing a well-thought-out program therefore requires a lot of foresight.
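Because a computer follows instructions blindly, flawed input yields a confidently wrong answer; this deliberately trusting sketch (a hypothetical function, not from the article) shows the effect:

```python
def average(values):
    # The machine cannot judge whether the inputs make sense;
    # it simply executes the arithmetic it was given.
    return sum(values) / len(values)

print(average([20, 30, 40]))      # 30.0  -- sensible data, sensible result
print(average([20, 30, -9999]))   # garbage in, garbage out
```

The program performs exactly the computation it was given in both cases; only the programmer's foresight (validating input, rejecting outliers) can prevent the second result.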
A computer program is usually written by a computer programmer in a programming language. From the program in its human-readable source-code form, a compiler can derive machine code, a form consisting of instructions that the computer can execute directly. Alternatively, a computer program may be executed with the aid of an interpreter. A part of a computer program that performs a well-defined task is known as an algorithm. A collection of computer programs, libraries, and related data is referred to as software. Computer programs may be categorized along functional lines, such as application software and system software.
Summary
In this article, we briefly reviewed terms like programming and programmer and highlighted the context in which they are used. The article gives you a roadmap for becoming a software or application developer: the first step is to choose the right IT career path; then start learning one or two coding languages and practice them. Read the Comprehensive Review of Coding and Computer Programming article to learn more about the history and evolution of programming.
Resources
Here is a list of free courses for starting your programming career.
What is debugging?
Debugging, in computer programming and engineering, is a multistep process that involves identifying a problem, isolating the source of the problem and then either correcting the problem or determining a way to work around it. The final step of debugging is to test the correction or workaround and make sure it works.
In software development, the debugging process begins when a developer locates a code error in a computer program and is able to reproduce it. Debugging is part of the software testing process and is an integral part of the entire software development lifecycle.
In hardware development, the debugging process looks for hardware components that are not installed or configured correctly. For example, an engineer might run a JTAG connection test to debug connections on an integrated circuit.
How debugging works in software
The debugging process starts as soon as code is written and continues in successive stages as code is combined with other units of programming to form a software product. In a large program that has thousands and thousands of lines of code, the debugging process can be made easier by using strategies such as unit tests, code reviews and pair programming.
To identify bugs, it can be useful to look at the code's logging and use a stand-alone debugger tool or the debug mode of an integrated development environment (IDE). It can be helpful at this point if the developer is familiar with standard error messages. If developers aren't commenting adequately when writing code, even the cleanest code can be a challenge for someone to debug.
In some cases, the module that presents the problem is obvious, while the line of code itself is not. In that case, unit tests -- such as JUnit and xUnit, which allow the programmer to run a specific function with specific inputs -- can be helpful in debugging.
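Here is a hedged sketch of the unit-test idea described above, using Python's built-in unittest rather than JUnit/xUnit (the clamp function and its tests are invented for illustration): each test runs one function with fixed inputs, so any failure localizes the bug to that unit.

```python
import unittest

def clamp(value, low, high):
    """Confine value to the closed range [low, high]."""
    if value < low:
        return low
    if value > high:
        return high
    return value

class ClampTest(unittest.TestCase):
    # Each test feeds the function one fixed input, so a failure
    # points at a specific behavior of this one unit.
    def test_inside_range(self):
        self.assertEqual(clamp(5, 0, 10), 5)

    def test_below_range(self):
        self.assertEqual(clamp(-3, 0, 10), 0)

    def test_above_range(self):
        self.assertEqual(clamp(99, 0, 10), 10)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(ClampTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("failures:", len(result.failures))  # failures: 0
```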
The standard practice is to set up a "breakpoint" and run the program until that breakpoint, at which time program execution stops. The debugger component of an IDE provides the programmer with the capability to view memory and see variables, run the program to the next breakpoint, execute the next line of code, and, in some cases, change the value of variables or even change the contents of the line of code about to be executed.
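Python's built-in debugger supports the same breakpoint workflow outside an IDE; in this hypothetical sketch the breakpoint() call is left commented out so the code runs unattended, with the corresponding pdb commands noted alongside:

```python
def running_total(values):
    total = 0
    for v in values:
        # breakpoint()   # uncomment to stop here; the pdb prompt then
        #                # supports `p total` (print), `n` (next), `c` (continue)
        total += v
    return total

print(running_total([1, 2, 3, 4]))  # 10
```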
Why is debugging important?
Debugging is an important part of determining why an operating system, application or program is misbehaving. Even if developers follow the same coding standard, it is still likely that a new software program will have bugs. In many cases, debugging a new software program can take more time than writing it did. Invariably, the bugs in the software components that get the most use are found and fixed first.
Debugging vs. testing
Debugging and testing are complementary processes. The purpose of testing is to identify what happens when there is a mistake in a program's source code. The purpose of debugging is to locate and fix the mistake.
The testing process does not help the developer figure out what the coding mistake is -- it simply reveals what effects the coding error has on the program. Once the mistake has been identified, debugging helps the developer determine the cause of the error so it can be fixed.
Common coding error examples
Some examples of common coding errors include the following:
- Syntax error
- Runtime error
- Semantic error
- Logic error
- Disregarding adopted conventions in the coding standard
- Calling the wrong function
- Using the wrong variable name in the wrong place
- Failing to initialize a variable when absolutely required
- Skipping a check for an error return
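A few of the error classes above, rendered as minimal Python sketches (all function names are invented; each bug is marked with the fix in a comment):

```python
# Logic error: the code runs, but computes the wrong thing.
def is_even_buggy(n):
    return n % 2 == 1                  # bug: tests for odd; fix: n % 2 == 0

# Wrong variable name in the wrong place.
def total_with_tax_buggy(price, quantity, tax_rate):
    subtotal = price * quantity
    return subtotal + price * tax_rate  # bug: should be subtotal * tax_rate

# Skipping a check for an error return.
def parse_port(text):
    try:
        return int(text)
    except ValueError:                  # without this, bad input crashes later
        return None

print(is_even_buggy(4))                      # False -- the logic error made visible
print(total_with_tax_buggy(10.0, 2, 0.1))    # 21.0 instead of the intended 22.0
print(parse_port("80"), parse_port("http"))  # 80 None
```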
Debugging strategies
Source code analyzers, which include security, common code errors and complexity analyzers, can be helpful in debugging. A complexity analyzer can find intricate modules that are hard to understand and test. Other debugging strategies include the following:
- Static analysis. The developer examines the code without executing the program.
- Print debugging (also called tracing). The developer watches live or recorded print statements and monitors flow.
- Remote debugging. The developer's debugger runs on a different system than the program that is being debugged.
- Post-mortem debugging. The developer only stops to debug the program if it experiences fatal exceptions.
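The print-debugging (tracing) strategy from the list above can be sketched with Python's logging module, so the trace statements can be silenced by changing the level instead of deleting them (a hypothetical example):

```python
import logging

logging.basicConfig(level=logging.DEBUG, format="%(levelname)s %(message)s")
log = logging.getLogger(__name__)

def find_max(values):
    best = values[0]
    for i, v in enumerate(values):
        log.debug("step %d: v=%r best=%r", i, v, best)  # trace the flow
        if v > best:
            best = v
    return best

print(find_max([3, 7, 2]))  # 7 -- the DEBUG lines show how it got there
```

Switching basicConfig to level=logging.INFO turns the trace off without touching the function.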
Debugging tools
A debugger is a software tool that can help the software development process by identifying coding errors at various stages of the operating system or application development.
Some debuggers will analyze a test run to see what lines of code were not executed. Other debugging tools provide simulators that allow the programmer to model how an app will display and behave on a given operating system or computing device.
Many open-source debugging tools and scripting languages do not run in an IDE, so they require a more manual approach to debugging. For example, USB Debugging allows an Android device to communicate with a computer running the Android SDK.
In this situation, the developer might debug a program by dropping values to a log, creating extensive print statements to monitor code execution or implement hard-coded wait commands that will simulate a breakpoint by waiting for keyboard input at specific intervals.
Challenges of debugging
The debugging process can be quite difficult and require as much work -- if not more -- than writing the code to begin with. The process can be especially challenging when:
- The negative effect of the coding error is clear, but the cause is not.
- The negative effect of the coding error is difficult to reproduce -- for example, when web content contains drop-down menus.
- Dependencies are not clear, so fixing a coding error in one part of the program accidentally introduces new errors in other parts of the program.
History of debugging
The use of the word bug as a synonym for error originated in engineering. The term's application to computing and the inspiration for using the word debugging as a synonym for troubleshooting has been attributed to Admiral Grace Hopper, a pioneer in computer programming, who was also known for her dry sense of humor. When an actual bug (a moth) got caught between electrical relays and caused a problem in the Harvard Mark II, a computer built for the U.S. Navy, Hopper and her team "debugged" the computer and taped the moth into their logbook. The logbook, moth included, now resides in the Smithsonian's National Museum of American History. | https://www.techtarget.com/searchsoftwarequality/definition/debugging |
What is Coding and how to learn to code?
Consolidating success
A beginner who is just starting to learn a programming language must understand and be prepared for the fact that it will take a long time. Learning to code is a time-consuming process in which failure is felt far more often than success. To avoid quitting, you should definitely record everything you do. People very often lose motivation simply because they cannot feel their progress, and progress will certainly come if the beginner is diligent. Skills grow imperceptibly, so the beginner may not even notice that, in small steps, he is moving toward the desired goal.
Therefore, from time to time you need to remind yourself how far you have come, and look back often. This helps a lot: after all, looking at their first lines of code, anyone can see that they are making progress. It may seem that keeping these personal records is just for fun. In fact, recording your successes is very important; it keeps you motivated throughout the learning process. So start keeping a record, do not interrupt it, and mark each stage you pass.
Clear training conditions
When it comes to coding, many beginners make the common mistake of trying to do a bunch of tasks at once, and usually they give up on all of them before finishing any. Something else catches their interest, most often other tasks, so they jump from one project to another. Do not do that. It is best to work to a plan: solve one problem, or work through one example, until everything becomes clear. It's a very simple principle: one thing at a time.
But at the same time, you have to understand that you need to keep moving forward, so set yourself strict deadlines for studying each aspect of the language. You can imagine that an exam is coming soon and you will have to show everything you have achieved. This is motivating. Yes, these self-imposed deadlines may not be very pleasant, but coding is not all fun. Strict discipline will allow you to acquire the necessary skills, and meeting deadlines is almost the most important skill for a freelance programmer.
While ordinary users are afraid to make mistakes and hate when something goes wrong, the programmer is in a completely different position. Making mistakes is part of the job, and a very big part at that. Therefore, a novice programmer should teach himself how to read error messages, no matter how frustrating they may be. These messages contain a lot of valuable information, because they tell you exactly what was missed in the process of creating the code. Be prepared for such messages to appear very often; they will not go away even after you have finished learning the programming language. Do not skimp on the time spent working on mistakes: this is the most important part of learning. It is also good practice, because once you understand a problem, it becomes easier to avoid many similar mistakes later. Error messages are not a punishment; they are more like good teachers that show you how to work properly.
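As an illustration of how much an error message actually tells you, consider this small Python sketch (the function and the deliberate misspelling are invented for the example):

```python
# A deliberate mistake: the variable name is misspelled inside the function.
def greet(name):
    return "Hello, " + nmae  # should be `name`; raises NameError when called

try:
    greet("Ada")
except NameError as err:
    # The message pinpoints exactly what was missed while writing the code.
    print(f"The interpreter reports: {err}")
```

Reading the message carefully ("name 'nmae' is not defined", plus the file and line in the full traceback) leads straight to the typo, which is exactly the habit a beginner needs to build.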
Communicate with other developers
Such communication will also help you understand that other people face coding problems just as often, and that this is not unusual. And if the beginner can also help a fellow programmer, he will get a second wind and continue with renewed energy. Do not be afraid to communicate: developers are actually friendly people, often on their own wavelength, and a beginner just needs to tune into it.
Right and wrong approach
Beginner coders often try to copy pieces of code from other projects to solve their problems. They think it is reasonable, because the main thing is that everything works. This is a wrong and, moreover, very harmful approach. Not because copying is bad in itself, but because by copying, a beginner will not understand exactly what the code does. Copying is, of course, much easier than writing everything yourself.
But in the process of learning, such an approach creates large gaps in knowledge, and one day the beginner will give up coding, unable to solve a problem he is facing. When learning a programming language, you should not begrudge the time spent analysing any problem, even one that looks simple at first glance. And if you cannot come up with a solution right away, do not give up. Read, watch videos, ask others: a beginner must thoroughly understand the difficulties that arise. Learning a programming language is not quite the same as learning an ordinary human language; a coder deals with a machine, so he needs to understand what he is doing. Such knowledge of the language is invaluable when the educational process turns into practice.
Learning programming languages is not the most exciting experience, but everything can be fixed if you approach it with imagination. There is no better way to learn something than by playing games. This also applies to coding: you can learn a language quickly by playing, improving your skills at the same time. | https://mynewsfit.com/what-is-coding-and-how-to-learn-to-code/ |
How should you start learning to program? Programming is a superset of coding and is the primary function of a Software Engineer.
It normally takes much more time for an individual to become a qualified programmer than a coder, and this usually requires an understanding of many underlying technologies. Hence a lot of programmers are usually proficient in a number of different languages and have a fundamental understanding of the relationship between conceptualisation and implementation. A good programmer should be able to recognise the most useful software and technology stack needed in order to complete a task.
Like coding, the best way to become a proficient programmer is getting your hands dirty and engaging with different projects through the use of different technology stacks. By working in many different environments, the learner will be exposed to the surrounding technologies and gain a deeper understanding of the entire process.
“Computer science education cannot make anybody an expert programmer any more than studying brushes and pigment can make somebody an expert painter.” – Eric Raymond
It is important to note that a formal computer science degree is not necessarily the only way to become a good programmer. Many of the most prolific software developers achieved a considerable amount removed from any academic or professional environment. Take for example Apple's Steve Wozniak, an entirely self-taught engineer who became world-famous for designing the Apple I and Apple II computers.
And while you can always start learning on your own, an online coding course for kids is a great starting point. Instructors play an important part in teaching you how to think and approach problems in a structured manner. Remember, kids learn to code by coding. The more code that you cut, the better you become. In this way, it is like learning the piano or a sport. It takes technique and good practice.
https://www.ucode.com/courses/coding-classes-for-kids-ages-6-to-11
Sources: | https://www.ucode.com/coding-classes-for-kids/how-should-you-start-learning-to-program |
Since the dawn of software-based coding solutions, comments have been the bane of software programmers. There are a number of reasons for this. One, many programmers learn in environments where there is only one programmer. Two, programmers feel akin to artists forced to stop mid-masterpiece in order to provide commentary.
However, those resistant programmers realize how important comments are as soon as their boss requires them to maintain the code of a programmer no longer employed by the company. Comments often provide the key to what is otherwise a cryptic mess. They help us follow program flow, and they help readers skip the sections that are unimportant to them.
This is the reason that most coding standards include a thorough section on what and how to comment. Universities even devote entire classes to the art of source code commenting. Professors often frustrate budding computer scientists with the truism that there is no such thing as too many comments. Anyone who has had to maintain the software of another can attest to this.
The universal rule of thumb is that the programmer should write comments as he or she goes, even if that means scribbling incomplete thoughts and returning to them later. The universal rules of comments are that the programmer must comment every global/public variable and property as well as all major functions.
The comments for variables and properties should include the item’s purpose, terms of valid usage, and any other pertinent details. Universal coding standards call for more in-depth commenting on major functions. A major function should include a header and that header should include a description of the function’s promise, contract, and requirement.
Another universal rule for commenting is that programmers should comment any section of non-standard code. In this sense, non-standard indicates uncommon code or usage, innovations, kludges, workarounds, etc. The programmer should write these comments in a standard, easily identifiable way. | https://www.valid-computing.com/comments.html |
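A hedged sketch of these conventions in Python follows. The variable, function, header format, and the "NON-STANDARD" flag are all invented for illustration; they are one possible way to satisfy the rules above, not a universal standard:

```python
# MAX_RETRIES: maximum connection attempts before giving up.
# Valid usage: a positive integer; callers may read it but should not modify it.
MAX_RETRIES = 3

def parse_price(text):
    """Convert a price string such as '$1,234.50' into a float.

    Promise:     returns the numeric value of the price.
    Contract:    raises ValueError if `text` is not a recognizable number.
    Requirement: `text` must be a string using '.' as the decimal separator.
    """
    # NON-STANDARD: workaround for stray leading currency marks; stripping
    # characters instead of parsing a locale is a kludge, flagged as such.
    cleaned = text.strip().lstrip("$").replace(",", "")
    return float(cleaned)

print(parse_price("$1,234.50"))  # 1234.5
```

The point is less the exact header layout than its consistency: a maintainer skimming the file can find the purpose, contract, and any kludges without reading the implementation.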
The Indeed Editorial Team comprises a diverse and talented team of writers, researchers and subject matter experts equipped with Indeed's data and insights to deliver useful tips to help guide your career journey.
Computer programming is one of the fastest-growing careers that offer lots of opportunities to work in a wide array of challenging settings and earn a good salary. Computer programmers need to know multiple programming languages and must be willing to regularly update their skills. Knowing more about this profession will help you determine if this could be the right career option for you. In this article, we explore what is a computer programmer, what they do and how to become one.
What is a computer programmer?
A computer programmer is a professional who writes code in different programming languages to create functional websites, web applications and software programs. The programmer may also edit, update, expand and test existing code to improve it. Computer programmers may work as part of a software development team or as independent workers. They are also known as a computer developer or computer coder. Generally, these professionals work in collaboration with software developers.
Related: What Is C Programming Language? Benefits and Career Advice
How to become a computer programmer
There are several ways in which you can become a computer programmer, but the most traditional path is as follows:
1. Get a degree in computer programming
A college degree is not absolutely essential for becoming a computer programmer. Some of the most competent programmers developed their skills and knowledge through self-study or mentorship with more experienced programmers. However, a bachelor's degree in computer science, mathematics or information systems will educate you in key concepts and theories that you might find too difficult or time-consuming to learn on your own.
Research different accredited colleges and the computer programming courses that they offer. Find out about the course duration and the course fees. Join the course and complete the program.
2. Hone your programming skills
As a computer programmer, you will be expected to have a passion for coding and programming, as well as knowledge of some of the most important concepts. You can learn many of these from school, but there are many resources like books and websites available for you to learn how to program on your own time. Many lessons are available for free. It is likely beneficial to learn concepts on your own to have a foundation for further education later on.
3. Choose an area of specialisation
At some point, you will have to decide on an area to specialise in. With so many different programming languages and areas of specialisation, you will have to narrow your focus somewhat to ensure your ability to achieve the level of proficiency required in most positions.
Select a programming language you like or select one that is high in demand, and get proficient in it. Create programs and applications using it. You can use them later to show employers your coding expertise. Whichever path you decide on, it would probably be best to focus your efforts on mastering a single language. Mastery of one language is preferable to having only some knowledge of several different languages and will result in a higher likelihood of getting a job in a relevant industry.
4. Research the job market
Read job advertisements and note what employers seek in computer programming candidates. Find out which skills and programming languages are most in demand.
5. Prepare your resume and cover letter
Create templates that you can customise for each job application. Make sure you come across as professional in both your resume and cover letter. Also, check for any grammatical errors or spelling mistakes in your resume and cover letter.
6. Send job applications
Apply for a job in the way the employer has specifically instructed. If they do not want email attachments, send them links to your website or work portfolio.
7. Go for job interviews
Dress neatly and prepare well for a job interview. Have confidence in yourself and your abilities. Prepare a list of questions you want to ask the interviewer about their company and the job position.
8. Take a job offer
Consider the type of work you will get to do rather than the salary when accepting a job offer. Ideally, you want a job that will allow you to expand your skills and work on challenging projects.
Can anyone be a computer programmer?
Anyone can become a computer programmer with or without formal training, provided they are interested in programming and are willing to put in the time to learn different programming languages. However, from the employment perspective, it might be to your benefit to get a formal degree from an accredited college. Many of the top IT companies prefer to hire job applicants with a bachelor's degree in a related field.
If you do not have a formal degree but still want to work in the computer programming field, you will need to be resourceful and self-reliant. Consider creating self-directed websites, web applications and programs to show off your coding skills to prospective clients. Apply to programming jobs on sites like Indeed. Interact with other programmers at local events, on social media and in online forums. Participate in programming workshops and seminars. Build a network of programming industry professionals and let your contacts know that you are open to a job offer.
Is it hard to become a computer programmer?
It can be hard to become a computer programmer. Learning a programming language is similar to learning a foreign language. It can take time, but it is definitely possible to learn a programming language with patience and hard work. Try to learn a little every day and focus on mastering one language at a time, rather than attempting to learn several different ones at once.
The programming field is vast and constantly evolving. So, you must update your programming skills regularly and take the time to learn new languages. To stay relevant in this profession, it is essential to be a lifelong learner. Read books on programming and get instructed in important concepts from various free or paid online resources. Practise coding using different, faster and more innovative approaches. Narrow your focus to specialise in a specific area.
Related: How To Become a Web Developer
How long does it take to become a computer programmer?
The length of time it can take to become a computer programmer depends on your learning ability and the career path you choose to take. If you decide to do a bachelor's degree in computer programming, it will take you four years to complete the course. Apart from formal learning, you will need to put in additional time to study and practise programming skills on your own. Most programmers will have to study and practise for five years or more before they can take on complex projects.
What do entry-level computer programmers do?
Related: Similarities and Differences Between C++ and Java
What skills do you need to be a computer programmer?
To become a computer programmer, you need to have the following skills:
Knowledge of one or more programming languages
Ability to code proficiently
Logical thinking
Creative thinking
Critical thinking
Attention to detail
Organisational ability
Strong memory
Patience and determination
Ability to communicate clearly
Ability to collaborate with other professionals
Ability to work independently
Ability to multi-task
Troubleshooting ability
Problem-solving ability
Related: Technical Skills: Definitions and Examples
What is the salary of a computer programmer?
The average base salary of a computer programmer is ₹3,49,549 per year. The salary range will vary according to your educational qualifications, experience, skills, specialisation, job position and company. Your geographical location can also make a difference in the salary you receive.
Salary figures reflect data listed on Indeed Salaries at the time of writing.
| https://in.indeed.com/career-advice/finding-a-job/what-is-a-programmer |
I’m not a career counselor. I’m not in HR. I’m not even a coder (not a coder yet, he said optimistically).
But if you’re here reading this, I’m going to guess you aren’t any of those things either.
Even so, I’d argue that you’re actually one of the best people to answer whether or not coding is a good career. And, I’m pretty qualified myself, but for different reasons.
A career counselor might tell you one thing based on their specific specialty angle, while an actual coder might tell you something completely different based on their real-life experience.
And then I, as “blogger guy of this summer tech camp,” am pretty qualified to answer this question for you based on the fact that we have seen 450,000 students go through our programs (with many of those programs focusing on coding, specifically).
Because through these enrollment numbers, I see a lot of parents who believe in coding; families going out of their way to get their kids involved.
I can’t speak for everyone, but I’m pretty sure the reason a good portion of those families are doing so is because they feel the experience will better prepare their kids for college, internships, and future careers in coding. Said differently, they feel coding is a good enough career aspiration to put their hard-earned money towards it. To dedicate weeks of busy schedules.
Why Programming is the Best Job in 2021
What does a computer programmer do?
Computer programmers build, update, test and fix software programs using various coding languages. They'll review the system's overall performance regularly and will provide tech support, program updates and defect fixes when needed to properly support the system's data architecture. Other responsibilities a computer programmer holds include:
What is computer programming?
Computer programming is the process of writing code that makes software programs function effectively. Programmers use coding languages to write operational instructions for the computer system to follow. They typically work alongside software engineers or developers, who build design specifications for these programs.
The computer programmer uses these specifications to program the software accordingly. If any issues occur on the computer systems, programmers use coding languages to resolve them by rewriting, debugging and testing the software systems to help these programs operate more efficiently.
Is computer programming a good career?
Computer programming is a good career for those who enjoy learning new coding languages and want to work in the technology industry. You can use problem-solving and critical thinking abilities to solve any complex technical challenges, which may make the job feel rewarding and fulfilling. It's also a great role to pursue if you'd like to receive a good salary, work traditional office hours and spend your time behind the computer in an office environment.
Average salary for a computer programmer
Luckily, many employers are still in need of strong computer programmers who can implement effective software programs.
Tips for becoming a computer programmer
To become a great computer programmer, you must earn the necessary education, skills and qualifications. Follow these tips to become a computer programmer:
Earn your degree
Most employers require computer programmer candidates to obtain their bachelor's degree in computer science or a related subject. Others may only require you to earn a high school diploma with several years of coding experience or an associate's degree from a junior college or technical institute. If you'd like to increase your chances of moving up in the role or standing out to employers, you can also earn your master's degree in computer programming.
Develop and enhance your skill set
Employers are typically impressed with computer programmers who have an advanced skill set to help them contribute valuable work to the role. You can develop new skills or build upon your current ones by taking additional online courses, attending seminars or receiving hands-on training from computer programming professionals. Common skills effective computer programmers have include:
Learn additional coding languages
Many employers prefer computer programmers who have an advanced knowledge of several different coding languages. If you have a bachelor's or associate's degree in computer science or computer programming, you'll typically learn some basic coding languages that you can use in your computer programming role. To stand out from other candidates, you can learn additional languages by watching instructional videos, taking online courses or attending in-person coding language sessions.
Common coding languages computer programmers use are:
Receive computer programming certifications
Another way to enhance your skill set and to stand out to hiring managers is to earn computer programming certifications. The type of certification to earn typically varies according to the industry or specialty you're pursuing. Many computer programming certifications require you to enroll in a course and pass an exam. Common certifications many computer programmers obtain include:
Participate in an internship
Many employers prefer candidates who have on-the-job experience working in a programming or development role. You can pursue internship programs to help you gain more hands-on experience in computer programming and to learn more about the industry you're interested in. This is a great way to meet industry professionals and to gain hands-on experience completing computer programming duties. Many internships allow you to work directly under a senior computer programmer who can teach you the basic daily tasks and responsibilities of a computer programmer.
| https://carreersupport.com/is-computer-programming-a-good-career-definition-and-tips/ |
Wow, it’s been almost a month since I’ve posted! I can’t believe time is flying by so quickly. I have a point to this post (albeit convoluted, as usual), and I’ll get to it, but first… a couple of words about what the hell I’ve been up to!
Good (First) Jobs Are Hard To Find
The reason I’ve been so quiet lately is a combination of personal/family issues (aging family is rough) and my desperate job search as I near the end of my financial rope. I finished my Code Academy at the local community college in late August, and went on to launch myself fully into finding a first job. As it turns out, trying to land your first development position is actually very difficult, and extremely competitive.
So, this great thing happened to help me with this part… I’M GOING TO THE GRACE HOPPER CELEBRATION OF WOMEN IN COMPUTING! THIS IS HUGE, folks. I received notification that I was taken off the waitlist, and immediately freaked out because it’s not exactly a good time for me financially. The irony is that the reason I need to be at this event is the networking and even interviewing potential.
Fortunately, my Wellesley alum network had my back, and within 48 hours, we had crowdsourced the funds for my ticket and airfare. I can’t even believe that I have so many incredible supporters and fans cheering me on to succeed in this career transition. 🙂 Many, many thanks to everyone who was able to contribute. Hopefully next year, I’ll be a GHC Scholar or receive some other kind of grant funding to attend!
New Opportunities Means New Languages
Once I know enough, I’ll probably do a Code Speed Dating piece on Ruby, but that’s not the point of this post, either. But so you know, so far, I’ve just gone through da basics:
- variables
- functions
- strings/numbers/floats
- arrays
- hashes
- objects
These are all the tools in our toolbox. However, as I jumped into the coding challenges… I realized that I have some hangups. You can’t use the tools unless you understand what the project actually requires, and what you’re going to actually need to do.
The key takeaway of this post, if there is one, is that learning to program isn't exactly a process of writing code. Programming is artistic, and centered around evaluating problems the world faces. As an engineering discipline, we build solutions to these problems by first approaching the problem in our native language.
What exactly is the problem? How many smaller problems make up the larger problem? How can each of those smaller problems be approached? If you are working for an organization, you may even have to ask yourself, what is the most effective way to build this program for scalability and expansion?
Tunnel Vision: The Problem With Focusing On Code
I’m noticing that I am going through Treehouse at grueling paces. There have been occasional “extra credit” challenges, and they tend to make me a bit nervous. At some points, I’ve even just looked up what other folks did to solve the problem, then tried to reverse engineer it to truly understand their solution. This, to me, is indicative of a greater problem. I’m not spending enough time learning solid problem solving skills.
But how do you know when you need help with problem solving?
I’ve been reading a lot lately about actually “thinking like a programmer”. One of the books I’ve really enjoyed so far is called Think Like a Programmer: An Introduction to Creative Problem Solving. I’m currently about half way through it, and would highly recommend it to anyone looking to become a more well-rounded programmer. In fact, the book opens up with the following text, and I felt like it was speaking directly to me:
Do you struggle to write programs, even though you think you understand programming languages? Are you able to read through a chapter in a programming book, nodding your head the whole way, but unable to apply what you’ve read to your own programs? Are you able to comprehend a program example you’ve read online, even to the point where you could explain to someone else what each line of the code is doing, yet you feel your brain seize up when faced with a programming task and a blank text editor screen?
My answer to this is a resounding yes! I'm not afraid to admit that it takes far longer for me to develop an approach to a problem, then develop the solution, than it should. I actually believe that this is a very fixable problem that virtually every new programmer who doesn't come from a logic/problem solving background will eventually encounter. And it comes with time! But for now, I figure I can do something to speed up the process.
After the introduction, Think Like a Programmer moves into some "classic puzzles" like the sliding numbers puzzle, and even Sudoku. It's hard to believe, but these games tie very closely into programming concepts, as well as identifying patterns in problems and solutions. Building on recognizing common problems in programming, including recursion, pointers, code reuse, and objects… Think Like a Programmer does a solid job of explaining how to approach them, regardless of the language being used. This is one of the greatest parts of learning to effectively solve problems in programming: while the book's examples are primarily in C++, its principles apply to any language.
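The decomposition habit the book teaches can be sketched with a classic exercise. This example is mine, not from the book, and is written in Python rather than the book's language:

```python
def count_ways(steps):
    """How many ways can you climb `steps` stairs taking 1 or 2 at a time?

    The big problem decomposes into two smaller ones: the climbs that start
    with a single step plus the climbs that start with a double step.
    """
    if steps < 0:
        return 0
    if steps in (0, 1):
        return 1
    return count_ways(steps - 1) + count_ways(steps - 2)

print(count_ways(4))  # 5 distinct climbs
```

Facing a blank editor, the hard part is noticing that decomposition, not typing the recursion; once the two subproblems are named, the code almost writes itself.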
Another book in my queue is Pragmatic Thinking and Learning: Refactor Your Wetware (Pragmatic Programmers), which has been plugged by programmers on StackOverflow and several blogs. The cover image above is an excerpt from the book, depicting a map of the process surrounding “pragmatic thinking and learning”. I’ll probably write something up on this book once I’ve read it!
Once I’m done with this book, and maybe beforehand… I’m going to take advantage of GAMES! Specifically, coding games! Codingame appears to utilize coding to make things happen in video games, moving away from the abstraction that tends to plague programming challenges. Being a gamer, this sounds like a lot of fun. I enjoy hands-on exercises driven towards making something happen that I will actually use, more than coding for abstract problems that I may not actually be facing.
In Closing…
At the end of the day, I want to feel confident and capable. Acknowledging what I am doing well, and also identifying what can use improvement, is a constant process in a developer’s life. It takes a lot of introspection and mindfulness to recognize our weaknesses. I have a feeling that I’ll be writing a lot more on this topic as I learn more, and hope that you’ll provide me with your insights as well!
Do you, too, experience paralysis when facing a blank text editor page? Know of some great resources for learning to solve problems more effectively? Let’s talk. As always, you can also keep the discussion going on the La Vie en Code Facebook page, or via Twitter @lavie_encode. Happy coding! | https://www.lavieencode.net/blog/wiwo/why-im-scaling-back-on-learning-to-code-and-ramping-up-on-learning-problem-solving/ |
I think this post points in the wrong direction.
Programming, the art of abstraction, logic, reduction, and rewriting, is independent of 'the real world'. A great programmer can look at a program and see its beauty irrespective of its real-world significance. Finding new minimal ways to represent patterns and mechanisms to verify the correctness of those patterns is more important to me than how it can be used in real life.
I agree with the need to raise your head, but for a different reason. Other great programmers are exploring important ideas regarding syntax and semantics that can drastically improve your expressivity, correctness, and productivity. A significant amount of any programmer’s time is well spent measuring him/herself and his/her productivity, and looking for ways to optimize that. The payback of exploring different architectures, computing paradigms, languages, and algorithms is enormous.
Ben
Jeff, nice post about needing some breadth to go with your programming depth. But, I was a little disappointed with the content vs the title: I’d hoped that you’d be writing about the programmer’s failure mode of approaching every problem with the solution “write more code” already in mind.
Really great programmers go to the heart of the problem, find what aspect of it adds most value to the user then write a very small amount of code to address that, and that alone. Programmers have a tendency to think that the thing that they do that adds value (and justifies their paycheck) above all else is write more code. And so they write code. What they don’t take into account is that code is a liability for the customer, not an asset, so the less of it they produce (while adding value through solving the problem), the better.
Best of all, delete some. The very best programmers can take a new problem, find the essence of it, find the commonality with the essence of the problems already solved, incorporate the new solution and end up with less code than they started with. http://www.folklore.org/StoryView.py?story=Negative_2000_Lines_Of_Code.txt for example
Ivan Moore will even tell you that deleting code is a refactoring http://ivan.truemesh.com/archives/000393.html In fact, he sometimes says that even if that isn’t your goal a good non-delete refactoring should allow you to delete some code, or else you are just pushing code around for the sake of it.
“But if you accept that premise, it also presents us with a paradox: if experience doesn’t make you a better programmer, what does? Are our skill levels written in stone? Is it impossible to become a better programmer?”
Um, that’s not what Bill said. He said you’ll know within a couple years what type of programmer you are, not that experience can’t make you a better programmer.
Really good programmers spend a lot of time THINKING. Not madly typing in code. Typing should be the last act, not the first.
This reminds me of the old and so true quote from Larry Wall
“The three chief virtues of a programmer are: Laziness, Impatience and Hubris”
And I think that order is important, too…
Your ability to think creatively is proportional to your programming talent. Great problem solvers make great programmers.
Some of the best code I’ve ever written was actually done on a legal pad with a pen. I’ve written code across the spectrum, from memory management on early Linux kernels up to massively parallel supercomputers over satellites. Code that I look back at and say “Damn! That was good.” I did it when I backed away from the computer, sat down with pen and paper, and wrote the algorithm/design out. Sometimes it was before coding and sometimes in the middle of slogging through digital sewage.
My point is that good code is developed in your head, not your compiler. You don’t need a computer immediately to write good code.
Here you go again, Jeff, beating that poor dead horse again. We’re already familiar with your opinion on this. But I would like to ask you a few things. Will not running make you a better runner? Will not playing tennis make you a better tennis player? Will not thinking make you a better thinker? You’re preaching this approach because you, in the natural progression of your career, are programming less and managing more. I mean, come on, if you have this much time to blog then you’re not pumping out a thousand lines a day.
You see, you are trying to convince yourself as much as you are trying to convince us that programming less is the way to go. It’s ok, I’m not faulting you for that. We all evolve in our careers (we have no choice, eh?) It’s just that I don’t think your main argument is valid. You might become a better manager, a better architect, etc. by programming less, but you won’t become a better programmer that way.
I think you’re completely missing the point.
Programming more will give you a better understanding of syntax, control structures and other technical elements of programming, but there is more to being a good programmer than just writing code.
An example of the difference is the word verification on this page. A bad programmer who knows lots of syntax would probably write a bulky code base that randomly generates and scrambles a series of random characters. However, that’s like hammering a finishing nail with a sledgehammer. The simple phrase ‘ORANGE’ is probably just as effective at blocking comment spam as some over complicated captcha system, but it required far less code and is much easier for users to deal with.
The point is that there are 2 ways to solve most programming problems. Either keep writing code until it works, or take time to address the goals and develop a concise, elegant solution. Anybody can memorize syntax; it takes a good programmer/developer to really understand the bigger picture.
You may want to read Steve Yegge’s counterpoint to this post:
http://steve.yegge.googlepages.com/practicing-programming
How do you know you are even a ‘good’ programmer?
If I thought 4 years ago “This is the best I’ll ever be.” then I would probably be a worse programmer today. Well-roundedness is an issue and being a good programmer is about solving the right problems.
Perhaps Bill meant your personality as a programmer is set in stone, but you’d never convince me that I haven’t been increasing my skills and knowledge.
To be fair, I still approach problems the same way. I also still like writing good clean code and I still like to read books on all sorts of topics (dev and otherwise).
I would however hope the code I write now isn’t as bad as what I wrote back then.
Couldn’t agree with you more, Jeff; programmers must keep their heads above the water for other things beside coding. Computer programs will be useless without any users using them, right? It’s just like your previous post: coding is fun, but shipping is the real job.
Good stuff. However, the term ‘programming’ perhaps needs to be better defined: Optimizing a routine (pointlessly) so it is impossible for someone else to read but is 1% faster? Writing code that can easily be extended and modified? Building an app that can (relatively) easily be changed due to the whims of the customer? Seeing the big design picture and organizing the development strategy?
Another curious thought is why are some people really, really productive in one language but not in another? Why does a certain language ‘get in the way’ of a person’s mind as they try to turn a problem into a solution?
I read both articles (which includes the one in the comments above). They both approach programming from two different perspectives. Yes, you have to practice and study to become a better programmer. You also have to get away from it from time to time to gain the perspective of those who are going to use, sell, or market the code you’re working with.
The way I see it, the articles are not opposing each other but complementing each other.
I agree to a point. I would strongly argue against the notion that we can’t get any better ever. I’ve been coding professionally for about 7 years now and I think I’m MUCH better now than I was even 2 or 3 years ago.
That said, I do think that there are some people who hit that wall and can’t proceed any further. People for whom this article is more true. I’ve seen it in school, in my work, etc.
So I think you can’t generalize this to all programmers. I continually study and read to improve my skills. Not all my studying is on actual coding but a good portion of it is. And I feel like I’ve gained more skill in the past 2-3 years than I did in the 4-5 years before that.
Kathy Sierra’s “How to be an expert” essay
http://headrush.typepad.com/creating_passionate_users/2006/03/how_to_be_an_ex.html
takes the opposite point of view to your suggestion that “A mediocre developer can program his or her heart out for four years, but that won’t magically transform them into a good developer…You’ve either got it, or you don’t. No amount of putting your nose to the grindstone will change that.”
Her view is that “The only thing standing between you-as-amateur and you-as-expert is dedication. All that talk about prodigies? We could all be prodigies (or nearly so) if we just put in the time and focused. At least that’s what the brain guys are saying. Best of all–it’s almost never too late.”
Peter Norvig’s “teach yourself programming in ten years”
http://norvig.com/21-days.html
would seem to agree with Ms Sierra’s ‘dedication’ theory.
Ben wrote:
I think this post points in the wrong direction.
Programming, the art of abstraction, logic, reduction, and rewriting is independent of ‘the real world’. A great programmer can look at a program and see its beauty irrespective of its real-world significance.
Ben, there’s a hell of a difference between a skilled developer and a skilled programmer. That’s the whole point of the story.
JensG
Well said, JensG. The point is that you must look beyond yourself to improve – not simply sharpen what you already know.
If you know a programming language, it becomes a means - a tool - not an end in and of itself. It’s just part of your skills that can’t get much better.
Overrated blog post! Nothing intriguing. Actually it’s difficult to describe a good programmer, and I know a few folks who have gotten much better at programming after years of programming. IMHO, good programming is a function of your intelligence, and I think great programming requires not only intelligence but also creativity. The best programmers I know are artists, not engineers!
Very interesting post! IMO, the “da Vinci route” should be efficient to excel in just about anything: Study anything even remotely related to your field. You can get completely different insights from doing a bit of 3D modeling, OS tweaking, web accessibility, iPod hacking, gaming, system administration, etc… Any of these can make you a better programmer, by giving you a view of how different types of systems behave - Or maybe more important, give you ideas of how they /should/ behave. | https://discourse.codinghorror.com/t/how-to-become-a-better-programmer-by-not-programming/633?page=2 |
In the next 1-1.5 years, a floating photovoltaic station with an installed capacity of 156 kW will be built on Lake Yerevan. The public hearings on the environmental and social impact of the plant construction project took place in Yerevan on October 6, 2021.
The main issues during the hearings were the impact of the plant’s operation on the lake ecosystem, and how to manage solar waste, in particular the panels, after the end of operation.
The solar station will be built within the framework of the "Development of Floating Solar Stations in Armenia" project, implemented by the Renewable Energy Fund of Armenia (R2E2) and the French organization "TRANSENERGIE". The floating solar station will occupy 1,600 sq. m (less than 0.3% of the lake area) and will have 396 modules, each with a power of 395 W. The station will be located on a special island, which will be anchored to the bottom of the lake by 14 anchors. According to the project, this station will produce 227 MWh of electricity per year.
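The quoted figures are internally consistent, as a quick arithmetic check shows. The only assumption below is reading the article's annual production figure as MWh rather than MW, since MW is a unit of power, not energy:

```python
# Sanity check of the plant figures quoted above:
# 396 modules at 395 W each, and an annual output of ~227 MWh.

modules = 396
watts_per_module = 395

capacity_kw = modules * watts_per_module / 1000
print(capacity_kw)  # 156.42 kW, matching the quoted 156 kW installed capacity

annual_mwh = 227
full_load_hours = annual_mwh * 1000 / capacity_kw
print(round(full_load_hours))  # ~1451 h/year, plausible for a fixed array
```

The implied 1,451 full-load hours per year is in the typical range for fixed solar installations at this latitude, which supports the MWh reading.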
Arevik Hovsepyan, the environmentalist at "Jinj" LLC who carried out the assessment of the environmental and social impact of the plant, mentioned that there will be a certain impact on waterfowl. "The waterfowl perceive the islands as water, hit them and die. If we reduce the brightness of the panels, the incidence of bird damage will be reduced," she said.
One of the main risks of operating such power plants is the storage, processing and use of the panels. Upon expiration, the panels become waste containing compounds of cadmium, lead and copper.
In reply to a question from "EcoLur" about what the investing company is going to do with this waste years later, Arevik Hovsepyan responded: "We do not yet have an answer as to what we will do with them in 20 years. I think we should already be thinking about waste management, because there are and will be a lot of stations.”
Regarding the content of toxic compounds, the representative of "Jinj" LLC, Arsen Hayiryan, said that the panels do not contain toxic substances. "The panels are made of silicon, which is a crystalline compound, with a glass and aluminum frame. The disposal problem as such is not so acute. You can just put it somewhere; it will not pose a threat to the environment. The question concerns recycling, and it is possible," he said.
R2E2 Director Karen Asatryan noted that the Armenian government currently includes in project agreements for solar-station investments an obligation to dispose of the equipment at the end of the project. "At the moment, panels around the world are being recycled and reused, though with expensive technologies. I am confident that in 20 years we will have technology that will allow us to recycle and reuse the panels we use," he said.
In response to a question by Grigor Nazaryan, Head of the Environment Department of Yerevan Municipality, about what will be done with panels that are damaged before the end of the operation period, Arevik Hovsepyan responded that those issues are not regulated by the legislation. The RA Law on Environmental Impact Assessment (EIA) does not mention solar stations, although it is necessary to include issues related to solar stations in the law. These stations must also undergo an EIA.
"In addition, we do not have legislation on waste. We do not even know if a panel is considered hazardous waste. The panels are not classified, so we cannot tell where to take them if they break," she said. | https://www.ecolur.org/en/news/waste/13545/ |
A Road Not Taken: President Jimmy Carter
Fairfax Feature Films presents this film about President Jimmy Carter’s energy and conservation policies
In 1979, President Jimmy Carter, in a visionary move, installed solar panels on the White House and implemented clean energy and environmental protections.
Reagan later removed these panels during his presidency.
In 1991, Unity College, an environmentally-minded center of learning in Maine acquired the panels and installed them on their cafeteria roof.
This powerful film explores Carter’s energy and conservation policies at a time when we already knew about global climate change, and will likely reshape how Carter is remembered in the future: as one of the greatest visionary leaders of the United States, if not the world.
A follow-up discussion will be led by Barbara McVeigh, local filmmaker and author, and Charlie Siler, cohost of People’s Environmental News.
If you have a group interested in a screening of this film, please contact Barbara McVeigh 415.717.0151
ADMISSION INFO
Free Admission
Contact: 415.453.8151
| https://www.marinarts.org/event/a-road-not-taken-president-jimmy-carter/ |
Green Zone:
A renewable energy educational display will be created in the south entry area of Dennis Hall. The interactive display will include information about solar power, wind power, the on-going work of Earlham's Environmental Responsibility Plan, and other relevant environmental issues and activities. As much as possible, the furnishings will be made using LEED-certified materials.
The goal of the Green Zone is to provide people from Earlham and surrounding communities with a space where they can observe the workings of renewable energy. It is an area where visitors will be able to visualize how renewable energy can be installed and used in their own homes or businesses. All electrical equipment in the Green Zone will be powered by the solar panels and wind turbine on the roof on bright and windy days. On some days, the Green Zone will take some electricity from the local (Richmond Power and Light) grid. And on some days, it may be sending "green" power into that grid. A 52" x 60" LCD array will be installed for the display of real-time and historical data (meteorological, energy production, energy consumption, ... ) and other educational information. Also included will be a display of hardware for the power collector and converter, and the grid interconnect.
Solar Power - Coming Fall 2005
There will be two installations of solar panels at Earlham College. The first is a 3.15 kW (kilowatt), 20-panel solar system. Some of these panels will be positioned on the parapet of the roof so that they are visible from the ground. The others will be on the flat roof where they can be easily and safely viewed. These will be installed during the fall of 2005. The second installation is at Miller Farm south of campus. This is a year-round residence for approximately 9 students who live in an intentional community with a focus on community development and sustainable agriculture. Installation of panels on the south roof of the farmhouse is planned for the spring of 2006. Both installations will involve interconnection with the local power grid, so that electrical energy can flow in either direction.
Wind Power – Coming Spring 2006
This involves installation of a 1 kW (kilowatt) wind generator on a 30-foot tower on the northeast roof of Dennis Hall, including tie-in to existing electrical hardware from the solar panels. One purpose of this installation is to confirm that wind energy production is feasible in the area, as has been indicated by meteorological studies over the past few years.
We are currently doing wind prospecting on Earlham property south of campus. This involves putting up moveable equipment that measures and records wind speed and direction. The data that is collected, along with the data from the wind generator on the roof of Dennis, will be used by Earlham to determine the feasibility of installing a commercial-grade wind farm south of campus.
RP&L (Richmond Power and Light)
This project will provide the first opportunity for RP&L to work out the details of a grid interconnection system. Even though the law mandates that utility companies cooperate with small renewable energy producers, the actual implementation of that cooperation typically is not straightforward. We have already begun discussions with RP&L officials, and expect to move forward in close cooperation with them. | https://wiki.cs.earlham.edu/index.php/Coming_Soon |
As of 2022, the global environmental crisis has reached new heights. Since 1880, the global temperature has risen around 0.32°F (0.18°C) per decade, roughly twice the rate at which it would rise naturally. Over time, the extra heat has altered regional temperatures, making them fluctuate to the extremes.
More warmth means the ice caps are melting, contributing to rising sea levels and coastal flooding. The climate changes are severely impacting numerous plants and animals, causing them to lose their habitat range. Worse still, scientists predict the problems are only going to grow in the coming years as the temperature continues to rise.
To combat this issue, environmentalists have been campaigning for greener policies across all industries to reduce the world’s carbon footprint. Likewise, they’ve appealed to the general public to make more eco-friendly choices to cut back on greenhouse emissions. They claim that the most effective way to do this is to switch to solar power.
But is this truly the case? Is solar as beneficial as they claim? What is the government saying about solar energy? Keep reading to find out everything about this most precious resource!
Solar Energy 101
It’s a basic fact that the world needs electricity to operate. And for most of the modern industrial era, we have relied on fossil fuels to generate it. As a result, industries burn coal, natural gas, and petroleum to generate an average of 4,116 billion kWh of electricity each year.
Next to fossil fuels, other ways of creating power include nuclear power plants and renewable energy sources, such as wind, hydro, and solar power. Solar power, in particular, is a rising star in the energy industry because of its efficiency, ease of access, and eco-friendliness.
Essentially, solar power is the energy we harness from the sun by converting electromagnetic radiation into direct current. This is done using solar technology such as solar panels. The panels contain a layer of silicon teeming with photovoltaic cells that become charged when they absorb sunlight. The charge knocks electrons loose from their bonds, and they flow under the influence of the internal electric field of the cells.
This process produces a direct current, which is then converted into alternating current via an inverter. On average, a domestic solar system can generate 200-400 kWh per month. This is more than enough electricity to power an average American home.
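As a rough sketch of how generation figures like those arise: the usual back-of-the-envelope estimate multiplies nameplate capacity by peak sun hours and a derating factor for inverter and wiring losses. The system size, sun hours, and derating value below are illustrative assumptions, not figures from this article:

```python
# Back-of-the-envelope monthly output for a domestic PV system.
# All inputs are illustrative: a 3 kW array, 4 peak sun hours per day,
# and a 0.8 derating factor for inverter/wiring/soiling losses.

def monthly_output_kwh(system_kw: float, peak_sun_hours: float,
                       derate: float = 0.8, days: int = 30) -> float:
    """Estimate monthly AC energy (kWh) from DC nameplate capacity."""
    return system_kw * peak_sun_hours * derate * days

print(monthly_output_kwh(3.0, 4.0))  # 288.0 kWh, inside the 200-400 kWh range
```

Varying the array size between roughly 2 kW and 4 kW with these assumptions spans the quoted 200-400 kWh range.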
What’s more, the panels can work in all weather conditions. The sun never completely disappears from the sky, even when it’s overcast. Therefore, the panels are the ideal investment if you live in an area with bad weather but still want to reduce your carbon footprint.
The Government’s Stance on Solar
While environmentalists praise solar as the best renewable energy source available, it has its fair share of detractors. Solar skeptics claim the eco-friendly label is a scam and that solar panels aren't all they're cracked up to be.
However, they are just a vocal minority. Most people, including the government, agree that solar energy is the best solution we have for fighting climate change. For one, solar panels don’t create greenhouse emissions when they generate electricity. Since you can discreetly outfit them to your roof, they don’t interfere with local flora and fauna when running. Lastly, they don’t produce any toxic chemicals that could pollute landfills and, most importantly, water sources.
In light of all these benefits, the government has implemented various solar programs to incentivize the general public to invest in a system of their own. Some of these include:
- Federal Investment Tax Credit: A program that offers anyone interested in getting panels a tax credit of between 22% and 26% on all systems installed from 2020 to 2023.
- The Modified Accelerated Cost Recovery System: This program guarantees that you will also receive a tax reduction on any income you may get via your solar-powered home.
- The Public Utilities Regulatory Act: In case you decide to create a small power production facility using your system, the government can mandate that the utilities purchase energy from you at stellar prices.
So, regardless of what you plan on using the panels for, you’re guaranteed to get some major benefits out of your system!
To Sum Up
Environmentalists aren’t the only ones praising solar power. The government is promoting it too! There are currently numerous benefits for homeowners considering switching to solar, including tax credits and reduced energy bills. So, if you were hesitant about going solar, rest assured. This renewable energy is not only good for the environment but for your wallet too!
| https://atlantickeyenergy.com/what-is-the-government-saying-about-solar/ |
In my travels around the internet I found this site put together by the Australian PV Institute showing the level of installation of solar photovoltaic panel installation by postcode or Local Government area.
In Fawkner there have been about 246 solar PV installations. This amounts to approximately 5.4% of the estimated 4520 dwellings in our suburb with an installed capacity of 571kW.
This is lower than the Moreland Local Government Area (LGA) average of 6.1%, and also below other municipal areas in Melbourne’s north. Maribyrnong has 7% solar installation, Moonee Valley is on 6.5%, Darebin is on 6.9%, Banyule on 6.7%, Whittlesea on 9.7%, and Hume on 11%.
I made up this table to show the relative level of installation of solar panels across Moreland:
| Suburb | Total Dwellings | Dwellings installed | Percent | Installed capacity |
|---|---|---|---|---|
| Fawkner 3060 | 4520 | 246 | 5.4% | 571 kW |
| Hadfield, Glenroy and Oak Park 3046 | 11482 | 635 | 5.5% | 1391 kW |
| Coburg 3058 | 11303 | 835 | 7.4% | 1990 kW |
| Pascoe Vale 3044 | 9127 | 546 | 6% | 1238 kW |
| Brunswick 3056 | 7014 | 389 | 5.5% | 835 kW |
| Brunswick South, Brunswick West, Moonee Vale, Moreland West 3055 | 3933 | 246 | 6.3% | 563 kW |
| Moreland LGA | 56139 | 3343 | 6.1% | 7666 kW |
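The Percent column in the table above follows directly from the dwelling counts. A quick check for a few suburbs (figures taken from the table; this is just the published arithmetic restated):

```python
# Recompute the installation percentage for a few suburbs from the
# table above (total dwellings, dwellings with solar installed).
areas = {
    "Fawkner 3060": (4520, 246),
    "Coburg 3058": (11303, 835),
    "Pascoe Vale 3044": (9127, 546),
}

for suburb, (dwellings, installed) in areas.items():
    pct = installed / dwellings * 100
    print(f"{suburb}: {pct:.1f}%")
# Fawkner 3060: 5.4%
# Coburg 3058: 7.4%
# Pascoe Vale 3044: 6.0%
```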
There is a substantial opportunity to increase residential rollout of solar PV panels in Fawkner and in Melbourne’s north to keep the pressure on reducing carbon emissions. Solar panels reduce peak demand on base load power stations. This is particularly important in Victoria where we have some of the most polluting power stations which burn carbon intensive brown coal.
On a global level, Italy (16%) & Germany (32%) accounted for almost half of global solar PV capacity installed up to 2012, according to The Climate Group (@ClimateGroup) in a tweet.
Moreland Council achieved carbon neutrality in 2013 with carbon offsets
Moreland Council has installed solar panels on Council buildings and currently purchases 100% Green power, with an aggressive energy efficiency campaign to reduce Council energy usage. Think about all the street lights which Moreland has been changing over to the much more efficient and long lasting LED lights. Through buying carbon offsets – an accredited program financing wind power generation in India – it has achieved carbon neutrality this year. This is a positive step, although Council needs to continue to reduce energy and carbon emissions to reduce the level of carbon offsets purchased.
Moreland Council are also presently preparing a climate action plan 2020 in conjunction with the Moreland Energy Foundation with an aim to make the municipality – that’s all of us – carbon neutral by 2020.
If you need assistance with reducing your energy usage MEFL’s Positive Charge can assist with advice, assessment and products.
Solar revolution led by ordinary Australians as Government action falters
One of the last major reports of the Climate Commission before it was abolished by the new Abbott Government was The Critical Decade: Australia’s future – solar energy (PDF). It was written by Professor Tim Flannery and Professor Veena Sahajwalla. The report highlighted that ordinary Australian households were leading an energy revolution with more than one million rooftop solar PV systems installed by May 2013 in just a few years.
Climate scientists continue to say this is the critical decade for climate action. With the Abbott Government attempting to dismantle the effective but limited policies of the former Labor Government, and obstructing international climate negotiations in Warsaw, it is time for individuals, businesses to step up to taking action.
The current conservative Victorian Government, led by Premier Denis Napthine, has backtracked on its bipartisan support for climate action and the rollout of solar power under the previous State Labor Government, and is currently considering a 13 billion tonne brown coal allocation to develop a coal export industry.
We need to keep holding our State and Federal Governments accountable to Australian public opinion on action on climate change and carbon pricing. | https://fawkner.org/2013/12/12/solar-pv-panel-installations-in-fawkner-exceed-5-per-cent-of-dwellings/ |
The boss behind plans to build a large solar farm in West Dorset say they are listening to concerns from residents about its scale and use of farm land. Developer Statera Energy has unveiled plans to install solar panels and batteries to help generate renewable energy on 1,400 acres (570 hectares) of farmland near Weymouth.
Maps released by Chickerell Solar and Storage, Statera's local division, show three locations which could see solar panels installed, along with up to 400 MW of battery storage capacity close to the existing substation at Chickerell. The developer hosted an event to speak with around 200 residents at Willowbed Hall in Chickerell on Tuesday (November 22) before hosting another public consultation at Portesham Village Hall on Wednesday (November 23).
The events have allowed residents to examine the proposed plans, speak with Statera representatives and share their opinions and feedback. Residents learned that the solar farm could power tens of thousands of properties in Dorset and how the area has some of the best solar irradiance in the country.
Andrew Troup, director at Statera Energy, told DorsetLive the solar farm is a “nationally significant infrastructure project” that would help Dorset Council meet its carbon-neutral targets as well as the Government’s environmental targets for 2030. He added that the solar panels would be installed on poor-quality farming land and argued it would be the best use of the green space and would support the landowners.
He said: “I must have spoken to about 20 people over two hours, and I think there was only one person who was kind of pretty anti (against the development), but we sort of managed to get somewhere with them. The rest were either agnostic or just interested. There were lots of really good questions. I wouldn't be surprised when we count up the feedback forms that over 50 per cent were actually for the plans.”
Mr Troup noted that they were actively listening to residents’ feedback and will use comments raised in feedback forms at the public consultation events to tailor their final plans to be submitted to Dorset Council. Some online criticism believed the solar panels would “ruin the landscape”, but Mr Troup believed that the development “wouldn’t be as visible as you think”.
He added: “Some of the comments we had from the people in Coldharbour make really interesting reading. We will definitely be taking on board some of their views because they're particularly interested in the footpath system. So there is something we can do about that and we will take those (comments) into account.”
Speaking to residents at the public consultation, many were interested to learn more about the project and keen to speak to Statera’s representatives. One said: “The solar panels would be tucked away in these fields and I don’t think we would see them. In fact, I have had to look up the area to see where it would actually be installed. It seems like a good use of land”.
Another said: “With the rising fuel costs, we have to look at producing our own energy and solar panels is the way forward.”
One man leaving the event said he was “not terribly impressed” with the plans while West Dorset MP Chris Loader previously described the development as "an appalling use of green belt farmland". More than 500 acres of the proposed infrastructure would be within the designated Area of Outstanding Natural Beauty (AONB).
Statera states in their literature that the development would be a "substantial investment in the local area", create jobs during its construction and help decarbonise regional electricity supply.
| https://www.dorset.live/news/dorset-news/chickerell-solar-farm-plans-event-7858061 |
While Central New York State might not get a lot of visible sunlight in mid-February, Syracuse University is taking advantage of the sunshine it does receive throughout the year as part of its campus sustainability program.
During a recent renovation project at the Schine Student Center, SU installed photovoltaic (PV) panels last summer, which started generating power in October 2020. The project was designed by SU’s Campus Planning, Design and Construction (CPDC) engineering team with input from staff members at SUNY College of Environmental Science and Forestry.
According to the University, the Schine Center’s 139 panels have a capacity of 50 kw and are expected to generate 66,000 kWh per year.
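As an aside, the quoted figures are internally consistent. A quick sanity-check sketch, using only the numbers above plus the 8,760 hours in a year:

```python
# Sanity check of the Schine Center figures: 139 panels, 50 kW, 66,000 kWh/yr.
panels = 139
capacity_kw = 50
annual_kwh = 66_000

watts_per_panel = capacity_kw * 1000 / panels
print(f"~{watts_per_panel:.0f} W per panel")        # ~360 W

# Capacity factor: actual output vs. running at rated power all 8,760 hours
capacity_factor = annual_kwh / (capacity_kw * 8760)
print(f"capacity factor ≈ {capacity_factor:.1%}")   # ≈ 15.1%
```

A capacity factor of around 15% is plausible for a fixed rooftop array at Central New York latitudes, so the published figures hang together.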
Archi-Technology provided Technology Construction Management services for the installation of required network infrastructure connections from the rooftop solar panels to the University’s energy management IP-connected systems.
Our thanks to our project manager Richard Ladd for his contributions to this green project.
> Read more about the project at SU’s STEM blog post.
Archi-technology LLC
Established in 1996, we specialize in facility technology infrastructure as well as IP-based building systems such as Communications, AV and Security. | https://www.archi-technology.com/news/archives/02-2021 |
Why do we need solar energy?
Some 97% of the world’s scientists agree that the planet is heating up, and manmade carbon dioxide emissions are responsible. The way we generate energy is the largest contributor to carbon emissions, so there is an urgent need to decarbonise our energy supplies for the future, as well as to reduce our overall energy usage.
In December 2015, 195 countries reached a landmark Climate Change agreement in Paris to limit the rise in global temperatures to below 2 degrees celsius, with an aspiration to keep it to 1.5 degrees. Many commentators believe this marks the end of the era of fossil fuels and the transition to a new clean energy future.
In November 2014, the latest report from the Intergovernmental Panel on Climate Change stated that renewables would need to grow to 80% of global power by 2050 if “severe, pervasive and irreversible” damage is to be avoided.
The UK has a binding international target to generate 15% of energy (30% of electricity) from renewable sources by 2020. Solar power is a well-established technology, which is falling in cost. By the end of 2015, renewables accounted for 25% of overall UK electricity generation, with solar’s capacity comprising 8.2 GW, or 29% of overall renewable capacity. (Source: DECC)
The UK is also facing an energy supply catastrophe, following decades of underinvestment. The energy regulator OFGEM has warned of a risk of power cuts as early as 2015 if we do not start getting more power generation online very quickly.
The UK energy supply currently depends heavily on imported fossil fuels; solar power can give us homegrown electricity, which can be generated and used close to where it is needed, improving the UK’s energy security.
Solar energy is low impact, relatively low cost, beneficial to local wildlife, has limited impact on local communities and it performs well in the British climate.
What is a solar farm?
A solar farm, or solar park, is the large-scale use of Solar Photovoltaic Panels (PV) to generate renewable electricity. The panels convert sunlight into electricity and feed it into the local electricity grid via inverters, which change the current from DC to AC. Solar panels do not need direct sunshine to work, but can also generate electricity when the weather is cloudy or overcast.
Approximately 50 acres of land is needed for every 10 megawatts (MW) capacity of solar PV – enough to power 3,000 average homes and save 4,300 tonnes of CO2 per year. Because solar farms require large areas of land they tend to be developed in rural areas where there is space and they can be well screened. They are subject to a rigorous planning process, which takes into account the characteristics of the site and any potential impact on the area.
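The ratios above can be scaled to any project size. This sketch derives its per-MW constants purely from the quoted figures (50 acres, 3,000 homes and 4,300 tonnes of CO2 per 10 MW of capacity), so treat the 30 MW example as illustrative only:

```python
# Scale the quoted per-10-MW ratios (50 acres, 3,000 homes, 4,300 t CO2/yr)
# to an arbitrary capacity. The constants come from the text, not from data.
def solar_farm_estimates(capacity_mw: float) -> dict:
    per_mw = {"acres": 5.0, "homes": 300, "tonnes_co2_per_year": 430}
    return {key: value * capacity_mw for key, value in per_mw.items()}

# Illustrative 30 MW farm: ~150 acres, ~9,000 homes, ~12,900 t CO2/yr saved
print(solar_farm_estimates(30))
```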
What is the impact on the local landscape?
A solar farm should be appropriately designed and located so that it has a minimal visual impact on the surrounding area. A well-designed solar farm will take advantage of natural screening due to contours in the land and existing hedgerows and woodland. Screening can also be improved with new planting of hedgerows and trees as part of a land management plan; this is all taken into account during the planning process.
Panels are installed on metal frames screwed directly into the ground – no concrete is used in their construction. The inverters which convert the current are housed in small boxes which are located to cause minimal intrusion.
Security fences are necessary to protect the solar farm, with their design complying with local planning authority requirements. Deer fences are usually used so that they blend in well with the surrounding countryside, and smaller mammals can still access the land enclosed by the solar farm.
Solar farms are temporary – planning permission is usually granted for 25 years; after that all the equipment can be removed and mostly recycled and the land can revert to its previous use.
Against a backdrop of rising energy costs and extreme weather events, there is increasing economic pressure on farmers to diversify. By providing a constant source of income, a solar farm can help maintain farming as a local way of life, preserving the agricultural character of the local landscape for the future.
What impact do solar farms have on food production?
Very little. Climate change and the decline in pollinators, such as bees, probably pose a far greater threat to food production – and these can both be mitigated by solar farms. On an agricultural site, the land can continue to be used for sheep grazing and beekeeping, for example. The solar panels only take up around one third of the land area; the rest can be planted with grasses and wildflowers to encourage wildlife and improve biodiversity.
According to the Solar Trade Association, the UK has 59 million acres of land, with 45 million in agricultural production. 10GW of solar would only use 60,000 acres or 0.1% of overall UK land area. The impact on food production would therefore be incredibly small even if arable land was used; currently substantially more land is proposed for growing energy crops such as willow and miscanthus, which generate much less energy per acre than a solar farm – around a tenth.
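A quick check of that land-use arithmetic, using only the figures quoted above:

```python
# Check: the quoted 60,000 acres for 10 GW against 59 million acres of UK land.
uk_land_acres = 59_000_000
acres_for_10_gw = 60_000

fraction = acres_for_10_gw / uk_land_acres
print(f"{fraction:.2%} of UK land area")  # 0.10%, matching the claim
```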
The National Farmers’ Union and BRE National Solar Centre have published these guidelines: BRE (2014) Agricultural Good Practice Guidance for Solar Farms
How safe are solar farms?
Solar is one of the safest energy generation technologies in the world. Solar PV technology dates back to the 1950s and its use is widespread throughout the world – not only in large-scale solar farms but in domestic situations too, in the form of rooftop solar panels.
Solar cells are made from silicon, almost the same as sand, and contain no heavy metals or toxic substances. They are covered with a thin layer of protective tempered glass, and all the materials are non-volatile in normal operating conditions and insoluble. Solar panels have no moving parts and create no emissions. Solar panels do not emit energy radiation and therefore cannot interfere with equipment such as mobile phones, heart monitors, pacemakers, hearing aids or TV reception.
What impact does the solar farm have on wildlife?
In essence, a solar farm is a nature reserve that is left largely untouched for 25 years, resulting in huge benefits for wildlife and biodiversity. Their ecological value is recognised by organisations such as the National Trust, the RSPB, Friends of the Earth and the Bumblebee Trust. As the RSPB says, “Solar farms could be a real asset in our countryside by giving declining wildlife like bees and farmland birds a home.”
In Britain, wildflower meadows have decreased by 97% since the 1930s thanks to intensive farming practices. The decline in pollinators, such as bees, is particularly worrying, and has an economic as well as environmental impact – according to the government, pollinators are thought to be worth around £400m a year to the UK economy. Our solar farms are designed to boost biodiversity with progressive ecological and land management plans to create wildlife havens, encouraging bees, butterflies, bats, etc.
All our sites comply with or exceed BRE (2014) Biodiversity Guidance for Solar Development. Click here to view the BRE National Solar Centre Biodiversity Guidance for Solar Developments.
How will local residents benefit?
The green electricity produced by a solar farm is fed into the local grid and it will travel the shortest distance to meet demand closest to the area where it is generated. That means that when the solar farm is generating power, local residents and businesses will be drawing their electricity supply from the solar farm – energy which is clean and green.
We are keen that local residents share the benefits from our solar parks, through educational opportunities linked to the development, as well as directly through a community benefit fund. We work with Parish and Town councils and also welcome local input on how this could best be used to bring economic, social and environmental benefits to the area.
Are there any increased flood risks?
Flood risk does not increase with the installation of solar farms, as only a very small proportion of the solar farm is in direct contact with the ground, and the design of a solar farm will take account of any existing flood risks.
It could be argued that as solar farms reduce carbon emissions they are helping reduce the risk of future flooding due to climate change.
Is there a risk from glint and glare?
Solar panels are designed to absorb light rather than reflect it. They are considered safe to install close to airports, near major roads and even beside car race tracks such as the ‘Top Gear’ test track. The reflection from a solar farm is much less than from, say, commercial greenhouses or polytunnels – from a distance they appear in the landscape similar to a ploughed field.
What is the impact of a solar farm on property prices?
There is no evidence to suggest that solar farms affect property prices either positively or negatively. A well-screened site cannot be seen unless you are standing next to it; solar farms operate silently and safely and are widely accepted by the public.
How efficient are solar panels?
While solar panels obviously do not generate power at night when it is dark, they fit well into a portfolio of different generation technologies because their generation is easy to predict – we know precisely when sunrise and sunset occurs, as well as likely seasonal variability, so they can be easily integrated with the variability of the grid.
When energy is transmitted from large power plants across the country, around 10% is lost in the process. However, as solar electricity tends to be used close to where it is generated, losses from transmission are greatly reduced.
The “energy payback” time for solar PV has continued to fall as technology has improved and panels have become more efficient, and now ranges from just 0.55 to 1.3 years, taking into account the whole solar life cycle including manufacturing, operation and recycling. (Report from the Bavarian Institute of Applied Environmental Research and Technology.)
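Combined with the typical 25-year planning permission mentioned earlier, that payback range implies a large lifetime energy return. A rough sketch (the 25-year operating life is an assumption carried over from the earlier answer):

```python
# Lifetime energy return implied by the quoted payback range, assuming a
# 25-year operating life as mentioned earlier in this FAQ.
lifetime_years = 25
for payback_years in (0.55, 1.3):
    energy_return = lifetime_years / payback_years
    print(f"payback {payback_years} yr → ~{energy_return:.0f}x energy invested")
# → ~45x at the fast end of the range, ~19x at the slow end
```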
What makes a good location for a solar farm?
There are strict planning rules governing the location of a solar farm. It should be able to be easily screened, generally free from landscape designations (such as Areas of Outstanding Natural Beauty) unless there are exceptional circumstances, and ideally close to local areas of power demand such as towns.
It also needs to be generally unshaded with good levels of sunlight, and with easy access by road for construction.
However the biggest constraint on locating a solar farm is access to the local electricity grid, which is becoming increasingly difficult and expensive, therefore reducing the availability of suitable sites within the UK.
How popular are solar farms?
The Department of Energy and Climate Change conducts regular surveys on public attitudes to renewable energy. These consistently show that solar is the most popular renewable technology in the UK, with 85% of the public supporting it. | http://www.solsticerenewables.com/faqs/ |
Research: Pollinator habitats could be saved by solar power plants
Researchers at the U.S. Department of Energy’s (DOE) Argonne National Laboratory are studying solar energy facilities with pollinator habitats on site. Through this effort they hope to rehabilitate declining pollinator populations that play an important role in agricultural industries. The loss of such species could have devastating effects on crop production, costs, and nutrition on a global scale.
Currently, pollinators are responsible for pollinating nearly 75% of all crops used for food. However, because of the increase in man-made environmental stressors, their population continues to steeply decline.
The research team has been working on examining the potential benefits of establishing species’ habitat at utility-scale solar energy facilities to resolve the problem.
They have found that the area around solar panels could provide an ideal location for the plants that attract pollinators…
Spring is coming earlier to wildlife refuges, and bird migrations need to catch up
Climate change is bringing spring earlier to three-quarters of the United States’ federal wildlife refuges and nearly all North American flyways used by migratory birds. This is a shift that threatens to leave migrating birds hungry and in a weakened condition as they are preparing to breed, new research shows…
It’s an Ecological Trap: Global warming can turn Monarch Butterflies’ favorite food into poison
“LSU researchers have discovered a new relationship between climate change, monarch butterflies and milkweed plants. It turns out that warming temperatures don’t just affect the monarch, Danaus plexippus, directly, but also affect this butterfly by potentially turning its favorite plant food into a poison.
Bret Elderd, associate professor in the LSU Department of Biological Sciences, and Matthew Faldyn, a Ph.D. student in Elderd’s lab from Katy, Texas, published their findings today with coauthor Mark Hunter of the Department of Ecology and Evolutionary Biology and School of Natural Resources and Environment at the University of Michigan. This study is published in Ecology, a leading journal in this field…”
Making photography tell the stories: ‘If we lose the ice, we lose the entire ecosystem’
You, like Paul, a former marine biologist, can inspire change and help people connect the dots in compelling ways as we face 30 years to slow down climate change in a way that will save the species we love, and the communities as we know them. Why? Because, as Paul notes…
Can ground mounted solar farms be wildlife havens?
“Research suggests that the negative impacts of solar installation and operation relative to traditional power generation are extremely low. In fact, over 80% of the impacts were found to be positive or neutral. Yet, it is clear that if it involves the removal of woodland to make space for solar power this can cause a significant contribution to CO2 emissions, but still far lower than coal-based electricity.”
Solar farms can enhance wildlife habitat (and can be compatible with grazing)…
Beavers—once nearly extinct—could help fight climate change
Many conservationists have been trained to think that dams are bad news for wildlife. With climate change, droughts, and increasingly extreme weather, we are rethinking that.
For example, on the Puget Sound, beavers are being reintroduced to enhance salmon stocks. Small dams might be something we need to consider, and this article gives some ideas of where to start…
Contribute to Science
Every observation can contribute to biodiversity science, from the rarest butterfly to the most common backyard weed. We share your findings with scientific data repositories like the Global Biodiversity Information Facility to help scientists find and use your data. All you have to do is observe.
Wind energy takes a toll on birds, but now there’s help
Given that entire species will be wiped out by climate change, the negative impact that wind turbines have on birds is a relative figure, one we have to keep in perspective more than ever. Still, if there are ways to reduce the number of birds killed, that's a very positive thing.
“Researchers at Cornell University in Ithaca, New York, have hit upon what could prove to be a simple way to protect birds from wind turbines. They’ve used the “signatures” of birds that are visible in raw weather radar data to generate bird maps and live migration forecasts designed to alert wind farm operators to the presence of birds at peak times…”
Accelerating extinction risk from climate change
“Current predictions of extinction risks from climate change vary widely depending on the specific assumptions and geographic and taxonomic focus of each study. I synthesized published studies in order to estimate a global mean extinction rate and determine which factors contribute the greatest uncertainty to climate change–induced extinction risks. Results suggest that extinction risks will accelerate with future global temperatures, threatening up to one in six species under current policies…”
Death and Extinction of the Bees. The Role of Monsanto? | https://community-consultants.com/climate-news/wildlife/in-the-news/page/13/ |
To support TVA’s goal of promoting renewable energy resources, TVA will review Section 26a applications for (1) water‐use facilities that incorporate solar panels into the design and (2) stand alone, land‐based solar panels. This guidance contains the requirements for the structural standards of such water-use facilities and land-based structures and addresses the environmental and programmatic reviews required.
This guidance shall apply to all requests for water-use facilities and land‐based structures under the jurisdiction of Section 26a of the TVA Act.
When a Section 26a permit application is received by TVA requesting a land‐based solar panel or a water‐use facility containing a solar panel, staff will follow the standard process for reviewing an application, while incorporating the following additional items in the review.
A. Solar Panels on Water-use facilities
Water‐use facilities containing solar panels must conform to the standards of Section 1304.204. For example, solar panels cannot be installed in a manner to create a covered second story, increase the footprint of the dock above the allowable square footage, or be used as side enclosures on docks and piers. Additionally, solar panels will generally not be permitted to extend two feet above a sloped roof or four feet above a one-story flat roof (the typical height of a railing). Solar panels that extend the roof overhang to more than two feet will be incorporated into the facility footprint calculation. On fixed water-use facilities, the panels would need to be situated at least one foot above the 100-year flood elevation and could only be approved if they do not exceed the standards in Section 1304.204. Consistent with all Section 26a permits, applicants agree to the conditions of a Vegetation Management Plan (VMP). The VMP associated with permits including solar panels will be consistent with Sections 1304.203, 1304.302, and 1304.212.
B. Land-based solar panels
C. Electrical Standards
D. Environmental and Programmatic Reviews
TVA completes certain environmental and programmatic reviews on all Section 26a applications. In addition to other environmental and programmatic reviews deemed necessary by staff, some requests involving solar panels may be coordinated with Regional Relations, Commercial Energy Solutions, and/or the local power company. If a solar installation does not involve an interconnection with an applicant’s electrical system, then such reviews will not be needed.
E. Additional Information Required
In addition to standard application information required for a Section 26a permit request, the following additional information will be necessary for reviews of requests involving solar panels. Applicants are responsible for submitting design plans and specifications of the solar installation to the local power company and obtaining their acknowledgment/approval.
If an interconnect with the applicant’s electrical system is involved, the following must be included in the application:
F. Cost Recovery
Requests will be handled as Category II ($500 application fee). For requests that involve an elevated environmental and/or programmatic review, the project will be reviewed as a major action (Category III) with $1,000 application fee and full cost recovery. | https://www.tva.com/environment/shoreline-construction-permits/solar-panel-approvals |
UK farms produce about 10% of the country’s greenhouse gas emissions (due to cows, fertilisers, heavy machinery... and the fact that 71% of the UK is farmland) and pressure is mounting to address this, if we’re to meet our climate change mitigation ambitions.
The sector is soon to be hit by the upcoming Agriculture Bill and Environment Bill, the thrusts of which are said to ‘make the polluter pay’. Farming subsidies will shift to those who produce an environmental benefit from their land and work to minimise emissions.
Retailers and consumers are demanding more sustainable food sources, adding to the pressure on farmers.
The National Farmers Union has committed to net zero emissions by 2040. It estimates that savings of 3 Mt CO2e per year could be made by the use of land-based renewables - this is about 7% of the industry’s current total greenhouse gas emissions. So let's look at how solar panels for farm buildings can help make agriculture greener.
Energy usage on farms
On average, electricity accounts for 33% of energy used in agriculture. This breaks down into 15% for heating and 85% for ventilation, refrigeration, lighting and other appliances.
Electricity usage varies by what the farm produces, with some livestock much more energy intensive than others:
Data source: Defra.
It’s also interesting to see how this compares to the total energy used in the farming process (e.g. including vehicles):
Data source: Defra.
This implies the biggest opportunities for renewable generation are in poultry, pig and dairy farming. The high usage on these farms is due to densely housed livestock with high demands for heating and ventilation, or for processing in the case of dairy.
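Combining the Defra-derived percentages above (electricity as 33% of total farm energy, split 15%/85% between heating and other loads) gives a rough picture of where farm energy goes. A small sketch using only those quoted shares:

```python
# Rough split of total farm energy, using only the shares quoted above:
# electricity is 33% of farm energy; of that, 15% heating / 85% other loads.
electricity_share = 0.33
heating_within_electricity = 0.15
other_within_electricity = 0.85

heating = electricity_share * heating_within_electricity   # ≈ 0.05 of total
other = electricity_share * other_within_electricity       # ≈ 0.28 of total
print(f"electric heating ≈ {heating:.1%} of total farm energy")
print(f"other electric loads ≈ {other:.1%} of total farm energy")
```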
Of course, this is the case now - but what about in the future? As the world decarbonises, all industries will likely shift towards heating and vehicles powered by renewable electricity, rather than fossil fuels. Electric tractors and other farming machinery are already starting to break through. This has the potential to shift much, if not all, of a farm’s energy use to electricity.
How to incorporate solar
Farms often have two opportunities well suited to solar energy generation:
- large, uncomplicated, exposed roofs;
- tracts of non-arable land.
These can be taken advantage of for solar arrays to supply the farm’s electricity demand.
Rooftop solar can be installed on barns, warehouses, outbuildings or farmhouses. The best roofs are south-facing, and ideally have the main incoming electricity supply and distribution board in the same building. Asbestos roofs are rarely worth installing on.
Ground mounted panels can be installed on any unshaded land of a suitable size. It’s also worth noting that ground mounted systems can be raised to a height that allows smaller animals (such as sheep) to graze underneath, or to support wildflowers, pollinators and local biodiversity.
You could even increase generation through a solar tracker, which orients itself to capture maximum sunlight throughout the day. Back in 2011, we installed one on a farm in Northamptonshire that has seen a 45% higher yield than an equivalent fixed system.
Financing solar PV for farms
If you can, it’s worth investing your own money in solar PV so that you own the installation and all its accompanying benefits outright. But it is a sizeable upfront investment, so luckily there is another option:
- Solar PPAs. A Power Purchase Agreement (PPA) is when an investor funds the installation and maintenance of a solar system on your property. In return, you sign a contract agreeing to purchase the solar energy generated at a set cost, lower than grid rates.
- Commercial loans aren’t suitable for most PV installations. The repayment period is generally 5-7 years, which is less than the solar payback of 6-10 years. You would also have the loan on your balance sheet, which may negatively affect your credit rating.
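The loan-term mismatch described above is easiest to see with a simple payback calculation. The cost and savings figures below are hypothetical, chosen only to land inside the 6-10 year payback range quoted:

```python
# Simple payback: years for energy savings to repay the install cost.
# The £40,000 cost and £5,000/yr savings are hypothetical illustrations.
install_cost = 40_000
annual_savings = 5_000

payback_years = install_cost / annual_savings
print(f"payback ≈ {payback_years:.0f} years")  # 8 years: inside the quoted
# 6-10 year range, but longer than a typical 5-7 year commercial loan term
```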
Note that in 2014, the government cut subsidies for farmland used for solar panels in part due to aesthetic concerns… though since then, it has begun to prioritise the climate a little more.
Farming under solar panels
Obviously, having solar panels on the buildings where you’re using the electricity, like milking sheds and heated barns, is ideal and non-disruptive.
When it comes to ground mounted panels, you need to consider how or if you will use the land beneath.
- If you have enough space for a wild area, you can plant native wildflowers below and around the panels (so long as they do not overgrow them), which may encourage bees to pollinate your other crops.
- You can also lift the panels to a height so grazing livestock can pass underneath them. This is only really practical for smaller animals like sheep, rather than cattle (which can damage the structures).
- Agrophotovoltaics is an emerging industry. Researchers are experimenting with panels mounted so high that crops can still receive enough sunlight and machinery has space to operate in the fields under them.
Get involved
If you’re interested in learning more about installing solar panels on your farm, give us a call on 0118 951 4490 or download our free guide to commercial PV: | https://blog.spiritenergy.co.uk/commercial/solar-panels-for-farm-buildings |
A couple of weeks ago I was intrigued by a story that I heard on the news. It was about a Lane Cove Resident who had attempted to block a development application from one of her neighbours that will block her solar panels … petty council dispute or is there something more there … is this an issue that might become more prevalent??
The short of it is.
– Resident A spends $50,000 on renovations to make her home more sustainable specifically solar voltaic panels, solar tubes for hot water and extra north facing windows.
– Resident B (neighbour) submits a development application to build a second storey. The development will potentially block winter sun to her solar panels until 12 noon, as well as completely blocking two windows she recently installed and impacting an extension and courtyard – thereby reducing her income from solar and potentially increasing her costs (heating).
– The development application was approved
I think that this is really interesting. Does the resident that invests in renewable energy have a right to stop their neighbours from blocking sunshine on their solar investment? But what about the other neighbour? Why should they be blocked from making an improvement to their home (investment)? Should trees be viewed differently to buildings? What about a right to privacy? Shouldn’t someone be able to grow trees to give their home some privacy? Does it matter who gets there first? IMHO such clashes could become more common as Australian governments promote renewable energy and solar systems become more popular.
An interesting legal case study of what could happen comes from California. They have for some time had government promotion of renewable energy and solar, and they have a law: the Solar Shade Control Act (1978) means that homeowners can get into legal trouble if their trees grow big enough to block solar access to neighbours’ panels. The law requires homeowners to keep their trees or shrubs from shading more than 10 percent of a neighbour’s solar panels between 10 a.m. and 2 p.m., when the sun is strongest. Existing trees that cast shadows when the panels are installed are exempt, but new growth is subject to the law. Residents can be fined up to $1,000 a day for violations. This was tested in court 2 years ago when, after more than six years of legal wrangling, a judge ordered Richard Treanor and his wife, Carolyn Bissett, to cut down two of their eight redwood trees. (Another great article on this case… and an interesting review of the law)
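The Act's core test as described (no more than 10% of a neighbour's panels shaded between 10 a.m. and 2 p.m.) can be sketched as a toy compliance check. The function and measurements below are hypothetical illustrations, not legal advice, and they ignore the statute's exemptions such as pre-existing trees:

```python
def violates_shade_act(shaded_fraction_by_hour: dict) -> bool:
    """True if more than 10% of a neighbour's panels are shaded at any
    hour in the protected 10:00-14:00 window (per the rule quoted above)."""
    protected_hours = range(10, 14)
    return any(shaded_fraction_by_hour.get(h, 0.0) > 0.10 for h in protected_hours)

# Hypothetical hourly measurements (fraction of the array shaded):
print(violates_shade_act({9: 0.30, 10: 0.05, 12: 0.12}))  # True: 12% at noon
print(violates_shade_act({9: 0.30, 10: 0.05, 12: 0.09}))  # False: the 9 a.m.
                                                          # shading is outside
                                                          # the window
```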
I don’t have an answer to this one. Do I have solar panels? Yes… but it is only a 1.5kW system and it is installed on the top storey of our house. From the configuration of our house it is extremely unlikely that we would ever get blockage. BUT… we do have a 2-storey house (the upstairs extension was done 2-3 decades ago) and maybe a family like mine could be prevented from remodelling their house because it would block a neighbour’s solar.
IMHO you can spin the environmental science on this one around so many ways.
- Solar and energy efficient homes are good for the environment and reduce reliance on carbon based fuels
- Trees are good for the environment and capture carbon … plus they are just nice (most of the time)
- Increased urban concentration CAN lead to more sustainable cities (our house is near the train station and bus routes – and the light rail shortly), so more people living in the area could promote better transport usage (moving away from cars).
And what about everyman’s (person’s) house is their castle???
What do you think?? Is there a balance that can be reached? Who should decide? Local Councils, State or Federal Environmental Bodies, the Courts? | https://nmylife.com/tag/solar-versus-development/ |
Connie Black adjusts a production line for Series 6 solar cells during a tour of First Solar’s factory in Walbridge, Ohio, on Oct. 6, 2021.
First Solar announced Tuesday that it will build a new solar panel manufacturing facility in the U.S. under the Inflation Reduction Act, which encourages domestic manufacturing.
The company will invest up to $1 billion in a new factory it plans to build in the southeastern United States. The newly announced plant will be the panel manufacturer’s fourth fully integrated plant in the US.
First Solar also said Tuesday it will spend $185 million to upgrade and expand its existing facilities in Ohio.
CEO Mark Widmar singled out the IRA as a key catalyst in the company’s decision to build another plant in the U.S. rather than look elsewhere.
For the first time, the funding package creates “a long-term view and understanding of the industry and the policies that align with that industry,” he told CNBC.
“With that level of clarity, we stepped back and evaluated the alternatives or options of where we could go with our next factory, and when we looked at it comprehensively, the US was a very attractive option,” he said.
Widmar added that this is the first time that the entire supply chain has been incentivized, from the manufacturer to the means of production and finally to the end customer.
“Through coordination like this, you can create partnerships and opportunities for shared growth together and a more win-win structure than we may have had before the introduction of the IRA,” he said.
First Solar said the new plant will produce 3.5 gigawatts of solar modules annually by 2025, with the company’s Ohio facilities posting a total annual production capacity of more than 7 GW by 2025.
By comparison, the U.S. added 3.9 GW of solar capacity in the first quarter of 2022, according to the Solar Energy Industries Association. The country’s total solar industry now stands at 126.1 GW, which is enough to power 22 million homes, according to SEIA.
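For scale, SEIA's figures imply a homes-per-gigawatt ratio that can be applied loosely to the new plant's output. A sketch using only the numbers quoted above, noting that module production is not the same as installed capacity:

```python
# Homes-per-GW implied by SEIA's totals (126.1 GW powering 22 million homes).
homes_per_gw = 22_000_000 / 126.1
print(f"~{homes_per_gw:,.0f} homes per GW")  # ~174,465

# Loosely applied to the new plant's 3.5 GW of annual module production
# (module output is not the same as installed, grid-connected capacity):
print(f"~{3.5 * homes_per_gw:,.0f} homes' worth of modules per year")
```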
Shares of First Solar have jumped 65% since late July, when Senate Majority Leader Chuck Schumer, D-N.Y., and Sen. Joe Manchin, D-W.Va., announced their surprise deal on climate, health care and tax legislation.
The legislation, which was quickly passed by the House and Senate and signed by President Joe Biden, benefits First Solar in several ways, including a production tax credit for domestic manufacturers. First Solar is America’s largest manufacturer of solar panels, with a focus on utility-scale modules.
The plant announcement comes as First Solar struggles to keep up with booming demand. During its second-quarter earnings call, First Solar said it is sold out through 2025, with a backlog of 44 GW.
Widmar said First Solar wants to move quickly on construction of its new plant. One of the company’s concerns is getting the site as close to shovel ready as possible. Other factors include the type and availability of workers in the area.
First Solar plans to identify the site by the end of the current quarter.
“I think the industry is in the best position it’s ever been … for growth beyond any expectation anyone could have imagined,” Widmar said.
Will there be solar incentives in 2022?
Federal Solar Investment Tax Credit (ITC): Buy and install a new home solar system in California in 2022, with or without a home battery, and you may qualify for a 26% federal tax credit. The residential ITC drops to 22% in 2023 and ends in 2024.
What are the future prospects for solar energy? Compared to about 15 GW of solar capacity deployed in 2020, annual solar deployment averages 30 GW in the early 2020s and rises to an average of 60 GW from 2025 to 2030. Similarly large solar deployment rates continue into the 2030s and later.
How much solar are we expected to install in the U.S. in 2022?
Still, they remain optimistic in their five-year forecasts: the U.S. is expected to install 112 GW of utility-scale solar capacity between 2022 and 2027. The commercial and community solar markets are much smaller, with 317 MW and 197 MW installed in the first quarter of 2022, respectively.
How much solar energy is used in the US 2022?
In 2022, solar power will account for nearly half of all new electricity generation capacity in the U.S. We expect 46.1 gigawatts (GW) of new utility-scale generating capacity to be added to the U.S. power grid in 2022, according to our preliminary Monthly Electric Generator Inventory data.
Is demand for solar panels increasing?
Thanks to strong federal policies such as the solar investment tax credit, rapidly falling costs, and growing demand for clean electricity from the private and public sectors, more than 121 gigawatts (GW) of solar capacity are now installed nationwide, enough to power 23.3 million homes.
How much solar is installed in the US?
From just 0.34 GW in 2008, US solar capacity has grown to an estimated 97.2 gigawatts (GW) today. That’s enough to power 18 million average American homes.
Are there any grants for solar panels UK 2022?
Although there are no grants in the traditional sense, there are opportunities to finance solar panels in the UK. Currently, the only scheme open to new applications is the Smart Export Guarantee (SEG). The Renewable Heat Incentive (RHI) expires March 31, 2022, and only applies to solar water heating.
Can you still get government grants for solar panels?
You may not be able to install government-funded solar PV panels on your roof, but in a few years solar panels can pay for themselves through savings on energy bills and government incentive payments such as the Renewable Heat Incentive. You don’t have to be on benefits to qualify!
Will solar panels get cheaper in 2022?
According to a GTM Research study by solar analyst Ben Gallagher, solar power is becoming much cheaper around the world. It predicts that the cost of building a solar power plant will decrease by 4.4 percent each year, which means that by 2022, the cost of projects will drop by 27 percent.
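The relationship between the 4.4% annual decline and the 27% cumulative drop quoted above is compound arithmetic. A short sketch follows; the seven-year horizon is an inference from the numbers, not something stated in the article.

```python
# Compound cost-decline arithmetic behind the GTM projection quoted above.
# Assumption: the 27% figure comes from compounding a 4.4% annual decline,
# which matches a roughly seven-year horizon.

def cumulative_decline(annual_rate: float, years: int) -> float:
    """Fraction by which cost falls after compounding an annual decline."""
    return 1 - (1 - annual_rate) ** years

# Over seven years, 4.4% per year compounds to roughly a 27% total drop.
seven_year_drop = cumulative_decline(0.044, 7)
```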
Will there be new solar incentives in 2022?
Under the old law, the federal solar investment tax credit was set to drop from 26% in 2022 to 22% in 2023. Under the new law, homeowners will be able to claim 30% of the cost of a home solar installation as a tax. credit until 2032.
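The before-and-after schedules described above are simple percentage multiplications. Here is a minimal sketch using only the rates quoted in this answer; the function name and the rate tables are illustrative, and the real credit has eligibility rules beyond system cost that are not modeled.

```python
# Sketch of the federal solar investment tax credit (ITC) arithmetic described
# above. Rate schedules are taken from the article; everything else is a
# simplification (the real credit has eligibility rules beyond system cost).

OLD_LAW = {2022: 0.26, 2023: 0.22}          # credit then expired under old law
NEW_LAW_RATE, NEW_LAW_LAST_YEAR = 0.30, 2032

def itc_credit(system_cost: float, year: int, new_law: bool = True) -> float:
    """Tax credit for a home solar installation placed in service in `year`."""
    if new_law:
        rate = NEW_LAW_RATE if year <= NEW_LAW_LAST_YEAR else 0.0
    else:
        rate = OLD_LAW.get(year, 0.0)
    return system_cost * rate
```

For a hypothetical $20,000 system installed in 2023, the old schedule yields a $4,400 credit and the new one $6,000.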
What are the most efficient solar panels 2022?
Are 100% efficient solar panels possible?
Holy Grail of photovoltaics found with new compound semiconductor material called ‘liquid sun’. Researchers have discovered what appears to be the “holy grail” of photovoltaics – a new semiconductor material that can convert the entire solar spectrum into “green” electricity with 100 percent efficiency.
Which type of solar panel has the highest efficiency?
Panels built using advanced ‘interdigitated back contact’ (IBC) cells are the most efficient, followed by heterojunction (HJT) cells, monocrystalline PERC half-cut and multi-busbar cells, shingled cells, and finally 60-cell (4-5 busbar) mono cells.
Will solar panels ever reach 50% efficiency?
A new type of solar technology has set a world record for the most efficient energy production from a solar cell. By stacking six different photoactive layers, the record-breaking multi-junction cell achieved nearly 50 percent efficiency in the lab and nearly 40 percent under real-world single-sun conditions.
Did Reagan take down Carter’s solar panels?
The panels were originally installed in the late 1970s during President Jimmy Carter’s administration, but President Ronald Reagan removed them in 1986 due to a leaking roof and decided not to reinstall them.
Does the White House still have solar panels? The panels, inverters and components are made in America, and the installation is about the size of an average home solar system. These new panels are six times stronger than the original panels that Carter installed in 1979 and are still there today.
What solar company went out of business?
In August 2021, Empire Solar went out of business and filed for Chapter 7 bankruptcy protection, with former employees filing a class action lawsuit against the company’s founders.
What are the 2 main disadvantages to solar energy?
Disadvantages of solar energy
- Costs. The initial cost of purchasing a solar system is quite high. …
- Depends on the weather. Although solar energy can still be collected on cloudy and rainy days, the efficiency of the solar system decreases. …
- Storing solar energy is expensive. …
- Uses a lot of space. …
- Associated with pollution.
What is solar explain?
Solar technologies convert sunlight into electricity through photovoltaic (PV) panels or through mirrors that concentrate solar radiation. This energy can be used to generate electricity or stored in batteries or thermal storage tanks.
Why did Reagan remove solar panels?
1981: Reagan ordered solar panels removed. “Reagan’s political philosophy saw the free market as the best arbiter of what was good for the country. Corporate self-interest, he believed, would steer the country in the right direction.”
Who removed Carter’s solar panels?
As a symbol of his belief in the “power of the sun,” Carter had 32 solar panels installed on the roof of the West Wing of the White House in the summer of 1979. These collectors were used to heat domestic water for seven years until President Ronald Reagan removed them in 1986.
Did Reagan remove solar panels from White House?
Just a few years after Carter installed solar panels on the White House to great fanfare, Reagan quietly removed them during the reroofing of the White House and stored them.
Which president took down the solar panels on the White House?
President Ronald Reagan took office in 1981 and solar panels were removed during his administration.
When did climate change become an issue?
June 23, 1988 was the date climate change became a national issue. In groundbreaking testimony before the US Senate Committee on Energy and Natural Resources, Dr.
What are the historical causes of climate change? Since the Industrial Revolution, human activities have released large amounts of carbon dioxide and other greenhouse gases into the atmosphere, changing the Earth’s climate. Natural processes such as changes in solar energy and volcanic eruptions also affect Earth’s climate.
When did climate change begin and why?
The instrumental temperature record shows a signal of increasing temperatures that appeared in the tropical ocean around the 1950s. Today’s study uses additional information contained in the proxy record to trace the onset of warming back a full 120 years, to 1830.
When did the climate change problem start?
The history of scientific detection of climate change began in the early 19th century, when ice ages and other natural changes in paleoclimate were first suspected and the natural greenhouse effect was first recognized.
Why did the climate change movement start?
History of the Environmental Movement As concern about the environment grew among scientists in the mid-1950s, in 1958 the Mauna Loa Observatory in Hawaii began measuring the Earth’s carbon dioxide levels.
What is the history of climate change studies?
The history of scientific detection of climate change began in the early 19th century, when ice ages and other natural changes in paleoclimate were first suspected and the natural greenhouse effect was first recognized.
Who was the first to study climate change?
In 1938, Guy Stewart Callendar did just that, compiling temperature measurements from the late 19th century onwards to show that global land temperatures had increased over the past 50 years. He showed that the globe was warming.
What is climate history?
Historical climatology is the study of historical changes in climate and their impact on civilization from the origin of hominins to the present day. This is different from paleoclimatology, which covers climate change throughout Earth’s history.
When did humans start studying climate change?
In the 19th century, experiments suggesting that carbon dioxide (CO2) and other man-made gases could collect in the atmosphere and insulate the Earth were met with more curiosity than concern. By the late 1950s, CO2 readings offered some of the first data to support the theory of global warming.
What is the history of climate change on the Earth?
Earth’s climate has changed throughout history. In the past 800,000 years alone, there have been eight cycles of ice ages and warmer periods, with the end of the last ice age around 11,700 years ago marking the beginning of the modern climate era – and human civilization.
What is the background history of climate change?
In 1896, Swedish scientist Svante Arrhenius first predicted in a seminal paper that changes in carbon dioxide levels in the atmosphere could significantly alter surface temperatures due to the greenhouse effect. In 1938, Guy Callendar linked the increase in carbon dioxide in the Earth’s atmosphere to global warming.
What is an example of climate change in history?
The Younger Dryas (12,900 to 11,600 years ago) is the most intensively studied and best understood example of abrupt climate change. The event occurred during the last deglaciation, a period of global warming when the Earth system was transitioning from a glacial to an interglacial mode.
Why is climate change important history?
Although history does not offer complete facts and clear explanations, it can convey the human experience of a changing climate and extreme events. A good anecdote or narrative can be more illuminating and persuasive than any number of quantitative studies.
Why is my electric bill so high when I have solar panels?
If you’re not seeing the savings you’d expect, it could also be because of a change in the amount of electricity you’re using or the efficiency of your solar panel system. You may also have an expensive electricity tariff.
Do Solar Panels Help Lower Your Electric Bills? Solar power can greatly reduce your electric bill, but you will often still have a residual bill. The size of your utility bill depends on many factors, including local utility rates, the size of your system relative to your energy needs, and the time of day you use energy.
What happens if my panels produce more electricity than I use?
In the event that your system produces more power than needed, your local utility company can provide a credit for that excess power that you feed back into the grid.
What happens if you generate more power than you use?
If you have produced more than you have used, your electricity provider will usually pay you for the extra electricity at avoided costs. The real benefit of net metering is that your electricity provider essentially pays you retail price for the electricity you feed back into the grid.
What happens if your solar panels produce more electricity than you use?
If you produce more solar energy than you use (as will happen to many customers during the day, especially in the summer), your system will feed electricity into the grid.
Can solar panels produce too much energy?
If your system produces excess energy, your meter will actually work in reverse as the excess energy is fed back into the grid. When this happens, you will receive a credit on your next month’s electricity bill.
Do solar panels give a lot of electricity?
The typical size of a residential solar array is about 5 kW, taking up roughly 400 square feet of roof space. An array of this size can produce an average of 350-850 kWh of AC power per month. To put this into perspective, a typical household uses about 897 kWh per month.
How much electricity can solar panels generate?
How much energy do solar cells produce per hour? An average solar panel produces between 170 and 350 watts every hour, depending on the region and weather conditions. This means about 0.17 kWh to 0.35 kWh per solar panel.
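The watts-to-kilowatt-hours arithmetic above can be sketched as follows. The 5 peak-sun-hours-per-day figure is an illustrative assumption for the daily estimate, not a number from the article.

```python
# The watts-to-kWh conversion stated above, as a small sketch.
# Assumption: output is treated as steady over the stated hours.

def panel_energy_kwh(panel_watts: float, hours: float) -> float:
    """Energy produced by one panel at a steady output over `hours`."""
    return panel_watts * hours / 1000.0

low = panel_energy_kwh(170, 1)   # 0.17 kWh per hour at the low end
high = panel_energy_kwh(350, 1)  # 0.35 kWh per hour at the high end

# Over an illustrative 5 peak-sun-hours per day, a 350 W panel yields 1.75 kWh.
daily = panel_energy_kwh(350, 5)
```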
Do solar panels generate a lot of electricity?
Today, most residential solar panels produce between 250 and 400 watts of electricity. While solar panel systems start at 1 KW and produce between 750 and 850 kilowatt hours (KwH) annually, larger homes and larger households typically want to be on the higher end.
Why is my solar true up bill so high?
True-ups are annual bills that solar customers pay instead of the monthly bills that regular energy customers receive. The true-up includes credits for the energy the customer’s solar panels have fed back into the grid. Many people are paying double or more this year than in 2019.
How can I lower my true up bill?
Solution 1: Add more solar panels to your existing system If your bill is higher than expected, you’re not generating enough electricity to maximize your savings.
What does solar charge true up mean?
The True-Up statement reconciles all cumulative energy charges, credits and allowances for the entire 12-month billing period. If an amount is due after all debits and credits have been reconciled, that amount will appear on your last PG&E bill in your 12-month billing cycle.
How is a true up bill calculated?
Your true-up for each year is determined by the difference between the amount of electricity your system produces each month and the amount of electricity PG&E provides. The calculated difference is your net energy.
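The monthly netting described above can be sketched as follows. The monthly figures and the flat per-kWh rate are hypothetical, and a real utility bill (PG&E's included) applies time-of-use rates and other charges omitted here.

```python
# A minimal sketch of the annual true-up reconciliation described above.
# Assumptions: a single flat rate per kWh and hypothetical monthly data.

def annual_true_up(monthly_generation_kwh, monthly_usage_kwh, rate_per_kwh):
    """Sum each month's net energy (usage minus generation) and price it.

    A positive result is an amount owed at true-up; a negative result is a
    net credit for surplus generation fed back into the grid.
    """
    balance = 0.0
    for gen, used in zip(monthly_generation_kwh, monthly_usage_kwh):
        balance += (used - gen) * rate_per_kwh
    return balance

# Hypothetical year: surplus generation in summer, deficit in winter.
gen = [300, 350, 450, 550, 650, 700, 700, 650, 550, 450, 350, 300]
use = [500, 480, 460, 440, 430, 450, 470, 460, 440, 460, 480, 500]
owed = annual_true_up(gen, use, 0.25)  # negative here: a net annual credit
```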
Articles filed under Impact on Wildlife from USA
Across the U.S., more than 800 utility-scale solar projects are under contract to generate nearly 70,000 megawatts of new capacity. ... More than half of this capacity is being planned for the American Southwest, with its abundance of sunshine and open land. These large projects are increasingly drawing opposition from environmental activists and local residents who say they are ardent supporters of clean energy. Their objections range from a desire to keep the land unspoiled, to protection for endangered species, to concerns that their views would no longer be as beautiful.
Biologists clearing 3,000-acre desert area of tortoises
A fleet of consulting desert tortoise biologists have been sweeping the 3,000-acre Yellow Pine Solar Project site near Pahrump with shovels to move as many protected desert tortoises out of harm’s way as possible before the site is converted to millions of solar panels, according to the press release by Basin and Range Watch, a nonprofit working to conserve the deserts of Nevada and California.
N.J. fishing groups worry offshore wind will adversely affect their industry: ‘This is our farmland’
They worry that wind farms with their soaring turbines could disrupt fish habitat, reroute fishing lanes, and force sport anglers farther out to sea. Lackner, of Montauk, N.Y., believes that the farms will narrow the currently wide-open pathways to the vessel he docks at Cape May so often that he calls it his second home. “We’ll have to tow in between turbines while dragging a quarter mile of gear,” Lackner said. “We’ll be passing boats, as our gear drifts. ... It’s not good to jump right into wind in such a big way.”
Land battle brewing over 'world's largest Joshua tree forest' in Southern Nevada
A land battle is brewing at the site of what could be Nevada's newest national monument, Avi Kwa Ame.
Biden's dilemma: Land for renewables
Picture an area of land equal to the combined territories of Illinois, Indiana, Ohio, Kentucky, Tennessee, Massachusetts, Connecticut and Rhode Island — 228,000 square miles in all. That's the space that could be required to site most of the massive deployments of wind and solar generation required to fulfill President Biden's goal of a net-zero-carbon economy by midcentury, according to a recent first-ever project to attempt mapping that future.
Resident concerned about wind farm expansion
Wind farms are not environmentally friendly to land or to nature. For example, the excavation of leased land to install and support wind farms permanently alters that property’s landscapes, rock outcroppings and micro-environments – all of which are irreplaceable. ...The turbines are a blight for miles around, and they also interfere with endangered species. Current projects in Montague and Jack counties will negatively affect the migration paths and lay-over locations of Whooping Cranes. Current population numbers are estimated at about 500 Whooping Cranes left.
Offshore wind plans will drive up electricity prices and require ‘massive industrialization of the oceans’
“Environmentalists have not yet grasped the massive industrialization of the oceans now underway and proposed.” ...If the advisors on Biden’s climate team are serious about protecting the environment, now would be a good time for them to reconsider the massive industrialization of the oceans that is now underway. It might even make them think about preventing America’s existing fleet of nuclear reactors from being prematurely shuttered.
New studies show pronghorn avoid wind turbines
Wind turbines do displace pronghorn, which in return lose valuable food especially in winter months. ...“We know there is a negative effect, and we would fully expect that to translate that animals don’t eat as much, they don’t put on as much fat, they don’t survive the winter as well and have as many young, all of those are logical,” Kauffman said.
Meet the founder of a holdout conservation group opposing big solar
In the case of Yellow Pine Solar, Emmerich said the land where it is likely going to be built is home to Mojave yucca and desert tortoise, which is a threatened species. “The Yellow Pine area is on some really pristine public lands that contain a lot of the traditional, what I like to call, old-growth Mojave Desert areas,” he said. Emmerich described the area as an "unbroken desert landscape." In addition, it is along the Tecopa Road, which is a road that tourists use to travel between Death Valley and Las Vegas.
Researchers: Offshore wind farm would alter ecosystem
There’s still a lot of uncertainty surrounding how an offshore wind farm would impact specific species off the coast, but researchers say it will change the area’s ecosystem.
Solar power is booming. But it’s putting desert wilderness at risk.
How renewable energy projects in the Mojave Desert threaten local species — and how to fix that.
Outdoors: Conservationists oppose easing restrictions on wind turbine project
In a call-to-action to its membership, Black Swamp is sounding the alarm that removing the “feathering” clause from Icebreaker’s permit will essentially sign the death warrant for many thousands of birds. The grassroots group has urged its supporters to contact the OPSB and implore it to champion bird conservation and maintain the feathering requirement.
Democrats’ New Climate Plan will kill endangered species, environmentalists fear
It is notable that many of the conservationists defending wildlife from industrial wind turbines and transmission lines view the Democrats’ refurbished Green New Deal and its call for the “rapid deployment” of wind and transmission lines not as a climate dream but rather as an ecological nightmare. This isn’t the first time Democrats have shown a willingness to sacrifice wildlife for the wind industry.
Criticism of recent Lake Erie wind farm decision is misguided
Testimony before the OPSB revealed that LEEDCo had not identified this monitoring equipment technology. Testimony also revealed that in the 10 years the project was under development, LEEDCo never took actual radar data from the proposed site. In light of this, in July 2018 the OPSB staff initially proposed that the turbines not operate from dusk until dawn from March 1 through Jan. 1 until the monitoring technology was installed and working. In its final decision, the OPSB implemented its staff's original recommendation, although narrowed the restriction to eight months.
NTHA sends demand letters to energy companies regarding new wind farms
The North Texas Heritage Association has sent demand letters to two energy companies planning wind farms in Clay, Montague and Jack counties. Landowners are concerned the miles of large wind turbines will disrupt an endangered bird, the Whooping Crane, which migrates through these counties twice a year. NTHA had a study done on this, and principal biologist Jennifer Blair found that these wind turbines would kill some of these birds or disrupt their habitat.
Neighbors file legal action over Strauss Wind Energy Project near Lompoc
Neighbors of the proposed Strauss Wind Energy Project south of Lompoc have filed legal action challenging the adequacy of the environmental review, calling it "inadequate, insufficient and misleading." George and Cheryl Bedford, represented by Santa Maria attorney Richard Adam Jr., have strongly opposed the wind farm planned for 3,000 acres off San Miguelito Road.
Anglers oppose Lake Erie wind turbine project
“Lake Erie is simply too small to sustain any industrial offshore wind project,” said Rich Davenport of Tonawanda, who is active with several sportsmen’s groups, such as the Erie County Federation of Sportsmen’s Clubs and the Western New York Environmental Federation. “The towers will displace water currents for quite a radius around each turbine, impacting nearby spawning shoals (even if sited away from spawning areas, you cannot avoid the current change), coupled with the massive amounts of infrasound, or low frequency noise, each turbine will generate while operating.”
Years later, Deerfield Wind impact on bear habitat in question
“We opposed the project on the basis that it would significantly imperil or destroy wildlife habitat and bear habitat,” she said. “The Public Utility Commission did not, frankly, rule in the way that the department would have preferred. They issued a decision in which they approved the certificate of public good for the project. They found, based on our testimony, that there were 36 acres of bear scarred beech [trees] that would be removed as part of the project.”
Migration routes are an energy, environmental balancing act
Gov. Mark Gordon released a draft executive order to bolster migration corridor protections. The draft, published on Dec. 23, attempted to thread the needle between the need to preserve precious wildlife and the need to support Wyoming’s lucrative energy sector. Of the eight ungulate species, or hoofed mammals, making up the one million or so migrating mammals across Wyoming, the executive order places special emphasis on two: mule deer and pronghorn. Since its release, the draft has been lauded by several groups as a winning example of science-based wildlife management policy. Still, others fear it could add one more set of hurdles for energy developers to leap through.
Groups are concerned that a proposed wind project could damage birds, plants
Bedford and his wife, Cheryl, purchased more than 400 acres of land on top of one of the hills surrounding San Miguelito Canyon in the early 1990s. With the land previously untouched, they had to build a private road leading up to their home, which sits at roughly 1,700 feet in elevation, before beginning construction.
A consultation which ends almost two weeks after the changes come into effect creates massive uncertainty for a fledgling industry. It doesn’t just move the goalposts; it takes the ball away from community environmental projects for the sake of the profits of the big six energy companies.
Insert your own incredulous reference to “the big society”, “the greenest government ever” and “we are all in this together” here.
So it’s bad PR, but what does this actually mean on the ground for real people and real projects? Let’s look at what it means for Reading.
In Reading we are rightly proud of the way we are leading on micro-generation. We were named by GoodEnergy as having the most feed-in tariff sign-ups in the country; we are where 10:10 has launched its solar schools pilot scheme; we are home to enthusiasts like the Reading Energy Pioneers, who have shared peer knowledge; and Reading’s Labour council was committed to a £5 million community scheme to put solar panels in every neighbourhood.
Our Community scheme
For a step change we knew that we needed something that was cross-community and that the council needed to be involved in. As a new councillor I proposed a motion last year, while we were in opposition to a Tory-Liberal coalition, that we should take advantage of the financial and environmental benefits of the feed-in tariff, with a particular focus on local schools. After we took back the council’s cabinet in May we prioritised work to do exactly that. This got cross-party support.
What we proposed was a business case based on a very tight time frame that would be both financially responsible and generate huge benefits for the local community.
In summary we were proposing a £5 million investment in solar panels across Reading’s schools, community buildings and some council owned buildings. The school or community group would benefit from free electricity, the council would take the feed in tariff to cover our investment and the community would benefit from improved understanding of renewable technology. It was a win-win-win. I was particularly proud of how enthusiastic schools and young people were to embrace the technology. In fact what alerted me to these proposals was a call from a worried school governor on Friday.
We always knew that the feed-in tariff would be cut at the end of March, and as such we planned accordingly. The long-term impact of such a dramatic cut we can’t know. But what we do know is that cutting the feed-in tariff with so little notice means we will have to call a halt to the project. It’s just not responsible to continue. Of course we will look for ways to get around it and see how we can make it viable, but currently I can’t see how we can spend £5 million.
The sting in the tail is that the tariff is further reduced for those who own more than one system which effectively kills off the idea of community based installations like ours. It also means that the next stage we were hoping for which was to install solar panels on council housing is also very unlikely to be viable. Which means our hope to reduce fuel poverty through installing solar panels is also dashed – we already have a very good insulation program.
Individuals
We installed solar panels with great excitement in April this year. We’ll be OK: we’re locked into a contract where our energy supplier has to pay us the current rate plus inflation for the next 25 years. Someone else on our street has signed up to the next round of the Reading Energy Pioneers scheme. They were working on the basis that they had until March for the installation. If they have signed a contract with the supplier and paid a deposit but can’t get the installation live until after the cut-off date in six weeks, they will be severely out of pocket. That’s not a hypothetical example; I just don’t know whether or not they’ve signed a contract yet, because I haven’t seen them in the last couple of days.
Firms and employment
There is one group of firms this is good for: the big energy suppliers, who I would have thought hardly need much help, given they are operating in what I would have to describe as a cartel-like market, with high prices and a lack of genuine competition. In contrast, the small suppliers, like the firm I bought my solar panels from, who are just getting established, will be the ones that suffer. The work involved in installing solar panels is skilled and vocational, and it is a contribution to a sustainable future. It’s a real shame that just as the government belatedly realises it needs to act to support British industry, it pulls the rug from under one of the few industries that is growing fast.
What now
This is an announcement that has caused dismay in Reading across the political spectrum. At last night’s cabinet meeting my colleague Paul Gittings expressed his anger on behalf of residents and schools, who are very disappointed. However, the Lib Dem leader and Conservative deputy leader, who were both present, also offered to sign a cross-party letter objecting to the pace and scale of this cut.
The government is clearly not thinking things through. They have proposed an end to the consultation on 23rd December, yet the impact of their changes will start from 12th December. This doesn’t fill me with optimism, and is perhaps what we should have come to expect.
The national Tory and Liberal Democrat government is completely out of touch and, in a rush to appease the big energy firms, has decided to slash and burn and change the rules of the game, damaging communities like Reading. If your local school in Reading doesn’t get solar panels in the next few months as a result, I am very sorry, and I promise we are doing all we can to change that.
In the coming days we will let you know what you can do to support efforts to lobby the government to change its mind.
For now I am still angry and disappointed but I won’t be giving up.
If you have an interest in the biological, ecological, environmental, cultural, social, political and economic aspects of marine conservation, and aim to make your career in the field of marine conservation science, then this class is for you! Explore this website to learn more about isMCS, and if you have any questions, please do not hesitate to contact us for more information!
Humans have relied upon the oceans for eons to provide food, transportation, and numerous ecosystem services that sustain our life on Earth. Modern industrialization and a growing human population have created a changing environment, with subsequent losses of biological diversity and ecosystem resources [1]. Dispersed and distributed impact sources, lack of strong governance, poor community buy-in for the implementation of new rules, and resource conflicts often compound these problems.
In the last decade, marine scientists, conservationists, policymakers, and stakeholders have come together, creating visions for a way forward [14,15]. In some places, management action has led to the recovery of once-decimated stocks. There are new bans on plastic inputs to our oceans [20], and an increasing focus on how managing for ocean resilience can combat stress and change [21]. Marine reserve processes are becoming better understood, and reserves are being established at an increasing rate [23], including the three largest reserves on the planet [24,25]. Working within and across political boundaries provides a means for oceans to continue to provide for the marine organisms, as well as the humans, that rely on them [27]. Finally, real progress is being made to change the course of climate change. In order to continue this progress and make marine conservation a success story, we need a new generation of conservation scientists, resource managers, and policy makers.
Over an intensive 10 day period, the isMCS will cover concepts of evidence-based conservation science from population to ecosystem levels, including human dimensions, as well as topical issues in marine conservation.
From this, students will develop some of the knowledge, tools, and skills necessary to become effective players in the field of Marine Conservation Science.

References:
- Evaluating and ranking the vulnerability of global marine ecosystems to anthropogenic threats. Conservation Biology.
- Proelss A, Houghton K. A fisheries management system in crisis: The EU common fisheries policy. Current Biology.
- Marine defaunation: Animal loss in the global ocean.
- Plastic litter in the sea. Marine Environmental Research.
- The economics of dead zones: Causes, impacts, policy challenges, and a model of the Gulf of Mexico hypoxic zone. Review of Environmental Economics and Policy.
- Impacts of anthropogenic noise on marine life: Publication patterns, new discoveries, and future directions in research and management.
- Climate change impacts on marine ecosystems.

This course serves as a general introduction to the field of Marine Science, and the scope of materials to be covered by the Bachelor of Marine Science degree and the Marine Biology major of the BSc.
Elements of biological, physical and chemical oceanography and their applications to coastal management will be discussed, with the help of local as well as global examples.
Center for Marine Science / Introduction to Marine Science
Please view the full class and additional timetable information for School of Environment and Science. View historical course profile.
- Credit points awarded
- Study level
- Student contribution band
- Usually available: Gold Coast, Semester 2
Convenor Professor Chris Frid. View course profile.
Last day to drop a course without financial penalty: Census date. | https://wydotazyvafa.tk/an-introduction-to-marine-science.php |
California has been a pioneer in pushing for rooftop solar power, building up the largest solar market in the U.S. More than 20 years and 1.3 million rooftops later, the bill is coming due.
Beginning in 2006, the state, focused on how to incentivize people to take up solar power, showered subsidies on homeowners who installed photovoltaic panels, but it had no comprehensive plan to dispose of them. Now, panels purchased under those programs are nearing the end of their typical 25-to-30-year life cycle. Many are already winding up in landfills, where in some cases, they could potentially contaminate groundwater with toxic heavy metals such as lead, selenium and cadmium.
Sam Vanderhoof, a solar industry expert and chief executive of Recycle PV Solar, says that only 1 in 10 panels are actually recycled, according to estimates drawn from International Renewable Energy Agency data on decommissioned panels and from industry leaders.
The looming challenge over how to handle truckloads of waste, some of it contaminated, illustrates how cutting-edge environmental policy can create unforeseen problems down the road. | https://republicofmining.com/2022/07/19/california-went-big-on-rooftop-solar-now-thats-a-problem-for-landfills-by-rachel-kisela-los-angeles-times-july-15-2022/ |
Positive Input Ventilation (PIV) is a system that draws fresh air from outside a building and distributes it into all rooms from a centralised unit that is usually mounted in the loft; the resulting gentle positive pressure then pushes stale air out of the building.
Many traditional, ‘passive’ ventilation systems rely on fixtures, such as vents and air bricks with gaps to allow air to pass into and out of the home. There are some disadvantages to this passive approach, though. These areas can be forgotten about, become neglected, blocked, or even be papered or bricked over. The airflow through a modern house is not always optimal, even if all the vents are clear, and these gaps can also introduce drafts or allow heat to escape.
Modern homes are increasingly energy efficient, meaning that they are better at retaining heat, but it is also important that they are well-ventilated. Without good airflow, moisture can build up, leading to condensation, damp, and mould. Research has shown that the average four-person family creates 112 pints of moisture each week, from breathing, cooking, washing, and boiling the kettle.
A conventional extractor fan that you might find in a bathroom or kitchen is an effective way of removing humid air from a building to reduce the condensation that causes damp; however, an extractor fan simply removes air. A PIV closes the loop of airflow by controlling and filtering the air that is drawn into the building to replace the humid air that is removed by the extractor fan.
A PIV is a whole-house ventilation system that improves air quality in all rooms. It is highly effective at minimising condensation and, thanks to the filtration of fresh air as it is drawn into the building, PIV will also reduce the concentration of allergens such as pollen that are drawn into the house.
When weighing up the correct ventilation systems for your home, you might wish to consider the advantage of a PIV. This can help to circulate air around your home, helping to prevent problems such as damp, mould, and condensation, while also providing fresh air and scattering the build-up of pollutants.
A PIV system is a highly effective way of improving indoor air quality. The constant movement of air through the building prevents the build-up of condensation and also reduces the concentration of volatile organic compounds, allergens such as pollen, and even radon gas in your home.
The reduced condensation will remove damp areas while the airflow will also remove cold spots around the house and make your central heating more effective.
PIV systems consume very little energy when running and include sensors that allow them to adjust the amount of airflow depending on the level of humidity in the air.
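As an illustration of that humidity-led control, here is a minimal sketch of how a controller might map relative humidity to a fan speed fraction. The thresholds and speed values are hypothetical for illustration, not the behaviour of any particular PIV unit:

```python
def fan_speed(relative_humidity, low=45.0, high=70.0):
    """Map relative humidity (%) to a fan speed fraction.

    Below `low` the unit idles at a trickle rate; above `high` it runs
    at full speed; in between it ramps linearly. Thresholds are
    illustrative, not manufacturer specifications.
    """
    if relative_humidity <= low:
        return 0.2  # trickle ventilation
    if relative_humidity >= high:
        return 1.0  # full boost
    return 0.2 + 0.8 * (relative_humidity - low) / (high - low)

print(fan_speed(40))              # dry air: trickle rate, 0.2
print(round(fan_speed(57.5), 2))  # midpoint of the ramp: 0.6
```

The linear ramp is just one plausible control curve; real units may use step thresholds or hysteresis to avoid the fan hunting around a single set point.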
If your home suffers from high levels of condensation, or you live in an area that is prone to radon gas, then Positive Input Ventilation systems can help you enjoy your home while also reducing the risk to your health of low-quality air. Speak to one of our specialists to arrange a survey and find out more about whether Positive Input Ventilation is suitable for your home.
A PIV system is professionally installed in a home and does not require building work to be carried out. As such, it is suitable for fitting in an existing building where improved air quality is required.
The process of installing a PIV system requires the fan system to be placed in your loft and for ducting to be connected to it. The central unit needs to have an external vent to draw in air – this can be under the eaves or through the roof depending on the system used and the design of your home.
Holes are cut into the plasterboard of ceilings in the rooms where vents are to be installed and the pipework is mounted on the ceiling.
To meet building regulations and to ensure safety, the PIV system needs to be connected to the mains by a qualified electrician.
At Dorset Electrical Solutions we are always happy to advise which system would suit your home best. PIV systems can be installed in houses or apartments if they meet the necessary installation requirements. Simply fill out the contact us form or give us a call on 01202 985027 to speak to a member of our team.
When it comes down to your Solar Panel installation, of course you want to make sure you are installing the best Solar Panels there are. In general, the two most important factors to consider when choosing a Solar Panel are its efficiency and maximum power output.
Why is Solar Panel efficiency important?
Faster payback time
The more efficient your Solar Panels are the more solar energy your PV system is going to produce. The more energy your Solar Panel system is generating, the less energy from the Grid you’ll have to buy. It’s that simple.
For properties with smaller roof or land availability, more efficient solar panels still make solar energy a viable option. A larger panel wattage combined with an efficient solar cell should be enough to still power your home from the sun.
With fewer panels needed to be installed to get the same size system as others, having efficient Solar Panels means that you can save more roof/land space and money on labour. This way you will still be gaining the same amount of solar energy required to power your home.
Who makes the most efficient Solar Panels?
These are the most efficient solar panels for home solar installations as of 2022.
When exploring solar panels, the panel efficiency rating is always listed on the datasheet. Any solar panel with a maximum efficiency over 20% is a premium solar panel.
Calculating Solar Panel efficiency
To calculate a Solar Panel’s efficiency, scientists test the panel in controlled lab conditions. The Standard Test Conditions (STC) for calculating Solar Panel efficiency aims to see how much solar energy the cells can convert to electricity in a simulated clear 25°C summer’s day, with an irradiance of 1000 W/m2. However, the given maximum efficiency level is not necessarily what the panels will achieve in real-life conditions. Especially as we know the weather isn’t always consistent and certainly not always sunny in the UK.
Fortunately, some Solar Panels are starting to come with a Performance Test Conditions (PTC) rating. A PTC is a more accurate measure of the efficiency of a Solar Panel, as it tells you how the Solar Panel will perform in a variety of climates and conditions. Some manufacturers even go a step further and look into the "system PTC rating". This takes into account the efficiency of the inverter as well as the Solar Panel, as your Solar Panel system's efficiency is the sum of all its components. So even if you have the most efficient Solar Panel, an inefficient inverter will drag your overall efficiency down.
Solar Panel efficiency calculator
Calculating the efficiency of a Solar Panel is actually rather simple. The formula is as follows:
Efficiency (%) = [Power output of Panel / (Area of panel x Solar Irradiance)] x 100
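As a minimal sketch, the formula above can be wrapped in a short function. The 400 W, 1.9 m² panel in the example is hypothetical; 1000 W/m² is the STC irradiance mentioned earlier:

```python
def panel_efficiency(power_output_w, area_m2, irradiance_w_m2=1000.0):
    """Efficiency (%) = power output / (area x irradiance) x 100."""
    return power_output_w / (area_m2 * irradiance_w_m2) * 100.0

# A hypothetical 400 W panel measuring 1.9 m^2 under STC irradiance:
print(round(panel_efficiency(400, 1.9), 1))  # -> 21.1
```

So a panel of that size and wattage would sit just over the 20% "premium" threshold described above.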
What factors impact Solar Panel efficiency?
There are three main types of solar cells used in the manufacturing of Solar Panels: Monocrystalline, Polycrystalline and Thin-Film. Each type of solar cell has a different level of efficiency.
So if you are wondering which type of Solar Panel is the most efficient, the answer is easy – monocrystalline. As you will see from our list of the most efficient Solar Panels above, every panel on the list is of monocrystalline cell type.
If you look closely at the surface of a solar panel, you will see a series of thin lines. These are copper or aluminium wires, called busbars, that conduct electricity. Solar Panels with thinner busbars will be more efficient, as there will be a reduced amount of shading on the cell, allowing the panel to absorb more light. Solar Panel manufacturers which use multi-busbar technology, which utilises multiple ultra-thin busbars, increase the overall Solar Panel's efficiency. Some solar brands, for example SunPower, now use Interdigitated Back Contact (IBC) cells, which move all the busbars and wiring from the face of the solar panel to the back, so that more sunlight can be absorbed.
Whilst you may have never considered this a factor, the type of backsheet a Solar Panel has can also affect your panel's efficiency. For example, a traditional white backsheet may not look the part compared to an all-black panel, but it will operate slightly more efficiently. This is because an all-black panel will absorb more heat from the sun, overheating the panel and affecting its ability to produce electricity. Whilst conventional Solar Panels only have PV cells on one side, you can now install bifacial Solar Panels, which have a reflective transparent backsheet to capture more energy. Some manufacturers claim that bifacial Solar Panels can produce up to 30% more energy.
Do Solar Panels lose efficiency over time?
Solar Panel efficiency does decrease over time due to the natural degradation of the solar cells. On average, the output of a Solar Panel falls by 0.5% per year. However, most manufacturers provide a 25-year performance or "linear output" warranty. This guarantees that the Solar Panel's output won't drop below 85% of its original rating within the first 25 years.
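Compounding the quoted 0.5% per year shows why the 85% warranty floor is comfortable on paper. This is a rough sketch assuming simple geometric decay, not any manufacturer's actual degradation curve:

```python
def remaining_output(years, annual_degradation=0.005):
    """Fraction of original output left after compounding degradation."""
    return (1 - annual_degradation) ** years

# After 25 years at 0.5% per year:
print(round(remaining_output(25) * 100, 1))  # -> 88.2 (still above the 85% floor)
```

In practice, degradation rates vary by cell technology and climate, so the warranty figure builds in a margin below this idealised estimate.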
What else affects Solar Panel efficiency?
There are a few other external factors that can affect the efficiency of your Solar Panel installation. Most of them can be considered and overcome before your Solar PV system is even installed as long as you choose a trusted Solar Panel installer.
Nearby obstructions, such as chimneys or trees, can provide enough shade on a Solar Panel to reduce its efficiency by up to 50%. Before installing a Solar PV system, it is important for your chosen installer to carry out a shading analysis to evaluate any potential issues.
In the UK, to receive the most amount of sunlight, it is always recommended that your Solar Panels are installed onto a south facing roof, at a pitch of 30-45 degrees. However, as we know, these ideal conditions can’t always be met. It is important that your energy advisor designs you the best possible layout that matches your energy needs.
A dirty or damaged Solar Panel, whether caused by pesky birds or nearby trees, can reduce your Solar Panel system's overall efficiency. It is important to keep your PV system well maintained: check for any irregularities in your energy production, keep your Solar Panels clean, and protect your system from any birds that may nest underneath them.
Although many may think Solar Panels are less efficient in the winter, it is in fact extreme heat that can do more damage to your Solar Panels. A well-ventilated Solar Panel will always operate more efficiently than one that isn't, simply because when the surface of the Solar Panel gets hotter, its efficiency drops. This is why on-roof Solar Panels are more efficient than solar tiles and integrated Solar Panels. Solar Panels actually operate best in cold yet sunny conditions.
Are premium efficiency Solar Panels right for you?
At Dorset Electrical Solutions we offer remote assessments and designs to find you the perfect Solar Panel system that is right for both your home and your pocket. We install a variety of Solar Panels, offering a selection of the most reliable, efficient and cost-effective panels on the UK market. Simply click here and fill out our short form, or contact us on 01202 985027 to speak to an energy advisor.
Having your home completely rewired is a costly and time-consuming process; however, if your home has not been rewired in the last 25 years, it is absolutely essential!
Well, to start with, if your consumer unit (which is the main control centre for your home’s electrical supply, called a “fuse-box” in the good old days), has a wooden backing, this would be considered unsafe and a fire hazard by today’s safety standards, and would need to be replaced by a metal-clad one. This law came into effect in July 2015, and ever since 1st January 2016 all consumer units have been metal clad. Your current consumer unit may also have cast iron switches, a black electricity cable, or no labeling.
You may have outdated plug sockets which are broken or cracked, or which have rounded entries instead of the 3-pin ones. Your lights may also be temperamental, flickering on and off or going dim occasionally, with light bulbs needing to be constantly replaced, or the breakers might trip regularly. You might have old cable colours or aluminium wiring; check to see if your home contains an earth cable/path.
Old wiring, and also faulty wiring, is very dangerous and is responsible for around 750 accidents in the home, 12,500 house fires and more than 30 deaths per year.
The rewiring of a house is a major undertaking and must be done by a qualified electrician with proper insurance. Under no circumstances should you ever attempt it yourself, unless you have had training. The job can take 2 to 4 weeks, depending on the size of your house and how many rooms it has. The electricians carrying out the rewiring will need access to all your switches and sockets, and new cabling will be hidden under the floors and inside walls. You will have to carry on your day to day life with all this going on. All old fittings and wiring will be removed and replaced, with a new consumer unit also being installed.
The cost of rewiring depends on a number of different factors, including where in the country you live, the size of your property and number of rooms, and the individual electrician doing the job.
For experienced and highly trained electricians in Dorset, CONTACT us at Dorset Electrical Solutions for a competitive quote.
Whether it's your home, family, personal property and memories you want to keep safe, or business premises you wish to keep secure, DES offers a broad variety of alarm systems to choose from, and a team of expert electrical fitters to ensure that you and everything you value, both personally and materially, is secure and safe 24/7. From an initial discussion with you about the type of alarm which best suits your purposes, to professional installation, you can rest assured that you'll be getting our renowned premium service with maximum security.
Wired alarms are sturdy and reliable, and with their durability, once installed they’re generally in situ for life. Connected to the mains, it’s rare for the sensors or strength of light to be impeded. But because they require cables and wiring, they would need to be installed by our qualified electricians for your safety and peace of mind. We have an abundance of wired alarm systems for you to choose from, with keypads identifying different zones, to pet friendly detectors so you don’t get any false alarms, plus many more variable fixtures for you to choose from. We’ll discuss all your options, so you get the one that’s ideal for you, and with recommended annual maintenance and service, you’ll be guaranteed a secure system that remains in optimum working order.
Technology has moved on a long way, and the chance of false alarms with wireless has been reduced dramatically, and is now on a par with fixed alarm systems. Anti-masking technology ensures motion detectors are not obstructed, and the systems are adaptable to enable more sensors to be added on at a later date. Wireless alarm system can also be installed with pet friendly motion sensors, and a choice of other fixtures to suit your requirements, and as always our team of experts will be there to guide you through your options.
Wherever you are, home or away, and whatever you’re doing, a Smart Alarm is one of the most popular and easiest ways to secure your home or business. Installation by our experts is fast and simple, and in no time at all you’ll be connected to your smart phone or tablet, and can immediately take charge of your own security system.
Once installed, using an app on your Smart phone or tablet, all you have to do is have your notifications turned on to be alerted when motion is detected. Some systems require a membership, and there are many different adds-ons, and up-grades which can make costs vary, but don’t worry, because we’ll talk you through all your personal or commercial requirements to find out what you want or need, prior to purchase and installation, so you won’t be hit by any hidden costs.
We can supply and install the latest in smart video doorbells. These durable and weather resistant systems allow you to see who is on your property, wherever you are. Triggered by motion you’ll be able to monitor and record all movement day or night and you can also select the voice button for a two-way audio function.
With so much choice on offer, it can be quite daunting when it comes to getting the right security alarm for you. But with professional help and advice from the experts at DES, you can be sure that you’ll get the best systems and first class service that beats the competition hands down.
Photovoltaic (PV) technology is a system of nonmetal semi-conducting cells made of silicon which are used in making electrical circuits. By themselves, each cell creates only a small amount of electricity, but when the cells are linked together to form solar panels, the amount of electricity produced, by converting the sun’s radiation into electricity, can be quite substantial. This greener energy can then be used for running applications both in the home and in industrial buildings and can reduce your annual energy bills by as much as seventy percent.
Each system is exclusively designed according to your location, the complexity of the site, and the number of panels you require to gain maximum benefit. Our qualified and experienced team of electricians, who are passionate about solar technology, will discuss your requirements with you and customise your specific system, providing an installation service that delivers the best in renewable energy.
Solar panel systems are extremely durable, but depending on various factors such as local environment, productivity, and weather, the life span of PV panels can vary from 5 to 25 years or more. For this reason, it's recommended to have your solar system serviced to retain maximum output.
No matter the size or height of the building, our team of professionals will safely soft-scrub and thoroughly clean any build-up of grime, dust, and rubble fragments from each individual panel to restore them to their original high-performance state.
Our qualified electricians will locate any damaged wiring, and replace them, so you can be assured that your system is back to running at peak performance.
We’ll locate and replace any damaged or broken panels to ensure that your system gets back to giving you your full capacity electrical output with immediate effect.
Our electrical experts will always ensure that any solar PV system they install minimises the risk of a fire hazard in the first instance. If your system has been improperly installed by another provider, our professionals can replace your old system, so you can be confident that your new solar PV installation is safe; by installing everything to code, you won't have anything to worry about.
To ensure continuous maximum operation and performance from your solar panels, the installation, routine checking, and preservation of your PV systems are paramount in maintaining maximum capacity, and with our extensive knowledge in this area our electricians will undertake any maintenance and repairs that are needed to guarantee you’re always getting the best from your system year on year.
CONTACT US
Home is where the heart is, and probably everything else you value too, but most importantly yourself and your loved ones. Yet one of the most common accidents that can happen in the home, could take everything you cherish away in an instant, with devastating consequences. And the cause? Fire!!! Fires can happen anywhere in the home. The list is endless, and the outcomes can be shocking and life-changing. But help is at hand, from the professionals at DES, to ensure that you, your loved ones and your assets don’t become a casualty.
The above list emphasises the importance of fire prevention in the home. A mains-powered smoke alarm is more reliable than a battery-only alarm because it is connected to your main electricity supply and isn't solely dependent on batteries, which can suddenly fail. However, our mains-powered smoke alarms still come fitted with a back-up battery in case your electricity supply is cut off, giving you the best of both.
At DES, our experts will go through the entire procedure with you. We’ll discuss which rooms would necessitate a smoke alarm, and where they need to be situated for maximum effect. Then our professional electricians will install them efficiently and safely, so you can relax, knowing that your home, your loved ones and all your contents will be protected and safeguarded for years to come.
You only need to step inside a room that has Redheat underfloor heating to feel an instant and overall welcoming ambient temperature of the space you’re in. With a heating system that disperses infrared waves evenly within the four walls you’ll be enveloped in a convivial warmth.
Have you ever come in from the cold, turned on the boiler, and waited for your traditional convector radiators on the wall to heat up, then stood over them to try and get yourself warm? You can feel the hot air, contained in that specific area rise and disperse, but as soon as you move away, you realise you haven’t quite warmed up enough. With infrared heating, you don’t have to worry about boilers or radiators, and having to position yourself next to one to feel the heat, because with electric underfloor heating, the entire room will emanate a radiant warmth wherever you are.
It's a series of electrical wires or heating mats that are positioned beneath your flooring. It doesn't matter whether your floor covering is vinyl, tiled or carpeted; with an infrared underfloor heating system installed discreetly beneath it, you'll feel the warming effect immediately. Redheat is an ultra-thin film containing a mixture of carbon crystal, graphite and graphene with copper strips down each side. It is through these copper strips that the electricity is conducted and then emitted as infrared waves, rising through the floor to give an even spread of heat throughout the room.
In collaboration with Redheat, and taking ownership of the best installation experts on hand, at DES, we guarantee to offer you the most efficient and cost-effective electrical underfloor heating, ensuring you get years of unadulterated warmth throughout your home.
Traditionally, the kitchen was a defined room in the home, solely for preparing and cooking the family's meals. But nowadays, for many households it has evolved into the central hub for family get-togethers and social gatherings, and with that in mind, at DES we're here to help you create an atmosphere of welcoming warmth, no matter what the occasion. Our team of professionals will happily discuss your individual requirements and guide you through the many options available to you, and our electricians will safely and expertly provide a no-fuss installation.
It’s important to have the perfect lighting for all your work tops when preparing and cooking food. With the applicability of the positioning and brightness of lighting being a top priority, to provide a safe and workable environment, we’ll liaise with you throughout the process, to install the best and most appropriate lighting for your explicit needs. You can use LED Spotlights to create a coned beam effect to illuminate specific work areas, or Strip Lights to provide an overall flood of light across the entire work surface.
Re-setting the mood can make all the difference when transforming your kitchen from a working area into a focal point for social get-togethers, and creating an atmosphere of sheer exuberance depends on the lighting you use. Mood lighting can add depth to a room and its furnishings, with soft, unassuming lighting effects to enhance the ambience you want to create.
What kind of lighting you use is entirely up to you, whether that’s LED Spotlights or LED Strip Lights.
From miniature LEDs for gentle lighting to super-flux LEDs for maximum light emission, LEDs are, in the long run, more economical and energy-efficient than the old halogen and fluorescent spotlights, and are by far more eco-friendly too. When working in the kitchen with knives and other utensils, it's safer to have brighter lighting; if you have lots of natural light, then a slightly lower-wattage bulb might be sufficient.
This is a flexible strip of numerous little LED lights, that comes in different shapes and sizes, and can be self-contained or linked to dimmer devices, to provide variable light levels, depending on the mood you wish to create. They come attached to a reel that has a controller, so you can mix and match the colours for that perfect ambiance. These efficient lights are also very economic to run.
Examples of wattage and brightness for LED tapes.
With many lighting options to choose from, and our electrical engineers waiting to assist you, all you have to do is call us or pop in-store, and we’ll be waiting to light up your kitchen, and your social life.
Over the past year, our way of living has changed, and the approach we take to entertaining family and friends has also had to change. With indoor restrictions limiting the number of people who can get together inside, many people have had to reassess how they can utilize their outside spaces to safely accommodate open-air gatherings, whether that’s in the hospitality sector with hotels, restaurants, or the local pub, or for more personal gatherings in your own back garden. Here at DES, we’ve got all the knowledge and knowhow to bring you the very best in outdoor lighting, adding warmth and ambience to any open-air space, and our experts are here to help you to make the most of this new ‘normal’ way of alfresco living in these capricious times. And although we’re hopeful that the present restrictions will eventually come to an end, our new-found passion for the outdoor entertaining experience will undeniably continue for many years to come.
Whether you are a landlord, publican or homeowner, here at DES, we offer free quotations, and with our team of professionals we’ll discuss the many options open to you, from upgrading your previous lighting, to discussing new electrical designs, and ensuring safety in the installation and wiring, whilst at all times being aware of the advantages of using renewable energy, including carbon neutral sources like sunlight, wind, rain, and geothermal heat.
We’re experts in a multiplicity of electrical systems in both commercial and residential settings, and our electricians, whilst meeting with your expectations, will comply with all regulations, to ensure that both your personal safety, and that of the general public is paramount.
Solar-powered outdoor lighting is popular in many commercial and residential settings, but, depending on its construction and use, it can be a little low on brightness. In these situations it may be more beneficial to have your outdoor lighting connected to your mains supply, which our electricians can easily and safely install for you.
It's important to be considerate to the environment and your neighbours when deciding on your lighting design by not creating too much glare. Wildlife, and bats in particular, can be disturbed by bright lights, and we take this into consideration by distributing the lighting rather than flooding a particular area. This also matters when considering your neighbours: keeping your lighting local will prevent them from being in the direct line of any light installations.
At DES we understand the importance of looking after and preserving the environment, and decreasing our carbon footprint. By incorporating energy-efficient lighting technology, we can do our bit for the planet, reducing greenhouse gas emissions through low-energy lighting. Although they initially cost a little extra, both CFL and LED bulbs have a lower carbon footprint and last considerably longer than older incandescent lamps.
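To illustrate the running-cost point, here is a back-of-envelope comparison between an incandescent bulb and an LED equivalent. The wattages, daily hours of use and unit electricity price below are assumptions for the sketch, not quoted figures:

```python
# Illustrative annual running-cost comparison: incandescent vs LED.
# All inputs are assumed example values, not quoted prices.

def annual_cost(watts: float, hours_per_day: float,
                price_per_kwh: float) -> float:
    """Annual electricity cost, in the same currency as price_per_kwh."""
    kwh_per_year = watts / 1000.0 * hours_per_day * 365.0
    return kwh_per_year * price_per_kwh

incandescent = annual_cost(60, 4, 0.30)  # 60 W bulb, 4 h/day, 0.30/kWh
led = annual_cost(8, 4, 0.30)            # 8 W LED giving similar light
print(round(incandescent - led, 2))      # annual saving per bulb
```

Multiplied across every fitting in a kitchen or garden, the per-bulb saving quickly offsets the higher purchase price of LED lamps.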
From the initial concept to completion, our team of professionals are ready to help you design, illuminate and enjoy your new outdoor space for years to come. Just another one of our lightbulb moments for you to enjoy!
We've been busy this week in Brighton installing an 8 kW SolarEdge system on a fantastic renovation project. The in-roof PV system will sit level with the tiles when installed to give a great end result.
Thanks to Daylight Energy Ltd for getting us involved in the project. | https://www.dorsetelectricalsolutions.com/blog/ |
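For readers curious about what an array of that size might produce, a back-of-envelope estimate multiplies the installed capacity by a specific yield. The yield figure below is an assumption (roughly 850-950 kWh per kWp per year is typical for UK installations), not a quoted performance number for this project:

```python
# Back-of-envelope annual generation estimate for a PV array.
# Assumption: specific yield of ~900 kWh per kWp per year,
# a typical UK figure; actual output depends on orientation,
# shading and local climate.

def annual_generation_kwh(array_kwp: float,
                          specific_yield: float = 900.0) -> float:
    """Estimated annual output in kWh for a PV array of a given size."""
    return array_kwp * specific_yield

print(annual_generation_kwh(8.0))  # 7200.0 kWh/year at the assumed yield
```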
The link between good nutrition and overall health is too important to ignore. When we look closely at healthy aging, we can see how critical it is to maintain good nutrition in order to keep a healthy weight, reduce the risk of chronic conditions and preserve overall health, both physically and mentally.
Between 2000 and 2030, the number of adults worldwide aged 65 years and older is projected to more than double from approximately 420 million to 973 million (1). In the past century, the leading causes of death have shifted from infectious diseases to chronic diseases such as cardiovascular disease and cancer, which may be influenced by diet (2). This has drawn more attention to the effect of diet on mortality. As the older adult population increases, so does the need to identify how dietary choices affect quality of life and survival (3).
Consuming a wide variety of foods is considered the best way to ensure balance of nutrients when healthful food components, including fresh fruits and vegetables, whole grains, legumes, nuts and lean proteins, are predominant in the diet. Without good balanced nutrition, the body might not work properly. Organs and tissues rely on the nutrients consumed in order to perform all the functions needed to maintain overall health.
Without these nutrients, the body is more prone to disease, infection, fatigue and even unstable mental health. The association between emotional well-being and nutritional status has been well documented (4,5) and it is widely known that not only is poor physical health associated with diminished emotional well-being, it is also related to poor nutritional status among older adults (6).
In conclusion, it is imperative to ensure that older adults consume the right amount of nutrients needed to sustain healthy aging and overall good health. Good nutritional status in older adults benefits both the individual and society: health is improved, dependence is decreased, time required to recuperate from illness is reduced, and utilization of health care resources is contained (7,8).
Written By Lina Marquez, RDN, LDN
1. Centers for Disease Control and Prevention. Public health and aging: Trends in aging—United States and worldwide. JAMA. 2003;289:1371-1373.
2. Gorina Y, Hoyert D, Lentzner H, Goulding M. Trends in causes of death among older persons in the United States. Aging Trends. 2005; 6:1-12.
3. Dietary Patterns and Survival of Older Adults
4. Anderson AL, et al. Journal of the Academy of Nutrition and Dietetics. 111(1):84-91.
5. Eskelinen K, Hartikainen S, Nykanen I. Is loneliness associated with malnutrition in older people? Int J Gerontol. 2016;10(1):43-45.
6. Fratiglioni L, Wang HX, Ericsson K, Maytan M, Winblad B. Influence of social network on occurrence of dementia: A community-based longitudinal study. Lancet. 2000;355(9212):1315-1319.
7. German L, Feldblum I, Bilenko N, Castel H, Harman-Boehm I, Shahar DR. Depressive symptoms and risk for malnutrition among hospitalized elderly people. J Nutr Health Aging. 2008;12(5):313-318.
8. Position of the American Dietetic Association: cost-effectiveness of medical nutrition therapy. J Am Diet Assoc. 1995;95:88-91.
9. Gallagher-Allred CR, Voss AC, Finn SC, McCamish MA. Malnutrition and clinical outcomes: the case for medical nutrition therapy. J Am Diet Assoc. 1996;96:361-369.
In the 1960s, the thesis that dietary cholesterol contributes to blood cholesterol and heart disease risk was a rational conclusion based on the available science at that time. Fifty years later the research evidence no longer supports this hypothesis, yet changing the dietary recommendation to limit dietary cholesterol has been a slow and at times contentious process. The preponderance of the clinical and epidemiological data accumulated since the original dietary cholesterol restrictions were formulated indicate that: (1) dietary cholesterol has a small effect on the plasma cholesterol levels with an increase in the cholesterol content of the LDL particle and an increase in HDL cholesterol, with little effect on the LDL:HDL ratio, a significant indicator of heart disease risk, and (2) numerous epidemiological surveys have reported no significant relationship between cholesterol intake and heart disease incidence. Over the last decade, many countries and health promotion groups have modified their dietary recommendations to reflect the current evidence and to address a now recognised negative consequence of ineffective dietary cholesterol restrictions (such as inadequate choline intake). In contrast, health promotion groups in some countries appear to suffer from cognitive dissonance and continue to promote an outdated and potentially hazardous dietary recommendation based on an invalidated hypothesis. This review evaluates the evidence for and against dietary cholesterol restrictions and the potential consequences of such restrictions.
Carbohydrate-rich foods are an essential component of the diet, providing the glucose that is continuously required by the nervous system and some other cells and tissues in the body for normal function. There is some concern that too much carbohydrate or certain types of carbohydrate such as fructose or the high glycaemic index carbohydrate foods that produce large, rapid increases in blood glucose may be detrimental to health. This review considers these issues and also summarises the public health advice currently available in Europe and the USA concerning dietary carbohydrates. The UK Scientific Advisory Committee on Nutrition is currently reviewing carbohydrates and health, and the subsequent report should help clarify some of the concerns regarding carbohydrates and health.
The human gut microbiota has been identified as a possible novel CVD risk factor. This review aims to summarise recent insights connecting human gut microbiome activities with CVD and how such activities may be modulated by diet. Aberrant gut microbiota profiles have been associated with obesity, type 1 and type 2 diabetes and non-alcoholic fatty liver disease. Transfer of microbiota from obese animals induces metabolic disease and obesity in germ-free animals. Conversely, transfer of pathogen-free microbiota from lean healthy human donors to patients with metabolic disease can increase insulin sensitivity. Not only are aberrant microbiota profiles associated with metabolic disease, but the flux of metabolites derived from gut microbial metabolism of choline, phosphatidylcholine and l-carnitine has been shown to contribute directly to CVD pathology, providing one explanation for the increased disease risk associated with eating too much red meat. Diet, especially high intake of fermentable fibres and plant polyphenols, appears to regulate microbial activities within the gut, supporting regulatory guidelines encouraging increased consumption of whole-plant foods (fruit, vegetables and whole-grain cereals), and providing the scientific rationale for the design of efficacious prebiotics. Similarly, recent human studies with carefully selected probiotic strains show that ingestion of viable microorganisms with the ability to hydrolyse bile salts can lower blood cholesterol, a recognised risk factor in CVD. Taken together such observations raise the intriguing possibility that gut microbiome modulation by whole-plant foods, probiotics and prebiotics may be at the base of healthy eating pyramids advised by regulatory agencies across the globe. In conclusion, dietary strategies which modulate the gut microbiota or their metabolic activities are emerging as efficacious tools for reducing CVD risk and indicate that indeed, the way to a healthy heart may be through a healthy gut microbiota.
DHA is an abundant nutrient from marine lipids: its specific biological effects have been investigated in human volunteers, taking into consideration the dose effects. We report herein that, at dosages below 1 g/d, DHA proved to be effective in lowering blood platelet function and exhibited an ‘antioxidant’ effect. However, this was no longer the case following 1·6 g/d, showing then a U-shape response. The antioxidant effect has been observed in platelets as well as LDL, of which the redox status is assumed to be crucial in their relationship with atherosclerosis. Second, the oxygenated products of DHA, especially protectins produced by lipoxygenases, have been considered for their potential to affect blood platelets and leucocytes. It is concluded that DHA is an interesting nutrient to reduce atherothrombogenesis, possibly through complementary mechanisms involving lipoxygenase products of DHA.
Childhood obesity is an issue of public health concern globally. This review reports on levels of overweight and obesity in Irish children and examines some aspects of their diet and lifestyle proposed to promote or protect against increasing body fatness in children. While there is still some debate with regard to the most appropriate cut-off points to use when assessing body fatness in children, approximately one in five Irish children (aged 2–17 years) have been classified as overweight (including obese) according to two generally accepted approaches. Furthermore, comparison with previous data has shown an increase in mean body weight and BMI over time. On examining dietary patterns for Irish children, there was a noticeable transition from a less energy dense diet in pre-school children to a more energy dense diet in older children and teenagers, associated with a change to less favourable dietary intakes for fibre, fat, fruit and vegetables, confectionery and snacks and sugar-sweetened beverages as children got older. A significant proportion of school-aged children and teenagers reported watching more than 2 h television per day (35 % on school-days and 65 % on week-ends) compared with 13 % of pre-school children. For children aged 5–12 years, eating out of the home contributed just 9 % of energy intake but food eaten from outside the home was shown to contribute a higher proportion of energy from fat and to be less fibre-dense than food prepared at home. Improvements in dietary lifestyle are needed to control increasing levels of overweight and obesity in children in Ireland.
Assessing dietary intake in people of any age is challenging but measuring the diet of infants and children can be particularly problematic. Young children may lack the cognitive skills, writing skills and food knowledge to record their own food intake. Multiple people may be responsible for the care of the child and to collect an accurate picture of intake it may be necessary to combine parental reports with observation in school or nursery. Where interviews are conducted with the child themselves questions may need to focus on aspects of the diet which children are likely to attend to. For example, children may not be familiar with food names or brands but may be able to describe their texture, colour and images on packaging. Adolescents are likely to be more aware of the foods they consume and have the cognitive and writing skills to record their own food intake but may lack the interest or motivation. Research has focused on reducing the burden of recording intake on the participant. Developments include food photographs for assessment of portion size which remove the need for weighing each food item, and, in recent years, computer-based methods have been developed for self-completion by young people with the aim of motivating them to participate in studies by making dietary reporting more engaging. The present paper discusses methods and challenges in assessing food intake in children followed by details of two such tools developed at Newcastle University, UK, the Young Person's Food Atlas and INTAKE24.
The dramatic rise in childhood obesity has driven the demand for tools better able to assess and define obesity and risk for related co-morbidities. In addition, the early life origins of non-communicable diseases including type 2 diabetes are associated with subtle alterations in growth and body composition, including total and regional body fatness, limb/trunk length and skeletal muscle mass (SMM). Consequently improved tools based on national reference data, which capture these body components must be developed as the limitations of BMI as a measure of overweight and obesity and associated cardiometabolic risk are now recognised. Furthermore, waist circumference as a measure of abdominal fatness in children is now endorsed by the International Diabetes Federation and National Institute for Clinical and Health Excellence for diagnostic and monitoring purposes. The present paper aims to review the research on growth-related variations in body composition and proportions, together with how national references for percentage body fat, SMM and leg/trunk length have been developed. Where collection of these measures is not possible, alternative proxy measures including thigh and hip circumferences are suggested. Finally, body ratios including the waist:height and muscle:fat ratios are highlighted as potential measures of cardiometabolic disease risk. In conclusion, a collection of national references for individual body measures have been produced against which children and youths can be assessed. Collectively, they have the capacity to build a better picture of an individual's phenotype, which represents their risk for cardiometabolic disease beyond that of the capability of BMI.
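The simplest of the indices discussed in the abstract above can be computed directly. A minimal sketch follows; note that interpreting these values in children requires the age- and sex-specific reference data the abstract describes, which this sketch deliberately omits:

```python
# Two body-composition indices: BMI (kg/m^2) and the
# waist-to-height ratio. Thresholds for children depend on
# age- and sex-specific reference data, not shown here.

def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight in kg divided by height in metres squared."""
    return weight_kg / height_m ** 2

def waist_to_height(waist_cm: float, height_cm: float) -> float:
    """Waist circumference divided by height (same units for both)."""
    return waist_cm / height_cm

print(round(bmi(70, 1.75), 1))             # 22.9
print(round(waist_to_height(80, 175), 2))  # 0.46
```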
Physical inactivity is an important risk factor for many chronic diseases and contributes to obesity and poor mental well-being. The present paper describes the main advantages and disadvantages, practical problems, suggested uses, and future developments regarding self-reported and objective data collection in the context of dietary surveys. In dietary surveys, physical activity is measured primarily to estimate energy expenditure. Energy expenditure surveillance is important for tracking changes over time, particularly given the debates over the role of the relative importance of energy intake and expenditure changes in the aetiology of obesity. It is also important to assess the extent of underreporting of dietary intake in these surveys. Physical activity data collected should include details on the frequency, duration and relative intensity of activity for each activity type that contributes considerably to overall activity and energy expenditure. Problems of validity and reliability, associated with inaccurate assessment, recall bias, and social desirability bias, are well-known; children under 10 years cannot report their activities accurately. However, despite such limitations, questionnaires are still the dominant method of physical activity assessment in dietary surveys due to their low cost and relatively low participant burden. Objective, time-stamped measures that monitor heart rate and/or movement can provide more comprehensive, quantitative assessment of physical activity but at greater cost and participant burden. Although overcoming many limitations of questionnaires, objective measures also have drawbacks, including technical, practical and interpretational issues.
The prevention of childhood obesity is a global priority. However, a range of complex social and environmental influences is implicated in the development of obesity and chronic disease that goes beyond the notion of individual choice. A population-level approach recognises the importance of access to and availability of healthy foods outside the home. These external food environments, in restaurants, supermarkets, and in school, or recreation and sports settings, are often characterised by energy dense, nutrient-poor food items that do not reflect the current nutritional guidelines for health. In addition, our understanding of these broader influences on nutritional intake is still limited. Particularly, lacking is a clear understanding of what constitutes the food environment, as well as robust measures of components of the food environment across different contexts. Therefore, this review summarises the literature on food environments of relevance to childhood obesity prevention, with a focus on places where children live, learn and play. Specifically, the paper highlights the approaches and challenges related to defining and measuring the food environment, discusses the aspects of the food environment unique to children and reports on environmental characteristics that are being modified within community, school and recreational settings. Results of the review show the need for a continued focus on understanding the intersection between individual behaviour and external factors; improved instrument development, especially regarding validity and reliability; clearer reported methodology including protocols for instrument use and data management; and considering novel study design approaches that are targeted at measuring the relationship between the individual and their food environment.
Research on healthy ageing lacks an agreed conceptual framework and has not adequately taken into account the growing evidence that social and biological factors from early life onwards affect later health. We conceptualise healthy ageing within a life-course framework, separating healthy biological ageing (in terms of optimal physical and cognitive functioning, delaying the onset of chronic diseases, and extending length of life for as long as possible) from changes in psychological and social wellbeing. We summarise the findings of a review of healthy ageing indicators, focusing on objective measures of physical capability, such as tests of grip strength, walking speed, chair rises and standing balance, which aim to capture physical functioning at the individual level, assessing the capacity to undertake the physical tasks of daily living. There is robust evidence that higher scores on these measures are associated with lower rates of mortality, and more limited evidence of lower risk of morbidity, and of age-related patterns of change. Drawing on a research collaboration of UK cohort studies, we summarise what is known about the influences on physical capability in terms of lifetime socioeconomic position, body size and lifestyle, and underlying physiology and genetics; the evidence to date supports a broad set of factors already identified as risk factors for chronic diseases. We identify a need for larger longitudinal studies to investigate age-related change and ethnic diversity in these objective measures, the dynamic relationships between them, and how they relate to other component measures of healthy ageing. Robust evidence across cohort studies, using standardised measures within a clear conceptual framework, will benefit policy and practice to promote healthy ageing.
Healthy longevity is a tangible possibility for many individuals and populations, with nutritional and other lifestyle factors playing a key role in modulating the likelihood of healthy ageing. Nevertheless, studies of effects of nutrients or single foods on ageing often show inconsistent results and ignore the overall framework of dietary habits. Therefore, the use of dietary patterns (e.g. a Mediterranean dietary pattern) and the specific dietary recommendations (e.g. dietary approaches to stop hypertension, Polymeal and the American Healthy Eating Index) are becoming more widespread in promoting lifelong health. A posteriori defined dietary patterns are described frequently in relation to age-related diseases but their generalisability is often a challenge since these are developed specifically for the population under study. Conversely, the dietary guidelines are often developed based on prevention of disease or nutrient deficiency, but often less attention is paid to how well these dietary guidelines promote health outcomes. In the present paper, we provide an overview of the state of the art of dietary patterns and dietary recommendations in relation to life expectancy and the risk of age-related disorders (with emphasis on cardiometabolic diseases and cognitive outcomes). According to both a posteriori and a priori dietary patterns, some key ‘ingredients’ can be identified that are associated consistently with longevity and better cardiometabolic and cognitive health. These include high intake of fruit, vegetables, fish, (whole) grains and legumes/pulses and potatoes, whereas dietary patterns rich in red meat and sugar-rich foods have been associated with an increased risk of mortality and cardiometabolic outcomes.
Dietary restriction (DR) has been shown to extend both median and maximum lifespan in a range of animals, although recent findings suggest that these effects are not universally enjoyed across all animals. In particular, the lifespan effect following DR in mice is highly strain-specific and there is little current evidence that DR induces a positive effect on all-cause mortality in non-human primates. However, the positive effects of DR on health appear to be highly conserved across the vast majority of species, including human subjects. Despite these effects on health, it is highly unlikely that DR will become a realistic or popular life choice for most human subjects given the level of restraint required. Consequently significant research is focusing on identifying compounds that will bestow the benefits of DR without the obligation to adhere to stringent reductions in daily food intake. Several such compounds, including rapamycin, metformin and resveratrol, have been identified as potential DR mimetics. Although these compounds show significant promise, there is a need to properly understand the mechanisms through which these drugs act. This review will discuss the importance in understanding the role that genetic background and heterogeneity play in mediating the lifespan and healthspan effects of DR. It will also provide an overview of the most promising current DR mimetics and their effects on healthy lifespan.
The number of people suffering from metabolic diseases is dramatically increasing worldwide. This stresses the need for new therapeutic strategies to combat this growing epidemic of metabolic diseases. A reduced mitochondrial function is one of the characteristics of metabolic diseases and therefore a target for intervention. Here we review the evidence that mitochondrial function may act as a target to treat and prevent type 2 diabetes mellitus, and, if so, whether these effects are due to reduction in skeletal muscle fat accumulation. We describe how exercise may affect these parameters and can be beneficial for type 2 diabetes. We next focus on alternative ways to improve mitochondrial function in a non-exercise manner. Thus, in 2003, resveratrol (3,5,4′-trihydroxystilbene) was discovered to be a small molecule activator of sirtuin 1, an important molecular target regulating cellular energy metabolism and mitochondrial homoeostasis. Rodent studies have clearly demonstrated the potential of resveratrol to improve various metabolic health parameters. Here we review data in human subjects that is available on the effects of resveratrol on metabolism and mitochondrial function and discuss how resveratrol may serve as a new therapeutic strategy to preserve metabolic health. We also discuss whether the effects of resveratrol are similar to the effects of exercise training and therefore if resveratrol can be considered as an exercise mimetic.
Osteoarthritis (OA) is a degenerative joint disease for which there are no disease-modifying drugs. It is a leading cause of disability in the UK. Increasing age and obesity are both major risk factors for OA and the health and economic burden of this disease will increase in the future. Focusing on compounds from the habitual diet that may prevent the onset or slow the progression of OA is a strategy that has been under-investigated to date. An approach that relies on dietary modification is clearly attractive in terms of risk/benefit and more likely to be implementable at the population level. However, before undertaking a full clinical trial to examine potential efficacy, detailed molecular studies are required in order to optimise the design. This review focuses on potential dietary factors that may reduce the risk or progression of OA, including micronutrients, fatty acids, flavonoids and other phytochemicals. It therefore ignores data coming from classical inflammatory arthritides and nutraceuticals such as glucosamine and chondroitin. In conclusion, diet offers a route by which the health of the joint can be protected and OA incidence or progression decreased. In a chronic disease, with risk factors increasing in the population and with no pharmaceutical cure, an understanding of this will be crucial.
Epidemiological studies, including those in identical twins, and in individuals in utero during periods of famine have provided robust evidence of strong correlations between low birth-weight and subsequent risk of disease in later life, including type 2 diabetes (T2D), CVD, and metabolic syndrome. These and studies in animal models have suggested that the early environment, especially early nutrition, plays an important role in mediating these associations. The concept of early life programming is therefore widely accepted; however the molecular mechanisms by which early environmental insults can have long-term effects on a cell and consequently the metabolism of an organism in later life, are relatively unclear. So far, these mechanisms include permanent structural changes to the organ caused by suboptimal levels of an important factor during a critical developmental period, changes in gene expression caused by epigenetic modifications (including DNA methylation, histone modification and microRNA) and permanent changes in cellular ageing. Many of the conditions associated with early-life nutrition are also those which have an age-associated aetiology. Recently, a common molecular mechanism in animal models of developmental programming and epidemiological studies has been development of oxidative stress and macromolecule damage, specifically DNA damage and telomere shortening. These are phenotypes common to accelerated cellular ageing. Thus, this review will encompass epidemiological and animal models of developmental programming with specific emphasis on cellular ageing and how these could lead to potential therapeutic interventions and strategies which could combat the burden of common age-associated disease, such as T2D and CVD.
Gait and cognitive impairments in older adults can reflect the simultaneous existence of two syndromes that affect certain brain substrates and pathologies. Nutritional deficiencies, which are extremely common among elderly population worldwide, have potential to impact the existence and rehabilitation of both syndromes. Gait and cognition are controlled by brain circuits which are vulnerable to multiple age-related pathologies such as vascular diseases, inflammation and dementias that may be caused or accentuated by poor nutrition or deficiencies that lead to cognitive, gait or combined cognitive and gait impairments. The following review aims to link gait and cognitive classifications and provide an overview of the potential impact of nutritional deficiencies on both neurological and gait dysfunctions. The identification of common modifiable risk factors, such as poor nutrition, may serve as an important preventative strategy to reduce cognitive and mobility impairments and moderate the growing burden of dementia and disability worldwide.
Influenza is a major cause of death in the over 65s. Increased susceptibility to infection and reduced response to vaccination are due to immunosenescence in combination with medical history and lifestyle factors. Age-related alterations in the composition of the gut microbiota have a direct impact on the immune system and it is proposed that modulation of the gut microbiota using pre- and probiotics could offer an opportunity to improve immune responses to infections and vaccination in older people. There is growing evidence that probiotics have immunomodulatory properties, which to some extent are strain-dependent, and are strongly influenced by ageing. Randomised controlled trials suggest that probiotics may reduce the incidence and/or severity of respiratory infections, although there is limited data on older people. A small number of studies have examined the potential adjuvant effects of selected probiotics for vaccination against influenza; however, the data is inconsistent, particularly in older people. This review describes the impact of age-related changes in the gut on the immune response to respiratory infections and evaluates whether restoration of gut microbial homoeostasis by probiotics offers an opportunity to modulate the outcome of respiratory infections and vaccination against influenza in older people. Although there is promising evidence for effects of probiotics on human health, there is a lack of consistent data, perhaps partly due to strain-specific differences and an influence of the age of the host. Further research is critical in evaluating the potential use of probiotics in respiratory infections and vaccination in the ageing population.
High amounts of time spent sedentary and low levels of physical activity have been implicated in the process of excessive adiposity gains in youth. The aim of this review is to discuss the role of physical activity, sedentary time and behaviour (i.e. television (TV)-viewing) in relation to adiposity during the first two decades of life, with a specific focus on whether the association between sedentary time and behaviour and adiposity is independent of physical activity. We identified nine cohort studies (three prospective) examining whether sedentary time was associated with adiposity independent of physical activity. Eight of these studies suggested that sedentary time was unrelated to adiposity when physical activity was taken into account. Results from studies (n = 8) examining the association between TV-viewing and adiposity independent of physical activity were mixed. Those that observed a positive association between TV-viewing and adiposity independent of physical activity discussed that the association may be due to residual confounding. A few additional studies have also challenged the general notion that low levels of physical activity lead to fatness and suggested that higher baseline fatness may be predictive of a decline in physical activity. It appears unlikely that higher levels of sedentary time are associated with, or predictive of, higher levels of adiposity when physical activity is controlled for in youth. Specific sedentary behaviours such as TV-viewing may be associated with adiposity independent of physical activity, but the results may be explained by residual confounding.
Osteoporosis, a metabolic skeletal disease characterised by decreased bone mass and increased fracture risk, is a growing public health problem. Among the various risk factors for osteoporosis, calcium and vitamin D have well-established protective roles, but it is likely that other nutritional factors are also implicated. This review will explore the emerging evidence supporting a role for certain B-vitamins, homocysteine and the 677C→T polymorphism in the gene encoding the folate-metabolising enzyme methylenetetrahydrofolate reductase, in bone health and disease. The evidence, however, is not entirely consistent and as yet no clear mechanism has been defined to explain the potential link between B-vitamins and bone health. Coeliac disease, a common condition of malabsorption, induced by gluten ingestion in genetically susceptible individuals, is associated with an increased risk both of osteoporosis and inadequate B-vitamin status. Given the growing body of evidence linking low bone mineral density and/or increased fracture risk with low B-vitamin status and elevated homocysteine, optimal B-vitamin status may play an important protective role against osteoporosis in coeliac disease; to date, no trial has addressed this possible link.
The prevalence of osteoporosis and the incidence of age-related fragility fracture vary by ethnicity. There is greater than 10-fold variation in fracture probabilities between countries across the world. Mineral and bone metabolism are intimately interlinked, and both are known to exhibit patterns of daily variation, known as the diurnal rhythm (DR). Ethnic differences are described for Ca and P metabolism. The importance of these differences is described in detail between select ethnic groups, within the USA between African-Americans and White-Americans, between the Gambia and the UK and between China and the UK. Dietary Ca intake is higher in White-Americans compared with African-Americans, and is higher in White-British compared with Gambian and Chinese adults. Differences are observed also for plasma 25-hydroxy vitamin D, related to lifestyle differences, skin pigmentation and skin exposure to UVB-containing sunshine. Higher plasma 1,25-dihydroxy vitamin D and parathyroid hormone are observed in African-American compared with White-American adults. Plasma parathyroid hormone is also higher in Gambian adults and, in winter, in Chinese compared with White-British adults. There may be ethnic differences in the bone resorptive effects of parathyroid hormone, with a relative skeletal resistance to parathyroid hormone observed in some, but not all ethnic groups. Renal mineral excretion is also influenced by ethnicity; urinary Ca (uCa) and urinary P (uP) excretions are lower in African-Americans compared with White-Americans, and in Gambians compared with their White-British counterparts. Little is known about ethnic differences in the DR of Ca and P metabolism, but differences may be expected due to known differences in lifestyle factors, such as dietary intake and sleep/wake pattern. The ethnic-specific DR of Ca and P metabolism may influence the net balance of Ca and P conservation and bone remodelling. 
These ethnic differences in Ca, P and bone metabolism may be important factors in the variation in skeletal health.
In this issue of JAMA Internal Medicine, Haring et al1 provide what appears to be the first detailed examination of a Mediterranean diet index and 3 other dietary quality indexes in association with the risk of hip and total fractures. They report that the 4 commonly used indexes predict a lower risk of hip fractures.
These a priori dietary indexes are one form of dietary pattern analyses, with the other being empirical dietary patterns based on statistical methods that take into account correlations among consumption of different foods. The use of dietary patterns in epidemiologic studies and intervention trials to complement studies of specific nutrients and foods has increased because effects of diet are likely to be strongest and clearest when contributions from multiple aspects of diet are combined. In addition, because isolating the effect of a specific nutrient or food from other highly correlated components of diet can be difficult, we can sometimes have greater confidence that an association with an overall dietary pattern is causal than we can for associations with specific components of that diet. One of the early uses of an a priori dietary index was the Healthy Eating Index (HEI), which was created by the US Department of Agriculture to describe adherence to the 1995 US Dietary Guidelines. Because of concerns that the focus of the 1995 guidelines—reduction of total fat and a broad increase in carbohydrates—was not supported by good evidence, we used the HEI to score the diets of participants in the Nurses’ Health Study and Health Professionals Follow-up Study using dietary data that had been collected every 4 years since 1986. After adjusting for smoking, physical activity, and other health-related behaviors, HEI scores were not associated with a composite outcome of cardiovascular disease, cancer, and total mortality. 
Thus, we created the Alternative Healthy Eating Index, which accounted for type of fat, form of carbohydrate, and source of protein; when applied to the same dietary data, this score strongly predicted a lower risk of this composite of major chronic disease outcomes in both men and women.2 Since that time, the US Dietary Guidelines and corresponding modifications of the HEI have moved closer to the diet described by the Alternative Healthy Eating Index, and both dietary indexes predict better health outcomes.3 More recently, the Alternative Healthy Eating Index has been used to track US trends in diet quality since 2000, documenting a steady improvement that would account for major health benefits.4 The Mediterranean Diet Index was developed to describe adherence to the traditional diet of Greece; this score and a modification for countries in which olive oil is not traditional (the alternative Mediterranean Diet Index) have been strongly associated with better health outcomes in Greece and elsewhere.5 The diet score used in the randomized Dietary Approaches to Stop Hypertension (DASH) trial was developed to describe the dietary pattern documented to reduce blood pressure. Although these dietary indexes differ in some ways, they generally emphasize intake of fruits, vegetables, whole grains, and plant sources of protein and deemphasize refined starch, sugar, and red meat.
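The scoring mechanics behind such a priori indexes are simple: each food component is scored between a "worst" and a "best" intake level, and the component scores are summed. The Python sketch below is a hypothetical illustration only; the component names, cutoffs, and point values are invented for the example and are not the actual HEI, Alternative Healthy Eating Index, or DASH definitions.

```python
def score_component(intake, worst, best, max_points=10):
    """Linearly score one diet component between a 'worst' and a 'best'
    intake level, clamped to the range [0, max_points]."""
    if best > worst:   # more is better (e.g. vegetables, whole grains)
        frac = (intake - worst) / (best - worst)
    else:              # less is better (e.g. red meat, sugary drinks)
        frac = (worst - intake) / (worst - best)
    return max_points * min(max(frac, 0.0), 1.0)

# Hypothetical components: (daily intake, worst level, best level)
diet = {
    "vegetables_servings": (3.0, 0.0, 5.0),
    "whole_grains_g":      (40.0, 0.0, 75.0),
    "ssb_servings":        (0.5, 1.0, 0.0),   # sugar-sweetened beverages
    "red_meat_servings":   (1.2, 1.5, 0.0),
}
total = sum(score_component(*v) for v in diet.values())  # overall 0-40 score
```

In a cohort analysis, each participant's total score would then be related to disease outcomes, with higher scores expected to predict lower risk.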
Willett WC. Mediterranean Diet and Fracture Risk. JAMA Intern Med. 2016;176(5):652–653. doi:10.1001/jamainternmed.2016.0494
If you already know about The China Study then you will know how important a milestone it is for nutritional research. It’s such an important study that I thought it would be worth taking a quick look at its background, method and conclusions.
Background
Protein Consumption in Rats
Professor T Colin Campbell observed a relationship between the amount of dietary protein consumed and the promotion of cancer in rats. The animal protein used was casein (the main protein in milk and cheese); a variety of plant proteins were tested for comparison. Distinct differences between the effects of animal vs. plant-based protein were observed:
- animal protein tended to promote disease conditions
- plant protein tended to have the opposite effect
Early 1970s in China
The Chinese premier Zhou Enlai was dying of cancer. He had organised a survey called the Cancer Atlas which gathered details on about 880 million people. The survey revealed cancer rates across China to be geographically localised, suggesting dietary/environmental factors—not genes—accounted for differences in disease rates.
1983-1984 Survey
Dr. Campbell, together with researchers from Cornell University, Oxford University, and the Chinese government, conducted a major epidemiological study (i.e. a study of human populations to discover patterns of disease and the factors that influence them). This was called The China Project (from which the book The China Study derived some of its data). Researchers investigated the relationship between disease rates and dietary/lifestyle factors across the country.
Why China?
- large population of almost one billion
- very little migration within China
- rural Chinese mostly lived where they were born
- strict residential registration system existed
- food production was very localised
- the Cancer Atlas had revealed diseases were localised and so dietary and environmental factors (not genes) would be likely to account for disease rate variation by area (whether affluent and eating Western diet, or rural and eating traditional plant-based diet)
Method
Research Questions
1. Is there an association between environmental factors, like diet and lifestyle, and risk for chronic disease?
2. Would the patterns observed in a human population be consistent with diet and disease associations observed in experimental animals?
Hypothesis
Researchers hypothesised generally that an association between diet/lifestyle factors and disease rates would indeed exist. A specific hypothesis was that animal product consumption would be associated with an increase in cancer and chronic, degenerative disease.
Hypothesis Testing
6,500 adults in 65 different counties across China were surveyed in the 1983-1984 project. These counties represented the range of disease rates countrywide for seven different cancers. The survey process with each participant included:
- three-day direct observation
- comprehensive diet and lifestyle questionnaires
- blood and urine samples
- food samples from local markets analysed for nutritional composition
- survey of geographic factors
1989-1990 Survey
- same counties and individuals resurveyed plus a survey of 20 additional new counties in mainland China and Taiwan.
- 10,200 adults surveyed
- socioeconomic information collected
- data combined with new mortality data for 1986-88
Analysis of Data from both 1983-1984 & 1989-1990 Surveys
- data was analysed at approximately two dozen laboratories around the world to reduce chances of error in data analysis
- consistent results across independent laboratories gave researchers greater confidence that the findings were correct
Conclusion
- diseases more common in Western countries clustered together geographically in richer areas of China
- diseases in richer areas of the world were thus likely to be attributed to similar “nutritional extravagance”
- diseases in poorer areas of the world were likely to be attributed to nutritional inadequacy/poor sanitation
- blood cholesterol (strongly associated with chronic, degenerative diseases) was higher in those consuming more animal foods
- lower oestrogen levels in women (associated with fewer breast cancers) related to increased plant food consumption
- higher intake of fibre (found only in plants) associated with lower incidence of colon and rectal cancer
The consistency of the results led the researchers to make the overall conclusion that the closer people came to an all plant-based diet, the lower their risk of chronic disease.
Published Data
- The data on both the 1983-1984 survey and the 1989-1990 survey can be seen in more detail here.
- More detail on the experimental study design of the China Project (covered in Appendix B) plus a full copy of The China Study in pdf format is available here.
- Professor T Colin Campbell’s complete CV (including published papers analysing data from the China Project) is available here.
Plant Protein vs Animal Protein Webinar from Professor T Colin Campbell
If you have any comments or require further information on this topic, please let me know.
Bibliography:
- Chen J, Campbell TC, Li J, Peto R. Diet, Life-Style and Mortality in China: A Study of the Characteristics of 65 Chinese Counties. Oxford, UK: Oxford University Press; 1990.
- Chen J, Peto R, Pan W-H, Liu B-Q, Campbell TC, Boreham J, Parpia B. Mortality, Biochemistry, Diet and Lifestyle in Rural China: Geographic Study of the Characteristics of 69 Counties in Mainland China and 16 Areas in Taiwan. Oxford, UK; Ithaca, NY; Beijing, PRC: Oxford University Press, Cornell University Press; People’s Medical Publishing House, 1990.
A sufficient intake of vegetables is important for maintaining a balanced diet and avoiding a wide range of diseases. But might a diet rich in vegetables also lower the risk of cardiovascular disease (CVD)?
Unfortunately, researchers from the Nuffield Department of Population Health at the University of Oxford, the Chinese University of Hong Kong, and the University of Bristol found no evidence for this.
That the consumption of vegetables might lower the risk of CVD might at first sight seem plausible, as their ingredients such as carotenoids and alpha-tocopherol have properties that could protect against CVD. But so far, the evidence from previous studies for an overall effect of vegetable consumption on CVD has been inconsistent.
Now, new results from a powerful, large-scale study in Frontiers in Nutrition show that a higher consumption of cooked or uncooked vegetables is unlikely to affect the risk of CVD. The authors also explain how confounding factors might have produced the spurious positive findings of previous studies.
“The UK Biobank is a large-scale prospective study on how genetics and environment contribute to the development of the most common and life-threatening diseases. Here we make use of the UK Biobank’s large sample size, long-term follow-up, and detailed information on social and lifestyle factors, to assess reliably the association of vegetable intake with the risk of subsequent CVD,” said Prof Naomi Allen, UK Biobank’s chief scientist and co-author on the study.
The UK Biobank follows the health of half a million adults in the UK by linking to their healthcare records. Upon their enrollment in 2006-2010, these volunteers were interviewed about their diet, lifestyle, medical and reproductive history, and other factors.
The researchers used the responses at enrollment of 399,586 participants (of whom 4.5% went on to develop CVD) to questions about their daily average consumption of uncooked versus cooked vegetables.
They analyzed the association with the risk of hospitalization or death from myocardial infarction, stroke, or major CVD. They controlled for a wide range of possible confounding factors, including socio-economic status, physical activity, and other dietary factors.
Crucially, the researchers also assessed the potential role of ‘residual confounding’, that is, whether unknown additional factors or inaccurate measurement of known factors might lead to a spurious statistical association between CVD risk and vegetable consumption.
The mean daily intake of total vegetables, raw vegetables, and cooked vegetables was 5.0, 2.3, and 2.8 heaped tablespoons per person. The risk of dying from CVD was about 15% lower for those with the highest intake compared to the lowest vegetable intake. However, this apparent effect was substantially weakened when possible socio-economic, nutritional, and health- and medicine-related confounding factors were taken into account.
Controlling for these factors reduced the predictive statistical power of vegetable intake on CVD by over 80%, suggesting that more precise measures of these confounders would have completely explained any residual effect of vegetable intake.
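Residual confounding of this kind is easy to reproduce in a toy simulation. The Python sketch below is a hypothetical illustration, not the study's actual analysis: a socio-economic variable drives both vegetable intake and CVD risk, vegetable intake has no direct effect, and the crude regression slope still looks "protective" until the confounder is adjusted away.

```python
import random

random.seed(42)

def cov(a, b):
    """Sample covariance of two equal-length sequences."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / (len(a) - 1)

def residuals(y, x):
    """Residuals of y after simple linear regression on x."""
    b = cov(x, y) / cov(x, x)
    a = sum(y) / len(y) - b * sum(x) / len(x)
    return [yi - (a + b * xi) for xi, yi in zip(x, y)]

n = 5000
ses = [random.gauss(0, 1) for _ in range(n)]      # socio-economic status (confounder)
veg = [s + random.gauss(0, 1) for s in ses]       # vegetable intake tracks SES
cvd = [-s + random.gauss(0, 1) for s in ses]      # CVD risk falls with SES; no direct veg effect

crude = cov(veg, cvd) / cov(veg, veg)             # apparent "protective" slope
rx, ry = residuals(veg, ses), residuals(cvd, ses)
adj = cov(rx, ry) / cov(rx, rx)                   # slope after adjusting for SES
```

The crude slope comes out clearly negative (vegetables appear protective) while the confounder-adjusted slope sits near zero, mirroring how adjustment eliminated most of the apparent effect in the study.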
Dr. Qi Feng, a researcher at the Nuffield Department of Population Health at the University of Oxford, and the study’s lead author, said: “Our large study did not find evidence for a protective effect of vegetable intake on the occurrence of CVD.
Instead, our analyses show that the seemingly protective effect of vegetable intake against CVD risk is very likely to be accounted for by bias from residual confounding factors, related to differences in socioeconomic situation and lifestyle.”
Feng et al. suggest that future studies should further assess whether particular types of vegetables or their method of preparation might affect the risk of CVD.
Last author Dr. Ben Lacey, Associate Professor in the department at the University of Oxford, concluded: “This is an important study with implications for understanding the dietary causes of CVD and the burden of CVD normally attributed to low vegetable intake. However, eating a balanced diet and maintaining a healthy weight remains an important part of maintaining good health and reducing risk of major diseases, including some cancers. It is widely recommended that at least five portions of a variety of fruits and vegetables should be eaten every day.”
Reducing the burden of cardiovascular disease (CVD) is a top public health priority in the UK and worldwide. A poor diet is a major contributor to morbidity and premature mortality, especially CVD [2, 3], in part by promoting excess weight, but also by raising total cholesterol and low-density lipoprotein (LDL) concentrations and increasing the risk of diabetes and hypertension. Traditionally, the vast majority of epidemiological studies investigating diet and health associations have focused on single nutrients, and this evidence is reflected in current dietary recommendations [4,5,6].
These emphasize the importance of achieving and maintaining a healthy weight, reductions in saturated fatty acids (SFAs) and free sugars [7, 8], and increases in dietary fiber. High dietary energy density and free sugars are associated with increased risk of weight gain, which can further increase CVD and mortality risk [8, 9], while SFAs increase total blood cholesterol and LDL [10, 11]. However, other recent meta-analyses and observational studies have not found evidence for a beneficial effect of reducing SFA intake on CVD and total mortality [12, 13], or have found protective effects against stroke. Dietary fiber may lower the risk of CVD through improved glucose control and lower serum cholesterol concentration.
However, despite years of public health efforts, population dietary change has been slow [1, 16]. This may reflect in part the difficulties of translating present dietary recommendations into food-based public health advice, and some existing recommendations are not universally echoed across countries.
The public have frequently been confused by apparently conflicting messages, for example about the importance of reducing saturated fat or free sugars, without recognizing that these nutrients frequently co-exist in foods, that the consequence may be a diet high in both saturated fats and free sugars, and that they may have synergistic effects on health. Dietary guidelines which focus on foods rather than individual nutrient recommendations could help avoid confusion and avoid inadvertent increases in one nutrient of concern at the expense of another. Despite the inclusion of some food-based recommendations in recent dietary guidelines (especially regarding fruits, vegetables, and dairy), nutrient-based advice still remains the most common, often co-existing with food-based guidance, as seen in the latest release of the Dietary Guidelines for Americans 2020–2025.
Increasingly, researchers have sought to characterize complex dietary patterns using either a priori (based on adherence to a specific patterns, e.g., Mediterranean diet, or a score which reflects overall dietary quality) or a posteriori (based on the observed dietary intake using empirical methods such as factor analysis or principal component analysis (PCA)) [20, 21].
Reduced rank regression (RRR) is a data-dimension reduction technique that aims to identify the combination of food groups that explains the maximum amount of variation in a set of response variables (e.g., nutrients) hypothesized to be on the causal pathway between diet and health outcomes. This approach can test a priori hypotheses about the pathophysiology of disease. To our knowledge, only six longitudinal cohort studies have examined overall CVD risk and/or all-cause mortality using RRR, but all included smaller populations and none focused on the UK (Additional file 1: Table S1). This population-specificity is important given that dietary patterns can vary substantially even when nutrient intakes are broadly similar, owing to cultural differences in food preference.
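Mechanically, a rank-1 RRR can be sketched in a few lines: fit ordinary least squares of the response variables on the food groups, then take the leading right singular vector of the fitted responses to obtain the first dietary pattern. The NumPy sketch below runs on simulated data; the dimensions and variables are invented for illustration and are not the paper's actual food groups or nutrient responses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: standardized food-group intakes (predictors) and
# nutrient response variables assumed to lie on the diet-disease pathway.
n, p, q = 500, 6, 3
X = rng.normal(size=(n, p))             # food-group intakes
W = rng.normal(size=(p, q))
Y = X @ W + rng.normal(size=(n, q))     # e.g. energy density, free sugars, fiber

def rrr_first_pattern(X, Y):
    """Rank-1 reduced rank regression: the linear combination of food
    groups explaining the most variance in the fitted responses."""
    B_ols = np.linalg.pinv(X) @ Y                        # full-rank OLS coefficients
    _, _, Vt = np.linalg.svd(X @ B_ols, full_matrices=False)
    v1 = Vt[0]                                           # leading response direction
    weights = B_ols @ v1                                 # food-group weights of pattern 1
    scores = X @ weights                                 # per-person pattern z-score basis
    return weights, scores

weights, scores = rrr_first_pattern(X, Y)
```

In a cohort application, the per-person pattern scores would be standardized and divided into quintiles before relating them to CVD incidence and mortality.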
Using data from the UK Biobank study, we aimed to identify food-based dietary patterns explaining the variability in known dietary risk factors which operate through excess energy intake, such as energy density, free sugars, saturated fat, and low fiber intakes, and to investigate their association with total and fatal cardiovascular disease (CVD) and all-cause mortality.
Discussion
In this sample of middle-aged British adults, two principal dietary patterns explained 43% and 20% of the variance in specific nutrients, namely energy density, saturated fat, free sugars, and fiber, which are hypothesized to be on the pathway between the associations of food groups and CVD and all-cause mortality through their contribution to excess energy intake. In the primary pattern, greater consumption of chocolate and confectionery, butter, refined bread, and table sugar and preserves together with low intakes of fresh fruit, vegetables, and wholegrain foods was significantly associated with increased CVD and all-cause mortality.
A second pattern was related to higher intakes of free sugars, predominantly from sugar-sweetened beverages, fruit juice, chocolate and confectionery, and table sugar and preserves, but low in butter and higher-fat cheese. The association of this dietary pattern with incident CVD and all-cause mortality was non-linear, with evidence of increased risk only for those with the highest dietary pattern z-scores. Exploratory analyses suggested the association observed with dietary pattern 1 was potentially mediated by excess weight.
RRR has not been widely used to identify dietary patterns and their associations with CVD risks. The first dietary pattern largely confirms previous studies reporting associations with a priori “Western” dietary patterns and the benefits of “Mediterranean” diets, and with a large body of data reporting the associations between individual food groups or nutrients and disease outcomes from prospective cohort studies in the USA and Europe [9, 11, 33, 34].
It is notable that people in the dietary pattern quintile with the lowest risk had mean intakes of energy from SFA of 9.7%, very close to the national and international recommendations, and free sugars accounted for 8.8% of total energy, below the World Health Organization (WHO) guidelines, though this level still exceeded the more stringent UK recommendations.
The second dietary pattern is more unusual and is characterized by higher intakes of sugar-sweetened beverages, fruit juice, and table sugar and preserves, together with lower intakes of high fat cheese and butter. This dietary pattern is striking because people in the highest quintile, with very high free sugars intake, otherwise followed other healthy behaviors, with higher physical activity, lower alcohol intake, and were less likely to smoke, and their intake of SFA met the recommended levels.
People in the highest quintile for this dietary pattern had increased risks for CVD and all-cause mortality and consumed, on average, 17.3% of dietary energy from free sugars, more than three times the UK dietary guideline, but only 10% SFA, which is the recommended level. While some previous research has shown that higher consumption of SSBs and other added sugars is associated with a higher risk of CVD [9, 37–40] and all-cause mortality, recent reviews of the evidence by the WHO, and by the UK Scientific Advisory Committee on Nutrition, did not identify a specific link between sugar intakes and total mortality.
Inadequate nutrition in childhood can inhibit optimal growth and development, and is also associated with increased risk of chronic diseases later in life. Children living in households with limited financial resources may face a number of challenges to meet nutrient needs through unhealthy eating patterns, which may lead to health inequalities throughout the life-course. Therefore, improving low-income children’s diet would be an effective strategy for their health promotion and disease prevention, and potentially for narrowing health inequalities. The essential step for an efficient intervention would be to identify the unique nutrition risk that low-income children have. Therefore, the overarching aim of research in this dissertation was to identify nutrition risk of U.S. infants and children with low income or food insecurity, or participating in federal nutrition assistance programs using data from nationally representative surveys. An additional aim was to assess whether the inclusion of micronutrient intake from dietary supplements impacts micronutrient inadequacy in children.
For low-income infants and young children up to the age of 5 years, the Special Supplemental Nutrition Program for Women, Infants, and Children (WIC) provides tailored food packages to improve dietary intake that may be inadequate due to economic constraints. Therefore, it is expected that nutrient intake of WIC participants would be more like those of higher-income nonparticipants and higher than those of lower-income nonparticipants who are likely to be eligible for WIC. The results from the Feeding Infants and Toddlers Study 2016 data analysis supported the hypothesis for several nutrients of concern, although WIC participants were more likely to exceed the recommended limits for sodium and added sugars compared to higher-income nonparticipants. However, higher-income nonparticipants were more likely to use dietary supplements than both WIC participants and lower-income nonparticipants, which can impact total nutrient intake (i.e., nutrient intake from all sources).
Systematic differences in dietary supplement use by income and WIC participation were also observed in a nationally representative sample of children aged 18 years and younger from the 2011-2014 National Health and Nutrition Examination Survey (NHANES). Dietary supplement use was lower among children in low-income families compared to those in higher-income families. Among children in low-income families, those participating in WIC were less likely to use dietary supplements compared to nonparticipants. In addition, food insecurity and the Supplemental Nutrition Assistance Program (SNAP) participation were associated with lower use of dietary supplements. Overall, one-third of children used any dietary supplements, mostly multivitamin-minerals, with primary motivations for use as “improve” or “maintain” health.
The following analysis of the 2011-2014 NHANES data showed that the inclusion of dietary supplements in nutrient intake assessments may lead to wider disparities in dietary intake by food security. This study also demonstrated the dose-response relationship between food security status and mean adequacy ratio, a summary measure of micronutrient adequacy. The mean adequacy ratio, inclusive of dietary supplements, was the highest in high food-security group (mean of 0.77), lower in marginal and low food security group (mean of 0.74), and the lowest in very low food security group (mean of 0.66), based on classification by food security among household children. However, the mean adequacy ratio does not reflect the usual intake (i.e., a long-term, habitual intake).
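The mean adequacy ratio itself is straightforward to compute: for each micronutrient, divide intake by the reference requirement, cap the ratio at 1 so a surplus in one nutrient cannot offset a shortfall in another, and average across nutrients. The Python sketch below is a hypothetical illustration; the reference amounts are invented for the example, not the Dietary Reference Intake values used in the dissertation.

```python
def mean_adequacy_ratio(intakes, requirements):
    """Mean adequacy ratio: average over nutrients of intake/requirement,
    with each ratio capped at 1.0."""
    ratios = [min(intakes[n] / requirements[n], 1.0) for n in requirements]
    return sum(ratios) / len(ratios)

# Hypothetical daily intakes vs. illustrative reference amounts
requirements = {"vitamin_d_ug": 15, "calcium_mg": 1000, "magnesium_mg": 240}
intakes = {"vitamin_d_ug": 6, "calcium_mg": 900, "magnesium_mg": 300}
mar = mean_adequacy_ratio(intakes, requirements)
```

Here the magnesium surplus is capped at 1.0, so the low vitamin D intake still pulls the overall ratio down, which is the point of the measure.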
Therefore, another analysis of the 2011-2016 NHANES data estimated total usual nutrient intake of U.S. children 18 years and younger by food security status, using the National Cancer Institute method that adjusts for random error by statistical modeling. The results suggested that food insecurity was associated with higher risks of inadequate intakes for some nutrients, such as vitamins D and E and magnesium among boys and girls and vitamin A and calcium among girls only. Poor overall dietary quality and excessive sodium intake were of concern, regardless of food security status.
Collectively, the results from the studies in this dissertation add value to the evidence base about the adverse association of low income level and food insecurity status with dietary intake and extend the finding to include nutrient intakes from dietary supplements, which widens the disparity in nutrition risk. These findings highlight a need for interventions to reduce nutrient inadequacies and improve dietary quality among children across all socioeconomic levels, but especially among those with low income or food insecurity.
Chronic obstructive pulmonary disease (COPD) is one of the leading causes of morbidity and mortality worldwide and a growing healthcare problem. Identification of modifiable risk factors for the prevention and treatment of COPD is urgent, and the scientific community has begun to pay close attention to diet as an integral part of COPD management, from prevention to treatment. This review summarizes the evidence from observational and clinical studies regarding the impact of nutrients and dietary patterns on lung function and COPD development, progression, and outcomes, with highlights on potential mechanisms of action. Although definitive data are lacking, the available scientific evidence indicates that some foods and nutrients, especially those nutraceuticals endowed with antioxidant and anti-inflammatory properties, particularly when consumed in combination in the form of balanced dietary patterns, are associated with better pulmonary function, less lung function decline, and reduced risk of COPD. Knowledge of dietary influences on COPD may provide health professionals with an evidence-based lifestyle approach to better counsel patients toward improved pulmonary health. According to WHO estimates, mainly from high-income countries, 65 million people have moderate to severe COPD, but a great proportion of COPD worldwide may be underdiagnosed, mostly in low- and middle-income countries. The COPD burden is projected to increase dramatically due to chronic exposure to risk factors and the changing age structure of the world population, and the WHO expects COPD to become the third leading cause of death worldwide. Therefore, prevention and management of COPD is currently considered a major health problem, with important social and economic issues. COPD encompasses a group of disorders, including small airway obstruction, emphysema, and chronic bronchitis, and is characterized by chronic inflammation of the airways and lung parenchyma with progressive and irreversible airflow limitation [2].
Symptoms of COPD include dyspnea (distress with breathing), cough, and sputum production. To account for the complexity of the disease and to aid in assessing its severity, multidimensional indices based mainly on clinical and functional parameters have been developed.
Even when change comes too late to prevent these diseases, lifestyle changes can keep them from getting worse. Chronic obstructive pulmonary disease, or COPD, is a general term for several lung diseases, mainly chronic bronchitis and emphysema. These diseases are characterized by obstructed airflow through the airways in and out of the lungs. Both cause excessive inflammatory processes that eventually lead to abnormalities in lung structure and limited airflow, and both are progressive conditions that worsen over time. COPD symptoms include shortness of breath, wheezing, chest tightness, excessive mucus production and coughing. In addition, COPD adds to the work of the heart and can cause pulmonary heart disease.
The higher the intake of these foods, the lower the risk of COPD. [Remaining text garbled in extraction: fragments mention a framework model of the association of diets and dietary factors with lung function, relieving symptoms and slowing the disease, author contributions, and a McDougall video on the treatment of factory-farmed animals.] | https://umarlaud.eu/copd-and-plant-based-vegan-diet/
No Association between Egg Consumption and Risk of Cardiovascular Disease
A meta-analysis is a statistical technique whereby results from several studies are combined to derive a single estimate of the association or effect of a particular treatment on an outcome. The advantage of this approach is that, by combining multiple studies, the pooled estimate is statistically stronger than the result of any one study. This technique is frequently used in nutrition research and can be applied to results generated from observational or clinical intervention studies.
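As a concrete illustration of the pooling step described above, here is a minimal fixed-effect (inverse-variance) sketch in Python; the three study estimates are hypothetical, not taken from the Shin et al. analysis:

```python
import math

def pool_fixed_effect(rrs, ses):
    """Inverse-variance fixed-effect pooling of risk ratios.

    Pooling happens on the log scale: each study is weighted by the
    precision (1/se^2) of its log risk ratio, and the pooled estimate
    is the weighted mean of the study log-RRs."""
    logs = [math.log(r) for r in rrs]
    weights = [1.0 / se ** 2 for se in ses]
    pooled_log = sum(w * l for w, l in zip(weights, logs)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    ci = (math.exp(pooled_log - 1.96 * pooled_se),
          math.exp(pooled_log + 1.96 * pooled_se))
    return math.exp(pooled_log), ci

# Three hypothetical studies: risk ratios and standard errors of log(RR)
rr, (lo, hi) = pool_fixed_effect([1.10, 0.95, 1.05], [0.10, 0.08, 0.12])
```

The pooled interval is narrower than any single study's, which is exactly the "statistically stronger" property the paragraph describes.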
Over the past 15 years, several prospective cohort studies (a type of observational study) have evaluated associations between dietary patterns and risk of chronic disease. Egg consumption, in particular, has been of interest given the cholesterol content of eggs (186 mg of cholesterol per large egg). Shin et al. recently conducted a meta-analysis on results from these prospective cohort studies, specifically focusing on associations between egg consumption and cardiovascular disease (CVD), mortality, and type 2 diabetes mellitus (T2DM)1. A total of 16 studies, the majority of which focused on CVD, were included in the analysis and each study tracked subjects over 6 to 20 years of follow up.
Results showed that compared to those consuming less than one egg per week, individuals in the highest intake category (≥1 egg/day) did not have a higher risk of CVD, stroke, or overall mortality. This is consistent with findings from controlled intervention studies showing that egg consumption does not adversely affect blood cholesterol levels or other biomarkers for CVD risk, such as endothelial function2-4.
Results were more surprising with respect to T2DM, particularly in light of the body of scientific evidence on eggs and health. Individuals in the highest egg consumption category were more likely to develop T2DM compared to those consuming eggs infrequently. Further, among individuals with T2DM, those in the highest egg consumption category were more likely to develop CVD than those who never ate eggs or consumed them less than once per week. While there were several limitations of the study that may have influenced these results, one possible explanation for such findings may relate to how eggs are consumed in an overall diet. Research presented at the 2013 Experimental Biology meeting showed that egg intake was positively correlated with T2DM risk factors (i.e., waist circumference and body mass index) only when eggs were consumed as part of dietary patterns low in vegetables, legumes, and grains5. Other dietary patterns containing eggs were not associated with these risk factors. This calls into question the results of prior observational studies looking at egg intake and disease risk and suggests that future studies should consider not only the frequency of egg consumption, but also the dietary context in which eggs are consumed.
As is so frequently the case in nutrition, additional studies are warranted to better understand the relationships between egg consumption and chronic disease risk.
1Shin JY, Xun P, Nakamura Y, He K. Egg consumption in relation to risk of cardiovascular disease and diabetes: a systematic review and meta-analysis. Am J Clin Nutr. 2013;98:146-59.
2Goodrow EF, Wilson TA, Houde SC, Vishwanathan R, Scollin PA, Handelman G, Nicolosi RJ. Consumption of one egg per day increases serum lutein and zeaxanthin concentrations in older adults without altering serum lipid and lipoprotein cholesterol concentrations. J Nutr. 2006;136:2519-24.
3Wenzel AJ, Gerweck C, Barbato D, Nicolosi RJ, Handelman GJ, Curran-Celentano J. A 12-wk egg intervention increases serum zeaxanthin and macular pigment optical density in women. J Nutr. 2006;136:2568-73.
4Katz DL, Evans MA, Nawaz H, Njike VY, Chan W, Comerford BP, Hoxley ML. Egg consumption and endothelial function: a randomized controlled crossover trial. Int J Cardiol. 2005;99:65-70.
5Nicklas TA, O’Neil CE, Fulgoni VL. Relationship between egg consumption patterns and nutrient intake, diet quality, weight measures, and cardiovascular risk factors (CVRF): 2001-2008 NHANES. Experimental Biology, 2013, Boston, MA. | http://www.eggnutritioncenter.org/blog/no-association-between-egg-consumption-and-risk-of-cardiovascular-disease/
Cross-comparison of diet quality indices for predicting chronic disease risk: findings from the Observation of Cardiovascular Risk Factors in Luxembourg (ORISCAV-LUX) study.
- Public Health Research
- Competence Center for Methodology and Statistics
The scientific community has become increasingly interested in the overall quality of diets rather than in single food-based or single nutrient-based approaches to examining diet-disease relationships. Despite the plethora of indices used to measure diet quality, questions remain as to which of these can best predict health outcomes. The present study aimed to compare the ability of five diet quality indices, namely the Recommendation Compliance Index (RCI), Diet Quality Index-International (DQI-I), Dietary Approaches to Stop Hypertension (DASH), Mediterranean Diet Score (MDS), and Dietary Inflammatory Index (DII), to detect changes in chronic disease risk biomarkers. Nutritional data from 1352 participants, aged 18-69 years, of the Luxembourg nationwide cross-sectional ORISCAV-LUX (Observation of Cardiovascular Risk Factors in Luxembourg) study, 2007-8, were used to calculate adherence to each diet quality index. General linear modelling was performed to assess trends in biomarkers according to adherence to the different dietary patterns, after adjustment for age, sex, education level, smoking status, physical activity and energy intake. Among the five selected diet quality indices, the MDS exhibited the best ability to detect changes in numerous risk markers and was significantly associated with lower levels of LDL-cholesterol, apo B, diastolic blood pressure, renal function indicators (creatinine and uric acid) and liver enzymes (serum gamma-glutamyl-transpeptidase and glutamate-pyruvate transaminase). Compared with other dietary patterns, higher adherence to the Mediterranean diet is associated with a favourable cardiometabolic, hepatic and renal risk profile. Diets congruent with current universally accepted guidelines may be insufficient to prevent chronic diseases. Clinicians and public health decision makers should be aware of the need to improve the current dietary guidelines.
| https://gcptraining.lih.lu/crp/publication/cross-comparison-of-diet-quality-indices-for-predicting-chronic-disease-risk-findings-from-the-observation-of-cardiovascular-risk-factors-in-luxembourg-oriscav-lux-study-13048 |
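The MDS that performed best in this comparison is, in its common Trichopoulou-style formulation, a simple 0-9 count scored against cohort medians. The sketch below shows that scoring logic; the component names and median values are illustrative, not the ORISCAV-LUX implementation:

```python
def mediterranean_diet_score(intake, median, moderate_alcohol):
    """Trichopoulou-style 0-9 Mediterranean Diet Score.

    Beneficial components score 1 at or above the cohort median,
    detrimental components score 1 below it, and moderate alcohol
    intake contributes a final point."""
    beneficial = ["vegetables", "legumes", "fruit_nuts",
                  "cereals", "fish", "mufa_sfa_ratio"]
    detrimental = ["meat", "dairy"]
    score = sum(intake[c] >= median[c] for c in beneficial)
    score += sum(intake[c] < median[c] for c in detrimental)
    score += int(moderate_alcohol)
    return score

components = ["vegetables", "legumes", "fruit_nuts",
              "cereals", "fish", "mufa_sfa_ratio", "meat", "dairy"]
median = {c: 1.0 for c in components}  # hypothetical cohort medians
ideal = {c: (2.0 if c not in ("meat", "dairy") else 0.5) for c in components}
worst = {c: (0.5 if c not in ("meat", "dairy") else 2.0) for c in components}
```

Scoring against medians makes the index relative to the study population, which is one reason adherence categories differ between cohorts.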
Assessing dietary exposure or nutrient intakes requires detailed dietary data. These data are collected in France by the cross-sectional Individual and National Studies on Food Consumption (INCA). In 2014–2015, the third survey (INCA3) was launched in the framework of the European harmonization process which introduced major methodological changes. The present paper describes the design of the INCA3 survey, its participation rate and the quality of its dietary data, and discusses the lessons learned from the methodological adaptations.
Two representative samples of adults (18–79 years old) and children (0–17 years old) living in mainland France were selected following a three-stage stratified random sampling method using the national census database.
Food consumption was collected through three non-consecutive 24 h recalls (15–79 years old) or records (0–14 years old), supplemented by an FFQ. Information on food supplement use, eating habits, physical activity and sedentary behaviours, health status and sociodemographic characteristics were gathered by questionnaires. Height and body weight were measured.
In total, 4114 individuals (2121 adults, 1993 children) completed the whole protocol.
Participation rate was 41·5% for adults and 49·8% for children. Mean energy intake was estimated as 8795 kJ/d (2102 kcal/d) in adults and 7222 kJ/d (1726 kcal/d) in children and the rate of energy intake under-reporters was 17·8 and 13·9%, respectively.
Following the European guidelines, the INCA3 survey collected detailed dietary data useful for food-related and nutritional risk assessments at national and European level. The impact of the methodological changes on the participation rate should be further studied.
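The INCA3 abstract above reports energy intake in both kJ and kcal; the two figures are linked by the standard conversion factor of 4.184 kJ per kcal, as a quick check shows:

```python
KJ_PER_KCAL = 4.184  # thermochemical calorie convention

def kj_to_kcal(kj):
    """Convert dietary energy from kilojoules to kilocalories."""
    return kj / KJ_PER_KCAL

adult_kcal = kj_to_kcal(8795)  # adults' reported mean daily intake
child_kcal = kj_to_kcal(7222)  # children's reported mean daily intake
```

Both conversions round to the kcal values stated in the abstract (2102 and 1726 kcal/d).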
To assess the effect of famine exposure during early life on dietary patterns, chronic diseases, and the interaction effect between famine exposure and dietary patterns on chronic diseases in adulthood.
Cross-sectional study. Dietary patterns were derived by factor analysis. Multivariate quantile regression and log-binomial regression were used to evaluate the impact of famine exposure on dietary patterns, chronic diseases and the interaction effect between famine exposure and dietary patterns on chronic diseases, respectively.
Hefei, China.
Adults aged 45–60 years (n 939).
‘Healthy’, ‘high-fat and high-salt’, ‘Western’ and ‘traditional Chinese’ dietary patterns were identified. Early-childhood and mid-childhood famine exposure were markedly associated with high intake of the traditional Chinese dietary pattern. Compared with the non-exposed group (prevalence ratio (PR); 95 % CI), early-childhood (3·13; 1·43, 6·84) and mid-childhood (2·37; 1·05, 5·36) exposed groups showed an increased PR for diabetes, and the early-childhood (2·07; 1·01, 4·25) exposed group showed an increased PR for hypercholesterolaemia. Additionally, relative to the combination of non-exposed group and low-dichotomous high-fat and high-salt dietary pattern, the combination of famine exposure in early life and high-dichotomous high-fat and high-salt dietary pattern in adulthood had higher PR for diabetes (4·95; 1·66, 9·05) and hypercholesterolaemia (3·71; 1·73, 7·60), and significant additive interactions were observed.
Having suffered the Chinese famine in childhood might affect an individual’s dietary habits and health status, and the joint effect between famine and harmful dietary pattern could have serious consequences on later-life health outcomes.
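The prevalence ratios in this abstract come from log-binomial regression. With a single binary exposure, the model reduces to the crude ratio of the two prevalences, which can be sketched with a delta-method confidence interval; the counts below are hypothetical, not the study's data:

```python
import math

def prevalence_ratio(cases_exp, n_exp, cases_unexp, n_unexp):
    """Crude prevalence ratio with a 95% CI computed on the log scale.

    For one binary exposure this equals the estimate a log-binomial
    regression would return."""
    p1, p0 = cases_exp / n_exp, cases_unexp / n_unexp
    pr = p1 / p0
    # Delta-method standard error of log(PR)
    se = math.sqrt((1 - p1) / cases_exp + (1 - p0) / cases_unexp)
    lo = math.exp(math.log(pr) - 1.96 * se)
    hi = math.exp(math.log(pr) + 1.96 * se)
    return pr, lo, hi

# Hypothetical 2x2 table: 30/100 cases among exposed, 10/100 among unexposed
pr, lo, hi = prevalence_ratio(30, 100, 10, 100)
```

A PR is preferred over an odds ratio here because the outcomes (diabetes, hypercholesterolaemia) are common, and odds ratios overstate common risks.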
The study aimed to investigate the relationship between physical activity, gross motor skills and adiposity in South African children of pre-school age.
Cross-sectional study.
High-income urban, and low-income urban and rural settings in South Africa.
Children (3–6 years old, n 268) were recruited from urban high-income (n 46), urban low-income (n 91) and rural low-income (n 122) settings. Height and weight were measured to calculate the main outcome variables: BMI and BMI-for-age Z-score (BAZ). Height-for-age and weight-for-age Z-scores were also calculated. Actigraph GT3X+ accelerometers were used to objectively measure physical activity; the Test of Gross Motor Development (Version 2) was used to assess gross motor skills.
More children were overweight/obese and had a higher BAZ from urban low-income settings compared with urban high-income settings and rural low-income settings. Being less physically active was associated with thinness, but not overweight/obesity. Time spent in physical activity at moderate and vigorous intensities was positively associated with BMI and BAZ. Gross motor proficiency was not associated with adiposity in this sample.
The findings of this research highlight the need for obesity prevention particularly in urban low-income settings, as well as the need to take into consideration the complexity of the relationship between adiposity, physical activity and gross motor skills in South African pre-school children.
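BAZ and the other Z-scores used in the study above are conventionally computed with Cole's LMS method, as in the WHO growth references: a measurement is standardized against age- and sex-specific skewness (L), median (M) and coefficient-of-variation (S) parameters. A sketch follows; the L, M, S values are hypothetical, not WHO table entries:

```python
import math

def lms_zscore(x, L, M, S):
    """Cole's LMS transformation of a measurement x into a z-score.

    When L == 0 the power transform degenerates to the log-normal
    limit, so that case is handled separately."""
    if L == 0:
        return math.log(x / M) / S
    return ((x / M) ** L - 1) / (L * S)

# Hypothetical BMI-for-age parameters for one age/sex stratum
z_at_median = lms_zscore(16.0, L=-1.6, M=16.0, S=0.08)
z_above = lms_zscore(18.0, L=-1.6, M=16.0, S=0.08)
```

A child exactly at the median gets z = 0, and z increases monotonically with the measurement regardless of the sign of L.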
There is an urgent need to find effective methods of supporting individuals to make dietary behaviour changes. Peer-supported interventions (PSI) have been suggested as a cost-effective strategy to support chronic disease self-management. However, the effect of PSI on dietary behaviour is unclear. The present systematic review aimed to assess the effectiveness of PSI for encouraging dietary behaviour change in adults and to consider intervention characteristics linked with effectiveness.
Electronic databases were searched until June 2018 for randomised controlled trials assessing the effectiveness of PSI compared with an alternative intervention and/or control on a dietary related outcome in adults. Following title and abstract screening, two reviewers independently screened full texts and data were extracted by one reviewer and independently checked by another. Results were synthesised narratively.
Randomised controlled trials.
Adult studies.
The fifty-four included studies varied in participants, intervention details and results. More PSI reported a positive or mixed effect on diet than reported no effect. Most interventions used a group model and were lay-led by peer supporters. Several studies did not report intervention intensity, fidelity, or peer training and support in detail. Studies reporting positive effects employed more behaviour change techniques (BCT) than studies reporting no effect; however, heterogeneity between studies was considerable.
As evidence was mixed, further interventions need to assess the effect of PSI on dietary behaviour, describe intervention content (theoretical basis, BCT, intensity and peer training/support) and include a detailed process evaluation.
We aimed to assess the feasibility of a simple new fifteen-item FFQ as a tool for screening risk of poor dietary patterns in a healthy middle-aged population and to investigate how the results of the FFQ correlated with cardiovascular risk factors and socio-economic factors.
A randomized population-based cross-sectional study. Metabolic measurements for cardiovascular risk factors and information about lifestyle were collected. A fifteen-item FFQ was created to obtain information about dietary patterns. From the FFQ, a healthy eating index was created with three dietary groups: good, average and poor. Multivariate logistic regression was used to assess relationships between dietary patterns and cardiovascular risk factors.
Sweden.
Men and women aged 50 years and living in Gothenburg, Sweden.
In total, 521 middle-aged adults (257 men, 264 women) were examined. With good dietary pattern as the reference, there was a gradient association of having obesity, hypertension and high serum TAG in those with average and poor dietary patterns. After adjustment for education and lifestyle factors, individuals with a poor dietary pattern still had significantly higher risk (OR; 95 % CI) of obesity (2·33; 1·10, 4·94), hypertension (2·73; 1·44, 5·20) and high serum TAG (2·62; 1·33, 5·14) compared with those with a good dietary pattern.
Baseline data collected by a short FFQ can predict cardiovascular risk factors in middle-aged Swedish men and women. The FFQ could be a useful tool in health-care settings, when screening for risk of poor dietary patterns.
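The healthy eating index described above collapses fifteen FFQ item scores into three groups. A minimal sketch of that classification step follows; the cut-offs are illustrative, not the study's actual thresholds:

```python
def classify_diet(item_scores, good_cutoff=20, poor_cutoff=10):
    """Sum fifteen FFQ item scores and collapse the total into the
    study's three dietary-pattern groups.

    The cut-off values are hypothetical placeholders."""
    total = sum(item_scores)
    if total >= good_cutoff:
        return "good"
    if total <= poor_cutoff:
        return "poor"
    return "average"
```

In the study's logistic models, "good" then serves as the reference category against which odds ratios for "average" and "poor" patterns are estimated.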
The purpose of the present meta-analysis was to evaluate the association between the inflammatory potential of diet, determined by the dietary inflammatory index (DII®) score, and depression.
Systematic review and meta-analysis.
A comprehensive literature search was conducted in PubMed, Web of Science and EMBASE databases up to August 2018. All observational studies that examined the association of the DII score with depression/depressive symptoms were included.
Four prospective cohorts and two cross-sectional studies enrolling a total of 49 584 subjects.
Overall, individuals in the highest DII v. the lowest DII category had a 23 % higher risk of depression (risk ratio (RR)=1·23; 95 % CI 1·12, 1·35). When stratified by study design, the pooled RR was 1·25 (95 % CI 1·12, 1·40) for the prospective cohort studies and 1·16 (95 % CI 0·96, 1·41) for the cross-sectional studies. Gender-specific analysis showed that this association was observed in women (RR=1·25; 95 % CI 1·09, 1·42) but was not statistically significant in men (RR=1·15; 95 % CI 0·83, 1·59).
The meta-analysis suggests that pro-inflammatory diet estimated by a higher DII score is independently associated with an increased risk of depression, particularly in women. However, more well-designed studies are needed to evaluate whether an anti-inflammatory diet can reduce the risk of depression.
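Pooling risk ratios as in this meta-analysis typically starts by recovering each study's standard error from its reported confidence interval, using the fact that the interval is symmetric on the log scale. A sketch, using the review's pooled figures purely as a numerical example:

```python
import math

def se_from_ci(lo, hi, z=1.96):
    """Recover the standard error of log(RR) from a reported 95% CI.

    The CI is symmetric on the log scale, so half the width of the
    log-interval divided by the normal quantile gives the SE."""
    return (math.log(hi) - math.log(lo)) / (2 * z)

se = se_from_ci(1.12, 1.35)
# The geometric midpoint of the CI should reproduce the point estimate
rr_mid = math.exp((math.log(1.12) + math.log(1.35)) / 2)
```

The midpoint check is a useful sanity test when transcribing estimates from papers: if it disagrees noticeably with the reported RR, the interval was likely not computed on the log scale.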
To identify most commonly consumed foods by adolescents contributing to percentage of total energy, added sugars, SFA, Na and total gram intake per day.
Data from the National Health and Nutrition Examination Survey (NHANES) 2011–2014.
NHANES is a cross-sectional study nationally representative of the US population.
One 24 h dietary recall was used to assess dietary intake of 3156 adolescents aged 10–19 years. What We Eat in America food category classification system was used for all foods consumed. Food sources of energy, added sugars, SFA, Na and total gram amount consumed were sample-weighted and ranked based on percentage contribution to intake of total amount.
Three-highest ranked food subgroup sources of total energy consumed were: sugar-sweetened beverages (SSB; 7·8 %); sweet bakery products (6·9 %); mixed dishes – pizza (6·6 %). Highest ranked food sources of total gram amount consumed were: plain water (33·1 %); SSB (15·8 %); milk (7·2 %). Three highest ranked food sources of total Na were: mixed dishes – pizza (8·7 %); mixed dishes – Mexican (6·7 %); cured meats/poultry (6·6 %). Three highest ranked food sources of SFA were: mixed dishes – pizza (9·1 %); sweet bakery products (8·3 %); mixed dishes – Mexican (7·9 %). Three highest ranked food sources of added sugars were: SSB (42·1 %); sweet bakery products (12·1 %); coffee and tea (7·6 %).
Identifying current food sources of percentage energy, nutrients to limit and total gram amount consumed among US adolescents is critical for designing strategies to help them meet nutrient recommendations within energy needs.
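The ranking described above is a sample-weighted percentage-contribution calculation: each recalled food amount is multiplied by its survey weight, summed within food categories, and expressed as a share of the weighted total. A minimal sketch with hypothetical records:

```python
def percent_contribution(records):
    """records: iterable of (food_category, sample_weight, amount).

    Returns each category's sample-weighted percentage share of the
    total amount, the quantity used to rank food sources."""
    totals = {}
    for cat, weight, amount in records:
        totals[cat] = totals.get(cat, 0.0) + weight * amount
    grand = sum(totals.values())
    return {cat: 100.0 * t / grand for cat, t in totals.items()}

# Hypothetical recall records: (category, survey weight, grams consumed)
shares = percent_contribution([("SSB", 2.0, 150),
                               ("pizza", 1.0, 300),
                               ("SSB", 1.0, 100)])
```

The same routine applies whether the amount column holds energy, sodium, saturated fat or gram weight, which is how one recall can yield all five rankings in the abstract.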
To assess access to healthy food retailers among formerly incarcerated individuals.
Using linked data from the National Longitudinal Study of Adolescent to Adult Health and the Modified Retail Food Environment Index, the present study applies multivariate logistic regression to assess the association between incarceration and (i) living in a food desert and (ii) having low access to healthy food retailers. To account for unobserved heterogeneity, additional analyses are performed comparing formerly incarcerated individuals with persons arrested or convicted for a crime but not previously incarcerated.
Sample of respondents living in urban census tracts in the USA.
Adults (n 10390) aged 24–34 years.
In adjusted logistic regression models, prior incarceration was not significantly associated with living in a food desert (OR=1·097; 95% CI 0·896, 1·343). Prior incarceration significantly increased the likelihood of living in a census tract with low access to healthy food retailers (OR=1·265; 95% CI 1·069, 1·498). This significant association remained when comparing formerly incarcerated individuals with those who had been arrested or convicted of a crime, but not previously incarcerated (OR=1·246; 95% CI 1·032, 1·503).
Formerly incarcerated individuals are more likely to live in areas with low access to healthy food retailers compared with their non-incarcerated counterparts. Because lower access healthy food retailers may be associated with worse health and dietary behaviour, disparities in local food retail environments may exacerbate health inequalities among formerly incarcerated individuals.
People who eat alone (a growing trend, given the increasing proportion of one-person households in Korea) are more likely to become overweight and obese. Therefore, we investigated the association between having a dinner companion and BMI.
A linear regression model adjusted for covariates was utilized to examine the association between having a dinner companion and BMI. Subgroup analyses were performed, stratified by age group, gender, household income, educational level and occupation.
We used the data from the Korean Health and Nutrition Examination Survey VI. Our primary independent variable was having a dinner companion while the dependent variable was BMI.
In total, 13303 individuals, aged 20 years or over, were analysed.
Compared with the solo eating group, BMI was lower in the family dinner group (β=−0·39, P<0·01) but not in the non-family dinner group (β=−0·06, P=0·67). The subgroup analysis revealed that the difference in BMI was most significant in young generations, such as those aged 20–29 years (β=−1·15, P<0·01) and 30–39 years (β=−0·78, P=0·01).
We found that people who eat dinner alone are more likely to become overweight and obese than those who eat with their family. This association was stronger in males and young adults than their counterparts. Considering the increasing trends in the proportion of single-person households and solo eating, appropriate intervention is needed.
Consumption of fruits and vegetables has been shown to contribute to mental and cognitive health in older adults from Western industrialized countries. However, it is unclear whether this effect replicates in older adults from non-Western developing countries. Thus, the present study examined the contribution of fruit and vegetable consumption to mental and cognitive health in older persons from China, India, Mexico, Russia, South Africa and Ghana.
Representative cross-sectional and cross-national study.
We used data from the WHO Study on Global Ageing and Adult Health (SAGE), sampled in 2007 to 2010. Our final sample size included 28 078 participants.
Fruit and vegetable consumption predicted an increased cognitive performance in older adults including improved verbal recall, improved delayed verbal recall, improved digit span test performance and improved verbal fluency; the effect of fruit consumption was much stronger than the effect of vegetable consumption. Regarding mental health, fruit consumption was significantly associated with better subjective quality of life and less depressive symptoms; vegetable consumption, however, did not significantly relate to mental health.
Consumption of fruits is associated with both improved cognitive and mental health in older adults from non-Western developing countries, and consumption of vegetables is associated with improved cognitive health only. Increasing fruit and vegetable consumption might be one easy and cost-effective way to improve the overall health and quality of life of older adults in non-Western developing countries.
To examine the association between household food insecurity and dietary diversity in the past 24h (dietary diversity score (DDS, range: 0–9); minimum dietary diversity (MDD, consumption of three or more food groups); consumption of nine separate food groups) among pregnant and lactating women in rural Malawi.
Two rural districts in Central Malawi.
Pregnant (n 589) and lactating (n 641) women.
Of surveyed pregnant and lactating women, 66·7 and 68·6 %, respectively, experienced moderate or severe food insecurity and only 32·4 and 28·1 %, respectively, met MDD. Compared with food-secure pregnant women, those who reported severe food insecurity had a 0·36 lower DDS (P<0·05) and more than threefold higher risk (OR; 95 % CI) of not consuming meat/fish (3·19; CI 1·68, 6·03). The risk of not consuming eggs (3·77; 1·04, 13·7) was higher among moderately food-insecure pregnant women. Compared with food-secure lactating women, those who reported mild, moderate and severe food insecurity showed a 0·36, 0·44 and 0·62 lower DDS, respectively (all P<0·05). The risk of not achieving MDD was higher among moderately (1·95; 1·06, 3·59) and severely (2·82; 1·53, 5·22) food-insecure lactating women. The risk of not consuming meat/fish and eggs increased in a dose–response manner among lactating women experiencing mild (1·75; 1·01, 3·03 and 2·81; 1·09, 7·25), moderate (2·66; 1·47, 4·82 and 3·75; 1·40, 10·0) and severe (5·33; 2·63, 10·8 and 3·47; 1·19, 10·1) food insecurity.
Addressing food insecurity during and after pregnancy needs to be considered when designing nutrition programmes aiming to increase dietary diversity in rural Malawi.
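The DDS and MDD outcomes above are straightforward counts over food groups eaten in the previous 24 h. A sketch using an illustrative nine-group list (the grouping here is an assumption, but the MDD threshold of three or more groups is as the study defines it):

```python
# Illustrative nine food groups behind a 0-9 dietary diversity score
FOOD_GROUPS = [
    "grains", "legumes", "nuts_seeds", "dairy", "meat_fish",
    "eggs", "dark_leafy_greens", "vitamin_a_fruits_veg", "other_fruits_veg",
]

def dds(consumed):
    """Dietary diversity score: number of the nine groups eaten in 24 h."""
    return sum(1 for g in FOOD_GROUPS if g in consumed)

def meets_mdd(consumed, threshold=3):
    """Minimum dietary diversity as the study defines it: >= 3 groups."""
    return dds(consumed) >= threshold
```

Because each group contributes at most one point, skipping entire groups such as meat/fish or eggs, as the food-insecure women in the study often did, directly lowers DDS and the chance of meeting MDD.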
Obesity and hyperglycaemia contribute to the atherosclerotic process in part through oxidative modifications to lipoprotein particles. The present study aimed to evaluate the effects of a lifestyle intervention on markers of oxidized lipoproteins in obese Latino adolescents with prediabetes.
Pre–post design.
Participants were enrolled into a 12-week lifestyle intervention. Measurements pre- and post-intervention included anthropometrics and body composition, lipid panel, oxidized LDL (oxLDL), oxidized HDL (oxHDL), intake of fresh fruits and vegetables, and cardiorespiratory fitness.
Thirty-five obese Latino adolescents (seventeen females, eighteen males; mean age 15·5 (sd 1·0) years; mean BMI percentile 98·5 (sd 1·2)) with prediabetes.
Intervention participation resulted in significant reductions in weight (−1·2 %, P = 0·042), BMI and BMI percentile (−2·0 and −0·4 %, respectively, P < 0·001), body fat (−7·0 %, P = 0·025), TAG (−11·8 %, P = 0·032), total cholesterol (−5·0 %, P = 0·002), VLDL-cholesterol (−12·5 %, P = 0·029), and non-HDL-cholesterol (−6·7 %, P = 0·007). Additionally, fitness (6·4 %, P < 0·001) and intake of fruits and vegetables (42·4 %, P = 0·025) increased significantly. OxLDL decreased significantly after the intervention (51·0 (sd 14·0) v. 48·7 (sd 12·8) U/l, P = 0·022), while oxHDL trended towards a significant increase (395·2 (sd 94·6) v. 416·1 (sd 98·4) ng/ml, P = 0·056).
These data support the utility of lifestyle intervention to improve the atherogenic phenotype of Latino adolescents who are at high risk for developing premature CVD and type 2 diabetes.
We built an app to help clients of food pantries. The app offers vegetable-based recipes, food tips and no-cost strategies for making mealtimes healthier and for bargain-conscious grocery shopping, among other themes. Users customize materials to meet their own preferences. The app, available in English and Spanish, has been tested in a randomized field trial.
A randomized controlled trial with repeated measures across 10 weeks.
Clients of fifteen community food pantry distributions in Los Angeles County, USA.
Distributions were randomized to control and experimental conditions, and 289 household cooks and one of their 9–14-year-old children were enrolled as participants. Experimental dyads were given a smartphone with our app and a phone use-plan, then trained to use the app. ‘Test vegetables’ were added to the foods that both control and experimental participants received at their pantries.
After 3–4 weeks of additional ‘test vegetables’, cooks at experimental pantries had made 38 % more preparations with these items than control cooks (P = 0·03). Ten weeks following baseline, experimental pantries also scored greater gains in using a wider assortment of vegetables than control pantries (P = 0·003). Use of the app increased between mid-experiment and final measurement (P = 0·0001).
The app appears to encourage household cooks to try new preparation methods and widen their incorporation of vegetables into family diets. Further research is needed to identify specific app features that contributed most to outcomes and to test ways in which to disseminate the app widely.
To evaluate the implementation of the Uruguayan healthy snacking initiative in primary and secondary schools in the capital, and to explore the factors underlying compliance from the perspective of school principals.
A mixed-method approach was used, which included semi-structured interviews with school principals and a survey of the foods and beverages sold and advertised in the schools.
Primary and secondary schools in Montevideo (the capital city of Uruguay).
School principals.
The great majority of the schools did not comply with the initiative. Exhibition of non-recommended products was the main cause for non-compliance, followed by advertising of non-recommended products through promotional activities of food and beverage companies. Although school principals were aware of the healthy snack initiative and showed a positive attitude towards it, the majority lacked knowledge about its specific content. Factors underlying compliance with the healthy snacking initiative were related to its characteristics, characteristics of the schools, and external factors such as family habits and advertising.
Results showed that the rationale underlying the selling of products at schools favours the availability of ultra-processed products and constitutes the main barrier for the promotion of healthy dietary habits among children and adolescents. Strategies aimed at facilitating the identification of unhealthy foods and beverages and provision of incentives to canteen managers to modify their offer are recommended.
To simulate effects of different scenarios of folic acid fortification of food on dietary folate equivalents (DFE) intake in an ethnically diverse sample of pregnant women.
A forty-four-item FFQ was used to evaluate dietary intake of the population. DFE intakes were estimated for different scenarios of food fortification with folic acid: (i) voluntary fortification; (ii) increased voluntary fortification; (iii) simulated bread mandatory fortification; and (iv) simulated grains-and-rice mandatory fortification.
Ethnically and socio-economically diverse cohort of pregnant women in New Zealand.
Pregnant women (n 5664) whose children were born in 2009–2010.
Participants identified their ethnicity as European (56·0 %), Asian (14·2 %), Māori (13·2 %), Pacific (12·8 %) or Others (3·8 %). Bread, breakfast cereals and yeast spread were the main food sources of DFE in the two voluntary fortification scenarios; for Asian women, however, green leafy vegetables, bread and breakfast cereals were the main contributors of DFE in these scenarios. In descending order, proportions of the different ethnic groups in the lowest tertile of DFE intake across the four fortification scenarios were: Asian (39–60 %), Others (41–44 %), European (31–37 %), Pacific (23–26 %) and Māori (23–27 %). In comparisons within each ethnic group across fortification scenarios, differences were observed only for the simulated grains-and-rice mandatory fortification scenario, in which DFE intake was higher than in the other scenarios.
If grain and rice fortification with folic acid was mandatory in New Zealand, DFE intakes would be more evenly distributed among pregnant women of different ethnicities, potentially reducing ethnic group differences in risk of lower folate intakes.
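The intakes above are reported as dietary folate equivalents (DFE), which combine natural food folate with synthetic folic acid weighted for its higher bioavailability. A minimal sketch of the standard conversion convention (the widely used 1.7 weighting, not code from the study itself):

```python
def dfe_micrograms(natural_folate_ug, folic_acid_ug):
    """Dietary folate equivalents: folic acid consumed with food is
    conventionally weighted 1.7x relative to natural food folate."""
    return natural_folate_ug + 1.7 * folic_acid_ug

# 150 ug natural folate + 100 ug folic acid from fortified bread
# -> 150 + 170 = 320 ug DFE
```

Under a mandatory-fortification scenario, the simulation amounts to raising the `folic_acid_ug` term for every consumer of the fortified staple.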
To estimate changes in taxed and untaxed beverages by volume of beverage purchased after a sugar-sweetened beverage (SSB) tax was introduced in 2014 in Mexico.
We used household purchase data from January 2012 to December 2015. We first classified the sample into four groups based on pre-tax purchases of beverages: (i) higher purchases of taxed beverages and lower purchases of untaxed beverages (HTLU-unhealthier); (ii) higher purchases of both types of beverages (HTHU); (iii) lower purchases of taxed and untaxed beverages (LTLU); and (iv) lower purchases of taxed beverages and higher purchases of untaxed beverages (LTHU-healthier). Next, we estimated differences in purchases after the tax was implemented for each group compared with a counterfactual based on pre-tax trends using a fixed-effects model.
Areas with more than 50 000 residents in Mexico.
Households (n 6089).
The HTLU-unhealthier and HTHU groups had the largest absolute and relative reductions in taxed beverages and increased their purchases of untaxed beverages. Households with lower purchases of untaxed beverages (HTLU-unhealthier and LTLU) had the largest absolute and relative increases in untaxed beverages. We also found that among households with higher purchases of taxed beverages, the group with lowest socio-economic status had the greatest reduction in purchases of taxed beverages.
Evidence associating the SSB tax with larger reductions among high purchasers of taxed beverages prior to the tax is relevant, as higher SSB purchasers have a greater risk of obesity, diabetes and other cardiometabolic outcomes.
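The counterfactual comparison described above can be illustrated in miniature: extrapolate the pre-tax purchase trend and subtract it from observed post-tax purchases. This toy sketch uses made-up numbers and a simple least-squares trend on one series, not the study's household-level fixed-effects model:

```python
def trend_counterfactual_gap(pre, post):
    """Fit a linear trend to pre-tax purchases (one value per period),
    extrapolate it over the post-tax periods, and return the mean
    observed-minus-counterfactual difference."""
    n = len(pre)
    xbar = (n - 1) / 2
    ybar = sum(pre) / n
    sxx = sum((i - xbar) ** 2 for i in range(n))
    sxy = sum((i - xbar) * (y - ybar) for i, y in enumerate(pre))
    slope = sxy / sxx
    intercept = ybar - slope * xbar
    gaps = [obs - (intercept + slope * (n + j)) for j, obs in enumerate(post)]
    return sum(gaps) / len(gaps)

# Purchases trend up 1 unit per period before the tax, then flatten:
# the post-tax gap versus the extrapolated trend is -1.5 units.
print(trend_counterfactual_gap([10, 11, 12, 13], [13, 13]))  # -> -1.5
```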
Unequal obesity distributions among adult populations have been reported in low- and middle-income countries, but mainly based on data of women of reproductive age. Moreover, incorporation of ever-changing skewed BMI distributions in analyses has been a challenge. Our study aimed to assess magnitude and rates of change in BMI distributions by age and sex.
Shapes of BMI distributions were estimated for 2005 and 2010, and their changes were assessed, using the generalized additive model for location, scale and shape (GAMLSS) and assuming BMI follows a Box–Cox power exponential (BCPE) distribution.
Nationally representative, repeated cross-sectional health surveys conducted between 2005 and 2013 in Mexico, Colombia and Peru.
Adult men and non-pregnant women aged 20–69 years.
Whereas women had more right-shifted and wider BMI distributions than men in almost all age groups across the countries in 2010, men in their 30s–40s experienced more rapid increases in BMI between 2005 and 2010, notably in Peru. The highest increase in overweight and obesity prevalence was observed among Peruvian men of 35–39 years, with a 5-year increase of 21 percentage points.
The BCPE–GAMLSS method is an alternative to analyse measurements with time-varying distributions visually, in addition to conventional indicators such as means and prevalences. Consideration of differences in BMI distributions and their changes by sex and age would provide vital information in tailoring relevant policies and programmes to reach target populations effectively. Increases in BMI portend increases of obesity-associated diseases, for which preventive and preparative actions are urgent.
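For readers unfamiliar with BCPE, the model describes BMI with a median (mu), scale (sigma), skewness (nu) and tail-heaviness (tau) parameter. At its core is a Box–Cox transform mapping an observed BMI to a z-like score; a simplified sketch with tau omitted and purely illustrative parameter values (not the study's estimates):

```python
import math

def box_cox_z(y, mu, sigma, nu):
    """Box-Cox transformed z-like score used in BCPE/LMS-type models
    (the tail-heaviness parameter tau is omitted here)."""
    if nu == 0:
        return math.log(y / mu) / sigma
    return ((y / mu) ** nu - 1.0) / (nu * sigma)

# With nu = 1 this reduces to an ordinary relative deviation:
# a BMI of 27.5 against mu = 25, sigma = 0.1 gives z ~= 1.0.
```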
During the early months of 2020, the world entered a novel, violent and relentless pandemic era. By the end of the year, more than seventy-seven million cases of COVID-19 had been reported around the globe. Because the disease is highly contagious, the measures adopted by most nations to prevent infection included social distancing and quarantine. How did these measures affect people's relationship with alcohol in cultures where drinking plays an important social role? A questionnaire-based study, designed to track drinking behaviour before and during lockdown, was administered during the strict phase of lockdown to two cultural groups impacted by the pandemic: the British and Spanish populations (179 participants from each country were interviewed). For the frequency of consumption of the alcoholic beverages evaluated (wine, beer, cider, whisky and spirits), a significant lockdown*country interaction was observed. Overall, Spanish participants consumed alcoholic beverages less frequently during lockdown than before, while British participants reported no change in their consumption habits. The Spaniards' decrease in alcohol consumption is related to the absence of social contexts, while the British seem to have adapted their consumption to the modified context. The results suggest that alcohol consumption is a core element of British culture, whereas for the Spanish, socialization is the key cultural characteristic rather than the alcohol itself.
Long-Term Trends (1994-2011) and Predictors of Total Alcohol and Alcoholic Beverages Consumption: The EPIC Greece Cohort
The aim of this study was to evaluate the longitudinal changes in alcohol consumption (total alcohol and types of alcoholic beverages) of the Greek EPIC cohort participants (28,572) during a 17-year period (1994-2011), with alcohol information being recorded repeatedly over time. Descriptive statistics were used to show crude trends in drinking behavior. Mixed-effects models were used to study the consumption of total alcohol, wine, beer and spirits/other alcoholic beverages in relation to birth cohort, socio-demographic, lifestyle and health factors. We observed a decreasing trend of alcohol intake as age increased, consistent for total alcohol consumption and the three types of beverages. Older birth cohorts had lower initial total alcohol consumption (8 vs. 10 g/day) and steeper decline in wine, spirits/other alcoholic beverages and total alcohol consumption compared to younger cohorts. Higher education and smoking at baseline had a positive association with longitudinal total alcohol consumption, up to +30% (vs. low education) and more than +25% (vs. non-smoking) respectively, whereas female gender, obesity, history of heart attack, diabetes, peptic ulcer and high blood pressure at baseline had a negative association of -85%, -25%, -16%, -37%, -22% and -24% respectively. Alcohol consumption changed over age with different trends among the studied subgroups and types of alcohol, suggesting targeted monitoring of alcohol consumption.
When tides turn: how does drinking change when per capita alcohol consumption drops?
PURPOSE: A period of first increasing and then decreasing alcohol consumption in Finland in the 2000s offers an opportunity to scrutinize how population-level changes stem from varying developments in different population subgroups and drinking patterns. We examine 1) whose consumption changed in terms of age, sex, and level of consumption, and 2) how drinking patterns changed and whether the changes indicated steps toward a more Mediterranean drinking style.
MATERIAL AND METHODS: The main data source was the Finnish Drinking Habits surveys of 2000, 2008, and 2016 of the general Finnish population aged 15-69 years (n = 6703, response rates 59-78%).
RESULTS: Before 2008, consumption increased particularly among women and Finns aged 50+. After 2008, abstinence became more frequent and regular drinking less frequent. Additionally, heavy episodic drinking decreased, especially among men and in younger age groups. However, compared with earlier years, similar volumes of alcohol consumption did not result from a more Mediterranean drinking style, i.e. consuming smaller quantities more frequently. Finnish men continue to report very high maximum drinking amounts. The changes in both periods occurred as collective changes across the whole continuum of consumption, from light to heavy drinkers.
CONCLUSIONS: Overall, our findings indicate that during the period of decreasing per capita alcohol consumption, both the frequency of drinking overall and of heavy episodic drinking decreased, but heavy episodic drinking is still prevalent.
The Contribution of Alcohol Beverage Types to Consumption, Heavy Drinking, and Alcohol-Related Harms: A Comparison across Five Countries
BACKGROUND: This study examined the relative contribution of alcoholic beverage types to overall alcohol consumption and associations with heavy alcohol use and alcohol-related harms among adults.
METHODS: Cross-sectional survey data were collected from adult samples in two cities involved in the Global Smart Drinking Goals (GSDG) initiative in each of five countries (Belgium, Brazil, China, South Africa, United States). Survey measures included past-30-day consumption of beer, wine, flavored alcoholic drinks, spirits, and homemade alcohol; past-30-day heavy drinking; 14 alcohol-related harms in the past 12 months; and demographic characteristics. Within each country, we computed the proportion of total alcohol consumption for each beverage type. Regression analyses were conducted to estimate the relative associations between consumption of each alcoholic beverage type, heavy alcohol use, and alcohol-related harms, controlling for demographic characteristics.
RESULTS: Beer accounted for at least half of total alcohol consumption in GSDG cities in Belgium, Brazil, the U.S., and South Africa, and 35% in China. Regression analyses indicated that greater beer consumption was associated with heavy drinking episodes and with alcohol-related harms in the cities in Belgium, Brazil, South Africa, and the U.S. Significant increases in heavy drinking and alcohol-related harms were also consistently observed for spirits consumption.
CONCLUSIONS: Beer accounts for the greatest proportion of total alcohol consumption in most of the GSDG cities and was consistently associated with more heavy drinking episodes and alcohol-related harms. Reducing beer consumption through evidence-based interventions may therefore have the greatest impact on hazardous drinking and alcohol-related harms.
Dose-Response Relationships between Levels of Alcohol Use and Risks of Mortality or Disease, for All People, by Age, Sex, and Specific Risk Factors
Alcohol use has been causally linked to more than 200 disease and injury conditions, as defined by three-digit ICD-10 codes. The understanding of how alcohol use is related to these conditions is essential to public health and policy research. Accordingly, this study presents a narrative review of different dose-response relationships for alcohol use. Relative-risk (RR) functions were obtained from various comparative risk assessments. Two main dimensions of alcohol consumption are used to assess disease and injury risk: (1) volume of consumption, and (2) patterns of drinking, operationalized via frequency of heavy drinking occasions. Lifetime abstention was used as the reference group. Most dose-response relationships between alcohol and outcomes are monotonic, but for diabetes type 2 and ischemic diseases, there are indications of a curvilinear relationship, where light to moderate drinking is associated with lower risk compared with not drinking (i.e., RR < 1). In general, women experience a greater increase in RR per gram of alcohol consumed than men. The RR per gram of alcohol consumed was lower for people of older ages. RRs indicated that alcohol use may interact synergistically with other risk factors, in particular with socioeconomic status and other behavioural risk factors, such as smoking, obesity, or physical inactivity. The literature on the impact of genetic constitution on dose-response curves is underdeveloped, but certain genetic variants are linked to an increased RR per gram of alcohol consumed for some diseases. When developing alcohol policy measures, including low-risk drinking guidelines, dose-response relationships must be taken into consideration.
Where and What You Drink Is Linked to How Much You Drink: An Exploratory Survey of Alcohol Use in 17 Countries
BACKGROUND: This paper aimed to explore the differences in subjective experiences of intoxication depending on drinking location and drink type.
METHODS: Data came from 32,194 respondents to The Global Drug Survey (GDS) 2015, an annual, cross-sectional, online survey. Respondents selected their usual drinking location (home alone: home with partner/family: house parties: pubs/bars or clubs) and usual drink (wine; beer/cider/lager; spirits or alcopops/coolers). They indicated how many drinks they required to reach three stages of intoxication (feeling the effects; an ideal stage of intoxication; and the tipping point) and how frequently they reached each stage.
RESULTS: Drink type affected the grams of alcohol reported to reach the tipping point: 109 g for wine, 127 g for alcopops, 133 g for beer, and 134 g for spirits. Respondents who drank at home alone or in clubs reached their tipping point more frequently compared to other locations.
CONCLUSIONS: Where people drink, and the type of alcohol they drink, affected the amount of alcohol reported to reach different stages of intoxication. Understanding why different drinking locations and drink types lead to greater consumption to reach an ideal state of drunkenness (for example, social cues from other people who drink) may enable people to reduce their drinking.
Cross-national time trends in adolescent alcohol use from 2002 to 2014
BACKGROUND: Adolescent alcohol consumption is a major public health concern that should be continuously monitored. This study aims (i) to analyze country-level trends in weekly alcohol consumption, drunkenness and early initiation in alcohol consumption and drunkenness among 15-year-old adolescents from 39 countries and regions across Europe and North America between 2002 and 2014 and (ii) to examine the geographical patterns in adolescent alcohol-related behaviours.
METHODS: The sample was composed of 250 161 adolescents aged 15 from 39 countries and regions from Europe and North America. Survey years were 2002, 2006, 2010 and 2014. The alcohol consumption and drunkenness items of the HBSC questionnaire were employed. Prevalence ratios and 95% confidence intervals were estimated using Poisson regression models with robust variance.
RESULTS: Data show a general decrease in all four alcohol variables between 2002 and 2014, with exceptions in some countries. However, there is variability both within countries (depending on the alcohol-related behaviour under study) and across countries (in the onset and shape of trends). Some countries have not reduced, or have even increased, their levels of some variables. Although some particularities have persisted over time, there are no robust patterns by region.
CONCLUSIONS: Despite an overall decrease in adolescent alcohol consumption, special attention should be paid to those countries where declines are not present, or despite decreasing, rates are still high. Further research is needed to clarify factors associated with adolescent drinking, to better understand country specificities and to implement effective policies.
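Prevalence ratios of the kind estimated above (via Poisson regression with robust variance) can be approximated for a single crude comparison using a log-normal confidence interval. A hand-rolled sketch with made-up counts, not the study's adjusted models:

```python
import math

def prevalence_ratio(cases_a, n_a, cases_b, n_b, z=1.96):
    """Crude prevalence ratio (group A vs. group B) with an
    approximate log-normal confidence interval."""
    pr = (cases_a / n_a) / (cases_b / n_b)
    se = math.sqrt(1 / cases_a - 1 / n_a + 1 / cases_b - 1 / n_b)
    return pr, pr * math.exp(-z * se), pr * math.exp(z * se)

# e.g. 30/100 weekly drinkers in one survey wave vs. 15/100 in a
# later wave gives PR = 2.0 with a CI spanning that estimate.
```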
Ageing and Alcohol: Drinking Typologies among Older Adults
The effect of calorie and physical activity equivalent labelling of alcoholic drinks on drinking intentions in participants of higher and lower socioeconomic position: An experimental study
The interplay of Western diet and binge drinking on the onset, progression, and outlook of liver disease
Non-alcoholic fatty liver disease and alcoholic liver disease, the two most prevalent liver diseases worldwide, share a common pathology but have largely been considered disparate diseases. Liver diseases are widely underestimated, but their prevalence is increasing worldwide.
The Western diet (high-fat, high-sugar) and binge drinking (rapid consumption of alcohol in a short period of time) are two highly prevalent features of standard life in the United States, and both are linked to the development and progression of liver disease. Yet, few studies have been conducted to elucidate their potential interactions. Data show that binge drinking is on the rise in several age groups, and poor dietary trends continue to be prevalent.
This review serves to summarize the sparse findings on the hepatic consequences of the combination of binge drinking and consuming a Western diet, while also drawing conclusions on potential future impacts. The data suggest the potential for a looming liver disease epidemic, indicating that more research on its progression as well as its prevention is needed on this critical topic.
Has beverage composition of alcohol consumption in Sweden changed over time? An age-period-cohort analysis
Longitudinal dimensions of alcohol consumption and dietary intake in the Framingham Heart Study Offspring Cohort (1971-2008)
Existing studies addressing alcohol consumption have not captured the multidimensionality of drinking patterns, including drinking frequency, binge drinking, beverage preference and changes in these measures across the adult life course.
We examined longitudinal trends in drinking patterns and their association with diet over four decades in ageing US adults from the Framingham Offspring Study (n 4956; baseline mean age 36.2 years). Alcohol intake (drinks/week, drinking frequency, beverage-specific consumption, drinks/occasion) was assessed quadrennially from examinations 1 to 8.
Participants were classified as binge drinkers, moderate drinkers or heavy drinkers (4+ and 5+ drinks/occasion; >7 and >14 drinks/week for women and men, respectively). Dietary data were collected by a FFQ from examinations 5 to 8 (1991-2008). We evaluated trends in drinking patterns using linear mixed effect models and compared dietary intake across drinking patterns using heterogeneous variance models. Alcohol consumption decreased from 1971 to 2008 (3.7 v. 2.2 oz/week; P < 0.05).
The proportion of moderate (66 v. 59.3 %), heavy (18.4 v. 10.5 %) and binge drinkers (40.0 v. 12.3 %) declined (P < 0.05). While average wine consumption increased (1.4 v. 2.2 drinks/week), beer (3.4 v. 1.5 drinks/week) and cocktail intake (2.8 v. 1.2 drinks/week) decreased.
Non-binge drinkers consumed fewer sugary drinks and more whole grains than binge drinkers, and the latter consumed more total fat across all examinations (P < 0.05). There was a significant difference in consumption trends of total grains by drinking level (P < 0.05).
In conclusion, alcohol drinking patterns are unstable throughout adulthood. Higher intakes were generally associated with poorer diets. These analyses support the nuanced characterisation of alcohol consumption in epidemiological studies.
Why Is Per Capita Consumption Underestimated in Alcohol Surveys? Results from 39 Surveys in 23 European Countries
AIMS: The aims of the article are (a) to estimate coverage rates (i.e. the proportion of 'real consumption' accounted for by a survey compared with more reliable aggregate consumption data) of the total, the recorded and the beverage-specific annual per capita consumption in 23 European countries, and (b) to investigate differences between regions, and other factors which might be associated with low coverage (prevalence of heavy episodic drinking [HED], survey methodology).
METHODS: Survey data were derived from the Standardised European Alcohol Survey and Harmonising Alcohol-related Measures in European Surveys (number of surveys: 39, years of survey: 2008-2015, adults aged 20-64 years). Coverage rates were calculated at the aggregated level by dividing consumption estimates derived from the surveys by alcohol per capita estimates from a recent global modelling study. Fractional response regression models were used to examine the relative importance of the predictors.
RESULTS: Large variation in coverage across European countries was observed (average total coverage: 36.5%, 95% confidence interval [CI] [33.2; 39.8]), with the lowest coverage found for spirits consumption (26.3%, 95% CI [21.4; 31.3]). Regarding the second aim, the prevalence of HED was associated with wine- and spirits-specific coverage, explaining 10% of the respective variance. However, neither the consideration of regions nor survey methodology explained much of the variance in coverage estimates, regardless of the scenario.
CONCLUSION: The results reiterate that alcohol survey data should not be used to compare or estimate aggregate consumption levels, which may be better reflected by statistics on recorded or total per capita consumption.
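The coverage rate in this study is a simple ratio: survey-derived per capita consumption divided by the more reliable aggregate estimate. A one-line sketch (the figures are illustrative, chosen to echo the roughly 36.5% average coverage reported above):

```python
def coverage_rate(survey_per_capita, aggregate_per_capita):
    """Percentage of aggregate ('real') per capita alcohol consumption
    captured by survey-based estimates."""
    return 100.0 * survey_per_capita / aggregate_per_capita

# A survey implying 3.65 L pure alcohol per capita against an
# aggregate estimate of 10 L gives 36.5% coverage.
```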
Predictors of Alcohol Use Disorders Among Young Adults: A Systematic Review of Longitudinal Studies
AIMS: Alcohol use disorders (AUDs) are highly disabling neuropsychiatric conditions. Although evidence suggests a high burden of AUDs in young adults, few studies have investigated their life course predictors. It is crucial to assess factors that may influence these disorders from early life through adolescence to deter AUDs in early adulthood by tailoring prevention and intervention strategies. This review aims to assess temporal links between childhood and adolescent predictors of clinically diagnosed AUDs in young adults.
METHODS: We systematically searched PubMed, Scopus, PsycINFO and Embase databases for longitudinally assessed predictors of AUDs in young adults. Data were extracted and assessed for quality using the Newcastle-Ottawa quality assessment tool for cohort studies. We performed our analysis by grouping predictors under six main domains.
RESULTS AND CONCLUSION: Twenty-two studies met the eligibility criteria. The outcome in all studies was measured according to the Diagnostic and Statistical Manual of Mental Disorders. Our review suggests strong links between externalizing symptoms in adolescence and AUDs in young adulthood, as well as when externalizing symptoms co-occur with illicit drug use. Findings on the role of internalizing symptoms and early drinking onset were inconclusive. Environmental factors were influential but changed over time. In earlier years, maternal drinking predicted early adult AUD while parental monitoring and school engagement were protective. Both peer and parental influences waned in adulthood. Further high-quality large longitudinal studies that identify distinctive developmental pathways in the aetiology of AUDs and assess the role of early internalizing symptoms and early drinking onset are warranted.
Updated Information of the Effects of (Poly)phenols against Type-2 Diabetes Mellitus in Humans: Reinforcing the Recommendations for Future Research
(Poly)phenols have anti-diabetic properties that are mediated through the regulation of the main biomarkers associated with type 2 diabetes mellitus (T2DM) (fasting plasma glucose (FPG), glycated hemoglobin (HbA1c), insulin resistance (IR)), as well as the modulation of other metabolic, inflammatory and oxidative stress pathways. A wide range of human and pre-clinical studies supports these effects for different plant products containing mixed (poly)phenols (e.g., berries, cocoa, tea) and for some single compounds (e.g., resveratrol). We went through some of the latest human intervention trials and pre-clinical studies looking at (poly)phenols against T2DM to update the current evidence and to examine the progress in this field to achieve consistent proof of the anti-diabetic benefits of these compounds. Overall, the reported effects remain small and highly variable, and the accumulated data are still limited and contradictory, as shown by recent meta-analyses. We found newly published studies with better experimental strategies, but there were also examples of studies that still need to be improved. Herein, we highlight some of the main aspects that still need to be considered in future studies and reinforce the messages that need to be taken on board to achieve consistent evidence of the anti-diabetic effects of (poly)phenols.
Alcohol consumption patterns and growth differentiation factor 15 among life-time drinkers aged 65+ years in Spain: a cross-sectional study
AIMS: To examine the association of alcohol consumption patterns with growth differentiation factor 15 (GDF-15) in older drinkers, separately among individuals with cardiovascular disease (CVD)/diabetes and those without them, as GDF-15 is a strong biomarker of chronic disease burden.
DESIGN: Cross-sectional study. SETTING: Population-based study in Madrid (Spain). PARTICIPANTS: A total of 2051 life-time drinkers aged 65+ years included in the Seniors-ENRICA-2 study in 2015-17. Participants' mean age was 71.4 years and 55.4% were men.
MEASUREMENTS: According to their average life-time alcohol intake, participants were classified as occasional (≤1.43 g/day), low-risk (men: >1.43-20 g/day; women: >1.43-10 g/day), moderate-risk (men: >20-40 g/day; women: >10-20 g/day) and high-risk drinkers (men: >40 g/day; women: >20 g/day; or binge drinkers). We also ascertained wine preference (>80% of alcohol derived from wine), drinking with meals and adherence to a Mediterranean drinking pattern (MDP) defined as low-risk drinking, wine preference and one of the following: drinking only with meals; higher adherence to the Mediterranean diet; or any of these.
FINDINGS: In participants without CVD/diabetes, GDF-15 increased by 0.27% [95% confidence interval (CI) = 0.06%, 0.48%] per 1 g/day increment in alcohol among high-risk drinkers, but there was no clear evidence of association in those with lower intakes or in the overall group, or across categories of alcohol consumption status. Conversely, among those with CVD/diabetes, GDF-15 rose by 0.19% (95% CI = 0.05%, 0.33%) per 1 g/day increment in the overall group and GDF-15 was 26.89% (95% CI = 12.93%, 42.58%) higher in high-risk versus low-risk drinkers. Drinking with meals did not appear to be related to GDF-15, but among those without CVD/diabetes, wine preference and adherence to the MDP were associated with lower GDF-15, especially when combined with high adherence to the Mediterranean diet.
CONCLUSIONS: Among older life-time drinkers in Madrid, Spain, high-risk drinking was positively associated with growth differentiation factor 15 (a biomarker of chronic disease burden). There was inconclusive evidence of a beneficial association for low-risk consumption.
Mediterranean diet and diabetes risk in a cohort study of individuals with prediabetes: propensity score analyses
AIMS: Randomized controlled trials have demonstrated the efficacy of several dietary patterns plus physical activity to reduce diabetes onset in people with prediabetes. However, there is no evidence on the effect from the Mediterranean diet on the progression from prediabetes to diabetes. We aimed to evaluate the effect from high adherence to Mediterranean diet on the risk of diabetes in individuals with prediabetes.
METHODS: Prospective cohort study in the Spanish primary care setting. A total of 1184 participants with prediabetes, based on levels of fasting plasma glucose and/or glycated hemoglobin, were followed up for a mean of 4.2 years. A total of 210 participants developed type 2 diabetes during the follow-up. Hazard ratios of diabetes onset for high versus low/medium adherence to the Mediterranean diet were estimated with Cox proportional hazards regression models. Different propensity score methods were used to control for potential confounders.
RESULTS: The incidence rate of diabetes in participants with high versus low/medium adherence to the Mediterranean diet was 2.9 versus 4.8 per 100 person-years. The hazard ratios adjusted for propensity score and by inverse probability weighting (IPW) had identical magnitude: 0.63 (95% confidence interval, 0.43-0.93). The hazard ratio in the adjusted model using 1:2 propensity score matching was 0.56 (95% confidence interval, 0.37-0.84).
CONCLUSIONS: These propensity score analyses suggest that high adherence to Mediterranean diet reduces diabetes risk in people with prediabetes.
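The inverse probability weighting named in this abstract can be illustrated with a minimal, self-contained sketch. All numbers below are hypothetical, and the analysis (weighted event rates) is deliberately far simpler than the study's Cox regression; it only shows how IPW reweights a cohort by each person's propensity of exposure.

```python
# Toy illustration of inverse probability weighting (IPW). All data and
# propensity scores are hypothetical; this is not the study's analysis.

def ipw_weights(treated, propensity):
    """Unstabilized IPW: 1/p for the exposed, 1/(1-p) for the unexposed."""
    return [1.0 / p if t else 1.0 / (1.0 - p)
            for t, p in zip(treated, propensity)]

def weighted_event_rate(events, weights):
    """Weighted proportion of events in a group."""
    return sum(e * w for e, w in zip(events, weights)) / sum(weights)

# Hypothetical cohort: treated = 1 means high Mediterranean-diet adherence;
# propensity = estimated probability of high adherence; events = diabetes onset.
treated    = [1, 1, 0, 0, 1, 0]
propensity = [0.8, 0.6, 0.3, 0.2, 0.7, 0.4]
events     = [0, 1, 1, 0, 0, 1]

w = ipw_weights(treated, propensity)
rate_treated = weighted_event_rate(
    [e for e, t in zip(events, treated) if t],
    [wi for wi, t in zip(w, treated) if t])
rate_control = weighted_event_rate(
    [e for e, t in zip(events, treated) if not t],
    [wi for wi, t in zip(w, treated) if not t])
```

Comparing `rate_treated` with `rate_control` in the weighted pseudo-population is the basic idea behind the IPW-adjusted contrasts reported above.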
Mediterranean dietary pattern and the risk of type 2 diabetes: a systematic review and dose-response meta-analysis of prospective cohort studies
PURPOSE: Previous meta-analyses assessed the association of adherence to the Mediterranean dietary pattern (MedDiet) with the risk of type 2 diabetes (T2D). Since then, new large-scale cohort studies have been published. In addition, dose-response relation was not previously investigated and the certainty of evidence was not assessed. We aimed to explore the dose-response relationship between adherence to the MedDiet and the risk of T2D.
METHODS: We did a systematic search using PubMed, Scopus, and ISI Web of Science up to April 2021 for prospective cohort studies of the relationship between adherence to the MedDiet and the risk of T2D in the general population. The summary relative risks (RR) and 95% CIs were estimated by applying a random-effects model.
RESULTS: Fourteen prospective cohort studies (410,303 participants and 41,466 cases) were included. There was an inverse association for the highest versus lowest category of adherence to the MedDiet (RR: 0.79, 95% CI 0.72, 0.88; I² = 82%, n = 14; risk difference: −21 per 1000 persons, 95% CI −28, −12; GRADE = moderate certainty), and for a 2-point increment in the MedDiet adherence score (RR: 0.86, 95% CI 0.82, 0.91; n = 13). The RR remained significant after controlling for important confounders and in almost all subgroups, especially subgroups defined by geographical region. We observed an inverse linear association between the MedDiet adherence score and T2D incidence.
CONCLUSION: Adherence to the MedDiet was inversely related to T2D risk in a dose-response manner. Adherence to a Mediterranean-style diet may be a good advice for the primary prevention of T2D.
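The random-effects pooling used in this meta-analysis can be sketched with the classic DerSimonian-Laird estimator. The study-level estimates below are hypothetical, chosen only to show the mechanics of pooling log relative risks; they are not the paper's data.

```python
import math

# Toy DerSimonian-Laird random-effects pooling of log relative risks,
# the kind of model named in the meta-analysis abstract above.
# Study estimates are hypothetical, not the paper's data.

def dersimonian_laird(log_rr, se):
    """Pool study log-RRs; returns (pooled log-RR, its SE, tau^2)."""
    k = len(log_rr)
    w = [1.0 / s**2 for s in se]                              # fixed-effect weights
    fe = sum(wi * y for wi, y in zip(w, log_rr)) / sum(w)     # fixed-effect mean
    q = sum(wi * (y - fe) ** 2 for wi, y in zip(w, log_rr))   # Cochran's Q
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                        # between-study variance
    w_re = [1.0 / (s**2 + tau2) for s in se]                  # random-effects weights
    pooled = sum(wi * y for wi, y in zip(w_re, log_rr)) / sum(w_re)
    se_pooled = math.sqrt(1.0 / sum(w_re))
    return pooled, se_pooled, tau2

# Four hypothetical cohorts: RR point estimates and standard errors of log-RR.
log_rr = [math.log(r) for r in (0.75, 0.85, 0.90, 0.70)]
se = [0.10, 0.08, 0.12, 0.15]

pooled, se_p, tau2 = dersimonian_laird(log_rr, se)
rr = math.exp(pooled)
ci = (math.exp(pooled - 1.96 * se_p), math.exp(pooled + 1.96 * se_p))
```

When the heterogeneity estimate `tau2` is zero, the random-effects result collapses to the fixed-effect pooled RR, which is what happens with these toy inputs.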
REGISTRY AND REGISTRY NUMBER: PROSPERO (CRD42021246589). | https://wineinformationcouncil.eu/index.php?option=com_k2&view=itemlist&task=tag&tag=*Diabetes%20Mellitus&Itemid=523 |
An oxymoron is a figure of speech that places two contradictory words side by side, as in "deafening silence". The plural is "oxymora" or "oxymorons". Oxymora form a proper subset of the phrases or expressions known as "contradictions in terms". The term derives from two Greek words: "oxy", meaning sharp, and "moros", meaning dull. The distinguishing feature of an oxymoron is that it is used deliberately and knowingly for rhetorical effect.
A combination of an adjective and a noun is the most common form of oxymoron. This line from Tennyson's 'Idylls of the King' contains two oxymora:
“And faith unfaithful kept him falsely true”.
Examples
Given below are some examples of intentional oxymora:
- “I do here make humbly bold to present them with a short account of themselves...” Jonathan Swift
- “O miserable abundance, O beggarly riches!” John Donne, “Devotions on Emergent Occasions”
- “He was now sufficiently composed to order a funeral of modest magnificence...” Samuel Johnson
- “The bookful blockhead, ignorantly read, / With loads of learned lumber in his head...” Alexander Pope
- “O anything of nothing first create! / O heavy lightness, serious vanity! / Misshapen chaos of well-seeming forms! / Feather of lead, bright smoke, cold fire, sick health!” William Shakespeare, Romeo and Juliet, Act 1, scene 1
Perceived oxymora
Oxymoron is sometimes used simply to describe a contradiction in terms. Often, it is applied to expressions used earnestly, with no paradox intended. When such expressions are labeled oxymorons, the purpose is to criticize their use and dismiss them as nonsensical. Some examples of such perceived oxymora are:
- anecdotal evidence
- civil libertarian
- democratic leadership
- corporate ethics | https://articleworld.org/index.php/Oxymoron
Change is the only constant – Isaac Asimov
Can the above quote be called an example of antithesis, an example of oxymoron, or neither? I am confused because both antithesis and oxymoron rely on a contrasting effect.
Antithesis: A rhetorical term for the juxtaposition of contrasting ideas in balanced phrases or clauses.
Oxymoron: A figure of speech in which incongruous or seemingly contradictory terms appear side by side; a compressed paradox. | https://english.stackexchange.com/questions/96716/change-is-the-only-constant-antithesis-or-oxymoron/96724 |