Few contemporary crises have reshaped public policy as dramatically as the COVID-19 pandemic. In its shadow, policymakers have debated whether other pressing crises—including climate change—should be integrated into COVID-19 policy responses. Public support for such an approach is unclear: the COVID-19 crisis might eclipse public concern for other policy problems, or complementarities between COVID-19 and other issues could boost support for broad government interventions. In this research note, we use a conjoint experiment, panel study, and framing experiment to assess the substitutability or complementarity of COVID-19 and climate change among US and Canadian publics. We find no evidence that the COVID-19 crisis crowds out public concern about the climate crisis. Instead, we find that the publics in both countries prefer that their governments integrate climate action into COVID-19 responses. 
We also find evidence that analogizing climate change with COVID-19 may increase concern about climate change. Ambiguity – the capacity to have multiple meanings – is endemic to politics. Ambiguity creates political opportunities, structures debates and provides leeway for political entrepreneurs to advance their interests. I use the 2012 passage and 2014 rollback of reforms to the National Flood Insurance Program to show how ambiguity enables political entrepreneurship. In this puzzling case, Congress enacted and rolled back changes that threatened to impose politically unpalatable costs. Using semi-structured interviews and congressional testimony, I show how political entrepreneurs engaged with ambiguity in the buildup to the reforms’ passage. They used information strategically to interpret problems, solutions, rules, and goals; shape legislators’ perceptions of the reforms’ political implications; and adapt their arguments to the policy windows that opened. The case shows that ambiguity facilitates policy reform, but the direction of change depends on the priorities that are salient when a policy window opens and on the interests of political entrepreneurs.
https://core-cms.prod.aop.cambridge.org/core/search?filters%5BauthorTerms%5D=Parrish%20Bergquist&eventCode=SE-AU
As we are aware of our thoughts and emotions, we must ask, who is it that is aware? – Zen Koan We have the choice every moment to live experiencing what life is and who we are either from our judgmental, personal, reactive self – the ego – or from our discerning, witnessing, responsive self – essential Beingness – which primarily arises within the clarity of pure awareness of the moment. To recognize when ego is dominating our experience causing us to be in judgment and reactivity and to know how to choose and shift into the discerning, responsive awareness of Beingness is the core of the journey to awakening. What I have just written is an intellectual concept. It may be intriguing. It may seem nonsensical. I assure you, it is a use of words whose purpose is to point to a felt-sense reality. These words are drawn from a particular vocabulary a person needs to understand if the statement is to make sense, but deeper still, until a person experiences what these words point toward at a level beyond the intellectual, they will be unable to fully enter into the journey of personal evolution these words are pointing toward. These words point us toward the experience that we exist in two dimensions simultaneously as both a personalized, socialized, conditioned ego-self and an ultimate dimension of our true-Self as an individualized aspect of the fabric of the universe unfolding in the eternal present moment. The ego reacts from its conditioned psycho-social-cultural programming. The Self-in-Being responds to unfolding events from a deep knowing of its flowing connectedness to everything. They are the night and day of the awakening that Buddhism and meditation lead us toward. From within the conditioned mind of ego-identity there is only “me” and everything that is not me. 
We are trapped in a prison of “me,” struggling with a world that is outside and separate that we hope to master at some level so that we can succeed in bringing the things we want from this outside world to us and in keeping away what we do not want. Fundamental to this task is the ability to judge what it is we want and what we do not want. This “judging” is a projection onto whatever is being perceived and experienced as ideas about who we are and what life is. This is information programmed into us much as a computer is programmed – and as the old saying about programming goes, “garbage in, garbage out.” Our primary experience of the world then becomes this incessant and compulsive evaluation of everything in this world outside of us into the good stuff and the bad stuff, differentiating “good” and “bad” by thoughts about good and bad, which are unique to every person because of their particular and unique programming. Political opinions or religious identification are blatant examples of this. Most of us hold these beliefs because of the people influencing us through our upbringing and current social context. Give a moment’s consideration to the differences between prevailing political and religious opinion of several centuries ago and today, let alone the variety of such opinions today, and my point is readily grasped. Our ordinary day-to-day lives, however, are conducted at a much subtler level than politics and religion, and while political and religious opinions may be pretty obvious lines of separation, our day-to-day lives are being determined by an imperceptible (to ourselves) matrix of judgments programmed into us about the “good” and “bad” of ourselves, others and what is going on around us. With this understanding, it is pretty easy to comprehend why there is so much confusion and disagreement about proper conduct and values in the human realm. It is of the utmost importance to realize we are talking about the human realm, not nature. 
In nature, there is only what is natural. Ego and conditioning are minimal, though, of course, they exist. Every organism has a sense of its separate biological self and the need to interact with the world so as to bring to itself what it needs and avoid that which is danger. This is ego and conditioning at its most basic level. Humans, however, create an idea of self-in-the world, quite abstract and ruled by conditioning that is then projected out onto the world. This is ego taken to an unnatural level and this projection of egoic-self onto the world is the essence of judgment. Only humans live in the world of judgment. All the rest of nature lives in the straightforward discernment of what naturally supports or threatens its existence. Does this condemn humans to this virtual-reality that creates artificial and subjective levels of suffering, unable to live gracefully and authentically as a human in the way a deer or a fish live gracefully and authentically as a deer or a fish? From within the artificial reality called society and culture, without any sense of our underlying nature, sadly the answer is “yes.” As long as we only believe in the psycho-social-cultural programming and conditioning that creates a very complicated ego-self full of contradictions and conflicts, anxieties and reactivity, we will live, as Buddhism teaches, in dukkha – a word from the ancient Pali language of India – that describes a state of craving, insecurity and sense of dissatisfaction that keeps us reactive, anxious, striving and ultimately unfulfilled, always unsure if we are sufficient. The same Buddhist teaching that describes dukkha fortunately also prescribes its resolution. It is to release clinging to this artificial-reality-identity as who we are and to realize all these confusing thoughts and emotions arise within and pass through the dimension of witnessing awareness that is not plagued by instability, reactivity and dissatisfaction. 
As we are aware of our thoughts and emotions, we must ask, who is it that is aware? WE are that awareness. Awareness is the irreducible, unchanging dimension of every person’s experience. It is our original nature – awareness experiencing the world before conditioning and judgment. Is this universal awareness arising from Beingness then blank and without intelligence? To the contrary. Although our culture leads us to believe that intelligence is a result of thought, upon a moment’s consideration we know that intelligence cannot be the product of thought. Thought is only a tool to express a concept. It can be any concept. If we are unconscious of this process, we will allow conditioning to be the source of the thought/emotive process, and – “garbage in, garbage out.” This is why the history of humanity is rife with ignorant, dangerous and even disastrous thoughts. Contrary to how we are culturally conditioned to believe, awareness is not a faculty of this body and mind. It is far more accurate to say this body and mind are faculties of awareness, tools of the individualized consciousness that is a person. This individualized consciousness, directed, is awareness. This gives rise to the very inscrutable Zen teaching that actually, we are “nobody,” for while we can hang all kinds of identity onto our body, thoughts and emotions, when we examine just who it is that is aware, and how the awareness I experience is any different from the awareness you experience, there is no one to be found. There is just awareness. The vessels are very different; the essence, the Beingness, is universal. Intelligence arises from the silent mind of awareness – the discerning mind of awareness. Intelligence, the ability to look deeply and understand, arises from the field of consciousness that is the universe individualized as a human-being in awareness. 
Thus, our journey into wisdom, into awakening into true discerning intelligence, requires that we learn to stop running the program of egoic conditioning and become present in the great what-is that is life. Look deeply, listen closely, feel with subtlety the truths that are whispered. Quiet the cacophony of mind-chatter and you will hear. This moment will tell you what it needs – it is whispering to nobody so that the truth of who you are can hear. It will help you understand with clarity the what-is of the moment. Then the tools of body and mind can function with skill and wisdom, and you will know who it is that is aware. Nobody. And it is who you are – a psycho-socially-culturally conditioned intelligent being who now can use the conditioning with discernment.
https://www.billwalz.com/discerning-awareness/
When you bake cookies and cakes, you can see that a transformation occurred between the raw material you started with and the baked goods you ended up with. What actually happens when you pop a pan of cookies in the oven to bake at 350 °F?
Fats melt
In the early stages of baking, when your dough warms above 32 °C (90 °F), fats begin to melt, releasing trapped air and water. Butter, after all, is only roughly 80 % fat, so it contains water too. The warming of fats temporarily makes doughs and batters softer and more delicate, or even more fluid. Batters become looser and cookies start to spread. If you notice that your cookies are spreading too much in the early stages of baking, it's probably because there's an excess of fat in the dough.
Water evaporates
Water boils at 100 °C (212 °F) at sea level, so once batters and cookie doughs heat up to that temperature, water begins to evaporate, turning to steam and drying out the baked goods. As the water evaporates, gas bubbles expand and rise between 35 °C and 70 °C (95 to 158 °F), contributing to the rise of the cookies and cakes. Evaporation also contributes to the crust of baked goods, further setting the exterior of cookies. Evaporation is what gives crinkle cookies their distinct cracked surface: the cookie dough dries out first, before the leavening agents react, forming cracks.
Proteins denature and set
As the dough continues to heat up above 60 °C (140 °F), egg and gluten proteins begin to dry out and set. Starch granules swell with water and gelatinize up until about 93 °C (200 °F). These steps are key to setting the structure of the cookie in place.
Leavening agents react
Baking soda and baking powder in cookie doughs will begin to react a little before the cookies hit the oven, at room temperature, releasing a small amount of carbon dioxide gas that leads to a little expansion. 
Usually, heat is necessary to help them react further (this is especially true for double-acting baking powders, which contain a slow-acting chemical leavener that requires more energy to leaven baked goods).
Sugars caramelize
Above 149 °C (300 °F) is the sweet spot where sugars caramelize and the Maillard browning reactions occur. Both of these contribute to that “golden-brown delicious” colour and flavour we love in baked goods. Usually, browning occurs mostly on the exterior, and the interior won't reach such a high temperature. That's why cakes and cookies have a golden crust and a soft, pale interior. Caramelization of sugars occurs above 160 °C (320 °F) and will mostly take place on the edges of baked goods that are in direct contact with metal bakeware. Maillard browning takes place above 105 °C (220 °F). The pH of the batter or cookie dough has an impact on browning, specifically Maillard browning: under acidic conditions, Maillard reactions are less likely, leading to baked goods that don't brown as well when baked. On the other hand, too much baking soda can lead to an excess of browning reactions and a very dark colour that might not be desirable. Following a recipe and baking seems very simple, but actually a lot of complex transformations and reactions occur when you bake cakes and cookies.
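The temperature milestones described above can be collected into a small sketch. The values (in °C) are taken from the text; the names and structure are my own, purely for demonstration:

```python
# Illustrative summary of the temperature thresholds described above.
# Values (in °C) come from the text; names and structure are invented.
BAKING_EVENTS = [
    (32, "fats melt; dough softens and cookies start to spread"),
    (60, "egg and gluten proteins begin to set"),
    (100, "water boils off as steam, drying out the crumb"),
    (105, "Maillard browning begins"),
    (160, "sugars caramelize where they touch hot metal"),
]

def events_reached(dough_temp_c):
    """List the transformations under way at a given dough temperature."""
    return [what for threshold, what in BAKING_EVENTS if dough_temp_c >= threshold]

print(events_reached(70))  # fats melted, proteins setting
```

At 70 °C, for example, fats have long since melted and the proteins are setting, but browning has not yet begun.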
https://bakeschool.com/what-happens-when-you-bake/
As the behavior of a chaotic Chua's circuit is nonstationary and inherently noisy, it is regarded as one of the most challenging applications. One of the fundamental problems in predicting the behavior of a chaotic Chua's circuit is modeling the circuit with high accuracy. The current paper presents a novel method based on multiple extreme learning machine (ELM) models to learn the chaotic behavior of the four-element canonical Chua's circuit containing a memristor instead of a nonlinear resistor, using only the state variables as input. In the proposed method, four ELM models are used to estimate the state variables of the circuit. The ELMs are first trained on noisy data obtained from MATLAB models of a memristor and Chua's circuit. A multistep-ahead prediction is then carried out by the trained ELMs in the autonomous mode. All attractors of the circuit are finally reconstructed from the outputs of the models. The results of the four ELMs are compared to those of multiple linear regressors (MLRs) and support vector machines (SVMs) in terms of scatter plots, power spectral density, training time, prediction time, and several statistical error measures. Extensive numerical simulation results show that the proposed system achieves highly accurate multistep iterated prediction, over 1104 steps, of the chaotic circuit. Consequently, the proposed model can be considered a promising and powerful tool for modeling and predicting the behavior of Chua's circuit, with excellent performance, short training and testing times, and practical realizability.
Pages 121-140. Recommended citation: Uçar, Ayşegül and Yavşan, Emrehan (2016), "Behavior learning of a memristor-based chaotic circuit by extreme learning machines," Turkish Journal of Electrical Engineering and Computer Sciences: Vol. 24: No. 1, Article 10.
https://journals.tubitak.gov.tr/elektrik/vol24/iss1/10/
To learn more about the history and evolution of city directories, see: Each entry typically lists the person's name, occupation, and a business address, followed by a home address, which is preceded by an "h." New York City Directory, 1857 Unlike later "business directories," which are arranged by type of trade or business, city directories are arranged alphabetically by the resident's surname. The name of a person's business may also be listed separately, e.g.:
Ambler Henry S. coal, 22 Elizabeth h Brooklyn
Ambler John C. notary, 29 Wall, h. 80 Clinton pl.
Ambler John G. dentist, h 31 Washington pl.
Ambler Sam M. pianos 358 Bowery h. 131 Eldridge
Ambler Wm strawgoods 24 Warren h. 51 W. 29th
AMBLER & COLLARD, coal, 22 Elizabeth
Since business names are also listed alphabetically, it may be difficult to associate your relative with the name of a non-eponymous business, or one where the family surname appears but not as the first word. In the example above, for instance, a researcher looking for the name of the business owned by Collard would not be able to find it unless he or she knew that the first name listed was Ambler. Even if you are not lucky enough to find the specific name of your family business in a directory, you can use the business address to search in other sources (such as street directories or newspaper databases) that may lead you to the business name. In most large American cities, city directories began to be published around the beginning of the 19th century. For smaller cities, directories may not have appeared until the late 19th or early 20th centuries. Coverage varies by publisher, but most include only heads of households and those owning or working in a local business. Most directory publishers gathered data through a door-to-door canvass, relying on the voluntary cooperation of residents. If the resident was not at home, an information slip was left, which the resident may or may not have chosen to complete and return to the publisher. 
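The entry format described above (name and occupation, business address, then a home address introduced by "h" or "h.") is regular enough that the simple sample lines can be split programmatically. This sketch is illustrative only and would not survive the full variety of real directory entries:

```python
import re

# Rough parse of the simple sample entries above; illustrative only.
# It would not handle every real directory line (e.g. entries with no
# comma, like the "Ambler Wm strawgoods" line).
def parse_entry(line):
    # The home address is introduced by "h" or "h." per the guide.
    before_home, home = re.split(r",?\s+h\.?\s+", line, maxsplit=1)
    name_occupation, business = before_home.split(",", 1)
    return {
        "name_and_occupation": name_occupation.strip(),
        "business_address": business.strip(),
        "home_address": home.strip(),
    }

entry = parse_entry("Ambler John C. notary, 29 Wall, h. 80 Clinton pl.")
print(entry["home_address"])  # → 80 Clinton pl.
```

Splitting on the "h"/"h." marker first, then on the comma, mirrors how a researcher reads these lines: everything after the marker is the residence, everything before the first comma is the person.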
African-Americans, when included, are generally identified by a racial designation (e.g., "col'd"), or may be listed in a separate "colored" section. Women who are running a business or selling goods or services are also listed (otherwise, women appear only if widowed). More than just a listing of names and addresses, city directories usually also include some (or many) business advertisements, and may also have a separate business section or "commercial register" at the back, arranged by category rather than simply alphabetically. So make sure you explore beyond the alphabetical listings of names. NYPL Digital Collections Image ID 56790448 And don't stop with advertisements: city directories are wonderfully informative about many aspects of your ancestors' lives beyond mere business. They may include lists of clubs and organizations, government agencies, banks and insurance companies, churches, docks and wharves, ward maps, and many other details that will enrich your understanding of the community in which a historic business operated. NEW YORK CITY DIRECTORIES Regular publication of New York City directories dates back to 1786, when David Carroll Franks published the New-York Directory (available online through Columbia University Libraries’ Digital Collections). With a few gaps, city directories were published annually thereafter. Free online sources for NYC directories: Many early New York City directories are now freely available online through the following sources: NYC directories available on-site at NYPL: In addition to the directories freely available online through NYPL's digital portal, NYPL provides on-site access to directories through subscription databases, on microfilm, and in print (note that to preserve our print directories, they are only made available when not accessible online or on microfilm). 
Subscription databases Digital copies of early New York City directories can be accessed through several subscription databases that are available on-site at NYPL: Microfilm If you are looking for later directories, or can’t find the specific year you are looking for online, NYPL holds a nearly complete collection of New York City directories on microfilm. NYPL Digital Collections Image ID 1252841 Print copies NYPL also holds print copies of many early NYC directories. To locate these in our online catalog, it is useful to know the main directory publishers, which include the following: You can also try searching for New York City directories in our online catalog with the following subject headings: DIRECTORIES FOR OTHER CITIES Directories for cities and towns outside of New York City are also available, both online and at NYPL. Free online sources: Available at NYPL: REVERSE DIRECTORIES NYPL also holds a number of address or "reverse directories" on microfilm, which list both residents and businesses by address, rather than name. These are obviously very useful if you have the address of the business but do not know the name. They also provide information about surrounding businesses and residences, which helps to paint a fuller picture of the community in which your ancestor's business operated. NYPL's reverse directory holdings for New York City encompass the years 1929-1980 and 1986, with some gaps.
https://libguides.nypl.org/c.php?g=835988&p=6137777
Let's assume that a jumper leaves the ground at a full speed of 10 m/s. The resulting estimate is in line with the current world record of about 9 m for men and 7.5 m for women. So far we have neglected the effect of air resistance on the motion of objects, but we know from experience that this is not a negligible effect. The air has to be pushed out of the way when an object moves, and the reaction force pushes back on the body and slows it down. We can feel some of the properties of air resistance by sticking a hand out of a moving car: the faster the car moves, the larger the force, and the force is greater when the palm faces the direction of motion. As the body gains speed, the air resistance grows and the net force on the body decreases. If the body falls from a great height, the force due to air resistance eventually becomes equal to the weight. The resulting equation of motion (Eq. 3.22 in the text) cannot be solved with simple techniques, but the terminal velocity can be obtained without difficulty: at the terminal velocity, the downward force of gravity is canceled by the upward force of air resistance. For objects of similar density and shape, the terminal velocity grows as the square root of the linear size of the object; the following argument shows this. This result has implications for the ability of animals to survive a fall, since the terminal velocity is the greatest speed at which an animal can hit the ground. The force of air resistance on an animal the size of a man is insignificant compared to its weight, but a small animal is slowed appreciably by the air, so a small creature can drop from a considerable height without injury. Rats are rarely encountered in deep coal mines: a calculation shows that while a mouse can fall down a mine shaft and survive, such a fall will kill a rat. Air resistance also affects the speed of falling precipitation. Without it, a 1-cm diameter hailstone falling from a height of 1000 m would hit the Earth at a speed of 140 m/s, fast enough to seriously hurt anyone it fell on. 
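The hailstone figure quoted above follows from the drag-free kinematics v = √(2gh). A quick check, with g = 9.8 m/s² assumed as in introductory texts:

```python
import math

# Impact speed from a fall of height h when air resistance is neglected:
# v = sqrt(2 * g * h). This reproduces the 140 m/s quoted above for a
# hailstone falling 1000 m; real air drag keeps hailstones far slower.
g = 9.8  # m/s^2 (standard value, assumed here)

def drag_free_impact_speed(height_m):
    return math.sqrt(2 * g * height_m)

print(round(drag_free_impact_speed(1000)))  # → 140 (m/s)
```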
Animals do work through muscular movement, and the chemical energy in the food eaten by the animal allows this work to be performed. Only a small portion of the energy consumed by the muscles is converted to work. In bicycling at a rate of one leg extension per second, the efficiency of the muscles is about 20%: one fifth of the chemical energy consumed by the muscle is converted to work, and the rest is dissipated as heat. The metabolic rate is the amount of energy consumed per unit time. Muscle efficiency depends on the type of work and the muscles involved; in most cases it is less than 20%, but we will assume a 20% muscular efficiency in our calculations. As an example, we can calculate the amount of energy consumed by a person jumping up 60 cm for 10 minutes at a rate of one jump per second (the text expresses the result in terms of the energy content of doughnuts). The text, following A. H. Cromer, also calculates the metabolic rate while running, which is in agreement with measurement. In connection with the energy consumption during physical activity, note the difference between work and muscular effort. Work is defined as the product of force and the distance over which the force acts. When a person pushes against a wall, the wall does not move, so no work is done on it; yet the act of pushing uses a lot of energy, all of which is spent in the body to keep the muscles tensed. Under the conditions described in the text, a 70-kg man carrying heavy equipment can jump to a height of only 10 cm. The text also considers a broad jump performed by a person on the Moon.
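The jumping example above can be worked through numerically. The 70 kg body mass is my assumption (the text mentions a 70 kg man elsewhere); the jump height, rate, and 20% efficiency come from the passage:

```python
# Worked version of the jumping example above. The 70 kg body mass is an
# assumption; the jump height, rate, and 20% muscular efficiency come
# from the passage.
g = 9.8            # m/s^2
mass = 70.0        # kg (assumed)
height = 0.6       # m raised per jump
efficiency = 0.20  # fraction of chemical energy converted to work
jumps = 10 * 60    # one jump per second for 10 minutes

work_per_jump = mass * g * height            # mechanical work per jump, J
energy_per_jump = work_per_jump / efficiency # chemical energy per jump, J
total_energy = energy_per_jump * jumps       # total chemical energy, J

print(round(total_energy))          # → 1234800 J, about 1.23 MJ
print(round(total_energy / 4184))   # → 295 (kcal equivalent)
```

Roughly 300 kcal for ten minutes of jumping, which is why the text can express the result vividly in doughnut equivalents.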
https://knowt.io/note/9654c751-4b09-4aa1-8790-eee81c69c46e/Chapter-3----Part-2
IONOS creates daily backups of your webspace. In Linux Web Hosting packages, you can restore these backups under the Webspace Recovery option. Please note: Backups of your webspace are available for a maximum of 6 days. If you need data backups over a longer period of time, you can also create your backups manually. We have put together various ways for you to do this: MySQL Database Backups; Backing Up and Restoring Webspace a) via SFTP b) via Secure Shell (SSH).
Back up data: In the following example, a backup with the file name "Backup" + "Date.tar" (e.g. Backup-May-01-2019.tar) is created from the directories "folder1" and "folder2":
tar -cvf Backup-$(date "+%d-%b-%y").tar ./folder1 ./folder2
Tip: You can automate the data backup using a cron job. Make sure you specify the complete path, replacing "/homepages/12/d12345678/htdocs" with your own root directory. Note that the % characters must be escaped with a backslash inside a crontab entry, since cron otherwise treats % as a newline:
# Perform a backup of folder1 and folder2 every Sunday at 03:00
0 3 * * 7 tar -cf /homepages/12/d12345678/htdocs/Backup-$(date "+\%d-\%b-\%y").tar /homepages/12/d12345678/htdocs/folder1 /homepages/12/d12345678/htdocs/folder2
Restore data:
tar -xvf Backup-May-01-2019.tar
Pay attention to the directory from which the backup was created, to make sure the archive is unpacked into the correct directory and overwrites the intended files.
https://www.ionos.com/help/hosting/backup-restore-files/create-regular-backups/
Introduction {#S1}
============

A biomarker is a parameter that can be used as an indicator of normal biological processes, pathogenic processes, or pharmacological responses to therapeutic drugs (Biomarkers Definitions Working Group, [@B7]). In Alzheimer's disease (AD), potential biomarker information comes from multiple sources, including clinical tests for memory impairment, bodily fluids or tissues, neuroimaging, and smell tests, among others. AD biomarkers are typically assumed to belong to two categories: *biofluid analytes*, e.g., from cerebrospinal fluid (CSF), peripheral blood, or urine samples, and *imaging measures*, e.g., magnetic resonance imaging (MRI), magnetic resonance spectroscopy (MRS), or positron emission tomography (PET) (Henriksen et al., [@B33]). At present there are five well-established AD biomarkers. Two are CSF analytes that measure abnormal protein aggregates: a low level of CSF amyloid-beta and elevated levels of both total and phosphorylated CSF tau protein. Three are imaging biomarkers: the Pittsburgh compound-B (PIB) PET tracer for amyloid-beta deposition; MRI scans, which may detect atrophy in susceptible brain areas; and fludeoxyglucose (FDG) PET to quantify abnormal neuronal glucose consumption (Jack, [@B35]). The diagnostic criteria for AD were not modified from their original formulation in 1984 until they were recently updated in 2010 (Dubois et al., [@B19]). In the original criteria, AD was strictly diagnosed on a clinical basis (McKhann et al., [@B44]); other sources of information, such as imaging, lacked a positive diagnostic role. The new diagnostic criteria recognize AD as a complex disorder characterized by a gradual and progressive pathogenesis, with three phases: preclinical or asymptomatic, prodromal or mild cognitive impairment (MCI), and overt dementia (Dubois et al., [@B20]; Albert et al., [@B2]; Sperling, [@B56]). 
Despite technological and conceptual advances, we still lack preventive therapies to delay the onset of AD, as well as disease-modifying treatments. Despite the strong need for early diagnosis of AD, and the fact that biomarkers have proved useful in correlating with the different stages in which the disease unfolds, CSF and imaging biomarkers still play a surprisingly minor role in clinical diagnosis. They are, however, increasingly prominent in clinical trials and academic research. There is a growing consensus among clinical researchers that the application of biomarkers should follow a multi-modal and integrative approach. Truly predictive models of disease progression need to take into account the combined effects of biomarker interactions at the individual subject level. Unfortunately, few studies have specifically addressed the integration of different biomarkers for efficient and quantitative diagnostics. Furthermore, it has been particularly difficult to link findings on molecular biomarkers to early stages of the neurodegenerative disease, and no real groundbreaking discovery in imaging-based biomarkers has been produced. Thus, there is a lack of novel therapeutic approaches that efficiently target the underlying mechanisms and progression of AD (Corbett and Ballard, [@B14]). There is clear evidence that AD and other neurodegenerative disorders evolve at the systems level (Eidelberg and Martin, [@B21]) and that biomarkers -- molecular, imaging, or CSF -- need to be considered from a holistic point of view. Functional imaging may help us understand disease-related changes in interconnected brain areas. In this regard, functional imaging techniques that are relatively unburdened by subject-compliance demands, such as resting-state functional magnetic resonance imaging (fMRI) and TMS/EEG, are being extensively used for biomarker discovery in neurodegenerative disorders. 
In this review, we provide a brief panoramic view of recent research on the discovery of AD biomarkers, putting special emphasis on neuroimaging biomarkers derived from functional connectivity data in the resting state, that is, while the subject is not performing an explicit task. Network-based biomarkers are introduced, and we provide a new framework for the quantitative study of biomarkers that can help shorten the transition between academic research and clinical diagnosis in AD.

AD Biomarkers {#S2}
=============

Clinical tests for AD diagnosis involve subjective reasoning by experienced practitioners. Episodic memory impairment has little or no relevance in early diagnosis, but it still remains the core diagnostic criterion. Current diagnostic criteria (DSM-IV and NINCDS-ADRDA) have high sensitivity but low specificity (Knopman et al., [@B37]). The delay from symptoms to diagnosis is 20 months on average in the EU, and 36 months in the UK (Mattila et al., [@B42]). Furthermore, the molecular pathomechanisms of AD are active for several years before symptoms such as cognitive impairment manifest. Blood sampling is a non-invasive and cost-effective technique for the identification of plasma biomarkers, which have proven useful in distinguishing individuals with AD from cognitively healthy control subjects (Doecke et al., [@B18]). Plasma biomarkers can be used to extract metabolomic (Trushina et al., [@B60]) and proteomic biomarker signatures in AD (Hye et al., [@B34]). Unlike diagnostic tools such as CSF sampling and PET, plasma amyloid-beta measurements are neither invasive nor expensive. Plasma Aβ40 and Aβ42 can be measured in peripheral blood, but they cannot yet be used for AD identification: Vanderstichele et al. ([@B62]) found no differences in Aβ42 levels between controls and patients with AD. Further work is required before plasma amyloid-beta measurements are unanimously regarded as clinically useful (Mayeux and Schupf, [@B43]; Toledo et al., [@B59]). 
Smell tests to detect hyposmia are another example of an inexpensive biomarker in AD (Kjelvik et al., [@B36]). However, the reduced capability to detect odors shown in AD may be more an effect of the cognitive decline characteristic of the disease than a symptom with predictive value (Serby et al., [@B53]). Neuroimaging biomarkers in AD measure brain signals at both mesoscopic (MRI) and macroscopic scales (fMRI, MRS, and PET). Morphometric analysis of MRI data (e.g., atrophy in the medial temporal lobes, specifically in the hippocampus and entorhinal cortex) is a well-known marker of disease progression in AD. Hippocampal atrophy correlates with neuronal loss, and therefore MRI biomarkers could be used in proof-of-concept studies to distinguish between disease-modifying and symptomatic treatment effects (Saumier et al., [@B51]; Hampel et al., [@B31]). PET neuroimaging allows us to collect molecular information. PET image analysis can provide evidence of the accumulation of amyloid-beta plaques that is independent of structural brain changes. It also provides evidence of a reduction of glucose metabolism in the parietal and temporal lobe regions involved in memory and executive function (Habeck et al., [@B29]). Both structural MRI and FDG-PET imaging reflect the effects of disease progression in symptomatic stages; however, it is diagnosis in AD's asymptomatic stages that remains to be solved. Molecular pathomechanisms, such as the accumulation of amyloid plaque, become active several years before cognitive deficits manifest. Furthermore, amyloid-beta is not specific to AD, and may also be found in normal aging.

Resting-State fMRI {#S3}
==================

Functional magnetic resonance imaging allows us to assess functional connectivity at high spatial resolution by means of correlations in the blood-oxygen-level-dependent (BOLD) signal between spatially distant brain regions.
Since the seminal work of Biswal (Biswal et al., [@B8]), task-free or resting-state fMRI (R-fMRI) has been successfully incorporated into the functional MRI repertoire, and represents a comprehensive alternative to the task-based approach. R-fMRI experiments are considerably less demanding for the subject, which makes the technique especially attractive to dementia researchers, as it is relatively free of subject compliance and training demands. R-fMRI measures spontaneous or intrinsic brain activity in terms of low-frequency (\<0.1 Hz) BOLD fluctuations. Fluctuations in the BOLD signal measured in humans in the resting state represent the neuronal activity baseline and shape spatially consistent patterns (Fransson, [@B24]; Raichle and Gusnard, [@B47]). The systematic study of those patterns using correlation analysis techniques has identified a number of resting-state networks: functionally relevant networks found in subjects in the absence of either a goal-directed task or external stimuli. Despite the variability in data acquisition protocols, statistical analyses, and groups of subjects employed, resting-state networks have been consistently reported in multiple studies. There are at least eight commonly identified resting-state networks: the primary sensorimotor network, the primary visual and extra-striate visual networks, bilateral temporal/insular and anterior cingulate cortex regions, left and right lateralized networks consisting of superior parietal and superior frontal regions, and the default-mode network (DMN) (Van den Heuvel and Hulshoff Pol, [@B61]). The DMN is a specific anatomically defined brain system that is preferentially active when individuals are focused on introspective activities such as autobiographical memory retrieval, rather than on the external environment (Buckner et al., [@B9]).
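To make the correlation-based notion of resting-state functional connectivity concrete, the following sketch (ours, not from the review; the repetition time, filter band edges, and toy data are illustrative assumptions) band-limits ROI time series to the low-frequency range discussed above and computes the pairwise correlation matrix:

```python
# Hedged sketch: functional connectivity as pairwise Pearson correlations
# of band-limited (0.01-0.1 Hz) "BOLD" signals from a set of ROIs.
import numpy as np

def bandpass(x, tr, low, high):
    """Zero out FFT bins outside [low, high] Hz; x: (n_rois, n_timepoints)."""
    freqs = np.fft.rfftfreq(x.shape[1], d=tr)
    spec = np.fft.rfft(x, axis=1)
    spec[:, (freqs < low) | (freqs > high)] = 0
    return np.fft.irfft(spec, n=x.shape[1], axis=1)

def connectivity_matrix(bold, tr=2.0, low=0.01, high=0.1):
    """Correlation matrix of low-frequency fluctuations; tr in seconds."""
    filtered = bandpass(np.asarray(bold, dtype=float), tr, low, high)
    return np.corrcoef(filtered)  # (n_rois, n_rois), symmetric, diag = 1

# Toy example: 4 ROIs, 200 volumes of synthetic data.
rng = np.random.default_rng(0)
fc = connectivity_matrix(rng.standard_normal((4, 200)))
```

In practice the input would be mean time series extracted from an atlas parcellation; the FFT band-pass stands in for whatever nuisance regression and filtering pipeline a study actually uses.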
A number of studies indicate that the default network is also relevant for understanding mental disorders including depression (Sheline et al., [@B54]), autism (Washington et al., [@B66]), and AD. Studies show a decrease in DMN functional connectivity in normal aging, MCI, and AD (Hafkemeijer et al., [@B30]). Functional connectivity of the DMN may prove to be a sensitive and specific biomarker for mild AD (Greicius et al., [@B28]; Balthazar et al., [@B3]). The identification of the overall connectivity patterns in R-fMRI has been assessed using either model-based or model-free approaches. In the former, statistical parametric maps of brain activation are built upon voxel-wise analysis relative to a seed location (Wang et al., [@B63]; Faria et al., [@B23]). This approach has been successful in the identification of motor networks, but it shows important limitations when the seed voxel cannot be easily identified, for example in brain areas with unclear boundaries, such as the cognitive networks involved in language or memory. Independent component analysis (ICA) (Comon, [@B13]; Stone, [@B58]), on the other hand, is a model-free approach that allows resting fluctuations to be separated from other signal variations, resulting in a collection of spatial maps, one for each independent component, that represent functionally relevant networks in the brain. While ICA has the advantage over model-based methods of being unbiased, in that it does not need to posit a specific temporal model of correlation between regions of interest (ROIs), the functional relevance of the different components is still computed relative to their resemblance to a number of networks based on criteria that are not easily formalized (Friston, [@B25]). More recently, researchers using graph-theory-based methods have been able not only to visualize brain networks, but also to quantify their topological properties (He et al., [@B32]; Wang et al., [@B64]).
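As a minimal illustration of the graph-theoretical quantification mentioned above (ours, not from the cited studies; the binarization threshold is an arbitrary assumption), a connectivity matrix can be thresholded into an adjacency matrix whose topological properties, such as degree and local clustering, are then computed directly:

```python
# Hedged sketch: simple topological metrics from a thresholded
# functional connectivity matrix, in the spirit of graph-theory-based
# network analyses of R-fMRI data.
import numpy as np

def graph_metrics(fc, threshold=0.3):
    """Binarize |fc| at threshold; return per-node degree and clustering."""
    a = (np.abs(np.asarray(fc)) > threshold).astype(int)
    np.fill_diagonal(a, 0)                       # no self-connections
    degree = a.sum(axis=1)
    # local clustering: fraction of a node's neighbor pairs that are linked;
    # diag(A^3)/2 counts the triangles through each node
    triangles = np.diag(a @ a @ a) / 2
    possible = degree * (degree - 1) / 2
    clustering = np.divide(triangles, possible,
                           out=np.zeros_like(triangles, dtype=float),
                           where=possible > 0)
    return degree, clustering

# Toy graph: ROIs 0-1-2 form a triangle, ROI 3 hangs off ROI 0.
fc = np.array([[1.0, 0.9, 0.9, 0.9],
               [0.9, 1.0, 0.9, 0.0],
               [0.9, 0.9, 1.0, 0.0],
               [0.9, 0.0, 0.0, 1.0]])
degree, clustering = graph_metrics(fc)
```

Real analyses would use weighted metrics, proportional thresholding, and null-model normalization; the point here is only that these quantities are automatically extractable numbers, not visual judgments.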
Graph theory provides a formal and rigorous framework to quantitatively analyze the connectivity pattern, at either a local or global level, underlying cognitive networks. How these network properties are modified during normal development, aging, or pathological conditions is addressed in the next section.

R-fMRI and AD {#S4}
=============

Altered resting-state functional connectivity patterns have been shown in an impressive range of pathologies and conditions -- AD, schizophrenia, multiple sclerosis, Parkinson's disease, depression, autism, and attention deficit/hyperactivity disorder -- see Lee et al. ([@B38]) for a review of clinical applications. In the context of AD, both amyloid-beta and tau pathologies affect DMN integrity before the clinical onset of the disease (Li et al., [@B39]; Wang et al., [@B65]). DMN regions such as the precuneus and the posterior cingulate are selectively vulnerable to amyloid-beta deposition (Sperling et al., [@B57]). AD weakens structural and functional connectivity between the cingulate cortex and other regions within the DMN, which is consistent with the reduction in metabolic activity and the atrophy observed within the DMN with FDG-PET and volumetric MRI, respectively (Zhu et al., [@B72]). Patients with severe AD show decreased connectivity between distant brain regions (Liu et al., [@B40]). Interest in understanding the pathomechanisms of tau-mediated neurodegeneration has been fostered by the failure of amyloid-beta therapies to prevent neurodegeneration through Aβ removal. Tau abnormalities have been found to be more closely related to cognitive dysfunction than Aβ (Yoshiyama et al., [@B71]). Tau deposition initially appears in the medial temporal lobe and later spreads to lateral temporal and frontal-parietal areas.
This orderly progression of hyperphosphorylated tau maps onto the regional specificity of symptom deployment in AD, i.e., episodic memory loss in the MTL is followed by semantic memory loss in the lateral temporal cortex and aphasic symptoms in the parietal cortex (Pievani et al., [@B46]). Functional imaging has been successfully used for population selection in cross-sectional studies to classify normally aging, MCI, and AD subjects (Rombouts et al., [@B48]; Damoiseaux, [@B15]). R-fMRI can also be used to track AD progression in longitudinal studies. For example, Damoiseaux et al. ([@B16]) showed that functional connectivity in default-mode subnetworks decreases in AD patients compared to healthy controls. Resting-state functional connectivity can also help detect early manifestations of genetic effects related to AD. For instance, Sheline et al. ([@B55]) categorized cognitively normal individuals into PIB− (no evidence of brain amyloid) and PIB+ (PET evidence of amyloid deposition) groups and compared them with AD patients using resting-state functional connectivity. The study showed that the PIB+ and AD groups share similar modifications in both functional and effective connectivity. Thus, R-fMRI can be used to detect early manifestations of genetic effects, e.g., amyloid deposition in APOE4 carriers, and therefore holds great potential for early diagnosis and disease-modifying strategies. Like any technique, R-fMRI has advantages and disadvantages. fMRI measures the BOLD signal, an indirect measure of neural activity; it is susceptible to several imaging artifacts, has, in general, worse temporal resolution than EEG and MEG, and has poorer spatial resolution than more invasive procedures such as single-unit electrodes.
The analysis and interpretation of R-fMRI data is particularly challenging, and further work is still required to address complex issues such as network identification, effective connectivity between brain networks, and the detection of AD risk groups. For a review of the progress and pending problems of statistical approaches to analyzing R-fMRI, see Cole et al. ([@B12]).

Network-Based Biomarkers {#S5}
========================

Unlike other conditions such as brain injury, whose onset can be tracked in both location and time, late sporadic AD -- the most common form of dementia and two orders of magnitude more frequent than inherited AD (Bateman et al., [@B5]) -- has a gradual onset that lacks a specific location or temporal window. Experimental studies based on neuropathology, neuroimaging, and transgenic animal models suggest that neurodegeneration relates to neural network dysfunction. Disease-vulnerable intrinsic functional networks are not diffuse or random (Sanz-Arigita et al., [@B50]); however, researchers are still uncertain about the specific way in which neurodegeneration spreads beyond the sites of initial impairment. The network degeneration hypothesis (Seeley et al., [@B52]) -- disease starts in small network assemblies and progressively spreads to areas connected to the initial locus -- supports the view that neurodegenerative disorders can be studied as connectivity disorders. In this light, AD can be understood as a disconnection syndrome in which the structural and functional connectivity of large-scale networks is progressively modified by molecular pathomechanisms that are not fully understood. A diagnostic biomarker, in order to be considered as such, should reflect a core pathogenic process. The established biomarkers in AD hold this promise, as they measure, for example, amyloid-beta and tau deposition levels, which are responsible for the formation of senile plaques and neurofibrillary tangles.
However, it is far from clear whether amyloid and tau deposition are etiologically linked to memory deficits or rather reflect secondary effects of a different pathogenic mechanism (Eidelberg and Martin, [@B21]). AD is a complex and multifactorial condition, and so "secondary processes" such as oxidative stress, immune responses, or inflammation, and how they interact with core pathogenic mechanisms, need to be properly understood. The discovery of AD biomarkers must go beyond detecting abnormal protein deposition levels and be able to monitor both disease progression and treatment effects in a coherent and integrative way. To that end, a network-based approach to biomarker discovery is required. Erler and Linding ([@B22]) argue that biomarkers should be deployed as network models themselves. The rationale behind this idea is that biomarker discovery needs to take into account the network state and the biological context in which the network evolves, rather than focus on individual nodes or events, e.g., phosphorylation. A network-based approach to biomarker discovery is also being fostered for complex diseases such as cancer and diabetes (Ahn et al., [@B1]). The multifactorial pathogenesis of complex diseases such as AD is at odds with the current implementation of biomarkers, which are single-dimensional. Thus, we propose to redefine a biomarker as *a network model that can be used as an indicator of normal (including adaptive) biological processes, pathogenic processes, or pharmacological responses to therapeutic drugs*. Under this definition, biomarkers are multidimensional, as they are embedded in a network model in which network parameters, representing normal or pathological processes but also adaptive responses, can be characterized. This new definition of biomarker allows us to quantify adaptive processes triggered by early pathogenic events, fostering an integrative and multidimensional approach of use in early AD diagnosis.
For example, it is unclear whether, as the disease progresses, functional connectivity in large neural systems is attenuated, e.g., in the DMN (Wu et al., [@B69]; Liu et al., [@B40]; Zhu et al., [@B72]), or whether, on the contrary, AD induces an increase in functional connectivity that compensates for the disease-related atrophy of affected regions (Sanz-Arigita et al., [@B50]). An increase in focal frontal connectivity and heightened hippocampal activation during early stages of AD has been reported by Dickerson et al. ([@B17]). Functional disruption has been observed in the prodromal stage or even earlier, and therefore a characterization of this imaging phenotype has potential impact on early prevention and disease-modifying therapies. The relationship between brain connectivity and brain development, aging, and disease is not univocal, but instead involves a number of complex mechanisms that alter the network topology in multiple ways. The mechanisms that mediate the increase in functional connectivity observed in prodromal AD are in dispute, and there are several potential explanations. The increase in connectivity in the early phases of AD could reflect compensatory effects that neutralize the disruption in functional integrity, or represent some form of glutamate-receptor-mediated excitotoxicity (Wu et al., [@B68]). An interesting hypothesis borrowed from economic theory is that early network alterations can be interpreted as a discount factor that anticipates the expectation of pending deterioration of functional network integrity. Combining existing biomarkers poses important challenges not only in terms of intelligibility, owing to the heterogeneous and complex nature of biomarker data, but also in terms of the cost of data extraction: for example, expensive SPECT or MRI cannot be used in subjects with metal implants, and genetic mutations account for only a small percentage of AD cases (Bertram and Tanzi, [@B6]).
Truly predictive models of disease progression need to take into account the combined effects of biomarker interactions at the individual subject level. Few studies, however, have specifically addressed the integration of different biomarkers (Gomar et al., [@B26]). The long-sought goal of early diagnosis of AD necessarily requires the integration of existing biomarkers and the discovery of new ones. Network-based biomarkers provide a unifying approach to AD biomarker discovery and testing. Graph-based network analysis allows us to quantitatively characterize the global organization of the brain and to integrate heterogeneous data within a "neutral" and general mathematical framework.

A Network-Based Approach in AD Biomarkers {#S6}
=========================================

Biomarkers can be compounds obtained from bodily fluids or tissues, or technically derived correlates of pathophysiological events. While three of the five most important AD biomarkers are imaging-based, functional neuroimaging is absent from current diagnostic criteria. Markers of alterations in resting-state functional connectivity networks can discriminate between AD patients and healthy elderly people with a satisfactory level of sensitivity and specificity. Functional connectivity analysis of the DMN has great potential as a network biomarker able to objectively quantify asymptomatic and prodromal stages of the disease, and as a secondary endpoint in multicenter clinical trials in AD (Chhatwal et al., [@B11]). The study of AD biomarkers with R-fMRI imaging, however, has focused on detecting alterations in specific networks such as the DMN and finding abnormal levels of protein deposition, metabolic disruption, and atrophy within the DMN. A system-level understanding of the dependencies that exist among the different biomarkers has not been achieved. The advent of "Big Data" science makes it possible to share large amounts of data with unprecedented processing capability.
The Alzheimer's Disease Neuroimaging Initiative (ADNI) makes clinical, imaging, and biomarker data freely available to researchers worldwide. The whole-genome sequences of the 800 individuals enrolled in the ADNI will soon be available through the Global Alzheimer's Association Interactive Network (GAAIN). The much-needed insight into the pathomechanisms that mediate AD will benefit from the construction of probabilistic networks, built from large databases of AD biomarkers, that systematically capture the probabilistic dependencies among biomarkers. Once the network or networks are built, a supervised classification algorithm can be used to classify new subjects into different classes, for example healthy and AD. Thus, from a training set of patients diagnosed as healthy or AD, we first build the generative graphs -- *M*~H~ and *M*~AD~ -- containing the biomarker dependencies of healthy and AD subjects, respectively, and later perform classification inference, that is, estimate the likelihood that *M*~H~ or *M*~AD~ generated new data, i.e., a new subject to be diagnosed. Let us see this with an example. Figure [1](#F1){ref-type="fig"} shows a classification procedure for AD using a biomarker network-based approach. BM is the list of AD biomarkers considered in this example, BM = (w, o, τ, aβ, hc, fc, tac). For convenience, we assume that BM takes discrete values, that is, BM*~i~* = 1 when biomarker *i* reaches the threshold of positivity.
Thus, w (word recognition) and o (orientation) are neuropsychological markers included in the ADAS-Cog (Alzheimer's Disease Assessment Scale-Cognitive) (Rosen et al., [@B49]); τ and aβ are CSF biomarkers that indicate whether protein deposition is relevant; hc (hippocampus) is equal to 1 when a significant reduction of hippocampal volume is found; and fc (functional connectivity) indicates whether regions in, for example, the DMN, such as the precuneus or the posterior cingulate cortex, show the functional connectivity alterations reported in the literature, or any other pattern that we want to test against other biomarkers. The tactile biomarker (tac) is an inexpensive marker of cognitive and motor decline of interest in AD found in our laboratory (Yang et al., [@B70]). This list of biomarkers can be extended with others, e.g., smell, epigenetic, blood, or genetic markers, with the caveat that a larger number of parameters needs an even larger data set in order to avoid an overwhelming choice of networks that are potentially good at explaining the data. ![**Seven biomarkers of interest are listed in BM**. For convenience, we assume that BM is a binary vector, that is, BM(*i*) = 0,1. For example, if the measurement of the biomarker word recognition reaches the positive threshold, BM(1) = 1; if not, BM(1) = 0. The table at the top of the figure shows the training set S consisting of *n* samples or subjects with their biomarkers BM, diagnosed as AD or healthy. The data in the table can be summarized via the construction of generative networks, one for each diagnostic category, in our example H and AD. There are a number of possible network structures that can characterize the training set, so the generative networks *M*~H~ and *M*~AD~ are the result of model selection.
The diagnosis of new patients can thus be addressed via the computation of the probability that the new data, BM~s~, were generated by the biomarker network that captures the dependencies among biomarkers in healthy subjects or by the biomarker network of AD patients.](fnagi-06-00012-g001){#F1} The training data set *S* is ideally composed of a large number of diagnosed subjects together with the BM vector of biomarker information for each one. Thus, the training set is given by *S* = \[(BM~1~, *y*)(BM~2~, *y*),...(BM*~n~, y*)\], where BM*~i~* is the vector containing the biomarkers measured in patient *i*, and *y* represents the diagnostic class to which a subject can be assigned, e.g., healthy or AD. Now, we want to build a probabilistic network that captures dependencies among the biomarkers for each diagnostic class. For example, if the training data set contains biomarker information for *n* subjects diagnosed as healthy or AD \[*y* = (*y*~H~, *y*~AD~)\], two generative biomarker networks -- *M*~H~ and *M*~AD~ -- need to be built. This approach is entirely different from conventional AD biomarker studies, summarized above, which treat biomarkers as quantities reflecting relevant biological processes whose correlations with other biomarkers need to be investigated through heuristic methods (Table [1](#T1){ref-type="table"}). An interesting improvement in the quantification and integration of AD biomarkers, aiming to improve the efficiency of AD diagnosis, can be found in Mattila et al. ([@B41]). A supervised classifier is implemented via a disease state index (DSI) that compares the biomarker measurements of new patients with the biomarkers of previously diagnosed patients. Thus, the DSI is an aggregate measure of a number of biomarkers that allows us to classify based on biomarker data. ###### **Differences between the standard and the network-based AD biomarker approaches**.
| | AD biomarker | AD network-based biomarker (NBB) |
| --- | --- | --- |
| Dimensionality | 1-dimensional; unsuited for multi-modal integration of heterogeneous data | N-dimensional; integrates multi-modal biomarkers in a common framework |
| Statistical classification | Classifier based on group differences between HC, MCI, and AD | Supervised classifier for the assessment of disease risk in relation to large population data; allows group risk classification based on an individual risk measure built upon network biomarker parameters |
| Temporal scale | Temporal window of biomarker efficiency is not considered | Well suited for longitudinal studies by implementing computational models of network disruption effects in temporal windows, e.g., short/long term |
| Spatial scale | Study of selective vulnerability in region-specific neuron classes (neuronopathy) or network components, e.g., the precuneus in the DMN | Unbiased; NBB address large-scale distributed networks, with long-range disease spread shaped by network connectivity profiles, i.e., network-opathy (Comon, [@B13]) |
| Early diagnosis | Diagnosis of patients with overt dementia | Characterization of asymptomatic and prodromal stages; NBB can be used as surrogate end points and provide *in vivo* intermediate phenotypes of pathology |
| Preventive therapy | Inefficient for disease-modifying or preventive therapies, e.g., reduction of Aβ production has shown limited therapeutic impact | Potential for early diagnosis and disease-modifying therapies by detecting alterations in functional connectivity |
| Feature extraction | Absence of a standardized quantitative metric for AD imaging biomarkers | Automated extraction of network parameters, borrowing tools and methods from network theory |

Our network-based approach to AD biomarkers differs from these approaches in that biomarkers are here characterized as structured objects, i.e., networks, in which the dependencies among the network components, i.e., individual biomarkers, need to be quantified via experimentation or computational simulation of the network dynamics. For a training set of diagnosed biomarker data, the computation of the generative biomarker network for each diagnostic class, e.g., *M*~H~ and *M*~AD~, is a network structure discovery problem. The idea is to provide a structural model, i.e., a network, of the training data set of biomarker measurements. For example, for a training data set of patients diagnosed into the categories healthy and AD, two networks -- *M*~H~ and *M*~AD~ -- are built. The nodes represent the random variables of the training set (biomarkers) and the edges represent the stochastic dependencies between these variables. Dependency structures can be analyzed using Bayesian network models (Buntine, [@B10]). In the context of AD biomarkers, the network represents the dependency structure of the underlying distribution of any two biomarkers. For example, in Figure [1](#F1){ref-type="fig"}, the generative network *M*~H~, which contains a structural representation of the biomarker dependencies in the subjects diagnosed as healthy, shows no dependency among biomarkers, and only one biomarker, amyloid-beta deposition, reaches the threshold of positivity.
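The structure-discovery step is combinatorially demanding: the space of candidate Bayesian networks over even a handful of biomarkers is enormous. As an illustrative aside (ours, not from the review), Robinson's recurrence counts the labeled directed acyclic graphs on *n* nodes:

```python
# Illustrative aside: counting labeled DAGs on n nodes via Robinson's
# recurrence, to show why Bayesian-network structure discovery over
# even seven biomarkers faces a huge candidate space.
from math import comb

def num_dags(n):
    a = [1]  # a[0] = 1 (the empty graph)
    for m in range(1, n + 1):
        a.append(sum((-1) ** (k + 1) * comb(m, k) * 2 ** (k * (m - k)) * a[m - k]
                     for k in range(1, m + 1)))
    return a[n]

print(num_dags(7))  # over a billion candidate structures for 7 biomarkers
```

This is why model selection must be guided by scoring criteria and priors rather than exhaustive search, and why small models relative to the data sample are preferable.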
In the *M*~AD~ network, the generative network of patients diagnosed as AD, we find stochastic dependency between all pairs of biomarkers except between the fMRI and tactile biomarkers. The identification of the generative models *M*~H~ and *M*~AD~ from data is the result of statistical learning followed by model selection. It ought to be noted that when the amount of data -- the number of diagnosed individuals -- is small compared to the size of the model -- the number of biomarkers -- there are likely many candidate models that explain the data, and therefore the generative model provided by model selection may not be a good approximation of the underlying process. On the other hand, model selection is more likely to provide a good approximation when a large amount of data is available for models with a relatively small number of parameters. The number of candidate networks grows super-exponentially with the number of model parameters; therefore, models that are small relative to the data sample are preferable. For a discussion of the *p*, *n* (*p* = model size, *n* = data size) problem in statistics, see Gomez-Ramirez and Sanz ([@B27]). The diagnosis of a new subject can be computed via the maximum probability of the biomarker configuration BM~s~ conditional on the generative models *M*~H~ and *M*~AD~, max~G~ = (*M*~H~, *M*~AD~) P(BM~s~\|G). The utility of this approach will ultimately rely on its power to generate decision support systems to assist the physician in early diagnosis and symptomatic treatment. This work describes the blueprint for the construction of uncomplicated and cost-effective tools for the identification of disease signatures, based on a new understanding of biomarkers as multidimensional objects, i.e., networks. Thus, biomarkers can be seen here as the heterogeneous building blocks of network-based models.
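As a concrete illustration of the classification step, here is a minimal sketch in which each generative model is reduced to independent per-biomarker positivity rates, i.e., a naive Bayes simplification of the general framework, which allows arbitrary dependency structures. The toy data, smoothing, and priors are our assumptions, not from the source:

```python
# Minimal sketch, NOT the full Bayesian-network method: each class model
# (M_H, M_AD) is simplified to independent Bernoulli positivity rates per
# biomarker, and a new subject is assigned to argmax_G P(BM_s | G) P(G).
import numpy as np

def fit_class_model(rows, alpha=1.0):
    """Per-biomarker positivity rates with Laplace smoothing."""
    x = np.asarray(rows, dtype=float)
    return (x.sum(axis=0) + alpha) / (len(x) + 2 * alpha)

def log_likelihood(bm, theta):
    """log P(BM_s | G) under independent Bernoulli biomarkers."""
    bm = np.asarray(bm, dtype=float)
    return float(np.sum(bm * np.log(theta) + (1 - bm) * np.log(1 - theta)))

def diagnose(bm, models, priors=None):
    """argmax_G of log P(BM_s | G) + log P(G); uniform prior if none given."""
    priors = priors or {g: 1.0 / len(models) for g in models}
    return max(models,
               key=lambda g: log_likelihood(bm, models[g]) + np.log(priors[g]))

# Toy training data over BM = (w, o, tau, abeta, hc, fc, tac), binarized.
healthy = [[0, 0, 0, 1, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0], [0, 1, 0, 0, 0, 0, 0]]
ad      = [[1, 1, 1, 1, 1, 1, 0], [1, 1, 1, 1, 0, 1, 1], [1, 0, 1, 1, 1, 1, 1]]
models = {"H": fit_class_model(healthy), "AD": fit_class_model(ad)}
print(diagnose([1, 1, 1, 1, 1, 0, 1], models))  # prints AD
```

Replacing the independence assumption with learned dependency structures, and the point estimate with the posterior P(G\|BM~s~) under an informative prior such as population base rates, recovers the scheme described in the text.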
Conceptually, the workflow for the implementation of decision models based on the theoretical framework described here can be divided into three phases: (1) data extraction for biomarker selection, (2) network-based model building, and (3) model validation using classification algorithms. The first phase is intrinsically hypothesis driven. Quantities that may serve as biomarkers are selected experimentally or via public repositories such as the ADNI initiative. In the second phase, the interdependencies among biomarkers are studied quantitatively. The idea is to understand how the different biomarkers act together within a network model that can be further characterized in terms of network parameters such as clustering or modularity. As a result, generative models of the diagnostic categories, e.g., *M*~H~ and *M*~AD~, are built. In the last step, new subjects can be diagnosed via the maximum probability of the biomarker configuration of a new subject s (BM~s~) conditional on the generative models, max~G~ = (*M*~H~, *M*~AD~) P(BM~s~*\|*G). Thus, in essence, this approach can be seen as a supervised classifier that allows us to assess the clinical value of network models built upon heterogeneous and structured biomarker data. It ought to be remarked that Bayes' theorem allows us to calculate the posterior probability P(G\|BM~s~), that is, the updating of probabilities given an experiment that results in the biomarker values BM~s~. Generally speaking, by increasing the sample size it is possible to reduce the importance of the prior distribution, P(G), which is particularly difficult to specify and represents the uncertainty about the network structure before the data are examined (Migon and Gamerman, [@B45]).

Conclusion {#S7}
==========

The network-based biomarker approach described here complies with the newly emerging paradigm of network medicine (Barabási et al., [@B4]).
In this respect, network medicine, in order to be successful, must offer healthcare professionals not only a conceptual framework, but also comprehensive methodologies and a practical toolkit able to address the challenges and limitations of AD biomarker research in new ways. Classification methods such as support vector machines (SVMs) have proven effective for distinguishing MCI from normal aging using resting-state functional connectivity data (Wee et al., [@B67]). Bayesian network analysis of effective connectivity shows differences in the DMN between AD patients and healthy controls and could be used in the future as a biomarker (Wu et al., [@B69]). The development of efficient tools for clinical diagnosis and the monitoring of disease progress requires both the improved use of already known biomarkers and new methods of biomarker discovery. There is a strong need for objective, quantitative biomarkers of use in the asymptomatic and prodromal stages of AD. The systemic understanding of the interactions between biomarkers can be cast as statistical learning followed by a model selection problem. The inclusion of functional imaging biomarkers in the clinical diagnosis of AD necessarily requires the standardization of imaging protocols and quantitative metrics. In this respect, the network-based biomarker approach presented here goes beyond the current emphasis on the relationship between specific networks (e.g., the DMN) and molecular biomarkers (e.g., amyloid-beta) to learn dependencies between biomarkers from heterogeneous data, implemented as a graph where the nodes are biomarkers and the edges represent the stochastic dependencies among them. There are, however, challenges not addressed here. For example, this review has focused on the integration of predetermined biomarkers, but biomarker selection remains a standing problem in AD research.
Non-linear relationships between biomarker measurements and disease severity, and the handling of sparse observations, constrain biomarker prediction. Alterations in functional connectivity may play a key role in detecting signatures of the pre-symptomatic and prodromal stages. However, functional-imaging-related biomarkers have so far focused on alterations in intrinsic connectivity networks and the co-occurrence of protein deposition within those networks. Quantified and standardized metrics for AD neuroimaging biomarkers, and a system-level understanding of the dependencies among the existing biomarkers, are still missing. The network-based approach introduced here aims to bridge this gap by providing a statistical framework able to learn structural representations of biomarker interactions from the biomarker data of previously diagnosed patients. To fully capitalize on the large amount of data that big-data science projects are bringing to AD research, a new mathematical framework for finding effective combinations of multi-modal biomarkers is sorely needed. Biomarkers deployed as network models rather than as single quantities will foster our understanding of disease, paving the way for a predictive, preventive, and personalized medicine.

Conflict of Interest Statement {#S8}
==============================

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. This work has been funded in part with funds from the Erasmus Mundus Building European Asian Mobility Program of the European Commission (EM-BEAM), Grant Number L03100048.

[^1]: Edited by: Manuel Menéndez-González, Hospital Álvarez-Buylla, Spain

[^2]: Reviewed by: Daniel Ortuño-Sahagun, Centro Universitario de Ciencias Biológicas y Agropecuarias, Mexico; Jose-Luis Gonzalez De Aguilar, University of Strasbourg, France

[^3]: This article was submitted to the journal Frontiers in Aging Neuroscience.
These guided meditations are offered for free by Leo Babauta.

Breath Meditation
A return to the basics: exploring the breath with a beginner's mind, curiosity, and love for the life it gives us.

Compassion Meditation
A meditation about dropping down into our body and heart and sending out love, healing, and compassion to ourselves and others.

Meditation: Fearlessness with Feelings
This meditation is about the willingness to be with all feelings. To give them space and to show unconditional compassion towards them. To feel is to be alive, open to the unknown.

Open Awareness Meditation
In this meditation, we begin by scanning for the sensations in mind and body, as well as the ones coming from our surroundings through the senses. By appreciating these sensations without judgement, we allow ourselves to walk through the day with an open and loving awareness.

Vast Consciousness
In this meditation, we practice sitting upright in the storm, with dignity and integrity. We expand our awareness out layer by layer, into the universe. And from there we ask a question, and see what answer arises.

Meditation on Breath & Impermanence
In this meditation, we explore the breath, releasing any tension, savoring each breath, realizing that breath is impermanent, as is our life and this meditation.

Enlightened Energy
In this meditation, we notice the energy residing in our body. Is this energy somehow familiar? Could this energy be created by your thoughts and beliefs? We see that energy in all things in the world and the universe as enlightened energy that connects everything.

Interconnectedness
In this meditation we realize how interconnected we are with everything around us. We feel and develop appreciation, gratitude, and joy for our interconnectedness with the earth, the sun, and all life on earth. We are supported by a network, a big web of life, and in turn we support others.
Impatience
Patience: being in the middle of the chaos of life, we practice with whatever comes up, a fundamental of meditation. Notice sensations; picture someone or something that causes you discomfort, uncertainty, or frustration; turn toward and be with the feeling; open up to the experience as it is, in continuous contact.

Showing Up for the Moment
In this meditation, we start by practicing nonjudgmental awareness of the present moment and notice how we show up for this moment. We let go of our stories and expectations, and meet ourselves with compassion. We finish by setting our intention to reflect what we've practiced when we show up for others and the world.

Direct Experience
Direct experience: dropping down into our body, noticing our sensations, at first labeling them, then just experiencing them without labeling the part of our body where they are occurring. Experiencing our life with a beginner's mind. This is a fundamental meditation; I highly recommend it.

Let Go of Control
Letting go of control: relaxing into our posture, upright but not rigid. Being aware of our breath, but not trying to control it. Being in our body, and noticing how well it functions on its own without us trying to control it. Noticing how others in the world around us are able to function without us having to control them.

Non-Judgmental Compassionate Awareness
Practicing relaxation in the body with nothing to control, and being with our feelings and sensations with friendliness. However our experience shows up is perfect, and we welcome it as a good friend. We expand this compassionate awareness out to others in the world, even those we find difficult to be friendly and compassionate to, in a non-judgemental, open way.

Open Nature
We begin by focusing on our posture and breath, not trying to control them but just experiencing them as sensations. We drop all labels of our experience and try to see things as they really are.
We gradually expand our awareness outside ourselves: the sound, the air temperature, the colors, the light, and then experience both inner and outer as open nature.

Surrendering to Support
Relaxing into our seat, realizing the earth is supporting us, as is the air. We consider how impossible it is to try to control everything. We practice gratitude for everything in our life that has supported us up to this point: people, electricity, water. We are grateful for all of it.

Tender Heart Energy
Beginning by focusing on our breath and feeling how it is in our body, we bring to mind a hurt or anger connected with another person we have lost, and bring to our body the feelings of anger or sadness. Underneath these feelings we notice our tender heart energy, which we focus on and expand to the person we have hurt feelings towards, and then share outwards to those all around us.

Tender Heart Practice
We release tension out into the world, noting any uncertainty, frustration, or anger, with no judgment, just being aware of how we feel, with compassion. We search for any stress or pain in the heart area. We imagine a situation where we were experiencing strong emotions, and use that to be present with the sensations. Under that we feel our tender heart, which we use to feel the pain, anger, and difficult sensations; this is our gift to the world. We send compassion to others who need it.

Uncertainty Energy
In the middle of the storm we sit upright but relaxed. What sensations are present? We look at the feeling of uncertainty, which can produce anxiety, anger, fear, frustration, shakiness, and doubt. This can happen during times of crisis or change, which produce tightness in the body. We become present with the sensations, just experiencing them without adding more problems. We become more comfortable with uncertainty. It's also an energy which is dynamic, open like a cloud.
We can be curious about it, more like how we would treat a friend: enlightened energy.

Unfixed Nature of Reality
Being open and present to the sensations of the breath and our body. Where are we training our attention: on the present moment? Noticing where the mind goes and then coming back to the present moment. Seeing the beauty of the present moment. As our attention wanders, can we have a gentle and friendly awareness of it? We sometimes fix reality the way we think it should be. We feel moments where we may be frustrated, fearful, or tight, and we just stay present with them. We realize they are not problems; they are just sensations. We loosen our hold on these fixed beliefs, thoughts, or stories. And we realize that others are like me too. I'm a part of this universe.

Urges & Desires
Do we have the urge to move during meditation? What is it like to indulge our urges? Is it a habit? Can we restrain ourselves from these urges for a little while? Try to just sit and watch the urge. It's just a sensation. It's not necessarily a command we have to follow. Temporary pleasure only lasts for a short while; then we look for the next thing. Can we be with discomfort for a little bit, with patience? It's neither good nor bad. Just sit with it. This way we give ourselves freedom. We will still want things; we just don't need to indulge every desire. The same with pain and anxiety. We can try just being with it, accepting this moment.

Vast Consciousness
Feeling supported by the earth, sitting with integrity, in the middle of the storm of our life, with dignity in our uprightness. Observing the sensations of our body, aware of our consciousness, the peace that is always available to us, expanding out to the countryside, pure freedom and stillness, covering the earth, below and upwards to space. What is life calling me to do?

Whole
Upright in our posture, letting our breath bring awareness to our body and our connection to the world.
Letting the boundaries in our mind go and returning to wholeness. Noticing our thoughts that are concerned with self, and loosening our attachment to those thoughts. The breathing process is the visible way that we are interconnected with the world. We lose our sense of separateness from others and feel compassion for them, losing our concern for self.

Powerful Compassionate Godforms
There is a powerful energy in the earth, just as there is power in the heavens. They are completely different energies, and yet we are a connection between them. There is a regality in this. We are compassionate godforms: courageous, dignified, pure light, pure love. As godforms, we stand powerfully in this world, concerned about the suffering of others.

Beginner's Mind, Healing
Notice the physical sensations of the breath as if taking our very first breath. Now turn that awareness, gently, to the physical sensation of being wounded. Pour out healing compassion to ourselves and others who have been wounded. In giving the thing we crave, we send and receive healing.

Nourishment, Peace, Vulnerability
With the force of its being, the earth holds us. This stability allows us to sit upright in a relaxed way. Each breath in this relaxed, supported position is pleasurable and nourishing. The nourishment gives us a sense of peace, and we begin to trust and open our hearts, being vulnerable to life. Our vulnerability is a gift to the world. It pulses with love for all beings.

Healing Compassion
Being fully present in this moment, we bring compassion to what we feel: a warm flow of love that washes over the hurt parts of us. Then we practice compassion for others. We breathe in their pain, letting our heart transform it into a warm, loving light that we breathe back out to them. Our hearts become healing centres that can serve others.

Fear to Healing
In-breath: nourishing healing. Out-breath: letting go and relaxing. And the whole time, a gentle loving awareness.
Adding gratitude for having a body. Adding trust in the process. Just sitting here and practicing with that.

Uncertainty & Compassion
We meditate sitting upright, with dignity, in the storm of uncertainty. Uncertainty can be useful; it wakes us up to the part of us that feels compassion.

Anxiety & Tender Heart
During this meditation we may find peace just watching the beauty of turbulence, like the waves of the ocean. We look for the wonder of the tender heart behind fear and uncertainty and behold the basic goodness that is there unconditionally. This basic goodness is always there and is always available to us. Can we feel at peace or in love with the turbulence? Does compassion arise? Can it be an entryway to the tenderness of our hearts?
https://zenhabits.net/guided-meditations/
Pakistan and the United States continue to struggle to find a mutual strategy upon which to build a more positive and productive relationship. While both nations observed positive changes in attitudes during the strategic dialogue held in Washington, D.C. in March 2010, the history of mistrust does not support an enduring relationship. Pakistan's military and intelligence services remain suspicious of the motives and methods of their US counterparts, a wariness mirrored in American attitudes. (1) American humanitarian assistance after the 2005 earthquake in northern Pakistan temporarily improved public opinion of Americans, but Pakistanis still find it difficult to understand how long-term engagement with the United States benefits their nation. (2) Overcoming suspicions and creating trust in an effort to sustain this relationship, however, is absolutely critical if we are to achieve Global War on Terrorism (GWOT) objectives and deny al Qaeda and other militants sanctuary in Pakistan. This article summarizes the causes of this mutual mistrust and provides interlocutors with recommended actions to build confidence and change mindsets for the purpose of creating positive perceptions and a sustainable relationship.

Pakistani Perceptions of Americans

The basis of mistrust between the two nations is that Pakistan and the United States have very different national interests, and therefore possess different (and often conflicting) expectations of each other. Pakistanis also come from a culture rich in conspiracy theories, often placing the blame for failure on others: first the influence of the British and later the United States. Pakistanis believe that US actions in Afghanistan against the Soviets during the 1980s are responsible for burdening Pakistan's society with millions of Afghan refugees, extremists, a proliferation of weapons, and a prevalent narcotics trade.
Pakistan believes the United States is at fault for everything that goes wrong in Afghanistan, and extends those faults to blame the United States and India for negative actions and events in Pakistan. (3) Misguided religious leaders, antistate actors, and other power brokers within the nation's tribal society all have the ability to influence and convince the population that the United States is an adversary. In fact, 64 percent of the populace regards the United States as an enemy, while only nine percent describe it as a partner. (4) For example, Jamiat Ulema-e-Fazl chief Fazlur Rehman claims 9,000 employees of Blackwater International (Xe) operate in Pakistan under US control in an effort to steal Pakistan's nuclear weapons and carry out terrorist activities, accusations the United States denies as ludicrous. (5) Sadly, though, the failure of the United States to successfully communicate American policy to Pakistan limits its ability to counter such negative accusations. (6) These accusations usually follow five main themes:
https://www.questia.com/read/1G1-237532737/positive-perceptions-to-sustain-the-us-pakistan-relationship
An attendee may present one paper at the conference. Only the first presentation submitted with the individual's name as a presenter will be accepted. A participant may appear on the program multiple times as a non-presenting co-author on other papers without exceeding participation limits. The following exemptions do not count toward this limit:

- Participation as a session chair
- Pre-conference Workshop/Conference Tutorials
- Keynote or Plenary Speaker
- INFORMS Tutorial Speaker
- Award Session Presenter
- Poster Presentation
- Theme panel (limit 1)
- Non-presenting co-author (no limits)

There are no other exemptions from the presentation limits. ALL SPEAKERS must be registered by August 30, 2021. If speakers are not registered by August 30th, they will be notified that their presentation will be removed from the program.

Flash submissions and posters are the only types of submissions available for upload after May 15. Flash sessions will consist of approximately 10 presentations, each timed for six minutes with a one-minute break between speakers. Slides must be submitted in advance of the session and be set to scroll automatically in accord with the six-minute time limit. Remaining time at the end of the session will be set aside for presenters to meet with interested individuals to answer questions and discuss their work in greater detail. Due to capacity constraints, Flash presentation submission is no longer open. The deadline for a Poster Submission is September 1.

Important Dates

May 15: Submission deadline for Contributed, Sponsored, and Committee's Choice abstracts
July 31: Poster Competition submission deadline
August 30: Speaker registration deadline. If speakers are not registered by this date, they will be notified that the presentation will be removed from the program. Any change to the presenting author must also be made by this date.
Submission Deadlines

- Deadline for Cluster Chairs to enter Session Chairs: May 1, 2021, 11:59 PM EST
- Deadline for Session Chairs to enter Presenters: May 1, 2021, 11:59 PM EST
- Deadline for submitting Contributed/Sponsored & Committee's Choice abstracts: May 15, 2021, 11:59 PM EST
- Deadline for final editing of abstracts: August 1, 2021, 11:59 PM EST
- Deadline for Poster Competition submissions: July 21, 2021, 11:59 PM EST
- Deadline for Flash Paper submission: June 15, 2021, 11:59 PM EST
- Deadline for General Poster submissions: July 21, 2021, 11:59 PM EST
- Deadline for Presenters to register: August 30, 2021, 11:59 PM EST

Submit early; capacity is limited! Submission links and login instructions by category:

Abstract Guidelines (Size/Length)

Abstracts should be 600 characters maximum (approximately 60 words); abstract titles, 150 characters maximum.

- Letters, numbers, and *common math symbols are accepted.
- All abstracts must be in English.
- Do not include title or author information in the body of your abstract.
- Abstracts will be published exactly as entered if accepted.
- Review your abstract and check for typographical and spelling errors, coherence, and technical content.

*PLEASE NOTE: Use only "text pad" when entering math symbols.

Flash Paper

Once your abstract has been submitted (see instructions above), you will need to upload your PowerPoint. Below are a few guidelines to keep in mind:

- Presentations should be prepared for use with Microsoft PowerPoint 2010 in a Windows-compatible format; there will not be any Mac equipment available. If made in another program or in an earlier PowerPoint version, please make sure that it is compatible with PowerPoint 2010.
- The preferred PowerPoint format is .ppt, not .pptx.
- Timed slides should be created in landscape orientation.
- Can your slides be easily read from 15 meters away?
- Prepare your timed slides for a six-minute presentation. A YouTube video of how to do this can be found here.
Slides should communicate key findings, not details. If attendees want details, let them ask you during the question-and-answer time right after the final flash talk in the session.

- Upload your final PowerPoint into Oasis by September 3rd.
- In the event a speaker cannot attend the conference because of an urgent matter, please contact the INFORMS office as soon as possible. If a new speaker is assigned, send his or her speaker information, including affiliation, address, and email address.
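Authors preparing many submissions can sanity-check the length limits above before pasting into the form. The helper below is a hypothetical convenience script, not an official INFORMS tool; it encodes only the stated limits (title 150 characters, abstract body 600 characters) plus the "no title in the body" guideline.

```python
def check_abstract(title: str, abstract: str) -> list[str]:
    """Return a list of problems with a submission, per the stated limits:
    title <= 150 characters, abstract body <= 600 characters."""
    problems = []
    if len(title) > 150:
        problems.append(f"title is {len(title)} characters (max 150)")
    if len(abstract) > 600:
        problems.append(f"abstract is {len(abstract)} characters (max 600)")
    if title.strip() and title in abstract:
        # Guideline: do not include title/author info in the abstract body.
        problems.append("abstract body repeats the title")
    return problems

print(check_abstract("A Short Title", "We study X using Y."))  # []
```

An empty list means the submission fits the published limits; anything returned should be fixed before the deadline, since abstracts are published exactly as entered.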
http://meetings2.informs.org/wordpress/anaheim2021/submit/
The Centers for Disease Control and Prevention (CDC) estimates that 6.1 million children in the US had been diagnosed with ADHD as of 2016. ADHD symptoms differ from person to person, but they frequently involve problems with focus, attention, and impulse control. Although the exact etiology of ADHD is unknown, experts think that genetics, particular environmental variables, and brain alterations may contribute to its onset. The function of neurotransmitters such as dopamine has also been studied. In this blog, we go through the connection between dopamine and ADHD. We also discuss further consequences of low dopamine levels and the available ADHD therapies.

About 2.5% of adults, according to the American Psychiatric Association, are thought to have ADHD. People who have ADHD often find it harder to focus than those who do not, and they may behave more impulsively than their peers. As a result, they can find it challenging to perform effectively at work, at school, or in other pursuits.

What is the association between ADHD and dopamine?

Causes and factors

The cause of ADHD is probably a combination of multiple factors. As per the National Institute of Mental Health, risk factors for ADHD may include the following:

- Family history of ADHD and genetics
- Birth defects or preterm birth
- Using alcohol, smoking, or taking drugs while pregnant; exposure to pollutants like lead while pregnant; or brain damage as a young child

Researchers have been examining the link between dopamine and ADHD and how dopamine contributes to the development of the disorder. Dopamine is a neurotransmitter that serves many crucial roles in the body and brain. Dopamine levels are linked to many mental and neurological conditions, including Parkinson's disease. A person's mood, attentiveness, motivation, and activity may all be influenced by dopamine levels.
Dopamine also governs the brain's reward system: brain dopamine levels rise during enjoyable activities like eating or sex. Initially, experts thought that low dopamine levels were the cause of ADHD, but they now recognize that the connection is a little more complex. Dopamine transporters may be more prevalent in the brains of people with ADHD. These transporters remove dopamine from brain cells; when many transporters are concentrated in one brain region, dopamine is cleared too rapidly and has less time to act. Lower serotonin and norepinephrine levels may also play a role in ADHD development alongside dopamine.

Scientific reasoning behind the link between ADHD and dopamine

Scientists have researched the relationship between dopamine transporters and the symptoms of ADHD. Additional evidence suggests that anatomical changes in the brain may also contribute to ADHD. For example, in the motor cortex, the area of the brain that regulates voluntary movement, there is a gene that typically increases the function of dopamine transporters, and its effect is inhibited by medications that raise the amount of dopamine in the brain. The fundamental cause of ADHD is probably a problem in the brain. Although the specific origin of ADHD is unknown, several researchers have investigated the role that the neurotransmitter dopamine may play in its development. Dopamine enables us to control our emotional reactions and to act in pursuit of particular rewards; it produces feelings of pleasure and reward. Scientists have shown that dopamine levels in patients with ADHD differ from those in people without the disorder. According to the experts, this difference is caused by increased levels of dopamine-transporter proteins in the neurons of patients with untreated ADHD. These findings imply that genetic variables related to the dopamine transporter may contribute to the development of ADHD.
How to treat ADHD: a pharmacological approach to increase dopamine

Numerous ADHD drugs act by boosting dopamine and promoting concentration. Typically, these drugs are stimulants. They include amphetamines, which raise dopamine levels in the brain by inhibiting dopamine transporters. Some people think that taking these drugs in large doses will help them concentrate and pay attention better. That is untrue: it may be hard to concentrate if your dopamine levels are excessively high.

What are other reasons for developing ADHD?

Scientists do not know categorically what causes ADHD; dopamine and its transporters are just two probable contributors. According to the research, ADHD seems to run in families, which is partly explained by the fact that several genes may increase the risk of ADHD. Several behavioral and lifestyle factors can also potentially affect ADHD. They include the following:

- Low birth weight
- Problems during labor
- Exposure to harmful chemicals like lead during pregnancy, infancy, and childhood

See Also: Autism Spectrum Disorder and Gastrointestinal Issues

Bottom line

The evidence relating dopamine to ADHD is promising: a number of effective drugs used to relieve the symptoms of ADHD work by increasing the action of dopamine in the body. However, researchers are still looking into this connection, and dopamine is not the sole underlying cause of ADHD. Experts are examining newer theories and evidence, such as the quantity of grey matter in the brain. Consult a doctor if you think you have ADHD. They can properly diagnose you and put you on a treatment strategy involving drugs and other techniques that boost dopamine to manage ADHD. You can also try the following techniques to raise your dopamine levels:

- Prepare a list of quick activities, and then finish them.
- Always try to do something new.
- Play music you like.
- Exercise regularly.
- Try yoga or meditation.

Neurodevelopmental Disorder

ADHD can lead to problems with attention, impulsivity, and hyperactivity. According to research, structural changes in the brain and neurotransmitter imbalances, such as those involving dopamine, may contribute to the emergence of this illness. Other neurological and mental health conditions, including Parkinson's disease, drug abuse, depression, and schizophrenia, also appear to be influenced by dopamine levels.
https://familymedicineaustin.com/adhd-and-dopamine-whats-the-association-between-them/
This article on ADHD for writers is part of the Science in Sci-fi, Fact in Fantasy blog series. Each week, we tackle one of the scientific or technological concepts pervasive in sci-fi (space travel, genetic engineering, artificial intelligence, etc.) with input from an expert. Please join the mailing list to be notified every time new content is posted.

The Expert: Josh Michaels

Josh Michaels is the pseudonym of Dr. Joel Shulkin, a developmental-behavioral pediatrician and former USAF physician with over fifteen years' experience diagnosing and treating children with developmental disorders including autism. As Josh Michaels, he writes medical thrillers and dabbles in SF/F thrillers and YA fantasy. He is represented by literary agent Lynnette Novak of the Seymour Agency. You can find him on Twitter @drjoshmichaels and @authorjshulkin, on Facebook/Instagram @drjoshmichaels, and on his website http://authorjoshmichaels.com.

Writing Characters with ADHD

"And the ADHD—you're impulsive, can't sit still in the classroom. That's your battlefield reflexes. In a real fight, they'd keep you alive. As for the attention problems, that's because you see too much, Percy, not too little. Your senses are better than a regular mortal's." -Annabeth, Percy Jackson and the Olympians by Rick Riordan

A common trope, at least in Middle Grade (MG) and Young Adult (YA) fiction, is that ADHD is a superpower. In Rick Riordan's middle grade fantasy series, Percy Jackson and the Olympians, Percy has ADHD and dyslexia (a reading disorder). He has trouble staying focused in class and can't read English, but he can decipher ancient Greek.

"Evolution, Tyler. Your gift, you're an asset, not a liability. You know that, too. Somewhere, deep down, you knew when they told you to take those drugs, to make you fit into their archaic, dying system, you knew. You didn't take the drugs." -Rick, Playing Tyler by T.L. Costa

In T.L.
Costa's YA thriller, Playing Tyler, the titular character struggles in school and has a horrible home life, but he excels at playing video games. He can multi-task well enough to fly military drones. Tyler sees his teachers and others who want him medicated as villains trying to suppress his abilities. A similar theme appears in the Percy Jackson series:

"Of course, the teachers want you medicated. Most of them are monsters. They don't want you seeing them for what they are." -Annabeth, Percy Jackson and the Olympians

Riordan and Costa laudably portray characters with ADHD as successful. However, demonizing treatment is potentially harmful to people with ADHD, many of whom benefit from effective medication. This blog post reviews some of the complexities and considerations in writing about ADHD.

ADHD Basics

History

Attention Deficit Hyperactivity Disorder (ADHD) was first described to some degree in the 19th century and has been known by many names, including "clumsy child syndrome," "hyperexcitability syndrome," and "minimal brain dysfunction." ADHD wasn't formally described in the Diagnostic and Statistical Manual of Mental Disorders until 1968, when it was called Hyperkinetic Impulse Disorder. In 1980, the name was changed to Attention Deficit Disorder (ADD), with subtypes classifying hyperactivity or no hyperactivity. The term ADHD was first used in 1987, followed in 2000 by classification of subtypes (primarily inattentive, primarily hyperactive/impulsive, and combined). Although the revised DSM eliminated the term ADD, it is still widely used in common parlance.

Diagnosis

The diagnostic criteria have changed over time, but commonly include:

Inattentive symptoms:
- Trouble with focused attention
- Distractibility
- Daydreaming
- Poor organization skills
- Trouble completing tasks

Hyperactive/impulsive symptoms:
- Fidgeting
- Incessant talking
- Being constantly "on the go"
- Blurting out
- Acting out of turn or without considering consequences
As most people experience these symptoms at one time or another, a diagnosis requires a minimum number of symptoms over a period of at least six months, occurring in two or more settings, inconsistent with developmental level, and not better explained by another mental disorder such as substance abuse or schizophrenia. While individuals with ADHD are often (but not always) creative, adventurous, and humorous, the last D stands for Disorder, which means the symptoms significantly affect their ability to function. Many people are daydreamers or risk-takers, but they don't meet criteria for ADHD if they're otherwise highly effective.

Neuropsychopharmacology

Studies have shown that in ADHD these symptoms reflect deficiencies in neurotransmitters at key cerebral control centers, particularly the brain's "conductor," the prefrontal cortex. Low norepinephrine (a fight-or-flight neurotransmitter) contributes to inattention, while low dopamine (a neurotransmitter that suppresses impulses and stimulates the pleasure center) contributes to hyperactivity/impulsivity.

Medications

For many years, psychostimulants have been the mainstay of treatment for ADHD, with the first amphetamine trial for child behavior problems in 1937 and FDA approval of both dextroamphetamine and methylphenidate for similar use in 1955. Although many new formulations have been approved for ADHD, there are two main classes of psychostimulants used for its treatment: pure amphetamines (either dextroamphetamine or mixed amphetamine salts) and methylphenidate (a synthetic, amphetamine-like stimulant). Psychostimulants increase norepinephrine, which can make someone more alert and focused. They increase dopamine to suppress impulses and attenuate the "background noise" of external stimuli. Such medications can be quite effective, but can also produce adverse effects, such as appetite suppression, headaches, tics, mood swings, and insomnia.
Because dopamine also stimulates the pleasure center of the brain, these medications can be addictive if abused. Therefore, psychostimulants are classified as controlled substances and have special prescribing requirements. Over the last decade or so, medications known as non-stimulants have become more popular. These are not classified as controlled substances because they are not considered addictive. They come in two main classes: norepinephrine reuptake blockers and alpha-2 receptor agonists. Atomoxetine is the only norepinephrine reuptake blocker FDA-approved for treatment of ADHD. By making norepinephrine more available to the brain, it increases alertness and focused attention without affecting dopamine levels. The second class, the alpha-2 agonists guanfacine and clonidine, effectively blocks the fight-or-flight response, helping to control impulsivity, hyperactivity, and reactivity.

Understanding the different treatments and how they work helps in writing a realistic character. Short-acting stimulants are more likely to be abused since they have a more rapid onset, while long-acting forms don't have the same addictive potential. For example, it doesn't make sense to have a character selling guanfacine to get his friends high, since that's not how the drug works, unless it's an example of the character's poor judgment and insight. However, someone who took long-acting stimulants for years and stopped suddenly is more likely to suffer withdrawal than someone who only took low-dose, short-acting stimulants for a few weeks. People with bipolar disorder or schizophrenia may stop taking their medications due to dulling effects on their mood and behavior, but people with well-managed ADHD generally feel the medication helps them, unless the dose is too high or they experience other adverse effects.
Additional Treatment Considerations and Prognosis

More recent studies have shown that, while medications are helpful, the best outcomes result from a combination of medication and psychotherapy, such as cognitive-behavioral therapy or other interventions that teach someone with ADHD how to manage their own executive functioning (organization, planning, self-regulation, etc.). Even with such therapies, approximately 80% of ADHD teens still require medication, but this may change with earlier interventions. While younger children often show more prominent hyperactive symptoms, it is often the inattentive symptoms that persist into adulthood. Teens with ADHD are far more likely to be involved in driving incidents, and more than a third have problems in school, which may result in dropping out.

Associated Conditions

Roughly half of individuals with ADHD have anxiety, and a third or more have a learning disability. Other mental health conditions, such as depression or eating disorders, are more common in those with ADHD, particularly when the ADHD is not properly treated. Oppositional defiant disorder may be associated with ADHD, particularly in the face of social pressure and inadequate treatment for the ADHD, and rates of substance abuse are 3–4 times higher in those with untreated ADHD than in their peers.

Differential Diagnosis

At the same time, many other conditions can present with symptoms similar to ADHD. Children with learning disabilities often seem unfocused in class when they’re actually confused or frustrated. Sleep problems can cause inattention and mood dysregulation. Autistic children may miss social cues or seem unfocused because they’re over-focused on something else or are socially withdrawn. Anxious children may “shut down,” i.e., seem like they’re not paying attention, because they’re afraid to answer and become the center of attention, or because they’re worrying about something else.
Someone with Bipolar Disorder may have extreme highs and/or extreme lows that may be mistaken for ADHD.

ADHD in adults

There is no such thing as adult-onset ADHD. Those adults who had unrecognized and untreated symptoms as children are probably really struggling by the time they are diagnosed (41 percent of adult cases are considered “severe”), if they are diagnosed at all. While an estimated 4.4 percent of the adult population has ADHD, less than 20 percent of these seek help for it. Despite how common the condition is among adults, ADHD is more commonly portrayed in children’s literature and YA books than in adult fiction. Instead, adult characters are described as “high-energy,” “bumbling,” “absent-minded,” or simply “easily distracted.” This can be problematic if mannerisms and behaviors are inconsistent throughout a story. In TV and movies, a character with ADHD is often the ditzy, quirky (usually female) character whose impulsive goofs make you love them that much more, like Anna in Frozen or Julie in Julie and Julia. Or a character with ADHD serves as comic relief, like Barney on How I Met Your Mother. They “space out,” fumble their words, and make careless mistakes. On screen, it makes us laugh. But in real life, we’d probably be as annoyed as the other characters.

Conclusion

ADHD is neither a superpower nor a punchline. It is a serious condition that shouldn’t be taken lightly. ADHD isn’t just about being a daydreamer or impulsive, nor do medications fix everything. If someone with ADHD doesn’t learn to manage their symptoms, either on their own or with help, they often struggle academically, in relationships, and in everyday activities. Inaccurately describing a character with ADHD can perpetuate derogatory stereotypes. Understanding ADHD can help a writer decide how a character’s expected behavior will affect the way they handle story events, as well as how other characters perceive them.
And, as many readers have ADHD, it is critical for writers to be sensitive to their audience. While many characters in popular culture are “suspected” to have ADHD, it’s not presented as a common diagnosis in adult literature. For writers who are willing to do their research, consult with experts/sensitivity readers, and/or who have personal experience, a character with ADHD can be complex and intriguing, as can the writing of their story. I look forward to seeing how your characters cope with ADHD and how they—and you—handle the challenges.
http://dankoboldt.com/writing-characters-adhd/
In the 1930s, the complications that came along with the Great Depression affected the public severely. In 1929, a stock market crash changed the country remarkably. Poverty and unemployment were widespread in the United States. Factors that led up to the Great Depression include buying on credit, buying on margin, ____________ The Great Depression was catastrophic for everyone, but, as usual, the African-American population had it harder. During the Great Depression, most African-Americans were working on farms owned by white landowners. Author Richard Grant describes these situations as “a problem from hell” when visiting some of the most rural and poor places in Mississippi. Education is a major obstacle that many students encounter when struggling to escape from Mississippi’s never-ending problems. As a result, the Delta has a consistently high rate of high school dropouts and failures. In addition to deficient school systems, “The South is home to the most children living below 50% of the poverty line” (Hughes), which supports the idea that children living in unstable environments are enveloped by poverty. The Delta has developed into an underprivileged community where “24% of Southern students attended school in districts in which extreme child poverty rates dipped below 5%” (Hughes). At one point in 1932, there were nearly 250,000 homeless children throughout America. By far the worst off were African Americans, owing to the fact that they were already impoverished. White Americans were preferred as employees in that day and age, so black Americans were considered the first to be fired. Overall, the Great Depression had many effects on society, including the day-to-day struggle of the American people, the effect of the Dust Bowl on agriculture and the economy, and the evolution of the role of the President. The Depression grew increasingly worse during Herbert Hoover’s time in office.
During the 1930s, after World War I, the Great Depression spread from America to the whole world. The song “Wanderin’” by Vernon Dalhart describes how common people in America during the 1930s suffered under the pressures of society; many could not find steady work, and homelessness became a serious problem. Social order fell into chaos, and the crime rate among citizens was very high.

CHAPTER I INTRODUCTION

1.1 Background

Hunger is still a major health concern. Hunger causes malnutrition and other conditions, and famine kills more people than TB, HIV/AIDS, and malaria. A quarter of children born in developing countries are underweight. In general, the United States has some of the highest relative poverty rates among industrialized countries, reflecting both its high median income and its high degree of inequality. Since the 1960s, the United States Government has defined poverty in absolute terms. When the Johnson administration declared “war on poverty” in 1964, it chose an absolute measure. The poverty line is the line below which families or individuals are considered to lack the resources to meet the basic needs for healthy living: income too low to provide food, shelter, clothing, or other necessities. Haiti, one of the poorest countries in the world, with over half of the population living in extreme poverty, is in desperate need of help (“Poverty in Haiti: Aid, Earthquakes, and Imperialism”). The level of poverty in Haiti is so high that it stands out from the other countries in Latin America. Because of extreme poverty, children are often separated from their families and end up living in orphanages. Many Haitians live on the streets without the money or resources needed to overcome poverty (“Top 5 Facts about Poverty in Haiti”).
Throughout history and today, Haitians have lived in poverty with little chance of being able to provide for themselves and their families, but organizations such as KORE are investing in the lives of those suffering. While the United States may be one of the world’s wealthiest nations, teens today face a myriad of social, personal, educational, and financial problems that impede their development, such as child poverty, inadequate educational attainment, inadequate health care, parental separation and divorce, the foster care system, abuse and neglect, and coping with the modern world (Siegel p.3). As our book discusses, child poverty has escalated rapidly since the 2000s; poverty has risen for every age, gender, and racial/ethnic group, with the most severe poverty among the nation’s youngest families (adults under 30), and even more so among families with more than one child living in the home (Siegel p.4). Approximately 14–16 million children in America live in poverty.

Famine: “the incidence of serious food shortage across a country that dangerously affects the nutrition levels, health and livelihood of any people, to the extent that there is a large incidence of acute malnutrition and many people have died of hunger.” – World Food Program

Introduction

Famine in North Korea is a long-running crisis that began with food shortages and deepened into dependence on China and the Soviet Union for food and financial aid. The worst famine in North Korea occurred in the 1990s and killed as many as 1 million North Koreans. Noland and Haggard (2008) reported that in the 1990s, 600,000 to 1 million North Koreans, or about 3–5 percent of the pre-crisis population, perished in one of the worst famines of the 20th century. The 1990s famine was not the first in North Korea; food crises date back to the 1950s, during the Korean War. When a person hears the words “the Great Depression,” almost everyone thinks of the worst economic times in the United States.
The Great Depression started in the late 1920s and continued through the 1930s. It was the worst worldwide economic downturn in history, and it remains the most important economic event in American history to this day. This tragic event caused hardship for millions of people and the failure of many businesses, banks, and farms. This idea was also seen by Eric Rahimian and Fesseha Gebremikael in their article “Poverty Amid Affluence in Alabama” from the Journal of the Alabama Academy of Science: “High poverty rates persist in many inner cities, counties and rural areas, and particularly in areas inhabited by minorities…. In our view, the main causes of poverty are poor education, low income and lack of opportunity.” This idea may have been true during the nineteenth and twentieth centuries, but the higher rates of poverty now are seen between different age groups rather than between demographic groups. According to the United States Census Bureau, the poverty rate for children under the age of 18 is currently 19.7%, while the rate for those aged 18 to 64 is 12.4% and for those aged 65 and older is only 8.8%. This causes many problems for the infected: depression due to isolation, as well as employment and occupational difficulties. As mentioned earlier, onchocerciasis is the world’s second leading infectious cause of preventable blindness (WHO, 2014). People who are blind or partially blind are often unable to find or keep jobs, and rely heavily on their families for income, food, and shelter. According to a WHO 2013 report, the loss in socioeconomic development due to onchocerciasis was thirty million U.S. dollars in the early 1970s. Hunger, unmet nutritional needs, and the medical complications that result from these conditions are leading causes of death in the poverty population. This is a huge problem that needs to be brought to the attention of the public. Poverty breeds hunger and malnutrition.
A congressional investigation in 1968 revealed widespread hunger among the poor in communities across the United States. In the early 2000s these problems were still found in the majority of rural counties of central Appalachia, along with many other surrounding and similar areas (“Poverty”).

The Great Depression

The Great Depression lasted from 1929 to 1939 and was the longest economic downturn in history. It started when the stock market crashed in the United States in October 1929. This caused a domino effect on Wall Street, and word of the crash drove away millions of investors. Over the following years, the situation did not get any better.
https://www.ipl.org/essay/Causes-Of-Poverty-In-Nigeria-FKSKQC67EACP6
This is where to plug in and discuss issues related to the Commonwealth of Virginia. He fought an oppressive regime in Cuba and escaped to the United States. He's a Christian pastor. He wows crowds with an inspiring story about what America means to him. Be a part of the action and join us!
http://va.peninsulateaparty.org/2013/09/do-you-know-rafael-cruz.html
Hi, I'm Ann Rains, a professional photographer specializing in weddings, family, portraiture, and fine art images. I am an experienced instructor who is deeply creative and passionate about making excellent images. Today’s digital cameras can seem overwhelmingly complex. In this course, we spend plenty of time discussing camera operation and how to take advantage of the features provided. Most importantly, we will explore what makes a good photograph. Subjects such as composition, selective focus, exposure, and lighting, combined with a better understanding of camera controls, can open a whole new world of image-making. Each week includes appropriate assignments, and you will be encouraged to share your images with the class. Students of all skill levels are welcome. All students are required to have their own DSLR camera. This class features interactive demos and a field trip, weather permitting. The extra features and camera modes vary from camera to camera, but we will go over the most common modes in digital cameras, including Auto Mode, Manual Mode, Aperture Priority, and Shutter Speed Priority. Knowing these basic camera settings and modes is the key to unlocking your camera’s fullest potential and capabilities. Most beginner photographers are initially overwhelmed by the endless buttons and menu functions on their cameras. This beginner’s course will teach the basic camera settings to help you understand how to operate your camera and adjust it to the way you want it. Capturing great visuals requires a brief introduction to the three most basic camera settings: Aperture, ISO, and Shutter Speed. The combination of these three functions is present in most cameras and is imperative to operating your camera.
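Those three settings trade off against one another through a single quantity, the exposure value (EV). As a rough illustration of the arithmetic (this sketch and its function names are ours, not part of the course materials), EV at ISO 100 is log2(N²/t), where N is the f-number and t is the shutter time in seconds:

```python
import math

def exposure_value(aperture: float, shutter_s: float, iso: int = 100) -> float:
    """EV referenced to ISO 100: log2(N^2 / t), shifted down as ISO rises."""
    return math.log2(aperture**2 / shutter_s) - math.log2(iso / 100)

# f/8 at 1/125 s, ISO 100 -> about EV 13, a typical bright-overcast scene.
print(round(exposure_value(8, 1/125), 2))

# Reciprocity: opening up one stop (f/8 -> f/5.6, i.e. N divided by sqrt(2))
# while halving the time (1/125 -> 1/250) leaves the exposure unchanged.
delta = exposure_value(8, 1/125) - exposure_value(8 / math.sqrt(2), 1/250)
print(round(abs(delta), 6))
```

The second result is zero, which is why many different aperture/shutter/ISO combinations produce the same exposure; choosing among them is where depth of field and motion blur come in.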
If you want to take great photos, you need to understand exposure; the exposure triangle is the foundation of photography. If you’re new to photography, you can get away with taking respectable pictures right out of the box using automatic settings. However, if you want to produce quality images, you need a solid grasp of this basic concept before moving on to the next level. Look at it this way: exposure is to the photographer what measuring is to a carpenter. It’s an essential skill. Understanding what exposure is and how it works is an important first step in becoming a good photographer. As a beginner photographer, you are surely looking at images made by photographers whose work you admire, and you may have noticed that their compositions tend to be far more eye-catching. Have you ever wondered what the difference is between an average photo and a remarkable photograph? How do you know if a photo works? What makes a photograph outstanding? Is it the setting, depth of field, the subject, lighting, balance, the use of leading lines, how the space is utilized, the use of color, or contrast? Photography is about communication between the photographer and the viewer. It’s all about the photographer telling their story through an image. So what makes for a great image? The answer can be rather subjective. Most of us would agree that a great image strikes a chord inside of us; it evokes a strong emotional response in the viewer. But there are also some tangible elements of good photography. As we approach the last few weeks of this course, we will plan two field trips where we will go out and put everything we have learned into action. You will have the opportunity to shoot with your classmates and your instructor. This class will be about having fun and getting creative with your camera.
All the while, your instructor will be on-site to critique and guide your choices. In this photography course for beginners, you will be immersed in photography over 4 separate lessons and 2 field trips. To learn how to use a digital SLR camera, one must first understand the fundamentals of photography. My photography course for beginners teaches those fundamentals through immersion. Manuals instruct the user which buttons to push but don’t teach photography; the immersion method teaches and reinforces the basic building blocks of photography. Once those are understood, one can learn how to capture an image that is a piece of artwork and reflects one’s style. Over the course of 6 weeks, you will learn photography by listening, doing, and absorbing through repetitive activities. You will be challenged each class to use what you have learned and to build new skills on what came before. Each day will consist of hands-on learning. Students will have the opportunity to photograph landscapes, sunrises, sunsets, flowers, people, and wildlife in a structured, non-competitive environment. My goal is to strengthen and maintain the learning process. You must have a Digital SLR camera with interchangeable lenses. This course is perfect for Nikon, Canon, Olympus, Fuji, Sony, or other digital cameras with a dial on top that lets you control different camera modes. Bring your camera manual (if you don't have one, you can find the digital version online). If you have a point-and-shoot camera that only has an Auto mode, and you are unable to set the shutter or aperture yourself, this course may not be for you.
https://www.annrainsphotography.com/basics-of-photography/
As a specialised field, hydraulic engineering is concerned with the hydraulic pressure of fluids, such as oil and water. It also deals with some of the technical challenges facing sewerage design and water infrastructure, and studies fluid flow and the behaviour of water in large quantities. A major area of interest for hydraulic engineers is the design of water storage and transport facilities, including dams, canals, lakes, channels, and any other facilities used in storing and transporting water. Engineers design hydraulic-powered machinery and equipment strong enough to withstand intense pressure. They also apply fluid dynamics theory to help predict water flow and its interaction with the surrounding environment. This sub-discipline of civil engineering extensively uses gravity to cause fluid movement. It is also linked to the design of canals, dams, and levees. By applying the principles of fluid mechanics, engineers can handle issues of control, storage, regulation, measurement, transportation, and water use. They also develop conceptual designs to support the features that interact with water through various channels such as spillways.

History and Applications of hydraulic engineering

The earliest applications of hydraulic engineering include crop irrigation, which can be traced back to Africa and the Middle East. The control of water supply allowed food to be grown even when water was limited. Irrigation has since been used for thousands of years to boost crop production and increase food supply. The water clock is among the earliest hydraulic machines, used from the beginning of the 2nd millennium BC. The Turpan water system (ancient China), irrigation canals (Peru), and the Qanat system (ancient Persia) are also perfect examples of the early use of gravity to control the movement of water.
Hydraulic engineering was more advanced in ancient China and engineers applied the concept to construct massive canals with dams and levees to help control the flow of water and use it for irrigation. The concept of hydraulic engineering is still used today and hasn’t changed much since ancient times. The gravity phenomenon is still used to move liquids through canals, but reservoirs are now filled using pumps. Many of the world’s largest cities would not be able to support their population with a limited amount of local water. Effective water distribution and management has enabled cities to support their populations through crop irrigation. The building of dams has also enabled cities to generate cheap electricity to power residential, commercial, and industrial establishments. Today’s hydraulic engineer applies computer-aided design tools, computational fluid dynamics and related technologies to calculate and accurately predict fluid flow characteristics. Therefore, the concept of hydraulic engineering is not new, as it has been used since ancient times in Africa and the Middle East for crop irrigation to increase food production. It has also been useful in building dams to control water and generate electricity to power cities. Today, modern hydraulic engineering uses CAD tools and related technologies to predict the flow of water. As technology advances, hydraulic engineers will be able to control and manage water in a way that minimises wastage and promotes efficiency.
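The gravity-driven movement described above follows simple fluid-mechanics relations. As a minimal illustrative sketch (the function names and the 0.6 discharge coefficient are our assumptions, not taken from the article), Torricelli's law gives the ideal outflow speed under a gravity head h as v = sqrt(2gh), and the flow rate through an opening follows from Q = Cd·A·v:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def torricelli_speed(head_m: float) -> float:
    """Ideal outflow speed (m/s) of water under a gravity head: v = sqrt(2*g*h)."""
    return math.sqrt(2 * G * head_m)

def discharge(head_m: float, orifice_area_m2: float, cd: float = 0.6) -> float:
    """Volumetric flow rate Q = Cd * A * v; Cd ~ 0.6 is a typical sharp-orifice value."""
    return cd * orifice_area_m2 * torricelli_speed(head_m)

# A 4 m head driving flow through a 0.05 m^2 opening:
print(round(torricelli_speed(4.0), 2))  # about 8.86 m/s
print(round(discharge(4.0, 0.05), 3))   # about 0.266 m^3/s
```

Real designs replace these idealized formulas with computational fluid dynamics, but the same head-to-flow reasoning underlies sizing canals, spillways, and penstocks.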
http://www.selfhelpfraud.com/applications-of-hydraulic-engineering/
#BREAKING: 98 new cases of COVID-19 registered in Oman, total now 2,735 Oman’s Ministry of Health (MoH) has announced the registration of 98 new cases of COVID-19 novel coronavirus in the Sultanate. In a statement issued online today [Tuesday, May 5], the MoH has confirmed that the total number of cases in the country now stands at 2,735. Of the 98 new cases announced today, 42 cases are among Omani nationals and 56 are among non-Omani residents. The MoH also stated that the number of COVID-19 recoveries rose to 858, with 12 deaths recorded thus far.
Human apolipoprotein E (apoE) is a 34-kDa, 299-residue exchangeable apolipoprotein that plays a critical role in lipid transport and cholesterol metabolism in the plasma and brain. The APOE gene polymorphism results in three different alleles, ϵ2, ϵ3, and ϵ4, which produce the common protein isoforms apoE2, apoE3, and apoE4, respectively. Whereas apoE3 is considered to be an anti-atherogenic protein, apoE4 is considered a risk factor for developing Alzheimer’s disease (AD). ApoE associates mainly with VLDL but is associated with HDL-like particles in the brain. During brain cholesterol metabolism, apoE associates with HDL to form discoidal nascent HDL. In this study, we aim to determine the conformation of apoE4 by using chemical crosslinking, N-(1-pyrene)maleimide as a fluorescence probe, and mass spectrometry. Single cysteine mutants of apoE4 were expressed in E. coli, purified by affinity chromatography, and reconstituted with POPC. Discoidal rHDL (apoE4/POPC) was chemically cross-linked using a Cys-specific crosslinker. SDS-PAGE revealed only monomeric bands in all single-Cys variants. The lack of dimers suggests that the Cys residues on the apoE4 mutant molecules were not at a crosslinkable distance when bound to rHDL. We confirmed the conformation of apoE4 around the discoidal particles by an independent approach using spatially sensitive fluorescence probes that allow us to measure the proximity of two Cys residues on two different apoE4 molecules on rHDL. The combination of cross-linker distance constraints and the excimer fluorescence emission spectra of the seven apoE4 mutants on HDL allows us to rule out a parallel double-belt conformation for apoE4 on discoidal rHDL. Our results support an out-of-sync parallel double-belt, anti-parallel double-belt, or head-to-head hairpin model for apoE4/rHDL.
The significance of this study is that it offers an innovative approach to obtain insight into the structure and organization of apoE on large lipoprotein complexes. Further, it allows us to identify potential differences between apoE3 and apoE4 from a structural perspective and determine distinguishing features that contribute to the role of apoE4 in developing AD.

Advisor: Narayanaswami, Vasanthy
Committee: Lee, Yuan Yu; McAbee, Douglas D.
School: California State University, Long Beach
Department: Chemistry and Biochemistry
School Location: United States -- California
Source: MAI 58/01M(E), Masters Abstracts International
Source Type: Dissertation
Subjects: Biochemistry
Publication Number: 10784272
ISBN: 978-0-438-20900-8
Copyright in each Dissertation and Thesis is retained by the author. All Rights Reserved.
https://pqdtopen.proquest.com/doc/2081210202.html?FMT=ABS
Large institutions are often bound by inflexible planning cultures, with few immediate incentives for strategic long-term planning, policy thinking and enduring process innovation. ‘Just in time’ mindsets create significant risk of being caught out by surprises and unexpected consequences. Reducing this risk is both possible and compatible with the predictable structure – and focus on delivery – that effective institutions need to maintain. Futures analysis – careful structured thinking about trends, directions of change and likely implications – helps minimise surprise for organisations without undermining other strengths. This is not a new concept; the Singaporean government and organisations such as the US Defense Department have been building capability in futures analysis for years. Providing decision-makers with a credible spectrum of what may happen helps them better understand the key, and potentially misleading, assumptions upon which current policy is based, and lets them test future policy and capability plans in different contexts. The National Security College Futures Hub is a significant repository of techniques and is actively developing new techniques tailored for policy thinking. With partners across the Futures Network, we are constantly testing and refining the most useful techniques for the public sector. A strong public-sector futures culture will be integral to understanding what constitutes the ‘public good’ of the future and testing the roles for the governments of tomorrow. It will still be governments – even allowing for the internet’s capacity to diffuse power and influence – which retain the systemic authority to shape change in service of that broad public interest. Singapore, the UK, Finland and Canada are considered to have the most established Futures capabilities.
Their experience suggests successful Futures units rely on dedicated resources, strong mandates and being empowered to consider a wide range of sensitive issues in an interdisciplinary context.
https://futureshub.anu.edu.au/future-analysis/
I’ve spent the last couple of months in the Taita Hills in SE Kenya where I am studying the impacts of anthropogenic habitat degradation on bird functional diversity and composition. Specifically, I’m working in a sky island complex of massifs topped with remnant montane forests that form the northernmost extent of the Eastern Arc Mountains. The forest fragments on these hills are designated as Key Biodiversity Areas (KBA) and Important Bird Areas (IBA) because of high levels of endemism and biodiversity. This area is ideal for this research as it shows very high levels of historical habitat fragmentation and different degrees of degradation through various human land-uses. I am starting with characterising the bird communities of the different forest fragments and the surrounding agricultural matrix by identifying bird species via point counts & AudioMoth sound recordings. This data will be combined with an existing traits database so that we can determine what functional roles are present (and to what extent) in each habitat. Another approach that we’re using to try to understand how effective birds are at controlling pest insects is by using plasticine model “caterpillars”. The attack marks that are left behind help us to identify the levels of predation relative to habitat quality. This lays the foundation for my next field season when we will be capturing birds to collect faecal samples which will be analysed using DNA metabarcoding. This will provide us with information on how birds’ diets are influenced by habitat quality and also allow us to quantify the ecosystem functions that birds perform – like controlling herbivorous insect pests and seed dispersal.
https://ecologyfieldnotes.com/2019/10/
4.1. Scientific problem in question The project will be focused on the estimation and forecasting of ongoing environmental changes in the Arctic and their impacts on human wellbeing and infrastructure. This will help to develop a thoughtful strategy in anticipation of the natural and anthropogenic changes in the Arctic, in order to allow adaptation of the population to these changes, mitigation of major detrimental impacts of changes that cannot be avoided, and to lay down the pathways of sustainable development of the region. 4.2. Importance of the problem for a particular research area The Arctic climate and environment have been rapidly changing. The projected rate of climate change in the Arctic is more than twice the global rate of temperature change, and the consequences associated with these changes are likely to be serious and felt far beyond the Arctic region (ACIA 2005; IPCC 2013; Walsh et al. 2011a,b). To a great extent, the expected climate-induced changes in the Arctic are associated with warmer temperatures, changes in the hydrological cycle, reduction in ice extent, degradation of permafrost, and an accelerated rate of coastal erosion. Climate-induced changes in the Arctic are likely to affect human society by opening up economic opportunities and causing rapid social changes. For example, most of the Arctic regions have the potential for onshore and offshore exploration and production of a variety of non-renewable resources. According to USGS estimates, about 30% of the world’s undiscovered gas resources and 13% of the world’s undiscovered oil resources may be found in the area north of the Arctic Circle (Gautier et al. 2009). Beyond fossil fuels, the Arctic has large reserves of minerals, ranging from gemstones to fertilizers. For these critical commodities, the region’s role is likely to increase considerably in the future.
Furthermore, maritime activity in the Arctic is at present restricted by prevailing ice conditions and harsh polar meteorological conditions. Climate change is expected to increase marine access to the Arctic regions, especially with the possible opening of hitherto closed passages such as the North-West Passage (NWP) and the Northern Sea Route (NSR). Increased offshore and onshore natural resource activity will promote maritime and land transportation and the development of various types of infrastructure in many parts of the region. Nature-based economic activities in the Arctic are highly sensitive to climate change. However, great uncertainty exists about the overall impact of changes in natural environments on economic development in the Arctic. Although climate change might make some economic activities in the region more profitable, potentially leading to overall improvements in welfare in the Arctic and beyond, the distributional impacts of climate-induced changes on development may be economically uneven and/or environmentally hazardous. The proposed research seeks to address this problem by providing a quantitative evaluation of the magnitude and spatial pattern of ongoing and anticipated climate-induced changes with the potential to affect socio-economic development in the Circumpolar Arctic beneficially (or detrimentally), and by drafting recommendations on how to exploit (or mitigate the impact of) these changes. We will focus on selected human activities in the Arctic, including fossil fuel and mineral extraction, maritime and land transportation, and the required infrastructure development. These activities are of great importance to the Arctic region, are highly sensitive to changes in climatic conditions, and have large societal and environmental impacts. Geographically, we will consider coastal areas of the Circumpolar Arctic.
For the purpose of this proposal, the term “coastal areas” includes the shelf of the Arctic Ocean, Arctic coasts, and near-shore land. These areas contain a significant proportion of Arctic communities, which are likely to be most affected by climate-induced changes and by ongoing and prospective development.
4.3. The objectives of the project
The project will focus on the following objectives:
- Utilize all available observations and modelling products to quantitatively assess changes in the meteorological, oceanographic and environmental variables that directly affect ongoing and future societal well-being and economic development in the coastal areas of the Circumpolar Arctic.
- Quantitatively evaluate the impacts of climatic and environmental changes on the societal well-being and economic development of the Arctic coastal areas. These include fossil fuel and mineral extraction, maritime and land transportation, industrial fishing, and infrastructure development.
- Quantitatively assess the magnitude and spatial pattern of positive and negative climate-induced changes with the potential to influence economic development in the Circumpolar Arctic.
- Prepare a suite of recommendations to mitigate negative climate-induced impacts and achieve sustainable development that contributes to the highest possible quality of life in the Arctic and benefits both the region and the Arctic nations.
Our research will employ analysis of observational and modelled data on climatic and environmental variables and socio-economic parameters. Methodologically, the research will consist of the three interrelated tasks outlined below.
Task 1: We shall analyse all synoptic, cryospheric, oceanographic, and geophysical data (observations and reanalysis output) available in the study region for the post-1950 period to calculate time series of socially important variables (SIVs) for the built environment, housing and transportation structures, and human wellbeing (e.g., heating degree days, warm- and cold-season degree days, near-surface wind speed and sea wave characteristics, icing conditions, duration of the sea-ice-free period, rates of coastal erosion, etc.). We shall compare these time series with the output of the CMIP5 GCM runs for the 1950-2010 period and select only those GCMs that reproduce the climatology and dynamics of these time series (e.g., mean, variance, trends) reasonably well. Thereafter, we shall use only the output of these reliable models (rGCMs) in the next step.
Task 2: We shall collect information on the regions most promising for future economic development. For these regions we shall assess the current societal well-being of the population of settlements along the Arctic coast, of personnel servicing the built environment, and of the crews of transport and fishing fleets. We shall develop the most probable decadal projections of region- and sector-specific SIVs for each climate change scenario assessed by the rGCMs.
Task 3: We shall utilize socio-economic data and analysis in conjunction with the SIV estimates and projections from Tasks 1 and 2 to assess the potential societal well-being resulting from increased development and climatic change. We will develop a suite of recommendations on how to mitigate the negative consequences of projected climatic and socio-economic changes for different sectors of the Arctic economy, societies, and nations.
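The Task 1 screening step can be illustrated with a small sketch. Assuming annual SIV series (heating degree days are used here as an example) for observations and a candidate GCM, a model is retained only if its mean, variance, and linear trend fall within tolerances of the observed series. The function names and tolerance values below are illustrative assumptions, not the project's actual selection criteria:

```python
import numpy as np

def heating_degree_days(daily_temp_c, base_c=18.0):
    """Annual heating degree days: sum of (base - T) over days with T below base."""
    deficit = base_c - np.asarray(daily_temp_c, dtype=float)
    return float(np.sum(deficit[deficit > 0]))

def model_is_reliable(obs, model, mean_tol=0.5, var_ratio_tol=0.5, trend_tol=0.5):
    """Screen a model SIV series against observations on mean, variance, and trend.

    obs, model: 1-D arrays of an annual SIV over the same years.
    Tolerances are illustrative placeholders, not the proposal's thresholds.
    """
    obs = np.asarray(obs, dtype=float)
    model = np.asarray(model, dtype=float)
    years = np.arange(len(obs))
    # Mean within a fraction of observed interannual variability
    mean_ok = abs(model.mean() - obs.mean()) <= mean_tol * obs.std()
    # Variance ratio close to one
    var_ok = abs(model.var() / obs.var() - 1.0) <= var_ratio_tol
    # Linear (degree-1) trends in agreement
    obs_trend = np.polyfit(years, obs, 1)[0]
    mod_trend = np.polyfit(years, model, 1)[0]
    trend_ok = abs(mod_trend - obs_trend) <= trend_tol * abs(obs_trend) + 1e-9
    return bool(mean_ok and var_ok and trend_ok)
```

In practice each criterion would be evaluated per region and per SIV, and only models passing all of them would enter the rGCM ensemble used in Task 2.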
4.4. New aspects of the problem
The novelty of the proposed research lies in the attempt to synthesize analyses of the climate, environmental, and socio-economic conditions in the Arctic and their changes, in order to provide a new vision and projection of the sustainable development of living conditions and infrastructure in the Arctic under present and future conditions. The main hypothesis of the proposed research can be formulated as follows: while ongoing and future climatic changes in the Arctic coastal areas are likely to provide opportunities for further development of natural resource industries, transportation, and associated infrastructure, they also have a strong potential to adversely affect the natural environment, all sectors of the economy, and the well-being of Arctic residents, in other words, to produce climate-induced hazards. Collectively, socio-economic and climatic factors can greatly affect the sustainability of Arctic settlements, thus driving changes in land use, demographics, and development policies. An important new challenge is the need to detect observed changes using sparse observational networks. In the Arctic, key variables (e.g., humidity, wind, precipitation, and upper-air data) are reported with biases that have changed over time, introducing inhomogeneities (Goodison et al. 1998; Groisman and Barker 2002; van Wijngaarden and Vincent 2005; Durre et al. 2006). Automation of synoptic observations, introduced in the United States during the past 20 years and in Canada during the past decade, adds to these inhomogeneities (e.g., the reduced complexity of automated in situ cloudiness observations has made them incomparable with manual cloud reports). The availability of data from these networks is time-dependent, generating spurious trends and biased climatologies (Wang et al. 2012).
Another novel aspect is the focus on the synergy between observational data and the results of modern GCMs, which do not yet reproduce well several critical aspects of Arctic cryosphere dynamics, in particular sea ice changes. One potential reason for this underestimation lies in ignoring the role of marine storminess in the sea ice decline. The initial decline of sea ice extent under increasing temperatures enlarges the open water area in the Arctic Ocean basin. Even with no change in wind speed, this increases the fetch and potentially produces more intense surface wind waves (both sea and swell), mechanically precluding the formation of stable young sea ice when temperatures drop in, e.g., autumn. A positive feedback between this effect and growing temperatures may damp the formation of seasonal sea ice during the winter cycle and contribute to the overall sea ice decline. Changes in circulation patterns and the strength of surface winds that are likely to occur over the Arctic (e.g., Wu et al. 2012) would further contribute to the sea ice decline. Nonlinearity of Arctic System changes is further expected once the Arctic sea ice becomes seasonal and its extent declines further. It may well be that some of the GCMs that performed decently in simulating Arctic changes during the past 60 years will fail at the next tipping point of the Arctic System. The novelty of the project also lies in the use of environmental and socio-economic data, along with climate data and models, for describing ongoing, and projecting anticipated, Arctic environmental changes. In particular, we are going to use data on coastal erosion, although these data are still rather sparse and spatially irregular. Average rates of coastal retreat are usually 0.5-2 m/year but can reach 30 m/year and more in some locations (Forbes et al. 2011).
Coastal retreat rates are highly variable owing to variations in geomorphology and in lithological and permafrost conditions (Lantuit et al. 2011).
4.5. The present state of research in the area
A large body of theoretical and observational evidence of rapid climate and environmental change has accumulated to date. For example, trends in surface air temperature and sea ice extent, two well-monitored characteristics of the Arctic environment, indicate significant warming over the last two decades. Moreover, many studies suggest that Arctic warming will continue at twice the rate of global temperature change (ACIA 2005; IPCC 2007; Walsh et al. 2011a,b; cf. Figure 1).
Figure 1. Summary of the observed Arctic climate signals. (a) Two independently evaluated observed Arctic annual mean surface air temperature anomalies, from the surface temperature dataset compiled by the Russian State Hydrological Institute (RSHI-T, blue; Groisman et al. 2006, updated to 2011) and from the Climatic Research Unit of the University of East Anglia (CRUTEM3, red; Brohan et al. 2006), together with the ensemble of 20th-century simulations with the CMIP3 models using both anthropogenic and natural radiative forcing (after Semenov et al. 2010). The ensemble mean is given by the thick black line; the shading shows the range in which 90% of the individual model realizations lie. Model data were masked (with respect to missing values) in the same way as the CRUTEM3 observational data. (b) Zonally averaged trends in surface air temperature for the period from 1951 to 2010 (Hansen et al. 2010). (c) Areal changes in the September Arctic sea ice extent during the last 30 years (10⁶ km²), implying a more than 35% decline in sea ice extent according to the Arctic sea ice extent data compiled by the U.S. National Snow and Ice Data Center (Fetterer 2002). In 2012 this extent was the lowest since the 1980s.
Results from the Coupled Model Intercomparison Project 5 (CMIP5) indicate that while all General Circulation Models (GCMs) that account for all known external forcings show qualitatively correct tendencies, they cannot accurately reproduce the observed changes in Arctic sea ice over the last decades. For example, the IPCC AR4 and AR5 model ensembles, on average, estimate the present rate of reduction in Arctic sea ice to be half of that observed (Stroeve et al. 2007; Kattsov et al. 2010). It is thus quite possible that the decline of sea ice extent projected by the new models is also somewhat underestimated (Stroeve et al. 2012). Aside from the reduction in sea-ice extent and longer ice-free conditions in the warm season, especially in the Eurasian Sector of the Arctic, observational analyses indicate significant structural changes in the Arctic sea ice, such as a reduction in thickness and an increase in the fraction of first-year ice (Rothrock et al. 1999; Kwok and Untersteiner 2011). These ice changes are accompanied by an increased frequency of unusual atmospheric circulation patterns in the Atlantic Sector (Petukhov and Semenov 2010) and an anticyclonic circulation pattern in the Pacific Sector of the Arctic (Proshutinsky et al. 2012). Projected changes in cyclonic activity indicate that the total number of cyclones will not change significantly with warming and is more likely to decrease (Loeptien et al. 2008; Ulbrich et al. 2009). The number and intensity of polar lows will also likely decrease (Zahn and von Storch 2008). However, there are two important factors to consider when projecting Arctic climate: (i) enhanced poleward deflection of cyclone tracks, i.e. a northward (counter-clockwise) turn of the major North Atlantic storm track as a result of the projected weakening of the meridional temperature gradient over mid-latitudes due to amplified Arctic warming; and (ii) an increasing number of rapidly developing, very deep cyclones (Trenberth et al.
2007; Loeptien et al. 2008; Ulbrich et al. 2009). Owing to the decrease in sea ice extent, large areas of open water may become exposed to direct interaction with the atmosphere, providing diabatic heating and resulting in the intensification of existing storm tracks and the generation of new cyclones. This effect has been investigated by Serreze and Barrett (2008) and Simmonds and Keay (2009), who argued for the formation of a previously unidentified, robust summer storm track in the Eastern Arctic during the last decades. The potential changes in cyclonic activity and their impacts on moisture transport and local storminess are still poorly understood and yet to be quantified in both models and reanalyses (Wang et al. 2012). Analysis of cyclonic activity, its relation to changing sea ice conditions, and the associated impact of the ocean on low-level baroclinicity contributes to further understanding of the amplification of Arctic warming (Screen et al. 2012) and of the intraseasonal changes in the Arctic heat balance (Screen and Simmonds 2010). One of the most important effects of climate-induced changes on the Arctic economy is improved accessibility to natural resources. This change coincides with the increase in global demand for energy and declining production in well-developed Arctic areas (e.g., the Prudhoe Bay Oil Field in Alaska and the Medvezhye Gas Field in Russia). Although resource exploitation and extraction are not new to the Arctic, the last decades have seen a significant increase in the number and scale of newly proposed projects. For example, test drilling has recently been permitted in the Beaufort Sea in Alaska; submission of bids for Exploration Licenses has been launched in Canada for areas in the Beaufort Sea and Mackenzie Delta; Russia has adopted its Arctic strategy (Zysk 2010); and intense negotiations are in progress on the development of the Shtokman gas field in the Barents Sea.
Such increased activity in the Arctic is attributable not only to climate change-related factors but also to improvements in offshore technology, oil-price developments, and the stable political situation in the Arctic, which promotes long-term investments. The fossil fuel resources of the Arctic coasts and shelf might serve as a driver of further economic prosperity for the region and for the Arctic nations as a whole. However, such development presents significant technological, socio-economic, and ecological challenges. Recognition of these challenges has prompted a growing discussion of sustainable Arctic development (e.g., Duhaime et al. 1998; Caulfield 2000), which protects and enhances the environment and the economies, culture, and health of Indigenous Peoples and Arctic communities while improving the economic and social conditions of Arctic residents. Recently, a Sustainable Development Working Group has been established under the auspices of the Arctic Council. Climate change is likely to challenge the petroleum industry in many ways. Offshore exploration and production are likely to benefit from less extensive and thinner sea ice, although equipment will likely be costlier, as it will be required to withstand increased wave forces, icing, and ice movement. Onshore, climate change is likely to lead to increased costs, as described below, but offshore the consequences are uncertain and will probably vary. It is therefore important to identify and quantify the possible hazards associated with offshore petroleum development in a changing climate and to develop recommendations for an adaptation and mitigation strategy.
Figure 2. The Northern Sea Route and the Northwest Passage compared with currently used shipping routes.
Seasonal variation in shipping activities (both transportation and fisheries) is controlled primarily by prevailing ice conditions. In areas of lower or no ice coverage, transportation activity has a more regular pattern.
Climate change is expected to increase marine access to the Arctic regions, especially with the possible opening of presently closed passages such as the Northwest Passage and the Northern Sea Route (north of the North American and Eurasian continents, respectively). A navigable Northwest Passage could shorten the shipping route from Europe to the West Coast of the United States by 30-40% compared with the current route through the Panama Canal, and from Europe to Asia by more than 40% compared with the current route through the Suez Canal (Figure 2). The opening of new passages would also make it easier to transport mineral resources, including oil and gas, via the new open sea routes. In addition, increased offshore and mining development will increase maritime activity in the region. Figure 2 shows the present navigation routes in the Arctic, the Arctic fishing grounds (striped areas), and the approximate boundaries of the oil and gas fields (encircled by blue lines) as they were known in 2000 (source: Protection of the Arctic Marine Environment Working Group of the Arctic Council). Promising new oil and gas fields have since been outlined along the shelf seas of eastern Eurasia (Bird et al. 2008; Malyshev et al. 2011). Accurate forecasting of ice conditions and climate tendencies in the Arctic region for the next few decades is therefore critically important for the whole economic infrastructure of the region, including the oil and gas industry, offshore engineering, and especially transportation along the NWP and NSR, as well as fishing navigation (cf. Wassman and Lenton 2012). In the distant future, trans-Arctic commercial routes in all directions along geodesic lines through the North Pole may become a reality, further shortening distances and affecting the world economy and wealth.
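The quoted distance savings can be sanity-checked with great-circle arithmetic. The sketch below is illustrative only: haversine distances are lower bounds that ignore actual navigable routes and ice conditions, and the percentage saved depends entirely on the route lengths supplied:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points (degrees) in kilometres."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def savings_pct(current_route_km, arctic_route_km):
    """Percent distance saved by an Arctic route relative to the current route."""
    return 100.0 * (current_route_km - arctic_route_km) / current_route_km
```

For a real comparison one would chain haversine segments along each waypoint of the canal route and of the NWP/NSR track; a single great-circle figure only bounds the possible saving from below.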
The prospect of easier ocean access to transport routes and resources will generate increased shipping, but also new climate-induced hazards. Climate-change impacts on these activities, such as increasing storminess or atmospheric humidity, are likely to have significant consequences that must be quantified. Observed and projected climate change has major impacts on Arctic land and coastal infrastructure. Of the approximately 370 Arctic settlements, more than 80% are located in the coastal zone (Anisimov et al. 2010). Climate-induced changes in permafrost temperature (Romanovsky et al. 2010) and increased rates of coastal erosion (Forbes et al. 2011) may have detrimental impacts on Arctic coastal communities. Areas underlain by ice-rich permafrost, such as the Kara, Laptev, East Siberian, Chukchi, and Beaufort Sea coasts, are the most vulnerable. Ongoing climate change is already affecting infrastructure in the Russian permafrost regions that was developed during the 1950s-1980s to support NSR navigation and natural resource development. More than 75% of this infrastructure is constructed according to the passive principle, which maintains equilibrium between the thermal regime of the permafrost and the structure through the foundation's bearing capacity (Shur and Goering 2008), and is not designed to withstand changes in climatic conditions beyond natural variability (Khrustalev et al. 2011). According to some estimates (e.g., Kronic 2001), over the last decade of the 20th century the rate of building failures increased by up to 90% in some Russian Arctic settlements, with the share of buildings showing deformations varying from 10% in Norilsk to 80% in Vorkuta. In the North American sector of the Arctic, several coastal villages are threatened by coastal erosion.
While some communities are adapting to the changes by building protective dams (e.g., Barrow), others are weighing the costs of relocation (e.g., Newtok and Kivalina in Alaska, Tuktoyaktuk in Canada).
Figure 3. Annual anomalies of the average thickness of seasonally frozen ground in Russia from 1930 to 2000. Each data point represents a composite from 320 stations as compiled by the Russian Hydrometeorological (RHM) stations (upper right inset). The composite was produced by taking the sum of the thickness measurements from each station and dividing the result by the number of stations operating in that year. Although the total number of stations is 320, the number providing data may differ from year to year; the minimum was 240. The yearly anomaly was calculated by subtracting the 1971-2000 mean from the composite for each year. The thin lines indicate the 1 standard deviation (1σ) (likely) uncertainty range. The line shows a negative trend of -4.5 cm per decade, a total decrease in the thickness of seasonally frozen ground of 31.9 cm from 1930 to 2000 (Frauenfeld and Zhang 2011) (reproduced from the IPCC AR5 report).
According to IPCC AR5, an estimate based on monthly mean soil temperatures from 387 stations across part of the Eurasian continent suggested that the thickness of seasonally frozen ground decreased by about 0.32 m during the period 1930-2000 (Figure 3) (Frauenfeld and Zhang 2011). Inter-decadal variability was such that no trend could be identified until the late 1960s, after which seasonal freeze depths decreased significantly until the early 1990s. A potentially dangerous situation is emerging with respect to transportation routes and facilities (Streletskiy et al. 2012c). Across the Arctic, railroads, paved roads, and runways built on permafrost suffer from subsidence associated with thawing of the ground ice (cf. Grebenets et al. 2012; NRC 2008; US Arctic Research Commission 2003).
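The station-composite anomaly procedure described in the Figure 3 caption (per-year average over whichever stations report, minus the 1971-2000 reference mean) can be sketched as follows; this is an illustrative reconstruction, not the original processing code:

```python
import numpy as np

def yearly_composite(thickness_by_year):
    """Average frozen-ground thickness across the stations reporting in each year.

    thickness_by_year: dict mapping year -> list of station thickness values (cm).
    Years with no reporting stations are dropped.
    """
    return {yr: float(np.mean(vals)) for yr, vals in thickness_by_year.items() if vals}

def anomalies(composite, base_years):
    """Subtract the mean over a reference period (e.g. 1971-2000) from each year."""
    base = float(np.mean([composite[y] for y in base_years if y in composite]))
    return {yr: val - base for yr, val in composite.items()}
```

Because the set of reporting stations changes from year to year (between 240 and 320 in the figure), a production version would also need to guard against composition changes in the network introducing spurious steps, one of the inhomogeneity problems discussed in Section 4.4.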
Arctic countries, especially Russia and Canada, rely heavily on winter roads and drivable ice crossings to supply communities in remote areas. Climate warming has reduced both the operating period of winter roads and their bearing capacity in the Russian and American sectors of the Arctic alike (Lonergan et al. 1993; Streletskiy et al. 2012b). The Russian North is the most severely affected because there, in contrast to Alaska and Northern Canada, air transport is poorly developed. The condition of oil and gas pipelines is also a serious concern: approximately 35,000 pipeline accidents have been reported in the West Siberia region alone, and maintaining pipeline operability under changing permafrost conditions costs up to 55 billion rubles annually (Anisimov et al. 2010). Erosion threatens oil terminals located at Varandei (Yamal, Russia) and may affect proposed gas-processing facilities in Canada's Yukon and Northwest Territories. It is clear that climatic change will matter for commodity extraction, marine activities, and coastal and land infrastructure, in ways relevant not only to the Arctic regions. A thoughtful strategy should be developed, in anticipation of the natural and anthropogenic changes in the Arctic, to allow the population to adapt to these changes, to mitigate major detrimental impacts, and to lay down pathways for sustainable development of the region, its population, and the Arctic nations as a whole. Such a strategy should be based on quantitative assessments of the effects of climate-induced changes on socio-economic development.
4.6. Competing partners
Several research groups around the world currently perform research on integrated climate impacts in high latitudes. We can particularly mention the Arctic and Antarctic Research Institute (St. Petersburg, Russia), the Bjerknes Centre for Climate Research (Norway, Prof.
Noel Keenlyside), and the University of Alaska Fairbanks (USA). Many other groups conduct research in particular areas related to this proposal. For instance, monitoring of ice and snow conditions from in situ observations and from space is performed by the NOAA National Snow and Ice Data Center (NSIDC; Dr. Mark Serreze, Boulder, USA). Detailed hindcasting of atmospheric conditions over the Arctic is being developed at the Byrd Polar Research Center (BPRC) of The Ohio State University (Prof. David Bromwich, Columbus, USA) under the Arctic System Reanalysis (ASR) project. Arctic climate modelling is carried out at a number of centers, including GEOMAR (Kiel, Germany), NCAR (Boulder, Colorado), and the University of Washington (Seattle, USA). We maintain close cooperation with most of these groups and centres and intend to continue this cooperation under the proposed project.
http://ael-msu.org/?page_id=60
Background
==========
Cervical cancer is the fourth most frequent cancer in women, with an estimated 570,000 new cases in 2018, representing 6.6% of all female cancers (WHO report, 2018). According to 2013 data from the WHO, developing countries account for more than 85% of these cases. In India, the second most populous country in the world, over 80% of cervical cancers present at a fairly advanced stage. Current estimates for India indicate that every year 122,844 women are diagnosed with cervical cancer and 67,477 die from the disease (HPV and Related Cancers, Fact Sheet 2017). Infection with high-risk human papillomavirus (HPV) has been recognized as an essential factor for the development of cervical cancer. HPV infection is the most common sexually transmitted infection worldwide, and most sexually active individuals acquire it at some point in their lives [1]. HPVs can also cause cancer of the vagina, vulva, penis, and anus, as well as some head and neck cancers, anogenital warts, and recurrent respiratory papillomatosis. Most HPV infections (~90%) clear within 6-18 months after acquisition, without any clinical signs or symptoms (transient infections) [2, 3]. However, some infections become persistent and increase the risk of premalignant or malignant disease [4]. Of these, only 0.3%-1.2% of initial infections eventually progress to invasive cervical cancer. In addition to HPVs, other risk factors for progression to cervical cancer are immunodeficiencies, such as in renal transplantation or human immunodeficiency virus disease [5], although sexual and reproductive factors, long-term oral contraceptive use [6], smoking [7], and *Chlamydia trachomatis* infection [8] have also been implicated [4].
Natural history of Human Papillomavirus (HPV) infection
-------------------------------------------------------
Human papillomaviruses (HPV) are DNA tumor viruses belonging to the family Papillomaviridae. More than 200 human and animal papillomavirus genotypes have been characterized and sequenced. Of the approximately 30 HPVs that infect the anogenital tract, the 15 HPV types classified as 'high-risk' (HPV types 16, 18, 31, 33, 35, 39, 45, 51, 52, 56, 58, 59, 68, 73 and 82) are associated with high-grade lesions and invasive cervical cancer [9]. Of these, HPV16 and HPV18 are the most important types, causing ~70% of squamous cell carcinomas and >90% of adenocarcinomas [10]. On the other hand, the 11 HPV types classified as 'low-risk' (HPV types 6, 11, 40, 42, 43, 44, 54, 61, 70, 81 and CP6108) are mainly associated with genital warts and benign cervical lesions. Human papillomaviruses are non-enveloped viruses with an icosahedral capsid enclosing a circular double-stranded DNA genome about 7,900 bp long. Based on protein expression during the viral cycle, two functional genome regions have been identified: (i) a coding region containing the early genes E1, E2, E4, E5, E6, and E7, and (ii) a region containing the two late genes, encoding the major (L1) and minor (L2) capsid proteins. In addition, the HPV genome has a non-coding region, termed the long control region (LCR), which includes most of the regulatory elements involved in viral DNA replication and transcription [11]. During HPV infection, the different viral proteins are expressed sequentially. The present review focuses on the etiology of HPV-mediated carcinogenesis and on the cellular pathways and molecular mechanisms involved in the transition from HPV infection to the malignant transformation leading to cervical cancer.
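The high-risk/low-risk grouping above can be expressed as a simple lookup. This is purely an illustrative sketch of the classification listed in the text; the non-numeric type CP6108 is noted in a comment rather than included in the numeric set:

```python
# Genotype groupings as listed in the text (high-risk per [9]; low-risk plus CP6108).
HIGH_RISK = {16, 18, 31, 33, 35, 39, 45, 51, 52, 56, 58, 59, 68, 73, 82}
LOW_RISK = {6, 11, 40, 42, 43, 44, 54, 61, 70, 81}  # CP6108 also low-risk

def hpv_risk(hpv_type):
    """Classify a numeric HPV genotype as 'high', 'low', or 'unclassified'."""
    if hpv_type in HIGH_RISK:
        return "high"
    if hpv_type in LOW_RISK:
        return "low"
    return "unclassified"
```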
Main Text
=========
HPV Life cycle
--------------
The life cycle of HPV is intimately linked to the differentiation status of the host keratinocyte and is characterized by distinct phases of replication [12, 13]. High-risk and low-risk HPVs initiate infection by gaining access to the proliferating basal cells of the stratified epithelium through a micro-abrasion [14] (Fig. 1). Entry from the extracellular milieu into the cell is known to proceed through interaction with cell-surface heparan sulphate, followed by clathrin- or caveola-mediated endocytosis [15, 16]. During productive infection, the viral genome is maintained at a low copy number as an extrachromosomal element, known as an episome, in the basal undifferentiated cells of the epithelium. HPV undergoes a transient round of replication referred to as "establishment replication", which results in a copy number of 50-100 viral genomes per cell. These viral episomes are maintained in undifferentiated basal cells by replicating along with the host cell chromosomes. Thereafter, the viral life cycle is tightly coupled to the differentiation program of keratinocytes and relies on several cellular factors and viral proteins.
Fig. 1. Organization of HPV genome. **a.** The HPV genome is a circular double-stranded DNA (~8,000 bp). The viral genes are transcribed in a single direction (clockwise). There are genes coding for non-structural proteins (E1, E2, E4, E5, E6, and E7) and structural proteins (L1, L2), and a transcriptional control region (long control region; LCR). The LCR contains the origin of DNA replication and functions as a regulator of DNA replication. **b.** The HPV life cycle. Human papillomavirus is thought to reach the basal cells through microabrasions in the cervical epithelium.
After infection, the early human papillomavirus genes E1, E2, E4, E5, E6, and E7 are expressed and the viral DNA replicates from episomal DNA, establishing ~50 HPV episome copies, which then segregate between the daughter progeny as the cells divide. In the upper layers of the epithelium the viral genome is replicated further, and the late genes L1 and L2, together with E4, are expressed. The early viral proteins E6 and E7 are key to stimulating proliferation and creating a milieu for E1- and E2-driven viral genome replication to a high copy number. Integration of the human papillomavirus genome into the host chromosome occurs with associated loss or disruption of E2 and subsequent upregulation of E6 and E7 oncogene expression. Terminal differentiation of infected cells in the upper epithelial layers activates the expression of E4 and then L1 and L2, which encapsidate the viral genomes to form progeny virions. The shed virus can then initiate a new infection.
Replication of the viral genome requires the viral initiator protein E1, which has helicase-ATPase activity, and the multifunctional viral protein E2, which helps recruit E1 specifically to the viral DNA. E1 oligomerizes and assembles as a double hexamer at the viral origin of DNA replication and functionally interacts with several host replication factors, such as polymerase α-primase, replication protein A, topoisomerase I, and cyclin E/Cdk2 [17]. E2 also functions as a transcription factor capable of trans-activation and repression [18], and as a mediator of genome segregation, which is essential for viral persistence [19]. As the infected cells undergo differentiation, late gene expression and viral genome replication are induced. E4 and E5 are both required for viral amplification. To keep the cellular replication machinery active, the viral proteins E6 and E7 are expressed and uncouple cell growth arrest from differentiation, primarily through the inactivation of p53 and pRb, respectively.
The inactivation of pRb by E7 forces infected cells to remain in a proliferative state and escape cell cycle exit, while abrogation of p53 by E6 ensures cell survival by preventing apoptosis triggered by this aberrant growth signal. The productive phase of the viral life cycle is activated further upon epithelial differentiation, resulting in the amplification of viral genomes to thousands of copies per cell in the suprabasal layers, as well as activation of late gene expression \[[@CR13], [@CR20]\]. The amplified genomes are then packaged into infectious virions by the L1 and L2 proteins, which form the subunits of the icosahedral capsid. Finally, viral escape probably occurs by natural tissue desquamation and may be facilitated by the keratin network-disrupting ability of E4 \[[@CR21]\]. Regulating the viral life cycle in this manner allows HPV to avoid detection by the immune response, as high levels of viral gene expression and virion production are restricted to the uppermost layers of the epithelium, which are not under immune surveillance \[[@CR22]\]. Due to the small coding capacity of its genome, HPV depends on the host DNA replication machinery to synthesize its DNA. In order to support productive replication, HPV employs numerous mechanisms to subvert key regulatory pathways that govern host cell replication, keeping the differentiating cells active in the cell cycle. As such, HPV is able to reactivate cellular genes and signalling pathways necessary to support late gene expression and amplification of viral DNA.

Molecular events in cervical cancer progression {#Sec5}
-----------------------------------------------

A complex network of interactions involving various factors is important for transformation and malignant progression. Cervical carcinogenesis is a multistage process associated with the accumulation of DNA alterations in host cell genes.
These alterations involve both epigenetic and genetic changes in oncogenes and tumour suppressor genes, which are crucial regulators of cell cycle progression, chromosomal stability, telomere activation and apoptosis. However, the pivotal step for the onset of tumorigenesis appears to be the integration of the viral genome into the host genome.

HPV integration and overexpression of oncogenes E6/E7 {#Sec6}
-----------------------------------------------------

Integration of HPV DNA into the host cell genome is a key event in HPV-mediated carcinogenesis, leading to aberrant proliferation and malignant progression \[[@CR23], [@CR24]\]. Integration impacts the host genome by amplifying oncogenes and disrupting tumor suppressor genes, as well as by driving inter- and intra-chromosomal rearrangements \[[@CR25]\]. The early genes E6 and E7 play an essential role in HPV-induced carcinogenesis by interfering with two essential tumor suppressors, p53 and pRb, that regulate normal cellular proliferation. The interaction of E7 with the pRb protein causes its degradation, aberrant initiation of S-phase, and release of the E2F transcription factor, which triggers the expression of cyclins and other S-phase regulators. The mechanism of E6/E7-induced transformation is not confined exclusively to the degradation of the key cellular "guardians" pRb and p53. E7 also associates with other proteins involved in cell proliferation, including histone deacetylases \[[@CR13]\], components of the AP1 transcription complex \[[@CR26]\] and the cyclin-dependent kinase inhibitors p21 and p27 \[[@CR27]\]. At the same time, the E6 protein targets p53 for proteasomal degradation, inhibiting apoptosis and DNA repair, which is an integral component of the HPV life cycle. The degradation of p53 by E6 is important because p53 is a transcription factor that regulates the expression of genes encoding regulators of the cell cycle, the DNA repair machinery, metabolism and apoptosis \[[@CR21]\].
This is of key importance in the development of cervical cancers, as it compromises the effectiveness of the cellular DNA damage response and allows the accumulation of secondary mutations. Recent studies indicate the existence of an intricate HPV interactome, i.e., a network of intermolecular interactions of E6 and E7 with host cell proteins \[[@CR28]\]. Through these interactions, the E6 and E7 proteins can modulate the profile of gene expression, the host cell proteome, and intracellular signaling pathways (including MAPK-, Wnt-, Akt-, Notch-, mTORC-, and STAT-dependent cascades), leading to remodelling of epithelial cells \[[@CR29]\]. In addition, E6 also binds to and degrades the FAS-associated death domain protein (FADD), preventing the transmission of apoptotic signals via the Fas pathway \[[@CR30]\]. All these molecular alterations facilitate resistance to programmed cell death, or apoptosis. Apoptotic cell death also requires the involvement of caspases, which are key molecular players of the apoptosis regulatory network \[[@CR31]\]. Thus, the continuous activity of the E6 and E7 proteins leads to aberrant cell proliferation, accumulation of oncogenic mutations, and ultimately cervical cancer.

Molecular mechanism of HPV integration and overexpression of E6/E7 oncoproteins {#Sec7}
-------------------------------------------------------------------------------

Integration typically results in the increased expression and stability of transcripts encoding the viral oncogenes E6 and E7, which are known to inactivate and/or accelerate the degradation of numerous cellular proteins, including the retinoblastoma protein (E7) and p53 (E6) \[[@CR32]\]. Integration sites are distributed throughout the genome, often at chromosomal fragile sites where DNA double-strand breaks fail to be repaired; FRA8C at chromosome 8q24, near the c-myc locus, has been reported as one such example \[[@CR33]\].
Integration starts with DNA damage, induced either by oxidative stress or by HPV proteins, and the subsequent steps are driven by the DNA damage responses (Fig. [2](#Fig2){ref-type="fig"}). Breaks in HPV DNA are introduced during replication of the virus, and these breaks fail to be repaired. HPV takes advantage of this damage response pathway for its own replication, producing an adequate number of episomal HPV genomes that increase the availability of HPV DNA for integration into the host DNA. The DNA break-induced DNA damage response (DDR) triggers the accumulation of replication factors at the replication foci and acts as a driving force throughout viral replication \[[@CR34]\]. The virus uses the DDR machinery to promote viral amplification, while the viral oncoproteins enable the cells to overcome the downstream consequences of the damage response. The oncoproteins E6 and E7 disrupt cell cycle checkpoint control by inhibiting CDK inhibitors (p21, p27) and degrading p53 \[[@CR13]\]. The HPV-16 E7 oncoprotein attenuates the DNA damage checkpoint response by accelerating the proteolytic turnover of claspin, a critical regulator of the ATR/CHK1 signaling axis and of DNA damage checkpoint recovery in the G2 phase of the cell cycle \[[@CR35]\]. In addition, proximity between the viral and host genomes achieved by E2-BRD4-mediated tethering could increase the feasibility of viral integration. The fusion of the two genomes via either homologous or nonhomologous recombination is regulated by the DNA damage response pathways (ATM/ATR and DNA-PK) \[[@CR36]\].

Fig. 2 Mechanism of HPV DNA integration: Double-strand breaks induced by reactive oxygen species or viral proteins trigger the DNA damage response, following which ATR and p53 are activated to repair the damage. HPV oncogenes deactivate the normal function of the DNA damage response (DDR). E7 acts on ATR, degrades claspin, and inactivates Rb, disrupting the cell cycle inhibitors p21 and p27.
E6 degrades p53, inhibiting DNA repair; degrades the FAS-associated death domain protein (FADD), preventing apoptosis via the Fas pathway; and promotes degradation of the transcriptional repressor NFX1, activating hTERT transcription. The virus utilizes the DDR machinery for its replication, and close proximity with the host genome is mediated by the bromodomain protein 4 (BRD4)-E2 complex. The fusion between host and viral genomes is accomplished by nonhomologous end joining (NHEJ), finally leading to integration of HPV DNA into the host genome.

The presence of head-to-tail tandem repeats in the chromosomes of cervical cancer cells \[[@CR37]\] indicates that a linear concatemeric HPV genome is synthesized by a rolling circle mechanism of replication and integrates into the host chromosome \[[@CR38]\]. It has been reported that the homologous recombination machinery is recruited only to regions of double-strand breaks generated through collapsed replication forks during replication of the viral genome \[[@CR39]\]. In addition, the HPV genome tethers to host chromatin via the HPV E2-BRD4 (bromodomain protein 4) complex for partitioning of genomes to daughter cells \[[@CR40]\]. The association of BRD4 with chromosomal fragile regions suggests that BRD4 may play an important role in increasing the mechanistic feasibility of integration. Finally, the loss of the E2 ORF during integration causes enhanced expression of the viral oncogenes E6 and E7 and disruption of critical cellular genes, playing an important role in progression to carcinogenesis.

Chromosomal alterations {#Sec8}
-----------------------

The viral oncoproteins E6 and E7 are known to induce DNA damage, centrosome abnormalities and chromosomal segregation defects, thereby leading to chromosomal instability \[[@CR41], [@CR42]\]. High-risk E6 has the ability to activate the catalytic subunit of telomerase \[hTERT (human telomerase reverse transcriptase)\], which adds hexamer repeats to the telomeric ends of chromosomes \[[@CR43]\].
HPV16 E6 associates with E6AP to promote the degradation of the transcriptional repressor NFX1-91 and consequently activate hTERT transcription; this repressor also has a role in HPV16 E6 activation of the oncogenic transcription factor NF-κB \[[@CR44]\]. HPV-infected cells display a very high level of telomerase activity, allowing telomere length maintenance and indefinite proliferation \[[@CR45]\]. The activity of telomerase is normally restricted to the proliferative compartment of the epithelium, and its activation is associated with cellular immortalization and carcinogenesis \[[@CR46]\]. It has been reported that several regions are typically lost in cervical carcinogenesis (2q, 3p, 4p, 5q, 6q, 11q, 13q and 18q) while other regions are amplified (1q, 3q, 5p and 8q) \[[@CR47]\]. The 3q26 region contains the gene for the RNA component of human telomerase, which serves as the template for telomere addition and is the basis for telomerase-based cell immortalization \[[@CR48]\]. The frequency of 3q26 gain has been found to increase with the severity of cervical neoplasia \[[@CR49]\].

Epigenetic changes {#Sec9}
------------------

In addition to genetic alterations, it has become evident that oncogenomic processes can be profoundly influenced by epigenetic mechanisms. Epigenetic alterations are often found early in tumorigenesis and are likely to be key initiating events in certain cancers \[[@CR50]\]. In addition to tumor initiation, epigenetic events also contribute to tumor progression \[[@CR51]\]. A number of epigenetic alterations have been identified that occur in both the HPV and the cellular genome, including DNA hypomethylation, hypermethylation of tumor suppressor genes, histone modifications, and alterations in ncRNAs.

DNA methylation {#Sec10}
---------------

One of the earliest and most common molecular changes in the multistep carcinogenesis process is DNA methylation \[[@CR52]\].
In normal cells, DNA methylation is involved in the regulation of gene expression, including chromatin organization and genomic imprinting \[[@CR53]\]. In contrast, tumors show global DNA hypomethylation in repetitive regions and hypermethylation in CpG islands of tumor suppressor gene promoters \[[@CR54], [@CR55]\], along with an increase in the activity of DNA methyltransferase 1 (DNMT1) \[[@CR56]\]. These alterations are also observed in HPV-induced carcinogenesis. HPV E7 binds to DNMT1 and stimulates its activity \[[@CR57]\] and activates transcription of DNMT1 through the pRB/E2F pathway \[[@CR58]\], while HPV E6 upregulates DNMT1 by suppression of p53 \[[@CR59]\]. Increased expression of DNMT3A and 3B has also been observed in HPV-positive cells \[[@CR60]--[@CR62]\]. Aberrant methylation occurs frequently in cervical cancer, leading to silencing of gene expression, activation of oncogenes and transposable elements, loss of imprinting, and inactivation of tumor suppressor genes \[[@CR63]\]. The E6- and E7-driven increase in DNMT1 activity results in hypermethylation of tumor-suppressor gene promoters, leading to gene silencing (Fig. [3](#Fig3){ref-type="fig"}).

Fig. 3 Schematic presentation of DNA methylation by the E6 and E7 oncoproteins: Binding of the E6-E6AP complex to p53 leads to ubiquitination and degradation of p53 and induces overexpression and activity of DNA methyltransferase (DNMT) 1. Binding of E7 to pRb causes the release of E2F, favoring the overexpression of DNMT1. The resulting increase in DNMT1 activity leads to hypermethylation of tumor-suppressor gene promoters, silencing of genes, cellular transformation and tumorigenesis.

Various studies have found that methylation of CpG islands within the promoter regions of tumor suppressor genes can lead to silencing of gene expression. Epigenetic silencing of tumor suppressor genes plays an important role in cervical carcinogenesis \[[@CR54]\].
This is an important epigenetic mechanism that tends to accumulate with disease severity \[[@CR64], [@CR65]\] and has been demonstrated in cervical cancer and its precursors \[[@CR63]\]. A wide range of host genes involved in cell cycle regulation, apoptosis, DNA repair and the WNT pathway often undergo epigenetic modification in cervical cancer. The most frequently methylated genes in cervical cancer are cell adhesion molecule 1 (CADM1), cyclin A1 (CCNA1), cadherin 1 (CDH1), death-associated protein kinase 1 (DAPK1), erythrocyte membrane protein band 4.1 like 3 (EPB41L3), myelin and lymphocyte (MAL), paired box 1 (PAX1), PR domain containing 14 (PRDM14) and telomerase reverse transcriptase (hTERT) \[[@CR64], [@CR66]--[@CR69]\]. Ras association domain family member 1 (RASSF1), a key gene involved in the apoptotic signaling pathway, is downregulated in cervical cancer via methylation \[[@CR70]\]. Another study indicated that the tumor suppressor gene CADM1 is silenced in cervical cancer due to methylation of its promoter region \[[@CR71]\]. Downregulation of the CADM1 gene leads to metastasis and cancer progression. Promoter hypermethylation is associated with decreased expression of CADM1 in high-grade CIN and SCC \[[@CR72]\]. Another gene, CDH1, is downregulated due to promoter methylation in HPV-positive cervical cancer \[[@CR73]\]. A recent study showed that PRDM14 is downregulated in HPV-positive cervical cancer cell lines due to promoter hypermethylation, and its abnormal levels resulted in apoptosis \[[@CR74]\]. DAPK, a pro-apoptotic serine/threonine kinase that plays a major role in metastasis and tumour pathogenesis \[[@CR75]\], is reported to be inactivated due to hypermethylation in cervical cancer \[[@CR76], [@CR77]\]. Furthermore, methylation of the decoy receptors DcR1 and DcR2 has been reported in HPV-related cervical cancer, leading to their silencing and inhibition of apoptosis \[[@CR78]\].
The silencing of E6 and E7 was found to decrease methylation of tumour suppressor genes and reverse the transformed phenotype of cervical cancer cells \[[@CR61], [@CR63]\]. Methylation of HPV genes with concomitant silencing of HPV oncogenes could be a strategy of the virus to maintain a long-term infection by evading immune recognition \[[@CR79]\]. Several studies have shown that the frequency of DNA methylation of candidate genes increases with increasing severity of the cervical lesion, suggesting that these changes occur early in cancer development \[[@CR64], [@CR69], [@CR80], [@CR81]\]. Methylation was found to be more common in invasive cervical carcinoma and cervical intraepithelial neoplasia (CIN) III than in CIN I-II (84.6% and 46.2% vs. 29.4%, respectively), as reported by Hong et al. in 2008 \[[@CR82]\]. Increased methylation levels at multiple CpG (cytosine-phosphate-guanine) sites have been reported in the E2, L2 and L1 regions in women with CIN3 and were found to be greater than those in women with transient infections \[[@CR83]\]. These epigenetic alterations in HPV-infected host cells could serve as molecular markers of malignant transformation.

Histone modifications {#Sec11}
---------------------

In addition to DNA methylation, the epigenetic regulation of gene expression is also impacted by histone modifications and the remodeling of nucleosomes. Histones can undergo a variety of post-translational modifications at the N terminus, including acetylation, methylation, phosphorylation, sumoylation, ADP-ribosylation, and ubiquitination. These modifications can alter DNA-histone interactions, with a major impact on chromatin structure \[[@CR84]\].
Distinct post-translational modifications on histones characterize transcriptionally active and silent chromatin. One mechanism by which the HPV E6 and E7 oncoproteins alter the transcriptional competence of infected cells is by associating with and/or modulating the expression, as well as the activities, of histone-modifying and chromatin-remodeling enzymes (Fig. [4](#Fig4){ref-type="fig"}). For example, acetylation of lysine residues of histones 3 and 4 (H3 and H4) by histone acetyltransferases (HATs) leads to transcriptionally active chromatin, while the removal of these marks by histone deacetylases (HDACs) results in transcriptionally repressed chromatin \[[@CR85], [@CR86]\]. The balance between the activities of these enzymes has a key role in regulating gene transcription. The E6 and E7 oncoproteins can associate with enzymes that modulate histone acetylation and thus regulate the transcriptional capacity of host cell chromatin \[[@CR85]\]. The high-risk HPV E6 protein shares with other tumorigenic DNA viruses the ability to target CBP/p300. Both HPV E6 and E7 can associate with and modulate the activity of the HATs p300 and CBP \[[@CR87]\]. p300/CBP regulates a number of genes \[[@CR88], [@CR89]\]. HPV E6 inhibits p300/CBP-mediated acetylation of p53 \[[@CR90]\], while HPV E7 forms a complex with p300/CBP and pRb, acetylating pRb and decreasing p300/CBP levels \[[@CR91]\]. HPV E7 also associates with the p300/CBP-associated factor (pCAF), reducing its ability to acetylate histones \[[@CR92]\], and with the steroid-receptor coactivator (SRC1), abrogating SRC1-associated HAT activity \[[@CR93]\]. Thus, E6 and E7 binding to the transcriptional co-activator p300/CBP is a crucial step in cellular transformation.

Fig. 4 Schematic presentation of histone modifications by HPV E6 and E7 interaction with cellular epigenetic modifiers.
HPV oncoproteins E6 and E7 bind to and/or modulate the expression of histone-modifying enzymes, class I histone deacetylases (HDACs), histone acetyltransferases (HATs), histone lysine demethylases (KDMs) and subunits of chromatin remodeling complexes. These interactions contribute to chromatin remodelling and transcriptional regulation, leading to either activation or repression of gene expression.

The HPV E7 oncoprotein interacts with class I HDACs \[[@CR85]\], which function as transcriptional co-repressors by inducing chromatin remodeling via the reversal of acetyl modifications on histone lysine residues. The association of E7 with HDAC1/2 occurs in an Rb-independent manner through the intermediary Mi2β, a member of the nucleosome remodeling and histone deacetylation (NuRD) complex; the NuRD complex remodels chromatin structure through the deacetylation of histones and ATP-dependent nucleosome repositioning \[[@CR94]\]. The association of E7 with HDAC1/2 plays a role in HPV E7-associated transcriptional regulation. Furthermore, HPV E7 can interact with interferon response factor 1 (IRF1) and recruit HDACs to suppress IRF1 transcriptional activity \[[@CR95]\]. HDAC function is also necessary for HIF-1 (hypoxia-inducible factor-1) activity, and it was found that the HPV E7 protein can block the interaction of HDACs with HIF-1α, activating HIF-1-dependent transcription of a range of pro-angiogenic factors \[[@CR96]\]. Silencing of the proliferation repressor protein osteoprotegerin (OPG) and of retinoic acid receptor β2 (RAR-β2) was found to occur through histone modification as well as DNA methylation \[[@CR97], [@CR98]\]. The oncoproteins E6 and E7 are also involved in histone methylation, a dynamically controlled process governed by two types of enzymes that work together to maintain global histone methylation patterns: histone methyltransferases (HMTs) and histone lysine demethylases (KDMs) \[[@CR99]\].
Histone methylation may occur on different lysine residues, and the interplay between HMTs and KDMs regulates the methylation level and contributes to the activation or repression of gene expression, depending on the specific lysine residue on which they act. In particular, KDM expression has been found to be deregulated and associated with cancer aggressiveness. McLaughlin-Drubin et al \[[@CR86]\] report that HPV16 E7 can induce epigenetic and transcriptional alterations by transcriptional induction of the KDM6A and KDM6B histone 3 lysine 27 (H3K27)-specific demethylases. In addition, the KDM5C demethylase is recruited by the E2 viral protein for transcriptional repression of the E6 and E7 oncogenes through the LCR region of HPV. KDM5C expression levels were found to be increased in CIN2+ lesions and significantly increased in SCC cases \[[@CR100]\]. It has been reported that HPV modulates the activity of two coactivator histone arginine methyltransferases, CARM1 and PRMT1, leading to histone methylation on arginine residues. HPV E6 downregulates their expression, and these HMTs are needed for HPV E6 to attenuate p53 transactivation. E6 hinders CARM1- and PRMT1-mediated histone methylation at p53-responsive promoters and suppresses p53 binding to DNA \[[@CR101]\]. E6 also inhibits SET7, which, in addition to catalyzing H3K4 monomethylation, methylates non-histone proteins including p53; E6 thereby downregulates p53K372 mono-methylation, reducing p53 stability \[[@CR102]\]. Together, the modulation of CARM1, PRMT1, and SET7 provides another mechanism by which HPV alters p53 function. There is a strong interplay between DNA hypermethylation and histone deacetylation in silencing and modulating the expression of a number of cancer-related genes, suggesting synergy not only in gene expression at the global and individual-gene levels but also in antitumor activity.
Aberrant expression of non-coding RNAs {#Sec12}
--------------------------------------

Aberrant expression of non-coding RNAs, such as long non-coding RNAs (lncRNAs) and microRNAs (miRNAs), has been reported to play a vital role in the progression of cervical cancer.

(a) Long non-coding RNAs (lncRNAs)

LncRNAs are long non-coding RNAs involved in many diverse biological processes \[[@CR103], [@CR104]\]. Altered expression of lncRNAs is specifically associated with tumorigenesis, tumor progression and metastasis. Several lncRNAs have been found to be aberrantly expressed in cervical cancer. Hox transcript antisense intergenic RNA (HOTAIR), a long intergenic ncRNA (lincRNA), was found to be increased in cervical cancer tissues and correlated with FIGO stage, lymphatic metastasis, tumor size and invasive depth, indicating its involvement in cervical cancer progression \[[@CR105]\]. It has also been reported that HOTAIR might accelerate neoplasm aggressiveness through upregulation of VEGF, MMP-9, and epithelial-mesenchymal transition (EMT)-related genes, decreasing the expression of E-cadherin while increasing the expression of β-catenin, vimentin (VIM), Snail, and Twist \[[@CR106]\]. Sharma et al reported that crosstalk between the HPV16 E7 oncoprotein and the lncRNA HOTAIR was concomitant with cellular proliferation and metastasis in cervical cancer \[[@CR107]\]. A few lncRNAs were found to be downregulated, namely growth arrest-specific transcript 5 (GAS5), tumor suppressor candidate 8 (TUSC8) and lncRNA low expression in tumor (lncRNA-LET). GAS5 was downregulated in cervical cancer tissues and significantly correlated with advanced cancer progression \[[@CR108]\]. TUSC8 plays a pivotal role in cell proliferation by downregulating c-Myc levels in cervical cancer. The expression of TUSC8 was significantly decreased in cervical cancer and linked to FIGO stage, tumor size, and squamous cell carcinoma antigen \[[@CR109]\].
LncRNA-LET, a newly identified lncRNA, was found to be downregulated in hepatocellular carcinomas, colorectal cancers, squamous cell lung carcinomas, and cervical cancer \[[@CR110]\].

(b) MicroRNAs (miRNAs)

Besides protein-coding genes, methylation-mediated silencing of non-coding microRNAs (miRNAs) has also been detected in cervical lesions \[[@CR111]\]. MicroRNAs (miRNAs) are short non-coding RNAs regulating cellular processes such as cell proliferation, cell cycle progression, apoptosis, and metastasis. The expression of viral oncoproteins can modulate the expression levels of miRNAs, enhancing malignant progression leading to invasive cancer. HPVs modulate the expression of host miRNAs \[[@CR112]\] via deletion, amplification, or genomic rearrangement. Complex interactions between HR-HPV E6 and E7 involve the activation of transcription factors, such as E2F and c-Myc, which can promote the transactivation of miRNAs \[[@CR113]\]. Continuous E6/E7 expression is linked to a decrease in the intracellular concentrations of miR-23a, miR-23b, miR-27b, and miR-143, all linked to anti-tumorigenic activities \[[@CR114]\]. The carcinogenesis process is influenced by both upregulation and downregulation of miRNAs. Increased expression of certain miRNAs (viz., miR-886-5p, miR-10a, miR-141, miR-21, miR-135b, miR-148a, miR-214 and miR-106b) plays a vital role in cervical cancer progression, as they are involved in the regulation of cell proliferation, the apoptotic pathway or cell adhesion \[[@CR115], [@CR116]\]. Zheng et al found that the expression level of miR-31 was significantly higher in cervical cancer patients than in normal individuals \[[@CR117]\] and that expression of the HPV16 E6/E7 oncoproteins increased miR-31 levels. Furthermore, they reported that overexpression of miR-31 can promote cell proliferation and enhance the migration and invasion abilities of cervical cancer cells.
Liu et al observed that miR-9 was upregulated in HR-HPV-positive tumors by both the E6 and E7 oncoproteins, and that activation of this miRNA by the HPV E6 oncoprotein was independent of the p53 pathway \[[@CR118]\]. Furthermore, overexpression of miR-21 has been associated with aggressive progression and poor prognosis in cervical cancer \[[@CR119]\]. MiR-21 is transcriptionally induced by activator protein 1 (AP-1), which is essential for HPV transcription. Downregulation of let-7c, miR-124, miR-126, miR-143, and miR-145 has been found to regulate the expression of oncogenes. MiR-34a has been identified as a direct transcriptional target of the cellular transcription factor p53 \[[@CR120]\]. As the HPV E6 oncoprotein destabilizes p53 during oncogenic HPV infection, a downregulation of miR-34a expression is observed. MiR-34a targets multiple cell cycle components, including CDK4, cyclin E2, E2F-1, the hepatocyte growth factor receptor MET, and Bcl-2 \[[@CR121], [@CR122]\]. Melar-New and Laimins demonstrated that the E7 protein has the ability to downregulate miR-203 expression upon differentiation, which may occur through the mitogen-activated protein (MAP) kinase/protein kinase C (PKC) pathway \[[@CR123]\]. Hence, it is conceivable that expression of E6 and E7 can modulate the expression levels of miRNAs, enhancing the progressive alterations leading to invasive cancer.

Structure-based function studies of E6 and E7 proteins {#Sec13}
--------------------------------------------------------

### HPV E6 {#Sec14}

The full-length oncoprotein E6 is a basic nuclear protein (\~18 kDa) composed of approximately 150 amino acid residues. Similar to the E6 proteins encoded by other papillomaviruses, 16E6 contains four zinc-binding motifs (Cys-X-X-Cys) and forms two Cys/Cys fingers that bind zinc directly \[[@CR124]\]. These motifs are strictly conserved in all E6 proteins, and their integrity is essential for the oncoprotein's normal functions.
16E6 also contains a PDZ domain-binding motif at its C-terminal extremity \[[@CR125], [@CR126]\] and three nuclear localization signals (NLSs) (Fig. [5](#Fig5){ref-type="fig"}). PDZ domains are protein-protein interaction domains of approximately 90 amino acids \[[@CR127]\]. The E6 proteins of HPV-16 and HPV-18 are known to interact with numerous PDZ domain-containing proteins via their PDZ-binding motifs \[[@CR128]\]. These proteins are involved in the regulation of epithelial cell polarity, emphasizing the importance of this pathway for viral replication and HPV-driven malignancy. The PDZ-binding motif is also important in the viral life cycle, since its loss reduces viral replicative potential and leads to episomal integration \[[@CR129], [@CR130]\].

Fig. 5 Schematic structure of oncoprotein E6. Protein structure and functions of HPV16 E6. The four zinc-binding motifs are indicated as grey boxes. The two zinc fingers are shown together with the regions involved in interacting with some of its cellular target proteins. E6 contains a PDZ domain-binding motif at its C-terminal extremity and three nuclear localization signals (NLS). Functions associated with proteins in different regions are indicated by arrows.

The crystal structures of both the N-terminal and C-terminal halves, as well as the complete structure of the E6 protein \[[@CR131]\], have confirmed that E6 interacts with a wide range of cellular substrates \[[@CR132]\]. The principal cellular target of HR E6 proteins is the tumor suppressor p53. High-risk E6 interacts with E6AP and the tumor suppressor protein p53 to induce ubiquitination-mediated degradation of p53 \[[@CR133]\].
E6 hijacks a cellular E3 ubiquitin ligase, UBE3A/E6AP (E6-associated protein), binding through E6's LXXLL motif, and the stable E6/E6AP complex then labels p53 for degradation in a proteasome-dependent manner \[[@CR134]\]. Studies have shown E6 to be closely associated with other components of the proteasomal degradation pathway: the E3 ubiquitin ligases UBR5/EDD and HERC2 \[[@CR135]\]. Under hypoxic conditions, high-risk E6 also inactivates the CYLD tumor suppressor through interactions with the CYLD deubiquitinase to allow unrestricted activation of NF-κB \[[@CR136]\]. High-risk E6 oncoproteins are also involved in the deregulation of the cellular DNA replication machinery. Besides the ability to immortalize and transform cells and induce p53 degradation, 16E6 is known to be functionally involved in regulating gene transcription \[[@CR137]\]. 16E6 can interact with other transcription factors and coactivators, including p300/CBP \[[@CR133]\], IRF-3 \[[@CR138]\] and c-Myc \[[@CR139]\]. HPV-16 E6 induces telomerase activity in primary epithelial cells through transcriptional transactivation of the hTERT telomerase catalytic subunit \[[@CR41], [@CR140]\]. The 16E6-c-Myc interaction induces transcription of hTERT to promote cell immortalization \[[@CR141]\]. Furthermore, it has been demonstrated that the E6-interacting regions of p300 are necessary for E6 to inhibit p53-dependent chromatin transcription. E6-mediated repression of p53 correlates with inhibition of acetylation on p53 and nucleosomal core histones, altering p53 and p300 recruitment to chromatin \[[@CR90]\]. In addition, 16E6 is an RNA-binding protein and interacts with cellular splicing factors and RNA via its C-terminal NLS3 to regulate splicing of E6E7 bicistronic RNAs \[[@CR142]\].
The multifunctional activity of 16E6 is not restricted to the nucleus, because it can also act as a regulator of signal transduction by interacting with cytoplasmic E6BP (Erc55) \[[@CR143]\], protein tyrosine phosphatase H1 \[[@CR144]\] and PDZ proteins such as SAP97/hDlg \[[@CR145]\]. All HR HPV E6 proteins have a class I PDZ (PSD95/Dlg/ZO-1)-binding motif (x-T/S-x-L/V) \[[@CR146]\] at their C-termini. Thus, these interactions suggest that 16E6 and other high-risk E6 proteins can be regarded as multifaceted viral proteins with characteristic and distinct activities in the nucleus and cytoplasm of the cells they infect.

### HPV E7 {#Sec15}

The full-length oncoprotein E7 is a nuclear protein of approximately 100 aa residues with a C-terminal zinc-binding domain whose structural integrity is critical for E7 activity \[[@CR147]\]. E7 is post-translationally regulated by the proteasome and by phosphorylation. The N-terminus of E7 contains sequence similarity to a portion of CR1 and the entire CR2 of adenovirus E1A, and to related sequences in SV40 T antigen. The CR2 region of E7 contains the CKII phosphorylation site and the LXCXE binding motif involved in binding to proteins such as the retinoblastoma tumor suppressor (pRb) (Fig. [6](#Fig6){ref-type="fig"}). E7 uses its LXCXE motif to target unphosphorylated pRb for degradation via the ubiquitin-proteasome pathway \[[@CR148]\]. Oncogenic E7 induces the degradation of pRb by interacting with the cullin 2 ubiquitin ligase complex \[[@CR149]\]. It has also been demonstrated that casein kinase II (CKII) phosphorylation of the E7 N-terminal domain is critical for its transforming activity and for its ability to drive S-phase progression \[[@CR150]\]. The C-terminus of E7 may also be involved in zinc binding \[[@CR151]\].

Fig. 6 Schematic structure of oncoprotein E7. Protein structure and functions of HPV16 E7 and the most important amino acid motifs required for integrity and protein functions.
Relative locations of the regions with sequence motifs similar to a portion of conserved region 1 (CR1) and the entire CR2 of adenovirus E1A are shown, with the pRb-binding site LXCXE in CR2. Zinc-binding motifs are indicated as grey boxes. The zinc finger is shown together with the regions involved in pRb binding (LXCXE) and the two serine residues (31 and 32) that are susceptible to casein kinase II (CKII) phosphorylation. Functions associated with proteins in different regions are indicated.

High-risk E7 interacts with the pRb tumor suppressor protein via the LXCXE motif in the E7 CR2 domain to promote cell cycle progression \[[@CR148]\]. Interaction with pocket proteins has been characterized as one of the major functions of E7. Oncogenic E7 binds the related pocket proteins p107 and p130 with high affinity via the LXCXE motif in CR2, whereas low-risk or non-oncogenic E7 binds pRb with much lower efficiency. The pocket proteins play important roles in the regulation of cellular proliferation, differentiation and apoptosis. They inhibit E2F-mediated transcription and negatively regulate the transitions from G0 to G1 and into S phase of the cell cycle \[[@CR152]\]. The LXCXE motif of the E7 CR2 domain, which has been shown to be required for pRb inactivation, is also required for down-regulation of p107 and p130 \[[@CR153]\]. This indicates that interaction of E7 with the pocket proteins is necessary for its optimal ability to continually drive cell cycle progression. Among other binding partners that interact with this domain of E7 are UBR4/p600 \[[@CR154]\] and p300/CBP-associated factor (P/CAF). The interaction between E7 and UBR4/p600 is required for E7-mediated cell transformation \[[@CR155]\]. In addition to its main role in driving cell cycle progression, these substrate interactions also indicate that E7 has a crucial role in destabilizing transcriptional complexes and in chromatin remodeling, consequently having an impact on cellular proliferation.
E7 contains a nuclear localization signal in the N-terminal domain (aa 1-37) \[[@CR156]\]. In addition to its cellular transformation activities, oncogenic E7 also plays a role in the viral life cycle \[[@CR157]\] and affects many other cellular activities in HPV-infected cells. E7 dysregulates the cell cycle by stabilizing p21 \[[@CR158]\] and upregulating p16 expression \[[@CR159]\]. Oncogenic E7 induces mitotic defects and aneuploidy by inducing centrosome abnormalities through its association with the centrosomal regulator γ-tubulin; this inhibits γ-tubulin recruitment to the centrosome \[[@CR160]\] and leads to chromosomal instability. Thus, the interaction of high-risk E6 and E7 with cellular tumor suppressor proteins and the perturbation of normal cell cycle control are believed to be the most important factors for malignant conversion.

Interaction of pathways in progression to cervical cancer {#Sec16}
---------------------------------------------------------

A coordinated interaction of multiple processes and signalling pathways is required for progression to oncogenesis (Fig. [7](#Fig7){ref-type="fig"}). Multiple processes and signaling pathways are altered by HR-HPV E6 and E7 oncoproteins in cervical carcinogenesis \[[@CR161]\]. Among the affected processes, genomic instability plays a central role, leading to mutations in cellular genes that cooperate with the initial steps induced by HR-HPV oncoproteins, including inactivation of two important tumor suppressor pathways (pRb and p53). As mentioned earlier, E6 is able to induce the degradation of p53 via direct binding to the ubiquitin ligase E6AP, inhibiting p53-dependent signaling upon stress stimuli and contributing to tumorigenesis.
On the other hand, oncoprotein E7 associates with the retinoblastoma family of proteins (pRb, p107 and p130) and disrupts their association with the E2F family of transcription factors, subsequently transactivating cellular proteins required for cellular and viral DNA replication.

Fig. 7 Molecular events in progression to cervical carcinogenesis. Persistent infection with high-risk HPV leads to integration of HPV into the host genome and overexpression of the oncogenes E6 and E7. Interaction of E7 with the pRb protein leads to aberrant initiation of S-phase and release of the E2F transcription factor, which triggers the expression of cyclins and the CDK inhibitors p21 and p27, altering the integrity of the cell cycle and thereby contributing to cellular immortalization and transformation. E6 targets p53 for proteasomal degradation, leading to inhibition of apoptosis and DNA repair. E6 activates the PI3K/Akt pathway, interacts with the cellular protein NFX1 and induces activation of hTERT, leading to immortalization and transformation. The interaction of both oncoproteins with DNMTs leads to aberrant methylation, causing silencing of tumor suppressor genes. E7 interaction with HDACs causes chromatin remodeling and genome instability. Thus, the cross-interaction of E6 and E7 with various pathways plays a key role in progression to carcinogenesis.

The interaction of E6 with various pathways is associated with cancer initiation, progression and metastasis. Several studies indicate that E6 can activate the PI3K/Akt pathway through various mechanisms. E6 inactivates PTEN through PDZ proteins, leading to increased pAkt as well as increased cell proliferation \[[@CR162]\]. In addition, mammalian target of rapamycin (mTOR), a downstream target of Akt, is activated by E6, as indicated by increased ribosomal protein S6 kinase activity \[[@CR163]\]. The mTOR kinase is also activated by the mitogen-activated protein kinase (MAPK) pathway. Activation of Akt can produce a cascade of changes in downstream targets.
Akt can phosphorylate E6 to promote its ability to interact with the protein 14-3-3σ, which is important in carcinogenesis \[[@CR164]\]. HPV has also been associated with increased expression of c-Myc, a downstream target of Akt \[[@CR165]\]. E6 has been reported to act directly on c-Myc, leading to activation of telomerase activity \[[@CR166]\]. Telomerase activation is critical for the immortalization of primary human keratinocytes by high-risk HPV E6 \[[@CR52]\]. E6 is able to increase telomerase activity by upregulating telomerase reverse transcriptase (TERT), which is encoded by the human telomerase reverse transcriptase (hTERT) gene. E6 induces the hTERT promoter via interactions with the cellular ubiquitin ligase E6AP, and increases hTERT via NFX1-123, which interacts with hTERT mRNA and stabilizes it, leading to greater telomerase expression \[[@CR167]\]. Several studies have shown that E7 can also activate the PI3K/Akt pathway. First, the ability of E7 to increase Akt activity is correlated with its ability to bind to and inactivate pRb. Second, silencing of pRb by short hairpin RNAs (shRNAs) in differentiated keratinocytes leads to increased Akt activity. Third, increased Akt activity and loss of pRb are also correlated in HPV-positive cervical high-grade squamous intraepithelial lesions \[[@CR168]\]. Activation of the Wnt/β-catenin, Notch and Hedgehog signaling pathways is characteristic of cancer stem cells (CSCs) \[[@CR169]\]. Recently, it has been found that Wnt/β-catenin signaling is a very important pathway in the maintenance of CSCs \[[@CR170]\]. Lichtig et al. \[[@CR171]\] also showed that HPV16 E6 activates the Wnt/β-catenin pathway; the mechanism is independent of the ability of E6 to target p53 for degradation or to bind PDZ-containing proteins. It has been demonstrated that HPV16-associated cervical tumorigenesis is synergized by GSK3β inactivation and overactivation of the Wnt/β-catenin pathway \[[@CR172]\].
Activation of the Wnt pathway results in accumulation of β-catenin, which in turn increases transcription of a broad range of genes to promote cell proliferation. Although the Wnt pathway may be a possible mediator of increased β-catenin, PI3K/Akt is also well known to cause accumulation of β-catenin through inactivation of GSK3β \[[@CR173]\]. The nuclear accumulation of β-catenin correlates with tumor progression in cervical cancer patients \[[@CR174]\]. Xuan et al. \[[@CR175]\] observed that the hedgehog signaling pathway was also extensively activated in carcinoma and CIN of the uterine cervix. Additionally, they reported that expression of hedgehog signaling pathway components is greatly enhanced over the CIN I/II/III-carcinoma sequence in the uterine cervix. They suggested that inappropriate activation of the hedgehog signaling pathway and inactivation of p53 by E6 proteins from HR-HPV exert a synergistic effect on uterine cervix carcinogenesis. The Notch signalling pathway, which is necessary for several biological processes such as cellular proliferation, differentiation and apoptosis, is considered an oncogenic pathway \[[@CR169]\]. Recently, the product of the Notch1 gene has been identified as a novel target of p53. In cervical cancer, E6 can down-regulate expression of Notch1 through inactivation of p53. Its down-regulation via E6/E6AP-mediated degradation of p53 therefore represents the loss of a novel tumor suppressor mechanism that normally blocks development of HPV-induced cervical carcinogenesis \[[@CR176]\]. The ErbB2 protein expression level is also regulated by p53 degradation, and interference with this by E6/E6AP complexes contributes to cervical carcinogenesis \[[@CR177]\]. The activation and deregulation of Notch signalling may provide a permissive environment for development of early pre-cancerous lesions, which may lead to proliferation of HR-HPV-associated cervical tumors.
Both E6 and E7 can deregulate cellular microRNA expression, which can alter cellular signaling pathways. Studies have shown that many miRNAs are involved in E6- and E7-mediated signaling pathways \[[@CR178]\]. Aberrant expression of miRNAs has been reported to play a vital role in the progression of cervical cancer. Thus, a co-ordinated interaction of various pathways involving proteins and other biomolecules contributes towards progression to cervical carcinogenesis.

Therapeutic strategies targeting HPV-mediated tumorigenesis of cervical cancer {#Sec17}
------------------------------------------------------------------------------

### Prophylactic vaccines {#Sec18}

The HPV prophylactic vaccine was a major breakthrough for cervical cancer prevention. Gardasil, a quadrivalent vaccine, was the first cancer vaccine approved by the U.S. Food and Drug Administration, in 2006, for prevention of cervical cancer, precancerous genital lesions, and genital warts caused by HPV6, HPV11, HPV16, and HPV18 \[[@CR179]\]. Thereafter, Gardasil®9, a nonavalent HPV-6/11/16/18/31/33/45/52/58 vaccine, was approved. In 2009, Cervarix, a bivalent vaccine, was approved by the FDA to prevent cervical cancer and precancerous lesions caused by human papillomavirus (HPV) types 16 and 18. These three vaccines effectively prevent HPV infections caused by the targeted types by eliciting the production of neutralizing antibodies that block the entry of viral particles into host cells \[[@CR180]\]. However, these vaccines are not effective at eliminating pre-existing infections, since the target antigens, the L1 capsid proteins, are not expressed in infected basal epithelial cells \[[@CR181]\]. The large number of individuals already infected with HPV therefore do not benefit from these vaccines.
In a clinical trial with 440 cancer patients, Rosenberg et al. \[[@CR182]\] reported that the objective response rate was low (2.6%), reflecting the lack of powerful adjuvants capable of overcoming the immunosuppression present in cancer patients and indicating that adjuvants are required to induce potent and durable immune responses.

### TLRs as vaccine adjuvants {#Sec19}

At present, research and development of novel vaccine adjuvants are mainly focused on TLR ligands. Targeting TLR signaling pathways has been applied in clinical practice to improve the immunogenicity of DNA vaccines and promote the modulation of T cells in resisting viral infection, or to inhibit the widespread inflammatory response caused by bacterial infection \[[@CR183]\]. Investigations have shown that simultaneous activation of multiple TLR pathways by vaccines results in better immunogenicity. Presently, only three TLR agonists are approved by international regulatory agencies for use in cancer patients: monophosphoryl lipid A (MPL) \[[@CR184]\], bacillus Calmette-Guérin (BCG), and imiquimod \[[@CR185]\]. Similar to LPS, MPL can activate the TRAM- and TRIF-dependent signalling pathways, while significantly reducing signaling through the MyD88-dependent pathway that promotes inflammation \[[@CR184]\]. Clinical trials using CpG ODNs as immunotherapeutic agents in cancer patients suggest that CpG ODN as monotherapy or in combination with chemotherapy can induce potent anti-tumor immune responses that correlate with clinical benefit \[[@CR186]\]. Adjuvant systems using different combinations of adjuvants, including alum, MPL, and CpG ODN, have shown better efficacy than a single TLR adjuvant \[[@CR187]\]. TLRs as molecular adjuvants provide a new target for HPV infection prevention and a direction for the development of efficient vaccines.
### Strategies targeting E6/E7 proteins {#Sec20}

Several strategies that target E6 or the E6/E6AP complex have been developed, including various therapies that employ cytotoxic drugs, a zinc-ejecting inhibitor of the viral E6 oncoprotein, an E6AP mimetic epitope peptide (mimotope), an anti-E6 ribozyme, peptide aptamers that target the viral E6 oncoprotein, siRNAs that target the viral E6 oncogene, and combinations of therapies \[[@CR188], [@CR189]\]. Recent reports suggest a new strategy to induce viral E6 and E7 instability by using HSP90 and GRP78 inhibitors for the treatment of cervical cancer \[[@CR190]\]. An E7 antagonist peptide showed antitumor effects through pRb reactivation both in vitro and in vivo \[[@CR191]\]. GS-9191, a nucleotide analog prodrug, showed an antiproliferative effect in vitro, and its topical application reduced the size of papillomas in the rabbit papillomavirus model \[[@CR192]\]. Chitosan hydrogel containing granulocyte-macrophage colony-stimulating factor (GM-CSF) in combination with anticancer drugs showed antitumor effects through CD8+ T cell immunity \[[@CR193]\]. Heparin-like glycosaminoglycans have been demonstrated to inhibit tumor growth by downregulating HPV18 long control region activity in transgenic mice \[[@CR194]\]. Finally, 5-aza-2′-deoxycytidine, a demethylating agent, and 5,6-dimethylxanthenone-4-acetic acid, a vascular disrupting agent, in combination with therapeutic HPV DNA vaccines \[[@CR195]\], showed significant antitumor therapeutic effects in vivo. Furthermore, several plant-derived compounds have been investigated for their therapeutic potential in cervical cancer. In clinical trials, Praneem, a polyherbal formulation, was shown to eliminate HPV16 infection in early cervical intraepithelial lesions \[[@CR196]\].
Curcumin, withaferin A, epigallocatechin gallate (EGCG) and methyl jasmonate also showed therapeutic effects via repression of viral oncogenes, upregulation of tumor suppressor genes, or induction of apoptosis in vitro \[[@CR197], [@CR198]\]. Withaferin A treatment in a xenograft model showed a significant reduction in tumor volume. Another natural compound, jaceosidin, inhibited the functions of the E6 and E7 oncoproteins in HPV16-positive cervical cancer cells \[[@CR199]\].

### RNAi-based therapeutics against HPV {#Sec21}

Recently, novel antiviral RNAi therapies have been developed and tested in clinical trials with short interfering RNAs (siRNAs) \[[@CR200]\]. siRNAs are capable of selectively silencing endogenous genes in mammalian cells \[[@CR201]\] and of silencing viral genes in virus-induced diseases \[[@CR202]\]. RNAi targeting of E7 or E6/E7 led to the accumulation of TP53 and/or pRb, inducing apoptosis and/or senescence in HPV16-positive cervical cancer cell lines and in HPV18-positive human cervical cancer cells \[[@CR203]\]. Zhou et al. reported that two siRNAs targeting the E6/E7 promoter and E7 transcripts produced E6 and E7 mRNA knockdown, increased TP53 protein levels, decreased CDKN2A (p16INK4A) protein levels, and inhibited SiHa cell growth via apoptosis \[[@CR204]\]. Another study reported that HPV16 E6/E7 silencing by promoter-targeting siRNA was related to histone modification associated with histone H3-Lys9 methylation \[[@CR205]\]. Chang et al. \[[@CR206]\] demonstrated that intratumoral administration of a potent siRNA resulted in inhibition of tumor growth and induction of apoptosis in vivo, suggesting that siRNA treatment shows potential as an adjuvant therapy for cervical cancer.

### Strategies targeting activation of the TP53 pathway {#Sec22}

Another pathway that is often disrupted in cervical cancer is the TP53 pathway.
Functional restoration of WT-TP53 may induce the regression of cervical carcinomas; this can be achieved by abrogating the expression of the E6 or E6/E7 oncogenes, or through cisplatin (cis-diamminedichloroplatinum II; CDDP) or radiation treatment. It has been demonstrated that cisplatin therapy allows TP53 to escape from E6-mediated degradation, thereby facilitating TP53 accumulation in the nucleoli of HeLa cells \[[@CR207]\]. Putral et al. \[[@CR208]\] investigated CDDP co-therapy and found that shRNAs against E6 increased CDDP sensitivity in HeLa cells. Another recent study revealed that intratumoral injection of both exonic (E6/E7-Exon) and intronic (E6/E7-Intron) siRNAs, co-administered with intravenous paclitaxel, restored the tumor-suppressive effect \[[@CR209]\]. While both radiation therapy and chemotherapy have been used as treatment modalities in cancer patients, the advantages of combined chemo-radiotherapy have recently been reported. Cisplatin in combination with concurrent radiation is recommended for patients with disease at stage IIB or greater, and for those with locally advanced cervical cancer. Recently, it was demonstrated that TP53 is expressed in a wavelike or "pulsed" manner following exposure to ionizing radiation alone \[[@CR210]\]. In addition, activation of the tumor suppressor pRb is part of the mode of action of HPV E6/E7 siRNA. Thus, complete reactivation of TP53 is possible, and its dynamics can be sustained by combining concurrent chemoradiotherapy (CCRT) with HPV E6/E7 siRNA.

### Epigenetic therapies {#Sec23}

Epigenetic alterations, unlike genetic mutations, may be reversed by inhibiting the associated enzymes, and need to be evaluated as therapeutic modalities for HPV-associated lesions and cancers. Currently, two main classes of epigenetic drugs, methylation inhibitors and HDAC inhibitors, are in clinical trials for the treatment of cancer.
A phase I study of hydralazine in cervical cancer patients showed that hydralazine treatment at different doses was well tolerated and effective at demethylating and reactivating the expression of eight tumor suppressor genes without affecting global DNA methylation \[[@CR211]\]. Trichosanthin (TCS), a bioactive component isolated from a Chinese medicinal herb, has been shown to restore the expression of methylation-silenced tumor suppressor genes \[[@CR212]\]. Valproic acid (VPA), an effective inhibitor of histone deacetylases, has been shown to modulate multiple cellular pathways, including cell cycle arrest, apoptosis, angiogenesis and metastasis. The antitumor effect of VPA in cervical cancer may result either from hyperacetylation of the p53 protein, which protects it from degradation by E6 and increases p53 activity, or from inhibition of Akt1 and Akt2 gene expression, which results in Akt deactivation and apoptotic cell death \[[@CR213]\]. Apicidin, a cyclic peptide HDAC inhibitor, was found to selectively downregulate DNA methyltransferase 1 \[[@CR214]\], whereas trichostatin A (TSA), a classical HDAC inhibitor, was shown to inhibit DNA methyltransferase 3A \[[@CR215]\]. In addition, the HDAC inhibitor suberoylanilide hydroxamic acid (SAHA) in combination with bortezomib synergistically induces apoptosis in HeLa cervical cancer cells by activating caspase-3 and increasing the ratio of bax to bcl-2 expression \[[@CR216]\]. Strategies targeting epigenetic aberrations appear to be promising therapeutic modalities for controlling cervical carcinogenesis.

Conclusion and perspectives {#Sec24}
===========================

As this mini-review has briefly shown, multiple processes and signalling pathways contribute towards progression to oncogenesis. The primary viral factors responsible for altering these pathways and mediating progression to malignancy are the E6 and E7 proteins.
The viral oncoproteins directly or indirectly trigger the deregulation of many control mechanisms that ultimately lead to the accumulation of genetic and epigenetic alterations. Recent studies have identified other equally important cellular targets, including telomerase, members of the DNA damage pathway, caspases and microRNAs. Among the affected processes, genomic instability plays a central role, leading to mutations in cellular genes that cooperate with the inactivation of tumor suppressor pathways, leading to transformation and malignant progression. The current HPV vaccination strategy aims at preventing infection by a few restricted HPV genotypes in uninfected individuals; it has, however, no effect on existing HPV lesions and cancers. Women diagnosed with invasive and metastatic cervical cancer are in critical need of prognostic markers, targeted therapeutic options, and accurate surveillance strategies. Since it takes years to decades for cervical cancer to develop following acquisition of HPV infection, there is a unique opportunity for cancer interception. Our understanding of the etiology of HPV-mediated carcinogenesis, and of the cellular pathways and molecular mechanisms involved in the transition from infection to cancer, could provide novel opportunities for the design of effective therapeutic strategies to reduce the risk of HPV-mediated cancer.
Abbreviations
=============

Akt : Serine/threonine-specific protein kinase
AP-1 : Activator protein 1
Bcl-2 : B-cell lymphoma 2
BRD4 : Bromodomain-containing protein 4
CADM1 : Cell adhesion molecule 1
CARM1 : Coactivator-associated arginine methyltransferase 1
CBP : CREB-binding protein
CCNA1 : Cyclin A1
CDH1 : Cadherin 1
CDK4 : Cyclin-dependent kinase 4
CHK1 : Checkpoint kinase 1
c-Myc : c-Myc proto-oncogene
CpG : Cytosine-phosphate-guanine
CR1 : Conserved region 1 (of adenovirus E1A)
CYLD : Cylindromatosis tumor suppressor protein (deubiquitinase)
Cys : Cysteine
DAPK1 : Death-associated protein kinase 1
DNA-PK : DNA-dependent serine/threonine protein kinase
DNMT1, DNMT3A and 3B : DNA methyltransferases 1, 3A and 3B
E1, E2, E4, E5, E6 and E7 : Open reading frames of the papillomavirus early (E) genome region
E2F : E2 factor family of transcription factors
E6AP : E6-associated protein
E6TP : E6-targeted protein
EDD : E3 ubiquitin ligase EDD
EMT : Epithelial-mesenchymal transition
EPB41L3 : Erythrocyte membrane protein band 4.1-like 3
ERC55 : Endoplasmic reticulum calcium-binding protein
FADD : Fas-associated death domain protein
FIGO : International Federation of Gynecology and Obstetrics
GAS5 : Growth arrest-specific transcript 5
HDAC : Histone deacetylase
hDlg1 : Disc large homolog 1
HMT : Histone methyltransferase
HOTAIR : HOX transcript antisense intergenic lncRNA
HR E6 : High-risk E6
hTERT : Human telomerase reverse transcriptase
IARC : International Agency for Research on Cancer
IRF3 : Interferon regulatory factor 3
L1, L2 : Major and minor capsid proteins
LCR : Long control region
lncRNA-LET : lncRNA low expression in tumor
lncRNAs : Long non-coding RNAs
MAGI1 : Membrane-associated guanylate kinase 1
MAL : Myelin and lymphocyte protein
MAPK : Mitogen-activated protein kinase
MET : Mesenchymal-epithelial transition
miRNAs : MicroRNAs
MMP-9 : Matrix metallopeptidase 9
MPL : Monophosphoryl lipid A
mTORC : Mammalian target of rapamycin complex
ncRNA : Non-coding RNA
NFX1-91 : Splice variant of nuclear transcription factor, X-box binding 1
NF-κB : Nuclear factor kappa B
NLS : Nuclear localization signal
NuRD : Nucleosome remodeling deacetylase
ORF : Open reading frame
p53 : Tumor protein p53
PAX1 : Paired box 1
PKC : Protein kinase C
pRb : Retinoblastoma protein
PRDM14 : PR domain containing 14
PRMT : Protein arginine methyltransferase
PSD95 : Postsynaptic density protein 95
PTEN : Phosphatase and tensin homolog
RASSF1 : Ras association domain family member 1
SAP97 : Synapse-associated protein 97
SET7 : SET domain-containing protein 7
Snail : Zinc finger protein SNAI1
STAT : Signal transducer and activator of transcription
SV40 : Simian virus 40
TLR : Toll-like receptor
TNFR1 : Tumor necrosis factor receptor 1
TUSC8 : Tumor suppressor candidate 8
UBE3A : Ubiquitin-protein ligase E3A
UBR4 : Ubiquitin protein ligase E3 component N-recognin 4
VEGF : Vascular endothelial growth factor
VIM : Vimentin
Wnt : Wingless-related integration site
ZO-1 : Zonula occludens 1

Funding {#FPar1}
=======

No sources of funding were used to assist with the preparation of this review.

Availability of data and materials {#FPar2}
==================================

Not applicable (the present paper is a review article describing published data).

Authors' contributions
======================

S M G performed the literature review, drafted the manuscript and revised it critically for important intellectual content. J M-P reviewed the manuscript. Both authors read and approved the final manuscript.

Authors' information {#FPar3}
====================

None.

Ethics approval and consent to participate {#FPar4}
==========================================

Not applicable (the present paper does not report on or involve the use of any animal or human data or tissue).

Consent for publication {#FPar5}
=======================

Not applicable (the present paper does not contain data from any individual person).

Competing interests {#FPar6}
===================

The authors declare that they have no competing interests.
Publisher's Note {#FPar7} ================ Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
FILD is another lucid dreaming technique that originated on the famous lucid dreaming forum LD4ALL. It is best thought of as a supplementary technique that combines reality checks with the WBTB (Wake Back To Bed) technique. Let's learn this technique by exploring it step by step.

The Creator and Naming of FILD

The first name for the FILD technique was HILD, "Hargart's Induced Lucid Dream," after its creator, Hargart. It is now better known by a descriptive name: Finger Induced Lucid Dreaming. The reason behind this name is the use of the fingers to induce a lucid dream. But, as you will understand while reading further, you can focus on something other than your fingers.

How does Focus Induced Lucid Dreaming work?

Focus induced lucid dreaming is a complete step-by-step process that starts with WBTB, and you have to perform certain reality checks to confirm your presence in a lucid dream.

Step 1: Go to sleep after following your usual bedtime routine; as mentioned above, FILD works with WBTB, so you need to set an alarm for 3 to 5 hours after you fall asleep.

Step 2: This wake-up time should ideally align with the REM (Rapid Eye Movement) stage of sleep. If you feel exhausted, the chances are that you just woke up from deep sleep. That's why you should try timing it to a 90-minute cycle – try 3, 4.5, or 6 hours of sleep. Your sleep cycle might be different, so try other waking times if the first does not work for you. Then occupy your mind for 20-60 minutes. Try not to wake up completely; remain drowsy. Then go back to bed.

Step 3: When you relax and feel yourself falling back asleep, perform a focused body-part movement. This movement should be minimal and take little energy to complete; some examples are listed below:

- Play the piano with two fingers.
- Touch your right foot to your left.
- Drag your arm slowly.
- Bend your legs.
Avoid any visualization, and keep making the movement for 10-30 seconds without counting. Tell yourself that you will do a reality check in 10-30 seconds.

Step 4: Keep up your movement and, after 10-20 seconds, do a reality check. If it fails, try to sleep for a few minutes without moving or checking. You can try again after a few minutes; if it fails again, go to sleep and try again in the morning.

Step 5: When the reality check works, you are in a lucid dream; keep your eyes closed and keep doing reality checks. If they still work, your real world has transitioned into a dream world.
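The 90-minute cycle arithmetic from Step 2 can be sketched in a few lines of Python (a minimal illustration; the function name `wbtb_wake_times` and the default cycle counts are our own assumptions, and real sleep cycles vary from person to person):

```python
from datetime import datetime, timedelta

def wbtb_wake_times(bedtime, cycles=(2, 3, 4), cycle_minutes=90):
    """Return candidate WBTB alarm times, assuming 90-minute sleep cycles.

    The article suggests waking after 3, 4.5, or 6 hours of sleep,
    i.e. after 2, 3, or 4 full 90-minute cycles; adjust `cycle_minutes`
    if your own cycle length seems to differ.
    """
    return [bedtime + timedelta(minutes=cycle_minutes * n) for n in cycles]

# Example: lights out at 23:00 -> candidate alarms at 02:00, 03:30, 05:00
bedtime = datetime(2024, 1, 1, 23, 0)
for t in wbtb_wake_times(bedtime):
    print(t.strftime("%H:%M"))
```

Waking at a cycle boundary rather than mid-cycle is what reduces the groggy, deep-sleep wake-ups the article warns about.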
In Australia, the rights of creators and users of copyright material are enshrined in the Copyright Act and apply to any work made or created in Australia. The University takes its copyright obligations seriously, and staff and students who breach copyright may face disciplinary action. Generally, copyright is infringed if a work—or a substantial part of a work—is used without permission in one of the ways exclusively reserved for the copyright owner. Ownership of copyright by staff and students is described in the University's Intellectual Property: Ownership and Management Policy. Section 4 (Ownership of Intellectual Property) describes in more detail how the Policy applies to teaching materials, funded research, scholarly works, student theses and assessment materials. For a work to be protected by copyright, it must be in a material form and have a human author. Copyright protects the expression of the idea, not the idea itself. It protects published and unpublished material, including material available in electronic form. The Copyright Act provides copyright protection to the following materials: The creator of a work is generally the owner of that work until or unless they assign their copyright to someone else. There are some situations where the creator of the work is not the 'first' owner of the work: Copyright can be sold to or divided between different parties. An author can also assign copyright (i.e. sell completely or licence partial use) by territory, media, and time. For works that are still in copyright, copyright generally lasts for the life of the creator plus 70 years. Please refer to the Australian Government's Duration of copyright guide for specific provisions. Under the Australian Copyright Act, copyright owners have a number of exclusive rights, including: These rights are commercial rights and give the copyright owner a monopoly to control how their work can be used. 
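The life-plus-70 arithmetic above can be sketched as follows (a deliberately simplified illustration with a hypothetical helper name; actual duration in Australia depends on the type of work, its publication status and transitional provisions, so consult the Duration of copyright guide for real determinations):

```python
def copyright_expiry_year(death_year, term=70):
    """Simplified life-plus-`term` sketch: under the general rule, protection
    runs to the end of the calendar year `term` years after the creator's
    death. Real determinations involve many statutory exceptions.
    """
    return death_year + term

# A creator who died in 1990: protected through the end of 2060
print(copyright_expiry_year(1990))  # -> 2060
```
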
Under Part IX of the Australian Copyright Act, creators also have Moral Rights. These rights cannot be assigned or licensed, and last for the same period of time as copyright protection. Moral Rights refer to the rights of an author or artist: Under Moral Rights law, you must provide sufficient acknowledgement of the work copied, identifying the creator/author and the work from which the copy is taken (by its title and other descriptive information). Attribution and copyright infringement are two different things. You can properly attribute a work and still infringe copyright. The Fair Dealing provisions in the Copyright Act are purpose-based exceptions, which allow individuals to copy a reasonable portion from a copyright work for a limited number of specified purposes, without the need to obtain prior written permission from the copyright owner. Works copied under the Fair Dealing provisions must only be used for the purpose for which they have been copied. If a Fair Dealing exception does not apply to the amount you wish to copy or to the way you wish to use it, permission must be obtained from the copyright owner to use copyright material in any of the ways which are in the copyright owner's exclusive control. To rely on the Fair Dealing exception for Research or Study, the material must be copied for your own study or research, the use must be fair, and the original material must be attributed. The Fair Dealing exception for research or study cannot be relied upon by teaching staff to make copies on behalf of students or to make material available online, or by students making multiple copies of material for distribution. This exception may also not be relied upon to include copyright material in material that will be published. 
The Copyright Act does not define what constitutes Fair Dealing for: Before using or copying these materials, you will need to assess whether the purpose of your copying is fair, and whether the amount you wish to use is reasonable, even if that amount is only small (e.g. a stanza from a poem or a clip from a film or video). The Five Factors of Fairness: For more information, please refer to the Australian Copyright Council's Research or Study information sheet. To rely on the Fair Dealing exception for Criticism or Review, the purpose of the copying must be genuine criticism or review. It is not sufficient that you copy a work merely to illustrate or explain your own work. The Australian Copyright Council's Quotes and Extracts information sheet provides guidance around the scope of this exception. The terms parody and satire are not defined in the Copyright Act. Please refer to the Australian Copyright Council's Parody, Satire and Comedy information sheet to assist in determining whether your intended use may be considered to be parody or satire. If, after consulting this information sheet, you are still unsure, please contact Ask the Library. The Copyright Act requires the University to display notices on individual works copied or communicated for the educational purposes of the University. The University is also required to display prescribed warning notices on University equipment used to make print copies or to create electronic reproductions. These notices provide some protection against the University being found liable for authorising copyright infringements by students or staff. Under s.39 and s.104 of the Copyright Act, the University is required to display Warning Notices on University equipment which is used to make print copies or to create electronic reproductions.
Where equipment is unsupervised or does not display the required Notices warning users of the implications of their copying practices, the University can be found liable for authorising copyright infringement. Under s.39A of the Copyright Act, all University faculties, departments and libraries must affix the Notice about the reproduction of works and the copying of published editions and audio-visual items near or on all University equipment which can be used to copy or communicate copyright material, including machines with the capacity to create electronic reproductions. The following Copyright Notice should be displayed if you are reproducing or communicating any text, images, or notated music in electronic form.
https://guides.library.unisa.edu.au/copyright
Is now a good time to be asking for donations? That depends a little on where you are in your donor cycle and how you have treated your donors since they last gave. It’s been a tough year, for everyone. But Christmas is a time for giving and loyal donors in particular are used to being generous at this time of year. So, can you approach your donors for a donation? First, consider whether you have properly acknowledged their last donation. Was it receipted and did you thank them in a timely manner? Did you use their donation for the purpose it was intended? Did you let them know how the money was used and the impact it had? It is so important to make donors feel valued, no matter the size of their gift. You want them emotionally invested in your organisation so you need to make sure they feel like they have a role. Take them along on the journey, report back and share your successes. If you have stayed in touch, have maintained the relationship, and have not been silent during these past few difficult months, then yes… reach out. Wish them well, and let them know you’ve been thinking of them and that you hope they have been thinking of you too. Tell them of your plans, why you need their support, and begin your donor cycle again! Good luck!
https://www.crayoncreative.com.au/christmas-campaigns/
The ESOL Program is a standards-based curriculum emphasizing social and academic language proficiency, which enables ELLs to use English to communicate and demonstrate academic, social, and cultural proficiency. I concur with him that sparking further discussion about accountability is essential, and I hope my comments and the abbreviated dialog with him contribute to that discussion. In Phase 1, SEDL staff met with leaders to examine three major categories of data by student groups, grade levels, and campuses: Even Start Family Literacy Program The Even Start Family Literacy Program provides funds to help break the cycle of poverty and illiteracy by improving the educational opportunities of low-income families, integrating early childhood education, adult literacy and parenting education into a unified family literacy program. That is, it could still be too much of a one-size-fits-all approach. Grading Schools does not address the issue of making decisions about students, as it is not a component of federal accountability decisions. However, states will be thinking about that should they redesign their systems. The authors surveyed the general public, school board members, and state legislators on how they would weight these, then offer their own proposed weighting formula. No doubt that is partly because I chair FEA, which has produced what I think are some positive conceptions of the federal role that are fundamentally different from current federal law. They help address non-academic issues in the lives of students and their families to ensure academic success in the classroom. SES are high-quality, research-based educational programs. However, the draft contains a vast number of core disciplinary ideas and sub-ideas, leaving little or no room for anything else.
Specifically, this research found that vocational education, correctional industries, and academic education all significantly reduce the recidivism rate of participating inmates after they are released from prison. A proactive approach to program development, such as inviting input from teachers, students, counselors, and administrators through periodic needs assessments, may maximize existing resources and services offered to non-college and college-bound students. This represents an increase of about 40 percent compared to spending in the prior year. Prison Education Benefits Public Safety. After removing the variables of school enrollment size, socioeconomic status, and percentage of minority students in attendance, positive program effects were identifiable. The concluding chapter does not offer specific recommendations on how the weighting would play out in actions taken in response to the data, though it provides some possible examples. Framing the accountability question more broadly than just schools is important, but much more work will need to be done to construct an integrated accountability system — a point the authors at least implicitly make themselves, saying they intend this book to spur thinking and discussion. Quantitative analyses of research meta-analyses also substantiate the beneficial effects of school counseling programs. Instructors provide inmates with workbooks focused on prerelease skills necessary for successful reintegration to communities as well as some academic material. However, if the system heads toward collecting significant amounts of data annually in each area in most grades, not only would costs be very high, but the assessing could become even more burdensome for schools than the current system. Traditionally, California students' access to counselors varies by grade level, and 29 percent of California school districts have no counseling programs at all. 
For the remainder of the essay these criteria will serve to define success. And we are planning changes to several key elements for which the Fordham Institute study found limited alignment of ACT Aspire assessments with the CCSS in writing, reading and mathematics. In some vocational programs, inmates who complete the required curriculum earn professional certifications in those trades, such as air conditioning repair and welding. Counselors can provide students with a variety of information and support. It calls for a clear but limited accountability role for the federal government, returning primary responsibility to the states. SEDL staff facilitated Georgetown and Lancaster teachers in using this process to examine content standards, develop common assessments to gauge student learning, analyze results from these assessments and others to determine student success, and plan how to refine instruction to scaffold or enrich student understanding. But after the destructive reductionism of NCLB, the nation needs public debate on how important various aspects of learning are and how to ensure our children receive a balanced opportunity for human development. The California Department of Education oversees the state's diverse public school system, which is responsible for the education of more than six million children and young adults in more than 10,000 schools. While a reasonably lean system can be constructed, the quantity of assessing will be a critical issue if states move toward the sort of accountability system the authors outline.
A systematic commitment to the wrong quantitative measures, such as the inexpensive multiple-choice testing of factoids, may well result in the appearance of gains at the tremendous cost of suppressing important aspects of learning, attending to the wrong things in instruction, and conveying to students a distorted view of science. In addition to appreciating more specificity in the authors' thinking, I think the inspections would be valuable but not sufficient as an improvement or evaluation tool, for the reasons I explained above. Magnet Schools Magnet schools are public schools that offer a targeted learning environment that attracts students interested in specific content areas, such as mathematics, science, technology and fine arts. In this article, Ed World's "Principal Files" team shares strategies that have helped them boost sagging scores -- strategies that could work for you too. The authors' more limited recommendations to the states focus on the development of inspectorates. No single assessment can tell educators all they need to know to make well-informed instructional decisions, so researchers stress the use of multiple data sources. Develop a case management system that assigns inmates to the most appropriate programs based on risk and needs. National standards and testing, they say, will ensure that all children are ready for college or the workforce and will advance the educational standing of the United States. American Educational Research Association, American Psychological Association & National Council on Measurement in Education, Standards of Educational and Psychological Testing, Introduction. In education, the term assessment refers to the wide variety of methods or tools that educators use to evaluate, measure, and document the academic readiness, learning progress, skill acquisition, or educational needs of students.
The Standards for Educational and Psychological Testing,* created by the American Psychological Association, the American Educational Research Association, and the National Council on Measurement in Education, present a number of principles that are designed to promote fairness in testing and avoid unintended consequences. They include. The Migrant Education Program (MEP) is a federally funded program designed to support comprehensive educational programs for migrant children to help reduce the educational disruption and other problems that result from repeated moves.
https://dyhojuxumuc.cwiextraction.com/improve-educational-testing-standards-43377im.html
Article: Juvenile justice law in Cambodia At present, there is no juvenile justice system in Cambodia. Children are tried in adult courts, most often with limited legal representation. They are generally held in adult prisons with limited or no access to rehabilitation or educational support. Regardless of the law, they are frequently held in pre-trial detention, often beyond the legal limit of two months. Thus, children find themselves in a system that is insufficient to respond to their specific needs and rights as children. In this context, in which both legal protection and law enforcement are major challenges, children's rehabilitation and educational support are big priorities. Evidence shows that currently, Cambodia's judicial system overlooks this issue and therefore regards prison in a punitive way. This Life Cambodia believes that reintegration of prisoners into society should always be the main goal of any judicial system. Not only does a focus on reintegration promote prisoners' individual rights, but it has also been demonstrated that it constitutes an effective way of preventing prisoners from reoffending upon release. This is of concern especially in the case of children, given their propensity for change and also with regard to the success of their leading happy and productive lives into adulthood following their release. The provision of vocational training, together with personal development and family visitation, is fundamental to promoting their right to basic education, providing them the necessary skills to face life after prison, and preventing them from reoffending. Even though there is a draft law on juvenile justice —which was expected to be enforced by 2013—, NGO support is (and will be) fundamental in promoting children's rights in Cambodia.
The draft addresses the necessity of providing a child-friendly environment and access to education to children in prison, but the lack of budget —especially relevant in terms of training professionals such as lawyers, prosecutors and judges— and the fact that prisons are not within the main priorities of the government make the action of NGOs essential. Being aware that law itself constitutes a great barrier in promoting children's rights, but also that law enforcement is a key issue because of the lack of resources, This Life Cambodia has been helping to tackle, together with other organizations, some of the major goals anticipated in the draft law —including vocational training and personal development within prison, supporting family visits, knowledge-sharing with the children's communities and the main stakeholders, and post-release support to ensure students' reintegration in society—, achieving successful reintegration results over time. As stated above, the lack of investment and enforcement mechanisms of a potential juvenile justice law makes This Life Cambodia and other organizations' work as necessary as ever. Did you know that…?
- The Cambodian government has been working on the juvenile justice law since 2000.
- Cambodia has signed the Convention on the Rights of the Child and yet it has not adopted a juvenile justice system.
- Cambodia is the 'youngest' country in the region—with almost 50% of its population under 18—, and yet its capacity to protect children who come in contact with the law is among the least developed.
- The Guardian stated that there was a 92% increase in the number of under-18s in prison between 2005 and 2010 —from 403 to 772 children—.
- According to UNICEF, there are only 296 judges and prosecutors, and 500 lawyers registered within the whole country.
- 30% of children in prison are in pre-trial detention.
- According to LICADHO, almost 40% of all children in prison are detained with adults —and consequently sentenced without any regard to their age—.
- By September 2013, only four Cambodian prisons provided children with basic, on-site educational and recreational opportunities; all of these programs are run by NGOs.
About the author: Diego Gines is an intern with This Life Cambodia. A graduate, at 22, of a double degree in law and political science at Universidad Autónoma de Madrid, Diego is interested in international politics and development. Diego is working within the Community Research and Consultancy Program, the program team responsible for monitoring and evaluation for all This Life Cambodia programs, as well as researching and publishing on issues in development in Cambodia, with the sharing of best-practice development knowledge as the main focus.
https://thislife.ngo/archives/2831
anyone know how to calculate the maximum and minimum iterations of the algorithms in simulink model? I have my control algorithm built in simulink and I need to count the number of iteration during the running time. thank you Moh Answers (1) Kiran Felix Robert on 15 Jul 2021 Hi Mohamed, Try using a Counter with a Trigger in the loop to count the number of iterations. https://www.mathworks.com/help/dsp/ref/counter.html
https://se.mathworks.com/matlabcentral/answers/876503-number-of-iteration-in-simulink
The invention discloses a drawing chair capable of being lifted and horizontally moved automatically. The drawing chair comprises a base, a chair body and a lifting mechanism. A walking mechanism is arranged below the base, and two stand columns are arranged on the base. The lifting mechanism comprises a double-output-shaft gear motor, two winches, two fixed pulleys I and two fixed pulleys II; the two winches are each provided with a coil of steel wire rope, one end of each steel wire rope is connected with its winch, and the other end winds around the corresponding fixed pulley I and fixed pulley II to be connected with the chair body. The drawing chair further comprises a control box and a remote control; the control box is internally provided with a remote control receiver, a controller and a steering device, the remote control is in wireless connection with the remote control receiver, the signal receiving end of the controller is connected with the remote control receiver, and the signal output end of the controller is connected with a walking driving device, the steering device and the gear motor of the walking mechanism. Time can be saved for drawing, and the potential safety hazards caused by frequently going up and down and carrying the chair are avoided.
Disclosure statement
Tegwen Gadais is affiliated with the UNESCO Chair in Curriculum Development and Education in Emergencies. He teaches and does research on Physical Education and Health Education at UQAM. Maud Deschênes does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
As physical and health education (PHE) teachers responsible for the health education courses offered at l'Université du Québec à Montréal (UQAM) for future teachers, we have some advice for families living in confinement to help parents and children lead the healthiest and most active lives possible. These can be intense moments of activity such as simple games — playing hide and seek, making a fort in the basement, inventing a route in the alley, throwing a basketball, kicking a soccer ball, playing ball hockey in the street, dancing, biking or skateboarding. What's important is that the activities are varied and regular. You can also ask your child to explain the latest game they learned at recess or in their health and physical education class and try it out with them. The possibilities are endless!
Take active breaks
As well, you can alternate with fine motor activities such as writing, painting, drawing, modelling, sewing or crafts. These activities should be interspersed with breaks and ideally be done in several short periods of five to 15 minutes rather than in one long 60-minute period. What is important is that the activities are diversified and regular. Other options exist for getting young people moving, such as walking and household chores. Additional strategies can be found online, such as active breaks with GoNoodle, Wixx (in French) and H2GO, for example. Choose your breaks carefully, though: some are more useful, or more fun, than others.
Finally, free play outside, in a backyard, garden or in the street, without contact with others, is another possibility.
Healthy, balanced eating
Boredom and withdrawal can easily lead to complacency in a confinement situation. This is an excellent opportunity to cook as a family and to learn how to eat healthily with our children by offering them recipes adapted to their abilities and needs. A balanced menu can be planned ahead of time by including the necessary groceries according to your budget. Remember that eating should be associated with pleasure and not a chore. Canada's Food Guide can be an excellent home reference tool for children. It is also possible to introduce children to new foods, our own cultural specialties, and other foods through well-known sites such as Ricardo. For parents who like to experiment, this is the opportunity to try out new recipes or to dust off the old recipe books that are lying around in your library. It's also an opportunity to make your children aware of the importance of gardening, food waste, recycling and composting. As the food guide indicates, we must encourage diversity, reasonable portions, meals in good company and the pleasure of enjoying our food. In this time of confinement and upheaval of routine, the temptation to lose good habits can be great.
Get a good night's sleep
The current crisis requires a major change of pace. For the well-being of all (parents and children alike), it is important to get enough sleep. A tired child is under stress and will be more irritable, which can have an impact on the whole family. It is best to keep the usual bedtime and wake-up times, with a preference for quiet activities (without screens) just before bedtime.
Reduce sources of stress
Isolation is difficult because we need contact with others.
It is important to find other ways to do this, such as organizing a meal with friends via Skype, FaceTime or Messenger, or calling or writing messages to family and friends. There may also be times when your children experience stress, boredom or mental exhaustion related to the confinement situation. It is important to give them periods of rest, alone and quiet. Pay attention to the well-being of all family members. Organize breaks during the day when you find that motivation is no longer there. It's good to change tasks after 30 or 45 minutes, or when you think that screen time has lasted long enough. Read something other than the news and allow everyone to quietly retreat to a room in the house when necessary. Accompanying children in their screen use allows them to distance themselves from the content they consume, to criticize it and to reflect on it. Finally, we must manage what Meirieu called the "available brain time" for learning so that children can continue to learn and not just be entertained.
Learn by playing
Young people learn first and foremost through play. This can be free play outside, board games or directed play under parental supervision. Above all, they must keep their motivation and confidence. A multitude of games and activities help consolidate what has been learned at school. Cooking, arts and crafts and physical activity are extraordinary opportunities for children to confront problems, look for ways to solve them and apply their knowledge. This is a good time to develop children's curiosity and independence, which will enable them to enjoy classes even more when school starts again. The most important thing is that children grow up playing and being active. If the situation becomes difficult, or your days seem long and complicated, or you find that your children need special support, don't stay isolated. Check out blogs or chat virtually with people in similar situations.
A wide range of resources are available to support the physical and mental well-being of your family unit during this time of crisis.
Original by Matthew R. Feinstein. Question: Does my bathtub drain differently depending on whether I live in the northern or southern hemisphere? Answer: No. There is a real effect, but it is far too small to be relevant when you pull the plug in your bathtub. Because Earth rotates, a fluid that flows along Earth's surface feels a "Coriolis" acceleration perpendicular to its velocity. In the northern hemisphere, Coriolis acceleration makes low pressure storm systems spin counterclockwise; however, in the southern hemisphere, they spin clockwise because the direction of the Coriolis acceleration is reversed. This large-scale meteorological effect leads to the speculation that the small-scale bathtub vortex that you see when you pull the plug from the drain spins one way in the northern hemisphere and the other way in the southern hemisphere. But this effect is VERY weak for bathtub-scale fluid motions. The order of magnitude of the Coriolis acceleration can be estimated from the size of the "Rossby number" (1): Ro = U/(ωL), where U is the velocity of a fluid element, L is the scale of the fluid motion, and ω is Earth's rotational velocity (= 1 rotation/day). The effect of the Coriolis acceleration on your bathtub vortex is SMALL. To detect its effect on your bathtub, you would have to get out of the tub and wait until the motion in the water is far less than one rotation per day. This would require removing thermal currents, vibration, and any other sources of noise. Under such conditions, which never occur in the typical home, you WOULD see an effect. To see what trouble it takes to actually see the effect, see the reference below. Experiments have been done in both the northern and southern hemispheres to verify that under carefully controlled conditions, bathtubs drain in opposite directions due to the Coriolis acceleration from the Earth's rotation (2).
In conventional units, the earth's rotation rate is about 10−4/second, so solving the above equation for the fluid velocity, we get that Coriolis acceleration in your bathtub is significant only for fluid velocities of less than about 2 x 10−6 metres/second. This is a very small fluid velocity. How small is it? One way to judge is the Reynolds number, Re = (L × U × density)/viscosity. Assuming that physicists bathe in hot water, the viscosity will be about 0.005 poise and the density about 1.0 g/cm³, so the Reynolds number is about 0.04. Now, life at low Reynolds numbers is different from life at high Reynolds numbers. In particular, at low Reynolds numbers, fluid physics is dominated by friction and diffusion, rather than by inertia. That is, at low Reynolds numbers the time it would take for a small piece of fluid to move a significant distance due to an acceleration is greater than the time it takes for that piece to break up due to diffusion. Ideas about which way a bathtub will drain have also been cited for giving the direction water circulates when you flush a toilet. This is surely nonsense. In this case, the water rotates in the direction in which the pipe that carries the water from the tank to the bowl points. (1) J. Pedlosky, Geophysical Fluid Dynamics, section 1.2. (2) Trefethen, L.M. et al., Nature 207, 1084–1085 (1965).
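The back-of-the-envelope arithmetic above can be checked directly. A minimal sketch, assuming a drain-vortex length scale L of about 2 cm (the text does not state L; it is chosen here so the numbers come out at the quoted order of magnitude):

```python
# Order-of-magnitude check of the bathtub Coriolis argument.
omega = 1.0e-4     # Earth's rotation rate, 1/s (order of magnitude, as in the text)
L = 0.02           # assumed length scale of the drain vortex, m (~2 cm)

# Rossby number Ro = U / (omega * L); Coriolis effects matter only when
# Ro <~ 1, i.e. for fluid velocities U below about omega * L.
U = omega * L
print(U)           # ~2e-6 m/s, matching the text's threshold velocity

# Reynolds number Re = L * U * density / viscosity (SI units here).
density = 1000.0   # kg/m^3, hot bath water
viscosity = 5.0e-4 # Pa*s, i.e. 0.005 poise as in the text
Re = L * U * density / viscosity
print(Re)          # ~0.08, same order as the 0.04 quoted above
```

A slightly different assumed L lands exactly on 0.04; the point is only the order of magnitude, which puts water moving slowly enough to feel the Coriolis acceleration firmly in the friction-dominated, low-Reynolds-number regime.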
http://physicsfaq.co.uk/General/bathtub.html
Keanu Reeves is headed back to The Matrix again for a fourth installment of the groundbreaking sci-fi saga. The initial confirmation of what's currently being called The Matrix 4 generated no shortage of headlines, and it also prompted quite a few questions — particularly involving what a new chapter could add to the post-apocalyptic cyberpunk franchise. With Reeves and fellow franchise star Carrie-Anne Moss returning for The Matrix 4 (with possibly more familiar faces to come), and original co-director and co-writer Lana Wachowski helming the project, all of the pieces are in place for a return to form. But exactly what shape that form will take is still a mystery. Here's what we need to see in a new chapter of The Matrix series to make a return to that universe worth jacking in.
1. A new dimension
The first chapter of The Matrix introduced audiences to a world in which everything we — via Reeves' audience surrogate, Neo — thought was real was actually a facade. It swept us along on a dissection of the very natures of humanity and sentience, set against the backdrop of a post-apocalyptic future in which machines harvest human beings for energy, while attempting to discern the contents and fundamental structure of our souls. Much of what the machines did and why they did it was an enigma wrapped in endless layers of riddles that Neo was forced to contend with as he came to grips with his abilities in both the real world and the artificial existence within The Matrix. A new film set within that universe can't simply throw more riddles at us. In order to justify a return to The Matrix, we need to discover that the story — much like Neo's world — has more layers to it than initially believed.
Whether it's a change in the balance between the machines and humans, a new role to play for Neo and his allies, a deeper dive into the simulation, or a revelation that changes everything we thought about The Matrix or the reality the characters perceive, something needs to make us sit back and say, "Whoa."
2. A world worth fighting for
Say what you will about Cypher, the Judas character played by Joe Pantoliano in The Matrix who betrayed his shipmates in exchange for a comfortable life within the simulation, but he had a valid point about the real world. It was a dark, dirty, desperate life for the humans freed from The Matrix, forced to constantly run from vicious machines that outnumbered and outgunned them. One can almost understand why Cypher was willing to sell his soul for a return to a faux-life of ease and comfort. Although subsequent installments of the franchise showed us a bit more of humans' daily lives when they weren't fleeing machines — basically, a nonstop, subterranean, cyberpunk rave — the series never really gave audiences a world that seemed undeniably worth, well … saving. If we're going to return to the war between humans and machines, here's hoping we get to see humanity creating a world for itself that feels precious and important enough to justify everything that the protagonists endure. The machines need to be stopped not because they'll end a kick-ass party, but because they endanger everything that makes us human.
3. Bleeding-edge effects
The Matrix was a groundbreaking movie for a long list of reasons, but its most prominent and obvious achievement — and the element that secured one of its four Academy Awards — was its innovative, cutting-edge visual effects. In both its "real" and simulated worlds, The Matrix used visual effects that seemed light years ahead of anything audiences had seen on the big screen before.
Whether Neo was jumping off a skyscraper only to bounce off the pavement below, inverting the laws of physics in mid-battle, or running up walls in slow motion amid a hail of bullets, every scene in The Matrix and its sequels was a unique, fascinating feast for the eyes. A new film set within the worlds of The Matrix will need to look to the future, and then look far beyond it for whatever’s next when it comes to digital effects. The first movie was a game-changer for both the sci-fi genre and visual effects in general, and although the sequels delivered plenty of visual spectacle, they occasionally coasted on the prior films’ visual brand. For a return to that world 15 years later to have a lasting impact and feel authentic, the new film will need to raise the visual effects bar all over again. Visual effects have come a long way since The Matrix premiered in 1999, but if anyone can figure out a way to push the boundaries of what can be done and how it can be brought to the screen, it’s Lana Wachowski.

4. Fresh, fascinating fight scenes

Along with its visual effects, another hallmark of The Matrix and its sequels was the acrobatic, physics-defying, bullet-filled fight sequences that filled all three installments of the trilogy. The films earned plenty of praise for their action sequences, which blended aspects of Hong Kong cinema, classic martial arts movies, and more modern “gun-fu” fighting techniques. Those sequences only got more complex as the series progressed. One of the reasons Reeves is back in the spotlight is the success of his 2014 film John Wick and the action franchise it spawned, and one of the primary elements that made John Wick so fascinating is its masterfully choreographed fight sequences. In many ways, Reeves’ performance in John Wick and its sequels evoked a lot of what made The Matrix so entertaining, so it makes sense that he’s being brought back to the latter’s sci-fi universe.
If a new Matrix movie is going to work, we need more of that innovative approach to action that made The Matrix so memorable. Audiences need to be energized by what’s happening in the film and feel like what they’re seeing is the farthest thing from the usual big-screen brawls. It will be a big challenge, certainly, but we’ve seen Reeves do it not once but twice now. There’s no official release date for The Matrix 4 at this point, and no indication of when production will begin on the film. The views expressed here are solely those of the author and do not reflect the beliefs of Digital Trends.
https://aroundthenews.info/what-the-matrix-4-needs-to-make-us-take-the-red-pill-again/
At the Associated Students of the University of Washington, partnerships are our bread and butter. None of what we do would be possible without our network of collaboration. We have opportunities to engage with partners at all levels of the Association, and in many different ways. We invite you to read about our past partnerships, and cannot wait to explore new engagement opportunities with you soon. We team up with Registered Student Organizations and other student leaders outside of ASUW on events and advocacy efforts. Together, we can reach broader audiences, share resources, and amplify our voices. If you would like to partner with us or get us involved in an important project on campus, we want to hear from you. When we share space, skills, or resources with individuals, organizations, and businesses, everyone wins. Attention: the ASUW Partnerships website is currently under development. Check back soon, as we will be adding pages with more in-depth and up-to-date information throughout the winter. If you have questions which are not currently addressed on the website, please send us an email.
http://partner.asuw.org/
The blockchain-based education system is a new way of organizing and using online student data that gives students ease of access, a sense of ownership, and immutability; it may also define how education data is stored in the future. Read on to learn what role blockchain technology plays in education and how blockchain can change the education system.

|Role of blockchain technology in the education sector.|

How Can Blockchain Technology Transform the Education System?

Blockchain-Based Education System

Better education is very important for anyone who wants to move forward in life and achieve success. It builds our confidence as well as our personality. Education has played an influential role in all developed countries around the world, and it is the main axis of progress and development in the lives of individuals and communities. Education is not limited to classrooms; it also touches the most important areas affecting a country's future, including industry, medicine, agriculture, and science, all of which share a close relationship with the country's level of education. The e-learning system is online education: it uses modern electronic technologies to access everything related to teaching materials outside the boundaries of the classroom. The most common terms used to describe it are distance learning and computerized e-learning, and it typically takes the form of interactive online courses in which students can interact with teachers and receive their assignments at the same time. It should be noted that many techniques can be used to raise the efficiency of the workforce and can be applied to solve problems in the field of education, whether governmental or private.
One of the most prominent techniques used in the education sector is blockchain technology, which many government educational institutions are preparing to incorporate into their work. This technique contributes radical solutions to many problems that may hinder the education process. In this article, we will discuss the role of blockchain technology in the education sector and how it could transform the education system.

Role of Blockchain Technology in the Education Sector

The role of blockchain technology in education is as follows:
⇒ Blockchain gives educational institutions a tool for retaining documents digitally and disposing of the papers stacked on their shelves.
⇒ Disposing of paper documents in general reduces the possibility of forgery and the chance of loss.
⇒ Official certificates are preserved and protected from loss.
⇒ Information can be accessed easily if the owner authorizes a user to do so.
⇒ Employer skepticism regarding an employee's experience and job qualifications is eliminated.
⇒ Competent and experienced people can be selected in full transparency for the positions that suit them.
⇒ Data remains private and is stored for the applicant or its owner without unauthorized access.
⇒ Millions of dollars of spending can be rationalized and working hours reduced more than ever.
⇒ Unrecognized transactions in educational institutions can be reviewed and corrected.
⇒ Grants, loans, and projects can be distributed through transparent filtering in a fair way.
⇒ The efficiency of financial accounting in educational institutions of all kinds increases.

How Can Blockchain Technology Transform the Education System?

Blockchain technology has gained a lot of fame over the last few years for its superior protection services and stronger cybersecurity.
However, the potential uses of blockchain technology extend far beyond its current applications and could one day bring major change to classrooms. In a few years, blockchain technology could become an integral part of modern schools and institutions worldwide. To understand the potential impact of blockchain technology on the education system, it helps to know how other fields have used blockchain to improve their processes; then you can easily see how schools and institutions might use it for educational purposes.

Applications of Blockchain Technology in the Education Sector

The most prominent applications of blockchain technology in the education sector are:

Diplomas and certificates: Blockchain can hold a student's academic record (titles, diplomas, grades, and experience) in a form that is highly protected from any risk that may affect it. It is the best way to ensure that no changes are made to the certificates and diplomas each student has obtained.

Accreditation of certificates and credentials: This is an effective way to accredit papers and certificates based on personal skills. It comes alongside projects that later give students the opportunity to prove the credibility of skills they have acquired during group learning.

Transaction documentation: Blockchain technology plays an important role in keeping transactions reliable. It helps control economic transactions of various forms with e-learning institutions, thereby verifying the credibility of virtual educational institutions and avoiding fraud.

Secure data management: This is often required in virtual education, where documents are stolen, people are impersonated, and information is changed. Blockchain technology thwarts such attempts to change data and information.
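The certificate use cases above all rest on one property: records chained together by cryptographic hashes cannot be quietly edited. The following is a minimal, illustrative Python sketch of that idea; the field names and the toy two-block ledger are invented for this example and are not drawn from any real platform.

```python
# Minimal sketch of a hash-chained certificate ledger: each block stores
# the hash of the previous block, so editing any earlier certificate
# invalidates every block that follows it.
import hashlib
import json

def block_hash(content: dict) -> str:
    # Hash the block's canonical JSON form (sorted keys for determinism).
    payload = json.dumps(content, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_certificate(chain: list, record: dict) -> None:
    # Link the new block to the hash of the previous one (or zeros).
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"record": record, "prev_hash": prev}
    block["hash"] = block_hash({"record": record, "prev_hash": prev})
    chain.append(block)

def chain_is_valid(chain: list) -> bool:
    # Recompute every hash and check each back-link.
    prev = "0" * 64
    for block in chain:
        expected = block_hash({"record": block["record"],
                               "prev_hash": block["prev_hash"]})
        if block["prev_hash"] != prev or block["hash"] != expected:
            return False
        prev = block["hash"]
    return True

ledger: list = []
add_certificate(ledger, {"student": "A. Student", "degree": "BSc", "year": 2019})
add_certificate(ledger, {"student": "B. Student", "degree": "MSc", "year": 2019})
assert chain_is_valid(ledger)

# Tampering with an earlier certificate is immediately detectable.
ledger[0]["record"]["degree"] = "PhD"
assert not chain_is_valid(ledger)
```

A real deployment would add signatures and distribute the ledger across many nodes, but the tamper-evidence that makes diplomas trustworthy comes from exactly this chaining.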
AI-Powered Chatbots for Exam Preparation: The Opet Foundation's OPET token backs an AI-powered education chatbot, available as a mobile and web application, designed to provide students with lessons that help them prepare for exams. Apart from answering student queries, the app recommends worksheets and problems for users to solve. It then creates profiles for users and tracks their learning speed and grades. This information is placed on the blockchain, allowing institutions to evaluate students based on their progress.

Hyperledger-Based Education Industry: Hyperledger is a multi-project, open-source blockchain effort hosted by The Linux Foundation. Hyperledger Fabric is an enterprise-grade framework for distributed ledger technology and blockchain, providing a transparent, systematic approach for developing solutions. The transparent, tamper-evident nature of blockchain means that records cannot be easily changed, while Opet's “Hyperledger” system adds privacy for students. An authority validation framework ensures that universities that want to see student records have access to specific fields of up-to-date information. At the same time, only approved nodes such as universities and institutions can edit records on the blockchain, ensuring that all information is trustworthy. All of this is done through a dedicated API and user interface connected to both the distributed ledger technology and the blockchain. With these innovations, the Opet Foundation believes Hyperledger can gradually change the field of education as we know it through a series of steps forward. Nothing is more powerful than an idea whose time has come. It is a good time to link education, artificial intelligence, and blockchain together.
https://www.scientificworldinfo.com/2019/10/role-of-blockchain-technology-in-education-sector.html
What is an IDE in programming? - Kritidipta Ghosh - Author

In this article, we will mainly focus on what an IDE is and why programmers use IDEs for coding. We will also look at some popular IDEs heavily used today. Every beginner faces one common question: “Where should I code?” or “Which IDE will be the best for me to start coding?” Hopefully, after going through this article, any beginner will be able to find out which IDE is good for him or her. Let’s begin.

Table of Contents

What is an IDE?

IDE stands for Integrated Development Environment. As its name suggests, an IDE really is an integral part of a developer's toolkit. An IDE is software that gives computer programmers flexibility in software development by providing some excellent features and tools. An IDE typically provides three main tools: a source code editor, build automation tools, and a debugger. Let's briefly discuss each of them first.
- Source code editor: This is basically a text editor (like Notepad) that helps developers write and edit code efficiently. It provides syntax highlighting, rendering the keywords of a given programming language in different colors. Modern IDEs come with an auto-completion feature that helps developers write code faster than ever.
- Build automation tools: To produce software that can run on a machine, source code needs to go through certain processes: compiling it into binary code, packaging the binary, and running automated tests. With an IDE, these processes can be completed with a single click. Beginner-level coders in particular find this tool useful and time-saving.
- Debugger: IDEs also come with a debugging feature. In a large codebase, finding an error can consume a lot of time; here the debugger is a time-saver. It is basically a program that helps developers test another program to locate errors.
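As a rough illustration of the build-automation step above, here is a small Python sketch; the project layout, file names, and test are hypothetical. It byte-compiles every source file and then runs the automated test suite, which is essentially what an IDE's "build and run tests" button does behind the scenes.

```python
# Sketch of what an IDE's one-click "build and test" automates.
# For Python, "compiling" means byte-compiling each source file.
import pathlib
import py_compile
import subprocess
import sys
import tempfile

def build_and_test(src_dir: pathlib.Path) -> bool:
    # Step 1: compile every source file; raises on syntax errors.
    for src in src_dir.glob("*.py"):
        py_compile.compile(str(src), doraise=True)
    # Step 2: run the automated test suite via unittest discovery.
    result = subprocess.run(
        [sys.executable, "-m", "unittest", "discover", "-s", str(src_dir)],
        capture_output=True,
    )
    return result.returncode == 0

# Usage: create a tiny two-file project and "build" it with one call.
with tempfile.TemporaryDirectory() as d:
    proj = pathlib.Path(d)
    (proj / "mathutil.py").write_text("def double(x):\n    return 2 * x\n")
    (proj / "test_mathutil.py").write_text(
        "import unittest\n"
        "from mathutil import double\n\n"
        "class TestDouble(unittest.TestCase):\n"
        "    def test_double(self):\n"
        "        self.assertEqual(double(3), 6)\n"
    )
    ok = build_and_test(proj)
```

An IDE wires a script like this to a single button or keystroke, surfaces the compile errors inline in the editor, and shows the test results in a panel, which is where the time savings come from.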
Evolution of IDEs

Before moving on, it is interesting to briefly trace the history of modern IDEs. In the early days, coding was not done in a colorful editor with the user's favorite theme or background; it started with punched cards and gradually moved to simple, text-only editors like Notepad. The world's first IDE was Maestro I, a product from Softlab Munich. When text-only editors became popular, programmers would write code in those editors, compile the code separately, figure out the errors manually, and then fix them. Over time, these processes became automated through IDEs. Today, writing code is easier and more engaging because of how it is presented in an IDE. Worldwide, Visual Studio is the most popular IDE; Visual Studio Code grew the most in the last 5 years (10.8%) and Eclipse lost the most (-13.5%). Trends for some popular IDEs are collected at https://pypl.github.io/IDE.html.

Types of IDEs

There are various IDEs available on the market today. Some are paid and some are free; some focus on a specific language while others are built to support multiple languages. We are going to discuss some of the types here.

Single-Language IDEs

This type of IDE is built to support a specific language. It is helpful for those who generally work in a single language, though nowadays most developers use multi-language IDEs. Some examples are given below:
C & C++: C-Free, Dev-C++
Python: IDLE
Java: JCreator
Ruby/Rails: RubyMine

Multi-Language IDEs

As the name suggests, these IDEs support multiple languages and are massively used by today's developers. Some of the popular ones are discussed below:
- Visual Studio: The most commonly used IDE in the developer community, supporting many languages. It runs on Windows and macOS, and its lighter sibling Visual Studio Code also runs on Linux. It helps programmers code in a productive and innovative way.
This IDE also supports AI completion of code and is beginner-friendly. Official page: https://visualstudio.microsoft.com/
- Xcode: This IDE is Apple's own product and is supported only on macOS. It is very helpful for building software for iPad, iPhone, Mac, and all other Apple devices. The latest release includes many useful packages. Official page: https://developer.apple.com/xcode/

Mobile Development IDEs

Today, smartphones are an integral part of our lives, and these smart devices require smart apps, so mobile app development is one of the most exciting fields. People use various IDEs for mobile app development; we are going to discuss the two most widely used.
- Android Studio: This is an excellent IDE for Android developers, empowering them to build amazing, useful apps for all kinds of Android devices. It is based on IntelliJ IDEA. Official page: https://developer.android.com/studio
- Xcode: Discussed above; it mainly supports Apple developers creating apps for iOS.

Cloud IDEs

With the IDEs discussed so far, we can work only on our local machine. Cloud-based IDEs have broken this limitation, giving programmers access to code from anywhere and from any machine. These IDEs are growing rapidly; popular ones include CodeAnywhere, Cloud9, Codenvy, and CodeTasty.

Benefits of using IDEs for coding
- IDEs make coding engaging through syntax highlighting, keyword coloring, and more.
- An IDE saves a lot of time by automating tedious jobs for developers.
- Auto-completion enables faster coding.
- The setup is beginner-friendly.

How to choose an IDE

So far, we have discussed various IDEs and their use cases.
There are many more IDEs out there, but to choose a suitable one we need to know what we require from an IDE for the project at hand. We also need to check an IDE's cost and user-friendliness when choosing it. After going through this article once, it should be possible to decide. I personally like multi-language IDEs and recommend them to beginners.

Conclusion

Overall, the use of IDEs varies from person to person as well as from project to project. Hopefully, this article helps resolve confusion about choosing an IDE.

Learn programming on codedamn

Codedamn is an interactive coding platform with tons of sweet programming courses that can help you land your first coding job. Here's how:
- Step 1 - Create a free account.
- Step 2 - Browse the structured roadmaps (learning paths), or see all courses.
- Step 3 - Practice coding for free on codedamn playgrounds.
- Step 4 - Upgrade to a Pro membership account to unlock all courses and platforms.
Programming is one of the most in-demand jobs today. Learning to program can change your future. All the best!
https://codedamn.com/news/developer-tips/what-is-an-ide-in-programming
BACKGROUND OF THE INVENTION
SUMMARY OF THE INVENTION
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT
1. Field of the Invention
The present invention relates to an illumination apparatus and an optical radiation control method thereof, and more particularly relates to an illumination apparatus with a temperature compensatory function and an optical radiation control method thereof.
2. Description of the Related Art
Because of its power conservation, the LED is now extensively used for optical display, for instance in traffic lights and the tail lamps of motor vehicles. White light for general lighting is also a present focus of LED research. Currently, white light is generated by a number of methods described below. One method places yellow phosphor powder on top of a blue LED. The yellow phosphor powder absorbs blue light and emits yellow light, and white light is generated by mixing the blue light and the yellow light. The shortcoming of this method is its short longevity: the yellow phosphor powder ages easily, causing attenuation of the yellow light and resulting in color shift in the white-light LED. Another method mixes a red light LED, a green light LED and a blue light LED to generate white light. However, this method easily causes color shift because of the different conditions of each LED. U.S. Pat. No. 6,498,440 has presented a feedback mechanism to resolve this problem by utilizing an optical detector to detect the condition of each LED; the detector transmits a measurement result to a control circuit, and the control circuit uses the result to adjust the driving signal of the LED to rectify its optical radiation. Nevertheless, aside from the fact that aging of the LED affects optical radiation, temperature is also an important factor in changes of optical radiation. Referring to FIG. 4, the energy spectrum distributions of the LED at 40° C.
and 70° C. show substantial variation under the same driving signal. When temperature rises, the LED's optical radiation drops and its wavelength shifts. A drop in optical radiation can be compensated by the aforementioned feedback mechanism, but a shift in wavelength cannot be; worse, this phenomenon can cause erroneous adjustment by the feedback mechanism. Furthermore, the aging of an LED is a slow process whose effect can be effectively compensated by the aforementioned feedback mechanism, but the surrounding temperature of the LED changes frequently with weather and LED operation, so compensation by the feedback mechanism alone is insufficient. In view of these problems in the conventional art, the inventor of the present invention, with years of research, development and experience in the industry, has presented an illumination apparatus and an optical radiation control method for overcoming the above shortcomings. The purpose of the present invention is to provide an illumination apparatus and an optical radiation control method that offset the effects of temperature changes on the illumination apparatus. In accordance with the purpose of this invention, the illumination apparatus presented comprises at least one light emitting module, a control module, an optical detector module and a temperature calculating module. The light emitting module produces a light beam, and the control module generates a PWM signal to drive the light emitting module. The optical detector, electrically connected to the control module, detects the optical radiation of light emitted from the light emitting module. The temperature calculating module, electrically connected to the control module, calculates the temperature of the light emitting module based on the PWM signal and a predetermined PWM width.
The control module can adjust the optical radiation of light emitted from the light emitting module based on the calculated temperature and the detected data of the optical detector. Moreover, the present invention also presents an optical radiation control method for use in the illumination apparatus, wherein the illumination apparatus has at least one light emitting module and the method includes the following steps: using at least one PWM signal to drive the light emitting module; calculating the temperature of the light emitting module in accordance with the PWM signal and a predetermined PWM width; detecting the optical radiation of the light emitting module; and adjusting the optical radiation of the light emitting module in accordance with the detected optical radiation data and the calculated temperature. To make it easier for our examiner to understand the above technology and the effects achieved, a feasible preferred embodiment is described in detail below with the related drawings. The relevant charts and diagrams below describe a preferred embodiment of the present invention for an illumination apparatus and an optical radiation control method. To aid understanding, the same symbols are used to indicate the same elements in the embodiment. Referring to the block diagram of the illumination apparatus 1 of the present invention as shown in FIG. 1, the illumination apparatus 1 comprises at least one light emitting module 10, a control module 11, at least one optical detector 12 and a temperature calculating module 13. The light emitting module 10 is for emitting a light beam 14. For instance, the light emitting module 10 may be a red light LED, a green light LED and a blue light LED. If the illumination apparatus comprises these three types of LEDs, it can emit tri-color light to mix into lights of different colors. The control module 11 is for generating a PWM signal to drive the light emitting module 10. The optical detector 12, electrically connected to the control module 11, is for detecting the optical radiation of light 14 emitted from the light emitting module 10. The temperature calculating module 13, electrically connected to the control module 11, is for calculating the temperature of the light emitting module 10 based on the PWM signal and a predetermined PWM width. The control module 11 can adjust the optical radiation of light 14 emitted from the light emitting module 10 based on the temperature 16 and the detected data 15 from the optical detector 12. Since the aging of an LED is comparatively slow, a major change of optical radiation detected in a short period of time is usually caused by temperature changes of the light emitting module. Hence, a PWM width corresponding to a specific temperature of the light emitting module can be predetermined first, and the temperature of the light emitting module is then calculated based on the difference between the PWM signal and the predetermined PWM width. For instance, as FIG. 4 shows, when temperature rises to 70° C. the optical radiation of the light emitted by the LED declines and a red color shift occurs because the light's wavelength increases. Thereby, the optical radiation detected by the optical detector 12 decreases, and the control module 11 increases the width of the PWM signal to maintain a stable optical radiation of the light emitted by the LED. To compensate for the wavelength shift caused by the increase in temperature, the temperature calculating module 13 must first calculate the temperature of the LED and refer to a look-up table which records the corresponding relationship among temperature, light wavelength, optical radiation and PWM signal, allowing the control module 11 to generate the optimum PWM signal. The temperature and the PWM signal width are in a linear or nonlinear proportional relation.
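The control loop described above can be sketched in a few lines of code. In the sketch below, the coefficient, the reference temperature, and the look-up-table values are invented for illustration; the specification only states that temperature is inferred from the difference between the current PWM width and a predetermined width, and that a look-up table then supplies the compensated drive.

```python
# Illustrative sketch of the temperature-compensation loop.
# Assumed (not from the specification): a predetermined PWM width of 50%
# corresponds to a 40 C reference, Parameter_A = 2.0 C per percent of
# width drift, and the look-up table values are invented.

PARAMETER_A = 2.0           # degrees C per percent of PWM-width drift
PREDETERMINED_WIDTH = 50.0  # PWM width (%) that holds the target color at 40 C
REFERENCE_TEMP = 40.0       # temperature at the predetermined width

# Invented look-up table: temperature (C) -> compensated red-channel duty (%)
LOOKUP = {40: 50.0, 55: 57.0, 70: 65.0}

def estimate_temperature(pwm_width: float) -> float:
    # Temperature = Parameter_A x (PWM_R - predetermined PWM width),
    # offset here by the reference temperature of the predetermined width.
    return REFERENCE_TEMP + PARAMETER_A * (pwm_width - PREDETERMINED_WIDTH)

def compensated_duty(temperature: float) -> float:
    # Pick the nearest table entry; a real controller could interpolate.
    nearest = min(LOOKUP, key=lambda t: abs(t - temperature))
    return LOOKUP[nearest]

# The feedback loop has widened the red PWM from 50% to 60%, so the
# module is inferred to be running hot; the table then supplies a duty
# cycle meant to correct both the power drop and the wavelength shift.
temp = estimate_temperature(60.0)
duty = compensated_duty(temp)
```

The point of the two-stage design is that the feedback loop alone can only restore power, while the table, indexed by the inferred temperature, can also account for the wavelength shift that the optical detector cannot distinguish from aging.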
For example, the optical radiation and wavelength of the light emitted by the LED corresponding to temperature changes are detected in advance. When temperature rises from 40° C. to 70° C., the detected red light LED power drops from 2.60E-02 to 1.75E-02; in accordance with the aforementioned feedback mechanism, the control module increases the PWM_R width of the PWM signal that drives the red light LED, enhancing its light power. Thus, the corresponding relation between the temperature and the duty cycle of the PWM signal can be estimated:

Temperature = Parameter_A × (PWM_R − Predetermined PWM width)

wherein the predetermined PWM width is related to the light color the illumination apparatus is to maintain. Furthermore, when the detected temperature rises from 40° C. to 70° C., the wavelength of the light emitted by the red light LED increases by 4 nm; therefore, the compensatory parameter can be estimated in advance and recorded in the look-up table. The optical detector may be a silicon photodiode or a CdS photoresistor, and the control module may comprise a signal processing unit, a PWM signal generator and at least one transistor electrically connected to the light emitting module. The signal processing unit performs a computation based on the detected data 15 and the temperature 16, and uses the computed result to control the PWM signal generated by the PWM signal generator, thereby enabling the light emitting module to emit the desired light.
Referring to the block diagram of the preferred embodiment of the present invention as shown in FIG. 2, the illumination apparatus 2 comprises a plurality of red light LEDs 201, green light LEDs 202 and blue light LEDs 203, a storage unit 21, an optical detecting module 241 for detecting the red light, an optical detecting module 242 for detecting the green light, an optical detecting module 243 for detecting the blue light, a temperature calculating module 25 and a plurality of transistors 261, 262 and 263. The storage unit 21 stores a look-up table 211 which records the corresponding relation among temperature, light wavelength, optical radiation and driving signal. The gate of each of the transistors 261, 262 and 263 is electrically connected to the PWM signal generator 23, its drain is electrically connected to the negative end of the red light LEDs, the green light LEDs and the blue light LEDs respectively, and its source is grounded. The gates of the transistors 261, 262 and 263 respectively receive the PWM signals generated by the PWM signal generator 23. Each PWM signal includes a high voltage part and a low voltage part: when the high voltage is received, the transistor enters the open state and forms a pass circuit, enabling electric current to flow so that the LED emits light; when the low voltage is received, the transistor enters the closed state and forms a break circuit, and the LED stops emitting light. Therefore, the optical radiation of the LED can be adjusted by changing the width of the high voltage part of the PWM signal, or by changing the proportion of the high voltage part to the low voltage part.
The optical detecting modules 241, 242 and 243 each comprise a light filter and a silicon photodiode. The light filters of the optical detecting modules 241, 242 and 243 pass the red light (wavelength 620 nm–660 nm), the green light (wavelength 510 nm–550 nm) and the blue light (wavelength 440 nm–470 nm) respectively, so that the silicon photodiodes can detect the optical radiation of the red light, the green light and the blue light separately, and transmit the detected data to the signal processing unit 22. The signal processing unit 22 controls the PWM signal generator 23 to adjust its output PWM signal based on the detected data. For example, if the red light LEDs 201 age and the silicon photodiode detects a lower red light optical radiation, the signal processing unit 22 judges that the red light optical radiation is lower than the predetermined value and controls the PWM signal generator 23 to increase the high voltage part of the corresponding transistor's PWM signal, driving the red light LEDs 201 to emit stronger light until the silicon photodiode detects that the red light optical radiation has been restored to its predetermined value. Thus, the aforementioned feedback mechanism controls the LEDs and enables the illumination apparatus 2 to consistently emit a predetermined light. The temperature calculating module 25 is electrically connected to the signal processing unit 22 and transmits the calculated temperature 251 to the signal processing unit 22. The signal processing unit 22 accesses the look-up table stored in the storage unit 21 and uses its recorded data to control the PWM signal generator to adjust its output PWM signal. FIG. 3 is a flow chart showing the steps of the optical radiation control method of the invention; this method corresponds to the illumination apparatus 2 as shown in FIG. 2.
The method includes the following steps:
Step 30: Use the optical detecting modules 241, 242 and 243 to detect the optical radiation of the red light, the green light and the blue light;
Step 31: Use the temperature calculating module 25 to calculate the temperature of the illumination apparatus 2;
Step 32: Use the signal processing unit 22 to read the look-up table 211 from the storage unit 21;
Step 33: Use the signal processing unit 22 to receive the detected red light optical radiation, the green light optical radiation, the blue light optical radiation and the detected temperature data 251, and compute the corresponding adjustment magnitudes of the red light LEDs 201, the green light LEDs 202 and the blue light LEDs 203 according to the look-up table 211; and
Step 34: The PWM signal generator 23 generates the PWM signals corresponding to the adjustment magnitudes for adjusting the LED optical radiation.
While the invention has been described by way of example and in terms of preferred embodiments, it is to be understood that the invention is not limited thereto. To the contrary, it is intended to cover various modifications and similar arrangements and procedures, and the scope of the appended claims therefore should be accorded the broadest interpretation so as to encompass all such modifications and similar arrangements and procedures.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a block diagram showing the illumination apparatus of the present invention;
FIG. 2 is a block diagram of a preferred embodiment of the illumination apparatus of the present invention;
FIG. 3 is a flow chart of the steps of the optical radiation control method of the illumination apparatus of the present invention; and
FIG. 4 is a chart recording the temperature changes corresponding to the LED emitted light wavelength and power.
FIELD OF THE INVENTION

This invention relates to a device for providing a selectively variable proportion of an electrical signal and relates especially to a device that can be used instead of a conventional resistive track potentiometer.

BACKGROUND TO THE INVENTION

Resistive track potentiometers are well known and are used in a multitude of different applications. However, a disadvantage of such potentiometers is that they require mechanical movement of a slider along a track and hence they are susceptible to mechanical wear. Moreover, they are not readily disposed to be driven automatically or from a remote location.

SUMMARY OF THE INVENTION

It is an object of the present invention to provide a device for providing a selectively variable proportion of an electrical signal, which can be used instead of a conventional resistive track potentiometer. It is another object of the invention to provide a device for providing a selectively variable proportion of an electrical signal which can be driven automatically or from a remote location and which need not include moving parts. These objects and others are accomplished by means of a device in accordance with the present invention which comprises first and second switching means, each for being switched between relatively high and low conductivity states, said switching means being connected together at an output of the device and being connected in series between inputs for receiving said electrical signal, and a control means including means arranged to operate the switching means cyclically between said high and low states such that when one of said switching means is in its high conductivity state the other said switching means is in its low conductivity state, and means for controlling selectively the relative durations that the devices remain in said high and low conductivity states.
In use of a device in accordance with the invention, a low pass filter is connected to the output to provide an output filtered signal which is of the same general waveform as the input signal but of an amplitude determined by the relative durations of said conductivity states. Preferably, the switching means comprise CMOS transmission gates which are controlled by a control signal of rectangular waveform produced by the control means, the control signal having a selectively variable mark to space ratio which determines the relative durations of the different conductivity states of the gates; this preferred arrangement can be manufactured on an integrated circuit chip, the chip being controlled by, for example, touch plate switches which permit the mark to space ratio of the control signal to be changed selectively. The chip can be used instead of a conventional resistive track potentiometer, and thus the present invention provides a device which performs the function of a conventional potentiometer but with no moving parts. The device can be arranged to have a linear or a non-linear, for example logarithmic, characteristic.

BRIEF DESCRIPTION OF THE DRAWINGS

In order that the invention may be more fully understood and readily carried into effect, an embodiment thereof will now be described by way of illustrative example with reference to the accompanying drawings in which:

FIG. 1 is a schematic circuit diagram of a device in accordance with the present invention;

FIG. 2 is a graph of one cycle of a control waveform A for operating the device of FIG. 1;

FIGS. 3 and 4 are graphs of various waveforms developed in use of the device of FIG. 1;

FIG. 5 illustrates a modification of the device of FIG. 1; and

FIG. 6 illustrates a practical form of device in accordance with the present invention, for use in a stereo f.m. tuner.

DESCRIPTION OF PREFERRED EMBODIMENT

Referring firstly to FIG.
1, the device comprises two semiconductor switching means in the form of CMOS transmission gates 1 and 2 connected in series and connected through an inverter 3 to receive a control signal from a signal generator (not shown) which controls the gates in such a manner that when one of the transmission gates is switched on, the other is switched off. The transmission gate 1 comprises an N-channel MOS transistor TR1 and a P-channel transistor TR2, the gate 2 comprising an N-channel transistor TR3 and a P-channel transistor TR4. The gates of transistors TR1 and TR4 are connected directly to an input terminal 4 which receives the control signal. The gates of transistors TR2, TR3 receive an inversion of the control signal produced by the inverter 3. Input voltages V.sub.1 and V.sub.2 are applied to the terminals 5 and 6 respectively, and an output voltage V.sub.o is developed at a terminal 7 connected to the series connection of the gates 1 and 2.

The control signal has a rectangular waveform, one cycle of which is shown in FIG. 2, and is arranged to control the relative durations that the gates 1 and 2 are switched on. Thus, the gate 1 is switched on during the period t.sub.1 and is switched off for the remaining period (T-t.sub.1) of the cycle. Conversely, the gate 2 is switched off for the period t.sub.1 and on for the period (T-t.sub.1). The control signal has a constant frequency but its mark to space ratio is selectively variable so as to vary the duration of t.sub.1 between 0 and T. Thus, the output voltage V.sub.o for the cycle is V.sub.1 during the period t.sub.1 and V.sub.2 during the period (T-t.sub.1). Integration of the output signal V.sub.o is performed by a low pass filter (not shown) connected to the output 7, and hence the output V.sub.OF of the filter is given by

V.sub.OF =[t.sub.1 V.sub.1 +(T-t.sub.1)V.sub.2 ]/T

This voltage is of the same form as the output from the slider of a conventional resistive track potentiometer having voltages V.sub.1 and V.sub.2 applied to opposite ends of its resistive track, the ratio t.sub.1 /T corresponding to the position of the slider along the track. Thus, by varying the mark to space ratio of the control signal, the magnitude of the voltage V.sub.o can be controlled selectively, and when t.sub.1 is varied from 0 to T, the magnitude of V.sub.OF varies between V.sub.2 and V.sub.1.

Operating waveforms of the device are shown in FIGS. 3 and 4, for an alternating input waveform V.sub.in =(V.sub.1 -V.sub.2) and a control waveform A, the device providing an output waveform V.sub.o and a filtered output waveform V.sub.OF. In FIG. 3, the control signal A has equal duration marks and spaces, and the output signal V.sub.OF is of half the amplitude of V.sub.in. However, in FIG. 4, the control signal consists substantially only of marks, and as a result, the amplitude of V.sub.OF is much less than that of V.sub.in.

The device described with reference to FIG. 1 corresponds to a linear potentiometer and the output voltage V.sub.OF is a linear function of the mark to space ratio of the control signal A. Such a linear device has particular application in audio signal amplifiers for controlling bass and treble filters. The device can also be used in multi-channel amplifiers for controlling balance of the channels. For volume control in an amplifier, it is however desirable to use a device having a logarithmic response, and such a device will now be described with reference to FIG. 5.

The device of FIG. 5 includes the transmission gates 1 and 2 which are shown schematically and are arranged to be switched on and off in response to the control signal A as previously described. The input voltage V.sub.in is applied through a resistor R.sub.1 to the gate 1, and the output V.sub.o is derived through a resistor R.sub.2 and a capacitor C.sub.1. A low pass filter comprising a resistor R.sub.3 and a capacitor C.sub.2 provides the filtered output V.sub.OF. In operation, when the gate 1 is switched on, the capacitor C.sub.1 is charged in response to the input signal V.sub.in through the resistors R.sub.1 and R.sub.2, but when the gate 2 is switched on, the capacitor is discharged through the resistor R.sub.2 alone. This differential charging and discharging of the capacitor C.sub.1 introduces an exponential term into the voltage V.sub.o, thus giving the device a non-linear control characteristic which, whilst not as truly logarithmic as high quality conventional logarithmic resistive track potentiometers, is sufficiently accurate for volume control purposes in audio amplifiers.

To prevent the control signal from introducing distortion into the output signal V.sub.OF, the frequency of the control signal is selected to be much greater than the maximum frequency likely to occur in the input signal V.sub.in ; in fact, it will be appreciated by those skilled in the art that the frequency of the control signal A must be at least twice the maximum frequency of V.sub.in to prevent any loss of information in the output signal V.sub.OF. When devices of the invention are used in a stereo f.m. tuner amplifier, a multiple of the stereo multiplex frequency (usually 38 KHz) can be used as the frequency of the control signal, so as to avoid beats between the multiplex frequency and the control signal.

A practical example of the device of the invention for use as a volume control in an f.m. stereo tuner-amplifier will now be described with reference to FIG. 6. The device includes first and second pairs of series connected transmission gates 1R, 2R, 1L, 2L which are used respectively to control the volume of the right and left channels in the amplifier in the manner previously described. The gates for the left hand channel are shown schematically, but correspond to the gates for the right hand channel which are shown in detail, and the pairs of gates are connected to be driven by a common control signal A produced from a bistable 8.
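The averaging performed by the low pass filter can be illustrated numerically; the function name and sample voltage values are ours, with `duty` standing for the ratio t.sub.1 /T from the description above.

```python
def filtered_output(v1, v2, duty):
    """Average of the two-level PWM waveform over one cycle:
    V_OF = (t1*V1 + (T - t1)*V2) / T, with duty = t1/T in [0, 1]."""
    return duty * v1 + (1.0 - duty) * v2

# duty = 0.5 (equal marks and spaces, as in FIG. 3): the filtered output
# sits midway between V1 and V2.
print(filtered_output(1.0, 0.0, 0.5))   # 0.5

# Sweeping duty from 0 to 1 moves V_OF from V2 to V1, exactly like
# moving the slider of a resistive track potentiometer end to end.
print(filtered_output(3.0, 1.0, 0.0))   # 1.0
print(filtered_output(3.0, 1.0, 1.0))   # 3.0
```

This is the same linear interpolation a mechanical potentiometer performs, which is why the patent treats the mark to space ratio as the electronic equivalent of the slider position.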
The bistable 8 is switched between its two states by means of a counter arrangement driven by clock pulses from a voltage controlled 2.4 MHz oscillator 9. The counter arrangement includes a first five bit counter 10 that cyclically counts the clock pulses from the oscillator 9 to define the periodicity T of the control signal A, and to this end, an AND gate 11 having inputs connected to the five bit stages of the counter 10 is arranged to provide an output on a line 12 to reset the bistable 8 each time the counter 10 is filled completely with clock pulses. A second five bit counter 13 is arranged to store a selectively variable count which determines the mark to space ratio of the control signal. The count held by the counter 13 is compared with the continuously varying count in the first counter 10 by a comparator 14. When the count in the counter 10 is equal to that held by the counter 13, the comparator 14 provides an output on a line 15 which sets the bistable 8, and thus the control signal A has a mark to space ratio determined by the count held in the counter 13. The count in the counter 13 can be varied by operation of a switch 16 which gates clock pulses from a 5 Hz clock 17 into the counter. The counter 13 is arranged to be incremented or decremented by an up/down control 18. Thus the five bit counter 13 permits the volume to be increased or decreased in 31 incremental steps. The frequency of the voltage controlled oscillator 9 is held constant by means of a phase locked loop, including a phase sensitive detector 19 and an attendant smoothing filter 20, that compares the phase of a control signal derived from the most significant bit of the counter 10 on a line 21 with the phase of a 76 KHz stereo multiplex oscillator signal fed on a line 22 from the stereo decoder of the tuner, thereby avoiding the aforementioned beats. Each pair of gates 1, 2 is provided with a respective low pass filter and capacitive network of the kind shown in FIG.
5, to provide an output V.sub.OF for each channel, with a logarithmic response. Clearly, however, the arrangement can be used for bass, treble and balance controls if non-logarithmic response filters are used. The arrangement shown in FIG. 6 is ideally suited to manufacture as an integrated circuit on a single chip, with the switch 16 and the up/down control 18 being operated by means of touch plate switches or by a remote control device utilising an infra-red or ultrasonic transmitter, as is now used in domestic television receivers. Thus, the arrangement described with reference to FIG. 6 can perform the function of a conventional resistive track potentiometer and has the advantage of having no moving parts.
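A behavioural sketch of the counter-and-comparator scheme in the description above, assuming a five bit counter that wraps every 32 clock ticks; the function name and this Python model are illustrative, not the patented circuit.

```python
def pwm_cycle(compare_count, bits=5):
    """One period of the control signal A: the bistable is reset when the
    free-running counter wraps (counter full), and set again when the count
    equals the value held in the compare counter."""
    period = 1 << bits          # 32 clock ticks per cycle for a 5-bit counter
    level, waveform = 0, []     # bistable starts low, just after a reset
    for count in range(period):
        if count == compare_count:
            level = 1           # comparator output sets the bistable
        waveform.append(level)
        if count == period - 1:
            level = 0           # full-counter AND gate resets the bistable
    return waveform

# A larger compare value delays the set point, shortening the mark:
wave = pwm_cycle(8)
print(sum(wave), "of", len(wave), "ticks high")  # 24 of 32 ticks high
```

Stepping the compare value up or down by one, as the 5 Hz clock and up/down control do, changes the mark length by one tick per step, giving the 31 discrete volume increments mentioned in the text.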
The Great Gatsby Essay The Great Gatsby by F. Scott Fitzgerald depicts the reality of the 1920s in the United States. This was a time period in which people formed an obsession with obtaining easy money, in addition to a widespread scarcity of social values…. Documenting a time period through literature: Fitzgerald in the Roaring Twenties The “Roaring Twenties” had a significant impact on works of literature written during that time period. Between major cultural changes and economic prosperity in major cities, societal customs changed drastically and had an effect… Marie Claire Van Hout and Tim Bingham co-wrote an intriguing case study titled “Responsible vendors, intelligent consumers: Silk Road, the online revolution in drug trading”. This study brought to light the inner workings of a brand new and revolutionary method of drug distribution…. Thirteen Days to Success October of 1962 could almost be categorized as a short thriller full of fear and suspense entitled The Cuban Missile Crisis. Had this been an actual film, it would have been directed by President John F. Kennedy, who would control what…
https://eduzaurus.com/free-essay-samples/contemporary-history/page/6/
Effective industry collaboration will be critical in a rapidly evolving landscape where risk is increasingly interconnected Dubai, United Arab Emirates, 10 May 2018 – Against the backdrop of a rapidly evolving business landscape, speakers at the 10th GPCA Supply Chain Conference emphasized the importance of stakeholder collaboration in order to address the risks that could have implications for business continuity. The two-day conference saw speakers identify collaboration as a key driver of growth and business transformation, enabling cost efficiencies and reduced disruption. In his keynote address, Mohammed Al Muallem, CEO and Managing Director, DP WORLD UAE Region and CEO, JAFZA, highlighted the drivers for collaboration to meet future challenges and growing demand from key export markets. He said: “The largest demand growth will come from Asia, which will account for 60% of the middle class globally and 40% of total chemicals demand. With global supply chains becoming increasingly complex and diverse, the need of the hour is increased value and efficiencies underpinned by effective cross stakeholder collaboration.” Dr. Robert de Souza, Executive Director and CEO, The Logistics Institute – Asia Pacific, told the audience in a second keynote address that risk management will be key to predicting future challenges and ensuring the robustness and resilience of the region’s supply chain industry. During the event, technological advancements such as autonomous vehicles, 3D printing, IoT and advanced robotics, along with identifying and retaining the talent needed to implement digital and analytics capabilities, were identified as pressure points within the global supply chain industry.
The importance of talent was further emphasized, with the 7th edition of GPCA’s Leaders of Tomorrow program powered by Sabic taking place alongside the conference and providing students from across the GCC with a valuable opportunity to network with senior industry leaders and learn about the career opportunities within the chemical industry in the GCC. As part of an organized site visit, students from across the region enjoyed a tour around the logistics site of RSE-TALKE in Jebel Ali Free Zone and Dubai World Central.
https://www.gpca.org.ae/2018/05/13/partnerships-will-be-key-in-todays-complex-market-conditions-say-speakers-at-the-10th-gpca-supply-chain-conference/
Oliver is the co-founder and Director of OST Energy, an engineering consultancy specializing in renewable energy. Since its inception, OST Energy has advised on over 5 GW of solar, wind and biomass renewable energy projects in over 25 countries across the world. Oliver has personally led over 200 renewable energy project reviews encompassing technical, commercial, contractual and regulatory issues and is dedicated to delivering quality services to the varying needs of utilities, technology providers, project developers, project owners, private equity and lending banks at different stages of project execution. Prior to founding OST, Oliver worked on a comprehensive range of power engineering projects worldwide on various generation plants, including landfill gas, waste incineration, onshore and offshore wind, solar thermal, marine renewable technologies, combined cycle gas turbines and coal-fired plants, along with CDM projects for various technologies. Oliver is a skilled project manager and project director, with sound experience leading multi-disciplinary teams of analysts, industry experts and engineers on sustainable energy projects worldwide. He holds a Master of Science in Renewable Energy Systems Technology and a degree in Electrical Engineering.
http://www.energyreservoirs.com/oliver-soper/
Our energy landscape is changing. Local communities are increasingly taking active roles and emerging as new actors in the energy system. For example, some local energy initiatives collectively own solar panels, wind turbines and energy storage systems, while others build renewable energy configurations together with other energy system actors. Community energy and energy storage may enable effective energy system integration and maximize the benefits of local generation. This may lead to more flexible and resilient energy supply systems and can therefore play an important role in achieving renewable energy and climate policy objectives. This international conference is being organized as part of the Community Responsible Innovation in Sustainable Energy (CO-RISE) project and is funded through the socially responsible innovation programme of The Netherlands Organization for Scientific Research (NWO-MVI 2016 [313-99-304]). The aim of this conference is to explore new and innovative socio-technological energy configurations at the local and regional level, including energy storage. This conference brings together researchers and practitioners interested in the societal dimensions of technology and the technological aspects of social innovation. The focus is specifically on the field of community renewable energy and energy storage. Some key themes will be:
http://www.sense.nl/news_events/eventsarchive/10898209/International-Conference-on-New-Pathways-for-Community-Energy-and-Storage
- How Does Inflation Affect The Exchange Rate Between Two Nations?
- Inflation By City
- The Economics Daily
- How Is Inflation Measured?

The list of unavailable items in January 2021, and the changes to the list from previous months, are shown in Table 58 in the Consumer price inflation dataset. Inflation is expressed as a percentage increase or decrease in prices over time. For example, if the inflation rate for the cost of a litre of petrol is 2% a year, motorists need to spend 2% more at the pump than 12 months earlier. The US Inflation Rate is the percentage by which a chosen basket of goods and services purchased in the US increases in price over a year. Inflation is one of the metrics used by the US Federal Reserve to gauge the health of the economy. Since 2012, the Federal Reserve has targeted a 2% inflation rate for the US economy and may make changes to monetary policy if inflation is not within that range. A notable time for inflation was the early 1980s, during the recession. The government’s approach to measuring inflation is straightforward enough. The Bureau of Labor Statistics tracks the price of a basket of goods and services that is intended to represent average American spending patterns. The inflation rate is the monthly percentage change in that price. The monthly and annual inflation rates are determined by computing the rate of change in the Consumer Price Index over a trailing 12-month period. Following the end of the transition period, the ONS will cease to provide a monthly submission of consumer price inflation data to Eurostat. The weights and sample of items used to compile the consumer price indices are updated at the beginning of each year. For CPIH and CPI, the 2021 weights would normally be based on spending patterns for 2019 from the national accounts. Given the effect of the coronavirus on spending during 2020 and the problems with collecting prices for new items potentially under lockdown conditions, we have changed the procedures for 2021.
The Retail Prices Index does not meet the required standard for designation as a National Statistic. In recognition that it continues to be widely used in contracts, we continue to publish the RPI, its subcomponents and the RPI excluding mortgage interest payments.

People Are Buying More Of Those Goods Whose Prices Are Rising The Fastest

You’ll find yourself making tough choices about what you can afford as inflation eats into your purchasing power. In other words, investors should count on inflation and plan accordingly. As we mentioned, future inflation calculators generally base their projections on recent averages. In the U.S., where inflation volatility hasn’t been a problem lately, it’s pretty safe to assume that future inflation will hover around 2.50%. A future inflation calculator lets you see how many future dollars will equal a certain number of today’s dollars. Next, make sure the source of funds you selected has sufficient funds to cover the total price. If you need to add funds to cover the purchase price, you have to do so before the issue date of the security. If you buy a TIPS directly from us and pay by automatic withdrawal, we withdraw the accrued interest and price. A TIPS accrues interest from the 15th of the month and is issued on the last business day of the month. For an original issue TIPS, accrued interest is payable by the investor from the 15th until the issue date. The principal of Treasury Inflation-Protected Securities, also called TIPS, is adjusted according to the Consumer Price Index. With a rise in the index, or inflation, the principal increases. With a fall in the index, or deflation, the principal decreases. For the CPIH, the indicative estimate shows that the 12-month inflation rate would have been 0.1 percentage points lower. The Coronavirus and the effects on UK prices article describes the approach we have taken for imputing price movements for items that are currently unavailable for consumers to purchase.
It is necessary to use the CPI price movement for both, so that both CPIH and CPI are constructed from the same set of item indices. Figure 4 shows the contribution of owner occupiers’ housing costs and Council Tax to the Consumer Prices Index including owner occupiers’ housing costs 12-month inflation rate in the context of wider housing-related costs. Prices usually fall between these two months, but price movements across 2020 have been unusual compared with previous years and appear to have been affected by the impact of the coronavirus (COVID-19). The commonly quoted inflation rate of, say, 3% is actually the change in the Consumer Price Index from a year earlier. By looking at the change in the index we can see that what cost an average of 9.9 cents in 1913 would cost about $1.82 in 2003 and $2.30 in August of 2012. The two tables below show fixed rates and inflation rates, respectively. All inflation rates are calculated using the Australia Consumer Price Index series. The Consumer Price Index for Australia is 117.2 for the month of December 2020. The inflation rate year over year is 0.9% (compared to 0.7% for the previous quarter). They had surged during the pandemic because of shortages of new cars and strong demand.

Combining The Two Rates

The U.S. inflation rate by year is the percentage of change in product and service prices from one year to the next, or year-over-year. Inflation is the increase in the prices of goods and services across an economy. When prices inflate, you need more money to buy the same things. The opposite of inflation is deflation, when prices become lower across a range of goods and services. House prices have been rising strongly, but these are not included in the consumer price index calculation, and won’t start causing steeper rent increases until sometime late this year. The February report was the fifth in a six-month series that will set the future inflation-adjusted variable rate for U.S.
At this point, five months in, inflation has increased 1.05%, which translates to an annualized variable rate of 2.10%, higher than the current rate of 1.68%. Because gasoline prices are continuing to rise in March, we should see that variable rate climb even higher. Our inflation calculator helps you understand how the purchasing power of a certain dollar amount will change over time. This means that $5 today won’t buy you the same amount of goods or services as it would in 10 years.

Series I Savings Bonds Rates & Terms: Calculating Interest Rates

The BLS inflation calculator quickly shows how inflation eats away at your purchasing power. For example, a 2.5% inflation rate means that something that cost $100 last year now costs $102.50. In that situation, a hard-earned 3.5% raise would only be worth 1.0% in additional buying power. The CPIH extends the Consumer Prices Index to include a measure of the costs associated with owning, maintaining and living in one’s own home, known as owner occupiers’ housing costs, along with Council Tax. Both of these are significant expenses for many households and are not included in the CPI. There was also a large upward contribution (of 0.04 percentage points) from transport.

Consumer Price Inflation Weights And Prices: 2021

Longer-term interest rates have been rising recently, but this report shouldn’t push them higher. Holders of Series I Savings Bonds are also interested in non-seasonally adjusted inflation, which is used to adjust principal balances on TIPS and set future interest rates for I Bonds. For February, the BLS set the inflation index at 263.014, an increase of 0.55% over the January number. Shelter costs increased 0.2% in the month and are up 1.5% for the year. But keep in mind that eviction moratoriums are holding down rent costs. (For prior years, see historical inflation rates.) If you would like to calculate accumulated rates between two different dates, use the US Inflation Calculator.
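The purchasing-power arithmetic quoted above (9.9 cents in 1913 becoming roughly $1.82 by 2003; $100 becoming $102.50 at 2.5%) is index-ratio scaling. A sketch, with the function name ours and the CPI values taken as the approximate annual averages implied by the text:

```python
def adjust_for_inflation(amount, cpi_old, cpi_new):
    """Scale a dollar amount by the ratio of two CPI index values."""
    return amount * cpi_new / cpi_old

# 9.9 cents in 1913 (CPI ~9.9) at a 2003 CPI of ~184 comes out near the
# "about $1.82" figure quoted earlier:
print(round(adjust_for_inflation(0.099, 9.9, 184.0), 2))  # 1.84

# A flat 2.5% year is the same operation with index 100 -> 102.5:
print(round(adjust_for_inflation(100.0, 100.0, 102.5), 2))  # 102.5
```

This is all an inflation calculator does: divide the ending index by the starting index and multiply.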
You may also be interested in a table of Monthly Inflation Rate data, which shows how much prices have increased over the previous month. In another example we see August 2003 and September, with the Government saying inflation rates were 2.2% and 2.3% respectively. This would lead us to believe that inflation rose 0.1% during that period. In actuality, however, it rose from 2.16% to 2.32%, a 0.16% increase, substantially more than 0.1%!

How Does Inflation Affect The Exchange Rate Between Two Nations?

For a reopened TIPS, accrued interest is payable from the dated date on the announcement until the issue date of the reopening. Capital outflow is the movement of assets out of a country, often because of political or economic instability. Manipulation is the artificial inflating or deflating of the price of a security or otherwise influencing the market’s behavior for personal gain. A weak currency is one whose value has depreciated significantly over time against other currencies. The most powerful determiner of the value and exchange rate of a nation’s currency is the perceived desirability of that currency. The most famous example came when, after World War One, Germany was left with high debts. The government printed more of its own currency to pay them off.

- For example, anyone with a fixed-rate mortgage benefits from inflation, as it effectively reduces their debt.
- The list of unavailable items in January 2021, and the changes to the list from previous months, are shown in Table 58 in the Consumer price inflation dataset.
- To use it, just enter any two dates from 1913 to 2021, an amount, and then click ‘Calculate’.
- Average annual inflation in the U.S. between 1913 and 2019 was 3.10%.
- Once again this finer view gives us a better picture that inflation might be rising more than it appeared to be.
- Gasoline prices jumped 6.6% in February and drove overall prices up 0.4% for the month.
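The August and September example above can be checked directly; differencing the rounded, published rates understates the move by more than half a tenth of a point.

```python
reported = (2.2, 2.3)    # rates as published, rounded to one decimal place
actual = (2.16, 2.32)    # unrounded rates computed from the underlying index

naive_change = round(reported[1] - reported[0], 2)
true_change = round(actual[1] - actual[0], 2)
print(naive_change, true_change)  # 0.1 0.16 -- a 60% larger move
```

The lesson generalizes: compute changes from the index itself (or from unrounded rates), never from figures that have already been rounded for publication.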
Sometimes you can even adjust the inflation rate to see what would happen to your purchasing power if there were extreme inflation or deflation. Inflation in the United States was subdued in October following four straight months of gains, according to a government report published Thursday, Nov. 12. Depending on the data available, results can be obtained by using the compound interest formula or the Consumer Price Index formula.

Inflation By City

Other people who feel the negative effects of inflation are those on a fixed income, or those who hold fixed-income investments while inflation takes its toll on their purchasing power. This US Inflation Calculator measures the buying power of the dollar over time. To use it, just enter any two dates from 1913 to 2021, an amount, and then click ‘Calculate’. This article details price movements for petroleum products in the context of the coronavirus disease 2019 (COVID-19) pandemic. In its August meeting, the Federal Reserve announced it would allow inflation to rise above 2% if that would ensure maximum employment. With the ability to work from home or remotely from anywhere, the pandemic has given people a reason to leave cities and move out to the suburbs. Electricity and natural gas also increased, as did food at home and food away from home. Escalation agreements often use the CPI to adjust payments for changes in prices. The most frequently used escalation applications are in private sector collective bargaining agreements, rental contracts, insurance policies with automatic inflation protection, and alimony and child support payments. While this may seem like a great thing for shoppers, deflation often signals an impending recession. With a recession comes declining wages, job losses, and big hits to most investment portfolios. Businesses hawk ever-lower prices in desperate attempts to get consumers to buy their products and services.
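The compound interest formula mentioned above can be sketched as follows; the constant-rate assumption and the function names are ours.

```python
def future_price(amount, annual_rate, years):
    """Price of today's `amount` after `years` of constant inflation:
    amount * (1 + r)^n."""
    return amount * (1.0 + annual_rate) ** years

def purchasing_power(amount, annual_rate, years):
    """What today's `amount` effectively buys after `years` of inflation:
    amount / (1 + r)^n."""
    return amount / (1.0 + annual_rate) ** years

# At the ~2.5% assumption quoted earlier, $5 loses about a fifth of its
# buying power over 10 years:
print(round(purchasing_power(5.0, 0.025, 10), 2))  # 3.91
```

Over one year at 2.5%, `future_price` reproduces the $100 to $102.50 example given earlier; over longer horizons the compounding, not the annual rate itself, does most of the damage.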
Besides the core rate in the CPI report, the Federal Reserve looks at the Personal Consumption Expenditures report because it’s considered more reflective of true underlying inflation trends. The PCE core inflation rate was 1.5%, year-over-year, in January, the latest month available. The effect comprised small movements from a variety of product groups, with the largest upward contributions coming from vegetables (0.02 percentage points) and oils and fats (0.02 percentage points). Overall prices for vegetables, in particular for premium potato crisps and cauliflowers, rose between December 2020 and January 2021, while they were unchanged between December 2019 and January 2020. With oils and fats, prices overall fell by 8.9% between December 2019 and January 2020 but fell by only 0.2% between December 2020 and January 2021. Despite clothing prices rising slightly in December 2020, prices fell by 4.6% between December 2020 and January 2021 as a result of increased discounting. Inflation is caused by other factors, many of them temporary and limited in their scope. An automaker may be forced to pay more for parts and will pass that increase along to consumers. If you are a retiree living on your savings, you can’t keep up the same standard of living if inflation cuts into your purchasing power with every passing year. Eventually this results in a monetary “hangover” as the effects of a buying binge become apparent. Inflation derivatives are used by investors to hedge against the risk of rising inflation levels eroding the real value of their portfolio. The market for Treasury bonds and Treasury Inflation-Protected Securities forecasts future CPI inflation. Investing a portion of savings in precious metals, such as gold or silver, is another way to outrun inflation. Today, there are also many precious metals ETFs available for investors. An asset allocation that adds a little bit of gold to a stock portfolio can also produce more consistent returns.
Investors who want to avoid the volatility associated with individual stocks might opt for mutual funds. Suppose that you are steadily saving money for a specific goal, such as a college fund for your children or a down payment on a home. Your money’s purchasing power may decline while you’re saving it. Inflation rates went as high as 14.93%, causing the Federal Reserve, led by Paul Volcker, to take dramatic actions. The cost of shelter is running lower than normal right now because rent increases have been slowed by the pandemic. Rent has been rising at only a 1.5% annual rate, after coming in above 3% for each of the past five years. The fact that Social Security benefits automatically adjust for inflation is part of what makes them such a powerful resource for retirees. Now that you know about inflation, you can start working on strategies for beating it.
https://topforexnews.org/news/current-consumer-price-index/
He was brilliant as an individual performer, but he struggled to find his footing as the manager. If we were to ask about the most commonly heard phrases about first-time managers, the answers would most likely be on similar lines. Doesn’t it make you wonder, though, if there’s a gap between what’s expected of newly minted managers / leaders and the training they go through? After all, the two worlds are as different as chalk and cheese. And regardless of how well they might be trained, first-time managers face a host of challenges while breaking into their newly acquired leadership shoes.

One of the biggest reasons for this gap is that most managers in the workforce are promoted because they are good at what they did, and not necessarily good at making the people around them better. And a managerial position, like any other leadership role, requires a delicate balance of both – individual competence and enabling others to achieve excellence. Let’s now take a look at a host of challenges a newly promoted manager faces, and ways to ease them into their new role.

When the promotion happens from within the team, there’s a sudden shift in the dynamics of the team. This may lead to some tension and resentment. In such a case, it is important for the newly appointed manager to remember that their success depends on the success of the team. Instead of being unreachable, work to build confidence among your team members. Create a healthy boundary even as you continue to work as an integral part of the team and help your team accomplish the required tasks.

As a manager, you play a dual role. You have to accomplish your own tasks, while also helping your team members in their tasks. It is important to prioritise your team because if your team wins, you win. However, that doesn’t mean you can slack off in your individual role. The key is in planning your day out in advance. Make a robust plan that allows you to balance your time and energy well.
As a manager, you are responsible for the productivity of your team. A challenge of this nature requires you to understand the psychology of each individual team member. Some people lead with the stick, while others do so with the carrot. Have one-on-one meetings with each team member to understand their needs better, and act accordingly. It is equally important to maintain transparency within the team. If any concerns arise, discuss them openly and proactively.

As their manager, it is important for you to maintain efficient channels of communication with your team. Be direct, patient, and transparent with them. Work to create an open environment where ideas and questions are welcomed. Also, work on being a good listener.

As a first-time manager, delegation is yet another skill that needs to be mastered. It can be hard, initially, to let go of the control you have over all your tasks. This rings true especially in the case of driven professionals who got their promotion by excelling at all their tasks. As a leader, it is your responsibility to guide your team members and share responsibilities with them. This will not only help build better inter-personal relationships but add to your productivity. Effective delegation can help build trust and enable overall team growth. Give team members sufficient authority, responsibilities, and resources based on their skill levels and experience, while still keeping an eye out for any help or guidance they might need. This will help them feel confident and competent in their jobs.

A common managerial trap that most first-time managers give in to is the temptation to jump in and micro-manage every single team member’s actions. When faced with questions by your team members, encourage them to figure out the answers for themselves. Your job is to guide them, and help them through the process, even as they solve the riddle on their own.
Furthermore, make sure to give them clear goals and hold the bar for quality where it’s needed. But never do the work for them. This will allow you to build a team of high-performing ‘doers.’ Your role as the manager isn’t just to drive your team towards their goals, but rather to enable them to feel confident and safe in their individual roles. In this respect, effective listening is one skill that can help make a huge difference. Engage your team in heartfelt conversations, often. When you build a safe space for your team members to share their problems and ask questions, you allow for impactful growth. Giving a person your full attention will not only make them feel valued, but also allow you to help them in the best way possible. Remember, the strength of the wolf is the pack. A leader is only as effective as their team.

Doing what you say you will is an important leadership trait. Thus, it is important that you don’t over-commit and under-deliver. Only when you strive to hold yourself accountable to your words can you do the same for your team. Regardless of the nature of the obstacle, as a leader you need to keep trying. Your actions will set the path for the rest of your team. People are more easily inspired by their manager’s actions than they are by mere words.

One of the biggest misconceptions about any leadership position is perhaps the idea that leaders have to be infallible. Oftentimes, first-time managers can get overwhelmed with the new set of responsibilities bestowed upon them. However, due to the fear of being ridiculed or appearing incompetent, they shy away from talking about their problems. However, it is important for a manager to create and enable an open environment. A safe space for problems to be brought forward will help the team tackle them better. It will also help establish effective inter-personal bonds between the leader and their teams.
A vulnerable manager, open to ideas and discussion, can set the tone for positive conversations and constructive feed-forward. As a manager, it is your job to call out your team members on their mistakes. However, this doesn’t come naturally to everyone. First-time managers especially find themselves struggling with giving effective and constructive feedback. The key is to focus on the results, and avoid the blame game. Your feedback should aim at helping your team member do their jobs better. Asking if they require any help is often considered an important part of any constructive feedback. Maintaining an open line of communication will help you initiate difficult conversations with much more ease. Remember, at the end of the day, it is about helping people grow into a better version of themselves, while ensuring overall productivity.

This brings us to the end of this list. We would love to hear your thoughts on the topic. Feel free to share some of the biggest challenges you might have faced as a first-time manager, or leader, with us. If you want to deep dive into leadership and related topics, you can go and check out more blogs here.
https://focusu.com/blog/challenges-for-a-first-time-manager/
1. Add the following sections to the campaign brief you began for Assessment Task 1:
a. A section titled ‘Target Audience’, which will outline the target audience for this campaign, based on the original brief provided to you in Assessment Task 1.
b. A section titled ‘Promise’, which will outline the promise that was identified in the original brief and explain how it will be used in the campaign.
c. A section titled ‘Message’, which will outline the overall message to be conveyed to the target audience throughout the campaign.
d. A section titled ‘Resources’, which will identify and explain the following resources required, along with indicative costing for each.
i. Research resources – identify which research resources are required to understand market perceptions, brand awareness and reputation, consumer attitudes, etc. Include any focus groups, interviews or questionnaires as required.
ii. Creative resources – identify which creative resources are required, including artists or other creative service providers.
iii. Production resources – identify the production resources which will be required for this campaign, including photographers, editors, copywriters, advertising (digital, print and TV), etc.
e. A section titled ‘Budget’, which will:
i. Provide an explanation of the expenditure on each component of the campaign against the budget provided by the client
ii. Outline each of the resources required and allocate a percentage of the total budget for each resource
iii. Include a graphical representation of the overall budget for the campaign, versus the budget allocated in the original brief
iv. Provide reasoning as to why the total amount of money for the campaign is required.
Note: You may not be able to access costing for all resources, where the supplier doesn’t make pricing freely available. In this instance the learner can select a preferred supplier and allocate an indicative cost to that resource.
f. Any additions to the references section
2.
Once all the resource costings have been completed, prepare a short presentation to ‘the client’. As with the first assessment, this must be presented to your assessor, or to another person with your assessor present, or a recording taken of the presentation. This presentation will:
a. outline the requirements of the campaign, the resources required and the costs in relation to the budget allowed
b. explain why the resources required were selected and how they will benefit the campaign
c. explain how the budget will fulfil the requirements of the brief and what the benefits are

MyAssignmenthelp.com is one of the leading urgent assignment help providers in the USA. We have earned our reputation as the best assignment help in multiple countries, including the USA. We have designed unique fastest-delivery options, which assist us to deliver immediate assignment assistance. Our teams of highly skilled, qualified writers are capable of delivering fast assistance. We provide online assignment help in a wide range of subjects so that whenever students face the urgent need of assignment help, they can hire our assistance within a short period. Our writers make sure that all orders are submitted prior to the deadline. Using reliable plagiarism detection software (Turnitin.com), we provide only customized, 100 percent original papers. Feel free to contact our assignment writing services any time via phone, email or live chat. If you are unable to calculate word count online, ask our customer executives. Our writers can provide you professional writing assistance on any subject at any level. Our best price guarantee ensures that the features we offer cannot be matched by any of the competitors. Get all your documents checked for plagiarism or duplication with us.
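The percentage allocation that section (e) of the brief asks for is straightforward arithmetic: each resource's cost as a share of the total, compared against the client's budget. A minimal sketch, where every figure (all three resource costs and the client budget) is a hypothetical placeholder rather than a value from the brief:

```python
# Hypothetical resource costings for the Budget section of the campaign brief.
# All dollar figures below are placeholders, not real supplier quotes.
resources = {
    "Research (focus groups, surveys)": 8000,
    "Creative (artists, design services)": 12000,
    "Production (photography, editing, advertising)": 20000,
}
client_budget = 45000  # assumed budget from the original brief

total = sum(resources.values())
for name, cost in resources.items():
    share = 100 * cost / total
    print(f"{name}: ${cost} ({share:.1f}% of planned spend)")

print(f"Total: ${total} of ${client_budget} "
      f"({100 * total / client_budget:.1f}% of the client budget)")
```

The per-resource percentages feed directly into item (e)(ii), and the total-versus-client-budget line is the comparison a bar or pie chart for item (e)(iii) would visualize.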
https://myassignmenthelp.com/au/swinburne/bsbadv602-develop-an-advertising-campaign/versus-the-budget-allocated-in-the-original.html
Diseases in the garden

Diseases affecting our cultivated plants can have various origins. Generally speaking, a plant weakened by an environment that is not suitable for it, stressed by specific climatic conditions or in competition with other plants will be more sensitive to various diseases. These diseases are of cryptogamic origin most of the time, that is to say that micro-fungi develop on the plant. In this case, the spores can last from one year to the next, well hidden in the bark or in the soil, waiting for the right moment to reappear. There are also viral or bacterial diseases that affect our plants, and here again, it is often preferable to destroy the infected plants to eliminate the risk of propagation.

Problems with diseased plants in compost

As we have seen, diseases can lurk in the bark and buds for many months and reappear in good times, which is why the introduction of diseased plants into the compost is so decried. Spores, viruses or bacteria can then contaminate your compost which, once spread at the foot of your plants, will be more harmful than beneficial.

How to introduce diseased plants to the compost?

Some composters are composed of a large barrel and are equipped with a crank handle. If the balance between green and brown waste is well respected, the compost can be stirred every day to maintain an optimal level of humidity and aeration and to raise the temperature above 55°C. In this case you will not take the risk of adding diseased plants. In a classic heap, or in a silo, it will be necessary to add these problematic plants well in the center of the heap, and only if the heap is completely activated, i.e. if it has reached a perfect balance between wet and dry matter that allows good decomposition of the waste. This implies a high temperature (over 55°C) over several days in the center of the pile, which will kill most diseases.
So you will need to time it right, take the temperature and make sure it stays constant for a few days if you want to add diseased plants. Be careful: when grinding or pruning such plants, always remember to disinfect the tools after use, otherwise you risk spreading diseases to the following plants.

Composting is the process of turning organic materials such as food scraps and garden debris into nutrient-rich soil or compost. The word “compost” is derived from the Latin word “compositus,” which means “to bring together.” In this context, the word denotes how composting preserves food scraps by bringing them together with inert ingredients like straw and leaves. Composting also reduces waste by recycling kitchen and garden waste in a beneficial way.

Composting diseased plants is effective at reducing the spread of disease. When plants are composted properly, their spores — microscopic reproductive units — are killed. This prevents the spread of plant diseases, such as damping-off in cabbages or black rot in apples. As a result, health experts recommend that people compost diseased plants to prevent the spread of pathogens in landfills and biofuel processing plants. Furthermore, it is important to maintain healthy soils via composting so that no disease issues arise in the first place.

Composting diseased plants helps remove disease-causing organisms. When diseased plants are composted, their pathogens die along with their spores. This reduces the risk of further infections in the environment and at home gardens. It also reduces the need to treat contaminated sites with harmful pesticides or fungicides, since composting creates pathogen-free soils for gardening or biofuel production purposes. Composters also reduce greenhouse gas emissions by reducing the use of Styrofoam containers for growing produce during peak season.
Moreover, reducing food waste reduces contamination of our water systems with bacteria and pathogens that make people sick when consumed by fish or other organisms that live in water bodies. Composting diseased plants effectively reduces the amount of waste sent to landfills, due to reduced contamination risks from pathogenic organisms dying along with their spores. Many studies confirm that pathogenic organisms can multiply rapidly inside landfill substrates if enough suitable food trash remains undisturbed for long enough periods of time — especially if acidic conditions prevail within landfills’ environments. Consequently, by reducing contamination risks through composting diseased plants, land managers reduce both potential health hazards to humans and wildlife and contamination risks for surrounding ecosystems. Composting can be quite beneficial if done correctly, since it preserves soil health and decreases environmental contamination from pathogens. Anyone can start by creating a worm bin near their outdoor produce!
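The "hot enough for long enough" rule described earlier (a core temperature over 55°C sustained for several days before diseased material is safe to add) can be sketched as a simple check on daily temperature readings. The three-day minimum is an assumption chosen to illustrate "several days"; adjust it to your own composting guidance:

```python
# Sketch of the temperature check described above: the pile's core must stay
# above the threshold for several consecutive daily readings.
# threshold follows the article (55 °C); min_days=3 is an assumed value.

def safe_for_diseased_plants(daily_temps_c, threshold=55.0, min_days=3):
    """Return True if the core temperature exceeded `threshold` for at
    least `min_days` consecutive daily readings."""
    streak = 0
    for temp in daily_temps_c:
        streak = streak + 1 if temp > threshold else 0
        if streak >= min_days:
            return True
    return False

# Three consecutive readings above 55 °C qualify:
print(safe_for_diseased_plants([52, 57, 58, 59, 54]))  # → True
```

The consecutive-streak logic matters: readings that dip below the threshold reset the count, matching the article's advice to make sure the temperature "stays constant for a few days" before adding diseased plants.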
https://gardeninguru.com/can-you-compost-diseased-plants/
Educators have offered different explanations of how learning takes place. The lack of consensus on the ideal learning method has led to the emergence of many learning theories. Leonard (2002) defines learning theories as the “conceptual frameworks that describe how information is absorbed, processed, and retained during learning” (23). Some educators argue that learning is simply a change of behavior. Others feel that change of behavior is too simplistic to encompass all that learning entails. To them, learning is complex and thus employs higher mental faculties. Several factors influence how human beings acquire knowledge. The cognitive and emotional state of the learner plays a major role in determining people’s worldview (Lefrançois, 2012). Additionally, the physical environment has a profound effect on people’s absorption and retention of new knowledge. This paper will examine three learning theories: behaviorism, cognitive theory, and constructivism. It will further put the theories side by side and explain their differences.

Behaviorism theorists argue that learning takes place through conditioning (Skinner, 1976). To them, learning does not involve mental activities. To measure whether learning has taken place, what one needs to do is to establish whether there has been a change in behavior. Operant and classical conditioning underpin the theory. Pavlov, a psychologist, developed and popularized classical conditioning. By carrying out experiments with dogs, he established that human beings and animals respond in a specific way to certain stimuli. B. F. Skinner reinforced this school of thought through operant conditioning. He taught a pigeon to dance and concluded that a response follows every stimulus (Turner, 2007). If this response attracts a reward, it becomes more frequent. The implication for learning is that to encourage or discourage a behavior, teachers can use positive or negative reinforcement.
This is the basis on which educators advocate for the use of rewards and punishment in learning.

Cognitive theory explains that a child’s cognitive ability develops with age. Jean Piaget developed and popularized the theory. Piaget and Roberts (1976) aver that children “build cognitive structures and mental maps for understanding and responding to physical experiences within their environment” (56). Educators must therefore desist from “suffocating” learners with complex knowledge that is not in tandem with their cognitive levels. An infant’s capacities are limited to simple reflexes, but they develop and become more complex as the child grows. In the early life of a child, abstract learning is difficult to conceptualize. Teachers should focus on concrete things, especially those that the child can manipulate using motor skills. The complexity of learning materials should increase commensurate with the child’s cognitive structures. The child’s mental map accommodates new learning and creates equilibrium between what the child is capable of absorbing and the experiences emanating from the environment.

Constructivism theory is more philosophically grounded than the two theories discussed above. It argues that human beings understand the world through reflection on current and past experiences. Leonard (2002) avers that each human being generates “his own rules and mental models which he uses to make sense of his experiences” (34). In learning, therefore, people create space for new experiences by simply making a change to their mental models. People search for meaning through reflection. Learning is not a mere regurgitation of knowledge but a deep search for meaning in everything human beings do. Before teaching, a teacher should establish the perception of the learner in the particular discipline or subject. The teaching should not just focus on the whole but also on the parts (Lefrançois, 2012). The theory discounts the use of tests for assessment.
According to constructivists, learning is a product of an individual’s reflection as he or she interacts with new knowledge. As such, examinations cannot be effective in establishing whether learning has taken place.

Comparative Perspective

Educators consider behaviorism theory the traditional approach in teaching and learning. The teacher is the source of knowledge and the learner is the recipient. The teacher delivers the knowledge directly to the learner. If the information is complex, the teacher has the discretion to provide it to the learner through contingencies that incorporate rewards and reinforcement. Ritzer and Sage Publications (2005) aver that in behaviorism, “students learn without teaching, in their natural environments, but teachers arrange special contingencies which expedite learning, hastening the appearance of behavior which would otherwise be acquired slowly” (64). Examinations are an indispensable item in learning for behaviorist theorists. Their aim is to measure whether there have been any changes in the behavior of the learner. Educators motivate learners through rewards and reinforcement. If a learner errs, punishment comes in handy to bring about behavioral change. The learner’s task in this approach is to acquire facts and master skills. An educator must praise and reward learners who make small accomplishments. Behaviorist theorists employ progress charts to monitor learners’ improvement. The educator can apply his authority to punish learners who show little or no behavioral change.

In constructivism, learning takes place through problem solving and discovery. It is not structured. It instead unfolds in a natural and uninhibited manner. Human beings have an innate curiosity. If given the liberty, they are capable of discovering things by themselves as long as there is motivation. Learning is neither teacher-centered nor at the direction of a teacher or an authority.
The approach, in the eyes of behaviorist theorists, lacks “meaningful learning” (Turner, 2007). There are no structured examinations but rather direct tests that correspond to the learners’ skills. The role of the teacher is to reduce threats in the learning environment and make it challenging. This way, learners will become critical thinkers and problem-solvers. The teacher merely facilitates the learning process as learners work in groups. He asks thoughtful, provoking questions that stimulate discussions. Learners brainstorm, arrive at original solutions, and present them in a way of their choosing.

In cognitive theory, learning is a product of well-formulated strategies. The learners’ aim is to acquire facts and master concepts. They observe the teacher as he demonstrates and explains facts. The teacher employs his own strategies to capture and retain the attention of the learners. Visual aids are very important in the learning process. The learners observe graphics and use them to derive meaning through analysis and synthesis. To enhance retention of information, learners can use mnemonics and other retrieval cues. Schunk (1991) propounds that the approach applies learning strategies such as “review, examine, ask, do, and summarize” (34).

According to behaviorist theorists, the major factor that influences learning is the environment. The brain is sidelined in this account of learning because it involves internal states that cannot be measured. Conditioning is the main way through which to understand learning. Inevitably, therefore, rewards and punishment influence learning. If educators motivate learners for their small accomplishments, learners will achieve more behavioral change. An ideal teacher will transfer facts and skills to the learners and assess their understanding through examinations. Constructivist theorists believe that motivation influences learning.
The role of a teacher is to motivate the learner to develop solutions rather than memorize documented solutions. Apart from motivation, experience also influences learning. The theory propounds that learners come into the learning process with their own experiences; they are not blank. The task of the teacher therefore is not to deposit knowledge in the learners but to provoke and elicit discovery of new knowledge. Schunk (1991) further asserts that the teacher is an influence “that performs a minimalist role geared towards most learning for least teaching” (45). The major influences in this approach are experience and motivation.

Cognitive theorists hold that cognitive structures and mental maps influence learning (Piaget & Roberts, 1976). As a child grows, his cognitive structure develops to be able to absorb abstract materials. The child’s developmental stage inevitably has an influence on the learning process. For instance, children who are below two years of age are in the sensorimotor stage. They can only interact with their environment at a physical level. Their construct of abstract reality is low. Curriculum developers must therefore formulate a curriculum that is appropriate to children’s cognitive level.

Behaviorist theorists acknowledge the role of the brain in learning, but their major preoccupation is with what one can observe directly. They consider what is manifest, not the thought process that leads to an action. Emotions have no place in learning according to theorists who propound this learning approach. Rather, students can learn a behavior and unlearn those behaviors that are not acceptable. A rewarded response contributes to learning more than the brain does. Reward and punishment thus form the basis for this theory. Taylor and MacKenney (2008) assert that knowledge “is separate to the human mind and the teacher must transfer it to the learner” (45). While the brain is important in retaining “deposited” knowledge, the teacher must enforce acceptable behavior.
Cognitive theory recognizes that the brain is very important in the learning process. According to Taylor and MacKenney (2008), mental processes come first in learning. They argue that with “effective cognitive processes, learning is easier and new information can be stored in the memory for a long time” (35). If the learner adopts ineffective cognitive processes, it will be hard to learn and retain new information. The educator should use good instructional strategies to suit the brain processing sequence of the learner. The theorists propound that the brain is “wired” to receive knowledge in a hierarchical order. The instructor should therefore present information to the learner starting from the simple and moving to the complex. The level of complexity should be in tandem with the learner’s cognitive levels.

Constructivist theorists believe that the brain is important for discovery of new knowledge and reflection. Because they are endowed with a brain, human beings can learn without a teacher. The human brain is capable of arriving at solutions with proper guidance. The role of the teacher is therefore to provide the learner’s brain with the needed motivation to pursue knowledge. To constructivist theorists, knowledge is indivisible from the human brain.

To a behaviorist theorist, a learner applies knowledge to change behavior. The educator is the source of all knowledge, and he deposits this knowledge into the learner’s empty mind. The knowledge is meant to achieve behavioral change. The teacher rewards good behavior until he accomplishes the level of change he intends with the learner. To test the acquisition of knowledge, the teacher administers examinations. In other words, the learner applies new knowledge by changing behavior. Learners apply knowledge in a way that has observable indicators.

In constructivism, the learner applies new knowledge to solve problems by discovering solutions. As highlighted earlier, this learning approach is learner-centered.
The teacher creates a challenging situation and allows learners to brainstorm and discuss the solution. He then complicates the issue so that, in the end, the learner arrives at a solution after deep thinking. The learner then integrates the new knowledge into the old knowledge and obtains a holistic understanding of an issue.

In cognitive theory, new knowledge builds on the old knowledge. As the learner grows, his cognitive abilities increase. Cognitive development puts one in a better position to absorb complex and abstract knowledge. Learners apply new knowledge to “enhance their logical and conceptual growth” (Turner, 2007, 23). Human beings construct reality as they interact with the environment and other people.

To facilitate the learning process, different theorists design their instruction to suit their purposes of education. Behaviorist theorists are interested in behavior change, and their instructional design aims to achieve that purpose. As Turner (2007) observes, behaviorists prepare the instructional design “to arrange contingencies of reinforcement under which students learn” (56). The design is teacher-centered and systematic. The teacher provides direction to learners, who passively participate in the process. The design is objectivist and focused on the individual. The theory lacks the holistic aspect of other theories because of its focus on behavioral observation. The theorists integrate rewards and punishment in the design. A teacher gives compliments and rewards for small achievements from learners. The design is premised on the fundamental belief that learners can learn and unlearn behavior.

The proponents of constructivism theory recognize the importance of the conditions under which learning takes place. The physical and emotional environment plays a big role in shaping the instructional design. Human beings have an innate curiosity.
Instructional design therefore recognizes that the learner is the most important person in the learning process. The design is therefore learner-centered. For testing, the design aims to match the skills acquired by the learner to the items assessed. The design is natural and holistic in several ways. The proponents of this theory discourage a standardized curriculum because it makes it impossible to incorporate learners’ experiences in the learning process. The design glorifies reflection, discovery of knowledge, and problem solving. Learners brainstorm in groups, and the teacher’s role is to provide a holistic environment that is not only challenging but also stimulating. Turner (2007) argues that a good design “reduces the quantity of teaching while leaving everything unchanged” (54). The design is non-directed and learner-centered. It emphasizes the role of cognitive operations in the learning process. Additionally, it pays special focus to the group rather than the individual.

Modern educationists consider constructivism theory more holistic than behaviorism theory. Educators in this approach formulate their strategies in a manner that encourages learners to analyze rather than regurgitate knowledge. The classes are usually lively and interactive, with the students doing most of the talking. There are no standard tests, as learners judge their own progress.

Cognitive theorists believe that cognitive and mental structures are very important in learning. As Taylor and MacKenney (2008) assert, the instructional design must be a “developmentally appropriate curriculum that enhances their students’ logical and conceptual growth” (43). It is therefore incumbent upon teachers to take the experience of learners into consideration. The environment plays a big role in shaping the instructional design. The cognitive age of a learner determines the curriculum’s content.
For instance, educators should not introduce abstract knowledge to children who have not grown out of the sensorimotor stage. Rather, they should structure knowledge hierarchically. The design envisions a situation where instructors start with easy information and increase the complexity as learners become acquainted with it.

Conclusion

The paper has addressed three learning theories: behaviorism, cognitivism, and constructivism. Each theory is unique in the way it envisions the learning process. Each theory also has its own merits and demerits, and it is up to education stakeholders to decide which theory suits their circumstances. The presence of many learning theories is evidence of the attention that philosophers, psychologists, and educationists have attached to the learning process. There is no consensus as to the best theory because each works best under specific situations. For instance, constructivist theory would be inappropriate for learners with special needs, because learners with disabilities require specialized attention. Behaviorist theory, however, has become unpopular because it negates the role of the brain in learning. The theories offer insightful information on how to understand and enhance the learning process.

References

Lefrançois, G. R. (2012). Theories of human learning: What the professor said. Belmont, CA: Wadsworth.
Leonard, D. C. (2002). Learning theories, A to Z. Westport, CT: Oryx Press.
Piaget, J., & Roberts, G.-A. (1976). To understand is to invent: The future of education. Harmondsworth: Penguin.
Ritzer, G. (2005). Encyclopedia of social theory. Thousand Oaks, CA: Sage Publications.
Schunk, D. H. (1991). Learning theories: An educational perspective. New York, NY: Merrill Publishing Company.
Skinner, B. F. (1976). About behaviorism. New York, NY: Vintage Books.
Taylor, G. R., & MacKenney, L. (2008). Improving human learning in the classroom: Theories and teaching practices. Lanham, MD: Rowman & Littlefield Education.
Turner, S. (2007). Learning theories. Chandni Chowk, Delhi: Global Media.
https://ivypanda.com/essays/learning-theories-comparative-perspective/
Administrative Process (Stages and Characteristics)

An administrative process is a continuous, connected flow of planning, organization, direction and control activities, established to make effective use of the human, technical, material and other resources that the organization has. This set of activities is governed by certain business rules or policies whose purpose is to strengthen efficiency in the use of those resources. It is applied in organizations to help them achieve their objectives and meet their lucrative and social needs. The work of administrators and managers is important in this regard; their performance is often measured by how well they carry out the administrative process. The functions of the administrative process are the same functions as its different stages (planning, organization, direction and control), but they differ from them in that they are applied to the general objectives of the organization.

Stages of the administrative process

The administrative process is developed in different stages, known by the abbreviation PODC (planning, organization, direction and control). These stages are sequential and are repeated for each objective determined by the organization or company. Usually, they are grouped into two phases:

- Mechanical phase: Planning (What should be done?) and Organization (How should it be done?). This establishes what is going to be done and provides a structure to do it.
- Dynamic phase: Direction (How is it being done?) and Control (How was it done?). This specifies how the previously structured organism is managed.

Activities and functions of the administrative process

1.- Planning

This is the first step: knowing in advance what is going to be done, the direction to follow, what you want to achieve, what must be done to reach it, and who, when and how it will be done.
For this, some steps are followed, such as:
- Internal and environmental research (tools such as Porter's five forces and SWOT analysis can be used).
- Statement of purposes, strategies and policies.
- Establishment of actions to be executed in the short, medium and long term.

Scholars of the subject affirm that planning includes defining the organizational goals, developing a general strategy to reach those goals, and establishing priority plans to coordinate all the activities. Specifically, this function must be exercised by the administrative body of the company; it sets the objectives and goals for the company and the methods by which it will carry them out. Objectively, a plan is established that contains the future activities to be carried out, visualized in advance and taking every feature into account in detail.

The most important planning activities are:
- Predefine objectives and goals to achieve during a certain time.
- Implement a strategy with appropriate methods and techniques.
- Anticipate possible future problems and plan against them.
- Clarify, expand and determine the objectives.
- Establish working conditions.
- Select and state the tasks to be developed to meet the objectives.
- Build a general plan of achievement, emphasizing new ways of performing the job.
- Establish policies, methods and procedures of performance.
- Modify plans based on the results of the control.

2.- Organization

This is the second step. It constitutes the set of rules to be respected within the company by all who work there; the main function at this stage is coordination. After planning, the next step is to distribute and assign the different activities to the work groups that make up the company, allowing the equitable use of resources and creating a relationship between the staff and the work to be performed.
Organizing means arranging the work in pursuit of the company's goals, including setting the tasks to be performed, deciding who is going to do them, where decisions are made, and to whom people must be accountable. That is, the organization stage establishes what must be done to achieve a planned purpose, dividing and coordinating activities and providing the necessary resources. The work carried out here relates the aptitudes (physical and intellectual) of each worker to the resources that the company possesses. The main intention of the organization is to define the objective assigned to each activity so that it is met with minimum expense and a maximum degree of satisfaction.

The most significant organization activities are:
- Make a careful, detailed selection of each worker for the different positions.
- Subdivide tasks into operational units.
- Choose an administrative authority for each sector.
- Provide materials and resources to each sector.
- Group operational obligations into jobs by department.
- Keep the requirements of each position clearly established.
- Provide personal facilities and other resources.
- Adjust the organization based on the results of the control.

3.- Direction

This is the third step. Within it, the execution of the plans is carried out, along with the communication, motivation and supervision necessary to achieve the goals of the company. At this stage, the presence of a manager with the ability to make decisions, instruct, help and direct the different work areas is necessary. Each work group is governed by norms and measures that seek to improve its operation; direction seeks to ensure, through interpersonal influence, that all workers contribute to the achievement of the objectives.

Direction can be exercised through:
- Leadership
- Motivation
- Communication

The most significant direction activities are:
- Offer motivation to staff.
- Reward employees with a salary according to their duties.
- Consider the needs of the worker.
- Maintain good communication between different labor sectors.
- Allow participation in the decision-making process.
- Influence workers to do their best.
- Train and develop workers so they can use their full physical and intellectual potential.
- Satisfy the different needs of employees by recognizing their effort at work.
- Adjust management and execution efforts according to the control results.

4.- Control

This is the last step. Within it, the evaluation of the general development of the company is carried out; this final stage has the task of ensuring that the path being taken will bring the company closer to success. It is an administrative task that must be exercised with professionalism and transparency. The control of the activities carried out in the company offers an analysis of their ups and downs, so that, based on the results, whatever modifications are feasible can be made to correct the perceived weaknesses. The main function of control is to measure the results obtained and compare them with the planned results in search of continuous improvement. It is therefore considered a follow-up task, focused on correcting any deviations that may arise with respect to the objectives set. The planned and the achieved are contrasted to trigger the corrective actions that keep the system oriented towards the objectives.

The most important control activities are:
- Follow, evaluate and analyze the results obtained.
- Contrast the results against performance standards.
- Compare the results obtained with the established plans.
- Define and initiate corrective actions.
- Use effective means to measure operability.
- Communicate the means of measurement to everyone involved.
- Transfer detailed information showing the variations and comparisons made.
- Suggest various corrective actions when necessary.
Importance of the administrative process

The importance of the administrative process lies in anticipating future events and controlling resources adequately and in an orderly manner. It is necessary that the rules, policies and activities of each stage be applied effectively and in line with the objectives and goals of the company, in order to maintain the efficiency of the system and, therefore, the profitability and economic benefit.
https://www.takecareofmoney.com/administrative-process/
Learning by Repetition: The Beauty of Technology

This post originally appeared on PRSonally Speaking, a blog for the Journal of the American Society of Plastic Surgeons, on December 19, 2011, written by Dr. Anureet Bajaj. Recently, I commented on how wonderful it was to have PRS on the iPad. Well, I will say it again! Usually, I have to wait several weeks to get my new issue of PRS; this month, the December issue magically appeared on my iPad – before Thanksgiving. The CME article in this issue is on face lifts, and the timing couldn't have been better because I had a facelift scheduled for the next day. Many of us periodically review our results – by being critical, we can try to improve and learn from our patients. In training, this process is continual and mandatory; once in practice, we have to make a conscious effort to improve and maintain quality. I'm in the middle of upgrading and updating my website, so I have been reviewing all of my before and after photos. During this process, I am discovering my strengths and weaknesses as a surgeon, and I have become dissatisfied with the long-term results of my face lifts – I wanted to get better. I remembered that during residency, I had been focused on learning the techniques used by my attendings; in early practice, I don't think that I had the volume to start perfecting my techniques; as I have matured and as my practice matures, I strive to improve and determine which techniques work best in my hands on which patients. Each surgeon will mature at a different rate, and it may vary for each type of procedure. So for me, the time for facelifts has been during the past three months when I seem to have had a string of them.
Doing several has allowed me to read continually – and I learn by reading everything in sight about a particular topic. On a recent ASPS conference call, one well-known plastic surgeon said that "repetition is the key to adult learning." I completely agree, but we also need repetition using multiple modalities – auditory, visual, and written. Prior to the computer age, we could only read books or articles and look at pictures – I remember spending hours drawing out my surgeries in my sketchbook so that I could translate the words to a visual image. Now we can watch surgeries or anatomical dissections and listen to lectures without leaving our homes. This article was great in helping me to summarize different facelifting techniques and review the anatomical issues involved – more repetition so that I could consolidate my own approach. The videos were also instantly accessible on my iPad – another wonder of technology. Even my father was impressed when I showed him – no more needing to find DVDs and a computer to review a surgery or dissection. But back to face lifts… I read the article, watched the videos, and did more research with in-depth articles and textbooks – rereading some old ones and adding new ones to my collection – and did my surgery. The fact is that I do learn by repetition and multiple modalities – a typical product of my generation. Thus far, I am happier with my results, and look forward to further exploring my new educational options online. Resources:
https://bajajplasticsurgery.com/blog/learning-by-repetition-the-beauty-of-technology/
Tired of tossing and turning at night? These simple tips will help you sleep better and be more energetic and productive during the day.

How can I get a better night's sleep?

Sleeping well directly affects your mental and physical health. Fall short and it can take a serious toll on your daytime energy, productivity, emotional balance, and even your weight. Yet many of us regularly toss and turn at night, struggling to get the sleep we need. Getting a good night's sleep may seem like an impossible goal when you're wide awake at 3 a.m., but you have much more control over the quality of your sleep than you probably realize. Just as the way you feel during your waking hours often hinges on how well you sleep at night, so the cure for sleep difficulties can often be found in your daily routine. Unhealthy daytime habits and lifestyle choices can leave you tossing and turning at night and adversely affect your mood, brain and heart health, immune system, creativity, vitality, and weight. But by experimenting with the following tips, you can enjoy better sleep at night, boost your health, and improve how you think and feel during the day.

Tip 1: Keep in sync with your body's natural sleep-wake cycle

Getting in sync with your body's natural sleep-wake cycle, or circadian rhythm, is one of the most important strategies for sleeping better. If you keep a regular sleep-wake schedule, you'll feel much more refreshed and energized than if you sleep the same number of hours at different times, even if you only alter your sleep schedule by an hour or two. Try to go to sleep and get up at the same time every day. This helps set your body's internal clock and optimize the quality of your sleep. Choose a bedtime when you normally feel tired, so that you don't toss and turn. If you're getting enough sleep, you should wake up naturally without an alarm. If you need an alarm clock, you may need an earlier bedtime. Avoid sleeping in—even on weekends.
The more your weekend/weekday sleep schedules differ, the worse the jetlag-like symptoms you'll experience. If you need to make up for a late night, opt for a daytime nap rather than sleeping in. This allows you to pay off your sleep debt without disturbing your natural sleep-wake rhythm. Be smart about napping. While napping is a good way to make up for lost sleep, if you have trouble falling asleep or staying asleep at night, napping can make things worse. Limit naps to 15 to 20 minutes in the early afternoon. Fight after-dinner drowsiness. If you get sleepy way before your bedtime, get off the couch and do something mildly stimulating, such as washing the dishes, calling a friend, or getting clothes ready for the next day. If you give in to the drowsiness, you may wake up later in the night and have trouble getting back to sleep.

Tip 2: Control your exposure to light

Melatonin is a naturally occurring hormone controlled by light exposure that helps regulate your sleep-wake cycle. Your brain secretes more melatonin when it's dark—making you sleepy—and less when it's light—making you more alert. However, many aspects of modern life can alter your body's production of melatonin and shift your circadian rhythm.

How to influence your exposure to light

During the day: Expose yourself to bright sunlight in the morning. The closer to the time you get up, the better. Have your coffee outside, for example, or eat breakfast by a sunny window. The light on your face will help you wake up. Spend more time outside during daylight. Take your work breaks outside in sunlight, exercise outside, or walk your dog during the day instead of at night. Let as much natural light into your home or workspace as possible. Keep curtains and blinds open during the day, and try to move your desk closer to the window. If necessary, use a light therapy box. This simulates sunshine and can be especially useful during short winter days.
At night: Avoid bright screens within 1-2 hours of your bedtime. The blue light emitted by your phone, tablet, computer, or TV is especially disruptive. You can minimize the impact by using devices with smaller screens, turning the brightness down, or using light-altering software such as f.lux. Say no to late-night television. Not only does the light from a TV suppress melatonin, but many programs are stimulating rather than relaxing. Try listening to music or audio books instead. Don't read with backlit devices. Tablets that are backlit are more disruptive than e-readers that don't have their own light source. When it's time to sleep, make sure the room is dark. Use heavy curtains or shades to block light from windows, or try a sleep mask. Also consider covering up electronics that emit light. Keep the lights down if you get up during the night. If you need some light to move around safely, try installing a dim nightlight in the hall or bathroom or using a small flashlight. This will make it easier for you to fall back to sleep.

Tip 3: Exercise during the day

People who exercise regularly sleep better at night and feel less sleepy during the day. Regular exercise also improves the symptoms of insomnia and sleep apnea and increases the amount of time you spend in the deep, restorative stages of sleep.

For better sleep, time your exercise right

Exercise speeds up your metabolism, elevates body temperature, and stimulates hormones such as cortisol. This isn't a problem if you're exercising in the morning or afternoon, but too close to bed and it can interfere with sleep. Try to finish moderate to vigorous workouts at least three hours before bedtime. If you're still experiencing sleep difficulties, move your workouts even earlier. Relaxing, low-impact exercises such as yoga or gentle stretching in the evening can help promote sleep.
Tip 4: Be smart about what you eat and drink

Your daytime eating habits play a role in how well you sleep, especially in the hours before bedtime. Limit caffeine and nicotine. You might be surprised to know that caffeine can cause sleep problems up to ten to twelve hours after drinking it! Similarly, smoking is another stimulant that can disrupt your sleep, especially if you smoke close to bedtime. Avoid big meals at night. Try to make dinnertime earlier in the evening, and avoid heavy, rich foods within two hours of bed. Spicy or acidic foods can cause stomach trouble and heartburn. Avoid alcohol before bed. While a nightcap may help you relax, it interferes with your sleep cycle once you're out. Avoid drinking too many liquids in the evening. Drinking lots of fluids may result in frequent bathroom trips throughout the night. Cut back on sugary foods and refined carbs. Eating lots of sugar and refined carbs such as white bread, white rice, and pasta during the day can trigger wakefulness at night and pull you out of the deep, restorative stages of sleep.

Nighttime snacks help you sleep

For some people, a light snack before bed can help promote sleep. For others, eating before bed leads to indigestion and makes sleeping more difficult. If you need a bedtime snack, try:

Tip 5: Wind down and clear your head

Do you often find yourself unable to get to sleep or regularly waking up night after night? Residual stress, worry, and anger from your day can make it very difficult to sleep well. Taking steps to manage your overall stress levels and learning how to curb the worry habit can make it easier to unwind at night. You can also try developing a relaxing bedtime ritual to help you prepare your mind for sleep, such as practicing a relaxation technique, taking a warm bath, or dimming the lights and listening to soft music or an audiobook. Problems clearing your head at night can also stem from your daytime habits.
The more overstimulated your brain becomes during the day, the harder it can be to slow down and unwind at night. Maybe, like many of us, you're constantly interrupting tasks during the day to check your phone, email, or social media. Then when it comes to getting to sleep at night, your brain is so accustomed to seeking fresh stimulation, it becomes difficult to unwind. Help yourself by setting aside specific times during the day for checking your phone and social media and, as much as possible, try to focus on one task at a time. You'll be better able to calm your mind at bedtime.

A deep breathing exercise to help you sleep

Breathing from your belly rather than your chest can activate the relaxation response and lower your heart rate, blood pressure, and stress levels to help you drift off to sleep.

A body scan exercise to help you sleep

By focusing your attention on different parts of your body, you can identify where you're holding any stress or tension, and release it.

Tip 6: Improve your sleep environment

A peaceful bedtime routine sends a powerful signal to your brain that it's time to wind down and let go of the day's stresses. Sometimes even small changes to your environment can make a big difference to your quality of sleep.

Keep your room dark, cool, and quiet

Keep noise down. If you can't avoid or eliminate noise from neighbors, traffic, or other people in your household, try masking it with a fan or sound machine. Earplugs may also help. Keep your room cool. Most people sleep best in a slightly cool room (around 65° F or 18° C) with adequate ventilation. A bedroom that is too hot or too cold can interfere with quality sleep. Make sure your bed is comfortable. Your bed covers should leave you enough room to stretch and turn comfortably without becoming tangled.
If you often wake up with a sore back or an aching neck, you may need to experiment with different levels of mattress firmness, foam toppers, and pillows that provide more or less support. Reserve your bed for sleeping and sex. By not working, watching TV, or using your phone, tablet, or computer in bed, your brain will associate the bedroom with just sleep and sex, which makes it easier to wind down at night.

Tip 7: Learn ways to get back to sleep

It's normal to wake briefly during the night, but if you're having trouble falling back asleep, these tips may help: Stay out of your head. Hard as it may be, try not to stress over your inability to fall asleep again, because that stress only encourages your body to stay awake. To stay out of your head, focus on the feelings in your body or practice breathing exercises. Take a breath in, then breathe out slowly while saying or thinking the word, "Ahhh." Take another breath and repeat. Make relaxation your goal, not sleep. If you find it hard to fall back asleep, try a relaxation technique such as visualization, progressive muscle relaxation, or meditation, which can be done without even getting out of bed. Even though it's not a replacement for sleep, relaxation can still help rejuvenate your body. Do a quiet, non-stimulating activity. If you've been awake for more than 15 minutes, get out of bed and do a quiet, non-stimulating activity, such as reading a book. Keep the lights dim and avoid screens so as not to cue your body that it's time to wake up. Postpone worrying and brainstorming. If you wake during the night feeling anxious about something, make a brief note of it on paper and postpone worrying about it until the next day when it will be easier to resolve. Similarly, if a great idea is keeping you awake, make a note of it on paper and fall back to sleep knowing you'll be much more productive after a good night's rest. Source:
https://serta.id/promotion/detail/198/mattress-guide-1.html
An expert explains how the clock change affects your sleep

At 2 am local time on Sunday November 7, daylight saving time (DST) ends in the US, which means the clocks go back by one hour. And while this gives us an extra hour in bed, it can also have a knock-on effect, which feels similar to jet lag. We speak to two experts to find out how clock changes affect our sleep and how you can prepare to make the transition as seamless as possible. Plus, we'll also cover some tips on how to get good sleep in general.

How does the clock change affect sleep?

For some people, the clock change will be no bother at all. But, for others, the switch will bring on a feeling similar to jet lag, which can lead to moodiness and appetite changes, among other things. April Mayer, a sleep expert at Amerisleep, explains it like this: "As our bodies are used to a certain rhythm, if you don't take steps to minimize the effect of a time change, it can take a few days or even around a week to get your sleep schedule back on track," says Mayer. "Our circadian rhythm (or 'body clock') – which governs many of our bodily functions, from hunger cues to knowing when we need to sleep – relies on consistent patterns of sunlight and darkness to operate. "It produces melatonin to make us feel sleepy at night, and responds to light during the day to signal us to stay awake. The clock change can interfere with these patterns. However, knowing this means you can take steps to smooth the transition and avoid any negative effects."

How to prep your sleep for the clocks going back

If you are usually affected by the changing of the clocks, then the good news is that there are some simple things you can do before the time change. Mayer suggests: "The night before the clock change, try to eat a light, protein-filled dinner to promote sleepiness.
"We also recommend shutting off screens about two hours before bedtime, unwinding with a warm bath or shower, doing some light stretches, and reading a good book or doing some light activity. "It's also important to get at least seven hours of sleep each night to keep you from feeling tired when the switch occurs."

In order to make the transition easier, you can also follow these simple steps, starting this evening:

- The Thursday before the clocks go back – eat, sleep and exercise 30 minutes earlier than usual.
- The Friday before the clocks go back – eat, sleep and exercise 25 minutes earlier than usual.
- The Saturday before the clocks go back – eat, sleep and exercise 20 minutes earlier than usual.
- The Sunday when the clocks go back – eat, sleep and exercise according to the new time.

When the clocks go back, maintaining good sleep hygiene will help make the leap easier. Read on for some essential information about sleep hygiene and how practicing it will help you sleep better night after night.

What is sleep hygiene and how can it help you?

'Sleep hygiene' is a term you might have heard in discussions about healthy sleep and general wellbeing, but what does it actually mean, and how can it help you sleep better before and after the clocks go back? We recently spoke to Dr Katherine Green, medical director of the UC Health Sleep Medicine Clinic in Colorado, to find out. "Sleep hygiene is the general term used for the habits and behaviors around our sleep routine," she says. "It encompasses everything from the sleep environment (where the bedroom is, how much noise is around, how much light is there, etc), to the schedule that you typically sleep (bedtime and wake time). "It also includes the things that you do before bed (using electronics before bed, eating or drinking schedules around bedtime, and so on).
These things give the brain clues that help regulate the body's sleep system."

Signs that you have poor sleep hygiene

It's easy to get into bad habits when it comes to our bedtime routines, but how do you know which habits are seriously affecting your sleep? According to Dr Green, there are two big signs of poor sleep hygiene. The first sleep hygiene no-no is using electronics within around an hour of bedtime. "Light from electronics (blue light) inhibits your brain's production of melatonin, the hormone that helps you to fall asleep and stay asleep," explains Dr Green. "Using electronics (TV, phone, tablet etc) within 30-60 minutes of bedtime can make it harder for you to fall asleep and stay asleep."

The second biggest sign? Not keeping your sleep and wake schedule regular. "Having an erratic sleep schedule (going to bed early some nights, late others, waking up early some days and sleeping in until very late on other days) makes it difficult for the brain's hormones to regulate sleep, making it hard to fall asleep at night when you want to," says Dr Green.

Expert tips for better sleep hygiene

Regardless of the clocks going back, it's always wise to practice good sleep hygiene daily so that you can get the rest you need without issue. Dr Green recommends the following good sleep hygiene tips:

1. "Avoid electronics within 60 minutes of bedtime."
2. "Make sure your bedroom environment is conducive to sleep. It should be cool, dark, and quiet."
3. "Avoid caffeine or stimulants after about 1pm, as this will affect your ability to fall asleep later that evening."
4. "Exercising during the day can help reduce insomnia and improve sleep quality, and getting morning sunlight exposure helps to regulate your circadian rhythm as well. However, try to avoid any strenuous exercise too close to bedtime."
5. "Keep to the same sleep schedule as much as you can, trying to avoid your bedtime and wake-time varying by more than an hour each day."
6. "If you do wake in the night, avoid electronics or eating. Instead, get out of bed and do a quiet activity like reading until you feel tired, then get back into bed and try to fall asleep."

Davina is an experienced sleep and mattress writer who has previously contributed to our sister site Top Ten Reviews, among other Future plc brands. Davina's a big fan of organic sleep products and has recently invested in a wool mattress topper that she quite happily describes as "life-changing." (Hey, we're serious about our sleep products.) When she isn't snoozing or writing about sleep, Davina enjoys reading and creative writing, and incorporates meditation and yoga into her wellness routine.
Both genetic and environmental contributions play crucial roles in the development of a complex disease such as alcoholism. Unfortunately, little progress has been made in identifying the underlying molecular mechanisms altered during abstinence, knowledge that would aid development of novel therapeutics for the maintenance of sobriety. We propose a combined genetic, molecular, pharmacological and behavioral strategy to identify pathways that are altered after a period of abstinence. Neuroadaptations in brain structure, plasticity and gene expression occur with chronic alcohol abuse, but the stability of these expression differences in the abstinent alcoholic is controversial. We have previously reported identification of pathways altered in the prefrontal cortex (PFC), a brain region associated with cognitive dysfunction and damage in alcoholics, during a defined period of abstinence. To characterize genetic contributions, both sexes of an animal model with widely divergent responses to alcohol derived by selective breeding, the Withdrawal Seizure-Resistant (WSR) and Withdrawal Seizure-Prone (WSP) lines, were analyzed. During a sustained period of abstinence, the transcriptional response correlated with withdrawal phenotype rather than sex. Bioinformatic analysis showed that among the major pathways altered, those most dimorphic between WSR and WSP mice were 'acetylation' and 'histone deacetylase complex'. The data show complex, phenotype-specific regulation during abstinence, indicating widespread epigenetic reprogramming in the low-response WSR but not the high-response WSP mice exposed to the same ethanol concentrations.
We will identify phenotype-specific regulatory mechanisms in the low-response animal model in three specific aims by integrating data from high-throughput targeting technologies, including expression profiling, DNaseI-seq and ChIP-seq, with confirmation of the involvement of pathways in modulating relapse using pharmacological intervention in our established dependence-induced relapse drinking model. We hypothesize that targetable epigenetic mechanisms maintain expression differences during abstinence and that these differences increase the risk of relapse in the low response to alcohol endophenotype. These studies have high impact because of the morbidity and mortality associated with alcohol abuse, the high incidence of alcohol use disorders in the general population, and the tremendous impact these maladies have on human health. In addition, neuroadaptive changes and altered expression patterns may also contribute to persistent neurotoxicity and brain damage during abstinence, with detrimental consequences for learning and memory functions, and to the downward cycle of addiction and the self-sustaining nature of alcoholism. Thus, successful completion of these aims will aid our understanding of the mechanism(s) underlying the risk for relapse and advance our ability to provide therapy for alcohol abuse targeted to the low-response endophenotype, through identification of novel pharmacotherapies or enhancement of translational applications for currently available therapeutics with previously unrecognized utility. Alcohol abuse disorder and addiction are major public health problems in the United States and represent one of the leading preventable causes of death. In this project, we identify epigenetic mechanisms underlying long-term expression differences that persist in the abstinent alcoholic, and examine pharmacological interventions with behavioral validation in a mouse model of the low response to alcohol endophenotype.
Successful completion of these studies should help identify therapeutic targets to reverse or reduce risk of relapse drinking and help to maintain sobriety.
https://grantome.com/grant/NIH/R01-AA021468-02
The Indian medical device industry is amongst the top 20 markets worldwide and is considered one of the fastest growing. Since approximately 80% of Indian medical device needs are met through imports, it is essential to have a strong medical devices regulatory regime.
- September 2, 2021, Medical Devices
- May 24, 2021, Medical Devices: It is a known fact that the European Union's In Vitro Diagnostic Medical Devices Regulation (EU) 2017/746 (IVDR) will be in effect from May 26, 2022, and IVD manufacturers have to be prepared to implement the mandatory Regulatory requirements for this transition.
- Understanding the Post-Brexit Scenario for Medical Devices and Appointing a UKRP: The Need of an Exclusive Webinar, April 12, 2021, Medical Devices: It is well known that from January 1, 2021, medical devices to be placed on the UK market are obliged to follow a new Regulatory regime. Currently, the UK market is witnessing many Regulatory changes with the impact of Brexit.
- February 16, 2021, Medical Devices: The medical devices industry is highly regulated. Hence, devising the right Regulatory strategy is a valuable approach to avoiding Regulatory roadblocks.
- December 9, 2020, Food and Food Supplements: To market a dietary supplement with a "New Dietary Ingredient (NDI)" in the United States (US), manufacturers are required to submit a notification to the Food and Drug Administration (FDA).
- Health Based Exposure Limits (HBELs): Regulatory Expectations and Challenges, An Informative Webinar, October 5, 2020, Pharmaceuticals: Pharmaceutical companies sometimes manufacture products using multipurpose manufacturing facilities to develop different medicinal products. Production at such facilities may create potential cross-contamination and pose a risk to products' safety and efficacy.
- September 21, 2020, Medical Devices: The medical device industry is witnessing rapid Regulatory reforms, thanks to evolving technologies and scientific developments. In this rapid development scenario, to help medical device manufacturers, notified bodies, and other stakeholders keep pace with the sophistication and globalization, Regulatory bodies are revamping compliance standards in parallel.
- February 27, 2020, Regulatory Labeling: It is evident that labeling requirements in the European Union (EU) are quite dissimilar, given various region-specific regulations across the 27 member states. Besides this, the emerging European Regulatory landscape demands that life sciences manufacturers be more cautious in implementing and showcasing safety information.
- November 26, 2018, Medical Devices, Regulatory Affairs: A Clinical Evaluation Report (CER) is a safety and performance assessment report of any medical device based on the clinical data related to it. The clinical data is either collected through clinical investigation or by availing previously collected data of a substantially equivalent device.
https://www.freyrsolutions.com/blog-tags/webinar
Pick up a book by Val McDermid and you know you are in for an exciting, tense page-turner. BENEATH THE BLEEDING, one of the Tony Hill series, certainly delivers on this score. Tony is a psychological profiler, who collaborates with the Bradfield Major Incident Team run by DCI Carol Jordan. Not only do Tony and Carol have a longstanding "will they, won't they?" relationship, but Carol is now Tony's tenant, exacerbating their uncertainties and insecurities. In the inevitable scene-setting introductory chapter, Tony is on duty at a mental hospital when a patient goes berserk, eventually being subdued but at a cost to Tony's knee, which is shattered by an axe. Tony's subsequent hospital stay allows us not only to meet his awful mother (and hence to learn more than before about the unusual character of this man), but to have a first-hand account of the horrible death of the local football hero, Robbie Bishop. Carol and her team are assigned to the case, at which point regular readers will recognise the pattern: everyone goes off and does their own thing in competition with each other, not telling each other what they have found in the hope of being the one to make the crucial breakthrough. This was very much the theme of the previous book in the series, THE TORMENT OF OTHERS, and you'd think that after what happened in that, the team members would have learnt their lesson. Not so. Almost the worst offender is Tony himself, who although supposedly immobile after his knee operation, is off with Paula, one of the team members, following up a lead that Carol had dismissed as unlikely and challenging the senior woman's authority. The excitement builds up as the team attempt to home in on the killer, who unimaginatively changes identity, but not initials, between victims. This is nothing, however, compared with the other main plot strand: a planned attack on the football stadium during the match played in honour of the dead soccer star. 
Carol's boss calls in Counter Terrorism Command to investigate, much to Carol's disgust as she wants her team to follow up the attack as well as the murder case. More turf wars ensue, although the computer experts in both teams are happy to swap technical knowledge. Written before the debacle in which HM Revenue and Customs lost computer disks containing the bank details of 25 million people in the UK, there is a prescient paragraph in the book about Gerry and Stacey, the computer "geeks" on the two teams: "in exchange for a back door into a confidential social security database, he'd given her HM Customs and Revenue, probably the only major government access she didn't have". Fiction meets fact quite chillingly! The two investigations continue. Witnesses are interviewed, characters rub each other up the wrong way, and the tension mounts - in all these respects this author is utterly assured. Unfortunately, the denouement in each of the cases reveals a rather ludicrous motivation - a problem with previous titles by this author. And although Tony and Carol are strongly characterised and their interactions zing, the minor characters are less convincing. The Tony Hill books have been made into a successful TV series, and one does get the feeling that this book has been tailored for the purpose. Even so, the pages flash by as the suspense mounts - despite the slightly mass-produced feel to the book, it is certainly a thrilling read.
http://eurocrime.co.uk/reviews/Beneath_the_Bleeding.html
Chipton-Ross is seeking a UI/UX Designer for an opportunity in Rosslyn, VA.
RESPONSIBILITIES:
We are seeking a UI/UX Designer who will be responsible for user experience and interface design. The ideal candidate will quickly yet thoroughly create process flows, wireframes, and visual design mockups as needed to effectively conceptualize and communicate detailed interaction behaviors, and will develop and maintain detailed user-interface specifications. You should have excellent communication and team-oriented skills along with strong analytical and troubleshooting skills.
• Create user-centered designs for software projects by considering customer feedback
• Design the UI architecture, interface, and interaction flow of new web and on-device software applications
• Create and evaluate interaction models, user task flows, screen designs, and UI details that promote ease of use and optimize the user experience
• Break down complex requirements and concepts into well-thought-out visual designs and workflows
• Work collaboratively with developers, project managers, and cross-functional teams to ensure that designs are successfully created and implemented to achieve the user goals
• Experience creating UI specs and wireframes; flexibility and willingness to move from idea, to whiteboard, to execution quickly
• Participate in testing and validating your own and others' functionality
REQUIRED EXPERIENCE:
• Minimum 5 years of experience as a user experience designer, interaction designer, information architect, or similar role
• Conceptual understanding of User Interface Design and the Design Process
• Intermediate to advanced user of Adobe Creative Suite (or other design tools)
• Intermediate level of HTML, CSS and Bootstrap
• Understanding of Less and Sass
• Understanding of responsive design and touch-interface interaction techniques, and familiarity with recent design standards and trends
• Understanding of the limitations of web and mobile, with approaches/ideas to flex those boundaries
• A solid portfolio of work demonstrating experience creating great user-centered design solutions across multiple interfaces and screen sizes
EDUCATION: Degree in Computer Science, Information Technology, or related field is required. School must be accredited.
HOURS: 5/40 Work Week
MISCELLANEOUS: Applicants responding to this position will be subject to a government security investigation and must meet eligibility requirements by currently possessing the ability to view classified government information. Employment will be contingent on clearing a drug screen and background check. Both must clear prior to start date.
- Contact -
Zachary Fasano
[email protected]
CHIPTON-ROSS, INC.
420 Culver Boulevard
Playa Del Rey, CA 90293
Phone: (310) 414-7800 x252 or (800) 927-9318 x252
Candidates responding to this posting must currently possess the eligibility to work in the United States. No third parties please. Employment will be contingent on candidate clearing pre-employment drug screen and background check. Chipton-Ross provides equal employment opportunities to all employees and applicants for employment without regard to race, color, creed, religion, national origin, sex (including pregnancy), age, disability, sexual orientation, gender identity and/or expression, protected veteran status, genetic information, or any other characteristic protected by Federal, State or local law. This policy governs all areas of employment at Chipton-Ross, including recruiting, hiring, training, assignment, promotions, compensation, benefits, discipline, and terminations.
https://www.chiptonross.com/newjobs/185640.htm
usually come back with an idea for the next day's painting. Sometimes the composition comes from elements of the walk, or I start with elements I want in my painting, then I jiggle them around in my mind and start thinking about color, e.g. blues here, yellows there, greens here, etc. Not all paintings come this way; sometimes I have a new angle on an old composition, like 'the horse and buggy'. I may have thought of a new way to paint it that expresses my passion for color more effectively. Then it's a light sketch onto the canvas, hitting the key points of my composition with the 'golden mean' rule. Then it's into the paint with light yellows first, then onto the darker colors. This is the opposite of traditional painting, where they painted dark to light. "In palette knife painting the first layers are the ones that give the maximum effect of color; the purples, blues and yellows are what make palette knife painting so exciting, because the color seems to explode off the canvas so bright and pure." Color and texture are where I am at. The subject matter is just a vehicle for me to apply my color and texture. An example of 'How I Work' is from a few nights ago: I was out on a walk and, little did I know, it was a magical night. Just before sunset, half foggy and raining: perfect 'dream elements'. This made for wonderful blues from the sky and light yellows glowing from the street lights, with deep greens coming from the spring grass. All I needed for a small series of paintings, all in the same vein.
http://johnbradfordmaccallum.com/
Cochlear Implant Program Education Coordinator Jennifer Haney helps families find solutions to help their children with hearing loss thrive in the classroom. Jennifer Haney, M.Ed., does all she can to eliminate educational barriers for children who receive a cochlear implant at Lurie Children's. A former teacher of deaf and hard of hearing children who is also an early intervention therapist, Jennifer has special expertise in listening and spoken language development and is fluent in sign language. Jennifer is the Hart Family Cochlear Implant Education Coordinator, a position supported by the Foundation for Hearing and Speech Rehabilitation (FHSR). Since joining Lurie Children's Cochlear Implant Program in December 2015, she has served as an essential bridge between parents, early intervention therapists, teachers and school administrators. Her role is wide-ranging. Jennifer spends time in the medical center meeting with families, in the classroom observing children and with school-based and early intervention professionals in the community. She teaches educators about integrating auditory skills development techniques into the curriculum. Jennifer is also charged with organizing the "A Day at Lurie Children’s" educational program for community-based professionals working with pre-school and school-aged children. Jennifer meets with parents to discuss their goals for their child after implantation, and to provide therapy and educational recommendations. Jennifer develops an initial report that is shared with the family, the school and the implant team to ensure that all stakeholders understand her recommendations. Making families aware of the best educational options for their child's specific situation can make all the difference. For example, Jennifer worked with the family of a child whose progressive hearing loss had gone unrecognized for several years. 
When finally diagnosed, he still had spoken language, but his untreated hearing loss prevented him from developing the academic skills of his third-grade hearing peers. He received a cochlear implant and, with Jennifer's assistance, was placed in an oral education classroom. This setting supports listening, language and academic skills of children with hearing loss until they are ready to succeed in a mainstream class. He has made tremendous progress, and his path in life has been forever changed. "Being able to meet and support a wide variety of families in varying stages of their child's journey and assisting in their educational success is very gratifying," says Jennifer. "Our team here is really amazing." Lurie Children's Cochlear Implant Program is supported by the Foundation for Hearing and Speech Rehabilitation.
https://www.luriechildrens.org/en/news-stories/an-advocate-for-classroom-success/
Facilities Maintenance Planner-Scheduler | Redmond, WA
Just is seeking a highly motivated, detail-oriented and driven Facilities Maintenance Planner/Scheduler who desires a significant opportunity to improve worldwide access to biotherapeutics. The Maintenance Planner/Scheduler will provide lead technical and business support and analysis for the Computerized Maintenance Management System (CMMS), including support of asset management, work planning, calibrations, and spare parts inventory within the software. This position will be responsible for developing maintenance plans and schedules for all plant assets at Just-Evotec Biologics, for coordinating the interface of the CMMS system, procedures and data transfer with various departments and databases, and for ensuring all CMMS procedures are developed and followed. Additionally, he/she will be responsible for developing plans that consider any special tools required, environmental considerations, manpower availability (internal and external), equipment availability, work permits, and materials (stocked and non-stocked).
Key Responsibility Areas
- Maintenance Coordination: Schedule and document maintenance and calibration related services for manufacturing equipment, facilities, and utility systems; assess maintenance needs for facility and equipment change control
- Operational Efficiency: Actively partner and communicate with key internal/external stakeholders to develop a maintenance strategy and definition of critical spare parts and inventory control management
- Technical Expertise: Serve as a Subject Matter Expert (SME) for Computerized Maintenance Management Systems (CMMS) and related processes
Specific Responsibilities:
- Provide planning for corrective (non-emergency), preventive, predictive, and proactive maintenance activities, and calibrations for the Manufacturing, Utilities, and Laboratory areas.
Planning includes developing and clarifying the scope of work, coordinating the manufacturing windows, evaluating the resource and load balancing, and the spare parts (stock and non-stock materials) - Develop a schedule for Maintenance Technicians and/or contractors. Organize the planned maintenance work schedule to optimize for time - Develop standards for repetitive jobs, historical job estimates and track craft/crew backlogs - Responsible for the Shutdown (planned outage) schedule and coordinating with department personnel and production areas - Work proactively with other functional areas (Facilities & Engineering, Supply Chain, Manufacturing) to prioritize business needs and maintenance requirements - Assess the skills, tools, equipment, and time needs for operational efficiency and to prepare for planned and unplanned interventions; support investigation of instrument and equipment related events - Interface with stakeholders/requesters to determine the need for resource deployment - Manage the maintenance and calibration activity schedule, with consideration to cost control and input from other essential stakeholders - Manage accurate work-order backlog, quotations, requisitions, data change request, and data required for Metrics, Reporting, KPIs, and Quality System Metrics; maintain and upload applicable data records into the CMMS - Create PM/CAL forecasts and communicate to management, stakeholders, and end-users with proper notice to effectively schedule required activities - Review and approve manufacturing documentation to ensure accuracy and all GMP/SOP requirements are met - Engage others by communicating with impact, encourage teamwork through collaboration, drive for results by ensuring accountability, establish strategic approach by clarifying what matters most, lead change through continuous improvement, demonstrate self-awareness by enhancing personal leadership, develop capabilities by growing talent - Provides feedback to area manager on 
performance of processes, schedulers and planners
Qualifications and Education Requirements:
- Associate degree required; Bachelor's degree preferred
- Minimum 5 years of experience in pharmaceutical manufacturing or maintenance; 2 years of direct or indirect leadership experience preferred
- Proficient in the use and administration of the Computerized Maintenance Management System (CMMS)
- Must be able to lift, push, pull and carry up to 50 lbs; in general, the position requires a combination of sedentary work and walking around observing conditions in the facility
- Must be able to work in controlled environments requiring special gowning. Will be required to follow gowning requirements and wear protective clothing over the head, face, hands, feet and body
Preferred Qualifications:
- Strong communication skills; able to effectively lead cross-functional meetings and engage management
- Strong CMMS background
- Strong computer skills, including Microsoft Office
- Proactively looks for opportunities to educate the group and external departments
- Able to influence within the organization and provide leadership to the team
- Ability to multi-task in a highly dynamic and diverse environment
- Must be knowledgeable with respect to the operation and maintenance of the manufacturing and utilities equipment and components
- Demonstrate thorough knowledge of manufacturing equipment and utilities processes, using P&IDs, equipment manuals, specifications, and standard operating procedures
- Use proper judgment when deciding on the scope of work for any corrective work order
- Possess good mechanical knowledge and skills
- Demonstrated ability to coordinate and prioritize maintenance related issues with managers, manufacturing, Facilities and Engineering, QC, supply chain, CMMS, cleaning, and QA personnel
About Just – Evotec Biologics
Just – Evotec Biologics, wholly owned by Evotec SE, is a unique platform company that integrates the design, engineering,
development, and manufacture of biologics. With deep experience in the fields of protein, process and manufacturing sciences, the Just team came together to solve the scientific and technical hurdles that block access to life-changing protein therapeutics, from the design of therapeutic molecules to the design of the manufacturing plants used to produce them. Just's focus is to create access and value for a global market through scientific and technological innovation. Our state-of-the-art labs and cGMP clinical manufacturing plant are co-located in Seattle's South Lake Union neighborhood – the center of Seattle's medical, global health, and technology industries and a noted top emerging life science hub in the U.S. Our fast-growing team of 100 employees is expanding Just's innovative platform and footprint – building our first North American J.POD® commercial manufacturing facility in the Seattle area. For job opportunities, learn more at www.just.bio/careers.
https://www.justbiotherapeutics.com/facilities-maintenance-planner-scheduler?et_fb=1&PageSpeed=off
Introduction {#s1}
============

Eukaryotes utilize guanine nucleotides to regulate many intracellular processes, including the endomembrane vesicular trafficking system. In general, trafficking is turned on when a "switch" molecule is activated by binding to GTP; in contrast, the system is off when the molecule is in the inactive, GDP-bound form. This on-off action is cycled consecutively via control of the activity of the switch molecule. To maintain homeostasis, cells require the integrity of membrane trafficking resulting from accurate switch activity. Secretion-associated Ras-related (Sar) and ADP-ribosylation factor (Arf) small GTPase family proteins belong to the Ras superfamily and serve as such switch molecules for the precise operation of vesicular trafficking systems. The Sar/Arf proteins are highly conserved among species from yeast to mammals as well as in plants and are classified based on amino acid sequence homology. Sar1 was first identified as a multicopy suppressor of a temperature-sensitive *sec12* mutant in *S. cerevisiae* (Nakano et al., [@B60]). Higher eukaryotes have more than one Sar1 ortholog (e.g., two genes in vertebrates), whereas *S. cerevisiae* has only one. In contrast, several Arf genes are found in various species (three genes in yeast and six in mammals). The first Arf, Arf1, was cloned from bovine and yeast and was identified as the cofactor activating the ADP-ribosylation of a heterotrimeric G protein by cholera toxin *in vitro* (Kahn et al., [@B35]; Sewell and Kahn, [@B79]). Arf proteins are categorized into three classes; mammals have all three classes (I, II and III), whereas yeast and plants lack Class II. Arf1, belonging to Class I, is the best studied, especially with regard to its role in vesicular trafficking.
To date, a large number of genes of the Sar/Arf proteins have been identified in plants (Jurgens and Geldner, [@B33]; Vernoud et al., [@B100]), and the complementation of yeast mutants has been a useful tool to isolate and characterize these genes (D'Enfert et al., [@B19]; Kim et al., [@B36]; Takeuchi et al., [@B92], [@B93], [@B94]; De Craene et al., [@B18]). Regulators and other interacting proteins of these GTPases have also been identified in plants by the use of yeast mutants and amino acid sequence similarity to yeast and mammalian orthologs (Vernoud et al., [@B100]). In the genome of the model plant *Arabidopsis thaliana*, four *SAR1* genes exist, which form a small gene family; in contrast, there are 12 *ARF* genes, comprising a multiple gene family (Robinson et al., [@B75]). In comparison with *Arabidopsis* Sar1 proteins, *Arabidopsis* Arf proteins appear to be involved in many different vesicular trafficking routes. Their trafficking systems have diversified in the plant kingdom independently of other organisms and are deeply involved in several plant-specific features. Thus, the functional diversification of vesicular trafficking systems is key to understanding the multicellular development of higher plants.

Sar1/Arf1 small GTPases
-----------------------

Sar1/Arf1 proteins bidirectionally manage vesicular trafficking in the early secretory pathway between the ER and Golgi: the anterograde pathway from the ER to the Golgi depends on Sar1, whereas the opposite retrograde pathway from the Golgi to the ER depends on Arf1. Distinct sets of vesicles move forward and backward and transfer proteins and lipids via these pathways. The transport vesicles are covered with distinct sets of coat protein (COP) complexes and bud specifically from donor organelle membranes (Brandizzi and Barlowe, [@B13]).
COPII-coated vesicles are derived from the ER membrane and carry substances to the Golgi, and COPI-coated vesicles are derived from Golgi membranes to mediate transport back to the ER. To ensure proper vesicle formation, the COPI and COPII coats, which consist of completely different components, must be properly recruited to their respective organelle membranes. For this purpose, Sar1/Arf1 proteins are switched on and off as appropriate on the respective organelle membranes by specific regulators that convert their guanine nucleotide-binding state. Sar1/Arf1 proteins are primarily responsible for the recruitment of COP proteins to membranes and initiation of the formation of COP-mediated vesicles. These proteins share structural similarity despite limited sequence identity (Figure [1A](#F1){ref-type="fig"}). At the initial step of COP vesicle formation, the two proteins function as switch molecules by similar mechanisms. The Sar1/Arf1 proteins have a characteristic α-helix at the N-terminal end that is composed of approximately 20 hydrophobic and hydrophilic amino acid residues, giving it an amphipathic nature (Antonny et al., [@B2]; Bielli et al., [@B10]; Lee et al., [@B44]). When bound to GDP, Sar1/Arf1 proteins are cytosolic and inactive: the amphipathic α-helix is sequestered in a hydrophobic pocket on the protein surface. The exchange of GDP for GTP induces a conformational change into the active form (Figure [1B](#F1){ref-type="fig"}); in this conformation, a loop region flanked by β-sheets between the two switch domains (the so-called interswitch region) is displaced from the nucleotide-binding site, which forces the helix out of the hydrophobic pocket (Goldberg, [@B27]; Bi et al., [@B8]). However, the extruded helix requires engagement in a suitable hydrophobic environment because of its amphipathicity.
As a consequence of the hydrophobic face of the helix being laterally inserted into the outer leaflet of the lipid bilayer, GTP-loaded active Sar1/Arf1 proteins associate stably with the donor organelle membrane. ![**Small GTPase Sar1/Arf1 protein. (A)** Schematic diagrams of Sar/Arf. Conserved domains are depicted: the N-terminal amphipathic α-helix, two switch regions (switch 1 and switch 2) and the interswitch region. **(B)** The Sar/Arf protein cycles between membrane-association and dissociation. GDP-bound cytosolic Sar/Arf is inactive and carries the N-terminal amphipathic helix in a hydrophobic pocket. A guanine nucleotide exchange factor (GEF) mediates the exchange of GDP for GTP in Sar/Arf. GTP-loaded Sar/Arf undergoes a conformational change of the two switch and interswitch regions, triggering the extrusion of the helix from the pocket. Subsequently, the shallow insertion of the amphipathic helix into the outer leaflet of the lipid bilayers allows Sar/Arf to associate tightly with the membrane surface. For dissociation, GTPase activating protein (GAP) activates the GTP hydrolysis activity of Sar/Arf. Conserved domains are shown in the same color in **(A)** and **(B)**.](fpls-05-00411-g0001){#F1} The N-terminal helix is also used to deform the membrane (Bielli et al., [@B10]; Lee et al., [@B44]; Beck et al., [@B7]; Krauss et al., [@B39]; Lundmark et al., [@B49]). *In vitro* experiments show that when mixed with purified Sar1/Arf1 protein, liposomes are deformed into a highly curved tubular structure. This tubulation process requires the hydrophobicity of the N-terminal amphipathic helix. In contrast, a Sar1 mutant lacking the N-terminal helix is still able to deform an artificial liposome membrane *in vitro* (Stachowiak et al., [@B85]). When chemically bound to the lipid, the Sar1 mutant protein accumulates in the subdomain; in this case, the crowding of Sar1 on the membrane surface drives the tubulation. 
Tubular structures could physiologically serve as non-vesicular carriers for ER-Golgi transport, as suggested by several observations in various organisms, including yeast (Fatal et al., [@B23]; Mironov, [@B58]). Further study is necessary for a better understanding of the role of Sar1/Arf1-formed tubular structures in biological processes *in vivo*. Arf family proteins such as Arf1 possess a myristoyl modification on the N-terminal helix, which is required for biological activity as well as membrane association (Kahn et al., [@B35], [@B34]; Franco et al., [@B24]) (Figure [2](#F2){ref-type="fig"}). Sar1 does not undergo such modifications. Myristoylation occurs cotranslationally on the N-terminal second glycine residue of Arf, which is exposed after cleavage of the first methionine residue, and contributes to increased hydrophobicity of the N-terminal amphipathic helix. Without the exposed helix, GDP-loaded inactive Arf1 associates unstably with membranes solely through the myristoyl group. Hence, the hydrophobicity contributed by both the helix and the myristoyl group is required for stable membrane association of GTP-bound Arf1. ![**Assembly of COPII and COPI coats drives vesicle formation**. Vesicle formation starts upon the recruitment of Sar1 and Arf1 to the ER (lower) and Golgi membranes (upper), respectively. In COPII vesicle formation, the ER integral membrane protein Sec12 exchanges GDP for GTP bound to Sar1 through its GEF activity. Membrane-associated GTP-bound Sar1 recruits the inner coat Sec23/24 complex and then assembles along with cargo protein into the pre-budding complex. Outer coat Sec13/31 complexes are recruited to the pre-budding complexes and self-assembled by crosslinking. The polymerization of Sec13/31 by self-assembly drives membrane curvature to form a spherically shaped vesicle. COPI vesicle formation is also initiated by GDP-GTP exchange on Arf1 through the action of the GEF Gea protein (Gea1 or Gea2), which is peripherally located on the Golgi membrane.
GTP-bound Arf1 stably binds to the membrane by a myristoylated amphipathic helix, as does Sar1. The heptamer complex of the COPI coat is recruited *en bloc* and associates with cargo as well as two Arf1 molecules through the inner layer coat complex (β/γ/δ/ζ-COP). As in COPII, vesicles are formed upon polymerization of the outer coat (α/β′/ε-COP). The amphipathic helices of Sar1 and Arf1 also play a role in the scission of budded vesicles.](fpls-05-00411-g0002){#F2}

Guanine nucleotide exchange factors
-----------------------------------

In the presence of Mg^2+^ and liposomes, the spontaneous exchange of GDP for GTP occurs efficiently in Sar1/Arf1 proteins *in vitro* (Barlowe et al., [@B4]; Franco et al., [@B24]). However, GDP-GTP exchange *in vivo* relies on a catalytic protein called a guanine nucleotide exchange factor (GEF). Because free GTP is much more abundant than free GDP in cells, the release of GDP from the nucleotide-binding site is sufficient to complete GDP-GTP exchange. A specific GEF protein catalyzes the conversion of Sar1 and Arf1 from the inactive to the active state on the appropriate organelle membrane (Figure [1B](#F1){ref-type="fig"}). Sec12 is a type II transmembrane protein localized at the ER that acts exclusively as the GEF for Sar1. Sec12 was originally identified from yeast mutants defective in ER-Golgi transport (Novick et al., [@B65]). Its catalytic domain, facing the cytosol, is composed of a seven-bladed β-barrel structure from which a K^+^-bound loop (termed the K loop) extends (McMahon et al., [@B54]). Catalytically essential residues have been identified around the K loop. Accordingly, it is proposed that Sec12 contacts GDP-loaded Sar1 through this loop, mediating the conversion of Sar1 from the GDP-bound to the GTP-bound form. The transmembrane domain of Sec12 has an important role in its proper ER localization, ensuring strict Sar1 recruitment to the ER membrane.
In yeast, when Sec12 escapes from the ER, the Golgi protein Rer1 retrieves it back to the ER by recognizing its transmembrane domain (Sato et al., [@B78]). In contrast, Arf GEF proteins are more diverse (Anders and Jurgens, [@B1]). Unlike Sec12, Arf GEF proteins associate peripherally with membranes and possess a conserved Sec7 domain through which they exert their GEF activity. In yeast, Arf1 has four GEF proteins, two of which, Gea1 and Gea2, play redundant roles in Arf1 activation to regulate COPI vesicle formation (Peyroche et al., [@B69], [@B68]). The Gea protein is soluble and cycles between the cytosol and the *cis*-Golgi membrane, to which it is partially recruited. The Golgi transmembrane protein Gmh1 was identified as an interactor of Gea and a potential candidate for mediating the membrane recruitment of the Gea protein (Chantalat et al., [@B15]). However, as Gmh1 depletion did not strongly affect the membrane association of the Gea protein, it remains unknown how Gea associates with membranes. Although *in vitro* reconstitution experiments have clearly demonstrated that Sec12 and Gea constitutively facilitate GDP-GTP exchange on Sar1 and Arf1, respectively (Peyroche et al., [@B69]; Futai et al., [@B25]), it is unclear whether and how such catalytic activity is controlled *in vivo*.

GTPase activating proteins and their regulation
-----------------------------------------------

GTP-locked mutant forms of Sar1 and Arf1 display dominant-negative effects, indicating that completion of the GTPase cycle is physiologically essential (Kahn et al., [@B34]; Saito et al., [@B77]). Sar1/Arf1 proteins display little or no intrinsic GTP hydrolysis activity; instead, each protein has specific GTPase-activating protein (GAP) partners, which catalyze the reaction opposite to that of the GEF (Figure [1B](#F1){ref-type="fig"}). When a GAP stimulates GTP hydrolysis, the Sar1/Arf1 protein is inactivated and dissociates from the membrane.
However, their physiological functions extend beyond that. After association with the ER membrane, GTP-bound active Sar1 recruits the COPII coat subunit Sec23/24, a heterodimer complex, from the cytosol (Matsuoka et al., [@B53]) (Figure [2](#F2){ref-type="fig"}). Within the complex, Sec23 interacts directly with Sar1 and also acts as its GAP (Yoshihisa et al., [@B107]). The crystal structure reveals the molecular mechanism by which Sec23 stimulates Sar1 GTPase activity: Sec23 inserts a key arginine residue into the active site of Sar1 (Bi et al., [@B8]). However, Sec23-stimulated GTPase activation of Sar1 alone is too inefficient to trigger full coat disassembly (Antonny et al., [@B3]). By binding to their cytosolic tails, Sec24 captures transmembrane cargo proteins as well as the adaptor/receptor proteins for soluble cargo residing in the ER lumen (Miller et al., [@B55]; Mossessova et al., [@B59]). The Sar1/Sec23/24/cargo complex, termed the pre-budding complex, is sufficiently stable to prevent coat disassembly. Conversely, when the Sar1/Sec23/24 complex fails to capture cargo through Sec24, it dissociates from the membrane, and each dissociated protein is recycled to form the pre-budding complex properly (Koizumi et al., [@B38]). In yeast, an ER integral membrane protein, Sed4, potentially plays a role in this process (Espenshade et al., [@B21]; Kodera et al., [@B37]). Sed4 interacts directly with Sar1 but has no GEF activity toward Sar1, despite the high similarity of its N-terminal cytosolic domain to that of Sec12. Instead, Sed4 stimulates both intrinsic and Sec23-mediated Sar1 GTPase activity and accelerates coat disassembly, but only in the absence of cargo proteins. Thus, Sed4 is proposed to promote efficient recycling of the coat and Sar1 by disassembling Sar1-Sec23/24 complexes that are free of cargo. Further study is required to clarify the mechanisms of Sar1 GTPase activation by Sed4.
At the next step, the pre-budding complex recruits Sec13/31 heterotetramer complexes, which form the outer coat and cross-link adjacent pre-budding complexes via polymerization (Matsuoka et al., [@B53]; Tabata et al., [@B91]) (Figure [2](#F2){ref-type="fig"}). A cryo-electron microscopy study revealed that purified Sec13/31 complexes self-assemble into a spherical lattice-like structure in solution, the size and shape of which closely match those of COPII vesicles (Stagg et al., [@B86]). Accordingly, lateral Sec13/31 polymerization could incorporate the cargo into a nascent vesicle and simultaneously drive membrane curvature to form the precise vesicular shape. In addition to this scaffolding role, Sec31 acts as a stimulator of Sec23 GAP activity. The crystal structure of the active fragment of Sec31 bound to the Sar1/Sec23 complex provides insight into the mechanism of GAP stimulation (Bi et al., [@B9]). Sec31 stimulates GAP activity through a C-terminal proline-rich domain. Within the complex, this domain binds across the extended surface of Sec23 and Sar1 and reaches into the active site to optimize the orientation of the catalytically important histidine residue of Sar1. Taken together, full activation of Sar1 GTP hydrolysis is a two-step process involving Sec23 GAP activity and its stimulation by Sec31. This two-step activation system has been successfully reproduced in minimal reconstitution experiments with liposomes (Antonny et al., [@B3]). In these experiments, however, full activation of the Sar1 GTPase causes the membrane-associated coat to disassemble rapidly. There are at least two potential solutions to this paradox, as mentioned above: constitutive Sec12 GEF activity and the stabilization of the pre-budding complex by cargo molecules (Futai et al., [@B25]; Koizumi et al., [@B38]). In addition, a peripheral ER-membrane protein, Sec16, acts as a GAP inhibitor to contribute to stable coat assembly.
Sec16 is essential for COPII vesicle formation *in vivo* and interacts with all of the COPII coat proteins through its multiple domains at the vesicle-formation sites on the ER, termed ER exit sites (ERES) (Espenshade et al., [@B21]; Shaywitz et al., [@B80]; Supek et al., [@B90]; Connerly et al., [@B16]). Recent studies have reported that Sec16 not only functions as a recruiter of the coat but also modulates the interaction of Sec31 with the Sar1/Sec23/24 complex (Kung et al., [@B40]; Yorimitsu and Sato, [@B106]). Although the detailed mechanisms remain to be elucidated, Sec16 might interfere with the catalytic interaction between the active domain of Sec31 and the Sar1/Sec23 complex. GTP-bound Arf1 primes COPI coat assembly on the Golgi membrane. The COPI coat is composed of seven proteins, α- (Cop1), β- (Sec26), β′- (Sec27), γ- (Sec21), δ- (Ret2), ε- and ζ-COP (Ret3), with the corresponding yeast proteins in parentheses. Although biochemically separable into two subcomplexes, α/β′/ε-COP and β/γ/δ/ζ-COP, the COPI coat complex is recruited *en bloc* to the Golgi membrane through direct interaction with membrane-bound Arf1 (Hara-Kuge et al., [@B30]) (Figure [2](#F2){ref-type="fig"}). α/β′/ε-COP forms the cage-like structure of the outer layer coat, which resembles the Sec13/31 and clathrin structures (Lee and Goldberg, [@B43]). β/γ/δ/ζ-COP serves as the inner coat that captures cargo proteins, with γ/ζ-COP being structurally similar to the α/σ adaptins of the AP2 clathrin adaptor (Yu et al., [@B108]). Two molecules of Arf1 interact with the inner COPI coat through the β-COP and γ-COP subunits. Although largely analogous to the COPII coat, the COPI coat itself has no GAP function. Instead, the separate GAP protein ArfGAP1 serves as the Arf1 GTPase activator in COPI vesicle formation.
Similar to the COPII system, the β-COP and γ-COP coat subunits possess the ability to promote GAP activity toward Arf1 in solution (Yu et al., [@B108]), although the mechanisms remain to be elucidated. In reconstitution systems with synthetic liposomes, GAP is not always essential for vesicle formation (Spang et al., [@B83]). In other cases, GAP promotes COPI vesicle formation, with cargo proteins playing a key role. As in pre-budding complex formation in COPII, when no cargo is present, Arf1 GTP hydrolysis drives dissociation of the coat from the membrane so that the coat can be reused until cargo is captured. Successful Arf1/COPI/cargo complexes proceed to the vesicle-forming step through coat polymerization, and finally cargo-containing COPI vesicles bud off (Spang et al., [@B84]; Shiba and Randazzo, [@B82]). In yeast, Gcs1 and Glo3 serve as the Arf1 GAPs in COPI vesicle formation. These two proteins are distinct in structure and partially overlap in function (Poon et al., [@B73]). Gcs1 belongs to the ArfGAP1 family, whereas Glo3 belongs to the ArfGAP2/3 family; both have conserved GAP catalytic domains at the N-terminus. Only Gcs1 has a specific lipid-binding motif at the C-terminus, which mediates preferential association with highly curved membranes, a property suggested to play a role in regulating Gcs1 GAP activity (Bigay et al., [@B11]). In contrast, Glo3 has no obvious lipid-binding motif. Yeast mutants lacking either the *GCS1* or the *GLO3* gene grow well, whereas the double-deletion mutant is lethal (Poon et al., [@B73]). When overexpressed, Gcs1 and Glo3 suppress the lethality of a malfunctional Arf1 mutant, and they may also function in the formation of the Arf1/COPI coat/cargo complex, which again supports the idea that Arf1 GAPs play a positive role in vesicle formation (Zhang et al., [@B109]).
The precise functions of GAPs remain controversial, and further studies are required for a comprehensive model. As mentioned above, one functional role of GTP hydrolysis is the efficient formation of selective cargo-incorporated vesicles by cycling the coat through disassembly and assembly until cargo is successfully captured. A second role is suggested in the scission of the budded vesicle from the membrane. It was shown that release of the COPII vesicle is inhibited in the presence of a Sar1 mutant lacking the N-terminal amphipathic helix or GTP hydrolysis activity. However, there is conflicting evidence showing that vesicles are successfully formed in the presence of the non-hydrolyzable GTP analog GMP-PNP. Another role is coat dissociation from the completed vesicle prior to arrival at the acceptor organelle. Fusion with the organelle membrane requires at least a partially uncoated, naked region of the vesicle membrane. Uncoating through the inactivation of Sar1/Arf1 proteins can employ activated GAP. However, it is unknown when and how GAP is activated to hydrolyze GTP on Sar1/Arf1 proteins. It was shown that only a small amount of Sar1 protein is detected on isolated vesicles (Barlowe et al., [@B5]). It is possible that during budding and/or just before completion, at least a portion of the Sar1/Arf1 proteins is released from the membrane surface of coated vesicles by the action of GAP. This fits with the observation that the TRAPPI complex and Ypt1, which serve in the tethering event, can bind to COPII vesicles through an interaction with Sec23 after Sar1 is depleted (Cai et al., [@B14]; Lord et al., [@B48]). Subsequently, at the Golgi surface, the Golgi-associated protein kinase Hrr25 phosphorylates Sec23/24 to release the coat and eventually promote vesicle fusion. In this model, however, it remains unknown how coat proteins are retained on forming and formed vesicles without the action of Sar1.
Plant vesicular trafficking
---------------------------

In plant cells, the secretory pathway exports a variety of proteins to the cell surface and is essential for cell expansion and elongation. The major molecular components of secretory systems are well conserved among eukaryotes. However, the morphological properties of the secretory organelles diverge greatly between plants and mammals. Characteristic features of plant secretion, such as the formation of cell plates during cytokinesis, a polydisperse mobile Golgi apparatus and the lack of an intermediate compartment between the ER and the Golgi apparatus, are reflected in the many plant-specific molecules involved in the maintenance and regulation of vesicular trafficking through the secretory pathway. The development and morphogenesis of higher plants require the strict regulation of vesicular trafficking. To elucidate vesicular trafficking in plants, fluorescent proteins have been extensively utilized for the visualization of proteins localizing to various membrane-bound compartments. The advent of live cell imaging using fluorescently tagged proteins has provided unprecedented insight into the movement of proteins and their interactions in plant studies.

Plant Sar1 proteins and their interacting proteins
--------------------------------------------------

To investigate the roles of Sar1 proteins in plant cells, a system that utilizes their dominant-negative mutants was established. Based on knowledge accumulated in yeast and mammalian studies, mutants that are virtually locked in the GTP- or GDP-bound state, and which act dominantly over the wild-type protein, were constructed. By transient expression of such dominant mutants of the *Arabidopsis* Sar1 protein (AtSar1) together with green fluorescent protein (GFP)-tagged marker proteins, it was demonstrated that AtSar1 is required for transport from the ER to the Golgi apparatus (Takeuchi et al., [@B93]).
This transient expression system in plant cells also provides a tool to manipulate membrane traffic with GTPase mutants, even when their effects are toxic for cell growth. Similar molecular approaches have also been successfully applied to the study of other GTPases and secretory genes (Batoko et al., [@B6]; Phillipson et al., [@B70]). Nevertheless, it took time before the conditional expression of dominant-negative plant Sar1 became possible in stable transgenic plants (Osterrieder et al., [@B66]). The inducible expression of a GTP-locked mutant of tobacco Sar1 enabled the investigation of protein dynamics after the blockade of ER-to-Golgi transport at the electron microscopic level (Osterrieder et al., [@B66]). The *Arabidopsis* genome encodes multiple isoforms of the COPII components: four Sar1, seven Sec23, three Sec24, two Sec13, and two Sec31 (Robinson et al., [@B75]) (Table [1](#T1){ref-type="table"}). For the Sar1, Sec24 and Sec13 isoforms, functional complementation has been demonstrated through expression studies in yeast (D\'Enfert et al., [@B19]; Takeuchi et al., [@B92]; De Craene et al., [@B18]). Functional heterogeneity among plant Sar1 isoforms has also been reported (Hanton et al., [@B29]). Yellow fluorescent protein (YFP) fusions of two *Arabidopsis* Sar1 isoforms localized differently at the ERES and partitioned to different extents into the membrane fraction. In addition, functional analyses using a secretion marker protein indicated that overexpression of the GTP-locked mutants of the two Sar1 isoforms caused different levels of ER export inhibition (Hanton et al., [@B29]). In rice (*Oryza sativa*), four Sar1 isoforms (OsSar1a, OsSar1b, OsSar1c, and OsSar1d) have been cloned and characterized (Tian et al., [@B99]).
Gene suppression experiments by RNA interference revealed that knockdown of any single OsSar1 isoform produced no obvious phenotype, whereas simultaneous knockdown of OsSar1a/b/c resulted in floury and shrunken seeds and generated numerous novel protein bodies with highly electron-dense matrixes containing both glutelin and α-globulin in the endosperm of transgenic plants, suggesting functional redundancy among the rice Sar1 isoforms (Tian et al., [@B99]).

###### ***Arabidopsis* Sar1 proteins and their interacting proteins**.

| Protein  | Other name | AGI numbers | Function or putative function |
|----------|------------|-------------|-------------------------------|
| AtSARA1a | AtSar1     | At1g09180   | Small GTPase |
| AtSARA1b |            | At1g56330   | Small GTPase |
| AtSARA1c | AtSar2     | At4g02080   | Small GTPase |
| AtSARA1d |            | At3g62560   | Small GTPase |
| Sec12    |            | At2g01470   | Guanine nucleotide exchange factor (GEF) for Sar1 |
| Sec23    |            | At3g23660, At1g05520, At5g43670, At4g14160, At2g21630, At4g01810, At2g27460 | GTPase-activating protein (GAP) for Sar1 |
| Sec24    |            | At3g07100, At3g44340, At4g32640 | Coat protein for COPII vesicle |
| Sec13    |            | At3g01340, At2g30050 | Coat protein for COPII vesicle |
| Sec31    |            | At1g18830, At3g63460 | Coat protein for COPII vesicle |
| Sec16    |            | At5g47480, At5g47490 | Scaffold protein at ER exit sites |

*Arabidopsis* plants possess three Sec24 isoforms (AtSec24A, AtSec24B and AtSec24C) (Robinson et al., [@B75]). An *Arabidopsis* missense recessive *sec24A* mutant showed an aberrant phenotype, in which Golgi membrane markers and a soluble secretory marker partially accumulated in globular structures composed of large amounts of convoluted ER membranes (Faso et al., [@B22]; Nakano et al., [@B61]). Only AtSec24A, but not the other AtSec24 isoforms, could complement these mutant phenotypes. The complete loss of *sec24A* function was lethal, suggesting that *AtSEC24A* is an essential gene.
In contrast, AtSec24B knockout plants showed only mild male sterility, with reduced pollen germination, and AtSec24C knockdown plants showed an aberration in female gametogenesis (Tanaka et al., [@B97]). These results suggest that functional diversification of plant COPII components occurred in the regulation of the plant early secretory pathway to maintain the dynamic identity of the secretory organelles. In recent studies of plant vesicular trafficking, the spatial relationship between the ERES and the Golgi apparatus has been a matter of controversy, because exit from the ER has been difficult to visualize and interpretations of the same observations have not reached a consensus. To explain these contradictions, two models of the organelle relationship have been proposed by two groups (Dasilva et al., [@B17]; Yang et al., [@B103]). One model predicts that protein export from the ER occurs via the sequential recruitment of inner and outer COPII components to form transport intermediates at mobile, Golgi-associated ERES. The other model predicts that the Golgi apparatus is not continually linked to a single ERES; instead, Golgi stacks associate intermittently, and sometimes concurrently, with several ERES as they move around in the cells. It was proposed that general differences in Golgi motility between plant leaves and suspension cells could account for these discrepancies (Marti et al., [@B51]). More recently, to investigate the plant ER import sites (ERIS), where retrograde COPI vesicles fuse, fluorescence imaging experiments were performed; they revealed that a Golgi-associated mobile domain of the ER, labeled by SYP72-YFP, plays a pivotal role as a common platform for COPI vesicle fusion and COPII vesicle budding (Langhans et al., [@B42]; Lerich et al., [@B46]).
Furthermore, COPI vesicle fusion with the ER is restricted to periods when Golgi stacks are stationary; when the stacks are moving, both COPII and COPI vesicles are instead tethered and collected at the ER-Golgi interface. These findings established a new model in which the Golgi stack and an associated domain of the ER constitute a mobile secretory and recycling unit (Langhans et al., [@B42]; Lerich et al., [@B46]). This is a characteristic feature of plant cells. Most recently, another imaging study using a yeast system reported that direct contact of the *cis*-Golgi with the ERES executes cargo capture and delivery from the ER (Kurokawa et al., [@B41]). The structural relationship between the mobile unit system in plants and the direct contact system in yeast remains to be clarified in future studies. In higher plants, a functional differentiation has emerged in the mechanism of protein export from the ER (Gonzalez et al., [@B28]). In *Arabidopsis*, the PHOSPHATE TRANSPORTER1 (PHT1) gene family encodes phosphate (Pi) transporters that play a fundamental role in Pi acquisition and remobilization in plants. Mutation of PHOSPHATE TRANSPORTER TRAFFIC FACILITATOR1 (PHF1) impaired Pi transport, resulting in the constitutive expression of many Pi starvation-induced genes and reduced Pi accumulation (Gonzalez et al., [@B28]). PHF1 belongs to a plant-specific protein family conserved in *Arabidopsis*, rice and tomato. These proteins are structurally related to SEC12 proteins but lack most of the conserved domains of SEC12 proteins essential for guanine nucleotide exchange factor activity. *Arabidopsis* PHF1 was found to be localized to the ER, and its mutation caused ER retention and reduced accumulation of the plasma membrane transporter PHT1;1. In contrast, neither the plasma membrane localization nor the secretion of other proteins was affected in this mutant.
These results indicate that plants have evolved a novel mechanism that enables the cargo-specific export of Pi transporters from the ER.

Plant Arf proteins and their interacting proteins
-------------------------------------------------

The fact that 12 Arf proteins exist in *Arabidopsis* indicates that substantial functional specialization has occurred in the plant Arf protein family (Table [2](#T2){ref-type="table"}). Moreover, considering that in yeast a single Arf1 protein executes multiple functions, including roles in Golgi-to-ER retrograde traffic and post-Golgi traffic (Yahara et al., [@B102]), elucidating all the functions of the plant Arf proteins is quite difficult. Consequently, only limited information has accumulated for plant Arf proteins.

###### ***Arabidopsis* Arf proteins and their interacting proteins**.

| Protein  | Other name   | AGI numbers | Function or putative function |
|----------|--------------|-------------|-------------------------------|
| AtARFA1a | ARF1A/AtArf1 | At1g23490   | Small GTPase |
| AtARFA1b | ARF1A        | At5g14670   | Small GTPase |
| AtARFA1c | ARF1A/BEX1   | At2g47170   | Small GTPase |
| AtARFA1d | ARF1A        | At1g70490   | Small GTPase |
| AtARFA1e | ARF1A        | At3g62290   | Small GTPase |
| AtARFA1f | ARF1A        | At1g10630   | Small GTPase |
| AtARFB1a | ARF1B/ARFB   | At2g15310   | Small GTPase |
| AtARFB1b | ARF1B        | At5g17060   | Small GTPase |
| AtARFB1c | ARF1B        | At3g03120   | Small GTPase |
| AtARFC1  | ARF1C        | At3g22950   | Small GTPase |
| AtARFD1a | ARF1D        | At1g02440   | Small GTPase |
| AtARFD1b | ARF1D        | At1g02430   | Small GTPase |
| Coatomer α |            | At1g62020, At2g21390 | Coat protein for COPI vesicle |
| Coatomer β |            | At4g31480, At4g31490 | Coat protein for COPI vesicle |
| Coatomer β′ |           | At1g52360, At3g15980, At1g79990 | Coat protein for COPI vesicle |
| Coatomer γ |            | At4g34450   | Coat protein for COPI vesicle |
| Coatomer δ |            | At5g05010   | Coat protein for COPI vesicle |
| Coatomer ε |            | At2g34840, At1g30630 | Coat protein for COPI vesicle |
| Coatomer ζ |            | At1g60970, At3g09800, At1g08520 | Coat protein for COPI vesicle |

Thus far, the ARF1A subclass is the best-characterized group of the
*Arabidopsis* Arf protein family (Robinson et al., [@B75]). One of the *Arabidopsis* Arf1A proteins can complement the lethality of the yeast *arf1 arf2* deletion mutant, and its GFP fusion is localized to the Golgi apparatus in plant cells, as is its animal counterpart (Takeuchi et al., [@B94]). *In vitro* COPI-vesicle-generation experiments have demonstrated that Arf1A as well as γ-COP can be recruited from the cytosol onto mixed ER/Golgi membranes in cauliflower. The presence of plant COPI vesicles was confirmed by *in vitro* vesicle budding assays coupled with immunogold negative staining (Pimpl et al., [@B72]). Molecular approaches utilizing the transient expression of GDP- or GTP-locked *Arabidopsis* Arf1A mutants caused an abrogation of ER-to-Golgi transport and a redistribution of GFP- or YFP-tagged Golgi membrane marker proteins into the ER in plant cells (Lee et al., [@B45]; Takeuchi et al., [@B94]; Stefano et al., [@B87]). These results suggest that plant Arf1A proteins execute highly conserved functions in the formation of COPI vesicles at the Golgi apparatus by recruiting COPI coat complexes to the membranes (Letourneur et al., [@B47]; Pimpl et al., [@B72]; Robinson et al., [@B75]). In addition to the functional Golgi localization of Arf1A proteins, there is strong evidence that Arf1A localizes to the trans-Golgi network (TGN) and to FM4-64-positive compartments (endosomes). A protoplast-based transient expression study showed that Arf1A proteins are required for the post-Golgi sorting of soluble vacuolar proteins, implying that Arf1A is involved in clathrin-coated vesicle formation at the TGN (Pimpl et al., [@B71]). The TGN localization of Arf1A was visualized by immunofluorescence studies using transgenic *Arabidopsis* lines expressing SYP61-CFP and VHA-a1-GFP as TGN markers (Paciorek et al., [@B67]; Tanaka et al., [@B96]).
Other expression studies using Arf1A-GFP and immunogold electron microscopy demonstrated that Arf1A also localizes to FM4-64-positive compartments distinct from the Golgi apparatus (Xu and Scheres, [@B101]; Stierhof and El Kasmi, [@B89]). In contrast, a recent study of plant-powdery mildew interactions reported that barley (*Hordeum vulgare*) ARFA1b/1c localizes to multivesicular bodies (MVBs) and is required for callose deposition in papillae, leading to penetration resistance against pathogenic fungi (Bohlenius et al., [@B12]). However, because the supporting evidence for this MVB localization is insufficient, it remains unclear whether Arf1A is localized to MVB compartments (Robinson et al., [@B76]). Together, these results indicate that a single Arf1A protein exerts multiple functions in Golgi-to-ER retrograde traffic, post-Golgi traffic and endocytic traffic in plant endomembrane trafficking. Although Arf1A proteins have been studied primarily at the single-cell level, the developmental functions of the ARF1A subclass of the *Arabidopsis* Arf family are still unknown at the whole-plant or tissue level. This is because the six virtually identical ARF1A genes are ubiquitously expressed, and single loss-of-function mutants in these genes revealed no obvious developmental phenotypes. To address the mechanism determining the apical-basal polarity of epidermal cells during plant development, dominant-negative Arf1A mutants were conditionally expressed from a heat-shock-inducible system in transgenic *Arabidopsis*. The expression of GDP- or GTP-locked Arf1A mutants abolished root hair outgrowth at the early stages of root epidermal cell differentiation (Xu and Scheres, [@B101]).
However, unlike in the GDP-locked Arf1A-mutant-expressing lines, proper root hair formation recovered abruptly in the GTP-locked Arf1A-mutant-expressing lines within 2 days, indicating that the effects of the GTP-locked Arf1A mutant on root hair formation were reversible. This difference between the effects of the GTP- and GDP-locked Arf1A mutants on epidermal cell polarity might suggest that they act on different target molecules with distinct protein levels or stabilities. The heat-shock-inducible expression study also revealed that the plasma membrane localization of a GFP-tagged auxin transporter, PIN2-GFP, was affected more slowly upon Arf1A manipulation than that of Golgi and endocytic markers. These studies enabled the dissection of the Arf1A functions involved in local and specific aspects of cell polarity (Xu and Scheres, [@B101]). Plant Arf proteins are key molecules in the regulation of vesicular trafficking during the multicellular development of plants and are postulated to control both housekeeping functions and plant-specific functions through interactions with their interacting proteins. Nonetheless, most of the molecular mechanisms of the plant-specific functions remain to be elucidated. Unlike Arf1A proteins, which are targeted to the Golgi and post-Golgi structures, one of the ARF1B subclass proteins, ARFB, is localized to the plasma membrane (PM) (Matheson et al., [@B52]). This PM localization is a feature shared with the mammalian class III Arf6 proteins, which play crucial roles in endocytic transport together with specific regulators, such as EFA6 and SMAP1, in mammalian systems (Macia et al., [@B50]; Tanabe et al., [@B95]). However, it is largely unknown whether ARFB is involved in the regulation of plant-specific functions, including endocytic transport, because information about ARFB is insufficient. The functions and localization of the plant Arf proteins belonging to the other subclasses are largely unknown.
Recent studies have revealed that the functional differentiation of the regulator proteins for plant Arf proteins could contribute to the plant-specific functions of vesicular traffic (Table [3](#T3){ref-type="table"}). The *Arabidopsis* Arf-GAP protein family consists of 15 members containing the conserved GAP domain, designated ARF-GAP domain (AGD) proteins (Vernoud et al., [@B100]). Each AGD protein localizes to a specific cellular membrane compartment. For instance, AGD7 localizes to the Golgi apparatus, where its overexpression was found to inhibit the Golgi localization of γ-COPs and to induce the relocation of Golgi membrane proteins into the ER in both protoplasts and transgenic plants (Min et al., [@B57]). Its closely related homologs, AGD8 and AGD9, also localize to the Golgi and are required for the maintenance of Golgi morphology (Min et al., [@B56]). Gene knockdown experiments by RNA interference revealed that low-level expression of AGD8 and AGD9 induced abnormal Golgi morphology, inhibition of protein trafficking, and arrest of plant growth and development. Conversely, high-level expression of AGD8 and AGD9 promoted Arf1A recruitment to the Golgi and suppressed the Golgi disruption and vacuolar trafficking defects caused by overexpression of AGD7 (Min et al., [@B57], [@B56]). Imaging studies using protoplasts showed that AGD7, AGD8 and AGD9 can recruit a GDP-locked Arf1A mutant (Arf1 T31N) from the cytosol to the Golgi. Thus, the Golgi-localized ARF-GAPs (AGD7, AGD8 and AGD9) fulfill redundant functions in Arf1A-mediated protein trafficking, which is essential for plant development and growth. Another AGD protein, AGD5, is localized to the TGN, where it co-localizes with Arf1A proteins, the central GTPases that play essential roles in plant membrane trafficking at the Golgi and post-Golgi structures.
The transient expression of a mutant AGD5 protein with impaired ARF-GAP activity caused prolonged recruitment of Arf1A to the membranes, indicating that GTP hydrolysis on Arf1A was impaired by the defective GAP (Stefano et al., [@B88]). These results define a role of AGD5 as a TGN-localized ARF-GAP for Arf1A.

###### **Regulators for *Arabidopsis* Arf proteins**.

| Protein | Other name | AGI numbers | Function or putative function |
|---------|------------|-------------|-------------------------------|
| AGD1 | | At5g61980 | GTPase-activating protein (GAP) for Arf |
| AGD2 | | At1g60680 | GTPase-activating protein (GAP) for Arf |
| AGD3 | VAN3/SCARFACE/SFC | At4g13300 | GTPase-activating protein (GAP) for Arf |
| AGD4 | | At1g10870 | GTPase-activating protein (GAP) for Arf |
| AGD5 | NEV/MTV4 | At5g54310 | GTPase-activating protein (GAP) for Arf |
| AGD6 | | At3g53710 | GTPase-activating protein (GAP) for Arf |
| AGD7 | | At2g37550 | GTPase-activating protein (GAP) for Arf |
| AGD8 | | At4g17890 | GTPase-activating protein (GAP) for Arf |
| AGD9 | | At5g46750 | GTPase-activating protein (GAP) for Arf |
| AGD10 | RPA | At2g35210 | GTPase-activating protein (GAP) for Arf |
| AGD11 | CML3/CALMODULIN-LIKE3 | At3g07490 | GTPase-activating protein (GAP) for Arf |
| AGD12 | ZAC | At4g21160 | GTPase-activating protein (GAP) for Arf |
| AGD13 | | At4g05330 | GTPase-activating protein (GAP) for Arf |
| AGD14 | ZIGA4 | At1g08680 | GTPase-activating protein (GAP) for Arf |
| AGD15 | | At3g17660 | GTPase-activating protein (GAP) for Arf |
| Sec7-type ARF-GEF | | At4g35380, At4g38200, At1g01960, At3g60860, At3g43300 | Guanine nucleotide exchange factor (GEF) for Arf |
| GNOM-type ARF-GEF | | At1g13980 (GNOM), At5g39500 (GNL1), At5g19610 (GNL2) | Guanine nucleotide exchange factor (GEF) for Arf |

Several AGD proteins are specifically expressed in particular plant tissues and play essential roles in tissue formation.
AGD12/ZAC has a novel domain structure in which the N-terminal ARF-GAP domain containing a zinc finger and the C-terminal C2 domain are separated by a region without homology to other known proteins (Jensen et al., [@B31]). Expression analyses using a Zac promoter/beta-glucuronidase reporter revealed the highest expression levels in flowering tissues, rosettes and roots. The ZAC protein was mainly associated with membranes that co-fractionated with Golgi and plasma membrane marker proteins. Recombinant ZAC was found to possess GTPase-activating activity on *Arabidopsis* Arf1A proteins, and the ZAC N-terminal region has significant binding activity for phosphatidylinositol 3-monophosphate. These data indicate a role for ZAC in the regulation of ARF-mediated vesicular traffic. Another type of AGD protein, AGD1, is an ARF-GAP protein containing a phosphoinositide-binding pleckstrin homology (PH) domain. *agd1* mutants have root hairs that exhibit wavy growth and two tips originating from a single initiation point. These root hair defects were associated with the bundling of microtubules and filamentous actin that extended to the root hair apex. Characterization of the *agd1* mutant provided evidence that AGD1 plays crucial roles in root hair development through cross-talk among phosphoinositides, the cytoskeleton and other signals mediated by other GTPases (Yoo et al., [@B105]; Yoo and Blancaflor, [@B104]). A different type of AGD protein, VAN3/AGD3, contains four domains: a BAR (BIN/amphiphysin/RVS) domain, a PH domain, an ARF-GAP domain and an ankyrin (ANK)-repeat domain. VAN3 plays a pivotal role in plant venation continuity, and the recombinant protein showed GTPase-activating activity on Arf1A and a specific affinity for phosphatidylinositol. VAN3 localizes at the plasma membrane as well as in intracellular structures, including the TGN (Koizumi et al., [@B38]; Naramoto et al., [@B64], [@B62]).
Single-molecule fluorescence imaging showed that VAN3 localizes to discrete foci at the plasma membrane that are associated with the endocytic vesicle coat protein clathrin. Imaging studies using transgenic plants revealed that VAN3 activities are required for the endocytosis and internalization of plasma membrane proteins, including PIN-type auxin transporters. The functions and localization of other AGD proteins remain to be elucidated. The *Arabidopsis* ARF-GEF protein family consists of 8 members containing the conserved Sec7 domain, and they are classified into two groups: Sec7-type and GNOM-type (Robinson et al., [@B75]) (Table [3](#T3){ref-type="table"}). All the *Arabidopsis* ARF-GEFs belong to the GBF/BIG family of animal ARF-GEFs, whereas no ARNO-family member exists in *Arabidopsis*; this is a plant-specific feature of ARF-GEF divergence. *Arabidopsis gnom/emb30* mutants were isolated as embryonic-lethal mutants with impairments in the first zygotic cell division and apical-basal pattern formation during early developmental processes (Shevell et al., [@B81]). These mutant phenotypes are very similar to those caused by inhibition of the polarized transport of the plant hormone auxin. The GNOM/EMB30 gene encodes an ARF-GEF protein that localizes to endosomes and contributes to their structural integrity. This protein is a brefeldin A (BFA)-sensitive ARF-GEF that is required for the proper polar localization of an auxin transporter, PIN1. A molecular approach utilizing an engineered BFA-resistant version of GNOM demonstrated that GNOM is specifically required for the recycling of auxin transport components from endosomes. In contrast, the *Arabidopsis* GNOM-LIKE1 (GNL1) protein is a BFA-resistant ARF-GEF that localizes to the Golgi but is also required for selective internalization from the plasma membrane in the presence of BFA.
This is consistent with the experimental result that internalization of an auxin efflux carrier, PIN2, was selectively inhibited in BFA-treated *gnl1* roots. Taken together, these results suggest that both GNOM and GNL1 proteins are involved in the selective endocytosis of auxin transport components, indicating that the evolution of endocytic trafficking in plants is correlated with the neofunctionalization of GNOM-type ARF-GEFs (Geldner et al., [@B26]; Teh and Moore, [@B98]). GNL1 and GNOM are multi-functional proteins and execute multiple roles in protein trafficking. Molecular-genetic experiments based on the introduction of an engineered BFA-sensitive GNL1 into a *gnl1* knockout background revealed that addition of BFA induced both a block in ER-Golgi traffic and a release of γ-COP into the cytosol (Richter et al., [@B74]). These results suggest that GNL1 is specifically required for retrograde transport from the Golgi to the ER and for maintenance of Golgi integrity. GNL1 is one of the major regulators of ER-Golgi traffic, but one or more additional BFA-sensitive ARF-GEFs must contribute to this regulation, because a *gnl1* knockout mutant exhibits no defects in ER-Golgi traffic or Golgi integrity (Richter et al., [@B74]). The most recent study indicates that GNOM primarily localizes to the Golgi apparatus and that GNOM and GNL1 are colocalized at distinct subdomains on Golgi cisternae (Naramoto et al., [@B63]). Short-term BFA treatment stabilizes GNOM at the Golgi, whereas prolonged exposure results in GNOM translocation to the TGN/endosomes. These data are supported by the fact that GNOM can partially replace the function of GNL1 in Golgi-to-ER retrograde transport, implying that GNOM might be a minor regulator of ER-Golgi traffic (Richter et al., [@B74]). As for the cargo specificity of endocytic traffic, *gnom* and *gnl1* mutants showed complicated phenotypes.
In *gnom* mutant lines expressing BFA-resistant GNOM, a cytokinesis-specific syntaxin, KNOLLE, still aggregated into large BFA bodies in the presence of BFA, suggesting that KNOLLE traffic in cytokinetic cells is dependent on other BFA-sensitive ARF-GEFs (Geldner et al., [@B26]). In contrast, PIN2 and PM-ATPase showed partial BFA resistance in the same lines, indicating that the recycling rates or transport routes of these molecules might differ between individual cells and roots (Geldner et al., [@B26]). In the case of *gnl1* mutants, internalization of molecules other than PIN2 can occur normally in the absence of GNL1 function. A plasma-membrane marker, PMA4-GFP, and a lipophilic dye, FM4-64, accumulated in the BFA bodies of BFA-treated wild-type and *gnl1-2* roots with similar efficiency (Teh and Moore, [@B98]). These results suggest that GNL1 is not a major regulator of internalization from the plasma membrane but that one or more other BFA-sensitive ARF-GEFs play crucial parts in this regulation. Thus, GNL1 and GNOM execute distinct but overlapping functions in plant vesicular traffic. However, the molecular mechanisms underlying the cargo specificity of GNL1/GNOM-mediated endocytosis remain poorly understood. Further functional analyses coupled with molecular characterizations of other ARF-GEF proteins are necessary to elucidate fundamental aspects of ARF-GEF-mediated traffic. Another GNOM-type ARF-GEF, GNOM-LIKE2 (GNL2), is highly expressed in pollen grains and pollen tubes. In addition, pollen germination defects were observed in the corresponding *Arabidopsis* mutants, *gnom-like 2-1* (*gnl2-1*) and *gnl2-2*. These results suggest that GNL2-related traffic plays an important role in pollen germination (Jia et al., [@B32]). Thus, ARF-GEF-mediated vesicular trafficking is tightly correlated with plant developmental processes, including early embryonic development and sexual plant reproduction (Jia et al., [@B32]; Du et al., [@B20]).
Concluding remarks
==================

Extensive studies by means of genetic and biochemical approaches have yielded much progress in understanding the molecular mechanisms by which Sar/Arf GTPases drive vesicle formation on membrane surfaces. As discussed above, many questions remain to be answered. In particular, limited information is available on the physiological functions of Sar/Arf proteins in plants. These GTPases fulfill both conserved housekeeping functions and plant-specific functions in vesicular trafficking. To date, many unique functional differentiations have been described in processes such as Golgi organization, endocytic transport and cell polarity. However, further elucidation of these phenomena is required for a better understanding of multicellular development in higher plants.

Conflict of interest statement
------------------------------

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

[^1]: Edited by: Shingo Nagawa, Chinese Academy of Sciences, China

[^2]: Reviewed by: David Gordon Robinson, University of Heidelberg, Germany; Ken Matsuoka, Kyushu University, Japan

[^3]: This article was submitted to Plant Traffic and Transport, a section of the journal Frontiers in Plant Science.
Files with the tile file extension are usually images from the Eclipse development platform.

TILE file extension - Eclipse image

What is a tile file? How to open tile files?

The tile file extension is mainly related to a special image format used by Eclipse. Eclipse is an open source community whose projects are focused on building an open development platform comprised of extensible frameworks, tools and runtimes for building, deploying and managing software across the lifecycle. The Eclipse Foundation is a not-for-profit, member-supported corporation that hosts the Eclipse projects and helps cultivate both an open source community and an ecosystem of complementary products and services.

Updated: September 26, 2018

The default software associated with opening tile files is XnView MP (developer: Pierre-Emmanuel Gougelet), a free (for non-commercial purposes) and powerful cross-platform media browser, viewer and converter. It is compatible with more than 500 formats and also supports export to 70 different file formats. It works on Windows, Linux and macOS (OS X).

Recommended software applications associated with the .tile file extension are sorted by OS platform (Windows, macOS, Linux, iOS, Android, etc.) and by the possible actions that can be performed on the file (open, edit, convert, view, play, etc.), where software for the corresponding action exists in File-Extensions.org's database. XnView MP is the recommended viewer for tile files on the main platforms Windows, Mac and Linux.
https://www.file-extensions.org/tile-file-extension
China's dream to revive a 2,000-year-old trade route is emblematic of the country's two-speed reform process, according to Barclays. The legendary Silk Road, named after the lucrative Chinese silk business, spanned three continents until the end of the 14th century. Now, China is spearheading an effort to recreate those trade networks in a project called the New Silk Road. Comprising a land-based Economic Belt that will snake through Central and West Asia and a Maritime Silk Road that will link South Asian ports, the initiative is one of many plans to open up China's economy and exercise its influence on the global agenda. Other examples include the recently launched Stock Connect program and the BRICS-focused New Development Bank. "The fast progress in these regional initiatives compared with the relatively slow motion in domestic reforms highlights the contrasting economic circumstances facing China domestically and abroad," said Jian Chang, chief China economist at Barclays, in a report. As Beijing experiences its slowest pace of growth in five years, experts complain that officials aren't doing enough to speed up the structural reforms that are desperately needed to transition the economy away from exports. These range from social measures such as boosting job creation to fiscal programs like tax relief for small and medium-sized enterprises (SMEs). Chang calls the current stage of domestic reforms "a zero-sum game," due to resistance from vested interests amid the anti-corruption drive and ideological divides. The New Silk Road is set to offer a raft of business deals in the infrastructure and manufacturing sectors, and the way officials balance the state and private sectors in this process will be crucial for reforms, according to Barclays.
"Unless carefully regulated, government-led global initiatives [such as the New Silk Road] could strengthen the role of the state sector, but we believe they should encourage a level playing field and support private enterprises," the report said. Indeed, state-owned enterprises (SOEs) are widely expected to receive the most opportunities, judging by the deals announced thus far. China National Petroleum Corporation (CNPC) is leading the construction of a Central Asian gas pipeline, while China Communications Construction Company (CCCC) is building various ports throughout the Indo-Pacific region. The projects involve central planning so a large share of the pie will be awarded to SOEs with experience in building infrastructure, said Dariusz Kowalczyk, senior economist and strategist at Credit Agricole. Global investors await a major overhaul of Chinese SOEs, widely regarded as the most critical reform area, after President Xi called for measures to improve their efficiency and profitability in 2013. However, Kowalczyk says the New Silk Road is unlikely to impact SOE reform. On the bright side, he expects it to increase internationalization of the renminbi. Most of the funding for business deals is expected to come from two financial mechanisms: A $40 billion New Silk Road fund and a $50 billion Asian Infrastructure Investment Bank (AIIB), both spearheaded by China this year. It's possible that loans from these facilities will be denominated in renminbi, Kowalczyk said. "Expanding credit to foreign-based projects should definitely boost general acceptance of the currency in those countries."
https://www.cnbc.com/2014/11/17/new-silk-road-highlights-chinas-two-speed-reform.html
In this report, the marketing plan of EE Limited is discussed in depth. The report gives an overview of how EE Limited can apply marketing tactics and strategies to achieve the company's organisational goals and objectives. Market issues and opportunities are analysed using tools such as SWOT analysis, together with an examination of the company's internal and external affairs. The report is divided into sections covering internal and external analysis, a company overview, SWOT analysis, objectives, strategy, STP, and budget control, all discussed in relation to EE Limited.
https://www.cram.com/essay/Marketing-Plan-For-Meet-Ee-s-Marketing/F39LCVSZHMQQ
Otosclerosis - to scan or not to scan?
In an era of insidiously reducing thresholds for investigating patients, Maxwell and colleagues pose an important question: is high-resolution computed tomography (HRCT) prior to stapes surgery for otosclerosis worthwhile? Their practice typically considers HRCT for cases of suspected otosclerosis presenting...

How to manage the concha bullosa in FESS
It is an interesting concept to assess how much impact the presence of a large concha bullosa (CB) has on both the severity of chronic rhinosinusitis (CRS) and postoperative outcomes after FESS. The authors accept that the paper has limitations...

A classification of a new cell - the retrosphenoid cell
This is a concise paper which describes a previously undefined type of cell within the sphenoethmoidal complex. It identifies the retrosphenoid cell, differentiated from an Onodi cell by being entirely within the posterior wall of the sphenoid sinus, lying between...

Bone thickness and split pattern in mandibular osteotomies
This paper looked at 63 sagittal split ramus osteotomy sites. The type of split was classified according to the Plooij paper and bone measurements were taken at four sites. Of these sites, the thickness of the bone at one point,...

Is bone scanning still of value?
This is an Australian study of 109 patients, 83 of whom had CT and 72 MRI; the presence of bone invasion on imaging was compared with the histopathology. Bone invasion was present in 44 of 109 resection specimens. Bone...

Beware the skinny patient…
The adverse health impacts of an excessive BMI are well known. This study highlights one laryngeal pathology for which a low BMI appears to be a significant risk factor. The records of 28 patients treated for arytenoid cartilage dislocation were...

Skull base imaging: a review
This excellent review paper describes the anatomy, imaging protocols and differentiating imaging findings on CT and MRI in myriad skull base lesions.
Skull base protocol MRI and thin-section CT are required to evaluate all skull base lesions. According to...

First signs of late nodal metastases
This is a retrospective review of 65 patients who had late metastases during follow-up after initial curative treatment. They analyse the detection methods of palpation, ultrasound, CT and subjective symptoms. Palpation detected the nodes in 31 patients, ultrasound in 17,...

Back to basics: nasendoscopy beats CT, again!
There are few otolaryngologists (or patients) who have not been confronted with a computed tomography scan referring to a deviated septum. In a very similar way to the accidental findings of sinus mucosal thickening, the clinician is left in a...

Is it time for cone-beam CTs to replace the traditional orthopantomogram in the primary diagnosis of temporomandibular joint disorders?
Cone-beam CT requires a lower dose of radiation compared to multidetector CT and provides much more detailed information in 3D about the bony structures of the temporomandibular joint (TMJ) when compared to the traditional orthopantomogram (OPG). In this article...
https://www.entandaudiologynews.com/reviews/journal-reviews/?tag=computed%20tomography
Customer obsession requires leaders to align at all levels to enable and sustain the energy of their employees in order to meet the needs of transformation. Yet leadership development programs...

- Report: Design Dynamic Interaction Patterns So Your IT Operating Model Can Adapt To Demand (Communication, Collaboration, Cooperation, Continuity, And Coordination Drive High-Performing IT Operating Models). May 11, 2021 | Gordon Barnett. Tech executives must adapt to constantly changing realities in the era of customer obsession and digitalization. They find that traditional IT operating models, developed as a collection of...
- Report: The State Of Customer Analytics 2020 (Landscape: The Customer Analytics Playbook). March 11, 2021 | Brandon Purcell. In this annual report update, we share key data so customer insights (CI) pros can benchmark to plan their capabilities. Compared with our 2019 benchmarks, we see firms adopting more analytical...
- Report: Chief Data Officers: Invest In Your Data Sharing Programs Now (With B2B Data Sharing, Benefits Are Greater Than The Sum Of Its Parts). March 11, 2021 | Jennifer Belissent, PhD. Data sharing is on the rise: 35% of data and analytics decision-makers using external data sources share or exchange data with suppliers and partners. But it doesn't come easily for everyone. To...
- Report: Evolve Your Culture Work Practice (Advanced Level: Culture Practices For CX Transformation). March 5, 2021 | Angelina Gennis. This report about culture work — one of the six competencies of customer experience (CX) — is for CX transformation leaders whose companies: 1) follow an effective and documented process for how to...
- Report: Evolve Data And Analytics Roles And Skills For The Adaptive Enterprise (Advanced Level: People Practices For Insights-Driven Businesses). February 26, 2021 | Boris Evelson, Cinny Little. Insights-driven businesses (IDBs) must continuously build and improve on five competencies: strategy, people, process, data, and technology. Weaknesses and gaps in any one of the competencies —...
- Report: Forrester Infographic: State Of Consumer Authentication 2020. January 27, 2021 | Andras Cser. Customers' sentiment about consumer authentication plays a significant role in how S&R, marketing, and CX professionals select the authentication measures and technologies that provide adequate...
- Report: Choose The Optimal Type Of AR Program To Drive Your Firm's Business Results (Business Case: The Industry Analyst Relations Playbook). January 26, 2021 | Kevin Lucas. Analyst relations (AR) novices think that there's a one-size-fits-all AR program that they can copy. For example, if they're short on money, staff, and knowledge, churning out a briefing program to...
- Report: Agile And Design Teams: Better Together (Improve CX And EX With Great Collaboration Between Designers And Developers). January 21, 2021 | Karine Cardona-Smits. Companies are adopting agile frameworks and adding designers to their delivery teams to improve digital experience development efforts. But these two constituencies don't rely on the same workflow,...
- Report: The Best Tech Leaders Develop And Unleash Creative People (Advanced Level: People Practices For IT Transformation). January 8, 2021 | Gordon Barnett. CIOs attaining an intermediate maturity level in their IT transformation efforts may operate well; however, their organizations often lack the culture, talent, and structure it takes to be a market...
- Report: Chief Data Officers: Evolve Your Teams To Accelerate Impact From Data Insights. January 5, 2021 | Jennifer Belissent, PhD. In many organizations, the chief data officer (CDO) role has evolved to become more of a chief insights officer with purview over both data and analytics. With a single role responsible for the...
- Report: Implement Essential Change Management Practices To Improve Digital Experience Delivery (Continuous Improvement: The Digital Experience Delivery Playbook). December 29, 2020 | Joe Cicman, Caleb Ewald. So, you've finished your first major digital experience delivery project: You've launched the app, modernized the website, and integrated your digital experience technologies. Now you must create a...
- Report: Multinational Firms Must Quickly Adapt To The UK's Divergence From EU Privacy Standards. December 3, 2020 | Enza Iannopollo. With Brexit negotiations coming to an end and a deal with the EU still out of sight, data protection matters are more uncertain than ever. Meanwhile, the UK data protection regulator is softening...
- Report: Understand AR's Beneficiary Landscape (Landscape: The Industry Analyst Relations Playbook). December 1, 2020 | Kevin Lucas. Analyst relations (AR) can help a vendor win, serve, and retain customers. But whether the AR team is supporting its traditional beneficiaries or not, it typically guesses at the detail of their...
- Report: Develop Five Processes To Drive Ongoing ECM Program Success (Processes: The Enterprise Content Management Playbook). October 26, 2020 | Cheryl McKinnon. Program governance, information governance, implementation, continuous improvement, and support services are the primary processes that help firms design a sustainable enterprise content management...
- Report: Organizing For Success In Collaboration (Organization: The Enterprise Collaboration Playbook). October 23, 2020 | Art Schoeller. Too often, technology leaders approach staffing for collaboration as they would other projects led by the technology organization. Application development and delivery (AD&D) leaders must establish...
- Report: Organize Your App-Dev Teams With Agile And DevOps (Organization: The Modern Application Delivery Playbook). October 9, 2020 | Diego Lo Giudice. Application development and delivery (AD&D) leaders are embracing modern application delivery practices and taking the steps they need to transform their organizations. This means changing culture,...
- Report: Assess And Enhance Your Modern Application Delivery Journey (Assessment: The Modern Application Delivery Playbook). October 2, 2020 | Diego Lo Giudice. Customer-obsessed organizations increasingly expect application leaders to speed and scale software delivery. Many application development and delivery (AD&D) leaders know they need to improve, but...
- Report: Invest In Fresh Skills As ECM Shifts To Modern Content Platforms (Organization: The Enterprise Content Management Playbook). September 11, 2020 | Cheryl McKinnon. As enterprise content management (ECM) investments shift to content services and cloud deployment models, the composition of the program team will change. Enterprise architecture (EA) professionals...
- Report: It's Time For Retail Stores To Open Their Doors To The Digital Org (Organization: The Digital Store Playbook). August 28, 2020 | Brendan Witcher. In the age of the customer — and especially in the midst of a pandemic — channel-specific strategies, tactics, and organizational structures become irrelevant and (worse) potentially crippling. To...
- Report: The State Of Disaster Recovery Preparedness In 2020 (Review Business And IT Risks With Evolving Business Operations). August 24, 2020 | Naveen Chhabra. Technology outages damage customer confidence in a business. While infrastructure and operations (I&O) leaders have more tools at their disposal to prepare themselves to recover from any type of...
- Report: Operational AR During A Pandemic (Adjust Your Approach To Deliver Value Continuously). August 4, 2020 | Kevin Lucas. A pandemic shakes up a vendor and its analyst relations (AR), leaving AR people uncertain about where to focus. Their instinct is to protect their existing AR programs, but new vendor priorities...
- Report: The Future Of Banking Is Built On Trust (Vision: The Digital Banking Strategy Playbook). July 29, 2020 | Jacob Morgan, Alyson Clarke. The economics of the next decade will challenge banks. COVID-19 has shifted the focus but will not dominate the future narrative. Leading banks are pivoting and rebooting their strategy —...
- Report: Top 10 Recommendations For Private Cloud Success (What You Call It Doesn't Matter, But Defining Scope And Governance Is Crucial). July 1, 2020 | Tracy Woo, Chris Gardner. Despite reportedly high private cloud adoption, Forrester continues to see enterprises that are struggling with build-out. Success with private cloud comes only through embracing self-service, full...
- Report: Collaboration Success Hinges On Effective Change Management (Continuous Improvement: The Enterprise Collaboration Playbook). June 19, 2020 | Art Schoeller. For two decades, companies have introduced new and potentially better tools to help employees collaborate, only to see tepid adoption. In the past several years, employees have begun to...
https://www.forrester.com/search-results/searchResults.xhtml?tmtxt=&sort=3&searchOption=0&N=0+20226
Published by Milton Thomas, modified over 5 years ago

They also develop power-using machines such as refrigeration and air-conditioning equipment, machine tools, material handling systems, elevators and escalators, industrial production equipment, and robots used in manufacturing. Mechanical engineers also design tools that other engineers need for their work. The field of nanotechnology, which involves the creation of high-performance materials and components by integrating atoms and molecules, is introducing entirely new principles to the design process.

Mechanical engineers may perform the following tasks:
- consider the appearance of the designs as well as the impact on the user and on the environment
- design new machines, equipment or systems taking into account cost, availability of materials, strength and maintenance requirements
- act as consultants, carrying out studies about possible changes or improvements and estimating costs of products for clients
- set up work control systems (e.g. testing of equipment) to make sure that standards of performance, quality, cost and safety are met
- specify, select, install and manage the maintenance of factory production and machinery
- supervise the operation of manufacturing process plants such as vehicle and electrical appliance production plants, coal handling installations, power stations and sewerage and water supply pumping stations
- use Computer-Aided Design (CAD) to assist in design and drawing
- carry out research in the use of different types of fuel and energy, materials handling, heating and cooling processes, the storage and pumping of liquids and gases, and environmental controls
- undertake the design and construction of resource development projects such as offshore platforms, onshore gas plants and iron ore mining facilities

Personal requirements:
- practical and creative
- enjoy technical and engineering activities
- able to identify, analyse and solve problems
- willing to adhere to safety requirements
- able to work independently or as part of a team
- good oral and written communication skills
- able to accept responsibility
- enjoy computing and technical design
http://slideplayer.com/slide/6029275/
This paper reports on a meta-analysis of various studies on community-oriented policing (COP) programs, which include community partnerships, organizational transformation, and problem solving, and their effects on crime, disorder, fear of crime, citizen satisfaction with police, and police legitimacy. It sought to answer three main questions: To what extent do community-oriented policing strategies reduce crime and disorder in target areas? To what extent do community-oriented policing strategies reduce fear of crime and improve citizen satisfaction with police and perceived legitimacy of the police? Do the effects of community-oriented policing vary according to particular strategies?

MAIN RESULTS

1) COP interventions are most successful at improving citizens' satisfaction with the police, with residents typically perceiving that officers in COP areas were more likely to treat them fairly and with respect, and reporting that they trusted the police.

2) COP helped to reduce citizens' perceptions of social and physical disorder in their neighborhood, and increased their feelings of safety, in about half of the comparisons that measured these outcomes. COP was associated with a 5-10% increase in the odds of citizens perceiving improvements in disorder, although this increase was not statistically significant.

3) COP was associated with only a small, non-significant improvement in citizens' feelings of safety. However, citizens in treatment areas nonetheless believed that the police in these areas were more effective at preventing crime.

4) Citizens reported increased legitimacy, trust and confidence in the police following COP interventions and felt that the police treated people more fairly.
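The "increase in the odds" phrasing in result 2 is easy to misread as a change in percentage points. As a brief illustration (the 40% baseline rate below is hypothetical, not a figure from the meta-analysis), an odds ratio can be converted into an absolute probability like this:

```python
def shift_probability(p_baseline: float, odds_ratio: float) -> float:
    """Apply an odds ratio to a baseline probability and return the new probability."""
    odds = p_baseline / (1.0 - p_baseline)   # convert probability to odds
    new_odds = odds * odds_ratio             # apply the reported effect on the odds
    return new_odds / (1.0 + new_odds)       # convert back to a probability

# Hypothetical example: if 40% of residents perceived improvements in disorder
# at baseline, a 10% increase in the odds (odds ratio 1.10) corresponds to
# roughly 42.3% -- only about a 2-point absolute change.
print(round(shift_probability(0.40, 1.10), 3))  # → 0.423
```

This also helps explain why a 5-10% increase in the odds can fail to reach statistical significance: the implied absolute change in the share of citizens perceiving improvement is small.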
https://www.aas.jjay.cuny.edu/single-post/eng-gilletal-2014
Stage 1 – Prepare
The core values and needs of companies and teams are mostly inherent but not necessarily obvious. To reveal them, we start by taking a look at the status quo. We call this stage "Prepare". Here we show you how to identify and uncover the existing values, attitudes and mindsets so we can focus on them constructively in the ensuing stages.

Stage 2 – Talk
Based on the status quo, in this stage we move closer to the values we actually live by and see how they intersect with the experiences of our clients and partners. Working with multiple methods of discussion and interaction, here we focus on the shifting perspective and the realisation that values are more than just words on a page.

Stage 3 – Play
In this stage we start to playfully develop new ideas that build on the values defined earlier. By engaging in conversation we construct images of the common future and bring the values to life. We look with open minds in every direction and use our imagination and creativity to generate new and convincing ideas.

Stage 4 – Create
The objective of the "Create" stage is to identify areas where the values can be established and brought to life. Ultimately, the aim is to make the values a reality. We use different methods to highlight abstract ideas and complex issues and translate them into concrete actions and development goals.

Stage 5 – Feel
The "Feel" stage is all about breathing life into values, because this is often what is lacking in the lived experience. No matter how good a value is, if it is not felt, it fades away and becomes obsolete. We work to counteract this. From a small step to a giant leap, there are many ways we can give values the room they need to grow and to make them an integral part of life at work.
https://short-cuts.de/en/strategy/?kat=000009&cHash=73a43387dd237e7380e353396266dfee
OBJECTIVES: Pulmonary artery (PA) cannulation during peripheral venoarterial extracorporeal membrane oxygenation (ECMO) has been shown to be effective either for indirect left ventricular (LV) unloading or to allow right ventricular (RV) bypass with associated gas-exchange support in cases of acute RV failure with respiratory failure. This case series reports the results of such peculiar ECMO configurations with PA cannulation in different clinical conditions. METHODS: All consecutive patients receiving PA cannulation (direct or percutaneous) from January 2015 to September 2018 in 3 institutions were retrospectively reviewed. Isolated LV unloading or RV support, as well as dynamic support including initial drainage followed by perfusion through the PA cannula, was used as part of the ECMO configuration according to the type of patient and the patient's haemodynamic/functional needs. RESULTS: Fifteen patients (8 men, age range 45-73 years, logistic EuroSCORE range 14.45-91.60%) affected by acute LV, RV or biventricular failure of various aetiologies were supported by this ECMO mode. Percutaneous PA cannulation was performed in 10 patients and direct PA cannulation in 5 cases. Dynamic ECMO management (initially draining and then perfusing through the PA cannula) was carried out in 6 patients. Mean ECMO duration was 9.1 days (range 6-17 days). One patient exhibited pericardial fluid during implantation of the PA cannula (no lesion was found when the chest was opened), and weaning from temporary circulatory support was achieved in 14 patients (1 of whom received a transplant). Three patients (20%) died in hospital, and 12 patients were successfully discharged without major complications. CONCLUSIONS: Effective indirect LV unloading in peripheral venoarterial ECMO, as well as isolated RV support, can be achieved by PA cannulation.
Such an ECMO configuration may allow the counteraction of common venoarterial ECMO shortcomings or allow dynamic/adjustable management of ECMO according to specific ventricular dysfunction and haemodynamic needs. Percutaneous PA cannulation was shown to be safe and feasible without major complications. Additional investigation is needed to confirm the safety and efficacy of such an ECMO configuration and management in a larger patient population.
https://moh-it.pure.elsevier.com/en/publications/pulmonary-artery-cannulation-to-enhance-extracorporeal-membrane-o
The battle of the North against the South. Two regions of the same nation were divided along economic, social and political lines. Such was the friction between these estranged communities that the unity that had held until then was threatened: each side warned that if the differences were not resolved, a secession, or complete separation of one of them from the national territory, could follow. Ultimately, it did.

How long was the civil war?

South Carolina, Alabama, Florida, Mississippi, Louisiana and Georgia, among other states, separated from the Union to start what was called the Civil War. What could have been so serious that this happened in a country that had remained united until then? Would the country's presidency play an important role? Learn more about these and other details in this document.

When and where did the Civil War take place?

The civil-military conflict arose from the secession of the southern states and the very rapid creation of the Confederate States of America, specifically during the years from 1861 to 1865, four difficult years of battle for this country. Its combat front was not in some foreign space or territory; the contest was fought on the soil of the United States of America itself because, as mentioned at the beginning, the two sides were fighting no enemy other than themselves.

Who participated in the Civil War?

Among the main parties that fought in the Civil War were, on the one hand, the southern states of the United States, whose economy relied on agriculture, the cultivation of sugar cane, cotton and other manual work that implied the appropriation, purchase and sale of people as slaves for this heavy labor; and, on the other hand, the northern states of the country, characterized by a much more diversified economy. In the north, preference was given to European labor, and production followed democratic and bourgeois forms.

It is also necessary to mention President Abraham Lincoln as another participant in the conflict because, after being elected president, he took sides in the war by joining those who opposed the slavery practiced in the southern part of the nation.

Causes of the Civil War

From early colonial times, differences in ideological interests with repercussions in the commercial, social and political spheres of the United States of America played an important role; these discrepancies are regarded as among the main reasons the Civil War was reached. However, other factors were also strongly linked to it:

- The discontent of those who longed for the abolition of slavery in the states of Kansas, Nebraska and others, as opposed to President Franklin Pierce, who approved and in turn encouraged the spread of slavery throughout the United States.
- The revocation of the votes cast against slavery in 1856, because a large number of inhabitants were reluctant to accept and respect that decision of the rest of the population, thus leading toward secession.
- The collision of regional interests. The fertile, productive land, together with the warm climate that prevailed in the south of the country, made agriculture and the planting of food and other products such as cotton feasible.
The downside was that production was exploited through the slave system; in the north of the country, by contrast, a highly industrial way of life had taken hold that increasingly abolished and rejected slavery.
- The dissatisfaction of the southern states of the United States with the presidential election of Abraham Lincoln.

Consequences

The Civil War is one of the longest and bloodiest armed conflicts in the history of the nineteenth century, so we can only expect grim consequences, such as a large number of deaths. How many, specifically? Find out in the final results of the conflict:

- The reunification of the United States of America as a single national territory rather than separate nations. This contributed to the rise of the nation on the world stage.
- The abolition, prohibition and revocation of all forms of slavery in the national territory.
- The continuation of racial denigration and prejudice and the poor social integration of Black Americans.
- The acceleration of the country's economic development and the age of the machine, thanks to technological contributions such as steamships, seed-harvesting machinery, the cotton gin, elevators, sewing machines, weapons, railways, the telegraph and many other important innovations for streamlining a large number of mechanical jobs.
- The south lost power and political influence with the rest of the world's countries.
- The war gave rise to the development of heavy weapons, battleships and submarines, some of which were also used at the time. Because of this and the enormous violence, many citizens were lost; approximately 600,000 lives disappeared.
- The death of Abraham Lincoln in 1865 at the hands of a Southerner.
- About a third of the army that ruled the south was exiled.
- The creation of racist groups, such as the Ku Klux Klan, an organization hostile to Black people.

Victor/Winner

Coming to the point of determining the winner of the confrontation, it is not difficult to identify. Without a doubt the victor was the north; clearly and decisively, the south was defeated. Why? Because the Confederate States of America, the states of the South, were never officially recognized as legitimate; they therefore remained a region that, owing to its way of life, was weak and exposed and lacked the foreign support, for example from the French and Anglo-Saxon nations, that it so badly needed.

In addition, they did not possess material resources or weapons superior to those used by their enemies that would have allowed them a greater chance of winning against the north, a group of citizens who were better prepared and armed with the best equipment of the moment. The northern part of the United States, meanwhile, expanded across the national territory thanks to its effective industrialization and, at the end of the war, saw each and every one of its objectives fulfilled, among them: 1) the reunification of the nation and 2) the end of the slavery that existed in the south.
https://englopedia.com/how-long-was-the-civil-war/
Mars, Venus and the moon will meet up in a particularly beautiful cosmic display Friday (Feb. 20). If you've been watching the evening twilight sky over the past few weeks, you will have seen the brilliant planet Venus gradually moving away from the sun, setting slightly later every evening. At the same time, the planet Mars has been gradually moving downward toward the sun, setting slightly earlier every evening. On Friday, the moon, moving much faster than either of the planets, will pass by them, so three hands on the celestial clock will almost coincide. The three cosmic bodies will form a triangle only 2 degrees across, small enough to fit into a low-power telescope's field of view. Mars and Venus will also be closely paired in the night sky Thursday (Feb. 19). [Watch a video about Mars, Venus and the moon meeting up] The two planets will pass close to each other on Saturday (Feb. 21), but that close encounter will happen in the daylight sky, shielding the meeting from view. The best time to see the two bodies will be the evening before, on Friday. Photographing a special sight Currently, both Venus and Mars are on the far side of the sun, so their disks are both very small. Venus is only 12 arc seconds in diameter, and Mars is even smaller, at less than 5 arc seconds. These planets are comparable in size to very small craters on the moon. The lunar surface should be partially lit up by earthlight, sunlight reflected off the planet Earth. Close groupings like these are wonderful subjects for photography. Zoom your lens to maximum magnification, and try to frame the cosmic bodies with interesting foreground objects. If your camera has automatic exposure, your pictures may come out overexposed, so you may want to reduce the exposure to get a more pleasing result. A celestial timepiece The solar system is like a giant clock, with the objects orbiting each other in precise time like the clock's hands.
The movements of the moon and planets can be predicted accurately for thousands of years into the future. Many people wonder whether this celestial clock ever reaches the equivalent of exact midnight on a regular clock, with the hour, minute and second hands all aligned. (The moon, Mars and Venus might serve as a good stand-in for the three hands of the clock.) The answer is that this has never happened, not even once, in the 4-billion-year history of the solar system, and will never happen before the sun swells to a red giant more than 4 billion years hence. Editor's Note: If you take an amazing image of the moon or any other skywatching sight that you'd like to share for a possible story or image gallery, please contact managing editor Tariq Malik at [email protected]
Senior Mechanical Engineer Our Senior Mechanical Engineer must have 8–15 years of professional experience in HVAC design and energy modeling. Must have relevant in-depth knowledge of commercial, institutional, hospitality, and manufacturing facilities and their systems with substantial practical experience in reducing a building's energy consumption. Experience with energy audits and commissioning a plus. This position requires significant experience and comfort with energy modeling software such as e-Quest, Trane Trace, MATLAB, etc. Work as the lead mechanical engineer to redesign existing HVAC systems for energy efficiency. Systems range from small unitary systems for senior living/continuum care facilities to large central plant for commercial office buildings or large-scale hospitality projects. Recreate system functions using available resources from onsite observations and interviews with staff. Suggest energy conservation measures based on attractive payback periods. Work directly with assessment team to obtain site info and coordinate with implementation team for construction phase. 
Education and Qualifications:
- Undergraduate or graduate degrees in relevant field — Mechanical Engineering; Electrical Engineering; Building Science — or minimum 10 years' practical experience
- Knowledge of energy efficiency and sustainability
- CEM, BEAP, and/or PEM preferred
- PE or EIT preferred
- LEED AP or CxA a plus
- Proficiency with MS Office and energy modeling applications is required

Mandatory Skills:
- In-depth knowledge of large buildings and their mechanical and electrical systems, particularly HVAC and control systems
- Knowledge of central plant systems and comfort with building plans and specifications
- Familiarity with applicable building codes necessary
- Practical experience in energy reduction concepts and calculations
- Superlative quantitative analysis abilities
- Ability to present analysis in understandable and compelling formats
- Creative problem solving
- Strong interpersonal and client communications skills
- Team player to inspire and mentor colleagues

Responsibilities (specific duties will change for each project):
- Perform onsite assessments of client facilities to identify energy saving opportunities and meet with facilities personnel to ascertain onsite issues
- Analyze existing building systems and collect data on building operation, HVAC, electrical, lighting, and controls to determine energy usage and efficiency
- Develop engineering solutions to address current facility issues and potential energy saving opportunities
- Translate engineering calculations to client savings to be presented in terms of return on investment
- Establish priorities for implementation of capital projects and energy efficiency measures
- Lead and direct the work of the engineering team, consulting external engineers, and equipment suppliers; coordinate tasks, manage schedules, and ensure deliverables and deadlines are met
- Coordinate with Implementation Team and subcontractors to finalize design and confirm feasibility of installation

Other tasks may
include:
- Review and verify energy savings calculations and guide other staff in best practices for various energy savings technologies, including lighting, HVAC, fans, pumps, air compression, and industrial process use
- Plan and deliver presentations at meetings and training events for trade allies, customers, clients, and others
- Complete technical and financial analyses
- Perform site-specific engineering analysis and evaluation of energy efficiency projects and programs involving industrial process equipment, such as pumps, fans and compressed air systems, complex central plant systems and advanced HVAC equipment
- Provide expert judgment and analyses for engineering design in order to ensure the owner's requirements are fully documented, minimize first cost, improve energy efficiency, ensure standards are met and coordinate electrical and control requirements
- Write and install energy management routines for building automation systems
- Perform other duties as assigned

Location: Position is based in the Washington, DC metropolitan area
Travel: Travel to client locations within the US, and possibly South America, the Caribbean, and the UK, will be required for this role up to 20% of the time
Benefits:
https://www.greengen.com/senior-mechanical-engineer
Fairies are around us all the time. THERE’S A PART OF US THAT KNOWS FAIRIES ARE THERE, MAKING MISCHIEF AND HELPING THE FLOWERS WAKE UP. BUT EVEN IF WE BELIEVE, IN OUR HEARTS, IT CAN BE HARD TO REACH THEM. And if we were to reach them, then what? Summer Solstice is the time when fairies are close at hand, delighting among the flowers that are blooming all around us. It is the time when it’s easiest for us humans to attune ourselves to the vibration of fairies, working with flowers to do so. In this magical class, we will: Explore the connection between fairies and flowers Learn what lessons flowers & fairies have to teach us Sip flower tea & nectar to attune ourselves to fairies Learn ways of reaching out to fairies Connect with fairies with a meditation Explore how fairies can inform our herbalism and healing practices Celebrate with a fairy tea party of Summer Flower Oxymel and Fairy Tea Biscuits Make a Summer Flower Oxymel to take home Pre-registration is required. $45 THIS CLASS IS PART OF OUR SUMMER SOLSTICE CELEBRATION! To see the other classes on this day, click here. Taught by Amanda Midkiff CLASS LOCATION This class is held in the Apothecary Garden at Locust Light Farm. To get to the garden, wind your way all the way up to the top of the property, following signs for "The Farm Cooking School." Park in the lot beside the building. Walk straight back to the end of the parking lot, past the high tunnel. You’ll see the garden in front of you. OUR CLASS & CANCELLATION POLICY We are so excited when you sign up for a class! Please sign up for a class at least 8 hours prior to the start of class so that we can be prepared for the proper number of people. If you sign up within fewer than 8 hours of the start of the class, please text or call Amanda to confirm. Note: Social media messages are not a reliable way of contacting Amanda. If you need to cancel, please cancel at least 8 hours before start of the class via email, call, or text. 
If you cancel fewer than 8 hours before the start of the class, or simply do not attend the class, we cannot refund your class payment.
https://www.locustlightfarm.com/onfarmclasses/flowerfairymagic
Friday, November 5, 2021, 4:20 pm: Hampden-Sydney College plays its final home football game of the season against Bridgewater College in Old Dominion Athletic Conference (ODAC) action on Saturday afternoon, November ...

Garden club meeting held

The Buckingham-Dillwyn Garden Club met on Monday, Nov. 8, and enjoyed a program presented by Helena Arouca, a noted expert in the Moribana style of Ikebana flower arranging. She illustrated a number of Moribana styles using a variety of flowers and containers. Members shown with Arouca and her arrangements are: standing in front of the table, from left, Elsie Towler, Denise Schmidt and Pat Howe. Behind the table, from front center, speaker Arouca, Jeanette Reck, Marie Flowers, Suzanne Vandegrift and Barbara Wheeler. Standing in the back row, from left, Mary Lohr, Glenda Harris, Jackie Fairbarns, Pat Johnson, Brenda Hamby, Elfriede Wolford, Marie Baker, Pam Murray, Peggy Carwile, Kay Carter and Donna McRae-Jones. Also present was Barbara Knabe. On Dec. 1, the club will be decorating the Buckingham Arts Center for the holidays.
Transferring guidelines from paper into practice has proven to be frustrating for the many who endeavour to standardise the management of cardiovascular disease across Europe. The EUROASPIRE I, II and III surveys, which audited the practice of preventive cardiology in patients with coronary heart disease over a decade, illustrated that patients were not being managed to the standards set by the ESC guidelines and that limited attention was given to prevention in patients with established heart disease. Evidence of the need for more effective lifestyle management was compelling: blood pressure management remained stubbornly unchanged, and lipid targets were not achieved in almost half of patients. Other studies report disappointing levels of guideline observance among physicians; they are often unaware of recommendations given in guidelines and, even when they are, many fail to consistently apply them in treating patients.2-3 Commonly cited barriers to guideline adherence among physicians include lack of time during consultations, financial constraints and lack of confidence in patients’ motivation to comply. Physicians also find that guideline documents are difficult to translate into practice. To address the gap between publication of guidelines and their use in practice, the ESC at a European level organises presentations at conferences for its member national societies and key opinion leaders. It works at a political level to promote the prevention agenda and to directly influence EU health policy, leading, for example, to the EU Commission endorsement of the European Heart Health Charter. However, such efforts must be paralleled by concerted strategies at a national level to realise implementation in the front line. The 4th JTF urged national societies to develop implementation programmes, starting with the translation of guidelines to the local language and their adaptation to the national context. 
It recommended that the guidelines issued by the 4th JTF be regarded as a framework from which national guidance 'to suit local political, economic, social, and medical circumstances' would be developed. The recalibration of the SCORE risk assessment charts to reflect mortality and risk factor distributions in individual countries was emphasised as part of this adaptation. The 4th JTF saw as vital the establishment of a multidisciplinary alliance of experts from national professional organisations to oversee the adaptation and to drive implementation. Such alliances would need the support of national health authorities and would work with other sectors, such as the medical education and business communities, to advance their aims. Other recommendations included:
- An information and education programme aimed at practising doctors that would include an audit of practices and feedback.
- The development of supplementary materials to the guidelines, specifically electronic versions for use in hand-held devices, such as PDAs, and A4 sheet versions of risk algorithms and treatment recommendations.
- A population health approach addressing lifestyle risk factors in general.
- A public information campaign explaining the concept of multiple risk assessment and treatment and intervention thresholds, as well as describing how risk can be reduced.

This report was completed on behalf of the Prevention Implementation Committee (PIC) of the European Association of Cardiovascular Prevention and Rehabilitation (EACPR), the prevention- and rehabilitation-focused association of the European Society of Cardiology (ESC), to assess where and to what extent these measures have been pursued in different European countries. Acknowledging differing structures, traditions, enablers and constraints across European countries, the study sought to evaluate progress in implementation, focusing on guideline implementation structures, processes and outcomes.
It is hoped that the insights gained will provide guidance to the EACPR about how best to achieve gains in promoting implementation, as well as informing the 5th JTF in its current work of updating the guidelines.
https://repository.rcsi.com/articles/report/Implementation_of_the_4th_Joint_Societies_Task_Force_Guidelines_on_Cardiovascular_Disease_Prevention_in_Clinical_Practice_Evaluating_implementation_across_13_European_countries_Main_report/10770464/2
The well known, but not well understood, Richter scale was developed by Charles Richter in 1935. Simply put, it measures the amplitude (height of wave) of a waveform recorded on a seismograph. Since magnitude scales are logarithmic, the size represented ramps up very quickly. Each point higher represents an order of magnitude more amplitude — e.g., a 7.0 is 10x larger than a 6.0. That same one-point increase also represents a release of roughly 31.6x (10^1.5) more energy. The problem with the Richter scale (ML) is that it's only valid for certain frequencies and distances of earthquakes (e.g., it saturates around magnitude 7.0 and is most effective up to magnitude 5). At first, as more seismograph stations were added around the world, Richter's work was extended with additional scales, including body wave magnitude (Mb) and surface wave magnitude (Ms). Because of the limitations of ML, Mb, and Ms, the moment magnitude (Mw) was introduced in 1979. The Richter scale has been widely replaced by the moment magnitude scale. With that in mind, if you hear someone talking about earthquake size using "Richter scale," it's likely an error — it's simply no longer used when reporting to the public, and hasn't been for decades. Moment magnitude puts a number to the energy released in an earthquake. Rather than looking at the size of waves, it looks at the seismic moment: the amount of slip on the fault multiplied by the area of the fault surface that slips (and by the rigidity of the rock). It's then converted into a number on a scale similar to Richter and the other scales. Magnitude vs. Intensity The moment magnitude of an earthquake is a single number that describes the event. How the earthquake feels is a different type of number … and that very much depends on where you are in relation to the earthquake, the type of soil you are on, and of course, the type and height of building you are in. Today, we measure intensity with the Modified Mercalli Intensity Scale, which runs from I to XII (abridged versions often stop at X). This scale will vary by area and distance for the same earthquake.
The following is an abbreviated description of the levels of Modified Mercalli intensity (from https://earthquake.usgs.gov/learn/topics/mercalli.php).

| Intensity | Shaking | Description/Damage |
|---|---|---|
| I | Not felt | Not felt except by a very few under especially favorable conditions. |
| II | Weak | Felt only by a few persons at rest, especially on upper floors of buildings. |
| III | Weak | Felt quite noticeably by persons indoors, especially on upper floors of buildings. Many people do not recognize it as an earthquake. Standing motor cars may rock slightly. Vibrations similar to the passing of a truck. Duration estimated. |
| IV | Light | Felt indoors by many, outdoors by few during the day. At night, some awakened. Dishes, windows, doors disturbed; walls make cracking sound. Sensation like heavy truck striking building. Standing motor cars rocked noticeably. |
| V | Moderate | Felt by nearly everyone; many awakened. Some dishes, windows broken. Unstable objects overturned. Pendulum clocks may stop. |
| VI | Strong | Felt by all, many frightened. Some heavy furniture moved; a few instances of fallen plaster. Damage slight. |
| VII | Very strong | Damage negligible in buildings of good design and construction; slight to moderate in well-built ordinary structures; considerable damage in poorly built or badly designed structures; some chimneys broken. |
| VIII | Severe | Damage slight in specially designed structures; considerable damage in ordinary substantial buildings with partial collapse. Damage great in poorly built structures. Fall of chimneys, factory stacks, columns, monuments, walls. Heavy furniture overturned. |
| IX | Violent | Damage considerable in specially designed structures; well-designed frame structures thrown out of plumb. Damage great in substantial buildings, with partial collapse. Buildings shifted off foundations. |
| X | Extreme | Some well-built wooden structures destroyed; most masonry and frame structures destroyed with foundations. Rails bent. |
Abridged from The Severity of an Earthquake, USGS General Interest Publication 1989-288-913

How Does Richter Compare to Moment?

This table, from "Comparing the Richter and Moment Magnitude Scales" by Pearson Education, demonstrates how some of the most famous earthquakes rated on each scale.

| Date | Location | Richter scale | Moment scale |
|---|---|---|---|
| 1811–1812 | New Madrid, midwestern U.S. | 8.7 | 8.1 |
| 1906 | San Francisco, California | 8.3 | 7.8 |
| 1960 | Arauco, Chile | 8.3 | 9.5 |
| 1964 | Anchorage, Alaska | 8.4 | 9.2 |
| 1971 | San Fernando, California | 6.4 | 6.7 |
| 1985 | Mexico City, Mexico | 8.1 | 8.1 |
| 1989 | San Francisco, California | 7.1 | 6.9 |
| 1994 | Northridge, California | 6.4 | 6.7 |
| 1995 | Kobe, Japan | 6.8 | 6.9 |

Prior to Richter: Looking at Earthquake History

Seismographs came into use around 1890, and as a result, for earthquakes between 1890 and 1935 (when the Richter scale was introduced), scientists can go back to the historical seismograph records and determine Richter magnitudes.
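The logarithmic relationships described above are easy to check numerically. The sketch below is mine, not from the article: the function names are made up for illustration, and the Mw conversion uses the standard IASPEI formula Mw = (2/3)(log10 M0 − 9.1) for a seismic moment M0 in newton-metres.

```python
import math

def amplitude_ratio(m1, m2):
    """Ratio of recorded wave amplitudes: 10x per whole magnitude step."""
    return 10 ** (m1 - m2)

def energy_ratio(m1, m2):
    """Ratio of radiated seismic energy: ~10**1.5 (about 31.6x) per whole step."""
    return 10 ** (1.5 * (m1 - m2))

def moment_magnitude(m0):
    """Moment magnitude Mw from seismic moment m0 (in newton-metres)."""
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)

print(amplitude_ratio(7.0, 6.0))          # 10.0
print(round(energy_ratio(7.0, 6.0), 1))   # 31.6
```

The energy relation is where the "roughly 30x more energy per point" figure in the text comes from; 10^1.5 is closer to 31.6, and popular accounts round it to about 30.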
https://westlakerevelations.com/2019/07/06/earthquake-magnitudes-explained/
Q: Python: list of a single iterable `list(x)` vs `[x]`

Python seems to differentiate between [x] and list(x) when making a list object, where x is an iterable. Why this difference?

>>> a = [dict(a=1)]
>>> a
[{'a': 1}]
>>> a = list(dict(a=1))
>>> a
['a']

While the 1st expression works as expected, the 2nd expression works more like iterating the dict this way:

>>> l = []
>>> for e in {'a': 1}:
...     l.append(e)
>>> l
['a']

A: [x] is a list containing the single element x. list(x) takes x (which must already be iterable!) and turns it into a list.

>>> [1]          # list literal
[1]
>>> ['abc']      # list containing 'abc'
['abc']
>>> list(1)      # TypeError: 'int' object is not iterable
>>> list((1,))   # list constructor on a one-element tuple
[1]
>>> list('abc')  # strings are iterables; turns the string into a list!
['a', 'b', 'c']

The list constructor list(...) - like all of Python's built-in collection types (set, list, tuple, collections.deque, etc.) - can take a single iterable argument and convert it.
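As a quick self-contained check of the distinction the answer describes (this script is mine, not from the original post):

```python
# [x] wraps x as the single element of a new list;
# list(x) iterates over x and collects what it yields.
d = {'a': 1}

wrapped = [d]        # one element: the dict object itself
converted = list(d)  # iterating a dict yields its keys

print(wrapped)    # [{'a': 1}]
print(converted)  # ['a']

# list() requires an iterable; a bare int raises TypeError.
try:
    list(1)
except TypeError as exc:
    print(type(exc).__name__)  # TypeError

# The same rule applies to any iterable argument.
print(list((1, 2)))  # [1, 2]
print(list('abc'))   # ['a', 'b', 'c']
```

Note that [d] does not copy the dict: the new list holds a reference to the same object, while list(d) builds a brand-new list of keys.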
Antimicrobial Resistance (AMR) is the health threat of our time. Projections for 2050 suggest that AMR could cause 10 million deaths per year globally if we do not act now. New data published on 17 November, coinciding with European Antibiotic Awareness Day, indicate that more than 35,000 people die from resistant infections in the EU/EEA each year. This number increased significantly between 2016 and 2020, and over 70% of resistant infections were healthcare-associated. The development of AMR is accelerated by the misuse of antibiotics in human and animal health. The results published by a Eurobarometer study on the same day are alarming: more than 50% of Europeans still incorrectly believe that antibiotics kill viruses. HCWH Europe supports the healthcare sector, offering key resources on how to tackle this insidious health threat in healthcare facilities and enhance AMR education in medical schools. The 'One Health' approach is key in tackling AMR On 17 November the European Commission published a report reviewing Member States' One Health National Action Plans against AMR and concluded that many would benefit from a stronger 'One Health' approach. 'One Health' is an integrated, unifying approach designed to balance and optimise the health of people, animals, and the environment. In recent years, the EU/EEA has seen a decrease in overall antimicrobial consumption in both humans and animals, but there are still reasons to be concerned. Between 2012 and 2021, human consumption of 'broad-spectrum' antibiotics increased by 15% in hospitals. Alarmingly, the proportion of 'last-resort' antibiotics - which should be reserved for the treatment of confirmed multi-drug resistant infections - almost tripled during this period. In 2021, the European food sector used 136 tonnes of colistin – a vitally important last-resort antibiotic for human health that is used when no other antibiotics are effective. 
Moreover, 13.9% of antibiotics used in farming are HPCIAs - highest-priority critically important antimicrobials - that should be preserved for human use only. In the UK, HPCIAs account for just 0.4% of all antibiotics used, which shows that animal husbandry doesn't have to rely on HPCIAs. Evidence that AMR can spread between animals, humans and the environment is mounting. Reducing the use of antibiotics in food-producing animals, and replacing them where possible, is therefore essential for the future of animal and public health. The healthcare sector plays a key role in the fight against AMR In light of the crucial role healthcare facilities play in minimising the risk of AMR, HCWH Europe has developed criteria for responsible antimicrobial use in products of animal origin. We now invite European healthcare facilities to support these criteria and apply them in their food procurement strategies. We will further help the healthcare sector play a leading role in developing AMR resilience. Doctors have a leading role in antimicrobial stewardship, educating patients about antimicrobial resistance, and improving prescription practices to save antibiotics for future generations. Next year HCWH Europe will coordinate an EU4Health project to equip the health workforce with the necessary knowledge and skills to address AMR in the areas of prescription, waste management and patient empowerment. This is in line with the first objective of WHO's Global Action Plan on AMR: "Improve awareness and understanding of antimicrobial resistance (AMR) through effective communication, education and training". The healthcare sector has a crucial role to play - AMR is a silent yet insidious pandemic that has been spreading for decades and is not subsiding.
https://noharm-europe.org/articles/blog/europe/amr-pandemic
Principal: Linda Boyd My name is Mrs. Connors and I am your child's art educator. My school contact information is [email protected], which is the best way to communicate with me. You can also leave a message at 432-6956 x 5307. You will also find my Connors' Corner columns in the Hawk Talk newsletter and on my teacher page at the school's website. Also this year, we welcome back part-time art educator Mrs. Cowan, who has the following classes: Mrs. Height (Grade 4), Mrs. Daily (Grade 3), Mrs. Lavoie (Grade 2) and Mrs. Connor (Grade 1). Mrs. Cowan also works at the MT and North schools. The best way to contact her is through email at: [email protected] There is a lot of news to report this month! Here are some of the Visual Arts programs at South: ART TO REMEMBER FUNDRAISER All students have started their year off by creating works of original art for the Art-to-Remember fundraiser that is managed by our PTA. Please note the following schedule: Students are scheduled to bring their product brochure and price list home to parents on Thursday, October 18. Students return their order form and payment to school for processing by Thursday, October 25. Products are tentatively scheduled to be sent home with students in mid-December. I would like to thank families in advance for participating in this fundraiser, which supports our annual Artist-in-Residence program and the Currier Museum of Art 5th grade field trips. I would also like to thank the parent volunteers who help each year with this fundraiser, and other members of the South School community for supporting the Visual Arts programs for our learners. Our parent volunteer coordinator for this project is Denise Inman: [email protected] ARTIST-IN-RESIDENCE PROGRAM In the spring of 2019, all students will have the opportunity to work with our school's art educators Mrs. Connors and Mrs. Cowan to design and paint wooden benches for our Courtyard garden. 
Our plan is to design and paint the benches in March, have the benches assembled in April and placed in the Courtyard area prior to our Annual Student Art Show on May 15th. CURRIER MUSEUM OF ART 5TH GRADE FIELD TRIPS In early November, our 5th grade learners will be touring the Currier Museum of Art! They will view pieces in the permanent collection as they learn about the "Building Blocks of Art" that artists use in the creative process. Please be on the look-out for the field trip Permission Slips and Health Forms, as they will need to be completed and returned to school by October 22. Here is the schedule: Wednesday, November 7 from 9 a.m. to 12 noon - Mrs. Colantuoni, Mrs. Harrison, Mrs. Tharrington Thursday, November 8 from 9 a.m. to 12 noon - Mrs. Desfosses & Mr. LaRosa I would also encourage you to check out the Currier Museum of Art's web page, as they offer many family activities throughout the year. You can view the site at: www.currier.org ANNUAL STUDENT ART SHOW- MAY 15, 2019 Mark your calendars! We have already set our date for the Annual Student Art Show, which is scheduled from 6-8 p.m. next May. It's a fun family event where you can stop by and see artwork on display from all of our learners over the course of the school year. Here are some things to keep in mind: As always, in preparation for the art show, Grades 3, 4 and 5 will be keeping their artwork in their portfolios for the entire school year. This allows these learners to choose which work of art to enter into the show. Learners in Grades 1 and 2 will bring their work home on a monthly basis, and eventually one art piece will be selected for the show. The month of October means that learners in Grades 1-4 will create a clay project which will be fired in the school's kiln in mid-November. The goal is to have learners bring their clay art home by Thanksgiving or shortly thereafter to share with their families. 
Grade 5 learners will create their unique clay pinch pots in March, which will then be on display at the Annual Student Art Show before they bring them home. TEACHING ARTISTIC BEHAVIOR During the course of this school year, learners in grades 3, 4 and 5 will have the opportunity to have more ownership of their artwork. Currently the students are experimenting with art materials and art techniques through four basic centers: drawing, painting, collage and sculpture. Teaching Artistic Behavior (TAB) encourages learners to be self-expressive, with a greater understanding of how to make informed choices with art materials and art techniques. TAB seeks to build intrinsic motivation and teach young artists to better reflect on their own work through group discussion, public speaking and writing.
https://south.londonderry.org/cms/One.aspx?portalId=124033&pageId=2224166
New Zealand is indeed blessed with a hydro legacy, geothermal resources and wind as sources of renewable electricity. This endowment should be sufficient to meet both future demand and the government's target. Technology will contribute to both supply and demand. The emergence of residential solar will present challenges to local networks and market design. In some areas of the US, rooftop solar is a standard feature of new homes. Utilising renewable resources presents exciting challenges for policy, market design, academic research and community-based organisations. The following links provide insights into relevant topics. Clean Power: doing more with less New Zealand has maintained a high share of renewable energy in the electricity sector during its transition from a state monopoly to a competitive electricity market. The development of hydro resources prior to industrial reform in the mid-1980s, coupled with the abundance of competitive low-carbon energy resources - especially geothermal and wind power - has certainly been an advantage. Read the full article in the Autumn 2016 edition of The University of Auckland Business Review. Other stories in this edition of e-Horizon:
https://www.business.auckland.ac.nz/en/about/our-research/bs-research-institutes-and-centres/energy-centre/outreach/newsletter/renewable-resources.html
During the signing of the tax overhaul bill on Dec. 22, 2017, President Donald Trump said, "Infrastructure is by far the easiest," according to a Washington Post article from Jan. 3, 2018. "People want it -- Republicans and Democrats. We're going to have tremendous Democrat support on infrastructure as you know. I could've started with infrastructure -- I actually wanted to save the easy one for the one down the road. So we'll be having that done pretty quickly," the Washington Post quoted the president as saying. Over one year later, there is still no firm plan for how to fund and fix the nation's aging infrastructure. In March 2018, the American Society of Civil Engineers (ASCE) released its 2017 Infrastructure Report Card, an assessment of the condition of the nation's infrastructure across 16 categories (roads, bridges, railroads, inland waterways, etc.). This report card is issued every four years. The overall grade across all 16 categories was a D-plus. (https://www.infrastructurereportcard.org/…) At that time, in regard to the infrastructure report card, Mike Steenhoek, executive director of the Soy Transportation Coalition (STC), told DTN: "First of all, it's hard to have an A-plus economy and A-plus agriculture with a D-plus infrastructure. In order to be profitable, it is not sufficient to stimulate supply and demand. It is also essential to stimulate greater connectivity with supply and demand. Transportation infrastructure provides that connectivity." Steenhoek continued: "I like to argue that agriculture has one of the most diverse and elongated supply chains of any industry in existence. We are heavily exposed to and dependent upon our system of roads and bridges, highways and interstates, inland waterways, railroads and ports. Farmers do not have the luxury of locating themselves in proximity to infrastructure. Rather, farmers hope infrastructure locates in proximity to them. 
Our viability as an industry depends upon having each of these modes being properly maintained and providing seamless transition from one to the other." After President Trump's State of the Union speech last week, in which the president called for a bipartisan effort to address the needs of the nation's infrastructure, I asked Steenhoek again about his thoughts on the subject. Here is what Steenhoek had to say: "While we are very appreciative of the inclusion of transportation infrastructure during the President's remarks, we are particularly hopeful that this intention will soon become an outcome. We encourage the President and Congress to enhance not just the transportation needs of urban America, but also the needs of rural America. "The temptation among our elected leaders is to regard transportation challenges in terms of urban congestion or long commute times. While this is most certainly a frustrating reality for many Americans that should be addressed, we must also be attentive to addressing the challenges of moving freight, including agricultural freight." The ASCE report card provided proof that our infrastructure is crumbling and is in dire need of funding. The nation's roads received a D-minus, with inland waterways, dams and levees receiving the next lowest grade of D. In 2018, the Soy Transportation Coalition released its "Top 10 Most Wanted List" of infrastructure priorities. Steenhoek told me last week that the STC is hopeful any effort by our elected leaders to address our transportation challenges will include some or all of these priorities. Soy Transportation Coalition's "Top 10 Most Wanted List" of Transportation Priorities: -- Maintenance and rehabilitation of locks and dams to significantly reduce the potential for unexpected, widespread and prolonged failure. Priority should be devoted to ensuring the reliability of locks and dams along the nation's inland waterways. 
-- Dredging the lower Mississippi River from Baton Rouge, Louisiana, to the Gulf of Mexico to 50 feet.
-- Ensuring the Columbia River shipping channel from Portland, Oregon, to the Pacific Ocean is maintained at no less than 43 feet.
-- Permit six-axle, 91,000-pound semis to operate on the interstate highway system.
-- Increase the federal fuel tax by 10 cents a gallon and index the tax to inflation. Ensure rural areas receive proportionate, sufficient funding from the fuel tax increase.
-- Provide greater predictability and reliability of funding for the locks and dams along the inland waterway system.
-- Provide block grants to states to replace the top 20 most critical rural bridges.
-- Provide grants to states to implement rural bridge load testing projects to more accurately diagnose which bridges are sufficient and which are deficient.
-- Ensure full utilization of the Harbor Maintenance Trust Fund for port improvement initiatives.
-- Permanent (or at least multi-year) extension of the short-line railroad tax credit.

Steenhoek commented that one particular infrastructure investment included on the list that could provide substantial benefit to farmers would be to dredge the lower Mississippi River from 45 feet to 50 feet. A deeper river would allow larger ships to be utilized, and current ships to be loaded with more revenue-producing freight. "The 256-mile stretch of the Mississippi River from Baton Rouge, Louisiana, to the Gulf of Mexico accounts for 60% of U.S. soybean exports, along with 59% of corn exports -- by far the leading export region for both commodities," said Steenhoek. "The STC research highlighted that shipping costs for soybeans from Mississippi Gulf export terminals could decline 13 cents per bushel ($5 per metric ton) if the lower Mississippi River is dredged to 50 feet." 
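The per-bushel and per-tonne figures in that quote are mutually consistent: at the standard 60 lb soybean test weight (our assumption; the STC study does not state its conversion), a metric ton is roughly 36.7 bushels, so 13 cents per bushel works out to about $5 per metric ton:

```python
LB_PER_BUSHEL = 60.0             # standard soybean test weight (assumed)
LB_PER_METRIC_TON = 2204.62
BUSHELS_PER_TON = LB_PER_METRIC_TON / LB_PER_BUSHEL  # ~36.74 bu/t

# 13 cents per bushel, expressed per metric ton.
savings_per_ton = 0.13 * BUSHELS_PER_TON
```

This yields roughly $4.78 per metric ton, which the article rounds to $5.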
The STC research further estimated farmers in the 31 evaluated states could annually receive an additional $461 million for their soybeans due to a more favorable basis resulting from dredging the lower Mississippi River to 50 feet. "All too often, infrastructure projects can only provide a theoretical benefit. Dredging the lower Mississippi River is an example of a tangible investment having a tangible impact on farmers throughout the country," said Steenhoek. None of that matters if our current lock and dam system fails, shutting off commerce on the river system. The U.S. Army Corps of Engineers (USACE) has said in the past that it is "unable to adequately fund maintenance activities to ensure the navigation system operates at an acceptable level of performance." A failure of significant duration, especially during and after harvest, would have a negative impact on the soybean and grain industries and cause farmers to lose money, affecting their profitability. Steenhoek noted that, in 2013, Congress allocated $4.7 billion to the Army Corps of Engineers' Civil Works Program, which is the account responsible for maintaining and improving our nation's locks and dams. The current appropriations bill allocated $6.9 billion -- a 47% increase over five years. "Congress and the Administration need to build on this momentum in order to provide a well-maintained system of locks and dams. A failure at one or more of these sites would have severe consequences on the competitiveness of the American farmer," said Steenhoek. "We sincerely hope a bipartisan effort can produce an infrastructure initiative that benefits the needs of both urban and rural areas of this country," said Steenhoek. "It is time for infrastructure to move from the on deck circle to the batter's box." Here is a link to the STC website: http://www.soytransportation.org/…. Mary Kennedy can be reached at [email protected] Follow her on Twitter @MaryCKenn (BE/AG) © Copyright 2019 DTN/The Progressive Farmer. 
All rights reserved.
https://www.dtnpf.com/agriculture/web/ag/blogs/market-matters-blog/blog-post/2019/02/11/infrastructure-funding-remains-limbo
Service Delivery:
1) Monitor performance to ensure smooth functioning in accordance with the pre-set deadlines, procedures and service standards (SLAs)
2) Conduct calibration calls with the stakeholders to understand expectations, provide feedback and reports, and resolve queries or escalations
3) Drive process improvements to enhance the operational efficiency of the process by understanding and effectively utilizing resources.

Service Excellence
1) Benchmark best practices globally through analysis for Efficiency and Quality, and ensure the Trade Payments process is optimized on all four dimensions.
2) Ensure all MI and other business data requirements are completed accurately, and supporting statistics/reports/returns are presented to business/management within agreed timescales by the Associate Assistant Managers.
3) Prepare monthly management reports, variance analyses, trend analyses and financial review documents, and partner with key business stakeholders to further enhance optimization of the process
4) Take ownership of all issues/escalations and manage them.
5) Lead, motivate, counsel, develop and coach newly recruited team members to meet their KPIs, mainly accuracy and productivity
6) Ensure strong, clear process documentation and controls are in place, and review them every 6 months.
7) Conduct performance reviews on a monthly basis for all staff
8) Review the error and deviation log to identify opportunities for improvement

Talent & Org
1) Leading and developing the team of 25-30 associates; responsible for the overall direction, performance management, coordination and evaluation of the team. 
2) Carrying out supervisory responsibilities, including interviewing, training and motivating employees; planning, assigning and directing work; rewarding and disciplining employees; and effective conflict resolution
3) Conduct monthly team meetings to update the team on progress/issues and feedback from the client
4) Lead, develop and coach team members on their performance and development

Desired Profile
1) Candidates with minimum team handling of 20 people.
2) Should have a minimum of 3 years' team handling experience.
3) Should have experience in Accounts Payable, Accounts Receivable and General Ledger processes.
4) Should have transition experience.
http://www.headhonchos.com/jobs/assistant-manager-operations-fmcg-foods-beverages-mumbai-247648.html
True to the mission of RIT, the Imaging Science program emphasizes knowledge application and workforce/graduate school preparation. Undergraduate Imaging Science students undertake laboratory-intensive courses throughout their academic careers, which culminate in a year-long senior research project where students design, carry out, and analyze their own projects under the guidance of a faculty research advisor. MS students produce a research thesis or project, and PhD students complete a research-based dissertation. Through such experiential learning, ImSci students are equipped with real-world teamwork, project management, and problem-solving skills in addition to traditional academic studies. The Innovative Freshmen Experience Through our breakthrough project-based Innovative Freshmen Experience class - which features no lectures, no assigned textbooks, and no tests - students get their hands dirty from day one researching, designing, and building a functional imaging system in time for RIT's annual research and creativity festival, ImagineRIT. This approach to education has been so successful at the undergraduate level that we are proud to announce a separate project is now undertaken by each class of incoming Ph.D. students. Watch the video below to learn more about freshmen innovation in CIS, straight from the students themselves. Past undergraduate projects include: Intelligent Telepresence: Immersive Living Room Capture System "Intelligent Multi-Camera Video Chat" uses facial detection algorithms on each of four Raspberry Pis (single-board computers) to choose and control one of four capture cameras. If more than one camera identifies a face, the face that is larger in the frame will be chosen. Then, based on the location of the face in the frame, our main computer can pan each camera independently using attached servo motors. Output from the selected camera is then passed through a video switcher for display. 
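The camera-selection rule described above — if several cameras detect a face, switch to the one whose face is largest in the frame — can be sketched as follows. The detection format below is an illustrative stand-in, not the project's actual code:

```python
def select_camera(detections):
    """Pick the camera whose detected face is largest in frame.

    detections: dict mapping camera id -> (face_width, face_height)
    in pixels, or None if that camera detected no face. Returns the
    winning camera id, or None if no camera saw a face.
    """
    best_cam, best_area = None, 0
    for cam, box in detections.items():
        if box is None:
            continue
        area = box[0] * box[1]   # larger bounding box == closer face
        if area > best_area:
            best_cam, best_area = cam, area
    return best_cam

# Cameras 1 and 2 both see a face; camera 2's is larger, so it wins.
chosen = select_camera({0: None, 1: (80, 100), 2: (120, 150), 3: None})
```

In the real system each Raspberry Pi would report its own detection, and the winning camera's feed is routed through the video switcher.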
Our imaging system has many applications including surveillance, automated lighting, and more, but we chose to demonstrate its potential use in a personal and conference video calling system. Learn more from the following video, produced by some IFE2013 students for the 2014 ImagineRIT Innovation & Creativity Festival: "X-ray Vision" with a Multicamera Array The focus of the 2012-2013 freshmen imaging project was "X-ray Vision" with a Multicamera Array. A multicamera array is an advanced imaging system that utilizes multiple camera perspectives. Multicamera arrays have been applied to synthetic aperture imaging, high-resolution still imaging, and special effects for cinematography, such as the "Bullet Time" scene from the film "The Matrix." Our class designed and built a prototype system that successfully demonstrates synthetic aperture imaging using multicamera technology. The large synthetic aperture of the prototype results in a small depth of field in the image, and the different locations of the component cameras allow users to "see through" occlusions in a manner that appears similar to "x-ray vision."

[Figure: individual frames from Cameras 1–6 and the final combined synthetic aperture image. Note how the subject of interest (student on couch) is visible through the obstruction (student in foreground).]

Multicamera Array System Workflow
- Cameras: Point Grey Chameleon cameras are set up to capture frames
- Arrangement: Each camera views the calibration target from a unique angle
- Frame Grabbers: Once frames are captured by the cameras, raw data flows to computers to be compressed
- Processor: The data is then sent to a custom computer where software is run to process a single image
- Synthetic Aperture Effect: The frames are combined to produce an unobstructed image of the target

An array like ours could be used in a variety of security applications. 
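The synthetic aperture effect in the workflow above amounts to registering each camera's frame onto a common focal plane and averaging: the in-focus subject lines up across views and reinforces, while a foreground occluder lands at a different pixel position in each view and is averaged away. A minimal NumPy sketch — the shifts and toy frames are illustrative stand-ins, not the class's actual calibration:

```python
import numpy as np

def synthetic_aperture(frames, shifts):
    """Average frames after shifting each onto a common focal plane.

    frames: list of same-shape 2-D arrays (grayscale images).
    shifts: per-camera (dy, dx) integer offsets aligning the chosen
            focal plane across views (from calibration in a real rig).
    """
    acc = np.zeros_like(frames[0], dtype=float)
    for frame, (dy, dx) in zip(frames, shifts):
        acc += np.roll(frame, (dy, dx), axis=(0, 1))
    return acc / len(frames)

# Toy demo: a "subject" identical in every view, plus an "occluder"
# that appears at a different column in each view.
subject = np.zeros((8, 8))
subject[4, 4] = 1.0
frames, shifts = [], []
for k in range(6):
    f = subject.copy()
    f[0, k] = 5.0          # occluder drifts across the six views
    frames.append(f)
    shifts.append((0, 0))  # subject plane already aligned in this toy
out = synthetic_aperture(frames, shifts)
# Subject reinforces (stays 1.0); occluder energy is spread thin.
```

With real parallax the occluder's residue is spread over many pixels, which is why it appears to dissolve in the combined image.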
This technology can be used to track an individual or object in a crowded or highly obstructed environment. Watch our video below (originally produced for Imagine RIT) to learn more. 3D Imaging for Medical Applications Using Structured Light The focus of the 2011 freshmen imaging project was 3D Imaging for Medical Applications Using Structured Light. What did we do and why? The Freshman Imaging Project class was asked by Dr. Bo Hu and Dr. Jack Wojtczak from the University of Rochester Medical Center to design a craniofacial phenotyper - a 3D scanner whose purpose is to take certain measurements along the curvature of a person's face. Dr. Hu and Dr. Wojtczak have done research which showed that certain facial measurements can help doctors determine whether or not a physician would have difficulty inserting a breathing tube into a patient prior to surgery. The goal of the Freshman Imaging Project class was to create a craniofacial phenotyper which quickly, accurately, and inexpensively provides a physician with the data that would enable that assessment. What is a structured light scanner, and how does it work? This system uses structured light technology to gather 3D data. Several patterns of alternating dark and light bars with different spatial frequencies and orientations are projected onto the subject. Two cameras take pictures of the patterns, and the system computes the deviation of the bars with respect to a flat reference to determine depth information. The system renders a 3D point cloud that is used for visualization and allows the physician to obtain specific measurements. What is a Craniofacial Phenotyper? Interactive Digital Images with Polynomial Texture Mapping: The Dome The focus of the 2010 freshmen imaging project was a Polynomial Texture Mapping (PTM) device. What are PTMs? PTMs are a type of interactive digital image that allows users to view an object from an infinite number of light source angles. 
This allows the user to uncover hidden textures, blemishes, and other surface features not visible using traditional photographic techniques. PTMs have uses in historical document and artifact imaging, forensics, dermatology, and more. Some example PTMs can be viewed on the Hewlett-Packard Labs website. How do you create a PTM? PTMs are created by taking many photographs of a static object from a fixed position using varying light angles. The individual image files are then run through software which models the luminance values at each pixel in the image and generates the final interactive PTM image. While this may sound like a straightforward process, there is no such thing as an "instruction manual" or assembly kit. What did the IFE2010 students do? The IFE students reached a number of milestones throughout the 2010-2011 academic year: - Research PTMs: how they work, how to make them, what to use them for - Design a robust system to capture images at multiple light angles, determining: - Structure - How to construct a dome or other setup to hold lights at different angles - Illumination - What type of lights to use, considering brightness, temperature, color, etc. - Capture (camera) - What type of camera, lens, and capture settings to use - Electronics - How to wire up and control the lighting and camera systems - Software - How to automate the system and process the imagery - Construct the PTM system - Demonstrate the PTM system at ImagineRIT - Perform actual research - The PTM system and 4 IFE students traveled to the Boston Public Library just after the conclusion of the academic year. (Pictures: 5/25/11; 5/26/11) Check out the videos below made by some IFE2010 students!
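The per-pixel modeling step behind a PTM can be sketched with the standard biquadratic used in polynomial texture mapping: each pixel's luminance is fit, by least squares, as a quadratic function of the light direction (lu, lv), and the fitted coefficients can then be evaluated at any new light angle. A minimal sketch on synthetic data — not the IFE system's actual pipeline:

```python
import numpy as np

def fit_ptm_coeffs(light_dirs, luminances):
    """Least-squares fit of the standard PTM biquadratic for one pixel.

    light_dirs: (N, 2) array of (lu, lv) light-direction components.
    luminances: (N,) observed luminance of the pixel under each light.
    Returns the six coefficients a0..a5 of
        L = a0*lu^2 + a1*lv^2 + a2*lu*lv + a3*lu + a4*lv + a5.
    """
    lu, lv = light_dirs[:, 0], light_dirs[:, 1]
    A = np.column_stack([lu**2, lv**2, lu * lv, lu, lv, np.ones_like(lu)])
    coeffs, *_ = np.linalg.lstsq(A, luminances, rcond=None)
    return coeffs

def relight(coeffs, lu, lv):
    """Evaluate the fitted polynomial at a new light direction."""
    a0, a1, a2, a3, a4, a5 = coeffs
    return a0*lu**2 + a1*lv**2 + a2*lu*lv + a3*lu + a4*lv + a5

# Toy check: a luminance series generated by a known polynomial
# is recovered exactly from 40 noiseless "photographs".
rng = np.random.default_rng(0)
dirs = rng.uniform(-1, 1, size=(40, 2))
true = np.array([0.2, -0.1, 0.05, 0.7, 0.3, 0.5])
obs = relight(true, dirs[:, 0], dirs[:, 1])
fit = fit_ptm_coeffs(dirs, obs)
```

A full PTM simply repeats this fit at every pixel and stores the six coefficients per pixel, which is what makes interactive relighting cheap.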
http://cis.rit.edu/prospective-students/application-based-curriculum
The present invention relates to a method and apparatus for generating a character pattern, wherein reference character data are defined as a coordinate point array on a stroke or edge, these coordinate points are developed into character pattern data corresponding to a designated character size, and the developed character pattern data is output. In a conventional character pattern generator, which receives a character code and generates a character pattern corresponding to the input character code, character pattern data for each size is stored in a bit map font format, and dot pattern data corresponding to an input character code is read out and used to display and print out data. When font data is stored in such a dot pattern format, character patterns for all sizes of each character must be stored, and the capacity of the font memory is greatly increased. Printers, display units, and the like capable of generating character patterns having various sizes are available in which font data representing a so-called vector or outline font, capable of generating character patterns having free sizes ranging from a large size to a small size for each alphabetic character, can be stored. The most important advantage of the so-called vector, outline or scalable font is that characters having free sizes ranging from a small size to a large size can be generated for each alphabetic character, as described above. However, when a character pattern having a small size is to be generated, the character pattern is deformed, and readability of the character is degraded. This is mainly because the area for displaying and outputting the character is small, and line segments constituting the character contact each other. This phenomenon typically occurs in multi-stroke characters such as kanji. No countermeasures are taken against this "deformation" phenomenon in conventional printers and display units. 
Even in the most advanced conventional devices, font data designed to prevent this "deformation" phenomenon is prepared in advance. When pattern data for a given character expected to cause this "deformation" phenomenon is to be generated, the prepared font data is used to develop the corresponding character into a pattern. This arrangement, however, requires a memory for storing the special font data. It is not economical to prepare such font data for all characters whose small character patterns may cause the "deformation" phenomenon. The most important factor in the field of printing is readability, i.e., an easy-to-read printed document. For example, when characters are arranged to compose a sentence by using characters each of which has a face designed to fully extend within the character frame, with a predetermined relationship between the character size and the character face size, no problem is posed by a relatively large character size of 16 points or more. However, when the character size is decreased, adjacent characters adversely affect each other and become contiguous with each other, thereby greatly degrading readability of the character arrangement. On the contrary, when a character is used which has a face designed in advance to be smaller than the body frame, the "deformation" phenomenon for a character arrangement using a smaller point size can be prevented. However, a character arrangement having a large character size then looks sparse, again degrading readability. In a character pattern generator using the outline or vector font described above, since the relationship between the character size and the character face size is predetermined regardless of different output sizes, a character having a small size is difficult to read. Document FR-A-2 638 264 discloses an apparatus and a method for avoiding deformations of a character font. 
The font is stored in a coordinate point format, and the character fonts are adjusted in accordance with the desired size. The invention provides: first memory means for storing reference character font data as a coordinate point array; second memory means for storing parameter data in correspondence with at least character size data; and means for generating size data for a target character, the size data being adjusted on the basis of the parameter data to be different from a desired size, wherein font data corresponding to an input character code is read out from said first memory means and is developed into pattern data of a size corresponding to the character size data converted by said generating means. In a first aspect, the present invention provides an apparatus for generating a character pattern, comprising: first memory means for storing reference character font data as a coordinate point array; second memory means for storing density data in correspondence with at least character size data or character type data; pattern developing means for reading out font data corresponding to an input character code and for developing the readout font data into pattern data of a designated character size; and means for adjusting, on the basis of the density data, a line segment width of a character pattern developed by said pattern developing means. In a second aspect, the present invention provides an apparatus for generating a character pattern, comprising: In a third aspect, the present invention provides a method of generating a character pattern using a first memory means which stores reference character font data as a coordinate point array and a second memory means which stores parameter data corresponding to character size data or character type data, the method comprising the steps of: reading out target font data in a desired size and generating a pattern having a size different from the desired size, adjusted on the basis of the parameter data. 
In a fourth aspect, the present invention provides a method of generating a character pattern using a first memory means which stores reference character font data as a coordinate point array and a second memory means which stores density data in correspondence with character size data or character type data, the method comprising the steps of: reading out target font data in a desired size; and generating a pattern having a line segment width adjusted on the basis of the density data.

An embodiment of the invention provides a method and apparatus for generating a character pattern, wherein when character patterns having various sizes are to be output, parameter data are multiplied with size data to output character patterns having optimal sizes for character arrangements.

Fig. 1 is a block diagram showing a schematic arrangement of a character pattern generator according to a first embodiment of the present invention; Fig. 2 is a flow chart showing character pattern generation processing in the character pattern generator according to the first embodiment; Fig. 3 is a block diagram showing a schematic arrangement of a character pattern generator according to a second embodiment of the present invention; Fig. 4 is a flow chart showing character pattern generation processing in the character pattern generator according to the second embodiment; and Fig. 5 is a sectional view showing an internal structure of a laser beam printer as an output unit.

Preferred embodiments of the present invention will be described in detail with reference to the accompanying drawings. It should be noted that the present invention can be realized by a single apparatus or by a system consisting of a plurality of apparatuses, and also encompasses realization by supplying a program to an apparatus or system.

Fig. 1 is a block diagram showing a schematic arrangement of a character pattern generator according to the first embodiment. Referring to Fig.
1, an input means 11 receives a character code signal, a pattern generation instruction, and the like for an output target from an external device (not shown). The input means 11 includes signal hold circuits such as a buffer and a flip-flop. A processor 12 is connected to the input means 11. The processor 12 generates a character pattern of a designated size and a designated font in accordance with a character code signal and a pattern generation instruction input from the input means 11. The processor 12 comprises a central processing unit (CPU) 120, a ROM 121 for storing control programs of the CPU 120 and various data, a RAM 122 used as a work area of the CPU 120, and the like. A pattern data memory 13 prestores character pattern and character attribute data constituted by character edges as a set of coordinate points at memory addresses corresponding to character codes. The pattern data memory 13 comprises a disk or a nonvolatile memory (e.g., a read-only memory). The processor 12 is connected to an output means 14. The output means 14 displays and outputs a character pattern processed by the processor 12. The output means 14 includes a signal hold circuit, a parallel/serial signal converter, and a printing unit/display unit such as a printer or display. The operations of the input means 11, the pattern data memory 13, and the output means 14 are controlled by the processor 12. An operation of the character pattern generator shown in Fig. 1 will be described with reference to a flow chart in Fig. 2. The control sequence in Fig. 2 is stored in the ROM 121 in the processor 12 and is executed by the CPU 120. This processing sequence is started when a character code signal, a pattern generation instruction, or the like is input from the input means 11. In step S1, a character code input from the input means 11 is read. 
In step S2, designated character size data is input and is converted into character face size data by using a parameter value 122a held in the RAM 122 in correspondence with each character size value. For example, assume that a designated character size is given as 10 points. If the parameter value 122a corresponding to this character size is 0.98, the actual character face size is calculated as 9.8 (= 10 x 0.98) points. A parameter value 122a is assigned to each character size (e.g., 10, 11, and 12 points). The input character size is multiplied by the corresponding parameter value, so that the character pattern can be displayed and output with a size slightly smaller than the designated character size. At this time, the character feed amount of characters displayed and output at the output means 14 is kept unchanged. Therefore, even if the character size is reduced, an easy-to-read document can be printed or displayed. The flow advances to step S3. The font data read out from the pattern data memory 13 is developed into a pattern on the basis of the character face size data obtained in step S2. The flow then advances to step S4. The pattern-developed character pattern data is output from the processor 12 to the output means 14, and the processing is ended. Note that the output destination of the pattern data may be any target such as a printing system (e.g., a ) or any target capable of transmitting data electrically, magnetically or in a format converted from the electrical or magnetic system (e.g., a CRT or a telephone line). According to the first embodiment, as described above, designated character size data is replaced with character face size data by using a parameter to generate a character pattern. Therefore, a character pattern of any size can be output with high readability.
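The step-S2 size conversion can be sketched in a few lines. This is a minimal illustrative sketch, not the patent's implementation: the table name `FACE_SIZE_PARAMETERS` and the parameter values for 11 and 12 points are assumptions (the text only gives 0.98 for a 10-point character), and sizes without an entry are left unscaled.

```python
# Hypothetical sketch of the step-S2 conversion from designated character
# size to character face size. Only the 10-point parameter (0.98) comes
# from the text; the other entries are illustrative assumptions.
FACE_SIZE_PARAMETERS = {
    10: 0.98,
    11: 0.97,  # assumed value
    12: 0.96,  # assumed value
}

def face_size(designated_points: float) -> float:
    """Multiply the designated size by its stored parameter value.

    The character feed (advance) amount is left unchanged elsewhere, so
    only the glyph face shrinks slightly, keeping small text readable.
    Sizes with no stored parameter are returned unscaled.
    """
    param = FACE_SIZE_PARAMETERS.get(designated_points, 1.0)
    return designated_points * param

# A 10-point request yields a 9.8-point face (10 x 0.98); a 16-point
# request has no parameter entry and is passed through unchanged.
small = face_size(10)
large = face_size(16)
```

Keeping the feed amount fixed while shrinking only the face is the key design choice here: it widens the gaps between adjacent glyphs at small sizes without disturbing line layout.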
The same reference numerals as in the character pattern generator of Fig. 1 denote the same parts in Fig. 3, and a detailed description thereof will be omitted. Referring to Fig. 3, a density adjustment data memory 15 prestores data representing whether density adjustment is to be performed at the time of generation of a character pattern, in correspondence with data representing the type of character and the character size. This density adjustment is processing for changing the line width of a pattern whose character size is converted. In particular, when a character pattern having a small size is to be generated, the line width of the character is decreased to prevent the character "deformation" phenomenon, which is caused by contact between adjacent line segments constituting a character or characters. Fig. 4 is a flow chart showing character pattern generation processing in the character pattern generator of the second embodiment. A control program for executing this processing is stored in a ROM 121a in a processor 12a. In step S11, a character code, a character pattern generation instruction, or the like is input from an input means 11; in step S12, density adjustment data of the corresponding character is read out from the density adjustment data memory 15 on the basis of the type of character or the character size data. The flow advances to step S13 to determine, on the basis of the density adjustment data, whether density adjustment must be performed. If YES in step S13, the flow advances to step S14. Pattern data of the corresponding character is generated in accordance with the font data corresponding to the character code and read out from a pattern data memory 13. At this time, processing such as thinning of the line segments of this character pattern is performed. However, if NO in step S13, the flow advances to step S15 to develop this character code into pattern data as in normal pattern development.
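The decision flow of steps S12 through S15 can be sketched as a simple table lookup followed by a branch. This is a hedged sketch, not the patent's code: the table `THIN_BELOW_POINTS`, its contents, the character-type keys, and the string return values are all illustrative assumptions standing in for the density adjustment data memory 15.

```python
# Hypothetical sketch of the second embodiment's density-adjustment flow.
# Stand-in for the density adjustment data memory 15: for each character
# type, the size (in points) below which line segments should be thinned.
# Both the keys and the thresholds are illustrative assumptions.
THIN_BELOW_POINTS = {
    "kanji": 8,   # dense glyphs touch at larger sizes (assumed threshold)
    "latin": 5,   # assumed threshold
}

def needs_density_adjustment(char_type: str, size_points: float) -> bool:
    """Step S13: decide whether thinning must be applied for this glyph."""
    threshold = THIN_BELOW_POINTS.get(char_type, 0)
    return size_points < threshold

def develop_pattern(char_type: str, size_points: float) -> str:
    """Branch to step S14 (thinned development) or S15 (normal)."""
    if needs_density_adjustment(char_type, size_points):
        return "thinned pattern"   # S14: line segments thinned
    return "normal pattern"        # S15: normal pattern development
```

Keying the decision on both character type and size matches the text: a dense glyph class can be thinned at sizes where a simple one is still safe.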
When the character pattern data corresponding to the character code input in step S11 is generated in step S14 or S15, the flow advances to step S16, and this pattern data is supplied to an output means 14, thereby printing it on recording paper or displaying it on a display. According to the second embodiment, as described above, even if a character having a small size is to be generated, the character pattern is not subjected to the character "deformation" phenomenon. Therefore, a character having high readability can be displayed and output. An arrangement of a laser beam printer 140 applicable to the output means 14 of this embodiment will be described with reference to Fig. 5. The present invention is not limited to the laser beam printer 140, but may be applied to a bubble-jet printer, an aerojet printer which injects ink by utilizing an air flow, a thermal printer, and the like, as a matter of course. Fig. 5 is a sectional view showing an internal structure of the laser beam printer 140 (to be referred to as an LBP hereinafter) of this embodiment. The LBP 140 can receive character pattern data from the processor 12 (or 12a) and can print it on printing paper. Referring to Fig. 5, the LBP 140 forms an image on recording paper serving as a recording medium on the basis of a character pattern supplied from the processor (e.g., 12 in Fig. 1). The LBP 140 has an operation panel 300 having switches and an LED display, and a printer control unit 101 for performing overall control of the LBP 140 and analyzing character pattern data or the like supplied from the processor 12. This printer control unit 101 mainly converts character pattern data into a video signal and outputs it to a laser driver 102. The laser driver 102 is a circuit for driving a semiconductor laser 103 and ON/OFF-controls a laser beam 104 emitted from the semiconductor laser 103 in accordance with an input video signal.
The laser beam 104 is swung in the right-and-left direction by a rotary polygonal mirror 105 and scans an electrostatic drum 106, thereby forming an electrostatic latent image of a character pattern on the electrostatic drum 106. Note that the pulse width of the beam is variably controlled so as to divide one pixel into 256 levels, thereby obtaining a multigradation expression. After the latent image is developed by a developing unit 107 arranged around the electrostatic drum 106, the visible image is transferred to recording paper. This recording paper is a cut sheet. The cut sheets are stored in a paper cassette 108 attached to the LBP 140. Each cut sheet is picked up and conveyed by a paper pickup roller 109 and convey rollers 110 and 111 and is supplied to the electrostatic drum 106. In each of the above embodiments, so-called outline font data, i.e., edge character data as a set of coordinate points, is stored in the pattern data memory 13. However, the pattern data memory 13 need not store the outline data as font data. Non-dot-matrix font data of a stroke format may be stored in the pattern data memory 13, as a matter of course. Conversion parameter values from character size data to character face size data may differ depending on target character fonts or types. According to the present invention, as has been described above, designated character size data is converted in accordance with parameter data to generate a character pattern corresponding to any character face size data. For this reason, a document having high readability can be output regardless of the character size. According to another aspect of the present invention, a highly readable character can be output without causing the character "deformation" phenomenon even in a character pattern having a small size.
Pre-existing inequalities faced by women have been amplified during the pandemic, with experts warning that without concerted action, advances in gender equality and the limited gains made on the gender pay gap risk being rolled back. Globally, women are typically paid less than their male counterparts. According to the Australian Bureau of Statistics, Australian women who work full-time are, on average, paid 14 per cent less, or around $254 less than men per week. At these rates, a woman on the average pay must work an extra 59 days a year to earn as much as her male counterpart. Chief executive officer of the Diversity Council Australia (DCA), Lisa Annese, says: “The cessation of free child care and the increase in caring responsibilities at home, combined with job losses, are all seriously affecting women’s employment participation. I’m really worried about the impact all this is having on women and on the gender pay gap – now and in the years ahead. If we don’t act to address these issues, they will continue to be entrenched in future generations.” One reason the pandemic has had a greater impact on women is that it has significantly increased the burden of unpaid care work. Women were already doing most of the unpaid care work prior to the onset of COVID-19, and research suggests the crisis and its subsequent shutdown response have dramatically increased this burden. In Australia, pre-virus estimates by the Workplace Gender Equality Agency revealed that for every hour of unpaid care work performed by men, women do one hour and 46 minutes.

Why it matters

The gender pay gap is a measure of women’s overall position in the paid workforce and does not compare like roles. It is the difference between women’s and men’s average weekly full-time equivalent earnings, expressed as a percentage of men’s earnings. Addressing workplace gender pay equality can reduce staff turnover, improve morale and productivity, and boost an organisation’s reputation as a great place to work.
In addition, recent research by McKinsey and Co. found that what is good for gender equality is great for the economy and society as well. Gender pay gaps have a compounding effect that reduces a woman’s earning capacity over her lifetime. The gap is influenced by a number of factors, including:
- Conscious and unconscious discrimination and bias in hiring and pay decisions
- Women and men working in different industries and different jobs, with female-dominated industries and jobs attracting lower wages
- Lack of workplace flexibility to accommodate caring and other responsibilities, especially in senior roles
- High rates of part-time work for women
- Women’s greater time out of the workforce for caring responsibilities impacting career progression and opportunities
- Women’s disproportionate share of unpaid caring and domestic work

What can businesses do to help close the gap?

There is no single solution to reducing the gender pay gap, but taking action on contributing factors and helping women thrive can reduce inequalities in the workplace and stop the gender pay gap backsliding. Here are five proven strategies for reducing the gender pay gap in your business:

Conduct a gender pay gap analysis

An audit will help you address and improve pay equity by collecting and reviewing relevant data, identifying any instances of unequal pay, and understanding what is driving any gender pay gaps. Areas of investigation could include: Are women disadvantaged at intake or in ongoing raises? Are there large gender disparities in representation in different parts of your firm? Is there a high rate of attrition among female employees? To assist with the audit process, WGEA has an online Gender Equality Strategy Guide and a Gender Equality Diagnostic Tool.

Address unconscious bias

Unconscious bias can have a big impact on people-related decisions at work, especially when it comes to recruitment, promotion, performance management and ideas generation.
When bias is prevalent, your business will have less diversity in teams and fewer opportunities for development and career progression. Gender pay gaps can be a sign of both conscious and unconscious biases, with clear evidence that, between a male and a female employee, unconscious bias often results in higher pay for the male employee. Include audits and structured monitoring policies to identify biases against women and ensure your business is not systematically rating men more highly and promoting them more quickly than women. Train managers to understand the impact of gender bias on their decision-making, and put clear and consistent criteria in place to reduce bias in staffing decisions and performance reviews.

Reset norms around flexibility

Allowing employees to work flexibly to better accommodate commitments outside work is a simple and effective way to reduce the gender pay gap. By building a workplace culture that supports flexible arrangements and avoids the “motherhood penalty”, you will promote higher return-to-work rates, increased employee engagement and greater loyalty. In addition, you can support the growth and development of all employees by establishing progressive, gender-neutral paid parental leave policies that go above and beyond legal requirements.

Commit to gender balance in leadership

Recognise the importance of gender balance in leadership as a means of increasing profitability and innovation. Setting targets and making public commitments to improve gender balance can boost workplace performance while simultaneously acting to correct gendered pay inequities. A study by Bankwest Curtin Economics Centre (BCEC) in partnership with the WGEA found that female representation in senior leadership roles significantly reduced the gender pay gap and improved company profitability and productivity.

End pay secrecy

To close the gender pay gap we need to end pay secrecy.
In Australia, it is not uncommon for employers to prohibit employees from openly discussing or sharing details about their pay. Don’t bar employees from discussing wages and be open about salary ranges and how compensation is decided. This will make your workplace a fairer place to work – not only for women but for all your employees. When employees believe they are rewarded fairly for their work, they are more likely to put in extra effort and help co-workers. Finally, an additional area to consider is making sure women have equal opportunities for advancement. This could include introducing a mentoring program or offering training for women to learn concrete steps to negotiate for a raise or increased flexibility. Research has shown that when pay is negotiated men tend to benefit more than women. Supporting women to develop negotiating skills is therefore important for supporting women to achieve better outcomes. References and further resources:
https://www.carecorporate.com.au/portfolio-item/addressing-the-gender-pay-gap-a-business-priority/
1. Introduction {#sec1}
===============

It is well known that adipose tissue is an endocrine organ. It secretes adipokines, which act at endocrine, paracrine, and autocrine levels \[[@B1]\]. These adipokines are not only synthesized and secreted mainly by adipocytes, but are also synthesized and secreted by the other cells that make up the adipose tissue, such as macrophages, lymphocytes, and fibroblasts \[[@B2], [@B3]\]. Moreover, proinflammatory cytokines are secreted mainly by nonadipose cells in adipose tissue \[[@B3]\]. The prevalence of obesity in men of childbearing age has tripled in the last 30 years \[[@B4]\], which coincides with an increase in infertility that currently affects one in six couples in France (according to the annual report of the ABM in 2012). Indeed, the Institute of Public Health Surveillance (InVS) found a secular decline in sperm concentration in Western Europe over the past decades. The link between these two public health problems has been widely described. Studies carried out on large cohorts (1558 men \[[@B5]\] and 526 men \[[@B6]\]) showed a significant correlation between a drop in sperm parameters and an increase in body mass index (BMI) above 25 kg/m^2^. The study by Jensen et al. \[[@B5]\], carried out on 1558 men, showed a decrease in sperm concentration and count of 21.6% (95% CI: 4.0--39.4%) and 23.9% (95% CI: 4.7--43.2%), respectively, when the BMI was higher than 25 kg/m^2^. In addition, a decrease in sperm motility was observed by an Argentinian team in obese patients (51.4% in the normal BMI group versus 46.6% when BMI was higher than 30, *p* \< 0.007) \[[@B7]\]. In 2007, a Chinese study similarly found a decrease in sperm parameters (count, concentration, and morphology) in overweight subjects, regardless of circulating concentrations of LH, FSH, estradiol, and testosterone \[[@B8]\]. This suggests that these hormones alone do not explain the association between BMI and sperm parameters.
Moreover, obesity is promoted by a positive energy balance, which impacts the function of the cells involved in spermatogenesis \[[@B9]\]. This hypothesis is reinforced by results obtained in animal experiments, which showed the existence of a direct relationship between epididymal adipose tissue and fertility, since in rats the removal of this tissue caused a significant decrease in sperm count \[[@B10]\]. Relationships between circulating concentrations of adipokines and BMI have been widely studied. Indeed, different studies showed a variation of these factors associated with overweightness. Thus, obesity is associated with hyperleptinemia and leptin resistance \[[@B11]\]. In contrast, adiponectinemia decreases in overweight individuals \[[@B2]\]. Interestingly, these variations are not definitive, since they are reversible after weight loss \[[@B12]\], especially after bariatric surgery. Nevertheless, evidence has established an association between circulating concentrations of adipokines and sperm quality. Thus, comparing two groups (obese fertile versus infertile men), an Egyptian team observed higher circulating concentrations of leptin in the infertile group than in the fertile group \[[@B13]\]. It has also been shown that leptinemia was positively correlated with abnormal sperm morphology and negatively correlated with sperm concentration and motility \[[@B13], [@B14]\]. This correlation could be the result of the higher circulating leptin levels observed in obese or overweight men leading to a decreased testosterone production by Leydig cells, which is able to interfere with the normal cycle of spermatogenesis \[[@B15]\]. Although it is not an adipokine, ghrelin, a peptide hormone secreted by the stomach which is increased in obesity, is also present in the whole human testis and more particularly in Leydig and Sertoli cells. Its receptors (growth hormone secretagogue receptor (GHS-R)) have been identified in germ cells \[[@B15]\].
*In vivo* studies demonstrated that ghrelin inhibits the proliferative activity of immature Leydig cells and regulates stem cell factor mRNA expression in rat testis \[[@B15]\]. This fasting-related hormone is thus also involved in male fertility. Sperm quality is therefore related to the circulating concentrations of adipokines, but the link with fertility is not currently established. In addition, the concentrations of adipokines in blood and in seminal plasma are not in the same range. Indeed, adiponectin is 1000 times lower in seminal plasma than in blood, whereas progranulin and visfatin are 100 times more concentrated \[[@B2]\]. The differing concentrations between these two biological fluids suggest a difference in production and a potential action on the surrounding cells (germ cells for sperm). Indeed, several studies carried out in humans and animals showed that most of the adipokines and their receptors are expressed in the testis, especially in the seminiferous tubules and more specifically in Leydig and Sertoli cells, and on spermatozoa themselves \[[@B16]\]. Thus, the adipokines of seminal plasma could be privileged actors in the relationship between obesity and fertility. Obesity is characterized by an increased number of adipose cells and an excessive storage of triglycerides in these cells. The hormonal interaction between the adipose tissue and other endocrine organs, including the gonads, is complex and not fully understood. Some endocrine changes involving adipokines could help explain the negative effects of obesity on reproductive function. Many studies have shown the presence and the role of adipokines and their receptors in the female reproductive tract of different species. However, fewer studies have investigated the role of adipokines in male fertility, whether in the context of obesity or not, whereas it has consistently been shown that a high BMI reduces male fertility.
The present review will highlight the location of adipokines in the male genital tract, the molecular mechanisms of action of these molecules, and their potential effect on sperm parameters in human and animal models when this information is available. Indeed, several adipokines (leptin, adiponectin, resistin, chemerin, visfatin, vaspin, and progranulin) and certain cytokines have already been detected in semen. For adipokines that have been studied thoroughly, we will also report their effects on spermatozoa.

1.1. Leptin {#sec1.1}
-----------

### 1.1.1. Topography in Male Genital Tract and Mechanism of Action {#sec1.1.1}

Leptin is an adipokine mainly secreted by adipose tissue. This hormone of 167 amino acids is encoded by the obese gene (*ob* gene) \[[@B17]\], and its tertiary structure consists of four alpha helices connected by two long loops and one short loop \[[@B18]\]. This molecule has been widely studied in animals and in humans. Leptin signaling via STAT3 suggests a role in the proliferation of undifferentiated germ cells. Leptin activation of prosurvival pathways may lead to the activation of ERK1/2 signaling, representing capacitation signaling crosstalk ([Figure 1](#fig1){ref-type="fig"}). It is intriguing to speculate that acrosomal leptin receptor expression is associated with cholesterol efflux and the acrosome reaction, whereas tail leptin receptor expression in human sperm may reflect leptin\'s modulation of hyperactivated sperm motility. Leptin STAT3 signaling may enable undifferentiated germ cells to replicate without loss of potency while triggering late-stage spermatocytes to undergo development and differentiation \[[@B19]\]. Moreover, leptin modulates the nutritional support of spermatogenesis by human Sertoli cells \[[@B15]\]. Indeed, a Portuguese team demonstrated that acetate production by human Sertoli cells, a central metabolite for spermatogenesis, is severely decreased after exposure to leptin (5 to 50 ng/mL) \[[@B9]\].
Leptin is present in the testis and particularly in the seminiferous tubules \[[@B20]\]. In animals, studies demonstrate that leptin is expressed differentially between species. Indeed, in pigs, leptin and its receptors are expressed in Leydig cells, whereas in mice, no leptin is present in interstitial cells. In rats, leptin receptor (LepR) mRNA is present in Leydig cells, in Sertoli cells, and possibly in germ cells \[[@B21]\]. Concerning dogs, LepR is absent from Leydig cells and Sertoli cells but present in spermatocytes and spermatids \[[@B22]\]. Aquila \[[@B23]\] has demonstrated the presence of leptin in human sperm at different levels: mRNA expression, protein expression, and immunolocalization. In humans, the presence of the leptin receptor has been reported in seminiferous tubules \[[@B24]\]; however, only Jope et al. \[[@B25]\] have reported that seminal plasma and sperm contain this receptor \[[@B26]\]. The presence of leptin receptors on the tail of spermatozoa suggests an effect on motility \[[@B25]\], as described in [Section 1.1.2](#sec1.1.2){ref-type="sec"} of this review. The leptin receptor has also been reported to be present in the sperm of certain species, but there are also reports claiming its absence in other species. Hatami-Baroogh et al., using several commercial and noncommercial antibodies and various techniques, were unable to detect leptin receptors at the protein level in human spermatozoa of fertile (*n* = 22) and infertile (*n* = 50) individuals \[[@B27]\]. Ishikawa reported that in humans, the leptin receptor is present in testicular tissue and confined only to Leydig cells, and is not expressed by Sertoli cells, germ cells, or spermatozoa \[[@B28]\]. The differences in leptin receptor location have been attributed to species differences.

### 1.1.2. Effects on Semen Parameters {#sec1.1.2}

Although different studies showed contrasting results, it is possible to consider a physiological role of leptin on sperm motility.
In fact, studies in which the seminal plasma studied had high concentrations of leptin showed that this adipokine was inversely correlated with sperm motility. Glander et al. \[[@B24]\] showed a negative correlation between seminal leptin and progressive (*r* = −0.53, *p* = 0.0004) and straight-line (*r* = −0.3, *p* = 0.029) motility for 64 male partners of couples consulting for infertility. This team found an average leptin concentration in seminal plasma of 2.4 ng/mL; after separation into "normozoospermic" and "pathozoospermic" groups, the mean concentrations of seminal leptin were 1.5 ng/mL and 3.19 ng/mL, respectively. Two other studies have shown a negative correlation between leptin concentrations in seminal plasma and progressive motility. The first study \[[@B26]\] was performed on 79 men with asthenospermia (\[leptin\] = 4.72 ng/mL) and 77 control men (\[leptin\] = 3.75 ng/mL). The second study \[[@B20]\] involved 42 infertile patients with varicocele (\[leptin\] = 3.01 ng/mL) compared to 10 control men (\[leptin\] = 1.79 ng/mL). It is important to note that a higher concentration of seminal leptin is often associated with sperm pathologies, suggesting that high concentrations of leptin in seminal plasma have deleterious effects. Finally, other studies have concluded that there is no correlation between seminal leptin and sperm motility \[[@B29], [@B30]\]. In these studies, patients had relatively low leptin concentrations (0.93 ng/mL and 0.95 ng/mL). Despite high concentrations of seminal leptin (5 ng/mL in the nonobese group versus 12.5 ng/mL in the obese group), a South African team found no correlation between seminal leptin and sperm motility \[[@B31]\]. The obese group nevertheless had significantly lower sperm motility than the nonobese group (42.2% versus 54.4% for total motility), together with a higher seminal leptin concentration.
Thus, analysis of the studies published to date suggests that increased seminal leptin concentration is associated with decreased motility. It can therefore be hypothesized that, at high concentrations, leptin in seminal plasma is associated with a decrease in sperm motility \[[@B2]\]. At lower, "physiological" concentrations, leptin may either have a physiological effect beneficial to motility or have no effect. The same type of result is found when we explore the relationship between seminal leptin and sperm concentration in the ejaculate. Thus, for low concentrations of leptin (0.83--0.91 ng/mL), there is a positive correlation between seminal plasma leptin and sperm concentration (*r* = 0.24, *p* \< 0.05) \[[@B2]\]. On the other hand, studies carried out on patients with high seminal concentrations of leptin show a negative correlation between this adipokine and not only the concentration (*r* = −0.187, *p* \< 0.05) but also the spermatozoa count (*p* = 0.0001) \[[@B20], [@B32], [@B33]\]. Concerning sperm vitality, two studies did not find any correlation with the seminal levels of leptin \[[@B29], [@B31]\]. A Chinese team \[[@B33]\], comparing 74 varicocele patients, 70 leukocytospermia patients, and 40 control patients, described a negative correlation in the case of associated pathology, but without supporting this observation with statistical analysis. They showed that patients with varicocele (VC, \[leptin\] = 3.2 ng/mL) and leukocytospermia (LC, \[leptin\] = 2.72 ng/mL) had high concentrations of seminal leptin as well as increased ROS (reactive oxygen species) and apoptosis compared to the control group. They also noted that there was a correlation, for the VC and LC groups, between leptin, apoptosis, and ROS. ROS are markers of oxidative stress, and an increase in ROS induces a deleterious effect on sperm function \[[@B34], [@B35]\]. It appears, therefore, that at high concentrations, leptin may be a proapoptotic factor.
The correlation between ejaculate volume and seminal leptin has been poorly studied so far, since only two studies, with contradictory results, are available. On the one hand, Thomas et al. \[[@B2]\] found a negative correlation (*r* = −0.34, *p* \< 0.01), whereas Leisegang et al. \[[@B31]\] showed a lack of correlation between levels of seminal leptin and the volume of the ejaculate. It is therefore difficult to decide whether or not there is a link between seminal leptin and ejaculate volume. On the other hand, seminal leptin does not seem to have any effect on sperm morphology, since three studies agree on the lack of correlation between these two parameters \[[@B2], [@B29], [@B31]\]. To sum up, the analysis of these different studies suggests the existence of an "ideal" seminal concentration of leptin, at which this adipokine would have a physiological effect, whereas at high concentrations its effects on sperm parameters could be deleterious. Indeed, at high concentrations, leptin is rather associated with an alteration of certain sperm parameters, which could have an impact on fertility. Altered leptin dynamics may contribute to male infertility via at least two mechanisms, both of which may produce hypogonadism: leptin resistance or leptin insufficiency at the hypothalamus, and leptin modulation of testicular physiology.

### 1.1.3. Direct Effect on Motility {#sec1.1.3}

Leptin may have a physiological role in the male reproductive tract. Thus, an *in vivo* study and an *in vitro* study showed a positive correlation between seminal leptin and motility. The *in vivo* study was performed on 96 men without pathologies associated with spermatogenesis and showed a positive correlation between seminal leptin and progressive (*r* = 0.27, *p* \< 0.01) and total (*r* = 0.23, *p* \< 0.05) motility \[[@B2]\].
The mean leptin concentrations in seminal plasma were 0.91 ng/mL in normal-weight men and 0.83 ng/mL in overweight or obese men. The *in vitro* work directed by Lampiao and du Plessis \[[@B36]\] aimed to study the effect of leptin on sperm motility: after 1, 2, and 3 hours of incubation, leptin significantly increased total and progressive motility (*p* \< 0.05). This study was performed on spermatozoa from normozoospermic donors. In buffalo, Khaki\'s team conducted two protocols. In the first, they added 10 ng/mL of leptin to spermatozoa in semen, which was shown to preserve motility and vitality during the freezing process compared to the control group. In the second, they added 200 ng/mL, which had a deleterious effect on semen parameters \[[@B37]\]. This deleterious effect at high leptin levels supports the dual, concentration-dependent effect of leptin. In male mice, diet-induced obesity induces not only significant impairments of sperm function parameters but also disruption of blood-testis barrier integrity \[[@B38]\]. Even though it has been shown that leptin can cross the blood-testis barrier \[[@B39]\], we can hypothesize that obesity could also facilitate the passage of leptin and other adipokines through this barrier.

### 1.1.4. Transgenic Animal Model (cf [Table 1](#tab1){ref-type="table"}) {#sec1.1.4}

A recent study showed testicular atrophy in an *ob/ob* mouse model, with testis weight 13% lower than in the control group (*p* \< 0.0001) despite higher body weight \[[@B40]\]. Likewise, this model displayed a decrease in the nuclear volume of Sertoli cells, spermatogonia, and spermatocytes. The same transgenic mouse model had already demonstrated these effects in 2006 \[[@B41]\]. Furthermore, leptin treatment of adult *ob/ob* males corrects their sterility, an effect that is mediated at least partly by a normalization of testicular weight, spermatogenesis, and Leydig cell morphology \[[@B42]\].
It was shown that LepR-null mice present an infertile phenotype \[[@B43]\] and a decrease in gonadal functions \[[@B44]\]. Male *ob/ob* mice are morbidly obese and infertile. Similar phenotypes are observed in LepR-deficient mice and the Zucker fatty (fa/fa) rat. Pubertal obese Zucker rats present altered spermatogenesis at the histological level, which persists into adulthood; in quantitative analysis, sperm production in the fatty animals was reduced as well, but only in the pubertal rats. On the other hand, the increased sperm DNA fragmentation found in the adult rats points to genetic damage in the fatty rat gamete, which may be a lead for understanding the obese Zucker rat\'s infertility \[[@B45]\].

### 1.1.5. Polymorphisms in Human {#sec1.1.5}

Different polymorphisms of leptin exist and have been characterized in many studies. Leptin levels are higher in the AA genotype than in the AG genotype \[[@B46]\]. The distribution of the LEP-2548G/A genotype differs between fertile and infertile patients (*p* = 0.012): the AA genotype is more frequent in the infertile group and the AG genotype less frequent, suggesting that the AG genotype has a protective effect on fertility, reducing the risk of male infertility by 3-fold \[[@B47]\]. Sperm count is higher in the infertile group with the AG and GG genotypes than with AA (*p* = 0.0009 and *p* = 0.026). Polymorphisms of the leptin receptor also exist and influence sperm motility: progressive motility is higher in the RR genotype than in the QQ and QR genotypes \[[@B47]\]. All these data support a local role for leptin in sperm parameters, with a consequent potential impact on fertility capacity.

1.2. Adiponectin {#sec1.2}
----------------

### 1.2.1. Topography in Male Genital Tract and Mechanism of Action {#sec1.2.1}

Adiponectin is a protein of 224 amino acids mainly produced by white adipose tissue but also found in other tissues such as bone and muscle \[[@B16]\].
Unlike the majority of adipokines, plasma adiponectin concentration is negatively correlated with BMI and visceral adiposity \[[@B48]\]; adiponectin also regulates the gonadotropic axis and gonad function \[[@B49]\]. Adiponectin is found in the circulation in various molecular forms: the so-called LMW (low molecular weight) form, corresponding to an assembly of 3 adiponectin monomers into trimers; the MMW (medium molecular weight) form, corresponding to hexamers (an assembly of 2 trimers); and the HMW (high molecular weight) form, which corresponds to an assembly of 3 hexamers \[[@B50]\]. The HMW form of adiponectin is the predominant circulating form (\>80%) and appears to be the most active. Most studies published to date have been conducted with total adiponectin, but some have measured the HMW form. Adiponectin mRNA is present in testis and has been found in Leydig cells \[[@B51]\] and spermatocytes \[[@B52]\]. AdipoR1 and AdipoR2 (adiponectin receptors) are present in testis \[[@B53], [@B54]\], more particularly in the seminiferous tubules and in the interstitial tissue of rats \[[@B48]\]. Indeed, these receptors are abundant in Sertoli cells, Leydig cells, and germ cells in rats. Kawwass et al. also reported the presence of adiponectin receptors on spermatozoa themselves \[[@B54]\]. Adiponectin and adiponectin receptors have been immunolocalized on bull\'s spermatozoa in the acrosomal, postacrosomal, equatorial, and tail regions \[[@B55]\]. Adiponectin protein is abundant in the tail region of bull sperm, while AdipoR1 is localized mainly at the equatorial and acrosomal regions and AdipoR2 is expressed primarily on the sperm head and on the equatorial line. Adiponectin and its receptors are expressed both before and after sperm capacitation, suggesting that adiponectin might have a role in sperm capacitation \[[@B55]\]. Thus, local actions of adiponectin in testis are involved in the production of sperm capable of fertilization \[[@B56]\].
### 1.2.2. Effects on Semen Parameters {#sec1.2.2}

To our knowledge, only one team to date \[[@B2]\] has studied the relationship between seminal adiponectin concentrations (total adiponectin) and sperm parameters in humans. This study suggested that adiponectin has a rather positive effect on sperm function. Mean seminal adiponectin concentrations of 16.8 ng/mL and 14.2 ng/mL (a thousandfold lower than adiponectinemia) were measured in normal-weight subjects and in overweight or obese patients, respectively. Adiponectin levels in seminal plasma were positively correlated with sperm concentration, sperm count, and the percentage of typical sperm forms. An animal study showed that fertility in bulls was positively correlated with the seminal concentration of adiponectin (*r* = 0.80, *p* \< 0.0001) and with its receptors on spermatozoa, AdipoR1 (*r* = 0.90, *p* \< 0.0001) and AdipoR2 (*r* = 0.65, *p* \< 0.0001) \[[@B51]\]. After capacitation, the levels of adiponectin and its receptors are lowered, suggesting a direct role in sperm motility. Interestingly, a novel association of the adiponectin system with sperm motility was shown in rams \[[@B57]\] ([Figure 2](#fig2){ref-type="fig"}).

### 1.2.3. Transgenic Animal Model ([Table 1](#tab1){ref-type="table"}) {#sec1.2.3}

An adiponectin receptor gene knockdown study performed in mice highlighted the potential importance of the adiponectin pathway in the male genital tract. Indeed, this work showed that the loss of AdipoR2 was responsible for seminiferous tubular atrophy associated with aspermia and reduced testicular weight \[[@B54]\] ([Figure 2](#fig2){ref-type="fig"}). Moreover, a decrease in testis weight was evidenced by Bjursell et al. with the same model \[[@B58]\]. It seems that adiponectin may play a beneficial role in male reproductive function, but this pathway has yet to be studied further and confirmed. There are no *in vitro* studies carried out on seminal adiponectin.

### 1.2.4. Polymorphisms in Human {#sec1.2.4}

To the best of our knowledge, adiponectin polymorphism has only been described in females, with a link to insulin resistance in polycystic ovary syndrome patients \[[@B59]\]. No polymorphism has been described in relation to male infertility.

1.3. Resistin {#sec1.3}
-------------

### 1.3.1. Topography in Male Genital Tract and Mechanism of Action {#sec1.3.1}

Resistin is a 12.5 kDa adipokine belonging to a family of cysteine-rich proteins \[[@B16]\]. It is present in testis, in the seminiferous tubules, and specifically in Leydig and Sertoli cells \[[@B60]\] of animals, but this has not been demonstrated in humans. TLR-4, a binding site for resistin, has been found in human sperm \[[@B16]\].

### 1.3.2. Effects on Semen Parameters {#sec1.3.2}

To our knowledge, only three studies have measured resistin in seminal plasma. The team of Moretti et al. showed a negative correlation between the concentrations of seminal resistin and sperm motility and vitality \[[@B61]\]. Two other teams \[[@B2], [@B62]\] studied the relationships between resistin concentrations in seminal plasma and sperm parameters but did not show any significant correlation. Given the low number of available studies, it is difficult to conclude on the role of resistin, which seems to have a rather negative effect on spermatozoa and thus on fertility. However, it has been shown that this adipokine is associated with markers of inflammation in seminal plasma. Indeed, the concentrations of seminal resistin correlate positively with those of proinflammatory mediators such as elastase, interleukin-6 (IL-6) \[[@B62]\], and tumor necrosis factor-*α* (TNF-*α*) \[[@B61]\]. During inflammation, the concentrations of cytokines and ROS increase, and this may have a deleterious effect on the male reproductive function \[[@B63], [@B64]\].
Indeed, it has been shown that an increase in ROS can induce a decrease in sperm concentration, motility, and count \[[@B34]\]. In the study published by Moretti et al., the seminal concentrations of resistin were significantly higher in cases of leukocytospermia or if the patients were smokers \[[@B61]\]. This increase in resistin concentrations was also associated with a significant increase in TNF-*α* and IL-6, as well as a sharp decrease in sperm motility and in the number of spermatozoa with normal morphology in patients with leukocytospermia. All these results suggest that resistin could be considered a marker of inflammation and that, in pathological situations such as leukocytospermia, the presence of this adipokine would be related to an alteration of sperm parameters.

### 1.3.3. Polymorphisms in Human {#sec1.3.3}

As for adiponectin, resistin polymorphism has only been described in females, more particularly in cases of polycystic ovary syndrome \[[@B65]\].

1.4. Chemerin {#sec1.4}
-------------

### 1.4.1. Topography in Male Genital Tract and Mechanism of Action {#sec1.4.1}

Chemerin, a recently discovered adipokine, is synthesized mainly by the liver, kidney, and adipose tissue \[[@B66]\]. Few studies have been carried out on this adipokine, and in particular on its role in reproductive function. In humans as in rodents, the chemerin receptors (CMKLR1, GPR1, and CCRL2) are present in testis. Chemerin, CMKLR1, and GPR1 are localized specifically on Leydig cells and poorly on germ cells \[[@B16]\].

### 1.4.2. Effects on Semen Parameters {#sec1.4.2}

To our knowledge, only one study has been carried out in humans for this adipokine \[[@B2]\]. Chemerin was detected in the seminal plasma of 96 men with no spermatogenesis abnormalities, and it was shown that this adipokine correlated negatively with sperm motility and positively with sperm concentration.
Thomas\' team \[[@B2]\] showed increased chemerin concentrations in the semen of control subjects compared to a group of vasectomized patients (*p* \< 0.001). These data suggest a local secretion of chemerin in the male genital tract, particularly at the testicular level.

### 1.4.3. In Vitro Experiment {#sec1.4.3}

Surprisingly, experiments conducted *in vitro* on rats demonstrated that chemerin has an inhibitory effect on steroidogenesis \[[@B16]\] ([Figure 3](#fig3){ref-type="fig"}). The roles played by this adipokine in human semen need to be further investigated.

1.5. Visfatin {#sec1.5}
-------------

Visfatin, also known as NAMPT, is a recently discovered adipokine produced primarily by perivascular adipose tissue. It has been found in Leydig cells, spermatocytes, and spermatozoa \[[@B16]\]. Visfatin levels are a hundred times higher in seminal plasma than in blood, suggesting a significant local production in the male genital tract \[[@B2], [@B16]\]. No other studies are available to further understand the effects of this adipokine on male fertility.

1.6. Vaspin {#sec1.6}
-----------

Vaspin, another recently discovered adipokine, is expressed in epididymal, retroperitoneal, and mesenteric adipose tissue and is related to the metabolic state \[[@B67]\]. Thomas et al. showed that seminal plasma vaspin was negatively correlated with ejaculate volume (*r* = −0.36, *p* \< 0.001) and positively correlated with sperm DNA fragmentation (*r* = 0.22, *p* \< 0.05) \[[@B2]\].

1.7. Progranulin {#sec1.7}
----------------

Progranulin is increased in cases of obesity or metabolic syndrome and could contribute to the inflammatory mechanisms found in certain pathologies via recruitment of macrophages \[[@B68]\]. This adipokine has been studied in seminal plasma only by Thomas et al. \[[@B2]\].
Progranulin is positively correlated with motility (*r* = 0.32, *p* \< 0.001), sperm count (*r* = 0.23, *p* \< 0.05), and sperm morphology (*r* = 0.25, *p* \< 0.01). In vasectomized patients, seminal progranulin levels were significantly decreased (*p* \< 0.05), indicating probable local secretion.

1.8. Cytokines {#sec1.8}
--------------

### 1.8.1. Topography in Male Genital Tract and Mechanism of Action {#sec1.8.1}

In the wide family of cytokines, some have been described in semen and related to male fertility. The presence of tumor necrosis factor- (TNF-) *α* and interferon- (IFN-) *γ* will be further discussed here. In dogs, TNF is present in testis (more particularly in germ cells, but not in Sertoli or Leydig cells), epididymis, and spermatozoa \[[@B69]\]. A proinflammatory cytokine like TNF-*α* can directly impair the seminiferous epithelium by damaging the expression and assembly of junctional proteins, leading to an impairment of the blood-testis barrier \[[@B70]\]. Moreover, proinflammatory cytokines disrupt the seminiferous and epididymal epithelia by creating high levels of ROS \[[@B70]\].

### 1.8.2. Effects on Semen Parameters {#sec1.8.2}

Different authors report that cytokine levels are increased in the seminal plasma of infertile males \[[@B71]--[@B73]\]. This is the case for TNF-*α* and IFN-*γ*, which rise in the semen of males with inflammation linked to infertility \[[@B74]\]. On the one hand, cytokines seem to have a deleterious effect on sperm motility \[[@B75], [@B76]\]. This was confirmed by Paradisi, who showed a negative correlation between IFN-*γ* and sperm concentration, motility, and morphology \[[@B73]\]. In 2013, it was also confirmed that TNF-*α* levels are increased in the seminal plasma of oligozoospermic (42%, *p* \< 0.01) and asthenospermic patients (58%, *p* \< 0.001) compared to control patients. On the other hand, one study did not find any effect of either TNF-*α* or IFN-*γ* on sperm motility \[[@B77]\].

### 1.8.3. In Vitro Experiments {#sec1.8.3}

One study observed the *in vitro* effects of TNF and IFN on spermatozoa for 3 hours and showed an 18% decrease in sperm motility between 60 and 180 minutes and a 16% decrease in sperm vitality at 180 minutes \[[@B78]\].

### 1.8.4. Polymorphisms in Human {#sec1.8.4}

A polymorphism in the TNF-*α*\_308 gene was associated with a significant decrease in sperm count, sperm motility, normal sperm morphology, and acrosin activity \[[@B79]\]. In the same study, the frequency of the A allele was significantly higher in infertile patients than in fertile controls (21.6% versus 9.7%; OR: 0.388, *p* = 0.005). The TNF-*α* AA genotype corresponds to a sperm profile of lower concentration, motility, and normal morphology. Moreover, the TNFR-1 36G allele is found more frequently in oligozoospermia and is associated with a decrease in sperm concentration \[[@B80]\].

2. Discussion and Conclusion {#sec2}
============================

Taken together, leptin is the most studied adipokine in male fertility; fewer data are available for the other adipokines. For example, it is still unclear whether adiponectin, resistin, visfatin, vaspin, progranulin, and chemerin are able to cross the blood-testis barrier. Leptin is present in germ cells, but there is no consensus on the presence of its receptor on sperm. This could depend on the origin of the spermatozoa: Jope et al. found LepR on ejaculated spermatozoa \[[@B25]\], whereas Ishikawa, who looked for it on spermatozoa obtained directly from the testis, could not find any LepR on these sperm \[[@B28]\]. Moreover, the other article that concluded that LepR is absent from spermatozoa nonetheless detected LepR by RT-PCR in 1 of 10 controls and 3 of 23 infertile patients \[[@B27]\]. Thus, these two articles have to be interpreted cautiously and verified on ejaculated sperm. Our point of view is that LepR appears on mature spermatozoa. We support a dual role of leptin depending on its concentration in seminal plasma.
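As a brief aside on the polymorphism statistics above: the reported allele frequencies (A allele in 21.6% of infertile patients versus 9.7% of fertile controls) can be converted into an odds ratio with a few lines of code. This is an illustrative sketch, treating the percentages as simple proportions; it shows that the reported OR of 0.388 corresponds to the protective (reciprocal) direction of the comparison:

```python
def odds(p):
    """Convert a proportion to odds."""
    return p / (1.0 - p)

def odds_ratio(p_case, p_control):
    """Odds ratio for a trait in cases versus controls."""
    return odds(p_case) / odds(p_control)

# A-allele frequencies reported in the study:
# 21.6% in infertile patients, 9.7% in fertile controls.
or_risk = odds_ratio(0.216, 0.097)  # A allele in the risk direction
or_protective = 1.0 / or_risk       # reciprocal, protective direction

print(round(or_risk, 2))        # ≈ 2.56
print(round(or_protective, 2))  # ≈ 0.39, consistent with the reported OR of 0.388
```

The roughly 2.5-fold odds in the risk direction also squares with the text's statement that the alternative genotype reduces the risk of male infertility by about 3-fold.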
We hypothesize a beneficial role of leptin at physiological levels, as expected in men with normal BMI. On the contrary, a negative effect of leptin on spermatozoa is suggested at high concentrations, corresponding to those determined in overweight or obese men. The mechanism of action of leptin on spermatozoa could be direct, because the human leptin receptor has been found on spermatozoa themselves. However, it could also be a consequence of the higher circulating levels of leptin in obese or overweight men, leading to a decrease in testosterone production by Leydig cells, which in turn interferes with the normal cycle of spermatogenesis \[[@B15]\]. Moreover, leptin can also modulate the nutritional support of spermatogenesis by human Sertoli cells \[[@B15]\]. Indeed, exposure of human Sertoli cells to leptin dramatically decreases the production of acetate, which is a central metabolite for spermatogenesis \[[@B9]\]. In animals and humans, adiponectin is far less concentrated in seminal plasma than in serum: 180-fold lower in bulls \[[@B81]\] and 66-fold lower in humans \[[@B2]\]. Different isoforms of adiponectin circulate in blood, with a large predominance of HMW adiponectin. One hypothesis for this large difference in concentration between the two fluids is that only the smaller molecules cross the blood-testis barrier. Although Heinz\'s team found proportionally more HMW adiponectin in semen than the other forms, which weakens this hypothesis, it is also reported that Ca^2+^ is 3-fold more concentrated in semen than in blood and can promote the assembly of HMW adiponectin from smaller isoforms originating from blood. Adiponectin\'s effects on spermatozoa seem to be beneficial, which is in agreement with the better fertility of lean men. Concerning the other adipokines described in the present review, only one study has reported their concentrations in human seminal plasma in normal-weight, overweight, and obese patients.
Even if these data need to be confirmed, it is clear that adipokines might be a link between obesity and male infertility. It would be worthwhile to determine seminal adipokine levels and adipokine expression in testis cells in some pathologies of the male genital tract. The increase in seminal plasma of resistin, and doubtless of many other inflammation-related adipocytokines, is correlated with a decrease in sperm vitality and motility. More *in vitro* experiments are needed to assess the ideal physiological concentrations of each adipokine and their synergistic effects on spermatogenesis and sperm fertilization capacity. We could not find enough information on the combinatory actions of adipokines on male fertility and semen parameters, even though all these adipokines are present in seminal plasma, many may be increased concomitantly, and they could interact. Also, since some circulating adipokines like adiponectin can be modulated by nutrition, it would be very interesting to investigate whether dietary supplements could affect seminal adipokines or adipokine expression in testis and consequently improve male fertility. In conclusion, some adipokines have been found in human and animal semen. Studies performed *in vitro* and *in vivo* using transgenic animal models confirmed the effects of adipokines observed on semen parameters. Thus, the adipokine profile of seminal plasma could be a biomarker of male fertility. It could be interesting to measure these markers in the semen of infertile men to evaluate their seminal metabolic profile.

Conflicts of Interest
=====================

The authors declare that there is no conflict of interest that could be perceived as prejudicing the impartiality of the research reported.
![Leptin receptor and its interactions with the JAK2 and STAT3 system in sperm capacitation \[[@B82]\].](IJE2018-3906490.001){#fig1}

![Adiponectin receptors and their possible interactions with fertility.](IJE2018-3906490.002){#fig2}

![The three transmembrane receptors (CMKLR1, GPR1, and CCRL2) and their known interactions with male fertility \[[@B83], [@B84]\].](IJE2018-3906490.003){#fig3}

###### Consequences on male fertility phenotypes of animal models with missing adipokine or adipokine receptor.

| Type | Phenotype | References |
|------|-----------|------------|
| *ob/ob* mice | Testicular atrophy; decreased nuclear volume of Sertoli cells, spermatogonia, and spermatocytes | \[[@B40]\] |
| *ob/ob* mice | Infertile; reduced testis weight, multinucleated spermatids, few spermatozoa, and abnormal Leydig cells | \[[@B41]\] |
| *db/db* mice | Infertile | \[[@B43]\] |
| *db/db* mice | Infertile; impaired spermatogenesis and sperm motility | \[[@B14]\] |
| *fa/fa* rat (Zucker rat) | Alteration in sperm production and sperm DNA damage | \[[@B45]\] |
| AdipoR2 | Seminiferous tubular atrophy with aspermia and reduced testicular weight | \[[@B54]\] |
| AdipoR2 | Decrease in testis weight | \[[@B58]\] |

[^1]: Academic Editor: Rosaria Meccariello
In What is emotion? (part 1) – 4 Affective states we discussed the different affective states of emotion. In this post we’ll focus on different perspectives on emotion in the field of psychology.

Perspectives

When surveying emotion research in the field of psychology, one finds various traditions that hold different views on how to go about defining, studying and explaining emotions. Most contemporary emotion research has its roots in one of three major theoretical traditions: the evolutionary, the bodily-feedback or the cognitive tradition. In this section we discuss the three major traditions in depth and evaluate their potential for explaining how architecture elicits emotions.

1. The evolutionary perspective

The evolutionary perspective has its roots in Charles Darwin’s theory of evolution. In his famous work ‘The Expression of the Emotions in Man and Animals‘ (1), he described emotions in the context of natural selection. His major claim is that emotions are functional for the survival of the species and the individual. When an individual is engaged in some behavioural action, an emotion will overrule this action with another if doing so ensures the individual’s safety. Or, in the words of Plutchik (2), the function of emotions is to help “organisms to deal with key survival issues posed by the environment.” An example of this process can be observed when a fire alarm is activated: everyone immediately stops whatever they’re doing and heads for the exit. In this case, fear initiates an impulse to flee in order to survive a threatening situation. Researchers in this tradition regard the adaptive behaviour (including facial expressions and states of readiness to respond) as central to what emotions are. Plutchik (2), for example, proposed that each emotion represents a specific behaviour, which is related to one of our basic needs, such as protection (fear), reproduction (happiness), or exploration (surprise).
The assumption that emotions are evolved phenomena implies that their accompanying manifestations should be universal. The theories developed by researchers working in the evolutionary tradition provide us with a basic understanding of how emotions are elicited. These theories clearly demonstrate the role of external stimuli (such as events, objects or surroundings) in the eliciting conditions of emotions. But these theories about basic survival emotions do not explain particular emotions, like the inspiration elicited by the design of a new museum. They also offer very few clues to explain why two people may experience completely different emotions towards the same space or building.

2. The bodily-feedback perspective

Whereas the evolutionary perspective focuses on the function of emotions, the bodily-feedback perspective is primarily concerned with the emotional experience. The pioneer of this tradition, the philosopher and psychologist William James, placed the body at the centre of the emotional experience. He was convinced that the involvement of the body is essential for having emotions. In his view, the experience of an emotion is a direct result of a ‘bodily change,’ and he argued that this change is the emotion. (3, 4) From this perspective, emotions are not only the outcome of, but are also differentiated by, bodily changes. In the case of fear, for example, we first start to shiver and our pulse rises, and then we perceive these reactions as being afraid. For an explanation of how architecture elicits emotions, the bodily-feedback tradition seems to offer only limited possibilities. The reason for this lies in the fact that this theory does not explain the role of external stimuli in the elicitation of emotions. Moreover, many psychologists assert that the idea that emotions are based only on the awareness of a bodily change is too simple (e.g. Lazarus (5); Frijda (6)).
Feedback from physiological responses alone will never account for all the possible emotions humans can experience.

3. The cognitive perspective

In this currently popular view, elements of both the evolutionary and the bodily-feedback perspectives can be found. The essence of this perspective is that in order to understand emotions, one must understand how people make judgements about events in their environment, for emotions are generated by judgements about the world. Magda Arnold, the pioneering psychologist of the cognitive view of emotions, argued that an emotion always involves an assessment of how an object may harm or benefit a person. (7) In the cognitive view, the process of emotion is explained by the process of appraisal. According to Arnold (7), an appraisal, “the direct, immediate sense judgement of weal or woe,” is at the heart of every emotion. Without appraisal there can be no emotion, for all emotions are initiated by an individual’s appraisal of his or her circumstances. An important aspect of this perspective is that it holds not the event, but the meaning the individual attaches to the event, responsible for the emotion. An example would be when a friend makes a derogatory remark about you. Depending on the meaning you attach to this remark, you might experience anger (i.e. “I am being insulted”), or amusement (i.e. “This is a joke!”). Positive emotions are elicited by stimuli that are appraised as beneficial, and negative emotions are elicited by stimuli that are appraised as harmful. Most contemporary researchers in the cognitive tradition of emotion hold that each emotion is elicited by a distinctive appraisal. (8)

Most promising perspective

Of the three perspectives reviewed, the most promising for explaining architectural emotions is the cognitive. Like the evolutionary perspective, it considers emotions to be instrumental (i.e.
emotions establish our position vis-à-vis our environment, pulling us toward certain people, objects, actions and ideas, and pushing us away from others). However, instead of using basic survival issues to explain how emotions are elicited, it uses a broader notion of possible benefits or harms. A limitation is that, because of the central role given to cognition in the process of emotion, researchers in this tradition find it more difficult to distinguish emotions from non-emotions, e.g. ideas, attitudes or evaluations. Nevertheless, its focus on appraised meaning allows us to explain why different people may have different emotions towards the same building. The ‘Basic model of emotions’ by Pieter Desmet probably best explains how emotions are elicited from this cognitive perspective; we will cover it in another post. In part 3 we’ll discuss: 4 ways of emotional manifestation.

Disclaimer: This article is based on chapters 1 and 6 from the book Designing Emotions by Pieter Desmet. (9) These chapters were originally written for industrial designers and are rewritten here for our architectural approach.

References

- DARWIN, C. (1872) The Expression of the Emotions in Man and Animals (Penguin Classics) – affiliate link
- PLUTCHIK, R. (1980) Emotion: A Psychoevolutionary Synthesis – affiliate link
- JAMES, W. (1884) What is an Emotion? – affiliate link
- JAMES, W. (1894) The physical basis of emotions.
- LAZARUS, R.S. (1991) Emotion and Adaptation – affiliate link
- FRIJDA, N.H. (1986) The Emotions (Studies in Emotion and Social Interaction) – affiliate link
- ARNOLD, M.B. (1960) Emotion & Personality Volume 1: Psychological Aspects – affiliate link
- ROSEMAN, I.J., SMITH, C.A. (2001) Appraisal theory: assumptions, varieties, controversies.
- DESMET, P.M.A. Designing Emotions.
https://experiencingarchitecture.com/2010/01/27/what-is-emotion-part-2-3-perspectives-on-emotion/?replytocom=369
Recently, I had the unfortunate opportunity to observe questionable behavior in the workplace. I’m talking about behavior resulting from poor ethical decisions. But if ‘right’ and ‘wrong’ are so different, why is it so hard to decide to do the right thing, especially for leaders?

Logo, What Do You Stand For?

Logos are a quick way to get a sense of an organization’s values or purpose. Shapes, colors, and fonts are used to evoke emotion, recognition, and connection. One of my favorite logos is that of the Olympics. Believe it or not, I have a record from the 1984 Olympics!

What Are Your True Colors?

Consider your emotional intelligence… Throughout the workday, are you able to manage or regulate your emotions? Are you able to harness those emotions and apply them to problem solving, showing empathy, or building relationships? Are you even aware of your emotions?
http://leadingsynergies.com/author/michelle-s/page/21/
Typographically speaking, the bones of a capital “A” are mostly non-negotiable: two diagonal strokes, one crossbar, and a single counter. But within those constraints, there’s a world of opportunity. In the hands of a graphic designer, an A can be a line that wraps around itself and rises into a peaked coil. Or a triangular block that’s merely the suggestion of the letter’s original form. For Belgian designer Christophe De Pelsemaker, there’s a beauty in expanding the simple concept of a letterform into something abstract and almost art-like. “There are many ways to depict the same letter,” he says. “It’s all in the details.”

De Pelsemaker is the co-author of Letters As Symbols, a visual survey of alphabetically driven logos that he wrote with Belgian designer Paul Ibou. The book (now up on Kickstarter) started as a seed of an idea decades ago when Ibou was running an organization called the International Trademark Center. The now 79-year-old Ibou, who is well known in Belgium for his corporate logos, had spent years gathering logos from designers around the world including Saul Bass, Takenobu Igarashi, and Burton Kramer. Ibou would reach out to designers and ask them to submit a form outlining basic information about their logos that he would keep on file in an archive at his home. “He always kept everything physically,” says De Pelsemaker, who got in touch with Ibou last year after reading some of the designer’s earlier books. Ibou was interested in using the ITC to build an archive of logos that could be used in books and exhibitions. “He wanted to publish the logos in as many forms as possible,” De Pelsemaker explains. “He just wanted to create a platform for designers to show their work to the world.”

Ibou started compiling logos for Letters As Symbols in 1991, but the project stalled and was then forgotten about until De Pelsemaker began collaborating with Ibou in 2017. Even so, the original concept hasn’t changed much.
For Ibou, the book is a way to celebrate the most straightforward of logos. Letters, De Pelsemaker explains, are inherently clear in their meaning. “Letters have a strong connection with humans because their shapes are recognizable,” he says. “It’s one of the fundamental ways of communicating.” In the book’s foreword, Ibou writes about what makes a successful logo:
https://eyeondesign.aiga.org/a-visual-compendium-of-letters-as-logos/
I believe that people should do what is right regardless of what others think or say. I am honest and trustworthy. I try to help other people when I can. I am respectful towards everyone. I respect myself. I know that I need to work hard to get ahead. I am responsible for my mistakes. I am willing to learn new things. I am kind to others. I am loyal to friends. I am helpful to strangers. I am determined to succeed. I am grateful for everything I have.

Personal Ethics Statement Example 1

I believe that my life experiences have shaped me into the person I am today. My life experiences include being raised by two parents who divorced when I was young. I grew up in a household where my father was absent due to alcoholism. I also had to deal with my mother’s depression and mental illness. As a result of these experiences, I became more sensitive to others’ feelings. I try to make sure that everyone around me feels comfortable and happy.

Personal Ethics Statement Example 2

I am a good person because I try to do my best every day. I think that I’m a good person because I always try to help others when I can. I believe that I’m a good human being because I love people and respect them. I know that I’m a good man because I treat everyone equally. I know that I am a good person simply because I try to be kind to other people.

Body Of The Statement

The body of the statement should include all your core beliefs, your thoughts and opinions about what you consider right individual behavior, and it should reflect your views and philosophy. You may have to follow some guidelines, depending on why you are writing the statement, but the core of this essay has to be a reflection of you and your feelings. Here are a couple of important points to remember while writing the body of the essay.

Personal ethics are the beliefs you base your opinions and actions on; hence, it is highly recommended to mention only the ethics you strongly believe in.
In short, include only the practices you preach. Anything that isn’t part of your core belief system will quickly reveal itself as untrue. For instance, if you aren’t a vegetarian but write that vegetarianism is needed today, it will become obvious that you don’t really believe what you write, and the statement will lose credibility. Every statement has its own requirements. You will have to compile only the ethics that match the nature of the statement. For example, if you are writing this as a prerequisite for a scholarship, you will have to compile your views about academics and related activities. Your views about global warming probably won’t help the statement much, except perhaps if you are an environmental science student!
https://personal-statement-writer.com/personal-ethical-statements/
This position is responsible for providing case management assistance to Department of Child Safety (DCS) Specialists. Alternative work hours may be required, including overtime, weekends and holidays at the discretion of the supervisor. Candidates who are successfully employed as Case Aide for five (5) years will be eligible to apply for a promotional opportunity to a DCS Specialist Trainee position, grade 16 ($18.6664 hourly).

- Supervises visitation meetings among clients (natural parents and children) to ensure compliance
- Assists DCS Case Managers with filing, disclosures, packets, case notes, etc.
- Enters case notes into the Children’s Information Library and GUARDIAN system
- Drives on state business
- Assists clients with application processes as needed
- Provides other clerical support
- Transports children to scheduled appointments
- Participates in a variety of meetings and/or client hearings, as requested
- Performs other duties appropriate to the assignment

Knowledge of:
- Personal computers; Microsoft Office Suite
- Verbal and written communication

Skill in:
- Establishing and maintaining interpersonal relationships
- General clerical functions such as data entry and filing
- Eliciting and gathering information
- De-escalating situations
- Modeling professional behavior with clients
- Time management and multi-tasking
- Addressing hostile situations (e.g.
terminating visits if harmful to clients)
- Written, interpersonal, and observation skills

Ability to:
- Plan work routines and implement work assignments
- Identify resources available in the community
- Learn information on human and social services; welfare services
- Learn critical State, Federal and local laws concerning placement, custody and treatment of children
- Gain understanding of the effects and problems of foster care; social, intellectual and behavioral problems of developmentally disabled children
- Adhere to general policies, procedures and practices of the court foster care review board with regard to cases involving custody and placement of children
- Learn how to identify developmental and behavioral problems of children
- Comply with casework principles and practices
- Gain understanding of cultural environment and community influences on the behavior and development of individuals
- Interpret and apply eligibility criteria
- Learn and adhere to DCS policies and regulations applicable to assigned programs
- Comprehend and follow moderately complex written and oral instructions
- Organize data in a logical and coherent manner
- Prepare written reports accurately in a prescribed format
- Actively listen and provide supportive attention to clients
- Acquire and/or maintain CPR Certification or Re-Certification during initial training

- Some college experience with credit hours in child development, rehabilitation, social services, counseling, psychology, sociology or a related field from an accredited college or university OR
- One (1) year experience working in a child welfare environment, indirectly or directly with clients (children/families); experience working with children in a controlled environment (i.e., nanny, daycare, teacher’s aide)

Employment is contingent on the selected applicant passing a background investigation.
Requires a high school diploma or GED. Requires experience in providing professional childcare services. Requires the possession of, and ability to retain, a current, valid state-issued driver’s license appropriate to the assignment. Employees who drive on state business are subject to driver license record checks, must maintain acceptable driving records and must complete any required driver training (see Arizona Administrative Code R2-10-207.12). Employees may be required to use their own transportation as well as maintain valid motor vehicle insurance and current Arizona vehicle registration; however, mileage will be reimbursed. Must be able to secure and maintain an Arizona Fingerprint Clearance Card. Must be able to secure and maintain clearance from the CPS Central Registry.

As an employee of the Department of Child Safety you will be entitled to a comprehensive benefits package that can become effective as soon as two weeks after starting! Benefits include:
- Paid sick leave.
- Paid vacation that includes ten (10) holidays per year.
- Competitive health and dental insurance plans.
- Life insurance and long-term disability insurance.

We also offer optional employee benefits that include:
- Vision coverage.
- Short-term disability insurance.
- Deferred compensation plans.
- Supplemental life insurance.
- Employee wellness plans.

For a complete list of benefits provided by the State of Arizona, please visit our benefits page. Positions in this classification participate in the Arizona State Retirement System (ASRS). Enrollment eligibility will become effective after 27 weeks of state employment. Persons with a disability may request a reasonable accommodation such as a sign language interpreter or an alternative format by contacting 602-255-2903. Requests should be made as early as possible to allow time to arrange the accommodation. Arizona State Government is an AA/EOE/ADA Reasonable Accommodation Employer.
ARIZONA MANAGEMENT SYSTEM (AMS)

All Arizona state employees operate within the Arizona Management System (AMS), an intentional, results-driven approach for doing the work of state government whereby every employee reflects on performance, reduces waste, and commits to continuous improvement with sustainable progress. Through AMS, every state employee seeks to understand customer needs, identify problems, improve processes, and measure results. State employees are highly engaged, collaborative and embrace a culture of public service.
https://www.azstatejobs.gov/jobs/case-aide-phoenix-arizona-united-states-0812308e-a23b-4e80-88bd-0fe530d64ca1
The Responsible Jewellery Council is a not-for-profit, standards-setting and certification organization. RJC Members commit to and are independently audited against the RJC Code of Practices, an international standard on responsible business practices for diamonds, gold, platinum group metals, silver and coloured gemstones (rubies, sapphires and emeralds). The Code of Practices addresses human rights, labour rights, environmental impact, mining practices, product disclosure and many more important topics in the jewellery supply chain. RJC also works with multi-stakeholder initiatives on responsible sourcing and supply chain due diligence. The RJC's Chain-of-Custody Certification for precious metals supports these initiatives and can be used as a tool to deliver broader Member and stakeholder benefit. Valid 2019-06-17 to 2022-06-17 (Verified).

Name: Caring Company
Issued By: The Hong Kong Council of Social Service
Start Date: 2018-03-01 (Verified)
Description: Launched by The Hong Kong Council of Social Service (HKCSS) in 2002, the Caring Company Scheme (the Scheme) aims to foster strategic partnerships between the business and social services sectors to promote good corporate citizenship and create a more inclusive society. The Scheme also helps corporations and social services organisations to know and understand one another at a much deeper level. This will create more room for working together to develop cross-sector community projects that focus on the needs of the community. Its mission is to build a cohesive society by promoting strategic partnerships among business and social service partners and inspiring corporate social responsibility through caring for the community, employees and the environment.
https://nelsonjewellery.en.alibaba.com/company_profile.html
Forage production, defined in Chapter 1 as the integrated end-product of conversion of solar energy into plant biomass, is the foundation of range animal production systems. Because plant biomass is of limited caloric value to man as a primary consumer, the value of this renewable resource is in the production of secondary and tertiary products through grazing animals. Temporal distribution of forage production sets boundaries on the opportunities for directly or indirectly utilizing rangeland resources. Our purpose in this chapter is to depict the role of grazing animals in converting chemicals fixed in plants into animal products (food and fiber). To do so, we trace physiological processes and interactions within the herbivore and describe how these relate to the diets that are consumed. We conclude with a discussion of the implications of these interactions in the nutritional management of range herbivores, primarily domestic livestock. Forage includes browse and herbage which can be consumed by or harvested and fed to animals (Soc. Range Manage. 1989). The structural characteristics of forage are described in various ways and with nomenclature appropriate to the context in which it is considered. Botanists and agronomists approach plant cellular structure from the standpoint of biosynthesis. At what sites or in what organelles do certain chemical reactions occur that result in processes such as photosynthesis, protein synthesis and nutrient translocation? By contrast, animal nutritionists emphasize attributes of cells and tissues that enhance bio-degradation (Van Soest 1982) and liberation of nutrients. The nutritionist asks what cellular configuration affects the digestibility of protein in the plant leaf. Differences and commonalities in the nomenclature of cell/tissue anatomy and biochemistry employed by botanists and animal nutritionists are illustrated in Figure 2.1. 
Strictly for illustrative purposes, consider a teleological comparison of the plant and animal perspectives relating to the plant cell. Cells of young plant tissue are biochemically active, capturing and storing energy, synthesizing proteins and fats, etc. (a). These are cytoplasmic activities. Cells of older tissue are comparatively low in biochemical activity. Much of the photosynthate and other synthesized compounds have been translocated to the seeds and roots or deposited in other forms in the cell wall. This leaves the cytoplasm comparatively inactive. Similarly, leaves are biochemically more active compared with stems that contribute structure and resilience in the overall plant function (b). Cool-season (C3) plants have relatively greater cytoplasm compared with warm-season (C4) plants that are higher in cell wall (c). Figure 2.1 The animal measures plant chemicals in terms of availability and nutritional worth, irrespective of their phytochemical functions. Which are easily accessible? Which are difficult to access or inaccessible? The two perspectives relate, in that cell structure and function in plant metabolism closely align with nutrient availability and worth to the consuming animal. Forage contains fixed energy largely in the form of complex carbohydrates, waxes, terpenes (essential oils, saponins, etc.) and phenylpropanoids (lignins, tannins, etc.). Plant biomass is a virtually infinite number of combinations of these biochemicals determined by plant species and phenological stages. The structure and form of these biochemicals, to a large extent, determine a plant species' capacity to survive (resilience), a capacity reflected in the general inverse relationship between nutritional value to grazing animals and plant resilience. The complex carbohydrates, etc. are generally impervious to mammalian gastric and intestinal digestive enzymes.
Readily digested proteins and soluble carbohydrates, including simple sugars and starches, on the other hand, usually exist either in lesser proportions (< 40% of dry matter) or are complexed (rendered insoluble and poorly available) with insoluble compounds such as lignins and tannins. Cellulose, the most widely distributed organic compound in nature, is a glucose polymer differing from starch in the isomeric arrangement of the bonds between the glucose monomers (Fig. 2.2). Intestinal hydrolytic enzymes can cleave alpha linkages in starch, whereas the beta linkages of cellulose are resistant to these enzymes. Cellulose is of nutritive value only to herbivores that have incorporated anaerobic microbial fermentation in the digestive process (Hungate 1966). In the presence of cellulolytic microorganisms, exposed cellulose is broken down with relative ease. However, in many plant species, especially the warm-season perennial grasses, the cellulose is complexed or "encapsulated" by lignin as plants mature (lignification). Therefore, diet selection, to be discussed in Chapter 3, is the nutritionally important element of grazing animal behavior. This is true for plant species as well as plant structural parts (leaf, stem, mast) and physiologic age of the plant tissue (new or old growth) consumed. Figure 2.2 Range animals rely on vegetation for the nutrients needed to support bodily processes. The term "quality" is often used to ascribe worth to the components of diet; worth in turn is defined by the chemical composition (e.g., protein content) of the plants selected for consumption. We propose that the proper concept is "nutritional value" because it includes consideration of both the chemical composition of the dietary components and their adequacy for supporting the physiological functions of the consuming animal.
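As a toy illustration of this distinction (all numbers hypothetical; a sketch, not a method from this chapter): "quality" ranks forages by chemical composition alone, while "nutritional value" asks whether that composition is adequate for the consuming animal.

```python
# Hypothetical sketch: "quality" ranks forages by composition alone, while
# "nutritional value" compares composition against the animal's requirement.
# All numbers are invented for illustration.

def protein_adequate(forage_cp_pct: float, requirement_cp_pct: float) -> bool:
    """True if the forage's crude protein concentration meets the animal's
    dietary crude protein requirement."""
    return forage_cp_pct >= requirement_cp_pct

high_quality = 20.0    # % crude protein
low_quality = 10.0     # % crude protein
low_requirement = 8.0  # e.g., a mature animal at maintenance (hypothetical)

# The 20% forage ranks higher in "quality", yet both forages are equal in
# "nutritional value" for an animal with a low protein requirement.
print(protein_adequate(high_quality, low_requirement))  # True
print(protein_adequate(low_quality, low_requirement))   # True
```

For a high-requirement animal (say, 15% crude protein), the same function would separate the two forages, which is exactly the requirement-dependence the concept of nutritional value captures.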
For example, a forage species containing 20% protein is considered of higher quality than a similar forage species containing only 10% protein; yet, both may be equal in nutritional value to an animal having a relatively low protein requirement. Indeed, the lower protein forage may offer greater overall value to the production system if it also possesses a greater tolerance to grazing, higher production of dry matter, or a longer growing season. Thus, a proper perspective of the plant:animal interface requires a dual focus to balance short and long term production goals. Practices promoting maximum production of animal food and fiber will eventually reduce long term secondary production by decreasing the stability of the forage resource. On the other hand, an approach which is overly protective from an ecological point of view is economically and sociologically insupportable (see Chapter 9). Hence, both the animal's needs and the consequences that result when those needs are adequately, marginally or inadequately met determine the proper balance between short term and long term productivity. Foraging animals possessing microbial fermentation capabilities, whether pre-gastric (foregut) or post-gastric (hindgut), are the principal producers of food and fiber from rangelands (McNaughton et al. 1982, Belovsky 1984). Most pre-gastric fermenters belong to the suborders Ruminantia (Bovidae, Cervidae, etc.) and Tylopoda (Camelidae). Most range livestock and big-game species belong to Ruminantia. Grazing animals relying upon symbiotic pre-gastric fermentation and "ruminants" are considered synonymous in this chapter. In terms of economic importance, these foregut fermenters, including cattle, sheep, goats, cervids and big game animals, are the most common. However, the suborder Hippomorpha, which includes the horse (Equidae), is important in some range settings. Evolution of microbial fermentation in mammals has been the subject of extensive reviews (Hungate et al.
1959, Janis 1976, Hume and Warner 1980). Figure 2.3 illustrates the comparative digestive anatomy of the non-ruminant (minimal post-gastric fermentation) and post- and pre-gastric fermenting herbivores. In non-ruminants (a) and post-gastric fermenters (b) in Figure 2.3, foods are exposed to digestion by hydrolytic proteinases (trypsin, pepsin, chymotrypsin, etc.) and carbohydrases (amylase, maltase, lactase, etc.) in the gastric (5) and intestinal regions (6) prior to active fermentation in the large intestine [colon (8) and cecum (7)]. However, because cellulase, the enzyme lysing cellulose, is not present in gastric, pancreatic or intestinal secretions, cellulose passes through the digestive tract essentially unaltered and provides no direct nutrition to the animal. In the colon (8) and/or cecum (7), structural carbohydrates, including cellulose and undigested and endogenous residues that have escaped hydrolytic digestion are exposed to microbial fermentation. Fermentation results in the growth and accumulation of microbial cells (primarily bacteria) high in protein. However, there is limited microbial protein catabolism in and amino acid absorption from the colon/cecum (Janis 1976). Hence, the major by-products of fermentation in these herbivores are short-chain organic acids (volatile fatty acids; VFAs), ammonia, carbon dioxide and methane. A major portion of the VFAs are absorbed and used by the host animal for energy as discussed later. Figure 2.3 By comparison the food consumed by ruminants (Fig. 2.3 c) is subjected to microbial fermentation prior (2,3,4) to digestion by hydrolytic enzymes (proteinases and carbohydrases) in the gastric and intestinal segments (5,6). The microbial population is established in the rumen (3) and reticulum (2) (referred to as "rumen", "ruminoreticulum" or "reticulorumen") into which the food enters via the esophagus (1). 
Consumed material is mixed with existing ruminal microbial populations, portions of previously consumed meals, and both transient and end products of fermentation. After a variable delay period (rumen retention time, RT), particles move into the omasum, and then sequentially to the abomasum (5), small intestine (6), large intestine (7) and rectum (8) from which the remaining residue is excreted as feces (Phillipson and Ash 1965). Flow dynamics of the ruminal compartment resemble a modified continuous flow system with periodic additions to, and frequent outflow from, the constantly mixed ruminal pool of materials (Fig. 2.4). Rumen retention time of an individual particle may be as short as a few minutes or as long as several days, depending on size of the compartment, levels of dry matter and water intake, particle size and reduction rate, particle density, ruminal motility and chance (Pond et al. 1987). Post-ruminal hydrolytic digestion is similar in the ruminant to that of the non-ruminant (pig) and the post-gastric fermenting animal (horse). The microbial activity in the lower tract (i.e., colon/cecum) of grazing ruminants is quantitatively of less importance than in post-gastric fermenters. However, VFAs and ammonia produced in the cecum may be important to animal status. Krysl et al. (1987a) and Caton et al. (1988b) suggest hindgut VFAs and ammonia production make a significant contribution to the respective pools of these compounds in sheep. Figure 2.4 Proteins are large molecular compounds comprised of approximately 20 individual amino acids bonded together in linear, coiled or branching chain forms. The relative number of each of these amino acids and the sequence in which they are bonded together determine the character of the particular protein in the tissue (muscle, hair, hoof, enzyme, etc.). 
Protein in the diet must be broken down to the individual amino acids within the gastrointestinal tract and absorbed as such since the large protein molecule cannot be transported through the intestinal wall. These absorbed amino acids are then used to resynthesize proteins that fit the needs of the animal. Some of the amino acids can be formed within the tissue from materials such as other amino acids that are present in excess. On the other hand, some of the amino acids must be absorbed from the gastrointestinal tract preformed and are referred to as essential amino acids. If absorbed amounts of essential amino acids meet or exceed the animal's physiological requirements, protein synthesis can proceed at a normal rate. If one or more are not absorbed in sufficient amounts, tissue protein synthesis is restricted and the associated maintenance or production function is impeded. Vitamins are "cofactors" or catalysts in metabolic reactions, in that they do not appear in the products of reactions, but must be present for reactions to occur. All vitamins or their precursors must be absorbed from the digestive tract as they cannot be synthesized by mammalian tissue. If vitamins are not absorbed in adequate amounts, metabolic activity is restricted. Protein and vitamin nutrition are both influenced by microbial fermentation and its location within the digestive tract. Pathways of protein synthesis in microorganisms are similar to those of mammalian tissue except that amino acid requirements are much less specific. The microorganisms, as a mixed population, have no absolute amino acid requirements. Ammonia, derived from most nitrogen-containing compounds, including urea, can be used in the synthesis of "microbial protein" (Fig. 2.5). Likewise, most vitamins are synthesized by populations of microorganisms except for vitamins A, D, and E.
Ruminant animals are insulated against essential amino acid and most vitamin deficiencies because these compounds are synthesized by symbiotic microbial populations in the rumen and subsequently presented for hydrolytic digestion in the gastric-intestinal region. Once microbial protein passes from the rumen to the gastric-intestinal region, it is hydrolyzed to the individual amino acids which are absorbed for use at the tissue level. Therefore, ruminants can survive on a protein-free diet as long as the diet contains a form of nitrogen to yield ammonia under anaerobic fermentation (Virtanen 1968). Additional insulation against protein deficiency is conferred by ammonia nitrogen recycling (Weston and Hogan 1967). However, over the longer term, a base supply of amino acids in the form of dietary protein may be necessary for maximal fiber digestion and ruminal protein synthesis (Petersen et al. 1985). Vitamins synthesized by the microorganisms in the rumen are likewise digested in the lower tract. Figure 2.5 The essential amino acids necessary to achieve and sustain maximum production, defined as rapid growth, successful reproduction and heavy lactation in domestic ruminants, cannot be met solely through microbial protein synthesis (Burroughs et al. 1975). Microbial growth is limited by the maximum level of fermentation which can be supported by a given diet (substrate). Obviously, complete fermentation of a substrate in the rumen can yield only a finite amount of microbial protein. Even at maximum fermentation, microbial synthesis is unable to provide sufficient quantities of amino acids to fully satisfy the physiologic requirements for maximum productivity (genetic potential) of some particular animals in a highly productive state (e.g., rapidly growing). Maximum productivity can be achieved only by the addition of escape protein (Fig. 2.5), with a favorable amino acid profile, to augment microbial protein production (Anderson et al. 1988). 
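The argument above can be put in arithmetic form: microbial protein supply is capped by the amount of substrate fermented in the rumen, so any requirement above that cap must be met by escape protein. All figures and the yield coefficient below are invented for illustration; this is a sketch, not the chapter's accounting method.

```python
# Hypothetical sketch: microbial protein yield is capped by fermentable
# substrate, so requirements above the cap must be met by escape
# (rumen-undegradable) protein. All numbers are invented for illustration.

def escape_protein_needed(requirement_g: float,
                          fermented_substrate_kg: float,
                          microbial_yield_g_per_kg: float) -> float:
    """Grams of escape protein needed after microbial synthesis is counted."""
    microbial_supply = fermented_substrate_kg * microbial_yield_g_per_kg
    return max(0.0, requirement_g - microbial_supply)

# A low-requirement animal: microbial protein alone suffices (6 kg fermented
# at a hypothetical 120 g microbial protein/kg -> 720 g supplied).
print(escape_protein_needed(600.0, 6.0, 120.0))  # 0.0
# A rapidly growing animal: the 180 g deficit must come from escape protein.
print(escape_protein_needed(900.0, 6.0, 120.0))  # 180.0
```

Note that raising intake of the same diet raises the cap only in proportion to the extra substrate fermented, which is why genetic-potential production in highly productive animals cannot be reached through fermentation alone.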
The chemical components of the diets of ruminants can be separated into two structural fractions of nutritional significance (Van Soest 1967). The first, cell contents (neutral detergent solubles; NDS), are those substances found inside plant cells. These organic molecules are soluble and so are readily digestible in the intestine. These substances also tend to be rapidly and extensively fermented before reaching the gastric-intestinal region. The second fraction, cell wall components (neutral detergent fiber; NDF) are digested more slowly and less completely. Digestion of the cell wall fraction is performed almost exclusively by microbial hydrolysis and fermentation. Volatile fatty acids produced during fermentation are absorbed through the rumen wall and subsequently metabolized for use at the tissue level as energy. Products of fermentation not absorbed through the rumen wall, including microbial cells, pass to the lower tract together with unfermented dietary residues. These modified (or synthesized) and original dietary and endogenous fractions are exposed to hydrolytic digestion in the gastric-intestinal region. The total amount and quality of nutrients derived from a grazing animal's diet is determined by the type and amount of forage consumed and the proportioning of the material among five possible fates: Thus, inherent species differences in gastrointestinal flow dynamics ultimately influence which species are adapted to particular components of the vegetation on rangeland. Cattle and bison, which have a relatively large capacity rumen compartment in relation to both body size and nutrient requirements (Demment and Van Soest 1985), also have a long rumen retention time (RT) (Fig. 2.6). These anatomical factors permit cattle to extract a large amount of nutritional value from fibrous materials, often in amounts adequate to satisfy all their nutrient requirements. 
Conversely, small ruminants, e.g., sheep and goats, which possess a relatively small rumen compartment in relation to body size and nutrient requirements, cannot extract comparable levels of nutrients from the same fibrous forages. Even though nutrient requirements are greater per unit body weight in small ruminants, rumen capacity is significantly less, retention time significantly shorter and flow rate significantly faster than in large ruminants (Table 2.1). Hence, for relatively equivalent intake levels, fibrous diets are of less nutritional value to small ruminants. Figure 2.6 Smaller ruminants have evolved two strategies to overcome the metabolic dilemma described above. The first strategy is reduced RT (Van Soest 1982), which allows a slight shift in the site of digestion of the highly digestible components out of rumen fermentation (Fate 1 and/or Fate 2) and into the gastric-intestinal region (Fate 3), thereby decreasing respiration losses associated with fermentation. Also, the shorter RT is associated with a greater level of intake and a slightly depressed fiber digestibility. Taken together, this results in a greater level of intake and a slightly lower digestibility compared to larger ruminants, but an opportunity to equal or exceed total digested nutrient intake (Huston 1978). This strategy is important in survival but is seldom effective in allowing the small ruminants to match the productivity of large ruminants when both are limited to high fiber diets. The second strategy is to consume a high quality diet, which necessitates a greater degree of discrimination in diet selection. Size and prehensile agility of the lips, teeth and tongue ultimately determine an animal's ability to selectively consume plant species, individual plants on offer within a species, and even discrete plant parts, all from a heterogeneous assemblage of plant biomass.
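The intake/digestibility trade-off described above is simple arithmetic: a shorter retention time slightly depresses digestibility but permits greater intake per unit body weight, so total digested nutrient intake can equal or exceed that of a larger ruminant. All figures below are invented for illustration.

```python
# Hypothetical arithmetic for the small-ruminant strategy described above:
# shorter retention time -> slightly lower digestibility but higher intake,
# so digestible dry matter intake can equal or exceed a larger ruminant's.
# All numbers are invented for illustration.

def digested_intake(intake_g_per_kg_bw: float, digestibility: float) -> float:
    """Digestible dry matter intake, g per kg body weight per day."""
    return intake_g_per_kg_bw * digestibility

large_ruminant = digested_intake(20.0, 0.55)  # long RT: lower intake, higher digestibility
small_ruminant = digested_intake(26.0, 0.50)  # short RT: higher intake, lower digestibility

print(round(large_ruminant, 1), round(small_ruminant, 1))  # 11.0 13.0
```

The advantage disappears on high-fiber diets, where the depressed digestibility of the small ruminant is not offset by enough additional intake, consistent with Huston (1978) as cited above.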
Significant differences in the morphological structure of mouth parts exist in pre-gastric fermenters and post-gastric fermenters (Fig. 2.7) which reflect the types of forages consumed. Generally, increased pliability of the lips and manipulative capacity of the tongue denote greater levels of selectivity. Table 2.1 Range herbivores have been variously classified into as many as six classes based upon the types of foods eaten (Langer 1984). Figure 2.6 is a modified form of the system described by Hofmann and Stewart (1972) applied to ruminants. Bulk/roughage grazers (cattle, bison, cape buffalo, etc.) graze comparatively indiscriminately on the herbaceous fraction of vegetation by wrapping their tongue around individual clumps of plant growth and, with a short jerking motion of the head, break the clump loose then draw it into their mouths. Once in the mouth, the material is wetted with salivary secretions, chewed slightly, formed into a cylindrical "bolus" with the teeth and tongue, then swallowed (Fig. 2.4). Later, when the animal is at rest, swallowed material is regurgitated, chewed extensively, then reswallowed (rumination). Concentrate selectors (white-tail deer, mule deer, dik-dik, etc.) characteristically have pliable and often split lips, soft muzzles and agile tongues (Fig. 2.7). Hence, these animals can select plants or plant parts high in cell contents (protein and other soluble fractions; NDS) and low in cell wall (cellulose and fibrous fractions; NDF). Bite sizes are smaller and more discrete, even consisting of single leaves, leaf tips, fruits, seeds or fallen mast. Figure 2.7 Intermediate feeders are a diverse group characterized by dietary plasticity not found in either bulk/roughage feeders or concentrate selectors. Diet is characterized by variety and frequent compositional changes. The domestic sheep is classified as an intermediate feeder, but its diet often approximates the bulk/roughage group. 
The goat is a true intermediate feeder, and its diet selections clearly overlap the entire array of forages. Such a classification of the feeding behavior of grazing animals is useful to better understand species adaptability to specific forage conditions but should not lead the reader to believe these are rigid relationships, because "crossover" in feeding habits regularly occurs. Especially within sympatric ruminant populations, all species select diets from an array of available plant materials which vary in space and time (see Chapter 3). Availability is the first and most important determinant of what a grazing animal consumes. When the opportunity is presented for selection among types, species and morphological parts of plants, ruminant populations regularly exhibit "preferences" in the materials selected. This ability to discriminate between available materials is sufficiently pronounced that in vegetatively productive periods the diets of ruminant species grazing in common are almost completely different. Conversely, during periods when the amount and diversity of forage are limited, dietary overlap between sympatric species is very high. The distinction of ruminants relative to their adaptability to forage-based animal production systems stems from three characteristics unique to this group of animals. First, by virtue of the evolution of a pre-gastric fermentation chamber, ruminants can more effectively utilize structural carbohydrates (NDF) than either non-ruminants or post-gastric fermenters of comparable size. Increased retention time under conditions of anaerobic fermentation leads to more complete digestion and utilization of forage. It must again be noted that ruminant species vary widely both in RT and the extent of fermentative degradation of forage components. Secondly, whereas non-ruminants depend on preformed amino acids and vitamins in their diets, ruminants are comparatively free of these requirements.
Simple forms of dietary or endogenous nitrogen (ammonia-releasing compounds, i.e., urea, proteins, amino acids, etc.) can be used by ruminants in the microbial synthesis of protein, which subsequently is digested in the gastric-intestinal region. This adaptation is further enhanced by the ability to recycle urea via salivary and ruminal mucosal secretions. Microbial protein generally fulfills the minimal amino acid requirements of ruminants for maintenance and moderate levels of production. Genetically possible levels of production in animals in stages of high productivity cannot be achieved without the addition of escape protein to increase the supply of essential amino acids. Lastly, dietary overlap of sympatric animal species can be very high or low depending upon forage diversity and availability, environmental conditions and management. The net effect of these three physiological and behavioral characteristics is that ruminants, as a group, are well adapted to production systems on rangeland. The nutrients required by animals are energy, protein, vitamins and minerals. The concept of requirements is generally seen as the amounts necessary to support "normal" metabolic activity. That is, the animal's requirements are thought to be met when it gives evidence of normal health and vigor, normal rate of growth, normal reproduction and/or normal lactation levels. Obviously, "normal" is not identical in all members of the same species at all times, so these requirements should be seen as a set of ranges. Nutrients as limiting factors, while an important concept, should not be thought of as a rigid one-to-one relationship. Generally, nutrients are utilized in the hierarchical order of maintenance, reproduction, lactation and storage (Fig. 2.8). However, across a population of animals, reproduction and lactation can occur when the diet does not provide the "required" levels for these functions.
Within that same population, a certain proportion of animals can even reproduce or lactate at nutrient levels well below maintenance "requirements." Despite the absence of rigor, the concepts of nutrient requirements and priority of use are fundamental to an understanding of animal nutrition and management. The overview of nutrient requirements which follows is a general outline. The National Research Council Series on nutrient requirements (NRC 1981b, 1984, 1985a) should be referred to for greater detail. Figure 2.8 Energy is required primarily in making (anabolism), but sometimes in breaking (catabolism), chemical bonds during animal metabolism. Metabolic processes requiring energy include muscle contraction, nerve impulses and tissue synthesis. An example of energy being expended to synthesize protein from amino acids (AA) to form tissue is shown in Equation 1. Amino acids are bonded together in peptide sequences during protein synthesis. The energy necessary for this bonding comes from a coupled reaction during which a high-energy phosphate bond in adenosine triphosphate (ATP) is cleaved, yielding adenosine diphosphate (ADP) and a free phosphate radical. Formation of these high-energy bonds occurs as a result of respiration (Equation 2). In most animal systems, glucose is broken down (oxidized) during respiration to carbon dioxide and water. During this chemical change, energy is captured in the formation of a high-energy phosphate bond, which is then available for tissue protein synthesis (Equation 1) or another energy-requiring metabolic process. In ruminants, energy is captured primarily during respiration of VFAs that are produced during fermentation in the rumen (Fig. 2.4), then absorbed into the bloodstream through the rumen wall. These VFAs are metabolized through a network of pathways (simplified in Fig. 2.9) and ultimately yield carbon dioxide (CO2), water (H2O) and captured energy in the form of high-energy bonds (ATP).
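The coupled reactions described above can be written out in a standard general form. This is an illustrative sketch only; the chapter's own Equations 1 and 2 are not reproduced here, so the stoichiometry shown is the textbook biochemistry case rather than the authors' exact notation:

```latex
% Tissue synthesis driven by ATP hydrolysis (cf. Equation 1):
\mathrm{AA_1} + \mathrm{AA_2} \;\longrightarrow\; \text{dipeptide} + \mathrm{H_2O}
\qquad\text{coupled to}\qquad
\mathrm{ATP} \;\longrightarrow\; \mathrm{ADP} + \mathrm{P_i} + \text{energy}

% Respiration capturing energy as ATP (cf. Equation 2):
\mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2} \;\longrightarrow\; 6\,\mathrm{CO_2} + 6\,\mathrm{H_2O} + \text{energy}
\qquad\text{with}\qquad
\mathrm{ADP} + \mathrm{P_i} + \text{energy} \;\longrightarrow\; \mathrm{ATP}
```

The essential point is the coupling: energy released by cleaving the high-energy phosphate bond of ATP drives peptide-bond formation, and respiration regenerates ATP from ADP and free phosphate.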
Although ruminant tissue can metabolize glucose (1) and protein (2), most captured energy arises from either acetate (3), propionate (4) or butyrate (5), the main VFAs produced during rumen microbial fermentation. Figure 2.9 Grazing ruminants derive energy primarily from plant carbohydrates, lipids and proteins, but not all consumed energy is captured in a form usable to the animal. Total dietary energy includes all combustible energy of the diet, measured in calories (cal), kilocalories (kcal; 1000 cal) or megacalories (Mcal; 1000 kcal). That is, if a cow consumes 20 pounds of hay which, if burned, would give off 50,000 kcal of heat, then the cow would have eaten 50 Mcal total energy. This total or gross energy (GE) is partitioned (Fig. 2.10) into digestible energy (DE = GE - fecal energy); metabolizable energy (ME = DE - urinary and methane energy); and finally net energy (NE = ME - heat increment). Net energy is the amount of energy available for maintenance (energy required to maintain normal health and vigor) and production (energy required for growth, reproduction, lactation, etc.). The metabolizability of digestible energy, ME/DE, is rather constant at approximately 82% (NRC 1984). However, the digestibility of gross energy, DE/GE, and the net availability of metabolizable energy, NE/ME, vary with the chemical composition of the diet and the metabolic function for which the net energy is used. Expressions of the energy value of feeds and forages are defined in Table 2.2. Components of the diets of grazing animals can have dry matter digestibility (DMD) values from 14-85% depending on the amount of cell contents (NDS) and cell wall constituents (NDF) in the dry matter.
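The hay example and the stepwise energy partition above can be worked through numerically. In the sketch below, ME/DE = 0.82 follows the text (NRC 1984), while the fecal-loss and heat-increment fractions are illustrative assumptions for a medium-quality forage, not values from the chapter:

```python
# Partition gross energy (GE) into DE, ME and NE following the text's
# definitions: DE = GE - fecal energy; ME = DE - (urinary + methane
# energy); NE = ME - heat increment.

def partition_energy(ge_mcal, fecal_frac=0.45, me_over_de=0.82, heat_frac=0.25):
    """Return (DE, ME, NE) in Mcal.

    fecal_frac and heat_frac are hypothetical values chosen for
    illustration; me_over_de = 0.82 is the roughly constant
    metabolizability of DE cited in the text (NRC 1984).
    """
    de = ge_mcal * (1 - fecal_frac)   # digestible energy
    me = de * me_over_de              # metabolizable energy
    ne = me * (1 - heat_frac)         # net energy
    return de, me, ne

# The cow in the text eats 20 lb of hay containing 50 Mcal GE.
de, me, ne = partition_energy(50.0)
print(f"DE = {de:.1f} Mcal, ME = {me:.2f} Mcal, NE = {ne:.2f} Mcal")
```

With these assumed loss fractions, roughly a third of the gross energy in the hay ends up as net energy; on a poorer forage the fecal fraction rises and the usable share shrinks accordingly.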
The net availability of metabolizable energy (NE/ME) in a forage varies from about 90% when used for maintenance down to less than 20% for an incremental increase in intake high on the productivity curve (Fig. 2.11; Van Soest 1982, Fox et al. 1988). Therefore, the energy value of a quantity of forage varies as a function of its digestibility and its ability to meet the energy required to support a desired metabolic process or productivity level. Figure 2.10 Table 2.2 Figure 2.11 Ruminant animals require protein in the diet to supply nitrogen (ammonia) and amino acids for intraruminal microbial activity and amino acids for cellular-level tissue metabolism. Protein expressions are defined in Table 2.2. Suboptimal protein supply to the microbial population in the rumen results in a lowered fermentation rate, decreased digestibility of food consumed and decreased voluntary intake (Kempton and Leng 1979). Protein requirements in ruminants include protein and/or nitrogen requirements of the ruminal microbial population. Generally, microbial requirements are met at 6-8% crude protein in the diet. Animal requirements range from 7-20% in the diet depending upon species, sex and physiologic state. Normally, animal protein requirements are satisfied by a combination of microbial and dietary escape protein (Fig. 2.5). As animal protein requirements increase, the animal becomes more dependent on dietary escape protein. Priority of protein use can be expressed in the same fashion as priority of energy use (Fig. 2.8). Maintenance requirements are met first and include repair and replacement of body tissue. After maintenance requirements are met, absorbed amino acids are used for productive functions until one of three limitations is encountered: 1. The supply of amino acids in the correct proportion is depleted. That is, the synthesizing system literally runs out of one or more of the necessary amino acids to build the protein; 2.
One or more of the other necessary nutrients required in coupled reactions become limiting. This is easily understood for limited energy by reviewing the coupled equation, Equation 1. Alternatively, other nutrients, particularly vitamins or minerals, are not present in the proper proportion and limit protein synthesis; 3. The animal's genetic capability for performing a particular function has been reached. Genetic potential should be viewed as a variable range in a manner similar to nutrient requirements, but generally as a point on the production curve beyond which additional nutrients produce no practical response. Thus, a beef cow's requirements for protein or energy are met at a lower level of protein or energy intake than that required by a dairy cow. Vitamins are organic compounds that must be present at the cellular level to act as catalysts in metabolic processes. As noted earlier, many of the vitamins are synthesized by the ruminal bacteria and subsequently absorbed from the intestinal tract. With few exceptions, vitamin A is the only vitamin that is likely to limit the productivity of grazing ruminants. Vitamin A does not occur in plant tissue, but is synthesized by the animal from chemical precursors in plants, mainly beta-carotene but also other plant pigments. Vitamin A deficiency is most likely to develop during an extended period of low temperature and/or drought when green plants are unavailable to the animal. The vitamin next most likely to be deficient in grazing ruminants is vitamin E; this deficiency can become especially severe when combined with low selenium in the diet. Minerals required by animals are classified as either macro-minerals or micro-minerals according to the amounts required. Those required in relatively large amounts, the macro- or major elements, are sodium, chlorine, calcium, phosphorus, magnesium, potassium and sulfur.
In each case these elements are either a constituent of animal tissue or are required in large amounts to carry on metabolic functions. Mineral elements required in small amounts, micro- or trace elements, include iodine, iron, copper, zinc, manganese, cobalt, molybdenum and selenium. These generally have special functions as either low-level components of certain tissues, or as cofactors for certain metabolic reactions.

Macro-minerals

All of the major elements are potentially problematic in the range setting. Those most likely deficient in range forages are sodium, chlorine and phosphorus. Deficiencies of salt (sodium chloride) and/or phosphorus can result in perverted animal behavior, such as indiscriminate eating of rocks, sticks, bones, etc., and reduced forage intake and productivity. Deficiencies of the remaining four are unlikely under normal range conditions, but where deficiencies occur, the effects can be as devastating as in the cases of the more common deficiencies. A magnesium deficiency, for example, is associated with grass tetany, which occurs during lush plant growth periods that otherwise appear to provide the opportunity for high production. Reduced potassium can also depress animal productivity by reducing the appetite and so the food intake. Particular attention should be given to the macro-mineral status of animals grazing on drought or winter-dormant forages for extended periods of time. The trace elements, although needed in only minute amounts, are crucial to normal animal metabolism. Iodine is a component of the hormone thyroxine, iron equips blood cells to carry oxygen, and cobalt is required by microorganisms to synthesize vitamin B12. Many of the minor elements are cofactors in the enzyme systems involved in energy and protein metabolism. Therefore, "minor" or "trace" should not be interpreted as meaning of less qualitative importance. Animals cannot function normally without an adequate supply of any of the required elements, major or minor.
It is not possible at this writing to make definitive predictions about micro-mineral deficiencies and toxicities because the amounts required vary far more widely than do those of the macro-minerals. Trace element deficiencies are less widespread, less predictable, more difficult to recognize and probably quantitatively less important than major element deficiencies. Exceptions to this general statement include those regions deficient in selenium, iodine or cobalt. It should be noted that rangelands deficient in these micro-elements are extensive throughout the world. Toxicities resulting from consuming excessive amounts of micro-elements also occur in natural settings. An example is peat scours, a copper deficiency induced by high molybdenum, on soils high in organic matter. Yet, the importance and extent of trace element imbalances on rangelands remain largely undetermined. Nutritive value is an inclusive expression used to encompass all nutritional attributes of a forage in relation to its overall value to the consuming animal. However, the term is often used in the more restrictive sense of forage quality, including protein content, digestibility or simply palatability. The reader is encouraged to develop the broad view of quality proposed above, which includes consideration of the usefulness of forage constituents (nutritive value) for particular productive purposes in animals. This section discusses systems of nutritional description of forages and the classification of forage types for application in grazing management. A useful description of forages must somehow relate to the nutrient groups required by animals. These groups were enumerated as energy, protein, vitamins and minerals. The Proximate Analysis System was developed over 100 years ago in an attempt to use chemical determinations to describe the value of feeds for animals.
The proximate factors used as components are crude fiber (CF); crude protein (CP); crude fat, often stated as ether extract, EE; nitrogen-free extract, NFE; and ash. The most widely used proximate component analysis has been for crude protein:

CP (%) = % nitrogen × 6.25    (Equation 3)

The protein contained in a wide array of forages averages about 16% nitrogen. So the standard procedure is to determine the nitrogen content of a forage, multiply that value by 6.25 (100/16) and refer to the product as crude protein. Crude fiber (CF) and NFE fractions were intended to estimate the less and more easily digested portions of feeds, respectively. When applied to forages this arbitrary partitioning does not adequately differentiate the digestibilities of these fractions. The adoption of the proximate analysis system to describe feed fractions led to the development of the Total Digestible Nutrients (TDN) approach. The latter was an attempt to more adequately describe the energy value in feeds. Total digestible nutrients are defined as the sum of the digestible portion (% composition × coefficient of digestibility, COD) of each of the proximate organic components, with an adjustment factor of 2.25 for EE. Ash is not included because it contains no energy, while EE is increased because fat contains about 2.25 times the energy per unit weight compared with carbohydrates. The TDN system has been very useful over a long period of time in assigning values to feedstuffs that are relatively constant in composition but is less adequate for forages, especially range forages which vary widely in chemical components within the proximate fraction. The detergent fiber analysis system (Van Soest 1967) was a major improvement in the evaluation of the nutritional characteristics of forages. Partitioning cell content (NDS) from cell wall (NDF) distinguishes that portion that is essentially totally digestible from that which is partially and variably digestible, respectively.
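The proximate-system calculations described above (Equation 3 and the TDN definition) can be expressed directly. The example forage composition and coefficients of digestibility below are hypothetical illustrations, not tabulated values from the chapter:

```python
# Crude protein (Equation 3) and Total Digestible Nutrients as defined
# in the text: CP% = N% * 6.25; TDN sums the digestible portion of each
# proximate organic fraction, weighting digestible ether extract by
# 2.25 for its higher energy density. Ash is excluded (no energy).

def crude_protein(nitrogen_pct):
    """CP% from N%; 6.25 = 100/16 (forage protein averages ~16% N)."""
    return nitrogen_pct * 6.25

def tdn(cp, cf, nfe, ee, cod_cp, cod_cf, cod_nfe, cod_ee):
    """Each proximate fraction (% of dry matter) times its coefficient
    of digestibility (COD), with the 2.25x adjustment for EE."""
    return cp * cod_cp + cf * cod_cf + nfe * cod_nfe + 2.25 * ee * cod_ee

# A forage containing 1.6% nitrogen has 10% crude protein:
print(crude_protein(1.6))  # -> 10.0

# Hypothetical proximate composition and CODs for a range grass:
print(round(tdn(10, 30, 45, 2,
                cod_cp=0.6, cod_cf=0.45, cod_nfe=0.7, cod_ee=0.8), 1))
```

Note how the CF and NFE terms dominate the TDN total for a grass; the text's criticism is precisely that their assumed digestibilities are unreliable for range forages.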
Further fractionation of the NDF into its components including acid detergent fiber (ADF), acid insoluble ash (AIA), lignin and silica has refined the analysis of the fibrous portion. A very useful adjunct to this system of analysis was the development of a two-stage, micro-digestion technique (Van Soest et al. 1966). This technique, in vitro digestion of dry matter (IVDDM), provides an approximation of the digestibility of plants and plant parts. Further computational correction to an organic matter basis provides an estimate of digestible energy content in megacalories. However, IVDDM does not take into account the variable effects of rate of fermentation, digesta flow rate and retention time on digestive efficiency (Huston et al. 1986). These factors vary among animal species and in response to associative effects of companion dietary constituents. That is, the nutritional value of a dietary constituent can be enhanced by the addition of another dietary constituent which supplies a limiting nutrient. Forage quality is determined by various combinations of micro- and macro-scale biotic and abiotic factors (Morley 1981, Wheeler and Mochrie 1981, Van Soest 1982). The inherent morphological, anatomical, physiological and chemical characteristics of each plant species determine its potential nutritive value. Abiotic and temporal factors modify this potential. Examples of biotic factors can be found in the differences in quality between grasses utilizing three-carbon (C3) versus four-carbon (C4) photosynthetic pathways and between monocotyledonous (monocots) and dicotyledonous (dicots) plants (Table 2.3). In the first example the C4 plants, commonly termed warm-season species, contain less mesophyll and greater proportions of sclerenchyma, epidermis and vascular tissue than C3 plants, cool-season species (Fig. 2.12).
Vascular bundles are densely packed and parenchyma bundle sheaths are thick-walled in C4 grasses (high NDF), thereby inhibiting microbial digestion in the rumen, while reduced mesophyll (low NDS) provides less protein and soluble carbohydrates. Lignin concentrations are higher and leaf:stem ratios lower in warm-season grasses than in cool-season grasses. Stems have significantly greater proportions of structural carbohydrates and lignin (high NDF) in all forages, while leaves have greater proportions of cell contents (high NDS) and crude protein than stems. Table 2.3 Figure 2.12 Shrubs and most forbs are dicots and their leaf biomass is generally of higher nutritive value than that of grasses (monocots) (Table 2.3). Non-woody plant parts of dicots have greater quantities of cell solubles than monocots and lower levels of structural carbohydrate and lignin. This apparent advantage is often offset, however, by biologically significant proportions of secondary compounds (tannins, volatile oils, alkaloids, etc.) in a number of shrub and forb species. Many of these secondary compounds produce inhibitory and/or toxic effects on the microbial fermentation (Hegarty 1982). Hence, even if the quality of a particular plant species is comparatively high, inhibitory factors may reduce the utilization of the metabolizable nutrients (Burns 1978). Food materials of the highest quality are found in metabolically active tissues (live leaves, stems, flowers, etc.) or storage tissue (seeds, fruits and roots). Live plant tissue is of higher quality than dead. Similarly, younger live tissue by virtue of its greater metabolic activity is of higher quality than older live tissue. Generally, live leaf is of higher quality than live stem because of its greater photosynthetic activity. Nutrient quality declines as the rate of development or recruitment of new leaf tissue decreases and the rate of senescence increases (see Chapter 4).
While the overall quality of live leaf material may not change drastically with age, increasing amounts of senescent material dilute nutrient density (Greene et al. 1987). Concurrent changes in the leaf to stem ratio also occur as a plant matures. In terms of the energy flow (see Chapter 1) and standing crop (g/m2), available gross energy (kcal/m2) usually peaks when stems have elongated in mid-anthesis. However, maximum available net energy (NE, kcal/m2) occurs earlier, in the late vegetative and early anthesis stages, before significant reproductive culm elongation occurs (see Chapter 4). Turning to the abiotic factors that affect forage quality, the most important are air temperature and soil moisture. These environmental conditions modify the rates at which live material is accumulated and senescence occurs. Generally, the leaf and stem tissue of grasses grown at high temperatures is lower in both digestibility and crude protein content. Lignification and the formation of structural carbohydrates (NDF) occur rapidly at elevated temperatures, causing a concomitant reduction in the cell soluble fraction. Shrubs and forbs usually exhibit little change in leaf quality until senescence; however, stems of forbs and juvenile leaders of shrubs exhibit exaggerated declines in quality with advancing age (Petersen et al. 1987). Below-normal ambient temperatures that occur during the growing period frequently reduce growth rate and respiration rate, thereby reducing the rates of senescence, stem elongation and lignification. These reduced rates effectively extend vegetative growth further into the growing season so the resultant standing crop maintains greater proportions of digestible dry matter and protein than the same forage crop under normal temperature conditions. Restricted soil moisture can either increase or decrease forage quality.
If moisture is restricted during the vegetative growth stage, slowing growth but not inducing senescence, the delayed maturation maintains forage quality in a manner similar to lower ambient temperature. However, if restriction progresses to severe water stress, forage quality decreases in response to nutrient translocation and senescence of plant parts. In most rangeland environments, drought is often accompanied by above-normal ambient temperatures which exacerbate the plant's growing conditions by increasing the rate of evapotranspiration. In summary, the primary factors influencing the quality of forage are the plant species present and their level of metabolic activity. The more active a particular tissue is, the greater its quality. Environmental conditions in turn modify this activity by affecting the rate at which it occurs. A variety of plant communities, each having a unique assemblage of plant species, occurs in rangeland ecosystems. Intra- and interspecific competition among plants for resources and interactions with prevailing climatic conditions lead to formation of plant communities (see Chapter 5). Animals, however, are neither plant taxonomists nor community ecologists and consume plants according to availability and preference (see Chapter 3). Whether a plant is an increaser, decreaser or invader (see Chapter 4 and Chapter 5) is immaterial to the animal. Instead, the amount of live-to-dead and leaf-to-stem material available, presence or absence of inhibitory factors, etc., in various species or species groups are the only matters of concern (see Chapter 3). Generally, animals select from the highest quality components of the available forage pool first. Some plant species are highly nutritious but available only in limited quantities while more readily available species are less nutritious. As the pool of highest quality plants is depleted, increasing quantities of the next highest quality components are consumed.
These selection and consumption processes are integrated through space and time (see Chapter 3). Although each rangeland environment is composed of a unique agglomeration of plant communities, each with particular vegetational characteristics, the following general classification of their functional nutritional components has been proposed (Huston et al. 1981). Semiarid and arid rangelands are usually dominated by a particular forage type that is relatively high in quality during early vegetative growth but quickly declines in quality as the forage accumulates and matures. This forage type provides the majority of organic matter consumed by grazing animals on rangeland and is termed the production component. On temperate and tropical rangeland, this component is composed of perennial grasses. Characteristics limiting the nutritional value of these plants are the very same as those ensuring their availability for consumption. Their content of structural carbohydrates is quite high; they enter dormancy during unfavorable periods and reinitiate growth during favorable periods. Adult bulk feeders can maintain acceptable levels of productivity when grazing these forages, provided their reproductive cycle conforms closely to the temporal nutrient profile of the vegetation. In other ecosystems, the production component may be annual grasses, as for example in California annual grassland, or shrubs, as in salt desert shrub ecosystems, but the common characteristic of the production component is that it ultimately determines the sustained animal yield potential because it is the principal stable component under existing grazing conditions. Other plant species provide a quality component to diets of ruminants on rangeland. These species, which differ from one ecosystem to the next, provide only a minor amount of forage, but that forage is significantly higher in nutrients (CP, DE, etc.) than the production component.
Certain perennial forbs, shrub leaf buds and tips, mast, fruits, seeds, etc. contribute disproportionately to the productivity of bulk feeders both by raising the overall diet quality and by preventing nutrient deficiencies, for example of vitamin A and phosphorus. Perhaps more importantly, quality components provide a suitable diet for grazing and browsing small ruminants having higher nutritional requirements, thereby increasing the overall production potential of a specific rangeland. The quality component is also important to big game populations. The plant species making up the level component of forage materials in this classification system can be characterized as those which remain green throughout the grazable portion of the year. These species rarely produce forage that is either exceptionally high or ruinously low in nutrient content, but offer fair to good quality forage during all seasons. The level component competes with the production component in a plant community for space, moisture and nutrients, but substitutes for the quality component during periods of dormancy and can significantly reduce reliance on supplemental feed. Examples of plants in the level component include elk sedge in the Intermountain area of North America and Texas wintergrass in north and central Texas. Leaves of evergreen browse species, such as fourwing saltbush of western North America, belong to this component. Plant species that are of exceptionally high quality and are available episodically make up the bonus component. These species are the antithesis of the production component species, being neither stable nor predictable. In continental climates, annual forbs and grasses commonly form this component. When present, these plants contribute significantly to the live standing crop and offer a substantial short-term opportunity for enhanced animal production. Sufficient management flexibility must exist to exploit their presence.
Animals having high nutritional requirements, such as growing or heavily lactating animals, make the most efficient use of this component. This component is also particularly important to upland and non-game birds and big game animals. In this classification system, null component plant species are those not used unless the availabilities of the other components, particularly the production component, are severely restricted. Significant animal consumption of these forages indicates a badly depleted forage resource. In Texas these plants include prickly pear, creosote bush, tarbush, honey mesquite, Texas persimmon, broomweed and croton. These species are of limited value to grazing animals yet may be an extremely important part of the diets of sympatric mammals and bird species. The presence of these plants is often mistakenly considered desirable by stockmen because they are viewed as emergency forage, but this view is incorrect. Cyclic utilization of this component is an indicator of unstable nutrient intake, where nutrient demand grossly exceeds nutrient availability from alternative components. The toxic component includes all species poisonous or injurious to grazing animals. Many of these species serve dual roles. They are of some value in other components but are harmful when consumed in excess or at a particular stage of growth. Examples of these dual-role plants in Texas include kleingrass, peavine, sacahuista, oaks, johnsongrass and prickly pear. Acute effects of toxicity are obvious and can be dealt with promptly. Conversely, chronic effects often go undetected and may even be more costly by virtue of reducing production efficiency. Ruminants optimize forage consumption to meet their nutrient requirements if no physical or metabolic restrictions are imposed (Weston and Poppi 1987). Voluntary intake of forage is the amount consumed by the animal when its accessibility to forage is unrestricted.
In such a case, regulation of intake is dependent only on endogenous mechanisms triggered either within the animal or by some characteristic(s) of the forage (Baile and Forbes 1974, Forbes 1980, Van Soest 1982, Grovum 1986). Forage (nutrient) intake under grazing conditions is a modified expression of voluntary intake and is influenced by forage quality (Table 2.4), forage availability (Table 2.5), forage harvestability, environmental stress and management (Chacon and Stobbs 1976, Hodgson 1977, Arnold and Dudzinski 1978, Finch 1984, Allison 1985, Young 1986, 1987). We group environmental stress with nutrient intake in this discussion because nutrient demands for travel, diurnal and seasonal thermal fluctuations and predator avoidance are more pronounced under free-grazing than controlled feeding conditions. Forage intake of grazing ruminants is usually controlled by distension of the reticulum and cranial sac of the rumen (Grovum 1986). Distension of this sensory region is decreased by digesta passage to the lower tract and/or by reducing ingesta volume and mass through mastication and fermentation. Mastication, primary and secondary, is the major means of particle size reduction (McLeod and Minson 1988), resulting in more dense, less bulky digesta and more rapid fermentation and passage.

Table 2.4
Table 2.5

Animal Factors Affecting Nutrient Intake

Voluntary intake may decrease before, and increase after, parturition in both sheep and cattle (Jordan et al. 1973, Weston 1982, Warrington et al. 1988). Decreased intake during late gestation is attributed to decreased reticulorumen capacity caused by a combination of rapid fetal growth and/or increased deposition of abdominal fat and hormonal mechanisms (Forbes 1971, Baile and Della-Fera 1981). The extent to which these mechanisms ultimately control voluntary intake is not known.
Voluntary intake increases post partum, but lags behind increased energy requirements for lactation by 2-6 weeks, apparently because of the time required for the rumen to increase in size and reestablish maximum volume (Weston 1982). There is no clearly defined relationship between body condition (fatness) and nutrient intake in cattle and sheep (Freer 1981, Weston 1982). The general consensus is that abdominal fat restricts voluntary intake 3-30% (Cowan et al. 1980, Freer 1981, Fox 1987), although various effects of fatness have been reported (Bines et al. 1969, Holloway and Butts 1983, Adams et al. 1987b). Conversely, animals in a depleted state consume greater quantities of moderate to high quality forages (compensatory intake). Beef cattle and sheep of different genetic backgrounds exhibit markedly different voluntary intakes (Arnold and Dudzinski 1966, Table 2.5) and efficiencies of production. Maintenance requirements of beef cattle account for 70-75% of the ME requirements through a production cycle under pen fed conditions (Ferrell and Jenkins 1987). While limited quantitative data are available (Osuji 1974, Havstad and Malechek 1982), the maintenance energy costs of free-ranging cattle are estimated to be 20-50% greater than under pen fed conditions (Cook 1970). Therefore, the mature size and milk production capability of cows could have a marked effect on their efficiency of production under grazing conditions. Metabolizable energy intake increases as mature size and milk production increase. Similarly, Havstad and Doornbos (1987) reported that voluntary intake of 3/4 Simmental cows was greater than that of Hereford cattle under free ranging conditions. Under conditions of low forage quantity and/or quality, the production potential of 3/4 Simmental cattle was not achieved. Animal genotype and phenotype can have marked effects on voluntary intake and efficiency of production. Dairy cattle breeds have higher maintenance (Solis et al.
1988) and lactation (NRC 1978, 1984) energy requirements and intake per unit weight than beef breeds. These are attributed to differences in physiological prioritization of tissue growth and maintenance (Solis et al. 1988). Dairy breeds have a higher proportion of soft tissue organ mass having high maintenance requirements. Additionally, dairy breeds store a larger proportion of fat internally than beef breeds, thereby decreasing insulatory capacity. Bos indicus cattle (Brahman type) have been found to exhibit lower maximum intakes of moderate quality diets, under minimal stress, than Bos taurus (Hunter and Siebert 1985a, 1985b). Lower intake may be the result of B. indicus having a smaller digestive tract; however, on poor quality tropical grasses, B. indicus digests forages more completely and still exhibits greater voluntary intake than B. taurus (Hunter and Siebert 1985a, 1985b). Voluntary intake of moderate to high quality forages is greater for B. taurus than for B. indicus. When low quality tropical grass diets are supplemented with nitrogen, voluntary intake of B. taurus is greater than that of B. indicus, indicating B. indicus may have a greater capacity to recycle nitrogen (Hunter and Siebert 1985b). Adaptability of these cattle species to the thermal environment also influences intake patterns. Based upon these findings for domestic ruminants, selecting genotypes suited to a particular range setting is an important management consideration. Thermal conditions affect intake more than any other environmental factor (see Chapter 3). The range of temperature and humidity where the ruminant is at relative equilibrium with the environment is the thermal neutral zone (TNZ). Beef cattle have a TNZ for intake of 10-25 C (Finch 1984, NRC 1981a). Below the TNZ (cold stress), intake increases in response to heat loss down to -25 C if fill limitations are not encountered. Above the TNZ (heat stress), intake decreases in response to heat loading.
Abrupt changes in temperature, e.g., a blizzard or sleet storm, may cause a transitory decrease in intake, even within the TNZ. At sustained temperatures below -25 C, grazing time and intake may be restricted under free ranging conditions to minimize energy expenditures for grazing (Young 1986, Adams et al. 1986, 1987, NRC 1981a). As would be expected from their origin, B. taurus are more cold tolerant than B. indicus animals. The reverse is true in terms of heat tolerance (Finch 1984). Intake responses follow the same trends as tolerances. Crosses of these cattle types exhibit intermediate intakes across the ranges of heat and cold stress. Level of forage intake and associated forage quality interactions are complex functions that vary through time and across animal and forage types (Table 2.4). Generally, short-term intake responds in a positive manner to increasing digestibility up to 80% (Hodgson 1977, Freer 1981). However, because ruminants tend to consume forage in response to physiological requirements, long-term intake regulation is relative to a certain level of homeostasis in body condition. Hence, the treatment which follows is an attempt to blend both short- and long-term intake responses to forage quality relative to physiological requirements. Long-term voluntary intake patterns are determined by the amount of food needed to meet the physiological requirements but modified by the amount which can be consumed before physical constraints are encountered. Both are affected by forage quality, in that less food is needed if the food items have higher concentrations of nutrients, and more food can be physically consumed if the bulky, indigestible fraction is lower. Figure 2.13 illustrates the relationship between forage digestibility and intake assuming no other restrictions. The descending curve represents forage intake needed to meet the maintenance requirement for digestible dry matter, 4.3 kg (9.5 lb) DDM, for a 500 kg (1100 lb) beef cow (NRC 1984).
At 20% digestibility, 21.5 kg (47 lb)/day of forage must be consumed to permit the cow to extract the required 4.3 kg of DDM. However, only 5.4 kg/day of an 80% digestible forage must be consumed to supply the same 4.3 kg DDM. The ascending curve depicts the theoretical maximum consumption of forages within the range of 20-80% digestibility, assuming fecal output is constant at 1% of body weight (Conrad et al. 1964). The two curves intersect at approximately 46% forage digestibility and 9.3 kg/day forage intake. Note that to the left of the point of intersection, maximum intake falls below required intake. In the above example, the cow fed a forage that is less than 46% digestible cannot consume enough to reach the required amount of DDM. In the right-hand portion of the figure, maximum intake rises above required intake, so the cow can consume greater amounts of forage at these digestibility levels than are required to meet DDM maintenance needs. The model proposed by Conrad et al. (1964) postulated that voluntary intake tends to take on the pattern formed by the area below both curves. In that case, voluntary intake of forage increases as the digestibility of the forage increases up to the point of intersection. Further increases in digestibility lead to decreased food intake, and so no change in DDM intake occurs.

Figure 2.13

This model has been challenged in recent years as being inaccurate and too simplistic (Freer 1981, Grovum 1986), and in some cases with good reason. For example, low digestibility forages are, almost without exception, also low in protein. A small addition of protein to the diet dramatically increases intake of a low quality forage, indicating that its inherent low digestibility alone did not lead the animal to consume less. On the other side of the scale, grazing animals do not abruptly quit eating the moment their daily nutrient requirements for on-going physiologic processes are met.
This fact is easily seen in cows becoming overly fat after the loss of an infant calf or failure to breed. Animals clearly initiate and terminate feeding in response to an array of physical, chemical and humoral signals (Grovum 1986). In the lower digestibility range, physical factors are most important, although ruminal nitrogen status is certainly involved. At higher diet digestibility, physical factors are less important, so internal chemical and humoral factors become more important in producing hunger and satiety signals. Although the model shown in Figure 2.13 does not account for all factors modifying forage intake, it does depict generalized long-term forage intake patterns of grazing ruminants. The area to the left of the point of intersection, below 46% digestibility in this example, forms the zone of response. As forage quality increases, nutrient intake (Ventura et al. 1975) and productivity increase. If a cow that is not lactating and is at mid-pregnancy consumes 9.3 kg of forage, normal growth of the fetus and some accumulation of fat for later use after parturition occur. However, if she consumes forage of lower quality, < 46% digestibility, little or no fat accumulates and fetal development is retarded. Once born, the calf will be smaller and weaker. The cow will produce less milk, wean a lighter calf and have a reduced probability of rebreeding on schedule. In the extreme case, < 30% digestibility, the cow is malnourished and will eventually die. The area to the right of the point of intersection, above 46% digestibility in the example, forms the zone of adequacy. As forage quality increases above the 46% digestibility level, the model indicates that the cow is correspondingly less stimulated to consume the forage, thus intake declines. Because requirements are met at a lower level of intake of a more digestible diet, no decline in productivity accompanies the decline in intake.
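The two curves of the Conrad et al. (1964) model can be reproduced numerically from the figures stated in the text (4.3 kg DDM maintenance requirement, fecal output fixed at 1% of body weight for a 500 kg cow). The sketch below is an illustration of that reasoning under those assumptions, not part of the original model's implementation.

```python
# Sketch of the two curves in the Conrad et al. (1964) intake model.
# Values are the ones stated in the text: a 500 kg beef cow requires
# 4.3 kg digestible dry matter (DDM) per day, and fecal output is
# assumed constant at 1% of body weight.

DDM_REQUIRED = 4.3            # kg DDM needed per day (NRC 1984)
FECAL_LIMIT = 0.01 * 500.0    # kg feces/day for a 500 kg cow

def required_intake(digestibility):
    """Descending curve: forage (kg/day) needed to extract DDM_REQUIRED."""
    return DDM_REQUIRED / digestibility

def maximum_intake(digestibility):
    """Ascending curve: most forage (kg/day) the cow can process
    before the fixed fecal output limit is reached."""
    return FECAL_LIMIT / (1.0 - digestibility)

# The curves cross where DDM_REQUIRED/d == FECAL_LIMIT/(1 - d):
d_star = DDM_REQUIRED / (DDM_REQUIRED + FECAL_LIMIT)

print(round(d_star, 2))                   # ~0.46 digestibility
print(round(required_intake(d_star), 1))  # 9.3 kg/day at the intersection
print(round(required_intake(0.20), 1))    # 21.5 kg/day at 20% digestibility
print(round(required_intake(0.80), 1))    # 5.4 kg/day at 80% digestibility
```

Below the crossover digestibility, `maximum_intake` falls under `required_intake`: this is the zone of response, in which the cow physically cannot eat enough to meet maintenance.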
An animal previously restricted by either quantity or quality of diet to the point of nutrient depletion increases intake to a greater level than depicted. Once recovered from the depleted state, voluntary intake is adjusted lower. Adequate data on forage intake in free-grazing ruminants in different physiological states and over a wide range of forage digestibility are limited due to the difficulty of making such measurements. However, sustained access to forages in the higher range of digestibility is rare under range conditions. If this occurs, animals having higher nutrient requirements (stockers, replacement heifers) should be grazed to make the most efficient use of this resource. The art and science of grazing management is matching the nutrient supply in the forage to the nutrient requirements of the foraging animal to reach sustained optimal productivity. In that spirit we submit the Huston-Pinchak Theorem: Diets of range animals typically fluctuate above and below the theoretical point of intersection (Fig. 2.13) in a more or less cyclic fashion based upon short-term intake responses to quality of forage consumed (Table 2.4). During periods of high physiological requirements, such as early to mid-lactation, the animal may not be capable of consuming adequate amounts of forage to prevent tissue loss, whereas during periods of low physiological requirements and, on occasion, higher forage quality, nutrient intake may greatly exceed current requirements and result in substantial tissue accretion. We define this spectrum of forage quality as the normal range of forage quality and intake. This spectrum is specific for each of the animal species, types, ages and uses in production systems and is reflected in Table 2.4. The data in this table illustrate a wide array of seasonal trends of intake in response to forage quality over an equally wide array of forage type-animal species combinations, i.e., "a spectrum of normal ranges".
A forage or assemblage of forages providing a diet in the normal range is therefore correct for that production system. Good enough is excellent. An obvious interaction exists between the quantity and quality of available and consumed forage (see Chapter 1). Selective utilization of areas within pastures as well as selective utilization of plants and plant parts within these areas (see Chapter 3) make it difficult to determine which component of available forage is regulating intake. Table 2.5 is an overview of the dynamic interactions between forage and animal type demonstrating the relationships between forage intake and forage availability. Generally, standing grass crops below 1000 kg/ha restrict forage intake by sheep and cattle on temperate native grasslands of North America. However, on improved pasture, temperate and tropical, standing crops become limiting between 1000 and 4000 kg/ha (Stobbs 1973, Forbes and Coleman 1987). Differences within and between regions are related to forage species or species mix of the pastures. The vertical distribution of leaf and stem biomass and their live and dead fractions ultimately limits intake (Chacon and Stobbs 1976, Poppi et al. 1980, Freer 1981, Forbes and Coleman 1987). Hence the amount of available live leaf biomass (kg/ha) within an exploitable zone (see Chapter 3) determines maximum rate of intake. Departure from maximum rate of intake results from the decline of live leaf within this zone below a critical threshold. The point at which this threshold is reached varies with forage species, growing season, length of grazing period and animal species. Historically the relationships between forage availability and intake have been described in relation to forage standing crop (Table 2.5). 
However, overwhelming evidence exists that the amount of leaf and the ratio of leaf to stem within harvest horizons ultimately determine the upper limit of intake, and therefore production, for a given set of forage conditions at a specific point in time. The frequency, severity and duration of periods of restricted intake determine the sustained animal yield capacity of any land area. Short-term conditions can be overcome through supplemental (substitutional) feeding. Chronic intake restriction can be overcome by destocking and/or increasing forage production and/or increasing the amount of leaf material. The latter two remedies depend on increased cultural inputs. Supplemental nutrition management is defined as the implementation of practices specifically aimed at improving the nutritional status and/or efficiency of converting available forage into animal products in a given circumstance. Supplemental nutrition is an option when the forage base fails in quantity and/or quality of nutrients to meet the physiological requirements of the grazing animal. Supplemental feeding is targeted at correcting nutrient deficiencies or providing nutrients to stimulate intake, digestion and/or utilization of forage (Table 2.6). In a broader sense, supplemental nutrition management includes corrective practices to align nutrient supply with nutrient demand. Replacing a quantity of forage that would otherwise have been consumed by feeding an alternate feed supply is called substitution. Supplying a limited nutrient, i.e., protein, to animals having unrestricted access to forage of poor quality is called supplementation. Huston et al. (1988) (Fig. 2.14) clearly demonstrated that low levels of protein supplementation can stimulate forage intake of low quality forages by sheep.
Generally, field experiments have been less conclusive (Table 2.6), although low levels of protein supplementation on poor quality (< 6% crude protein) diets can stimulate forage intake (Caton et al. 1988a).

Table 2.6
Figure 2.14

Grazing management is a primary means of achieving a balance between animal demand and nutrient supply. Decisions on animal populations (species, breeds and classes), stocking rates, breeding dates, pasture sizes, rotation schedules, etc. (see Chapter 7) set the degree of match or mismatch between the supply of and demand for nutrients. Supplemental nutrition management in this context is then the fine adjustment in the balance between supply and demand. The following discussion describes four general categories of mismatches of nutrient supply and demand. The difference between high and low quantities relates to an animal's ability or inability to achieve adequate intake of forage in a reasonable length of grazing time (see Chapter 3). High and low quality refers to the normal range defined in the previous section. This range condition is seldom found on a sustained basis but often occurs on a short-term basis. Seasonally, such a condition occurs on temperate rangelands during the late spring growth period. Small grain pastures (wheat, oats, rye) provide forage of this type until mid-anthesis. This is an important interval for animals matched to forage within the normal range, as this "up" period follows and precedes a "down" period. Therefore, it is essential for recovery from a past depletion period and preparation for future depletion. It is conceivable that under some conditions both diet quantity and quality consistently, or at least frequently, exceed requirements. In such cases, forage quality and nutrient intake rise above the normal range. The expected result is overly fat animals, reduced efficiency in transferring dietary nutrients into animal products and possibly reduced individual animal performance.
The corrective management strategy is to restructure the grazing population. That is, animals having greater productive potential and a greater capability for utilizing the high quality forage should be selected, or the stocking rate of existing animals should be increased. This range condition commonly occurs when abundant plant growth is followed by an extended period of temperature and/or moisture induced dormancy. This condition is characteristic of the dormant season in the temperate region. The residual forage contains comparatively high proportions of structural carbohydrates, thereby diluting its energy and protein value. The key concern is whether the digestibility of the forage fluctuates within the normal range. Remember that forages that fall in the lower region of the normal range for dry cows virtually always are below the normal range for lactating cows and small ruminants (sheep, goats, deer, etc.). A supplemental nutrition program should provide the limiting nutrients (e.g., protein, phosphorus, vitamin A). This supplemental feeding program may stimulate forage consumption if protein and/or phosphorus are critically low, or may decrease forage consumption by substitution (Fig. 2.14). Assuming that protein and/or phosphorus are not limiting in the forage, forage consumption decreases by approximately one-half of the amount of concentrates fed. The converse of the previous range profile, this condition favors small ruminants, especially goats and deer, which are flexible in their foraging behavior. This condition is characteristic of shrub dominated landscapes and often results from overgrazing and/or protection from fire (see Chapter 5). Supplemental feeding can be used to increase the stocking rate, but if the range is properly stocked with the correct animal types, supplemental feeding does not improve the productivity of the individual grazing animals.
Special use pastures can also be assigned to this category, such as small grain pastures of extremely high quality, especially in protein. Feeding grains to growing animals, lambs and calves, on small grain pastures allows an increase in stocking rate without altering animal performance. In this case, an almost exact substitution occurs: the small grain forage intake is reduced by the amount of the grain fed. Benefit is realized because the high concentration of protein is more efficiently distributed to a larger number of animals, resulting in greater net secondary productivity. This range condition is best typified by desert or arid landscapes. The limited standing crop typically contains an abundance of structural carbohydrates, lignin and/or secondary plant chemicals that reduce palatability, intake and utilization. The proper nutritional strategy in this circumstance is to encourage high plant selectivity by maintaining a low stocking density. Feeding during dormant interim periods provides a balance of nutrients when little or no alternative natural supply is available. Similar seasonal conditions exist on rangelands overstocked during periods of dormancy. Very little forage is available, and that which remains is low in quality. For best results in the short term, a good quality hay or a complete feed should be provided. Heavy rates of stocking on yearlong range lead to similar nutritional conditions during winter dormancy (Heitschmidt et al. 1987, Greene et al. 1987), hence establishing a cyclic pattern. The range between the highs and lows in nutritional adequacy is too broad to fit within the nutritional state characterized earlier as the normal range. The lows are too low for adequate recovery during the highs; thus, productivity is substantially reduced. Management alternatives include reduced stocking to increase quantity and quality of diet, or liberal feeding, which rarely yields economic returns and only prolongs an unsustainable ecological condition.
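The substitution arithmetic described in these profiles can be sketched as follows. The substitution rates are the approximate figures from the text (about one-half on dormant range when protein and phosphorus are adequate, nearly one-to-one on high quality small grain pasture); the baseline intake numbers are hypothetical.

```python
# Hedged sketch of supplement substitution: concentrate fed displaces
# part of the forage the animal would otherwise have eaten.
# Substitution rates below are the approximate values from the text;
# actual rates vary with forage and animal type.

def forage_intake_after_supplement(forage_kg, concentrate_kg, substitution_rate):
    """Forage actually consumed (kg/day) once concentrate displaces part of it."""
    return forage_kg - substitution_rate * concentrate_kg

# Dormant range, protein/phosphorus adequate: ~0.5 kg forage displaced
# per kg concentrate (hypothetical 9.0 kg baseline forage intake).
print(forage_intake_after_supplement(9.0, 2.0, 0.5))  # 8.0

# High quality small grain pasture: almost exact substitution.
print(forage_intake_after_supplement(9.0, 2.0, 1.0))  # 7.0
```

In the second case total dry matter intake is unchanged, which is why the benefit comes from carrying more animals rather than from higher individual performance.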
So, what have we said? Ruminant animals are placed on rangeland as primary consumers of the vegetation formed by the capture of solar energy. In a natural state, these animals would in turn adapt spatially and in proper numbers for more or less sustained survival. However, the human demand for the offtake of consumable products (food and fiber) imposes a requirement in excess of survival and so creates an equilibrium that is less than a natural balance. Restricted movement, altered numbers and controlled breeding impose an unnatural match between what is offered by the vegetation and what is required by the grazing animal. Through an understanding of what nutrients are important, their probable concentrations and fluctuations in forages and their requirements by animals, management can partially align nutrient supply and demand on rangeland. Supplemental nutrition management is then required to provide a fine adjustment for optimal productivity. Perhaps the most important aspect of management is the recognition of what is involved in grazing behavior and diet selection, the topic of Chapter 3.

List of Figures and Tables

Figure 2.1 Diagrammatic representation of the anatomical and biochemical relationships in plant cell nomenclature between botany and ruminant nutrition.
Figure 2.2 The isomeric arrangements of the glycoside bonds in the complex carbohydrate polymers starch and cellulose.
Figure 2.3 Stylized representation of the digestive anatomy and arrangement of......
Figure 2.4 Stylized paths of consumed materials within the reticulorumen of ruminants.
Figure 2.5 Diagram of nitrogen flow in ruminants.
Figure 2.6 Relationships between ruminant bulk, intermediate and concentrate feeders.
Figure 2.7 Prehensile mouth parts of cattle, sheep, goats, and deer reflecting the degree of selectivity and harvest efficiency expressed by these herbivores.
Figure 2.8 Generalized diagram indicating the prioritization of energy use by ruminants.
Figure 2.9 Major pathways of respiration for absorbed metabolites that yield energy in ruminants.
Figure 2.10 Catabolism of dietary energy depicting the energetic losses involved with digestive and metabolic processes in the ruminant.
Figure 2.11 Efficiency of metabolizable energy utilization with increasing levels of energy intake.
Figure 2.12 Cross sections of leaves of warm season and cool season grasses depicting the anatomical differences in nutritive value between these types of forages.
Figure 2.13 Relationship between forage digestibility and long term forage intake patterns.
Figure 2.14 Effect of forage quality on intake responses to supplemental feeding.
Table 2.1 Comparative nutritional dynamics in livestock species.
Table 2.2 Expressions of energy and protein in forages.
Table 2.3 Expected range of crude protein, digestibility, and phosphorus content in warm and cool season forages from native and improved pastures throughout the world.
Table 2.4 A summary of relationships between forage intake and diet quality.
Table 2.5 Relationships between forage availability and intake restriction across forage types.
Table 2.6 Relationships between type and amount of supplement fed and forage intake.
https://cnrit.tamu.edu/rlem/textbook/Chapter2.htm
Scientists studying tiny ancient meteorites have found evidence that Earth’s atmosphere used to contain much more carbon dioxide, and maybe less nitrogen, than it does now. What was Earth’s atmosphere like a few billion years ago, early in its history? Researchers at Penn State say they’ve found some clues by analyzing iron micrometeorites in ancient soils. These particles from space – a subset of cosmic dust – suggest that carbon dioxide made up 25% to 50% of Earth’s atmosphere 2.7 billion years ago. That’s in contrast to today’s carbon dioxide level of around 0.04%. There might also have been less nitrogen then than in our present-day atmosphere; now nitrogen is by far our atmosphere’s primary gas. The new peer-reviewed findings were published in the journal Proceedings of the National Academy of Sciences on January 21, 2020. Carbon dioxide concentrations have varied widely over the Earth’s 4.54-billion-year history. This new work helps quantify the elements that made up Earth’s atmosphere in the very distant past. The tiny iron micrometeorites that were studied are no larger than grains of sand. They were discovered in ancient soils – called paleosols – that are about 2.7 billion years old. The soils were collected in the Pilbara region of Western Australia. These scientists believe the micrometeorites fell from space during the Archean eon, when the sun was weaker than today. How do the scientists determine atmospheric composition from such small particles? As the meteorites streaked through the atmosphere, they melted from the heat. As they encountered the gases in the atmosphere, they became oxidized. That oxidation can still be seen today, and analyzed. Rebecca Payne, lead author of the study at Penn State, said in a statement: “This is a promising new tool for figuring out the composition of the upper atmosphere billions of years in the past.”
It was previously thought that free oxygen molecules in the upper atmosphere oxidized the meteorites, but the new research refutes that idea. For that scenario to be plausible, the amount of oxygen at the time would have had to be similar to what it is now. But other research shows there was much less oxygen than there is today, perhaps even none at all. That leaves carbon dioxide as the gas that could have oxidized the meteorites. That conclusion is based on new analysis using photochemical and climate models. But that analysis also indicates that at least 25% of the atmosphere must have been composed of carbon dioxide, and perhaps a lot more. This fits with previous atmospheric models, which said there was much less oxygen early on, until the Great Oxidation Event about 2.4 billion years ago. Owen Lehmer, a doctoral student at the University of Washington, stated: “Our finding that the atmosphere these micrometeorites encountered was high in carbon dioxide is consistent with what the atmosphere was thought to look like on the early Earth.” The other key finding is that there was probably a lot less nitrogen in Earth’s atmosphere 2.7 billion years ago than there is now. Today, nitrogen makes up about 78% of the atmosphere. There is a problem, however, in what we know about conditions at the time, nearly three billion years ago. With so much carbon dioxide, the Earth should have been warmer, but evidence shows it was not. In fact, it was partly covered by glaciers. That can be explained, however, if there was less nitrogen than today. That would cause lower atmospheric pressure, which could then allow for both higher carbon dioxide levels and cooler temperatures. According to Jim Kasting, an Evan Pugh Professor at Penn State: “There are data, referenced in our paper, that support lower nitrogen concentrations during this time. Our study of micrometeorite oxidation falls in line with that interpretation.”
The possibility that our major atmospheric gas, nitrogen, was less abundant in the distant past is really intriguing. There has been much debate about how much carbon dioxide was in the atmosphere a few billion years ago, and this new study will now add to that. Various studies have often contradicted each other; this is because they mostly relied on ancient soils, which can be affected by weather or ground cover. They also tend to reflect conditions in the lower atmosphere rather than the upper atmosphere, where the meteorites would have first been affected and become oxidized. As Payne said: “It was getting difficult to figure out where the agreement should have been between different paleosol studies and climate models. This is interesting, because it’s a new point of comparison. It may help us find the right answer about atmospheric carbon dioxide in the deep past.” These findings may also help scientists better understand the evolution of the atmosphere on Mars. It also is predominantly carbon dioxide, but is much thinner than Earth’s, and its composition early on is still a matter of debate as well. There is also similar disagreement among scientists as to how the Martian atmosphere was once thick and warm enough for liquid water to exist on the planet’s surface, when some evidence still points to a colder climate for most of Mars’ history. Venus’ atmosphere is also predominantly carbon dioxide, but it is much thicker than Earth’s, with crushing surface pressure. It is thought that Venus used to be more Earth-like (as we know Earth now) in its younger days, yet ended up with a dense carbon dioxide atmosphere that has turned the planet into a scorching hot world due to the greenhouse effect. Figuring out the true composition of the atmosphere of the early Earth will help scientists better understand how the atmosphere has changed over the past few billion years, and what conditions were like when life first started to evolve.
This data can then be extrapolated to other rocky planets with atmospheres, like Mars or Venus, to understand how they took such different evolutionary paths. Lehmer said:

"Life formed more than 3.8 billion years ago, and how life formed is a big, open question. One of the most important aspects is what the atmosphere was made up of: what was available and what the climate was like."

Bottom line: New analysis of tiny ancient meteorites indicates that Earth's atmosphere used to contain much more carbon dioxide than it does now.

Source: Oxidized micrometeorites suggest either high pCO2 or low pN2 during the Neoarchean

Paul Scott Anderson has had a passion for space exploration since childhood, when he watched Carl Sagan's Cosmos. While in school he was known for his passion for space exploration and astronomy. He started his blog The Meridiani Journal in 2005 as a chronicle of planetary exploration; in 2015 the blog was renamed Planetaria. While interested in all aspects of space exploration, his primary passion is planetary science. In 2011 he started writing about space on a freelance basis, and he currently writes for AmericaSpace and Futurism (part of Vocal). He has also written for Universe Today and SpaceFlight Insider, been published in The Mars Quarterly, and done supplementary writing for the well-known iOS app Exoplanet for iPhone and iPad.
https://news.nmnandco.com/2020/02/02/study-suggests-early-earths-atmosphere-was-rich-in-carbon-dioxide-earthsky/
The utility model relates to a medical instrument, in particular a medical rigid endoscope with a diameter of Φ2.6 mm to 3.0 mm, composed of a scope body, a scope sheath and an optical part. The objective imaging system comprises two separated groups of negative and positive lenses: the first group is a negative lens group of strong optical power, and the last group is a positive lens group. The combined focal length f' is given by f' = (f1' × f2') / (f1' + f2' − d), where f1' is the focal length of the negative lens group, f2' is the focal length of the positive lens group, and d is the distance between the negative and positive lens groups. The image-transferring lens system is a relay rod-lens system comprising a negative rod lens and a positive rod lens; the lateral magnification of the relay rod-lens system is ±1. The space within the relay rod-lens system is filled with glass, with only a 4 mm to 5 mm air gap between the lenses, so no wavy spacer ring is needed. The eyepiece system comprises a double-concave lens, a double-convex lens and a cemented doublet. The field angle of the objective lens is 35 to 55 degrees, and the working distance is 5 to 20 mm. The beneficial effect is to meet the needs of clinical endoscopic diagnosis and treatment of the nose, ears, throat, uterus, bladder, etc.
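The combined focal length formula from the abstract is the standard two-thin-lens-group formula and can be checked numerically. A minimal sketch; the focal-length and spacing values below are hypothetical, chosen only to illustrate the formula, not taken from the patent:

```python
def combined_focal_length(f1, f2, d):
    """Combined focal length of two lens groups separated by distance d:
    f' = (f1 * f2) / (f1 + f2 - d). Units must be consistent (e.g. mm)."""
    return (f1 * f2) / (f1 + f2 - d)

# Hypothetical example: a strong negative front group (f1 = -3 mm) and a
# positive rear group (f2 = 5 mm) separated by d = 1 mm.
f = combined_focal_length(-3.0, 5.0, 1.0)  # (-3*5) / (-3+5-1) = -15.0 mm
```

Note how the denominator f1 + f2 − d lets the separation d tune the combined power of the objective without changing either group.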
Our client in the engineering industry wanted to establish a water management ecosystem to help the company...

Ensuring a sustainability policy matches best practice
Our client hoped to obtain insights on the strengths and weaknesses of its current practice when compared to...

Identifying technologies to stabilise your chemical formulation
Our client needed to improve the stability of a chemical formulation after alterations were made to enhance its...

Identifying packaging options to meet both functional and sustainability needs
Our client wanted to use paper packaging that was fully recyclable but that also met their specific functional...

Developing global sustainable packaging strategies
Our client wanted to improve the sustainability of its flexible packaging by adapting its strategy to meet the...

About Science Group
Science Group offers independent advisory and leading-edge product development services focused on science and technology initiatives. Its specialist companies Sagentia Innovation, Leatherhead Food Research, TSG Consulting and Frontier Smart Technologies collaborate closely with their clients in key vertical markets to deliver clear returns on technology and R&D investments.
https://www.sagentiainnovation.com/case-studies-archive/?category_filter=361&tag_filter
Moving to a different town can be stressful enough; moving country magnifies the stress. Emigrating can be one of the most exciting times of your life, but it can also be particularly stressful, especially if you find yourself unprepared for what lies ahead. Research and planning are essential to ensure the move to your new destination is as smooth as possible.

Before deciding to emigrate, you should do some research on your intended location. Every country is different. Moving to a new country means you will become immersed in an entirely different culture. If you have not properly prepared for this, it will come as a major shock, as all of the traditions you are familiar with are replaced with new and foreign ones. You should definitely spend some time in the country you are planning to emigrate to. Check out the housing market and job prospects for your trade. [view overseas employment article] Visiting your possible future home will give you the opportunity to experience the cultures and customs of the country. This visit is also valuable for making contacts who can help you find suitable accommodation.

If a pre-move visit is not possible, it is important to research and find as much information as possible about the country. Start by researching information available on the internet, where you will find a broad range of information on any topic you care to learn about. Try participating in an online forum; there may be a UK expat forum or community for that country. This will give you the chance to communicate with people who have visited or are currently living in the country. Take this opportunity to raise any questions or concerns you may have. Learning about the social ideals of your intended destination will optimise your chances of settling in quickly and becoming comfortable with your new environment, as you will be able to identify with new friends and work colleagues.
In the long term this should make adjusting to your new life significantly easier by creating a friendly and supportive environment within your new home. Researching your possible new location is also important for understanding the safety level and required security measures of your particular country.
https://www.deals4homes.co.uk/movinghome/emigrate/index.html
BACKGROUND

Field

Aspects of the example implementations relate to methods, systems and user experiences associated with learning sensory media associations (e.g., audio and/or visual) without the use of text labels.

Related Art

Related art deep learning techniques require large amounts of text-labeled data. The text label data is created by a human labeler for training models. In the related art, the cost of performing the text labeling limits the use of deep learning technology in many real-world situations. For example, using related art deep learning techniques to create a customized product image data set with millions of image labels is tedious and costly, sometimes to the extent that the task becomes prohibitive. Further, creating a detailed description of an image for a video with proper text labels, as is required for related art deep learning techniques, also incurs a great cost in the form of human labelers expending tremendous amounts of time and resources on tasks such as record reviewing and typing. Accordingly, there is an unmet need in related art deep learning technology to collect real-time data and create data sets without the costs and disadvantages associated with text labeling.
SUMMARY

According to aspects of the example implementations, a computer-implemented method of learning sensory media association includes: receiving a first type of nontext input and a second type of nontext input; encoding and decoding the first type of nontext input using a first autoencoder having a first convolutional neural network, and the second type of nontext input using a second autoencoder having a second convolutional neural network; bridging the first autoencoder representations and the second autoencoder representations by a deep neural network that learns mappings between the first autoencoder representations, associated with a first modality, and the second autoencoder representations, associated with a second modality; and, based on the encoding, decoding, and bridging, generating a first type of nontext output and a second type of nontext output from the first type of nontext input or the second type of nontext input, in either the first modality or the second modality.

According to further aspects, the first type of nontext input is audio, and the second type of nontext input is an image. According to other aspects, the audio is sensed by a microphone and the image is sensed by a camera. According to still other aspects, the first type of nontext input is one of audio, image, temperature, touch, and radiation, and the second type of nontext input is another of audio, image, temperature, touch, and radiation. According to yet other aspects, the first type of nontext input and the second type of nontext input are provided to an autonomous robot for training. According to additional aspects, text labels are not used, and the receiving, encoding, decoding, bridging and generating are language-independent.
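The data flow described above can be sketched in a few lines. This is a deliberately tiny, linear stand-in for the claimed architecture: two autoencoders (one per modality) whose latent spaces are connected by a bridge mapping. The specification describes deep convolutional autoencoders and deep bridge networks; here every component is a single random linear layer so the flow stays visible, and all dimensions are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

class LinearAutoencoder:
    """Minimal linear stand-in for one modality's convolutional autoencoder."""
    def __init__(self, input_dim, latent_dim):
        self.enc = rng.normal(scale=0.1, size=(latent_dim, input_dim))
        self.dec = rng.normal(scale=0.1, size=(input_dim, latent_dim))

    def encode(self, x):
        return self.enc @ x   # input -> compact latent representation

    def decode(self, z):
        return self.dec @ z   # latent representation -> reconstruction

audio_ae = LinearAutoencoder(input_dim=64, latent_dim=8)  # "audio" modality
image_ae = LinearAutoencoder(input_dim=49, latent_dim=8)  # "image" modality

# Bridge network: maps audio latents to image latents (a second bridge,
# trained separately, would map image latents back to audio latents).
bridge_a2i = rng.normal(scale=0.1, size=(8, 8))

def audio_to_image(audio_x):
    """Audio input -> audio latent -> bridged image latent -> image output."""
    z_audio = audio_ae.encode(audio_x)
    z_image = bridge_a2i @ z_audio
    return image_ae.decode(z_image)

audio_clip = rng.normal(size=64)          # stands in for a sensed audio clip
generated_image = audio_to_image(audio_clip)  # shape (49,), i.e. a 7x7 "image"
```

Nothing here is trained; the sketch only shows how an input in one modality reaches an output in the other without any text label appearing anywhere in the path.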
According to still further aspects, a third type of nontext input is received; the third type of nontext input is encoded using a third autoencoder having a third convolutional neural network; the third autoencoder is bridged to the first autoencoder and the second autoencoder by the deep neural network, which learns mappings between the third type of representation, associated with a third modality, and the first type of representation and the second type of representation; and a third type of nontext output is generated, without requiring retraining of the first autoencoder, the second autoencoder, the first convolutional neural network or the second convolutional neural network. Example implementations may also include a non-transitory computer readable medium having a storage and processor, the processor capable of executing instructions for assessing whether a patent has a condition.

BRIEF DESCRIPTION OF THE DRAWINGS

The patent or application file contains at least one drawing executed in color. Copies of this patent or patent application publication with color drawing(s) will be provided by the Office upon request and payment of the necessary fee.

FIG. 1 illustrates an example implementation of the system and method.
FIG. 2 illustrates results associated with the example implementation.
FIG. 3 illustrates results associated with the example implementation.
FIG. 4 illustrates results associated with the example implementation.
FIG. 5 illustrates results associated with the example implementation.
FIG. 6 illustrates results associated with the example implementation.
FIG. 7 illustrates results associated with the example implementation.
FIG. 8 illustrates results associated with the example implementation.
FIG. 9 illustrates an example process according to an example implementation.
FIG. 10 illustrates an example computing environment with an example computer device suitable for use in some example implementations.
FIG. 11 shows an example environment suitable for some example implementations.
FIG. 12 illustrates an example implementation that is associated with the application of a robot.

DETAILED DESCRIPTION

The following detailed description provides further details of the figures and example implementations of the present application. Reference numerals and descriptions of redundant elements between figures are omitted for clarity. Terms used throughout the description are provided as examples and are not intended to be limiting.

There is an unmet related art need for tools that permit deep learning technology operations for machine learning of sensory media without requiring text labels. As explained above, related art approaches incur a cost for obtaining text label data, which creates bottlenecks for many data-demanding machine learning tasks. On the other hand, a human can learn cross-media associations without text labels (e.g., a child may learn how to name an object without knowing alphanumeric characters, or a person may learn how to name an object in a language that he or she does not know from an alphanumeric perspective).

Aspects of the example implementations are directed to cross-modal speech-visual association without text labels. While related art approaches may use text as a bridge to connect speech and visual data, the example implementations are directed to machine learning that uses sensory media in a non-textual manner, such as without keyboards. By eliminating text, such as keyboard labelers, there may be various benefits and advantages. For example, but not by way of limitation, machine learning techniques may be performed in a manner that is more natural and more precisely mimics human behavior, and will not be restricted by the related art limitations of keyboard labelers, such as schedule, cost, etc.
As a result, the related art problem of insufficient training data for machine learning tasks may also be alleviated; moreover, a new zone of training data may become available. Further, according to the example implementations, because there is no cost associated with text labeling, or the related complexities, it may be easier for ordinary consumers to train systems in a manner that is not currently possible in related art systems. For example, but not by way of limitation, the example implementations may be useful in helping individuals who are impaired with respect to sight or hearing, such that visual inputs may be provided as audio outputs for individuals with sight impairment, and audio inputs may be provided as visual outputs for individuals with hearing impairment.

According to an example implementation, plural deep convolutional autoencoders are provided. More specifically, one deep convolutional autoencoder is provided for a first nontext domain (e.g., learning speech representations), and another deep convolutional autoencoder is provided for a second nontext domain (e.g., learning image representations). Thus, hidden features can be extracted. The latent spaces of these autoencoders represent compact embeddings of speech and image, respectively. Two deep networks are then trained to bridge the latent spaces of the two autoencoders, which generates robust mappings for both speech-to-image and image-to-speech. Thus, audio can be converted to an image that a user can visualize. With these mappings, an image input can activate a corresponding speech output, and vice versa.

The example implementations associated with the present inventive concept may be employed in various situations. For example, but not by way of limitation, systems may be used to assist individuals with disabilities; further, autonomous robot training may be performed, and machine learning algorithms and systems may be generated that can use a large amount of low-cost training data.
Further, machine learning systems may be employed that are not limited by the related art problems and disadvantages associated with text labelers, such as cost, schedule, etc. In the present example implementations, a machine may be provided with sensors, such as cameras and microphones, which may collect real-time data on a continuous basis, similarly to how a human may sense the same information. Other sensors may be provided, such as thermometers for temperature sensing, pressure-sensitive arrays for making a pressure map to sense touch, radiation sensors, or other sensors associated with sensed parameter information. The collected real-time data is used by the encoder and decoder architecture of the present example implementations. For example, the sensing device may obtain usable data from normal daily activities, as well as from existing videos. Without the related art restriction of having such data labeled by human text labelers, the example implementations can continuously sense and observe information of the environment, and learn from the environment.

FIG. 1 illustrates an example implementation of the architecture 100. More specifically, an audio input 101 and an image input 103 are provided, which may receive information from devices such as microphones and cameras, respectively. The example implementation includes an encoder and decoder architecture, which is used for each of the audio module and the image module, to learn audio representations and image representations, respectively. Through an encoding process 109, an audio output 105 is generated, and through an encoding process 111, an image output 107 is generated. Because the audio module uses audio signals as training inputs and outputs, it does not require text labels to train the deep network. Similarly, the image module uses images as inputs and outputs of the network, and also does not require text labels.
With representations available between each pair of encoder and decoder, one neural network is used to map the audio representation to the image representation, and another neural network is used to map the image representation to the audio representation. According to the present example implementation having the foregoing arrangement and learning parameters, an audio input can activate an audio output as well as an image output. Conversely, an image input can activate an image output as well as an audio output.

More specifically, according to the example implementations, for each of the modalities (two modalities are illustrated in FIG. 1, but the example implementations are not limited thereto, and additional modalities may be provided, as explained herein), the autoencoder includes respective encoder portions which receive the respective inputs, in this case the audio and image modalities, respectively. After several layers of the encoder portions have been applied to the input information, first modality representations and second modality representations are generated.

The first and second modality representations are then provided to the deep neural networks to perform the cross-modal bridging, such as mapping from the first modality representation to the second modality representation, or mapping from the second modality representation to the first modality representation. The sending and receiving of the representations is shown in FIG. 1 by the broken lines extending from the representations.

Further, decoder portions are provided so as to decode the respective first and second modality representations, which include the results of the cross-modal bridging, as explained above.
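The bridging step above amounts to fitting a map between two latent spaces from paired examples. A toy version, with a least-squares linear map standing in for the deep fully connected bridge network the specification describes; all data below is synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

# Paired latent codes: each "audio" latent is paired with an "image" latent.
n_pairs, latent_dim = 200, 8
z_audio = rng.normal(size=(n_pairs, latent_dim))     # audio-side latents
true_map = rng.normal(size=(latent_dim, latent_dim)) # unknown relation
z_image = z_audio @ true_map                         # paired image-side latents

# Fit the audio -> image bridge by least squares; the image -> audio bridge
# would be fit the same way with the roles of the two modalities swapped.
fitted_bridge, *_ = np.linalg.lstsq(z_audio, z_image, rcond=None)

predicted = z_audio @ fitted_bridge
error = np.max(np.abs(predicted - z_image))  # near zero on this linear toy
```

Because the relation here is exactly linear, least squares recovers it; the deep bridge networks in the specification exist to handle the nonlinear relations real latent spaces exhibit.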
After several layers of the decoder portions have been applied to the first and second modality representations, the respective outputs are generated.

The foregoing example implementation may be used with different input-output combinations. For example, but not by way of limitation, when the foregoing architecture does not have information about pairing between an audio input and a learned audio output, the example implementations may feed the input signal to both the input and the output of the audio module, and may use an autoencoder learning procedure to learn the representation. When the pairing information between an audio input and an existing audio output is known, the example implementation may learn to associate the audio input and the existing audio output through the autoencoder. When the audio output and the image output are both available, the example implementation may use both outputs and the audio input for training. A similar approach using the example implementation architecture may also be applied to train the image module.

The example implementations learn relations between images and audio clips. More specifically, pairing information between audio clips and images is presented to the system associated with the example implementations. The pairing according to the example implementations is analogous to the pairing that occurs when one person teaches another person to name an object. Thus, the example implementation provides the machine learning with a more natural learning approach. With the pairing information provided by a teacher of the machine, corresponding parameters in the network shown in FIG. 1 are trained.
More specifically, according to one example implementation, adversarial convolutional autoencoders are used for both the image and audio learning modules. To save low-level feature computation cost and to reduce the number of training parameters, audio inputs are converted to 2-D MFCC representations, which are fed to a convolutional autoencoder. This conversion results in an audio learning module that is very similar to the image learning module. The autoencoder includes seven layers for its encoder and decoder, respectively. However, the present example implementations are not limited thereto, and other numbers of layers may be substituted therefor without departing from the inventive scope. According to the example implementation, a 3×3 convolutional filter is used to process data at each convolutional layer. Without losing input fidelity, the autoencoder compresses the input audio, which according to one example may have 16,384 samples, to 32 dimensions at the autoencoder middle layer. With this 32-dimension representation of the input, the example implementations may reconstruct similar audio, with the decoder, without audible distortions.

With respect to images, the 28×28 handwriting images are reshaped to 784-dimension vectors and fed to image autoencoders. The image autoencoder has five fully connected layers, to reduce the input to a 32-dimension image representation. The 32-dimension image representation may be used to reconstruct the input image with the trained decoder.

FIG. 2 at 200 illustrates spectrograms and images that correspond to different hidden node values located on grids in latent spaces, when two hidden nodes are used. These drawings illustrate data clustering and latent spaces. At 201, an audio learning module output is provided in the form of spectrograms that correspond to different hidden node values. At 203, image learning module output images are provided that correspond to different hidden node values.
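The preprocessing step above turns a 1-D audio clip into a 2-D time-frequency array so the audio module can reuse image-style convolutional layers. A sketch of that idea, using a log-magnitude spectrogram as a simpler stand-in for the MFCC representation the specification names; the frame and hop sizes are hypothetical:

```python
import numpy as np

def spectrogram(signal, frame_len=256, hop=128):
    """Log-magnitude STFT: windowed frames of frame_len samples every hop samples."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    mag = np.abs(np.fft.rfft(frames, axis=1))  # shape (n_frames, frame_len//2 + 1)
    return np.log1p(mag)

clip = np.random.default_rng(2).normal(size=16_384)  # a 16,384-sample clip
spec = spectrogram(clip)                             # 2-D, image-like input

# The autoencoder then squeezes the clip down to a 32-dimension latent code:
compression = 16_384 / 32                            # 512x compression
```

The 512× figure is the point of the middle-layer bottleneck: the 32-dimension code is what the bridge networks operate on, so the cross-modal mapping is learned between compact codes rather than raw waveforms and pixels.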
A two-node latent space is used for visualization, although it may cause information loss and large distortion in the outputs. To avoid such disadvantages and problems, and to keep the audio encoder output distortion small, the example implementations use 32 nodes, both for the audio learning module and for the image learning module. In order to learn mappings between the 32-node audio representation layer and the 32-node image representation layer, two five-layer, 512-node-per-layer fully connected networks are used to learn mappings from audio to image and from image to audio, respectively.

The foregoing example implementation was applied to data in the following illustrative example. The MNIST handwritten digit data set, which has 60,000 training images and 10,000 testing images, and the English spoken digit data set from FSDD, which has three speakers and 1,500 recordings (50 of each digit per speaker), were used as training data for tuning the network parameters.

FIG. 3 illustrates examples of input audio spectrograms, corresponding audio learning module spectrogram outputs, and corresponding output images, which were generated from the audio inputs by the image decoder. When feeding the learning system with audio from different speakers, the image outputs show small variations in the digit outputs.

As shown in FIG. 4 at 400, typical handwriting images and speech-activated images are provided, with image inputs 401 and image outputs 403; the output images may be more recognizable than the input images. This is particularly visible with respect to the digits 6, 7 and 8 as shown in FIG. 4. Additionally, a 512-node latent space autoencoder was tested for both the image-to-image module and the audio-to-audio module, using an adversarial network to learn the mapping from image to audio.
As shown in FIG. 5 at 500, inputs and outputs of the image learning module, and the corresponding audio spectrogram outputs activated by the image input, are shown. The images in FIG. 5 illustrate that the image-to-image module can output images that are more similar to the image inputs, because of the latent space expansion.

FIG. 6 shows test results with the COIL-100 data set, including inputs 601, autoencoder outputs 603 and speech outputs 605. Since the images in this data set are bigger, the convolutional autoencoder is used to extract 512-dimension features for representing the input image. Further, using the Abstract Scene data set, speech information was generated for 10,000 128×128 images. Using the foregoing learning architecture, the image representation layer and the audio representation layer were each scaled up to 1,024 nodes. Similarly, the width of the audio-to-image and image-to-audio mapping networks was increased from 512 to 2,048 nodes, to handle the increased data complexity.

Results of this example are shown in FIG. 7 at 700. More specifically, the first row of FIG. 7 shows ground truth 701, and the second row shows audio-generated images 703.

FIG. 8 shows MFCC coefficients of three speech segments 801, 803, 805 that were generated from images. By asking listeners to listen to the image-activated speech segments, a determination was made as to whether the speech segments were easy to understand.

To enhance training quality, the example implementation may employ a trainer having an ID as a token. For the mode of showing an image and then generating speech, the token may be a random speaker or a specified one. On the other hand, for the mode of speaking and then generating an image, the results should be independent of speaker, such that the example implementation may operate according to one or more of the following options. According to one example implementation, separate encoder-decoder models may be trained for the two cases.
In other words, one of the encoder-decoder models may be speaker-independent (directed to speech-to-image), and the other encoder-decoder model may use a token (directed to image-to-speech). According to another example implementation, a combined model may be trained, which uses tokens, and which also has a token set ID for all speakers. This combined model would train on each utterance twice. Alternatively, if there is a large quantity of data, utterances may be randomly assigned to either the speaker token or the "everyone" token. According to yet another example implementation, a speaker ID may be used. However, according to this example implementation, the speakers the system pays attention to may be limited to those having a speaker ID. This approach may be useful in certain circumstances, for example at airports, where an official may be attempting to match an individual to a photograph, and a more precise and quick determination may be made where there is a dialect sensor and a speaker ID associated with the individual. Using this approach, clustering in the audio module may be performed in an easier and cleaner manner.

The example implementations described herein may have various implementations and applications. As explained above, aspects of the example implementations may be used to build systems that may assist people with disabilities, especially those who may benefit from providing a visual or audio output that does not involve typing or entering information from a keyboard or mouse, which may require fine motor skills. Further, the example implementations may also be useful in fields such as autonomous robot training, which requires the robot to learn about the audio and visual environment in a manner similar to a human, so as to be able to perform safely and efficiently in the environment.
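The combined-model token options above reduce to a small routing rule per utterance. A sketch, where the token name "everyone" and the 50/50 assignment probability are hypothetical illustrations, not values from the specification:

```python
import random

EVERYONE = "everyone"  # hypothetical name for the shared all-speakers token

def tokens_for_utterance(speaker_id, large_data=False, p_speaker=0.5,
                         rng=random.Random(0)):
    """Return the token(s) under which one utterance is trained.

    Small data: train the utterance twice, once under the speaker's own
    token and once under the shared token. Large data: randomly assign
    the utterance to exactly one of the two tokens instead.
    """
    if not large_data:
        return [speaker_id, EVERYONE]
    return [speaker_id if rng.random() < p_speaker else EVERYONE]

small_data_tokens = tokens_for_utterance("speaker_7")                  # two passes
large_data_tokens = tokens_for_utterance("speaker_7", large_data=True) # one pass
```

The design choice being illustrated: the shared token gives the speech-to-image direction speaker-independent training signal, while the per-speaker token preserves speaker identity for the image-to-speech direction.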
Further, the example implementations may be directed to machine learning algorithms and/or systems that need a large amount of low-cost training data, as well as machine learning systems that are not intended to be limited by text labeling limitations, such as schedule, cost, etc. According to one example implementation, a language-independent device may be trained to assist a person with a hearing disability in determining the object of conversation by others around the person, or to use speech to tell a person who is visually impaired about the physical surroundings of his or her environment. Because text is not used in the present example implementations, the training system is also language-independent, and can be used across countries, cultures and languages. Because the example implementations may include pluralities of sensors that are connected to a common network, users in the same region and speaking the same language may be able to train the system in a common manner.

According to another example implementation related to autonomous robot training, the example approach is advantageous over a shared latent space, or function-bounded latent spaces. More specifically, according to the example implementations, the de-coupling of latent spaces allows users to add more modalities to a machine at a later time without having the new modalities impact the old, learned modalities. Instead, according to the example implementations, the new modalities will learn by themselves, and gradually build more connections with the old modalities. For example, but not by way of limitation, the autonomous robot may initially have a sensor directed to visual aspects, such as a camera, and another sensor directed to audio aspects, such as a microphone. However, the user may wish to add additional sensors directed to other modalities, such as temperature, touch, radiation or other parameters that may be sensed in an environment.
Those new modalities can be added to the example implementations without impacting the already present modalities (e.g., visual and audio), in a manner that cannot be accomplished in the related art. Further, the robots may permit learning associated with environments in which human operation is difficult, such as the deep sea, outer space, or the like. According to one example implementation associated with a modality of touch, a robot may be taught how to grab an object, such as a bottle or glass. The robot may learn from its own training data associated with touch, to determine whether the object is being gripped with too little force or too much force. Because there is no text labeling concept, the robot may use its own output as a sensed input, or may learn from previously provided human training data.

FIG. 9 illustrates an example process 900 according to the example implementations. The example process 900 may be performed on one or more devices, as explained herein.

At 901, nontext inputs of various types are received from sensing devices. For example, but not by way of limitation, an audio input may be received from a microphone as one type of nontext input, and an image input may be received from a camera as another type of nontext input. The example implementations are not limited to just two types of nontext inputs, and other nontext inputs, such as temperature, touch, radiation, video, or any other input that is capable of being sensed, may be included according to the example implementations.

At 903, auto-encoding and decoding are performed for each of the types of nontext inputs for which inputs have been received. The auto-encoding and decoding may be performed using convolutional neural networks, for example. Thus, an audio input that was received from the microphone may be encoded by an autoencoder, and an image input that was received from the camera may be encoded by another autoencoder.
The deep convolutional autoencoders that learn each of the respective types of nontext input representations may be used to generate outputs. At 905, deep networks are used to bridge the latent spaces of the two deep convolutional autoencoders used at 903. More specifically, deep neural networks that learn mappings between the first modality representations and the second modality representations are used to bridge the latent space between the autoencoder representations of the first type and the autoencoder representations of the second type. For example, but not by way of limitation, the deep networks are provided such that inter-conversion can be performed between inputs of an audio type and outputs of an image type, or vice versa. When an audio output and an image output are both available, the example implementation may use both the audio output and the image output with the audio input for training; a similar approach may be taken with respect to the image input, when available. When pairing information is not available, autoencoder training can be performed using historical data. At 907, based on the encoding, decoding, and the bridging, appropriate outputs, including a first type of nontext output and a second type of nontext output, are generated for nontext inputs in either the first modality or the second modality. For example, an audio learning module output spectrogram or output images corresponding to various hidden note values may be provided as outputs. Examples of inputs and outputs are illustrated in the foregoing drawings, and are also explained above in the description of the example implementations. FIG. 10 illustrates an example computing environment 1000 with an example computer device 1005 suitable for use in some example implementations. 
Computing device 1005 in computing environment 1000 can include one or more processing units, cores, or processors 1010, memory 1015 (e.g., RAM, ROM, and/or the like), internal storage 1020 (e.g., magnetic, optical, solid state storage, and/or organic), and/or I/O interface 1025, any of which can be coupled on a communication mechanism or bus 1030 for communicating information or embedded in the computing device 1005. Computing device 1005 can be communicatively coupled to input/interface 1035 and output device/interface 1040. Either one or both of input/interface 1035 and output device/interface 1040 can be a wired or wireless interface and can be detachable. Input/interface 1035 may include any device, component, sensor, or interface, physical or virtual, which can be used to provide input (e.g., buttons, touch-screen interface, keyboard, a pointing/cursor control, microphone, camera, braille, motion sensor, optical reader, and/or the like). Output device/interface 1040 may include a display, television, monitor, printer, speaker, braille, or the like. In some example implementations, input/interface 1035 (e.g., user interface) and output device/interface 1040 can be embedded with, or physically coupled to, the computing device 1005. In other example implementations, other computing devices may function as, or provide the functions of, an input/interface 1035 and output device/interface 1040 for a computing device 1005. Examples of computing device 1005 may include, but are not limited to, highly mobile devices (e.g., smartphones, devices in vehicles and other machines, devices carried by humans and animals, and the like), mobile devices (e.g., tablets, notebooks, laptops, personal computers, portable televisions, radios, and the like), and devices not designed for mobility (e.g., desktop computers, server devices, other computers, information kiosks, televisions with one or more processors embedded therein and/or coupled thereto, radios, and the like). 
Computing device 1005 can be communicatively coupled (e.g., via I/O interface 1025) to external storage 1045 and network 1050 for communicating with any number of networked components, devices, and systems, including one or more computing devices of the same or different configuration. Computing device 1005 or any connected computing device can be functioning as, providing services of, or referred to as, a server, client, thin server, general machine, special-purpose machine, or another label. For example, but not by way of limitation, network 1050 may include the blockchain network, and/or the cloud. I/O interface 1025 can include, but is not limited to, wired and/or wireless interfaces using any communication or I/O protocols or standards (e.g., Ethernet, 802.11xs, Universal Serial Bus, WiMAX, modem, a cellular network protocol, and the like) for communicating information to and/or from at least all the connected components, devices, and network in computing environment 1000. Network 1050 can be any network or combination of networks (e.g., the Internet, local area network, wide area network, a telephonic network, a cellular network, satellite network, and the like). Computing device 1005 can use and/or communicate using computer-usable or computer-readable media, including transitory media and non-transitory media. Transitory media includes transmission media (e.g., metal cables, fiber optics), signals, carrier waves, and the like. Non-transitory media includes magnetic media (e.g., disks and tapes), optical media (e.g., CD ROM, digital video disks, Blu-ray disks), solid state media (e.g., RAM, ROM, flash memory, solid-state storage), and other non-volatile storage or memory. Computing device 1005 can be used to implement techniques, methods, applications, processes, or computer-executable instructions in some example computing environments. 
Computer-executable instructions can be retrieved from transitory media, and stored on and retrieved from non-transitory media. The executable instructions can originate from one or more of any programming, scripting, and machine languages (e.g., C, C++, C#, Java, Visual Basic, Python, Perl, JavaScript, and others). Processor(s) 1010 can execute under any operating system (OS) (not shown), in a native or virtual environment. One or more applications can be deployed that include logic unit 1055, application programming interface (API) unit 1060, input unit 1065, output unit 1070, non-text input unit 1075, non-text output unit 1080, the encoder/decoder and cross-media neural network unit 1085, and inter-unit communication mechanism 1095 for the different units to communicate with each other, with the OS, and with other applications (not shown). For example, the non-text input unit 1075, the non-text output unit 1080, and the encoder/decoder and cross-media neural network unit 1085 may implement one or more processes shown above with respect to the structures described above. The described units and elements can be varied in design, function, configuration, or implementation and are not limited to the descriptions provided. In some example implementations, when information or an execution instruction is received by API unit 1060, it may be communicated to one or more other units (e.g., logic unit 1055, input unit 1065, non-text input unit 1075, non-text output unit 1080, and the encoder/decoder and cross-media neural network unit 1085). For example, the non-text input unit 1075 may receive and process inputs such as images and sounds, and via processing of the encoder/decoder and cross-media neural network unit 1085 (e.g., using the foregoing, especially as disclosed above with respect to FIGS. 2 and 5), generate a respective image or sound output at the non-text output unit 1080. 
In some instances, the logic unit 1055 may be configured to control the information flow among the units and direct the services provided by API unit 1060, input unit 1065, non-text input unit 1075, non-text output unit 1080, and encoder/decoder and cross-media neural network unit 1085 in some example implementations described above. For example, the flow of one or more processes or implementations may be controlled by logic unit 1055 alone or in conjunction with API unit 1060. FIG. 11 shows an example environment 1100 suitable for some example implementations. Environment 1100 includes devices 1105-1145, and each is communicatively connected to at least one other device via, for example, network 1160 (e.g., by wired and/or wireless connections). Some devices may be communicatively connected to one or more storage devices 1130 and 1145. An example of one or more devices 1105-1145 may be computing device 1005 described in FIG. 10, respectively. Devices 1105-1145 may include, but are not limited to, a computer 1105 (e.g., a laptop computing device) having a monitor and an associated webcam as explained above, a mobile device 1110 (e.g., smartphone or tablet), a television 1115, a device associated with a vehicle 1120, a server computer 1125, computing devices 1135-1140, and storage devices 1130 and 1145. In some implementations, devices 1105-1120 may be considered user devices associated with the users of the enterprise. Devices 1125-1145 may be devices associated with service providers (e.g., used by the external host to provide services as described above and with respect to the various drawings, and/or store data, such as webpages, text, text portions, images, image portions, audios, audio segments, videos, video segments, and/or information thereabout). FIG. 12 illustrates an example implementation 1200 that is associated with the application of a robot. More specifically, at 1201, a robot is represented. 
The robot 1201 may include a sensor 1203 that is coupled either by direct connection or wireless communication to provide input to the robot. Plural sensors may be provided, each being associated with one or more modalities. A storage is also provided that includes instructional information associated with the present example implementations, such as executable computer instructions, as well as data received from the sensor 1203. At 1205, a processor, such as a microprocessor or CPU, is provided that receives the instructions and data from the storage, which may be located remotely or within the robot. Further, it is noted that the sensor 1203 may also directly provide data to the processor 1205, either remotely or within the robot. The processor 1205 performs the various operations described in the foregoing example implementations, and generates output commands and data. The output commands and data may be provided, for example, to a player at 1207 that outputs information in one or more modalities, as well as to a device at 1209 that performs an action, such as a motor or the like. While the drawing of FIG. 12 shows communication by way of a network, the elements shown therein may be directly connected to one another without departing from the inventive scope, such as by using the internal circuitry of the robot. The foregoing example implementations may have various advantages and benefits over the related art. For example, but not by way of limitation, related art approaches to machine learning have been explored for style transfer within a single modality, but for cross-sensory media associations the related art only employs text labeling as a bridge. The example implementations take advantage of the advancement and wide adoption of IoT-type sensors, such as cameras and microphones, to provide a novel way of associating audiovisual sensory data, without requiring text labels. 
Further, related art approaches exist that convert speech to text, and use the text to retrieve images. However, speech-to-text requires a predefined speech recognition engine, while the foregoing example implementations do not require a pre-existing speech engine for a machine to learn. Related art approaches that require pre-existing speech engines also create difficulties for the machine learning to be performed directly from the sensory data. Additionally, and in contrast to related art approaches that use a common latent space for images and speech, the example implementations are directed to the use of a mapping between two embeddings. More specifically, using a common latent space, as in the related art, requires the system to replace the respective separate latent spaces with a single shared latent space, which in turn increases the dimensionality of the manifold substantially, and further, introduces an objective function to force the two separated spaces close to each other. This related art approach may also create interference between different modalities. By using the present example implementation, which includes a learning structure that is directed to decoupled learning of each modality, and generating nonlinear modality links separately, the related art problems and disadvantages associated with modality interference are avoided, while the example implementation may continue to learn nonlinear relations between two modalities. Additionally, the example implementations also differ from related art approaches that only involve data from one modality, such as text, by building bridges between two different modalities, such as images and audio. Thus, the example implementations are able to handle data with asymmetric dimensions and structures across two modalities, which the related art solutions cannot handle. 
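The decoupling argument above can be made concrete. The sketch below is illustrative only, under simplifying assumptions (linear PCA-style codecs stand in for the deep autoencoders, least-squares maps for the bridges, and the data and modality names are synthetic): adding a third modality trains only its own codec and its new bridges, leaving the previously learned representations untouched.

```python
import numpy as np

rng = np.random.default_rng(1)

class ModalityHub:
    """Illustrative sketch of decoupled per-modality latent spaces joined
    by pairwise bridges (not the actual patented architecture)."""
    def __init__(self):
        self.codecs = {}    # modality name -> (mean, principal components)
        self.bridges = {}   # (src, dst) -> least-squares latent-space map

    def encode(self, name, X):
        mean, comps = self.codecs[name]
        return (X - mean) @ comps.T

    def add_modality(self, name, X, latent_dim, paired_with=None):
        # Fit this modality's own codec, in isolation.
        mean = X.mean(axis=0)
        _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
        self.codecs[name] = (mean, Vt[:latent_dim])
        # Fit only the NEW bridges to already-present modalities;
        # nothing previously learned is modified.
        for other, (x_new, x_old) in (paired_with or {}).items():
            z_new, z_old = self.encode(name, x_new), self.encode(other, x_old)
            self.bridges[(name, other)] = np.linalg.lstsq(z_new, z_old, rcond=None)[0]
            self.bridges[(other, name)] = np.linalg.lstsq(z_old, z_new, rcond=None)[0]

# Two initial modalities sharing 3 hidden factors.
factors = rng.normal(size=(300, 3))
audio = factors @ rng.normal(size=(3, 16))
image = factors @ rng.normal(size=(3, 40))
hub = ModalityHub()
hub.add_modality("audio", audio, 3)
hub.add_modality("image", image, 3, paired_with={"audio": (image, audio)})

# Later, a hypothetical "temperature" sensor is added.
audio_codec_before = hub.codecs["audio"][1].copy()
temperature = factors @ rng.normal(size=(3, 8))
hub.add_modality("temperature", temperature, 3,
                 paired_with={"audio": (temperature, audio),
                              "image": (temperature, image)})

# The old modality's learned representation is untouched.
print(np.array_equal(audio_codec_before, hub.codecs["audio"][1]))   # True
```

By contrast, a single shared latent space would have to be refit whenever a modality is added, perturbing every existing representation.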
Further, use of lookup tables instead of a neural network approach is not an option: a lookup table cannot achieve the same function as the CNN-based autoencoders explained above, because the space and storage limitations of a lookup table would make such an approach memory-inefficient. Although a few example implementations have been shown and described, these example implementations are provided to convey the subject matter described herein to people who are familiar with this field. It should be understood that the subject matter described herein may be implemented in various forms without being limited to the described example implementations. The subject matter described herein can be practiced without those specifically defined or described matters or with other or different elements or matters not described. It will be appreciated by those familiar with this field that changes may be made in these example implementations without departing from the subject matter described herein as defined in the appended claims and their equivalents.
Abstract: As post-secondary institutions continue to expand online offerings, increased numbers of students are enrolling in online doctoral programs. The results of this study can guide the development of retention strategies for students who are at risk of academic failure and who might ultimately drop from online doctoral programs. The expansion of online programs and student enrollment continues throughout post-secondary education at all levels, including the doctoral level. The number of online students increased by approximately 1 million to 5.6 million in fall 2009, an increase of 21% (Bolliger & Halupa, 2012). Although online doctoral programs are gaining popularity, student persistence remains comparable to the 50% undergraduate retention rate. It is unclear which factors contribute to student persistence at the doctoral level; however, faculty are deemed integral contributors to the support of doctoral students. Yet, despite mentoring, increased academic support systems, and implementation of retention strategies, student retention at the doctoral level remains near 50% (Stallone, 2009). Bolliger and Halupa (2012) noted that students have been opting for online education because the courses tend to fit their busy lifestyles. They analyzed 84 first-course doctoral students in four areas related to anxiety and satisfaction and explored student anxiety over the use of the computer, the Internet, and online course delivery, as the literature supported each of these as potential areas of anxiety for students. Anxiety-provoking experiences reported by the students included a lack of information literacy, reflected in an inability to navigate the Internet and locate appropriate resources. Bolliger and Halupa (2012) noted a correlation between higher levels of satisfaction and reduced anxiety. 
Enhanced student orientation, student-centered approaches, and planned interventions to lessen student apprehension were recommended. Faculty often lack the online experience and literacy skill sets needed, in their interactions with students in online doctoral programs, to generate increased student satisfaction. Although the study did not directly examine faculty/student time in the online course room, the relationship between satisfaction and anxiety may pertain to this study and to issues of student retention. Doctoral curricula have been studied to identify how to strengthen and guide individuals in doctoral programs (Kumar, Dawson, Black, Cavanaugh, & Sessums, 2011) through the application of research-based knowledge and the linking of context-based knowledge to enhance and improve practice (Shulman, Golde, Conklin, Bueschel, & Garabedian, 2006). Stallone (2009) assessed four characteristics associated with doctoral student retention: (a) persistence, (b) cultural diversity, (c) psychological characteristics, and (d) college engagement. Stallone (2009) noted that psychological factors are the most frequently identified cause of student attrition, and that human-quality factors related to cultural diversity sensitivity are what assisted students in achieving doctoral success. Kumar et al. (2011) reported that 94% of doctoral students agreed that their expectations were met during the initial year of their doctoral training, with most students identifying faculty members’ support as the key ingredient of doctoral student persistence. Understanding the skills and relevant experiences faculty need for successful online doctoral studies is significant for administrators who seek to increase online doctoral persistence. Green et al. 
(2009) reported that previous studies had identified motivating factors that enhanced faculty retention: flexible hours, innovative pedagogy, acquiring new technological skills, and expanding faculty career opportunities. The authors also reported unfavorable aspects, including the added time and effort required to teach online courses, lack of monetary compensation, limited organizational support structures, faculty inexperience, and faculty members’ lack of technological skills. The greater problem was the perceived lack of vision by administration for online education. Green et al. informed this study as it related to population demographics, such as faculty years of experience teaching online, gender, and others. Encouraging factors noted by these researchers included mentoring, continual training, collaboration with on-site faculty, and enhanced engagement within the organizational community of the college. Lovitts (2009) discussed students not being prepared to make the transition from student to independent scholar. For example, in the first years of doctoral programs, students begin to deal with isolation, and further development depends on connections made with those who support or understand the student. The researchers concluded that online faculty would benefit from assistance in instructional design, added training, and early mentorship for new online faculty to reduce online faculty turnover. Seaman (2009) found the faculty delivering online courses to be both experienced and novice, part-time and full-time. The top-ranking concern among faculty surveyed was that online course preparation required more time than conventional classroom delivery. The data also indicated that faculty needed assistance with support and incentives. Only one third of the surveyed faculty had taught an online course, and even fewer were currently online instructors at the time of the inquiry. 
The faculty paradoxically expressed some concerns about online programming, while most had at some time recommended it as a viable option to students. The contradictory nature of the faculty responses reflected the distance of administration from the unique support needs of online programming at their colleges. Seaman (2009) reported that faculty who had never taught an online course viewed online student outcomes as inferior, whereas faculty who had taught an online course found student outcomes as good as or superior to those of traditionally taught courses. All faculty surveyed identified the lack of support services for online programming (Seaman, 2009). Mentoring, according to Columbaro (2009), retained all of its benefits in the virtual environment with few of its historical limitations. Columbaro (2009) contended that exemplary professors could mentor doctoral students and prepare them for professional challenges in the real world. She explained that mentorship was essential to the relational quality between professor and student and to preparing students for professional placement. The students who were unengaged in mentoring needed to be motivated, a significantly different problem. Online faculty had few incentives to reinforce student productivity. Student productivity in the online environment was addressed by experienced online faculty in several different ways. Meyer and McNeal (2011), in a qualitative study, interviewed 10 online faculty to determine what methods maximized student productivity in the online environment. Faculty reported that the pedagogical methods that increased student productivity were creating relationships, engaging students, responding in a timely manner, communicating at planned intervals, reflecting on assignments, organizing the course structure well, applying technology, remaining adaptable, and holding the highest expectations for the student. 
The literature indicates student persistence is negatively impacted by anxiety and positively affected by faculty presence that contributes to student satisfaction (Baltes et al., 2010; Bolliger & Halupa, 2012; Kumar et al., 2011; Stallone, 2009). Faculty status, training, incentives, and experience contributed significantly to both faculty and student retention (Green, Alejandro, & Brown, 2009; Lee et al., 2010; Seaman, 2009). The literature review indicated that course room time requires intensive instructional design that is best accomplished collaboratively with other faculty and modeled after institutional mentoring practices (Columbaro, 2009; Meyer & McNeal, 2011). Gaps in the research on online doctoral students’ persistence have common features: none of the studies included actual measurement of time spent online, frequency of faculty contacts, or correlation to course outcomes. These measures would provide good markers of student progress, persistence, and engagement throughout a doctoral-level course. If such indicators could be benchmarked, experienced online instructors could reasonably use them throughout a given course as flags warranting potential intervention. In online doctoral programs, the completion of the coursework has become a challenge and concern. The purpose of this research was to determine whether a correlation exists between faculty and student time spent in online doctoral course rooms and student persistence. Research Questions The following questions guided this study: (a) Is there a statistically significant correlation between faculty time in the Educational Leadership and Instructional Design and Technology doctoral online course rooms and doctoral student persistence? (b) Is there a statistically significant correlation between student time in the Educational Leadership (EDL) and Instructional Design and Technology (IDT) doctoral online course rooms and doctoral student persistence? 
Method The study was quantitative, using archived data (ex post facto) to determine whether a correlation existed between the dependent variable, student persistence, and the independent variables, faculty and student time spent in doctoral online course rooms. The data were collected from the Educational Leadership (EDL) and Instructional Design and Technology (IDT) 3-credit courses from 2009 to 2012, at a Level 6, not-for-profit institution in South Florida. The IDT program began in 2012. Population Students enrolled in the EDL and IDT PhD programs represent diverse backgrounds and locations. Thirty states are represented, as are China, Ghana, and Puerto Rico. Racial distribution is equally diverse, with 46% of the students being White, 38% African American, 11% Hispanic, 1% Native Hawaiian, and 4% Other. Ages of the students range from 27 to 81 years; 67% are female and 33% are male. Fifty-five percent of the students are married, 25% are single, 17% are divorced, and 3% are separated. Although the main research questions were concerned only with the relationship between faculty/student time in courses and student retention, additional analysis was completed on demographic information and any statistically significant items of interest. Faculty teaching in the EDL and IDT online doctoral programs have varied online teaching experience, ranging from 1 to 10 years. All hold a terminal degree in the content area related to the courses they teach. Procedures Archived data consisted of 1782 records of students who took online doctoral classes in the EDL and IDT programs. Students could have taken more than one course; therefore, individual students were identified to better determine the number of courses each student took. The data were aggregated (collapsed) to a single case per individual to determine the average amount of time each student spent in all course work. This was done to ensure independence of cases. 
To conduct an independent-samples t-test, the assumption of that statistical test, independent samples, needed to be met. Because the same students appear more than once in the data set, with most students having taken multiple courses, a correlation was not initially conducted: the cases were not independent of one another. For example, student A could appear as many as 17 times in the data set, which makes students highly correlated with themselves. In addition, the amounts of time a student spent from the first class through the 17th class were going to be very similar. There were 179 persisters and 69 students who dropped, weighting the data to favor persisters, which violates the assumption of independence. The data were then collapsed so that each student appears in the data only once, and an average was determined. The learning management system (LMS) provides the total number of minutes students and faculty spend online in the course room. The total time spent in all classes was divided by the number of courses taken. Persisters’ time in courses was compared to that of students who dropped out of the program, allowing the independence assumption to be satisfied. The instructors’ minutes spent in the course and the students’ minutes spent in the course created means called I Average Time and S Average Time. Instructor time spent offline was not calculated. The files were each collapsed to create a unique, aggregated file that consisted of 260 students. The 260 students took between 1 and 17 courses. Of those 260 students, 197 persisted and 63 dropped at some point (coded as 1 and 0, respectively). 
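The collapsing step described above can be illustrated with a short script. The records, field layout, and numbers below are hypothetical, not the study's actual data:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical LMS records, one row per (student, course):
# (student_id, minutes logged in the course room, dropped flag).
records = [
    ("A", 520, 0), ("A", 610, 0), ("A", 455, 0),   # persister, 3 courses
    ("B", 300, 1), ("B", 280, 1),                  # dropped, 2 courses
    ("C", 700, 0),                                 # persister, 1 course
]

# Collapse to one row per student: total minutes across all courses
# divided by the number of courses taken.
minutes_by_student = defaultdict(list)
dropped = {}
for sid, minutes, flag in records:
    minutes_by_student[sid].append(minutes)
    dropped[sid] = flag

avg_minutes = {sid: mean(m) for sid, m in minutes_by_student.items()}
print(avg_minutes["B"], dropped["B"])   # 290 1
```

After this step each student contributes exactly one case, so the independence assumption of the subsequent t-test can be met.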
An independent-samples t-test was selected for the analysis because it allows comparison between two dichotomous groups, to find out whether those coded as 1 (persisters) and those coded as 0 (non-persisters) differed significantly in the average amount of time they spent in their courses. Results An independent-samples t-test was conducted to compare the average amount of time individual instructors in an online doctoral program spent in course rooms and the persistence or non-persistence of their students in the program. These results were highly significant at the 0.001 alpha level, suggesting, counterintuitively, that the students who did not persist had, on average, instructors who spent significantly longer amounts of time in the courses than those who persisted. A significant difference existed in the scores for Instructor Average Amount of Time in Courses between students who persisted (M = 4.2, SD = 1.3) and those who did not (M = 9516, SD = 2628); t(257) = 4.565, p = 0.000. An independent-samples t-test was also conducted to compare the average amount of time individual students in an online doctoral program spent in course rooms and their persistence in the program. There was no significant difference in the scores for Student Average Amount of Time in Courses between students who did not persist (M = 4397, SD = 3048) and those who did (M = 5187, SD = 3049); t(257) = -1.780, p = 0.076. These results suggest that the time students spend in online courses does not play a role in persistence. Specifically, there was no statistically significant difference between persisting and non-persisting students in the average amount of time spent in their online courses at the .05 alpha level. The difference was significant at an alpha level of .1, suggesting that those who dropped out of the program spent less time, on average, than those who persisted. 
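For reference, the statistic reported above (e.g., t(257) = 4.565) follows the standard pooled-variance formula for an independent-samples t-test. The sketch below implements that formula with made-up numbers, not the study's data:

```python
from math import sqrt
from statistics import mean, variance

def independent_t(a, b):
    """Pooled-variance (Student's) independent-samples t statistic and
    its degrees of freedom, as used to compare the two groups above."""
    n1, n2 = len(a), len(b)
    sp2 = ((n1 - 1) * variance(a) + (n2 - 1) * variance(b)) / (n1 + n2 - 2)
    t = (mean(a) - mean(b)) / sqrt(sp2 * (1 / n1 + 1 / n2))
    return t, n1 + n2 - 2

# Hypothetical average-time values for two small groups
# (coded 1 = persisted, 0 = dropped); purely illustrative:
persisters = [1, 2, 3, 4, 5]
dropped = [2, 3, 4, 5, 6]
t, df = independent_t(persisters, dropped)
print(round(t, 6), df)   # -1.0 8
```

The resulting t is then compared against the t distribution with n1 + n2 - 2 degrees of freedom at the chosen alpha level (here, .05 or .1).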
Additional Analysis None of the additional variables of interest, including faculty gender, faculty full-time vs. part-time status, faculty years of experience, or average time faculty spent in the course per student, were correlated with student persistence. The proportion of students who persisted (PropPerst, the dependent variable) was computed by taking the total number of students a particular faculty member had taught in an online course during the study period (ranging from 5 to 213 students) and calculating the proportion of students who persisted compared to those who dropped from the program (ranging from .6, or 60% persisted, to 1, or 100% of that faculty member’s students persisted). Table 2 Additional Variables Discussion The EDL and IDT doctoral course rooms revealed a statistically significant correlation between faculty time in course rooms and students who did not persist. Interestingly, the results do not reflect the current thinking that faculty are productive and available while logged on. Meyer and McNeal’s (2011) study indicated that, regardless of discipline, faculty effectively used access to content and their faculty role, increased their interaction, encouraged student effort, required real-world applications, and stressed time usage consistently over their courses. Although the two studies seem contradictory, the question remains: what do faculty do when logged on? Some faculty might grade lengthy papers while logged on; others might download all papers and grade while offline. Others might be answering phone calls and e-mail while logged on. The data indicated that the more experienced faculty spent more time logged into their online classes. Seidman (2005) asserted that faculty members have the most influence on the attitudes of students, and therefore the “greatest effect on retention” (p. 223). The current study indicates that faculty time online alone is not a factor in student persistence. 
The results of this study can guide the development of retention strategies for students who are at risk of academic failure and who might ultimately drop from online doctoral programs. The findings revealed that student time online in EDL and IDT online courses at the doctoral level was not a significant factor in student persistence, and time was not a predictor for students who might drop out. A limitation of this study was the inability to gauge how students used their time when logged into the EDL and IDT online course rooms. Some students might prefer to download materials and work offline, which results in fewer minutes counted as “online” compared to students who are logged in while reading, writing, or simply away from their computers. Another consideration is the computer expertise of students. Because many students have not taken online courses and/or have been out of school for many years, learning to maneuver in the course room and in the programs needed to complete assignments might have added to the login time. Time logged in doctoral online courses is only one piece of the retention puzzle. Other factors mitigate student decisions to drop out (see Table 2). Summary Retention rates in PhD programs have gained increased attention (Cassuto, 2010). The focus has been on the dissertation stage, not the coursework (Cassuto, 2010). All students, regardless of interventions and best practices offered, are at risk of dropping out. Although connectedness to the faculty and the university has a positive influence on retention rates (Seidman, 2005), student time logged into the EDL and IDT doctoral programs was not a factor in persistence. However, faculty time logged in had a negative association: more time corresponded to higher dropout rates. Suggestions for future research might include a qualitative study exploring what faculty and students do while logged into online course rooms. References Baltes, B., Hoffman-Kipp, P., Lynn, L., & Weltzer-Ward, L. (2010). 
Students' research self-efficacy during online doctoral research courses. Contemporary Issues in Education Research, 3(3), 51-58. Retrieved from http://journals.cluteonline.com/index.php/CIER

Bolliger, D. U., & Halupa, C. (2012). Student perceptions of satisfaction and anxiety in an online doctoral program. Distance Education, 33(1), 81-98. Retrieved from http://www.tandfonline.com/toc/cdie20/current#.U2kGxocx-Uk

Cassuto, L. (2010, October). Advising the dissertation student who won't finish. The Chronicle of Higher Education. Retrieved from http://chronicle.com/article/Advising-the-Dissertation/124782/

Columbaro, N. L. (2009). e-Mentoring possibilities for online doctoral students: A literature review. Adult Learning, 20(3), 9-15. Retrieved from http://www.aaace.org/adult-learning-quarterly

Green, T., Alejandro, J., & Brown, A. H. (2009). The retention of experienced faculty in online distance education programs: Understanding factors that impact their involvement. International Review of Research in Open and Distance Learning, 10(3), 1-16. Retrieved from http://www.irrodl.org/index.php/irrodl

Holmes, B. D., Robinson, L., & Seay, A. (2010). Getting to finished: Strategies to ensure completion of the doctoral dissertation. Contemporary Issues in Education Research, 3(7), 1-8. Retrieved from http://journals.cluteonline.com/index.php/CIER

Kumar, S., Dawson, K., Black, E. W., Cavanaugh, C., & Sessums, C. D. (2011). Applying the community of inquiry framework to an online professional practice doctoral program. International Review of Research in Open & Distance Learning, 12(6), 126-142. Retrieved from http://www.irrodl.org/index.php/irrodl

Lee, D., Paulus, T. M., Loboda, I., Phipps, G., Wyatt, T. H., Myers, C. R., . . . Mixer, S. J. (2010). A faculty development program for nurse educators learning to teach online. TechTrends: Linking Research & Practice to Improve Learning, 54(6), 20-26. Retrieved from http://dupress.com/periodical/trends/tech-trends-2014/

Meyer, K.
A., & McNeal, L. (2011). How online faculty improve student learning productivity. Journal of Asynchronous Learning Networks, 15(3), 37-53.

Seaman, J. (2009). Online learning as a strategic asset. Volume II: The paradox of faculty voices: Views and experiences with online learning. Results of a national faculty survey, part of the online education benchmarking study conducted by the APLU-Sloan National Commission on Online Learning. Washington, DC: Association of Public and Land-Grant Universities and Babson Survey Research Group. Retrieved from http://www.aplu.org/document.doc?id=1879

Seidman, A. (2005). College student retention. Westport, CT: Praeger Publishers.
Process facilities are used in various industries, such as petroleum or chemical refining, pharmaceuticals, pulp and paper, and other manufacturing operations. These facilities use process control systems that include various field devices to measure and sense process parameters. Field devices can include tank level gauges, temperature sensors, pressure sensors, valve controllers, actuators, and other devices; a process facility can use tens or hundreds of them to monitor and control its processes. Field devices require calibration at regular intervals, as prescribed by the manufacturer, in order to maintain accurate measurements and function properly. If a field device is not calibrated, the process data it measures may be inaccurate, which can affect the quality of the process. Calibration can be performed as scheduled maintenance at intervals based on manufacturer recommendations or on the criticality of the process where the instrument is used. However, detecting a field device that has gone out of calibration during operation is difficult; out-of-calibration measurement values are particularly hard to identify while the device is in active use in a process.
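The passage above does not describe a specific detection method, but one common approach to flagging a suspect device during operation is to compare its readings against a redundant, recently calibrated reference and check the mean offset against the manufacturer's stated tolerance. A minimal sketch, with all names, readings, and the tolerance value hypothetical:

```python
from statistics import mean

# Hypothetical tolerance from a device datasheet (e.g. +/-0.5 deg C for a temperature sensor)
TOLERANCE = 0.5

def drift_exceeds_tolerance(device_readings, reference_readings, tolerance=TOLERANCE):
    """Flag a field device whose mean offset from a trusted reference
    measurement exceeds the stated tolerance."""
    offsets = [d - r for d, r in zip(device_readings, reference_readings)]
    return abs(mean(offsets)) > tolerance

# Paired samples from the device under test and a recently calibrated reference
device = [20.1, 20.8, 21.6, 22.3]
reference = [19.2, 19.9, 20.7, 21.4]

print(drift_exceeds_tolerance(device, reference))  # mean offset ~ +0.9, out of tolerance
```

Averaging over several paired samples, rather than comparing single readings, reduces the chance of flagging a device because of ordinary measurement noise.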
Corporate Turbulence in a VUCA Business World

An environment can be stable, that is, one in which there is little unpredictable change. Another type of environment is referred to as changing. A turbulent environment exists when changes are unexpected and unpredictable. Environmental complexity characterizes the number of environmental variables and their interdependence: low complexity indicates that only a few variables describe the environment, while high complexity indicates that the environment has many important variables to consider (Krishna Teja, 2016). The key environmental issues concern the nature of the pressure for change and the speed at which the organization must be able to respond and act. The level of environmental turbulence appears to influence structure. Ansoff (1979) developed a measurement of environmental turbulence with five levels: repetitive, expanding, changing, discontinuous, and surprising (Dan Kipley, Roxanne Helm-Stevens, Mitchell Lookinbee-Kipley).

Turbulence in diving

In scuba diving, turbulence can arise when unexpected changes occur in:
- The environment: current, waves, visibility, thermoclines
- The equipment: mask failure, pressure gauge or breathing regulator failure, gas leak
- The diver: workload, stress, abrupt change of depth, hypercapnia, sedation, disease

These can happen individually or in combination, and their effective treatment lies in:
- Good education
- Taking care of one's physical and mental condition
- Time spent practicing scenarios
- The performance ability of the diving buddy or team

Turbulence in business and management

Examples of this "turbulence" (Warnecke and Becker, 1994) are the rapid development of information and communication technologies, saturated markets, and intense competition that leads to more customer orientation, as well as the
political changes in Eastern Europe, where markets can break down within a few months (Kranjska Gora, 1997). As environmental turbulence increases, strategic issues that challenge the way an organisation plans and implements its strategy emerge with greater frequency. Hence the tracking, monitoring, and management of priority strategic issues becomes an imperative. Two basic factors influence uncertainty: the number of factors that affect the organization and the extent to which those factors change. Strategies to adapt to these changes in the environment include boundary-spanning roles, interorganizational partnerships, and mergers and joint ventures.

Turbulence and VUCA

Bob Johansen, of the Institute for the Future, adapted VUCA for the business world in his 2009 book, Leaders Make the Future. He used it to reflect the turbulent and unpredictable forces of change that can affect organizations, and he argued that you need new
- skills
- approaches and
- behaviors
to manage in the face of the four VUCA threats. VUCA represents a set of challenges that
- individuals
- teams
- managers and
- organizations
in affected industries all have to face. Individually, these challenges can be significant, but combined they can be formidable.

Are you an executive, manager, or project team member? Do you want to learn how to deal with turbulence in a VUCA business world? VUCASIM Dive is an imaginative, fun, and safe way to find out.