Recent release: Prospector Sound's Red Sargasso is out 4/29 via The Ambient Zone.
If you enjoyed this interview with Richard Talbot aka Prospector Sound and would like to find out more, visit the website of his main band, Marconi Union.
[Read our Marconi Union interview]
Can you talk a bit about your interest in or fascination for sound? What were early experiences which sparked it?
I can’t remember a specific time, but I’ve always had an interest in sound, places, memories and the connections between them. It can be like a form of time travel, where you hear a random sound or piece of music and instantly your mind takes you somewhere else, either an imaginary place or a real one.
The other aspect of music and sound that was very important to me, was that it could also act as an escape route from everyday life and allow you to travel to places that only existed in your mind.
Which artists, approaches, albums or performances using sound in an unusual or remarkable way captured your imagination in the beginning?
The first record that made me think about the nature of sound was Joy Division’s Unknown Pleasures. At that impressionable age it didn’t sound like anything I’d ever heard before and had a sense of mystery and power that totally caught my imagination.
What's your take on how your upbringing and cultural surrounding have influenced your sonic preferences?
I grew up on the outskirts of nowhere and I was always looking either for an escape route or a more exotic or cinematic alternative to a very humdrum existence. I found that music offered that.
It perhaps wasn’t that healthy to be as consumed with music as I then was but it did allow me a very exciting life - if only in my mind!
Working predominantly with field recordings and sound can be an incisive step / transition. Aside from musical considerations, there can also be personal motivations for looking for alternatives. Was this the case for you, and if so, in which way?
This sounds like something a therapist might ask!
I’m not sure I’ve made an incisive step or transition. When I look back over my music, I think all I’ve really done is refine the same set of very simple ideas that I started with. I certainly haven’t made any quantum leaps stylistically and actually I’m happy with that. A lot of artists spend time trying to work out what they want to achieve. I was really fortunate that right from the start I knew what I wanted to do and I’ve followed that route since then. I think that’s unlikely to change in the near future because I’m still fascinated by those ideas and I think there’s a lot left to explore.
Of course, I am also very fortunate that I have another musical life as a member of Marconi Union and there I get an entirely different musical experience working with other musicians and experimenting with different styles of music.
I should clarify that I’m not a purist about field recordings and while they constitute an element of what I do, it is by no means the largest part. I use sounds that conjure up images that inspire me. Whether they are from field recordings or conventional instruments is largely irrelevant to me. The source of the sounds is less interesting than their effect. I also use a lot of processing which often renders raw sounds unrecognisable in their final form.
How would you describe the shift of moving towards music which places the focus foremost on sound, both from your perspective as a listener and a creator?
I would not say I ever made a shift in this; my own music has always been like that, and it's a large part of my contribution to Marconi Union's music. The main difference is that I'm collaborating with other musicians, and together we want to explore different directions and ideas.
What, would you say, are the key ideas behind your approach to music and working with sound? Do you see yourself as part of a tradition or historic lineage when it comes to your way of working with sound?
When I started making music I wasn't really well versed in what you might call the 'tradition', or aware of ideas like soundscaping, ambient music and acoustic ecology; that awareness developed over time.
But I'm not particularly interested in making "academic music" or in authenticity; my aim is just to make something that inspires people's imagination, and I don't really feel the need to be linked to a certain genre or category.
What are the sounds that you find yourself most drawn to? Are there sounds you reject – if so, for what reasons?
That’s quite difficult because it depends on context, but I do have a recurring fondness for warm evolving drones that sit low in the mix and provide a foundation for other high register sounds ...
As creative goals and technical abilities change, so does the need for different tools of expression, from instruments via software tools and recording equipment. Can you describe this path for you personally starting from your first studio/first instruments and equipment? What motivated some of the choices you made in terms of instruments/tools/equipment over the years?
Over the years I've bought relatively little equipment. I got my first synth, a Korg DS8, in the eighties, and it was always my main instrument until it died about ten years ago. Since then my main keyboard has been a Nord Wave, which I bought because it allows you to import sounds. I also use a variety of spatial effects, which provide a range of processing options.
So it’s pretty clear that my primary musical interest is textural rather than rhythmic. For instance I’ve never owned a hardware drum machine. Recently, I’ve found myself working more ‘in the box’ using Ableton Live. I particularly like the generative possibilities of asynchronous loops which I used on my recent track, "Laguna". Ableton also has follow actions which offer an easy way to experiment with chance. These are pretty well documented techniques but they can still produce great results.
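The generative appeal of asynchronous loops comes down to simple arithmetic: loops of different lengths only realign after their least common multiple, so a handful of short loops can produce a texture that evolves for a long time before repeating. A minimal Python sketch of that idea (the loop lengths and step counts here are invented for illustration, not taken from any particular track):

```python
from math import gcd

def combined_period(loop_lengths):
    """Number of steps before a set of asynchronous loops realigns:
    the least common multiple of the individual loop lengths."""
    lcm = 1
    for n in loop_lengths:
        lcm = lcm * n // gcd(lcm, n)
    return lcm

def restart_pattern(loop_lengths, steps):
    """For each step, list which loops (by index) restart there --
    a crude picture of how the loops drift in and out of phase."""
    return [[i for i, n in enumerate(loop_lengths) if step % n == 0]
            for step in range(steps)]

# Two loops of 3 and 4 steps only line up every 12 steps, so the
# combined texture cycles far more slowly than either loop alone.
print(combined_period([3, 4]))     # 12
print(restart_pattern([3, 4], 5))  # [[0, 1], [], [], [0], [1]]
```

Follow actions add chance on top of this deterministic drift, so in practice the combined pattern never settles into an exact repeat.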
Where do you find the sounds you're working with? How do you collect and organise them?
I don’t have a favourite location for collecting sounds. Sometimes I just leave a mic recording while I’m working and then use the sounds I capture (breathing, typing, coffee mugs being placed on the desk) as a barely audible layer in the mix. I often gather sounds around the house; the kitchen is always rewarding. If I remember correctly, the gong-type sounds on Night Formation are tuned and processed samples of saucepans.
On other occasions I go out to busy urban locations or grab sounds in shops and occasionally I’ll record more rural sounds.
From the point of view of your creative process, how do you work with sounds? Can you take me through your process on the basis of a project or album that's particularly dear to you?
I don’t have a routine for collecting sounds. Occasionally I go out with my recorder or my iPhone and grab lots of different noises, then upload them into folders on my computer, where they can sit for months until I find them again. By that time I often can’t even identify their source, but if I hear a sound I like, I’ll reshape it using Ableton or my Octatrack until it has that elusive connection.
I don’t tend to use raw recordings very often, more or less everything gets processed to some degree to enhance the aspects of it that I like. I’m not really interested in trying to replicate something literal. I’m far more interested in suggestion, sounds that trigger memories and emotions that are unique to each listener.
The possibilities of modern production tools have allowed artists to realise ever more refined or extreme sounds. Is there a sound you would personally like to create but haven't been able to yet?
No, because I don’t look for sounds to recreate a memory or trigger an emotion. It is quite the opposite. A sound finds me and if it conveys a certain memory or emotion then that’s all the better.
I did experiment with recording from films using the audio in between the dialogue. It was great fun and after a bit of research I found horror films were predictably the most atmospheric with all sorts of sinister noises. I remember one particular loop that consisted of someone digging a grave!
Ultimately though it felt like plagiarism so I scrapped the samples without using them and abandoned the idea.
How do you see the relationship between sound, space and composition?
That’s a pretty big question; I think you’d need to write a book to answer it!
If we just look at space in a musical context, that alone is a complex issue with lots of potential definitions. It could refer to the space or place where the listener hears the music and the way that affects their interpretation of it.
There is also the matter of spatiality within music, the gaps between the sounds, not to mention the use of spatial effects such as reverb or delay to influence our perception of the sonic landscape. If we add in sounds that are taken from specific geographical places or attempt to recreate them, it quickly becomes even more complex.
If we then add issues around sound and composition and the interactions between all these topics, we’re in danger of disappearing down a rabbit hole and not resurfacing for several years!
The idea of acoustic ecology has drawn a lot of attention to the question of how much we are affected by the sound surrounding us. What's your take on this and on acoustic ecology as a movement in general?
I think that’s a really interesting question and it touches on loads of issues. I don’t know masses about acoustic ecology but what I have read has been quite fascinating.
The documentarian approach to sound and place is somewhat antithetical to my interest in sound, which is much more impressionistic. I’m really enthusiastic about the idea that in the future we will have audio records of places and I guess there is the added benefit that these recordings will also document changes in our environment.
I certainly enjoy going to places and taking the time to draw breath, listening to my surroundings and imagining how they have changed over the ages.
We can listen to a pop song or open our window and simply take in the noises of the environment. Without going into the semantics of 'music vs field recordings', in which way are these experiences different and / or connected, do you feel?
A pop song’s focus is always on the singer, and even where there isn’t a vocalist there is usually an attempt to capture a sense of community.
If we open our window, what we are hearing is place; even if we hear human activity, it is just part of the larger environment and not a demand on our attention.
From the concept of Nada Brahma to "In the Beginning was the Word", many spiritual traditions have regarded sound as the basis of the world. Regardless of whether you're taking a scientific or spiritual angle, what is your own take on the idea of a harmony of the spheres and sound as the foundational element of existence?
Those are much bigger ideas than I focus on when creating my music. My main thoughts when creating are sound, place and imagination.
Sound takes such an important place in the human memory and mind. I have been with my wife for 35 years but when I hear her laugh, the sound always makes me smile.
https://www.15questions.net/interview/richard-talbot-aka-prospector-sound-talks-sound/page-1/
Putting together a good musical arrangement is an art unto itself. There’s no end to the number of options that present themselves to artists and producers who choose to wear the arranger’s hat in the studio; even those who’ve been at it for a while are always looking for new ideas. In that spirit, here are (in no particular order) 8 quick suggestions of things to think about and watch out for when putting together a musical arrangement.
1. Pick And Choose
A lot of material gets recorded during the initial tracking and overdubbing phases of production, but that doesn’t necessarily mean that it all has to be used in the final arrangement. Often the most musical results are achieved when the producer/artist picks and chooses just the parts that work together best for a particular song and arrangement, instead of trying to use everything that was recorded or programmed.
That may mean not utilizing some excellent performances or cool sounds, and that can become complicated when there are different people involved (disappointed band members or collaborators). But even if a particular performance is nicely crafted or especially well-played, if it doesn't fit in the project at hand then it’s better to cast it aside than try to force a square peg into a round hole (musically speaking). Good ideas will often return in another form at a later date anyway, and the current project will be better for not forcing things.
2. Mix It Up
When artists and producers are working on a group of songs there can be a tendency to rely on the same instrumentation throughout the project. This may make sense if it contributes to maintaining a band’s consistent “sound”, but it can still be a good idea to vary the instrumentation a little here and there, for some welcome variety. That doesn’t mean wholesale replacement of a group of guitars or synths with a string quartet—simply changing one or two instruments slightly can add a little extra something to a group of similar tracks.
Even subtle changes can be helpful—for example, swapping one of several acoustic guitars for a mandolin or the like; trying an upright bass instead of electric (when appropriate); using a Rhodes or Wurlitzer in place of an acoustic grand piano. The extra variety of tones over the course of several songs (or even in different sections of the same song) can draw the listener in anew, and help to keep things sounding fresh.
3. Don’t Step On The Lead
Even studio novices know to keep the lead vocal (or lead instrument, as the case may be) up front in the mix, but the same concern applies to the arrangement as well. It’s important to ensure that other musical parts don’t step on the lead, detracting from the lyrics or melody, especially if that melody is the main hook of the song. A part can sometimes get in the way even if it’s not loud enough to interfere by volume alone.
A background riff (counter-melody) may contain a note or two that clashes with the lead, either pitch-wise (like a stray passing tone that might sound fine on its own but is dissonant when heard under the main melody) or rhythmically (like a syncopation that just doesn’t work against the lead). Sometimes a producer may be so focused on the lead that these flaws go unnoticed, so it’s important to occasionally take a step back and listen to the interactions of the parts with fresh ears.
4. Delegate
Many “one-man band” producers and artists have to wear all the hats in the studio, but whenever possible it can be a good idea to delegate some aspects of the production to another, for an alternative or fresh perspective. A talented multi-instrumentalist may be perfectly capable of playing all the instruments and all the parts in an arrangement, but that doesn’t mean the song wouldn’t potentially benefit from bringing in another player here and there, who may bring a different take (no pun intended) to his part(s). We all tend to fall into musical habits (characteristic riffs, familiar rhythms) and even one or two tracks from guest artists can provide a bit of welcome stylistic variety.
5. Double Your Pleasure
By “doubling” here, I don’t mean the familiar mix techniques of playing the same part twice or duplicating a part and delaying the copy slightly, to make it sound like two players. I mean doubling in the more traditional musical (orchestral) sense—having two different instruments play the same part in unison. In orchestral arranging this has long been employed as a way to coax additional timbres from a fixed orchestral lineup—when two instruments play in unison, their harmonics and overtones combine and interfere, producing tonalities that neither is capable of on its own (e.g. flute and oboe, trumpet and sax, guitar and piano). Add to that the inevitable variations in timing and pitch and you can have a much more colorful tonal palette without having to deviate too much from a particular instrumental lineup.
6. Hold Something Back
Many arrangements start off light and then gradually build to a higher level of musical intensity—needless to say, in the appropriate musical genres this can be a very effective way of holding and building the listener’s interest throughout a song. But all too often, when this technique is used, the buildup comes too quickly, leaving nowhere to go (in terms of musical intensity) for the rest of the tune, which can end up leveling off and sounding anti-climactic as a result.
Vocalists sometimes do this (especially on those voice competition shows), turning to elaborate flourishes and dramatic high notes long before the melody has been established well enough for the listener to have a handle on it (which is essential if those flourishes are to elicit the intended emotional response when they do come along). Arrangements can suffer from this as well. A track may start off lightly (say, guitar or keys and voice) and then the full band comes in, but if that full band enters at full blast on the first chorus, the next two or three choruses may fail to provide the same jolt, again making for an unfortunate anti-climax. If that’s happening, a better approach may be to have a slight energy bump on the first chorus (full band but not at full tilt) and a slightly bigger second verse and chorus (an extra rhythm part or two), saving the full-tilt entrance for the bridge or solo—while of course remembering to hold a little something in reserve so the last chorus(es) can bring some extra energy to the end of the song.
7. Comp Judiciously
Comping is a standard arranging technique in the modern studio: multiple takes are recorded, and the best bits of them are extracted and combined into a final composite track. This can be done with any parts, but it’s probably most commonly done with vocals. But, like many modern studio tools that can be employed in search of the best performance (in this case the idealized perfect vocal take), a little may be a good thing, but too much of a good thing can sometimes be bad for you—comping is one of those tools.
Some producers, in search of vocal perfection, end up cutting the various takes not only into lines and phrases but into even smaller bits: words, even individual syllables(!). This can indeed create a technically perfect vocal, but it may sacrifice some of the musicality in the process. Many vocal tracks have an overall arc to the performance, as the singer’s tone and intensity ebbs and flows in keeping with both the lyrical content and the musical dynamics of the song. But few vocalists do this exactly the same way every take—their performances typically evolve and vary somewhat over the course of the session.
When a comp is created from very small bits of several takes, any sense of arc—overall musical dynamic—present in individual takes can easily be lost, resulting in a technically perfect but musically sterile track. An alternative is to comp more sparingly—maybe find the best take of the bunch, and then use comping to correct and enhance the musical arc of that particular performance, swapping out bits and pieces in larger chunks (when that works musically) and only applying small corrections (syllables, letters) when really needed, careful to preserve the overall arc and unique character of the take chosen as the basis for the comp.
8. Leave A Little Of The Human Touch
On the same note as the previous suggestion (re comping), it can sometimes be a good idea to avoid too much “perfection” in general. Now of course this suggestion may be somewhat genre-dependent; there are some musical styles that depend on rhythmic perfection (i.e. dance music), where full 100% quantization is one of the defining elements of the genre, and other tools like pitch correctors (Auto-Tune) may be deliberately applied specifically for the effect they are capable of creating while applying perfect tuning to vocals.
But for many other musical genres, especially those styles that would utilize (or at least emulate) live musical performances, it’s often a better idea to let a little imperfection in and preserve the performers’ individuality than to correct everything but lose that all-important feel. Approaches like partial quantization (less than 100% timing correction) and using pitch correction only on short phrases (rather than whole tracks) can go a long way toward preserving a little of that human touch.
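Partial quantization is easy to picture as arithmetic: each note is pulled only a fraction of the way toward the nearest grid line. The sketch below is a hedged illustration of that idea, not any DAW's actual algorithm; the grid size and strength values are arbitrary choices for the example.

```python
def partial_quantize(times, grid=0.25, strength=0.5):
    """Pull each note-on time (in beats) part of the way toward the
    nearest grid line. strength=1.0 is full quantization; 0.0 leaves
    the human timing untouched; values in between keep some feel."""
    return [t + strength * (round(t / grid) * grid - t) for t in times]

# A note played slightly late (0.30 against a 0.25 grid) is only
# half-corrected at 50% strength, landing at 0.275 instead of
# snapping all the way to 0.25.
softened = partial_quantize([0.30], grid=0.25, strength=0.5)
```

The same interpolate-toward-target idea applies to pitch correction: correcting only part of the deviation, or only short phrases, preserves more of the performance than snapping everything to the target.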
Wrap Up
And on that note, I’ll wrap this up. When putting together a musical arrangement it’s always a good idea to step back now and then and consider other options and alternative approaches—hopefully some of these suggestions will prove useful from time to time.
https://ask.audio/articles/8-intelligent-music-arrangement-tips-for-producers
Draw 4 arcs around circle in Excel: I have 8 cells which have a value of between 0 and 360 degrees. The values in cells 1 & 2 are paired, 3 & 4 are paired, 5 & 6, etc. Enter Angle and Radius and hit Calculate to re-draw the circle and display all measurements across (chord) and around (arc) each section of the circle. The areas of each colored section and of the entire circle are displayed in the table above, with volumes for the corresponding sections of a sphere.
In order to draw the circles, some inputs are: the center of the circle (in geographic coordinates only) and its radius (in meters). Other data such as name, description and style are also required. For this application 6 styles are available: 101, 201, 301, 401, 501 and 601. On 12/02/2009, replying to joelhmiller's question "how do I create a formula to find the radius (circumference = 2*3.14r) if I have the circumference?": the correct formula is circumference = 2*pi*r, so the radius is the circumference divided by 2*pi.
How to create a 8 mile radius circle on a specific address plotted on Excel 3D Maps? Hi, so I've been having trouble finding clear instructions on how to use 3D Maps in Excel. I know how to plot regions such as counties and cities, etc., as well as specific addresses on the map.
13/11/2014�� Re: draw circle on chart at specified x,y coordinates Yes, I would like to draw an outlined circle in an existing x,y scatter plot at a specified x,y coordinate with a specific radius. A Bubble chart will not work for what I'm needing because my x,y scatter plot already has some points with connected lines between them.
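The formulas scattered through these posts can be pulled together in a short script. This is a hypothetical sketch (the function names and the 37-point resolution are my own choices): it converts a circumference to a radius and generates the (x, y) pairs you would paste into two Excel columns and plot as an XY scatter chart with smoothed lines to draw a circle or arc at a specific radius.

```python
import math

def radius_from_circumference(c):
    # circumference = 2 * pi * r, so r = c / (2 * pi)
    return c / (2 * math.pi)

def arc_points(cx, cy, r, start_deg, end_deg, n=37):
    """(x, y) coordinates tracing an arc of a circle centred at
    (cx, cy) with radius r, from start_deg to end_deg. Paste the
    pairs into two Excel columns and chart them as an XY scatter
    with smoothed lines to draw the arc."""
    pts = []
    for i in range(n):
        theta = math.radians(start_deg + (end_deg - start_deg) * i / (n - 1))
        pts.append((cx + r * math.cos(theta), cy + r * math.sin(theta)))
    return pts

# One of the four paired arcs, e.g. cells 1 & 2 holding 0 and 90 degrees:
quarter = arc_points(0.0, 0.0, 8.0, 0.0, 90.0)
print(len(quarter))  # 37
print(quarter[0])    # (8.0, 0.0)
```

For the original four-arc question, each pair of angle cells supplies `start_deg` and `end_deg` for one call to `arc_points`, giving four separate series on the same scatter chart.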
http://gleekguide.com/northern-territory/how-to-draw-a-circle-in-excel-with-specific-radius.php
JACC in a Flash: NT-proBNP: Guiding Primary Prevention of CVD in Diabetic Patients
Diabetes is one of the main factors leading to development of cardiac disease, yet little research has focused on finding preventive therapies in this population. The biomarker N-terminal pro-B natriuretic peptide (NT-proBNP) is recognized as one of the best predictors for both short- and intermediate-term cardiovascular events—so could elevated levels of NT-proBNP help identify diabetic patients at the greatest risk for cardiac disease?
In the PONTIAC (NT-proBNP Selected Prevention of Cardiac Events in a Population of Diabetic Patients Without a History of Cardiac Disease) trial, Martin Huelsmann, MD, and colleagues hypothesized that the uptitration of neurohumoral therapies (renin-angiotensin system [RAS]-antagonists, ACE-Is or ARBs, and beta-blockers) would be most effective for the prevention of cardiac events in a subgroup of diabetic patients who had elevated NT-proBNP concentrations at baseline. A total of 268 diabetic patients completed the study: 131 patients who were randomized to the “control” treatment group and 137 patients in the “intensified” treatment group.
Using NT-proBNP concentration appeared to identify diabetic patients who would benefit from neurohumoral therapy: after 2 years, randomization to the biomarker-guided “intensified” group was associated with a 65% reduction in risk of the primary endpoint (hospitalization or death due to cardiac disease; TABLE). There were no major side effects requiring hospitalization, suggesting that targeted blocking of neurohumoral activation with aggressive uptitration was effective and safe, even in an already well-treated population. However, contrary to Dr. Huelsmann and colleagues’ hypothesis, allocation to a treatment group did not result in a significant decrease in NT-proBNP levels.
“It is well known from clinical practice that the cardiovascular risk of individual diabetes patients varies widely,” the authors wrote, noting that future study in larger populations is required to validate the findings of the present study—particularly in patients with low NT-proBNP to determine if treatment effect is exclusively present in patients with higher biomarker concentrations. “We would expect that based on the low event rates in a population with low NT-proBNP the number to treat would be substantially higher than in the population presented in the PONTIAC trial.”
Huelsmann M, Neuhold S, Resl M, et al. J Am Coll Cardiol. 2013;62:1365-72.
https://www.acc.org/Latest-in-Cardiology/Articles/2013/10/16/13/28/JACC-in-a-FLASH-NT-proBNP-Guiding-Primary-Prevention-of-CVD-in-Diabetic-Patients
Michel Imberty, the psychology of music beyond the cognitive sciences
Though the works of Michel Imberty are known in the field of music cognition (Entendre la musique, 1979; Les écritures du temps, 1981), they adhere only partially and in specific ways to the then dominant paradigms of behaviorism and structuralism. In these books, as in the many articles that followed, two major interdisciplinary themes emerged, as well as an epistemic position closer to that of anthropology or ethnomusicology, a position that could in fact be described as phenomenological. These two themes are temporality and/or musical time on the one hand (the older of the two) and the nature and origin of human musicality on the other, whose central concept was developed in parallel in the 2005 book La musique creuse le temps. Indeed, it is also in this book that Imberty's phenomenological position is fleshed out, because reflecting on musical time poses problems not only for cognition in the classical sense, but also for the study of musical meaning, which was the focus of the two previous books. Meaning raises the question of intentionality, and one cannot work on music (or on any human endeavour) without seeking to understand both the behavior (that of the composer, the performer-interpreter, the listener) and the meaning those behaviors have for the intentional agents involved. Furthermore, such insight cannot be gained without reflecting on the meaning of all these actions for the researcher himself: the meaning he gives to his study compared to that which the study participants themselves perceive.
By interrogating music through a wide field of research, ranging from psychoanalytic theories to a "proto-narrative" understanding of time whose traces contemporary biology reveals in cerebral functioning, Michel Imberty opened a considerable space for the interpretation of musical facts, and his writings address the musicologist and the music analyst as much as the psychologist or the philosopher interested in the way human beings give meaning to temporality. In a special issue entitled "Michel Imberty, the psychology of music beyond the cognitive sciences", the journal Filigrane invites contributions from researchers who, at one point or another in their journey, have been marked by this thought, which radically renewed the way of understanding the musical fact.
Proposals (in French or English) should be sent before 18 November 2017 to [email protected].
They will include: a) an abstract (6000 characters); b) a short bio-bibliography.
If their proposal is accepted, the authors undertake to send the complete article before June 30, 2018.
https://www.ircam.fr/agenda/michel-imberty-colloque/detail/
Psychology as a scientific discipline aims to describe, understand, and predict the behavior of living organisms. In doing so, psychology embraces the many factors that influence behavior - from sensory experience to complex cognition, from the role of genetics to that of social and cultural environments, from the processes that explain behavior in early childhood to those that operate in older ages, and from normal development to pathological conditions. The Psychology Department at Berkeley reflects the diversity of our discipline's mission covering 6 key areas of research:
• Behavioral and Systems Neuroscience
• Clinical Science
• Cognition
• Cognitive Neuroscience
• Developmental
• Social-Personality Psychology
Despite the existence of these specialization areas, our program learning goals focus on fostering methodological, statistical and critical thinking skills that are not tied to any one particular content area in psychology but are relevant for all of them.
Most of our program level goals are introduced in Psych 1 (General Psychology), which is the only lower division psychology course that is a prerequisite for the major. These goals are extended and reinforced in a majority of the upper division Tier 2 "core" courses. These include Psychology 10/101, Research Methods, required of all majors, and our Tier 2 courses that survey the major fields of psychology. Our program is designed to ensure that all students gain broad exposure to the field of psychology. In addition, students are encouraged to develop a deeper understanding of at least one major content area in psychology.
https://psychology.berkeley.edu/students/undergraduate-program
• Understand the overall objectives of B2B marketing, its functional modules and architecture, and the relationships between the various levels.
• Develop an in-depth understanding of the three levels of customer needs, and explore the commercial value and personal value in B2B marketing.
• Use market segmentation tools to explore potential market growth opportunities.
Benefits to Participants
• Explore the real drivers of the customers and our advantages and disadvantages.
• Match customer needs with our product advantages; seek common interests in differentiation.
• Based on the customer's interests, develop convincing products and a communication strategy.
• Complete the Marketing Strategy and Action Plan using the results of the analysis.
Target Participants
Marketing, Sales, Product, Technical Support and Service related Staff.
Duration
2 Days
Course Outline
l Definition and Logic of B2B Marketing
Ø Tasks and functions included in B2B marketing
Ø The Definition and common goal of B2B marketing
Ø 5 stages in B2B marketing
l Needs and Insight: Understand business customer needs
Ø Explicit and Implicit needs
Ø Understand the customer’s industry needs
Ø Understand the customer’s business process needs
Ø Our role in the industry value chain
Ø Match needs to 4P and 4C
Ø Understand the three levels of need
Ø Trends and changes in need
l Methods of exploring customer needs
Ø (Exercise: 3 levels of customer need)
l Where are we now? Competitor Analysis
Ø Use incentive factor analysis tool to identify motivation factors that the customer cares about and our advantages and disadvantages
Ø Analyze the opportunities we need to maintain and improve
Ø Apply SWOT analysis
Ø (Exercise: incentive factor analysis)
Ø (Exercise: Advantages and Disadvantages Analysis)
l Where could we be? Lock the target market)
Ø Overall market and target market
Ø Product propositioning and brand proposition
Ø Opportunities in the target market
Ø Dimensions and tools for market segmentation
Ø Differentiated competitive advantage increases market opportunity
Ø (Exercise: dimensions of market segmentation and explore potential market)
Ø (Exercise: evaluate potential market opportunity and project market size)
l How can we get there? Develop a marketing strategy - Proposition
Ø 4 dimensions of market proposition and strategy
Ø From product to brand
Ø Brand power and market force. Brand building or marketing? Push or Pull?
|
http://www.34learning.com/course_detail/102
|
The dynamic Boston Symphony Orchestra (BSO) is a world-renowned institution with one of the largest and most prestigious orchestras in the country. Engaging more than 1.2 million people each year, the BSO celebrates its 139th season by maintaining the dream of its founder, Henry Lee Higginson. He envisioned and embraced the notion of creating orchestral mastery in his hometown of Boston by inspiring a myriad of audiences and producing transformational experiences. Today, the BSO is known for its diverse programs, excellent performances, and a longstanding tradition of innovation. The goal of these programs and experiences has always been to enhance the quality of life for many communities and promote a sense of belonging for all.
The BSO is ranked among the greatest cultural assets in the City of Boston, the Berkshire region, the entire Commonwealth of Massachusetts and beyond. With an enviable reputation, the BSO provides the broadest range of performance, educational and community programs, bringing Boston cultural vibrancy through a number of artistic vehicles, namely: The Boston Symphony Orchestra/Symphony Hall, The Boston Pops and Tanglewood.
Reaching a diverse audience, the BSO leads the way in classical and popular music through concert performances in Boston and Tanglewood but also via the internet, radio, television, educational programs, recordings and tours. In 1996, the BSO launched its website and it is the largest and most visited orchestral website in the United States, receiving approximately 7 million visitors annually on its full site as well as its smart phone-/mobile device – friendly web format. The BSO is also on Facebook and Twitter. Additionally, video content from the BSO is available on YouTube.
In an ongoing quest to attract new audiences, innovate and create even greater impact, the BSO’s new Tanglewood Learning Institute has expanded its reach. Today, it includes educational opportunities that intersect with music and other contemporary arts. Academic fields like history and philosophy are woven into the portfolio of offerings. This has created greater possibilities for all, including an enhancement of the experience for patrons.
From inception, the BSO has been led by legendary conductors, each of whom has put their unique imprint on the Orchestra and its listeners in terms of musical traditions, performance repertoire, educational programs, international tours, and musical recordings. In May 2013, a new chapter in the history of the BSO was initiated when the internationally acclaimed young Latvian conductor, Andris Nelsons became the Music Director, a position that he took up in the 2014-2015 season, following a year as Music Director Designate. Named Musical America’s 2018 Artist of the Year, Mr. Nelsons leads 14 of the BSO’s 26 subscription programs in 2018-2019.
Most recently, under the direction of Andris Nelsons, the BSO has been proud to foster a five-year multidimensional collaboration with the historic German Gewandhaus Orchestra of Leipzig (GHO). Andris Nelsons is the principal conductor of both orchestras. This unique collaboration explores each ensemble’s music-making and the historic traditions and accomplishments that have built their reputations as two of the world’s great orchestras. A major highlight of the five-year alliance is an annual focus on complementary programming celebrating each other’s musical legacy in thoughtfully curated concerts which is core to the BSO’s mission.
The BSO produces a plethora of concerts, lectures, educational events and activities. International tours and major worldwide outlets widely publicize the BSO through events such as the Boston Symphony Millennium Concert in Paris and the Boston Pops appearances at major sporting events including the Super Bowl, the World Series, and NBA Finals. The BSO has launched its own recording label, BSO Classics, which, in 2010, garnered the Orchestra’s first Grammy win in over 45 years for its recording of Daphnis et Chloé. Numerous Grammy awards have been celebrated including a fourth Grammy won by the BSO’s engineering team for the recording of Shostakovich’s 4th and 11th Symphonies.
Additionally, the BSO makes a significant contribution to the local and state economies as an employer and market for goods as well as a critical component of the tourism product for the Commonwealth. In fact, the BSO contributes significantly to the Massachusetts economy and is an international treasure; inspiring thousands of audiences and impacting the global community.
For more information on the BSO, please visit the website HERE.
At this important moment in its history, the BSO is seeking an outstanding Chief Development Officer (CDO) to lead its philanthropic efforts and promote transparency between the board and staff on all development issues. One of the main goals of the successful CDO will be to promote and build a culture of philanthropy for the BSO. As the BSO continues to create impact, the ideal Chief Development Officer will be a commanding fundraising leader who will be an integral part of the BSO’s senior management team.
Reporting to the CEO, Mark Volpe, the CDO will be responsible for the creation of the overall strategic development initiatives producing revenue goals in support of the BSO’s mission and vision for the future. The CDO will lead a team of 36 including 7 direct reports. Through this leadership hire, it is anticipated that the BSO will reach new and aspiring levels of stewardship and philanthropic success organization-wide, engaging current stakeholders at the highest levels and attracting new donors. This will include developing strategies to further engage high-level constituents, including corporate partners, individual and institutional donors.
The newly formed Philanthropy Committee of the Board has fiduciary oversight of the overall Development function. The CDO will partner with the Philanthropy Committee to establish a bond between the Board of Trustees and the staff. A new governance model has been implemented which is attracting a new generation of leaders to join the Board and Advisors. Simultaneously, a significant cohort of longstanding and generous donors will be considering their legacy gifts over the next five years and the organization is undertaking a process to more fully explore “mission, program and engagement.”
This is a special time in the history of the BSO as planning is underway to determine the future of Symphony Hall and the surrounding campus. The CDO role offers a unique opportunity to shape the role of philanthropy in a project of multi-generational impact. Currently, philanthropic efforts comprise roughly half of the operating revenue each year for the BSO’s $100 million+ budget; 25% through endowment income. The BSO commits close to $8 million a year to support the Development function. The Development Function is a vital part of the BSO’s foundation.
Key goals will include the following:
The BSO will continue to build a friendly and inspiring environment in which musical mastery, growth, common purpose and community thrive.
The successful CDO must be a proven fundraising visionary and strategist. The CDO will be an effective listener and strong collaborator with the ability to build a meaningful and robust network of partnerships. The CDO will collaborate with other senior executives, Development Staff, and Board members to develop overall strategy for philanthropic activities to continue to elevate and support BSO’s mission.
Key responsibilities include:
While we will consider a broad range of backgrounds, the ideal candidate would have the following qualifications/experience:
Maureen Alphonse-Charles, Liz Lombard and Nadine Coleman of Koya Leadership Partners have been exclusively retained for this search. To express your interest in this role please submit your materials here. All inquiries and discussions will be considered strictly confidential.
Boston Symphony Orchestra is an equal opportunity employer and strongly encourages applications from people of color, persons with disabilities, women, and LGBTQ+ applicants.
At Koya, we do not just accept difference – we celebrate it, support it, and thrive on it for the benefit of our team, our clients, and the communities we serve. Koya is an equal opportunity employer fully committed to creating an environment and team that represents a variety of backgrounds, perspectives, styles, and experiences.
We encourage all to apply because we believe a diversity of voices leads to better discussions, decisions, and outcomes for everyone.
Koya does not discriminate on the basis of race, color, national origin, religion, sex, disability, age, sexual orientation, military status, veteran status, genetic information, gender identity, or any other characteristic protected by applicable federal, state, or local law.
Download the Full Position Profile Here
Koya Leadership Partners is a retained executive search and human capital consulting firm that partners exclusively with mission-driven clients, institutions of higher education and social enterprises. We deliver measurable results, finding exceptionally talented people who truly fit the unique culture of our clients and ensuring they have the strategies to support them. For more information about Koya Leadership Partners, visit www.koyapartners.com.
|
https://koyapartners.com/search/boston-symphony-orchestra-chief-development-officer/
|
By studying reflected light from a young star system, astronomers have precisely measured the size of the innermost gap between the star and where planets will eventually form.
Often, stars are too distant for us to make any sense of their surroundings. But in the case of baby stars, surrounded by protoplanetary disks, there's an ingenious trick astronomers can use to examine the structure of their dusty birthplaces.
A star may form from a molecular cloud of gas that, should the conditions be just right, collapses under its own gravity. This collapse forms a knot of dense material that may become the core of a young star. Over time, material collects around this protostar, forming a swirling disk. Eventually, planets coalesce from this protoplanetary disk. To better understand how the planets in the solar system formed, astronomers are very interested in studying the disks around other stars.
"Understanding protoplanetary disks can help us understand some of the mysteries about exoplanets, the planets in solar systems outside our own," said postdoctoral research associate Huan Meng, of the University of Arizona, Tucson. "We want to know how planets form and why we find large planets called ‘hot Jupiters' close to their stars."
Unless young star systems are on our interstellar doorstep, it can be hard to resolve the structure of these disks.
However, by studying the fluctuations of brightness of a star called YLW 16B, approximately 400 light-years from Earth, Meng and his collaborators were able to detect reflected starlight from the innermost boundary of the star's protoplanetary disk, making extremely precise measurements of its location and structure.
This particular star is of approximately the same mass as our sun, but it is only 1 million years old (compared to our 4.6 billion year-old sun, this star isn't much more than a stellar embryo). This makes it an ideal candidate to better understand the physics of our solar system before any planets began to form around the young sun.
Using data from NASA's Spitzer space telescope, which observes the cosmos in infrared light, and from ground-based observatories, the astronomers applied a technique called "photo-reverberation" to study the starlight bouncing off the protoplanetary disk's inner edge.
It just so happens that YLW 16B has a variable and unpredictable fluctuation in emissions, so the astronomers measured these emission fluctuations and waited for the reflected light to bounce off the disk. The variations in star brightness could then be matched with the light echo, which arrived shortly after. The time lag could then be used to derive the distance of the star from the protoplanetary disk's inner edge.
For this star system, the gap between star and inner disk is around 0.08 AU - where 1 AU, or astronomical unit, is the average distance between the sun and Earth's orbit. As a better comparison, the inner edge is approximately one quarter of the distance that Mercury orbits the sun.
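As a back-of-the-envelope illustration (a simplified sketch of ours, not the study's actual model, which accounts for the full echo geometry), a 0.08 AU gap implies that the echo lag is roughly the one-way light-travel time from the star to the inner edge, about 40 seconds:

```python
# Rough light-echo lag for a disk inner edge at a given radius.
# Simplifying assumption (ours, not the study's): the lag equals the
# one-way light-travel time from the star to the inner edge.
AU_M = 1.495978707e11   # astronomical unit, in metres
C_MS = 299_792_458      # speed of light, in metres per second

def echo_lag_seconds(radius_au: float) -> float:
    """One-way light-travel time from star to inner disk edge."""
    return radius_au * AU_M / C_MS

print(f"{echo_lag_seconds(0.08):.0f} s")  # roughly 40 s
```

Turned around, this is exactly how the technique works in reverse: measure the lag between the direct flicker and its echo, multiply by the speed of light, and the inner-edge radius falls out.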
These observations were also able to deduce that the disk was thick, providing an interesting additional clue as to how much material the disk may contain.
Young stars are bright and possess powerful stellar winds that "blow-out" the inner protoplanetary disk, leaving a gap (as illustrated by the image, top). Understanding how big this gap is and its location from the star will help us improve models of baby star systems, ultimately teaching us a little about how our solar system may have formed 4.6 billion years ago.
This illustration shows a young star, erupting with magnetism, surrounded by a protoplanetary disk.
Astronomers using the Hubble Space Telescope recently completed the largest and most sensitive survey of dust surrounding young star systems. The survey zoomed in on stars that are between 10 million and 1 billion years old, and the source of the dust is thought to be the left-over debris from planet, asteroid and comet collisions after systems of planets have formed.
The research is akin to looking far back into the history of our solar system, seeing the inevitable dusty mess left over after the Earth and other planets evolved. "It's like looking back in time to see the kinds of destructive events that once routinely happened in our solar system after the planets formed," said Glenn Schneider, of the University of Arizona's Steward Observatory and lead scientist on the survey team.
Read on to see some of the beautiful variety of circumstellar disks observed by Hubble.
One of the major findings to come from this survey is the stunning diversity of dust surrounding these young stars. Traditionally, circumstellar dust is thought to settle into an orderly disk-like shape -- but it turns out that the opposite is true.
"We find that the systems are not simply flat with uniform surfaces," said Schneider. "These are actually pretty complicated three-dimensional debris systems, often with embedded smaller structures. Some of the substructures could be signposts of unseen planets."
One stunning observation of the star HD 181327 exhibits a bright ring of dust containing irregularities, potential evidence of a massive collision that has scattered debris far and wide. "This spray of material is fairly distant from its host star — roughly twice the distance that Pluto is from the Sun," said Christopher Stark of NASA's Goddard Space Flight Center, Greenbelt, Md., and co-investigator in the survey team. "Catastrophically destroying an object that massive at such a large distance is difficult to explain, and it should be very rare. If we are in fact seeing the recent aftermath of a massive collision, the unseen planetary system may be quite chaotic."
Another interpretation for the irregularities could be some kind of interaction with unseen interstellar material. "Our team is currently analyzing follow-up observations that will help reveal the true cause of the irregularity," added Schneider.
Like the diversity of exoplanetary systems astronomers have discovered, it appears the accompanying dust disks also share this characteristic, possibly indicative of gravitational interactions with planets orbiting the stars surveyed by Hubble.
"How are the planets affecting the disks, and how are the disks affecting the planets? There is some sort of interdependence between a planet and the accompanying debris that might affect the evolution of these exoplanetary debris systems," said Schneider.
Since 1995, thousands of exoplanets have been discovered orbiting stars in our galaxy. Over the same period, however, only a couple of dozen circumstellar disks have been imaged directly. This is down to the fact that the scattered light off these disks is extremely faint (around 100,000 times fainter than the parent star's light). The technology and techniques are only recently becoming available for scientists to not only block the star's blinding light, but to also boost the sensitivity of observations to pick out this scattered light that would otherwise be obscured from view. Fortunately, Hubble's high-contrast imaging has been key in making this survey a success.
Studying these disks of dust and their surprising variety of morphologies may help astronomers better understand how the Earth-moon and Pluto-Charon systems formed. Through planetary collisions, the debris from the early solar system may have coalesced to create many of the natural satellites we see today, 4.6 billion years later. The results of this survey have been published in The Astrophysical Journal.
For more information about this Hubble survey and high-resolution images, browse the HubbleSite.org news release.
|
https://www.seeker.com/light-echos-used-to-measure-size-of-baby-stars-crib-1771269842.html
|
Ramsar sites are areas of wetland that are designated under the International Convention on Wetlands of International Importance (the Ramsar Convention). The UK government signed up to the Convention in 1976 and in 2014 there were 148 designated sites in the UK and 2186 globally.
Mission of the Ramsar Convention
The mission of the Ramsar Convention is:
“the conservation and wise use of all wetlands through local and national actions and international cooperation, as a contribution towards achieving sustainable development throughout the world”.
Under the Convention, wetlands include:
- Lakes and rivers.
- Underground aquifers.
- Swamps and marshes.
- Wet grasslands.
- Peatlands.
- Oases.
- Estuaries.
- Deltas and tidal flats.
- Mangroves and other coastal areas.
- Coral reefs.
- All human-made sites including fish ponds, rice paddies, salt pans and reservoirs.
UK Ramsar sites
In the UK, in 2014 there were 148 designated Ramsar sites totalling over 785,000 hectares. Further information on each of the sites is available from the Joint Nature Conservation Committee website.
Ramsar Site criteria
There are nine criteria for identifying Wetlands of International Importance:
Group A. Sites containing representative, rare or unique wetland types
- Criterion 1: A wetland containing a representative, rare, or unique example of a natural or near-natural wetland type found within the appropriate biogeographic region.
Group B. Sites of international importance for conserving biological diversity
- Criterion 2: A wetland supporting vulnerable, endangered, or critically endangered species or threatened ecological communities.
- Criterion 3: A wetland supporting populations of plant and/or animal species important for maintaining the biological diversity of a particular biogeographic region.
- Criterion 4: A wetland supporting plant and/or animal species at a critical stage in their life cycles, or providing refuge during adverse conditions.
- Criterion 5: A wetland supporting 20,000 or more waterbirds.
- Criterion 6: A wetland regularly supporting 1% of the individuals in a population of one species or subspecies of waterbird.
- Criterion 7: A wetland supporting a significant proportion of indigenous fish subspecies, species or families, life-history stages, species interactions and/or populations that are representative of wetland benefits and/or values and thereby contributing to global biological diversity.
- Criterion 8: A wetland that is an important source of food for fishes, spawning ground, nursery and/or migration path on which fish stocks, either within the wetland or elsewhere, depend.
- Criterion 9: A wetland regularly supporting 1% of the individuals in a population of one species or subspecies of wetland-dependent nonavian animal species.
Proposals that may affect a Ramsar site
Any developments that are close to (or within) the boundary of a Ramsar site may require a Habitat Regulations Assessment if they are likely to have an adverse effect on the site. An initial screening stage would be required, followed by an Appropriate Assessment.
Where it is considered that an adverse effect on the integrity of the site is likely, and no alternatives are available, the project can only go ahead if there are imperative reasons of over-riding public interest and if the appropriate compensatory measures can be secured.
Related articles on Designing Buildings Wiki.
- Areas of Outstanding Natural Beauty.
- Designated sites.
- Forests.
- Habitats regulations assessment.
- National nature reserves.
- National parks.
- Natura 2000 network.
- Natural England.
- Protected species.
- Sites of Special Scientific Interest.
- Special areas of conservation.
- Special Protection Areas.
- Types of land.
|
https://www.designingbuildings.co.uk/wiki/Ramsar_sites
|
Radon is a gas produced by the radioactive decay of the element radium. Radioactive decay is a natural, spontaneous process in which an atom of one element decays or breaks down to form another element by losing atomic particles (protons, neutrons, or electrons). When solid radium decays to form radon gas, it loses two protons and two neutrons. These two protons and two neutrons are called an alpha particle, which is a type of radiation. The elements that produce radiation are called radioactive. Radon itself is radioactive because it also decays, losing an alpha particle and forming the element polonium.
The decay of each radioactive element occurs at a very specific rate. How fast an element decays is measured in terms of the element's 'half-life', or the amount of time for one half of a given amount of the element to decay. Uranium has a half-life of 4.4 billion years, so a 4.4-billion-year-old rock has only half of the uranium with which it started. The half-life of radon is only 3.8 days. If a jar were filled with radon, in 3.8 days only half of the radon would be left. But the newly made daughter products of radon would also be in the jar, including polonium, bismuth, and lead. Polonium is also radioactive - it is this element, which is produced by radon in the air and in people's lungs, that can hurt lung tissue and cause lung cancer.
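The half-life arithmetic above follows the standard decay relation N(t) = N0 · (1/2)^(t / half-life). A minimal sketch (the function name is ours, for illustration):

```python
# Fraction of a radioactive sample that remains undecayed after a
# given elapsed time, using N(t)/N0 = (1/2) ** (t / half_life).
def remaining_fraction(elapsed: float, half_life: float) -> float:
    """Both arguments must be in the same time units (e.g. days)."""
    return 0.5 ** (elapsed / half_life)

# Radon (half-life 3.8 days): half remains after one half-life,
# a quarter after two.
print(remaining_fraction(3.8, 3.8))   # 0.5
print(remaining_fraction(7.6, 3.8))   # 0.25
```

The same formula covers uranium's 4.4-billion-year half-life simply by changing the units from days to years.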
| |
Physical dispersion of radioactive wastes into regolith at the Radium Hill uranium mine site, South Australia
Ashley, Paul , and Lottermoser, Bernd (2004) Physical dispersion of radioactive wastes into regolith at the Radium Hill uranium mine site, South Australia. In: Papers from the 17th Australian Geological Convention. From: 17th Australian Geological Convention, 8-13 February 2004, Hobart, TAS, Australia.
|
Abstract
The Radium Hill uranium deposit was mined for radium between 1906 and 1931 and uranium between 1954 and 1961. Rehabilitation was limited to removal of mine facilities, sealing of underground workings and capping of selected waste repositories. Radium Hill has a semi-arid climate and the area is subject to wind and water erosion. In 2002, gamma-ray data, plus tailings, uncrushed and crushed waste rock, stream sediment, soil and vegetation samples were collected to determine the dispersal of mine wastes by wind and water into the local regolith.
The mine and former processing site covers an area of approximately 100 ha. Numerous stable waste dumps of uncrushed rock occur for 800 m along the line of lode. These consist of broken rock material from underground workings and represent the various rock types (feldspar-quartz-biotite gneiss, amphibolite, pegmatite, retrograde rock types, lode material) encountered during mining. Ore grade material (0.1-0.2% U) has significant davidite, high radiation levels (max. 5000 cps; max. 4.2 mSv/hr) and LREE, Nb, Sc, Th, Ti, U, V and Y enrichments. Crushed rock material from the mine is found in several dumps in the mine and mill areas, and has been used widely for road and building construction. It is more radioactive than the uncrushed waste rock. Several former mill tailings dams are covered by soil and rock, with the largest containing approximately 0.5 Mt of tailings averaging 200 ppm U. Tailings have elevated radiation levels (1400-5500 cps; max. 3.5 mSv/hr) and prior to covering in the early 1980s, wind deflation and water erosion had caused widespread dispersal into surrounding regolith, with some soils having >90 % of tailings material. Despite partial coverings, mine wastes at the site remain susceptible to water and wind erosion. Regional airborne radiometric data outline the former town and mine sites and roads as pronounced U-Th anomalies.
Capping of tailings storage facilities did not ensure long-term containment of low-level radioactive wastes due to erosion of sides of the impoundments. Continued wind and water erosion of physically unstable waste repositories causes radiochemical and geochemical impacts on local soils and sediments. Additional capping of mine wastes is required in order to minimise impacts on surrounding soils and sediments. However, measured radiation levels are generally below Australian Radiation Protection Standards (20 mSv/year averaged over five consecutive years), except for exposed tailings.
|
https://researchonline.jcu.edu.au/8163/
|
Pioneering motion map shows Scotland to be 'on the move'
A first land motion map has been created showing movement across Scotland.
It was created by a team of scientists at the University of Nottingham with hundreds of satellite radar images of the country.
See also: Scottish Highlands hit by biggest earthquake in 30 years
See also: Unidentifiable animal spotted in Scottish field
The map covers a two-year period from 2015 to 2017 and was created using Intermittent Small Baseline analysis, a satellite remote sensing technique.
It showed that small but significant rates of land motion are occurring across almost the entire landscape of Scotland.
Rural areas were found to be marked by subsidence over peatlands and landslides on steep slopes, while urban and industrialised areas showed the "effects of historical coal mining and engineering work".
The university team said such maps could inform regulations around fracking and oil and gas production.
Project leader Dr Stephen Grebby said: "Tracking ground motion is also important for a wide range of other applications such as monitoring infrastructure, and this is not just limited to Scotland.
"For example, our wide-area monitoring technique could be used to help identify and monitor ground instability issues along the whole stretch of the proposed HS2 route.
"This would provide information that could ultimately influence the plans for the final route for Phase 2 of HS2, or at least highlight existing ground instability issues that may need to be addressed during construction of the network."
Subsidence is shown on the map by red and yellow colours, with green representing stable ground across the majority of the country.
The map team said vast lowland and highland peatland areas have shown subsidence, thought to be linked to the levels of carbon stored in soils.
Dr Andy Sowter, chief technology officer of Geomatic Ventures Limited, the company that processed the satellite images, said: "If Scotland is to reach its climate change targets, which are currently under scrutiny by the UK Committee on Climate Change, land motion maps like this can provide vital evidence on the health of peatlands and with regular monitoring, the beneficial effect of peatland restoration towards improving the carbon balance."
|
https://www.aol.co.uk/2017/11/07/pioneering-motion-map-shows-scotland-to-be-on-the-move/
|
Determining the Quantum Expectation Value by Measuring a Single Photon (and other recent applications of weak measurements)
Quantum mechanics exhibits several peculiar properties, differentiating it from classical mechanics. One of the most intriguing is that variables might not have definite values. A complete quantum description provides only probabilities for obtaining various eigenvalues of a quantum variable. The eigenvalues and corresponding probabilities specify the expectation value of a physical observable, but they are known to be statistical properties of large ensembles. In contrast to this paradigm, we demonstrate a unique method that allows us to measure the expectation value of a physical variable on a single particle, namely, the polarization of a single protected photon. This is the first realization of quantum protective measurements [1,2], which are based on a combination of weak measurements and the quantum Zeno effect. Before discussing these issues, I will review the notion of weak measurements [3-5] and discuss their realization by presenting our previous experiment, where we measured two non-commuting observables on one and the same photon, using sequential weak measurements. I will conclude by discussing a few applications of these methods, both in metrology and in the study of foundational questions.
References
Y. Aharonov, L. Vaidman, Measurement of the Schrödinger wave of a single particle, Phys. Lett. A 178, 38 (1993).
Y. Aharonov, E. Cohen, Protective measurement, Post-selection and the Heisenberg representation, in Protective measurement and quantum reality: Towards a new understanding of quantum mechanics, Shan Gao (Ed.), Cambridge University Press (2014), arXiv: 1403.1084.
Y. Aharonov, D.Z. Albert, L. Vaidman, How the result of a measurement of a component of the spin of a spin-1/2 particle can turn out to be 100, Phys. Rev. Lett. 60, 1351 (1988).
Y. Aharonov, E. Cohen, A.C. Elitzur, Foundations and applications of weak quantum measurements, Phys. Rev. A 89, 052105 (2014).
Y. Aharonov, E. Cohen, A.C. Elitzur, Can a future choice affect a past measurement's outcome?, Ann. Phys. 355, 258-268 (2015).
F. Piacentini, M.P. Levi, A. Avella, E. Cohen, R. Lussana, F. Villa, A. Tosi, F. Zappa, M. Gramegna, G. Brida, I.P. Degiovanni, M. Genovese, Measuring incompatible observables of a single photon, Phys. Rev. Lett. 117, 170402 (2016).
|
https://physics.biu.ac.il/node/3832
|
In this vivid and compelling narrative, the Seven Years' War-long seen as a mere backdrop to the American Revolution-takes on a whole new significance. Relating the history of the war as it developed, Anderson shows how the complex array of forces brought into conflict helped both to create Britain's empire and to sow the seeds of its eventual dissolution.
Beginning with a skirmish in the Pennsylvania backcountry involving an inexperienced George Washington, the Iroquois chief Tanaghrisson, and the ill-fated French emissary Jumonville, Anderson reveals a chain of events that would lead to world conflagration. Weaving together the military, economic, and political motives of the participants with unforgettable portraits of Washington, William Pitt, Montcalm, and many others, Anderson brings a fresh perspective to one of America's most important wars, demonstrating how the forces unleashed there would irrevocably change the politics of empire in North America.
|
https://www.recordedbooks.com/title-details/9781541498020
|
Background
==========
Infertility is a common clinical problem. It affects 13% to 15% of couples worldwide \[[@B1]\]. The prevalence varies widely, being lower in developed countries and higher in developing countries, where resources for investigation and treatment are limited \[[@B2]\]. In the United Kingdom, it is estimated that one in six couples will complain of infertility \[[@B3]\].
In addition, infertility is also considered a public health problem. It affects not only the couples\' lives but also the healthcare services and the social environment \[[@B4]\]. The feelings experienced by infertile couples include depression, grief, guilt, shame, and inadequacy, along with social isolation.
Today, many patients do not receive the recommended medical care that is based on the best available evidence \[[@B5]\]. As new medical information accumulates every day faster than healthcare providers can absorb it, clinical guidelines were created to promote *up-to-date* evidence-based practice and to improve patients\' outcomes \[[@B6]\].
This study was carried out in accordance with the requirements of the University of Bristol Regulations and Code of Ethics for Research Programmes.
Methodology
-----------
This paper, as a comprehensive review, deploys a new strategy to translate research findings and evidence-based recommendations into a simplified, focused guide to be applied in routine daily practice. It is an approach to disseminate the recommended medical care of the infertile couple to practicing clinicians. To accomplish this, the literature was searched for the keywords \"*Management of infertility, infertile couples*\" at the library website of the University of Bristol (MetaLib), using a cross-search of different medical databases: the Allied and Complementary Medicine Database (AMED), BIOSIS Previews on Web of Knowledge, the Cochrane Library, PubMed (including Medline), and Web of Science, in addition to the relevant printed medical journals and periodicals. Guidelines and recommendations were retrieved from the best evidence reviews of the American College of Obstetricians and Gynaecologists (ACOG), American Society for Reproductive Medicine (ASRM), Canadian Fertility and Andrology Society (CFAS), European Society of Human Reproduction and Embryology (ESHRE), Human Fertilisation and Embryology Authority (HFEA), Royal College of Obstetricians and Gynaecologists (RCOG), and the World Health Organization (WHO).
Epidemiology of infertility
---------------------------
For healthy young couples, the probability of conception per reproductive cycle is about 20% to 25%. Their cumulative probabilities of conception are 60% within the first 6 months, 84% within the first year, and 92% within the second year of regular fertility-focused sexual activity.
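These cumulative figures can be loosely illustrated with a constant per-cycle fecundability model. This is only an illustrative sketch under a simplifying assumption: real per-cycle probability declines over successive cycles, which is why the model's outputs only roughly track the empirical 60%/84%/92% figures quoted above.

```python
# Illustrative sketch only: assumes a constant, independent probability of
# conception per cycle. Real fecundability declines over time, so these
# numbers only approximate the empirical figures quoted in the text.

def cumulative_conception_probability(per_cycle_p: float, cycles: int) -> float:
    """Probability of at least one conception within `cycles` attempts."""
    return 1 - (1 - per_cycle_p) ** cycles

for months in (6, 12, 24):
    p = cumulative_conception_probability(0.20, months)
    print(f"{months} cycles at 20% per cycle: {p:.1%}")
```

With a 20% per-cycle probability, the model gives roughly 74% at 6 cycles and 93% at 12, close to (but above) the observed 60% and 84%, illustrating why fecundability cannot really be constant.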
Several studies have reported different causes of infertility \[[@B7]-[@B9]\]. Some causes are more common in some countries than others, such as pelvic inflammatory diseases (PID) and sexually transmitted infections (STI) in Africa \[[@B10]\]. Some personal habits are considered risk factors for infertility, such as excess alcohol intake \[[@B11]\] and cigarette smoking \[[@B12]\].
According to the literature survey, the most common causes of infertility are: male factor \[[@B5],[@B7]-[@B9],[@B13]-[@B15]\] such as sperm abnormalities \[[@B9],[@B13],[@B15]\]; female factor \[[@B7]-[@B9],[@B14]-[@B16]\] such as ovulation dysfunction \[[@B7],[@B8]\] and tubal pathology \[[@B7]-[@B9]\]; combined male and female factors \[[@B7],[@B9],[@B14],[@B15]\]; and unexplained infertility, where no obvious cause can be detected \[[@B7]-[@B9]\].
As the rate of spontaneous pregnancy among infertile or subfertile couples is lower than that among the normal fertile population, it is recommended to carry out the following *evidence-based* diagnostic work-up to detect any hidden treatable cause.
History-taking
--------------
Couples with an infertility problem should be interviewed separately as well as together, to bring out important facts that one partner might not wish to disclose to the other. Full history-taking of both partners usually points to the underlying problem \[[@B17]-[@B23]\] (Appendix 1).
Clinical examination
--------------------
Full clinical examination of both partners usually reveals the underlying physical problem \[[@B17]-[@B22],[@B24]-[@B26]\] (Appendix 2). By the end of this step, most healthcare professionals will be able to sketch out a provisional diagnosis. Investigations are then requested to confirm the clinical diagnosis and to exclude other close possibilities.
Investigations
--------------
Infertile couples are usually advised to start their investigations after 12 months of trying to conceive, after 6 months if the female partner is more than 35 years old, or immediately if there is an obvious cause for their infertility or subfertility \[[@B16]\].
As the major causes of infertility are sperm abnormalities, ovulation dysfunction, and fallopian tube obstruction, the advised preliminary investigations for the infertile couple should focus on semen analysis (compared with the WHO reference values \[[@B27]\]), detection of ovarian function by hormonal assay (early follicular FSH and LH levels, and mid-luteal progesterone), and evaluation of tubal patency by hysterosalpingography (HSG) \[[@B17]-[@B32]\] (Appendix 3).
Many infertile couples have had some previous assessment of their infertility, and these data should be reviewed carefully. Further investigations may be requested according to the clinical presentation and the results of the preliminary tests. Omitting unnecessary investigations in particular couples can reduce the total cost of their infertility management without compromising their success rate. For example, in a woman with no history suggestive of previous pelvic inflammatory disease or endometriosis, there is no justification for requesting a laparoscopy, especially after a normal hysterosalpingography study \[[@B33]\]. Similarly, there is no need to test tubal patency in couples who will require an IVF or ICSI procedure.
In a woman with suspected chronic anovulation, most probably due to polycystic ovary (PCO) syndrome given a long history of irregular cycles and a clinical presentation with hirsutism, serum levels of testosterone, sex hormone binding globulin (SHBG), dihydroepiandrostenedione (DHEA), dihydroepiandrostenedione sulfate (DHEAS) and prolactin should be evaluated to confirm the provisional diagnosis and to detect the source of excess androgens. However, early referral of infertile couples to a dedicated specialist infertility clinic may be indicated to increase their chance of pregnancy (Table [1](#T1){ref-type="table"}).
Table 1. Criteria for early referral to a specialist infertility clinic

In women:
- Age: \< 35 years with \> 18 months infertility, or ≥ 35 years with \> 6 months infertility
- Length of menstrual cycle: \< 21 days or \> 35 days
- Menstrual abnormalities: amenorrhoea or oligomenorrhoea
- History of: ectopic pregnancy, pelvic infections (PID), endometriosis, pelvic surgery (e.g. ruptured appendix), developmental anomalies
- Abnormal P/V findings on examination
- Chlamydia antibody titre ≥ 1:256
- Mid-luteal progesterone \< 20 nmol/l
- FSH \> 10 IU/l in the early follicular phase
- LH \> 10 IU/l in the early follicular phase
- Patient request or anxiety

In men:
- History of: genital pathology, uro-genital surgery, sexually transmitted infections, varicocele, cryptorchidism, systemic illness, chemotherapy/radiotherapy
- Two abnormal results of semen analysis: sperm count \< 20 million/ml, sperm motility \< 25% (grade a), sperm motility \< 50% (grade b), or sperm morphology \< 15% normal forms
- Abnormal findings on genital examination
- Patient request or anxiety
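For illustration, the female referral criteria in the table above can be encoded as a simple screening helper. This is a hypothetical sketch, not a validated clinical tool: the function name, parameter names, and returned strings are all invented for this example, and only the numeric thresholds come from the table.

```python
# Hypothetical helper encoding the female early-referral criteria from the
# table above. Not a validated clinical tool; names and structure are
# illustrative, thresholds are taken directly from the table.

def needs_early_referral_female(age, months_infertile, cycle_len_days=28,
                                mid_luteal_progesterone=None,
                                early_follicular_fsh=None,
                                early_follicular_lh=None):
    """Return the list of referral criteria met (empty if none)."""
    reasons = []
    if age < 35 and months_infertile > 18:
        reasons.append("age >= 35 with > 6 months infertility".replace(">= 35", "< 35").replace("> 6", "> 18"))
    if age < 35 and months_infertile > 18:
        pass  # handled above
    if age >= 35 and months_infertile > 6:
        reasons.append("age >= 35 with > 6 months infertility")
    if cycle_len_days < 21 or cycle_len_days > 35:
        reasons.append("cycle length outside 21-35 days")
    if mid_luteal_progesterone is not None and mid_luteal_progesterone < 20:
        reasons.append("mid-luteal progesterone < 20 nmol/l")
    if early_follicular_fsh is not None and early_follicular_fsh > 10:
        reasons.append("early follicular FSH > 10 IU/l")
    if early_follicular_lh is not None and early_follicular_lh > 10:
        reasons.append("early follicular LH > 10 IU/l")
    return reasons

print(needs_early_referral_female(36, 8, cycle_len_days=40))
```

A real system would also need the history-based criteria (ectopic pregnancy, PID, endometriosis, etc.), which are omitted here for brevity.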
In some cases, the cause of infertility or subfertility cannot be suspected from the history-taking and clinical examination. In such circumstances, it is recommended not to prescribe any medication until all basic investigations are done and their results received.
Treatment
---------
The cost of infertility treatment is high \[[@B34]\]. For this reason, low-cost ART protocols have been attempted to reduce the overall cost of IVF by limiting the required laboratory investigations, modifying the stimulation regimen, and purchasing low-priced pre-used machines and instruments \[[@B35]\]. But the question that should be answered one day is: will the quality of the outcome be compromised by such an approach?
With the fast progress in reproductive medicine and the experience gained through infertility management, a wider range of treatment options has become available to infertile couples \[[@B17]-[@B19],[@B21]-[@B26],[@B31],[@B36]\] (Appendix 4). There are three main types of fertility treatment: medical treatment (such as ovulation induction therapy); surgical treatment (such as laparoscopy and hysteroscopy); and the different assisted reproduction techniques \[[@B37]\].
The choice of infertility treatment is often related to issues of efficacy, cost, ease of use or administration, and side effects. Legal, cultural, and religious considerations have limited the available choices in some countries, for example the use of donor sperm or oocytes.
The treatment options available to any particular infertile couple will also depend on the duration of their infertility, which partner is affected, the age of the female partner, whether either partner has previous children, the underlying pathological cause, and whether the treatment will be covered by the National Health Service (NHS) or self-funded.
Counselling
-----------
Fertility clinics should address the psycho-social and emotional needs of infertile couples as well as their medical needs. The content of counselling may differ depending on the concerned couple and the existing treatment options. It usually involves treatment implication counselling, emotional support counselling, and therapeutic counselling \[[@B38],[@B39]\].
Most infertile couples are aware from the media of what can be offered to them. Regrettably, this often leads to unrealistically high expectations of assisted reproduction techniques (ART) \[[@B40]\]. The chance of a live birth following treatment is nearly 50% \[[@B25]\]. It varies with age (the optimal female age is between 23 and 39 years) and with body weight (the ideal body mass index is between 19 and 30). Treatment is more successful in women who have previously been pregnant. There is no reliable means of predicting whether any treatment option will be successful, or after how many attempts. The available facilities and the skills of the personnel are the major determinants of the success rate.
An estimated 28% of all couples seeking reproductive assistance may have normal findings on clinical evaluation, making unexplained infertility an increasingly common provisional diagnosis. The predicted pregnancy rate for this group is about 5% after timed intercourse, 10% after superovulation with intrauterine insemination (IUI), and 15% to 25% after assisted reproduction techniques (ART) \[[@B41]\]. These rates are, of course, adjusted downwards for older women with long durations of infertility \[[@B42]\].
As treatment begins, couples may experience cycles of optimism and despair with each passing menstrual cycle. As treatment is prolonged, psychological suffering is likely to increase \[[@B6]\]. The treating doctor may feel inadequate, and the trust between doctor and patient may break down \[[@B43]\]. At this point, psychological consultation and support should be provided \[[@B44]\].
The incidence of congenital malformation in IVF babies ranges between 2% and 3% worldwide and is similar to that in babies conceived naturally \[[@B45]\]. However, there is a minimally increased risk of *de novo* chromosomal abnormalities in ICSI-born babies \[[@B46],[@B47]\], which necessitates counselling of the concerned couples.
Conclusions
===========
Infertility by itself is not life-threatening, but it has devastating psycho-social consequences for infertile couples. It remains a worldwide challenge. Management of infertility has been, and still is, a difficult medical task, not only because of the difficulty of diagnosing and treating the reproductive disorders in each partner and the poorly understood interaction between the partners\' fertility potentials, but also because the success of treatment is a clearly identifiable entity: the achievement of pregnancy. The treating doctor counselling a couple about their infertility must be familiar with the causes, the investigations, and the treatment options available. The couple needs to be given realistic information about their chances of having a live birth, as well as the risks and costs of the management plan and its alternatives. By following the proposed *evidence-based* management protocol stated in this paper, infertile couples will have a good chance of starting their treatment in the proper way, at an early time, and with sufficient financial support through reducing the money spent on unnecessary investigations.
Competing interests
===================
The authors declare that they have no competing interests.
Authors\' contributions
=======================
This work was done by RMK with no contribution from any other authors.
Appendix 1. Focused history taking for infertile couple
=======================================================
**Female and Male Partners**
----------------------------
Female Partner
**Present history**: the current problem/complaint, age, occupation, recent cervical smear findings, breast changes such as milk-like discharge, excessive hair growth with or without acne on face and chest, hot flushes, eating disorders, any current associated medical illness such as diabetes and/or hypertension, drug intake whether prescribed (e.g. non-steroidal anti-inflammatory drugs (NSAIDs), sex steroids, and cytotoxic drugs) or recreational (e.g. marijuana and cocaine), smoking, alcohol, and caffeine consumption
**Menstrual history**: for age of menarche, cycle characteristics and any associated symptoms as painful menstruation or intermenstrual spotting. History of primary or secondary amenorrhoea
**Obstetric history**: previous pregnancies, *if any*, and its outcome, recurrent pregnancy loss, induced abortion, post-abortive infection or puerperal sepsis
**Contraceptive history**: previous use of any contraceptive method, particularly intrauterine system, and any associated problems
**Sexual history**: Coital frequency, timing in relation to the cycle, use of vaginal lubricant before, or vaginal douching after, coitus, loss of libido, as well as, any associated problems as difficult or painful coitus
**Past history**: medical or surgical as pelvic infection, tuberculosis, bilharziasis, ovarian cyst, appendicectomy, laparotomy, caesarean sections, and cervical conization. Ask about her rubella status
**Family history**: for similar problem among the female members, consanguinity, diabetes mellitus, hypertension, twins delivery, breast cancer
Male Partner
**Present history**: the current problem/complaint, age, occupation, previous seminal analysis findings, breast changes as enlargement, any current associated medical illness as diabetes and/or hypertension, drug intake prescribed or recreational, smoking, alcohol, and caffeine consumption
**Sexual history**: Coital frequency, timing, and any associated problems as erectile dysfunction or ejaculatory problems, loss of libido. History of previous marriage or extra-marital sexual relations
**Contraceptive history**: previous use of any contraceptive method either temporary as condom or permanent as vasectomy
**Past history**: medical disease or surgical operations as mumps, tuberculosis, bilharziasis, sexually-transmitted infections, hydrocele, varicocele, undescended testis, appendicectomy, inguinal hernia repair, or bladder-neck suspension operations
**Family history**: for similar problem among the male members, consanguinity, diabetes mellitus, and hypertension
Appendix 2. Focused clinical examination for infertile couple
=============================================================
**Female and Male Partners**
----------------------------
Female Partner
**General Examination**: vital signs (especially blood pressure), body height and weight (BMI = weight in kilograms divided by the square of height in metres) for over- or underweight, secondary sexual characters, any excessive hair with/without acne on face or chest, and acanthosis nigricans. Abnormal skin depigmentation such as vitiligo may suggest an autoimmune systemic disease. Examination should also include the thyroid gland
**Breast Examination**: to evaluate its development and to exclude any pathology or presence of occult galactorrhoea
**Chest Examination**: for lungs and heart
**Abdominal Examination**: for any abdominal mass, organomegaly, ascites, abdominal striae, and surgical scars
**Genital Examination**: type of circumcision, size and shape of clitoris, hymen, vaginal introitus, site, size, shape, surface, consistency, mobility and direction of uterus, any palpable adnexal mass, vaginal discharge, tenderness, uterosacral ligament thickening, and nodules in the cul-de-sac denoting either endometriosis or tuberculosis by per-vaginal (PV) examination
Male Partner
**General Examination**: vital signs (especially blood pressure), body height and weight (BMI), arm-span, secondary sexual characters, and examination of thyroid gland.
**Breast Examination**: for gynaecomastia
**Abdominal Examination**: for any abdominal mass, undescended testis, inguinal hernia, organomegaly, or ascites
**Genital Examination**: shape and size of penis, prepuce, position of external urethral meatus, testicular volume (by using Prader\'s Orchidometer. Normally 25 ml = 3 × 5 cm), palpation of epididymis and vas deferens, exclude varicocele or hydrocele. Perineal sensation, rectal sphincter\'s tone, and prostate enlargement by per-rectal (PR) examination
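The BMI figure used in the examinations above, and in the treatment thresholds later in this paper (obesity counselling above 29, an ideal fertility range of roughly 19 to 30), is weight in kilograms divided by the square of height in metres. A minimal sketch of the calculation:

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

# Example: 70 kg at 1.65 m gives a BMI of about 25.7, inside the
# 19-30 range cited in this paper as ideal for fertility treatment.
print(round(bmi(70, 1.65), 1))  # 25.7
```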
Appendix 3. Focused investigations for infertile couple
=======================================================
**Female and Male Partners**
----------------------------
Female Partner
Basic Investigations
**General**: Full blood count, urine analysis, Papanicolaou smear, vaginal wet mount with appropriate culture, Rubella serology, Hepatitis B and C, HIV serology, and *Chlamydia trachomatis* serology
**Hormonal assay**: to predict ovulation and ovarian reserve. Mid-luteal serum progesterone level (5-10 days before the expected menstrual cycle). FSH, LH (twice if female age \> 38 years, on day 2-5 of the menstrual cycle). The use of basal body temperature (BBT) charting and ovulation predictor home kits is not recommended
**Transvaginal ultrasonography**: to monitor natural ovulation, to detect any pelvic pathology as uterine or ovarian masses, abnormally-shaped or mal-directed uterus. No need for ultrasound scanning of endometrium
**Hysterosalpingography** or **Hysterosalpingo-Contrast-Sonography (HyCoSy)**: to evaluate the shape of the uterine cavity and the patency of both fallopian tubes in low-risk women
Advanced Investigations
**Hormonal assay**: Prolactin (if cycles are irregular with/without galactorrhoea or pituitary adenomas). Thyroid function tests (for women with symptoms of thyroid disease). Testosterone, SHBG, DHEA and DHEAS (for suspected cases with PCO syndrome)
**Laparoscopy**: for possible associated pelvic pathology or adhesions in cases with abnormal HSG findings, previous history of pelvic inflammatory disease or endometriosis
**Hysteroscopy**: for intrauterine space-occupying lesions detected on HSG as adhesions or polyp (no evidence linking it with enhanced fertility)
**Chromosomal karyotyping**: for suspected genetic disorders as Turner\'s syndrome
Male Partner
Basic Investigations
**General**: Full blood count, Hepatitis B and C, HIV serology, and *Chlamydia trachomatis* serology
**Semen analysis** (after 72 hours of sexual abstinence): interpreted for volume, sperm count, motility, and morphology according to the WHO reference values (two analyses 3 months apart at the same lab)
Advanced Investigations
**Post-coital test**: no predictive value on the pregnancy rate
**Anti-sperm antibodies** (no evidence of effective treatment to improve fertility), and **Sperm function tests**
**Hormonal assay**: FSH, LH, Testosterone, TSH and Prolactin (for male with abnormal seminal analysis and suspected endocrine disorder)
**Testicular biopsy**: A fine-needle aspiration biopsy may be required to differentiate between obstructive and non-obstructive azoospermia
**Chromosomal karyotyping**: for suspected genetic disorders as sex chromosomal aneuploidy, cystic fibrosis, and deletion of Y-chromosome
Appendix 4. Treatment Options for Infertile couple
==================================================
**Female and Male Partners**
----------------------------
Female Partner
Non-Invasive Treatment
**Counselling**: for regular intercourse 2-3 times/week. Give up smoking, do not drink more than 1-2 units of alcohol/week, do not use any addictive drugs, and follow a supervised weight-loss programme if obese (BMI \> 29). Folic acid 0.4 mg should be provided as a daily supplement to prevent neural tube defects (5.0 mg is advised for women who have a previously affected child or are on medication for epilepsy). Rubella vaccination if seronegative (avoid pregnancy for one month). Treat any psycho-sexual problem if present
**Induction of Ovulation**: for women with ovulatory dysfunctions. Provide a controlled ovarian stimulation for assisted reproduction techniques
**Intra-uterine Insemination (IUI)**: not regulated by the Human Fertilisation and Embryology Authority (HFEA) in the United Kingdom. It can be used for unexplained infertility and for female cases with minimal endometriosis
**Surrogacy**: In women with congenital absence of uterus or after surgical removal
Invasive Treatment
**Tubal surgery**: as laparoscopic adhesiolysis, tubal cannulation or catheterisation
**Hysteroscopic surgery**: as resection of IU adhesions or polyp
**In-vitro Fertilisation (IVF)**and **Embryo transfer (ET)**: Used procedure for female tubal factor, moderate male factor, and for unexplained infertility
**Gamete Intra-fallopian Transfer (GIFT)**and **Zygote Intra-fallopian Transfer (ZIFT)**: are not regulated by HFEA. For cases who refuse fertilisation in lab
**Oocyte donation**and **Ovarian tissue transplantation**: for premature ovarian failure
Male Partner
Non-Invasive Treatment
**Counselling**: for regular intercourse 2-3 times/week, give-up smoking, reduce alcohol intake to 3-4 units/week, not to use any addictive drugs, wear loose fitting underwear and trousers, and avoid occupational or social situations that might cause testicular heating. Treat any psycho-sexual problem if present
**Intra-uterine Insemination (IUI)**: Used for mild male factor infertility problems
**Donor insemination**: for azoospermia, HIV-positive status, or severe male factor infertility where the couple declines ICSI as a management option
**Adoption**: For cases with recurrent unexplained failed IVF cycles
Invasive Treatment
**Surgical restoration of duct patency**: For cases with previous vasectomy
**Intra-cytoplasmic Sperm Injection (ICSI) ± PGD**: Commonly used procedure for severe male factor or for recurrent unexplained failed IVF cycles. Surgical sperm retrieval (SSR) usually done by percutaneous epididymal sperm aspiration (PESA), testicular sperm aspiration (TESA), testicular sperm extraction (TESE), or microsurgical epididymal sperm aspiration (MESA).
Acknowledgements
================
I would like to express my gratitude to **Uma Gordon**, Consultant in Reproductive Medicine and Surgery, Bristol Centre for Reproductive Medicine (BCRM), Southmead Hospital, University of Bristol, United Kingdom, for her scientific inputs. This work was self-funded. I did not receive any financial funding or support from any person or institution.
| |
Cost-effectiveness of hepatitis A vaccination for adults in Belgium. (Vaccine, 2012)
The results indicated that these expanded vaccination strategies are not cost-effective in the epidemiological circumstances of a typical low-endemic western country.
Methodological reviews of economic evaluations in health care: what do they target? (The European Journal of Health Economics, 2013)
The aim of this study is to analyze the methodological characteristics of the reviews of economic evaluations in health care, published during the period 1990–2010, to identify their main features and the potential missing elements.
Hepatitis A immunisation in persons not previously exposed to hepatitis A. (The Cochrane Database of Systematic Reviews, 2012)
The risk of both non-serious local and systemic adverse events was comparable to placebo for the inactivated HAV vaccines, and the clinical effectiveness of both inactivated and live attenuated HAV vaccines was confirmed.
Public health impact and cost effectiveness of routine childhood vaccination for hepatitis A in Jordan: a dynamic model approach (BMC Infectious Diseases, 2018)
A vaccination program covering one-year-old children is projected to be a cost-saving intervention that will significantly reduce the public health and economic burden of hepatitis A in Jordan.
Considerations on the Current Universal Vaccination Policy against Hepatitis A in Greece after Recent Outbreaks (PLoS ONE, 2015)
Data suggest that universal vaccination may need to be reconsidered in Greece and that a more cost-effective approach would be to implement a program that includes: a) vaccination of high-risk groups, b) universal vaccination of Roma children and improving conditions at Roma camps, c) education of the population and travel advice, and d) enhancement of control measures to increase the safety of shellfish and other foods.
Dynamic versus static models in cost-effectiveness analyses of anti-viral drug therapy to mitigate an influenza pandemic. (Health Economics, 2010)
Estimates the effects of therapeutic treatment with antiviral (AV) drugs during an influenza pandemic in the Netherlands, focusing on the sensitivity of the cost-effectiveness ratios to model choice, to the assumed drug coverage, and to the value of several epidemiological factors.
Seroprevalence and susceptibility to hepatitis A in the European Union and European Economic Area: a systematic review. (The Lancet Infectious Diseases, 2017)
The review supports the need to reconsider specific prevention and control measures, to further decrease HAV circulation while providing protection against the infection in the EU and EEA, and could be used to inform susceptible travellers visiting EU and EEA countries with different HAV endemicity levels.
|
https://www.semanticscholar.org/paper/Cost-Effectiveness-Analyses-of-Hepatitis-A-Vaccine-Anonychuk-Tricco/48c9a93aef0177d639c32d6ac6525392f49ad288
|
As we celebrate May Day, Joseph Daher reminds us of the international character of our celebrations and the importance of solidarity within them as the current neoliberal attacks continue around the world.
The revolutionary processes in the Middle East and North Africa are continuing despite the difficulties and the national and international attempts to limit the objectives of the peoples of the region, whether on democratic or socio-economic issues. The importance of the workers' struggle in the uprisings is often neglected, while the depth of the social question and its impact on the outbreak of the revolutions is certainly the dimension most overshadowed by Western media, and even in the Arab world. These popular uprisings, arising out of the financial and economic crisis, are indeed also a revolt against the neoliberal policies imposed by authoritarian regimes and encouraged by international financial institutions like the International Monetary Fund (IMF) and the World Bank (WB).
Neoliberal policies imposed on these populations were used to dismantle and increasingly weaken the public services in these countries, including the elimination of subsidies, especially for basic necessities, while accelerating the privatization process, often in favor of the ruling and bourgeois classes linked to the regime.
These neoliberal policies have impoverished the societies as a whole, particularly affecting the students and the workers. These two groups are now leading the protests.
In Tunisia, the Tunisian General Labour Union (UGTT) has often been a driving force of the opposition against the authoritarian regimes. In 2008 it was members of the UGTT who were at the source of the uprising of miners in the region of Gafsa. They supported the movement for over a year.
In Egypt, the country experienced the largest social movement since the Second World War in the year 2000's with strikes and occupations in various sectors of the society. Strikes in Mahala el Kubra’s industries in 2008 gathered workers and activists and was presented by many as a rehearsal for 2011. These events also showed the strength of the labor movement despite the crackdown by security forces.
The unions and workers’ movements played an important role in the revolutions, particularly in Tunisia and Egypt. The general strike proclaimed in Tunisia on January 11 and the days of general strikes conducted throughout Egypt were decisive in bringing down the dictators of these countries.
In Egypt, by January 2012, over 300 independent unions gathering more than two million workers had been formed since the fall of Mubarak. The role of the workers in the revolution has been demonstrated by continuous workplace occupations and strike movements, and by their participation in a number of other demonstrations. Several companies have been nationalized and returned to the hands of the state following strikes and occupations by workers.
In Tunisia, the UGTT’s activism on social and democratic issues has come under attack from the government led by the El Nahda movement. Since the fall of Ben Ali, the latter has repeatedly declared that socio-economic demands are, at this point in time, counter-revolutionary. El Nahda has also accused the UGTT of being the cause of the country’s economic crisis because of its activism.
On February 25, more than 4,000 members of Tunisia’s main trade union, the UGTT, marched through the center of the capital to denounce the Islamist-led government. The UGTT demonstrated against the government to denounce several attacks on its offices around the country and the dumping of rubbish outside the union’s headquarters in Tunis, which it blamed on members of El Nahda. Police fired tear gas to disperse the protest after it exceeded its time limit.
This followed weeks of unprecedented escalation in statements and governmental initiatives, which El Nahda is leading, aiming to demonize the opposition as well as attempts to criminalize protest movements.
In Egypt, the workers have also been attacked by the ruling elite, especially the military and the Muslim Brotherhood. The Supreme Council of the Armed Forces (SCAF) has in fact implemented laws criminalizing strikes, protests, demonstrations and occupations that affect the economy. The Muslim Brotherhood supported these laws.
In Syria, this degradation of the living standards of the majority, coupled with political repression, led to an era of protests, often over economic issues, from 2002 onwards. In May 2006, hundreds of workers of the Public Building Company held a demonstration in Damascus, clashing with security forces, while taxi drivers in Aleppo went on strike.
Successful campaigns of general strikes and civil disobedience in December 2011, which paralyzed large parts of Syria, also show the activism of the working class and the exploited, who are indeed at the heart of the Syrian revolution. For this reason, the dictatorship laid off more than 85,000 workers between January 2011 and February 2012 and closed 187 factories (according to official figures) in order to break the dynamic of protest.
The labor movement in Bahrain was a spearhead against the dictatorship of the regime, and many unionists were sentenced to long prison terms for their active participation in the protest movement, notably following strikes that paralyzed various sectors. On March 14, Saudi troops, supplemented by Bahraini security forces, intervened in the capital Manama to suppress the popular movement. As late as last November, nearly 3,000 workers in the public and private sectors had been dismissed because of their participation in the uprising, and the heads of various unions have been targeted by the regime and imprisoned. The determination of workers in Bahrain has not diminished and continues to this day.
This workers’ holiday and the May 1 keynote speeches should remind us all of the internationalist character of this celebration, and therefore of our solidarity with our comrades in the revolutionary process! The celebration is more relevant than ever as neoliberal attacks against workers continue all over the world. It reminds us of the role of the working class in defending a society that is progressive and not subservient to profit!
Long live the permanent revolution!
|
https://www.counterfire.org/articles/opinion/15743-may-day-an-international-celebration-of-solidarity
|
On January 24, Scottish Renewables’ yearly Offshore Wind Conference will take place in Glasgow.
The event will be the second delivered in partnership with the Offshore Renewable Energy Catapult, and will feature speakers from both organisations as well as wider industry.
The conference, to be held in Glasgow on January 24, will focus on the roles of innovation, cost reduction, planning, infrastructure and supply chain in delivering the next phase of the UK’s offshore wind fleet.
Speakers include:
- Dame Anne Glover – Vice-Principal External Affairs & Dean for Europe at the University of Aberdeen, and Non-Executive Director, Offshore Renewable Energy Catapult
- Paul Wheelhouse MSP – Scottish Government Minister for Business, Innovation and Energy
- Jonathan Cole – Managing Director, ScottishPower Renewables and Chairman, Offshore Wind Programme Board
- Sarah Pirie – Head of Development, EDP Renewables
- Sebastian Bringsværd – Head of Hywind Development, New Energy Solutions, Statoil ASA
- Una Brosnan – Growth Manager, Offshore Wind, Power & Renewables, Atkins
- Andrew Jamieson – Chief Executive, Offshore Renewable Energy Catapult
- Ray Thompson – Head of Business Development, Siemens Wind Power
Lindsay Roberts, Senior Policy Manager at Scottish Renewables, said: “Scottish Renewables’ Offshore Wind Conference will bring together the industry’s leading players to discuss and debate the key issues facing the sector.
“After the success of 2016’s event, the 2017 conference will again be delivered in partnership with the Offshore Renewable Energy Catapult, whose expertise in innovation and testing is helping cement the UK’s position as a world-leader in offshore renewables.
“The conference will look at innovations such as 2-B’s two-bladed turbine, Statoil’s Hywind project and cutting edge developments in operations and maintenance.
“We’ll also discuss challenges presented by different consenting regimes and those faced by Scotland’s supply chain.”
Offshore wind developers, local and national government, academia, public bodies, finance, advisory, legal and consultancy representatives are all expected to attend the event, which is being held in the University of Strathclyde’s Technology & Innovation Centre in central Glasgow.
Andrew Jamieson, Chief Executive of ORE Catapult, said: “Following the success of the 2016 conference we are pleased to partner once again with Scottish Renewables, helping to bring together key industry figures to discuss some of the most important opportunities and challenges facing our sector today.
“The Scottish renewables industry has a huge role to play in bringing growth and prosperity to the Scottish and UK economies and in ensuring the continued success of the UK offshore wind industry.”
The event will also feature an exhibition, at which spaces are still available. Current exhibitors include:
- Aggreko
- DNV GL
- Flintstone Technology
- Highlands and Islands Enterprise
- Invest in Fife
- Marine Scotland
- TNEI Services
- NewWaves Solutions
- Scottish Enterprise
- Shepherd + Wedderburn
- Sparrows
- The Crown Estate
- The Offshore Renewable Energy Catapult
A Burns Supper-themed dinner will be held the evening before the conference and will feature elements of a traditional Burns Night including readings of poetry and songs written by Scotland’s national poet, Robert Burns.
To book tickets for the conference or dinner, or space for an exhibition stand, please contact Lisa Russell ([email protected] or 0141 353 4986) or visit bit.ly/SRoffshore17.
Energy
7 New Technologies That Could Radically Change Our Energy Consumption
Most of our focus on technological development to lessen our environmental impact has been focused on cleaner, more efficient methods of generating electricity. The cost of solar energy production, for example, is slated to fall more than 75 percent between 2010 and 2020.
This is a massive step forward, and it’s good that engineers and researchers are working for even more advancements in this area. But what about technologies that reduce the amount of energy we demand in the first place?
Though it doesn’t get as much attention in the press, we’re making tremendous progress in this area, too.
New Technologies to Watch
These are some of the top emerging technologies that have the power to reduce our energy demands:
- Self-driving cars. Self-driving cars are still in development, but they’re already being hailed as a potential way to eliminate a number of problems on the road, including the epidemic of distracted driving ironically driven by other new technologies. However, even autonomous vehicle proponents often miss the tremendous energy savings that self-driving cars could bring. With a fleet of autonomous vehicles at our beck and call, consumers will spend less time driving themselves and more time carpooling, dramatically reducing overall fuel consumption once the technology is fully adopted.
- Magnetocaloric tech. The magnetocaloric effect isn’t exactly new—it was actually discovered in 1881—but it’s only recently being studied and applied to commercial appliances. Essentially, this technology relies on changing magnetic fields to produce a cooling effect, which could be used in refrigerators and air conditioners to significantly reduce the amount of electricity required.
- New types of insulation. Insulation is the best asset we have to keep our homes thermoregulated; it keeps cold or warm air in (depending on the season) and keeps warm or cold air out. New insulation technology has the power to improve this efficiency many times over, decreasing our need for heating and cooling. For example, some new automated sealing technologies can seal gaps between 0.5 inches wide and the width of a human hair.
- Better lights. Fluorescent bulbs were a dramatic improvement over incandescent bulbs, and LEDs were a dramatic improvement over fluorescent bulbs—but the improvements may not end there. Scientists are currently researching even better types of light bulbs, and more efficient applications of LEDs while they’re at it.
- Better heat pumps. Heat pumps are built to transfer heat from one location to another, and can be used to efficiently manage temperatures—keeping homes warm while requiring less energy expenditure. For example, some heat pumps are built for residential heating and cooling, while others are being used to make more efficient appliances, like dryers.
- The internet of things. The internet of things and “smart” devices are another development that can significantly reduce our energy demands. For example, “smart” windows may be able to respond dynamically to changing light conditions to heat or cool the house more efficiently. There are several reasons for this improvement. First, smart devices automate things, so it’s easier to control your energy consumption. Second, they track your consumption patterns, so it’s easier to conceptualize your impact. Third, they’re often designed with efficiency in mind from the beginning, reducing energy demands even without the high-tech interfaces.
- Machine learning. Machine learning and artificial intelligence (AI) technologies have the power to improve almost every other item on this list. By studying consumer patterns and recommending new strategies, or automatically controlling certain features, machine learning algorithms have the power to fundamentally change how we use energy in our homes and businesses.
Making the Investment
All technologies need time, money, and consumer acceptance to be developed. Fortunately, a growing number of consumers are becoming enthusiastic about finding new ways to reduce their energy consumption and overall environmental impact. As long as we keep making the investment, our tools to create cleaner energy and demand less energy in the first place should have a massive positive effect on our environment—and even our daily lives.
Energy
Responsible Energy Investments Could Solve Retirement Funding Crisis
Retiring baby-boomers are facing a retirement cliff, at the same time as mother nature unleashes her fury with devastating storms tied to the impact of global warming. There could be a unique solution to the challenges associated with climate change – investments in clean energy from retirement funds.
Financial savings play a very important role in everyone’s life, and one must start planning for them as soon as possible. It’s shocking how quickly seniors can burn through their nest egg – leaving many wondering, “How long will your retirement savings last?”
Let’s take a closer look at how seniors can take baby steps on the path to retiring with dignity, while helping to clean up our environment.
Tip #1: Focus & Determination
As with any other endeavor, it is very important to stay focused and determined. If retirement is around the corner, make sure to start putting some money away now. No one can ever achieve anything without dedication and focus – whether it’s saving the planet or saving for retirement.
Tip #2: Minimize Spending
One of the most important things that you need to do is to minimize your expenditures. Reducing consumption is good for the planet too!
Tip #3: Visualize Your Goal
You can achieve more if you have a clearly defined goal in life. Think about how your money can be used to better the planet – imagine cleaner air, water and a healthier environment to leave to your grandchildren.
Investing in Clean Energy
One of the hottest and most popular industries for investment today is the energy market – the trading of energy commodities. Clean energy commodities are traded alongside dirty energy supplies. You might be surprised to learn that clean energy is becoming much more competitive.
With green biz becoming more popular, it is quickly becoming a powerful tool for diversified retirement investing.
The Future of Green Biz
As far as the future is concerned, energy businesses are going to continue getting bigger and better. There are many leading energy companies in the market that already have very high stock prices, yet people continue to invest in them.
Green initiatives are impacting every industry, and Go Green campaigns are a PR staple of every modern brand. For the energy sector in the US, solar energy investments are considered the most accessible form of clean energy investment. Though investing in any energy business comes with some risks, the demand for energy isn’t going anywhere.
In conclusion, if you want to start saving for your retirement, then clean energy stocks and commodity trading are some of the best options for wallets and the planet. Investing in clean energy products, like solar power, is a more long-term investment. It’s quite stable and comes with a significant profit margin. And it’s amazing for the planet!
Trending
-
Energy3 weeks ago
How Much Energy Does Bitcoin Use, Really?
|
http://blueandgreentomorrow.com/environment/glasgow-welcomes-offshore-wind-conference/
|
Remarkably, many South Florida residents and visitors find it difficult to gain access to Biscayne Bay. There are many reasons, among them, limited public areas and lack of parking, signs and information. Moreover, knowledge of the bay’s resources and challenges is not common, even among long-time South Floridians. Greater access and public awareness are essential keys to preserving a healthy Biscayne Bay for future generations.
Access to the bay provides opportunities for recreation and education, and these experiences help build a sense of community pride and stewardship of Biscayne Bay. Everyone, regardless of economic or social circumstances, deserves safe access and opportunities for responsible use of the bay. Yet public access should not harm the bay’s natural resources or reduce the value of the bay user experience. As more people use the bay, it is important to expand education and provide additional protection to sites of high environmental value through limits on types of use, timing or number of users, as well as improved enforcement to ensure compliance.
There is a great need for environmental education programs and activities specifically focused on Biscayne Bay and reaching community residents of all ages. Local schools’ environmental awareness programs are a good start, but additional hands-on teaching that brings students closer to the bay is also needed. Environmental education programs should be geared to the many different audiences and cultures in our community. Information and signage should reflect the languages and cultures of the community to provide information useful to all. Finally, special effort needs to be given to educating community and neighborhood leaders so they can both understand the impacts of their decisions and help educate their constituents about the importance of the bay.
To encourage greater public access to the bay and community education about bay issues, the Biscayne Bay Partnership Initiative has recommended these actions:
* Sections of the Biscayne Bay and Miami River shoreline currently planned for intense development should include space for activities that are water-dependent or water-related with green areas that enhance habitat and public access.
* Public lands, causeways and public parks along Biscayne Bay should provide opportunities for public recreational and educational experiences, and should provide all people with safe access and opportunities for responsible use of the bay. Public access must be consistent with the need to protect the bay.
* A central clearinghouse should collect and disseminate available information about the bay.
* Biscayne Bay education efforts should encompass at least five groups: (1) primary, secondary and post-secondary students and educators; (2) the general public, with an emphasis on minority groups; (3) public officials; (4) direct users of the bay, with an emphasis on boaters; and (5) tourists.
* The Florida legislature should continue to provide funding for public education and outreach regarding the long-term health of the Biscayne Bay ecosystem and its importance for South Florida.
|
http://www.discoverbiscaynebay.org/access-and-education.htm
|
What To Do First After Involved In A Vehicle Accident?
Every day as we drive, regardless of how cautious we are, how well maintained our cars are, or how good the road is, there is always the risk of an accident. No one has any influence over this situation. When we are involved in an accident, the best we can do is educate ourselves about what to do to make sure that we and all other road users are on the safe side. So, in case you find yourself in this fateful situation, here is what you should do.
1. Make Sure You and Everyone Else Is Okay
Loss of life is the most devastating consequence of an accident. Every year, well over a million lives are lost in road accidents worldwide, some of which could have been avoided if those present at the scene had taken the right precautions or actions. After an accident, one of the first things you should do is make sure that you, your passengers, and everybody else involved in the crash are all okay. Regardless of whether anyone seems to be in good health, call an ambulance. Some injuries are not immediately apparent but can manifest later, and only qualified medical practitioners can determine this.
2. Have an Accident Attorney in Mind
In case you need to seek compensation for your injury or property loss after the accident, it will be in your best interests to get an accident attorney’s guidance. An experienced motor vehicle accident lawyer can help make life easier for you in various ways from the moment you get into the car crash. As you heal and recover from the trauma, they will be taking care of the legal aspects of the issue and representing your interests. This could involve gathering evidence, establishing grounds for your case, and filing compensation claims. They will also advise you accordingly on whether you need to file a lawsuit or settle the matter out of court.
3. Call the Police and Possibly Gather Information
If you’re involved in an accident, it’s always a good idea to call 911. This will ensure that the authorities get to the scene as soon as possible to conduct inspections and create an accident report before any evidence is tampered with. The information they collect is generally crucial, especially if the accident results in a lawsuit. If, for example, you are unable to collect facts or evidence from the scene, the police will take care of it for you. The report they produce might prove extremely useful later.
4. Seek Medical Care
It’s always a good idea to seek medical help, whether you’re feeling well or not. This is because you never know if any underlying injuries will show up later. Most of the time, you might appear to be in great health only to discover later that you have other internal injuries that you were unaware of. The results would be disastrous for both your health and your finances.
The doctor will keep a record of your injuries and the cost of care while you are in the hospital. The report they produce will often be useful in court, especially when it comes to bargaining for compensation. To supplement all of the other documentation that will be available, maintain a journal of all incidents.
5. Have a Chat with Your Insurance Provider
When you are involved in an accident, most insurance providers will ask you to file a claim. If this is the case, file your claim as soon as possible, and make use of your attorney to ensure that the responsible insurance company fulfills its responsibilities. They will advise you on what to do to make sure your claim is successful. In most cases, if they act on your behalf, you stand a higher chance of getting compensated than if you represented yourself. Make sure to adhere to your insurance company’s policies so your claim doesn’t get rejected.
6. Try Negotiating With the Defendant
Obtaining compensation via the court system can be a time-consuming and expensive process. This is why attempting to reach an out-of-court settlement with the defendant or their insurance company is sometimes a viable approach. However, you will want your legal counsel present in the negotiation process. This will help protect your legal rights and ensure you receive appropriate compensation for your injuries or damages. Arbitration can help reach an agreement so both parties can save time and money they would have spent on court fees.
It’s never a pleasant experience being a victim of an accident. All the same, you need to be aware of what to do in case you find yourself in such a situation. This piece has highlighted some of the actions to take as soon as you become an accident victim.
|
https://sflcn.com/what-to-do-first-after-involved-in-a-vehicle-accident/?noamp=mobile
|
Description: A glucose meter is a medical device for determining the approximate concentration of glucose in the blood. A small drop of blood, obtained by pricking the skin with a lancet, is placed on a disposable test strip that the meter reads and uses to calculate the blood glucose level.
1. What is Continuous Glucose Monitoring (CGM)?
Continuous Glucose Monitoring (CGM) is a method to track glucose levels throughout the day and night. CGM systems take glucose measurements at regular intervals, 24 hours a day, and translate the readings into dynamic data, generating glucose direction and rate of change reports.
2. How does a glucose monitor work?
Glucometers provide readings by detecting the level of glucose in a person's blood. To get a reading, a person pricks the skin — most commonly, a finger — and applies the blood sample gained to a test strip inserted in the meter. The glucose in the blood reacts with the chemicals in the strip.
3. How do you monitor blood sugar levels?
How to check with a meter:
- After washing your hands, insert a test strip into your meter.
- Use your lancing device on the side of your fingertip to get a drop of blood.
- Touch and hold the edge of the test strip to the drop of blood, and wait for the result.
- Your blood glucose level will appear on the meter's display.
4. Can you check blood sugar without blood?
Some newer diabetes monitors can read your blood sugar without any blood. CNBC's Erin Black, who has Type 1 diabetes, put one to the test to see if it's as accurate as previous glucose monitors that require a finger prick and blood.
5. What is an average glucose level?
Normal blood sugar levels are less than 100 mg/dL after not eating (fasting) for at least eight hours. ... For most people without diabetes, blood sugar levels before meals hover around 70 to 80 mg/dL. For some people, 60 is normal; for others, 90 is the norm.
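The mg/dL figures above follow the US convention; most other countries report glucose in mmol/L. The two scales are related by glucose's molar mass (about 180.16 g/mol), so dividing a mg/dL reading by 18.016 gives mmol/L. A minimal Python sketch of the conversion (the function names are illustrative, not from any particular meter's software):

```python
MG_PER_MMOL = 18.016  # derived from glucose's molar mass of ~180.16 g/mol

def mgdl_to_mmoll(mgdl: float) -> float:
    """Convert a glucose reading from mg/dL (US) to mmol/L."""
    return mgdl / MG_PER_MMOL

def mmoll_to_mgdl(mmoll: float) -> float:
    """Convert a glucose reading from mmol/L to mg/dL."""
    return mmoll * MG_PER_MMOL

# A fasting reading of 100 mg/dL is roughly 5.6 mmol/L.
print(round(mgdl_to_mmoll(100), 1))  # prints 5.6
```

This is why a "normal" fasting level is quoted as under 100 mg/dL in the US but as under about 5.6 mmol/L elsewhere.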
6. What happens if your blood sugar is over 400?
Coma and death can occur simply because the blood sugar is so high. HHS (hyperosmolar hyperglycemic state) usually occurs with blood sugar readings above 400 mg/dL, as the brain and other functions begin to shut down. When insulin levels are low, the body cannot use the glucose present at high levels in the blood.
7. What does a glucose monitor cost in India?
The cost of a glucose monitor in India varies according to how many test strips are usually provided with it, but the average range is INR 1,000 to 10,000.
8. Who are the manufacturers of glucose monitors?
|
https://www.hospitalsstore.com/buy-glucose-monitors/
|
- Format:
- Hardback
- ISBN:
- 9780762301669
- Published:
- 01 Aug 1997
- Publisher:
- Emerald Group Publishing Limited
- Dimensions:
- 216 pages - 156 x 234 x 14mm
- Series:
- Advances in Taxation
Categories:
This ninth volume is part of a series that serves as an annual outlet for the publication of academic tax research.
Contents: an examination of contingent and non-contingent rewards in a tax compliance experiment (Debra S. Callihan and Roxanne M. Spindle); horizontal equity and income type in the Canadian federal income tax system - a cluster analysis (Alexander M.G. Gelardi); valuation of companies for the estate and gift tax - evidence of minority interest discounts (Roger C. Graham Jr and Craig E. Lefanowicz); designing tax incentives to promote environmental goals - a life cycle approach (Julie A. Lockhart); a comparative analysis of capital gains taxation (Thomas M. Porcano and David M. Shull); vertical equity and interstate effects of the state and local tax deduction after the Tax Reform Act of 1986 - evidence from tax returns (David Ryan); the mentor relationship within the public accounting firm - its impact on tax professionals' performance (Philip H. Siegel, Robert W. Rutledge and Joseph M. Hagan).
You might also be interested in..
|
https://books.emeraldinsight.com/page/detail/advances-in-taxation-thomas-m-porcano/?k=9780762301669
|
When your bedtime approaches, does a four-legged friend hop onto the blankets, too? A new study finds that for many American pet owners, that’s not a bad thing.
According to a Mayo Clinic study surveying 150 people, “more respondents perceived their pets to not affect or even benefit rather than hinder their sleep,” while “some respondents described feeling secure, content and relaxed when their pet slept nearby.”
The study is published in the December issue of Mayo Clinic Proceedings.
The research was led by Dr. Lois Krahn of the Center for Sleep Medicine at the Mayo Clinic in Scottsdale, Ariz. Her team said there hasn’t been good research to date on the impact a pet slumbering nearby might have on an owner’s sleep.
Sleep problems continue to plague millions of Americans, and “pet ownership and the number of pets per household are at the highest level in two decades,” the study authors wrote.
In their research, Krahn’s team used interviews and questionnaires to discover how pets in the bedroom affect sleep. Seventy-four of the 150 adults interviewed had at least one pet, and 31 had multiple pets. More than half (56 percent) of pet owners allowed their animal (or animals) to sleep with them in the bedroom or on the bed.
Only 15 pet owners (20 percent) considered the pet’s presence “disruptive” to their sleep. Some said their pets wandered, snored, whimpered or needed bathroom breaks, for example. One single 51-year-old woman complained that her pet parrot “consistently squawked at 6 a.m.,” according to the researchers.
Many more were just fine with a pet sleeping nearby, however. Forty-one percent of people who allowed their pet to sleep with them said that it was either “not an issue or [was] advantageous” to their sleep.
“A single 64-year-old woman commented that she felt more content when her small dog slept under the covers near her feet,” Krahn’s group wrote. In addition, they reported that a 50-year-old woman said she did “‘not mind when my lovely cat’ slept on her chest and another described her cat as ‘soothing.'”
Some people even said that part of the reason they acquired a dog or cat in the first place was to help them relax at night, and this was especially true for single people or people whose partners often traveled or worked at night.
The researchers stressed that having a pet in the bedroom is not always a calming experience, and people should prioritize their need for restful sleep over the need of a pet to be close by.
However, when pet-human slumber does work, it can be very rewarding, according to Krahn and colleagues.
“The value of these experiences . . . cannot be dismissed, because sleep is dependent on a state of physical and mental relaxation,” the authors concluded.
More information
There’s more on getting a good night’s sleep at the National Sleep Foundation.
|
https://iamtotallysick.com/womens-health/many-pet-owners-happy-to-have-fido-fluffy-share-the-bed/
|
Bhavnagar Municipal Corporation (BMC) Recruitment for Junior Clerk, Fireman Posts 2019 @ www.ojas.gujarat.gov.in
Posts :
• Divisional Fire Officer (Male): 01 Post
• Fireman (Male): 11 Posts
• Junior Clerk: 34 Posts
• Swimming Instructor (Female & Male): 04 Posts
• Technical Assistant (Civil): 14 Posts
Total Post : 64
Educational Qualification : Please Read Official Notification.
How to Apply: Interested and eligible candidates may apply online through the official website ojas.gujarat.gov.in
Date : 30-06-2019
Important Dates:
|
https://www.akparmar.com/2019/06/bhavnagar-municipal-corporation-bmc.html
|
Factors Correlated to Protective and Risk Dietary Patterns in Immigrant Latino Mothers in Non-metropolitan Rural Communities.
Immigrant Latinos face conditions which over time negatively impact their nutritional behaviors and health outcomes. Our objective was to evaluate associations between environmental and lifestyle factors and both protective dietary patterns (e.g., intake of fruits and vegetables) and harmful dietary patterns (e.g., consumption of salty snacks and fast food). Surveys were individually and orally administered to 105 foreign-born Latina mothers living in rural locations in a Midwestern state. Principal component analysis created composite variables for each construct and Spearman correlations were conducted to determine associations. Protective dietary patterns were positively associated with access to food and information (ρs = 0.21) and language acculturation (ρs = 0.24), and negatively associated with family challenges (ρs = -0.31). Food insecurity was negatively associated with harmful dietary patterns (ρs = -0.24). Findings suggest that rural Latino dietary interventions should be complemented with comprehensive strategies addressing environmental and lifestyle factors across ecological domains.
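The analysis pipeline described in the abstract (composite variables built via principal component analysis, then Spearman rank correlations between constructs) can be sketched as follows. This is an illustrative reconstruction on synthetic data, not the study's actual survey items or code; the item counts and variable names are assumptions.

```python
import numpy as np
from scipy import stats

def first_pc_score(items: np.ndarray) -> np.ndarray:
    """Score each respondent on the first principal component of an item set."""
    centered = items - items.mean(axis=0)
    # SVD of the centered data matrix: the first right-singular vector is PC1.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[0]

rng = np.random.default_rng(0)
n = 105  # sample size matching the study

# Synthetic survey responses: hypothetical "access to food and information"
# items and "protective dietary pattern" items (e.g., fruit/vegetable intake).
access_items = rng.normal(size=(n, 4))
diet_items = 0.3 * access_items[:, :3] + rng.normal(size=(n, 3))

access_score = first_pc_score(access_items)
diet_score = first_pc_score(diet_items)

# Spearman correlation (rho) between the two composites, with p-value.
rho, p = stats.spearmanr(access_score, diet_score)
print(f"rho = {rho:.2f}, p = {p:.3f}")
```

Spearman's rho is used rather than Pearson's r because it is rank-based and so robust to non-normal survey score distributions, which matches the modest correlations (|rho| around 0.2-0.3) reported above.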
| |
# The Most Secret Method
The Most Secret Method was an American post-hardcore band formed in Washington, D.C., in 1995. Combining styles from groups of the first wave of punk with newer indie rock influences, the band was a major part of the vanguard which represented the D.C. music scene's new direction in the aftermath of the Revolution Summer movement. In addition to their music, the Most Secret Method developed a signature visual art style on their concert posters and 1998 album, Get Lovely, thanks to drummer Ryan Nelson.
## History
Founded in 1995, the Most Secret Method featured the trio of Johanna Claasen (bass guitar, vocals), and brothers Marc Nelson (lead guitar, vocals) and Ryan Nelson (drums). The band made its debut at the high-profile independent music nightclub the Black Cat alongside fellow D.C. band the Capitol City Dusters. Live performances in the D.C. punk scene's abundance of venues and widely observed all-ages policy helped the group hone their heavily emphasized rhythm technique, while attracting a sizable following in a brief amount of time. Additionally, the Most Secret Method incorporated stylistic tendencies from like-minded groups of the D.C. punk community such as Juno, the Dismemberment Plan, Smart Went Crazy, and Jawbox. Ryan Nelson thought "to a large extent the D.C. musicians of the ‘90s were influenced by the Revolution Summer bands. I certainly was. And by the time we got around to forming our own bands, there was still an active underground community – a network of places to play and kids to stay with around the country".
In 1997, the Most Secret Method released a split EP with the Capitol City Dusters on Superbad Records, a subsidiary of Dischord Records. In addition to the band's musical aesthetic, much was made of Ryan Nelson's visual art designs on the Most Secret Method's concert fliers. Well aware of the designs of artists including Raymond Pettibon (designer of Black Flag album covers and Sonic Youth's Goo artwork), Love and Rockets creator Jaime Hernandez, and Ghost World writer and artist Daniel Clowes, Nelson created artwork for the band that became immediately recognizable and was considered among the best the area produced. Combined with the group's split EP, Nelson's artistic style was instrumental in landing their record deal with Slowdime Records in 1998.
Jawbox's guitarist J. Robbins collaborated with the Most Secret Method to record and mix their debut album, titled Get Lovely, at Arlington's Inner Ear Studios. Music historian Brandon Gentrey described the album as a "sterling example of Capitol City post-hardcore indie ruckus, buzzing, dynamic, and smart", one that manages to stand the test of time. Released in September 1998, Get Lovely features cover art and inner sleeve designs by Nelson, who was inspired to create a piece which rivaled the boldness of Big Black's Songs About Fucking. The album had a profound influence on the following generation of punk bands, with groups such as Q and Not U and Black Eyes exhibiting signs of the Most Secret Method's style.
Following the distribution of Get Lovely, the group had a down period in which the band members took to other activities. Ryan Nelson played in the bands the Dead Teenagers and Beauty Pill, Marc Nelson devoted time to acting, and Claasen focused on practicing standup bass, before the group resurfaced in 2002 to release their second album Our Success. In August 2002, the Most Secret Method returned for a series of concerts at the Black Cat before disbanding. Ryan Nelson continued his music career, forming the band the Soccer Team, which has released two albums for Dischord Records. Marc Nelson began acting full time under the stage name Marcus Kyd. He went on to found Taffety Punk Theatre Company in Washington, D.C., with choreographer Erin F. Mitchell, actors Lise Bruneau and Chris Marino, and manager Amanda MacKaye.
## Discography
### Single and EP
"Blue" b/w "Perfect Plan" – (self-released), 1996
The Most Secret Method / The Dusters Split – Dischord Records (#DIS-117.5), 1997
### Albums
Get Lovely – Slowdime Records (#15), 1998
Our Success – Superbad Records (#9), 2002
|
https://en.wikipedia.org/wiki/The_Most_Secret_Method
|
BACKGROUND OF THE INVENTION
DISCLOSURE OF INVENTION
BEST MODE
REFERENCES
In order to maintain air-conditioned spaces at comfortable and healthy conditions, it is necessary to dry the supply air to about 57° F. wet bulb. It is also desirable to have the supply air dry bulb temperature at least about 5° F. warmer (62° F. or higher). This air feels more comfortable, and also avoids moisture-saturated conditions in the supply duct.
Conventionally the air drying is done with 44° F. chilled water, supplied from electric powered mechanical vapor compression chillers. Those chillers create an unacceptably high and costly peak summer electric demand in many areas. When the air reheat is done with external heat input, the chilling demand is increased by the amount of reheat—a very wasteful practice.
The drying and cooling of the air could alternatively be done with a heat-activated liquid desiccant cycle. Those cycles are proven and effective in drying air (leaving it warmer and dryer). However their performance degrades markedly as they are pushed to conditions where the air is also cooled, e.g. to the 62° F. DB/57° F. WB supply air condition cited above, coupled with a realistic heat rejection temperature, e.g. 83° F. cooling water. The required regeneration temperature goes up, cycle losses magnify, and COP goes down, thus requiring more input heat at higher temperature.
Liquid desiccant drying systems are well established. Commercial vendors include Munters, Drykor, Kathabar, and Niagara Blower (the latter having just acquired Kathabar). The desiccant drying process leaves the air dryer but hotter.
There have been many efforts to use liquid desiccants as coolers rather than for drying only. This entails cooling the dried air at least back to room temperature, and preferably below room temperature. Gommed and Grossman (2008) report performance of a system with adiabatic drying and cooling of the desiccant with cooling water. It provides very good dehumidification but very limited cooling—cooling COPs range from 0.23 to 0.74.
Liu et al (2006) report performance of a system with diabatic drying followed by evaporative cooling (also referred to as adiabatic humidification). They achieve a cooling COP of 0.61 at 80° C. regeneration temperature when using 15° C. cooling water for heat rejection. Lowenstein et al (2005) report on the transport properties of a low flow diabatic absorber that is directly evaporatively cooled. Jones (2008) reports performance of that low flow unit—a cooling COP of 0.52 at 78° C. regeneration temperature when using 23° C. cooling water.
Numerous researchers have studied and reported upon the combination of a heat pump with a desiccant cooling cycle such that the cold end of the heat pump chills the dried air, and the hot end supplies the regeneration heat. This can be done with either a mechanical compression chiller or an absorption chiller. One example of this type of hybrid system using a mechanical chiller is found in Peterson et al (U.S. Pat. No. 4,941,324).
Wilkinson (U.S. Pat. No. 5,070,703) reports study results on a hybrid of a closed cycle LiBr absorption chiller and an open cycle liquid LiBr desiccant system, wherein both condenser heat and absorber heat from the absorption chiller are supplied to the desiccant regeneration process. The desiccant section of the hybrid cycle incorporates diabatic dehumidification followed by chilling of the ventilation air.
Schinner and Radermacher (1999) also report study results for an integrated absorption chiller/desiccant hybrid cycle. In their case it is a single effect ammonia-water absorption chiller. They model a “triple effect” cycle, i.e. with both condenser and absorber heat supplied to the regeneration process. The ventilation air is adiabatically dried in a desiccant wheel, then cooled by heat exchange with outdoor air, and finally evaporatively cooled. They report calculated COPs above 1.0, but only at return air temperatures higher than desired (above 60° F. wet bulb). The calculated absorption cycle COP is 0.293, and the absorber temperature is 192° F., inferring a driving heat temperature above 300° F.
What is needed is an air conditioning cycle that can be powered by low temperature waste heat or solar heat, that has high COP, and that has low parasitic power demand to run fans and pumps.
The approach to achieving higher Coefficient of Performance (1.25) air conditioning at standard conditions from lower temperature driving heat (245° F.) is as follows. A thermally integrated hybrid absorption/desiccant system is provided that is comprised of both a liquid desiccant drying cycle and a closed absorption cycle chiller. Each of the constituent cycles will pick up about half of the total chilling duty. Reject heat from the absorption cycle will be used as regeneration heat for the desiccant cycle. With this combination, three important benefits are obtained:
First, since the absorption cycle only supplies sensible cooling and no moisture removal, it operates at higher evaporator chilling temperature than normal (e.g. 55° F. vs 40° F.), which reduces its required generator temperature.
Second, since the desiccant cycle only supplies about half the total equivalent chilling, and none of the colder portion of the chilling, it requires a much lower degree of drying of the ventilation air, compared to when it supplies all the chilling. This means that its regeneration temperature is much lower (typically 135° F. at the above design condition).
Third, the absorption cycle absorber rejects heat at about 160° F., which is hot enough to supply the desiccant regenerator. Hence the desiccant cycle COP (typically 0.55) adds to the absorption cycle COP (typically 0.7) without requiring any additional input heat. The resulting 1.25 COP is obtained from a 245° F. heat input—much lower than the driving temperature required by any existing thermally activated chilling apparatus performing to a comparable COP. This allows use of less expensive solar thermal heat, and/or more readily available waste heat.
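The combined-COP bookkeeping can be sketched numerically. The COPs below are the "typical" values quoted in the text; the simplifying assumption, mine rather than the source's, is that absorber reject heat fully covers the desiccant regeneration demand, so the two COPs add directly.

```python
# Heat-integration bookkeeping for the hybrid absorption/desiccant cycle.
Q_in = 100.0        # driving heat into the absorption generator (~245 F), arbitrary units
COP_abs = 0.7       # absorption cycle chilling COP (typical value from the text)
COP_des = 0.55      # desiccant cycle equivalent-chilling COP (typical value from the text)

chilling_abs = Q_in * COP_abs      # sensible chilling at the evaporator
Q_regen = Q_in                     # assumed: ~160 F absorber reject heat covers regeneration
chilling_des = Q_regen * COP_des   # equivalent chilling delivered by drying

COP_total = (chilling_abs + chilling_des) / Q_in
print(f"{COP_total:.2f}")  # 1.25
```

Under this assumption the combined COP reduces to the simple sum COP_abs + COP_des, matching the 1.25 figure quoted for the 245° F. heat input.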
One key feature of this thermally integrated combination is that the desiccant cycle reject heat is rejected to ambient, not to the chilling coil (which is chilled by the absorption cycle evaporator). The net improvement in overall cycle COP is directly a function of how much of the desiccant cycle reject heat goes to ambient, versus into the chilling coil. The disclosed cycle has three features that ensure that the amount of dryer heat rejected to ambient is maximized. Those features are: the front-end adiabatic humidifier, the air-to-air heat exchanger, and the cooled dryer (cooled either via recirculated liquid desiccant or via diabatic contact). Another key feature, necessary to achieve reasonably interesting values of COP, is the regeneration air-to-air heat exchanger. That feature, in combination with the heated regenerator, ensures that as much as possible of the regeneration heat goes into desiccant regeneration, as opposed to simply heating the regeneration air.
The component parts of each flowsheet are identified on the figures. Prior art disclosures are recited below. Each of the flowsheets exhibits the unique disclosed features that enable increased performance over the prior art results.
This invention builds upon the capabilities of ammonia water absorption heat pumps currently being demonstrated by Energy Concepts Co. Those heat pumps produce 155° F. hot water from 175° F. absorber and 120° F. condenser, while chilling on the cold side to 34° F. That is done at high performance (COP of 0.6) and with 330° F. driving temperature. In the absorber-coupled integrated absorption/desiccant system, the absorption cycle operates at much more benign conditions, and hence should have even higher performance at lower driving temperature. The various liquid desiccant cycles that could be regenerated by low temperature heat were surveyed. It was discovered that the systems using desiccant wheels could not operate acceptably at the desired low regeneration temperature, and hence non-adiabatic (“diabatic”) desiccant contactors were needed. It was further discovered that air-to-air heat exchangers were vitally important, as otherwise much of the applied energy ended up as wasteful heating of the air, instead of the desired cooling and drying of the air. Those air-to-air exchangers can be either the stationary type or the rotary type.
When even those features did not yield the desired performance from the liquid desiccant cycle, it was further discovered that it was necessary to incorporate an adiabatic humidifier in the cycle. The best location for that component is at the start of the ventilation air treating sequence, although satisfactory results can also be obtained when it is at the end of the sequence.
The above development sequence resulted in the FIG. 8 and FIG. 10 flowsheets. The accompanying thermodynamic diagrams show that those cycles are capable of producing the desired performance. There was however one drawback to that configuration. The diabatic contactors are not standard commercial items available at various sizes, and hence would require some development. Hence an alternative configuration was investigated. In particular, standard adiabatic contactors were explored. Taken alone, these contactors have the same drawbacks as the (adiabatic) desiccant wheel—the required regeneration temperature is too high for a practical integrated cycle. However it was discovered that with heavy recirculation of the liquid desiccant through a heat exchanger, with the desiccant then sprayed back into the contactor, the performance of the adiabatic contactor could be made to approach that of the diabatic contactor. The result of that investigation sequence is the FIG. 4 flowsheet, where the contactors are changed to adiabatic. FIGS. 6 and 7 depict the thermodynamic operating conditions of this configuration and demonstrate that the desired performance is indeed feasible. The dryer operates between 87.3° F. and 96.2° F., and hence can indeed be cooled by ambient at 75° F. wet bulb. The regenerator operates between 140° F. and 131.7° F., and hence can indeed be heated by the absorption cycle absorber. The chilling coil only needs to cool the air to 65° F., since it has already been dried to 57° F. wet bulb. The desiccant flow rates are very reasonable when either LiCl or LiBr is used as the desiccant (or mixtures thereof). TEG or other known liquid desiccants could also be used. However those other desiccants would require somewhat higher regeneration temperatures to keep the desiccant flow rates reasonable, since their moisture carrying capacity is so much lower.
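The feasibility of these operating windows comes down to each thermal coupling having a workable temperature difference, which can be checked in a few lines. The 5° F. minimum approach is an assumed engineering margin, not a figure from the source.

```python
# Consistency check on the operating windows quoted in the text: each
# heat-transfer coupling must have an adequate temperature difference.
MIN_APPROACH_F = 5.0   # assumed minimum approach temperature

couplings = [
    # (description, hotter stream [F], colder stream [F])
    ("dryer low end vs 75 F wet-bulb ambient",          87.3, 75.0),
    ("~160 F absorber reject vs regenerator high end", 160.0, 140.0),
    ("65 F supply-air target vs ~55 F evaporator",      65.0, 55.0),
]

for name, hot, cold in couplings:
    dt = hot - cold
    print(f"{name}: dT = {dt:.1f} F -> {'ok' if dt >= MIN_APPROACH_F else 'too tight'}")
```

All three couplings clear the assumed margin, which is the substance of the "can indeed be cooled/heated" claims above.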
When properly operated, this integrated cooling system has a relatively fixed ratio between the moisture removed by the dryer and the sensible chilling delivered by the absorption cycle. However conditioned spaces have widely varying sensible heat ratios, depending upon the moisture emitting processes in the space and also the amount of outside makeup air. The integrated system accommodates those varying sensible heat ratios with the adiabatic humidifier. Any moisture introduced into the front-end humidifier has the effect of increasing the moisture removal duty and decreasing the sensible chilling duty. Hence that is done until the two are brought into their natural balance point. With the wetted media type of humidifier, the amount of moisture added to the air can be controlled by simply bypassing part of the air around the media with a damper. With the spray mist type of humidifier, the water flowrate to the spray-misters is controlled.
The integrated absorption/desiccant cooler can use other types of absorption cycle than the ammonia-water cycle. In particular, it can also make use of the LiBr absorption cycle. In that case however, it is not possible to use reject heat from the absorber to power the desiccant regeneration step. The LiBr absorbent would be in its crystallization region at that temperature. Instead, the condenser heat must be used. The steam condensing temperature in the cycle is elevated from its normal 105° F. to about 150° F., to heat the liquid desiccant. Since condenser heat is typically only about ⅔ the magnitude of absorber heat, this combination will not realize as much gain from the desiccant section, and hence the overall cycle COP will be lower. However the advantage is that it eliminates use of ammonia. In some applications ammonia is deemed too risky due to its toxicity.
Similarly the disclosed integration of a heat pump and a liquid desiccant cycle can be done with a mechanical compression heat pump. In that instance the combination is powered by electricity rather than by low temperature heat. This would be used where no heat is available. It has the advantage that the electric requirement is reduced by 40% below what is possible with any conventional electric chiller.
The disclosed novel liquid desiccant cooling cycle has been discovered to have two other beneficial applications, independent of being in the above-disclosed integrated combination. First is as a stand-alone cycle, as depicted in FIG. 1. In compensation for no longer having the help of the chilling coil at the cold end, two adjustments are necessary. First, a second adiabatic humidifier is required (one at either end), and secondly, the dryer must dry the air substantially more. The extra dryness allows the back end humidifier to achieve the desired low dry bulb temperature, however it means that higher regeneration temperatures are required.
The other beneficial application is to provide a hybrid of the desiccant cycle and a chiller wherein they are not integrated, yet they both cooperate to cool and dry the air. With an electric chiller, the same 40% (or more) savings in electricity is realized, but a separate source of regeneration heat is required. However it can be at very low temperature (150 F), i.e. can be waste heat or low cost solar heat. When the chiller is heat activated, it only requires driving temperature of about 175 F, and that combined with the 150 F requirement of the desiccant cycle can be very advantageous, even though the overall COP is only about 0.65.
Gommed, Khaled and Grossman, Gershon. Sep. 23, 2008. “Experimental Investigation of a Solar-Powered Open Absorption System for Cooling, Dehumidification and Air Conditioning.” International Sorption Heat Pump Conference. Seoul, Korea.
Howell, John R. and Peng, Patrick. Feb. 15, 1983. “Hybrid Double-Absorption Cooling System.” U.S. Pat. No. 4,373,347.
Jones, Benjamin Marcus. Sep. 2008. “Field Evaluation and Analysis of a Liquid Desiccant Air Handling System.” Queen's University. Kingston, Ontario, Canada.
Kelley, G. A. Dec. 24, 1968. “Method and Means for Providing High Humidity, Low Temperature Air to a Space.” U.S. Pat. No. 3,417,574.
Ko, Suk M. Jun. 3, 1980. “LiCl Dehumidifier LiBr Absorption Chiller Hybrid Air Conditioning System with Energy Recovery.” U.S. Pat. No. 4,205,529.
Liu, Jianhua; et al, 2006. “Experimental Investigation on the Operation Performance of a Liquid Desiccant Air-Conditioning System.” HVAC Technologies for Energy Efficiency, Volume IV-11-5.
Lowenstein, Andrew; et al. Jun. 22, 2005. “A Low-Flow, Zero Carryover Liquid Desiccant Conditioner.” International Sorption Heat Pump Conference. Denver, Colo.
Lowenstein, Andrew. March 2003. “A Solar Liquid-Desiccant Air Conditioner.” Solar LDAC.
Lowenstein, Ph.D, Andrew and Novosel, Davor. 1995. “The Seasonal Performance of a Liquid-Desiccant Air Conditioner.” ASHRAE Technical Data Bulletin, Volume II, Number 2.
Maeda, Kensaku. Oct. 19, 1999. “Heat Pump Device and Desiccant Assisted Air Conditioning System.” U.S. Pat. No. 5,966,955.
Meckler, Gershon and Meckler, Milton. Oct. 23, 1979. “Air Conditioning Apparatus.” U.S. Pat. No. 4,171,624.
Pesaran, Ph.D. Ahmad; et al, 1995. “Evaluation of a Liquid Desiccant-Enhanced Heat Pipe Air Preconditioner.” ASHRAE Technical Data Bulletin, Volume II, Number 2.
Peterson, John L.; et al, Jul. 17, 1990. “Hybrid Vapor-Compression/Liquid Desiccant Air Conditioner.” U.S. Pat. No. 4,941,324.
Potnis, Shailesh V.; et al, Apr. 17, 2001. “Liquid Desiccant Air Conditioner.” U.S. Pat. No. 6,216,483.
Schinner, Jr., P.E., Edward N. Jan. 1999. “Performance Analysis of a Combined Desiccant/Absorption Air-Conditioning System.” HVAC&R Research, Volume 5, Number 1.
Wilkinson, P.E., W. H. 1991. “A Simplified, High-Efficiency Dublsorb System.” ASHRAE Transactions: Symposia, NY-91-2-1.
Wilkinson, P.E., W. H. 1991. “Evaporative Cooling Trade-Offs in Liquid Desiccant Systems.” ASHRAE Transactions: Symposia, NY-91-8-2. Pages 642-649.
Wilkinson, William H. Dec. 10, 1991. “Hybrid Air Conditioning System Integration.” U.S. Pat. No. 5,070,703.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a simplified schematic flowsheet of a stand-alone liquid desiccant cooling system, with recirculating adiabatic contactors for both the drying section and the regeneration section, and with dual adiabatic humidifiers (one at the front (return air) end, and one at the back (supply air) end). Makeup (outside) air is shown being introduced at the front end.
FIG. 2 modifies the FIG. 1 flowsheet so as to have a chilling coil at the back end, in lieu of the second adiabatic humidifier, and also depicts a generic heat pump supplying chilling to the chilling coil and regeneration heat to the desiccant heater. Also, the makeup air tie-in is moved to downstream of the air-to-air heat exchanger (although that change has little impact).
FIG. 3 presents one specific example of the generic FIG. 2 flowsheet, wherein the heat pump is a mechanical vapor compression type. This combination only requires about 60% as much electricity as a conventional electric chiller, since the desiccant section picks up nearly half of the total cooling duty. Note another advantage—that no separate cooling circuit is required for the mechanical compression section.
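The "about 60% as much electricity" figure follows from simple duty-split arithmetic. The split and parasitic adder below are assumed illustrative numbers consistent with the text, not figures from the source.

```python
# Back-of-envelope for the mechanical-compression hybrid's electric draw.
total_duty = 1.0
desiccant_share = 0.45   # "nearly half" of the duty, driven by heat rather than electricity
parasitics = 0.05        # assumed fan/pump load for the desiccant loops, as a fraction of duty

compressor_share = total_duty - desiccant_share
relative_electricity = compressor_share + parasitics  # vs. a chiller carrying the full duty
print(f"{relative_electricity:.0%}")  # 60%
```

The estimate is conservative in one respect: the compressor also runs at a higher evaporator temperature than a conventional chiller, which improves its efficiency further.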
FIG. 4 presents another embodiment of the FIG. 2 flowsheet, wherein the heat pump is an ammonia-water absorption heat pump. A single low temperature heat source, that can be at a temperature less than about 250° F., powers the entire cooler system, by first powering the generator (desorber) of the absorption cycle. Reject heat from the absorber of the absorption cycle then heats the regeneration desiccant. The evaporator of the absorption cycle provides chilled water for the air-chilling coil. Heat is rejected to ambient from the adiabatic dryer, preferably to an evaporative cooler via the liquid desiccant recirculation loop. Heat is also rejected to ambient from the condenser of the absorption cycle, preferably using the same evaporative cooler.
Whereas FIGS. 1 through 4 all depict use of adiabatic contactors in the desiccant cycles, it is also possible to use non-adiabatic (“diabatic”) contactors. FIG. 5 depicts a modification of the FIG. 2 flowsheet such that both contactors are diabatic.
FIG. 6 presents the FIG. 2 flowsheet with the addition of statepoints for thermodynamic properties, and the table in FIG. 7 depicts those properties at each statepoint.
FIG. 8 presents a version of the FIG. 4 flowsheet with diabatic contactors, and FIG. 9 presents example conditions of the ventilation air and regeneration air as it flows through the FIG. 8 flowsheet, as depicted on a psychrometric chart. Note also that this version of the flowsheet has the adiabatic humidifier at the back end of the cycle.
FIG. 10 presents a version of the FIG. 4 flowsheet with diabatic contactors, and FIG. 11 presents example thermodynamic properties in the ammonia-water absorption cycle corresponding to the FIG. 10 flowsheet, as depicted on a vapor-liquid equilibrium diagram for the ammonia-water working pair. Note that the generator heat input spans the temperature range from 238° F. to 213° F., and the absorber heat rejection is from 156° F. to 177° F. The evaporator is at 55° F., and the condenser is at 102° F.
| |
To address gang-involved youth and crime reduction in the South Cariboo Region of BC, the Cariboo Family Enrichment Centre (CFEC) hired a Community Youth and Family Navigator (Navigator) position. The Navigator works with gang-involved and at-risk youth age 9-30 in the community of 100 Mile House.
The Navigator encompasses strengths-based assessment and planning, connection, education, collaboration, linkage facilitation, increasing resiliency factors, providing gang-awareness initiatives, advocacy, addressing risk-factors, and follow up.
Participation in a gang reduces a member's connections to other prosocial activities; members may cut ties to prosocial groups and organizations such as family, friends, schools and religious communities in order to focus more intensively on gang participation. The Navigator meets face-to-face with youth who need assistance navigating services to identify and meet their needs, so the youth are not reliant on support from gangs.
The youth’s needs will be addressed by connection to individual assessment, family support, individual/group therapy, family therapy, mental health counselling, mentoring, drug treatment, outreach, case management, service referrals, recreational opportunities, employment, cultural opportunities, and Indigenous and traditional cultural awareness. The goal is to decrease the likelihood of youth gang involvement by providing alternatives and accessible social opportunities.
Goals
The main goals of the Cariboo Family Enrichment Centre Community Youth and Family Navigator program are to reduce risk for gang-involvement of youth by:
- Connecting youth to relevant community services and supports;
- Advocating on behalf of the youth for increased services and supports; and
- Connecting the families of youth to relevant community services and supports
Clientele
Mixed gender youth aged 9-30 who are:
- Involved with the criminal justice system
- Gang-involved
- Youth at-risk
- Using or dealing drugs
- Associated with delinquent or gang-involved peers
- Disconnected from school
- Disconnected from family
Youth may self-refer or be referred by another agency in the community such as School District 27, RCMP, Indigenous Bands, Interior Health, or the Ministry of Children and Family Development, among others. Referrals may also come from families who identify members that are at-risk or gang-involved. Youth are eligible for the Program if they meet the markers noted above.
Core Components
The core component of this Program is working directly with youth to increase their access to services and ability to overcome barriers. The Navigator will assist youth to make connections to supports available to them in the community to reduce their reliance on support from gangs, or other gang-involved youth.
Interventions are directly related to reducing barriers to supportive factors such as employment, mental health, education, physical health, secure housing, positive role models, reconnection to Indigenous culture, and food security, etc.
The Navigator is available 35 hours per week (Monday-Friday, 9am-4pm), and the Program has been funded by the Gun and Gang Violence Action Fund for 2019/2020, 2020/2021, and 2021/2022.
Implementation Information
Some of the critical elements for the implementation of this program or initiative include the following:
- Organizational requirements: Lead organization should have a strong and stable management team / executive / leadership; ensure proper analysis of community needs and knowledge of other existing services, resources and organizations; have experience in fundraising; be able to manage logistical elements which enable the program to happen; have solid skills in outreach, intake and assessment, case planning, program delivery and post-program follow-up; have written policies regarding cultural competence, parent involvement, privacy of personal information, client complaints and client feedback mechanisms.
- Partnerships: The success of Cariboo Family Enrichment Centre Community Youth and Family Navigator Program depends on its partnerships with Cariboo Chilcotin Child Development Centre Association, 100 Mile House RCMP, School District 27, Big Brothers Big Sisters of Williams Lake, 100 Mile House Ministry of Children and Family Development, 100 Mile House Child and Youth Mental Health, Canim Lake Band, 100 Mile House Canadian Mental Health Association and Interior Health.
- Training and technical assistance: Bachelor Degree in Child and Youth Care, Social Work or related field and 5 years of experience required for Navigator position.
- Risk assessment tools: Limited information on this topic.
- Materials & resources: Client information and notes kept on Case Administrative Management System.
International Endorsements
The most recognized classification systems of evidence-based crime prevention programs have classified this program or initiative as follows:
- Blueprints for Healthy Youth Development: Not applicable.
- Crime Solutions/OJJDP Model Program Guide: Not applicable.
- SAMHSA's National Registry of Evidence-based Programs and Practices: Not applicable.
- Coalition for Evidence-Based Policy: Not applicable.
Gathering Canadian Knowledge
Canadian Implementation Sites
The Navigator program has been implemented by the Cariboo Family Enrichment Centre in 100 Mile House, BC, beginning in 2019/2020 and funded until 2021/2022.
Main Findings from Canadian Outcome Evaluation Studies
No information available.
Cost Information
At this time, the Navigator is funded for three years, until the end of fiscal year 2021/2022, at $58,787 per year, for a total of $173,361.
As of the end of the fiscal year 2020/2021, the cost-per-youth was $3,114.96.
References
Anderson, J., & Larke, S. (2009). The Sooke Navigator project: Using community resources and research to improve local service for mental health and addictions. Mental Health in Family Medicine, 6, 21-28.
Chettleburgh, M. C. (2007). Young thugs: Inside the dangerous world of Canadian street gangs. Harper Collins Publishers.
Dunbar, L. (2017). Youth gangs in Canada: A review of current topics and issues. Public Safety Canada. https://www.publicsafety.gc.ca/cnt/rsrcs/pblctns/2017-r001/2017-r001-en.pdf
Lupick, T. (2015, February 6). Vancouver coastal health expands health navigating mental-health services, cuts long time advocates. The Straight. https://www.straight.com/news/822331/vancouver-coastal-health-expands-help-navigating-mental-health-services-cuts-long-time-advocates
Ngo, H. V., Calhoun, A., Worthington, C., Pyrch, T., & Este, D. (2017). Unravelling identities and belonging: Criminal gang involvement of youth from immigrant families. International Migration & Integration, 18, 63-84. https://doi.org/10.1007/s12134-015-0466-5
Sersli, S., Salazar, J., & Lozano, N. (2010). Gang prevention for new immigrant and refugee youth in BC. http://www2.gov.bc.ca/assets/gov/public-safety-and-emergency-services/crime-prevention/community-crime-prevention/publications/gang-prevention-immigrant-refugee.pdf
Snider., C., Wiebe, F., Mahmood, J., Christensen, T., & Kehler, K. (n.d.). Community assessment of a gang exit strategy for Winnipeg, Manitoba. Gang Action Interagency Network (GAIN) and University of Manitoba. https://gainmb.files.wordpress.com/2011/03/gain-report1.pdf
Wortley, S., & Tanner, J. (2006). Immigration, social disadvantage and urban youth gangs: Results of a Toronto-area study. Canadian Journal of Urban Research, 15(2), 18-37.
For more information on this program, contact:
Cariboo Family Enrichment Centre
1-486 Birch Avenue
100 Mile House, BC, V0K 2E0
Telephone: (250) 395 5155
E-mail: [email protected]
Website: www.cariboofamily.org
Record Updated On - 2022-01-17
|
https://www.publicsafety.gc.ca/cnt/cntrng-crm/crm-prvntn/nvntr/dtls-en.aspx?i=10213
|
Wassily Kandinsky was a Russian-born painter. In the years before World War II, Kandinsky was co-founder (along with Franz Marc) of the Expressionist group Der Blaue Reiter (Blue Rider), and he later taught at the German art school the Bauhaus.
Kandinsky is widely considered a pioneer of abstract art. After travelling to Paris he was influenced by the art of Gauguin, as well as the work of the Neo-Impressionists. Kandinsky began as a landscape painter, depicting the essence of what he saw outdoors.
Kandinsky's main interest was creating expressionist art with the pure elements of color and shape. Having come from a religious background, Kandinsky would convey religious and spiritual elements in his work. He was also influenced by science in his art. He then developed his ideas concerning the power of pure color and nonrepresentational painting.
His first work in this mode was completed in 1910, the year in which he wrote an important theoretical study, Concerning the Spiritual in Art (1912, tr. 1947). In this work he examines the psychological effects of color with analogies between music and art. It is believed that Wassily Kandinsky created his first non-representational painting while he was studying with theosophist Rudolf Steiner.
|
https://www.artbywicks.com/wassily-kandinsky/
|
The sequence follows FCI breed standard.
CLASSIFICATION: Herding Dog
BRIEF HISTORICAL SUMMARY: The Bouvier des Ardennes originated as a cattle drover in the Belgian Ardennes. Only the hardiest and most hardworking dogs from a very restricted population were retained and used to drive the herds, mostly cattle but also sometimes sheep, pigs and horses. They were also used to track deer and wild boar, and during the two World Wars they became poachers’ dogs. During the 20th Century, the disappearance of farms in the Ardennes and the reduction in the herds of milking cattle greatly diminished the number of working dogs, including the Bouvier des Ardennes. Around 1985 a few survivors of this breed were discovered, and some breeders set out to produce dogs that adhered to the original standard of the breed, which had been published in 1923. The Bouvier des Ardennes was recognized by the United Kennel Club July 1, 2006.
GENERAL APPEARANCE: The Bouvier des Ardennes is a medium-sized, very hardy dog of rugged appearance. It is short and thick set, with bone that is heavier than its overall size might suggest. It is compact and well-muscled, with a harsh, tousled coat and a rather forbidding appearance. The breed should be judged in a natural stance, without stacking by the handler.
BEHAVIOUR / TEMPERAMENT: Extremely adaptable, the Bouvier des Ardennes is at ease in any situation. It is playful and curious, yet very obstinate and determined when defending its family, possessions or territory.
HEAD: The head is strong and rather short.
CRANIAL REGION:
Skull: The skull is broad and flat, slightly longer overall than wide. There is no occipital protuberance. The stop is pronounced but not excessive, though it is emphasized by the bushy eyebrows. Cheek bones are not prominent.
FACIAL REGION:
Nose: Broad and always black.
Muzzle: Thick and broad, well filled under the eyes, and clearly shorter than the skull. The toplines of the skull and muzzle lie in parallel planes. The muzzle is furnished with upstanding hair. The muzzle is as broad as the skull, with no indentation at their juncture. The lips are thin and close fitting, with black edges.
Jaws/Teeth: The Bouvier des Ardennes has a complete set of evenly spaced, white teeth meeting in a scissors or level bite. The absence of the first premolars is not a fault. The M3 are not taken into consideration.
Eyes: Medium in size, set not too far apart, oval in shape, and as dark in color as possible. Eye rims are fully pigmented and no haw should be visible when the dog looks straight ahead.
Ears: Triangular in shape, rather small, and set high on the skull. Fully erect ears are preferred, but semi-prick or rose ears are acceptable.
NECK: Strong and well muscled, slightly arched and free from throatiness.
BODY: The Bouvier des Ardennes is a square breed, measured from point of shoulder to buttocks and top of withers to ground. The body is powerful, with rounded ribs and a broad, firm back. The chest is deep to the elbows and quite broad. The topline is level all the way through the short, broad loin and croup, to the high set tail. There is little tuck-up.
TAIL: Thick and high set. Docked or natural are equally acceptable. Some are born naturally bobbed or tailless.
LIMBS:
FOREQUARTERS: Shoulder blade and upper arm are reasonably long and thickly muscled. They form an angle of approximately 110 degrees.
FORELEGS: Straight and strong, with powerful bone. Length of the leg from elbow to ground is approximately one-half the height at the withers. The pasterns are short and strong and slightly sloping.
FEET: Round and tight with arched toes and thick, dark pads.
HINDQUARTERS: Powerful and moderately angulated. A vertical line drawn from the back of the pelvis should fall just in front of the toes of the back feet.
HIND LEGS: The thighs have prominent muscles. Hocks are broad, sinewy and well let down. Rear pasterns are slightly sloping in profile. Rear dewclaws should be removed.
GAIT / MOVEMENT: A lively, ground covering trot with strong thrust from the rear. The legs should move in parallel lines with no crabbing. The topline remains firm and level when the dog is in motion.
COAT:
HAIR: The coat is dense, double and completely weatherproof. The outer coat is dry, coarse and tousled and about 2½ inches in length all over the body, except on the skull, where it is shorter and flatter. There must be a moustache and beard about 2 inches in length that hides the inside corner of the eye. The outside of the ears is covered by short, straight hair. The undercoat is very dense, regardless of season, and about half the length of the outer coat. The skin is tight fitting, but supple.
COLOUR: All colors are acceptable except white. Most generally the color is a mixture of grey, black and fawn hairs. The grey can be from pale to dark. Sometimes the coat is brownish, red or straw colored. A small white spot on the chest and/or toes is acceptable.
SIZE: Height at the withers for males is 22-24½ inches. For females, it is 20½ to 22 inches. Weight for males is approximately 60-75 pounds. For females, it is 48-60 pounds.
FAULTS:
DISQUALIFYING FAULTS:
(An Eliminating Fault is a Fault serious enough that it eliminates the dog from obtaining any awards in a conformation event.) More than one inch over or under the prescribed height limits.
Unilateral or bilateral cryptorchid. Viciousness or extreme shyness. Albinism. Lips, nose or eyerims not fully pigmented in black. Yellow eyes. Overshot, undershot or wry mouth. Three or more missing teeth, not counting the first premolars or the M3’s. Cropped ears. Drop ears that lie flat against the head. Natural tail carried curled over the back. Any evidence of trimming of the coat. Excessive head furnishings that completely mask the eyes. Solid white. Any white markings except on the chest and toes.
Muzzle: Lips not completely black.
Teeth: Overshot, undershot or wry mouth. Three or more missing teeth, not counting the first premolars or the M3’s.
Nose: Nose not fully black.
Eyes: Eye rims not fully pigmented black. Yellow eyes.
Ears: Cropped ears. Drop ears that lie flat against the head.
Tail: Natural tail carried curled over the back.
Coat: Any evidence of trimming of the coat. Excessive head furnishings that completely mask the eyes.
Color: Solid white. Any white markings except on the chest and toes.
Note: The docking of tails and cropping of ears in America is legal and remains a personal choice. However, as an international registry, the United Kennel Club, Inc. is aware that the practices of cropping and docking have been forbidden in some countries. In light of these developments, the United Kennel Club, Inc. feels that no dog in any UKC event, including conformation, shall be penalized for a full tail or natural ears.
|
https://dogsglobal.com/breeds/bouvier-des-ardennes/UKC-standard
|
The literature and applied studies report that microfinance is an effective tool to tackle poverty, gender inequality, female disempowerment and financial dependency issues. Earlier studies on microfinance reported successes in Bangladesh (Pitt & Khandker, 1998) and some Latin American countries such as Bolivia (Velasco & Marconi, 2004). However, the findings of these studies have been overshadowed by recent studies that have reported weak (Ganle et al., 2015) and sometimes negative microfinance outcomes in other regions (Salia et al., 2018; Karim, 2011). These mixed results have raised doubts about the effectiveness of microfinance and its relevance to promoting women's development, especially when donor funding is declining. This empirical study investigates the impact of microfinance intervention on women's empowerment and entrepreneurial development by analysing microfinance interventions and the perspectives of women service users in Nigeria. I drew data for the study from secondary sources, 350 questionnaire responses, 11 focus-group interviews with the women clients, and 28 one-to-one interviews with loan officers and heads of a non-governmental organisation (NGO) microfinance provider. Using qualitative analysis, chi-square tests, analysis of variance (ANOVA) and ordinal regression, the analysis found that access to microcredit, training and mentoring services supports women's microenterprises: increased awareness and use of formal financial services, increased business assets and the development of critical soft business skills. This further leads to an enhanced contribution to household decision-making, autonomy in decision-making and decreases in family disputes often triggered by lack of money. Evidence shows that women's social capital development was realised through taking part in group meetings, which encouraged social solidarity, mutual support and business networking amongst women entrepreneurs.
However, control of spending on household assets (land, building) remains the exclusive prerogative of the male household heads. The results of the study support the previous literature (Swain & Wallentin, 2017; Kabeer, 2010), mainly based on South Asian economies, in finding that microfinance support for women positively affects their entrepreneurial development, raises equality levels and reduces their dependency on male household heads. Finally, the study suggests that microfinance efforts at promoting women's empowerment may produce better outcomes within a larger framework that includes the cultural acceptance of women's ownership and control of family assets.
|
https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.795252
|
Lessons learned from past experience with intensive livestock management systems.
The main impetus for 'modern' intensive animal production occurred after the Second World War, when Western governments developed policies to increase the availability of cheap, safe food for their populations. Livestock benefit under intensive husbandry by protection from environmental extremes and predators, and better nutritional and health management. Nevertheless, there are costs to the animal, such as impaired social behaviour, limited choice of living environment or pen mates, poor environmental stimulation and behavioural restrictions. The rapid progress in genetic selection of production traits has also, in some cases, adversely affected welfare by creating anatomical and metabolic problems. Above all, the intensively housed animal is heavily reliant on the stockperson and, therefore, inadequate care and husbandry practices by the stockperson may be the largest welfare risk. In a future in which the food supply may be limited as the world's population grows and land availability shrinks, intensive animal production is likely to expand. At the same time, ethical considerations surrounding intensive farming practices may also become more prominent. Novel technologies provide the opportunity to enhance both the productivity and welfare of intensively kept animals. Developing countries are also establishing more intensive commercial systems to meet their growing need for animal protein. Intensive livestock production in such countries has the potential for major expansion, particularly if such developments address the key constraints of poor welfare, inadequate nutrition, poor reproduction, poor housing, and high mortality often seen with traditional systems, and if farmer access to emerging market opportunities is improved. However, as shown by previous experience, inadequate regulation and staff who lack the appropriate training to care for the welfare of intensively housed livestock can be major challenges to overcome.
| |
CROSS-REFERENCE TO RELATED APPLICATION
TECHNICAL FIELD
BACKGROUND ART
CITATION LIST
Patent Literature
SUMMARY OF INVENTION
Technical Problem
Solution to Problem
Advantageous Effects of Invention
DESCRIPTION OF EMBODIMENTS
REFERENCE SIGNS LIST
This application claims priority to JP Application No. 2015-241509, filed Dec. 10, 2015, the disclosure of which is incorporated in its entirety by reference herein.
The present invention relates to an exhaust gas dilution device adapted to dilute exhaust gas discharged from an internal combustion engine or the like for purposes such as component analysis of the exhaust gas, and to an exhaust gas measuring system using the exhaust gas dilution device.
When analyzing the components of exhaust gas from an internal combustion engine, the exhaust gas, if used as is, causes condensation or the like that affects the analysis; it is therefore diluted with diluent gas such as air before being introduced into an analytical instrument. A device used for the dilution is a full flow dilution device using a full tunnel or a partial dilution device using a micro tunnel.
For example, taking the full flow dilution device as an example, the tunnel of the full flow dilution device introduces thereinto the total amount of the exhaust gas discharged from the internal combustion engine as well as introducing thereinto the diluent gas having a controlled flow rate, and thereby the exhaust gas is diluted.
The important thing here is that the diluent gas and the exhaust gas must sufficiently mix with each other inside the tunnel.
For this purpose, in the past, in the middle of a tunnel, a flat plate-like orifice plate 33 formed with an orifice hole 32 at the center thereof is provided, and slightly upstream of or flush with the orifice hole 32, a discharge port 34a of an exhaust gas introduction pipe is placed (see FIGS. 6 and 7 of Patent Literature 1).
This is to surely mix diluent gas and exhaust gas using a mixing effect due to the concentration of the diluent gas at the orifice hole 32.
The mixing effect caused by the orifice hole is more strongly exerted as the flow velocity of gas passing through the orifice increases. However, when increasing the gas flow velocity too much, although the mixing effect is enhanced, a pressure loss at the orifice hole increases to make the pressure of the discharge port negative, thus causing the problem of changing engine combustion conditions.
As a result, a gas flow velocity range that can prevent the problem caused by the negative pressure while moderately ensuring the mixing effect is naturally defined. This gas flow velocity range in turn defines the dilutable flow rate range in this sort of conventional exhaust gas dilution device, and it is difficult to obtain a flow rate range exceeding it.
The reason for this will be described.
Parameters contributing to the gas flow velocity are a flow rate (the mixed gas flow rate of air and the exhaust gas) and an orifice hole diameter. Increasing the mixed gas flow rate or decreasing the orifice hole diameter increases the gas flow velocity.
For example, when the orifice hole diameter is large and the mixed gas flow rate is small, the gas flow velocity at the orifice hole cannot be ensured. As a result, a predetermined mixing effect cannot be obtained, and therefore in such an exhaust gas dilution device, the dilutable flow rate range is shifted to a larger side.
On the other hand, when the orifice hole diameter is small and the mixed gas flow rate is large, a pressure loss at the orifice hole increases, and therefore in such an exhaust gas dilution device, the dilutable flow rate range is shifted to a smaller side.
Accordingly, as described above, it is difficult to expand the dilutable flow rate range to some extent or more in this sort of conventional exhaust gas dilution device.
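The trade-off described above follows from simple geometry: the mean velocity through the orifice is the volumetric flow rate divided by the hole area. The sketch below illustrates this relationship; the function name and the flow rates and diameters are illustrative assumptions, not values from the patent.

```python
import math

def orifice_velocity(flow_m3_per_min: float, hole_diameter_m: float) -> float:
    """Mean gas velocity (m/s) through a circular orifice:
    v = Q / A, with A = pi * d^2 / 4."""
    area = math.pi * hole_diameter_m ** 2 / 4.0
    return (flow_m3_per_min / 60.0) / area

# Same mixed gas flow rate, smaller hole -> higher velocity
# (stronger mixing effect, but larger pressure loss at the orifice).
v_large = orifice_velocity(10.0, 0.10)  # 10 m^3/min through a 100 mm hole
v_small = orifice_velocity(10.0, 0.05)  # same flow through a 50 mm hole
assert v_small > v_large
# Halving the diameter quarters the area, so the velocity quadruples.
assert abs(v_small / v_large - 4.0) < 1e-9
```

Because velocity scales with flow rate but inversely with the square of the hole diameter, a fixed orifice geometry pins the usable velocity window to a correspondingly fixed flow rate window, which is exactly the limitation the invention addresses.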
However, the larger such a flow rate range is, the better it is. This is because as the flow rate range increases, a single exhaust gas dilution device makes it possible to do more various patterns of tests and also accept a test of internal combustion engines having more variously-sized displacements.
Patent Literature 1: Japanese Unexamined Patent Publication JP-A2001-249064
Therefore, the present invention is made in order to provide an exhaust gas dilution device capable of expanding a flow rate range to exceed a conventional limit.
That is, the exhaust gas dilution device according to the present invention is one including: a dilution pipe through which diluent gas such as air or nitrogen gas flows; an orifice member adapted to block the dilution pipe except for an orifice hole provided in a central part; and an exhaust gas introduction pipe of which a discharge port is disposed so as to face, or penetrate through, the orifice hole and face the downstream side, and through which exhaust gas is discharged from the discharge port into the dilution pipe. In addition, in the exhaust gas dilution device, the orifice member is formed with a concave part that is gradually concaved from an outer circumferential edge part toward the orifice hole as viewed from an upstream side.
Specific embodiments adapted to simplify manufacturing and reduce weight include one in which the orifice member is one forming a hollow truncated conical shape.
Preferably, the tilt angle of a surface of the concave part is set to be 45° to 60° with respect to the inner circumferential surface of the dilution pipe as viewed in a virtual cross section obtained by cutting the dilution pipe along the axial line of the dilution pipe.
Specific embodiments adapted to realize the effect of the present invention include an exhaust gas measuring system including: the exhaust gas dilution device described above; and an exhaust gas measuring device adapted to sample mixed gas of the exhaust gas and the diluent gas, the mixed gas being produced by the exhaust gas dilution device, and measure a concentration or an amount of a predetermined component contained in the exhaust gas.
The exhaust gas dilution device according to the present invention is capable of expanding a flow rate range, and therefore makes it possible to do various tests and also accept a test of internal combustion engines having variously-sized displacements.
In the following, one embodiment of the present invention will be described with reference to drawings.
As illustrated in FIG. 1, an exhaust gas dilution device 100 according to the present embodiment is one of a full flow dilution type, and is used as part of an exhaust gas measuring system X.
Specifically, the exhaust gas dilution device 100 includes: an exhaust gas sampling pipe 1 that is connected to an exhaust pipe (not illustrated) of an internal combustion engine and into which the total amount of exhaust gas is introduced; a circular pipe-shaped dilution tunnel 2 (hereinafter also simply referred to as a tunnel 2) as a dilution pipe into which the exhaust gas is introduced through the exhaust gas sampling pipe 1 and also air as diluent gas is introduced to mix them for diluting the exhaust gas; and a flow rate control device (CVS) 9 that makes the flow rate of mixed gas flowing through the tunnel 2 constant.
Note that numeral 4 in the diagram represents an exhaust gas measuring device constituting part of the exhaust gas measuring system X. Here, as an example of the exhaust gas measuring device 4, a filter collection device adapted to proportionally sample the mixed gas flowing through the tunnel 2 and collect PM contained in the sampled mixed gas is illustrated. As another exhaust gas measuring device, one adapted to measure the concentrations and amounts of various components such as CO, THC, and NOx in the exhaust gas can be cited.
In addition, in the present embodiment, the tunnel 2 is provided with a gas mixing structure 5 adapted to facilitate the mixture of the air and the exhaust gas.
The gas mixing structure 5 is one that includes an orifice member 51 having an orifice hole 5a in the center.
As illustrated in FIG. 2, the orifice member 51 is one forming a hollow truncated conical shape of which the outer circumferential edge is joined to the inner circumferential surface of the tunnel 2 without any gap in the middle of the tunnel 2, and disposed such that, as viewed from the upstream side, the central part thereof is concaved to form a concave part S. The tilt angle θ of the surface of the concave part S is configured to be 45° to 60° with respect to the inner circumferential surface of the tunnel 2 as viewed in a virtual cross section obtained by virtually cutting along an axial line as illustrated in the diagram.
In addition, the terminal part of the exhaust gas sampling pipe 1 is extended from the upstream side toward the downstream side along the axial line of the tunnel 2, and a discharge port 1a of the exhaust gas sampling pipe 1 is configured to face to the downstream side on the upstream side of the orifice hole 5a. More specifically, the discharge port 1a is arranged coaxially with the orifice hole 5a so as to be positioned slightly upstream of (the surface on the upstream side of) the orifice hole 5a. Note that here the outside diameter of the exhaust gas sampling pipe 1 is set to be slightly smaller than the inside diameter of the orifice hole 5a.
In such a configuration, the air flows into the orifice hole 5a at a tilt from the gap between the terminal outer circumferential edge of the exhaust gas sampling pipe 1 and the surface of the concave part S, and mixes with the exhaust gas that is discharged from the discharge port 1a and flows into the orifice hole 5a.
In this case, since the flow path of the air is gradually narrowed by the tilt surface of the concave part S of the orifice member 51 and then reaches the orifice hole 5a, even when increasing the air flow rate, the pressure loss is small, and therefore the mixing effect hardly changes according to the knowledge of the inventor.
As a result, since the exhaust gas dilution device 100 is capable of making the flow rate range larger than before, various patterns of tests can be done, and a test of internal combustion engines having variously-sized displacements is also acceptable.
Note that the surface of the concave part S of the orifice member 51 need not have a constant angle; as illustrated in FIG. 4, it may be one having an angle that gradually decreases or increases toward the downstream side like a horn.
The orifice member 51 is not limited to a thin one, but as illustrated in FIG. 5, may be made thick to form the concave part S. In the case of FIG. 5, the surface on the downstream side of the orifice member 51 is perpendicular to the axis.
The terminal of the exhaust gas sampling pipe may be present on the upstream side of the orifice hole, be flush with the orifice hole, or penetrate through the orifice hole to be positioned on the downstream side of the orifice hole.
A dilution range may be determined with at least one or more of an aperture ratio of the orifice hole (the area of the orifice hole with respect to the inner circumferential cross-sectional area of the tunnel), the axial direction distance between the terminal of the exhaust gas sampling pipe and the orifice hole, the outside diameter of the exhaust gas sampling pipe and the inside diameter of the orifice hole, and the tilt angle of the concave part surface of the orifice member as parameters.
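The geometric parameters listed above can be computed directly from the pipe dimensions. The sketch below shows the aperture ratio and the annular air gap between the sampling pipe and the orifice hole; the function names and all dimensions are hypothetical illustrations, not values specified in the patent.

```python
import math

def circle_area(d: float) -> float:
    """Area of a circle of diameter d."""
    return math.pi * d ** 2 / 4.0

def aperture_ratio(hole_d: float, tunnel_d: float) -> float:
    """Area of the orifice hole relative to the tunnel's
    inner circumferential cross-sectional area."""
    return circle_area(hole_d) / circle_area(tunnel_d)

def annular_gap_area(hole_d: float, pipe_od: float) -> float:
    """Open area between the orifice hole and the sampling pipe's
    outside diameter -- the gap the diluent air passes through when
    the pipe end sits at the orifice (pipe OD < hole ID)."""
    if pipe_od >= hole_d:
        raise ValueError("pipe OD must be smaller than the orifice hole ID")
    return circle_area(hole_d) - circle_area(pipe_od)

ratio = aperture_ratio(hole_d=0.05, tunnel_d=0.20)  # 50 mm hole, 200 mm tunnel
assert abs(ratio - 1 / 16) < 1e-12                  # (50/200)^2 = 1/16
gap = annular_gap_area(hole_d=0.05, pipe_od=0.045)  # 45 mm pipe in 50 mm hole
assert gap > 0
```

Varying any one of these quantities (hole diameter, pipe outside diameter, tunnel diameter) shifts the flow split between the annular air gap and the exhaust stream, which is why the text treats them as tuning parameters for the dilution range.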
It goes without saying that as the dilution pipe, not only the so-called dilution tunnel but a general piping component may be used.
In addition, the present invention can also be applied to a partial dilution device.
FIG. 6 illustrates an example of the partial dilution device.
In the diagram, numeral 1 represents an exhaust gas sampling pipe adapted to sample part of raw exhaust gas discharged from an internal combustion engine E.
Numeral 2 represents a dilution tunnel (hereinafter also simply referred to as a tunnel) adapted to introduce the raw exhaust gas thereinto through the exhaust gas sampling pipe 1 as well as introducing air as diluent gas thereinto to dilute the exhaust gas with the air.
Numeral 3 represents a flow rate control device adapted to perform control to make the flow rate of the exhaust gas sampled through the exhaust gas sampling pipe 1 equal to a predetermined ratio of the total flow rate of the exhaust gas discharged from the internal combustion engine E. The flow rate control device 3 is configured to include: a constant flow rate keeping mechanism 31 adapted to keep the flow rate of mixed gas led out of the tunnel 2 constant; and a diluent gas flow rate control mechanism 32 adapted to control the flow rate of the air to be introduced into the tunnel 2 in accordance with the exhaust gas flow rate.
The constant flow rate keeping mechanism 31 is one including a pump 311 provided downstream of a filter collection device 4 and a mixed gas flow rate sensor 312, in which a control circuit 6 controls the rotation speed of the pump 311 so as to make the mixed gas flow rate measured by the mixed gas flow rate sensor 312 constant.
The diluent gas flow rate control mechanism 32 is one including: an adjustment mechanism 321 that is provided in an air introduction flow path connected to an introduction port 2b of the tunnel 2 and adapted to adjust the flow rate of the air to be introduced into the tunnel 2; an air flow rate measurement sensor 322 adapted to measure the air flow rate; and an exhaust gas flow rate sensor 323 adapted to measure the total flow rate of the exhaust gas discharged from the internal combustion engine E. Also, the control circuit 6 controls the air flow rate such that a sampling flow rate calculated by subtracting the air flow rate measured by the air flow rate measurement sensor 322 from the mixed gas flow rate becomes equal to the predetermined ratio of the exhaust gas total flow rate measured by the exhaust gas flow rate sensor 323.
Numeral 4 represents an exhaust gas analyzing device, and as the exhaust gas analyzing device, a filter collection device adapted to sample PM contained in the mixed gas of the exhaust gas and the air discharged from the tunnel 2 is illustrated here.
Besides, various modifications and combinations of the embodiments may be made without departing from the scope of the present invention.
X: Exhaust gas measuring system
100: Exhaust gas dilution device
1: Exhaust gas introduction pipe
1a: Discharge port
2: Dilution tunnel
51: Orifice member
5a: Orifice hole
S: Concave part
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is an overall schematic diagram of an exhaust gas dilution device and an exhaust gas measuring system in one embodiment of the present invention;
FIG. 2 is a vertical cross-sectional view in which an orifice member in the same embodiment is cut along an axial line;
FIG. 3 is a perspective cross-sectional view illustrating a state where the orifice member in the same embodiment is cut along the axial line;
FIG. 4 is a vertical cross-sectional view in which an orifice member in another embodiment of the present invention is cut along an axial line;
FIG. 5 is a vertical cross-sectional view in which an orifice member in still another embodiment of the present invention is cut along an axial line;
FIG. 6 is an overall schematic diagram of an exhaust gas dilution device and an exhaust gas measuring system in yet another embodiment of the present invention; and
FIG. 7 is a vertical cross-sectional view in which an orifice member in yet another embodiment of the present invention is cut along an axial line.
| |
2013 NAPT Election: Seats open for Regions 1 and 5, Affiliate Member, President-Elect
Four seats on the NAPT Board of Directors are open for election this year: Region 1, Region 5, President-Elect and Affiliate Member. Any person who wishes to be a candidate for NAPT president-elect or regional director must be an active individual member of the association for at least two years; a candidate for affiliate member director must be a current business partner individual member.
Anyone interested in running for president-elect or regional director must be nominated by a minimum of two active individual members of the association; candidates for regional director must be nominated by individuals from that particular region. Individuals running for affiliate member director must be nominated by at least two current business partner individual members.
The president-elect is selected by vote of all active members. Regional directors are selected by vote of only those active members whose mailing address is within that specific region. The affiliate member director is chosen by NAPT Business Partner Individual Members only. The president-elect serves a four-year term (two years as president-elect, two as president); regional directors serve for three years, and the affiliate member director serves for two years.
Call 800-989-NAPT for additional information from NAPT headquarters.
2013 election dates and deadlines
NAPT Elections Committee Chair Peter F. Mannella has announced the following dates and deadlines for the 2013 NAPT Election:
ABSENTEE BALLOT REQUESTS — Any NAPT member seeking to vote by absentee ballot must send a written request for a ballot to NAPT headquarters. All requests must be received no later than the close of business on Wednesday, Sept. 4.
ABSENTEE BALLOTS MAILED TO VOTERS — Official absentee ballots will be mailed to all eligible voters no later than Thursday, Sept. 19.
ABSENTEE BALLOT RETURN — All absentee ballots must be received at NAPT headquarters no later than Wednesday, Oct. 9.
Election Day is Wednesday, Oct. 23. Elections will be held in Grand Rapids, Michigan on the Trade Show floor from 10 a.m. – noon.
Questions about the election should be directed to [email protected].
|
http://multibriefs.com/ShareArticle.php?51fbecff958d9
|
Assessment refers to the evaluation of student learning and could include innovative instruments and measures beyond standardized testing, such as performance-based assessments, to evaluate student mastery and growth.
Resources
Exploring Maine’s Proficiency-Based Education System
This study tour highlighted the role of school, district, and state in moving an education system towards proficiency-based learning.
Transforming Remediation: Understanding the Research, Policy, and Practice
This webinar focused on the key principles that are needed to transform remedial education as well as highlighted promising institution-level practices.
|
https://www.aypf.org/topic-areas/learning-and-teaching-strategies/assessment/?year_published=2013
|
Amy Helm uses lessons she learned from her musician parents to hone her craft and help her on tour.
Amy Helm is scheduled to perform Dec. 2 at Tower Theatre, 425 NW 23rd St., opening for folk trio The Wood Brothers.
One thing musician Amy Helm learned from playing with her father, The Band’s Levon Helm, was how to fail.
Amy Helm, scheduled to perform Dec. 2 at Tower Theatre, 425 NW 23rd St., played in her father’s Midnight Ramble Band until his death from throat cancer in 2012. She said watching him continue to perform while struggling with illness and its complications taught her to be brave, an important lesson for the music business.
This Too Shall Light, Amy Helm’s second solo album, was released in September. She recorded and re-recorded her debut album over a period of several years, but she recorded its follow-up over four days, following producer Joe Henry’s suggestion to record songs she and her backing band were mostly unfamiliar with to give the album a more immediate and improvised sound.
Learning to play the songs in concert with a different set of musicians has given her a chance to do it all over again.
The album comprises songs by Rod Stewart (“Mandolin Wind”), Allen Toussaint (“Freedom for the Stallion”) and Hiss Golden Messenger’s M.C. Taylor and Josh Kaufman (the title track) and at least one song with which Amy Helm was intimately familiar, closer “Gloryland,” which appeared on many of her father’s live set lists in the final years of his life, a tradition she has continued in her own performances.
The influence of Amy Helm’s father on her music career is evident, but the inspiration she has taken from her mother, singer-songwriter Libby Titus, is also important even if it’s not as immediately obvious.
As a musician and co-producer on her father’s 2007 album Dirt Farmer, Amy Helm said she felt confident in the studio because she was so well acquainted with the material, which describes Levon Helm’s childhood as the son of Arkansas cotton growers.
Working on Dirt Farmer was another learning experience.
The show begins at 8 p.m. with folk trio The Wood Brothers scheduled to headline. Visit towertheatreokc.com.
|
https://www.okgazette.com/oklahoma/color-spectrum/Content?oid=5070475
|
This past week Maryland Professional Photographers Association had a class on Conceptual Portraiture with the wonderful artist Tatiana Lumiere.
I will generally jump at any chance to learn more about my craft and it’s especially gratifying to be in person with other photographers to collaborate with. We had a number of models who were coiffed, made-up and costumed by Tatiana and Geniia Elliott Makeup Artist as well as various sets and locations.
As I’ve spoken of before, I have relished the opportunity to take on more creative challenges that stretch both my technical skills and artistic vision. So, I was excited by this class. After looking at the kind of art that Tatiana creates, I knew I would be inspired by having an opportunity to see her work. This was definitely true and I look forward to finding other artists to collaborate with on future projects.
After some demonstration and discussion with Tatiana, we were broken into groups to have our time with the models at various locations around the carriage house where MDPPA meets.
My first stop was with Jamie in front of the fireplace as Tatiana had first worked with the dry ice. As much as I love the textures of the old wood and stone in the carriage house, I didn’t particularly love it with the outfit Jamie was wearing. When Tatiana shot it, there was a curtain of fog behind her and I think I would have liked that better. In post I really pushed the color on the leaves and warmed her to fit in better with the warm tones of the wall. I also added some blur to the background to help separate her. This is also notable as the only set that had artificial light (a large softbox on a studio strobe and a large reflector).
Next we shot with Dana. She was made up reminiscent of Xena: Warrior Princess or a Wildling from Game of Thrones. I felt that the textures of the exterior wall of the carriage house worked thematically. It was pretty much the perfect time of day for a natural light shot. The difference between direct sun and the shadows of the “tunnel” behind the carriage house really help her stand out in the shot.
Next we went back inside to work again with the dry ice and a set that Tatiana arranged. This time it was with Kelly. Clearly, Geniia and Tatiana were taking Kelly’s lovely red hair as inspiration. This was one of the stations where it was challenging to move around and we probably had too many people working it, especially since we were up against the wall and shooting with natural light through the doorway (as you can see above when Jamie was in the pit of fog).
I ended up not getting too many shots from angles that I felt worked for me, but I was so taken with the overall look. I definitely like the dreamy feel of natural light and mist. That’s on my to-do list for a future project.
Finally we got to spend a little time with Rylee. This is kind of cool, because it shows how great natural light can be at the right time of day (in this case, sunset). Soft and glowy without really any effort or technical savvy. I was going for a glamor look, so the post processing might be a little heavier than I normally would use.
All in all, it was a great experience. I picked up some new techniques and inspiration going forward. Which is what it is all about.
|
http://www.brucepress.net/blog/?tag=portrait
|
Before travelling, the Department strongly recommends that you obtain comprehensive travel insurance which will cover all overseas medical costs, including medical repatriation/evacuation, repatriation of remains and legal costs. You should check any exclusions and, in particular, that your policy covers you for the activities you want to undertake.
We don’t have a resident Irish Embassy in Montserrat but you can contact our Consular Assistance Unit if you need guidance on the nearest assistance and we will help you as best we can. Our number is: +353 1 408 2000.
If you’re travelling to Montserrat, our travel advice and updates give you practical tips and useful information.
We advise Irish citizens in Montserrat to take normal precautions.
There is currently an outbreak of Zika Virus (a dengue-like mosquito-borne disease) in Central and South America and the Caribbean. Irish Citizens are advised to follow guidance available on the website of the Health Protection Surveillance Centre (HPSC) at http://www.hpsc.ie/A-Z/Vectorborne/Zika/ .
The Atlantic hurricane season generally runs from June to November each year and can also affect the eastern and southern USA with heavy rain, flooding and extremely high winds.
Citizens with plans to be in the affected region during this period should consider the need to travel based on information relating to extreme weather projections.
Because there is no Irish Embassy or Consulate in Montserrat, we are limited in the help we can offer you in an emergency situation. However, if there is an emergency, or if you need help and advice, you can contact our Consular Assistance Unit at the Department of Foreign Affairs and Trade in Dublin on +353 1 408 2000.
Under the EU Consular Protection Directive, Irish nationals may seek assistance from the Embassy or Consulate of any other EU member state in a country where there is no Irish Embassy or permanent representation.
Add an alert for your destination within the Travelwise App.
Crime remains relatively low in Montserrat but you should take sensible precautions.
Don’t carry your credit card, travel tickets and money together - leave spare cash and valuables in a safe place.
Don’t carry your passport unless absolutely necessary and leave a copy of your passport (and travel and insurance documents) with family or friends at home.
Avoid showing large sums of money in public and don’t use ATMs after dark, especially if you are alone. Check no one has followed you after conducting your business.
Keep a close eye on your personal belongings and hold on to them in public places such as internet cafes, train and bus stations.
Avoid dark and unlit streets and stairways, arrange to be picked up or dropped off as close to your hotel or apartment entrance as possible.
If you’re a victim of a crime while in Montserrat, report it to the local police immediately.
The hurricane season in the Caribbean normally runs from July to October. You should pay close attention to local and international weather reports and follow the advice of local authorities. Always monitor local and international weather updates for the region by accessing, for example, the Weather Channel, or the US National Hurricane Centre website.
Remember, the local laws apply to you as a visitor and it is your responsibility to follow them. Be sensitive to local customs, traditions and practices as your behaviour may be seen as improper, hostile or may even be illegal.
Check with your doctor well in advance of travelling to see if you need any vaccinations for this country.
If you are unsure of the entry requirements for this country, including visa and other immigration information, ask your travel agent or contact the country’s nearest Embassy or Consulate.
You can also check with them how long your passport must be valid for.
We do not have an Embassy in Montserrat, please contact our office in Dublin.
|
http://foreign-affairs.net/travel/travel-advice/a-z-list-of-countries/montserrat/
|
It is shown that optimum distance flag codes attaining the best possible size, given an admissible type vector, must have a spread as the subspace code used at the corresponding shot.
Partial spreads in random network coding
- Computer Science, Finite Fields Their Appl.
- 2014
Constructions, decoding and automorphisms of subspace codes
- Mathematics
- 2013
Subspace codes are a family of codes used for (among others) random network coding, which is a model for multicast communication. These codes are defined as sets of vector spaces over a finite field.…
Geometric decoding of subspace codes with explicit Schubert calculus applied to spread codes
- Computer Science, ArXiv
- 2016
A decoding algorithm for error-correcting subspace codes for Desarguesian spread codes, which are known to be defined as the intersection of the Plücker embedding of the Grassmannian with a linear space.
Research Statement 1 of 4 Subspace Codes for Random Network Coding 1 Classical Coding Theory ?
- Computer Science
- 2014
Algebraic coding theory studies the balance between adding mathematical redundancy to data versus the cost of sending the redundancy, and how to decode the received word to the closest codeword.
Spread decoding in extension fields
- Computer Science, Finite Fields Their Appl.
- 2014
Optimum distance flag codes from spreads via perfect matchings in graphs
- Computer Science, Journal of Algebraic Combinatorics
- 2021
The set of admissible type vectors for this family of flag codes is characterized and a construction of them is provided based on well-known results about perfect matchings in graphs.
Analysis and Constructions of Subspace Codes
- Computer Science
- 2015
This dissertation explores properties of one particular construction and introduces a new construction for subspace codes, which is recursive, and uses the linkage construction to generalize some constructions of partial spreads.
Cyclic Orbit Codes
- Computer Science, IEEE Transactions on Information Theory
- 2013
It is shown how orbit codes can be seen as an analog of linear codes in the block coding case and how the structure of cyclic orbit code can be utilized to compute the minimum distance and cardinality of a given code.
Equidistant subspace codes
- Computer Science, ArXiv
- 2015
References
SHOWING 1-10 OF 27 REFERENCES
Construction of Large Constant Dimension Codes with a Prescribed Minimum Distance
- Computer Science, MMICS
- 2008
A method of Braun, Kerber and Laue which they used for the construction of designs over finite fields to construct constant dimension codes is modified and many new constant Dimension codes with a larger number of codewords than previously known codes are found.
Algebraic list-decoding on the operator channel
- Computer Science, 2010 IEEE International Symposium on Information Theory
- 2010
For any integer L, the list-L decoder guarantees successful recovery of the message subspace provided the normalized dimension of the error is at most L − L²(L+1)R/2, where R is the normalized rate of the code.
Coding for Errors and Erasures in Random Network Coding
- Computer Science, IEEE Transactions on Information Theory
- 2008
A Reed-Solomon-like code construction, related to Gabidulin's construction of maximum rank-distance codes, is described and a Sudan-style “list-1” minimum-distance decoding algorithm is provided.
Error-Correcting Codes in Projective Spaces Via Rank-Metric Codes and Ferrers Diagrams
- Computer Science, IEEE Transactions on Information Theory
- 2009
This paper proposes a method to design error-correcting codes in the projective space by using a multilevel approach to design a new class of rank-metric codes and presents a decoding algorithm to the constructed codes.
A Rank-Metric Approach to Error Control in Random Network Coding
- Computer Science, IEEE Transactions on Information Theory
- 2008
The problem of error control in random linear network coding is addressed from a matrix perspective that is closely related to the subspace perspective of Kötter and Kschischang, and an efficient decoding algorithm is proposed that can properly exploit erasures and deviations.
A Welch-Berlekamp Like Algorithm for Decoding Gabidulin Codes
- Computer Science, WCC
- 2005
The decoding of Gabidulin codes can be seen as an instance of the problem of reconstruction of linearized polynomials, which leads to the design of two efficient decoding algorithms inspired from the Welch–Berlekamp decoding algorithm for Reed–Solomon codes.
A complete characterization of irreducible cyclic orbit codes and their Plücker embedding
- Computer Science, Des. Codes Cryptogr.
- 2013
This paper gives a complete characterization of orbit codes that are generated by an irreducible cyclic group, i.e. a group having one generator that has no non-trivial invariant subspace.
Spread codes and spread decoding in network coding
- Computer Science, 2008 IEEE International Symposium on Information Theory
- 2008
The paper introduces the class of spread codes for the use in random network coding and proposes an efficient decoding algorithm up to half the minimum distance.
Fast decoding of rank-codes with rank errors and column erasures
- Computer Science, International Symposium on Information Theory, 2004. ISIT 2004. Proceedings.
- 2004
A new modified Berlekamp-Massey algorithm for correcting rank errors and column erasures is described, which is about half as complex as the known algorithms.
On Linear Network Coding
- Computer Science
- 2010
This work demonstrates how codes with one notion of linearity can be used to build, in a distributed manner, codes with another notion oflinearity, and introduces the new class of filter-bank network codes of which all previous definitions of linear network codes are special cases.
|
https://www.semanticscholar.org/paper/An-algebraic-approach-for-decoding-spread-codes-Gorla-Manganiello/4205e5d9d07f9e6008ac3e0d8a6493f59ae126fa
|
More than 20,000 earthquakes have shaken southern Iceland this week, rattling the capital city of Reykjavik and keeping geologists on their toes as all signs point to a pending volcanic eruption, the Icelandic Meteorological Office (IMO) reported on Thursday (March 4).
This week’s marathon of quakes continues a swarm of seismic activity that began on Feb. 24, when a 5.7-magnitude earthquake struck near Iceland’s Reykjanes Peninsula — about 20 miles (32 kilometers) from the capital city.
Earthquakes in the 5.0- to 5.9-magnitude range are considered moderate, and can result in slight damage to nearby buildings, according to Michigan Technological University. Fortunately, the quake’s epicenter was far enough from the island’s populated areas that no damage or injuries were reported.
The vast majority of the thousands of quakes that have followed the Feb. 24 event have been minor, with only two temblors registering above magnitude 5.0, according to the IMO. Still, residents of Reykjavik have felt the shaking day after day, with some “waking up with an earthquake, others [going] to sleep with an earthquake,” Thorvaldur Thordarson, a professor of volcanology at the University of Iceland, told The New York Times.
While disconcerting, there is “nothing to worry about,” Thordarson added, as the quakes have all been minor and distant enough to leave Reykjavik unharmed. (Meanwhile, the IMO issued a warning of increased landslide risk on the Reykjanes Peninsula, but had no further guidance for city-dwellers.)
In the past, seismic swarms like this one have been observed ahead of volcanic eruptions in southern Iceland, according to the IMO. Magma movement at the boundary where the North American and Eurasian tectonic plates meet likely caused the tremors, the agency said, which could fuel the five active volcanoes on the Reykjanes Peninsula.
If any of southern Iceland’s volcanoes do blow their tops in the coming weeks, the eruptions will be both expected and manageable. According to Thordarson, southern Iceland’s volcanoes experience “pulses” of activity every 800 years or so, and the last pulse occurred between the 11th and 13th centuries. Iceland is “on time” for another eruption cycle, he added.
Like the earthquakes, these potential eruptions should also pose little threat to the inhabitants of Iceland. Such eruptions would look nothing like the explosive 2010 eruption of the Eyjafjallajökull volcano, which sent an ash column more than 5 miles (9 km) into the sky, forced hundreds of people to evacuate and halted European air traffic for six days, volcanologist Dave McGarvie wrote in The Conversation.
“Eruptions in southwest Iceland are of a fluid rock type called basalt. This results in slow-moving streams of lava fed from gently exploding craters and cones,” wrote McGarvie, of Lancaster University in Lancashire, England. “In Iceland, these are warmly called ‘tourist eruptions’ as they are relatively safe and predictable.”
Currently, tourists entering Iceland are subject to a five-day quarantine period due to the COVID-19 pandemic, so hopeful volcano watchers will have to move fast, or settle for the webcam view.
Originally published on Live Science.
|
https://newsroom.dinestle.co.tz/swarm-of-20000-earthquakes-could-make-icelands-volcanoes-erupt/
|
The Institute of Civil Engineering is one of Switzerland's leading teaching, continuing education and research institutions in the field of structures, building envelopes, and construction and project management.
The first yearbook of the Institute of Civil Engineering provides an insightful overview of the institute's various activities and achievements.
Each year, the civil engineering student association awards the «Best of Bachelor» prize for the best civil engineering Bachelor's theses at Swiss universities of applied sciences. In January, Thomas Gämperle of the Lucerne University of Applied Sciences and Arts received the prize for his thesis «UHFB-Plattenbalkenbrücke — Weiterentwicklung» (further development of a T-beam bridge in ultra-high-performance fibre-reinforced concrete). UHFB is the German abbreviation for ultra-high-performance fibre-reinforced concrete.
The project Sozialräume für eine Werkhalle - Gwatt (social rooms for a workshop hall in Gwatt), realized with the involvement of Uwe Teutsch, was awarded the renowned Prix Acier. The prize honours structures that exemplify the architectural quality and technical capability of Swiss steel and metal construction.
Structural engineers are essential in order to ensure that permanent structures can stand the test of time over decades while withstanding storms and earthquakes along the way. These engineers design and dimension structures and assess potential hazards.
The expansion and maintenance of infrastructure such as road, water supply and sewage networks will be the dominant theme in infrastructure engineering of the future. This is a challenge that civil engineers in the Traffic and Water specialization will have to deal with.
Building envelopes protect against the effects of weathering and should be optimized for energy efficiency. Furthermore, facades also have to meet architectural requirements. The Lucerne University of Applied Sciences and Arts is the only university of applied sciences to train the engineers sought after in this field.
The complex problems seen in contemporary construction practice can only be solved through interdisciplinary cooperation. The most suitable students are given the opportunity to prepare optimally for this challenge as part of the "Bachelor + Interdisciplinarity in Construction".
Master's students at the IBI experience a forward-thinking degree program aimed at their future career. It focuses on research topics such as the load-bearing and deformation behaviour of structures, non-linear finite element modeling, load-bearing systems for glass and facades, and many more.
The Competence Center Structural Engineering (CC SE) enhances constructions and buildings by researching the responsible use of resources and energy and designing or improving constructions in an efficient and sustainable manner. As a reliable partner, the competence center provides assistance during consultations and surveys, development processes, simulations and material testing.
The building envelope combines architectural, design- and construction-based as well as energy-based requirements. With its interdisciplinary team, the Competence Center Building Envelopes (BE) offers applied research and development and additional services for companies, architects and building owners alike.
With one of Europe's largest facade test rigs, the Institute of Civil Engineering has an outstanding infrastructure at its disposal. In terms of the building envelope, facades and windows of all kinds are analyzed and tested here.
The Research Group Envelopes and Solar Energy (RG EASE) runs a highly specialized laboratory for goniophotometry, an outdoor facility for measuring global and diffuse solar radiation, and a laboratory for light simulation.
Business partners can use the accredited material laboratory (STS 209) for examining building materials and testing components on glass, facade, window and door systems. Thanks to the impressive infrastructure, special tests can also be carried out on building products and complete engineering components.
The research group in solid construction addresses the development of new engineering components and buildings made from reinforced and prestressed concrete, plus the examination of existing ones.
Construction professionals can find interesting programs relating to construction and project management or specialist topics.
|
https://www.hslu.ch/en/lucerne-school-of-engineering-architecture/institutes/bauingenieurwesen/
|
Associations of isometric and isoinertial trunk muscle strength measurements and lumbar paraspinal muscle cross-sectional areas.
The relationships of dynamic and static trunk muscle strength measurements and muscle geometry are studied. Physiologically, isometric muscle strength is directly related to muscle cross-sectional area. We measured isometric and isoinertial trunk muscle strength of 111 former elite male athletes, aged 45-68, by Isostation B-200. Paraspinal muscle cross-sectional areas were measured from axial magnetic resonance images at the L3-L4 level. Isometric and isoinertial torques were closely related, but angular velocities were not predicted by isometric maximal torque. The area of the psoas muscles correlated with isometric maximal flexion, as well as with isoinertial maximal torque, angular velocity, and power in flexion (r = 0.24-0.27). The area of the extensor group correlated with isometric maximal extension and with isoinertial maximal torque and power in extension (r = 0.24-0.25). We conclude that dynamic and static strength measurements are closely related, with angular velocity giving additional information on muscle function. Paraspinal muscle cross-sectional area is one determinant of isometric and isoinertial trunk muscle strength.
| |
Job Description:
Do you have the ability to build relationships across a large organization, and do you want the opportunity to use your non-authoritative leadership and interpersonal skills to influence outcomes? In this role, you will ensure our plant operations comply with regulatory requirements as well as our internal policies and codes of practice. You will also provide technical support and leadership to improve overall industrial hygiene awareness, and the continued success of our Edmonton refinery’s operations.
Job Responsibilities:
- Identify and mitigate chemical, physical, biological, and ergonomics issues
- Maintain and calibrate industrial hygiene equipment regularly
- Participate as an integral member of the environmental, health, and safety (EH&S) team at the refinery as well as within the larger Suncor industrial hygiene network
- Conduct workplace assessments to identify, quantify and control risk to people, property and equipment
- Provide day-to-day technical advice, expertise and direction to clients in order to ensure compliance to legislative requirements and to both industry and company standards.
- Assist in the development and implementation of health and safety programs and processes that support the business
- Collaborate with the refinery’s safety department on a regular basis and work with regulatory authorities on industrial hygiene matters
Job Requirements:
- A continuous improvement mindset and the ability to seek greater knowledge and understanding of the systems, processes and hazards in the workplace
- Comfort making presentations and providing training to various levels of management and employees
- Alignment with our values: safety above all else, stronger together, operational discipline, curiosity and lifelong learning, and act with integrity
- Strong collaboration skills that enable you to build positive relationships with diverse groups
- Outstanding verbal and written communication skills
- Excellent problem solving and decision-making skills, and the ability to use independent and analytical skills to make decisions
Qualification & Experience:
- One year of previous experience
- A Bachelor’s degree or a Technical Diploma in Occupational Hygiene/Industrial Hygiene, Health & Safety, Environmental Sciences
Job Details:
|
https://www.getyoursvacancy.com/job/suncor-jobs-04/
|
In Chapter 5 we developed the idea that, at the schematic model level of representation within SPAARS, individuals possess a variety of models of the world, self, and others which form the basis of their "reality". For most of us there is a sense that the world is a reasonably safe place, that we have more or less control over what goes on in our world, that the actions of other people are pretty much predictable, that bad things don't usually happen to us, and so on (Dalgleish, 2004a). These models act as organising and guiding principles throughout our daily lives and are fundamental to the goals we set for ourselves and the ways in which we try to fulfil them. The maintenance of schematic models also represents the highest level on the goal hierarchy. For example, the "goal" of maintaining a sense of self or a sense of reality.
Within this kind of analysis, schematic models develop as a function of the individual's learning history. This learning history, in turn, is essentially a history of the success at achieving goals and the various obstacles to those goals that the individual has encountered. A learning history in which the individual has been reasonably successful in fulfilling goals or has been able to negotiate the obstacles to those goals is likely to lead to a set of schematic models about the self, world, and others which is generally positive. We are not suggesting that this is a one-way relationship; as we have stated above, the goals individuals set either implicitly or explicitly are a function of the schematic models that individuals bring to bear on their circumstances. An individual is only likely to set goals that are viewed as more or less attainable within the parameters of the schematic models that are being applied. Such goals are consequently more likely to be attained and the positive nature of the schematic models is thereby strengthened. What we have, then, is a proposed interactive system involving schematic models of the self, world, and others, and the achievement and setting of goals at different levels and in different domains of the individual's life, such that the system as a whole functions in a generally positive way.
An interactive system of this kind would have several implications: (1) that the schematic models of normal healthy individuals are generally positive and self-serving; (2) that, consequently, such individuals will show processing biases on a number of cognitive and social cognitive tasks (Isen, 1999) and a number of such positive biases are discussed in the section on depressive realism in Chapter 7 and also below; (3) the current state of the interactive goal-model system is reflected in trait constructs such as optimism/pessimism (see Lyubomirsky et al., 2005).
|
https://www.mitchmedical.us/cognition-emotion/the-relationship-between-goal-structures-and-schematic-models-within-spaars.html
|
The Entrion Vendor Surveillance Program manages and reduces equipment delivery and quality risks. The program provides experienced professionals who are located near OEM’s and contracted vendors. Our professionals review required standards and certifications and become an interface mechanism to confirm compliance mandated by unique specifications for each client.
To provide the best results to our customers, we combine engineering, manufacturing and operations experience with risk management expertise and lessons learned. We optimize resources by employing local professionals and prioritizing critical equipment.
Program Planning: This process defines equipment scope and activities based on equipment criticality. It includes a long-term vendor management plan, budget, and short-term work schedules.
Program Execution: Based on equipment criticality, execution includes some or all of the following activities: inspection test plan (ITP) and quality plan (QP) review, vendor kick-off meetings (KOM), progress monitoring (PM), factory acceptance testing (FAT), final inspections, and data book review.
Reporting and Feedback: This process reports on survey results and supports punch resolution.
Management: The program designates a project manager who works with Entrion’s local operations managers to execute work. This team tracks progress against budgets, dispatches surveyors, ensures consistent quality reports, and coordinates punch resolution with customers and vendors.
© 2017 Entrion Inc. All Rights Reserved.
|
http://www.entrion.com/409-2/
|
In the current Azure SQL Database Managed Instance (MI) preview, when customers create a new instance, they can allocate a certain number of CPU vCores and a certain amount of disk storage space for the instance. However, there is no explicit configuration option for the amount of memory allocated to the instance, because on MI, memory allocation is proportional to the number of vCores used.
How can a customer determine the actual amount of memory their MI instance can use, in GB? The answer is less obvious than it may seem. Using the traditional SQL Server methods will not provide the right answer on MI. In this article, we will go over the technical details of CPU and memory allocation on MI, and describe the correct way to answer this question.
The information and behavior described in this article are as of the time of writing (April 2018). Some aspects of MI behavior, including the visibility of certain compute resource allocations, may be temporary and will likely change as MI progresses from the current preview to general availability and beyond. Nevertheless, customers using MI in preview will find that this article answers some of the common questions about MI resource allocation.
First glance at CPU and memory on MI
We will use a MI instance with 8 vCores as an example. On the traditional SQL Server, most customers would look at the Server Properties dialog in SSMS to see the compute resources available to the instance. On our example MI instance, this is what we see:
We should note right away that the resource numbers in this dialog, as well as in several other sources (DMVs) described later, can change over the lifetime of a given MI instance, and these changes can be relatively frequent. Customers should not take dependencies on these numbers or draw conclusions from them. Later in the article, we will describe the correct way to determine actual compute resource allocation on MI.
An immediate question is why we see 24 processors here, when we have created this instance with only 8 vCores/processors. To determine the actual number of logical processors available to this instance, we can look at the number of VISIBLE ONLINE schedulers in the sys.dm_os_schedulers DMV:
SELECT COUNT(1) AS SchedulerCount FROM sys.dm_os_schedulers WHERE status = 'VISIBLE ONLINE';
SchedulerCount
--------------
8
This is in line with the number of vCores we have for this MI instance. Then why does SSMS show that 24 processors are available?
CPU and memory resources are managed differently on MI
To answer this question, we need to take a high-level look at the MI architecture. Each MI instance runs in a virtual machine (VM). Each VM may host multiple MI instances of varying sizes, in terms of compute resources allocated to the instance.
It is important to note here that all MI instances on a given VM always belong to the same customer; there is no multi-tenancy at the VM level. In effect, the VM hosting MI instances serves as an additional isolation boundary for customer workloads. This does not mean that if a customer creates multiple MI instances, they will necessarily be packed on the same VM. In reality, this does not happen very often. The service intelligently allocates instances to VMs to always provide guaranteed SLAs and ensure good customer experience.
What SSMS shows in the Server Properties dialog is the number of processors and the amount of memory at the OS level on the VM that happens to currently host the instance. This works the same way for the traditional SQL Server, where SSMS also shows OS level numbers. SQL Server error log, which is accessible on MI, shows the same information during MI instance startup:
SQL Server detected 2 sockets with 12 cores per socket and 12 logical processors per socket, 24 total logical processors; using 24 logical processors based on SQL Server licensing. This is an informational message; no user action is required. Detected 172031 MB of RAM. This is an informational message; no user action is required.
For our example MI instance, this means that there are 24 processors accessible to the OS on the underlying VM. However, given the number of visible online schedulers, the MI instance can only use 8 of these processors, as expected given its current provisioned size.
To reiterate, the number of processors and the amount of memory at the VM level (24 processors and 172031 MB in this example) is not fixed. It can change over time as the MI instance moves across VMs allocated to the customer, for example when it is scaled up or scaled down. These values will, however, always be larger than or equal to the resource values actually allocated to the instance.
But what about memory? Does this MI instance have 168 GB of memory, as shown in SSMS and in the error log? Let’s look at some DMVs.
SELECT cpu_count, physical_memory_kb, committed_target_kb FROM sys.dm_os_sys_info;
cpu_count   physical_memory_kb   committed_target_kb
---------   ------------------   -------------------
8           176,160,308          48,586,752
SELECT cntr_value FROM sys.dm_os_performance_counters WHERE object_name LIKE '%Memory Manager%' AND counter_name = 'Target Server Memory (KB)';
cntr_value
----------
48,586,752
Both of these show that the server target memory, which is commonly used to measure the amount of memory available to the instance, is about 46 GB, while the total physical memory at the OS level is 168 GB, as seen in SSMS and in the error log. This shows that not all memory available at the VM OS level is allocated to this MI instance.
On the traditional SQL Server, the usual reason for the target memory to be much lower than the available OS physical memory is configuring a limit on server maximum memory using sp_configure. Is that the case here?
SELECT value_in_use FROM sys.configurations WHERE name = 'max server memory (MB)';
value_in_use
------------
2147483647
This large value shows that the maximum server memory for this instance is not limited, and the instance should be able to allocate all available physical memory. What is causing target memory to be much less than the total physical memory for this instance? Is there some other mechanism that can impose a lower limit?
For MI, this mechanism is Job Objects. Running a process such as SQL Server in a job object provides resource governance for the process at the OS level, including CPU, memory, and IO. This resource governance is what allows the service to share the same VM among multiple instances belonging to the same customer, without resource contention and “noisy neighbor” problems. At the same time, this mechanism guarantees a dedicated allocation of vCores and memory for each instance. Memory is allocated according to the GB/vCore ratio for the instance size selected by the customer. There is no overcommit of either vCores or memory across instances on the same VM. In other words, the instance always gets the resources specified during provisioning.
Can we see the configuration of the job object that contains our MI instance? Yes, we can:
SELECT cpu_rate, cpu_affinity_mask, process_memory_limit_mb, non_sos_mem_gap_mb FROM sys.dm_os_job_object;
cpu_rate   cpu_affinity_mask   process_memory_limit_mb   non_sos_mem_gap_mb
--------   -----------------   -----------------------   ------------------
100        255                 57,176                    9,728
The sys.dm_os_job_object DMV exposes the configuration of the job object that hosts the MI instance. For our current topic, there are four columns in the DMV that are particularly relevant:
cpu_rate: this is set to 100%, showing that each vCore can be utilized by the MI instance to its full capacity.
cpu_affinity_mask: this is set to 11111111 (in binary), showing that only eight OS level processors can be used by the process hosted in the job object (SQL Server). This is in line with the number of vCores provisioned for this instance.
process_memory_limit_mb: this is set to about 56 GB, and is the total amount of memory allocated to the process within the job object. Note that this is larger than the server target memory, which, as we saw earlier, is about 46 GB. The next column provides the explanation.
non_sos_mem_gap_mb: this is the amount of memory that is a part of total process memory, but is not available for SQL Server SOS (SQL OS) memory allocations, i.e. is reserved for things like thread stacks and DLLs loaded into the SQL Server process space. The difference between process_memory_limit_mb and non_sos_mem_gap_mb is 46 GB, which is exactly the server target memory that we saw earlier.
To elaborate on the last point, even though SQL Server target memory visible in DMVs and in the output of DBCC MEMORYSTATUS is less than the total memory allocated to the instance, this difference, known as the non-SOS memory gap, is still being used by the instance. In fact, a sufficiently large allocation of non-SOS memory is required for the instance to function reliably.
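As a quick sanity check on the arithmetic above, the following minimal Python sketch cross-checks the job object numbers against the server target memory. The values are simply the ones reported by the DMVs for this example 8 vCore instance:

```python
# Cross-check the job object limits against the server target memory.
# Values are taken from the DMV output shown above for the 8 vCore instance.
process_memory_limit_mb = 57_176       # sys.dm_os_job_object
non_sos_mem_gap_mb = 9_728             # sys.dm_os_job_object
target_server_memory_kb = 48_586_752   # sys.dm_os_performance_counters

# SOS memory = total process memory minus the non-SOS gap
sos_memory_mb = process_memory_limit_mb - non_sos_mem_gap_mb
print(sos_memory_mb)                    # 47448 MB, about 46 GB
print(target_server_memory_kb // 1024)  # 47448 MB -- an exact match
```

The two results match exactly, confirming that the server target memory is the job object process memory limit minus the non-SOS gap.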
Conclusion
To summarize the technical details above:
1. MI compute resource values shown in SSMS and the instance error log reflect resources at the underlying OS level, not the actual resources available to the MI instance. The resource values at the OS level can change over time, without affecting the resources allocated to the MI instance in any way.
2. The MI instance is resource-governed at the OS level using a job object.
3. The sys.dm_os_job_object DMV exposes job object configuration for the MI instance. This is the DMV that should be used to determine the actual compute resources (vCores and memory) allocated to the MI instance, as described above.
We hope that the information in this article will help customers using Managed Instance to accurately and confidently determine the amount of compute resources allocated to their instances, and avoid potential confusion in this area due to architectural differences between the traditional SQL Server and Managed Instance.
https://docs.microsoft.com/en-us/archive/blogs/sqlcat/cpu-and-memory-allocation-on-azure-sql-database-managed-instance
10 Best Books For Data Structure And Algorithms For ...
The authors' treatment of data structures in Data Structures and Algorithms is unified by an informal notion of "abstract data types," allowing readers to compare different implementations of the same concept. Algorithm design techniques are also stressed and basic algorithm analysis is covered. Most of the programs are written in Pascal.
The book Data Structures and Algorithms Made Easy, by Narasimha Karumanchi, is a very famous book on data structures and algorithms, and it is very beginner-friendly. If anyone wants to learn data structures and algorithms from the basic level to a decent level in the simplest way and language, this is the book for you.
Data Structures and Algorithms (DSA) features implementations of data structures and algorithms that are not implemented in any version of .NET. This book is the result of a series of emails sent back and forth between the two authors during the development of a library for the .NET framework of the same name.
Data Structures and Algorithms in C++, Ch. 7, Exercise R-7.1: a leaf node is a node of the binary tree which has no children.
https://upscoverflow.in/data-structures-and-algorithms-textbook-3/
The invention relates to a technique for analyzing seismic data quality. The invention has the following steps: first, selecting common-shot-gather data at control points, determining the maximum analytical frequency, and selecting analysis frequency bands according to the range of the effective reflected signal frequency in the target block; carrying out frequency-band filtering on each seismic trace of each shot to obtain the frequency-division result of each trace; applying time-domain median filtering, using formula (1), to all seismic traces of each frequency band in each shot to obtain statistical energy curves for each frequency band; and drawing the statistical energy curves of all frequency bands of the same shot on a single chart to obtain a time-frequency analysis chart. The invention can quantitatively reflect the data quality at control points rapidly and correctly, avoid errors of human analysis and obtain reliable results.
CLAIM OF PRIORITY
STATEMENT REGARDING FEDERALLY SPONSORED RESEARCH OR DEVELOPMENT
BACKGROUND
SUMMARY
DETAILED DESCRIPTION
This application claims priority under 35 USC §119(e) to U.S. patent application Ser. Nos. 60/618,244 and 60/618,366, both filed on Oct. 12, 2004, the entire contents of which are hereby incorporated by reference.
This invention was made with government support under Contract No. N66001-00-1-8914 awarded by the Space and Naval Warfare Systems Command. The government has certain rights in the invention.
Modern machine translation systems use word to word and phrase to phrase probabilistic channel models as well as probabilistic n-gram language models.
A conventional way of translating using machine translation is illustrated in FIG. 1. FIG. 1 illustrates the concept of Chinese and English as being the language pair, but it should be understood that any other language pair may alternatively be used.
Training is shown in FIG. 1, where a training corpora is used. The corpora has an English string and a Chinese string. An existing technique may be used to align the words in the training corpora at a word level. The aligned words are input to a training module which is used to form probabilities based on the training corpora. A decoding module is used that maximizes argmax_e P(e)*P(f|e), that is, it finds the e with the highest probability given the observed string, where e and f are words or phrases in the training corpora. The decoding module may simply be a module within the same unit as the training module. The decoder takes a new Chinese string, and uses the probabilities along with a language model, which may be an n-gram language model. The decoder outputs the English strings which correspond to the highest scores based on the probabilities and the language model.
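The argmax decision rule above can be illustrated with a toy Python sketch; the candidate strings and all probabilities here are invented purely for illustration, not taken from any trained model:

```python
# Toy noisy-channel decoder: choose e maximizing P(e) * P(f | e).
# Both tables below are hypothetical, for illustration only.
p_e = {"he does not go": 0.020, "he not go": 0.001}        # language model P(e)
p_f_given_e = {"he does not go": 0.3, "he not go": 0.4}    # channel model P(f|e)

best = max(p_e, key=lambda e: p_e[e] * p_f_given_e[e])
print(best)  # he does not go
```

Even though the channel model slightly prefers the ungrammatical candidate, the language model term tips the product toward the well-formed string, which is exactly the division of labor the noisy-channel formulation is designed to exploit.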
Phrase based systems may sometimes yield the most accurate translations. However, these systems are often too weak to encourage long-distance constituent reordering when translating the source sentences into a target language, and do not control for globally grammatical output.
Other systems may attempt to solve these problems using syntax. For example, certain reordering in certain language pairs can be carried out. One study has shown that many common translation patterns fall outside the scope of the Child reordering model of Yamada & Knight, even for similar language pairs such as English/French. This led to different possible alternatives. One suggestion was to abandon syntax on the grounds that syntax was a poor fit for the data. Another possibility is to maintain the valid English syntax while investigating alternative transformation models.
The present application describes carrying out statistical analysis using trees created from the strings. In training, trees are created and used to form rules in addition to the probabilities. In application, trees are used as output, and either the trees, or information derived from the trees, may be output. The system may take strings of source symbols as input, and output target trees.
In an embodiment, transformation rules that condition on larger fragments of tree structure are created. These rules can be created manually, or automatically through corpus analysis to form a large set of such rules. Specific cases of crossing and divergence may be used to motivate the algorithms to create better explanation of the data and better rules.
The present description describes string to tree translation. Different aspects are described which enable a direct translation between the string and the syntax tree.
The general structure and techniques, and more specific embodiments which can be used to effect different ways of carrying out the more general goals are described herein.
FIG. 2 illustrates an overall block diagram of an embodiment. In an embodiment, the rule learning is used for learning rules for a text to text application. The rule learning and the text to text application may each be carried out on a computer 1000 such as shown in FIG. 10, which includes an associated memory 1001 storing the translation rules, probabilities and/or models. The computers described herein may be any kind of computer, either general purpose, or some specific purpose computer such as a workstation. The computer may be a Pentium class computer, running Windows XP or Linux, or may be a Macintosh computer. The programs may be written in C, or Java, or any other programming language. The programs may be resident on a storage medium, e.g., magnetic or optical, e.g. the computer hard drive, a removable disk or other removable medium. The programs may also be run over a network.
In this embodiment, the English string 151 and Chinese string 152 are first word aligned by alignment device 251. The English string is parsed by a parser 250, as described herein, into an English tree 255 that represents the contents of the English string. The English tree is used along with the Chinese string 152 by a string based training module 260. The translation module 260 produces probabilities shown as 265, and also produces subtree/sub string rules indicative of the training, shown as 270. Thus, the training device produces rules with probabilities, where at least a portion of at least some of these rules are in the form of trees.
The rules and probabilities 267 are used by the decoding module 160 for subsequent decoding of a new Chinese string 161. The decoding module 160 also uses multiple language models, here an n-gram language model 262, and also a syntax based language model 280. The output of the decoding module corresponds to all possible English trees that are translations of the Chinese string according to the rules 267. The highest scoring English trees are displayed to the user. Alternatively, information that is based on those trees may be displayed, for example, string information corresponding to those trees.
Some advantages of the embodiment include the following. The use of information from trees within the rules can allow the model to learn what the different parts represent. For example, the machine translation system of FIG. 1 has no idea what a noun is, but the embodiment can learn that as part of the translation. In addition, the present embodiment provides tree/string rules, as compared with the phrase substitution rules which are produced by the system of FIG. 1. The use of trees enables the use of the syntax based language model 262, which is not conventional in the prior art.
According to another embodiment, the training information in both languages may be parsed into trees prior to the training.
Tree outputs are well formed, having a verb in the right place, for example, and other parts also in the right places. In addition, tree/string rules capture information about when reordering may be useful. Tree/string rules control when to use and when not to use function words. However, many of the tree/string rules may be simple word to phrase substitutions.
The training is described herein with reference to FIGS. 3-9.
FIG. 3A shows a French sentence (il ne va pas) and a parse tree 300 of its translation into English. The parse tree includes the conventional parsing parts: the sentence S, noun phrase (NP), verb phrase (VP) and other conventional sentence parts.
An embodiment defines determining rules using a string from a source alphabet that is mapped to a rooted target tree. Nodes of that rooted target tree are labeled from a target alphabet. In order to maintain this nomenclature, symbols from the source alphabet are referred to as being “source symbols”. Symbols from the target alphabet are referred to as being “target symbols”. A symbol tree is defined over an alphabet Δ as a rooted directed tree. The nodes of this alphabet are each labeled with a symbol of Δ. In an embodiment, a process by which the symbol tree is derived from the string of source signals, over the target language, is captured. The symbol tree to be derived is called the target tree, since it is in the target language. Any subtree of this tree is called a target subtree.
A derivation string S is derived as an ordered sequence of elements, where each of the elements is either a source symbol or a target subtree.
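A derivation string can be modeled directly in code. In this hypothetical Python encoding (an assumption made for illustration), source symbols are plain strings and target subtrees are nested tuples:

```python
# A derivation string: an ordered sequence whose elements are either source
# symbols (plain strings) or target subtrees (nested tuples).
def is_target_subtree(element):
    return isinstance(element, tuple)

# "il ne va pas" after one derivation step that replaced "va" with a VB subtree:
deriv_string = ["il", "ne", ("VB", "go"), "pas"]
subtrees = [e for e in deriv_string if is_target_subtree(e)]
print(subtrees)  # [('VB', 'go')]
```

The mixed sequence is the key property: a partially completed derivation interleaves untranslated source symbols with already-built pieces of the target tree.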
The following is a formal definition of the derivation process. Given a derivation string S, a derivation step replaces a substring S′ of S with a target subtree T that has the following properties:
1. Any target subtree in S′ is also a subtree of T, and
2. Any target subtree in S that is not in S′ does not share nodes with T.
A derivation from a string S of source symbols to the target tree T is a sequence of derivation steps that produces T from S.
Consider the specific example of the alignment in FIG. 3A. FIG. 3B illustrates three different derivations of the target tree 300 from the source French string. The three derivations are labeled 201, 202 and 203. Each of these derivations is consistent with the definitions above.
However, analysis of these derivations shows that at least one of the derivations is more “wrong” than the others. In the second derivation 202, for example, the word “pas” has been replaced by the English word “he”, which is incorrect.
Alignment allows the training system to distinguish between a good derivation and a bad derivation. An alignment between S and T can be used to constrain the possible derivations. If S is a string of source symbols, and T is a target tree, then the definitions above lead to the conclusion that each element of S is replaced at exactly one step of the derivation, and each node of T is created at exactly one step of the derivation. Thus, for each element s of S, a value called replaced(s, D) is defined as the step of the derivation D during which s is replaced. This keeps track of where in the derivation the different parts are replaced.
In 201, the word “va” is replaced in the second step of the derivation.
Each of the different derivations includes a number of “steps”, each step therefore doing certain things. The derivation 201, for example, includes the steps 210, 211, 212, 213. In 201, for example, the French word “va” is replaced during the second step, 211, of the derivation. Thus, in notation form, files can be created which indicate the step at which the words are replaced. For example, here,
Replaced(s,D)=2
Analogously, each node t of T has a value called created(t, D), defined to be the step of derivation D during which t is created. In 201, the nodes labeled AUX (auxiliary) and VP (verb phrase) are created during the third step, 212, of the derivation. Thus, created(AUX, D)=3 and created(VP, D)=3.
Given a string S of source symbols and a target tree T, an alignment A with respect to S and T forms a relation between the leaves of T and the elements of S. If a derivation D between S and T is selected, then the alignment induced by D is created by aligning an element s of S with a leaf node t of T if and only if replaced(s, D) is equal to created(t, D). In other words, a source word is “aligned” with a target word if the target word is created during the same step as that in which the source word is replaced.
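The induced-alignment condition can be sketched in a few lines of Python. The step numbers assigned to each word below are hypothetical, chosen only to mirror the "replaced and created at the same step" rule:

```python
# A source element s is aligned with a target leaf t iff
# replaced(s, D) == created(t, D), i.e. t is created at the very
# derivation step at which s is replaced.
def induced_alignment(replaced, created):
    return {(s, t)
            for s, s_step in replaced.items()
            for t, t_step in created.items()
            if s_step == t_step}

replaced = {"il": 1, "va": 2, "ne": 3, "pas": 3}    # hypothetical step numbers
created  = {"he": 1, "go": 2, "does": 3, "not": 3}  # hypothetical step numbers
alignment = induced_alignment(replaced, created)
print(("va", "go") in alignment)  # True
```

Note that when several source symbols are replaced in the same step as several target leaves are created (as with "ne ... pas" and "does not" here), the induced alignment is many-to-many for that step.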
FIG. 3C illustrates alignments. The tree in 301 corresponds to the derivation in 201 of FIG. 3B. Analogously, 302 corresponds to 202 and 303 corresponds to 203. A rule to analyze the derivations is described. The set of “good” derivations according to an alignment A is precisely that set of derivations that induce alignments A′ such that A is a subalignment of A′. The term subalignment as used herein requires that A ⊂ A′. Since alignments are simple mathematical relations, this is relatively easy to determine. In other words, A is a subalignment of A′ if A aligns two elements only if A′ also aligns those two elements. This is intuitively understandable from FIGS. 3B and 3C. The two derivations that seem correct at a glance are derivations 201 and 203; these induce superalignments of the alignment given in FIG. 3A. The derivation 202, which is clearly wrong, does not induce such a superalignment.
Notationally speaking, a derivation is admitted by an alignment A if it induces a superalignment of A. The set of derivations between source string S and target tree T that are admitted by the alignment A can be denoted by δA(S, T).
In essence, each derivation step can be reconsidered as a rule. Thus, by compiling the set of derivation steps used in any derivation of δA(S, T), the system can determine all relevant rules that can be extracted from (S, T, A). Each derivation step is converted into a usable rule according to this embodiment. That rule can be used for the formation of automated training information.
Derivation step 212 in derivation 201 begins with a source symbol “ne”, which is followed by a target subtree rooted at VB and followed by another source symbol “pas”. These three elements of the derivation string are replaced, by the derivation step, with a target subtree rooted at VP that discards the source symbols and contains the target subtree rooted at VB.
FIG. 4 illustrates how this replacement process can be captured by a rule. 401 shows the derivation step on the left, where the elements are replaced with other elements. 402 shows the induced rule that is formed. The input to the rule includes the roots of the elements in the derivation string that are being replaced. Here, the root of a symbol is defined as being the symbol itself. The output of the rule 402 is a symbol tree. The tree may have some of its leaves labeled with variables rather than symbols from the target alphabet. The variables in the symbol tree correspond to the elements of the input to the rule. For example, the leaf labeled x2 in the induced tree means that when this rule is applied, x2 is replaced by the target subtree rooted at VB, since VB is the second element of the input. The two induced rules 403 and 404 are obtained from the respective derivations. Thus this rule format may be a generalization of CFG rules. Each derivation step can be mapped to a rule in this way.
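A tree/string rule of this form can be sketched as data plus a substitution function. The encoding below (tuples for trees, "x2" for a variable leaf) is an assumption made for illustration, modeled on the "ne … pas" example above:

```python
# A tree/string rule: input = roots of the replaced elements; output = a symbol
# tree whose variable leaves (x1, x2, ...) refer back to input positions.
rule = {
    "input": ["ne", "VB", "pas"],
    "output": ("VP", ("AUX", "does"), ("RB", "not"), "x2"),  # x2 = 2nd input element
}

def apply_rule(rule, elements):
    """Substitute each variable leaf with the corresponding derivation-string element."""
    def sub(node):
        if isinstance(node, str) and node.startswith("x") and node[1:].isdigit():
            return elements[int(node[1:]) - 1]   # x2 -> elements[1]
        if isinstance(node, tuple):
            return (node[0],) + tuple(sub(child) for child in node[1:])
        return node
    return sub(rule["output"])

result = apply_rule(rule, ["ne", ("VB", "go"), "pas"])
print(result)  # ('VP', ('AUX', 'does'), ('RB', 'not'), ('VB', 'go'))
```

Applying the rule discards the source symbols "ne" and "pas" and grafts the already-built VB subtree into the new VP, just as the derivation step does.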
Hence, given a source string S, a target tree T, and an alignment A, the set ρA(S, T) can be defined as the set of rules used in any derivation D ∈ δA(S, T). This set of rules is the set of rules that can be inferred from the triple (S, T, A).
In an embodiment, the set of rules ρA(S, T) can be learned from the triple (S, T, A) using a special alignment graph of the type shown in FIG. 5. The alignment graph is a graph that depicts the triple (S, T, A) as a rooted, directed, acyclic graph. FIG. 5 is drawn top-down, but it should be understood that this orientation can very easily be turned upside down. In an embodiment, certain fragments of the alignment graph are converted into rules of ρA(S, T). A fragment of a directed acyclic graph G is defined herein as a nontrivial subgraph G′ of G such that if a node n is in G′, then either n is a sink node of G′ (a node with no children) or all of n's children are in G′ and n is connected to all of them. Here, nontrivial means that the graph has more than just a single node. FIG. 6 illustrates graph fragments formed from the alignment graph of FIG. 5.
The span of a node n of the alignment graph constitutes the subset of nodes from S that are reachable from n. A span is defined as being contiguous if it contains all the elements in a contiguous substring of S. The closure of span(n) is the shortest contiguous span which is a superset of span(n); for example, the closure of (s2, s3, s5, s7) would be (s2, s3, s4, s5, s6, s7). The alignment graph of FIG. 5 is annotated with the span of each node. For example, each node, such as 500, has an annotation 502 that represents the span of that node.
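The closure operation as defined here is easy to state in code; in this sketch, integer indices stand in for the source symbols s2, s3, and so on:

```python
# closure(span): the shortest contiguous span that is a superset of the span.
def closure(span):
    return set(range(min(span), max(span) + 1)) if span else set()

print(sorted(closure({2, 3, 5, 7})))  # [2, 3, 4, 5, 6, 7]
```

The closure fills in the "holes" of a non-contiguous span, which is exactly what the frontier-set test below needs.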
One aspect is to determine the smallest set of information from these graphs that can form the set of rules. According to this aspect, first smaller parts of the rules are found, and then the rules are put together to form larger parts. The chunks can be defined in different ways; in an embodiment, certain fragments within the alignment graph are defined as being special fragments called frontier graph fragments. The frontier set of the alignment graph is the set of nodes n such that, for each node n′ of the alignment graph that is connected to n but is neither an ancestor nor a descendant of n, span(n′) ∩ closure(span(n)) = ∅. The frontier set in FIG. 5 is shown in boldface and italics.
The frontier graph fragment of an alignment graph is the graph fragment where the root and all sinks are within the frontier set. Frontier graph fragments have the property that the spans of the sinks of the fragment are each contiguous. These spans form a partition of the span of the root, which is also contiguous. A transformation process between spans and roots can be carried out according to the following:
1) First, the sinks are placed in the order defined by the partition. The sink whose span is the first part of the span of the root goes first. This is followed by the sink whose span is the second part of the span of the root, and so on. This forms the input of the rule.
2) Next, the sink nodes of the fragment are replaced with a variable corresponding to their position in the input. Then, the tree part of the fragment is taken, for example by projecting the fragment on T. This forms the output of the rule.
FIG. 6 illustrates certain graph fragments, and the rules, both input and output, that are generated from those graph fragments. Rules constructed according to the conversion between the alignment graph and the rules are within a subset which is called ρA(S, T).
A number of rule extraction techniques are also described herein.
In a first embodiment, rules of ρA(S, T) are extracted from the alignment graph by searching the space of graph fragments for frontier graph fragments. One conceivable problem with this technique, however, is that the search space of all fragments of a graph is exponential in the size of the graph. Thus, this procedure can take a relatively long time to execute. The technique can be improved by the following simplifications.
The first simplification is that the frontier set of an alignment graph can be identified in time linear in the size of the graph. The second simplification is that for each node n of the frontier set, there is a unique minimal frontier graph fragment rooted at n. Because of the definition of the frontier set, any node n′ that is not in the frontier set cannot have a frontier graph fragment rooted at n′. The definition of a minimal fragment requires that the frontier graph fragment is a subgraph of every other frontier graph fragment that has the same root.
For an alignment graph that has k nodes, there are at most k minimal frontier graph fragments.
FIG. 7 shows the seven minimal frontier graph fragments from the alignment graph of FIG. 5. All of the other frontier graph fragments can be created by composing two or more minimal graph fragments. FIG. 8 illustrates how the other frontier graph fragments can be created in this way.
Thus, the entire set of frontier graph fragments, as well as all the rules derivable from those fragments, can be computed systematically according to the flowchart of FIG. 9. The flowchart of FIG. 9 can be run on the computer system of FIG. 10, for example. At 900, the set of minimal frontier graph fragments is computed for each training pair. More generally, any minimal set of information that can be used as a training set can be obtained at this operation.
At 910, the set of graph fragments resulting from composing the minimal graph fragments is computed. This allows the rules derived from the minimal frontier graph fragments to be regarded as a basis for all of the rules that are derivable from the frontier graph fragments.
The rules are actually derived at 920. These rules are derived from the minimal fragments, and include trees or information derived from those trees.
At 930, the rules from the minimal fragments are combined to form “composed” rules.
Thus, the extracting of rules becomes a task of finding the set of minimal frontier graph fragments of any given alignment graph.
This is carried out by computing the frontier set of the alignment graph; for each node of that set, the minimal frontier graph fragment rooted at the node is then determined. The frontier set can be computed in a single pass through the alignment graph: each node is annotated with its span and with its complement span, which is the union of the complement span of its parent and the spans of all its siblings. Here, siblings are nodes that share the same parent.
A node n is in the frontier set if and only if complement_span(n) ∩ closure(span(n)) = ∅. The complement span merely summarizes the spans of all nodes that are neither ancestors nor descendants of n. This step requires only a single traversal of the graph and thus runs in linear time.
The second step, computing the minimal frontier graph fragment rooted at each node, is also relatively straightforward. For each node n of the frontier set, n is expanded; as long as some sink node n′ of the resulting graph fragment is not in the frontier set, n′ is expanded as well. After the minimal graph fragments rooted at the frontier-set nodes have been computed, every node of the alignment graph has been expanded at most once. Hence, this operation also runs in linear time.
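The two linear-time steps can be sketched roughly as follows. This is an illustrative reimplementation under our own simplified tree representation (all names are ours, not the patent's), showing the frontier-set test complement_span(n) ∩ closure(span(n)) = ∅ on a toy crossing alignment:

```python
# Toy sketch of frontier-set computation over a tree whose leaves carry
# word alignments. Node names and representation are our own invention.

class Node:
    def __init__(self, label, children=(), aligned=()):
        self.label = label
        self.children = list(children)
        self.aligned = set(aligned)   # source positions aligned to this node
        self.span = set()
        self.comp = set()

def compute_spans(n):
    n.span = set(n.aligned)
    for c in n.children:
        n.span |= compute_spans(c)
    return n.span

def compute_complements(n, comp=frozenset()):
    # A child's complement span is its parent's complement span plus the
    # spans of all its siblings.
    n.comp = set(comp)
    for c in n.children:
        sibs = set()
        for s in n.children:
            if s is not c:
                sibs |= s.span
        compute_complements(c, n.comp | sibs)

def closure(span):
    # Smallest contiguous range covering the span.
    return set(range(min(span), max(span) + 1)) if span else set()

def all_nodes(n):
    yield n
    for c in n.children:
        yield from all_nodes(c)

def frontier(root):
    compute_spans(root)
    compute_complements(root)
    return [n.label for n in all_nodes(root)
            if n.span and not (n.comp & closure(n.span))]

# A crossing alignment: A covers positions {0, 2} and B covers {1}, so the
# closure of A's span overlaps B's span and A falls out of the frontier set.
root = Node("S", [Node("A", aligned={0, 2}), Node("B", aligned={1})])
print(frontier(root))   # ['S', 'B']
```

Each of the two passes (spans bottom-up, complement spans top-down) touches every node once, which is the linear-time property the text describes.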
The above has simplified certain aspects; for example, unaligned elements are ignored. However, processes to accommodate these unaligned elements can be determined. This system computes all derivations corresponding to all ways of accounting for unaligned words, and collects rules from all the derivations. Moreover, these techniques can include derivations where substrings are replaced by sets of trees rather than by one single tree.
This corresponds to allowing rules that do not require the output to be a single rooted tree. This generalization may allow explaining linguistic phenomena such as immediately translating “va” into “does go”, instead of delaying the creation of the auxiliary word “does” until later in the derivation.
The above has been tested with a number of observations. The quality of the alignment plays an important role in this derivation. Moreover, the linear-time technique is barely affected by the size of the extracted rules, and produces good results.
FIG. 11 identifies one cause of crossing between English and French, which can be extended to other language pairs. Adverbs in French often appear after the verb, but this is less common in English. A machine parser creates a nested verb phrase when the adverbs are present. This prevents child reordering from placing the verb and adverbs in the desired order. Multilevel reordering as shown in FIG. 11 can prevent or reduce these kinds of crossings.
One solution, initially suggested by Fox, may be to flatten the verb phrases. This constitutes a solution for this sentence pair, and it may also account for adverb-verb reorderings. Flattening the tree structure is not necessarily a general solution, however, since it can only apply to a very limited number of syntactic categories. Sometimes, flattening the tree structure does not resolve the crossing in the node reordering models: in these models, a crossing remains between MD and AUX no matter how the VPs are flattened.
The transformation rule model creates a lexical rule 1200 as shown in FIG. 12. This lexical rule allows transformation of “will be” into “sera”, as the only way to resolve the crossing.
These techniques can also be used for decoding, as described herein. This embodiment describes automatic translation of source natural language sentences into target natural language sentences using complex probabilistic models of word to word, phrase to phrase, syntactic and semantic rule translation. This also describes probabilistic word, syntax and semantic language models.
This second embodiment forms trees directly from the string-based information, the input information here being the information to be translated. The translation is constructed by automatically deriving a number of target language parse trees from the source language sentence that is given as input. Each tree is scored by a weighted combination of the probabilistic models, as well as an additional set of language features. The tree of maximum probability provides the translation into the target language.
This embodiment defines a cross-lingual parsing framework that enables developing statistical translation systems that use any type of probabilistic channel or target language model: any of word based, phrase based, syntax based or semantic based.
The channel and target language models can be trained directly from a parallel corpus using traditional parameter estimation techniques such as the expectation maximization algorithm. The models can alternatively be estimated from word or phrase aligned corpora that have been aligned using models that have no knowledge of syntax. In addition, this enables exploring a much larger set of translation possibilities.
In this embodiment, a target language parse tree is created directly from the source language string. All channel operations are embodied as one of several different types of translation rules. Some of these operations are of a lexical nature, such as the word-to-word or phrase-to-phrase translation rules. Other rules are syntactic.
Table 1 illustrates rules that are automatically learned from the data.
TABLE 1
1. DT(these) → <img id="CUSTOM-CHARACTER-00001" he="2.79mm" wi="2.46mm" file="US08600728-20131203-P00001.TIF" alt="custom character" img-content="character" img-format="tif" />
2. VBP(include) → <img id="CUSTOM-CHARACTER-00002" he="2.79mm" wi="4.91mm" file="US08600728-20131203-P00002.TIF" alt="custom character" img-content="character" img-format="tif" />
3. VBP(includes) → <img id="CUSTOM-CHARACTER-00003" he="2.79mm" wi="5.25mm" file="US08600728-20131203-P00003.TIF" alt="custom character" img-content="character" img-format="tif" />
4. NNP(France) → <img id="CUSTOM-CHARACTER-00004" he="2.79mm" wi="3.56mm" file="US08600728-20131203-P00004.TIF" alt="custom character" img-content="character" img-format="tif" />
5. CC(and) → <img id="CUSTOM-CHARACTER-00005" he="2.79mm" wi="2.12mm" file="US08600728-20131203-P00005.TIF" alt="custom character" img-content="character" img-format="tif" />
6. NNP(Russia) → <img id="CUSTOM-CHARACTER-00006" he="2.79mm" wi="4.91mm" file="US08600728-20131203-P00006.TIF" alt="custom character" img-content="character" img-format="tif" />
7. IN(of) → <img id="CUSTOM-CHARACTER-00007" he="2.79mm" wi="1.44mm" file="US08600728-20131203-P00007.TIF" alt="custom character" img-content="character" img-format="tif" />
8. NP(NNS(astronauts)) → <img id="CUSTOM-CHARACTER-00008" he="2.79mm" wi="6.35mm" file="US08600728-20131203-P00008.TIF" alt="custom character" img-content="character" img-format="tif" />
9. PUNC(.)→ .
10. NP(x0:DT, CD(7), NNS(people)) → x0, 7<img id="CUSTOM-CHARACTER-00009" he="2.79mm" wi="1.78mm" file="US08600728-20131203-P00009.TIF" alt="custom character" img-content="character" img-format="tif" />
11. VP(VBG(coming), PP(IN(from), x0:NP)) → <img id="CUSTOM-CHARACTER-00010" he="2.79mm" wi="3.13mm" file="US08600728-20131203-P00010.TIF" alt="custom character" img-content="character" img-format="tif" /> , x0
12. IN(from) → <img id="CUSTOM-CHARACTER-00011" he="2.79mm" wi="3.56mm" file="US08600728-20131203-P00011.TIF" alt="custom character" img-content="character" img-format="tif" />
13. NP(x0:NNP, x1:CC, x2:NNP) → x0, x1, x2
14. VP(x0:VBP, x1:NP) → x0, x1
15. S(x0:NP, x1:VP, x2:PUNC) → x0, x1, x2
16. NP(x0:NP, x1:VP) → x1, <img id="CUSTOM-CHARACTER-00012" he="2.46mm" wi="2.12mm" file="US08600728-20131203-P00012.TIF" alt="custom character" img-content="character" img-format="tif" /> , x0
17. NP(DT(“the”), x0:JJ, x1:NN) → x0, x1
These translation rules fall into a number of different categories.
Lexical simple rules are rules like numbers 1-7, in which a single level of syntactic constituents dominates the target language part. These rules include the type of the word, the word itself, and the translation.
Lexical complex rules are rules like number 8, where there are multiple levels of syntactic constituents that dominate the target language part.
Rules 10, 11, 16 and 17 are lexically anchored complex rules. These rules explain how complex target syntactic structures should be constructed on top of mixed inputs. The mixed inputs can be lexical source language items and syntactic target language constituents. For example, rule 16 says that if the Chinese particle occurs between two syntactic constituents x1, x0, then the resultant target parse tree is an NP with x0:NP and x1:VP. In other words, this rule stores order information for the syntactic constituents between the languages.
The syntactic simple rules are rules like rule 13, which enable target syntactic structures to be derived. Finally, syntactic complex rules enable multiple-level target syntactic structures to be derived. This technique can use cross-lingual translation rules such as 11 and 16 that make reference to source language lexical items and target language syntactic constituents. Note that many of these rules include features that are actually tree-based information written in string form; NP(DT(“the”), x0: . . . , for example, represents tree-based information.
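To make the categories concrete, here is a rough sketch (our own, not from the patent) of how such rules might be represented and the purely syntactic ones distinguished; the pinyin strings are illustrative stand-ins for the Chinese characters that Table 1 renders as images:

```python
# Each rule pairs a source-side tree pattern (kept as a string) with a
# target-side sequence of literals and variables. The pinyin target words
# are stand-ins for the image-rendered Chinese characters in Table 1.
RULES = [
    {"lhs": "DT(these)",                 "rhs": ["zhexie"],         "kind": "lexical simple"},
    {"lhs": "NP(NNS(astronauts))",       "rhs": ["yuhangyuan"],     "kind": "lexical complex"},
    {"lhs": "NP(x0:NNP, x1:CC, x2:NNP)", "rhs": ["x0", "x1", "x2"], "kind": "syntactic simple"},
]

def is_syntactic(rule):
    # Purely syntactic rules emit only variables on the target side,
    # as rule 13 does; lexical rules emit at least one literal word.
    return all(token.startswith("x") for token in rule["rhs"])

for rule in RULES:
    print(rule["kind"], is_syntactic(rule))
```

This is only a classification toy; a real system would parse the tree patterns into structures and attach model scores to each rule.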
FIG. 13 illustrates a syntactic tree form derivation for the input sentence. A top-down traversal of this derivation enables the creation of the target sentence, because each node in the derivation explicitly encodes the order in which the children need to be traversed in the target language.
The decoding is carried out using clusters of decoding according to different levels. At a first step, each of the rules is applied first to the individual words within the phrase 1300. Note that existing software 160 has already divided the new Chinese string into its individual words. Each word such as 1302 is evaluated against the rules set to determine if any rule applies to that word alone. For example, the word 1302 has an explicit rule (rule 1) 1304 that applies to that single word. This forms a first level of rules shown as rule level 1, 1310.
At level 2, each pair of words is analyzed. For example, the pair 1302, 1312 is analyzed by rule 1314. Similarly, the pair 1312, 1316 is analyzed to determine if any rules apply to that pair. For example, the rule 1314 applies to any word that is followed by the word 1312. Accordingly, rule 1314 applies to the word pair 1302, 1312. These dual compound rules form level 2, 1320; analogously, triplets are analyzed in level 3, and this is followed by quadruplets and the like until the top level rule shown as level x is executed.
Each of these rules includes strings for string portions within the rule. For example, rule 13 shows the information of a specific tree which is written in text format. The tree portion may include variables within the tree.
When this is all completed, the English tree is output as the translation, based on the tree that has the highest score among all the trees which are found.
Although only a few embodiments have been disclosed in detail above, other embodiments are possible and the inventor(s) intend these to be encompassed within this specification. The specification describes specific examples to accomplish a more general goal that may be accomplished in other ways. This disclosure is intended to be exemplary, and the claims are intended to cover any modification or alternative which might be predictable to a person having ordinary skill in the art. For example, different rules and derivation techniques can be used.
Also, the inventor(s) intend that only those claims which use the words “means for” are intended to be interpreted under 35 USC 112, sixth paragraph. Moreover, no limitations from the specification are intended to be read into any claims, unless those limitations are expressly included in the claims.
BRIEF DESCRIPTION OF THE DRAWINGS
These and other aspects will now be described in detail with reference to the accompanying drawings, wherein:
FIG. 1
shows a block diagram of a translation system;
FIG. 2
shows an embodiment using tree portions as parts of the rules;
FIGS. 3A-3C
show formation of trees and alignment of the steps;
FIG. 4
shows derivation steps and the induced rules therefrom;
FIG. 5
shows an alignment graph, and
FIGS. 6 and 7
show minimal fragments derived from the alignment graph of FIG. 5;
FIG. 8
shows how the minimal fragments are combined;
FIG. 9
shows a flowchart, run on the computer of FIG. 10;
FIGS. 11 and 12
show crossing and reordering; and
FIG. 13
shows a decoding rule.
The EU Commission has further modernized rules and regulations for the data economy. The GDPR's record-breaking fines and high number of cases of the last ten months reveal that companies are overwhelmed by compliance with the existing rules. And the task is getting more complex, as the amount of data and rules are constantly growing. It's time for companies to rethink how they manage their data and regain control to avoid compliance risks.
Some 1.6 billion euros in fines have been issued since the General Data Protection Regulation (GDPR) came into force on 25 May 2018. And the fines are not just in EMEA: in the past 12 months, European Data Protection Authorities have imposed major penalties against five prominent American companies in the technology sector. These five major cases alone account for more than $1.2 billion in fines. The number of reported violations has also grown faster than ever, from 639 to 1,037 over the same period. When the law marks its fourth anniversary on May 25, authorities will have marked a record year of penalties. Anyone who expected a more lax interpretation of the guidelines due to the pandemic will certainly be surprised.
During this period, other trends have emerged. Accelerated digitization coupled with the pandemic and remote working have created more data in more and more places. Many companies are obviously no longer able to keep up with this explosion of data and data silos, especially since the general conditions are rapidly changing once again.
So in March, Ursula von der Leyen, President of the EU Commission, and Joe Biden, US President, jointly announced that they would also adopt new regulations for transatlantic data traffic. It is not clear when this Trans-Atlantic Data Privacy Framework, or what some are calling Privacy Shield 2.0, will come into force as the new legal foundation. But this ruleset will have an impact on how companies with international business handle their data, how they transfer, store and archive personal data. And just a month earlier, the EU Commission presented new approaches with the Data Act to regulate the data market in Europe.
View of the data distorted
Many companies have distributed their data across a variety of storage locations and, as the fines show, are struggling to properly manage their archipelago of proprietary, siloed islands of data. No doubt, IT teams spend significant time and resources tackling governance, archiving, and compliance issues. This distorted view causes a number of glaring problems that grow with the amount of data and the number of regulations: it becomes almost impossible to see whether data is redundant, whether critical personal data is stored in risky locations, or whether it has been overlooked in the backup plan.
A company can attempt to get these data islands under control with processes and point-product solutions, but may face high infrastructure and operating costs, a lack of integration between products, and increasingly complex architectures. And it is questionable whether all data is protected from ransomware in such a fragmented environment and whether important tasks such as rapid recovery can be implemented in the required time and quality to keep businesses up and running.
In fact, companies should break away from the archipelago and look for a next-gen approach to data management that enables them to improve data compliance, advance security, remove data silos, and reduce complexity.
1) Recognise access and value
Businesses need to know what data they own and what value it has. Only then can they answer questions of governance and compliance. And they need to be clear about who has access to that data. For example, can they detect users who have too much access to data, or use AI/ML technology to identify unusual backup or access patterns, or other abnormal behavior? These indicators may help to identify possible internal and external attacks, such as ransomware, at an early stage, enabling countermeasures to be put in place rapidly.
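As a hedged illustration of the kind of anomaly check alluded to here, one simple approach flags days whose access count deviates sharply from a user's own baseline; the threshold and data below are invented for the example, and real products use far richer models:

```python
# Minimal z-score check over a user's daily file-access counts.
# Threshold and sample data are illustrative assumptions, not a product
# recommendation.

from statistics import mean, stdev

def unusual_days(daily_counts, z_threshold=2.0):
    """Return indices of days whose count deviates sharply from the baseline."""
    mu, sigma = mean(daily_counts), stdev(daily_counts)
    if sigma == 0:
        return []
    return [i for i, count in enumerate(daily_counts)
            if abs(count - mu) / sigma > z_threshold]

# A week of access counts; the spike on the last day would merit review.
history = [12, 10, 11, 13, 12, 11, 480]
print(unusual_days(history))   # [6]
```

Note that a single huge outlier inflates the mean and standard deviation, which is why the threshold here is modest; robust statistics (median and MAD) or learned per-user baselines would serve better in practice.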
2) Gain a unified view of the data
Ideally, all of these functions and an overview of the data landscape can be accessed via a console that only authorized users can access thanks to multi-factor authentication and access control lists - regardless of whether the data is stored on-premises, in a hybrid cloud or in a SaaS service.
3) Establish resilient, highly scalable infrastructure
The data itself should be backed up in a next-gen data management platform, ideally based on a hyperconverged file system, that can easily scale and goes beyond the Zero Trust security model. In addition to the already mentioned strict access rules and multi-factor authentication, enterprises should be able to utilise immutable snapshots, which means that no external application or unauthorized user can modify the snapshots. Organizations should also use modern data management technology that can encrypt the data, both during transport and at rest, to further enhance security against cyber threats like ransomware.
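To illustrate why immutable snapshots resist tampering, here is a minimal sketch (our own toy, not any vendor's feature) of a content-addressed store: each snapshot is stored under the SHA-256 digest of its contents, so any modification is detectable because the stored digest no longer matches.

```python
# Toy content-addressed snapshot store. Write-once semantics plus digest
# verification approximate the "immutable snapshot" property described above.

import hashlib

class SnapshotStore:
    def __init__(self):
        self._blobs = {}

    def put(self, data: bytes) -> str:
        digest = hashlib.sha256(data).hexdigest()
        self._blobs[digest] = data           # keyed by content, write-once
        return digest

    def verify(self, digest: str) -> bool:
        data = self._blobs.get(digest)
        return data is not None and hashlib.sha256(data).hexdigest() == digest

store = SnapshotStore()
snap_id = store.put(b"backup contents v1")
print(store.verify(snap_id))                 # True
store._blobs[snap_id] = b"tampered"          # simulate an attacker
print(store.verify(snap_id))                 # False
```

Production systems combine this idea with append-only storage, access control, and encryption in transit and at rest, as the paragraph above describes.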
4) Available as a service
Some companies today no longer want to manage infrastructure completely themselves, perhaps because their IT teams need to concentrate on other business-critical tasks. In these cases, they could consider a vendor that offers Data Management as a Service (DMaaS), designed to provide enterprise and mid-size customers with a radically simple way to back up, secure, govern, and analyse their data.
Outlook
Governments will certainly develop new rules for the data economy, as it plays a major role in all sectors of the economy. IT teams in companies will therefore continue to have to react to new general conditions. To finally find your way out of the complexity trap, you should consider consolidating your data silos into a next-gen data management platform. Organizations can then benefit from synergy effects, including enhanced security, governance, compliance, and the ability to save time and money, since managing this infrastructure becomes much easier.
https://www.datacenterdynamics.com/en/opinions/expect-a-year-of-record-gdpr-fines/
ARTICLE 19 welcomes the report of the UN Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, Irene Khan, on disinformation and freedom of opinion and expression during armed conflicts (the Report). The Special Rapporteur presented the Report at the 77th Session of the UN General Assembly (Third Committee) on 17 October 2022.
During armed conflict, the right to freedom of expression and access to information becomes more important than ever. Yet parties to an armed conflict typically try to control the flow of information at the expense of the right to freedom of expression and other related human rights. While disinformation and State propaganda activities during conflicts can have many harmful consequences for those most affected by the hostilities, the legal framework applicable to such forms of information manipulation, and the question of what responses can counter disinformation appropriately and effectively, are not always clear. The illegal Russian invasion of Ukraine has added renewed urgency to finding responses to these issues, which have affected ongoing conflicts in many parts of the world.
ARTICLE 19, therefore, welcomes the decision of the Special Rapporteur to examine these questions and to offer recommendations to States and social media companies on how to address challenges and threats stemming from information manipulation during armed conflicts in a way that complies with international standards on freedom of expression and information.
ARTICLE 19’s submission
ARTICLE 19 took part in the consultation organised by the Special Rapporteur during the preparation of the Report. In our response, we stress that international human rights law, including freedom of expression and information, continues to apply during armed conflict. Any responses to the outbreak of conflict, and in particular to disinformation and State propaganda during armed conflicts, need to be grounded in international human rights law and uphold free expression.
We further explain that while sound public interest reporting is one of the key tools to counter the spread of disinformation in armed conflicts, some legacy media outlets can also be a vehicle of propaganda and incitement to violence. Social media platforms, for their part, have also increasingly become a driver of conflict. This is not least due to their problematic business model, which is often based on the vast collection of personal data and selling access to users’ attention through targeted advertising which is routinely coupled with flawed content moderation processes. Both States and non-State actors have instrumentalised social media to crush dissent, recruit members to join armed groups or incite international crimes.
The Special Rapporteur’s report
In her report, the Special Rapporteur approaches the complex issue of information manipulation during armed conflicts by examining different factors, such as the protection of journalists, internet shutdowns or social media regulation – issues which impact the spread of disinformation, propaganda and ‘hate speech’ during armed conflicts. It further examines the applicable legal framework and makes recommendations to States and social media companies.
ARTICLE 19 shares the conclusion of the Special Rapporteur that the effective protection of the right to freedom of opinion and expression remains vital during armed conflicts and that censorship of critical voices, attacks on independent media and internet disruptions are ineffective responses to disinformation that harm free expression. We agree that positive measures promoting media and digital literacy are generally more effective in countering propaganda and disinformation – both in times of peace and in times of armed conflicts.
We also welcome the important recommendation that social media companies must do much more to ensure that their policies and operational practices are applied consistently in situations of armed conflicts across the world and that enhanced human rights due diligence and impact assessments need to be attuned to local contexts.
Following the presentation of the Report at the Third Committee, we call on States and social media companies to implement its recommendations and to do their utmost to ensure respect for the right to freedom of expression and access to information during armed conflicts.
ARTICLE 19 will continue to promote respect for freedom of expression and the free flow of information during armed conflict and looks forward to continuing our engagement with the Special Rapporteur in the follow-up to the Report.
https://www.article19.org/resources/the-survival-right-freedom-of-expression-in-armed-conflicts/
Los Angeles Zoo recently suffered an unfortunate tragedy when its zookeepers discovered the carcass of one of its koalas that was 14 years of age.
The investigation goes on, although without proper video footage to reveal who the attacker was, speculation is all the Zoo has to go by. Nevertheless, there is circumstantial evidence to suggest that it was the work of P-22, a mountain lion that is well-known in the Griffith Park area.
According to those familiar with the event, the mountain lion was spotted around the area on the night of the unfortunate koala’s mauling.
Jeff Sikich, a biologist for the Santa Monica Mountains National Recreational Area, suggests that mountain lion attack calls have been more prevalent within the last year likely because of human landscaping and habitation taking away from the homestead of the animals.
With that being said, the number of mountain lions probably isn’t increasing in the area, because the population is fairly well regulated by those licensed to put down potential threats and by fellow mountain lions, which attack one another over territory.
This is not the first time that mountain lions have been responsible for killing domestic animals; it’s something that occurs every so often and the LA Times reports local mountain lions can be quite terroristic and have been known to kill up to ten domestic animals at a time while only eating one.
Zoos like the Los Angeles Zoo need to take extra measures to keep their animals protected from mountain lions. A cage with an open top allows a mountain lion to simply leap inside, leaving the victim animal nowhere to go.
It’s still unknown what is to become of P-22, but without any evidence to suggest it was the fault of the mountain lion, officials are unlikely to do anything about it. Instead, the Zoo will just need to toughen its animals’ security to keep them safer.
https://www.labroots.com/trending/plants-and-animals/2665/mountain-lion-attack-blame-koala-death-ca-zoo
Within the next two months, U.S. immigration officials will have to decide the fate of hundreds of thousands of foreign nationals living in the U.S. under a program that shields them from deportation.
The Department of Homeland Security is set to decide Monday whether it will extend Temporary Protected Status (TPS) to 57,000 Nicaraguans and 2,500 Hondurans. It has until Thanksgiving day to make a similar determination about 50,000 Haitians and until Jan. 8 to do the same for nearly 200,000 El Salvadorans.
All told, more than 300,000 foreign nationals could lose their reprieve from deportation under TPS.
Created by Congress in 1990, TPS defers deportation for certain aliens already in the U.S. and allows them to apply for work permits. As its name suggests, TPS is ostensibly a short-term humanitarian benefit that lets foreign nationals stay while their home countries recover from catastrophes such as civil wars, natural disasters, and epidemics.
In practice, TPS has become something of a semi-permanent immigration benefit, as both Democratic and Republican administrations repeatedly granted extensions for reasons that were not always related to the initial designation. This has led to the current situation, where many illegal immigrants from the designated countries have lived and worked in the U.S. for decades.
Honduras and Nicaragua
The two Central American countries were first designated for TPS in January 1999, after Hurricane Mitch slammed into the region, killing thousands and causing billions of dollars in damage. Nearly two decades later, the protections remain in place for any illegal immigrants from those countries who have had continuous residence in the U.S. since Dec. 30, 1998.
Since the original TPS designation, administrations of both parties have granted extensions because of subsequent natural disasters. In the last 18-month extension for Honduras — which expires Jan. 5 — the Obama administration cited “severe rains, landslides, and flooding” and an outbreak of mosquito-borne diseases including chikungunya and dengue fever. The extension for Nicaragua included many of the same causes, as well as earthquakes and volcanic eruptions that hit between April 2014 and July 2015.
The Obama administration argued that those environmental problems had degraded both countries’ civil infrastructure and health care systems to the point where their citizens could not safely return. Earlier this week, the Trump administration made a different assessment, finding that conditions on the ground in Honduras and Nicaragua do not justify another extension.
Interestingly, Nicaragua and Honduras are the only remaining Clinton-era TPS countries whose current designation was made on the basis of natural disasters. The others — Somalia and Sudan — have kept their designation due to decades of civil war and terrorist insurgencies.
El Salvador
El Salvadoran nationals originally received TPS directly from Congress in 1990, when the country was in the midst of a brutal civil war. Like its Central American neighbors, El Salvador received a subsequent TPS designation on the basis of a natural disaster, in this case from the Bush administration in 2001 after a series of earthquakes.
The difference between TPS for El Salvador and the other Central American countries is one of scope: Some 200,000 El Salvadoran nationals are covered under the designation, according to the Congressional Research Service. That’s about 16% of the entire El Salvadoran population in the U.S.
When it extended TPS for El Salvador in 2016, the Obama administration cited a litany of “environmental challenges” the country has faced since the 2001 earthquakes. These include “heavy rains and flooding” and also a “prolonged regional drought” that was the country’s worst in 35 years.
El Salvador’s government has officially asked the Trump administration to extend the country’s TPS designation, which expires in March.
Haiti
Haiti is the most recent TPS designee of the four countries whose status expires in the next two months. The Obama administration granted protected status to Haitian nationals in 2010 after a massive earthquake destroyed much of the already impoverished country, killing as many as 200,000 people.
Then-DHS Secretary John Kelly approved a six-month TPS extension for Haiti in May, the fifth such reprieve since the earthquake. At the time, immigration and humanitarian activists criticized Kelly for not granting a customary 18-month extension, arguing that his brief trip to Haiti did not give him a full appreciation of problems facing the country.
The Haitian government last month formally petitioned DHS to keep TPS protections in place. The government says it cannot safely handle the repatriation of its citizens due to damage and flooding caused by Hurricanes Irma and Maria, as well as widespread civil unrest.
Thousands of Haitian illegal immigrants have fled the U.S. for Canada in anticipation of losing TPS protections when the six-month extension expires in January.
This report, by Will Racke, was cross posted by arrangement with the Daily Caller News Foundation.
|
https://libertyunyielding.com/2017/11/05/temporary-asylum-became-permanent-300000-people-living-u-s/
|
In 2014, the U.S. Supreme Court ruled in Alice Corp vs. CLS Bank that abstract ideas implemented on a computer aren’t patent eligible — but failed to define what was considered “abstract.” While the Alice decision isn’t specific to software, it has created the most ambiguity for software-implemented inventions.
But earlier this year, Andrei Iancu was appointed the new director of the U.S. Patent and Trademark Office (USPTO). Under his leadership, the Office has released a memo providing new procedures for how subject-matter eligibility will be analyzed in the wake of Alice and its progeny. This procedural change should benefit applicants, as it promises to make the patent process more clear and reliable.
So if you’re a software company and believe you’ve been routinely getting bogus rejections post-Alice — where the examiner dismisses certain statements as “routine” and “well-understood” without citing any concrete evidence — here’s how the latest developments can offer you useful tools to overcome those rejections.
In the Alice opinion, the Supreme Court did not issue a clear directive, which has introduced uncertainty and inconsistency in what the lower courts and the USPTO consider patent eligible. As a result, many businesses that routinely file software patent applications now find it more difficult to secure patent protection.
According to former USPTO director David Kappos, many inventions that were deemed patent-ineligible in the United States have been successfully patented in foreign jurisdictions, which could adversely impact American competitiveness in tech innovation.
And as a result, many leaders in the patent law community have begun debating whether legislative action is the right next step.
Under Iancu’s leadership, the USPTO has finally begun addressing the fallout from Alice. Following the Federal Circuit’s 2018 decision in Berkheimer v. HP Inc., the USPTO updated its procedures to clarify when an examiner may deem an invention patent ineligible — thereby limiting examiners’ ability to make arbitrary decisions.
Berkheimer addresses a scenario where claims are directed to an abstract idea. In order for these types of claims to be patent-eligible under the framework provided in Alice, the claims must also contain some "additional element" that amounts to "significantly more" than the abstract idea.
In the decision, the court further explained that whether a claimed feature is well-understood, routine, and conventional is a question of fact — and is a different question than whether it was simply known in the prior art.
How has the USPTO interpreted Berkheimer?
The USPTO has released a memo clarifying how Berkheimer changes the standards that an examiner must meet in rejecting an application’s claims as patent-ineligible subject matter under 35 U.S.C. § 101.
Previously, in many patent applications, U.S. patent examiners have asserted (without any evidence) that "additional elements" in a claim do not amount to "significantly more" than an abstract idea because the "additional elements" represent well-understood, routine, and conventional activity.
The procedure established in the Berkheimer memo requires examiners to provide explicit support based on factual determination for these types of assertions.
In particular, examiners should use the same standards that are used for the analysis under 35 U.S.C. § 112 when determining whether an element is so well-known that it doesn’t need to be described in detail in the patent specification.
Additionally, an element is not necessarily well-known just because it’s disclosed in the prior art. In other words, the analysis to determine whether something is well-understood, routine, and conventional activity (to determine its patent eligibility under 35 U.S.C. § 101) is different from the analysis to determine whether something is “new” and “non-obvious” in view of prior art (as defined under 35 U.S.C. § 102 and 35 U.S.C. 103 respectively). Basically, an examiner can’t simply refer to a 102 or 103 rejection to provide the necessary support for a rejection under 101.
The USPTO memo will revise MPEP § 2106.07, which describes the procedures an examiner must follow in rejecting claims or evaluating applicant responses under Section 101. Under the revised procedure, an examiner who asserts that an element is well-understood, routine, and conventional must support that assertion with one of the following:
A statement in the specification that shows the applicant knows the element is well-understood. In particular, the specification must expressly state that the element hasn’t been explained because it’s well-understood in the industry.
A citation to a court decision, as discussed in MPEP § 2106.05(d)(II), noting that the element is well-understood, routine, and conventional.
A publication showing the element is well-understood. Acceptable publications include books, manuals, review articles, or other sources that expressly discuss what is well-known within the industry.
A statement that the examiner is taking official notice of the element’s conventional nature. This should only be offered when the examiner is certain that the element is well-known within the industry.
If the applicant challenges the examiner’s finding that an element is well-known, then the examiner must refute the challenge by either providing one of the first three items on the list above, or submitting an affidavit that cites factual evidence.
If your application is facing a rejection under Section 101, the new USPTO guidelines can be useful.
In particular, you can ask the examiner to provide explicit support for any assertion that something is well-understood, routine, and conventional activity.
In many cases, the examiner will not be able to provide the evidence that is now required by USPTO procedures, and in such cases the examiner should withdraw the rejection.
The new USPTO leadership is taking positive steps to improve predictability and clarity in how patent eligibility is determined. If you’re a software company (or a company that files software patent applications), you absolutely need to engage a patent professional who knows how to navigate these developments.
At Henry Patent Law Firm, our attorneys have extensive experience prosecuting patents related to computer systems and software — contact us now to find out how we can help.
|
https://www.henrypatentfirm.com/blog/prosecuting-software-patents-latest-developments
|
What better way to celebrate US Independence Day, or a slightly belated Canada Day, than with a new edition of “The Origins of Boyd’s Discourse”? Click for a larger view, and a full-sized PDF is now on the Articles page.
As a result, the process not only creates the Discourse but it also represents the key to evolve the tactics, strategies, goals, unifying themes, etc., that permit us to actively shape and adapt to the unfolding world we are a part of, live in, and feed upon.
This entry was posted in Boyd's Theories. Bookmark the permalink.
|
https://slightlyeastofnew.com/2013/07/04/new-edition-of-the-origins-of-boyds-discourse/
|
Written by experienced authors and examiners, this third edition encourages students to explore ‘What is knowledge? Why, and how do we learn?’. This print and digital course guide helps shape students into internationally minded citizens as they critically assess the world around them. Students will explore real-world examples and independently reflect on their knowledge, growing as knowers. A dedicated chapter focuses on building skills for assessment, so students will be fully prepared to excel in the essay and exhibition.
FEATURES
‘Before you start’ questions at the beginning of each chapter challenge students’ thinking habits and help to ignite discussion before the unit starts.
‘Explore’ activities lead students into the exploration of the TOK core, optional themes and areas of knowledge.
‘Real-world situations’ help students see how TOK themes manifest in the world around them.
‘Linking questions’ help students make connections across themes and areas of knowledge including sciences, literature and the arts.
‘Discuss’ questions promote debate in the classroom and provide the ideal opportunity to develop oral English skills.
‘Reflection’ features encourage students to analyse their development as knowers as they progress through the course.
Clear and concise language, including key term pull outs with associated explanations throughout, support ESL learners.
Ethics is integrated in the course as a running thread throughout the content.
Cambridge Elevate editions are customisable and interactive, allowing students and teachers to annotate text, add audio notes and link out to external resources, both online and offline via the app.
CONTENTS
- Introduction
- Part 1. Knowers and knowing: 1. Who is the Knower?
- 2. The Problem of Knowledge
- 3. Knowledge Questions and the Knowledge Framework
- 4. Truth and Wisdom
- Part 2. Optional themes: 5. Knowledge and Technology
- 6. Knowledge and Language
- 7. Knowledge and Politics
- 8. Knowledge and Religion
- 9. Knowledge and Indigenous Societies
- Part 3. Areas of Knowledge: 10. History
- 11. The Arts
- 12. Mathematics
- 13. The Natural Sciences
- 14. The Human Sciences
- Part 4. Assessment: 15. The TOK Exhibition and the TOK Essay
- Glossary.
|
https://www.cambridge.org/id/education/subject/humanities/theory-knowledge/theory-knowledge-ib-diploma-3rd-edition/theory-knowledge-ib-diploma-3rd-edition-course-guide-digital-access-2-years-digital-course-guide-2-years?format=DO&isbn=9781108865982
|
On October 15, 2019, the Cottonwood Heights City Council adopted revised outdoor lighting regulations as part of the City Zoning Code as Chapter 19.77 - Outdoor Lighting.
An outdoor lighting checklist for single-family homes has been created to help make the process of compliance straightforward and simple. This checklist does not apply to any other development types.
Goals for Outdoor Lighting Regulations
Adequate nighttime lighting is important to allow human activity to safely continue after the sun goes down, but inappropriate lighting practices can result in:
• light pollution
• light trespass
• glare
• poor lighting color increasing glare and affecting human health
• poor energy conservation
• impact on wildlife and natural ecosystems, and
• creating skyglow.
These issues can reduce full enjoyment of private property rights, impair human health and safety, waste energy, and create a poor nighttime ambiance. This ordinance provides regulations that seek to mitigate the above noted issues.
When Does This Ordinance Apply?
- All new structures, including new single-family homes.
- Any addition to a structure or land use.
- Replacing light bulbs.
The ordinance does not apply to the repair of existing light fixtures.
How to Find Appropriate Lighting Fixtures and Light Sources
Here are resources to help you select light fixtures that address light pollution issues:
• Outdoor Lighting Basics
• International Dark Sky Association Fixture Seal of Approval Program (some available at Home Depot).
• Select Light Sources that Produce 3000 Kelvins or Less
• What is a Kelvin?
• Do It Yourself Good Lighting Practices
• Tips for Working With Your Neighbor's Poor Lighting
Other Resources
Guidance & Best Practices - Dark Sky Planning: An Introduction for Local Leaders by the Utah Department of Housing & Community Development Office
The International Dark Sky Association provides much information on issues surrounding light pollution.
|
https://www.cottonwoodheights.utah.gov/city-services/community-development/outdoor-lighting-regulations
|
Should you choose a mate based on genetic fitness? Photo: bio.davidson.edu.
Should your choice of spouse be left solely to your heart, or should it incorporate some assessment of genetic fitness? Obviously, if one regards marriage (or any equivalent arrangement such as cohabitation) as a joint decision to share your life with someone you love, incorporating genetic criteria might seem rather troubling, if not inappropriate. But, on the other hand, if you hold the view that the main reason for your union is procreation, then worrying about genetic compatibility and avoiding the inheritance of grave genetic diseases becomes a serious consideration.
The landscape
Genetic testing and genetic screening have become part of contemporary medicine and public health initiatives. These terms are usually used interchangeably, but the term “testing” denotes a genetic test done on an individual voluntary basis, while “screening” implies large-scale, public health initiatives. Examples of genetic testing in clinical settings include testing for the presence of BRCA1 and BRCA2 genes, to identify increased risk for breast and ovarian cancer, or prenatal genetic testing for Huntington disease. Examples of public health initiatives in screening programs include newborn genetic screening programs instituted in all states in the United States (these panels of tests range among states from 6 to 50 diseases) and in other developed countries.1,2
The impetus to identifying the genetic cause of a disease or a susceptibility implies, or should imply, the ability to act upon this knowledge:
- providing timely treatment
- avoiding exposure to environmental risks
- influencing reproductive choices
Importantly, we are not dealing here with genetic enhancement but rather avoiding the birth of babies with lethal or severely debilitating diseases.
Screening for a genetic condition should also meet reasonable probability. It doesn’t make much sense to screen for a condition that is likely to occur in one case out of a million, as opposed to testing for a gene that could be carried by one in fifty. Therefore, discussion and decision with respect to a suggested test is highly dependent on the targeted population and its genetic susceptibilities. The subject of large-scale genetic screening, however, brings to mind notorious past precedents in this field, known as eugenics concepts (i.e., cleansing society’s genetic pool of unfit genes). Thus, it is imperative to restrict such programs to lethal or severely debilitating diseases.
Autosomal Recessive Inheritance. Photo: Wikimedia Commons
Another observation relates to the inheritance pattern of a genetic disease: dominant genes will manifest themselves (depending on their penetrance), while the heterozygote carrier state (having two different alleles) of recessive genes is asymptomatic and usually has no significance to the carrier or offspring. Only if both parents are heterozygous for the same ailment is there a 25 percent risk, for every pregnancy, that their offspring might receive both recessive genes and exhibit the disease. This latter possibility is the thrust for creating premarital genetic screening programs, as we show below. X-linked recessive disorders, caused by mutations in genes on the X chromosome, are not amenable to premarital genetic screening.
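As an illustrative aside (a sketch, not part of the original article), the 25 percent figure can be checked by enumerating the four equally likely allele combinations from two heterozygous parents — in effect, a Punnett square in code:

```python
from itertools import product

# Each heterozygous carrier ("Aa") passes on one allele at random:
# "A" (functional copy) or "a" (recessive disease allele), each with
# probability 1/2, independently of the other parent.
parent1 = ["A", "a"]
parent2 = ["A", "a"]

# Enumerate the four equally likely offspring genotypes (a Punnett square).
offspring = ["".join(sorted(pair)) for pair in product(parent1, parent2)]
# offspring == ["AA", "Aa", "Aa", "aa"]

affected = offspring.count("aa") / len(offspring)   # both alleles recessive
carriers = offspring.count("Aa") / len(offspring)   # asymptomatic carriers
noncarriers = offspring.count("AA") / len(offspring)

print(affected)     # 0.25 -> the 25% risk per pregnancy
print(carriers)     # 0.5
print(noncarriers)  # 0.25
```

Note that the same enumeration shows why half of such a couple's children are expected to be asymptomatic carriers themselves, which is what sustains recessive alleles in a population.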
Many countries have been struggling with the proper way to handle genetic information that has no immediate implication, such as a heterozygote carrier state that is identified during newborn screening.1,2 Should the information be revealed to parents? To tested individuals? If not, why not?
What are the options?
In some populations, the likelihood of mating with a person with one’s same faulty recessive gene is quite high. Since being a carrier doesn’t carry any morbidity and is not manifested (i.e., no phenotype), the only material risk to carriers is that they might conceive a child with a partner who shares the same carrier status. If the genetic condition is a lethal one (e.g., Tay-Sachs disease), or seriously debilitating (e.g., Fanconi’s anemia), one might wish to engage in preventive measures. What are the options?
- One could remain in genetic ignorance with respect to his/her own carrier state, as well as the partner’s, and hope that the risk does not materialize.
- One could resort to prenatal testing (e.g., amniocentesis, chorionic villus sampling), with the only true option in case of an affected embryo being an abortion. It should be noted that abortions carry certain risks to the mother (physical as well as psychological), are fraught with moral issues, and in some societies or subpopulations are strictly prohibited.
What should follow from such an analysis is an effort to avoid such conceptions, if possible. It is now possible to examine embryos prior to gestation in a procedure called preimplantation genetic diagnosis (PGD), in which DNA from a cell of the developing pre-embryo is screened, and the pre-embryo is returned to the mother-to-be’s womb only if it doesn’t bear the suspected gene for which it is tested. However, this procedure is still nascent, is expensive and, above all, necessitates in vitro fertilization with its embodied risks (e.g., invasive egg procurement, hyperstimulation syndrome, a success rate of less than 20 percent per cycle) and substantial costs.
But what if it would be possible to avoid the problem altogether? One way to do this is by performing premarital genetic testing (PGT) and informing prospective spouses about their carrier status, allowing potential partners who are both carriers of a particular recessive trait the option not to marry or not to procreate if they so wish. Several PGT programs have been instituted around the globe. The two most cited ones are the Dor Yeshorim (DY) program3,4 and the Cyprus thalassemia screening project. Although their means of operation are different, as are their outcomes, these programs share the same goals:
- abolishing particular autosomal recessive diseases through a comprehensive testing program
- targeting a given population in its entirety
- situating in societies where abortions are regarded as highly undesirable
Example 1: Dor Yeshorim
If one is to fully appreciate PGT in the Orthodox Jewish community, some preliminary remarks are needed:
Some recessive genetic diseases such as Tay-Sachs are prevalent among Ashkenazi Jews (those originating from the Western and Eastern Europe diaspora), who make up more than 80 percent of world Jewry and are believed to be descended from about 1,500 Jewish families dating back to the 14th century.
In Jewish communities, secular as well as orthodox, reproduction represents a most significant social and religious obligation. As a result, the utilization of scientific technology in general, and genetics in particular, in the process of procreation is regarded favorably.5 Additionally, as abortions are seriously objectionable in Judaic ethics, a preference for prevention over termination of pregnancy is clear.
DY operates in ultra-orthodox communities, where arranged marriages are the norm.
Lastly, Jewish communities are generally tight-knit social groups, with numerous self-imposed, self-executed institutions (welfare, education, religious).
All of these factors played out in the design and operation of DY’s premarital genetic screening program.6
Established by Rabbi Joseph Ekstein (who lost four children to Tay-Sachs disease), DY operates among ultra-orthodox communities and screens young adolescents for a panel of 10 recessive diseases that are lethal or severely debilitating (Tay-Sachs disease, cystic fibrosis, Gaucher disease type I, Canavan disease, familial dysautonomia, Bloom syndrome, Fanconi anemia, glycogen storage disease type 1a, mucolipidosis type IV, and Niemann-Pick disease type A). Most of this genetic screening takes place in high schools or religious academies (Yeshivot). For most members of Jewish populations outside the ultra-orthodox communities, Tay-Sachs screening occurs outside the Dor Yeshorim program and involves prenatal diagnosis, followed by selective abortion when the fetus is found to have the disease.
Generally, individuals consent to be tested, while parental consent is given in cases of underage minors. Each tested individual receives a coded identification (ID) number. When a proposed match is being considered, both individuals’ IDs are checked in the DY database. The only result that the tested individuals receive is either “advisable” or “nonadvisable” for marriage. They do not receive their specific carrier status, neither at the time of the examination nor at the time of a match test. In this way, most carriers never find out which gene they carry and thereby avoid being seen as defective or as “damaged goods.” If a marriage is deemed inadvisable, genetic counseling (by phone only) is available to these individuals. Couples can still get married, but the overwhelming majority do not pursue the match and cancel their wedding plans. Fortunately, this carries a light emotional burden, as consulting the DY database transpires very early in the matchmaking process. Stigmatization of individuals and their families is avoided by maintaining strict confidentiality with regard to carrier status.
As mentioned, DY has been endorsed by religious community leaders and became a standard prerequisite in ultra-orthodox matchmaking. The results of DY are regarded as a huge success: Since its inception, over 220,000 individuals have been tested, over 500 incompatible couples identified, and virtually no afflicted children were born. Consequently, DY has aimed at increasing its activities, reaching out to other communities within Jewish society (including modern orthodox) and to non-Jewish communities.
PGT is not restricted to Orthodox Jewish communities. Other important projects focusing on a single disease, thalassemia, were instituted in Cyprus, a Mediterranean island, and Iran. A short description of the Cypriot project follows, and the interested reader may find more information with respect to Iran elsewhere.7,8
Example 2: Cyprus thalassemia screening project
Studies in the 1990s estimated 78,000 blood units were needed annually to treat thalassemia in Cyprus. Photo: Toytoy.
The population of Cyprus has a very high ratio of carriers of thalassemia (1 in 7), a group of blood disorders resulting from underproduction of globin proteins. The treatment of afflicted individuals is based on blood transfusion and expensive medication or procedures (bone marrow transplantation). The overall health and pecuniary burden on the Cypriot community was extensive: It was estimated that without intervention, over a period of 40 years, 40 percent of the population would have to become blood donors to meet the expected 78,000 blood units needed annually, consuming resources equal to the entire health budget.9
This fate was averted by a national program of PGT, set in motion in the 1970s with the support of the World Health Organization. Individuals who wish to marry must present documentation of thalassemia screening to obtain a marriage license. Laboratory services and know-how were introduced to meet the needs of this comprehensive project. Upon testing, individuals learn their personal carrier status, although typically at a later stage in the mating process than in DY. As a result, most couples (some 95 percent) do not revoke their marriage plans and resort to prenatal diagnosis (mainly amniocentesis) and abortion of embryos diagnosed with thalassemia. To complement this social transformation, the Orthodox Church of Cyprus has adopted a lenient approach regarding abortion of afflicted embryos, though not without criticism from abroad. Here again, the overall success of the program is impressive, with near zero births of afflicted newborns.10
Will these programs work in the United States and elsewhere?
Prenatal screening is routinely offered in most countries today. This entails the need for selective abortions of embryos with lethal or severely debilitating diseases. Abortions are not risk- or cost-free, and in light of PGT-demonstrated successes, the question arises as to whether PGT can be instituted in other communities, especially in the United States and some Western countries. Indeed, the social, legal, and ethical challenges are not simple:
Most Westerners do not engage in matchmaking, and creating a system for secure pre-dating genetic scrutiny, as in the case of DY, would seem to be unacceptable and infeasible. Yet some point to the growing acceptance of HIV testing as a prerequisite for serious dating in the United States as an example of a possible change of concept.
DY maintains strict confidentiality and limits access to test results even from the tested individual, which is very different from Western ethical paradigms. The former approach is intended to avoid unnecessary life-long knowledge if a recessive carrier doesn’t end up marrying an individual with the same recessive gene.
Importantly, social cohesion is far tighter in communities served by DY and in Cyprus, a key factor to the successful implementation of PGT. DY and the Cyprus programs are contingent on a powerful trust between the constituents and the governance of the project, a feature seemingly missing in the American context (notably, DY is not imposed by a governmental agency but rather by a social compact). Both programs exert a powerful social pressure, which some term “quasi-coercive.” As PGT programs became accepted practices, either by requiring a proof of testing in Cyprus or by the inability to participate in matchmaking in the ultra-orthodox Jewish community, the individual seems to have lost the freedom to choose whether to be tested or not. Such ostensible curtailments of individual freedom are a hard sell in the United States and some other Western countries.
Indeed, PGT may create a new concept of genetic identity. With PGT, individual responsibility with respect to genetic identity may manifest itself in different ways. In Cyprus, individual carriers also bear the burden of knowing their own genetic risk and are expected either to avoid marriage (which doesn’t usually happen) or to have an abortion if necessary. People tested by DY assume only the responsibility to make a genetically responsible decision with regard to their future spouse. They are not informed of their particular carrier status, as it lacks any relevance unless matched with another carrier. This creates what Prainsack and Siegal term “genetic couplehood.”6 This in turn is a stark presentation of a non-individualistic notion of one’s genetic makeup—you are only part of a larger genetic identity. This could be a major leap for Western and American cultures, where accentuated individualism prevails.
In summary, it would be safe to speculate that in the United States and some other Western nations widespread premarital genetic testing is not around the corner. However, one can envision a future genetic inquiry that is evidence-based and focused on population-specific diseases. The transformation to large-scale initiatives, or the creation of a public health initiative, could create substantial resistance. To this end, resolutions with respect to the public’s genetic and health education, data management and protection, and genetic testing are all needed.
© 2007, American Institute of Biological Sciences. Educators have permission to reprint articles for classroom use; other users, please contact [email protected] for reprint permission. See reprint policy.
|
http://actionbioscience.org/genomics/siegal.html
|
1 centimeter equals how many inches?
Do you want to convert 7.9 cm into inches? First, you should know how many inches one centimeter represents.
You can also use this cm-to-inches converter in reverse, to convert inches back to centimeters.
Basis of the centimeter
The centimeter (or centimetre) is the unit used to measure length in the metric system, abbreviated cm. While the meter is the base unit of length in the International System of Units (SI), the centimeter is a derived unit: one centimeter equals one hundredth of a meter (0.01 m), which is about 0.3937 inches.
Facts About the Inch
An inch is an Anglo-American unit of length. Its symbol is in. In many European languages, the word for “inch” is the same as, or derived from, the word for “thumb,” because a man’s thumb is about an inch wide. Inches are commonly used to measure things such as:
- Electronic components, such as the dimension of the PC screen.
- Size of car/truck tires.
What is 7.9 centimeters converted to inches?
To convert centimeters to inches, divide by 2.54, since one inch is defined as exactly 2.54 cm: inches = cm ÷ 2.54. For this example, 7.9 cm ÷ 2.54 ≈ 3.1102 inches, or roughly 3.11 inches.
With this simple formula fully grasped, you can answer the following related questions:
- What’s the formula to convert inches from 7.9 cm?
- How to convert 7.9 cm to inches?
- How do you change cm to inches?
- How to turn 7.9 cm to inches?
- What is 7.9 cm equal to in inches?
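The conversion can be sketched as a pair of small helper functions (an illustrative example, not part of any particular converter tool):

```python
# 1 inch is defined as exactly 2.54 cm, so the conversion is a single division.

def cm_to_inches(cm: float) -> float:
    """Convert centimeters to inches (1 in = 2.54 cm exactly)."""
    return cm / 2.54

def inches_to_cm(inches: float) -> float:
    """The reverse conversion: inches back to centimeters."""
    return inches * 2.54

print(round(cm_to_inches(7.9), 4))   # 3.1102
print(inches_to_cm(1.0))             # 2.54
```

Because the inch is defined as exactly 2.54 cm, the two functions are exact inverses of each other, up to floating-point rounding.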
|
https://utzx.com/units/convert-7-9-cm-to-inches/
|
Turkey’s large size and its unique location straddling the border of Europe and Asia have made it a sought-after destination throughout history and into modern times. All across Turkey there are ancient and contemporary cities, as well as some that have evolved as time has marched forward.
Istanbul
Istanbul is a city that has existed for millennia, and it has had many names across different cultures, starting with the lesser-known Lygos before becoming Byzantium. The city then gained the name Constantinople, after Constantine the Great, who was emperor at the time. Later the city was renamed Istanbul, in line with the adoption of the modern Latin script in place of the Arabic script. As a result of the influences of these many cultures, there are a variety of attractions to be seen all across the city.
One of the leading places to visit is the Ayasofya Museum. This museum was originally built as a church and later converted to be used as a mosque. Nowadays it is used as a museum to display collections from its history as well as items from nearby points of interest.
Konya
The city of Konya is famous across the world but holds a special place for all Turkish people. Whirling Dervishes of the Mevlevi Order call this city home and are an icon that draws tourists from far and wide. The act of whirling is called Sufi Whirling (or Sufi Turning) and is a form of meditation used in worship ceremonies.
Within the city is a public park on a raised area called Alaaddin Hill, home to a nationally famous mosque which bears the same name as the park. This mosque was originally built as a Christian basilica but was converted into a mosque during the Seljuk Empire. Since its conversion it has undergone major reconstruction; its final additions were completed during the Ottoman Empire, when a traditional minaret and marble mihrab were added and the entrance was moved to the east side. The east wing of the mosque has Byzantine and Hellenistic style columns, which offer a balanced contrast between the different cultures while creating a welcoming space.
This mosque is the resting place for a succession of sultans. According to inscriptions found within the mosque and the historical record, it was considered the mosque of royalty. This is highlighted all the more by the ornate designs within the building itself and the fact that the sultan who converted it is buried within.
Antalya
This metropolis is a popular tourist destination. From all over Turkey and the world people come to this bustling city both for summer beach holidays as well as for relief from the busyness of big city life. The city of Antalya has beautiful harbours that give way to sandy beaches with all-inclusive hotels, with private sections that allow for a safe environment for children.
Antalya’s city centre is particularly popular amongst Turkish people and expats due to its ultra-modern shopping malls. While the city is known for its modernization efforts, this doesn’t take away from its history. Major landmarks have been protected from development and now stand as a reminder of the origins of the region. One landmark that stands out is Hadrian’s Gate; this triple-arched gate was built to commemorate Emperor Hadrian’s visit to the ancient city that is now modern Antalya.
Another reason Antalya has gained such popularity is that outside the city lie many historical destinations that are a must-see if you find yourself in this part of Turkey. One that has been gaining fame each year is Olympos. This ancient city is an unusual destination: the local area has been kept as natural as possible, while the ruins of the once-great city are maintained but left unaltered. Thanks to this preservation effort, it has become a popular stop for gulet cruises. Alaturka Cruises uses this breathtaking location as its final destination, allowing for an unforgettable end to a relaxing cruise.
Ephesus
This ancient city was originally a Greek settlement, later conquered by the Lydians. After a period of peace and prosperity, the city was eventually taken over by the Romans. After the splitting of the Roman Empire created the Byzantine Empire, it prospered until the Ottomans ultimately took the land that is now Turkey. With the rise of the nearby harbour of Seljuk and the silting up of Ephesus’ own harbour, the city was eventually abandoned and left to become the ruins we see today.
Amongst all of the spectacular ruins, one survives as only a facade of the original building yet is famous across the world and throughout history. The Library of Celsus is no longer a full building; however, from the foundations and the facade it is easy to see the size and splendour of one of the most celebrated buildings of the ancient world.
A country the size of Turkey, straddling the continents of Asia and Europe and shaped by the many cultures that share in its history, makes for an incredibly interesting destination for your next holiday. Combine that heritage with natural beauty and spectacular beaches, and it is easy to see why Turkey is an ever more popular holiday destination that should be visited at least once in a lifetime.
|
https://socialifestylemag.com/2018/03/four-cities-of-beautiful-turkey/
|
“Sneak Peeks” build anticipation, increase intrigue, and give us a foretaste of what’s to come. Moses gives us a sneak peek of Jesus, Exodus gives a sneak peek of Easter, and the first Easter gives a sneak peek of hope and fresh power for our lives. Here we study the throwbacks from Jesus on the mountaintop in Matthew 17:1-9 to Moses on the mountaintop in Exodus 24, and the sneak peeks into the resurrection of Christ and resurrection power for us today.
|
http://www.jacobswellmemphis.org/tag/easter/
|
Even those who don’t normally follow any specific sport get caught up in Olympic events, both summer and winter. Many countries around the world consider being represented at the Games to be of great importance. Some may be more passionate about it than others, and there is little doubt that the UK is among the keenest on its Olympic participation.
Great Britain has been represented at these Games ever since 1896. From 1896 up to 2016, British athletes brought home 847 medals from the summer Olympics, plus another 26 accumulated at the winter Games. Of these, 273 were gold and 299 silver, while the balance was bronze.
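The tallies above can be cross-checked in a few lines (assumption: the gold/silver split refers to the 847 summer medals, with the remainder being bronze, as the text implies):

```python
# Cross-check of the medal tallies quoted above. Assumption: the
# gold/silver counts (273 / 299) refer to the 847 summer medals,
# with "the balance" being bronze.

summer_medals = 847
winter_medals = 26
gold, silver = 273, 299

total = summer_medals + winter_medals   # overall medal count
bronze = summer_medals - gold - silver  # "the balance was bronze"

print(total)   # 873
print(bronze)  # 275
```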
What is also impressive about Great Britain’s involvement in the Olympics is that it has had the pleasure of hosting the summer Olympics, not just once but three times.
While throughout the years there have been many outstanding individual athletes as well as teams, as the number of wins clearly shows, some stand out more than others based on the total number of gold medals they brought home during their years of participation.
One of the standout sports contributing to Great Britain’s Olympic medal haul is track cycling. The athletes who made this possible include Sir Chris Hoy and Jason Kenny, each of whom won six gold medals.
Any list of all-time Olympic greats for Great Britain must include Sir Bradley Wiggins, who has earned the accolade of being the most decorated British Olympian.
Although only these few are mentioned here, many more could be added to the list of great British Olympians.
|
http://5034eventsuksportive.co.uk/the-uks-passion-for-the-olympics/
|
Serve People. Improve the health of people living in high-need areas by strengthening fragile health systems and increasing access to quality health care.
Lift from the Bottom, Pull from the Top. Focus on serving the most medically underserved communities, working with local businesses, the greatest thinkers, and the best institutions.
Build Upon What Exists. Identify, qualify, and support existing residents over the long term and serve as a catalyst for other resources.
Remove Barriers. Create transparent, reliable, and cost-effective channels that enable underserved communities to access essential resources (particularly medicines, supplies, and equipment).
Play to Strengths. Partner for Other Needs. Engage in activities that address a compelling need and align with our core competencies and areas of excellence. Ally with an expanded network of strategic partners who are working on related causes and complementary interventions in order to leverage resources.
Ensure Value for Money. Generate efficiency, leverage resources, and maximize improvement for people with every dollar spent. Maintain modest fundraising expenses.
Be a Good Partner and Advocate. Give credit where due, listen carefully, and respect those served and those contributing resources.
Respond Fast While Looking Ahead. In emergencies, support the immediate needs of survivors by working with local partners best situated to assess, respond, and prepare for the recovery.
Aim High. Combine the best of business, technology, and public policy approaches for the benefit of people in need.
|
https://www.creativegroupeconomics.com/who-we-are
|
Chromic acid (H2Cr2O7), dipotassium salt. A compound having bright orange-red crystals and used in dyeing, staining, tanning leather, as bleach, oxidizer, depolarizer for dry cells, etc. Medically it has been used externally as an astringent, antiseptic, and caustic. When taken internally, it is a corrosive poison.
Chromium Isotopes
Tanning
A process of preserving animal hides by chemical treatment (using vegetable tannins, metallic sulfates, and sulfurized phenol compounds, or syntans) to make them immune to bacterial attack, and subsequent treatments with fats and greases to make them pliable. (McGraw-Hill Dictionary of Scientific and Technical Terms, 5th ed)
Chromium Radioisotopes
Diphenylcarbazide
Spectrophotometry, Atomic
Metals, Heavy
Nickel
Water Pollutants, Chemical
Cobalt
Stainless Steel
Hazardous Waste
Soil Pollutants
Substances which pollute the soil. Use for soil pollutants in general or for which there is no specific heading.
Air Pollutants, Occupational
Metals
Trace Elements
Occupational Exposure
Iron Chelating Agents
Mutagenicity Tests
Carcinogenicity Tests
Tests to experimentally measure the tumor-producing/cancer cell-producing potency of an agent by administering the agent (e.g., benzanthracenes) and observing the quantity of tumors or the cell transformation developed over a given period of time. The carcinogenicity value is usually measured as milligrams of agent administered per tumor developed. Though this test differs from the DNA-repair and bacterial microsome MUTAGENICITY TESTS, researchers often attempt to correlate the finding of carcinogenicity values and mutagenicity values.
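The dose-per-tumor measure described in the definition above amounts to a simple ratio; a minimal sketch (the function name and sample values are illustrative, not taken from any standard):

```python
# Illustrative sketch of the carcinogenicity value described above:
# milligrams of agent administered per tumor developed. The function
# name and the sample numbers are hypothetical.

def carcinogenicity_value(mg_administered, tumors_developed):
    """Return mg of agent administered per tumor developed."""
    if tumors_developed == 0:
        raise ValueError("no tumors developed; the ratio is undefined")
    return mg_administered / tumors_developed

# Example: 120 mg administered, 8 tumors observed over the study period.
print(carcinogenicity_value(120.0, 8))  # 15.0 mg per tumor
```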
|
https://lookformedical.com/en/definitions/chromium
|
Hoa Phat Group (HPG) chairman Tran Dinh Long is no longer named on Forbes' 2019 list of billionaires, published on March 5, which includes five Vietnamese names.
Business magazine Forbes Vietnam published the list of its 50 most influential women in Việt Nam in 2019 on Monday.
Billionaire Pham Nhat Vuong has become the first Vietnamese to be named among the top 200 richest people worldwide.
Though they are in jail, businessmen who are billionaires still own huge assets, and every move they make can affect the market.
As one of the most famous businessmen in Austria, Ho Xuan Thai, or Thai Ho, has emerged as a Vietnamese business symbol in the European nation in the sector of restaurants, events, and night [ … ]
The biggest influencers were businesspeople whose enterprises developed large, significant projects and made huge profits, who owned big properties and successfully developed their real esta [ … ]
Two Vietnamese dollar billionaires and other well-known businessmen started their careers by making instant noodles.
The assets of Ho Hung Anh and Nguyen Dang Quang are difficult to analyze, but CafeF has named them as the newest US dollar billionaires in Vietnam.
Earlier this year, Forbes added two more Vietnamese billionaires to the list of the richest persons on the planet.
VietJet CEO and President Nguyễn Thị Phương Thảo has been recognised as one of the 100 most powerful women in the world by Forbes.
|
http://pda.thevietnamese.com/economy/company-and-names.html
|
Privacy supplement for Microsoft Lync 2010 for Nokia
This page is a supplement to the Privacy Statement for Microsoft Lync Products. In order to understand the data collection and use practices relevant for a particular Microsoft Lync product or service, you should read both the Privacy statement for Microsoft Lync products and this supplement.
This privacy supplement addresses the deployment and use of Microsoft Lync 2010 for Nokia on your enterprise’s mobile devices. If you are using Microsoft Lync Server 2010 communications software as a service (in other words, if a third party [for example, Microsoft] is hosting the servers upon which the software runs), information will be transmitted to that third party. To learn more about the use of data that is transmitted from your enterprise to that third party, consult your enterprise administrator or your service provider.
Activation Reporting
What This Feature Does: During initial setup and configuration of Lync 2010 for Nokia, certain hardware-specific and software-specific data is collected and sent to Nokia for the purpose of reporting successful configuration.
Information Collected, Processed, or Transmitted: No personally identifiable information is collected during this process. No information is sent to Microsoft.
Use of Information: The data collected during the Activation Reporting process is used by Nokia to get a report of successful configuration of the software. For details about the Nokia Privacy Policy, see the Nokia Privacy Policy.
Call Delegation (Call Forwarding)
What This Feature Does: Call Delegation allows you to assign one or more delegate(s) and then have your delegate(s) place and answer calls and set up and join online meetings on your behalf. If you are using Call Delegation from your mobile device, you can select only delegates that you have pre-defined on your Microsoft Lync 2010 desktop client.
Information Collected, Processed, or Transmitted: When your delegate(s) answer a call on your behalf, you receive an email informing you about this event. No information is sent to Microsoft.
Use of Information: You can use this feature to work with your delegate(s) to manage your schedule and meetings, follow-up with your delegates about calls they make and answer for you (or on your behalf), or do both.
Choice/Control: Call Delegation is turned off by default. You can enable or disable it by using the following steps:
In Lync for Nokia, from the Contacts list page options menu, tap Settings.
Client-Side Logging
What This Feature Does: Client-Side Logging enables you to log your Lync usage information on your device, in your user profile. The information can be used for troubleshooting any issues you might experience with the Lync for Nokia software.
Information Collected, Processed, or Transmitted: When Client-Side Logging is enabled, information such as the following is stored on your device: device ID, user alias and domain, Presence data, message details, logon history, Contacts list, and client configuration data, such as call forwarding rules, status, and notes. The contents of your Lync 2010 conversations are not stored. No information is automatically sent to Microsoft, but you can choose to manually send this information.
Use of Information: You can use Client-Side Logging to troubleshoot any issues you might experience while using Lync for Nokia.
Choice/Control: Client-Side Logging is turned off by default. You can enable or disable it by using the following steps:
In Lync for Nokia, on the Contacts list page options menu, tap Settings.
Contact Card
What This Feature Does: The Contact Card collects static and dynamic information about other people in your enterprise and displays that information in Lync and for contacts in recent versions of the Microsoft Outlook messaging and collaboration client. The Contact Card provides one-click actions for sending an email, making a call, sending an instant message, and sending SMS (text) messages (SMS capability is not available in the Lync desktop client).
Information Collected, Processed, or Transmitted: The static information on the Contact Card is collected from the enterprise’s corporate directory (such as Active Directory Domain Services) and is shared with others through Lync Server 2010. The dynamic information that is collected, such as phone numbers and Presence, may be entered by you and then shared with others. No information is sent to Microsoft.
Use of Information: The Contact Card information is displayed so that you can share your contact information with others.
Choice/Control: Contacts can be managed from both the Lync desktop client and Lync for Nokia. You can manage contacts in Lync for Nokia by using the following steps:
To add a contact
From the Contacts List tab on the Lync main screen, tap Search.
In the Search text box, type the name of the contact.
From the search results, select the contact you want to add, and then tap the Options menu.
Tap Add Contact, and then tap the contact group to add the contact to.
To remove a contact
From the Contacts List tab on the Lync main screen, tap the contact group of the contact.
Emergency Services (9-1-1)
Important: We recommend that you DO NOT use Lync for Nokia to contact Emergency Services (such as 9-1-1 in the United States). Lync for Nokia DOES NOT have the ability to determine your actual physical location; therefore, if you use Lync for Nokia to contact emergency services providers, the providers will NOT be able to determine your location. To contact emergency services providers from your device, close Lync for Nokia, and use your device’s dial pad.
Personal Picture
What This Feature Does: Personal Picture displays your picture and pictures of other people in your enterprise.
Information Collected, Processed, or Transmitted: Your Personal Picture sharing preference is collected for both displaying pictures and sharing your picture. Only photos stored in Active Directory can be displayed in Lync for Nokia. No information is sent to Microsoft.
Use of Information: The information is used to customize your experience and to share your picture with others.
Presence and Contact Information
What This Feature Does: Presence and Contact Information allows you to access information published about other users (both inside and outside your organization) and provides other users with access to information published about you, such as your Presence status, title, phone number, location, and notes. Your administrator can also configure integration with Outlook and Exchange Server so that you display out-of-office messages and other status information (for example, when you have a meeting scheduled in your Outlook calendar).
Information Collected, Processed, or Transmitted: You use your sign-in address and a password to connect to Lync Server. You and your administrator can publish information about your Presence status and Contact Information that is associated with your sign in. No information is sent to Microsoft.
Use of Information: Other Lync users and programs can access your Presence and Contact Information to determine your published status and information so as to better communicate with you.
Choice/Control: Presence and Contact Information settings are managed from the Lync desktop client.
Privacy Mode
What This Feature Does: Privacy Mode is a setting that allows you to share your Presence status (such as Available, Busy, Do Not Disturb, and so on) only with contacts listed in your Contacts list.
Information Collected, Processed, or Transmitted: Enabling Privacy Mode causes Lync to enter a mode in which you can switch user settings so that your Presence information is shared only with contacts in your Contacts list. No information is sent to Microsoft.
Use of Information: The setting of this mode determines how Presence data is shared.
Choice/Control: Privacy Mode is enabled and disabled by your enterprise administrator. If Privacy Mode has been enabled, it is managed from the Lync desktop client.
Send As Email
What This Feature Does: Send as Email allows you to send your Lync 2010 for Nokia instant message conversation history, which is stored locally on your device, as an attachment to a user-designated email address.
Information Collected, Processed, or Transmitted: All incoming and outgoing content in instant message conversations is stored locally on the device in isolated storage indefinitely unless 1) the user deletes the conversation, 2) the user uninstalls the application, or 3) a new user signs in on the same device. Instant message history sent using the Send as Email feature is delivered in the form of an email to the user’s email address. No information is sent to Microsoft.
Use of Information: Users can send their instant message conversation history as an email attachment to their designated email address making instant message conversations available outside the device for purposes such as archiving or sharing.
Choice/Control: Conversation history is stored on the device automatically. There is no way to disable this feature. Conversation history can be deleted as follows:
In the Conversations tab, tap the End Conversations button in the bottom toolbar.
Send Location
What This Feature Does: This feature integrates with Nokia Maps to determine your location.
Information Collected, Processed, or Transmitted: Depending on your selected positioning methods, your device may contact Nokia in order to establish your position. This information is processed in an anonymous manner. For more information, please see the Nokia Privacy Policy for Map Apps.
Use of Information: The data collected by the Send Location feature is used by Nokia Maps to provide you with faster and more accurate location data.
| |
Browsing by Subject "Transcriptomics"
Poison frog warning signals: From the rainforest to the genome and back again (East Carolina University, 2018-06-18). Signal communication is pervasive in nature and is used to convey information to both conspecifics and heterospecifics. Aposematic species use warning signals (e.g. bright coloration) to alert predators to the presence of ...
|
https://thescholarship.ecu.edu/browse?value=Transcriptomics&type=subject
|
SULTAN MUHAMMAD AL-FATEH: THE CONQUERER OF CONSTANTINOPLE
Abdul Latip Talib
This is the story of the legendary Muhammad Al Fateh, following him from his childhood and how he was raised by his father to become a Caliph of Islam, through to his appointment as Caliph of the Ottoman Empire at the tender age of 19, and his subsequent rule.
Without Constantinople under its jurisdiction, the Ottoman Empire was incomplete. And so, Sultan Muhammad Al Fateh set out to conquer it. He was not the first person to attempt conquering Constantinople.
Many warriors and caliphs of Islam before him had tried in vain. Learning from their past failures, Al Fateh carefully assembled and equipped his army. Finally, after an epic battle, Constantinople was conquered.
Al Fateh changed the name of the city to Istanbul. At his peak, Al Fateh ruled over 25 countries. In the end, he was poisoned and killed by his enemies. Yet the Ottoman Empire and the legacy he left behind remained standing strong.
Title: Sultan Muhammad Al-Fateh: The Conquerer of Constantinople
Author: Abdul Latip Talib
Publisher: PTS Publications
Published: June 2016
Format: Paperback
Pages: 192
ISBN: 978-967-411-695-8
Please see 'Related Products' below.
|
https://www.bookjannah.com/islamic-books/sultan-muhammad-al-fateh-the-conquerer-of-constantinople
|
Gene drive is a naturally occurring phenomenon in which selfish genetic elements manipulate gametogenesis and reproduction to increase their own transmission to the next generation. Currently, there is great excitement about the potential of harnessing such systems to control major pest and vector populations. If synthetic gene drive systems can be constructed and applied to key species, they may be able to spread rapidly, either modifying or eliminating the targeted populations. This approach has been lauded as a revolutionary and efficient mechanism to control insect-borne diseases and crop pests. Driving endosymbionts have already been deployed to combat the transmission of dengue and Zika virus in mosquitoes. However, there are a variety of barriers to successfully implementing gene drive techniques in wild populations. There is a risk that targeted organisms will rapidly evolve an ability to suppress the synthetic drive system, rendering it ineffective. There are also potential risks of synthetic gene drives invading non-target species or populations. This Special Feature covers the current state of affairs regarding both natural and synthetic gene drive systems with the aim of identifying knowledge gaps. By understanding how natural drive systems spread through populations, we may be able to better predict the outcomes of synthetic drive release.
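The super-Mendelian spread described above can be illustrated with a deterministic toy model (my own sketch, not from the Special Feature): a drive allele that converts the wild-type allele in heterozygotes with efficiency c is transmitted with probability (1 + c)/2 instead of the Mendelian 1/2, so even a rare allele can sweep toward fixation within a few dozen generations.

```python
# Minimal sketch of one-locus gene-drive dynamics. Assumptions: random
# mating, infinite population, no fitness cost, and a homing efficiency
# c by which heterozygotes convert the wild-type allele to the drive.

def next_frequency(p, c):
    """Drive-allele frequency after one generation.

    p: current drive-allele frequency
    c: conversion (homing) efficiency in heterozygotes, 0 <= c <= 1.
       c = 0 recovers ordinary Mendelian inheritance.
    """
    q = 1.0 - p
    # Homozygotes (p^2) always transmit the drive; heterozygotes (2pq)
    # transmit it with probability (1 + c) / 2 instead of 1/2.
    return p * p + 2.0 * p * q * (1.0 + c) / 2.0

def simulate(p0, c, generations):
    p = p0
    freqs = [p]
    for _ in range(generations):
        p = next_frequency(p, c)
        freqs.append(p)
    return freqs

mendelian = simulate(0.05, 0.0, 20)  # stays at 5% (no drift modeled)
drive     = simulate(0.05, 0.9, 20)  # sweeps toward fixation
```

With c = 0 the recurrence reduces to p² + pq = p, so the allele frequency is static; with c = 0.9 a 5% starting frequency exceeds 95% well within 20 generations, which is the qualitative point the abstract makes about rapid spread.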
|
https://centerforvaccineethicsandpolicy.net/2020/01/05/gene-drive-progress-and-prospects/
|
Particulate Matter (PM) is a criteria air pollutant subject to a range of federal and state regulations. The U.S. EPA recently announced that it will reconsider the National Ambient Air Quality Standards for PM, which will likely drive future reductions at existing and new facilities.
This webinar will provide important information to help you design and/or review PM air pollution control technologies, estimate associated equipment costs, and address the permitting, monitoring, and compliance-related challenges of control equipment for major stationary sources. The presenters are a recognized panel of regulatory and technology experts with many years of experience in the field of air pollution control technologies and regulatory compliance. This webinar will be useful for operators of power plants, cement plants, and other industrial plants who are responsible for managing PM emissions from point sources.
$99 A&WMA Member; $149 Nonmember
IMPORTANT: Please log in to the website before registering. If you are not an A&WMA member and do not already have a user account, join now or create an account before registering.
Presenters:
John D. McKenna, Ph.D., Principal and Founder, ETS, Inc.
John D. McKenna, Ph.D., is Principal and Founder of ETS, Inc. which provides environmental training, testing, troubleshooting and testimony, with special emphasis on baghouse technology and filtration lab services. He holds a B.S. in ChE, M.S. in ChE, MBA, and Ph.D. Dr. McKenna has over fifty years of technical, business management and entrepreneurial experience. Dr. McKenna has developed numerous air pollution control courses and has coauthored texts on baghouse filtration, air pollution control, and fine particle measurement and control.
Arijit Pakrasi, Ph.D., P.E., Technical Lead, Air Quality Management Services Practice, Edge Engineering and Science, LLC (EDGE)
Arijit Pakrasi, Ph.D., P.E., is the Technical Lead of the Air Quality Management Services Practice for Edge Engineering and Science, LLC (EDGE) headquartered in Houston, Texas. He has extensive experience in evaluation, design, and monitoring of air pollution control systems for gaseous, odor, and particulate pollutants. He has evaluated fabric filters, wet scrubbers, thermal oxidizers, ESPs, catalytic oxidizers, and activated carbon systems in his professional career and has made several presentations on this topic in technical conferences. He is a long-standing member of A&WMA.
Florin Popovici, M.S., M.E., P.E., Emission Control Technology Consultant, Evonik Fibres
Florin Popovici completed his M.Sc. degree in Mechanical Engineering at the Polytechnic Institute of Bucharest and a Post Graduate Diploma in Marketing Management at the University of South Africa. He is a consultant in the field of gas cleaning technologies, with experience in the engineering, operation, and maintenance of large coal-fired boiler baghouses. He has coordinated research activities in the dry filtration field, from fibres, filter media, and filter bags to baghouses, and has developed related technologies. His career is associated with Eskom and Beier Envirotec in South Africa, Evonik Fibres in Austria, and other companies around the world.
Moderator: Nathan Schindler, Technical Sales Manager, Evonik Corporation
Mr. Nathan Schindler is the Technical Sales Manager at Evonik Corporation. Mr. Schindler has more than 20 years of experience in a wide range of air pollution control technologies and regulatory requirements. He currently manages Evonik's P84® Fiber business in North America, helping end users reduce the total cost of ownership of their pulse-jet baghouses using P84®. Previously, he was involved in the engineering and sales of low and ultra-low NOx burners, SCR for mobile diesel engines, natural gas retrofits of industrial and utility boilers, and related services.
|
https://www.awma.org/content.asp?admin=Y&contentid=733
|
Shaftesbury announced “Departure” has been renewed for a second season and started production in Toronto, ahead of the first season launching on Peacock in the U.S. Archie Panjabi and Christopher Plummer will return to lead the show, joined by Kris Holden-Ried, Mark Rendall, Jason O’Mara, Karen Leblanc, Kelly McCormack, Wendy Crewson, Dion Johnstone and Donal Logue. The show is co-produced by Shaftesbury and Deadpan Pictures in association with Corus Entertainment, Starlings Television and Red Arrow Studios International.
DATES
Showtime announced “Moonbase 8,” starring Tim Heidecker and John C. Reilly, will premiere Nov. 8 at 11 p.m. The sci-fi comedy follows astronauts who train to qualify for their first lunar mission, but their plans change when a series of events forces them to question their sanity. The show is created, written and executive produced by Heidecker, Reilly and Jonathan Krisel, who also serves as director. The producers are A24 and Abso Lutely Productions. Watch a trailer below.
Netflix announced an Oct. 15 premiere date for its quarantine-produced show, “Social Distance.” Set in the early months of the coronavirus pandemic, the anthology series features standalone episodes that capture the emotional experience of relying on technology to stay connected with loved ones. Showrunner Hilary Weisman Graham executive produced with Tara Herrmann, Blake McCormick and Jenji Kohan.
FIRST LOOKS
Netflix debuted a trailer for “Michelle Buteau: Welcome To Buteaupia” set to premiere on Sept. 29. In the hourlong comedy special, Buteau discusses everything from parenthood to the overlooked value of short men. Page Hurwitz and Wanda Sykes produce for Push It Productions, while Hurwitz also directs. Watch the trailer below.
PROGRAMMING
Showtime announced that the comedy series “The End” is slated to debut in 2021, with the official date not yet announced. The 10-episode first season, starring Harriet Walter and Frances O’Connor, will follow three generations of family members who are figuring out how to die with dignity. The series is created and written by Samantha Strauss and directed by Jessica M. Thompson and Jonathan Brough. The co-producers are Sky UK and Foxtel Australia.
PARTNERSHIPS
Norman Reedus signed a development deal with AMC Studios, launching his bigbaldhead production company, with Brillstein Entertainment partner, JoAnne Colonna, and former AMC exec Amanda Verdon to head the company. The actor has also formed a partnership with Blackstone Publishing under the bigbaldhead banner, including books in the “Unknown Man” series. “It has been a dream of mine for so long to be able to share and tell progressive stories that shine a light where others don’t,” says Reedus. “I feel incredibly privileged for the opportunity to amplify innovative voices in storytelling that are visionary in fostering change in culture. I couldn’t be happier to launch this company alongside AMC and Blackstone Publishing.”
INITIATIVES
Reframe announced its 2019-2020 stamp awardees. This year’s recipients show a 57% increase from the previous season in the number of gender-balanced TV and streaming series, from 21 shows in 2018-2019 to 33 shows now. The list of recipients includes, but is not limited to, Netflix’s “Altered Carbon,” FX’s “American Horror Story,” HBO Max’s “Batwoman,” HBO’s “Big Little Lies,” Netflix’s “Dead to Me,” Netflix’s “The End of the F***ing World,” HBO’s “Euphoria,” Netflix’s “Glow,” Hulu’s “The Great,” ABC’s “Grey’s Anatomy,” Hulu’s “The Handmaid’s Tale,” Netflix’s “I Am Not Okay with This,” Netflix’s “The I-Land,” Netflix’s “Jessica Jones,” BBC America’s “Killing Eve,” Hulu’s “Little Fires Everywhere,” Amazon’s “The Marvelous Mrs. Maisel,” Amazon’s “Modern Love,” Apple TV Plus’ “The Morning Show,” Netflix’s “Never Have I Ever,” Hulu’s “Normal People,” CW’s “The 100,” Netflix’s “Orange Is the New Black,” Netflix’s “Sex Education,” CBS All Access’ “Star Trek: Picard,” Netflix’s “Sweet Magnolias,” Netflix’s “13 Reasons Why,” Netflix’s “Unbelievable,” Netflix’s “Unorthodox,” AMC’s “The Walking Dead,” HBO’s “Watchmen,” HBO’s “Westworld” and Netflix’s “You.” The ReFrame analysis and stamp determinations are based on IMDbPro data displaying the top 100 most popular scripted TV and streaming shows in the past year.
|
https://variety.com/2020/tv/news/tv-news-roundup-netflix-social-distance-premiere-date-1234771227/
|
Search results: “Advanced training methods using an Augmented Reality ultrasound simulator”
Advanced training methods using an Augmented Reality ultrasound simulator (ISMAR 2009, IEEE)
Tobias Blum, Sandro Michael Heining, Oliver Kutter...
Ultrasound (US) is a medical imaging modality which is extremely difficult to learn as it is user-dependent, has low image quality and many artifacts that depend on the viewing d...
Technologies for Augmented Reality Systems: Realizing Ultrasound-Guided Needle Biopsies (SIGGRAPH 1996, ACM)
Andrei State, Mark A. Livingston, William F. Garre...
We present a real-time stereoscopic video-see-through augmented reality (AR) system applied to the medical procedure known as ultrasound-guided needle biopsy of the breast. The AR...
Using Ultrasonic Hand Tracking to Augment Motion Analysis Based Recognition of Manipulative Gestures (ISWC 2005, IEEE)
Georg Ogris, Thomas Stiefmeier, Holger Junker, Pau...
The paper demonstrates how ultrasonic hand tracking can be used to improve the performance of a wearable, accelerometer and gyroscope based activity recognition system. Specifica...
Camera-Marker Alignment Framework and Comparison with Hand-Eye Calibration for Augmented Reality Applications (ISMAR 2005, IEEE)
Gérald Bianchi, Christian Wengert, Matthias...
An integral part of every augmented reality system is the calibration between camera and camera-mounted tracking markers. Accuracy and robustness of the AR overlay process is grea...
AR interfacing with prototype 3D applications based on user-centered interactivity (CAD 2010, Springer)
Seungjun Kim, Anind K. Dey
Augmented Reality (AR) has been acclaimed as one of the promising technologies for advancing future UbiComp (Ubiquitous Computing) environments. Despite a myriad of AR application...
|
http://www.sciweavers.org/sci2search/Advanced+training+methods+using+an+Augmented+Reality+ultrasound+simulator
|
---
abstract: |
  In this article, we first propose the modified Hannan-Rissanen method for estimating the parameters of autoregressive moving average (ARMA) processes with symmetric stable noise and symmetric stable generalized autoregressive conditional heteroskedastic (GARCH) noise. Next, we propose the modified empirical characteristic function method for the estimation of GARCH parameters with symmetric stable noise. Further, we show the efficiency, accuracy and simplicity of our methods through Monte Carlo simulation. Finally, we apply our proposed methods to model financial data.
[***Keywords:*** ARMA-GARCH models, stable distributions, parameter estimation, simulation, application ]{}
author:
- |
Aastha M. Sathe and N. S. Upadhye\
Department of Mathematics, Indian Institute of Technology Madras,\
Chennai-600036, INDIA\
Email address: [email protected], [email protected]
title: 'Estimation of the Parameters of Symmetric Stable ARMA and ARMA-GARCH Models'
---
Introduction {#sec1}
============
In finance and econometrics, important information about past market movements is modelled through the conditional distribution of return data series using autoregressive moving average (ARMA) models. However, in these models, the conditional distribution is assumed to be homoskedastic, which poses challenges in the modelling and analysis of returns with time-varying volatility and volatility clustering, two commonly observed phenomena. This led to the development of the autoregressive conditional heteroskedastic (ARCH) and generalized ARCH (GARCH) models introduced by Engle [@eng] and Bollerslev [@bol], respectively. In empirical finance, the most important and widely used model is the combination of ARMA with GARCH, referred to as an ARMA-GARCH model (see [@zang], [@zou]).
In general, the noise term of ARMA, GARCH and ARMA-GARCH models is assumed to be normal or Student-$t$ distribution, where the normal distribution is known for the desirable property of stability but fails to capture the heavy-tailedness of the data. On the other hand, the Student-$t$ distribution allows for heavier tails, but, when compared with the normal, lacks the desirable property of stability. These flaws, therefore, motivate us to use the family of stable distributions as a model for unconditional, conditional homoskedastic and conditional heteroskedastic return series distribution. Some of the significant and attractive features of stable distributions, apart from stability are heavy-tailedness, leptokurtic shape, domains of attraction and skewness. For more details on stable distributions, see [@sam]. Hence, there is a need to explore the behaviour of ARMA, GARCH and ARMA-GARCH models with stable noise for effective modelling of the return data series which also involves estimation of the parameters of these models, and only a handful of estimation techniques based on stable noise are available. For details on the different estimation techniques used for such models, see [@mik]. Therefore, we feel that this article is an important contribution to the literature available on these models.
In this article, we develop a method for estimating the parameters of the ARMA model with symmetric stable noise and symmetric stable GARCH noise by modifying the Hannan-Rissanen method [@han]. For the estimation of the GARCH parameters, we propose the modified empirical characteristic function method, which is based on the method discussed in [@om]. The efficiency and effectiveness of the two proposed methods is validated through Monte Carlo simulation. For comparative analysis, we compare the modified Hannan-Rissanen method with the first two of the three specific M-estimators of the parameters of ARMA models with symmetric stable noise introduced by Calder and Davis [@cad], namely, the least absolute deviation (LAD) estimator, the least squares (LS) estimator, and the maximum likelihood (ML) estimator. Amongst the three M-estimators, LAD is an excellent choice for estimating ARMA parameters with stable noise due to its robustness, computational simplicity and good asymptotic properties. Our method works at par with the LAD estimator both in terms of accuracy and computational simplicity.
The paper is organised as follows. Section \[sec2\] gives a brief introduction to the stable distributions along with the necessary definitions and notations. Section \[sec3\] discusses the two new methods proposed for the estimation of ARMA, GARCH and ARMA-GARCH parameters. Section \[sec4\] deals with simulations and comparative analysis of the proposed methods. Section \[sec5\] discusses application of our proposed methods to the financial data. Finally, Section \[sec6\] gives some concluding remarks on the proposed methods.
Preliminaries and Notations {#sec2}
===========================
**Stable distributions.** These distributions form a rich class of heavy-tailed distributions, introduced by Paul Lévy [@lev], in his study on the Generalized Central Limit Theorem. Each distribution, in this class, is characterized by four parameters, namely $\alpha$, $\beta$, $\sigma$, and $\delta$, which, respectively, denote the index of stability, skewness, scale and shift of the distribution. Their respective ranges are given by $\alpha \in (0,2]$, $\beta \in [-1, 1]$, $\sigma > 0$ and $\delta \in \mathbb{R}$. For more details, see [@hybrid]. In this paper, we deal with symmetric stable distributions. We say that the distribution is symmetric around zero if and only if $\beta=\delta=0$ and denote by $S\alpha S$. The characteristic function (cf) representation is given by $$\begin{aligned}
\phi(t) &= \begin{cases}
\exp\left\{-(\sigma|t|)^\alpha\right\},&
\alpha \neq 1,\\
\exp\left\{-\sigma|t|\right\}, & \alpha = 1.
\end{cases} \label{2par}\end{aligned}$$ For simulations, we assume $\sigma=1$ for $S\alpha S$ distribution throughout.
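As a quick numerical check of the characteristic function in (\[2par\]) (our illustration, not part of the original text), the sketch below draws standard $S\alpha S$ variates via the Chambers-Mallows-Stuck transform and compares the empirical characteristic function with $\exp(-|t|^{\alpha})$; the helper name `rsas` is ours.

```python
import numpy as np

def rsas(alpha, n, rng):
    """Draw n standard symmetric alpha-stable variates (sigma = 1,
    alpha != 1) via the Chambers-Mallows-Stuck transform."""
    v = rng.uniform(-np.pi / 2, np.pi / 2, n)
    w = rng.exponential(1.0, n)
    return (np.sin(alpha * v) / np.cos(v) ** (1 / alpha)
            * (np.cos((1 - alpha) * v) / w) ** ((1 - alpha) / alpha))

rng = np.random.default_rng(0)
alpha, t = 1.7, 1.0
x = rsas(alpha, 200_000, rng)
emp = np.mean(np.cos(t * x))       # empirical Re E[exp(itX)]
theo = np.exp(-abs(t) ** alpha)    # cf from the display above, sigma = 1
print(emp, theo)
```

With $2\times 10^{5}$ draws the two values agree closely, which is a useful smoke test for any stable-noise generator used later.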
Next, we discuss some important definitions and notations required for parameter estimation of ARMA, GARCH and ARMA-GARCH models. Let $\boldsymbol{\phi}=(\phi_1,\cdots,\phi_p)$ and $\boldsymbol{\theta}=(\theta_1,\cdots,\theta_q)$ denote the parameter vectors used in the ARMA model. For the GARCH model, the parameter vectors used are $\boldsymbol{a}=(a_1,\cdots,a_q)$, $\boldsymbol{b}=(b_1,\cdots,b_p)$, $\boldsymbol{a}_G=(a_1,\cdots,a_{q_G})$ and $\boldsymbol{b}_G=(b_1,\cdots,b_{p_G})$.
Let the time series $\lbrace X_{t}\rbrace$ be an **ARMA($p,q$) process with $S\alpha S$ noise**, denoted by $S\alpha S$-ARMA($p,q$), if $$X_{t}=\sum_{i=1}^{p}\phi_{i}X_{t-i}+\sum_{j=1}^{q}\theta_{j}\epsilon_{t-j}+\epsilon_{t}, \label{10}$$ where $p$ denotes the order of autoregression, $q$ denotes the order of moving average and $\lbrace\epsilon_{t}\rbrace$ denotes the noise sequence of iid random variables with $S\alpha S$ distribution with (cf) given in (\[2par\]) for $\alpha \in (1,2]$ and $\sigma > 0$. Further, in the unit disc $\lbrace z:|z|\leq 1 \rbrace$, $\phi(z)=1-\phi_{1}z-\cdots -\phi_{p}z^{p}$ and $\theta(z)=1+\theta_{1}z+\cdots +\theta_{q}z^{q}$ have no common zeros. For more details on ARMA($p,q$) process with $S\alpha S$ noise, (see [@sam], pp. 376-380).
Let the time series $\lbrace X_{t}\rbrace$ be a **GARCH($p,q$) process with $S\alpha S$ noise** denoted by $S\alpha S$-GARCH($p,q$), if $$X_{t} = \sigma'_{t}\epsilon_{t}, ~\sigma'_{t}=c+\sum_{i=1}^{q}a_{i}|X_{t-i}|+\sum_{j=1}^{p}b_{j}\sigma'_{t-j} \label{20}$$ where $c>0$, $\boldsymbol{a}$ and $\boldsymbol{b}$ are non-negative and $\lbrace\epsilon_{t}\rbrace$ denotes the noise sequence of iid random variables with $S\alpha S$ distribution with (cf) given in (\[2par\]). It is known that the $S\alpha S$-GARCH($1,1$) process has a unique strictly stationary solution if $b_1+\lambda a_1 <1$ while $S\alpha S$-GARCH($p,q$) process with $p \geq 2$ or $q\geq 2$ has a unique strictly stationary solution if $c>0$ and $\lambda \sum_{i=1}^{q}a_i+\sum_{j=1}^{p}b_j \leq 1$. For further details and information on the approximate values of $\lambda$ used for different values of $\alpha$, see [@mit].
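The GARCH recursion above can be simulated directly. The following sketch (our illustration; `rsas` and the crude initialization $\sigma'_0 = c/(1-b_1)$ are our choices, not the paper's) generates a path of a $S\alpha S$-GARCH(1,1) process inside the stationarity region $b_1+\lambda a_1<1$.

```python
import numpy as np

def rsas(alpha, n, rng):
    # Standard SaS draws via the Chambers-Mallows-Stuck transform (alpha != 1).
    v = rng.uniform(-np.pi / 2, np.pi / 2, n)
    w = rng.exponential(1.0, n)
    return (np.sin(alpha * v) / np.cos(v) ** (1 / alpha)
            * (np.cos((1 - alpha) * v) / w) ** ((1 - alpha) / alpha))

def sas_garch_11(n, c, a1, b1, alpha, rng, burn=500):
    """Simulate X_t = sigma'_t * eps_t with
    sigma'_t = c + a1*|X_{t-1}| + b1*sigma'_{t-1}; the caller is
    responsible for the stationarity condition b1 + lambda*a1 < 1."""
    eps = rsas(alpha, n + burn, rng)
    x = np.zeros(n + burn)
    s = np.zeros(n + burn)
    s[0] = c / (1.0 - b1)          # crude starting value for the scale
    x[0] = s[0] * eps[0]
    for t in range(1, n + burn):
        s[t] = c + a1 * abs(x[t - 1]) + b1 * s[t - 1]
        x[t] = s[t] * eps[t]
    return x[burn:]

rng = np.random.default_rng(1)
x = sas_garch_11(2000, c=0.05, a1=0.04, b1=0.9, alpha=1.8, rng=rng)
```

The burn-in discards the influence of the arbitrary starting scale.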
We say that the time series $\lbrace X_{t}\rbrace$ is an **ARMA($p_{A},q_{A}$) process with $S\alpha S$-GARCH($p_{G},q_{G}$)** noise, denoted by ARMA($p_{A},q_{A}$)-$S\alpha S$-GARCH($p_{G},q_{G}$), if $$X_{t}=\sum_{i=1}^{p_{A}}\phi_{i}X_{t-i}+\sum_{j=1}^{q_{A}}\theta_{j} e_{t-j}+e_{t},~~~ \\
e_{t}=\sigma'_{t}\epsilon_{t},~~~\sigma'_{t}=c+\sum_{i=1}^{q_{G}}a_{i}|e_{t-i}|+\sum_{j=1}^{p_{G}}b_{j}\sigma'_{t-j}\label{30}$$ where $c>0$, $\boldsymbol{a}_G$ and $\boldsymbol{b}_G$ are non-negative and $\lbrace\epsilon_{t}\rbrace$ denotes the noise sequence of iid random variables with $S\alpha S$ distribution with (cf) given in (\[2par\]) for $\alpha \in (1,2]$ and $\sigma > 0$.
Note now that due to undefined covariance when $\alpha < 2$, the classical autocovariance function cannot be considered as a tool for developing methods of parameter estimation of the processes defined in (\[10\]) and (\[30\]). Thus, the (normalized) autocovariation and the autocodifference serve as **measures of dependence** best suited for heavy-tailed distributions and processes. In our proposed method of estimating the parameters of the processes defined in (\[10\]) and (\[30\]), we make use of the normalized autocovariation. The normalized autocovariation (see [@gb]) for a $S\alpha S$ process $\lbrace X_{t}\rbrace$ with lag $k$ is given by $$NCV(X_{t},X_{t-k})=\frac{\mathbb{E}(X_{t}\text{sign}(X_{t-k}))}{\mathbb{E}|X_{t-k}|}\label{50}$$ and its estimator for a sample $x_{1},\cdots,x_{N}$ being a realization of a stationary process $\lbrace X_{t}\rbrace$ is given by $$\widehat{NCV}(X_{t},X_{t-k})=\frac{\sum_{t=l}^{r}x_{t}\text{sign}(x_{t-k})}{\sum_{t=1}^{N}|x_{t}|}$$ where $l=\text{max}(1,1+k)$ and $r=\text{min}(N,N+k)$.
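A minimal implementation of the estimator $\widehat{NCV}$ for non-negative lags (our sketch; a Student-$t_3$ sample stands in for heavy-tailed iid noise). At lag $0$ the statistic equals one exactly, and for independent data it is near zero at nonzero lags.

```python
import numpy as np

def ncv_hat(x, k):
    """Sample normalized autocovariation at lag k >= 0:
    sum_{t=k+1}^{N} x_t * sign(x_{t-k}) / sum_{t=1}^{N} |x_t|."""
    x = np.asarray(x, float)
    n = len(x)
    return np.sum(x[k:] * np.sign(x[:n - k])) / np.sum(np.abs(x))

rng = np.random.default_rng(2)
z = rng.standard_t(3, 50_000)    # heavy-tailed iid stand-in for SaS noise
print(ncv_hat(z, 0), ncv_hat(z, 1))   # lag 0 gives exactly 1; lag 1 near 0
```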
Parameter Estimation {#sec3}
=====================
We propose two methods for the estimation of the parameters $\boldsymbol \phi,~ \boldsymbol \theta,~ c,~ \boldsymbol a $, $\boldsymbol b$, $\boldsymbol{a}_G $ and $\boldsymbol{b}_G$ for the processes defined in (\[10\]), (\[20\]) and (\[30\]). For the estimation of $\boldsymbol \phi$ and $\boldsymbol \theta$, we suitably modify the Hannan-Rissanen method [@han], and for estimating $c,~\boldsymbol{a}$, $\boldsymbol{b}$, $\boldsymbol{a}_G $ and $\boldsymbol{b}_G$ we modify the empirical characteristic function method discussed in [@om].
Modified Hannan-Rissanen Method (MHR) {#sec3.1}
-------------------------------------
Let $\lbrace X_{t}\rbrace$ be as defined in (\[10\]) or (\[30\]). We discuss the Modified Hannan-Rissanen (MHR) algorithm in three steps.
1. Fit a high order AR($a$) model to $\lbrace X_{t}\rbrace$ (with $a > \max(p,q)$) using the modified Yule-Walker estimation method for $S\alpha S$ autoregressive models with $\alpha \in (1,2]$; see [@yule].
1. Let $\lbrace X_{t}\rbrace$ be an AR($a$) process given by $$X_{t}-\phi_{1}X_{t-1}-\cdots-\phi_{a}X_{t-a}=\epsilon_{t} \label{1}$$ where $\lbrace \epsilon_{t} \rbrace$ constitutes a sample of iid $S\alpha S$ random variables with $\alpha >1$.
2. Multiply both sides of (\[1\]) by the vector $\textbf{S}=[S_{t-1},\cdots,S_{t-a}]$ where $S_{t}=\text{sign}(X_{t})$ and take the expectation to obtain $a$ equations of the form $$\mathbb{E}X_{t}S_{t-j}-\sum_{i=1}^{a}\phi_{i}\mathbb{E}X_{t-i}S_{t-j}=\mathbb{E}\epsilon_{t}=0 ,~j =1,\cdots,a\label{2}$$
3. Divide the $j$th equation of (\[2\]) by $\mathbb{E}|X_{t-j}|$ to obtain $$\frac{\mathbb{E}X_{t}S_{t-j}}{\mathbb{E}|X_{t-j}|}-\sum_{i=1}^{a}\phi_{i}\frac{\mathbb{E}X_{t-i}S_{t-j}}{\mathbb{E}|X_{t-j}|}=0 ,~j =1,\cdots,a\label{3}$$
4. Applying normalized autocovariation as given in (\[50\]) to (\[3\]), we obtain the following matrix form $$\boldsymbol \lambda=\Lambda \Phi$$ where $\boldsymbol \lambda$ and $\Phi$ are vectors of length $a$ defined as $$\boldsymbol \lambda=[NCV(X_{t},X_{t-1}),\cdots,NCV(X_{t},X_{t-a})]',~\Phi=[\phi_{1},\cdots,\phi_{a}]'$$ and the matrix $\Lambda$ has size $a\times a$ with the $(i,j)$th elements given by $$\Lambda(i,j)=NCV(X_{t},X_{t-i+j}).$$
5. Finally, to estimate the parameter vector $\Phi$, the matrix $\Lambda$ must be nonsingular; we then replace $NCV$ by the sample normalized autocovariation $\widehat{NCV}$. Thus, $$\hat{\Phi}=\hat{\Lambda}^{-1}\hat{\boldsymbol\lambda}.$$
2. Using the obtained estimated coefficients $\hat{\phi}_{a1},\cdots,\hat{\phi}_{aa}$, compute the estimated residuals from $$\hat{\epsilon}_{t}=X_{t}-\hat{\phi}_{a1}X_{t-1}-\cdots-\hat{\phi}_{aa}X_{t-a}, ~~t=a+1,\cdots,n.$$
3. Next, we estimate the vector of parameters, $\boldsymbol{\beta} =(\boldsymbol\phi,\boldsymbol\theta)$ by least absolute deviation regression of $X_{t}$ onto $(X_{t-1},\cdots, X_{t-p},\hat{\epsilon}_{t-1},\cdots,\hat{\epsilon}_{t-q}),~t=a+1+q,\cdots,n$ by minimizing $S(\boldsymbol{\beta)}$ with respect to $\boldsymbol\beta$ where, $$S(\boldsymbol{\beta)}=\sum_{t=a+1+q}^{n}|X_{t}-\phi_{1}X_{t-1}-\cdots-\phi_{p}X_{t-p}-\theta_{1}\hat{\epsilon}_{t-1}-\cdots-\theta_{q}
\hat{\epsilon}_{t-q}|.$$
The proposed MHR method is both computationally efficient and nearly optimal in estimating the ARMA coefficients of (\[10\]) or (\[30\]). It is important to note that the MHR method does not require information (or estimation) of the parameters of noise distribution which is also true for LAD method. The effectiveness of the proposed method in comparison to other methods is shown in Section \[sec4\].
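Step 1 of the algorithm reduces to a single linear solve. The sketch below (our illustration; `ncv_hat` and `mhr_step1` are assumed helper names, and Student-$t_3$ noise stands in for $S\alpha S$) builds $\hat{\Lambda}$ and $\hat{\boldsymbol\lambda}$ from sample normalized autocovariations and recovers the AR coefficients of a simulated AR(1) path; note that the moment equations (\[2\]) hold for any iid symmetric noise with finite first absolute moment.

```python
import numpy as np

def ncv_hat(x, k):
    # Sample normalized autocovariation at integer lag k (k may be negative).
    x = np.asarray(x, float)
    n = len(x)
    lo, hi = max(0, k), min(n, n + k)
    return np.sum(x[lo:hi] * np.sign(x[lo - k:hi - k])) / np.sum(np.abs(x))

def mhr_step1(x, a):
    """Step 1 of the MHR algorithm: solve Lambda Phi = lambda, where
    lambda_i = NCV(X_t, X_{t-i}) and Lambda(i, j) = NCV(X_t, X_{t-i+j})."""
    lam = np.array([ncv_hat(x, i) for i in range(1, a + 1)])
    Lam = np.array([[ncv_hat(x, i - j) for j in range(1, a + 1)]
                    for i in range(1, a + 1)])
    return np.linalg.solve(Lam, lam)

# Sanity check on an AR(1) path with heavy-tailed (t_3) stand-in noise:
rng = np.random.default_rng(3)
e = rng.standard_t(3, 20_000)
x = np.zeros_like(e)
for t in range(1, len(e)):
    x[t] = 0.5 * x[t - 1] + e[t]
est = mhr_step1(x, 2)
print(est)   # close to the true AR coefficients (0.5, 0)
```

The diagonal of $\hat{\Lambda}$ is identically one (lag zero), which helps keep the system well conditioned.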
Modified Empirical Characteristic Function Method (MECF) {#sec3.2}
--------------------------------------------------------
We employ this method to obtain the estimates of $\boldsymbol a$, $\boldsymbol b$ and $c$ for the processes defined in (\[20\]) and (\[30\]). For simplicity, we discuss the method for the process defined in (\[20\]) in the case $p=1$ and $q=1$, the case most relevant in empirical finance; the method extends to higher orders of $p$ and $q$ in a similar spirit. For $p=1$ and $q=1$ we get $$X_{t}=\sigma'_{t}\epsilon_{t},~\sigma'_{t}=c+a_{1}| X_{t-1}|+b_{1}\sigma'_{t-1} \label{100}$$ where $c>0$, $a_1$ and $b_1$ are non-negative, satisfying the stationarity condition $\lambda a_1 +b_1 <1$, and $\lbrace\epsilon_{t}\rbrace$ denotes the noise sequence of iid random variables with $S\alpha S$ distribution. The parameter estimates for (\[100\]) are obtained by minimizing the function $f$ over $c$, $a_1$ and $b_1$ defined as $$f(c,a_1,b_1)=\sum_{j=1}^{n}|\psi_{theoretical}(\hat{\epsilon}_{j})-\psi_{empirical}(\hat{\epsilon}_{j})|$$ where $\hat{\epsilon}_{j}=\frac{X_{j}}{c+a_{1}| X_{j-1}|+b_{1}\sigma'_{j-1}}$, $\psi_{theoretical}(\hat{\epsilon}_{j})=\exp(-|\hat{\epsilon}_{j}|^{\hat{\alpha}})$ and $\psi_{empirical}(\hat{\epsilon}_{j})=\frac{1}{n}\sum_{k=1}^{n}\cos(\hat{\epsilon}_{j}Y_{k})$, where $Y_1,\cdots, Y_n$ are iid random variables with $S\hat{\alpha}S$ distribution and $\hat{\alpha}$ is the estimate of $\alpha$ obtained from the noise sequence of iid random variables with $S\alpha S$ distribution using the hybrid method discussed in [@hybrid]. We make use of the function "optim" available in the "stats" package of R for the minimization of $f$ over $c$, $a_1$ and $b_1$, with the method "L-BFGS-B" introduced by Byrd et al. [@fg].
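A plain-NumPy sketch of the MECF objective (our illustration on synthetic stand-in data, not the paper's R implementation): we read $\psi_{empirical}$ as the empirical characteristic function of the sample $Y_1,\cdots,Y_n$ evaluated at each standardized residual, and we replace the L-BFGS-B step by a coarse grid search purely to keep the example dependency-free.

```python
import numpy as np

def residual_scale(x, c, a1, b1):
    # sigma'_j recursion of the GARCH(1,1) equation; crude start c/(1-b1).
    s = np.empty(len(x))
    s[0] = c / (1.0 - b1)
    for j in range(1, len(x)):
        s[j] = c + a1 * abs(x[j - 1]) + b1 * s[j - 1]
    return s

def mecf_objective(params, x, alpha_hat, y):
    """f(c, a1, b1) = sum_j |psi_theoretical(eps_j) - psi_empirical(eps_j)|,
    with eps_j = x_j / sigma'_j the standardized residuals."""
    c, a1, b1 = params
    eps = x / residual_scale(x, c, a1, b1)
    theo = np.exp(-np.abs(eps) ** alpha_hat)
    emp = np.cos(np.outer(eps, y)).mean(axis=1)
    return np.sum(np.abs(theo - emp))

# Coarse grid search as a stand-in for the L-BFGS-B step used in the paper.
rng = np.random.default_rng(4)
x = rng.standard_t(3, 300) * 0.1   # toy "return" series, not real data
y = rng.standard_t(3, 300)         # stand-in for the S-alpha-hat-S sample Y
grid = [(c, a1, b1)
        for c in (0.01, 0.05, 0.1)
        for a1 in (0.02, 0.05, 0.1)
        for b1 in (0.5, 0.7, 0.9)]
best = min(grid, key=lambda p: mecf_objective(p, x, 1.8, y))
val = mecf_objective(best, x, 1.8, y)
print(best, val)
```

In practice a bounded quasi-Newton routine (R's `optim` with "L-BFGS-B", or `scipy.optimize.minimize` in Python) replaces the grid.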
Simulation and Comparative Analysis {#sec4}
===================================
For the comparative analysis, we investigate the performance of LAD, LS and MHR for the estimation of the parameters of the processes defined in (\[10\]) and (\[30\]). The parameters of the process defined in (\[20\]) are obtained using MECF discussed in Section \[sec3.2\]. We consider four models to check the efficiency of our proposed method for (\[10\]) and (\[30\]) in comparison to LAD and LS through Monte Carlo simulations.
- $S\alpha S$-MA(1): $X_{t}=\theta_{1}\epsilon_{t-1}+\epsilon_{t}$, $|\theta_{1}|< 1$
- $S\alpha S$-ARMA(1,1): $X_{t}=\phi_{1}X_{t-1}+\epsilon_{t}+\theta_{1}\epsilon_{t-1}$, $|\phi_{1}|< 1$ and $|\theta_{1}|< 1$
- MA(1)-$S\alpha S$-GARCH(1,1): $X_{t}=\theta_{1}\epsilon'_{t-1}+\epsilon'_{t}$, $|\theta_{1}|< 1$, $\epsilon'_{t}=\sigma'_{t}\epsilon_{t}$ with $\sigma'_{t}=c+a_{1}|\epsilon'_{t-1}|+b_{1}\sigma'_{t-1}$ such that $\lambda a_1+b_1<1$.
- ARMA(1,1)-$S\alpha S$-GARCH(1,1): $X_{t}=\phi_{1}X_{t-1}+\theta_{1}\epsilon'_{t-1}+\epsilon'_{t}$, $|\theta_{1}|< 1$, $|\phi_{1}|< 1$, $\epsilon'_{t}=\sigma'_{t}\epsilon_{t}$\
with $\sigma'_{t}=c+a_{1}|\epsilon'_{t-1}|+b_{1}\sigma'_{t-1}$ such that $\lambda a_1+b_1<1$.
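For concreteness, one realization of (**M3**) can be generated as follows (our sketch; Student-$t_3$ innovations stand in for the $S\alpha S$ noise, and the scale recursion is driven by the past GARCH noise $\epsilon'_{t-1}$ as in the ARMA-GARCH definition of Section \[sec2\]).

```python
import numpy as np

rng = np.random.default_rng(5)
n, burn = 1000, 500
theta1, c, a1, b1 = 0.3, 0.10, 0.05, 0.8   # parameters inside the stationary region
m = n + burn + 1
eps = rng.standard_t(3, m)                 # heavy-tailed stand-in innovations
e = np.zeros(m)                            # GARCH noise e'_t
s = np.full(m, c / (1.0 - b1))             # sigma'_t, crude starting value
for t in range(1, m):
    s[t] = c + a1 * abs(e[t - 1]) + b1 * s[t - 1]
    e[t] = s[t] * eps[t]
x = (e[1:] + theta1 * e[:-1])[burn:]       # X_t = e'_t + theta1 * e'_{t-1}
print(x.shape)
```

The same pattern, with an added AR step, yields a realization of (**M4**).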
The noise sequence $\lbrace\epsilon_{t}\rbrace$ is considered to be iid $S\alpha S$, where $\alpha \in (1,2]$ and $\beta$, $\sigma$ and $\delta$ are fixed to 0, 1, 0 respectively, and is generated using the function "rstable" in the "stabledist" package of R. For a selected set of values of $\theta_{1}$, $\phi_{1}$ and $\alpha$, simulations are run in which 1000 realisations of each model of length 1000 are generated. Tables \[11\] and \[12\] show the efficiency and effectiveness of our proposed MHR method over LAD and LS for models (**M1**) and (**M2**) respectively for different values of $\alpha \in [1.5,2]$, $\theta_{1}\in (0,0.5]$ and $\phi_{1} \in (0,1)$ in terms of mean and root mean squared error (RMSE). The estimate of $\alpha \in [1.5,2]$ for models (**M1**) and (**M2**) is obtained from the estimated residuals given in Step 2 of Subsection \[sec3.1\] using the hybrid method discussed in [@hybrid].
Tables \[13\] and \[14\] show the accuracy and efficiency of the proposed MHR method and the MECF used in obtaining the estimate of $\theta_1$ and $(c,a_1,b_1)$ respectively for model (**M3**). The estimate of $\alpha$ for this model is obtained from the $S\alpha S$ noise sequence $\lbrace\epsilon_t\rbrace$ using the hybrid method.
Tables \[15\] and \[16\] show the accuracy and efficiency of the proposed MHR method and the MECF used in obtaining the estimate of $(\theta_1,\phi_1)$ and $(c,a_1,b_1)$ respectively for model (**M4**). The hybrid method is employed in obtaining the estimate of $\alpha$ from the $S\alpha S$ noise sequence $\lbrace\epsilon_t\rbrace$.
We observe that for all the four models (**M1**), (**M2**), (**M3**) and (**M4**), the least squares method (LS) performs the worst while the least absolute deviation (LAD) and the proposed modified Hannan-Rissanen method (MHR) are at par with each other in terms of the robustness, accuracy and efficiency of the estimates. The estimates of the GARCH parameters obtained via MECF are also precise and accurate.
Application to Financial Data {#sec5}
=============================
The real time series dataset that we have considered is International Business Machines Corporation (IBM) stock data obtained from Yahoo Finance for the period January 19, 2000 - March 19, 2005, comprising 1297 daily log returns of the adjusted closing price. From Figure \[700\] (a), we observe time-varying volatility in the log returns, with mean almost zero and minimum and maximum roughly symmetric about zero. The kurtosis is 5.8164 while the skewness is -0.030, which implies that the distribution of the log returns is fairly symmetrical with heavier tails than the normal. These characteristics suggest the application of an ARMA($p_A,q_A$)-$S\alpha S$-GARCH($p_G,q_G$) model to the given dataset. To determine whether the considered time series is stationary, we implement the Augmented Dickey-Fuller (ADF) test "adf.test" available in the "tseries" package in R. The $p$-value obtained is 0.01, which confirms the stationarity of the data. We first try to fit an ARMA($p_A,q_A$) model and check whether it fits the dataset well. In order to select the best order for an ARMA($p_A,q_A$) model according to either AIC (Akaike Information Criterion) or BIC (Bayesian Information Criterion), we make use of "auto.arima" available in the "forecast" package in R. The best model chosen by "auto.arima" was the MA(1) model, and we assume that the residuals obtained from this model constitute a sample from a $S\alpha S$ distribution. Thus, we can employ the MHR method to obtain the estimate $\hat{\theta}_{1}=-0.078$.
In order to check whether the residuals can be considered an independent sample, we make use of the empirical autocovariation function instead of the classical autocovariance function. From Figure \[700\] (c), one can observe that the dependence in the residual series is almost unidentifiable. Finally, we test whether the distribution of the residual series is $S\alpha S$ using the Kolmogorov-Smirnov (KS) test and estimate the parameters ($\alpha$, $\beta$, $\sigma$, $\delta$) using the method in [@hybrid]. The estimates obtained for the residual series are ($\hat{\alpha}$, $\hat{\beta}$, $\hat{\sigma}$, $\hat{\delta}$)=$(1.6762,~0.0373,~0.0115,~0.0001)$. The KS test statistic is defined as: $$D=\sup_{x}|G_{n}^{1}(x)-G_{n}^{2}(x)|,$$ where $G_{n}^{1}(\cdot)$ and $G_{n}^{2}(\cdot)$ denote the empirical cumulative distribution functions of Samples 1 and 2 respectively, both of length $n$. The test derived from the KS statistic is called the two-sample KS test. In our case, Sample 1 is the residual series and Sample 2 is a random sample simulated from the $S\alpha S$ distribution with the parameters estimated from the residuals. Finally, we obtain 100 $p$-values of the test and create a boxplot as shown in Figure \[700\] (d). At the 0.05 significance level, we fail to reject the hypothesis that the samples are from the same distribution. However, from the residual plot of MA(1) with $S\alpha S$ noise, we observe that the volatility decays with time, suggesting the replacement of the $S\alpha S$ noise by $S\alpha S$-GARCH(1,1) noise in the $S\alpha S$-MA(1) model. Therefore, we build the MA(1)-$S\alpha S$-GARCH(1,1) model. For the given time series, the initial values of $c$, $a_1$ and $b_1$ are taken to be 0.001, 0.1 and 0.8 respectively. The GARCH(1,1) parameter estimates obtained after employing MECF are given in Table \[17\].
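The two-sample KS statistic is straightforward to compute directly (a hand-rolled sketch on stand-in data; in practice `scipy.stats.ks_2samp` in Python or `ks.test` in R does the same job).

```python
import numpy as np

def ks_2samp_stat(a, b):
    """Two-sample KS statistic D = sup_x |G^1_n(x) - G^2_n(x)|."""
    a, b = np.sort(a), np.sort(b)
    grid = np.concatenate([a, b])
    g1 = np.searchsorted(a, grid, side="right") / len(a)
    g2 = np.searchsorted(b, grid, side="right") / len(b)
    return np.max(np.abs(g1 - g2))

rng = np.random.default_rng(6)
res = rng.standard_t(3, 2000)                        # stand-in "residuals"
same = ks_2samp_stat(res, rng.standard_t(3, 2000))   # same distribution
far = ks_2samp_stat(res, rng.standard_t(3, 2000) + 2.0)  # shifted sample
print(same, far)   # D stays small when both samples share one distribution
```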
In order to evaluate whether the considered model is correctly specified, we analyse the estimated standardized residuals using graphical diagnostics. We plot the standardized residuals to check for the absence of conditional heteroskedasticity. From Figure \[700\] (e), we observe that the standardized residual plot is quite stable, indicating a constant variance over time. Further, we make use of a QQ plot to check whether the distribution of the standardized residuals matches the assumed noise distribution ($S\alpha S$) used in the estimation. From Figure \[700\] (f), we observe that the distribution of the standardized residuals is approximately $S\alpha S$ with parameters ($\hat{\alpha}$, $\hat{\beta}$, $\hat{\sigma}$, $\hat{\delta}$)=$(1.82398,~0.03870,~1.01298,~0.02132)$.
Concluding Remarks {#sec6}
==================
To conclude, we make the following observations in relation to our proposed methods.
- In this article, in order to estimate the ARMA coefficients of the ARMA($p,q$) process with $S\alpha S$ noise defined in (\[10\]) or the ARMA($p_{A},q_{A}$) process with $S\alpha S$-GARCH($p_{G},q_{G}$) noise defined in (\[30\]), we use the Modified Yule-Walker method [@yule] within the Hannan-Rissanen method and obtain the estimates of the parameters using LAD regression.
- The M-estimators, namely LAD and ML, perform well when the noise is heavy-tailed (stable). However, the LAD estimator is preferred over ML as it is computationally efficient, robust and does not require information (or estimation) of the parameters of the noise distribution. The proposed MHR method inherits the same advantages as LAD, especially for $\alpha \in [1.5,2]$, $\theta_{1}\in (0,0.5]$ and $\phi_{1} \in (0,1)$. For details on the limitations of the LS and ML estimators, see [@cad].
- For the estimation of the GARCH coefficients of the GARCH($p,q$) process with $S\alpha S$ noise defined in (\[20\]) and the ARMA($p_{A},q_{A}$) process with $S\alpha S$-GARCH($p_{G},q_{G}$) noise defined in (\[30\]), we introduce and discuss MECF for the case $p=1$ and $q=1$ for simplicity. The method is computationally efficient, robust and can further be extended to higher orders of $p$ and $q$.
- Finally, we give an application of our proposed methods, where the noise distribution of the dataset is first taken to be $S\alpha S$ and later $S\alpha S$-GARCH due to volatility clustering. From Table \[17\], we observe that the estimates are statistically significant, and Figure \[700\] shows, through residual analysis, the effective modelling of the financial data using our proposed methods.
\begin{table}[h]
\centering
\caption{Mean (RMSE) of $\hat{\theta}_{1}$ for model (\textbf{M1}) under LAD, LS and MHR, with $\hat{\alpha}$ estimated from the residuals.}
\label{11}
\begin{tabular}{|*{5}{c|}}
\hline
\textbf{True Values} & \textbf{LAD} & \textbf{LS} & \textbf{MHR} & $\hat{\alpha}_\textbf{residuals}$\\
\hline
$\theta_{1}=0.01,~\alpha =1.50$ & 0.0100 (0.0152) & 0.0096 (0.0283) & 0.0098 (0.0153) & 1.5011 (0.0529)\\
$\theta_{1}=0.05,~\alpha =1.55$ & 0.0496 (0.0175) & 0.0490 (0.0258) & 0.0492 (0.0177) & 1.5553 (0.0532)\\
$\theta_{1}=0.10,~\alpha =1.65$ & 0.0997 (0.0216) & 0.0991 (0.0269) & 0.0990 (0.0218) & 1.6552 (0.0524)\\
$\theta_{1}=0.40,~\alpha =1.85$ & 0.3996 (0.0310) & 0.3992 (0.0295) & 0.3965 (0.0312) & 1.8523 (0.0438)\\
$\theta_{1}=0.50,~\alpha =1.95$ & 0.4987 (0.0362) & 0.4990 (0.0306) & 0.4911 (0.0372) & 1.9500 (0.0310)\\
\hline
\end{tabular}
\end{table}
\begin{table}[h]
\centering
\caption{Mean and RMSE of $(\hat{\phi}_{1},\hat{\theta}_{1})$ for model (\textbf{M2}) under LAD, LS and MHR, with $\hat{\alpha}$ estimated from the residuals.}
\label{12}
\begin{tabular}{|*{8}{c|}}
\hline
\textbf{True Values} & \multicolumn{2}{c|}{\textbf{LAD}} & \multicolumn{2}{c|}{\textbf{LS}} & \multicolumn{2}{c|}{\textbf{MHR}} & $\hat{\alpha}_\textbf{residuals}$\\
$(\phi_1,\theta_1,\alpha)$ & \textbf{Mean} $(\hat{\phi}_1,\hat{\theta}_1)$ & \textbf{RMSE} $(\hat{\phi}_1,\hat{\theta}_1)$ & \textbf{Mean} $(\hat{\phi}_1,\hat{\theta}_1)$ & \textbf{RMSE} $(\hat{\phi}_1,\hat{\theta}_1)$ & \textbf{Mean} $(\hat{\phi}_1,\hat{\theta}_1)$ & \textbf{RMSE} $(\hat{\phi}_1,\hat{\theta}_1)$ & \\
\hline
$(0.50,0.01,1.55)$ & (0.4993,0.0105) & (0.0285,0.0329) & (0.4957,0.0133) & (0.0474,0.0564) & (0.5004,0.0088) & (0.0275,0.0316) & 1.5545 (0.0564)\\
$(0.40,0.05,1.65)$ & (0.3994,0.0503) & (0.0427,0.0474) & (0.3956,0.0534) & (0.0584,0.0662) & (0.4020,0.0466) & (0.0430,0.0470) & 1.6528 (0.0525)\\
$(0.80,0.10,1.75)$ & (0.7983,0.1013) & (0.0172,0.0306) & (0.7961,0.1024) & (0.0206,0.0359) & (0.7988,0.0979) & (0.0182,0.0313) & 1.7462 (0.0553)\\
$(0.90,0.20,1.75)$ & (0.8985,0.2011) & (0.0104,0.0274) & (0.8964,0.2019) & (0.0129,0.0318) & (0.8982,0.1951) & (0.0129,0.0315) & 1.7460 (0.0568)\\
$(0.20,0.40,1.80)$ & (0.1989,0.4009) & (0.0455,0.0535) & (0.1972,0.4018) & (0.0484,0.0575) & (0.2035,0.3917) & (0.0663,0.0681) & 1.8008 (0.0469)\\
$(0.10,0.20,1.80)$ & (0.0983,0.2016) & (0.0926,0.0968) & (0.0950,0.2041) & (0.0984,0.1036) & (0.1071,0.1919) & (0.1031,0.1036) & 1.8010 (0.0454)\\
$(0.30,0.10,1.85)$ & (0.2982,0.1014) & (0.0731,0.0786) & (0.2954,0.1035) & (0.0716,0.0784) & (0.3009,0.0983) & (0.0756,0.0790) & 1.8487 (0.0414)\\
\hline
\end{tabular}
\end{table}
\begin{table}[h]
\centering
\caption{Mean (RMSE) of $\hat{\theta}_{1}$ for model (\textbf{M3}) under LAD, LS and MHR.}
\label{13}
\begin{tabular}{|*{4}{c|}}
\hline
\textbf{True Values} & \textbf{LAD} & \textbf{LS} & \textbf{MHR}\\
\hline
$\theta_{1}=0.05,~\alpha =1.20$ & 0.0508 (0.0247) & 0.0511 (0.0604) & 0.0464 (0.0225)\\
$\theta_{1}=0.20,~\alpha =1.40$ & 0.1975 (0.0349) & 0.1961 (0.0653) & 0.1925 (0.0354)\\
$\theta_{1}=0.30,~\alpha =1.65$ & 0.3003 (0.0384) & 0.3014 (0.0577) & 0.2989 (0.0382)\\
$\theta_{1}=0.40,~\alpha =1.75$ & 0.3952 (0.0424) & 0.3935 (0.0449) & 0.3893 (0.0405)\\
$\theta_{1}=0.50,~\alpha =1.85$ & 0.5018 (0.0373) & 0.5074 (0.0376) & 0.4942 (0.0367)\\
\hline
\end{tabular}
\end{table}
\begin{table}[h]
\centering
\caption{MECF estimates (RMSE) of the GARCH(1,1) parameters for model (\textbf{M3}), with $\hat{\alpha}$ estimated from the noise.}
\label{14}
\begin{tabular}{|*{5}{c|}}
\hline
\textbf{True Values} $(\theta_1,c,a_1,b_1,\alpha)$ & $\hat{c}$ & $\hat{a}_1$ & $\hat{b}_1$ & $\hat{\alpha}_\textbf{noise}$\\
\hline
$(0.05,0.01,0.02,0.7,1.20)$ & 0.0085 (0.0037) & 0.0196 (0.0035) & 0.7086 (0.0275) & 1.2011 (0.0471)\\
$(0.20,0.05,0.04,0.9,1.40)$ & 0.0499 (0.0032) & 0.0399 (0.0032) & 0.9060 (0.0246) & 1.4017 (0.0511)\\
$(0.30,0.10,0.05,0.8,1.65)$ & 0.0121 (0.0879) & 0.0498 (0.0032) & 0.8046 (0.0234) & 1.6548 (0.0552)\\
$(0.40,0.50,0.06,0.8,1.75)$ & 0.4966 (0.0335) & 0.0607 (0.0033) & 0.8102 (0.0272) & 1.7584 (0.0496)\\
$(0.50,1.00,0.03,0.9,1.85)$ & 1.0507 (0.3165) & 0.0301 (0.0033) & 0.9115 (0.0267) & 1.8574 (0.0433)\\
\hline
\end{tabular}
\end{table}
\begin{table}[h]
\centering
\caption{Mean (RMSE) of $(\hat{\theta}_{1},\hat{\phi}_{1})$ for model (\textbf{M4}) under LAD, LS and MHR.}
\label{15}
\begin{tabular}{|*{7}{c|}}
\hline
\textbf{True Values} & \multicolumn{2}{c|}{\textbf{LAD}} & \multicolumn{2}{c|}{\textbf{LS}} & \multicolumn{2}{c|}{\textbf{MHR}}\\
ARMA(1,1)-GARCH(1,1) & $\theta_{1}$ & $\phi_{1}$ & $\theta_{1}$ & $\phi_{1}$ & $\theta_{1}$ & $\phi_{1}$\\
\hline
$\theta_{1}=0.10,~\phi_{1}=0.40,~\alpha =1.55$ & 0.1004 (0.0497) & 0.4028 (0.0422) & 0.1013 (0.0850) & 0.4008 (0.0764) & 0.0873 (0.0476) & 0.4126 (0.0438)\\
$\theta_{1}=0.10,~\phi_{1}=0.60,~\alpha =1.65$ & 0.0973 (0.0509) & 0.5979 (0.0415) & 0.0917 (0.0763) & 0.5977 (0.0584) & 0.0921 (0.0510) & 0.6015 (0.0419)\\
$\theta_{1}=0.20,~\phi_{1}=0.50,~\alpha =1.70$ & 0.2007 (0.0568) & 0.4944 (0.0417) & 0.1929 (0.0932) & 0.5025 (0.0651) & 0.1920 (0.0564) & 0.5007 (0.0456)\\
$\theta_{1}=0.20,~\phi_{1}=0.90,~\alpha =1.75$ & 0.1957 (0.0505) & 0.8971 (0.0155) & 0.1965 (0.0683) & 0.8964 (0.0192) & 0.1912 (0.0482) & 0.8964 (0.0172)\\
$\theta_{1}=0.30,~\phi_{1}=0.80,~\alpha =1.85$ & 0.2967 (0.0436) & 0.8005 (0.0219) & 0.2995 (0.0420) & 0.7970 (0.0228) & 0.2935 (0.0428) & 0.7998 (0.0280)\\
\hline
\end{tabular}
\end{table}
| True values $(\theta_1, \phi_1, c, a_1, b_1, \alpha)$ | $\hat{c}$ | $\hat{a}_1$ | $\hat{b}_1$ | $\hat{\alpha}_{\text{noise}}$ |
|---|---|---|---|---|
| (0.1, 0.4, 0.01, 0.02, 0.7, 1.55) | 0.0115 (0.0042) | 0.0196 (0.0040) | 0.7096 (0.0305) | 1.5572 (0.0554) |
| (0.1, 0.6, 0.05, 0.04, 0.9, 1.65) | 0.0573 (0.0035) | 0.0438 (0.0085) | 0.9141 (0.0302) | 1.6504 (0.0573) |
| (0.2, 0.5, 0.10, 0.05, 0.8, 1.70) | 0.1090 (0.0879) | 0.0500 (0.0032) | 0.8132 (0.0234) | 1.7036 (0.0471) |
| (0.2, 0.9, 0.50, 0.06, 0.8, 1.75) | 0.5016 (0.0308) | 0.0605 (0.0030) | 0.8133 (0.0263) | 1.7542 (0.0495) |
| (0.3, 0.8, 1.00, 0.03, 0.9, 1.85) | 1.0622 (0.3311) | 0.0301 (0.0032) | 0.9142 (0.0282) | 1.8507 (0.0485) |
| Parameter | Estimate | Standard Error | $t$-value | Pr(>\|t\|) |
|---|---|---|---|---|
| $c$ | 0.00107 | 0.00024 | 4.435 | 9.19e-6 |
| $a_1$ | 0.09285 | 0.00082 | 112.71 | 0 |
| $b_1$ | 0.79286 | 0.00103 | 766.69 | 0 |
[Figure: Plots generated using our proposed methods (Rplot3.pdf)]
| |
Promoting Equitable Access to Education for Children and Young People with Vision Impairment offers a suitable vocabulary and developmental route map to examine the changing influences on promoting equitable access to education for learners with vision impairment in different contexts and settings, throughout a given educational pathway.
Bringing together a wide range of perspectives, this book argues that inclusive educational systems and teaching approaches should focus upon promoting and sustaining a balanced curriculum. It provides an analysis of how a suitable curriculum balance can be promoted and sustained through the stages of a given educational pathway to ensure equitable access and progression for all learners with vision impairment. The authors draw on the United Kingdom as a country study to illustrate the complex ecosystem within which learners with vision impairment are educated.
Structured around a framework which provides a conceptually coherent and practical balance between universal and specialist approaches, this book is a relevant read for educators, academics, and researchers involved in vision impairment education as well as officials in government and non-government organisations engaged in developing education policy relating to inclusive education and disability.
- Copyright: 2023 Mike McLinden, Graeme Douglas, Rachel Hewett, Rory Cobb, Sue Keil, Paul Lynch, Joao Roe and Jane Stewart Thistlethwaite

Book Details
- Book Quality:
- ISBN-13: 9781000624496
- Publisher: Taylor and Francis
- Date of Addition: 2022-09-30T11:09:47Z
- Language: English
- Categories: Education, Nonfiction
- Usage Restrictions: This is a copyrighted book.
|
https://kenya.bookshare.org/en/bookshare/book/4900646
|
When Jay Petervary said he was getting warmed up yesterday, he obviously meant it. The race leader is keeping a ferocious pace and is expected to arrive at the third checkpoint early this morning (GMT +6), but not before he and his pursuers have crossed a very wild and remote section with little to no resupply en route. The checkpoint itself is in a guesthouse in the village of Chychkan, right on the edge of Issyk-Kul, the world's second largest salt water lake.
While the top 5 has been pretty steady since CP1, positions 5 to 10 are only now taking shape. Alex Jacobson has recently overtaken Jan Kopka, who until then had held a firm 6th place. Rui Rodrigues is currently in 8th position, followed by Wannes Baeten and Laurens van Gucht.
As Lee Craigie is out of the race and recovering from food poisoning at CP2 (as now confirmed), it's interesting to see how the women's field will evolve. Former runner-up and now in the lead is Jenny Tough, who reached CP2 this afternoon. The tough-as-nails 27-year-old Canadian is no stranger to Kyrgyzstan: two years ago, she ran unsupported through the Tien Shan mountain range from one end of the country to the other (just over 900 kilometers) in an astonishing 25 days. The new runner-up is Phillipa Battye, who is about 70 km from CP2 at the moment.
However, the last 24 hours have again seen a lot of scratched riders. The official number is now up to twenty-five. And behind each of the crossed-out names on the map is a personal and sometimes heartbreaking story. Bad luck, tough decisions, hairy situations; it's the darker side of this unsupported adventure. But it's also the attraction for many: to test what you're made of, to stretch the boundaries of what you think you can achieve. Which is often more than we think. As one of our followers put it: 'man built the train to move the heavy load, man built the bike to move the soul.' Spot on.
|
https://www.silkroadmountainrace.cc/recap-srmrno1-day-6/
|
The noisy, motley group of runners suddenly fell silent and awaited the bullhorn.
When it blasted, the soles of more than a thousand running shoes began pummelling Middle Road against a canvas of shadows and golden light.
On Saturday morning the 25th annual Chilmark Road Race began just as its predecessors - but with an even richer sense of history, and featuring a wonderful new gadget.
"You're making history today - 25 years," race founder Hugh Weisman told the crowd, speaking through a megaphone from the back of his red pickup truck.
Back in 1978, Mr. Weisman put this event together for the kids at the Chilmark Community Center. Fittingly, the road race remains a fundraiser for the center.
Just 180 participants ran in the inaugural race; this year slightly more than 1,500 ran, including one canine entry.
The late Joey Kinstlinger was one child who raced the five kilometers along Chilmark's Middle Road, starting near Tea Lane and ending at Beetlebung Corner, in 1978; photographer Alison Shaw captured the moment.
"Wherever he lived - which was mostly in Colorado, after he left the East - he always had [the photograph] in his bathroom because it was such a comforting picture of water coming down on his head," said Julia Kinstlinger, his mother. "He loved that picture."
A year ago in May, he died at the age of 33 while hiking in Colorado with friends.
The photograph was featured on the poster advertising the race.
For Mrs. Kinstlinger, this was her first road race. "I am here to walk for Joe," she said.
Mr. Weisman's family was out in full force, his wife, Suzanne, his three daughters, Ali Weisman, Jennifer Sullivan with husband Dan, and Wendy Jenkinson, and his four grandchildren, Marguerite Smith, 10, Timmy and Annie Sullivan, ages six and four, and Wyatt Jenkinson, four.
"It gets you strong," said Miss Smith. "This year I'm trying to win a prize."
Jim Austin, a Vineyard Haven resident who has run in almost every race since 1978, said, "We are going into the third generation of runners. Today is many grandchildren's first time."
For veteran runners, the idyllic course has a magnetic charm.
"Although you think you know every inch of the course, you never know it until you run this year's race," Mr. Austin said.
There were two runners who haven't missed a single Chilmark Road Race since the event began. They were Priscilla Karnovsky, now coordinator of this year's 60 volunteers, and Morgan Shipway, a summer resident of Chilmark.
Miss Karnovsky called the event "downright fun."
Mr. Shipway said Mr. Weisman was instrumental in bringing him out for the first race.
"It was my first step I ever took, right here," said Mr. Shipway. "And 25 years later I have run a lot of marathons. Hughie talked me into it. It made me a runner."
Mr. Shipway talked about the race's popularity and its uniqueness.
"A lot of it has to do with running - it's a great sport," said Mr. Shipway. "Hughie Weisman's aesthetics make the race the fun event that it really is, like the lobsters, the feeling of Chilmark and the Island. Hughie's personality, the race is steeped in it."
There was a time when Mr. Shipway ran competitively, but these days he enjoys himself in mid-pack. "To be 60 years old and healthy is a good thing," he said. "It was fun then, and with my speed lesser, it is still fun."
The race is fun - for children of all ages, running or walking, for parents pushing strollers, serious runners, amateurs and those who just want to soak up the experience of the race.
For years, members of the MacMaster family have come from Philadelphia and New Jersey to the Island for the race.
Patrick MacMaster said the race is a "total blast" every year.
The family, this year numbering over 30, wore self-designed shirts, as they do each year.
This year the shirt design was by Katie MacMaster, who died in May of this year.
The slogan on the back of the blue shirts was "You Push Me, I'll Push You."
Mr. MacMaster said the slogan is all about helping one another out.
One family member, Sarah Williams, six, secured second place in her age category of females under eight years old.
The race times ranged from under 16 minutes to almost an hour and a half. And this year a new gizmo was used to time each runner - an electronic chip with a velcro strap worn on the ankle. The high-tech device proved effective, and times were posted minutes after the first runner crossed the finish line.
The race champs were repeats from last year.
Twenty-one-year-old Tyler Cardinal took a comfortable lead from the start and finished the race with an impressive 15:30.3, almost a minute ahead of second place. He traveled from Middletown, Conn., this weekend to make the race.
Mr. Cardinal, a student and cross-country athlete at Iona College in New York, ran in the road race for the first time when he was in seventh grade.
He won the race in 1999, came in second the next year and won it again last year.
"It's been my favorite summer race," said Mr. Cardinal after receiving his first place prize, a lobster.
Anne Preisig, 34, of Falmouth, took first place among women with a time of 18:05.5.
She came out last year for the first time after hearing from friends how fun the race is. She is a professional duathlete and coach of the cross-country team at Falmouth High School.
"The race is a great community event, with all these little kids, spectators along the course," she said. "It's a beautiful course with the trees, and a tough course with the hills, and there always seems like there is a headwind."
She was behind Marian Bihrle until the 2.5-mile mark when she overtook her up a hill. Miss Bihrle finished in 18:16.2.
After all the runners crossed the finish line, a ceremony was held for the founder and the two veteran race runners outside the Chilmark Community Center before a number of prizes were handed out to the top runners from each age category.
The two who have run in the race every year were presented with lobster sculptures by Steve Lohman, Island artist.
Jeff Herman presented Mr. Weisman with another Lohman sculpture, and said, "Without the efforts of a certain individual none of us would have been here for the last 25 years."
Mr. Weisman assured everyone the race will continue for at least another 25 years.
"The road race," Mr. Austin said, "is part of the summer biorhythms now."
|
https://vineyardgazette.com/news/2002/08/12/chilmark-road-race-draws-hundreds
|
Back Lapping Semiconductor Wafers
This process is usually applied to silicon, the key substrate for semiconductor devices. Back lapping is also a key process for compound semiconductors, such as gallium arsenide (GaAs) and indium phosphide (InP). The downstream use of these materials includes communication devices, such as cell phones and modems.
Technical Considerations

Semiconductor material is normally fragile. The first consideration must be the residual strength of the wafer after back lapping. Final thickness requirements can vary between 0.050 and 0.200 mm. Certain processes can also require thinning to 0.010 mm (10 microns). After thinning, a small area device of 0.010 mm thickness can be manipulated easily, while a large 125 mm diameter wafer that is 0.1 mm thick can present problems. If the final thickness makes the wafer too fragile to be easily manipulated, it has to be supported on a substrate, which can be another, thicker wafer or a glass or ceramic disc. For accurate work, the substrate should be substantial, with a thickness-to-diameter ratio (aspect ratio) of at least 1:7.
If support is required, the second consideration is whether to make a permanent or temporary bond. This choice depends on the future use of the material.
The final consideration is the accuracy required in terms of mean thickness and thickness variation. Achieving a thickness tolerance of ±0.001 mm over the area of a small device is comparatively simple, but a larger tolerance would be required for a whole wafer. While the surfaces of production wafers are normally highly parallel, the wafer profile is not flat. The mechanics of crystal growth processes and wafer preparation result in some wafers that are “saddle shaped,” with distortions from flatness that may be as large as 0.005 mm.
Protecting the Wafer Front

With careful use of the cementing described below, damage to the wafer front surface can be avoided. Additional protection can be added by:
- Coating the front surface with photo-resist (this is normal when devices such as integrated circuits are on the surface)
- Using a proprietary polyvinyl chloride (PVC) or similar adhesive film
- Interposing a lens tissue, pre-soaked in cement, between the wafer and the mounting plate
Direct Mounting on the Plate

The wafer is cemented to a disc or the plate of a precision jig. For producing thin sections, a hard cement such as Crystalbond is used, since a softer wax is eroded away during lapping, which could lead to edge chipping. The surface of the mounting plate or disc must be flat, and this is achieved by first lapping it on a known lapping plate. All surfaces should be clean and grease-free, since deposits can give local deviations from flatness, which will result in thickness variations after lapping.
A second disc is required as a pressure plate. The disc or plate, the wafer and the pressure plate are heated above the melting point of the thermoplastic cement. Two methods can then be used to apply the cement: the direct-pressure method or the capillary method.
In the direct pressure method, a relatively thick layer of cement is spread over the mounting plate. The front surface of the wafer is pressed into this layer, and then the pressure plate is applied. The assembly is then allowed to cool. A thin sheet of plastic material (or lens tissue) between the wafer and pressure plate prevents the latter being retained by excess cement. This method removes any local unevenness on the front face, since cement will fill in defects or connections. A possible pitfall is the “wedging” of the wafer if bubbles are allowed to enter the sample/mounting plate interface.
In the capillary method, the wafer, mounting plate and specimen plate are heated above the melting point of the cement, as in the previous method. A small amount of cement is then applied to the edge of the wafer, which will be pulled into the wafer/mounting plate interface. A very thin uniform layer results, as long as the gap between the two surfaces is kept sufficiently small.
Cementing to a Substrate

The methods previously described hold the wafer to the substrate and, in turn, the substrate to the mounting plate. The wax bond of the substrate to the mounting plate or jig should have a lower softening point than that used to bond the wafer to the substrate. The substrate can then be removed from the mounting plate by heating to the lower temperature, while the wafer and substrate remain cemented together. Alternatively, the substrate can be held with a vacuum jig.
Permanent Bonding

Permanent bonding is normally done with epoxy resin to a glass or ceramic substrate. The surface of the substrate should again be flat to better than the tolerance required on wafer thickness. A thin uniform layer of epoxy is then applied to the surface of the substrate (using a combing or stippling action), and the wafer is placed on the epoxy. The substrate, wafer and a suitable pressure plate are heated to not more than 30°C, at which point the epoxy will become very fluid. A plastic interface film sheet is put between the wafer and the pressure plate, and the whole is left for the epoxy to cure. This produces a uniform epoxy layer less than 0.1 mm thick between the wafer and substrate. The epoxy bond can be further strengthened with an additional two- to three-hour cure at 40 to 50°C. After mounting, the substrate can be held on a mounting plate or disc, or within the vacuum chuck.
The Lapping Process

Lapping is most economically carried out on a versatile, high-precision polishing machine using a scrolled cast iron plate. The machine used should have an applicable reciprocating roller bar mechanism to hold the sample in position while allowing for constant plate conditioning during the preparation process. The lapping material is usually medium grit (10 to 15 micron) silicon carbide powder, suspended in a lapping oil. Other lapping materials are medium grit aluminum oxide or diamond. Self-adhesive abrasive papers, which are cleaner to use, can be used with a stainless steel plate or with small specimens and increased speed on smaller polishing machines.
The choice of abrasive depends on the type of material—a very aggressive abrasive, such as diamond will cause a deeper damage layer at the surface. Damage penetration can be reduced by decreasing the load on the sample and the plate speed as the final thickness is approached.
After lapping, the sample can be polished using chemo-mechanical suspensions of colloidal silica (0.125 micron) or aluminum oxide (0.3 micron). The slurry suspensions are pumped continuously over the plate. The highest quality of surface finish is obtained by giving a second polish using a very fine (0.05 micron) aluminum oxide suspension, preferably on a self-adhesive cloth pad fixed to a plain stainless steel plate. Damage to the wafer’s crystalline structure can be reduced by using very low loadings on the wafers, especially toward the end of the polishing process.
Measuring Wafer Thickness

The only truly accurate approach is to measure the wafer thickness before and during the lapping process (see Figure 2). The original wafer mean thickness, t0, is known, and the initial thickness, t1, of the wafer and cement layer after mounting on a plate or substrate is also measured (the cement layer is t1 - t0). The variation of t1 around the periphery and across the wafer is a measure of the cement thickness variation, assuming the wafer thickness variation is small. A series of thickness measurements at fixed time intervals during the lapping process enables the end time (when the wafer reaches its required thickness) to be estimated. Provided no change occurs in the operating conditions, i.e. plate speed, specimen load, specimen drive speed, etc., processing times are very consistent.
The thickness, tm, of the wafer and cement layer is measured relative to the mounting plate, and the amount of wafer material removed is t1 - tm. Two or three measurements at fixed time intervals enable the stock removal rate in microns per minute to be calculated.
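The bookkeeping above is simple enough to automate. Here is a minimal Python sketch (function names and the sample readings are illustrative, not from the article) that fits a removal rate to periodic thickness readings and projects the end time:

```python
# Sketch: estimate stock removal rate and lapping end time from
# periodic thickness readings t_m taken at fixed intervals.

def removal_rate_um_per_min(times_min, readings_um):
    """Least-squares slope of thickness vs. time, returned as
    microns removed per minute (positive number)."""
    n = len(times_min)
    mean_t = sum(times_min) / n
    mean_r = sum(readings_um) / n
    num = sum((t - mean_t) * (r - mean_r)
              for t, r in zip(times_min, readings_um))
    den = sum((t - mean_t) ** 2 for t in times_min)
    return -num / den  # readings fall as material is removed

def estimated_end_time(times_min, readings_um, target_um):
    """Project when the wafer (plus cement layer) reaches target_um,
    assuming operating conditions stay constant."""
    rate = removal_rate_um_per_min(times_min, readings_um)
    remaining = readings_um[-1] - target_um
    return times_min[-1] + remaining / rate
```

With readings of 500, 470 and 440 microns at 0, 2 and 4 minutes, the fitted rate is 15 microns per minute, which puts a 200-micron end point at the 20-minute mark.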
The thickness of the wafer is easily measured using a precision micromount, which allows direct measurement of thickness when the sample is mounted in a precision jig.
The micromount holds a mechanical gauge inside a massive stainless steel ring, which sits on the conditioning ring of the inverted jig with the stylus of the gauge in contact with wafer. To use the micromount, the jig is removed from the lapping plate, and any excess slurry is removed from the conditioning ring and wafer. The jig is inverted on its stand, and the micromount is put on the conditioning ring.
An “in situ” dial gauge attached to the precision jig and measuring the downward movement of the inner stem as the specimen is lapped is not truly accurate because it makes no allowance for any wear on the jig conditioning ring. This error means that the resulting wafer thickness is always thinner than indicated by the jig/dial gauge combination. The error can be minimized by the use of very hard ceramic materials as the wear ring material.
In comparison, a precision micromount uses the current surface of the conditioning ring only as a support (i.e., a true “zero”), and thus directly measures the amount of material removed from the wafer, particularly when the reading is compared with that at the wafer mounting plate or substrate.
In situ gauges do, however, generally make more than acceptable compromises for assessing wafer thickness during any specific back-lapping operation and provide a very repeatable means of assessing when a process is reaching its end-point.
Selected Area Polishing

An interesting new application has recently emerged that requires backside thinning of a packaged semiconductor device to allow the electronic circuitry on the wafer's front side to be imaged under infrared light from the wafer's backside. This allows point failures within the circuitry to be localized and evaluated. At full wafer thicknesses, silicon is semi-opaque to infrared wavelengths. Thinning to a remaining thickness of 100 microns provides sufficient transparency for backside emission microscopy.
This method has become necessary due to the increasing complexity and number of metallization layers seen in modern devices. The metal layers physically block important photonic information from view at the front side of the wafer. Selected area polishing can be applied both to standard plastic packages and to ceramic packages (generally required for military devices and used for some CPUs).
Further Reading

1. Hazeldine, Tim and Rubin, Joseph, "Rapid Routes to Planar Polishing," Materials World, Feb. 1998, published by The Institute of Materials.
2. Fynn and Powell, “Cutting and Polishing Optical and Electronic Materials,” 2nd Ed., published by Adam Hilger.
3. Hazeldine, Tim, “Annular Sawing,” European Semiconductor, October 1997, published by Angel Publishing.
|
https://www.ceramicindustry.com/articles/82842-back-lapping-semiconductor-wafers
|
Carcassonne is the capital of the Aude department in the Occitanie region of southwestern France. The river Aude divides the city into two parts. One part is the Ville Basse and the other is the Cité.
The city has an interesting past where many turbulent events have taken place. The city has been under various religious and national governments throughout its history. Due to its strategic location between Toulouse and the Mediterranean Sea, it is considered the largest citadel in Europe.
Carcassonne has much to offer in terms of historical, cultural, and artistic attractions. The city is best known for its medieval fortress, the Cité de Carcassonne, which is on the UNESCO World Heritage List, but there is much more to discover here, and the sights will undoubtedly impress you. On top of that, Carcassonne has many great restaurants, so you can enjoy excellent gastronomy during your stay.
As you stroll through the narrow medieval streets and admire the buildings, you will feel the spirit of this city with its rich history. Many religions, nations, and political powers have ruled in Carcassonne, and every important event has left its mark, so today you can explore the stories and myths of the city's past. If you are taking a road trip through France, you should put Carcassonne on your list of destinations. You will not regret it.
Basic info and car hire in Carcassonne
- Location: Occitanie region, France
- Population: 46,513
- Official language: French
- Currency: Euro
- Weather: Carcassonne has a humid subtropical climate
- Internet coverage: Hotels, bars, restaurants, and cafés offer Wi-Fi.
- Road conditions: Most roads in Carcassonne are in good condition.
- Car hire in Carcassonne: The most common pick-up point for vehicles in Carcassonne is at Carcassonne airport. Please note that prices vary frequently. It is best to book a car 3 to 8 weeks in advance of your desired pick-up date - this will ensure you get the cheapest rental rate. Alternatively, it is also possible to find cheap last-minute car hire. Find the best deal on car hire in Carcassonne!
Driving in Carcassonne
Carcassonne is a smaller city, but traffic can be pretty heavy during rush hour. You can drive in the city centre, but be sure to follow the rules and obey all signs on the road. There are also many pedestrian areas that you are not allowed to enter by car, so watch out for these areas. The police can be very strict about this. Also, be careful when driving through narrow streets, as many of them are one-way streets.
There are many parking lots in the city and also three large underground garages, so parking your car in one of these lots and exploring the city on foot might be your best bet. Public transport is not that good, but some buses run fairly regularly and cover some areas that might interest you.
Driving outside the city and exploring the surrounding areas of Carcassonne is a whole different story. The roads are wide and the views are stunning. There are not as many traffic jams as in the city, so you can enjoy your drive. There are many amazing places to visit around Carcassonne and they are all accessible by car, so buckle up and drive safely.
- Age limits: The minimum age to drive a car in France is 18 years.
- International Driving Permit: Yes, if you are a non-EU license holder.
- Additional papers: Identification (passports for all people in the car), the car's registration certificate, an M.O.T. certificate for cars over three years old (proving the vehicle meets environmental and road safety standards), a valid driving license, and valid proof of insurance. (It is advisable to buy insurance at the car hire agency in case your policy doesn't cover you while driving in France.)
- Additional requirements: High-visibility reflective vest for every person in the car, warning triangle, full set of replacement bulbs for head- and taillights, spare pair of glasses, headlight beam converters (required for right-hand-drive vehicles), a "Crit'Air" badge (required in central Paris), and a breathalyzer.
- Children in the car: Children under 13 must be in car seats or wearing seat belts appropriate for their age and height. Babies and infants about a year old or under should always be positioned in rear-facing car seats.
- Driving side of the road: Right
- Lights: Check that all lights are working, clean, and correctly aimed.
- General speed limits: There is a variable maximum speed on French roads, depending on weather conditions. In dry weather, 2- or 3-lane rural roads are limited to 80 km/h, 4-lane expressways (in rural areas) are limited to 110 km/h, and motorways (in rural areas if classified as highways) are limited to 130 km/h. In rainy conditions, the limits are lowered to 80, 100, and 110 km/h respectively. The 50 km/h speed limit in cities is independent of weather conditions. The general speed limit is reduced to 50 km/h on all roads in fog or other poor visibility conditions when visibility is less than 50 meters.
- Parking suggestions: In the city area, you must pay for all street parking from Monday through Saturday from 9:00 am-8:00 pm, and you pay per hour. Parking is free on Sundays and holidays. Generally, you can park for up to 6 hours at a time if you do not have a resident's permit, and up to a week at a time with a permit.
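The weather-dependent speed limits listed above can be condensed into a small lookup, shown here as a Python sketch (the road-category names are our own simplification, not official French road classes):

```python
# Illustrative lookup of the French speed limits described above (km/h).
LIMITS = {
    "rural_road": {"dry": 80, "rain": 80},
    "expressway": {"dry": 110, "rain": 100},
    "motorway": {"dry": 130, "rain": 110},
    "urban": {"dry": 50, "rain": 50},
}

def speed_limit(road, weather="dry", visibility_m=None):
    """Applicable limit; visibility under 50 m caps all roads at 50 km/h."""
    if visibility_m is not None and visibility_m < 50:
        return 50
    return LIMITS[road][weather]
```

So a dry motorway allows 130 km/h, a rainy expressway 100 km/h, and thick fog brings everything down to 50 km/h.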
Car Hire in Carcassonne
Most popular car hire at Carcassonne
The most popular hire car in Carcassonne is the Ford Fiesta, with the Renault Clio and Seat Ibiza also being popular options. The most popular car types in Carcassonne are mini, economy, and compact.
Popular Driving Routes
Popular One-Way car hire from Carcassonne
Want to rent a car for a one-way trip? No worries! Orbit Car Hire offers a variety of one-way car hire options in many locations outside of Carcassonne. One-way car hire is ideal for cross-town or cross-country travel, saving time by not returning to your original location. Start your reservation with Orbit Car Hire and find great deals on one-way car hire at locations across France.
- Carcassonne-Toulouse (93 km, approximately 1 hour of driving)
- Carcassonne-Sète (136 km, approximately 1-1.5 hours of driving)
- Carcassonne-Montpellier (150 km, approximately 1.5 hours of driving)
- Carcassonne-Nîmes (200 km, approximately 2 hours of driving)
What to see in Carcassonne
The Jacobin Gate
This is the only one of the four main gates that has survived. It was built at the beginning of the 14th century and rebuilt once, in the 18th century. Today it is a monument and historical site marking the former entrance to the city of Carcassonne.
Basilica of St Nazaire
This very old cathedral was built in the 6th century in Romanesque-Gothic style. In this church, you can see one of the oldest stained-glass windows in France. You can also admire some great decorations and listen to the majestic sound of the organ. It is open to visitors every day of the week.
Lac de la Cavayère
This attractive lake near the city of Carcassonne is a very popular destination for locals during the summer months. You can enjoy numerous water activities on the beaches of this lake. There are also some wonderful routes around the lake where you can go for a walk and admire the stunning scenery around you.
St. Vincent's Church
This church offers one of the highest points in Carcassonne. You can climb to the top of the tower of this church and enjoy a breathtaking view of the city and its surroundings.
Le Parc Australia
This great park is ideal for spending a relaxing day in nature. There are some great spots for different activities for kids and adults. There is also a small zoo in this park where you can see different types of animals.
Explore France
Drive your rental car beyond Carcassonne
If you want to rent a car in Carcassonne, you are making a good decision. There are so many beautiful places to visit around this city that you will simply fall in love with Occitanie. This region offers so many different things: it is a great destination for foodies, an ideal place for adventurous travellers, and, with its outstanding historical and cultural sites, there is also a lot to see for history buffs. Add to all this great museums, concerts, and festivals, and it is undoubtedly a region you must visit and explore.
The roads here are in fantastic condition, so you'll enjoy driving through this region, which is full of UNESCO World Heritage Sites. Along the way you'll pass picturesque villages where you can stop to meet the locals and buy some delicious wine or other local produce. There are also many good restaurants throughout the region, so you can stop and try different dishes from this area.
Several cities are also worth visiting. One of them is Toulouse; you will be enchanted by this colourful city and its many sights. Then there is Nîmes, also called the Roman city, where you can visit ancient remains and discover what life was like here in ancient times.
Other destinations worth visiting: the Pont du Gard, Narbonne, Gruissan and its Roman remains, the Cathar castles, the medieval town of Rocamadour, Albi and its episcopal city, Conques (one of the most beautiful villages in France), and the Pyrenees.
Enjoy the culture in Carcassonne
Where to eat, drink, and party in Carcassonne
France is a top gastronomic destination. But there are also some very special places in this culinary country. One of these special destinations is undoubtedly the Occitanie region. It is one of the most famous destinations for all foodies. And if you are travelling through this region and plan a stopover in Carcassonne, you will have the perfect opportunity to find out if it is all true. Many restaurants in the city offer all kinds of menus, from local to regional to national to those from other parts of the world.
Your best bet is to explore the local cuisine, which is something special here. Some dishes from this area: oeufs cocotte, tielle, cassoulet, farcis, daube avignonnaise, ragout d'escoubilles.
The region is also known for its high-quality wine. Wine production in this area has a very long tradition, and over the centuries winemakers here have perfected their craft. Today, you can enjoy some of the best wines in the world here, and you can follow the many wine routes through Occitanie. But remember that the drink-driving laws here are very strict, so never drink and drive. Safety first.
Best restaurants in Carcassonne:
Pizzeria Rabah Zaoui
La Table du Vatican
Restaurant La Marquiere
La Table de la Bastide
Barriere Truffes L'Atelier
Le Clos des Framboisiers
FAQ
What you need to know about renting a car
What do I need to rent a car in France?
A credit card in the name of the main driver must be presented. A security deposit may be required when renting a car, so the credit card must have sufficient funds; details about the security deposit are listed while booking the vehicle as well as on your voucher. It is important to have a valid driving license in the name of the main driver, as well as for any additional drivers where applicable. An International Driving Permit is required in addition to a national driving license if the license you or any of the additional drivers hold is not identifiable as a driving license, e.g. it is in a non-Latin alphabet (Arabic, Chinese, Cyrillic, Japanese). Requirements can also depend on the country or car rental company you are renting with. If you have found a car on our website, you can follow the supplier's rental terms link for more information about driving license requirements. You will also need identification with you, such as a passport or ID card, and you must bring your voucher when you pick up your rental car.
How old do I need to be to rent a car in France?
The minimum age to rent a car is 21, but drivers of this age can rent only the Mini and Economy car categories. Renters aged 25 and older may rent all car categories. Some suppliers also have restrictions for drivers over 75 years of age, and many car rental companies apply a young driver surcharge for drivers under 25.
Do I need car insurance in France?
Insurance is very important when renting a car, and requirements can differ between countries. When renting a car in France with Orbit Car Hire, the prices include the insurance that is mandatory in the country.
Do I need a credit card to rent a car in France?
In most cases, a credit card in the name of the main driver is required. A security deposit may be required when renting a car, so it is important to have sufficient funds on the card. When searching for a car at Orbit you can see the credit card requirements of all our suppliers.
Do I need an international driving license in France?
Drivers holding a license issued in France or elsewhere in the European Union do not need an international driving license. Drivers with a license issued outside the EU will need an international driving license as well as their standard driving license, and must carry both at all times while driving a car in France.
How do I find the cheapest car hire in France?
You can find cheap car hire by comparing prices from all major car rental companies at Orbit. It is a good idea to book in advance, as prices can increase closer to the travel period.
What is the cheapest rental car in France?
You will find the mini and economy car categories to be the cheapest. Vehicles such as the Renault Twingo and Citroen C3 often provide the best prices.
What is the best car rental company in France?
You may find excellent service provided by companies such as Europcar, Enterprise Rent a Car, Keddy by Europcar, Alamo Rent a Car, and GoldCar.
What types and makes of rental car deals can I find in France?
You will find car types such as mini, economy, compact, Full-Size, and luxury cars. Popular rental cars are Citroen C3, Renault Twingo, Peugeot 308, Toyota Yaris.
Does my rental car have unlimited mileage when I book it for France?
Most car rentals in France offer unlimited mileage in their offers.
How to hire a car in France?
You can hire a car in France with Orbit Car Hire in a few simple steps: define where you would like to pick up and return the car, enter your dates and times, and search for your cheap France car hire.
How much is it to rent a car in France?
It depends on where in France you would like to rent a car and during what period; the summer months can be more expensive than the winter. You can find prices from as little as 12 GBP per day.
Can you rent an automatic car in France?
Yes, you can find a wide range of automatic rental cars in France. The best way to find your automatic car hire in France is to filter the "Automatic" transmission when searching for your vehicle.
Can I rent a car in France?
https://www.orbitcarhire.com/en/locations/car-rental-in-france/carcassonne/
participants for their valuable input. Stakeholder consultation has the potential of:
- Improving the quality of decision-making, since those with a vested interest contribute from the initial stages;
- Identifying controversial issues and difficulties before a decision is made;
- Bringing together stakeholders with different opinions, enabling an agreement to be reached together;
- Preventing opposition at a later stage, which could slow down the decision-making process;
- Eliminating delays and reducing costs in the implementation phase;
- Giving stakeholders a better understanding of the objectives of decisions and the issues surrounding them;
- Creating a sense of ownership of decisions and measures, thus improving their acceptance.
Six Step Strategy for Consultation:
The six-step strategy is summarized as follows:
- Specify the issue(s) to be addressed
- Identify which stakeholders to involve
- Analyze the potential contribution of various stakeholders
- Set up an involvement strategy
- Consult identified stakeholders
- Evaluate and follow up
Consultation Process:
"Consultation" is used here as a generic term for the three-cycle process adopted to carry out the whole consultation. This three-cycle process comprises Focus Group Discussions, Discussion, and Consultation to finalize the public-space designs.
The following top-to-bottom approach has been adopted for feedback and integration:
- Community / users are informed about designs
- Collecting Inputs and suggestions
- Integrating inputs and solutions into designs
- Designs are presented for Finalization
- Including Final Input and Recommendations
- Finalizing Designs
First & Second Cycle of Consultation:
The cumulative consultation cycle included Focus Group Discussions and discussions with the user group; the process covered steps 01, 02, and 03 of the feedback and integration approach, where initial designs were shown and comments were recorded. The user group comprised the following:
User Group
- D.J. Science College and N.E.D. University (Old Campus)
- Arts Council/ Napa/ Culture Department/ Museum
- Consultation with Women and Youth
- Shopkeepers / Business community and Neighborhood
Third Cycle of Consultation:
Stakeholders:
1. Federal Government
2. Cultural Ministry
3. Railway Ministry
4. Traffic Engineering Bureau
5. Karachi Metropolitan Corporation (KMC)
6. Traffic Engineering Bureau
7. Traffic Police
8. Regional Transport Authority (RTA)
9. Cultural Department for Museum
10. NAPA
11. Arts Council
12. Faizee Rehmeen Gallery
13. Hindu Gymkhana, Muslim Gymkhana
14. Educational Institutions (NED, DJ Science College, SM Arts & Law College)
15. KWSB, K-Electric, SSGC, PTCL
The consultation/coordination has also been carried out with the utility agencies/stakeholders listed below:
Utility Agencies
- KMC – DMC South
- PTCL
- KWSB
- K – Electric
- SSGC
Mapping of Consultation Comments / Feedback
A brief account of the coordination with utility agencies is given in each section of Chapter 04. The detailed consultation with the user group, together with the accompanying coordination with utility agencies and its outcomes, is attached as Annexure – XI.
All consultations and coordination have been recorded as minutes of meetings so that the proceedings can be clearly understood at a later stage. Similarly, a matrix of stakeholder consultation has been prepared from the mass consultation; this matrix maps the comments received from stakeholders/users and the incorporation of their feedback into the designs to shape the final urban design. The Matrix is attached as Annexure – XII.
Analysis of Gender-Specific Measures
Given the trends of exclusion and marginalization in Karachi, and to capture the voices of different groups, it was essential to apply gender-specific measures in the stakeholder consultations for the proposed design. The consultations were intended to be diverse in nature; therefore, potential barriers were identified and minimized during the consultation process. Women from all identified backgrounds constituted at least 50% of participants in comparison to male participants across the consulted stakeholder groups. The breakdown of consultation participation is delineated in Table 8.
This percentage included female participants from the areas listed below, to maintain diversity and to record varying yet important opinions and suggestions to be incorporated into the design.
- Women from associations, local arts centers, universities, and colleges
- End users; such as neighborhood occupants
- Civically engaged women and youth in the proposed neighborhood such as community-based organizations, civil society groups, women groups
- Hard to reach groups such as elderly, disabled, socially excluded groups, minority groups.
| S. No | Name of Stakeholders' Consultation Group | Total No. of Participants | No. of Male Participants | No. of Female Participants |
|---|---|---|---|---|
| 1 | Women's Group | 100 | N/A | 100 |
| 2 | Students / Youth | 146 | 89 | 57 |
| 3 | Artistic Community | 37 | 34 | 3 |
| 4 | Govt. Officials | 21 | 20 | 1 |
| 5 | Neighborhood Residents, Business Community, DMC/Councilors | 77 | 74 | 3 |
| 6 | Cultural Department | 12 | 11 | 1 |
| | Grand Total | 393 | 228 | 165 |
Concerns as a Beneficiary:
The diversity of the participants no doubt put various difficult concerns before the Consultant's urban design team. However, the concerns raised by the participants reflected a strong sense of ownership of the project, and the process helped resolve numerous issues and address those concerns. The summarized concerns raised are listed below; the detailed comments/concerns are recorded in the Matrix attached as Annexure – XII.
1. Freedom of Movement
2. Security Concerns
3. Public Facilities
4. Daily Commute Facilitation (Bus Stops)
5. Solid Waste Disposal
6. Other Municipal Utilities
Impacts on Design:
The design is heterodox in nature compared to a neo-classical approach, because it accommodates user facilitation with a different perspective and a sense of place. The design was developed with an emphasis on gender-balanced criteria rather than on gender discrimination, and it has been refined and improved since its inception in view of the comments from different stakeholders. The detailed incorporation of comments and their impacts is delineated in the Matrix in Annexure – XII.
https://knip.gos.pk/saddar-2/
For most employers, the goal is to create a healthy and safe environment so that their employees can thrive at work. We have all heard it before: healthy employees mean productive employees. One of the biggest challenges we face in today's fast-paced business world is addressing mental health in the workplace. How do we start the conversation? What support programs do we have in place? How do we break down the stigma associated with mental health? And what can we be doing as leaders to improve mental health?
There is a raft of reasons why someone may be struggling with their mental health. Often, our employees experience symptoms of anxiety and depression due to the stressors of life and work. These stressors tend to dominate the thoughts of the individual throughout the working day and have a negative impact on both the individual and the workplace. Now, we can’t change what goes on in our employees’ personal lives, however, we can influence the workplace environment that they spend the majority of their time in.
According to Beyond Blue Support Service, over 90% of employees in Australia believe that mental health is important, yet only half believe that their workplace is actually mentally healthy.
Australians are battling hard against mental health issues, with statistics suggesting that over a million of us struggle with depression annually, and two million with anxiety. Around 1 in 5 Australians will take time off work due to symptoms of poor mental health. It makes for grim reading, but there is good news too. Beyond Blue Support Service points to research indicating that for every dollar employers invest in improving the mental health of their employees, they gain over two dollars in benefits.
Successful businesses need happy and productive employees, so here are our tips for improving employee productivity through better mental health:
1. Create an open environment. As a manager it’s important to let your team know that you are there for them and available if they need to discuss any issues that they may be dealing with. Feelings should be shared, conflicts should be discussed, and success should be celebrated. It may sound as though it will eat up a lot of your time, but just knowing your door is open can make a big difference.
2. Socialising is a key aspect of improving mental health. Encourage people to get out of the office at lunchtime. Promote inter-team or cross-department working groups to encourage people to mingle outside of their immediate team, or create a social committee. These are just a couple of small things that can be implemented without having to fight for a budget to get them up and running.
If you are looking for more structured programs, and have a bit of a budget to play with, consider arranging a weekly yoga, meditation, pilates or strength and conditioning class as a way to improve both mental and physical health while encouraging your employees to socialise with each other.
A team-building event will also promote the social aspect of improving mental health. You can choose to hold an employee-only event with activities that promote teamwork. Or, you can simply have a full-on barbecue cookout with your employees and their families.
It’s all about creating a supportive environment.
3. Often, employees feel the pressure to work through their lunch or skip breaks due to heavy workloads. Encourage everyone to take their break and step away from the job while they do so. No one should eat over a keyboard. If possible, set up a weekly or fortnightly meeting to discuss workload so that tasks can be reallocated if the load is becoming too much. These meetings are also a great opportunity to check in with your team to see how they are doing and if necessary address any concerns or issues they may be having.
4. Consider putting a formal program in place that aims to maintain the wellbeing and mental health of all employees. This is something that should be ongoing. What that looks like is up to you, however offering free counselling sessions to your employees or providing mental health awareness training is a great place to start. If you extend that offer to their immediate families, then you are doing your part to improve their home life as well, which will impact their work performance.
5. Become a Mental Health First Aid accredited organisation. Mental Health First Aid (MHFA) is an internationally recognised, evidence-based training program that teaches participants about common mental health issues, and it has been found to increase their knowledge, awareness and skills. MHFA is the help given to someone when there is concern that they may be developing a mental health problem or experiencing a mental health crisis. Having mental health first aid officers onsite ensures that you have first responders on hand should anyone in your workforce need help.
6. Allow staff to take mental health days. When someone calls in sick due to poor mental health, they often lie about the real reason for their absence. This doesn't do anyone any favours. If you know one of your employees is struggling with a mental health issue, it's best to take a proactive approach, deal with it head-on, and help them overcome those issues so they remain a productive part of the team.
A mentally healthy workplace benefits everyone: your employees, your business, and your community. When a business cares about supporting the mental health of its employees, it also helps attract top talent. The fact is, supporting the mental health of your employees doesn't just benefit them; it benefits the workplace and your bottom line. Try it, and you might be surprised.
If you believe mental health issues are a problem in your workplace, Bodycare can offer you a solution. Our Workplace Mental Health Training and Seminars can be customised to address a specific mental health issue that is relevant to your workforce or we can offer broad programs to raise awareness about mental illness. We also offer programs that are focused on providing individuals with the tools to thrive and flourish at work. Whatever your needs, using our team of experts in this field, we can create a program to ensure you are being proactive about the mental health of your workforce.
Contact our team to discuss the range of mental health training available to improve your employee performance at work.
https://www.bodycare.com.au/how-to-improve-employee-productivity-through-better-mental-health/
HGSE in the Media: June 2018
Please note: While many online periodicals keep their stories freely available indefinitely, stories on other sites expire after a specified period of time, after which they can be retrieved by locating the story through the website’s archives, and sometimes paying a fee to do so. Where that is the periodical's policy, we have provided a link to the periodical's main page and the citation for the article so that interested readers may find the original article.
Straight Up Conversation: New Harvard Ed School Dean Bridget Terry Long (Education Next)
Dean Long speaks to Rick Hess, Ed.M.'90.
Guest Commentary: To Create Safer Schools, Begin With Preschool (WBUR)
Nonie Lesaux writes on one value of high-quality early childhood education.
Rejuvenating Massachusetts Education Reform (CommonWealth)
Tom Kane writes on the education reform effort in Massachusetts.
After Family Separation: How to Promote Healing for Migrant Children? (Christian Science Monitor)
The Health Impact of Separating Migrant Children from Parents (BBC News)
Discussion: What Trauma Are Separated Migrant Children Now Dealing With? (WUNC - North Carolina Public Radio)
How Trauma and Stress Affect a Child's Brain Development (Hechinger Report)
“The Only Buffer You Have Is a Parent. Take that Away, and Everything Falls Apart." (Quartz)
Jack Shonkoff speaks on how the trauma of separation can affect children.
What Detention And Separation Mean For Kids' Mental Health (NPR)
Charles Nelson comments on the potential effects of the U.S. policy separating immigrant children from their parents.
Boston Public Schools Superintendent Tommy Chang Stepping Down (WBUR)
Paul Reville comments.
Highlighting Rural America on Education Map (Harvard Gazette)
Three recent graduates put an emphasis on rural ed.
The Whole Game in Microcosm: A Powerful Approach to Teacher Learning (Education Week)
Jal Mehta writes the latest in his Learning Deeply series.
Special Education Costs Force Some Districts To Cut Elsewhere (WGBH)
Tom Hehir comments on the cost of special education in Massachusetts.
The Effects of Parental Separation on Children (Wall Street Journal)
How the Toxic Stress of Family Separation Can Harm a Child (PBS NewsHour)
Jack Shonkoff comments on how the trauma of separation affects children.
What Separation from Parents Does to Children: ‘The Effect Is Catastrophic’ (Washington Post)
Deep Dive into Immigration: Pending Legislation on the Hill, Psychological Impact of Family Separation and How We Got Here (Air Talk, KPCC Public Radio)
The Trump Administration Is Committing Violence Against Children (Washington Post)
Charles Nelson comments on the potential effects of the U.S. policy separating immigrant children from their parents at the border.
Advocates Debate Charter Schools' Performance, Diversity With Lawmakers (Education Week)
House Lawmakers Agree on Need for Accountability at Occasionally Tense Charter School Hearing (The 74)
Martin West spoke at a House education committee hearing last Wednesday.
New Documents In Harvard Lawsuit Provide Peek At Admissions Process (WBUR)
Natasha Warikoo discusses the new details of the case.
A 'Plan-Do-Study-Act' Approach to Improving Freshman Year (Education Week)
Miriam Greenberg, the director of the Strategic Data Project, comments on the importance of data in school and district improvement.
Response: 'Every Teacher' has to Help ELLs Meet Common Core Standards (Education Week)
Nonie Lesaux responds to: How can we help English Language Learners meet the Common Core Standards?
Response: Too Many Professional Development 'Horror Stories' (Education Week)
Paola Uccelli and Nonie Lesaux respond to: What are the biggest problems with common teacher professional development practices and how can they be fixed?
Want to Kill Tenure? Be Careful What You Wish For (Chronicle for Higher Education)
Richard Chait comments.
Local Teachers Get an Education in Addressing Hard Questions (Harvard Gazette)
Ed School students Soraya Ramos and Cassandra St. Vil recently offered a program on race and equity in education.
When Learners Get "Stuck," Try Thinking Like An Artist (Education Week)
A post by Jacob Watson, Ed.M.'18, a theater artist and educator from Chicago and a recent graduate of the Arts in Education Program.
The Risks and Rewards of Getting Rid of Grade Levels (EdSurge)
A post from Bing Wang, Ed.M.'13, a language teacher at Latin School of Chicago.
Talking with Children Matters: Defending the 30 Million Word Gap (Brookings)
Meredith Rowe and co-writers offer a rebuttal of new research that disputes the 30 million word gap.
Breaking Down the Myths That Lead Young Students to Miss School (Education Week)
Ph.D. candidate Carly Robinson, Ed.M.'17, comments on her new research on early absenteeism.
Valedictory Mixtape (Harvard Magazine)
Coverage of Dean Ryan's Commencement address.
An Affinity for Diversity: Discussing Our Response to the Climate Assessment (HW Chronicle)
Domonic Rollins comments on creating an inclusive school community.
https://www.gse.harvard.edu/news/18/07/hgse-media-june-2018
1. Field of the Invention
The invention disclosed herein relates to elastomer systems and, in particular, to elastomer systems used as seals in a downhole environment.
2. Description of the Related Art
Boreholes are drilled deep into the earth for many applications such as carbon sequestration, geothermal production, and hydrocarbon exploration and production. Many different types of tools and instruments may be disposed in the boreholes to perform various tasks. Typically, very high pressures are encountered by the tools and instruments when they are disposed deep into the earth.
Seals are used to isolate internal components from the high pressures external to the tools and instruments. It is important for the seals to function properly in the downhole environment because the internal components can be damaged or fail if exposed to the high pressure. It can be very expensive in time and equipment if the internal components fail because the failed components will have to be extracted from the borehole, replaced, and then sent down the borehole. It would be well received in the drilling industry if the sealing art could be improved.
Share your talents with the community
Our Guiding Principles
To share music at no cost to the community, giving everyone the opportunity to enjoy the musical arts.
To provide a forum for musicians of all ages with intermediate and advanced skill levels to hone their musical abilities and to share their talents with others.
A Brief History
The orchestra was founded in 2010 and given a home in Orange City by the Dickinson Memorial Library Association.
We give four performances each year at no cost to the community: a Spring performance of popular arrangements, a Summer performance in recognition of our American heritage, a classical Fall performance, and a Winter Christmas performance.
Debates on what constitutes a good knife, what’s the best knife steel, and how to best sharpen a knife can go on forever. I can’t give much insight into Rockwell hardness or knife design histories, but I do know what I like in filleting blades. What I prefer may not be what other anglers desire, but here are my thoughts anyway.
I believe there are basically two schools of thought about knives, especially fillet knives used around the water. One is that since the knife will be abused, subject to rusting, and probably dropped overboard anyway, it's smart to buy an inexpensive, soft-steel knife that's easy to sharpen and a snap to replace. The other school of thought is to buy a top-quality, expensive knife with hard steel that will hold a corrosion-resistant edge a long time, then take care of it forever.
Both schools of thought have merit, depending on how the angler intends to use the knife and how well he believes he can care for it. Personally, I want a soft-steel knife that I can easily put an edge on with a convenient, hand-held sharpening tool while rocking in my boat in bumpy water. I just never seem to have the time to sit at home (on the hearth, in front of the fire, no doubt) and strop the knife edge sharper than a snook's gill cover.
Further, I want a knife that’s impervious to corrosion, so high-grade stainless steel is a must. This brings up the point of knife sheaths, because cloth or leather ones absorb moisture and salt, and a knife stored for long periods in even a dry sheath impregnated with salt can lead to a rusted blade. Thus, hard-plastic sheaths are most desirable for knives used in a marine environment.
Some anglers spray their knife blades with a bit of oil after using them and before storing them in sheaths. It’s a good idea, but a better way is to use cooking oil or even Crisco, as it’s considerably more palatable than WD-40 next time the blade is stroking fish flesh.
I like a thin, soft-blade fillet knife that is easy to use when skinning even thin-skinned species such as catfish, bluefish, or mackerel. For skinning, it's important that the blade bends a bit to provide the right angle to zip flesh from hide, but the blade must have just the right amount of "bend." The blade shouldn't be as soft as a coat hanger, but the stiffness of a deer-skinning knife is no good either.
What constitutes a good degree of “blade bend” is purely subjective for a fisherman, and it’s something that only can be learned through experience.
Long, soft knife blades are difficult to use in close-quarters cutting work, such as boning, making special fillet cuts along "Y-bones," and removing fish cheeks. For this reason, I usually have two fillet knives along on most trips. One blade is 6 inches long, very thin, and not very soft; this knife is used for tight-quarters boning and for smaller fish such as bluegills, mangrove snapper, and trout. The second knife has a 9-inch, soft blade. The long blade makes it easy to remove deep-body fish fillets from hefty salmon, striped bass, oversize redfish, wahoo, and dolphin. Skinning deep fillets is easy with the long-blade knife, too.
One interesting innovation in fillet knife design that addresses my two-knife system is the "adjustable" fillet knife from companies such as Kershaw and Cutco. A special lock mechanism allows the user to adjust the blade from 5 to 9 inches, so the knife serves both large- and small-fish boning and skinning functions, and the shorter blade is stiffer than when the blade is fully extended. The knife is also easy to store in a tackle box or bag at its shorter length.
There are a number of excellent folding fillet knives on the market today, which allow for easy storage and no need for sheaths. The Schrade “Mighty Angler” is one of the best I’ve seen, as its blade is soft, easy-to-sharpen, and the folded knife stores conveniently in a durable sheath that stows in a tackle box or on an angler’s waist belt.
One important aspect to consider with folding fillet knives is ease of cleaning. Gaps, holes, and hinges all collect blood and flesh, so it’s imperative that the knife can be opened up or disassembled for easy cleaning. Some folding and fixed-blade fillet knives allow blade removal so they can be thoroughly cleaned and dried after use, which also helps prevent tarnishing of parts.
A very pointed, sharp tip is another important part of a good fillet knife since it allows for quick, safe penetration of the blade into a fish for dressing. Also, a well-designed, easy-to-grasp, non-slip handle is very important because a slimy hand slipping at the wrong instant can result in injury.
|
https://www.alloutdoor.com/2015/08/02/choosing-good-filet-knife/
|
In an environment focused on research and overhead dollars, it is easy to lose sight of the main purpose of a university, which is to educate students—both as scholars within the disciplines and as citizens within a larger global community. The latter half of that mission has prompted Michigan State University (MSU) to bring attention to the importance of liberal education at Research I institutions. Although their structures and funding sources differ, Research I institutions and small liberal arts colleges share the same goal of helping students master the knowledge and skills that will enable them to become informed citizens who are able to contribute effectively to our democratic society. But how can this transformation be achieved, and what metrics can we use to define success?
To answer these questions, institutions must first identify what students are expected to gain from taking coursework and participating in academic life. These student learning outcomes may represent changes in how students think, feel, perceive, or even act as they learn and undergo various experiences during their college years. Institutions vary in how clearly they articulate expected student learning outcomes, ranging from completely implicit, and therefore unarticulated, expectations to completely explicit lists of the ways in which students should show improvement by the end of their undergraduate tenure. Once learning outcomes have been established, however, the institution must evaluate the extent to which students are making progress in achieving them.
Backward design
According to Keeling and Hersh (2011), the next step after reaching institutional consensus on student learning goals is to link those goals to the general education curriculum. But this sequencing obscures one of the trickiest issues in higher education: how to assess student learning. One way of evaluating learning outcomes at the institutional level is by applying the backward design method, which is more commonly used in instructional development (Wiggins and McTighe 1998). The first step in backward design is to identify desired results—in this case, student learning outcomes. The second step is to determine what constitutes acceptable evidence that the learning outcomes have been achieved by students. This step is accomplished by using newly created or preexisting assessment methods that align directly with the student learning outcomes. For example, if the desired outcome is the ability to evaluate and justify whether an article is scientific or not, then, as part of the assessment, the student should be asked to evaluate and justify whether or not an article is scientific. This example may seem obvious, but it illustrates the optimal level of alignment between an assessment method and the related outcome.
The third step in the backward design process is to ensure proper alignment between instruction and curricula, a step that should result in improved student performance on the assessments. To continue with the example used above, this may include an activity where the students practice examining various types of articles with the goal of discerning the characteristics of scientific articles as compared to nonscientific articles. Students should also be given opportunities to argue with others and be required to provide supportive reasoning to justify their decisions. At the end of these three steps, an evaluation of student learning should commence. This evaluation should then be followed by the refinement of any of the steps, as needed to improve alignment and student learning outcomes for future iterations. Just as this process can be followed in an individual class, so too can it be applied at the institutional level.
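The bookkeeping behind these three steps can be sketched in code. The outcome names and data structures below are invented for illustration, not an actual MSU system: the idea is simply that every desired outcome should map to at least one aligned assessment and one instructional activity, and gaps should be surfaced for the refinement step.

```python
# Hypothetical backward-design alignment check. Each learning outcome
# should link to at least one assessment (step 2) and one instructional
# activity (step 3); anything missing is a gap to address on the next
# iteration.

outcomes = {
    "evaluate_scientific_articles": {
        "assessments": ["article-evaluation essay"],
        "activities": [
            "compare scientific vs. nonscientific articles",
            "peer argumentation with supporting reasons",
        ],
    },
    "quantitative_literacy": {
        "assessments": [],  # gap: no aligned evidence yet
        "activities": ["data-interpretation lab"],
    },
}

def alignment_gaps(outcomes):
    """Return outcomes missing either assessment evidence or instruction."""
    gaps = {}
    for name, links in outcomes.items():
        missing = [k for k in ("assessments", "activities") if not links[k]]
        if missing:
            gaps[name] = missing
    return gaps

print(alignment_gaps(outcomes))
# {'quantitative_literacy': ['assessments']}
```

The same pass can run per course or across a curriculum, which is the sense in which the process scales from the classroom to the institution.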
The alignment of institutional goals and curricula should also involve the alignment of course-level goals and the assessment of student learning, both within individual classrooms and across curricula. The gap between student learning goals and curricula can be bridged using the rubrics that the Association of American Colleges and Universities (AAC&U) developed through its Valid Assessment of Learning in Undergraduate Education (VALUE) project (see www.aacu.org/value/rubrics). Institution-wide learning outcomes, as well as those at the level of the individual course, can be readily aligned with some or all of the outcomes that the VALUE rubrics were designed to assess (Rhodes 2010). These are consensus liberal learning outcomes that have emerged at institutions of all sizes and types nationwide. Some are associated with fundamental skills, such as reading, writing, and mathematics (or quantitative literacy). Others are more nuanced and may not be targeted across all levels of education, and yet graduates entering the workforce are generally expected to have developed them (e.g., creative thinking, ethical reasoning, problem solving, and teamwork). The full set of student learning outcomes should be addressed within and across curricula, rather than isolated within any single course, discipline, or program.
Liberal learning at MSU
Michigan State University (MSU) recently adopted a set of five liberal learning goals: analytical thinking, cultural understanding, effective citizenship, effective communication, and integrated reasoning. Over the past year, MSU faculty and staff have developed rubrics to assess student progress on each of these outcomes. The rubrics are currently being evaluated by focus groups composed of faculty and staff from all colleges and instructional resource areas on campus in order to determine how they can be operationalized within and across units. The five liberal learning goals have also been aligned with a previously adopted set of global competencies.
In its approach to general education, MSU is unique among Research I institutions. In 1992, MSU created centers for integrative studies in three areas: arts and humanities, general science, and social science. Because these three centers share primary responsibility for general education and undergraduate liberal learning, and therefore face close scrutiny by institutional accreditors, the associate provost for undergraduate education has requested that they pay particular attention to the assessment of the university’s new liberal learning goals. Accordingly, the goals have been embedded into the syllabi, course materials, and curricula of all three centers. The centers have also begun to evaluate the effectiveness of their curricula and of general education, more broadly. The assessments implemented as part of these efforts are aligned with the particular goals of each center and with the university’s liberal learning goals, with implicit consideration of the global competencies.
Over the past two years, due to an influx of resources and expertise from the College of Natural Science, the Center for Integrative Studies in General Science has emerged as a trailblazer with regard to the large-scale programmatic assessment of the liberal learning goals. An important element of the center’s success has been the collaborative work of an affiliated faculty learning community. While the center has several of its own full-time faculty and staff members, most of the faculty and graduate assistants who teach the courses offered by the center come from departments in either the College of Natural Science or the College of Agriculture and Natural Resources. The lecture and laboratory courses are led by graduate students, postdoctoral scholars, and faculty—including non-tenure-track, tenure-track, and tenured members.
The center offers both online and campus-based courses, as well as international study abroad and United States–based study away experiences. In addition to location, several other factors can account for wide variation among the sections of a similar course. These include the varying integration of sciences, the level of student-centeredness represented by an individual instructor’s teaching approach, the length of the term, and the instructor’s level of experience. This variety of experiences and diversity of expertise has proven to be a boon for discussions of student learning assessment.
Because the center’s instructors are not required to meet regularly, there is a spatial and temporal disconnect between instructors of courses offered for non-science majors. This challenge has been addressed through the creation of a faculty learning community, a professional development venue through which like-minded faculty convene to discuss common interests (Cox and Richlin 2004). Generally, such learning communities are led by one or more faculty facilitators, but all members have equal say in choosing the topics to be discussed and the training to be pursued. Cosponsored by the College of Natural Science and the Office of Faculty and Organizational Development, the center’s faculty learning community convenes monthly throughout the academic year to discuss programmatic evaluation efforts, goals related to the desired student outcomes for general education, challenges and solutions to teaching and learning, and new initiatives to improve teaching and learning within the courses offered by the center. The meetings have helped provide the framework and common ground needed for a diverse group of faculty to engage as active learners and participants in shared dialogue.
One center’s road to assessment and community
The Center for Integrative Studies in General Science’s faculty learning community was primed to respond when AAC&U contacted MSU in the spring of 2012 and invited its scientists to help evaluate the rubric for global learning that was then being developed as part of the association’s VALUE project. That spring, participants reviewed the VALUE rubric for global learning individually, using an online survey, and as a group, during a face-to-face meeting of the faculty learning community. Prior to the meeting, the center’s assessment team conducted a thematic content analysis of participant responses to the online survey. This analysis was then used to focus the group discussion. During the meeting, the common themes identified by the content analysis proved useful for stimulating further discussion and generating questions. The meeting also provided an opportunity for faculty to discuss the types of student assignments they had used to evaluate the VALUE rubric.
As a professional development opportunity, the process of evaluating the VALUE rubric provided training in how to use rubrics and, at least potentially, how to incorporate them into current teaching and assessment practices. Faculty participants focused on providing collaborative, iterative feedback for assessment and improvement, including active discussion of the rubric’s strengths and weaknesses, and explored ways to align individual course goals with the rubric—and ways to communicate these goals to students. They also shared effective, innovative instructional activities directly related to the goals of the rubric for use across courses.
In addition to providing AAC&U with feedback that was used to inform the subsequent revision of the VALUE rubric for global learning, the evaluation process provided an opportunity for the center’s faculty to share ideas and resources across their own community of practice—and, therefore, across disciplinary boundaries. The group discussed the center’s next steps in adopting the global learning rubric, or other rubrics, for use in classes. Before participating in the rubric review process, many of the center’s faculty were unfamiliar with rubrics. These faculty in particular gained valuable training in the creation and use of rubrics for their own courses, and all participants came to a better understanding of the use of rubrics as a way to measure and improve instructional efficacy. Engagement as a community, rather than as individual faculty members, ultimately resulted in deeper understanding.
Reflection on rubrics
By engaging with the VALUE rubric, members of the faculty learning community were able to consider the metacognitive aspects of their own teaching, including consideration of where instruction fits into the broader context of general education science training at MSU. Specifically, they addressed whether or not the student learning goals included as part of the VALUE rubric for global learning aligned with their courses and with the center’s curriculum. Faculty were able to recognize both unarticulated alignment with broad institutional goals and disconnects between practices and expectations. Evaluating the VALUE rubric was particularly useful for faculty teaching study away or study abroad courses, since these courses inherently seek to expand students’ global perspectives. Those faculty were particularly interested to see how their students would perform on assessments using the global learning rubric and how they could align their course goals more closely with those included as part of the rubric. When participants identified aspects of the VALUE rubric that aligned with goals of their courses, they were able to discuss curricular interventions that they currently use and that could be implemented across courses. The group was also able to generate instructional innovations that would advance the shared goals.
One assignment developed by a member of the faculty learning community asks each student to identify and describe an environmental problem within their region and then to search for a comparable issue in a different country. The students are then asked to compare and contrast the likely efficacy of potential solutions in both locations. Following their completion of this assignment, the students are given the VALUE rubric for global learning and asked to highlight the learning goals of the assignment and to identify their own individual levels of competency within each of the goals. This assignment addresses issues related to global citizenship and forces students to think about how they view the world and their role in it. The students’ self-assessments commonly overestimate their scores on the VALUE rubric, as compared to the instructors’ evaluations of their written responses. But such misalignments can be easily identified and then remediated through a feedback loop.
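That feedback loop amounts to a simple per-criterion comparison. The sketch below uses invented criterion names and scores (VALUE rubrics score performance on a 4-level scale, from benchmark to capstone); it is an illustration of the idea, not the center's actual tooling.

```python
# Hypothetical comparison of student self-assessments against instructor
# scores on a 4-level VALUE-style rubric. Criteria where the self-score
# exceeds the instructor score are flagged for follow-up feedback.

self_scores = {"perspective_taking": 4, "cultural_diversity": 3, "global_systems": 3}
instructor_scores = {"perspective_taking": 2, "cultural_diversity": 3, "global_systems": 2}

def misalignments(self_scores, instructor_scores):
    """Criteria where the student's self-score exceeds the instructor's,
    mapped to the size of the overestimate."""
    return {
        c: self_scores[c] - instructor_scores[c]
        for c in self_scores
        if self_scores[c] > instructor_scores[c]
    }

print(misalignments(self_scores, instructor_scores))
# {'perspective_taking': 2, 'global_systems': 1}
```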
In a related development, the faculty learning community asked technology and assessment specialists to develop online versions of the rubric using the MSU-specific course management software. This online rubric will be made available to all center faculty who wish to use it in their courses. Results from these online assessments could be used in conjunction with programmatic survey data and course-specific student data to evaluate learning outcomes more holistically, either at the course, center, or institutional level.
A model for success
The integration of institution-specific goals for student learning with those specified by the VALUE rubric for global learning won broad support at MSU. The importance of evaluating student learning has been communicated across all levels of the institution, and assessment is fundamentally aligned with a core set of learning outcomes that have broad support across the university. The development of MSU’s liberal learning goals and global competencies, along with their respective rubrics, and the adaptation of the VALUE rubrics have set the stage for the institution-wide evaluation of curricula and student learning outcomes.
The success of the effort is due in large part to buy-in from the faculty who volunteered to participate in the faculty learning community as well as the financial and other support, such as letters of recognition and participation, provided by the dean of the College of Natural Science and the associate provost of undergraduate education. These administrators have also provided resources to support the assessment of student learning outcomes campus-wide, which has led subsequently to further growth in the community of practice. Through its intentional efforts to maintain institutional focus on the goal of providing undergraduate students with a liberal education, MSU can serve as a model for other Research I institutions.
References
Cox, M. D., and L. Richlin, eds. 2004. Building Faculty Learning Communities. New Directions for Teaching and Learning, no. 97. San Francisco: Jossey-Bass.
Keeling, R. P., and R. H. Hersh. 2011. We’re Losing Our Minds: Rethinking American Higher Education. New York: Palgrave Macmillan.
Rhodes, T. L., ed. 2010. Assessing Outcomes and Improving Achievement: Tips and Tools for Using Rubrics. Washington, DC: Association of American Colleges and Universities.
Wiggins, G. P., and J. McTighe. 1998. Understanding by Design. Alexandria, VA: Association for Supervision and Curriculum Development.
Sarah Jardeleza is assistant professor and associate director of educational research in the Center for Integrative Studies in General Science; April Cognato is assistant professor of zoology; Michael Gottfried is associate professor of geological sciences; Ryan Kimbirauskas is academic specialist in the Center for Integrative Studies in General Science; Julie Libarkin is associate professor of geological sciences and director of educational research in the Center for Integrative Studies in General Science; Rachel Olson is graduate assistant in the Department of Entomology; Gabriel Ording is associate professor of entomology and director of the Center for Integrative Studies in General Science; Jennifer Owen is assistant professor of fisheries and wildlife and large animal clinical sciences; Pamela Rasmussen is assistant professor of zoology; Jon Stoltzfus is assistant professor of biochemistry and molecular biology; and Stephen Thomas is assistant professor of zoology and associate director of the Center for Integrative Studies in General Science—all at Michigan State University.
To respond to this article, e-mail [email protected], with the author’s name on the subject line.
|
https://www.aacu.org/publications-research/periodicals/value-community-building-one-centers-story-how-value-rubrics
|
Without mathematics, there would be no architecture, no commerce, no time and no chemistry. A Professor of Mathematics at the University of Oxford explores the 30,000-year history of maths and offers clear, accessible explanations of the development of the key mathematical principles that underpin the science, technology and culture of our modern world.
Our preview videos are intended for broadcasters looking to licence content from the Open University.
The Language of the Universe
Timekeeping motivated the world’s oldest mathematical devices. In ancient cultures, the need to predict the phases of the moon made a lunar calendar especially useful for the hunters of antiquity.
Anthropologists have discovered bones up to 37,000 years old, with 29 notches cut into them to represent the days of the month.
The first fully developed mathematical systems emerged in Babylon, Egypt and Greece. Babylonian maths used a base-60 (sexagesimal) system, which gives us 60 seconds in a minute and 60 minutes in an hour. The mathematicians of Babylon also demonstrate that they must have been aware of Pythagoras’s theorem – at least 1,000 years before Pythagoras was born.
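The base-60 idea is easy to demonstrate: the same positional decomposition the Babylonians used survives in how we break a count of seconds into hours, minutes and seconds. A minimal sketch:

```python
def to_sexagesimal(n):
    """Decompose a non-negative integer into base-60 digits,
    most significant digit first (e.g. hours, minutes, seconds)."""
    digits = []
    while True:
        n, d = divmod(n, 60)
        digits.append(d)
        if n == 0:
            break
    return digits[::-1]

# 4,000 seconds = 1 hour, 6 minutes, 40 seconds
print(to_sexagesimal(4000))  # [1, 6, 40]
```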
The Genius of the East
In China, in around 200 BC, the Han Dynasty encouraged scholars to compile a book known as The Nine Chapters, which attempted to recover and preserve forever the lost teachings of the Chinese mathematicians of antiquity. The text is dedicated to solving practical, real-world problems; how to divide land or goods and how to manage building works.
India was the first civilisation to develop a number system with a special symbol to represent zero – one of the great landmarks in the development of mathematics.
The Frontiers of Space
Mathematical problems became spectator sports in the 16th century, with generous prizes given to the winners. In such a competitive atmosphere, it’s not surprising that mathematicians would jealously guard their knowledge – and, in some cases, behave very badly. Girolamo Cardano appeared to solve a problem known as the cubic, but he had stolen the solution from a rival mathematician, Niccolò Tartaglia.
In England, Isaac Newton developed calculus, which could account for the orbits of the planets, but he spent much of the rest of his life embroiled in a dispute with a German mathematician, Gottfried Leibniz, over who had developed it first.
To Infinity and Beyond
The computer revolutionised mathematics by enabling lightning-speed calculations and helping mathematicians to "see" chaos, but proof without understanding has continued to unsettle mathematicians. Many argue that the pleasure of mathematics is to be found in the understanding of the problem, not simply a correct solution.
In 1900, German mathematician David Hilbert identified the most important unsolved mysteries confronting mathematics, laying down the roadmap for maths in the 20th century. 15 of the 23 problems have been fully or partly resolved and work continues on the rest.
The Open University has appointed DCD rights to distribute our television catalogue.
Please contact DCD Rights for further information
|
http://www.open.ac.uk/about/broadcast-media-sales/tv-sales/history-culture/story-maths
|
Sometimes the after-effects of a biopsy procedure are quite minimal, so not all of the instructions may apply. Common sense will often dictate what you should do. However, when in doubt, follow these guidelines or call our office at (713) 665-9200 for clarification.
First Hour: Depending on where the incision has been made, keep gauze in place to control any bleeding that may occur. The gauze pack may be changed as needed.
Exercise Care: Do not disturb the procedure area today. Do not rinse vigorously or probe the area with any objects. Please do not smoke for at least 48 hours since this may slow down the healing process.
Oozing: Intermittent bleeding or oozing is normal. Bleeding may be controlled by placing fresh gauze at the site for 30–45 minutes at a time. The ooze may look more impressive than it really is because it is mixed with saliva. Do not be alarmed. This is normal.
Swelling: Swelling is often associated with oral procedures. It can be minimized by using a cold pack, ice bag, or a bag of frozen peas wrapped in a towel and applied firmly, if the procedure area is on a cheek or another area where the compress can be placed directly over the site. You can apply the compress on and off for the first 24 hours.
Pain: Unfortunately, oral procedures can be accompanied by some degree of discomfort. Depending on the extent of the procedure, you may be given a prescription for pain medication or be instructed to use over-the-counter medications to manage the discomfort. It is best to take the pain medication before the anesthetic has worn off. Be sure to bring any medication that has been prescribed with you on the day of the procedure.
Nausea: Nausea is not uncommon after a procedure. Sometimes the medications may be the cause. This can be reduced by preceding the pain medication with a small amount of food and taking the medication with a large volume of water. Classic Coca-Cola® may help with any nausea that may be experienced.
Diet: Eat any nourishing food that can be taken with comfort. Avoid extremely hot foods. You may need to stick to soft foods for the first few days after the procedure.
Sutures: These are placed at the site of the biopsy and will dissolve on their own within 7–10 days.
Mouth Rinses: Keeping your mouth clean after the procedure is essential. Use the prescribed Peridex™ rinse at least 3 times daily. Gently swish a capful of the solution around your mouth for approximately 1 minute and spit it out.
Brushing: Begin your normal oral hygiene routine as soon as possible after the procedure. Soreness and swelling may not permit vigorous brushing, but please make every effort to clean your teeth within the bounds of comfort.
Healing: Normal healing after a biopsy should proceed as follows: the first couple of days are generally the most uncomfortable, and there is usually some swelling. Peak swelling occurs at day 3–4 after surgery; do not be worried, as this is normal. By the third day, you should be more comfortable and, although possibly still swollen, can usually begin a more substantial diet. If you do not see continued improvement, please call our office.
Note: Tongue biopsies can be very painful for 1–2 weeks after surgery. There is usually a good amount of swelling as well. This swelling usually resolves over the next 3–4 days.
It is our desire that your recovery be as smooth and pleasant as possible. Following these instructions will assist you, but if you have questions about your progress, please call our office. A 24-hour answering service is available to contact the doctor on call after hours. Calling (713) 665-9200 during office hours will afford a faster response to your question or concern.
|
https://www.bellaireoralsurgery.com/instructions/post-operative-instructions-biopsy/
|
According to information disclosed by the president of the Ghana Olympic Committee, Mr. Ben Nunoo Mensah, Jamaican athlete Asafa Powell and his family will visit Ghana this December.
Leveraging his relationship with Powell’s wife, Alyshia Powell, who happens to be his niece, Nunoo said that the record-breaking Jamaican sprint legend would visit the Motherland.
Powell, who is known for breaking the ten-second barrier 97 times—more times than anyone else—specializes in the 100-meter race.
The Jamaican sprinter has set several records; including setting the 100-meter world record twice, between June 2005 and May 2008, with times of 9.77 and 9.74 seconds.
Powell has consistently broken the 10-second barrier in competition, and his personal best of 9.72 seconds ranks fourth among men’s 100-meter runners of all time.
He presently owns the world record for the 100-yard dash, which he achieved on May 27, 2010, in Ostrava, Czech Republic, with a time of 9.09 seconds. He won gold in the 4 x 100-meter relay at the 2016 Olympic Games in Rio.
Furthermore, he will hold a series of events with young athletes and visit major tourist locations and educational institutions while in Ghana.
Powell’s Achievements
Powell will also meet with high-ranking government officials and investigate how to strengthen links between Ghana and Jamaica in sports and related industries.
Mr. Ben Nunoo Mensah stated that Ghana and Jamaica share many similarities, and that supermodel Alysha Miller Powell will be a guest of various women’s organizations such as the Women’s Commission of the GOC, WOSPAG, and WISA.
Powell competed in the 100 meters at the Olympics in 2004, 2008, and 2012, placing fifth in 2004 and eighth in 2012 despite injuring his groin during the race. He won a bronze and a silver medal in the 100 m and 4 x 100 m relay events at the 2007 Osaka World Championships, and he won two golds and one silver medal at the Commonwealth Games.
He won bronze in the 100 meters and gold in the relay at the 2009 World Championships. Powell has also won the IAAF World Athletics Final five times and, as noted above, formerly held the 100-meter world record.
|
https://iloveafrica.com/world-sprint-legend-asafa-powell-and-family-to-visit-ghana/
|
The heatwave of 2003 – which peaked at 38.5°C in the UK – will represent a normal summer by the 2040s, with heat-related deaths more than tripling, according to the Committee on Climate Change (CCC) report published last month.
Currently no policies exist to ensure homes, schools and offices remain tolerable in high heat, and critical facilities – such as hospitals and care homes – are particularly at risk.
The paper Design and delivery of robust hospital environments in a changing climate – funded by the UK Engineering and Physical Sciences Research Council (EPSRC) – investigated the incidence of summer overheating in five hospital building types in the National Health Service’s (NHS’s) acute estate.1 Here, we focus on the 1983 Rosie maternity hospital, part of Addenbrooke’s in Cambridge, during the summers of 2010 and 2011, when it experienced elevated internal temperatures – increasingly an issue for the NHS.2
From analysis of its real and simulated performance – now and through the coming century – alternative corrective interventions were devised, with a focus on summer overheating. The full paper reports the relative costs and ‘value for money’ of four options.
The Rosie is a courtyard type of building: steel and concrete-framed, three storeys, with double-loaded racetrack corridors and brick-clad. Research identified 117 medium-rise courtyard examples in England, covering approximately 3 million m2.
In 2010, the – since refurbished – building was mechanically ventilated throughout the year, with a set point of 24°C. Air, which was maintained below 22°C all year, was supplied through corridors, and exhausted in wards, WCs and dirty utility rooms. Warm air heating was supplemented by perimeter hot water (HW) heating elements without local thermostatic control and with no zone control. The building provided chilled water to an air handling unit (AHU), but only delivered comfort cooling to intensive care units (ITUs) and operating theatres. Orientated east-west, the building’s main entrance, and many of the multi-bed wards – occupying the second and third levels – face south.
The researchers’ data loggers were placed on the second floor in June 2010. Figure 1 records factors that were observed to contribute to overheating, including: uninsulated steam pipes; internal heat gains from lighting and equipment; liberal glazing, largely fixed with a limited row of lights opening to only 100mm, exposed to direct solar radiation. Thermal comfort was marginal, even on mild days, in south- and south-westerly facing wards.
Carter Bronze Medal Award winner
This article is an edited version of the paper A medium-rise 1970s maternity hospital in the east of England: Resilience and adaptation to climate change, which appeared in CIBSE’s technical journal BSERT. The paper won the 2016 Carter Bronze Medal for the most highly rated paper relating to application and development. The paper was written by Alan Short, Giridharan Renganathan and Kevin Lomas (University of Cambridge/University of Kent/University of Loughborough). CIBSE members can access BSERT free from the Knowledge Portal.
Project definitions of thermal comfort included the simple static guidelines and criteria described by CIBSE and the adaptive thermal comfort thresholds in BS EN15251. The appropriateness of these approaches and their relative credibility is discussed by Lomas and Giridharan, and considered in the recent redrafting of the health technical memorandum (HTM) 07-02.3,4
The existing building enjoyed relatively low energy and low carbon performance against the Department of Health (DH) guidance benchmarks – but at the cost of comfort, unable to shed heat in relatively mild external conditions.
Hobo U2 temperature loggers monitored 26 spaces on three levels at hourly intervals from July 2010 to October 2011. The second level emerges as the hottest floor, where logger AR2-SB202 recorded the hottest space at 30.7°C. Maximum ambient temperatures during the summers of 2010 and 2011 were 29.6°C and 31.2°C respectively.
The year 2010 was relatively cool. Night temperatures were consistently uncomfortable – 23.4°C to 26.1°C – and one bedroom recorded 1,992 hours above 25°C, 888 of which occurred at night. Simple threshold criteria miss this. In the 2030 Design Summer Year (DSY), the existing building has a mean night temperature in excess of 25°C. The mechanical ventilation rate is too low – in most bedrooms it does not reach a minimum value of 10 l/s/p.
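The overheating figures above are simple threshold counts over hourly logger data. A minimal sketch of that calculation follows (the 25°C threshold and the day/night split come from the text; the synthetic data and the 22:00 to 06:00 definition of "night" are illustrative assumptions, not the paper's):

```python
from datetime import datetime, timedelta

# Comfort threshold taken from the text; "night" hours are an assumption.
THRESHOLD_C = 25.0

def overheating_hours(readings):
    """Count hours above the threshold from hourly (timestamp, °C) readings,
    returning (total exceedance hours, night-time exceedance hours)."""
    total = night = 0
    for ts, temp in readings:
        if temp > THRESHOLD_C:
            total += 1
            # Night defined here, illustratively, as 22:00-06:00
            if ts.hour >= 22 or ts.hour < 6:
                night += 1
    return total, night

# Tiny synthetic example: 48 hourly readings ramping over the threshold daily
start = datetime(2010, 7, 1)
readings = [(start + timedelta(hours=h), 23.0 + 3.5 * ((h % 24) / 24))
            for h in range(48)]
total, night = overheating_hours(readings)
```

The same count over a full season of logger data is how figures like the 1,992 hours above 25°C (888 at night) are produced; simple daily-maximum criteria miss the night-time component entirely.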
To predict the annual frequency of overheating of the wards, and the energy demands and CO2 emissions in the current climate, the dynamic thermal model IES was used.5 The full paper records reasonable agreement between IES predictions and measured values.
Current and future weather files were created by the Prometheus research team at Exeter University, and the future probabilistic weather files were derived using UKCP09 data. A detailed account of this process has been presented elsewhere.6 Schemes intended to deliver greater resilience to hot summers were devised, starting with what appears to have become the enlightened – but questionable – industry standard Passivhaus-derived model.
Option 1: Sealed mechanical ventilation heating and cooling (SMVHC)
In this option glazing is sealed, airtightness improved, and 100mm of insulation added to the roof and external walls. Mechanical ventilation is operated to achieve the then DH recommendation of 6ach, with 60% heat recovery. DSY peak temperatures will oscillate around 28°C, so additional cooling capacity will be required and will become increasingly necessary. Does the building have the capacity to accommodate it? Predicted energy demand is high (Figure 5) and associated CO2 emissions very high – 130kgCO2/m2 (Figure 6). Almost two-thirds of this will result from its electrical demand, which will, of course, rise as cooling capacity is increased.
Option 2: Natural cross-ventilation retaining perimeter heating (NCVPH)
This reintroduces natural ventilation as the primary ventilation strategy (Figure 2): roof insulation is increased; all glazing on the south-, southwesterly- and southeasterly-facing elevations is shaded externally, to a geometry that preserves bedhead-level views out; all glazed panels open to 45° (safeguarded with grillage); opening, opaque panels are added; and the whole recipe is repeated on the glazed courtyard elevations.
Spaces adjacent to the courtyards are opened, to enable distributed cross-ventilation by the removal of some cellular rooms to form larger, open, patient day-spaces. Suspended ceilings are removed in these areas to reveal concrete soffits that capture ‘coolth’ from cross-ventilation. Ducts connect these inboard spaces directly to the perimeter.
Peak DSY temperatures hover between 33.5°C and 34.9°C. Additional cooling will be required, but night ventilation cooling is wholly excluded by current set-point policy. A peak of 31°C is predicted in Test Reference Year (TRY) summer conditions. More research is required into clinically safe night conditions in maternity wards. Option 2 has the lowest energy penalty (Figure 5) and markedly lower CO2 emissions (Figure 6) than option 1.
Option 3: Advanced natural cooling summer ventilation (ANCSVPH)
Courtyards are unheated, glazed atria with a liberal opening area above to dissipate summer solar gains (Figure 3). Air is supplied via concrete ducts below the ground slab, offering a measure of ground cooling. All glazing to the atria becomes operable to 45° and perimeter heating is retained. In winter mode, air is admitted through damper-controlled perimeter heating units. Transfer ducts exhaust air from zones adjacent to the atria and, as in option 2, suspended ceilings are cut back to expose thermal mass. All vulnerable glazing is shielded from direct summer solar gains. Option 3 offers similar DSY peak conditions as option 2, but TRY peaks within current guidance. Predicted energy demand and CO2 emissions are only marginally higher than option 2 (Figures 5 and 6).
Option 4: Natural ventilation incorporating passive downdraught cooling and perimeter heating (NVPDCPH)
This option (Figure 4) also proposes enclosing the courtyards, but developing the low energy cooling strategy of the authors’ UCL School of Slavonic and East European Studies in London. Cooled water batteries at high-level openings induce a downward flow of pre-cooled air, contained by a lightweight, acoustically absorbent, fabric shroud. The cooled air is then drawn across surrounding occupied spaces.
The diagram proposes ground-sourced cooling to supplement the action of the passive downdraught cooling (PDC) by using seasonal thermal storage in readily available water tanks, so that heat gained from summer hot spells is dissipated in winter, and winter coolth is used in summer. Banks of passive solar water heaters on the roof of each PDC rooflight supplement warming of the winter supply tanks. Recovered heat from all sources is gathered in winter to supplement the supply to the perimeter heating system. Option 4 offers lower TRY and DSY peaks than options 2 and 3. Energy input is higher, but the CO2 penalty is in line with options 2 and 3 (Figures 5 and 6). This is a complex option to model.
During extreme years (DSYs), option 1 (not illustrated) projected the best performance, while the 2010 building exhibited the worst. For current DSY extreme conditions, all four options broadly met the HTM 03 1%/28°C criterion, while – for the 2030s – options 2, 3 and 4 exceeded it. But improved performance is achieved at a cost. Although the 2010 building, unaltered, had the worst thermal performance, it had the lowest energy consumption (Figure 5).
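The HTM 03 test referred to here, no more than 1% of hours above 28°C, lends itself to a mechanical check against a simulated annual temperature series. A hedged sketch (the 1%/28°C figures come from the text; evaluating all hours of the year rather than occupied hours only is a simplifying assumption, and guidance may restrict the test to occupied hours):

```python
# Check an hourly temperature series against the HTM 03 overheating
# criterion: no more than 1% of hours above 28°C.

def meets_htm03(temps, threshold=28.0, limit_fraction=0.01):
    """temps: iterable of hourly internal temperatures (°C) for one year."""
    temps = list(temps)
    exceedances = sum(1 for t in temps if t > threshold)
    # Simplification: fraction is taken over all supplied hours,
    # not occupied hours only.
    return exceedances / len(temps) <= limit_fraction

# Example: 80 exceedance hours in an 8,760-hour year is about 0.9%, a pass
year = [24.0] * 8680 + [29.0] * 80
```

Running the same check on each option's simulated DSY output for the current climate and for the 2030s weather files is, in essence, how the pass/fail comparison above is made.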
Figure 5: Predicted energy demand for existing and refurbishment options for the year 2010, Cambridge (Bedford weather file). The energy values are the average of monitored and modelled spaces AR1-EX, AR2-MB2 and AR3-DR
Figure 6: Predicted CO2 emissions for existing and refurbishment options for year 2010, Cambridge (Bedford weather file). The CO2 values are the average of monitored and modelled spaces AR1-EX, AR2-MB2, and AR3-DR
The DH Energy Efficiency Fund Scheme of 2013-14, which Alan Short administered in part, would not have funded any of the options; the Treasury requires a return on investment of 2.4 within five years of implementation. This is potentially a major barrier to achieving the adaptation of the public, non-domestic building stock – which is unfortunate given that the NHS Retained Estate seems a particularly promising place to implement a public sector adaptation scheme.
Acknowledgements
The authors would like to acknowledge the contributions of Phil Nedin and Shahila Sheikh, mechanical engineers at Arup; Paul Banks and David Nichol, quantity surveyors at Davis Langdon Aecom building cost consultants; Short and Associates Architects for producing adaptive design drawings; colleagues engaged in the EPSRC project; the Addenbrooke’s hospital estates and facilities department; and the staff of the Rosie for their forbearance during data collection.
- This article was written by Alan Short, University of Cambridge; Renganathan Giridharan, University of Kent; and Kevin Lomas, University of Loughborough
References:
- Short CA, Lomas KJ, Giridharan R and Fair AJ Building resilience to overheating into 1960s UK hospital buildings within the constraint of the national carbon reduction target: adaptive strategies, Building and Environment, 55, pp73-95 (2012) 10.1016/j.buildenv.2011.12.006
- Heatwave Plan for England 2014. Protecting health and reducing harm from severe heat and heatwaves. Public Health England, May 2014, downloaded 23 October 2014.
- Lomas KJ, Giridharan R, Thermal comfort standards, measured internal temperatures and thermal resilience to climate change of free-running buildings: a case study of hospital wards, Building and Environment, 55, pp57-72 (2012) 10.1016/j.buildenv.2011.12.006
- The guidance on energy efficiency for the NHS, Health Technical Memorandum (HTM) 07-02, published in March 2015.
- Integrated Environmental Solutions (IES) (2010) IES VE Software Version 6.4, accessed 23 February 2012.
- Eames M, Kershaw T, Coley D, On the creation of future probabilistic design weather years from UKCP09, Building Services Engineering Research and Technology, 32(2), pp127-142 (2011)
- NHS Energy Efficiency Fund 2013-14, adjudicating the distribution of the £50m fund across NHS England, with the professor of sustainable engineering, reporting to the under-secretary of state for health. The executive summary was published in February 2015.
https://www.cibsejournal.com/case-studies/heat-stress-addressing-overheating-at-addenbrookes-rosie-maternity-unit/
In Search of Beauty refers to a long-term project that reflects our commitment to the arts and the creative industry as a whole, by finding the beautiful in our lived environment. While the definition of beauty will always remain subjective, a common definition is a person, place, object, or experience that conveys a perception of balance and harmony with its surroundings.
In Search of Beauty documents a multigenerational experience of two women traveling to various cities, towns, and villages around the world. The blog features what we find beautiful in the locations we visit by highlighting what to see, who to know, and how to live.
As a scholar of aesthetics, Jennifer Ford finds it easy to become lost in the various textbook definitions of “beautiful”, and in how they seem at odds with the social and political environment of the last few years. She has traveled the world and inherently knows that most people are genuinely good, that most places have an element of beauty, and that many experiences can result in beauty, given the right perspective. Unfortunately, the silver lining has been lost in the majority of our social media exchanges, news media productions, and even in some of our personal relationships. The tipping point came when she realized that fear and disappointment were growing in her 11-year-old son as he scanned newspapers for a current events assignment, which led to a discussion on how we can make a positive impact on the barrage of information and images we digest. The mother-son duo decided to seek out people who are doing beautiful things in their community, art and architecture created for the sake of beauty, and places on earth that could only be defined as sublime.
Bridget O'Reilly has always found a way to immerse herself in the art world, whether it was through an internship, a job, traveling, or creating. She has had two aha moments in her life. The first unexpectedly occurred while touring the Notre-Dame Cathedral in Paris, when she broke into tears over the sheer beauty of a place she had only ever seen in textbooks and movies; in that moment, she understood that kind of overwhelming emotion meant something greater than herself and that art would forever be an integral part of her life. The second came when she brought artwork by artists from the Midwest into an East Coast art fair last summer. As she unloaded the pieces carefully from the truck, she felt incredibly humbled to be able to share beauty that had never reached anywhere outside of its hometown. Bridget knew that these two experiences (among many others) meant that she had more work to do in finding and spreading beauty to share.
We believe that beauty is everywhere. By starting in our own community and then working our way around the world, we hope to be a point of inspiration, positivity, and delight.
Jennifer
There is so much beauty in the world. When asked the question, “what do you find beautiful?”, it is easy to point directly to my art history background and rattle off five of my favorite classical works of art. However, if I take time to really examine that question beyond its surface-level implications, I would talk about my son’s laugh, my grandmother’s garden, the work of my favorite author, and the thousands of beautiful moments in my day that I should be grateful for. Over the past year, it seemed so much easier to find the ugly. Social media, the press, and the blah landscape of a Midwest winter without snow all contributed to a negative spiral of complacency and ambivalence toward my surroundings and the people around me doing beautiful things. This project is more than a travel blog; it is a practice. Finding beauty, recognizing it, and appreciating it are skills that take time, energy, and a certain amount of commitment.
Truly living demands that we go beyond the empirical realms of life, and beautiful objects symbolize our ability to do so. The relationship between the mind and world shown in the beauty of nature gives us hope that happiness can be fully realized. While the qualifications for beauty differ for everyone, clinical psychologists have shown that the same reward and pleasure centers of the brain (the medial orbital frontal cortex) light up for everyone when they behold an object deemed beautiful.
There are many direct links between beauty and the judgement of taste. In much of the “Analytic of the Beautiful”, the first book of his Critique of Judgement, Kant explains how beauty creates a compatible relationship in our mind and, additionally, between our mind and the world.
He tells us that beauty happens when something in the world stimulates a response in our minds. Simply put, we can experience beauty by having a feeling which enters our consciousness. This feeling does not necessarily need to produce an explanation for the object of beauty. While not its main intention, beauty brings about a harmony of the cognitive powers of judgement, reason, and understanding.
An object shows us its beauty when its form or design is equal to the expectations of our imagination and understanding. Kant says, “A natural beauty is a beautiful thing, artistic beauty is a beautiful presentation of a thing”. Beauty arises from the way in which the object appears to us. In other words, judgements of beauty grasp only the form of the object and the way in which it plays out in our understanding and imagination.
This year will be dedicated to the beautiful places, people, things, and experiences that are naturally occurring in our world. While this journey will be focused on recognizing beauty in others, my personal quest to recognize the beauty in myself will be at the forefront of my mind. I am aware of the potential dysmorphia that could occur when looking for the good in others. It would be easy to discount my own beautiful accomplishments, my physical beauty, or my beautiful relationships when I seek out the most outstanding form of beauty in others. There are many old adages about the nature of beauty in our lived experiences. I hope to prove that beauty can be more than an appreciation of pleasing symmetries. It can be a source of the good and the powerful.
https://www.insearchofbeauty.net/posts/our-story
The best part of Racheal Clark’s job as a medical assistant instructor at Career Quest Learning Centers is seeing the spark of knowledge that comes from her students when they achieve just a little more than they thought they were capable of.
“Sometimes they like to think that they don’t like to think, but then I see that light bulb come on,” said Racheal.
The path to that bright light isn’t always easy, admits Racheal. She works with her students, using real world case studies to serve as examples of what they might encounter in the field. She forces them to think critically and says she asks them to answer questions “in their own words” to demonstrate that they have a full grasp of a concept she’s teaching.
And when they do “get it,” Racheal says “that’s what makes it all worthwhile.”
Racheal worked as a medical assistant for 18 years before coming to the Jackson campus to impart a little of her long-learned wisdom to the next generation of healthcare professionals. In addition to teaching them the fundamentals of the profession, Racheal says she tries to serve as a positive role model.
“If I can be a positive influence in their lives,” said Racheal “that is very important to me.”
Racheal teaches her students how to function as professionals in today’s healthcare field. She shows them the importance of knowing the material, working hard to achieve their goals and of being prepared for the what-if scenarios of work and life.
“My hope is that their work ethic will take them as far as they can go so they’ll be able to compete out in the world,” said Racheal.
She also says she hopes students who complete the medical assistant program develop a real desire for lifelong learning.
“If you think you know all there is to know in medicine, then that’s when it’s time to get out of the field,” said Racheal. “I want my students to always continue their education.”
Racheal is a popular instructor who has a great rapport with her students. She likes watching her students succeed and come together as a class. And she likes being there for them.
“If I can make a difference in one student’s life, that’s great,” said Racheal, “because they’ve already made a difference in mine.”
Racheal Clark is just one of the talented professionals who serves as an instructor at Career Quest Learning Centers. If you’re interested in learning from people who are passionate about their careers and the students they teach, check out all the career training programs at Career Quest Learning Centers.
https://www.careerquest.edu/blog/2014/04/mrs-clark-%E2%80%93turning-light-bulbs
Carotid Artery Disease
What is carotid artery disease?
The carotid arteries are the main blood vessels that carry blood and oxygen to the brain. When these arteries become narrowed, it’s called carotid artery disease. It may also be called carotid artery stenosis. The narrowing is caused by atherosclerosis. This is the buildup of fatty substances, calcium, and other waste products inside the artery lining. Carotid artery disease is similar to coronary artery disease, in which buildup occurs in the arteries of the heart and can cause a heart attack.
Carotid artery disease reduces the flow of oxygen to the brain. The brain needs a constant supply of oxygen to work. Even a brief pause in blood supply can cause problems. Brain cells start to die after just a few minutes without blood or oxygen. If the narrowing of the carotid arteries becomes severe enough that blood flow is blocked, it can cause a stroke. If a piece of plaque breaks off it can also block blood flow to the brain. This too can cause a stroke.
What causes carotid artery disease?
Atherosclerosis causes most carotid artery disease. In this condition, fatty deposits build up along the inner layer of the arteries forming plaque. The thickening narrows the arteries and decreases blood flow or completely blocks the flow of blood to the brain.
Who is at risk for carotid artery disease?
Risk factors associated with atherosclerosis include:
- Older age
- Male
- Family history
- Race
- Genetic factors
- High cholesterol
- High blood pressure
- Smoking
- Diabetes
- Overweight
- Diet high in saturated fat
- Lack of exercise
Although these factors increase a person's risk, they do not always cause the disease. Knowing your risk factors can help you make lifestyle changes and work with your doctor to reduce the chances that you will get the disease.
What are the symptoms of carotid artery disease?
Carotid artery disease may have no symptoms. Sometimes, the first sign of the disease is a transient ischemic attack (TIA) or stroke.
A transient ischemic attack (TIA) is a sudden, temporary loss of blood flow to an area of the brain. It usually lasts a few minutes to an hour. Symptoms go away entirely within 24 hours, with complete recovery. When symptoms persist, it is a stroke. Symptoms of a TIA or stroke may include:
- Sudden weakness or clumsiness of an arm or leg on one side of the body
- Sudden paralysis of an arm or leg on one side of the body
- Loss of coordination or movement
- Confusion, decreased ability to concentrate, dizziness, fainting, or headache
- Numbness or loss of feeling in the face or in an arm or leg
- Temporary loss of vision or blurred vision
- Inability to speak clearly or slurred speech
If you or a loved one has any of these symptoms, call for medical help right away. A TIA may be a warning sign that a stroke is about to occur. TIAs do not precede all strokes, however.
The symptoms of a TIA and stroke are the same. A stroke is loss of blood flow (ischemia) to the brain that continues long enough to cause permanent brain damage. Brain cells begin to die after just a few minutes without oxygen.
The disability that occurs from stroke depends on the size and location of the brain that suffered loss of blood flow. This may include problems with:
- Moving
- Speaking
- Thinking
- Remembering
- Bowel and bladder function
- Eating
- Emotional control
- Other vital body functions
Recovery also depends on the size and location of the stroke. A stroke may result in long-term problems, such as weakness in an arm or leg. It may cause paralysis, loss of speech, or even death.
The symptoms of carotid artery disease may look like other medical conditions or problems. Always see your doctor for a diagnosis.
How is carotid artery disease diagnosed?
Along with a complete medical history and physical exam, tests for carotid artery disease may include:
- Listening to the carotid arteries. For this test, your doctor places a stethoscope over the carotid artery to listen for a sound called a bruit (pronounced brew-ee). This sound is made when blood passes through a narrowed artery. A bruit can be a sign of atherosclerosis. But an artery may be diseased without producing this sound.
- Carotid artery duplex scan. This test is done to assess the blood flow of the carotid arteries. A probe called a transducer sends out ultrasonic sound waves. When the transducer (like a microphone) is placed on the carotid arteries at certain locations and angles, the ultrasonic sound waves move through the skin and other body tissues to the blood vessels, where the waves echo off of the blood cells. The transducer sends the waves to an amplifier, so the doctor can hear the sound waves. Absence or faintness of these sounds may mean blood flow is blocked.
- MRI scan. This procedure uses a combination of large magnets, radiofrequency energy, and a computer to make detailed images of organs and structures in the body. For this test, you lie inside a big tube while magnets pass around your body. It’s very loud.
- Magnetic resonance angiography (MRA). This procedure uses magnetic resonance technology (MRI) and intravenous (IV) contrast dye to make the blood vessels visible. Contrast dye causes blood vessels to appear solid on the MRI image so the doctor can see them.
- Computed tomography angiography (CTA). This test uses X-rays and computer technology along with contrast dye to make horizontal, or axial, images (often called slices) of the body. A CTA shows pictures of blood vessels and tissues and is helpful in identifying narrowed blood vessels.
- Angiography. This test is used to assess how blocked the carotid arteries are by taking X-ray images while a contrast dye is injected. The contrast dye helps the doctor see the shape and flow of blood through the arteries as X-ray images are made.
How is carotid artery disease treated?
Your healthcare provider will figure out the best treatment based on:
- How old you are
- Your overall health and medical history
- How sick you are
- How well you can handle specific medicines, procedures, or therapies
- How long the condition is expected to last
- Your opinion or preference
If a carotid artery is less than 50% narrowed, it is often treated with medicine and lifestyle changes. If the artery is between 50% and 70% narrowed, medicine or surgery may be used, depending on your case.
Medical treatment for carotid artery disease may include:
Lifestyle changes
- Quit smoking. Quitting smoking can reduce the risk for carotid artery disease and cardiovascular disease. All nicotine products, including electronic cigarettes, constrict the blood vessels. This decreases blood flow through the arteries.
- Lower cholesterol. Eat a low-fat, low-cholesterol diet. Eat plenty of vegetables, lean meats (avoid red meats), fruits, and high-fiber grains. Avoid foods that are processed or high in saturated and trans fats. When diet and exercise are not enough to control cholesterol, you may need medicines.
- Lower blood sugar. High blood sugar (glucose) can cause damage and inflammation to the lining of the carotid arteries. Control glucose levels through a low-sugar diet and regular exercise. If you have diabetes, you may need medicine or other treatment.
- Exercise. Lack of exercise can cause weight gain and raise blood pressure and cholesterol. Exercise can help maintain a healthy weight and reduce risks for carotid artery disease.
- Lower blood pressure. High blood pressure causes wear and tear and inflammation in blood vessels, increasing the risk for artery narrowing. Blood pressure should be below 140/90 for most people. People with diabetes may need even lower blood pressure.
Medicines
Medicines that may be used to treat carotid artery disease include:
- Antiplatelets. These medicines make platelets in the blood less able to stick together and cause clots. Aspirin, clopidogrel, and dipyridamole are examples of antiplatelet medicines.
- Cholesterol-lowering medicines. Statins are a group of cholesterol-lowering medicines. They include simvastatin and atorvastatin. Studies have shown that certain statins can decrease the thickness of the carotid artery wall and increase the size of the opening of the artery.
- Blood pressure-lowering medicines. Several different medicines work to lower blood pressure.
If a carotid artery is narrowed from 50% to 69%, you may need more aggressive treatment, especially if you have symptoms.
Surgery is usually advised for carotid narrowing of more than 70%. Surgical treatment decreases the risk for stroke after symptoms such as TIA or minor stroke.
Surgical treatment of carotid artery disease includes:
- Carotid endarterectomy (CEA). This is surgery to remove plaque and blood clots from the carotid arteries. Endarterectomy may help prevent a stroke in people who have symptoms and a narrowing of 70% or more.
- Carotid artery angioplasty with stenting (CAS). This is an option for people who are unable to have carotid endarterectomy. It uses a very small hollow tube, or catheter, that is threaded through a blood vessel in the groin to the carotid arteries. Once the catheter is in place, a balloon is inflated to open the artery and a stent is placed. A stent is a thin, metal-mesh framework used to hold the artery open.
What are the complications of carotid artery disease?
The main complication of carotid artery disease is stroke. Stroke can cause serious disability and may be fatal.
Can carotid artery disease be prevented?
You can prevent or delay carotid artery disease in the same way that you would prevent heart disease. This includes:
- Diet changes. Eat a healthy diet that includes plenty of fresh fruits and vegetables, lean meats such as poultry and fish, and low-fat or non-fat dairy products. Limit your intake of salt, sugar, processed foods, saturated fats, and alcohol.
- Exercise. Aim for 40 minutes of moderate to vigorous-level physical activity at least 3 to 4 days per week.
- Manage weight. If you are overweight, take steps to lose weight.
- Quit smoking. If you smoke, break the habit. Enroll in a stop-smoking program to improve your chances of success. Ask your doctor about prescription options.
- Control stress. Learn to manage stress in your home and work life.
When should I call my healthcare provider?
Learn the symptoms of stroke and have your family members also learn them. If you think you are having symptoms of a stroke, call 911 immediately.
Key points about carotid artery disease
- Carotid artery disease is narrowing of the carotid arteries. These arteries deliver oxygenated blood from the heart to the brain.
- Narrowing of the carotid arteries can cause a stroke or symptoms of a stroke and should be treated right away.
- Eating a low-fat, low-cholesterol diet that is high in vegetables, lean meats, fruits, and high fiber is one way to reduce the risk of carotid disease. Exercise, quitting smoking, blood pressure control, and medicine can also help.
- Opening the carotid arteries once they are narrowed can be done with surgery or with angioplasty and a stent.
- Carotid artery disease may not have symptoms, but if you have significant risk factors, see your healthcare provider for screening and diagnosis.
https://www.hopkinsmedicine.org/health/conditions-and-diseases/carotid-artery-disease
This fourth edition of the book provides readers with a detailed explanation of PLM, enabling them to gain a full understanding and the know-how to implement PLM within their own business environment. This new and expanded edition has been fully updated to reflect the numerous technological and management advances made in PLM since the release of the third edition in 2014, including chapters on both the Internet of Things and Industry 4.0.
The book describes the environment in which products are ideated, developed, manufactured, supported and retired before addressing the main components of PLM and PLM Initiatives. These include product-related business processes, product data, product data management (PDM) systems, other PLM applications, best practices, company objectives and organisation. Key activities in PLM Initiatives include Organisational Change Management (OCM) and Project Management. Lastly, it addresses the PLM Initiative, showing the typical steps and activities of a PLM project or initiative.
Enhancing readers’ understanding of PLM, the book enables them to develop the skills needed to implement PLM successfully and achieve world-class product performance across the lifecycle.
- About the authors
- Dr John Stark, a recognised PLM expert, started working in product development in 1979. In the 1980s, he worked in computer-aided design, product data management and business process improvement. He has worked as a consultant to companies in the product development and support area since the mid-1980s; first for Coopers & Lybrand, then as an independent consultant after setting up his own business in 1991. Over the last 30 years, he has helped over 100 companies implement PLM. He has written numerous highly successful books on PLM. He's also developed and delivered PLM courses for executives, people working in the product lifecycle, PLM professionals and university students.
|
https://www.springer.com/us/book/9783030288631
|
Investments in new technology are leaving companies with less to invest in training and development, even as their dependence on individual employees grows sharply. Workers are being asked to achieve more productivity with fewer resources while their jobs become more complex, eroding the competitive marketplace advantage of their employers. To facilitate productivity, engagement and retention, we need to reconsider how we train and equip our team members. Organizations that master teaching in a way that matches their employees’ learning process will prosper.
Why employee learning is so important
Work in the last several decades has become increasingly complex. Mass-digitization of information, decreasing costs of computation, and ease of communication across the organization have all contributed to major changes in the way we work.1 With each technological breakthrough, companies have been able to become more productive with less human capital.
These advances come with their own costs. While fewer employees are doing more work, the tools they use and the work they do have become much more complicated. According to Deloitte, more than 80% of organizations recognize the need to simplify work as an important problem.2 Ignoring this problem leads to even bigger issues with employee productivity, engagement3 and ultimately retention.4
Why the most prominent solutions aren’t paying off
To address the issue of complexity, a multi-billion-dollar industry has emerged to facilitate the way we train our employees and share knowledge across our organizations. Unfortunately, many of these solutions have only exacerbated the problem further by introducing more complexity.
Management systems provide tools for developing and organizing training materials, but not a framework for driving employee engagement in their day-to-day work. They require large upfront investments in budget, IT and infrastructure. They don’t integrate easily with existing HR or Business Management systems. And while they do help to produce and organize content, they don’t help to identify the best content, or determine who should be viewing it, or provide it to you when you need it.
So, what’s missing from these systems? Many of them focus on the act of training, organizing files and tracking their use. They don’t reinforce best practices, help to identify better ones, or become a part of the workflow itself. Most importantly they don’t focus on the way people learn, or how to change behavior.
If standardizing and training on the performance of complex procedures is important to your business, first you need to understand the basics of how your employees process information into knowledge.
How the brain sorts inbound data
New information is introduced to us through our senses. Data enters our brain at a dizzying pace. Take a moment to try to notice all the external forces impacting your current environment.
What sounds can you hear? What do you see as you look around? What’s the temperature? What can you smell?
The number of competing inputs will vary greatly depending on your location.
Is the phone ringing? Is your co-worker talking on the other side of the wall? Is your calendar alert popping up? Are your kids playing with the dog? Is your spouse making dinner?
Your brain takes in every single one of the thousands of signals around you. Thankfully, you don’t have to think about each of these signals individually and decide how to react to them.
As information comes into your brain through your senses, it enters your immediate memory. Here, your brain makes quick (millisecond) judgements – based on previous experiences – about whether to further review the information or discard it.
For example, if you see an ambulance coming with lights and sirens, you immediately take notice. You assess the situation and decide what to do based on previous experience. But if a colleague’s phone rings, you probably don’t even hear it.
Once the brain discards something, it’s gone for good.6 However, if based upon previous experiences, your brain decides that certain information requires additional attention, it is passed along to your working memory.
In your working memory, your brain must make a lot of decisions, so it can only process a few things at a time. Your working memory determines whether a piece of information is:
- something to simply discard – a passing smell that reminded you of your grandmother
- something that you should maintain for a short period – where did you park at the grocery store
- something that is compelling enough to make a permanent part of you – August 23 is your son’s birthday.
Which memories become part of you
Every new input that your brain processes is filtered through your past experiences. The information the brain readily passes into long-term memory relates to survival, or your most vivid emotions. So typically, the best and worst things that happen to you become long-term memories fastest and easiest.
For most of us, the best and worst moments of our lives don’t happen, on a regular basis, during our work life. Fortunately, the brain has another standard for entering information into long-term memory. The other process performed on information in working memory is an assessment of whether the information makes sense and whether it has meaning.6 If information makes sense, that means it fits into the way you understand the world. If information has meaning that means it is relevant to you.
Learning in the workplace
When information is presented to you during the work day, you review it in context of what you understand about your job, your company and your industry. If the information fits into that context, it will likely have meaning for you. If you can envision it applying to your personal work, then it is relevant for you. With both boxes checked, the information is a candidate for long-term memory storage.
However, experience always shades how new information is perceived too. If an employee has had a previous experience with something and it didn’t work, they will likely expect new, similar information to perform the same way.6
For example, imagine an employee was attempting to locate an SOP document on SharePoint and couldn’t find it after several attempts. That negative experience could taint their view of any document on that platform. Now when you reference a new document, you not only have to rely on the quality of the document itself, you must also overcome the prior negative experience.
Another important aspect of learning is repetition or practice. Activities performed at work are really no different than riding a bike or playing a musical instrument. As you practice an activity, you become more efficient at it. When the same information is presented to your brain with repetition, it begins to develop physical and chemical changes that relate to the activity. Eventually, the activity can be performed reflexively as the brain builds a pattern that is automatically recalled when you perform the task.
That’s why it’s critical that materials used in training are also incorporated into daily activity. While in-person training sessions are valuable at introducing the information into the working memory, the brain will only retain certain portions of that information, and only for a short period of time. If the pattern of information is repeated while it is still in working memory, then it is much more likely that it will pass into long-term memory.
The trick is making sure that the same materials are used throughout the learning process and well into the performance of the task.
Keys to helping employees learn:
- Make the information meaningful and relevant.
- Present it in a clear and actionable manner, leaving the employee with the expectation that this will help them succeed.
- Engage as many senses as possible with the material.
- Give them the information in context, ideally while they are performing the task.
- Make the information part of the task so that it is repeated each time and the employee knows s/he is responsible for it.
- Check for comprehension to ensure the most important information has been retained.
Tools built on these principles
From the beginning, the fundamental nature of how the brain learns has served as the compass for developing the Acadia Performance Platform. Each phase of the memory process has been accounted for, and improvements have been made over time based on customer feedback.
All knowledge entered into the Acadia platform can be retrieved by employees using simple and intuitive online tools that they are accustomed to using in their daily lives. This triggers emotions from past successful experiences and puts the employee in the proper mindset for learning.
All materials used in training (step-by-step instructions, images, video, etc.) can be added to policies, procedures and manuals in Acadia, making sure they are easily accessible while performing activities. Seeing these materials multiple times reinforces the correct behavior and helps to generate long-term memories.
Similarly, policies and procedures can be automatically converted to auditable task lists that employees can use on mobile devices while performing activities. With a consistent and structured approach to each important procedure, the one best way to perform it becomes adopted quickly and easily.
Assessing comprehension, through quizzing, after receiving training, performing a task, or acknowledging a policy can further help to engage long-term memory. It reinforces the most important material and helps make the employee accountable.
Finally, and perhaps most importantly, employees can provide feedback on every document to suggest better, more efficient ways of performing tasks. Making an employee part of the process not only encourages a culture of continual improvement, it also helps them to think more critically about their work and makes each learning experience a positive one.
Sources:
- The World Is More Complex than It Used to Be
- Simplification of work
- History of employee engagement - from satisfaction to sustainability
- The Overwhelmed Employee: What HR Should Do?
|
https://www.acadia-software.com/news-insights/understanding-how-your-workforce-learns-can-make-them-more-productive?utm_source=SG&utm_medium=Eml&utm_campaign=Acadia-HowWorkforceLearns&utm_content=OOO&utm_start=20180612&utm_end=&utm_metric=CTs
|
Members of the North community gathered in the cafeteria to celebrate and recognize the diverse cultures of students at North during the first ever Multicultural Night on March 22.
The event was organized by North’s PTSO in collaboration with parents from around the community. In total, 14 different countries represented their cultures, with food and presentations to teach and illustrate each culture.
Pria Sarma, a parent of a junior at North, led the organization of Multicultural Night with Monika Jain, parent of a sophomore at North. She said it had always been her aspiration to learn about other cultures, and Multicultural Night was a perfect opportunity for students and parents at North to do so.
“Not only is this a great way to learn about different cultures, but it’s also a great way to build community,” said Sarma. “And what’s a better way to get people together than with food?”
The culture stands included a variety of traditional foods native to the countries as well as different items of significance from each culture. A written explanation was also presented with every stand, providing additional insight and information.
“Representing your own culture makes you feel more connected with your community,” said Ana Kales, a parent of a freshman at North, who represented Venezuela. “Being able to learn about other cultures helps you connect with the world within your community.”
According to Jain, having Multicultural Night also allowed the participants of the event to better appreciate and understand their own cultures, as well as to build empathy for other cultures. “Not only should we celebrate our own cultures,” said Jain, “but also the other cultures that make the world go round.”
Jain and Sarma worked with PTSO president Sally Brickell and Global Education Leadership Fund (GELF) coordinator Samantha Mandell to brainstorm and prepare for the event.
Sarma said the event was a success with a lot of energy and participants, especially for the first ever Multicultural Night.
Alex Cheeser, who works at the Scandinavian Culture Center, came to Multicultural Night to share his culture and represent the Scandinavian community in Newton, which he has seen grow over the past few years. He also pointed out the importance of understanding cultures around the world, especially in a place like Newton, where “there is so much diversity and little pockets of all different kinds of culture.”
Despite its large student body, North both recognizes and includes all cultures, according to sophomore Yesha Thakkar, who represented India at Multicultural Night.
“We have a lot of culture clubs, and culture days,” said Thakkar, “which shows how the North community understands and celebrates the diversity here.”
Thakkar also said she hopes that Multicultural Night becomes an annual tradition, to further expose the North community to the different cultures within it.
|
https://thenewtonite.com/33107/news/first-annual-multicultural-night-celebrates-cultural-diversity-in-north-community/
|
Tesla Inc (TSLA) stock skidded on Monday after a pair of bearish reports from two analyst firms. Both of them expect the automaker to disappoint on Model 3 deliveries, and one even said company management doesn’t seem confident that they’ve resolved the bottlenecks that have plagued Model 3 production since the beginning. Tesla Inc (TSLA) stock plunged more than 3%, continuing the downtrend it’s been on for about the last month, other than a minor uptick a little over a week ago.
Model 3 orders still on the rise
In a note to investors, KeyBanc analyst Brad Erickson said they finally got to drive the Model 3 for the first time during a meeting with Tesla Inc (TSLA)’s investor relations department. He said the car was about as they expected, but he described the interior as “simple,” expressing skepticism that it can attract “the sustainable over-$40K buyer.”
In spite of the anecdotal accounts about Model 3 buyers canceling their orders, he said that the company’s order book for the Model 3 is still growing, which seems to be calming Tesla Inc (TSLA) bulls for now. He also noted that this is a basic fact, “no matter where one stands on demand,” and he feels that Tesla Inc (TSLA) stock could rise on this simple fact, paired with progress in resolving the Model 3 production problems.
In fact, he warned about the potential for a short squeeze in Tesla Inc (TSLA) stock due to “signs of strong Model 3 demand” and “improving production.”
Here’s what Tesla must do to keep its stock going
Erickson reported that Tesla management still expects Model 3 gross margins to turn positive by the second quarter, and he feels that bulls are still waiting for “that breakout quarter” in which Model 3 margins turn positive, Model S and X margins improve, and cash burn slows.
The KeyBanc analyst continues to rate Tesla Inc (TSLA) stock at Sector Weight, as he doesn’t believe the automaker can achieve the gross margin it has said it can on the Model 3. He feels that bulls “are ultimately ascribing too much value from perceived innovative superiority.” He warned that Tesla Inc (TSLA) stock will likely stop rising if the automaker again comes up “far short” of its Model 3 production target, which is 5,000 per week, and its gross margin ramp.
Erickson warned that he has little confidence that Tesla Inc (TSLA) will be able to ramp production to expectations for the first or even second half of the year. However, he also notes that this remains “largely irrelevant” to Tesla Inc (TSLA) stock bulls. He added that the company’s investor relations essentially dodged his attempt to get an update on how production is going, and what’s worse is that he “detected low confidence of no further production bottlenecks.”
“The Company’s goal continues to be maximizing increases to weekly production run rates as fast as possible and then, quite literally, seeing what breaks and fixing it,” he wrote.
Tesla Inc (TSLA) stock is still a Sell: Goldman
Goldman Sachs analyst David Tamberrino offered up an even more bearish view of Tesla Inc (TSLA) stock in his own note today. The automaker is expected to reveal how many vehicles it delivered during the first quarter within the first few days of April, as it typically does this shortly after the end of every quarter. Tamberrino believes the EV maker is on track for yet another delivery miss, based on the company’s “February cadence.” Like Erickson, he believes that although Tesla Inc (TSLA) is delivering more and more Model 3 cars with each new month, it’s still on track to come up “well short” of the consensus. He even expects Tesla Inc (TSLA) to miss its own guidance for Model S and Model X deliveries, which some might see as a major setback.
The automaker guided for 100,000 vehicle deliveries this year, which implies 25,000 per quarter. However, because the company expects to increase production month by month, it shouldn’t be a surprise if it doesn’t deliver 25,000 vehicles in the first quarter. By the very nature of ramping production, this year’s deliveries could be back-end-loaded.
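The even-split versus back-end-loaded arithmetic can be sketched in a few lines; the quarterly ramp shares below are illustrative assumptions, not company figures:

```python
# Hypothetical illustration: annual guidance of 100,000 deliveries
# averages 25,000 per quarter, but a production ramp that grows each
# quarter back-loads the year. Ramp weights are assumed for this sketch.
annual_guidance = 100_000
flat = [annual_guidance // 4] * 4          # naive even split per quarter
ramp_weights = [0.15, 0.20, 0.30, 0.35]    # assumed ramp profile
ramped = [round(annual_guidance * w) for w in ramp_weights]

print(flat)    # [25000, 25000, 25000, 25000]
print(ramped)  # [15000, 20000, 30000, 35000]
# Under the ramp, Q1 falls well short of the 25,000 even-split figure,
# even though the full-year total still meets guidance.
assert sum(ramped) == annual_guidance
```

Under any growing ramp profile, a first-quarter miss against the even-split figure is consistent with full-year guidance still being met.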
Tamberrino reports that implied weekly production rates according to VIN registrations are falling below the level needed to exit the first quarter producing 2,500 vehicles per week. In fact, he estimates that the quarter-to-date pace is actually down on both a year-over-year and sequential basis. Based on this, he’s now expecting the automaker to report 11,000 Model S deliveries and 11,000 Model X deliveries, which would be a decline from the previous total of 24,000 combined.
He believes the decline is due to the first quarter being seasonally softer and also because fourth-quarter production was impacted by the labor shift toward the Model 3 and a sell-down of inventory. He expects only 7,000 Model 3 cars to be delivered during the first quarter based on VIN registration numbers, which imply that the company is producing about 1,000 of them per week “at times” in the first quarter.
Will Tesla Inc (TSLA) stock break this year?
The Goldman Sachs analyst maintains his Sell rating and six-month price target of $205 per share on Tesla Inc (TSLA) stock. He believes investors won’t be surprised if the company misses expectations for Model 3 deliveries again. Like Erickson, he notes that bulls are just looking past the April update into July, when the company plans to be producing 5,000 Model 3 cars per week.
He believes Tesla Inc (TSLA) stock bulls expect guidance to imply 2,500 Model 3 cars per week. Tamberrino also added to the anecdotes about Model 3 buyers canceling their orders, warning that he sees risk of cancellations due to the growing number of reports about vehicle quality and software problems. He also warned that if it becomes clear that Model 3 buyers are canceling their orders, Tesla Inc (TSLA) stock could come under a lot of pressure.
Tesla Inc (TSLA) stock plunged more than 3% in intraday trading on Monday, falling as low as $310 per share.
|
https://www.valuewalk.com/2018/03/tesla-inc-tsla-stock-tumbles-model-3/
|
Loam is a Michelin-starred restaurant and wine bar located just off Eyre Square in Galway city, whose philosophy is to use only ingredients that are native to the west of Ireland.
The restaurant opened its doors in November 2014 and was awarded its first Michelin star just ten months later (becoming only the second restaurant in Galway to receive the prestigious award). Since then, the restaurant has gone on to achieve further success with head chef and owner, Enda McEvoy, being voted ‘Best Chef in Ireland’ and Loam being named ‘Best Restaurant in Connacht’ at the Restaurants Association of Ireland Awards in May 2016.
Enda and his team focus on modern ambitious cooking rooted in tradition. Seasonally driven, they work very closely with local farmers and producers, many of whom are close friends, to get the products they need to reflect and capture the feeling and magic of the west of Ireland.
|
https://www.discoverireland.ie/galway/loam
|
The human foot and ankle form a complicated structure with 26 bones, 33 joints, and more than 100 tendons. The heel bone is the largest bone in your foot. Heel pain may develop if it is injured or overused. It is recommended to see an orthopedic or podiatry specialist to accurately diagnose the cause of your heel pain.
Heel Pain Causes and Treatments
There can be several causes of heel pain:
- Plantar Fasciitis is a painful condition that develops if you damage your plantar fascia ligament when too much pressure is applied regularly.
- Sprains and strains are common injuries of the body that often result from excessive physical activities.
- A fracture of the heel bone is a medical emergency, and urgent care must be sought for it.
- Achilles tendonitis is a painful condition in which the tendon that connects the calf muscles to the heel becomes inflamed due to excessive physical activity.
- Bursitis is inflammation of the bursae, the fluid-filled sacs in our joints; any condition that affects them can cause pain and joint problems.
- Heel bumps occur in teenagers as a result of using improper footwear and heels.
- Tarsal tunnel syndrome develops when a large nerve at the back of the foot becomes entrapped, causing pain.
- Chronic inflammation of the heel pad can develop either from heavy footsteps or from the heel pad becoming too thin.
Treatment of heel pain and plantar fasciitis requires the consultation of an expert podiatrist or orthopedic specialist. After running a few tests, the specialist will determine the cause of your heel pain and provide an appropriate treatment plan.
If you are experiencing heel swelling and pain, the first line of treatment is conservative:
- Rest as much as possible.
- Apply ice to the heel for 10 to 15 minutes twice a day.
- Use heel lifts or shoe inserts to reduce pain.
- Wear a night splint, a unique device that stretches the foot while sleeping.
- Take over-the-counter pain medications.
- Wear shoes that fit correctly.
After a few days of following this regimen, the swelling should subside and the pain should be gone; if that does not happen, you should see a specialist.
Several conditions may cause pain in the heel bone:
- Fractures in the bone – possible due to injury.
- Wearing high heels for a longer time.
- Wearing ill-fitting shoes for a longer period.
- Running on hard surfaces without proper shock-absorbing shoes.
To treat heel bone pain, rest as much as possible and take non-steroidal anti-inflammatory drugs and pain relievers; if the pain persists, you should see an orthopedic or podiatry specialist.
Heel Pain Diagnosis
To diagnose the real problem causing the heel pain, your doctor will conduct a series of tests.
- The feet are physically and visually examined for any visible signs of bruising or deformities, during both weight-bearing and non-weight-bearing activities.
- The specialist palpates the foot, inspecting for any swelling, deformities, tender spots, or alterations in the foot and arch bones.
- To identify problems in the foot muscles, the specialist may perform a test that involves holding or moving the feet against resistance.
- You may be asked to stand, stroll, or run.
- The foot’s skin is evaluated for any signs of infection, bruising, or breaks in the skin.
- Foot nerves are tested to rule out any injuries sustained by them.
- X-rays, MRIs, or bone scans of the foot and the arch are ordered to determine the bone structure or soft tissue’s damage or abnormalities.
- Your specialist may order blood tests to check for any systemic disease or disorder such as diabetes, rheumatoid arthritis, or gout.
Once the specialist is sure about the cause of the pain, they can commence proper ankle and heel pain treatment.
What is the treatment for heel pain?
Heel and arch pain treatment depends on the condition that is causing the pain in your foot. Your doctor might prescribe one or more of the following:
- Physical therapy.
- Strengthening exercises.
- Surgery.
- Massage therapy.
- Chiropractic therapy.
- PRP injections.
- Heel spur removal.
Call us now to book an appointment with one of the best Orthopedic Specialists in NJ and NY to escape heel pain.
FAQ’S
What Causes Pain in the Heel of the Foot?
- Several conditions can cause heel pain, but the most common is plantar fasciitis, which is usually associated with heel spur syndrome.
- Other causes of heel pain may include arthritis, fracture, cyst, or damaged nerves.
What Is the Best Treatment for Plantar Fasciitis?
- Pain medications such as NSAIDs are considered the best first-line treatment for plantar fasciitis.
- Ice can help, but do not apply it directly to the skin of the heel.
- Strengthening exercises for the lower foot are also considered a good treatment option for plantar fasciitis.
What Aggravates Plantar Fasciitis?
- This injury usually occurs as a result of overuse or putting extra pressure on the foot.
- It may also occur as a result of wearing improper footwear.
How Long Does Heel Pain Last?
- The heel tissues may become affected, causing sharp, radiating, or stabbing pain in the heel.
- The recovery timeline for plantar fasciitis may range from 6 to 18 months if it is left untreated.
- With non-surgical treatment, however, the success rate for most patients is 95-97%.
|
https://completemedicalwellness.com/departments/orthopedic/foot-and-ankle-procedures/heel-pain-treatment/
|
These are some general questions that new artists might encounter in their art journey. I attempt to answer them with my best effort. For technical tips about drawing and painting, check the "Teaching" page instead.
Q: What copyright laws and knowledge should an artist be aware of?
- You own the copyright to what you originally create. You don't take credit for someone else's work. Plain and simple.
- What defines "original work"? This question can take some effort to answer.
If you work exclusively from your own imagination, it's definitely your original work.
If your work involves recognizable human figures, you may need to take extra caution because of something called "personality rights".
If you take photos of strangers in a public setting (such as a park, a restaurant, a supermarket, etc.), you do not need any consent to use a particular likeness in your artwork. The reason is that fine art is not considered "commercial use". However, let's say if a publisher wants to use your artwork as the cover of a book, then it becomes "commercial" and you do need written permission from the person you got the likeness from.
Q: Can anyone learn to draw and paint well? Do I have to be talented to make good artwork?
A: I came up with a phrase that precisely sums up my attitude towards this question.
"If you can write your name and if you can tell egg yolk apart from egg white, you could be as great as anyone in the history of art...with a lot of hard work, of course".
What I mean by that is, if you can write your name, you obviously have the hand-eye coordination required to manipulate lines in different directions, so the physical aspect of drawing is no problem for you.
And if you can tell egg yolk apart from egg white, you obviously have the cognitive ability to distinguish different values and colors. So there, you have all the "innate talent" required to make great artwork. Now start practicing and unleash your creativity!
Q: How do I find my own art style? Will learning fundamentals make my artwork unimaginative and boring?
A: I used to struggle quite a bit with this question, but now I think I have a satisfactory answer to offer here. I know there are influential people in the art world who believe that training fundamentals will suppress artistic creativity. I don't agree with that, and here is my argument. Think about the art of writing, the art of music, the art of dancing, etc...which of these disciplines doesn't require rigorous fundamentals to start the journey? In writing, you have to first learn the alphabet, vocabulary, and grammar, whose rules are exactly the same for everybody, yet what you write about and in what style you write is completely up to you. I just can't see how learning the alphabet, vocabulary, and grammar can limit your imagination and creativity; those are merely tools you use so that what you write will make sense to other people. The same goes for art: fundamental art training is only meant to teach you the universal visual elements such as shape, value, color, edge, composition, texture, and form...whether you want to use them for realism, abstract art, modern expressionism, or whatever names you can come up with is totally your choice. So all in all, yes, you will have your own style eventually because your DNA is unique; and no, boring work is just boring work, it has nothing to do with good fundamentals. So learn the basic visual elements, and put your soul into them to make exciting artwork of your own.
Q: Do I really need to use expensive art supplies to make good art?
A: Absolutely not. Can you beat Roger Federer or Serena Williams just because you use the best tennis racket in the world while they use cheap, old rackets? I don't think so. Give Sargent or Schmid some dollar-store acrylic paint and brushes and they will still create masterpiece after masterpiece. Top-quality art supplies will do nothing for you if you have not mastered drawing, shape, value, color, edge, composition, texture, and form. So if I were you, I would rather spend the money on some quality books and DVDs, because low-end art supplies do not hinder your ability to learn and practice whatsoever. To be clear, I'm all for using the best materials you can afford, but if you only have $100, I would recommend spending it on a good instructional video rather than expensive art supplies.
Q: What is your purpose in the art world?
A: The purpose for me is to get to know my true self better through art, because art scratches an itch in my heart that nothing else could, so I just want to see where art can take me and what's truly deep inside my soul.
Q: How do I find the inspiration for art? How do I know what to draw and paint?
A: The generic answer would be to ask yourself - What fascinates you in everyday life? What are you passionate about? What perspective do you want to share with the world? Put them in your artwork.
|
https://henrywtian.weebly.com/faqs.html
|