On 2012-01-19 6:26 PM, Hendrik Boom wrote:
> Would text/plain be UTF-8?

It could be. (It probably should be.) Also, given the custom reader ability of the framework, #lang text/plain;charset=utf-8 would be, while #lang text/plain;charset=iso-8859-15 would not be. (How handy that ; is a comment character in the bootstrap reader!)

Aw, rats, I just tried that last example, and it doesn't work:

tt.rkt:1:0: read: expected only alphanumeric, `-', `+', `_', or `/' characters for `#lang', found ;

Regards,
Tony
The Toronto police homicide unit has taken over an investigation into a fatal shooting in Scarborough Friday morning. Emergency crews were called to a home near Sheppard Avenue East and Brimley Road just before 10 a.m. Police say two victims were taken to hospital by emergency run. One has since been pronounced dead, while the other remains in hospital. There is no word yet on their ages or genders. The investigation is ongoing.
[Exploration of attitudes about healthy nutrition based on a questionnaire survey]. Nowadays the number of people suffering from different non-communicable diseases is continuously rising. However, the risk of the incidence of these diseases can be reduced with the help of a conscious and healthy lifestyle. The main aim of the study was to explore Hungarian consumers' attitudes related to a healthy diet. A questionnaire survey was conducted with 473 respondents. According to the participants, it is difficult to make sense of the information available about healthy nutrition, and the Internet is the most frequently used source of information. With cluster analysis, 3 significantly different consumer groups were identified: participants in the "ambitious" group show a positive attitude towards a healthy diet; the "health conscious" cluster cares about and actively supports health and diet; and members of the "indifferent" cluster are less interested and do not make a notable effort towards a healthy diet. Results of the questionnaire survey pointed out the importance of targeting information to the relevant consumer groups, as well as the importance of popularizing accurate and reliable information sources. Furthermore, the presentation and popularization of cost-effective healthy nutrition are of outstanding importance, especially for consumers in need (e.g. the elderly and low-income people).
Taylorsville Elementary School Principal Chuck Abell, right, presents a $25 prize to Kaylee Smith, who won a poster contest at the school. The presentation was made April 29 during the school’s Olympic-style opening ceremony for state testing. Spencer County High School is seeking professionals to help conclude the senior projects for the class of 2011. “For the final component, each senior must participate in an exit interview with a panel of three to four professionals,” organizers said in a news release. Interviews will be from 7:45 a.m. to 2:15 p.m. on May 23 at the high school. Those interested in participating in these interviews should email their available times to [email protected].
His gritty backstory is that his abusive dad was a street hotdog vendor. He'd go around just force-feeding his victims sweet relish and hot mustard until their stomachs exploded, like the first victim in Se7en.
Q: Calculate the sum of i^2453467 mod 2453468 for 1<=i<=999999 (^ means power)

How do I solve this type of problem efficiently, in less time? I have tried to do this problem in Python but it's taking too much time. I had been thinking that maybe ^ means xor, not power, but according to them it was power. This is a problem from a coding competition.

A: In the general case, yes, you had better use modular exponentiation, and this is indeed rather simple, as @F.Ju showed. However, with a bit of math you can calculate the sum completely with pen and paper.[1]

The key thing to note is the fact that the exponent (2453467) is very close to the modulus (2453468), and this calls for a much simpler representation of x^2453467 mod 2453468. Indeed, if 2453468 were prime, then x^2453467 mod 2453468 would always be 1 by Fermat's little theorem. It is not prime, though, but it has the very simple factorization 2*2*613367. So we can recall Euler's theorem and find that phi(2453468) equals 1226732, and so 2453467 = 2*phi(2453468) + 3. Thus for every x that is relatively prime to 2453468 we have x^1226732 mod 2453468 = 1, and, as 2453467 is 1226732*2+3, we have x^2453467 mod 2453468 = x^3 mod 2453468.

Let us then consider the numbers that are not relatively prime to 2453468. In our range (1 to 999999) there are three kinds of numbers that are not relatively prime to 2453468. One is 613367, and it is relatively easy to prove that 613367^(2k+1) mod 2453468 = 613367 for each k. The other kind are numbers divisible by 4. For a number x=4k we need to find (4k)^2453467 mod (4*613367). It is equivalent to 4*(4^2453466*k^2453467 mod 613367) mod (4*613367), and by Fermat's little theorem this reduces to (4k)^3 mod (4*613367). The final kind is the numbers divisible by 2, but not 4, and they can be treated the same way as the previous kind.

As a result, we have that for every x from 1 to 999999, x^2453467 mod 2453468 = x^3 mod 2453468. Hence, we need to calculate sum(x^3) mod 2453468 for x from 1 to 999999. But the sum of cubes is, as is well known, just (n(n+1)/2)^2, with n being 999999. Therefore, our answer is 499999500000^2 mod 2453468, which evaluates to 2385752.

[1] Well, almost. I used Python to do simple arithmetic.

A: In Python code:

answer = 0
for i in range(1, 1000000):
    answer += pow(i, 2453467, 2453468)
print(answer % 2453468)

The speed seems fast enough.
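As a cross-check of the two answers, the sketch below (mine, not from the original posters) verifies in C that x^2453467 mod 2453468 equals x^3 mod 2453468 across the whole range, and compares the brute-force modular sum against the closed-form value:

/* Cross-check of the answers above: verify x^2453467 == x^3 (mod 2453468)
 * for all x in 1..999999, and compare the summed result against the
 * closed-form (n(n+1)/2)^2 mod 2453468. Illustrative sketch only. */
#include <stdio.h>

/* Square-and-multiply modular exponentiation; every intermediate fits in
 * 64 bits because the modulus is below 2^22. */
static unsigned long long modpow(unsigned long long base,
                                 unsigned long long exp,
                                 unsigned long long mod) {
    unsigned long long result = 1;
    base %= mod;
    while (exp > 0) {
        if (exp & 1) result = result * base % mod;
        base = base * base % mod;
        exp >>= 1;
    }
    return result;
}

int main(void) {
    const unsigned long long M = 2453468;
    unsigned long long sum = 0;
    for (unsigned long long x = 1; x <= 999999; ++x) {
        unsigned long long full = modpow(x, 2453467, M);
        unsigned long long cube = x * x % M * x % M;
        if (full != cube) { printf("mismatch at %llu\n", x); return 1; }
        sum = (sum + full) % M;
    }
    unsigned long long half = 499999500000ULL % M; /* n(n+1)/2 mod M */
    printf("brute force: %llu\n", sum);            /* expect 2385752 */
    printf("closed form: %llu\n", half * half % M);
    return 0;
}

It runs in well under a second, which is also why the pow(i, 2453467, 2453468) loop in the second answer is fast enough.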
/* Copyright (c) 2012 the authors listed at the following URL, and/or the authors of referenced articles or incorporated external code: http://en.literateprograms.org/Red-black_tree_(C)?action=history&offset=20120524204657 Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. Retrieved from: http://en.literateprograms.org/Red-black_tree_(C)?oldid=18555 */ #include "rbtree.h" #include <assert.h> #include <stdlib.h> typedef rbtree_node node; typedef enum rbtree_node_color color; static node grandparent(node n); static node sibling(node n); static node uncle(node n); static void verify_properties(rbtree t); static void verify_property_1(node root); static void verify_property_2(node root); static color node_color(node n); static void verify_property_4(node root); static void verify_property_5(node root); static void verify_property_5_helper(node n, int black_count, int* black_count_path); static node new_node(void* key, void* value, color node_color, node left, node right); static node lookup_node(rbtree t, void* key, compare_func compare); static void rotate_left(rbtree t, node n); static void rotate_right(rbtree t, node n); static void replace_node(rbtree t, node oldn, node newn); static void insert_case1(rbtree t, node n); static void insert_case2(rbtree t, node n); static void insert_case3(rbtree t, node n); static void insert_case4(rbtree t, node n); static void insert_case5(rbtree t, node n); static node maximum_node(node root); static void delete_case1(rbtree t, node n); static void delete_case2(rbtree t, node n); static void delete_case3(rbtree t, node n); static void delete_case4(rbtree t, node n); static void delete_case5(rbtree t, node n); static void delete_case6(rbtree t, node n); node grandparent(node n) { assert (n != NULL); assert (n->parent != NULL); /* Not the root node */ assert (n->parent->parent != NULL); /* Not child of root */ return n->parent->parent; } node sibling(node n) { assert (n != NULL); assert (n->parent != NULL); /* Root node has no sibling */ if (n == n->parent->left) return n->parent->right; else return n->parent->left; } node uncle(node n) { assert (n != NULL); assert (n->parent != NULL); /* Root node has no uncle */ assert (n->parent->parent != NULL); /* Children of root have no uncle */ return sibling(n->parent); } void verify_properties(rbtree t) { #ifdef VERIFY_RBTREE verify_property_1(t->root); verify_property_2(t->root); /* Property 3 is implicit */ verify_property_4(t->root); verify_property_5(t->root); #endif } void verify_property_1(node n) { assert(node_color(n) == RED || node_color(n) == BLACK); 
if (n == NULL) return; verify_property_1(n->left); verify_property_1(n->right); } void verify_property_2(node root) { assert(node_color(root) == BLACK); } color node_color(node n) { return n == NULL ? BLACK : n->color; } void verify_property_4(node n) { if (node_color(n) == RED) { assert (node_color(n->left) == BLACK); assert (node_color(n->right) == BLACK); assert (node_color(n->parent) == BLACK); } if (n == NULL) return; verify_property_4(n->left); verify_property_4(n->right); } void verify_property_5(node root) { int black_count_path = -1; verify_property_5_helper(root, 0, &black_count_path); } void verify_property_5_helper(node n, int black_count, int* path_black_count) { if (node_color(n) == BLACK) { black_count++; } if (n == NULL) { if (*path_black_count == -1) { *path_black_count = black_count; } else { assert (black_count == *path_black_count); } return; } verify_property_5_helper(n->left, black_count, path_black_count); verify_property_5_helper(n->right, black_count, path_black_count); } rbtree rbtree_create() { rbtree t = malloc(sizeof(struct rbtree_t)); t->root = NULL; verify_properties(t); return t; } node new_node(void* key, void* value, color node_color, node left, node right) { node result = malloc(sizeof(struct rbtree_node_t)); result->key = key; result->value = value; result->color = node_color; result->left = left; result->right = right; if (left != NULL) left->parent = result; if (right != NULL) right->parent = result; result->parent = NULL; return result; } node lookup_node(rbtree t, void* key, compare_func compare) { node n = t->root; while (n != NULL) { int comp_result = compare(key, n->key); if (comp_result == 0) { return n; } else if (comp_result < 0) { n = n->left; } else { assert(comp_result > 0); n = n->right; } } return n; } void* rbtree_lookup(rbtree t, void* key, compare_func compare) { node n = lookup_node(t, key, compare); return n == NULL ? 
NULL : n->value; } void rotate_left(rbtree t, node n) { node r = n->right; replace_node(t, n, r); n->right = r->left; if (r->left != NULL) { r->left->parent = n; } r->left = n; n->parent = r; } void rotate_right(rbtree t, node n) { node L = n->left; replace_node(t, n, L); n->left = L->right; if (L->right != NULL) { L->right->parent = n; } L->right = n; n->parent = L; } void replace_node(rbtree t, node oldn, node newn) { if (oldn->parent == NULL) { t->root = newn; } else { if (oldn == oldn->parent->left) oldn->parent->left = newn; else oldn->parent->right = newn; } if (newn != NULL) { newn->parent = oldn->parent; } } void rbtree_insert(rbtree t, void* key, void* value, compare_func compare) { node inserted_node = new_node(key, value, RED, NULL, NULL); if (t->root == NULL) { t->root = inserted_node; } else { node n = t->root; while (1) { int comp_result = compare(key, n->key); if (comp_result == 0) { n->value = value; /* inserted_node isn't going to be used, don't leak it */ free (inserted_node); return; } else if (comp_result < 0) { if (n->left == NULL) { n->left = inserted_node; break; } else { n = n->left; } } else { assert (comp_result > 0); if (n->right == NULL) { n->right = inserted_node; break; } else { n = n->right; } } } inserted_node->parent = n; } insert_case1(t, inserted_node); verify_properties(t); } void insert_case1(rbtree t, node n) { if (n->parent == NULL) n->color = BLACK; else insert_case2(t, n); } void insert_case2(rbtree t, node n) { if (node_color(n->parent) == BLACK) return; /* Tree is still valid */ else insert_case3(t, n); } void insert_case3(rbtree t, node n) { if (node_color(uncle(n)) == RED) { n->parent->color = BLACK; uncle(n)->color = BLACK; grandparent(n)->color = RED; insert_case1(t, grandparent(n)); } else { insert_case4(t, n); } } void insert_case4(rbtree t, node n) { if (n == n->parent->right && n->parent == grandparent(n)->left) { rotate_left(t, n->parent); n = n->left; } else if (n == n->parent->left && n->parent == grandparent(n)->right) { rotate_right(t, n->parent); n = n->right; } insert_case5(t, n); } void insert_case5(rbtree t, node n) { n->parent->color = BLACK; grandparent(n)->color = RED; if (n == n->parent->left && n->parent == grandparent(n)->left) { rotate_right(t, grandparent(n)); } else { assert (n == n->parent->right && n->parent == grandparent(n)->right); rotate_left(t, grandparent(n)); } } void rbtree_delete(rbtree t, void* key, compare_func compare) { node child; node n = lookup_node(t, key, compare); if (n == NULL) return; /* Key not found, do nothing */ if (n->left != NULL && n->right != NULL) { /* Copy key/value from predecessor and then delete it instead */ node pred = maximum_node(n->left); n->key = pred->key; n->value = pred->value; n = pred; } assert(n->left == NULL || n->right == NULL); child = n->right == NULL ? 
n->left : n->right; if (node_color(n) == BLACK) { n->color = node_color(child); delete_case1(t, n); } replace_node(t, n, child); if (n->parent == NULL && child != NULL) child->color = BLACK; free(n); verify_properties(t); } static node maximum_node(node n) { assert (n != NULL); while (n->right != NULL) { n = n->right; } return n; } void delete_case1(rbtree t, node n) { if (n->parent == NULL) return; else delete_case2(t, n); } void delete_case2(rbtree t, node n) { if (node_color(sibling(n)) == RED) { n->parent->color = RED; sibling(n)->color = BLACK; if (n == n->parent->left) rotate_left(t, n->parent); else rotate_right(t, n->parent); } delete_case3(t, n); } void delete_case3(rbtree t, node n) { if (node_color(n->parent) == BLACK && node_color(sibling(n)) == BLACK && node_color(sibling(n)->left) == BLACK && node_color(sibling(n)->right) == BLACK) { sibling(n)->color = RED; delete_case1(t, n->parent); } else delete_case4(t, n); } void delete_case4(rbtree t, node n) { if (node_color(n->parent) == RED && node_color(sibling(n)) == BLACK && node_color(sibling(n)->left) == BLACK && node_color(sibling(n)->right) == BLACK) { sibling(n)->color = RED; n->parent->color = BLACK; } else delete_case5(t, n); } void delete_case5(rbtree t, node n) { if (n == n->parent->left && node_color(sibling(n)) == BLACK && node_color(sibling(n)->left) == RED && node_color(sibling(n)->right) == BLACK) { sibling(n)->color = RED; sibling(n)->left->color = BLACK; rotate_right(t, sibling(n)); } else if (n == n->parent->right && node_color(sibling(n)) == BLACK && node_color(sibling(n)->right) == RED && node_color(sibling(n)->left) == BLACK) { sibling(n)->color = RED; sibling(n)->right->color = BLACK; rotate_left(t, sibling(n)); } delete_case6(t, n); } void delete_case6(rbtree t, node n) { sibling(n)->color = node_color(n->parent); n->parent->color = BLACK; if (n == n->parent->left) { assert (node_color(sibling(n)->right) == RED); sibling(n)->right->color = BLACK; rotate_left(t, n->parent); } else { assert (node_color(sibling(n)->left) == RED); sibling(n)->left->color = BLACK; rotate_right(t, n->parent); } }
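For readers who want to exercise the listing above, here is a minimal, hypothetical usage sketch. It assumes the rbtree.h header from the same literateprograms article (declaring rbtree, compare_func, and the four public functions used below); the integer-through-void* keying is my own illustrative choice, not part of the original code.

#include <stdio.h>
#include <stdint.h>
#include "rbtree.h"  /* assumed header from the same article */

/* Order keys as signed integers smuggled through void* (illustrative). */
static int compare_int(void* left, void* right) {
    intptr_t l = (intptr_t)left, r = (intptr_t)right;
    return (l > r) - (l < r);
}

int main(void) {
    rbtree t = rbtree_create();
    /* Map each key i in 0..9 to the value i*i. */
    for (intptr_t i = 0; i < 10; ++i)
        rbtree_insert(t, (void*)i, (void*)(i * i), compare_int);
    /* Expect 49 for key 7. */
    printf("lookup(7) = %ld\n",
           (long)(intptr_t)rbtree_lookup(t, (void*)(intptr_t)7, compare_int));
    rbtree_delete(t, (void*)(intptr_t)7, compare_int);
    /* After deletion, rbtree_lookup returns NULL, printed here as 0. */
    printf("lookup(7) = %ld\n",
           (long)(intptr_t)rbtree_lookup(t, (void*)(intptr_t)7, compare_int));
    return 0;
}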
Q: Silverlight 4 ComboBox - Bug when using OneWay binding on SelectedItem

This is the purest example I can give. I have a simple ComboBox:

<ComboBox ItemsSource="{Binding ItemsSource}" SelectedItem="{Binding SelectedItem, Mode=OneWay}"/>

This is the CodeBehind:

public partial class MainPage : UserControl, INotifyPropertyChanged
{
    private List<string> m_ItemsSource;
    public List<string> ItemsSource
    {
        get { return m_ItemsSource; }
        set { m_ItemsSource = value; PropertyChanged(this, new PropertyChangedEventArgs("ItemsSource")); }
    }

    private string m_SelectedItem;
    public string SelectedItem
    {
        get { return m_SelectedItem; }
        set { m_SelectedItem = value; PropertyChanged(this, new PropertyChangedEventArgs("SelectedItem")); }
    }

    public MainPage()
    {
        InitializeComponent();
        DataContext = this;
        ItemsSource = new List<string>() { "Value A", "Value B" };
    }

    private void button1_Click(object sender, RoutedEventArgs e)
    {
        SelectedItem = "Value A";
    }

    private void button2_Click(object sender, RoutedEventArgs e)
    {
        SelectedItem = "Value B";
    }

    public event PropertyChangedEventHandler PropertyChanged;
}

For some reason, the SelectedItem in the ComboBox updates correctly on the first button click but then stops responding. But strangely enough, when changing to Mode=TwoWay, it works. I specifically need a OneWay binding and don't want the ComboBox to change the property. Is it a known bug or some weird design decision?

A: It's a known bug and it seems it's still happening in Silverlight 4. Always use TwoWay binding with SelectedItem. Binding ComboBox.SelectedItem in Silverlight
It’s not known precisely when the Browns will be making changes to the football operations, but changes to the football operations seem to be coming. Per a league source with knowledge of the situation, the 0-6 Browns (1-21 since the start of last season) have begun reaching out to candidates to potentially join the organization’s front office. The Browns currently are targeting football executives, and for good reason. Currently, the front office is run by executive V.P. of football operations Sashi Brown, a lawyer who has never worked as a scout. Former baseball executive Paul DePodesta serves as the team’s chief strategy officer. The team has no General Manager. The Browns have two more games before a bye, with the eighth game of the season coming in London. It’s unclear whether changes would be made during the season, or whether it would be an addition or an addition and one or more subtractions.
Burraubach The Burraubach is a river in the Sigmaringen district in Baden-Württemberg, Germany. It flows for about 5.7 kilometres. It flows into the Kehlbach near Pfullendorf. See also List of rivers of Baden-Württemberg References External links Category:Rivers of Baden-Württemberg Category:Rivers of Germany
Doctors Explore Connection Between Sleep Apnea and Heart Failure Aug. 31--Tyrone Conner's heart was in such bad shape that he could barely walk up a flight of steps. "I felt like I was 80 years old," said Conner, 50, of Norristown. He also suffered from sleep apnea, snoring heavily and gasping for breath every night. What he did not initially realize was that the two problems were linked. Conner's physicians, at Thomas Jefferson University Hospital, made the connection, but many do not. Sleep apnea afflicts as many as 60 percent of patients with heart failure -- the term for a weakened heart muscle that cannot keep up with the body's demands. Yet only 2 percent of them nationwide are treated for the nocturnal breathing problem, said Sunil Sharma, associate director of the Jefferson Sleep Disorders Center. Physicians do not fully understand how heart failure and apnea are related, but evidence suggests that each can contribute to the other, and thus treating one of the conditions can alleviate both, said Fredric L. Ginsberg, director of the heart failure program at Cooper University Health Care in South Jersey. "It's hard to know sometimes in individual patients what is the chicken and what is the egg," Ginsberg said. Jefferson and Cooper physicians are involved in separate trials of devices that treat sleep apnea and may, they hope, address heart disease into the bargain. An earlier version of one of the devices seems to have worked for Conner, his physicians say. Out of breath Conner, a paramedic at Jefferson's surgical intensive care unit, had known for years that he was not getting enough sleep at night. He often felt sleepy, and sometimes would take a quick nap while on a work break. A coworker told him he snored heavily and sometimes stopped breathing entirely. He also knew something was amiss with his heart, to the point that he felt short of breath even from walking across the street from work to a Wawa store. The road to recovery began in 2008, when he went to the emergency room with severe abdominal pain. His abdomen turned out to be OK, but additional tests revealed that his heart was weak and enlarged, failing to pump enough blood to the rest of his body. He was treated with standard heart-failure medicines such as beta blockers and diuretics, but his condition grew so bad that cardiologist Paul J. Mather determined he was eligible for a transplant. In the meantime, Conner was surprised to learn that he might get some relief from a device that would help him sleep. There are two common forms of sleep apnea. One is obstructive sleep apnea, in which the muscles supporting the airway relax in such a way that it collapses -- a condition often accompanied by heavy snoring and repeated awakening. This is often treated with a continuous positive airway pressure device -- a CPAP, pronounced SEE-pap -- that props the airway open by delivering air through a mask. The other primary form of the disease is central sleep apnea, in which the patient "forgets" to breathe. That is, the brain fails to send the proper automatic message that causes the lungs to inhale. The exact cause of this breakdown is not fully understood, but is believed to be due to the ebb and flow in levels of carbon dioxide in the blood, and may be exacerbated by lung congestion, stress hormones or a weak heart muscle, Sharma said. In any event, a standard CPAP device may be of little use for someone with central sleep apnea, said Mather, director of Jefferson's Advanced Heart Failure and Cardiac Transplant Center. 
"Giving oxygen to a body that doesn't inhale won't really help," he said. Jefferson is one of 10 centers testing a "smart" device made by San Diego-based ResMed. As with a CPAP, the patient receives air through a mask, but the device also senses how well the patient is breathing and increases the air pressure as necessary to prompt inhalation. While the device already is approved by the Food and Drug Administration for the treatment of central sleep apnea, the new trial will focus on whether it reduces symptoms and hospitalizations associated with heart failure. These "adaptive servo-ventilation" devices cost about $5,000 and are generally covered by insurers, said Adam Benjafield, ResMed vice president of medical affairs. At Cooper University and at the Hospital of the University of Pennsylvania, meanwhile, physicians are implanting a pacemakerlike device that stimulates the patient's phrenic nerve, which signals the diaphragm to contract in the breathing process. The device, made by Respicardia Inc. of Minnetonka, Minn., is approved for use in Europe but not yet in the U.S. It senses when the patient has stopped breathing and delivers an electrical stimulus as necessary, said company chief executive officer Bonnie Labosky. In Conner's case, Jefferson sleep doctor and pulmonologist Ritu G. Grewal had him try a series of different breathing machines and masks and settings over the course of a year, ultimately settling on one of ResMed's smart devices. Conner suffered primarily from obstructive sleep apnea but also to some degree from the central variety, she said. The apnea treatment worked so well that, in conjunction with heart medications, Conner was eventually able to come off the transplant list, said Mather, the cardiologist. The nightly deprivation of oxygen was causing Conner's heart muscle to become scarred and stiff, Mather said. But the sleep apnea treatment helped arrest the damage before it got too far, he said. Conner was on leave from his job for more than two years while recovering, finally returning to work in 2011. At one point, eager to show Mather he was ready to return, he walked to Jefferson from SEPTA's 69th Street Terminal in Upper Darby -- a distance of more than 5 miles. "I got my life back," Conner said. Now, for the last two years, Jefferson's practice has been to conduct sleep tests on all patients admitted for heart failure, Mather said. Jefferson physicians now are expanding that practice to patients seen in outpatient clinics, he said. Any time they encounter someone like Tyrone Conner, they plan to be ready.
1. Technical Field

The present invention relates to a three-dimensional image display apparatus and method for directly correlating 3D data (three-dimensional shape data) measured on a measuring object and an image (single photographic image or stereo image) of the measuring object, and concurrently displaying the measurement data and the image of the measuring object.

2. Related Art

Methods for obtaining 3D data on a working object or a manufacturing object include an approach of obtaining data with a three-dimensional measurement apparatus (total station), and a stereo image measurement approach of obtaining 3D data by stereo measurement of a stereo image photographed using a measuring object and a comparative calibrated body. The approach using a three-dimensional measurement apparatus is superior in accuracy of obtained 3D coordinates and is therefore used to measure reference positions in affixing an image. In particular, recent motor-driven total stations have become capable of obtaining a relatively large number (about several tens, for example, for each measuring object) of three-dimensional coordinates. In the stereo image measurement approach, several thousand to several tens of thousands of three-dimensional coordinates can be obtained relatively conveniently by performing work called orientation when affixing 3D data to an image. However, 3D data obtained with a three-dimensional position measurement apparatus is basically constituted of three-dimensional coordinate data including distance data. Therefore, problems have become apparent that it is difficult to correlate 3D data obtained with the three-dimensional position measurement apparatus and the site conditions, and that it is difficult to decide which part of a measuring object is measured in tying the 3D data with image information on the measuring object. On the other hand, in the stereo image measurement approach, 3D measurement is performed on a stereo image to display the image in stereo, so that 3D data and the stereo image can be compared. However, a stereoscopic monitor or deflector glasses are required to compare the 3D data and the stereo image. In addition, some people can achieve stereoscopic vision well while others cannot. That is, there has been a problem that not everyone can always achieve recognition easily. Also, in the stereo measurement of the measuring object or the tying work between the 3D data and the stereo image, correlation needs to be established between the image information and the 3D data or between the images, to perform orientation work. The orientation work involves great differences among individuals and thus cannot be performed with ease and accuracy, which has been another problem. Further, there has been a demand from customers for a stereoscopic display of a measuring object even in cases of a single photographic image, which is a single photograph representing the measuring object. A first object of the present invention, which has been made to solve the foregoing problems, is to provide a three-dimensional image display apparatus and method for integrating and visualizing 3D measurement data obtained from a stereo image with an image of a measuring object to which stereoscopic texture is applied.
Also, a second object of the present invention, which has been made to solve the foregoing problems, is to provide a three-dimensional image display apparatus and method for conveniently extracting a characteristic point included in a stereo image, to conveniently integrate 3D measurement data obtained from the stereo image with an image of a measuring object to which stereoscopic texture is applied. A third object of the present invention, which has been made to solve the foregoing problems, is to provide a three-dimensional image display apparatus and method for using an image of a measuring object and separately obtained 3D measurement data, to conveniently create a stereoscopic two-dimensional image of the measuring object. A fourth object is to provide a three-dimensional image display apparatus and method for using a stereo image of a measuring object and separately obtained 3D measurement data, to create a stereoscopic two-dimensional image of the measuring object with accuracy. A fifth object is to provide a three-dimensional image display apparatus and method for simplifying photographing work of a measuring object.
Q: Where does a magnet/torrent client look for the hash/torrent/file? In short: Wikipedia mentions a required "availability search" to find peers (and the actual file): Note that, although a particular file is indicated, an availability search for it must still be carried out by the client application. Where does the client look? Does a magnet link require a tracker URI or is that up to the client's network? More info: A certain magnet URI/URN from tpb looks like this: magnet:?xt=urn:btih:e9b785fc2d70811a72df5a76bb34bd2eaf9df956&dn=Dances+with+Wolves+1990+20th+Anniversary+Extended+Cut+720p+BRRip&tr=udp%3A%2F%2Ftracker.openbittorrent.com%3A80&tr=udp%3A%2F%2Ftracker.publicbt.com%3A80&tr=udp%3A%2F%2Ftracker.istole.it%3A6969&tr=udp%3A%2F%2Ftracker.ccc.de%3A80 It contains 4 tr query params with (I suppose) tracker locations that contain some sort of hash index. However, Wikipedia doesn't mention the tr param, so I assume it's not mandatory. Where does a client start looking for the file if no tracker URI's are included? And if there are? I can imagine a torrent client (like uTorrent) itself having an enormous index of file hashes. A: The client will use DHT and Peer Exchange to look for clients if no trackers are provided. A: If trackers are listed, the client will query them first. If none are listed, DHT is used to query other clients for copies of the file, and then PEX kicks in to find more copies once the first has been found. Even if trackers are found, the client may still leverage DHT to find additional peers. The trackerless approach is analogous to the Gnutella(2) network if you were familiar with its operation.
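To make the structure discussed above concrete, here is a small sketch (my own, not from the answers) that splits a magnet URI into its parameters and picks out the btih info-hash, which is the key the DHT lookup is performed on, plus any optional tr= tracker entries:

#include <stdio.h>
#include <string.h>

int main(void) {
    /* Shortened example URI; real ones carry more parameters. */
    char uri[] = "magnet:?xt=urn:btih:e9b785fc2d70811a72df5a76bb34bd2eaf9df956"
                 "&dn=Example+Name&tr=udp%3A%2F%2Ftracker.openbittorrent.com%3A80";
    char *query = strchr(uri, '?');
    if (query == NULL) return 1;
    /* Parameters are &-separated key=value pairs after the '?'. */
    for (char *p = strtok(query + 1, "&"); p != NULL; p = strtok(NULL, "&")) {
        if (strncmp(p, "xt=urn:btih:", 12) == 0)
            printf("info-hash (used as the DHT key): %s\n", p + 12);
        else if (strncmp(p, "tr=", 3) == 0)
            printf("optional tracker (URL-encoded): %s\n", p + 3);
        else if (strncmp(p, "dn=", 3) == 0)
            printf("display name: %s\n", p + 3);
    }
    return 0;
}

A real client would URL-decode the tr values before contacting the trackers, and would fall back to DHT bootstrap nodes when no tr parameters are present.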
Police spent £111,000 last year on a crackdown on an anti-war protest outside Westminster. Brian Haw, 57, has held vigil in Parliament Square for six years, using a megaphone to attack the government policy on Iraq. Seventy-eight officer shifts were devoted to the overnight raid to scale back Mr Haw's encampment on 23 May 2006, Scotland Yard figures show. Mr Haw, of Redditch, Worcs, has blocked several attempts to have him removed.

Legal battle

He has staged a continuous vigil against the Iraq war outside Parliament since 2 June 2001. Mr Haw won a legal battle to remain in place due to a drafting error in a new law banning unauthorised protests in Westminster. The Serious Organised Crime and Police Act 2005 states anyone wanting to demonstrate in a 1km (0.62 miles) "exclusion zone" around Parliament must seek permission from the police. He was granted permission to continue his protest but on the condition that his placards, which were spread over 40m, were reduced to just 3m. During last May's raid, police moved in to enforce the conditions - putting many banners, pictures and placards in a large metal container.

Extra patrols

A Scotland Yard report delivered to the Metropolitan Police Authority (MPA) stressed "a large proportion of costs quoted do not represent additional costs to the MPS. "Rather, the officers and other staff assigned to a given operation would be assigned to other policing duties or operations," the document added. The figure included in the report is more than four times greater than the £27,000 previously estimated. However, that original figure did not reflect the cost of extra police patrols of the area in days following the raid. "The final cost incorporates the high visibility patrols over the following days to ensure any subsequent but related protests did not break the law, obstruct movement in and around the area or disrupt the important business of parliament," a Scotland Yard spokesman said. A further 358 officer shifts were devoted to these "high visibility" patrols. Liberal Democrat home affairs spokesman Nick Clegg said government attempts to remove Mr Haw had been expensive and "laughably incompetent". "The government and the police have contrived to make a mountain out of a molehill," he said. Jenny Jones, who represents the Green Party on the London Assembly and MPA, called for Scotland Yard to stop enforcing the legislation. "The commissioner should tell his officers to back off from the protesters and focus on the real problems faced by Londoners," she said.
#include "RT_Class.h" #include "ORB_Holder.h" #include "Servant_var.h" #include "RIR_Narrow.h" #include "RTServer_Setup.h" #include "Send_Task.h" #include "Client_Group.h" #include "ORB_Task.h" #include "ORB_Task_Activator.h" #include "Low_Priority_Setup.h" #include "EC_Destroyer.h" #include "Client_Options.h" #include "orbsvcs/Event_Service_Constants.h" #include "tao/Messaging/Messaging.h" #include "tao/Strategies/advanced_resource.h" #include "tao/RTCORBA/Priority_Mapping_Manager.h" #include "tao/RTCORBA/Continuous_Priority_Mapping.h" #include "tao/RTPortableServer/RTPortableServer.h" #include "ace/High_Res_Timer.h" #include "ace/Sample_History.h" #include "ace/Basic_Stats.h" #include "ace/Stats.h" #include "ace/Sched_Params.h" #include "ace/Barrier.h" int ACE_TMAIN (int argc, ACE_TCHAR *argv[]) { const CORBA::Long experiment_id = 1; RT_Class rt_class; try { ORB_Holder orb (argc, argv, ""); Client_Options options (argc, argv); if (argc != 1) { ACE_ERROR_RETURN ((LM_ERROR, "Usage: %s " "-i iterations (iterations) " "-h high_priority_period (usecs) " "-l low_priority_period (usecs) " "-w high_priority_workload (usecs) " "-v low_priority_workload (usecs) " "-r (enable RT-CORBA) " "-n nthreads (low priority thread) " "-d (dump history) " "-z (disable low priority) " "\n", argv [0]), 1); } RTServer_Setup rtserver_setup (options.use_rt_corba, orb, rt_class, 1 // options.nthreads ); PortableServer::POA_var root_poa = RIR_Narrow<PortableServer::POA>::resolve (orb, "RootPOA"); PortableServer::POAManager_var poa_manager = root_poa->the_POAManager (); poa_manager->activate (); PortableServer::POA_var the_poa (rtserver_setup.poa ()); ACE_Thread_Manager my_thread_manager; ORB_Task orb_task (orb); orb_task.thr_mgr (&my_thread_manager); ORB_Task_Activator orb_task_activator (rt_class.priority_high (), rt_class.thr_sched_class (), 1, &orb_task); ACE_DEBUG ((LM_DEBUG, "Finished ORB and POA configuration\n")); CORBA::Object_var object = orb->string_to_object (options.ior); RtecEventChannelAdmin::EventChannel_var ec = RtecEventChannelAdmin::EventChannel::_narrow (object.in ()); EC_Destroyer ec_destroyer (ec.in ()); CORBA::PolicyList_var inconsistent_policies; (void) ec->_validate_connection (inconsistent_policies); ACE_DEBUG ((LM_DEBUG, "Found EC, validated connection\n")); int thread_count = 1 + options.nthreads; ACE_Barrier the_barrier (thread_count); ACE_DEBUG ((LM_DEBUG, "Calibrating high res timer ....")); ACE_High_Res_Timer::calibrate (); ACE_High_Res_Timer::global_scale_factor_type gsf = ACE_High_Res_Timer::global_scale_factor (); ACE_DEBUG ((LM_DEBUG, "Done (%d)\n", gsf)); CORBA::Long event_range = 1; if (options.funky_supplier_publication) { if (options.unique_low_priority_event) event_range = 1 + options.low_priority_consumers; else event_range = 2; } Client_Group high_priority_group; high_priority_group.init (experiment_id, ACE_ES_EVENT_UNDEFINED, event_range, options.iterations, options.high_priority_workload, gsf, the_poa.in (), the_poa.in ()); Auto_Disconnect<Client_Group> high_priority_disconnect; if (!options.high_priority_is_last) { high_priority_group.connect (ec.in ()); high_priority_disconnect = &high_priority_group; } int per_thread_period = options.low_priority_period; if (options.global_low_priority_rate) per_thread_period = options.low_priority_period * options.nthreads; Low_Priority_Setup<Client_Group> low_priority_setup ( options.low_priority_consumers, 0, // no limit on the number of iterations options.unique_low_priority_event, experiment_id, ACE_ES_EVENT_UNDEFINED + 2, 
options.low_priority_workload, gsf, options.nthreads, rt_class.priority_low (), rt_class.thr_sched_class (), per_thread_period, the_poa.in (), the_poa.in (), ec.in (), &the_barrier); if (options.high_priority_is_last) { high_priority_group.connect (ec.in ()); high_priority_disconnect = &high_priority_group; } Send_Task high_priority_task; high_priority_task.init (options.iterations, options.high_priority_period, 0, ACE_ES_EVENT_UNDEFINED, experiment_id, high_priority_group.supplier (), &the_barrier); high_priority_task.thr_mgr (&my_thread_manager); { // Artificial scope to wait for the high priority task... Task_Activator<Send_Task> high_priority_act (rt_class.priority_high (), rt_class.thr_sched_class (), 1, &high_priority_task); } ACE_DEBUG ((LM_DEBUG, "(%P|%t) client - high priority task completed\n")); low_priority_setup.stop_all_threads (); ACE_DEBUG ((LM_DEBUG, "(%P|%t) client - low priority task(s) stopped\n")); ACE_Sample_History &history = high_priority_group.consumer ()->sample_history (); if (options.dump_history) { history.dump_samples (ACE_TEXT("HISTORY"), gsf); } ACE_Basic_Stats high_priority_stats; history.collect_basic_stats (high_priority_stats); high_priority_stats.dump_results (ACE_TEXT("High Priority"), gsf); ACE_Basic_Stats low_priority_stats; low_priority_setup.collect_basic_stats (low_priority_stats); low_priority_stats.dump_results (ACE_TEXT("Low Priority"), gsf); ACE_DEBUG ((LM_DEBUG, "(%P|%t) client - starting cleanup\n")); } catch (const CORBA::Exception& ex) { ex._tao_print_exception ("Exception caught:"); return 1; } return 0; }
<?php

namespace rbac\controllers;

use Yii;
use rbac\models\Assignment;
use rbac\models\searchs\Assignment as AssignmentSearch;
use yii\web\Controller;
use yii\web\NotFoundHttpException;
use yii\filters\VerbFilter;

/**
 * AssignmentController implements the CRUD actions for Assignment model.
 *
 * @author Misbahul D Munir <[email protected]>
 * @since 1.0
 */
class AssignmentController extends Controller
{
    public $userClassName;
    public $idField = 'id';
    public $usernameField = 'username';
    public $fullnameField;
    public $searchClass;
    public $extraColumns = [];

    /**
     * @inheritdoc
     */
    public function init()
    {
        parent::init();
        if ($this->userClassName === null) {
            $this->userClassName = Yii::$app->getUser()->identityClass;
            $this->userClassName = $this->userClassName ? : 'rbac\models\User';
        }
    }

    /**
     * @inheritdoc
     */
    public function behaviors()
    {
        return [
            'verbs' => [
                'class' => VerbFilter::className(),
                'actions' => [
                    'assign' => ['post'],
                    'revoke' => ['post'],
                ],
            ],
        ];
    }

    /**
     * Lists all Assignment models.
     * @return mixed
     */
    public function actionIndex()
    {
        if ($this->searchClass === null) {
            $searchModel = new AssignmentSearch;
            $dataProvider = $searchModel->search(Yii::$app->getRequest()->getQueryParams(), $this->userClassName, $this->usernameField);
        } else {
            $class = $this->searchClass;
            $searchModel = new $class;
            $dataProvider = $searchModel->search(Yii::$app->getRequest()->getQueryParams());
        }

        return $this->render('index', [
            'dataProvider' => $dataProvider,
            'searchModel' => $searchModel,
            'idField' => $this->idField,
            'usernameField' => $this->usernameField,
            'extraColumns' => $this->extraColumns,
        ]);
    }

    /**
     * Displays a single Assignment model.
     * @param integer $id
     * @return mixed
     */
    public function actionView($id)
    {
        $model = $this->findModel($id);

        return $this->render('view', [
            'model' => $model,
            'idField' => $this->idField,
            'usernameField' => $this->usernameField,
            'fullnameField' => $this->fullnameField,
        ]);
    }

    /**
     * Assign items
     * @param string $id
     * @return array
     */
    public function actionAssign($id)
    {
        $items = Yii::$app->getRequest()->post('items', []);
        $model = new Assignment($id);
        $success = $model->assign($items);
        Yii::$app->getResponse()->format = 'json';

        return array_merge($model->getItems(), ['success' => $success]);
    }

    /**
     * Revoke items
     * @param string $id
     * @return array
     */
    public function actionRevoke($id)
    {
        $items = Yii::$app->getRequest()->post('items', []);
        $model = new Assignment($id);
        $success = $model->revoke($items);
        Yii::$app->getResponse()->format = 'json';

        return array_merge($model->getItems(), ['success' => $success]);
    }

    /**
     * Finds the Assignment model based on its primary key value.
     * If the model is not found, a 404 HTTP exception will be thrown.
     * @param integer $id
     * @return Assignment the loaded model
     * @throws NotFoundHttpException if the model cannot be found
     */
    protected function findModel($id)
    {
        $class = $this->userClassName;
        if (($user = $class::findIdentity($id)) !== null) {
            return new Assignment($id, $user);
        } else {
            throw new NotFoundHttpException('The requested page does not exist.');
        }
    }
}
[Management of hypertension in pregnancy]. Hypertensive states of pregnancy are a set of disorders that occur during gestation whose common nexus is hypertension. They must be given special emphasis due to their involvement in maternal and neonatal morbidity and mortality. A classification is made of the different hypertensive states, with special emphasis placed on preeclampsia. This article defines the symptoms and signs of the disease, and a differential diagnosis is made among the diseases that must be ruled out. It is important to identify expectant mothers with preeclampsia, and it is of even greater importance in such cases to check for any criteria of severity, as this will enable a different management to be carried out. The article includes the indications for ending the pregnancy and the timing for doing so. Similarly, it details the monitoring that must be performed if expectant management is chosen for the benefit of the premature baby. The different anti-hypertensive therapeutic options are detailed, as well as the prophylactic treatment of eclampsia with magnesium sulphate. Because of their intrinsic interest, we draw special attention to HELLP syndrome and to eclampsia as complications. The treatment and course of action to be followed during gestation are described.
Angkor Extra Stout Angkor Extra Stout is a Cambodian beer. It is brewed at the Cambrew Brewery in Sihanoukville by Angkor Beer. References External links Official website Category:Beer in Cambodia
Q: Are all finite sets of numbers decidable?

I'm reading Peter Smith's Goedel's Theorems and am embarrassingly suspicious of the claim (theorem 3.1) that all finite sets are effectively decidable. For example, the set $\{BB(7918)\}$ has one member (which I think is definable in arithmetic), the value of which is independent of ZFC. The proof given in the book seems to rely on a finite set consisting of a concrete list of concrete numbers (rather than being defined by a predicate), and hence having a decidable characteristic function that simply checks the argument against all members of the set. What am I missing? If I'm not missing anything, what do we lose by realising that some finite sets aren't decidable?

A: We have to distinguish between sets and definitions of sets.

A set is decidable (or computable, or recursive - the subject has unfortunately redundant terminology!) iff some Turing machine decides it. Every finite set is decidable since we can always "hard-code" a Turing machine to accept a given finite set: fixing $a_1,...,a_n$, just write a program which on input $k$ checks whether $k=a_1$, whether $k=a_2$,... , whether $k=a_n$, and outputs "YES" if the answer to one of these questions is YES and outputs "NO" otherwise.

However, some definitions of very simple sets appear computationally intractable. For example, for any sentence $\varphi$ let $$True_\varphi=\{x: (x=0\mbox{ and $\varphi$ is true})\mbox{ or } (x=1\mbox{ and $\varphi$ is false})\}.$$ Obviously $True_\varphi$ defines a decidable set (either $\{0\}$ or $\{1\}$), but we don't know which. This suggests the following notion (the terminology below is my own, I don't know if there's a more common one):

Say that a definition $\delta$ of a set of natural numbers is concrete iff there is some Turing machine $M$ such that $\mathsf{ZFC}$ (say) proves that $M$ decides the set defined by $\delta$.

(I'm being a bit vague here about what exactly I mean by "definition" - it doesn't really matter, but if you like we can take it to mean "formula in the language of set theory with one free variable which $\mathsf{ZFC}$ proves only ever holds of natural numbers.")

Before moving on, let me give a quick caveat: Not every decidable set has a concrete description! The concretely-describable sets form a proper subclass of the decidable sets; every finite set is indeed concretely describable, though, so for now this is a reasonable thing to think about. (See the end of this answer for more on this.)

The point, then, is the following: Not every definition of a finite set need be concrete. In particular, "The set containing exactly the $7918$th value of the Busy Beaver function" is not a concrete definition of a set; however, the set it defines is finite and does in fact have some concrete definition, we just don't know what it is.

OK, now let me get back to the point above about concrete describability vs. decidability. Since we can search through $\mathsf{ZFC}$-proofs (and assuming $\mathsf{ZFC}$ is sound), we can compute an enumeration $(M_i)_{i\in\mathbb{N}}$ of all Turing machines which $\mathsf{ZFC}$ proves decide a set - that is, which $\mathsf{ZFC}$ proves always halt on all inputs. But now we can diagonalize against these: the set $$D_{\mathsf{ZFC}}=\{i: M_i(i)=0\}$$ is decidable but has no concrete description. It's a good exercise to check why the above definition of $D_{\mathsf{ZFC}}$ is not in fact concrete!
Also, note that $\mathsf{ZFC}$ isn't really important here - we can replace $\mathsf{ZFC}$ throughout with any other "appropriate" theory, like first-order Peano arithmetic. In particular, "The set of numbers decided by machine $M$" is not concrete in general ... since maybe $\mathsf{ZFC}$ doesn't prove that $M$ actually decides a set! This is one of many situations in computability theory where semidecidability (or computable enumerability, or recursive enumerability) is a much tamer notion than decidability: while it's hard to tell whether a Turing machine decides a set, every Turing machine definitely accepts a set, so "is $\mathsf{ZFC}$-provably equivalent to a definition of the form 'the set of numbers accepted by $M$'" is a much better-behaved notion of concreteness than "is $\mathsf{ZFC}$-provably equivalent to a definition of the form 'the set of numbers decided by $M$.'"
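The "hard-coding" move in the accepted answer is worth seeing spelled out. The sketch below (in C, with made-up set elements) is the program-shaped version of the Turing machine described there: membership in a fixed finite set is decided by comparing the input against each listed element in turn.

#include <stdio.h>

/* Decide membership in the fixed finite set {a_1,...,a_n} by direct
 * comparison. The particular elements are arbitrary placeholders. */
static int in_set(long k) {
    static const long members[] = {3, 14, 159, 2653};
    for (size_t i = 0; i < sizeof members / sizeof members[0]; ++i)
        if (k == members[i]) return 1; /* "YES" */
    return 0;                          /* "NO"  */
}

int main(void) {
    for (long k = 0; k <= 15; ++k)
        printf("%2ld: %s\n", k, in_set(k) ? "YES" : "NO");
    return 0;
}

The point of the answer is that such a program exists for every finite set, even when, as with $\{BB(7918)\}$, we are unable to actually write it down.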
1. Field of the Invention

The present invention relates generally to articles which are designed to be in contact with biologically active agents. Such articles include implant devices and other structures which are designed to be utilized in vivo. Such articles also include containers, supports, and transport systems wherein biologically active agents are in continual contact with the surfaces of the article. More particularly, the present invention relates to reducing and thereby controlling the degree to which biologically active agents bind to the surfaces of such articles.

2. Description of Related Art

Most biologically active agents interact with other molecules present on either surfaces or membranes. In fact, the effectiveness of many biological systems is dependent on the presence of certain intrinsic binding properties between biologically active agents and biological surfaces. For example, biological surfaces, such as endothelial linings or receptor-embedded cell membranes, incorporate high affinity (energy) binding properties to achieve optimal biological function. Although the binding properties of biologically active agents are essential for proper biological function, there are many situations where binding of these biologically active agents to non-biological surfaces presents a problem. For example, the coagulation protein factor XII is a biologically active agent which binds to healthy vascular endothelial cells. Protein factor XII plays an important role in the naturally occurring coagulation process. However, when protein factor XII binds to the surface of an implanted biomaterial, the result may be a thrombotic or thromboembolic complication of the prosthetic device. Other situations where reduced surface binding of biologically active agents would be desirable include vessels used to transport biologically active agents. In these situations, binding of the agent to the wall of the transport container results in reduced yield of the transported product. In addition, reduced binding would be desirable in a vascular prosthesis where interactions of biologically active agents can promote complications and reduce the medical utility of the device. For example, it would be desirable to reduce surface binding of biologically active agents to hip prostheses where the binding of such agents can result in denaturation of the agents and the initiation of an inflammatory reaction clinically associated with pain and reduced utility of the device. Another situation where reduced and thereby controlled surface binding of biologically active agents would be desirable includes the fabrication of biological opto-electronic devices. These devices would provide electronic output from electron transporting biologically active molecules responding to photoelectric, thermal, or other environmental stimulus. To fabricate these devices, only limited numbers of biologically active molecules would ideally be deposited on a solid support. Moreover, the reduced and thereby controlled binding of the biologically active molecules would ideally not result in conformational denaturation of the molecules. The non-biological materials which are commonly used in the manufacture of biomedical and food service devices include polymers, ceramics and metals, most of which have high surface energies. These high surface energies frequently result in increased binding of biologically active molecules in situations, such as those described above, where such binding is undesirable.
Accordingly, it would be desirable to provide a treatment for the surfaces of such non-biological materials which would effectively reduce the surface energy and thereby decrease undesirable binding of biologically active agents thereto. Over the years, various materials have been developed for use as surface modifying agents which reduce the binding of biologically active agents to their surfaces. Examples include polymers, such as silicone, polystyrene, polyethylene and polytetrafluoroethylene. All of these materials have low surface energies. Accordingly, the binding affinities between these materials and biologically active agents are reduced. These materials are generally used in bulk form, i.e., the entire device is made from the materials. More recently, different alcohol based compounds have been either physically adsorbed or chemically bonded to the surface of non-biological materials to reduce the subsequent surface binding of biologically active agents. Among the more commonly used are polyethylene glycol and sodium heparin. While affording improved resistance to absorption of proteins and other biologically active agents, these two exemplary materials are each subject to their own specific problems. For example, non-biological surfaces, such as immunoaffinity chromatography columns and electrophoretic capillaries, have been coated with polyethylene glycol. Although such coatings have reduced binding of biologically active agents, the nephrotoxic effects of polyethylene glycol are well documented. Further, binding of polyethylene glycol to the non-biological surface is possible only through various forms of covalent chemistry. Sodium heparin is a well-recognized anti-coagulation factor whose use entails correlative physiological effects. Most often, sodium heparin is covalently bound directly to the non-biological surface or indirectly through various carbon chain extenders. In addition, sodium heparin has been physically adsorbed onto the non-biological surface. Other surface modification techniques have involved the coating of electrophoretic capillaries with phosphate moieties and conventional silanes and polyacrylamides. Other attempts at reducing the surface activity of non-biological materials have involved the covalent bonding of maltose to silica substrates wherein an additional silicone-based intermediate moiety (3-aminopropyltriethoxysilane) is covalently bound to both the fused-silica capillary walls and the disaccharide. In another procedure, cellulose has been adsorbed onto non-biological surfaces. Specifically, methylcellulose has been used to coat the inside of quartz electrophoresis tubes to reduce or eliminate electroendosmosis. The protocol used in applying the methylcellulose coating involves three steps. First, the electrophoresis tube is washed with detergent. The possibility of detergent residues present on the quartz surface is not desirable since it may block carbohydrate adsorption. The second step involves addition of formaldehyde and formic acid to the methylcellulose solution to catalyze the cross-linking of the carbohydrate molecules which are present in the coating. Finally, the quartz tube is heated between applications of the methylcellulose. There presently is a need to provide a simple, quick, and efficient technique for reducing the surface energy of articles which are designed for use in contact with biologically active agents.
The technique should be capable of reducing surface energy levels sufficiently to reduce and thereby control the binding of biologically active agents to the article's surface.
Q: Questions on how to build .c files

I have been trying to implement the API for the serial port found at the below web page. I am a beginner in all this and I am not sure about what I am looking at: http://code.google.com/p/android-serialport-api/source/browse/#svn%2Ftrunk%2Fandroid-serialport-api%2Fproject%2Fjni

Questions: 1) The .c files are built how? Do I need to download the NDK? I assume the .c file is run directly by the virtual machine, or what? Or is the executable for the .c the file in the libs directory? If so, how do I utilize the libserial_por.so file? Thanks!

A: The .c files are built into a library by running ndk-build in the project directory. You need the NDK. The .c files are not run directly by the virtual machine; rather, a library is created in the libs directory, which is then loaded along with the SerialPort class. To use the library, just use the SerialPort class, which already has bindings to the library.

C files will be compiled to an ARM binary library with the extension .so by the NDK. Take a look at the NDK Documentation, section "Getting Started with the NDK", to find out how to use it. Basically, you place your .c files in the jni directory, change Android.mk to specify how to compile them, then run ndk-build to build the library. The resulting lib<name>.so will be placed in the libs directory. You then use your library in the Java project with System.loadLibrary("<name>"). This of course means the library must have a JNI interface for you to be able to use it with the Java application, since Android doesn't support JNA yet.

I see though that the code you pointed out is an Android project. To run it, simply run ndk-build in the project directory to build the library, then run the project in an emulator.
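For orientation, this is the general shape of a JNI-callable C function, the kind of interface the answer says the library must expose so Java code can call into it. The package, class, and method names here are invented for illustration; the actual android-serialport-api project defines its own.

#include <jni.h>

/* Exported under JNI's Java_<package>_<Class>_<method> naming scheme,
 * so Java can call it after System.loadLibrary("example"). All names
 * below are hypothetical. */
JNIEXPORT jint JNICALL
Java_com_example_serial_SerialPort_nativeAdd(JNIEnv *env, jobject thiz,
                                             jint a, jint b) {
    (void)env;  /* unused in this toy function */
    (void)thiz;
    return a + b;
}

After ndk-build produces libexample.so under libs/, the Java side would declare `private static native int nativeAdd(int a, int b);` and call `System.loadLibrary("example")` once before using it.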
// Copyright 2017 The TensorFlow Authors. All Rights Reserved. // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. // ============================================================================= #include <queue> #include "tensorflow/contrib/tensor_forest/kernels/data_spec.h" #include "tensorflow/contrib/tensor_forest/kernels/v4/decision-tree-resource.h" #include "tensorflow/contrib/tensor_forest/kernels/v4/fertile-stats-resource.h" #include "tensorflow/contrib/tensor_forest/kernels/v4/input_data.h" #include "tensorflow/contrib/tensor_forest/kernels/v4/input_target.h" #include "tensorflow/contrib/tensor_forest/kernels/v4/params.h" #include "tensorflow/contrib/tensor_forest/proto/fertile_stats.pb.h" #include "tensorflow/core/framework/op_kernel.h" #include "tensorflow/core/framework/resource_mgr.h" #include "tensorflow/core/framework/tensor.h" #include "tensorflow/core/framework/tensor_shape.h" #include "tensorflow/core/framework/tensor_types.h" #include "tensorflow/core/lib/gtl/map_util.h" #include "tensorflow/core/lib/strings/strcat.h" #include "tensorflow/core/platform/mutex.h" #include "tensorflow/core/platform/thread_annotations.h" #include "tensorflow/core/platform/types.h" #include "tensorflow/core/util/work_sharder.h" namespace tensorflow { namespace tensorforest { using gtl::FindOrNull; // Creates a stats variable. class CreateFertileStatsVariableOp : public OpKernel { public: explicit CreateFertileStatsVariableOp(OpKernelConstruction* context) : OpKernel(context) { string serialized_params; OP_REQUIRES_OK(context, context->GetAttr("params", &serialized_params)); ParseProtoUnlimited(&param_proto_, serialized_params); } void Compute(OpKernelContext* context) override { const Tensor* stats_config_t; OP_REQUIRES_OK(context, context->input("stats_config", &stats_config_t)); OP_REQUIRES(context, TensorShapeUtils::IsScalar(stats_config_t->shape()), errors::InvalidArgument("Stats config must be a scalar.")); auto* result = new FertileStatsResource(param_proto_); FertileStats stats; if (!ParseProtoUnlimited(&stats, stats_config_t->scalar<string>()())) { result->Unref(); OP_REQUIRES(context, false, errors::InvalidArgument("Unable to parse stats config.")); } result->ExtractFromProto(stats); result->MaybeInitialize(); // Only create one, if one does not exist already. Report status for all // other exceptions. auto status = CreateResource(context, HandleFromInput(context, 0), result); if (!status.ok() && status.code() != tensorflow::error::ALREADY_EXISTS) { OP_REQUIRES(context, false, status); } } private: TensorForestParams param_proto_; }; // Op for serializing a model. 
class FertileStatsSerializeOp : public OpKernel { public: explicit FertileStatsSerializeOp(OpKernelConstruction* context) : OpKernel(context) { string serialized_params; OP_REQUIRES_OK(context, context->GetAttr("params", &serialized_params)); ParseProtoUnlimited(&param_proto_, serialized_params); } void Compute(OpKernelContext* context) override { FertileStatsResource* fertile_stats_resource; OP_REQUIRES_OK(context, LookupResource(context, HandleFromInput(context, 0), &fertile_stats_resource)); mutex_lock l(*fertile_stats_resource->get_mutex()); core::ScopedUnref unref_me(fertile_stats_resource); Tensor* output_config_t = nullptr; OP_REQUIRES_OK( context, context->allocate_output(0, TensorShape(), &output_config_t)); FertileStats stats; fertile_stats_resource->PackToProto(&stats); output_config_t->scalar<string>()() = stats.SerializeAsString(); } private: TensorForestParams param_proto_; }; // Op for deserializing a stats variable from a checkpoint. class FertileStatsDeserializeOp : public OpKernel { public: explicit FertileStatsDeserializeOp(OpKernelConstruction* context) : OpKernel(context) { string serialized_params; OP_REQUIRES_OK(context, context->GetAttr("params", &serialized_params)); ParseProtoUnlimited(&param_proto_, serialized_params); } void Compute(OpKernelContext* context) override { FertileStatsResource* fertile_stats_resource; OP_REQUIRES_OK(context, LookupResource(context, HandleFromInput(context, 0), &fertile_stats_resource)); mutex_lock l(*fertile_stats_resource->get_mutex()); core::ScopedUnref unref_me(fertile_stats_resource); const Tensor* stats_config_t; OP_REQUIRES_OK(context, context->input("stats_config", &stats_config_t)); OP_REQUIRES(context, TensorShapeUtils::IsScalar(stats_config_t->shape()), errors::InvalidArgument("Stats config must be a scalar.")); // Deallocate all the previous objects on the resource. fertile_stats_resource->Reset(); FertileStats stats; OP_REQUIRES(context, ParseProtoUnlimited(&stats, stats_config_t->scalar<string>()()), errors::InvalidArgument("Unable to parse stats config.")); fertile_stats_resource->ExtractFromProto(stats); fertile_stats_resource->MaybeInitialize(); } private: TensorForestParams param_proto_; }; // Try to update a leaf's stats by acquiring its lock. If it can't be // acquired, put it in a waiting queue to come back to later and try the next // one. Once all leaf_ids have been visited, cycle through the waiting ids // until they're gone. void UpdateStats(FertileStatsResource* fertile_stats_resource, const std::unique_ptr<TensorDataSet>& data, const TensorInputTarget& target, int num_targets, const Tensor& leaf_ids_tensor, std::unordered_map<int32, std::unique_ptr<mutex>>* locks, mutex* set_lock, int32 start, int32 end, std::unordered_set<int32>* ready_to_split) { const auto leaf_ids = leaf_ids_tensor.unaligned_flat<int32>(); // Stores leaf_id, leaf_depth, example_id for examples that are waiting // on another to finish. 
std::queue<std::tuple<int32, int32>> waiting; int32 i = start; while (i < end || !waiting.empty()) { int32 leaf_id; int32 example_id; bool was_waiting = false; if (i >= end) { std::tie(leaf_id, example_id) = waiting.front(); waiting.pop(); was_waiting = true; } else { leaf_id = leaf_ids(i); example_id = i; ++i; } const std::unique_ptr<mutex>& leaf_lock = (*locks)[leaf_id]; if (was_waiting) { leaf_lock->lock(); } else { if (!leaf_lock->try_lock()) { waiting.emplace(leaf_id, example_id); continue; } } bool is_finished; fertile_stats_resource->AddExampleToStatsAndInitialize( data, &target, {example_id}, leaf_id, &is_finished); leaf_lock->unlock(); if (is_finished) { set_lock->lock(); ready_to_split->insert(leaf_id); set_lock->unlock(); } } } // Update leaves from start through end in the leaf_examples iterator. void UpdateStatsCollated( FertileStatsResource* fertile_stats_resource, DecisionTreeResource* tree_resource, const std::unique_ptr<TensorDataSet>& data, const TensorInputTarget& target, int num_targets, const std::unordered_map<int32, std::vector<int>>& leaf_examples, mutex* set_lock, int32 start, int32 end, std::unordered_set<int32>* ready_to_split) { auto it = leaf_examples.begin(); std::advance(it, start); auto end_it = leaf_examples.begin(); std::advance(end_it, end); while (it != end_it) { int32 leaf_id = it->first; bool is_finished; fertile_stats_resource->AddExampleToStatsAndInitialize( data, &target, it->second, leaf_id, &is_finished); if (is_finished) { set_lock->lock(); ready_to_split->insert(leaf_id); set_lock->unlock(); } ++it; } } // Op for traversing the tree with each example, accumulating statistics, and // outputting node ids that are ready to split. class ProcessInputOp : public OpKernel { public: explicit ProcessInputOp(OpKernelConstruction* context) : OpKernel(context) { string serialized_params; OP_REQUIRES_OK(context, context->GetAttr("params", &serialized_params)); ParseProtoUnlimited(&param_proto_, serialized_params); OP_REQUIRES_OK(context, context->GetAttr("random_seed", &random_seed_)); string serialized_proto; OP_REQUIRES_OK(context, context->GetAttr("input_spec", &serialized_proto)); input_spec_.ParseFromString(serialized_proto); data_set_ = std::unique_ptr<TensorDataSet>( new TensorDataSet(input_spec_, random_seed_)); } void Compute(OpKernelContext* context) override { const Tensor& input_data = context->input(2); const Tensor& sparse_input_indices = context->input(3); const Tensor& sparse_input_values = context->input(4); const Tensor& sparse_input_shape = context->input(5); const Tensor& input_labels = context->input(6); const Tensor& input_weights = context->input(7); const Tensor& leaf_ids_tensor = context->input(8); data_set_->set_input_tensors(input_data, sparse_input_indices, sparse_input_values, sparse_input_shape); FertileStatsResource* fertile_stats_resource; OP_REQUIRES_OK(context, LookupResource(context, HandleFromInput(context, 1), &fertile_stats_resource)); DecisionTreeResource* tree_resource; OP_REQUIRES_OK(context, LookupResource(context, HandleFromInput(context, 0), &tree_resource)); mutex_lock l1(*fertile_stats_resource->get_mutex()); mutex_lock l2(*tree_resource->get_mutex()); core::ScopedUnref unref_stats(fertile_stats_resource); core::ScopedUnref unref_tree(tree_resource); const int32 num_data = data_set_->NumItems(); auto worker_threads = context->device()->tensorflow_cpu_worker_threads(); int num_threads = worker_threads->num_threads; const auto leaf_ids = leaf_ids_tensor.unaligned_flat<int32>(); // Create one mutex per leaf. 
We need to protect access to leaf pointers, // so instead of grouping examples by leaf, we spread examples out among // threads to provide uniform work for each of them and protect access // with mutexes. std::unordered_map<int, std::unique_ptr<mutex>> locks; std::unordered_map<int32, std::vector<int>> leaf_examples; if (param_proto_.collate_examples()) { for (int i = 0; i < num_data; ++i) { leaf_examples[leaf_ids(i)].push_back(i); } } else { for (int i = 0; i < num_data; ++i) { const int32 id = leaf_ids(i); if (FindOrNull(locks, id) == nullptr) { // TODO(gilberth): Consider using a memory pool for these. locks[id] = std::unique_ptr<mutex>(new mutex); } } } const int32 num_leaves = leaf_examples.size(); const int32 label_dim = input_labels.shape().dims() <= 1 ? 0 : static_cast<int>(input_labels.shape().dim_size(1)); const int32 num_targets = param_proto_.is_regression() ? (std::max(1, label_dim)) : 1; // Ids of leaves that can split. std::unordered_set<int32> ready_to_split; mutex set_lock; TensorInputTarget target(input_labels, input_weights, num_targets); // TODO(gilberth): This is a rough approximation based on measurements // from a digits run on local desktop. Heuristics might be necessary // if it really matters that much. const int64 costPerUpdate = 1000; auto update = [this, &target, &leaf_ids_tensor, &num_targets, fertile_stats_resource, &locks, &set_lock, &ready_to_split, num_data](int64 start, int64 end) { CHECK(start <= end); CHECK(end <= num_data); UpdateStats(fertile_stats_resource, data_set_, target, num_targets, leaf_ids_tensor, &locks, &set_lock, static_cast<int32>(start), static_cast<int32>(end), &ready_to_split); }; auto update_collated = [this, &target, &num_targets, fertile_stats_resource, tree_resource, &leaf_examples, &set_lock, &ready_to_split, num_leaves](int64 start, int64 end) { CHECK(start <= end); CHECK(end <= num_leaves); UpdateStatsCollated(fertile_stats_resource, tree_resource, data_set_, target, num_targets, leaf_examples, &set_lock, static_cast<int32>(start), static_cast<int32>(end), &ready_to_split); }; if (param_proto_.collate_examples()) { Shard(num_threads, worker_threads->workers, num_leaves, costPerUpdate, update_collated); } else { Shard(num_threads, worker_threads->workers, num_data, costPerUpdate, update); } Tensor* output_finished_t = nullptr; TensorShape output_shape; output_shape.AddDim(ready_to_split.size()); OP_REQUIRES_OK( context, context->allocate_output(0, output_shape, &output_finished_t)); auto output = output_finished_t->unaligned_flat<int32>(); std::copy(ready_to_split.begin(), ready_to_split.end(), output.data()); } private: int32 random_seed_; tensorforest::TensorForestDataSpec input_spec_; std::unique_ptr<TensorDataSet> data_set_; TensorForestParams param_proto_; }; // Op for growing finished nodes. 
class GrowTreeOp : public OpKernel { public: explicit GrowTreeOp(OpKernelConstruction* context) : OpKernel(context) { string serialized_params; OP_REQUIRES_OK(context, context->GetAttr("params", &serialized_params)); ParseProtoUnlimited(&param_proto_, serialized_params); } void Compute(OpKernelContext* context) override { FertileStatsResource* fertile_stats_resource; OP_REQUIRES_OK(context, LookupResource(context, HandleFromInput(context, 1), &fertile_stats_resource)); DecisionTreeResource* tree_resource; OP_REQUIRES_OK(context, LookupResource(context, HandleFromInput(context, 0), &tree_resource)); mutex_lock l1(*fertile_stats_resource->get_mutex()); mutex_lock l2(*tree_resource->get_mutex()); core::ScopedUnref unref_stats(fertile_stats_resource); core::ScopedUnref unref_tree(tree_resource); const Tensor& finished_nodes = context->input(2); const auto finished = finished_nodes.unaligned_flat<int32>(); const int32 num_nodes = static_cast<int32>(finished_nodes.shape().dim_size(0)); // This op takes so little of the time for one batch that it isn't worth // threading this. for (int i = 0; i < num_nodes && tree_resource->decision_tree().decision_tree().nodes_size() < param_proto_.max_nodes(); ++i) { const int32 node = finished(i); std::unique_ptr<SplitCandidate> best(new SplitCandidate); int32 parent_depth; // TODO(gilberth): Pushing these to an output would allow the complete // decoupling of tree from resource. bool found = fertile_stats_resource->BestSplit(node, best.get(), &parent_depth); if (found) { std::vector<int32> new_children; tree_resource->SplitNode(node, best.get(), &new_children); fertile_stats_resource->Allocate(parent_depth, new_children); // We are done with best, so it is now safe to clear node. fertile_stats_resource->Clear(node); CHECK(tree_resource->get_mutable_tree_node(node)->has_leaf() == false); } else { // reset fertile_stats_resource->ResetSplitStats(node, parent_depth); } } } private: tensorforest::TensorForestDataSpec input_spec_; TensorForestParams param_proto_; }; void FinalizeLeaf(bool is_regression, bool drop_final_class, const std::unique_ptr<LeafModelOperator>& leaf_op, decision_trees::Leaf* leaf) { // regression models are already stored in leaf in normalized form. if (is_regression) { return; } // TODO(gilberth): Calculate the leaf's sum. float sum = 0; LOG(FATAL) << "FinalizeTreeOp is disabled for now."; if (sum <= 0.0) { LOG(WARNING) << "Leaf with sum " << sum << " has stats " << leaf->ShortDebugString(); return; } if (leaf->has_vector()) { for (int i = 0; i < leaf->vector().value_size(); i++) { auto* v = leaf->mutable_vector()->mutable_value(i); v->set_float_value(v->float_value() / sum); } if (drop_final_class) { leaf->mutable_vector()->mutable_value()->RemoveLast(); } return; } if (leaf->has_sparse_vector()) { for (auto& it : *leaf->mutable_sparse_vector()->mutable_sparse_value()) { it.second.set_float_value(it.second.float_value() / sum); } return; } LOG(FATAL) << "Unknown leaf type in " << leaf->DebugString(); } // Op for finalizing a tree at the end of training. 
class FinalizeTreeOp : public OpKernel { public: explicit FinalizeTreeOp(OpKernelConstruction* context) : OpKernel(context) { string serialized_params; OP_REQUIRES_OK(context, context->GetAttr("params", &serialized_params)); ParseProtoUnlimited(&param_proto_, serialized_params); model_op_ = LeafModelOperatorFactory::CreateLeafModelOperator(param_proto_); } void Compute(OpKernelContext* context) override { DecisionTreeResource* tree_resource; OP_REQUIRES_OK(context, LookupResource(context, HandleFromInput(context, 0), &tree_resource)); FertileStatsResource* fertile_stats_resource; OP_REQUIRES_OK(context, LookupResource(context, HandleFromInput(context, 1), &fertile_stats_resource)); mutex_lock l1(*fertile_stats_resource->get_mutex()); mutex_lock l2(*tree_resource->get_mutex()); core::ScopedUnref unref_me(tree_resource); core::ScopedUnref unref_stats(fertile_stats_resource); // TODO(thomaswc): Add threads int num_nodes = tree_resource->decision_tree().decision_tree().nodes_size(); for (int i = 0; i < num_nodes; i++) { auto* node = tree_resource->mutable_decision_tree() ->mutable_decision_tree() ->mutable_nodes(i); if (node->has_leaf()) { FinalizeLeaf(param_proto_.is_regression(), param_proto_.drop_final_class(), model_op_, node->mutable_leaf()); } } } private: std::unique_ptr<LeafModelOperator> model_op_; TensorForestParams param_proto_; }; REGISTER_RESOURCE_HANDLE_KERNEL(FertileStatsResource); REGISTER_KERNEL_BUILDER(Name("FertileStatsIsInitializedOp").Device(DEVICE_CPU), IsResourceInitialized<FertileStatsResource>); REGISTER_KERNEL_BUILDER(Name("CreateFertileStatsVariable").Device(DEVICE_CPU), CreateFertileStatsVariableOp); REGISTER_KERNEL_BUILDER(Name("FertileStatsSerialize").Device(DEVICE_CPU), FertileStatsSerializeOp); REGISTER_KERNEL_BUILDER(Name("FertileStatsDeserialize").Device(DEVICE_CPU), FertileStatsDeserializeOp); REGISTER_KERNEL_BUILDER(Name("ProcessInputV4").Device(DEVICE_CPU), ProcessInputOp); REGISTER_KERNEL_BUILDER(Name("GrowTreeV4").Device(DEVICE_CPU), GrowTreeOp); REGISTER_KERNEL_BUILDER(Name("FinalizeTree").Device(DEVICE_CPU), FinalizeTreeOp); } // namespace tensorforest } // namespace tensorflow
Comparative study of adult polycystic kidney disease. An Ethiopian series dealt with 4 cases of adult polycystic kidney disease and noted the paucity of the disease in analyses of hospital admissions in the African literature. The present series, on the Igbos of Nigeria, West Africa, compares well with the Ethiopian experience and constitutes one more group to be placed on the world map of this intriguing disease.
Launching rockets into space is an expensive endeavor, and a rocket failure can mean not just a lost launch but a costly, potentially fatal accident.

Crucial Role Of Rockets In Space Missions

Last December, Russia's Progress MS-04 cargo spacecraft crashed on its way to the International Space Station. The investigation later revealed issues with the Proton-M rocket, prompting Russian authorities to ground the Proton space rockets for three and a half months. Rockets can make or break space missions. To improve the performance of rockets, NASA has turned to pressure-sensitive paints.

Unsteady Pressure-Sensitive Paint

NASA aerospace researchers used the high-tech paint called Unsteady PSP (pressure-sensitive paint) in a state-of-the-art aerodynamics test to measure the fluctuating pressure forces that affect aircraft and spacecraft. NASA explained that aircraft and spacecraft should both be designed to withstand the dynamic forces called buffeting. Otherwise, they risk being shaken to pieces. Unsteady PSP, which produces a bright crimson glow in the presence of high-pressure airflow, allowed researchers to precisely measure these fluctuating forces.

How Pressure-Sensitive Paint Works

The paint works by reacting with oxygen to generate light. Differences in pressure produce variations in the amount of oxygen that interacts with the painted surface, causing variations in the intensity of the light emitted. The changes in the paint allow researchers to visualize where the changing forces act on the rocket as it accelerates. The different pressures are visualized as colors: red means higher-than-average pressure and blue means lower-than-average pressure.

"It's full of tiny pores that let the air flowing over the model come into contact with a greater surface area of the paint. This allows oxygen to react more quickly with the paint, yielding more accurate data on the fluctuating pressures affecting planes and rockets during flight," NASA explained in a statement.

Used In Simulated Flights Of Space Launch System Rocket Model

During simulated flights of a model of the Space Launch System (SLS) rocket in a wind tunnel at NASA's Ames Research Center, cameras recorded images that researchers combined to determine the pressure everywhere on the model vehicle. SLS is the world's most powerful rocket and is set to carry NASA's Orion spacecraft on missions to an asteroid and to Mars. Ensuring that the rocket works properly and efficiently would be a step closer to a successful manned mission to the Red Planet.

The technology allowed researchers to capture measurements fast enough to keep up with the rapidly changing pressure load over the entirety of the model vehicle's surface. The data offers a first step toward a better understanding of how the structure of a vehicle will respond to buffet in flight, so that impacts can be minimized through design. The paint, which is sprayed on in a thin layer, can also speed up and lower the costs of SLS tests.

"We learned from this test that this method is what you need to study buffet," said Jim Ross, an aerospace engineer in the Experimental Aero-Physics Branch at Ames. "There's a lot we don't understand about unsteady flow that this paint will help us figure out."
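As a rough illustration of how an intensity image becomes a pressure map, pressure-sensitive paint systems are commonly calibrated with a Stern-Volmer-type relation against a wind-off reference image. The sketch below assumes a simple linear calibration I_ref/I = A + B(P/P_ref) with made-up coefficients; it is not NASA's actual processing pipeline.

#include <cmath>
#include <cstdio>

// Stern-Volmer style conversion from paint brightness to pressure.
// I_ref and P_ref are the intensity and pressure at a wind-off reference
// condition; A and B are calibration coefficients fitted for the paint batch.
double pressureFromIntensity(double I, double I_ref, double P_ref,
                             double A, double B) {
  // Stern-Volmer: I_ref / I = A + B * (P / P_ref), solved for P.
  return P_ref * ((I_ref / I) - A) / B;
}

int main() {
  const double I_ref = 1000.0, P_ref = 101.3;  // kPa; illustrative values only
  const double A = 0.18, B = 0.82;             // hypothetical calibration fit
  // With this relation, brighter emission maps to lower pressure, matching
  // the usual oxygen-quenching behavior of such paints.
  std::printf("I = 1100 -> P = %.1f kPa\n",
              pressureFromIntensity(1100.0, I_ref, P_ref, A, B));
  std::printf("I =  900 -> P = %.1f kPa\n",
              pressureFromIntensity(900.0, I_ref, P_ref, A, B));
}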
Q: Numerical integration vs. discretization

If I want to develop a control system for a microcontroller, what is the best/recommended way to control?

Numerical integration, e.g., a for-loop; continuous time with a digital controller:

n = 1000; % loops
x = [0; 0]; % state (A, B, L, r and t are assumed to be defined already)
u = 0; % control input, initialized before first use
for i = 1:n
  tic;
  u = -L*x + r; % state feedback with reference r
  dx = A*x + B*u; % state derivative
  dt = toc; % elapsed computation time used as the integration step
  x = x + dx*dt; % forward Euler step
  t = t + dt;
endfor

Discretization, e.g., transform a state space model to a discrete model:

$$F = e^{Ah}$$
$$G = \int_{0}^{h}e^{At}B\,dt$$
$$x(k+1) = Fx(k) + Gu(k)$$

A: I don't have any experience with microcontrollers myself, but I will try to compare the two using the knowledge I do have. Both can probably be made to work if the sampling rate is significantly higher than your bandwidth. But the sample rate is usually also used to generate the control input $u$, with a zero-order hold between sample times, so I would say the discretized state space model is preferable. For this I would have to assume a (near) constant sample rate. It can also be noted that discretizing the continuous-time model can be done with

$$ e^{\begin{bmatrix}A&B\\0&0\end{bmatrix}h} = \begin{bmatrix}F&G\\0&I\end{bmatrix}. $$

For a real continuous-time implementation I would assume that you would not calculate the states directly yourself, but use an observer, which estimates the full state from the outputs of the system. The integrated continuous-time model should be more flexible with variable sample rates. However, it might be good to look into different numerical integration methods, because (forward) Euler's method might perform badly. When using the discretized model, you can make use of the theory for discrete Kalman filters. This can give you an estimate of the state before the beginning of a new sample time, so you can calculate a full state feedback control input without a delay.
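A minimal sketch of the augmented-matrix discretization from the answer, computing F and G in one matrix exponential. It uses Eigen's unsupported MatrixFunctions module for exp(); the double-integrator A, B and the sample time h are made-up example values, not taken from the question.

#include <iostream>
#include <Eigen/Dense>
#include <unsupported/Eigen/MatrixFunctions>  // provides .exp() for matrices

int main() {
  // Example continuous-time model (a double integrator), chosen arbitrarily.
  Eigen::Matrix2d A;
  A << 0, 1,
       0, 0;
  Eigen::Vector2d B(0, 1);
  const double h = 0.01;  // sample time [s]

  // Build the augmented matrix [A B; 0 0], scale by h, and exponentiate:
  // exp([A B; 0 0] * h) = [F G; 0 I], yielding F and G in one call.
  Eigen::Matrix3d M = Eigen::Matrix3d::Zero();
  M.topLeftCorner<2, 2>() = A;
  M.topRightCorner<2, 1>() = B;
  Eigen::Matrix3d Md = (M * h).exp();

  Eigen::Matrix2d F = Md.topLeftCorner<2, 2>();  // here: [1 h; 0 1]
  Eigen::Vector2d G = Md.topRightCorner<2, 1>(); // here: [h*h/2; h]

  std::cout << "F =\n" << F << "\nG =\n" << G << "\n";
  // Discrete update inside the control loop would then be: x = F*x + G*u;
}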
Interviewing Kath
Liam James

Two people sit across from each other in a small café in a library; the younger sips on coffee, dissonance fills the room, a letter rests on the table in front.

Do you have your reading glasses?

Yes.

I thought I would just start with... I have been going through Mum's archives and this is a little letter I came across. I imagine you haven't read it.

(Silence)

Does your mother have any information on Alfred Baker.

On Alfred, yeh. I have done a bit of family history research.

(A longer silence)

Whereabouts are the Tasmanian state archives, in Hobart?

Ah, yeh. It's through the State Library.

(Silence continues)

Yes I... (Silence)

Did you know her? Glady?

Yes, yes, she lived in Launceston, years ago, and actually she is my cousin.

But here she has "your mother's niece".

But she is your cousin.

But she is my cousin. She must be a little bit confused. Because her, wait on, Gladdy's mother and my mother were sisters. So yes, she is my cousin.

Did you know she wrote to Mum?

Pardon.

Did you know she had written to Mum?

No, no I didn't, no.

Well Patricia is still in Hobart. If this was 94.

Well Gladdy and Bruce, her husband, they moved to Hobart years ago, oh, many years ago, let me think, and they just had two kids, Patricia and John. Gladdy died, she has only died in the last couple of years. Oh yes I knew them, I knew them well.

When you were young were you ever told you were Indigenous, or of Indigenous ancestry at all?

No. It came up more when my sister, her brother-in-law, he had a relative, Baker, and we had the Baker relatives too, and his sister said that the Bakers were related, and from the Bakers, on my brother-in-law's side, they got their Indigenous side so...

So that was the first time you had been told?

Yes.

How old were you at the time?

Well Barbara, let me think.

Just roughly.

Oh, about 19 or 20.

19, 20. And when you were much younger, was Aboriginal culture ever taught or discussed within the family and the home, or was it something else, a separate thing?

It was a separate thing, because there was no talk of it until I was that age, and then I was married shortly after that, and it was mainly, as I said, Barbara and her side of the family.

Ok, so from what has been told to me, your mother openly spoke about being of Indigenous ancestry.

Well, not a lot that I remember. Particularly before Trevor and Barbara were married. Granny had the appearance of an Aboriginal, but nothing was ever really discussed.

Would you say you feel Indigenous at all, or Aboriginal in any way?

No, not really. I just feel the same; just, I can't say that I do.

No, that's fine. So my mother raised us, her children... well, she told us we were of Aboriginal ancestry, and did so from a very young age. What did you think about that when she started doing that?

Oh, I didn't mind. Well, the culture, if it was there, the background, why shouldn't you acknowledge it? I think that Jennie just used... like, you went to the Homework Centre, and you seemed to enjoy it.

But I think she pushed us to be active at school as well, to be outspoken about it. Why do you think she did it?

I think your mother believed it. She was going on what she had heard, what she had been told, and I don't know who discussed it with her originally. Then she started doing the family tree thing just to see if she could get a background, but I think it came across that Granny's family were English, they came from England, and she wasn't sure about the grandfather. Not one hundred percent sure.
But did you ever read that book, from the family reunion?

Yes. I had a look at part of it and I couldn't see where we were connected there.

But your mother believes.

Do you think the potential Aboriginality within the family has been something people within the family have been ashamed of, or denied?

I don't think your mother has been ashamed of it.

No, but I mean other members, say your sisters or any of your kids or your grandchildren.

Well, there is one child who won't, or wouldn't, acknowledge. Oh well, you know Judith doesn't acknowledge any Aboriginality, but I can't see anything to be ashamed of. They were here before we came here; it was their land.

Where do you think that comes from, the denial of it even being a possibility?

Being ashamed? I don't know. Judith... I mean your uncles, they don't mind, they don't say anything against it. They don't lean either way. It is not something they discuss, I don't think. But I think Judith is determined; still, I don't see why anyone should be ashamed.

As we spoke about before, the majority of the documentation I can find refutes or argues against us being Indigenous. It gets confusing and unsure; there are no set answers. Do you think the family narrative outweighs the documentation? Should we trust one truth over the other?

Well, I think if you're inclined to find the truth you have to follow it through.

But how do you do it?

I don't know.

Do you think that there is an importance because it has been passed in an oral tradition, or is the weight in the facts?

Well, it is mainly about what you hear, what is passed down. If you really wanted to go further into it you would really have to study and find out where it starts.

Because of how things were: lost and damaged paperwork, lies and cover-ups, embarrassment and denial. What if there can be no resolute answer, always some uncertainty? Does that leave you uneasy?

It doesn't leave me uneasy. I know when I was at school there were some Aboriginal children, and they weren't treated the same. But to me it has always been the same. They weren't given the same chances.

At the time family members started openly speaking about an Indigenous ancestry, it would have been a damaging thing.

Possibly not. Possibly not.

Is it something that would have moved you back socially?

It wouldn't have moved you forward, and if you had that heritage you didn't talk about it.

Then why do you think it was talked about and not just kept quiet?

Well, I think it was about time. Everyone was treated equally... they were the first inhabitants here, and why shouldn't they have the same respect that white people have?

Do you think family storytelling and family mythology is important to you?

Yes, it is. Otherwise how is culture going to go forward, especially now there are so many different nationalities coming to Australia?

The search for an Indigenous past has been strong for many family members over many generations. Why do you think it has been so important, especially compared to research into other heritages?

I don't know if it is because the Indigenous people have been put in the background for so long that they have become a more or less forgotten people, until someone started to stand up for them, so they could have their rights, the same as we had.

So you think it is a sign of solidarity?

Yes, but it's taken a long time for people to recognise the culture of the Indigenous. And you don't want to lose it, and the more you hear about it the more you can become involved and really take notice.
Were you proud of my mother for supporting and nurturing the Indigenous aspect of our identity?

Yes, she took a step forward to follow it through. She has looked into proving it. Yes, of course I was.

Do you think it was right of her to tell children that we were of Aboriginal ancestry when it is uncertain and unsure?

Well, she could discuss the possibility that it was there, and if you wanted to go further looking into it, by all means.

I am not trying to lay blame on her, but the fact is that growing up it is what we were told and it is what we believed, and it is maybe untrue. Do you think that there is perhaps something unethical about it? Or is it a positive thing?

Well, if they are old enough and they are uncertain, and that child wants to follow it through, then all should be done one way or another to see if it can be proved.

Does any part of you want a resolute answer? Would it make you feel more comfortable?

Yes, particularly if it was positive.

So you would feel better if it was positive?

Yes, but I can see no difference in having Indigenous heritage.

But you are saying that there is no difference between white and black culture, yet you would be happier if our family were Aboriginal?

Well, I would like to know if it was, and that way you can connect to the things that you miss out on if you don't do anything about it.

How do you feel about acceptance by community? With some people within the family being accepted, and others not? Where does it all play out?

Well, as I said, that's where Barbara and Trevor... because Trevor's grandmother, I think it was, was a Baker and was declared Aboriginal. So Barbara and Trevor's side of the family have been accepted as Indigenous. And that Baker was related to my grandmother, so I can't see why it isn't on her side too.

At the start you said you didn't feel Indigenous, but have you ever felt confused about your relationship to Indigenous culture? Or have you always felt comfortable with where you stood?

I am always comfortable, but as I said, it was such a difference at school, with the way Indigenous people were classed. But no, it has never worried me. I have always been happy, and if I discovered I was, it wouldn't change me in any way. But if I was, then it would be nice to know more of the culture.
High-performance liquid chromatography analysis of naturally occurring D-amino acids in sake. We measured all of the D- and L-amino acids in 141 bottles of sake using HPLC. We used two precolumn derivatization methods for amino acid enantiomer detection, with o-phthalaldehyde and N-acetyl-L-cysteine as well as (+)-1-(9-fluorenyl)ethyl chloroformate/1-aminoadamantane, and one postcolumn derivatization method with o-phthalaldehyde and N-acetyl-L-cysteine. We found that the sakes contained the D-amino acid forms of Ala, Asn, Asp, Arg, Glu, Gln, His, Ile, Leu, Lys, Ser, Tyr, Val, Phe, and Pro. We were not able to detect D-Met, D-Thr, or D-Trp in any of the sakes analyzed. The most abundant, D-Ala, D-Asp, and D-Glu, ranged from 66.9 to 524.3 μM, corresponding to relative D-enantiomer fractions of 34.4, 12.0, and 14.6%, respectively. The basic parameters that generally determine the taste of sake, such as the sake meter value (SMV; "Nihonshudo"), acidity ("Sando"), amino acid value ("Aminosando"), alcohol content by volume, and the rice species of the raw material, showed no significant relationship to the D-amino acid content of sake. The brewing water ("Shikomimizu") and brewing process had effects on the D-amino acid content of the sakes: the D-amino acid contents of the sakes brewed with deep-sea water "Kaiyoushinosousui", "Kimoto yeast starter", "Yamahaimoto", and the long aging process "Choukijukusei" are high compared with those of the other sakes analyzed. Additionally, the D-amino acid content of sakes that were brewed with the adenine auxotroph of sake yeast ("Sekishoku seishu kobo", Saccharomyces cerevisiae) without pasteurization ("Hiire") increased after storage at 25 °C for three months.
BOSTON — One of the biggest problems small entrepreneurs have had starting marijuana businesses in Massachusetts is a lack of money. Starting a business is expensive, and traditional loans and financing options are generally not available because of federal prohibition. State senators will decide this week whether to take the first step toward creating a no-interest loan fund managed by the state to help marijuana entrepreneurs get their businesses off the ground. The fund would be available only to social equity and economic empowerment applicants, designations given by the Cannabis Control Commission to applicants from communities that were disproportionately impacted by marijuana prohibition and enforcement. Sen. Sonia Chang-Diaz, D-Boston, Senate chair of the Joint Committee on Cannabis Policy, introduced the amendment to be considered during the Senate’s fiscal 2020 budget debate this week. Chang-Diaz said the top barrier social equity applicants have faced is access to capital. She said lawmakers wrote into the state’s marijuana law the need to “make sure it’s an industry that benefits Massachusetts residents, not just multi-state corporations” and that benefits communities disproportionately harmed by the War on Drugs. “I’m looking for feedback from the CCC and the field, if we’re not hitting the mark, what else needs to happen?” Chang-Diaz said. The Cannabis Control Commission began discussing the idea of a state-run loan fund as part of a larger discussion in February about how to improve access to the industry for social equity applicants. So far, most marijuana licenses have been granted to larger companies, with little diversity. Of nearly 350 license applications submitted as of early April, only five had priority status as “economic empowerment” applicants. Over 300 of the applicants did not identify as a “disadvantaged business enterprise,” which includes categories like women and minority-owned businesses. Cannabis Control Commission Chairman Steven Hoffman said in February that he has been talking to banks, charitable foundations, wealthy individuals and the industry about ways to help small and minority business owners access financing. He also agreed to explore the possibility of asking the Legislature to create a program that would offer grants or interest-free loans to equity participants – potentially similar to existing programs that help inner city entrepreneurs. Chang-Diaz’s amendment would allocate $1 million to start a Cannabis Social Equity Loan Trust Fund. The money would have to be matched by private donations. Going forward, the amendment proposes that the fund would be paid for with 10% of revenue the state gets from marijuana excise taxes and private donations. The fund’s regulations would be written by the Cannabis Control Commission. It would be administered by the secretary of housing and economic development. Asked about concerns that taxpayers could be on the hook if a business fails, Chang-Diaz said state officials could establish qualifications for eligibility to make sure the state is taking “reasonable risk.” Certain amounts of money could be reserved for different phases of the licensing process. Chang-Diaz said Massachusetts has the country’s strongest law in terms of ensuring the industry will benefit communities hurt by marijuana enforcement. “We should be proud of that, but we have to actually make sure we’re getting there,” she said. 
“That it’s not just words on paper, we’re actually achieving the goals the Legislature rightly articulated last session.”
Versions for other platforms were also made. In 1993 Sega released a Master System version of the game specifically for the European market, while in 1994 Hudson Soft remade the game for the Turbo Duo under the title of The Dynastic Hero (超英雄伝説ダイナスティックヒーロー, Chō Eiyū Densetsu Dainasutikku Hīrō?), featuring an all-new theme and cast of characters. In 2007, the Turbo Duo and Mega Drive versions were re-released on the Wii Virtual Console download service.

Wonder Boy in Monster World puts you in control of Shion in his quest to save Monster World from the evil BioMeka. It controls like your standard platform game: run, jump, crouch, and kill enemies. The game is filled with adventure elements close to the ones in The Legend of Zelda, such as talking to townsfolk, collecting money to buy items, extending your life bar by collecting hearts, and equipping a large variety of armor, weapons and magic.

Shion travels through the many interconnected regions of Monster World, all the while collecting increasingly powerful equipment in the form of many different swords, spears, shields, suits of armor, and boots. The game introduced a one-slot save feature to save progress at inns throughout the game world. In the Japanese original, Shion returned to the inn last saved at upon death (and was charged its fee accordingly), so returning to an inn in order to save is a simple matter of allowing Shion to be killed. In the English-language Mega Drive versions this was changed to a "Game Over" screen, which made it often tedious to return to the inns early in the game, when Return magic hadn't been obtained yet.

During his travels, Shion will often be joined by a small companion who follows him around. Each companion is bound to the region he or she belongs to, and will return to their respective homes when you leave said region. All travel companions will also temporarily stay out of action during boss fights.

Priscilla: A small fairy who hails from Alsedo, the fairy village. She joins Shion when he talks to Queen Eleanora. Priscilla will randomly fly over to an enemy and bop the enemy with her wand, but this does no damage and is completely useless. Also, when Shion's health is getting low, she may conjure up a few small hearts for Shion to catch.

Hotta: A dwarf kid who lives in the dwarf village of Lilypad. He will follow Shion around when you save him from the bushmen. Hotta can break open some walls, enabling Shion's entry into the nearby temple, as well as uncovering a couple of hidden rooms in the said temple. He also randomly digs up a fountain of small coins.

Shabo: A little summoned reaper, whom Shion obtains in Childam, the Darkworld village. He will fly alongside Shion through the Ice Caverns, attacking enemies randomly with his throwing scythe.

Elder Dragon's Grandson: The Elder Dragon's grandson hatchling, who will accompany Shion in Begonia, the Dragon village. He can help him through the volcano, frequently attacking enemies with his fire breath, but this can be more of a hindrance than a help, as it does low damage and puts enemies in stun lock, preventing standard attacks from hitting for a short amount of time, meaning the player character may get hit as soon as the stun lock ends.

Queen Eleanora: Her village is the first you encounter. She'll lend you Priscilla, one of her fairy companions, to aid you through your journey through the mushroom-infested forest lair near Alsedo.

Princess Shiela Purapril: You save her early in the game from a Dark Knight.
She'll give you advice at a few points throughout the game, if you care to visit her. Near the end, the game eventually suggests some sort of love interest as well.

Elder Dwarf: He's distraught about the kidnapping of Hotta, a young dwarf who lives in the village of Lilypad.

Elder Dragon: This wise dragon will tell you how to obtain the materials for creating the Legendary Sword. He also sends his grandson along with you on your journey through the volcano.

The Darkworld Prince: The Prince has gone missing recently. Rumor has it he was abducted by an evil force...

The Sega Master System port is somewhat different. It features re-drawn graphics, fewer and shorter stages, and a complex password system (approximately 40 digits in length) rather than battery-backed save data.

Hudson Soft later released a slightly re-branded version for the Turbo Duo titled The Dynastic Hero. It features palette-swapped visuals, new insect-themed graphics for the main characters (and insects' natural predators as bosses), a Red Book audio soundtrack which is completely different from the Wonder Boy original, and anime-style cutscenes at the intro and ending. Shion was renamed Dyna and was modeled after a Hercules beetle, and the final boss was changed to a giant lizard king. An English-language version was also produced, but both were built off of the Japanese version of Wonder Boy in Monster World, so they feature the same difficulty and mechanics as the Japanese version. This particular version was released on Nintendo's Virtual Console service in Europe on November 30, 2007 and in North America on December 3, 2007.

Tec Toy, Sega's distributor in Brazil, altered the Mega Drive version and released it as Turma da Mônica na Terra Dos Monstros (translated as Monica's Gang in the Land of Monsters). Like other Wonder Boy-to-Monica conversions, the game is in Portuguese, the main character is Monica (Mônica in Portuguese) from the Monica's Gang (Turma da Mônica in Portuguese) comics, and other elements and characters from it were added.

Electronic Gaming Monthly gave the Turbo Duo version a 7.2 out of 10, praising the music, graphics, and vast size of the game.[2] GamePro were less impressed, remarking that the characters "have that doe-eyed look reminiscent of the best motel art" and that figuring out how to use some of the items is difficult. They did praise the game's emphasis on action over dialogue and travel, but concluded, "Still, it appears that the designers didn't work too hard to inject much freshness, like a more intriguing story line or more realistic graphics. That's what makes Dynastic Hero a 'run of the mill' rather than a 'better' RPG."[3]
Herman Cyril McNeile, writing under the pseudonym "Sapper," first introduced the legendary ex-military man turned detective, Bulldog Drummond, to mystery fans in 1920. In 1929, Samuel Goldwyn wisely chose to adapt the character as a vehicle for his major male attraction, Ronald Colman. Drummond's witticisms perfectly suited both the star and the movies' new "talkie" revolution. As an early sound achievement, this big-budget thriller remains a watershed transition effort, and it became a smash hit at the time, an element that 20th Century (soon to merge with Fox) remembered when they hired Colman to reprise the suave, handsome adventurer in 1934 for the appropriately entitled Bulldog Drummond Strikes Back. Two years previously, the British had made their first Drummond outing, The Return of Bulldog Drummond, featuring the celebrated young thespian Ralph Richardson. By the time MGM resurrected the character in 1951, as a frozen-funds entry for their UK studios, the master sleuth had made 21 screen appearances, becoming the star of an excellent string of "B"s, whose various incarnations were released by Paramount, Columbia and Fox respectively, and personified by Messrs. John Howard, Ron Randell and John Newland. For the MGM entry, Calling Bulldog Drummond (1951), directed by the extremely busy Victor Saville (who had also helmed Conspirator, 1949), the lead acting chores fell upon the always reliable Walter Pidgeon, who did the picture after finishing up his co-starring duties on The Miniver Story (1950), MGM's long-awaited sequel to their 1942 blockbuster. A crisp, fast-moving mystery thriller of the kind the Brits do so well, Calling Bulldog Drummond had the now-retired shamus being coaxed back into service by Scotland Yard. With wonderful support from Margaret Leighton, Robert Beatty and David Tomlinson, this modest, unpretentious entry did not disappoint fans, save in being the sole contribution of Pidgeon and MGM to the series. Of special note is the participation of Bernard Lee, best known to James Bond enthusiasts as Her Majesty's Secret Service head M; ironically, in 1966, Drummond would once again be revived as a souped-up, Bond-inspired crime fighter in the underrated Deadlier Than the Male, starring one of the original 007 contenders, Richard Johnson.
The cardiovascular system is made up of three main sections:

a. The heart: the driving organ, which provides the exact amount of blood momentarily required; it delivers energy-loaded strokes of blood at a governed rate and shape.

b. The vascular bed: the distribution system of the blood to the various organs, in accordance with the specific needs of each of them. It also acts as a collecting system and general reservoir for blood, towards its reallocation to the heart, to be distributed on a beat-by-beat basis in accordance with the integrated needs of the entire body.

c. The control: the control system is made up of subsystems which control the various organs via feedback loops. All these control loops are part of the central nervous system (CNS), where they are evaluated and integrated. The CNS then operates the vascular flow towards the heart to provide the exact amount of blood to be circulated on the next beat, and activates the heart to deliver that volume of "blood stroke" at a predetermined rate and shape.

The heart's ventricles are operated by the following parameters, which are governed by the heart's own control system:

1. Preload: the pressure of the atrium preceding the acting ventricle, which defines the volume of the next stroke;

2. Afterload: the "pressure head" against which the ventricle has to act;

3. Contractility: the capability of the myocardium (the heart's muscle) to apply the force needed to reach the required stroke;

4. Rate: the heart's beats per minute, which together with the stroke volume define the cardiac output.

The coordinated performance of all the above operators is manifested in the cardiac output, which provides the right amount of blood per beat at the right pressure wave. In case of heart failure, the entire flow system is disturbed. Such disturbance is manifested by an inadequate cardiac output and damming of blood behind the defective heart chambers. Depending on the kind, rate of development, and severity of the heart failure, a whole set of compensating mechanisms is activated in the vascular system by its control system, reallocating and affecting pressures and directing the reduced flow to the various organs, bringing the whole vascular system to a new balance point. This new balance point is set by the central control system, which operates the heart's controls as well. However, in acute heart failure (such as acute myocardial infarction, AMI), a reduction in cardiac output and acute damming of blood behind the affected ventricles occur. This effect might be too large to be regulated by those compensatory mechanisms and thus be fatal. If all conventional therapeutic treatments of the heart failure fail, mechanical support of the heart and circulation is required, and more specifically, ventricular assistance. This form of sustained life assist might be either temporary or permanent. A temporary heart-assist system can, for example, be applied to patients who cannot be resuscitated (such as at AMI, or at the end of an open-heart operation) while recovery of the ventricular function is anticipated. If this does not happen, the device serves as a "bridge to transplantation" until a donor heart becomes available. Permanent ventricular assistance should be applied to patients who sustain permanent damage, where recovery is not anticipated and the patient is not a candidate for heart transplantation, for any reason.
As the value and use of information continues to increase, individuals and businesses seek additional ways to process and store information. One option available to users is information handling systems. An information handling system generally processes, compiles, stores, and/or communicates information or data for business, personal, or other purposes, thereby allowing users to take advantage of the value of the information. Because technology and information handling needs and requirements vary between different users or applications, information handling systems may also vary regarding what information is handled, how the information is handled, how much information is processed, stored, or communicated, and how quickly and efficiently the information may be processed, stored, or communicated. The variations in information handling systems allow for information handling systems to be general or configured for a specific user or specific use such as financial transaction processing, airline reservations, enterprise data storage, or global communications. In addition, information handling systems may include a variety of hardware and software components that may be configured to process, store, and communicate information and may include one or more computer systems, data storage systems, and networking systems. An indispensable part of an information handling system is its power system. A power system of an information handling system, or other device, may include a power source, one or more voltage regulators for controlling one or more voltages distributed to various components of the information handling system, a power controller for controlling operation of the various voltage regulators, and a distribution network for distributing electrical current produced by the one or more voltage regulators to various components of the information handling system. Often it is desirable to intentionally disable or power down a voltage regulator in order to reduce power consumption in an information handling system. For example, it may be determined that a portion of memory on an information handling system is unused. Accordingly, a voltage regulator providing electrical current to such portion of memory may be intentionally disabled in order to reduce power consumption. The disabling may occur as a result of a message or command communicated to the voltage regulator by a processor or access controller, and any such command may be communicated automatically (e.g., by means of a program executing on the processor that determines that it is advisable to disable a voltage regulator in order to conserve power) or manually (e.g., a user or administrator may determine that it is advisable to disable a voltage regulator and manually issue a command to do so). However, when a voltage regulator is disabled, it may indicate to a power controller that the voltage regulator is no longer receiving or producing power. Accordingly, unless the fact that the voltage regulator has been intentionally disabled (as opposed to experiencing a fault condition) is communicated to the power controller, the power controller may detect a false fault, and may in response take unneeded and undesirable corrective action (e.g., by disabling power throughout other portions of the information handling system). In traditional approaches, such false faults are masked via a somewhat complex method involving handshakes among various components of the information handling system.
For example, if an intentional disable originates from a processor, the processor may communicate to the information handling system's basic input/output system (BIOS) of the intent to disable a voltage regulator prior to issuing a disable command to the voltage regulator. The BIOS may then write a masking bit to a register file in a power controller to prevent the power controller from interpreting the disabling of the voltage regulator as a fault. If an intentional disable originates from an access controller, the access controller may, prior to issuing a disable command to the voltage regulator, write a masking bit to a register file in a power controller to prevent the power controller from interpreting the disabling of the voltage regulator as a fault. Such complex handshaking requires undesirable design complexity.
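To make the masking handshake concrete, the following is a minimal sketch of the order of operations the passage describes. The register address, bit layout, and function names are invented for illustration; real power controllers expose vendor-specific register maps, often accessed over an interface such as PMBus/I2C.

#include <cstdint>
#include <cstdio>

// Hypothetical register interface; a real system would issue bus writes
// defined by the power controller vendor's register map.
constexpr uint8_t FAULT_MASK_REG = 0x42;  // invented address

void write_power_controller_reg(uint8_t reg, uint8_t value) {
  std::printf("PMC write: reg 0x%02X <= 0x%02X\n", reg, value);  // stub
}

void send_vr_disable_command(int vr_id) {
  std::printf("VR%d: disable command sent\n", vr_id);  // stub
}

// Intentionally disable voltage regulator vr_id without tripping a fault.
void intentional_vr_disable(int vr_id) {
  // Mask first: the power controller must ignore the loss of output power
  // that follows, or it will detect a false fault and may take unneeded
  // corrective action (e.g., dropping other rails).
  write_power_controller_reg(FAULT_MASK_REG,
                             static_cast<uint8_t>(1u << vr_id));
  // Only then command the regulator off.
  send_vr_disable_command(vr_id);
}

int main() { intentional_vr_disable(3); }

The key design point, as the passage notes, is the ordering: the masking bit must reach the power controller's register file before the regulator is commanded off, which is what forces the multi-party handshake between processor, BIOS or access controller, and power controller.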
Major rebuild or minor adjustments? What Liverpool fans said

Jurgen Klopp said he does not need to oversee a major rebuild at Liverpool FC – but the supporters appear to disagree. In some positions, at least.

We assessed, position-by-position, just what needed a major rebuild, and what needed a few minor adjustments. Then, we asked the fans to get voting and tell us what they thought.

Nearly 50,000 votes were cast to say which parts of this current squad Klopp must rebuild, and which bits only need a little tweak. The majority of voters feel the wing is the position that needs the biggest renovation, but attacking midfield is just right.

You can see the full results by clicking through the gallery above. Do you agree with the results? Leave your comments below
Saint John Hospital Reviews Overall Featured review Saint John Hospital Leavenworth, KS Registered Nurse, Leavenworth, KS - July 7, 2015 I have the best co-workers ever and they are the main reason I have stayed at this hospital so long. We are like a family, we care about each other inside & outside of work. They are truly some of my best friends. I love that I know most of the people I work with, including ancillary teams. I like that I got experience in different areas by floating to other units. However, being a small facility there are a lot of issues with favoritism. CNO is friends with supervisors and employees outside of the hospital, makes for some working environment conflicts and favoritism.
1.. Introduction
================

Practical control systems are susceptible to component malfunctions which may cause significant performance degradation and even instability of the system. The past two decades have therefore seen considerable research on Fault Tolerant Control (FTC). FTC systems are designed to allow recovery from damage and system faults. When it comes to electrical drives used in safety critical applications or industrial processes where system faults may lead to enormous costs, FTC systems are crucial \[[@b1-sensors-12-04031]\]. Stator, rotor and shaft faults together constitute up to 47% of recorded induction motor faults \[[@b2-sensors-12-04031]\]. Fourier Transform (FT) techniques, such as those using high resolution frequency estimation \[[@b3-sensors-12-04031]\] and signal demodulation \[[@b4-sensors-12-04031]\], have been applied to fault detection. The drawback of FT techniques is that they provide information only about the frequency domain, not the time domain \[[@b5-sensors-12-04031]\]. Also, the Fourier Transform does not allow the use of current as a basis for fault detection, because the current through a faulty motor is non-stationary and contains minor transients \[[@b6-sensors-12-04031]\]. Artificial intelligence techniques have also been proposed \[[@b7-sensors-12-04031]--[@b9-sensors-12-04031]\]. A very promising avenue in motor fault detection is wavelet-based analysis. Wavelets provide both time and frequency domain information. Chow *et al.* \[[@b10-sensors-12-04031]\] used a Gaussian-enveloped oscillation wavelet for fault detection, although they restricted their study to mechanical faults. A more extensive wavelet-based fault detection algorithm was used by Schmitt *et al.* \[[@b11-sensors-12-04031]\], with open winding faults, unbalanced voltage and unbalanced stator resistance taken into consideration. No hardware implementation was presented, however. Most recently, detection of stator winding shorts was presented in \[[@b12-sensors-12-04031]\]. That work focused on only one type of fault.

In this paper, a fault tolerant control strategy which deals with a wide range of induction motor faults is implemented. A vector control drive with an encoder is the dominant control scheme. In the event of an encoder fault, the system switches to sensorless vector control. If the stator winding is open circuited or shorted, a closed loop V/f controller takes over. If a minimum voltage fault occurs, the system goes to open loop V/f control. Even further deterioration activates a protection circuit which halts the motor. Faults are detected using a wavelet index. The four different controllers ensure the effectiveness and availability of the control scheme. The wavelet index is shown to be an excellent fault indicator. Additionally, the system has the ability to revert back to the dominant controller if the motor resumes normal operation, thus ensuring its availability at all times. Moreover, the protection circuit requires no extra hardware, thus reducing the cost of the drive. Additionally, the sensorless vector control features a novel Boosted Model Reference Adaptive System (BMRAS) to estimate the speed, which eliminates the need for a PI controller and thus much tuning. The fault tolerant algorithm was first implemented in Matlab/Simulink and then verified experimentally.

This paper is organized as follows. Section 2 describes the motor control strategies used in this work. The BMRAS controller is presented in Section 3.
Section 4 explains the wavelet transform. The fault tolerant control strategy is described in Section 5. The experimental results are presented in Section 6. Finally, concluding remarks are given in Section 7.

2.. Control Strategies of the Induction Motor
=============================================

2.1.. Sensor Vector Control
---------------------------

Vector control decouples the flux and torque currents so as to linearly control the output torque of a nonlinear induction motor. The three phases of voltage and current are transformed to two-phase dq axes. The dq frame rotates synchronously with the rotor flux space vector. The expression for torque in an induction motor is \[[@b13-sensors-12-04031]\]:

$$T_{e} = \frac{3}{2}\,p\,\frac{L_{m}}{L_{r}}\left( \Phi_{rd}\,i_{sq} - \Phi_{rq}\,i_{sd} \right)$$

According to the orientation of [Figure 1](#f1-sensors-12-04031){ref-type="fig"}, Φ*~rq~* becomes zero. The new expression becomes:

$$T_{e} = \frac{3}{2}\,p\,\frac{L_{m}}{L_{r}}\,\Phi_{rd}\,i_{sq}$$

where *L~m~, L~r~, p, Φ~rd~, Φ~rq~, i~sd~, i~sq~* and *T~e~* are the mutual inductance, rotor inductance, pole pairs, direct and quadrature rotor flux components, direct and quadrature stator current components, and electromagnetic torque, respectively.

As is clear from [Equation (2)](#FD2){ref-type="disp-formula"}, the motor torque can be controlled by controlling the quadrature component of the stator current, *i~sq~*. Vector control with a sensor is the dominant controller in this work, due to its straightforward implementation. The following calculations are carried out in the vector control according to the Park transformation:

$$\begin{bmatrix} i_{qs} \\ i_{ds} \end{bmatrix} = \begin{bmatrix} \cos\varphi & \sin\varphi \\ -\sin\varphi & \cos\varphi \end{bmatrix} \begin{bmatrix} i_{Q} \\ i_{D} \end{bmatrix}$$

This operation is illustrated in [Figure 2](#f2-sensors-12-04031){ref-type="fig"}. The dq to abc transformation is:

$$\begin{bmatrix} i_{as} \\ i_{bs} \\ i_{cs} \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ -1/2 & \sqrt{3}/2 \\ -1/2 & -\sqrt{3}/2 \end{bmatrix} \begin{bmatrix} i_{ds} \\ i_{qs} \end{bmatrix}$$

Therefore, the rotor flux and the torque can be independently controlled through the stator current in the *dq*-axis, obtaining a linear current/torque relationship. The Simulink model is shown in [Figure 3](#f3-sensors-12-04031){ref-type="fig"}.

2.2.. Sensorless Vector Control
-------------------------------

The encoder used for position and speed measurement may lead to problems. Faults such as loss of output information, offset, disturbances, measurement deviation and channel mismatch may occur \[[@b14-sensors-12-04031]\]. Sensorless vector control of induction motor drives estimates position using an observer and eliminates the need for the speed sensor. It reduces hardware complexity, size, maintenance and ultimately cost. It also eliminates direct sensor wiring and has been shown to have better noise immunity and increased reliability \[[@b15-sensors-12-04031]\]. The Simulink implementation of sensorless vector control is shown in [Figure 4](#f4-sensors-12-04031){ref-type="fig"}.

2.3..
2.3.. Volt to Frequency (V/f) Control
-------------------------------------

The V/f control is one of the most popular control techniques due to the following reasons:

- It is a simple algorithm
- There is no need of current sensors
- There is no requirement of speed measurement

The following equations explain the principle of V/f:

$$\hat{V} \approx j\omega\hat{\Lambda}$$

where $\hat{V}$ and $\hat{\Lambda}$ are the phasors of stator voltage and stator flux, respectively, and *ω* is the angular supply frequency:

$$|\hat{V}| \approx |j\omega\hat{\Lambda}|$$

$$V \approx 2\pi f\Lambda$$

$$\Lambda = \frac{1}{2\pi f}V\quad\textit{or}\quad\Lambda = \frac{1}{2\pi}\frac{V}{f}$$

The stator flux in an induction motor is proportional to the ratio of the applied voltage to the supply frequency, so the stator flux remains constant if the ratio V/f is kept constant while the frequency changes. Varying the frequency changes the speed. With the voltage to frequency ratio maintained, flux and torque can be kept constant throughout the speed range. The speed is adjusted by varying the frequency (*f*), maintaining V/f constant to avoid flux saturation, as is shown in the following equations:

$$E_{\textit{airgap}} = \textit{kf}\phi_{\textit{airgap}}$$

For constant air gap flux (*ϕ~airgap~*):

$$E_{\textit{airgap}}/f \approx V/f$$

For instance, a motor rated at 400 V and 50 Hz run at 25 Hz would be supplied with roughly 200 V, preserving the 8 V/Hz ratio. V/f is a much simpler control strategy than vector control and does not require high performance digital processing \[[@b16-sensors-12-04031]\], which makes it suitable as a backup control strategy in the event of faults. While it is generally implemented in open loop, a closed loop approach is also adopted here for higher accuracy of the speed response: a PI controller regulates the slip speed of the motor to keep the motor speed at its set value.

3.. Boosted Model Reference Adaptive System (BMRAS)
===================================================

Model Reference Adaptive Systems (MRAS) are used to estimate quantities using a reference model and an adaptive model. The difference between the outputs of the two models drives an adaptation mechanism that provides the quantity to be estimated. A conventional MRAS uses a simple fixed gain linear PI controller to generate the estimated rotor speed, and this PI controller consumes time for tuning. In this work, the PI controller is replaced with a 'booster', which cuts down on tuning time while providing a good response. The booster is constructed using a rate limiter and a zero order hold.

Taking the system shown in \[[@b17-sensors-12-04031]\], the reference model can be expressed in the following equations:

$$p\lambda_{\textit{dr}} = L_{r}/L_{m}(v_{\textit{ds}} - R_{s}i_{\textit{ds}} - \sigma L_{s}di_{\textit{ds}}/dt)$$

$$p\lambda_{\textit{qr}} = L_{r}/L_{m}(v_{\textit{qs}} - R_{s}i_{\textit{qs}} - \sigma L_{s}di_{\textit{qs}}/dt)$$

The adaptive model can be expressed in the following equations:

$$p\lambda_{\textit{qr}}^{\prime} = (L_{m}/T_{r})i_{\textit{qs}} - (1/T_{r})\lambda_{\textit{qr}}^{\prime} + \omega_{r}^{\prime}\lambda_{\textit{dr}}^{\prime}$$

$$p\lambda_{\textit{dr}}^{\prime} = (L_{m}/T_{r})i_{\textit{ds}} - (1/T_{r})\lambda_{\textit{dr}}^{\prime} - \omega_{r}^{\prime}\lambda_{\textit{qr}}^{\prime}$$

$$\varepsilon = \lambda_{\textit{qr}}\lambda_{\textit{dr}}^{\prime} - \lambda_{\textit{dr}}\lambda_{\textit{qr}}^{\prime}$$

where *R~s~, L~s~, v~ds~, v~qs~, T~r~* and *ω~r~* are the stator resistance, stator inductance, direct and quadrature components of stator voltage, rotor time constant and rotor speed, respectively; here *p* denotes the derivative operator and *σ* the total leakage coefficient.
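For illustration, the two flux models and the speed tuning error of Equations (11)–(15) can be written as a simple discrete-time sketch in Python. The resistances and inductances below follow Table 1 where available; the mutual inductance *L~m~* and the stator current derivative estimates are placeholders, not identified machine parameters:

```python
import numpy as np

# Parameters from Table 1; Lm is not listed there and is assumed.
Rs, Rr = 20.9, 19.5               # stator / rotor resistance [ohm]
Ls, Lr = 0.05, 0.05               # stator / rotor inductance [H]
Lm = 0.048                        # mutual inductance [H] (placeholder)
sigma = 1.0 - Lm**2 / (Ls * Lr)   # total leakage coefficient
Tr = Lr / Rr                      # rotor time constant [s]
dt = 1e-4                         # integration step [s]

def reference_model_step(lam, v_dq, i_dq, di_dq):
    """One Euler step of the voltage ('reference') model, Eqs. (11)-(12).
    lam, v_dq, i_dq, di_dq are arrays ordered [d, q]; di_dq is an
    externally supplied estimate of the stator current derivatives."""
    d = (Lr / Lm) * (v_dq - Rs * i_dq - sigma * Ls * di_dq)
    return lam + dt * d

def adaptive_model_step(lam, i_dq, w_r):
    """One Euler step of the current ('adaptive') model, Eqs. (13)-(14)."""
    lam_dr, lam_qr = lam
    d_dr = (Lm / Tr) * i_dq[0] - lam_dr / Tr - w_r * lam_qr
    d_qr = (Lm / Tr) * i_dq[1] - lam_qr / Tr + w_r * lam_dr
    return lam + dt * np.array([d_dr, d_qr])

def speed_tuning_error(lam_ref, lam_adp):
    """Cross-product error of Equation (15) that drives the booster."""
    return lam_ref[1] * lam_adp[0] - lam_ref[0] * lam_adp[1]
```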
The error between the reference and adaptive outputs, along with the reference speed (*N~ref~*), is passed to the booster block shown in [Figure 5](#f5-sensors-12-04031){ref-type="fig"}. The initial condition of both signals is kept at zero. The rate limiter restricts the change of the signal passed to it by limiting the slope: the upper limit is called the rising slew parameter (*δ*) and the lower limit is the falling slew parameter (*γ*). The output of the rate limiter is calculated as follows:

$$O_{o/p}(i) = \Delta t \cdot \delta + O_{o/p}(i - 1),\quad\text{if}\;\bigl(N(i) - O_{o/p}(i - 1)\bigr)/\Delta t > \delta$$

$$O_{o/p}(i) = \Delta t \cdot \gamma + O_{o/p}(i - 1),\quad\text{if}\;\bigl(N(i) - O_{o/p}(i - 1)\bigr)/\Delta t < \gamma$$

$$O_{o/p}(i) = N(i)\quad\text{otherwise}$$

where *N* refers to the input to the rate limiter. The output is passed to a Zero Order Hold (ZOH) to generate a continuous time input by holding each sample value constant over one sample period. The ZOH thus acts as a hypothetical filter that gives a piecewise-constant signal, as is demonstrated by the following equation:

$$O_{\textit{ZOH}_{o/p}}(t) = \sum_{n = - \infty}^{\infty}{N_{\textit{in}}\lbrack n\rbrack \cdot \textit{rect}}\left(\frac{t - \textit{nT}}{T} - \frac{1}{2}\right)$$

Finally, the estimated speed is calculated as follows:

$$\textit{spd\ est}(i) = N_{\textit{ref}}(i) - \omega_{\textit{Booster}}(i)$$

The BMRAS was tested in both simulation and experiments (the experimental setup is described in Section 6). [Figure 6](#f6-sensors-12-04031){ref-type="fig"} shows good tracking by the BMRAS of low speeds, high speeds and step changes in speed over a long operating period, with no steady state error. The experimental results up to 1,600 rpm also show fast settling time and low steady state error (less than 30 rpm), as is seen in [Figure 7](#f7-sensors-12-04031){ref-type="fig"}, which presents the serial communication interface output over 3.5 s. According to the computer simulation and experimental results shown above, the system shows a fast response with higher accuracy than the conventional MRAS in the literature \[[@b18-sensors-12-04031]\].
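As an illustration of Equations (16)–(20), a discrete-time booster can be sketched as follows; the slew parameters *δ* and *γ* and the sample period here are illustrative values, not those used on the DSP:

```python
def rate_limiter(prev_out, n_in, dt, delta, gamma):
    """Slew-rate limiter of Equations (16)-(18): the output follows the
    input, but its rate of change is clamped to [gamma, delta]."""
    rate = (n_in - prev_out) / dt
    if rate > delta:        # rising too fast  -> Equation (16)
        return prev_out + dt * delta
    if rate < gamma:        # falling too fast -> Equation (17)
        return prev_out + dt * gamma
    return n_in             # within limits    -> Equation (18)

class Booster:
    """Discrete-time booster: rate limiter followed by a zero-order hold,
    producing the speed estimate of Equation (20)."""
    def __init__(self, dt=1e-4, delta=500.0, gamma=-500.0):
        self.dt, self.delta, self.gamma = dt, delta, gamma
        self.held = 0.0     # initial conditions kept at zero, as in the text

    def step(self, error, n_ref):
        limited = rate_limiter(self.held, error, self.dt,
                               self.delta, self.gamma)
        self.held = limited             # ZOH: held for one sample period
        return n_ref - self.held        # spd_est(i) = N_ref(i) - w_Booster(i)
```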
4.. Wavelet Index
=================

A wavelet is an orthogonal function that can be applied to a finite group of data \[[@b19-sensors-12-04031]\]. While Fourier analysis techniques have been used extensively for induction motor fault diagnosis, they require large amounts of data \[[@b5-sensors-12-04031]\]. Also, Fourier techniques inform us only about the frequency components of signals, while wavelet transforms provide both time and frequency information. They are therefore more comprehensive and have wider-ranging applications. Wavelet coefficients, at a first level of decomposition, are obtained from a signal by applying a mother wavelet, which represents a family of functions that must satisfy a number of criteria.

The mother wavelet, denoted by *ψ(t)*, must have a zero mean, as shown in [Equation (21)](#FD21){ref-type="disp-formula"}:

$$\int_{- \infty}^{\infty}{\psi(t)\, dt} = 0$$

It must also have a square norm of one, as is seen in [Equation (22)](#FD22){ref-type="disp-formula"}:

$$\int_{- \infty}^{\infty}{{|\psi(t)|}^{2}\, dt} = 1$$

A general equation of the mother wavelet, shown in [Equation (23)](#FD23){ref-type="disp-formula"}, that shows the family of wavelets it represents, can be obtained by adding a scaling factor *a* and a translation factor *b*:

$$\psi_{a,b}(t) = {|a|}^{- 1/2}\psi\left(\frac{t - b}{a}\right)$$

Wavelet coefficients are obtained using a low pass filter to obtain what is called an 'approximation' signal, while a high pass filter provides 'details'. The approximation signal is progressively decomposed into further approximations and details, until the desired level of decomposition is reached \[[@b11-sensors-12-04031]\].

In this work, changes in the waveform of the stator current are used as the basis for detecting faults. The current signal is passed through the wavelet transform. For every detail obtained from the high pass filter, the energy is calculated by adding the squared coefficients of the details and the final approximation. The maximum energy serves as the most effective piece of information to determine the wavelet index. The index is calculated according to [Equation (24)](#FD24){ref-type="disp-formula"}:

$$W_{\textit{indx}} = \textit{abs}(\textit{energy\ of\ selected\ level})/\textit{average}(\textit{energy}(\textit{Ia}))$$

The energy is calculated according to [Equation (25)](#FD25){ref-type="disp-formula"}:

$$\textit{energy\ of\ selected\ level} = {\int{d8}^{2}\textit{dt}}$$

where d8 refers to the detail coefficients at the eighth decomposition level, which carry the information about the stator current status obtained from the high pass filter. A Daubechies wavelet (db10) is used as the mother wavelet from which the wavelet index is generated. The Simulink implementation is shown in [Figure 8](#f8-sensors-12-04031){ref-type="fig"}. The number of wavelet decomposition levels used in [Equation (24)](#FD24){ref-type="disp-formula"} was selected according to the following criterion:

$$W_{\textit{decomp}} = \frac{\log(f_{s}/f)}{\log(2)} \pm 1$$

where *f~s~* is the sampling frequency (20 kHz) and *f* is the source frequency. With a 50 Hz supply, for example, this gives log~2~(20,000/50) ≈ 8.6, i.e., eight or nine admissible levels, consistent with the use of the eighth-level detail d8. The optimal decomposition level is gauged together with the choice of the optimum mother wavelet. Shannon entropy guides this selection by determining the entropy of each original (parent) subspace of the DWT and comparing it with that of its new (children) subspaces.
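The index computation of Equations (24)–(26) can be prototyped offline, for example with the PyWavelets package. The sketch below is illustrative only (the implementation in this work uses Simulink blocks), and the normalisation term *average(energy(Ia))* is interpreted here as the mean squared stator current, which is an assumption:

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_index(ia, fs=20_000.0, f_supply=50.0):
    """Sketch of the wavelet index of Equations (24)-(26) for a stator
    current record ia (the record must be long enough for the chosen
    decomposition depth)."""
    # Equation (26): decomposition depth from sampling and source frequency.
    levels = int(round(np.log2(fs / f_supply)))       # ~8.6 -> 9; 8 +/- 1 admissible
    coeffs = pywt.wavedec(ia, 'db10', level=levels)   # [cA_n, cD_n, ..., cD_1]
    energies = [np.sum(c ** 2) for c in coeffs[1:]]   # detail energies, deepest first
    energy_sel = max(energies)                        # maximum-energy detail (d8 here)
    return abs(energy_sel) / np.mean(np.asarray(ia) ** 2)   # Equation (24)
```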
5.. Fault Tolerant Control
==========================

Fault tolerant control is indispensable, especially taking into consideration the formidable costs of unplanned stops in industrial system operations. The mechanism to switch between controllers in the event of a fault and the overall fault tolerant control scheme used in this work are shown in [Figures 9](#f9-sensors-12-04031){ref-type="fig"} and [10](#f10-sensors-12-04031){ref-type="fig"}, respectively. In [Figure 9](#f9-sensors-12-04031){ref-type="fig"}, the trip is a binary indication of a fault (either 0 or 1), while the control signal determines the type of fault and the SVM seen in the figure; [Figure 10](#f10-sensors-12-04031){ref-type="fig"} shows the flow of the SVM signal.

In this work, four control strategies are used. In normal operation, sensor vector control runs the drive. When an encoder fault occurs, sensorless vector control takes over. An open circuit or a short in the stator winding reverts the system to closed loop V/f control; V/f controlled drives are very reliable because they are restricted to low dynamic performance. When a minimum voltage fault occurs, the system performance degrades to the point where good closed loop behaviour is difficult to maintain, so open loop V/f control is used to keep an acceptable level of operation. If a slight noise is wrongly interpreted as a fault, the system quickly reverts to sensor vector control. Finally, the protection circuit is enabled in the event that two or more faults occur at once.

Digital motor control (DMC) blocks are used to simulate the proposed algorithm because they compile readily from Matlab/Simulink to C or C++ for the Texas Instruments F28335 DSP. The Simulink model is shown in [Figure 11](#f11-sensors-12-04031){ref-type="fig"}.
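The priority implicit in Figures 9 and 10 can be summarised in a few lines. The sketch below is a simplified, illustrative rendering of that selection logic in Python; the fault labels and the function name are hypothetical, not identifiers from the DSP code:

```python
def select_controller(faults):
    """Select the active controller from the set of detected faults,
    mirroring the priorities described in Section 5. 'faults' is a set
    of labels produced by the wavelet-index fault detector."""
    if len(faults) >= 2:                  # compound fault: protection unit halts motor
        return "PROTECTION_HALT"
    if "min_voltage" in faults:           # minimum voltage fault
        return "OPEN_LOOP_VF"
    if "stator_open" in faults or "stator_short" in faults:
        return "CLOSED_LOOP_VF"
    if "encoder" in faults:               # complete or partial sensor failure
        return "SENSORLESS_VECTOR"
    return "SENSOR_VECTOR"                # healthy: dominant controller
```

Because the selection is re-evaluated continuously, clearing the fault set automatically returns the drive to the dominant sensor vector controller, which matches the recovery behaviour reported in Section 6.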
6.. Experimental Results
========================

The experimental setup of the induction motor drive is based on the TMS320F28335 DSP. The induction motor parameters are listed in [Table 1](#t1-sensors-12-04031){ref-type="table"}. The hardware scheme is depicted in [Figure 12](#f12-sensors-12-04031){ref-type="fig"} and pictured in [Figures 13](#f13-sensors-12-04031){ref-type="fig"} and [14](#f14-sensors-12-04031){ref-type="fig"}.

6.1.. Performance under Healthy Operation
-----------------------------------------

The wavelet decompositions of the stator current in the healthy induction motor are shown in [Figure 15](#f15-sensors-12-04031){ref-type="fig"}. The absence of any heavy perturbation shows that the motor is healthy (faultless); the small perturbations are negligible and simply reflect the high sensitivity of the wavelet, which is in fact used to advantage here. The experimental and simulation wavelet indices are compared in [Figure 16](#f16-sensors-12-04031){ref-type="fig"}. The amplitude of the wavelet index for healthy operation, as seen in [Figure 16](#f16-sensors-12-04031){ref-type="fig"}, is 1.4; the crossing of this threshold is an indication of a fault. The system parameters are monitored through a serial communication cable between the DSP and the PC using SCI transmit and receive blocks, as shown in [Figure 17](#f17-sensors-12-04031){ref-type="fig"}.

To demonstrate the effectiveness of the fault tolerant algorithm, three faults are investigated: short circuits in the stator winding, open circuits in the stator winding, and sensor faults. For each fault, the appropriate wavelet index is calculated, as is demonstrated in the following sections.

6.2.. Stator Winding Short
--------------------------

To emulate this fault, the effective stator resistance was reduced by connecting a variable shunt resistance across the winding, stepped over the range 0.1–2 Ω (the motor has a delta connection). The shunt reduces the effective stator resistance according to the equivalent resistance of two parallel resistors; with the 20.9 Ω winding of [Table 1](#t1-sensors-12-04031){ref-type="table"}, for instance, a 2 Ω shunt gives 20.9 × 2/(20.9 + 2) ≈ 1.8 Ω. For each shunt resistance value, the mean wavelet index is calculated. The wavelet decomposition details are shown in [Figure 18](#f18-sensors-12-04031){ref-type="fig"}. Experimental responses of the drive at 450 rpm, 900 rpm and 1,600 rpm were obtained with this fault. At each speed, the wavelet index was recorded and compared to the simulation results, as is detailed below.

### 6.2.1.. At 450 rpm

The first test was at a speed of 450 rpm. The wavelet index comparison between experimental and simulation results at this speed is listed in [Table 2](#t2-sensors-12-04031){ref-type="table"}. It shows that the amplitude of the wavelet index increases to 1.5 due to the introduced winding short.

### 6.2.2.. At 900 rpm

The second test was at a speed of 900 rpm. As is clear from [Table 2](#t2-sensors-12-04031){ref-type="table"}, the wavelet index increases to 1.8 for the winding short at 900 rpm.

### 6.2.3.. At 1,600 rpm

The wavelet index lies between 1.8 and 2 for a stator winding short at 1,600 rpm, as is seen in [Table 2](#t2-sensors-12-04031){ref-type="table"}. The data show a slight difference between the wavelet indices at the different speeds, caused by distortion of the stator current waveform in the experimental test.

6.3.. Stator Winding Open Circuit
---------------------------------

To introduce the open circuit fault, the stator resistance was increased by a series resistance of up to ten times the original winding resistance of about 20 Ω, i.e., over the range 2–200 Ω. The wavelet decomposition of the faulty stator current is shown in [Figure 19](#f19-sensors-12-04031){ref-type="fig"}. The wavelet index was recorded to be 1.5 at 450 rpm, between 1.2 and 1.6 at 900 rpm and 1.8 at 1,600 rpm, as is shown in [Figures 20](#f20-sensors-12-04031){ref-type="fig"}, [21](#f21-sensors-12-04031){ref-type="fig"} and [22](#f22-sensors-12-04031){ref-type="fig"}, respectively.

6.4.. Encoder Faults
--------------------

Two types of speed sensor (encoder) faults are presented in this work. The first is complete speed sensor failure, as depicted in [Figure 23](#f23-sensors-12-04031){ref-type="fig"}. To introduce complete speed sensor failure, the cables of the encoder channels A, B and index I were disconnected. The blue line is the encoder output (zero) when it fails; the red line is the rotor position estimated with the BMRAS, as the system switches to sensorless operation when the sensor fault is detected. The second type of sensor fault was a partial sensing error in the position, which was created by introducing noise in the encoder LED. The encoder output in [Figure 24](#f24-sensors-12-04031){ref-type="fig"} depicts this fault.

The fault tolerant algorithm was tested with these faults at different speeds. Before starting the induction motor, the cables of the encoder channels were disconnected. As is seen in [Figure 25](#f25-sensors-12-04031){ref-type="fig"}, the encoder fault is introduced at the 1,000th iteration (3 s), at which point the system switches from sensor vector control to sensorless vector control. At 5 s, a stator winding short fault is introduced and the system switches to closed loop V/f control. At 10 s, a compound fault (both stator winding open and short circuits simultaneously) is introduced, which activates the protection unit and brings the motor to a halt. The protection unit is part of the software program and requires no extra hardware. The recovery from a fault occurs rapidly, and the transition from one control scheme to the other is seen to be smooth; the performance does not degrade considerably even as the control strategy changes.

The flexibility of the control strategy is depicted in [Figure 26](#f26-sensors-12-04031){ref-type="fig"}. The operation is started with an encoder fault. At the 550th iteration (1.5 s), the system returns to a healthy state and reverts to sensor vector control with minimal recovery time. When a minimum voltage fault occurs at the 3,000th iteration, open loop V/f takes over.
The general flow chart of the wavelet based fault tolerant control algorithm can be seen in [Figure 27](#f27-sensors-12-04031){ref-type="fig"} (some parts of the flowchart are not included in this paper).

7.. Conclusions
===============

A fault tolerant control system incorporating (sensor and sensorless) vector control and (closed loop and open loop) V/f control has been presented. The wavelet index used for fault detection has been shown to be both fast and effective: it detected complete sensor failures, partial sensor errors, stator winding shorts and open circuits, and compound faults. The transitions from one controller to another were both quick and smooth. The threshold of the wavelet index is set according to the amplitude of the stator current, which differs for every fault. The Boosted Model Reference Adaptive System (BMRAS) used in sensorless vector control was shown to be effective for rotor speed estimation; it saved the time otherwise consumed in tuning the conventional PI controller, while maintaining excellent performance. The system has been shown to be flexible, in that if a fault is removed and the system returns to a healthy state, the drive reverts to the dominant sensor vector control. The protection unit was implemented successfully without requiring additional hardware, thus saving cost.

Future work may consider adding strategies such as Direct Torque Control (DTC) to the control scheme. Additionally, a thorough analysis of the switching mechanism, such as its time delays, would be useful. The inclusion of prognostic mechanisms, for early prediction of faults, is also a promising prospect.

The authors acknowledge the financial support of the University of Malaya, Provision of High Impact Research, Grant No. D000022-16601, Hybrid Solar Energy Research Suitable for Rural Electrification.
![Reference frame for vector control.](sensors-12-04031f1){#f1-sensors-12-04031}

![Park transformation principle.](sensors-12-04031f2){#f2-sensors-12-04031}

![Simulink implementation of sensor vector control.](sensors-12-04031f3){#f3-sensors-12-04031}

![Simulink implementation of sensorless vector controller.](sensors-12-04031f4){#f4-sensors-12-04031}

![Simulink implementation of the BMRAS.](sensors-12-04031f5){#f5-sensors-12-04031}

![Simulation results of the speed tracking by the BMRAS.](sensors-12-04031f6){#f6-sensors-12-04031}

![Experimental results of the speed tracking by the BMRAS.](sensors-12-04031f7){#f7-sensors-12-04031}

![Simulink wavelet index implementation.](sensors-12-04031f8){#f8-sensors-12-04031}

![Switching mechanism between the controllers.](sensors-12-04031f9){#f9-sensors-12-04031}

![Fault tolerant control algorithm.](sensors-12-04031f10){#f10-sensors-12-04031}

![Simulink model of the FTC system.](sensors-12-04031f11){#f11-sensors-12-04031}

![Hardware implementation scheme.](sensors-12-04031f12){#f12-sensors-12-04031}

![Circuits of the induction motor drive.](sensors-12-04031f13){#f13-sensors-12-04031}

![Induction motor setup.](sensors-12-04031f14){#f14-sensors-12-04031}

![Wavelet decomposition in the healthy motor.](sensors-12-04031f15){#f15-sensors-12-04031}

![Experimental (red) and simulation comparison of the healthy induction motor wavelet index.](sensors-12-04031f16){#f16-sensors-12-04031}

![Serial communication interface showing the experimental output.](sensors-12-04031f17){#f17-sensors-12-04031}

![Wavelet decomposition under a stator winding short.](sensors-12-04031f18){#f18-sensors-12-04031}

![Wavelet decomposition at Rs = 200 Ω.](sensors-12-04031f19){#f19-sensors-12-04031}

![Experimental and simulation wavelet index for series Rs (2--200) Ohm at 450 rpm.](sensors-12-04031f20){#f20-sensors-12-04031}

![Experimental and simulation wavelet index for series Rs (2--200) Ohm at 900 rpm.](sensors-12-04031f21){#f21-sensors-12-04031}

![Wavelet index for series Rs (2--200) Ohm at 1,600 rpm.](sensors-12-04031f22){#f22-sensors-12-04031}

![Experimental rotor position: complete sensor failure.](sensors-12-04031f23){#f23-sensors-12-04031}

![Experimental rotor position with partial sensing error.](sensors-12-04031f24){#f24-sensors-12-04031}

![Experimental speed transition with different controllers.](sensors-12-04031f25){#f25-sensors-12-04031}

![Control system recovery.](sensors-12-04031f26){#f26-sensors-12-04031}

![Expanded flow chart of the work.](sensors-12-04031f27){#f27-sensors-12-04031}

###### Induction motor parameters.

  **Motor spec.**     **Value**
  ------------------- ----------------
  Power               1 kW
  Current             2.5 A
  Voltage (delta)     400 V
  Rated Speed         2,780 rpm
  No. of poles        2
  Moment of Inertia   2.4e−4 kg·m^2^
  Stator Resistance   20.9 Ω
  Rotor Resistance    19.5 Ω
  Stator Inductance   50e−3 H
  Rotor Inductance    50e−3 H

###### Wavelet index (WI) for shunt Rs (0.1–2 Ω) at different speeds. Each speed column pairs the compared experimental and simulation values.

| **R~sh~ (Ω)** | **WI at 450 rpm** | **WI at 900 rpm** | **WI at 1,600 rpm** |
|---------------|-------------------|-------------------|---------------------|
| 0.10 | 1.10 / 1.00 | 1.78 / 1.76 | 1.80 / 1.70 |
| 0.20 | 1.40 / 1.40 | 1.80 / 1.80 | 1.81 / 1.72 |
| 0.40 | 1.43 / 1.42 | 1.80 / 1.80 | 1.85 / 1.76 |
| 0.80 | 1.43 / 1.42 | 1.82 / 1.80 | 1.90 / 1.88 |
| 1.60 | 1.50 / 1.50 | 1.84 / 1.86 | 1.92 / 1.91 |
| 1.80 | 1.50 / 1.50 | 1.84 / 1.88 | 1.95 / 1.93 |
| 2.00 | 1.50 / 1.50 | 1.84 / 1.90 | 2.00 / 1.96 |
The Green Bay Packers and free agent receiver Greg Jennings never appeared close to finding common ground on a new deal, and it has been widely believed for some time now that Jennings will bolt Green Bay for a better offer once unrestricted free agency opens on March 12. ESPN 1500's Tom Pelissero may have found the reason why the two sides failed to even approach a deal that would keep Jennings in Green Bay. According to Pelissero, word around the NFL Scouting Combine is that Jennings is looking for $14 million a year on the free-agent market. Lot of opinions here about what top receiver UFAs may command. Heard Greg Jennings wants $14M/year. Tough to see that happening. — Tom Pelissero (@TomPelissero) February 22, 2013 A $14 million a year deal would make Jennings the NFL's third highest paid receiver, behind only Calvin Johnson and Larry Fitzgerald. As Pelissero states, envisioning Jennings getting that kind of money this offseason is difficult. Jennings will turn 30 years old in September, and he's coming off back-to-back seasons in which he missed games due to injury (11 total games missed). Also, it's worth considering that Jennings is only 5'11"—small for a No. 1 receiver—and his career numbers (425 receptions, 6,537 yards, 53 TDs) are likely inflated from playing his entire career with Brett Favre and Aaron Rodgers. If Jennings' demands are true—and we do have to take combine chatter with a grain of salt—then it is fairly easy to see why the two sides were so willing to part ways without much work done on the re-signing front. No one, especially the Packers, will be willing to pay Jennings $14 million a season. Jennings, who caught 36 passes for 366 yards and four touchdowns in eight games last season, would be fortunate to eclipse the five-year, $55.55 million deal Vincent Jackson received from the Tampa Bay Buccaneers last spring. Zach Kruse is a 24-year-old sports writer who contributes to Cheesehead TV, Bleacher Report and the Milwaukee Journal Sentinel. He also covers prep sports for the Dunn Co. News. You can reach him on Twitter @zachkruse2 or by email at [email protected].
; RUN: llc -mtriple=x86_64-pc-linux-gnu < %s | FileCheck %s
; The ${0:H} operand modifier prints the memory operand offset by 8 bytes,
; so the inline asm store lands at foobar+8 (verified by the CHECK below).

@foobar = common global i32 0, align 4

define void @zed() nounwind {
entry:
  call void asm "movq %mm2,${0:H}", "=*m,~{dirflag},~{fpsr},~{flags}"(i32* @foobar) nounwind
  ret void
}

; CHECK: zed
; CHECK: movq %mm2,foobar+8(%rip)
14 July 2019

The Bird of Paradise Jubilee Brooch

Chris Jackson/Getty Images

Few things say sunny summertime like gold, diamonds, and flowers. Today's brooch, the Bird of Paradise Jubilee Brooch, has all three!

Chris Jackson/Getty Images

The intricate gold and diamond brooch was a Diamond Jubilee gift to the Queen from the government of Singapore in 2012. The piece is a traditional Peranakan-style brooch, made by Singapore-based jeweler Thomis Kwan. The brooch wasn't a commission; it was a stock item from Kwan's firm, made of yellow gold and set with 61 diamonds. An official from the country's Ministry of Foreign Affairs purchased the brooch from Kwan's store, but he didn't learn until later that the piece was destined for the Queen's jewelry box. Fun fact: the brooch's design is made to resemble a bird of paradise plant; that plant's scientific name, Strelitzia reginae, was bestowed to honor one of the Queen's ancestors, Charlotte of Mecklenburg-Strelitz.

Chris Jackson/Getty Images

The brooch is fairly large in terms of its circumference, but as you can see from this angle, it lies very flat against the surface of the Queen's coats, jackets, and dresses.

Warren Little/Getty Images

The piece has become one of the Queen's favorite brooches over the past seven years. She often pairs it with brightly colored ensembles, like this sunny yellow jacket and hat from the Epsom Derby in May 2017...

Chris Jackson/Getty Images

...or this lime green outfit and hat, worn for the first day of Royal Ascot a few weeks later in June 2017.

SIMON DAWSON/AFP/Getty Images

In March 2019, she wore the brooch with a vibrant orange coat and hat for a visit to the Science Museum in London.

Chris Jackson/Getty Images

And just a few days ago, the brooch coordinated beautifully with the vivid pink coat and hat that HM wore for a visit to the National Institute of Agricultural Botany in Cambridge.
CARACAS, VENEZUELA - Relatives of the seven rebels killed in a raid by Venezuelan security forces demanded access on Wednesday to the bodies, which the government has threatened to cremate. The seven, led by rogue police pilot Oscar Perez, were slain Monday in a siege on their hideout in the small mountain community of El Junquito, government officials said.

"They should deliver the body of my father and Oscar Perez," Jeandribeth Diaz told journalists outside the Palace of Justice here Wednesday morning, her voice choking with emotion. She and a male cousin hoped to retrieve the body of Jose Alejandro Diaz. "For us to be able to rest and stay calm, we need to see it even if it is one last time."

A woman who said she was a supporter of rebel police officer Oscar Perez holds a poster that reads in Spanish "I am Oscar Perez" at a checkpoint near the morgue where his body is held, in Caracas, Venezuela, Jan. 17, 2018.

Later in the day, authorities allowed Perez's aunt, Aura Perez, to recover the ringleader's body from the Bello Monte morgue in the capital city. Aura Perez was accompanied at the morgue by parliamentarian Delsa Solorzano, who's on the opposition-controlled National Assembly's special commission to investigate the deaths. She has requested autopsies. Solorzano said she and other lawmakers had been trying since Tuesday to help relatives gain access to the bodies. "We demand respect for the law," she tweeted in Spanish.

"Since yesterday we have been making arrangements at the #BelloMonte morgue for the bodies in the #OscarPerez case to be handed over to their relatives. We demand respect for the law and for the protocols of procedure" — Delsa Solorzano (@delsasolorzano) January 17, 2018

On Wednesday, armed security troops ringed the morgue where the bodies were being held. Another parliamentarian on the commission, Winston Flores, accused President Nicolas Maduro's administration of extrajudicial killing. "There was an extrajudicial execution, and we want to prove it," the French news agency AFP quoted him as saying outside the morgue. Flores also said authorities had refused to "hand over the bodies because they are [being held] at the order of a military court."

Oscar Perez, 36, had been a fugitive since last June, when he stole a police helicopter and buzzed the Supreme Court and other government buildings. Interior Minister Nestor Reverol, in announcing the deaths Tuesday, said Perez and his "terrorists" initiated the firefight. He said the fatalities included seven members of Perez's group as well as two police officers who fought them.

On Wednesday, civic groups reported that authorities had used a grenade launcher along with other weapons in attacking Perez and his group. Perez, his face bloodied, appears in several videos taken during Monday's siege and posted to his Instagram account. In one, he says of security forces, "Venezuela, they don't want us to surrender. Literally, they want to kill us — they just told us!"

The Venezuelan Program of Education and Action on Human Rights has called for the government to provide a full and transparent report into the deaths. Rights groups, including Venezuela's Foro Penal, said extrajudicial killings are "expressly forbidden in Venezuela."
The raid adds a new wrinkle to talks, set to resume Thursday, on how to resolve Venezuela's political and economic crisis. Representatives of the Maduro government and its political opposition are scheduled to meet in the Dominican Republic for two days.

Along with Perez and Diaz, authorities identified the dead rebels as Daniel Enrique Soto Torres, Abraham Israel Agostini, Jose Alejandro Pimentel, and brothers Jairo Lugo Ramos and Abraham Lugo Ramos. Authorities also said two police officers were killed in the raid: Andrian Ugarte and Nelson Chirinos.
The Stepford Devaluation

The form of devaluation of our appliances depends on a variety of factors: for instance, what type of narcissist is applying the devaluation, the nature of the appliance (IPPS, IPSS, NISS, TS etc.), the status of the narcissist's fuel matrix, the position of the façade and other matters beyond that also.

With a Tertiary Source, there is no long lasting relationship to begin with, and therefore any devaluation which takes place will be short and effective and is often done in the context of triangulation, for instance making the narcissist look good in front of, say, a new target (IPSS) or a group of friends (NISSs) by putting down the Tertiary Source as part of the devaluation.

Secondary Sources have two types of devaluation: Corrective and Dis-Engagement. The Corrective Devaluation is short in nature but can be rather savage and is designed to bring the malfunctioning secondary source appliance back into line. Thus, it might be ostracising a friend (NISS) by inviting everybody else to a BBQ but not the offending appliance. Recognising that he or she has offended the narcissist in some way, the NISS apologises, makes amends and ceases the troublesome activity which led to the Corrective Devaluation. Thus the Corrective Devaluation has proven effective and the NISS enjoys the golden period once again and is welcomed back into the fold. Should the NISS not respond to the Corrective Devaluation (or commit a particularly treacherous act at the outset), then a short Dis-Engagement Devaluation occurs and the secondary appliance is then dis-engaged from. The DED does not last for long, because the narcissist and the secondary appliance will not see one another repeatedly (unlike the IPPS) and also because the narcissist can dis-engage from the secondary source readily and either turn to other pre-existing secondary sources (dependent on the size of the fuel matrix) or recruit a replacement with relative ease.

The phase of devaluation really earns its stripes when applied to intimate partners (IPSS or DLS), but especially the IPPS. The devaluation of the IPPS is the one which most commentators focus on and is usually the one which contains abusive treatment and the full horror of nasty manipulations from the narcissist. There is no denying that such an unpleasant devaluation occurs, but it is just one of several forms of devaluation that is deployed against the IPPS. Other forms include The Stranger Zone, The Oblivious Mis-Treatment, The Full Horror and others besides.

Within the devaluation of the IPPS there is also the Stepford Devaluation. You may be familiar with the novel (and film) The Stepford Wives. Ira Levin's novel follows the premise whereby a new arrival at the idyllic neighbourhood of Stepford begins to suspect that the wives who live there, and who are frighteningly submissive, are actually robots created at the behest of their privileged and controlling husbands. This resulted in the term 'Stepford Wife' being used in the English language to describe a submissive wife (or partner) who appears to conform blindly to a stereotypically old-fashioned subservient role in the relationship with her husband or partner. It may also refer to an accomplished woman who has subordinated her life and/or career to her husband's interests and who has affected submission to him even in the face of his own disgrace and poor behaviour.

A Stepford Devaluation is one form of the devaluation of the IPPS.
Often, the relevant victim fails to recognise that she is being devalued because of the nature of this devaluation. The following traits are applicable to the Stepford Devaluation.

It only ever applies to the person who is the Intimate Partner Primary Source of the narcissist. The IPPS is likely to have an almost idyllic lifestyle. The narcissist is usually Mid Range or Greater in nature (possibly Upper Lesser also). There is financial security and a superior lifestyle encompassing a good house, clothing, dining out, gifts etc. The narcissist and IPPS are regarded as having an excellent marriage/relationship by external observers such as family, friends and neighbours. The narcissist and IPPS are regarded as having an enviable lifestyle by external observers.

The IPPS may work, but this is not always the case. The IPPS does not need to work because the narcissist's financial firepower is sufficient to avoid the financial necessity of the IPPS having to work (and in turn remove financial independence and create isolation). If the IPPS does work, their work will be regarded as unimportant and unnecessary by the narcissist, who will take little interest in it and refer to it rather patronisingly. The narcissist will expect the IPPS to fulfil other duties (see below) on top of the IPPS' professional commitments. The narcissist, whilst varying between disparaging and dismissive about the IPPS' job in private, will hold it out as an admirable element, as he seizes it as a character trait to draw fuel from secondary and tertiary sources and to use as part of the façade. More usually, the IPPS will be 'allowed' a 'window dressing' role, such as occasionally helping out at a charity shop or sitting on a couple of infrequent 'good works' committees. The narcissist regards these as acceptable since they contribute to the façade and do not interfere with the IPPS' other duties (see below) to the narcissist. The narcissist prefers that the IPPS does not work.

The IPPS has or had an accomplished position of employment. If retained, it is treated dismissively by the narcissist as explained above; more likely, the narcissist will have engineered the giving up of this position. This will have been achieved through apparently benign reasons, but it is done in order to create submission, remove independence and remove distraction and support networks.

The IPPS is expected to be a superb home-maker. Whilst domestic assistance may be permitted, the narcissist expects a pristine residence of show-home proportions. The home would not look out of place on the front cover of Interior Design or Elle Décor. The IPPS prides herself on such an achievement and strives to ensure that nothing is out of place in the home.

The IPPS is expected to always be presentable. She will be beautifully dressed, hair done, make-up worn, nails manicured, and will never be seen slumming it in track pants and sweat top. Any slight deviation from picture perfection will be picked up and commented on by the narcissist. As with the home, the IPPS will ensure that she presents as elegant and refined at all times.

The IPPS is expected to play the role of convivial hostess at dinner parties, encouraging mother at school events and loyal housewife putting up with the narcissist's demands for perfection. The IPPS is expected to be wholly submissive to the needs and demands of the narcissist in creating this idyll and portrayal of domestic privilege and bliss to the outside world. No dissention is accepted by the narcissist.
The IPPS 'enjoys' a gilded existence. She wants for nothing in terms of money, prestige, acknowledgement by external observers, admiration and friendship by third parties. She gratefully accepts that she is a 'lucky girl' to have what she has and does not like to complain. She may have done so to begin with, but the irrepressible force of the narcissist's demands brings about the desired submission.

The narcissist's demand for perfection means that part of the Stepford Devaluation manifests through the imposition of this desire for perfection and an adverse response if it is not achieved. However, such is the nature of the relevant narcissist, and the extent of the compliance, that the narcissist does not have to devalue in any savage way. It will either be a remark ("I see the children have been active") when referring to the house appearing untidy, or the imposition of a silent treatment (Present or Absent) to express disapproval at a failing on the part of the IPPS. The usual range of manipulations applied during devaluation will be absent. The narcissist generally treats the IPPS 'well' in terms of engaging in conversation, doing activities together and maintaining the façade of the enviable home life. Whilst you may see this existence as demanding, you may also see that it has its rewards, and the extent of the devaluation, whilst unacceptable to you, is nowhere near as bad as it could be. This is where the second strand of the Stepford Devaluation applies.

The narcissist repeatedly engages in infidelity with IPSSs and has an extensive 'stable' of those he turns to. He will repeatedly have 'golfing weekends away', 'business trips' or a 'late meeting which necessitates staying over in town'. The IPPS knows that the narcissist is engaging in repeated affairs and one-night stands. The IPSSs or IPTSs are never, ever brought to the marital home (that would damage the façade). The IPSSs and/or IPTSs may even contact the IPPS to try to expose the narcissist, and the IPPS will listen to these tales of infidelity and poor treatment of the IPSSs and/or IPTSs.

The narcissist will hold the IPPS up as a shining example of the good wife/partner and will often be disparaging about other women, picking fault with their behaviour, looks, occupations and so forth. Comments are made such as "Thank goodness I have you, yes darling?", "I was right to pick you." and "They disgust me, such whores and lowlifes."

The narcissist reveres the IPPS because she has created the stable and enviable home, she contributes to his impressive façade and he is allowed to do as he pleases through extensive engagements outside of his marriage. He may have long standing affairs, short affairs, intermittent Dirty Little Secrets, in fact all types and forms of extra-marital liaison, but he will never leave the IPPS. None of them ever compare to the IPPS. The IPPS is expected to be totally compliant, never complain, always be supportive, always be presentable and always put the narcissist first, and in return she is largely treated 'well' (in the eyes of the narcissist and third parties), but her devaluation occurs through two main strands:

1. A very high standard of compliance; and
2. The total acceptance that her husband/partner is engaging sexually with various other appliances and will always do so.

How does this Stepford Devaluation operate in terms of fuel for the narcissist? This is where there is something of a peculiarity.
The IPPS will provide negative fuel (at first) when the devaluation first begins and she learns of the affairs and is also subjected to the controlling behaviour vis à vis appearances. She will initially fight back, rebel, be hurt etc. and thus provide negative fuel. However, once the narcissist has effectively 'broken' her in, by achieving compliance, the IPPS provides positive fuel to the narcissist through her striving to maintain the idyllic appearance, her support in his endeavours and the maintenance of the façade, and it is the IPSSs and IPTSs who will suffer horrendous treatment at the hands of the narcissist. The narcissist, being usually a Greater or an Upper Mid Ranger most of the time in this arrangement (although it can occur with MMR and UL), has no problem in ensnaring mistress after mistress, booty call after booty call and so on, and it is here that they are treated to the malice (with the Greater) and also the devaluation in order to gain negative fuel from them, in contrast to the (largely) positive fuel now provided by the IPPS.

The Stepford Devaluation is part of the Madonna-Whore concept. The narcissist may still engage in intimate relations with the IPPS, but it is not often, and the IPPS may actually be cold sexually and be perfectly happy to be left alone in that respect, content for the IPSSs/IPTSs to bear the brunt of her husband's devaluing perversions.

Only a particular type of empathic individual is able to perform this role and endure it, which comes as a consequence of their own particular traits, their susceptibility to the overtures of the type of narcissist who engages in this behaviour and the fact that she is ultimately conditioned to see her position as one which 'could be far worse if I am honest'. She is brain-washed, controlled and ultimately the automaton which was so desired in The Stepford Wives.

23 thoughts on "The Stepford Devaluation"

"The movers and shakers" who might 'sign up' for exactly the life you describe sound, to me, like someone who may be a narcissist themselves who 'desires' that lifestyle… I'm not sure that empaths don't actually expect 'true love' in such a situation… it depends what the narc mirrors to them. Mine mirrored my values and 'appeared' to want the same things as me – to trap me of course – but it turns out, as his reality gap experience suggests otherwise, he actually wanted the life of things like 'the house', 'the car', the 'trips to Europe' etc.… except I never wanted those… (well, maybe the trips to Europe) but I have never valued people for their status symbols…

I do hear ya on the "empath ego shit" though… I remember my narcissist asked me once "Why do you love me?" I said "Who else would love you the way I do?" But then again, I used to say that about my cat too. He was a one person cat who sometimes bit others and people would exclaim "Why do you keep that devil cat?" My response: "Because no one else would love him the way that I do."

I thought I was dating a normal… but I think he's been a somatic "Mid Range" for a while now. Everything in this article matches his behaviour. I don't know much about infidelity right now; I did feel something was wrong, but he keeps it all well hidden and denied it all without being harsh. He's like super nice to me, he doesn't want me to worry or stay up all night worrying about him, no silent treatments, never ignores my calls or texts… I've done so many things that would trigger devaluation or even dis-engagement, but he just won't let go. I'm so confused!
HG, for a cerebral narc who is not as interested in a proximate IPSS, could this scenario occur using virtual IPSSs for negative fuel? Or using colleagues or even the narc's own children for negative fuel? Thank you.

I don't entirely buy that people in this position don't know that they are entering into a contract and not a relationship. That they think they can handle it for the residual benefits, just like the narcissist. Until they can't.

I think my other posts were likely unsuccessful, so if that's the case I'll retype my reply: I think it is actually that many do not "see" the contract at first – or at least not the fine print… If you're heading into a relationship with *materialistic* residual benefits in mind… maybe in that case you could argue that one neglected to read the contract thoroughly. In the beginning there are no difficult behaviours to 'handle' in exchange for residual benefits. Even in my case, I'm not conventionally materialistic – as one would expect from the commonly understood idea of a 'Stepford Wife' or spouse; so it played out differently for me. (As I described in an earlier long-winded post to this article.) Once I realized there was something wrong with my ex – I did actually think I could handle 'it.' So I agree with you there. But I didn't know that 'it' was Narcissism… which is a losing battle for the empath. (And I attributed all the difficulties to 'stress' of one kind or another, and there was no outright verbal abuse and physical outbursts until near the very end.) I recall, in the depths of devaluation, kind of romanticizing my situation and thinking about the movie, A Beautiful Mind. And even, independent of those thoughts (because I've never actually mentioned that movie before with regard to my entanglement), I recall a children's protection services worker – who I talked with at length following my escape – saying that she was reminded of the movie A Beautiful Mind after getting to know my story…

Hi WhoCares

In that comment I was thinking more along the lines of celebrity, royalty, etc. The movers and shakers. There are plenty of women who see what that life will hold and what will be expected of them (not all that will be, of course, but a considerable amount). They will know that they must always be immaculate, the perfect hostess, and many of those things listed, for instance. They will have children they can give many things and experiences they would never otherwise have. The lure is great. People of power are seldom satisfied. With anything. And a lot of them have a long history of musical partners. We see this looking in and can point to many examples, and yet given the chance at bat, many will convince themselves that they can keep up the facade expected, that they alone have the golden vagina that will keep him from wandering. That he will love them like no other because everyone before them has just been a failure. That is some serious empath ego shit right there. The ones who are narcissistic (but not narcissists) might readily sign on thinking they can handle it for the gains (Harvey Weinstein's wife Georgina Chapman comes to mind). I see her as thinking: he's a fat disgusting pig and powerful. He can help with my fashion empire. I know what I must do and I don't care who he fucks (all the better if he leaves me alone). As long as I am Hollywood royalty and get my residual benefits I will stay with this POS and be blind to what he is doing. I thought Lady Diana was another example who played blind, but that will get lots of knickers in a twist.
But someone going in expecting true love? Come on.

NA, I understand your point but see it another way due to self-reflection. I am the type of empath that would fit the particular type. The only reason I don't think my marriage was like this is because I was devalued and my ex is a cerebral. Unfortunately, I can see this happening if I were to marry another type of narcissist. It would not be for money or items, as I make my own money and I am far from materialistic. I think he would be able to just wear me down as my ex husband did in other ways. I would never have jumped into that kind of relationship, but I would slowly accept it as the way it is (after having the same fights over and over). I learned in my marriage that sometimes it is easier to just do and not fight. Sometimes it isn't worth the same fight and frustration; ok, not sometimes but usually. I'm a doer and will just do if it needs to get done. I have allowed things in my life that I never expected, and I can tie a lot of that to: not wanting to even perceive hurting another; not wanting to fight when not necessary; being too tired emotionally and mentally from all responsibilities; prioritizing what issues I will take a stand on (not as many as I probably should); and just doing what needs to get done and moving on. Sometimes the peace and the calm is worth the extra work or not speaking up. I know many would leave instead, but if the person is treating you with perceived love and care, that is a hard situation to leave. I left my ex not for me but for our child, to not be caught in that environment. As long as everyone acts like my ex or worse, then I will be able to have my strength. Act in a way like the next narc did, and I struggle with recognizing it until it is too late.

I feel this way too in certain Stepford relationships. One springs to mind and that's Trump and his wife. His wife wanted all the glam that money could buy, but I'm sure she didn't expect to go through all she has. She probably knew he was a jerk but loved the golden period and all that money could buy, yet failed to see the hefty price tag attached. She's a prime example of a Stepford wife. It's evident from her pictures, and seeing her on camera she's miserable! Both of them frown all the time and never look happy. Of course those who support Trump will disagree, and I can respect that they feel differently, but I call it as I see it!

Chihuahua nun, I agree with you on Melania Trump. She is IMO a good example of a Stepford devalued codependent. I think that out of the three women Trump married, she is the most tenacious at making her marriage work. I see several factors why. One, in interviews I saw how proud she is that her mom and dad are still together and never divorced. I think that it is her goal to be like her parents. In her mind, divorce is a failure and staying together like her parents did is a success. And I think that she is highly narcissistic to aim for that success. Two, from the things that I have read and heard in the news, it sounds like her mom is an empath and her dad is highly dominant and possibly a narc too. So that dynamic is probably normal in her mind. She has said so many times how kind and beautiful her mom is and how she wants to be like her mom. Three, Trump is her Prince Charming. He met her at a party when her career was not as successful as she hoped it to be, and he chose her out of all of the attractive women at that party. That can be deeply ingrained in her mind like the golden period. When she is unhappy now, her mind just goes back to that time.
Also, Trump even went to her country and took her parents to the fanciest restaurant in her country and asked them for her hand. Talk about a fairy tale. Especially since she and her parents had a very modest lifestyle in that country. I think that all of those play a factor in her mind, aside from her enjoying the gilded lifestyle, status and fame attached to her marriage with Trump. It was easier to brainwash her because of the circumstances of her life and how they met.

Yes, I was accomplished (post-secondary education, embarking on a new career, plus creative pursuits with both financial reward and admiration). But I didn't subordinate my life and career to meet his interests. I encouraged him to meet me at my level. In addition to encouragement I gave moral support and practical support (financial and otherwise). It's not like I handed money over to him, but I covered many of our financial needs during that time when he had an accident, was changing careers etc.…

Yes, many thought (and commented) that we had an "idyllic" lifestyle – but not in the form of what is traditionally considered a "superior" lifestyle (i.e. the house, the car, the trips, dining out, gifts) – instead they envied the freedom we had from being chained to those things. The passion or 'chemistry' we appeared to have; the way we could work together and appeared to support each other's goals…

So yes, I worked; he did not disparage or dismiss my job success, but he did slowly undermine it, and he was always very jealous of my positive work relationships. And I'm sure he did seize upon it as a character trait for fuel and as part of the facade… Did he expect me to fulfill other duties? Sure he did – but I was already spread too thin – so he had to do them… or neglect them (that was a sure-fire way to get fuel). No wonder he so viciously resented me at times.

He also did allow me "window dressing" roles – which still confuses me to this day, but he likely considered it acceptable because of contributing to the facade, and with me being away he was free to draw fuel elsewhere.

Yes, I did have an accomplished position of employment that was retained but was subject to the following scenario: "…more likely the narcissist will have engineered the giving up of this position. This will have been achieved through apparently benign reasons but is done in order to create submission, remove independence and remove distraction and support networks." Yep. That's the part that bites. It was slow and sure – but effective and insidious, because it happened almost organically… naturally… because of the person that I am and because of the person that he is. And because he chose me. It is also one of the most difficult things to explain the workings of to others…

"The IPPS is expected to be a superb home-maker… the narcissist expects a pristine residence of show-home proportions." Pffft… Hahahahaha! How? When I was juggling a job and a half, commuting and a child… That *had* to partly contribute to "The Reality Gap"… but I wasn't the one suffering panic attacks from it – was I?

"The IPPS is expected to always be presentable. She will be beautifully dressed, hair done, make-up worn, nails manicured and will never be seen slumming it in track pants and sweat top." Yeah. Okay. I remember those days…

"The IPPS is expected to play the role of convivial hostess at dinner parties, encouraging mother at school events and loyal housewife putting up with the narcissist's demands for perfection." Check.
I have those skills… they just played out in a different way.

"The IPPS is expected to be wholly submissive to the needs and demands of the narcissist in creating this idyll and portrayal of domestic privilege and bliss to the outside world. No dissention is accepted by the narcissist." This is where I diverge from the expected role of a "Stepford Wife." Because in reading this paragraph, as soon as my eyes hit "wholly submissive," the rest of the words are translated by my brain as this: blah blah, blah and blah… until my eyes hit "No dissention" – now we have a problem, Houston.

"The IPPS 'enjoys' a gilded existence. She wants for nothing in terms of money, prestige, acknowledgement by external observers, admiration and friendship by third parties. She gratefully accepts that she is a 'lucky girl' to have what she has and does not like to complain." No. It wasn't gilded… well, it was and it wasn't… I *wanted* for much in the end, but I was a 'lucky girl' – because on the flipside: I had freedom galore!

"The narcissist generally treats the IPPS 'well' in terms of engaging in conversation, doing activities together and maintaining the façade of the enviable home life." Yes, for the most part. And I acquiesced… until I no longer cared about maintaining the facade with him – or anyone, for that matter.

"Whilst you may see this existence as demanding, you may also see that it has its rewards and the extent of the devaluation whilst unacceptable to you is nowhere near as bad as it could be." Yes.

And as for infidelities, here is a difference too… I could never "see" them: there was no evidence, and I was busy being engaged in the daily living of life. And he must have been exceptionally careful – or, as I suspect, he hid it in plain sight because he knew: if I had an inkling, that would have been a deal-breaker.

HG – can you please advise as to whether it may still constitute a Stepford Devaluation if the IPPS is treated well (in maintenance of the façade), taken care of financially and yet completely unaware of the multiple affairs? In this case the IPSS and DLS are treated poorly and replaced regularly, unknown to the IPPS.

HG, I apologise – that last part of my question makes no sense at all! You are a man of many talents but reading minds is a lofty expectation. If you would be so kind, I would like to know if a Stepford Devaluation could occur where the infidelity was unknown to the IPPS, who is generally treated well? In your example it appears the IPPS is aware of the affairs. I wondered if the devaluation of the IPSS and DLS would make it easier for the N to maintain the façade with the IPPS and result in an elongated golden period for the IPPS? Second time lucky *fingers crossed*
Cotton bag L (46x30 cm) - zero waste 10 pcs

A bag made of stronger organic cotton cloth, especially suited to a whole loaf of bread or several pieces of pastry. It replaces all the plastic bags in which bread easily gets moldy.

Dimensions 46 x 30 cm

Organic cotton is approved for contact with food.

Ingredients: 100% GOTS-certified organic cotton
In the weeks since his inauguration, U.S. President Donald Trump has shown little appetite for anything besides tweeting. This apparent lack of interest in actual governance means much of actual policy making will fall to Congress. However, Congress seems to have forgotten how to govern after not having had a unified government in over ten years. After the fiasco that was the rollout of the American Health Care Act (AHCA) and Trump’s skinny budget, it is apparent that Republicans will occasionally need Democratic support to pass legislation. If Republicans are serious about regaining credibility as more than an obstructionist party and Democrats want to be more than simply the Tea Party 2.0, then the parties’ respective leaders must find areas that are conducive to bipartisanship. One of those areas could be the Earned Income Tax Credit.

Like Social Security, the earned income tax credit (EITC) has attracted bipartisan support since it was enacted. The EITC is a tax credit available to low-income singles and couples, with and without children. It is also refundable, meaning that eligible filers can receive money back from the credit even if they have no tax liability. For the poorest in society, EITC refunds can represent double-digit percentages of annual income. These benefits in and of themselves attract bipartisan support. The right likes it because it is seen as a tax break; the left likes it because it is an effective poverty-fighting measure. But there is an additional benefit that strengthens this support: an incentive to work. The EITC’s benefit levels are divided into three brackets determined by the recipient’s adjusted gross income. The first is the “phase-in”: for each dollar that an eligible worker earns, the benefit level increases. Once the worker reaches the maximum value of the benefit, it stays stable until the worker starts to earn too much to qualify for the credit. In this final bracket, the benefit level is lowered for each dollar earned until the benefit finally reaches zero. This facet of the EITC appeals to both sides once again: the left sees billions of dollars of aid going to low-income households; the right sees a policy that encourages work rather than dependence.

On top of the clear benefits of increased income, a range of other, indirect benefits has come to be associated with the EITC. These other benefits largely affect children in EITC-eligible households. They include improved infant and maternal health, better school performance, increased enrollment in higher education, higher incomes later in life, and increased Social Security benefits among women. Research has found that women who were eligible for the EITC expansion in the 1990s experienced “reduced mental stress, compared to similar women who would not have been eligible for the expansion.” Furthermore, researchers at UC Davis found that the EITC was associated both with a decrease in rates of low birth weight and an increase in average birth weight, a measure that is “an effective predictor of adult health as well as economic outcomes.” Although the EITC is not primarily concerned with health, those outcomes place it on par with health programs such as the Supplemental Nutrition Assistance Program (SNAP) and Women, Infants, and Children (WIC). Recent studies have also linked the EITC to better educational outcomes among children in EITC-eligible households.
These outcomes include increased test scores, particularly in math, among elementary and middle school aged students. The same research also found increased probabilities of earning a high school diploma or GED by age nineteen and of completing at least a year of college by age nineteen, by 2.1 percentage points and 1.4 percentage points, respectively. This latest study joins the large body of research that has already shown increases in K-12 education performance and test scores. A final indirect benefit of EITC benefits is increased income among adults whose families received the benefits as children: “For children in low-income families, an extra $3,000 in annual family income between their prenatal year and fifth birthday is associated with an average 17 percent increase in annual earnings and an additional 135 hours of work when they become adults.” Researchers hypothesize that this correlation is due to better health as children. By avoiding poverty-induced childhood diseases, beneficiaries avoid carrying those diseases into adulthood and are therefore able to have healthier lives overall. On top of that, most of those who benefit from the EITC are working-age single mothers. Because the EITC serves as a work incentive and these women work, and earn, more, they receive increased Social Security benefits at retirement.

Possible Improvements

There are three improvements commonly cited for the EITC: creating multiple tax refund days, expanding eligibility, and implementing state-level EITC programs in addition to the federal one. Ninety-five percent of EITC recipients have an average of $2,000 in debt. The majority of this debt comes from credit cards, with other types including utility, car, and student loans. This debt accumulates largely because income covers only an average of two-thirds of their monthly expenses, with the last third coming from the EITC. Although receiving the EITC benefit should in theory help families pay off their debt, it does not. These families continue to accumulate debt because they use the windfall of the EITC at tax time to buy things that are “normally confined to middle-class families, such as a special birthday present for a child or dinner out at a restaurant.” Despite this, most families report that they prefer receiving the one-time payment instead of hypothetical periodic payments throughout the year, which would help pay down debt. However, in a pilot program in Chicago, 229 EITC recipients were enrolled in a program where they received their benefits in four installments throughout the year. At the end of this program, 90% of participants reported that they preferred the installments over a single payment. Furthermore, they were able to save more of their refund, both relative to the control group and to themselves in the past year when they received a one-time payment. Multiple payments over the course of the year help to improve financial stability and would require minimal administrative changes to implement. This is even more feasible given the strong bipartisan support that the EITC already enjoys. One of the criticisms that surrounds the EITC is the low level of benefits that it pays out to childless adults relative to what it pays families. Although families tend to have more expenses, childless adults can still have as much, if not more, debt.
In a plan endorsed by both former President Obama and Speaker of the House Paul Ryan, the benefit for childless recipients would be raised from $506 to over $1,000 annually, while the minimum age to receive the credit would be lowered from 25 to 21. The White House has stated that this plan would benefit 13.2 million Americans. On the other side of the aisle, the conservative American Action Forum estimates that the plan would increase employment by 8.3 million jobs and bring the labor participation rate to pre-recession levels. State EITC programs also improve the federal program. Twenty-six states and the District of Columbia have a state EITC program that mirrors the requirements of the federal program and provides a percentage of the benefit that recipients already receive from the federal government, ranging from 3.5% in Louisiana to 85% in California. Twenty-three of these states have a refundable credit, which makes them especially effective. There are more complications in running the EITC at the state level, as states cannot run deficits the way the federal government can. However, given the effectiveness and support of the EITC, states should make every effort to enact and expand their own programs to supplement the federal one. Making policy is difficult. Making effective, well-thought-out policy is even more so. Fortunately, the EITC is already effective, well thought out, and largely implemented. The changes proposed here are simply tweaks and, given the policy’s broad support, should not be that hard to enact.

The views presented in this piece do not reflect the views of other Arbitror contributors or of Arbitror as a whole.

Photo: "Tax" originally taken by 401(K) 2012 (CC BY-SA 2.0) for Flickr. No changes were made. Use of this photo does not indicate an endorsement from its creator.
Wednesday, March 22, 2017

Cassie's Journal - March 21, 2017

We’re definitely cooling down from the highs on Sunday, and while near-sixty is still pretty good for March, the flip-flopping in the weather does seem a bit extreme this winter. I know, I shouldn’t mess with Mother Nature – especially after the cold we had for most of March Break; but I can’t help myself. That does remind me to mention that the weather has been affecting the farms around here. We’ll see how badly it messes the growing season up for them, but while our BMR crops can handle the extremes, there’s a very good chance that other winter or early spring crops have been ruined by the too-early warm spells. Ditto that for some tree foliage and flowers; though for the latter, we’ll just plant again when needed; and the frost is only going to be a problem for the perennials that were put to bed for the winter in the gardens.

I wasn’t going to get into a gardening report tonight. It’s late; I’ve had a very long day; and I need to get started on my nap soon before it’ll be time to get up and going again. Tai Chi was by the river, but chilly this morning; breakfast was standard school-day fare and uneventful; and we had a quiet day at school too – at least until the after-school practices. Michael and I had fun at our concert band practice; marching band is much easier now that we’re just getting ready for our last two parades and don’t have any of the fundraising to deal with on either side of the practice. Our teen praise team practice was fairly intense and went late because we are down to just a few practices before Easter; so we’ll be switching to full rehearsals for the entire Easter service next week – and needed to have the music ready so we can do that without slowing down the rehearsals more than needed – especially when we always have issues as we start filling in the gaps with the skits, bible readings, and everything else that goes on during our Easter sunrise service.

Michael and I didn’t get home until ten-thirty; he came to my house so we could do our homework and a Magi lesson with Mom; and it was going on midnight by the time he went home – even though we used a time phase for the studying and parts of the lesson. Okay, I might have spent a bit of time saying goodnight to him in a non-verbal sort of way after we took care of getting Ethan and Ehlana tucked in for the night; but that was only a minor bit of fun that we fit in between the studying and work. By the time I saw him out and finished getting ready for bed, it was closer to twelve-thirty than midnight; so I really tried to kick up the work into high gear for as long as I could. That was still hours ago; I did a lot of that work in a time phase; and that’s pretty much wiped me out.

The language studying is going well, but the family business work is getting very challenging as I’m juggling a lot of crazy issues around the world – and it sometimes gets especially tiring to deal with the darker side of business, where greed and self-interest need to be dealt with more often than honesty and compassion. I’m not going to start a diatribe on that, but when the ascension of the light finally arrives, I am really going to enjoy doing everything I can to wipe out the evil that too many people inflict on others simply to get their hands on as much wealth as possible.
How is it that everyone laughs at packrats that hoard hundreds or thousands of any one thing, yet doesn’t have the same reaction to the psychopaths that hoard billions of dollars for themselves and never want to share – or help to care for the people and the world around them? At least the packrats of the world are cleaning up the mess that would be left everywhere if they didn’t have their quirky collections! ;^)

Okay, I really must be tired for these mental wanderings to end up in my journal tonight. I have had a fun day even with all of the work, but it’s past time to sleep now; so, until next time...
Q: Spark Error: Could not initialize class org.apache.spark.rdd.RDDOperationScope

I've created a Spark standalone cluster on my laptop, then I go into an sbt console on a Spark project and try to embed a Spark instance like so:

val conf = new SparkConf().setAppName("foo").setMaster(/* Spark Master URL */)
val sc = new SparkContext(conf)

Up to there everything works fine, but then I try:

sc.parallelize(Array(1,2,3))
// and I get:
java.lang.NoClassDefFoundError: Could not initialize class org.apache.spark.rdd.RDDOperationScope$

How do I fix this?

A: Maybe you missed the following lib:

<dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-databind</artifactId>
    <version>2.4.4</version>
</dependency>
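Since the question uses sbt rather than Maven, the equivalent line in build.sbt would be roughly the following (a sketch: 2.4.4 mirrors the Maven snippet above, but the version should be pinned to whatever Jackson release your Spark distribution was built against):

// build.sbt -- same jackson-databind dependency as the Maven snippet above
libraryDependencies += "com.fasterxml.jackson.core" % "jackson-databind" % "2.4.4"

This particular NoClassDefFoundError is commonly a symptom of conflicting jackson-databind versions on the classpath, so aligning the version with your Spark build usually clears it.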
Catastrophic Injuries

Massachusetts Catastrophic Injury Attorney

Massachusetts Injury Lawyer Jeffrey Chapdelaine will fight to get you and your family the medical treatment, support and compensation critical to dealing with catastrophic injuries, including brain injuries, spinal cord injuries, back injuries and amputation injuries. If you or a loved one has suffered such an injury, it is imperative that you immediately find a Massachusetts personal injury attorney you can trust to fight for your rights. The medical bills will be astronomical. And the long-term care and rehabilitation needs can last a lifetime. Everyone involved in your case knows this. And everyone but you already has qualified and experienced legal representation. The doctors. The hospitals. The insurance companies and anyone else involved with your case is already moving to limit their liability and their responsibility.

Massachusetts catastrophic injury cases include:

Traumatic Brain Injuries (TBI)
Spinal Cord Injuries
Brain Injuries
Amputation Injuries
Burn Injuries

If you or a loved one has suffered a catastrophic injury in Massachusetts, contact Boston Personal Injury Lawyer Jeffrey R. Chapdelaine at (617) 262-1800 for a free appointment to discuss your case.
Transposase activity was thought to be extinct in humans because DNA movement can be deleterious in higher organisms, resulting in genomic instability and perhaps malignancy. However, we isolated a human transposase protein termed Metnase that had histone methylase and non-homologous end-joining (NHEJ) DNA repair activity. It was found to interact with DNA Ligase IV, consistent with its NHEJ repair activity. Metnase also was an endonuclease with a preference for supercoiled DNA. We therefore explored Metnase's role in decatenating replicated chromatids. Metnase interacted with Topoisomerase II (Topo II), the critical decatenating enzyme, and enhanced its activity in DNA decatenation, both in vitro and intracellularly. The nuclease activity within the transposase domain of Metnase was required for full enhancement of Topo II decatenating activity. Metnase improved the rate at which Topo II decatenated DNA, and increased the ability of Topo II to resist the decatenation inhibitor ICRF-193. The finding that Metnase improved Topo II resistance to ICRF-193 stimulated an investigation into whether it could mediate resistance to the clinically relevant Topo II inhibitor etoposide. We found that Metnase prevented inhibition of Topo II decatenation by etoposide in vitro, and mediated cellular resistance to etoposide, promoting proliferation in the presence of etoposide. Metnase also promoted a more rapid clearance of etoposide-induced DSB. Thus, Metnase appeared to mediate resistance to etoposide-induced DNA damage and cell cycle arrest. This is a novel mechanism of etoposide resistance that is unexplored. This application proposes to define the mechanism by which Metnase mediates resistance to etoposide by asking three questions: 1) What are the upstream signals that regulate the ability of Metnase to reduce etoposide DSBs? 2) What is the downstream pathway by which Metnase reduces etoposide DSBs? 3) Do Metnase levels predict clinical resistance to etoposide in human malignancy?

PUBLIC HEALTH RELEVANCE: We have isolated a novel protein termed Metnase that helps chromosomes repair breaks and also untangle, thereby allowing them to separate properly during cell division. Because of these actions, Metnase mediates resistance to the cancer drug etoposide. Understanding the mechanism by which Metnase does this would allow the generation of drugs targeting Metnase, which could help improve the response of cancer patients to etoposide. In addition, the levels of Metnase could predict which patient will respond to etoposide.
Monday, November 17, 2014

Experimental Cures for Flattened Register Definitions in vr_ad

On my current project, I had an issue with my register definitions. Quite a few of my DUT's registers were just instances of the same register type. My vr_ad register definitions were generated by a script, based on the specification, a flow that I'm pretty sure is very similar to what most of you also have. Instead of generating a nice regular structure, this script created a separate type for each register instance. What resulted was a flattened structure where I'd, for example, get one instance each of registers SOME_REG0, SOME_REG1, SOME_REG2, instead of three instances of SOME_REG. I was lucky enough to be able to (partly) change the definitions by patching them by hand. Someone on StackOverflow had the same problem, but didn't have the luxury of being able to fix it like I did. They weren't allowed to touch the code, as I'm guessing it probably belonged to a different team. They probably also had a lot of legacy code that was using those flattened register definitions. This made me want to do an experimental post on how best to cope with such an issue. Naturally, the best thing to do is to fix the underlying problem of the registers getting flattened, but that might not be possible, so let's look at how to fix the symptoms. To be able to do any kind of serious modeling, we need to be able to program generically. We can't (easily) do this if each register is its own type. I've tried to think of how to best handle this from a maintainability point of view. As a bonus requirement, we'd also like that, when the register definitions do get fixed (i.e. the generation flow gets updated), we have to make as few changes as possible to the modeling code.

Enough with the stories, let's get our hands dirty. As always, we'll start small, but think big. We'll go through a few iterations, look at where we're lacking and gradually refine our approach. Let's say we have a device that can operate with shapes. Part of its functionality involves doing stuff with triangles. It can process multiple triangles at the same time, where each triangle is described by a register containing the lengths of its sides. Our DUT does computations on the triangles, based on these values. For example, it can compute the areas of the triangles. We want to check that what the DUT writes out is correct, so we need to model these computations. We have a trusty script that can generate the register definitions from the specification (maybe an XML file). This script isn't very well written and it doesn't know that all three TRIANGLE registers are just the same register instantiated 3 times (i.e. a regular structure), or maybe the information got lost in the XML somehow. This is what we get for our register definitions: We can immediately see a problem with this approach. We've implemented the formula in three different places. This means that should something change, we have three places to fix. Now, Heron's formula changing is a pretty unlikely event, but should we have a different computation to perform here, the discussion stands. What we can do is extract the part that computes the actual area as its own method, one that takes the three sides as its arguments: At least this way we've centralized the computation in one location. The number of such methods will still grow linearly, though, with the number of TRIANGLE registers. This means that for n triangles we'll need n methods to compute the areas.
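As an aside, since the embedded code snippets didn't survive in this copy of the post, here is a rough reconstruction of that extracted helper (a sketch only: the reference model's name is my invention, and because a square-root routine may not be available in every setup, it returns the squared area, which is all the size comparisons later in the post actually need):

<'
// Hypothetical reconstruction -- shapes_ref_model is an assumed unit name.
extend shapes_ref_model {
    // Heron's formula without the final square root; the squared area is
    // enough for comparing triangle sizes, and sqrt() availability varies.
    compute_area_squared(a: real, b: real, c: real): real is {
        var s: real = (a + b + c) / 2;
        result = s * (s - a) * (s - b) * (s - c);
    };
};
'>

Each get_triangleX_area() method would then just forward its register's three side values to this helper.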
Let's add a new requirement: our DUT is also able to compute which triangle is the largest and we need to model that too. We can define a new method to do that based on the areas: In this method, the number of calls to get_triangleX_area() also grows with the number of triangles. Moreover, if we want to be able to find out which triangle is the smallest, the method for that would have to look like this: Pretty much the same as largest(), isn't it? In this setup, adding a single triangle would require adding a new method for the area and changing two others. That's not very maintainable. We can use the same trick we did for the area computation and pull out computing the list of areas to it's own method, while simplifying the largest() and smallest() methods: Now we only need to update the get_triangle_areas() method when adding a new triangle. Not much of an improvement, but every little thing counts when you're potentially dealing with a large number of triangles. While we may have things sorted out for areas, we get a new requirement. Our DUT can also compute perimeters and tell us which triangle is the longest and which one is the shortest. This means we'll need to add a similar set of methods to handle this aspect, based on the examples from above: Adding just one measly triangle is starting to become a real pain. What would be awesome is being able to just add one line of code every time a new triangle gets added and be done with it. Well, thanks to our good friends, the macros, this is possible. What we notice is that the code is very regular. Aside from the indices, the method bodies look remarkably similar. This means that for the area aspect we can create the following macro: We could define a similar macro for the perimeter aspect (I won't show it here). While we have made adding new triangles easier, we've also shot ourselves in the foot. Excessive use of macros is a code smell because it can be very difficult to understand what code gets expanded in the background. Also, it makes the code more difficult to refactor, since we can't rely on fancy IDE features. If we analyze the code up now we see that one of our main problems is that each triangle is stored in an individual field. This means that there's no way to access a triangle from a method by just passing in the index of the triangle (0, 1, 2, etc.). If we could do this, we could get rid of all our get_triangleX_area() methods. A way of doing this is using the reflection API. Reflection allows us, among others, to get a field of a struct by using only the name of that field, specified as a string. In our case, we know that our register file contains fields named triangle0, triangle1, triangle2, etc. We can use the reflection API to extract the field that contains contains the appropriate index as its suffix: The way to use the reflection API is to get the representation of our register file from the rf_manager singleton. What we'll end up with is a struct of type rf_struct that understands what fields, methods, etc. the register file has. Out of this we can extract a representation of the field for the triangle that interests us, of type rf_field. Based on this field we can construct our return value. How exactly this happens is explained in the documentation and in this excellent post from the Specman R&D team. Have a look at those resources for more details on how to use the reflection interface. After we've gotten an instance of our desired register, we can use this to compute the area. 
We can do away with the get_triangleX_area() methods and replace them with one get_triangle_area_by_index(...) method: Because the return value of get_triangle_reg(...) is of type vr_ad_reg, we can't reference the SIDEx fields directly (as these are defined under when subtypes). We can't cast the value to any of these subtypes, because we would need n cast statements (the very thing we want to avoid). We can use the same method as before to get the values of the sides via the reflection interface. The resulting code isn't pretty, but it works. Can we do better, though? Of course we can! An essential observation to make here is that all triangle register types contain the same fields, whether they are of type TRIANGLE0 or TRIANGLE1 or TRIANGLE2. We could do all of our operations using only a variable of one of these types, provided that we fill it up with the appropriate values for the sides. That is, a TRIANGLE0 with sides 1, 2 and 3 has the same area as a TRIANGLE1 with the same sides. With this idea in mind, we can do the following: We can just create a variable of type TRIANGLE0 and fill it up with the contents of our desired register. We can then reference the SIDE fields directly, without the need for all of that messy reflection code. The price we pay for this convenience, however, is in essence a copy operation. Whether this is slower than using the reflection interface I can't say (though I suspect it isn't), but it is in any case cleaner. Not only that, but we can now handle any number of triangles without increasing the number of lines in the code. The only modification we need to make is to set the num_triangles field to the appropriate value.

I'd propose one final refactoring step. Why do we have to define the methods that compute the area and the perimeter inside the reference model? A triangle register contains all of the information required to compute these values. Seeing as how we'll just be using the TRIANGLE0 subtype in our code, we can extend that to contain a get_area() method: Of course, we can do the same for the perimeter aspect (not shown here). Let's take a moment to see what we've achieved. We've managed to program our computations in a generic way, by relying on methods that take the index of a register as a parameter. This saves us a lot of typing because we don't have to define a method that accesses each field. We've also nicely encapsulated our methods: all methods that refer to a single triangle (get_area() and get_perimeter()) are defined in the triangle register struct, while the methods that refer to all triangles are encapsulated in the reference model struct.

Further above, I've mentioned the bonus requirement that we want our resulting code to look as similar as possible to the case where the register definitions aren't flattened. Let's see how our reference model would look in the ideal case. Notice that we don't need the get_triangle_regs() method anymore, as we already have our triangles organized in a list. If we were to implement the last proposal, once our register definitions were fixed, migrating to the new structure would only require some minor search-and-replace operations. This goes to show that starting off on the wrong foot doesn't mean we're completely out of the dance. With some extra work, we can get very close to the ideal solution, but we have to be willing to compromise a bit on simulation speed. Still, it's better than compromising on maintainability and getting stuck in an endless loop of bad coding style.
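For the same reason as above (the original snippets are missing from this copy), here is a sketch of what that final get_area() extension might look like, with the SIDE field names assumed from the running example and the square root again left out in favor of the squared area:

<'
// Hypothetical sketch: the computation lives in the register subtype itself,
// so any TRIANGLEx value copied into a TRIANGLE0 variable can be queried
// directly.
extend TRIANGLE0 vr_ad_reg {
    get_area_squared(): real is {
        var a: real = SIDE0.as_a(real);
        var b: real = SIDE1.as_a(real);
        var c: real = SIDE2.as_a(real);
        var s: real = (a + b + c) / 2;
        result = s * (s - a) * (s - b) * (s - c);
    };
};
'>

largest() and smallest() then reduce to comparing these squared values across the list of triangles, since the square root is monotonic.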
I hope you found this post useful. I've posted the code to SourceForge for reference. Stay tuned for more!

You're right! This also removes the need to do reflection. I was so fixated on that idea that I forgot about get_reg_by_kind(...). What I also usually do is define an extra method inside vr_ad_reg, called get_field_by_name(fld_name : string). This basically does what your snippet does (including asserts, etc.). I find it really handy.
An Australian court has dealt the final blow to Pink Lady America’s (PLA) bid to secure ownership of the apple brand in Chile. The ruling stems from a decision made in the Supreme Court of Victoria last November, which found PLA had no right to use the Pink Lady trademark in the South American nation. The case determined peak industry body Apple and Pear Australia (APAL) was the rightful owner of the trademark. Following the Supreme Court case, PLA lodged an application for special leave to appeal the decision to the High Court of Australia. It also requested a stay of execution of the orders issued by the Supreme Court to allow it to continue licensing exporters in Chile until the High Court application was determined. “The High Court has considered Pink Lady America’s appeal application which means that the Court of Appeal’s [Supreme Court] initial decision stands and Pink Lady America cannot appeal this decision further,” APAL chief executive Phil Turnbull said in a statement. The ruling ensures all use of the Pink Lady trademarks in Chile on Chilean-grown apples must be licensed by APAL, including where apples are exported from Chile. Licences will only permit the use of the Pink Lady trademark on apples that meet international Pink Lady brand quality standards. “This is a great outcome for APAL’s Pink Lady business and all our stakeholders,” Turnbull added. “It’s important to acknowledge the hard work, dedication and advice we’ve received from so many individuals on this matter. “I’d also like to recognise and thank Garry Langford and Rebekah Jacobs for the great work and tireless hours they have each dedicated to the case over many years.”
The Sun reported that it was the most scandalous accusation to hit the tournament since a previous big-time Scrabble dustup, in which a player accused another of swallowing a tile to gain an advantage.
Q: Is docker suitable to be used for long running containers?

I'm currently migrating from a powerful root server to a less powerful and, most notably, cheaper server. On the root server I had some services isolated into separate VMs. On the new server this is not possible. But I'd still like to have some isolation for some services... if possible. Currently I'm thinking of using Docker for this isolation. But I'm not sure if Docker is the right tool here. I tried to google for an answer, but most posts I found about Docker are only related to short-term containers for development, CI or testing purposes. In my case it would be more like having a long-term container that runs e.g. a web service stack with nginx, PHP and MySQL/MariaDB (while the DB might even get its own container) and other containers that run other services. So my question is: is Docker suitable for the task of running a container for a longer time? Or, in other words, is Docker usable as a "replacement" for KVM-based VMs?

A: Docker is used all over the place for web apps, which are long-running apps. Currently in production I have the following running in Docker:

php-fpm apps
celery queue workers (python)
nodejs apps
java tomcat7
Go

A: As with all judgement calls, there will be some opinion in any answer. Nevertheless, it is definitely true to say that containerisation is not virtualisation. They are different technologies, working in different ways, with different pros and cons. To regard containerisation as virtualisation lite is to make a fundamental mistake, just as regarding a virtualised guest as a cheap dedicated server is a mistake. We see a lot of questions on SF from people that have been sold a container as a "cheap VPS"; misunderstanding what they have, they try to treat it as a virtualised guest, and cause themselves trouble. Containerisation is undoubtedly excellent for development work: it enables a very large number of environments to be spun up very quickly, and thus makes development on multiple fast-changing copies of a slowly-changing reference back end very easy. Note that in this scenario the containers are all very similar in infrastructure and function; they're all essentially subtly-different copies of a single back end. Trouble may arise when people try to containerise multiple distros on a single host, or guests have different needs in terms of kernel modules, or external hardware connectivity arises as an issue - and in many other comparable departures from the scenarios where containerisation really does work well. If you decide to deploy into production on containers, keep your mind on what you've done, and don't fall into the mindset of thinking of your deployment as virtualised; be aware that saving money has opportunity costs associated with it. Cut your coat according to your cloth, and you may very well have a good experience. But allow yourself (or, more commonly, management) to misunderstand what you've done, and trouble may ensue.
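As a rough illustration of the long-running setup the question describes (a minimal sketch; the image names, password and port mapping are placeholders, not recommendations):

# MariaDB in its own container; --restart=always makes the Docker daemon
# bring it back up after a crash or host reboot, which is the key flag
# for long-running services.
docker run -d --name db --restart=always -e MYSQL_ROOT_PASSWORD=secret mariadb

# nginx as a separate long-running container, linked to the database.
docker run -d --name web --restart=always --link db:db -p 80:80 nginx

Both containers then run indefinitely as background services, which is exactly the long-term use case asked about.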
GAINESVILLE, Fla. — Deeply concerned about major federal budget cuts to research and higher education at a time when other nations are steadily increasing investments in those areas, University of Florida President Bernie Machen today joined 163 other university presidents and chancellors in calling on leaders in Washington to close what they call the “innovation deficit.”

GAINESVILLE, Fla. — Consumer confidence among Floridians sank three points in July to 78 from a revised reading of 81, after edging upward for four consecutive months, according to a University of Florida survey.

GAINESVILLE, Fla. — Standing in the dairy aisle, hand on a gallon of milk, a consumer might wonder why reports of falling dairy prices aren’t reflected in a lower price on the milk he’s eyeing in his neighborhood grocery.

GAINESVILLE, Fla. — The Society of Professional Journalists has honored Mike Foley, master lecturer of journalism in the University of Florida’s College of Journalism and Communications, with its 2013 Distinguished Teaching in Journalism Award.

GAINESVILLE, Fla. — The University of Florida Sid Martin Biotechnology Incubator was ranked "World's Best University Biotechnology Incubator," according to an international study conducted by the Sweden-based research group UBI.

GAINESVILLE, Fla. — University of Florida Health Shands Hospital has been recognized among the nation’s best hospitals in five adult medical specialties, according to U.S. News & World Report’s 2013-14 Best Hospitals rankings.

GAINESVILLE, Fla. — In celebrating the 90th year of the University of Florida’s Homecoming, Gator Growl 2013 will feature co-headliners The Fray and Sister Hazel, with special guest New Directions Veterans Choir.

GAINESVILLE, Fla. — Florida's agriculture, natural resources and related food industries provided a $104 billion impact on the state in 2011 and have continued to improve since the 2008 recession, according to a new University of Florida study.

GAINESVILLE, Fla. — Julie A. Johnson has been named dean of the University of Florida College of Pharmacy, becoming the seventh dean and the first woman to hold the appointment in the college’s 90-year history.

GAINESVILLE, Fla. — After hearing from teachers who actively engaged with Algebra Nation in its trial period, the state Legislature has invested $2 million to expand the reach and impact of the University of Florida's innovative program to help students succeed on the high-stakes End-of-Course exam.

GAINESVILLE, Fla. — University of Florida student Michael Wayne Pirie, who died Feb. 12, 2011, while trying to save a friend who became stranded in a cave, has been recognized with a Carnegie Medal from the .

GAINESVILLE, Fla. — In observance of Independence Day, all RTS offices will be closed on Thursday, July 4, and RTS bus service will be suspended. ADA paratransit service will not be running on this date, except for dialysis appointments.

GAINESVILLE, Fla. — Personalized medicine at University of Florida Health celebrates its first successful year helping heart patients with news of major funding from the National Institutes of Health that will advance the program to more patients and health care providers across the state.
David Norton, vice president for research, was quoted in a July 3 Business Report story about UF receiving an $8 million award from the National Nuclear Security Administration to conduct high-performance computing simulations aimed at addressing some of the world’s most complex problems.

Akito Kawahara, assistant curator of Lepidoptera at the Florida Museum of Natural History, was quoted in a July 4 CBS News story about his research into how hawkmoths confuse predator bats by sending sonar signals from their genitals.

George Burgess, director of the International Shark Attack File, was interviewed in a July 6 CBS News report about the surge in the great white shark population along the East Coast and what people can do to avoid sharks.

Chris McCarty, director of the Survey Research Center, was quoted in a June 25 Miami Herald story about an increase in Floridians’ confidence in the economy, which hit a six-year high in June. The story and others were the result of a Media Relations office news release.

Nicole Avena, a research neuroscientist in the fields of diet and addiction, was quoted in a June 26 National Public Radio story about new research that finds certain carbohydrates can lead to more intense hunger and overeating.

Anthony Randazzo, retired geology professor, was quoted in a June 13 Tampa Bay Times story about the recent heavy rains in Florida creating the right conditions for the formation of sinkholes this summer.

Political science professor Daniel Smith was quoted in a June 28 Tampa Bay Times story about his research results that found Florida’s Hispanic voters waited longer than white voters in the November election.

Roger Fillingim, director of the Pain Research and Intervention Center of Excellence, was quoted in a July 2 Tampa Bay Times story about a U.S. Centers for Disease Control and Prevention study that found overdose deaths from prescription painkillers rose much faster among women than men in the past decade.

Clay Calvert, Brechner Eminent Scholar in Mass Communication, was quoted in a June 13 Washington Post blog about claims that journalist Glenn Greenwald aided and abetted Edward Snowden and whether Greenwald should be charged with a crime.
The FA refused to confirm the identities of the positive controls, which comes on the back of Rio Ferdinand’s failure to show up for a drugs test resulting in his exclusion from England’s Euro 2004 qualifier against Turkey.
Suzuki GSXR 600 K4-K5 R&G Crash Protectors

The brilliant Gixer 6 K4-K5 is one of the best road bikes out there, according to our customers and some of the press. Why not protect your asset with R&G crash protectors? Easy to fit in a couple of hours, they may well save you some money if you have a spill. They fit via the top engine mount and no modifications are required.
PBS recently aired a short segment on a policy proposal known as the Basic Income Guarantee (BIG). The idea behind BIG is that all citizens in a given country should be guaranteed a basic income whether they work or not. This idea has been embraced by people on both sides of the political spectrum, from left-wing academics such as David Graeber and Frances Fox Piven to right-wing free-market thinkers such as Milton Friedman and Charles Murray. Although the two groups agree that the policy is desirable, they see it as a means to entirely different goals. For right-wingers the idea is to use BIG as the price for purchasing an entirely free-market-based society. The welfare state — everything from food stamps to public housing to public health care — would be eliminated, and BIG would allow people to take care of their own needs. For left-wingers BIG would free people from having to do jobs that do not give them any satisfaction and allow them to pursue creative goals that they would not otherwise pursue for lack of financial support. While BIG is a good means to raise aggregate spending and income levels in an economy plagued with deficient demand, both positions gloss over some rather serious issues. The right-wingers seem to think that BIG can make public welfare institutions unnecessary. This is unlikely. These institutions provide services that the private sector is often bad at allocating. It is well known, for example, that the costs of health care are lower for those who need it least and higher for those who need it most. It is also well known that the property market is prone to speculation and that this can price people at the bottom of the ladder out of the market; that includes young professionals starting families as well as the poorer members of society. The fact of the matter is that, as economists have recognized for decades, the private sector often fails to meet the needs of citizens because of its reliance on the profit motive, and so the government sector needs to intervene to make sure that outcomes in certain sectors are equitable. Those on the left also gloss over some of the social complexities that the welfare state seeks to address. Imagine, for example, that a BIG office opened up in a drug-addled neighborhood. Any citizen could go to this office and receive an income of, say, $2,000 a month. Even though the monthly income is intended to free people to pursue their creative impulses, without added social support the money could end up funding drug habits instead. This, in turn, would have the twin effect of leading to poorer health outcomes for drug addicts and making the violent enterprise of drug dealing even more profitable than it already is. Many children who grow up in such an environment would also likely emulate their parents by simply collecting BIG payments and buying drugs with them. What is needed in such circumstances is a program that at once increases income and ensures that people do not remain idle because, as is well known among labor economists, it is idleness and unemployment above all else that lead to problems such as drug addiction.
Filed: May 30, 2013

UNITED STATES COURT OF APPEALS FOR THE FOURTH CIRCUIT

No. 12-1566 (11-0624)

MARINE REPAIR SERVICES, INCORPORATED; SIGNAL MUTUAL INDEMNITY ASSOCIATION, LIMITED, Petitioners,

v.

CHRISTOPHER E. FIFER; DIRECTOR, OFFICE OF WORKERS' COMPENSATION PROGRAMS, UNITED STATES DEPARTMENT OF LABOR, Respondents.

O R D E R

Upon Petitioners' motion for publication of the Court's opinion, IT IS ORDERED that the motion to publish is granted. The Court amends its opinion filed May 2, 2013, as follows: On the cover sheet, section 1 -- the status is changed from "UNPUBLISHED" to "PUBLISHED." On the cover sheet, section 6 -- the status line is changed to read "Vacated and remanded by published opinion." On the cover sheet -- the reference to the use of unpublished opinions as precedent is deleted.

For the Court - By Direction
/s/ Patricia S. Connor, Clerk

PUBLISHED

UNITED STATES COURT OF APPEALS FOR THE FOURTH CIRCUIT

No. 12-1566

MARINE REPAIR SERVICES, INCORPORATED; SIGNAL MUTUAL INDEMNITY ASSOCIATION, LIMITED, Petitioners,

v.

CHRISTOPHER E. FIFER; DIRECTOR, OFFICE OF WORKERS' COMPENSATION PROGRAMS, UNITED STATES DEPARTMENT OF LABOR, Respondents.

On Petition for Review of an Order of the Benefits Review Board. (11-0624)

Argued: March 20, 2013   Decided: May 2, 2013

Before WILKINSON, SHEDD, and DUNCAN, Circuit Judges.

Vacated and remanded by published opinion. Judge Duncan wrote the opinion, in which Judge Wilkinson and Judge Shedd joined.

Lawrence Philip Postol, SEYFARTH SHAW, LLP, Washington, D.C., for Petitioners. Michael J. Perticone, HARDWICK & HARRIS, Baltimore, Maryland, for Respondents.

DUNCAN, Circuit Judge:

Marine Repair Services, Inc. ("Marine") petitions for review of the Decision and Order of the Benefits Review Board ("BRB" or the "Board") awarding permanent partial disability benefits to Marine's former employee, Christopher Fifer, under the Longshore and Harbor Workers' Compensation Act ("LHWCA"). Applying the burden-shifting scheme that governs LHWCA disability claims, the administrative law judge ("ALJ") reviewing Fifer's claim concluded that Marine failed to meet its burden of presenting suitable alternative employment for Fifer. The BRB affirmed. Because the ALJ made findings unsupported by the record and demanded more of Marine than our precedent requires, we grant Marine's petition for review, vacate the Decision and Order of the BRB, and remand for further proceedings consistent with this opinion.

I.

A.

Prior to the events underlying this petition, Fifer earned $1,219 weekly working for Marine as a repairman of large shipping containers, a physically demanding job requiring climbing, bending, and heavy lifting of over fifty pounds. On October 26, 2007, Fifer suffered shoulder, arm, and back injuries in an on-the-job car accident. After the accident, Marine began paying Fifer temporary total disability benefits while Fifer sought treatment. Dr. Michael Franchetti became Fifer's primary orthopedist, to whom Fifer complained of back pain which radiated down his legs, as well as back spasms. During his two-year course of treatment, Dr. Franchetti encouraged Fifer to perform physical therapy, prescribed muscle relaxers and painkillers, and reviewed scans of Fifer's spine. He also referred Fifer to another physician for epidural steroid injections. Dr. Franchetti ultimately diagnosed Fifer with chronic lumbosacral strain, sciatica, and disc protrusion and herniation. Fifer underwent his first functional capacity evaluation ("FCE") in June 2008.
In addition to finding that Fifer did "not meet the physical demands of his pre-injury occupation," the evaluator concluded that Fifer should limit himself to jobs within "medium" work parameters, and that he should limit lifting to twenty-five pounds on an occasional basis. J.A. 241. In an attempt to prepare himself to return to Marine, Fifer completed a round of work-hardening from July to September 2008.[1] The work-hardening evaluator released Fifer on September 12, 2008, ascribing him "full time tolerance[] with the lower parameters of heavy work, with limitations in bending and material handling." Id. at 263 (the "2008 work-hardening release"). The evaluator instructed Fifer to see Dr. Franchetti on September 15, 2008 for "a full release back to work." Id.

[1] Work-hardening is a rehabilitation process through which injured employees perform tasks that simulate the physical demands of their jobs in an effort to condition them for return to employment.

Fifer's September 15 visit to Dr. Franchetti resulted in updated work restrictions (the "September 2008 restrictions"). Dr. Franchetti indicated that Fifer could return "to restricted work status," so long as he performed "[n]o repetitive bending or twisting with [his] back, no lifting more than 55 lbs., no carrying more than 40 lbs., no overhead lifting more than 30 lbs., no lifting more than 30 lbs. frequently, and no sitting more than 45 minutes without changing positions." J.A. 211. Marine would not employ Fifer while he was subject to the September 2008 restrictions. As a result, Fifer began working at his family's seafood restaurant, where he earned $400 weekly performing odd jobs, errands, and assisting with food preparation. Prior to his work as a longshoreman, Fifer had managed his family's restaurant for two years.

Both parties agree that Fifer reached maximum medical improvement in February 2009. On August 20, 2009, Fifer underwent a second FCE. That evaluation showed reduced lifting ability, as compared to the 2008 FCE, but also indicated that Fifer could sit and stand "frequent[ly]" and walk "const[antly]" at a slow pace, improvements from the 2008 FCE. J.A. 371. The evaluator concluded that work in the family restaurant was "consistent with [Fifer's] demonstrated activity tolerances," that Fifer could not return to Marine as a container repairman, and that he should "[m]aintain work activity within the light work parameters." Id. at 373. According to the FCE, "light work" includes jobs that involve occasionally lifting up to twenty pounds and require "walking or standing to a significant degree." Id. at 371.

During an October 2009 deposition in connection with this case, Dr. Franchetti clarified that based on the results of the August 2009 FCE, he would revise his September 2008 restrictions. Specifically, based on the August 2009 FCE, Dr. Franchetti would reduce Fifer's "lifting and carrying weight to 25 pounds," reduce overhead lifting to twenty pounds, and "would recommend no lifting more than about 10 to 15 pounds frequently." J.A. 390 ("the October 2009 restrictions"). Fifer's sitting restriction remained the same: no sitting without changing position for forty-five or more minutes. Dr. Franchetti confirmed that he did not see any problem with Fifer's work in the family restaurant.

B.

1.

After Marine discontinued temporary payments in January 2009, Fifer filed this claim for permanent disability benefits under the LHWCA, 33 U.S.C. § 901 et seq. The ALJ conducted a hearing on October 29, 2009.
At the hearing, Fifer and Dr. Franchetti testified that physical limitations prevented Fifer from returning to work as a repairman at Marine.[2] Dr. Franchetti testified that Fifer "has sustained a permanent impairment to his person as a whole, as a result of his lumbar spinal injury," resulting in a "31 percent whole person impairment." J.A. 389.

[2] Dr. Franchetti testified by deposition.

Marine presented evidence of alternative employment for Fifer in the relevant geographic area. Marine's vocational rehabilitation specialist, Brian Sappington, testified to three labor market studies he had prepared to demonstrate alternative employment. The first two were conducted in December 2008 and relied on Fifer's 2008 work-hardening release, which allowed "[h]eavy duty [work] with limitations." J.A. 276. The first study listed positions as a welder, forklift driver, courier, and security guard; the second included five restaurant management positions with "light duty" physical requirements. Sappington's third and final study took Dr. Franchetti's September 2008 restrictions into account. J.A. 359 (noting that Fifer's restrictions were "[u]nlimited standing with restricted lifting per Dr. Franchetti"). That study provided a description of the restaurant manager and assistant manager role from the Dictionary of Occupational Titles ("DOT") and listed six restaurant management positions for which Sappington testified Fifer would be vocationally qualified.

Sappington supplemented the second and third study with his testimony at the hearing before the ALJ. Specifically, upon receiving Dr. Franchetti's October 2009 work restrictions, Sappington had contacted employers from the second and third studies and performed site visits to determine whether the restaurant management positions would comport with Fifer's revised lifting restrictions. Sappington testified that he identified two restaurants where a person with a twenty-five pound lifting restriction "would be a candidate" or where "the restaurant would provide reasonable accommodation to someone with Mr. Fifer's background and restrictions," J.A. 156, and two more restaurant positions where employees told Sappington they rarely lifted anything over twenty-five pounds and felt accommodations were possible, id. at 157-58, even though the job descriptions for those restaurant posts required an ability to lift more than twenty-five pounds. Sappington identified three additional restaurant positions which did not include a minimum lifting requirement, although he was unable to verify actual lifting requirements at those restaurants. Therefore, Sappington concluded that of the seven restaurants he visited, four of them would "definite[ly]" accommodate Fifer's physical limitations. Id. at 164. The annual salary for these positions ranged from $28,000 to $40,000. Sappington also testified that the security guard positions listed in the first labor market study, which required "frequent standing and walking," fit within Dr. Franchetti's October 2009 restrictions. J.A. 282.

2.

In an opinion issued on March 28, 2010, the ALJ concluded that Fifer met his burden of establishing a prima facie case of total disability since he could not return to his former position at Marine. The ALJ then assessed whether Marine had rebutted Fifer's showing of disability by demonstrating the availability of suitable alternative employment by comparing Sappington's labor market studies with Fifer's vocational and physical abilities.
She found that none of Sappington's studies provided adequate levels of detail regarding the positions' requirements. As such, the ALJ determined that Fifer's job in the family restaurant, where he earns $20,800 annually, represented his wage earning capacity. She awarded permanent partial disability benefits accordingly.

The ALJ credited Fifer's testimony regarding his physical limitations. Fifer testified that he chose to work at his family's restaurant because there, "if I need to take a break and sit down I can sit down and . . . I'm not going to get fired." J.A. 96. While Fifer testified that he can "do everything [at the restaurant] that needs to be done," he has, on at least one occasion, taken a thirty minute break to lay down when he felt a muscle spasm developing in his back. J.A. 96-97. The ALJ also credited the testimony of Fifer's brother, Tracy, who manages the restaurant; Tracy Fifer testified that his brother "has up days and down days" and sometimes "needs to sit down right away" when he arrives to work. J.A. 129. The ALJ also credited the deposition testimony of Dr. Franchetti, who confirmed that Fifer's restaurant work comported with the October 2009 restrictions, which limited Fifer to lifting a maximum of twenty-five pounds.

In rejecting the labor market studies, the ALJ found Marine's first study inconsistent with Fifer's restrictions, as some of the jobs--forklift operator and welder--"require[d] the ability to perform medium or heavy work." Id. at 32. The ALJ rejected the security officer positions listed in the first study after finding that Fifer's pain medication regimen would cause him to fail any required drug screenings, precluding employment as a security guard. The ALJ rejected the five light duty restaurant management positions in Marine's second study because "Mr. Sappington did not provide a description of the positions, other than by their title," nor did he indicate that he "actually spoke to anyone about the job duties and availability of these positions." Id. Finally, although the ALJ recognized that the third study, along with Sappington's testimony, identified four positions where "lifting over 25 pounds was not regularly required of the manager," she faulted that study for failing to "describe[] the specific duties of these positions, in particular, whether they require standing for long periods of time, and provide for rest breaks." Id. at 33. The ALJ concluded that "Mr. Fifer's credible complaints of pain, his inability to stand for long periods of time, his need for frequent rest breaks, and his regimen of medication" made the restaurant jobs inapplicable "although [the jobs] may accommodate the lifting restrictions." Id.

The Board affirmed the ALJ's decision. It concluded that Sappington "did not provide all of the job duties or assess the jobs' suitability in terms of all of claimant's restrictions," and "did not refer to any standard job descriptions." Id. at 59. Because Sappington's reports "lack[ed] . . . specific information regarding all the physical duties required of the positions," the ALJ could not determine whether Fifer's need for "frequent breaks" and "limit[ations] in the amount of sitting and standing he can do" would be accommodated. Id. The Board issued its final opinion on April 5, 2012. This appeal followed.

II.

On appeal, Marine contends that it met its burden of showing suitable alternative employment for Fifer, and that the ALJ's conclusions are therefore unsupported by substantial evidence.[3]
In determining whether Marine met its burden of showing suitable alternative employment, we review Board decisions for errors of law and "to ascertain whether the Board adhered to its statutorily mandated standard for reviewing the ALJ's factual findings." Newport News Shipbldg. & Dry Dock Co. v. Riley, 262 F.3d 227, 231 (4th Cir. 2001). An ALJ's factual findings "'shall be conclusive if supported by substantial evidence in the record considered as a whole.'" Newport News Shipbldg. & Dry Dock Co. v. Stallings, 250 F.3d 868, 871 (4th Cir. 2001) (quoting 33 U.S.C. § 921(b)(3)).

[3] Marine also raises several challenges related to Fifer's attorney's fee award. Attorney's fees are available for successful prosecution of a LHWCA claim. 33 U.S.C. § 928. Because we vacate the Board's Order and remand, we need not address the issue of attorney's fees.

Our assessment of whether the Board complied with that standard comprises "an independent review of the administrative record"; "[l]ike the Board, [we] will uphold the factual findings of the ALJ so long as they are supported by substantial evidence." Norfolk Shipbldg. & Drydock Corp. v. Faulk, 228 F.3d 378, 380 (4th Cir. 2000). We consider "substantial evidence" to require "more than a scintilla but less than a preponderance"; it is "such relevant evidence as a reasonable mind might accept as adequate to support a conclusion." Id. at 380-81 (internal quotation and citation omitted). We review the ALJ's legal determinations de novo. Dir., Office of Workers' Comp. Programs v. Newport News Shipbldg. & Dry Dock Co., 138 F.3d 134, 141 (4th Cir. 1998).

The Act provides compensation to longshore workers who have experienced on-the-job injuries "for the economic harm suffered as a result of the decreased ability to earn wages." Norfolk Shipbldg. & Drydock Corp. v. Hord, 193 F.3d 797, 800 (4th Cir. 1999). LHWCA claims are governed by a burden-shifting scheme; in order to make a successful compensation claim, "a claimant must first establish a prima facie case by demonstrating an inability to return to prior employment due to a work-related injury." Newport News Shipbldg. & Dry Dock Co. v. Dir., Office of Workers' Comp. Programs, 315 F.3d 286, 292 (4th Cir. 2002). "If the claimant makes this showing, 'the burden shifts to the employer to demonstrate the availability of suitable alternative employment which the claimant is capable of performing.'" Id. (citation omitted). If the employer does not itself provide suitable alternative employment, it "'may demonstrate that [such] employment is available to the injured worker in the relevant labor market.'" Id. at 293 (citation omitted). If the employer meets this burden, "its obligation to pay disability benefits is either reduced or eliminated, unless the employee shows 'that he diligently but unsuccessfully sought appropriate employment.'" Id. (citation omitted).

As Fifer established disability by showing that he is unable to return to his job at Marine, this case turns on whether Marine has met its burden of showing suitable alternative employment. In particular, Marine contends that it offered evidence of alternative employment more lucrative than Fifer's position at his family's restaurant. A finding of higher-paying alternative employment would increase Fifer's wage-earning capacity and decrease or nullify the disability payments Marine owes Fifer.
We find the ALJ's conclusion that Marine failed to present suitable alternative employment erroneous for two reasons: (1) the ALJ made findings of fact as to Fifer's physical limitations which were unsupported by substantial evidence in the record, and (2) the ALJ faulted Marine for failing to address these limitations, imposing a heavier legal burden than our precedent requires.

1.

First, in rejecting Marine's labor market studies, the ALJ emphasized Fifer's "inability to stand for long periods of time," "need for frequent rest breaks," and "regimen of medication," physical limitations unsupported by substantial evidence in the record. J.A. 33. Although we may not disregard the ALJ's findings "'on the basis that other inferences might have been more reasonable,'" Ceres Marine Terminals, Inc. v. Green, 656 F.3d 235, 240 (4th Cir. 2011) (citing Newport News Shipbldg. & Dry Dock Co. v. Tann, 841 F.2d 540, 543 (4th Cir. 1988)), there must be some evidence in the record to support the findings.

The ALJ's conclusions regarding Fifer's problems standing and need for breaks were unsupported by the evidence in the record. Fifer did not testify that he had trouble standing; instead, he indicated that he needed to take breaks during work-hardening in 2008 (while performing tasks targeted towards returning him to "hard" work parameters) and that he chose to return to his family's restaurant because he knew he could take breaks there without reprimand. On one occasion, he had to lie down to rest his back; his brother testified that sometimes Fifer "needs to sit down right away." Id. at 129. While the ALJ credited Fifer's testimony, she also credited the testimony of Dr. Franchetti, who never mentioned standing restrictions or rest break requirements, either in his testimony or in the September 2008 or October 2009 work restrictions. In fact, Dr. Franchetti indicated that Fifer's physical limitations did not bar him from restaurant work. Further, the most recent FCE indicated that Fifer could stand "frequent[ly]" and walk "const[antly]" within light work parameters. J.A. 371.

The ALJ also emphasized Fifer's medication regimen as a barrier to employment, ultimately faulting Marine for failing to address Fifer's medication-related restrictions in its labor market studies. The ALJ indicated that the security guard positions Marine offered would likely require drug tests which Fifer would fail. Nothing in the record, however, indicated that Fifer's medications interfered with his ability to find work. There was no evidence to support the ALJ's conclusion that security guards routinely undergo drug testing, that prescription painkillers cause applicants to fail required drug tests, or that Fifer's regimen would bar Fifer from employment. The ALJ's determination that Fifer could not qualify for the security guard positions because of his medication was thus unsupported by any evidence, much less substantial evidence.

2.

Second, the ALJ's emphasis on Fifer's standing, rest break, and medication-related restrictions led her to fault Marine for overlooking them in its labor market studies. The ALJ thus penalized Marine for failing to address restrictions of which it was unaware, imposing too heavy a responsibility under the LHWCA's burden-shifting scheme. This was legal error, for which we vacate the underlying decision and order. See Universal Mar. Corp. v. Moore, 126 F.3d 256, 264-65 (4th Cir.
1997) (vacating the BRB's decision and remanding after holding that the ALJ's imposition of too great a burden on the employer to demonstrate suitable alternative employment was an error of law); Trans-State Dredging v. Benefits Review Board, 731 F.2d 199, 201 (4th Cir. 1984) (reversing the BRB and remanding after finding that requiring the employer to contact prospective employers to determine whether they would hire someone with the claimant's abilities "place[d] too heavy a burden upon the employer").

We have held that, to meet its burden, "an employer must present evidence that a range of jobs exists which is reasonably available and which the disabled employee is realistically able to secure and perform." Lentz v. Cottman Co., 852 F.2d 129, 131 (4th Cir. 1988). There must be "a reasonable likelihood, given the claimant's age, education, and vocational background that he would be hired if he diligently sought the job[s]" the employer presents. Id. (quoting Trans-State Dredging, 731 F.2d at 201). Demonstrating a single job opening is not enough. Id. Once the employer has presented a range of appropriate jobs, however, "the employer need not contact prospective employers to inform them of the qualifications and limitations of the claimant and to determine if they would in fact consider hiring the candidate for their position." Universal Mar., 126 F.3d at 264. Nor must the employer "contact the prospective employers in his survey to obtain their specific job requirements before determining whether the claimant would be qualified for such work." Id. Rather, if the employer demonstrates "the availability of specific jobs in a local market," he may rely "on standard occupational descriptions to fill out the qualifications for performing such jobs." Id. at 265.

Marine relied on the physical restrictions of which it was aware to present a range of suitable positions for Fifer. Prior to the hearing, Dr. Franchetti never indicated a standing restriction or a rest break requirement; to the contrary, after giving his revised October 2009 restrictions, he indicated that "cooking, deliveries and takeout," as well as managerial work, would comport with Fifer's physical restrictions. J.A. 390. Marine relied on the restrictions it knew of to prepare labor market studies, updating those reports as it became aware of revised restrictions.

Marine cannot be faulted for failing to account for restrictions which were unannounced prior to the hearing, a conclusion underscored by the ALJ's unfounded findings with respect to Fifer's medication-related restrictions. While the record corroborated the fact that Fifer took medication to manage his pain, neither his nor his treating physician's testimony supports the conclusion that Fifer's medication interfered with his ability to obtain employment. Indeed, as discussed above, nothing in the record indicated that security guards must undergo drug tests to qualify for employment. Faulting Marine for failing to address unfounded restrictions turns the employer's showing of suitable alternative employment into a moving target.

Moreover, the ALJ overstated Marine's burden of presenting suitable alternative employment. The third labor study, at least, described with requisite specificity the responsibilities of a restaurant manager or assistant manager using the DOT. We have expressly approved the use of the DOT's "standard occupational descriptions to fill out the qualifications" of suitable alternative employment in LHWCA cases. Universal Mar., 126 F.3d at 265.
In Universal Maritime, we explained that we sanction the use of the DOT's occupational descriptions because "the claimant is able to correct any overbreadth in a survey by demonstrating the failure of his good faith effort to secure employment" once the burden shifts back to the employee. Id. at 264-65. Therefore, the ALJ's rejection of the third labor market study for failing to describe "the specific duties of the[] positions" demands more than we require. J.A. 33.

Further, Marine produced at least four alternative positions which the ALJ recognized would "accommodate [Fifer's] lifting restrictions." J.A. 33. Although "the employer need not contact prospective employers to inform them of the qualifications and limitations of the claimant," Universal Mar., 126 F.3d at 264, Sappington communicated Fifer's "physical limitations as [he] understood them" to the potential employers in order to determine whether the jobs were realistically available to Fifer, J.A. 168.

Because Dr. Franchetti's lifting and sitting restrictions were the only restrictions of which Marine was aware prior to the hearing, and because Marine presented several suitable positions which the ALJ found comported with those restrictions, we conclude that the ALJ erred in finding that Marine failed to meet its burden under the Act. Since Marine demonstrated the availability of suitable alternative employment which Fifer is capable of performing, the burden should have shifted to Fifer to prove he could not obtain more lucrative employment despite his diligent effort. We therefore vacate the final Decision and Order of the BRB, and remand this matter for further proceedings consistent with this opinion.

III.

For the foregoing reasons, Marine's petition for review is granted, the Decision and Order of the BRB is vacated, and the claim is remanded for further proceedings.

VACATED AND REMANDED
Attrition in an exercise intervention: a comparison of early and later dropouts.

To identify reasons for dropout and factors that may predict dropout from an exercise intervention aimed at improving physical function in frail older persons. An 18-month randomized controlled intervention in a community setting. The intervention comprised 2 groups: class-based and self-paced exercise. 155 community-dwelling older persons, mean age 77.4, with mildly to moderately compromised mobility. The primary outcome measure was dropout. Dropouts were grouped as: D0, dropout between baseline and 3-month assessment, and D3, dropout after 3-month assessment. Measurements of demographics, health, and physical performance included self-rated health, SF-36, disease burden, adverse events, PPT-8, MacArthur battery, 6-minute walk, and gait velocity. There were 56 dropouts (36%), 31 in the first 3 months. Compared with retained subjects (R), the D0 group had greater disease burden (P = .011), worse self-perceived physical health (P = .014), slower usual gait speed (P = .001), and walked a shorter distance over 6 minutes (P<.001). No differences were found between R and D3. Multinomial logistic regression showed 6-minute walk (P<.001) and usual gait velocity (P<.001) were the strongest independent predictors of dropout. Controlling for all other variables, adverse events after randomization and 6-minute walk distance were the strongest independent predictors of dropout, and self-paced exercise assignment increased the risk of dropout. We observed baseline differences between early dropouts and retained subjects in disease burden, physical function, and endurance, suggesting that these factors at baseline may predict dropout. Improved understanding of factors that lead to and predict dropout could allow researchers to identify subjects at risk of dropout before randomization. Assigning targeted retention techniques in accordance with these factors could result in decreased attrition in future studies. Therefore, the results of selective attrition of frailer subjects, such as decreased heterogeneity, restricted generalizability of study findings, and limited understanding of exercise effects in this population, would be avoided.
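A model of the kind reported above can be sketched with standard statistical tooling. The snippet below is purely illustrative: the predictors, outcome coding, and data are hypothetical stand-ins, and statsmodels' MNLogit is assumed as the estimator; this is not the study's dataset or analysis code.

# Sketch of a multinomial dropout model: 0 = retained, 1 = early dropout (D0), 2 = later dropout (D3).
# All values are randomly generated placeholders, NOT the study's measurements.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 155  # same headcount as the study; the data themselves are synthetic

walk_6min = rng.normal(350, 80, n)   # hypothetical 6-minute walk distance (m)
gait_vel = rng.normal(1.0, 0.25, n)  # hypothetical usual gait velocity (m/s)
outcome = rng.integers(0, 3, n)      # hypothetical group labels

X = sm.add_constant(np.column_stack([walk_6min, gait_vel]))
result = sm.MNLogit(outcome, X).fit(disp=False)
print(result.summary())  # one coefficient set per non-reference outcome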
Q: Can't find asp.net mvc 6 Template Application's Database

I created a new asp.net 5 web application (mvc). In config.json, this is the connection string:

"Server=(localdb)\\mssqllocaldb;Database=aspnet5-sampleProject-be728759-6d45-4ac9-bb6c-f55ac4aee69e;Trusted_Connection=True;MultipleActiveResultSets=true"

But when I connect to (localdb)\\mssqllocaldb from SQL Server Management Studio, I don't see this database: aspnet5-sampleProject-be728759-6d45-4ac9-bb6c-f55ac4aee69e

I'm really stuck! How can I connect to this database?

A: The solution was easy. I created a database and added its connection string in config.json. The migration ran automatically and the membership tables were added to the database.
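A quick way to cross-check the answer is to look directly at what the LocalDB instance contains. The sketch below is a side-check, not part of the original thread: it assumes Python with pyodbc and the "ODBC Driver 17 for SQL Server" driver installed (adjust the driver name to whatever is on your machine).

# List every database on the (localdb)\mssqllocaldb instance.
import pyodbc

conn = pyodbc.connect(
    r"Driver={ODBC Driver 17 for SQL Server};"
    r"Server=(localdb)\mssqllocaldb;"
    "Trusted_Connection=yes;"
)
for (name,) in conn.execute("SELECT name FROM sys.databases ORDER BY name"):
    print(name)  # the aspnet5-... database should appear once the migration has run

If the database is absent from this list, the migration simply has not run yet; running the application (or creating the database manually, as the answer describes) typically makes it appear.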
Q: An application of the Schwarz inequality.

In the proof of the Chung–Erdős inequality: let $X_k=1_{A_k}$; then:
$$\left(\mathbb E\left[\sum_{k=1}^nX_k\right]\right)^2\le\mathbb P\left(\sum_{k=1}^nX_k>0\right)\,\mathbb E\left[\left(\sum_{k=1}^nX_k\right)^2\right]$$
The textbook said this inequality is obtained by the Schwarz inequality. But I have tried:
$$\left(\mathbb E\left[\sum_{k=1}^nX_k\right]\right)^2\le\left(\sum_{k=1}^n1^2\right)\left(\sum_{k=1}^n(\mathbb EX_k)^2\right)\;??$$
It seems that I'm going in the wrong direction. What should I do next?

A: Let $X=\mathbf{1}_{A_1\cup\ldots\cup A_n}$ and $\langle a,b\rangle=\mathbb{E}[ab]$ for random variables $a,b$. Since each $X_k$ vanishes outside $A_1\cup\ldots\cup A_n$, we have $X\cdot\sum_{k=1}^nX_k=\sum_{k=1}^nX_k$. Then by the Schwarz inequality
$$\mathbb{E}\left[\sum_{k=1}^nX_k\right]^2=\left\langle X,\sum_{k=1}^nX_k\right\rangle^2\leq\langle X,X\rangle\left\langle\sum_{k=1}^nX_k,\sum_{k=1}^nX_k\right\rangle=\mathbb{E}[X]\cdot\mathbb{E}\left[\left(\sum_{k=1}^nX_k\right)^2\right]$$
Note that $\langle X,X\rangle=\mathbb{E}[X^2]=\mathbb{E}[X]=\mathbb{P}[A_1\cup\ldots\cup A_n]$.
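The bound is also easy to sanity-check numerically. Below is a small Monte Carlo check with hypothetical event probabilities (independent events chosen purely for convenience); it illustrates the inequality but of course proves nothing.

# Monte Carlo check of E[S]^2 <= P(S > 0) * E[S^2], where S = X_1 + ... + X_n.
import random

random.seed(1)
trials = 100_000
probs = [0.1, 0.3, 0.5, 0.2]  # hypothetical P(A_k); any values in [0, 1] work

sum_s = sum_s2 = pos = 0
for _ in range(trials):
    s = sum(random.random() < p for p in probs)  # realize S for one trial
    sum_s += s
    sum_s2 += s * s
    pos += s > 0

lhs = (sum_s / trials) ** 2
rhs = (pos / trials) * (sum_s2 / trials)
print(lhs, "<=", rhs, ":", lhs <= rhs)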
A rapid (4–6-hour) urine-culture system for direct identification and direct antimicrobial susceptibility testing.

This study evaluates a new direct rapid system for urine cultures, including detection and quantitation of positive specimens by Gram stain, direct identification by 4–6-hour incubation of sediment with reagent strips, and antibiotic susceptibility testing by direct (3–4-hour) disk-elution methods. Of 987 routine urine specimens, 121 had significant (≥10^5 colony-forming units/ml) gram-negative bacilluria, of which 89% were detected by the Gram stain. Direct rapid identification was correct in 94%. Results of direct disk-elution antimicrobial tests showed overall agreement with results of standard disk diffusion in 93% of tests, and major discrepancies in 4%. For urine specimens with gram-negative bacilluria, this system permitted detection, quantitation, identification, and antimicrobial susceptibility testing in four to six hours with reasonable, though not complete, accuracy.
Optical discs, which can store high-definition digital content and distribute it at low cost, are used widely. For example, a Blu-ray disc (BD) has a capacity of 25 GB per layer. This means that a dual-layer BD can store up to about 4.5 hours of high-definition video at digital broadcasting quality. Such BDs have recently been used to distribute high-definition video. When compared with the DVD, a single BD can store the content of up to ten DVDs; when compared with the CD, the content of as many as 75 CDs. The content of a single BD can therefore be much more valuable than the content of a CD or a DVD. Sound distribution of content in the market using optical discs would thus be undermined by unauthorized copies of content stored on a BD, or by pirated discs manufactured and shipped to the market by illegal manufacturers. As optical discs gain capacity, expectations for copyright protection techniques for such discs rise accordingly.

After the emergence of DVDs, the copyright protection techniques for optical discs have mainly used encryption of content. Content is encrypted before being recorded onto an optical disc to prevent unauthorized copying by malicious users. However, the encryption is effective only while the encryption key remains secret. Once the encryption key is leaked, the copyright protection based on encryption is defeated. Moreover, the encrypted content is recorded on the optical disc medium in the form of concave or convex marks, so the content is easily copied onto a different disc by forming the recording marks on that disc from readout signals for the content.

Instead of encrypting the content, another conventional copyright protection technique uses sub information recorded onto a disc. The sub information is recorded onto the disc in a manner that cannot be copied using readout signals. With one such conventional technique, for example, sub information is recorded by slightly shifting the edge positions of recording marks in a regular manner in the tangential direction (see, for example, Patent Citations 1 to 5). With this technique of shifting the edges of recording marks in the tangential direction, the sub information is recorded as jitter of the readout signals. The jitter elements are eliminated from the readout signals for the content, which are extracted in synchronization with clock signals. This prevents unauthorized copying of the sub information using the readout signals.

With another such conventional technique, sub information is recorded by slightly shifting recording marks in the radial direction (see, for example, Patent Citation 6). With this technique, the readout signals for the content do not contain information about the shifts of the recording marks in the radial direction, so this technique also prevents unauthorized copying of the content using the readout signals. With yet another such conventional technique, synchronization code areas that are inserted in fixed cycles of recording marks are replaced by predetermined patterns (see, for example, Patent Citation 7). With this technique, the readout signals for the content do not contain the synchronization code signals, which likewise prevents unauthorized copying of the content using the readout signals.
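The edge-shift (jitter) technique described above can be pictured with a toy simulation: hide one sub-information bit as tiny signed shifts of nominal mark-edge positions and recover it by correlating against the same pseudo-random sequence. This is a loose illustration of the principle only, not a reproduction of any patented scheme; the shift size, chip sequence, and detection rule are arbitrary choices.

# Toy model of sub information hidden as edge jitter: each nominal edge is nudged
# by +delta or -delta depending on the bit combined with a pseudo-random chip sequence.
import random

random.seed(42)
edges = [100.0 * k for k in range(1, 33)]        # nominal edge positions (arbitrary units)
delta = 0.3                                      # tiny shift, well below one channel clock
chips = [random.choice([-1, 1]) for _ in edges]  # shared pseudo-random sequence

def embed(bit):
    sign = 1 if bit else -1
    return [e + sign * c * delta for e, c in zip(edges, chips)]

def detect(recorded):
    # Correlate observed shifts with the chip sequence; the sign of the sum is the bit.
    corr = sum((r - e) * c for r, e, c in zip(recorded, edges, chips))
    return corr > 0

print(detect(embed(1)), detect(embed(0)))  # True False

A copy made from clock-synchronized readout signals would quantize these sub-clock shifts away, which is exactly why the sub information does not survive copying.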
With the above conventional techniques, the sub information is recorded by shifting the recording marks in the tangential or radial direction, or by altering the synchronization codes, which are not the information representing the content. When the sub information is recorded with any of these techniques, the readout signals for the content do not contain the sub information. These techniques therefore prevent unauthorized copying of the sub information. For discs manufactured by duplicating a master, such as ROM discs, the sub information, which is recorded by shifting the recording marks or altering the pattern of the recording marks, needs to be recorded onto the master. In this case, the sub information is unique to the master.

With another conventional technique, an optical disc substrate is first formed by duplicating a master, a reflective film is then formed on the optical disc substrate by vapor deposition, a protective layer is formed on the reflective film to complete the disc, and the sub information is then recorded onto the completed disc by illuminating the disc at positions at predetermined distances from the edges of the recording marks with laser light to locally change the reflectivity of the information recording surface (see, for example, Patent Citation 8). When the sub information is recorded onto the disc with this technique of changing the reflectivity of the recording surface, the readout signals for the content do not contain the sub information. As a result, this technique prevents unauthorized copying of the content using the readout signals. With this conventional technique, the sub information is recorded onto the disc after the disc is completed. In this case, unlike the previously mentioned technique, the sub information is not recorded onto the master. The sub information can thus be unique to each disc.

Patent Citation 1: Japanese Unexamined Patent Publication No. H11-126426
Patent Citation 2: Japanese Unexamined Patent Publication No. 2001-357533
Patent Citation 3: Japanese Unexamined Patent Publication No. 2002-203369
Patent Citation 4: International Publication No. 2004/036560
Patent Citation 5: Japanese Unexamined Patent Publication No. 2005-216380
Patent Citation 6: Japanese Unexamined Patent Publication No. 2000-195049
Patent Citation 7: Japanese Unexamined Patent Publication No. 2000-113589
Patent Citation 8: Japanese Unexamined Patent Publication No. H11-191218
Guacamelee! 2 (PS4) REVIEW – Luch-Adore This Game

With all the shouting about walls and frozen water, it can be easy to forget how beautiful a country Mexico really is. From its intricate mythologies and cultures all the way up to its music and people, there is so much going on south of the border. It's nuts, and though it is true that not everything there is all sunshine and roses, Guacamelee! 2 does an outstanding job of helping you forget that and appreciate the best things the country has to offer with a dozen straight hours of Mexicana from Concentrate.

With the recent spike in Metroidvania games, it can feel really special to finally play a good one. A good Metroidvania has an interesting, fully explorable world, tons of hidden nooks and reward-filled crannies, a new ability around every corner and a strong endgame that'll test your mettle and may even force more exploration and backtracking. Guacamelee! 2 has all of this and more, featuring the return of two dimensions to explore, Life and Death, as well as the introduction of timeline jumping, which leads to payoffs in both story and gameplay.

On top of the glorious exploration, Guacamelee! 2 continues to stand out from the rest by packing in complex platforming challenges and offering a deep combat system for when it's time to get your hands dirty. Grappling is still as visceral as ever, especially after upgrading it, and the combo lab is much more forgiving than the first game, offering more viable combos that will help when your screen is packed with Alebrijes. The Chicken even has a viable moveset now with grapples, special moves and more. It's all very cohesive and is one of the best instances of genre-blending I've seen from an indie game to date. Top it off with multiple costumes and playable characters, secrets galore and an alternate ending and you've got gold.

The Mexiverse (as they call it) is realized beautifully. Dust billows through the dead agave fields, gold monuments to chickens and dead heroes shine realistically, and cities feel alive and bustling. Looking at the first game and Severed, it's clear to see that the artists at Drinkbox know exactly what they're doing with this distinctive art style, and it's only gotten better over years of obvious tweaking. Lighting effects similar to those used in Rayman Legends give the characters and objects new depth. Shadows are cast upon far-off backdrops, and metallic surfaces like the Aztec-inspired temples give off an alluring gold sheen. The watery surfaces even have reflections, and enemies spatter purple blood all over the walls upon death to add great visual flair. This is a culmination of all their work put together and it's totally paid off for them.

I would be remiss to talk so highly of the art and not bring up the ingenious writing. Jokes are thrown about with reckless abandon as the plot is fired rapidly through the fourth wall, and I honestly wouldn't have it any other way. It's been 7 years since Juan defeated Calaca in 2013's Guacamelee!, and he's fat, well-loved, and retired with his family, but he's grown complacent. The dude can't even smash a barrel anymore. When it's revealed that there are millions of timelines where Juan failed to defeat Calaca, leading to another Luchador taking over and summoning the evil power of the Sacred Guacamole, the plot launches you straight forward into an adventure spanning time and space. It's 10+ hours of puns, memorable characters, video game references, betrayal, and overall wacky fun.
Even the combo meter has witty one-liners that nearly took me the entire game to notice. Guacamelee! 2 is so much funnier than the first game, and at some points even had me nearly out of my seat onto the floor with laughter. If you're having a bad day, this is the perfect game to get you through it. And yes, the cringey memes from the first game are totally gone. Totally. They're certainly not referenced in the game as a hidden joke area at all. Nope. Of course not.

The music adds another layer of authenticity to the entire experience, giving the over-the-top Mexiverse life. Despite not taking itself seriously with its writing, the game is dead set on providing a genuine mariachi-styled soundtrack that wouldn't feel out of place at a Quinceañera. Towns have a hearty trumpet-laced soundtrack, while the wilder areas have quieter horns to make room for a strong acoustic presence that's fast and rhythmic. It's all very beautiful and I can guarantee it will be stuck in your head for days.

Guacamelee! captured my heart back on the PS3, and even though I came into this game expecting to love it, it still shattered my expectations. The best thing a sequel can do is give you more of the first, but better, and DrinkBox totally delivers in that respect. The plot is engaging and hilarious, consistently egging you on to finish the game. The upgrade to the visuals is easily noticeable. I seriously still can't get over how good Guacamelee! 2 looks, especially in motion. New characters like El Muñeco are crammed with personality, and returning characters like Flameface now have more to say after making it to the sequel (and being fully aware of it). New additions to combat like enemy variety and a robust skill tree, as well as a much larger focus on chicken combat, made encounters feel fresh again. Nearly every aspect surpasses the original and I can't wait to go back through in local co-op and share the experience with friends.

Verdict

Guacamelee! 2 is one of the best Metroidvanias on the market. The art style is fantastic. The writing is hilarious and will leave you in stitches on a whim. The world is diverse and chock-full of culture, life, and collectibles. And the game's unique focus on combat and precision platforming sets it apart from the pack. It's the kind of game you lose track of time while playing, and when it's over you'll just want more.

Microtransactions: None
Julian Assange has been inside for a year

Tuesday marked Julian Assange's 365th day at the Ecuadorian embassy in Knightsbridge, London. Today, June 19, is the anniversary of his arrival. Uninterested in facing U.S. justice, Assange said he's prepared to spend five years living there. If he goes out for a walk, he'll be extradited to Sweden to answer rape accusations—after which he has no promise from Sweden to deny further extradition efforts to America, where a grand jury investigation into WikiLeaks awaits.

This also means that London's Metropolitan Police have been devoting their resources to keeping tabs on Assange for a year. Yesterday, a spokesperson explained the updated costs of guarding the embassy over the phone: "From July 2012 through May 2013, the full cost has been £3.8 million ($5,963,340)," he said. "£700,000 ($1,099,560) of which are additional, or overtime costs."

Julian has a treadmill, a SAD lamp, and a connection to the Internet, through which he's been publishing small leaks and conducting interviews. The indoor lifestyle has taken its toll on Julian, leading to a chronic lung condition he contracted last fall. "A sad occasion that Julian could not follow me out the door," filmmaker Oliver Stone tweeted after he visited Assange earlier this year. "He lives in a tiny room with great modesty and discipline..." Stone is one of several public figures who have rallied behind Assange, so his support was little surprise.

Of course, it's impossible to discuss Assange's solitary year at the embassy without considering the confinement of the man whose secrets he helped leak. It's a sort of absurdist parallel narrative to the trial of Pfc. Bradley Manning. The two figures are inextricably linked, and together, their saga reads like Miltonic poetry. Or a blockbuster film.
Q: Laravel & LaravelCollective HTML/Forms Error

I'm building a simple form with a username and password. When I construct a username field like this:

{{ Form::text('username') }}

My page loads without issue. As soon as I want to define a class like this:

{{ Form::text('username', ['class' => 'awesome']) }}

I get an error, like so:

ErrorException in HtmlBuilder.php line 65: htmlentities() expects parameter 1 to be string, array given

Haven't been able to find any info regarding this error online. These examples are taken STRAIGHT from the LaravelCollective documentation. Any ideas? Thanks!

A: You should pass the class in the third parameter, like so:

{{ Form::text('username', null, ['class' => 'awesome']) }}

or:

{{ Form::text('username', '', ['class' => 'awesome']) }}

The second parameter is the value field.
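The mistake here is purely positional: the attribute array lands in the slot the helper reserves for the field's value, which is why htmlentities() receives an array. A rough analogy in Python (a hypothetical stand-in function, not Laravel's actual implementation) makes the slot mix-up visible:

# Hypothetical stand-in mirroring Form::text(name, value, attributes).
def text(name, value=None, attributes=None):
    attrs = attributes or {}
    cls = ' class="{}"'.format(attrs["class"]) if "class" in attrs else ""
    return '<input type="text" name="{}" value="{}"{}>'.format(name, value or "", cls)

print(text("username", {"class": "awesome"}))        # wrong: the dict lands in `value`
print(text("username", None, {"class": "awesome"}))  # right: the dict lands in `attributes`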
Validation study for development of the Japan NBI Expert Team classification of colorectal lesions.

The Japan narrow-band imaging (NBI) Expert Team (JNET) was organized to unify four previous magnifying NBI classifications (the Sano, Hiroshima, Showa, and Jikei classifications). The JNET working group created criteria (referred to as the NBI scale) for evaluation of vessel pattern (VP) and surface pattern (SP). We conducted a multicenter validation study of the NBI scale to develop the JNET classification of colorectal lesions. Twenty-five expert JNET colonoscopists read 100 still NBI images with and without magnification on the web to evaluate the NBI findings and the necessity of each criterion for the final diagnosis. Surface pattern in magnifying NBI images was necessary for diagnosis of polyps in more than 60% of cases, whereas VP was required in around 90%. Univariate/multivariate analysis of candidate findings in the NBI scale identified three for type 2B (variable caliber of vessels, irregular distribution of vessels, and irregular or obscure surface pattern), and three for type 3 (loose vessel area, interruption of thick vessel, and amorphous areas of surface pattern). Evaluation of the diagnostic performance for these three findings in combination showed that the sensitivity for types 2B and 3 was highest (44.9% and 54.7%, respectively), and that the specificity for type 3 was acceptable (97.4%) when any one of the three findings was evident. We found that the macroscopic type (polypoid or non-polypoid) had a minor influence on the key diagnostic performance for types 2B and 3. Based on the present data, we reached a consensus for developing the JNET classification.
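For readers who want the arithmetic behind those percentages, sensitivity and specificity are simple ratios over a confusion table. The counts below are hypothetical, chosen only so the ratios land on the abstract's type 3 figures; the study's actual denominators are not given here.

# Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP).
def sens_spec(tp, fn, tn, fp):
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts reproducing the reported type 3 performance:
sensitivity, specificity = sens_spec(tp=41, fn=34, tn=222, fp=6)
print("sensitivity {:.1%}, specificity {:.1%}".format(sensitivity, specificity))
# -> sensitivity 54.7%, specificity 97.4%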
Our yoga teacher training is a well-rounded yoga teacher training course that gives the student the tools not only to teach yoga, but also the practical knowledge of how to turn your passion for yoga into a successful new career path.

I was born in 1960. My father was a Sufi, and a direct descendant of Mohamed through his daughter Fatimah. My mother was Hungarian, from a long line of Hungarian witches. My grandmother was Catholic, but still used her powers to help people.

An unforgettable holiday experience that is totally unique at our luxury retreat on the charmingly beautiful dolphin island of Losinj, in the Croatian Adriatic sea. The program includes: Asana class 2x a day, Pranayama, Meditation, life coaching.
Q: Shiny + ggplot2 - plotOutput - zoom - floating geom_text position

I have a plotOutput. The user selects a subarea, then double clicks, then the range of the plot (via a call to ggplot's coord_cartesian) adapts so that the graph is now zoomed on the subarea. It works ok: going from img1 to img2.

The issue is that the position of the geom_text labels, because it is currently specified in absolute terms (x=Score - .75), does not adapt to the change in scale. The result is a messy graph (img2).

I have tried replacing x=Score - .15 with Score-((ranges$x[2]-ranges$x[1])*.2); the latter expression is a double, the value of which depends on the current level of zoom. But R does not like it when I make that replacement (here is the error I get):

Listening on http://127.0.0.1:7310
Warning: Error in : Aesthetics must be either length 1 or the same as the data (4): x, label, y
Stack trace (innermost first):
    110: check_aesthetics
    109: f
    108: l$compute_aesthetics
    107: f
    106: by_layer
    105: ggplot2::ggplot_build
    104: print.ggplot
    103: print
     92: <reactive:plotObj>
     81: plotObj
     80: origRenderFunc
     79: output$XassetOverview
      4: <Anonymous>
      3: do.call
      2: print.shiny.appobj
      1: print

Img1:
Img2:

Full code (problematic line commented out):

server = function (input, output){
  # store range in a reactiveValues pair
  ranges <- reactiveValues(x = NULL, y = NULL)

  # generate the data
  XassetOverviewData <- reactive({
    dataCrossAsset <- data.frame(c("point1", "point2", "point3"),
                                 c(50, 33, 45),
                                 c(49, 50, 53))
    dataCrossAsset <- setNames(dataCrossAsset, c("Name", "Correlation", "Score"))
    return(dataCrossAsset)
  })

  # generate the plot
  output$XassetOverview <- renderPlot({
    ggplot(XassetOverviewData(), aes(x = Score, y = Correlation)) +
      geom_point(size = 5) +
      coord_cartesian(xlim = ranges$x, ylim = ranges$y) +
      geom_text(aes(x = Score - .15, label = Name), size = 3)
    # solution... causing a bug:
    # geom_text(aes(x = Score - ((ranges$x[2]-ranges$x[1])*.2), label = Name), size = 2)
  })

  # observeEvent
  observeEvent(input$plot1_dblclick, {
    brush <- input$plot1_brush
    if (!is.null(brush)) {
      ranges$x <- c(brush$xmin, brush$xmax)
      ranges$y <- c(brush$ymin, brush$ymax)
    } else {
      ranges$x <- NULL
      ranges$y <- NULL
    }
    adjustment <- ((ranges$x[2]-ranges$x[1])*.2)
    cat(adjustment, file = stderr())
  })
}

ui = basicPage(plotOutput(click = "plot_click",
                          outputId = "XassetOverview",
                          dblclick = "plot1_dblclick",
                          brush = brushOpts(id = "plot1_brush", resetOnNew = TRUE)))

shinyApp(server = server, ui = ui)

A: The reason is that initially ranges$x is null, so you pass a null to geom_text. You should make a simple check whether a double click has already occurred by checking the length of ranges$x: if(length(ranges$x)).
server = function (input, output){
  # store range in a reactiveValues pair
  ranges <- reactiveValues(x = NULL, y = NULL)

  # generate the data
  XassetOverviewData <- reactive({
    dataCrossAsset <- data.frame(c("point1", "point2", "point3"),
                                 c(50, 33, 45),
                                 c(49, 50, 53))
    dataCrossAsset <- setNames(dataCrossAsset, c("Name", "Correlation", "Score"))
    return(dataCrossAsset)
  })

  # generate the plot
  output$XassetOverview <- renderPlot({
    # build the base plot first; the label layer is added below depending on zoom state
    plot <- ggplot(XassetOverviewData(), aes(x = Score, y = Correlation)) +
      geom_point(size = 5) +
      coord_cartesian(xlim = ranges$x, ylim = ranges$y)

    if (length(ranges$x)) {
      # zoomed in: scale the label offset to the current x-range
      plot <- plot + geom_text(aes(x = Score - ((ranges$x[2] - ranges$x[1]) * .1),
                                   label = Name), size = 3)
    } else {
      # not zoomed: use the fixed offset
      plot <- plot + geom_text(aes(x = Score - .15, label = Name), size = 3)
    }
    plot
  })

  # observeEvent
  observeEvent(input$plot1_dblclick, {
    brush <- input$plot1_brush
    if (!is.null(brush)) {
      ranges$x <- c(brush$xmin, brush$xmax)
      ranges$y <- c(brush$ymin, brush$ymax)
    } else {
      ranges$x <- NULL
      ranges$y <- NULL
    }
    adjustment <- ((ranges$x[2] - ranges$x[1]) * .2)
    cat(adjustment, file = stderr())
  })
}

ui = basicPage(plotOutput(click = "plot_click",
                          outputId = "XassetOverview",
                          dblclick = "plot1_dblclick",
                          brush = brushOpts(id = "plot1_brush", resetOnNew = TRUE)))

shinyApp(server = server, ui = ui)
Q: WebClient c# Send post with \n without making new line

I try to send a json that contains a \n in it, but when I send it, WebClient makes a line break where the \n is. I want to send the example:

using (WebClient wc = new WebClient())
{
    wc.Headers[HttpRequestHeader.Accept] = "*/*";
    wc.Headers[HttpRequestHeader.AcceptLanguage] = "en-US,en;q=0.5";
    wc.Headers[HttpRequestHeader.AcceptEncoding] = "deflate";
    wc.Headers[HttpRequestHeader.ContentType] = "application/json";
    ServicePointManager.Expect100Continue = false;
    string textToSend = "This is a Test\n This is a Test2";
    string sendString = textToSend;
    byte[] responsebytes = wc.UploadData("https://localhost/", "POST", System.Text.Encoding.UTF8.GetBytes(sendString));
    string sret = System.Text.Encoding.UTF8.GetString(responsebytes);
}

Outputs:

This is a Test
This is a Test2

How can I make it output: This is a Test\n This is a Test2 ?

A: Try escaping: pass \\n instead of \n. Is that what you want? Try using Regex.Escape.
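A related point worth noting: if the payload is built by a JSON encoder rather than by hand, a real newline character is emitted as the two-character escape \n, which is usually what "send \n without making a new line" means in practice. A quick illustration (shown in Python purely for the encoding behavior; in C#, any JSON serializer does the equivalent):

import json

text = "This is a Test\nThis is a Test2"  # contains an actual newline character
payload = json.dumps({"message": text})

print(payload)           # {"message": "This is a Test\nThis is a Test2"} on one line, with an escaped \n
print(text.count("\n"))  # 1: the original string still holds a real newline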
Tag: Flannery O'Connor

UPDATE 2016: This has proven to be one of the most popular posts on the blog, which suggests that lots of people enjoy, but perhaps are puzzled by, Flannery O'Connor's short stories. I would be happy to explore more of her stories (I've got a couple of half-written posts that are hanging fire). If you have a particular O'Connor story that excites, interests, or puzzles you, leave a comment at the end of this post and let me know — or you can email me, if you don't want to leave a public comment.

Original post: A recent comment on an old post about Flannery O'Connor raises some questions that I thought I would respond to in a separate post, rather than depositing them in the obscurity of the comm box. Janet Baker left a long comment (you can read it in its entirety there), which says in part:

I'm currently working on the short story Revelation, looking at the text for what it says about Flannery's Catholicism, rather than listening to her pronouncements in non-fiction, like her letters. If you read the story, you will note that it is Mrs. Turpin's virtues that must be burned away before she enters heaven, and that people enter heaven in groups, racial and social. Perhaps you don't read either St. Thomas Aquinas, or Teilhard de Chardin, nor have I extensively, but if you begin to read about it, you'll see that St. Thomas promotes the virtues of which Mrs. Turpin is guilty–generous almsgiving, supporting the Church, helping others regardless of their worthines [sic] of help. It was Teilhard, whom Flannery really loved and read even when it wasn't time for bed, as she did Thomas. Teilhard, on the other hand, supports the idea that we enter heaven in groups and all enter, all, after their individual identities had been burned away. That's why he was a heretic and rejected by the Church, along with all his bogus evolutionary crap, although he influenced the Church deeply, and perhaps mortally.

I just ran across the Facebook page for a television and film production company called Good Country Pictures. This small company is dedicated to bringing the works of Flannery O'Connor and Charles Williams to the screen, and currently is working on producing a TV series based on O'Connor's short stories, and making a film of Williams's novel, All Hallows Eve. Here's how they describe their mission:

Good Country Pictures is dedicated to producing TV and film projects that help their audience rediscover 'mystery and manners.' GCP presently owns the TV and film option rights to most of the works of Flannery O'Connor and Charles Williams. Already underway is a feature film of O'Connor's 'The Violent Bear It Away' and a TV series of her short stories. A film treatment of Charles Williams' 'All Hallows' Eve' (1941) is also in progress.

I've recently written a bit about Flannery O'Connor (there's lots more I'd like to say, when time allows); if you visit Good Country Pictures' Facebook page, you'll find links to various resources online that will help you learn more about both these writers. A number of Flannery O'Connor's works have been adapted for television (not very successfully); they are also the favorite subject of amateur filmmakers — just take a look on YouTube and you'll find plenty of videos made by students, indies, and other O'Connor enthusiasts. By far the best made and best known adaptation is John Huston's feature film of Wise Blood, in which a very young Brad Dourif was brilliantly cast as Hazel Motes (the Criterion edition is available on DVD).
Charles Williams, Inkling & novelist

Those who don't know the works of Charles Williams are missing a treat. Inklings fans will know that Williams was a member of that literary coterie, the only one of the group who did not teach at one of the great English universities. C. S. Lewis was a great admirer. Williams is best known for his metaphysical novels, which are weirdly surreal yet rooted in a profoundly Christian worldview. (Williams also wrote poetry and at least two works of theology.) There's really no way to describe his books adequately; probably the best one to begin with is War in Heaven, which has to do with the Holy Grail, found in an English country church, and the struggle between good and evil forces to possess it. I'm not aware of any screen adaptations of Williams's novels, but they would all be wonderful as films.

I had a friend who used to say, "Sometimes God gives you a sign, sometimes BILLBOARDS!" Flannery O'Connor is famous for saying that her characters were so colorful (critics like to call them "grotesque") because you have to draw large pictures for the blind and shout at the deaf: "He who has ears to hear, let him hear." I'll admit that, fascinated as I was with her work when I first began to read it, I was often puzzled as to what was going on. I remember waking up in the dark hours of the night, years after first reading "A Good Man is Hard to Find," with a sudden understanding of what the Misfit meant when he said, "She would of been a good woman, if it had been somebody there to shoot her every minute of her life."

For anyone similarly puzzled, my advice is to read "Revelation," which probably makes clearer than any of her other stories just what Flannery is up to. (See my analysis of the climactic scene here.) If I'd read that one before I read "A Good Man is Hard to Find," maybe my sleep wouldn't have been disturbed at 3 a.m. years later. Then again, maybe not. Perhaps I had to learn something about the nature of Grace before I could get over being blind and deaf to what O'Connor was going on about. The great thing about her stories is that they fascinate even those who haven't a clue about God or His grace or how it operates in the soul. Such readers will remember her strange characters and puzzle over their behavior, perhaps until one night God bonks them on the head and shouts, "Wake up, dummy!"

And, oh yeah, by request, I've added a little more info to my online profile, in case you're interested. If you'd like to know what some other Catholic bloggers have been doing this week, don't forget to take a look at Sunday Snippets — A Catholic Carnival.

Is it weird to be friends with someone who died years before you ever heard of them? Not if you believe in the Communion of the Saints, I guess. At any rate, since I first read any of her work, way back in my college days, I've thought of Flannery O'Connor as a friend I never got a chance to meet. Since then, I've come to know her better and I'm just sure that in Heaven we will be best buddies. I can imagine us laughing at each other's jokes (dry wit, our specialty) and completing each other's sentences — you know, when we aren't discussing theology or doing imitations of our country cousins. I don't suppose it really is too weird to look forward to great conversations after death, especially with those we never got a chance to meet in this life.
Our local public radio station at Christmastime — or the politically-correct "holiday season" — likes to ask local luminaries who they would invite to their "dream dinner party." The rules of the game are that you can pick anyone, living or dead, to invite, and you are supposed to think about which combination of guests would create the most interesting conversations. (Inevitably, when I listen to these shows I think "yuck, why invite that guy? I could come up with a much better guest list.")

Socrates, you know, when he had been sentenced to death by his fellow Athenians, as punishment for making the local bigwigs and know-it-alls look like a bunch of chumps and thereby setting a bad example for young people, wagged his finger at the jury and said, "I know you guys think you've done something really mean to me by condemning me to death, but I don't see it that way. No one knows exactly what death is like but it is either the Big Sleep that never ends (and who doesn't love a nice, long dreamless sleep?) or it's a chance to have endless conversations with all the wise and interesting people who have died before you." That was Socrates' idea of heaven — one long, interesting conversation among wise people.

Flannery O'Connor cartoon: "Oh, well, I can always be a Ph.D."

Although I hope to meet my friend, Mary Flannery, in Heaven and share some good times (the best!), I've had fun getting to know her through her writing and her friends' accounts of her. Here are some books I can recommend.

Her Works

Collected Works (The Library of America), selected and edited by Flannery's good friend and literary executrix, Sally Fitzgerald. This is the book to get if you want to get up to speed on Flannery O'Connor quickly. It includes all her short stories, both her novels, and a goodly selection of her essays and personal correspondence. If you are unfamiliar with her work, start with the short stories — I recommend "Revelation" and "A Good Man is Hard to Find" as quintessential O'Connor stories, but don't stop there. This is one of those books that I'd want to have if I were stranded on a desert isle.

If you know anything about Flannery O'Connor, you probably know that she suffered from lupus, a disease which eventually killed her at age thirty-nine; it also forced her to give up her independent life and move back to Georgia to live with her mother, with whom she shared a tense, if devoted, relationship. Since she couldn't get out much, she became a prolific correspondent, with friends, strangers, and admirers alike. These letters give a wonderful sense of her personality, which was witty, generous, and self-deprecating.

Biographies of Flannery O'Connor

Between these two books, you'll have almost everything Flannery O'Connor ever wrote that has appeared in print, with the exception of some book reviews she used to write for her diocesan newspaper. But you'll want to know more, which means you'll want to read biographies of her. Be warned, most biographies reveal more about the biographer than the biographee. Here are some that I have read and not absolutely hated.

Gooch obviously is a great admirer of my friend Flannery, but he doesn't quite get her — which was probably also true of the men who actually knew her.
Gooch is very interested in such men (there were only a couple, and O'Connor's relationships with them never really developed into romances), so his discussion of the two or three young men who were close to Flannery adds something that you won't get from her own letters (at least not the ones that Sally Fitzgerald saw fit to publish). Gooch has a tendency to see O'Connor's stories as fictional elaborations of incidents in her real life, which at times seemed to me a bit of a stretch. Flannery would have HATED the suggestion that she wrote her own life into the stories. Read my full review here on Library Thing.

"Taken together, their stories are told as episodes in a recent chapter of American religious history, in which four Catholics of rare sophistication overcame the narrowness of the Church and the suspicions of the culture to achieve a distinctly American Catholic outlook. [In other words, the AmChurch perspective.]

"All of that is true and worth knowing. This book, though, will take a slightly different approach, setting out to tell their four stories as one, albeit one with four points of origin and points of view. It is, or is meant to be, the narrative of a pilgrimage, a journey in which art, life, and religious faith converge; it is a story of readers and writers — of four individuals who glimpsed a way of life in their reading and evoked it in their writing, so as to make their readers yearn to go and do likewise."

Does that make sense to you? It didn't make much sense to me and, when I bought this book, I just read the Flannery bits (and a few of the Walker Percy bits) and skipped Merton and Day altogether, because they weren't what I was interested in. This method worked pretty well to produce a stand-alone bio of Flannery. These four different lives didn't actually intersect in any significant way — i.e., although they were aware of one another and perhaps interested in each other in an academic way, they were not consciously working out any shared agenda, other than being well-known Catholics in the middle of the twentieth century. I may go back and read the Percy, Merton, and Day bits one of these days to see what Elie thought he could make of them, all put together.

Flannery O'Connor self-portrait w/pheasant

It's been a couple of years since I read The Life You Save etc., but I recall that Elie had a tendency to rank his biographees on various hot-button social and political issues, a practice that I find tedious and tendentious. "Where did Flannery O'Connor stand in matters of race?" he asks. "The black characters in O'Connor's fiction are invariably admirable … [y]et at the same time there is the word 'nigger' running through the correspondence." You can tell that Elie did not grow up in the South, or he would know that what is now referred to as "the N word" was used universally in the South before the Civil Rights movement in the '60s, and was not necessarily derogatory. It was culturally neutral, if rather uncouth. (When I was a child in the South, about the time Flannery O'Connor was dying of lupus, I was taught that "colored" was the polite term.) Anyway, why can't Elie just describe Flannery, rather than judging her? Let her life speak for itself.

I'll admit I haven't actually read much of this yet. I bought it a couple of years ago, toward the end of a long, intense bout of Flanneryism, and got distracted before I got too far into it (no fault of Murray's book).
After reading the Gooch and Elie bios, I wanted to read something that gave due, and sympathetic, attention to Flannery's deep Catholic faith — this book is certainly that. Murray apparently tries to show that Flannery, although a very "human" person with her share of sharp edges, nonetheless was deeply spiritual, and was sanctified through her suffering. Murray does not make a plaster saint of her, but she does acknowledge that Flannery became saintly.

If she is declared a saint, then let her be a saint sitting next to Regina [her mother] in the pew at Sacred Heart church, blanching at the St. Patrick's Day decorations. Let her be a saint gazing with equal parts piety and irony at the pilgrims of Lourdes, dreading the moment of bathing in the grotto. Let her be a saint who laughs so loud that books fall from her hands. Let her be a saint from whose pen stampede the wild-eyed Hazel Motes, the lumbering Hulga, the dazed Mrs. Turpin. Let her be a saint in the same way that Thérèse was — in her own "human and terrible greatness."

I'm looking forward to hanging out with Saint Flannery in the Big Conversation of Eternity.

———- UPDATE ———-

Since writing this post, I have downloaded a Kindle sample of a new "spiritual biography" of Flannery, called The Terrible Speed of Mercy: A Spiritual Biography of Flannery O'Connor, by Jonathan Rogers. The sample includes the Introduction, and a page or two of the first chapter. Judging from the introduction, I'd say this looks promising — i.e., I think Rogers "gets" Flannery. I'm not sure exactly how he's going to approach her life, though, because he acknowledges:

No amount of poking around in the external events and facts of her life is going to get at the heart of her. There's no accounting for Flannery O'Connor in those terms. Thankfully we have her letters, which provide windows into an inner life where whole worlds orbited and collided. The outward constraints that O'Connor accepted and ultimately cultivated made room for an interior world as spacious and various as the heavens themselves. Her natural curiosity was harnessed and directed by an astonishing intellectual and spiritual rigor. She read voraciously, from the ancients to contemporary Catholic theologians to periodicals to novels. She once referred to herself as a "hillbilly Thomist." She was joking, but the phrase turns out to be helpful. The raw material of her fiction was the lowest common denominator of American culture, but the sensibility that shaped the hillbilly raw material into art shared more in common with Thomas Aquinas and the other great minds of the Catholic tradition than with any practitioner of American letters, high or low.

I expect I'll wind up buying this one. When I've read it, as well as Lorraine Murray's The Abbess of Andalusia, I'll write a review of them. Watch this space!

This morning I discovered a website called CatholicFiction.net, which offers "news, views, and reviews" on fiction by Catholic writers. The site is sponsored and maintained by Idylls Press, a Catholic publishing concern with an interest in promoting a "new Catholic literary renaissance." The Catholic Fiction site looks like a good place for anyone interested in finding books written from a Catholic perspective (they cover "fiction in every genre, both classic and contemporary ... [as well as] literary biography and criticism") or reading reviews that give a Catholic "take" on fictional works that may or may not have been written by Catholic authors.
They also have a Catholic Fiction Reading List, where you may find authors you may not have read before, or may not have realized were Catholic.

One of my all-time favorite writers, Flannery O'Connor

What makes a "Catholic writer" is a more complicated question than you might think. A number of years ago, I bought a book from Ignatius Press called The Catholic Writer, containing a variety of papers from an academic literary conference sponsored by the Wethersfield Institute. After I got it home, I flipped through to look for a discussion of one of my favorite writers, Flannery O'Connor — but there was none! In the introduction to the volume, the editor explained that they only included writers who wrote on Catholic subjects — i.e., stories about Catholics doing Catholic stuff (presumably attending Mass, praying the rosary, burying statues of St Joseph upside down in their front yards to help sell a house). I thought this was an insane definition of the term "Catholic writer," particularly as it necessarily excluded writers like O'Connor, whose stories are positively incandescent with the light of her Catholic faith.

Fortunately, the Catholic Fiction web site does not embrace this narrow definition — in fact, they cite Flannery O'Connor's definition that Catholic writing is "a Catholic mind looking at anything." (This is precisely the idea I had in mind when I called this blog "A Catholic Reader.") You can read more about their criteria for what constitutes "Catholic fiction" here. They also have a section devoted to "the conversation about Catholic fiction," with links to articles that discuss this topic — "what it is (or isn't), its history, its current state, its usefulness as a literary category."

Flannery O'Connor by John Murphy

It looks interesting. When I've had a chance to peruse it more thoroughly, I'll let you know. Meanwhile, cruise around, check out the Catholic Fiction site, and check back in here to let me know what you think.

UPDATE Sept 2012: The Catholic Literature website has been updated, and is now called CatholicNovel.com. They've got a cleaner, better-organized website, which should make browsing easier. Also, the sponsoring publisher, Idylls Press, is about to debut a new website, too. Give 'em a look, and maybe buy some of their merchandise sporting illustrations of famous authors by John Murphy.

Retired from college teaching, I'm now a freelance editor and writer living in the Dallas-Fort Worth Metroplex. When I'm not working for other writers, I'm busy writing books, novels, and short stories, or blogging about literature and the moral imagination on my blog, A Catholic Reader.
Dry-Erase magnetic glass panel with monthly planner + memo white height: 30.5 cm, length: 60 cm Plan your month's appointments, family dinners, chores and parties on this magnetic glass dry-erase monthly planner + memo board. This silkscreened board sports five weeks and one larger memo area, for a full month of planning and additional space for miscellaneous notes. The shatterproof, tempered glass dry-erase surface is non-ghosting, and offers a sleek alternative to standard whiteboards. Backed by metal, the glass surface serves double duty as a magnet board for photos and cards. Hidden mounts behind the board maintain the clean design. Features Magnetic Shatterproof tempered glass Includes 2 magnets Dry-erase marker + shelf Wall mounting hardware Dimensions Height: 12" (30.5 cm) Length: 23.5" (60 cm) Manufactured after 2013/01/01. The object of the declaration described above is in conformity with DIRECTIVE 2011/65/EU OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL on the restriction of the use of certain hazardous substances in electrical and electronic equipment.
Non-return valves or check valves have long been known for allowing fluid flow in only one direction. Any reversal of the flow in the undesired direction results in stoppage or checking of the flow. This invention relates to a specific body construction and assembly for a check valve for carrying a fluid flow. Typical prior art check valve assemblies are comprised of a flow section, a plurality of flappers, a stop tube for controlling the angle of opening of the flappers, and a plurality of vertical supports, commonly referred to as ears, for supporting the stop tube in its proper position. The stop tube is commonly held in place relative to the vertical supports through the use of an external fixation or retention device, such as a pin, or a weld deposit about an end of the stop tube where it is inserted into and meets the vertical support. In many applications, it is desirable to provide a check valve at one or more spaced locations in a pipe line or conduit for handling fluid flows. The check valve assures against back flow and provides a safety margin in the unlikely event of line breakage. These types of check valves, commonly referred to as insert check valves, preferably do not require an external mechanism for stop tube retention, thereby allowing for insertion of the check valve within a confined space, such as a pipe, or the like. In addition, in many applications, when a check valve is assembled using external methods to retain the stop tube assembly, heat becomes a factor and may result in the shrinking of the stop tube supports, causing critical deformation. Hence, there is a need for a check valve including a check valve stop assembly that, when retained within a plurality of vertical supports of the check valve flow body, provides retention without the use of an external fixation or retention device. In addition, there is a need for a check valve stop assembly that is not susceptible to extreme heat conditions.
A machine translation system including an example database that stores and manages sentences in different languages so that sentences in one language are associated with sentences in the other language (Patent Document 1), an interpretation apparatus including a database that stores a question sentence in a first language and its translated question sentence in a second language as a pair (Patent Document 2), and a method of creating a database associating words in a first language with words in a second language that are translations of each other (Patent Document 3) are all known. These techniques can reduce the time and effort required of a user for translation. [Patent Document 1] Japanese Unexamined Patent Application Publication No. 2004-220266 [Patent Document 2] Japanese Unexamined Patent Application Publication No. 2000-090087 [Patent Document 3] PCT Japanese Translation Patent Publication No. 2004-535617
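All three patent documents describe essentially the same data structure: a store of sentence or word pairs, keyed by the source language and consulted to retrieve a stored translation. A minimal Java sketch of such an example database follows; the class and method names are illustrative only and are not taken from any of the patent documents.

import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Hypothetical illustration of an "example database" that associates
// sentences in a first language with their translations in a second
// language, in the spirit of the patent documents cited above.
public class ExampleDatabase {
    private final Map<String, String> pairs = new HashMap<>();

    // Store a source sentence and its translation as an associated pair.
    public void addPair(String source, String translation) {
        pairs.put(source, translation);
    }

    // Look up a stored translation; empty if the sentence has no pair.
    public Optional<String> lookup(String source) {
        return Optional.ofNullable(pairs.get(source));
    }

    public static void main(String[] args) {
        ExampleDatabase db = new ExampleDatabase();
        db.addPair("Where is the station?", "Eki wa doko desu ka?");
        System.out.println(db.lookup("Where is the station?").orElse("<no match>"));
    }
}

A production system would of course add fuzzy matching and persistent storage rather than an in-memory map; the sketch only shows the pairing idea common to the three documents.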
Stormgade Stormgade (lit. "Storm Street") is a street in central Copenhagen, Denmark. It runs from Frederiksholm Canal to H. C. Andersens Boulevard where it turns into Tietgensgade before continuing along the rear side of Tivoli Gardens and Copenhagen Central Station. In the opposite direction, Storm Bridge connects it to Slotsholmen where traffic may continue across Holmen's Bridge to Holmens Kanal, part of Ring 2, or across Knippel's Bridge to Christianshavn and Amager. The name of the street refers to the Swedish Storm of Copenhagen in 1659. History The area south of Slotsholmen was originally part of the shallow-watered area known as Kalveboderne. The coastline ran approximately where Stormgade runs today. On the night of 10 February 1659, Swedish troops made an assault on Slotsholmen across the ice. After the attack, it was decided to improve the defense of Slotsholmen by extending Copenhagen's Western Rampart into the water. The area between the rampart and the new Frederiksholm Canal was reclaimed and developed into a small new neighbourhood with three short streets: Stormgade, Ny Vestergade and Ny Kongensgade. When the Western Rampart was removed in the late 1870s, Stormgade was extended by one block to Vestre Boulevard (now H. C. Andersens Boulevard). The entire southeast side of the street was demolished in 1931 to make way for an expansion of the National Museum. One of the buildings had stood from 1783 until 1923. Notable buildings and residents The National Museum's façade on Stormgade dates from 1929-1938. Its most distinctive feature is the colonnade with 38 columns in Bornholmian granite which runs along the full length of the building. No. 6 is from 1851 and was listed in 1918. No. 8 was originally two separate buildings dating from some time before 1734, which were merged into one in 1748. The Holstein Mansion was originally built in 1687 but owes its current appearance mainly to an expansion carried out by Jacob Fortling in 1756. It was home to the Natural History Museum between 1827 and 1871. Det Harboeske Enkefruekloster (No. 14) was rebuilt by Elias David Häusser in 1741 but owes its current appearance to expansions and alterations carried out by Lauritz de Thurah between 1754 and 1760. The corner building at No. 16 was built for royal pastry-maker Jens Raae in 1791. The dormer window on Stormgade was added in 1811 and the building has undergone several alterations since then. It was listed in 1945. The corner building at No. 18, on the other side of Vester Voldgade, was originally built for Overformynderiet, a financial institution that later moved to a new building in Holmens Kanal. The building was designed by Hans Jørgen Holm and is from 1894. In 2014, it was decided to convert it, together with parts of the neighbouring building at No. 20, into a new home for the Museum of Copenhagen. References Stormgade on indenforvolden.dk External links Category:Streets in Copenhagen
Q: .NET Framework installation using Advanced Installer I added the .NET Framework prerequisite to my installer using Advanced Installer, and set it to framework 4.5. My system already has framework 4.5 installed, but the installer still forces me to install framework 4.5. I want the installer to skip the framework installation and proceed to the main installation if the framework is already installed. How is this to be done? Please help, thanks. A: All .NET Framework prerequisites were updated in Advanced Installer 10.8. More specifically, .NET Framework 4.x cannot be installed on Windows 8/Server 2012 (or later) using the redistributable prerequisite, but you can enable it from Windows Features. Cheers
The Dakar Rally will take place in only one country for the first time in its history next season, with Peru confirmed as the sole host nation for the 2019 edition. It comprises 10 stages and a rest day, all within Peru, starting and ending in the capital city Lima. It will begin on January 6 and finish on January 17. ASO had initially hoped for a route starting in Chile and ending in Ecuador, but was unable to reach an agreement with either country to join Peru. Talks were also held with Bolivia before it also made a late decision to withdraw. "We are going to build more technical and difficult stages because in this type of geography of sand and dunes we cannot develop 400km special stages," rally director Etienne Lavigne told Motorsport.com. "It's too difficult. "We will have at least 70 percent of stages of sand and dunes and that in the history of the Dakar is unique. The last few years we did not have as much percentage of dunes." Lavigne is confident that, despite the lack of countries interested in hosting the Dakar, the beaches, dunes and Peruvian tracks will still attract the best drivers and riders in the discipline. "We know that we will attract the top drivers of this discipline because every year it is the highest level reference event," he continued. "In Peru, the Loeb accident happened, the one with Nani Roma... it's difficult terrain. It's not a tour that we're going to put together." Lavigne also confirmed a replacement for former sporting director Marc Coma has yet to be found. "To find a good person for this role is not easy," he said. "It is a complicated, demanding role that needs availability, a lot of energy, presence in the field work for several months. "It takes a little time to find the ideal person; we cannot find them on the street. But today the priority is to build a tour in Peru of quality and with great sporting interest." Another change for the 2019 Dakar is that competitors in the car and truck classes will be able to rejoin the action in the second week if they retire from the first week, but will have their own classification so as not to interfere with the starting order. This does not apply to bike and quad riders.
Overview Varanasi is renowned for being one of the most spiritual parts of India. Gain local insight into the Hindu cycles of death and rebirth on an evening tour of Varanasi, the perfect cultural introduction for travelers visiting India for the first time. Traveling by air-conditioned vehicle, you’ll visit top Varanasi attractions such as Golden Temple (Kashi Vishwanath), Manikarnika Ghat, and Dasaswamedh Ghat while your guide provides information integral to understanding their spiritual significance.
Zuzana Vojířová Zuzana Vojířová is an opera by Jiří Pauer to the composer's own libretto after the play of the same name by Jan Bor, which is itself based on a romance by František Kubka. The plot concerns the folk tale of nobleman Peter Vok's 30-year-long illicit romance with the miller-knight's daughter Zuzana Vojířová. The opera was written in a Janáček-like idiom and became one of the most successful post-war Czech operas, with 120 performances in Prague alone - long the most performed modern Czech opera. Recording Complete recording 1979. Gabriela Beňačková-Čápová, soprano, Václav Zítek, baritone. Prague Radio Chorus and Prague National Theatre Orchestra. František Vajnar, conductor. 3 LPs with libretto in Czech and English translation. References Category:Czech-language operas Category:Operas based on plays Category:Operas Category:1958 operas
/* -------------------------------------------------------------------
   EmonESP Serial to Emoncms gateway
   -------------------------------------------------------------------
   Adaptation of Chris Howells OpenEVSE ESP Wifi
   by Trystan Lea, Glyn Hudson, OpenEnergyMonitor
   Modified to use with the CircuitSetup.us Split Phase Energy Meter by jdeglavina
   All adaptation GNU General Public License as below.
   -------------------------------------------------------------------
   This file is part of OpenEnergyMonitor.org project.
   EmonESP is free software; you can redistribute it and/or modify it
   under the terms of the GNU General Public License as published by the
   Free Software Foundation; either version 3, or (at your option) any later version.
   EmonESP is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
   without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
   See the GNU General Public License for more details.
   You should have received a copy of the GNU General Public License along with EmonESP;
   see the file COPYING. If not, write to the Free Software Foundation, Inc.,
   59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
*/

#include "emonesp.h"
#include "http.h"

WiFiClientSecure client; // Client for HTTPS TCP connections, used by get_https()
HTTPClient http;         // Client for HTTP TCP connections, used by get_http()

// -------------------------------------------------------------------
// HTTPS SECURE GET Request
// url: N/A
// -------------------------------------------------------------------
String get_https(const char *fingerprint, const char *host, String url, int httpsPort) {
  // Use WiFiClientSecure class to create TCP connections
  if (!client.connect(host, httpsPort)) {
    DBUGS.print(String(host) + ":" + String(httpsPort)); // debug: log the unreachable host and port
    return ("Connection error");
  }
#ifndef ESP32
#warning HTTPS verification not enabled
  if (client.verify(fingerprint, host)) {
#endif
    client.print(String("GET ") + url + " HTTP/1.1\r\n" +
                 "Host: " + host + "\r\n" +
                 "Connection: close\r\n\r\n");
    // Handle wait for reply and timeout
    unsigned long timeout = millis();
    while (client.available() == 0) {
      if (millis() - timeout > 5000) {
        client.stop();
        return ("Client Timeout");
      }
    }
    // Handle message receive
    while (client.available()) {
      String line = client.readStringUntil('\r');
      DBUGS.println(line); // debug: echo each response line
      if (line.startsWith("HTTP/1.1 200 OK")) {
        return ("ok");
      }
    }
#ifndef ESP32
  } else {
    return ("HTTPS fingerprint no match");
  }
#endif
  return ("error " + String(host));
}

// -------------------------------------------------------------------
// HTTP GET Request
// url: N/A
// -------------------------------------------------------------------
String get_http(const char *host, String url) {
  http.begin(String("http://") + host + String(url));
  int httpCode = http.GET();
  if ((httpCode > 0) && (httpCode == HTTP_CODE_OK)) {
    String payload = http.getString();
    DBUGS.println(payload);
    http.end();
    return (payload);
  } else {
    http.end();
    return ("server error: " + String(httpCode));
  }
} // end get_http
Over the past 24 hours, CBC News reported that Elections Nova Scotia is attempting to push recommendations for the penalization of a voter taking a photograph of his/her own ballot. I find the justification for any such penalties to be — to say the least — unconvincing. Voters should be able to take photos of their own ballots — at will. I believe that such a practice helps to ensure the propriety and correctness of the voting process. Call me cynical — I only come from Florida, where I served (from 2004-2012) as chief county/regional elections counsel for scores of Democratic Party candidates — including John Kerry and Barack Obama. Also, I cannot (and, likely, will never) get over what happened in Florida in 2000, nor the legislative and executive-based responses to the nightmare that is Bush v. Gore. In effect, policymakers — both state and federal — made the process even less secure, especially as they resorted to mandating the use of computer-based systems, including optical scan devices and even touchscreen devices. These devices are — as reflected in several studies — insecure and potentially easily hacked. Elections Nova Scotia believes that taking a photo of your marked ballot will allow voters to be coerced into voting a particular way, while presenting proof to whomever is extorting the voter. I find such fears specious, as there is (1) no proof of such coercion occurring, and (2) even in the rare case that it did occur, Elections Nova Scotia would be unable to prevent a voter from presenting such proof. Are we now to ‘pat down’ voters for cellular devices before they enter the voting area? Instead, the core issue should revolve around ensuring the proper counting/recounting of ballots. If this were so, then Elections Nova Scotia would not be supporting the further penalization of voters, but instead supporting the rescinding of any law that prohibits a voter from photographing his/her own ballot card.
# snpExceptions.sql was originally generated by the autoSql program, which also # generated snpExceptions.c and snpExceptions.h. This creates the database representation of # an object which can be loaded and saved from RAM in a fairly # automatic way. #Set of queries to look for snps that appear problematic CREATE TABLE snpExceptions ( exceptionId int unsigned not null, # unique ID for this exception query varchar(255) not null, # SQL string to retrieve bad records num int unsigned not null, # Count of SNPs that fail this condition description varchar(255) not null, # Text string for readability resultPath varchar(255) not null, # path for results file #Indices PRIMARY KEY(exceptionId) );
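Because each row of snpExceptions stores the SQL for one consistency check as data, a maintenance job can iterate over the table, run each stored query, and write the failure count back into the num column. The JDBC sketch below illustrates that pattern; the connection URL, credentials, and the convention that each stored query returns one row per bad record are assumptions, not part of the schema file above.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

// Hypothetical maintenance job: run every stored exception query from
// snpExceptions and record how many SNPs fail each check in `num`.
public class SnpExceptionRunner {
    public static void main(String[] args) throws SQLException {
        String url = "jdbc:mysql://localhost/snpdb"; // assumed connection details
        try (Connection conn = DriverManager.getConnection(url, "user", "pass");
             Statement read = conn.createStatement();
             ResultSet checks = read.executeQuery(
                     "SELECT exceptionId, query FROM snpExceptions");
             PreparedStatement update = conn.prepareStatement(
                     "UPDATE snpExceptions SET num = ? WHERE exceptionId = ?")) {
            while (checks.next()) {
                int id = checks.getInt("exceptionId");
                String badRecordQuery = checks.getString("query");
                int count = 0;
                // Assumed convention: each stored query returns one row per bad record.
                try (Statement run = conn.createStatement();
                     ResultSet bad = run.executeQuery(badRecordQuery)) {
                    while (bad.next()) {
                        count++;
                    }
                }
                update.setInt(1, count);
                update.setInt(2, id);
                update.executeUpdate();
            }
        }
    }
}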
Q: What does ModeShape offer that JackRabbit doesn't? I just familiarized myself with Apache JackRabbit. I've done a little multi-user repository for document management. If anybody has used both of them, could you please answer these questions? Is ModeShape somehow linked to JBoss? I don't have much experience with JBoss AS or any other JBoss tools. I see support for Tomcat, but a lot of JBossy stuff. Documentation says that future releases should have UI integration; is that far in the future? What kind of UI integration would it be? Is there something that uses ModeShape, as is the case with Hippo CMS and JackRabbit? It's a shame that Gatein doesn't use it. How does JackRabbit compare to ModeShape with regard to full-text search, indexing and the overall processing of text content? How about CMIS support? I see an unresolved issue MODE-650. Jackrabbit is supported by OpenCMIS (Apache Chemistry), even for secondary types in the near future. What about support/utility libraries for developer convenience when working with Nodes? I'm interested in any other comparison comments. Thank you. A: I can answer some of your questions. Full disclosure: I'm the founder and project lead for ModeShape. Briefly, ModeShape is a lightweight, embeddable, extensible open source JCR repository implementation that federates and unifies content from multiple systems, including file systems, databases, data grids, other repositories, etc. You can use the JCR API to access the information you already have, or use it like a conventional JCR system. Here are some of the higher-level features of ModeShape: Supports all of the JCR 2.0 required features: repository acquisition; authentication; reading/navigating; query; export; node type discovery; permissions and capability checking Supports most of the JCR 2.0 optional features: writing; import; observation; workspace management; versioning; locking; node type management; same-name siblings; orderable child nodes; shareable nodes; and mix:etag, mix:created and mix:lastModified mixins with autocreated properties. Supports the JCR 1.0 and JCR 2.0 languages (e.g., XPath, JCR-SQL, JCR-SQL2, and JCR-QOM) plus a full-text search language based upon the JCR-SQL2 full-text search expression grammar. Additionally, ModeShape supports some very useful extensions to JCR-SQL2: subqueries in criteria set operations (e.g., "UNION", "INTERSECT", "EXCEPT", each with optional "ALL" clause) limits and offsets duplicate removal (e.g., "SELECT DISTINCT") additional depth, reference and path criteria set and range criteria (e.g., "IN", "NOT IN", and "BETWEEN") arithmetic criteria (e.g., "SCORE(t1) + SCORE(t2)") full outer join and cross joins and more Choose from multiple storage options, including RDBMSes (via Hibernate), data grids (e.g., Infinispan), file systems, or write your own storage connectors as needed. Use the JCR API to access information in existing services, file systems, and repositories. ModeShape connectors project the external information into a JCR repository, potentially federating the information from multiple systems into a single workspace. Write custom connectors to access other systems, too. Upload files and have ModeShape automatically parse and derive structured information representative of what's in those files. This derived information is stored in the repository, where it can be queried and accessed just like any other content. 
ModeShape supports a number of file types out-of-the-box, including: CND, XML, XSD, WSDL, DDL, CSV, ZIP/JAR/EAR/WAR, Java source, Java classfiles, Microsoft Office, image metadata, and Teiid models and VDBs. Writing sequencers for other file types is also very easy. Automated and extensible MIME type detection, with out-of-the-box detection using file extensions and content-based detection using Aperture. Extensible text extraction framework, with out-of-the-box support for Microsoft Office, PDF, HTML, plain text, and XML files using Tika. Simple clustering using JGroups. Embed ModeShape into your own application. RESTful API (requires deployment into an application server). These are just some of the highlights. For details on these and other ModeShape features, please see the ModeShape documentation. Now, here are some specific answers to your numbered questions: ModeShape is hosted at JBoss.org and uses/integrates with other JBoss technology, because we thought it better to reuse the best-of-breed libraries. But ModeShape definitely is not tied to the JBoss Application Server. ModeShape can be used on other application servers in much the same way as other JCR implementations (typically embedded into a web application). Plus, ModeShape can be embedded into any application; it is, after all, just a regular Java library. It even uses SLF4J so that ModeShape log messages can be sent to the application's logging framework. Now, having said that, we do make it easier to deploy ModeShape to a JBoss AS installation with a simple kit: simply unzip, customize the configuration a bit (depending upon your needs), and start your app server. ModeShape will run as a service within the app server, allowing your deployed apps to simply lookup, use and share repositories. ModeShape can even be monitored using the JBoss AS console. I believe you're referring to our plans to develop a repository visualization tool (much less than a fully-fledged CMS system). Work on that has just recently begun, and we'd welcome any insight, requests for functionality, and interest in collaborating with us. I know that Magnolia can be run on top of ModeShape, but I'm not sure if other CMS apps are able to do this. The JBoss Enterprise Data Services (EDS) platform also includes ModeShape and uses it as a metadata repository. The JBoss Business Rules Management System can also use ModeShape as its JCR repository. ModeShape and Jackrabbit both internally use Lucene for full-text search and querying. In that regard, they're pretty similar. Of course, ModeShape's implementation of search and query parsing and execution is different from Jackrabbit's, and was actually written by some of the same folks that implemented the MetaMatrix relationally-oriented integration & federation engine (now part of JBoss EDS). As a result, ModeShape has a separate parser for each of its query languages, but after that all validation, planning, and execution of all queries is done in the same way. We're very proud of the capabilities and performance of our query engine! ModeShape does not have a connector to other CMIS systems, but as you point out that's currently in-work (MODE-650). We'd also like to work with the Apache Chemistry team to make sure the JCR adapter works with ModeShape. We've just not had the time to do so. ModeShape does have a JcrTools utility class that may prove useful. But any utility class written on top of the JCR API should work just fine. Hope that helps! A: Documentation of ModeShape seems better. 
The folks at Jackrabbit provide limited documentation, when compared to other Apache projects. I suppose that if you need fancy (enterprise) features, they want you to pay for it. Also note that you are almost forced to use a SQL database as a backend, because almost all other backends are 'not intended for production use'. Compare to ModeShape, which just comes out and says it: This is in fact the main purpose of ModeShape: provide a JCR implementation that provides access to content stored in many different kinds of systems, including the federation of multiple systems. A ModeShape repository isn't yet another silo of information, but rather it's a JCR view of the information you already have in your environment: files systems, databases, other repositories, services, applications, etc. ModeShape can help you understand the systems and information you already have, through a standard Java API. I'd rather have this clarity than let people search the docs and google for information that doesn't exist.
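One practical footnote to both answers: because ModeShape and Jackrabbit each implement the standard javax.jcr API, query code is largely portable between them. Below is a minimal, hedged sketch of a JCR-SQL2 query; the admin credentials are illustrative assumptions, and how the Repository instance is obtained differs between the two implementations and is omitted here.

import javax.jcr.Node;
import javax.jcr.NodeIterator;
import javax.jcr.Repository;
import javax.jcr.Session;
import javax.jcr.SimpleCredentials;
import javax.jcr.query.Query;
import javax.jcr.query.QueryResult;

// Sketch: run a JCR-SQL2 query against any JCR 2.0 repository
// (ModeShape or Jackrabbit); repository configuration is omitted.
public class JcrSql2Example {
    public static void listPdfFiles(Repository repository) throws Exception {
        Session session = repository.login(
                new SimpleCredentials("admin", "admin".toCharArray()));
        try {
            // JCR-SQL2 is mandatory in JCR 2.0, so this runs on both engines.
            Query query = session.getWorkspace().getQueryManager().createQuery(
                    "SELECT file.* FROM [nt:file] AS file "
                    + "WHERE LOCALNAME(file) LIKE '%.pdf'",
                    Query.JCR_SQL2);
            QueryResult result = query.execute();
            for (NodeIterator it = result.getNodes(); it.hasNext(); ) {
                Node node = it.nextNode();
                System.out.println(node.getPath());
            }
        } finally {
            session.logout();
        }
    }
}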
Overview of Pain Processing {#Sec1} =========================== Organisms need to process incoming sensory information and then respond to the external world. Consequently, pain alters and overlaps with other CNS functions such as those concerned with mood and responses to the outside world. All organisms need to sense their environment and so our peripheral pain receptors evolved from sensors seen in primitive creatures. Organisms need to learn about sensory stimuli and so centrally, the ability of spinal neurones to become sensitized by repeated stimuli is believed to be a part of associative learning. Thus, the ancient origins of pain and its widespread effects on CNS processes are responsible for the challenges of controlling pain and the misery it brings. The future of pain control will involve novel agents and a better use of existing therapies, including steps towards predicting patient responses based on improving our knowledge of pain and its modulation. We are off to a solid start in terms of success in dealing with the challenges since translation from basic science to patients, and vice versa, is becoming more prevalent and connected. Parallel rodent neuronal and human psychophysical studies can inform on peripheral and central mechanisms in experimental pain and so drug development will find an easier and more predictive transition from experimental drugs to phase I studies \[[@CR49], [@CR61]\]. Differentiation of the modulation of on-going and evoked pains in rodent models \[[@CR33]\] has been achieved and this separation has a bearing on responses to analgesics in neuropathic patients \[[@CR18]\]. In this account, we highlight how anti-NGF and anti-CGRP antibodies are reaching the patient, the effect of tapentadol, and the rationale for selective sodium channel blockers, which are currently being tested in patients \[[@CR71]\]. Compliance with Ethics Guidelines {#Sec2} --------------------------------- This article is based on previously conducted studies and does not involve any new studies of human or animal subjects performed by any of the authors. Different Types of Pain {#Sec3} ======================= A key issue is defining the receptors and channels involved in pain transmission and modulation, both of which change following pathophysiological events, such as those that occur in patients with neuropathic and/or inflammatory pain. Low back pain and cancer pain can be a combination of the two, and are thus mixed pains. Indeed, about 40% of cancer patients have neuropathic pains and similar numbers have neuropathic elements to their low back pain; the neuropathic components in pain states can be teased out by questionnaires and assessment of the sensory symptoms \[[@CR13]\]. This is an important issue since treatments aimed at the peripheral pain mechanisms have to distinguish these two main types of pain. Pain from tissue damage (inflammatory pains) will respond to the NSAIDs and steroids, whereas neuropathic pain (resulting from a lesion or disease of sensory nerves) will respond to drugs that target the altered ion channels within the nerves. Thus, peripherally targeted treatments must reflect the type of pain mechanism. We have managed to characterize many of the pain sensors in the body. Nociceptors have a polymodal nature so heat and cold sensors have been found as well as a large number of receptors that respond to chemical stimuli. A family of particular sodium channels, some selective to pain signaling, has been isolated \[[@CR29]\]. 
The peripheral mechanisms of the broad types of pain are very different and so treatments are linked to the pain type. Examples would be the use of the NSAIDs and steroids for the aforementioned inflammatory pains, but the need for drugs acting on ion channels for neuropathic pains, where the lesion or disease of a nerve leads to disordered electrical events. However, on arrival within the central nervous system, the signaling and controlling systems appear to use common mechanisms, so that opioids, ketamine, and agents acting on the monoamine systems have broader spectrums of activity. Furthermore, the underlying mechanisms of some manifestations of pain are more likely to be central than peripheral, and here both fibromyalgia and irritable bowel syndrome are best explained by problems with brain control systems \[[@CR53], [@CR55]\]. The periphery provides the basic information but each patient builds up their own pain experience based on context, memory, emotions, and social/other issues. The outcome is subject to the incoming pain messages being modified and altered by the CNS, both up and down. Thus, we should never be surprised by any disconnect between the extent of peripheral damage and the pain score. Peripheral Events that Generate Nociceptive Pain {#Sec4} ================================================ Many pains start in the periphery where pain sensors are likely to be continually activated when tissue is damaged. Chemicals are released including the prostanoids, bradykinin, CGRP, and ATP, as well as many chemokines. The problem is that, at present, only steroids and cyclooxygenase inhibitors are able to modulate these events, with a ceiling on efficacy since they will only modulate some of the chemical mediators. Hopes for drugs that block the receptors for ATP are high, and here the P2X3 receptor is a key target \[[@CR9]\]. NGF is a key target for inflammatory pains but there were problems with initial therapies and their side effects. Anti-NGF Therapies {#Sec5} ------------------ NGF is a key molecule for the sensitization of primary afferent nociceptors associated with tissue inflammation. It acts via neurotrophic tyrosine kinase receptor A (TrkA), as well as via the p75 neurotrophin receptor (p75^NTR^), and levels of NGF increase in inflamed tissue. The molecule has a number of direct and indirect (through mast cells and autonomic actions) effects that enhance pain signaling. Preclinical data revealed that neutralization of endogenous NGF prevents inflammatory hyperalgesia \[[@CR35], [@CR45], [@CR56], [@CR60]\]. NGF causes acute pain in humans but the NGF-TrkA complexes are also retrogradely transported by sensory fibers to the cell bodies, resulting in a number of genomic actions that increase the sensitivity of pain fibers. In addition to increasing ion channel functions, it causes the release of substance P and CGRP at both peripheral and central levels, therefore contributing to sensitization \[[@CR60]\]. Hence, several studies illustrated the importance of NGF and/or CGRP sequestration strategies in the variety of pain states where tissue is damaged. Among several agents developed to counteract the NGF-mediated sensitization, particular attention should be drawn to monoclonal antibodies like tanezumab, fulranumab, and fasinumab. 
Several clinical trials revealed a long-lasting (several weeks after a single injection) and pronounced efficacy of tanezumab in the management of osteoarthritic, chronic low back, diabetic peripheral neuropathic, and cancer-induced bone pains \[[@CR32], [@CR39], [@CR62]\]. The major obstacle linked to the use of anti-NGF antibodies that arose from clinical trials was their osteonecrotic activity, often leading to premature joint replacement. Recent trials have adjusted the dose of tanezumab used, and identified an interaction with other pharmacotherapies often used to manage inflammatory conditions. Tanezumab monotherapy does not elevate the risk of total joint replacements; however, if coadministered with NSAIDs, the risk is notably manifested \[[@CR59]\]. Also, there was a minimal incremental benefit of high (10--20 mg) versus low (2--5 mg) doses of tanezumab, further restricting side effects \[[@CR12], [@CR24]\]. Finally, anti-NGF antibodies do not appear to have the cardiovascular or gastrointestinal safety liabilities of NSAIDs, or the undesirable effects of centrally acting analgesics such as opioids. Anti-CGRP Agents in Headache {#Sec6} ---------------------------- CGRP is a peptide found in many C-fibers and released at both their central and peripheral terminals. The latter action is a key event in the production of migraine where the peptide is likely to have both pain generating and vascular actions in dura and scalp \[[@CR27]\]. Antibodies to CGRP have been developed and proven effective, and it is hoped that these agents will become alternatives to the triptans \[[@CR46]\]. In general, monoclonal antibodies are target-specific, which limits off-target toxicities common to most small molecules. Their actions are prolonged, which leads to less frequent dosing of about once a month or less. Their long half-life may lead to these molecules being used for migraine prevention, and CGRP attenuation has potential use in other inflammatory pain conditions. Peripheral Events that Generate Neuropathic Pain {#Sec7} ================================================ Ion Channels {#Sec8} ------------ Critical changes in ion channels, in particular sodium channels, arise after nerve injury and are thought to produce abnormal peripheral transmission to the spinal cord; we have proof of concept since mutations in some of these peripheral sensors and channels cause human familial pain disorders \[[@CR19]\]. The description of certain sodium channels, namely Nav 1.7 and 1.8, which are preferentially found in small fibers, leads to the possibility that their blockers could be novel analgesics with pain-selective actions, unlike present drugs such as lidocaine, which also blocks large fibers. Indeed, there are a number of gain-of-function 1.7 mutations that lead to pain in the absence of injury and a loss-of-function mutation that renders the subjects analgesic \[[@CR10], [@CR20]\]. This proof of concept supports the idea that selective pain-related sodium channel blockers could become orally effective local anesthetic-like drugs \[[@CR43]\] since their selective roles in pain would not require local administration, and clinical studies with NaV1.7 blockers are on-going \[[@CR71]\]. These drugs could have broad efficacy that includes inflammatory pains, where peripheral sensitization will also lead to altered action potential transmission. At present, we have drugs such as carbamazepine that work to subdue abnormal sodium channel function. 
Potassium channels provide another interesting target since these inhibitory channels are down-regulated after nerve injury, but at present we lack drugs that act to open them \[[@CR66]\]. Further, new analgesics could include drugs that target our sensors for heat, cold, and irritants such as the TRP family of channels. These are already pain-control targets since capsaicin is an agonist at TRPV1. A low dose desensitizes the channel whilst a high dose activates it---it is the human heat pain sensor---but then causes the fine pain fibers to pull back from the area of application, producing prolonged pain relief \[[@CR48]\]. TRPM8 is our cold sensor, responding to menthol, and this channel could be a useful target in patients with cold hypersensitivity such as those receiving cancer chemotherapy \[[@CR25], [@CR51]\]. TRPA1 is an irritant sensor and a gain-of-function mutation leads to a pain syndrome in humans, validating the channel as a target \[[@CR37]\]. Botulinum Toxin {#Sec9} --------------- Botulinum toxin has been used to control pain in migraine and in patients with peripheral neuropathy. As a paralytic agent, the drug blocks transmitter release at the neuromuscular junction, but this action can be harnessed to control pain. In headache, the local administration to sensory nerve terminals is thought to block the release of CGRP as well as the insertion of certain pain sensors into the membrane of the nociceptors \[[@CR52]\]. In neuropathy, the authors concluded that the toxin may be transported to the central terminals of the pain fibers where it could block central transmitter release \[[@CR2]\]. Spinal Cord Mechanisms of Pain {#Sec10} ============================== Whatever the cause of pain in the body, the next key stage in communication between peripheral nerves and CNS neurones is the release of transmitter into the spinal cord. Calcium channels are required for transmitter release and so control neuronal activity of spinal neurones. Calcium channel levels and function are altered in different pain states. In particular, in both inflammatory and neuropathic pains, there are increases in their function, and in the latter the alpha-2 delta subunit is highly upregulated \[[@CR15], [@CR50]\]. This is the target for the drugs gabapentin and pregabalin, which appear to prevent the correct movement of the channels to the membrane \[[@CR7]\], and so act to alter transmitter release through mechanisms brought into play by pathophysiological events. These drugs are active in certain pathophysiological states (which may be generated peripherally by neuropathic mechanisms or intense stimuli), but also in disorders of central processing such as fibromyalgia, where they alter glutamate signaling in the brain \[[@CR30]\]. Both preclinically and in patients, the alpha-2 delta ligands appear to act preferentially on evoked hypersensitivities and not on-going pain, forming a basis for differentiation of patients who might respond to them \[[@CR50]\]. Central Sensitization {#Sec11} --------------------- In the spinal cord, activation of the *N*-methyl-[d]{.smallcaps}-aspartate (NMDA) receptor is produced by the repeated release of peptides and glutamate from peripheral nerves. These actions of glutamate at the NMDA receptor in persistent pain states, acting alongside other systems, produce hypersensitivity of spinal sensory neurones. Consequences of this are wind-up, long-term potentiation (LTP) and central sensitization. 
This leads to both an increase in the pain sensation and an increase in the receptive field size of the spinal neurones \[[@CR21]\]. This spinal hypersensitivity is the most plausible explanation for allodynias since the deep dorsal horn neurones subject to wind-up receive both low and high threshold inputs. The NMDA receptor is a key target for controlling pain. Ketamine blocks the NMDA receptor complex at sub-anesthetic doses but with side effects, and there is a potential for drugs with better profiles through NMDA receptor sub-type selective agents. The other receptors for glutamate are unlikely to be viable targets since glutamate is the main CNS excitatory transmitter. Tissue and nerve trauma cause abnormal impulse propagation towards the spinal cord and marked changes in calcium channels, causing them to release more transmitter, thereby favoring central spinal hypersensitivity. Here, the relation between the extent of peripheral activity and the central consequences diverges and shifts towards central hypersensitivity. It has been difficult to directly modulate central sensitization, but certain drugs can be useful: directly as with ketamine, and indirectly as with opioids and gabapentinoids \[[@CR58], [@CR67]\]. Central sensitization has been observed in many patient groups, ranging from neuropathy to osteoarthritis and including fibromyalgia \[[@CR54]\]. Given that the originating events in these very different pains can be clearly peripheral or, as is more likely in fibromyalgia, central, it becomes clear that altered processing and sensitization can be observed at many CNS sites. Altered Pain Transmission in the Brain {#Sec12} -------------------------------------- Increased activity within spinal circuits produced by peripheral activity, whether arising from tissue or nerve damage, is the rationale for the use of regional blocks since in most cases the spinal events are driven by peripheral inputs. Increased spinal neuronal activity will in turn trigger ascending activity to the brain. There are two parallel pathways; firstly, ascending activity to the thalamus and the cortex, the sensory components of pain, allows us to locate and describe the intensity of the pain. Equally important are the pathways to the midbrain and brainstem, where the activity contacts and disrupts the limbic brain, areas such as the amygdala, and generates the common comorbidities that follow pain such as depression, fear, sleep problems, and anxiety. The brain processes and signals, in a dynamic fashion, the sensory and affective components of pain as well as the salience and aversive aspects of pain through connections between various areas that include insula, prefrontal and cingulate cortices, as well as the somatosensory cortex \[[@CR38]\]. The ascending pain messages from the cord that input these various brain regions also contact descending control pathways that run from the brainstem back to the spinal cord. These monoamine and opioid projections can be inhibitory or excitatory, so that cognitive and emotional events are able to switch pain on or off. Central Inhibitory Mechanisms {#Sec13} ----------------------------- Blocking the generation of excitability is one approach, and this can be achieved by targeting the periphery or the spinal cord, but increasing inhibitions may also provide control of pain. Opioids work at spinal levels by pre- and post-synaptic mechanisms and the spinal application of morphine in animals rapidly led to the human epidural route in patients. 
Systemic opioids both increase descending inhibitions and reduce descending facilitations by CNS actions. All of these mechanisms are altered as pain shifts from acute to chronic. Opioids can be useful in pain control, although this is less clear for chronic non-malignant pain where there are issues with side effects, abuse potential, overdose risk from the opioid load, and potential paradoxical hyperalgesia as the inhibited spinal neuronal systems compensate \[[@CR64]\]. An advance has been tapentadol, which is a mu opioid with noradrenaline reuptake inhibition, a dual-action molecule with key spinal actions \[[@CR8]\]. The latter action targets and enhances descending inhibitions and so opioid side effects are reduced. All presently used opioids act at the mu opioid receptor but can differ in potency, pharmacokinetics, and route of administration. Recently, after many decades of attempts to produce drugs acting on the other opioid receptors, agonists at the NOP receptor have gone into patients \[[@CR42]\]. A severe loss of spinal GABA-mediated inhibitions is reported within the spinal cord after peripheral nerve injury, which compounds the gain of excitation. The widespread nature of the roles of GABA in the brain means that therapies aimed at restoring its normal inhibitions are not currently feasible. Altering the function of the chloride channel that GABA operates is being attempted \[[@CR22]\]. Pathways from the Brain to the Spinal Cord that Alter Pain {#Sec14} ---------------------------------------------------------- Abnormal signaling from the spinal cord alters pain processing in the brain. Pathways from the brain can in turn alter spinal sensory processing \[[@CR4]\]. These projections originate from the midbrain and brainstem in predominantly monoamine systems (noradrenaline and 5HT). The actions of anti-depressant drugs in pain therefore link to these systems. These pharmacological circuits also play major roles in the generation and control of emotions such as mood, fear, and anxiety as well as in thermoregulation and the sleep cycle. Pain inputs into these areas will alter descending controls and also form a basis for pain-induced co-morbidities. Early work in this field focused on descending inhibitions, which are now known to be predominantly noradrenergic, acting through the alpha-2 adrenoceptor \[[@CR31]\]. A recruitment of descending inhibitions underlies placebo analgesia and a failure of descending inhibitions has been reported in many patient groups with diverse types of pain \[[@CR68]\]. However, pain could equally be increased by enhanced descending facilitations through the 5HT3 receptor \[[@CR57], [@CR65]\]. These excitatory influences from the brain will act to favor the development and maintenance of central sensitization in the spinal cord \[[@CR14]\]. Part of the substrates for these bidirectional controls are the ON and OFF cells found in brainstem nuclei \[[@CR26]\]. There appear to be altered descending excitatory controls in patients with severe pain from osteoarthritis \[[@CR28]\]. In animals, there is a loss of descending noradrenaline controls after nerve injury and, correspondingly, animals with nerve injury that have activated their descending inhibitory noradrenergic systems are protected against the pain, and recovery from surgical pain is enhanced when the same systems operate \[[@CR17]\]. 
In general, painful inputs into the limbic brain and the resultant descending controls link emotional states and the levels of pain perceived, and could be one of the ways by which higher functions such as coping and catastrophizing can modulate sensory components of pain at the level of the first relays in the spinal cord. The levels of midbrain-generated modulation, both positive and negative, may be a key factor in individual variations in pain, a potential target for non-pharmacological therapies, and a contributor to some "dysfunctional" pain states such as fibromyalgia. Here, a "normal" peripheral input could be enhanced if the descending systems are abnormal and so enhance excitability of the spinal cord through central events \[[@CR53], [@CR63]\]. Diffuse pains may have their origins in disordered central pain modulation. Animal studies reveal that altered descending controls are important in the maintenance of persistent inflammatory and neuropathic pains \[[@CR4]\]. Gauging Descending Inhibitions in Patients {#Sec15} ------------------------------------------ The balance shifts towards descending facilitation in persistent pains and, importantly, the extent of the loss of descending inhibitions in patients can be gauged. The finding that one pain could inhibit another through descending controls formed the basis for diffuse noxious inhibitory controls (DNIC) \[[@CR41]\] and its human counterpart, conditioned pain modulation (CPM) \[[@CR68]\], a descending inhibition that is lost in patients with brainstem lesions and spinal sections \[[@CR11]\]. Recent studies reveal that DNIC uses a descending noradrenaline and alpha-2 adrenoceptor-mediated pathway from the brain to the spinal cord \[[@CR5]\]. Sham surgery produces no change in DNIC and no pain phenotype, corresponding to reduced CPM being a risk factor for persistent pain after surgery \[[@CR69]\]. After peripheral neuropathy, DNIC is lost, yet can be restored by drugs that enhance noradrenaline levels and also by blocking the 5HT3-mediated descending facilitations \[[@CR5]\]. In patients, reduced CPM is seen in many pain states, including neuropathy, osteoarthritis, headache, CRPS, fibromyalgia, and others \[[@CR1], [@CR40], [@CR70]\]. CPM can be quantified by one pain versus another, often heat versus cold, but as with DNIC, the modality of the conditioning stimulus only has to be noxious, and the wide dynamic range of the neurones in animals subject to DNIC means that the conditioned response can be noxious or innocuous \[[@CR36]\]. Importantly, CPM can be restored in patients with peripheral neuropathic pain by the MOR-NRI drug tapentadol and a reduced CPM is predictive of efficacy of the SNRI duloxetine, suggestive of a loss of key noradrenaline signaling in patients akin to that seen with DNIC in animals \[[@CR44], [@CR70]\]. Both DNIC and CPM are dynamic---CPM can be present early in a pain condition but lost later, as with CRPS, and alters over the course of headaches \[[@CR47]\]. On-Going and Evoked Pains {#Sec16} ------------------------- CPM allows for the quantification of descending inhibitions and so is a key step towards precision medicine. An overwhelming question is whether it is the spontaneous or the stimulus-evoked component of pain that is the greater problem for patients, who are simply asked to rate their pain on a VAS score. 
Differentiating the two pain events, for example neuropathic spontaneous pain and inflammatory tonic pain from evoked, particularly mechanical, hypersensitivity, is an on-going research goal both pre-clinically and clinically. Despite its terminology, spontaneous pain not only refers to the intrinsic firing of neurons active in pain-signaling pathways, but may rather---in the case of neuropathy for example---refer to deafferentation-induced spontaneous discharge in CNS neurons. The sensitization of such pain signaling neurons may then be responsible for on-going chronic pain. Stimulus-evoked hypersensitivity meanwhile refers to an enhanced neuronal, and therefore pain, response to an innocuous or noxious insult at the periphery. The presence of spontaneous pain is a common complaint amongst chronic pain patients, for example those with a neuropathy \[[@CR3]\]. An increased sensitivity to evoked stimuli is also present in such patients. Importantly, hyperalgesia can be pharmacologically treated in the absence of the relief of on-going pain \[[@CR23]\] and so it is likely that the underlying mechanisms governing on-going versus evoked pain are distinct and thus should be treated clinically as separate components of the pain state. It is well accepted that translating mechanisms in animal models can guide potential treatments in the patient domain. While detecting and mechanistically evaluating spontaneous pain pre-clinically was viewed as a complicated task, an insightful study by Frank Porreca and colleagues used conditioned place preference (CPP) to not only detect tonic pain in neuropathic rats but also to determine the efficacy of specific analgesic relief \[[@CR34]\]. Their study provided evidence for a tonic pain state in animals that had undergone spinal nerve ligation (SNL) surgery, while the presence of a spinal cord lesion similarly coincides with the expression of spontaneous pain, with CPP this time revealing that clonidine or motor cortex stimulation was able to unmask a tonic aversive state \[[@CR16]\]. Further studies reveal that certain brain areas such as the anterior cingulate cortex may contribute more to the on-going aversive state than to modulating evoked responses, and importantly such studies impact the assessment of analgesic therapeutic potential on these different responses \[[@CR33]\]. Targeting Pain Mechanisms in Patients {#Sec17} ===================================== Whilst awaiting new agents, our understanding of mechanisms for pain and its treatments allows for a rationale for all approaches to pain control. These could range from regional blocks to restoration of normal central modulation with drugs to cognitive behavioral approaches. Indeed, even the descending controls, embedded deep in the brain, are altered by peripheral inputs and so could be altered by peripheral and spinal interventions. But who will respond to each particular treatment? NNTs for many pain drugs are quite high but trials have been based on etiology and so presume homogeneity, whereas the patients may have differing mechanisms and sensory profiles. Mechanism-based therapy is a laudable concept but unlikely to be helpful since how could mechanisms be identified in most patients? A brilliant variant on this would be to use the sensory phenotype of the patient as a surrogate reflection of underlying pain mechanisms. 
Using the sodium channel blocker oxcarbazepine, it was revealed that those patients with "irritable nociceptors", i.e., having evoked hypersensitivity rather than on-going pain, responded to the drug, an effect that was lost in the whole group analysis \[[@CR18]\]. Subtypes of patients with neuropathic pain, fibromyalgia and post-surgical pain can be formally distinguished. Analysis of patients with neuropathic pain has revealed three clusters of patients: Cluster 1---those with sensory loss; Cluster 2---those with thermal hyperalgesia; Cluster 3---those with mechanical hyperalgesia. In the near future, we will know if these subtypes have differential responses to different drugs if stratified trials can be conducted, but there are already hints of differential sensitivities to treatments. Cluster 1 patients responded to oral opioids and not well to Na channel block, whereas Cluster 2 patients did respond to this drug and also to Botox. Cluster 3 had greater efficacy of pregabalin and topical or IV lidocaine \[[@CR6]\]. There is also the use of CPM, as discussed previously, to inform on impaired descending inhibitions and so predict responders to SNRIs and sensitivity to tapentadol. Other studies, at present limited to neuropathic pain, reveal heterogeneous responses to drugs in different subgroups of patients \[[@CR13]\], and this needs to be extended to nociceptive pain patients and those with fibromyalgia. There is considerable hope for the future. However, neither CPM nor quantitative sensory testing is appropriate for routine clinical practice, so if there is a relation between particular sensory profiles of patients and particular pharmacological agents, simple tests could be developed. Patients should be able to distinguish on-going from evoked pains during the taking of a history and could be asked if their pains were predominantly thermally or mechanically evoked, so delineating the clusters described above \[[@CR6]\]. Maybe patients could be asked if one pain could inhibit their pain---bite your thumb? This could represent a simple test of CPM. We have a lot further to go but the union of informed and thoughtful preclinical science and clinical medicine will lead us onwards. **Enhanced content** To view enhanced content for this article go to <http://www.medengine.com/Redeem/2848F0605AFB1AC3>. This work was funded by the Wellcome Trust Pain Consortium and Bonepain (European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie Grant Agreement No. 642720). No funding was received for the publication of this article. All named authors meet the International Committee of Medical Journal Editors (ICMJE) criteria for authorship for this manuscript, take responsibility for the integrity of the work as a whole, and have given final approval for the version to be published. Disclosures {#FPar1} =========== Kirsty Bannister and Mateusz Kucharczyk have nothing to disclose. Anthony H. Dickenson has been a speaker for Allergan, Grunenthal, and Teva. Compliance with Ethics Guidelines {#FPar2} ================================= This article is based on previously conducted studies and does not involve any new studies of human or animal subjects performed by any of the authors. 
Open Access {#d29e581} =========== This article is distributed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International License (<http://creativecommons.org/licenses/by-nc/4.0/>), which permits any noncommercial use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Improving quality of care during labour and childbirth and in the immediate postnatal period. Quality of care during labour and childbirth and in the immediate postnatal period is important in ensuring healthy maternal and newborn survival. A narrative review of existing quality frameworks in the context of evidence-based interventions for essential care demonstrates the complexities of quality of care and the domains required to provide high quality of care. The role of the care provider is pivotal to optimum care; however, providers need appropriate training and supervision, which should include assessment of core competencies. Organisational factors such as staffing levels and resources may support or hinder the delivery of optimum care and should be observed during any monitoring. The woman's perspective is central to all quality of care strategies; her opinion should be sought where possible. The importance of assessing and monitoring quality of care during such a critical period should be appreciated. A number of quality frameworks offer organisations a foundation on which they can deliver high quality care.
Q: Why not auto move if object is destroyed in next step?

If a function returns a value like this:

std::string foo() {
    std::string ret {"Test"};
    return ret;
}

the compiler is allowed to move ret, since it is not used anymore. This doesn't hold for cases like this:

void foo (std::string str) {
    // do sth. with str
}

int main() {
    std::string a {"Test"};
    foo(a);
}

Although a is obviously not needed anymore, since it is destroyed in the next step, you have to do:

int main() {
    std::string a {"Test"};
    foo(std::move(a));
}

Why? In my opinion, this is unnecessarily complicated, since rvalues and move semantics are hard to understand, especially for beginners. It would be great if you didn't have to care in standard cases but still benefited from move semantics anyway (as with return values and temporaries). It is also annoying to have to look at the class definition to discover whether a class is move-enabled and benefits from std::move at all (or to use std::move anyway in the hope that it will sometimes be helpful). It is also error-prone if you work on existing code:

int main() {
    std::string a {"Test"};
    foo(std::move(a));
    // [...] 100 lines of code
    // new line:
    foo(a); // Oops!
}

The compiler knows better whether an object is no longer used. std::move everywhere is also verbose and reduces readability.

A: It is not obvious that an object is not going to be used after a given point. For instance, have a look at the following variant of your code:

struct Bar {
    ~Bar() { std::cout << str.size() << std::endl; }
    std::string& str;
};

Bar make_bar(std::string& str) {
    return Bar{ str };
}

void foo (std::string str) {
    // do sth. with str
}

int main() {
    std::string a {"Test"};
    Bar b = make_bar(a);
    foo(std::move(a));
}

This code would break, because the string a is put in an invalid state by the move operation, but Bar is holding a reference to it and will try to use it when it is destroyed, which happens after the foo call.

If make_bar is defined in an external library (e.g. a DLL/.so), the compiler has no way, when compiling Bar b = make_bar(a);, of telling whether b is holding a reference to a or not. So even if foo(a) is the last usage of a, that doesn't mean it's safe to use move semantics, because some other object might be holding a reference to a as a consequence of previous instructions. Only you can know if you can use move semantics or not, by looking at the specifications of the functions you call.

On the other side, you can always use move semantics in the return case, because the object will go out of scope anyway, which means any object holding a reference to it would result in undefined behaviour regardless of the move semantics. By the way, you don't even need move semantics there, because of copy elision.
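To make that last point visible, here is a self-contained sketch (the Tracer type and the function names are ours, not from the question): returning a named local is moved, or elided entirely under NRVO, while an explicit return std::move(t) suppresses the elision and forces a move.

#include <iostream>
#include <string>
#include <utility>

// Hypothetical tracer type that logs copies and moves.
struct Tracer {
    std::string s;
    Tracer(const char* p) : s(p) {}
    Tracer(const Tracer& o) : s(o.s) { std::cout << "copy\n"; }
    Tracer(Tracer&& o) noexcept : s(std::move(o.s)) { std::cout << "move\n"; }
};

Tracer make_named() {
    Tracer t{"named"};
    return t;             // treated as an rvalue: moved, or elided via NRVO
}

Tracer make_pessimized() {
    Tracer t{"pessimized"};
    return std::move(t);  // disables NRVO: at best a move, never elision
}

int main() {
    Tracer a = make_named();      // typically prints nothing (elided) or "move"
    Tracer b = make_pessimized(); // always prints "move"
}

With most compilers make_named prints nothing at all, which is exactly why writing std::move on a return value is considered a pessimization.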
package cn.iocoder.springboot.lab28.task;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class Application {

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }

}
/*
 * Copyright (c) 2012 - 2020 Splice Machine, Inc.
 *
 * This file is part of Splice Machine.
 * Splice Machine is free software: you can redistribute it and/or modify it under the terms of the
 * GNU Affero General Public License as published by the Free Software Foundation, either
 * version 3, or (at your option) any later version.
 * Splice Machine is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
 * without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
 * See the GNU Affero General Public License for more details.
 * You should have received a copy of the GNU Affero General Public License along with Splice Machine.
 * If not, see <http://www.gnu.org/licenses/>.
 */
package com.splicemachine.spark2.splicemachine

import org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils._
import org.apache.spark.sql.execution.datasources.jdbc.{JdbcUtils, SpliceRelation2, JDBCOptions, JdbcOptionsInWrite, JDBCRDD}
import org.apache.spark.sql.sources._
import org.apache.spark.sql.types._
import org.apache.spark.sql.{DataFrame, SQLContext, SaveMode}

class DefaultSource extends RelationProvider with CreatableRelationProvider with SchemaRelationProvider {

  override def createRelation(sqlContext: SQLContext, parameters: Map[String, String]): BaseRelation = {
    new SpliceRelation2(new JdbcOptionsInWrite(parameters))(sqlContext, None)
  }

  /**
    * Creates a relation and inserts data into the specified table.
    *
    * @param sqlContext
    * @param mode
    * @param parameters
    * @param df
    * @return
    */
  override def createRelation(sqlContext: SQLContext, mode: SaveMode,
                              parameters: Map[String, String], df: DataFrame): BaseRelation = {
    val jdbcOptions = new JdbcOptionsInWrite(parameters)
    val url = jdbcOptions.url
    val table = jdbcOptions.table
    val createTableOptions = jdbcOptions.createTableOptions
    val isTruncate = jdbcOptions.isTruncate

    val conn = JdbcUtils.createConnectionFactory(jdbcOptions)()
    try {
      val tableExists = JdbcUtils.tableExists(conn, jdbcOptions)
      if (tableExists) {
        val actualRelation = new SpliceRelation2(new JdbcOptionsInWrite(parameters))(sqlContext, Option.apply(df.schema))
        mode match {
          case SaveMode.Overwrite =>
            if (isTruncate && isCascadingTruncateTable(url) == Some(false)) {
              // In this case, we should truncate the table and then load.
              truncateTable(conn, jdbcOptions)
              actualRelation.insert(df, true)
              actualRelation
            } else {
              // Otherwise, do not truncate the table; instead drop and recreate it
              dropTable(conn, table, jdbcOptions)
              createTable(conn, df, jdbcOptions)
              actualRelation.insert(df, false)
              actualRelation
            }

          case SaveMode.Append =>
            actualRelation.insert(df, false)
            actualRelation

          case SaveMode.ErrorIfExists =>
            throw new Exception(
              s"Table or view '$table' already exists. SaveMode: ErrorIfExists.")

          case SaveMode.Ignore =>
            // With `SaveMode.Ignore` mode, if the table already exists, the save operation is expected
            // to not save the contents of the DataFrame and to not change the existing data.
            // Therefore, it is okay to do nothing here and just return the existing relation.
            actualRelation
        }
      } else {
        createTable(conn, df, jdbcOptions)
        val actualRelation = new SpliceRelation2(new JdbcOptionsInWrite(parameters))(sqlContext, Option.apply(df.schema))
        actualRelation.insert(df, false)
        actualRelation
      }
    } finally {
      conn.close()
    }
  }

  /**
    * Creates a relation based on the schema.
    *
    * @param sqlContext
    * @param parameters
    * @param schema
    * @return
    */
  override def createRelation(sqlContext: SQLContext,
                              parameters: Map[String, String],
                              schema: StructType): BaseRelation = {
    new SpliceRelation2(new JdbcOptionsInWrite(parameters))(sqlContext, Option.apply(schema))
  }
}
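A usage sketch for the provider above (our own illustration; the JDBC URL and table name are placeholders, and the option keys follow the standard Spark JDBC conventions this class relies on):

import org.apache.spark.sql.{SaveMode, SparkSession}

object SpliceWriteExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("splice-write-example").getOrCreate()
    val df = spark.range(10).toDF("ID")

    // SaveMode.Append is routed to actualRelation.insert(df, false) above.
    df.write
      .format("com.splicemachine.spark2.splicemachine") // resolves to the DefaultSource in this package
      .option("url", "jdbc:splice://localhost:1527/splicedb;user=app;password=app") // placeholder URL
      .option("dbtable", "SPLICE.EXAMPLE_TABLE")        // placeholder table
      .mode(SaveMode.Append)
      .save()

    spark.stop()
  }
}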
diff -Nupr src.orig/fs/proc/generic.c src/fs/proc/generic.c
--- src.orig/fs/proc/generic.c	2017-09-22 15:27:21.698056175 -0400
+++ src/fs/proc/generic.c	2017-09-22 15:27:48.190165879 -0400
@@ -194,6 +194,7 @@ int proc_alloc_inum(unsigned int *inum)
 	unsigned int i;
 	int error;
 
+	printk("kpatch-test: testing change to .parainstructions section\n");
 retry:
 	if (!ida_pre_get(&proc_inum_ida, GFP_KERNEL))
 		return -ENOMEM;
Q: Can WKWebView instance show webpage while it is loading?

I'm using WKWebView to browse a specific website. The WKWebView instance loads the initial page of the website by URL with the method

load(_ request: URLRequest) -> WKNavigation?

Until the load request completes, I see a white screen. Can WKWebView show already-loaded parts of the webpage while the rest of the webpage is loading? Can UIWebView do this trick?

A: If the white screen is your issue, I recommend displaying a custom progress indicator or some animation view over your webView. Once loading has completed, hide the progress indicator.

Start your animation in:

func webView(_ webView: WKWebView, didStartProvisionalNavigation navigation: WKNavigation!) {
    // Start progress indicator animation
}

End/hide your animation/progress view in:

func webView(_ webView: WKWebView, didFinish navigation: WKNavigation!) {
    // Stop progress indicator animation
}

(These are WKNavigationDelegate methods, so remember to set your controller as the web view's navigationDelegate.)
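For completeness, here is a fuller self-contained sketch of the same idea (the class name, URL and indicator styling are illustrative assumptions, not from the question):

import UIKit
import WebKit

// Shows a spinner over the web view until navigation finishes.
class BrowserViewController: UIViewController, WKNavigationDelegate {
    private let webView = WKWebView()
    private let spinner = UIActivityIndicatorView(style: .large)

    override func viewDidLoad() {
        super.viewDidLoad()

        webView.frame = view.bounds
        webView.navigationDelegate = self
        view.addSubview(webView)

        spinner.center = view.center
        spinner.hidesWhenStopped = true
        view.addSubview(spinner)

        webView.load(URLRequest(url: URL(string: "https://example.com")!)) // placeholder URL
    }

    func webView(_ webView: WKWebView, didStartProvisionalNavigation navigation: WKNavigation!) {
        spinner.startAnimating()  // page started loading
    }

    func webView(_ webView: WKWebView, didFinish navigation: WKNavigation!) {
        spinner.stopAnimating()   // page finished loading
    }

    func webView(_ webView: WKWebView, didFail navigation: WKNavigation!, withError error: Error) {
        spinner.stopAnimating()   // also stop on failure
    }
}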
Can we settle? Yes we can! Part 1 of 5

The question of whether a claim can be settled while either waiting on CMS approval or without any approval at all comes up frequently. Just this morning, an attorney wrote to "Ask Jen" and posed the following question: "Can an employer or carrier settle a workers' compensation claim prior to obtaining CMS approval?". On its face, it seems like a fairly straightforward question. "Yes, of course you can" was my initial reaction. But then I started thinking of all the exceptions, the pros, the cons and what I consider to be best claims-handling practices after spending the better part of the last 20 years working in the claims settlement business. So after spending 30 minutes trying to succinctly answer this question while covering all the possibilities, probabilities and distinctions, I realized this was a seemingly simple question that needed a much more in-depth answer (reader note: in-depth answers are NOT my forte and are the exclusive province of Ms. Jennifer Jordan, MEDVAL's General Counsel and prolific essayist on all things MSP. But given that she was on vacation all last week and will be on a five-city speaking/training tour beginning today, you are stuck with me and my just-this-minute-conceived blog format of answering in five bite-sized parts).

Back to the question at hand. Unless subject to the jurisdiction of the Workers' Compensation Commission in the State of Maryland, the unequivocal answer is yes. Parties may settle their claim pursuant to state law, with or without CMS' blessing, once there is a meeting of the minds on the settlement terms. To my knowledge, no state Workers' Compensation Board or Commission other than Maryland's has refused as a matter of law to approve a settlement without CMS approval in hand. There are scattered reports around the country about individual judges and jurisdictions requiring documentation that Medicare's interests are being addressed (which is reasonable), but nothing mandating that the case must have been approved by CMS before finalizing the settlement. Most state agencies correctly understand that a party's MSP obligations are triggered by the underlying state law and are reluctant to give the federal government more oversight and authority than the feds expressly legislate for themselves. So whether to close the claim with this aspect of the settlement unresolved is a matter of personal preference and ultimately a risk-management decision for both plaintiff and defense. Aside from the obvious advantages/disadvantages inherent in all settlements, settling a case while waiting on CMS approval presents a few special considerations. Namely:

- Who bears the cost of the additional indemnity/medical payments while waiting on CMS?
- What happens if CMS comes back with a different WCMSA amount than projected?
- In the event of a counter approval, what happens if one party is in agreement with the amount and the other is not? What are the resources available if the settlement terms are fixed prior to CMS approval and neither party is satisfied with the outcome of the WCMSA review process?
- Are there alternatives to seeking CMS approval? What are the advantages and disadvantages?

Stay tuned this week for answers to these and other pressing MSP questions.

Ryan
Finding a snapshot by a name which could also be an id isn't the best way to do it. There will be a rewrite of savevm, loadvm and delvm to improve the behavior of these commands. The savevm and loadvm rewrites will have their own patch series.

Now bdrv_snapshot_find takes more parameters. The name parameter will be matched only against the name of the snapshot, and the same applies to the id parameter. There is one exception: if you set the last parameter, the name parameter will be matched against the name or the id of a snapshot. This exception is only for backward compatibility with other commands and it will be dropped after all commands have been rewritten.

We only need to know if that snapshot exists or not. We don't care about any error message. If the snapshot exists it returns TRUE, otherwise it returns FALSE. There is also a new Error parameter which will contain an error message if something goes wrong.

Signed-off-by: Pavel Hrdina <[email protected]>
---
 savevm.c | 93 ++++++++++++++++++++++++++++++++++++++++++++++------------------
 1 file changed, 67 insertions(+), 26 deletions(-)

On 04/25/2013 12:31 AM, Wenchao Xia wrote:
>
>> +
>> +    if (!found) {
>> +        error_setg(errp, "Failed to find snapshot '%s'", name ? name : id);
> suggest not to set error, since it is a normal case.

The way I understand it, failure to find a snapshot might need to be treated as an error - it's up to the caller's needs. Also, there pretty much is only one failure mode - the requested snapshot was not found - even if there are multiple ways that we can fail to find a requested snapshot, so I'm fine with treating all 'false' returns as an error path.

Thus, a caller that wants to probe for a snapshot existence but not set an error calls:

bdrv_snapshot_find(bs, snapshot, name, id, NULL, false);

while a caller that wants to report a missing snapshot as an error calls:

bdrv_snapshot_find(bs, snapshot, name, id, &local_err, false);

and then propagates local_err on upwards.

Or are you worried about a possible third case, where a caller cares about failure during bdrv_snapshot_list(), differently than failure to find a snapshot? What callers have that semantics? If that is a real concern, then maybe returning a bool is the wrong approach, and we should instead return an int. A return < 0 is a fatal error (bdrv_snapshot_list failed to even look up snapshots); a return of 0 means our lookup attempt hit no fatal errors but the snapshot was not found, and a return of 1 means the snapshot was found. Then there would be three calling styles:

Probe for existence, with no error reporting:

if (bdrv_snapshot_find(bs, snapshot, name, id, NULL, false) > 0) {
    // exists
}

Probe for existence but with error reporting on fatal errors:

exist = bdrv_snapshot_find(bs, snapshot, name, id, &local_err, false);
if (exist < 0) {
    // propagate local_err
} else if (exist) {
    // exists
}

Probe for snapshot, with error reporting even for failed lookup:

if (bdrv_snapshot_find(bs, snapshot, name, id, &local_err, false) <= 0) {
    // propagate local_err
}

But I don't know what the existing callers need, to make a decision on whether a signature change is warranted. Again, more reason to defer this series to 1.6.
Also, there pretty> much is only one failure mode - the requested snapshot was not found -> even if there are multiple ways that we can fail to find a requested> snapshot, so I'm fine with treating all 'false' returns as an error path.>> Thus, a caller that wants to probe for a snapshot existence but not set> an error calls:> bdrv_snapshot_find(bs, snapshot, name, id, NULL, false);>> while a caller that wants to report a missing snapshot as an error calls:> bdrv_snapshot_find(bs, snapshot, name, id, &local_err, false);> and then propagates local_err on upwards.>>> Or are you worried about a possible third case, where a caller cares> about failure during bdrv_snapshot_list(), differently than failure to> find a snapshot? What callers have that semantics? If that is a real> concern, then maybe returning a bool is the wrong approach, and we> should instead return an int. A return < 0 is a fatal error> (bdrv_snapshot_list failed to even look up snapshots); a return of 0> means our lookup attempt hit no fatal errors but the snapshot was not> found, and a return of 1 means the snapshot was found. Then there would> be three calling styles:>> Probe for existence, with no error reporting:> if (bdrv_snapshot_find(bs, snapshot, name, id, NULL, false) > 0) {> // exists> }> Probe for existence but with error reporting on fatal errors:> exist = bdrv_snapshot_find(bs, snapshot, name, id, &local_err, false);> if (exist < 0) {> // propagate local_err> } else if (exist) {> // exists> }> Probe for snapshot, with error reporting even for failed lookup:> if (bdrv_snapshot_find(bs, snapshot, name, id, &local_err, false) <= 0) {> // propagate local_err> }>> But I don't know what the existing callers need, to make a decision on> whether a signature change is warranted. Again, more reason to defer> this series to 1.6.> Personally I prefer internal layer have clean meaning, setting error only for exception. But I am not strongly against it, if caller can make easier use of it, a document for this function is also OK.
Alan Rusbridger, editor of the Guardian, and Nick Davies, the journalist who uncovered the extent of phone hacking at the News of the World, are to be this year's recipients of the Media Society award. In a release, the Media Society, a charity that campaigns for freedom of expression and the encouragement of high standards in journalism, said: The Guardian's revelations about phone hacking at the News of the World have not only been the biggest media story of the year, but have also triggered a public debate about the practices of the press, with potentially far-reaching consequences. Alan Rusbridger, editor of the Guardian since the mid-1990s, has presided over the paper's development from a broadsheet to its current Berliner format, and its embrace of online journalism. He is an eloquent defender of the importance of journalism for holding power to account. Nick Davies, meanwhile, has demonstrated the highest qualities of persistence in his following of the biggest media stories in recent years, while his concern for the health and future of his craft is manifest: he is an outstanding advocate of the importance of good reporting as the basis for good journalism.
Introduction {#s1} ============ Epstein--Barr virus (EBV) or human herpesvirus 4 was established as the major cause of infectious mononucleosis commonly known as glandular fever ([@GZN076C5]). The virus has also been linked to more severe diseases and is associated with severe infection in post-transplantation and immunocompromised individuals and some malignancies, including Burkitt\'s lymphoma, nasopharyngeal carcinoma, Hodgkin\'s disease and immunoblastic lymphoma. More recently, EBV infection was linked with autoimmune diseases such as systemic lupus erythematosus ([@GZN076C8]). There is currently a spectrum of diagnostic strategies (immunofluorescence assay, western blot, PCR and ELISA; reviewed by [@GZN076C9]), but there is no single standardised diagnostic test for EBV infection to date. Therefore, the search for a reliable routine diagnostic test is an important area for investigation. Current commercial serological diagnostic tests use ELISA to measure IgG and IgM antibodies to viral antigens, including viral capsid antigen (VCA), p18 an immunodominant region of VCA ([@GZN076C4]), EBV nuclear antigens (EBNA) and early antigen (EA-D); usually, a combination of these tests is required to confirm acute infections ([@GZN076C7]). Our aim in this study is to produce a panel of peptide mimics, which represent diagnostically important EBV epitopes. Our strategy is to find alternatives to the authentic EBV crude antigen used in current commercial ELISA-based tests. The use of these peptides could provide a more specific and cost-effective commercial diagnostic test. Peptides may also eliminate the high proportion of unwanted epitopes represented in the crude antigen preparation which can often result in false-positive cross-reactions. Cross-reactivity with rheumatoid factor and other herpes viruses has been described in serological EBV diagnostics ([@GZN076C7]). A peptide mimotope or epitope mimic is a peptide that will mimic the antibody binding site on the antigen and compete with the native protein for binding. Therefore, peptide mimotopes representing antigenic epitopes that are recognised by serum antibodies produced after infection with EBV may eliminate the need to use the whole antigen in diagnostic assays. We have previously generated EBV peptide mimotopes by identifying peptides against four different monoclonal antibodies from a phage-displayed random peptide library ([@GZN076C2]; [@GZN076C14]). These peptides were found to represent different EBV epitopes and were useful for detection of EBV IgM antibodies in clinical samples with 100% specificity and 54--88% sensitivity. Two of the most effective peptides F1 and Gp125 were subsequently conjugated to BSA and used in screening of \>200 EBV serum samples which resulted in improved sensitivity (95% and 92%; [@GZN076C2]). A limitation of this approach, however, is that individual mAbs represent only a fraction of the total antibody response to antigen, whereas the use of polyclonal antibodies is likely to increase the chances of selecting useful mimotopes as it samples the entire population of antibodies in the serum. MAbs are not always available for every infectious disease agent necessitating the use of polyclonal reagents in these cases. Furthermore, the antibodies in the polyclonal immune response will react with all the immunodominant epitopes associated with EBV virus infection. The aim in this study was to select peptide mimics specific for epitopes in polyclonal sera from our phage-displayed peptide library. 
These peptides may be useful for detecting antibodies reactive with cognate antigen diagnostically. In this current study, we have chosen two different approaches for selection of peptide ligands using polyclonal sera. First, we isolated the IgG fraction from patients\' sera containing a high titer of EBV antibodies. Second, we immunised a rabbit with EBV and affinity-purified antibodies. In this study, we show that polyclonal antibodies can be used to select peptide ligands from a random peptide library and these epitope mimics are useful for diagnosis, with a specificity and sensitivity similar to those peptide mimics selected against EBV mAbs described in our previous study ([@GZN076C2]). This study is also the first example of screening a random peptide library with polyclonal antibodies from an immunised rabbit and has allowed isolation of peptide mimotopes to several important diagnostic epitopes simultaneously. Materials and methods {#s2} ===================== Preparation of polyclonal EBV antisera {#s2a} -------------------------------------- A New Zealand white rabbit was immunised intramuscularly with 200 µg EBV-infected cell extract (crude EBV; ABI, MD, USA) emulsified in 0.5 ml Freund\'s complete adjuvant. Two booster doses diluted 1:1 in Freund\'s incomplete adjuvant followed by a final double-dose boost were performed in 21 day intervals. Human serum samples {#s2b} ------------------- A panel of 40 individual human serum samples were provided by Queensland Medical Laboratory (Brisbane, Australia). The positive sera (*n* = 16) were collected from individuals with recent or early stage of infectious mononucleosis and were tested for the presence of IgM antibodies to EBV using a commercial diagnostic test (PanBio Ltd). An individual seropositive serum sample with a high titer of IgM and IgG EBV antibodies was selected for purification. The negative sera (*n* = 16) were collected from patients having no previous exposure to EBV infection and were defined as seronegative using the commercial diagnostic test. Putative cross-reactive sera were also screened (*n* = 8), two Parvovirus (Parvo), two Herpes Simplex virus (HSV), two Cytomegalovirus (CMV) and two Rheumatoid factor (RF), to analyse the specificity of binding. Affinity purification of rabbit and human IgG {#s2c} --------------------------------------------- The IgG fraction from an EBV-immunised rabbit and human serum with a high titer of antibodies to EBV were purified using Protein G sepharose (2.5 ml column; Pharmacia), using the manufacturer\'s instructions. Briefly serum was diluted 1:5 in PBS and passed through a 0.2 µm syringe filter prior to being applied to the resin, and antibodies were eluted with 0.1 M glycine pH 3.0, neutralised and dialysed against PBS with three buffer changes. Phage library and selection {#s2d} --------------------------- For selection of phage peptides to affinity purified sera from an EBV-infected patient and an EBV-immunised rabbit, we screened our AdLib 1 library (AdAlta Pty Ltd) a linear peptide library of 20 random amino acids displayed as N-terminal fusions to protein III of filamentous phage M13 ([@GZN076C1], [@GZN076C2], [@GZN076C3]). A similar panning strategy was used as described in our previous study ([@GZN076C2]). Briefly ELISA wells were coated with 10 µg/ml purified rabbit or human anti-EBV IgG preparations and peptides were selected from the library of \>5 × 10^8^ random peptides by performing six rounds of panning. 
The stringency of washing was increased in each subsequent round of panning to enrich for phage peptides that bound specifically to the target antibodies. Peptide synthesis {#s2e} ----------------- Peptides were synthesised to \>70% purity by GL-Biochem (Shanghai, China). Peptides Eb1, Eb3 and Eb4 were dissolved in dimethyl formamide at 1 mg/ml used fresh or stored in aliquots at --20°C. Eb2 was soluble in PBS and stored in a similar manner. Gp125 and F1 peptides were prepared as previously described ([@GZN076C2]). Peptide conjugation to BSA {#s2f} -------------------------- Peptides were synthesised with four additional glycine residues and an additional cysteine residue at the C′ terminus (Gly~4~Cys). The glycines provide a small spacer region between the peptide and the additional cysteine residue allows for conjugation to BSA via the heterobifunctional cross-linker succinimidyl-4-(*N*-maleimidomethyl)cyclohexane-1-carboxylate (SMCC, Pierce), using the same method as described in our previous study ([@GZN076C2]). Briefly 100 molar excess of SMCC was added to 5 mg BSA (Thermo, New Zealand) in 0.1 M sodium phosphate/0.15 M sodium chloride buffer for 2 h mixing at room temperature. To remove excess linker, the mixture was desalted using a PD-10 column equilibrated with the same buffer containing 0.1 M EDTA. For conjugation, 1 mg of peptide was incubated with 1--2 mg of BSA-SMCC in the presence of 30% DMSO for 2 h. The final BSA-conjugated peptide was desalted using a PD-10 column into PBS and the concentration of conjugated peptides in column fractions was measured at 280 nm absorbance using a spectrophotometer. Phage ELISAs {#s2g} ------------ To analyse the binding of peptide phage clones, ELISAs were performed by coating a microtiter plate overnight at 4°C (Nunc, Maxisorp) with 100 µl/well of 10 µg/ml of purified rabbit or human polyclonal antibodies in PBS. Coated wells were washed twice with PBS and blocked with 10% blotto (milk powder/PBS) for 2 h. Phage dilutions (100 µl) were prepared in PBS, transferred in duplicate to the coated blocked wells and incubated for 1 h on a plate shaker. Wells were washed five times with PBS containing 0.1% Tween 20 (PBST) and 100 µl of anti-M13 antibody conjugated to horseradish peroxidase (HRP, Pharmacia) at 1/5000 dilution was added to each well. After 1 h incubation and washing as above, bound phages were detected with *o*-phenylenediamine substrate (Sigma). Antigen ELISAs {#s2h} -------------- ELISAs were performed using plates coated with crude EBV, p18, EBNA, EA-D and VCA antigens (pre-coated and blocked plates were kindly provided by PanBio Ltd, Brisbane). Serum dilutions were prepared in fish gelatin diluent \[2% fish gelatin (Sigma), 1% BSA (CSL), 1% Tween 20 in PBS\] and incubated for 1 h on the plate shaker. The plate was washed as above and sheep anti-human IgM-HRP (Chemicon) at 1/5000 dilution was added for 1 h. After a final wash step, binding was detected with 3,3′,5,5′-tetramethylbenzidine substrate (Sigma). BSA-conjugated peptide ELISAs {#s2i} ----------------------------- For ELISAs using peptides conjugated to BSA, a procedure similar to our previous study was used ([@GZN076C2]). One hundred microlitres per well of peptide conjugates were coupled to maxisorp plates at 5 µg/ml overnight at 4°C. Plates were washed three times with PBS and serum sample dilutions in PBST were added to the blocked wells in duplicate. The remainder of the procedure was performed as above, using goat anti-rabbit-HRP (Chemicon) at 1/2000 dilution. 
All ELISAs were performed in duplicate or triplicate and assays were repeated to establish reproducibility of results.

Results {#s3}
=======

Reactivity of rabbit anti-EBV antibodies {#s3a}
----------------------------------------

The rabbit immunised with an extract of EBV was shown to have a high titer of antibodies (\>1/1000) to p18, crude EBV and VCA antigens, as shown in Fig. [1](#GZN076F1){ref-type="fig"}A, when compared with the low reactivity of the pre-bleed fraction to these antigens. The IgG fraction of the antisera was affinity-purified using protein G resin and this preparation was used for immunopanning.

![Characterisation of EBV immune sera. (**A**) ELISA showing the antibody titer of rabbit serum to crude EBV, VCA and p18 antigens before and after immunisation. (**B**) Specificity of human serum that tested positive on a commercial EBV test and was affinity-purified using Protein G. ELISA showing reactivity of this human IgG to the EBV antigens crude EBV, VCA and p18 compared with wells coated with no antigen (no Ag).](gzn07601){#GZN076F1}

Characterisation of EBV human sera {#s3b}
----------------------------------

Human serum from an individual with a high titer of EBV IgG antibodies (OD units in ELISA \>2.0), confirmed using an EBV diagnostic kit (PanBio Ltd), was purified using protein G resin. The reactivity of the purified antibodies is shown in Fig. [1](#GZN076F1){ref-type="fig"}B. There was high binding to crude EBV and p18 antigens and slightly lower binding to VCA antigen, when compared with ELISA wells containing no antigen. This antibody preparation was used for immunopanning.

Selection of peptide mimotopes {#s3c}
------------------------------

Peptides mimicking epitopes of anti-EBV rabbit IgG and anti-EBV human IgG were isolated by screening a 20 amino acid random linear peptide library (AdLib 1) using multiple rounds of panning. For selection of phage binding to rabbit EBV IgG, an increased number of bound phage was detected after the second round of panning, with a further increase in round 3 and a plateau in binding in rounds 4, 5 and 6 (Fig. [2](#GZN076F2){ref-type="fig"}A). For immunopanning on human EBV IgG, enrichment in binding was observed after the fourth round of panning, indicated by an increase in binding to antigen, and this increased further in subsequent rounds 5 and 6 (Fig. [2](#GZN076F2){ref-type="fig"}B).

![Selection of phage clones that are recognised by (**A**) rabbit anti-EBV IgG (Rab IgG) and (**B**) human anti-EBV IgG (Hu IgG). The reactivity of selected phages from each round of panning (R) is shown by ELISA. Error bars indicate the ranges of individual values.](gzn07602){#GZN076F2}

Sequences of phage clones {#s3d}
-------------------------

DNA sequences of 10 clones from each round of panning with a high ELISA signal, i.e. rounds 4, 5 and 6, are summarised in Table [I](#GZN076TB1){ref-type="table"}. Twelve different sequences with high reactivity with the rabbit EBV IgG fraction were identified, whereas only one sequence from a total of 30 clones was isolated that showed specificity for the human EBV IgG preparation. No consensus sequence was observed; however, a small amount of homology for some of the sequences is shown in Table [I](#GZN076TB1){ref-type="table"}. For example, Eb10 and Eb11 shared a similar region 'D **F D** (**R/**F) K V' and Eb12 contained 'F D R' (in reverse orientation). This sequence of amino acids is contained within the same area of homology shown in bold type in the reverse orientation.
In addition, sequences Eb4 and Eb7 also had a small area of homology, 'S I K'. Furthermore, 4/12 sequences contained an 'F F' motif. ###### Peptide sequences of clones selected after 4, 5 or 6 rounds of panning on immune rabbit and human EBV IgG Amino acid sequence ---------------------------------------------------- -------------- D G P S Y H V A F K N S R G L R H S **H1**^a^ N G A L Y P [R **F F**]{.ul} P D Y S I L M F P I I **Eb1**^a^ D Q F A Q A Y R G D R [N **F F**]{.ul} N E L T S T **Eb2**^a^ R Q F S K F K D A S D R Y G N Y L [H **F F**]{.ul} **Eb3**^a^ S S S I K I W N K L G W N T V I A G T R **Eb4**^a^ F V N A F Q N A N F M R P R E L F A L A Eb5 S A N L [N **F F**]{.ul} S P D F G L Y T P N A S A Eb6 A I T C A H T L S I K S R R C Q Y V F K Eb7 A A S Y A S R T V G F A S V Y W F S R P Eb8 [R L R]{.ul} G D Y N V G P I R F G W P V A P N Eb9 M S [D F D R K V]{.ul} Y T F N F I T D P Q H L^b^ Eb10 G V T [D F D F K V]{.ul} F S S T F P K I F L S^b^ Eb11 T P N T V [R D F]{.ul} Y Y N V S L P S Y M L I^b^ Eb12 G G W Y S F D S P Y L M S I T E M [R L R]{.ul} **Gp125**^c^ Y T D S S M A V T L M K F A S N F L F **F1**^c^ Bold and underlined areas represent areas of homology. ^a^Indicates peptides selected for further study. ^b^Indicates binding to pre-immune sera. ^c^Indicates peptides Gp125 and F1 described previously ([@GZN076C2]). Characterisation of individual phage clones {#s3e} ------------------------------------------- Individual phage clones were analysed for reactivity with pre-immune and immune EBV rabbit IgG antibodies (Fig. [3](#GZN076F3){ref-type="fig"}). Only 3/12 phage clones were reactive with the pre-immune IgG indicating the remaining nine clones bind to EBV-specific antibodies in the immune sera. The two clones Eb10 and Eb11 described above with a similar area of homology 'D F D (R/F) K V' were both recognised by antibodies in the pre-bleed sample, Eb12 containing the sequence 'F D R' (in reverse orientation) was also recognised by the pre-immune sera, therefore, indicating these clones are representing non-EBV-specific epitopes. The four clones with the highest reactivity with rabbit EBV immune sera (Eb1--Eb4) were selected for further study. ![Reactivity of rabbit EBV IgG individual phage clones. Clones isolated from rounds 4--6 selected on rabbit EBV IgG were analysed for reactivity with immune and pre-immune IgG by ELISA. Clones Eb1--4 with the highest binding to immune serum were selected for further study.](gzn07603){#GZN076F3} The single clone selected from the random peptide library (H1) with high reactivity with the human IgG EBV preparation was shown to have low reactivity with a control IgG (Fig. [4](#GZN076F4){ref-type="fig"}A). This observation suggests EBV-specific antibodies are reactive with H1 phage. Importantly, antibodies in the serum of purified EBV-positive individual sera from four individuals were reactive with H1, indicating the specificity with an epitope common to antibodies present in each of these positive sera (Fig. [4](#GZN076F4){ref-type="fig"}B). ![Reactivity of isolated phage clone (H1) selected by panning on human EBV IgG with the same IgG and a non-specific human IgG preparation (**A**). H1 is recognised by four different affinity purified EBV-seropositive IgGs (**B**), indicating specificity for a common epitope typically present in individuals infected with EBV. 
Bars show the mean ELISA signal of duplicate wells and the bars indicate +/− errors.](gzn07604){#GZN076F4} Eb1--4 and H1 phage clones were not recognised by mAbs (gp125, F1, A2 and A3; data not shown) described in our previous study ([@GZN076C2]), suggesting that we have selected novel peptides that do not mimic similar epitopes of these antibodies and therefore should represent different epitopes that are perhaps specific to those induced during a natural EBV infection. Peptide-BSA conjugates as diagnostic antigens {#s3f} --------------------------------------------- To analyse the potential of the peptides to behave as antigen mimics, their ability to react with IgM antibodies from individuals infected with EBV was assessed. In our previous study, we demonstrated that the sensitivity of detection was greatly improved when the peptides were coupled to a carrier molecule such as BSA prior to immobilisation onto a solid surface ([@GZN076C2]). This strategy was adopted to test peptides Eb1--4 and H1. A set of 40 clinical samples that were classified as EBV seropositive (*n* = 16), seronegative (*n* = 16) or potentially cross-reactive sera (*n* = 8) were assessed for reactivity with Eb1--4 and H1 peptides individually. The cut-off level was defined as the mean optical density of the seronegative samples plus 3 standard deviations shown as a line on the graphs in Fig. [5](#GZN076F5){ref-type="fig"}. Readings above this level were defined as positive and below this level negative. The same set of samples were analysed on BSA alone and these values were subtracted from the peptide-BSA conjugate readings and the corrected absorbance readings were plotted individually for our new peptides Eb1--4 and H1 in Fig. [5](#GZN076F5){ref-type="fig"}. There was a clear difference in the detection of seropositive antibodies by all the peptides (Fig. [5](#GZN076F5){ref-type="fig"}A--E) compared with the analysis of BSA alone (Fig. [5](#GZN076F5){ref-type="fig"}F), with the majority of absorbance readings above the cut-off level. We compared the ability of our panel of peptide mimotopes to be recognised by antibodies in the same set of seropositive samples in Fig. [6](#GZN076F6){ref-type="fig"}A and the sensitivity of detection is shown in Fig. [6](#GZN076F6){ref-type="fig"}B. We also included F1 and Gp125 mimotopes specific for two mAbs in our previous study ([@GZN076C2]). Of the peptides identified from polyclonal sera Eb1, Gp125 and F1 had the highest sensitivity (94%). Slightly lower sensitivity was observed for Eb2, 3 and 4 (88%) and H1 peptide had the lowest sensitivity (81%) as summarised in Fig. [6](#GZN076F6){ref-type="fig"}B. The sensitivity of F1 and Gp125 was similar to that produced by the mimotopes selected in our previous study, 95% for F1 and 92% for Gp125. ![Evaluation of peptides Eb1--4 and H1 coupled to BSA as EBV diagnostic reagents. Human serum (*n* = 40) previously analysed using a diagnostic test for VCA IgM was allowed to react with the peptides and the bound IgM antibodies were detected using anti-human IgM HRP. The absorbance readings for 1 (positive), 2 (negative) and putative cross-reactive sera for 3 (Parvo), 4 (HSV), 5 (CMV) and 6 (RF) are plotted for **(A)** Eb1, (**B**) Eb2, (**C**) Eb3, (**D**) Eb4, (**E**) H1 and (**F**) BSA, respectively. 
The cut-off value is defined as the mean of the negative population +3SD, indicated by a solid horizontal line; since there were no false positives, the specificity for each mimotope was 100%.](gzn07605){#GZN076F5}

![Comparison of the reactivities of our panel of mimotopes Eb1--4, H1, F1 and Gp125 conjugated to BSA with EBV IgM-positive sera (*n* = 16); absorbance values are plotted and the cut-off levels are depicted by a horizontal line in (**A**). (**B**) Summary of the false-negative results from the 5/16 serum samples seropositive for IgM EBV and the overall sensitivity for each mimotope for diagnosis of EBV IgM antibodies.](gzn07606){#GZN076F6}

We also considered which seropositive EBV samples contained antibodies that did not recognise the panel of peptides, i.e. false-negative readings, listed in Fig. [6](#GZN076F6){ref-type="fig"}B. The antibodies in serum 1 (s1) were unreactive with all of the peptides identified in this study, s2 was not reactive with Eb3, Eb4 and H1, and s3 was unreactive with H1. Gp125 and F1, which were selected in our previous study, were recognised by s1, 2 and 3; however, two different serum samples (s4 and 5) did not recognise F1 or Gp125, respectively. This demonstrates that individual peptides are not recognised by all EBV antibodies and confirms that different peptides are required to represent different epitopes. Therefore, a combination of the Eb1, F1 and Gp125 peptides could be recognised by antibodies present in all of this set of EBV clinical samples, resulting in 100% sensitivity. For the samples defined as EBV-seronegative, there were no readings above the cut-off level and therefore no false positives, resulting in 100% specificity. In addition, there were no absorbance readings above the cut-off levels for the potentially cross-reactive serum samples, indicating that the peptides identified in this study have high specificity for EBV antibodies.

Discussion {#s4}
==========

We have developed a library screening approach to select peptides that can substitute for the cognate antigen in assays used in the diagnosis of acute EBV infection. EBV mimotopes were isolated from a phage-displayed peptide library by screening purified antibodies derived from polyclonal human sera and hyperimmune rabbit sera. This novel approach enables the simultaneous selection of peptides that mimic different epitopes with no prior knowledge of the native antigen and is therefore more rapid than selections using mAbs. The strategy to select disease-specific epitope mimics using immune sera has so far been largely unexplored, as many previous studies in this field have employed mAbs to neutralising or immunodominant epitopes. However, a few reports have described the use of patients\' sera to identify peptide mimics. Polyclonal sera have been used to select peptides for Lyme disease ([@GZN076C11]), Hepatitis C virus core protein ([@GZN076C13]) and Hepatitis A virus ([@GZN076C10]), and have led to the identification of disease-related peptides by screening large numbers of clinical sera from immune and non-immune individuals with no prior knowledge of the target antigen ([@GZN076C6]). More recently, phage display peptide libraries were used to validate a specific serological marker and identify the native antigen by screening antibodies purified from whole serum derived from prostate cancer patients ([@GZN076C12]). In this study, we have identified several peptides that can be used individually for detection of natural antibodies produced by patients recently infected with EBV.
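As a brief consistency check of the sensitivities and specificities reported above (our own arithmetic, using the panel sizes stated in the Results: 16 seropositive and 16 seronegative sera, with false-negative counts as listed in Fig. [6](#GZN076F6){ref-type="fig"}B):

$$\text{sensitivity} = \frac{TP}{TP + FN}, \qquad \text{specificity} = \frac{TN}{TN + FP}$$

$$\text{Eb1: } \frac{15}{16} \approx 94\%, \quad \text{Eb2--4: } \frac{14}{16} \approx 88\%, \quad \text{H1: } \frac{13}{16} \approx 81\%, \quad \text{specificity (no false positives): } \frac{16}{16} = 100\%$$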
All the peptides demonstrated high specificity (all 100%) and sensitivity (94%, 88%, 88% and 88% for Eb1--4, and 81% for H1). As we have noted previously, these peptides have no obvious homology to EBV antigens, and further studies should be carried out to identify their corresponding antigens. In addition, the peptides had low reactivity with negative and putative cross-reactive sera, indicating the high specificity of small peptides for serological diagnosis compared with a complex antigen, which contains many unwanted epitopes. When the peptides were used in combination, a greater sensitivity was observed (up to 100%), indicating this is a requirement for complete coverage of pathogen-specific antibodies in the sera of patients. The aim of this study was to select peptides that mimic the most abundant and/or immunodominant epitope present during a recent infection with EBV. We chose to purify an individual serum rather than a pool of high-titre patient samples; if a pool had been used, these epitopes may have been diluted, making it more difficult to isolate a peptide reactive with the antibody/s generated by this immunodominant epitope. We have shown here (Fig. [4](#GZN076F4){ref-type="fig"}) that the peptide mimotope H1 was recognised by antibodies in four different clinical samples, proving that antibodies derived from an individual seropositive serum can produce a pathogen-specific peptide mimic. In order to extend the coverage of diagnostically relevant epitopes, further selections could be carried out using a pool of high-titer seropositive EBV samples to decipher whether more peptides could be selected simultaneously, similarly to the data we have shown in this study using polyclonal sera from rabbits immunised with EBV. In conclusion, we describe in this study a panel of EBV peptide mimotopes that, when used in combination, are recognised by the whole repertoire of antibodies typically produced after acute infection with EBV. This methodology could be applied to many diseases and may provide novel reagents for diagnosis and prognosis of diseases and reveal information regarding unknown pathogenic agents.

Funding {#s5}
=======

This work was supported by the Cooperative Research Centre for Diagnostics (Australia).

The authors wish to thank Graham Street (Queensland Medical Laboratory) for providing EBV clinical samples and Rosella Masciantonio (La Trobe University, currently at Arana Ltd.) for technical assistance.

**Edited by Hans Christian Thogersen**
1. Field of the Invention

The invention relates to an apparatus for recording an electric signal on a magnetic record carrier in tracks which are inclined relative to the longitudinal direction of said record carrier, and to an apparatus for reproducing a signal recorded by means of such a recording apparatus.

2. Description of the Related Art

European Patent Application 210,773, corresponding to U.S. Pat. No. 4,757,392, discloses a recording apparatus as defined in the opening paragraph, comprising:

- an input terminal for receiving the electric signal;
- a signal separator, having an input coupled to the input terminal, for dividing the electric signal into consecutive blocks having a specific length of time, and for applying the consecutive blocks to a first and a second output in such a way that blocks having odd sequence numbers are applied to the first output and blocks having even sequence numbers are applied to the second output;
- a time-base correction circuit which is constructed to provide time compression or time expansion of the consecutive blocks, to delay blocks having odd sequence numbers relative to those having even sequence numbers, and to supply the two signals thus processed to a first and a second output, respectively;
- at least one pair of write heads having different azimuth angles and arranged on a rotatable head drum, one write head of a pair being arranged to be coupled to the first output of the time-base correction circuit and the other write head of the same pair being arranged to be coupled to the second output of the time-base correction circuit.

The two write heads of a pair of write heads are regularly spaced along the drum circumference. The known apparatus can also be used as a reproducing apparatus, in which case it comprises:

- at least one pair of read heads having different azimuth angles, and arranged on a rotatable head drum;
- a time-base correction circuit having a first and a second input arranged to be coupled, respectively, to one read head and to the other read head of the pair of read heads, the correction circuit being constructed to provide a time compression or time expansion of the signal blocks applied to the first and the second input respectively, and to delay the signal blocks applied to one input relative to those applied to the other input;
- a signal-combination unit having a first and a second input and an output, for combining the signal blocks applied to the first and the second input in order to restore the electric signal and for feeding the electric signal to the output, which output is coupled to an output terminal for supplying the electric signal.

The known apparatus has the disadvantage that during operation it produces a high acoustic noise level, that only very tight dimensional tolerances of the components are permissible, and that sometimes the reproduction quality is not satisfactory.

The invention aims at mitigating these drawbacks and therefore proposes an apparatus for recording an electric signal which is characterized in that the write heads of a pair of write heads are arranged close to each other and have a mechanically rigid coupling to each other, and in that the time-base correction circuit is adapted to provide a time expansion or time compression of the signal blocks by a factor of α·n/(180·(M+1)), where α
is the wrapping angle of the record carrier around the head drum and differs from 180°, n is the number of head pairs, and M is the number of times within a specific time interval that a head pair which comes in contact with the record carrier during said time interval does not record a signal in the record carrier, this time interval being defined by those instants at which two consecutive track pairs are recorded by one or two head pairs.

The apparatus for reproducing an electric signal is characterized in that the read heads of one pair of read heads are arranged close to each other and have a mechanically rigid coupling to each other, and in that the time-base correction circuit is adapted to provide a time compression or time expansion of the signal blocks applied to the first and the second input by a factor of 180·(M+1)/(α·n), where α is the wrapping angle of the record carrier around the head drum and differs from 180°, n is the number of head pairs and M is the number of times within a specific time interval that a head pair which comes in contact with the record carrier during said time interval does not read a signal from the record carrier, the two track pairs being read consecutively by one or two head pairs.

As the heads of one pair of read or write heads are arranged at one location on the head drum, these heads hit against the record carrier only once every revolution of the head drum during recording and reproduction, which enables the acoustic noise level to be reduced. In the case of head-level variations which manifest themselves similarly during every revolution of the head drum, for example as a result of wobbling or hunting of the head drum, the tracks written by the two heads of one pair of heads are still recorded parallel to each other on the record carrier. The tracks may then be warped. However, if the apparatus comprises positioning means for positioning the head pair in a direction transverse to the track, said tracks can still be read correctly. Moreover, if the positioning means are constructed as a dynamic tracking system, only one actuator is needed, on which the pair of heads is arranged.

In the known apparatus, in which the heads of a pair of write or read heads are spaced at 180° from each other along the head drum circumference and which comprises a dynamic tracking system, "false lock" may occur. This means that tracks are sometimes recorded or read in a wrong sequence. This is because the heads are not disposed at the same level relative to each other. Said false-lock problem does not occur in the apparatus in accordance with the invention because the heads of a pair of write or read heads are now arranged close to each other and have a rigid mechanical coupling to each other, said heads being arranged, for example, on one actuator. Consequently, a control system for maintaining the heads at the same level, as is required in the known apparatus, is not needed now.

In a first embodiment n=1 and M=0. This means that one head pair is arranged on the head drum and records or reads one track pair during every revolution of the head drum. If the wrapping angle is smaller than 180°, the correction circuit in the recording apparatus should provide a time compression of the signal by a factor of α/180 and, consequently, the correction circuit in the reproducing apparatus should provide a time expansion of the signal by a factor of 180/α. If the wrapping angle is larger than 180°,
the correction circuit in the recording apparatus should provide a time expansion of the signal and the correction circuit in the reproducing apparatus should provide a time compression of the signal.

In another embodiment n=1 and M=1. This means that only one head pair is arranged on the head drum. This head pair writes or reads one track pair during every two revolutions of the head drum. During recording the signal should then be time-compressed by a factor of α/360 and during reproduction the signal must be time-expanded by a factor of 360/α.

The apparatus may be characterized further in that it comprises a second pair of write or read heads having different azimuth angles and arranged on the rotatable head drum, said second pair of heads being arranged close to each other and having a mechanically rigid coupling to each other. An apparatus comprising two or more head pairs which are equidistantly spaced along the circumference enables information to be recorded or read at different tape speeds but with a constant speed of rotation of the head drum. An apparatus comprising two head pairs which are spaced 180° apart on the head drum can record at the "normal" tape speed and at a tape speed which is twice as high. In the first case, only one pair of heads records adjoining tracks on the record carrier. For the parameters n and M this means that n=2 and M=1. At a tape speed which is twice as high, a spacing will be obtained between two tracks recorded directly after each other by one pair of heads, in which spacing the other pair of heads can record exactly one track. At this speed both pairs of heads consequently record tracks on the record carrier. For the parameters n and M, this means that n=2 and M=0.

Obviously, the same applies during reproduction. If during recording the tracks have been recorded at the normal tape speed, only one pair of read heads is used for reading the information during reproduction. If during recording the tracks have been recorded at twice the normal tape speed, both pairs of read heads are used during reproduction, the tape speed being equal to that during recording. The information recorded by the second pair of write heads may comprise additional information, for example the finest detail of an encoded video picture or an entirely different signal, or it enables a higher-resolution standard to be adopted.

Recording or reading information at different tape speeds with a constant speed of rotation of the head drum is also possible in the case of more than two pairs of write or read heads. For example, in the case of three pairs of heads, writing and reading at the normal tape speed is effected with one pair of write or read heads (n=3 and M=2), while at three times the speed all three head pairs are operative (n=3 and M=0).

It is to be noted that recording and reproducing a digital video signal by means of at least one pair of heads which are arranged close to each other and which have a mechanically rigid coupling with each other is known and is described in the publication "An experimental digital video recording system" by Driessen et al in IEEE Trans. on CE, Vol. CE-32, No. 3, August 1986, pp. 362-70. However, in this system the video signal is not time-compressed or expanded. Neither is one signal component delayed in time relative to the other.
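As a quick numerical illustration of these factors (our own worked example; the wrapping angle chosen here is hypothetical):

$$f_{\mathrm{rec}} = \frac{\alpha\,n}{180\,(M+1)}, \qquad f_{\mathrm{rep}} = \frac{180\,(M+1)}{\alpha\,n}$$

For a single head pair (n = 1) and a wrapping angle of, say, α = 270°: with M = 0 the recording factor is 270/180 = 1.5, a time expansion, since α exceeds 180°, and reproduction uses the reciprocal 180/270 = 2/3; with M = 1 (one track pair per two drum revolutions) the recording factor is 270/360 = 0.75, a time compression, matching the α/360 figure given above.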
Hotel Highlights

Michelin-starred dining
As captured by Cezanne and Van Gogh
The slickest service

Overview

Best known for its eponymous two-Michelin-starred restaurant, Baumanière hotel will have you salivating over postcard-worthy panoramas of Provence, too. Set among villages, olive groves and vineyards where Cézanne and Van Gogh found inspiration, the rooms of this historic hillside home are airy and light, seamlessly synthesizing the surround-sound extras of 21st-century living with the hotel's 17th-century frame and handmade furniture.

Smith Extra

Here's what you get for booking Baumanière with us: A recipe book by the Michelin-starred owner and chef Jean-André Charial

Need To Know

Rates: Double rooms from $203.27 (€182), excluding tax at 10 per cent. Please note the hotel charges an additional local city tax of €1.50 per person per night on check-out. Rates exclude breakfast (€26 for Continental).

At the hotel: TV, wireless Internet access. In-room massages can be arranged.

Our favourite rooms: The junior suites at the front of the house enjoy views over the terrace and the swimming pool.

Poolside: There's an outdoor pool in front of the main terrace.

Also: There is a tennis court, and guests have access to the grounds of the whole estate.

Children: Welcome. Under-12s stay free. An extra bed in the apartments for older children costs €30.

Food & Drink

Hotel Restaurant: With two Michelin stars, the restaurant at Baumanière is world-renowned. The kitchen is headed by two brothers, Sylvestre and Michael Wahid – the former overseeing the delectable mains and the latter creating the lavish desserts – a dynamic duo whose fantastic à la carte features caviar, foie gras, Grand Marnier-soaked crêpes and a dazzling array of premiers grands crus. Lunch is 12pm–2pm; dinner 7.30pm–9.30pm.

Hotel Bar: Drinks are available from the restaurant until midnight. Try the estate's own wine, L'Affectif.

Room service: A room-service menu is available 12 noon to 3pm and 7pm–10pm.

Smith Insider

Dress code: Refined sophistication.

Top table: Near the window overlooking the terrace and garden.

Local Guide

Local restaurants: La Cabro d'Or is also located on the Baumanière estate (+33 (0)4 90 54 33 21), with dining on the wonderful patio under the trees during the warmer months. Most ingredients among the fish, shellfish and meat dishes are sourced locally, including vegetables and olive oil from the estate. Nearby in Eygalières, Chez Bru aka Le Bistrot d'Eygalières (+33 (0)4 90 90 60 34) is another Michelin-starred honeypot for gourmets. In Maussane-les-Alpilles, La Place on Avenue Vallée des Baux (+33 (0)4 90 54 23 31) is an excellent bistrot run by the same people as Baumanière, and does wonderful Alpilles lamb dishes.

Local bars: For lighter snacks and drinking, head to the alleyways around the Château des Baux in nearby Les Baux de Provence and see which of the little bars and cafés takes your fancy. Many have terraces with wonderful views over the surrounding countryside.

Baumanière

Planes: The closest airports are Avignon (35km) and Nîmes (40km), but Marseille offers the most choice and is only 60km away. The concierge can arrange a taxi.

Trains: The closest TGV station is in Avignon, 25km from the hotel. If you arrive by train, the concierge can organise a taxi for the next part of your journey; there is also an Avis office at the station.
Automobiles
From the north, take the Avignon Sud exit of the A7 toward Noves and then Saint Rémy de Provence, followed by Les Baux de Provence. From the south-east or south-west, take the Saint-Martin-de-Crau exit of the A54 in the direction of Maussane-les-Alpilles and then Les Baux de Provence. The hotel is below this hilltop plateau village. Avignon is 35 minutes away by car. There's covered valet parking at the hotel.

Reviews

Baumanière: anonymous review by Nick Moran, visa-collector and author

With a song in my heart, I met Mrs Smith at the airport and off we soared to Marseille. In my pocket were five lines of spider-scrawl directions I'd taken down on the phone to the hotel that early and bleary morning. The weather was spectacular, so at the airport we coughed up the extra few euros and upgraded from a rental car that looked like it should have been reserved for Noddy and Big Ears to a sporty convertible Mégane.

Now, be warned: Baumanière is hidden away in an obscure part of southwest Provence. It's a good hour's drive from Marseille if you know where you're going, and an infinite puzzle if you don't. So, lost and tetchy, somewhere between Nice and Barcelona, we bought a map ('la carte' en Français – something I discovered it was worth knowing). Having thrown away my smudged scribblings and given Mrs Smith a speed lesson in navigating, I made steady progress towards Baumanière.

When you peel off the highway, the world changes. The drive takes you between Salon and Arles, and every town looks like it's out of one of the Stella Artois idents that pop up during films on Channel 4. We resisted the temptation to join in a communal summer party at a village along the way, and continued towards our destination, the hot, moist air making a giant herbal humidor of olive, garlic and rosemary. We wound our way up a small mountain, crossing the castle ruins of Château des Baux. Rolling down to our final target was like descending into Shangri-La, but without those annoying singing children.

I find carparks give a fair indication of a hotel's calibre, and Baumanière's screamed 'understated' and 'high-end'. The pristine gravel path crunched beneath our tyres as we slid in between our car's rich relations. There is a huge emblem projected onto the rocks above Baumanière that looks Egyptian or Masonic. The light flattered the exquisite architecture of this 30-roomed hotel and the majestic cliffs behind. It might just as well have read 'class'.

A garçon appeared from nowhere, welcoming us by name. This exquisite cordiality continued into the tiny high-polished hardwood and limestone check-in, and as we travelled in the crocodile-skin lift up the single floor to our bedroom.

Our room looked just superb: it gave us a warm glow that lingered for days. Spacious, lofty and slightly asymmetrical, it matched 17th-century origins with modern-day surround-sound extras. The creative lighting design supplied switches ready to match any mood. Handmade wood-block furniture and a ten-foot satin chaise longue were pure design-museum pieces. The dull stuff (minibar, safe) was hidden behind a false wall.
French doors opened out on to a view of the verdant grounds, then olive groves and vineyards. The bathroom held its own, too. I spent a foolish few seconds pressing a jade pebble on the wall in an effort to turn the lights on, only to discover that the walls were embossed with seashells and stones. The bath was bigger than the car we nearly hired.

A patio in front, protected from any light drizzle by a canopy of fig trees, is where Baumanière guests eat some of the best food in France. A glance at the prices might shave the edges off your appetite: we opted for just the one course, while I kept an eye on everything served around me as it either burst into flames or was cut from its bone with the hiss of Sabatier. The wine list arrived, the size of a pantomime fairy-tale book, and after struggling like a nine-year-old with the Sunday papers, I let our waiter select something with the decimal point nearer the front end of the price. The chef meekly approached us and asked our opinion; I told him it was excellent, and with reassured strides he bowled back to a hot kitchen. We ended on a shared crêpe Suzette and some crystallised local fruits, and retired to bed trying to pretend this was the sort of place we come to all the time.

The next day we took a little sun in the small but beautiful grounds and a dip in the icy-cold Twenties pool, then walked up to the castle carved into the mountain. The Château des Baux is touristy without being tacky; the ancient alleyways are lined with shops full of local products, and the bars and cafés are cheap and friendly. A steep walk up to the remains of the fortifications rewarded us with a wondrous view; I wondered whether it was that great artists were drawn here or if reasonable artists were just blessed with great things to paint. Down the hill, on the way back to the hotel, we came across Cathédrale d'Images, a huge cave that hosts sound-and-light shows, with locally inspired masterpieces projected onto its walls.

As our farewell to this fine land, we took the car on a burn around some of the local towns; then, after getting utterly lost for a final time, we headed back for a few nightcaps at the hotel. Now I can safely say I know exactly where in the South of France Baumanière is. It's en Provence, in the village of Exquisite, near Perfection, just above Timeless, in the state of Class.
Ten Top Earnings-Call Questions

Shocked by the suddenness and depth of the economy's free-fall, many companies are choosing to provide less guidance on future financial performance in their earnings releases and conference calls, investor relations experts say. Some companies that had been giving quarterly guidance are shifting to annual guidance, others that offered quarterly guidance a year out are limiting their forecasts to the next quarter, and some are withdrawing guidance altogether, according to IR consultants.

An analysis of second-quarter earnings releases by the Corporate Executive Board's Investor Relations Roundtable showed that while 64 percent of S&P 500 companies provided guidance, only a third of those were doing so on a quarterly basis. Data on changes to guidance following the third quarter, after the autumn financial meltdown had begun, couldn't be obtained at press time, but the trend is nonetheless clear.

Simply put, many companies don't have the wherewithal right now to make meaningful earnings or top-line projections. "This has hit them very hard and very fast, and they're just in the beginning stages of trying to understand how it's going to affect their customer, market, and business," said Elizabeth Saunders, head of the capital markets group at FD Ashton Partners, a business communications consulting firm. "Most of [the financial meltdown] happened in a four-week period, and few companies are nimble enough to figure out right away that, for example, demand for product is going to drop 18 percent in Japan."

To be sure, there's significant risk in eliminating or watering down guidance – namely, that analysts' estimates may become more varied, which in turn may cause extra volatility in share prices. But Saunders, who helps clients with their earnings releases and call scripts, said that if a company can't see into the future as well as the Street expects, it should put guidance on hold.

If CFOs don't give detail on future performance during an earnings call, what should they talk about instead? The best thing, if it's justifiable, is to portray the company's risk profile in a positive light compared to its competitors. "You've got an incredibly skittish buy-side community out there," said Saunders. "They're trying to figure out how to reallocate stocks in their portfolio and probably thinking that top-line growth is somewhat less important than lower risk." So the CFO should explain, say, that the company doesn't need access to the credit markets because it's in cash-generation mode, has no significant capital expenditures to make, or has a credit facility that isn't slated to be renegotiated for two years. Such things aren't normally brought up in earnings calls – but times are hardly normal.

Another possible area to highlight is any expectation that the poor economy actually might produce new customers. In a recession consumers will be "trading down" – shifting to lower-end products – so it's a good idea to point out pricier competitors. "Articulate where you expect to gain ground in tough times, because everyone knows you're going to lose ground in some segments of your market," advised Cameron Doolittle, senior director of the Finance Leadership Exchange practice at the Corporate Executive Board, which provides research and other support for corporate executives.
Above all, the experts counseled, don't go into an earnings call armed simply with an updated version of the previous quarter's script. Analysts and investors will want to know what you're doing differently in the face of the financial crisis, particularly with regard to strategic planning. "This is something new – nobody ever cared about the strategic planning process before, except the companies," said Saunders. "Now portfolio managers are asking whether you're working on a 2009 plan that reflects what you saw in the business three months ago, or are relooking at the plan in light of what's going on now."

Analysts too are trying to verify that "you're not just trying to stamp out the same formula and hoping for adequate results during the tough time," said Doolittle. "Before, analysts were just trying to fill in items on their spreadsheets, like what your tax rate was. Now they want to figure out what about a company is different, or what it might be doing to position itself to thrive in this market rather than just survive."

And they're not necessarily asking nicely. "We're seeing analysts asking more probing questions in a more adversarial posture about what's going on in some of the line items. The intensity and ferocity of the questions is something we haven't seen for a while," Doolittle noted. Not only do they want to know about what short-term assets you might be able to unload to meet your cash obligations, he said, they're also digging deeper into long-term investments, asking how accessible those are.

In fact, it behooves CFOs to boost their level of preparation across the board. In an analysis of earnings calls between September 23 and October 16, Doolittle's group identified 10 categories of questions that came up again and again, with analysts and investors seeking very detailed responses:

• Impact of market slowdown. A typical question: "Are you seeing a dramatic deterioration in export conditions in your conversations with manufacturing clients since the end of the third quarter?"
• Industry-specific environment. "Is the decline evenly split between sectors, or is it one of them that pulls everything else down?"
• Short-term growth plans. "In terms of the magnitude of these investments, will you be maintaining this level throughout the year?"
• Currency fluctuations. "What percentage of the year-over-year increase is attributable to the weaker U.S. dollar?"
• Market liquidity and leasing. "Do you see any threats to ongoing purchases of capital equipment, given the economic environment and the tightening of the credit and lease financing markets?"
• Loans. "Gross impaired loans increased in the quarter. Can we expect relatively full recovery or resolution of these?"
• Supply-and-demand environment. "Are you seeing major slowdowns taking place, maybe with retailers pushing back some orders or putting projects on hold?"
• Customer credit. "Are you considering any programs to help customers with long-term payback to the company instead of them having to achieve financing on their own?"
• Product pricing structure. "How did this pricing increase come about late in the year, what was retailers' reaction to it, and how is it going in terms of implementation?"
• Sales growth expectations. "Is it fair to assume that your Q4 factory sales growth rate year-over-year will pick up, so that this will be the lowest sales growth for a while?"
Hearing instruments for the elderly hearing impaired: a comparison of in-the-canal and behind-the-ear hearing instruments in first-time users. The purpose of this investigation was to compare negative and positive experiences between two matched groups of elderly first-time users of hearing instruments (HI). One group had been supplied with behind-the-ear hearing instruments (BTE), the other with in-the-canal hearing instruments (ITC). There were 20 persons in each group. All were visited in their homes. Those who needed extra help were offered follow-up at the Hearing Centre. ITC were found to be superior to BTE as regards time of use, operational difficulties and undesirable sound experiences. ITC were also used in more difficult listening situations. Successful instruction and follow-up were more easily achieved with ITC users than with BTE users. ITC are recommended as the preferred instrument for elderly first-time HI users, at least for hearing losses not exceeding 60 dB PTA, provided the subject's dexterity and anatomical conditions permit fitting of ITC.
Tuesday, October 16, 2007

Culturalism: Racism of the 21st Century

Dear Korean,
At the time of the 1992 Watts riots, I heard a commentator on NPR say that one source of tension between Korean shopkeepers and blacks was that in Korean culture, a shopkeeper isn't expected to be chatty with customers. Is there any truth in this?
Andrew B.

Dear Andrew,

No, there is zero truth to it. Korean shopkeepers are not different from any other shopkeepers in the world. If anything, they tend to be friendly with the neighbors so that they can boost sales. The Mama Hong case in Los Angeles that the Korean wrote about earlier would be a good example. (It's towards the end of the post.)

But the Korean wants to point out a larger problem suggested by your question: the impulse to explain minority people's behavior with a "cultural difference", real or imagined. For the sake of convenience, let's call this "culturalism".

Culturalism started in a benign way. It started as "multiculturalism", in which people are supposed to understand and accept the differences of other people from a different culture. For example, a multiculturalist would not recoil at the fact that Korean people eat rancid kimchi. Instead, a multiculturalist would ask and learn about the history and the significance of the food in Korean culture. A multiculturalist would make the link between kimchi and other rancid, fermented food that she loves, such as cheese. She might even try it out herself, suppressing the gag reflex.

What does a culturalist do in contrast? He sees that strange-looking people are eating strange-smelling food, and thinks to himself, "Well, that's odd, but I guess it's their culture," and walks away without doing more. Essentially, culturalism is a lazy multiculturalism; culturalism sees the cultural difference, and stops there. (For a typical culturalist attitude, see this post.)

The "acceptance" in multiculturalism comes from the fact that the more you learn about a different culture, the more you realize that it is not too different from your own after all. A friend of the Korean, after having lived many years in Japan, said this: "Japan is exactly like America, except just the opposite. If you understand that, you understood Japanese culture." The Korean could not agree more.

It seems like there is "acceptance" in culturalism as well, but it's more like neglect. Instead of understanding the fundamental similarities between a different culture and one's own, culturalism simply throws on the label of "cultural difference" – the label might as well be "exotic", "mystical" or "incomprehensible" – and calls it a day.

Culturalism is at least better than some alternatives. In Europe, people want immigrants to entirely lose any hint of their home country and essentially become 100 percent French or Italian, only with different skin tones. (If you'd like, refer to this as "assimilationism".) No foreign food, no foreign garb, and definitely no foreign language. Some lawmakers in France, for example, tried to require Muslim girls to take off their headscarf when they attend public school, because the hijabs were un-French. Compared to that, culturalism at least leaves the minority people alone.

But culturalism is dangerous, in the exact same way racism is dangerous. Both culturalism and racism look at a single characteristic of an individual or a group, and purport to know something about that individual or group. That knowledge, of course, is either false, misleading, or unrepresentative.
(In fact, because discussing race in America became such a stroll-through-a-minefield-leading-to-easy-social-suicide, "culture" became the new code word for talking about race. There is no more discussion about "what black people do." Instead, the discussion starts with "In urban culture" or "In hip-hop culture".)

The most fundamental danger of culturalism should be plain: it continues ignorance under the guise of tolerance. This is exactly how Asian Americans continue to feel that they are forever foreign in the only country that they have known and lived in. The moment a culturalist senses that he is speaking to a person from a different culture, the culturalist simply stops trying to understand that person, because the "cultural difference" leads to a dead end in understanding. The "shutting down" from the culturalist is what makes Asian Americans feel foreign – all of a sudden, the common ground between the two has disappeared.

Another danger of culturalism is that the "culture" that culturalists have in mind may be completely distorted. This is because culturalists often rely on one or two minority persons' word for what minority culture is. But often the minority people themselves do not know the full extent of their own culture. The Korean has seen many cases of the following: a second-generation Korean American, who grew up in a small town with no other Koreans and very few Asians, attributes every quirk and oddity of her parents to Korean culture. Invariably, such a person's perception of Korean culture is completely distorted, because she is unable to sort out what is attributable to Korean culture, and what is attributable to her own parents' personalities. (See this post for an example.) So any non-minority person hearing about a different culture from a minority who doesn't have the full grasp of his own culture will end up with the same distorted view of that culture. The trouble gets worse because there is no good way to verify even the strangest cultural differences, since minorities are by definition not too many, so asking another minority is difficult. (And that's the reason why the Korean started this blog.)

A related problem is that a culture has many different aspects, often self-conflicting. Furthermore, in the case of a conflict, a culturalist simply chooses the most foreign aspect and writes it off as "cultural difference", without trying to understand the aspect and make it un-foreign. For example, who defines black culture – the articulateness and strength of Colin Powell or Condoleezza Rice, or thugged-out, pimp-smacking Tupac or 50 Cent? The Korean doesn't know, but he knows this much: when most people talk about "black culture", they sure as hell are not talking about being articulate.

Lastly, culturalism is harmful for minorities themselves, because it gives them an excuse to cover up their own shortcomings. Why can rappers go on calling women bitches and hos? Because it's the hip-hop culture! Korean shopkeepers in 1992 were not in tension with black folks because their culture made them do so; they were because they were racists and they hated black people. But hey, Koreans could make up some shit about cultural differences, and dumb white people would buy it, just like they buy an overpriced dish at an exotic Korean restaurant that tastes like vomit.

Managing this blog has been a daily struggle against culturalism. Every day, the Korean's inbox is flooded with people who ask typically culturalist questions.
What in Korean culture makes my co-worker rude? What is it about Korean culture that makes my boyfriend act in a certain way? Please, stop and think for yourselves for a change. Stop looking for a quick "cultural" answer so that you can write the question off without getting the right answer. Realize that we are all humans, and in the end, we are all the same.

17 comments:

Best post yet, in my humble opinion. You have articulated many things that I have been unable to frame.

I started reading this blog because I'm a Brit working in Korea, but am delighted to find many wider issues addressed here. I don't usually leave comments on random blogs, but is William a troll or what? Answers being simple doesn't necessarily make them false. Also, attributing everything that a person does to culture would surely negate the entire field of psychology, wouldn't it? And probably economics as well. And the other social sciences, which posit that there are fundamental patterns of behaviour that humans follow no matter their race or culture. Anyway, thank you for such a wonderful post, Mr. Korean; I've been in and out of your blog for a while and it's been a pleasure to read.

"You have clearly and vastly underestimated the power of the values, beliefs and assumptions that make up a person's world view – all of which find their being in a person's culture (and lie at the core of a person so deeply that they are not even aware of them!)" Academics have been pushing this poppycock for years. It doesn't trump our common humanity. The Korean for the win!

The simplest answer is that Koreans tend to look down on African-Americans, or blacks in general; I'm not going to deny this, 'coz I'm Korean too. Koreans aren't the only group of people that look down on blacks or express racist attitudes towards blacks; of course there are many open-minded Koreans out there too, and they tend to be more reasonable and unbiased towards blacks, but mostly Koreans dislike blacks, and this is pretty much the same for other people of East Asian descent. If blacks want to get respect from others, then they must work harder.

I disagree. I don't believe that a person's race should determine how hard they have to work for respect. People's race does not determine whether they deserve respect; their actions do. Deciding that every black person you meet doesn't deserve your respect because of the actions of other black people is ridiculous. It also suggests that you believe all Korean people should be respected regardless of their actions, simply for being Korean. I believe all people deserve to be respected, and I choose to respect people for their individual accomplishments in life. Your argument is a cop-out. It is you simply refusing to change your way of thinking. You don't want to look past a person's race, so you suggest the race itself try harder to win approval. Instead, perhaps you could try harder to give people the respect they deserve. The problem is that people focus too much on race. Your race does not define you. It doesn't control your choices or make you any better or worse than anyone else. Trust me, if you saw me you wouldn't be able to judge me based on my race. So how would you know whether to respect me or not? I'm sorry to say you don't sound like one of those "open-minded Koreans", and that's a shame.

To William (if you're still here): I'm sure it wasn't meant to be taken so literally when the OP said that "we are all the same". It's just his conversational style of writing.
(I read that phrase as meaning "we're not as different as it might seem".) Also, how one's worldview is shaped by cultural factors is a different issue from the one the OP is pointing out here, i.e. that many people have a tendency to exoticise, or if you prefer "other", minority cultures, sometimes even their own, like the Korean restaurant that takes advantage of those they see as "dumb white people". And it seems pretty obvious to me that it isn't the OP himself who sees white people as dumb.

Best post in my opinion. Korean, you said it all. I'm from a multicultural country with a racist past, the White Australia policy. Thankfully times have changed, and my Chinese wife (I'm Anglo-Australian) has never complained of racism or discrimination of any form. Unfortunately that wasn't my experience in Korea, where I taught English for five years after graduating in anthropology. I learned hangul quite fluently, but I was still subjected to varying degrees of racism every day while living in Korea. This saddened me, as I was being judged purely on my DNA, which I didn't choose, not the content of my character. I understand that Korea has endured a difficult history and has been insular for centuries. However, I wonder if the Korean advocates a major change in governmental and educational policy which might promote tolerance and the revolutionary concept that Koreans are not the superior, pure and chosen race spawned by Tangun. I wish more Koreans in Korea would follow the sentiments of the Korean. However, the prevailing values of the Korean are American, not intrinsically Korean. Sadly.

Hi! I just wanted to comment on the "French" part of this post (being French myself, he!).

"Some lawmakers in France, for example, tried to require Muslim girls to take off their headscarf when they attend public school, because the hijabs were un-French."

I'll answer based on those three phrases: "tried", "Muslim girls", "un-French".

1. "Tried": it has become a law (loi n°2004-228 du 15 mars 2004).
2. "Muslim girls": that law is as much about Muslim girls as it is about Muslim boys, Jewish, Sikh, Catholic or whatever-religion-they-have people.
3. "Un-French": that law is NOT about nationality; it is about "conspicuous religious signs or clothing" – hijabs, kippas, Sikh turbans and Christian crosses altogether.

The law forbids any type of obvious, conspicuous religious sign at public schools, no more, no less. The French, and the French government, care a lot about secularism, and that's what this law is about. Secularism in France goes all the way back to the French Revolution and is one of the "great" principles of La République. With the educational system being one of the institutions of said Republic, you'd expect that principle to be followed in schools. I'm actually surprised the French government didn't make a fuss about it before 2004! I'm not gonna say it has nothing to do at all with the hijab: that law is the result of a series of incidents involving hijab-wearing students, those incidents themselves being the result of the complicated history between the French government and immigration, which results from... Hum, anyway. It was more about "how to bring more secularism to school to prevent those incidents". Adding non-Christian festivals like Yom Kippur and Aïd el-Kébir to the school holidays was even considered. The law was well received by pretty much everyone, Muslims included (except of course that loud minority).
Those who made the most fuss about it were in foreign countries, where it was most likely reported wrongly. As far as French people go, it was deemed reasonable enough, if unnecessary. Did you maybe get confused and mean to talk about the law forbidding the integral veil in public places? (Edit: no, you didn't; it was voted in 2010.) Now THAT's a ridiculous, useless (it's a mere 2,000 women we're speaking about!) and shameful law. And a cowardly one on top, disguising itself under "no hiding one's face". It caused a violent uproar, and it was (and still is) deemed, with reason, discriminatory, wrongful, dangerous and liberty-killing. I still can't believe the government went with it and voted it anyway. And I can't believe how much I wrote, when the post is not about France and the French. (...self-centered much?) But still, I believe assimilationism and the law on secularism in school to be two different things. And I'm sorry about any spelling or grammar mistakes I made; feel free to correct them if you want to.

First, the hardest thing to break is the bad image that black people are given. OK, I speak for myself, because I'm really a kind of black person who is harder to find, or whom you never thought existed, because I love K-pop, manga and that Asian stuff, and even a lot of black people (under the "hip-hop" influence) say that I'm weird or that this kind of music is not for me. So sometimes I take the time to watch carefully the other black people around me:
- A thing that everyone knows is that black people under the "hip-hop" influence really don't care about Asian people. We can say that Korean people are even more foreign to black people, because for them Korea doesn't even exist: they (not only them, but a majority of people) think that all Asian people come from China.
- It means that Korean people are not really open to cultural difference, but also that black people are not really open to knowing about Asian culture (a lot of other people aren't either). I know my goal is not to get Korean friends, but I think I must stay away from them, because it's even more strange to be someone who is not from Korea who loves their stuff (I don't mean that if you're not from Korea you can't love their music, movies and all that; but for white people it's OK, while for black people it's too strange for a Korean).
But what is annoying is that people like me are always associated with the hip-hop lovers and live with the judgment of being a bad person, or with that "culturalism"; and in my school the best way to get popular is to turn to that hip-hop influence, and it seems that almost everyone around me did, so I have like a forever-alone feeling.......

A land mine you should be aware of about discussing race in the USA is that not a few African-Americans take offense at being described as "articulate". In any other context anyone would take it as a compliment, but there's a history of the word being used to damn with faint praise. I don't know the politically correct way to talk about it when an African-American has exceptional verbal skills. I don't know if it's the same phenomenon, but it may be the same kind of reaction as some Asian-looking people have to "You speak English so well!". To leave nothing unsaid, everything I've seen from you says you're an honest believer in treating human beings without prejudice.

I have to quibble with one detail about this post: that Korean shopowners had tensions with black people because the shopowners were racist and hated black people.
Sure, there was racial animus on both sides (and ample room for misunderstanding due to language/cultural barriers), but one big issue that isn't talked about is the fact that Koreans were playing the classic "middleman minority" role.

About TK
The Korean is a Korean American living in Washington D.C. / Northern Virginia. He lived in Seoul until he was 16, then moved to the Los Angeles area. The Korean refers to himself in the third person because he thinks it sounds cool.
Population pharmacokinetics and Bayesian estimation of mycophenolic acid concentrations in stable renal transplant patients. Therapeutic drug monitoring of mycophenolic acid (MPA) may minimise the risk of acute rejection after transplantation. Area under the curve (AUC) rather than trough concentration-based monitoring is recommended and models for AUC estimation are needed. To develop a population pharmacokinetic model suitable for Bayesian estimation of individual AUC in stable renal transplant patients. The population pharmacokinetics of MPA were studied using nonlinear mixed effects modelling (NONMEM) in 60 patients (index group) receiving MPA on a twice-daily basis. Ten blood samples were collected at fixed timepoints from ten patients and four blood samples were collected at sparse timepoints from 50 patients. Bayesian estimation of individual AUC was made on the basis of three blood concentration measurements and covariates. The predictive performances of the Bayesian procedure were evaluated in an independent group of patients (test group) comprising ten subjects in whom ten blood samples were collected at fixed timepoints. A two-compartment model with zero-order absorption best fitted the data. Covariate analysis showed that bodyweight was positively correlated with oral clearance. However, the weak magnitude of the reduction in variability (from 34.8 to 28.2%) indicates that administration on a per kilogram basis would be of limited value in decreasing interindividual variability in MPA exposure. Bayesian estimation of pharmacokinetic parameters using samples drawn at 20 minutes and 1 and 3 hours enabled estimation of individual AUC with satisfactory accuracy (bias 7.7%, range of prediction errors 0.43-15.1%) and precision (root mean squared error 12.4%) as compared with the reference value obtained using the trapezoidal method. This paper reports for the first time population pharmacokinetic data for MPA in stable renal transplant patients, and shows that Bayesian estimation can allow accurate prediction of AUC with only three samples. This method provides a tool for therapeutic drug monitoring of MPA or for concentration-effect studies. Its application to MPA monitoring in the early period post-transplantation needs to be evaluated.
Cannabis: Governors Talk Legalization

Meeting in Washington this weekend, the National Governors Association discussed, among other things, the legalization of cannabis. Some say the buzz around the marijuana legalization talks is just smoke. The state executives, looking to Colorado and Washington for the experience the two pioneer states have gained since their recent legalization, appear to be taking a cautious approach to the topic.

It's been nearly three months since Colorado legalized the recreational use of marijuana, but Governor John Hickenlooper warns other governors not to rush into following his lead on the matter of cannabis. The state executives, both Democratic and Republican, are reported to have expressed a broad concern, not only for children but also for public safety, in the case of growing and spreading recreational marijuana consumption.

He said he's been approached by half a dozen governors inquiring about Colorado's experience, some of whom felt this was a wave coming their way. When asked by other governors, as he frequently is, he tells them that they don't have the facts, and further that they have insufficient data to predict what the unintended consequences may be, so he urges caution. In conclusion, the Democrat added that he'd wait a few years rather than rush forward to legalize cannabis.

States are closely observing Washington and Colorado as these national pioneers establish themselves after their initiative to legalize recreational consumption of marijuana. A group of marijuana supporters now hopes to add Alaska to the list, making it the third state.

While confirming that Colorado's early tax revenue collections on pot sales have exceeded projections and expectations, Hickenlooper still cautioned that tax revenue alone is absolutely the wrong reason to even consider the legalization of recreational marijuana.

Medical marijuana, however, is currently legal in 20 states and the District of Columbia. In the state of Florida, a vote will be conducted in November on a proposed constitutional amendment legalizing medical cannabis consumption.

The Obama administration has given states a green light to conduct their own experiments with marijuana regulation. President Obama himself recently made headlines in an interview where he stated that he didn't consider marijuana to be more harmful than alcohol in terms of the impact it has on the individual consumer. He further added that he wouldn't encourage people to smoke marijuana, and that he'd told his daughters he thinks it's a bad idea, a waste of time and also not very healthy.

Democratic Governor Maggie Hassan of New Hampshire stated that she is opposed to the idea of cannabis legalization because the rate of substance abuse, already high among the youth, is a struggle in her state. She did, however, call for a comprehensive look at sentencing practices and overall criminal laws, adding that she doesn't think the youth should have a criminal record for a first offence, nor be sent to jail.

Another Democrat, Maryland Governor Martin O'Malley, formerly the mayor of Baltimore, a city where drug addiction has been a struggle, said that in a matter of a few years Colorado's experience would speak for itself, showing whether they'd managed to reduce harm without other, possibly unforeseen, adverse impacts.
He also added that most job opportunities for the youth in his state, coming from agencies and firms of the federal sector, require a drug test. He concluded that, for economic and, as previously noted, opportunity reasons, he doesn't believe Maryland should serve as a laboratory of democracy.

Another Democrat, Washington state Governor Jay Inslee, said his state was succeeding in creating a legal hemp market and offered some advice to his colleagues. He said their aim was to decriminalize marijuana, and so far it's working well.

Last year, the Justice Department announced that as long as state-legal marijuana businesses follow a given series of strict guidelines, it would largely stay clear of them. The memo was not meant to give carte blanche to all would-be marijuana entrepreneurs, but the legal pot market found it encouraging. The Obama administration has further provided banks with guidance on how to approach business with cannabis companies, as of earlier this month, in an effort to make them more comfortable working with licensed and regulated marijuana businesses.

5 Responses to "Cannabis Governors Talk Legalization"

The "War on Marijuana" has been a complete and utter failure. It is the largest component of the broader yet equally unsuccessful "War on Drugs" that has cost our country over a trillion dollars. Instead of the United States wasting billions upon billions more of our tax dollars fighting a never-ending "War on Marijuana", let's generate billions of dollars and improve the deficit instead. It's a no-brainer. The Prohibition of Marijuana has also ruined the lives of many of our loved ones. In numbers greater than any other nation, our loved ones are being sent to jail and are being given permanent criminal records which ruin their chances of employment for the rest of their lives, and for what reason? Marijuana is much safer and healthier to consume than alcohol. Yet do we lock people up for choosing to drink? Let's end this hypocrisy now! The government should never attempt to legislate morality by creating victimless "crimes", because it simply does not work and costs the taxpayers a fortune. Marijuana legalization nationwide is an inevitable reality that's approaching much sooner than prohibitionists think, and there is nothing they can do to stop it! Legalize nationwide! Support each and every marijuana legalization initiative!

None of the governors or other figures quoted above offers any evidence that they have even considered the emerging distinction between "smoke" as an inhalant ingestion technique (obsolete!) and VAPORIZATION. For example, @Manuel "smoked socially", quit 13 years ago, and possibly hasn't had an occasion to learn anything firsthand about the revolution in vaping since 2003, when a Chinese engineer designed the first "e-cigarette" – and today there are pen vapes with which to inhale pure cannabis vapors (minus the heat shock, carbon monoxide and 4,221 combustion toxins found in a joint, spliff, blunt etc., causing harms which get blamed on cannabis). You can also vape with a cheap handmade utensil (search "long-drawtube one-hitter"). As @Janet said, "THE STATUS QUO IS OVER" – not only in law but in dosage regulation (a 25-mg single toke replaces a 700-mg $igarette, 500-mg joint etc.). Fear of "drug" overdoses, sudden or chronic, vanishes from the planet along with 6,000,000 deaths a year from tobacco $igarettes.

The US is having a criminal justice crisis.
We have 25% of all the world's prisoners, we have 67 million of our citizens burdened with criminal records, and many millions, mainly minorities, are disenfranchised. The failed and very expensive war on drugs is the great driver. We will not be able to actually address this outrageous national disgrace without taking bold steps in dealing with the drivers. Marijuana is probably not much of a gateway drug to harder things such as opiates, because many of those are available on a doctor's Rx. But we can move the underground above ground and give people who choose to be intoxicated safer and legal alternatives. We can regulate and tax. Like it or not, a great many Americans already smoke weed. That is a fact of life. Also, the polling is clear: the current policies are untenable. We cannot have 1 in 37 under criminal justice supervision and 1 in 103 incarcerated and call ourselves the land of the free, because why wouldn't Putin poke us in the eye for our total hypocrisy? We are hypocritical and suffer under provably bad leadership on the subject of criminal justice; just look at our disastrous court system. Go and look at the statistics on the disparity and hypocrisy. It is a national disgrace, and every governor knows this to be true, without exception. The status quo is over. Look at the state and federal budgets and see how much is squandered on a failed cultural war.

I think the decriminalization of marijuana is the way to go. The President, while I do not agree with him in every case, did speak wisely on this issue. Marijuana really is no worse than cigarettes and alcohol, yet many people act like it's a "gateway drug", when every sensible human being knows that the gateway drugs are cigarettes and alcohol. I smoked cannabis socially in college and so did most of my friends. Today I'm a responsible, hardworking father of a four-year-old son. Though I no longer consume cannabis (it's been more than 13 years since the last time I used recreationally) because it's not good for my lungs, I don't think of it in the same terms that I think of cocaine, heroin or any other "hard drugs" which can kill via overdose. Obviously, recreational use should not be encouraged – just like recreational use of alcohol and tobacco should not be encouraged – but we already have many such users, and this way it can be regulated, and we take money out of the hands of criminal enterprises, money which could be funnelled toward far more dangerous challenges for our future as a nation.
County OKs solar farm

Effingham County commissioners have given their initial approval to a large solar farm off Lowground and McCall roads. Commissioners voted unanimously to rezone more than 287 acres of land, where Gregory Electric hopes to install a large solar farm.

"Georgia Power has a program for solar farms where they will buy the power back," said zoning director George Shaw.

The tract had been zoned planned development and currently is growing timber. A solar farm is an acceptable conditional use under AR-1 zoning. The land is owned by Suetian Enterprises, Coastal Savannah Properties and the Christian family. The entire tract won't be turned into a solar farm, Shaw said, because there are considerable wetlands on the property.

Commissioners also approved a buffer for the property, with 60 feet on the Lowground Road side and 100 feet on the McCall Road side. "We wouldn't mind putting in vegetation," said Todd Delello of Gregory Electric, which is building the solar project. "It's going to be hard to see."

There is also only a three-foot difference in elevation across the tract. "The land is pretty flat," Delello said.

The expected output of the solar array is 16 megawatts AC and 21 megawatts DC.