Evergreen Scale Models
Evergreen Scale Models is the World's largest producer of polystyrene plastic shapes, strips, and sheet materials in metric and inch sizes.
Why choose polystyrene plastic when building a model railway? It is inexpensive, readily available, white in color, and it glues, sands, cuts, and paints well. So you can discover how to build clean and accurate models with Evergreen plastic products.
Evergreen plastics are commonly used by model train enthusiasts, as well as by architects, artists, and students working on school projects.
|
<?xml version="1.0" encoding="UTF-8"?>
<project xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"
         xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <modelVersion>4.0.0</modelVersion>
  <groupId>androidx.versionedparcelable</groupId>
  <artifactId>versionedparcelable</artifactId>
  <version>1.0.0</version>
  <packaging>aar</packaging>
  <name>VersionedParcelable and friends</name>
  <description>Provides a stable but relatively compact binary serialization format that can be passed across processes or persisted safely.</description>
  <url>http://developer.android.com/tools/extras/support-library.html</url>
  <inceptionYear>2018</inceptionYear>
  <licenses>
    <license>
      <name>The Apache Software License, Version 2.0</name>
      <url>http://www.apache.org/licenses/LICENSE-2.0.txt</url>
      <distribution>repo</distribution>
    </license>
  </licenses>
  <developers>
    <developer>
      <name>The Android Open Source Project</name>
    </developer>
  </developers>
  <scm>
    <connection>scm:git:https://android.googlesource.com/platform/frameworks/support</connection>
    <url>http://source.android.com</url>
  </scm>
  <dependencies>
    <dependency>
      <groupId>androidx.annotation</groupId>
      <artifactId>annotation</artifactId>
      <version>1.0.0</version>
      <scope>compile</scope>
    </dependency>
    <dependency>
      <groupId>androidx.collection</groupId>
      <artifactId>collection</artifactId>
      <version>1.0.0</version>
      <scope>compile</scope>
    </dependency>
  </dependencies>
</project>
|
A wild fan-made Pokémon game appears!
11 August 2016
By Matthew Fick
In development for nearly a decade, Pokémon Uranium has just been released for PC. This fan-made game is entirely free, taking players on a journey through the Tandor Region and bringing in 150 all-new “Fakémon”.
Important stuff
Contact us
If you'd like to speak to someone about placing an advertisement on this website, email [email protected].
Any concerns, complaints, compliments, bug-reports, or general word-speaking with regards to the website can be sent to [email protected].
|
Uses of scapular island flap in pediatric axillary burn contractures.
Pediatric axillary post-burn contractures are among the most challenging problems which follow treatment of upper extremity burns. We preferred to use scapular flaps for the surgical treatment of pediatric axillary contractures instead of skin grafting or Z-plasties. In this clinical study we present 13 pediatric cases treated with scapular island flaps. In the pediatric scapular flap cases, the technique we used was to extend the flap's pedicle: dissection was continued to the level of the bifurcation of the subscapular artery. Passing the flap through the triangular space allowed us to cover the anterior part of the axillary contractures. We observed that scapular flap repairs have many benefits over skin grafting, including no recurrence of contracture and stable coverage of the shoulder joint. The other advantages of the scapular island flap are that the donor site is closed primarily, and that it provides an adequate amount of pliable skin while not compromising the function and range of motion of joints. In conclusion, the island scapular flap is a good choice for the reconstruction of various axillary contractures in the pediatric population.
|
Arterial to end-tidal carbon dioxide tension difference during laparoscopy. Magnitude and effect of anaesthetic technique.
The relationship between arterial carbon dioxide tension and end-tidal carbon dioxide tension was studied in 25 patients during laparoscopy. Thirteen patients received general anaesthesia and 12 epidural anaesthesia. The overall mean difference between arterial and end-tidal carbon dioxide tensions was 0.44 kPa (95% confidence interval 0.28-0.60 kPa), which was significantly less than that reported in studies of other procedures. The reasons for this difference are probably associated with the physiological changes induced by CO2 pneumoperitoneum and steep Trendelenburg positioning. The choice of anaesthetic technique did not affect the arterial to end-tidal carbon dioxide tension difference significantly (p > 0.9).
|
Concentrations of dopamine and noradrenaline in hypophysial portal blood in the sheep and the rat.
The concentrations of dopamine, noradrenaline and their respective primary neuronal metabolites 3,4-dihydroxyphenylacetic acid (DOPAC) and 3,4-dihydroxyphenylethyleneglycol (DHPG) were measured in the hypophysial portal and peripheral plasma of sheep and rats by combined gas chromatography-mass spectrometry. Hypophysial portal and jugular blood samples were taken at 5- to 10-min intervals for 3-7 h from six conscious ovariectomized ewes. Blood was also collected for 30 min under urethane anaesthesia from the cut pituitary stalk from 16 pro-oestrous female and five intact male rats. In ovariectomized ewes, noradrenaline concentrations were higher in hypophysial portal plasma than in peripheral plasma (6.6 +/- 0.8 vs 2.2 +/- 0.4 nmol/l). In contrast, dopamine was undetectable (less than 1 nmol/l) in the portal and peripheral plasma of all ewes. Plasma levels of DOPAC and DHPG in portal and jugular samples were similar. In all pro-oestrous female rats, plasma concentrations of dopamine were higher in portal blood than in jugular blood (8.0 +/- 1.4 vs 4.8 +/- 0.6 nmol/l). Detectable concentrations of dopamine were measured in the portal plasma of two out of five male rats. Noradrenaline concentrations were higher in portal plasma than in peripheral plasma of both female (8.3 +/- 1.7 vs 3.7 +/- 0.6 nmol/l) and male (14.8 +/- 2.7 vs 6.1 +/- 1.2 nmol/l) rats. These data show that noradrenaline, but not dopamine, is secreted into the long portal vessels in sheep. The results suggest that there are species differences in the secretion of hypothalamic dopamine into hypophysial portal blood.
|
Q:
python: unexplainable infinite recursion with __repr__
Here's a piece of code which goes into an infinite recursion loop that seemingly consists only of the __repr__ function calling itself. But I really can't see how it calls itself. Moreover, I can't even understand how it was called in the first place:
class MyList(list):  # this is storage for MyDict objects
    def __init__(self):
        super(MyList, self).__init__()

class MyDict(dict):
    def __init__(self, mylist):
        self.mylist = mylist  # mydict remembers mylist, to which it belongs
    def __hash__(self):
        return id(self)
    def __eq__(self, other):
        return self is other
    def __repr__(self):
        return str(self.mylist.index(self))  # !!! this is the crazy repr, going into recursion
    def __str__(self):
        return str(self.__repr__())

mylist = MyList()
mydict = MyDict(mylist)
mydict.update({1: 2})
print str(mylist.index(mydict))  # here we die :(
Execution of this code results in:
Traceback (most recent call last):
  File "test_analogue.py", line 20, in <module>
    print str(mylist.index(mydict))
  File "test_analogue.py", line 13, in __repr__
    return str(self.mylist.index(self))
  File "test_analogue.py", line 13, in __repr__
  ...
  ... (repetition of the last 2 lines ~666 times)
  ...
  File "test_analogue.py", line 13, in __repr__
    return str(self.mylist.index(self))
RuntimeError: maximum recursion depth exceeded while calling a Python object
Do you understand how str(mylist.index(mydict)) managed to call __repr__? I'm completely puzzled. Thanks!
A:
>>> mylist.index('foo')
ValueError: 'foo' is not in list
You never actually added mydict to mylist, so the index method tries to raise this error. The error contains the repr of the dict. The repr of the dict, of course, tries to look up its index in the list that it isn't in, and this raises an exception, whose error message is calculated using the repr of the dict, which of course, tries to look up its index in the list that it isn't in, and...
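For completeness, here is one minimal fix (my addition, not from the original thread): actually append the dict to the list before asking for its index, so that index() succeeds and __repr__ terminates. Reusing the classes from the question:
mylist = MyList()
mydict = MyDict(mylist)
mylist.append(mydict)  # the missing step: put the dict into its list
mydict.update({1: 2})
print str(mylist.index(mydict))  # prints 0 instead of recursing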
|
Q:
How to bring a widget in front of another elevated widget in Android Studio
I'm having trouble bringing a non-elevated widget in front of an elevated widget.
As you can see in the screenshot, when I elevate the layer, everything gets messed up.
How can I solve this problem?
[Screenshot from the app]
A:
I fixed it, guys.
What I did is:
I created a CardView in front of my ImageView, set the corner radius to 100, and then moved my rectangular camera effect into the CardView.
Now everything works well.
|
MHz Networks brings you The Top Story, the only digital short delivering each day's biggest global story back to back in English, as told by our premier international news broadcast partners. Today's Topic 02/03/14: During the tense election process, anti-government protesters stopped candidates from registering, blocked delivery of ballot boxes and prevented people from casting their votes. As a result, voting was disrupted in 69 out of the country's 375 electoral districts. As told by Arirang TV, Euronews, France 24 and CCTV News.
|
1. Introduction {#sec1-ijms-20-03027}
===============
The high-grade neuroepithelial tumor with BCOR alteration (HGNET-BCOR) is a rare brain tumor first described as a distinct entity in 2016 \[[@B1-ijms-20-03027]\]. HGNET-BCOR represents 3% of tumors with a prior diagnosis of "Primitive neuroectodermal tumor of the central nervous system (CNS-PNET)" according to the World Health Organization (WHO) diagnostic lexicon of 2007 \[[@B1-ijms-20-03027]\]. The designation "CNS-PNET" has been removed from the updated fourth edition of the WHO classification of CNS tumors; however, this new molecular entity is not yet included \[[@B2-ijms-20-03027]\]. It is unclear whether CNS HGNET-BCOR should be classified in the category of "mesenchymal, nonmeningothelial tumors" or in that of "embryonal tumors."
HGNET-BCOR is a highly aggressive entity which can metastasize outside the brain \[[@B3-ijms-20-03027],[@B4-ijms-20-03027],[@B5-ijms-20-03027],[@B6-ijms-20-03027]\]. HGNET-BCOR primarily affects children and is characterized by somatic internal tandem duplications (ITDs) within the C-terminus of the BCL-6 co-repressor (*BCOR*) gene and by *BCOR* overexpression \[[@B1-ijms-20-03027]\]. The same duplication has also been described in clear cell sarcoma of the kidney \[[@B7-ijms-20-03027]\], soft tissue undifferentiated round cell sarcoma of infancy (URCSI), and primitive myxoid mesenchymal tumor of infancy (PMMTI) \[[@B8-ijms-20-03027]\]. Preliminary survival data suggest that the CNS HGNET-BCOR entity has poor overall survival \[[@B1-ijms-20-03027]\], but no standard therapy protocols exist for this tumor.
We have previously described a personalized therapy protocol including elements from the pediatric rhabdoid and soft-tissue sarcoma protocol, radiation, and Arsenic trioxide to treat a pediatric HGNET-BCOR patient, achieving a complete remission that lasted for six months \[[@B5-ijms-20-03027]\]. However, the tumor cells became resistant to the regimen, highlighting the need to identify new treatment approaches for this tumor.
The insulin-like growth factor (IGF) 2 is overexpressed in HGNET-BCOR \[[@B1-ijms-20-03027],[@B5-ijms-20-03027]\]. IGF2 acts via the IGF1 receptor (IGF1R) promoting cell proliferation. The IGF pathway regulates cellular growth, proliferation, and survival. It is important in the development of several pediatric cancers, including sarcoma, glioma, neuroblastoma, medulloblastoma (MB), and Wilms tumor \[[@B9-ijms-20-03027]\]. Different strategies have been tested to overcome IGF1R signaling, including IGF1R blockade by monoclonal antibodies, small-molecule tyrosine kinase inhibitors of IGF1R, and ligand-neutralizing strategies \[[@B10-ijms-20-03027]\]. In spite of promising preclinical data, IGF1R inhibitors have not had success as single agents in clinical trials, and no formally approved drugs are available. However, compounds developed to inhibit other kinases, like ceritinib, a highly potent inhibitor of anaplastic lymphoma kinase (ALK), have shown off-target activity on IGF1R and may become relevant for the development of therapy protocols targeting IGF1R \[[@B11-ijms-20-03027]\].
Good preclinical HGNET-BCOR models are needed to evaluate standard and novel treatment options. The only available preclinical model to date is the human primary cell culture PhKh1 that we have generated in our laboratory from the extracranial inoculation metastasis of a pediatric HGNET-BCOR patient (P1) \[[@B5-ijms-20-03027],[@B12-ijms-20-03027]\]. Here, we further characterize PhKh1 cells and use them to determine their sensitivity to chemotherapy agents and to IGF1R inhibition to support the development of novel treatment approaches.
2. Results {#sec2-ijms-20-03027}
==========
2.1. Characterization of the HGNET-BCOR in Vitro Model PhKh1 {#sec2dot1-ijms-20-03027}
------------------------------------------------------------
We have previously described a HGNET-BCOR primary culture (PhKh1) isolated from the extracranial metastasis of a HGNET-BCOR patient \[[@B5-ijms-20-03027],[@B12-ijms-20-03027]\]. The PhKh1 cells share key features with the metastatic tissue of origin, including the presence of the BCOR ITD, the overexpression of *BCOR*, and the activation of the SHH and WNT pathways \[[@B12-ijms-20-03027]\]. Here, we further characterized the cells by western blot analysis, DNA methylation analysis, copy number profiling, and electron microscopy ([Figure 1](#ijms-20-03027-f001){ref-type="fig"}). To assess if *BCOR* is also translated into a protein, we performed western blot analysis with nuclear and cytosolic fractions of PhKh1 cells. A band of \~200 kDa, corresponding to the expected size of BCOR, was detected mainly in the nuclear fraction ([Figure 1](#ijms-20-03027-f001){ref-type="fig"}A), consistent with the expected function of BCOR as a transcriptional regulator \[[@B13-ijms-20-03027]\]. We then performed DNA methylation analysis using a chip-based assay and the brain tumor classification tool recently described by Capper et al. (classifier version v11b4) \[[@B14-ijms-20-03027]\]. DNA methylation profiling is highly robust and reproducible, and such profiles have been widely used to classify CNS tumors. PhKh1 cells and the metastatic tissue of P1 that was used to generate the PhKh1 cells were both classifiable as HGNET-BCOR by 850k DNA methylation analysis. Chromosomal aberrations were similar between PhKh1 cells and the metastatic tumor tissue used to isolate the cells, being characterized mainly by a gain of chromosome (chr) 7 and the loss of a part of chr 11 ([Figure 1](#ijms-20-03027-f001){ref-type="fig"}B,C). No recurrent copy number changes have been described so far in HGNET-BCOR, and most cases show a flat profile in copy number analysis \[[@B1-ijms-20-03027],[@B3-ijms-20-03027]\]. These data, however, were derived from the analysis of primary tumors, and the gain of chr 7 and loss of a part of chr 11 may be features of more aggressive and metastasizing HGNET-BCOR tumor cells. Finally, electron microscopy of PhKh1 cells revealed that they have a size of about 8 µm and the ability to phagocytize other cells ([Figure 1](#ijms-20-03027-f001){ref-type="fig"}D).
Taken together, our data demonstrate that PhKh1 cells have an aggressive phenotype, recapitulate features of the tumor of origin, and can be therefore used as an in vitro model to study HGNET-BCOR biology and to perform preclinical tests.
2.2. IGF2 and IGF1R are Highly Expressed in HGNET-BCOR {#sec2dot2-ijms-20-03027}
------------------------------------------------------
The expression of *IGF2* and *IGF1R* in HGNET-BCOR has been suggested previously by experiments based on Affymetrix arrays and RNA-seq analysis \[[@B1-ijms-20-03027],[@B5-ijms-20-03027]\]. Here, we validated the expression of *IGF2* and *IGF1R* by qRT-PCR in the tumors isolated from three HGNET-BCOR patients. The three patients had BCOR ITDs of different lengths ([Figure 2](#ijms-20-03027-f002){ref-type="fig"}A). As a control, material derived from a hepatoblastoma tumor was used (P4). Hepatoblastomas have high *IGF2* expression due to genetic and epigenetic alterations in the *IGF2-H19* region \[[@B15-ijms-20-03027]\]. Normal liver was used as a reference. Two HGNET-BCOR samples (P1 and P2) showed expression of *IGF2* comparable to that in the hepatoblastoma sample ([Figure 2](#ijms-20-03027-f002){ref-type="fig"}B). In one HGNET-BCOR sample, the expression was even stronger ([Figure 2](#ijms-20-03027-f002){ref-type="fig"}B, P3). The expression of *IGF1R* in all HGNET-BCOR tumors was stronger than in the hepatoblastoma sample ([Figure 2](#ijms-20-03027-f002){ref-type="fig"}C). The expression of IGF1R was confirmed at the protein level for the P1 tumor ([Figure S1](#app1-ijms-20-03027){ref-type="app"}). High expression of *IGF2* and *IGF1R* was maintained in PhKh1 cells ([Figure 2](#ijms-20-03027-f002){ref-type="fig"}D).
These results confirm the assumption that the upregulation of *IGF2* and *IGF1R* is a common feature of HGNET-BCOR.
2.3. HGNET-BCOR is Responsive to Actinomycin D, Vinca Alkaloids, and IGF1R Inhibitors {#sec2dot3-ijms-20-03027}
-------------------------------------------------------------------------------------
For our drug screening in the PhKh1 model, we selected 11 compounds, including chemotherapeutics that we previously selected for our personalized treatment of HGNET-BCOR and three IGF1R kinase inhibitors, namely, linsitinib \[[@B16-ijms-20-03027]\], picropodophyllin \[[@B17-ijms-20-03027]\], and PQ401 \[[@B18-ijms-20-03027]\]. All compounds were tested at concentrations in the range of 1--100,000 nM and were ranked by their respective IC~50~ values ([Table 1](#ijms-20-03027-t001){ref-type="table"} and [Figure S2](#app1-ijms-20-03027){ref-type="app"}). Among the top hits in our drug screen were compounds approved for pediatric use, such as actinomycin D, vincristine, vinblastine, and doxorubicin. Platinum derivatives and etoposide were only effective at very high concentrations. PhKh1 cells were sensitive to inhibition with the kinase inhibitors linsitinib ([Figure 3](#ijms-20-03027-f003){ref-type="fig"}A) and picropodophyllin ([Figure 3](#ijms-20-03027-f003){ref-type="fig"}B), with an IC~50~ \< 1 µM, but not to PQ401 ([Figure 3](#ijms-20-03027-f003){ref-type="fig"}C). Because kinase inhibitors may be non-selective, we further analyzed the dependency of PhKh1 proliferation on IGF1R by using an IGF1R blocking antibody. Incubation of PhKh1 cells with the Alpha IR-3 neutralizing monoclonal antibody \[[@B19-ijms-20-03027]\] significantly reduced the proliferation of PhKh1 cells in a concentration-dependent manner ([Figure 3](#ijms-20-03027-f003){ref-type="fig"}D). The proliferation of DAOY cells, an MB cell line used as a negative control, was not affected by the treatment with the blocking antibody against IGF1R, in accordance with a previous report \[[@B20-ijms-20-03027]\].
Altogether, our drug screen showed that some of the conventional chemotherapeutic agents used in pediatric neuro-oncology are cytotoxic to PhKh1 cells at low concentrations and that IGF1R is a new potential target for therapy.
2.4. HGNET-BCOR is Responsive to Ceritinib {#sec2dot4-ijms-20-03027}
------------------------------------------
Because no specific IGF1R inhibitors are approved by any country's medicines regulatory agency, we searched for small molecules with an off-target activity on IGF1R. Ceritinib is approved for ALK-positive lung cancer and can also inhibit IGF1R \[[@B11-ijms-20-03027]\]. We first tested if the kinase domain of ALK was mutated in the PhKh1 cells, but we could not detect any mutation (data not shown). Ceritinib was able to inhibit the proliferation of PhKh1 cells with an IC~50~ of 0.31 µM ([Table 1](#ijms-20-03027-t001){ref-type="table"} and [Figure 4](#ijms-20-03027-f004){ref-type="fig"}A). Whereas many traditional chemotherapeutic agents inhibit proliferation in a timeframe of hours or a few days of treatment, targeted therapies that affect cancer-relevant pathways can require a longer time period to impact cellular growth and survival. To study the long-term effect of IGF1R inhibition, we incubated PhKh1 cells for 9 days with linsitinib or ceritinib. At the IC~50~ concentration, both drugs significantly inhibited the growth of the cells ([Figure 4](#ijms-20-03027-f004){ref-type="fig"}B). Notably, linsitinib and ceritinib at a concentration of 1 µM were able to stop the proliferation of PhKh1 cells. The 1 µM concentration was selected because it reflects a concentration that can be achieved in the plasma of patients \[[@B21-ijms-20-03027],[@B22-ijms-20-03027]\].
In summary, ceritinib at a clinically achievable concentration is a potent and specific growth inhibitor of HGNET-BCOR cells.
2.5. Ceritinib Acts Via the IGF1R/AKT Pathway {#sec2dot5-ijms-20-03027}
---------------------------------------------
For signaling transmission, IGF1R has to be phosphorylated. A cluster of three tyrosine residues, located at position Tyr1131, Tyr1135, and Tyr1136 within the kinase domain, is critical for receptor autophosphorylation \[[@B23-ijms-20-03027]\]. In PhKh1 cells grown under serum starvation, IGF1R was not phosphorylated ([Figure 5](#ijms-20-03027-f005){ref-type="fig"} lane 1). IGF2 stimulation induced the phosphorylation of IGF1R ([Figure 5](#ijms-20-03027-f005){ref-type="fig"} lane 2). Ceritinib was able to inhibit the phosphorylation of IGF1R ([Figure 5](#ijms-20-03027-f005){ref-type="fig"}, lane 3). DMSO, which was used to dissolve ceritinib, had no effect on IGF1R phosphorylation ([Figure 5](#ijms-20-03027-f005){ref-type="fig"}, lane 4). As expected, linsitinib was also able to inhibit IGF1R phosphorylation ([Figure S1](#app1-ijms-20-03027){ref-type="app"}).
IGF2 signals through the IGF1R to activate the PI3K/AKT/mTOR or the RAS/RAF/MEK/ERK pathway \[[@B24-ijms-20-03027]\]. Ceritinib was able to block the phosphorylation of AKT ([Figure 5](#ijms-20-03027-f005){ref-type="fig"}, lane 3). No effect of ceritinib on ERK phosphorylation was observed ([Figure 5](#ijms-20-03027-f005){ref-type="fig"}, lane 3).
These data indicate that in HGNET-BCOR, ceritinib can affect cell proliferation by inhibiting the phosphorylation of IGF1R and its downstream effector AKT.
3. Discussion {#sec3-ijms-20-03027}
=============
HGNET-BCOR is a highly malignant tumor with a poor prognosis \[[@B1-ijms-20-03027],[@B3-ijms-20-03027],[@B5-ijms-20-03027]\] that is in need of new therapeutic approaches. HGNET-BCOR has an aggressive phenotype and can develop extra-CNS metastases \[[@B3-ijms-20-03027],[@B4-ijms-20-03027],[@B6-ijms-20-03027],[@B12-ijms-20-03027]\]. The HGNET-BCOR model we used here to identify new therapeutic drugs consists of primary cells isolated from extracranial metastases of a HGNET-BCOR patient and represents a particularly aggressive form of the disease. The malignant nature of the metastatic HGNET-BCOR cells is underlined by the ability of PhKh1 cells to phagocytose other cells. This feature has been described for highly aggressive cells with high metastatic potential \[[@B25-ijms-20-03027]\]. Cannibalism is a key survival option for malignant cancers \[[@B26-ijms-20-03027]\]. Because of the dependence of the phagocytosis process on the microtubule cytoskeleton, it might be speculated that microtubule inhibitors like vincristine and vinblastine can act as effective drugs. Our data show indeed the potential of *Vinca* alkaloids for the development of new therapeutic protocols. However, while vincristine is associated with highly variable neurotoxicity that often necessitates dose reductions, thereby compromising efficacy, vinblastine monotherapy has shown promising activity and a low-toxicity profile in patients with pediatric low-grade glioma \[[@B27-ijms-20-03027]\]. Doxorubicin and actinomycin D were also effective in inhibiting PhKh1 cell proliferation but they are generally not used in the treatment of brain tumors or metastasis in the brain because of their low penetration across the blood--brain barrier. However, they could be relevant for the treatment of HGNET-BCOR patients with extracranial metastases.
Activation of the IGF2 pathway has been described in several pediatric tumor entities, and preclinical findings using inhibitors of the IGF axis have demonstrated antitumor activity \[[@B28-ijms-20-03027]\]. In this work, we show that HGNET-BCOR tumors expressed high levels of *IGF2* and that the proliferation of the primary HGNET-BCOR cells PhKh1 could be reduced by targeting the IGF1R receptor on the plasma membrane with a blocking antibody or with intracellular kinases inhibitors. To date, neither monoclonal antibodies nor small-molecule tyrosine kinase inhibitors directed against IGF1R have been approved, but drug repurposing may represent a way forward to rapid clinical use. Ceritinib gained US Food and Drug Administration approval in 2014 for the treatment of patients with ALK-positive metastatic non-small cell lung cancer (NSCLC) who have progressed on or are intolerant to crizotinib. Ceritinib inhibits the kinases of proto-oncogene receptor tyrosine kinase (ROS1) and IGF1R, although it is most active against ALK \[[@B11-ijms-20-03027]\]. In our model of HGNET-BCOR, ceritinib was able to affect cell proliferation and to inhibit the phosphorylation of IGF1R. Notably, the concentration of 1 µM ceritinib that we used in long-term experiments can be achieved in the plasma of patients \[[@B22-ijms-20-03027]\]. Lung cancer patients with brain metastases also respond to the treatment with ceritinib, suggesting that this drug is effective across the blood--brain barrier \[[@B29-ijms-20-03027]\]. The concentration achieved in the mouse brain after oral administration was low, with a brain-to-plasma ratio below 0.3 \[[@B30-ijms-20-03027]\], but the blood-to-brain exposure ratio of ceritinib in humans has yet to be determined. The inhibition of P-glycoprotein (P-GP/ABCB1) and breast cancer resistance protein (BCRP/ABCG2) may improve the accumulation of ceritinib in the brain \[[@B30-ijms-20-03027]\]. A phase I study (NCT01742286) in pediatric patients with ALK-aberrant malignancies has shown that ceritinib can be given to pediatric patients, with acceptable toxicities, especially if the drug is taken with food.
IGF1R can act via the AKT and ERK1/ERK2 pathways to promote cell growth, differentiation, and proliferation. Ceritinib was able to reduce the phosphorylation of AKT in HGNET-BCOR, suggesting that a combination of ceritinib with mTOR inhibitors may potentiate the treatment. The inhibition of the phosphorylation of IGF1R and AKT after incubation with ceritinib has been described in rhabdomyosarcoma, and a synergistic effect of ceritinib and dasatinib was observed \[[@B31-ijms-20-03027]\]. Whether this synergy also exists in HGNET-BCOR remains to be elucidated.
We have previously shown that targeting the SHH pathway using arsenic trioxide (ATO) in combination with irradiation can induce a complete remission in a pediatric patient, but resistance to this therapy developed \[[@B5-ijms-20-03027]\]. IGF2 is an SHH pathway-responsive molecule in some cell types \[[@B19-ijms-20-03027],[@B32-ijms-20-03027]\], and the activation of the SHH pathway in HGNET-BCOR may explain the high expression of *IGF2*. However, the regulation of *IGF2* by the SHH pathway is complex, and other not yet identified regulators may cooperate with the SHH pathway to induce *IGF2* expression in HGNET-BCOR. Other mechanisms such as changes in the 3' untranslated region of *IGF2* \[[@B33-ijms-20-03027]\] and altered methylation of the *IGF2* promoter should be also considered \[[@B15-ijms-20-03027]\]. Whether the SHH and IGF pathways act synergistically to support HGNET-BCOR proliferation remains to be evaluated. In the case of a synergy, co-targeting of both pathways through a combination of ATO and ceritinib could improve the efficacy of a targeted therapy.
In conclusion, we have demonstrated that HGNET-BCOR is very sensitive to inhibition by *Vinca* alkaloids, doxorubicin, actinomycin D, and ceritinib. All these drugs are already approved or in clinical trials for malignancies in pediatric oncology, and priority should thus be given to validate them first in animal models and finally in multimodal therapies in future clinical trials for HGNET-BCOR patients. Finally, the use of the off-target IGF1R inhibitor ceritinib may accelerate the development of clinical protocols for the treatment of tumor entities driven by IGF2 and IGF1R.
4. Materials and Methods {#sec4-ijms-20-03027}
========================
4.1. Patient (P) Tissue Samples {#sec4dot1-ijms-20-03027}
-------------------------------
P1, P2, and P3 are HGNET-BCOR patients; P1 and P3 are male, and P2 is female. The clinical history of P1 and P2 has already been described \[[@B11-ijms-20-03027]\]. P3 was diagnosed with HGNET-BCOR in the left temporal lobe at the age of two years. P4 is a five-year-old male child with hepatoblastoma. Normal liver tissue, used as a control, was obtained from a five-year-old male child with no liver pathology. This study was performed in agreement with the Declaration of Helsinki. In accordance with the ethics committee of Rhineland-Palatinate, the patients' parents agreed to the scientific use of the surplus material. The patient samples were collected and used in the study after obtaining informed consent from the patients' parents.
4.2. Cells {#sec4dot2-ijms-20-03027}
----------
The primary PhKh1 cells of HGNET-BCOR were previously described \[[@B5-ijms-20-03027],[@B11-ijms-20-03027]\]. Only low-passage cells were used in the experiments. DAOY cells were obtained directly from a cell bank that performs cell line characterizations (ATCC) and were passaged for fewer than 6 months after receipt. PhKh1 cells were maintained in a humidified incubator with 5% CO2 at 37 °C and cultured to a confluency of 70--80% in advanced DMEM (Gibco™, Thermo Fisher Scientific, MA, USA) supplemented with 10% human serum (Sigma-Aldrich Co., MO, USA), 2 mM L-glutamine (Sigma-Aldrich Co., MO, USA), and Penicillin--Streptomycin (Gibco™, Thermo Fisher Scientific, MA, USA) to final concentrations of 100 U/mL and 100 µg/mL, respectively. DAOY cells were cultivated under the same conditions as PhKh1 cells, with the difference that the serum component was 10% fetal calf serum (Gibco™, Thermo Fisher Scientific, MA, USA) instead of 10% human serum.
4.3. Nucleic Acid Extraction {#sec4dot3-ijms-20-03027}
----------------------------
DNA was extracted using the QIAamp DNA FFPE kit (Qiagen, Hilden, Germany). RNA extraction was performed using the RNeasy FFPE Kit for FFPE material (Qiagen). RNA was converted to cDNA using the PrimeScript RT Reagent Kit with gDNA Eraser (Takara Bio Europe, Saint-Germain-en-Laye, France). Quality control was performed using a 2100 Bioanalyzer (Agilent Technologies, Waldbronn, Germany). Because of the expected low quality of the RNA extracted from FFPE material, the protocol for cDNA synthesis was changed. Instead of utilizing the RT Primer Mix that is included in the kit, we used gene-specific reverse primers, and the reaction samples were incubated for 60 min at 42 °C instead of 15 min at 37 °C.
4.4. RT-PCR and qRT-PCR {#sec4dot4-ijms-20-03027}
-----------------------
qRT-PCR was performed using the LightCycler 480 II Detection System and Software (Applied Biosystems, Darmstadt, Germany) with KAPA SYBR FAST LightCycler 480 Kit (PeqLab, Erlangen, Germany). The following primers were used: *IGF2*: 5'-GACCGTGCTTCCGGACA and 5'-TCGAGCTCCTTGGCGAGC; *IGF1R*: 5'-GGCCTTCTGGACAAGCCAG and 5'-AGACCTCCCGGAAGCCAG; *HPRT*: 5'-TGACACTGGCAAAACAATGCA and 5'-GGTCCTTTTCACCAGCAAGCT.
4.5. DNA Sequencing {#sec4dot5-ijms-20-03027}
-------------------
The coding region of ALK containing the kinase domain was analyzed from the cDNA using primers described in \[[@B34-ijms-20-03027]\]. The region containing the BCOR ITD was amplified using the primers 5'-GGAAATTGTCACCATTGCAGAGG and 5'-TGTACATGGTGGGTCCAGCT. Sanger Sequencing was performed as previously described \[[@B11-ijms-20-03027]\].
4.6. In Vitro Drug Screening {#sec4dot6-ijms-20-03027}
----------------------------
All inhibitors were commercially purchased (Selleck Chemicals, TX, USA) and dissolved in DMSO (Sigma-Aldrich Co., MO, USA) at a concentration of 10 mM, with the exception of actinomycin D, which was dissolved at a concentration of 1 mM. The cells were plated in triplicate at a density of 5000 cells/well in a 96-well plate; each plate contained a nontreatment condition, a vehicle condition, and a blank. Viable cells were quantified using the WST-1 reagent (Roche, Mannheim, Germany). Dose--response curves were plotted after 72 h to determine the half-maximal inhibitory concentration (IC~50~) using GraphPad Prism v.5 (GraphPad Software, San Diego, CA, USA). For IGF1R blockade, the cells were incubated with an anti-IGF1R blocking antibody (clone αIR3, Merck Millipore, Darmstadt, Germany) for 48 h at a concentration of 1 µg/mL or 10 µg/mL. For long-term experiments, 5000 cells were plated in triplicate and incubated with different concentrations of ceritinib, linsitinib, or vehicle alone for 9 days. Statistics were performed using Student's *t*-test.
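To make the IC~50~ readout concrete, the following is a minimal curve-fitting sketch added for illustration (it is not part of the original methods, which used GraphPad Prism; the concentrations and viability values are synthetic example numbers, not the paper's measurements). It fits a four-parameter logistic dose--response curve, whose midpoint parameter is the IC~50~:

    # Illustrative 4-parameter logistic (4PL) fit; all data below are synthetic.
    import numpy as np
    from scipy.optimize import curve_fit

    def four_pl(c, bottom, top, ic50, hill):
        # Fraction of viable cells as a function of drug concentration c (nM)
        return bottom + (top - bottom) / (1.0 + (c / ic50) ** hill)

    conc = np.array([1, 10, 100, 1000, 10000, 100000])       # nM, hypothetical
    viab = np.array([1.00, 0.98, 0.85, 0.45, 0.15, 0.05])    # fraction of vehicle control

    p0 = [0.0, 1.0, 500.0, 1.0]  # starting guesses: bottom, top, IC50 (nM), Hill slope
    params, _ = curve_fit(four_pl, conc, viab, p0=p0, maxfev=10000)
    print("IC50 = %.1f nM" % params[2])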
4.7. Phosphorylation Assay {#sec4dot7-ijms-20-03027}
--------------------------
PhKh1 cells were cultured in DMEM (high glucose, no glutamine, no phenol red) (Gibco™, Thermo Fisher Scientific, MA, USA) which was stripped with dextran-coated charcoal (Sigma-Aldrich Co., MO, USA). Dextran-coated charcoal-stripped DMEM was supplemented with 10% human serum (Sigma-Aldrich Co., MO, USA), 2 mM L-glutamine (Sigma-Aldrich Co., MO, USA), 1 mM sodium pyruvate (Gibco™, Thermo Fisher Scientific, MA, USA), and Penicillin--Streptomycin (Gibco™, Thermo Fisher Scientific, MA, USA) to final concentrations of 100 U/mL and 100 µg/mL, respectively. IGF2 (R&D Systems Inc., MN, USA) was dissolved in sterile PBS (Sigma-Aldrich Co., MO, USA) to a final concentration of 100 µg/mL. Three million PhKh1 cells were plated into 10 cm culture dishes and grown in charcoal-stripped DMEM for 1 day. The monolayers were washed with PBS and serum-starved overnight. The cells were treated with 1 µM ceritinib, 1 µM linsitinib, or vehicle alone for 2 h, followed by incubation with 20 ng/mL IGF2 for 15 min. After washing the cells with PBS, they were lysed in 50 mM Tris·HCl, pH 8.0, 150 mM sodium chloride, 5 mM magnesium chloride, 1% Triton X-100, 0.5% sodium deoxycholate, 0.1% SDS, 40 mM sodium fluoride, 1 mM sodium orthovanadate, and the cOmplete protease inhibitor cocktail (Roche Diagnostics, Indianapolis, IN, USA). Lysates were analyzed by western blot.
4.8. Preparation of Subcellular Compartments and Lysis of Tumor Tissue {#sec4dot8-ijms-20-03027}
----------------------------------------------------------------------
PhKh1 cells were cultured as described above. Nuclear and cytosolic extracts were generated according to Holden and Horton \[[@B35-ijms-20-03027]\]. The cells were lysed in buffer 1 (150 mM NaCl, 50 mM HEPES pH 7.4, 1% NP-40, cOmplete protease inhibitor cocktail) for 1 h at 4 °C on a shaker. After centrifugation at 7000 rcf for 15 min at 4 °C, the supernatant containing the cytoplasm was isolated, and the pellet containing the nuclei was resuspended in buffer 2 (150 mM NaCl, 50 mM HEPES pH 7.4, 0.5% sodium deoxycholate, 0.1% sodium dodecyl sulfate, cOmplete protease inhibitor cocktail) and incubated at 4 °C for 1 h. After centrifugation and ultrasound treatment, the supernatant contained the proteins of the nuclear fraction. Tumor tissue was disrupted using the TissueLyser II (Qiagen, Hilden, Germany) and lysed with 50 mM Tris·HCl, pH 8.0, 150 mM sodium chloride, 5 mM magnesium chloride, 1% Triton X-100, 0.5% sodium deoxycholate, 0.1% SDS, 40 mM sodium fluoride, 1 mM sodium orthovanadate, and cOmplete protease inhibitor cocktail for 1 h at 4 °C. After centrifugation, the supernatant contained the extracted proteins. The protein concentration of the samples was measured by Bradford assay.
4.9. Western Blot Analysis {#sec4dot9-ijms-20-03027}
--------------------------
The lysates were loaded on SDS-PAGE gels, followed by blotting onto polyvinylidene difluoride membranes (BioRad Laboratories, Inc., CA, USA). The following antibodies were obtained from Cell Signaling Technology (MA, USA): GAPDH (14C10) (cat\# 2118, 1:1000 dilution), Lamin B1 (D4Q4Z) (cat\# 12586, 1:1000 dilution), IGF1 Receptor β (D23H3) XP® (cat\# 9750, 1:1000 dilution), Phospho-IGF1 Receptor β (Tyr1131)/Insulin Receptor β (Tyr1146) (cat\# 3021, 1:1000 dilution), AKT (cat\# 9272, 1:1000 dilution), and Phospho-AKT (Ser473) (cat\# 9271, 1:1000 dilution). The antibodies ERK1/ERK2 (cat\# MAB1576, 0.5 µg/mL dilution) and Phospho-ERK1 (T202/Y204)/ERK2 (T185/Y187) (cat\# MAB1018, 0.5 µg/mL dilution) were obtained from R&D Systems Inc. (MN, USA). The anti-BCOR antibody (cat\# ab88112, 0.5 µg/mL dilution) was purchased from Abcam (Cambridge, UK). The HRP-linked secondary antibodies were obtained from Cell Signaling Technology (MA, USA) (anti-mouse IgG, cat\# 7076) and Sera Care (MA, USA) (anti-rabbit IgG, cat\# 074-1516). Detection was done with SuperSignal™ West Dura Extended Duration Substrate (Thermo Fisher Scientific, MA, USA), and imaging was performed on a Fusion Pulse TS (Vilber Lourmat, Eberhardzell, Germany).
4.10. Transmission Electron Microscopy (TEM) {#sec4dot10-ijms-20-03027}
--------------------------------------------
Transmission electron microscopy was performed with an EM 410 (Philips). Samples of 0.5 × 0.5 cm were fixed in buffered glutaraldehyde (2.5%) overnight, post-fixed with OsO4, and then embedded in Agar 100 (Plano), which was left to polymerize for at least 24 h. Ultrathin sections were cut with an Ultracut E microtome (Leica). Visualization took place at 80 kV, and images were photodocumented.
4.11. DNA Methylation Analysis {#sec4dot11-ijms-20-03027}
------------------------------
DNA methylation analyses were performed using an EPIC 850k DNA methylome chip (Illumina, San Diego, USA). We used standard protocols for tissue and DNA processing. Hybridization and processing of the chips were performed as indicated by the manufacturer. Data were preprocessed using Illumina Genome Studio, and further analysis was performed after uploading raw data (idat files) onto the platform MolecularNeuroPathology.org \[[@B14-ijms-20-03027]\].
We are grateful to the patients and families who contributed tumor tissue for the further study of this tumor entity.
Supplementary Materials can be found at <https://www.mdpi.com/1422-0067/20/12/3027/s1>.
######
Click here for additional data file.
Conceptualization, C.P.; Formal analysis, P.N.H., K.F., and C.P.; Investigation, N.V., S.H., L.S., H.B., C.S., N.L., N.B., L.R., P.N.H., and K.F.; Methodology, N.V.; Project administration, C.P.; Resources, A.R., F.A., C.S., and D.S.; Supervision, J.F.; Visualization, N.V., P.N.H., K.F., and C.P.; Writing -- original draft, C.P.; Writing -- review & editing, N.V., A.R., D.S., P.N.H., and K.F.
This research received no external funding.
The authors declare no conflict of interest.
ABCB1: ATP-Binding Cassette Subfamily B Member 1
ABCG2: ATP-Binding Cassette Subfamily G Member 2
AKT: Protein kinase B
ALK: Anaplastic Lymphoma Kinase
BCOR: BCL-6 Co-Repressor
BCRP: Breast Cancer Resistance Protein
CNS-PNET: Primitive neuroectodermal tumor of the central nervous system
ERK: Extracellular signal-regulated Kinase
HGNET: High-grade neuroepithelial Tumor
IGF: Insulin-like growth factor
IGF1R: Insulin-like growth factor 1 receptor
ITD: Internal Tandem Duplication
MB: Medulloblastoma
MEK: Mitogen/Extracellular signal-regulated Kinase
mTOR: Mechanistic Target Of Rapamycin
NSCLC: Non-small cell lung cancer
P-GP: P-Glycoprotein
PI3K: Phosphatidylinositol 3-kinase
qRT-PCR: Quantitative Real-Time Polymerase Chain Reaction
RAF: Rapidly Accelerated Fibrosarcoma Kinase
RAS: Rat Sarcoma Oncogene
ROS1: c-ros oncogene 1
SHH: Sonic Hedgehog
URCSI: Soft tissue undifferentiated round cell sarcoma of infancy
Table 1. Overview of chemotherapeutics and insulin-like growth factor 1 receptor (IGF1R) inhibitors tested in vitro on PhKh1 cells. The IC~50~, drug class, mechanism of action, and the tumor entity generally treated with each drug are indicated. FDA: Food and Drug Administration.
| Drug | IC~50~ nM (±SD) | Class | Mechanism of Action | Entity |
|---|---|---|---|---|
| Actinomycin D (*n* = 3) | \<1 | Antibiotic | Intercalation into DNA | Sarcoma |
| Vinblastine (*n* = 2) | \<1 | *Vinca* alkaloid | Binds tubulin | Sarcoma |
| Vincristine (*n* = 3) | 6.4 (±4) | *Vinca* alkaloid | Binds tubulin | Sarcoma, brain tumors |
| Doxorubicin (*n* = 4) | 89.3 (±65) | Anthracycline | Intercalation into DNA | Sarcoma |
| Ceritinib (*n* = 5) | 277 (±99) | Kinase inhibitor | ALK, ROS1, IGF1R inhibitor | Lung cancer |
| Linsitinib (*n* = 3) | 516.7 (±89) | Kinase inhibitor | IGF1R inhibitor | Not yet FDA-approved |
| Picropodophyllin (*n* = 3) | 475.1 (±111) | Kinase inhibitor | IGF1R inhibitor | Not yet FDA-approved |
| Etoposide (*n* = 3) | 5808 (±795) | Podophyllotoxin derivative | Forms complex with topoisomerase II | Sarcoma |
| Cisplatin (*n* = 3) | 6976 (±3100) | Platinum derivative | Binds to DNA | Sarcoma, brain tumors |
| Carboplatin (*n* = 3) | \>10,000 | Platinum derivative | Binds to DNA | Brain tumors |
| PQ401 (*n* = 3) | \>10,000 | Kinase inhibitor | IGF1R inhibitor | Not yet FDA-approved |
[^1]: These authors contributed equally to this work.
|
Background {#Sec1}
==========
Amyloidosis is a group of disorders caused by protein misfolding and aggregation. Systemic amyloidosis is a consequence of circulating amyloidogenic protein monomers which deposit in various tissues with variable affinities, causing tissue damage and multi-organ dysfunction. The term amyloid is a misnomer, based on Rudolf Virchow mistakenly identifying the material as starch (amylum) \[[@CR1]\]. More than 40 different proteins have been identified as precursors for amyloid formation in humans \[[@CR2]\]. The most common forms of systemic amyloid are AL amyloid seen in plasma cell dyscrasias, AA amyloid associated with inflammatory conditions, and TTR amyloidosis due to either familial gene mutation or wild-type protein, formerly called senile amyloidosis. The clinical picture and prognosis of amyloidosis depend on the organ(s) affected and on whether organ dysfunction can be identified by symptoms or quantified by functional testing; the heart, kidney, and nervous system are frequently involved and trigger testing.
Cardiac involvement is manifest in approximately one-third to one-half of all AL patients at the time of diagnosis \[[@CR3]\]. The predominant presenting symptom is rapidly progressive heart failure with preserved ejection fraction. Unfortunately, symptoms of cardiac decompensation are also a major risk factor for mortality. Kyle et al. showed that in 168 patients with systemic amyloidosis, those who presented with congestive heart failure had a median survival of 4 months \[[@CR4]\]. Renal amyloidosis is frequently, but not necessarily always, associated with high-grade proteinuria. The trigger for amyloid workup is usually nephrotic syndrome and/or decline of renal function. Renal amyloidosis is not as rare as it was once thought to be. Retrospective analysis of 17 years of renal biopsies in the Czech Republic revealed that 43% of cases of nephrotic syndrome above the age of 60 were due to amyloidosis \[[@CR5]\]. Neuropathy is another significant manifestation of neuron-avid monomers, which can be devastating, leading to peripheral and/or autonomic neuropathies with crippling manifestations \[[@CR6], [@CR7]\].
The frequently observed rapid functional recovery of kidney and heart with therapies that limit or abolish monomer production is often attributed to direct monomer and/or oligomer toxicity \[[@CR8], [@CR9]\]. But it also underscores the imperative of establishing the diagnosis of amyloid as early as possible so that effective therapy can be instituted. Unfortunately, the diagnosis of amyloid in tissue sections is challenging. Identification of fibrils by electron microscopy (EM) is highly specific, but because of the patchy nature of the disease and the magnification level of EM, the sensitivity is very poor unless a region of interest can be identified using another method. Thioflavin-T is very sensitive \[[@CR10], [@CR11]\] but its specificity is not trusted by some experts. Interpretation of Thioflavin-T staining is also plagued by its tendency to bleach, as well as by the subjective assessment of intensity based on comparison to a very strong positive control. Congo red, despite having lower sensitivity, is the standard agent used to identify amyloid in tissues. While the apple-green birefringence seen under crossed polarized light is specific for amyloid material, staining with Congo red is technically difficult, resulting in inconsistent staining. Moreover, variation in mounting media and limitations of the examining microscopes increase both false-negative and false-positive results. Pitfalls of the staining techniques have been addressed elsewhere \[[@CR12]--[@CR14]\]. In this paper we show that the use of a microscope built specifically for polarized light increases the sensitivity of identifying amyloid in Congo red stained sections. We also describe minor pitfalls in the examination of Congo red stained slides.
Methods {#Sec2}
=======
The metallurgical microscope used in the current report is built specifically for polarized microscopy. These microscopes are widely available from different manufacturers and cost between \$13,000 and \$20,000, which is comparable to standard clinical microscopes. The microscope is equipped with strain-free objectives, condenser, and eyepieces. The strain-free optics are critical to eliminate the false optical effects generated by stressed glass under polarized light. These spurious artifacts can interfere with the ability to evaluate the birefringence produced by Congo red stained amyloid deposits. The condenser, besides having strain-free lenses, is designed to produce a perfectly parallel beam of light. One of the most important features of a polarized microscope is the 360° circular rotating stage. This allows easy and full rotation of each examined field in the specimen. The microscopes we used have a fixed polarizer fitted in the condenser and an analyzer with a precise degree dial for cross setup. The illustration accompanying the text is generic and can apply to any polarized microscope (Microscope illustration).
Such a microscope is ideal for polarized microscopy and is superior to a clinical microscope for examining Congo red stained sections. We obtained similar results using similar microscopes from Leica and Nikon. Using a Sénarmont compensator did not provide added benefit. The same slides were also examined using a clinical microscope in which the analyzer has a built-in λ compensator, which is common equipment sold with clinical microscopes. The λ compensators are useful in examining crystals such as uric acid but can hamper detection of amyloid deposits. All the polarized microscopes were equipped with a circular rotating mechanical stage.
We specifically selected cardiac, salivary gland, and brain biopsies, which were initially deemed negative when examined by clinical microscope but were later confirmed positive for amyloidosis. These cases cover 3 types of amyloid: AL, ATTR, and FGA (Table [1](#Tab1){ref-type="table"}). Regardless of the amyloid type, the polarized microscope was superior to the clinical microscope. Commercially purchased positive controls were also examined to demonstrate the validity of the techniques. Although Congo red staining techniques vary from lab to lab, we found no significant difference between the in-house prepared stain and the Leica and Dako stains.
Table 1. List of reviewed cases, tissue examined, and amyloid type.
Results {#Sec3}
=======
We found that the apple green birefringence is more readily visible and of higher intensity when the slides are examined using a metallurgical microscope compared to the standard clinical microscope. Fig. [1](#Fig1){ref-type="fig"} shows an image of biopsies examined by the metallurgical microscope; for comparison purposes, the same representative field imaged with a standard microscope is shown. Additional file [1](#MOESM1){ref-type="media"} has a series of biopsy samples examined by clinical and metallurgical microscope. As it is known that Congo red can be fluorescent, we examined the slides with a fluorescent microscope using a Texas red filter. Figure [2](#Fig2){ref-type="fig"} shows a representative image displaying the red fluorescence of Congo red in tissues. Fluorescence is more sensitive but less specific for amyloid stained with Congo red \[[@CR15]\].
Fig. 1. Congo red stained salivary gland section examined under crossed polarized light from a patient with AL amyloid, imaged using a clinical microscope (A) and the same field examined using a metallurgical microscope.
Fig. 2. Congo red examined with a fluorescent microscope using a Texas red filter.
A plastic coverslip can obliterate the ability to examine slides properly {#Sec4}
-------------------------------------------------------------------------
We examined 2 samples (kidney and heart) from 2 different institutions where plastic coverslips were used. We found that plastic coverslips scatter light and inconsistently polarize it in a pattern that makes it impossible to cross the microscope polarizer and analyzer properly to obtain a dark field. Figure [3](#Fig3){ref-type="fig"}a shows a comparison of the light passing through a slide with a plastic coverslip at the edge of the cover. The plastic coverslip allowed the light to go through while the glass did not. This prevents examining the slide under crossed polarized light. Unfortunately, the outside labs had exhausted both biopsy tissues from these patients, and it was not possible to procure more material. To further examine the effect of plastic coverslips, we obtained coverslips from 2 different manufacturers. Both had an adverse effect on the examination of Congo red stained sections. Figure [3](#Fig3){ref-type="fig"}b shows a positive control viewed through glass and plastic coverslips. We also observed that the effect of the plastic differed depending on the manufacturer and changed when the slides were rotated. A video recording of rotated slides covered with glass or plastic coverslips is included in Additional file [1](#MOESM1){ref-type="media"}.
Fig. 3. **a** Negative impact of plastic coverslips in polarized microscopy: due to the disorganized polarizing effect of some plastic polymers, it is difficult to obtain proper crossing of the polarized light; light passes through on the coverslip side (1), while a proper dark field is obtained without the coverslip (2). **b** Negative impact of a plastic coverslip in polarized microscopy, showing a positive control sample examined with a glass coverslip (left) or a plastic coverslip (right).
Rotating slides using a circular stage is important {#Sec5}
---------------------------------------------------
The orientation of the Congo red stained amyloid fibrils in relation to the plane of the light path can alter detection. Figure [4](#Fig4){ref-type="fig"} shows samples imaged, then re-imaged after rotating the stage. It is evident that the apple green birefringence can be seen only at one angle. This highlights the need to use a mechanical circular stage, which is not found on clinical microscopes.
Fig. 4. A sample imaged, then re-imaged after rotating the stage: (a) rotated 45 degrees and (b) rotated 60 degrees. The green birefringence is no longer visible, as indicated by the arrows.
Blue hue {#Sec6}
--------
A common clinical analyzer has a polarizing filter with a built-in compensator which usually adds color and is often used for crystal examination. The use of such an analyzer in examining Congo red stained specimens frequently leads to a field that is excessively bright and sometimes forces the observer to partially uncross the analyzer in order to accentuate the apple green birefringence. However, this results in the appearance or enhancement of a bluish-green hue (Fig. [5](#Fig5){ref-type="fig"}), mainly due to collagen and other matrix proteins \[[@CR16], [@CR17]\].
Fig. 5. Congo red stained slide imaged using a standard microscope with an analyzer that has a built-in compensator. Blue hue (black arrows) compared to apple green (frame arrows). Sometimes this is caused by partial uncrossing of an analyzer with a built-in compensator.
Discussion {#Sec7}
==========
In this report, we show that suitable microscopy equipment can increase the sensitivity of identifying the amyloid-specific birefringence in Congo red-stained tissue sections. Early diagnosis of systemic amyloidosis is essential to reducing the morbidity and mortality of the disease. Despite the seriousness of the disease and the benefit of early detection, an accurate pathologic diagnosis is still challenging. The spotty nature of the disease, organ-to-organ variation in the density of amyloid deposits, and the difficulty of reproducible tissue staining all increase the odds of false negative and false positive results. A negative tissue pathology report can effectively exclude the diagnosis of amyloidosis, which is then frequently never reconsidered among the differential diagnoses.
The standard of care for multiple myeloma and monoclonal and polyclonal gammopathy is conservative follow-up, unless there is identifiable end-organ damage or amyloidosis \[[@CR18]\]. Accordingly, missing an early diagnosis of amyloidosis can deprive a patient of lifesaving treatment and can lead to costly and sometimes invasive investigations to pursue alternative diagnoses. In the case of transthyretin or fibrinogen amyloidosis, early liver transplantation usually arrests disease progression and can even be curative \[[@CR19]--[@CR21]\]. Therefore, even a marginal improvement in the sensitivity of detection of amyloid in tissue specimens will help assure that patients with this serious and frequently fatal disease are treated promptly and receive accurate prognostic information.
The real prevalence of amyloidosis is not known. A retrospective evaluation of kidney biopsies suggests that amyloidosis is not as rare as it is thought to be, accounting for 43% of cases of nephrotic proteinuria above the age of 60 \[[@CR5]\].
In a single-center experience, 31% of patients with multiple myeloma had confirmed evidence of amyloidosis \[[@CR22]\]. The prevalence of multiple myeloma is dwarfed by the prevalence of monoclonal gammopathy, which can be as high as 8.4% depending on race \[[@CR23]--[@CR25]\].
When the diagnosis is missed, discipline-specific bias leads to ascribing the organ dysfunction to other causes: renal disease and neuropathy to diabetes, and cardiac symptoms to hypertension or ischemia. Yet there are no systematic data examining the accuracy of these presumptive etiologies, and it is not inconceivable that some fraction of these patients may be incorrectly classified.
Owing to the patchy nature of amyloidosis, especially during its early stages, amyloid deposition can be restricted to just a small area of the tissue biopsy, visible only at a limited angle of slide viewing. Thorough examination of each section using a mechanical rotating stage to view slides at variable angles is essential to avoid missing such deposits. We also recommend that plastic coverslips be avoided, as they can interfere with the ability to perform crossed polarized light examination and reduce the ability to identify subtle or low-density amyloid deposits. Low-density deposits are enough to make the diagnosis, given the patchy nature of the disease. When a sample is deemed negative or equivocal, there is a need to follow previously published modifications, like the use of polar mounting media or omitting the alcohol differentiation step when examining collagen-rich tissue to avoid interference \[[@CR26]--[@CR29]\]. Finally, the use of proper optics, like those of a metallurgical microscope, is essential to avoid missing the presence of small deposits of amyloid in Congo red-stained tissue.
Conclusions {#Sec8}
===========
There is variability in the reporting of Congo red-stained slides between different labs and pathologists. We identified important pearls that can improve the ability to identify amyloid material in Congo red-stained tissues. We found that it is critical to use a microscope with proper strain-free optics and to avoid the use of a polarizer with a built-in compensator. The use of a mechanical rotating stage will reduce the chance of missing subtle or low-level amyloid deposits, which can produce birefringence only at specific angles. Last, plastic cover slips can make it impossible to examine the slides under crossed polarized light. Improving the sensitivity of the Congo red evaluation can aid in early diagnosis of amyloidosis and can have a tremendous impact on the clinical outcome of some patients.
Additional file
===============
{#Sec9}
Additional file 1: A video recording of Congo red-stained tissue covered by either glass or plastic coverslips and rotated. (PPTX 11971 kb)
AA
: Amyloid-A
AL
: Amyloid Light chain
EM
: Electron microscopy
TTR
: Transthyretin
We would like to thank Zeiss USA, Leica USA, and Nikon USA for allowing us to compare their polarized light microscopes.
Funding {#FPar1}
=======
No funding. Any additional cost was covered by the submitting author's personal funds.
Availability of data and materials {#FPar2}
==================================
Supplemental data included.
AE carried out design, microscope testing, image capture, image comparison, manuscript drafting, and figure generation. CM contributed cardiac amyloid patient information and manuscript editing. KI contributed the false-negative-rate alert that triggered the work, clinical microscopy image capture, image comparison evaluation, and manuscript editing. All authors read and approved the final manuscript.
NA, all the samples used were part of standard clinical care of subjects.
NA
The authors declare that they have no competing interests.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
|
Q:
"Last 100 bytes" Interview Scenario
I got this question in an interview the other day and would like to know some of the best possible answers (I did not answer very well, haha):
Scenario: There is a webpage that is monitoring the bytes sent over a network. Every time a byte is sent, the recordByte() function is called with that byte; this could happen hundreds of thousands of times per day. There is a button on this page that, when pressed, displays the last 100 bytes passed to recordByte() on screen (it does this by calling the print method below).
The following code is what I was given and asked to fill out:
public class networkTraffic {
    public void recordByte(Byte b){
    }
    public String print() {
    }
}
What is the best way to store the 100 bytes? A list? Curious how best to do this.
A:
Something like this (circular buffer) :
byte[] buffer = new byte[100];
int index = 0;

public void recordByte(Byte b) {
    buffer[index] = b;          // write first...
    index = (index + 1) % 100;  // ...then advance to the next slot
}

public void print() {
    // after a write, index points at the oldest byte, so this prints in arrival order
    for (int i = index; i < index + 100; i++) {
        System.out.print(buffer[i % 100]);
    }
}
The benefits of using a circular buffer:
You can reserve the space statically. In a real-time network application (VoIP, streaming, ...) this is often done because you don't need to store all the data of a transmission, only a window containing the newest bytes to be processed.
It's fast: it can be implemented with an array, with read and write costs of O(1). (A runnable version follows below.)
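A minimal, self-contained sketch to sanity-check the wraparound (the class name, the String-returning print() matching the scaffold, and the demo values are all illustrative assumptions):

import java.util.Arrays;  // not required; shown bare for clarity

public class CircularBufferDemo {
    private final byte[] buffer = new byte[100];
    private int index = 0;

    public void recordByte(byte b) {
        buffer[index] = b;          // write, then advance
        index = (index + 1) % 100;
    }

    public String print() {
        StringBuilder sb = new StringBuilder();
        // once the buffer has wrapped, index points at the oldest byte
        for (int i = index; i < index + 100; i++) {
            sb.append(buffer[i % 100]).append(' ');
        }
        return sb.toString().trim();
    }

    public static void main(String[] args) {
        CircularBufferDemo demo = new CircularBufferDemo();
        for (int i = 0; i < 250; i++) {
            demo.recordByte((byte) i);  // values wrap past 127, which is fine for a demo
        }
        System.out.println(demo.print());  // the last 100 bytes recorded, oldest first
    }
}

One caveat worth raising in an interview: until 100 bytes have been recorded, the untouched slots print as zeros; keeping a count alongside index fixes that.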
A:
I don't know java, but there must be a queue concept whereby you would enqueue bytes until the number of items in the queue reached 100, at which point you would dequeue one byte and then enqueue another.
public void recordByte(Byte b)
{
    if (queue.ItemCount >= 100)
    {
        queue.dequeue();
    }
    queue.enqueue(b);
}
You could print by peeking at the items:
public String print()
{
    foreach (Byte b in queue)
    {
        print("X", b); // some hexadecimal print function
    }
}
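For reference, here is a sketch of how that might look in actual Java, assuming an ArrayDeque as the queue (the class name NetworkTraffic and the hex formatting are illustrative, not part of the original scaffold):

import java.util.ArrayDeque;
import java.util.Deque;

public class NetworkTraffic {
    private final Deque<Byte> queue = new ArrayDeque<>(100);

    public void recordByte(Byte b) {
        if (queue.size() >= 100) {
            queue.pollFirst();  // drop the oldest byte
        }
        queue.addLast(b);
    }

    public String print() {
        StringBuilder sb = new StringBuilder();
        for (Byte b : queue) {                     // ArrayDeque iterates head to tail: oldest first
            sb.append(String.format("%02X ", b));  // hexadecimal, as in the pseudocode
        }
        return sb.toString().trim();
    }
}

ArrayDeque is generally preferable to a linked list here: no per-node allocation, and both ends operate in amortized O(1).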
A:
Circular Buffer using array:
Array of 100 bytes
Keep track of where the head index is i
For recordByte(), put the current byte in A[i] and set i = (i + 1) % 100
For print(), return subarray(i, 100) concatenated with subarray(0, i); after the write, i already points at the oldest byte (as shown in the sketch after this list)
Queue using linked list (or the java Queue):
For recordByte() add new byte to the end
If the new length would be more than 100, remove the first element
For print() simply print the list
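A compact sketch of the array variant described above, assuming Arrays.copyOfRange and System.arraycopy for the two "subarray" pieces (class and method names are illustrative):

import java.util.Arrays;

public class LastHundredBytes {
    private final byte[] a = new byte[100];
    private int i = 0;  // head index: next write slot, and the oldest byte once wrapped

    public void recordByte(byte b) {
        a[i] = b;
        i = (i + 1) % 100;
    }

    public byte[] print() {
        // subarray(i, 100) concatenated with subarray(0, i), oldest first
        byte[] out = Arrays.copyOf(Arrays.copyOfRange(a, i, 100), 100);
        System.arraycopy(a, 0, out, 100 - i, i);
        return out;
    }
}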
|
@using AddingDefaultSecurityHeaders
@addTagHelper "*, Microsoft.AspNetCore.Mvc.TagHelpers"
|
In a move to further establish the Islamic caliphate, ISIS militants have imposed the mandatory removal of all Syriac and Christian teachings from school curricula.
Educational institutes across Mosul and Nineveh Plain bearing Christian names, such as the St Thomas Christian school, will also be renamed under the new policy issued by the extremist group on September 5.
The statement instructs schools to "teach and serve the Muslims in order to improve the people of the Islamic state in the fields of all religious and other sciences."
"This announcement is binding," the group warned. "Anyone who acts against it will face punishment."
According to Agenzia Fides, the abolition of the teachings is part of the group's plan to propagate its jihadist ideology among younger generations.
However, numerous families have reportedly refrained from sending their children to school since the new teaching year began last Monday, out of fear and uncertainty.
"What's important to us now is that the children continue receiving knowledge correctly, even if they lose a whole academic year and an official certification," a Mosul resident told the AP.
The Ministry of Education in Iraq introduced the Syriac language and teaching of Christianity to public schools last year in a bid to preserve the culture and languages of pre-existing indigenous Christian groups.
Such communities have faced a drastic fall in numbers due to a surge of emigration after the fall of the country's Baath Party.
|
Fluorescent location of human colonic tumour cells by means of an enzyme-inhibitor complex.
We studied the enzymic status of the tumour cell surface protease, guanidinobenzoatase (GB) in frozen sections of a human colonic tumour grown in nude mice and also in human colons. Active enzyme was demonstrated by the binding of a synthetic fluorescent probe for the active centre of guanidinobenzoatase (GB). It was observed that tissue derived inhibitors of GB blocked the binding of this fluorescent probe and that enzyme inhibitor complex formation could be controlled by lowering the pH of the medium with lactic acid. The presence of an inhibitor of GB in the mouse tumour extract was taken advantage of by making two fluorescent derivatives of this inhibitor; both of which located GB on colonic tumour cells in frozen sections of human colon.
|
[Effect of the yellow locus on sensitivity of drosophila sex cells to chemical mutagens].
The effect of the yellow (y) locus on germ cell sensitivity to the alkylating agent ethyl methanesulfonate (EMS) has been studied in Drosophila. Since DNA repair is one of the most important factors that control cell sensitivity to mutagens, the approaches used in our experiments aimed at evaluating the relationship between germ-cell mutability and activity of DNA repair. Germ-cell mutability and repair activity were assessed using several parameters, the most important of which was the frequency of the recessive sex-linked lethal mutations (RSLLM). In one series of experiments, the adult males of various genotypes (Berlin wild; y; y ct v; y mei-9a) were treated by mutagenic agents and then crossed to Basc females. Comparative analysis of germ-cell mutability as dependent on genotype and the stage of spermatogenesis showed that the yellow mutation significantly enhanced the premeiotic cell sensitivity to EMS, presumably, due to the effect on DNA repair. In the second series of experiments, the effect of the maternal DNA repair was studied and, accordingly, mutagen-treated Basc males were crossed to females of various genotypes including y and y mei-9a ones. The crosses involving y females yielded F1 progeny with high spontaneous lethality, whereas in F2, the frequency of spontaneous mutations was twice higher. The germ cell response to EMS depended also on female genotype: the effect of yellow resulted in increased embryonic and postembryonic lethality, whereas the RSLLM frequency decreased insignificantly. The latter result may be explained by elimination of some mutations due to 50% mortality of the progeny. The results obtained using the above two approaches suggest that the yellow locus has a pleiotropic effect on the DNA repair systems in both males and females of Drosophila.
|
Coinverse has launched a new bitcoin banking platform that aims to bring consumer and merchant bitcoin transactions and bitcoin bill payment to the broader Latin American market.
The Brazil-based startup sees an unmet demand for user-friendly bitcoin solutions, asserting that its domestic competitors are better catered to experienced bitcoin users and speculative traders.
Speaking to CoinDesk, Coinverse financial director Safiri Felix estimated that more than 90% of Brazilians that buy bitcoin view the technology as an investment. As such, he argues there are few existing services that seek to utilize bitcoin’s underlying technology as a way to transfer value.
“We wanted to create something that would let us reach the regular user – not just the speculative users and traders,” he said, adding:
“We’re trying to make bitcoin accessible to mainstream people.”
Settling boletos in bitcoin
The Coinverse platform gives users in the Brazilian market buy-and-sell utility, and also provides them the opportunity to pay off their boletos bancários using bitcoin.
A boleto bancário is a government-issued financial document comparable to an invoice. Consumers may select this payment option when making online purchases. The boleto specifies an amount of money owed to the merchant, to be paid within a certain period of time.
Boletos can be paid through online banking, or in person at any bank, banking agency, supermarket, post office or other merchant client registered to accept boleto payments.
Now, customers can also submit their boletos to Coinverse and pay them in bitcoin. In turn, Coinverse will pay the issuing merchant the owed amount in fiat currency.
Solving merchant adoption
Felix also views merchants as essential to wider bitcoin adoption in Brazil, and aims to offer a processing solution to these customers.
“Here in Brazil, we have the need to encourage merchant adoption,” he said. “That’s the reason we decided to implement the solution with boleto. Because with this solution, the Brazilian consumer can buy almost anything over Internet commerce and pay using bitcoins, if not directly.”
Until the recent entrance into this market by competitor BitInvest and now Coinverse, however, he said that there had yet to be a compelling service that could encourage these businesses to get involved with bitcoin.
He added that he believes Latin American merchants are curious about adopting bitcoin, but surmised that many are waiting for bigger companies to move first on the initiative.
A consumer-friendly approach
Felix pointed to spending statistics that suggest Brazilians do much of their shopping abroad as evidence that bitcoin could become a compelling solution for domestic commerce.
Further, he believes that when bitcoin is more broadly accepted as a currency worldwide, users will begin to buy it before traveling or purchasing goods and services abroad, creating demand in Brazil.
Before the launch of its universal service, Coinverse operated a bitcoin ATM manufactured by Genesis Coin. However, the company has no immediate plans to launch a wider network of these machines.
Rather, Coinverse sought to use its ATM as a means of meeting and becoming acquainted with bitcoin community members, with whom they could discuss how best to position a bitcoin solution for the general consumer market.
Felix said:
“Our focus is to expand the market and to make digital currencies more friendly.”
Additional reporting contributed by Pete Rizzo.
Images via Coinverse; Shutterstock
|
#!/usr/bin/env ruby
APP_PATH = File.expand_path('../config/application', __dir__)
require_relative '../config/boot'
require 'rails/commands'
|
IN THE COURT OF APPEALS OF THE STATE OF IDAHO
Docket No. 46533
STATE OF IDAHO, )
) Filed: March 2, 2020
Plaintiff-Respondent, )
) Karel A. Lehrman, Clerk
v. )
) THIS IS AN UNPUBLISHED
BRIAN RAY McGRAW, ) OPINION AND SHALL NOT
) BE CITED AS AUTHORITY
Defendant-Appellant. )
)
Appeal from the District Court of the Fourth Judicial District, State of Idaho, Ada
County. Hon. Michael J. Reardon, District Judge.
Judgment of conviction for possession of a controlled substance, affirmed.
Eric D. Fredericksen, State Appellate Public Defender; Andrea W. Reynolds,
Deputy Appellate Public Defender, Boise, for appellant.
Hon. Lawrence G. Wasden, Attorney General; Ted S. Tollefson, Deputy Attorney
General, Boise, for respondent.
________________________________________________
LORELLO, Judge
Brian Ray McGraw appeals from his judgment of conviction for possession of a
controlled substance. McGraw argues that the district court lacked subject matter jurisdiction
when it accepted his plea and at sentencing. We affirm.
I.
FACTUAL AND PROCEDURAL BACKGROUND
This is the second appeal in this case. In the first appeal, we reversed the district court’s
order suppressing evidence discovered during a vehicle search and remanded the case for further
proceedings. State v. McGraw, 163 Idaho 736, 741, 418 P.3d 1245, 1250 (Ct. App. 2018). The
opinion stated, in relevant part, that “the district court erred in granting Killeen’s and McGraw’s
motions to suppress and in dismissing their cases on that basis.” Id.
After remand, McGraw pled guilty to possession of a controlled substance,
I.C. § 37-2732(c). During sentencing, in discussing what would be the appropriate amount of
credit for time served, counsel for McGraw noted the procedural posture of the case was “weird”
because the district court granted an oral motion to dismiss the case after the State indicated that
it could not proceed without the evidence that was the subject of McGraw’s motion to suppress.1
In response, the district court noted it set aside its order of dismissal. 2 Counsel for McGraw then
noted that the remittitur did not set aside the dismissal, although he said it “would make sense
that the order to dismiss be set aside.” After the parties presented their sentencing arguments, the
district court inquired whether there was any legal cause why judgment should not be entered.
Although both parties responded, “No,” the district court noted its concern over whether the
State was required to refile because this Court “did not address the dismissal.” Before imposing
sentence, the district court requested briefing on the procedural posture of the case following
remand.
Pursuant to the district court’s request, the State submitted a brief arguing that it was not
necessary to refile the charging document in this case following remand because, once the case
was remanded, the case was reinstated to the procedural posture prior to the order granting
suppression. Counsel for McGraw did not file a brief, but indicated counsel agreed with the
State’s legal analysis. Ultimately, the district court agreed with the State as well. The district
court thereafter imposed sentence and entered a judgment of conviction. McGraw appeals.
1 There was no written order dismissing McGraw’s case. Rather, both the motion to
dismiss and the court’s decision to grant the motion were oral. While an order granting a motion
to dismiss is appealable, a written order is required in order to appeal such a determination.
I.A.R. 11 (identifying appellate judgments and orders and requiring that a copy of the order or
judgment be attached to the notice of appeal); I.A.R. 14(a) (an appeal may be made by filing a
notice of appeal within forty-two days from the date of the filing stamp of the clerk of the court
on any judgment or order of the district court). The written order appealed by the State was the
district court’s order granting McGraw’s motion to suppress, which reads: “[T]he motion to
suppress evidence is granted.” The order was silent as to dismissal.
2 There is nothing in the record that indicates when the order of dismissal was set aside,
perhaps because of the oral nature of the order in the first instance.
II.
STANDARD OF REVIEW
A question of jurisdiction is fundamental, cannot be ignored, and should be addressed
before considering the appeal’s merits. State v. Kavajecz, 139 Idaho 482, 483, 80 P.3d 1083,
1084 (2003). The question of subject matter jurisdiction is a question of law over which this
Court exercises free review. State v. Svelmoe, 160 Idaho 327, 330, 372 P.3d 382, 385 (2016).
III.
ANALYSIS
McGraw argues that the district court erred in concluding that it had subject matter
jurisdiction over this case upon remand after McGraw. Specifically, McGraw argues that the
State had to file a new charging document before proceeding against him because our opinion in
McGraw did not expressly set aside the district court’s prior order dismissing the case. 3 The
State argues that the district court correctly determined it had subject matter jurisdiction because
our opinion in McGraw returned the case to the posture it was in prior to the order granting
McGraw’s motion to suppress. We hold that the district court had subject matter jurisdiction to
enter judgment against McGraw on remand without the State refiling a new charging document.
Subject matter jurisdiction refers to a court’s abstract power to hear cases of a certain
class or character. State v. Jakoski, 139 Idaho 352, 355, 79 P.3d 711, 714 (2003). A court
cannot enter a judgment against a defendant in the absence of subject matter jurisdiction. See
State v. Branigh, 155 Idaho 404, 411, 313 P.3d 732, 739 (Ct. App. 2013). One way Idaho courts
obtain subject matter jurisdiction over criminal cases is through the filing of an information or
complaint alleging an offense was committed within the state of Idaho. State v. Rogers, 140
Idaho 223, 228, 91 P.3d 1127, 1132 (2004). Absent a statute or rule extending jurisdiction, a
district court’s jurisdiction over a criminal case terminates when a dismissal order becomes final.
State v. Johnson, 152 Idaho 41, 47, 266 P.3d 1146, 1152 (2011). Generally, upon issuance of a
remittitur from an appellate court, the jurisdiction of a district court reattaches. State v. Billups,
3 It is unclear how McGraw can reconcile his insistence that his case was dismissed with
prejudice with his assertion that the State could and was required to refile the charge against him
following remand. Because we conclude that the State was not required to refile, we need not
resolve this apparent inconsistency.
163 Idaho 889, 891, 421 P.3d 220, 222 (Ct. App. 2018). Under I.A.R. 38(c), district courts then
have the authority to take those actions that are consistent with and necessary to comply with the
appellate court’s opinion. Billups, 163 Idaho at 891, 421 P.3d at 222; State v. Bosier, 149 Idaho
664, 667, 239 P.3d 462, 465 (Ct. App. 2010).
This Court’s opinion in McGraw concluded the district court erred in granting McGraw’s
motion to suppress and dismissing his case on this basis. The actions consistent with and
necessary to comply with our opinion required reinstating McGraw’s case to its presuppression
and predismissal status. See Billups, 163 Idaho at 892, 421 P.3d at 223 (noting that, although the
appellate opinion did not include a specific directive to remand, reversal of defendant’s
conviction based on the conclusion that the defendant’s pretrial motion to suppress should have
been granted returned the case to its status prior to denial of the suppression motion). That is
what occurred in this case. Nothing in our opinion in McGraw stripped the district court of its
jurisdiction over this case. See Billups, 163 Idaho at 893, 421 P.3d at 224. Thus, the district
court had jurisdiction to accept McGraw’s guilty plea, impose sentence, and enter judgment
against McGraw following our remand.
IV.
CONCLUSION
The district court correctly concluded that it had subject matter jurisdiction over this case
upon remand after McGraw. Thus, McGraw has failed to show that the district court erred in
entering judgment absent the State filing a new information. Accordingly, McGraw’s judgment
of conviction for possession of a controlled substance is affirmed.
Chief Judge HUSKEY and Judge GRATTON, CONCUR.
|
---
abstract: 'From a re-analysis of $\pi^+\pi^-\to\pi^+\pi^-$ and $\pi^+\pi^-\to K\bar K$ data, we found for $\pi\pi$ S-wave interaction below 2 GeV, $\sigma(400)$, $f_0(980)$, $f_0(1500)$ and $f_0(1780)$ clearly show up. $f_0(1370)$ can be included with a very small branching ratio to $\pi\pi$. $f_0(1300)$ and $f_0(1590)$ are not real resonances and are due to interference effects of above resonances. The $\sigma(400)$ with a width about 700 MeV is mainly produced by t-channel exchange force.'
author:
- |
Bing-Song Zou\
Queen Mary and Westfield College\
London E1 4NS, United Kingdom
title: '$\pi\pi$ S-WAVE INTERACTION AND $0^{++}$ PARTICLES [^1] '
---
Nearly all known mesons are bound states of a quark and an antiquark. The $q\bar q$ mesons containing u, d and s quarks can be ascribed to various $^{2S+1}L_J$ SU(3) nonets according to their spin S, orbital angular momentum L and total angular momentum J. Among the lowest nonets, the $^1S_0$, $^3S_1$ and $^3P_2$ nonets are well established; $^3P_1$ and $^1P_1$ are also settled although there are still some uncertainties [@PDG]; only the $^3P_0$ nonet is very problematic: it has two open positions for its isoscalar part, while there are too many candidates, but none of them can fill in without controversy. Now let’s have a look at these candidates.
- $\sigma(300\sim 800)$ with width $\Gamma = 200\sim 1000 MeV$[@PDG1].
It was listed in old PDG booklets[@PDG1] more than twenty years ago, but has been dropped from newer versions. However, it is needed in the $\sigma$ model, the extended Nambu-Jona-Lasinio model and nucleon-nucleon scattering models; it is also needed to explain the low energy enhancement in $\pi\pi$ invariant mass spectra from various production processes.
- $f_0(980)$ with a peak width about 50 MeV[@PDG].
Due to its very narrow peak width, it was regarded as difficult to ascribe to a $q\bar q$ state. Many exotic explanations have been proposed for it, such as a $K\bar K$ molecule[@Isgur; @Julich], a multiquark state[@Jaffe], and Gribov’s minion [@Gribov], etc.
- $f_0(1300)$ with $\Gamma = 200\sim 400 MeV$ and $\Gamma_{\pi\pi}/\Gamma > 90\%$.
It was commonly regarded as a $q\bar q$ state[@PDG]. But we will see that it in fact does not exist.
- $f_0(1370)$ with $\Gamma = 200\sim 400 MeV$ and $\Gamma_{\pi\pi}/\Gamma < 20\%$.
It is needed in fitting Crystal Barrel data on $\bar pp$ annihilations [@CB1; @CB2] and some other data[@PDG], and was found decaying into $\pi\pi$, $\eta\eta$ and dominantly to $4\pi$. But it needs further confirmation[@PDG].
- $f_0(1500)$ with $\Gamma = 90\sim 150 MeV$.
This resonance was observed to decay into $\pi^0\pi^0$, $\eta\eta$, $\eta\eta'$ and $4\pi^0$ in $\bar pp$ annihilations by the Crystal Barrel Collaboration[@CB1; @CB2]. Then clear signals for it were also found in $J/\Psi\to\gamma 2\pi^+2\pi^-$[@Bugg] and in the central production processes $pp\to pp\pi^+\pi^-$ and $pp\to pp(2\pi^+2\pi^-)$[@WA]. All these production processes are traditionally believed to favour glueballs, and the mass of the $f_0(1500)$ is very close to the lattice-QCD prediction by the UKQCD group[@UKQCD]. Therefore the glueball explanation was suggested for it[@Close]. Meanwhile it was also suggested to be a $\rho\rho -\omega\omega$ molecule[@Torn1] and a $q\bar q$ state[@Zou].
- $f_0(1590)$ with $\Gamma = 160\sim 200 MeV$.
The $f_0(1590)$ was observed by the GAMS collaboration in $\pi^-p$ reactions at 38 GeV/c and was regarded as a glueball candidate for a long time [@PDG].
- $f_J(1710)$ with $\Gamma\approx 140 MeV$.
The $f_J(1710)$ has been clearly seen in “glue rich" $J/\Psi$ radiative decay and its spin may be 0. In central production, however, a structure at the same mass was seen that favors spin 2. So its spin is still uncertain[@PDG].
- $f_0(1750-1820)$ with $\Gamma\approx 150 MeV$.
There is some new evidence for this resonance in $J/\Psi$ radiative decay[@Bugg] and $\gamma\gamma$ fusion[@L3]. It is not yet established.
From the list, there are obviously too many $0^{++}$ particles to be explained as $q\bar q$ mesons. Even assuming two $^3P_0$ $q\bar q$ nonets (ground state plus radial excitation) below 2 GeV, we should have only four isoscalar $0^{++}$ $q\bar q$ mesons. Then what are the others? Does it mean there are definitely some exotic particles among them? Before we draw any conclusion we should consider another possibility: not all of them are real resonances.
A good place to examine them is the S-wave $\pi\pi$, $K\bar K$ and $\eta\eta$ scattering amplitudes. If a $0^{++}$ resonance has a substantial coupling to $\pi\pi$, then it should show up clearly in the $\pi\pi\to\pi\pi$ S-wave amplitude. Therefore we first examined the existing and commonly used $\pi\pi\to\pi\pi$ S-wave phase shifts[@CM]. We found [@ZB1; @ZB2] that among the $0^{++}$ particles listed above only a broad $\sigma(400)$ with a width about 700 MeV and the $f_0(980)$ clearly show up. The broad $\sigma(400)$ can be naturally explained by t-channel $\rho$ exchange[@Julich; @ZB2]. The $f_0(980)$ has a large decay width of about 400 MeV, but appears as a narrow structure with a width of about 46 MeV due to the $K\bar K$ threshold effect[@ZB1]. It is dominantly $s\bar s$ mixed with $K\bar K$ virtual states[@ZB2; @Torn2].
Then how about other $0^{++}$ particles? From modern experimental results, the old CERN-Munich solution[@CM] of $\pi\pi$ S-wave phase shifts is questionable for energies above 1200 MeV. As a second step we re-analysed their original data for $\pi^-p\to\pi^-\pi^+n$ at 17.2 GeV. We found[@BSZ] that the $f_0(1500)$ clearly shows up in the $\pi\pi$ S-wave phase shifts. Recently we also re-analysed the original data for $\pi^+\pi^-\to K\bar K$ from Argonne[@Argone] and Brookhaven[@Brook]. We found[@BZ] that a $f_0(1750-1820)$ is needed. The isoscalar $\pi\pi\to\pi\pi$ S-wave amplitude squared obtained is shown in Fig.\[fig:ampl\]. This is a very interesting spectrum. The broad $\sigma(400)$ from t-channel $\rho$ exchange provides a very broad background; three resonances $f_0(980)$, $f_0(1500)$ and $f_0(1780)$ superpose on it and therefore appear as dips. The peaks at 800, 1300 and 1590 MeV are caused by interference effects, so they are not additional resonances. For the $\pi\pi\to\eta\eta$ S-wave intensity, the peak of $f_0(1590)$ is also caused by a dip around $f_0(1500)$.
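A schematic way to see why resonances on a strong background show up as dips rather than peaks (this is a textbook illustration from elastic unitarity, not the actual parameterization used in our fits): writing the elastic S-wave amplitude as
$$T \;=\; \frac{e^{2i\delta}-1}{2i} \;=\; e^{i\delta}\sin\delta\,, \qquad |T|^2=\sin^2\delta\,,$$
the intensity is maximal when the total phase $\delta\approx 90^\circ$. The broad $\sigma(400)$ background already carries the phase near $90^\circ$ over a wide energy range, so when a narrow resonance sweeps its own phase through $90^\circ$, the total phase passes through $180^\circ$, where $\sin^2\delta\to 0$; the resonance therefore carves a dip in the amplitude squared.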
As to $f_0(1370)$, since it has a very small branching ratio to $\pi\pi$, it may have little influence on the $\pi\pi$ S-wave amplitude and therefore is not excluded, but needs further confirmation. For $f_J(1710)$ it is quite possible that it contains two components. Preliminary results from the BES Collaboration[@BES] suggest that it is composed of a $2^{++}$ component below 1.7 GeV and a $0^{++}$ resonance at $f_0(1780)$.
In summary, for $\pi\pi$ S-wave interaction below 2 GeV, $\sigma(400)$, $f_0(980)$, $f_0(1500)$ and $f_0(1780)$ clearly show up. $f_0(1370)$ can be included with a very small branching ratio to $\pi\pi$. $f_0(1300)$ and $f_0(1590)$ are not real resonances and are due to interference effects of the above resonances. The $\sigma(400)$ with a width about 700 MeV is mainly produced by the t-channel exchange force. The $f_0(980)$ has a large decay width as well as a narrow structure width due to the $K\bar K$ threshold effect. It is dominantly $s\bar s$ mixed with $K\bar K$ virtual states, as well as some $u\bar u+d\bar d$. The others can also be accommodated by the $1 ^3P_0$ and $2 ^3P_0$ $q\bar q$ nonets, though we cannot exclude the possibility that $f_0(1500)$ may be a glueball or a glueball mixed with $q\bar q$. All $0^{++}$ $q\bar q$ mesons are expected to have large mixing of $s\bar s$ with $u\bar u + d\bar d$ [@OZI]. They may also have an admixture of virtual meson-meson states and some glue components.
It is a pleasure to thank Prof. T.H.Ho for his nomination and the organisers for their invitation to participate in this nice school. I am greatly indebted to Prof. D.V.Bugg for his advice and collaboration. I also gratefully acknowledge the support of the K.C.Wong Education Foundation, Hong Kong, for a visit to Beijing where part of this talk was prepared and presented.
Particle Data Group, Phys. Rev. [**D50**]{} (1994) 1173, and reference therein.
Particle Data Group, Phys. Lett. [**50B**]{} (1974) 74, and reference therein.
J.Weinstein and N.Isgur, Phys. Rev. Lett. [**48**]{} (1982) 659; Phys. Rev. [**D27**]{} (1983) 588; [**41**]{} (1990) 2236.
D.Lohse, J.W.Durso, K.Holinde and J.Speth, Nucl. Phys. [**A516**]{} (1990) 513; G.Janssen et al., Phys. Rev. [**D52**]{} (1995) 2690.
R.L.Jaffe, Phys. Rev. [**D15**]{} (1977) 267; N.N.Achasov and G.N.Shestakov, Z. Phys. [**C41**]{} (1988) 309.
V.N.Gribov, Lund preprint LU-TP 91-7 (March 1991); F.E.Close et al., Phys. Lett. [**B319**]{} (1993) 291.
Crystal Barrel Collaboration, V.V.Anisovich et al., Phys. Lett. [**B 323**]{} (1994) 233; V.V.Anisovich, D.V.Bugg, A.V.Sarantsev and B.S.Zou, Phys. Rev. [**D50**]{} (1994) 1972; 4412.
Crystal Barrel Collaboration, C.Amsler et al., Phys. Lett. [**B321**]{} (1994) 431; [**B380**]{} (1996) 453.
D.V.Bugg, I.Scott, B.S.Zou, V.V.Anisovich, A.V.Sarantsev, T.Burnett and S.Sutlief, Phys. Lett. [**B353**]{} (1995) 378.
WA91 Collaboration, F.Antinori et al., Phys. Lett. [**B353**]{} (1995) 589.
G.Bali et al. (UKQCD), Phys. Lett. [**B307**]{} (1993) 378.
C.Amsler and F.E.Close, Phys. Lett. [**B353**]{} (1995) 385; Phys. Rev. [**D53**]{} (1996) 295; V.V.Anisovich, Phys. Lett. [**B364**]{} (1995) 195.
N.Törnqvist, Phys. Rev. Lett. [**67**]{} (1992) 556; Z. Phys. [**C 61**]{} (1994) 525.
B.S.Zou, AIP Conf. Proc. No. [**338**]{} (1995) 498; E.Klempt et al., Phys. Lett. [**B361**]{} (1995) 160.
L3 Collaboration, M.Acciarri et al., Phys. Lett. [**B363**]{} (1995) 118.
B.S.Zou and D.V.Bugg, Phys. Rev. [**D 48**]{} (1993) 3948.
B.S.Zou and D.V.Bugg, Phys. Rev. [**D 50**]{} (1994) 591.
N.Törnqvist, Phys. Rev. Lett. [**49**]{} (1982) 624; Z. Phys. [**C68**]{} (1995) 647.
B.Hyams et al., Nucl. Phys. [**B 64**]{} (1973) 134; W.Maenner, in AIP conf. Proc. No. [**21**]{} (1974); L.Rosselet et al., Phys. Rev. [**D15**]{} (1977) 574.
D.V.Bugg, A.V.Sarantsev and B.S.Zou, Nucl. Phys. [**B471**]{} (1996) 59.
A.J.Pawlick et al., Phys. Rev. [**D15**]{} (1977) 3196.
A.Etkin et al., Phys. Rev. [**D25**]{} (1982) 1786.
D.V.Bugg and B.S.Zou, in preparation.
BES collaboration, to be published. Y.C.Zhu, private communication.
H.J.Lipkin and B.S.Zou, Phys. Rev. [**D53**]{} (1996) 6693.
[^1]: Talk presented at 34th Course of International School of Subnuclear Physics, Erice, Sicily, July 3-12,1996
|
MINNEAPOLIS, Aug. 21, 2019 /PRNewswire/ -- Vireo Health International, Inc. ("Vireo" or the "Company") (CNSX: VREO; OTCQX: VREOF), a leading science-focused, multi-state cannabis company, today announced that it will report financial results for its second quarter ended June 30, 2019 on Thursday, August 29, 2019 before the market opens.
The Company will hold a conference call and webcast to discuss its business and financial results that same day at 8:30 a.m. Eastern Time (7:30 a.m. Central Time). A live audio webcast of the call will be available in the Events & Presentations section of Vireo's website at https://investors.vireohealth.com/events-and-presentations/default.aspx. The conference call may also be accessed by dialing 866-211-3165 (Toll-Free) or 647-689-6580 (International) and entering conference ID 4049456. A webcast replay will be available for one year on Vireo's website.
About Vireo Health International, Inc.
Vireo Health International, Inc.'s mission is to build the cannabis company of the future by bringing the best of medicine, engineering and science to the cannabis industry. Vireo's physician-led team of more than 350 employees provides best-in-class cannabis products and customer experience. Vireo cultivates cannabis in environmentally-friendly greenhouses, manufactures pharmaceutical-grade cannabis extracts, and sells its products at both company-owned and third-party dispensaries. The Company is currently licensed in eleven markets including Arizona, Maryland, Massachusetts, Minnesota, Nevada, New Mexico, New York, Ohio, Pennsylvania, Puerto Rico, and Rhode Island. For more information about the company, please visit www.vireohealth.com.
Contact Information
Investor Inquiries
Sam Gibbons
Vice President, Investor Relations
[email protected]
(612) 314-8995
Media Inquiries
Albe Zakes
Vice President, Corporate Communications
[email protected]
(267) 221-4800
SOURCE Vireo Health International, Inc.
Related Links
http://www.vireohealth.com
|
/*
* Copyright 2018 Netflix, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.netflix.titus.master.agent.service.cache;
import java.util.List;
import java.util.Optional;
import java.util.Set;
import java.util.function.Function;
import com.netflix.titus.api.agent.model.AgentInstance;
import com.netflix.titus.api.agent.model.AgentInstanceGroup;
import rx.Completable;
import rx.Observable;
import rx.Single;
public interface AgentCache {

    List<AgentInstanceGroup> getInstanceGroups();

    AgentInstanceGroup getInstanceGroup(String instanceGroupId);

    Optional<AgentInstanceGroup> findInstanceGroup(String instanceGroupId);

    Set<AgentInstance> getAgentInstances(String instanceGroupId);

    AgentInstance getAgentInstance(String instanceId);

    Optional<AgentInstance> findAgentInstance(String instanceId);

    Single<AgentInstanceGroup> updateInstanceGroupStore(String instanceGroupId, Function<AgentInstanceGroup, AgentInstanceGroup> function);

    Single<AgentInstanceGroup> updateInstanceGroupStoreAndSyncCloud(String instanceGroupId, Function<AgentInstanceGroup, AgentInstanceGroup> function);

    Single<AgentInstance> updateAgentInstanceStore(String instanceId, Function<AgentInstance, AgentInstance> function);

    Completable removeInstances(String instanceGroupId, Set<String> agentInstanceIds);

    Observable<CacheUpdateEvent> events();
}
|
4 Comments
And now a Yankee!! At some of the other baseball web sites I frequent, there isn’t much love for old Larry. One quick note: Topps is picking their 60 best cards for their 60th anniversary next year. You can vote for 10 cards once a day at their web site. The Ernie Banks rookie card is the only Cub. The number of Mickey Mantle cards is overkill.
Good one. I guess I didn’t know old Larry has World Series rings. I pitched with some guys in the low (I mean really low) minors that finally made it to the bigs with the Orioles and got rings in 1970. Of course, in 1969 and 1971 they lost the fall classic. I don’t hold grudges. In 1969 I was following the Series on the radio and The Amazing Mets seemed like they had all the luck in the World (Series, that is). In 1971, my Orioles were beat by a better team. Baltimore had better pitching on paper, four, count them, FOUR 20-game winners, but the Bucco’s won anyway. Clemente simply killed us.
I wish Mr. Rothschild well with the Yanks. Who knows, this may be his dream job and we are all dreamers at heart. I wanted to be a major leaguer but Uncle Sam had better ideas for me. No regrets. I got to see the world. My grandson is a soccer player. I go to his games. He is just starting to play some T-ball. He’s just seven. My dream is for him to play for the Cubs. He comes to my house and looks at all the pictures on the wall of ball teams. I have a few Cubs team pictures, the ones that show just their heads from Santo’s days. Did you know Santo led the league in triples one year? So did Timmy McCarver. Hit’m where they ain’t.
Well, I like the web site. Anything Cubs, I like. In fact I like all the major league teams to some degree, but not the Red Sox. Their scout told me when I was 16 yrs old that I should be a farmer and give up on baseball. Of course the next year I grew four inches, dropped down my arm angle to high side arm and started throwing bee-bees. I got drafted when I was 21. The best guess from the guys in the know back then was that I was throwing in the low 90s. But that was about it, fastball, changeup... nada. It was the sinker that I threw for one season that would have, could have gotten me to the show.
It’s kind of ironic your site is called bullpen Brian. My old pitching coach in the minors was this guy in a wheel chair named Brian something, maybe Betters or Britton. Something like that. That was over 40 years ago now. Time by Pink Floyd, great song. Enuff for now.
I can tell your baseball days are very fond memories for you–even if you fell short of the show.
I too, spent a long time in my early 20s gunning for the majors, only it was behind a radio mic!
To some degree, that dream isn’t over…you just never know, I guess.
I’m sure Rothschild had some doubts he’d ever end up with New York.
Didn’t know McCarver led the league in triples, either!
Good for your grandson. Maybe he’s talented enough to help the US win the World Cup some day!
Or maybe he should become a farmer? JK:)
By the way, one of my favorite pitchers growing up was side-winding Scott Sullivan (Cincy, CHW, KC).
I caddied for him a couple of times at a local golf course.
Dude always left me a nice tip and tickets for the upcoming home series. Class act.
|
Michigan on-the-job deaths decline in 2012
The number of workplace deaths in Michigan declined in 2012, as an estimated 127 workers died on the job compared to 141 in 2011, according to preliminary figures from an annual report compiled by Michigan State University.
Agriculture saw the largest number of deaths at 17, while the construction industry had the second most at 15, followed by transportation and warehousing with 13, according to the Michigan Fatality Assessment and Control Evaluation program, or MIFACE.
“This reduction in fatality rates is encouraging, but worker deaths are almost always preventable and even one is too many,” said Kenneth Rosenman, director of the Division of Occupational and Environmental Medicine.
The report shows that motor vehicles caused the most deaths with 32, followed by 28 homicides, 12 machine-related deaths and 11 falls. Guns were involved in 71 percent of all workplace homicides.
“We monitor these figures for the same reasons we track any other public health problem,” Rosenman said. “It’s important to know the scope of the problem so we can plan interventions to solve it. And these numbers make it clear that Michigan has a way to go in terms of getting all companies to create a culture of safety on the job.”
The MIFACE data are being released in preparation for Workers Memorial Day on April 29. Workers and public health professionals across the country pay tribute to the about 5,000 Americans killed each year by work-related trauma. Another 60,000 U.S. workers are estimated to die each year from cancer, lung disease and other illnesses from work-related exposures.
A Workers Memorial Day remembrance will be held at noon April 29 at Wentworth Park in Lansing. Rosenman will speak at the event, along with Norwood Jewell, director of the UAW Region 1C; and Glen Freeman, President of the Greater Lansing Safety Council.
MIFACE is a research project of MSU funded by the Centers for Disease Control and Prevention. MSU works closely with the Michigan Occupational Safety and Health Administration on the project.
|
Tan Cobblestone Patterned Flat Paper
Our Tan Cobblestone Patterned Flat Paper has the look of a cobblestone walkway or wall. Each roll of Tan Cobblestone Patterned Flat Paper measures 4' wide x 50' long and is printed on one side only.
WARNING
This item ships via Standard Shipping to the 48 contiguous states only.
Please allow ample time for delivery. Our estimated delivery date for this product is noted in checkout. Many items are shipped separately, so please allow extra time for these items to arrive. Shipping charges are based on the value of the merchandise and not the number of shipments. For additional shipping information, please contact our Customer Service Department at ([email protected] or 1-800-314-8736).
Customer Questions & Answers
There is a discount price for 5 or more. Does it have to be the same pattern, or a combination of all corrugated or flat? I have enough flat and corrugated each to qualify, but discount price not coming up. I had ordered from another vendor and received the discount if I ordered the minimum for discount of various papers. I would like to know before I decide to order. Purchasing out of my pocket for church, looking for best deals.
I am interested in using this on the floor to create a cobblestone alley look. I am concerned with ladies heels possibly poking holes and creating tripping hazards. Can this safely be used on floors where there will be a high level of traffic?
This paper was bought to create a VBS backdrop of Nazareth. The paper can be used directly on the wall for stone effect and also wrapped around boxes to add a 3D dimension and depth to your backdrop. Great value with the 50 feet length.
I ordered this to use as wall coverings for our Christmas play. It was so easy and fast to use and made our "Bethlehem" scene look very realistic. People thought we had spent hours painting! When I first received it, I thought the wrong item had been sent, but after unrolling it I realized it was correct. The stones seemed too big and too widely spread out. However, when we added it to our scene...WOW! You can see the results for yourself! I'm planning on ordering more for a walkway in a Medieval play and will also be ordering the stone for a castle look!
Description: This Bamboo Patterned Flat Paper makes a great background for an Asian inspired party or special event. Each roll of Bamboo Flat Paper measures 48" high x 50' long and is coated with special fade-resistant ink and is acid free and recyclable.
Description: Set the scene for a great party with our Chalkboard Patterned Flat Paper. This durable Chalkboard Patterned Flat Paper is perfect for hiding ugly walls, covering tables and much more. Made of paper. Measures 48" wide x 50' long.
Description: Cobblestone Patterned Corrugated Paper will transform any surface to fit your party theme quickly, easily, and inexpensively. Each roll of cobblestone corrugated paper measures 4' wide x 25' long, and has a grid for cutting on its reverse side. Cobblestone paper is great for creating castles, walkways, and other ...
|
*Dear Editor,*
A 22-year-old male patient presented with a nonpulsatile, diffuse headache of moderate intensity, with no aura or other associated symptoms. The neurological exam showed paralysis of the vertical conjugate gaze with fixed downward glance, bilateral eyelid retraction, insufficiency of ocular convergence, and pupils nonreactive to light with preserved pupillary reaction to accommodation, characterizing Parinaud's syndrome. Magnetic resonance imaging (MRI) showed an expansile lesion in the pineal region, with a discrete hyperintense signal in T1-weighted sequences and an isointense signal in T2-weighted sequences, with cystic areas, a discrete hyperintense signal on diffusion-weighted imaging, and marked gadolinium enhancement ([Figure 1](#f1){ref-type="fig"}). The lesion compressed the cerebral aqueduct and dorsal midbrain, causing hydrocephalus. Histopathological analysis demonstrated a papillary neoplasm composed of cuboidal cells with an epithelial appearance, arranged on fibroconnective stromata, with evident vascularization and mitotic activity (4 mitotic figures per 10 high-power fields). Immunohistochemical analysis showed marked positivity for cytokeratins and for S-100 protein, together with negativity for neurofilament proteins. These findings are consistent with a papillary tumor of the pineal region (PTPR).
Figure 1Magnetic resonance imaging. **A:** Non-contrast-enhanced sagittal T1-weighted sequence showing an expansile lesion in the pineal region compressing the dorsal midbrain and presenting a predominance of high signal intensity (arrow). **B:** Axial T2- weighted sequence showing cystic images within the lesion (arrowhead). Note also the increase in the dimensions of the supratentorial ventricular system (arrows). **C:** Functional diffusion- weighted sequence, axial section, showing discrete hyperintensity. **D:** Gadolinium contrast-enhanced sagittal T1-weighted sequence showing heterogeneous enhancement.
The role of MRI in the diagnosis of brain tumors has been expanding^([@r1]-[@r3])^. The World Health Organization classifies PTPR as a grade II or III tumor. It is rare, fewer than 200 cases having been reported. The mean age at onset is 35 years, and PTPR has no predilection for either gender^([@r4])^. Its origin is uncertain; the most widely accepted hypothesis is that it originates from ependymal cells of the subcommissural organ^([@r4],[@r5])^. Histologically, PTPR is characterized by the presence of structures with an epithelial and papillary aspect, with high cellularity and moderate-to-high mitotic activity^([@r4]-[@r6])^. It can cause headache, due to obstructive hydrocephalus, and Parinaud's syndrome^([@r7])^, due to compression of the dorsal midbrain, specifically the periaqueductal region^([@r8])^.
On MRI, PTPR typically presents as a heterogeneous, well-circumscribed mass in the pineal region, containing cystic areas, without calcifications or hemorrhages. Classically, it is described as showing a hyperintense signal in T1-weighted sequences, as observed in our case, the high signal intensity potentially being related to the high protein content of the lesion ^([@r9],[@r10])^. After intravenous administration of gadolinium, moderate heterogeneous enhancement is observed. Dissemination into the cerebrospinal fluid, although rare, occurs in up to 7% of cases^([@r4],[@r7])^. Advanced MRI sequences can reveal signs of hypoperfusion, whereas proton spectroscopy can show increases in the peaks of choline, lactate, and myo-inositol, as well as a decrease in the N-acetyl-aspartate peak^([@r11])^.
The differential diagnosis of lesions in the pineal region is broad, including germinoma, ependymoma, meningioma, pineocytoma, pineoblastoma, and glioma, although those tumors rarely present a hyperintense signal in T1-weighted sequences^([@r10],[@r11])^.
The treatment of choice is surgical resection, there being no proven benefits of the use of radiotherapy or chemotherapy^([@r11])^. Partial resection and tumors with higher mitotic and proliferative activity (high Ki-67 expression) tend to be related to a poor prognosis and to recurrence, which is reported in up to 72% of cases^([@r4],[@r11],[@r12])^.
In conclusion, Parinaud's syndrome is a warning sign of the possibility of expansile processes in the pineal region. Albeit rare, the diagnosis of PTPR should be remembered among the hypotheses, especially when there is a hyperintense signal in a T1-weighted sequence.
|
Miscellaneous Additives / Chemicals
Grape Tannin
Item #:2736
Description:Found in skins and stems of grapes, tannin adds astringency or zest to wine. Also aids in the clearing process. Tannin occurs naturally in red wines which are fermented in the skins, but must be added to white wines.
Use: Usage varies according to the grape or fruit, but generally, you would add no more than 1/4 teaspoon per gallon to fruit wines. Not needed if making wine from a kit.
|
The US Military Is Getting Smaller
With the proposed federal budget for fiscal year 2015 having been published, a broad spectrum of US-based media outlets highlighted the smaller amount given to the Pentagon.
As a result of 2013’s controversial “sequestration”–an antiseptic term for downsizing specific branches–both personnel and equipment, as well as future projects, will be shed.
In reality, however, the changes are superficial. The US maintains the largest military budget on Earth, even after China officially increased its own by a meager 12%.
Totaling $575 billion, most of it goes to the Department of Defense, which takes the lion’s share: $496 billion. The peripheral amount of $79 billion is for Overseas Contingency Operations (OCO), representing foreign adventures in Afghanistan and the Middle East.
Overall military spending, when the State Department, overseas aid, and other unscrutinized expenses are added, could reach more than $700 billion. This is the lowest since the height of the Iraq War, between 2006 and 2007. An interesting factoid is that US military spending often reflects the size of the federal deficit.
The Pentagon expressed considerable displeasure at the budget, citing across-the-board cuts as detrimental to its mission. Both Defense Secretary Chuck Hagel and Army Chief of Staff General Raymond Odierno, a veteran of Iraq, emphasized how US ground forces must shed thousands of personnel and forgo acquiring new equipment.
Their arguments suggested a lower readiness to fight overseas rather than a genuine weakening of the US military. For example, despite the new budget, the US military keeps all its bases and may acquire new ones.
As a result of the budget, the US Army is expected to maintain just 450,000 personnel until 2020, or roughly the same size as the South Korean Army. The Marines are doing the same, reducing their numbers to 175,000. Ageing helicopters used by either branch and the National Guard are scheduled for obsolescence.
The Air Force has sacrificed the entire A-10 Thunderbolt fleet and anticipates the F-35A replacing its attack aircraft. A new long-range bomber is in the works, and UAVs like the Global Hawk are replacing older models like the U-2 spy plane.
The least affected branch is the Navy. Although the number of littoral warships to be built was reduced to 32 from the planned 52, the US Navy is continuing to reinforce East and Southeast Asia.
The US Navy’s preparation for long-range skirmishes with China along coastal areas, the so-called Air-Sea battle, calls for unmanned bombers and submarines along with sophisticated artillery.
|
Sony Posts Guide on Building AOSP 7.0 Nougat for its Xperia Smartphones
We may earn a commission for purchases made using our links.
If, for some reason, you were still of the opinion that an OEM could never be developer friendly, Sony has given yet another example of being just that.
On its official website for developer resources, Sony has posted a guide for building Android 7.0 Nougat, the AOSP flavor rather than its skinned variant, for its Xperia line of devices. Sony is thus helping users capable of building Android 7.0 to install and experience it on their devices before the official updates with the Xperia skin arrive. Sony has done well to support Open Source Software across its entire line, but there will most likely be limitations on some devices; only time will tell.
The guide for building AOSP is very straightforward. You’d need a Linux environment to build if you follow the guide, which also lists the tools and environment needed for the build. Once you have the build ready, you’d need a Sony device with an unlocked bootloader to flash the image using fastboot. If all goes well, you should have a Sony Xperia device running Android 7.0 Nougat, brought to you courtesy of the efforts at Sony.
So go on ahead, you have something interesting to do for the weekend! Let us know your xperience in the comments below!
|
1. Field of the Invention
The present invention relates to a surgical stitching instrument providing a combined suture holder and needle, in which the needle is adapted to pierce tissue to be stitched, pick up a suture from the holder after piercing the tissue, and retract through the tissue while dragging the suture from the holder therethrough; and more particularly to such an instrument which is manipulatable with one hand by a surgeon while freeing the other hand for tissue arranging, positioning or other ancillary functions particularly desirable in deep surgery. The needle is attached to and is an integral part of the instrument, thus allowing better contact of the needle and ease of manipulation through tissues even in blind stitching.
2. Description of the Prior Art
It is a well-known surgical practice to clamp a needle with its attached suture in a needle holder and to push the needle through the tissues to be stitched until it exits at the opposite side of the tissue. The needle is then released, clamped again at its leading end, and extracted through the tissue together with the attached suture. This procedure is satisfactory in many instances, except that in suturing deep structures, in the presence of bleeding or exudation, it is difficult to reapply the needle holder after the needle has pierced the tissue.
Another well-known method of suturing, especially in deep surgery, is with the use of a Boomerang needle holder. The suture is clamped with a specialized instrument manipulated by one hand, the instrument then being maneuvered to place the suture adjacent to the tissue. A needle having a hook adjacent to the point thereof is then manipulated with another specialized instrument held in the other hand to pierce the tissue and the clamping instrument maneuvered so that the hook snags the suture. The suture is then unclamped with the one hand and the needle is withdrawn along its insertion path with the other hand to draw the suture through the tissue. The instruments and needle are then put aside and the suture tied.
This procedure is disadvantageous for several reasons. Both hands are fully employed during the insertion of the suture since one hand is required to operate the needle manipulating instrument while the other hand is required to operate the suture clamping instrument. It has long been recognized that it would be highly advantageous for surgeons to have one hand free for other procedures during such stitching operations.
These conventional procedures are relatively slow since, in sequence, the suture must be clamped, the needle must be guided through the tissue, the suture clamping instrument and the needle manipulating instrument must be operated to engage the suture with the needle, and the instruments must be withdrawn without disengaging the suture so as to draw the suture through the tissue.
The needle grasping and the suture clamping instruments are both of specialized construction, require great dexterity for proper operation, and the needle manipulating instruments are frequently complicated in structure and mode of operation.
It has also long been recognized that it would be highly advantageous to provide a single instrument usable by one hand whereby simple opening and closing movements pass a suture through a tissue in deep surgery and dispose the suture for tying.
|
Understanding the role of crying and it's place in supporting your mental and physical wellbeing
Have you ever cried so deeply that it felt good? Perhaps it happened when you watched the end of a particularly sad movie or heard a song that pulled at your heart strings. Maybe you were experiencing a week of feeling overwhelmed or depressed and it finally culminated in tears.
Yet instead of feeling completely knocked out by the expenditure of energy, tears, and tissues, you felt surprisingly light afterward. It’s as if your mind and body had been rejuvenated and any dark clouds hanging over you seemed to dissipate.
If you’ve experienced this, you’re not alone. In fact, you’ve experienced a natural part of how the body and mind work together to help heal and renew themselves.
The science of emotions in the body
In his seminal book, The Body Keeps the Score, Psychiatrist Dr. Bessel Van der Kolk explains that people carry emotions on not only a mental level, but on a physical level as well. In other words, emotional energy gets stored in the body.
Dr. Van der Kolk writes that in extreme cases, the toll of unprocessed emotions can be so great that it causes adverse effects on a person’s physical health in the form of illness.
Similarly, Dr. Peter Levine, a leading somatic psychologist and the creator of the internationally renowned modality Somatic Experiencing, discovered that humans and animals share a similar set of tools for expelling stuck emotional energy. When a prey animal like a gazelle is nearly caught by a cheetah but survives, the gazelle later processes this near-death experience. In fact, nearly all vertebrates that survive such close calls in the wild undergo a similar ritual of finding a safe place and physically ‘shaking out’ the energy of fear and shock that occurred earlier.
Through his observations in nature, Dr. Levine pioneered various mind-body approaches to help clients that had not benefited from traditional talk therapy use body awareness to heal stuck emotions of grief, fear, trauma, anger and more.
Why it feels good to cry
Think back to the last time you had a truly good cry. Think about what it was like to cry and how you acted. Set aside what you were thinking at the time and consider what your body was doing.
What you might be surprised to remember is that this experience didn’t simply involve tears streaming down your face; your body was quite active as well. This is because a good cry, the kind that feels cathartic afterward, is by nature a physical exercise.
If you don’t fight the process, your entire body gets involved in the expression of sadness. The body becomes tense and starts moving in a variety of ways. Your face might become constricted, sometimes to the point of hurting your cheeks. Oftentimes you start clenching your fists or shaking your arms. Your neck might pull back, pointing your face up to the sky, or it may start twisting back and forth like you’re shaking your head to say ‘no.’ Perhaps most common of all, you may feel shaky both during and after crying in this way.
These physical expressions of sadness are all part of the mechanism your mind and body use to release accumulated emotional energy of hurt, anger or grief. In other words, a deep cry is one of your best tools for processing experiences that elicited a strong emotional charge in you. And the benefits of a cry like this can be felt in the resulting elevated mood or a strong sense of calm shortly thereafter.
Crying as a form of self-care
It’s worth pointing out that even our language hints at how crying helps heal us. A common way of describing someone who cried deeply is to say they ‘broke down.’
Though a breakdown might sound like a bad thing, on a physiological level, this description is truer than you may realize. Crying helps you break down the old and make space for a new experience. Crying moves you forward. Crying can be critical to finding closure in the face of grief. Crying allows you to process a painful experience that you didn’t have the time or ability to work through at the moment that you lived it.
Whether you’re experiencing a good cry in a therapist’s office, listening to a sad song, watching a movie, or simply in the privacy of your home, the experience feels good afterward because it’s the body’s natural way of healing, processing and growing. Crying is more than just cathartic: according to several leading voices in the field of somatic, or body-based, psychology, crying can be a necessary part of supporting your mental and physical health in the long run.
Nick is an integrative coach in private practice specializing in individual, family and inherited trauma, mindfulness, applied neuroscience and energy work. His entry into the healing arts started with Focalizing, a life-changing mind-body approach to trauma resolution. Today, he is a leading practitioner and trainer of Focalizing and is a sought-after facilitator in NYC and abroad.
Nearly five years into his work as a trauma healer, a mixture of intuition and serendipity moved him to work with several world-renowned facilitators of Systemic Family Constellations, including Suzi Tucker, Dan Cohen and Emily Blefeld. Sparked by the incredible outcomes he was witnessing in himself and his clients, Nick dedicated himself to this practice and quickly became one of the most active Family Constellations facilitators in the Northeast. Nick’s greatest strength is in the sacred space he holds for healing. Those who work with him are given the gentle support they need to allow the deepest aspects of their being to shine.
In addition to his healing practice, Nick worked in digital marketing and publishing for nearly nine years at various agencies and organizations. His experience inspires a particular interest in how people build authentic approaches to their work and connect to the ‘why’ behind work habits.
|
Q:
How to add news Ticker on home page only
I installed a news ticker plugin, then added the necessary function/code in sidebar.php:
<!-- BEGIN: #secondary -->
<div id="secondary" class="g1-sidebar widget-area" role="complementary">
<h3>Latest News</h3>
<div id="news">
<?php if ( function_exists('insert_newsticker') ) { insert_newsticker(); } ?>
</div>
.......
.........
</div>
Now the news ticker appears in the sidebar on every page, but I want to add it to the home page sidebar only. Is there any way to do this, so that the news ticker is displayed only when the sidebar is on the primary/home page?
A:
Use is_front_page() for the front page check.
if( is_front_page() && function_exists('insert_newsticker') ){
insert_newsticker();
}
More information: is_front_page()
Updated according to Comments:
<div id="secondary" class="g1-sidebar widget-area" role="complementary">
    <?php if ( is_front_page() ) : ?>
        <h3>Latest News</h3>
        <div id="news">
            <?php if ( function_exists('insert_newsticker') ) { insert_newsticker(); } ?>
        </div>
    <?php endif; ?>
</div>
|
The present invention relates to 1-N-phenylamino-1H-imidazole derivatives as aromatase inhibitors and to pharmaceutical compositions containing them.
Aromatase is the physiological enzyme responsible for the specific conversion of androgens such as androstenedione or testosterone, into estrogens such as estrone and estradiol, respectively (Simpson E R et al., Endocrine Reviews, 1994, 15: 342-355). Inhibition of aromatase is, therefore, a strategy of choice to interfere with normal or pathological estrogen-induced or estrogen-dependent biological processes such as female sexual differentiation, ovulation, implantation, pregnancy, breast and endometrial cell proliferation as well as regulation of spermatogenesis or prostate cell proliferation in males, or of non-reproductive functions such as bone formation or immune T cell and cytokine balance (Simpson E R et al., Recent Progress in Hormone Research, 1997, 52: 185-213 and the whole issues of Endocrine Related Cancer (1999, volume 6, n° 2) and Breast Cancer Research Treatment (1998, volume 49, supplement n° 1)).
A large number of azole derivatives are known as antifungal agents. Some imidazole or triazole derivatives have already been described as inhibitors of the enzyme aromatase. Generally, the imidazolyl or the triazolyl group is associated with aromatic rings as found in letrozole (EP-A-236 940; Lamb H M and Adkins J C, Drugs, 1998, 56: 1125-1140):
or anastrozole (EP-A-296 749; Wiseman L R and Adkins J C, Drugs Aging, 1998, 13: 321-332):
Imidazoles or triazoles linked via a methylene group to a benzotriazole are described in EP-A-293 978:
Di-tert-butyl phenols having a N-amino-imidazole moiety in the para position are described in U.S. Pat. No. 4,908,363 and are presented as having inflammation-inhibiting and oedema-inhibiting properties:
More recently, M. OKADA et al. (Chem. Pharm. Bull., 44 (10), 1996, 1871-1879) described a series of [4-(bromophenylmethyl)-4-(cyanophenyl)amino]-azoles and their azine analogs:
It has now been found that imidazole derivatives which invariably contain a 1-[N-phenylamino]group demonstrate an unexpectedly high potency to inhibit aromatase.
Accordingly, one object of this invention is to provide 1-[N-phenylamino]imidazole derivatives which are potent aromatase inhibitors.
Another object of this invention is to provide a pharmaceutical composition containing, as active ingredient, a 1-[N-phenylamino]imidazole derivative as depicted below or a pharmaceutically acceptable acid addition salt thereof.
A further object of this invention is to provide the use of a 1-[N-phenylamino]imidazole derivative in the manufacture of a medicament intended for treating or preventing various diseases and for managing reproductive functions in women, in men as well as in female and male wild or domestic animals.
The 1-[N-phenylamino]imidazole derivatives of this invention are represented by the following general formula (I):
and acid addition salts, solvates and stereoisomeric forms thereof, wherein:
R1 and R2 are each independently hydrogen, a (C1-C6)alkyl or a (C3-C8)cycloalkyl;
n=0, 1, 2;
R3, R4, R5 and R6 are each independently hydrogen, or a (C1-C6)alkyl, halogen, cyano, (C1-C6)alkoxy, trifluoromethyl, (C1-C6)alkylthio, (C1-C6)alkylsulfonyl, sulfonamido, acyl, (C1-C6)alkoxycarbonyl, or carboxamido group;
R3 and R6 together with the phenyl ring bearing them can also form a benzofurane or a N-methylbenzotriazole.
In the description and claims, the term "(C1-C6)alkyl" is understood as meaning a linear or branched hydrocarbon chain having 1 to 6 carbon atoms. A (C1-C6)alkyl radical is for example a methyl, ethyl, propyl, isopropyl, butyl, isobutyl, tert-butyl, pentyl, isopentyl or hexyl radical.
The term "halogen" is understood as meaning a chlorine, bromine, iodine or fluorine atom.
The term "(C3-C8)cycloalkyl" is understood as meaning a saturated monocyclic hydrocarbon having 3 to 8 carbon atoms. A (C3-C8)cycloalkyl radical is for example a cyclopropyl, cyclobutyl, cyclopentyl, cyclohexyl, cycloheptyl or cyclooctyl radical.
The term "(C1-C6)alkoxy" is understood as meaning a group OR in which R is a (C1-C6)alkyl as defined above. A (C1-C6)alkoxy radical is for example a methoxy, ethoxy, propoxy, isopropoxy, butoxy, isobutoxy, tert-butoxy, n-pentyloxy or isopentyloxy radical.
The term "acyl" is understood as meaning a group
in which R′ is hydrogen or a (C1-C6)alkyl as defined above.
Compounds of formula (I) form acid addition salts, for example with inorganic acids such as hydrochloric acid, hydrobromic acid, sulfuric acid, nitric acid, phosphoric acid and the like or with organic carboxylic acids such as acetic acid, propionic acid, glycolic acid, pyruvic acid, oxalic acid, malic acid, fumaric acid, tartaric acid, citric acid, benzoic acid, cinnamic acid, mandelic acid, methanesulfonic acid and the like.
Preferred compounds of formula (I) are those wherein:
n is 0 or 1;
R1 and R2 are each independently hydrogen or (C1-C6)alkyl;
R3 is cyano or trifluoromethyl;
R4 is hydrogen, (C1-C6)alkyl, halogen, cyano, (C1-C6)alkoxy, trifluoromethyl, (C1-C6)alkylthio, (C1-C6)alkylsulfonyl or (C1-C6)alkoxycarbonyl;
R5 is hydrogen, halogen, (C1-C6)alkoxy or trifluoromethyl;
R6 is hydrogen;
or R3 and R6 together with the phenyl ring form a N-methylbenzotriazole.
Also preferred are the compounds of formula (I) wherein:
n is 0 or 1;
R1, R2 and R6 are each hydrogen;
R4 is halogen, cyano or trifluoromethyl.
Especially preferred compounds of formula (I) are those wherein R3 is cyano; those wherein R5 is hydrogen or trifluoromethyl; and those wherein n is 1.
Valuable compounds are selected from the group consisting of:
4-[N-(1H-imidazol-1-yl)-N-(4-trifluoromethylphenylmethyl)amino]benzonitrile,
4-[N-(1H-imidazol-1-yl)-N-(4-chlorophenylmethyl)amino]benzonitrile,
4-[N-(1H-imidazol-1-yl)-N-(4-cyanophenylmethyl)amino]benzonitrile,
4,4′-[N-(1H-imidazol-1-yl)amino]bis-benzonitrile,
4-[N-(1H-imidazol-1-yl)-N-(4-fluorophenylmethyl)amino]benzonitrile,
4-[N-(1H-imidazol-1-yl)-N-(3,4-difluorophenylmethyl)amino]benzonitrile, and the acid addition salts, solvates or stereoisomeric forms thereof.
By virtue of their capability to inhibit aromatase, and thus to exhaust all sources of endogenous estrogens, the compounds of the present invention can be used alone or in combination with other active ingredients for the treatment or the prevention of any estrogen-dependent disorder or for the management of estrogen-regulated reproductive functions, in humans as well as in wild or domestic animals.
The breasts being sensitive targets of estrogen-stimulated proliferation and/or differentiation, inhibitors of aromatase are especially useful in the treatment or prevention of benign breast diseases in women, gynecomastia in men and in benign or malignant breast tumors with or without metastasis both in men and women (Brodie A M and Njar V C, Steroids, 2000, 65: 171-179; Pritchard K I, Cancer, 2000, 85, suppl 12: 3065-3072), or in male or female domestic animals.
Due to the involvement of estrogens in the mechanisms of ovulation, implantation and pregnancy, inhibitors of aromatase according to the invention can be used, respectively, for contraceptive, contragestive or abortive purposes in women (Njar V C and Brodie A M, Drugs, 1999, 58: 233-255) as well as in females of wild or domestic animal species.
The uterus is another reproductive organ responsive to estrogenic stimulation and inhibition of aromatase is therefore useful to treat or prevent endometriosis, benign uterine diseases or benign or malignant uterine tumors with or without metastasis in women (Njar V C and Brodie A M, Drugs, 1999, 58: 233-255) or in female domestic animals.
The ovary being the physiological source of estrogen, inhibitors of aromatase can be used to treat abnormal or untimely ovarian estrogen production such as polycystic ovary syndrome or precocious puberty, respectively (Bulun et al., J Steroid Biochem Mol Biol, 1997, 61: 133-139). Ovarian as well as non-ovarian but estrogen-producing benign or malignant tumors with or without metastasis (Sasano H and Harada N, Endocrine Reviews, 1998, 19: 593-607) may also benefit from treatment with aromatase inhibitors according to the invention.
In males, prostate and testicular tissues are also responsive to estrogenic stimulation (Abney T O, Steroids, 1999, 64: 610-617; Carreau S et al., Int J Androl, 1999, 22: 133-138). Therefore, aromatase inhibitors can be used to treat or to prevent benign (Sciarra F and Toscano V, Archiv Androl, 2000, 44: 213-220) or malignant prostate tumors with or without metastasis (Auclerc G et al., Oncologist, 2000, 5: 36-44) or to treat, prevent or control spermatogenesis functions or malfunctions, in men as well as in male wild or domestic animals.
Estrogens are also known to be implicated in the regulation of bone turnover; therefore, aromatase inhibitors may be useful, alone or in combination with other antiresorptive or pro-osteogenic agents, in the treatment or prevention of bone disorders according to appropriate therapeutic sequences or regimens.
In addition, estrogens are involved in the regulation of the balance between Th1 and Th2 predominant immune functions, and aromatase inhibitors may therefore be useful in the treatment or prevention of gender-dependent auto-immune diseases such as lupus, multiple sclerosis, rheumatoid arthritis and the like.
When the compounds of formula (I) are administered for the treatment or prevention of estrogen-dependent disorders, they can be combined with one or several other sexual endocrine therapeutic agents. In the case of the control or management of reproductive functions such as male or female fertility, pregnancy, abortion or delivery, the compounds of formula (I) can be combined with for example a LH-RH agonist or antagonist, an estroprogestative contraceptive, a progestin, an anti-progestin or a prostaglandin. When the compounds of formula (I) are intended for the treatment or prevention of benign or malignant diseases of the breast, the uterus or the ovary, they can be combined with e.g. an anti-estrogen, a progestin or a LH-RH agonist or antagonist. In the case of the treatment or prevention of benign or malignant diseases of the prostate or the testis, the compounds of formula (I) can be combined with for example an antiandrogen, a progestin, a lyase inhibitor or a LH-RH agonist or antagonist.
The term "combined" refers herein to any protocol for co-administration of a compound of formula (I) and one or more other pharmaceutical substances, irrespective of the nature or the time of administration and the variation of dose over time of any of the substances. The co-administration can for example be parallel or sequential.
The invention thus also relates to a method of treating or preventing the above-mentioned diseases, comprising the administration to a subject in need thereof of a therapeutically effective amount of a compound of formula (I) or a pharmaceutically acceptable acid addition salt thereof, optionally in combination with another active ingredient.
For the treatment/prevention of any of these diseases, the compounds of formula (I) may be administered, for example, orally, topically, parenterally, in dosage unit formulations containing conventional non-toxic pharmaceutically acceptable carriers, adjuvants and vehicles. These dosage forms are given as examples, but other dosage forms may be developed by those skilled in the art of formulation, for the administration of the compounds of formula (I). The term parenteral as used herein includes subcutaneous injections, intravenous, intramuscular, intrasternal injection or infusion techniques. In addition to the treatment of humans, the compounds of the invention are effective in the treatment of warm-blooded animals such as mice, rats, horses, cattle, sheep, dogs, cats, etc.
The pharmaceutical compositions containing the active ingredient may be in a form suitable for oral use, for example, as tablets, troches, lozenges, aqueous or oily suspensions, dispersible powders or granules, emulsions, hard or soft capsules, or syrups or elixirs. Compositions intended for oral use may be prepared according to any method known to the art for the manufacture of pharmaceutical compositions and such compositions may contain one or more agents selected from the group consisting of sweetening agents, flavoring agents, coloring agents and preserving agents in order to provide pharmaceutically elegant and palatable preparations. Tablets contain the active ingredient in admixture with non-toxic pharmaceutically acceptable excipients which are suitable for the manufacture of tablets. These excipients may be for example, inert diluents, such as calcium carbonate, sodium carbonate, lactose, calcium phosphate or sodium phosphate; granulating and disintegrating agents, for example, corn starch, or alginic acid; binding agents, for example starch, gelatin or acacia, and lubricating agents, for example, magnesium stearate, stearic acid or talc. The tablets may be uncoated or they may be coated by known techniques to delay disintegration and absorption in the gastrointestinal tract and thereby provide a sustained action over a longer period. For example, a time delay material such as glyceryl monostearate or glyceryl distearate may be employed.
They may also be coated by the technique described in U.S. Pat. Nos. 4,256,108; 4,166,452 and 4,265,874 to form osmotic therapeutic tablets for controlled release.
Formulations for oral use may also be presented as hard gelatin capsules wherein the active ingredient is mixed with an inert solid diluent, for example, calcium carbonate, calcium phosphate, or kaolin, or as soft gelatin capsules wherein the active ingredient is mixed with water or an oil medium, for example peanut oil, liquid paraffin, or olive oil.
Aqueous suspensions contain the active ingredient in admixture with excipients suitable for the manufacture of aqueous suspensions. Such excipients are suspending agents, for example sodium carboxymethylcellulose, methylcellulose, hydroxypropylmethylcellulose, sodium alginate, polyvinylpyrrolidone, gum tragacanth and gum acacia; dispersing or wetting agents, for example a naturally-occurring phosphatide, such as lecithin, or condensation products of an alkylene oxide with fatty acids, for example polyoxyethylene stearate, or condensation products of ethylene oxide with long chain aliphatic alcohols, for example heptadecaethyleneoxycetanol, or condensation products of ethylene oxide with partial esters derived from fatty acids and a hexitol such as polyoxyethylene sorbitol monooleate, or condensation products of ethylene oxide with partial esters derived from fatty acids and hexitol anhydrides, for example polyethylene sorbitan monooleate. The aqueous suspensions may also contain one or more preservatives, for example ethyl, or n-propyl, p-hydroxybenzoate, one or more coloring agents, one or more flavoring agents, and one or more sweetening agents, such as sucrose, saccharin or aspartame.
Oily suspensions may be formulated by suspending the active ingredient in a vegetable oil, for example arachis oil, olive oil, sesame oil or coconut oil, or in mineral oil such as liquid paraffin. The oily suspensions may contain a thickening agent, for example beeswax, hard paraffin or cetyl alcohol. Sweetening agents such as those set forth above, and flavoring agents may be added to provide a palatable oral preparation. These compositions may be preserved by the addition of an anti-oxidant such as ascorbic acid.
Dispersible powders and granules suitable for preparation of an aqueous suspension by the addition of water provide the active ingredient in admixture with a dispersing or wetting agent, suspending agent and one or more preservatives. Suitable dispersing or wetting agents and suspending agents are exemplified by those already mentioned above. Additional excipients, for example sweetening, flavoring and coloring agents, may also be present. The pharmaceutical compositions of the invention may also be in the form of an oil-in-water emulsion. The oily phase may be a vegetable oil, for example olive oil or arachis oil, or a mineral oil, for example liquid paraffin or mixtures of these. Suitable emulsifying agents may be naturally-occurring phosphatides, for example soy bean lecithin, and esters or partial esters derived from fatty acids and hexitol anhydrides, for example sorbitan monooleate, and condensation products of the said partial esters with ethylene oxide, for example polyoxyethylene sorbitan monooleate. The emulsions may also contain sweetening and flavouring agents.
The pharmaceutical compositions may be in the form of a sterile injectable aqueous or oleaginous suspension. This suspension may be formulated according to the known art using those suitable dispersing or wetting agents and suspending agents which have been mentioned above. The sterile injectable preparation may also be a sterile injectable solution or suspension in a non-toxic parenterally-acceptable diluent or solvent, for example as a solution in 1,3-butanediol. Among the acceptable vehicles and solvents that may be employed are water, Ringer's solution and isotonic sodium chloride solution. In addition, sterile, fixed oils are conventionally employed as a solvent or suspending medium. For this purpose any bland fixed oil may be employed including synthetic mono- or diglycerides. In addition, fatty acids such as oleic acid find use in the preparation of injectables.
Dosage levels of the order of from about 0.0001 mg to about 1 mg/kg of body weight per day are useful in the treatment of the above-indicated conditions, or alternatively about 0.1 mg to about 10 mg per patient per day.
The amount of active ingredient that may be combined with the carrier materials to produce a single dosage form will vary depending upon the host treated and the particular mode of administration. Dosage unit forms will generally contain between from about 1 mg to about 100 mg of active ingredient, typically 2 mg, 5 mg, 10 mg, 20 mg, 40 mg, 50 mg, 60 mg, 80 mg, or 100 mg.
It will be understood, however, that the specific dose level for any particular patient will depend upon a variety of factors including the age, body weight, general health, sex, diet, time of administration, route of administration, rate of excretion, drug combination and the severity of the particular disease undergoing therapy.
The 1-N-phenyl-amino-1H-imidazole derivatives of formula (I) of the invention and their acid addition salts can be prepared following the general scheme 1.
According to scheme 1, the aniline derivative (1) is condensed with the aldehyde of formula (2) and the imine intermediate is reduced with sodium borohydride or hydrogenated using palladium or platinum oxide as catalyst to afford the N,N-disubstituted aniline (3). Said aniline (3) can also be prepared by reaction of a halogeno derivative (8) with an aniline of formula (1).
The N,N-disubstituted aniline (3) is converted to its nitroso derivative using standard conditions, then reduced to afford the 1,1-disubstituted hydrazine of formula (4).
Alternatively, the 1,1-disubstituted hydrazine (4) can be prepared by selective N-alkylation of a compound of formula (7) with a compound of formula (8) using the conditions described by U. LERCH and I. KÖNIG (Synthesis, 1983, 2, 157-8).
Then, condensation of (4) with dialkyloxy-alkyl-isothiocyanate derivatives or ethylenedioxy-alkyl-isocyanate derivatives, affords the thiosemicarbazide (5) which is transformed to the 1-amino-imidazole-2-thione (6) by treatment with an acid like acetic acid or sulphuric acid.
Desulfurization of (6) in acetic acid, following the conditions described by S. GRIVAS and E. RONNE in Acta Chemica Scandinavica, 1995, 49, 225-229, gives the final 1-N-phenylamino-1H-imidazole derivative (I), which is optionally converted to one of its pharmaceutically acceptable acid addition salts. Alternatively, compound (I) where R3 or R6 is an electron-withdrawing group can be obtained by condensation of the N-imidazoloaniline (9) with the halogeno derivative (8).
|
Q:
UI Design / Flow - what should "Save" and "Cancel" do?
I had a discussion with a co-worker earlier today about a design pet peeve of mine, and after some searching on UI design principles I can't really find anything regarding this particular scenario.
In many applications (mostly web, but windows as well) I see a form that allows the user to add/edit/delete rows of data. This form has "Save" and "Cancel" buttons that only affect the editable fields - record addition/deletion occurs the instant a user clicks "Add" or "Delete".
Example:
In this case, what should the "Save" and "Cancel" buttons do?
My position is that the "Save" and "Cancel" buttons should affect everything (every editable field and every add/edit/delete action) on the form since contextually there is nothing to indicate that they only affect a particular set of actions and/or fields.
My co-worker's position is that it's completely understandable that the "Save" and "Cancel" buttons only affect the fields, and that users won't really notice that additions/deletions are persisted without clicking "Save".
I realize some of this may be "what do the users want/need", but I'm curious what other developers think.
A:
I feel an ambiguous design is a bad design.
Make it clear to the user what will happen. Perhaps an 'add' link in the row you're modifying. Or use AJAX-like functionality to add it on the fly (this becomes harder when you require data for certain fields).
Or color the row a different color when it's unsaved. Or fade from green to the normal color after it's been added. The cancel button, if clicked, should alert the user that unsaved data will be lost (whatever that actually means in this scenario). It might say "the following records will not be saved..."
The fact that both of you disagree means users will also. And a surprised user is an unhappy user.
A:
Your coworker is right; Save and Cancel should only affect field edits, not additions or deletions. Additions and deletions are perceived by the user (and generally implemented) as separate operations.
Conceptually, a record must be added to the database before you can perform an edit on it, so implementing a save in the way that you propose requires wrapping the entire thing in a transaction.
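For illustration, here is a minimal sketch of what such a transactional "Save" could look like, assuming a SQLite backend and a hypothetical records table (both are assumptions made for this example, not details from the question):
import sqlite3

conn = sqlite3.connect("records.db")  # hypothetical database file

def save(pending_ops):
    # "Save" commits adds, edits and deletes as one unit; an exception
    # anywhere (or an explicit "Cancel") rolls the whole batch back.
    with conn:  # sqlite3: commit on success, rollback on exception
        for op in pending_ops:
            if op["kind"] == "add":
                conn.execute("INSERT INTO records (name) VALUES (?)", (op["name"],))
            elif op["kind"] == "edit":
                conn.execute("UPDATE records SET name = ? WHERE id = ?", (op["name"], op["id"]))
            elif op["kind"] == "delete":
                conn.execute("DELETE FROM records WHERE id = ?", (op["id"],))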
Remember, the user can always undo a record addition by deleting it. So conceptually, the "Cancel" is the Edit undo, while the "Delete" is the Add undo.
|
package org.ripple.bouncycastle.asn1.x9;
import java.util.Enumeration;
import org.ripple.bouncycastle.asn1.ASN1EncodableVector;
import org.ripple.bouncycastle.asn1.ASN1Object;
import org.ripple.bouncycastle.asn1.ASN1OctetString;
import org.ripple.bouncycastle.asn1.ASN1Primitive;
import org.ripple.bouncycastle.asn1.ASN1Sequence;
import org.ripple.bouncycastle.asn1.DERSequence;
import org.ripple.bouncycastle.asn1.DERTaggedObject;
/**
* ASN.1 definition for the Diffie-Hellman key exchange OtherInfo structure. See
* RFC 2631, or X9.42, for further details.
*/
public class OtherInfo
extends ASN1Object
{
private KeySpecificInfo keyInfo;
private ASN1OctetString partyAInfo;
private ASN1OctetString suppPubInfo;
public OtherInfo(
KeySpecificInfo keyInfo,
ASN1OctetString partyAInfo,
ASN1OctetString suppPubInfo)
{
this.keyInfo = keyInfo;
this.partyAInfo = partyAInfo;
this.suppPubInfo = suppPubInfo;
}
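/**
 * Parses an OtherInfo structure from a sequence: the first element is the
 * mandatory KeySpecificInfo, optionally followed by a [0]-tagged partyAInfo
 * octet string and then the [2]-tagged suppPubInfo octet string.
 */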
public OtherInfo(
ASN1Sequence seq)
{
Enumeration e = seq.getObjects();
keyInfo = new KeySpecificInfo((ASN1Sequence)e.nextElement());
while (e.hasMoreElements())
{
DERTaggedObject o = (DERTaggedObject)e.nextElement();
if (o.getTagNo() == 0)
{
partyAInfo = (ASN1OctetString)o.getObject();
}
else if (o.getTagNo() == 2)
{
suppPubInfo = (ASN1OctetString)o.getObject();
}
}
}
public KeySpecificInfo getKeyInfo()
{
return keyInfo;
}
public ASN1OctetString getPartyAInfo()
{
return partyAInfo;
}
public ASN1OctetString getSuppPubInfo()
{
return suppPubInfo;
}
/**
* Produce an object suitable for an ASN1OutputStream.
* <pre>
* OtherInfo ::= SEQUENCE {
* keyInfo KeySpecificInfo,
* partyAInfo [0] OCTET STRING OPTIONAL,
* suppPubInfo [2] OCTET STRING
* }
* </pre>
*/
public ASN1Primitive toASN1Primitive()
{
ASN1EncodableVector v = new ASN1EncodableVector();
v.add(keyInfo);
if (partyAInfo != null)
{
v.add(new DERTaggedObject(0, partyAInfo));
}
v.add(new DERTaggedObject(2, suppPubInfo));
return new DERSequence(v);
}
}
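// Usage sketch (illustrative only; assumes a KeySpecificInfo instance built elsewhere):
//   OtherInfo info = new OtherInfo(keyInfo, partyAInfo, suppPubInfo);
//   byte[] der = info.toASN1Primitive().getEncoded();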
|
The Old Swimmin' Hole (1921 film)
The Old Swimmin' Hole is a 1921 American silent comedy film directed by Joe De Grasse based on the poem The Old Swimmin' Hole by James Whitcomb Riley. A reviewer for Exhibitors Herald summarized, "The theme of the picture is a light one—just the pleasant little love story of a country schoolboy and girl in the era of the youth of Tom Sawyer."
The film's lack of intertitles has been described as innovative. "This marks an advance in film making," the same reviewer claimed. "Their absence is not realized for some time after the feature has proceeded, a certain indication that it has been skillfully welded together without them and their place supplied by good acting."
Cast
Charles Ray as Ezra Hull
Laura La Plante as Myrtle
James Gordon as Mr. Hull
Blanche Rose as Mrs. Hull
Marjorie Prevost as Esther
Lincoln Stedman as Skinny
Lon Poff as Professor Payne
References
External links
Category:1921 films
Category:American silent feature films
Category:American films
Category:American black-and-white films
Category:1920s comedy films
Category:Films based on poems
Category:Films directed by Joseph De Grasse
Category:First National Pictures films
Category:Films based on works by James Whitcomb Riley
Category:American comedy films
|
Attempts to modify lung granulomatous responses to Schistosoma japonicum eggs in low and high responder mouse strains.
A radioisotopic assay for acute granulomatous hypersensitivity (AGH) to lyophilized eggs of Schistosoma japonicum has been used to further examine responses to egg antigens in various inbred strains of mice. The ranking of responsiveness in mice from high (C57BL/6), intermediate (BALB/c) to low (CBA/H) was not influenced by high or low egg-sensitization regimens. However, the low responsiveness of responder mice sensitized with eggs by the intraperitoneal compared with the subcutaneous route of injection appears to be an egg dose-related phenomenon. The high AGH responsiveness of C57BL/6 mice can be increased further by sensitization with eggs in the presence of purified pertussigen from Bordetella pertussis but CBA/H mice treated identically remain low responders. The monoclonal anti-egg antibody, P.41, which produces a prominent bleb-type circumoval precipitate with eggs, has been shown to be directed against major 'immunopathologic antigen(s)' of S. japonicum eggs. Thus, C57BL/6 mice were sensitized for AGH by injection of soluble extracted egg antigen (SEA) bound to an immunoabsorbent of P.41 antibody on Sepharose. No success has been achieved in modulating AGH in C57BL/6 mice by injection of hyperimmune antisera raised against lyophilized eggs in either high or low responder mouse strains. This failure is in line with previous results using antisera as well as monoclonal anti-egg antibodies. The consistent failure to demonstrate a modulating effect of antibodies in this laboratory compared with other laboratories may be related to the use of lyophilized rather than viable eggs. The data suggest that activities of antisera in granuloma modulation in murine schistosomiasis japonica result from egg destruction or inhibition of production of immunopathologic antigens by eggs rather than through effects on immunopathologic immune responses.
|
# Maintainer: atom2013 <[email protected]>
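# Build and install with: makepkg -si (standard makepkg workflow; -s installs the build dependencies, -i installs the built package)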
_realname=pip
pkgbase="python-${_realname}"
pkgname=("python-${_realname}")
pkgver=20.1.1
pkgrel=1
pkgdesc="The PyPA recommended tool for installing Python packages"
url="https://pip.pypa.io/"
arch=('any')
license=('MIT')
depends=('python' 'python3-setuptools')
makedepends=('python' 'python3-setuptools')
provides=("python3-${_realname}")
replaces=("python3-${_realname}")
conflicts=("python3-${_realname}")
source=(${_realname}-${pkgver}.tar.gz::https://github.com/pypa/${_realname}/archive/${pkgver}.tar.gz)
sha256sums=('fa20f7632bab63162d281e555e1d40dced21f22b2578709454f9015f279a0144')
build() {
cd "${srcdir}/${_realname}-${pkgver}"
python3 setup.py build
}
package() {
cd "${srcdir}/${_realname}-${pkgver}"
python3 setup.py install --prefix=/usr --root="${pkgdir}"
install -D -m644 LICENSE.txt "${pkgdir}/usr/share/licenses/${pkgname}/LICENSE"
}
|
All posts tagged "Park Geunhye"
By Kim Tae-gyu THE HAGUE ― Japanese Prime Minister Shinzo Abe’s charm offensive obviously failed to win over President Park Geun-hye. In a trilateral summit, mediated by U.S. President Barack Obama, the two met for the first...
President Park Geun-hye said that unification would be a huge boon for the nation’s economic growth and likened the potential benefits of it to hitting the jackpot. She disclosed on Monday how she will govern the country...
President lays out 3-year-plan for $30,000 era, 70% employment By Kim Tae-gyu President Park Geun-hye said Monday that she is prepared to meet with North Korean ruler Kim Jong-un and Japanese Prime Minister Shinzo Abe. However, she...
By Kim Tae-gyu, Chung Min-uck President Park Geun-hye on Monday denounced Prime Minister Shinzo Abe’s controversial visit to the Yasukuni Shrine last week, sending a message that her government will deal sternly with Japan’s nationalist moves. Abe,...
By Kang Seung-woo Japanese Prime Minister Shinzo Abe all but squandered the chance to meet with President Park Geun-hye by visiting the Yasukuni Shrine on Thursday, analysts said. The strained bilateral ties between the two leaders, which...
By Jun Ji-hye The government was in close consultation with allies and neighboring countries, Friday, regarding any possible provocative action by North Korea, according to the Ministry of Unification spokesman. The move followed the surprise announcement that...
|
Erythropoietin treatment in the sixth posttransplant month as a prognostic factor for renal allograft survival.
The purpose of this work was to assess the prognostic value of the need for erythropoietin (EPO) treatment at 6 months after transplantation. We retrospectively reviewed the outcomes of 143 consecutive cadaveric kidney transplants performed between January 2000 and April 2004, functioning at 6 months posttransplantation. Patients were divided into two groups: group EPO6m (n = 24) received EPO treatment in the sixth month, and a control group (n = 119) did not receive EPO. Renal function deterioration (RFD) was considered to be a sustained decrease in creatinine clearance (CrCl) greater than 20% between the sixth month posttransplant and the last visit. Mean follow-up was 38 +/- 16 months. The mean ages of the donor (57 +/- 9 vs 49 +/- 12 years; P = .001) and the recipient (59 +/- 12 vs 47 +/- 17 years; P = .000) were greater in the EPO6m group. Delayed graft function (83% vs 48%; P = .001) was more frequent in the EPO6m group. At 6 months after transplantation the EPO6m group showed lower hemoglobin (11.52 +/- 1.71 vs 13.32 +/- 1.69 g/dL; P = .000), higher serum creatinine (2.31 +/- 0.72 vs 1.65 +/- 0.53 mg/dL; P = .000), lower CrCl (33.53 +/- 10.83 vs 53.6 +/- 17.58 mL/min; P = .000), and similar proteinuria. RFD was more common in the EPO6m group (38% vs 10%; P = .026), with a different pattern of evolution of CrCl (-0.098 +/- 0.176 vs +0.093 +/- 0.396 mL/min/mo, P = .000). Multivariate analysis demonstrated that treatment with EPO at 6 months was the only predictor of RFD (RR 4.46; 1.58 to 12.58; P = .005). The need for EPO at 6 months posttransplant was a good predictor of later renal allograft deterioration, more sensitive than serum creatinine or proteinuria.
|
Q:
Frustrating python syntax error
I am writing a script to automate HvZ games at my college and have run into this strange frustrating syntax error:
File "HvZGameMaster.py", line 53
class players(object):
^
SyntaxError: invalid syntax
Here is the offending code
class mailMan(object):
"""mailMan manages player interactions such as tags reported via text messages or emails"""
def __init__(self, playerManager):
super(mailMan, self).__init__()
self.mail = imaplib.IMAP4_SSL('imap.gmail.com')
self.mail.login(args.username,args.password)
self.mail.list()
# Out: list of "folders" aka labels in gmail.
self.mail.select("inbox") #connect to inbox.
def getBody(self, emailMessage):
maintype = emailMessage.get_content_maintype()
if maintype == 'multipart':
for part in emailMessage.get_payload():
if part.get_content_maintype() == 'text':
return part.get_payload()
elif maintype == 'text':
return emailMessage.get_payload()
def getUnread(self):
self.mail.select("inbox") # Select inbox or default namespace
(retcode, messages) = self.mail.search(None, '(UNSEEN)')
if retcode == 'OK':
retlist = []
for num in messages[0].split(' '):
print 'Processing :', messages
typ, data = self.mail.fetch(num,'(RFC822)')
msg = email.message_from_string(data[0][1])
typ, data = self.mail.store(num,'-FLAGS','\\Seen')
if retcode == 'OK':
for item in str(msg).split('\n'):
#finds who sent the message
if re.match("From: *",item):
print (item[6:], self.getBody(msg))
retlist.append((item[6:], self.getBody(msg).rstrip())
#print (item, self.getBody(msg).rstrip())
class players(object): #<-the problem happens here
"""manages the player"""
def __init__(self, pDict):
super(players, self).__init__()
self.pDict = pDict
#makes a particular player a zombie
def makeZombie(self, pID):
self.pDict[pID].zombie = True
#makes a particular player a human
def makeHuman(self, pID):
self.pDict[pID].zombie = False
As far as I can tell, what I have written is correct. I have checked to make sure it is all tabs and not spaces, and I have made sure I don't have any erroneous \r's or \n's floating around (all \n's are where they should be, at the end of the line, and I'm not using any \r's).
You can find all my code for this project here if you would like to try running it yourself
A:
There is an unbalanced (missing) parenthesis on the line above the line raising the error:
retlist.append((item[6:], self.getBody(msg).rstrip())
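Adding the missing closing parenthesis balances the call:
retlist.append((item[6:], self.getBody(msg).rstrip()))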
Note that some editors have matching parenthesis highlighting, and key combinations for moving back and forth across matched parentheses. Using an editor with these features can help cut down on these errors.
|
Welcome to the Psychology Department
The Department of Psychology at the University of West Georgia is unique in that our theoretical roots are in humanistic, transpersonal, and critical psychology. Our courses range from classically humanistic concerns - like the centrality of human subjective experience in psychology, holistic approaches to psychological understanding, human growth and development, and the enhancement of human potential - to contemporary attention to transpersonal and spiritual horizons. Themes such as the meaning of genuine community, sociality, understanding oneself and others, and the myriad ways through which we grow and develop are central to our academic learning environment.
Why Humanistic Psychology?
Our program, founded on a Humanistic framework, is non-traditional in many senses of the word. Our Department focuses on growth and development in the individual and community. This approach allows the professors to teach courses they are most passionate about. Being self-motivated, expressive, and involved in the program are key aspects of being a part of the Psychology program.
The mission of the Department of Psychology at the undergraduate and graduate levels is to approach the subject matter of psychology in ways that facilitate the understanding of oneself and others as: (1) foundational to personal growth and development; (2) critical to a deeper understanding of the nature of psychology itself; and (3) central to professional development.
This long-standing emphasis of the Department is consistent with the University’s goal: to foster educational excellence in a personal environment.
In addition, the Department seeks to provide an educational environment in which students and faculty can address social and personal issues in a specifically psychological manner. This emphasis requires knowledge of humanistic and alternative approaches to psychology as well as acquaintance with the discipline’s traditional topics and definition as a social science. Such a broad scope of concerns accords well with the University’s emphasis on critical scholarly inquiry and creativity.
|
1. Field of the Invention
The invention relates to a method of controlling a clock signal in a circuit receiving an external clock signal and transmitting an internal clock signal. The invention relates also to a circuit for controlling a clock signal.
2. Description of the Related Art
A circuit for controlling a clock signal is generally comprised of a feedback system synchronization circuit such as phase locked loop, and is presently requested to eliminate clock skew in short synchronization time.
In order to meet with such request, a lot of circuits have been suggested in the following documents, for instance:
(a) Japanese Unexamined Patent Publication No. 8-237091
(b) 1996 Symposium on VLSI Circuit, pp. 112-113
(c) 1996 Symposium on VLSI Circuit, pp. 192-193
(d) Proceedings of IEEE 1992 CICC 25.2
(e) IEICE TRANS. ELECTRON, Vol. E79-C, No. 6, Jun. 1996, pp. 798-807
(f) Japanese Unexamined Patent Publication No. 5-152438
(g) Japanese Unexamined Patent Publication No. 6-244282
FIGS. 1 to 6A and 6B illustrate circuits suggested in the above-listed prior art (a) to (e), respectively. As mentioned later in detail, the above-mentioned documents (a) to (g) do not suggest detecting clock delay unlike the present invention.
FIG. 1 illustrates a synchronization delay circuit having been suggested in Japanese Unexamined Patent Publication No. 8-237091.
The illustrated synchronization delay circuit is comprised of a synchronization delay circuit macro 908, an input buffer 903, a dummy delay circuit 905, and a clock driver 904. The synchronization delay circuit macro 908 is comprised of a first row of delay circuits 901 for measuring a time difference, and a second row of delay circuits 902 for reproducing the thus measured delay time. A clock signal is transmitted in the second row of delay circuits 902 in a direction opposite to a direction in which a clock signal is transmitted in the first row of delay circuits 901. The dummy delay circuit 905 is designed to have delay time equal to a sum (td1+td2) of delay time td1 of the input buffer 903 and delay time td2 of the clock driver 904.
The dummy delay circuit 905 is usually comprised of an input buffer dummy 905A having the same structure and hence the same delay time as that of the input buffer 903, and a clock driver dummy 905B, in order to equalize the delay time thereof to a sum (td1+td2) of the delay time td1 of the input buffer 903 and delay time td2 of the clock driver 904.
An external clock signal 906 is input into the first row of delay circuits 901 through the input buffer 903 and the dummy delay circuit 905, and output through the second row of delay circuits 902. The thus output clock signal is driven by the clock driver 904 to thereby turn into an internal clock signal 907, which is transmitted to internal circuits (not illustrated).
With reference to FIG. 1, the first row of delay circuits 901 has the same delay time as that of the second row of delay circuits 902. The first row of delay circuits 901 measures a certain period of time, and the second row of delay circuits 902 reproduces the thus measured period of time. A signal input into the first row of delay circuits 901 is advanced through the first row of delay circuits 901 by a desired period of time, and then, a signal is advanced in the second row of delay circuits 902 through the same number of delay devices as the number of delay devices through which the signal has passed in the first row of delay circuits 901. As a result, the second row of delay circuits 902 can reproduce a period of time having been measured by the first row of delay circuits 901.
Processes by which a signal is advanced in the second row of delay circuits 902 through the same number of delay devices as the number of delay devices through which the signal has passed in the first row of delay circuits 901 is grouped into two groups with respect to a direction or directions in which a signal is transmitted in the first and second rows of delay circuits 901 and 902. In addition, a length of the second row of delay circuits 902 is determined either by selecting an end of the length or by entirely selecting a row. Hence, the above-mentioned processes can be grouped into four groups.
For instance, as to the former grouping, each of FIGS. 4 and 5 illustrates a circuit in which a clock signal is advanced in the first row of delay circuits 901 in the same direction as a direction in which a clock signal is advanced in the second row of delay circuits 902, and the number of elements constituting the second row of delay circuits 902 is determined by an output terminal of the second row of delay circuits 902. Each of FIGS. 2 and 3 illustrates a circuit in which a clock signal is advanced in the first row of delay circuits 901 in a direction opposite to a direction in which a clock signal is advanced in the second row of delay circuits 902, and the number of elements constituting the second row of delay circuits 902 is determined by an input terminal of the second row of delay circuits 902.
As to the latter grouping, each of FIGS. 2 and 5 illustrates a circuit in which a length of the second row of delay circuits 902 is determined by selecting an end of the length, whereas each of FIGS. 3 and 4 illustrates a circuit in which a length of the second row of delay circuits 902 is determined by selecting an entire length.
FIG. 2 illustrates a circuit having been suggested in the above-listed document (a), FIG. 3 illustrates a circuit having been suggested in the above-listed document (e), FIG. 4 illustrates a circuit having been suggested in the above-listed document (c), and FIG. 5 illustrates a circuit having been suggested in the above-listed documents (b) and (d).
Hereinbelow is explained an operation for removing clock skew with reference to timing charts illustrated in FIGS. 6A, 6B, 7A, and 7B.
(A) Clock delay in a circuit having no synchronization delay circuits
FIG. 6A illustrates a circuit having no synchronization delay circuits. An external clock signal 906 is input through an input buffer 903, and is driven by a clock driver 904 to thereby turn into an internal clock signal 907. A delay time difference between the external clock signal 906 and the internal clock signal 907 is equal to the sum of delay time td1 of the input buffer 903 and delay time td2 of the clock driver 904. As illustrated in FIG. 6B, this sum (td1+td2) is the clock skew in the illustrated circuit.
(B) Principle in removal of clock delay by means of a synchronization delay circuit
A synchronization delay circuit removes clock skew based on the fact that a clock pulse is input into it every clock cycle tCK. Specifically, a delay circuit having delay time defined as (tCK-(td1+td2)) is positioned between an input buffer having delay time td1 and a clock driver having delay time td2, so that the total path delay equals one clock cycle tCK (td1 + (tCK-(td1+td2)) + td2 = tCK). As a result, an internal clock signal transmitted from the clock driver has the same timing as that of an external clock signal.
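As a numerical illustration (the figures here are assumed purely for exposition and are not taken from the specification): if tCK = 10 ns, td1 = 2 ns and td2 = 1 ns, the delay circuit is set to tCK-(td1+td2) = 7 ns, so the total path delay is 2 ns + 7 ns + 1 ns = 10 ns = tCK, and each internal clock edge coincides with an external clock edge exactly one cycle later.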
(C) Removal of clock delay by means of a synchronization delay circuit
FIG. 7B is a timing chart of a synchronization delay circuit.
A synchronization delay circuit needs 2 clock cycles (2×tCK) to operate. In a first cycle, a synchronization delay circuit measures the delay time (tCK-(td1+td2)) dependent on a clock cycle, and determines the delay for a delay circuit which reproduces the delay time (tCK-(td1+td2)). In a second cycle, the thus measured delay time (tCK-(td1+td2)) is used.
As illustrated in FIG. 7A, a dummy delay circuit 905 and a row of delay circuits 901 are used for measuring the delay time (tCK-(td1+td2)) dependent on a clock cycle, in the first cycle.
A first pulse in successive two pulses in an external clock signal 906 is input through an input buffer 903, and is transmitted through a dummy delay circuit 905 and a row of delay circuits 901 during a clock cycle tCK starting when the first pulse leaves the input buffer 903 and terminating when a second pulse leaves the input buffer 903. Since the dummy delay circuit 905 has delay time defined as (td1+td2), a period of time in which the external clock signal 906 is advanced through a first row of delay circuits 901 is defined as (tCK-(td1+td2)).
A second row of delay circuits 902 is designed to have delay time equal to the above-mentioned period of time (tCK-(td1+td2)) in which the external clock signal 906 is advanced through the first row of delay circuits 901.
The delay time of the second row of delay circuits 902 can be set in accordance with any one of the above-mentioned four processes.
In the second cycle, a clock signal transmitted from the input buffer 903 advances through the second row of delay circuits 902 having delay time defined as (tCK-(td1+td2)), and then is output through the clock driver 904. Thus, there is produced an internal clock signal 907 having delay time tCK.
The thus produced internal clock signal 907 has a cycle of 2×tCK and has no clock skew.
However, the above-mentioned synchronization delay circuits are accompanied with the following problems.
The first problem is that since dummy delay of a clock signal is fixed, it is necessary to estimate the fixed dummy delay in advance. It would be possible to design a dummy delay circuit for each one of the chips in a device in which clock delay can be estimated in advance, such as a micro-processor or a memory device. However, it would be quite difficult to design a dummy delay circuit for a device in which clock delay is dependent on the wiring layout of a chip, such as an application specific integrated circuit (ASIC).
The second problem is that, as illustrated in FIGS. 8A and 8B, there is a difference both in the dependency of delay time on temperature and in the dependency of delay time on source voltage between a clock driver and a clock driver dummy, and this holds even in a device in which clock delay can be estimated in advance, such as a micro-processor or a memory device.
The third problem is that it is impossible to eliminate a delay time difference in an internal clock signal made synchronized with an external clock signal, as having been indicated in the above-mentioned document (e), because a delay circuit for measuring a delay difference and a delay circuit for reproduction are both accomplished by determining the number of stages in a delay circuit row, and further because there is a time difference in a period of time for charging and discharging between those delay circuits. This causes dependency of a delay error or a delay time difference inherent to a digital circuit, on a clock cycle.
The fourth problem is that it is necessary to entirely drive a row of delay circuits when a clock cycle is to be reproduced by means of the row of delay circuits, resulting in an increase in load capacity and an increase in current consumption.
|
---
abstract: |
Following problems posed by Gyárfás [@gyarfas_survey], we show that for every $r$-edge-colouring of $K_n$ there is a monochromatic triple star of order at least $n/(r-1)$, improving Ruszinkó’s result [@diam5].
An edge colouring of a graph is called a local $r$-colouring if every vertex spans edges of at most $r$ distinct colours. We prove the existence of a monochromatic triple star with at least $rn/(r^2-r+1)$ vertices in every local $r$-colouring of $K_n$.
author:
- 'Shoham Letzter [^1]'
bibliography:
- 'bib.bib'
date: '21 April, 2013'
title: Large monochromatic triple stars in edge colourings
---
Introduction
============
A very simple observation, remarked by Erdős and Rado, is that when the edges of $K_n$ are $2$-coloured there exists a monochromatic spanning component. One can generalize this and look for large monochromatic components satisfying certain conditions. For example, it is an easy exercise to show that every $2$-colouring of $K_n$ has a spanning component of diameter at most $3$ (see [@manuscript], [@mubayi]). As a further generalization, one can consider edge colourings with more than two colours. Gyárfás [@gyarfas] extended the above observation by showing that every $r$-colouring of $K_n$ has a monochromatic component with at least $n/(r-1)$ vertices. This is tight when there exists an affine plane of order $r-1$ and $(r-1)^2$ divides $n$. Füredi [@furedi] improved this bound in the case when there exists no affine plane of order $r-1$, showing that for such $r$ every $r$-colouring of $K_n$ has a monochromatic component with at least $n/(r-1-(r-1)^{-1})$ vertices.
A *double star* is a tree obtained by joining the centres of two stars by an edge. Gyárfás [@gyarfas_survey] proposed the following problem.
\[prob\_1\] Is it true that for $r\ge 3$ every $r$-colouring of $K_n$ contains a monochromatic double star of size at least $n/(r-1)$?
For $r=2$ the answer to this question is negative. It is shown in [@double_star] (and implicitly in [@domination]) that when $K_n$ is $2$-coloured there is a monochromatic double star of size at least $3n/4$. This can be shown to be asymptotically tight using random graphs. The best known result so far for $r\ge 3$ was obtained by Gyárfás and Sárk[ö]{}zy [@double_star]. They showed that when the edges of $K_n$ are $r$-coloured there is a monochromatic double star of size at least $\frac{n(r+1)+r-1}{r^2}$.
A weaker version of the above problem is as follows.
\[prob\_2\] Is there a constant $d$ for which in every $r$-colouring of $K_n$ there exists a monochromatic component of diameter at most $d$ and size at least $n/(r-1)$?
Note that an affirmative answer to the first problem implies an affirmative answer to this one with $d=3$, which would be best possible (see [@fowler]). Ruszinkó [@diam5] solved the last problem with $d=5$, showing that for every $r$-colouring of $K_n$ there is a monochromatic component of diameter at most $5$ with at least $n/(r-1)$ vertices.
The first main result of this short note proves a weaker version of the first problem. A *triple star* is a tree obtained by joining the centres of three stars by a path of length $2$.
\[thm\_triple\_star\] Let $G=K_n$ be $r$-edge-coloured with $r\ge 3$. Then $G$ contains a monochromatic triple star with at least $n/(r-1)$ vertices.
Note that this is sharp in certain cases, namely whenever $n/(r-1)$ is a sharp lower bound for general monochromatic components in $r$-colourings of $K_n$. The claim in the theorem does not hold for $r=2$. In [@domination] it is implicitly shown that every $2$-coloured $K_n$ contains a monochromatic triple star of size at least $7n/8$. Furthermore, this is shown to be asymptotically tight using random colourings. As an immediate corollary of theorem \[thm\_triple\_star\] we answer problem \[prob\_2\] with $d=4$, improving Ruszinkó’s result.
Let $r\ge 3$. In every $r$-colouring of $K_n$ there is a monochromatic subgraph of diameter at most $4$ on at least $n/(r-1)$ vertices.
A *local $r$-colouring* is an edge colouring in which for every vertex the edges incident to it have at most $r$ distinct colours. In [@local_colouring] it is shown that in every local $r$-colouring of $K_n$ there is a monochromatic component with at least $\frac{rn}{r^2-r+1}$ vertices. This is sharp when there exists a projective plane of order $r-1$ and $r^2-r+1$ divides $n$. In [@double_star] it is shown that in local $r$-colourings of $K_n$ there is a monochromatic double star of size at least $\frac{(r+1)n+r-1}{r^2+1}$. Moreover, it is shown that for local $2$-colourings of $K_n$ there exists a monochromatic double star of size at least $2n/3$, and, as mentioned above, this is a sharp lower bound for the size of general monochromatic connected components. Our second main result shows that the above lower bounds for monochromatic components can also be achieved by components which are triple stars. Namely,
\[thm\_local\_triple\_stars\] Let $G=K_n$ be $r$-locally-coloured with $r\ge 3$. Then $G$ contains a monochromatic triple star with at least $\frac{rn}{r^2-r+1}$ vertices.
As before, the following corollary is immediate.
Let $r\ge 3$. In every $r$-local-colouring of $K_n$ there exists a monochromatic component of diameter at most $4$ with at least $\frac{rn}{r^2-r+1}$ vertices.
We prove theorem \[thm\_triple\_star\] in section \[sec\_colouring\], and theorem \[thm\_local\_triple\_stars\] in section \[sec\_local\_colourings\]. In the last section \[sec\_conclusion\] we finish with some concluding remarks and open problems.
Triple stars in edge colourings {#sec_colouring}
===============================
Suppose, contrary to the statement of the theorem, that $G$ contains no monochromatic triple star of the given size. Let $G_1$ be a subgraph of $G$ which is a monochromatic double star of maximal order and let $U$ be its vertex set. Denote the colour of the edges of $G_1$ by $r$. By our assumption $|U|< n/(r-1)$. Let $a>0$ satisfy $|U|= n/(r-1)-a$ (note that $a$ need not be an integer).
Consider the bipartite graph $G_2$ with bipartition $U\cup (V(G)\setminus U)$ and edge set $E$, containing the edges between $U$ and $V(G)\setminus U$ not coloured by $r$ in $G$. Note that for every vertex $u\in U$ less than $a$ edges between $u$ and $V(G)\setminus U$ have colour $r$, as otherwise there would be an $r$-coloured triple star with at least $ n/(r-1)$ vertices, contradicting our assumption. Therefore $$\label{eqn_num_of_edges}
|E|> |U|(n-|U|)-a|U|.$$
We use the following lemma which is due to Mubayi [@mubayi] and Liu, Morris and Prince [@highly_connected]. We present the proof here for the sake of completeness.
\[lem\_bipartite\_double\_star\] Let $G=(V, E)$ be a bipartite graph with bipartition $V=A\cup B$. Then $G$ contains a double star with at least $(\frac{1}{|A|}+\frac{1}{|B|})|E|$ vertices.
For a vertex $v\in V$, let $d(v)$ denote the degree of $v$ in $G$ and for an edge $e=(a,b)\in E$, let $c(e)=d(a)+d(b)$. By the Cauchy-Schwarz inequality, $$\begin{aligned}
&\sum_{e\in E}c(e)=\sum_{a\in A}d(a)^2+\sum_{b\in B}d(b)^2\ge
\frac{1}{|A|}(\sum_{a\in A}d(a))^2+\frac{1}{|B|}(\sum_{b\in B}d(b))^2=(\frac{1}{|A|}+\frac{1}{|B|})|E|^2.\end{aligned}$$ Therefore, there is an edge $e\in E$ with $c(e)\ge(\frac{1}{|A|}+\frac{1}{|B|})|E|$, i.e. $G$ contains a double star of the required size.
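As a quick illustration (not part of the original argument), the following Python sketch checks the bound of the lemma on small random bipartite graphs; all names are illustrative.

```python
# Empirical check of the double-star lemma: in a bipartite graph with parts
# A and B, some edge (a, b) satisfies d(a) + d(b) >= (1/|A| + 1/|B|)|E|,
# and d(a) + d(b) is exactly the number of vertices of the double star
# centred at the edge (a, b).
import random

def max_double_star(A, B, edges):
    deg = {v: 0 for v in A + B}
    for a, b in edges:
        deg[a] += 1
        deg[b] += 1
    return max(deg[a] + deg[b] for a, b in edges)

random.seed(0)
for _ in range(1000):
    nA, nB = random.randint(1, 8), random.randint(1, 8)
    A = [("a", i) for i in range(nA)]
    B = [("b", j) for j in range(nB)]
    edges = [(a, b) for a in A for b in B if random.random() < 0.5]
    if edges:
        assert max_double_star(A, B, edges) >= (1 / nA + 1 / nB) * len(edges)
```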
By considering the edges with the majority colour, the lemma implies that $G_2$ has a monochromatic double star $G_3$ with at least $(\frac{1}{|U|}+\frac{1}{n-|U|})\frac{|E|}{r-1}$ vertices. Using inequality \[eqn\_num\_of\_edges\] for the size of $E$ and the expression for the size of $U$, $G_3$ has at least the following number of vertices. $$\begin{aligned}
&\frac{1}{r-1}\cdot\frac{n}{|U|(n-|U|)}\cdot\,(\,|U|(n-|U|)-a|U|\,)=\\
&\frac{n}{r-1}-a\frac{n}{r-1}(\frac{1}{ \frac{r-2}{r-1}n +a})>\\
&\frac{n}{r-1}-\frac{a}{r-2} \ge \frac{n}{r-1}-a=|U| \end{aligned}$$ Note that we use here the fact that $r\ge 3$. This implies that $G_3$ has more than $|U|$ vertices, contradicting the choice of $U$ as the vertex set of the largest monochromatic double star of $G$. We have thus reached a contradiction to the initial assumption, i.e. $G$ contains a triple star of the required size.
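As a further illustration (again not part of the paper), a brute-force Python sketch that finds the largest monochromatic triple star in a random $r$-colouring of a small $K_n$ and checks it against the theorem’s bound; all names are illustrative.

```python
# A triple star has three centres x, y, z joined by a path x-y-z; its
# vertices are x, y, z together with all their neighbours in that colour.
import itertools
import random

def largest_mono_triple_star(n, colour):
    """colour[(u, v)] is defined for u < v; returns the maximum order."""
    def c(u, v):
        return colour[(min(u, v), max(u, v))]
    def nbrs(u, col):
        return {v for v in range(n) if v != u and c(u, v) == col}
    best = 0
    for y in range(n):
        for x, z in itertools.combinations((v for v in range(n) if v != y), 2):
            col = c(x, y)
            if col == c(y, z):
                star = nbrs(x, col) | nbrs(y, col) | nbrs(z, col) | {x, y, z}
                best = max(best, len(star))
    return best

random.seed(1)
r, n = 3, 14
colour = {(u, v): random.randrange(r)
          for u, v in itertools.combinations(range(n), 2)}
assert largest_mono_triple_star(n, colour) >= n / (r - 1)  # bound: n/(r-1) = 7
```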
Triple stars in local edge colourings {#sec_local_colourings}
=====================================
As in the proof of theorem \[thm\_triple\_star\], we take $U$ to be the vertex set of the largest monochromatic double star, and assume it has $\frac{rn}{r^2-r+1}-a$ vertices, where $a>0$. We define the bipartite graph $G_2$ as before, and obtain the same inequality \[eqn\_num\_of\_edges\] for $|E|$. The following lemma generalizes lemma \[lem\_bipartite\_double\_star\] from the previous section. A weaker form of this lemma appears in [@local_colouring].
Let a bipartite graph $G=(V, E)$ with bipartition $V=A\cup B$ be edge coloured. Let $r,t$ be such that every vertex $x\in A$ is incident to edges of at most $r$ distinct colours, and every vertex $y\in B$ is incident to edges of at most $t$ colours. Then $G$ contains a monochromatic double star with at least $(\frac{1}{|A|r}+\frac{1}{|B|t})|E|$ vertices.
For a vertex $v\in A\cup B$ denote by $I(v)$ the set of colours used in the set of edges in $G$ incident with $v$. For a colour $k$ denote by $d_k(v)$ the number of $k$-coloured edges containing $v$. For an edge $e=(a,b)$ in $G$ of colour $k$ let $c(e)=d_k(a)+d_k(b)$. Then by the Cauchy-Schwarz inequality, using the properties of the colouring, $$\begin{aligned}
&\sum_{e\in E}c(e)=\sum_{a\in A}\sum_{k\in I(a)}d_k(a)^2+\sum_{b\in B}\sum_{k\in I(b)}d_k(b)^2\ge\\
&\frac{1}{|A|r}(\sum_{a\in A}\sum_{k\in I(a)}d_k(a))^2+\frac{1}{|B|t}(\sum_{b\in B}\sum_{k\in I(b)}d_k(b))^2=(\frac{1}{|A|r}+\frac{1}{|B|t})|E|^2.\end{aligned}$$ The claim follows.
Note that every vertex in $V(G)\setminus U$ spans edges of at most $r$ colours in $G_2$ and every vertex in $U$ spans edges with at most $r-1$ colours in $G_2$, using the fact that $G$ is locally $r$-coloured, and the definition of $G_2$. Thus $G$ contains a monochromatic double star with at least the following number of vertices. $$\begin{aligned}
&(\frac{1}{|U|(r-1)}+\frac{1}{(n-|U|)r})|E|\,>\,
(\frac{1}{|U|(r-1)}+\frac{1}{(n-|U|)r})(|U|(n-|U|)-a|U|)=\\
&\frac{n-|U|}{r-1}+\frac{|U|}{r}-\frac{a}{r-1}-a\frac{|U|}{(n-|U|)r}=\\
&\frac{(r-1)n}{r^2-r+1}+\frac{a}{r-1}+\frac{n}{r^2-r+1}-\frac{a}{r}-\frac{a}{r-1}-a\frac{\frac{rn}{r^2-r+1}-a}{\frac{(r-1)^2rn}{r^2-r+1}+ar}\ge\\
&\frac{rn}{r^2-r+1}-\frac{a}{r}-\frac{a}{(r-1)^2}\,>\,
\frac{rn}{r^2-r+1}-a=|U|.\end{aligned}$$ As in theorem \[thm\_triple\_star\], we reached a contradiction to the choice of $U$, thus we have a monochromatic triple star of the required size.
Concluding Remarks {#sec_conclusion}
==================
Problem \[prob\_1\] which is the original question posed by Gyárfás, remains open. Is it true that for $r\ge3$ every $r$-colouring of $K_n$ contains a monochromatic triple star with at least $n/(r-1)$ vertices? It may also be interesting to consider the weaker version of this question, taking $d=3$ in problem \[prob\_2\]. Does every $r$-colouring of $K_n$ contain a diameter $3$ monochromatic subgraph of size at least $n/(r-1)$? Finally, it may be interesting to address the same questions in the context of local $r$-colourings (for $r\ge 3$). Namely, is it true that every local $r$-colouring contains a component of diameter at most $3$ with at least $\frac{rn}{r^2-r+1}$ vertices? If so, is there such a component which is a double star?
Acknowledgements {#acknowledgements .unnumbered}
================
I would like to thank Miklós Ruszinkó for introducing me to the subject of finding large monochromatic components in edge colourings of $K_n$ and for some useful discussions.
[^1]: Department of Pure Mathematics and Mathematical Statistics, Centre for Mathematical Sciences, Wilberforce Road, Cambridge, CB3 0WB, UK. Email: [email protected]
|
// This file was procedurally generated from the following sources:
// - src/arguments/args-trailing-comma-spread-operator.case
// - src/arguments/default/cls-expr-async-gen-meth.template
/*---
description: A trailing comma should not increase the arguments.length, using spread args (class expression async generator method)
esid: sec-class-definitions-runtime-semantics-evaluation
features: [async-iteration]
flags: [generated, async]
info: |
ClassExpression : class BindingIdentifieropt ClassTail
1. If BindingIdentifieropt is not present, let className be undefined.
2. Else, let className be StringValue of BindingIdentifier.
3. Let value be the result of ClassDefinitionEvaluation of ClassTail
with argument className.
[...]
14.5.14 Runtime Semantics: ClassDefinitionEvaluation
21. For each ClassElement m in order from methods
a. If IsStatic of m is false, then
i. Let status be the result of performing
PropertyDefinitionEvaluation for m with arguments proto and
false.
[...]
Runtime Semantics: PropertyDefinitionEvaluation
AsyncGeneratorMethod :
async [no LineTerminator here] * PropertyName ( UniqueFormalParameters )
{ AsyncGeneratorBody }
1. Let propKey be the result of evaluating PropertyName.
2. ReturnIfAbrupt(propKey).
3. If the function code for this AsyncGeneratorMethod is strict mode code, let strict be true.
Otherwise let strict be false.
4. Let scope be the running execution context's LexicalEnvironment.
5. Let closure be ! AsyncGeneratorFunctionCreate(Method, UniqueFormalParameters,
AsyncGeneratorBody, scope, strict).
[...]
Trailing comma in the arguments list
Left-Hand-Side Expressions
Arguments :
( )
( ArgumentList )
( ArgumentList , )
ArgumentList :
AssignmentExpression
... AssignmentExpression
ArgumentList , AssignmentExpression
ArgumentList , ... AssignmentExpression
---*/
var arr = [2, 3];
var callCount = 0;
var C = class {
async *method() {
assert.sameValue(arguments.length, 4);
assert.sameValue(arguments[0], 42);
assert.sameValue(arguments[1], 1);
assert.sameValue(arguments[2], 2);
assert.sameValue(arguments[3], 3);
callCount = callCount + 1;
}
};
// Stores a reference `ref` for case evaluation
var ref = C.prototype.method;
ref(42, ...[1], ...arr,).next().then(() => {
assert.sameValue(callCount, 1, 'method invoked exactly once');
}).then($DONE, $DONE);
|
Featured Agent
Pedro Cuberos
Broker Associate
Broker-Associate Pedro Cuberos has been an award-winning, multi-lingual member of the Brown Harris Stevens brokerage firm since 2005. Catering to Miami’s diverse clientele, Pedro is fluent in English, Spanish, Italian and Portuguese, providing comprehensive real estate expertise, multi-faceted marketing platforms and the ultimate in client care.
Focused on the single family home and condominium markets, Pedro’s market reach encompasses the areas of Miami Beach, Downtown Miami, Brickell, Key Biscayne, Coconut Grove, Coral Gables and Pinecrest.
A real estate professional since 1996, Pedro is a resource powerhouse for those looking to purchase or sell in Miami. His background work and education experience within both Urban and City planning, coupled with extensive knowledge of both the residential and commercial markets, empower both buyers and sellers alike to make the most educated, comprehensive and sophisticated real estate decisions.
In addition to his real estate acumen, Pedro has worked in interior remodeling and decor. For those wanting to increase the appeal of their home, Pedro offers a comprehensive package, from simple staging to extensive remodeling.
|
from django.contrib import admin
from django.contrib.sites.shortcuts import get_current_site
from django.utils.translation import ugettext_lazy as _
from registration.models import RegistrationProfile
class RegistrationAdmin(admin.ModelAdmin):
actions = ['activate_users', 'resend_activation_email']
list_display = ('user', 'activation_key_expired')
raw_id_fields = ['user']
search_fields = ('user__username', 'user__first_name')
def activate_users(self, request, queryset):
"""
Activates the selected users, if they are not already
activated.
"""
for profile in queryset:
RegistrationProfile.objects.activate_user(profile.activation_key)
activate_users.short_description = _("Activate users")
def resend_activation_email(self, request, queryset):
"""
Re-sends activation emails for the selected users.
Note that this will *only* send activation emails for users
who are eligible to activate; emails will not be sent to users
whose activation keys have expired or who have already
activated.
"""
site = get_current_site(request)
for profile in queryset:
if not profile.activation_key_expired():
profile.send_activation_email(site)
resend_activation_email.short_description = _("Re-send activation emails")
admin.site.register(RegistrationProfile, RegistrationAdmin)
|
CYP2D6 and CYP1A1 mutations in the Turkish population.
Drugs and carcinogens are substrates of a group of metabolic enzymes including cytochrome P450 enzymes and glutathione S-transferases. Many of the genes encoding these enzymes exhibit functional polymorphisms that contribute to individual cancer susceptibility and drug response. Molecular studies based on these polymorphic enzymes also help explain the aetiology of cancer and guide therapeutic management in the clinic. We analysed the cytochrome P450 1A1 (CYP1A1) and 2D6 (CYP2D6) variant genotype and allele frequencies by PCR-RFLP in Turkish individuals (n=140). The frequency of the CYP1A1*2A mutant allele was found to be 15.4%, and the CYP2D6*3 and *4 mutant allele (poor metabolizer) frequencies were 2.5% and 13.9%, respectively. This study presents the first results of CYP1A1 and CYP2D6 mutant allele distributions in the Turkish population, and these data provide a basis for epidemiological studies correlating therapeutic approaches with the aetiology of several types of malignancy in Turkish patients.
|
import QtQuick 2.7
import QtQuick.Window 2.0
import QtQuick.Controls 2.0
Window{
width: 400
height: 300
visible: true
flags: Qt.Window | Qt.WindowStaysOnTopHint
Text{
anchors.top: parent.top
anchors.topMargin: 4
anchors.horizontalCenter: parent.horizontalCenter
text: qsTr("主页二") // "Home Page Two"
}
Button{
height: 32
width: 60
anchors.centerIn: parent
text: qsTr("下一页") // "Next Page"
onClicked: changePage();
}
Button{
height: 32
width: 32
text: "X"
anchors.right: parent.right
anchors.rightMargin: 4
anchors.top: parent.top
anchors.topMargin: 4
onClicked: {
Qt.quit()
}
}
Loader{
id: pageLoader
}
function changePage(){
pageLoader.source = "main.qml"
close();
}
}
|
Stabilizing lithium-sulphur cathodes using polysulphide reservoirs.
The possibility of achieving high-energy, long-life storage batteries has tremendous scientific and technological significance. An example is the Li-S cell, which can offer a 3-5-fold increase in energy density compared with conventional Li-ion cells, at lower cost. Despite significant advances, there are challenges to its wide-scale implementation, which include dissolution of intermediate polysulphide reaction species into the electrolyte. Here we report a new concept to mitigate the problem, which relies on the design principles of drug delivery. Our strategy employs absorption of the intermediate polysulphides by a porous silica embedded within the carbon-sulphur composite that not only absorbs the polysulphides by means of weak binding, but also permits reversible desorption and release. It functions as an internal polysulphide reservoir during the reversible electrochemical process to give rise to long-term stabilization and improved coulombic efficiency. The reservoir mechanism is general and applicable to Li/S cathodes of any nature.
|
Gulde, Mississippi
Gulde is an unincorporated community in Rankin County, Mississippi, United States.
Gulde was established as a flag station on the Alabama and Vicksburg Railway in 1858, and is said to be named for a railroad official.
The Gulde Church is located south of the settlement, and the Gulde Cemetery is north.
References
Category:Populated places in Rankin County, Mississippi
Category:Populated places in Mississippi
|
JACKSON, Miss. (WJTV) – The Jackson Zoo will be closed beginning October 1, 2019. According to Mayor Chokwe Antar Lumumba, the zoo will be undergoing renovations and a change in management.
The announcement was made at Thursday’s special city council meeting. However, no date was given for a reopening.
The mayor says $300,000 in renovations will take place during the closure. Some zoo employees will also be switched to contract status, which means they will lose health benefits; however, they will be paid during the closure, according to Mayor Lumumba.
This temporary closure comes nearly a month after council members approved $200,000 to help the Jackson Zoo stay afloat throughout the year.
Thursday’s meeting stretched on for hours. Council members also discussed an ordinance that prohibits protesting near health care facilities, as well as the city’s water billing and meter problems, on which the mayor says the city has been making progress.
|
Israel at the 1964 Summer Paralympics
Israel participated in the 1964 Summer Paralympics in Tokyo. 20 Israeli athletes won seven gold medals, three silver and eleven bronze, enabling their country to finish 7th on the medal table.
The Israeli delegation was composed of ten IDF veterans and ten athletes of the Israel Sports Center for the Disabled. Other excelling athletes were banned from participating due to indecent behavior. The delegation was headed by Mr. Arieh Fink, head of the rehabilitation department of the Israel Ministry of Defense, and accompanied by Mr. Gershon Huberman, director of the Israel Sports Center for the Disabled. Further members of the delegation were basketball coaches Shimon Shelah and Jacob Hendelsman, Mrs. Edna Medalia and two medical nurses.
In the 1964 Summer Paralympic Games, Israel participated for the first time in weightlifting, after Shalom Dalugatch met the qualifying criteria by breaking the record set at the previous Paralympics. Delegation member Israel Even-Sahav was the sole athlete, of all participating states, asked to take part in rehearsals for the opening ceremony.
The delegation's travel expenses were divided in accordance with its composition: The IDF veterans were sponsored by the Ministry of Defense, sports organizations, the Olympic Committee and the athletes themselves, while the Center's athletes were sponsored by Japanese parties, most prominently by businessman Saul Eisenberg.
The eldest member of the delegation was Michael Ben-Naftali (40) and the youngest Jacob Ben-Arie (14).
Israel was ranked 7th on the medal table, winning seven gold medals, three silver and eleven bronze, and setting three world records.
Medalists
Athletes
IDF veterans:
Michael Ben-Naftali
Shmuel Ben-Zakai
Zvi Ben-Zvi
Reuven Hebron
Simcha Lustig
Menachem Morba
Avraham Mushraki
Joseph Sharav
Yoel Singer
Dan Wagner
Israel Sports Center for the Disabled representatives:
Jacob Ben-Arie
Shalom Dalugatch
Michal Escapa
Israel Even-Sahav
Yitzhak Galitzki
Israel Globus
Baruch Hagai
Avraham Keftelovich
Batia Mishani
Zipora Rosenbaum
External links
International Paralympic Committee
Israeli Paralympic Committee
References
Category:Nations at the 1964 Summer Paralympics
1964
Paralympics
|
Did public health travel advice reach EURO 2012 football fans? A social network survey.
We posted a survey on the Union of European Football Associations (UEFA)’s EURO 2012 Facebook profile to evaluate whether public health travel advice, specifically on the importance of measles vaccination,reached fans attending EURO 2012. Responses suggested that these messages were missed by 77% of fans. Social networks could serve as innovative platforms to conduct surveys, enabling rapid access to target populations at low cost and could be of use during upcoming mass gatherings such as the Olympics.
|
Abdominoperineal resection following anterior resection.
A series of 11 patients undergoing abdominoperineal resection for "suture line recurrence" following anterior resection is presented. Five-year survival is 10%. Technically, the procedure is difficult and major problems are encountered, including large blood loss and ureteral complications. These patients had an inadequate distal margin of resection at the time of anterior resection. The survival of this group of patients underscores the importance of making the correct judgment about anterior or abdominoperineal resection at the time of the initial presentation of the patient. The phrase "suture line recurrence" is a misnomer; all of these patients had advanced pelvic malignancy. If the adequacy of the distal margin is questionable or a distal margin of 5 cm cannot be obtained safely at the time of anterior resection, abdominoperineal resection should be performed, as the opportunity to cure a recurrence is limited once this rule has been compromised.
|
The tubular articles of this invention are typically comprised of a thermally insulating yarn which may be supported by an inner tubular wire core. In a particular application, the articles are gaskets used, for example, as oven door seals.
Woven tubular articles have been used for seals for oven doors for many years. These articles are typically made from a combination of an inner tubular support member formed of knitted wire and an outer tubular member made either by braiding, knitting or weaving an insulating material such as fiberglass yarn. Such structures have proven to be durable at the high temperatures used in self cleaning ovens and provide a good seal despite repeated openings and closures of the oven door over many years of use.
Methods of attaching a tubular gasket to an oven or oven door surface have typically comprised providing a retaining member which extends along the gasket and locking the retaining member between sheet metal pieces of the oven or by providing clamps at spaced locations around the periphery of the gasket.
An alternative form of gasket, having attachment means comprised of a wire form with spaced attachment protrusions which fit into corresponding holes in the surface to which the gasket is to be attached, is shown in the prior art.
The present invention relates to an improvement in the fastening of gaskets to mounting surfaces and also to a novel resilient fastener which is simple to install onto a tubular gasket, easy to manufacture and effective in retaining the gasket to a support surface.
The present invention provides an improved clip having a head, with an apex, a pair of shoulders and a neck, and a base attached to the head. The base may comprise at least one coil course having a variable or constant radius of curvature.
Applicant's novel improvement further provides a one-piece resilient wire strand with a first portion defining a head and a second portion defining a base, the head and base perpendicular to one another, the head capable of protruding from the gasket while the base is engaged with an interior thereof.
Applicant's novel invention further comprises a sealing apparatus comprising a gasket and a clip engaged with the gasket, the clip having a head and a coiled base, the coiled base being perpendicular to the plane of the head and adapted to be engaged with the metal core of the gasket, being enclosed in an interior thereof.
The doors of many appliances, such as ovens, refrigerators, microwaves, etc., have flexible tubular gaskets around their perimeter for sealing and a variety of other reasons. See for example U.S. Pat. Nos. 4,986,033 and 4,822,060, the specifications and drawings of which are incorporated herein by reference and attached hereto.
|
When a black couple attempted to buy a house in suburban Brookfield in 2015, they were told they had to put down the entire down payment to buy the house as-is. When a white couple visited the same home, they were told they they could hold it with a $1,500 deposit, known as earnest money — and the owner was even willing to fix all the building code violations.
But the white couple were “testers,” volunteers that pose as potential buyers or renters to investigate housing discrimination. They only went to the home after the black couple contacted the HOPE Fair Housing Center in Wheaton.
HOPE was one of six Illinois organizations recently awarded money under a $1.5 million grant from the Department of Housing and Urban Development for testing and enforcement, but testing dates back to the 1960s housing campaign led by Martin Luther King Jr. in Chicago.
That’s the basis of what fair housing organizations do today. They send two people, similar except for one detail, to visit the same landlord or housing provider. The volunteers are trained to carefully observe how they’re treated and report back. A testing center might send a white man and a black man, a mother and a single woman, a person with a Section 8 housing voucher and a person without.
Testing cases can originate in a number of ways. An organization might conduct a random test in a gentrifying neighborhood or they might follow up with a specific complaint.
The Gold Coast has the highest rate of racial discrimination complaints in the city, according to a WBEZ analysis of federal fair housing data. Mallory told a John Marshall Law School professor about trying to rent an apartment in the downtown neighborhood. Her professor suspected she had been discriminated against and convinced her to go from student to client. The law school fair housing clinic took her case and targeted the Gold Coast apartment building for testing.
The testing was done to determine if the same apartment building engaged in racial discrimination. Mallory and Tiffany were clearly told different things. Testing is a tool that’s still relevant today.
Before the Fair Housing Act, racism was open and virulent. “No blacks allowed” signs were common. Today, it’s illegal to deny housing based on race, color, national origin, sex, religion, familial status, and disability. Additionally, in Chicago, it’s illegal to discriminate against someone for source of income, such as student loans or Section 8 housing vouchers.
Racism and other discrimination can be hard to prove. After Mallory left the building, she knew she hadn’t been treated fairly. It was a familiar feeling, but she shrugged it off. Testing confirmed that she had been discriminated against and allowed her to file a complaint.
“[Testing is] one way to confirm whether something happened and it makes sure we’re not filing frivolous lawsuits so we have a good leg to stand on when we go forward,” said Amrita Narasimhan, testing director at John Marshall Law School Fair Housing Legal Support Center & Clinic.
Mallory said she received a confidential settlement, and the employees of that apartment building were required to complete fair housing training. The couple in Brookfield received $120,000 in a 2017 settlement, and the real estate agent and owner of the house agreed to comply with the Fair Housing Act going forward.
Natalie Moore is WBEZ’s South Side reporter. You can follow her on Twitter at @natalieymoore. Illustrations by Paula Friedrich, WBEZ’s interactive producer. You can follow her on Twitter at @pauliebe.
|
A 4-month-old girl died Wednesday after she was left in a scorching hot van outside a Florida day care center for nearly five hours, authorities said.
Jacksonville police received a call around 1 p.m. from a day care employee who said she’d discovered the infant “still strapped in her child seat unresponsive,” the sheriff’s office said in a press release.
The employee had checked the van after the infant’s mother called the day care to make after-school arrangements, but the infant hadn’t been checked in that morning, the release said.
The girl, who had been left in the van since approximately 8:25 a.m., was rushed to a hospital where she was pronounced dead, the sheriff’s office said. Temperatures in Jacksonville on Wednesday had reached 92 degrees, according to reports.
BABY DIES AFTER BEING LEFT IN HOT CAR IN INDIANAPOLIS
The van’s driver and daycare co-owner, 56-year-old Darryl Ewing, was arrested and booked into jail on child neglect charges, the sheriff’s office said.
CLICK HERE TO GET THE FOX NEWS APP
Investigators said Ewing was responsible for maintaining a driver’s log documenting all of the children in the van. Ewing had logged two of the victim’s siblings, but not the victim, the release said. An investigation is ongoing.
|
// Copyright The OpenTelemetry Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package zipkin
import (
"errors"
"io"
"math/rand"
"testing"
zipkinmodel "github.com/openzipkin/zipkin-go/model"
"github.com/stretchr/testify/assert"
"go.opentelemetry.io/collector/consumer/pdata"
otlptrace "go.opentelemetry.io/collector/internal/data/opentelemetry-proto-gen/trace/v1"
"go.opentelemetry.io/collector/internal/data/testdata"
"go.opentelemetry.io/collector/internal/goldendataset"
)
func TestInternalTracesToZipkinSpans(t *testing.T) {
tests := []struct {
name string
td pdata.Traces
zs []*zipkinmodel.SpanModel
err error
}{
{
name: "empty",
td: testdata.GenerateTraceDataEmpty(),
err: nil,
},
{
name: "oneEmpty",
td: testdata.GenerateTraceDataOneEmptyResourceSpans(),
zs: make([]*zipkinmodel.SpanModel, 0),
err: nil,
},
{
name: "oneEmptyOneNil",
td: testdata.GenerateTraceDataOneEmptyOneNilResourceSpans(),
zs: make([]*zipkinmodel.SpanModel, 0),
err: nil,
},
{
name: "noLibs",
td: testdata.GenerateTraceDataNoLibraries(),
zs: make([]*zipkinmodel.SpanModel, 0),
err: nil,
},
{
name: "oneEmptyLib",
td: testdata.GenerateTraceDataOneEmptyInstrumentationLibrary(),
zs: make([]*zipkinmodel.SpanModel, 0),
err: nil,
},
{
name: "oneEmptyLibOneNilLib",
td: testdata.GenerateTraceDataOneEmptyOneNilInstrumentationLibrary(),
zs: make([]*zipkinmodel.SpanModel, 0),
err: nil,
},
{
name: "oneSpanNoResrouce",
td: testdata.GenerateTraceDataOneSpanNoResource(),
zs: make([]*zipkinmodel.SpanModel, 0),
err: errors.New("TraceID is nil"),
},
}
for _, test := range tests {
t.Run(test.name, func(t *testing.T) {
zss, err := InternalTracesToZipkinSpans(test.td)
assert.EqualValues(t, test.err, err)
if test.name == "empty" {
assert.Nil(t, zss)
} else {
assert.Equal(t, len(test.zs), len(zss))
assert.EqualValues(t, test.zs, zss)
}
})
}
}
func TestInternalTracesToZipkinSpansAndBack(t *testing.T) {
rscSpans, err := goldendataset.GenerateResourceSpans(
"../../../internal/goldendataset/testdata/generated_pict_pairs_traces.txt",
"../../../internal/goldendataset/testdata/generated_pict_pairs_spans.txt",
io.Reader(rand.New(rand.NewSource(2004))))
assert.NoError(t, err)
for _, rs := range rscSpans {
orig := make([]*otlptrace.ResourceSpans, 1)
orig[0] = rs
td := pdata.TracesFromOtlp(orig)
zipkinSpans, err := InternalTracesToZipkinSpans(td)
assert.NoError(t, err)
assert.Equal(t, td.SpanCount(), len(zipkinSpans))
tdFromZS, zErr := V2SpansToInternalTraces(zipkinSpans)
assert.NoError(t, zErr)
assert.NotNil(t, tdFromZS)
assert.Equal(t, td.SpanCount(), tdFromZS.SpanCount())
}
}
|
Real sexual health doesn’t come in a blue pill. Pills can mask problems for a while, but lasting vigour is possible with the use of herbs to reverse the effects of clogged arteries, depleted hormones, and low energy. Central to this plan for male renewal are the phytoandrogens - plant substances that naturally boost testosterone levels. This book tells you what herbs, foods and supplements to take and which to avoid.
|
Home/UBC Psychology professor Kiley Hamlin shares research on early moral cognition with the Dalai Lama
UBC Psychology professor Kiley Hamlin shares research on early moral cognition with the Dalai Lama
October 22, 2014
Prof. Kiley Hamlin took part in the sold-out event Educating the Heart in the Early Years: A Dialogue with the Dalai Lama on October 22, 2014 at UBC’s Chan Centre for Performing Arts.
Revered worldwide for his compassion, quick wit and intelligence, the Dalai Lama is one of UBC’s most distinguished Honorary Doctorates.
This unique dialogue featured a keynote address by the Dalai Lama and a panel of leading researchers from UBC who discussed the science behind the Dalai Lama’s belief that consciously teaching children to be compassionate and altruistic in their earliest years has a profoundly positive effect on their social, emotional and spiritual well-being throughout life.
Dr. Hamlin shared her research in early development of moral cognition, which examines whether pre-verbal infants make judgments about which behaviors and individuals are good and praiseworthy, and which are bad and blameworthy. Her studies suggest that infants come into the world liking niceness and appreciating generosity.
Photos courtesy of Martin Dee and Michael Krausz.
|
Thursday, November 8, 2007
On The Work Related Arena
I've heard tales of bloggers getting into trouble for writing about their work. Getting disciplined or sacked has been known to happen. It has even happened to a friend of mine. So I've decided, coward that I am (?), that this will not be a blog whereon I will indulge in office gossip, office politics, or negative criticism related to my work place, or the people who work here.
Is this because I appreciate the money my employer plans to fire my way every month for the next eleven months? Yes, indeed. Absolutely. I thought I’d make that clear now in case you wonder why I don’t write negatively about my employer in the future.
Like many a soul beneath our moon, I'm a wage slave. As such, given my inability to create money out of thin air; given, moreover, my unwillingness to seek to procure the means of habitation and sustenance by ways that circumvent the ‘money system’ in a manner commonly referred to as ‘crime’, it seems I've little choice but to exhibit, at least by failing to be critical, the outward signs of an inward gratitude to the sources of my income for that income. For the requirement of this dutiful deference, I do not blame my company, but rather look reproachfully at the worldwide system of mechanised wage-slavery of which it, and indeed I, form a part. The fact that I don't have a reliable source of private income of my own is also, of course, acutely relevant. As is the broader, wider, deeper fact that we have to use this absurd, abstract stuff called money in the first place.
Anyway, from now on, if I significantly disapprove of something at work I shall try, on this blog at least, to pass over it in silence. To be the recipients of my private work-related rants will be the function of my friends and family, if cause for this arises, and if I feel I will not bore them too much.
Maybe you are thinking: 'You're being over paranoid'. You may be right. But of course, I can’t know this. So I don't know this. On the other hand, I promise not to invent positive stories about my employer out of thin air. To be sycophantic towards my overlord is not my ambition.
By now you're perhaps supposing that I have something to be silent about; otherwise, why do I go to the trouble of the preceding paragraphs? As it happens, you'd be wrong. So far, beyond the normal, predictable ‘growing pains’ and ‘wriggling in’ nuisances associated with starting work in a new place, everything has been fine. Really, it has been. Certainly better than I expected. Ok, it would be nice if the internet worked as well in my office, at my own desk, as it works in the classroom, though even there it’s slower than it could be. And I would like to be driven to and from work each day in a car like some teachers are, instead of in our small, somewhat cramped bus. But apart from that, conditions have been very acceptable. The people I work with are at various places on the scale from fine to great – which is just as well, since my extra-work social life is struggling in its infancy.
Just as welcome are the students I’ve so far been lucky enough to teach. Before I came out to the Gulf I’d heard that Gulf Arabs are not a pleasure to teach; that they are lazy, unmotivated and just don’t care; that because they're soaking in the wealth and privilege of oil they don’t need to better themselves and don’t need to learn English to get a better job, which is so different from the situation for students in Slovakia. Maybe the account I'd heard was a worst case scenario. But I heard it nevertheless. It conditioned my expectations. It’s true, those Gulf Arabs about whom I’d heard this were University students, not employees of the Oil industry as my students are. Before I started here, I’d suspected that the already employed, who actually need English for their work, as Kuwaitis in the oil industry do, might turn out to be more motivated and serious.
I can't speak about Kuwaiti university students, however. Whatever the reason may be, mine are certainly motivated and keen (as well as punctual, which always helps). This is nice and means I don’t have to over-play the role of entertainer, or be an arouser of attention; or, on the other hand, feel that I ought to be following the utterly demeaning, lamentable path of that which we in the educational profession try to disguise as something other than what it is: discipline and correction.
They certainly like to ask a lot of questions. Luckily, six years teaching experience and the manageable challenges of intermediate grammar have allowed these questions to be a stimulus to the rhythm and flow of the lessons, not irksome or embarrassing. My students are assiduous about detail. They are very keen to understand everything as well as they can. I’ve tried to keep them speaking as much as possible. They’ve liked this, I'm fairly sure.
I have two women in my class and four men. One of my students is very religious, in that he wears the robes and beard of the Wahabbis. He only joined us recently. Though pleasant he has a more somber countenance than the others, who are jollier. I wasn’t sure if he’d like doing what we in the business call ‘pair work’ with the women, so I haven’t tried putting them together. To my agreeable surprise, however, the women haven’t minded interacting with the other men, though usually they prefer sitting together. In Oman the women and men in my friend's classes sat segregated in separate sides of the classroom. In Saudi all lessons are all male or all female.
All of my students are Kuwaitis, except for one Egyptian man, the most fastidious learner of all. I can honestly say, so far at least, that they've been a pleasure to teach. They’ve made me feel welcome in Kuwait and have given me lots of useful tips and information about life here.
The use of the so called ‘interactive white board’ has been very helpful. All the contents of the book can be readily displayed on a screen, which certainly helps. Even better, I no longer have to cue CD's and tapes for the listenings.
I teach from 8 until 10.35am daily with two ten-minute breaks. So not for very long, in other words. Indeed, I only teach 50% of my contracted hours, though this might change at any time. In fact, it might very well change next week. For the rest of the day I prepare lessons and have been looking over the exams. I have lunch, drink a lot of coffee and, when I’m not busy, make use of the internet facilities, which seem relatively unrestricted.
Some noteworthy aspects of work are:
We get brought coffee to our tables in the staff room by a Bangladeshi waiter
Indians do our photocopying for us. Though they are not always there to do it
As I enter and leave work, I pass my finger over a fingerprint machine
The canteen staff wear masks
The food is actually very good (in my opinion)
My students call me 'sir' - sometimes. They never did that in Slovakia.
A high proportion of the staff are British
Everyone I meet on the premises works for the oil industry in some capacity.
Today we had two false fire alarms. One was planned; the other went off of its own accord for mysterious, as yet uninvestigated, reasons.
I spend half an hour a day traveling to and from work - in a bus.
I always wear a tie
Now it's Thursday evening, which means the weekend has just begun. Until September 1st the weekend in Kuwait began on Wednesday evenings. Friday is the sacred day in Islam, so this was non-negotiable, but it was decided that, for business purposes, it would be wise not to continue losing two Western business days a week. The Emirates was the third of the Arabian Peninsula countries to make this shift, in September 2006, after Bahrain and Qatar. Kuwait is the fourth. By making these changes, these lands now line up with the weekend customs of Egypt, Lebanon, Syria and Jordan.
|
1. Introduction {#sec1}
===============
There are two basic molecular mechanisms of recognition of microorganisms by phagocytic cells: opsonin-dependent, and opsonin-independent. The former mechanism requires serum components, opsonins, which act by binding to the surface of the microorganisms at one end and to specific receptors on the phagocyte surface at the other. The best-known opsonins are Immunoglobulin G (IgG) which binds via its Fc domain to the Fc receptor (FcR) on the phagocytes, and the C3b and iC3b fragments of the C3 component of complement, which bind to the complement receptors CR1 and CR3, respectively, on the phagocyte surface. Many different types of bacteria interact with phagocytic cells in serum-free media *in vitro* in the absence of opsonins. Certain integrins (CR3 among others) serve as receptors for microbial surface ligands in nonopsonic phagocytosis \[[@B1]\].
IgG is the most abundant Ig class in serum, constituting over 75% of circulating immunoglobulin. It mediates key effector functions through interaction with Fc*γ* receptors. Fc*γ* receptors are divided generally into three main classes, Fc*γ*RI (CD64), Fc*γ*RII (CD32), and Fc*γ*RIII (CD16), each with distinct structural and functional properties. Fc*γ*RI is a high-affinity receptor for monomeric IgG (*K* ~*a*~: 10^9^--10^10^/M) with three extracellular Ig-like domains expressed constitutively by monocytes and macrophages, as well as by many myeloid progenitor cells. In contrast to Fc*γ*RI, the other two classes of Fc*γ* receptor, Fc*γ*RII and Fc*γ*RIII, display low affinity for monomeric IgG. They are capable of binding to aggregated IgG through multimeric low-affinity, high-avidity interactions, which are particularly important in the recognition and binding of antibody-antigen complexes during an immune response. IgG binding to low-affinity Fc*γ*R can trigger a range of effector and immunoregulatory functions, including degranulation, phagocytosis, and regulation of antibody production. Fc*γ*RII is expressed by diverse cell types: the Fc*γ*RIIa isoform by myeloid cells, including polymorphonuclear leucocytes, monocytes, macrophages, platelets, and certain types of endothelial cells, and Fc*γ*RIIb by B cells, monocytes, and macrophages, while Fc*γ*RIIc expression is restricted solely to natural killer (NK) cells. The other low-affinity Fc*γ* receptor, Fc*γ*RIII, exists in two isoforms. Although Fc*γ*RIIIa and Fc*γ*RIIIb share high levels of sequence homology, they exhibit distinct structural differences. Fc*γ*RIIIa is a transmembrane protein that associates with the FcR *γ*-chain, whereas Fc*γ*RIIIb is processed posttranslationally as a glycosylphosphatidylinositol- (GPI-) anchored protein, lacking transmembrane and intracellular domains. The Fc*γ*RIIIa isoform is expressed widely by several leucocyte cell types, including macrophages, NK cells, and subsets of T cells and monocytes, while Fc*γ*RIIIb is expressed constitutively only by neutrophils \[[@B2]\].
The receptors for complement molecules, designated complement receptors 1 (CD35) and 3 (CD11b), present on all phagocytic cells, are only weakly expressed on the surface of resting neutrophils and are mostly stored in intracellular granules (CD35 in secretory vesicles and CD11b in both specific and gelatinase granules and secretory vesicles). Secretory vesicles (SV) are the most likely to release contents via degranulation, followed by gelatinase, specific, and azurophil granules. In addition to exposure to proinflammatory cytokines, even relatively simple physical stress, different anticoagulant types, temperature changes, and isolation of leukocytes can trigger rapid degranulation of neutrophil SVs. Fusion of SVs with the plasma membrane leads to increased CD35 and CD11b levels at the cell surface \[[@B3]\].
Do infectious and other inflammatory diseases induce alterations in the expression of opsonin receptors on phagocytes?
2. Receptor Expression Measurements {#sec2}
===================================
Erythrocytes were lysed in anticoagulated blood by adding 10 volumes of 0.83% NH~4~Cl followed by 15 min of incubation at room temperature. Leukocytes were separated by centrifugation. Before the measurements of receptor expression, leukocytes (3 × 10^5^) were incubated in 50 *μ*L of gHBSS with monoclonal antibodies (0.4 *μ*g) in polystyrene flow cytometer vials for 30 min at +4°C. After incubation, the cells were washed once with cold gHBSS and resuspended in cold gHBSS. Leukocytes incubated with nonspecific mouse immunoglobulins served as controls for correction of leukocyte autofluorescence. A relative measure of receptor expression was obtained by determining the mean fluorescence intensity of 5000 leukocytes. In the case of neutrophil Fc*γ*RI, the percentage of fluorescence-positive cells (%) was also determined. In neutrophils, which express only a few Fc*γ*RI (MFI \< 4.0) on the cell membrane, the %-value varied between 5 and 70, which described the changes in expression levels better than the MFI. At high expression levels (MFI value \>4.0), the %-value was 95--100. When the %-value was 100 regardless of the activation state of the leukocyte (i.e., in the case of CR1, CR3, and Fc*γ*RII in neutrophils, monocytes, and eosinophils, Fc*γ*RI in monocytes, and Fc*γ*RIII in neutrophils), only MFI was presented. Measurement of leukocyte receptor expression was performed using fluorescently (FITC or PE) labelled receptor-specific monoclonal antibodies. The receptor panel studied by two-colour immunofluorescence analysis and the mAbs used are presented in [Table 1](#tab1){ref-type="table"}.
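As an illustration of the readout described above (not part of the original protocol; the positivity threshold, distributions, and all names are assumptions), a minimal Python sketch:

```python
# Autofluorescence-corrected MFI plus % of fluorescence-positive cells.
import numpy as np

def receptor_expression(sample_fl, control_fl):
    """Return (MFI, % positive) for one receptor in one leukocyte gate."""
    mfi = sample_fl.mean() - control_fl.mean()      # corrected MFI
    threshold = np.percentile(control_fl, 99)       # assumed positivity cutoff
    pct_positive = 100.0 * np.mean(sample_fl > threshold)
    return mfi, pct_positive

rng = np.random.default_rng(7)
sample = rng.gamma(shape=4.0, scale=2.0, size=5000)   # 5000 gated leukocytes
control = rng.gamma(shape=2.0, scale=1.5, size=5000)  # isotype-control stain
mfi, pct = receptor_expression(sample, control)
```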
3. Receptor Expression in Health and Diseases {#sec3}
=============================================
Earlier, we performed a few studies in which we measured receptor expression in various rather small patient groups \[[@B4]--[@B7]\]. The summary of the results from these studies is presented in [Table 2](#tab2){ref-type="table"}. In monocytes, all the receptors were upregulated in bacterial and viral infections. In neutrophils, CR1, CR3, Fc*γ*RI, and Fc*γ*RII were upregulated, while Fc*γ*RIII was downregulated in bacterial infections. CR1 and Fc*γ*RII were downregulated, while CR3 and Fc*γ*RI were upregulated in viral infections. These results led us to conclude that receptor expression could be used as a basis for the differential diagnosis of bacterial and viral infections.
4. Prospective Study {#sec4}
====================
In this study, standard clinical laboratory data (neutrophil count, serum C-reactive protein (CRP) level, and erythrocyte sedimentation rate (ESR)) and quantitative flow cytometric analysis of the neutrophil complement receptors CR1 and CR3, as well as Fc*γ*RI (CD64), were obtained from 292 hospitalized febrile patients. After microbiological confirmation or clinical diagnosis, 135 patients were found to have either bacterial (*n* = 89) or viral (*n* = 46) infection. The patient data were compared to those of 60 healthy controls. The grouping of the patients into subgroups is presented in [Figure 1](#fig1){ref-type="fig"}. The means of the parameters measured in the patient samples are presented in [Table 3](#tab3){ref-type="table"}. The average expression levels of CR1 and CR3 on neutrophils in bacterial infections were over threefold and twofold higher, respectively, compared with viral infections and controls. According to receiver operating characteristic (ROC) curve analysis, neutrophil CR1 displayed 92% sensitivity and 85% specificity in distinguishing between bacterial and viral infections ([Figure 2(a)](#fig2){ref-type="fig"}). Compared with other measured variables, such as neutrophil CR3, neutrophil count, CRP, and ESR, neutrophil CR1 had the most effective differential capacity. The lower diagnostic accuracy of CR3 compared with CR1 may be explained by the fact that CR3 is mobilized not only from rapidly releasing secretory vesicles, like CR1, but also from specific and gelatinase granules \[[@B8]\]. The differential capacity of CR1 and CR3 was lost when EDTA, instead of heparin, was used as an anticoagulant ([Table 3](#tab3){ref-type="table"}), owing to the depletion of extracellular calcium in the blood samples. The behaviour of CRP and ESR was similar to the expression of neutrophil CR1 in that they were significantly higher in bacterial than in viral infections. In addition to the measured variables, we defined a computational variable by multiplying the neutrophil count, the mean fluorescence intensity (MFI) of FITC-conjugated CR1-specific monoclonal antibodies on neutrophils, and the MFI of PE-conjugated CR3-specific monoclonal antibodies on neutrophils (= neutrophil count × relative number of CR1 on neutrophils × relative number of CR3 on neutrophils). The index obtained by taking the base-10 logarithm of this product represents the total number of neutrophil complement receptors per blood sample volume (TNCR index, [Table 3](#tab3){ref-type="table"}). The TNCR index has somewhat higher specificity (89% versus 85%) than neutrophil CR1 in distinguishing between bacterial and viral infections \[[@B9]\].
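A minimal sketch of the TNCR-index arithmetic described above (illustrative only; variable names and units are assumptions, with the neutrophil count taken as ×10^9^/L):

```python
import math

def tncr_index(neutrophil_count, cr1_mfi, cr3_mfi):
    """Base-10 logarithm of (neutrophil count x CR1 MFI x CR3 MFI).

    The product is proportional to the total number of neutrophil
    complement receptors per blood sample volume.
    """
    return math.log10(neutrophil_count * cr1_mfi * cr3_mfi)

# Example: 8.0 x 10^9 neutrophils/L, CR1 MFI 10.2, CR3 MFI 35.0
print(round(tncr_index(8.0, 10.2, 35.0), 2))  # 3.46, above the 3.4 cutoff
```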
5. Distinguishing between Bacterial and Viral Infections with the Clinical Infection Score (CIS) Point \[[@B9], [@B10]\] {#sec5}
========================================================================================================================
To determine whether the diagnostic yield of the measured individual variables increases upon combination, we estimated the clinical infection score (CIS) point consisting of four variables: CRP (ROC curve cutoff point = 77 mg/L), ESR (28 mm/h), mean amount of CR1 on neutrophils (MFI of 8.7), and the TNCR index (3.4). For every variable measured, a result less than the cutoff point was converted to a variable score point of 0, a result between the cutoff point and an additional second cutoff value (161 mg/L for CRP, 42 mm/h for ESR, MFI of 13.5 for CR1, and 3.9 for the TNCR index) was converted to a variable score point of 1, and a result greater than the additional second cutoff value was converted to a variable score point of 2 ([Figure 2(a)](#fig2){ref-type="fig"}). The additional second cutoff value of a variable was the maximum value detected in patients with viral infection. The one maximum viral value exceeding the average value of bacterial infection (epidemic nephropathy, ESR of 112 mm/h) was disregarded when the additional second cutoff values were determined. We obtained CIS points that varied between 0 and 8 by combining the variable scores ([Figure 2(b)](#fig2){ref-type="fig"}). At a cutoff point of \>2, the CIS points differentiated between microbiologically confirmed bacterial infection (*n* = 46) and viral infection (*n* = 38) with 98% sensitivity and 97% specificity \[[@B9]\].
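A minimal sketch of the CIS-point conversion described above (the cutoffs are taken directly from the text; the function and key names are assumptions):

```python
# Two cutoffs per variable: (ROC cutoff, maximum viral value).
CIS_CUTOFFS = {
    "crp": (77.0, 161.0),   # serum CRP, mg/L
    "esr": (28.0, 42.0),    # ESR, mm/h
    "cr1": (8.7, 13.5),     # neutrophil CR1, MFI
    "tncr": (3.4, 3.9),     # TNCR index
}

def variable_score(value, cutoffs):
    """0 below the first cutoff, 1 between the cutoffs, 2 above the second."""
    low, high = cutoffs
    if value < low:
        return 0
    return 1 if value <= high else 2

def cis_point(values):
    """Sum of the four variable scores; > 2 suggests bacterial infection."""
    return sum(variable_score(values[k], CIS_CUTOFFS[k]) for k in CIS_CUTOFFS)

print(cis_point({"crp": 120.0, "esr": 45.0, "cr1": 14.0, "tncr": 3.7}))  # 6
```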
6. Distinguishing between dsDNA and ssRNA Virus Infections with the DNA Virus Score (DNAVS) Point \[[@B11]\] {#sec6}
============================================================================================================
Similarly to the CIS point, we estimated the DNA virus score (DNAVS) point consisting of four variables: mean amount of CD64 on neutrophils (ROC curve cutoff point = MFI of 1.7), neutrophil CD64% (82%), percentage of lymphocytes (29%), and lymphocyte count (1.9 × 10^9^/L). For every variable measured, a result less than the cutoff point was converted to a variable score point of 0, a result between the cutoff point and an additional second cutoff value (MFI of 2.5 for CD64, 96% for neutrophil CD64%, 56% for percentage of lymphocytes, and 2.8 × 10^9^/L for lymphocyte count) was converted to a variable score point of 1, and a result greater than the additional second cutoff value was converted to a variable score point of 2. The additional second cutoff value of a variable was the maximum value detected in patients with ssRNA virus infection ([Figure 3](#fig3){ref-type="fig"}). After data conversion, we obtained a SUM that varied between 0 and 8 by adding the four variable score points together. Next, we defined the DNAVS point by multiplying the SUM, a CD64 factor (CF), and a haematopoietic factor (HF) (DNAVS point = SUM × CF × HF). A CF of 0.25 was used when the variable score point of both receptor variables was 0. If the variable score point of both haematopoietic variables was 0, then an HF of 0.5 was used. In all other cases, CF and HF were 1. At a cutoff point of ≥1.5, the DNAVS points differentiated between dsDNA and ssRNA virus infections with 95% sensitivity and 100% specificity \[[@B11]\].
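A corresponding sketch of the DNAVS-point logic (reusing `variable_score` from the CIS sketch above; names are illustrative):

```python
DNAVS_CUTOFFS = {
    "cd64_mfi": (1.7, 2.5),     # neutrophil CD64, MFI
    "cd64_pct": (82.0, 96.0),   # neutrophil CD64 %
    "lymph_pct": (29.0, 56.0),  # lymphocytes, % of leukocytes
    "lymph_cnt": (1.9, 2.8),    # lymphocyte count, x10^9/L
}

def dnavs_point(values):
    """SUM x CF x HF; >= 1.5 suggests a dsDNA (vs. ssRNA) virus infection."""
    scores = {k: variable_score(values[k], DNAVS_CUTOFFS[k])
              for k in DNAVS_CUTOFFS}
    cf = 0.25 if scores["cd64_mfi"] == 0 and scores["cd64_pct"] == 0 else 1.0
    hf = 0.5 if scores["lymph_pct"] == 0 and scores["lymph_cnt"] == 0 else 1.0
    return sum(scores.values()) * cf * hf
```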
7. Distinguishing between Bacterial Infections, Viral Infections and Inflammatory Diseases with the Analysis of Fc**γ**RI Expression \[[@B12]--[@B14]\] {#sec7}
=======================================================================================================================================================
The average number of Fc*γ*RI on the surfaces of both neutrophils and monocytes was significantly increased in patients with febrile viral and bacterial infections, compared to healthy controls. Furthermore, we describe a novel marker of febrile infection, designated "CD64 score point", which incorporates the quantitative analysis of Fc*γ*RI expressed on both neutrophils and monocytes, with 94% sensitivity and 98% specificity in distinguishing between febrile infections and healthy controls. By contrast, analysis of Fc*γ*RI expression on neutrophils and monocytes displayed poor sensitivity (73% and 52%) and specificity (65% and 52%) in distinguishing between bacterial and viral infections, and the levels did not differ significantly between systemic (sepsis), local, and clinically diagnosed bacterial infections. Thus, the increased number of Fc*γ*RI on neutrophils and monocytes is a useful marker of febrile infection but cannot be applied for differential diagnosis between bacterial and viral infections or between systemic and local bacterial infections \[[@B12]\].
As noted above, the expression of neutrophil CD35 is higher in bacterial than in viral infections. Neutrophil CD35-based differentiation between bacterial and viral infections can be improved by generating the CIS point. We further developed a CD64/CIS point bivariate dot-plot graph (Figures [4(a)](#fig4){ref-type="fig"}--[4(d)](#fig4){ref-type="fig"}), where the vertical and horizontal lines are set to represent the optimal cutoff points, an MFI value of 1.5 for neutrophil Fc*γ*RI and a value of 2.5 for the CIS point, respectively. The bivariate dot-plot graph can be divided into four quadrants: upper left quadrant (ULQ), upper right quadrant (URQ), lower left quadrant (LLQ), and lower right quadrant (LRQ). With this division, 92% of bacterial infections are located in the URQ, whereas viral infections are located in the LLQ (35%) or in the LRQ (61%). Inflammatory diseases were distributed across the LLQ (14%), ULQ (43%), and URQ (43%) \[[@B13]\].
8. Detecting Gram-Positive Sepsis \[[@B15]\] {#sec8}
============================================
In Gram-negative bacterial infection (*n* = 21), the average amount of CD11b on neutrophils was significantly higher than in Gram-positive bacterial infection (*n* = 22). Conversely, the CRP level was significantly higher in Gram-positive than in Gram-negative bacterial infection. By dividing the serum CRP value by the amount of CD11b on neutrophils, we derived a novel marker of Gram-positive sepsis, the CRP/CD11b ratio, which displayed 76% sensitivity and 80% specificity for the detection of Gram-positive sepsis (*n* = 17) among febrile patients with microbiologically confirmed or clinically diagnosed bacterial infection.
9. Conclusion {#sec9}
=============
Treating viral illnesses or noninfective causes of inflammation with antibiotics is ineffective and contributes to the development of antibiotic resistance, toxicity, and allergic reactions, leading to increasing medical costs. A major factor behind unnecessary use of antibiotics is, of course, incorrect diagnosis. For this reason, timely and accurate information on whether an infection is bacterial in origin would be highly beneficial in the fight against antibiotic resistance. The analysis of the expression levels of Fc*γ*RI, CR1, and CR3, along with CRP and ESR data, provides a novel application for the diagnosis of infectious and inflammatory diseases. The best clinical benefit from the quantitative analysis of these markers will be obtained when the individual variables are combined: the CIS point as a reliable bacterial infection marker, the DNAVS point for differentiating between DNA and RNA virus infections, the CD64 score point as a marker of febrile infection, and the CRP/CD11b ratio as a marker of Gram-positive sepsis.
{#fig1}
{#fig2}
{#fig3}
{#fig4}
###### Monoclonal antibodies used in receptor expression studies.

| Clone | Conjugate | Specificity | CD group | Isotype |
|------------------|-----------|-------------|----------|-------------|
| Fc*γ*Rs | | | | |
| 22 | FITC | Fc*γ*RI | CD64 | IgG mouse |
| 2E1 | PE | Fc*γ*RII | CD32 | IgG2a mouse |
| 3G8 | FITC | Fc*γ*RIII | CD16 | IgG1 mouse |
| CRs | | | | |
| J3D3 | FITC | CR1 | CD35 | IgG1 mouse |
| Bear1 | PE | CR3 | CD11b | IgG1 mouse |
| Isotype controls | | | | |
| 679.1Mc7 | FITC/PE | Irrelevant | --- | IgG1 mouse |
| U7.27 | PE | Irrelevant | --- | IgG2a mouse |
###### Receptor expression changes in various diseases compared to healthy controls.

| Receptor | Bacterial infection | Viral infection | Kidney cancer | Atopic dermatitis |
|-----------------|---------------------|-----------------|---------------|-------------------|
| Neutrophils | | | | |
| CR1/CD35 | +++ | (−) | no change | + |
| CR3/CD11b | +++ | + | ++ | (+) |
| Fc*γ*RI/CD64 | +++ | +++ | (+) | no change |
| Fc*γ*RII/CD32 | + | (−) | no change | (+) |
| Fc*γ*RIII/CD16 | (−) | no change | no change | (−) |
| Monocytes | | | | |
| CR1/CD35 | +++ | ++ | + | (+) |
| CR3/CD11b | +++ | ++ | +++ | (+) |
| Fc*γ*RI/CD64 | +++ | +++ | + | (−) |
| Fc*γ*RII/CD32 | (+) | no change | ++ | (+) |
The +/− without parentheses indicates a significant increase/decrease in the expression of receptor in question compared to healthy control.
The +/− in parentheses represents an insignificant increase/decrease in the expression of receptor in question compared to healthy control.
\+ or − = 0%--50% increase or decrease compared to healthy control, ++ = 50%--100% increase compared to healthy control, and +++ = more than 100% increase compared to healthy control.
###### Parameters measured in the patient material, expressed as mean (S.D.). Receptor expression data from both heparin- and EDTA-anticoagulated blood samples are presented.

| Variables | Bacterial (microbiologically confirmed) | Viral (microbiologically confirmed) | Healthy control | Bacterial (clinically diagnosed) | Viral (clinically diagnosed) |
|------------------|------------------------------------------|--------------------------------------|-----------------|----------------------------------|------------------------------|
| CRP (mg/L) | 232 (135) | 40 (41) | --- | 217 (103) | 43 (49) |
| ESR (mm/h) | 65 (28) | 19 (19) | --- | 69 (27) | 22 (15) |
| WBC (×10^9^/L) | 11 (4.9) | 7.7 (4.1) | 4.8 (1.3) | 9.8 (4.8) | 5.9 (1.4) |
| PMNL (%) | 71 (14) | 51 (22) | 51 (9.8) | 74 (13) | 49 (20) |
| PMNL (×10^9^/L) | 8.2 (3.6) | 3.5 (2.0) | 2.6 (0.9) | 7.5 (3.9) | 2.6 (1.0) |
| Heparin sample | | | | | |
| Neutrophil CR1 | 21 (9.9) | 5.7 (2.9) | 6.3 (2.2) | 20 (7.5) | 6.4 (3.3) |
| Neutrophil CR3 | 100 (51) | 54 (23) | 49 (18) | 104 (45) | 59 (35) |
| TNCR index | 4.1 (0.5) | 2.9 (0.5) | 2.8 (0.3) | 4.0 (0.5) | 2.9 (0.5) |
| CIS point | 6.2 (1.7) | 0.6 (1.0) | --- | 6.3 (1.9) | 0.6 (1.2) |
| EDTA sample | (*n* = 15) | (*n* = 6) | (*n* = 18) | | |
| Neutrophil CR1 | 8.3 (2.4) | 6.2 (2.8) | 4.8 (1.3) | --- | --- |
| Neutrophil CR3 | 34 (12) | 36 (11) | 28 (6.0) | --- | --- |
[^1]: Academic Editors: J. Blanco and C. D. Jun
|
The Ten Commandments is the most morally influential piece of legislation ever written. To give a good idea of how relevant each of the ten is, take the third commandment, one of the two most misunderstood commandments (the other is "Do not Murder," which I explained previously).
Is there such a thing as "the worst sin" -- one sin that is worse than all others?
In fact, there is.
I am aware that some people differ. They maintain that we can't declare any sin worse than any other. "To God, a sin is a sin," is how it's often expressed. In this view, a person who steals a stapler from the office is committing as grievous a sin in God's eyes as a murderer.
But most people intuitively, as well as biblically, understand that some sins are clearly worse than others. We are confident that God has at least as much common sense as we do. The God of Judaism and Christianity does not equate stealing an office item with murder.
So, then, what is the worst sin?
The worst sin is committing evil in God's name.
How do we know?
From the third of the Ten Commandments. This is the only one of the ten that states that God will not forgive a person who violates the commandment.
What does this commandment say?
It is most commonly translated as, "Do not take the name of the Lord thy God in vain. For the Lord will not hold guiltless" -- meaning "will not forgive" -- "whoever takes His name in vain."
Because of this translation, most people understandably think that the commandment forbids saying God's name for no good reason. So, something like, "God, did I have a rough day at work today!" violates the third commandment.
But that interpretation presents a real problem. It would mean that whereas God could forgive the violation of any of the other commandments -- dishonoring one's parents, stealing, adultery or even committing murder -- He would never forgive someone who said, "God, did I have a rough day at work today!"
Let's be honest: That would render God and the Ten Commandments morally incomprehensible.
As it happens, however, the commandment is not the problem. The problem is the translation. The Hebrew original doesn't say "Do not take;" it says "Do not carry." The Hebrew literally reads, "Do not carry the name of the Lord thy God in vain."
This is reflected in one of the most widely used new translations of the Bible, the New International Version, or NIV, which uses the word "misuse" rather than the word "take:"
"You shall not misuse the name of the Lord your God."
This is much closer to the original's intent.
What does it mean to "carry" or to "misuse" God's name? It means committing evil in God's name.
And that God will not forgive.
Why not?
When an irreligious person commits evil, it doesn't bring God and religion into disrepute. But when a religious person commits evil in God's name he destroys the greatest hope for goodness on earth -- belief in a God who demands goodness, and who morally judges people.
The Nazis and Communists were horrifically cruel mass murderers. But their evils only sullied their own names, not the name of God. But when religious people commit evil, especially in God's name, they are not only committing evil, they are doing terrible damage to the name of God.
In our time, there are, unfortunately, many examples of this. The evils committed by Islamists who torture, bomb, cut throats and mass murder -- all in the name of their God -- do terrible damage to the name of God.
It is not coincidental that what is called the New Atheism -- the immense eruption of atheist activism -- followed the 9/11 attack on America by Islamist terrorists. In fact, the most frequent argument against God and religion concerns evil committed in God's name -- whether it is done in the name of Allah today or was done in the past in the name of Christ.
People who murder in the name of God not only kill their victims, they kill God, too.
That's why the greatest sin is religious evil.
That's what the third commandment is there to teach: Don't carry God's name in vain. If you do, God won't forgive you.
You can see this commentary, animated with text and graphics, at www.prageruniversity.com. It was released, along with the other nine commandments, this month.
I wish my readers a Merry Christmas and a Happy Hanukkah. And remember, just as evil in God's name is atheism's best friend, goodness in God's name is theism's best friend. So make a donation to the Salvation Army. They do immense good in God's name. There is a red kettle at my website http://www.dennisprager.com.
|
Adaptation by older individuals repeatedly exposed to 0.45 parts per million ozone for two hours.
To test for an increased reaction to ozone (O3) in older individuals following an initial exposure, and to test for adaptation and its duration, we exposed 10 men and 6 women (60-89 years old) in an environmental chamber to filtered air and 3 consecutive days of O3 exposure (0.45 ppm), followed by a fourth O3 exposure day after a two-day hiatus. Subjects alternated 20-min exercise (minute ventilation = 27 L/min) and rest periods for 2 hours during each exposure. Subjects rated 16 possible respiratory/exercise symptoms on a scale of one to five prior to and following each exposure. Pulmonary function tests were performed before the exposure, during each rest period, and following the exposure. Metabolic measurements were obtained during each exercise period. No significant changes in any symptom question occurred, in spite of a threefold increase in the total number of reported symptoms during O3 exposure. Small but significant pre-to-post decrements on the first and second O3 days in forced vital capacity (FVC: 111 and 104 mL), forced expiratory volume in 1 second (FEV1: 171 and 164 mL) and in 3 seconds (FEV3: 185 and 172 mL) occurred without concomitant changes in any flow parameter of the forced expiratory maneuver. No differences in the group mean response in FVC, FEV1 or FEV3 were found between the third or fourth day of O3 exposure and the filtered air exposure. The observed changes were due to significant physiological changes in eight of the subjects. Unlike young subjects, no evidence of an increased pulmonary function response to a second consecutive O3 exposure was observed.
|
Saw VI Red Band Puts Your Friends on the Wheel of Death!
The red band clip from Lionsgate’s Saw VI, which premiered exclusively on mobile phones at the San Diego Comic-Con this past July, has returned over on the official Saw VI website along with a bit more gore-soaked fun!
When you visit the site, just click on the “Wheel of Death” option and dig! But that’s not all! Here are the full details of the “Wheel of Death”: “Watch an exclusive red band clip from the film and then via Facebook Connect choose 6 of your friends to place on the wheel of death. But beware – you will have some tough choices ahead of you…”
Official Synopsis: Special Agent Strahm is dead, and Detective Hoffman has emerged as the unchallenged successor to Jigsaw’s legacy. However, when the FBI draws closer to Hoffman, he is forced to set a game into motion, and Jigsaw’s grand scheme is finally understood.
Dig on the trailer below, and look for the film in theatres on October 23rd.
|
Q:
How to add onClick event on a dynamically created button
I am trying to append a block of elements every time I receive a message.
It is basically a simple chat, so every time a new chat message is received, it is appended to the div.
The problem is that I'm not able to add an onClick event to the button inside that block.
This is the line: <button class="likeBtn" onClick={this.pointButton}><i class="far fa-heart"></i></button>
let msgLeft = document.getElementById("msg-left");
if (this.state.username === data.username) {
msgLeft.insertAdjacentHTML(
"beforeend",
`<div class="main-c-container-right">
<div class="c-image"></div>
<div class="c-text-container">
<div class="c-name">
${data.username}
</div>
<div class="c-text">
<div class="c-pointer"></div>
${data.message.replace(/\n/g, "<br>")}
</div>
<div class="c-time">
${currentTime}
</div>
<button class="likeBtn" onClick={this.pointButton}><i class="far fa-heart"></i></button>
</div>
</div>`
);
}
A:
Basically what you need to do is hold the array of messages somewhere (local state, redux, etc.) and render that array in your JSX.
When you need to add a new element, you just update that state with the new element and React will know how to re-render everything again.
You don't need to manipulate the DOM directly while using React.
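A minimal sketch of that approach (the message shape and the pointButton handler are assumptions based on your markup):

class Chat extends React.Component {
  state = { messages: [] };

  // call this whenever a new message arrives (e.g. from your socket)
  addMessage = data => {
    this.setState(prev => ({ messages: [...prev.messages, data] }));
  };

  pointButton = msg => {
    console.log("liked a message from", msg.username);
  };

  render() {
    return (
      <div id="msg-left">
        {this.state.messages.map((msg, i) => (
          <div className="main-c-container-right" key={i}>
            <div className="c-name">{msg.username}</div>
            <div className="c-text">{msg.message}</div>
            <div className="c-time">{msg.time}</div>
            {/* onClick works here because this is a React element, not an HTML string */}
            <button className="likeBtn" onClick={() => this.pointButton(msg)}>
              <i className="far fa-heart" />
            </button>
          </div>
        ))}
      </div>
    );
  }
}

This way every button gets a real React event handler, instead of the inert onClick text that insertAdjacentHTML inserts.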
|
Q:
mongo query is not working properly on linux server PHP mongo linux
I have a mongo query which works nice on my local machine on windows
but on the server, which uses linux the same query is not working
By is not working I mean that it executes correctly but is not able to find the data that corresponds to this criteria.
MDB::alloc()->{COLL_wall}->remove(
array(
'_id' => new MongoId($wid),
'$or' => array(
array(wall_owner => $this->id),
array(wall_writter => $this->id)
),
wall_owner => $wallOwner
),
array(
'safe' => true
)
);
what can be the problem?
A:
Most probably the problem is that the MongoDB version on your Linux server is much older than the one on your Windows machine, so the $or operator is not supported there ($or was only introduced in MongoDB 1.6).
Check both versions and upgrade if necessary.
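One quick way to check the server version: in the mongo shell you can simply run db.version(); from PHP, with the legacy Mongo driver your code implies, something like this should work:

<?php
$m = new Mongo();
$info = $m->selectDB('admin')->command(array('buildinfo' => 1));
echo $info['version'];  // $or requires MongoDB 1.6 or newer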
|
95 Mich. App. 462 (1980)
291 N.W.2d 82
PEOPLE
v.
BLACKMON
Docket No. 78-3940.
Michigan Court of Appeals.
Decided February 20, 1980.
Frank J. Kelley, Attorney General, Robert A. Derengoski, Solicitor General, William L. Cahalan, Prosecuting Attorney, Edward Reilly Wilson, Principal *463 Attorney, Appeals, and Frank J. Bernacki, Assistant Prosecuting Attorney, for the people.
Carl Ziemba, for defendant on appeal.
Before: M.F. CAVANAGH, P.J., and M.J. KELLY and D.S. DeWITT,[*] JJ.
D.S. DeWITT, J.
On July 19, 1978, defendant, Aaron Tyrone Blackmon, was found guilty, in a bench trial, of assault with intent to do great bodily harm less than murder, MCL 750.84; MSA 28.279, and possession of a firearm in the commission of a felony (felony-firearm), MCL 750.227(b); MSA 28.424(2). Defendant was sentenced on August 3, 1978, to a term of imprisonment for five to ten years on the assault conviction, to be served consecutively with a two-year mandatory term of imprisonment on the felony-firearm conviction. Defendant appeals as of right and raises the issue of whether his waiver of trial by jury was valid and binding.
Michigan law specifies the circumstances under which a defendant may effectively waive his right to a jury trial. MCL 763.3; MSA 28.856. The waiver must be in a writing, signed by the defendant, and must be made a part of the record of defendant's case. In addition, defendant's waiver "must be made in open court after the said defendant has been arraigned and has had opportunity to consult with counsel". MCL 763.3; MSA 28.856.
An examination of the lower court file indicates that the defendant executed a written "Waiver of Trial By Jury" form on the same date on which defendant's trial commenced, July 18, 1978. The transcript of the trial proceedings further discloses *464 that the following exchange ensued between the court and defendant's counsel, Ira H. Harris:
"THE COURT: Have you filed a written waiver?
"MR. HARRIS: Yes, your Honor.
"THE COURT: That firearm count is on this case?
"MR. HARRIS: Yes, your Honor.
"THE COURT: All right, because I don't know anything about this case and that's the way it should be.
"MR. HARRIS: Very well.
"THE COURT: But I'm always concerned about that firearm count. This case has been pretrialed and everything, you tried to work out anything?
"MR. HARRIS: People offered no reduced plea.
"THE COURT: All right. Are you ready to proceed?
"MR. PEARL: Yes, your Honor.
"THE COURT: You make your opening statement."
The question before this Court is whether the statutory requirement of a waiver "made in open court" is met by evidence on the record that a written waiver was executed by the defendant on the date of trial and was referred to by defense counsel as filed in response to the court's inquiry in that regard. We hold that the above facts do not constitute sufficient compliance with the statutory direction.
In People v McKaig, 89 Mich App 746; 282 NW2d 209 (1979), two members of a panel of this Court recently stated the rule that a valid waiver does not require an oral acknowledgment where it is apparent that the waiver was made in open court. Id, 750. We agree with the McKaig rule as a general proposition but disagree with that Court's conclusion that it is applicable to cases where the record shows merely that a written waiver was executed on the date of defendant's trial. Here, as in McKaig, defendant was represented by an attorney *465 and made no claim that "he did not sign the waiver in open court". Id. We are unsatisfied, however, that these facts rise to the level of an acknowledgment in open court of a waiver of the fundamental right to jury trial. Acknowledgement before a deputy clerk of the court, which is the customary procedure utilized in executing a written jury trial waiver,[1] is not the equivalent of acknowledgement before a judge for purposes of complying with the statute's "open court" requirement.
While other trial records lacking an oral acknowledgement may nevertheless contain sufficient additional evidence that a waiver was executed in open court, those facts are not presently before this Court. In the interest of insuring that waivers are properly executed and acknowledged, however, we reiterate the advice offered in McKaig:
"We encourage trial judges to supplement the written waiver with an oral acknowledgement by the defendant. This practice eliminates any doubt as to whether or not the waiver was made in open court." McKaig, supra, 750-751.
Defendant's other contention of error merits no discussion.
Reversed and remanded for a new trial.
NOTES
[*] Circuit judge, sitting on the Court of Appeals by assignment.
[1] The waiver in this case was on a printed form containing a printed provision for acknowledgment before a deputy clerk.
|
This week the Casual Cinecast reviews William Goldman and Rob Reiner's beloved classic film, The Princess Bride! As always, with our Casually Criterion episodes, we discuss the upcoming Criterion Collection announcements. For this episode, we go over the titles being released in May 2019, & more!
On our last Twitter poll, users voted on which Criterion film we should review next. At the time, it was close to Valentine's day, so the theme was "love". The winner of that poll was none other than the 1987 cult-classic, The Princess Bride (Spine #948), written by William Goldman and directed by Rob Reiner. The film features an all-star cast including Robin Wright, Cary Elwes, Mandy Patinkin, Andre the Giant, & more! Does this classic stand the test of time?
|
Incidence and effects of West Nile virus infection in vaccinated and unvaccinated horses in California.
A prospective cohort study was used to estimate the incidence of West Nile virus (WNV) infection in a group of unvaccinated horses (n = 37) in California and compare the effects of natural WNV infection in these unvaccinated horses to a group of co-mingled vaccinated horses (n = 155). Horses initially were vaccinated with either inactivated whole virus (n = 87) or canarypox recombinant (n = 68) WNV vaccines during 2003 or 2004, prior to emergence of WNV in the region. Unvaccinated horses were serologically tested for antibodies to WNV by microsphere immunoassay incorporating recombinant WNV E protein (rE MIA) in December 2003, December 2004, and every two months thereafter until November 2005. Clinical neurologic disease attributable to WNV infection (West Nile disease (WND)) developed in 2 (5.4%) of 37 unvaccinated horses and in 0 of 155 vaccinated horses. One affected horse died. Twenty one (67.7%) of 31 unvaccinated horses that were seronegative to WNV in December, 2004 seroconverted to WNV before the end of the study in November, 2005. Findings from the study indicate that currently-available commercial vaccines are effective in preventing WND and their use is financially justified because clinical disease only occurred in unvaccinated horses and the mean cost of each clinical case of WND was approximately 45 times the cost of a 2-dose WNV vaccination program.
|
Category: What Is Trail Running
When it’s the medical transportation business another industry, in the event the info you’re gathering is absolutely free, it’s likely going to cost you the most! Actually, in other words, the medical transportation market is exploding! There are many explanations as to why the medical industry keeps growing exponentially to include technological advances, population development, […]
Contrary to what a lot of people may believe, hypnosis isn’t some mind control technique. Bear in mind that the purpose of every type of hypnosis is to communicate with a different individual’s unconscious mind. Some learn conversational hypnosis to just impress family and friends. Conversational Hypnosis is an art that is chiefly composed of […]
A superb angler respects our natural resources and wishes to conserve them for others to relish. If you’re a new fisherman, then you likely do not have a good deal of pole casting experience. Fishing stipulates all types of possibilities for blogs. Falling through the ice: if you’re out with a buddy ice fishing and […]
Treatment for mouth sores varies dependent on the reason, but a lot of them require only time to heal. Although suppressive treatment significantly lessens the possibility of passing HSV to a partner, there’s still a risk. The absolute most effective treatment to lessen the length of a cold sore is prescription medication. Factors like high […]
When you’re in truth, you truly feel spiritually guided. Your faith can’t waver for a single second. You need to see that faith is always dependent on the promises of God. Finding out how to pray effectively is much less complicated as you may think. Praying truly is the main activity on mission. A berakhah […]
Be conscious of the surface that you set the train tracks upon. Toy trains are available in all shapes and sizes, with unique styles, art and brands to pick from. Like the deluxe Brio sets above, it might easily be the sole train set you ever will need to purchase your youngster. There are various […]
If you wish to understand how to pull men, all you should do is exude your feminine side whilst talking to them. It’s incredible how well men perceive you whenever you smile more. Understanding how to to draw men is actually easy. Attracting men is only the very first step. Men aren’t scared of relationships. […]
Pull off to a side if you give up to have a look at a map or a view so other individuals can acquire past. Rest assured is that whatever distance you select, Mohican volunteers will be present to help you complete your trip. The 50K distance was dropped from the list of feasible distances. […]
At times the runners are known as harriers (dogs). However, for a growing number of runners, marathons are now inadequate. There’s a half marathon and a 10km too. The Race couldn’t be conducted without great volunteers! Several decades past, ultra races were confined to just a couple. Caffeine is the most frequently consumed non-nutritional drug […]
Perth’s coast is built for runners and even when you hate running on sand you’re able to follow the footpaths and run for up to 25 kilometres (or no more than one!) Just twenty minutes west of downtown Knoxville is among the best kept secrets in the region. It is among the few parks in […]
|
Paternity
Simply put, paternity means fatherhood. Establishing paternity gives a child a legal father. It also gives the father both rights and obligations related to helping take care of his child. It is important for the child to know who they are. By knowing both parents, a child gains a sense of identity and belonging. Both parents have the right to establish a healthy relationship with their child(ren) and a responsibility to care for their child(ren). Making the relationship legal from the beginning provides a greater opportunity for a healthy relationship and insures the father's rights to a relationship with his child. Legal fathers have all of the same parental rights and responsibilities as the mother, including the right to seek custody or parenting time.
Parents and their children should know about potentially inherited health problems. Establishing paternity provides the child a greater likelihood of having access to this information. In addition, establishing paternity is the first step in making plans to provide the financial support a child needs.
With legal paternity established, the child will have access to:
Social Security dependent or survivor benefits
Inheritance rights
Veteran's benefits
Life and health insurance benefits
How is Paternity Established?
A man is presumed to be a child's legal father if;
He and his wife are married when the child is born, or
If the child is born no later than 300 days after the marriage ends
In all other cases, legal paternity must be established in one of two ways;
Paternity Affidavit, or
Court Order
A paternity affidavit is a legal document that permits a man and a woman to declare, under penalty of perjury, that the man is the biological father of a child. A properly executed paternity affidavit establishes legal paternity (fatherhood) and parental rights and responsibilities, without the necessity of obtaining a court order. A paternity affidavit may be completed at the hospital within 72 hours of the child's birth or at your local health department any time before the child is emancipated. If paternity is established by paternity affidavit, the Department of Health will add the father's name to the child's birth certificate.
The second way paternity can be established is by an order from the court. Either parent may file an action in an appropriate Indiana court seeking determination of paternity. The county prosecutor's office may also file an action if the case is a Title IV-D support case. After the action is filed, the court will set a hearing date and notice will be provided to both parties. At the hearing, the parties may agree to paternity without the benefit of genetic testing, request genetic testing to determine paternity, or the court may hear evidence and make a decision about whether or not paternity should be established. If genetic testing is ordered by the court, the parties will be tested and the court will hold off on deciding the issue of paternity until the genetic testing results are available to the court. Either parent or the county prosecutor's Title IV-D child support office may request genetic testing, commonly known as DNA testing.
For more information about different ways of establishing paternity and its importance to the child, please click on the following links:
For assistance with establishing paternity, contact the local county health department or the local county IV-D prosecutor's office. A private attorney may also be able to assist in establishing paternity.
For sample Paternity Affidavit forms, click on the following links:
Hospital Paternity Affidavit Form (to be completed by the Hospital/Birthing Center only)
A legal DNA test follows a Chain of Custody documentation process to ensure you receive accurate and legally defensible results. Genetic testing must be performed by an accredited laboratory. A home paternity test would not be admissible for legal purposes.
This list of accredited laboratories located throughout Indiana is provided as a courtesy only and is not a list of laboratories recommended or endorsed by the Child Support Bureau.
Please contact laboratories directly for current pricing and test locations.
|
Arterial, portal or combined arterio-portal regional chemotherapy in experimental liver tumours?
The most appropriate route for regional administration of chemotherapeutic drugs to liver tumours was studied in a standardized rodent model: cells of Novikoff hepatoma were transplanted into the central liver lobe of Sprague-Dawley rats. From day 5 to day 12 after transplantation, the liver was continuously perfused with 420 mg/kg 5'-fluoro-2-deoxyuridine by subcutaneous osmotic micropumps via the hepatic artery (n = 20), the portal vein (n = 20) or both vessels together (n = 12). The tumour multiplication factor (TMF) and the vascularization of the tumour were evaluated. Arterial and combined infusion led to a highly significant reduction in TMF, but combined infusion was not more effective than arterial alone. Portal infusion had no significant effect. There was no correlation between vascularization and tumour response in arterial infusion, but a strong correlation in portal infusion. Thus chemotherapy via the portal route may be effective in selected tumours with considerable portal vascularization.
|
---
abstract: 'We investigate blending, binarity and photometric biases in crowded-field CCD imaging. For this, we consider random blend losses, which correspond to the total number of stars left undetected in unresolved blends. We present a simple formula to estimate blend losses, which can be converted to apparent magnitude biases using the luminosity function of the analyzed sample. Because of the assumptions used, our results give lower limits of the total bias, and we show that in some cases even these limits point toward significant limitations in measuring apparent brightnesses of “standard candle” stars, and thus distances to nearby galaxies. A special application is presented for the OGLE-II $BVI$ maps of the Large Magellanic Cloud. We find a previously neglected systematic bias up to $0\fm2$–$0\fm3$ for faint stars ($V\sim18\fm0-19\fm0$) in the OGLE-II sample, which affects LMC distance measurements using RR Lyrae and red clump stars. We also consider the effects of intrinsic stellar correlations, i.e. binarity, via calculating two-point correlation functions for stellar fields around seven recently exploded classical novae. In two cases, for V1494 Aql and V705 Cas, the reported close optical companions seem to be physically correlated with the cataclysmic systems. Finally, we find significant blend frequencies up to 50–60% in the samples of wide-field exoplanetary surveys, which suggests that blending calculations should be included in the regular reduction procedure.'
author:
- |
L. L. Kiss[^1][^2] & T. R. Bedding\
\
School of Physics, University of Sydney 2006, Australia
date: 'Accepted ... Received ..; in original form ..'
title: 'Photometric biases due to stellar blending: implications for measuring distances, constraining binarity and detecting exoplanetary transits '
---
techniques: photometric – methods: statistical – binaries: eclipsing – binaries: visual – stars: oscillations – stars: planetary systems – stars: novae, cataclysmic variables
Introduction
============
Classical crowded field photometry attempts to detect and measure brightnesses of individual stars that are heavily affected by the presence of close neighbours. For ground-based observations, crowding depends on the angular density of objects and the atmospheric seeing conditions. Stellar blending (unresolved imaging of overlapping stars) can be a significant component of the total ambiguity known as the confusion noise – see Takeuchi & Ishii (2004) for an excellent historic review and a general formulation of the source confusion statistics. Recent studies for which blending was important include measuring: the luminosity function of individually undetectable faint stars (Snel 1998); the extragalactic Cepheid period-luminosity relation (Mochejska et al. 2000, Ferrarese et al. 2000, Gibson et al. 2000); and Cepheid light curve parameters (Antonello 2002). Extensive investigations can also be found about blending and microlensing surveys (Alard 1997; Wozniak & Paczynski 1997; Han 1997, 1998; Alcock et al. 2001). Stellar blending in general is difficult to model, because significant contribution may be due to physical companions, which are common among young stars, including Cepheids (Harris & Zaritsky 1999, Mochejska et al. 2000).
Here we attempt to determine the effects of random blending with a new approach that includes corrections for the excess of double stars. This work was motivated by recent cases in which stellar blending played a degrading role. For instance, wide-field photometric surveys of the galactic field are characterized by confusion radii of 10-20$^{\prime\prime}$ (Brown 2003) and can suffer from strong blending, even in regions far from the galactic plane. Other examples include the presence of a close optical companion of the classical nova V1494 Aql (at a separation of about $1\farcs5$), which heavily affected late light curves of the eclipsing system (Kiss et al. 2004).
Inspired by these problems and the availability of deep, all-sky star catalogues like the USNO B1.0 (Monet et al. 2003), we decided to perform simple calculations in terms of observational parameters such as the confusion radius – constrained by the seeing or the pixel size of the detector – and stellar angular density. To emphasize the importance of the problem, we present magnitude and amplitude biases for unresolved blends of $\Delta m$=0, 1, 2 and 3 mag in Table \[biases\]. Though fairly trivial, the numbers clearly show that even for $\Delta m$=3 mag, blending can affect brightness and variability information to a highly significant extent. Also, these numbers span the magnitude ranges in which we are interested: usually 3 to 5 mag wide samples are considered for the chance of random blending. This is a different approach compared to the case of general blending, which includes very faint blends, too. For instance, Han (1997) showed that for a typical $1\farcs5$ seeing disk towards the galactic Bulge, model luminosity functions of the Milky Way predict $\sim$36 stars within that area; of these, only 0.75% are expected to be brighter than 18 mag. Obviously, those faint blends do not affect photometry of bright stars. This study focuses on a much more specific problem: for a given field of view and confusion radius, what can be derived about random blending probabilities from the observed stellar angular distributions? Can we assign systematic errors based on these probabilities?
The paper is organized as follows. In Sect. 2 we discuss blending in random stellar fields. We present a simple formula to estimate random blend losses, which has been tested by extensive simulations. We also discuss the effects of visual double stars by determining the two-point correlation. As an application, in Sect. 3 we investigate probable photometric biases in deep OGLE-II $BVI$ observations of the Large Magellanic Cloud (Udalski et al. 2000). In Sect. 4 we analyse stellar fields around seven recently erupted classical novae, all located in densely populated regions near to the galactic plane. We investigate blending rates in photometric survey programs HAT (Bakos et al. 2002, 2004), STARE[^3] (Brown 2003, Alonso et al. 2003) and ASAS (Pojmanski 2002) in Sect. 5. Concluding remarks are given in Sect. 6.
unblended $\Delta m=0$ $\Delta m=1$ $\Delta m=2$ $\Delta m=3$
--------------- ----------- -------------- -------------- -------------- --------------
$m_{\rm obs}$   0.00        $-$0.75        $-$0.36        $-$0.16        $-$0.07
$A_{\rm obs}$   1.00        0.39           0.61           0.80           0.91
$A_{\rm obs}$   0.10        0.05           0.07           0.09           $\sim$0.10
$A_{\rm obs}$   0.01        0.005          0.007          0.009          $\sim$0.01
: Biases in apparent magnitude ($m_{\rm obs}$) and amplitude of variation ($A_{\rm obs}$) for blending stars of magnitude difference $\Delta m$.[]{data-label="biases"}
Random blending and binarity: basic relations
=============================================
Hereafter we distinguish two cases that need different approaches:
[**Case 1:**]{} There is only one dataset to analyse for blending probability, characterized by a typical confusion radius; no additional information exists based on a catalogue or high-resolution imaging with much smaller confusion radius.
[**Case 2:**]{} We can compare the observations with an additional source of data, which can be considered as unbiased by random blending.
Case 1 refers to those studies where the completeness of a catalogue is investigated or where the observations were deeper than any existing catalogue. Examples discussed in this paper include the deep OGLE-II $BVI$ map of the Large Magellanic Cloud and galactic novae in the USNO B1.0 catalogue. Case 2 corresponds to the exoplanetary surveys with very large confusion radii (up to 20$^{\prime\prime}$), which can be well characterized using whole-sky star catalogues with an order of magnitude better angular resolution. Case 2 is much simpler because one can always check whether an interesting object is a blend or not. However, if ensemble properties of large groups of stars are considered, blending must be taken into account because high fractions of stars are in fact blends when observed at a large confusion radius.
Random blend loss
-----------------
The idea behind our calculations is the following. When taking an image of a very crowded field, the detection efficiency is limited by the confusion radius ($r_c$), which is the smallest angular distance between two resolvable stars. If the distance between two neighbours is smaller than $r_c$, we detect only one object. This means the number of detected stars, $N_d$, will be smaller than $N$ by the number of objects in an area $\delta S=\pi r_c^2$. The difference $N-N_d$ is the blend loss. Because of this, the detected “stars” will appear to be brighter than they really are. Our aim is to estimate the number of stars lost due to random merging and the corresponding mean magnitude biases.
Let us consider a sample of $N$ single stars spread randomly over a field with area $S$. The mean number of stars within an area $\delta S$ is
$$\delta N = N ~{\delta S \over S}=n ~\delta S,$$
where $n$ is the number density. If we assume that blend losses are predominantly due to close pairs of stars then the mean number of close neighbours within $\delta S$ can be expressed as ${{\textstyle\frac{1}{2}}}n\delta S$. In other words, we lose on average ${{\textstyle\frac{1}{2}}}n \delta S$ stars for every $\delta S$ area element that has been found to contain a star. Consequently, for a detected set of $N_d$ stars, the total blend loss will be
$$N-N_d={{\textstyle\frac{1}{2}}}N_d~N~{\delta S \over S}$$
Rearranging this and introducing $x=\delta S/S$ give the estimated total number of objects:
$$N={N_d \over 1-{{\textstyle\frac{1}{2}}}N_d~x}.$$
Note that $\frac{1}{x}$ can be thought of as the number of resolution elements in the image.
Monte Carlo simulations
-----------------------
Equation 3 allows us to estimate the actual number of objects in our image, based on the number we have detected and the size of our resolution elements. We tested Eq. 3 with Monte Carlo simulations. Two different survey areas were chosen to mimic real observations: 0.1 square degree with 1,000 objects and 1 square degree with 10,000 objects. This way the surface density was kept constant and the effects of statistics could be checked. The confusion radius was varied between 1$^{\prime\prime}$ and 40$^{\prime\prime}$. For each simulation, we filled the survey area with $N$ randomly placed points and retained only those that fell outside the confusion area for all previously placed points (i.e. after placing the first point, the second one was kept only if it was outside the confusion area of the first point; the third one was kept only if it was outside the confusion areas of the first two points; and so on). Every simulation was repeated one hundred times and the average blending losses were compared to those calculated via Eq. 3.
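For reference, a minimal sketch of a single simulation run (in Python, not part of the original study; field size, star count and confusion radius are free parameters, and the brute-force neighbour test is only meant for samples of this size):

```python
import numpy as np

def simulate_blend_loss(n_stars=1000, area_deg2=0.1, r_c_arcsec=10.0, rng=None):
    """Place n_stars at random, keep those farther than r_c from every
    previously accepted star, and compare the true blend loss with Eq. 3."""
    rng = np.random.default_rng() if rng is None else rng
    side = np.sqrt(area_deg2) * 3600.0             # field side in arcsec
    accepted = []
    for _ in range(n_stars):
        p = rng.uniform(0.0, side, size=2)
        if all((p[0] - q[0])**2 + (p[1] - q[1])**2 > r_c_arcsec**2
               for q in accepted):
            accepted.append(p)
    n_d = len(accepted)                            # detected stars
    x = np.pi * r_c_arcsec**2 / (side * side)      # x = delta S / S
    n_est = n_d / (1.0 - 0.5 * n_d * x)            # Eq. 3
    return n_stars - n_d, n_est - n_d              # true loss, predicted loss
```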
The results are shown in Fig. \[mcsim\]. Fractional losses range between 0.1% and over 65%, with practically no difference between 1,000 and 10,000 objects. The calculated blend losses are in excellent agreement with the true values, except for the largest confusion radii. We stopped the simulations when the integrated confusion area reached the full survey area, because after that point, one can place infinitely many objects without detecting them. This is why the loss tends to be underestimated after $r=30^{\prime\prime}$. Also, this is where the mean number of objects within a confusion area exceeds 2, so that triplets and higher multiplets are no longer negligible. Keeping these restrictions in mind, it is remarkable how well the blend losses agree with Eq. 3. However, one question has to be addressed before we turn to real datasets, for which binarity may be significant: to what extent can we assume that seemingly random stellar fields can indeed be approximated by a uniform random Poisson process? The answer can be found by checking the distance distributions of all pairs in a real sample.
![Blend losses as a function of confusion radius in 0.1 deg$^2$ ([*top*]{}) and 1 deg$^2$ ([*bottom*]{}) fields, where the stellar density was set to 10$^4$ star/deg$^2$ in both fields. The heavily overlapping black dots represent one hundred simulations at each radius, while open circles connected with the solid lines show the predicted losses from Eq. 3.[]{data-label="mcsim"}](ME719f1.eps){width="80mm"}
The effects of double stars
---------------------------
![Pair-separation distributions for a 1 deg$^2$ field around V705 Cas (black line) and a random simulation (grey line). The lower panel is a zoom of the upper one showing the presence of about 5,000 resolved double stars.[]{data-label="b2"}](ME719f2.eps){width="80mm"}
We compare two histograms in Fig. \[b2\]. The thick black line shows the pair-distance distribution for USNO B1.0 stars around Nova (V705) Cas 1993 (21,520 stars with $14\fm00<$ red mag $<17\fm99$), while the grey line is for a simulation with 21,600 random points in the same field of view. As expected, the overall shapes of the distributions closely follow a Poisson distribution, with a slight boundary effect at the largest distances (comparable to the diameter of the survey field), causing a somewhat sharper cut-off than for a pure Poissonian.
We have three conclusions based on these kinds of comparisons for fields in the Milky Way:
- The overall distributions agree very well with the pure random simulations. There are slight indications for different shapes (see, e.g., the upper panel of Fig. \[b2\] around $r=1500^{\prime\prime}$ and $3000^{\prime\prime}$), but the differences hardly exceed the intrinsic scatter of the data.
- The main disagreement occurs for the smallest pair-separations (typically for $r<10^{\prime\prime}$), which must be due to a large number of binary stars. For example, in the field around V705 Cas, the pair excess suggests about 5000 visual double stars with separations less than 7$^{\prime\prime}$. In denser regions even higher numbers can occur: the 1 deg$^2$ USNO B1.0 field around Nova (V4745) Sgr 2003 contains about 20,000 double stars with red mags $<17\fm0$ and separations under 10$^{\prime\prime}$. Considering that the Washington Double Star Catalogue (Mason et al. 2001[^4]) contains approximately 100,000 astrometric double and multiple stars, it is evident that most faint binary stars have yet to be identified and catalogued (see also Nicholson 2002).
- The contamination by physical binary stars can therefore be a significant factor that may distort the results of random blending.
![The two-point correlation function for the 1 deg$^2$ field around V705 Cas. The dashed line shows a power-law fit to the data between 2 and 8 arcsecs ($1+\xi(r)\sim r^{-1.3}$).[]{data-label="2pcf"}](ME719f3.eps){width="80mm"}
Star-star correlations can be taken into account with the two-point correlation function, $\xi(r)$, which is defined by the joint probability of finding an object in both of the surface elements $\delta S_1$ and $\delta S_2$ at separation $r_{12}$:
$$P=n^2~\delta S_1~\delta S_2[1+\xi(r_{12})].$$
Note that homogeneity and isotropy are assumed, so that $\xi$ is a function of the separation alone (Peebles 1980). If we can calculate this excess probability for pairs, then multiplying random blend losses from Eq. 3 by $(1+\xi(r_c))$ will correct the blending rate for the double stars. Neglecting higher correlation functions is a reasonable assumption, because the majority of stars belongs to single or double systems (about 90%, Abt 1983). Also, it is clear that even with this correction, the calculated blending rate will be a lower limit, because we do not have any information on the frequency of unresolved close binaries.
We can determine $\xi(r)$ using the recipe that leads to Eq. (47.14) in Peebles (1980): place $N_t$ points at random in the survey area; let $n_p(t)$ be the number of pairs among these trial points at separation $r$ to $r+\delta r$, and let $n_p$ be the corresponding number of pairs in the real catalogue of $N$ objects. Since $\xi=0$ for the trial points, the estimate of $\xi$ for the data is:
$$1+\xi(r)={n_p \over n_p(t)} {N_t^2 \over N^2}.$$
We show an example in Fig. \[2pcf\], which in our case is simply the ratio of the two histograms plotted in Fig. \[b2\] (because $N_t$ was equal to $N$). The statistical uncertainty of the first point at $r=1^{\prime\prime}$ is $\pm0.01$, far less than the symbol size, but a much larger systematic error is likely because that radius is comparable to the image scale of plates on which the catalogue is based. The dashed line in Fig. \[2pcf\] shows a power-law fit ($\xi(r)\sim r^\alpha$) for the linear regime of the log-log plot. This diagram suggests that for the given dataset of 21,500 stars, the probability of finding two stars with 1 arcsec separation is about 20 times larger than for the pure random case. For a 1$^{\prime\prime}$ confusion radius Eq. 3 gives $N-N_d=56$, which means the corrected blending loss, including the measurable fraction of double stars, is about 1120 stars. In other words, the chance of being a blend in this dataset is at least 5.2%.
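A sketch of this estimator (Python, ours rather than the original code; brute-force pair counting with $N_t=N$, adequate only for modest samples and for bins wide enough that the random histogram is nonzero -- a catalogue of $\sim$20,000 stars would call for a tree-based neighbour search):

```python
import numpy as np

def one_plus_xi(coords, r_edges, side, rng=None):
    """Estimate 1 + xi(r) as the ratio of the pair-separation histogram
    of the data to that of an equal-size random catalogue (Eq. 5)."""
    rng = np.random.default_rng() if rng is None else rng
    coords = np.asarray(coords, dtype=float)
    trial = rng.uniform(0.0, side, size=coords.shape)

    def pair_hist(pts):
        d = np.sqrt(((pts[:, None, :] - pts[None, :, :])**2).sum(axis=-1))
        iu = np.triu_indices(len(pts), k=1)        # count each pair once
        return np.histogram(d[iu], bins=r_edges)[0]

    return pair_hist(coords) / pair_hist(trial)    # N_t = N, so no prefactor
```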
To conclude, it is always advisable to check star-star correlations before applying Eq. 3, and only if $\xi(r)$ is close to zero can we assume that double stars do not contribute significantly to blending (in practice, it means that the observations did not resolve binaries, probably because of the large distance).
Magnitude bias in the OGLE-II observations of the Large Magellanic Cloud
========================================================================
Our first application is an analysis of the OGLE-II $BVI$ observations of the Large Magellanic Cloud (Udalski et al. 2000). These data were obtained with the 1.3m Warsaw telescope at the Las Campanas Observatory for more than 7 million stars in the central 4.5 deg$^2$ of the LMC. The completeness of the resulting catalogue is high down to stars as faint as $B\approx20$ mag, $V\approx20$ mag and $I\approx19.5$ mag. The median seeing of the observations was $1\farcs3$, with no observations made when the seeing exceeded $1\farcs6$–$1\farcs8$ (Udalski et al. 2000). Here we estimate blend losses in typical fields in the LMC and determine the corresponding magnitude biases in $V$ and $I$.
This is an instance of Case 1 blending, since no existing catalogue has a smaller confusion radius than the OGLE-II observations, except small field-of-view observations with the Hubble Space Telescope (Olsen 1999), which were used to estimate MACHO blend biases by Alcock et al. (2001). To investigate possible photometric biases, we downloaded a representative field from the OGLE public archive[^5]. We chose LMC\_SC5, centered at RA(2000)=$5^{\rm h}23^{\rm m}48^{\rm s}$, Dec(2000)=$-69^\circ41^\prime05^{\prime\prime}$, which contains about 460,000 stars. The majority of stars are fainter than 18 mag (both in $V$ and $I$), so we restricted the sample to stars between 18 mag and 21 mag. We used the two-point correlation function to constrain possible binarity effects and also to estimate the confusion radius. Because the stellar density varies gradually across the bar of the LMC, we selected several subsamples within which the density was constant. Here we discuss two of them, one in the middle of the bar (hereafter Region 1, $80.7<{\rm
RA[^\circ]}<80.8$, $-69.7<{\rm Dec[^\circ]}<-69.6$) and one further south (hereafter Region 2, $80.7<{\rm RA[^\circ]}<80.8$ and $-70<{\rm
Dec[^\circ]}<-69.9$). Other subsamples yield practically identical characteristics for blending systematics.
![Two-point correlation functions in $V$ and $I$ in Region 1.[]{data-label="ogle2pcf"}](ME719f4.eps){width="80mm"}
Region band $N_d$ $N$ $N-N_d$ $(N-N_d)/N_d$
-------- ------ ------- ------ --------- ---------------
1 $V$ 7598 9151 1553 20%
2 $V$ 5354 6081 727 14%
1 $I$ 7728 9340 1612 21%
2 $I$ 6073 7026 953 16%
: Star counts, estimated total numbers of stars, blend losses and blending frequencies for two OGLE fields in the LMC.[]{data-label="ogleloss"}
In Fig. \[ogle2pcf\] we show the two-point correlation functions for Region 1 in $V$ and $I$ bands. The step size in radius was $\delta r=0\farcs2$, and we plotted the functions with linear scaling, so that changes around the confusion radius could be noticed more easily. We do not see a significant excess of correlated pairs, which is not surprising given the distance to the LMC. There is a slight rise of the correlation function for $1\farcs4
\leq r \leq 3^{\prime\prime}$ that may be attributed to the widest binary stars. However, the correlation goes down quickly for $r<1\farcs0$. Since the best seeing of the OGLE images reached $0\farcs8$ (Udalski et al. 1998), we adopt this value as the confusion radius. Our choice is also consistent with the “critical radius” of $0\farcs75$ chosen by the OGLE team, within which objects were treated as identical (Udalski et al. 1998). The blend losses estimated using Eq. 3 are shown in Table \[ogleloss\].
{width="160mm"}
Apparently, about one in five or six stars is a blend, having an unresolved companion that is within 3 mag in brightness (since our data sample covers the range 18 to 21 mag). We have to stress that these numbers are only lower limits because of the incompleteness of the OGLE data below $\sim$20 mag, which means the systematic errors we derive are still underestimated. Our main purpose is to point out the existence of surprisingly large systematic errors that can be determined from a single dataset alone. Deeper data (like in Alcock et al. 2001) would only shift these systematics toward larger values, but that is beyond the scope of the present paper.
Having estimated the fraction of blended stars, we are now in a position to calculate the extent to which they will bias the photometry. We assume that the probability of a star blending with a neighbour does not depend on its brightness, so that blended and unblended stars have the same luminosity function (LF) within each region. This is likely to introduce a slight systematic error, but we estimate this to have a marginal effect on the final outcome of the investigation. Upper panels in Fig. \[oglebias\] contain the normalized $V$ and $I$ luminosity functions ($\sum_{18}^{21}{\rm LF}(m)=1$), where the prominent peaks at $V\approx19$ mag and $I\approx18$ mag correspond to the red clump (Udalski 2000). The $I$-band decline for $I>20$ mag shows the decreasing completeness (see Table 4 in Udalski et al. 2000), which is less severe in $V$. The next step was to carry out a Monte Carlo simulation for each region and each filter: we placed $N$ stars at random into the given survey area, where $N$ is the corresponding number in the fourth column of Table \[ogleloss\]. The luminosity function was set to the one shown in Fig. \[oglebias\]. Then we determined which stars were “blends” in the random set and calculated the integrated magnitude of each blend. The magnitude difference between the blend and the brightest star within it was assigned as a bias at the integrated magnitude. Finally, these bias values were averaged for every 0.1 mag bin of the luminosity function (unblended stars were taken into account by adding zero bias). The whole procedure was repeated one hundred times and the results are shown in the lower panels of Fig. \[oglebias\]. Small changes to the input luminosity functions affected the final bias distributions at a level lower than the scatter visible in the plots.
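The bias assigned to each blend follows from simple flux addition; a minimal sketch (Python, ours rather than the original code) that reproduces the values of Table \[biases\]:

```python
import numpy as np

def blend_bias(mags):
    """Magnitude bias of an unresolved blend: the integrated magnitude
    minus that of the brightest component (always negative)."""
    mags = np.asarray(mags, dtype=float)
    m_blend = -2.5 * np.log10(np.sum(10.0 ** (-0.4 * mags)))
    return m_blend - mags.min()

print(blend_bias([19.0, 19.0]))   # -0.75 mag, the Delta m = 0 case
print(blend_bias([19.0, 20.0]))   # about -0.36 mag, the Delta m = 1 case
```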
We can draw several conclusions based on Fig. \[oglebias\]. Firstly and most importantly, blending introduces systematic errors as large as $0\fm2$–$0\fm3$ (for instance, the mean bias between $V=18-19$ mag is $0\fm25$ in Region 1 and $0\fm18$ in Region 2), which can have serious consequences if neglected. Details of OGLE-II reduction, tests of photometric accuracy and incompleteness were published in Udalski et al. (1998) and they did not mention random blending as a possible source of systematic biases. Very recently, Alcock et al. (2004) presented an analysis of first-overtone RR Lyrae stars in the MACHO database, for which systematic magnitude biases due to blending were also estimated. At $V\approx19\fm3$, where RR Lyrae stars concentrate, Alcock et al. (2004) arrived at $\Delta V$ ranging from $-0\fm11$ to $-0\fm21$ for various assumptions (they made artificial star tests for various stellar densities and calculated the differences between the input and recovered magnitudes of artificial stars). Our results are based on a very different approach applied to a very different dataset, and the agreement suggests that random blending must be taken into account in such high-level crowding as the present one. Secondly, the gradual decrease of the bias towards fainter magnitudes shows the effect of incompleteness: as was shown by Alcock et al. (2001) for the MACHO database, systematic errors increase monotonically over the examined magnitude range. For that reason, the calculated biases between 18 and 19 mag can be considered as lower limits for the fainter magnitudes; the magnitude range of RR Lyrae stars is therefore very likely to suffer from a bias up to $0\fm2$–$0\fm3$, which has been neglected in the past. This leads to our third conclusion: results based on the OGLE-II data that supported the LMC “short” distance scale ($(m-M)_{\rm LMC}\approx18\fm3$) are likely to suffer from this systematic error. For example, Udalski (2000) calibrated the zero-point of the distance scale using RR Lyrae variables and red clump stars to be $(m-M)_{\rm LMC}=18\fm24$ with $0\fm07$ systematic error. However, blending correction significantly decreases the apparent conflict between this and the “standard” value ($18\fm5$, e.g. Alves 2004), which shows the importance of blending biases.
It must be stressed that ground-based observations of more distant galaxies may suffer from even larger systematic errors. Detecting variable stars by the image subtraction method circumvents the problem (see a recent application by Bonanos & Stanek 2003), but measuring apparent magnitudes (and hence distance) requires use of space telescopes. Calculations like the present one can help estimate systematics but more reliable results for nearby galaxies need high-resolution observations.
Field star distributions around seven recent novae
==================================================
Star (year) Progenitor details Ref.
-------------------- --------------------------------------------- ------
V1494 Aql (1999/2)   Red mag $15\fm6$                              1
V2275 Cyg (2001)     Red mag $18\fm8$ (USNO A2.0)                  2
V382 Vel (1999)      Quiescent magnitude $V=16\fm56$               3
V4743 Sgr (2002/2)   Red mag $16\fm7$ (USNO A2.0)                  4
V4745 Sgr (2003)     DSS red plate, $R\approx17\fm9$, apparently
                     blended with a faint companion to SE          5
V705 Cas (1993)      Candidate precursor of red mag about
                     $17\fm0$, northern component of a merged
                     pair with separation about 2 arcsec           6, 7
V723 Cas (1995)      DSS red plate, mag 18–19                      8
: Seven bright nova outbursts with reported progenitor candidates in the last decade.[]{data-label="novae"}
References: 1 – Pereira et al. (1999); 2 – Schmeer (2001); 3 – Platais et al. (2000); 4 – Haseda et al. (2002); 5 – Brown et al. (2003); 6 – Skiff et al. (1993); 7 – Munari et al. (1994); 8 – Hirosawa et al. (1995)
----------- ----------- -------- ------------ ------------ -------------- ------- ------ -----
Star Mag range $N_d$ $N(m<m_u)$ $\Delta N$ $1+\xi(r_c)$ blend A1 A2
rate
V1494 Aql 15–16 8,333 13,229 19 4 0.9% 0.5% 75%
V2275 Cyg 18–19 22,249 47,255 137 1.5 0.9% 1.2% –
V382 Vel 16–16.6 6,125 19,291 10 15 2.4% 0.3% –
V4743 Sgr 16–17 10,775 20,165 32 5 1.5% 0.6% –
V4745 Sgr 17–18.5 35,997 62,042 485 3 4% 2.7% –
V705 Cas 16.5–17.5 8,066 17,374 18 10 2.2% 0.5% 90%
V723 Cas 18–19.5 18,932 31,302 99 3 1.6% 1% –
----------- ----------- -------- ------------ ------------ -------------- ------- ------ -----
Our next case study was inspired by the independent discovery of the close optical companion of nova V1494 Aql (Kiss et al. 2004). We selected several bright nova outbursts in the last decade for which progenitor star candidates have been identified on Digitized Sky Survey plates or in deep star catalogues (most often in the USNO A1.0 and A2.0 catalogue releases). The available details on these candidates are summarized in Table \[novae\]. We examined the following questions: How probable is it that the candidates are unrelated stars located only by chance at the nova coordinates? In cases where a star was found a few arcseconds from the nova position (as for V1494 Aql and V705 Cas), how probable is it that they are physically related to the nova system?
To find out the answers, we downloaded and analysed 1 deg$^2$ USNO B1.0 fields around each nova using the USNO Flagstaff Station Integrated Image and Catalogue Archive Service [^6]. These fields provide complete coverage down to $V=21\fm0$, $0\farcs2$ astrometric accuracy at J2000, 0.3 mag photometric accuracy in up to five colours and 85% accuracy for distinguishing stars from nonstellar objects (see Monet et al. 2003 for more details). The latter issue is less relevant in these fields because of the low galactic latitudes. In two cases we found incompleteness of the data, either in the form of large empty areas in the given field (V4745 Sgr) or sudden jumps in the stellar density within certain rectangular areas (V4743 Sgr). Both fields are located in the densest regions of the Milky Way, which explains the difficulties of measuring wide-field Schmidt-plates in such crowded fields. We took these into account in our analysis (e.g. in the latter case we omitted patches of lower density, yielding a decreased survey area).
Since this is Case 1 blending, we calculated random blend losses for each nova by choosing all stars within 1–1.5 mag in brightness of the reported progenitor (when the accuracy was worse, we chose the wider range). For this, we used red magnitude data in the catalogue. We assumed a $1\farcs5$ confusion radius, based on the pixel scale of the scanned Schmidt-plates (usually about $0\farcs9$/pixel, Monet et al. 2003) and the fact that the optical companion of V1494 Aql located at $1\farcs4$ was unresolved in the data. Two-point correlation functions (Fig. \[2pcfs\]) were determined via Eq. 5, filling up the survey areas by 25,000 points at random. Blend losses were then multiplied by the two-point correlation functions at $r=1\farcs5$. Finally, we calculated the chance of finding a random blend within $1\farcs5$ of the nova coordinates. We summarize the results in Table \[novaresult\] and in Fig. \[2pcfs\] (for V705 Cas we got essentially the same as in Fig. \[2pcf\] for a wider magnitude range, so that there was no reason to repeat the plot).
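Schematically, and assuming the standard pair-count estimator (Eq. 5 itself is not reproduced in this section), the estimate of $1+\xi(r)$ can be computed as follows; the input names are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def one_plus_xi(data_xy, r_edges, area_side, n_random=25_000, seed=0):
    """Ratio of observed to random pair counts in radial bins.

    r_edges must start above zero so that self-pairs are excluded."""
    rng = np.random.default_rng(seed)
    rand_xy = rng.uniform(0.0, area_side, size=(n_random, 2))

    def pair_counts(xy):
        tree = cKDTree(xy)
        cum = np.array([tree.count_neighbors(tree, r) for r in r_edges])
        return np.diff(cum).astype(float)  # pairs per radial bin

    dd = pair_counts(np.asarray(data_xy))
    rr = pair_counts(rand_xy)
    n_d, n_r = len(data_xy), n_random
    # Normalize each count by the number of possible pairs in its catalogue.
    return (dd / (n_d * (n_d - 1))) / (rr / (n_r * (n_r - 1)))
```

Extrapolating a linear fit of $\log(1+\xi)$ against $\log r$ down to the confusion radius, as described below, then gives the required $\xi(r_c)$ values.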
It is evident from Fig. \[2pcfs\] that the samples are dominated by binary stars in every field for separations under $\sim10^{\prime\prime}$. The highest correlation was found around V382 Vel, where the number of 1-arcsec doubles exceeds the number expected for a random field by a factor of 23! In other words, at least 22 of every 23 1-arcsec pairs are physically related (binary) stars. It is also prominent that there is a slope decrease in every correlation function for the smallest radii, which we interpret as a result of confusion losses. For that reason, we estimated $\xi(r_c=1\farcs5)$ (6th column in Table\[novaresult\]) from extrapolated linear fits over the log-log plot of the 2–6 arcsecs data, rather than from interpolations between 1 and 2 arcsecs.
The numbers in Table \[novaresult\] clearly show that random blend loss stays well below 1%; even for the worst case, V4745 Sgr, it is only 1.4% of $N_d$. After multiplying by the estimated two-point correlation function values, the blending rate still remains around 1–2%. The eighth column in Table \[novaresult\] (A1) confirms that progenitors can be identified with high confidence, even in the central regions of the Milky Way.
The most interesting numbers can be found in the last column of Table \[novaresult\]. They follow from the probability interpretation of the two-point correlation function. Since $(1+\xi(r))$ reflects the excess probability for finding a pair in a sample at distance $r$, $1+\xi(r)=4$ (first row in Table \[novaresult\]) means that there are four times more pairs than in a random sample, so that for any given pair the chance of being correlated is 75%. Therefore, these data suggest that it is quite possible that the optical companions of V1494 Aql and V705 Cas form wide hierarchical triple systems with the cataclysmic components. Interestingly, for V1494 Aql this is also supported by our knowledge of the optical companion. In Kiss et al. (2004) we assigned late F-early G as an approximate spectral type of the companion. The corresponding absolute magnitude for a main sequence star is about $M_V=+4\fm0$, which agrees within the error bars with the quiescent absolute magnitude of the nova (Kiss et al. 2004). Hence, the close apparent magnitudes of the pre-nova and the companion suggest similar distances, and consequently the possibility of physical correlation. For V705 Cas, we do not have similar supporting evidence, but it is an intriguing possibility that these optical components are in triple systems with the novae. If confirmed, their presence could be used, for example, to derive independent distances via spectroscopic parallaxes, which in turn would allow one to calculate accurate absolute magnitudes of the nova eruptions.
This case study shows that seeing-limited images are not significantly affected by random blending, even in the galactic plane. We now turn to the case of images with much lower spatial resolution.
Wide-field photometric surveys
==============================
In recent years there has been an explosion in the number of small and medium-sized robotic telescopes that monitor selected regions of the sky with various instruments (e.g., Hessman (2004) listed 80 different projects). We selected three representative small instruments that are currently running to investigate Case 2 blending. They are:
1. The Hungarian Automated Telescope (HAT) which, in its present status, observes stars between $I=9-12$ mag in selected fields of the northern sky. The data were reduced with PSF-fitting photometry (Bakos et al. 2004). We downloaded the 1 deg$^2$ USNO B1.0 field in Hercules (RA(2000)=$17^{\rm h}36^{\rm m}$, Dec(2000)=$37^\circ30^\prime$, $b\approx30^\circ$).
2. The STellar Astrophysics & Research on Exoplanets (STARE) project, observing between $R=9-12\fm5$ in various fields (Alonso et al. 2003, Brown 2003). We downloaded the 1 deg$^2$ USNO B1.0 field, centered at RA(2000)=$20^{\rm h}06^{\rm m}$, Dec(2000)=$36^\circ00^\prime$ ($b\approx2^\circ$).
3. The All Sky Automated Survey (ASAS), which covers over 1 million stars in the southern hemisphere between $V=8\fm5-15$ mag. The data are reduced with simple aperture photometry (Pojmanski 2002, 2004). We selected two 1-deg$^2$ fields at random at two different galactic latitudes: RA(2000)=$00^{\rm h}19^{\rm m}$, Dec(2000)=$-56^\circ00^\prime$ ($b\approx-60^\circ$) and RA(2000)=$06^{\rm h}10^{\rm m}$, Dec(2000)=$-23^\circ00^\prime$ ($b\approx-20^\circ$).
Each project is characterized by $r_c\approx20^{\prime\prime}$, so that their comparison reveals the differences that depend on the galactic latitude. To match the photometric bands used in these projects, we examined USNO B1.0 infrared magnitudes for the HAT-field and red magnitudes for the remaining regions. The main aim was to get an idea of the fraction of biased stars, because although crowding problems are well-known for these instrumental setups, we did not find any quantitative description of the issue. Therefore, we calculated blending rates for various $m$ and $\Delta m$ values that covered the magnitude ranges of the projects. This rate was defined as the following ratio:
$${\rm blending~rate}={\nu(m,\Delta m,r_c) \over \nu(m)},$$
where $\nu(m,\Delta m, r_c)$ is the number of stars which have fainter neighbours within a distance $r_c$ and magnitude difference $\Delta m$; $\nu(m)$ is the total number of $m$ magnitude stars (in our case, defined as stars with apparent brightnesses within $m$ and $m+1$ mag). We assumed negligible blend losses in the USNO B1.0 catalogue over the studied magnitude and separation ranges. The results are shown in Fig. \[surveys\]. We also determined two-point correlation functions, which showed that for these bright magnitudes the overwhelming majority of pairs closer than the confusion radius are physically related double stars.
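As a sketch, this ratio can be evaluated directly from a star catalogue with a $k$-d tree; the flat-sky treatment of angular distances and the input units (degrees) are simplifying assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def blending_rate(xy_deg, mags, m, delta_m, r_c_arcsec):
    r_c = r_c_arcsec / 3600.0                 # confusion radius in degrees
    sel = (mags >= m) & (mags < m + 1.0)      # stars of magnitude m
    tree = cKDTree(xy_deg)
    blended = 0
    for i in np.flatnonzero(sel):
        for j in tree.query_ball_point(xy_deg[i], r_c):
            # Count star i once if it has any fainter neighbour within r_c
            # that lies within delta_m magnitudes.
            if j != i and 0.0 < mags[j] - mags[i] <= delta_m:
                blended += 1
                break
    return blended / sel.sum()
```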
Figure \[surveys\] implies an alarming rate of blending, even for high galactic latitudes. We see that 10 to 20 percent of objects observed by the HAT and ASAS projects have blends within 3 mag, while up to 50% are affected near the galactic plane (STARE). Correlated pairs can make the situation quite bad even for the brightest stars, which means blending must be taken into account in every case. Presently available star catalogues offer a good opportunity to do that; thus, we recommend adding blending information in all cases where finally reduced data are made accessible to the wider community. Also, the implementation of the image subtraction method (Alard & Lupton 1998) is highly desirable in this type of project, because the photocenter of the variable source can be used to identify which object within a blend is varying (Alard 1996; see also Hartman et al. 2004).
Conclusions
===========
In this study we investigated the effects of random blending that involves stars of similar brightnesses, leading to significant biases in the measured apparent magnitudes and amplitudes. A simple formula (Eq. 3) was derived to calculate blend losses in a real catalogue of stars based on the confusion radius, the total number of detected stars and the survey area. We showed that the two-point correlations must be included in calculations when studying galactic fields, where binary stars dominate for separations under 10$^{\prime\prime}$. Outside the Milky Way, we quickly lose the information on wide binaries, so that the pure random case applies. That is why the calculated blending rate always provides only a lower limit on the full blending.
We discussed three different applications, which demonstrate the importance of these phenomena. The most interesting results were presented for the OGLE-II data of the Large Magellanic Cloud. With extremely high stellar angular densities, these observations are much more biased by random blending than those fields in the galactic plane. Even though no observations were carried out for seeing larger than $1\farcs6$–$1\farcs8$, we estimated that 15-20% of the sample are affected by blends that are within three magnitudes in apparent brightness. The resulting magnitude biases can reach $0\fm2$–$0\fm3$, depending on the angular position within the LMC. This allows us to reconcile the OGLE-based distance moduli of the LMC ($\mu\approx18\fm3$) and the generally adopted “standard” one ($\mu\approx18\fm5$). Random blend calculations are therefore highly desirable in every case when significant blending is expected, especially in extragalactic observations.
The analysis of star fields around seven novae showed that progenitor identification is very secure, even in the extremely dense fields of the Bulge. Furthermore, statistical evidence points toward the existence of wide hierarchical triple systems, in which the third component lies at 1–2$^{\prime\prime}$ (i.e. several thousand AU) from the novae. Thus, it is highly desirable to measure proper motions for V1494 Aql and V705 Cas and their optical companions to confirm or reject the physical correlations.
In our last example we demonstrated high blending rates in typical wide-field photometric surveys for exoplanet transits. In certain cases, up to 50–60% of stars can be affected by blending objects within 3 magnitudes. We recommend that cross-correlation with appropriate star catalogues should always be done as part of the regular data reduction, to flag every possibly problematic star.
Our presented method for calculating blend losses is very simple, thus it may not be applicable in certain cases. For instance, when the stellar angular density shows a strong gradient, as in a globular cluster or the outer parts of a resolved galaxy, the basic assumption of homogeneity and isotropy will be invalid. A possible solution in such cases is to introduce a position-dependent density $n=\rho(x,y)$, where $x$ and $y$ are the image coordinates, and to integrate all equations over the inhomogeneous and anisotropic sample. In principle, the numerical implementation of this generalisation is not too difficult. Another way to treat these cases is to split the data into small segments within which $n\approx{\rm const.}$ and, after calculating blending biases, take averages over the segments.
To summarize, the experience with real datasets suggests that it is always highly advisable to estimate random blend losses when the star density and/or the confusion radius have relatively large values – Eq. 3 or deep star catalogues can be used to characterize any specific example. Where applicable, the upgrade from aperture and PSF-photometry to the image subtraction method is desirable to reduce some of the blending effects.
Acknowledgments {#acknowledgments .unnumbered}
===============
This work has been supported by the FKFP Grant 0010/2001, OTKA Grant \#T042509 and the Australian Research Council. Thanks are due to Dr. A. Udalski, whose comments helped improve the paper. LLK is supported by a University of Sydney Postdoctoral Research Fellowship. The NASA ADS Abstract Service was used to access data and references.
Abt H.A., 1983, ARA&A, 21, 343
Alard C., 1996, IAU Symp. 173, 215
Alard C., 1997, A&A, 321, 424
Alard C., & Lupton R.H., 1998, ApJ, 503, 325
Alcock C., et al., 2001, ApJS, 136, 439
Alcock C., et al., 2004, AJ, 127, 334
Alonso R., Belmonte J.A., & Brown T., 2003, ApSS, 284, 13
Alves D.R., 2004, in: Highlights of Astronomy, Vol. 13, in press, ([astro-ph/0310673]{})
Antonello E., 2002, A&A, 391, 795
Bakos G.Á, Lázár J., Papp I., Sári P., & Green E.M., 2002, PASP, 114, 974
Bakos G., Noyes R.W., Kovács G., Stanek K.Z., Sasselov D.D., & Domsa I., 2004, PASP, 116, 266
Bonanos A.Z., & Stanek K.Z., 2003, ApJ, 591, L111
Brown T.M., 2003, ApJ, 593, L125
Brown J., et al., 2003, IAUC, No. 8123, 1.
Ferrarese L., Silbermann N.A., Mould J.R., Stetson P.B., Saha A., Freedman W.L., & Kennicutt Jr. R.C., 2000, PASP, 112, 177
Gibson B.K., Maloney P.R., & Sakai S., 2000, ApJ, 530, L5
Han C., 1997, ApJ, 490, 51
Han C., 1998, ApJ, 500, 569
Harris J., & Zaritsky D., 1999, AJ, 117, 2831
Hartman J.D., Bakos G., Stanek K.Z., & Noyes R.W., 2004, AJ, submitted ([astro-ph/0405597]{})
Haseda K., et al., 2002, IAUC, No. 7975, 1.
Hessman F.V., 2004, [http://www.uni-sw.gwdg.de/$\sim$hessman/\
MONET/links.html]{}
Hirosawa K., Yamamoto M., Nakano S., Kojima T., Iida M., Sugie A., Takahashi S., & Williams G.V., 1995, IAUC, No. 6213, 1.
Kiss L.L., Csák B., & Derekas A., 2004, A&A, 416, 319
Mason B.D., Wycoff G.L., Hartkopf W.I., Douglass G.G., & Worley C.E., 2001, AJ, 122, 3466
Mochejska B.J., Macri L.M., Sasselov D.D., & Stanek K.Z., 2000, AJ, 120, 810
Monet D.G., et al., 2003, AJ, 125, 984
Munari U., Tomov T., Hric L., & Hazucha P., 1994, IBVS, No. 3977
Nicholson M.P., 2002, [http://ad.usno.navy.mil/wds/\
unpublished/nicholson.html]{}
Olsen K.A.G., 1999, AJ, 117, 2244
Peebles P.J.E., 1980, The Large-Scale Structure of the Universe (Chapter III), Princeton University Press, Princeton
Pereira A., di Cicco D., Vitorino C., & Green D.W.E., 1999, IAUC, No. 7323, 1.
Platais I., Girard T.M., Kozhurina-Platais V., van Altena W.F., Jain R.K., & López C.E., 2000, PASP, 112, 224
Pojmanski G., 2002, Acta Astron., 52, 397
Pojmanski G., 2004, Acta Astron., in press ([astro-ph/0401125]{})
Schmeer P., 2001, IAUC, No. 7688, 2.
Skiff B., Abe H., & Bengtsson H., 1993, IAUC, No. 5904, 2.
Snel R.C., 1998, A&AS, 129, 195
Takeuchi T.T., & Ishii T.T., 2004, ApJ, 604, 40
Udalski A., 2000, Acta Astron., 50, 279
Udalski A., Szymanski M., Kubiak M., Pietrzynski G., Wozniak P., & Zebrun K., 1998, Acta Astron., 48, 147
Udalski A., et al., 2000, Acta Astron., 50, 307
Wozniak P., & Paczynski B., 1997, ApJ, 487, 55
[^1]: E-mail: [email protected]
[^2]: On leave from University of Szeged, Hungary
[^3]: See also [http://www.hao.ucar.edu/public/research/\
public/stare/stare.html]{}
[^4]: A regularly updated version can be found at\
[http://ad.usno.navy.mil/wds/]{}
[^5]: Available at [http://bulge.princeton.edu/$\sim$ogle/]{}
[^6]: http://www.nofs.navy.mil/data/fchpix/
|
---
abstract: 'In this paper, we propose two important extensions to cluster-weighted models (CWMs). First, we extend CWMs to generalized cluster-weighted models (GCWMs) by allowing non-Gaussian distributions for the continuous covariates, as frequently occur in insurance practice. Secondly, we introduce a zero-inflated extension of the GCWM (ZI-GCWM) for modeling insurance claims data with excess zeros coming from heterogeneous sources. Additionally, we give two expectation-maximization (EM) algorithms for parameter estimation given the proposed models. An appropriate simulation study shows that, for various settings and in contrast to the existing mixture-based approaches, both extended models perform well. Finally, a real data set based on French automobile policies is used to illustrate the application of the proposed extensions.'
author:
- 'Nikola Počuča$^*$, Petar Jevtić$^{**}$, Paul D. McNicholas$^*$, and Tatjana Miljkovic$^{\dagger}$'
date: |
$^*$Department of Mathematics and Statistics, McMaster University, Hamilton, Ontario, Canada.\
$^{**}$Department of Mathematics and Statistics, Arizona State University, Phoenix, AZ, U.S.\
$^{\dagger}$Department of Statistics, Miami University, Oxford, OH, U.S.
title: 'Modeling Frequency and Severity of Claims with the Zero-Inflated Generalized Cluster-Weighted Models'
---
<span style="font-variant:small-caps;">Key Words:</span> GCWM, CWM, clustering, automobile claims.\
<span style="font-variant:small-caps;">JEL Classification:</span> C02, C40, C60.\
Introduction {#sec:introduction}
============
A significant number of clustering methods have been proposed for sub-grouping data in the areas of computer science, biology, social science, statistics, marketing, etc. [@Ingrassia+Punzo+Vittadini+Minotti:2015] proposed the cluster-weighted model (CWM) framework as a flexible family of mixture models for fitting the joint distribution of a random vector composed of a response variable and a set of mixed-type covariates, with the assumption that the continuous covariates come from a Gaussian distribution. CWMs with Gaussian assumptions had been proposed by [@Gershenfeld:1997], [@Gershenfeld:Schoner+Metois:1999], and [@Gershenfeld:1999] in the context of media technology. Some extensions of this class of models have been considered by [@Punzo+Ingrassia:2015], [@Ingrassia+Minotti+Punzo:2014], [@Ingrassia+Minotti+Vittadini:2012], [@subedi13; @subedi15], and [@punzo17]. These clustering methods are somewhat lacking for modelling insurance data, which typically feature excess zeros in claim counts, heavy-tailed loss distributions, deductibles, and limits.
Sub-grouping of insurance policies based on risk classification is a standard practice in insurance. The heterogeneous nature of insurance data allows for the exploration of many different techniques for sub-grouping risk. As a result, there is a growing number of papers in the area of mixture modeling of univariate and multivariate insurance data to account for heterogeneity of risk. [@Lee+Lin:2010], [@Verbelen+Gong+Antonio+Badescu+Lin:2015], and [@Miljkovic+Grun:2016] proposed mixture models for univariate loss data, and mixture modeling of univariate insurance data has since been extended to the multivariate context. A finite mixture of bivariate Poisson regression models with an application to insurance ratemaking was studied by [@Bermudez+Karlis:2012]. A Poisson mixture model for count data was considered by [@Brown+Buckley:2015], with application to managing a Group Life insurance portfolio. Recently, [@risks_miljkovic] reviewed two complementary mixture-based clustering approaches (CWMs and mixture-based clustering for an ordered stereotype model) for modeling unobserved heterogeneity in an automobile insurance portfolio, depending on the data structure under consideration.
In this paper, we extend the CWM family proposed by [@Ingrassia+Punzo+Vittadini+Minotti:2015] to allow for modeling of non-Gaussian continuous covariates and of zero-inflated Poisson (ZIP) claims data with excess zeros, which are commonly seen in insurance applications. We consider a generalized cluster-weighted model (GCWM) as well as a zero-inflated GCWM. Two partitioning methods are considered with two separate expectation-maximization (EM) algorithms [@Dempster+Laird+Rubin:1977]. The EM algorithm is based on the complete-data likelihood, which encompasses the observed data together with the missing data and/or latent variables. The EM algorithm can be highly effective for maximum likelihood estimation when data are incomplete or are assumed to be incomplete. The first EM algorithm is for parameter estimation of the GCWM, while the second is for parameter estimation of the ZI-GCWM. We show that the Bernoulli and Poisson GCWMs provide accurate initializations for the EM algorithm of the zero-inflated GCWM. These models utilize individual claims data and should be useful in the areas of ratemaking and risk management.
This paper is organized as follows. The GCWM and ZI-GCWM approaches are discussed in Section \[sec:model\], and parameter estimation is discussed in Section \[sec:estmeth\]. Then, our methodology is applied to real data on French automobile claims and an extensive simulation study is conducted (Section \[sec:numapp\]). This paper concludes with a discussion and some suggestions for future work (Section \[sec:sim\]).
Methodology {#sec:model}
===========
Background
----------
Let $(\bm{X^{'}}, Y)^{'}$ be the pair of a vector of covariates $\bm{X}$ and a response variable $Y$. Assume this pair is defined on some sample space $\Omega$ and takes values in an appropriate Euclidean subspace. Now, assume that there exist $K$ non-overlapping partitions of $\Omega$, denoted $\Omega_1, \ldots, \Omega_K$. [@Gershenfeld:1997] characterized CWMs as a finite mixture of GLMs; hence, the joint distribution of $(\bm{X^{'}}, Y )^{'}$ has the form $$\begin{aligned}
f(\bm x, y; \bm{\Phi})= \sum_{k=1}^{K} \tau_k q(y|\bm{x};\bm{\vartheta}_k)p(\bm{x};\bm{\theta}_k),
\label{eq1}\end{aligned}$$ where $\bm{\Phi}:=\{\bm{\vartheta}_k, \bm{\theta}_k\}$ denotes the model parameters. Here $q(y|\bm{x};\bm{\vartheta}_k)$ and $p(\bm{x};\bm{\theta}_k)$ are the conditional and marginal distributions of $(\bm{X^{'}}, Y)^{'}$, respectively, while $\tau_k$ is the $k$th mixing proportion such that $\sum_{k=1}^{K}\tau_k=1$, $\tau_k>0$. [@Ingrassia+Punzo+Vittadini+Minotti:2015] proposed a flexible family of mixture models for fitting the joint distribution of a random vector $(\bm{X^{'}}, Y)^{'}$ by splitting the covariates into continuous and discrete, i.e., $ \bm{X}=(\bm{V}', \bm{W}')'$. The assumption of independence between continuous and discrete covariates allows us to multiply their corresponding marginal distributions. Thus, for this setting the model in \[eq1\] is reformulated as follows $$\begin{aligned}
f(\bm{x}, y; \bm{\Phi})= \sum_{k=1}^{K} \tau_k q(y|\bm{x};\bm{\vartheta}_k)p(\bm{x};\bm{\theta}_k)=\sum_{k=1}^{K} \tau_k q(y|\bm{x};\bm{\vartheta}_k)p(\bm{v}; \bm{\theta}_k^{\star})p(\bm{w};\bm{\theta}_k^{\star\star})
\label{eq2}\end{aligned}$$ where $\bm{v}$ and $\bm{w}$ are respectively the realized vectors of continuous and discrete covariates, $q(y|\bm{x};\bm{\vartheta}_k)$ is the conditional density of $Y|\bm{x}$ with parameter vector $\bm{\vartheta}_k$, $p(\bm{v};\bm{\theta}_k^{\star})$ is the marginal distribution of $\bm{v}$ with parameter vector $\bm{\theta}_k^{\star}$, and $p(\bm{w};\bm{\theta}_k^{\star\star})$ is the marginal distribution of $\bm{w}$ with parameter vector $\bm{\theta}_k^{\star\star}$. As before, $\bm{\Phi}$ denotes the set containing all model parameters. The conditional distribution $q(y|\bm{x};\bm{\vartheta}_k)$ is assumed to belong to an exponential family of distributions and as such can be modeled in the GLM framework. The marginal distribution of continuous covariates is assumed to be Gaussian.
Unfortunately, this last assumption is too strong for use in insurance-related applications, specifically in ratemaking or reserving. To relax it, we develop an extension which allows for non-Gaussian covariates, as discussed in the next section.
Generalized Cluster-Weighted Model
----------------------------------
We proceed to extend \[eq2\] by splitting the continuous covariates $\bm{V}$ via $\bm{V}=(\bm U^{'}, \bm T^{'})^{'}$, where $\bm{U}$ contains the non-Gaussian covariates and $\bm{T}$ contains the Gaussian covariates. We do this to retain the possibility of having both Gaussian and non-Gaussian covariates. Then, with a final relabelling of parameter vectors, it becomes $$\begin{aligned}
f(\bm x, y; \bm{\Phi})= \sum_{k=1}^{K} \tau_k q(y|\bm{x};\bm{\vartheta}_k)p(\bm{t};\bm{\theta}_k^{\star})p(\bm{w};\bm{\theta}_k^{\star\star})p(\bm{u};\bm{\theta}_k^{\star\star\star}),
\label{eq3}\end{aligned}$$ which we refer to as the generalized cluster-weighted model (GCWM). Here, $p(\bm{t};\bm{\theta}_k^{\star})$ denotes the marginal density of Gaussian covariates, with parameter vector $\bm{\theta}^{\star}_k$, and $p(\bm{u};\bm{\theta}_k^{\star\star\star})$ denotes the marginal density of the non-Gaussian covariates with parameter vector $ \bm{ \theta}_k^{\star\star\star} $.
Due to its relevance for applications in the actuarial domain, in this work we make the particular choice of the multivariate log-normal distribution for the non-Gaussian covariates; this, however, does not reduce the generality of our developed framework. With the log-normal assumption for $p(\bm{u};\bm{\theta}_k^{\star\star\star})$, we have that $\bm{u}$ is defined on $\mathbb{R}^p_+$ with parameter vector $\bm{\theta}_k^{\star\star\star}= (\bm{\mu}_k^{\star\star\star} ,\bm{\Sigma}_k^{\star\star\star})$ and probability density function $$\label{eqn:logn}
p \left( \bm{u}; \bm{\theta}_k^{\star\star\star} \right) = \frac{1}{\left(\prod_{i=1}^{p}u_{i}\right)|\bm{\Sigma}_k^{\star\star\star}|^{\frac{1}{2}}(2 \pi)^{\frac{p}{2}}} \exp\left[-\frac{1}{2}(\ln\bm{u}-\bm{\mu}_k^{\star\star\star})^{'}(\bm{\Sigma}_k^{\star\star\star})^{-1}(\ln \bm{u}-\bm{\mu}_k^{\star\star\star})\right].$$ The derivation of \[eqn:logn\] can be found in Appendix \[changeVarUni\], and the procedure therein can be followed for other types of non-Gaussian covariates, thus generalizing the CWM model.
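Numerically, \[eqn:logn\] is just the Gaussian density of $\ln\bm{u}$ divided by the Jacobian $\prod_i u_i$ of the log transform, as the following minimal sketch illustrates.

```python
import numpy as np
from scipy.stats import multivariate_normal

def lognormal_pdf(u, mu, Sigma):
    """Multivariate log-normal density via a change of variables."""
    u = np.asarray(u, dtype=float)
    gauss = multivariate_normal(mean=mu, cov=Sigma).pdf(np.log(u))
    return gauss / np.prod(u)  # divide by the Jacobian of ln(u)
```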
Zero-Inflated Generalized Cluster-Weighted Model
------------------------------------------------
Using the zero-inflated Poisson (ZIP) model, as a special and the most widely used case of zero-inflated models, we further extend the generalized cluster-weighted model class to the zero-inflated generalized cluster-weighted model (ZI-GCWM). This choice by no means reduces the generality of our approach.
We begin by noting that the single-component ZIP model assumes that the inflated zeros emanate from both Bernoulli and Poisson random variables, while the non-zeros are assumed to come exclusively from the Poisson random variable [see @Lambert]. However, recent research extends the single-component ZIP models to mixture models for heterogeneous count data with excess zeros [see @Bermudez+Karlis:2012]. In mixtures of ZIPs, zeros are assumed to come from multiple different Bernoulli and Poisson random variables.
Thus, seen in the context of the GCWM and considering the ZIP model, we can split the conditional density of the response variable $Y$, i.e., $q(y|\bm{x};\bm{\vartheta}_k)$, into zero and non-zero densities for each group $k$. The conditional probability mass associated with the event $\{y=0\}$ is characterized by $q(y = 0|\bm{x};\bm{\vartheta}_{k})$. For the event $\{y > 0\}$, the response variable $Y$ is conditionally distributed with density $q(y > 0|\bm{x}; \bm{\vartheta}_{k} )$. All this considered, given the conditional density for the ZIP model, \[eq3\] can be re-written as $$\begin{aligned}
f(\bm x, y; \Phi)= \sum_{k=1}^{K} \tau_k \left[ q(y = 0|\bm{x};\bm{\vartheta}_{k} ) + q(y > 0|\bm{x} ; \bm{\vartheta}_{k} ) \right] p(\bm{t};\bm{\theta}_k^{\star})p(\bm{w};\bm{\theta}_k^{\star\star})p(\bm{u};\bm{\theta}_k^{\star\star\star})\end{aligned}$$ which characterizes the zero-inflated generalized cluster-weighted model (ZI-GCWM).
Specifically, let the Poisson conditional density be denoted by $ q^P(y|\bm{x}; \lambda_k) $, where $y \in \{0,1,\dots\}$. Additionally, let the vector ${\tilde{\bm{x}}}:= [\bm{1},\bm{x}]$ contain the covariates $\bm{x}$ together with a placeholder for the intercept in the GLM, and let $\bm{\beta}_k$ be a row coefficient vector. The link function is chosen to be the log link, such that $$\begin{aligned}
\label{g1link}
\lambda_k = e^{{\tilde{\bm{x}}}\bm{\beta}_k'} && \text{and} & & q^P(y|\bm{ x} ; \lambda_{k} ) = e^{-\lambda_k} \frac{{\lambda_k}^y}{y!}.
\end{aligned}$$ Also, consider a Bernoulli model with conditional density denoted by $ q^{B}(y|\bm{x}; \bm{\bar{\beta}}_k) $, where $\bm{\bar{\beta}}_k$ is a row coefficient vector. Here, the link function is chosen to be the logit link, so that $$\begin{aligned}
\label{g2link}
\psi_k = \frac{e^{{\tilde{\bm{x}}}\bm{\bar{\beta}}_k'}}{1+ e^{{\tilde{\bm{x}}}\bm{\bar{\beta}}_k'}} && \text{and} &&
q^B(y | \bm{x} ; {\psi}_k) = \begin{cases}
\quad \psi_k, & y = 0,\\
1 - \psi_k, & y > 0.
\end{cases}
\end{aligned}$$ By composing the two preceding models, we obtain the ZIP model, in which zero counts come from two random variables. One is the Bernoulli random variable, which generates structural zeros, and the other is the Poisson random variable. The coefficients $\bm{\vartheta}_{k}=\{ \bm{\beta}_{k}, \bm{\bar{\beta}}_k \}$ correspond to the two conditional densities introduced above and can be estimated as in [@Lambert]. The $k$th component of the ZIP conditional density $q(y|\bm{x}; \bm{\vartheta}_{k} )$ is $$\begin{aligned}
q( y = 0| \bm{x} ; \bm{ \vartheta}_{k} ) = \psi_k + (1 - \psi_k)e^{-\lambda_k} & & \text{and} & &
q(y > 0 | \bm{x} ; \bm{ \vartheta}_{k} ) = (1 - \psi_k)e^{-\lambda_k} \frac{\left(\lambda_k \right)^y }{y!}.
\end{aligned}$$ The parameter $\psi_k$ denotes the mean of the Bernoulli distribution of the $k$th component from which extra zeros emanate, and the parameter $ \lambda_k $ characterizes the $k$th Poisson distribution.
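For clarity, the $k$th component density can be evaluated as in the sketch below, where `x_tilde`, `beta_k`, and `beta_bar_k` are illustrative names for $\tilde{\bm{x}}$, $\bm{\beta}_k$, and $\bar{\bm{\beta}}_k$.

```python
import numpy as np
from scipy.special import gammaln

def zip_density(y, x_tilde, beta_k, beta_bar_k):
    lam = np.exp(x_tilde @ beta_k)                       # Poisson mean (log link)
    psi = 1.0 / (1.0 + np.exp(-(x_tilde @ beta_bar_k)))  # zero mass (logit link)
    # Poisson pmf evaluated in log space for numerical stability.
    pois = np.exp(-lam + y * np.log(lam) - gammaln(y + 1.0))
    return np.where(y == 0, psi + (1.0 - psi) * pois, (1.0 - psi) * pois)
```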
In our numerical example related to automobile insurance, it will be shown that this allows for a more nuanced approach to handling the inflation of zeros coming from heterogeneous sources.
Parameter Estimation {#sec:estmeth}
====================
The common approach for estimating parameters in finite mixture models is based on the EM algorithm [see @mcnicholas16a for examples]. The estimation of the developed Bernoulli-Poisson partitioning method is split into two EM algorithms. The first EM algorithm partitions the sample space, while the second EM algorithm optimizes the zero-inflated portion.
Note on Bernoulli-Poisson Sample Space Partitioning
---------------------------------------------------
Unfortunately, in the context of mixture models for heterogeneous count data with excess zeros, difficulties arise during the maximization step of the EM algorithm when the means of the covariates are very close together [see @LimHwa]. Fortunately, the misclassification error can be reduced using parsimonious models for the independent variables, as in [@McNicholas:2010].
However, in this work, we propose a new method to rectify this problem and partition the dataset using Bernoulli and Poisson GCWMs. Furthermore, we construct a ZI-GCWM using the previously generated Bernoulli and Poisson GCWMs. In the first EM algorithm we estimate parameters pertaining to the GCWM under the assumption of a Poisson model and, separately, we carry out the same process under the assumption of a Bernoulli model. Using the obtained parameter estimates from the two separate applications of the EM algorithm, we set the initialization parameters for the second EM algorithm pertaining to parameter estimation of the ZI-GCWM. The work of [@Lambert] specifies that the MLE estimates for the separate Poisson and Bernoulli models provide an excellent initial guess, allowing EM to converge quickly for ZIPs. The Bernoulli-Poisson sample space partitioning method consists of two separate EM algorithms. The first EM algorithm is for generating the GCWM models, while the second EM is for optimizing the ZI-GCWM.
Here, the joint probability density function $f^{ZI}$ becomes $$f^{ZI}(\bm{x},y,\Phi) = \sum_{k=1}^{K} \tau_k q^{ZI}_{k}(y|\bm{x}; \bm{\bar{\beta}}_k,\bm{ \beta}_k) p(\bm{t};\bm{\theta}_k^{\star})p(\bm{w};\bm{\theta}_k^{\star\star})p(\bm{u};\bm{\theta}_k^{\star\star\star}).$$ The new conditional density is now the result of a model in which each component is captured by a conditional probability density function that mixes a particular Bernoulli density with a particular Poisson density $$\begin{aligned}
q^{ZI}_{k}(y|\bm{x}; \bm{\bar{\beta}}_k,\bm{ \beta}_k) & := q^B(y|\bm{x}; \bm{\bar{\beta}}_k) +(1- q^B(y|\bm{x}; \bm{\bar{\beta}}_k) ) q^P(y|\bm{x};\bm{\beta}_k) \nonumber \\
& = q(y = 0|\bm{x};\bm{\vartheta}_{k} ) + q(y > 0|\bm{x} ; \bm{\vartheta}_{k}), \quad k \in \{ 1, ..., K \}.
\label{ziGCWM}\end{aligned}$$
The initialization parameters for the second EM algorithm are provided by the fitted Bernoulli and Poisson GCWMs, giving parameter pairs ($ \psi_k,\lambda_k $). The second EM procedure then optimizes the zero-inflated GCWM. The ZI-GCWM is compared against the standard Poisson GCWM using a likelihood ratio test, which is discussed in Section \[subsec:: compareZero\].
EM Algorithm for Partitioning of Sample Space
---------------------------------------------
The EM algorithm is based on local maximum likelihood estimation. The initial values of the parameter estimates can be generated from a variety of strategies outlined in [@initialPaperGrassiaRef]. The algorithm proceeds by alternation of the E- and M-steps to update parameter estimates. The convergence criterion of the EM algorithm is based on the Aitken acceleration, which estimates the asymptotic maximum of the log-likelihood at each iteration; the algorithm stops when the relative increase in the log-likelihood is no larger than a small pre-specified tolerance value or when the number of iterations reaches a limit. To find an optimal number of components, maximum likelihood estimation is carried out over a range of $K$ groups, and the best model is selected based on the Bayesian information criterion (BIC). In this subsection, we explain the parameter estimation in line with the GCWM methodology proposed by [@Ingrassia+Punzo+Vittadini+Minotti:2015]. The proposed GCWM is based on the assumption that $q(y|\bm{x},\bm{\vartheta}_k)$ belongs to the exponential family of distributions that is strictly related to GLMs. The link function in each group $k$ defines the relationship between the linear predictor and the expected value of the distribution function. Here we are interested in the estimation of the vector $\bm {\beta}_k$; thus, the distribution of $Y|\bm{x}$ is denoted by $q(y|\bm{x}; \bm{\beta}_k, \nu_k)$, where $\nu_k$ signifies an additional parameter to account for when a distribution belongs to a two-parameter exponential family.[^1]
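As a sketch, the Aitken-based stopping rule can be written as follows, with `l` the sequence of observed-data log-likelihood values and the tolerance an illustrative choice.

```python
def aitken_converged(l, eps=1e-6):
    """Stop when the Aitken estimate of the asymptotic log-likelihood
    is within eps of the current value."""
    if len(l) < 3:
        return False
    denom = l[-2] - l[-3]
    if denom == 0.0:
        return True  # log-likelihood has stopped increasing
    a = (l[-1] - l[-2]) / denom                  # Aitken acceleration
    if a >= 1.0:
        return False  # not yet in the linearly convergent regime
    l_inf = l[-2] + (l[-1] - l[-2]) / (1.0 - a)  # asymptotic estimate
    return 0.0 <= l_inf - l[-1] < eps
```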
Recall that the marginal distribution $p(\bm{x}; \bm \theta_k)$ has the following components: $p(\bm{t}; \bm \theta_k^{\star})$, $p(\bm{w}; \bm \theta_k^{\star\star})$, and $p(\bm{u};\bm \theta_{k}^{\star\star\star})$. The first marginal density $p(\bm{t}; \bm \theta_k^{\star}:=( \bm {\mu}_k^{\star}, \bm{\Sigma}_k^{\star}) )$ is modeled as a Gaussian distribution with mean $\bm {\mu}_k^{\star}$ and covariance matrix $\bm{\Sigma}_k^{\star}$. For the marginal density of discrete covariates $p(\bm{w};\bm{\theta}_{k}^{\star\star})$, each finite discrete covariate in $\bm{W}$ is assumed to have a representative binary vector $\bm{w}^r=(w^{r1},\ldots,w^{rc_r})^{'}$, where $w^{rs}=1$ if $w_r = s\in\{1, \ldots, c_r\}$, and $w^{rs}=0$ otherwise.
Given the preceding assumptions about discrete covariates, the marginal density is written as $$\begin{aligned}
p(\bm {w}; \bm {\gamma_k})=\prod_{r=1}^{d}\prod_{s=1}^{c_r}(\gamma_{krs} )^{w^{rs}}
\label{eq31}\end{aligned}$$ for $k=1, \ldots, K$, where $\bm {\gamma}_k=(\bm\gamma_{k1}^{'}, \ldots, \bm\gamma_{kd}^{'})^{'}$, $\bm \gamma_{kr}=(\gamma_{kr1}, \ldots, \gamma_{krc_r})^{'}$, $\gamma_{krs} > 0$, and $\sum_{s=1}^{c_r}\gamma_{krs}=1$, $r=1,\ldots,d$. The density $p(\bm {w}; \bm{\gamma}_k)$ represents the product of $d$ conditionally independent multinomial distributions with parameters $\bm{\gamma}_{kr}$, $r=1,\ldots, d$. Finally, the third marginal density $p(\bm{u};\bm{\theta}_{k}^{\star\star\star})$ is modelled with a multivariate log-normal distribution having location parameter vector $ \bm{\mu}_k^{\star\star\star}$ and scale parameter matrix $\bm{\Sigma}_k^{\star\star\star} $.
Let $(\bm x_1, y_1),\ldots, (\bm x_n, y_n)$ be a sample of $n$ independent observations drawn from the model in \[eq3\]. Consider a latent random variable $Z_{ik}$. The realization $z_{ik}$ of the latent indicator variable takes the value $z_{ik}=1$, indicating that observation $(\bm{x_i}, y_i)$ originated from the $k$th mixture component, and $z_{ik}=0$ otherwise.
Given the sample, the complete-data likelihood function $L_c(\bm\Phi)$ is given by $$\begin{aligned}
L_c(\bm\Phi)=\prod_{i=1}^{n}\prod_{k=1}^{K}\left[{\tau_k}q(y_i|x_i; \bm \beta_k, \nu_{k})p(t_i; \bm\mu_k^{\star}, \bm\Sigma_k^{\star}) p(w_i; \gamma_k)p(u_i; \bm{\mu}_k^{\star\star\star},\bm{\Sigma}_k^{\star\star\star}) \right]^{z_{ik}},
\label{eq27}\end{aligned}$$
Taking the logarithm of \[eq27\], the complete-data log-likelihood is $$\begin{aligned}
\ell_c(\bm\Phi)= \sum_{i=1}^{n}\sum_{k=1}^{K}{z_{ik}}\big[&\log(\tau_{k}) + \log{q}(y_i|x_i; \bm{\beta}_k,\nu_k)\nonumber\\&+ \log p(t_i; \bm{\mu}_k^{\star}, \bm{\Sigma}_k^{\star}) + \log p(w_i; \bm{\gamma}_k) +\log {p}(u_i; \bm{\mu}_k^{\star\star\star},\bm{\Sigma}_k^{\star\star\star}) \big].\label{CompleteLiklihood}\end{aligned}$$
On the $(s+1)$th iteration, the E-step requires calculation of the conditional expectation of $\ell_c(\bm\Phi)$. Because $\ell_c(\bm\Phi)$ is linear with respect to $z_{ik}$, we simplify the calculation to the current expectation of $Z_{ik}$, where $Z_{ik}$ is the random variable corresponding to the realization $z_{ik}$. Given the previous parameters $\bm\Phi^{(s)}$ and the observed data, we calculate the current conditional expectation of $Z_{ik}$ as $$\begin{split}
{\pi_{ik}}^{(s)} &= {E}[Z_{ik} |(\bm{x_i}, y_i); \bm{\Phi}^{(s)}]\\
&= \frac{{\tau_k}^{(s)}q(y_i|x_i; \bm \beta_k^{(s)}, \nu^{(s)}_{k})p(t_i; \bm\mu_k^{{\star}(s)}, \bm\Sigma_k^{{\star}(s)}) p(w_i; \bm \gamma_k^{(s)})p(u_i; \bm{\mu}_k^{\star\star\star (s)},\bm{\Sigma}_k^{\star\star\star (s)})}{f(\bm{x}_i, y_i; \bm{\Phi}^{(s)})
\label{eq29} }.
\end{split}$$ On the M-step of the $(s+1)$th iteration, the conditional expectation of $\ell_c(\bm\Phi)$, denoted by the function $Q(\bm\Phi|\bm\Phi^{(s)})$, is maximized with respect to $\bm\Phi$, where the values of $z_{ik}$ in \[CompleteLiklihood\] are replaced by their current expectations $\pi_{ik}^{(s)}$, yielding $$\begin{split}
Q(&\bm\Phi|\bm\Phi^{(s)}) = \sum_{i=1}^{n}\sum_{k=1}^{K}{\pi_{ik}^{(s)}} \big[\log(\tau_{k}) + \log{q}(y_i|x_i;\bm{\beta}_k,\nu_k)+ \log p(t_i; \bm{\mu}_k^{\star}, \bm{\Sigma}_k^{\star}) + \log p(w_i; \bm{\gamma}_k)\\
&\qquad\qquad\qquad\qquad+\log {p}(u_i; \bm{\mu}_k^{\star\star\star },\bm{\Sigma}_k^{\star\star\star })\big] \\
&=\sum_{i=1}^{n}\sum_{k=1}^{K}{\pi_{ik}^{(s)} \log(\tau_{k}) + \sum_{i=1}^{n}\sum_{k=1}^{K}{\pi_{ik}^{(s)}}\log{q}(y_i|x_i;\bm{\beta}_k},\nu_k) +\sum_{i=1}^{n}\sum_{k=1}^{K} {\pi_{ik}^{(s)}}\log p(t_i; \bm{\mu}_k^{\star}, \bm{\Sigma}_k^{\star}) \\
&\qquad\qquad\qquad\qquad+\sum_{i=1}^{n}\sum_{k=1}^{K}{\pi_{ik}^{(s)}}\log p(w_i; \bm{\gamma}_k) + \sum_{i=1}^{n}\sum_{k=1}^{K}{\pi_{ik}^{(s)}}\log {p}(u_i; \bm{\mu}_k^{\star\star\star},\bm{\Sigma}_k^{\star\star\star}).\label{Qfunction}
\end{split}$$
The M-step requires maximization of the $Q$-function with respect to $\bm \Phi$, which can be done separately for each term on the right hand side in \[Qfunction\]. As a result, the parameter updates on the $(s+1)$th iteration are $$\begin{aligned}
{\hat{\tau}_k}^{(s+1)}&=\frac{1}{n} \sum_{i=1}^n \pi_{ik}^{(s)}, && && {\hat{\bm{\mu}}_k}^{\star (s+1)}=\frac{1}{\sum_{i=1}^n \pi_{ik}^{(s)}} \sum_{i=1}^n \pi_{ik}^{(s)}\bm t_i, && && {\hat{\gamma}^{(s+1)}_{krs}} =\frac{\sum_{i=1}^n \pi_{ik}^{(s)} w^{rs}_i} {\sum_{i=1}^n \pi_{ik}^{(s)}},\end{aligned}$$ $${\widehat{\bm \Sigma}_k}^{\star(s+1)}=\frac{1}{\sum_{i=1}^n \pi_{ik}^{(s)}} \sum_{i=1}^n \pi_{ik}^{(s)}(\bm t_i-\hat{\bm \mu}^{\star(s+1)}_k) (\bm t_i-\hat{\bm \mu}^{\star(s+1)}_k)^{'}.$$ Parameter updates for the log-normal distribution are as follows $$\begin{split}
{\hat{\bm \mu}_k}^{\star\star\star (s+1)}&=\frac{1}{\sum_{i=1}^n \pi_{ik}^{(s)}} \sum_{i=1}^n \pi_{ik}^{(s)}\ln \bm u_i,\\
{\widehat{\bm \Sigma}_k}^{\star\star\star(s+1)}&=\frac{1}{\sum_{i=1}^n \pi_{ik}^{(s)}} \sum_{i=1}^n \pi_{ik}^{(s)}(\ln \bm u_i-\hat{\bm \mu}^{\star\star\star(s+1)}_k) (\ln \bm u_i-\hat{\bm \mu}^{\star\star\star(s+1)}_k)^{'}.
\end{split}$$ For each $k=1,\ldots,K$, the update for $\bm{\beta}_k$ could be computed by maximizing $$\begin{aligned}
\sum_{i=1}^{n}\pi^{(s)}_{ik} \log{q}(y_i|\bm x_i;\bm \beta_k,\nu_k).
\label{eq30}\end{aligned}$$ The numerical optimization for each term is discussed in [@Wedel+DeSabro:1995] and [@Wedel:2002]. For additional implementation information, the reader is referred to the manual of the [flexCWM]{} package [@Ingrassia+Punzo+Vittadini+Minotti:2015] for ${\sf R}$ [@R18].
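To make the closed-form part of the M-step concrete, a minimal sketch follows; this is not the flexCWM implementation. Here `pi` is the $n\times K$ matrix of posteriors $\pi_{ik}^{(s)}$, `T` holds the Gaussian covariates, `U` holds the log-normal covariates, and the GLM coefficient update of \[eq30\] is delegated elsewhere.

```python
import numpy as np

def m_step_marginals(pi, T, U):
    n, K = pi.shape
    tau = pi.mean(axis=0)  # updated mixing proportions
    logU = np.log(U)
    mu_T, Sigma_T, mu_U, Sigma_U = [], [], [], []
    for k in range(K):
        w = pi[:, k] / pi[:, k].sum()  # normalized posterior weights
        m_t, m_u = w @ T, w @ logU     # weighted means of t_i and ln u_i
        mu_T.append(m_t)
        mu_U.append(m_u)
        # Weighted scatter matrices give the covariance updates.
        Sigma_T.append((w[:, None] * (T - m_t)).T @ (T - m_t))
        Sigma_U.append((w[:, None] * (logU - m_u)).T @ (logU - m_u))
    return tau, mu_T, Sigma_T, mu_U, Sigma_U
```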
For modelling severity, each observation $y_i$ must be weighted according to the number of claims the client has incurred [see @frees2015 pages 118–119]. Thus \[eq30\] is re-written as $$\begin{aligned}
\sum_{i=1}^{n}\pi^{(s)}_{ik} \omega_i \log{q}(y_i|\bm x_i;\bm \beta_k,\nu_k),
\label{eqFrees}\end{aligned}$$ which is maximized to give the update for $\bm{\beta}_k$. Here, $\omega_i$ is the number of claims the client incurs over the exposure period. Because $\omega_i$ is constant at every EM iteration $s$, the flexCWM package is easily adapted to account for this methodological adjustment.
EM Algorithm for Zero-Inflated Model
------------------------------------
For a zero-inflated model, the EM algorithm follows a similar procedure as above to optimize the conditional density given in \[ziGCWM\]. Specifically, the log-likelihood function of $\psi_k$ and $\lambda_k$ is $$\begin{split}
l(\psi_k,\lambda_k| \{y_i\}_{i=1}^n,\{\bm{x}_i\}_{i=1}^n) &= \sum_{\{y_i = 0\}} \log \big[ e^{ {\tilde{\bm{x}}}_i \bm{\bar{\beta}}_k^{'} } + \exp\big( - e^{ {\tilde{\bm{x}}}_i \bm{\beta}_k^{'} }\big) \big] \\ & + \sum_{\{y_i > 0\}} \left( y_i {\tilde{\bm{x}}}_i \bm{\beta}_k^{'} - e^{ {\tilde{\bm{x}}}_i \bm{\beta}_k^{'} } \right) - \sum_{i=1}^n \log \left(1 + e^ {{\tilde{\bm{x}}}_i \bm{\bar{\beta}}_k^{'} } \right) - \sum_{\{y_i > 0\}} \log(y_i ! ).
\end{split}$$ Due to the first term, the log-likelihood function is difficult to maximize; however, [@Lambert] gives a meaningful solution. Consider a random variable ${Z^\star}_{ik}$ indicating with ${{z^\star}_{ik}} = 1$ that $y_i$ is generated from the Bernoulli random variable of partition $k$, and with ${z^\star}_{ik} = 0$ that $y_i$ is generated from the Poisson random variable of the same partition. Then, the complete-data log-likelihood is $$\begin{aligned}
\end{split}$$ Due to the first term, the log-likelihood function is difficult to maximize, however [@Lambert] gives a meaningful solution. Consider a random variable ${Z^\star}_{ik}$ indicating with ${{z^\star}_{ik}} = 1$ when $y_i$ is generated from the Bernoulli random variable of partition $k$, and ${z^\star}_{ik} = 0$ when $y_i$ is generated from the Poisson random variable of the same partition. Then, the complete-data log-likelihood is $$\begin{aligned}
l_c(\psi_k,\lambda_k| \{y_i\}_{i=1}^n,\{\bm{x}_i\}_{i=1}^n,{\bm{{z^\star}}_k}) &= \sum_{i=1}^n \left( {z^\star}_{ik}{\tilde{\bm{x}}}_i \bar{\bm{\beta}_k }^{'} - \log\left(1+ e^{ {\tilde{\bm{x}}}_i \bar{\bm{\beta}_k }^{'}}\right) \right) \\ & + \sum_{i=1}^n (1-{z^\star}_{ik}) (y_i {\tilde{\bm{x}}}_i \bm{\beta}_k^{'} - e^{{\tilde{\bm{x}}}_i \bm{\beta}_k^{'}})+ \sum_{i=1}^n (1-{z^\star}_{ik})\log(y_i!)\\
&= l_c(\psi_k;\{y_i\}_{i=1}^n,\{\bm{x}_i\}_{i=1}^n,{{\bm{{z^\star}}_k}}) + l_c(\lambda_k; \{y_i\}_{i=1}^n,\{\bm{x}_i\}_{i=1}^n,{{\bm{{z^\star}}_k}}) \\
&+ \sum_{i=1}^n (1- {z^\star}_{ik})\log(y_i!), $$ where $\bm{{z^\star}}_k := \left[{z^\star}_{1k}, ..., {z^\star}_{nk} \right]$ is a realization of $\bm{{Z^\star}}_k := \left[{Z^\star}_{1k}, ..., {Z^\star}_{nk} \right]$. Note that $l_c(\psi_k,\lambda_k|\{y_i\}_{i=1}^n,\{\bm{x}_i\}_{i=1}^n,\bm{{z^\star}}_k)$ can be separated allowing the maximization of $l_c(\psi_k; \{y_i\}_{i=1}^n,\{\bm{x}_i\}_{i=1}^n,\bm{{z^\star}}_k)$ and $l_c(\lambda_k; \{y_i\}_{i=1}^n,\{\bm{x}_i\}_{i=1}^n,\bm{{z^\star}}_k) $ independently for parameters $\psi_k$ and $\lambda_k$. With the EM algorithm, maximization of parameters are performed iteratively between estimating ${Z^\star}_{ik}$ with its expectation under current estimates for $\lambda_k$ and $\psi_k$ (E-Step), and then maximizing the conditional expectation of the complete-data log-likelihood (M-Step).
In the E-step, using current estimates $\psi_k^{(s)}$ and $ \lambda_k^{(s)} $ we calculate the expected value of ${{{Z^\star}}_{ik}}$ by its posterior mean ${\hat{{z^\star}}_{ik}^{(s)}}$ for each cluster $k$ at iteration $s$ as $$\begin{aligned}
{\hat{{z^\star}}}_{ik}^{(s)} = \begin{cases} \left[ 1 + \exp{\big(-{\tilde{\bm{x}}}_i \bar{\bm{\beta}_k}^{'(s)} - e^ {\bm{{\tilde{\bm{x}}}_i} \bm{\beta}_k^{'(s)}} \big) } \right]^{-1}, & y_{i} = 0 \\
0 \quad , & y_{i}> 0 .
\end{cases}\end{aligned}$$ The M-Step can be split into the maximization of two complete data log-likelihoods and the $\hat{\bm{{z^\star}}}_k$ calculated from the previous iteration $(s)$ as $$\begin{aligned}
l_c(\lambda_k; \{y_i\}_{i=1}^n,\{\bm{x}_i\}_{i=1}^n| \hat{\bm{{z^\star}}}_k^{(s)}) &= \sum_{i=1}^n (1- \hat{{z^\star}}_{ik}^{(s)}) (y_i {\tilde{\bm{x}}}_i \bm{\beta}_k^{'} - e^{{\tilde{\bm{x}}}_i \bm{\beta}_k^{'}})\label{eq7}.\\
l_c(\psi_k;\{y_i\}_{i=1}^n,\{\bm{x}_i\}_{i=1}^n|\hat{\bm{{z^\star}}}_k^{(s)}) &=\sum_{i=1}^n \left( \hat{{z^\star}}_{ik}^{(s)} {\tilde{\bm{x}}}_i \bar{\bm{\beta}_k }^{'} - \log \left(1+ e^{ {\tilde{\bm{x}}}_i \bar{\bm{\beta}_k }^{'}} \right) \right). \label{eq6}
\end{aligned}$$
The maximization of \[eq7\] for the GLM coefficients $\bm{\beta}_k$ (and hence $\lambda_k$) can be carried out using a weighted log-linear Poisson regression with weights $1 - \hat{{z^\star}}_{ik}^{(s)}$ [see @McCullaghNelder1989], yielding $\lambda_k^{(s+1)}$, while the parameter $\psi_k$ in \[eq6\] can be maximized using gradient-based methods, yielding $\psi_k^{(s+1)}$ [see @Lambert].
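A minimal sketch of one such EM cycle for a single component $k$ is given below, assuming the statsmodels library; passing fractional `freq_weights` to the Poisson fit is one practical way to implement the weighted regression, not necessarily the authors' implementation.

```python
import numpy as np
import statsmodels.api as sm

def zip_em_step(y, X, beta, beta_bar):
    lam = np.exp(X @ beta)
    # E-step: posterior probability that an observed zero is structural.
    z = np.where(y == 0, 1.0 / (1.0 + np.exp(-(X @ beta_bar) - lam)), 0.0)
    # M-step for the Poisson part: weighted log-linear Poisson regression.
    beta = sm.GLM(y, X, family=sm.families.Poisson(),
                  freq_weights=1.0 - z).fit().params
    # M-step for the Bernoulli part: logistic regression of z on X.
    beta_bar = sm.GLM(z, X, family=sm.families.Binomial()).fit().params
    return beta, beta_bar
```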
Comparing Zero-Inflated Models {#subsec:: compareZero}
------------------------------
Until recently, the Vuong test for non-nested models [@vuongTest] was frequently used to compare zero-inflated models with their non-zero-inflated counterparts. However, [@misuse] shows that a zero-inflated model and its non-zero-inflated counterpart do not satisfy Vuong’s criteria for non-nested models, and hence such use of the test is incorrect. Furthermore, the Vuong test fails to identify evidence of zero-deflation, leading to inconsistencies in the hypothesis test [see @misuse]. To rectify this, [@newIntuitive] show that it is sufficient to test for zero-modification in the form of a likelihood ratio test, where the hypotheses are $$\begin{aligned}
& & H_0: \psi_k = 0 \quad \text{vs.} \quad H_1: \psi_k > 0, & &\end{aligned}$$ and the test statistic $\varphi$ is given by $$\varphi = -2 \big[l(\tilde{\lambda_k}; \{y_i\}_{i=1}^n,\{\bm{x}_i\}_{i=1}^n) - l(\lambda_k, \psi_k; \{y_i\}_{i=1}^n,\{\bm{x}_i\}_{i=1}^n )\big].
\label{LRTest}$$ The test statistic is shown to follow a 50:50 mixture of chi-squared distributions ($0.5\chi^2_0+0.5\chi^2_{m}$), where $m$ is the number of degrees of freedom of the model. For our purposes, $m$ is the number of covariates selected for the Bernoulli model in (\[g2link\]) (see Likelihood Ratio Tests in [@newIntuitive]). Note that a $\chi^2_0$ distribution has 0 degrees of freedom; hence, the $95$th percentile of a $0.5\chi^2_0+0.5\chi^2_{m}$ distribution equals the $90$th percentile of a $\chi^2_{m}$ distribution, and the critical value at a significance level of $\alpha=0.05$ equals that of a one-tailed $\chi^2$ test, with $m$ degrees of freedom, at a significance level of $\alpha=0.10$ [see @newIntuitive].
The function $l(\tilde{\lambda_k}; \{y_i\}_{i=1}^n,\{\bm{x}_i\}_{i=1}^n)$ is the log-likelihood of a single-component GCWM Poisson model parametrized by $\tilde{\lambda_k}$. Recall that $\psi_k$ is the zero-inflation parameter of the $k$th partition. In our approach, we use \[LRTest\] to test for evidence of zero-inflation in the $k$th group, and then use the BIC for model comparisons on the $k$th group. This approach quickly determines whether there is zero-inflation. When evidence of zero-inflation is established, we search for the best linear model using the BIC.
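The test itself is straightforward to carry out; a sketch using scipy follows, where `loglik_pois` and `loglik_zip` denote the maximized log-likelihoods of the two competing models for a given group.

```python
from scipy.stats import chi2

def zero_inflation_test(loglik_pois, loglik_zip, m, alpha=0.05):
    phi = -2.0 * (loglik_pois - loglik_zip)  # likelihood ratio test statistic
    if phi <= 0.0:
        return phi, 1.0, False               # no evidence of zero-inflation
    p_value = 0.5 * chi2.sf(phi, df=m)       # the chi2(0) mass sits at zero
    return phi, p_value, p_value < alpha
```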
Numerical Application {#sec:numapp}
=====================
Dataset
-------
We illustrate the use of the ZI-GCWM on the French motor severity and frequency datasets, which are available as part of the [R]{} package [CASdatasets]{} [@Dutang+Charpentier:2016]. Previously, they were used by [@Charpentier:2014], who demonstrated various GLM modeling approaches for fitting frequency and severity. The dataset consists of 413,169 motor third-party liability policies with the associated risk characteristics. The loss amounts by policy ID are also provided.
Discussion and Results
----------------------
### Modelling Severity
In this section, we show improved results for GCWM over CWM in modeling French motor losses. Furthermore, we investigate the results of the GCWM model for the valuation of heterogeneous risk.
In the numerical analysis we consider the following covariates: population density ($Density$), driver age ($DriverAge$), car age ($CarAge$), car power level ($Power$), and geographical region in France ($Region$). $CarAge$ is modelled as a categorical variable with five categories: $[0,1)$, $[1,5)$, $[5,10)$, $[10,15)$, and $15+$. Additionally, $DriverAge$ is modelled as a categorical variable with five categories: $[18,23)$, $[23,27)$, $[27,43)$, $[43,75)$, and $75+$. $Power$ is grouped into three categories as in [@Charpentier:2014]: DEF, GH, and other. The fitted model is defined by the following expression $$\begin{aligned}
g(\mathbb{E}\left[Y_{Severity}|\bm{x}, \bm{\beta}_k \right]) =
& \beta_{k0} + \beta_{kDensity}x_{Density}+ \beta_{kCar Age} x_{Car Age}+ \beta_{kDriver Age} x_{Driver Age} + \nonumber \\ & \beta_{kRegion} x_{Region} + \beta_{kPower} x_{Power} \label{regressionModel} =: \bm{{\tilde{\bm{x}}}} \bm{\beta}_k^{'}.\end{aligned}$$ The canonnical log-link function $g$ is used for the GCWM in .
Beginning with the continuous covariate $Density$, we inspect the shape of its univariate distribution to see whether it follows a Gaussian distribution. The left-hand side of Figure \[fig:vet1\] clearly reveals that $Density$ is strongly right-skewed, with several observations reporting high values of density. This indicates a need for a transformation. The log-normal assumption clearly improves the fit (see the right side of Figure \[fig:vet1\]).
![Density variable: Left figure shows the fit when Gaussian distribution is imposed (CMW approach) to highly skewed data. Right figure shows the fit when log-normal assumption is applied (GCWM approach).[]{data-label="fig:vet1"}](Untransformed_Density.pdf "fig:") ![Density variable: Left figure shows the fit when Gaussian distribution is imposed (CMW approach) to highly skewed data. Right figure shows the fit when log-normal assumption is applied (GCWM approach).[]{data-label="fig:vet1"}](transformed_Density.pdf "fig:")
Given the log-normal assumption, the result of the transformation is reflected in better AIC and BIC values for the GCWM over the CWM. Table \[comparingCWM\_models\] shows a considerable difference in BIC and AIC between the CWM and the GCWM: the best CWM (five components) has a BIC of $281,680$, far above the BIC of $88,564$ attained by the best GCWM (four components).
Model   $K$        AIC               BIC
------- ---------- ----------------- -----------------
CWM $1$ $351,965$ $352,149$
$2$ $314,039$ $314,414$
$3$ $300,503$ $301,068$
$4$ $285,979$ $286,735$
$\bm{5}$ $\bm{280,732}$ $\bm{281,680}$
GCWM 1 $110,627$ $110,810$
$2$ $88,828$ $89,203$
$3$ $88,338$ $ 88,903$
$\bm{4}$ $ \bm{87,808} $ $ \bm{88,564} $
5 $88,009$ $88,956$
: Comparison of AIC and BIC for CWM versus GCWM for $K$ number of clusters estimated. []{data-label="comparingCWM_models"}
We now investigate the results of the GCWM in relation to the valuation of risk. For practical uses, finding clusters allows us to create different classifications of risk for various groups of drivers. After fitting the model, we inspect the size of each cluster. The GCWM approach has chosen four components as the best model to represent the data. The size of each cluster is displayed in Table \[table:sizeSev\]. Attention is drawn to the largest group of drivers, grouped into (blue) Cluster 3. This cluster accounts for $ 47 \% $ of all drivers and is fairly concentrated in the center of Figure \[fig:vet1a\]. From these results we can create an insurance model with distinct characteristics for each cluster.
Cluster 1 Cluster 2 Cluster 3 Cluster 4
----------- ----------- ----------- -----------
$3,064$ $1,873$ $ 7,259$ $3,194$
red green blue orange
: Size and colours of clusters for the GCWM a model.[]{data-label="table:sizeSev"}
![Showing clusters by color scheme: Cluster 1 - red, Cluster 2 - green, Cluster 3 - blue, Cluster 4 - orange for $Severity$ vs $Density$ on a log scale. []{data-label="fig:vet1a"}](sevClusterPlot.pdf)
The (blue) Cluster 3 drivers have both low variability and a low level of claims, and thus can be insured at a lower rate than all other drivers. Similarly, the (green) Cluster 2 drivers also have low variability of claims but a higher level of claims, so they should have a rate higher than the (blue) Cluster 3 drivers. The (red) Cluster 1 drivers have the next highest level of claims and variability among the clusters in Figure \[fig:vet1a\]. Finally, the (orange) Cluster 4 drivers have the highest level of claims and the highest variability in claims of all the clusters. From a risk management perspective, the drivers belonging to this cluster should be insured at the highest rate.
Table \[table:volSev\] shows a breakdown of the types of drivers, ordered by increasing volatility. Beginning with volatility level V1, these drivers tend to have claims between €$1,033$ and €$1,325$, with a mean of €$1,171$ and a standard deviation of €$52$. Drivers in V2, the second lowest volatility level, tend to have claims anywhere between €$395$ and €$3,221$, with a mean of €$1,350$ and a standard deviation of €$559$. Proceeding to volatility level V3, we observe that the volatility in claims is greater than in the preceding levels: drivers in this level have claims anywhere between €$2$ and €$281,403$, with a mean of €$2,956$ and a standard deviation of €$11,878$. Finally, V4 denotes the highest volatility level; claims in this level reach the highest recorded value of €$2,036,833$, with a mean of €$3,771$ and a standard deviation of €$40,334$.
------------------ --------- ------- ----------- ------------
Volatility Level
(Cluster) Minimum Mean Maximum $\sigma$
V1 (3) 1,033 1,171 1,325 **52**
V2 (2) 395 1,350 3,221 **559**
V3 (1) 2 2,956 281,403 **11,878**
V4 (4) 2 3,771 2,036,833 **40,334**
------------------ --------- ------- ----------- ------------
: Summarized volatility information of each cluster for Claims.[]{data-label="table:volSev"}
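A per-cluster summary like Table \[table:volSev\] can be assembled with a simple group-by once the cluster labels have been fitted. The sketch below assumes a hypothetical DataFrame `df` with `claim` (in EUR) and fitted `cluster` columns; the column names are ours, not the authors'.

```python
# Sketch: per-cluster claim summaries (min, mean, max, sd), ordered by volatility.
import pandas as pd

def volatility_table(df: pd.DataFrame) -> pd.DataFrame:
    summary = (df.groupby("cluster")["claim"]
                 .agg(Minimum="min", Mean="mean", Maximum="max", sigma="std"))
    return summary.sort_values("sigma")  # ascending volatility, V1 to V4
```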
The coefficients in the discovered clusters are relevant for premium calculations in auto insurance. Table \[severity\_coef\_table\] (Appendix \[app:tables\]) shows the coefficients of the fitted model. In each cluster, statistical significance varies but overall the majority of coefficients are statistically significant.
In summary, the drivers have been clustered into four categories with the distinct characteristics outlined in Table \[table:volSev\]. We have seen how, using the results from GCWM, one can create a rate model based on clustering, with a different level of risk represented in each cluster. GCWM found a group containing a clear majority of drivers whose claim volatility is extremely low regardless of $Density$ or $DriverAge$. The results show that GCWM may discover unique clusters that are otherwise hidden within the data.
### Modeling Claims Frequency
In this section, we model frequency of the French motor claims. We consider the covariates $Density$, $DriverAge$, $CarAge$, and $Exposure$. Here, $Exposure$ is used as an offset to account for the rate at which a claim occurs [see @frees2015]. The choice of covariates stems from the previously modelled single component ZIP [@Charpentier:2014]. The ZI-GCWM is fitted with the following expression: $$\begin{aligned}
g_P(\mathbb{E}\left[Y_{ClaimNb}|\bm{x}, \bm{\beta}_k \right]/x_{Exposure}) & =
\beta_{k0} + \beta_{kDensity}x_{Density}+ \beta_{kDriverAge}x_{DriverAge} \nonumber \\ & \quad\quad\quad + \beta_{kCarAge}x_{CarAge} =: \bm{{\tilde{\bm{x}}}} \bm{\beta}_k^{'}, \label{poissonReg}\\
g_{ZI}(\mathbb{E}\left[Y_{ClaimNb}|\bm{x}, \bar{\bm{\beta}}_k \right]/x_{Exposure})& = \bar{\beta}_{k0} + \bar{\beta}_{kDensity} x_{Density} =: \bm{{\tilde{\bm{x}}}} \bar{\bm{\beta}}_k^{'}, \label{zeroReg} \end{aligned}$$ where $Density$, $DriverAge$, and $CarAge$ are explanatory variables in the Poisson model, while only $Density$ is an explanatory variable in the Bernoulli model. As in Section 4.2.1, we also impose a log-normal assumption on the $Density$ covariate. The link functions $g_P$ and $g_{ZI}$ are chosen to be the log link and the logit link, respectively, as in (\[g1link\]) and (\[g2link\]). After fitting the ZI-GCWM, we find two zero-inflated components and one Poisson component to be the best model for the data. The size of each cluster is displayed in Table \[table:sizeFreq\]; we note a fairly even spread of claims across Clusters 1 and 2, with Cluster 3 consisting of only $0.53 \%$ of the claims.
Cluster 1 Cluster 2 Cluster 3
----------- ----------- ----------- --
$191,601$ $219,393$ $2,175$
green red blue
: Size of clusters and their colours for the ZI-GCWM model.[]{data-label="table:sizeFreq"}
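For readers who wish to experiment with a single-component analogue of (\[poissonReg\])--(\[zeroReg\]), the sketch below fits a plain zero-inflated Poisson regression with an $Exposure$ offset using `statsmodels`. It is only one ingredient of the ZI-GCWM -- the full model also fits the covariate distributions and mixes $K$ such components via EM -- and the DataFrame `df` with CASdatasets-style column names is an assumption on our part.

```python
# Sketch: one-component zero-inflated Poisson with an Exposure offset.
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedPoisson

# df: assumed pandas DataFrame with ClaimNb, Density, DriverAge, CarAge, Exposure.
y = df["ClaimNb"]
# Density enters on the log scale, mirroring the log-normal assumption above.
X = sm.add_constant(np.log(df[["Density"]]).join(df[["DriverAge", "CarAge"]]))
X_infl = sm.add_constant(np.log(df[["Density"]]))   # Bernoulli (logit) part

zip_fit = ZeroInflatedPoisson(
    y, X,
    exog_infl=X_infl,
    exposure=df["Exposure"],   # statsmodels adds log(Exposure) to the linear predictor
    inflation="logit",
).fit(method="bfgs", maxiter=500)

print(zip_fit.summary())
```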
Similarly to the severity model, the ZI-GCWM finds clusters with unique characteristics. This is evident in the claims versus density plot (Figure \[frequencyGraph\]): the ZI-GCWM has assigned the drivers to three distinct groups based on the population density of their cities. Table \[summarycovariates\] shows that Cluster 2 drivers live in the most densely populated areas, with a mean log-density of 7.37, followed by Clusters 3 and 1 with means of 5.45 and 4.05, respectively.
![Showing clusters by color scheme — Cluster 1 (green), Cluster 2 (red), Cluster 3 (blue) — for $Claim Nb$ versus $Density$ on a log-scale.[]{data-label="frequencyGraph"}](freqPlot.pdf)
Cluster Color Minimum Mean Maximum $\sigma$
--------- ------- --------- ------ --------- ----------
1 green 0.69 4.05 5.46 0.87
2 red 5.48 7.37 10.20 1.24
3 blue 5.12 5.45 5.47 0.03
: Summary of each cluster under the log-normal assumption for the $Density$ covariate (population density, reported on a log scale).[]{data-label="summarycovariates"}
Table \[frequencySummary\] (see Appendix \[app:tables\]) shows a summary of the coefficients for the zero-inflated model. The significance codes are the same as in Table \[severity\_coef\_table\]. In each cluster, the majority of the coefficients are significant, in particular those pertaining to the Bernoulli zero-count models.
Cluster BIC (CWM) BIC (ZI-GCWM) $\varphi $
--------- -------------- --------------- -------------------
1 $58,311$ $\bm{58,180}$ $ 4.61 < 154.70$
2 $76,672$ $\bm{76,301}$ $ 4.61 < 395.59 $
3 $\bm{1,488}$ $1,503$ -
All $136,471$ $135,984$ $10.64 < 270.03$
: Comparison of BIC and Chi-square ($\chi^2$) test for CWM vs ZI-GCWM model on each cluster. []{data-label="compareResults_models"}
The results comparing ZI-GCWM to CWM are shown in Table \[compareResults\_models\]. Using the likelihood ratio test and comparing the models across each component, Clusters 1 and 2 show evidence of zero inflation. In addition, the ZI-GCWMs have lower BIC values than the corresponding CWMs. Cluster 3 shows no evidence of zero inflation, which is in agreement with the BIC comparison; thus, only a Poisson model is chosen for Cluster 3. Overall, we see evidence of zero inflation across the entire dataset when comparing ZI-GCWM with CWM across all clusters. Thus, for rate-making purposes, the ZI-GCWM can account for zero inflation.
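A rough way to reproduce a within-cluster check of this kind is sketched below. This is illustrative, not the authors' exact test: it contrasts the Poisson and zero-inflated Poisson log-likelihoods with a likelihood-ratio-style statistic against a chi-square reference, alongside BIC. Because the null hypothesis lies on the boundary of the parameter space, the chi-square reference is only approximate. `y`, `X`, and `X_infl` are as in the previous sketch.

```python
# Sketch: compare Poisson vs ZIP on one cluster via an LR-style statistic and BIC.
import statsmodels.api as sm
from scipy import stats
from statsmodels.discrete.count_model import ZeroInflatedPoisson

pois = sm.Poisson(y, X).fit(disp=0)
zip_ = ZeroInflatedPoisson(y, X, exog_infl=X_infl).fit(method="bfgs", disp=0)

lr = 2 * (zip_.llf - pois.llf)
df_diff = X_infl.shape[1]                 # number of zero-inflation parameters
crit = stats.chi2.ppf(0.99, df_diff)

print(f"LR = {lr:.2f} vs chi-square critical value {crit:.2f}")
print(f"BIC Poisson = {pois.bic:,.0f}, BIC ZIP = {zip_.bic:,.0f}")
```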
Simulation Study {#sec:sim}
================
Two simulation studies are conducted: the first to assess the validity of the log-normal assumption, and the second to assess the effectiveness of the Bernoulli-Poisson partitioning method. The first subsection demonstrates the need for a non-Gaussian assumption on the covariates; the second reports the classification accuracy and other relevant analyses for the Bernoulli-Poisson method.
GCWM Simulation Study
---------------------
In this section, we show how the proposed methodology performs under different simulation settings. The simulation study is based on the regression coefficients estimated from the CASdatasets data used in the previous section. The aim of the simulation study is to test the accuracy and the ability of both GCWM and CWM to return estimates of the true parameters when one of the three covariates is log-normal and the other two are Gaussian. This design specifically tests both models in the event that one of the covariates is non-Gaussian. The motivation behind this choice lies in the fact that many covariates used in insurance are likely to come from non-Gaussian distributions. Thus, this simulation tests the relevance of CWM, which treats all covariates as Gaussian.
We define Model 1 as the baseline model, where the coefficients are selected to be reminiscent of those estimated from **CASdatasets** and reported in the upper portion of Table \[severity\_coef\_table\] (see Appendix B). For simplicity, only three covariates are chosen, denoted $X_1$, $X_2$, and $X_3$. The intercept of each component is increased to ensure that the simulated losses are positive. For ease of interpretation, these coefficients are then rounded and treated as the true parameters for the simulation study. A three-component mixture is generated from these true parameters; however, the third covariate ($X_3$) is generated from a log-normal distribution. In addition, the covariate $X_2$ of the second component is made insignificant and has no effect on the response.
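The generating mechanism just described can be sketched as follows, with the coefficient rows taken from the Model 1 entries of Table \[mseTable\] and with covariate locations and scales that are our illustrative assumptions rather than the paper's exact settings.

```python
# Sketch: simulate a 3-component mixture with one log-normal covariate (X3).
import numpy as np

rng = np.random.default_rng(7)
n_per, K = 1000, 3
betas = np.array([[1028.0,  0.03,    3.5,  -380.0],   # [Int, b1, b2, b3] per component
                  [1600.0, -0.01,    1.5,  -250.0],
                  [40000., -6.00, -305.0,  1100.0]])
betas[1, 2] = 0.0                      # component 2: X2 made insignificant

X, y, z = [], [], []
for k in range(K):
    x1 = rng.normal(loc=k, scale=1.0, size=n_per)            # Gaussian covariate
    x2 = rng.normal(loc=2 * k, scale=1.0, size=n_per)        # Gaussian covariate
    x3 = rng.lognormal(mean=0.5 * k, sigma=0.8, size=n_per)  # log-normal covariate
    Xk = np.column_stack([np.ones(n_per), x1, x2, x3])
    y.append(Xk @ betas[k] + rng.normal(scale=50.0, size=n_per))
    X.append(Xk)
    z.append(np.full(n_per, k))        # true component labels
X, y, z = np.vstack(X), np.concatenate(y), np.concatenate(z)
```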
Results aggregated from $1,000$ runs are summarized in Tables \[gcwmAccuracy\] and \[mseTable\]. Given a Gaussian assumption for the residual error, we record in Table \[gcwmAccuracy\] the percentage of runs in which the true parameter falls within a two-tailed $95\%$ confidence interval. For example, we report $90.10\%$ accuracy for predictor $X_2$ in component 2, meaning that $90.1\%$ of the time the true parameter is covered by the $95\%$ confidence interval. Further, we create Models 2, 3, 4, and 5 by altering the parameters of Model 1 by $+30\%$, $-30\%$, $+50\%$, and $-50\%$, respectively, while keeping the second covariate of the second component as an insignificant predictor. This tests the accuracy of the GCWM and its sensitivity to changes in coefficient size. Based on the results in Table \[gcwmAccuracy\], we can see that GCWM performs well in all simulation settings.
Mod.   $K$    Int.      $X_1$     $X_2$     $X_3$     Int.    $X_1$    $X_2$    $X_3$
------ ---- --------- --------- --------- --------- ------- ------- -------- ------- --
1 1 93.00% 90.10% 93.00% 93.10% 0.00% 0.00% 0.00% 0.00%
2 90.10% 90.10% 90.10% 0.00% 0.00% 0.00% 0.00%
3 99.20% 99.10% 99.20% 99.20% 0.00% 0.00% 0.00% 0.00%
2 1 89.80% 89.20% 89.80% 89.80% 0.00% 0.00% 4.60% 0.00%
2 89.20% 89.20% 89.20% 0.00% 0.00% 0.00%
3 99.20% 99.20% 99.20% 99.20% 0.00% 0.20% 1.70% 0.00%
3 1 100.00% 100.00% 100.00% 100.00% 0.00% 0.00% 0.00% 0.00%
2 100.00% 100.00% 100.00% 0.00% 0.00% 0.00% 0.00%
3 99.20% 99.20% 99.20% 99.20% 0.00% 0.00% 0.00% 0.00%
4 1 88.60% 86.80% 88.60% 87.00% 0.00% 0.00% 0.00% 0.00%
2 86.90% 86.90% 86.90% 0.00% 0.00% 0.00%
3 99.20% 99.20% 99.20% 99.20% 0.00% 0.00% 0.00% 0.00%
5 1 85.90% 84.90% 85.60% 85.90% 0.00% 0.00% 0.00% 0.00%
2 85.00% 84.90% 84.90% 0.00% 0.00% 0.00%
3 99.20% 99.20% 99.20% 99.20% 0.00% 0.20% 10.90% 0.00%
: GCWM vs CWM accuracy (left block of columns: GCWM; right block: CWM): covariate $X_3$ is treated as log-normally distributed, while the remaining covariates are Gaussian.[]{data-label="gcwmAccuracy"}
For CWM, in line with expectations, we note that hardly any of the simulation runs recover the true parameters, as most of the reported accuracies are zero. This means that the performance of the CWM approach is poor in the presence of even one non-Gaussian covariate, which in this case is log-normal.
Table \[mseTable\] provides a summary of the mean squared errors (MSEs). The MSE is computed as $\mathrm{MSE}(\hat{\beta}) = \frac{1}{n}\sum_{i=1}^{n} (\hat{\beta}_i - \beta)^2$, where $n$ is the number of simulation runs, $\beta$ is the true parameter of interest, and $\hat{\beta}_i$ is its estimate in run $i$. For GCWM, the MSEs of all parameters of each model are aggregated in Table \[mseTable\]. Overall, comparing Tables \[mseTable\] and \[my-label\], it is clear that GCWM outperforms CWM in both accuracy and MSE.
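The two summary measures just defined can be computed per coefficient as in the sketch below, where `est` and `se` are hypothetical arrays of point estimates and standard errors collected over the simulation runs.

```python
# Sketch: coverage "accuracy" and MSE of one coefficient across simulation runs.
import numpy as np
from scipy import stats

def coverage_and_mse(est, se, beta_true, level=0.95):
    zcrit = stats.norm.ppf(0.5 + level / 2)          # two-tailed normal quantile
    covered = (est - zcrit * se <= beta_true) & (beta_true <= est + zcrit * se)
    coverage = covered.mean()                        # Table [gcwmAccuracy]-style share
    mse = np.mean((est - beta_true) ** 2)            # Table [mseTable]-style MSE
    return coverage, mse
```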
Mod.   $K$   $\beta_0$   MSE($\beta_0$)   $\beta_1$   MSE($\beta_1$)   $\beta_2$   MSE($\beta_2$)   $\beta_3$   MSE($\beta_3$)
------ ----- ----------- ---------------- ----------- ---------------- ----------- ---------------- ----------- ----------------
1 1 1028 (11.353) 0.03 (0.00) 3.5 (0.00) -380 (0.09)
2 1600 (0.000) -0.01 (0.00) 1.5 (0.00) -250 (0.00)
3 40000 (0.035) -6.00 (0.00) -305 (0.00) 1100 (0.47)
2 1 1350 (0.167) 0.04 (0.00) 4.5 (0.00) -500 (0.03)
2 2080 (0.001) 0.04 (0.00) 2.0 (0.00) -325 (0.00)
3 52000 (0.012) -8.00 (0.00) 450 (0.00) 14300 (0.01)
3 1 720 (0.001) 0.02 (0.00) 2.5 (0.00) -266 (0.00)
2     1100 (0.008)    0.00 (0.00)       1.1 (0.00)        -175 (0.00)
         3     28000 (0.002)   -4.20 (0.00)      245 (0.00)        7700 (0.00)
4 1 1650 (13.056) 0.05 (0.00) 5.3 (0.00) -570 (0.00)
2 2400 (0.000) -0.01 (0.00) 2.3 (0.00) -375 (0.00)
3 60000 (0.051) -9.00 (0.00) -457 (0.00) 16500 (0.00)
5 1 500 (1.115) 0.02 (0.00) 2.0 (0.00) -190 (0.05)
2 800 (0.003) 0.00 (0.00) 0.8 (0.00) -120 (0.00)
3 20000 (0.000) -3.00 (0.00) -150 (0.00) 5500 (0.00)
: GCWM results: the summary of MSE for all parameters used in the five models. The covariate $X_3$ is treated as log-normally distributed, while the remaining covariates are Gaussian. These results correspond to the same simulated runs as those in Table \[gcwmAccuracy\].[]{data-label="mseTable"}
Mod.   $K$   $\beta_0$   MSE($\beta_0$)   $\beta_1$   MSE($\beta_1$)   $\beta_2$   MSE($\beta_2$)   $\beta_3$   MSE($\beta_3$)
------ ----- ----------- ---------------- ----------- ---------------- ----------- ---------------- ----------- ----------------
1 1 1028 ($\cdot$) 0.03 ($\cdot$) 3.5 ($\cdot$) -380 ($\cdot$)
2 1600 ($\cdot$) -0.01 ($\cdot$) 1.5 ($\cdot$) -250 ($\cdot$)
3 40000 ($\cdot$) -6.00 ($\cdot$) -305 ($\cdot$) 1100 ($\cdot$)
2 1 1350 ($\cdot$) 0.04 ($\cdot$) 4.5 ($\cdot$) -500 ($\cdot$)
2 2080 ($\cdot$) 0.04 ($\cdot$) 2.0 ($\cdot$) -325 ($\cdot$)
3 52000 ($\cdot$) -8.00 (0.006) 450 (44.1) 14300 ($\cdot$)
3 1 720 ($\cdot$) 0.02 ($\cdot$) 2.5 ($\cdot$) -266 ($\cdot$)
2     1100 (65.814)   0.00 ($\cdot$)    1.1 ($\cdot$)     -175 ($\cdot$)
         3     28000 ($\cdot$)  -4.20 ($\cdot$)   245 ($\cdot$)     7700 ($\cdot$)
4 1 1650 ($\cdot$) 0.05 ($\cdot$) 5.3 ($\cdot$) -570 ($\cdot$)
2 2400 ($\cdot$) -0.01 ($\cdot$) 2.3 ($\cdot$) -375 ($\cdot$)
3 60000 ($\cdot$) -9.00 ($\cdot$) -457 ($\cdot$) 16500 ($\cdot$)
5 1 500 ($\cdot$) 0.02 ($\cdot$) 2.0 ($\cdot$) -190 ($\cdot$)
2 800 ($\cdot$) 0.00 ($\cdot$) 0.8 ($\cdot$) -120 ($\cdot$)
3 20000 ($\cdot$) -3.00 (0.003) -150 (4.7) 5500 ($\cdot$)
: CWM results: the summary of MSE for all parameters used in the five models. All three covariates are treated as Gaussian. These results correspond to the same simulated runs as those in Table \[gcwmAccuracy\].[]{data-label="my-label"}
From Table \[my-label\] we observe that the MSEs for most of the models and their corresponding coefficients could not be computed at all due to convergence failures; such entries are shown as $(\cdot)$. This is not surprising, because Table \[gcwmAccuracy\] shows that the accuracy of CWM is low when non-Gaussian predictors are modelled as Gaussian.
In summary, our simulation results show good performance of the GCWM approach in modeling non-Gaussian covariates; more specifically, the results show high accuracy when a covariate is log-normal. In contrast, CWM fails to estimate the parameters accurately when the Gaussian assumption is violated.
Bernoulli-Poisson Partitioning Simulation Study
-----------------------------------------------
In this section we show how the Bernoulli-Poisson (BP) partitioning method behaves under different conditions. The components are generated using coefficients similar to those estimated from the **CASdatasets** package. Again, for ease of interpretation, the coefficients are rounded and treated as the true parameters from which the simulated data are generated. The mean and standard deviation of the covariates within each component are also taken into account when generating the data. The first simulation examines the classification performance of the ZI-GCWM model. We generate three components, each with sample size $N=1000$, for a total of $3,000$ simulated points. The generated model is similar in means and standard deviations to those presented in Table \[summarycovariates\]. Consider three simulated covariates with $$\begin{aligned}
g_P(\mathbb{E}\left[Y_{SimClaimNb}|\bm{x}, \bm{\beta}_k \right]) & =
\beta_{k0} + \beta_{kSimDensity} x_{SimDensity} + \beta_{kSimDriverAge} x_{SimDriverAge} \nonumber \\ & +
\beta_{kSimCarAge} x_{SimCarAge} =: \bm{{\tilde{\bm{x}}}} \bm{\beta}_k^{'}, \label{poissonRegSim} \\
g_{ZI}(\mathbb{E}\left[Y_{SimClaimNb}|\bm{x} , \bar{\bm{\beta}}_k \right]) & =
\bar{\beta}_{k0} + \bar{\beta}_{kSimDensity} x_{SimDensity} + \bar{\beta}_{kSimDriverAge} x_{SimDriverAge} \nonumber \\ & +
\bar{\beta}_{kSimCarAge} x_{SimCarAge} =: \bm{{\tilde{\bm{x}}}} \bar{\bm{\beta}}_k^{'}. \label{zeroRegSim}\end{aligned}$$ as their respective linear models. The covariates $x_{SimDensity}$, $x_{SimDriverAge}$, and $x_{SimCarAge}$ are considered for both the Poisson and Bernoulli models. Furthermore, the link functions $g_P$ and $g_{ZI}$ are chosen to be the log link and the logit link, respectively. Here, the ZI-GCWM classifies the simulated drivers into three components. The misclassification rate is calculated as the proportion of true labels placed in other components by the ZI-GCWM model. The results of the simulation are aggregated in Table \[misclassTable\]. We observe a total misclassification rate of $1.8 \%$, with the majority of misclassifications occurring between components two and three.
  True Labels       1       2       3       Misclassification Rate
  ---------------- ------- ------- ------- ------------------------
  1                 992     3       5       0.80 %
  2                 0       990     10      1.00 %
  3                 15      20      965     3.50 %
  Total                                     1.80 %
  Average purity                            98.23 %
  ARI                                       0.9479

  : Misclassification rate and label comparison for the generated data: rows are true component labels, columns 1--3 are fitted labels; the last three rows report the overall misclassification rate, the average purity, and the adjusted Rand index.[]{data-label="misclassTable"}
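Generating zero-inflated counts for one component amounts to multiplying a logit-driven structural-zero indicator by a log-link Poisson count, as in the sketch below; `X` is a design matrix with intercept, and `beta` and `beta_bar` are illustrative coefficient vectors rather than the rounded true parameters.

```python
# Sketch: draw zero-inflated Poisson responses for one mixture component.
import numpy as np

def simulate_zip(X, beta, beta_bar, rng):
    lam = np.exp(X @ beta)                           # Poisson mean, log link
    p_zero = 1.0 / (1.0 + np.exp(-(X @ beta_bar)))   # structural-zero prob, logit link
    structural_zero = rng.random(X.shape[0]) < p_zero
    return np.where(structural_zero, 0, rng.poisson(lam))
```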
The simulation is expanded further to show how BP partitioning behaves over $1,000$ runs under two different conditions. In the first condition, labelled "normal", the means and standard deviations of the covariates are taken directly from the sample statistics of **CASdatasets**. In the second condition, labelled "close", the covariate means are adjusted so that they are $20\%$ closer to each other; the goal is to show that the BP method remains useful even when the component means are close. This is a common problem in classification: when the means of two different components are close, the misclassification rate increases [@LimHwa]. The expanded simulation of $1,000$ runs tests the accuracy of three different partitioning methods for initializing a zero-inflated model, with results aggregated in Table \[table:exper2\]. The Poisson partitioning method assumes that the non-zero counts provide the better partition of the dataset; the Bernoulli partitioning method assumes that the excess zeros determine the best partition; and the BP partitioning method weighs both equally, so that both are taken into account when partitioning the dataset. The mean and standard deviation of each measurement are provided in Table \[table:exper2\].
Type Condition Poisson ($\sigma $) Bernoulli ($ \sigma $) BP ($ \sigma $)
------------------------ ----------- --------- ------------- ----------- -------------- -------- --------------
Misclassification Rate normal 1.70% (6.00) 1.60% (6.00) 1.10% (0.02)
close 5.00% (7.00) 6.00% (2.00) 7.00% (4.00)
Average Purity normal 98.87% (2.00) 98.91% (2.25) 99.18% (0.81)
close 95.38% (4.00) 94.55% (1.00) 96.95% (0.48)
Adjusted Rand Index normal 0.9662 (0.07) 0.9677 (0.07) 0.9729 (0.0217)
close 0.8706 (0.08) 0.8366 (0.04) 0.8538 (0.0453)
: Aggregated results for the $1000$ run simulation, mean and standard deviations for each statistic are compared across three methods.[]{data-label="table:exper2"}
Under the normal condition, the BP method shows the best error performance and is found to be less sensitive than the other methods, with an error rate of $1.10 \%$ and a standard deviation of $0.02 \%$. When the close condition is imposed, partitioning using only the Bernoulli method performs best in terms of accuracy. The adjusted Rand index [ARI; @hubert85] is the Rand index [@rand71] corrected for chance agreement. The Rand index, for two partitions, is simply the number of pair agreements divided by the total number of pairs; the ARI takes the value 1 for perfect class agreement and has expected value 0 under random classification. The ARI measurements across all methods are promising; in particular, the BP partitioning method under the normal condition has a very good ARI with a small standard deviation. The average purity (AP) is calculated as the average of the diagonal classification entries. The AP of the BP partitioning method is the best among all methods, so the BP method is the most relevant for classification.
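The three evaluation measures used above can be computed from the true and fitted labels as sketched below; fitted cluster labels are first matched to true components (mixture labels are arbitrary) before reading off the diagonal. This is our reading of the definitions in the text, not the authors' code.

```python
# Sketch: misclassification rate, average purity, and ARI for fitted labels.
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import adjusted_rand_score, confusion_matrix

def evaluate(true_labels, fitted_labels):
    cm = confusion_matrix(true_labels, fitted_labels)
    row, col = linear_sum_assignment(-cm)              # best match of fitted to true
    diag = cm[row, col]
    misclass = 1.0 - diag.sum() / cm.sum()
    avg_purity = np.mean(diag / cm[row].sum(axis=1))   # mean of diagonal proportions
    ari = adjusted_rand_score(true_labels, fitted_labels)
    return misclass, avg_purity, ari
```

Applied to the confusion matrix in Table \[misclassTable\], these formulas reproduce the reported $1.8\%$ misclassification rate, $98.23\%$ average purity, and $0.9479$ ARI.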
Conclusion
==========
In this work we extend the class of generalized linear mixture CWM models to ZI-GCWM models by accomplishing two main goals. First, we proposed a methodology that allows continuous covariates to follow non-Gaussian distributions. This is an important extension, at least in the insurance modeling context, as imposing a Gaussian distribution on skewed data may result in a suboptimal model fit. Second, we proposed a new CWM methodology that uses the BP partitioning method and allows for the implementation of zero-inflated CWMs.
Our proposed GCWM models allow applications in the predictive modeling of insurance claims by overcoming a few limitations of the current CWM models. The GCWM allows for finding clusters within claims frequency, which is important information for risk classification and the modeling of claims frequency. Further, some insurance rating variables used in the predictive modeling of claim severity, for example driver's age or car age, may not strictly follow Gaussian assumptions when treated as continuous covariates. An adequate extension to non-Gaussian covariates relaxes the current assumptions and improves the model fit. Given our data, we convincingly demonstrated the need for a log-normal assumption on the $Density$ covariate, and by making it we considerably improved the model fit.
The results of our extensive simulation study showed the excellent performance of the proposed models in modeling non-Gaussian covariates. We found that the CWM model fails to estimate the parameters accurately when the Gaussian assumption is violated. The GCWM shows significant improvement in model fit over the CWM model based on the AIC and BIC criteria. We also tested the BP partitioning of the zero-inflated GCWM under different conditions and found that our proposed partitioning method has a very low misclassification rate, high average purity, and high average ARI. Our approach is highly relevant to actuarial pricing and risk management, where current practice is based on the implementation of various GLM models.
Multivariate Log-Normal Distribution
=====================================
Consider a random variable $U$ having a univariate log-normal distribution with parameters $\mu \in \mathbb{R}$ and $\sigma \in \mathbb{R}_+ $. For $u \in \mathbb{R}_+$, the probability density function of $U$ is defined as [^2] $$\mathcal{LN}(u; \mu, \sigma) = \frac{1}{u\sigma\sqrt{2\pi}}\exp\left[-\frac{(\ln u - \mu)^2}{2\sigma^2} \right].$$ If $X \sim \mathcal{N}(x; \mu, \sigma)$, then $U := \exp(X) \sim \mathcal{LN}(u; \mu, \sigma)$. To see this, let $p_U(u)$ and $p_X(x)$ be the probability density functions of $U$ and $X$, respectively. By the change of variables theorem [see @murphy2012machine Section 2.6.2.1], the density $p_U(u)$ is derived as $$p_U(u) = p_X(\ln u )\frac{\partial}{\partial u} \ln u = p_X(\ln u ) \frac{1}{u} = \frac{1}{u\sigma\sqrt{2\pi}}\exp\left[-\frac{(\ln u - \mu)^2}{2\sigma^2} \right].$$ We extend this to the multivariate log-normal case, where the random variable $\bm{U}$ is parameterized by $ \bm{\mu} \in \mathbb{R}^p$ and $\bm{\Sigma} \in \mathbb{R}_{+}^{p \times p}$.
Let the random variable $\bm{X}$ have a multivariate normal distribution, i.e. $\bm{X} \sim \mathcal{MVN}(\bm{x}; \bm{\mu},\bm{\Sigma})$; then $\bm{U} := \exp(\bm{X} ) \sim f^U(\bm{u}; \bm{\mu } , \bm{\Sigma} )$. Here $\bm{u} \in \mathbb{R}_{+}^p $ and the probability density function $f^U$ is $$f^U(\bm{u}; \bm{\mu } , \bm{\Sigma} )= \frac{1}{(\prod_{i=1}^{p}u_{i})| \bm{\Sigma} |^{1/2}(2 \pi)^{\frac{p}{2}}} \exp\left[-\frac{1}{2}(\ln \bm{u} -\bm{\mu})^{'} \bm{\Sigma}^{-1}(\ln \bm{u} -\bm{\mu})\right].$$
Let $f^U(\bm{u}; \bm{\mu},\bm{\Sigma})$ and $f^X(\bm{x}; \bm{\mu},\bm{\Sigma})$ be the probability density functions of $\bm{U}$ and $\bm{X}$, respectively. By the multivariate change of variables theorem [see @murphy2012machine Section 2.6.2.1], we derive the log-normal distribution, where $ | \det J_{\ln} (\bm{u}) | $ is the absolute value of the determinant of the Jacobian of the multivariate transformation $\ln(\bm{U}) = \bm{X} $. Hence, $$\begin{aligned}
| \det J_{\ln} (\bm{u}) | & = \prod_{i=1}^p u_i^{-1}, \; \text{and} \; \\
f^U(\bm{u}; \bm{\mu},\bm{\Sigma}) & = f^X(\ln \bm{u}; \bm{\mu},\bm{\Sigma}) | \det J_{\ln} (\bm{u}) | \\
& = f^X(\ln \bm{u}; \bm{\mu},\bm{\Sigma})\prod_{i=1}^p u_i^{-1} \\
& = \frac{1}{(\prod_{i=1}^{p}u_{i})| \bm{\Sigma} |^{1/2}(2 \pi)^{\frac{p}{2}}} \exp\left[-\frac{1}{2}(\ln \bm{u} -\bm{\mu})^{'} \bm{\Sigma}^{-1}(\ln \bm{u} -\bm{\mu})\right].
\end{aligned}$$
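As a quick numerical sanity check of the univariate identity above (not part of the paper), one can verify that the log-normal density equals the normal density of $\ln u$ scaled by $1/u$:

```python
# Sketch: numerically verify p_U(u) = p_X(ln u) / u for the log-normal.
import numpy as np
from scipy import stats

mu, sigma = 1.2, 0.7
u = np.linspace(0.05, 10.0, 200)
lhs = stats.lognorm.pdf(u, s=sigma, scale=np.exp(mu))  # LN(u; mu, sigma)
rhs = stats.norm.pdf(np.log(u), mu, sigma) / u         # p_X(ln u) * |d ln(u)/du|
assert np.allclose(lhs, rhs)
```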
Tables {#app:tables}
======
------------------ ---------- -------- -------- ---------- --------- -------- ---------- ------- -------- ---------- ---------- --------
V1 (blue) V2 (green) V3 (red) V4 (orange)
Coefficient [^3] Estimate Error P Estimate Error P Estimate Error P Estimate Error P
Int. 7.077 0.003 \*\*\* 6.952 0.043 \*\*\* 7.306 0.138 \*\*\* 7.212 0.136 \*\*\*
Density 0.000 0.000 -0.009 0.003 \*\* -0.006 0.011 -0.052 0.014 \*\*\*
C2 0.008 0.002 \*\*\* 0.069 0.023 \*\* -0.278 0.064 \*\*\* 0.329 0.074 \*\*\*
C3 0.002 0.002 0.222 0.023 \*\*\* -0.460 0.064 \*\*\* 0.161 0.074 \*
C4 0.004 0.002 . 0.075 0.024 \*\* -0.693 0.066 \*\*\* 0.103 0.074
C5 0.009 0.002 \*\*\* 0.102 0.027 \*\*\* -0.608 0.076 \*\*\* 0.234 0.081 \*\*
D2 -0.007 0.002 \*\*\* 0.031 0.026 -0.210 0.080 \*\* -0.690 0.068 \*\*\*
D3 -0.008 0.002 \*\*\* -0.021 0.026 -0.250 0.081 \*\* -0.834 0.069 \*\*\*
D4 -0.012 0.002 \*\*\* -0.014 0.031 -0.122 0.091 -0.753 0.084 \*\*\*
D5 -0.006 0.002 \*\* 0.078 0.032 \* 0.108 0.096 -0.182 0.083 \*
R23 0.002 0.004 -0.059 0.036 . 0.115 0.110 -0.007 0.122
R24 -0.013 0.001 \*\*\* 0.091 0.016 \*\*\* -0.279 0.042 \*\*\* -0.003 0.075
R25 -0.019 0.002 \*\*\* -0.362 0.030 \*\*\* -0.027 0.086 0.257 0.099 \*\*
R31 -0.002 0.002 0.025 0.020 0.111 0.053 \* 0.035 0.106
R52 -0.016 0.002 \*\*\* -0.002 0.019 -0.260 0.051 \*\*\* 0.015 0.085
R53 -0.013 0.002 \*\*\* 0.119 0.019 \*\*\* -0.106 0.053 \* 0.092 0.082
R54 -0.014 0.002 \*\*\* 0.099 0.026 \*\*\* -0.295 0.072 \*\*\* 0.117 0.090
R72 -0.008 0.002 \*\*\* 0.123 0.021 \*\*\* 0.003 0.056 0.239 0.088 \*\*
R74 -0.020 0.003 \*\*\* -0.125 0.050 \* -0.141 0.170 0.131 0.118
P-FGH 0.001 0.001 . 0.006 0.011 0.108 0.030 \*\*\* 0.003 0.030
P-Other 0.005 0.001 \*\*\* 0.013 0.014 0.116 0.038 \*\* 0.057 0.041
------------------ ---------- -------- -------- ---------- --------- -------- ---------- ------- -------- ---------- ---------- --------
------------------ ----------- --------- -------- ----------- --------- -------- ----------- --------- --------
Cluster 1 (green) Cluster 2 (red) Cluster 3 (blue)
Coefficient [^4] Estimate Error P Estimate Error P Estimate Error P
Intercept -4.71437 0.16072 \*\*\* -4.21884 0.23567 \*\*\* 80.34056 4.82835 \*\*\*
Density 0.37081 0.03414 \*\*\* 0.20128 0.02916 \*\*\* -15.36444 0.87135 \*\*\*
D2 -0.09767 0.05987 -0.15287 0.06008 \* -0.28067 0.3133
D3 -0.28937 0.06156 \*\*\* -0.30113 0.06088 \*\*\* -0.33352 0.31412
D4 -0.02955 0.08005 0.03986 0.0711 -1.02815 0.39399 \*\*
D5 0.6164 0.07787 0.47363 0.07558 \*\*\* 0.11692 0.46536
C2 0.67169 0.67169 \*\*\* 0.59803 0.59803 \*\*\* -1.00529 0.79119
C3 0.69215 0.69215 \*\*\* 0.85653 0.85653 \*\*\* 2.05854 0.71373 \*\*
C4 0.63158 0.63158 \*\*\* 0.76843 0.76843 \*\*\* 2.20552 0.71347 \*\*
C5 0.36033 0.36033 \*\*\* 0.52438 0.52438 \*\*\* 2.06543 0.72307 \*\*
Intercept -3.9712 0.5473 \*\*\* -1.66782 0.37735 \*\*\*
Density 0.9032 0.1041 \*\*\* 0.28258 0.04674 \*\*\*
------------------ ----------- --------- -------- ----------- --------- -------- ----------- --------- --------
Acknowledgements {#acknowledgements .unnumbered}
================
The authors express sincere gratitude to Dr. Paul Wilson at the School of Mathematics and Computer Science, University of Wolverhampton for his kind help and advice. In addition, the authors wholeheartedly thank the author and maintainer of the French Motor Policy dataset, Dr. Christophe Dutang at the Université Paris-Dauphine, for his generous support. Finally, the authors thank Dr. Ben Bolker at the Mathematics and Statistics Department, McMaster University for his kind advice.
Bermúdez, L., Karlis, D., 2012. A finite mixture of bivariate Poisson regression models with an application to insurance ratemaking. Computational Statistics and Data Analysis 56 (12), 3988–3999.
Biernacki, C., Celeux, G., Govaert, G., 2000. Assessing a mixture model for clustering with the integrated completed likelihood. IEEE Transactions on Pattern Analysis and Machine Intelligence 22 (7), 719–725.
Brown, G. O., Buckley, W. S., 2015. Experience rating with Poisson mixtures. Annals of Actuarial Science 9 (02), 304–321.
Charpentier, A., 2014. Computational Actuarial Science with R. CRC Press.
Dempster, A. P., Laird, N. M., Rubin, D. B., 1977. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society B 39, 1–38.
Dutang, C., Charpentier, A., 2016. CASdatasets. R package version 1.0-6.
Frees, E. W., Derrig, R. A., Meyers, G. (Eds.), 2014. Predictive Modeling Applications in Actuarial Science. Vol. 1 of International Series on Actuarial Science.
Gershenfeld, N., 1997. Nonlinear inference and cluster-weighted modeling. Annals of the New York Academy of Sciences 808 (1), 18–24.
Gershenfeld, N., Schoner, B., Metois, E., 1999. Cluster-weighted modelling for time-series analysis. Nature 397, 329–332.
Gershenfeld, N. A., 1999. The Nature of Mathematical Modeling. Cambridge University Press.
Hubert, L., Arabie, P., 1985. Comparing partitions. Journal of Classification 2 (1), 193–218.
Ingrassia, S., Minotti, S. C., Punzo, A., 2014. Model-based clustering via linear cluster-weighted models. Computational Statistics & Data Analysis 71, 159–182.
Ingrassia, S., Minotti, S. C., Vittadini, G., 2014. Local statistical modeling via a cluster-weighted approach with elliptical distributions. Journal of Classification 29 (3), 363–401.
Ingrassia, S., Punzo, A., Vittadini, G., Minotti, S. C., 2015. Erratum to: The generalized linear mixed cluster-weighted model. Journal of Classification 32 (2), 327–355.
Johnson, N. L., Kotz, S., Balakrishnan, N., 1994. Continuous Univariate Distributions, Vol. 1. John Wiley & Sons, New York.
Lambert, D., 1992. Zero-inflated Poisson regression, with an application to defects in manufacturing. Technometrics 34, 1–14.
Lee, S. C. K., Lin, X. S., 2010. Modeling and evaluating insurance losses via mixtures of Erlang distributions. North American Actuarial Journal 14 (1), 107–130.
Lim, H., Li, W., Yu, P., 2014. Zero-inflated Poisson regression mixture model. Computational Statistics and Data Analysis 71, 151–158.
McCullagh, P., Nelder, J. A., 1989. Generalized Linear Models. Vol. 37. CRC Press.
McNicholas, P. D., 2016. Mixture Model-Based Classification. Chapman & Hall/CRC Press, Boca Raton.
McNicholas, P. D., Murphy, T. B., 2008. Parsimonious Gaussian mixture models. Statistics and Computing 18 (3), 285–296.
Miljkovic, T., Fernández, D., 2018. On two mixture-based clustering approaches used in modeling an insurance portfolio. Risks 6 (2), 57.
Miljkovic, T., Grün, B., 2016. Modeling loss data using mixtures of distributions. Insurance: Mathematics and Economics 70, 387–396.
Murphy, K. P., 2012. Machine Learning: A Probabilistic Perspective. Adaptive Computation and Machine Learning. MIT Press.
Punzo, A., Ingrassia, S., 2014. Parsimonious generalized linear Gaussian cluster-weighted models. Springer International Publishing.
Punzo, A., McNicholas, P. D., 2017. Robust clustering in regression analysis via the contaminated Gaussian cluster-weighted model. Journal of Classification 34 (2), 249–293.
R Core Team, 2018. R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria.
Rand, W. M., 1971. Objective criteria for the evaluation of clustering methods. Journal of the American Statistical Association 66 (336), 846–850.
Subedi, S., Punzo, A., Ingrassia, S., McNicholas, P. D., 2013. Clustering and classification via cluster-weighted factor analyzers. Advances in Data Analysis and Classification 7 (1), 5–40.
Subedi, S., Punzo, A., Ingrassia, S., McNicholas, P. D., 2015. Cluster-weighted t-factor analyzers for robust model-based clustering and dimension reduction. Statistical Methods and Applications 24 (4), 623–649.
Verbelen, R., Gong, L., Antonio, K., Badescu, A., Lin, S., 2015. Fitting mixtures of Erlangs to censored and truncated data using the EM algorithm. ASTIN Bulletin 45 (3), 729–758.
Vuong, Q. H., 1989. Likelihood ratio tests for model selection and non-nested hypotheses. Econometrica 57 (2), 307–333.
Wedel, M., 2002. Concomitant variables in finite mixture modeling. Statistica Neerlandica 56 (3), 362–375.
Wedel, M., DeSarbo, W. S., 1995. A mixture likelihood approach for generalized linear models. Journal of Classification 12 (3), 21–55.
Wilson, P., 2015. The misuse of the Vuong test for non-nested models to test for zero-inflation. Economics Letters 127.
Wilson, P., Einbeck, J., 2018. A new and intuitive test for zero modification. Statistical Modelling.
[^1]: In the work of [@Ingrassia+Punzo+Vittadini+Minotti:2015] this parameter is referred to as $\lambda_k$.
[^2]: For a full definition, see [@johnson1995continuous]
[^3]: The significance codes are defined as $ P < 0.001 : $ (\*\*\*), $0.001 < P < 0.01:$ (\*\*), $ 0.01 < P < 0.05:$ (\*),\
$0.05 < P < 0.10 : $ (.) pertaining to the $P$ value of the specific coefficient. C\# refers to the Car Age category, D\# refers to the Driver Age category, R\# refers to the region of France, and P refers to the power category.
[^4]: The significance codes are defined as $ P < 0.001 : $ (\*\*\*), $0.001 < P < 0.01:$ (\*\*), $ 0.01 < P < 0.05:$ (\*),\
$0.05 < P < 0.10 : $ (.) pertaining to the $P$ value of the specific coefficient. C\# refers to the Car Age category, D\# refers to the Driver Age category.
|
From the current New Scientist:
In an experiment dubbed "Cola Wars", [Nick Epley] conducted a taste test with a twist: he told participants which cola was Coke and which was Pepsi before tasting began. After tasting, all they had to do was estimate what percentage of their friends would be able to distinguish between the two in a blind taste test. Studies show that people’s ability to do this is no better than chance – so an answer around 50 per cent would be right. What Epley found was intriguing. When he motivated volunteers to give a considered response – by offering them a cash payment – their answers tended to be close to 50 per cent. Subjects who were not paid, however, seemed to answer with an egocentric bias: since they knew which cola was which, they assumed that a high proportion of their friends would guess correctly (Journal of Personality and Social Psychology, vol 87, p 327). For Epley, the finding supports his idea that putting yourself inside the head of another person and considering their perspective requires a cognitive effort that simple egocentric judgements do not.
|
P.J. DeMasseo, a survivor of the Oct. 1 Las Vegas shooting, cashed a check for $1,000 Friday from the Vegas Strong Fund.
He is one of 12 people who received checks this week from the nonprofit totaling $14,800.
He also could be one of the last.
The checks — ranging in amounts from $200 to $3,900 — mark the first distributions to Oct. 1 victims by a nonprofit established in response to the shooting. But it was unclear Friday whether additional victims would receive money from the fund.
The Vegas Strong Fund is a 501(c)(3) nonprofit created by the Nevada resort industry after the shooting. The Las Vegas Victims’ Fund, a separate 501(c)(3) nonprofit that has raised more than $22 million for victims of the Strip mass shooting — and gained far more attention than the Vegas Strong Fund — isn’t expected to distribute money until March. Many victims have expressed concern with that timeline because they have immediate financial needs, and others won’t qualify for assistance from the Victims’ Fund at all.
Enter the Vegas Strong Fund. The $14,800 came from more than $12 million in commitments and cash collected so far. Most recipients will not qualify for assistance from the Las Vegas Victims’ Fund, which will benefit those who suffered physical injuries and the families of those killed.
The Vegas Strong Fund cut the 12 checks at the request of two survivors of the shooting, Jennifer Holub and Christine Caria, and a victim advocate, Anita Busch.
The three spoke with members of the Vegas Strong Fund Oct. 20 over the phone, informing them of the vast financial needs of many shooting survivors.
“We explained that people are living out of their cars, afraid of being evicted and some lost their jobs because of post-traumatic stress disorder,” Holub said. They asked the Vegas Strong Fund to help by distributing funds to people who need food and housing among other immediate needs.
Teaming up
The day after Holub, Caria and Busch spoke with members of the Vegas Strong Fund, a representative of the fund said in an email obtained by the Review-Journal, “The members of the Vegas Strong Fund are very willing to work with you and your group to help those survivors and families that need immediate assistance.”
The three women immediately began reaching out to acquaintances who needed help and asked how much money they needed for food, shelter or other needs. They passed that information along to their contacts at the Vegas Strong Fund.
Then word of the potential help began spreading to people the women didn’t know directly.
So the women began vetting people. They looked at documentation submitted to the Nevada Victims of Crime program. They looked at Facebook pages, had people send photos or videos and checked their names with people they knew were at the event.
“It took us about two to three days. Hours and hours, and hours,” Busch said.
‘Huge help’
DeMasseo was living in Montana in September and was planning to move to Las Vegas at the end of this year using the money he made from working the Route 91 Harvest festival as a bartender.
Instead, he used the money to start a new life in Oregon, where he has been living out of his car.
PTSD has made finding work in Oregon a challenge, he said.
When he first received the check he said “it didn’t seem real.”
“I didn’t really have any hope for anything but being swept under a rug,” he said.
He is trying to get by with temporary jobs while he looks for full-time work and receives mental health counseling.
“A lot of us need help. The majority of us (who worked the night of the shooting) have been out of work, because we’re just emotionally not able to work, or not physically able to work,” DeMasseo said. Tasks that used to be mundane take a lot more effort now, he said, and loud noises and random visuals trigger anxiety.
“I’m going to put the money ($1,000) aside to help me get a place to rent,” he said.
Between Sunday and Friday, the three women submitted claims for an additional 17 individuals totalling $40,222.43 and notified the victims.
“People were so, so grateful for this help,” Holub said. “They thanked us a million times.”
But help might not come to those 17 people after all.
No more checks
A member of the Vegas Strong Fund emailed the women Friday morning.
“The Vegas Strong Fund is analyzing a number of requests for continued support and evaluating how to have the most positive impact for all victims,” the email said. “The board of the Vegas Strong Fund is not, however, comfortable with continuing to issue checks to survivors because we are not in a position to either vet these claims or accommodate all the financial requests you and your team are submitting.”
Caesars Entertainment Corp. executive Jan Jones Blackhurst, the chairwoman of the fund, could not be reached for comment Friday.
Caitlin Brunner, a Henderson single mother who was a bartender at the festival, estimated that she is about $2,200 behind in her rent and utilities. She said she has also struggled to work as much as she’d like because of PTSD caused by the shooting.
She said she was so relieved when she found out money was available for her and that it was on its way.
“I was finally going to be able to catch up,” she said. “I really need that money. Now I’m stressed again. I had all my bills paid in my head. And now I don’t have anything.”
Holub said now the group is “back to square one” in getting financial assistance to people who don’t qualify for the Las Vegas Victims’ Fund.
“We hope to find another source that is sitting on funds gathered in our names to help,” she said. “We are broken, battered and not being taken care of like we and the public would expect.”
Holub said she feels “devastated” at having to go back and tell those 17 people that help isn’t available.
“So many people have had so many broken promises for help,” Holub said. “We are worried about mental health and suicides in regards to these individuals on the verge of losing their homes.”
|
Q:
How to traverse a multidimensional array to write it to a delimited file (PHP)
I have an array $results that contains a result set scraped from a web page using regular expressions.
I'm a bit confused about how to traverse the array to write all of its data to a .csv file.
Here is the output of the array after printing it as follows.
print_r ($results); //printing the array
Output
Array
(
[fullName] => Ogdensburg Walmart Store #2092
[street1] => 3000 Ford Street Ext
[city] => Ogdensburg
[state] => NY
[zipcode] => 13669
[phone] => (315) 394-8990
[latitude] => 44.7083
[longitude] => -75.4564
)
Array
(
[fullName] => Evans Mills Walmart Supercenter Store #5497
[street1] => 25737 Us Route #11
[city] => Evans Mills
[state] => NY
[zipcode] => 13637
[phone] => (315) 629-2124
[latitude] => 44.0369
[longitude] => -75.8455
)
Array
(
[fullName] => Watertown Walmart Supercenter Store #1871
[street1] => 20823 State Route 3
[city] => Watertown
[state] => NY
[zipcode] => 13601
[phone] => (315) 786-0145
[latitude] => 43.9773
[longitude] => -75.9579
)
I've only worked with simple arrays; can anybody give me a hint on how to traverse the $results array to write it to a .csv or .xls file?
A:
You can use this:
$fp = fopen("file.csv", "w");
foreach ((array)$results as $val) {
    // write one semicolon-delimited line per record
    fwrite($fp, implode(";", $val) . "\r\n");
}
fclose($fp);
A few things to note:
the "\r\n" is needed to properly terminate each line;
while the most common separator is the comma (,), I found that some
apps like Microsoft Excel 2010 don't handle it well and instead put the whole line in a single cell; the semicolon worked in this case;
I always had problems with fputcsv, so I'm going with this instead.
EDITED:
$fp = fopen("file.csv", "w");
$contents = file_get_contents('http://www.walmart.com/storeLocator/ca_storefinder_results.do?serviceName=&rx_title=com.wm.www.apps.storelocator.page.serviceLink.title.default&rx_dest=%2Findex.gsp&sfsearch_single_line_address=K6T');
// find every "stores[N] = {" block in the page's JavaScript
preg_match_all('/stores\[(\d+)\] \= \{/s', $contents, $matches);
foreach ($matches[1] as $index) {
    // grab the body of this store's object literal
    preg_match('/stores\[' . $index . '\] \= \{(.*?)\}\;/s', $contents, $matches);
    // split the body into 'key' : value pairs
    preg_match_all('/\'([a-zA-Z0-9]+)\' \: ([^\,]*?)\,/s', $matches[1], $matches);
    $c = count($matches[1]);
    $results = array();
    for ($i = 0; $i < $c; $i++) {
        $results[$matches[1][$i]] = trim($matches[2][$i], "\'");
    }
    // write one semicolon-delimited line per store
    fwrite($fp, implode(";", array_values($results)) . "\r\n");
}
fclose($fp);
EDITED 2:
to write only specific columns in the .csv file you need to avoid adding them in your
results array() OR unset-ting them after, like here:
...
for ($i=0; $i<$c; $i++) {
$results [$matches [1] [$i]] = trim($matches [2] [$i], "\'");
}
unset( $results["weekEndSaturday"] );
unset( $results["recentlyOpen"] );
.. go on, remove the non-desired values ..
fwrite($fp,implode(";",array_values($results))."\r\n");
|
1911 Michoacán earthquake
The 1911 Michoacán earthquake occurred on June 7 at 04:26 local time (11:02 UTC). The epicenter was located near the coast of Michoacán, Mexico. The earthquake had a magnitude of 7.6 on the moment magnitude scale. Forty-five people were reported dead. In Mexico City, 119 houses were destroyed. Cracks were reported in Palacio Nacional, Escuela Normal para Maestros, Escuela Preparatoria, Inspección de Policía, and Instituto Geológico. Ciudad Guzmán, the seat of Zapotlán el Grande, Jalisco, suffered great damage.
The earthquake occurred hours before the revolutionary Francisco I. Madero entered Mexico City on the same day, and it was thus also known as the "temblor maderista".
On June 7, 2011, a ceremony was held in Ciudad Guzmán commemorating the centenary of this earthquake.
This earthquake was a megathrust earthquake along the Middle America Trench (MAT), a major subduction zone.
See also
List of earthquakes in 1911
List of earthquakes in Mexico
References
External links
Category:Earthquakes in Mexico
Michoacan
Category:1911 in Mexico
|
Introduction {#cesec10}
============
Intravenous alteplase has been approved for treatment of acute ischaemic stroke in Europe for patients who are younger than 80 years and can be treated within 4·5 h. Such use is associated with improved functional outcome at 3 months after stroke,[@bib1] but whether treatment improves survival and sustains functional recovery in the long term is unclear. Of the 12 completed randomised controlled trials, ten reported outcomes at 90 days or less,[@bib1] two reported outcomes at 6 months,[@bib2; @bib3] and one reported outcomes at 12 months,[@bib3] but none have reported effects at more than 1 year after stroke. Furthermore, the effect of thrombolysis on health-related quality of life---an important measure of the clinical and economic value of treatment---has not been reported to our knowledge.
The third International Stroke Trial (IST-3)[@bib2] recruited 3035 patients---half of whom were older than 80 years---to assess the effect of thrombolytic treatment with intravenous alteplase within 6 h of onset of acute ischaemic stroke. The results showed that although thrombolytic treatment was not associated with a significant difference in the proportion of patients who were alive and independent at 6 months, treatment did seem to improve functional outcome. A prespecified secondary ordinal analysis of Oxford handicap scale scores showed that treatment was associated with a favourable shift in the distribution of Oxford handicap scale scores (odds ratio \[OR\] 1·27, 95% CI 1·10--1·47; p=0·001).[@bib2] A secondary aim of IST-3 was to assess whether thrombolytic treatment improved outcomes more than 1 year after stroke, and sought to assess survival, functional outcome, health-related quality of life, overall functioning, and living circumstances at 18 months.[@bib4; @bib5]
Methods {#cesec20}
=======
Study design and participants {#cesec30}
-----------------------------
The methods of the trial have been described in full previously.[@bib2; @bib4; @bib5; @bib6] IST-3 was a randomised, open-label trial of intravenous alteplase (0·9 mg/kg) plus standard care compared with standard care alone (control). Eligibility criteria were: symptoms and signs of clinically definite acute stroke, known time of stroke onset, treatment could be started within 6 h of onset, and exclusion by CT or MRI of intracranial haemorrhage and structural brain lesions that could mimic stroke (eg, cerebral tumour). A patient could only be included in the trial if both they (or a proxy) and their clinician believed that the treatment was promising but unproven---ie, there was neither a clear indication for treatment, nor a clear contraindication against treatment. The effect that using this uncertainty principle approach as a key eligibility criterion had on the type of patients included and excluded from the trial has been described in detail elsewhere.[@bib2; @bib6] Generally, patients who could be treated within licence were rarely enrolled, unless there was a specific reason that led the clinician or patient to be uncertain about whether to treat or not; as a result, 95% of enrolled patients did not meet the terms of the prevailing EU approval for treatment. All participants or proxies gave informed consent. The protocol was approved by the Multi-Centre Research Ethics Committee (Scotland) and by local ethics committees.
For the analysis presented here, we planned to assess outcome in patients who had follow-up at 6 months and 18 months. In seven countries (Austria, Belgium, Canada, Italy, Mexico, Poland, and UK) follow-up had to cease on Jan 30, 2012; therefore, we excluded any patients from these countries who were recruited after June 30, 2010, because they would not reach the 18-month follow-up point. In three countries (Australia, Norway, and Sweden), all recruited patients were to be followed up to 18 months, as part of a sub-study. Two countries (Portugal and Switzerland) followed up patients to 6 months only and were not included in this analysis.
Randomisation {#cesec40}
-------------
After enrolment, patients were randomly assigned by a secure central telephone or web-based computer system, which recorded baseline data and generated the treatment allocation only after the baseline data had been checked for range and consistency. The system used a minimisation algorithm to balance for key prognostic factors: geographic region, age, National Institutes of Health stroke scale score, sex, time since onset of stroke, stroke clinical syndrome, and presence or absence of visible ischaemic change on the pre-enrolment brain scan.[@bib4; @bib5] To avoid predictable alternation of treatment allocation, and thus potential loss of allocation concealment, patients were allocated with a probability of 0·80 to the treatment group that would minimise the difference between the groups for the key prognostic factors. Recruitment in the small double-blind phase (n=276) began in May, 2000, continued without interruption into the open-treatment phase (n=2759), and was completed in July, 2011.
Procedures {#cesec50}
----------
In the ten countries participating in follow-up at 6 months and 18 months after enrolment (Australia, Austria, Belgium, Canada, Italy, Mexico, Norway, Poland, Sweden, and UK), if the patient was not known to have died, staff at each national coordinating centre contacted the patient\'s doctor (or hospital coordinator) to confirm that the patient was alive and that they might be approached for follow-up. In Austria and Italy, experienced stroke physicians, masked to treatment allocation, contacted all patients by telephone. In the other eight countries, IST-3 trial office staff posted a questionnaire to patients to assess outcome. Non-responders were sent a second questionnaire. If no questionnaire was returned, an experienced, masked clinician or stroke nurse assessed the patient by telephone interview. Telephone assessment of disability in stroke survivors is as valid as face-to-face interviews[@bib7] and postal questionnaires.[@bib8]
The primary outcome of the trial was the proportion of patients alive and independent with an Oxford handicap scale[@bib9] score of 0--2 at 6 months (this outcome was chosen instead of survival alone because many people regard survival after a stroke in a disabled or dependent state as worse than death). The secondary endpoints at 18 months were: survival, Oxford handicap scale score, health-related quality of life, overall functioning, and living circumstances. The Oxford handicap scale is a six-point scale almost identical to the modified Rankin scale.[@bib10] In emergency care of acute ischaemic stroke, recording quality of life at baseline before randomisation was not possible; instead, quality of life was measured at 6 months and 18 months with the EuroQoL instrument,[@bib11] which assesses current self-rated health by a combination of questions about wellbeing and a visual analogue scale score. The questions are about the five dimensions of mobility, self-care, activity, pain or discomfort, and anxiety (the EQ-5D). Each dimension has three levels (no problems, some problems, severe problems), which can be presented individually. A unique health state is defined by combining one level from each of the five dimensions. Patients\' responses can then be combined into an EQ utility index with scores ranging from −1 to +1 (where +1 represents perfect health, 0 represents a state equivalent to death, and −1 represents a state worse than death). Calculation of the EQ utility index requires valuations for all health states, and these have been estimated for the UK and other European populations.[@bib12] For the visual analogue scale, 100 represents the best imaginable health and 0 the worst imaginable health. We used the EuroQoL instrument because it is short and simple, and in patients with stroke it has been validated,[@bib13; @bib14; @bib15; @bib16; @bib17] is responsive to change,[@bib18] and is associated with higher response rates and fewer missing data than more complex instruments.[@bib16] Many patients who have had severe strokes might not be able to complete the questionnaire themselves and because responses from a proxy have reasonable validity,[@bib15; @bib19] we therefore accepted responses submitted by a spouse, partner, close relative, or carer.
We also assessed binary (yes or no) answers to two questions, about global functioning: "Has the stroke left you with any problems?" and activities of daily living: "Do you need help from anybody with everyday activities (in washing, dressing, feeding, and going to the toilet)?" These questions have been validated[@bib17] and were used previously in a large trial.[@bib20] We also asked whether patients were living in their own home, a relative\'s home, a residential home, a nursing home, or were still in hospital. Finally, the questionnaire asked patients enrolled in the open-label treatment phase what treatments they recalled being given in hospital, including thrombolysis with alteplase. If the patient or proxy did not complete a specific item on a postal questionnaire, we did not re-contact them.
Statistical analysis {#cesec60}
--------------------
All randomly assigned patients who were due to be followed up at 18 months were included in the analysis of survival. We constructed Kaplan-Meier survival curves, and compared treatment groups with the log-rank test. Survival times were censored at 548 days after enrolment if patients died at a later date or returned an 18-month form at a later date. For patients from the Australia, Norway, Sweden, and UK, where reporting of deaths was prompt, if there was no known death date and no return of an 18-month form, patients were censored at 548 days. For patients from other countries who had no reported death date and no 18-month form, survival was censored at the date of return of the 6-month form or at the last date of contact, whichever was later. The justification for, and the methods for statistical adjustment of, the outcomes and the ordinal analyses of the Oxford handicap scale score at 18 months were specified in the statistical analysis plan and also described in the report of the primary outcomes.[@bib2; @bib5] We divided the Oxford handicap scale into five levels: 0, 1, 2, and 3 were retained and 4, 5, and 6 were combined into a single level. The treatment OR between one level and the next was assumed to be constant, so a single parameter (a common OR) summarises the shift in outcome distribution between treatment and control groups.
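In standard notation, the proportional-odds assumption described here can be written as
$$\log\frac{P(Y\le j\mid x)}{P(Y>j\mid x)}=\alpha_j+\beta x,\qquad j=1,\dots,4,$$
where $Y$ is the five-level Oxford handicap scale outcome (ordered so that lower scores are better outcomes), $x$ indicates allocation to alteplase, and $e^{\beta}$ is the common OR, assumed constant across the four cut-points.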
In the main analysis, we report results without imputing missing data. In the sensitivity analysis, for patients with an unknown Oxford handicap scale score at 18 months, we imputed the value from their 6-month assessment (last observation carried forward). For the EuroQoL instrument, we analysed the three levels of each EQ-5D domain as ordered categories by ordinal logistic regression, calculated the mean overall difference in visual analogue scale score between treatment groups, and estimated the EQ-5D index---calculated with a set of valuations derived from a sample of the UK population with the time trade-off method and also the UK visual analogue scale and European visual analogue scale valuations.[@bib12] Analyses were adjusted for baseline prognostic factors (age, National Institutes of Health stroke scale score, delay between onset and enrolment, and presence of acute ischaemic change on the baseline scan). We did several sensitivity analyses to assess the effect of missing data for Oxford handicap scale score and EQ-5D, and we assessed the effect of setting utility to zero for patients who had died. We did subgroup analyses of the effect of treatment on Oxford handicap scale score (ordinal logistic regression, as in the study by Frank and colleagues[@bib21]) and utility subdivided by age (\>80 *vs* ≤80 years), time to randomisation (≤3·0, \>3·0--4·5, \>4·5--6·0 h), baseline National Institutes of Health stroke scale score (0--5, 6--15, 16--25, \>25), phase of the trial (masked *vs* open label), and by the person completing the form (patient *vs* proxy). For National Institutes of Health stroke scale score, we also fitted a model with baseline severity as a linear regressor with treatment-specific slopes. Analyses were done with SAS (version 9.3).
This study is registered with [controlled-trials.com](http://controlled-trials.com){#interrefs20}, number ISRCTN25765518.
Role of the funding source {#cesec70}
--------------------------
The sponsors had no role in data collection, data storage, data analysis, preparation of this report, or the decision to publish. The corresponding author had full access to all the data in the study and had final responsibility for the decision to submit for publication.
Results {#cesec80}
=======
Of the 3035 patients enrolled by 156 hospitals in 12 countries, 2348 (77·4%) met the criteria for inclusion in the 18-month follow-up study---1169 assigned to alteplase, 1179 assigned to control ([figure 1](#fig1){ref-type="fig"}). The baseline characteristics of this subset were well balanced between groups ([table 1](#tbl1){ref-type="table"}) and were not much different from those who were ineligible for the 18-month follow-up analysis ([appendix](#sec1){ref-type="sec"}).
Of the 2348 patients scheduled for 18-month follow-up, vital status and Oxford handicap scale score at 18 months were known for 2290 (97·5%). Survival at 18 months did not differ significantly between groups: 408 of 1169 (34·9%) participants allocated to alteplase versus 414 of 1179 (35·1%) allocated to control died (log-rank p=0·85; [figure 2](#fig2){ref-type="fig"}).
At 18 months, of 2348 participants, vital status and disability were known for 2239 (95·3%), vital status only was known for 51 (2·2%), and vital status and disability were unknown for 58 (2·5%). Oxford handicap scale scores were available for 1117 participants assigned to alteplase versus 1122 assigned to control. 391 (35·0%) patients allocated to alteplase versus 352 (31·4%) allocated to control were alive and independent (Oxford handicap scale score 0--2) at 18 months (adjusted odds ratio 1·28, 95% CI 1·03--1·57; p=0·024; unadjusted OR 1·18, 95% CI 0·99--1·40; p=0·068; [table 2](#tbl2){ref-type="table"}), with a favourable shift in Oxford handicap scale score (adjusted common OR 1·30, 95% CI 1·10--1·55; p=0·002). The size and statistical significance of the effect on Oxford handicap scale score at 18 months were robust to sensitivity analyses for missing data (data not shown). The [appendix](#sec1){ref-type="sec"} shows Oxford handicap scale score at 6 months in patients scheduled for 18-month follow-up who had data available at 6 months.
The EQ utility index could be calculated for 1341 (91·3%) of the 1468 patients who were alive at 18 months. 591 (44%) of these assessments were completed by patients themselves, 724 (54%) by a valid proxy, and 25 (2%) by a doctor. Treatment was associated with significant improvements in mobility, self-care, ability to do usual activities, and pain or discomfort, with no evidence of an effect on anxiety or depression ([table 3](#tbl3){ref-type="table"}). At 18 months, alteplase was associated with significantly fewer patients reporting being left with problems and needing help with everyday activities ([table 3](#tbl3){ref-type="table"}).
Although treatment with alteplase was associated with a significantly higher EQ utility index in survivors (p=0·028; [table 4](#tbl4){ref-type="table"}), the mean adjusted difference in visual analogue scale score was not significant (p=0·072; [table 4](#tbl4){ref-type="table"}). These findings were robust in the sensitivity analyses (data not shown). The [appendix](#sec1){ref-type="sec"} shows EQ-5D, EQ utility index, and visual analogue scale score at 6 months and 18 months using different valuations. Of the participants who were still alive, the proportion who were resident at home did not differ significantly between groups ([appendix](#sec1){ref-type="sec"}).
For the ordinal subgroup analysis of Oxford handicap scale score at 18 months, significant interactions existed between baseline variables and treatment effect. Greater differences in favour of alteplase were reported for age older than 80 years (p=0·032) and high National Institutes of Health stroke scale score (p=0·021), but not for time to treatment, respondent (patient *vs* proxy), or masking of assessment of outcome (double blind *vs* open label; [appendix](#sec1){ref-type="sec"}). When age, delay, and National Institutes of Health stroke scale score were treated as continuous variables, the interaction of ordinal Oxford handicap scale score with age became non-significant, delay remained non-significant, and for National Institutes of Health stroke scale score the p value for a trend was 0·004 ([appendix](#sec1){ref-type="sec"}). For EQ utility index, when subgroups were in discrete categories, none of the interactions were statistically significant ([appendix](#sec1){ref-type="sec"}). However, when the National Institutes of Health stroke scale score was treated as continuous, every five-point increase in score reduced the EQ utility index by 0·12 in the alteplase group versus 0·15 in the control group (adjusted estimates; p=0·008 for difference in slopes). For delay in enrolment time and age there was no trend in EQ utility index, irrespective of whether the variables were grouped or entered into models as a linear trend (data not shown).
Of the 1468 patients who were alive at 18 months, 1260 were asked to recall if they had been given thrombolytic treatment ([appendix](#sec1){ref-type="sec"}); 273 in the alteplase group versus 156 in the control group correctly recalled whether or not they had received thrombolytic treatment. In both treatment groups, the ability to recall treatment correctly was associated with better outcome; patients with correct recall were more likely to have an Oxford handicap scale score of 0--2 than were those who remembered incorrectly or did not know (62·5% *vs* 49·3%; p\<0·0001). Of patients with correct recall, those treated with alteplase were more likely to have an Oxford handicap scale score of 0--2 than were those in the control group (66·7% *vs* 55·1%; p=0·018), whereas of those who did not remember correctly, outcomes did not differ significantly between groups (OHS 0--2 48·6% *vs* 49·9%; p=0·714); a significant interaction existed between recall status and treatment (p\<0·0001).
Discussion {#cesec90}
==========
We have shown that, for treatment of acute ischaemic stroke, thrombolysis with intravenous alteplase seems to provide a benefit at 18 months. Treatment had no effect on survival, but was associated with a significant increase in the likelihood of being alive and independent. However, the unadjusted absolute difference in the number of patients alive and independent at 18 months was not significant, so judgment on whether or not the results are clinically significant rests on the quality of the data and the overall patterns of effect seen across all measures. The ordinal estimates of effect at 6 months and 18 months were similar and significant. Treatment was also associated with a gain in health-related quality of life that was significant for four of the five dimensions of the EQ-5D and the overall EQ utility index (though not for visual analogue scale score). Living circumstances did not differ significantly between groups.
Strengths of this study are the large number of patients and the completeness of follow-up. Of the patients scheduled for 18-month follow-up, a small proportion were missing data for both vital and functional outcome status. We estimated the EQ utility index in more than 91% of survivors (a similar proportion to that in a trial[@bib23] of younger and less impaired patients with coronary artery disease) and our sensitivity analyses also showed that the estimates of overall health-related quality of life with the EQ utility index were robust to various assumptions about missing data. Although thrombolytic treatment was associated in survivors with less functional impairment, better health-related quality of life, and less likelihood of being left with problems and needing help with daily activities after stroke, it did not translate into a higher proportion of patients living at home at 18 months, perhaps because living circumstances are affected by social and financial factors that are not influenced by treatment. We believe that the direction and size of the effects are clinically significant and will inform health economic assessments of thrombolytic treatment. For example, in 2002, the estimated cost of long-term care of an independent survivor of stroke was £876 per year and that of a dependent survivor was £11 292 per year,[@bib24] so even a small difference in the proportion of patients who survive and are independent will have substantial economic impact.
Lyden[@bib25] has identified limitations of IST-3, chiefly that treatment was not masked. Patient-reported outcomes---eg, health-related quality of life---are subjective,[@bib26] and recall of thrombolytic treatment could affect patient responses. Only 30% of survivors correctly recalled whether or not they had received thrombolytic treatment. As expected, accurate recall was associated with better outcome in both treatment groups. Thus, recall bias might have affected our findings. However, the analysis of recall was based on a variable measured in a subset of survivors after randomisation and so could itself be biased. The effects of treatment on the Oxford handicap scale score and EQ utility index were much the same in the masked and open-label parts of the study ([appendix](#sec1){ref-type="sec"}). Assessment of health-related quality of life is limited because many patients who have had a stroke are unable to complete the form themselves. The high proportion of forms completed by a proxy in IST-3 is a result of the severity of stroke in the patients included in the trial. Although the use of surrogates is a potential weakness, it did enable us to achieve satisfactory response rates; however, because proxies tend to assign worse health status than do patients,[@bib15] we were reassured that there was no interaction between the person responding and the effect of treatment on utility or Oxford handicap scale score. Not all enrolled patients were scheduled to be followed up for 18 months, but the selection criteria for the longer follow-up cohort did not seem to introduce relevant imbalances at baseline, nor were the characteristics of the cohort substantially different from those not included in long-term follow-up. We therefore believe the 18-month follow-up cohort is representative of the trial as a whole.
Another weakness is that the trial was under-powered, so the subgroup analyses of the effects of baseline age, stroke severity, and delay to enrolment on the Oxford handicap scale score and health-related quality of life should be treated with caution. These are secondary analyses of a secondary outcome, and the apparent lack of effect of time to treatment might be due to chance. Furthermore, a more appropriate assessment of the complex interactions between age, stroke severity, and time to treatment will be available from a meta-analysis of individual patient data by the Stroke Thrombolysis Trialists.[@bib27]
In conclusion, IST-3 adds to the evidence from previous trials ([panel](#box1){ref-type="boxed-text"}) and shows that although thrombolysis for acute ischaemic stroke with intravenous alteplase does not improve survival, there is evidence of improvement in several measures of function and quality of life in survivors of all ages for up to 18 months after treatment.
Correspondence to: Prof Peter Sandercock, Division of Clinical Neurosciences, University of Edinburgh, Western General Hospital, Crewe Road, Edinburgh EH4 2XU, UK <[email protected]>
Supplementary Material {#sec1}
======================
Supplementary appendix
Acknowledgments
===============
We thank all the patients who participated in the study, and the many individuals not specifically mentioned in the report who have supported the study. We also thank NIHR Stroke Research Network, NHS Research Scotland, and the National Institute for Social Care and Health Research Clinical Research Centre. IST-3 is an investigator-led trial. The University of Edinburgh and the NHS Lothian Health Board are cosponsors. The double-blind start-up phase was supported by the Stroke Association (UK). The expansion phase was funded by the Health Foundation UK. The main phase of the trial was funded by: UK Medical Research Council and managed by the National Institutes of Health Research on behalf of the MRC-NIHR partnership, the Research Council of Norway, Arbetsmarknadens Partners Forsakringsbolag Insurances Sweden, the Swedish Heart Lung Fund, The Foundation of Marianne and Marcus Wallenberg, Stockholm County Council, Karolinska Institutet, Polish Ministry of Science and Education, Australian Heart Foundation, Australian National Health and Medical Research Council, Swiss National Research Foundation, Swiss Heart Foundation, Foundation for Health and Cardio-/Neurovascular Research (Basel, Switzerland), Assessorato alla Sanita (Regione dell\'Umbria, Italy), and Danube University (Krems, Austria). Boehringer Ingelheim GmbH donated drug and placebo for the double-blind phase, but thereafter had no role in the trial. The UK Stroke Research Network adopted the trial on May 1, 2006, supported the initiation of new UK sites, and in some centres, data were collected by staff funded by the Network or working for associated NHS organisations. Central imaging was done at the Brain Imaging Research Centre (University of Edinburgh), partly funded by the Scottish Funding Council and the Chief Scientist Office of the Scottish Executive. Additional support was received from Chest Heart and Stroke Scotland, DesAcc, University of Edinburgh, Danderyd Hospital R&D Department, Karolinska Institutet, Oslo University Hospital, and the Dalhousie University Internal Medicine Research Fund.
The study was conceived by the cochief investigators---PS, RIL, and JW. JW led the imaging. The study was designed by PS, RIL, and JW, with input from all others who were coordinators of the trial in their own country. PS, RIL, JW, MD, and KI wrote the protocol. KI is the study coordinator. GC is the study statistician who prepared the analyses. GM advised on statistical aspects. PS, RIL, MD, WW, GV, AC, AK, EB, KBS, VM, AP, GH, KM, SR, GG, SP, AA, MC, and PL recruited patients. GV, AC, AK, EB, KBS, VM, AP, GH, KM, SR, GG, SP, AA, MC, and PL were national coordinators. PS wrote the first draft and all authors commented on subsequent drafts and approved the final version.
*University of Edinburgh (Edinburgh, UK)*---Peter Sandercock, Joanna M Wardlaw, Martin Dennis, Geoff Cohen, Gordon Murray, Karen Innes, Will Whiteley; *Sydney Medical School*---*Westmead Hospital and The George Institute for Global Health (University of Sydney, Sydney, Australia)*---Richard I Lindley; *Sheffield Teaching Hospitals NHS Foundation Trust (Sheffield, UK)*---Graham Venables; *Institute of Psychiatry and Neurology and Medical University of Warsaw (Warsaw, Poland)*---Anna Czlonkowska; *Institute of Psychiatry and Neurology (Warsaw, Poland)*---Adam Kobayashi; *Department of Neurology (Ospedale, Citta\' di Castello, Italy)*---Stefano Ricci; *Karolinska Institutet (Stockholm, Sweden)*---Veronica Murray; *Oslo University Hospital (Oslo, Norway)*---Eivind Berge, Karsten Bruins Slot; *School of Medicine and Pharmacology (The University of Western Australia, Perth, Australia) and Royal Perth Hospital (Perth, Australia)*---Graeme J Hankey; *Hospital Geral de Santo Antonio (Porto, Portugal)*---Manuel Correia; *Cliniques Universitaires Saint-Luc (Bruxelles, Belgium)*---Andre Peeters; *Landesklinikum Donauregion Tulln (Tulln, Austria)*---Karl Matz; *University Hospital Basel (Basel, Switzerland)*---Phillippe Lyrer; *Dalhousie University and Queen Elizabeth II Health Sciences Centre (Halifax, Canada)*---Gord Gubitz, Stephen J Phillips; *and Instituto Nacional de Neurologia (Mexico City, Mexico)*---Antonio Arauz.
EB and AC have received honoraria and travel costs from Boehringer Ingelheim. GB has received honoraria and speaker fees from Boehringer Ingelheim, Sanofi Synthelabo Aventis, Hoffman La Roche, and Novo Nordisk. AK has received lecture fees and conference travel costs from Boehringer Ingelheim. RIL has been paid for his role as a member of a conference scientific committee and for lectures by Boehringer Ingelheim and has attended national stroke meetings organised and funded by Boehringer Ingelheim. PS has received lecture fees (paid to the Division of Clinical Neurosciences, University of Edinburgh) and travel expenses from Boehringer Ingelheim, was a member of the independent data and safety monitoring board of the RE-LY trial funded by Boehringer Ingelheim for which attendance fees and travel expenses were paid (to the Division of Clinical Neurosciences, University of Edinburgh). KBS has received an honorarium for a lecture from Boehringer Ingelheim and had costs for participating in scientific meetings reimbursed; is a member of the European Medicines Agency\'s Committee for Medicinal Products for Human Use and the Cardiovascular Working Party. The views expressed in this Article are the personal views of KBS and should not be understood or quoted as being made on behalf of or reflecting the position of the European Medicines Agency or one of its committees or working parties. VM has received an unrestricted educational grant from Boehringer Ingelheim for a meeting on thrombolysis in stroke at which IST-3 was discussed. JMW received funding to the Division of Clinical Neurosciences, University of Edinburgh for reading CT scans for ECASS III from Boehringer Ingelheim, is the contact reviewer for Cochrane systematic reviews of thrombolytic treatment for acute stroke, has attended meetings held by Boehringer Ingelheim as an unpaid independent adviser during the licensing of alteplase, but was refunded her travel expenses and the time away from work, has attended and spoken at meetings organised and funded by Boehringer Ingelheim for which she received honoraria and travel expenses, and is director of the Brain Research Imaging Centre for Scotland, which has received some funding supplemented by grants and donations from Novartis, Schering, General Electric, and Boehringer Ingelheim. All other members of the writing committee declare that they have no conflicts of interest.
**Table 1:** Baseline characteristics of patients included in 18-month follow-up

|   | Alteplase group (n=1169) | Control group (n=1179) |
|---|---|---|
| **Region** | | |
| Americas (Canada, Mexico) | 5 (\<1%) | 6 (1%) |
| Australia | 89 (8%) | 90 (8%) |
| Eastern Europe (Poland) | 158 (14%) | 159 (13%) |
| Northwest Europe (UK, Austria, Belgium) | 550 (47%) | 556 (47%) |
| Scandinavia (Norway, Sweden) | 251 (21%) | 250 (21%) |
| Southern Europe (Italy) | 116 (10%) | 118 (10%) |
| **Age** | | |
| 18--50 years | 49 (4%) | 57 (5%) |
| 51--60 years | 83 (7%) | 81 (7%) |
| 61--70 years | 153 (13%) | 158 (13%) |
| 71--80 years | 291 (25%) | 304 (26%) |
| 81--90 years | 523 (45%) | 512 (43%) |
| \>90 years | 70 (6%) | 67 (6%) |
| Women | 592 (51%) | 596 (51%) |
| **National Institutes of Health stroke scale score** | | |
| 0--5 | 235 (20%) | 236 (20%) |
| 6--10 | 323 (28%) | 330 (28%) |
| 11--15 | 244 (21%) | 235 (20%) |
| 16--20 | 207 (18%) | 219 (19%) |
| \>20 | 160 (14%) | 159 (13%) |
| **Delay in enrolment** | | |
| ≤3·0 h | 320 (27%) | 307 (26%) |
| \>3·0--4·5 h | 471 (40%) | 481 (41%) |
| \>4·5--6·0 h | 378 (32%) | 389 (33%) |
| \>6·0 h | 0 (0%) | 2 (\<1%) |
| Atrial fibrillation | 347 (30%) | 331 (28%) |
| **Systolic blood pressure** | | |
| ≤143 mm Hg | 380 (33%) | 380 (32%) |
| 144--164 mm Hg | 379 (32%) | 405 (34%) |
| ≥165 mm Hg | 410 (35%) | 394 (33%) |
| **Diastolic blood pressure** | | |
| ≤74 mm Hg | 342 (29%) | 343 (29%) |
| 75--89 mm Hg | 409 (35%) | 448 (38%) |
| ≥90 mm Hg | 406 (35%) | 381 (32%) |
| **Blood glucose concentration**\* | | |
| ≤5 mmol/L | 202 (20%) | 207 (20%) |
| 6--7 mmol/L | 501 (49%) | 485 (47%) |
| ≥8 mmol/L | 324 (32%) | 347 (33%) |
| Treatment with antiplatelet drugs in previous 48 h | 599 (51%) | 610 (52%) |
| **Assessment of acute ischaemic change** | | |
| Scan normal | 99 (8%) | 102 (9%) |
| Scan not normal but no sign of acute change | 551 (47%) | 579 (49%) |
| Signs of acute change | 511 (44%) | 490 (42%) |
| **Predicted probability of poor outcome at 6 months**† | | |
| \<40% | 633 (54%) | 640 (54%) |
| ≥40--\<50% | 130 (11%) | 113 (10%) |
| ≥50--\<75% | 275 (24%) | 304 (26%) |
| ≥75% | 131 (11%) | 122 (10%) |
| **Stroke syndrome** | | |
| TACI | 491 (42%) | 509 (43%) |
| PACI | 460 (39%) | 430 (36%) |
| LACI | 137 (12%) | 133 (11%) |
| POCI | 79 (7%) | 104 (9%) |
| Other | 2 (\<1%) | 3 (\<1%) |

Data are n (%). TACI=total anterior circulation infarct. PACI=partial anterior circulation infarct. LACI=lacunar infarct. POCI=posterior circulation infarct.
\*Baseline glucose concentration was not recorded for the first 282 patients recruited; thus, glucose measurements were available for 2066 of 2348 participants (88%; 1027 allocated to alteplase and 1039 allocated to control).
†Calculated from a model based on age and baseline National Institutes of Health stroke scale score.[@bib22]
**Table 2:** Oxford handicap scale scores at 18 months

|   | Alteplase group | Control group | Adjusted OR (95% CI)\* | p value | Unadjusted OR (95% CI)† | p value | Difference per 1000 patients (95% CI)† |
|---|---|---|---|---|---|---|---|
| Planned 18-month follow-up | 1169 | 1179 | .. | .. | .. | .. | .. |
| Missing OHS data at 18 months‡ | 52 (4%) | 57 (5%) | .. | .. | .. | .. | .. |
| Number analysed (both vital and OHS status known) | 1117 (96%) | 1122 (95%) | .. | .. | .. | .. | .. |
| **OHS score at 18 months§** | | | | | | | |
| 0 | 119 (11%) | 83 (7%) | .. | .. | .. | .. | .. |
| 1 | 135 (12%) | 141 (13%) | .. | .. | .. | .. | .. |
| 2 | 137 (12%) | 128 (11%) | .. | .. | .. | .. | .. |
| 3 | 132 (12%) | 138 (12%) | .. | .. | .. | .. | .. |
| 4 | 81 (7%) | 107 (10%) | .. | .. | .. | .. | .. |
| 5 | 105 (9%) | 111 (10%) | .. | .. | .. | .. | .. |
| Died before 18 months§¶ | 408 (37%) | 414 (37%) | 0·95 (0·78 to 1·16) | 0·628 | 0·98 (0·83 to 1·17) | 0·855 | 4 (−36 to 44) |
| Alive and independent (OHS score 0--2)§ | 391 (35%) | 352 (31%) | 1·28 (1·03 to 1·57) | 0·024 | 1·18 (0·99 to 1·40) | 0·068 | −36 (−75 to 3) |
| Alive and had favourable outcome (OHS score 0 or 1)§ | 254 (23%) | 224 (20%) | 1·23 (0·98 to 1·55) | 0·076 | 1·18 (0·96 to 1·44) | 0·109 | −28 (−62 to 6) |

Data are n (%) unless stated otherwise. OHS=Oxford handicap score.
\*Adjusted analysis: logistic regression of outcome on treatment group, adjusted for age, National Institutes of Health stroke scale score, and delay (all linear) and visible infarct on baseline scan.
†Unadjusted analysis and difference per 1000 patients: standard binomial test with normal approximation.
‡Includes patients who did not return an 18-month form but died more than 18 months after enrolment (figure 1).
§Percentages based on number analysed for OHS. For one participant, OHS was imputed on the basis of responses to EQ-5D.
¶If all patients known to be alive are included in the denominators, the percentages dead at 18 months are 35·8% in the alteplase group and 36·0% in the control group.
**Table 3:** EQ-5D and other assessments of function at 18 months

|   | Alteplase group | Control group | Odds ratio (95% CI)\* | p value | Difference per 1000 patients (95% CI)† |
|---|---|---|---|---|---|
| **EQ-5D** | | | | | |
| Mobility | 702 | 692 | .. | .. | .. |
| No problems walking | 283 (40%) | 259 (37%) | 1·30 (1·05 to 1·61) | 0·017 | −29 (−80 to 22) |
| Some problems walking | 343 (49%) | 346 (50%) | .. | .. | 11 (−41 to 64) |
| Confined to bed | 76 (11%) | 87 (13%) | .. | .. | 17 (−16 to 51) |
| Self-care | 695 | 689 | .. | .. | .. |
| No problems with self-care | 372 (54%) | 328 (48%) | 1·43 (1·16 to 1·78) | 0·001 | −59 (−112 to −7) |
| Some problems washing or dressing | 176 (25%) | 191 (28%) | .. | .. | 24 (−23 to 70) |
| Unable to wash or dress | 147 (21%) | 170 (25%) | .. | .. | 35 (−9 to 79) |
| Usual activities | 699 | 694 | .. | .. | .. |
| No problems with usual activities | 235 (34%) | 209 (30%) | 1·32 (1·07 to 1·62) | 0·008 | −35 (−84 to 14) |
| Some problems with usual activities | 258 (37%) | 256 (37%) | .. | .. | 0 (−51 to 50) |
| Unable to do usual activities | 206 (29%) | 229 (33%) | .. | .. | 35 (−13 to 84) |
| Pain or discomfort | 698 | 694 | .. | .. | .. |
| No pain or discomfort | 344 (49%) | 304 (44%) | 1·26 (1·02 to 1·56) | 0·029 | −55 (−107 to −2) |
| Moderate pain or discomfort | 316 (45%) | 355 (51%) | .. | .. | 59 (6 to 111) |
| Extreme pain or discomfort | 38 (5%) | 35 (5%) | .. | .. | −4 (−27 to 19) |
| Anxiety or depression | 693 | 690 | .. | .. | .. |
| Not anxious or depressed | 353 (51%) | 349 (51%) | 1·05 (0·85 to 1·29) | 0·668 | −4 (−56 to 49) |
| Moderately anxious or depressed | 292 (42%) | 290 (42%) | .. | .. | −1 (−53 to 51) |
| Extremely anxious or depressed | 48 (7%) | 51 (7%) | .. | .. | 5 (−23 to 32) |
| **Additional questions about overall function** | | | | | |
| Stroke left patient with problems | 484/700 (69%) | 542/699 (78%) | 1·67 (1·30 to 2·17) | \<0·0001 | 84 (38 to 130) |
| Needs help with everyday activities | 298/696 (43%) | 350/692 (51%) | 1·59 (1·25 to 2·00) | \<0·0001 | 78 (25 to 130) |

Data are n (%) unless stated otherwise.
\*Logistic regression of outcome on treatment group, adjusted for age, National Institutes of Health stroke scale score, and delay (all linear) and visible infarct on baseline scan.
†Standard binomial test with normal approximation.
**Table 4:** EQ utility index and visual analogue scale score assessment of overall health at 18 months

|   | n (alteplase) | Alteplase, mean (SE) | n (control) | Control, mean (SE) | Adjusted difference (SE)\* | p value | Unadjusted difference (SE)† | p value |
|---|---|---|---|---|---|---|---|---|
| Visual analogue scale score | 653 | 62·07 (0·90) | 648 | 60·57 (0·91) | 2·18 (1·21) | 0·072 | 1·49 (1·28) | 0·244 |
| EQ utility index | 674 | 0·550 (0·015) | 667 | 0·502 (0·016) | 0·062 (0·020) | 0·002 | 0·049 (0·022) | 0·028 |

\*Adjusted for age, National Institutes of Health Stroke Scale score, delay from onset to enrolment, and presence of visible ischaemia on the baseline scan.
†Significance based on *t* test. Utility based on UK time trade-off valuations on a scale of −1 to +1.
###### Research in context
**Systematic review**
The primary results of IST-3[@bib2] included a systematic review of randomised controlled trials of alteplase in acute stroke.[@bib1] To accompany this review we searched up to April 30, 2013, for additional randomised trials of intravenous alteplase versus control within 6 h of onset of acute stroke in the Cochrane Stroke Trials Registry, Internet Stroke Trials Centre, and reference lists in review articles and conference abstracts. For the Cochrane Stroke Trials Registry we searched for interventions with thrombolytic drugs in acute ischaemic stroke added since the last update of the Cochrane review. For the Internet Stroke Center, we searched for "acute ischemic stroke", "acute ischaemic stroke", "thrombolysis", "thrombolytic therapy", "alteplase", and "recombinant tissue plasminogen activator". For each trial, we checked the primary trial publication, and when available, the trial protocol, to determine if it was planned to collect long-term clinical outcome data (ie, more than 90 days after enrolment) or health-related quality-of-life data, as assessed by a valid instrument such as EQ-5D or Short Form 36.
Of the 12 completed randomised controlled trials, ten reported outcome at 90 days or less,[@bib1] two reported clinical outcome at 6 months[@bib2; @bib3] and one at 12 months,[@bib3] but none reported effects more than 12 months after stroke. The Second European Collaborative Acute Stroke Study collected data on health-related quality of life at 90 days with the SF-36, but has yet to report those data. In the NINDS Trial,[@bib3] mortality at 12 months did not differ significantly between alteplase and placebo groups (24% *vs* 28%; p=0·29). The primary outcome was favourable outcome, defined as minimal or no disability as measured by the Barthel index, the modified Rankin scale, and the Glasgow outcome scale, and the treatment effect was assessed with a global statistic. The global statistic favoured the alteplase group at 6 months (OR for a favourable outcome 1·7, 95% CI 1·3--2·3) and at 12 months (1·7, 1·2--2·3).
**Interpretation**
IST-3 confirms the evidence from previous trials on the neutral effect of thrombolysis with alteplase on survival after stroke in a much larger sample, and adds to the evidence that improvements in function reported at earlier timepoints are evident at 18 months. IST-3 also provides the first validated estimates of the effect of thrombolysis with alteplase on health-related quality of life.
[^1]: Members listed in the [appendix](#sec1){ref-type="sec"}
|
What Causes Electrical Equipment To Overheat?
By
Lon Lockwood
|August 16, 2013
“In 2011, an estimated 47,700 home structure fires reported to U.S. fire departments involved some type of electrical failure or malfunction as a factor contributing to ignition,” according to the National Fire Protection Association (NFPA). Electrical overheating is one of the most common causes of electrical fires, and by knowing what causes electrical equipment to overheat, you can hopefully prevent a tragedy from happening.
Main Causes Of Electrical Overheating
There are three main causes of electrical overheating: excessive current, poor connections and insulation breakdown. Of the three, excessive current is the least likely because circuit breakers and fuses typically protect against it. Poor connections dissipate high wattage over a small area for a long period of time and, as a result, can cause electrical fires. Insulation breakdown is another common cause: as the insulation fails, arcing can occur and sparks may be ejected.
What To Look For
Frayed cords, discolored outlets, and broken or cracked plugs are key indicators that something is wrong. To prevent electrical equipment or devices from overheating, these warning signs should be addressed immediately.
Possible Solutions
If you’re wondering what can be done to improve electrical safety, we’ve provided some suggestions:
Have GFCI outlets installed indoors wherever there is a potential water hazard, including the kitchen and the bathroom.
Dedicate separate circuits and outlets for large appliances such as your refrigerator, oven, dishwasher, washing machine and dryer.
Dispose of damaged or frayed electrical cords.
It’s important to have a licensed electrician perform an electrical safety inspection to make sure your home is safe. Give Lon Lockwood Electric a call today at (585) 766-4702 to schedule an appointment.
|
Q:
Check if jump buffer is valid or not (non-local jumps)
We have implemented "longjmp–Restore stack environment" in our code base. The longjmp routine is called by a particular error_exit function, which can be invoked from anywhere.
Thus it is possible that when longjmp is called, the setjmp routine has not yet been called, and the buffer can hold an invalid value, leading to a crash.
Can I initialise the buffer to NULL, or is there any check available for an unset or invalid value? One way is to set a flag variable whenever setjmp is called and check against that, but that is only a hack.
void error_exit()
{
extern jmp_buf buf;
longjmp(buf, 1);
}
Can I do something like this?
void error_exit()
{
extern jmp_buf buf;
if(buf)
longjmp(buf, 1);
}
The code is mixed C/C++. I know I could replace setjmp and longjmp with C++ exception handling everywhere, but that is not possible right now. Can I instead guard against a longjmp with an invalid buffer, which leads to a crash?
A:
jmp_buf is not particularly well documented. In the Linux headers, you can find something like:
typedef int __jmp_buf[6];
struct __jmp_buf_tag {
__jmp_buf __jmpbuf; /* Calling environment. */
int __mask_was_saved; /* Saved the signal mask? */
__sigset_t __saved_mask; /* Saved signal mask. */
};
typedef struct __jmp_buf_tag jmp_buf[1];
Setting it to zero and then testing the whole buffer would be a waste of time.
Personally, I would keep a pointer to this buffer, initializing it to NULL and setting it right before setjmp:
jmp_buf physical_buf;
jmp_buf *buf = NULL;
...
buf = &physical_buf;
if (setjmp(*buf)) {
...
}
It is the same idea as having a separate flag. Moreover, you can allocate jump buffers dynamically if necessary.
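A minimal sketch of that pattern applied to the error_exit case (the name run_protected is hypothetical, and the fallback behaviour is only one possible choice):
#include <setjmp.h>
#include <stdio.h>
#include <stdlib.h>

static jmp_buf physical_buf;
static jmp_buf *buf = NULL;        /* NULL means "no valid jump target" */

void error_exit(void)
{
    if (buf != NULL)
        longjmp(*buf, 1);          /* jump only if setjmp has armed it */
    fprintf(stderr, "error_exit called with no jump target\n");
    exit(EXIT_FAILURE);            /* fallback; pick whatever suits your code base */
}

int run_protected(void)
{
    buf = &physical_buf;
    if (setjmp(*buf) != 0) {
        buf = NULL;                /* the target is stale once the jump has happened */
        return -1;                 /* error path */
    }
    /* ... work that may call error_exit() ... */
    buf = NULL;                    /* invalidate before the frame goes away */
    return 0;
}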
|
Q:
base 13 12 11 10 HELP
Convert the last four digits of the number 122917 to base 13, where A, B, and C correspond to 10, 11, and 12.
Does anyone know how I would start this?
A:
We write $$2917=\sum_{i=0}^n x_i\,13^i=x_0+13(x_1+13(x_2+\dots))$$
All $x_i$ are between $0$ and $C=12$. From the above equation, $x_0$ is the remainder of the division of the number $2917$ by $13$. Subtract the remainder from $2917$, divide by $13$, and you get $$\frac{2917-x_0}{13}=x_1+13(x_2+\dots)$$
This means that $x_1$ is the remainder of the division of $\frac{2917-x_0}{13}$ by $13$. Continue like that until the result of the division is less than $13$.
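Carrying this out for $2917$:
$$2917=13\cdot 224+5,\qquad 224=13\cdot 17+3,\qquad 17=13\cdot 1+4,$$
so reading the final quotient and the remainders from last to first gives $2917=1435_{13}$. Check: $1\cdot 13^3+4\cdot 13^2+3\cdot 13+5=2197+676+39+5=2917$. No digit exceeds $9$ here, so the letters $A$, $B$, $C$ happen not to be needed for this particular number.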
|
Atheism at a glance
Atheism is the absence of belief in any Gods or spiritual beings. The word Atheism comes from a, meaning without, and theism meaning belief in god or gods.
Atheists don't use God to explain the existence of the universe.
Atheists say that human beings can devise suitable moral codes to live by without the aid of Gods or scriptures.
Reasons for non-belief
People are atheist for many reasons, among them:
They find insufficient evidence to support any religion.
They think that religion is nonsensical.
They once had a religion and have lost faith in it.
They live in a non-religious culture.
Religion doesn't interest them.
Religion doesn't seem relevant to their lives.
Religions seem to have done a lot of harm in the world.
The world is such a bad place that there can't be a God.
Many atheists are also secularist, and are hostile to any special treatment given to organised religion.
It is possible to be both atheist and religious. Virtually all Buddhists manage it, as do some adherents of other religions, such as Judaism and Christianity.
Atheists and morality
Atheists are as moral (or immoral) as religious people.
In practical terms atheists often follow the same moral code as religious people, but they arrive at the decision of what is good or bad without any help from the idea of God.
What does it mean to be human?
Atheists find their own answers to the question of what it means to be human. This discussion looks at the question from both theological and ethical viewpoints.
|
Q:
Issue with timed animation
I'm trying to create different timed animations that run independently from each other,I found a really well made animation code and I adapted it for my needs. I have created an jsFiddle example to show the creation of the animations inside a list item. The problem I'm having is that if I click on one green timed circle, the other ones stop. I don't know how to instantiate them for each row...
Here is the jsfiddle: JSFIDDLE
Some code:
var methods = {
init: function (options) {
var state = {
timer: null,
timerSeconds: 60,
callback: function () {},
timerCurrent: 0,
showPercentage: false,
fill: false,
color: '#CCC'
};
state = $.extend(state, options);
return this.each(function () {
var $this = $(this);
var data = $this.data('pietimer');
if (!data) {
$this.addClass('pietimer');
$this.css({
fontSize: $this.width()
});
$this.data('pietimer', state);
if (state.showPercentage) {
$this.find('.percent').show();
}
if (state.fill) {
$this.addClass('fill');
}
$this.pietimer('start');
}
});
},
stopWatch: function () {
var data = $(this).data('pietimer');
if (data) {
var seconds = (data.timerFinish - (new Date().getTime())) / 1000;
if (seconds <= 0) {
clearInterval(data.timer);
$(this).pietimer('drawTimer', 100);
data.callback();
} else {
var percent = 100 - ((seconds / (data.timerSeconds)) * 100);
$(this).pietimer('drawTimer', percent);
}
}
},
drawTimer: function (percent) {
$this = $(this);
var data = $this.data('pietimer');
if (data) {
$this.html('<div class="percent"></div><div class="slice' + (percent > 50 ? ' gt50"' : '"') + '><div class="pie"></div>' + (percent > 50 ? '<div class="pie fill"></div>' : '') + '</div>');
var deg = 360 / 100 * percent;
$this.find('.slice .pie').css({
'-moz-transform': 'rotate(' + deg + 'deg)',
'-webkit-transform': 'rotate(' + deg + 'deg)',
'-o-transform': 'rotate(' + deg + 'deg)',
'transform': 'rotate(' + deg + 'deg)'
});
var secs = (data.timerSeconds) * ((100 - percent) / 100); /*NEW*/
$this.find('.percent').html(Math.round(secs) + ''); /*Changed*/
if (data.showPercentage) {
$this.find('.percent').show();
}
if ($this.hasClass('fill')) {
$this.find('.slice .pie').css({
backgroundColor: data.color
});
} else {
$this.find('.slice .pie').css({
borderColor: data.color
});
}
}
},
start: function () {
var data = $(this).data('pietimer');
if (data) {
data.timerFinish = new Date().getTime() + (data.timerSeconds * 1000);
$(this).pietimer('drawTimer', 0);
data.timer = setInterval("$this.pietimer('stopWatch')", 50);
}
},
reset: function () {
console.log("flag 4");
var data = $(this).data('pietimer');
if (data) {
clearInterval(data.timer);
$(this).pietimer('drawTimer', 0);
}
}
};
$.fn.pietimer = function (method) {
if (methods[method]) {
return methods[method].apply(this, Array.prototype.slice.call(arguments, 1));
} else if (typeof method === 'object' || !method) {
return methods.init.apply(this, arguments);
} else {
$.error('Method ' + method + ' does not exist on jQuery.pietimer');
}
};
function runTimer() {
$('.timer').pietimer({
timerSeconds: 60,
color: '#ccc',
fill: '#ccc',
showPercentage: true,
callback: function () {
console.log("flag 7");
// alert("yahoo, timer is done!");
$('.timer').pietimer('reset');
$this.find('.percent').html(0);
}
});
}
Thanks in advance.
A:
Found the solution. It's here for anyone who wants it :)
LINK: JSBIN
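For readers who cannot open the link, the likely root cause in the code above is that drawTimer assigns $this = $(this) without var, creating a single shared global, and start hands setInterval a string, which is later evaluated against that one global; every running timer therefore ends up driving whichever element was touched last. A sketch of the usual fix is to keep the element in a closure and pass a real function:
start: function () {
    var $el = $(this);                     // local, not a shared global
    var data = $el.data('pietimer');
    if (data) {
        data.timerFinish = new Date().getTime() + (data.timerSeconds * 1000);
        $el.pietimer('drawTimer', 0);
        data.timer = setInterval(function () {
            $el.pietimer('stopWatch');     // each timer keeps its own element
        }, 50);
    }
}
Giving drawTimer its own var $this = $(this); fixes the same leak there.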
|
Q:
CopyMemory to std::copy
I'm trying to convert all my CopyMemory calls to std::copy calls.
It works with CopyMemory and memcpy but not with std::copy. Can anyone tell me what I'm doing wrong or how to fix it?
template<typename T>
void S(unsigned char* &Destination, const T &Source)
{
//CopyMemory(Destination, &Source, sizeof(T));
std::copy(&Source, &Source + sizeof(T), Destination); //Fails..
Destination += sizeof(T);
}
template<typename T>
void D(T* &Destination, unsigned char* Source, size_t Size)
{
//CopyMemory(Destination, Source, Size);
std::copy(Source, Source + Size, Destination);
Source += sizeof(T);
}
template<typename T>
void D(T &Destination, unsigned char* Source, size_t Size)
{
//CopyMemory(&Destination, Source, Size);
std::copy(Source, Source + Size, &Destination);
Source += sizeof(T);
}
I've also figured out that I could do the following to convert iterators to pointers:
std::string Foo = "fdsgsdgs";
std::string::iterator it = Foo.begin();
unsigned char* pt = &(*it);
How would I convert pointers to iterators then? :S
The code I use to test memcpy/CopyMemory against std::copy is as follows (it prints 7 if it works, and random numbers if it doesn't):
#include <windows.h>
#include <iostream>
#include <vector>
#include <typeinfo>
using namespace std;
typedef struct
{
int SX, SY;
uint32_t Stride;
unsigned long ID;
int TriangleCount;
} Model;
template<typename T>
void S(unsigned char* &Destination, const T &Source)
{
CopyMemory(Destination, &Source, sizeof(T));
Destination += sizeof(T);
}
template<typename T>
void S(unsigned char* &Destination, const std::vector<T> &VectorContainer)
{
size_t Size = VectorContainer.size();
for (size_t I = 0; I < Size; ++I)
S(Destination, VectorContainer[I]);
}
void S(unsigned char* &Destination, const Model &M)
{
S(Destination, M.SX);
S(Destination, M.SY);
S(Destination, M.Stride);
S(Destination, M.ID);
S(Destination, M.TriangleCount);
}
template<typename T>
void D(T* &Destination, unsigned char* Source, size_t Size)
{
CopyMemory(Destination, Source, Size);
Source += sizeof(T);
}
template<typename T>
void D(T &Destination, unsigned char* Source, size_t Size)
{
CopyMemory(&Destination, Source, Size);
Source += sizeof(T);
}
template<typename T>
void D(std::vector<T> &Destination, unsigned char* Source, size_t Size)
{
Destination.resize(Size);
for(size_t I = 0; I < Size; ++I)
{
D(Destination[I], Source, sizeof(T));
Source += sizeof(T);
}
}
void D(Model* &Destination, unsigned char* Source)
{
D(Destination->SX, Source, sizeof(Destination->SX));
D(Destination->SY, Source, sizeof(Destination->SY));
D(Destination->Stride, Source, sizeof(Destination->Stride));
D(Destination->ID, Source, sizeof(Destination->ID));
D(Destination->TriangleCount, Source, sizeof(Destination->TriangleCount));
}
long double* LD = new long double[25000];
std::vector<Model> ListOfModels, ListOfData;
void ExecuteCommands()
{
switch(static_cast<int>(LD[1]))
{
case 1:
{
LD[2] = 2;
unsigned char* Data = reinterpret_cast<unsigned char*>(&LD[3]);
Model M; M.SX = 1; M.SY = 3; M.Stride = 24; M.ID = 7; M.TriangleCount = 9;
Model K; K.SX = 3; K.SY = 21; K.Stride = 34; K.ID = 9; K.TriangleCount = 28;
ListOfModels.push_back(M);
ListOfModels.push_back(K);
S(Data, ListOfModels);
}
break;
}
}
void* GetData()
{
unsigned char* Data = reinterpret_cast<unsigned char*>(&LD[3]);
D(ListOfData, Data, LD[2]);
cout<<ListOfData[0].ID; //Should print 7 if it works.
return &ListOfData[0];
}
int main()
{
LD[1] = 1;
ExecuteCommands();
GetData();
}
A:
There are so many things wrong with this code that it's almost impossible to know where to begin. And the errors are in many cases so basic that it betrays a gross misunderstanding of what you should be doing. The kind of code you're writing is dangerous for experienced C++ programers; the errors you've made in your code suggest that you're far from experienced.
Stop trying to do what you're trying to do.
But let's take your code.
std::copy(&Source, &Source + sizeof(T), Destination); //Fails..
First, let's talk about pointers in C++.
If you have a pointer to some type T, let's say T *t, doing this t + 1 will not shift the pointer over one byte. This is basic pointer arithmetic stuff here; t + 1 will shift it over by sizeof(T); that's how pointers have worked since the earliest days of C, let alone C++.
Source is a T&, so &Source is a T*. Therefore, adding sizeof(T) to it will advance the pointer by sizeof(T) elements, that is, by sizeof(T) * sizeof(T) bytes. That's not what you want.
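To see this concretely (a tiny standalone illustration, not part of your code):
#include <iostream>

int main()
{
    int a[2] = {0, 0};
    int *t = a;
    // t + 1 points at the next int, so the byte distance is sizeof(int).
    std::cout << (reinterpret_cast<char*>(t + 1) - reinterpret_cast<char*>(t))
              << '\n'; // prints 4 on typical platforms
}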
Second, std::copy is not memcpy. std::copy is for copying one collection of values (defined by an input iterator pair) into another collection of values defined by an output iterator. std::copy requires that the value_type of the input iterator is implicitly convertible to the value_type of the output iterator.
The value_type of a T*, the input iterator in question, is a T; T*'s point to Ts. The value_type of your char*, your output iterator, is char. std::copy is going to try to do effectively this:
char *val;
T *t;
*val = *t;
Even ignoring the fact that these two pointers are uninitialized, this makes no sense. Unless T has an operator char conversion operator, you cannot simply take a T and shove it into a char. Therefore, you get a compile error. As you should.
If you truly have some T and want to copy it into a char* array of the appropriate size (or vice-versa), std::copy is not the tool you need. The tool you want is std::memcpy. std::copy is for copying objects, not copying bytes.
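For instance, here is a minimal sketch of round-tripping a trivially copyable type through a byte buffer with std::memcpy (the Point struct is made up for illustration; memcpy is only valid for trivially copyable types):
#include <cstring>   // std::memcpy
#include <iostream>

struct Point { int x; int y; };  // trivially copyable

int main()
{
    Point src{7, 9};
    unsigned char buffer[sizeof(Point)];

    std::memcpy(buffer, &src, sizeof src);   // object -> bytes

    Point dst;
    std::memcpy(&dst, buffer, sizeof dst);   // bytes -> object
    std::cout << dst.x << ' ' << dst.y << '\n';  // prints "7 9"
}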
|
A mismatch index based on the difference between measured left ventricular ejection fraction and that estimated by infarct size at three months following reperfused acute myocardial infarction.
The reduction of left ventricular ejection fraction (LVEF) following ST-segment elevation myocardial infarction (STEMI) is a result of infarcted myocardium and may involve dysfunctional but viable myocardium. An index that can quantitatively determine whether LVEF is reduced beyond the value expected from infarct size (IS) alone has previously been presented based on cardiac magnetic resonance (CMR). The purpose of this study was to introduce the index based on the electrocardiogram (ECG) and to compare indices based on ECG and CMR. In 55 patients, ECG and CMR were obtained 3 months after STEMI treated with primary percutaneous coronary intervention. Significant, albeit moderate, inverse relationships were found between measured LVEF and IS. Based on IS and LVEF, an IS-estimated LVEF was derived, and an MI-LVEF mismatch index was calculated as the difference between measured LVEF and IS-estimated LVEF. In 41 (74.5%) of the patients there was agreement between the ECG and CMR indices with regard to categorizing indices as >10 or ≤10, and overall no significant difference was detected (mean difference 1.26 percentage points, p = 0.53). The study found overall good agreement between MI-LVEF mismatch indices based on ECG and CMR. The MI-LVEF mismatch index may serve as a tool to identify patients with potentially reversible, dysfunctional but viable myocardium, but future studies including both ECG and CMR are needed.
|
If Kerry does manage to seal a historic deal in the next few days, attention will immediately turn back to Congress, which voted earlier this year to give itself a say in the matter. Under the terms that lawmakers set for themselves (after some bargaining with the administration), it won’t be easy for Congress to stop the agreement, and the precise timing of the accord could have a significant impact on the outcome. If the administration submits the signed deal to Congress by Thursday, the House and Senate would have just 30 days to review and vote on it—a window that, in actuality, becomes even shorter since lawmakers are scheduled to leave for their sacred summer recess by the end of the month. If the Thursday deadline slips, then under the law Congress would have 60 days to consider the agreement.
“They’re rushing,” said Senator Bob Corker, chairman of the Foreign Relations Committee, in a Sunday appearance on Face the Nation. Corker’s implication? That the administration wants to jam Congress because the longer that lawmakers have to review the deal, the more likely they’ll be to oppose it. Josh Earnest, the White House press secretary, scoffed at that suggestion, telling reporters that the fact that it has taken two years even to get to this point “indicates that nobody’s been in a rush.”
The administration’s critics say that Obama and Kerry, in their desperation for a legacy achievement, have conceded far too much and crossed many of the red lines they once set. Inspections, critics worry, won’t be frequent enough, much less “anytime, anywhere”; sanctions would be lifted too quickly and would be harder to “snap back” into place than Obama has suggested; and Iran’s ability to acquire a nuclear weapon would merely be delayed, not eliminated. Reports on Monday indicated that Iran was pushing for a U.N. arms embargo to be lifted as part of the deal, which the U.S. has opposed. Still, because of the compromise Congress and the administration agreed to in the spring, opponents would need a two-thirds majority to block the deal, and that would require a large number of Democrats deserting the president. To that end, Obama has largely cleared his schedule this week in anticipation of an announcement, and he’s invited Senate Democrats to the White House on Tuesday night—possibly to lobby them on the agreement.
Lawmakers like to talk about asserting their prerogatives a lot more than they actually assert them, as Congress demonstrated by its inaction on a resolution authorizing Obama’s war against ISIS. It’s entirely possible that neither the House nor Senate will hold a vote on the Iran deal, in which case the deal would take effect after the 30- or 60-day window ends. But both chambers are getting ready to act. The House is “on high alert,” said Kevin Smith, a spokesman for Speaker John Boehner, and a hearing is already scheduled for Thursday in the Foreign Affairs Committee. In the Senate, Corker has been convening briefings for members ahead of an announcement. The party leadership could decide to hold votes just to put Democrats in a tough spot. “If this is a bad deal, Democrats are not going to be able to hide,” Smith told me.
Republicans might not be able to stop an agreement that they see as a capitulation to a dangerous regime, but it looks like they’re going to try.
We want to hear what you think about this article. Submit a letter to the editor or write to [email protected].
|
Tehran, YJC. Boroujerdi says that Iran is the leading power in regional defense.
The chairman of the Majlis National Security and Foreign Policy Committee said, "Iran’s Ministry of Defense has a special place in regional deterrence. Iran’s power in the region and the world is so great that no superpower, even the likes of America, would dare attack the country. They struck nuclear sites in Iraq and Syria but dared not attack Iran."
"After the revolution, since they didn’t have the F14 in their air force, the US destroyed the entire arsenal of parts belonging to the fighter, intending to prevent the parts from being smuggled to Iran. But our experts, relying on national capabilities, rebuilt all the parts inside the country."
He also pointed to the Iranian nuclear case being pursued by the US and said, "The outcome of this international encounter between Iran and the US will be the recognition of Iran’s nuclear right and power."
Boroujerdi added, "The fact that John Kerry says the Fordo facilities must be removed is because they know that, with passive defense, they do not have the power to attack the facilities."
|
1. Field of the invention
This invention relates to fixed blade knives with capacity for storage and carriage of survival or tactical gear and particularly a removable, self-contained, ejectable compartment for said gear in the handle.
2. Description of related art
Knives with survival or tactical gear in their handles have been used for many years. The user can store gear in the handle and have it readily available for use at any time. Items such as a knife sharpener, a fire starter, a lighter, fish hooks and line, or a flashlight can be carried conveniently.
No U.S. patents were found for knives of similar design; however, many examples of knives with storage capacity in the handle are available. With these designs it is necessary to have two functioning hands to access gear within the handle. The gear stored in the handle must be accessed by removing a separate butt-cap. A separate butt-cap is inferior to the present invention, as it can be lost, rendering the storage capacity of the handle useless. Also, gear held loosely in the handle, as in all prior examples, requires the user to empty all of the contents onto a suitable area to choose the implement needed.
Many times in the field, a clean area onto which to empty the handle's contents is not available: users are surrounded by leaf- and foliage-covered ground or snow, or are over water, as on a dock or in a boat. Loss of important survival or tactical gear is possible. Nimble fingers and good visibility are required to thread the butt-cap back onto the handle.
|
Orange County Home Mortgage Con Artist (Barely) Gets Punished
Kenny Rojas never attended college, but he operated what appeared to be an honest mortgage-brokerage company in Orange and Los Angeles counties, First Liberty Wholesale Lending Inc.
But Rojas' superficial success masked cheating.
The Orange County man routinely created fake bank documents to inflate incomes for home loan applicants, according to records at the Ronald Reagan Federal Courthouse in Santa Ana.
FBI agents ended the scam when they arrived with questions at Rojas' home in May 2008.
In December 2009, a federal grand jury issued a three-count indictment against Rojas for conspiracy to commit mail fraud, mail fraud, and aiding and abetting in financial crimes causing more than $1 million in losses.
Rojas initially pleaded not guilty, but in 2010 he accepted a plea-bargain deal, admitting to the conspiracy count to get the other two charges dropped, and asked for no prison time--a request bolstered by the fact that he cooperated with the FBI, including without legal representation, during the criminal investigation that sent another man, Eduardo Ruiz, to prison for 108 months.
Federal prosecutors did not want the public to know their stance on Rojas' punishment; they got their position sealed.
But this month, U.S. District Court Judge David O. Carter sentenced Rojas, 34, to undergo three years of probation and pay a $100 special assessment.
|
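A minimal usage example for the Node.js dtrace-provider module, assuming the module has been built one directory up (hence the relative require path):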
var d = require('../dtrace-provider');
// Create a userland DTrace provider named "nodeapp".
var provider = d.createDTraceProvider("nodeapp");
// Add a probe "p1" that carries two integer arguments.
var probe = provider.addProbe("p1", "int", "int");
// Make the provider (and its probes) visible to DTrace.
provider.enable();
// Fire the probe; the callback's return value supplies the arguments.
// Only as many values as the probe declares (two ints here) are used.
probe.fire(function(p) {
  return [1, 2, 3, 4];
});
|
As is well known, Fibre Channel (FC) is an American National Standards Institute (ANSI) standard specifying a bidirectional serial data channel, structured for high performance capability. Physically, the Fibre Channel may be viewed as an interconnection of multiple communication points, called N_Ports, interconnected by a link comprising a switching network, called a fabric, or a point-to-point link. Fibre is a general term used to cover all physical media types supported by the Fibre Channel, such as optical fibre, twisted pair, and coaxial cable.
The Fibre Channel provides a general transport vehicle for Upper Level Protocols (ULPs) such as Intelligent Peripheral Interface (IPI) and Small Computer System Interface (SCSI) command sets, High-Performance Parallel Interface (HIPPI) data framing, IP (Internet Protocol), IEEE 802.2, and others. Proprietary and other command sets may also use and share the Fibre Channel, but such use is not defined as part of the Fibre Channel standard.
Fibre Channel is structured as a set of hierarchical functions denoted FC-0, FC-1, FC-2, FC-3 and FC-4.
FC-0 defines the physical portions of the Fibre Channel including the fibre, connectors, and optical and electrical parameters for a variety of data rates and physical media. Coax and twisted pair versions are defined for limited distance applications. FC-0 provides the point-to-point physical portion of the Fibre Channel. A variety of physical media is supported to address variations in cable plants.
FC-1 defines the transmission protocol which includes the serial encoding, decoding, and error control.
FC-2 defines the signaling protocol which includes the frame structure and byte sequences.
FC-3 defines a set of services which are common across multiple ports of a node.
FC-4 is the highest level in the Fibre Channel standard. It defines the mapping between the lower levels of the Fibre Channel and the IPI and SCSI command sets, the HIPPI data framing, IP, and other ULPs.
Additional details regarding these and other aspects of Fibre Channel can be found in the ANSI Fibre Channel standard documents, including the FC-PH, FC-FS, FC-AL, FC-PI, FC-DA and FC-LS documents, all of which are incorporated by reference herein.
In typical conventional practice, Fibre Channel links are designed to operate at data rates of 4.25 Gbps, 2.125 Gbps or 1.0625 Gbps. Although higher data rates are possible, the industry is reluctant to spend money upgrading existing hardware to implement these higher data rates. The problem is that as data rates increase, to the proposed Fibre Channel rates of 8 Gbps, 16 Gbps and higher, the existing hardware degrades the electrical signals to the extent that error-free operation cannot be achieved without electrical equalization.
Current implementations generally attempt to address this problem through the use of pure receive equalization. However, at high data rates, on the order of 8 Gbps or higher, this receive-only equalization approach is very complicated, and requires significant increases in size and power consumption for the associated hardware. Moreover, the receive-only equalization approach may fail to provide the desired error-free operation at the high data rates.
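By way of rough illustration only (this is generic signal-processing, not anything specified in the Fibre Channel documents), an equalizer is often realized as a short finite-impulse-response (FIR) filter whose tap weights, hypothetical values below, are chosen to undo the channel's inter-symbol interference:
#include <array>
#include <cstddef>
#include <iostream>
#include <vector>

// Minimal 3-tap FIR equalizer sketch. The tap weights are hypothetical;
// real designs derive them from the measured channel response.
std::vector<double> equalize(const std::vector<double>& samples,
                             const std::array<double, 3>& taps)
{
    std::vector<double> out(samples.size(), 0.0);
    for (std::size_t n = 0; n < samples.size(); ++n)
        for (std::size_t k = 0; k < taps.size() && k <= n; ++k)
            out[n] += taps[k] * samples[n - k];
    return out;
}

int main()
{
    // A crude "smeared" pulse standing in for a degraded received signal.
    std::vector<double> rx = {0.1, 0.8, 0.3, 0.1, 0.0};
    // Taps that boost the main sample and subtract trailing energy.
    std::array<double, 3> taps = {1.2, -0.3, -0.1};
    for (double v : equalize(rx, taps))
        std::cout << v << ' ';
    std::cout << '\n';
}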
Accordingly, what is needed is an improved approach to equalization for Fibre Channel or other bidirectional serial data channels, which can accommodate higher data rates without the need for hardware infrastructure upgrades while also avoiding the drawbacks of conventional receive-only equalization.
|
Q:
Prove that $\gcd(a,b) = \gcd(b,a)$
I want to prove $\gcd(a,b)=\gcd(b,a)$. I tried using the Euclidean algorithm, but that didn't help me much.
A:
If $\gcd(a,b)=x$ then $x \mid a$ and $x \mid b$; since every common divisor of $a$ and $b$ divides $\gcd(b,a)$, this implies $x \mid \gcd(b,a)$. By the same argument, $\gcd(b,a) \mid \gcd(a,b)$. Two positive integers that divide each other are equal, so $\gcd(a,b) = \gcd(b,a)$.
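Equivalently, one can argue directly from the definition: the set of common divisors does not depend on the order of the pair,
$$\{d : d \mid a \ \text{and}\ d \mid b\} = \{d : d \mid b \ \text{and}\ d \mid a\},$$
and $\gcd$ is by definition the greatest element of this set, so $\gcd(a,b)=\gcd(b,a)$.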
|
DNA sequencing and copy number variation analysis of MCHR2 in a cohort of Prader Willi like (PWL) patients.
Prader Willi Syndrome (PWS) is a syndromic form of obesity caused by a chromosomal aberration on chromosome 15q11.2-q13. Patients with a phenotype comparable to PWS who do not carry the 15q11.2-q13 defect are classified as Prader Willi like (PWL). In the literature, PWL patients frequently harbor deletions at 6q16, which led to the identification of the single-minded 1 (SIM1) gene as a possible cause for the presence of obesity in these patients. However, our previous work in a PWL cohort showed a rather limited involvement of SIM1 in the obesity phenotype. In this paper, we investigated the causal role of the melanin-concentrating hormone receptor 2 (MCHR2) gene in PWL patients, as most of the reported 6q16 deletions also encompass this gene and it is suggested to be active in the control of feeding behavior and energy metabolism. Copy number variation analysis of the MCHR2 genomic region, followed by mutation analysis of MCHR2, was performed in a PWL cohort. Genome-wide microarray analysis of 109 patients with PWL did not show deletions on chromosome 6q16 in any patient. Mutation analysis in 92 patients with PWL demonstrated three MCHR2 variants: p.T47A (c.139A>G), p.A76A (c.228T>C) and c.*16A>G. We identified a significantly higher prevalence of the c.228T>C C allele in our PWL cohort compared to previously published results and controls of the ExAC Database. Overall, our results are in line with some previously performed studies suggesting that MCHR2 is not a major contributor to human obesity and the PWL phenotype.
|
Emphysematous pyelonephritis.
A 54-year-old nondiabetic male presented with high fever, vague lower abdominal pain and leakage of urine around his long-standing suprapubic catheter. Examination revealed pyrexia and tenderness in the right renal angle. White cell count was 22.8 x 10(9)/l. Plain abdominal X-ray showed calculi in the right kidney, ureter and bladder. Intravenous pyelogram showed gas confined to the right upper renal pelvis and perinephric space. Investigations: urine and blood cultures, plain abdominal X-ray, intravenous pyelogram, abdominal ultrasound, MAG3 renogram and histopathology. Diagnosis: emphysematous pyelonephritis, class 2 or type 1; Escherichia coli was isolated from urine obtained by endoscopic drainage. Management: endoscopic drainage of pus and simple nephrectomy.
|
After having typically appeared in the very hallowed pages of Baseball Think Factory, Dan Szymborski’s ZiPS projections have been released at FanGraphs the past couple years. The exercise continues this offseason. Below are the projections for the Boston Red Sox. Szymborski can be found at ESPN and on Twitter at @DSzymborski.
Other Projections: Arizona / Atlanta / Chicago AL / Chicago NL / Cleveland / Colorado / Detroit / Houston / Los Angeles AL / Los Angeles NL / Miami / Milwaukee / Minnesota / New York NL / Oakland / San Diego / San Francisco / St. Louis / Tampa Bay / Washington
Batters
Despite the considerable investments made by the club both in Hanley Ramirez and Pablo Sandoval this offseason — amounting to nearly $200 million collectively, those contracts — the top WAR projection among all Red Sox players belongs to their second-round pick from the 2004 draft. Dustin Pedroia produced the lowest slugging and isolated-power figures (.376 and .098, respectively) of his career last year, while also recording a career-worst strikeout rate (12.3%). ZiPS calls for Pedroia to find some positive regression in all three areas while still retaining his elite second-base defense.
Probably also capable of providing if not elite, then at least above-average, second-base defense is Mookie Betts. Owing to the club's continued employment of Pedroia, however, Betts will be forced to supply above-average defense elsewhere. In this case, the most likely destination is right field. It would be fair to say that Betts doesn't possess the typical right-field profile, featuring less power and size than most who play the position. He has excellent plate-discipline skills, however, plus speed and non-negligible power on contact. Note that Betts' defensive projection below (of -1 runs) is for center field. The equivalent in right would be about +6 or +7 runs saved.
Pitchers
As noted by Dave Cameron last Thursday, Rick Porcello isn’t necessarily the pitcher one conjures up when endeavoring to identify an obvious No. 1 starter for a postseason contender. ZiPS is optimistic that he can at least fake it for the time being, however. Despite having never posted above a 3.2 WAR in any season, Porcello is projected to produce a 3.5 WAR for Boston in 2015. This is unusual, of course. Projections systems are marked, if anything, by the application of regression to a player’s performances. To project a career year, then, is by definition atypical.
A hasty inspection of the ZiPS projections published here at the site so far reveals only two pitchers — Clayton Kershaw (60 ERA-) and Craig Kimbrel (42 ERA-) — to receive a better ERA forecast relative to park and league than Koji Uehara. That group will likely receive at least one more member — after Cincinnati’s projections are published, for example — but the point remains that Uehara continues to profile as one of the league’s best pitchers on a per-inning basis.
Bench/Prospects
Outfielder Shane Victorino (402 PA, 1.2 WAR) is omitted from the depth-chart image below, but will almost certainly play something larger than a typical bench role. The same sentiment probably applies to Allen Craig (506 PA, 0.5 WAR) and, merely because his defense is so good, Jackie Bradley Jr. (505 PA, 0.7 WAR). Catcher Blake Swihart (461 PA, 1.9 WAR) receives the top WAR projection among batting prospects. Rookie-eligible pitchers Matt Barnes, Edwin Escobar, Henry Owens, and Eduardo Rodriguez — all four of them — receive a projection better than 1.0 WAR.
Depth Chart
Below is a rough depth chart for the present incarnation of the Red Sox, with rounded projected WAR totals for each player. For caveats regarding WAR values see disclaimer at bottom of post. Click to embiggen image.
Ballpark graphic courtesy Eephus League. Depth charts constructed by way of those listed here at site and author’s own haphazard reasoning.
Batters, Counting Stats
[table]
Batters, Rates and Averages
[table]
Batters, Assorted Other
[table]
Pitchers, Counting Stats
[table]
Pitchers, Rates and Averages
[table]
Pitchers, Assorted Other
[table]
Disclaimer: ZiPS projections are computer-based projections of performance. Performances have not been allocated to predicted playing time in the majors — many of the players listed above are unlikely to play in the majors at all in 2015. ZiPS is projecting equivalent production — a .240 ZiPS projection may end up being .280 in AAA or .300 in AA, for example. Whether or not a player will play is one of many non-statistical factors one has to take into account when predicting the future.
Players are listed with their most recent teams unless Dan has made a mistake. This is very possible as a lot of minor-league signings are generally unreported in the offseason.
ZiPS is projecting based on the AL having a 3.93 ERA and the NL having a 3.75 ERA.
Players that are expected to be out due to injury are still projected. More information is always better than less information and a computer isn’t what should be projecting the injury status of, for example, a pitcher with Tommy John surgery.
Regarding ERA+ vs. ERA- (and FIP+ vs. FIP-) and the differences therein: as Patriot notes here, they are not simply mirror images of each other. Writes Patriot: “ERA+ does not tell you that a pitcher’s ERA was X% less or more than the league’s ERA. It tells you that the league’s ERA was X% less or more than the pitcher’s ERA.”
Both hitters and pitchers are ranked by projected zWAR — which is to say, WAR values as calculated by Dan Szymborski, whose surname is spelled with a z. WAR values might differ slightly from those which appear in full release of ZiPS. Finally, Szymborski will advise anyone against — and might karate chop anyone guilty of — merely adding up WAR totals on depth chart to produce projected team WAR.
|
Q:
Text not visible in mozilla
When I type my login credentials in Mozilla, the text is not visible, but the credentials are submitted. However, when the same site is run in another browser, everything works fine. Any solutions?
A:
Check the style applied to the TextBox; for example, a text color that matches the background will make typed text invisible.
OR
Try disabling or reducing the padding; large padding can push the text out of view.
|
1. Field of the Invention
The invention relates to risk analysis in oil and gas prospecting. More particularly, it relates to the use of seismic attributes and supporting data quality to reduce the uncertainty of hydrocarbon presence and the uncertainty of accumulation size.
2. Description of the Prior Art
Oil and gas exploration is typically a high-risk enterprise. Several geologic factors are needed to ensure a petroleum accumulation. Prior to drilling, there is usually incomplete information, of variable quality, regarding the necessary geologic factors. One important tool for pre-drill risk mitigation is seismic data.
For many years seismic exploration for oil and gas has involved the use of seismic energy sources and seismic receivers. The seismic receivers (the land-based versions are commonly called geophones, the aquatic versions hydrophones) sense acoustic waves and produce electric signals indicative of the sensed waves.
In typical exploration practice, a source wave is generated by a seismic energy source. The source wave travels into the surface of the earth and is reflected or refracted by subsurface geologic features. These reflections are detected by the phones and are converted to electric signals. These electric signals represent acoustic waves reflected from the interfaces between subsurface layers in the earth and form a continuous amplitude signal in time. The amplitude recording in time of the phone output at a single location is commonly called a seismic trace.
It is common practice for an arrangement of sources and receivers to be repeated in a predictable pattern, which then allows many seismic traces to be recorded. A collection of seismic traces, gathered in a repeatable way, forms a complete seismic survey. The source and receiver pattern within a seismic survey is generally repeated along a line, called two-dimensional data (2-D) or in some rectangular fashion covering an area, called three-dimensional data (3-D).
Modern seismic recording equipment transforms the analog signals produced by the phones to digital representations of the signal. These seismic traces are stored on a medium, such as magnetic tape, as digital samples. The digitized traces containing the reflection amplitudes from the earth can then be rearranged and processed by computer software to form a representative image of the earth's subsurface layers.
One such technique in seismic processing is to form CMP (Common Mid-Point) gathers of seismic traces. The CMP technique groups together seismic traces with the same mid-point between the source and receiver. The traces within the CMP gather are further sorted by increasing distance between source and receiver. This distance between source and receiver is usually referred to as source-receiver offset.
FIG. 1 is a ray diagram detailing the CMP technique. For the case of a flat earth approximation, the CMP gather represents reflection signal from a common point on the interface between subsurface layers. FIG. 1 illustrates the seismic energy reflected from a subsurface interface for source-receiver offset pairs within a CMP gather. Note that in FIG. 1 all of the reflected energy corresponds to the same subsurface point but for differing source-receiver offsets.
FIG. 2 is an exemplary trace showing the gathering of a single subsurface point in the context of a CMP gather. This figure illustrates a single subsurface point within a CMP gather represented by a series of amplitude traces in time with increasing offsets between the source and receiver.
FIG. 3 is an exemplary embodiment of an equivalent time image of the subsurface reflection point or interface produced by ordering the traces in a line or over an area. To enhance signal from a single subsurface point, the reflection amplitudes within a CMP gather are flattened and then summed together (stacked) to eliminate noise or energy that does not correspond to the primary reflection. This process reduces each CMP gather to a single stacked trace. The amplitudes on the stacked trace represent different reflecting interfaces. Those skilled in the art can interpret this CMP stacked amplitude data as equivalent cross-sections of subsurface layers.
The primary use of CMP seismic data is to mitigate the pre-drill uncertainty in finding hydrocarbons. The following discussion, as summarized by FIGS. 4 through 8, outlines the basic elements of hydrocarbon risk analysis.
FIG. 4 is a geologic diagram of conditions for hydrocarbon accumulation. As depicted in FIG. 4, several unlikely geologic conditions should be satisfied for a hydrocarbon accumulation to exist. These geologic elements are (1) trap, (2) reservoir, (3) source, (4) timing, and (5) seal. The most common use for CMP data is to contour subsurface layers and identify the likely area for a trap.
The quality and quantity of the CMP data introduces uncertainty in this estimate. Similar uncertainty exists for each of the geologic elements. This uncertainty is quantified by assigning a probability factor between zero and one (0.0-1.0) to each geologic factor.
FIG. 5 is a probability diagram showing the determination of chance of petroleum accumulation. The product of these probability factors is usually designated Pg and indicates the chance that a petroleum accumulation exists. After all geologic elements have been investigated using the available data, the possible accumulation is referred to as a prospect.
FIG. 6 is a probability distribution curve for accumulation size. Associated with the chance that an accumulation exists is the probability distribution for accumulation size. The size distribution depends on specifics within the geologic area; however, the shape of the distribution remains the same. To those skilled in the art, this shape is referred to as lognormal. The range of values in the lognormal distribution for size can be very large or very small.
FIG. 7 is a probability distribution curve detailing the differences between the curves of FIG. 6. A small range in size distribution means that there is more certainty in the outcome, as illustrated in FIG. 7. The product of the probability distribution for size with the chance that an accumulation exists, Pg, determines the probability of finding a particular size of accumulation.
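As a hedged sketch of the arithmetic just described (the factor probabilities, sizes and distribution parameters below are invented for illustration, not taken from this disclosure), Pg is the product of the per-factor probabilities, and the probability of finding at least a given size multiplies Pg by the lognormal exceedance probability:
#include <cmath>
#include <functional>
#include <iostream>
#include <numeric>
#include <vector>

// Chance that an accumulation exists: the product of the geologic
// factor probabilities (trap, reservoir, source, timing, seal).
double chanceOfAccumulation(const std::vector<double>& factors)
{
    return std::accumulate(factors.begin(), factors.end(), 1.0,
                           std::multiplies<double>());
}

// P(size > s) for a lognormal size distribution with parameters
// mu and sigma of log(size): 0.5 * erfc((ln s - mu) / (sigma * sqrt(2))).
double exceedance(double s, double mu, double sigma)
{
    return 0.5 * std::erfc((std::log(s) - mu) / (sigma * std::sqrt(2.0)));
}

int main()
{
    std::vector<double> factors = {0.7, 0.6, 0.8, 0.9, 0.5}; // hypothetical
    double pg = chanceOfAccumulation(factors);
    // Probability of an accumulation of at least 50 (arbitrary size units).
    double p = pg * exceedance(50.0, std::log(40.0), 0.8);
    std::cout << "Pg = " << pg << ", P(size >= 50) = " << p << '\n';
}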
FIG. 8 is a distribution curve for a distribution estimate. This type of prospect risk analysis greatly aids a successful economic outcome of hydrocarbon exploration.
The use and interpretation of CMP data and its risk mitigation value continues to grow beyond simple mapping for trap. Seismic processing techniques are used to extract information from CMP amplitudes that more directly indicates the presence of hydrocarbons. These extracted attributes of CMP data are often called Direct Hydrocarbon Indicators or DHIs. One example of a DHI attribute is the amplitude variation with offset (AVO) within a CMP gather.
FIG. 9 is an exemplary AVO graph. Under specific conditions, these offset amplitude variations are indicative of a gas reservoir. As is the case with the geologic factors for hydrocarbon accumulation, there is ambiguity in the seismic attributes for DHIs.
Many typical analysis functions cannot empirically assess risk and probability factors in a cohesive, integrated manner. As such, many intermediate steps have to be taken to achieve a subjective analysis of risk and probability factors, such as those discussed above, in the search for hydrocarbon-bearing areas. Many other problems and disadvantages of the prior art will become apparent to one skilled in the art after comparing such prior art with the present invention as described herein.
Aspects of the invention are found in a risk analysis method, and a system for implementing such a method. In this invention, the seismic attributes are correlated versus data quality to give a relative indication of the possible success or economic viability of a particular prospect.
In one exemplary embodiment, a cross plot or matrix is formed of seismic attributes versus data quality. Seismic attributes indicative of hydrocarbons are assigned to the horizontal axis and measures of data quality are assigned to the vertical axis.
Positive seismic attributes of hydrocarbon cause a rightward shift along the horizontal axis. Negative seismic attributes of hydrocarbons cause a leftward shift along the horizontal axis.
Similarly, positive indicators of data quality cause an upward shift along the vertical axis and negative indicators of data quality cause a downward shift along the vertical axis. As confidence in data quality increases (upward shift) and positive seismic indicators of hydrocarbons increase (rightward shift), the confidence in finding hydrocarbons increases and the uncertainty in accumulation size decreases.
The upper right-hand corner of the DHI matrix represents the highest confidence in hydrocarbon presence and accumulation size. The upper left-hand corner of the DHI matrix represents high confidence that no hydrocarbons exist. The lower right-hand or left-hand corner indicates that there is no confidence, and very high risk, in the presence of hydrocarbons, because the data are sparse or of very poor quality.
Aspects of the present invention include the assignment of weights to the seismic attributes and to the data quality measures. These weights scale the response for a particular geologic province. If the geologic province is new or under-explored, it is possible to use the weights and scoring from an analogous area until drilling results provide sufficient experience for matrix calibration.
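A minimal sketch of this weighted cross-plot scoring, with entirely hypothetical scores and weights: positive attribute evidence moves a prospect right, positive data-quality evidence moves it up, and the weighted sums place it in the matrix.
#include <iostream>
#include <vector>

// One piece of evidence: a score in [-1, 1] and a calibration weight.
struct Evidence { double score; double weight; };

// Weighted sum along one axis of the DHI matrix.
double axisPosition(const std::vector<Evidence>& items)
{
    double sum = 0.0;
    for (const auto& e : items) sum += e.weight * e.score;
    return sum;
}

int main()
{
    // Hypothetical seismic-attribute evidence (horizontal axis).
    std::vector<Evidence> attributes = {
        {+0.8, 0.5},  // strong AVO response
        {-0.2, 0.3},  // weak amplitude conformance to structure
    };
    // Hypothetical data-quality evidence (vertical axis).
    std::vector<Evidence> quality = {
        {+0.9, 0.6},  // modern 3-D survey
        {+0.4, 0.4},  // adequate well ties
    };
    double x = axisPosition(attributes);
    double y = axisPosition(quality);
    std::cout << "DHI matrix position: (" << x << ", " << y << ")\n";
    // Upper right: high confidence in hydrocarbon presence and size.
    // Lower half: sparse or poor data, little confidence either way.
}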
As such, a system and method for objectively determining the risk assessment of a hydrocarbon prospect based on data quality and structural characteristics is envisioned. Other objects, advantages and novel features of the present invention will be apparent to those skilled in the art from the following detailed description of the invention, the appended claims, and in conjunction with the accompanying drawings.
|