LipidIMMS Analyzer: integrating multi-dimensional information to support lipid identification in ion mobility-mass spectrometry based lipidomics. Ion mobility-mass spectrometry (IM-MS) has shown great application potential for lipidomics. However, IM-MS based lipidomics is significantly restricted by the software available for lipid structural identification. Here, we developed a software tool, namely, LipidIMMS Analyzer, to support the accurate identification of lipids in IM-MS. For the first time, the software incorporates a large-scale database covering over 260 000 lipids and four-dimensional structural information for each lipid [i.e., m/z, retention time (RT), collision cross-section (CCS), and MS/MS spectra]. Therefore, multi-dimensional information can be readily integrated to support lipid identifications and significantly improve the coverage and confidence of identification. Currently, the software supports different IM-MS instruments and data acquisition approaches. The software is freely available at: http://imms.zhulab.cn/LipidIMMS/. Supplementary data are available at Bioinformatics online.
Committed to the Environment We are committed to the principle of responsible luxury—helping hoteliers to reduce their environmental footprint while maintaining the highest possible standards for guests. ‘Responsible luxury’ isn’t just a catchphrase; it’s our living, breathing mission to create environmentally friendly products that we can all feel good about. Naturally Nature-Friendly. Protecting the environment and the people who use our products is of supreme importance to Gilchrist & Soames. Safeguarding our delicate ecology is deeply rooted in our business philosophy. We pursue ecofriendly practices in all aspects of product creation, from ingredients and packaging to sourcing and manufacturing. Our concern follows the product, even in disposal. As a member of Green Dot, we are aligned with an organization that promotes efficient and environmentally sound recycling and waste management solutions in North America and Europe. Cruelty-Free Development. We are committed to cruelty-free development and manufacturing. Our products and formulations are never tested on animals and are Leaping Bunny-certified. Packaging Innovation. Gilchrist & Soames uses a variety of packaging resins that are highly recyclable, including PET, HDPE, and LDPE. We are also actively engaged in recycling efforts to lessen our impact on the environment. Our bottles, cartons, and labels are made from the most readily available recyclable materials. Step by step, it all adds up to a green today and an even greener tomorrow. At Gilchrist & Soames we recognise that each action our business takes has an impact on the environment and the communities in which we operate. Our Sustainability Policy provides further information.
--- abstract: 'This paper examines the impact of system parameters such as access point density and bandwidth partitioning on the performance of randomly deployed, interference-limited, dense wireless networks. While much progress has been achieved in analyzing randomly deployed networks via tools from stochastic geometry, most existing works assume a very large user density compared to that of access points, which does not hold in a dense network, and/or consider only the user signal-to-interference ratio (SIR) as the system figure of merit, which provides only partial insight on user rate as the effect of multiple access is ignored. In this paper, the user rate distribution is obtained analytically, taking into account the effects of multiple access as well as the SIR outage. It is shown that the user rate outage probability depends on the number of bandwidth partitions (subchannels) and the way they are utilized by the multiple access scheme. The optimal number of partitions is lower bounded for the case of large access point density. In addition, an upper bound on the minimum access point density required to provide an asymptotically small rate outage probability is provided in closed form.' author: - 'Stelios Stefanatos and Angeliki Alexiou, [^1]' title: Access Point Density and Bandwidth Partitioning in Ultra Dense Wireless Networks --- Access point density, bandwidth partitioning, stochastic geometry, ultra dense wireless networks, user rate outage probability. Introduction ============ Small cell networks have attracted a lot of attention recently as they are considered a promising means of satisfying the ever-increasing rate demands of wireless users. Some studies have suggested that, by employing low cost access points (APs), the density $\lambda_a$ of APs will potentially reach, or even exceed, the density $\lambda_u$ of user equipments (UEs), thereby introducing the notion of ultra dense wireless networks [@Qualcomm]. With a large number of APs available, a random UE will most probably connect to a strong-signal AP, have to share the AP resources with only a limited number of co-served UEs, and, ultimately, achieve a high rate. However, in order to exploit the full system resources, a universal frequency reuse scheme is employed, which inevitably results in significant interference that has to be carefully taken into account in system design and performance analysis. Related Works and Motivation ---------------------------- With increasing network density, the task of optimally placing the APs in the Euclidean plane becomes difficult, if not impossible. Therefore, the APs will typically have an irregular, random deployment, which is expected to affect system performance. Recent research has shown that such randomly deployed cellular systems can be successfully analyzed by employing tools from stochastic geometry [@Baccelli; @ElSawy]. While significant results have been achieved, most of these works assume $\lambda_u \gg \lambda_a$, effectively ignoring the UE distribution, and/or consider only the user signal-to-interference ratio (SIR). The assumption $\lambda_u \gg \lambda_a$ does not hold in the case of dense networks, whereas the SIR provides only partial insight on the achieved user rate as the effect of multiple access is neglected [@AndrewsMag]. A few recent works have attempted to address these issues. Specifically, the UE distribution is taken into account in [@Lee; @Li] by incorporating in the analysis the probability of an AP being inactive (no UE present within its cell). 
However, the analysis considers only the SIR. In [@Cao; @Singh], the UE distribution is employed for the computation of user rates under time-division multiple access (TDMA) without considering the effect of SIR outage. In addition, TDMA may not be the best multiple access scheme under certain scenarios. Partitioning the available bandwidth and transmitting on one of the resulting subchannels (SCs), i.e., frequency-division multiple access (FDMA), has been shown in [@Andrews_bandwidth] to be beneficial for the case of ad-hoc networks, assuming a channel access scheme where each node transmits independently on a randomly selected SC. This decentralized scheme was employed in [@Huang] for modeling the uplink of a cellular network with frequency hopping channel access. However, this approach is inappropriate for a practical cellular network where scheduling decisions are made by the AP and transmissions are orthogonalized to eliminate intra-cell interference (no sophisticated processing at the receivers is assumed that would allow for non-orthogonal transmissions). A straightforward modification of the bandwidth partitioning concept for the downlink cellular network was considered in [@Andrews], where UEs are multiplexed via TDMA and transmission is performed on one, randomly selected SC. This simple scheme was shown to provide improved SIR performance, however, with no explicit indication of how many partitions should be employed or how performance would change by allowing more than one UE to transmit in the same time slot on different SCs. Contributions and Paper Organization ------------------------------------ In this paper, the stochastic geometry framework is employed for analyzing the downlink user rate of a dense wireless network under a multiple access scheme that exploits bandwidth partitioning for *both* interference reduction and efficient resource sharing among UEs. The previously mentioned issues are explicitly addressed by considering in the analysis - the UE distribution, - a multiple access scheme that allows for parallel orthogonal transmissions in frequency, - the effect of SIR outage. Under this framework, the user rate distribution is analytically derived for two instances of multiple access schemes, revealing the dependence of performance on the number of bandwidth partitions as well as the way they are utilized. The analytical rate distribution expression, apart from allowing for efficient numerical optimization of system parameters, is employed to derive a closed-form lower bound on the optimal number of partitions for the case of large AP density, as well as a closed-form upper bound on the minimum AP density required to provide a given, asymptotically small, rate outage probability. The latter is of critical importance given the trend of AP densification in future wireless networks. Numerical results demonstrate the merits of increased AP density, as well as of efficient use of bandwidth partitions, in enhancing network performance in terms of achieved user rate. The paper is organized as follows. Section II describes the system model, along with a discussion on the suitability of various metrics with respect to (w.r.t.) UE performance. In Section III, the user rate distribution is analytically obtained for two instances of multiple access schemes. 
The number of SCs that minimizes rate outage probability is investigated in Section IV, and Section V provides a closed-form upper bound on the minimum required AP density that can support a given, asymptotically small, user rate outage probability. Section VI presents numerical examples that provide insights on various system design aspects, and Section VII concludes the paper. System Model and Performance Metrics ==================================== The downlink of an interference-limited, dense wireless network is considered. APs and UEs are randomly deployed over $\mathbb{R}^2$ and modeled as independent homogeneous Poisson point processes (PPPs) $\Phi_a$, $\Phi_u$, with densities $\lambda_a$, $\lambda_u$, respectively. Full-buffer transmissions and Gaussian signaling are assumed, with interference treated as noise at the receivers. Each UE is served by its closest AP, resulting in irregular, disjoint cell shapes forming a Voronoi tessellation of the plane [@Baccelli]. Elimination of intra-cell interference is achieved by an orthogonal FDMA/TDMA scheme with the total system bandwidth partitioned offline into $N$ equal-size SCs. All active APs in the system transmit at the same power over all (active) SCs, with the power selected appropriately large so that the system operates in the interference-limited regime in order to maximize spectral efficiency [@Lozano]. No coordination among APs is assumed, i.e., each AP makes independent scheduling decisions. Considering a typical UE located at the origin and served by its closest AP of index, say, $0$, the SIR achieved at SC $n \in \{1, 2, \ldots, N\}$ is given by $$\label{eq:1} \textrm{SIR}_{n} = \frac{g_{0,n} r_0^{-\alpha}}{\sum_{i\in \Phi_{a}\setminus \{0\}}\delta_{i,n} g_{i,n}r_i^{-\alpha}},$$ where $r_0\geq0$ is the distance from the serving AP, $\alpha > 2$ the path loss exponent, and $g_{0,n}\geq0$ an exponentially distributed random variable with unit mean, modeling small-scale (Rayleigh) fading. The denominator in (\[eq:1\]) represents the interference power at the considered SC, where $g_{i,n}$, $r_i$ are the channel fading and distance of AP $i$ w.r.t. the typical UE, respectively, and $\delta_{i,n} \in \{0,1\}$ is an indicator variable representing whether AP $i$ transmits on SC $n$. Note that $\delta_{i,n}$ depends on the total number of UEs associated with AP $i$ as well as the multiple access scheme, and its presence in (\[eq:1\]) accounts for APs that do not interfere due to lack of associated UEs and/or scheduling decisions. Channel fadings $\{g_{i,n}\}$ are assumed independent, identically distributed (i.i.d.) w.r.t. AP index $i$. $\textrm{SIR}_{n}$ is a random variable due to the randomness of fading, AP and UE locations, as well as the multiple access scheme, and its statistical characterization is of interest. To this end, the interference term of (\[eq:1\]) can be viewed as shot-noise generated by a marked PPP [@Baccelli] of density $\lambda_a$ outside a ball of radius $r_0$ centered at the origin, with marks $\{g_{i,n},\delta_{i,n}\}$. Statistical characterization of a marked PPP can be obtained by standard methods when the following conditions hold [@Baccelli]: 1. marks are mutually independent given the locations of the points, and, 2. each mark depends only on the location of its corresponding point. Channel fadings $\{g_{i,n}\}$ satisfy both conditions by assumption, whereas the variables $\{\delta_{i,n}\}$ satisfy only the first due to their dependence on the total number of UEs associated with each AP. 
By fundamental properties of the PPP, the numbers of UEs associated with different APs are mutually independent since AP cells are disjoint. For each AP, the number of associated UEs is determined by $\lambda_u$ and its cell area, with the latter depending not only on its own position but also on the positions of its neighbouring APs, rendering condition (2) invalid for $\{\delta_{i,n}\}$. In order to obtain tractable expressions for the SIR distribution, the following assumption is adopted: *Assumption 1.* The number $K$ of UEs associated with a random AP is independent of $\Phi_a$. Note that this assumption is actually stronger than the second condition but is convenient as it allows for incorporating the averaged-over-$\Phi_a$ probability mass function (PMF) of $K$ that will be used later in the analysis, given by the following lemma: *Lemma 1.* The PMF of the number $K$ of UEs associated with a randomly chosen AP, averaged over the statistics of $\Phi_a$, is [@Yu] $$\label{eq:Kpmf} \Pr\{K\}=\frac{3.5^{3.5}\Gamma(K+3.5)\tau^{3.5}}{\Gamma(3.5)K!(1+3.5\tau)^{K+3.5}}, K \geq0,$$ where $\Gamma(\cdot)$ is the Gamma function and $\tau \triangleq \lambda_a/\lambda_u$. Note that $\Pr\{K>0\}$ is a decreasing function of $\tau$ and the mean of $K$ equals $1/\tau$. The above approach was shown in [@Lee; @Li] to provide accurate results and will be validated by simulations in Sect. III. Under Assumption 1, $\{\delta_{i,n}\}$ are i.i.d. over $i$ and characterized by the *activity probability* $p_n \triangleq \Pr\{\delta_{i,n}=1\} \in (0,1], \forall i$, whose actual value will be investigated in Sect. III for specific multiple access schemes. The cumulative distribution function (CDF) of $\textrm{SIR}_{n}$ can now be obtained in a simple expression as given by the following lemma: *Lemma 2.* The CDF of $\textrm{SIR}_{n}$ under an activity probability $p_{n}$ is given by [@Andrews] $$\label{eq:FSIR} F_{\textrm{SIR}_{n}}(\theta) \triangleq \Pr\{\textrm{SIR}_{n} \leq \theta\} = 1-\frac{1}{1+p_{n}\rho(\theta)},$$ for $\theta \geq 0$, where $\rho(\theta) \triangleq \theta^{2/\alpha}\int_{\theta^{-2/\alpha}}^{\infty}1/\left(1+u^{\alpha/2}\right)du$. Note that setting $p_{n}$ *a priori* equal to 1, as in, e.g., [@Cao; @Singh], implies that there is always a UE available to be scheduled by every AP of the system, i.e., $\lambda_u \gg \lambda_a$, which is not the case in dense networks. For example, for the case $\lambda_a=\lambda_u$ and noting that $p_n \leq \Pr\{K>0\}$ for any multiple access scheme, it follows from (\[eq:Kpmf\]) that $p_n \leq 0.58$. Knowledge of (\[eq:FSIR\]) is of importance as it provides the probability $F_{\textrm{SIR}_{n}}(\theta_0)$ of service outage due to the inability of the UE to operate below an SIR threshold $\theta_0$, whose value may be dictated by operational requirements, e.g., synchronization, and/or application (QoS) requirements. In addition, $F_{\textrm{SIR}_{n}}(\theta)$ can be used to obtain the CDFs of other directly related quantities of interest by transformation of variables. One such quantity employed extensively in the related literature, e.g., [@Andrews; @Li], is the rate $\overline{R}_{n}$ achieved *per channel use*, i.e., on a single time slot, given by $$\label{eq:R_bound} \overline{R}_{n} \triangleq \frac{1}{N} \log_2(1+\textrm{SIR}_{n}) \textrm{ (b/s/Hz).}$$ Examining $\overline{R}_{n}$ is important from the viewpoint of system throughput [@Li] but provides little insight on the achieved user rate. Note that $\overline{R}_{n}$ is an upper bound on the actual user rate. 
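As a quick numerical companion to Lemmas 1 and 2, the following Python sketch evaluates $\Pr\{K\}$ and $F_{\textrm{SIR}_n}(\theta)$ directly from the formulas above. This is our own illustrative aid, not code from the paper; the NumPy/SciPy usage and the parameter values are assumptions. For $\tau=1$ it reproduces $\Pr\{K>0\}\approx0.58$ as quoted above.

```python
# Numerical sketch of Lemma 1 (PMF of K) and Lemma 2 (CDF of SIR_n).
# Illustrative only: library choices and parameters are our assumptions.
import numpy as np
from math import lgamma, log
from scipy.integrate import quad

def pmf_K(K, tau):
    """Lemma 1: PMF of the number of UEs in a randomly chosen cell."""
    # Evaluated in the log domain to remain stable for large K.
    logp = (3.5 * log(3.5) + lgamma(K + 3.5) + 3.5 * log(tau)
            - lgamma(3.5) - lgamma(K + 1) - (K + 3.5) * log(1 + 3.5 * tau))
    return float(np.exp(logp))

def rho(theta, alpha):
    """rho(theta) = theta^(2/alpha) * int_{theta^(-2/alpha)}^inf du/(1+u^(alpha/2))."""
    integral, _ = quad(lambda u: 1.0 / (1.0 + u ** (alpha / 2.0)),
                       theta ** (-2.0 / alpha), np.inf)
    return theta ** (2.0 / alpha) * integral

def F_SIR(theta, p_n, alpha):
    """Lemma 2: CDF of the per-SC SIR under activity probability p_n."""
    return 1.0 - 1.0 / (1.0 + p_n * rho(theta, alpha))

tau = 1.0
print(sum(K * pmf_K(K, tau) for K in range(500)))   # mean of K: ~1/tau = 1.0
print(1.0 - pmf_K(0, tau))                          # Pr{K > 0}: ~0.58 for tau = 1
print(F_SIR(1.0, p_n=0.58, alpha=3.0))              # SIR outage prob. at theta = 0 dB
```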
When the considered SC has to be time-shared among UEs, the achieved rate will be only a fraction of $\overline{R}_n$. In an attempt to remedy this issue, $\overline{R}_n$ was divided by the (random) number of UEs sharing the SC in [@Cao; @Singh] (only the case $N=1$ was considered). However, this is still a misleading measure of performance as it does not take into account the probability of an SIR outage and, therefore, provides overly optimistic results. In order to avoid these issues, the achieved rate of a typical UE is defined in this paper as $$\label{eq:4} R_{n}\triangleq \mathbf{1}\{\textrm{SIR}_{n} \geq \theta_0\} \frac{\overline{R}_{n}}{(L_{n}+1)} \textrm{ (b/s/Hz)},$$ where $\mathbf{1}\{\cdot\}$ is the indicator function and the integer $L_{n} \geq 0$ is the number of time slots between two successive transmissions to the typical UE, referred to as *delay* in the following. Clearly, (\[eq:4\]) takes into account both the effects of multiple access and SIR outage via $L_{n}$ and the indicator function, respectively. In order to obtain the rate outage probability, i.e., the CDF of $R_n$, (statistical) evaluation of $p_n$ and $L_n$ is required, both depending, in addition to the UE distribution, on the multiple access scheme, which is investigated in the following section. Effect of Multiple Access on Achievable User Rates ================================================== In this section, the effect of multiple access on the achievable user rate is investigated. The multiple access scheme must strive to maximize UE resource utilization while at the same time minimizing inter-cell interference. These are conflicting requirements which, as will be shown, can be (optimally) balanced by the choice of $N$. For analytical purposes, the two schemes considered below are non-channel-aware, with the corresponding performance serving as a lower bound on that of a channel-aware resource assignment scheme, and fair, i.e., there are no priorities among UEs. TDMA ---- The following scheme, referred to in the following as TDMA, will serve as a baseline. - UEs are multiplexed via TDMA. - Transmission to any UE is performed on one, randomly selected SC out of the total $N$. This scheme is a straightforward application of the random SC selection scheme employed in ad-hoc studies [@Andrews_bandwidth] to the downlink cellular setting, and can also be viewed as a generalization of conventional TDMA ($N=1$) that is usually assumed in works on cellular networks. It was first examined in [@Andrews], where it was shown to provide improved SIR coverage by using essentially the same principle as a frequency hopping scheme. FDMA/TDMA --------- The major argument against TDMA is that it cannot serve multiple UEs in parallel in frequency when $N>1$, even though such parallel transmissions are expected to be beneficial under certain operational scenarios. In this paper, the following simple modification is employed, referred to as FDMA/TDMA in the following, that allows for multiple UEs to be served in a single time slot ($\lfloor \cdot \rfloor$ denotes the floor operator, i.e., the largest integer not exceeding its argument). - Define $\mathcal{N} \subseteq \{1,2,\ldots,N\}$ as the set of SCs available for allocation at any given instant. - Randomly order the $K$ cell UEs via an index $k \in \{1,2,\ldots,K\}$. Initialize $\mathcal{N} \gets \{1,2,\ldots,N\}$ and, for $k=1,2,\ldots,K$: if $\mathcal{N}=\emptyset$, reset $\mathcal{N} \gets \{1,2,\ldots,N\}$; assign UE $k$ a random SC $n_k \in \mathcal{N}$; update $\mathcal{N} \gets \mathcal{N}\backslash\{n_k\}$. - UEs sharing the same SC are multiplexed via TDMA. Note that the above scheme also subsumes conventional TDMA as a special case. 
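The SC assignment step of FDMA/TDMA is easy to express in code. The sketch below is our own reconstruction of the listed steps for illustration (the reset of $\mathcal{N}$ once it is exhausted is implied by the load pattern described next); all names are illustrative.

```python
# Sketch of the FDMA/TDMA subchannel assignment described above; our own
# reconstruction for illustration, not the authors' implementation.
import random

def assign_subchannels(K, N):
    """Assign K randomly ordered cell UEs to N subchannels (SCs).

    Each UE draws a random SC from the currently available set; the set is
    replenished once exhausted, so per-SC loads differ by at most one.
    Returns a dict mapping SC index -> list of UEs time-sharing that SC.
    """
    available = list(range(1, N + 1))
    allocation = {n: [] for n in range(1, N + 1)}
    for k in range(1, K + 1):
        if not available:                    # all SCs used once: reset the set
            available = list(range(1, N + 1))
        n_k = random.choice(available)       # random SC from the available set
        available.remove(n_k)
        allocation[n_k].append(k)
    return allocation

# Example: K = 5 UEs, N = 3 SCs -> two SCs carry 2 UEs (TDMA-shared), one carries 1;
# K < N would instead leave some SCs unused. N = 1 recovers conventional TDMA.
print(assign_subchannels(5, 3))
```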
Two typical realizations of the scheme for $N=3$ are shown in Fig. 1. As can be seen, there will be cases with unused SCs ($K<N$) or SCs that support one additional UE compared to others ($K>N$), with no action taken to compensate for these effects. The inefficient bandwidth utilization is not a real issue since the presence of unused SCs is beneficial in terms of reduced interference, and $N$ is a parameter that can be set to a small enough value so that this event is avoided, if desired. The load imbalance among SCs is irrelevant for rate computations due to averaging. Having specified the multiple access schemes, the corresponding quantities $p_{n}$ and $L_{n}$ will be evaluated in the following subsections. By the symmetry of the system model and the schemes considered, none of the performance metrics presented in Sect. II depends on $n$. Therefore, the typical UE will be considered assigned to SC 1 and the index $n$ will be dropped from the notation in the following. Computation of Activity Probability ----------------------------------- The following lemma holds for the activity probability $p$ of TDMA and FDMA/TDMA. *Lemma 3.* Under Assumption 1, the activity probability of any AP in the system, other than AP $0$, is $$\label{eq:activ_prob} p = \left\{\begin{IEEEeqnarraybox}[\relax][c]{l's} \frac{1}{N}\Pr\{K>0\},& for TDMA,\\ \frac{1}{N}\sum_{K>0}\Pr\{K\}\min\{K,N\},& for FDMA/TDMA, \end{IEEEeqnarraybox}\right.$$ with $\Pr\{K\}$ as given in Lemma 1. See Appendix A. As expected, $p$ is a decreasing function of $N$ in both cases, and it can be easily shown that, for the same $N>1$, $p$ of FDMA/TDMA is lower bounded by the corresponding $p$ of TDMA, with equality when $\Pr\{K \leq 1\} = 1$, i.e., with a dense AP deployment. Note that in [@Andrews], $p$ for TDMA was set equal to $1/N$, implying that $\Pr\{K>0\}=1$, which is (approximately) valid only for $\tau \rightarrow 0$. Substituting (\[eq:activ\_prob\]) in (\[eq:FSIR\]) shows that $F_{\textrm{SIR}}(\theta)$ is decreasing in $N$, i.e., bandwidth partitioning improves performance in terms of SIR. Figure 2 shows the behavior of $p$ as a function of $N$ for FDMA/TDMA and TDMA and various values of $\tau$. Conventional TDMA performance corresponds to $N=1$. It can be seen that both schemes outperform conventional TDMA, with larger values of $N$ required to obtain the same $p$ under heavier system load. TDMA is always better than FDMA/TDMA, especially under heavy system load. For $\tau=10$, both schemes operate essentially identically, and this is reflected in the values of $p$. For $\tau=1$, FDMA/TDMA is worse than TDMA but relatively close, with a similar dependence on $N$ (inversely proportional). *Remark*: According to the previous discussion, the SIR grows unbounded with increasing $\tau$ and/or $N$, which is unrealistic. However, arbitrarily large values of $\tau$ are not of interest due to practical considerations, whereas arbitrarily large values of $N$ are not acceptable from a user rate perspective, as will be shown in Sect. IV (Lemma 6). Computation of Delay -------------------- Computation of the delay requires knowledge of the distribution of the total number $K_0$ of UEs associated with AP 0, *in addition to* the typical UE. The PMF of (\[eq:Kpmf\]) does not hold for AP 0, as conditioning on its cell covering the position of the typical UE makes its cell area larger than that of a random AP [@Baccelli]. Taking this fact into account, the PMF of $K_0$ can be shown to be given as in the following lemma. 
*Lemma 4.* The PMF of the number $K_0$ of UEs associated with AP 0, in addition to the typical UE, is [@Yu] $$\label{eq:K0pmf} \Pr\{K_0\}=\frac{3.5^{4.5}\Gamma(K_0+4.5)\tau^{4.5}}{\Gamma(4.5)K_0!(1+3.5\tau)^{K_0+4.5}}, K_0 \geq0.$$ For the case of TDMA, it is clear that $L=K_0$, whereas $L$ for FDMA/TDMA is given in the following lemma. *Lemma 5.* Define the event $\mathcal{A}_l \triangleq$ {$l$ UEs assigned on SC 1 in addition to the typical UE}, $l\geq0$. The PMF of $L$ for FDMA/TDMA equals $$\label{eq:p_L} \Pr\{L\}=\sum_{K_0\geq0}\Pr\{K_0\}\Pr\{\mathcal{A}_{L} | K_0\},$$ with $\Pr\{K_0\}$ as given in Lemma 4, $$\label{eq:p_L0_K} \setlength{\nulldelimiterspace}{0pt} \Pr\{\mathcal{A}_0 | K_0\}=\left\{\begin{IEEEeqnarraybox}[\relax][c]{l's} 1,& $0 \leq K_0 \leq N-1$, \\ \frac{2N-K_0-1}{N}, &$N \leq K_0 \leq 2N-2 $,\\ 0,&$K_0 \geq 2N-1$,% \end{IEEEeqnarraybox}\right.$$ and $$\Pr\{\mathcal{A}_l | K_0\}=\begin{cases} 0,& \textrm{$0 \leq K_0 \leq l N-1$}, \\ \frac{K_0-lN+1}{N},& \textrm{$lN \leq K_0 \leq (l+1)N-1$,}\\ \frac{(l+2)N-K_0-1}{N},& \textrm{$(l+1)N \leq K_0 \leq (l+2)N-2$,}\\ 0,& \textrm{$K_0 \geq (l+2)N-1$}, \end{cases}$$ for $l \geq 1$. This follows by the same arguments as in the proof of Lemma 3. Figure 3 shows the mean value of $L$ as a function of $N$ for FDMA/TDMA and TDMA and various values of $\tau$. Both schemes provide reduced delay with increasing $\tau$, since the number of UEs associated with the AP is reduced. For the case of FDMA/TDMA, the average $L$ also decreases with $N$, as there are more SCs available to the UEs and the probability that one of them is time-shared by many UEs is reduced. On the other hand, for TDMA, $L$ is independent of $N$ since the availability of SCs is not exploited for parallel transmissions. Computation of Rate ------------------- Obviously, FDMA/TDMA is advantageous when delay is considered, whereas TDMA is more robust to interference. However, it is the achieved UE rate that is of more interest and, at this point, there is no clear indication of which of the two schemes is preferable under this performance metric, i.e., of what is of more importance, robustness to interference or efficient multiple access. Having specified the statistics of $p$ and $L$, the CDF of $R$ can now be obtained for both multiple access schemes as follows. *Proposition 1.* The CDF of $R$ equals $$\label{eq:F_R} F_{R}(r) = \sum_{L\geq0} \Pr\{L\} F_{R}(r|L),$$ with $\Pr\{L\}$ as given in Sect. III.D and $F_{R}(r|L)$ the CDF of $R$ conditioned on the value of $L$, given by $$\label{eq:F_R_L} \setlength{\nulldelimiterspace}{0pt} F_{R}(r|L)=\left\{\begin{IEEEeqnarraybox}[\relax][c]{l's} F_{\textrm{SIR}}(\theta_0),& $r \leq \frac{\overline{R}_{\theta_0}}{N(L+1)}$, \\ F_{\textrm{SIR}}\left(2^{rN(L+1)}-1\right),& $r \geq \frac{\overline{R}_{\theta_0}}{N(L+1)}$, \end{IEEEeqnarraybox}\right.$$ where ${\overline{R}_{\theta_0}} \triangleq \log_2(1+\theta_0)$ is the minimum achievable rate for $N=1$ and $K_0=0$ (no contending UEs), conditioned on UE operation above the SIR threshold $\theta_0$. See Appendix B. Note that the upper term of (\[eq:F\_R\_L\]) indicates that, for small values of $r$, the rate outage probability coincides with the SIR outage probability, irrespective of the actual value of $r$, as in this rate region it is the SIR outage event (strong interference) that prevents UEs from achieving these rates. For larger rates, $L$ appears in the lower term of (\[eq:F\_R\_L\]), i.e., multiple access also affects performance in addition to interference. 
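For concreteness, Proposition 1 can be evaluated numerically by combining Lemmas 1 to 5. The sketch below does this for FDMA/TDMA; it is our own illustrative implementation (SciPy usage, truncation limits and parameter values are assumptions, not the authors' code). For $\tau=1$, $N=5$, $\theta_0=0$ dB, $\alpha=3$ it gives a rate outage probability at $r=0.1$ b/s/Hz of about 0.25, consistent with the Fig. 4 discussion below.

```python
# Numerical sketch of Proposition 1 (rate CDF of FDMA/TDMA) from Lemmas 1-5.
# Illustrative only; truncation limits and parameters are our assumptions.
import numpy as np
from math import lgamma, log, log2
from scipy.integrate import quad

def pmf(K, tau, s):
    """s = 3.5 gives Lemma 1 (random cell); s = 4.5 gives Lemma 4 (cell of AP 0)."""
    return float(np.exp(s * log(3.5) + lgamma(K + s) + s * log(tau)
                        - lgamma(s) - lgamma(K + 1) - (K + s) * log(1 + 3.5 * tau)))

def rho(theta, alpha):
    I, _ = quad(lambda u: 1.0 / (1.0 + u ** (alpha / 2.0)),
                theta ** (-2.0 / alpha), np.inf)
    return theta ** (2.0 / alpha) * I

def pr_Al(l, K0, N):
    """Lemma 5: Pr{l additional UEs share the typical UE's SC | K0}."""
    if l == 0:
        return 1.0 if K0 <= N - 1 else max(0.0, (2 * N - K0 - 1) / N)
    if l * N <= K0 <= (l + 1) * N - 1:
        return (K0 - l * N + 1) / N
    if (l + 1) * N <= K0 <= (l + 2) * N - 2:
        return ((l + 2) * N - K0 - 1) / N
    return 0.0

def F_R(r, tau, N, theta0, alpha, Kmax=500):
    p = sum(pmf(K, tau, 3.5) * min(K, N) for K in range(1, Kmax)) / N  # Lemma 3
    F_sir = lambda th: 1.0 - 1.0 / (1.0 + p * rho(th, alpha))          # Lemma 2
    R0 = log2(1 + theta0)                                              # R_bar_theta0
    total = 0.0
    for l in range(Kmax // N + 2):                                     # sum over L
        prL = sum(pmf(K0, tau, 4.5) * pr_Al(l, K0, N) for K0 in range(Kmax))
        th = theta0 if r <= R0 / (N * (l + 1)) else 2 ** (r * N * (l + 1)) - 1
        total += prL * F_sir(th)
    return total

# Rate outage at r = 0.1 b/s/Hz for tau = 1, N = 5, theta0 = 0 dB, alpha = 3:
print(F_R(0.1, tau=1.0, N=5, theta0=1.0, alpha=3.0))   # ~0.25, cf. Fig. 4
```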
Figure 4 shows $F_{R}(r)$ for $\theta_0 = 0 \textrm{ dB}$, $\alpha = 3$, $N = 1, 5, 10$, and various values of $\tau$, for FDMA/TDMA and TDMA. For the case of $\tau=1$, each analytical CDF is accompanied by the corresponding empirical CDF (dotted lines) obtained by simulations (simulation results for other $\tau$ values are omitted for clarity). The good match between analysis and simulation validates the use of the derived formulas for system analysis and design. As can be seen, increasing the AP density, i.e., increasing $\tau$, results in improved performance as the distance between the UE and the serving AP, as well as the number of UEs sharing the resources of a single AP, are reduced, which outweighs the effect of the reduced distance from interfering APs. Concerning the dependence of the rate on $N$, it can be seen that setting $N=1$ (conventional TDMA) is optimal when large data rates are considered, irrespective of $\tau$. However, the shape of the CDF for $N=1$ indicates a highly unfair system. Increasing $N$ results in a progressively fairer system, favoring the small-rate operational region. In particular, for $\tau=1$, and assuming a rate outage when the typical UE rate is below $0.1$ b/s/Hz (corresponding to 2 Mbps in a 20 MHz system bandwidth), the outage probability is about 0.49, 0.25, and 0.15, for $N=1, 5,$ and $10$, respectively, with FDMA/TDMA. Comparing FDMA/TDMA and TDMA for the same $N>1$, it can be seen that TDMA is a better choice at low data rates. As stated above, in this rate range it is the SIR outage probability that defines performance, and TDMA is preferable as it is more robust to SIR outage events. When higher data rates are considered, TDMA is penalized by the inability of concurrent UE transmissions and FDMA/TDMA becomes a better choice. In Fig. 4, this difference in performance is most clearly seen for $\tau=0.1$, which results in an average $K$ and $K_0$ equal to 10 and 12.8, respectively. For this load and the values of $N$ considered, FDMA/TDMA utilizes all SCs with high probability, resulting in significantly larger interference compared to that generated by TDMA, which only allows for transmission on a single SC. However, for the same reason, the performance of FDMA/TDMA is significantly better at higher rates as it provides a much smaller delay than TDMA. Results for $\tau=1, 10$ show that the performance advantage of FDMA/TDMA at higher rates and of TDMA at lower rates still holds but is less pronounced. Optimal Number of Subchannels ============================= Simple examination of Fig. 4 shows that, for a given minimum rate $r_0 > 0$, there is an $r_0$-dependent, optimal number $N^*$ of SCs that minimizes the rate outage probability, which can be obtained by numerical search using the analytical expression of Proposition 1. Unfortunately, the highly non-linear dependence of $F_{R}(r_0)$ on $N$ does not allow for a closed-form expression of $N^*$ that holds in the general case. However, the following proposition, valid under certain operational scenarios to be identified right after it, provides some guidelines. 
*Proposition 2.* Under the assumption $\Pr\{L = 0\}=1$ (no time-sharing of SCs) and for any $\theta_0 \geq 0$, $r_0>0$, the optimal number $N^*$ of SCs that minimizes $F_R(r_0)$ is lower bounded by $$\label{eq:Nlb} N^* \geq N^*_{\textrm{\emph{lb}}} \triangleq \max\left\{1,\left\lfloor \overline{R}_{\theta_0}/r_0 \right\rfloor\right\}.$$ *Proof.* Setting $r=r_0$ and keeping only the term $L=0$ in (\[eq:F\_R\]), $F_R(\cdot)$ can be written as a function of $N$ as $$\label{eq:F_R_LB} \setlength{\nulldelimiterspace}{0pt} F_{R}(N)=\left\{\begin{IEEEeqnarraybox}[\relax][c]{l's} F_{\textrm{SIR}}(\theta_0),& $N \leq \frac{\overline{R}_{\theta_0}}{r_0}$, \\ F_{\textrm{SIR}}\left(2^{r_0N}-1\right),& $N \geq \frac{\overline{R}_{\theta_0}}{r_0}$. \end{IEEEeqnarraybox}\right.$$ As shown in Sect. III.C, $F_{\textrm{SIR}}(\theta_0)$ is a decreasing function of $N$ for both TDMA and FDMA/TDMA; therefore, so is $F_R(r_0)$ for $N \leq \frac{\overline{R}_{\theta_0}}{r_0}$. Hence, no $N$ smaller than $\left\lfloor \overline{R}_{\theta_0}/r_0 \right\rfloor$ can result in a smaller outage probability than $N = \left\lfloor \overline{R}_{\theta_0}/r_0 \right\rfloor$ itself, which gives (\[eq:Nlb\]). As discussed in Sect. III.D, $L$ can be made arbitrarily small for both TDMA and FDMA/TDMA by increasing $\tau$; therefore, Proposition 2 holds asymptotically for $\tau \gg 1$, i.e., in an ultra dense AP deployment where $\Pr\{K_0=0\} \rightarrow 1$. However, FDMA/TDMA can also reduce the delay by increasing $N$. It is easy to see that if, for a given $\tau$, $N^*_{\textrm{lb}}$ is larger than the minimum value of $N$ required for $\Pr\{K_0\leq N-1\} = 1$, i.e., no UEs sharing an SC, then $N^*$ for FDMA/TDMA cannot be smaller than $N^*_{\textrm{lb}}$. These observations are summarized in the following corollary. *Corollary 1.* The bound of (\[eq:Nlb\]) holds for TDMA when $\tau$ is sufficiently large so that $\Pr\{K_0=0\} \rightarrow 1$, and for FDMA/TDMA when $\tau$ is sufficiently large so that $\Pr\{K_0\leq N^*_{\textrm{lb}} - 1\} \rightarrow 1$. Note that the lower bound of (\[eq:Nlb\]) is inversely proportional to $r_0$, corresponding to the fact that, when lower user rates are considered, there is no need for large bandwidth utilization and the system can reduce interference by increasing $N$. For rates $r_0>\overline{R}_{\theta_0}/2$ the bound becomes trivial, i.e., equal to one; however, these rates may be of limited interest in a practical setting as they lead to a large rate outage probability, even with optimized $N$ and moderate load (see Fig. 4 and Sect. VI). The following lemma guarantees that an arbitrarily large $N$ cannot provide any non-zero $r_0$, even though the SIR grows unbounded with $N$. *Lemma 6.* For $N\rightarrow \infty$, $F_R(r_0) \rightarrow 1$, for any $r_0 >0$. It was shown in [@Andrews] that the mean of $\overline{R}$ is a strictly decreasing function of $N$. Since $0 \leq R \leq \overline{R}$, it follows that the mean of $R$ tends to 0 with increasing $N$, and applying Markov’s inequality completes the proof. Minimum Required Access Point Density for a Given Rate Outage Probability Constraint ==================================================================================== A common requirement in practical systems is to provide a minimum rate $r_0$ to the subscribers with a specified, small outage probability $\epsilon>0$. As is clear from Fig. 4, these system requirements may be such that they cannot be satisfied for a certain $\tau$, even under an optimized $N$. It is then necessary to operate at a greater $\tau$, i.e., increase the AP density, and it is of interest to know the minimum value, $\tau_{\textrm{min}}$, that can provide the given requirements. 
As in the case of $N^*$, an expression for $\tau_{\textrm{min}}$ cannot be found in closed form for the general case, and a two-dimensional numerical search (over $\tau$ and $N$) is necessary. However, under an asymptotically small $\epsilon$, an upper bound on $\tau_{\textrm{min}}$ can be obtained for the case of FDMA/TDMA, as stated in the following proposition. *Proposition 3.* For FDMA/TDMA and asymptotically small $\epsilon$, the minimum value of $\tau$, $\tau_{\textrm{min}}$, that can support a UE rate $r_0$ with $F_{R}(r_0) \leq \epsilon$ under an SIR threshold $\theta_0$ is upper bounded as $$\label{eq:tmin} \tau_{\textrm{min}} \leq \left\{\begin{IEEEeqnarraybox}[\relax][c]{l's} \frac{(1-\epsilon)\rho(\theta_0)}{\epsilon \left\lfloor \overline{R}_{\theta_0}/r_0 \right\rfloor},& $r_0 \leq \overline{R}_{\theta_0}$,\\ \frac{(1-\epsilon)\rho(2^{r_0}-1)}{\epsilon},& $r_0 \geq \overline{R}_{\theta_0}$. \end{IEEEeqnarraybox}\right.$$ *Proof.* An upper bound on $\tau_{\textrm{min}}$ can be obtained by seeking the value of $\tau$ that satisfies the requested outage probability constraint with equality under $N=N^*_{\textrm{lb}}$, as given in (\[eq:Nlb\]), which is not guaranteed to be the optimal choice for $N$. In addition, only values of $\tau$ for which $N^*_{\textrm{lb}}$ results in $\Pr\{L=0\} = 1$ are considered, which effectively places a lower bound on the search space of $\tau$ that may be greater than $\tau_{\textrm{min}}$. Under these restrictions, $$\begin{aligned} \label{eq:proof} F_{R}(r_0) &\overset{(a)}{=}& \left\{\begin{IEEEeqnarraybox}[\relax][c]{l's} F_{\textrm{SIR}}(\theta_0),& $r_0 \leq \overline{R}_{\theta_0}$,\\ F_{\textrm{SIR}}(2^{r_0} -1),& $r_0 \geq \overline{R}_{\theta_0}$, \end{IEEEeqnarraybox}\right.\nonumber\\ &\overset{(b)}{=}& \left\{\begin{IEEEeqnarraybox}[\relax][c]{l's} \frac{1}{1+\tau N^*_{\textrm{lb}}/\rho(\theta_0)}, & $r_0 \leq \overline{R}_{\theta_0}$,\\ \frac{1}{1+\tau/\rho(2^{r_0}-1)}, & $r_0 \geq \overline{R}_{\theta_0}$, \end{IEEEeqnarraybox}\right.\end{aligned}$$ where (a) follows from (\[eq:F\_R\]), (\[eq:F\_R\_L\]) with $\Pr\{L=0\} = 1$, and (b) from (\[eq:FSIR\]) and (\[eq:activ\_prob\]) with $\min\{K,N^*_{\textrm{lb}}\}=K, \forall K$. Setting (\[eq:proof\]) equal to $\epsilon$ results in (\[eq:tmin\]). Note that the asymptotically small $\epsilon$ guarantees that the bound of (\[eq:tmin\]) is large enough such that $\Pr\{L=0\} \rightarrow 1$, i.e., it is within the restricted search space employed for its derivation. A simpler form of the bound can be obtained when asymptotic values of $r_0$ are considered, as shown in the following proposition. 
*Proposition 4.* For asymptotically small or large values of $r_0$, the bound of (\[eq:tmin\]) can be approximated by $$\label{eq:tminasympt} \tau_{\textrm{min}} \leq \left\{\begin{IEEEeqnarraybox}[\relax][c]{l's} \frac{(1-\epsilon)\rho(\theta_0)}{\epsilon \overline{R}_{\theta_0} }r_0,& $r_0 \ll \overline{R}_{\theta_0}$,\\ \frac{(1-\epsilon)2\pi}{\epsilon \alpha \sin(2\pi/\alpha)}2^{2r_0/\alpha},& $r_0 \gg \overline{R}_{\theta_0}$. \end{IEEEeqnarraybox}\right.$$ *Proof.* The upper part of (\[eq:tminasympt\]) is obtained by noting that $1/\lfloor \overline{R}_{\theta_0}/r_0 \rfloor \approx r_0/\overline{R}_{\theta_0}$ for $r_0 \rightarrow 0$, whereas the lower part is obtained by noting that $\rho(2^{r_0}-1)\approx \rho(2^{r_0}) = 2^{2r_0/\alpha} \int_{2^{-2r_0/\alpha}}^\infty 1/(1+u^{\alpha/2})du \approx 2^{2r_0/\alpha} \int_0^\infty 1/(1+u^{\alpha/2})du=2^{2r_0/\alpha}\,2\pi/\left(\alpha\sin(2\pi/\alpha)\right)$ for $r_0 \rightarrow \infty$. Equation (\[eq:tminasympt\]) clearly shows that the bound on $\tau_{\textrm{min}}$ grows linearly and exponentially with $r_0$, for asymptotically small and large $r_0$, respectively. Numerical Results and Discussion ================================ This section employs the analytical results obtained previously to examine various aspects of system design. In all cases the path loss exponent is set to $\alpha=3$ and, unless stated otherwise, the SIR threshold is set to $\theta_0 = -6\textrm{ dB}$ ($\overline{R}_{\theta_0}\approx 0.3233$), roughly corresponding to the operational SIR required by the minimum coding rate scheme of a real cellular system [@Piro]. *1) Optimal number of SCs:* Figure 5 shows $F_R(\overline{R}_{\theta_0}/5)$ as a function of $N$ for FDMA/TDMA and TDMA, and $\tau=0.1, 1$, and $10$. Note that, by Proposition 2, $N^*$ is lower bounded by $N^*_{\textrm{lb}} = 5$ when $\Pr\{L=0\} = 1$. Consider first TDMA. It can be directly calculated that $\Pr\{L=0\}=\Pr\{K_0=0\} \approx 0.0023, 0.3227, 0.8809$, for $\tau = 0.1, 1, 10$, respectively. Therefore, the operational conditions of Corollary 1 hold (approximately) only for the $\tau = 10$ case. It can be seen that the bound is actually tight for that case, as $N^*=5$, whereas $N^*$ tends to one as smaller $\tau$ values are considered, i.e., $N^*_{\textrm{lb}}$ is a tight bound when $\tau \gg 1$ but is irrelevant for small $\tau$. Turning to the FDMA/TDMA case, $\Pr\{L=0\}=\Pr\{K_0 < N^*_{\textrm{lb}} -1\} \approx 0.1864, 0.9931, 1$, for $\tau = 0.1, 1, 10$, respectively, i.e., the operational conditions of Corollary 1 correspond to the cases of $\tau=1$ and $10$, with $N^*$ actually equal to $N^*_{\textrm{lb}}$. For $\tau=0.1$, $N^*=12$, i.e., $N^*_{\textrm{lb}}$ also serves as a lower bound in this case, albeit a loose one. However, note that the performance gain with $N^*$ is only marginal compared to that with $N^*_{\textrm{lb}}$. These observations, along with extensive numerical experiments, suggest that setting $N=N^*_{\textrm{lb}}$ as per (\[eq:Nlb\]) is good practice for FDMA/TDMA, as it either corresponds to the optimal value or provides performance close to optimal. For TDMA, setting $N=N^*_{\textrm{lb}}$ for small $\tau$ may lead to considerable performance degradation. *2) Comparison of multiple access schemes with optimal $N$:* Figure 6 depicts the minimum rate outage probability provided by FDMA/TDMA and TDMA when the corresponding optimal $N$ for each rate $r_0$ is employed (found by numerical search). 
Note that these curves should not be confused with CDF curves, since a different $N$ is employed for each rate. The performance of conventional TDMA is also shown. As can be seen, for small to moderate rates ($r_0 < \overline{R}_{\theta_0}$), optimal bandwidth partitioning provides significant benefits compared to conventional TDMA. FDMA/TDMA is shown to outperform TDMA in this regime as it exploits the bandwidth more efficiently. For large rates ($r_0 \geq \overline{R}_{\theta_0}$), all schemes have the same performance as $N^*$ becomes one. It is safe to say that, under an optimal $N$, FDMA/TDMA is preferable to TDMA as it provides at least as good performance, with the added benefit of a reduced delay that is of importance for time-sensitive applications. *3) Effect of SIR threshold:* Figure 7 shows $F_R(r_0)$ as a function of the SIR threshold $\theta_0$, for $r_0=0.1$, $\tau = 1$, and with $N$ optimized for each $\theta_0$ by numerical search. It can be seen that larger $\theta_0$ values result in a degradation of performance for both FDMA/TDMA and TDMA, albeit one much less severe than for conventional TDMA. Also shown is the performance of FDMA/TDMA when $N$ is optimized assuming $\theta_0=0$, i.e., neglecting SIR outage. As expected, performance (significantly) degrades when the actual SIR threshold exceeds a certain value (about $-5$ dB in this case). The performance of TDMA assuming $\theta_0=0$ is not shown as it matches that of conventional TDMA. Similar behaviour is observed for other values of $r_0$, $\tau$ and $\alpha$. These results clearly illustrate the necessity of employing the SIR threshold in system analysis and design. *4) Minimum AP density:* Figure 8 shows $\tau_{\textrm{min}}$ as a function of the rate $r_0$, obtained by a two-dimensional numerical search over $\tau$ and $N$, for FDMA/TDMA, TDMA and conventional TDMA, under an outage constraint $F_R(r_0) \leq 0.1$. In addition, the asymptotic bounds of (\[eq:tminasympt\]) are also shown. Note that, even though (\[eq:tminasympt\]) is derived assuming an asymptotically small $\epsilon$, it still provides a very good approximation of $\tau_{\textrm{min}}$ in this case. Specifically, $\tau_{\textrm{min}}$ of FDMA/TDMA exhibits the behavior predicted by (\[eq:tminasympt\]), i.e., it increases linearly and exponentially with $r_0$ for asymptotically small and large $r_0$, respectively. The performance of TDMA follows the same trend as FDMA/TDMA but results in about 1.5 times larger values of $\tau_{\textrm{min}}$ for small $r_0$. It is interesting to note that the very good correspondence of the numerical and analytical results for FDMA/TDMA implies that the optimal system parameters ($\tau$ and $N$) for FDMA/TDMA are such that $\Pr\{L=0\}\approx 1$, i.e., there is a small probability of sharing an SC. In contrast, TDMA achieves performance close to FDMA/TDMA with $\Pr\{L=0\}\approx 0$ for small $r_0$ (and, correspondingly, small $\tau_{\textrm{min}}$). Conventional TDMA is clearly out of consideration for the small rate region as it suffers significantly from interference, and its only mechanism for reducing it is a large AP density. For rate values equal to or greater than $\overline{R}_{\theta_0}$, all schemes coincide as the optimal value of $N$ turns out to be equal to one. Conclusion ========== In this paper, system parameter selection, namely, the number of bandwidth partitions and the AP density, was investigated for randomly deployed ultra dense wireless networks. 
The stochastic geometry framework from previous works was incorporated and the user rate distribution was derived analytically, taking into account the UE distribution, the multiple access scheme and the SIR outage. It was shown that performance depends critically on the number of bandwidth partitions and the way they are utilized by the multiple access scheme. The optimal number of partitions was tightly lower bounded under large AP density, showing that smaller bandwidth utilization is beneficial for interference reduction when small rates are considered. In addition, an upper bound on the minimum AP density required to provide an asymptotically small rate outage probability was obtained, which was shown to provide a very good estimate of the minimum density under moderate probability constraints. When the considered rates are small enough to allow for bandwidth partitioning, the minimum required density is smaller by orders of magnitude compared to the one provided by conventional TDMA. Proof of Lemma 3 ================ The activity probability for TDMA is obtained by simply noting that $p = \Pr\{\textrm{transmission on SC 1}|K > 0\} \Pr\{K>0\}$. For the case of FDMA/TDMA, consider a random AP associated with $K$ indexed UEs and let $p(K)$ denote the probability of assigning at least one UE to SC 1. Clearly, $p(K)=1$ for $K \geq N$ and $p(K)=0$ for $K=0$. For the case $0<K<N$, define the mutually exclusive events $\mathcal{B}_m \triangleq \{m\textrm{-th UE is assigned SC } 1\}, m=1,2,\ldots, K$. It is easy to see that $$\label{eq:Pr_Bm} \Pr\{\mathcal{B}_m\} = \frac{1}{N-(m-1)} \prod_{r=1}^{m-1}\left( 1 - \frac{1}{N-(r-1)}\right),$$ and $$\label{eq:p_N_K} p(K) = \sum_{m=1}^{K} \Pr\{\mathcal{B}_m\} = K/N, 0<K<N,$$ where the last equality follows by simple algebra. Averaging $p(K)$ over $K$ results in the form of (\[eq:activ\_prob\]). Proof of (\[eq:F\_R\_L\]) ========================= Denoting the SIR outage event $\{\textrm{SIR} < \theta_0\}$ and its complement, $\{\textrm{SIR} \geq \theta_0\}$, as $\mathcal{O}$ and $\overline{\mathcal{O}}$, respectively, $F_{R}(r|L)$ can be written as $$\begin{aligned} F_{R}(r|L)&{}={}& F_{\textrm{SIR}}(\theta_0) F_{R}(r|L,\mathcal{O})\nonumber\\ &&{+}\:(1-F_{\textrm{SIR}}(\theta_0)) F_{R}(r|L,\overline{\mathcal{O}}).\end{aligned}$$ From (\[eq:4\]), $R = 0$ conditioned on $\mathcal{O}$; therefore, $$\label{eq:F_R_L_a2} F_{R}(r|L,\mathcal{O}) = 1, \forall r, L,$$ whereas, conditioned on $\overline{\mathcal{O}}$, $$\begin{aligned} F_{R}(r|L,\overline{\mathcal{O}}) &{}={}& \Pr\{\overline{R}/(L+1) < r | \overline{\mathcal{O}}\}\nonumber\\ &{}={}& F_{\textrm{SIR}}(\tilde{\theta}|\textrm{SIR} \geq \theta_0)\nonumber\\ &{}={}& \left\{\begin{IEEEeqnarraybox}[\relax][c]{l's} \frac{F_{\textrm{SIR}}(\tilde{\theta})-F_{\textrm{SIR}}(\theta_0)}{1-F_{\textrm{SIR}}(\theta_0)},& $\tilde{\theta} \geq \theta_0$,\\ 0,& $\tilde{\theta} < \theta_0$, \end{IEEEeqnarraybox}\right.\end{aligned}$$ where $\tilde{\theta} \triangleq 2^{rN(L+1)}-1$ and the last equality follows from basic probability theory and the continuity of $F_{\textrm{SIR}}(\theta)$. Combining (20)–(22) leads to (\[eq:F\_R\_L\]) and application of the total probability theorem gives (\[eq:F\_R\]). I. Hwang, B. Song, and S. S. Soliman, “A holistic view on hyper-dense heterogeneous and small cell networks,” *IEEE Commun. Mag.*, vol. 51, no. 6, pp. 20–27, Jun. 2013. F. Baccelli and B. B[ł]{}aszczyszyn, *Stochastic Geometry and Wireless Networks*. Now Publishers Inc., 2009. H. ElSawy, E. Hossain, and M. 
Haenggi, “Stochastic geometry for modeling, analysis, and design of multi-tier and cognitive cellular wireless networks: a survey,” *IEEE Communications Surveys & Tutorials*, vol. 15, pp. 996–1019, Jul. 2013. J. G. Andrews, “Seven ways that HetNets are a cellular paradigm shift,” *IEEE Commun. Mag.*, vol. 51, no. 3, pp. 136–144, Mar. 2013. S. Lee and K. Huang, “Coverage and economy in cellular networks with many base stations,” *IEEE Commun. Lett.*, vol. 16, no. 7, pp. 1038–1040, Jul. 2012. C. Li, J. Zhang, and K. B. Letaief, “Throughput and energy efficiency analysis of small cell networks with multi-antenna base stations,” Jun. 2013, online: http://arxiv.org/abs/1306.6169. D. Cao, S. Zhou, and Z. Niu, “Optimal base station density for energy-efficient heterogeneous cellular networks,” in *Proc. of IEEE Int. Conf. on Commun. (ICC)*, Ottawa, Canada, Jun. 2012. S. Singh, H. S. Dhillon, and J. G. Andrews, “Offloading in heterogeneous networks: modeling, analysis and design insights,” *IEEE Trans. on Wireless Commun.*, vol. 12, no. 5, pp. 2484–2497, May 2013. N. Jindal, J. G. Andrews, and S. P. Weber, “Bandwidth partitioning in decentralized wireless networks,” *IEEE Trans. Wireless Commun.*, vol. 7, no. 12, pp. 5408–5419, Dec. 2008. K. Huang, V. K. N. Lau, and Y. Chen, “Spectrum sharing between cellular and mobile ad hoc networks: Transmission-capacity trade-off,” *IEEE J. Sel. Areas Commun.*, vol. 27, no. 7, pp. 1256–1267, Sep. 2009. J. G. Andrews, F. Baccelli, and R. K. Ganti, “A tractable approach to coverage and rate in cellular networks,” *IEEE Trans. Commun.*, vol. 59, no. 11, pp. 3122–3134, Nov. 2011. A. Lozano, R. Heath, and J. Andrews, “Fundamental limits of cooperation,” *IEEE Trans. Inf. Theory*, vol. 59, no. 9, pp. 5213–5226, Sep. 2013. S. M. Yu and S.-L. Kim, “Downlink capacity and base station density in cellular networks,” in *Proc. of IEEE WiOpt Workshop on Spatial Stochastic Models for Wireless Networks (SpaSWiN)*, 2013. G. Piro, L. Alfredo Grieco, G. Boggia, F. Capozzi, and P. Camarda, “Simulating LTE cellular systems: an open-source framework,” *IEEE Trans. on Veh. Technol.*, vol. 60, no. 2, pp. 498–513, Feb. 2011. [^1]: The authors are with the Department of Digital Systems, University of Piraeus, Greece. This work has been performed in the context of the THALES-INTENTION (MIS: 379489) research project, within the framework of the Operational Program “Education and Lifelong Learning”, co-financed by the European Social Fund (ESF) and the Greek State.
*2 - 7*c**2. 10*c**2 Collect the terms in 6 - 2*t**3 - 6 + 4*t**3. 2*t**3 Collect the terms in -26*p + 29*p + p - 4*p**2. -4*p**2 + 4*p Collect the terms in 2906*r - 2906*r + 28*r**3. 28*r**3 Collect the terms in -144*g**2 + 9*g - 9*g + 146*g**2. 2*g**2 Collect the terms in 5*q + 5*q - 12*q. -2*q Collect the terms in 8*p**3 + 8 + 5 - 17. 8*p**3 - 4 Collect the terms in r**2 + 32*r**2 + 4*r**2 + 61*r**2. 98*r**2 Collect the terms in -50*w**3 - 249*w**3 + 41*w**3. -258*w**3 Collect the terms in 18 - 28 - 62*x + 10. -62*x Collect the terms in -23 - 10*j**3 + 23. -10*j**3 Collect the terms in 11*b**2 + b**2 + 6*b**2. 18*b**2 Collect the terms in -3*h**3 + 6*h**2 - h**2 + 11*h**2 - 16*h**2. -3*h**3 Collect the terms in -36 - 10*k**2 + 36. -10*k**2 Collect the terms in 87*y**3 + 89*y**3 - 345*y**3 + 79*y**3 + 89*y**3. -y**3 Collect the terms in 319*r**2 + 74*r**2 - 400*r**2. -7*r**2 Collect the terms in 8*l**3 - 3*l**3 + 6*l**3. 11*l**3 Collect the terms in -d + 3*d - 8*d**3 + 6*d**3. -2*d**3 + 2*d Collect the terms in -1 + 2 + 14*j - 11*j - 1. 3*j Collect the terms in -3*s - 6 + 13*s + 4. 10*s - 2 Collect the terms in -966 - s**2 + 966. -s**2 Collect the terms in 19*w**2 + 0*w**2 + w**2. 20*w**2 Collect the terms in -z + 59*z**2 - 108*z**2 + 52*z**2. 3*z**2 - z Collect the terms in 1509*o - 1509*o + 5*o**2 + 3*o**2. 8*o**2 Collect the terms in -5*o + 5*o - 4*o**2 + 0*o. -4*o**2 Collect the terms in -185*n + 96*n + 89*n + 2*n**2. 2*n**2 Collect the terms in -15*y**3 + 3*y - 3*y + 0*y. -15*y**3 Collect the terms in 31*b + 39*b - 103*b + 33*b + b**3. b**3 Collect the terms in -625*f**2 + 183*f**2 + 2 - 2. -442*f**2 Collect the terms in -19 + 19 + 21*s - 17*s. 4*s Collect the terms in -6*k - 2*k - 41 + 41. -8*k Collect the terms in -87*k + 41*k + 43*k. -3*k Collect the terms in p + 3*p - 2876 - 3*p + 2876. p Collect the terms in 5*o**2 + 49702*o**3 - 5*o**2 - 49701*o**3. o**3 Collect the terms in 12 - 35*s + 1 - 13. -35*s Collect the terms in -24*w - 48*w + 121*w - 46*w. 3*w Collect the terms in 4*u - 5*u + 4*u. 3*u Collect the terms in 129*m - 18 + 18. 129*m Collect the terms in 23*o - 23*o + 9*o**3. 9*o**3 Collect the terms in 2*i - 9 - 3*i + 6. -i - 3 Collect the terms in 2 + 172*i**2 - 2. 172*i**2 Collect the terms in 14 - 20 - 7*d**2 + 8*d**2 - 40. d**2 - 46 Collect the terms in -11*l**2 - 8*l**2 + 4*l**2 - 18*l**2. -33*l**2 Collect the terms in 56*i - 58*i + 2*i + 3*i**2 + 2*i**2. 5*i**2 Collect the terms in -p**3 + 0*p**2 + 2*p**3 - 2*p**2 - 5*p**2. p**3 - 7*p**2 Collect the terms in 5*w**3 + 9*w**3 + 11*w**3 - 35*w**3 + 8*w**3. -2*w**3 Collect the terms in 130 + 22*y**3 - 130. 22*y**3 Collect the terms in -12*m + 12*m - 77*m**2. -77*m**2 Collect the terms in 31 + 8*z**2 + 29 - 16. 8*z**2 + 44 Collect the terms in -11 + 20 - 9 - 11*u**3. -11*u**3 Collect the terms in -2*q**2 - 5*q**2 + q**2 + 4*q**2. -2*q**2 Collect the terms in 0*i - 4*i + 4*i + 16*i**3. 16*i**3 Collect the terms in -13 + 23 - 7 - 5*l**2 - 3. -5*l**2 Collect the terms in -43*o**3 + 47*o**2 - 47*o**2. -43*o**3 Collect the terms in 7*v**3 - 2*v**3 - 7*v**3 + 237*v**2. -2*v**3 + 237*v**2 Collect the terms in -31*i + 15*i + 17*i. i Collect the terms in 1137*l**2 - 1137*l**2 - 4*l**3. -4*l**3 Collect the terms in 2 - 2 + 1624*v**3 - 1625*v**3. -v**3 Collect the terms in -u - 4*u + 11*u + 39*u. 45*u Collect the terms in j**3 + 5*j**3 - 4*j + 4*j. 6*j**3 Collect the terms in -2806 + 2806 - 9*q**2 + 0*q**2. -9*q**2 Collect the terms in -35*c**2 + 32*c**2 - 85*c**2. -88*c**2 Collect the terms in 2*k**2 - 12436*k + 12436*k. 
2*k**2 Collect the terms in 560*u + 1 - 565*u - 1. -5*u Collect the terms in -30*c**2 + 56*c**2 - 24*c**2. 2*c**2 Collect the terms in 206 - 206 + 7*o. 7*o Collect the terms in 6*b**2 + 8*b**2 + 4*b**2. 18*b**2 Collect the terms in -3 + 3 - 352*o**2 + 353*o**2. o**2 Collect the terms in 5*r**3 + 4*r**3 - 14*r**3 - 5. -5*r**3 - 5 Collect the terms in -5*w**3 + 6456 - 6456. -5*w**3 Collect the terms in -11*t**3 + 2*t**3 + 0*t**3. -9*t**3 Collect the terms in 498*v**2 - 259*v**2 - 241*v**2. -2*v**2 Collect the terms in -32392*d**2 + 32392*d**2 + d**3. d**3 Collect the terms in -52*b**2 + 8*b**3 + 52*b**2. 8*b**3 Collect the terms in -9 + 9 - m. -m Collect the terms in 22*w**2 + 168 + 2*w**2 - 4*w**3 - 168. -4*w**3 + 24*w**2 Collect the terms in 2*b**2 + 71 - 71 + 2*b**2. 4*b**2 Collect the terms in -25*n**2 + 24*n**2 + 20 - 20. -n**2 Collect the terms in -2*w**3 + 7*w**3 + w**3 - 4*w**3. 2*w**3 Collect the terms in 99*o - 74*o + 33*o + 110*o. 168*o Collect the terms in -416 - 7*j**2 + 135 + 140 + 141. -7*j**2 Collect the terms in -i**2 + 0*i**2 + 582 - 289 - 293. -i**2 Collect the terms in 126*j**2 - 106*j**3 - 126*j**2 + 2. -106*j**3 + 2 Collect the terms in 349 - 349 + 11*n**3 - 9*n**3. 2*n**3 Collect the terms in -5*w**2 - 4*w**2 + 8*w**2. -w**2 Collect the terms in -66*k**2 - 364*k**2 - 255*k**2. -685*k**2 Collect the terms in 24*l**2 - 72*l**2 + 21*l**2 + 25*l**2. -2*l**2 Collect the terms in -18*c**2 + 2*c**3 + 2*c**3 - 3*c + 18*c**2. 4*c**3 - 3*c Collect the terms in 12*v**2 - 2*v + 3*v - 38*v**2 - 31*v**2. -57*v**2 + v Collect the terms in 20*q + 7*q**3 - 3*q**3 - 4*q. 4*q**3 + 16*q Collect the terms in c**3 - c**3 - 3*c**3 + c**3. -2*c**3 Collect the terms in 416*s**3 - 163*s**3 + 2 - 2. 253*s**3 Collect the terms in a + 23 + 30 + 2*a. 3*a + 53 Collect the terms in r**2 - 4*r**3 - r**2 + 4 - 5. -4*r**3 - 1 Collect the terms in 442*x - 2*x**3 - 442*x. -2*x**3 Collect the terms in 27177*o**2 - o**3 - 27177*o**2. -o**3 Collect the terms in -25039 + n**3 + 25039. n**3 Collect the terms in 30*r**3 + 24*r**3 - 59*r**3. -5*r**3 Collect the terms in 4 - 125*f + 124*f - 4. -f Collect the terms in 704 + 706 - 1410 - 22*k. -22*k Collect the terms in -d - 206 + 206. -d Collect the terms in 24*t - 47*t + 75*t**3 + 23*t. 75*t**3 Collect the terms in -427 - 425 + 23*q + 852. 23*q Collect the terms in -6*l + 32*l - 24*l. 2*l Collect the terms in 38*d**3 - 34*d**3 - 7 + 7. 4*d**3 Collect the terms in -3*c + 7*c + 6*c. 10*c Collect the terms in 2*k**3 - 274*k**2 + k**3 - 5*k**3 + 2. -2*k**3 - 274*k**2 + 2 Collect the terms in 3*g**3 - g**3 - 2*g**3 - 4*g**3. -4*g**3 Collect the terms in -123*n**2 + 61*n**2 + 61*n**2. -n**2 Collect the terms in 8*a - 11*a - a. -4*a Collect the terms in -30*u**2 - 5*u**2 - 3*u + 3*u. -35*u**2 Collect the terms in -7*r**2 - 11*r**3 + 7*r**2 - 2*r**3. -13*r**3 Collect the terms in 3 + 0 - 7*v**2 - 4 - 9*v**2. -16*v**2 - 1 Collect the terms in 9 + 2*k - 4*k + 0*k. -2*k + 9 Collect the terms in 14*q**2 - 9189 + 9189. 14*q**2 Collect the terms in 0*i - 191 + 0*i + 2*i. 2*i - 191 Collect the terms in -37 - 48 + 2*l + 3. 2*l - 82 Collect the terms in -y**2 - y**2 + 2*y**3 + 33*y + 2*y**2. 2*y**3 + 33*y Collect the terms in -47*i**2 + 73*i**2 - 467*i**2. -441*i**2 Collect the terms in -1041434*b**2 - b**3 + 1041434*b**2. -b**3 Collect the terms in -1534*g**2 + 1533*g**2 - 2 - 15. -g**2 - 17 Collect the terms in -2990*a**3 + 1498*a**3 + 1495*a**3. 3*a**3 Collect the terms in -149*g**2 + 307*g**2 - 153*g**2. 5*g**2 Collect the terms in -4 + 8*c**2 + 15 - 11 + 0. 
8*c**2 Collect the terms in -962*i + 9875*i - 459*i. 8454*i Collect the terms in 14*g + 10*g - 21*g. 3*g Collect the terms in -z**2 - 6*z - 27*z - 20*z. -z**2 - 53*z Collect the terms in -k - 213395 + 213395. -k Collect the terms in 10*k**2 + k**2 + 8*k**2 - 20*k**2. -k**2 Collect the terms in -3*l**3 - 8*l**3 + l**3 - 5*l**3. -15*l**3 Collect the terms in -33*u**2 + 69*u**2 - 30*u**2. 6*u**2 Collect the terms in -7*a**2 - 3*a**2 + 9*a**2 + a**2 + 2*a**2. 2*a**2 Collect the terms in -4 - 2*z**3 - 1 + 6. -2*z**3 + 1 Collect the terms in -38 + 2*t - 36 + 74. 2*t Collect the terms in -3*c + 61*c**2 + 0*c + 3*c + c. 61*c**2 + c Collect the terms in 353*g**3 + 151*g**3 - 43*g**3. 461*g**3 Collect the terms in 251*h**3 + 299*h**3 - 555*h**3. -5*h**3 Collect the terms in 0*j - 2*j**2 - 5*j + 5*j. -2*j**2 Collect the terms in -44*d**3 - 2 + 2 + 57*d**3. 13*d**3 Collect the terms in -51 + 96 + h**3 - 19. h**3 + 26 Collect the terms in 162*s**3 + 4 - 4. 162*s**3 Collect the terms in -21*r**3 + 309*r**2 + r**3 - 309*r**2. -20*r**3 Collect the terms in -11*l**2 + 6*l**2 - 13*l**2 - 5*l**2. -23*l**2 Collect
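Answers of this kind can be checked mechanically with a computer algebra system. The short sketch below uses SymPy (our own tooling choice, not part of the exercise data) to spot-check a few of the exercises above.

```python
# Spot-check "collect the terms" answers with SymPy (illustrative aid only).
from sympy import expand, sympify

def collect_terms(expr_str):
    """Combine like terms in a polynomial expression given as a string."""
    return expand(sympify(expr_str))

print(collect_terms("6 - 2*t**3 - 6 + 4*t**3"))           # 2*t**3
print(collect_terms("-26*p + 29*p + p - 4*p**2"))          # -4*p**2 + 4*p
print(collect_terms("-144*g**2 + 9*g - 9*g + 146*g**2"))   # 2*g**2
```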
[New Helicobacters other than H. pylori]. Since the discovery of Helicobacter pylori, more than 30 non-H. pylori Helicobacter species (NHPH) have been reported. These NHPH are now classified into gastric Helicobacter spp. and enterohepatic Helicobacter spp. (EHS). Gastric NHPH appear tightly spiraled and elongated in the gastric mucosa, and can be distinguished from H. pylori by light microscopy. Some gastric NHPH may be zoonotic and cause gastritis in humans. H. hepaticus and H. cinaedi, which belong to the EHS, have been detected in human diseases: H. hepaticus may be associated with hepatobiliary diseases in humans and, surprisingly, H. cinaedi infection has been reported to be associated with atrial arrhythmias and atherosclerosis. Many NHPH will likely be recognized as human pathogens in the future.
Lamb production using superovulation, embryo bisection, and transfer. In this study, 39 embryos from donor ewes superovulated with follicle stimulating hormone-pituitary (FSH-P) were bisected to produce pairs of monozygotic twin lambs for experimentation. Each pair obtained by bisecting 8-, 9- or 10-day-old embryos was immediately transferred surgically into a recipient ewe at the same physiological stage. Of the 39 recipients which received a pair of half-embryos by transfer into the uterine horn ipsilateral to the corpus luteum, 28 (72%) lambed. Eighteen of 28 recipients lambing (64%) produced pairs, i.e., 7 male and 11 female pairs. Ten of 28 lambings produced a single lamb, i.e., six males and four females. Overall yield (the number of lambs produced in relation to the number of embryos used) was 118%. This percentage tended to increase, depending on the day of collection (Day 8, 100%; Day 9, 118%; and Day 10, 131%).
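As a quick check of the reported overall yield: 18 lambings produced twin pairs (36 lambs) and 10 produced singles, from 39 embryos used, so

$$\frac{18 \times 2 + 10}{39} = \frac{46}{39} \approx 1.18 \approx 118\%.$$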
That is correct. The buyout value from the desk as of 10/26/01 was $810,000. This was determined by PV'ing the value of the remaining monthly demand charges owed to ENA from November 2001 through the end of the GSA (6/30/08). The nominal value of each monthly demand charge is approximately $11,500. Call me if you have any questions.

Fred

-----Original Message-----
From: Jacoby, Ben
Sent: Sunday, December 02, 2001 4:34 PM
To: Schwartzenburg, John; Tweed, Sheila; Hodge, Jeffrey T.
Cc: Clark, Barton; Mitro, Fred
Subject: RE: Site List Rev 4 - Onondaga GSA BKR provisions

My statement on the value of the GSA is based on the extensive work that Fred, John and I have done on this deal. Fred previously checked with the gas desk on this because Aquila had indicated an interest to buy out this contract. The gas desk indicated the current value (as of a couple weeks ago) was $800k. Fred, please confirm.

Thanks,
--------------------------
Ben Jacoby
(713) 853-6173
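As a rough sanity check on the quoted figure, the present value of a fixed monthly charge can be sketched as below. The 80 remaining payments (November 2001 through June 2008) and the ~$11,500 charge come from the message; the flat monthly discount rate is an assumption for illustration only (an annual rate of roughly 4% lands near $810,000), not the desk's actual valuation method.

```c
#include <stdio.h>
#include <math.h>

/* Present value of `months` equal end-of-month payments of `charge`
 * dollars, discounted at a flat monthly rate `r` (assumed; the desk's
 * actual curve and rate are not given in the message). */
static double pv_demand_charges(double charge, int months, double r)
{
    double pv = 0.0;
    for (int t = 1; t <= months; t++)
        pv += charge / pow(1.0 + r, t);
    return pv;
}

int main(void)
{
    /* Nov 2001 through Jun 2008 inclusive is 80 monthly demand charges. */
    double pv = pv_demand_charges(11500.0, 80, 0.04 / 12.0);
    printf("PV of remaining demand charges: $%.0f\n", pv);
    return 0;
}
```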
The invention relates to a motion measuring device, particularly for measuring rotation of a body around an axis, said device comprising means such as a coded wheel drivably connected to said body for rotation and having a plurality of holes spaced regularly around said axis, and comprising at least one sensor placed on the path of said holes. With such a device, the coded wheel being mechanically connected to a wheel of a moving object, the distance travelled by the moving object may be determined, and its speed and acceleration derived by timing the displacement. Known devices of this type typically include a wheel formed by a disk or a cylinder made from a high-magnetic-permeability material, the measuring wheel having teeth passing in front of the sensor. This solution has the disadvantage of generating a binary electrical signal whose levels are often insufficiently separated to allow measurement under good conditions.
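To make the measurement principle concrete, a minimal sketch follows; the hole count and wheel circumference are hypothetical parameters for illustration, not values from the patent.

```c
#include <stdio.h>

#define HOLES_PER_REV  60      /* hypothetical: holes around the coded wheel */
#define WHEEL_CIRC_M   1.85    /* hypothetical: wheel circumference, meters  */

/* Distance per sensor pulse: one pulse is produced each time a hole
 * passes in front of the sensor. */
static double distance_m(unsigned long pulses)
{
    return (double)pulses * WHEEL_CIRC_M / HOLES_PER_REV;
}

/* Average speed over a sampling window, from the pulse count and the
 * elapsed time of displacement. */
static double speed_mps(unsigned long pulses, double dt_s)
{
    return distance_m(pulses) / dt_s;
}

int main(void)
{
    unsigned long pulses = 120;          /* pulses counted in the window */
    double v1 = speed_mps(pulses, 1.0);  /* speed over a 1 s window      */
    double v0 = 3.5;                     /* speed from previous window   */

    /* Acceleration follows from the change in speed between windows. */
    printf("distance: %.2f m, speed: %.2f m/s, accel: %.2f m/s^2\n",
           distance_m(pulses), v1, (v1 - v0) / 1.0);
    return 0;
}
```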
// SPDX-License-Identifier: (GPL-2.0-only OR BSD-3-Clause)
//
// This file is provided under a dual BSD/GPLv2 license. When using or
// redistributing this file, you may do so under either license.
//
// Copyright(c) 2018 Intel Corporation. All rights reserved.
//
// Author: Liam Girdwood <[email protected]>
//

#include <linux/pci.h>
#include "ops.h"

static bool snd_sof_pci_update_bits_unlocked(struct snd_sof_dev *sdev,
					     u32 offset, u32 mask, u32 value)
{
	struct pci_dev *pci = to_pci_dev(sdev->dev);
	unsigned int old, new;
	u32 ret = 0;

	pci_read_config_dword(pci, offset, &ret);
	old = ret;
	dev_dbg(sdev->dev, "Debug PCIR: %8.8x at %8.8x\n", old & mask, offset);

	new = (old & ~mask) | (value & mask);

	if (old == new)
		return false;

	pci_write_config_dword(pci, offset, new);
	dev_dbg(sdev->dev, "Debug PCIW: %8.8x at %8.8x\n", value, offset);

	return true;
}

bool snd_sof_pci_update_bits(struct snd_sof_dev *sdev, u32 offset,
			     u32 mask, u32 value)
{
	unsigned long flags;
	bool change;

	spin_lock_irqsave(&sdev->hw_lock, flags);
	change = snd_sof_pci_update_bits_unlocked(sdev, offset, mask, value);
	spin_unlock_irqrestore(&sdev->hw_lock, flags);
	return change;
}
EXPORT_SYMBOL(snd_sof_pci_update_bits);

bool snd_sof_dsp_update_bits_unlocked(struct snd_sof_dev *sdev, u32 bar,
				      u32 offset, u32 mask, u32 value)
{
	unsigned int old, new;
	u32 ret;

	ret = snd_sof_dsp_read(sdev, bar, offset);

	old = ret;
	new = (old & ~mask) | (value & mask);

	if (old == new)
		return false;

	snd_sof_dsp_write(sdev, bar, offset, new);

	return true;
}
EXPORT_SYMBOL(snd_sof_dsp_update_bits_unlocked);

bool snd_sof_dsp_update_bits64_unlocked(struct snd_sof_dev *sdev, u32 bar,
					u32 offset, u64 mask, u64 value)
{
	u64 old, new;

	old = snd_sof_dsp_read64(sdev, bar, offset);

	new = (old & ~mask) | (value & mask);

	if (old == new)
		return false;

	snd_sof_dsp_write64(sdev, bar, offset, new);

	return true;
}
EXPORT_SYMBOL(snd_sof_dsp_update_bits64_unlocked);

/* This is for registers bits with attribute RWC */
bool snd_sof_dsp_update_bits(struct snd_sof_dev *sdev, u32 bar, u32 offset,
			     u32 mask, u32 value)
{
	unsigned long flags;
	bool change;

	spin_lock_irqsave(&sdev->hw_lock, flags);
	change = snd_sof_dsp_update_bits_unlocked(sdev, bar, offset, mask,
						  value);
	spin_unlock_irqrestore(&sdev->hw_lock, flags);
	return change;
}
EXPORT_SYMBOL(snd_sof_dsp_update_bits);

bool snd_sof_dsp_update_bits64(struct snd_sof_dev *sdev, u32 bar, u32 offset,
			       u64 mask, u64 value)
{
	unsigned long flags;
	bool change;

	spin_lock_irqsave(&sdev->hw_lock, flags);
	change = snd_sof_dsp_update_bits64_unlocked(sdev, bar, offset, mask,
						    value);
	spin_unlock_irqrestore(&sdev->hw_lock, flags);
	return change;
}
EXPORT_SYMBOL(snd_sof_dsp_update_bits64);

static void snd_sof_dsp_update_bits_forced_unlocked(struct snd_sof_dev *sdev,
						    u32 bar, u32 offset,
						    u32 mask, u32 value)
{
	unsigned int old, new;
	u32 ret;

	ret = snd_sof_dsp_read(sdev, bar, offset);

	old = ret;
	new = (old & ~mask) | (value & mask);

	snd_sof_dsp_write(sdev, bar, offset, new);
}

/* This is for registers bits with attribute RWC */
void snd_sof_dsp_update_bits_forced(struct snd_sof_dev *sdev, u32 bar,
				    u32 offset, u32 mask, u32 value)
{
	unsigned long flags;

	spin_lock_irqsave(&sdev->hw_lock, flags);
	snd_sof_dsp_update_bits_forced_unlocked(sdev, bar, offset, mask, value);
	spin_unlock_irqrestore(&sdev->hw_lock, flags);
}
EXPORT_SYMBOL(snd_sof_dsp_update_bits_forced);

void snd_sof_dsp_panic(struct snd_sof_dev *sdev, u32 offset)
{
	dev_err(sdev->dev, "error : DSP panic!\n");

	/*
	 * check if DSP is not ready and did not set the dsp_oops_offset.
	 * if the dsp_oops_offset is not set, set it from the panic message.
	 * Also add a check to memory window setting with panic message.
	 */
	if (!sdev->dsp_oops_offset)
		sdev->dsp_oops_offset = offset;
	else
		dev_dbg(sdev->dev, "panic: dsp_oops_offset %zu offset %d\n",
			sdev->dsp_oops_offset, offset);

	snd_sof_dsp_dbg_dump(sdev, SOF_DBG_REGS | SOF_DBG_MBOX);
	snd_sof_trace_notify_for_error(sdev);
}
EXPORT_SYMBOL(snd_sof_dsp_panic);
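For context, a driver would use these helpers for locked read-modify-write updates of controller registers. A minimal usage sketch follows, with a hypothetical BAR index, register offset, and bit mask that are not taken from this file:

```c
/* Hypothetical register layout, for illustration only. */
#define MY_BAR        0        /* assumed memory window (BAR) index */
#define MY_CTRL_REG   0x48     /* assumed control register offset   */
#define MY_CTRL_RESET BIT(1)   /* assumed reset bit within that reg */

static void my_dsp_assert_reset(struct snd_sof_dev *sdev)
{
	/* Set only the reset bit, leaving the other bits untouched;
	 * the helper returns true when the register value actually
	 * changed. */
	if (!snd_sof_dsp_update_bits(sdev, MY_BAR, MY_CTRL_REG,
				     MY_CTRL_RESET, MY_CTRL_RESET))
		dev_dbg(sdev->dev, "reset bit was already set\n");
}
```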
If you are going to play in Russia’s Back Yard, you better bring your own toys. ~TLB ed.

Here they go again: Senate reheats ‘Russian meddling’ claims, using assertions as evidence

RT-USA News

[Image: Members of the Senate Intelligence Committee look at a placard showing ‘Russian social media manipulation’ at a November 1, 2017 hearing. © REUTERS/Joshua Roberts]

The Senate Intelligence Committee’s final report on ‘Russian interference’ in the 2016 US presidential election is short on evidence and long on reheated assertions and innuendo from ‘experts’ exposed as actual election meddlers.

There is little new in the 85-page, partially redacted document released on Tuesday that has not been made public by the committee previously – including the accusation that “Russia” focused on stoking anger and resentment among African-Americans, for example. There is a reason for that. By the committee’s own admission, “much of this Volume’s analysis is derived from” the work of two Technical Advisory Groups (TAG), which produced two public reports back in December 2018, to the same kind of fawning press coverage the report is receiving now.

“NEW: The Russian government’s efforts to interfere in the 2016 U.S. elections involved using social media content to mostly target African-Americans, a new Senate committee report concludes. https://t.co/7BRUmiG18T” — NPR (@NPR) October 8, 2019

Not surprisingly, the report’s “findings” are being cited as conclusive proof that Democrats were right and President Donald Trump was wrong about 2016, Russia, Ukraine and the US presidential election.

“The Senate Intelligence Committee unveiled a sweeping new bipartisan report showing Russian efforts to boost Trump’s White House bid on social media during the 2016 U.S. election https://t.co/TUjUhBdMnc” — POLITICO (@politico) October 8, 2019

The only trouble with that is that the committee provides no actual evidence for any of its claims – only assertions. For example, its description of the Internet Research Agency – the “Russian troll farm” – is basically copied over from Special Counsel Robert Mueller’s indictment of a dozen of its alleged members. Yet a federal judge presiding over the case ruled back in May that allegations cannot be treated as established evidence or conclusions, coming close to finding Mueller’s prosecutors in contempt.

Another document presented as evidence is the January 2017 “Intelligence Community Assessment,” the disingenuously named work of a small group of people, hand-picked by the Obama administration’s DNI and the chiefs of the CIA, FBI and NSA – all of whom, except for the NSA, have since been implicated in what seems to be a campaign to spy on Trump, delegitimize his presidency, and have him impeached.

The Senate report also quotes testimonies from Obama aides such as Ben Rhodes – helpfully redacted, of course – Gen. Philip Breedlove, the NATO commander who tried to set off a war with Russia; professional “Russian bot” hunters like Clint Watts and Thomas Rid; and NATO’s “Strategic Communications Center of Excellence.”

The best part, however, has to be the reliance on New Knowledge, presented as “a cybersecurity company dedicated to protecting the public sphere from disinformation attacks.” In reality, New Knowledge was exposed by the New York Times as the outfit that actually ran bots and disinformation operations during the 2017 Alabama special election for the US Senate, targeting Republican candidate Roy Moore on behalf of Democrats – while blaming Russia!
In an internal memo, New Knowledge executives boasted how they “orchestrated an elaborate ‘false flag’ operation that planted the idea that the Moore campaign was amplified on social media by a Russian botnet.”

The other TAG, led by British academics and researchers, found that the activity of ‘Russian trolls’ increased after the election – by 238 percent on Instagram, 59 percent on Facebook, 52 percent on Twitter, and 84 percent on YouTube. So it was influencing elections… retroactively? Left unsaid was that the absolute quantity of “Russian” posts was minuscule, a proverbial drop in the bucket compared to the billions of social media posts generated and consumed by the US electorate during the campaign.

These are the people who “significantly informed the Committee’s understanding of Russia’s social media-predicated attack against our democracy,” as this week’s report puts it.

Ever since Hillary Clinton blamed “Russian hackers” for the revelations of corruption within the DNC in July 2016, the Washington establishment has been eager to blame Moscow for all the ills of the US political system, real or imagined. The Senate Intelligence Committee’s report seems to be nothing more than an attempt to reheat the long-cold corpse of a conspiracy that should have been buried with the Mueller Report and allowed to rest in peace.
One-second time resolution brain microdialysis in fully awake rats. Protocol for the collection, separation and sorting of nanoliter dialysate volumes. Capillary zone electrophoresis is capable of analyzing nanoliter volumes, reducing the challenge of improving brain microdialysis time resolution to one of managing nanoliter dialysate volumes. This fact has not been overlooked, and 12- and 6-s time resolution microdialysis have been reported in anesthetized rats. However, behavioral experiments require fully awake and freely moving animals. To achieve high temporal resolution brain microdialysis in awake, unrestrained rats, we developed an online device that mixes the outflowing dialysate with fluorescein isothiocyanate and buffer within a 26-nl reactor. The mixture was continuously accumulated in a 99-micrometer-bore capillary tube. After the experiment, the tube was cut into 4-mm pieces and the content of each piece (30 nl, equivalent to 1 s of dialysate) was transferred to a test tube. After allowing 18 h for derivatization, the samples were diluted with water and injected into a capillary electrophoresis laser-induced fluorescence detection instrument. This protocol was tested first in an in vitro assay and proved to be capable of detecting glutamate concentration changes in only 1 s. For the in vivo assays, a probe was inserted into the primary somatosensory cortex of eight rats divided into two groups. One group was stimulated by gently moving its whiskers for 10 s; the other group had no whisker manipulation. Moving the whiskers released glutamate in the experimental group, with the first and only change observed at the 12th second. This method allows 1-s time resolution brain microdialysis in freely moving rats and multiple amino acid analysis every second during sensory perception or motor actions in behavioral experiments.
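The 30-nl-per-second figure is consistent with the stated capillary geometry; for a 99-micrometer bore cut into 4-mm pieces,

$$V = \pi r^2 \ell = \pi \,(49.5\ \mu\text{m})^2 \,(4\ \text{mm}) \approx 3.1 \times 10^{-11}\ \text{m}^3 \approx 31\ \text{nl}.$$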
Seoul Metropolitan Rapid Transit (SMRT) is investigating the possibility of operating subway services around the clock on one of its lines for one day a week.

On March 11, SMRT announced that it has plans to add overnight services to Line 7 on Saturday mornings from 1am to 5am. Services would operate every 20 minutes throughout the night between Jangam (장암) and Onsu (온수) Stations. The extended section of Line 7 from Onsu to Bupyeong-gu Office (부평구청) will not be included in the trial. Currently, Friday night subway services on Line 7 stop at 1am on Saturday morning and start again around 5:30am.

SMRT says that it plans to follow in the footsteps of other cities overseas, such as New York and London, which operate 24-hour trains. Having completed preliminary investigations, the company will now meet with researchers and aim to get the trial underway in the second half of next year. Fares for the early morning services will likely be more expensive, with SMRT saying that a different fare system would be adopted to cover the extra cost and the added convenience provided. The cost of hiring more staff and improving facilities for the trial is estimated at 2.6 billion won.

While nothing is set in stone yet, it’s exciting news and something that many residents have been wanting for a long time. In 2013, Seoul trialed night bus services across the city and ended up adding seven other routes after the trial was a success. It’s hoped that trialing all-night subway services will also reduce road congestion, particularly in popular areas such as Gangnam, which Line 7 travels through. With most public transport stopping at around 1am on Friday nights, there is often a rush for the final bus or train, and catching a taxi in the early hours can also be a challenge due to the number of people heading home after a night out on the town.

If the Line 7 trial proves popular, then there is a possibility that we’ll see similar services on other lines. If this happens, it will be interesting to see whether taxi drivers come out in opposition to such services.

Source: Yonhap | hankooki
The City of Peace and Justice

The Hague is not the constitutional capital of The Netherlands (that’s Amsterdam), but it is the home of the Dutch Government, Parliament, Supreme Court, Council of State, the embassies, the International Court of Justice and the International Criminal Court, plus a ton of different EU agencies.

One of the biggest sights to see is the Binnenhof, which sits next to a small pond called the Hofvijver. The Binnenhof is a Gothic castle built in the 13th century. Pictures of the Binnenhof and Hofvijver right below.

The city center, on the other hand, is full of small walking streets and tight corridors between the canals. But for being the World Capital of Peace and Justice, the center is packed with huge shopping malls such as The Passage, de Bijenkorf & New Babylon, just to name a few. There is also a Chinatown right in the center, which isn’t hard to spot because of the huge arch.
Introduction
============

Benzodiazepines (BZDs) and z-drugs (BZD derivatives, e.g., zolpidem and zopiclone) are among the most commonly used anxiolytics and hypnotics worldwide ([@ref-22]; [@ref-56]). While BZDs and z-drugs have been demonstrated to be effective in short-term use ([@ref-44]), their intake is associated with serious adverse effects, including an increased risk of cognitive impairments ([@ref-6]; [@ref-34]; [@ref-47]) and of stumbling and falling, which may result in hip fractures ([@ref-62]; [@ref-71]), as well as withdrawal symptoms ([@ref-54]). The most serious problem associated with long-term use is the development of tolerance and dependence ([@ref-4]; [@ref-68]; [@ref-71]).

The risks and adverse effects of BZDs are of particular relevance to older people. Therefore, the Beers Criteria Update Expert Panel for potentially inappropriate medication use recommends avoiding the prescription of BZDs to patients over the age of 65 years, regardless of their primary disease or symptoms ([@ref-1]). Although guidelines and expert consensus confirm the risks associated with the long-term use of BZDs, these drugs are still prescribed frequently ([@ref-22]; [@ref-56]). Thus, despite increasing awareness of the associated risks, the prevalence of inappropriate use has not declined ([@ref-16]; [@ref-29]). "Inappropriate" BZD use is defined as BZD use that is associated with a significantly higher risk of adverse effects than treatment with an alternative evidence-based intervention that is equally, if not more, effective ([@ref-9]; [@ref-42]).

Different motives have been given for the inappropriate use of BZDs. Patients report that they lack information on alternative pharmacological and nonpharmacological treatment options, on the discontinuation of BZDs, and on the potentially hazardous effects of inappropriate BZD use ([@ref-9]; [@ref-20]). Furthermore, patients are often unwilling to discontinue BZD use, as physiological and psychological dependencies might be present ([@ref-20]; [@ref-63]).

Different reasons for the inappropriate prescription of BZDs have also been assessed ([@ref-3]; [@ref-42]; [@ref-68]). These reasons include a lack of knowledge of possible evidence-based alternative treatment options, nonspecific knowledge about BZDs among physicians and other specialists, especially in geriatric care, a lack of clarity about how to appropriately prescribe the drug, and difficulties applying medication guidelines to clinical practice ([@ref-4]; [@ref-42]). Although physicians report being cautious about initiating BZD treatments, the psychosocial problems of patients are often severe, and knowledge of how to handle these severe problems using alternative strategies is often limited ([@ref-2]; [@ref-46]).

Given the variety of severe risks and adverse effects, including possible dependency, the high prevalence of BZD use in older people in general and the high number of long-term users in particular, interventions that address this issue need to be identified ([@ref-24]; [@ref-45]; [@ref-58]). To address this need, numerous studies have focused on the difficulties in physician-patient communication and patient information involved in the inappropriate use and prescription of BZDs. These studies have investigated specific interventions that are designed to educate patients, provide patient information material, improve physician-patient communication, or build a relationship between patients and physicians ([@ref-24]; [@ref-41]).
These interventions can be considered to fall under the umbrella term patient-centeredness ([@ref-57]; [@ref-70]). Patient-centered care is a comprehensive care concept ([@ref-74]), and various definitions have tried to encompass the complexity of this idea ([@ref-57]; [@ref-70]; [@ref-35]). Recently, [@ref-57] merged existing definitions and developed a comprehensive model of patient-centeredness. These researchers defined 15 dimensions of patient-centeredness and, according to expert consensus, isolated the five most relevant ones ([@ref-57]). In addition to being treated as a unique individual, the patient's involvement in his or her own care, patient empowerment, patient information, and clinician-patient communication were rated as the most relevant aspects ([@ref-70]). The latter dimensions are mainly understood to be the activities of patient-centered care, which has become an international demand for high-quality medicine ([@ref-35]; [@ref-49]).

An increased emphasis on patient-centeredness could address the causes of inappropriate BZD use and decrease its prevalence by focusing on patients' values. Patients' beliefs, preferences, and information needs must play a greater role in the care process. Putting the individual patient rather than his or her disease at the center of the treatment plan has increasingly been advocated, and numerous medical experts recommend the implementation of this strategy in routine care ([@ref-13]).

Research in various sectors of health care attests to improved care processes as a result of patient-centered approaches. Patients have reported that such approaches restored their satisfaction and self-management abilities and significantly improved their quality of life ([@ref-53]). Research on the physician's perspective describes the need for professional expertise, specific communication skills, and the ability to inform patients based on the evidence-based knowledge presented in guidelines and expert consensuses for clinical practice. Some studies have found that good physician-patient communication is associated with important patient health outcomes ([@ref-38]; [@ref-72]). In addition to dimensions concerning physicians' abilities, there are communication factors related to patient-centered activities in which physicians inform and better educate patients by sharing specific information and using informational resources and tools ([@ref-57]). Furthermore, recent research indicates that interventions that promote patient-centered care have a positive influence on patient-related outcomes ([@ref-18]; [@ref-36]).

The high prevalence of inappropriate BZD use and the possible reasons for this use, combined with knowledge of the general benefits of a patient-centered approach in health care, highlight the need to consider a patient-centered approach for patients using BZDs. By focusing on the five most important aspects of patient-centered care, this systematic review aimed to identify patient-centered interventions for reducing the inappropriate prescription and use of BZDs and z-drugs.

Methods
=======

This systematic review was registered with the International Prospective Register of Systematic Reviews (PROSPERO): CRD42014015616. The reporting guidelines used for this review were based on the Preferred Reporting Items for Systematic Reviews and Meta-Analyses statement ([@ref-32]; [@ref-40]). A study protocol was not published.
Search strategy
---------------

A search was performed using the following databases: Medline (via Ovid), EMBASE, PsycINFO, Psyndex, and the Cochrane Library. The following search terms were used: BZD(s) and/or z-drug(s) and/or anxiolyt\*, hypnotic\* in combination with information\*, communicate\*, educate\*, support\*, system\*, aid\*, program\*, process\*, material\*, health intervent\*, shared decision\*, informed decision\*, choice\*, and train\*. A sample syntax can be found in the appendix. The search was limited to studies published in English or German. The search began in September 2014 and was completed in October 2014.

Eligibility criteria
--------------------

Studies were included in this review if they met the following criteria: had a controlled design, assessed middle-aged and older adults (45 years and older), used interventions focused on users of BZDs or z-drugs and/or health care professionals (HCPs) involved in the care process, and had a reduction in BZD use and/or prescriptions as the primary outcome of interest. We excluded case series, review papers, meta-analyses, double publications, experimental research, protocols, and animal research. Moreover, studies were excluded if they focused on children or on chronically or seriously mentally ill patients, that is, if the use of BZDs was indicated (e.g., for severe psychiatric disorders such as schizophrenia). Psychopharmacological studies that examined medication phenomena only with respect to the drugs' effects were also excluded. The types of interventions included were predominantly educational or informational in nature. As part of our search strategy, we also performed a secondary search consisting of reference tracking for all included full-text documents and a consultation of experts in the respective health care fields.

Study selection
---------------

First, duplicates were removed. Second, two independent researchers (AM, JT, or EC) screened the selected articles, first by title and then by abstract, for interventions related to the research topic. When the title and abstract were relevant or when eligibility was uncertain, the full text was retrieved. Any uncertainty concerning eligibility was resolved after an assessment of the full text and a discussion within the research team.

Data extraction and quality assessment
--------------------------------------

The collected data were extracted using a standardized sheet we had developed previously, based on the Cochrane Extraction Form ([@ref-12]). The extraction form includes information about participants' characteristics (age, gender), the treatment setting, inclusion and exclusion criteria, the randomization process, the intervention description, the duration of the intervention, outcomes, follow-ups, results, and significance. The included interventions were classified by target population: BZD users, HCPs, or both groups. Data were extracted independently by two authors (AM and JT). Additionally, to consider the potential limitations of the included studies, their quality (or risk of bias) was assessed by two authors (AM and JT) using the Cochrane Collaboration's tool for assessing the risk of bias in randomized trials ([@ref-28]). The quality assessment form was based on six dimensions: random sequence generation, allocation concealment, blinding of participants and personnel, blinding of outcome assessments, incomplete outcome data, and selective reporting.
Data analysis
-------------

We used a qualitative analysis to synthesize the data extracted from the included studies ([@ref-17]). Intervention approaches were classified into the following categories: those targeting patients, those targeting HCPs, and multifaceted interventions. Furthermore, we subdivided the interventions into three patient-centered categories: physicians' essential characteristics, clinician-patient communication, and patient information. A meta-analysis could not be conducted because the interventions were too heterogeneous.

Results
=======

The review findings are presented in three steps. First, the studies are described, illustrated with charts, and subdivided into three sets, namely, patients, HCPs, and both groups combined. Next, the findings are described by an analysis of study quality, and then the results are summarized in terms of patient-centered dimensions.

We identified 7,068 studies through the electronic search and 11 studies through our secondary search strategy. After the removal of duplicates (4,628) and after the screening process, 20 studies remained relevant and met the inclusion criteria (see [Fig. 1](#fig-1){ref-type="fig"}).

![Flow diagram of studies reviewed.](peerj-06-5535-g001){#fig-1}

Description of identified studies
---------------------------------

All studies were published in English between 1992 and 2014. The interventions were conducted in the UK (four studies), Australia (four studies), the USA (two studies), the Netherlands (two studies), Canada (two studies), Spain (three studies), Ireland (one study), Belgium (one study), and Sweden (one study). All studies were based on at least a controlled design: eight studies used an explicit randomized controlled design, an additional nine used a controlled design (including intervention studies), and four used a cluster-randomized design. The study durations varied between 4 weeks and 29 months, with a mean of 6 months. Furthermore, the studies were conducted in different clinical settings that targeted inpatients, outpatients, community residents, or nursing home residents. The majority of the studies were conducted in general practices (11 studies) and nursing homes (five studies). One study each was carried out in a medical center, a hospital, an outpatient service (Medicaid), and a community pharmacy. While nine studies directly addressed BZD users (long-term, chronic, inappropriate), nine studies focused only on HCPs, specifically general practitioners and nurses. Two studies investigated the effect of interventions on both target patients and HCPs (physicians, nurses, and pharmacists). A systematic overview of relevant information for all interventions is shown in [Tables 1](#table-1){ref-type="table"}--[3](#table-3){ref-type="table"}.

10.7717/peerj.5535/table-1

###### Description of included studies: patients.
- **[@ref-7]** *Controlled evaluation of brief intervention by general practitioners to reduce chronic use of benzodiazepines.* UK; CT; general practices; 6 months; *n* = 109 chronic BZD users (*M* = 62 years, 61% women; intervention group 51, control group 58). Intervention: a self-help booklet with general information about benzodiazepines and techniques for coping with fears and anxiety, supported by the physician's advice. Patient-centered dimension: patient information. Findings: 18% of patients in the intervention group (9/50) had a reduction in benzodiazepine prescribing recorded in the notes compared with 5% of the 55 patients in the control group (*p* \< 0.05).

- **[@ref-15]** *Evaluation of an easy, cost-effective strategy for cutting benzodiazepine use in general practice.* UK; CT; general practices; 6 months; *n* = 209 long-term regular BZD users (*M* = 69 years, 4:1 women to men; letter group 65, letter-plus-advice group 75, control group 69). Intervention: a discontinuation letter asking the patient to reduce or stop the medication gradually, with information about reducing medication and practical suggestions for nonpharmacological coping strategies, plus 4-monthly information sheets. Dimension: patient information. Findings: after 6 months, both intervention groups had reduced their consumption to approximately two thirds of the original intake of benzodiazepines, a statistically significant difference between the groups; 18% of those receiving the interventions received no prescriptions at all during the 6-month monitoring period.

- **[@ref-23]** *Discontinuation of long-term benzodiazepine use by sending a letter to users in family practice: a prospective controlled intervention study.* Netherlands; CT; family practices; 6--21 months; *n* = 4,416 long-term BZD users (*M* = 68 years, 65--69% women; experimental group 2,595, control group 1,821). Intervention: patient information in the form of a discontinuation letter advising a gradual stop of benzodiazepine use, supported by patient-physician communication evaluating actual benzodiazepine use. Dimension: patient information. Findings: at 6 months there was a large reduction in benzodiazepine prescription of 24% in the experimental group vs. 5% in the control group; at 21 months a steady reduction of 26% was observed in the experimental group vs. 9% in the control group, indicating that the short-term gain of the intervention was preserved.

- **[@ref-63]** *Reduction of inappropriate benzodiazepine prescriptions among older adults through direct patient education: the EMPOWER cluster randomized trial.* Canada; RCT; community pharmacies; 6 months; *n* = 303 long-term BZD users (*M* = 75 years, 69% women; intervention group 138, control group 155). Intervention: patient information via a personalized booklet comprising a self-assessment component including risks and advice about drug interactions and mentioning evidence, tapering recommendations and therapeutic substitutes, as well as knowledge statements and peer-champion theories to create cognitive dissonance about the safety of benzodiazepine intake and augment self-efficacy. Dimension: patient information. Findings: at 6 months, 27% of the intervention group had discontinued benzodiazepine use compared with 5% of the control group (risk difference, 23% \[95% CI, 14--32%\]; intracluster correlation, 0.008; number needed to treat, 4); dose reduction occurred in an additional 11% (95% CI, 6--16%).

- **[@ref-64]** *Long-term effectiveness of computer-generated tailored patient education on benzodiazepines: a randomized controlled trial.* Netherlands; RCT; general practices; 12 months; *n* = 695 chronic BZD users (*M* = 62.3 years, 68.1% women; single tailored letter 163, multiple tailored letters 186, general-practitioner letter 159). Intervention: patient information either via two individually tailored letters aiming to reduce the positive outcome expectations of benzodiazepines by bearing in mind the benefits of withdrawal, thereby increasing self-efficacy expectations, or via a short general-practitioner letter that modelled usual care. Dimension: patient information. Findings: among participants with the intention to discontinue usage at baseline, both tailored interventions led to high percentages of those who actually discontinued usage (single tailored intervention 51.7%; multiple tailored intervention 35.6%; general-practitioner letter 14.5%).

- **[@ref-61]** *General practitioners reduced benzodiazepine prescriptions in an intervention study: a multilevel application.* Netherlands; CT; general practices; 12 months; *n* = 8,179 chronic BZD users (*M* = 64.63 years, 73.2% women; intervention group of 19 general practices, control group of 128 general practices). Intervention: patient information in the form of a discontinuation letter outlining the risks of continuous benzodiazepine use and recommending withdrawal, inviting patients to an appointment to discuss this procedure, followed by an information leaflet about BZDs. Dimension: patient information and clinician-patient communication. Findings: sending a letter to chronic long-term users of benzodiazepines advising decreasing or stopping benzodiazepine use in general practice resulted in a 16% reduction after 6 months and a 14% reduction after 1 year.

- **[@ref-27]** *Randomized controlled trial of two brief interventions against long-term benzodiazepine use: outcome of intervention.* UK; RCT; general practices; 6 months; *n* = 284 long-term BZD users (*M* = 69.1 years, 48% female; letter group 93, consultation group 98, control group 93). Intervention: patient information via a self-help booklet with information about tranquilizers, sleeping tablets and their withdrawal, accompanied by a leaflet about sleeping problems, and a discontinuation letter informing about risks and advising patients to stop the intake, supported by patient-physician communication including general information about benzodiazepines as well as advantages of and guidelines for withdrawal. Dimension: patient information and clinician-patient communication. Findings: significantly larger reductions in BZD consumption in the letter (24% overall) and consultation (22%) groups than in the control group (16%), but no significant difference between the two interventions.

- **[@ref-66]** *Withdrawal from long-term benzodiazepine use: randomized trial in family practice.* Spain; RCT; public primary care centers; 12 months; *n* = 139 long-term BZD users (*M* = 59 years, 82% women; intervention group 73, control group 66). Intervention: patient information via physicians' interviews given at the first and follow-up visits; the first visit concentrated mostly on general information about benzodiazepines and their risks/effects, while the follow-up visits focused on positive reinforcement of achievements. Dimension: patient information and clinician-patient communication. Findings: after 12 months, 33 (45.2%) patients in the intervention group and six (9.1%) in the control group had discontinued benzodiazepine use; relative risk = 4.97 (95% confidence interval \[CI\] = 2.2--11.1), absolute risk reduction = 0.36 (95% CI = 0.22--0.50); sixteen (21.9%) subjects from the intervention group and 11 (16.7%) controls reduced their initial dose by more than 50%.

- **[@ref-65]** *Comparative efficacy of two interventions to discontinue long-term benzodiazepine use: cluster randomized controlled trial in primary care.* Spain; RCT; general practices; 12 months; *n* = 532 long-term BZD users (*M* = 64 years, 72% women; structured intervention (SIF) 191, structured intervention with written instructions (SIW) 168, control group 173). Intervention: an educational intervention for patients with fortnightly follow-up visits to support gradual tapering (SIF), or written information material for patients rather than follow-up visits (SIW); patient information via an educational interview covering benzodiazepine dependence, abstinence and withdrawal symptoms, and the risks of long-term use, with reassurance about reducing medication, plus a self-help leaflet to improve sleep quality. Dimension: patient information, clinician-patient communication and essential characteristics of the clinician. Findings: at 12 months, 76 of 168 (45%) patients in the SIW group and 86 of 191 (45%) in the SIF group had discontinued benzodiazepine use compared with 26 of 173 (15%) in the control group; both interventions led to significant reductions in long-term benzodiazepine use in patients without severe comorbidity.

10.7717/peerj.5535/table-2

###### Description of included studies: health care professionals.

- **[@ref-5]** *A randomized trial of a program to reduce the use of psychoactive drugs in nursing homes.* USA; RCT; nursing homes; 5 months; *n* = 823 long-time users of psychoactive drugs and BZDs (age and sex not reported; intervention group of 6 nursing homes, 431; control group of 6 nursing homes, 392). Intervention: an educational program to improve medical competence based on the principles of "academic detailing," focusing on direct patient care, alternatives to psychoactive drugs and the recognition of adverse drug reactions; face-to-face educational sessions by clinical pharmacists for prescribers and written information material for prescribers. Dimension: essential characteristics of the clinician and clinician-patient communication. Findings: psychoactive drug use was reduced significantly more in the experimental group than in the control group (27% vs. 8%, *p* = 0.02); the comparable figures for the discontinuation of long-acting benzodiazepines were 20% vs. 9% (not significant).

- **[@ref-8]** *Investigating intervention strategies to increase the appropriate use of benzodiazepines in elderly medical in-patients.* UK; RCT; hospitals; 6--12 months; *n* = 1,414 inappropriate BZD users (*M* = 75 years, sex not reported; verbal intervention, bulletin intervention and control group sizes not reported). Intervention: a verbal intervention delivered in an interactive lecture format by a physician and a pharmacist to an audience arranged by the hospital contact; the bulletin intervention involved dissemination of printed material to physicians, pharmacists and nurses involved in care at the hospital. Dimension: essential characteristics of the clinician and clinician-patient communication. Findings: appropriate prescribing following the verbal intervention increased substantially from 29% to 44%, but this did not achieve statistical significance; there was a reduction in appropriate prescribing following the bulletin intervention (42--33%) and no change following the control intervention (42--42%).

- **[@ref-10]** *The effect of industry-independent drug information on the prescribing of benzodiazepines in general practice.* Belgium; RCT; general practices; 4 weeks; *n* = 128 general practitioners (oral and written information 44, written information 43, no information 41). Intervention: an educational mail arguing for rational and short-term prescribing of benzodiazepines, containing specific information on the limited effectiveness of long-term benzodiazepine use, risks, and different forms of habituation and dependence, supported by an independent medical representative whose oral message was congruent with the written materials and who answered any questions. Dimension: essential characteristics of the clinician. Findings: the absolute reduction in the number of prescribed packages was highest in condition one (oral and written information), with a mean decrease of 24% compared to baseline; a reduction of 14% was found for physicians in condition two (written information) and of 3% in the control group.

- **[@ref-39]** *Effects of educational outreach visits on prescribing of benzodiazepines and antipsychotic drugs to elderly patients in primary health care in southern Sweden.* Sweden; RCT; general practices; 12 months; *n* = 54 physicians in general practices (intervention group 23, control group 31). Intervention: physician's and pharmacist's visits at 2--8-week intervals; the first visit dealt with different causes of confusion in the elderly, such as medications, infections and other illnesses, while discussing associated literature, whereas the second visit focused on the effects and risks of benzodiazepines with medium- or long-acting duration of action. Dimension: essential characteristics of the clinician. Findings: one year after the educational outreach visits there were significant decreases in the active group compared to the control group in the prescribing of medium- and long-acting BZDs and total BZDs, but not of antipsychotic drugs.

- **[@ref-50]** *Educating physicians to reduce benzodiazepine use by elderly patients: a randomized controlled trial.* Canada; RCT; general practices; 12 months; *n* = 374 general practitioners (*M* = 50.6/50.7 years; intervention group 168, control group 206). Intervention: mailed feedback packages presenting bar graphs comparing the prescriber with his or her peers and a hypothetical "best practice," supported by evidence-based educational material. Dimension: essential characteristics of the clinician. Findings: although the proportion of long-acting benzodiazepine prescriptions decreased by 0.7% in the intervention group between the baseline period and the end of the intervention period (from 20.3%, or a mean of 29.5 prescriptions, to 19.6%, or a mean of 27.7 prescriptions) and increased by 1.1% in the control group (from 19.8%, or a mean of 26.4 prescriptions, to 20.9%, or a mean of 27.7 prescriptions) (*p* = 0.036), this difference was not clinically significant.

- **[@ref-51]** *A Quality Use of Medicines program for general practitioners and older people: a cluster randomized controlled trial.* Australia; RCT; general practices; 12 months; 20 general practitioners in 16 practices with *n* = 849 patients older than 65 years (intervention group 397, control group 352). Intervention: educational sessions by pharmacists explaining how to conduct medication reviews, with emphasis on benzodiazepines, accompanied by written sources of information on prescribing medication; the risk assessment contained 31 items assessing risk factors for medication misadventure. Dimension: essential characteristics of the clinician. Findings: compared with the control group, participants in the intervention group had increased odds of an improved medication-use composite score (odds ratio \[OR\], 1.86; 95% CI, 1.21--2.85) at the 4-month follow-up but not at 12 months.

- **[@ref-55]** *Outcomes of a randomized controlled trial of a clinical pharmacy intervention in 52 nursing homes.* Australia; RCT; nursing homes; 12 months; 52 nursing homes with *n* = 3,230 patients (intervention group of 13 nursing homes, 905; control group of 39 nursing homes, 2,325). Intervention: a clinical pharmacy service model based on issues such as drug policy and specific resident problems, together with education and medication review, and problem-based educational sessions for nurses addressing basic geriatric pharmacology and common problems in long-term care medication; reviews by pharmacists highlighted the potential for adverse drug effects, ceasing one or more drug therapies, non-drug interventions, and adverse-effect and drug-response monitoring. Dimension: essential characteristics of the clinician. Findings: the intervention resulted in a reduction in drug use with no change in morbidity indices or survival; the use of benzodiazepines was significantly reduced in the intervention group, and overall drug use in the intervention group was reduced by 14.8% relative to the controls.

- **[@ref-59]** *A randomized controlled trial of a drug use review intervention for sedative hypnotic medications.* USA; RCT; Medicaid recipients (outpatients); 6 months; *n* = 189 BZD users aged 55 years and older (61--63% women; intervention group 99, control group 89). Intervention: written information consisting of a letter describing the drug use and education council guidelines for sedative hypnotic prescribing, a prescriber-specific profile of sedative hypnotic prescribing, and a patient profile for each of the prescriber's patients identified as over-utilizers. Dimension: essential characteristics of the clinician. Findings: the intervention achieved a statistically significant decrease in targeted drug use, and the amount of reduction is likely to have decreased the risk of fractures associated with benzodiazepine use.

- **[@ref-58]** *An intervention to improve benzodiazepine use---a new approach.* Australia; CT; general practices (outpatients); 6 months; *n* = 429 physicians (intervention and control group sizes not reported). Intervention: information emails consisting of educational facts relating to benzodiazepines, including information on common side effects, indications, precautions and recommendations regarding prescribing, as well as characteristics and alternative non-drug techniques; the website contained links to Australian Department of Health and Ageing websites providing consumer information on medicines, including sleeping tablets. Dimension: essential characteristics of the clinician. Findings: a significantly smaller number of aged-care residents were on benzodiazepines for 6 months or more (*p* \< 0.05) after the intervention compared with before.

10.7717/peerj.5535/table-3

###### Description of included studies: patients and health care professionals.

- **[@ref-48]** *An evaluation of an adapted U.S. model of pharmaceutical care to improve psychoactive prescribing for nursing home residents in Northern Ireland (Fleetwood Northern Ireland Study).* Ireland; RCT; nursing homes; 12 months; 22 nursing homes with *n* = 334 residents (*M* = 82.7 years, 73% female; intervention group 173, control group 161). Intervention: 12 monthly visits from a pharmacist to review the prescription records of nursing home residents; collaboration of pharmacists with prescribers and patients to improve prescription patterns; the pharmacist's visits assessed the pharmaceutical care needs of each resident to identify potential and actual medication-related problems and reviewed the residents' medication with the aim of optimizing psychoactive prescription. Dimension: essential characteristics of the clinician, clinician-patient communication and patient information. Findings: the proportion of residents taking inappropriate psychoactive medications at 12 months in the intervention homes (25/128, 19.5%) was much lower than in the control homes (62/124, 50.0%) (odds ratio 0.26, 95% confidence interval 0.14--0.49) after adjustment for clustering within homes.

- **[@ref-69]** *An effective approach to decrease antipsychotic and benzodiazepine use in nursing homes: the RedUSe project.* Australia; CT; nursing homes; 6 months; 25 nursing homes with *n* = 1,591 residents (intervention group of 13 nursing homes, control group of 12 nursing homes). Intervention: consciousness raising, two drug use evaluation (DUE) cycles, educational sessions and promotional materials (newsletters, pamphlets, posters); the educational sessions and materials focused on informing health professionals and participants about the risks and modest benefits associated with antipsychotic medications for dementia and benzodiazepines for sleep disturbance and anxiety management in elderly people. Dimension: essential characteristics of the clinician, clinician-patient communication and patient information. Findings: over the 6-month trial, there was a significant reduction in the percentage of intervention home residents regularly taking benzodiazepines (31.8--26.9%, *p* \< 0.005); for residents taking benzodiazepines at baseline, there were significantly more dose reductions/cessations in intervention homes than in control homes (benzodiazepines: 39.6% vs. 17.6%, *p* \< 0.0001).
Quality assessments
-------------------

The studies included in this survey differed considerably with respect to methodological quality ([@ref-28]). Detailed evaluations for all studies are included in [Table 4](#table-4){ref-type="table"}. Three categories were used to describe assessment quality: low, high, and unclear risk of bias ("yes" signified low risk; "no," high risk; and "unclear," all other cases). In a second step, quantitative levels were introduced: to meet the "low risk" level, all items in question were required to have a low risk of bias, whereas the "high risk" and "unclear" levels required one item with a high risk of bias or an unclear risk of bias, respectively.

10.7717/peerj.5535/table-4

###### Risk of bias.

| Reference | Random sequence generation | Allocation concealment | Blinding of participants and personnel | Blinding of outcome assessment | Incomplete outcome data | Selective reporting |
|---|---|---|---|---|---|---|
| **Interventions for patients** | | | | | | |
| [@ref-7] | N.R. | H | H | U | L | U |
| [@ref-15] | N.R. | L | H | U | U | U |
| [@ref-23] | N.R. | H | H | U | L | U |
| [@ref-63] | L | L | L | L | L | L |
| [@ref-64] | L | L | U | U | H | U |
| [@ref-61] | N.R. | H | H | L | U | U |
| [@ref-27] | L | U | H | L | H | U |
| [@ref-66] | L | L | H | U | L | U |
| [@ref-65] | L | L | H | L | L | U |
| **Interventions for HCPs** | | | | | | |
| [@ref-5] | L | U | U | U | H | U |
| [@ref-8] | L | U | H | H | U | U |
| [@ref-10] | L | U | H | U | U | U |
| [@ref-39] | L | U | H | U | U | U |
| [@ref-50] | L | U | L | L | U | U |
| [@ref-51] | L | U | H | L | H | U |
| [@ref-55] | L | U | H | U | H | U |
| [@ref-59] | L | U | H | U | H | U |
| [@ref-58] | N.R. | U | H | H | H | U |
| **Interventions for patients and HCPs** | | | | | | |
| [@ref-48] | L | L | H | U | L | U |
| [@ref-69] | N.R. | H | H | U | U | U |

**Note:** Rating: L, low risk of bias; H, high risk of bias; U, unclear risk of bias; N.R., not relevant (controlled study design).

Regarding randomization, six studies were excluded from the assessment because of their study design (controlled trial) ([@ref-7]; [@ref-15]; [@ref-23]; [@ref-58]; [@ref-61]; [@ref-69]). In the remaining studies, the randomization was described clearly. Regarding allocation, six studies described in detail an allocation that was performed successfully ([@ref-15]; [@ref-48]; [@ref-63]; [@ref-64]; [@ref-66], [@ref-65]). Four studies reported an inappropriate allocation ([@ref-7]; [@ref-23]; [@ref-61]; [@ref-69]). In the remaining studies, the allocation was unclear. Regarding the blinding of participants, only two studies performed this procedure adequately ([@ref-50]; [@ref-63]). Two other studies poorly described how the blinding process was carried out ([@ref-5]; [@ref-64]). The remaining studies did not undertake any blinding of participants. Regarding the blinding of outcomes, six studies clearly blinded outcomes and documented the process well ([@ref-27]; [@ref-50]; [@ref-51]; [@ref-61]; [@ref-63]; [@ref-65]), two studies examined the outcomes in a nonblinded manner ([@ref-8]; [@ref-59]), and in the remaining studies, it was unclear whether the respective outcomes had been blinded. The lack of blinding in most studies may have impacted their results.
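The level rule introduced at the start of this subsection maps the six per-item ratings onto a single study-level rating; a minimal sketch of that rule follows (the enum names are ours, not from the Cochrane tool):

```c
#include <stdio.h>

enum rating { LOW, UNCLEAR, HIGH, NOT_RELEVANT };

/* Study-level rating per the rule above: "low risk" requires every
 * applicable item to be low; one high item makes the study "high risk";
 * otherwise the study is "unclear". N.R. items are skipped. */
static enum rating overall(const enum rating items[], int n)
{
    enum rating out = LOW;

    for (int i = 0; i < n; i++) {
        if (items[i] == HIGH)
            return HIGH;
        if (items[i] == UNCLEAR)
            out = UNCLEAR;
    }
    return out;
}

int main(void)
{
    /* Example: one unclear item (e.g., selective reporting) is enough
     * to keep a study out of the "low risk" level. */
    enum rating study[6] = { LOW, LOW, LOW, LOW, LOW, UNCLEAR };
    enum rating r = overall(study, 6);

    printf("overall: %s\n",
           r == LOW ? "low" : r == HIGH ? "high" : "unclear");
    return 0;
}
```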
Regarding incomplete outcome data, six studies were considered satisfactory, with a low probable risk of bias ([@ref-7]; [@ref-23]; [@ref-48]; [@ref-63]; [@ref-66]; [@ref-65]). In seven additional studies, outcome data were considered incomplete, increasing the risk of bias ([@ref-5]; [@ref-27]; [@ref-51]; [@ref-55]; [@ref-58]; [@ref-59]; [@ref-64]). Due to insufficient information, it could not be determined whether all patients in the remaining studies were included in the respective analyses; therefore, the risk of bias was unclear. Regarding selective reporting, only one study was found to have a low risk of bias ([@ref-63]). For the remaining studies, it was unclear whether important outcomes had not been produced or had simply not been reported.

In general, study quality was affected by a high risk of bias. Of the 20 studies in question, only one met all six categories to show no risk of bias ([@ref-63]). Seven studies were identified as having a low risk of bias in half of the categories, particularly those dealing with randomization and allocation and, to a lesser extent, the blinding of outcomes ([@ref-27]; [@ref-48]; [@ref-50]; [@ref-55]; [@ref-63]; [@ref-65]). However, in these studies, the presentation of selective reporting was poor. The remaining studies had a high risk of bias, mainly in the blinding of patients and personnel category. These studies also had poor presentations with respect to the blinding of outcomes and to incomplete data. Although most studies performed randomization well, a high risk of bias was prevalent in all five remaining categories. Thus, the overall quality of these studies, ranging from average to low, needs to be considered when interpreting their results. For the remaining evaluation categories, all studies revealed vastly different standards of quality and poor presentation of procedures. If personnel and patients were not blinded, if the measurement processes became apparent, or if the results were not presented properly and completely, the validity of the study in question could be compromised.

Summary of findings
-------------------

The study results are presented again in terms of group subdivisions (patients, HCPs, both groups combined) and dimensions of patient-centered care. The data analysis identified three dimensions within the model of patient-centered care: patient information, clinician-patient communication, and essential characteristics of the clinician ([@ref-57]).

Interventions concerning patients
---------------------------------

Nine studies focused on patient interventions. Five studies examined the impact of patient information on the reduction of BZD use ([@ref-7]; [@ref-15]; [@ref-23]; [@ref-63]; [@ref-64]), while the remaining four studies looked at a combination of patient information and extra clinician-patient communication ([@ref-27]; [@ref-61]; [@ref-66]; [@ref-65]).

Patient information
-------------------

[@ref-7] demonstrated a short and simple intervention in which general advice from the GP combined with a self-help booklet reduced BZD intake after 6 months among patients who had taken the medication for more than a year. In a randomized controlled trial (RCT), [@ref-15] suggested that a letter containing information on BZDs and advice on how to reduce their intake, followed by four monthly information sheets, could reduce the intake of BZDs by approximately one-third after 6 months ([@ref-15]).
According to the authors, this simple method could significantly decrease intake among older people as well, whereas previous research had suggested that such a reduction was harder to achieve. Another RCT, with more than 4,000 participants, showed that a letter with advice on how to gradually discontinue BZD use, followed by an appointment with the family practitioner to evaluate actual drug use, could significantly reduce participants' BZD intake. A follow-up after 29 months confirmed the effectiveness of this intervention ([@ref-23]). In a subsequent RCT, [@ref-63] suggested that a personalized eight-page patient-empowerment booklet, based on social constructivist learning and self-efficacy theory, supported the complete cessation of BZD use in older people. An overall reduction in BZD intake was observed 6 months after the intervention ([@ref-63]). Individually tailored interventions delivered to patients either once or three times in a row were effective at discontinuing BZD intake. Moreover, scientists from the Netherlands compared these tailored interventions to a short letter from a general practitioner and found that the former was superior ([@ref-64]).

Patient information and clinician-patient communication
-------------------------------------------------------

[@ref-61] showed that a letter from a GP requesting that patients stop or reduce their BZD use with the GP's help, coupled with a reminder 6 months later for those who had not responded, significantly reduced the number of prescriptions per patient per half year. Nearly 150 practices and more than 8,000 patients were included in this study ([@ref-61]). [@ref-27] demonstrated how the dissemination of information to patients, along with auxiliary educational talks with a GP, could lead to a reduction in BZD intake within 12 months. BZD intake among older patients could be reduced in two ways: via patient information only or via patient information plus supportive communication from a physician. There was no significant difference between the first intervention, with information on BZDs provided by the GP (combined with a talk), and a second intervention consisting only of a letter signed by the GP. However, significant differences were found in a study that compared routine clinical practice to a treatment that contained standardized advice as well as a tapering-off schedule and biweekly follow-up visits ([@ref-66]). At the 12-month follow-up, 45% of patients in the intervention group and 9.1% in the control group had discontinued their BZD use. This study concluded that the intervention was effective in terms of reducing long-term BZD use and was feasible in primary care. [@ref-65] conducted workshops that trained physicians in how to interview patients and how to individualize patient information so as to lead to a gradual tapering of patients' BZD intake. Regardless of whether patient consultations were followed by additional visits or written instructions, there was a reduction in long-term BZD use in patients without severe comorbidities ([@ref-65]).

Interventions for health care professionals
-------------------------------------------

Next, we systematically analyzed the studies that employed interventions aimed at HCPs and focused on their essential characteristics and clinician-patient communication as part of the patient-centered care model.
Essential characteristics of the clinician
------------------------------------------

[@ref-10] conducted a study to assess whether oral and written information on BZDs, or written information alone, from an industry-independent source would have an effect on BZD prescribing among general practitioners. The statistical analysis suggested that the combination of physician contact and written information (24%) was superior to written information alone (14%); both interventions led to a decrease in the prescribing rate ([@ref-10]). [@ref-39] examined the effect of outreach visits. Experts visited physicians at private practices twice and provided them with information on confusion in older people and the effects of BZDs as well as other psychotropic drugs on this population ([@ref-39]). One year after the intervention, researchers found a significant decrease (25.8%) in the number of BZD prescriptions. [@ref-50] were interested in the effects of regular emails sent to physicians over a 6-month period at 2-month intervals. The emails contained confidential profiles of BZD prescription users and educational bulletins ([@ref-50]). Physicians in the control group received educational bulletins related to antihypertensive drug prescriptions for older people. The researchers reported a 0.7% decrease in prescribing rates in the intervention group and a 1.1% increase in the control group, but this difference was not significant. An educational program developed by [@ref-51] evaluated an intervention complex that consisted of three major parts: educational resources (academic detailing, prescribing information, and feedback), medication risk assessments, and a medication review checklist ([@ref-51]). However, the intervention group did not show a significant reduction in the use of BZDs (OR = 0.51; 95%). [@ref-55] designed an approach to improve the quality of medication care among nursing home residents at large. This intervention consisted of three phases: the introduction to stakeholders of a new professional role related to relationship building, the education of nurses, and a medication review by pharmacists with a postgraduate diploma in clinical pharmacology. While the authors did not find a substantial change in morbidity indices or survival rates (primary outcomes), they did detect a significant decrease of 16.6% in BZD intake (14.8% in cumulative drug intake). [@ref-59] investigated the effect of an intervention packet mailed to prescribers of BZDs. This package consisted of an intervention letter, a review of drug use, guidelines, and a prescriber-specific profile about the prescription of sedative hypnotics, as well as a patient profile for each of the prescriber's patients who were identified as overutilizers. The researchers determined that this intervention significantly reduced the use of BZDs as a targeted sedative hypnotic medication in the intervention group (27.6%) versus a control group (8.5%). [@ref-58] investigated whether informing HCPs about BZD intake via emails and a website affected the number of BZD prescriptions over a 6-month period. After the intervention, there was a significantly smaller number of aged care residents who had used BZDs for 6 months or more (*p* \< 0.05), but there was no significant change in the number of residents taking BZDs or taking them long term, and no significant change in the quantitative use of BZDs, compared with two different control areas (groups).
Essential characteristics of the clinician and clinician-patient communication
------------------------------------------------------------------------------

[@ref-5] found a significant reduction in the use of psychoactive drugs (BZDs included) among residents at three nursing homes after they implemented a comprehensive educational outreach program ("academic detailing") for HCPs. The reduction in BZD intake was 20% in the intervention group and 9% in the control group, and the patients in the intervention group reported reduced anxiety but more memory loss than the control group. [@ref-8] investigated whether an interactive lecture or the dissemination of printed materials to physicians, nurses, and pharmacists would change the prescribing rate of BZDs toward a more appropriate rate for inpatients. Nearly 1,500 inpatients were included in the study. The prescribing rates were handled more appropriately in both intervention groups (intervention group 1: 29--44%; intervention group 2: 42--33%) than in a control group (42--42%), but these differences were not significant.

Interventions for patients and health care professionals
--------------------------------------------------------

Finally, we identified two studies that employed a multifaceted approach toward both patients and HCPs that involved several dimensions of the patient-centered care model.

Essential characteristics of the clinician, clinician-patient communication, and patient information
----------------------------------------------------------------------------------------------------

[@ref-48] developed a multifaceted approach that entailed medication reviews by pharmacists over a 12-month period. The pharmacists' visits consisted of a review of the residents' prescribing information, the use of an algorithm to help prescribers assess the appropriateness of a medication, and individual conversations on improving prescriptions. As a result of the intervention, the proportion of residents taking inappropriate psychoactive medications at 12 months in the intervention group (25/128, 19.5%) was significantly lower (*p* \< 0.001) than that in the control group (62/124, 50.0%) (odds ratio 0.26, 95% confidence interval 0.14--0.49) after adjustment for clustering within homes. No differences were observed at 12 months in the fall rate between the intervention group and the control group. Finally, these visits led to significantly lower rates of BZD prescribing and intake in the intervention group. In a controlled trial, [@ref-69] utilized a strategy from the Reducing Use of Sedatives (RedUSe) project, a multistrategic interdisciplinary intervention for reducing the inappropriate use and promoting the appropriate use of medications. The intervention included raising awareness, two drug use evaluation cycles, educational sessions, promotional materials (newsletters, pamphlets, posters), academic detailing, and a targeted sedative review. This intervention complex led to a significant reduction in intervention home residents regularly taking BZDs (31.8--26.9%, *p* \< 0.005) and antipsychotics (20.3--18.6%, *p* \< 0.05); there were significantly more dose reductions and cessations in intervention homes than in control homes (BZDs: 39.6% vs. 17.6%, *p* \< 0.0001; antipsychotics: 36.9% vs. 20.9%, *p* \< 0.01) for residents taking BZDs and antipsychotics at baseline.
In summary, the intervention of [@ref-69] led to a significantly higher rate of dosage reductions or cessations in intervention homes than in control homes. Discussion ========== This review surveyed twenty interventions aimed at reducing the inappropriate prescription or use of BZDs and z-drugs. All interventions were based on patient-centered dimensions: patient information, clinician-patient communication, and essential characteristics of the clinician. We used the description of the interventions to assign them to the respective three dimensions of the patient-centered care model developed by [@ref-57]. Patient-centered care is a broad concept in health care; this review shows that although there has been a growing focus on interventions that reduce the inappropriate use of BZDs and z-drugs, no study was defined as a patient-centered intervention or specifically measured the effects of such an intervention. Importantly, all included studies used a controlled design, and most showed a positive effect on the inappropriate prescription and use of BZDs and z-drugs for the intervention compared with typical care. There were comparisons between interventions and typical care as well as between interventions and other interventions. The interventions focused on patients showed a greater effect than those focused on HCPs. The studies that included both groups also showed a positive effect. This review suggests that patient-centered interventions that actively target patients, health professionals, or both are better than no intervention at all. Based on the results of this work, the following recommendations can be derived. First, studies that examined patient information as one important dimension of patient-centered care and focused exclusively on patient-targeted interventions did not indicate a specific way to successfully reduce BZD and z-drug intake. In contrast, it has been shown that there are many methods to provide information that consider the patient's informational needs and preferences. Studies have demonstrated that most educational interventions are more effective with middle-aged participants than with older participants ([@ref-36]; [@ref-37]). However, studies assessing elderly people show more diverse results than those without any age specifications ([@ref-38]); therefore, there is a high probability that the effects of these interventions can also be achieved in older populations. The patient information studies established that providing patients (regardless of age) with information effectively led to the reduction or discontinuation of BZD and z-drug use, and this finding is consistent with previous research ([@ref-41]; [@ref-67]). Providing facts in a comprehensive and well-arranged way, as patient information does, encourages patients to consider reducing or discontinuing the use of the drug ([@ref-11]). Among the interventions that targeted patients, two studies supplemented the provision of patient information through consultations and active support by personnel; these studies also showed a significant reduction in BZD use. Providing patient information encourages patients to discuss these topics with their physician ([@ref-26]; [@ref-43]). Advising patients and discussing the best possible treatments are the main purposes of patient-centered care ([@ref-19]; [@ref-57]). The findings here emphasize the importance of providing patient information as part of a patient-centered approach ([@ref-21]; [@ref-70]). 
Second, the majority of the studies that focused on clinician-patient communication and essential characteristics of the clinician (HCPs) investigated interventions for HCPs; only three studies investigated interventions for patients. Studies that focused on patient interventions assessed a combination of patient information and clinician-patient communication and suggested that direct educational interventions and discussions with HCPs effectively reduce or stop inappropriate BZD use. This finding can be explained by the active participation of patients in the care process, as they are provided with all the information they need to make decisions regarding their medication consumption. Interventions targeting HCPs that include a combination of patient information sources (via e-mail, letter) and follow-up personal contact with HCPs provide models of success that may be more likely to be effective in reducing the inappropriate prescription and use of BZDs and z-drugs. This two-way communication is an important method of building practitioner-specific skills and increasing practitioner involvement in the interaction ([@ref-52]). Although we did not explicitly describe and analyze secondary outcomes, in some of these combined studies, the most important results were the absence of symptoms (anxiety, distress, behavior disorders, life quality) as BZD usage was reduced ([@ref-5]).

The results were more varied with regard to interventions that concentrated on a set of verbal and nonverbal communication opportunities and skills and a set of attitudes, including those towards the patients, the HCPs themselves (self-reflection) and the medical competency of the HCPs. While some studies found that the sole use of informative and educational training with printed educational material, training sessions and/or expert visits had positive effects on prescription rates and/or BZD use, other studies did not find similar results. However, it is possible that with educational efforts, positive changes with respect to the inappropriate prescription and consumption of BZDs can be achieved without disrupting care routines or producing high economic costs ([@ref-25]). The factors associated with the knowledge and skills of prescribers belong to the most important dimension of patient-centered care. However, no conclusions can be drawn from comparing the effects across the studies with significant results. Most studies with statistically significant results used interventions that consisted of complex designs and methods, such as combinations of education and active individual exchanges about prescribing practices. These results suggest that an active exchange of knowledge during discussion leads to improvements in prescription habits. The duration of the studies that targeted clinician-patient communication and the specific characteristics of HCPs ranged from 5 to 12 months (one study lasted 4 weeks), suggesting that positive effects need time but will also be long-lasting. However, some of the studies that examined communication specifications or essential characteristics of HCPs did not report significant positive changes in prescription rates or the use of BZDs. A few explanations for these findings were provided ([@ref-8]; [@ref-50]), in particular, a focus on only one method of intervention (bulletin information) and a failure to combine several strategies.
Furthermore, changes in prescribing habits associated with long-term therapy (as with BZDs) are more difficult to achieve than in cases of acute and nonrecurring therapies, and some patients do not associate their medications with harmful effects. Therefore, more studies are needed that clearly define and describe the patient-centered dimensions of communication and HCP characteristics to allow for explicit comparisons and recommendations for clinical practice.

Third, this review included two multifaceted interventions that addressed patients as well as HCPs and examined three patient-centered dimensions of medical care: the essential characteristics of the clinician (HCP), clinician-patient communication and patient information ([@ref-48]; [@ref-69]). These studies demonstrated that inappropriate users who were actively informed about appropriate BZD use were more likely to reduce or discontinue BZD use. In addition, HCPs who were informed and involved in active exchanges improved their prescribing behavior, which is consistent with other reviews ([@ref-25]). The available evidence indicates that interventions that address both patients and HCPs are effective and have significant positive effects if patient information and HCP education are implemented simultaneously ([@ref-30]; [@ref-33]). The joint distribution of information and educational resources to both groups stimulates information exchange, which can lead to the cessation of drug use and/or improvements in prescribing behaviors ([@ref-14]; [@ref-60]). Therefore, it is important to use a combination of strategies, such as updating HCP skills and improving awareness among patients, to help reduce or discontinue BZD and z-drug use. Other studies have found that interdisciplinary collaborations in medication-care-related interventions also improve drug use outcomes ([@ref-73]). However, these results should be interpreted with caution, as only two studies were included in the present analysis.

When analyzing the identified articles, it became clear that general practitioners and nursing homes were attempting to reduce the inappropriate use of BZDs and z-drugs. This was particularly true for older people who were being treated on an outpatient basis or by nursing home personnel. As reported in other published reviews, a number of interventions capable of reducing BZD and z-drug use already exist ([@ref-41]; [@ref-67]), and interventions are more effective than routine care ([@ref-46]). Consistent with previous reviews, interventions that target patients, which are represented under the dimension of patient information, have a positive effect on the reduction of BZD and z-drug use ([@ref-41]). A brief intervention in the form of either a letter or a single consultation is an effective strategy to decrease or stop inappropriate medication use without causing adverse consequences ([@ref-41]). Most strategies promote patient-centered care by providing information, boosting prescriber proficiency, and strengthening clinician-patient communication. Interventions that target patients and HCPs and use a multifaceted approach may be efficient, as studies of these interventions, in most cases, showed sustained reductions in BZD or z-drug use, consistent with other reviews ([@ref-24]). Our review emphasizes that it is possible to decrease the inappropriate prescription and use of BZDs by providing patient-centered skills to providers.
Finally, we found that effective interventions for changing clinical practice must target patients as well as HCPs and reflect the perspectives of patient-centered care ([@ref-18]; [@ref-31]). Due to the heterogeneity of the included studies and their designs, this review did not attempt to compare the studies or make a final general statement. In addition, our findings and conclusions should be reconfirmed through further investigations.

Strengths and limitations
-------------------------

This is the first review of patient-centered care in the field of inappropriate BZD and z-drug usage. A systematic approach yielded a survey of patient-centered care interventions, providing a critical look at the multitude of methods that address different target groups along with their respective effectiveness.

The quality of the studies suffered considerably from a lack of specificity. Study protocols were missing in all studies, and it was unclear whether all relevant information had been conveyed. Thus, it is necessary to be cautious when interpreting these results. This review focused on the primary outcome of a reduction in BZD and z-drug use and prescribing, and it did not consider secondary outcomes, such as the patients' general health status (biological factors), social lives (social factors), or mental health status (psychological outcomes). The HCPs were also not analyzed in terms of their time in the profession or their experience in treating older patients. An assessment of these factors is recommended in further scientific investigations to obtain a complete understanding of the problems involved in the inappropriate prescription and use of BZDs and z-drugs. Furthermore, although patient education seems to be more effective than approaches targeting HCPs, caution must be exercised with regard to generalization: a number of cognitively impaired older patients, especially in nursing homes (e.g., dementia patients), are not able to benefit from educational information. Finally, many studies were conducted using qualitative designs, and many were written in languages other than English; thus, these studies were not included in the current review, though they may also have been relevant. Therefore, future reviews should incorporate additional research designs.

Conclusion
==========

The main finding of our systematic review is that patient information and educational strategies for HCPs can effectively lead to the appropriate use and prescription of BZDs. All three examined areas of patient-centered care (patient information, essential characteristics of the clinician, and clinician-patient communication), alone or in combination, were generally effective at reducing and/or stopping the use of BZDs and z-drugs completely. These results suggest that inappropriate BZD and z-drug users (older adults) require and benefit from in-depth information about appropriate consumption. On the other hand, HCPs require more interventions in which they may communicate their clinical experiences with other groups of caregivers, discuss guidelines, and obtain additional knowledge to optimize their prescribing practices. Although this review focused on a patient-centered approach, it also revealed the limitations of studies that use this method. Before any final conclusions can be drawn, further investigations are needed to reconfirm the findings discussed here.

Supplemental Information
========================

10.7717/peerj.5535/supp-1

###### PRISMA checklist.
We would like to thank Eva Christale for her collaboration in the screening and data extraction processes.

Additional Information and Declarations
=======================================

The authors declare that they have no competing interests.

[Aliaksandra Mokhar](#author-1){ref-type="contrib"} conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, prepared figures and/or tables, approved the final draft.

[Janine Topp](#author-2){ref-type="contrib"} conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, prepared figures and/or tables.

[Martin Härter](#author-3){ref-type="contrib"} conceived and designed the experiments, authored or reviewed drafts of the paper.

[Holger Schulz](#author-4){ref-type="contrib"} conceived and designed the experiments, supervision.

[Silke Kuhn](#author-5){ref-type="contrib"} conceived and designed the experiments, authored or reviewed drafts of the paper, supervision.

[Uwe Verthein](#author-6){ref-type="contrib"} conceived and designed the experiments, authored or reviewed drafts of the paper.

[Jörg Dirmaier](#author-7){ref-type="contrib"} conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, authored or reviewed drafts of the paper, approved the final draft.

The following information was supplied regarding data availability:

This article is a systematic review and did not generate any data.
35,000 pupils left without direct access to psychologists

Almost 200 schools are without the direct services of an educational psychologist due to massive staff shortages. The National Educational Psychological Service (NEPS) is currently trying to recruit new practitioners, but in the meantime schools are having to outsource the work.

The situation is affecting children’s access to Resource Teaching Hours and Learning Supports, and according to Opposition politicians, “privileging households that have the resources to opt for private assessment”.

Figures released by Education Minister Richard Bruton show that 199 schools, with around 35,000 pupils, are not being directly serviced by NEPS.

“In the case of some schools, NEPS psychologists may no longer be assigned to those schools as a result of retirement, resignation or transfer to another NEPS region, and while every effort was made to recruit a replacement from the existing PAS panels, sometimes this is not successful,” Mr Bruton said.

He noted that the schools affected are given access to the Scheme for Commissioning Psychological Assessments (SCPA), under which they can have an assessment carried out by private psychologists approved by NEPS, and NEPS will pay the fees.

However, Fianna Fáil’s education spokesman Thomas Byrne told the Irish Independent that in most instances it takes over a year for students to be publicly assessed.

“This is why the recruitment of 100 new NEPS psychologists was a condition of our Confidence and Supply Agreement with the Government.

“However, these figures reveal that the situation is a lot worse for many schools who do not have an educational psychologist assigned to take care of their pupils’ needs,” he said.

The Meath East TD said schools which do not have an assigned psychologist are “severely disadvantaged in terms of delays to assessments for special education needs or behavioural difficulties, as well as having more limited access to psychological supports and counselling for children when a crisis presents itself”.

He said all schools are able to access full NEPS supports in the event of a critical incident.

Mr Bruton also noted that a national recruitment competition has been put in place by PAS to fill vacancies within all NEPS regions. Work is currently ongoing in relation to the examination of applications and short-listing of candidates for interview.

“It is envisaged, following interviews, that recruitment panels will be formed and active filling of vacancies will commence in the New Year,” Mr Bruton said.
The spiritual blindness that happens in the night of the spirit happens because the divine light of God is brighter than the eyes of our soul can handle. This is one reason the night of the spirit hurts — because our souls, being human, are much weaker than the brightness of the divine light of God. John of the Cross says this: “The light and wisdom of this contemplation are so pure and bright and the soul it invades is so dark and impure that their meeting is going to be painful. When the eyes are bad — impure and sickly — clear light feels like an ambush and it hurts.” There’s another reason the night of the spirit is so painful, though, and it’s because what the soul is able to see when the divine light shines upon it are all its imperfections. The saint describes it this way: “Consider common, natural light: a sunbeam shines through a window. The freer the air is from little specks of dust, the less clearly we see the ray of light. The more motes that are floating in the air, the more clearly the sunbeam appears to our eyes. This is because light itself is invisible. Light is the means by which the things it strikes are perceived.” The light of God is a sunbeam on the soul, and our native imperfections are dust motes and particles floating through the air, now clearly visible because of that ray of light. The sudden, acute awareness of all these imperfections makes the soul in this place feel quite wretched.
/* Generated by RuntimeBrowser. */

/* Imports and a forward declaration added so the header compiles stand-alone;
   the anonymous CGSize struct from the dump is written idiomatically. */
#import <Foundation/Foundation.h>
#import <CoreGraphics/CoreGraphics.h>

@class SFAutomaticPasswordInputView;

@protocol SFAutomaticPasswordInputViewSizing <NSObject>

@required

// Return the intrinsic content size that the given input view should adopt.
- (CGSize)intrinsicContentSizeForInputView:(SFAutomaticPasswordInputView *)arg1;

@end
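For illustration, a minimal sketch of what a conforming object might look like. SFAutomaticPasswordInputView is a private class recovered by RuntimeBrowser, so the class FixedSizeSizer and the dimensions below are hypothetical and not part of the dump:

// Hypothetical conformer (not from the RuntimeBrowser dump): reports a fixed
// intrinsic size for any automatic-password input view it is asked about.
@interface FixedSizeSizer : NSObject <SFAutomaticPasswordInputViewSizing>
@end

@implementation FixedSizeSizer

- (CGSize)intrinsicContentSizeForInputView:(SFAutomaticPasswordInputView *)inputView {
    // Placeholder dimensions; a real implementation would derive these from
    // the view's content and its layout constraints.
    return CGSizeMake(320.0, 44.0);
}

@end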
In a sense, Michigan coach Brady Hoke was right when he said Notre Dame was “chickening out” of its rivalry with the Wolverines. No, the Fighting Irish are not literally scared to play Michigan, but if the Wolverines weren’t generally so good Notre Dame might not have pulled out of the series between two of college football’s most famous teams. As is the case with all heavyweight programs, Notre Dame needs to manage the difficulty of its schedule and guarantee regularly playing seven home games. The Fighting Irish (1-0) visit the Big House on Saturday, their last scheduled trip to Ann Arbor. Michigan (1-0) plays at South Bend, Ind., next season, and then the rivalry takes an indefinite hiatus. It’s a rivalry that is both historical and significant — and really cool. Winged helmets vs. golden domes. But when Notre Dame agreed to play five games against Atlantic Coast Conference opponents per season, starting next year, it needed to clear some space — and Michigan got the boot. “It’s just there’s so many complexities with our schedule and our agreement with the ACC that it’s difficult and frustrating,” Notre Dame coach Brian Kelly said. “I can see the frustration that would be there.” Why Michigan and not Navy or Purdue or even Michigan State? To be fair, there is a lot of history with those rivalries. The Irish have played the Midshipmen more (86 times) than even Southern California (84). They’ve played Purdue 84 times as well and Michigan State is next on the most-played list at 75. Michigan’s 40 is behind Pitt (68), Army (50) and Northwestern (47). Just as important: Notre Dame doesn’t necessarily need another heavyweight on its schedule. Next season Notre Dame plays the usual suspects: Stanford, USC, Michigan. No Michigan State. The deal with the ACC added Louisville and Florida State. There is also a road game against Arizona State that the Irish couldn’t get out of, plus Northwestern and North Carolina. You never know for sure how tough a schedule will be until it plays out, but that has potential to be one of the most difficult in the country. As much as programs and conferences are looking for ways to bulk up their future schedules, they are doing so carefully. Notice how much discussion has gone on in the Southeastern Conference about possibly playing nine league games and eliminating cross-division rivalries. Not wanting to give up home games is one of the reasons Florida so rarely plays Miami. The Gators visit the Hurricanes on Saturday, and have no plans to play again. What Notre Dame is doing with Michigan is similar and understandable, but hopefully it won’t be permanent. The picks: MAIN EVENTS No. 6 South Carolina (plus 3) at No. 11 Georgia Jadeveon Clowney gets chance to catch his breath ... GEORGIA 27-23. No. 14 Notre Dame (plus 3½) at No. 17 Michigan Under the lights, Wolverines protect the Big House ... MICHIGAN 23-20. MARQUEE MATCHUPS No. 12 Florida (minus 3) at Miami Fourth regular-season meeting since 1987 ... FLORIDA 31-21. No. 15 Texas (minus 7) at BYU Cougars offense was washed away by Virginia rain, should be better at home ... TEXAS 24-20. West Virginia (plus 20½) at No. 16 Oklahoma Sooners rediscovered their defense last week ... OKLAHOMA 45-21 Washington State (plus 15) at No. 25 USC Whoever is the quarterback for USC needs to play better ... USC 35-17 UPSET SPECIAL Syracuse (plus 12) at No. 19 Northwestern Wildcats came back from California banged up ... SYRACUSE 30-24. PLUCKY UNDERDOGS No. 2 Oregon (minus 22) at Virginia ... OREGON 38-14. 
San Diego State (plus 28) at No. 3 Ohio State ... OHIO STATE 41-17. San Jose State (plus 26½) at No. 5 Stanford ... STANFORD 35-14. Virginia is coming off a soggy victory against BYU, and could probably use another rain storm to help slow down the Ducks. San Diego State is coming off a surprising loss to Eastern Illinois, but should be better. San Jose State hung tough with Stanford last season and has one of the best quarterbacks in the country in David Fales.
Q: tikz pgf hide or show legend, colorbar, plot lines - similar to hide axis If you have a complex tikzpicture with many \addplot commands and \addlegendentry commands, references to many datasets, probably also a colorbar, and so on: is there a convenient way to only hide/show the legend or only hide/show the colorbar, or to hide all the plot lines while keeping the legend? I want to avoid having to uncomment and edit many things and there is for example a very convenient way to hide the axis by using hide axis, somewhere within the axis options of the tikzpicture. I am looking for similar commands to hide the legend or to hide the colorbar or to hide the actual line data (plots) while still showing the correct legend (with correct colors, entries etc.). The goal is for example to be able to create a pdf, which just shows the cropped legend (with all the correct lines in the corresponding colors of the plots and with the correct legend entries) but hides all of the rest of the figure's elements (colorbar, axis, plot lines...). Is something like this possible? I would like to avoid to create the legend manually as mentioned here: 54794 A: As I already stated in the comments below the question, you can externalize the legend to a PDF. For that have a look at the following example (including the comments). % use this document to externalize the legend of the plot % (it is important that the `--shell-escape' feature is enabled) \documentclass[border=2mm]{standalone} \usepackage{pgfplots} % load `external' library from PGFPlots and not TikZ, % because its newer and some bugs are fixed \usetikzlibrary{ pgfplots.external, } % when you are new to LaTeX/PGFPlots then you should use % the newest `compat' level to use all new features \pgfplotsset{ compat=newest, } % activate externalization \tikzexternalize[ % with the following key you can define where to store the % externalized files. Otherwise they will be stored in the % main/\jobname folder prefix=Pics/pgf-export/, % only externalize stuff that is "named" with `\tikzsetnextfilename' only named=true, % % if you have to force to reexternalize stuff, uncomment the next line % force remake, ] \begin{document} % if you (also) want to externalize the plot, give him a name % (uncomment the next line) % \tikzsetnextfilename{linear_plots} \begin{tikzpicture} \begin{axis}[ legend columns=-1, legend entries={ $(x+0)^k$;, $(x+1)^k$;, $(x+2)^k$;, $(x+3)^k$ }, legend to name=legend:linear_plots, ] \addplot {x}; \addplot {x+1}; \addplot {x+2}; \addplot {x+3}; \end{axis} \end{tikzpicture} bla % to externalize the (external) legend, give it a name with the % following command \tikzsetnextfilename{linear_plots_legend} % then reference the legend, so it is drawm and thus externalized \ref{legend:linear_plots} blub \end{document} Then you can simply use \includegraphics in your real document to show the legend anywhere you want. % in your "real" document you can then include the externalized plot/legend \documentclass[border=2mm]{standalone} \usepackage{graphicx} % also add the externalize path (from above) % to the `\graphicspath' \graphicspath{{Pics/pgf-export/}} \begin{document} bla \includegraphics{linear_plots_legend} blub \end{document}
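A: For the other part of the question -- hiding the plotted lines while keeping a correct legend -- one option is to draw the plots fully transparent and restore opacity for the legend images only. The following is a minimal sketch, not a tested drop-in: it assumes the documented pgfplots keys every axis plot post and legend image post style behave as described in the manual.

\documentclass[border=2mm]{standalone}
\usepackage{pgfplots}
\pgfplotsset{compat=newest}

\begin{document}
\begin{tikzpicture}
\begin{axis}[
    hide axis, % hide the axis as usual
    % make every plotted line invisible...
    every axis plot post/.append style={opacity=0},
    % ...but restore visibility for the legend images
    legend image post style={opacity=1},
    legend columns=-1,
]
\addplot {x};   \addlegendentry{$x$}
\addplot {x+1}; \addlegendentry{$x+1$}
\end{axis}
\end{tikzpicture}
\end{document}

Analogously, a colorbar can usually be switched off with colorbar=false in the axis options (assuming the standard boolean colorbar key of pgfplots), so the visible elements can be toggled without commenting out any \addplot commands.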
[Neck pain: which physical therapies are recommended?]

Different physical therapies for cervicalgia (neck pain) are described here, along with their efficacy, adverse effects, degree of evidence and recommendations. Several guidelines recommend active exercise, patient education and various of the treatments described here. According to the strict criteria of evidence-based medicine, active exercise, dry needling and probably laser therapy are effective, as is acupuncture. The use of a neck collar (brace) is not recommended as a first-line treatment, and its duration of use should be limited. Despite their clinical benefit and the inclusion of many therapies in several high-level scientific guidelines, the level of evidence of the recommendations could be enhanced by new research.
Telework is most commonly performed from a person's home, and thus can accommodate the needs of persons with disabilities ranging from agoraphobia to quadriplegia. Telework is a practice that appears to meet Universal Design criteria, viz., a broad-spectrum design feature that is beneficial to the general population but is particularly helpful to persons with disabilities. However, some disability advocates fear that the practice is unacceptably isolating and compromises community integration and competitive employment. The National Telework Consumer Survey was designed to inform the debate with real primary data on consumers' views. We surveyed a population of persons with disabilities who had differing degrees of connection with a national disability employment network that recruits, trains, and hires prospective workers with disabilities to fulfill assignments the organization has brokered with government agencies and businesses. The survey sample frame consisted of nearly 10,000 individuals who received an e-mail recruitment flier for a 20-minute online survey about all aspects of their experience with Telework. While many write-in comments from ardent advocates expressed gratitude for the opportunity to engage in meaningful employment from home and the flexibility Telework afforded, over 71 percent of those who had actually worked at Telework jobs, primarily in part-time call center positions, expressed dissatisfaction with Telework. This presentation will focus on what respondents found least (and most) satisfying about this type of employment. Because these issues were addressed using open-ended survey questions, the qualitative survey data are particularly rich and provide a unique window into Teleworkers' attitudes and experiences.

Learning Areas:
Conduct evaluation related to programs, research, and other areas of practice
Occupational health and safety
Social and behavioral sciences

Learning Objectives:
• Explain the concept of Telework.
• Describe how Telework can be advantageous to persons with disabilities.
• Describe the advantages and disadvantages of Telework for these consumers.
• Assess the potential efficacy of Telework in improving self-sufficiency for persons with disabilities.

Keywords: Disability, Workplace Stressors

Presenting author's disclosure statement:

Qualified on the content I am responsible for because: I am the co-author of the survey upon which this presentation is based. Currently, I am a tenured Associate of Abt Associates Inc., the social research firm, and Founder and Director of its Center for the Advancement of Rehabilitation and Disability Services. My recent disability employment research involvement includes 10 years of top-level participation in the policy research program and implementation efforts of the Massachusetts Medicaid Infrastructure and Comprehensive Employment Opportunities grant from CMS and its WorkWithoutLimits initiative.

Any relevant financial relationships? No

I agree to comply with the American Public Health Association Conflict of Interest and Commercial Support Guidelines, and to disclose to the participants any off-label or experimental uses of a commercial product or service discussed in my presentation.
Plus Size Evening Dresses

Head into the night in style with one of our plus size evening dresses. Whether it's one of our plus size cocktail dresses offering timeless glamour, a sequin embellished evening gown or a lace dinner dress for all those formal affairs, we've got a collection of new season styles in midi and maxi lengths that will take you to every event effortlessly.

YOURS LONDON Wine Red Bardot Maxi Dress

Evening Dresses For Every Occasion

Elevate your occasion wear wardrobe with our range of plus size evening dresses. Whether you have a wedding, dinner party or a special occasion to attend, find curve-flattering styles that will make you look and feel fabulous throughout the event.

Wedding Guest

Make sure you're wedding guest ready with our range of plus size evening gowns. Ideal for those extra special occasions, our collection is filled with glamorous garments for you to choose from. Think on-trend midi dresses and bold floral prints for the new season, or go for a figure flattering skater shape to make the most of your curves. Whatever your style, we've got your aisle-side look covered.

Plus Size Prom Dresses

Get that prom queen look with our collection of plus size formal dresses. Whether you're looking for something simple-and-chic, or you want to make a statement, we have a range of designs for you to choose from. From sequin embellished maxi dresses to flared skater skirt designs, turn heads on the big night in one of our gorgeous gowns.

Special Occasion Dresses

Dress to impress for that special occasion with a new dress from our latest collection. From floor-sweeping plus size ball gowns to sophisticated maxi dresses and head-turning lace designs, our range is bursting with on-trend styles for you to choose from. From strapless styles to plus size evening dresses with sleeves, find dresses to suit every season in classic, timeless designs.
1. Field of the Invention

The present invention relates to an improved package, and method for making same, for packaging electrical button cells in general, and more particularly concerns an impact-resistant handling, shipping and storage package, especially for electric button cells of the metal-air type, wherein such cells are packaged to maintain the cells in a sealed condition throughout subsequent handling, shipping and storage.

2. Description of the Prior Art

Packaging devices for button cells are known. U.S. Pat. No. 4,015,708 (assigned to the same assignee as the present invention) describes a storage and merchandising package for battery cells in which cells, such as zinc-air cells, are inserted in apertures and adhesively held against a backing material which, in the preferred embodiment, for zinc-air cells, has preferential barrier properties to extend the shelf life of the cells. The package includes an openable cover which surrounds the cells to prevent accidental shorting or physical dislodgment thereof.

While the various packaging and storage devices of the prior art all provide some advantage over previous configurations, none has solved the problem of providing a package for all types of button cells, including zinc-air button cells, in a manner which provides easy access to the button cells by persons lacking in physical dexterity and of providing an impact-resistant package which may be embodied in a variety of packaging configurations.

The aforementioned patent is directed to a button cell package which has a flat first layer of material having a plurality of apertures dimensioned to receive respective button cells therein. For releasably retaining the button cells in the apertures, a second layer of backing material is laminated to the first layer with pressure-sensitive adhesive on the backing material exposed in the areas underlying the apertures. During packaging, the button cells are firmly pressed against the exposed pressure-sensitive adhesive areas on the backing material. To prevent excessive flexing of the backing material, particularly in the apertures or window areas, the first layer is preferably made of a relatively rigid material, such as card stock, to thereby stiffen the backing material. This substantially precludes warpage of the backing material, which tends to separate the exposed adhesive areas from the button cells. Moreover, the first layer and/or the backing material may be formed of a blotter-like material to absorb electrolyte which may leak from the cell. The patent also discloses that the adhesive itself can serve as the barrier member. In that case, the backing material can be made of a gas-permeable material such as another layer of paper board or card stock material similar to the first layer.

While the preferred backing material of the referenced patent is a thin polyester plastic film, such as "Mylar", sold by E. I. duPont de Nemours, the packaging has no resilient layer or member in the laminate, nor does the alternative embodiment provide for a resilient layer or member in the laminate. Thus, when a button cell is pressed into a receiving aperture, the button cell surface area in contact with the adhesive layer is dependent upon the compressibility or resiliency of the backing material and upon the shape of the button cell contact surface.
Thus, inconsistent and unpredictable degrees of adherence result between the button cell and the adhesive layer when the cell is compressed against the relatively incompressible paper board or card stock. This may result in cells inadvertently separating from the adhesive layer and, in the case of metal-air cells, which depend upon good adherence to the adhesive layer to maintain the cell in a sealed condition, may prematurely reduce cell shelf life as the result of oxygen, carbon dioxide or moisture in the air entering the cell, or moisture escaping from the cell, resulting in the possibility of a dead cell when subsequently placed in service by the consumer.

Previous button cell packaging configurations are not readily adaptable for packaging all types of button cells and are generally unacceptable from the viewpoint of cell accessibility and ease of cell removal, or are unacceptable because cells may easily become dislodged and lost during handling or shipping and because they do not adequately protect metal-air cells against inadvertent and premature loss of cell capacity.

The present invention greatly reduces the possibility of a button cell becoming inadvertently dislodged or lost, or, in the case of a metal-air cell, suffering premature loss of capacity. This is accomplished by utilizing a laminated structure which includes a resilient member between a pressure-sensitive adhesive surface layer and a nonresilient base member. When the button cell is pressed onto the adhesive layer, the resilient member serves as a cushion and deforms or yields to the shape of the button cell contact surface to maximize the contact area between the cell contact surface and the adhesive surface layer, thus providing a reproducible and reliable adhesive contact between the cell and the adhesive layer, regardless of the shape of the button cell contact surface. The present invention also provides a basic button cell package which has sufficient impact resistance to withstand rough handling without cell dislodgement, thereby permitting button cells of any type to be commercially packaged in a wide variety of package or housing configurations, including reclosable housing configurations which provide easy access to and removal of the individual cells by persons lacking in physical dexterity.
node-configuration:
  id: ID
  url: URL
  state: State
  state-transition-time: State transition time
  scheduling-state: Scheduling state
  node-agent-version: Node agent version
  node-size: Node size
  last-boot-time: Last boot time
  allocation-time: Allocation time
  running-tasks-count: Running tasks
  total-tasks-run: Total tasks run
  total-tasks-succeeded: Total successful tasks
  internal-ip: Internal IP
  external-ip: External IP
  is-dedicated: Is dedicated
  start-task-execution: Start task execution
  start-task-not-started: Start task execution
  start-task-running: Running for
  start-task-completed: Completed in
  start-time: Start time
  end-time: End time
  run-time: Execution time
  exit-code: Exit code
  retry-count: Retry count
If this were to be the case, a vote in favour of the “Save Our Swiss Gold” campaign would have meant further gold purchases to maintain the 20pc quota. The sales ban would mean gold making up an increasing proportion of the central bank’s balance sheet when it attempts to shrink its euro holdings.
The role of endemic plants in Mauritian traditional medicine - potential therapeutic benefits or placebo effect?

The Mauritian endemic flora has been recorded as being used medicinally for nearly 300 years. Despite the acceptance of these endemic plants among the local population, proper documentation of their therapeutic uses is scarce. This review aims at summarising the documented traditional uses of Mauritian endemic species together with existing scientific data on their alleged bioactivities, with a view to calling for more stringent validation of their ethnomedicinal uses. A comprehensive bibliographic investigation was carried out by analysing published books on ethnopharmacology and international peer-reviewed papers via scientific databases, namely ScienceDirect and PubMed. The keywords "Mauritius endemic plants" and "Mauritius endemic medicinal plants" were used, and articles published from 1980 to 2016 were considered. Of 675 records, 12 articles documenting the ethnomedicinal uses and 22 articles reporting the biological activities of Mauritian endemic plants were retained. Only materials published in English or French were included in the review. Available data on the usage of Mauritian endemic plants in traditional medicine were related to the scientific investigations. We documented 87 taxa of Mauritian endemic plants for their medicinal value. Endemic plants are used either as part of complex herbal formulations or singly, and are prescribed by herbalists to mitigate a myriad of diseases, from metabolic disorders, dermatological pathologies and arthritis to sexually transmissible diseases. However, these species have undergone little consistent evaluation to validate their purported ethnomedicinal claims. As the World Health Organization Traditional Medicine Strategy 2014-2023 emphasises moving traditional medicine into mainstream medicine on an equally trusted footing, the re-evaluation and modernization of Mauritius' cultural heritage become necessary. With a consumer-driven 'return to nature', scientific validation and valorization of the herbal remedies, including their efficacy and safety, are therefore important. This review reveals the scarcity of research validating the efficacy and safety of medicinal endemic plants, and calls for the use of optimised methodologies to investigate the claims of therapeutic effects resulting from the use of these traditional medicines.
China’s best-known billionaire namesakes, Jack Ma and Pony Ma, have entered the ranking of the world’s 20 wealthiest people compiled by Forbes, also becoming the first Chinese people to make the iconic list.

Ma Huateng, the founder and CEO of internet company Tencent, who is known as Pony Ma, was ranked 17th this year with a net worth of $45.3 billion, while Alibaba founder Jack Ma, worth $39 billion, grabbed the 20th spot.

Tencent operates the WeChat messenger application that has become super-popular in China over recent years. Launched in January 2011, WeChat has turned into the largest messenger by monthly users, with over 938 million active users as of the middle of last year. The app allows users not only to send messages and make calls, but to pay bills, order goods and services, transfer money, as well as pay in shops if they have the WeChat payment option.

The popularity of Tencent’s products helped nearly double its share price last year, launching Pony Ma directly into the top 20 richest people across the planet. The market value of the Hong Kong-listed company has reached nearly $535 billion, outstripping Facebook’s market capitalization.

Alipay, launched by Jack Ma’s Alibaba Group in 2004, is WeChat Pay's major rival in China. In a bid to beat the competition, Alipay unveiled a facial recognition payment service in 2017 and added Ripple blockchain protocol to its backend to speed up payment processes.

Alibaba, founded in 1999 by former English teacher Jack Ma, provides e-commerce, internet, AI and technology. Earlier this year, Alibaba's market capitalization surged to $527 billion, making the Hangzhou-based firm one of the most valuable companies in the world.

At the same time, the two Chinese tycoons are ranked as the first and the third wealthiest people in Asia.
On January 22, the armed formations of the Russian Federation violated the ceasefire in the Joint Forces Operation (JFO) area in eastern Ukraine six times. “The enemy used 82mm mortars banned under the Minsk agreements, grenade launchers of different systems, heavy machine guns and other small arms to fire on positions of the Armed Forces of Ukraine. Sniper fire was also recorded,” the press center of the JFO Headquarters reports. In the zone of action of tactical force “East”, Russian-led forces used a hand-held antitank grenade launcher to shell Ukrainian troops near Pavlopil (25km north-west of Mariupol); heavy machine guns, small arms and a sniper rifle outside Talakivka (17km north-east of Mariupol); 82mm mortars, a mounted antitank grenade launcher, a heavy machine gun and small arms in the area of Lebedynske (16km east of Mariupol); and 82mm mortars and a heavy machine gun near Vodiane (94km south of Donetsk). In the zone of action of tactical force “North”, the enemy fired a heavy machine gun on Ukrainian positions near Mayorske (45km north of Donetsk) and an automatic mounted grenade launcher outside Shumy (41km north of Donetsk). One member of the Joint Forces was killed as a result of the enemy shelling on January 22. Today, the Russian occupation troops have already opened fire from small arms on the defenders of Opytne (12km north-west of Donetsk). One Ukrainian soldier was wounded.
What is the fourth biggest value in 9, -5, 0.2, 1, -0.4, 34, 0.079? 0.2 Which is the second smallest value? (a) -1 (b) -605 (c) 19 (d) -14/5 d What is the smallest value in -0.06, 15, -843/10? -843/10 What is the fifth biggest value in -4, 43/1502, 0.5, 3/4, 0.6? -4 Which is the biggest value? (a) 2 (b) -32 (c) 0.4 (d) -1/9 (e) -1 (f) -181 (g) 2/19 a Which is the fourth biggest value? (a) 0.4 (b) -2 (c) -0.3 (d) 3 (e) -15/11362 (f) 13 (g) -1 e Which is the biggest value? (a) 24 (b) -758/27 (c) -13 a Which is the third smallest value? (a) -2/5 (b) 10827 (c) -1 (d) -3/32 d What is the fifth smallest value in 4/9, -462, 6, 5, -4/5, -1/8, 0.8? 0.8 What is the second biggest value in 0.0664, -3/5, -0.3, -4/9, -2, 5, 24? 5 What is the fourth smallest value in -0.2, -821, -448.4, 5? 5 Which is the fifth biggest value? (a) -3 (b) -0.5 (c) -5 (d) 589 (e) 8/49 c What is the biggest value in -6, 5, -0.3, 0, -3/2, 0.7, -86? 5 Which is the fifth biggest value? (a) -22.5963 (b) -2/5 (c) -9 (d) -31 (e) 1/4 d What is the second smallest value in 1, 0.3, -6042, 2/7, 6/13, -3/4? -3/4 Which is the biggest value? (a) -82 (b) -3 (c) 3/8 (d) 42 d What is the third biggest value in 1/6, -2, -0.1, 159, -11, -19, -5? -0.1 What is the biggest value in 0.695, 17, 10, 1? 17 Which is the sixth biggest value? (a) 5 (b) 0.5 (c) -131 (d) 37 (e) -2/9 (f) -3 c What is the third smallest value in 0.1, 46, 1/5, -406, 1/2, -5, -4? -4 What is the sixth smallest value in -3, 2, -4, 7, 12, 3407? 3407 Which is the second biggest value? (a) -0.0359 (b) 8/15 (c) 4 (d) -0.167 (e) -0.4 b Which is the smallest value? (a) 133 (b) -21 (c) -2/53 (d) -39/2 (e) -0.4 (f) -4 b What is the third biggest value in 5, -1, 0.808852, -3/7? -3/7 Which is the biggest value? (a) -1/39436 (b) 5/2 (c) -363 b What is the biggest value in -1/6, -102, -11/7, 0, -2/41? 0 What is the fifth smallest value in -2/9, 0.01, 3, 1/5, -117110? 3 What is the second smallest value in 17091, -1.6, -0.08? -0.08 What is the smallest value in -12/19, 32793, 0.2, 5, 4? -12/19 Which is the second biggest value? (a) -1/17 (b) -9912 (c) -7 c What is the fourth biggest value in -135/7, -2, -1.7, -850, 2? -135/7 Which is the fifth biggest value? (a) 0.5 (b) 3 (c) -0.11 (d) 1 (e) 2/7 (f) 5/1379 (g) 3/2 e What is the smallest value in -1326, -11/10, -6, 5/46? -1326 Which is the third biggest value? (a) 2/3 (b) -0.7 (c) 211756 b What is the third biggest value in -5, 2/187, 0, 486/7, 2/7, -3? 2/187 Which is the smallest value? (a) 1 (b) 0.2 (c) -2/7 (d) -2/51 (e) -2/5 (f) -1504 f Which is the second biggest value? (a) 4 (b) -0.386 (c) 4/13 (d) 1/5 (e) 0.8 e What is the fourth smallest value in -0.5, 3, 12946, 1, -1, -5, -0.4? -0.4 What is the sixth smallest value in -0.5, 0, -5, -4/9, 4, 8/177, 110? 4 What is the seventh smallest value in -1.6, -7, -3, 3, -124, 35, -0.3? 35 What is the biggest value in -287, 5/6, 1938? 1938 Which is the smallest value? (a) -648.4 (b) 0.0721 (c) 0 (d) 1/3 a What is the fifth smallest value in -3/2, 7, 0.2, 1/3, -0.2, 4331? 7 Which is the sixth biggest value? (a) 0.4 (b) -4 (c) 4 (d) 18/2411 (e) -3 (f) -0.48 b What is the fourth biggest value in 83, 0.05, 1/2, 6/121, 4? 0.05 What is the fourth smallest value in -1/30, -0.64, 0.74, -0.6? 0.74 Which is the third biggest value? (a) -1/10 (b) 5 (c) -0.052138 (d) 4 c What is the smallest value in -2, -0.09, 472403/3? -2 Which is the second smallest value? (a) -0.034 (b) 692/77 (c) -6 a What is the second biggest value in 2/27, -1/7, 1890, 0.1, 0.4, -0.5, -2? 0.4 What is the second smallest value in -4, 0.1, -809403, 0.6?
-4 What is the fifth smallest value in -3/7, -6.86, -2, 3, 0.3? 3 Which is the fourth smallest value? (a) 170 (b) -1 (c) -1/4 (d) 192.5 d Which is the fourth smallest value? (a) 2/7 (b) -239 (c) -0.4 (d) -1230 (e) -3 (f) -3/5 f Which is the fourth smallest value? (a) 3524 (b) -4/7 (c) -2 (d) 0.2 (e) 2/11 (f) -3 (g) -4/3 b Which is the fourth biggest value? (a) -0.1 (b) -5/3 (c) 192 (d) 3 (e) 1/3 a What is the biggest value in -2/29, -3/14, 9.2, 1/5? 9.2 Which is the fourth biggest value? (a) 2/13 (b) -2/9 (c) 0 (d) -0.174 (e) -0.003 (f) -2721 d Which is the biggest value? (a) 4/7 (b) 31 (c) 179288 (d) 4 c What is the second smallest value in -2158, -2/3, 0.028347? -2/3 Which is the smallest value? (a) 1/16 (b) 10 (c) -11290 c Which is the second biggest value? (a) 229506 (b) -1 (c) 1 c Which is the smallest value? (a) 2 (b) 1332 (c) -1362 c What is the biggest value in -3/5, 0.12, -0.4, -5/3, -2, 59, -25? 59 What is the fourth biggest value in -8, -2/13, -1/2, -2.162, -3/2, -0.6? -3/2 Which is the second biggest value? (a) 4/7 (b) 0.5 (c) -150 (d) 3 (e) 320 d Which is the third smallest value? (a) -1/894 (b) 2/23649 (c) -3 b Which is the smallest value? (a) 0.01 (b) -315 (c) 275/3 b Which is the biggest value? (a) 1/3 (b) 0.1 (c) 121 (d) 88 c Which is the third biggest value? (a) -0.5 (b) -5/2 (c) 2/11 (d) -1/407 (e) 10 (f) -2 (g) 0.1 g What is the second biggest value in 1344, -3, 0.2, 7/2, -53.5? 7/2 Which is the second biggest value? (a) 4/5 (b) 53/4 (c) 0.05 (d) 64.2 b What is the fifth biggest value in -1.28469, 6/5, 0.2, -3/5, -0.4? -1.28469 What is the fifth biggest value in -1, -1/64, 0.06, -31, -5? -31 Which is the third biggest value? (a) 3 (b) 3/128 (c) 10 (d) 4 (e) 151.2 d Which is the third biggest value? (a) 5 (b) -5/2462 (c) -1324 (d) 0.3 b What is the third biggest value in -38, -2/3, -29/8, -0.05, 18/17? -2/3 Which is the second smallest value? (a) -29.1 (b) 80 (c) 3.7 c Which is the second smallest value? (a) -7 (b) 2/15 (c) 0.2 (d) 0.55978 b What is the seventh smallest value in 23/6, -4, -5, 2/7, -1/47, 14, 5? 14 What is the biggest value in -23, 1/2, 205, -4/5, 42? 205 What is the fourth smallest value in 46/3, 241, -3/4, -8722, -5? 46/3 Which is the second smallest value? (a) -180000 (b) -1 (c) 0 (d) -80 d What is the fourth smallest value in -7/4, -2/3, -1.8, 13, 0.03, 2, -0.3? -0.3 Which is the biggest value? (a) 5/2 (b) 4151 (c) -3/4 (d) 0.2936 (e) 0.2 b What is the fourth smallest value in -1/2, -1/348, -1375, 0.8? 0.8 What is the fifth biggest value in 0.5, 3, 0.2, 2674, -0.1, 32, 19.2? 0.5 What is the smallest value in -41/19, -280, 1/3, -0.6? -280 What is the biggest value in 2/3, 2/15, 43, -2/13, -0.23, 9/8, 0.2? 43 What is the third biggest value in -0.38, -2/15, 0.4, 1340, 4? 0.4 Which is the fourth biggest value? (a) 2 (b) 5 (c) 8 (d) 3830 a Which is the third biggest value? (a) -1.5 (b) 5 (c) 6 (d) -1/6 (e) 4 (f) 0.1 (g) 2/5 e What is the third smallest value in 0.5, -0.14, -0.2, -5, 466, 98? -0.14 Which is the sixth biggest value? (a) 0.5 (b) -50/3 (c) -3 (d) 0.025 (e) -4 (f) 0.49 b What is the third smallest value in -5, 1, 10, -0.2254? 1 Which is the fourth biggest value? (a) 5 (b) -5 (c) 0.7 (d) -112.98 d Which is the biggest value? (a) 482093 (b) 0.03 (c) 0.5 a Which is the sixth biggest value? (a) -2/9 (b) -0.8 (c) 2/55 (d) -3 (e) 2/9 (f) 2/17 (g) -1/8 b What is the second biggest value in -82, 4/3587, -0.7? -0.7 What is the second biggest value in -3/5, 1442, 4, 0.11, 2? 4 Which is the fourth biggest value? 
(a) 1.01 (b) 3 (c) -0.08 (d) 3/23 (e) 2 (f) 0.1 d Which is the fourth smallest value? (a) -0.1 (b) -1/57 (c) -1089 (d) 0.4 (e) 0.5 (f) -0.5 b Which is the seventh biggest value? (a) -2/9 (b) -0.5 (c) -5 (d) -902/5 (e) 5 (f) 1 (g) -51 d Which is the fourth biggest value? (a) -11 (b) 2/3 (c) 0.2 (d) -0.5 (e) 0.0066 (f) -3/7 f Which is the second biggest value? (a) 2 (b) 2372 (c) 0.072 (d) 4 (e) 7 e Which is the smallest value? (a) -1 (b) 26.9 (c) 2/5 (d) 0.5 (e) 0.0362 a Which is the third biggest value? (a) -0.1 (b) -4 (c) 5/4 (d) -0.25 (e) 32 (f) -0.06 (g) -5 f Which is the second smallest value? (a) -62 (b) 0.4 (c) -1/5 (d) -201 a What is the second biggest value in -21, -3/8, 57012? -3/8 What is the fourth smallest value in -0.08, -4, 4876, 11.1? 4876 What is the fifth biggest value in 0.23, -5, -1.67, 16, 2/7? -5 Which is the smallest value? (a)
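Every item above reduces to the same operation: sort the values numerically and take the k-th element from the appropriate end. A minimal Python sketch of such a solver (the helper name kth_value is illustrative, not part of the dataset; Fraction parses both the decimal and the a/b inputs exactly, avoiding floating-point ties):

from fractions import Fraction

def kth_value(values, k, biggest=False):
    # Sort the number strings by exact rational value, then take the
    # k-th entry from the small end (or the large end if biggest=True).
    ordered = sorted(values, key=Fraction, reverse=biggest)
    return ordered[k - 1]

# The first question above: fourth biggest of 9, -5, 0.2, 1, -0.4, 34, 0.079
print(kth_value(["9", "-5", "0.2", "1", "-0.4", "34", "0.079"], 4, biggest=True))  # -> 0.2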
// UIAlertView+AFNetworking.m
//
// Copyright (c) 2013-2015 AFNetworking (http://afnetworking.com)
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
// in the Software without restriction, including without limitation the rights
// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
// copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
// THE SOFTWARE.

#import "UIAlertView+AFNetworking.h"

#if defined(__IPHONE_OS_VERSION_MIN_REQUIRED)
#import "AFURLConnectionOperation.h"

#if __IPHONE_OS_VERSION_MIN_REQUIRED >= 70000
#import "AFURLSessionManager.h"
#endif

static void AFGetAlertViewTitleAndMessageFromError(NSError *error, NSString * __autoreleasing *title, NSString * __autoreleasing *message) {
    if (error.localizedDescription && (error.localizedRecoverySuggestion || error.localizedFailureReason)) {
        *title = error.localizedDescription;
        if (error.localizedRecoverySuggestion) {
            *message = error.localizedRecoverySuggestion;
        } else {
            *message = error.localizedFailureReason;
        }
    } else if (error.localizedDescription) {
        *title = NSLocalizedStringFromTable(@"Error", @"AFNetworking", @"Fallback Error Description");
        *message = error.localizedDescription;
    } else {
        *title = NSLocalizedStringFromTable(@"Error", @"AFNetworking", @"Fallback Error Description");
        *message = [NSString stringWithFormat:NSLocalizedStringFromTable(@"%@ Error: %ld", @"AFNetworking", @"Fallback Error Failure Reason Format"), error.domain, (long)error.code];
    }
}

@implementation UIAlertView (AFNetworking)

#if __IPHONE_OS_VERSION_MIN_REQUIRED >= 70000
+ (void)showAlertViewForTaskWithErrorOnCompletion:(NSURLSessionTask *)task
                                         delegate:(id)delegate
{
    [self showAlertViewForTaskWithErrorOnCompletion:task delegate:delegate cancelButtonTitle:NSLocalizedStringFromTable(@"Dismiss", @"AFNetworking", @"UIAlertView Cancel Button Title") otherButtonTitles:nil, nil];
}

+ (void)showAlertViewForTaskWithErrorOnCompletion:(NSURLSessionTask *)task
                                         delegate:(id)delegate
                                cancelButtonTitle:(NSString *)cancelButtonTitle
                                otherButtonTitles:(NSString *)otherButtonTitles, ... NS_REQUIRES_NIL_TERMINATION
{
    va_list otherTitleList;
    va_start(otherTitleList, otherButtonTitles);
    UIAlertView *alertView = [[UIAlertView alloc] initWithTitle:nil message:nil delegate:delegate cancelButtonTitle:cancelButtonTitle otherButtonTitles:nil, nil];
    for (NSString *otherTitle = otherButtonTitles; otherTitle != nil; otherTitle = va_arg(otherTitleList, NSString *)) {
        [alertView addButtonWithTitle:otherTitle];
    }
    va_end(otherTitleList);

    __block id observer = [[NSNotificationCenter defaultCenter] addObserverForName:AFNetworkingTaskDidCompleteNotification object:task queue:[NSOperationQueue mainQueue] usingBlock:^(NSNotification *notification) {
        NSError *error = notification.userInfo[AFNetworkingTaskDidCompleteErrorKey];
        if (error) {
            NSString *title, *message;
            AFGetAlertViewTitleAndMessageFromError(error, &title, &message);
            [alertView setTitle:title];
            [alertView setMessage:message];
            [alertView show];
        }
        [[NSNotificationCenter defaultCenter] removeObserver:observer name:AFNetworkingTaskDidCompleteNotification object:notification.object];
    }];
}
#endif

#pragma mark -

+ (void)showAlertViewForRequestOperationWithErrorOnCompletion:(AFURLConnectionOperation *)operation
                                                     delegate:(id)delegate
{
    [self showAlertViewForRequestOperationWithErrorOnCompletion:operation delegate:delegate cancelButtonTitle:NSLocalizedStringFromTable(@"Dismiss", @"AFNetworking", @"UIAlertView Cancel Button Title") otherButtonTitles:nil, nil];
}

+ (void)showAlertViewForRequestOperationWithErrorOnCompletion:(AFURLConnectionOperation *)operation
                                                     delegate:(id)delegate
                                            cancelButtonTitle:(NSString *)cancelButtonTitle
                                            otherButtonTitles:(NSString *)otherButtonTitles, ... NS_REQUIRES_NIL_TERMINATION
{
    va_list otherTitleList;
    va_start(otherTitleList, otherButtonTitles);
    UIAlertView *alertView = [[UIAlertView alloc] initWithTitle:nil message:nil delegate:delegate cancelButtonTitle:cancelButtonTitle otherButtonTitles:nil, nil];
    for (NSString *otherTitle = otherButtonTitles; otherTitle != nil; otherTitle = va_arg(otherTitleList, NSString *)) {
        [alertView addButtonWithTitle:otherTitle];
    }
    va_end(otherTitleList);

    __block id observer = [[NSNotificationCenter defaultCenter] addObserverForName:AFNetworkingOperationDidFinishNotification object:operation queue:[NSOperationQueue mainQueue] usingBlock:^(NSNotification *notification) {
        if (notification.object && [notification.object isKindOfClass:[AFURLConnectionOperation class]]) {
            NSError *error = [(AFURLConnectionOperation *)notification.object error];
            if (error) {
                NSString *title, *message;
                AFGetAlertViewTitleAndMessageFromError(error, &title, &message);
                [alertView setTitle:title];
                [alertView setMessage:message];
                [alertView show];
            }
        }
        [[NSNotificationCenter defaultCenter] removeObserver:observer name:AFNetworkingOperationDidFinishNotification object:notification.object];
    }];
}

@end

#endif
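AFGetAlertViewTitleAndMessageFromError implements a three-step fallback: a description accompanied by a recovery suggestion or failure reason yields a specific title and message; a bare description is shown under a generic "Error" title; anything else falls back to the error's domain and code. A rough, framework-free Python sketch of that same precedence (the SimpleError type below is a stand-in for NSError, not part of AFNetworking):

from collections import namedtuple

# Stand-in for the handful of NSError fields the function reads.
SimpleError = namedtuple(
    "SimpleError",
    ["description", "recovery_suggestion", "failure_reason", "domain", "code"],
)

def title_and_message(error):
    if error.description and (error.recovery_suggestion or error.failure_reason):
        # Most specific case: the description becomes the title.
        return error.description, error.recovery_suggestion or error.failure_reason
    if error.description:
        return "Error", error.description
    return "Error", "%s Error: %d" % (error.domain, error.code)

print(title_and_message(SimpleError(None, None, None, "NSURLErrorDomain", -1009)))
# -> ('Error', 'NSURLErrorDomain Error: -1009')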
package com.lynn.demo.lock;

import org.redisson.api.RLock;
import org.redisson.api.RedissonClient;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

import java.util.concurrent.TimeUnit;

@Component
public class RedisLocker implements DistributedLocker {

    private final static String LOCKER_PREFIX = "lock:";

    @Autowired
    RedissonConnector redissonConnector;

    @Override
    public <T> T lock(String resourceName, AquiredLockWorker<T> worker)
            throws InterruptedException, UnableToAquireLockException, Exception {
        // Default to holding the lock for at most 100 seconds.
        return lock(resourceName, worker, 100);
    }

    @Override
    public <T> T lock(String resourceName, AquiredLockWorker<T> worker, int lockTime)
            throws UnableToAquireLockException, Exception {
        RedissonClient redisson = redissonConnector.getClient();
        RLock lock = redisson.getLock(LOCKER_PREFIX + resourceName);
        // Wait up to 100 seconds to acquire the lock; Redisson force-unlocks it
        // after lockTime seconds in case the worker never releases it.
        boolean success = lock.tryLock(100, lockTime, TimeUnit.SECONDS);
        if (success) {
            try {
                return worker.invokeAfterLockAquire();
            } finally {
                lock.unlock();
            }
        }
        throw new UnableToAquireLockException();
    }
}
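For comparison, the same acquire-or-fail pattern can be written in Python with redis-py, whose Lock takes an acquisition wait (blocking_timeout) and an auto-release TTL (timeout) analogous to the two tryLock arguments above. A minimal sketch, assuming a Redis server on localhost; the resource name and worker callable are illustrative:

import redis

r = redis.Redis(host="localhost", port=6379)

def with_lock(resource_name, worker, lock_time=100):
    # Wait up to 100 s to acquire; Redis auto-releases after lock_time s,
    # mirroring tryLock(100, lockTime, TimeUnit.SECONDS) in the Java code.
    lock = r.lock("lock:" + resource_name, timeout=lock_time, blocking_timeout=100)
    if not lock.acquire():
        raise RuntimeError("unable to acquire lock: " + resource_name)
    try:
        return worker()
    finally:
        lock.release()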
// ***************************************************************** -*- C++ -*- /* * Copyright (C) 2004-2012 Andreas Huggel <[email protected]> * * This program is part of the Exiv2 distribution. * * This program is free software; you can redistribute it and/or * modify it under the terms of the GNU General Public License * as published by the Free Software Foundation; either version 2 * of the License, or (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License * along with this program; if not, write to the Free Software * Foundation, Inc., 51 Franklin Street, 5th Floor, Boston, MA 02110-1301 USA. */ /* File: actions.cpp Version: $Rev: 2681 $ Author(s): Andreas Huggel (ahu) <[email protected]> History: 08-Dec-03, ahu: created 30-Apr-06, Roger Larsson: Print filename if processing multiple files */ // ***************************************************************************** #include "rcsid_int.hpp" EXIV2_RCSID("@(#) $Id: actions.cpp 2681 2012-03-22 15:19:35Z ahuggel $") // ***************************************************************************** // included header files #ifdef _MSC_VER # include "exv_msvc.h" #else # include "exv_conf.h" #endif #ifndef EXV_HAVE_TIMEGM # include "timegm.h" #endif #include "actions.hpp" #include "exiv2app.hpp" #include "image.hpp" #include "jpgimage.hpp" #include "xmpsidecar.hpp" #include "utils.hpp" #include "types.hpp" #include "exif.hpp" #include "easyaccess.hpp" #include "iptc.hpp" #include "xmp.hpp" #include "preview.hpp" #include "futils.hpp" #include "i18n.h" // NLS support. // + standard includes #include <string> #include <iostream> #include <iomanip> #include <fstream> #include <sstream> #include <cstring> #include <cstdio> #include <ctime> #include <cmath> #include <cassert> #include <sys/types.h> // for stat() #include <sys/stat.h> // for stat() #ifdef EXV_HAVE_UNISTD_H # include <unistd.h> // for stat() #endif #ifdef _MSC_VER # include <sys/utime.h> #else # include <utime.h> #endif // ***************************************************************************** // local declarations namespace { //! Helper class to set the timestamp of a file to that of another file class Timestamp { public: //! C'tor Timestamp() : actime_(0), modtime_(0) {} //! Read the timestamp of a file int read(const std::string& path); //! Read the timestamp from a broken-down time in buffer \em tm. int read(struct tm* tm); //! Set the timestamp of a file int touch(const std::string& path); private: time_t actime_; time_t modtime_; }; /*! @brief Convert a string "YYYY:MM:DD HH:MI:SS" to a struct tm type, returns 0 if successful */ int str2Tm(const std::string& timeStr, struct tm* tm); //! Convert a localtime to a string "YYYY:MM:DD HH:MI:SS", "" on error std::string time2Str(time_t time); //! Convert a tm structure to a string "YYYY:MM:DD HH:MI:SS", "" on error std::string tm2Str(const struct tm* tm); /*! @brief Copy metadata from source to target according to Params::copyXyz @param source Source file path @param target Target file path. An *.exv file is created if target doesn't exist. @param targetType Image type for the target image in case it needs to be created. @param preserve Indicates if existing metadata in the target file should be kept. 
@return 0 if successful, else an error code */ int metacopy(const std::string& source, const std::string& target, int targetType, bool preserve); /*! @brief Rename a file according to a timestamp value. @param path The original file path. Contains the new path on exit. @param tm Pointer to a buffer with the broken-down time to rename the file to. @return 0 if successful, -1 if the file was skipped, 1 on error. */ int renameFile(std::string& path, const struct tm* tm); /*! @brief Make a file path from the current file path, destination directory (if any) and the filename extension passed in. @param path Path of the existing file @param ext New filename extension (incl. the dot '.' if required) @return 0 if successful, 1 if the new file exists and the user chose not to overwrite it. */ std::string newFilePath(const std::string& path, const std::string& ext); /*! @brief Check if file \em path exists and whether it should be overwritten. Ask user if necessary. Return 1 if the file exists and shouldn't be overwritten, else 0. */ int dontOverwrite(const std::string& path); } // ***************************************************************************** // class member definitions namespace Action { Task::~Task() { } Task::AutoPtr Task::clone() const { return AutoPtr(clone_()); } TaskFactory* TaskFactory::instance_ = 0; TaskFactory& TaskFactory::instance() { if (0 == instance_) { instance_ = new TaskFactory; } return *instance_; } // TaskFactory::instance void TaskFactory::cleanup() { if (instance_ != 0) { Registry::iterator e = registry_.end(); for (Registry::iterator i = registry_.begin(); i != e; ++i) { delete i->second; } delete instance_; instance_ = 0; } } //TaskFactory::cleanup void TaskFactory::registerTask(TaskType type, Task::AutoPtr task) { Registry::iterator i = registry_.find(type); if (i != registry_.end()) { delete i->second; } registry_[type] = task.release(); } // TaskFactory::registerTask TaskFactory::TaskFactory() { // Register a prototype of each known task registerTask(adjust, Task::AutoPtr(new Adjust)); registerTask(print, Task::AutoPtr(new Print)); registerTask(rename, Task::AutoPtr(new Rename)); registerTask(erase, Task::AutoPtr(new Erase)); registerTask(extract, Task::AutoPtr(new Extract)); registerTask(insert, Task::AutoPtr(new Insert)); registerTask(modify, Task::AutoPtr(new Modify)); registerTask(fixiso, Task::AutoPtr(new FixIso)); registerTask(fixcom, Task::AutoPtr(new FixCom)); } // TaskFactory c'tor Task::AutoPtr TaskFactory::create(TaskType type) { Registry::const_iterator i = registry_.find(type); if (i != registry_.end() && i->second != 0) { Task* t = i->second; return t->clone(); } return Task::AutoPtr(0); } // TaskFactory::create Print::~Print() { } int Print::run(const std::string& path) try { path_ = path; int rc = 0; switch (Params::instance().printMode_) { case Params::pmSummary: rc = printSummary(); break; case Params::pmList: rc = printList(); break; case Params::pmComment: rc = printComment(); break; case Params::pmPreview: rc = printPreviewList(); break; } return rc; } catch(const Exiv2::AnyError& e) { std::cerr << "Exiv2 exception in print action for file " << path << ":\n" << e << "\n"; return 1; } // Print::run int Print::printSummary() { if (!Exiv2::fileExists(path_, true)) { std::cerr << path_ << ": " << _("Failed to open the file\n"); return -1; } Exiv2::Image::AutoPtr image = Exiv2::ImageFactory::open(path_); assert(image.get() != 0); image->readMetadata(); Exiv2::ExifData& exifData = image->exifData(); align_ = 16; // Filename 
printLabel(_("File name")); std::cout << path_ << std::endl; // Filesize struct stat buf; if (0 == stat(path_.c_str(), &buf)) { printLabel(_("File size")); std::cout << buf.st_size << " " << _("Bytes") << std::endl; } // MIME type printLabel(_("MIME type")); std::cout << image->mimeType() << std::endl; // Image size printLabel(_("Image size")); std::cout << image->pixelWidth() << " x " << image->pixelHeight() << std::endl; if (exifData.empty()) { std::cerr << path_ << ": " << _("No Exif data found in the file\n"); return -3; } // Camera make printTag(exifData, "Exif.Image.Make", _("Camera make")); // Camera model printTag(exifData, "Exif.Image.Model", _("Camera model")); // Image Timestamp printTag(exifData, "Exif.Photo.DateTimeOriginal", _("Image timestamp")); // Image number // Todo: Image number for cameras other than Canon printTag(exifData, "Exif.Canon.FileNumber", _("Image number")); // Exposure time // From ExposureTime, failing that, try ShutterSpeedValue bool done = false; printLabel(_("Exposure time")); if (!done) { done = 0 != printTag(exifData, "Exif.Photo.ExposureTime"); } if (!done) { done = 0 != printTag(exifData, "Exif.Photo.ShutterSpeedValue"); } std::cout << std::endl; // Aperture // Get if from FNumber and, failing that, try ApertureValue done = false; printLabel(_("Aperture")); if (!done) { done = 0 != printTag(exifData, "Exif.Photo.FNumber"); } if (!done) { done = 0 != printTag(exifData, "Exif.Photo.ApertureValue"); } std::cout << std::endl; // Exposure bias printTag(exifData, "Exif.Photo.ExposureBiasValue", _("Exposure bias")); // Flash printTag(exifData, "Exif.Photo.Flash", _("Flash")); // Flash bias printTag(exifData, Exiv2::flashBias, _("Flash bias")); // Actual focal length and 35 mm equivalent // Todo: Calculate 35 mm equivalent a la jhead Exiv2::ExifData::const_iterator md; printLabel(_("Focal length")); if (1 == printTag(exifData, "Exif.Photo.FocalLength")) { md = exifData.findKey( Exiv2::ExifKey("Exif.Photo.FocalLengthIn35mmFilm")); if (md != exifData.end()) { std::cout << " ("<< _("35 mm equivalent") << ": " << md->print(&exifData) << ")"; } } else { printTag(exifData, "Exif.Canon.FocalLength"); } std::cout << std::endl; // Subject distance printLabel(_("Subject distance")); done = false; if (!done) { done = 0 != printTag(exifData, "Exif.Photo.SubjectDistance"); } if (!done) { done = 0 != printTag(exifData, "Exif.CanonSi.SubjectDistance"); } std::cout << std::endl; // ISO speed printTag(exifData, Exiv2::isoSpeed, _("ISO speed")); // Exposure mode printTag(exifData, Exiv2::exposureMode, _("Exposure mode")); // Metering mode printTag(exifData, "Exif.Photo.MeteringMode", _("Metering mode")); // Macro mode printTag(exifData, Exiv2::macroMode, _("Macro mode")); // Image quality setting (compression) printTag(exifData, Exiv2::imageQuality, _("Image quality")); // Exif Resolution printLabel(_("Exif Resolution")); long xdim = 0; long ydim = 0; if (image->mimeType() == "image/tiff") { xdim = image->pixelWidth(); ydim = image->pixelHeight(); } else { md = exifData.findKey(Exiv2::ExifKey("Exif.Image.ImageWidth")); if (md == exifData.end()) { md = exifData.findKey(Exiv2::ExifKey("Exif.Photo.PixelXDimension")); } if (md != exifData.end() && md->count() > 0) { xdim = md->toLong(); } md = exifData.findKey(Exiv2::ExifKey("Exif.Image.ImageLength")); if (md == exifData.end()) { md = exifData.findKey(Exiv2::ExifKey("Exif.Photo.PixelYDimension")); } if (md != exifData.end() && md->count() > 0) { ydim = md->toLong(); } } if (xdim != 0 && ydim != 0) { std::cout << xdim << " x " << 
ydim; } std::cout << std::endl; // White balance printTag(exifData, Exiv2::whiteBalance, _("White balance")); // Thumbnail printLabel(_("Thumbnail")); Exiv2::ExifThumbC exifThumb(exifData); std::string thumbExt = exifThumb.extension(); if (thumbExt.empty()) { std::cout << _("None"); } else { Exiv2::DataBuf buf = exifThumb.copy(); if (buf.size_ == 0) { std::cout << _("None"); } else { std::cout << exifThumb.mimeType() << ", " << buf.size_ << " " << _("Bytes"); } } std::cout << std::endl; // Copyright printTag(exifData, "Exif.Image.Copyright", _("Copyright")); // Exif Comment printTag(exifData, "Exif.Photo.UserComment", _("Exif comment")); std::cout << std::endl; return 0; } // Print::printSummary void Print::printLabel(const std::string& label) const { std::cout << std::setfill(' ') << std::left; if (Params::instance().files_.size() > 1) { std::cout << std::setw(20) << path_ << " "; } std::cout << std::setw(align_) << label << ": "; } int Print::printTag(const Exiv2::ExifData& exifData, const std::string& key, const std::string& label) const { int rc = 0; if (!label.empty()) { printLabel(label); } Exiv2::ExifKey ek(key); Exiv2::ExifData::const_iterator md = exifData.findKey(ek); if (md != exifData.end()) { md->write(std::cout, &exifData); rc = 1; } if (!label.empty()) std::cout << std::endl; return rc; } // Print::printTag int Print::printTag(const Exiv2::ExifData& exifData, EasyAccessFct easyAccessFct, const std::string& label) const { int rc = 0; if (!label.empty()) { printLabel(label); } Exiv2::ExifData::const_iterator md = easyAccessFct(exifData); if (md != exifData.end()) { md->write(std::cout, &exifData); rc = 1; } if (!label.empty()) std::cout << std::endl; return rc; } // Print::printTag int Print::printList() { if (!Exiv2::fileExists(path_, true)) { std::cerr << path_ << ": " << _("Failed to open the file\n"); return -1; } Exiv2::Image::AutoPtr image = Exiv2::ImageFactory::open(path_); assert(image.get() != 0); image->readMetadata(); // Set defaults for metadata types and data columns if (Params::instance().printTags_ == Exiv2::mdNone) { Params::instance().printTags_ = Exiv2::mdExif | Exiv2::mdIptc | Exiv2::mdXmp; } if (Params::instance().printItems_ == 0) { Params::instance().printItems_ = Params::prKey | Params::prType | Params::prCount | Params::prTrans; } return printMetadata(image.get()); } // Print::printList int Print::printMetadata(const Exiv2::Image* image) { int rc = 0; if (Params::instance().printTags_ & Exiv2::mdExif) { const Exiv2::ExifData& exifData = image->exifData(); for (Exiv2::ExifData::const_iterator md = exifData.begin(); md != exifData.end(); ++md) { printMetadatum(*md, image); } if (exifData.empty()) { if (Params::instance().verbose_) { std::cerr << path_ << ": " << _("No Exif data found in the file\n"); } rc = -3; } } if (Params::instance().printTags_ & Exiv2::mdIptc) { const Exiv2::IptcData& iptcData = image->iptcData(); for (Exiv2::IptcData::const_iterator md = iptcData.begin(); md != iptcData.end(); ++md) { printMetadatum(*md, image); } if (iptcData.empty()) { if (Params::instance().verbose_) { std::cerr << path_ << ": " << _("No IPTC data found in the file\n"); } rc = -3; } } if (Params::instance().printTags_ & Exiv2::mdXmp) { const Exiv2::XmpData& xmpData = image->xmpData(); for (Exiv2::XmpData::const_iterator md = xmpData.begin(); md != xmpData.end(); ++md) { printMetadatum(*md, image); } if (xmpData.empty()) { if (Params::instance().verbose_) { std::cerr << path_ << ": " << _("No XMP data found in the file\n"); } rc = -3; } } return rc; } // 
Print::printMetadata bool Print::grepTag(const std::string& key) { if (Params::instance().keys_.empty()) return true; for (Params::Keys::const_iterator k = Params::instance().keys_.begin(); k != Params::instance().keys_.end(); ++k) { if (*k == key) return true; } return false; } void Print::printMetadatum(const Exiv2::Metadatum& md, const Exiv2::Image* pImage) { if (!grepTag(md.key())) return; if ( Params::instance().unknown_ && md.tagName().substr(0, 2) == "0x") { return; } bool const manyFiles = Params::instance().files_.size() > 1; if (manyFiles) { std::cout << std::setfill(' ') << std::left << std::setw(20) << path_ << " "; } bool first = true; if (Params::instance().printItems_ & Params::prTag) { if (!first) std::cout << " "; first = false; std::cout << "0x" << std::setw(4) << std::setfill('0') << std::right << std::hex << md.tag(); } if (Params::instance().printItems_ & Params::prGroup) { if (!first) std::cout << " "; first = false; std::cout << std::setw(12) << std::setfill(' ') << std::left << md.groupName(); } if (Params::instance().printItems_ & Params::prKey) { if (!first) std::cout << " "; first = false; std::cout << std::setfill(' ') << std::left << std::setw(44) << md.key(); } if (Params::instance().printItems_ & Params::prName) { if (!first) std::cout << " "; first = false; std::cout << std::setw(27) << std::setfill(' ') << std::left << md.tagName(); } if (Params::instance().printItems_ & Params::prLabel) { if (!first) std::cout << " "; first = false; std::cout << std::setw(30) << std::setfill(' ') << std::left << md.tagLabel(); } if (Params::instance().printItems_ & Params::prType) { if (!first) std::cout << " "; first = false; std::cout << std::setw(9) << std::setfill(' ') << std::left; const char* tn = md.typeName(); if (tn) { std::cout << tn; } else { std::ostringstream os; os << "0x" << std::setw(4) << std::setfill('0') << std::hex << md.typeId(); std::cout << os.str(); } } if (Params::instance().printItems_ & Params::prCount) { if (!first) std::cout << " "; first = false; std::cout << std::dec << std::setw(3) << std::setfill(' ') << std::right << md.count(); } if (Params::instance().printItems_ & Params::prSize) { if (!first) std::cout << " "; first = false; std::cout << std::dec << std::setw(3) << std::setfill(' ') << std::right << md.size(); } if (Params::instance().printItems_ & Params::prValue) { if (!first) std::cout << " "; first = false; if ( Params::instance().binary_ && ( md.typeId() == Exiv2::undefined || md.typeId() == Exiv2::unsignedByte || md.typeId() == Exiv2::signedByte) && md.size() > 128) { std::cout << _("(Binary value suppressed)") << std::endl; return; } bool done = false; if (0 == strcmp(md.key().c_str(), "Exif.Photo.UserComment")) { const Exiv2::CommentValue* pcv = dynamic_cast<const Exiv2::CommentValue*>(&md.value()); if (pcv) { Exiv2::CommentValue::CharsetId csId = pcv->charsetId(); if (csId != Exiv2::CommentValue::undefined) { std::cout << "charset=\"" << Exiv2::CommentValue::CharsetInfo::name(csId) << "\" "; } std::cout << pcv->comment(Params::instance().charset_.c_str()); done = true; } } if (!done) std::cout << std::dec << md.value(); } if (Params::instance().printItems_ & Params::prTrans) { if (!first) std::cout << " "; first = false; if ( Params::instance().binary_ && ( md.typeId() == Exiv2::undefined || md.typeId() == Exiv2::unsignedByte || md.typeId() == Exiv2::signedByte) && md.size() > 128) { std::cout << _("(Binary value suppressed)") << std::endl; return; } bool done = false; if (0 == strcmp(md.key().c_str(), 
"Exif.Photo.UserComment")) { const Exiv2::CommentValue* pcv = dynamic_cast<const Exiv2::CommentValue*>(&md.value()); if (pcv) { std::cout << pcv->comment(Params::instance().charset_.c_str()); done = true; } } if (!done) std::cout << std::dec << md.print(&pImage->exifData()); } if (Params::instance().printItems_ & Params::prHex) { if (!first) std::cout << std::endl; first = false; if ( Params::instance().binary_ && ( md.typeId() == Exiv2::undefined || md.typeId() == Exiv2::unsignedByte || md.typeId() == Exiv2::signedByte) && md.size() > 128) { std::cout << _("(Binary value suppressed)") << std::endl; return; } Exiv2::DataBuf buf(md.size()); md.copy(buf.pData_, pImage->byteOrder()); Exiv2::hexdump(std::cout, buf.pData_, buf.size_); } std::cout << std::endl; } // Print::printMetadatum int Print::printComment() { if (!Exiv2::fileExists(path_, true)) { std::cerr << path_ << ": " << _("Failed to open the file\n"); return -1; } Exiv2::Image::AutoPtr image = Exiv2::ImageFactory::open(path_); assert(image.get() != 0); image->readMetadata(); if (Params::instance().verbose_) { std::cout << _("JPEG comment") << ": "; } std::cout << image->comment() << std::endl; return 0; } // Print::printComment int Print::printPreviewList() { if (!Exiv2::fileExists(path_, true)) { std::cerr << path_ << ": " << _("Failed to open the file\n"); return -1; } Exiv2::Image::AutoPtr image = Exiv2::ImageFactory::open(path_); assert(image.get() != 0); image->readMetadata(); bool const manyFiles = Params::instance().files_.size() > 1; int cnt = 0; Exiv2::PreviewManager pm(*image); Exiv2::PreviewPropertiesList list = pm.getPreviewProperties(); for (Exiv2::PreviewPropertiesList::const_iterator pos = list.begin(); pos != list.end(); ++pos) { if (manyFiles) { std::cout << std::setfill(' ') << std::left << std::setw(20) << path_ << " "; } std::cout << _("Preview") << " " << ++cnt << ": " << pos->mimeType_ << ", "; if (pos->width_ != 0 && pos->height_ != 0) { std::cout << pos->width_ << "x" << pos->height_ << " " << _("pixels") << ", "; } std::cout << pos->size_ << " " << _("bytes") << "\n"; } return 0; } // Print::printPreviewList Print::AutoPtr Print::clone() const { return AutoPtr(clone_()); } Print* Print::clone_() const { return new Print(*this); } Rename::~Rename() { } int Rename::run(const std::string& path) { try { if (!Exiv2::fileExists(path, true)) { std::cerr << path << ": " << _("Failed to open the file\n"); return -1; } Timestamp ts; if (Params::instance().preserve_) { ts.read(path); } Exiv2::Image::AutoPtr image = Exiv2::ImageFactory::open(path); assert(image.get() != 0); image->readMetadata(); Exiv2::ExifData& exifData = image->exifData(); if (exifData.empty()) { std::cerr << path << ": " << _("No Exif data found in the file\n"); return -3; } Exiv2::ExifKey key("Exif.Photo.DateTimeOriginal"); Exiv2::ExifData::iterator md = exifData.findKey(key); if (md == exifData.end()) { key = Exiv2::ExifKey("Exif.Image.DateTime"); md = exifData.findKey(key); } if (md == exifData.end()) { std::cerr << _("Neither tag") << " `Exif.Photo.DateTimeOriginal' " << _("nor") << " `Exif.Image.DateTime' " << _("found in the file") << " " << path << "\n"; return 1; } std::string v = md->toString(); if (v.length() == 0 || v[0] == ' ') { std::cerr << _("Image file creation timestamp not set in the file") << " " << path << "\n"; return 1; } struct tm tm; if (str2Tm(v, &tm) != 0) { std::cerr << _("Failed to parse timestamp") << " `" << v << "' " << _("in the file") << " " << path << "\n"; return 1; } if ( Params::instance().timestamp_ || 
Params::instance().timestampOnly_) { ts.read(&tm); } int rc = 0; std::string newPath = path; if (Params::instance().timestampOnly_) { if (Params::instance().verbose_) { std::cout << _("Updating timestamp to") << " " << v << std::endl; } } else { rc = renameFile(newPath, &tm); if (rc == -1) return 0; // skip } if ( 0 == rc && ( Params::instance().preserve_ || Params::instance().timestamp_ || Params::instance().timestampOnly_)) { ts.touch(newPath); } return rc; } catch(const Exiv2::AnyError& e) { std::cerr << "Exiv2 exception in rename action for file " << path << ":\n" << e << "\n"; return 1; }} // Rename::run Rename::AutoPtr Rename::clone() const { return AutoPtr(clone_()); } Rename* Rename::clone_() const { return new Rename(*this); } Erase::~Erase() { } int Erase::run(const std::string& path) try { path_ = path; if (!Exiv2::fileExists(path_, true)) { std::cerr << path_ << ": " << _("Failed to open the file\n"); return -1; } Timestamp ts; if (Params::instance().preserve_) { ts.read(path); } Exiv2::Image::AutoPtr image = Exiv2::ImageFactory::open(path_); assert(image.get() != 0); image->readMetadata(); // Thumbnail must be before Exif int rc = 0; if (Params::instance().target_ & Params::ctThumb) { rc = eraseThumbnail(image.get()); } if (0 == rc && Params::instance().target_ & Params::ctExif) { rc = eraseExifData(image.get()); } if (0 == rc && Params::instance().target_ & Params::ctIptc) { rc = eraseIptcData(image.get()); } if (0 == rc && Params::instance().target_ & Params::ctComment) { rc = eraseComment(image.get()); } if (0 == rc && Params::instance().target_ & Params::ctXmp) { rc = eraseXmpData(image.get()); } if (0 == rc) { image->writeMetadata(); } if (Params::instance().preserve_) { ts.touch(path); } return rc; } catch(const Exiv2::AnyError& e) { std::cerr << "Exiv2 exception in erase action for file " << path << ":\n" << e << "\n"; return 1; } // Erase::run int Erase::eraseThumbnail(Exiv2::Image* image) const { Exiv2::ExifThumb exifThumb(image->exifData()); std::string thumbExt = exifThumb.extension(); if (thumbExt.empty()) { return 0; } exifThumb.erase(); if (Params::instance().verbose_) { std::cout << _("Erasing thumbnail data") << std::endl; } return 0; } int Erase::eraseExifData(Exiv2::Image* image) const { if (Params::instance().verbose_ && image->exifData().count() > 0) { std::cout << _("Erasing Exif data from the file") << std::endl; } image->clearExifData(); return 0; } int Erase::eraseIptcData(Exiv2::Image* image) const { if (Params::instance().verbose_ && image->iptcData().count() > 0) { std::cout << _("Erasing IPTC data from the file") << std::endl; } image->clearIptcData(); return 0; } int Erase::eraseComment(Exiv2::Image* image) const { if (Params::instance().verbose_ && image->comment().size() > 0) { std::cout << _("Erasing JPEG comment from the file") << std::endl; } image->clearComment(); return 0; } int Erase::eraseXmpData(Exiv2::Image* image) const { if (Params::instance().verbose_ && image->xmpData().count() > 0) { std::cout << _("Erasing XMP data from the file") << std::endl; } image->clearXmpData(); // Quick fix for bug #612 image->clearXmpPacket(); return 0; } Erase::AutoPtr Erase::clone() const { return AutoPtr(clone_()); } Erase* Erase::clone_() const { return new Erase(*this); } Extract::~Extract() { } int Extract::run(const std::string& path) try { path_ = path; int rc = 0; if (Params::instance().target_ & Params::ctThumb) { rc = writeThumbnail(); } if (Params::instance().target_ & Params::ctXmpSidecar) { std::string xmpPath = newFilePath(path_, ".xmp"); if 
(dontOverwrite(xmpPath)) return 0; rc = metacopy(path_, xmpPath, Exiv2::ImageType::xmp, false); } if (Params::instance().target_ & Params::ctPreview) { rc = writePreviews(); } if ( !(Params::instance().target_ & Params::ctXmpSidecar) && !(Params::instance().target_ & Params::ctThumb) && !(Params::instance().target_ & Params::ctPreview)) { std::string exvPath = newFilePath(path_, ".exv"); if (dontOverwrite(exvPath)) return 0; rc = metacopy(path_, exvPath, Exiv2::ImageType::exv, false); } return rc; } catch(const Exiv2::AnyError& e) { std::cerr << "Exiv2 exception in extract action for file " << path << ":\n" << e << "\n"; return 1; } // Extract::run int Extract::writeThumbnail() const { if (!Exiv2::fileExists(path_, true)) { std::cerr << path_ << ": " << _("Failed to open the file\n"); return -1; } Exiv2::Image::AutoPtr image = Exiv2::ImageFactory::open(path_); assert(image.get() != 0); image->readMetadata(); Exiv2::ExifData& exifData = image->exifData(); if (exifData.empty()) { std::cerr << path_ << ": " << _("No Exif data found in the file\n"); return -3; } int rc = 0; Exiv2::ExifThumb exifThumb(exifData); std::string thumbExt = exifThumb.extension(); if (thumbExt.empty()) { std::cerr << path_ << ": " << _("Image does not contain an Exif thumbnail\n"); } else { std::string thumb = newFilePath(path_, "-thumb"); std::string thumbPath = thumb + thumbExt; if (dontOverwrite(thumbPath)) return 0; if (Params::instance().verbose_) { Exiv2::DataBuf buf = exifThumb.copy(); if (buf.size_ != 0) { std::cout << _("Writing thumbnail") << " (" << exifThumb.mimeType() << ", " << buf.size_ << " " << _("Bytes") << ") " << _("to file") << " " << thumbPath << std::endl; } } rc = exifThumb.writeFile(thumb); if (rc == 0) { std::cerr << path_ << ": " << _("Exif data doesn't contain a thumbnail\n"); } } return rc; } // Extract::writeThumbnail int Extract::writePreviews() const { if (!Exiv2::fileExists(path_, true)) { std::cerr << path_ << ": " << _("Failed to open the file\n"); return -1; } Exiv2::Image::AutoPtr image = Exiv2::ImageFactory::open(path_); assert(image.get() != 0); image->readMetadata(); Exiv2::PreviewManager pvMgr(*image); Exiv2::PreviewPropertiesList pvList = pvMgr.getPreviewProperties(); const Params::PreviewNumbers& numbers = Params::instance().previewNumbers_; for (Params::PreviewNumbers::const_iterator n = numbers.begin(); n != numbers.end(); ++n) { if (*n == 0) { // Write all previews for (int num = 0; num < static_cast<int>(pvList.size()); ++num) { writePreviewFile(pvMgr.getPreviewImage(pvList[num]), num + 1); } break; } if (*n > static_cast<int>(pvList.size())) { std::cerr << path_ << ": " << _("Image does not have preview") << " " << *n << "\n"; continue; } writePreviewFile(pvMgr.getPreviewImage(pvList[*n - 1]), *n); } return 0; } // Extract::writePreviews void Extract::writePreviewFile(const Exiv2::PreviewImage& pvImg, int num) const { std::string pvFile = newFilePath(path_, "-preview") + Exiv2::toString(num); std::string pvPath = pvFile + pvImg.extension(); if (dontOverwrite(pvPath)) return; if (Params::instance().verbose_) { std::cout << _("Writing preview") << " " << num << " (" << pvImg.mimeType() << ", "; if (pvImg.width() != 0 && pvImg.height() != 0) { std::cout << pvImg.width() << "x" << pvImg.height() << " " << _("pixels") << ", "; } std::cout << pvImg.size() << " " << _("bytes") << ") " << _("to file") << " " << pvPath << std::endl; } long rc = pvImg.writeFile(pvFile); if (rc == 0) { std::cerr << path_ << ": " << _("Image does not have preview") << " " << num << "\n"; } } // 
Extract::writePreviewFile Extract::AutoPtr Extract::clone() const { return AutoPtr(clone_()); } Extract* Extract::clone_() const { return new Extract(*this); } Insert::~Insert() { } int Insert::run(const std::string& path) try { if (!Exiv2::fileExists(path, true)) { std::cerr << path << ": " << _("Failed to open the file\n"); return -1; } int rc = 0; Timestamp ts; if (Params::instance().preserve_) { ts.read(path); } if (Params::instance().target_ & Params::ctThumb) { rc = insertThumbnail(path); } if ( rc == 0 && ( Params::instance().target_ & Params::ctExif || Params::instance().target_ & Params::ctIptc || Params::instance().target_ & Params::ctComment || Params::instance().target_ & Params::ctXmp)) { std::string suffix = Params::instance().suffix_; if (suffix.empty()) suffix = ".exv"; if (Params::instance().target_ & Params::ctXmpSidecar) suffix = ".xmp"; std::string exvPath = newFilePath(path, suffix); rc = metacopy(exvPath, path, Exiv2::ImageType::none, true); } if (0 == rc && Params::instance().target_ & Params::ctXmpSidecar) { rc = insertXmpPacket(path); } if (Params::instance().preserve_) { ts.touch(path); } return rc; } catch(const Exiv2::AnyError& e) { std::cerr << "Exiv2 exception in insert action for file " << path << ":\n" << e << "\n"; return 1; } // Insert::run int Insert::insertXmpPacket(const std::string& path) const { std::string xmpPath = newFilePath(path, ".xmp"); if (!Exiv2::fileExists(xmpPath, true)) { std::cerr << xmpPath << ": " << _("Failed to open the file\n"); return -1; } if (!Exiv2::fileExists(path, true)) { std::cerr << path << ": " << _("Failed to open the file\n"); return -1; } Exiv2::DataBuf buf = Exiv2::readFile(xmpPath); std::string xmpPacket; xmpPacket.assign(reinterpret_cast<char*>(buf.pData_), buf.size_); Exiv2::Image::AutoPtr image = Exiv2::ImageFactory::open(path); assert(image.get() != 0); image->readMetadata(); image->setXmpPacket(xmpPacket); image->writeMetadata(); return 0; } int Insert::insertThumbnail(const std::string& path) const { std::string thumbPath = newFilePath(path, "-thumb.jpg"); if (!Exiv2::fileExists(thumbPath, true)) { std::cerr << thumbPath << ": " << _("Failed to open the file\n"); return -1; } if (!Exiv2::fileExists(path, true)) { std::cerr << path << ": " << _("Failed to open the file\n"); return -1; } Exiv2::Image::AutoPtr image = Exiv2::ImageFactory::open(path); assert(image.get() != 0); image->readMetadata(); Exiv2::ExifThumb exifThumb(image->exifData()); exifThumb.setJpegThumbnail(thumbPath); image->writeMetadata(); return 0; } // Insert::insertThumbnail Insert::AutoPtr Insert::clone() const { return AutoPtr(clone_()); } Insert* Insert::clone_() const { return new Insert(*this); } Modify::~Modify() { } int Modify::run(const std::string& path) { try { if (!Exiv2::fileExists(path, true)) { std::cerr << path << ": " << _("Failed to open the file\n"); return -1; } Timestamp ts; if (Params::instance().preserve_) { ts.read(path); } Exiv2::Image::AutoPtr image = Exiv2::ImageFactory::open(path); assert(image.get() != 0); image->readMetadata(); int rc = applyCommands(image.get()); // Save both exif and iptc metadata image->writeMetadata(); if (Params::instance().preserve_) { ts.touch(path); } return rc; } catch(const Exiv2::AnyError& e) { std::cerr << "Exiv2 exception in modify action for file " << path << ":\n" << e << "\n"; return 1; } } // Modify::run int Modify::applyCommands(Exiv2::Image* pImage) { if (!Params::instance().jpegComment_.empty()) { if (Params::instance().verbose_) { std::cout << _("Setting JPEG comment") << " '" << 
Params::instance().jpegComment_ << "'" << std::endl; } pImage->setComment(Params::instance().jpegComment_); } // loop through command table and apply each command ModifyCmds& modifyCmds = Params::instance().modifyCmds_; ModifyCmds::const_iterator i = modifyCmds.begin(); ModifyCmds::const_iterator end = modifyCmds.end(); int rc = 0; int ret = 0; for (; i != end; ++i) { switch (i->cmdId_) { case add: ret = addMetadatum(pImage, *i); if (rc == 0) rc = ret; break; case set: ret = setMetadatum(pImage, *i); if (rc == 0) rc = ret; break; case del: delMetadatum(pImage, *i); break; case reg: regNamespace(*i); break; case invalidCmdId: assert(invalidCmdId == i->cmdId_); break; } } return rc; } // Modify::applyCommands int Modify::addMetadatum(Exiv2::Image* pImage, const ModifyCmd& modifyCmd) { if (Params::instance().verbose_) { std::cout << _("Add") << " " << modifyCmd.key_ << " \"" << modifyCmd.value_ << "\" (" << Exiv2::TypeInfo::typeName(modifyCmd.typeId_) << ")" << std::endl; } Exiv2::ExifData& exifData = pImage->exifData(); Exiv2::IptcData& iptcData = pImage->iptcData(); Exiv2::XmpData& xmpData = pImage->xmpData(); Exiv2::Value::AutoPtr value = Exiv2::Value::create(modifyCmd.typeId_); int rc = value->read(modifyCmd.value_); if (0 == rc) { if (modifyCmd.metadataId_ == exif) { exifData.add(Exiv2::ExifKey(modifyCmd.key_), value.get()); } if (modifyCmd.metadataId_ == iptc) { iptcData.add(Exiv2::IptcKey(modifyCmd.key_), value.get()); } if (modifyCmd.metadataId_ == xmp) { xmpData.add(Exiv2::XmpKey(modifyCmd.key_), value.get()); } } else { std::cerr << _("Warning") << ": " << modifyCmd.key_ << ": " << _("Failed to read") << " " << Exiv2::TypeInfo::typeName(value->typeId()) << " " << _("value") << " \"" << modifyCmd.value_ << "\"\n"; } return rc; } // This function looks rather complex because we try to avoid adding an // empty metadatum if reading the value fails int Modify::setMetadatum(Exiv2::Image* pImage, const ModifyCmd& modifyCmd) { if (Params::instance().verbose_) { std::cout << _("Set") << " " << modifyCmd.key_ << " \"" << modifyCmd.value_ << "\" (" << Exiv2::TypeInfo::typeName(modifyCmd.typeId_) << ")" << std::endl; } Exiv2::ExifData& exifData = pImage->exifData(); Exiv2::IptcData& iptcData = pImage->iptcData(); Exiv2::XmpData& xmpData = pImage->xmpData(); Exiv2::Metadatum* metadatum = 0; if (modifyCmd.metadataId_ == exif) { Exiv2::ExifData::iterator pos = exifData.findKey(Exiv2::ExifKey(modifyCmd.key_)); if (pos != exifData.end()) { metadatum = &(*pos); } } if (modifyCmd.metadataId_ == iptc) { Exiv2::IptcData::iterator pos = iptcData.findKey(Exiv2::IptcKey(modifyCmd.key_)); if (pos != iptcData.end()) { metadatum = &(*pos); } } if (modifyCmd.metadataId_ == xmp) { Exiv2::XmpData::iterator pos = xmpData.findKey(Exiv2::XmpKey(modifyCmd.key_)); if (pos != xmpData.end()) { metadatum = &(*pos); } } // If a type was explicitly requested, use it; else // use the current type of the metadatum, if any; // or the default type Exiv2::Value::AutoPtr value; if (metadatum) { value = metadatum->getValue(); } if ( value.get() == 0 || ( modifyCmd.explicitType_ && modifyCmd.typeId_ != value->typeId())) { value = Exiv2::Value::create(modifyCmd.typeId_); } int rc = value->read(modifyCmd.value_); if (0 == rc) { if (metadatum) { metadatum->setValue(value.get()); } else { if (modifyCmd.metadataId_ == exif) { exifData.add(Exiv2::ExifKey(modifyCmd.key_), value.get()); } if (modifyCmd.metadataId_ == iptc) { iptcData.add(Exiv2::IptcKey(modifyCmd.key_), value.get()); } if (modifyCmd.metadataId_ == xmp) { 
xmpData.add(Exiv2::XmpKey(modifyCmd.key_), value.get()); } } } else { std::cerr << _("Warning") << ": " << modifyCmd.key_ << ": " << _("Failed to read") << " " << Exiv2::TypeInfo::typeName(value->typeId()) << " " << _("value") << " \"" << modifyCmd.value_ << "\"\n"; } return rc; } void Modify::delMetadatum(Exiv2::Image* pImage, const ModifyCmd& modifyCmd) { if (Params::instance().verbose_) { std::cout << _("Del") << " " << modifyCmd.key_ << std::endl; } Exiv2::ExifData& exifData = pImage->exifData(); Exiv2::IptcData& iptcData = pImage->iptcData(); Exiv2::XmpData& xmpData = pImage->xmpData(); if (modifyCmd.metadataId_ == exif) { Exiv2::ExifData::iterator pos; Exiv2::ExifKey exifKey = Exiv2::ExifKey(modifyCmd.key_); while((pos = exifData.findKey(exifKey)) != exifData.end()) { exifData.erase(pos); } } if (modifyCmd.metadataId_ == iptc) { Exiv2::IptcData::iterator pos; Exiv2::IptcKey iptcKey = Exiv2::IptcKey(modifyCmd.key_); while((pos = iptcData.findKey(iptcKey)) != iptcData.end()) { iptcData.erase(pos); } } if (modifyCmd.metadataId_ == xmp) { Exiv2::XmpData::iterator pos; Exiv2::XmpKey xmpKey = Exiv2::XmpKey(modifyCmd.key_); while((pos = xmpData.findKey(xmpKey)) != xmpData.end()) { xmpData.erase(pos); } } } void Modify::regNamespace(const ModifyCmd& modifyCmd) { if (Params::instance().verbose_) { std::cout << _("Reg ") << modifyCmd.key_ << "=\"" << modifyCmd.value_ << "\"" << std::endl; } Exiv2::XmpProperties::registerNs(modifyCmd.value_, modifyCmd.key_); } Modify::AutoPtr Modify::clone() const { return AutoPtr(clone_()); } Modify* Modify::clone_() const { return new Modify(*this); } Adjust::~Adjust() { } int Adjust::run(const std::string& path) try { adjustment_ = Params::instance().adjustment_; yearAdjustment_ = Params::instance().yodAdjust_[Params::yodYear].adjustment_; monthAdjustment_ = Params::instance().yodAdjust_[Params::yodMonth].adjustment_; dayAdjustment_ = Params::instance().yodAdjust_[Params::yodDay].adjustment_; if (!Exiv2::fileExists(path, true)) { std::cerr << path << ": " << _("Failed to open the file\n"); return -1; } Timestamp ts; if (Params::instance().preserve_) { ts.read(path); } Exiv2::Image::AutoPtr image = Exiv2::ImageFactory::open(path); assert(image.get() != 0); image->readMetadata(); Exiv2::ExifData& exifData = image->exifData(); if (exifData.empty()) { std::cerr << path << ": " << _("No Exif data found in the file\n"); return -3; } int rc = adjustDateTime(exifData, "Exif.Image.DateTime", path); rc += adjustDateTime(exifData, "Exif.Photo.DateTimeOriginal", path); rc += adjustDateTime(exifData, "Exif.Photo.DateTimeDigitized", path); if (rc) return 1; image->writeMetadata(); if (Params::instance().preserve_) { ts.touch(path); } return rc; } catch(const Exiv2::AnyError& e) { std::cerr << "Exiv2 exception in adjust action for file " << path << ":\n" << e << "\n"; return 1; } // Adjust::run Adjust::AutoPtr Adjust::clone() const { return AutoPtr(clone_()); } Adjust* Adjust::clone_() const { return new Adjust(*this); } int Adjust::adjustDateTime(Exiv2::ExifData& exifData, const std::string& key, const std::string& path) const { Exiv2::ExifKey ek(key); Exiv2::ExifData::iterator md = exifData.findKey(ek); if (md == exifData.end()) { // Key not found. That's ok, we do nothing. 
return 0; } std::string timeStr = md->toString(); if (timeStr == "" || timeStr[0] == ' ') { std::cerr << path << ": " << _("Timestamp of metadatum with key") << " `" << ek << "' " << _("not set\n"); return 1; } if (Params::instance().verbose_) { bool comma = false; std::cout << _("Adjusting") << " `" << ek << "' " << _("by"); if (yearAdjustment_ != 0) { std::cout << (yearAdjustment_ < 0 ? " " : " +") << yearAdjustment_ << " "; if (yearAdjustment_ < -1 || yearAdjustment_ > 1) { std::cout << _("years"); } else { std::cout << _("year"); } comma = true; } if (monthAdjustment_ != 0) { if (comma) std::cout << ","; std::cout << (monthAdjustment_ < 0 ? " " : " +") << monthAdjustment_ << " "; if (monthAdjustment_ < -1 || monthAdjustment_ > 1) { std::cout << _("months"); } else { std::cout << _("month"); } comma = true; } if (dayAdjustment_ != 0) { if (comma) std::cout << ","; std::cout << (dayAdjustment_ < 0 ? " " : " +") << dayAdjustment_ << " "; if (dayAdjustment_ < -1 || dayAdjustment_ > 1) { std::cout << _("days"); } else { std::cout << _("day"); } comma = true; } if (adjustment_ != 0) { if (comma) std::cout << ","; std::cout << " " << adjustment_ << _("s"); } } struct tm tm; if (str2Tm(timeStr, &tm) != 0) { if (Params::instance().verbose_) std::cout << std::endl; std::cerr << path << ": " << _("Failed to parse timestamp") << " `" << timeStr << "'\n"; return 1; } const long monOverflow = (tm.tm_mon + monthAdjustment_) / 12; tm.tm_mon = (tm.tm_mon + monthAdjustment_) % 12; tm.tm_year += yearAdjustment_ + monOverflow; // Let's not create files with non-4-digit years, we can't read them. if (tm.tm_year > 9999 - 1900 || tm.tm_year < 1000 - 1900) { if (Params::instance().verbose_) std::cout << std::endl; std::cerr << path << ": " << _("Can't adjust timestamp by") << " " << yearAdjustment_ + monOverflow << " " << _("years") << "\n"; return 1; } time_t time = mktime(&tm); time += adjustment_ + dayAdjustment_ * 86400; timeStr = time2Str(time); if (Params::instance().verbose_) { std::cout << " " << _("to") << " " << timeStr << std::endl; } md->setValue(timeStr); return 0; } // Adjust::adjustDateTime FixIso::~FixIso() { } int FixIso::run(const std::string& path) { try { if (!Exiv2::fileExists(path, true)) { std::cerr << path << ": " <<_("Failed to open the file\n"); return -1; } Timestamp ts; if (Params::instance().preserve_) { ts.read(path); } Exiv2::Image::AutoPtr image = Exiv2::ImageFactory::open(path); assert(image.get() != 0); image->readMetadata(); Exiv2::ExifData& exifData = image->exifData(); if (exifData.empty()) { std::cerr << path << ": " << _("No Exif data found in the file\n"); return -3; } Exiv2::ExifData::const_iterator md = Exiv2::isoSpeed(exifData); if (md != exifData.end()) { if (strcmp(md->key().c_str(), "Exif.Photo.ISOSpeedRatings") == 0) { if (Params::instance().verbose_) { std::cout << _("Standard Exif ISO tag exists; not modified\n"); } return 0; } // Copy the proprietary tag to the standard place std::ostringstream os; md->write(os, &exifData); if (Params::instance().verbose_) { std::cout << _("Setting Exif ISO value to") << " " << os.str() << "\n"; } exifData["Exif.Photo.ISOSpeedRatings"] = os.str(); } image->writeMetadata(); if (Params::instance().preserve_) { ts.touch(path); } return 0; } catch(const Exiv2::AnyError& e) { std::cerr << "Exiv2 exception in fixiso action for file " << path << ":\n" << e << "\n"; return 1; } } // FixIso::run FixIso::AutoPtr FixIso::clone() const { return AutoPtr(clone_()); } FixIso* FixIso::clone_() const { return new FixIso(*this); } 
FixCom::~FixCom() { } int FixCom::run(const std::string& path) { try { if (!Exiv2::fileExists(path, true)) { std::cerr << path << ": " <<_("Failed to open the file\n"); return -1; } Timestamp ts; if (Params::instance().preserve_) { ts.read(path); } Exiv2::Image::AutoPtr image = Exiv2::ImageFactory::open(path); assert(image.get() != 0); image->readMetadata(); Exiv2::ExifData& exifData = image->exifData(); if (exifData.empty()) { std::cerr << path << ": " << _("No Exif data found in the file\n"); return -3; } Exiv2::ExifData::iterator pos = exifData.findKey(Exiv2::ExifKey("Exif.Photo.UserComment")); if (pos == exifData.end()) { if (Params::instance().verbose_) { std::cout << _("No Exif user comment found") << "\n"; } return 0; } Exiv2::Value::AutoPtr v = pos->getValue(); const Exiv2::CommentValue* pcv = dynamic_cast<const Exiv2::CommentValue*>(v.get()); if (!pcv) { if (Params::instance().verbose_) { std::cout << _("Found Exif user comment with unexpected value type") << "\n"; } return 0; } Exiv2::CommentValue::CharsetId csId = pcv->charsetId(); if (csId != Exiv2::CommentValue::unicode) { if (Params::instance().verbose_) { std::cout << _("No Exif UNICODE user comment found") << "\n"; } return 0; } std::string comment = pcv->comment(Params::instance().charset_.c_str()); if (Params::instance().verbose_) { std::cout << _("Setting Exif UNICODE user comment to") << " \"" << comment << "\"\n"; } comment = std::string("charset=\"") + Exiv2::CommentValue::CharsetInfo::name(csId) + "\" " + comment; // Remove BOM and convert value from source charset to UCS-2, but keep byte order pos->setValue(comment); image->writeMetadata(); if (Params::instance().preserve_) { ts.touch(path); } return 0; } catch(const Exiv2::AnyError& e) { std::cerr << "Exiv2 exception in fixcom action for file " << path << ":\n" << e << "\n"; return 1; } } // FixCom::run FixCom::AutoPtr FixCom::clone() const { return AutoPtr(clone_()); } FixCom* FixCom::clone_() const { return new FixCom(*this); } } // namespace Action // ***************************************************************************** // local definitions namespace { //! @cond IGNORE int Timestamp::read(const std::string& path) { struct stat buf; int rc = stat(path.c_str(), &buf); if (0 == rc) { actime_ = buf.st_atime; modtime_ = buf.st_mtime; } return rc; } int Timestamp::read(struct tm* tm) { int rc = 1; time_t t = mktime(tm); // interpret tm according to current timezone settings if (t != (time_t)-1) { rc = 0; actime_ = t; modtime_ = t; } return rc; } int Timestamp::touch(const std::string& path) { if (0 == actime_) return 1; struct utimbuf buf; buf.actime = actime_; buf.modtime = modtime_; return utime(path.c_str(), &buf); } //! 
@endcond int str2Tm(const std::string& timeStr, struct tm* tm) { if (timeStr.length() == 0 || timeStr[0] == ' ') return 1; if (timeStr.length() < 19) return 2; if ( timeStr[4] != ':' || timeStr[7] != ':' || timeStr[10] != ' ' || timeStr[13] != ':' || timeStr[16] != ':') return 3; if (0 == tm) return 4; std::memset(tm, 0x0, sizeof(struct tm)); tm->tm_isdst = -1; long tmp; if (!Util::strtol(timeStr.substr(0,4).c_str(), tmp)) return 5; tm->tm_year = tmp - 1900; if (!Util::strtol(timeStr.substr(5,2).c_str(), tmp)) return 6; tm->tm_mon = tmp - 1; if (!Util::strtol(timeStr.substr(8,2).c_str(), tmp)) return 7; tm->tm_mday = tmp; if (!Util::strtol(timeStr.substr(11,2).c_str(), tmp)) return 8; tm->tm_hour = tmp; if (!Util::strtol(timeStr.substr(14,2).c_str(), tmp)) return 9; tm->tm_min = tmp; if (!Util::strtol(timeStr.substr(17,2).c_str(), tmp)) return 10; tm->tm_sec = tmp; // Conversions to set remaining fields of the tm structure if (mktime(tm) == (time_t)-1) return 11; return 0; } // str2Tm std::string time2Str(time_t time) { struct tm* tm = localtime(&time); return tm2Str(tm); } // time2Str std::string tm2Str(const struct tm* tm) { if (0 == tm) return ""; std::ostringstream os; os << std::setfill('0') << tm->tm_year + 1900 << ":" << std::setw(2) << tm->tm_mon + 1 << ":" << std::setw(2) << tm->tm_mday << " " << std::setw(2) << tm->tm_hour << ":" << std::setw(2) << tm->tm_min << ":" << std::setw(2) << tm->tm_sec; return os.str(); } // tm2Str int metacopy(const std::string& source, const std::string& target, int targetType, bool preserve) { if (!Exiv2::fileExists(source, true)) { std::cerr << source << ": " << _("Failed to open the file\n"); return -1; } Exiv2::Image::AutoPtr sourceImage = Exiv2::ImageFactory::open(source); assert(sourceImage.get() != 0); sourceImage->readMetadata(); // Apply any modification commands to the source image on-the-fly Action::Modify::applyCommands(sourceImage.get()); Exiv2::Image::AutoPtr targetImage; if (Exiv2::fileExists(target)) { targetImage = Exiv2::ImageFactory::open(target); assert(targetImage.get() != 0); if (preserve) targetImage->readMetadata(); } else { targetImage = Exiv2::ImageFactory::create(targetType, target); assert(targetImage.get() != 0); } if ( Params::instance().target_ & Params::ctExif && !sourceImage->exifData().empty()) { if (Params::instance().verbose_) { std::cout << _("Writing Exif data from") << " " << source << " " << _("to") << " " << target << std::endl; } targetImage->setExifData(sourceImage->exifData()); } if ( Params::instance().target_ & Params::ctIptc && !sourceImage->iptcData().empty()) { if (Params::instance().verbose_) { std::cout << _("Writing IPTC data from") << " " << source << " " << _("to") << " " << target << std::endl; } targetImage->setIptcData(sourceImage->iptcData()); } if ( Params::instance().target_ & Params::ctXmp && !sourceImage->xmpData().empty()) { if (Params::instance().verbose_) { std::cout << _("Writing XMP data from") << " " << source << " " << _("to") << " " << target << std::endl; } // Todo: Should use XMP packet if there are no XMP modification commands targetImage->setXmpData(sourceImage->xmpData()); } if ( Params::instance().target_ & Params::ctComment && !sourceImage->comment().empty()) { if (Params::instance().verbose_) { std::cout << _("Writing JPEG comment from") << " " << source << " " << _("to") << " " << target << std::endl; } targetImage->setComment(sourceImage->comment()); } try { targetImage->writeMetadata(); } catch (const Exiv2::AnyError& e) { std::cerr << target << ": " << _("Could not write 
metadata to file") << ": " << e << "\n"; return 1; } return 0; } // metacopy // Defined outside of the function so that Exiv2::find() can see it struct String { const char* s_; bool operator==(const char* s) const { return 0 == strcmp(s_, s); } }; int renameFile(std::string& newPath, const struct tm* tm) { std::string path = newPath; std::string format = Params::instance().format_; Util::replace(format, ":basename:", Util::basename(path, true)); Util::replace(format, ":dirname:", Util::basename(Util::dirname(path))); Util::replace(format, ":parentname:", Util::basename(Util::dirname(Util::dirname(path)))); const size_t max = 1024; char basename[max]; std::memset(basename, 0x0, max); if (strftime(basename, max, format.c_str(), tm) == 0) { std::cerr << _("Filename format yields empty filename for the file") << " " << path << "\n"; return 1; } newPath = Util::dirname(path) + EXV_SEPERATOR_STR + basename + Util::suffix(path); if ( Util::dirname(newPath) == Util::dirname(path) && Util::basename(newPath) == Util::basename(path)) { if (Params::instance().verbose_) { std::cout << _("This file already has the correct name") << std::endl; } return -1; } bool go = true; int seq = 1; std::string s; Params::FileExistsPolicy fileExistsPolicy = Params::instance().fileExistsPolicy_; while (go) { if (Exiv2::fileExists(newPath)) { switch (fileExistsPolicy) { case Params::overwritePolicy: go = false; break; case Params::renamePolicy: newPath = Util::dirname(path) + EXV_SEPERATOR_STR + basename + "_" + Exiv2::toString(seq++) + Util::suffix(path); break; case Params::askPolicy: std::cout << Params::instance().progname() << ": " << _("File") << " `" << newPath << "' " << _("exists. [O]verwrite, [r]ename or [s]kip?") << " "; std::cin >> s; switch (s[0]) { case 'o': case 'O': go = false; break; case 'r': case 'R': fileExistsPolicy = Params::renamePolicy; newPath = Util::dirname(path) + EXV_SEPERATOR_STR + basename + "_" + Exiv2::toString(seq++) + Util::suffix(path); break; default: // skip return -1; break; } } } else { go = false; } } if (Params::instance().verbose_) { std::cout << _("Renaming file to") << " " << newPath; if (Params::instance().timestamp_) { std::cout << ", " << _("updating timestamp"); } std::cout << std::endl; } // Workaround for MinGW rename which does not overwrite existing files remove(newPath.c_str()); if (std::rename(path.c_str(), newPath.c_str()) == -1) { std::cerr << Params::instance().progname() << ": " << _("Failed to rename") << " " << path << " " << _("to") << " " << newPath << ": " << Exiv2::strError() << "\n"; return 1; } return 0; } // renameFile std::string newFilePath(const std::string& path, const std::string& ext) { std::string directory = Params::instance().directory_; if (directory.empty()) directory = Util::dirname(path); std::string newPath = directory + EXV_SEPERATOR_STR + Util::basename(path, true) + ext; return newPath; } int dontOverwrite(const std::string& path) { if (!Params::instance().force_ && Exiv2::fileExists(path)) { std::cout << Params::instance().progname() << ": " << _("Overwrite") << " `" << path << "'? "; std::string s; std::cin >> s; if (s[0] != 'y' && s[0] != 'Y') return 1; } return 0; } }
“I understand that. The best way to fund my business, I’ve found, is to get a pound from everyone who thinks we will not have a grand prix here. I could fund Donington’s rebuild a thousand times over." "The threat to depart and the FIA's stance that Ferrari is not that important are both charades. Yes, the sport would survive without Ferrari and Ferrari would survive without F1, but both would be poorer if that were the case. All the talk is therefore not to be taken too seriously." 20 comments on F1 links: More on the Ferrari feud If Ferrari opts for the budget cap, how is it going to work with the annual $80 million that Ferrari is supposed to get? BTW Do people that this whole budget cap idea is set up to make sure the teams don’t ask for more money from FOM? I know FIA doesn’t have anything to do with that directly, but obviously Mosley and Ecclestone are looking out for each other. Hello there Keith, love the site, but I think you mean feud. I’m a believer in budget capping, but not two-tier system as the grandest teams have too much interest in how the sport is now. Difficult one for the FIA to manage but by saying those who spend less will have more opportunity to develop and test the car is rather noble, I think, to encourage a better way forward. Not sure how they will pay for the unlimited testing though! Wasn’t that quite expensive? I think a higher cap would be better but a one tier system so everyone knows the teams are working to the same set of rules. I would hope the F1 business model is strong enough to give the teams the lions share of the budget they need to go racing, the rest be made up of private investment and sponsorship. What is the total revenue for F1 (circuit & TV fees etc), surely we the fans could distribute the money well enough and not make too much profit out of it? FIA without ferrari is like Formula 1 without any European or American circuits – this is more like it, not like F1 without English GP, more than that. Mostly McLaran would also quit, after the genius Ron went off due to the bulying. I’d hate F1, not that I love it becoz of Ferrari in the first place (but I fell in love with it becoz of Ferrari, and Schumi). I started watching F1 when Ferrari sucked and didn’t know the history, and was a little confused when MS went to that “crap team” as I though of them at the time. Ferrari has nice cars, did great with Brawn, but I would keep watching F1 without them. Homogenization is what I worry about, not one team leaving, no matter if it’s Ferrari or a real team like Williams ;) F1 without Ferrari or for the likes of it possibly other big teams such as McLaren would be a very sad situation. F1 without a British GP is already looking very very sad. We must remember that all efforts from the FIA should be to make the sports more popular not unpopular. How long will FOTA last at this rate? All the teams need to adopt the budget cap and get on with it and this includes Ferrari, I don’t understand what their problem is, do they doubt their engineers, and think the only way they can win is through vast expenditure? Ferrari are on full tilt at the moment and it seems to me they need some fresh talent to get them out of their current posiition, (i include some PR advisors in that as well). Ferrari have deep history and have always seemed to be a team with pride, but its current incarnation seem to be too whingy, whiney for me. Keith, I know that driver salaries are not included in the cap, but is the cap a starting figure, an investment figure or a total spend? 
IE can race winnings be spent as an additional spend? I am trying to do the maths, and if FOTA can get a higher figure from Bernie then this would make most teams quite profitable, and I think that may attract a different element of “Investors” to the sport. I think that Ferrari is the most upset about the budget cap because their business model is very motorsports (Formula 1) centric, and this would interfere greatly. They would be as competitive as any other team under a budget cap, but they would have to SIGNIFICANTLY decrease the size of their F1 team, and shed maybe 3/4 of their headcount. And unlike BMW, Renault, or Toyota, those people couldnt as easily be absorbed into other parts of their (comparatively small)organization. Just a thought. Well Hello – Here goes Max’s attempt to blow holes through the Fota group – the little teams without any F1 background/history they can put the boot in – and help Max the whipper in power – for a certain person’s sake – according to youre devinaty – dont fall for this ******** – it’s only short term and Ferrari are needed by F1 – only Bernie and Max are trying to devide and conquer as per usual I dont think many people understand Ferrari’s dilemma. They simply wouldn’t be able to comply with this budget cap, as it is now. In the first place they would properly have fire about 500 employees or so, and the same properly goes for Mclaren, surely that can’t be good for the economy. Ferrari and Mclaren also have a few established contracts with their sponsors that are already worth more then 40 million. What do they do with these contracts? Do they just break the contracts? What if the FIA again changes all their rules and regulations for 2011, and they again need the sponsorship money or the extra personnel that they had to fire? And this also highlights the big problem with the FIA, they are just not very consistent, they have been changing the regulation way to often. It s also because of all these regulation changes year after year that the spending is so high in the first place, a perfect example of this would be KERS. The FIA are really placing the big teams in an impossible situation, with the current commitments they have, it would be impossible for them to accept the budget cap. And if they are not part of the budget capped teams, they will be heavily restricted by all sorts of regulations and rules, which properly means they will not stand a fair chance against all the capped teams. That would really be unfair if you consider that a team like Ferrari have always been there through the good and lean years, while new teams can enter and immediately again an unfair advantage over them. If Ferrari somehow manages to go for the budget option, are they then suppose to race with more standards parts? I can easily understand why this system as it stands now could prompt Ferrari to just rather pack-up and leave. It is easy to say that F1 wouldn’t miss Ferrari, but if you look at all the discussions and concern that was evoked when Honda decided to leave, then surely it wouldn’t be a good thing for F1 if another team decided to leave. The fact is, Honda didn’t even nearly have the same prestige, history or fans that Ferrari has. Ferrari properly still have the biggest fanbase or at least one of the biggest fanbases worldwide, and the the Ferrari brand is properly better known then the F1 brand itself worldwide. New teams like USGP, Lola or GP2 teams (with their low budgets and standard components) will not be able to replace a team like Ferrari. 
This also begs the question, why does these new teams need a budget cap, if they have 40 million to enter into the sport, then surely they can enter if they have the money? Does the FIA really have an obligation to secure immediate gratification for them? Perhaps it is time for Ferrari to start building a Lemans car, I am sure the series would welcome a team like Ferrari with open arms, and at least Ferrari also have A1GP already. I am sure there will be heavy discussions on the 6th of May, when FOTA will meet again. — With regards to the driver salaries, they really can’t include them in the budget cap. F1 drivers will properly at average only spend about ten years racing in F1, so they proportionally only have a small amount of time to accumulate their life long earnings. It is a highly specialized job that they train for from very early on, so they are not really too well equipped for anything else. I can see Ferrari have some problems here, but thats their mistake for putting all their eggs in the F1 basket and not really doing too much of other motorsports. As I see it, there is no reason why a Ferrari-backed Arden team couldn’t become the ‘works’ F1 team, and at the same time Ferrari give more backing to Torro Rosso, and maybe bring back a Ferrari-engined Minardi team. That could keep the factory well employed during the race season. Also, as Melanie says, its about time they took Le Mans and ALMS seriously again, and started building prototypes again. They do support the small GT teams around the world, and now have A1GP to look after as well, so its not as if everything depends on F1, no matter what Luca says. McLaren and BMW are not quite in the same situation. Mercedes can scale down its involvement with McLaren, but increase its support for Force India and Brawn GP, I don’t see why that isn’t happening already. McLaren itself looks all set to re-enter the GT racing series, and will be making some exotic road cars too. BMW can revert back to being Sauber, and can maybe help iSport or another new team. Whats all the fuss about FOTA? Grow up, make new connections and GET ON WITH IT! I think Mosley’s comments that F1 can live without Ferrari were just him playing politics again. If we were to take the comments seriously, then yes F1 could survive without Ferrari, it could even survive if all the current teams were replaced by completely new teams with no F1 history, F1 could also survive if it didn’t visit any of its traditional circuits or countries. It could survive all these things but it would loose a lot of popularity. The Formula 1 name will always attract some fans even if the only connection with the past is the name itself. Ecclestone may not think much about keeping the traditional circuits, except Monaco, but he defiantly wants Ferrari in F1, why else would he pay them more than other teams just because of who they are. I am not a Ferrari fan, but I admit that they are the most popular team in F1. Apart from cheering for the underdog such as Brawn GP this season the only teams I support regardless of drivers are Williams and McLaren, if they gave up racing I would still follow F1 but if a rival series was setup and I could only watch one series then my decision would largely based on which one my favourite teams and drivers were competing in.
The present invention relates to a device for manufacturing so-called gobs from glass. Such gobs serve as intermediate products for optical articles, such as lenses.

U.S. Pat. No. 5,762,673 describes a device in which glass balls of defined size are produced by dropping from a molten mass of glass. The glass balls are kept in suspension in a gas stream while at the same time being brought to a specific temperature and a specific viscosity. In a further procedural step the glass gobs are subjected to a pressing procedure, followed by further processing steps.

The present invention focuses on that phase of the known process in which the glass gob is kept in suspension for a certain period by means of a gas stream. During this time the glass balls or gobs can cool off, be heated and/or be kept at a certain temperature. The associated device comprises a membrane of an open-pored material as an essential element. The membrane may be discoid. The disc can be flat or have the shape of a trough corresponding to the shape of the glass gob.

JP-A-H10-139465 describes such membranes. These have the form of a trough-like circular disc which is clamped by its outer circumference in the carrier. The circular disc is relatively thin-walled and comprises an upper and a lower surface. A compressed gas is applied to the lower surface, migrates through the pores of the membrane and exits again at the upper surface of the membrane. Glass drops from a molten mass are applied intermittently to the membrane. Each individual glass drop is suspended for a certain time by the compressed gas exiting from the upper membrane surface, as per procedural requirements.

The known devices are encumbered with disadvantages. A particular disadvantage is that the membrane material exhibits only minimal stability. From this viewpoint a considerable wall thickness of the membrane is preferable, to reduce the risk of breakage. On the other hand, with a given gas pressure, a specific quantity of gas should penetrate through the membrane from bottom to top to guarantee that the gas cushion required to levitate the glass gob develops. To avoid unnecessarily high supply gas pressures, the membrane must therefore be made thin-walled.
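The trade-off in the last paragraph can be made quantitative with Darcy's law for flow through a porous medium; the patent text does not invoke this law, so the formula is offered here only as a standard illustrative approximation:

$$Q = \frac{k\,A\,\Delta p}{\mu\,L},$$

where $Q$ is the volumetric gas flow through the membrane, $k$ the permeability of the open-pored material, $A$ the membrane area, $\Delta p$ the supply pressure applied to the lower surface, $\mu$ the dynamic viscosity of the gas, and $L$ the wall thickness. For a fixed $\Delta p$, the flow that builds the levitating gas cushion scales as $1/L$, while resistance to breakage improves with larger $L$, which is precisely the conflict the text describes.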
Definition of Capillary

Capillary: A tiny blood vessel that connects an arteriole (the smallest division of an artery) with a venule (the smallest division of a vein). Although tiny, the capillary plays an important role in the circulatory system. The walls of capillaries act as semipermeable membranes that permit the exchange of various substances, including fluids and the gases oxygen and carbon dioxide, between the bloodstream and the tissues of the body.
BREAKTHROUGH CANCER RESEARCH

“The results suggest that the biofield treatment administered in the laboratory can induce cell death of brain cancer cells from a distance while at the same time promoting the viability of normal brain cells. Such selective efficacy against cancer cells is the sought-after ‘holy grail’ for novel experimental anti-cancer agents and makes this work all the more exciting.”

– GARRETT L. YOUNT, PH.D.
California Pacific Medical Center Research Institute, California, USA
An expected congressional proposal to overhaul and privatize the Federal Aviation Administration is drawing the concern of local pilots and airport management. They worry that the plan would result in increased fees that would make general aviation flying less accessible.

Congress is required to periodically reauthorize the Federal Aviation Administration. It did so in October, but that authorization will only last until March 31. When the next reauthorization bill is introduced — likely in February — it’s expected that the House of Representatives version will include provisions to privatize the FAA and make changes to how it is funded.

U.S. Rep. Collin Peterson said he expects to see a privatization effort brought up in the House of Representatives next month. “I’m not in favor of it, and the general aviation people — the ones I listen to — are skeptical of it as well,” Peterson said.

Peterson, a pilot himself, often uses his plane when traveling in the 7th Congressional District in Minnesota. “The concern is if they privatize they will charge us any time we fly,” Peterson said, speaking about the potential user fees that could come along with privatization.

The current system already ensures that both commercial and general aviation pay their fair share, according to the Alliance for Aviation Across America, a nonprofit coalition of organizations that support general aviation. “We have always been against user fees,” said Devin Osting, deputy director of the Alliance for Aviation Across America. The FAA is currently funded through a fuel tax, which Osting said is a fair way to distribute the cost. Administering user fees would be a comparatively time-consuming process, he said. “If you buy gas at an airport, you are complying with the law,” Osting said.

Those fees would be a problem for local pilots, according to one airport manager. “Having some user fees in there, like they do in Europe, would make it more cumbersome or expensive for people,” said Joe LaRue, manager of the Elbow Lake Pride of the Prairie Airport. The airport, which hosts about 100 flights per month, focuses on general aviation with mostly hobbyists and weekend pilots, he said.

Privatizing the FAA would make it harder for rural communities, which rely more heavily on general aviation, to have a say on regulations, Osting said. “The key mechanism that guarantees access for rural communities and smaller airports is the congressional component of it,” Osting said.

However, Peterson said it is unlikely, in his opinion, that the FAA will be privatized, because he does not believe such a measure would be approved in the Senate. “I’d be surprised if the privatization survives in the process,” he said. “If it is in the bill, depending on how it is structured, I would vote against it,” Peterson said.

Another important aviation change being considered is a revision of medical certification requirements for pilots, Peterson said. One problem older pilots are facing is that many have lost their medical certificates over technicalities, Peterson said. Fixing this situation is something that has been requested by many of his constituents, he said. “We’ve been pushing on that for quite a while,” he said.

Local pilot Paul Brutlag described the current medical certification process as laborious and expensive for pilots. Legislation passed by the U.S. Senate in December — the Pilot’s Bill of Rights 2 — includes a provision to change the medical requirements for most general aviation pilots.
A similar bill has been introduced, but not voted on, in the House of Representatives. Under the proposal, pilots would be considered healthy to fly as long as they possess a valid state driver’s license, comply with medical requirements for the driver’s license, are transporting less than six passengers and are operating under visual or instrument flight rules. These changes would apply to aircraft with a maximum certified takeoff weight of 6,000 pounds or less.
// // MIKMIDINoteOffCommand.h // MIDI Testbed // // Created by Andrew Madsen on 6/2/13. // Copyright (c) 2013 Mixed In Key. All rights reserved. // #import "MIKMIDIChannelVoiceCommand.h" /** * A MIDI note off message. */ @interface MIKMIDINoteOffCommand : MIKMIDIChannelVoiceCommand /** * The note number for the message. In the range 0-127. */ @property (nonatomic, readonly) NSUInteger note; /** * Velocity of the note off message. In the range 0-127. */ @property (nonatomic, readonly) NSUInteger velocity; @end /** * The mutable counterpart of MIKMIDINoteOffCommand. */ @interface MIKMutableMIDINoteOffCommand : MIKMIDINoteOffCommand @property (nonatomic, strong, readwrite) NSDate *timestamp; @property (nonatomic, readwrite) MIDITimeStamp midiTimestamp; @property (nonatomic, readwrite) UInt8 channel; @property (nonatomic, readwrite) NSUInteger value; @property (nonatomic, readwrite) NSUInteger note; @property (nonatomic, readwrite) NSUInteger velocity; @end
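A minimal usage sketch for the mutable class declared above. The note, velocity, channel, and timestamp values are hypothetical, as is the helper name MakeMiddleCNoteOff; the sketch also assumes that plain alloc/init, inherited from the MIKMIDICommand hierarchy, returns a usable instance, since the header itself declares no initializers.

#import <Foundation/Foundation.h>
#import "MIKMIDINoteOffCommand.h"

// Build a note-off for middle C on channel 0, stamped "now".
// Assumption: -init (inherited) yields a valid, configurable command.
static MIKMutableMIDINoteOffCommand *MakeMiddleCNoteOff(void)
{
    MIKMutableMIDINoteOffCommand *noteOff = [[MIKMutableMIDINoteOffCommand alloc] init];
    noteOff.note = 60;                  // middle C; valid range 0-127
    noteOff.velocity = 64;              // release velocity; valid range 0-127
    noteOff.channel = 0;                // MIDI channels are numbered 0-15
    noteOff.timestamp = [NSDate date];  // when the event should occur
    return noteOff;
}

The immutable/mutable split mirrors Foundation's NSString/NSMutableString convention: code that receives a MIKMIDINoteOffCommand can rely on its note and velocity not changing underneath it, while code that constructs messages uses the mutable subclass.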
SOCOM’s Technology Wish List

Special Operations Command (SOCOM) recently released a Broad Agency Announcement soliciting proposals for several types of technologies that it is looking to develop in cooperation with the private sector. Here at Crucial Point LLC, we are interested in all forms of disruptive technologies, and after looking over what we are calling SOCOM’s ‘wish list’, we believe SOCOM is in the market for disruptive techs as well. Below is a breakdown of what they are looking for. If your company can help in any of these fields (and likely some of you readers work for a company that does), you can check out the BAA here.

Sensors – All types of activity sensors (magnetic, seismic, passive infrared, acoustic, fiber optic and break wire), small radars, through-wall imaging, and just about any other type of sensor you can think of. Sensors are going to be huge in the future with reductions in manpower and “boots on the ground”.

Tactical Exploitation of National Capabilities (TENCAP) – This area concentrates on technologies, processing and capabilities to extend National Technical Means investments and capabilities to the lowest tactical echelon user possible. Includes data infil/exfil, force tracking, SIGINT, HUMINT, GEOINT, MASINT, targeting, and command and control (C2).

Air Droppable, Scatterable Electronic Media – This media will be used to disseminate information and could take the form of a large variety of broadcast electronic media receivers, including miniaturized loudspeakers; entertainment devices; game device technologies; greeting cards; telephony technologies; text messaging; or other media capable of receiving and/or transmitting Internet broadcast or commercial radio frequency signals, or pre-programmed audio/audio-visual data.

Scatterable Media Integrated Radio, Cellular, Web, and MOP/MOE Requirements – The requirement is a user-generated social media radio application powered by the human voice, available on the PC, Mac, Android, iPhone, and Nokia smart phones, that lets users share their thoughts and experiences.

The list of items and ideas that SOCOM is looking for is exhaustive, and I would love to be the one evaluating these products as the proposals come in. SOCOM has always led the way in military acquisition by taking risks on new technologies and implementing disruptive ones in its smaller and more flexible force. These technologies will likely help keep the United States military, specifically its Special Operations Forces, at the tip of the spear in any conflict in this world.
"Kids, back in 2009, your Aunt Robin was the host of a morning show for local New York cable." "And it was on pretty early." "How early?" "Mike?" "wake up!" "But then, everything changed." "Hey." "Hey." "See my show?" "Oh, I meant to watch it." "I just got so busy with the whole "being sound asleep" thing." "It took all night, eight hours down the drain." "Oh, it's fine." "But get this:" "After the broadcast..." "Hi." "Are you Robin?" "Yeah." "I'm Don, your new co-host." "Don was Don Frank, seasoned veteran of no fewer than 39 local morning news teams from all over the country." "The guy was an industry legend." "Oh." "Wow!" "You are so going to hit that." "No, I just think we're going to be great together on the air." "And on the sofa and on the bed and on the coffee table." "All right, all right." "I'm gonna go up on the roof and stand there by myself for five minutes." "Have fun." "And that's exactly what she did." "She just stood there." "All right, kids, I'm gonna level with you." "That's not what she did." "Here's what she did." "All right, all right." "I'm going to go have a cigarette." "What?" "!" "I promised her I'd never tell you this, but once upon a time, your Aunt Robin did enjoy the occasional cigarette, and occasionally that occasional cigarette... was more than just occasional." "I just left something like this in my apartment." "Robin, come on, take it to the roof." "We said no smoking in the apartment after you torched the throw rug doing push-ups." "All right, all right, all right." "Geez." "Yeah, Robin, I mean, God, not only is that a filthy habit, but also, can I bum one?" "Sure." "What?" "!" ""How I Met Your Mother"" " Sync By YesCool " "Kids, your Uncle Marshall definitely doesn't want you to know this, but he also smoked off and on." "It all started when he was 13, on a camping trip in Minnesota." "Come on, Marshall, let's celebrate." "It's summer vacation." "Okay, but just one." "This is my first and last cigarette ever." "And that was the first of many, many "last cigarettes ever."" "That's it." "I am done, I am out." "Last..." "Cigarette ev-arrh!" "So, by that point, I'd heard it all before." "Last cigarette ever." "What are you doing?" "You haven't smoked in six months." "Is this about the McRib?" "It's gone, dude, let it go." "I'm worried about work, okay?" "They just hired a new head of the legal department, and he's going to be letting people go." "So that's why you're worried?" "The new head of the legal department is Arthur Hobbes." "As in "Artillery Arthur"?" "As in your former boss?" "Arthur Hobbes was the meanest boss" "Marshall or anyone else had ever had." "The last time Marshall worked for him, it ended like this." "I quit!" "Yikes." "So does he hold it against you?" "Worse." "I'm sorry, who are you?" "I'm, I'm Marshall Eriksen." "Hmm." "Um, we had a fairly intense screaming match." "No." "Wherein I suggested that you take your head and store it within yourself in a fashion that, while, while space-saving, might limit its exposure to, to sunshine." "Well, that describes 95% of my employees and everyone in my family..." "except for my dog." "He's such a good boy." "Well..." "I'll see you later, uh..." "Randall Wilkerson." "Marshall Eriksen." "Gary Dinkersfield, right." "Great, he doesn't remember you." "Not great." "Arthur Hobbes hating Marshall-- that's no big deal." "He hates everyone." "It's the people he doesn't know that he cuts loose." "He just fired What's-His-Face." "He fired What's- His-Face, Ted, and What's-His-Face was invaluable." 
"Look, I can understand you getting upset, but it's not worth killing yourself over." "Yeah, wait till you get laid off, then kill yourself." "Like What's-His-Face." "Although I guess now it's more like Where's-His-Face." "Look, it was just two cigarettes." "Okay, I can handle two." "As long as I don't have three within 24 hours, then I'm not going to get hooked again." "What's Lily going to say when she finds out you smoked?" "Lily's not going to find out." "I have a system." "Ah, yes, Marshall's system." "Hey, Lil." "You smoked." "Damn it!" "The next morning, your Aunt Robin was thrilled to be finally going on the air with a real pro." "In three, two, one..." "Hi, I'm Robin Scherbatsky." "And I... am Don Frank." "Two teens were arrested late last night for stealing a police cart." "No, I'm sorry, not a police cart, a police car." "Oh." "Aw, screw it." "Brain fart." "Don't you hate those?" "Oh, look at that, the teleprompter's still running." "Something about a woman giving birth on an uptown bus." "Well, no point in jumping in halfway." "I'll just wait till it's done." "Uh... and she cut the cord with a Metropass." "We'll be right back." "And we're clear." "What the hell was that?" "Don, you said "brain fart."" "Look, Robin, you seem like a nice kid, but this is my 39th local news show, okay?" "And in that time, I've learned three things:" "avoid the all-you-can-eat sushi buffet in Bismarck, do not go to the bathroom with your lapel mic still on, and three, at this hour, your entire viewing audience is one half-drunk slob sitting in his underwear, so..." "Back in five, four..." "Well, let's do a great show for that half-drunk slob." "Well, that half-drunk slob appreciates it." "The next day, Marshall found himself craving a cigarette." "It was driving him crazy, so he decided to get some fresh air." "Oh, no." "You're not up here to jump, are you?" "No, no, no, no." "I fired a lot of people today." "I don't need another jumper in my file." "Oh, uh, cigarette?" "No, no, thank you." "Too bad." "You know what I miss, Jeffrey?" "Getting to know somebody over a smoke." "People are so interchangeable now, but you share a butt with somebody, you got a real bond." "You know what?" "I will, I will take one." "Okay." "I'm Marshall, by the way, it's Marshall, Marshall Eriksen." "Yeah." "Tell me something Marshall Eriksen." "How would you like to see a picture of the cutest dog in the world?" "Hey." " You smoked." "Yes, I smoked, and it was my third of the day." "You know what that means?" "I'm a smoker now." "It's all over." "I even bought a pack on the way home and a lighter and a Vikings lamp, which has nothing to do with anything, but I saw it in the window and I liked it." "Damn it, Marshall." "We already have four Viking lamps and smoking kills." "It was a way to bond with my boss, okay?" "You should have seen me up there." "That is a cute dog." "Yeah." "Are those your kids?" "Yeah, yeah, whatever." "Hey, oh, look what I got at the mall." "Aw." "There he is." "There he is." "Go ahead, you can pet him now." "No." "Yeah, yeah, come on, he likes it." "Scratch him under the chin." "I don't care what your reasons are." "You know how I feel about smoking." "Now, give me the cigarettes." "And the lighter." "Ah, that's the stuff." "What?" "!" "What?" "!" "Oh, yeah, add your Aunt Lily to the list." "Whenever Uncle Marshall fell off the wagon, your Aunt Lily got dragged right down with him." "What you doing?" "If you must know, I am reaching out to City Hall to try to get the mayor on our show." 
"Oh, my goodness, you are adorable, but the mayor's not coming on a show nobody watches." "My colonoscopy had more viewers than this show." "At least that had some twists and turns." "You know, I don't, I don't know why you're acting like this." "Maybe you're just bitter because you never had a shot at a network job, but I think I still do, so I really need to focus." "I've been on a network." "You were on a network?" "It was the best Labor Day weekend of my life." "When you do the news, you're in these ergonomic chairs." "It feels like you're sitting on a cloud." "Which was nice 'cause it was right after my colonoscopy." "And the dressing rooms?" "Oh." "There's dressing rooms?" "Oh, you bet your sweet headset there's dressing rooms, Mike." "No changing in the KFC bathroom across the street." "No, sir." "It was heaven." "But the second you get used to it, they go find someone who isn't "going through a bitter divorce"" "and doesn't "reek of gin,"" "and before you know it, you're stuck in a dead-end gig, surrounded by people going nowhere, doing the news in your tighty whities." "Okay, the underwear thing was your choice, and I don't like that it's catching on." "Ah, looking good, fellas." "Feels good, right?" "God, I want to kill him." "Of course people watch the show." "You guys watch the show, that's, like, two right there." "Oh, my God." "You guys still haven't seen my show." "What?" "We never miss it." "We've seen it!" "Really?" "What color is the set?" "Uh..." "Yeah, uh..." "It's black." "Right." "Yeah." "With, like, silver" "Uh-huh." "Yeah." "around the edges." "Not your TV set, my show set." "Ah..." "Oh..." "God, if I can't even get my best friends to watch, who's going to watch?" "Lots of people." "Uh, you got bedridden insomniacs." "Hmm." "Bums camping outside a department store." "People waiting in the ER, where the TV is in a cage, so you can't change the channel." "Ooh, do you have any stalkers?" "Yeah, but even Leonard won't watch my show." "I can't believe those guys are smoking out there." "It's freezing out." "Remember when you used to be able to smoke in bars?" "Oh, hey, dude." "I think that hot girl over there's smiling at me." "Uh, that's a chair." "But yeah, dude, hit that." "Guys?" "Marco!" "Polo!" "Polo!" "Well, it's dividing our group into smokers and non-smokers." "And that's not healthy." "You're right." "Let's go have a smoke." "What?" "!" "What?" "!" "Yeah, I'm not proud of it." "Look at you two." "Smokers." "Just like the rest of us." "Whoa, whoa, whoa, whoa, whoa, whoa, whoa, whoa." "I am not a smoker." "I only smoke in certain situations." "Postcoital, when I'm with Germans-- sometimes those two overlap-- coital, birthdays, to annoy my mom, precoital, on a sailboat, the day the Mets are mathematically eliminated every year, and, of course" "wait for it, 'cause Lord knows I have-- pregnancy scares." "Why are you smoking right now?" "I'm always precoital, Ted." "You know, maybe smoking's not so bad." "I mean, at least it gets us out in the fresh air." "Yeah, and all the coughing really works my abs." "I... am... ripped!" "But, as glamorous as it was, within a week, we all hit our breaking point." "You okay, sweetie?" "Actually, baby, my throat's a bit sore." "So, I'm whapping him across the nose with the newspaper, right?" "Yeah." "And my wife says," ""Come on, you can't treat your son like that."" "I don't know." "Michael, call 911." "Oh... oh, my God." "Okay, yes, yeah, right away." "It's Marshall, by the way." "Oh, my God." "I hope Arthur's okay." 
"Dibs on his office if he's not." "We have to quit smoking." "Honey, you said a mouthful." "I wish I had never started." "I mean, I think back to myself at 13 years old." "If I could only go back to that moment..." "I hate that little bastard." "Me, too." "Okay, that's it." "Let's quit." "Let's..." "Let's do it." "Well, I am proud of you guys." "I have heard how difficult it is for smokers like yourself to quit, so, on behalf of nonsmokers," "I salute you, and I am here to help." "So, hand in your cigarettes, and I will get rid of them one at a time." "You're quitting, dollface." "I know I don't normally call you dollface, but it kind of works in this voice." "Dollface." "Hey, guys." "Hey." "Uh, I was just wondering, um, is anyone else interviewing the mayor tomorrow on TV, or is it just me?" "Hey, that's great." "Whoa!" "Hey!" "Yeah, Don can suck it while I suck this." "Light me, Marshall." "Actually, um, we've decided that we're all quitting smoking." "That's fantastic." "I'm sick of you guys bumming my cigarettes." "No, come on, Robin." "We-we can't do it unless we all do it together." "No." "You can sleep with Marshall." "Oh, God, Lily, no." "Sorry, baby, you got to take one for the team." "I-I don't want to sleep with Marshall." "I've seen the looks." "Right." "I can't quit right now." "Not before the biggest interview of my life." "It's-it's too stressful." "It's too stressful!" "Let's just have one more." "Give me a cigarette!" "No, wait!" "Robin, Robin, think about this for a second." "Bloomberg is the antismoking mayor." "Do you really want to show up to that interview smelling like smoke?" "It'd be like interviewing a vegetarian smelling like a McRib." "Really?" "Like I'm not going through enough right now?" "Look, you're quitting." "We're all quitting." "Fine." "I'll quit." "Great." "Great." "We just have to get through the first 24 hours." "After that, it's a cakewalk." "Barney, do you have to bite your nails so loud?" "I'm not biting my nails." "I'm trying to suck the leftover nicotine out of my fingertips." "Marshall, can you pass the onion rings?" "What, you got dinosaur arms?" "They're right there." "What do you think cigarettes are doing right now?" "You think they're thinking about us?" "Dude, if you don't stop tapping your foot, it's coming off." "Oh, my God, Ted, I'm so sorry." "Maybe I should move it a little bit closer to your ass!" "All right." "I am ready to do this right now!" "None of us knew what we were fighting about." "We just knew we wanted to smoke." "More than anything in the world." "It wasn't going very well for Robin, either." "Sorry I'm late." "Someone used the microwave, and the elevator stopped." "What are you doing?" "Okay, Don, Don, seriously, not tonight." "Okay, I'm a little bit on edge because I quit smoking for my interview with the mayor, so just..." "Oh, you precious little porcelain unicorn, you." "Why would you do that?" "Why?" "!" "Because I care, Don." "I-I care about the show, I-I care about my career." "Unlike you, you unprofessional jerk!" "You're-You're sloppy, you're rude, and I wish you worked half as hard as the elastic on those stretched-out underpants." "I'm starting to feel like this is getting a little personal." "No, Don, it would be getting personal if I would say that the reason you bounce from market to market is because you're a loser-- a lazy, obnoxious, loser." "The mayor canceled." "In five, four, three, two..." "Hi, I'm Robin Scherbatsky." "You think I'm a loser?" "You're' right." "I'm a loser." 
"But at least I've accepted it." "A plucky raccoon has been cheering up patients at a local senior center." "I used to be just like that." "Always wanting more and never getting it." "It's a dead end, Robin." "You're never going to be a network anchor." "Just like you're never going to quit smoking." "What are you doing?" "I'm enjoying a cigarette." "Oh, that'll get you there." "Oh, that's good." "You want a drag?" "Can we be professionals?" "Please?" "Yeah." "Yeah, professionals?" "You know who's working the camera right now?" "A chair." "We're on a show where we can't even get the cameraman to watch." "That's why the mayor canceled, Robin." "And that's why Mike is on a fried chicken run." "And that's why you and I can enjoy a cigarette right here on the air." "No, thank you." "Why?" "Because of our millions and millions of impressionable viewers?" "Okay..." "If any impressionable viewers have a problem with this, please give us a call." "The number's on the screen." "Chirp, chirp?" "Chirp, chirp?" "Come on, Robin... live a little." "Oh, that must be Mike." "He always forgets-- extra crispy." "Hello?" "Robin, don't smoke that cigarette!" "Marshall?" "!" "We're all watching and we're all very impressionable." "And I swear to God, if you smoke that cigarette, we're all going to smoke." "You..." "You guys are watching?" "We sure are, sweetheart, and you look fabulous." "Oh, you guys." "You don't have to do this, Robin." "You don't have to smoke." "Resist it, honey." "Thanks for the call, guys." "Oh, she did it!" "Good for her." "We're not going to smoke." "I have a pack stashed up on the roof." "Uh-oh." "I'm awake." "And I'm smoking!" "Robin, you know how dangerous it is to wake a sleep smoker?" "It's fine." "I bought a pack on my way home." "Hey, great show, Robin." "Yeah, but that" "Don guy, what a tool." "Well, no, he actually apologized to me." "No, I mean when he stood up." "Those briefs were pretty revealing." "You should date that guy." "Yeah." "Not in this lifetime." "They were dating within two months." "But more on that later." "Hey, guys, look, the sun's coming up." "You know what right now is a perfect time for?" "A last cigarette ever." "No, I mean, a real last cigarette ever." "Damn it, let's do it." "Okay." "All right." "Last cigarette ever on, uh, on the count of three." "One..." "Two..." "Three." "We all quit for a while after that." "But it wasn't anyone's last cigarette." "But we did eventually all quit smoking for real." "Robin's last cigarette was in June 2013." "Barney's last cigarette was in March 2017." "Lily's last cigarette was the day she started trying to get pregnant." "And Marshall's last cigarette was the day his son was born." "And my last cigarette?" "Two weeks into dating your mother." "And i never looked back." "I'm sorry I hit you, buddy." "I wanted to make it up to you, by giving you this." "Wow, she's hot!" "Yeah." "Well... some day, you're going to marry her." "No way." "We're pretty lucky." "Totally." "I'll be in my tent." "Oh, no." "Wait, hey!" "No, don't, don't do that." "Well... have fun for me."
Fact: Bacon Grease Makes You Run Faster

This post brought to you by Bacon Chase. All opinions are 100% mine.

Chicago has gone buck-wild with fun runs. It seems like my facebook feed these days is either pictures of homogenous babies or detailed descriptions of other people’s exercise. My friend Jen showed me this app for when I’m tired of the baby posts; not sure if one exists for the latter.

Now the entire god-damn country has lost their shit over bacon, and rightfully so. It’s delicious in nearly every form. I’ve been known to take down an entire package of Baco’s while watching a Three’s Company marathon on meTV. And yes, I know it’s not real, but neither are other things that I love.

So the natural progression of popular ideas leads us to the Bacon Chase, namely the Chicago Bacon Chase. Yes, that’s right, it’s a very real cardiologist’s paradox. These mf’ers are cray. These jibronis have turned exercise into something that actually sounds fun. Participants have the choice to either run a 5k or a 0.05k. To put it in perspective, 0.05k is just 164 feet. Robbie Gould can consistently hit a field goal at that distance. Post that shit to Facebook, jock-o.

You also get hooked up with:

Infinite bacon at the finish line (philosophical question of the day: how can one fit an infinite amount of bacon in a finite amount of space?)
Unlimited bacon bits along the course.
A free bloody mary.
A Bacon Chase t-shirt (to wash your car with).
A bacon-scented race bib.

Some of the Chicago Gluttons crew will be at the race, supporting our much healthier girlfriends who are actually running. We’ll be live-tweeting roadside heart attacks as they happen.
Perioperative smoking behavior of Chinese surgical patients. Surveys suggest that, consistent with a high smoking prevalence, Chinese smokers in the general population report little interest in quitting. In other cultures, surgery is a powerful teachable moment for smoking cessation, increasing the rate of spontaneous quitting. We determined the perioperative tobacco use behavior of Chinese patients scheduled for elective surgery who smoke cigarettes and factors associated with both preoperative intent to abstain and self-reported smoking behavior at 30 days postoperatively. Specifically, we tested the hypothesis that perception of the health risks of smoking would be independently associated with both preoperative intent to abstain and self-reported abstinence at 30 days postoperatively. Patients ≥18 years of age scheduled for elective noncardiovascular surgery at Peking Union Medical College Hospital in Beijing, China, were assessed preoperatively and up to 30 days postoperatively for factors associated with smoking behavior, including indices measuring knowledge of smoking-related health risks. Of the 227 patients surveyed at baseline, most (164, 72%) intended to remain abstinent after hospital discharge. For the 204 patients contacted at 30 days postoperatively, 126 (62%) self-reported abstinence. In multivariate analysis, factors associated with preoperative intent to abstain after surgery included older age, self-efficacy for abstaining, and undergoing major surgery; factors associated with abstinence included older age, self-efficacy, major surgery, and preoperative intent to abstain. Higher perception of benefits from quitting was associated with intent, but not abstinence. Knowledge of the health risks caused by smoking was not found to be associated with either intent or abstinence, so that the hypothesis was not supported. Both intent to quit and self-efficacy for maintaining abstinence appear to be much higher in Chinese surgical patients than in prior surveys of the general Chinese population, and the majority of surgical patients maintained abstinence for at least 30 days. These findings suggest that surgery can serve as a powerful teachable moment for smoking cessation in China.
Arizona State University

Arizona State University is a new model for American higher education, an unprecedented combination of academic excellence, broad access, and impact. This New American University is a single, unified institution comprising four differentiated campuses that positively impact the economic, social, cultural and environmental health of the communities it serves. Its research is inspired by real world application, blurring the boundaries that traditionally separate academic disciplines. ASU serves more than 63,000 students in metropolitan Phoenix, Arizona, the nation’s fifth largest city. ASU champions intellectual and cultural diversity, and welcomes students from all fifty states and more than one hundred nations.

The School of Art offers a total of 28 degrees in fine arts, art education and art history: 14 undergraduate degrees and 14 at the graduate level. In addition, a minor is offered in art history. The school’s particular strengths are art education, ceramics, photography and printmaking.
967 F.2d 116 70 A.F.T.R.2d 92-5398, 92-2 USTC P 50,412 David E. HEASLEY and Kathleen Heasley,Petitioners-Appellants, Cross-Appellees,v.COMMISSIONER OF INTERNAL REVENUE, Respondent-Appellee,Cross-Appellant. No. 91-4526. United States Court of Appeals,Fifth Circuit. July 20, 1992. John D. Copeland, Dallas, Tex., for petitioners-appellants, cross-appellees. Abramam N.M. Shashy, Jr., Chief Counsel, I.R.S., Kimberly S. Stanely, Atty., Gary R. Allen, Chief, William S. Estabrook, Appellate Section, Tax Div., Dept. of Justice, Washington, D.C., for C.I.R. Appeals from the Decision of the United States Tax Court. Before BRIGHT,1 JOLLY, and BARKSDALE, Circuit Judges. BRIGHT, Senior Circuit Judge: 1 David and Kathleen Heasley (The Heasleys) appeal from the decision of the Tax Court denying a portion of their request for attorneys' fees and litigation costs under 26 U.S.C. § 7430 (1988). The Heasleys incurred the sought-after fees and costs during prior litigation before the Tax Court and on appeal to this court. The Internal Revenue Service cross-appeals, challenging the Heasleys' entitlement to any fee award and disputing the manner in which the Tax Court calculated the award. We affirm in part, reverse in part and remand in part.I. BACKGROUND 2 The facts that led to the underlying litigation have been set forth in an earlier decision by this court. Heasley v. Commissioner, 902 F.2d 380 (5th Cir.1990) [Heasley I ]. We elaborate only as necessary to frame our analysis of the issues raised on this appeal. 3 Prompted by Gaylen Danner, who purported to be a financial and securities dealer, the Heasleys invested in an energy conservation plan in December 1983. Under the plan, which was sponsored by the O.E.C. Leasing Corporation [O.E.C.], the Heasleys leased two energy savings units from O.E.C. at a yearly cost of $5,000 per unit. O.E.C. ascribed a value of $100,000 to each unit. 4 Neither Heasley graduated from high school. Both had limited investment experience. As a return on their investment, the Heasleys thought they would receive a percentage of the energy savings yielded by the end users of the units. Although Danner discussed the investment's tax advantages, the Heasleys viewed the O.E.C. leasing plan as a source of future income.2 5 At Danner's suggestion, the Heasleys employed Gene Smith, a C.P.A., to prepare their 1983 tax return. Smith claimed a $10,000 deduction on the advance rent of the units and a $20,000 investment tax credit, which he carried back to 1980 and 1981. After investing $14,161 in the O.E.C. plan, the Heasleys received in excess of $23,000 in refunds from the Internal Revenue Service [IRS] for the three years. The O.E.C. investment never generated any income. The Heasleys lost all the money they invested with Danner, over $25,000. 6 After sending the Heasleys a prefiling notification letter in 1986, the IRS totally disallowed the $10,000 deduction and $20,000 investment tax credit. The Heasleys became liable for the $23,000 deficiency, plus interest. The IRS also assessed $7,419.75 in penalties: a $1,153.05 negligence penalty under I.R.C. § 6653(a)(1) (1988); a $5,940.90 valuation overstatement penalty under I.R.C. § 6659 (1988); a $325.80 substantial understatement penalty under I.R.C. § 6661 (1988) and an additional interest penalty on the disallowed investment tax credit under I.R.C. § 6621 (1988). 7 After exhausting their administrative remedies, the Heasleys sued the IRS. 
They conceded their liability for the deficiency and only challenged the assessment of the penalties and additional interest. The Tax Court upheld the assessment of the penalties and interest. Heasley v. Commissioner, 55 T.C.M. (CCH) 1748 (1988). A panel of this court reversed the Tax Court on July 20, 1990. Heasley I, 902 F.2d at 382-86. The Tax Court revised its decision accordingly on October 26, 1990. 8 On November 19, 1990, the Heasleys moved for an award of $40,221.86 in attorneys' fees and litigation costs under I.R.C. § 7430 (1988), which permits a "prevailing party" in a tax proceeding against the IRS to recover reasonable litigation costs. The Heasleys' attorney, John D. Copeland, submitted a supporting affidavit. Copeland did not submit billing records with the motion for litigation costs. 9 The Tax Court held that the Heasleys were entitled to reasonable litigation costs for the section 6661 substantial understatement penalty only. Heasley v. Commissioner, 61 T.C.M. (CCH) 2503 (1991). This was the sole instance in which they demonstrated that the position of the IRS was "not substantially justified." I.R.C. § 7430(c)(4)(A)(i). The Tax Court awarded $198.99 in costs, or one-fourth of the requested award of $795.94. The Tax Court disallowed the Heasleys' request for reimbursement in excess of the statutory rate of $75.00 per hour. See id. § 7430(c)(1)(B)(iii). In addition, the Tax Court determined that the statutory reimbursement rate, indexed to account for an increase in the cost-of-living, was $91.43 per hour. 10 The Tax Court noted that the Heasleys failed to provide a breakdown of specific hours and hourly rates as provided by Tax Court Rule 231(d).3 The Tax Court also observed that after the IRS disagreed with the reasonableness of the fee request, the Heasleys failed to submit a more detailed affidavit, as required by Tax Court Rule 232(d). Consequently, the Tax Court divided the total fee award claimed by the Heasleys ($39,425.92) by Copeland's hourly rate ($200) and yielded a figure of 197 hours. After dividing this number by four and yielding a figure of forty-nine hours, the Tax Court determined that the total award for attorneys' fees was $4,480.07. 11 The Heasleys filed a motion for reconsideration with a supplemental affidavit that broke down their request for fees by attorney, hourly rate and the number of hours worked by each attorney. The Tax Court denied the motion. This appeal and the Government's cross-appeal followed. II. DISCUSSION A. Substantial Justification 12 The Heasleys argue that they are entitled to an award of fees and costs incurred in litigating the three remaining penalties. The Heasleys assert that they established that the position of the IRS with respect to each penalty was "not substantially justified." I.R.C. § 7430(c)(4)(A)(i). We agree only in part. 13 In order to recover an award of attorneys' fees from the Government, a tax litigant must qualify as a "prevailing party" under section 7430(c)(4)(A).4 First, the litigant must "establis[h] that the position of the United States ... was not substantially justified." Id. Second, the taxpayer must also "substantially prevail[ ]" with respect to either "the amount in controversy" or "the most significant issue or set of issues presented." Id. § 7430(c)(4)(A)(ii). 14 A position is "substantially justified" when it is "justified to a degree that could satisfy a reasonable person." Pierce v. Underwood, 487 U.S. 552, 565, 108 S.Ct. 
2541, 2550, 101 L.Ed.2d 490 (1988) (interpreting similar language in 28 U.S.C. § 2412(d), the Equal Access to Justice Act). The Government's failure to prevail in the underlying litigation does not require a determination that the position of the IRS was unreasonable, but it clearly remains a factor for our consideration. Perry v. Commissioner, 931 F.2d 1044, 1046 (5th Cir.1991). Nor does a trial court ruling in the government's favor preclude a finding of unreasonableness, although this acts as a similarly important consideration. Huckaby v. Department of the Treasury, 804 F.2d 297, 299 (5th Cir.1986) (per curiam). We review the Tax Court's determination on the issue of substantial justification for abuse of discretion. Pierce, 487 U.S. at 557-63, 108 S.Ct. at 2546-49 (requires abuse of discretion review for analogous EAJA provision); Cassuto v. Commissioner, 936 F.2d 736, 740 (2d Cir.1991) (citing Pierce, 487 U.S. at 557-63, 108 S.Ct. at 2546-49). 1. Negligence Penalty 15 As this court explained in Heasley I, the IRS may penalize taxpayers for any underpayment due to negligence or disregard of the rules and regulations. Heasley I, 902 F.2d at 383 (citing I.R.C. § 6653(a)(1)). "Negligence" includes any failure to make a reasonable attempt to comply with the Tax Code, including the failure to do what a reasonable person would do under similar circumstances. Id. (citations omitted); I.R.C. § 6653(a)(3). "Disregard" includes any careless, reckless or intentional disregard. Heasley I, 902 F.2d at 383 (citing section 6653(a)(3)). Due care does not require moderate income investors, like the Heasleys, to investigate independently their investments; they may rely upon the expertise of their financial advisors and accountants. Id. 16 The Heasleys assert that they made reasonable efforts to comply with the Tax Code and the Government unreasonably asserted the negligence penalty. We agree. The Heasleys demonstrated that they are moderate income investors with a limited education and minimal investment experience. They relied on the expertise of their financial advisor, whom they believed to be knowledgeable and trustworthy. Although the Heasleys had always prepared their own tax returns in the past, they hired a C.P.A. to handle the more complicated tax matters created by their ill-fated investment. The Heasleys also monitored their investment. Heasley I, 902 F.2d at 384. 17 Under these circumstances, we cannot say that a reasonable person would have been satisfied with the IRS's position on the negligence penalty. See Pierce, 487 U.S. at 565, 108 S.Ct. at 2550. The Heasleys thus demonstrated that the position of the IRS with respect to the negligence penalty was "not substantially justified." I.R.C. § 7430(c)(4)(A). Accordingly, the Tax Court's holding to the contrary was abuse of discretion and we reverse. 2. Valuation Overstatement Penalty 18 The IRS may impose a valuation overstatement penalty for any underpayment "attributable to a valuation overstatement." I.R.C. § 6659(a)(2). A "valuation overstatement" occurs when a taxpayer overstates the value of property on a tax return by 150% or more. Id. § 6659(c). The IRS may waive any or all of the penalty when a taxpayer shows good faith and a reasonable basis for claiming the overvaluation. Id. § 6659(e). 19 The Heasleys overvalued the energy conservation units, which were actually worth $5,000, by $95,000. The Tax Court upheld the penalty. 
We reversed on the ground that the underpayment was not attributable to a valuation overstatement, but rather to an improperly claimed deduction or credit. Heasley I, 902 F.2d at 383 (citing Todd v. Commissioner, 862 F.2d 540, 542-43 (5th Cir.1988)). 20 At the fee dispute phase, the Tax Court held that the IRS was substantially justified in seeking the valuation overstatement penalty. The Tax Court refused to award the Heasleys fees and costs incurred in challenging this penalty. The Tax Court reasoned that the IRS asserted the penalty before the decision in Todd, when the issue was in flux and litigants reasonably could have argued either position. 21 The Heasleys do not now contest the determination that they overstated the value of the energy conservation units. They contend that they had a reasonable basis for the valuation and made the claim in good faith. See I.R.C. § 6659(e). They also assert that the IRS abused its discretion by failing to waive the valuation overstatement penalty. 22 The Heasleys have not shown, however, that the position of the IRS with respect to this penalty was "not substantially justified." We are persuaded, as was the Tax Court, that before Todd this issue was unresolved in our Circuit. See 862 F.2d at 541-45. The IRS simply argued for one of two plausible interpretations of the statute. See Huckaby, 804 F.2d at 299. Accordingly, the IRS reasonably asserted the section 6659 valuation overstatement penalty against the Heasleys. We affirm. 3. Additional Interest Penalty 23 The IRS may impose a penalty for any substantial underpayment attributable to a tax motivated transaction. I.R.C. § 6621(c)(1); Heasley I, 902 F.2d at 385. A "tax motivated" transaction includes a valuation overstatement. Heasley I, 902 F.2d at 385 (citing I.R.C. § 6659(c) (overstatement of property value by 150%)). In addition, the IRS may specify other types of transactions which may be treated as "tax motivated." Id. (citing section 6621(c)(3)(B)). 24 The Tax Court originally held that the Heasleys' investment in O.E.C. leasing was tax motivated because they had not engaged in the transaction for profit. Id. at 385-86 (citation omitted). This court reversed, concluding that the Heasleys displayed the requisite profit motive and the IRS should have considered their intent to earn future income. Id. at 386. At the fee dispute phase, the Tax Court held that the IRS's position on the additional interest penalty was substantially justified because the evidence supported the absence of a profit motive. 25 The Heasleys now maintain, under the authority of Heasley I, that the IRS was not substantially justified in pressing for the section 6621 additional interest penalty. We disagree. The additional interest penalty is necessarily bound up with the valuation overstatement penalty. See I.R.C. § 6621(c)(3)(A)(i) (" 'tax motivated transaction' means ... any valuation overstatement (within the meaning of section 6659(c))"). We have already held that the IRS reasonably asserted the valuation overstatement penalty. It would be inconsistent to hold that the IRS did not reasonably assert the section 6621(c) additional interest penalty, which draws its definition in part from the valuation overstatement penalty. Accordingly, we affirm. B. 
Substantially Prevail Requirement 26 Having determined that the Heasleys established that the IRS's position with respect to the negligence penalty was "not substantially justified," we must determine whether the Heasleys also substantially prevailed with respect to the amount in controversy or the most significant issue or set of issues. See I.R.C. § 7430(c)(4)(A)(ii). The Tax Court held that the Heasleys, who secured a reversal of all four penalties on appeal, substantially prevailed with respect to the most significant issue or set of issues presented. We review this determination for abuse of discretion. See Cassuto, 936 F.2d at 741. 27 The IRS asserts that the Heasleys are not entitled to an award of reasonable litigation costs because they conceded the most important issue, their liability for the deficiency. Although the Government acknowledges that the Heasleys prevailed on the penalties, it claims these were not significant issues because they lacked collateral or future impact. According to the IRS, the only significant issue in this case was the deficiency. Br. for Appellee/Cross-Appellant at 40-43. We disagree. 28 In order to determine whether a taxpayer has "substantially prevailed" within the meaning of section 7430(c)(4)(A), we look to the final outcome of the case, whether by judgment or settlement. Cassuto, 936 F.2d at 741. This section "is phrased in terms of issues not claims." Huckaby, 804 F.2d at 300. Thus, a victory on the primary issue suffices. See id. But see Ralston Dev. Corp. v. United States, 937 F.2d 510, 515 (10th Cir.1991) (taxpayer who recovers only 19% of the amount at issue in a tax case has not substantially prevailed with respect to the amount in controversy). 29 The Heasleys, who conceded their liability for the deficiency, only challenged the penalties. The primary issue in the underlying litigation, therefore, was their liability for over $7,000 in penalties and additional interest. After appeal to this court, the Heasleys secured the reversal of all four penalties. As in Huckaby, the final outcome of the case, reversal of the penalties, represented their complete vindication on the most significant issue. Unlike the taxpayers in Ralston, the Heasleys here did not accomplish only a proportionally slight vindication. The Heasleys "substantially prevailed" with respect to the most significant issue within the meaning of section 7430(c)(4)(A)(ii). Finding no abuse of discretion, we affirm. 30 The Heasleys, who established that the position of the IRS was "not substantially justified" with respect to the negligence and substantial understatement penalties, meet the requirements of the first level of "prevailing party" analysis. I.R.C. § 7430(c)(4)(A)(i). Because they "substantially prevailed" with respect to the penalties, the most significant issue or set of issues presented, they also withstand scrutiny under the second tier of section 7430(c)(4)(A) analysis. Id. § 7430(c)(4)(A)(ii). Accordingly, the Heasleys qualify as a "prevailing party" with respect to the substantial understatement and negligence penalties. They are entitled to an award of the reasonable litigation costs incurred in connection with challenging these two penalties.5 31 The remaining issues relate to the amount of the attorneys' fee award. C. Documentation 32 The IRS asserts that the Heasleys failed to document adequately their request for attorneys' fees. 
According to the IRS, the taxpayers should have provided contemporaneous billing records and a breakdown of the tasks performed by particular attorneys. See Bode v. United States, 919 F.2d 1044, 1047 (5th Cir.1990). The IRS asks us to remand with instructions to limit the fee award to the number of hours that the Heasleys' attorneys spent before the Tax Court. 33 We apply an abuse of discretion standard of review to the decision to grant attorneys' fees to a prevailing party. Cassuto, 936 F.2d at 740 (citing Pierce, 487 U.S. at 571, 108 S.Ct. at 2553). We review the overall amount of the award under the same standard. Id.; Bode, 919 F.2d at 1047 (citing Hensley v. Eckerhart, 461 U.S. 424, 437, 103 S.Ct. 1933, 1941, 76 L.Ed.2d 40 (1983)). Subsidiary findings of fact are reviewed for clear error. Bode, 919 F.2d at 1047 (citation omitted). 34 We agree with the IRS that the Heasleys, as parties seeking reimbursement for attorneys' fees under section 7430, bore the burden of establishing the number of attorney hours expended. Id. Failure to provide contemporaneous billing records, however, does not preclude recovery so long as the Heasleys presented adequate evidence to permit the Tax Court to determine the number of reimbursable hours. Id. In addition, the Heasleys had the burden of establishing that their attorneys expended a reasonable number of hours on this case and that the hours were reasonably expended. Id. (citation omitted). 35 In his affidavit in support of the motion for attorneys' fees and litigation costs, John D. Copeland stated that he is a certified specialist in tax law and that he devoted a substantial number of hours to the Heasleys' case. Copeland charged clients $200.00 per hour. His associate on the case, Andrea Winters, billed at $100.00 per hour. Copeland also stated that substantially all of the attorneys' time devoted to this case was devoted to the penalty issues, which were the only issues to proceed to trial. As the IRS points out, Copeland did not submit contemporaneous billing records in support of this motion. 36 Unlike the IRS, however, we do not conclude that the Tax Court abused its discretion by granting an award on the basis of the evidence before it. The Tax Court had the opportunity to observe the Heasleys' attorneys at trial and assess their credibility. The Tax Court precisely set forth the means by which it arrived at an overall figure of 197 hours. The Tax Court reasonably could have determined, on the basis of the evidence in the affidavit, that 197 hours was a reasonable number and that those hours were reasonably expended. Cf. Bode, 919 F.2d at 1049 (reversed attorneys' fee award where the only evidence before the district court failed to provide a reasonable basis for its calculation). 37 In addition, the Tax Court clearly noted that by failing to submit a detailed affidavit which set forth the nature and amount of each item for which costs and fees were claimed, the Heasleys' attorneys failed to comply with Tax Court Rule 231(d). Nevertheless, the Tax Court proceeded to calculate a fee award on the basis of the evidence Copeland did provide in his affidavit. We cannot say that the manner in which the Tax Court calculated the award of fees and costs constitutes abuse of its discretion to interpret its own procedural rules.6 38 Finally, the IRS relies primarily upon Bode, which is readily distinguishable. 
First, the taxpayers in Bode produced no documentary evidence in support of their request for attorneys' fees; they only presented vague expert testimony which did not establish the total number of hours or the hourly rate of the attorneys. 919 F.2d at 1046-47. The expert testimony gave the court no basis upon which to conclude whether the hours at issue were reasonable and reasonably expended. Id. at 1047-48. 39 Second, the district court in Bode awarded 600 hours at $150.00 per hour without articulating its reasons. Id. at 1046. Here, however, the Tax Court articulated both its reasons and its methodology for deriving the 197 hour figure. The Tax Court divided the total fee award sought by the Heasleys, $39,425.92, by Copeland's hourly rate of $200.00, which was set forth in the affidavit. 40 Accordingly, the Tax Court did not abuse its discretion by awarding attorneys' fees on the basis of the evidence before it. Nor did it err by determining that 197 hours served as the base figure for the attorneys' fee award. 41 As we have already decided that the Heasleys are entitled to an award of the costs and fees incurred in challenging the negligence and substantial understatement penalties, we now hold that they are entitled to reimbursement for one-half of the hours found by the Tax Court, rather than just one-quarter. The base figure for which they are entitled to attorneys' fees, therefore, is ninety-eight hours. Under the same reasoning, the Heasleys are also entitled to an award of $397.97, one-half of the costs they claimed. D. Special Factors 42 The Heasleys contend that the Tax Court erred by not granting them reimbursement based upon the actual hourly fee charged by their attorneys. Taxpayers who recover attorneys' fees against the United States may receive reasonable litigation costs at prevailing market rates. I.R.C. § 7430(c)(1). A maximum hourly rate of $75.00 applies unless the court determines that an increase in the cost of living or a "special factor" justifies a higher rate. Id. § 7430(c)(1)(B)(iii). The statute suggests one special factor: the limited availability of qualified attorneys for a proceeding. Id. 43 The Heasleys attempted to persuade the Tax Court that their attorneys were entitled to hourly fees of $100.00 to $200.00, the going rate in Dallas, Texas. The Tax Court held that the "going rate" did not qualify as a "special factor" within the meaning of section 7430. Accordingly, the Tax Court denied their request for reimbursement in excess of the $75.00 statutory hourly fee. We review this determination for abuse of discretion. Cassuto, 936 F.2d at 740, 743. 44 The Heasleys now maintain that several "special factors" warrant a higher award. They point to: (1) the limited availability of qualified attorneys in Dallas who practice for $75.00 per hour; (2) the need to deter harsh administrative action; (3) the need to encourage attorneys to take on essentially pro bono cases that speak to the fair administration of the tax laws; (4) the tax expertise of their attorneys and (5) the unusual results obtained by their attorneys. Although the Heasleys have made substantial arguments in favor of a higher rate, we cannot say that the Tax Court abused its discretion by limiting the attorneys' fees to the statutory rate. See, e.g., Pierce, 487 U.S. at 572, 108 S.Ct. at 2554; Cassuto, 936 F.2d at 743-44; Bode, 919 F.2d at 1050-52. Accordingly, we affirm the award of attorneys' fees at the statutory rate of $75.00 per hour, plus a cost-of-living increase. E. 
Cost-of-Living Increase 45 Section 7430 permits a court to grant more than $75.00 per hour in attorneys' fees when an increase in the cost-of-living justifies a higher rate. I.R.C. § 7430(c)(1)(B)(iii). The Tax Court awarded an hourly fee of $91.43, with the cost-of-living adjustment calculated from October 1, 1981, the effective date of a similar cost-of-living provision in the Equal Access to Justice Act. We review this purely legal determination de novo. Cassuto, 936 F.2d at 740. 46 The IRS contends that the proper date from which to calculate a section 7430 cost-of-living increase is January 1, 1986, the effective date of the section 7430 COLA provision. We agree. See Cassuto, 936 F.2d at 742-43; Bode, 919 F.2d at 1053 n. 8. Accordingly, we remand to the Tax Court to recalculate the cost-of-living increase from January 1, 1986. F. Attorneys' Fees For This Appeal 47 The Heasleys have requested attorneys' fees for the time devoted to the motion for litigation costs and this appeal. Br. for Appellants at 22. We have the power to make an award for services rendered in this court; and we elect to do so here in order to bring this long-pending dispute to a close. Leroy v. City of Houston, 906 F.2d 1068, 1086 (5th Cir.1990) (citing Davis v. Board of Sch. Comm'rs, 526 F.2d 865, 868 (5th Cir.1976)). 48 In order to award attorneys' fees for this appeal, we need only decide whether it was abuse of discretion for the Tax Court to determine that the IRS's position with respect to the underlying litigation was "not substantially justified." Bode, 919 F.2d at 1052 (citation omitted). We need not determine whether the Government's appellate position was substantially justified once this threshold decision has been made by the trial court. Id. (citing Commissioner, INS v. Jean, 496 U.S. 154, 110 S.Ct. 2316, 2320, 110 L.Ed.2d 134 (1990)). We must determine, however, whether the Heasleys are a "prevailing party" on appeal. Id. 49 We have already held that the Tax Court did not abuse its discretion by determining that the IRS's position with respect to the section 6661 substantial understatement penalty was "not substantially justified." We thus proceed to the next inquiry. 50 The Heasleys have not prevailed on every issue raised during this appeal. They secured additional attorneys' fees with respect to the section 6653 negligence penalty, which will result in a greater overall award of attorneys' fees.7 They did not prevail with respect to the requested "special factor" reimbursement in excess of the statutory hourly rate. In addition, the IRS prevailed on the cost-of-living increase, which will yield a lower COLA than previously awarded. 51 On balance, these losses are " 'not of such magnitude as to deprive [them] of prevailing party status.' " Bode, 919 F.2d at 1052 (quoting Leroy, 906 F.2d at 1082 n. 24). Thus, to the extent that the Heasleys prevailed on this appeal, they are entitled at least to reimbursement for appellate fees that relate to their success on appeal and in defending against the cross-appeal. See Jean, 110 S.Ct. 2321 n. 10; Bode, 919 F.2d at 1052. Accordingly, we direct the Heasleys to submit to this court their application for fees incurred during these appeals, together with supporting documents, prior to the issuance of the mandate in this case. See Fed.R.App.P. 41. III. CONCLUSION 52 We AFFIRM the Tax Court with respect to the section 6661 substantial understatement penalty, the section 6659 valuation overstatement penalty and the section 6621 additional interest penalty. 
We REVERSE with respect to the section 6653 negligence penalty and hold that the Heasleys are entitled to reasonable litigation costs because the IRS's position on this issue was not substantially justified. We AFFIRM the determination that the Heasleys substantially prevailed with respect to the most significant issues presented and are thereby entitled to reasonable litigation costs and fees for the negligence and substantial understatement penalties. We AFFIRM the Tax Court's base figure of compensable hours. We AFFIRM the Tax Court's denial of reimbursement at the attorneys' actual hourly rate. We REMAND to the Tax Court to award attorneys' fees for ninety-eight hours at $75.00 per hour, plus a cost-of-living increase calculated from January 1, 1986. The Heasleys are entitled to costs from the previous litigation in the amount of $397.97, plus an award of attorneys' fees from these appeals, to be determined by this court after submission of the necessary documentation. 1 Senior Circuit Judge of the Eighth Circuit, sitting by designation 2 For a more detailed description of the plan, see the Tax Court's memorandum opinion, Heasley v. Commissioner, 55 T.C.M. (CCH) 1748 (1988), and Soriano v. I.R.S., 90 T.C. 44 (1988) 3 Rule 231(d) provides, in relevant part: A motion for an award of reasonable litigation costs shall be accompanied by a detailed affidavit by the moving party or counsel for the moving party which sets forth distinctly the nature and amount of each item of costs paid or incurred for which an award is claimed. Tax Ct.R. 231(d). 4 See, e.g., Sher v. Commissioner, 861 F.2d 131, 133 (5th Cir.1988); Smith v. United States, 850 F.2d 242, 245 (5th Cir.1988); Huckaby v. Department of the Treasury, 804 F.2d 297, 298 (5th Cir.1986) (per curiam) 5 The IRS does not challenge on this appeal the Tax Court's finding that no substantial justification supported the section 6661 substantial understatement penalty. Rather, the IRS argues that the Heasleys are not entitled to an award of fees and costs because they did not substantially prevail with respect to the amount in controversy or the most significant issue or set of issues. I.R.C. § 7430(c)(4)(A)(ii). Because we hold that the Heasleys substantially prevailed with respect to the most significant issue or set of issues, we reject the IRS's arguments to the contrary. We also affirm the Tax Court's findings that the position of the IRS with respect to the section 6661 substantial understatement penalty was "not substantially justified." Id. § 7430(c)(4)(A)(i). We necessarily hold that the Heasleys are a "prevailing party" with respect to the substantial understatement penalty. Id. § 7430(c)(4)(A) 6 See, e.g., Ward v. Commissioner, 907 F.2d 517, 520-21 (5th Cir.1990) (not abuse of discretion for Tax Court to set aside default judgment under Tax Ct.R. 123); Kelley v. Commissioner, 877 F.2d 756, 761 (9th Cir.1989) (abuse of discretion to deny taxpayers leave to amend under Tax Ct.R. 41(a)); Noli v. Commissioner, 860 F.2d 1521, 1526 (9th Cir.1988) (not abuse of discretion to dismiss petition for failure to prosecute under Tax Ct.R. 123(b)) 7 The Tax Court previously awarded the Heasleys $4,480.07 in fees, including the cost-of-living adjustment. The Heasleys are now entitled to at least $7,350 in attorneys' fees, plus a cost-of-living increase
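As a purely illustrative aside (not part of the opinion), the fee arithmetic recited above can be retraced directly; every dollar figure below comes from the opinion itself, while the variable names and the small script are ours.

```python
# Recomputing the fee figures described in the opinion (illustration only).
claimed_fees = 39425.92   # total attorneys' fees sought by the Heasleys
copeland_rate = 200.00    # Copeland's hourly rate per his affidavit

base_hours = claimed_fees / copeland_rate       # ~197 hours, the Tax Court's base figure
awarded_hours = round(base_hours) // 2          # one-half, per this opinion -> 98 hours

statutory_rate = 75.00                          # I.R.C. s. 7430(c)(1)(B)(iii) cap
minimum_fee_award = awarded_hours * statutory_rate  # $7,350 before the COLA increase

costs_claimed = 795.94
costs_awarded = costs_claimed / 2               # $397.97, one-half of claimed costs

print(base_hours, awarded_hours, minimum_fee_award, costs_awarded)
# 197.1296 98 7350.0 397.97
```

The halving mirrors the court's holding that fees are recoverable for two of the four penalties, rather than the one penalty the Tax Court allowed.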
MILLBANK – Lester Berman, 53, cares a lot about the reputation of his fellow Millbankians, even going beyond the mere appearance of their attire. For over fifty years, Spotless Dry Cleaners and Alterations has not only cleaned, pressed and mended clothes, but also offered other services in order to uphold what he calls “a certain level of peace and stability” in the town. For a price, Berman and his capable crew can remove stains, hem cuffs and “influence people who are giving his customers difficulties”. According to Berman, “We know that clean and tailored clothes are a sign of a well-managed life, and my expert crew, Moosejaw and Chopper, are there to help my customers solve any problems that might lead to mussed apparel. They work quickly, efficiently and without bringing the authorities into it.” “A good dry cleaner learns a lot about people by what they bring in. Sometimes, you can tell that someone is having money problems or love troubles – they keep wearing the same suit and it’s getting frayed, or they suddenly bring in a flashy ensemble. A good cleaner might discreetly ask if they can help “remove” the issue for a certain fee. Voila, everything is clean and taken care of.” Berman says that Millbank has changed a lot over the past few decades, and his business has expanded to meet a growing demand for his services. “When my pa opened this place in 1962, people in Millbank walked around wearing potato sacks, often with the potatoes still in them. At least, that’s how he tells it,” smiles Berman, “and you never argued with Pa.” As a child, Berman helped his father with a number of simple tasks, checking pockets for money and jewelry, tailing suspicious customers and sweeping the floor. By the time he inherited the business in the early 1990s, Berman had received a full education in the family business and was able to apply the same strict standards set by his father. “The most important skills for a good cleaner are discretion, attention to detail and the ability to get blood out of anything.” When asked, customers at Spotless were uniformly positive about the level of service, but none of them wished to be quoted or identified. Berman hopes that Spotless Dry Cleaners and Alterations can help to impose yet more refined standards upon Millbank’s populace. “There’s just no reason for sloppy dressing,” says Berman. “Sometimes I look at some guy in ripped jeans or with mustard on the front of his shirt and I think he should be garroted and thrown in Water River.” Spotless Dry Cleaners is located at 810 Divan Street in Millbank. Business hours are 10 am – 4 pm, Monday – Thursday, but with a “special deposit” and references, “cleaning” services may be requested on the “Spotline” 24 hours a day. No squealers.
Head and neck aspergillosis in patients undergoing bone marrow transplantation. Report of four cases and review of the literature. Aspergillus infection can be a major cause of morbidity in immunocompromised patients, especially when there is pulmonary involvement. Diagnosis of aspergillosis is often complicated by the varied clinical presentation and compromised medical status of such patients. Four cases of head and neck Aspergillus infections in bone marrow transplant patients are presented. Involvement predominantly was limited to the oral cavity and/or sinuses, and in one case, the ear. Two cases were successfully managed with a combined antibiotic and surgical approach, and one case with antibiotics only. The fourth case was managed with antibiotics and surgery, but fatal hemorrhage secondary to sinus involvement developed.
# typed: strict
class A
  extend T::Sig

  # Sorbet signature: the method below returns a Dep::ExportedItem.
  sig {returns(Dep::ExportedItem)}
  def self.get_exported_item
    Dep::ExportedItem.new
  end
end
Ferdousi Mazumder Ferdousi Mazumder (; born 18 June 1943) is a Bangladeshi film, television and stage actress. In 1998, she was awarded the Ekushey Padak in the drama category by the Government of Bangladesh. As of 2009, on stage she has given over 1200 performances of about 35 plays, mostly for her own group, Theatre. Early life Mazumder was an intermediate student of Eden College. She earned her master's degrees in both Bengali and Arabic from the University of Dhaka. Career Mazumder started her drama career through her brother, Munier Chowdhury, a playwright and novelist. She first acted in the drama Daktar Abdullahar Karkhana, written by Shawkat Osman, which was a production of the then Iqbal Hall of the University of Dhaka. Ferdousi also acted in the very first televised drama of Bangladesh Television, Ektala Dotala (1964). Over the years, she performed in plays like Kokilara, a one-woman play, Eka, a one-character non-verbal play, Tamoshi, written by Nilima Ibrahim, and others. She directed five stage plays including Meherjan Arekbar, Tahara Tokhono, Chithi and Dui Bon. After the independence of Bangladesh, in 1972, a group of Chhatra Shikkhak Natya Goshthi members formed a theatre troupe calling it Theatre. Mazumder was one of the founding members of the troupe. Personal life Mazumder is married to Ramendu Majumdar. Together they have a daughter, Tropa Mazumder. Her father, Khan Bahadur Abdul Halim Chowdhury, was a district magistrate. Her brother Munier Chowdhury was an intellectual. Another brother, Kabir Chowdhury, was a professor and intellectual. Awards Ekushey Padak (1998) William Kerry Award (1998) Bangladesh Shilpakala Academy Award for Best Actor (1978) First National TV Award for Best Acting (1975) Sequence Award of Merit for performance in TV for a decade Independence Award (2020) Works References Category:1943 births Category:Living people Category:Bangladeshi stage actresses Category:Bangladeshi television actresses Category:Bangladeshi film actresses Category:20th-century Bangladeshi actresses Category:Recipients of the Ekushey Padak Category:Honorary Fellows of Bangla Academy Category:Place of birth missing (living people) Category:People from Noakhali District Category:Recipients of the Independence Day Award
Re: Bourbons that really SHINE with ice and water Bless me, Father, for I have sinned. I drank my bourbon neat, then added a splash, and finished off with some ice. What I find interesting about this thread is the number of SBers who confess to not only drinking but enjoying a wide variety of bourbons on ice. I would have guessed there would be more input from those who always, always, always take theirs neat the way God intended it. If God made anything better than bourbon he must have kept it for Hisself. Re: Bourbons that really SHINE with ice and water I pretty much only drink whiskey over ice if it's mixed in a cocktail. Anything BIB or under pretty much always gets drunk neat, over that may or may not get a splash of water. I try every new to me whiskey neat, regardless of proof, but if it's over 114 pf I usually prefer it with a little water. The key is to enjoy it. Whatever someone else does to enjoy theirs makes no difference to me. Whenever I serve whiskey to guests, I always have a small pitcher of ice water at hand, so they can have it their way.
It’s the time of year when everyone posts and comments on “End of the Year Lists.” From the top movies, music, and books of the year to the top restaurants, people and cars — it seems like there is a list for everything. Well, John and Cait will not let you down, as we have compiled a list of our Top Ten Brewers Moments from what was a very memorable and historic season. We worked together on this list. Some comments are from Cait and others are from John. Some moments were John’s and others were Cait’s. You might agree with some and disagree with others, as all ten moments will surely be up for debate. You can share with us your top ten Brewers moments from 2011 through the comments section at the bottom of this post. #10 — Late Night at Miller Park, Brewers vs. Rockies, May 20 Were you one of the 33,361 on hand to witness the Brewers vs. Rockies game as it kicked off at 7:10pm on a warm Friday evening in May at Miller Park? Maybe the better question is: were you still there 4 hours and 35 minutes later when Prince Fielder hit his 2-run walk-off blast in the bottom of the 14th? This memorable game featured a volley of scoring. Brewers led 1–0 after 1. Rockies led 2–1 after 3. Brewers tied it 2–2 in the 4th. Colorado took the lead 3–2 in the 5th and tacked on another run in the 6th, before the Brewers brought it back to 4–3 in the bottom of that inning. A homer by Casey McGehee tied the game 4–4 in the 8th. The game went into extras and there was no more scoring until the Rockies took the lead in the 13th, 5–4. The Crew answered with a towering blast by Yuniesky Betancourt and it was back to even. The Rockies scratched their way back on top, 6–5 in the 14th, but then Fielder hit his booming 2-run, 422-footer to right and rewarded those Brewers fans who had stuck it out to the end by sending them home happy. #9 — Brewers Fans Rock the Vote; Fielder Blast Rocks the AL in 2011 All-Star Game at Chase Field For the first time in our 43-year history, three players, a homegrown trio — Ryan Braun (first-round Draft pick in 2005), Prince Fielder (first-round Draft pick in 2002) and Rickie Weeks (first-round Draft pick in 2003) — were elected to start the All-Star Game, making the Crew the only NL team with multiple starters. Braun was elected for his fourth straight All-Star start, leading NL outfielders in balloting for the fourth straight season, and this time leading all NL players in votes. Fielder, a three-time All-Star, was elected to his second start and Weeks was elected to his very first All-Star Game. Although Braun did not end up playing in the 2011 All-Star Game due to a calf strain, this was still a memorable achievement for the Crew and the Brewers fans whose voting paid off! More memories and history were also made at the actual game itself as Fielder’s three-run home run in the fourth inning at Chase Field gave the National League the lead in the game and contributed in a big way to its 5–1 victory over the American League. With that blast, Fielder became the first player in Brewers history to hit a home run in the All-Star Game and after the game, he was named the Most Valuable Player in the All-Star Game, another first for our franchise. #8 — Crew Turns Triple Play, Brewers vs. Dodgers, August 15 On Monday, August 15, four Brewers defenders combined on the sixth triple play in franchise history, a sensational play that bailed starter Randy Wolf out of a two-on, no-out jam in the second inning and aided in their 3–0 win over the Dodgers at Miller Park. 
The 4–6–3–2 triple play went like this: With Dodgers at first and second and nobody out in a scoreless game, James Loney hit a grounder up the middle. Josh Wilson ranged to his right, gobbled up the baseball and flipped it with his glove to shortstop Yuniesky Betancourt for the first out. Betancourt threw to Prince Fielder at first for out No. 2, and Fielder, seeing Dodgers outfielder Matt Kemp trying to sneak home, fired to catcher George Kottaras for the inning-ending tag. In other news, the Crew also turned four double plays in this defensive clinic of a game. #7 — Doug Melvin Named Baseball America’s Executive of the Year Brewers Executive Vice President and General Manager Doug Melvin was awarded the Baseball America Executive of the Year award for 2011. Melvin and his staff worked hard to put a winner on the field and the bold moves that were made helped the Brewers to a National League Central Division Championship. As anyone who works for Melvin would say, he is a treat. Very knowledgeable about the game and its many intricacies, Melvin is also a very kind and caring person. His success is well-deserved! #6 — Ryan Braun Signs Contract Extension, April 21 On Thursday, April 21, Ryan Braun signed a $105 million, five-year contract extension adding to a seven-year deal he signed in May 2008, meaning that Braun will be a part of the Brewers franchise through 2020. “I’m proud to be a Milwaukee Brewer, I really am,” Braun said on that historic day. “I’m going to be for the next 10 years. It’s an exciting day for me, my family, the organization. It’s like being part of a family. “It’s truly special to me to come to work here every day, and I’m excited about being able to do that for the next 10 years. Thanks to the fans — that was the single biggest reason I wanted to stay here. To be in one of the smallest media markets in baseball and to have 3 million people to come watch us every year is incredibly special. “The more time we spend in other cities, the more we recognize how special it is here. The fans are the single biggest factor. I’m excited, man, truly excited to be able to say I’m a Milwaukee Brewer from this point forward.” Having Braun want to be the cornerstone of the Brewers franchise is a treat for Brewers fans. He truly enjoys playing here in Milwaukee and the fans will look forward to his play for many years to come. #5 — Muhammad Ali Visits Brewers Spring Training Camp This was a very special moment for the team at Spring Training. It was so neat to see the look in the eyes of Brewers players when Muhammad Ali entered the clubhouse. Being in the presence of such a powerful figure was inspiring to many of the players and staff members on hand that day. Although his health isn’t 100%, it was still amazing to see Ali in person, making for one of the most memorable days of 2011. #4 — John Visits Fenway Park and Cait Presents/Club Wins SAMMY at the 2011 National Sports Forum Since each fan has his or her own personal memories of 2011, we figured we should pick a moment that was unique to each of us this year, so here you have them: John’s visit to Fenway and Cait bringing home the SAMMY Award. John: Prior to June, I had never been to Fenway Park in Boston. I made the weekend trip with the team and it definitely lived up to the hype. I enjoyed how they were able to transform a historic ballpark into the modern era while still keeping much of the historical foundation. I have heard many great stories about Fenway and I certainly was not disappointed. 
I try to visit one park I have never been to each year and was happy to cross this one off the list. I already have my new park picked out for 2012 — Citi Field in New York! Cait: I knew it was going to be a great year when the winning started in January at the National Sports Forum in Louisville, KY. As a Club, we were a finalist for the SAMMY Award (Sales, Advertising, Marketing or Management Idea of the Year) for a Season Seat Holder renewal campaign that had been particularly successful for us in 2010. We had been a finalist for the award for another campaign the previous year and had not won, and although I am not particularly fond of speaking in front of large groups of strangers, I was tasked with presenting our campaign at the conference and this time, we were triumphant. I was very proud to help bring home this award for our talented team. #3 — Opening Day vs. Atlanta, April 4 Even though the Brewers fell to the Braves on this day, there is always something special about Opening Day. Everyone is excited, optimistic and energetic. Everything around the ballpark is fresh, clean and sparkling. From a working standpoint, there is a lot to get done. The rush up to Opening Day can be quite stressful filling in last minute details, but once the first pitch is thrown, we can relax and enjoy the game! #2 — Brewers Clinch the NL Central Division, September 23 The Brewers entered the game against the Marlins with a “Magic Number” of 2, meaning they not only needed to win, but also needed St. Louis to lose in order to clinch the division at Miller Park. The game was tied at 1 for much of the game, until Ryan Braun stepped up to the plate in the bottom of the eighth inning. Cait: I was on hand that night to help with a potential streamer launch if we were to win the game and St. Louis was to lose. I remember loading the cannons with the game still tied and making my way to my post, hoping against all hope that I’d get to shoot them that night. But the score of our game was tied and so was the Cubs-Cardinals game. The air was a little tense, and I will never forget leaning over to one of my co-workers and jokingly stage whispering, “Here’s where he hits the home run,” alluding to Ryan Braun’s heroic home run against the Cubs on the last day of the season in 2008 which sent us to the Postseason for the first time in 26 years. And then he did it. Chills. I still get chills just thinking about it. John: There was a special feeling in the ballpark that night. You just felt like it was our time to finish this off. Great crowd, Friday night, Yovani on the mound — it was all aligned for things to go our way. The game itself was kind of a blur. I know everyone remembers Braun’s home run, but remember Braun’s diving catch and laser-perfect throw to get Emilio Bonifacio out at first? To me that was the play of the game. It was a fantastic night and one I will never forget. The Crew went on to win 4–1 on that go-ahead, 3-run home run. Then, as if Braun’s blast wasn’t enough of a sense of déjà vu for the crowd of 44,584, fans and players had to wait for the Cubs-Cardinals game to end in order to find out if they would clinch the division that night. In an ironic twist of fate, the packed house cheered loudly for the rival Cubs who had hit their own 3-run home run in the 8th and went on to prevail over the Cardinals 5–1 as confetti and streamers finally rained down from the rafters at Miller Park. 
#1 — Game 5 of the NLDS, October 7 For those of you who were at Miller Park that Friday evening, you definitely understand why this would be #1. The excitement in the building was more than electric. The crowd was deafening. The sigh of relief let out by Brewers fans could be felt across the nation as Nyjer Morgan drove in Carlos Gomez for the game-winning run, sending the Crew to its first-ever National League Championship Series. And…there you have it, our favorite moments of 2011. What are yours? Please feel free to debate in the comment field below. Here’s to many more memorable moments in 2012! - John and Cait
The captain of this ship is blindsided when he returns from a mission to find his crew has decided to rid themselves of him. Andu Nehrengel is the captain of a spaceship in a faraway, remote corner of outer space. Without any warning, rhyme or reason, the crew orders him to surrender and come back as their prisoner, then forces him off his own ship and places him in a small runabout with provisions and power for only five days. Although he is alone and must plan his strategy from scratch, needless to say he survives, crash-landing on an alien planet. The science lessons begin when he meets Chandra; he survives attacks by huge meat-eating alien bird-beasts and then finds himself alone with three beautiful (how lucky can he get) female human clones. The four are now stuck on this planet. However he…
Background {#Sec1} ========== Renal cell carcinoma (RCC), as one of the most frequent cancers worldwide, is a common lethal malignancy \[[@CR1], [@CR2]\]. RCC incidence and death rates are high, with 63,000 new cases and 14,000 deaths per year in the United States \[[@CR3]\]. Surgical resection and immunotherapy are currently being applied to treat patients with RCC \[[@CR4]--[@CR6]\]. RCC includes more than 10 histological and molecular subtypes \[[@CR7]\], with clear cell RCC (ccRCC) as the most common subtype \[[@CR8]\]. The detailed molecular mechanisms underlying ccRCC development remain elusive. Non-coding RNAs (ncRNAs) are found to be important players in epigenetic regulation, especially long ncRNAs (\> 200 nucleotides, lncRNAs) and microRNAs (\< 22 nucleotides, miRNAs) \[[@CR9]--[@CR11]\]. Emerging evidence indicates that numerous dysregulated lncRNAs are involved in the development and progression of ccRCC \[[@CR12], [@CR13]\]. LncRNA-LET, as a recently identified lncRNA, is located at chromosome 15q24.1 \[[@CR11], [@CR14]\], and it plays a suppressive role in regulating cancer cell growth in malignancies, including esophageal squamous cell carcinoma and lung adenocarcinoma \[[@CR14]--[@CR16]\]. In ccRCC, the role that lncRNA-LET plays is unknown. Interestingly, lncRNA-LET expression is low in serum samples from patients with ccRCC \[[@CR17]\], suggesting its involvement in the carcinogenesis of this cancer. Previous studies have shown that lncRNA-LET has the potential to target miRNAs, thereby regulating the expression of miRNA targets to affect the progression of human cancers \[[@CR18], [@CR19]\]. Although various miRNAs may be regulated by lncRNA-LET, here we focused on miR-373-3p. Both tumor-promoting and anti-tumor effects of miR-373-3p have been reported before \[[@CR20], [@CR21]\]. An earlier study has revealed that miR-373-3p acts as an oncomiR in RCC \[[@CR22]\]. By analyzing the sequence information of lncRNA-LET and miR-373-3p, we noted that lncRNA-LET contains a potential binding site for miR-373-3p. This study was thus performed to explore whether lncRNA-LET regulates the malignant behaviors of ccRCC cells by regulating miR-373-3p. Herein, we explored the specific role of lncRNA-LET in regulating ccRCC growth in vitro and in vivo. LncRNA-LET overexpression or knockdown was performed in ccRCC cells. Meanwhile, a xenograft mouse model was constructed. We demonstrated that lncRNA-LET suppressed the growth of ccRCC cells. LncRNA-LET-induced cell cycle arrest and apoptosis in ccRCC cells were attenuated by miR-373-3p mimics. Materials and methods {#Sec2} ===================== Patients and tissues {#Sec3} -------------------- The human ccRCC and matched adjacent non-tumor tissues were obtained from 16 ccRCC patients from the First Affiliated Hospital of Zhengzhou University during September 2018--November 2018. Each patient provided informed consent prior to specimen collection. This study was approved by the Ethics Committee of the First Affiliated Hospital of Zhengzhou University, and conformed to the Declaration of Helsinki. Cell culture and transfection {#Sec4} ----------------------------- The ccRCC cell lines (Caki-1, 786-O, 769-P) and 293T cell line were purchased from Procell Life Science & Technology Co., Ltd. (Wuhan, China). Caki-1 cells were cultured in McCoy's 5A medium (Procell Life Science & Technology Co., 
Ltd.) containing 10% fetal bovine serum (FBS; Biological Industries, Kibbutz Beit-Haemek, Israel), 786-O and 769-P cells were cultured in RPMI-1640 medium (Procell Life Science & Technology Co., Ltd.) supplemented with 10% FBS, and 293T cells were grown in DMEM medium (Procell Life Science & Technology Co., Ltd.) containing 10% FBS in an incubator at 37 °C and 5% CO~2~. 786-O cells were transiently transfected with lncRNA-LET overexpression (lncRNA-LET OV), negative control (OV NC) vector, miR-373-3p inhibitor or negative control inhibitor (inhibitor NC), whereas 769-P cells were transiently transfected with lncRNA-LET siRNA, siRNA NC, miR-373-3p mimics or negative control mimics (mimics NC). In addition, 786-O cells were co-transfected with lncRNA-LET OV and miR-373-3p mimics or mimics NC. The miR-373-3p inhibitor, inhibitor NC, miR-373-3p mimics and mimics NC were purchased from JTSBIO (Wuhan, China). Further, 786-O cells stably transfected with lncRNA-LET overexpression (lncRNA-LET) or empty control (EV), and 769-P cells stably transfected with lncRNA-LET knockdown (lncRNA-LET shRNA) or control (shRNA Ctrl), were established by selecting cells with 200 μg/ml or 300 μg/ml G418. Plasmid construction {#Sec5} -------------------- The pcDNA3.1 and pRNAH1.1 vectors were purchased from GenScript (Nanjing, China). To overexpress lncRNA-LET, pcDNA3.1-lncRNA-LET was constructed by GenScript (Nanjing, China). The lncRNA-LET siRNAs and siRNA NC were purchased from JTSBIO (Wuhan, China). The shRNA targeting lncRNA-LET was designed and synthesized, and then cloned into the pRNAH1.1 vector to generate the shRNA against lncRNA-LET (lncRNA-LET shRNA). The sequences of lncRNA-LET siRNA-1, lncRNA-LET siRNA-2 and lncRNA-LET shRNA are listed in Table [1](#Tab1){ref-type="table"}.

Table 1 Sequence information
lncRNA-LET siRNA-1: Sense 5′-GUCUGAUGUAUCCACCCAUTT-3′; Antisense 5′-AUGGGUGGAUACAUCAGACTT-3′
lncRNA-LET siRNA-2: Sense 5′-GUGCAUGUGGUAGGUUAGATT-3′; Antisense 5′-UCUAACCUACCACAUGCACTT-3′
lncRNA-LET shRNA: Sense 5′-GATCCGGTCTGATGTATCCACCCATTTCAAGAGAATGGGTGGATACATCAGACTTTTTA-3′; Antisense 5′-AGCTTAAAAAGTCTGATGTATCCACCCATTCTCTTGAAATGGGTGGATACATCAGACCG-3′

Quantitative real-time PCR {#Sec6} -------------------------- Total RNA was extracted using RL reagent (BioTeke, China). The mRNA was reverse transcribed into complementary DNA with M-MLV Reverse Transcriptase (Takara, China). The expression levels of lncRNA-LET and miR-373-3p were detected via SYBR Green (BioTeke, China). β-actin and U6 served as internal controls. The relative expression levels were calculated with the 2^−ΔΔCt^ method (a short worked sketch of this calculation is given at the end of this section). The primers are shown in Table [2](#Tab2){ref-type="table"}.

Table 2 Primers for quantitative real-time PCR
lncRNA-LET: Forward 5′-AGCGTTTACTTCGTTGTTGT-3′; Reverse 5′-CAGAATGGAAATACTGGAGC-3′
β-actin: Forward 5′-CACTGTGCCCATCTACGAGG-3′; Reverse 5′-TAATGTCACGCACGATTTCC-3′
miR-373-3p: Forward 5′-GGCGGAAGTGCTTCGATTTT-3′; Reverse 5′-GTGCAGGGTCCGAGGTATTC-3′
U6: Forward 5′-GCTTCGGCAGCACATATACT-3′; Reverse 5′-GTGCAGGGTCCGAGGTATTC-3′

Cell cycle analysis {#Sec7} ------------------- The 786-O and 769-P cells were first cultured in RPMI-1640 medium supplemented with 10% FBS. Then, the same batch of cells (4 × 10^5^/well) were seeded onto 6-well plates and cultured in RPMI-1640 medium containing 10% FBS. 786-O cells were transiently transfected with lncRNA-LET OV or OV NC vector, or co-transfected with lncRNA-LET OV and miR-373-3p mimics or mimics NC. 
The 769-P cells were transiently transfected with lncRNA-LET siRNA or siRNA NC. After 48 h, cells were collected and fixed in ice-cold 70% ethanol for 12 h at 4 °C, and then incubated with 25 μl propidium iodide (PI) and 10 μl RNase A (Beyotime, China) for 30 min at 37 °C in the dark. Cell cycle distribution was analyzed using flow cytometer. EdU assay {#Sec8} --------- Cells were cultured with cell medium containing a final concentration of 10 μM EdU (Keygen, China) for 2 h. They were then fixed in 4% paraformaldehyde for 15 min, and incubated with 0.5% Triton X-100 for 20 min at room temperature. Cells were subsequently washed twice with PBS containing 3% BSA and then reacted with Click-iT for 30 min. The nuclei were stained with Hoechst 33342 (1:2000, Keygen, China) for 15 min. Finally, the images were captured under fluorescence microscopy and the EdU-positive cells were calculated. Western blot analysis {#Sec9} --------------------- Total proteins were obtained using RIPA buffer (Beyotime, China), and mitochondrial proteins were extracted with Mitochondrial Protein Extraction Kit (BOSTER, China). Then, protein concentrations were determined via a BCA Protein Assay Kit (Beyotime, China). Proteins were separated through SDS-PAGE and transferred to PVDF membranes. After blocking in 5% BSA, the membranes were subsequently incubated with primary antibodies, including Cyclin D1 (1:500; \#2978, CST, USA), Cyclin E (1:500; \#20808, CST, USA), Bax (1:5000; 50599-2-Ig, Proteintech, China), Bcl-2 (1:500; 12789-1-AP, Proteintech, China), Cytochrome *C* (1:5000; ab133504, Abcam, UK), Dickkopf-1 (DKK1) (1:1000; 21112-1-AP, Proteintech, China), tissue inhibitor of metalloproteinase-2 (TIMP2) (1:500; A1558, Abclonal, China), and β-actin (1:2000; 60008-1-Ig, Proteintech, China) overnight at 4 °C. Afterwards, the membranes were incubated with the secondary antibody (1:10,000; SA00001-1 or SA00001-2, Proteintech, China) for 40 min at 37 °C. Signals were detected with enhanced chemiluminescence (7 Sea biotech, China). Cell apoptosis detection {#Sec10} ------------------------ Cells were collected and centrifuged at 1000*g* for 5 min. Then, the cells in 195 μl binding buffer were incubated with 5 μl AnnexinV-FITC and 10 μl PI for 15 min at room temperature in the dark according to the manufacturer's instruction (Beyotime, China). Cell apoptosis was analyzed by flow cytometer. Caspase activity assay {#Sec11} ---------------------- The activities of caspase-3 and caspase-9 were analyzed with corresponding Caspase Assay Kits (Beyotime or Solarbio, China). Briefly, proteins were extracted from cells and then qualified with Bradford Protein Assay Kit (Beyotime, China). Subsequently, samples were incubated with the caspase substrate for 24 h at 37 °C. The absorbance was determined at 405 nm. JC-1 assay {#Sec12} ---------- Cells were obtained and centrifuged at 550*g* for 5 min. Then, the cells were resuspended in 500 μl JC-1 staining working solution (Beyotime, China). After incubation for 20 min in the incubator at 37 °C, cells were centrifuged at 600*g* for 5 min and washed twice with 1× JC-1 staining buffer, and resuspended with 500 μl 1× JC-1 staining buffer. JC-1 aggregate was measured via the flow cytometer. Hematoxylin--eosin (HE) staining {#Sec13} -------------------------------- The tumor tissues were fixed with 4% paraformaldehyde, embedded with paraffin and then cut into 5-µm sections. 
Afterwards, the sections were deparaffinized and rehydrated before being stained with hematoxylin (Solarbio, China) and eosin (Sangon, China). The staining was visualized under a microscope. TUNEL staining {#Sec14} -------------- The tumor tissues were fixed with 4% paraformaldehyde and 5-µm sections were embedded in paraffin, followed by deparaffinization and rehydration. The TUNEL-positive cells were labeled by Label Solution with Enzyme solution for 60 min at 37 °C in the dark, and then these sections were incubated with converter-peroxidase (POD) according to the manufacturer's protocol. Afterwards, hematoxylin (Solarbio, China) was used for the counterstaining of cell nuclei. The analysis of apoptotic cells was conducted and images were taken under a microscope. Immunofluorescence analysis {#Sec15} --------------------------- Cells were fixed in 4% paraformaldehyde for 15 min and incubated with 0.1% Triton X-100 (Beyotime, China) for 30 min. Additionally, tumor tissues were fixed in 4% paraformaldehyde, embedded with paraffin and cut into 5-µm sections. Then, the sections were incubated with goat serum to block nonspecific binding. The sections were subsequently incubated with anti-Ki67 antibody (1:50, Proteintech, China) or anti-Cytochrome *C* antibody (1:100, Proteintech, China) overnight at 4 °C. After washing thrice with PBS, the sections were incubated with Cy3 goat anti-rabbit IgG (1:200, Beyotime, China) and counterstained with DAPI (Biosharp, China). The results were analyzed under a fluorescence microscope. Luciferase reporter assay {#Sec16} ------------------------- 293T cells were seeded onto 12-well plates. The partial lncRNA-LET sequences containing wild-type (WT) and mutant (MUT) binding sites for miR-373-3p were synthesized and subcloned into pmirGLO luciferase reporter vectors. The 293T cells were transfected with the luciferase reporter constructs together with miR-373-3p mimics or mimics NC using Lipofectamine 2000. After a 48-h incubation, the transfected cells were collected and the luciferase activity analysis was conducted. *In vivo* xenograft mouse model {#Sec17} ------------------------------- Male 5-week-old BALB/c nude mice were obtained from BEIJING HFK BIOSCIENCE Co., LTD (China). All animal experiments were conducted according to the Guideline for the Care and Use of Laboratory Animals and approved by the Ethics Committee of the First Affiliated Hospital of Zhengzhou University. Mice were acclimated for 1 week and then randomly assigned to four groups (n = 6/group): EV group, lncRNA-LET group, shRNA Ctrl group, lncRNA-LET shRNA group. 5 × 10^6^ cells stably expressing EV, lncRNA-LET, shRNA Ctrl or lncRNA-LET shRNA vectors were subcutaneously injected in the right fore-flank of each nude mouse. Then, the size of the tumor was recorded every 3 days for 21 days. Finally, the mice were sacrificed and tumor tissues were photographed. The tumor volume was calculated with the equation volume (mm^3^) = length × width^2^/2. Statistical analysis {#Sec18} -------------------- All data were analyzed with GraphPad Prism version 7.0 and presented as mean ± SD. Two-tailed paired and unpaired Student's t-tests were used to test for significant differences between two groups. One-way ANOVA followed by Tukey's test was used for comparisons among multiple groups. A p value less than 0.05 was considered statistically significant. 
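To make the quantitative steps above concrete, the sketch below works through the 2^−ΔΔCt^ relative-expression calculation, the caliper tumor-volume formula, and an unpaired two-tailed t-test. It is illustrative only: the Ct values and caliper readings are hypothetical placeholders rather than data from this study, the function names are ours, and SciPy is assumed to be available.

```python
from scipy import stats  # scipy.stats.ttest_ind implements the unpaired two-tailed t-test

def relative_expression(ct_target_s, ct_ref_s, ct_target_c, ct_ref_c):
    """Fold change of a target gene versus a control sample by the 2^(-ddCt) method."""
    dd_ct = (ct_target_s - ct_ref_s) - (ct_target_c - ct_ref_c)
    return 2 ** (-dd_ct)

def tumor_volume(length_mm, width_mm):
    """Xenograft volume (mm^3) = length x width^2 / 2, as quoted in the Methods."""
    return length_mm * width_mm ** 2 / 2

# Hypothetical Ct values: lncRNA-LET vs. beta-actin in tumor and matched normal tissue.
print(relative_expression(26.0, 18.0, 24.0, 18.0))  # 0.25, i.e. ~4-fold down-regulation

# Hypothetical day-21 caliper readings (length, width in mm) for n = 6 mice per group.
ev_group = [tumor_volume(l, w) for l, w in [(12, 9), (13, 10), (11, 9), (14, 10), (12, 10), (13, 9)]]
let_group = [tumor_volume(l, w) for l, w in [(8, 6), (9, 7), (7, 6), (9, 6), (8, 7), (10, 7)]]

t_stat, p_value = stats.ttest_ind(ev_group, let_group)  # unpaired, two-tailed
print(f"t = {t_stat:.2f}, p = {p_value:.3g}")  # p < 0.05 would be called significant
```

In the study itself these computations were performed in GraphPad Prism; the code simply restates the two formulas quoted in the Methods in an executable form.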
Results {#Sec19} ======= LncRNA-LET expression is down-regulated in ccRCC tissues {#Sec20} -------------------------------------------------------- To examine the clinical significance of lncRNA-LET in ccRCC tissues, we conducted quantitative real-time PCR. The lncRNA-LET expression level was significantly decreased in ccRCC tissues compared with matched adjacent non-tumor tissues (Fig. [1](#Fig1){ref-type="fig"}a). In contrast, miR-373-3p expression was higher in ccRCC tissues (Fig. [1](#Fig1){ref-type="fig"}b).Fig. 1LncRNA-LET expression is down-regulated in ccRCC tissues. The expression levels of LncRNA-LET (**a**) and miR-373-3p (**b**) were detected using quantitative real-time PCR in ccRCC tissues and matched adjacent non-tumor tissues. n = 16, ^\#\#\#^p\<0.001. *ccRCC* clear cell renal cell carcinoma LncRNA-LET arrests cell cycle at G1 stage in ccRCC cells {#Sec21} -------------------------------------------------------- The basal expression levels of lncRNA-LET in the three ccRCC cell lines were first determined via quantitative real-time PCR. The lowest and highest lncRNA-LET expression were observed in 786-O cells and 769-P cells, respectively (Fig. [2](#Fig2){ref-type="fig"}a). Next, the lncRNA-LET overexpression plasmid was transfected into 786-O cells, while two lncRNA-LET siRNAs were transfected into 769-P cells. The overexpression and knockdown efficiencies were confirmed via quantitative real-time PCR in these two cell lines (Fig. [2](#Fig2){ref-type="fig"}b). EdU incorporation assays demonstrated that lncRNA-LET inhibited ccRCC cell proliferation (Fig. [2](#Fig2){ref-type="fig"}c). Cell cycle progression detection revealed that lncRNA-LET overexpression caused a dramatic accumulation in G1-phase and reduction in S-phase of 786-O cells, whereas lncRNA-LET silencing accelerated cell cycle progression of 769-P cells into S-phase (Fig. [3](#Fig3){ref-type="fig"}a). Moreover, lncRNA-LET overexpression down-regulated the expression of Cyclins D1 and E in 786-O cells, while lncRNA-LET knockdown up-regulated their expression in 769-P cells (Fig. [3](#Fig3){ref-type="fig"}b). These findings suggest that lncRNA-LET suppresses the proliferation of ccRCC cells and arrests their cell cycle progression.Fig. 2LncRNA-LET arrests cell proliferation in ccRCC cells. **a** LncRNA-LET level was measured in Caki-1, 786-O, 769-P cells by quantitative real-time PCR. **b** The level of lncRNA-LET was assayed in lncRNA-LET overexpression 786-O cells or lncRNA-LET knockdown 769-P cells via quantitative real-time PCR. **c** Cell proliferation was detected via EdU staining. Scale bars, 50 μm. ^\#^p \< 0.05, ^\#\#^p \< 0.01, ^\#\#\#^p \< 0.001. *ccRCC* clear cell renal cell carcinoma Fig. 3LncRNA-LET arrests cell cycle in ccRCC cells. **a** The flow cytometry was used to analyze cell cycle phase distribution. **b** The Cyclin D1 and Cyclin E protein levels were analyzed with western blot analysis. ^\#\#^p \< 0.01, ^\#\#\#^p \< 0.001. *ccRCC* clear cell renal cell carcinoma LncRNA-LET promotes cell apoptosis in ccRCC cells {#Sec22} ------------------------------------------------- Results from flow cytometry indicated that lncRNA-LET overexpression significantly promoted cell apoptosis while lncRNA-LET silencing suppressed cell apoptosis (Fig. [4](#Fig4){ref-type="fig"}a). Meanwhile, lncRNA-LET increased caspase-3 and caspase-9 activities (Fig. [4](#Fig4){ref-type="fig"}b), up-regulated Bax expression and reduced Bcl-2 expression (Fig. [4](#Fig4){ref-type="fig"}c) in 786-O cells. 
Data from the JC-1 assay illustrated that lncRNA-LET increased the proportion of green (monomeric) JC-1 in ccRCC cells, indicating a loss of mitochondrial membrane potential (Fig. [5](#Fig5){ref-type="fig"}a). Further western blot analysis confirmed that lncRNA-LET led to the release of Cytochrome *C* from mitochondria (Fig. [5](#Fig5){ref-type="fig"}b). Immunofluorescence also showed that lncRNA-LET overexpression significantly facilitated the cytosolic translocation of Cytochrome *C*, while lncRNA-LET silencing inhibited this translocation in ccRCC cells (Fig. [6](#Fig6){ref-type="fig"}a, b). These data suggest that lncRNA-LET promotes cell apoptosis in ccRCC cells, possibly through the mitochondrial pathway.

Fig. 4 LncRNA-LET promotes cell apoptosis in ccRCC cells. **a** Cell apoptosis was studied by flow cytometry. **b** The caspase-3 and caspase-9 activities were detected with the corresponding caspase assay kits. **c** Western blot analysis was used to determine the Bax and Bcl-2 protein levels. ^\#^p \< 0.05, ^\#\#^p \< 0.01, ^\#\#\#^p \< 0.001. *ccRCC* clear cell renal cell carcinoma

Fig. 5 LncRNA-LET reduces mitochondrial membrane potential and promotes Cytochrome *C* release in ccRCC cells. **a** The proportion of green (monomeric) JC-1 dye was calculated by flow cytometry. **b** The Cytochrome *C* protein level in the cytoplasm and mitochondria was examined using western blot analysis. ^\#^p \< 0.05, ^\#\#^p \< 0.01, ^\#\#\#^p \< 0.001. *ccRCC* clear cell renal cell carcinoma

Fig. 6 Effect of lncRNA-LET on Cytochrome *C* translocation in ccRCC cells. **a**, **b** Immunofluorescence detected the cytosolic translocation of Cytochrome *C*. Scale bar, 50 μm. ^\#\#\#^p \< 0.001. *ccRCC* clear cell renal cell carcinoma

LncRNA-LET targets miR-373-3p to regulate ccRCC cell growth {#Sec23}
-----------------------------------------------------------

We further explored the mechanism underlying the role of lncRNA-LET in ccRCC. We hypothesized that lncRNA-LET binds to miR-373-3p to regulate the development of ccRCC. As shown in Fig. [1](#Fig1){ref-type="fig"}b, the miR-373-3p expression level was increased in ccRCC tissues. We therefore carried out miR-373-3p overexpression or knockdown in ccRCC cells (Fig. [7](#Fig7){ref-type="fig"}a). Quantitative real-time PCR revealed that miR-373-3p mimics down-regulated lncRNA-LET expression, whereas the miR-373-3p inhibitor up-regulated the lncRNA-LET expression level (Fig. [7](#Fig7){ref-type="fig"}b). MiR-373-3p was predicted to bind to lncRNA-LET (Fig. [7](#Fig7){ref-type="fig"}c, d). A luciferase reporter assay confirmed the interaction between lncRNA-LET and miR-373-3p (Fig. [7](#Fig7){ref-type="fig"}e). Further, we found that lncRNA-LET positively regulated DKK1 and TIMP2 expression in ccRCC cells (Fig. [7](#Fig7){ref-type="fig"}f). However, miR-373-3p mimics reduced the lncRNA-LET-induced increase in DKK1 and TIMP2 expression (Fig. [8](#Fig8){ref-type="fig"}a). Additionally, miR-373-3p mimics alleviated the effects of lncRNA-LET overexpression on cell cycle and apoptosis (Fig. [8](#Fig8){ref-type="fig"}b, c). These data indicate that lncRNA-LET inhibits the growth of ccRCC cells through targeting miR-373-3p.

Fig. 7 LncRNA-LET is involved in the cell growth of ccRCC cells by targeting miR-373-3p. **a** The miR-373-3p level was assayed in ccRCC cells transfected with miR-373-3p mimics or inhibitor via quantitative real-time PCR. **b** The lncRNA-LET level was assayed in miR-373-3p mimics- or inhibitor-transfected ccRCC cells via quantitative real-time PCR. **c** Schematic diagram: lncRNA-LET (black) functions as a target of miR-373-3p (green). MiR-373-3p was indicated to bind with DKK1 (red) and TIMP2 (burgundy).
The ORF is indicated by filled rectangles. **d** The binding sites between lncRNA-LET and miR-373-3p, miR-373-3p and DKK1, and miR-373-3p and TIMP2 were predicted. **e** The relationship between lncRNA-LET and miR-373-3p was demonstrated through a luciferase reporter assay. **f** DKK1 and TIMP2 protein levels were examined with western blot analysis. ^\#\#^p \< 0.01, ^\#\#\#^p \< 0.001. *ccRCC* clear cell renal cell carcinoma, *DKK1* Dickkopf-1, *TIMP2* tissue inhibitor of metalloproteinase-2, *mimics NC* negative control mimics

Fig. 8 MiR-373-3p mimics alleviate the effects of lncRNA-LET overexpression on cell cycle and apoptosis. **a** Western blot analysis was used to measure the DKK1 and TIMP2 protein levels. Cell cycle phase distribution (**b**) and cell apoptosis (**c**) were analyzed through flow cytometry. ^\#\#^p \< 0.01, ^\#\#\#^p \< 0.001. *ns* not significant, *ccRCC* clear cell renal cell carcinoma, *mimics NC* negative control mimics

LncRNA-LET inhibits tumor growth in vivo {#Sec24}
----------------------------------------

In order to evaluate the function of lncRNA-LET in tumor growth in vivo, we stably overexpressed lncRNA-LET in 786-O cells or knocked it down in 769-P cells. These stably transfected cells were subcutaneously injected into nude mice. LncRNA-LET led to a reduction in tumor volume (Fig. [9](#Fig9){ref-type="fig"}a, b). Further HE staining, TUNEL staining and Ki67 immunostaining showed that lncRNA-LET increased apoptosis and suppressed cell proliferation (Fig. [9](#Fig9){ref-type="fig"}c--e). These results indicate that lncRNA-LET functions similarly in vivo and in vitro.

Fig. 9 LncRNA-LET inhibits tumor growth in vivo. 786-O cell xenograft tumors stably transfected with lncRNA-LET or EV vectors, as well as 769-P cell xenograft tumors stably transfected with lncRNA-LET shRNA or shRNA Ctrl vectors, were obtained. The size of the tumor was recorded every 3 days for 21 days. Finally, the mice were sacrificed and the tumor tissues were photographed (**a**). **b** Tumor volumes were measured with calipers and then calculated. **c** Cell apoptosis and necrosis in tumor tissues were analyzed using HE staining. Scale bars, 100 μm. **d** Cell apoptosis in tumor tissues was detected through TUNEL staining. Scale bars, 50 μm. **e** Representative Ki67 immunofluorescence images of tumor tissues are shown. Scale bars, 50 μm. n = 6, ^\#\#^p \< 0.01. *ccRCC* clear cell renal cell carcinoma, *lncRNA-LET* lncRNA-LET overexpression, *EV* lncRNA-LET overexpression matched control, *lncRNA-LET shRNA* lncRNA-LET knockdown, *shRNA Ctrl* lncRNA-LET knockdown control

Discussion {#Sec25}
==========

In this study, low lncRNA-LET expression was found in ccRCC tissues compared with matched adjacent non-tumor tissues. LncRNA-LET induced cell cycle arrest and apoptosis of ccRCC cells in vitro and inhibited the growth of xenografts in vivo. Further study showed that lncRNA-LET performed its role in ccRCC by targeting miR-373-3p.

LncRNAs are reported to regulate various biological processes. They can contribute to tumor progression or act as tumor suppressors in ccRCC \[[@CR12], [@CR23]--[@CR25]\]. It was reported that a low lncRNA-LET level was correlated with poor prognosis in patients with lung cancer or gastric cancer \[[@CR26], [@CR27]\]. Wu et al. \[[@CR17]\] showed a low lncRNA-LET level in the serum of ccRCC patients and regarded it as highly indicative for ccRCC diagnosis. In this study, we confirmed a low lncRNA-LET level in ccRCC tissues, which is consistent with the report of Wu et al.
However, the role of lncRNA-LET in the growth of ccRCC is still unclear and needs further exploration. In our study, lncRNA-LET inhibited the growth of ccRCC both in vitro and in vivo. The cell cycle contributes to the growth of cells, and cyclins are key drivers of the cell cycle. The growth-inhibitory effect of lncRNA-LET was accompanied by cell cycle arrest, which was further confirmed by declines in cyclins. Besides, lncRNA-LET also induced apoptosis in ccRCC cells, which was further confirmed by increased activities of caspase-3 and caspase-9. Interestingly, we found that lncRNA-LET increased the level of Bax and decreased the level of Bcl-2. As the ratio of Bax/Bcl-2 contributes to the opening of the mitochondrial permeability transition pore, this suggested that the induction of apoptosis by lncRNA-LET may be associated with the mitochondrial pathway. Therefore, the mitochondrial membrane potential and the release of Cytochrome *C* were examined. Our results demonstrated that lncRNA-LET reduced the mitochondrial membrane potential and enhanced the release of Cytochrome *C*, indicating that the apoptosis of ccRCC cells induced by lncRNA-LET may be mitochondria-mediated. Additionally, we found that lncRNA-LET increased the levels of DKK1 and TIMP2, which are involved in the regulation of cell migration \[[@CR28]--[@CR30]\], suggesting that lncRNA-LET may also have an effect on the metastasis of RCC. More research is needed to verify this speculation.

Furthermore, we explored how lncRNA-LET performs its tumor-suppressor role in ccRCC. Li et al. revealed that miR-373-3p promoted the tumorigenesis of RCC in vitro and in vivo \[[@CR22]\]. In our study, we identified lncRNA-LET as a direct target of miR-373-3p. Interestingly, we found that the levels of DKK1 and TIMP2, two verified targets of miR-373-3p, were increased by lncRNA-LET. These results suggest that lncRNA-LET may also modulate the expression of miR-373-3p target genes. Thus, rescue experiments were performed in our study. The results showed that miR-373-3p down-regulated the lncRNA-LET-induced increase in DKK1 and TIMP2 levels, and reversed the effects of lncRNA-LET on cell cycle and apoptosis. These results indicate that lncRNA-LET may perform its tumor-suppressor role in ccRCC cells through regulating the expression of miR-373-3p target genes.

In 2011, Salmena et al. proposed the concept of competing endogenous RNA (ceRNA) \[[@CR31]\]: targets of microRNAs, such as mRNAs, lncRNAs and pseudogenes, can in turn sequester microRNAs through their microRNA response elements, thus modulating the expression of microRNA target genes and giving rise to their various roles. We hypothesize that lncRNA-LET may act as a tumor suppressor in ccRCC through a ceRNA pattern. As the ceRNA network is a large-scale regulatory network, there may well be other microRNAs that lncRNA-LET targets to perform its role in ccRCC. However, our study focused only on miR-373-3p; other microRNAs that may be targeted by lncRNA-LET remain to be explored. Hence, further studies are needed to reveal the mechanisms underlying lncRNA-LET.

Conclusions {#Sec26}
===========

In the present study, lncRNA-LET repressed the cell cycle, induced apoptosis and inhibited tumor growth of ccRCC by targeting miR-373-3p. We identified lncRNA-LET as a tumor suppressor in ccRCC. The results of the present study provide a potential biomarker and therapeutic target for ccRCC treatment.
Abbreviations

ccRCC : clear cell renal cell carcinoma

lncRNA : long non-coding RNA

DKK1 : Dickkopf-1

TIMP2 : tissue inhibitor of metalloproteinase-2

FBS : fetal bovine serum

NC : negative control

EV : empty vector (control)

PI : propidium iodide

**Publisher's Note** Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Zhuo Ye and Jiachen Duan contributed equally to this study.

**Acknowledgements** Not applicable.

**Authors' contributions** ZY, JD and BQ conceived and designed the experiments. ZY, JD, LW and YJ performed the experiments and analyzed the data. ZY, JD and BQ contributed to the writing of the manuscript. All authors read and approved the final manuscript.

**Funding** This study was supported by a Grant from the National Natural Science Foundation of China (No. 81370869).

**Availability of data and materials** The datasets used and analyzed during the current study are available from the corresponding author on reasonable request.

**Ethics approval and consent to participate** This study was approved by the Ethics Committee of the First Affiliated Hospital of Zhengzhou University, and conformed to the Declaration of Helsinki.

**Consent for publication** Not applicable.

**Competing interests** The authors declare that they have no competing interests.
GOOSEBUMPS / O.S.T. Vinyl Record

Was $45.49. Now just $37.99, for a limited time only.

Product Details

Limited double 180 gram audiophile vinyl LP pressing including 12 bonus tracks. Housed in a gatefold jacket in a PVC protective sleeve, plus a bonus four-page insert. Since its introduction in 1992, over 400 million of R.L. Stine's Goosebumps books have been printed in 32 languages, earning critical acclaim and dominating global best-seller lists. Now, the literary phenomenon gets its big-screen debut. Its story revolves around a teenager who teams up with the daughter of a young R.L. Stine (played by Jack Black), after the writer's imaginary demons are set free on the town of Greendale, Maryland.

The score to Goosebumps is in the hands of industry icon Danny Elfman. Over the last 30 years, Elfman has established himself as one of the most versatile and accomplished film composers in the industry. He is known for creating The Simpsons main title theme, and for scoring many movies, including the majority of his long-time friend Tim Burton's films. He has also collaborated with such directors as Gus Van Sant, Sam Raimi, Ang Lee, Guillermo del Toro, Brian de Palma and Peter Jackson. His impressive resume includes film scores for, amongst others, Milk, Good Will Hunting, Big Fish and Men in Black (all Oscar-nominated), Edward Scissorhands, Charlie and the Chocolate Factory, Mission: Impossible, Planet of the Apes, Spider-Man (1 & 2) and many more. Elfman won a Grammy Award for Tim Burton's Batman, and an Emmy Award for his Desperate Housewives theme.

Track List

[Disc 1] Goosebumps / Ferris Wheel / To the Rescue / Camcorder / Ice Rink / Capture Slappy / Confession / Slappy's Revenge / Bus Escape / Lawn Gnomes / Ghost Hannah / Mantis Chase / Hannah's Back / They're Here

[Disc 2] Farewell / Credits / Something's Wrong* / Champ* / Break In* / The Books* / Instagram* / Floating Poodle* / Werewolf* / Lovestruck* / Panic* / On the Run* / Fun House* / The Twist*

(* bonus track)

Protection

Each record is protected within its record sleeve by a white vellum anti-dust sleeve.

Packaging

All items are shipped brand-new and unopened in original packaging. Every record is shipped in original factory-applied shrink wrap and has never been touched by human hands.
OMAHA BEACH, France (AP) — With the silence of remembrance and respect, nations honored the memory of the fallen and the singular bravery of all Allied troops who sloshed through bloodied water to the landing beaches of Normandy, a tribute of thanks 75 years after the massive D-Day assault that doomed the Nazi occupation of France and portended the fall of Hitler's Third Reich.

French President Emmanuel Macron and President Donald Trump praised the soldiers and airmen, the survivors and those who lost their lives, in powerful speeches Thursday that credited the June 6, 1944, surprise air and sea operation that brought tens of thousands of men to Normandy, each not knowing whether he would survive the day.

"You are the pride of our nation, you are the glory of our republic and we thank you from the bottom of our heart," Trump said of the "warriors" of an "epic battle" engaged in the ultimate fight of good against evil.

In his speech, Macron praised the "unthinkable courage," "the generosity" of the soldiers and "the strength of spirit" that made them press on "to help men and women they didn't know, to liberate a land most hadn't seen before, for no other cause but freedom, democracy." He expressed France's debt to the United States for freeing his country from the reign of the Nazis.

Macron awarded five American veterans the Chevalier of the Legion of Honor, France's highest award. "We know what we owe to you vets, our freedom," he said, switching from French to English. "On behalf of my nation I just want to say 'thank you.'"

Nearly 160,000 Allied troops landed in Normandy on D-Day. Of those, 73,000 were from the United States and 83,000 from Britain and Canada.

[AP photo: Some of the first assault troops to hit the Normandy beachhead on June 6, 1944, take cover behind enemy obstacles to fire on German forces as others follow the first tanks plunging through the water toward the German-held shore.]

The second day of ceremonies moved to France after spirited commemorations in Portsmouth, England, the main embarkation point for the transport boats. Leaders, veterans, their families and the grateful from France, Europe and elsewhere were present for the solemn day that began under a radiant sun.

At dawn, hundreds of people, civilians and military alike, hailing from around the world, gathered at the water's edge, remembering the troops who stormed the fortified Normandy beaches to help turn the tide of the war and give birth to a new Europe.

Dick Jansen, 60, from the Netherlands, drank Canadian whisky from an enamel cup on the water's edge. Others scattered carnations into the waves. Randall Atanay, the son of a medic who tended the dying and injured, waded barefoot into the water near Omaha Beach — the first of five code-named beaches where the waters ran red the morning of June 6, 1944.

Up to 12,000 people gathered hours later at the ceremony at the Normandy American Cemetery, where Macron and Trump spoke. U.S. veterans, their numbers fast diminishing as the years pass, were the guests of honor. Rows of white crosses and Stars of David where more than 9,380 of the fallen are buried stretched before the guests on a bluff overlooking Omaha Beach.

Britain's Prince Charles, his wife Camilla and Prime Minister Theresa May attended a service of remembrance at the medieval cathedral in Bayeux, the first Normandy town liberated by Allied troops after D-Day.
Cardinal Marc Ouellet read a message from Pope Francis with a tribute for those who "gave their lives for freedom and peace."

At daybreak, a lone piper played in Mulberry Harbor, exactly 75 years after British troops came ashore at Gold Beach. "It is sobering, surreal to be able to stand here on this beach and admire the beautiful sunrise where they came ashore, being shot at, facing unspeakable atrocities," said 44-year-old former U.S. paratrooper Richard Clapp, of Julian, North Carolina.

Gratitude was a powerful common theme. At an earlier ceremony overlooking Gold Beach, Macron thanked those who did not survive the assault "so that France could become free again." There, he joined British Prime Minister Theresa May and uniformed veterans to lay the cornerstone of a new memorial that will record the names of thousands of troops under British command who died on D-Day and in the ensuing Battle of Normandy.

"If one day can be said to have determined the fate of generations to come, in France, in Britain, in Europe and the world, that day was the 6th of June, 1944," May said. "As the sun rose that morning," she said, not one of the thousands of men arriving in Normandy "knew whether they would still be alive when the sun set once again."

Passing on memories is especially urgent, with hundreds of World War II veterans now dying every day. A group of five Americans parachuted into Normandy on Wednesday as part of a commemorative jump, and showed up on the beach Thursday morning still wearing their jumpsuits, all World War II-era uniforms, and held an American flag. All five said they fear that the feats and sacrifices of D-Day are being forgotten.

"I have all kinds of friends buried," said William Tymchuk, 98, who served with the 4th Canadian Armored Division during some of the deadliest fighting of the brutal campaign after the Normandy landings. "They were young. They got killed. They couldn't come home," Tymchuk, who was back in Normandy, continued. "Sorry," he said, tearing up. "They couldn't even know what life is all about."

[AP photo: American soldiers land on the French coast in Normandy during the D-Day invasion on June 6, 1944.]

The biggest-ever air and seaborne invasion took place on D-Day, involving more than 150,000 troops that day itself and many more in the ensuing Battle of Normandy. Troops started landing overnight from the air, then were joined by a massive force by sea on the beaches code-named Omaha, Utah, Juno, Sword and Gold, carried by 7,000 boats.

In that defining moment of military strategy confounded by unpredictable weather and human chaos, soldiers from the U.S., Britain, Canada and other Allied nations applied relentless bravery to carve out a beachhead on ground that Nazi Germany had occupied for four years.

"The tide has turned! The free men of the world are marching together to Victory," Gen. Dwight D. Eisenhower predicted in his order of the day.

The Battle of Normandy, codenamed Operation Overlord, hastened Germany's defeat less than a year later. Still, that single day cost the lives of 4,414 Allied troops, 2,501 of them Americans. More than 5,000 were injured. On the German side, several thousand were killed or wounded.

From there, Allied troops would advance their fight, take Paris in late summer and march in a race against the Soviet Red Army to control as much German territory as possible by the time Adolf Hitler died in his Berlin bunker and Germany surrendered in May 1945.
The Soviet Union also fought valiantly against the Nazis — and lost more people than any other nation in World War II — but those final battles would divide Europe for decades between the West and the Soviet-controlled East, the face-off line of the Cold War.

___

Sylvie Corbet in Colleville-sur-Mer, France, Milos Krivokapic and Adam Pemble in Ver-sur-Mer, and Elaine Ganley in Paris contributed to this report.
Sputum examination for acid-fast bacilli in private laboratories, Kathmandu Valley, Nepal.

OBJECTIVE: To investigate the characteristics of private laboratories and the process of sputum examination for acid-fast bacilli (AFB).

DESIGN: A door-to-door survey of private laboratories in an urban municipality of Kathmandu valley was conducted during the first quarter of 1998. Semi-structured interviews were conducted with staff of 14/20 (70%) identified laboratories.

RESULTS: All 14 private laboratories conducted sputum examination for AFB. The majority (71%) of staff lacked special training for AFB examinations. Monocular microscopes were commonly used (36%). Reagents were prepared irregularly, without quality control, and kept for as long as they lasted, often up to 4-6 months (43%). Laboratory registers were usually present (86%), but lacked information on the patient's address and the purpose of the test. A median of 12.5 slides per laboratory had been examined during the previous month (range 0-70). A total of 235 AFB slides were examined, of which 18 (7.7%) were reported as positive.

CONCLUSIONS: AFB examinations were widely available. Lack of training and quality control suggest a variable standard of AFB test results. It is recommended that the National Tuberculosis Programme (NTP) provide support and quality control to two to three (i.e., one for every 10) private laboratories in the area to secure private doctors' confidence in sputum testing.
class="sidebar-header text-bold">Configuration</h2><ul class="sidebar-elements"><li class="sidebar-element"><a class="sidebar-element_active" href="options.html">Options</a><ul><li class="sidebar-element"><a href="options.html#extensions-to-index">extensions_to_index </a></li><li class="sidebar-element"><a href="options.html#files-to-exclude">files_to_exclude </a></li><li class="sidebar-element"><a href="options.html#nodes-to-index">nodes_to_index </a></li><li class="sidebar-element"><a href="options.html#settings">settings </a></li><li class="sidebar-element"><a href="options.html#indexing-batch-size">indexing_batch_size </a></li><li class="sidebar-element"><a href="options.html#max-record-size">max_record_size </a></li></ul></li><li class="sidebar-element"><a href="commandline.html">Commandline</a></li><li class="sidebar-element"><a href="hooks.html">Hooks</a></li></ul><h2 class="sidebar-header text-bold">Advanced</h2><ul class="sidebar-elements"><li class="sidebar-element"><a href="netlify.html">Deploying on Netlify</a></li><li class="sidebar-element"><a href="github-pages.html">Deploying on Github Pages</a></li><li class="sidebar-element"><a href="themes.html">Themes</a></li><li class="sidebar-element"><a href="migration-guide.html">Migration guide</a></li></ul><h2 class="sidebar-header text-bold">Tutorials</h2><ul class="sidebar-elements"><li class="sidebar-element"><a href="blog.html">Blog</a></li></ul></div></nav><a class="sidebar-opener"></a><div class="documentation-container"><h1 id="options">Options <a class="anchor" href="options.html#options" aria-hidden="true"></a></h1> <p>The plugin should work out of the box for most websites, but there are options you can tweak if needed. All the options should be added under the <code>algolia</code> section of your <code>_config.yml</code> file.</p> <p>You should be familiar with <a href="./how-it-works.html">how this plugin works</a> under the hood to better understand what some options are doing.</p> <h2 id="extensions-to-index"><code>extensions_to_index</code> <a class="anchor" href="options.html#extensions-to-index" aria-hidden="true"></a></h2> <p>This options defines which source files should be indexed, based on their file extension. If an extension is not in the list, then the file will not be indexed.</p> <p>By default, all HTML and markdown source files will be indexed.</p> <p>If you are using another markup language (such as <a href="http://www.methods.co.nz/asciidoc/">AsciiDoc</a> or <a href="https://github.com/textile">Textile</a>), you might want to update the value like this:</p> <pre class="code-sample cm-s-mdn-like codeMirror yaml" data-code-type="Code"><div class="code-wrap"><code><span class="cm-atom">algolia</span><span class="cm-meta">:</span> <span class="cm-comment"># Also index AsciiDoc and Textile files</span> <span class="cm-atom"> extensions_to_index</span><span class="cm-meta">:</span> <span class="cm-meta"> - </span>html <span class="cm-meta"> - </span>md <span class="cm-meta"> - </span>adoc <span class="cm-meta"> - </span>textile </code></div></pre> <h2 id="files-to-exclude"><code>files_to_exclude</code> <a class="anchor" href="options.html#files-to-exclude" aria-hidden="true"></a></h2> <p>This option lets you define a list of source files you don’t want to index.</p> <p>By default it will exclude the <code>index.html</code> and <code>index.md</code> files found at the root. 
Those files are usually not containing much text (landing pages) or containing redundant text (latest blog articles) so they are not included by default.</p> <p>If you want to index those files, you should set the value to an empty array.</p> <pre class="code-sample cm-s-mdn-like codeMirror yaml" data-code-type="Code"><div class="code-wrap"><code><span class="cm-atom">algolia</span><span class="cm-meta">:</span> <span class="cm-comment"># Actually index the index.html/index.md pages</span> <span class="cm-atom"> files_to_exclude</span><span class="cm-meta">: </span><span class="cm-meta">[</span><span class="cm-meta">]</span> </code></div></pre> <p>If you want to exclude more files, you should add them to the array. Note that you can use glob patterns (<code>*</code> and <code>**</code>) to exclude several files at once.</p> <pre class="code-sample cm-s-mdn-like codeMirror yaml" data-code-type="Code"><div class="code-wrap"><code><span class="cm-atom">algolia</span><span class="cm-meta">:</span> <span class="cm-comment"># Exclude more files from indexing</span> <span class="cm-atom"> files_to_exclude</span><span class="cm-meta">:</span> <span class="cm-meta"> - </span>index.html <span class="cm-meta"> - </span>index.md <span class="cm-meta"> - </span>excluded-file.html <span class="cm-meta"> - </span>_posts/2017-01-20-date-to-forget.md <span class="cm-meta"> - </span>subdirectory/*.html <span class="cm-meta"> - </span>**/*.tmp.html </code></div></pre> <p><em>Note that some files (pagination pages, static assets, etc) will <strong>always</strong> be excluded and you don’t have to specify them.</em></p> <h2 id="nodes-to-index"><code>nodes_to_index</code> <a class="anchor" href="options.html#nodes-to-index" aria-hidden="true"></a></h2> <p>This options defines how each page is split into chunks. It expects a CSS selector that will be applied on the HTML content generated by Jekyll. Each matching node will be indexed as a different record.</p> <p>The default value is <code>p</code>, meaning that one record will be created for each <code>&lt;p&gt;</code> paragraph of content.</p> <p>If you would like to index other elements, like <code>&lt;blockquote&gt;</code>, <code>&lt;li&gt;</code> or a custom <code>&lt;div class=&quot;paragraph&quot;&gt;</code>, you should edit the value like this:</p> <pre class="code-sample cm-s-mdn-like codeMirror yaml" data-code-type="Code"><div class="code-wrap"><code><span class="cm-atom">algolia</span><span class="cm-meta">:</span> <span class="cm-comment"># Also index quotes, list items and custom paragraphs</span> <span class="cm-atom"> nodes_to_index</span><span class="cm-meta">: </span><span class="cm-string">&#39;p,blockquote,li,div.paragraph&#39;</span> </code></div></pre> <h2 id="settings"><code>settings</code> <a class="anchor" href="options.html#settings" aria-hidden="true"></a></h2> <p>This option let you pass specific settings to your Algolia index.</p> <p>By default the plugin will configure your Algolia index with settings tailored to the format of the extracted records. You are of course free to overwrite them or configure them as best suits your needs. 
Every option passed to the <code>settings</code> entry will be set as <a href="https://www.algolia.com/doc/api-reference/api-methods/set-settings/?language=ruby#set-settings">setting to your index</a>.</p> <p>For example if you want to change the HTML tag used for the highlighting, you can overwrite it like this:</p> <pre class="code-sample cm-s-mdn-like codeMirror yaml" data-code-type="Code"><div class="code-wrap"><code><span class="cm-atom">algolia</span><span class="cm-meta">:</span> <span class="cm-atom"> settings</span><span class="cm-meta">:</span> <span class="cm-atom"> highlightPreTag</span><span class="cm-meta">: </span><span class="cm-string">&#39;&lt;em class=&quot;custom_highlight&quot;&gt;&#39;</span> <span class="cm-atom"> highlightPostTag</span><span class="cm-meta">: </span><span class="cm-string">&#39;&lt;/em&gt;&#39;</span> </code></div></pre> <p>Settings defined here will take precedence over any setting you manually defined through the <a href="https://www.algolia.com/dashboard">Algolia dashboard</a> UI, though. If you’d like this not to happen at all, and only take the dashboard settings in account, pass <code>false</code> to the settings configuration.</p> <pre class="code-sample cm-s-mdn-like codeMirror yaml" data-code-type="Code"><div class="code-wrap"><code><span class="cm-atom">algolia</span><span class="cm-meta">:</span> <span class="cm-atom"> settings</span><span class="cm-meta">: </span><span class="cm-keyword">false</span> </code></div></pre> <p>We suggest users to at least run with the default settings once, so that the default relevance settings are set, which you can override via the dashboard after.</p> <h2 id="indexing-batch-size"><code>indexing_batch_size</code> <a class="anchor" href="options.html#indexing-batch-size" aria-hidden="true"></a></h2> <p>This option defines the number of operations that will be grouped as part of one updating batch. All operations of one batch are applied atomically. The default value is <code>1000</code>.</p> <p>You might want to increase this value if you are doing a lot of updates on each run and still want to have your changes done atomically.</p> <p>You might want to decrease this value if you’re using an unstable internet connection. Smaller batches are easier to send that large ones.</p> <pre class="code-sample cm-s-mdn-like codeMirror yaml" data-code-type="Code"><div class="code-wrap"><code><span class="cm-atom">algolia</span><span class="cm-meta">:</span> <span class="cm-comment"># Send fewer records per batch</span> <span class="cm-atom"> indexing_batch_size</span><span class="cm-meta">: </span><span class="cm-number">500</span> </code></div></pre> <h2 id="max-record-size"><code>max_record_size</code> <a class="anchor" href="options.html#max-record-size" aria-hidden="true"></a></h2> <p><strong>This is an advanced option. It has no effect on Community plans.</strong></p> <p>If you’re using a paid Algolia plan, you can push records that weight up to 20Kb (as opposed to 10Kb with the free Community plan). 
This option allows you to adjust the maximum size of one record (in bytes).</p> <pre class="code-sample cm-s-mdn-like codeMirror yaml" data-code-type="Code"><div class="code-wrap"><code><span class="cm-atom">algolia</span><span class="cm-meta">:</span> <span class="cm-comment"># Recommended setting for paid plans</span> <span class="cm-atom"> max_record_size</span><span class="cm-meta">: </span><span class="cm-number">20000</span> </code></div></pre> <p><em>Note that if you push a record that is larger than your allowed limit, the push will be rejected by the API. This might result in incomplete data being uploaded.</em></p> </div></div></section></body><section class="footer-new-cta footer-new h300 pos-rel"><div class="container color-white stellar-container vh-center"><div class="col-md-5"><div class="spacer120 hidden-sm"></div><div class="spacer32 visible-xs"></div><header><h2 class="text-normal m-t-none">Start creating stellar search,<span class="cf hidden-xs"></span>no strings attached.</h2><p>Dive into Algolia with our forever-free Community plan. No credit card required and up to 10k records to give us a spin.</p></header></div><div class="col-md-7 pos-rel z-10"><div class="spacer120 inline hidden-sm"></div><div class="spacer32 inline hidden-sm"></div><div class="spacer16 visible-sm"></div><div class="button-holder h200 p-r-large"><div class="spacer16 hidden-md hidden-sm"></div><span class="inline pos-rel"><a class="btn btn-static-primary btn-static-shadow-dark" href="https://www.algolia.com/users/sign_up/hacker">Get Started<i class="icon icon-arrow-right color-bunting m-l-small"></i></a><svg class="search-icon" width="22"><use xlink:href="#search-icon"></use></svg></span></div></div></div></section><div id="footer"><div class="credits"><div class="container pos-rel"><div class="row"><div class="col-md-12 text-center"><a data-no-turbolink="true" href="/"><img width="40" src="https://www.algolia.com/static_assets/images/flat2/algolia/algolia-logo_badge-598a1fe6.svg"></a></div></div><div class="spacer40"></div></div></div></div><svg style="display: none;"><symbol width="40" height="40" viewbox="0 0 40 40" xmlns="http://www.w3.org/2000/svg" id="search-icon"><path d="M26.806 29.012a16.312 16.312 0 0 1-10.427 3.746C7.33 32.758 0 25.425 0 16.378 0 7.334 7.333 0 16.38 0c9.045 0 16.378 7.333 16.378 16.38 0 3.96-1.406 7.593-3.746 10.426L39.547 37.34c.607.608.61 1.59-.004 2.203a1.56 1.56 0 0 1-2.202.004L26.808 29.012zm-10.427.627c7.32 0 13.26-5.94 13.26-13.26 0-7.325-5.94-13.26-13.26-13.26-7.325 0-13.26 5.935-13.26 13.26 0 7.32 5.935 13.26 13.26 13.26z" fill-rule="evenodd"></path></symbol><symbol width="46" height="38" viewbox="0 0 46 38" xmlns="http://www.w3.org/2000/svg" id="arrow-right"><path d="M34.852 15.725l-8.624-9.908L24.385 3.7 28.62.014l1.84 2.116 13.1 15.05 1.606 1.846-1.61 1.844-13.1 15.002-1.845 2.114-4.23-3.692 1.85-2.114 9.465-10.84h-24.66v-5.615h23.817zm-26.774 0h-.002 2.96v5.614H0v-5.615h8.078z" fill-rule="evenodd"></path></symbol><symbol xmlns="http://www.w3.org/2000/svg" viewbox="0 0 708.8 717" id="icon-copy"><path d="M658.8 158H490.2c-13.3 0-26 5.3-35.4 14.6l-4.6 4.6V25c0-13.8-11.2-25-25-25H235.6c-6.6 0-13 2.6-17.7 7.3L7.3 218C2.6 222.6 0 229 0 235.6V541c0 13.8 11.2 25 25 25h227.8v101c0 27.6 22.4 50 50 50h356c27.6 0 50-22.4 50-50V208c0-27.6-22.4-50-50-50zm-204 85.4V360H338.2l116.6-116.6zm-253-149.2V209H87L201.8 94.2zM50 516V259h176.8c13.8 0 25-11.2 25-25V50h148.4v177.3L267.5 360c-9.4 9.4-14.6 22.1-14.6 35.4V516H50zm608.8 151h-356V410h177c13.8 0 25-11.2 
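For reference, here is a minimal sketch of a `_config.yml` combining several of the options documented above. The `application_id` and `index_name` keys are not covered on this page; they are assumptions based on the plugin's [Getting Started](getting-started.html) guide, so double-check them there before use, and treat the values below as placeholders.

```yaml
algolia:
  # Index credentials (covered in the Getting Started guide, not on this
  # page; placeholder values, replace with your own)
  application_id: 'your_application_id'
  index_name: 'your_index_name'
  # Index AsciiDoc files in addition to the defaults
  extensions_to_index:
    - html
    - md
    - adoc
  # Keep the default exclusions and also skip a scratch directory
  files_to_exclude:
    - index.html
    - index.md
    - scratch/**/*.html
  # Index paragraphs and list items
  nodes_to_index: 'p,li'
  # Smaller batches for an unstable connection
  indexing_batch_size: 500
```

Assuming a setup that follows the Getting Started guide, the index is then pushed with the usual command, passing your admin API key through the environment: `ALGOLIA_API_KEY='your_admin_api_key' bundle exec jekyll algolia`.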
Sleeping Giant Brewing Company in Thunder Bay, Ont., is celebrating a recent award after taking home a bronze at the 16th annual Canadian Brewing Awards and Conference in Halifax. A total of 55 categories were judged, and the entries came from all over the country. Sleeping Giant was recognized for their newest flagship beer, Mr. Canoehead, which earned a bronze in the North American Style Amber/Red Ale category. This is the second time the company has won a Canadian Brewing Award.
About the Team / Sobre el Equipo

The CLC Team is composed of highly qualified staff with a passion for working with children. The Team includes licensed teachers, college undergraduate students, high school co-op students, and special education aides in the Waukesha School District. Everyone on the Team is bilingual, so language will never be a barrier to meeting the needs of our participants.
Japan Haunted By Chilling History Of North Korean Kidnappings For years, Pyongyang kidnapped hundreds of people from neighboring countries, in order to train spies in foreign languages and cultures, or to steal identities. Today, their families are still looking for them. TOKYO — When Rumiko Masumoto, a 24-year-old secretary, left home the evening of August 12, 1978, she told her parents she’d only be gone a few minutes. She and her fiancé Shuichi Ichikawa were going to take photos of the sunset on Fukiage Beach in Kagoshima. It was muggy out that evening in the southern Japanese town, but the sky was clear. She would come back quickly so she could spend time with her brother Teruaki, a student at Hokkaido University who was home for a few days on school holiday. "That was the last time we saw her," the brother recalled recently. As night fell, there was no sign of her. Her family started to worry immediately. "That’s not like her. We all instantly knew that something had happened," recounts Teruaki Masumoto, now 61 years old. Ichikawa’s parents were also distressed that their son had not returned. They looked around town and at the beach, and then contacted the police. The next day, the couple’s car was found in a parking lot hidden by pine trees, near the sea. It was locked. Inside were Rumiko’s purse and camera. The camera's film showed anodyne images taken the same evening of their disappearance, but no indication of what happened. No witnesses. "The police had nothing, and people were coming up with all sorts of theories," says Masumoto. "They ran away, they drowned, or even that they were picked up by a UFO." The investigation ended, and the families grieved, but ultimately got on with their lives. Then, in 1980, a journalist called with a crazy theory. There had been a strange increase in disappearances of couples or young people along that particular Japanese border during the summer of 1978. What if they had all been kidnapped by North Korean agents? "No one could have imagined something like that," says Teruaki, who was dubious at the time. But the journalist laid out the facts. There were at least eight suspected disappearances in two months, a North Korean spy ship spotted nearby, strange radio transmissions picked up near the coast, and — most importantly — a failed kidnapping, further north but still not far from that same beach. Three days after Rumiko’s disappearance, two men speaking a strange language had seized two kids, beating and gagging them before trying to put them in two large sacks. Two passers-by had stopped the abduction. "We began to believe it," says Masumoto. Slowly, the evidence began to accumulate. "The police realized that North Korean agents were trying to enter Japan from the sea. They found radio transmitters, recorded conversations and salvaged skiffs," remembers Tsutomu Nishioka, an international relations professor who became the president of the National Association for the Rescue of Japanese Kidnapped by North Korea (NARKN). In November 1987, North Korean agent Kim Hyun Hee posed as a Japanese woman and successfully placed a bomb on a Korean Air flight, which blew up the plane midair over the Andaman Sea. When she was arrested, Kim admitted that she had acted under orders from Kim Jong-il, who at the time had been designated as the successor to his father and founder of the Stalinist North Korean regime. 
She was able to pass as a Japanese citizen after having been trained in Pyongyang by Japanese whom the regime had kidnapped for this exact purpose. Over the years, Pyongyang has essentially been collecting humans. First it was hundreds of South Koreans, often fishermen kidnapped at sea. Then came men and women of other nationalities, taken in order to train spies in foreign languages and cultures, or to steal identities and plant sleeper agents in neighboring nations. In NARKN's narrow office, near the Gokoku-ji Temple in Tokyo, Tsutomu Nishioka skims the long list of kidnap victims. In addition to Japanese, who started to disappear in 1977, there are Malaysians, Chinese, Lebanese, a Thai woman and even three French people, identities unknown. On the walls are photos of those who have disappeared, including Megumi Yokota, who was grabbed in the port city of Niigata in November 1977, in an alley between her badminton club and her house. She was 13 years old at the time, and eventually became a cause célèbre in Japan. A quarter-century after the abductions, North Korea finally admitted to them. Pressured by the international community, which had discovered the nature of his military programs, Kim Jong-il was looking to improve relations with Tokyo. In 2002, he agreed to meet Junichiro Koizumi, the Japanese Prime Minister, in Pyongyang, where Kim gave him a list of names of Japanese citizens that he said renegade groups within his government had kidnapped on their own initiative. The dictator explained that he had already punished those responsible. Out of 17 disappeared people officially listed by Tokyo, Pyongyang only acknowledged 13 kidnappings and insisted that eight of those people had already died. The other five were authorized to return to Japan one month later. With some difficulty, those five told of their kidnapping on the beach, the beatings, the transfer from a small dinghy to a large cargo ship, the arrival in Pyongyang, the long isolation and the indoctrination sessions. Not to mention 20 years of another life, in another reality. But on Oct. 16, 2002, neither Rumiko, Shuichi, nor Megumi got off the plane in front of the world's cameras. They were on the list of the deceased. Officially, Rumiko died of a heart attack in 1981, when she was only 27. Megumi allegedly committed suicide in 1994, leaving behind a young daughter, born from a union with a kidnapped South Korean. Two years later, North Korean authorities sent her parents ashes, presenting them as their daughter's remains. When Megumi was born, her parents had saved her umbilical cord, a Japanese tradition. Now, the cord was brought out for DNA testing. "The tests showed that the ashes were probably the remains of a man," Shigeru Yokota, Megumi's father, wearily explained at a conference in Tokyo. "We can therefore believe that she is likely still alive." Teruaki Masumoto harbors the same hope. He doesn't believe in the authenticity of the death certificate he received from Pyongyang, which claims his sister's remains were lost in a flood. He also found witnesses who confirm that his sister was seen in Pyongyang several years after the official report of her death. Convinced that the North Korean regime continues to hide both the truth and their children, families have continued to fight.
One Monday morning this past October, Teruaki Masumoto — now retired after years of negotiating tuna prices at the Tsukiji fish market — was at Radio Shiokaze (Sea Breeze), created 12 years ago to send messages to kidnap victims. Twice a month, he comes and reads a letter for his sister, whom he calls by her childhood nickname, as well as messages of encouragement from parents of other kidnapped people. The messages are sent via shortwave radio to the other side of the Sea of Japan, to the Korean peninsula. "We know that we're being heard, thanks to testimonials from North Koreans who escaped the country," explains Kazuhiro Araki, the president of COMJAN, a commission created to investigate more than 200 cases of suspected North Korean-linked disappearances. In addition to personal messages, the radio broadcasts information on North Korea in Korean, Japanese, Chinese and English. "Every day, we broadcast two and a half hours of programming that Pyongyang always tries to block out," says Tatsuru Murao, the Shiokaze technician. "We will continue for as long as we have to." Recently, the families and their supporters have found a new source of hope. "The rare occasions when Pyongyang releases information is during times of high geopolitical tension," explained Tsutomu Nishioka, citing George W. Bush's inclusion of Pyongyang in his 2002 Axis of Evil speech. "In the past, when the Japanese government approached North Korean authorities during a calm period with offers of humanitarian aid or cooperative projects, they wouldn't admit anything about the kidnappings." Shinzo Abe, the current conservative Japanese Prime Minister, has made the kidnappings a personal affair. He had been in Pyongyang back in 2002, seated next to Junichiro Koizumi, when Kim Jong-il brought out the list of kidnap victims. Every day since then, Abe has worn a blue ribbon on the lapel of his suit jacket in homage to the kidnapped, whose families he meets with regularly. "This is to remind myself every morning that I have to bring these Japanese people back to their country," he declared during a recent speech in Washington. With limited leverage of his own, Abe regularly urges the United States to include the kidnap victims' release in its demands to Pyongyang. When Donald Trump declared in his September speech to the United Nations that North Korea had "kidnapped a sweet 13-year-old Japanese girl from a beach in her own country," the Japanese executive branch celebrated. The U.S. president confirmed that he had met with the families of kidnap victims during his official visit to Tokyo at the beginning of November. "In this context, we have an exceptional window of opportunity. But this will be very difficult. We don't know how Kim Jong-un will react to pressure," says Tsutomu Nishioka. Teruaki Masumoto notes that 2018 will mark 40 years since he last saw his sister. "I don't have crazy hopes," he says. "Rumiko is probably dead. I just want to know the truth."
module["exports"] = [ "Adaptive", "Advanced", "Ameliorated", "Assimilated", "Automated", "Balanced", "Business-focused", "Centralized", "Cloned", "Compatible", "Configurable", "Cross-group", "Cross-platform", "Customer-focused", "Customizable", "Decentralized", "De-engineered", "Devolved", "Digitized", "Distributed", "Diverse", "Down-sized", "Enhanced", "Enterprise-wide", "Ergonomic", "Exclusive", "Expanded", "Extended", "Face to face", "Focused", "Front-line", "Fully-configurable", "Function-based", "Fundamental", "Future-proofed", "Grass-roots", "Horizontal", "Implemented", "Innovative", "Integrated", "Intuitive", "Inverse", "Managed", "Mandatory", "Monitored", "Multi-channelled", "Multi-lateral", "Multi-layered", "Multi-tiered", "Networked", "Object-based", "Open-architected", "Open-source", "Operative", "Optimized", "Optional", "Organic", "Organized", "Persevering", "Persistent", "Phased", "Polarised", "Pre-emptive", "Proactive", "Profit-focused", "Profound", "Programmable", "Progressive", "Public-key", "Quality-focused", "Reactive", "Realigned", "Re-contextualized", "Re-engineered", "Reduced", "Reverse-engineered", "Right-sized", "Robust", "Seamless", "Secured", "Self-enabling", "Sharable", "Stand-alone", "Streamlined", "Switchable", "Synchronised", "Synergistic", "Synergized", "Team-oriented", "Total", "Triple-buffered", "Universal", "Up-sized", "Upgradable", "User-centric", "User-friendly", "Versatile", "Virtual", "Visionary", "Vision-oriented" ];
Q: Has solidifying liquid propellant been considered for space exploration?

Liquid hydrogen is $0.071 g/cm^3$
Solid hydrogen is $0.086 g/cm^3$

Since denser fuel is desirable, has solid hydrogen been considered for use in space exploration? I suspect the very low temperature (hydrogen's melting point is 14 K) would be impossible to manage. Maybe hydrogen ice cubes in the tanks would help reduce boil-off?

A: The melting point of hydrogen is only about 6 K below its boiling point, so the temperature isn't necessarily that much of an obstacle. However, using solid fuel requires either melting it, or burning it in place. Trying to burn solid hydrogen would likely result in the whole thing flash-boiling from the radiated heat, so that's a bit of a non-starter. And having to melt fuel out of the tank is a serious problem. However, Wikipedia notes that there are ideas to use hydrogen slush for fuel, as that will, indeed, increase density a bit. It should also reduce boil-off as long as at least some of the tank is emptied early on; otherwise, the decreasing density of the melting slush is guaranteed to require more space or a much higher-pressure tank.
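For scale, using the densities quoted in the question, full solidification buys only $\frac{0.086 - 0.071}{0.071} \approx 21\%$ more density, and a slush — being a liquid/solid mixture — captures only part of even that, so the payoff is modest relative to the cryogenic handling it demands.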
Denim blue jeans which have been faded, "stone-washed", ice washed, or sand blasted to produce a particular appearance are very popular. However, to produce the desired effect it has been necessary to utilize processes which cause substantial deterioration or degradation of the fabric. Bleaching solutions containing chlorine or actual pelting of the garment with sand or stones to produce a fashion effect causes damage to the fabric which affects its wear life. Ozone has been used in the bleaching of cellulosic materials. U.S. Pat. No. 4,283,251 to Singh discloses the bleaching of cellulosic pulp with gaseous ozone at an acidic pH followed by an alkaline treatment. U.S. Pat. Nos. 4,214,330 and 4,200,367 to Thorsen, which are herewith incorporated by reference, describe a method and an apparatus for treatment of undyed fabrics with an ozone-steam mixture. The process is used to shrinkproof the fabric with a minimum amount of deterioration of the fabric fibers. The ozone treatment reacts with the undyed fibers and provides whiter fibers. The treatment is stated to increase subsequent dyeability and dye fastness of the garment. W. J. Thorsen et al. in their paper entitled "Vapor-Phase Ozone Treatment of Wool Garments", Textile Research Journal, Textile Research Institute, 1979, pp. 190-197, describe the treatment of wool fabrics and garments with ozone and steam to provide shrink resistance to the fabric or garment. The process is based on the reaction of the ozone with the wool fibers. It should be understood that the term "dye" as used herein is meant to include any of the materials which are used to provide a color to a fabric, such as conventional dyes, pigments, or the like. It should be understood that the term "ozone and steam" as used herein denotes a preferable method of the invention and is meant to include ozone alone or ozone diluted with inert gases.
The Netherlands, one of the biggest plug-in markets in Europe, is going to gradually decrease the incentive for all-electric cars. Recently, the Dutch government announced a planned increase of the benefit-in-kind (BiK) tax rate, pushing BEVs to the standard level of 22% by 2026.

BiK for BEVs:

2018: 4%
2019: 4% up to a maximum of €50,000 (22% for the amount above)
2020: 8% up to a maximum of €45,000 (22% for the amount above)
2021: 12% up to a maximum of €40,000 (22% for the amount above)
2022: 16% up to a maximum of €40,000 (22% for the amount above)
2023: 16% up to a maximum of €40,000 (22% for the amount above)
2024: 16% up to a maximum of €40,000 (22% for the amount above)
2025: 17% up to a maximum of €40,000 (22% for the amount above)
2026: 22%

A higher BiK rate means that the popular company car — used by employees for private purposes as well — will gradually be taxed more heavily. As we already saw in the case of premium models during the switch from 2018 to 2019, BEV sales are expected to increase significantly in the fall of 2019 and decrease in early 2020. A similar pattern will play out several more times until the incentive disappears entirely and the market stabilizes.

The planned increase for a car that costs €50,000:

BiK in 2019: €2,000
BiK in 2020: €4,700
BiK in 2021: €7,000
BiK in 2022: €8,600

The planned increase for a car that costs €40,000:

BiK in 2019: €1,600
BiK in 2020: €3,200
BiK in 2021: €4,800
BiK in 2022: €6,400

The planned increase for a car that costs €30,000:

BiK in 2019: €1,200
BiK in 2020: €2,400
BiK in 2021: €3,600
BiK in 2022: €4,800

Hat Tip to Benz!!!
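To make the schedule easy to apply, here is a small illustrative Python sketch (our own; the names BIK_SCHEDULE and bik_addition are hypothetical, and the table is simply transcribed from the rates listed above). It reproduces the worked figures, e.g. a €50,000 car gives €4,700 in 2020 and €8,600 in 2022.

# Illustrative sketch of the Dutch BiK schedule above (not an official calculator).
# year: (reduced rate, cap in euros); the amount above the cap uses the 22% rate.
BIK_SCHEDULE = {
    2019: (0.04, 50_000),
    2020: (0.08, 45_000),
    2021: (0.12, 40_000),
    2022: (0.16, 40_000),
    2023: (0.16, 40_000),
    2024: (0.16, 40_000),
    2025: (0.17, 40_000),
}
STANDARD_RATE = 0.22

def bik_addition(list_price, year):
    """Taxable income addition for a BEV company car in the given year."""
    if year <= 2018:
        return 0.04 * list_price  # flat 4%, no cap listed for 2018
    if year >= 2026:
        return STANDARD_RATE * list_price
    rate, cap = BIK_SCHEDULE[year]
    return rate * min(list_price, cap) + STANDARD_RATE * max(list_price - cap, 0)

# e.g. bik_addition(50_000, 2020) -> 4700.0, bik_addition(50_000, 2022) -> 8600.0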
Q: Add a dark background around a View

Is there a way in Android to add a dark background around a layout that covers everything except the status bar, without inheriting from Dialog? I need the same dimmed background that appears when a dialog is opened. Creating an underlay layout is not an option either, because in that case the ActionBar would not be covered.

A: "Creating an underlay layout is not an option either, because in that case the ActionBar would not be covered."

You can try replacing the ActionBar with a Toolbar. Then create a custom dialog View by subclassing FrameLayout, give it a dark background, and set its width and height attributes to match_parent; that way the whole screen, including the Toolbar, will be covered. Here is an example of how the root layout can be structured:

<FrameLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <LinearLayout
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:orientation="vertical">

        <android.support.design.widget.AppBarLayout
            android:id="@+id/app_bar"
            android:layout_width="match_parent"
            android:layout_height="wrap_content"
            app:elevation="0dp">

            <android.support.v7.widget.Toolbar
                android:id="@+id/toolbar"
                android:layout_width="match_parent"
                android:layout_height="wrap_content"
                android:minHeight="?attr/actionBarSize"
                android:theme="@style/ToolbarTheme"/>

        </android.support.design.widget.AppBarLayout>

        <FrameLayout
            android:id="@+id/content"
            android:layout_width="match_parent"
            android:layout_height="match_parent"/>

    </LinearLayout>

    <!-- The full-screen dialog overlay; give it a dark background in its implementation -->
    <com.example.DialogView
        android:id="@+id/dialog_view"
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:visibility="gone"/>

</FrameLayout>
Q: Remove Unicode characters from text files — sed or other Bash/shell methods

How do I remove Unicode characters from a bunch of text files in the terminal? I've tried this, but it didn't work:

sed 'g/\u'U+200E'//' -i *.txt

I need to remove these Unicode characters from the text files:

U+0091 — sort of weird "control" space
U+0092 — same sort of weird "control" space
U+00A0 — non-breaking space
U+200E — left-to-right mark

A: Clear all non-ASCII characters of file.txt:

$ iconv -c -f utf-8 -t ascii file.txt
$ strings file.txt

A: If you want to remove only particular characters and you have Python, you can:

CHARS=$(python -c 'print u"\u0091\u0092\u00a0\u200E".encode("utf8")')
sed 's/['"$CHARS"']//g' < /tmp/utf8_input.txt > /tmp/ascii_output.txt

A: For UTF-8 encoding of Unicode, you can use this regular expression for sed:

sed 's/\xc2\x91\|\xc2\x92\|\xc2\xa0\|\xe2\x80\x8e//g'
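A pure-Python 3 variant of the same idea (a sketch; clean.py is a hypothetical filename), using str.translate to delete exactly the four code points from the question:

# clean.py -- drop the four code points listed in the question from stdin.
import sys

DROP = dict.fromkeys(map(ord, "\u0091\u0092\u00a0\u200e"))  # map each to None
sys.stdout.write(sys.stdin.read().translate(DROP))

Usage: python3 clean.py < in.txt > out.txt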
Introduction {#S1}
============

The earliest outbreaks of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) in the United States were seeded by travelers who became infected abroad and initiated chains of community transmission. Several months later, SARS-CoV-2 is now ubiquitous. More than 96% of the 3,144 United States administrative subdivisions (i.e., counties, boroughs, and parishes) had reported at least one SARS-CoV-2 case by June 23, 2020 ^[@R1]^. Movement between administrative subdivisions and states, rather than introduction from abroad, now poses the greatest risk for seeding new clusters of community transmission. Is it still possible to interrupt the spread of SARS-CoV-2 between nearby counties once community transmission is established? Case counts from diagnostic SARS-CoV-2 testing are used to understand community transmission, but community-level testing may not be widely available and passive surveillance is unlikely to detect asymptomatic or presymptomatic infections. Viral genome sequencing has emerged as a critical tool to overcome these limitations and provides a complementary means of understanding viral transmission dynamics. The value of this approach was demonstrated during the West African Ebolavirus outbreak in 2014--2016 and again during the emergence of Zika virus in the Americas in 2015--2016 ^[@R2],[@R3]^. The collective global effort to sequence SARS-CoV-2 dwarfs these earlier efforts. As of 28 June 2020, more than 55,000 SARS-CoV-2 genomes collected from over 82 countries have been sequenced and shared publicly on repositories like the Global Initiative on Sharing All Influenza Data (GISAID), enabling real-time phylogenetic analyses encompassing global SARS-CoV-2 diversity ^[@R4]^. Patterns of viral sequence variation can also be used to estimate epidemiological parameters, including the total number of infections in a given population and epidemic doubling time, independent of case counts ^[@R4]--[@R14]^. Here we apply these methods to gain a nuanced view of SARS-CoV-2 transmission within and between regions of the American Upper Midwest.

Dane and Milwaukee Counties are the two most populous counties in the US state of Wisconsin. They are separated by approximately 100 kilometers of rural and suburban communities in Jefferson and Waukesha Counties. An interstate highway that typically carries \~40,000 vehicles a day connects all four of these counties ^[@R15]^. Madison and Milwaukee are the largest cities in Wisconsin as well as in Dane and Milwaukee Counties, respectively, and are demographically dissimilar ^[@R16],[@R17]^. On 25 March 2020, the Wisconsin Department of Health Services ordered most individuals to stay at home, closed non-essential businesses, and prohibited most gatherings, an order termed "Safer at Home" ^[@R18]--[@R20]^. While some policies to reduce viral spread were enacted prior to this order ^[@R21]^, the "Safer at Home" order represented the most significant restriction on individuals and businesses. This Executive Order remained in effect until 13 May 2020, when it was struck down by the Wisconsin Supreme Court. From the start of the Executive Order through 21 April 2020, Dane and Milwaukee Counties had the highest documented number of SARS-CoV-2 cases in Wisconsin. Therefore, these two counties provide a "natural experiment" to understand the impact of the "Safer at Home" Executive Order on within- and between-county SARS-CoV-2 transmission in two nearby US counties with distinguishing demographic features.
Our analyses indicate that the Dane and Milwaukee County SARS-CoV-2 outbreaks were seeded by different numbers of introductions and subsequently defined by distinct patterns of viral spread. Despite growing cumulative case counts in both counties, virus transmission clusters remained largely localized within individual counties, with evidence of little mixing across counties. Moreover, we find that the virus's basic reproductive number decreased in both counties during the time in which the "Safer at Home" order was in place, consistent with adoption of physical distancing, use of face coverings, and other related practices ^[@R22]^.

Results {#S2}
=======

SARS-CoV-2 epidemics and community demographics in Dane and Milwaukee Counties {#S3}
------------------------------------------------------------------------------

Dane County and Milwaukee County are both located in Southern Wisconsin. Milwaukee County is 127 km east of Dane County, measured from center to center. As of 2015, Dane County had a population of 516,850 at a density of 166 people per km^2^, compared to 952,150 at 1,522 per km^2^ for Milwaukee County ([Fig 1A](#F1){ref-type="fig"}) ^[@R16],[@R17]^. The majority of individuals living in Dane County are White (81.5%). The next largest group identifies as Hispanic or Latinx (6.3%), followed by Asian (6.0%), Black (5.9%), and American Indian (0.3%) ^[@R17]^. Milwaukee County is less predominantly White (53.3%), with much larger Black (27.2%) and Hispanic or Latinx (14.5%) populations, followed by Asian (4.3%) and American Indian (0.7%) ^[@R16]^. The percent of individuals ≥65 years old is similar in Dane County (13.7%) and Milwaukee County (13.6%), while the percent of individuals under 18 years is slightly lower in Dane County (20.4%) than Milwaukee County (24%). In addition, median income and access to healthcare resources are lower in Milwaukee County than in Dane County ^[@R23]^. The median individual in Milwaukee County is also more likely to experience poverty and to live with comorbidities such as type II diabetes, hypertension, and obesity ([Table 1](#T1){ref-type="table"}) ^[@R23]^.

Dane County is home to the 12th reported SARS-CoV-2 case in the United States, detected on 30 January 2020. Subsequent cases were not reported until 9 March 2020. By 26 April, Dane County had 405 confirmed SARS-CoV-2 cases and 19 deaths ^[@R24]^. Milwaukee County reported its first case on 11 March 2020. By 26 April, Milwaukee County had reported 2,629 confirmed SARS-CoV-2 infections and 126 deaths ^[@R25]^ ([Fig 1B](#F1){ref-type="fig"}). Sequences for this study were derived from 247 nasopharyngeal (NP) swab samples collected in Dane County from 14 March 2020 through 18 April 2020 and in Milwaukee County from 12 March 2020 through 26 April 2020. Additional sample metadata are available in [Supplemental Information 1](#SD1){ref-type="supplementary-material"}.

Dane and Milwaukee County viruses are genetically distinct {#S4}
----------------------------------------------------------

If an outbreak is fueled by community spread following a single introduction, one would expect viral genomes to be close phylogenetic relatives, in which case genetic distances measured in any pairwise comparisons of sequences would be low.
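To make this metric concrete, here is a minimal sketch of pairwise SNV distance over aligned consensus genomes (the helper names and the `sequences` input are hypothetical illustrations, not the study's pipeline):

from itertools import combinations

def snv_distance(a, b):
    """Count sites at which two aligned sequences differ (Ns/gaps ignored)."""
    return sum(1 for x, y in zip(a, b)
               if x != y and x in "ACGT" and y in "ACGT")

def mean_pairwise_distance(sequences):
    """Mean SNV distance over all pairs; `sequences` maps name -> sequence."""
    pairs = list(combinations(sequences.values(), 2))
    return sum(snv_distance(a, b) for a, b in pairs) / len(pairs)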
To examine this, we generated SARS-CoV-2 consensus sequences using the ARTIC Network protocol ^[@R26],[@R27]^ and defined the population of consensus single-nucleotide variants (SNVs) relative to the initial SARS-CoV-2 Wuhan reference (Genbank: [MN908947.3](MN908947.3)). In Dane County, we identified 155 distinct SNVs across 122 samples evaluated. These SNVs are evenly distributed throughout the genome, and 92.9% (144/155) are located in open reading frames (ORFs). In Dane County, 52.9% (82/155) of consensus SNVs result in an amino acid change (nonsynonymous) and 40% (62/155) do not (synonymous) ([Fig 2A](#F2){ref-type="fig"}). In Milwaukee County, we identified 148 distinct SNVs across 125 samples evaluated. Among the observed consensus SNVs in Milwaukee County, 63.5% (94/148) are nonsynonymous and 31.8% (47/148) are synonymous ([Fig 2B](#F2){ref-type="fig"}). Mean inter-sequence pairwise SNV distance was 7.65 (std 1.83) and 5.02 (std 3.63) among Dane County and Milwaukee County sequences, respectively ([Fig 2C](#F2){ref-type="fig"}). Likewise, we detected an average of 4.4 new SNVs per day (sampling period of 35 days) in Dane County and 3.6 new SNVs per day (sampling period of 41 days) in Milwaukee County. Previous reports suggested SARS-CoV-2 is expected to acquire approximately one fixed SNV every fifteen days following a single introduction ^[@R28]^. Compared to this benchmark, both Dane County and Milwaukee County have "excess" diversity which can be most parsimoniously explained by multiple introductions of divergent viruses. These patterns are consistent with a greater number of introductions of distinct viruses into Dane County compared to Milwaukee County. To further analyze genetic differences among viruses in the two locations, we assigned clades using the Nextstrain nomenclature. For example, clade 19B is defined by two mutations at nucleotides 8,782 (ORF1ab S2839S) and 28,144 (Spike L84S) relative to a reference SARS-CoV-2 isolate from Wuhan, China (Genbank: [MN908947.3](MN908947.3)). The majority of Dane County sequences, 51.6% (n = 63 sequences), cluster in the 20A clade ([Fig 3A](#F3){ref-type="fig"}). This clade is defined by four variants, at nucleotide positions 241 (upstream of the first open reading frame), 3,037 (ORF1a F924F), 14,408 (ORF1b P314L), and 23,403 (S D614G). A minority (n = 31 sequences; 24.8%) of Milwaukee County sequences also cluster in this clade. In contrast, the 19A clade is most common (n = 75 sequences; 60.0%) in sequences from Milwaukee County. This clade is distinguished by a U-to-C variant at nucleotide position 29,711 (downstream of ORF10) ([Fig 3B](#F3){ref-type="fig"}). No onward spread from Dane County index case {#S5} -------------------------------------------- The first known SARS-CoV-2 case in Wisconsin was a person who was likely infected during travel to Wuhan, Hubei province, China, where they were exposed to family members with confirmed SARS-CoV-2 infections. The patient reported a sore throat shortly before departing China and returning to the US on 30 January 2020. The person wore a mask during the return flight. Upon arrival in the US, the person immediately presented to an emergency department while still wearing a mask. The person was afebrile and had no respiratory or gastrointestinal signs or symptoms, but began to develop mild respiratory symptoms shortly thereafter. The person's condition remained stable and never required hospitalization or advanced care, with symptoms resolving five days later. 
The person self-quarantined in an isolated room in a home with a dedicated bathroom for 30 days following symptom onset. During this time, nasopharyngeal samples repeatedly tested positive for SARS-CoV-2 viral RNA. Documentation of asymptomatic infections of SARS-CoV-2 has led to concerns about the role of cryptic community transmission in the United States ^[@R7],[@R29],[@R30]^. However, sequencing in other locations in the United States has revealed that early introduction events did not always go on to seed downstream community spread ^[@R31]^. To determine whether SARS-CoV-2 cases detected in Dane County in March might have been due to undetected spread from the first Wisconsin introduction, we compared the sequence of this early case with local and global SARS-CoV-2 sequence diversity. The first Dane County patient's virus contains an in-frame deletion at nucleotide positions 20,298 -- 20,300, in a region that codes for the poly(U)-specific endoribonuclease; the impact of this mutation on viral fitness is unknown ^[@R32]^ ([Supplemental Fig 1](#SD2){ref-type="supplementary-material"}). Notably, this deletion was not detected in any other Dane County sequence, nor in any other sample(s) submitted to GISAID as of 18 April 2020. Moreover, there are no branches originating from the index Dane County case on either the global (Wisconsin sequences plus a subsampled set of global sequences) or local phylogenies (Wisconsin sequences only, maximum likelihood) ([Fig 2C](#F2){ref-type="fig"}, [Fig 3A](#F3){ref-type="fig"}). Thus, this early case appears to be an example of successful infection control practices.

SARS-CoV-2 outbreak dynamics differ between Milwaukee and Dane Counties {#S6}
-----------------------------------------------------------------------

The independent local phylogenies in Dane and Milwaukee Counties suggested that these two nearby locations had largely distinct SARS-CoV-2 epidemics through April 2020. To better understand the number of introductions and continued transmission dynamics, we generated a time-resolved, sub-sampled global phylogeny incorporating Dane County (red tips) and Milwaukee County (blue tips) sequences alongside representative global SARS-CoV-2 sequences, including all other available Wisconsin sequences (purple tips) ([Fig 4A](#F4){ref-type="fig"}). Dane County viruses are distributed throughout the tree, consistent with multiple unique introductions. In contrast, Milwaukee County viruses cluster more closely together, consistent with fewer introductions and subsequent community transmission. To estimate the number of introductions into the state and subsequently each county, we used an ancestral state reconstruction of internal nodes. We performed 100 bootstrap replicates to account for uncertainty in the phylogenetic inference. This yielded an estimate of 59 \[59, 63\] (median \[95% highest density interval (HDI)\]) independent introductions of SARS-CoV-2 into the state of Wisconsin. Of these, 29 \[28, 31\] led to introductions into Dane County, whereas only 21 \[19, 21\] led to introductions into Milwaukee County ([Fig 4B](#F4){ref-type="fig"}). Surprisingly, only 9 \[6, 10\] of the introductions into Wisconsin were associated with sequences from both counties. Furthermore, these shared introductions accounted for only 20--30% of the samples from Dane and Milwaukee County present in our dataset. Together, our analyses suggest that transmission between Dane and Milwaukee Counties has not been a principal component of viral spread within either region.
We find that local transmission in Milwaukee County began earlier, with an introduction event in late January/early February leading to a large number of the Milwaukee County sequences ([Fig 4C](#F4){ref-type="fig"}). In comparison, most samples collected from Dane County are associated with multiple introductions in late February/early March ([Fig 4C](#F4){ref-type="fig"}). Despite the fact that there were more introductions into Dane County, the reported number of cases was considerably lower than in Milwaukee County. This indicates that each introduction into Dane County contributed less to onward viral transmission than in Milwaukee County. To account for sampling bias in our estimates, we randomly sampled sequences from our set of Dane and Milwaukee County samples (N = 20--240, increments of 20) and pruned all other Dane and Milwaukee samples from the maximum likelihood tree. This was repeated 10 times for each N, creating a set of 120 trees. We repeated the ancestral state reconstruction on each of these trees and re-estimated the number of introductions ([Supplemental Fig 2](#SD2){ref-type="supplementary-material"}). The number of estimated introductions into Dane County continued to increase with the number of sampled sequences, indicating that these data may be undersampling the true circulating viral lineages. In contrast, the number of estimated introductions into Milwaukee County decreases more slowly than in Dane County, consistent with a small number of introductions. However, we cannot rule out that the small number of introductions in Milwaukee County is an artifact of biased sampling, in which the available sequences represent only a portion of the transmission chains rather than a true estimation of the total circulating viral population. Because of this, the true number of introductions is likely to change as more sequences become available in each county.

Spread of SARS-CoV-2 was reduced following Wisconsin's "Safer at Home" Order {#S7}
----------------------------------------------------------------------------

We next used viral sequence data to assess the impact of Wisconsin's "Safer at Home" order on the basic reproduction number (R~0~). Given the role of superspreading dynamics in SARS-CoV-2 epidemics ^[@R9],[@R33],[@R34]^, we evaluated the impact on R~0~ for the Dane County and Milwaukee County epidemics in low, medium, and high transmission heterogeneity scenarios, where the level of transmission heterogeneity reflects the role of superspreading events (i.e., high transmission heterogeneity reflects many superspreading events). In both counties, under all three scenarios, R~0~ fell by at least 40% after 25 March, indicating that the sequencing data support the observed decline in reported cases. In Dane County, estimated median R~0~ was reduced by 40% \[4, 74\], 49% \[13, 79\], and 60% \[30, 83\] under low, medium, and high transmission heterogeneity, respectively. Similarly, in Milwaukee County, estimated median R~0~ was reduced by 68% \[50, 83\], 71% \[56, 86\], and 72% \[60, 84\] under low, medium, and high transmission heterogeneity, respectively. In Dane County, estimated cumulative incidence was best predicted by the medium transmission heterogeneity model, based on alignment with reported incidence ([Fig 5A](#F5){ref-type="fig"}), whereas Milwaukee County's cumulative incidence was best predicted by the high transmission heterogeneity model ([Fig 5B](#F5){ref-type="fig"}).
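To make "transmission heterogeneity" concrete, here is a toy branching-process sketch (ours, not the PhyDyn model used in the study): offspring counts are drawn from a negative binomial with mean R~0~ and dispersion k, where smaller k concentrates transmission in fewer, larger superspreading events.

import numpy as np

def outbreak_size(r0, k, n_intro=1, max_generations=20, seed=0):
    """Total infections from a toy branching process with NB offspring."""
    rng = np.random.default_rng(seed)
    active = total = n_intro
    for _ in range(max_generations):
        if active == 0:
            break
        # numpy parameterization: negative_binomial(n, p) with n=k and
        # p=k/(k+R0) has mean R0; small k gives heavy-tailed offspring counts.
        active = int(rng.negative_binomial(k, k / (k + r0), size=active).sum())
        total += active
    return total

# Compare low vs high heterogeneity at the same R0:
# outbreak_size(2.5, k=10.0) vs outbreak_size(2.5, k=0.1)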
A greater role for superspreading events in Milwaukee versus Dane County could be explained by higher population density, higher poverty rates, and worse healthcare access ([Table 1](#T1){ref-type="table"}), all of which may increase contact rates and impede physical distancing efforts ^[@R34]--[@R38]^. Assuming moderate transmission heterogeneity in Dane County, estimated R~0~ prior to 25 March was 2.24 \[1.86, 2.65\] and the median estimated cumulative incidence at the end of the study period (26 April) was 4,546 infections \[1,187, 23,709\] compared to 405 positive tests. In contrast, assuming high transmission heterogeneity in Milwaukee County, estimated R~0~ prior to 25 March was 2.82 \[2.48, 3.20\] and the median cumulative incidence on 26 April was only 3,008 infections \[1,483, 7,508\] compared to 2,629 positive tests. With passive SARS-CoV-2 surveillance efforts in both counties likely missing subclinical and asymptomatic SARS-CoV-2 infections, we expect the true cumulative incidence to be considerably greater than the reported incidence, as has been suggested by others ^[@R39]^. Indeed, estimated cases were \~10x higher than reported cases in Dane County. Given that there were no substantial differences in the surveillance efforts between counties, we expected more than the 1.1-fold difference in estimated and reported cases in Milwaukee County. Nearly equivalent estimated and reported cumulative incidence in Milwaukee County could be explained by better detection rates, inaccurate model parameters, and/or biased sampling. With better detection rates, a greater proportion of actual infections would be reported, but given the similar surveillance efforts between counties we expect detection rates to be comparable. Another possible explanation we cannot rule out is that different model parameters are required to more accurately model Milwaukee County's epidemic. Our testing of three superspreading scenarios demonstrated that the superspreading parameters, at least, may be county-specific. In the case of biased sampling, where the available sequences only represent a portion of transmission chains in the county, our model would only estimate the caseload resulting from a subset of transmission chains in Milwaukee County and would underestimate the county-wide caseload. In support of representative county-wide sampling in Dane, but not Milwaukee County, sequences from 26.4% (107/405) of test-positive cases in Dane County, but only 3.9% (117/3008) of test-positive cases in Milwaukee County were available for phylodynamic modelling ^[@R24],[@R25]^. Discussion {#S8} ========== Dane County, Wisconsin had one of the earliest detected cases of SARS-CoV-2 infection in the United States, but this infection did not spark community spread. This is probably due to a combination of good infection control practices by healthcare providers, the patient, and sheer luck. Since March 2020 we find evidence for extensive introductions of SARS-CoV-2 into Dane County, none of which led to large-scale transmission clusters by the end of April 2020. As of 24 June 2020, Dane County has had a cumulative prevalence of 233 cases per 100,000 residents. In contrast, Milwaukee County, a larger and more densely populated region \~100km away, has had 1,105 cases per 100,000 residents as of 24 June 2020 ^[@R40]^. Our findings suggest that Milwaukee County's higher caseload stems from greater levels of community spread descendant from fewer introduction points than in Dane County. 
Strikingly, we see little evidence for mixing of virus populations between these two closely-linked communities in the same US state. We used patterns of SARS-CoV-2 diversification in a phylodynamic model to estimate the initial reproductive rate of infections in each county before official social distancing policies were enacted. In this initial phase of the outbreak, the median estimated R~0~ trended lower in Dane County than in Milwaukee County (2.24 vs 2.82). Higher population density in Milwaukee County could have contributed to a higher reproductive rate. A potential additional explanation for greater community spread in Milwaukee County is that the average individual in Milwaukee County, compared to Dane County, has access to fewer financial and healthcare resources and is more likely to experience poverty and to live with comorbid conditions, many of which are risk factors both for testing positive for SARS-CoV-2 and for severe Coronavirus Disease (COVID-19) ^[@R16],[@R17],[@R41],[@R42]^. Additionally, Milwaukee County is home to a higher proportion of Black and Hispanic or Latinx individuals compared to Dane County. Because of race-based discrimination, people belonging to these groups experience worse health outcomes than White individuals, despite being treated in the same healthcare systems ^[@R16],[@R17],[@R43],[@R44]^. The social vulnerability index (SVI) is a metric, ranging from zero to one, designed to measure how resilient a community is when confronted with external stressors like natural disasters or a pandemic ^[@R45]^. A higher SVI indicates a community is more vulnerable to worsened outcomes when such a stressor occurs. All of the factors mentioned above contribute to a higher SVI in Milwaukee County (0.8268) compared to Dane County (0.1974) ^[@R45]^. While the association between the overall SVI and SARS-CoV-2 incidence is not significant according to a recent study, the SVI sub-components of socioeconomic and minority status are both predictors of higher SARS-CoV-2 incidence and case fatality rates ^[@R46]^. These sub-components are likely to be among the main drivers of the differing outbreak dynamics between Dane and Milwaukee Counties.

Like most US states, in late March 2020 Wisconsin enacted a set of social distancing policies aimed at reducing the spread of SARS-CoV-2. Wisconsin's order, termed "Safer at Home," was enacted 25 March 2020. After this time point, the estimated R~0~ was reduced by 40% or more in both counties. The sequencing data are consistent with the observed reduction in positive tests, as existing clusters expanded more slowly and new clusters arose more slowly. Throughout this time, we find that the Dane County and Milwaukee County outbreaks were largely independent of one another. Our data reveal only limited mixing of SARS-CoV-2 genotypes between these geographically-linked communities, supporting the notion that public health policies emphasizing physical distancing effectively reduce transmission between communities. Notably, "Safer at Home" ended abruptly 13 May 2020, when it was overturned by the Wisconsin Supreme Court. Additional sequencing and epidemiological data will be necessary to understand whether virus intermingling between these counties increased after the cessation of the Executive Order. Viral determinants could also affect differential transmission patterns within and between Dane and Milwaukee Counties.
If variants with greater transmission potential exist, then early introductions of such a variant into a community could contribute to greater spread there. Recent reports have suggested that a point mutation in the SARS-CoV-2 spike protein, encoding an aspartate-to-glycine substitution at amino acid residue 614 (D614G), may enhance transmissibility. This mutation confers increased infectivity of pseudotyped murine retroviruses in ACE2-expressing HEK293T cells ^[@R47]^ and has been proposed to be increasing in global prevalence, perhaps under natural selection ^[@R48]^. Importantly, however, the rise in D614G frequency could also be due to founder effects, as viruses bearing the glycine allele may have been the first to establish local transmission in Europe. D614G is one of the mutations defining the 20A clade; these viruses remain dominant in Europe ^[@R31]^, so introductions from Europe into the United States, including into Dane County, predominantly carry D614G. In comparison, in Milwaukee County, the vast majority of viruses have an aspartic acid residue at this site despite much higher levels of community transmission. This observation does not necessarily indicate that D614G does not impact viral transmissibility; its role may be muted by other determinants of transmission, including demographic and socioeconomic factors.

There are some important caveats to this study. Of the total reported positives in each county during the study period, high-quality sequences were available for 27% of test-positive cases in Dane County, but only 5% of test-positive cases in Milwaukee County ^[@R24],[@R25]^. Despite the deep sampling of SARS-CoV-2 sequences in Wisconsin relative to other regions in the US, even greater targeted sequencing efforts may be required to fully capture the sequence heterogeneity conferred by multiple introduction events and variable superspreading dynamics. It is possible that additional sequencing in Milwaukee County would uncover additional viral lineages, or that the 5% of cases we sequenced do not fully represent the diversity of viruses found throughout the county, skewing our observations. However, in analyzing sample metadata we find no evidence that particular locations within Milwaukee County were over- or under-sampled relative to their known SARS-CoV-2 prevalence. Another potential explanation is that Milwaukee County was under-testing its epidemic. Throughout the period analyzed here, the percentage of SARS-CoV-2 tests returning positive in Milwaukee County was \~20%, compared to only \~5% in Dane County ^[@R24],[@R25]^. As we are only able to sequence test-positive samples, under-testing in Milwaukee County may have limited our ability to capture a complete representation of its epidemic. Through increased testing and continued sequencing efforts, it is likely that we will be able to more fully understand the Milwaukee County outbreak. It is also possible that other sequences from these counties relevant to our analyses were collected by other groups. As of 21 June 2020, there were 477 Wisconsin sequences available, but only 351 of these had geolocation information resolved to the county level. Some of the remaining 126 sequences likely originated from Dane County or Milwaukee County, but we cannot include these sequences in our analysis given that their geolocation data resolve only to the state level. Currently there is no clearly stated national-level guidance for metadata to be associated with pathogen sequences.
Dates and geographic locations with greater than state-level resolution are required to track the emergence and spread of novel pathogens like SARS-CoV-2. Explicit regulatory guidance from the United States enabling the disclosure of sequencing data with county-level geolocation data and sampling dates would allow other institutions to harmonize reporting of viral sequences and improve subsequent studies comparing viral sequences from different locations. Such reporting may be especially important for identifying disparities in viral transmission due to socioeconomic vulnerabilities in specific counties that would otherwise be masked using state-level data reporting.

Here we provide the first insights into the emergence and spread of SARS-CoV-2 in Southern Wisconsin. We show an early introduction of SARS-CoV-2 that did not go on to seed downstream community spread. European lineages account for multiple later introductions in Dane County, but we find little evidence for large-scale community spread stemming from any single introduction. Conversely, SARS-CoV-2 lineages from Asia account for relatively fewer unique introductions into Milwaukee County and are followed by increased community spread. We show strong evidence for a reduction in case counts in both Dane and Milwaukee Counties following implementation of Wisconsin's state-wide "Safer at Home" order, emphasizing the ongoing importance of physical distancing and limiting large gatherings, especially in spaces with limited airflow ^[@R49]^. The factors contributing to greater community transmission in Milwaukee County and extinction of infection clusters within Dane County remain unclear, but regional demographics likely play a critical role in these differences. To this end, continued efforts to sequence SARS-CoV-2 viruses across multiple spatio-temporal scales remain critical for tracking viral transmission dynamics within and between communities and for guiding "precision medicine" public health interventions to suppress future SARS-CoV-2 outbreaks.

Methods {#S9}
=======

Sample approvals and sample selection criteria {#S10}
----------------------------------------------

Work with residual diagnostic specimens was performed at biosafety level-3 containment at the AIDS Vaccine Research Laboratory at the University of Wisconsin -- Madison. Waiver of HIPAA Authorization and approval to obtain the clinical samples along with a Limited Data Set was provided by the Western Institutional Review Board (WIRB \#1--1290953-1).

County-level case data and demographics {#S11}
---------------------------------------

The county-level map of Wisconsin was obtained from the State Cartographer's Office (<https://www.sco.wisc.edu/maps/wisconsin-outline/>). Wisconsin county-level COVID-19 cumulative case data were obtained from the Wisconsin Department of Health Services COVID-19 dashboard (<https://data.dhsgis.wi.gov/datasets/covid-19-historical-data-table/>, <https://cityofmadison.maps.arcgis.com/apps/opsdashboard/index.html#/e22f5ba4f1f94e0bb0b9529dc82db6a3>, and <https://county.milwaukee.gov/EN/COVID-19>). All Dane and Milwaukee County demographic data came from the Wisconsin Department of Health Services Data & Statistics (<https://www.dhs.wisconsin.gov/stats>) or the U.S. Census Bureau QuickFacts table (<https://www.census.gov/quickfacts/fact/table/>).

vRNA isolation {#S12}
--------------

Nasopharyngeal swabs received in viral transport medium (VTM) were briefly centrifuged at 14,000 r.p.m.
for 30 seconds at room temperature to ensure that any residual sample sediments at the bottom of the tube. Viral RNA (vRNA) was extracted from 100 μL of VTM using the Viral Total Nucleic Acid Purification kit (Promega, Madison, WI, USA) on a Maxwell RSC 48 instrument and was eluted in 50 μL of nuclease-free H~2~O.

vRNA isolation for index Dane County sample {#S13}
-------------------------------------------

Approximately 140 μL of VTM was passed through a 0.22 μm filter (Dot Scientific, Burton, MI, USA). Total nucleic acid was extracted using the Qiagen QIAamp Viral RNA Mini Kit (Qiagen, Hilden, Germany), substituting carrier RNA with linear polyacrylamide (Invitrogen, Carlsbad, CA, USA) and eluting in 30 μL of nuclease-free H~2~O. The sample was treated with TURBO DNase (Thermo Fisher Scientific, Waltham, MA, USA) at 37°C for 30 min and concentrated to 8 μL using the RNA Clean & Concentrator-5 kit (Zymo Research, Irvine, CA, USA). The full protocol for nucleic acid extraction and subsequent cDNA generation is available at <https://www.protocols.io/view/sequence-independent-single-primer-amplification-o-bckxiuxn>.

Complementary DNA (cDNA) generation {#S14}
-----------------------------------

Complementary DNA (cDNA) was synthesized using a modified ARTIC Network approach ^[@R26],[@R27]^. Briefly, vRNA was reverse transcribed with SuperScript IV Reverse Transcriptase (Invitrogen, Carlsbad, CA, USA) using random hexamers and dNTPs. Reaction conditions were as follows: 1 μL of random hexamers and 1 μL of dNTPs were added to 11 μL of sample RNA, heated to 65°C for 5 minutes, then cooled to 4°C for 1 minute. Then 7 μL of a master mix (4 μL 5x RT buffer, 1 μL 0.1 M DTT, 1 μL RNaseOUT RNase Inhibitor, and 1 μL SSIV RT) was added and incubated at 42°C for 10 minutes, 70°C for 10 minutes, and then 4°C for 1 minute.

Complementary DNA (cDNA) generation for index Dane County sample {#S15}
----------------------------------------------------------------

Complementary DNA (cDNA) was synthesized using a modified Sequence Independent Single Primer Amplification (SISPA) approach described by Kafetzopoulou et al. ^[@R50],[@R51]^. RNA was reverse-transcribed with SuperScript IV Reverse Transcriptase (Invitrogen, Carlsbad, CA, USA) using Primer A (5'-GTT TCC CAC TGG AGG ATA-(N~9~)-3'). Reaction conditions were as follows: 1 μL of Primer A was added to 4 μL of sample RNA, heated to 65°C for 5 minutes, then cooled to 4°C for 5 minutes. Then 5 μL of a master mix (2 μL 5x RT buffer, 1 μL 10 mM dNTP, 1 μL nuclease-free H~2~O, 0.5 μL 0.1 M DTT, and 0.5 μL SSIV RT) was added and incubated at 42°C for 10 minutes. For generation of second-strand cDNA, 5 μL of Sequenase reaction mix (1 μL 5x Sequenase reaction buffer, 3.85 μL nuclease-free H~2~O, 0.15 μL Sequenase enzyme) was added to the reaction mix and incubated at 37°C for 8 minutes. This was followed by the addition of a secondary Sequenase reaction mix (0.45 μL Sequenase Dilution Buffer, 0.15 μL Sequenase Enzyme), and another incubation at 37°C for 8 minutes. Following incubation, 1 μL of RNase H (New England BioLabs, Ipswich, MA, USA) was added to the reaction and incubated at 37°C for 20 min. Conditions for amplifying Primer A-labeled cDNA were as follows: 5 μL of Primer A-labeled cDNA was added to 45 μL of AccuTaq master mix per sample (5 μL AccuTaq LA 10x Buffer, 2.5 μL dNTP mix, 1 μL DMSO, 0.5 μL AccuTaq LA DNA Polymerase, 35 μL nuclease-free water, and 1 μL Primer B (5′-GTT TCC CAC TGG AGG ATA-3′)).
Reaction conditions for the PCR were: 98°C for 30 s, 30 cycles of 94°C for 15 s, 50°C for 20 s, and 68°C for 2 min, followed by 68°C for 10 min.

Multiplex PCR to generate SARS-CoV-2 genomes {#S16}
--------------------------------------------

A SARS-CoV-2-specific multiplex PCR for Nanopore sequencing was performed, similar to amplicon-based approaches previously described ^[@R26],[@R27]^. In short, primers for 96 overlapping amplicons spanning the entire genome, each approximately 500 bp long and overlapping neighboring amplicons by 75 to 100 bp, were used to amplify the cDNA. cDNA (2.5 μL) was amplified in two multiplexed PCR reactions using Q5 Hot-Start DNA High-fidelity Polymerase (New England Biolabs, Ipswich, MA, USA) using the conditions previously described ^[@R26],[@R27]^. Samples were amplified through 25 cycles of PCR, and the resulting multiplex reactions for each sample were pooled together before ONT library prep.

Library preparation and sequencing {#S17}
----------------------------------

Amplified PCR product was purified using a 1:1 concentration of AMPure XP beads (Beckman Coulter, Brea, CA, USA) and eluted in 30 μL of water. PCR products were quantified using the Qubit dsDNA high-sensitivity kit (Invitrogen, USA) and were diluted to a final concentration of 1 ng/μL. A total of 5 ng for each sample was then made compatible for deep sequencing using the one-pot native ligation protocol with Oxford Nanopore kit SQK-LSK109 and its Native Barcodes (EXP-NBD104 and EXP-NBD114) ^[@R27]^. Specifically, samples were end repaired using the NEBNext Ultra II End Repair/dA-Tailing Module (New England Biolabs, Ipswich, MA, USA). Samples were then barcoded using 2.5 μL of ONT Native Barcodes and the Ultra II End Repair Module. After barcoding, samples were pooled directly into a 1:1 concentration of AMPure XP beads (Beckman Coulter, Brea, CA, USA) and eluted in 30 μL of water. Samples were then tagged with ONT sequencing adaptors according to the modified one-pot ligation protocol ^[@R27]^. Up to 24 samples were pooled prior to being run on the appropriate flow cell (FLO-MIN106) using the 72 hr run script.

Processing raw ONT data {#S18}
-----------------------

Sequencing data were processed using the ARTIC bioinformatics pipeline (<https://github.com/artic-network/artic-ncov2019>), with a few modifications. Briefly, we modified the ARTIC pipeline so that it demultiplexes raw fastq files using qcat as each fastq file is generated by the GridION (<https://github.com/nanoporetech/qcat>). Once a barcode reaches 100k reads, it triggers the rest of the ARTIC bioinformatics workflow, which maps reads to the severe acute respiratory syndrome coronavirus 2 isolate from Wuhan, Hubei Province, China (Genbank: [MN908947.3](MN908947.3)) using minimap2. This alignment is then used to generate consensus sequences and variant calls using medaka (<https://github.com/nanoporetech/medaka>). The entire ONT analysis pipeline is available at <https://github.com/gagekmoreno/SARS-CoV-2-in-Southern-Wisconsin>.

Phylogenetic analysis {#S19}
---------------------

All 247 available full-length sequences from Dane and Milwaukee Counties through 26 April 2020 were used for phylogenetic analysis using the tools implemented in Nextstrain custom builds (<https://github.com/nextstrain/ncov>) ^[@R4],[@R52]^. Time-resolved and divergence phylogenetic trees were built using the standard Nextstrain tools and scripts ^[@R4],[@R52]^. We used custom Python scripts to filter and clean metadata.
An additional subsampled global phylogeny was built using all available sequences in GISAID as of 21 June 2020 and input into the Nextstrain pipeline. A custom 'Wisconsin' profile was made to create a Wisconsin-centric subsampled build to include representative sequences. We defined representative sequences as 20 sequences from each US state, and 30 sequences from each country, per month per year. This subsampled global build includes 5,378 sequences, or roughly 11% of the total sequences in GISAID as of 21 June 2020. We also ensured that the nearest phylogenetic neighbors of every Dane and Milwaukee County sequence are included, increasing the total to 5,417 sequences. All available Wisconsin sequences available on GISAID by 21 June 2020 were incorporated. An additional 20 sequences from each US state, and 30 sequences from each country, per month per year, were added. All of the Wisconsin sequences included in this study are listed in the include.txt to ensure they were represented in the global phylogeny. The scripts and output are available at <https://github.com/gagekmoreno/SARS-CoV-2-in-Southern-Wisconsin>.

Estimating the number of introductions {#S20}
--------------------------------------

To estimate the number of unique introductions into Dane and Milwaukee County, we first identified the closest cophenetic match of each Dane and Milwaukee County sequence in the global SARS-CoV-2 phylogenetic trees generated by Dr. Rob Lanfear at the Australian National University. These trees are generated using MAFFT ^[@R53]^ and FastTree ^[@R54]^, and are available at <https://github.com/roblanf/sarscov2phylo/>. If the closest neighbor had an ambiguous date, the next closest was chosen. Any sequences which were not already in the down-sampled alignment described above were added using MAFFT. IQ-TREE ^[@R55]^ was then run with 1000 ultrafast bootstrap replicates ^[@R56]^ using the flags -nt 4 -ninit 10 -me 0.05 -bb 1000 -wbtl -czb. The clock rate of the maximum likelihood tree was estimated using TreeTime ^[@R52]^, after first pruning tips which failed the clock filter (n_iqd = 4).

The number of introductions into each region was estimated using the maximum likelihood tree as well as 100 of the bootstrap replicate trees. For each, we first generated a time-aligned tree with TreeTime with the flags infer_gtr=True max_iter=2 branch_length_mode='auto' resolve_polytomies=False time_marginal='assign' vary_rate=0.0004 fixed_clock_rate=0.0008 ^[@R57]^. Tips which failed the clock filter were pruned from each tree prior to running TreeTime. The 90% highest posterior region was used to calculate a confidence interval for the time of each node. Next, tips in the tree were assigned to Dane County, Milwaukee County, their U.S. state, or their country of origin, and the ancestral states of nodes in the tree were estimated using TreeTime. A sampling bias correction of 2.5 was used to account for undersampling. Nodes were assigned to the region with the highest assigned probability from TreeTime. For each sample from Dane and Milwaukee County, we identified the earliest (in calendar time) node assigned to Wisconsin (Dane County, Milwaukee County, and other Wisconsin) in the path between that tip and the root of the tree. Introduction into Dane and Milwaukee County is assumed to occur between each such node and its parent node.
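The same TreeTime settings can also be driven through the library's Python API. The snippet below is a minimal sketch under stated assumptions: the input file names and tip dates are placeholders, the keyword arguments simply mirror the flags quoted above, and the study's actual preprocessing (rerooting, polytomy handling, and the subsequent ancestral reconstruction) may differ.

``` python
# Hedged sketch: time-align a tree with TreeTime using the settings quoted above.
# "ml_tree.nwk", "alignment.fasta", and the dates dict are hypothetical inputs.
from treetime import TreeTime

dates = {"tipA": 2020.19, "tipB": 2020.24}  # decimal-year sampling dates per tip

tt = TreeTime(tree="ml_tree.nwk", aln="alignment.fasta", dates=dates, gtr="JC69")
tt.clock_filter(n_iqd=4, plot=False)  # same clock filter as in the text
# Prune tips flagged by the clock filter before dating.
for leaf in [l for l in tt.tree.get_terminals() if getattr(l, "bad_branch", False)]:
    tt.tree.prune(leaf)
tt.run(infer_gtr=True, max_iter=2, branch_length_mode="auto",
       resolve_polytomies=False, time_marginal="assign",
       vary_rate=0.0004, fixed_clock_rate=0.0008)
```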
As we do not know whether Wisconsin samples included in the tree from other studies are from Dane or Milwaukee County (or elsewhere in Wisconsin), our estimates for the timing of introduction into each county represent the timing of introduction of that lineage into Wisconsin as a whole. The time of introduction was evaluated using the mean estimate as well as the lower and upper limits of the timing for each node. Thus, each bootstrap replicate contributes three lines to the plots shown in [Fig 3B](#F3){ref-type="fig"} and [Fig 3C](#F3){ref-type="fig"}. Furthermore, our estimates of the number of introductions will be conservative in the case of reimportations into Dane or Milwaukee County. Because polytomies were not resolved, any Dane or Milwaukee County tips or lineages directly descending from a polytomy were attributed to a single importation event -- to the earliest Wisconsin node.

We also conducted a rarefaction analysis to assess the impact of sampling within Dane and Milwaukee County on the estimated number of introductions. This was done using the time-aligned maximum likelihood tree described above. N (20 to 240, in increments of 20) sequences were randomly sampled from the set of Dane and Milwaukee County sequences, and all non-sampled Dane and Milwaukee County sequences were pruned from the tree prior to ancestral state reconstruction and estimation of the number of introductions as described above. Ten replicates for each N were conducted. Code to replicate this analysis is available at <https://github.com/gagekmoreno/SARS-CoV-2-in-Southern-Wisconsin>. Results were visualized using Matplotlib ^[@R58]^, Seaborn (<https://github.com/mwaskom/seaborn>), and Baltic (<https://github.com/evogytis/baltic>).

Phylodynamic analysis {#S21}
---------------------

Bayesian phylogenetic inference and dynamic modelling were performed with BEAST2 software (v2.6.2) ^[@R59]^ and the PhyDyn package (v1.3.6) ^[@R14]^. The phylodynamic analysis infers SARS-CoV-2 phylogenies of sequences within a region of interest and exogenous sequences representing the global phylogeny, and uses tree topology to inform a SEIJR compartmental model. For the Bayesian phylogenetic analysis, an HKY substitution model (gamma count=4; *K* lognormal prior (μ=1, S=1.25)) and a strict molecular clock (uniform prior 0.0005 to 0.005 substitution/site/year) were used. To select the exogenous sequences, a maximum-likelihood global phylogeny was generated with IQ-TREE and randomly downsampled in a time-stratified manner by collection week. Closest cophenetic neighbors for each of the Wisconsin sequences were additionally included, if not present already. Only sequences with coverage of the entire coding region and less than 1% of N base calls were used. For the Dane County analyses, 107 local and 107 exogenous SARS-CoV-2 sequences were used. For the Milwaukee County analyses, 117 local and 129 exogenous SARS-CoV-2 sequences were used.
The SEIJR model dynamics are defined by the following ordinary differential equations:

$$\begin{aligned}
\frac{dS}{dt} &= -\left(\beta I(t) + \beta\tau J(t)\right)\frac{S(t)}{S(t) + E(t) + I(t) + J(t) + R(t)} \\
\frac{dE}{dt} &= \left(\beta I(t) + \beta\tau J(t)\right)\frac{S(t)}{S(t) + E(t) + I(t) + J(t) + R(t)} - \gamma_{0}E(t) \\
\frac{dI}{dt} &= \gamma_{0}\left(1 - p_{h}\right)E(t) - \gamma_{1}I(t) \\
\frac{dJ}{dt} &= \gamma_{0}p_{h}E(t) - \gamma_{1}J(t) \\
\frac{dR}{dt} &= \gamma_{1}\left(I(t) + J(t)\right)\end{aligned}$$

The dynamics of the exogenous compartment are defined by:

$$\frac{dY}{dt} = \left(\beta_{exog} - \gamma_{exog}\right)Y(t)$$

During phylodynamic model fitting, *β*, *β*~*exog*~, and *α* are estimated. The estimated R~0~ was derived from *β* as follows:

$$R_{0} = \left(\beta\left(1 - p_{h}\right) + \beta\tau p_{h}\right)/\gamma_{1}$$

The SEIJR model includes a 'high transmission' compartment (J) that accounts for heterogeneous transmission due to superspreading, an important component of SARS-CoV-2 epidemiology ^[@R9],[@R60]--[@R62]^. Published empirical estimates informed parameterization of superspreading and other epidemiological parameters. The mean durations of the latent (1/γ~0~) and infectious (1/γ~1~) periods were 3 and 5.5 days, respectively ^[@R63]^. Likewise, the mean duration of infection for the exogenous compartment (1/γ~*exog*~) was fixed at 8.5 days. To model low, medium, and high transmission heterogeneity, the proportion of infectious individuals in the J compartment (*p*~*h*~) and their transmission rate multiplier (τ) were set to 0.2 and 16, 0.1 and 36, or 0.05 and 76, respectively. These *p*~*h*~ and τ settings result in 20, 10, or 5% of individuals contributing 80% of total infections. The initial size of the S compartment was fixed at 5 × 10^5^ for Dane County and 9.5 × 10^5^ for Milwaukee County. To account for changes in epidemic dynamics after the Executive Orders, a 25% reduction in importation/exportation of sequences was applied at a 25 March breakpoint, per observed reductions in Google mobility indices for individuals in Wisconsin ^[@R64]^. Additionally, the estimated R~0~ after 25 March was allowed to vary from the pre-intervention R~0~ proportionally by a modifier variable, *α*.

Each analysis was run in duplicate for at least 3 million states in BEAST2. Parameter traces were visually inspected for adequate mixing and convergence in Tracer (v1.7.1). Log files from duplicate runs were merged with LogCombiner and a 10% burn-in applied. Similarly, trajectory files from duplicate runs were merged with an in-house R script and a 10% burn-in applied. BEAST2 XML files and scripts for exogenous sequence selection and phylodynamic data analysis/visualization are provided in the GitHub repository listed below.

Data availability {#S22}
-----------------

Sequencing data after mapping to the SARS-CoV-2 reference genome (Genbank: [MN908947.3](MN908947.3)) have been deposited in the Sequence Read Archive (SRA) under bioproject PRJNA614504. Derived data, analysis pipelines, and figures have been made available for easy replication of these results at a publicly accessible GitHub repository: <https://github.com/gagekmoreno/SARS-CoV-2-in-Southern-Wisconsin>.

Supplementary Material {#SM1}
======================

We gratefully acknowledge Dr. Trevor Bedford and the entire Nextstrain team for making Nextstrain phylogenetic tools publicly available and for their commitment to tracking the global spread of SARS-CoV-2. We also acknowledge the GISAID team for maintaining the largest public repository of SARS-CoV-2 sequences and metadata. Lastly, we thank Dr.
Louise Moncla for her careful reading of and insightful comments on this manuscript. This project was funded in part through a COVID-19 Response grant from the Wisconsin Partnership Program at the University of Wisconsin School of Medicine and Public Health. KMB is supported by F30 AI145182-01A1 from the National Institute of Allergy and Infectious Diseases. GKM is supported by an NLM training grant to the Computation and Informatics in Biology and Medicine Training Program (NLM 5T15LM007359).

![Demography and epidemiology of SARS-CoV-2 in southern Wisconsin.\
A) A map of Wisconsin highlighting Dane County (red) and Milwaukee County (blue). Cumulative case counts through 26 April 2020 are reported within each county border. B) Cumulative SARS-CoV-2 cases in Dane County (red) and Milwaukee County (blue) from 9 March through 26 April. The vertical dashed line indicates the start date of Wisconsin's "Safer at Home" order, which went into effect 25 March 2020 ^[@R22]^.](nihpp-2020.07.09.20149104-f0001){#F1}

![Characterizing consensus-level variants and sequence divergence among Dane and Milwaukee County sequences.\
SNVs are annotated relative to the initial Wuhan SARS-CoV-2 reference (Genbank: [MN908947.3](MN908947.3)). A) Frequency of consensus SNVs among the Dane County sequences. B) Frequency of consensus SNVs among the Milwaukee County sequences. Open symbols denote synonymous or intergenic SNVs and closed symbols denote nonsynonymous SNVs. C) A divergence-based phylogenetic tree built using Nextstrain tools for the 122 Dane County (red) and 125 Milwaukee County (blue) sequences. Wisconsin samples are rooted against Wuhan-Hu-1/2019 and Wuhan/WH01/2019.](nihpp-2020.07.09.20149104-f0002){#F2}

![Dane and Milwaukee County outbreaks are defined by genetically distinct viruses.\
A) A time-resolved phylogenetic tree built using Nextstrain tools for 122 samples collected in Dane County. B) A time-resolved phylogenetic tree for 125 samples collected in Milwaukee County. Clade is denoted by color. Both phylogenies include Wuhan sequences (Wuhan-Hu-1/2019 and Wuhan/WH01/2019, denoted in grey) to more effectively time-align each tree.](nihpp-2020.07.09.20149104-f0003){#F3}

![Estimate of the number of introduction events into Milwaukee and Dane County and their relative contribution to downstream epidemic dynamics.\
A) Maximum likelihood (ML) time-resolved tree with subsampled global sequences and closest phylogenetic neighbors included (grey branches). Sequences from Dane and Milwaukee Counties are highlighted in red and blue, respectively. Sequences with geolocation information available to the state level, or that are located outside of Dane and Milwaukee Counties (e.g. La Crosse), are shown in purple. B) Estimated cumulative number of introduction events into each county. C) Gaussian Kernel Density Estimate plots showing the estimated timing of each introduction event (3 curves per replicate: mean and 90% confidence intervals) into Dane County (red) or Milwaukee County (blue). The relative number of samples from each region attributable to an introduction event is represented on the y-axis.
Curves are normalized to a cumulative density of one; therefore, y-axis scale is not shown.](nihpp-2020.07.09.20149104-f0004){#F4}

![Phylodynamic modelling of regional outbreaks informs regional outbreak dynamics before and after government interventions.\
Bayesian phylodynamic modelling of cumulative incidence up to 26 April for outbreaks in A) Dane County and B) Milwaukee County under low (left), medium (center), and high (right) transmission heterogeneity conditions. Model parameters for low, medium, and high transmission heterogeneity were fixed such that 20, 10, and 5% of superspreading events contribute 80% of cumulative infections, respectively. Median cumulative incidence (solid black line) is bound by the 95% confidence intervals (CI; gray ribbon). Dots represent reported cumulative positive tests in Dane County (red) and Milwaukee County (blue). Estimated median reproductive numbers (R~0~) with 95% HDI are listed for the period before the Wisconsin "Safer at Home" order was issued on 25 March 2020. Percent reduction in R~0~ with 95% HDI is provided for the period after 25 March 2020.](nihpp-2020.07.09.20149104-f0005){#F5}

###### County level demographics for Dane and Milwaukee County.

| County-level demographic data                              | Dane           | Milwaukee    |
|------------------------------------------------------------|----------------|--------------|
| Population size (2015)                                     | 516,850        | 952,150      |
| Population per square mile (2015)                          | 430            | 3942         |
| Average number of persons per dwelling (2014--2018)        | 2.35           | 2.44         |
| Age (2014--2018):                                          |                |              |
|   % of population under 5                                  | 5.6            | 6.9          |
|   % of population under 18                                 | 20.4           | 24           |
|   % of population over 65                                  | 13.7           | 13.6         |
| Race/ethnicity (2015):                                     |                |              |
|   White                                                    | 81.5%          | 53.3%        |
|   African American                                         | 5.9%           | 27.2%        |
|   American Indian                                          | 0.3%           | 0.7%         |
|   Hispanic                                                 | 6.3%           | 14.5%        |
|   Asian                                                    | 6.0%           | 4.3%         |
| Median income (2015)                                       | $65,416        | $45,905      |
| % of population that is uninsured, under 65 (2014--2018)   | 4.9%           | 8.2%         |
| Poverty estimate, all ages (2015)                          | 11.2%          | 20.3%        |
| % of population reported overweight or obese (2012--2016)  | 54.3% -- 58.5% | 64.7% -- 69% |
| % of adults reporting diagnosed diabetes (2012--2016)      | 4.2% -- 6.8%   | 8.6% -- 9.8% |

[^1]: These authors contributed equally

[^2]: These authors contributed equally
While most of our attention has been on the top of the Nashville Predators defensive depth chart, a move to shore up the bottom end hasn't panned out, as reports out of Sweden indicate that Norwegian defenseman Ole-Kristian Tollefsen has rejected a contract offer by the Preds and will stay in Sweden's Elitserien, playing for Farjestad.

Ole-Kristian Tollefsen
Height: 6-2
Weight: 211
Born: Mar 29, 1984

From the translated report:

Do you remember last year? When Ole-Kristian Tollefsen was offered a contract by Nashville and wanted to go, but was stopped by Modo because the actual release date had passed. He has received an even stronger bid from the club this summer. But this time it is he, and not the club, who has chosen to put an end to the move.

A veteran of five rather forgettable NHL seasons with Columbus and Philadelphia, Tollefsen would have perhaps added a bit of beef to the 3rd pair, but this isn't a huge loss for the Preds. It sounds like personal reasons played a big role in this decision, as he and his wife have a baby on the way in August.
// dear imgui: Platform Binding for GLFW
// This needs to be used along with a Renderer (e.g. OpenGL3, Vulkan..)
// (Info: GLFW is a cross-platform general purpose library for handling windows, inputs, OpenGL/Vulkan graphics context creation, etc.)
// (Requires: GLFW 3.1+)

// Implemented features:
//  [X] Platform: Clipboard support.
//  [X] Platform: Gamepad support. Enable with 'io.ConfigFlags |= ImGuiConfigFlags_NavEnableGamepad'.
//  [x] Platform: Mouse cursor shape and visibility. Disable with 'io.ConfigFlags |= ImGuiConfigFlags_NoMouseCursorChange'. FIXME: 3 cursors types are missing from GLFW.
//  [X] Platform: Keyboard arrays indexed using GLFW_KEY_* codes, e.g. ImGui::IsKeyPressed(GLFW_KEY_SPACE).

// You can copy and use unmodified imgui_impl_* files in your project. See main.cpp for an example of using this.
// If you are new to dear imgui, read examples/README.txt and read the documentation at the top of imgui.cpp.
// https://github.com/ocornut/imgui

// CHANGELOG
// (minor and older changes stripped away, please see git history for details)
//  2019-03-12: Misc: Preserve DisplayFramebufferScale when main window is minimized.
//  2018-11-30: Misc: Setting up io.BackendPlatformName so it can be displayed in the About Window.
//  2018-11-07: Inputs: When installing our GLFW callbacks, we save user's previously installed ones - if any - and chain call them.
//  2018-08-01: Inputs: Workaround for Emscripten which doesn't seem to handle focus related calls.
//  2018-06-29: Inputs: Added support for the ImGuiMouseCursor_Hand cursor.
//  2018-06-08: Misc: Extracted imgui_impl_glfw.cpp/.h away from the old combined GLFW+OpenGL/Vulkan examples.
//  2018-03-20: Misc: Setup io.BackendFlags ImGuiBackendFlags_HasMouseCursors flag + honor ImGuiConfigFlags_NoMouseCursorChange flag.
//  2018-02-20: Inputs: Added support for mouse cursors (ImGui::GetMouseCursor() value, passed to glfwSetCursor()).
//  2018-02-06: Misc: Removed call to ImGui::Shutdown() which is not available from 1.60 WIP, user needs to call CreateContext/DestroyContext themselves.
//  2018-02-06: Inputs: Added mapping for ImGuiKey_Space.
//  2018-01-25: Inputs: Added gamepad support if ImGuiConfigFlags_NavEnableGamepad is set.
//  2018-01-25: Inputs: Honoring the io.WantSetMousePos by repositioning the mouse (when using navigation and ImGuiConfigFlags_NavMoveMouse is set).
//  2018-01-20: Inputs: Added Horizontal Mouse Wheel support.
//  2018-01-18: Inputs: Added mapping for ImGuiKey_Insert.
//  2017-08-25: Inputs: MousePos set to -FLT_MAX,-FLT_MAX when mouse is unavailable/missing (instead of -1,-1).
//  2016-10-15: Misc: Added a void* user_data parameter to Clipboard function handlers.
#include "imgui.h" #include "imgui_impl_glfw.h" // GLFW #include <GLFW/glfw3.h> #ifdef _WIN32 #undef APIENTRY #define GLFW_EXPOSE_NATIVE_WIN32 #include <GLFW/glfw3native.h> // for glfwGetWin32Window #endif #define GLFW_HAS_WINDOW_TOPMOST (GLFW_VERSION_MAJOR * 1000 + GLFW_VERSION_MINOR * 100 >= 3200) // 3.2+ GLFW_FLOATING #define GLFW_HAS_WINDOW_HOVERED (GLFW_VERSION_MAJOR * 1000 + GLFW_VERSION_MINOR * 100 >= 3300) // 3.3+ GLFW_HOVERED #define GLFW_HAS_WINDOW_ALPHA (GLFW_VERSION_MAJOR * 1000 + GLFW_VERSION_MINOR * 100 >= 3300) // 3.3+ glfwSetWindowOpacity #define GLFW_HAS_PER_MONITOR_DPI (GLFW_VERSION_MAJOR * 1000 + GLFW_VERSION_MINOR * 100 >= 3300) // 3.3+ glfwGetMonitorContentScale #define GLFW_HAS_VULKAN (GLFW_VERSION_MAJOR * 1000 + GLFW_VERSION_MINOR * 100 >= 3200) // 3.2+ glfwCreateWindowSurface // Data enum GlfwClientApi { GlfwClientApi_Unknown, GlfwClientApi_OpenGL, GlfwClientApi_Vulkan }; static GLFWwindow* g_Window = NULL; static GlfwClientApi g_ClientApi = GlfwClientApi_Unknown; static double g_Time = 0.0; static bool g_MouseJustPressed[5] = { false, false, false, false, false }; static GLFWcursor* g_MouseCursors[ImGuiMouseCursor_COUNT] = { 0 }; // Chain GLFW callbacks: our callbacks will call the user's previously installed callbacks, if any. static GLFWmousebuttonfun g_PrevUserCallbackMousebutton = NULL; static GLFWscrollfun g_PrevUserCallbackScroll = NULL; static GLFWkeyfun g_PrevUserCallbackKey = NULL; static GLFWcharfun g_PrevUserCallbackChar = NULL; static const char* ImGui_ImplGlfw_GetClipboardText(void* user_data) { return glfwGetClipboardString((GLFWwindow*)user_data); } static void ImGui_ImplGlfw_SetClipboardText(void* user_data, const char* text) { glfwSetClipboardString((GLFWwindow*)user_data, text); } void ImGui_ImplGlfw_MouseButtonCallback(GLFWwindow* window, int button, int action, int mods) { if (g_PrevUserCallbackMousebutton != NULL) g_PrevUserCallbackMousebutton(window, button, action, mods); if (action == GLFW_PRESS && button >= 0 && button < IM_ARRAYSIZE(g_MouseJustPressed)) g_MouseJustPressed[button] = true; } void ImGui_ImplGlfw_ScrollCallback(GLFWwindow* window, double xoffset, double yoffset) { if (g_PrevUserCallbackScroll != NULL) g_PrevUserCallbackScroll(window, xoffset, yoffset); ImGuiIO& io = ImGui::GetIO(); io.MouseWheelH += (float)xoffset; io.MouseWheel += (float)yoffset; } void ImGui_ImplGlfw_KeyCallback(GLFWwindow* window, int key, int scancode, int action, int mods) { if (g_PrevUserCallbackKey != NULL) g_PrevUserCallbackKey(window, key, scancode, action, mods); ImGuiIO& io = ImGui::GetIO(); if (action == GLFW_PRESS) io.KeysDown[key] = true; if (action == GLFW_RELEASE) io.KeysDown[key] = false; // Modifiers are not reliable across systems io.KeyCtrl = io.KeysDown[GLFW_KEY_LEFT_CONTROL] || io.KeysDown[GLFW_KEY_RIGHT_CONTROL]; io.KeyShift = io.KeysDown[GLFW_KEY_LEFT_SHIFT] || io.KeysDown[GLFW_KEY_RIGHT_SHIFT]; io.KeyAlt = io.KeysDown[GLFW_KEY_LEFT_ALT] || io.KeysDown[GLFW_KEY_RIGHT_ALT]; io.KeySuper = io.KeysDown[GLFW_KEY_LEFT_SUPER] || io.KeysDown[GLFW_KEY_RIGHT_SUPER]; } void ImGui_ImplGlfw_CharCallback(GLFWwindow* window, unsigned int c) { if (g_PrevUserCallbackChar != NULL) g_PrevUserCallbackChar(window, c); ImGuiIO& io = ImGui::GetIO(); if (c > 0 && c < 0x10000) io.AddInputCharacter((unsigned short)c); } static bool ImGui_ImplGlfw_Init(GLFWwindow* window, bool install_callbacks, GlfwClientApi client_api) { g_Window = window; g_Time = 0.0; // Setup back-end capabilities flags ImGuiIO& io = ImGui::GetIO(); io.BackendFlags |= 
ImGuiBackendFlags_HasMouseCursors;       // We can honor GetMouseCursor() values (optional)
    io.BackendFlags |= ImGuiBackendFlags_HasSetMousePos;        // We can honor io.WantSetMousePos requests (optional, rarely used)
    io.BackendPlatformName = "imgui_impl_glfw";

    // Keyboard mapping. ImGui will use those indices to peek into the io.KeysDown[] array.
    io.KeyMap[ImGuiKey_Tab] = GLFW_KEY_TAB;
    io.KeyMap[ImGuiKey_LeftArrow] = GLFW_KEY_LEFT;
    io.KeyMap[ImGuiKey_RightArrow] = GLFW_KEY_RIGHT;
    io.KeyMap[ImGuiKey_UpArrow] = GLFW_KEY_UP;
    io.KeyMap[ImGuiKey_DownArrow] = GLFW_KEY_DOWN;
    io.KeyMap[ImGuiKey_PageUp] = GLFW_KEY_PAGE_UP;
    io.KeyMap[ImGuiKey_PageDown] = GLFW_KEY_PAGE_DOWN;
    io.KeyMap[ImGuiKey_Home] = GLFW_KEY_HOME;
    io.KeyMap[ImGuiKey_End] = GLFW_KEY_END;
    io.KeyMap[ImGuiKey_Insert] = GLFW_KEY_INSERT;
    io.KeyMap[ImGuiKey_Delete] = GLFW_KEY_DELETE;
    io.KeyMap[ImGuiKey_Backspace] = GLFW_KEY_BACKSPACE;
    io.KeyMap[ImGuiKey_Space] = GLFW_KEY_SPACE;
    io.KeyMap[ImGuiKey_Enter] = GLFW_KEY_ENTER;
    io.KeyMap[ImGuiKey_Escape] = GLFW_KEY_ESCAPE;
    io.KeyMap[ImGuiKey_A] = GLFW_KEY_A;
    io.KeyMap[ImGuiKey_C] = GLFW_KEY_C;
    io.KeyMap[ImGuiKey_V] = GLFW_KEY_V;
    io.KeyMap[ImGuiKey_X] = GLFW_KEY_X;
    io.KeyMap[ImGuiKey_Y] = GLFW_KEY_Y;
    io.KeyMap[ImGuiKey_Z] = GLFW_KEY_Z;

    io.SetClipboardTextFn = ImGui_ImplGlfw_SetClipboardText;
    io.GetClipboardTextFn = ImGui_ImplGlfw_GetClipboardText;
    io.ClipboardUserData = g_Window;
#if defined(_WIN32)
    io.ImeWindowHandle = (void*)glfwGetWin32Window(g_Window);
#endif

    g_MouseCursors[ImGuiMouseCursor_Arrow] = glfwCreateStandardCursor(GLFW_ARROW_CURSOR);
    g_MouseCursors[ImGuiMouseCursor_TextInput] = glfwCreateStandardCursor(GLFW_IBEAM_CURSOR);
    g_MouseCursors[ImGuiMouseCursor_ResizeAll] = glfwCreateStandardCursor(GLFW_ARROW_CURSOR);   // FIXME: GLFW doesn't have this.
    g_MouseCursors[ImGuiMouseCursor_ResizeNS] = glfwCreateStandardCursor(GLFW_VRESIZE_CURSOR);
    g_MouseCursors[ImGuiMouseCursor_ResizeEW] = glfwCreateStandardCursor(GLFW_HRESIZE_CURSOR);
    g_MouseCursors[ImGuiMouseCursor_ResizeNESW] = glfwCreateStandardCursor(GLFW_ARROW_CURSOR);  // FIXME: GLFW doesn't have this.
    g_MouseCursors[ImGuiMouseCursor_ResizeNWSE] = glfwCreateStandardCursor(GLFW_ARROW_CURSOR);  // FIXME: GLFW doesn't have this.
    g_MouseCursors[ImGuiMouseCursor_Hand] = glfwCreateStandardCursor(GLFW_HAND_CURSOR);

    // Chain GLFW callbacks: our callbacks will call the user's previously installed callbacks, if any.
    g_PrevUserCallbackMousebutton = NULL;
    g_PrevUserCallbackScroll = NULL;
    g_PrevUserCallbackKey = NULL;
    g_PrevUserCallbackChar = NULL;
    if (install_callbacks)
    {
        g_PrevUserCallbackMousebutton = glfwSetMouseButtonCallback(window, ImGui_ImplGlfw_MouseButtonCallback);
        g_PrevUserCallbackScroll = glfwSetScrollCallback(window, ImGui_ImplGlfw_ScrollCallback);
        g_PrevUserCallbackKey = glfwSetKeyCallback(window, ImGui_ImplGlfw_KeyCallback);
        g_PrevUserCallbackChar = glfwSetCharCallback(window, ImGui_ImplGlfw_CharCallback);
    }

    g_ClientApi = client_api;
    return true;
}

bool ImGui_ImplGlfw_InitForOpenGL(GLFWwindow* window, bool install_callbacks)
{
    return ImGui_ImplGlfw_Init(window, install_callbacks, GlfwClientApi_OpenGL);
}

bool ImGui_ImplGlfw_InitForVulkan(GLFWwindow* window, bool install_callbacks)
{
    return ImGui_ImplGlfw_Init(window, install_callbacks, GlfwClientApi_Vulkan);
}

void ImGui_ImplGlfw_Shutdown()
{
    for (ImGuiMouseCursor cursor_n = 0; cursor_n < ImGuiMouseCursor_COUNT; cursor_n++)
    {
        glfwDestroyCursor(g_MouseCursors[cursor_n]);
        g_MouseCursors[cursor_n] = NULL;
    }
    g_ClientApi = GlfwClientApi_Unknown;
}

static void ImGui_ImplGlfw_UpdateMousePosAndButtons()
{
    // Update buttons
    ImGuiIO& io = ImGui::GetIO();
    for (int i = 0; i < IM_ARRAYSIZE(io.MouseDown); i++)
    {
        // If a mouse press event came, always pass it as "mouse held this frame", so we don't miss click-release events that are shorter than 1 frame.
        io.MouseDown[i] = g_MouseJustPressed[i] || glfwGetMouseButton(g_Window, i) != 0;
        g_MouseJustPressed[i] = false;
    }

    // Update mouse position
    const ImVec2 mouse_pos_backup = io.MousePos;
    io.MousePos = ImVec2(-FLT_MAX, -FLT_MAX);
#ifdef __EMSCRIPTEN__
    const bool focused = true; // Emscripten
#else
    const bool focused = glfwGetWindowAttrib(g_Window, GLFW_FOCUSED) != 0;
#endif
    if (focused)
    {
        if (io.WantSetMousePos)
        {
            glfwSetCursorPos(g_Window, (double)mouse_pos_backup.x, (double)mouse_pos_backup.y);
        }
        else
        {
            double mouse_x, mouse_y;
            glfwGetCursorPos(g_Window, &mouse_x, &mouse_y);
            io.MousePos = ImVec2((float)mouse_x, (float)mouse_y);
        }
    }
}

static void ImGui_ImplGlfw_UpdateMouseCursor()
{
    ImGuiIO& io = ImGui::GetIO();
    if ((io.ConfigFlags & ImGuiConfigFlags_NoMouseCursorChange) || glfwGetInputMode(g_Window, GLFW_CURSOR) == GLFW_CURSOR_DISABLED)
        return;

    ImGuiMouseCursor imgui_cursor = ImGui::GetMouseCursor();
    if (imgui_cursor == ImGuiMouseCursor_None || io.MouseDrawCursor)
    {
        // Hide OS mouse cursor if imgui is drawing it or if it wants no cursor
        glfwSetInputMode(g_Window, GLFW_CURSOR, GLFW_CURSOR_HIDDEN);
    }
    else
    {
        // Show OS mouse cursor
        // FIXME-PLATFORM: Unfocused windows seems to fail changing the mouse cursor with GLFW 3.2, but 3.3 works here.
        glfwSetCursor(g_Window, g_MouseCursors[imgui_cursor] ? g_MouseCursors[imgui_cursor] : g_MouseCursors[ImGuiMouseCursor_Arrow]);
        glfwSetInputMode(g_Window, GLFW_CURSOR, GLFW_CURSOR_NORMAL);
    }
}

static void ImGui_ImplGlfw_UpdateGamepads()
{
    ImGuiIO& io = ImGui::GetIO();
    memset(io.NavInputs, 0, sizeof(io.NavInputs));
    if ((io.ConfigFlags & ImGuiConfigFlags_NavEnableGamepad) == 0)
        return;

    // Update gamepad inputs
    #define MAP_BUTTON(NAV_NO, BUTTON_NO)       { if (buttons_count > BUTTON_NO && buttons[BUTTON_NO] == GLFW_PRESS) io.NavInputs[NAV_NO] = 1.0f; }
    #define MAP_ANALOG(NAV_NO, AXIS_NO, V0, V1) { float v = (axes_count > AXIS_NO) ? \
        axes[AXIS_NO] : V0; v = (v - V0) / (V1 - V0); if (v > 1.0f) v = 1.0f; if (io.NavInputs[NAV_NO] < v) io.NavInputs[NAV_NO] = v; }
    int axes_count = 0, buttons_count = 0;
    const float* axes = glfwGetJoystickAxes(GLFW_JOYSTICK_1, &axes_count);
    const unsigned char* buttons = glfwGetJoystickButtons(GLFW_JOYSTICK_1, &buttons_count);
    MAP_BUTTON(ImGuiNavInput_Activate,   0);     // Cross / A
    MAP_BUTTON(ImGuiNavInput_Cancel,     1);     // Circle / B
    MAP_BUTTON(ImGuiNavInput_Menu,       2);     // Square / X
    MAP_BUTTON(ImGuiNavInput_Input,      3);     // Triangle / Y
    MAP_BUTTON(ImGuiNavInput_DpadLeft,   13);    // D-Pad Left
    MAP_BUTTON(ImGuiNavInput_DpadRight,  11);    // D-Pad Right
    MAP_BUTTON(ImGuiNavInput_DpadUp,     10);    // D-Pad Up
    MAP_BUTTON(ImGuiNavInput_DpadDown,   12);    // D-Pad Down
    MAP_BUTTON(ImGuiNavInput_FocusPrev,  4);     // L1 / LB
    MAP_BUTTON(ImGuiNavInput_FocusNext,  5);     // R1 / RB
    MAP_BUTTON(ImGuiNavInput_TweakSlow,  4);     // L1 / LB
    MAP_BUTTON(ImGuiNavInput_TweakFast,  5);     // R1 / RB
    MAP_ANALOG(ImGuiNavInput_LStickLeft, 0,  -0.3f,  -0.9f);
    MAP_ANALOG(ImGuiNavInput_LStickRight,0,  +0.3f,  +0.9f);
    MAP_ANALOG(ImGuiNavInput_LStickUp,   1,  +0.3f,  +0.9f);
    MAP_ANALOG(ImGuiNavInput_LStickDown, 1,  -0.3f,  -0.9f);
    #undef MAP_BUTTON
    #undef MAP_ANALOG
    if (axes_count > 0 && buttons_count > 0)
        io.BackendFlags |= ImGuiBackendFlags_HasGamepad;
    else
        io.BackendFlags &= ~ImGuiBackendFlags_HasGamepad;
}

void ImGui_ImplGlfw_NewFrame()
{
    ImGuiIO& io = ImGui::GetIO();
    IM_ASSERT(io.Fonts->IsBuilt() && "Font atlas not built! It is generally built by the renderer back-end. Missing call to renderer _NewFrame() function? e.g. ImGui_ImplOpenGL3_NewFrame().");

    // Setup display size (every frame to accommodate for window resizing)
    int w, h;
    int display_w, display_h;
    glfwGetWindowSize(g_Window, &w, &h);
    glfwGetFramebufferSize(g_Window, &display_w, &display_h);
    io.DisplaySize = ImVec2((float)w, (float)h);
    if (w > 0 && h > 0)
        io.DisplayFramebufferScale = ImVec2((float)display_w / w, (float)display_h / h);

    // Setup time step
    double current_time = glfwGetTime();
    io.DeltaTime = g_Time > 0.0 ? (float)(current_time - g_Time) : (float)(1.0f/60.0f);
    g_Time = current_time;

    ImGui_ImplGlfw_UpdateMousePosAndButtons();
    ImGui_ImplGlfw_UpdateMouseCursor();

    // Gamepad navigation mapping
    ImGui_ImplGlfw_UpdateGamepads();
}
Butzy.. I'm a fan of vinegar myself.. I prefer for instance a Carolina mustard vinegar for BBQ'd meats, particularly things like pork butts. However, I was hoping to come up with something that had just a bit more body so that it could be controlled a little in plating. Xanthan gum was mentioned and it is pretty cheap on amazon. Does it impart any sort of negative flavors or chalky sensations like cornstarch can? Is it heat stable or can it "break" like some emulsifiers?

Man, I was one too many cuba libres into the night when I posted this. Sorry for the rambling nature but thank you all for the thoughts. For both vinaigrettes I did season with salt and pepper. The tangerine vinaigrette was a warm vinaigrette so I started it off by heating the oil and then added diced shallot and garlic. In retrospect that actually ended up overpowering the tangerine flavor, but another issue was unlike the lemon juice which I did freshly squeeze.. I had...

So the ratio is 3 parts fat to 1 part acid. That's what I started the day with.. and oh man.. did I go off the rails. I did a straight up lemon juice with 1/2 canola oil and 1/2 olive oil.. what a horrible flavor that was. I added honey, red wine vinegar, and chopped basil and lemon zest along with about a tbsp of sugar. I whisked and whisked.. and it sort of seemed OK.. it wasn't great. What in the hell is a vinaigrette? To me.. it should be like a salad...

I just picked up about $400 worth of hotel pans for $133. I compared the 6 inch 1/6th pan on amazon: it was $9.45 and I got it for $2.77 - food service warehouse is apparently going out of business and has things 30 to 90% off. http://www.foodservicewarehouse.com/ Just thought you guys would like to know in case there's something you've been wanting to pick up.

Content is the name of the game.. this is no different on cheftalk.com than on epicurious.com, although the process of creating content is different. Since you are not really building a community you would not have consistent content creators coming to the site to offer that for free. In addition, your content is heavily focused on recipes and relies on a set of functionality that you plan to build to associate meta information (ingredients) in a form that is searchable....

I just read back through the entire "Fish Challenge" thread and honestly that is the best example of the true beauty of this little website I think anyone could find. It illustrates everything that is good and right with the community; encouragement, questioning, support, passion etc. That was one heck of a month.. @Nicko even came out of his shell and posted multiple entries! Two years later - in a world that seems to be spinning out of control, I had tears in my eyes...

I am not a fan of wet ribs.. sticky sugary wet ribs need to go away. If I want candy I know where to get it. Give me spice.. and a vinegar mustard sauce on the side. I may be starting a war but that is how I feel.
---
bibliography:
- 'sw.bib'
title: 'Dependent Types for JavaScript — Appendix[^1]'
---

Additional Definitions {#sec:appendix-a}
======================

We presented many parts of $\dRef$ and $\djs$ in [§3]{}, [§4]{}, and [§5]{}. In this appendix, we consider some details that did not fit in that presentation, as well as our treatment of *break* and *label* expressions to facilitate the desugaring of control operators in $\javascript$. Then, in , we outline how to extend $\dRef$ to support better *location polymorphism*.

Syntax
------

In addition to the expression syntax in [Figure 7]{}, $\dRef$ includes the following forms:

$$\figureFontSize \begin{array}{rrcll} \startRow{\textbf{}}{e}{::=} {\cdots} \spaceItem {\expLabel{\labelName}{e}} \spaceItem {\expBreak{\labelName}{\varVal}} \end{array}$$

An expression $\expLabel{x}{e}$ labels the enclosed expression, and a break expression $\expBreak{\labelName}{\varVal}$ terminates execution of the innermost expression labeled $\labelNameX{\labelName}$ within the function currently being evaluated and produces the result $\varVal$. If no such labeled expression is found, evaluation becomes stuck. Label and break expressions are included to translate the control flow operations of $\djs$.

To analyze label and break expressions, the expression typing relation uses a *label environment* $\labelEnv$ (in addition to type and heap environments), where each binding records the world that the expression labeled $\labelNameX{\labelName}$ is expected to satisfy.

$$\figureFontSize \begin{array}{rrcll} \startRow{\textbf{}}{\labelEnv}{::=} {\emptyset} \spaceItem {\labelEnv,\tyBind{\labelNameX{\labelName}} {(\world{\varTyp}{\heapSig})}} \end{array}$$

Well-Formedness
---------------

The well-formedness relations, defined in , are largely straightforward. We use the procedure $\helperOp{Binders}$ to collect all of the binders in a world or heap.

Subtyping
---------

The figure presents more of the subtyping relations. As in $\dtypes$, subtyping on refinement types reduces to implication of refinement formulas, which are discharged by a combination of uninterpreted, first-order reasoning and syntactic subtyping. If the SMT solver alone cannot discharge an implication obligation ($\ruleName{I-Valid}$), the formula is rearranged into conjunctive normal form ($\ruleName{I-Cnf}$), and goals of the form $\hasTyp{\varLogVal}{\varUnTyp}$ are discharged by a combination of uninterpreted reasoning and syntactic subtyping ($\ruleName{I-ImpSyn}$). We write $\embed{\varTyp}$ for the *embedding* of a type as a formula, a straightforward definition [@NestedPOPL12] that lifts to environments $\embed{\Gamma}$, heap bindings $\embed{\varHeapSigBinding}$, heaps $\embed{\heapSig}$, and worlds $\embed{\varWorld}$. Because heap binders may refer to each other in any order (recall that a heap can be thought of as a dependent tuple, where each component is named with a binder), the embedding of a heap starts by inserting dummy bindings so that all binders are in scope for the type of each heap binding. For example:

$$\begin{aligned} \heapSig_0 &\defeq \heapPair{\varHeap_0} {\heapCat{\heapBind{\locName_1}{\tyBind{x}{\varTyp_1}}} {\heapBind{\locName_2}{\tyBind{y}{\varTyp_2}}}} \\ \embed{\heapSig_0} & = \embed{\tyBind{x}{\tyTop}}, \embed{\tyBind{y}{\tyTop}}, \embed{\varTyp_1}(x), \embed{\varTyp_2}(y)\end{aligned}$$

The $\ruleName{U-Arrow}$ rule for function types is familiar, treating input worlds contravariantly and output worlds covariantly.
In order to check world subtyping, the judgement $\relSub{\Gamma} {\world{\tyBind{x_1}{\varTyp_1}}{\heapPair{\varHeap}{\varHeapSigBinding_1}}} {\world{\tyBind{x_2}{\varTyp_2}}{\heapPair{\varHeap}{\varHeapSigBinding_2}}} {\maybeN}$ checks that $\varTyp_1$ is a subtype of $\varTyp_2$ and that the heaps agree on the “deep” part $\varHeap$. Then, it checks that the structure of the “shallow” parts match — using a heap matching relation that uses a $\heapEqSym$ operator (not shown) that permutes bindings as necessary — and creates a substitution $\varSubst$ of binders from $\varHeapSigBinding_2$ to $\varHeapSigBinding_1$. Finally, the heap bindings, which can be thought of as dependent tuples, are embedded as formulas and checked by implication.

Value Typing
------------

We supplement our discussion of value typing, defined in , from [§4]{}. The $\ruleName{T-Extend}$ rule for dictionaries is straightforward. The $\ruleName{T-Loc}$ rule assigns run-time location $\runLoc$ (which appears during evaluation, but not in source programs) a reference type corresponding to its compile-time location, using the mapping $\helperOp{StaticLoc}$. Notice that, unlike the version of the rule from [Figure 9]{}, the rule $\ruleName{T-Fun}$ uses an empty label environment to type check function bodies, so that break expressions cannot cross function boundaries.

Expression Typing
-----------------

When we presented expression typing in [§4.4]{}, we ignored break and label expressions, so the typing judgement referred only to type and heap environments. To account for control operators, the expression typing judgement is of the form $\relTypExp{\Gamma}{\heapTyp}{e}{\varTyp}{\heapTyp'}{}$, where a label environment is an additional input. We define the typing rules in and supplement our previous discussion. The $\ruleName{T-As}$ and $\ruleName{T-Sub}$ rules are straightforward. Aside from the rules for label and break expressions, label environments $\labelEnv$ play no interesting role. The rules we discussed in [§4.4]{} carry over directly to the formulation with label environments.

For simplicity, we assume that the initial type environment contains all the weak location bindings $\heapBindPair{\weakLocName}{\varTyp}{\locName}$ required by the program. To safely allow a weak location $\weakLocName$ to be treated *temporarily* as strong, $\dRef$ ensures that $\weakLocName$ has at most one corresponding *thawed* location at a time; if there is none, we say $\weakLocName$ is *frozen*. The rule $\ruleName{T-Thaw}$ thaws $\weakLocName$ to a strong location $\locName$ (which we syntactically require to be distinct from all other thawed locations for $\weakLocName$) and updates the heap environment with thaw state $\thwd{\locName}$ to track the correspondence. Subtyping allows $\vNull$ weak references, so the output type is $\vNull$ if the original reference is; otherwise, it is a reference to $\locName$. Finally, the new heap also binds a value $x$ of type $\varTyp$, the invariant for all values stored at $\weakLocName$, and the output type introduces an *existential* so that $x$ is in scope in the new heap.

The rule $\ruleName{T-Freeze}$ serves two purposes: to merge a strong location $\locName$ into a weak (frozen) location $\weakLocName$, and to *re-freeze* a thawed (strong) location $\locName$ that originated from $\weakLocName$, as long as the heap value stored at $\locName$ satisfies the invariant required by $\weakLocName$.
The strong reference is guaranteed to be non-$\vNull$, so the output type remembers that the frozen reference is, too. Compared to the presentation in [@LinLocFI07], we have combined freeze and re-freeze into a single `freeze` expression that includes an explicit thaw state $\thawState$.

The result of thawing a weak location is *either* a strong reference or $\vNull$. Although we could statically require that all strong references be non-$\vNull$ before use (to rule out the possibility of null-dereference exceptions), we choose to allow $\vNull$ references to facilitate idiomatic programming. Therefore, we modify the input type for the object primitives in `objects.dref` to allow a $\vNull$ argument. For example, consider the updated input type for `hasPropObj` below, where $\tyMaybeNull{\varTyp} \defeq \refTyp{\varTyp(\theV)\vee\theV=\vNull}$. Notice that we add the predicate $x\neq\vNull$ to the *output* type, because if `hasPropObj` evaluates without raising an exception, then $x$ is guaranteed to be non-$\vNull$. In this way, $\dRef$ precisely tracks the invariants of thawed objects (`passengers` from [§2.7]{}).

$$\begin{aligned} &\ \world {\tyTupleTwo{\tyBind{x}{\tyMaybeNull{\refCon}}}{\tyBind{k}{\tyStr}}} {\heapBindPair{x}{\tyBind{d}{\tyDict}}{\locProto{x}}} \\[-2pt] \ttArr &\ \world {\refTypShort{x\neq\vNull\wedge (\formIff{\theV}{\objHasR{d}{k}{\curHeap}{\locProto{x}}})}} {\sameHeap}\end{aligned}$$

The $\helperOp{TInst}$ procedure processes has-type predicates in formulas as follows:

$$\begin{aligned} \instantiate{\hasTyp{\varLogVal}{\tyvar}}{\tyvar}{\refTypX{x}{\varFormOne}} &= \subst{\varFormOne}{x}{\varLogVal} \\ \instantiate{\hasTyp{\varLogVal}{\tyvarB}}{\tyvar}{\varTyp} &= \hasTyp{\varLogVal}{\tyvarB}\end{aligned}$$

The rule uses a join operator, defined in , that combines the type and heap environments along each branch ($\world{\varTyp_1}{\heapTyp_1}$ and $\world{\varTyp_2}{\heapTyp_2}$) such that the type and output heap for the overall if-expression ($\world{\varTyp}{\heapTyp'}$) are in prenex form. The operator starts by moving existential binders for the types to the top level. Rearranging variables in this way is sound because we assume that, by convention, all let-bound variables in a program are distinct. Then, the bindings in a heap environment are combined one location at a time. We show a few representative equations in , abusing notation in several ways. For example, we write $\varHeapBinding\ \backslash\ \locName$ to denote that $\locName$ is not bound in $\varHeapBinding$. When a location $\locName$ is bound in both heaps to values $\varVal_1$ and $\varVal_2$, respectively, the join introduces a new binding $y$ whose type is the join of $\varVal_1$ and $\varVal_2$. When a location $\locName$ is bound in only one heap, we use the dummy type $\tyTop$ to describe the (non-existent) value in the other heap. There is no danger that $\locName$ will be unsoundly dereferenced after the if-expression, since the join guards the types of references $\utRef{\locName}$ with the appropriate guard predicates.
$$\begin{aligned}
\iteJoin{b}{\world{\varScm_1}{\heapTyp_1}}{\world{\varScm_2}{\heapTyp_2}} = &\ \world {\existsTyp{\seq{x}}{\seq{\varTyp}} {\existsTyp{\seq{y}}{\seq{\varTyp'}} {\varScm}}} {\heapTyp} \hspace{0.10in}\textrm{where } \iteJoinTypes{b}{\varScm_1}{\varScm_2} = \existsTyp{\seq{x}}{\seq{\varTyp}}{\varScm} \\[-4pt]
&\ \phantom{ \world {\existsTyp{\seq{x}}{\seq{\varTyp}} {\existsTyp{\seq{y}}{\seq{\varTyp'}} {\varScm}}} {\heapTyp} } \hspace{0.10in}\textrm{and } \iteJoinHeaps{b}{\heapTyp_1}{\heapTyp_2} = (\exists \seq{y}:\seq{\varTyp'}, \heapTyp) \\[4pt]
\iteJoinTypes{b}{\varScm_1}{\varScm_2} = &\ \refTyp{(b=\vTrue\Rightarrow\embed{\varScm_1}) \wedge (b=\vFalse\Rightarrow\embed{\varScm_2})} \\
\iteJoinTypes{b}{(\existsTyp{\seq{x_1}}{\seq{\varTyp_1}}{\varScm_1})}{\varScm_2} = &\ \existsTyp{\seq{x_1}} {(\iteJoinTypes{b}{\seq{\varTyp_1}}{\seq{\tyTop}})} {\iteJoinTypes{b}{\varScm_1}{\varScm_2}} \\
\iteJoinTypes{b}{\varScm_1}{(\existsTyp{\seq{x_2}}{\seq{\varTyp_2}}{\varScm_2})} = &\ \existsTyp{\seq{x_2}} {(\iteJoinTypes{b}{\seq{\tyTop}}{\seq{\varTyp_2}})} {\iteJoinTypes{b}{\varScm_1}{\varScm_2}} \\
\iteJoinTypes{b}{(\existsTyp{\seq{x_1}}{\seq{\varTyp_1}}{\varScm_1})} {(\existsTyp{\seq{x_2}}{\seq{\varTyp_2}}{\varScm_2})} = &\ \existsTyp{\seq{x_1}}{(\iteJoinTypes{b}{\seq{\varTyp_1}}{\seq{\tyTop}})}{ \existsTyp{\seq{x_2}}{(\iteJoinTypes{b}{\seq{\tyTop}}{\seq{\varTyp_2}})}{ \iteJoinTypes{b}{\varScm_1}{\varScm_2}}} \\[6pt]
\iteJoinHeaps{b}{\heapCat{\heapBind{\locName}{\varVal_1}}{\varHeapBinding_1}} {\heapCat{\heapBind{\locName}{\varVal_2}}{\varHeapBinding_2}} = &\ (\exists \tyBind{y}{\iteJoinTypes{b}{\refTypX{x}{x=\varVal_1}} {\refTypX{x}{x=\varVal_2}}}, \heapCat{\heapBind{\locName}{y}} {\iteJoinHeaps{b}{\varHeapBinding_1}{\varHeapBinding_2}}) \\
\iteJoinHeaps{b}{\heapCat{\heapBind{\locName}{\varVal_1}}{\varHeapBinding_1}} {\varHeapBinding_2 \backslash\ \locName} = &\ (\exists \tyBind{y}{\iteJoinTypes{b}{\refTypX{x}{x=\varVal_1}}{\tyTop}}, \heapCat{\heapBind{\locName}{y}} {\iteJoinHeaps{b}{\varHeapBinding_1}{\varHeapBinding_2}}) \\
\iteJoinHeaps{b}{\varHeapBinding_1 \backslash\ \locName} {\heapCat{\heapBind{\locName}{\varVal_2}}{\varHeapBinding_2}} = &\ (\exists \tyBind{y}{\iteJoinTypes{b}{\tyTop}{\refTypX{x}{x=\varVal_2}}}, \heapCat{\heapBind{\locName}{y}} {\iteJoinHeaps{b}{\varHeapBinding_1}{\varHeapBinding_2}})\end{aligned}$$

The rule for $\expLabel{x}{e}$ binds the label $\labelNameX{x}$ to an expected world $\world{\varTyp}{\heapSig}$ in the label environment $\labelEnv'$ used to check $e$, and expects that *all* exit points of $e$ produce a value and heap environment that satisfy the expected world. The exit points are all $\expBreak{\labelName}{\varVal}$ expressions in $e$, as well as the “fall-through” of expression $e$ for control flow paths that do not end with `break`; the `break` rule handles the former cases, and the second and third premises of the label rule handle the latter. If all exit points satisfy the expected world, we use the procedure to convert the heap type into a heap environment, as in the rule. Notice that the `break` rule derives the type $\refTyp{\formFalse}$, because a `break` immediately completes the evaluation context, thus making the subsequent program point unreachable.

Desugaring
----------

In , we show more of the desugaring rules. As discussed in [§5]{}, we desugar non-constructor functions () to scalar function values and constructor functions () to objects.
Following $\LamJS$ [@Guha10a], we wrap each desugared function body with the label $\labelNameX{\mathit{return}}$, which facilitates the desugaring of `return` statements (). We desugar named, recursive $\djs$ functions via the standard `letrec` encoding using `fix`; we omit this rule from .

The translation first creates a fresh object at location $\locConst_{F}$ with prototype `Function.prototype`, then stores the desugared constructor function in the field, and finally creates an empty object at location $\locConst_{F_{proto}}$ that is stored in the field, to be used when creating an object with this constructor ().

Following , the translation desugars `while` loops to recursive functions (we write `letrec` as syntactic sugar for the standard encoding using `fix`). As such, a (function type) annotation describes the invariants that hold before and after each iteration. A $\labelNameX{\mathit{break}}$ label around the desugared loop body facilitates the desugaring of `break` statements (). We elide similar mechanisms for `do-while` loops, `for` loops, `for-in` loops, and `continue` statements.

Extensions {#sec:appendix-extensions}
==========

We now outline two ways to increase the expressiveness of location polymorphism in $\dRef$.

Weak Location Polymorphism
--------------------------

So far, we have universally quantified function types over strong locations. We can make several changes to allow quantification over weak locations as well. First, we extend the syntax of locations.

$${\weakLocName}\sep\sep{::=}\sep\sep{\cdots}\spaceItemSmall{\locFrozen{\varLoc}} \sep\sep\sep\sep\sep {\eitherLocName}\sep\sep{::=}\sep\sep{\locName}\spaceItemSmall{\weakLocName} \sep\sep\sep\sep\sep {\varLocEither}\sep\sep{::=}\sep\sep{\varLoc}\spaceItemSmall{\locFrozen{\varLoc}}$$

We use $\locFrozen{\varLoc}$ to range over weak location variables, and we extend the grammar of weak locations $\weakLocName$ to include them (in addition to weak location constants $\locFrozen{\locConst}$). We also define $\eitherLocName$ (resp. $\varLocEither$) to range over *arbitrary* locations (resp. location variables). Next, we extend the syntax of function types and function application.

$$\begin{array}{rrcll} \startRow{\textbf{}}{e}{::=} {\cdots} \spaceItem {\appThree{\seq{\varTyp}}{\seq{\eitherLocName}}{\seq{\heapSig}}{\varVal_1}{\varVal_2}} \spaceCategory \startRow{\textbf{}}{\varUnTyp}{::=} {\cdots} \spaceItem {\utArrowDrefWorlds {\seq{\tyVar}}{\seq{\varLocEither}}{\seq{\varHeap}} {\world{\weakHeap}{\varWorld_1}}{\varWorld_2}} \spaceCategory \startRow{\textbf{}}{\weakHeap}{::=} {\heapBindPair{\weakLocName}{\varTyp}{\locName}} \spaceItem {\heapCat{\weakHeap_1}{\weakHeap_2}} \spaceItem {\emptyset} \end{array}$$

A function type is now parametrized over arbitrary location variables $\seq{\varLocEither}$ and a *weak heap* $\weakHeap$ of bindings that describe weak location variables. To match, function application now includes location arguments $\seq{\eitherLocName}$ rather than $\seq{\locName}$; typing must ensure that strong location variables $\varLoc$ (resp. weak location variables $\locFrozen{\varLoc}$) are instantiated only with strong locations $\locName$ (resp. weak locations $\weakLocName$). A function type refers to a weak heap only in the domain of the function, because weak locations are flow-insensitive and do not vary at different program points. Before, we assumed that the initial typing environment contained bindings for all weak locations. The new syntax of function types replaces this convention by *abstracting* over weak locations.
Consequently, the function application rule must check that the declared weak heap $\weakHeap$ of a function type is satisfiable given the current heap environment $\heapTyp$ at a call site (after substitution of all polymorphic variables).

Existential Locations
---------------------

Universally quantifying over all locations, including simple locations, clutters function types and applications with additional arguments, and also exposes locations that are “internal” or “local” to the desugared $\dRef$ program and *not* accessible in the original $\djs$ program. Consider the following example; we refer to the original function as $f$ and the desugared version as $f'$. We might annotate the $\djs$ function $f$ with the type

$$\begin{aligned} & \forall \varLoc,\varLoc'.\ \world{\utRef{\varLoc}} {\heapBindPair{\varLoc}{\tyDict}{\varLoc'}} \\[-2pt] \ttArr\ & \world{\tyTop} {\heapBind{\varLoc}{\sameHeap}} \intertext{\normalsize and the desugared version would have the type} & \forall \varLoc,\varLoc',\varLoc_x,\varLoc_y.\ \world{\utRef{\varLoc}} {\heapBindPair{\varLoc}{\tyDict}{\varLoc'}} \\[-2pt] \ttArr\ & \world{\tyTop} {\heapCatThr{\heapBind{\varLoc}{\sameHeap}} {\heapBind{\varLoc_x}{\utRef{\varLoc}}} {\heapBind{\varLoc_y}{\utRef{\varLoc}}}}\end{aligned}$$

that uses additional location variables for the references inserted by the translation. Although it is straightforward to mechanically desugar function types in this manner, the additional location parameters at function calls increase the manual annotation burden or, more likely, since we cannot expect $\djs$ programmers to write them, the burden on the type system to infer them. Instead, we can introduce *existential location types* into the system and write the following type for the desugared function $f'$.

$$\begin{aligned} & \forall \varLoc,\varLoc'.\ \world{\utRef{\varLoc}} {\heapBindPair{\varLoc}{\tyDict}{\varLoc'}} \\[-2pt] \ttArr\ & \world{\existsLoc{\varLoc_x,\varLoc_y}{\tyTop}} {\heapCatThr{\heapBind{\varLoc}{\sameHeap}} {\heapBind{\varLoc_x}{\utRef{\varLoc}}} {\heapBind{\varLoc_y}{\utRef{\varLoc}}}}\end{aligned}$$

Notice that the *output* world uses existentials to name the (strong, simple) locations inserted by desugaring. As a result, a call to this function need not instantiate the local locations; instead, the type system can generate *fresh* location constants (skolemize) for the existential locations.

We provide a sketch of how to extend $\dRef$ with existential locations. First, we extend the syntax of types.

$$\begin{array}{rrcll} \startRow{\textbf{}}{\varTyp}{::=} {\cdots} \spaceItem {\existsLoc{\locName}{\varTyp}} \end{array}$$

We intend that existential locations only appear in *positive* positions of function types, which we can compute in a similar fashion to the procedure from $\dtypes$ [@NestedPOPL12] that tracks *polarity* of types nested within formulas. Effectively, we require that every function type be of the form

$$\utArrowDrefWorlds{\seq{\tyVar}}{\seq{\varLoc}}{\seq{\varHeap}} {\triWorld{x}{(\refTypX{y}{\varFormOne})}{\heapSig}} {\triWorld{x'}{(\existsLoc{\seq{\locName}}{\refTypX{y'}{\varFormOne'}})} {\heapSig'}}$$

where the input type is a refinement type and the output type is in a *prenex* form that requires all existentially-quantified locations to appear at the top level and which prohibits existentially-quantified values. Intuitively, the locations $\seq{\locName}$ correspond to local reference cells that a function allocates when invoked and are inaccessible to callers.
To introduce existential locations for simple references (which are only used by desugaring), we use a new typing rule. For technical reasons, we use a `let`-expression to type check reference allocation *along with* a subsequent expression $e$ as a way to describe the scope of $\locName$.

$$\inferrule* {\relTypVal{\Gamma}{\heapTyp}{\varVal}{\varTyp}{\maybeN} \sepPremise \relTypExp{\Gamma, \tyBind{x}{\utRef{\locName}}} {\heapCat{\heapTyp}{\heapBind{\locName}{\varVal}}} {e}{\varScm}{\heapTyp'}{\maybeN}} {\relTypExp{\Gamma}{\heapTyp} {\letinBare{x}{\newref{\locName}{\varVal}}{e}} {\existsLoc{\locName}{\varScm}}{\heapTyp'}{\maybeN}}$$

To facilitate algorithmic type checking, we ensure that existential locations are always prenex quantified in types. The procedure, used for conditionals, rearranges existential locations allocated on different branches to maintain this invariant. Finally, we need to handle subtyping of existential location types. The simplest approach is to require that two types have the same quantifier structure.

$$\inferrule* {\relSub{\Gamma}{\varTyp_1}{\varTyp_2}{\maybeN}} {\relSub{\Gamma} {\existsLoc{\locName}{\varTyp_1}} {\existsLoc{\locName}{\varTyp_2}}{\maybeN}}$$

For first-order functions, we can work around this limitation by playing tricks with dummy locations. For higher-order functions, however, the presence of existential locations limits expressiveness by constraining the use of the heap. Abstracting over the mutable state of higher-order functions, however, can be quite heavyweight (see Hoare Type Theory [@HTT]); adding more lightweight support in our setting is left for future work.

[^1]: This report supplements our OOPSLA 2012 paper [@DjsOOPSLA12].
Cyberterrorism

This week three Alabama hospitals were forced to turn away "all but the most-critical new patients" after a ransomware attack infected their computers as the FBI warned that these attacks are growing more sophisticated and costly.

A May 15 Paris conference, co-sponsored by New Zealand Prime Minister Jacinda Ardern and French President Emmanuel Macron, will bring together national leaders and tech companies to focus on terrorists' use of social media. One critic warned of overreaction to tragedies like the recent New Zealand mosque attacks, which were livestreamed by the shooter.

Leon Panetta called the infrastructure changes enacted by the current administration to protect America's electricity insufficient. Writing with former Senator James Talent in The Hill, he said partnership with the public sector was now critical to shoring up U.S. defenses against a "cyber Pearl Harbor."

In a joint bipartisan resolution, 22 Western governors called on the feds to make cybersecurity a top national priority, beginning with restoring the position of the White House cybersecurity coordinator—eliminated in May.

Federal workers back at their computers after the longest shutdown in U.S. history are likely to be overwhelmed by the backlog of cybertraffic—making them vulnerable to hostile overseas hackers, says a cybersecurity expert. And next time could be worse.

Americans are asking if there are better ways of running our justice system, but without the numbers to answer fundamental questions, reformers are operating in the dark, write two specialists in criminological research.

A lawsuit charges that the Trump administration violated protesters' First Amendment rights in Washington, D.C. The president approvingly tweets a letter from his former lawyer calling the demonstrators "terrorists."

The partial shutdown of immigration courts during the COVID-19 health crisis has derailed final hearings for tens of thousands of individuals, and is likely to postpone the decisions about the fate of many more for months, if not years, while they wait in detention, according to data analyzed by the Transactional Records Access Clearinghouse.
[Photo: Coventry landfill. Credit: Molly Walsh]

From a distance, the mounds of land east of Route 5 in Coventry resemble one more set of rolling hills in Vermont's mountainous Northeast Kingdom. But locals know that this topography is man-made. "That was flat land at one time," Chris Jacobs said as he gazed at Vermont's only operating landfill last week. "Now it's a mountain of trash." The white-bearded Northeast Kingdom resident isn't happy about it, but the massive dump near the Canadian border could soon grow bigger — if state regulators approve it, that is. The landfill's owner, Casella Waste Systems, has proposed a 51-acre expansion that would allow it to stay open for another 22 years — and take in an additional 11 million tons of trash. The company has applied for a new solid waste certificate from the state Agency of Natural Resources. That permit is necessary before a broader review can be completed under Act 250, the state's land use law. In the meantime, debate is ramping up. Last month, a dozen Northeast Kingdom opponents of the expansion, including Jacobs, founded DUMP — an acronym for Don't Undermine Memphremagog's Purity — a reference to the scenic lake just across the road from the landfill. And a powerful coalition of advocacy groups, including the Conservation Law Foundation and the Vermont Public Interest Research Group, submitted a joint letter on July 20 urging ANR to reject the expansion. Critics fear that polluted water will leak from the landfill into groundwater and Lake Memphremagog, which straddles the U.S.-Canadian border and supplies drinking water to thousands of people in Québec. The southern tip of the lake is within 1,000 feet of the landfill property. Earlier this summer, Québec politicians jumped into the fray to voice concern, which heartened Vermont critics. Without a new permit, the landfill will probably run out of space and be forced to close in four years. Vermont's haulers would have to truck waste to other states — and would likely pass on higher fuel costs to customers. The situation is partly of Vermont's own making. State officials successfully pushed to close the hundreds of small, oozing, unlined town dumps across Vermont in the 1980s and 1990s. The second part of a state plan, articulated under Act 78, was to encourage the creation of regional landfills with proper environmental controls. That program flopped. A network of transfer stations was created for trash, recycling and, eventually, composting. But the materials are ultimately trucked elsewhere, with most of the trash going to Coventry. High costs and loud public opposition killed various proposals for new landfills, including one in Williston. Environmental groups increasingly agree that adding trash capacity thwarts efforts to become a "zero waste" society. So Vermont wound up with one big landfill. "They say nobody else wants to have a dump," Jacobs harrumphed. "Well, we don't want a dump in our backyard, either." The Albany resident is not the only one who resents that the Northeast Kingdom has become the state's trash can. "It's just out of sight, out of mind," Rep. Vicki Strong (R-Albany) said about the region. "A lot of times, we don't get a voice in something until it's done."

[Photo: Landfill general manager Jeremy Labbe (left) speaking with Chris Jacobs. Credit: Molly Walsh]

DUMP wants more stringent weight limits on trucks traveling to and from the landfill, better odor control and a pause on any expansion. Supporters of a bigger landfill say it's the best solution for Vermont's rubbish problem.
Officials at Casella, the Rutland-based company that started with one garbage truck in 1975 and now brings in annual revenues of $600 million, point out that trash has to go somewhere. "Everyone wants you to pick it up on Tuesday morning or Wednesday morning, but nobody wants you to put it down anywhere," observed Jeremy Labbe, general manager of the Coventry landfill, during an interview on the grounds. As he spoke, trucks full of trash, construction debris and contaminated soil pulled into the entrance of the 627-acre property. The semis groaned as they labored up the site's winding roads to empty their loads onto what some locals have dubbed Mount Casella. Last year, Casella accepted 600,000 tons of waste in Coventry. Roughly 70 percent of it was from Vermont. The landfill does not accept municipal waste from outside the state, but it does take construction and demolition debris, contaminated soil, and wastewater treatment sludge. Casella charges higher disposal fees for out-of-state trash. Each day's deposits are covered with dirt, sometimes the same contaminated soil that trucks pay to unload. All but a closed 11-acre section of the landfill that predates Casella's ownership is lined with heavy layers of plastic. Grass covers large, filled-up sections of the Mount Casella range — composed of at least five mini-mountains. But whatever the cover, snow, rain and waste decomposition generate large amounts of liquid known as leachate, or, as some refer to it, garbage coffee — liquid that has seeped through trash and picked up harmful chemicals on the way. Casella collects the leachate with wells and pipes built into the sprawling mounds and trucks it to sewage treatment plants, including one in Newport. The company estimates that the expansion could generate an additional 7 million gallons a year. That increases the risk of problems, including leaks into groundwater from plastic liners that invariably deteriorate, according to opponents. They say Casella has already failed to address the presence of arsenic and other substances in leachate near the old, unlined sections of the landfill. They worry about additional leachate containing a host of potentially harmful toxins from items allowed at the dump: flame-retardants in cellphones and computers, non-friable asbestos, and nonstick pans. They further point out that banned items can wind up in the dump anyway, including lead-based batteries and light bulbs containing mercury. Tests have not detected any contaminants beyond the landfill property line, according to Jeff Bourdeau, waste management specialist at ANR. "At this point, we've no evidence of any pollution off-site," he said. Casella officials echo that. "Leachate from the landfill is not going to make it into the lake," vice president Joe Fusco said. "There are redundant liner systems — test wells that would catch any kind of leak, so to speak, from the landfill well in advance of any kind of environmental damage."

[Photo: Trucks leaving the Coventry landfill. Credit: Molly Walsh]

In the July 20 letter to ANR, CLF and five other environmental groups contend that Casella has not sufficiently addressed concerns about leachate. The groups say Vermont should pursue aggressive recycling and composting programs, not a landfill expansion. The groups also suggest that the payments Casella makes to the host community, Coventry — $779,000 last year — will ultimately not cover the legacy of pollution.
The Northeast is facing a landfill capacity shortage, and the state should "ensure that Coventry does not become a dumping ground for the rest of New England," the letter reads. It's from Kirstie L. Pecci, director of the Zero Waste Project at CLF, and was signed by the Vermont Natural Resources Council, VPIRG, the Toxics Action Center, Clean Water Action and Vermont Conservation Voters. The Coventry proposal is "so clearly dangerous and so clearly an antiquated and irresponsible idea" that it was "easy to wrangle a coalition together in short order," Pecci said. Despite precautions, dumps pollute, and even sewage plants often can't properly treat landfill leachate, so the Coventry proposal would be a step backward, Pecci said. She noted that Casella has lost several recent landfill expansion battles, including one in Southbridge, Mass., and another in Bethlehem, N.H. The hubbub over the Vermont expansion has landed Coventry, population 1,048, back in state headlines. It's probably the biggest story there since Phish played in a field next to the local airport in 2004, attracting 65,000 fans. The small town outside Newport has few businesses, and the village center looked tired last week. The burned husk of a vacant house occupied one lot, and the windows were boarded up at the graceful old church on the green. In a community where household income is 35 percent lower than the state average, even a landfill has fans. Coventry has no zoning, so the expansion does not require local approval. But town leaders are supportive. "I believe they've been a very good neighbor not just to Coventry but to the whole area," said Rep. Michael Marcotte (R-Coventry), who is chair of the Coventry Selectboard. "They employ a significant number of people there at good wages." The landfill is a much more sophisticated operation than it was back in the 1980s, Marcotte recalled. Back then, a Coventry businessman named Charlie Nadeau owned the relatively small dump and ran stock-car races nearby. Québec investors purchased the landfill, then sold it to Casella in the mid-'90s. While the members of DUMP want landfills to be opened elsewhere in Vermont to take the pressure off Coventry, environmentalists such as the CLF's Pecci say that's a bad idea. Even Casella VP Fusco says Vermont doesn't need a new landfill. "The facility in Coventry is a crucial and important part of Vermont's waste management infrastructure," Fusco said, "and it has the capacity to serve the state of Vermont for at least a few more decades." Sen. John Rodgers (D-Essex/Orleans) has deep concerns about the expansion, citing worries about the lake. "I don't think any of us are excited about the potential for that landfill growing," said Rodgers, who is running as a write-in candidate for governor. Gov. Phil Scott deferred comment to his chief of staff, Jason Gibbs, who said his boss did not want to prejudge the state review and would not take a stand now. ANR is expected to take months to consider public input, so the landfill's future won't be decided quickly. And if the agency says yes to the expansion, the District 7 Environmental Commission in St. Johnsbury would still need to finish its Act 250 review, which could have more potential hurdles. Meanwhile, last Thursday ANR ordered Casella to collect additional groundwater monitoring data at six test wells near the landfill. 
Cathy Jamieson, ANR solid waste program manager, said the order was in response to public comments that state groundwater protections are not being met at the landfill now. The results likely won't be in before mid-September.
Who was it that had the cost-per-batch projection graph? If they're still around, can they do an update on whether they did hit the break-even point ever? I think minus start-up equipment costs, if you include things like sanitizer (amortized over many batches), bottle caps, mesh bag replacements every so often, etc, I'm probably right around $1 per bottle for most batches, maybe slightly more. If I did straight 2-row, grew the hops myself, and only used US-05, I could probably get it down under $0.50 per bottle I think, but then what's the point? Who was it that had the cost-per-batch projection graph? If they're still around, can they do an update on whether they did hit the break-even point ever? I did it. I got caught up with Life and didn't end up keeping track of my subsequent batches, unfortunately (or doing many subsequent batches!). I had only done it as a curiosity anyway - I don't do this to save money; I do it because it's fun. The graph just proved that it wasn't just a money-loser. I still buy the occasional six pack from the store, especially since most of the grocery stores around here have build-your-own six packs now. When I was consuming more beer (~25 years ago) I signed up for a business license (trivial in AK) and purchased hops and malt in bulk. If you could get enough people to go in you could get a ton of 2-Row delivered for about $.25 per lb. Minimum order was a ton. My bill for the consumables for a 10 gal batch was about $10. You have to brew quite a bit to make that reasonable - we were brewing nearly every weekend. A new brewery that just opened near me (actually owned by the guy who runs my LHBS) plans to offer home brewers access to entire 50lb. sacks of grain at the "bulk" price. Once that's available, I should be able to cut my per-batch cost down quite a bit if I keep a sack of 2-row and a sack of pilsner on hand. If you get into homebrewing because you think you'll save money vs buying craft beer, you're getting into homebrewing for the wrong reasons. You really don't save money over craft beer, but you DO get the satisfaction of knowing you're drinking a beer YOU made, making what you like, etc.. and at least I think brewing is enjoyable. Plus, all the cool stuff you can do with your beers... That all being said I just opened one of my chocolate cherry stouts that I bottled ~2 weeks ago. Last week the carbonation wasn't right, and there wasn't much head. MUCH better now. A new brewery that just opened near me (actually owned by the guy who runs my LHBS) plans to offer home brewers access to entire 50lb. sacks of grain at the "bulk" price. Once that's available, I should be able to cut my per-batch cost down quite a bit if I keep a sack of 2-row and a sack of pilsner on hand. I'm paying about $1.30/lb on 50s. It varies a little depending what you get - e.g., Rahr Pils is cheaper than imported Belgian Pils. A new brewery that just opened near me (actually owned by the guy who runs my LHBS) plans to offer home brewers access to entire 50lb. sacks of grain at the "bulk" price. Once that's available, I should be able to cut my per-batch cost down quite a bit if I keep a sack of 2-row and a sack of pilsner on hand. I'm paying about $1.30/lb on 50s. It varies a little depending what you get - e.g., Rahr Pils is cheaper than imported Belgian Pils. You should be able to get base malts a lot cheaper than that if you're buying direct from a brewery (and they aren't upcharging).
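Aside on that cost-per-batch projection: the break-even math is just one-time gear amortized over batches plus per-batch consumables. A quick sketch in Python (every price below is a made-up placeholder, not a figure from this thread):

def cost_per_bottle(n_batches):
    """Average cost per bottle after n batches, amortizing startup gear."""
    equipment = 150.00          # hypothetical one-time kettle/fermenter/etc.
    per_batch = 23.50           # hypothetical grain + hops + yeast + caps/sanitizer
    bottles_per_batch = 48      # ~5 gal batch in 12 oz bottles
    total = equipment + per_batch * n_batches
    return total / (bottles_per_batch * n_batches)

for n in (1, 5, 10, 25):
    print(f"after {n:>2} batches: ${cost_per_bottle(n):.2f}/bottle")
# With these placeholder numbers, the amortized cost falls from ~$3.61
# after one batch toward ~$0.61 by batch 25 -- the shape of that graph.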
I'm paying $32-36 for base sacks and $65-80 for specialty when buying from my friend, but he's only charging me his cost on the grain. That's via local homebrew supply store. I think that kind of connection requires a friend at a brewery. Most places wouldn't be willing to take orders for a random stranger. The beauty of brewpubs and microbreweries? They are small enough that it's easy to go in and make friends with them. Ask for a private tour (bring some friends), ask a lot of intelligent, informed questions. Attend their tastings. Ask more questions. Make your face known. I know several places that would be more than happy to add a couple of extra items to their bulk orders to help out an aspiring homebrewer. In the grand scheme of things, I don't mind paying the mark up. I like having a big homebrew supply store in state. It's handy to be able to hop online and order a random keg fitting and have it on my doorstep inside 24 hours without shelling out for overnight shipping. I think my "steam Bock" experiment is a bust. After 3 months of cellaring, I've had one bottle with a strong band aid flavor, and another that wasn't band aids, but still wasn't good. I'll keep letting them sit until I need the bottles to see if they improve, but I'll probably end up dumping them. I'm pretty sure band-aid flavor is an infection. I doubt that gets any better, unless it was only that bottle. I'm catching up on this thread after a while of not reading, but someone had a question about the no-mash-out double batch sparge method a few pages back. I've been using that method with good success for a while. Before the sweet wort is drained from the mash and the grain is rinsed (sparged) of the residual sugars, many brewers perform a mashout. Mashout is the term for raising the temperature of the mash to 170°F prior to lautering. This step stops all of the enzyme action (preserving your fermentable sugar profile) and makes the grainbed and wort more fluid. For most mashes with a ratio of 1.5-2 quarts of water per pound of grain, the mashout is not needed. The grainbed will be loose enough to flow well. For a thicker mash, or a mash composed of more than 25% of wheat or oats, a mashout may be needed to prevent a Set Mash/Stuck Sparge. This is when the grain bed plugs up and no liquid will flow through it. A mashout helps prevent this by making the sugars more fluid; like the difference between warm and cold honey. The mashout step can be done using external heat or by adding hot water according to the multi-rest infusion calculations. (See chapter 16.) A lot of homebrewers tend to skip the mashout step for most mashes with no consequences. There are chemistry reasons for mashout, but they don't impact small batches like what we brew. When you're brewing on a brewery scale, mashout is absolutely necessary due to the amount of time during which the wort is still in contact with the grain. I mash for 60 minutes, I'm usually all ready to boil within 20 more. So on May 5 I bottled a pretty simple ale where I undershot my OG (only got 1.039), fermented for 2 weeks with US-05 in the low 60s, and it tasted pretty clean at bottle time. I would like to serve a bottle to a friend on Friday (so just under 2 weeks after bottling) since it's a batch he was interested in. If I put a bottle in the fridge on Friday morning, what are the chances that it will 1) be carbonated properly, 2) taste good.
I figure since it's a low gravity beer, and US-05 is pretty clean anyway, even though it's green it should still be mostly drinkable, right? Or am I doing the beer a disservice by serving it so early? 2 weeks should be fine for a beer of that gravity, and I'd expect it to be finished carbonating by then. That's how long I would wait to taste my beers back when I bottle conditioned them. It might get better if left longer, but it certainly won't be bad. So I brewed a partial mash NZ IPA (all NZ hops) this weekend and it was my first time using the US-05 yeast. It's always cheaper than the equivalent Wyeast pack, and is just a dry yeast. What is the alcohol tolerance and attenuation for this stuff? So I brewed a partial mash NZ IPA (all NZ hops) this weekend and it was my first time using the US-05 yeast. It's always cheaper than the equivalent Wyeast pack, and is just a dry yeast. What is the alcohol tolerance and attenuation for this stuff? Officially 80% attenuation and 12% alcohol tolerance. Unofficial reports have put the attenuation much higher than that for certain mutant batches a few years ago. It's almost the same strain as Wyeast 1056 or White Labs WLP001, but I think there's been a little genetic drift over the years. I've done 5 batches with US-05 and have gotten attenuation ranging from 77% to 87% (although the 87% was a honey ale). 80% is realistic. So I brewed a partial mash NZ IPA (all NZ hops) this weekend and it was my first time using the US-05 yeast. It's always cheaper than the equivalent Wyeast pack, and is just a dry yeast. What is the alcohol tolerance and attenuation for this stuff? Officially 80% attenuation and 12% alcohol tolerance. Unofficial reports have put the attenuation much higher than that for certain mutant batches a few years ago. It's almost the same strain as Wyeast 1056 or White Labs WLP001, but I think there's been a little genetic drift over the years. I've done 5 batches with US-05 and have gotten attenuation ranging from 77% to 87% (although the 87% was a honey ale). 80% is realistic. Awesome, I think my OG was only 1.062 so I'm aiming for a 6.5% ABV when it's all said and done. Which doesn't sound like it should be all that hard to accomplish. And NZ hops smell so god damned delicious. I am thinking of ordering more before they sell out. Does anyone have any wisdom to impart on thick vs thin mashes? I do BIAB with a sort-of-sparge (bag in a colander perched over a bucket); I dump 1 gallon of room temp water over the bag and collect the runnings in the bucket. I've had some batches with good efficiency, and some with bad efficiency, and looking at my notes, it does look like the closer I am to 2qts of water per lb of grain, the better my efficiency is. I know 1.25qt/lb is supposed to be standard. What sort of side-effects am I looking at for a thin mash? So I brewed a partial mash NZ IPA (all NZ hops) this weekend and it was my first time using the US-05 yeast. It's always cheaper than the equivalent Wyeast pack, and is just a dry yeast. What is the alcohol tolerance and attenuation for this stuff? Officially 80% attenuation and 12% alcohol tolerance. Unofficial reports have put the attenuation much higher than that for certain mutant batches a few years ago. It's almost the same strain as Wyeast 1056 or White Labs WLP001, but I think there's been a little genetic drift over the years. I've done 5 batches with US-05 and have gotten attenuation ranging from 77% to 87% (although the 87% was a honey ale). 80% is realistic.
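Since the attenuation and ABV numbers being quoted all fall out of two standard approximations, here's a minimal sketch of the arithmetic in Python (the 131.25 factor is the common rule of thumb, not a lab-grade conversion):

def apparent_attenuation(og, fg):
    """Apparent attenuation: fraction of the original extract consumed."""
    return (og - fg) / (og - 1.0)

def abv(og, fg):
    """Common homebrew approximation: ABV ~= (OG - FG) * 131.25."""
    return (og - fg) * 131.25

# The NZ IPA above: OG 1.062 at US-05's typical ~80% apparent attenuation.
og = 1.062
fg = og - 0.80 * (og - 1.0)                   # predicted FG ~1.012
print(round(fg, 3), round(abv(og, fg), 1))    # -> 1.012 6.5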
Is there any appreciable difference between using a dry yeast and not? Does anyone have any wisdom to impart on thick vs thin mashes? I do BIAB with a sort-of-sparge (bag in a colander perched over a bucket, I dump 1 gallon of room temp water over the bag and collect the runnings in the bucket. I've had some batches with good efficiency, and some with bad efficiency, and looking at my notes, it does look like the closer I am to 2qts of water per lb of grain, the better my efficiency is. I know 1.25qt/lb is supposed to be standard. What sort of side-effects am I looking at for a thin mash? Running a mash that is too thin may raise the pH to undesirable levels, which can extract tannins from the grain husks. If you haven't noticed any tannins in your beer, you're probably OK though. It really depends on your brewing water and grain bill how thin you can go without running into problems. Increasing the mash water implies a corresponding reduction in sparge water, which can actually reduce your efficiency, though that evidently has not been an issue for you, possibly due to your BIAB sparge not really extracting much anyway. And, while it can raise the mash pH, it actually lowers the sparge pH, and sparging is when the pH is most likely to rise to tannin extraction levels. Is there any appreciable difference between using a dry yeast and not? I don't think so. I always use US-05 for my IPAs (and lots of my other beers, too) and have never had an issue with it. The quality of dry yeasts is much better these days than it used to be. I certainly wouldn't pay 3-4x as much for a vial WLP001. I think the primary benefit of liquid yeasts at this point is that they have a lot more strains available. Is there any appreciable difference between using a dry yeast and not? I don't think so. I always use US-05 for my IPAs (and lots of my other beers, too) and have never had an issue with it. The quality of dry yeasts is much better these days than it used to be. I certainly wouldn't pay 3-4x as much for a vial WLP001. I think the primary benefit of liquid yeasts at this point is that they have a lot more strains available. Yeah. If you want flavor contributions from the yeast, liquid gives way more options. The difference isn't quite that steep around here though. US-05 is ~$3 and WLP001 is ~$6. I generally assume 75% efficiency when planning. With a 2qt/lb ratio I've been getting around that. When my water/grain ratio drops, I get closer to 60%. I actually don't change how much my sparge is, I just adjust how much top-off water is needed. Is that bad? Usually I have 4 gallons of spring water in 1 gallon jugs. I have 2-2.5 gallons of strike water (so two jugs plus part of a third), add ~5lb of grain, let the bag drain after the mash, sparge with 1 gallon (the 4th jug). Towards the end of the mash, I boil the remainder of the third gallon off to the side, and then after sparging I add the additional boiling water until my pot is at about the 3 gallon mark (usually a little higher). So increasing the water during the mash reduces the amount of boiling water added to the wort at boil time, and has no impact on the amount of water I use to sparge. 1) I sparge with room temp water. Should I be heating it up to 160 instead? Sparge with ~175 degree water Quote: 2) I might be sparging too quickly. I literally just pour the gallon of water out of the jug over the bag and let it drip into the bucket. 
Best results if you slowly run the sparge water into the top of your mash tun while slowly draining the wort out of the bottom. If you can manage it, you would like this process to last 30 min. These two things should improve your efficiency. I generally assume 75% efficiency when planning. With a 2qt/lb ratio I've been getting around that. When my water/grain ratio drops, I get closer to 60%. I actually don't change how much my sparge is, I just adjust how much top-off water is needed. Is that bad? Usually I have 4 gallons of spring water in 1 gallon jugs. I have 2-2.5 gallons of strike water (so two jugs plus part of a third), add ~5lb of grain, let the bag drain after the mash, sparge with 1 gallon (the 4th jug). Towards the end of the mash, I boil the remainder of the third gallon off to the side, and then after sparging I add the additional boiling water until my pot is at about the 3 gallon mark (usually a little higher). So increasing the water during the mash reduces the amount of boiling water added to the wort at boil time, and has no impact on the amount of water I use to sparge. You may as well sparge to target your entire boil volume - you might squeeze out a few more efficiency points. This is what I do when I batch sparge - I target 6.5 gallons to boil, so if I collect 3 gallons from the mash runoff, I sparge with 3.5 gallons. Quote: 1) I sparge with room temp water. Should I be heating it up to 160 instead? Yes - I would definitely heat the sparge water. Some people claim it doesn't matter, but everything is more soluble at higher temps, so it stands to reason you'll rinse out more sugars with hotter water. Don't go over 170 F though. Quote: 2) I might be sparging too quickly. I literally just pour the gallon of water out of the jug over the bag and let it drip into the bucket. Going slower would probably help too. Getting the water to really integrate with the grain would be better, though I'm not sure how you'd do that in a BIAB system. I transferred my Kiwi IPA on Thursday. It is now being dry hopped with Motueka and Wakatu hops for two weeks. We tested the gravity and the final was 1.014, so I ended up with a roughly 6.3% ABV IPA. And when I type this, I say this with great reticence as I LOVE my hops: I added an additional 3 oz's of Nelson Sauvin in with the boil... but it was damn hoppy, might be TOO hoppy. I really hope it mellows out during these next few weeks of dry hopping and bottle conditioning. Thanks to the stupidity of government, it was recently discovered that taking homebrew out of your home is illegal in the state of Iowa. To the legislature's credit, they quickly stepped up and passed a law legalizing the practice of sharing homebrew in the public space. Of course, they couldn't make it an immediate change, so on July 1st one of my favorite craft bars is doing a "Prohibition is over (again)" celebration of homebrew in Iowa. The owner asked if I'd make something to share as part of an event he's planning, so I am brewing yet another variation on my Wheat Wine DIPA today. I'm playing more and more with different base malts and mash rests, so decided to really have fun with this batch. This is a very non-traditional style. It actually was my first homebrew I created, before I knew anything about recipe construction, and yet it somehow worked. It started as an American Wheat extract recipe that I simply doubled up all the ingredients on, and then hopped it like a west coast pale.
It had (and still has) a nice balance of grapefruit citrus flora and malty background, a balance I personally love. I've moved farther away from the 50/50 or 60/40 wheat/barley splits that are the standard, so it's probably not even fair to call it a wheat wine anymore, but I shall anyways. The 6-Row is there (vs 2-row) for the diastatic power to assist in breaking down the long chain sugars in the wheat malt. I've read so much conflicting information on wheat extraction that I'd just prefer to make sure I cover my bases when brewing them AG. The Golden Promise provides a nice sweet mouthfeel in the finish of beers, and well I've really liked using it and Maris Otter as my base malts lately. The cara because I like the color and hint of caramelization it brings. When I came up with the recipe though it was just because of the color. Happy accidents. The hops have changed over time, though I've settled on a Columbus/Amarillo/Cascade procession with equal amounts at each stage (for five gallons I used between 2 and 3oz each). And then of course AA2 for a very clean finish and neutral profile. I'll be doing a 3 step mash on this, a protein rest in the 120-125 range, then both a beta and alpha rest stage before sparging. As I don't really want excessive caramelization, these will be infusion steps, not decoction, so I'm starting with a rather thick mash and moving towards the thinner side as I go. 1.25 qts/lb at the beginning moving towards 2 for the alpha rest. It makes for a long brew day, but hopefully something worth the effort at the end. I have a strange brew on my hands. My Wee Heavy was a standard brew, but I way under-primed it and it didn't carbonate. I ended up popping every bottle, dumping in a little bit of boiled sugar water, and then recapping them. I sanitized everything as best I could. Most bottles have been a pretty standard Scottish ale, but every couple of bottles I get a weird one. Two of the bottles have tasted like a dubbel (nice raisin and plum notes), I had one that was close to a Newcastle, and last night I had one that was toasty like a stout. I don't know what to make of it. I didn't mislabel the bottles, and while the dubbel could be yeast left over in the bottle from a previous batch, the stout has me stumped. I transferred my Kiwi IPA on Thursday. It is now being dry hopped with Motueka and Wakatu hops for two weeks. We tested the gravity and the final was 1.014, so I ended up with a roughly 6.3% ABV IPA. And when I type this, I say this with great reticence as I LOVE my hops: I added an additional 3 oz's of Nelson Sauvin in with the boil... but it was damn hoppy, might be TOO hoppy. I really hope it mellows out during these next few weeks of dry hopping and bottle conditioning. Bottled my IPA last Thursday. It did mellow out a lot and it was pretty citrusy but in a very different way from using maybe Centennial or Cascade hops. I thought I tasted a bit of bubblegum, but not in the spoiled way, but in a "that's a damn weird hop" kinda way. It'll be good after it gets some carbonation. Since I do 2.5 gallon batches, I usually use a Wyeast smackpack and pitch directly instead of using a starter. Yesterday I brewed a dubbel (1.060 OG) and pitched a packet of Wyeast 1214 about 4 hours after smacking. It had swelled a little bit, but admittedly not as much as I'm used to from a smack pack. For the first 36 hours, there was basically no activity. Now, I'm finally seeing a very thin layer of foam on the top of the carboy, and a very gradual bubble in the airlock.
The last time I made this recipe, the fermentation took off like a rocket and the exhaust out of the airlock smelled great. The current airlock exhaust is odorless. What should I expect the results of the slower initial fermentation to be? Since I do 2.5 gallon batches, I usually use a Wyeast smackpack and pitch directly instead of using a starter. Yesterday I brewed a dubbel (1.060 OG) and pitched a packet of Wyeast 1214 about 4 hours after smacking. It had swelled a little bit, but admittedly not as much as I'm used to from a smack pack. For the first 36 hours, there was basically no activity. Now, I'm finally seeing a very thin layer of foam on the top of the carboy, and a very gradual bubble in the airlock. The last time I made this recipe, the fermentation took off like a rocket and the exhaust out of the airlock smelled great. The current airlock exhaust is odorless. What should I expect the results of the slower initial fermentation to be? This is why I prefer White Labs. The smack pack exists mostly as a proof...there's not truly enough food in the smack pack to overcome the inherent cell count disadvantage of liquid, but you are rarely prepared for it when they don't proof. With White Labs you can see the yeast's freshness by color, at least. The only foolproof way is to do proper starters. As with any fermentation, the yeast will persevere for your beer but expect some surprises. Basically any fermentation issue can occur with low pitch. Even a normal beer. But nutrient and oxygen levels really need to be strong.
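For what it's worth, the underpitching math is easy to run yourself. A sketch using the oft-quoted ~0.75 million cells/mL/°P ale pitch rate and a nominal ~100 billion cells in a fresh liquid pack (both figures are rules of thumb, not vendor specs):

def target_cells_billions(volume_l, og, rate=0.75):
    """Approximate ale pitch target in billions of cells.
    rate is in million cells per mL per degree Plato."""
    plato = (og - 1.0) * 1000 / 4.0      # rough SG-to-Plato conversion
    return rate * (volume_l * 1000) * plato / 1000.0

# The 2.5 gal (~9.5 L) dubbel at 1.060:
print(round(target_cells_billions(9.5, 1.060)))   # -> ~107
# vs ~100 billion in a fresh pack: borderline even before viability losses,
# which is consistent with the sluggish start described above.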
- 123. Determine u(-36). -15 Let m(f) = -32*f**3 + 2*f**2 + 4*f - 2. What is m(1)? -28 Let r(a) = -2036*a + 50884. Give r(25). -16 Let a(t) = -210*t - 1830. What is a(-10)? 270 Let z(k) = -10*k + 666. Determine z(67). -4 Let s(l) = l**3 - 5*l**2 - 13*l - 4. Determine s(7). 3 Let t(j) = -j**2 - 49*j + 4. Give t(-50). -46 Let y(f) = -18*f**2 - 7*f + 11. Determine y(2). -75 Let l(c) = -3*c**3 + 85*c**2 + 618*c + 12. What is l(-6)? 12 Let w(h) = -215*h + 268. Determine w(2). -162 Let y(c) = -c**2 - 88*c + 370. What is y(-92)? 2 Let a(b) = 2*b**3 - 84*b**2 + 238*b - 54. Give a(39). 102 Let z(i) = -3*i**3 + 42*i**2 - 41*i - 15. Give z(13). -41 Let h(a) = -1019*a - 22413. Determine h(-22). 5 Let l(q) = -37*q**2 - 1310*q - 543. Calculate l(-35). -18 Let m(y) = -9*y + 215. Give m(31). -64 Let c(j) = 3*j**3 + 21*j**2 - 38*j - 155. Calculate c(-5). 185 Let n(z) = -z**2 + 53*z + 424. Give n(60). 4 Let x(j) = -57*j**2 - 11*j + 25. What is x(2)? -225 Let n(k) = -16*k**2 + k + 6. What is n(3)? -135 Let g(r) = 11*r**2 - 43*r + 45. What is g(2)? 3 Let f(l) = -2*l**2 + 22*l - 47. What is f(11)? -47 Let i(n) = 2*n**3 - 55*n**2 + 10*n + 22. Give i(27). -437 Let c(g) = 52*g + 1619. Determine c(-31). 7 Let o(b) = 40*b + 1917. Give o(-50). -83 Let m(s) = -s**3 - 3*s**2 + 27*s - 14. Calculate m(0). -14 Let w(p) = -351*p - 23520. Calculate w(-67). -3 Let d(c) = -c**2 + 118*c - 3407. Give d(63). 58 Let a(w) = 7*w**2 - 249*w - 3203. Give a(-10). -13 Let n(h) = -h**3 + 19*h**2 + 73*h + 50. Calculate n(23). -387 Let a(h) = h**2 + 30*h + 102. What is a(-26)? -2 Let m(z) = -136*z - 49. What is m(-1)? 87 Let c(y) = -y**3 - 9*y**2 + 308*y + 2248. What is c(-7)? -6 Let i(z) = 6*z**3 - 207*z**2 + 104*z - 69. Give i(34). -1 Let p(m) = -4*m + 1. Calculate p(0). 1 Let k(n) = -21*n - 1006. What is k(-47)? -19 Let l(p) = 4*p**2 - 2392*p - 4789. What is l(-2)? 11 Let x(f) = 208*f - 811. Determine x(4). 21 Let m(o) = -1288*o + 1194. What is m(1)? -94 Let r(u) = u**2 + 193*u + 9263. Give r(-102). -19 Let g(v) = -23*v**2 - 52*v + 13. Determine g(-1). 42 Let v(j) = -j + 63. What is v(1)? 62 Let t(n) = -2*n**2 - 101*n + 2669. Calculate t(-70). -61 Let l(r) = -3*r**3 - 4*r**2 + 6*r - 12. Determine l(-5). 233 Let t(l) = 16*l + 20. Calculate t(7). 132 Let t(w) = -277*w - 9413. Determine t(-34). 5 Let x(h) = 2*h**3 + 132*h**2 - h - 70. Calculate x(-66). -4 Let a(d) = 700*d + 1847. Determine a(-2). 447 Let v(x) = -x**3 - 7*x**2 + 7*x - 123. What is v(-10)? 107 Let n(q) = q**3 - 6*q**2 - 48*q + 329. Give n(6). 41 Let n(z) = z**2 - 198*z + 2087. Determine n(11). 30 Let v(b) = -32*b**2 - 19*b + 3. What is v(2)? -163 Let d(u) = 2*u**2 - 74*u + 341. Give d(5). 21 Let s(p) = -p**2 + 42*p + 16. Give s(42). 16 Let w(s) = -s**3 - 31*s**2 - 224*s + 28. What is w(-14)? -168 Let r(z) = -23*z - 191. What is r(0)? -191 Let u(r) = -202*r - 804. What is u(-4)? 4 Let n(w) = 2*w**3 + w**2 - 3*w + 32. What is n(-5)? -178 Let k(y) = y**3 - 9*y**2 - 4*y + 4. What is k(8)? -92 Let h(u) = 446*u - 4012. What is h(9)? 2 Let w(m) = 4*m**3 - 22*m**2 - 20*m + 14. Determine w(5). -136 Let r(v) = 3*v**2 + 238*v - 202. Determine r(-78). -514 Let t(g) = -4*g**3 - 311*g**2 + 88*g + 744. What is t(-78)? -36 Let h(m) = 103*m**2 + 9*m + 13. Calculate h(0). 13 Let h(n) = -n - 163. Determine h(-15). -148 Let r(i) = -43*i + 162. Determine r(-2). 248 Let l(x) = x**3 - 162*x**2 + 5535*x + 97. Give l(49). -1 Let t(y) = 24*y - 36. What is t(-6)? -180 Let m(c) = 39*c**3 + 92*c**2 - 5*c - 9. Calculate m(-3). -219 Let n(u) = u**2 + 28*u + 40. What is n(-20)? 
-120 Let r(k) = -k**3 - 38*k**2 - 37*k - 86. Calculate r(-37). -86 Let c(s) = -6*s - 94. Calculate c(-22). 38 Let o(x) = x**3 - 12*x**2 + 36*x - 16. Give o(4). 0 Let h(s) = -s**2 + 20*s + 191. Give h(26). 35 Let m(z) = -z**3 + 36*z**2 + 54*z - 467. Calculate m(37). 162 Let y(n) = -2*n**2 - 137*n - 1792. Give y(-15). -187 Let j(m) = -m**2 + 49*m + 262. Determine j(-5). -8 Let q(v) = v**3 - 35*v**2 - 17*v + 601. What is q(35)? 6 Let a(l) = -l**3 - 2*l**2 + 165*l - 12. Calculate a(-14). 30 Let w(b) = -b**3 + 8*b**2 - 156*b + 448. Determine w(3). 25 Let f(b) = 21*b**3 - 318*b**2 + 47*b + 19. Determine f(15). 49 Let z(x) = -x**3 - 13*x**2 + 769*x - 36. Calculate z(-35). -1 Let i(t) = -529*t - 3717. What is i(-7)? -14 Let f(q) = -655*q + 4673. What is f(7)? 88 Let i(q) = 44*q - 34. Calculate i(1). 10 Let c(x) = -x**3 - 29*x**2 - 58*x - 71. What is c(-27)? 37 Let t(y) = -22*y**2 + 2006*y - 374. What is t(91)? -10 Let p(q) = q**3 + 18*q**2 + 101*q + 471. Calculate p(-13). 3 Let k(v) = -v**2 - 119*v + 4980. Determine k(-152). -36 Let c(d) = 30*d + 256. What is c(-6)? 76 Let l(n) = 39*n - 179. Give l(8). 133 Let u(r) = 10*r - 132. Give u(30). 168 Let y(g) = 272*g + 2658. What is y(-10)? -62 Let t(o) = -2110*o + 25327. Determine t(12). 7 Let i(w) = -202*w - 879. Give i(-4). -71 Let i(x) = x**2 + 41*x - 83. Calculate i(-38). -197 Let x(d) = -d**3 + 128*d**2 + 663*d + 257. Determine x(133). -9 Let v(f) = 40*f + 2133. What is v(-54)? -27 Let a(f) = 8*f - 178. Calculate a(26). 30 Let s(a) = 223*a + 869. Give s(-7). -692 Let w(s) = -252*s + 3021. What is w(12)? -3 Let p(m) = 35*m**2 + 246*m + 9. Calculate p(-7). 2 Let p(y) = y**3 + 59*y**2 - 251*y + 16. Give p(-63). -47 Let z(p) = -p**3 + 15*p**2 + 220*p - 138. Calculate z(24). -42 Let c(l) = -l**3 - 3*l**2 + 65*l + 286. Determine c(-6). 4 Let b(g) = 96*g**2 - 1447*g + 61. What is b(15)? -44 Let v(z) = z**3 - 5*z**2 - 149*z + 23. Give v(-10). 13 Let k(m) = 130*m + 512. Determine k(-6). -268 Let z(t) = 592*t + 447. Calculate z(-1). -145 Let f(q) = -q**2 - 46*q - 338. Give f(-50). -538 Let z(b) = -155*b + 6963. Give z(45). -12 Let j(w) = 99*w - 3155. Give j(32). 13 Let l(n) = -3*n**3 - 228*n**2 - 223*n + 122. Determine l(-75). -28 Let j(a) = 15*a**2 - 7*a - 7. Determine j(-5). 403 Let u(n) = -127*n - 3040. Calculate u(-24). 8 Let p(m) = -21*m**2 - 12*m + 13. What is p(1)? -20 Let r(a) = 90*a - 42. Calculate r(2). 138 Let f(b) = -b**3 + 102*b**2 - 107*b + 575. What is f(101)? -31 Let y(n) = -n**3 + n**2 - 26*n + 26. Give y(6). -310 Let n(f) = -f**2 - 13*f - 68. Determine n(-8). -28 Let l(o) = o**2 + 49*o + 450. Determine l(-37). 6 Let z(d) = d**3 - 76*d**2 + 1365*d - 101. What is z(47)? -7 Let y(k) = -6*k**2 - 52*k - 44. Give y(0). -44 Let j(y) = -39*y - 1048. Give j(-25). -73 Let m(d) = -155*d**2 + 24*d - 38. Determine m(2). -610 Let l(c) = 156*c - 465. Determine l(3). 3 Let o(k) = -42*k - 1182. Calculate o(-28). -6 Let x(i) = -2*i**2 - 3*i + 16. Calculate x(-3). 7 Let z(u) = 138*u + 833. Give z(-6). 5 Let d(f) = f**3 - 16*f**2 + 55*f - 2. Determine d(11). -2 Let q(d) = -6*d - 901. Determine q(0). -901 Let p(a) = -a**2 + 1090. What is p(-34)? -66 Let m(t) = -3618*t - 50653. What is m(-14)? -1 Let l(y) = 3*y**3 + 89*y**2 + 285*y + 17. Calculate l(-26). 43 Let a(j) = -272*j - 4. What is a(-1)? 268 Let b(y) = -y**3 + 6*y**2 + 125*y + 114. Give b(-8). 10 Let o(d) = -93*d**2 + 188*d - 10. Determine o(2). -6 Let s(o) = -33*o - 1544. Calculate s(-42). -158 Let a(f) = 30*f**3 - 8*f**2 + 15*f - 10. Calculate a(2). 228 Let z(j) = -156*j - 1173. What is z(-7)? 
-81 Let u(r) = 5*r**2 + 284*r + 567. Calculate u(-2). 19 Let q(b) = 10*b**3 + 69*b**2 + 71*b + 3. What is q(-1)? -9 Let b(i) = -10*i - 240. Calculate b(-18). -60 Let v(m) = -m**3 + 10*m**2 + 14*m - 131. Calculate v(10). 9 Let x(r) = 2*r**3 + 100*r**2 - 430*r + 102. Determine x(-54). -6 Let m(n) = -n**2 + 49*n - 579. Determine m(28). 9 Let z(v) = -v**2 - 264*v + 20222. Calculate z(62). 10 Let s(b) = -85*b + 92. Calculate s(2). -78 Let z(d) = -9*d**2 + 28*d + 81. Calculate z(-5). -284 Let w(c) = 9*c**2 + 386*c + 340. Give w(-42). 4 Let s(n) = n**3 + 29*n**2 + 52*n - 46. Calculate s(-27). 8 Let h(y) = 30*y + 231. Calculate h(-6). 51 Let t(h) = -h**3 + 3*h**2 + 13*h - 31. Determine t(4). 5 Let v(i) = 6*i**2 + 4*i - 9. Calculate v(-3). 33 Let x(u) = 18*u + 236. What is x(-13)? 2 Let k(l) = 23*l**2 - 216*l + 201. Give k(1). 8 Let g(y) = 51*y + 101. Give g(-4). -103 Let s(d) = -99*d + 1108. Give s(11). 19 Let x(g) = -12*g - 308. Give x(-25). -8 Let t(j) = 101*j + 1162. Calculate t(-12). -50 Let d(x) = x**3 - 70*x**2 - 707*x - 317. Determine d(79). -1 Let p(o) = o**2 - 70*o + 708. Calculate p(13). -33 Let h(v
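These are all single-point evaluations of small integer polynomials, so a few lines of Python can spot-check the listed answers; for example, verifying three of the items above:

def poly(*coeffs):
    """Evaluate a polynomial given coefficients from highest degree down
    (Horner's method)."""
    def p(x):
        acc = 0
        for c in coeffs:
            acc = acc * x + c
        return acc
    return p

assert poly(-32, 2, 4, -2)(1) == -28     # m(f) = -32*f**3 + 2*f**2 + 4*f - 2
assert poly(-2036, 50884)(25) == -16     # r(a) = -2036*a + 50884
assert poly(-210, -1830)(-10) == 270     # a(t) = -210*t - 1830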
Al Gore Confident About a Post-Jobs Apple

At the AllThingsD conference in Hong Kong this week, former Vice President and current Apple board member Al Gore chatted with Walt Mossberg about a number of issues. He had a few things to say about Apple in the post-Steve Jobs era, the most important of which is that he thinks Apple has strong management and a bright future.

[Photo: Walt Mossberg interviews Apple board member Al Gore. Source: AllThingsD]

Mr. Gore’s first comments were on the passing of Mr. Jobs. He described the memorial service as a “beautiful and moving event.” And while acknowledging that the loss is terrible, he thinks that Mr. Jobs’ greatest work was the building of Apple Inc. itself. He was also asked about the management team at Apple. Mr. Gore said that each of the team members could be a CEO of a world-class organization. When asked about the possibility of them leaving to do that very thing, Mr. Gore said it was something the Apple board was keenly aware of and that they paid close attention to it. (Mr. Gore is on the compensation committee of the Apple board.) When asked about how Apple will fare without Steve Jobs at the fore, Mr. Gore responded that no one will be able to replace him. He also said that Mr. Jobs made it clear that the board should make its own decisions going forward and not play the “What would Steve do?” game. As for the board itself, Mr. Gore expressed his admiration for his fellow board members. He said that he wouldn’t change a thing about the way it has operated and feels that board members handled the privacy, medical, and succession issues quite well, all while avoiding tipping his hand as to how the board operated or what it was concerned with at the present time. “I have the deepest respect for my fellow board members, we’re all very good friends… I think that people who specialize in kibitzing about these things — I respect them, it’s good for them to think about this kind of stuff, but I wouldn’t change a thing about the way the Apple board has operated.” Mr. Gore also spoke about his role as a Google advisor. He said that he doesn’t involve himself in discussions on anything relating to any business Apple is in, and that this has resulted in him having fewer conversations with Larry Page and other members of Google’s management. For those interested, the former U.S. Vice President also spoke at length on the environment, TV (he is a partner in Current TV), the role of journalism and information, China as a U.S. competitor, and much more.
Following a meeting on a multi-vehicle highway accident last year in Gray Summit, Mo., the National Transportation Safety Board is calling for the first-ever nationwide ban on driver use of cell phones while driving a motor vehicle. The NTSB's recommendation specifically calls for all 50 states and Washington D.C. to ban all nonemergency use of portable electronic devices (other than those designed to support the driving task) for all drivers. The board also is urging the use of the National Highway Traffic Safety Administration (NHTSA) model of high-visibility enforcement to support these bans and implementation of targeted communication campaigns to inform motorists of the new law and heightened enforcement. "According to NHTSA, more than 3,000 people lost their lives last year in distraction-related accidents," NTSB Chairman Deborah Hersman said in a statement. "It is time for all of us to stand up for safety by turning off electronic devices when driving. No call, no text, no update, is worth a human life." On Aug. 5, 2010, on a section of Interstate 44 in Gray Summit, a pickup truck ran into the back of a truck-tractor that had slowed due to an active construction zone. The pickup truck, in turn, was struck from behind by a school bus. That school bus was then hit by a second school bus that had been following. Two people died and 38 others were injured in the accident. The NTSB's investigation revealed that the pickup driver sent and received 11 text messages in the 11 minutes preceding the accident. The last text was received moments before the pickup struck the truck-tractor. The NTSB said the Missouri accident is the most recent distraction accident the organization has investigated. However, the first investigation involving a driver being distracted while using a wireless electronic device occurred in 2002, when a novice driver, distracted by a conversation on her cell phone, veered off the roadway in Largo, Md., crossed the median, flipped the car over, and killed five people. Since then, the NTSB has seen the deadliness of distraction across all modes of transportation, reporting incidents from coast to coast involving all manner of vehicles, including airliners, boats, buses and tractor-trailers. The worst reported accident involving texting while operating a vehicle came in a 2008 collision of a commuter train with a freight train in Chatsworth, Calif. The commuter train engineer, who had a history of using his cell phone for personal communications while on duty, ran a red signal while texting. That train collided head on with a freight train, killing 25 people and injuring dozens more. In the last two decades, there has been exponential growth in the use of cell phone and personal electronic devices, according to the NTSB. Globally, there are 5.3 billion mobile phone subscribers, or 77 percent of the world population. In the United States, that percentage is even higher, exceeding 100 percent. Further, a Virginia Tech Transportation Institute study of commercial drivers found that a safety-critical event is 163 times more likely if a driver is texting, emailing, or accessing the Internet. "The data is clear; the time to act is now. How many more lives will be lost before we, as a society, change our attitudes about the deadliness of distractions?" Hersman said.
This well-illustrated textbook contains 2388 figures and 75 tables in 1326 pages. It is packaged into a single volume, with 55 chapters divided into two sections. As its name suggests, this is a radiology textbook written by 80 Asian--Oceanian authors working in 15 different countries in the region. As such, this textbook is the first of its kind and the editors deserve special recognition on this feat alone. This book is aimed primarily at radiology trainees and the general radiologist wanting some revision or a concise reference. This book succeeds in giving the readers an Asian--Oceanian perspective of radiology. All the authors are prominent radiologists practicing in the Asian--Oceanian region and demonstrate their wealth of experience in the various chapters throughout the book. More importantly, many of these authors have worked in countries outside their own region, giving them the unique advantage of knowing the practice of radiology in different parts of the world. Most of the material covered in this book is directly applicable to radiology anywhere in the world and not just in Asia or Oceania. The basics of radiology are covered in the first section (Imaging techniques---basic principles) which includes radiography, contrast agents, ultrasonography, computed tomography, magnetic resonance imaging, nuclear medicine, interventional radiology and radiation protection. The second section, which makes up 1000 pages of this book, covers anatomy and diseases using a system-based approach. The second section of this book is organized with respect to six organ systems: head and neck; thorax and circulation; abdomen: solid organs; abdomen: hollow organs; reproductive; and musculoskeletal. Within each system sub-section, a detailed radiological description of the organ or region is first provided as a backbone on which pathology specific to the region is then discussed. This detailed anatomical information is particularly relevant to our ever-changing specialty, where with each advance, the anatomical detail available on the images increases significantly. The diseases are presented in a problem orientated approach to the use of imaging which is useful especially for trainees since it simulates the real clinical situation. All chapters provide insights on specific diseases from an Asian--Oceanian perspective. Sections that deserve special mention for their regional perspective on diseases include those on diseases of the ear, nose and throat, nasopharyngeal carcinoma, diseases of the oesophagus, stomach and duodenum. In addition, in the final sub-section (multisystem diseases and future trends), there are separate chapters on tropical diseases and paediatrics to further elaborate on the regional experience. Given the constraints of production, the editors and authors deserve to be congratulated on their efforts in providing a chapter on the imaging of SARS. They have provided important information of the first worldwide epidemic of this millennium, severe acute respiratory syndrome, making it one of the very few textbooks with such current and timely information. This textbook is strong in ultrasound, with illustrative examples of diseases covering all organs. This is probably a reflection of the method of radiology practice in the region and the expertise thus available. This is sometimes a deficiency in the American and European radiology textbooks, and this book therefore succeeds in providing a different perspective to the investigative approach to diseases in this part of the world. 
It is difficult not to compare this textbook with other major general textbooks in radiology, namely, Grainger and Allison's *Diagnostic Radiology* and *Textbook of Radiology and Imaging* by David Sutton. In common with both of these major works, the *Asian--Oceanian Textbook of Radiology* is comprehensive, well written and well illustrated. The information provided is succinct, as a result of which the editors have been able to keep this a single volume book (particularly attractive in a busy practice or study session from a practical point of view). There is room for improvement for a second edition, particularly in the quality of some of the images. In addition, there have been major improvements in the spatial resolution for computed tomography with multi-detector machines. This and other radiology textbooks will need to incorporate new images and discuss the use of planes other than the axial plane as the primary plane of imaging or illustration. In summary, this is a concise and well-written textbook, which should serve both radiology trainees and radiologists well.
Wild Planet Wild Alaska Sockeye Salmon Fresh from the pristine, icy waters of Alaska, our Wild Sockeye Salmon is a deep red color and chock full of true salmon flavor. Skinless and boneless fillets are hand packed and cooked just once to retain their natural juices, rich with omega-3 oils. Combine with cucumbers, dill and a touch of sour cream on pumpernickel bread for a protein-packed sandwich, or try our Savory Salmon Cakes for a quick and easy meal. Top a whole grain cracker with sockeye salmon, capers and a squeeze of lemon for a light and healthy snack. OU Kosher Pareve. Available in a 6-ounce can.
Hey, Verizon: Don't Mess With Jeff Jarvis

Jeff Jarvis, director of CUNY’s Tow-Knight Center for Entrepreneurial Journalism, is a prominent blogger, podcaster and author of a number of books on the future of journalism and the Internet. This week, when Verizon refused to activate his new Nexus 7 tablet, Jarvis knew just how to respond. He also knew Verizon’s foot dragging violated the rules that govern the company’s 4G/LTE mobile broadband service — or more precisely, the openness commitments that Verizon made in return for the right to use the public airwaves and provide that service. So Jarvis took his case to Twitter. He tweeted at Verizon’s customer support account and then turned to Verizon’s corporate communications office. After a series of unhelpful responses, he blogged about his experiences. He then took his complaints to the Federal Communications Commission. Free Press has been down this road with Verizon too, filing complaints about the same sort of violations. Our complaints and others ultimately led to Verizon promising, in July 2012, not to break these rules anymore — and making a $1.25 million “contribution” to the treasury in lieu of an official fine. Yet here it is, testing the limits of those commitments once more. What exactly are these openness commitments? Most of us are used to phone-company tactics that force subscribers to buy phones and tablets straight from the carrier. This isn’t the way the world works with other technologies. Imagine being told that you had to buy your laptop from Comcast if you wanted it to work on your home Internet connection. But that’s the way that wireless has worked for too long. The FCC tried to give users more control in 2008, putting conditions on Verizon’s use of a huge swath of the public airwaves used to power its network. Here’s the language Verizon agreed to about keeping that network open: Licensees … shall not deny, limit, or restrict the ability of their customers to use the devices and applications of their choice. So in much the same way that the electric company can’t tell you what kind of TV you can buy and plug into your home outlet, Verizon shouldn’t be allowed to tell you what apps and devices you can use on its network. This is a critical protection — giving people more choice in the marketplace while fostering innovation and competition. Back in 2011, Verizon was caught blocking tethering applications, which allow people to use their phones as mobile hotspots and access the Internet on their computers via Wi-Fi. It’s a simple idea: You paid for the data, and you should be able to use it any way you want, and on any device you choose. Like Jarvis, we filed an FCC complaint, and in late 2012 the FCC made Verizon stop blocking tethering applications, and made it stop charging people twice for the same data. Verizon is working hard to undermine openness not just on wireless devices but across the Internet. In court last week, Verizon argued that it should be allowed to edit the Internet — blocking sites if it wants, or making them pay more to reach Verizon customers. It’s all part of Verizon's campaign to undermine the FCC’s authority to protect consumers online. This is like Exxon saying the Environmental Protection Agency lacks the authority to stop polluters from destroying the environment. Jeff Jarvis has filed his complaint about Verizon’s blocking. It’s now up to the FCC to stop Verizon’s latest assault on open networks.
Q: UITextField shouldChangeCharactersInRange as external function

I want to validate UITextField input for multiple view controllers. The following works:

validate.h

#import <UIKit/UIKit.h>

@interface validate : UITextField <UITextFieldDelegate>
@end

validate.m

#import "validate.h"

@implementation validate
@end

viewController.h

#import <UIKit/UIKit.h>
#include "limiteTextField.h"

@interface ViewController : UIViewController <UITextFieldDelegate>
@property (strong, nonatomic) IBOutlet limiteTextField *myTextField;
@end

viewController.m

#import "ViewController.h"

@interface ViewController ()
@end

@implementation ViewController

- (BOOL)textField:(UITextField *)textField shouldChangeCharactersInRange:(NSRange)range replacementString:(NSString *)string {
    // my code for validating
}

- (void)viewDidLoad {
    [super viewDidLoad];
    _myTextField.delegate = self;
}

@end

I want to be able to use shouldChangeCharactersInRange as an external function so I don't have to rewrite all its code for each view controller.

A: One approach is to create some kind of validator class with a singleton and assign it as the text field's delegate:

TextFieldValidator.h

#import <UIKit/UIKit.h>

@interface TextFieldValidator : NSObject <UITextFieldDelegate>
+ (instancetype)sharedValidator;
@end

TextFieldValidator.m

#import "TextFieldValidator.h"

@implementation TextFieldValidator

+ (instancetype)sharedValidator {
    // dispatch_once guarantees the shared instance is created exactly once
    static TextFieldValidator *sharedValidator = nil;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        sharedValidator = [[self alloc] init];
    });
    return sharedValidator;
}

// Example rule: allow only decimal digits in the field
- (BOOL)textField:(UITextField *)textField shouldChangeCharactersInRange:(NSRange)range replacementString:(NSString *)string {
    NSCharacterSet *allowedCharacters = [NSCharacterSet decimalDigitCharacterSet];
    return [[textField.text stringByReplacingCharactersInRange:range withString:string] rangeOfCharacterFromSet:allowedCharacters.invertedSet].location == NSNotFound;
}

@end

SomeViewController.m

// [...]
self.textField.delegate = [TextFieldValidator sharedValidator];
// [...]
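One general UIKit point worth knowing about this pattern (not stated in the answer above): UITextField's delegate property is weak, so a validator created ad hoc inside a method would be deallocated immediately and the callbacks would never fire. A long-lived singleton sidesteps that lifetime problem, at the cost of every text field sharing the same validation rule.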
Q: Bootstrap input group line break formatting

Basically I have a bunch of rows with a check box and a label taking up 2 column spaces. Some of the labels are longer than others, so when you resize the browser or are viewing on a mobile device, the columns with longer labels will collapse to a second row while the shorter ones stay beside their check box. It looks like crap.

HTML:

<div class="row">
  <div class="col-lg-2">
    <div class="input-group">
      <input type="checkbox"> Small Label
    </div>
  </div>
  <div class="col-lg-2">
    <div class="input-group">
      <input type="checkbox"> Big Label that collapses first
    </div>
  </div>
</div>

Is there a way to make it so that if one of them collapses, then the whole row does? Even better would be a dynamic font that worked like an image and just grew and shrank, taking up a maximum of 100% as necessary, so as not to cause a collapse at all. I could just use images, but I have a lot of these labels and it would take forever to make an image for each.

A: You can add custom CSS to your bootstrap style and define some simple CSS rules describing how you would like to force the style to behave...

CSS Example:

.input-group {
    display: inline;
}

I think the right HTML element for this is a list... although, if you are going to edit the CSS, it's good to know that you can add a custom css file to your project and use a CSS class with your bootstrap style like this:

CSS:

.checkbox-inline {
    display: inline;
}

HTML:

<div class="input-group checkbox-inline">
  <input type="checkbox"> Small Label
</div>

There are many possible answers... maybe you will also find this question useful.
Prothrombin time, activated partial thromboplastin time and dilute Russell's Viper Venom times are not shorter in patients with the prothrombin G20210A mutation, and dilute Russell's Viper Venom time may be longer. Prothrombin G20210A (PT20210) carriers have increased prothrombin levels and increased risk for venous thrombosis. We hypothesized PT20210 carriers would have decreased PT, aPTT, and dRVVT clotting times. We reviewed 1186 thrombotic risk panels that included PT, aPTT, dRVVT, and PT20210 genotype with potential confounding variables, excluding samples consistent with anticoagulant therapy or lupus anticoagulant presence. We examined relationships of PT20210 with PT, aPTT, and dRVVT correcting for covariates using multivariate regression. We confirmed associations in 1876 separate panel results and a group of homozygotes for PT20210 and used general linear models to determine if associated tests predict PT20210 status. Neither PT, aPTT, nor dRVVT was shorter in PT20210 carriers. Contrary to our hypothesis, PT20210 was significantly associated with higher dRVVT (p=0.001), but not PT or aPTT. dRVVT differences were significant in a replicate sample (p=0.035) and an additional sample of PT20210 homozygotes (p=0.02). Of all variables available, only dRVVT predicted PT20210 carrier status (p=0.0008, AUC=0.64). We observed an association between longer dRVVT and the prothrombin G20210A mutation in a retrospective observational study. These findings merit further study in large well-characterized clinical cohorts and laboratory research experiments.
/**
 * Copyright (c) Jonathan Cardoso Machado. All Rights Reserved.
 *
 * This source code is licensed under the MIT license found in the
 * LICENSE file in the root directory of this source tree.
 */
import 'should'

import { Curl, CurlCode, Easy } from '../../lib'

const url = 'http://example.com/'

// This is the only test that does not use an express server.
// It makes a request to a live server, which can cause issues if there are network problems.
// @TODO Run a server side by side with the test suite to remove the need to make an external request
let curl: Easy

describe('easy', () => {
  beforeEach(() => {
    curl = new Easy()
    curl.setOpt('URL', url)
  })

  afterEach(() => {
    curl.close()
  })

  it('works', () => {
    const retCode = curl.perform()
    retCode.should.be.equal(CurlCode.CURLE_OK)
  })

  describe('callbacks', () => {
    it('WRITEFUNCTION - should rethrow error', () => {
      curl.setOpt('WRITEFUNCTION', () => {
        throw new Error('Error thrown on callback')
      })

      const perform = () => curl.perform()
      perform.should.throw('Error thrown on callback')
    })

    it('WRITEFUNCTION - should throw error if has invalid return type', () => {
      // @ts-ignore
      curl.setOpt('WRITEFUNCTION', () => {
        return {}
      })

      const perform = () => curl.perform()
      perform.should.throw(
        'Return value from the WRITE callback must be an integer.',
      )
    })

    it('HEADERFUNCTION - should rethrow error', () => {
      curl.setOpt('HEADERFUNCTION', () => {
        throw new Error('Error thrown on callback')
      })

      const perform = () => curl.perform()
      perform.should.throw('Error thrown on callback')
    })

    it('HEADERFUNCTION - should throw error if has invalid return type', () => {
      // @ts-ignore
      curl.setOpt('HEADERFUNCTION', () => {
        return {}
      })

      const perform = () => curl.perform()
      perform.should.throw(
        'Return value from the HEADER callback must be an integer.',
      )
    })

    it('READFUNCTION - should rethrow error', () => {
      curl.setOpt('UPLOAD', true)
      // @ts-ignore
      curl.setOpt('READFUNCTION', () => {
        throw new Error('Error thrown on callback')
      })

      const perform = () => curl.perform()
      perform.should.throw('Error thrown on callback')
    })

    it('READFUNCTION - should throw error if has invalid return type', () => {
      curl.setOpt('UPLOAD', true)
      // @ts-ignore
      curl.setOpt('READFUNCTION', () => {
        return {}
      })

      const perform = () => curl.perform()
      perform.should.throw(
        'Return value from the READ callback must be an integer.',
      )
    })

    if (Curl.isVersionGreaterOrEqualThan(7, 64, 0)) {
      it('TRAILERFUNCTION - should rethrow error', () => {
        curl.setOpt('UPLOAD', true)
        curl.setOpt('HTTPHEADER', ['x-random-header: random-value'])
        // @ts-ignore
        curl.setOpt('TRAILERFUNCTION', () => {
          throw new Error('Error thrown on callback')
        })
        curl.setOpt(Curl.option.READFUNCTION, (buffer) => {
          const data = 'HELLO'
          buffer.write(data)
          return 0
        })

        const perform = () => curl.perform()
        perform.should.throw('Error thrown on callback')
      })

      it('TRAILERFUNCTION - should throw error if has invalid return type', () => {
        curl.setOpt('UPLOAD', true)
        curl.setOpt('HTTPHEADER', ['x-random-header: random-value'])
        // @ts-ignore
        curl.setOpt('TRAILERFUNCTION', () => {
          return {}
        })
        curl.setOpt(Curl.option.READFUNCTION, (buffer) => {
          const data = 'HELLO'
          buffer.write(data)
          return 0
        })

        const perform = () => curl.perform()
        perform.should.throw(
          'Return value from the Trailer callback must be an array of strings or false.',
        )
      })
    }
  })
})
The use of sodium valproate in a case of status epilepticus. A child in status epilepticus, who did not respond to intravenous diazepam, was treated with sodium valproate by naso-gastric tube. Subsequent clinical and encephalographic improvement appeared to be related to the sodium valproate, and its value in cases of status epilepticus is discussed.
‘Blunt Talk’s’ Seth MacFarlane on Casting Patrick Stewart in His First Single-Camera Comedy: ‘It’s a No Brainer’

Patrick Stewart may be most beloved for his sci-fi and classical theater roles, but to his “American Dad” showrunner Seth MacFarlane, his comedic chops are just as impressive. Knowing how versatile Stewart is, MacFarlane set out to cast him in a single-camera comedy, ultimately culminating in the new Starz series “Blunt Talk,” the cast and crew shared on stage at the Television Critics Association’s summer press tour. “One of the pleasures of my career is to connect people of whom I am a fan, and I’d been a fan of Patrick’s for a number of years and of Jonathan’s [Ames] work,” MacFarlane said. “It struck me as criminal that Patrick had never been cast in a single-camera comedy. He very quietly – or not so quietly if you’re me – has conquered every genre he’s attempted. He’s done hourlong dramas, live theater, multi-camera comedy, single-camera comedy, hosted SNL; every time he’s stepped into a new genre he’s performed like he’s been in it for years. To me it was a no brainer. It was a matter of someone creating a character worthy of his talents. For me it was literally about putting those pieces together.” The person who created that character, Walter Blunt, was Jonathan Ames, best known for “Bored to Death.” The showrunner saw Stewart’s outlandish news anchor character as a continuation — though slightly more subdued — of the Howard Beale character from “Network.” “I rewatched ‘Network’ and thought, I want more Howard Beale,” he said. “At some point the movie became more about the William Holden, Faye Dunaway relationship. I saw this as a continuation – if ‘Network’ had continued, let’s find out if Howard Beale continued his broadcast. I see the show as a cross between ‘Network’ and P.G. Wodehouse.” “Blunt Talk” will premiere on Saturday, August 22 at 9 p.m. ET on Starz.
Unlike other rodents, juvenile female guinea pigs rarely display sexual receptivity (lordosis) following a variety of estradiol and progesterone treatments that elicit estrous behavior in adult females. The long-range goals of the experiments described in this application are to elucidate the neural mechanisms underlying the behavioral hyporesponsiveness to steroid hormones in immature females. The primary foci of this proposal are a caudally-projecting, steroid responsive pathway originating in the hypothalamus that appears to mediate the behavioral actions of estradiol and progesterone, the neurotransmitters acting within this pathway, and the activity of other central nervous system components, particularly one originating in or traversing the medial preoptic area, that may inhibit or override its activity, during the juvenile period. A combination of physiological, pharmacological and neuroanatomical techniques will be used to determine the integrity, in juvenile females, of the putative pathways mediating steroid-induced lordosis. Several neurotransmitter systems will be studied, to ascertain whether hypoactivity in neurotransmitter systems thought to be stimulatory for lordosis (norepinephrine, substance P) or hyperactivity of systems thought to inhibit receptivity (endogenous opiates) might underlie the juvenile female's immature behavioral response to steroid hormones. The results of these studies will provide insight into the mechanisms of steroid hormone action in the central nervous system, especially within the context of puberty. These data will help establish a framework within which one can study normal, as well as aberrant sexual maturation and/or responsiveness to estradiol and progesterone in humans and laboratory animals.
The head of Russia's Roscosmos space agency has vowed to verify whether or not the United States actually landed on the moon, according to the Associated Press. Discussing a proposed Russian mission to the moon, Dmitry Rogozin jokingly said in a Saturday video posted to Twitter: "I answer questions of the President of Moldova: whether there were Americans on the moon... We have set this objective to fly and verify whether they've been there or not." "I answer the questions of the President of Moldova: were the Americans on the Moon, why does @roscosmos have fighter jets and trams, and how will Russian cosmonautics help Moldovan grapes? https://t.co/IRV3HUT6Sz" — Dmitry Rogozin (@Rogozin) November 24, 2018. Rogozin was responding to a question about whether NASA actually landed on the moon almost 50 years ago. While Rogozin's comments may have been made in jest, in 2015 a former spokesman for the Russian Investigative Committee called for a probe into the American moon landings. Until Russia can "verify" the moon landing, we anticipate that a lively discussion of Van Allen belt radiation and laser reflectors will ensue...
WASHINGTON (Reuters) - Philip Morris International Inc should not be allowed to claim its iQOS electronic tobacco product is less risky than cigarettes, U.S. health advisers said on Thursday, dealing a blow to the company as it seeks to strengthen its portfolio of alternative nicotine devices. The recommendation is not binding and the U.S. Food and Drug Administration could still allow Philip Morris to make such a claim, but some analysts think the agency might ask for additional data first. “It’s a process,” said Bonnie Herzog, an analyst at Wells Fargo. The FDA will likely approve the request eventually, she said, “but timing is tough to predict.” Philip Morris, which has spent more than $3 billion to develop products that can counteract the decline in traditional cigarette sales, said it was encouraged by some of the committee members’ comments that iQOS may have risk-reduction potential. “We are confident in our ability to address the valid questions raised by the Committee with the FDA as the review process for our application continues,” Corey Henry, a Philip Morris spokesman, said in a statement. FDA Commissioner Scott Gottlieb recently proposed a broad tobacco policy shift that would reduce nicotine in cigarettes to “non-addictive” levels while increasing development of lower-risk alternatives for those unable to quit. IQOS is a sleek, penlike device that heats tobacco but does not ignite it - an approach Philip Morris says produces far lower levels of carcinogens than regular cigarettes. It is used by nearly 4 million people in 30 markets outside the United States but needs FDA authorization to be marketed in America. Last month, a Reuters investigation described irregularities in the clinical trials that supported Philip Morris' iQOS application to the FDA. (here) and (here) The company’s shares fell 2.8 percent to close at $107.49 on Thursday, after falling as much as 6.8 percent. Matthew Myers, president of the Campaign for Tobacco Free Kids, said panelists “identified that serious questions remain” about the company’s application. He said it could amend the application and the panel’s recommendation does not rule out an ultimate approval. The panel said Philip Morris had not proven that iQOS reduced harm compared with cigarettes. It did conclude that the product exposes users to lower levels of harmful chemicals but said the company had not shown that lowering exposure to those chemicals is reasonably likely to translate into a measurable reduction in disease or death. Philip Morris needs to show both in order to claim in its marketing materials that the product reduces a user’s exposure to harmful chemicals. Some panelists were concerned that not all the harmful or potentially harmful chemicals in iQOS were lower than in cigarettes. Philip Morris presented data showing an overall exposure reduction of about 95 percent. “The negative recommendations did not come as a surprise,” said Gregory Conley, president of the American Vaping Association. He said the panelists “disconnected themselves from the facts in favor of ideology.” The FDA is expected to decide whether Philip Morris can sell iQOS within the next few months. It will decide separately whether to authorize the modified-risk claims. There is no time frame for when that decision might come. If cleared, iQOS would be sold in the United States by Philip Morris’ partner Altria Group Inc. Altria shares closed 2.3 percent lower at $69.91.
CsPbBr3 Quantum Dots 2.0: Benzenesulfonic Acid Equivalent Ligand Awakens Complete Purification. The stability and optoelectronic device performance of perovskite quantum dots (Pe-QDs) are severely limited by present ligand strategies since these ligands exhibit a highly dynamic binding state, resulting in serious complications in QD purification and storage. Here, a "Br-equivalent" ligand strategy is developed in which the proposed strong ionic sulfonate heads, for example, benzenesulfonic acid, can firmly bind to the exposed Pb ions to form a steady binding state, and can also effectively eliminate the exciton trapping probability due to bromide vacancies. From these two aspects, the sulfonate heads play a similar role as natural Br ions in a perfect perovskite lattice. Using this approach, high photoluminescence quantum yield (PL QY) > 90% is facilely achieved without the need for amine-related ligands. Furthermore, the prepared PL QYs are well maintained after eight purification cycles, more than five months of storage, and high-flux photo-irradiation. This is the first report of high and versatile stabilities of Pe-QD, which should enable their improved application in lighting, displays, and biologic imaging.
He started the season 9-0, had a microscopic earned run average of 1.71 and led the American League in victories. It was only June 8. The start to the season by Clay Buchholz had the makings of a summer that dreams are made of, without a doubt. Then, however, his body failed him. The twinge in his neck became deeper pain — far too much to deal with while throwing fastballs, cutters and curves. The disabled list beckoned. He’s still there, too, though not for long. Thursday’s rehabilitation start against the Rochester Red Wings at Frontier Field went precisely according to plan. Which means he’ll be going back onto the Boston Red Sox active roster very shortly. Buchholz pitched a solid 3 2/3 innings against the Wings — allowing four hits and two runs while striking out five — and will be rejoining the Red Sox on Saturday. He is scheduled to start on Tuesday at Tampa Bay, his first major league start since June 8. “Overall I felt really good about everything,” Buchholz said after the 7-2 victory over the Wings which evened the best-of-five Governors’ Cup first-round series 1-1. While he reached his pitch limit earlier than he had wanted — he left with two outs in the fourth after throwing his 71st pitch — he was still very close to the old Buchholz. He struck out James Beresford looking and Chris Parmelee swinging in the first inning. He froze Eric Fryer with a breaking ball for a called strike three to end a first-and-third threat in the second. He then fanned Antoan Richardson and Eduardo Escobar in a 1-2-3 third inning. “He was really pounding the strike zone,” Fryer said. “He had a nice cutter and he was keeping us off balance.” Most importantly for Buchholz, he felt fine. There have been critics during his extended stay on the disabled list, folks who questioned his toughness and pain threshold. “When you want to be out there, when you’re on a run like I put together — that doesn’t happen every year for a lot of guys, or anybody really — and then you have to just stop and not be able to throw, it sort of stinks. “I’ve been criticized over all this but I know how my body feels. If I go out there and I’m not me, I’m not helping the team at all. I want to be ready. I’m not saying I can’t feel nothing (in his neck when he returns to the mound for the Red Sox), but I definitely can’t be feeling like I felt.” The second inning was a bit of a chore. Buchholz needed 26 pitches, though he faced only five batters. He issued a one-out walk to Aaron Hicks, Eric Farris lined out to left, then Ray Olmedo’s chopper to third became an infield single as Justin Henry couldn’t make a play. He escaped trouble by striking out Fryer, however, with a pitch at the knees. He was in the zone often (52 strikes) and satisfied with his velocity (89-93 mph). “Whenever I try to reach back and throw something, I know it will be there,” he said. Somewhere else he knows he’ll be: back in the Red Sox rotation next week.
Winter skiing, springtime on the links, summer sailing and autumn leaf-peeping — the weather forecast is the driving force behind the planning of many vacations. But when it comes to predicting the weather, WBAL meteorologist Tony Pann takes it all in stride. Pann grew up in the blustery, changeable climate of Chicago, and has since delivered the weather report for television stations in New York and Washington, as well as Baltimore. "I've seen it all," he says good-naturedly. With forecasters facing a torrent of criticism about a recent never-transpiring snowstorm, Pann reminds us that Mother Nature remains in control. "Even in this day and age, with the...
The present invention relates to a data communication apparatus and a data communication method that shorten the time needed for the previous-procedure performed prior to data communication using a modem, such as facsimile communication. Recently, data communication apparatuses have carried out data communication using a V.34 modem (33.6 kbps) conforming to the Recommendations of the ITU-T. The ITU-T also recommends T.30 Annex F (so-called Super G3) as a facsimile communication standard using the V.34 modem for facsimile apparatuses. A previous-procedure for facsimile communication is carried out according to this standard procedure, after which communication of image data is executed. Such a communication protocol will be explained based on the sequence chart illustrated in FIG. 1.

FIG. 1 is a control signal view of a communication protocol for facsimile communication according to the prior art. Referring to FIG. 1, reference character 19a denotes a communication procedure for selecting a modulation mode from among V.34 half-duplex, V.34 full-duplex, V.17 half-duplex, etc. Reference character 19b denotes a communication procedure for line probing, to check the line and determine various kinds of parameters. Reference character 19c denotes a communication procedure for modem training. Reference character 19d denotes a communication procedure for setting modem parameters. Reference character 19e denotes a communication procedure for exchanging facsimile control signals. And reference character 19f denotes a data communication procedure for the primary channel. The upper side in the diagram is the sequence for a caller modem, the lower side is the sequence for an answer modem, and the sequences progress from left to right.

The above communication protocol will be specifically described as follows. First, in the communication procedure 19a for selecting a modulation mode, the selection of a modulation mode and communication procedure that permit communication between the caller modem and the answer modem is carried out through a V.21 modem (300 bps, full-duplex) after a line connection is established. A facsimile apparatus using a V.34 modem selects the V.34 modem as the modulation mode and facsimile communication as the communication procedure. After that, in the communication procedure 19b for line probing, the caller modem transmits a line probing tone. The line probing tone is received by the answer modem, a line inspection is carried out, and a training parameter is selected based on the result of the line inspection. In the communication procedure 19c for modem training, the caller modem sends training signals based on the training parameter selected under the line probing procedure 19b, while the answer modem receives the training signals, learns a filter coefficient for an adaptive equalizer that compensates the line characteristic, and checks the reception quality of the training signals. In the communication procedure 19d for selecting modem parameters, modem parameters are negotiated between the caller modem and the answer modem in full-duplex communication at 1200 bps. As a result, an optimal modem parameter is selected from the modem parameters preset in the apparatus, the result of the line inspection, and the inspection of the reception quality of the training signals. In the communication procedure 19e for facsimile control signals, negotiation of the facsimile control signals NSF, CSI, DIS, TSI, DCS, CFR, etc. is executed in full-duplex communication at 1200 bps.
Then, in the data communication procedure 19f, image data is transmitted from the caller modem in half-duplex communication at 2400 bps to 33.6 kbps and is received by the answer modem. When communication is performed at the maximum rate of 33.6 kbps, image data can be transmitted in approximately three seconds per A4 sheet. The caller and answer modems, which execute the aforementioned communication protocol, carry out communication in accordance with the training parameter selected under the line probing procedure 19b and the modem parameter selected under the modem parameter selection procedure 19d. To compensate the line characteristic, the receiving modem executes communication using the filter coefficient learned in the modem training 19c. This ensures optimal data communication according to the line quality.

However, the above-described prior art structure involves five stages of procedure before the sending of image data starts after line establishment, and thus requires about seven seconds. By contrast, since transmission of a single sheet of image data at the maximum communication rate of 33.6 kbps takes about three seconds, the procedures occupy over 60% of the entire time of eleven seconds required for transmission of one sheet of an original, including about one second for the post-procedure. The time needed for the previous-procedure grows as the number of transmissions and receptions increases, generating wasted time and communication cost.

In consideration of the above-mentioned problems, an object of the present invention is to provide a data communication apparatus that can shorten the time for setting the various modem parameters and the time for the previous-procedure, including the modem training time, before image transmission. A further object of the present invention is to provide a data communication apparatus in which the supported short previous-procedure function operates appropriately even when the communication error rate is high or the line characteristics change. More specifically, there is provided a data communication apparatus comprising: storing means for storing various kinds of modem control information for each destination in association with an operation key; calling means for generating a call to said destination by a transmission command from said operation key to start communication; and communication control means for transmitting a shift notify signal indicative of a shift to a short previous-procedure in the case of communication using said calling means, and thereafter controlling the modem based on said various kinds of control information so as to shorten the previous-procedure and carry out data transmission. This constitutes the data communication apparatus according to the first aspect of the present invention.
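To make the first-aspect structure concrete, the following is a minimal sketch, in TypeScript, of how the storing means, calling means and communication control means could fit together. Everything here (the Modem interface, the store map, all function names) is a hypothetical illustration for this description, not part of the claimed apparatus or of any real modem API.

// Modem control information obtained in a full (normal) previous-procedure.
interface ModemControlInfo {
  trainingParameters: number[] // results of line probing / training (19b, 19c)
  modemParameters: number[]    // negotiated parameter set (19d)
}

// "Storing means": one entry per destination, keyed by an operation (one-touch) key.
interface DestinationEntry {
  phoneNumber: string
  controlInfo?: ModemControlInfo // absent until a normal previous-procedure has run
}

const store = new Map<number, DestinationEntry>()

// Hypothetical modem driver; the real signalling is far more involved.
interface Modem {
  dial(phoneNumber: string): void
  sendShiftNotifySignal(): void // sent in place of the V.34 calling menu signal
  applyControlInfo(info: ModemControlInfo): void
  runNormalPreviousProcedure(): ModemControlInfo // phases 19a-19e
  transmitImageData(): void // phase 19f
}

// "Communication control means": use the short previous-procedure when
// control information for the destination is already stored.
function transmit(modem: Modem, operationKey: number): void {
  const entry = store.get(operationKey)
  if (!entry) throw new Error(`no destination registered on key ${operationKey}`)
  modem.dial(entry.phoneNumber)
  if (entry.controlInfo) {
    modem.sendShiftNotifySignal() // tell the answer side to skip ahead
    modem.applyControlInfo(entry.controlInfo)
  } else {
    // Normal previous-procedure; register its result for next time.
    entry.controlInfo = modem.runNormalPreviousProcedure()
  }
  modem.transmitImageData()
}

In this sketch the decision to use the short previous-procedure reduces to a lookup keyed by the operation key, which is exactly the simplification the structure described above aims for.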
According to the above-mentioned structure, communication to the destination registered on the operation key is executed in the short previous-procedure based on the control information stored in the storing means, so that an operator can shorten communication time by a considerably simple operation. Since it is unnecessary to obtain modem control information suitable for the destination through communication in the previous-procedure, the time required for the previous-procedure can be greatly shortened. The control information described here comprises the modem parameters and the modem training times.

According to the second aspect of the present invention, in the data communication apparatus of the first aspect, the communication control means transmits the shift notification signal that indicates the shift to a short previous-procedure in place of a calling menu signal, in response to a deformed answer signal from the answer-side apparatus with respect to a calling signal transmitted in a previous-procedure configured to conform to ITU-T Recommendation V.34 dated September 1994. According to this structure, the answer-side apparatus receives either the calling menu signal of the V.34 protocol or the shift notify signal indicating the shift to the short previous-procedure. Since the apparatus has only to distinguish these signals, it can receive the shift notify signal for the short previous-procedure without requiring a great change in the receiving-signal processing circuit.

According to the third aspect of the present invention, in the data communication apparatus of the first or second aspect, the communication control means confirms whether or not the various kinds of modem control information are stored in said storing means prior to the shift to the short previous-procedure, and executes the short previous-procedure only when said control information is stored. According to this structure, the control performed by the communication control means is simple: it checks the written state of the storing means and determines whether control information has been written to it, so that it can easily switch the short previous-procedure on or off.

According to the fourth aspect of the present invention, in the data communication apparatus of the first or second aspect, the data communication apparatus further comprises parameter registering means for storing, in said storing means, the various kinds of modem control information obtained in the previous-procedure executed with respect to the destination registered in said storing means, in a state in which only the destination information corresponding to said operation key is stored in said storing means. According to this structure, once the destination information has been registered in correspondence with the operation key, the control information actually obtained in communication with the destination in the normal previous-procedure is automatically stored, and subsequent communication can be automatically executed in the short previous-procedure. This affords the operator good operability. The destination information described here specifies the destination, such as the destination's name, telephone number, or ID information.
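As a rough illustration of the answer-side behavior of the second aspect just described, the receiver only has to tell two signals apart; the signal names and the AnswerModem interface below are invented placeholders, defined neither by the patent nor by V.34.

// The two signals the answer side must distinguish.
enum IncomingSignal {
  CallingMenu, // an ordinary V.34 previous-procedure follows
  ShiftNotify, // the caller requests the short previous-procedure
}

interface AnswerModem {
  runNormalPreviousProcedureAsAnswerer(): void // probing, training, parameter exchange
  applyStoredControlInfo(): void               // reuse previously learned parameters
  receiveImageData(): void
}

function onIncomingSignal(modem: AnswerModem, signal: IncomingSignal): void {
  if (signal === IncomingSignal.CallingMenu) {
    modem.runNormalPreviousProcedureAsAnswerer()
  } else {
    modem.applyStoredControlInfo() // the long procedure is skipped entirely
  }
  modem.receiveImageData()
}

Because the branch is a simple two-way discrimination, existing receiving-signal processing needs little modification, which is the point made above.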
According to the fifth aspect of the present invention, in the data communication apparatus of the fourth aspect, the apparatus further comprises error detecting means for detecting a communication error, and the parameter registering means does not register control information when the error detecting means detects an error in the previous-procedure used to register the parameters for the short previous-procedure. The above-described parameter registering means of the fifth aspect automatically registers the various kinds of modem control information obtained in the previous-procedure. However, if control information from a previous-procedure in which an error has occurred were registered, there would be a high possibility that an error would occur again. For this reason, registration of such control information is not carried out. Since the short previous-procedure is then not executed in the next communication, the case in which communication time is increased by a short previous-procedure execution error can be avoided.

According to the sixth aspect of the present invention, in the data communication apparatus of the fifth aspect, the parameter registering means once stores the control information obtained in the normal previous-procedure in said storing means, after which said control information is erased, thereby effecting processing in which no registration of control information is carried out. According to this structure, since the processing for prohibiting registration of the parameters (control information) and the processing for erasing control information registered in the storing means can be executed in the same processing flow, the processing and the software can be simplified.

According to the seventh aspect of the present invention, in the data communication apparatus of any one of the first to sixth aspects, when the short previous-procedure proceeds abnormally, said communication control means shifts to communication by the normal previous-procedure or by a second communication procedure having a lower communication speed, so as to continue communication after a lapse of a predetermined period of time. According to this structure, when the short previous-procedure does not proceed normally within a predetermined period of time, communication is continued in another communication procedure whose communication speed is lower. For this reason, the case in which a communication error is caused by failure of the short procedure can be avoided. In this case, the communication protocol shifted to may be one whose communication speed is the same as that used during the short previous-procedure, or one whose communication speed is lower.

According to the eighth aspect of the present invention, in the data communication apparatus of the seventh aspect, the second communication procedure is a communication protocol according to ITU-T Recommendation T.30, dated 1976, and the predetermined period is a time during which the NSF/CSI/DIS signals of T.30 can be received twice or more after the shift to the second communication procedure. According to this structure, in the case of failure of the short previous-procedure, a shift to the T.30 communication procedure is carried out, and a standby state for a 300 bps control signal is set. The NSF/CSI/DIS signals are repeatedly transmitted.
For this reason, even if the first reception of a control signal ends in failure, at least two reception opportunities can be ensured, so that no communication error is generated. Therefore, even if the destination apparatus is changed to one that is not equipped with the short previous-procedure, or the short previous-procedure ends in failure, the shift to the T.30 protocol can be executed reliably.

According to the ninth aspect of the present invention, in the data communication apparatus of any one of the first to sixth aspects, when the short previous-procedure does not proceed normally, the communication control means changes from the short previous-procedure to a second communication procedure having a lower communication speed, so as to continue communication after the number of retrials of said short previous-procedure reaches a fixed value. Whereas in the seventh aspect the shift to the second communication procedure is executed after a lapse of a predetermined period of time, in the ninth aspect the shift to the second communication procedure is triggered by the number of retrials of the short previous-procedure.

According to the tenth aspect of the present invention, in the data communication apparatus of any one of the first to fourth aspects, the apparatus further comprises error detecting means for detecting a communication error, wherein when said error detecting means detects an error after execution of the short previous-procedure has started, said communication control means retains the destination information stored in said storing means while erasing only the corresponding various kinds of control information in said storing means. In other words, if some error occurs after the start of execution of the short previous-procedure and the control information used at that time were used again in the future, there would be a high possibility that an error would occur again. For this reason, such information is erased. Such a communication error occurring after the start of execution of the short previous-procedure covers both the case in which the previous-procedure is still executing and the case in which data communication is executing after the previous-procedure has been accomplished. However, the subsequent treatment of the control information differs depending on whether or not the short previous-procedure has ended.

According to the eleventh aspect of the present invention, in the data communication apparatus of the seventh aspect, the apparatus further comprises parameter registering means for executing a normal previous-procedure in the same call so as to newly store, in said storing means, the various kinds of modem control information obtained in the executed normal previous-procedure when the short previous-procedure does not proceed normally. According to this structure, the control information stored in the storing means is automatically updated within the same call. Therefore, the next communication is started in the short previous-procedure. An error may occur again on the basis of the updated control information; however, if such errors are repeated, the execution of the short previous-procedure itself is prohibited, as described later.

According to the twelfth aspect of the present invention, in the data communication apparatus of the tenth aspect, the error detecting means determines a communication error when the data transmission error rate of the communication obtained after executing the short previous-procedure or the normal previous-procedure increases to a predetermined value or more.
According to this structure, for example, when the error gradually increases during communication, the control information registered in the storing means is judged to be an unsuitable parameter. This is particularly useful in the case where the line characteristic was better than normal at parameter registration time.

According to the thirteenth aspect of the present invention, in the data communication apparatus of the tenth aspect, the error detecting means determines a communication error when the data transmission error rate of the communication obtained after executing the short previous-procedure or the normal previous-procedure is lower than the data transmission error rate of a communication protocol whose speed is slower than the communication speed. According to this structure, the disadvantage of communication being repeated at a speed lower than the normal communication speed can be avoided. In contrast to the previous case, this is particularly useful in the case where the line characteristic was worse than normal at parameter registration time.

According to the fourteenth aspect of the present invention, in the data communication apparatus of the twelfth or thirteenth aspect, the apparatus further comprises parameter registering means for storing, in said storing means, the various kinds of modem control information obtained in the next normal previous-procedure executed with respect to the destination, after erasing the control information from said storing means. According to this structure, the next communication is executed in the normal previous-procedure. At that time, registration of modem control information is performed automatically, and the communication after that is automatically executed in the short previous-procedure.

According to the fifteenth aspect of the present invention, in the data communication apparatus of any one of the first to fourth aspects, when the number of errors generated by the short previous-procedure, or the ratio of errors to the number of short previous-procedure executions, exceeds a predetermined value, execution of the short previous-procedure with respect to that destination in subsequent communication is prohibited. According to this structure, the case in which the short previous-procedure is executed in accordance with control information having a high error rate, generating errors repeatedly, can be avoided. The error rate may be calculated at the time when the number of short previous-procedure executions reaches a predetermined value, or the number of errors may simply be counted. When the error rate is high, execution of the short previous-procedure is prohibited at the earliest possible time, so that the number of wasteful short previous-procedure executions can be reduced. To count the number of errors, as described in the sixteenth aspect, the apparatus comprises an error counter for counting the number of errors generated by the short previous-procedure, wherein said error counter counts up every time the various kinds of control information stored in said storing means are erased.

According to the seventeenth aspect of the present invention, in the data communication apparatus of the sixteenth aspect, before counting up, the error counter determines whether or not the communication was started in the short previous-procedure, and performs no counting operation when the result of said determination is negative. According to this structure, the error counter can perform the counting operation reliably.
In particular, in the case where control information is erased from the storing means under the normal communication protocol as described in the tenth aspect, the counting operation of the error counter is prohibited, so that no error arises in the count value.

According to the eighteenth aspect of the present invention, in the data communication apparatus of the fifteenth aspect, the communication control means confirms whether or not the various kinds of modem control information are registrable in said storing means prior to the shift to the short previous-procedure, and executes no short previous-procedure when the various kinds of modem control information are non-registrable. According to this structure, when errors occur frequently in the short previous-procedure, registration of control information to the storing means is prohibited, so that execution of the short previous-procedure itself is prevented. At this time, if registration of control information to the storing means is prohibited, communication under the normal communication protocol is started. Therefore, a wasteful short previous-procedure in which the error would occur again is not executed.

According to the nineteenth aspect of the present invention, in the data communication apparatus of the eighteenth aspect, the apparatus further comprises parameter registering means for storing the various kinds of modem control information obtained in the normal previous-procedure, wherein the prohibition of execution of the short previous-procedure is effected by prohibiting said various kinds of modem control information from being written by said parameter registering means. According to this structure, for example, the writing of the various kinds of modem control information to the storing means by the parameter registering means can be prohibited in software, thereby making it possible to prohibit execution of the short previous-procedure easily.

According to the twentieth aspect of the present invention, in the data communication apparatus of any one of the first to fourth aspects, the apparatus further comprises operating means for changing a destination's name or a destination's telephone number stored in said storing means; and memory controlling means for automatically erasing all information of the relevant information storing area, including the modem control information stored in association with said destination's telephone number, when there is a change in said destination's telephone number input by said operating means. According to this structure, the parameters can be registered in association with the new destination telephone number. When the telephone number is changed, the line characteristic also changes in many cases; however, the control information can be erased and updated without the operator being aware of it. The control information to be erased here includes the short previous-procedure registration prohibition information, that is, all information stored in association with the destination's telephone number.
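A minimal sketch of the error bookkeeping running through the fifteenth to nineteenth aspects might look as follows; the threshold value and all names are assumptions made purely for illustration, since the patent does not fix them.

// Per-destination error bookkeeping for the short previous-procedure.
interface ErrorBook {
  shortProcedureRuns: number
  errors: number
  registrationProhibited: boolean
}

const MAX_ERROR_RATIO = 0.5 // illustrative threshold; the patent leaves it open

function recordShortProcedureResult(
  book: ErrorBook,
  startedShort: boolean,
  failed: boolean,
): void {
  if (!startedShort) return // seventeenth aspect: count only short-procedure calls
  book.shortProcedureRuns += 1
  if (failed) {
    // Sixteenth aspect: the counter steps when stored control info is erased.
    book.errors += 1
    if (book.errors / book.shortProcedureRuns > MAX_ERROR_RATIO) {
      // Fifteenth/nineteenth aspects: lock out the short previous-procedure
      // by refusing to (re)register control information for this destination.
      book.registrationProhibited = true
    }
  }
}

// Eighteenth aspect: checked before any shift to the short previous-procedure.
function mayRegisterControlInfo(book: ErrorBook): boolean {
  return !book.registrationProhibited
}

Keeping the prohibition flag on the same record as the counters keeps the pre-shift check a single lookup, in line with the simple control flow the aspects above describe.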
According to the twenty-first aspect of the present invention, in the data communication apparatus of any one of the first to fourth aspects, the apparatus further comprises operating means for inputting an identification code peculiar to the apparatus itself; and memory controlling means for automatically erasing all information of the relevant information storing area, including the modem control information stored in association with all destination-registered telephone numbers, when there is a change in said identification code input by said operating means. According to this structure, when the telephone number of the apparatus itself is changed, all information, including the control information and the short previous-procedure registration prohibition information, can be erased automatically without any operation by the operator.

According to the twenty-second aspect of the present invention, the apparatus further comprises parameter registering means for storing, in said storing means, the various kinds of modem control information obtained in the next normal previous-procedure executed with respect to a destination, in a state in which only the destination corresponding to an operation key is stored in the storing means. Thereby, automatic registration of the parameters can be executed, in the next communication, for a storing means from which the control information has been automatically erased.

According to the twenty-third aspect of the present invention, the apparatus further comprises memory control means for automatically erasing all information of the relevant information storing area, including the modem control information stored in association with all destination-registered telephone numbers, when errors occur continuously in the short previous-procedure executed with respect to a plurality of destinations. According to this structure, on the presumption that the reason why continuous errors occur in the short previous-procedure lies in a change in the kind of line to which the apparatus is connected, execution of the short previous-procedure can be prohibited. Therefore, in subsequent communication, normal communication is executed in each case, and new control information can be registered.

According to the twenty-fourth aspect of the present invention, in the data communication apparatus of any one of the first to fourth aspects, said control information is deleted from said storing means after a lapse of a predetermined period of time from the registration of the control information in said storing means, so that an update of the control information is executed by the parameter registering means. According to this structure, the modem control information registered in the memory means can be maintained in an optimal state.

According to the twenty-fifth aspect of the present invention, in the data communication apparatus of any one of the first to twenty-fourth aspects, the apparatus further comprises memory control means for rewriting the modem control information registered in said storing means, wherein every time a normal previous-procedure is executed to acquire modem control information, said memory control means takes the newly acquired modem control information into account, corrects the modem control information registered in said storing means, and records it again.
According to this structure, since the parameters are corrected and learned for each communication so as to be maintained at optimal values, suitable communication having a short communication time, a high communication speed, and a low error rate can be carried out while using the short previous-procedure. According to the twenty-sixth aspect of the present invention, in the data communication apparatus of any one of the first to twenty-fourth aspects, suitable modem control information is calculated based on a plurality of sets of modem control information obtained by repeating execution of the normal previous-procedure a plurality of times, and said calculated control information is registered in said storing means. According to this structure, since the initial registration of the modem control information to be registered in the storing means can be adjusted before registration, the probability of success of the short previous-procedure can be improved.
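The patent does not specify the correction rule of the twenty-fifth aspect or the calculation of the twenty-sixth; as one possible reading, here is a sketch using an exponential moving average for the per-communication correction and a plain mean for the initial registration. Both rules and the learning-rate constant are assumptions for illustration only.

const LEARNING_RATE = 0.25 // hypothetical weight given to newly acquired values

// Twenty-fifth aspect (one reading): fold each newly acquired parameter set
// into the stored one, so the registration tracks recent line conditions.
function correctStoredParameters(stored: number[], acquired: number[]): number[] {
  return stored.map((v, i) => (1 - LEARNING_RATE) * v + LEARNING_RATE * acquired[i])
}

// Twenty-sixth aspect (one reading): derive the initial registration from the
// mean of several normal previous-procedure runs.
function averageInitialParameters(runs: number[][]): number[] {
  return runs[0].map((_, i) => runs.reduce((sum, run) => sum + run[i], 0) / runs.length)
}

Either rule keeps the stored parameters close to recent line conditions, which is what lets the short previous-procedure keep succeeding across calls.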
Been there, done that. I blogged before it was popular and now I'm not interested in being lumped with all the other bloggers, some of whom aren't evidence-based. This blog is geared toward providers and colleagues. It's here to provoke thought, discuss value, and create ideas. I hope you enjoy!

I have worked with hundreds of clients, some who come right into my office and tell me they are addicted to sugar. It's a common theme that is explored individually. After reviewing patterns and lifestyles, recommendations are made to change the makeup of meals as well as the timing, and almost always we are able to decrease the intensity of the cravings once we fuel the body correctly. I was intrigued to read the study led by Yale University and the University of Southern California, which reviewed the relationship between glucose drops and responses by the brain. The study was published September 19th in the Journal of Clinical Investigation. The Yale scientists manipulated glucose levels intravenously and monitored blood glucose levels while subjects were shown pictures of high-calorie foods, low-calorie foods, and non-food items. Each subject underwent MRI scans, which showed that when glucose levels dropped, the hypothalamus sensed the change. The insula and striatum, other parts of the brai…

Here we are again, it's the holiday season. It seems to come up faster and faster the older I get. As I continue to see clients and get busier and busier, the common theme during the holidays is to work towards a goal of weight maintenance or continued weight loss through the holidays. This can certainly be a difficult thing to do, as the holiday season is themed around all different types of food events. Ask anyone what they think of when they hear Thanksgiving and I guarantee the majority say turkey with all the fixings. Since our society puts a lot of focus on food and how it makes us feel, it's no wonder that some can become quite anxious this week. I'm choosing to forego any big research topics this week to provide some tips on how to make this holiday healthy. 1) Focus on the true meaning of the holiday and not the food it's centered around. Celebrate the relationships you are thankful for and make this the priority for the day. 2) Remember you can always have any of the foods that will b…

A recent study completed by the University of Montreal and published in the Archives of Pediatrics and Adolescent Medicine suggests that it may be possible to predict obesity as early as 3.5 years of age. Laura Pryor, a PhD candidate, and her team analyzed data drawn from the Quebec Longitudinal Study of Child Development, which ran from 1998 to 2006. 1,957 children's heights and weights were analyzed from 5 months to 8 years of age. In addition to weight and height, BMIs (body mass index) were calculated, analyzed, and differentiated into three trajectory groups: children with low but stable BMI, children with moderate BMI, and children with high-rising BMI (elevated BMI that was rising). An interesting summary from this study was that the research team noticed that all three trajectory groups were similar until about 2.5 years of age. The BMIs of the high-rising trajectory group increased significantly at 2.5 years of age, and by middle school 50% of these children were obese. When looking into factor…

As we enter the month of November and gear up for the holidays, Angela Farris helps familiarize us with the many types of squash as well as giving us tips on how to prepare them. Enjoy!
Squash is not only colorful, it's tasty! Winter squash varietals come in various shapes and sizes but share similar characteristics. Winter squash tends to have a hard outer shell that encloses a vibrant flesh that can boast many vitamins and minerals, including vitamins A, C & E, beta-carotene, magnesium, manganese and potassium. A quick, easy way to prepare your squash is to oven-bake it. Preheat your oven to 350°. Scrub the outside of your squash thoroughly, cut it in half lengthwise (Beware! Squash can be difficult to cut due to its size and firmness. Take extra precautions and find a firm grip before slicing), remove the seeds and place face down in a roasting pan. Add half an inch of water in the bottom of the pan to provide moisture. Depending on size, bake between 1 – 2 hrs or until flesh is t…

As some of you know, I trained for and completed my first half marathon on Saturday, October 15, 2011. In my younger years (high school) I ran sprints and hurdles in track along with year-round soccer. I continued soccer through my college years and in my twenties through club and co-ed teams, and also ran, mostly for exercise, right up until I had children. I would say I've always been an athletic person by nature, but running anything more than 200 yards really wasn't my cup of tea. As years passed on, I started my family and was very lucky to have two beautiful children seventeen months apart. Boy did that put some stress on my body! After three years of being pregnant and breastfeeding you can imagine how off my game I was with the whole physical activity aspect of life. For a dietitian, who has always been active, you can imagine how lost I felt as I continued to put myself at the bottom of the priority list. So, last October I went out and bought a treadmill. My youngest was 20 months old…

Read below as Sarah writes about recovery and strength. Sarah is hopeful and determined and making strides each and every day. Let's cheer her on as she continues to gain strength through her recovery process. In the past several weeks, I have been on the threshold of recovery's door, demonstrating a balancing act most trapeze artists would be in awe of. Each time I have found myself faced with which direction to step, each time, I somehow found the strength to step towards change and recovery. In my opinion recovery is different for everyone. So I had to stand back and ask myself, "What is recovery to me?" Is recovery the switch that has gone off in my head which says, "Fight harder! You are not going to let this control you anymore!"? Is it the ghost of a friend saying, "What are you doing?! I had no choice in my death, you do! Wake up!"? Is recovery the hope and dream to one day wake up and never have to think about food? Is it the chance to finally be comfortable with m…

I see clients of all ages, from infancy to geriatrics. The common theme of all of my clients is that at one time or another they were toddlers. Sometimes I see older clients who describe early patterns that I suspect are partly responsible for the way a current client eats or "diets." At other times, I see toddlers who "rule their roost" and need a little nudge from their parents for a stricter regimen.
I've written a lot about toddlers, mostly because I think this is a very specific time to start introducing good nutrition habits, but also because I am in the toddler zone with a four- and almost-three-year-old. This is a time when textures, smells, and tastes can be very scary. If you mix that in with a child who is developing their independence, this can be one challenging time. I have many parents ask me what the right thing to do is for their toddler, and I've addressed this in some of my other articles (check them out by searching toddler in the blog search). Here are a couple of things I will al…

By Angela A. Farris. BPA has been in the news for years. Articles focus on potential harm and health dangers of using products containing BPA. Should we really be worried? BPA, or Bisphenol A, is an organic compound with properties used to produce clear, durable plastics and strong-hold resins. Current research reviewed by the National Toxicology Program at the National Institutes of Health and the Food and Drug Administration found that low levels of BPA exposure could have potentially harmful effects on the brain, prostate gland, and fetuses, infants, and young children. BPA mimics the sex hormone estrogen found naturally in our bodies and can alter our hormonal balance. Disrupting this balance could affect a range of functions like reproduction, development, and metabolism. Individual state governments are working to ban BPA in baby bottles, but it is not currently banned on a federal level. The FDA is taking precautionary steps to reduce the exposure of BPA in our food supply and prod…

A multi-center study including 351 people took a good look at dietary intervention and the ability to decrease LDL cholesterol. David J. A. Jenkins of St. Michael's Hospital and the University of Toronto and colleagues compared a control diet and a portfolio diet over six months. The control diet emphasized high fiber and whole grains, and the portfolio diet emphasized soy protein, plant sterols, viscous fibers, and nuts. Diets were randomized from June 2007 to January 2009, and counseling was delivered at either a routine dietary portfolio level (two visits in six months) or an intensive dietary portfolio level (seven visits in six months). Results showed that the control diet decreased LDL cholesterol by 8.0 mg/dL, while the portfolio diet decreased LDL cholesterol by 24 mg/dL with routine counseling and 26 mg/dL with intensive counseling. This research study demonstrates a relationship between food, dietary adherence, and reduction of LDL cholesterol. When looking at the portfolio die…

As most of you know, I am a big fan of guest bloggers and am always looking for a way to communicate the right information to the Kindred Community. I am feeling really excited to be able to introduce a new guest blogger. Recently I met Angela Farris, a dietetic intern at the University of Maryland at College Park. I am excited for you to get to know her just as I have. Angela will be appearing in guest blogs and helping out with some other very neat things with Kindred Nutrition and The Center for Intuitive Eating. Check out her bio below and look for her first guest post September 29th. Angela A. Farris Biosketch: Angela is a dietetic intern with the University of Maryland College Park. Her program emphasizes information management and communication, enabling her to visit a variety of sites over the next year, including IT rotations at the Center for Nutrition Policy and Promotion and the International Food Information Council.
Prior to starting her dietetic internship Angela rece… I was asked this specific question a lot in the last two weeks by clients, friends, and a physician group I work with. We've all heard that our children are the first generation that will not outlive their parents. Fifteen percent of children in the United States are considered overweight and another fifteen percent are at risk of being overweight. There are a lot of scary statistics out there and it's starting to cause some panic. Here are my thoughts. I feel strongly that it's important to catch irregular eating, unhealthy habits, or weight gain in kids earlier rather than later. When I say early I mean as early as two, believe it or not. There's a reason that pediatricians and dietitians recommend changing from whole milk to skim after the age of two, as brains no longer require the extra fat intake for development. If your child is in the toddler years or beyond and eats multiple servings of food, is 'addicted' to junk food, or there is concern of weight gain or excess weight for height, it …

I hope you all had a great Labor Day. I had a wonderful one filled with friends and family. I was especially excited when I was notified at the end of the weekend that my Ask Amy blog was awarded Nutrition Expert of the Year AND Blog of the Year through Around the Plate. Have you checked out Around the Plate? It's a great blog that consists of many Nutrition Experts, Healthy Eating Champions, and Recipe Gurus. As always, thanks to all of my fans who follow Kindred Nutrition's Ask Amy blog. It's a passion of mine and I especially love to compile blogs based on your suggestions. Have any nutrition questions that have been on your mind? Email me at [email protected] and I'll get you the answers.

Most of my blogs are based on current research or are responses to questions asked by people in and outside of the Kindred Community. For today, though, let's change it up a bit and discuss constraints or obstacles I see in my current practice. My clients are very motivated and for the most part are very successful, but when analyzing each client there are specific culprits that challenge every client's success. I figured if it's happening within the Kindred Community it's most likely happening outside of it. Let's discuss the white elephant in the room, shall we? 1) Foregoing the food diary: Let's face it. Keeping a food diary is a complete drag. It's time consuming and it holds you accountable for all those little morsels you drank or ate that are easily forgotten. Unfortunately, this is exactly why I recommend keeping a food diary. You can't get to where you want to go without knowing where you came from. Once you keep a journal and are able to assess your intake it's easier to imp…

A study published in Applied Physiology, Nutrition and Metabolism this week concluded that obese people who are otherwise considered healthy live as long as their "skinny" peers. The study also suggests that healthy obese individuals are less likely to die from cardiovascular complications than their lean counterparts. I silently laughed as the media got a hold of this study with titles such as "Fat and Healthy? Study finds Slim isn't always Superior," or "Being fat isn't a death sentence at all," or my favorite, "Obesity Police busted?
Study says fat people can be healthy." The study monitored 6,000 obese Americans over 16 years and compared their death rate to that of lean individuals. The conclusion was that the mortality rate of obese individuals who had no or minor co-morbidities was no higher than that of lean people. CBS interviewed Dr. Kulk, who stated, "I think this is a common notion, that if you are overweight you are unhealthy and that if you are s…

As Sarah continues to progress in her recovery from anorexia, she has found it helpful to express her thoughts and feelings by writing guest blog posts. If you've ever known anyone who struggles with anorexia, have struggled with eating disorders in the past, or just want to educate yourself some more, read below. We are all cheering for Sarah as she works her way to recovery. One of the hardest things for me to do is to come out of hiding. To stop hiding what I do not eat, stop hiding behind clothing, to not hide my emotions, and most of all to not hide the fact that yes, I do indeed have an eating disorder. The biggest step for anyone with an eating disorder to take is to admit they have a problem and to seek help. For me, it took almost 22 years. There are days when I wish I had never opened my mouth, as it was so much easier to hide. Coming out of hiding means confiding in friends and family, and let me tell you, in my situation, not everyone is supportive. Some…

I just love when I get questions or blog ideas from my Kindred Community, and I have to give my husband kudos for forwarding this study to me and asking me what I think. Today a study was mentioned on Yahoo titled "Healthy Eating Privilege of the Rich?" The study focuses on the food pyramid changes in 2010 recommending an increase in consumption of potassium, dietary fiber, vitamin D, and calcium-rich foods. Two thousand individuals in the state of Washington participated in a telephone survey, which then led to the request for a printed survey to be completed. Only 1,300 of the printed surveys were returned. The information provided in this article states that the printed survey focused on food eaten. Nutrients were then analyzed and estimated in cost. Conclusions from this study state that the more money spent on food, the closer people were to meeting the food pyramid guidelines for potassium, dietary fiber, vitamin D, and calcium, which then led to a request for the government to do more…

I was wondering what we'd start seeing as the National Calorie Labeling Law comes into effect, and I have to say I've been quite impressed with multiple restaurants attempting reformulations or portion size changes. For instance, this week I read that the Cheesecake Factory is going to create a Skinnylicious menu where entrees would be less than 590 calories. Of course I love this idea; the name, not so much. Of all the changes I've seen so far, McDonald's seems to have the most in-depth plan to date. On July 26, 2011, the President of McDonald's announced the following commitment to offer improved nutrition choices. 1) To automatically include produce or low-fat dairy products in Happy Meals by the end of Q1 2012. This will provide a 20% reduction in calories. 2) To reduce added sugars, saturated fat, and calories by 2020, and reduce sodium by 15% by the year 2015. 3) To offer increased access to nutrition information. McDonald's will even be launching their own mobile application for McDonald's provid…
.TH SIGACTION 2
.SH NAME
sigaction, signal \- manage signal state and handlers
.SH SYNOPSIS
.ft B
#include <signal.h>
.in +5
.ti -5
int sigaction(int \fIsig\fP, const struct sigaction *\fIact\fP, struct sigaction *\fIoact\fP)
.in -5
.br
void (*signal(int \fIsig\fP, void (*\fIhandler\fP)(int)))(int);
.ft P
.SH DESCRIPTION
.de SP
.if t .sp 0.4
.if n .sp
..
.B Sigaction()
is used to examine, set, or modify the attributes of a signal. The argument
.I sig
is the signal in question. The
.I act
argument points to a structure containing the new attributes of the signal,
the structure pointed to by
.I oact
will receive the old attributes that were in effect before the call.
.PP
The
.I act
and
.I oact
arguments may be
.B NULL
to indicate that either no new attributes are to be set, or that the old
attributes are not of interest.
.PP
The structure containing the signal attributes is defined in <signal.h> and
looks like this:
.PP
.RS
.nf
.ft B
.ta +4n +12n
struct sigaction {
    void (*sa_handler)(int sig);
    sigset_t sa_mask;
    int sa_flags;
};
.ft R
.fi
.RE
.PP
The
.B sa_handler
field contains the address of a signal handler, a function that is called
when the process is signalled, or one of these special constants:
.PP
.TP 12
.B SIG_DFL
Default signal handling is to be performed. This usually means that the
process is killed, but some signals may be ignored by default.
.TP
.B SIG_IGN
Ignore the signal.
.PP
The
.B sa_mask
field indicates a set of signals that must be blocked when the signal is
being handled. Whether the signal
.I sig
itself is blocked when being handled is not controlled by this mask. The mask
is of a "signal set" type that is to be manipulated by the
.BR sigset (3)
functions.
.PP
How the signal is handled precisely is specified by bits in
.BR sa_flags .
If none of the flags is set then the handler is called when the signal
arrives. The signal is blocked during the call to the handler, and unblocked
when the handler returns. A system call that is interrupted returns
.B \-1
with
.B errno
set to
.BR EINTR .
The following bit flags can be set to modify this behaviour:
.PP
.TP 15
.B SA_RESETHAND
Reset the signal handler to
.B SIG_DFL
when the signal is caught.
.TP
.B SA_NODEFER
Do not block the signal on entry to the handler.
.TP
.B SA_COMPAT
Handle the signal in a way that is compatible with the old
.B signal()
call.
.PP
The old
.B signal()
signal system call sets a signal handler for a given signal and returns the
old signal handler. No signals are blocked, the flags are
.BR "SA_RESETHAND | SA_NODEFER | SA_COMPAT" .
New code should not use
.BR signal() .
Note that
.B signal()
and all of the
.B SA_*
flags are MINIX 3 extensions.
.PP
Signal handlers are reset to
.B SIG_DFL
on an
.BR execve (2).
Signals that are ignored stay ignored.
.SS Signals
MINIX 3 knows about the following signals:
.PP
.nf
.ta +11n +7n +8n
signal	num	notes	description
.SP
SIGHUP	1	km	Hangup
SIGINT	2	k	Interrupt (usually DEL or CTRL\-C)
SIGQUIT	3	kcm	Quit (usually CTRL\-\e)
SIGILL	4	Kc	Illegal instruction
SIGTRAP	5	Kc	Trace trap
SIGABRT	6	kcm	Abort program
SIGBUS	7	Kc	Bus error
SIGFPE	8	Kc	Floating point exception
SIGKILL	9	k	Kill
SIGUSR1	10	k	User defined signal #1
SIGSEGV	11	Kc	Segmentation fault
SIGUSR2	12	k	User defined signal #2
SIGPIPE	13	k	Write to a pipe with no reader
SIGALRM	14	k	Alarm clock
SIGTERM	15	km	Terminate (default for kill(1))
SIGEMT	16	xKc	Emulator trap
SIGCHLD	17	pi	Child process terminated
SIGCONT	18	pi	Continue if stopped
SIGSTOP	19	ps	Stop signal
SIGTSTP	20	ps	Interactive stop signal
SIGWINCH	21	xi	Window size change
SIGTTIN	22	ps	Background read
SIGTTOU	23	ps	Background write
SIGVTALRM	24	k	Virtual alarm clock
SIGPROF	25	k	Profiler alarm clock
.ft R
.fi
.PP
The letters in the notes column indicate:
.PP
.TP 5
.B k
The process is killed if the signal is not caught.
.TP
.B K
The process is killed if the signal is not caught. If the signal is received
due to an exception while ignored or masked, the process is killed even if a
handler is defined to catch the signal.
.TP
.B c
The signal causes a core dump.
.TP
.B i
The signal is ignored if not caught.
.TP
.B m
The signal is converted to a message for system processes.
.TP
.B x
MINIX 3 extension, not defined by \s-2POSIX\s+2.
.TP
.B p
These signals are not implemented, but \s-2POSIX\s+2 requires that they are
defined.
.TP
.B s
The process should be stopped, but is killed instead.
.PP
The
.B SIGKILL
and
.B SIGSTOP
signals cannot be caught or ignored. The
.B SIGILL
and
.B SIGTRAP
signals cannot be automatically reset. The system silently enforces these
restrictions. This may or may not be reflected by the attributes of these
signals and the signal masks.
.SS Types
\s-2POSIX\s+2 prescribes that <sys/types.h> has the following definition:
.PP
.RS
.B "typedef void (*sighandler_t)(int)"
.RE
.PP
With this type the following declarations can be made:
.PP
.RS
.ft B
.nf
sighandler_t sa_handler;
sighandler_t signal(int \fIsig\fP, sighandler_t \fIhandler\fP);
.fi
.ft R
.RE
.PP
This may help you to understand the earlier declarations better. The
.B sighandler_t
type is also very useful in old style C code that is compiled by a compiler
for standard C.
.SH "SEE ALSO"
.BR kill (1),
.BR kill (2),
.BR pause (2),
.BR sigprocmask (2),
.BR sigsuspend (2),
.BR sigpending (2),
.BR sigset (3).
.SH DIAGNOSTICS
.B Sigaction()
returns
.B 0
on success or
.B \-1
on error.
.B Signal()
returns the old handler on success or
.B SIG_ERR
on error. The error code may be:
.PP
.TP 10
.B EINVAL
Bad signal number.
.TP
.B EFAULT
Bad
.I act
or
.I oact
addresses.
.SH AUTHOR
Kees J. Bot ([email protected])
.\"
.\" $PchId: sigaction.2,v 1.2 1996/04/11 06:00:28 philip Exp $
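As a complement to the manual page above, here is a minimal C usage sketch: install a handler for SIGINT with sigaction(), then wait for the signal. The handler name and the printed message are illustrative only; the calls themselves (sigaction, sigemptyset, pause) are the standard interface documented above.

#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static volatile sig_atomic_t got_signal = 0;

/* Keep the handler tiny: only async-signal-safe work belongs here. */
static void on_sigint(int sig)
{
    (void)sig;
    got_signal = 1;
}

int main(void)
{
    struct sigaction act, oact;

    act.sa_handler = on_sigint;
    sigemptyset(&act.sa_mask);  /* block no additional signals in the handler */
    act.sa_flags = 0;           /* default behaviour: sig itself is blocked
                                   during the handler, restored on return */

    if (sigaction(SIGINT, &act, &oact) == -1) {
        perror("sigaction");
        return 1;
    }

    pause();                    /* returns -1 with errno EINTR once caught */

    if (got_signal)
        printf("caught SIGINT (previous handler %s SIG_DFL)\n",
               oact.sa_handler == SIG_DFL ? "was" : "was not");
    return 0;
}

Note that SA_RESETHAND is deliberately left unset, so the handler stays installed if SIGINT arrives again; saving the third argument lets the caller restore the old disposition later.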
Agua Fria Freddie 2013

While Punxsutawney Phil is famous for his annual weather prognostications on February 2nd, he lacks knowledge of weather in Arizona. Legend has it that a lesser-known prognosticator lives right here in Arizona with a 95% accuracy rate. Locals know him as Agua Fria Freddie, a shadow-seeking rattlesnake that comes out every year on February 2nd to predict the coming of Spring to us desert-dwellers. This event is actually a non-event, as we cannot find record of any gathering to watch this legend search for his shadow. There are some legitimate reasons for not having an actual event to celebrate his findings this year:

Nobody ever volunteers to get close enough to look for the shadow of a rattlesnake.
Tourists are already in shorts and t-shirts.
The big party is at the Phoenix Open Golf Tournament.
Super Bowl party-throwers are preparing for tomorrow.
Rustler's Rooste got to Freddie first and he's an appetizer.

It's not difficult to predict when Spring will arrive in Arizona when the sun shines about 300 days a year, but it is comforting to know that Agua Fria Freddie is watching out for us. Just watch out for him the other 364 days a year when you're hiking around South Mountain. We hear that the TV stations in Phoenix are hiring Tukee the weather parrot to fill in over the Summer. "Hot & Sunny…bawk!!!"

Rick Garrett
The Markit Team
One African-American woman pleads not to be set up on a blind date with the only black professional you know.

"I've got a guy for you. He's so hot, it's perfect." It was the spring of '08 when Maureen Dowd, the Pulitzer Prize-winning New York Times columnist, decided to take it upon herself to find me a man — specifically, Reggie Love, Barack Obama's "body guy" (read: personal aide). Standing outside the auditorium in Philadelphia where Obama the candidate had just given his loaded race speech, I thought to myself, How could a woman I'd helped with a computer problem once as an assistant at the Times spot the man for me? "Don't be silly," she said, when I demurred. "My assistant will set it up."

I was 27 and working as the only black reporter at Politico.com when a black man was running for president. The issue of race in America had hit a fever pitch, and suddenly everyone seemed determined to cast me as Michelle redux — and all that was missing was my very own Barack. But as promising as it was to see African-American love on CNN and not just BET, the would-be first couple had become a sort of infomercial guarantee that my girlfriends and I knew was less than assured. According to a 2009 report from Yale University, black women with advanced degrees (like yours truly) are twice as likely as white women to have never been married by age 45. Other official number crunchers (and dream crushers) like the Census Bureau estimate that single sisters with a college education outnumber their black male counterparts nearly 3-to-1 in big cities (thanks in large part to the depressing poverty-and-prison cycle), leaving a very limited dating pool for anyone looking for a black boyfriend.

So when Maureen's assistant called about my date with Mr. Love, it seemed statistically ill-advised to turn down such a prospect. Then he showed up an hour late to our date, wearing gym clothes, and we stumbled through a series of conversational nonstarters. It was clear that Reggie and I had just two things in common: We're black and we have bachelor's degrees. While most of us have been subject to ill-conceived blind-date profiling — why does the marathon runner with the MBA always get set up with the Gold's Gym trainer? — mine was dismayingly consistent. Never mind the complicated algorithms of eHarmony, my matchmakers used simple math: Black professional + black professional = Huxtables. I'm not saying that my colleagues and neighbors should have set me up with white friends, necessarily. I was just disappointed that, despite all my more awesome qualities, the main thing they all saw was my skin.

Recently, my mother met a black man at Ikea, gave him my card, and then yelled at me when I raised the obvious psycho-killer issues. She said he was nice, a lawyer. Like Maureen, Mom was playing into the kind of stereotypes I was trying to dodge. But I realized that both women just wanted to see me happy. And, even though I was grateful Ikea guy never called, I know that the more weird setups I endure, the better my chances of getting to Mr. Right, whatever his race.

Helena Andrews's memoir, Bitch Is the New Black (HarperCollins), is out this month. She writes a weekly column for Slate's TheRoot.com.
For five years, George Osborne has been managing failure. The Chancellor's sixth Budget, like its predecessors, was delivered in coalition; the presence of Nick Clegg and Danny Alexander on the government front bench is a permanent reminder of how the Conservatives fell short at the last general election. As his party's chief strategist in 2010, Osborne continues to live in the shadow of that campaign.

This political failure was followed by an economic one. Osborne's original ambition was to eliminate the structural deficit in a single term. The collapse of growth after he entered office forced him to postpone this goal. Higher-than-forecast borrowing cost the UK its triple-A credit rating, the metric that he had adopted as the defining test of his economic credibility. Few politicians have recovered from such a gap between promise and delivery.

Osborne's skill has been to transform this political base metal into gold. He has been the great alchemist of this parliament. The Chancellor made a virtue of coalition government by co-opting the Lib Dems' best ideas – increasing the personal tax allowance, granting new freedoms over pensions – and aggressively rebranding them as Conservative achievements. The Tories' junior partners protest indignantly, reminding voters that David Cameron told Clegg during the first 2010 leaders' debate that the country could not "afford" to "take everyone out of their first £10,000 of income tax". (Osborne's Budget increased the threshold to £10,800.) But, as Ronald Reagan observed: "If you're explaining, you're losing."

When the near-disappearance of growth almost halted deficit reduction, Osborne chose not to impose additional fiscal tightening, instead redefining austerity as a two-term project. Labour has been left unsure whether to applaud the Chancellor for adopting the more moderate path it advocated ("a victory for sensible Keynesian thinking" was how his shadow, Ed Balls, recently described it to me) or to denounce him for failing on his own terms. In both cases, it has been forced to concede that it, too, would impose austerity after the election, an admission that has corroded its left-wing support. There are some Conservatives who wonder aloud whether greater deficit reduction would have been more politically hazardous, liberating Labour to promise the return of big spending.

Osborne's greatest act of conjury, as fiscal boundaries have shifted, has been to entrench an image of himself as a figure of unbending constancy. Aides say that the Chancellor, whose once-poor approval ratings now exceed those of the three main party leaders, is congratulated by the public on "sticking to the plan" during his hard-hat tours. Like Margaret Thatcher (who was sometimes for turning), he knows that, in politics, appearance matters more than reality.

The truth is that Osborne has changed. Midway through the parliament, after the humbling experience of his 2012 "omnishambles" Budget, he began to remake himself as a more complex and sophisticated politician. Osborne now speaks of the state as an ally as often as he does of it as an enemy and compares himself to Michael Heseltine. He has resurrected the cause of "full employment" (albeit more loosely defined than in previous decades), championed increases in the minimum wage (which will rise by 3 per cent, to £6.70 an hour, from October) and begun the construction of a "northern powerhouse" to challenge London's hegemony.
This ideological rebalancing is driven by Osborne’s Huddersfield-born, comprehensive-educated adviser Neil O’Brien, who wrote of the need for the Tories to decontaminate their brand in urban regions during his time as director of Policy Exchange. That Osborne embraced such an interesting thinker is evidence, say Tory MPs, of his intellectual restlessness, contrasting him with the dependable but unimaginative Cameron. But like a rock star whose new album includes traditional material for older fans, the Chancellor is playing some familiar tunes. He has revived his 2007 pledge to raise the inheritance tax threshold to £1m. It was this policy that spooked Gordon Brown into abandoning plans for an early election and that earned Osborne his reputation as a strategic grandmaster. But the politics is not uncomplicated for him. If the measure will appeal to aspirational voters, the decision to prioritise the reduction of a tax paid by just 4.9 per cent of estates risks reinforcing the Tories’ status as the party of the privileged. Few policies more sharply contradict Michael Gove’s exhortation to be “warriors for the dispossessed” and to penalise “the undeserving rich”. The Budget promised less post-election austerity than implied by the Autumn Statement, as Osborne sought to neutralise Labour's 1930s attack line. But because of his ambition of a surplus by the end of the next parliament, accompanied by no further tax rises, a fiscal chasm remains between his plans and those of Labour. Ed Balls’s decision to leave room to borrow to invest would give him nearly £30bn of additional spending each year. It was partly the fear of massacred public services that denied the Tories a majority in 2010, in the most propitious circumstances. Osborne’s wager is that their unexpected resilience will persuade voters that further austerity is tolerable; that fear of a “tax bombshell” and “economic chaos” under Labour will predominate. When the Tories entered office, some doubted that this question would even arise. The belief was that they would be evicted from government on a wave of popular outrage over the cuts. But the wave never came. Osborne has managed failure well indeed.
The research proposals in the SPORE encompass a broad range of activities, including studies in cell lines, animal models, and clinical trials. These studies will generate a number of different types of data. In order to design and analyze such a wide variety of studies properly, a variety of mathematical techniques will be required. In addition, a strong data management system is vital to the operation of the SPORE. If Projects are to cooperate efficiently, they must have access to relevant data. The data must be easily accessible, but stored in a secure manner with assurance of patient confidentiality. Data and information must flow smoothly between projects. Data quality and integrity must be assured by data audit and backup procedures. In addition, there needs to be an efficient interface between the computational biology and data storage facilities provided by the SPORE Innovative Technology, Computational Biology and Microarray Core. This is particularly true for the large amounts of microarray and proteomics expression profiling information. In order to meet these needs, the Biostatistics/Informatics Core therefore brings together a number of biostatisticians and biomathematical scientists with expertise in a number of statistical and data management disciplines. By placing these personnel within the Biostatistics/Informatics Core rather than in individual Projects, the ability of the Projects to interact is strengthened. Thus, the Biostatistics/Informatics Core will provide expertise in study design, data analysis, and data management to all Projects and Cores. The specific aims of the Biostatistics and Informatics Core are:

1. to provide the statistical design and analysis required to achieve the specific aims of each project;

2. to assist in the design, evaluation, and analysis of new research arising from the individual projects, the Developmental Projects, and the Career Development Trainees;

3. to provide database support and expertise for the collection, storage, and retrieval of clinical data, including provisions for quality assurance, auditing procedures, and patient confidentiality;

4. to assist the other Cores and Projects in the collection, entry, and maintenance of data specific to those Cores and Projects; in this regard, to interface with the Innovative Technology/Computational Biology/Microarray Core in providing biostatistical analysis and assistance with data warehousing of the microarray and proteomics data; and

5. to coordinate the data acquisition, biostatistical analysis, and audit planning for the Administrative Core.
Sound Devices introduced AES 42 digital microphone support in the 788T with firmware revision 1.5 beta. AES 42 is a protocol for direct digital interconnection between microphones and devices such as mixers and recorders.

Why AES 42? One benefit of AES 42 argued by microphone manufacturers is that by digitizing the signal at the capsule, several analog gain stages are eliminated. Theoretically, conversion at capsule level improves signal transfer and can result in higher dynamic range. With a digital signal in the microphone, the mic can have on-board digital signal processing (DSP) to perform numerous functions, including gain control, frequency-domain adjustments such as high-pass filters, and dynamics control such as compression/limiting.

AES 42 and Word Clock. The 788T provides support and connectivity for Mode 1 (asynchronous) connections. Mode 1 uses the microphone's own clock generator as its word clock source. Because the word clock signal out of the microphone may differ from the recorder's set sampling rate, the 788T's hardware sampling rate converter (SRC) is activated for AES 42 inputs when using the recorder's word clock. The recorder provides connection for multiple (up to four) AES 42 input sources at any supported sampling rate. The 788T accommodates AES 42 (and AES3) inputs with sampling rates from 32 kHz to 192 kHz.

Connectivity and Gain. The 788T provides "digital phantom power" for AES 42 microphone operation. Similar to a balanced, analog microphone connection, AES 42 requires two conductors and a shield. An AES 42 microphone signal appears on both "sides" of a connected input pair. For example, when connecting an AES 42 microphone to AES pair 1/2, the microphone signal appears at both input 1 and input 2. When an AES 42 microphone is connected to the 788T, the recorder's front panel gain controls digital gain levels in the recorder, not digital gain in the microphone and not analog levels.

Unsupported AES 42 Features. The AES 42 protocol includes several modes of microphone control and connection. The 788T presently does not support the following: Mode 2; the 788T does not send word clock signals to AES 42 microphones nor does it send control signals to control...

Is AES 42 Relevant? As AES 42 hardware becomes available from manufacturers such as Neumann, Schoeps, and Sennheiser, Sound Devices will evaluate its application in mixers and recorders. The identified benefits of AES 42 appear real for some applications. Whether they prove to be a demonstrable benefit for field production remains to be seen.
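The Mode 1 clocking description above implies a rate conversion step: samples arrive on the microphone's clock but must be written on the recorder's clock. The C sketch below is only a conceptual illustration of that mapping, assuming a naive linear-interpolation resampler; it is not Sound Devices' implementation (the 788T uses a dedicated hardware SRC, and production converters use polyphase filtering for audible quality), and all names are hypothetical.

#include <stddef.h>

/* Map a block of input samples at in_rate onto output samples at out_rate.
 * Linear interpolation is used purely to illustrate the ratio arithmetic;
 * returns the number of output samples produced. */
size_t resample_linear(const float *in, size_t in_len,
                       float *out, size_t out_max,
                       double in_rate, double out_rate)
{
    if (in_len < 2 || out_max == 0)
        return 0;

    double step = in_rate / out_rate;  /* input samples per output sample */
    double pos = 0.0;
    size_t n = 0;

    while (n < out_max && pos < (double)(in_len - 1)) {
        size_t i = (size_t)pos;
        double frac = pos - (double)i;
        /* interpolate between the two neighbouring input samples */
        out[n++] = (float)((1.0 - frac) * in[i] + frac * in[i + 1]);
        pos += step;
    }
    return n;
}

For example, a microphone free-running at 48.001 kHz written to a 48 kHz recording would use a step of about 1.0000208, consuming slightly more than one input sample per output sample; the hardware SRC in the recorder performs the equivalent correction continuously.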